Full request visibility

Every request.
Fully visible.

Logs, alerts, and callbacks for every LLM call. See what happened, get notified when things break, and pipe data to your existing tools.

request-logs
Time      Model              Status  Latency  Cost
12:04:31  gpt-4o             200     1.2s    $0.032
12:04:28  claude-3.5-sonnet  200     0.8s    $0.018
12:04:22  gpt-4o             400     0.1s    $0.000
12:04:19  gemini-1.5-pro     200     2.1s    $0.041
Showing 4 of 12,847 requests

Observability stack

Logs. Alerts. Callbacks. Control.

Request Logs

searchable · filterable · expandable

Every LLM request logged with model, status, latency, cost, tokens, and full prompt/response (if data policy allows). Search by request ID, model, status, or time range.

Request   Model              Status  Latency  Cost
req_a1b2  gpt-4o             200     1.2s    $0.032
req_c3d4  claude-3.5-sonnet  200     0.8s    $0.018
req_e5f6  gpt-4o             400     0.1s    $0.000
req_g7h8  gemini-1.5-pro     200     2.1s    $0.041
Filter by model, status, time range, or request ID
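Filtering like the above can be done client-side on exported records. A minimal sketch, assuming log entries are plain dicts; the field names (`time`, `model`, `status`) mirror the table columns but are illustrative, not a documented schema:

```python
# Illustrative sketch: filter request-log records by model, status, or time range.
# Field names are assumptions that mirror the log table columns above.

def filter_logs(records, model=None, status=None, since=None, until=None):
    """Return records matching every filter that was supplied."""
    out = []
    for r in records:
        if model is not None and r["model"] != model:
            continue
        if status is not None and r["status"] != status:
            continue
        if since is not None and r["time"] < since:
            continue
        if until is not None and r["time"] > until:
            continue
        out.append(r)
    return out

logs = [
    {"time": "12:04:31", "model": "gpt-4o", "status": 200},
    {"time": "12:04:28", "model": "claude-3.5-sonnet", "status": 200},
    {"time": "12:04:22", "model": "gpt-4o", "status": 400},
]

errors = filter_logs(logs, status=400)  # isolate failed requests
```

Filters compose: passing both `model` and `status` narrows to records matching all supplied criteria.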

Logging Callbacks

connect · forward

Pipe logs to your existing observability stack. One toggle per provider.

Langfuse · Datadog · S3 · Slack · Custom
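The "one toggle per provider" model can be sketched as a per-org config that enables sinks by name, with each record fanned out to every enabled sink. The sink names echo the list above; the config shape and handler are hypothetical:

```python
# Hypothetical sketch of per-provider callback toggles.
# The config keys and record fields are assumptions for illustration.

callbacks = {"langfuse": True, "datadog": False, "s3": True, "slack": False}

delivered = []  # stand-in for real network delivery

def forward(record):
    """Send one log record to every sink whose toggle is on."""
    for sink, enabled in callbacks.items():
        if enabled:
            delivered.append((sink, record["request_id"]))

forward({"request_id": "req_a1b2", "model": "gpt-4o", "status": 200})
```

Flipping a toggle changes where future records go without touching application code.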

8 Alert Types

toggle · per-org

LLM errors, budget warnings, provider outages, and more. Toggle each independently.

LLM errors
Budget threshold
Provider outage
High latency
+ 4 more

Data Policy Control

per-org · compliance

Choose exactly what gets logged. Full prompts and responses, metadata only, or nothing at all. Compliance-friendly, SOC 2-ready.

Full logging

Prompts + responses + metadata

Metadata only

Model, tokens, cost, latency

No logging

Nothing stored
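The three policy levels above amount to a redaction step applied before a record is stored. A minimal sketch; the policy names and field names are assumptions:

```python
# Sketch of the three data-policy levels: full, metadata only, or no logging.
# Field names are illustrative, not a documented schema.

METADATA_FIELDS = ("model", "tokens", "cost", "latency")

def apply_policy(record, policy):
    """Reduce a log record to what the org's data policy allows."""
    if policy == "none":
        return None                       # nothing stored
    if policy == "metadata":
        return {k: v for k, v in record.items() if k in METADATA_FIELDS}
    return dict(record)                   # "full": prompts + responses + metadata

rec = {"model": "gpt-4o", "tokens": 512, "cost": 0.032,
       "latency": 1.2, "prompt": "Hi", "response": "Hello!"}
```

Under "metadata", cost and latency analytics keep working while prompt content never leaves the request path.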

Notification Channels

multi-channel

Get alerts where your team lives. Email, Slack, Teams, or custom webhooks.

Email
Slack
Teams
Webhook
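Multi-channel delivery can be sketched as a routing table from alert type to channel set. The channel names match the list above; the route structure and dispatch function are hypothetical:

```python
# Illustrative alert routing: each alert type maps to a set of channels,
# and dispatch fans the message out. Structure and names are assumptions.

routes = {
    "llm_error": {"email", "slack"},
    "budget_threshold": {"email", "webhook"},
}

sent = []  # stand-in for real email/Slack/webhook delivery

def notify(alert_type, message):
    """Deliver one alert to every channel configured for its type."""
    for channel in sorted(routes.get(alert_type, ())):
        sent.append((channel, message))

notify("llm_error", "gpt-4o returned 400")
```

An alert type with no route silently delivers nowhere, which is how a per-org "off" toggle behaves.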

Real-time

sub-second · searchable

Logs appear within milliseconds. Search, filter, and drill into any request as it happens.

<100ms ingestion latency

See everything. Miss nothing.

Full observability on every plan. Logs, alerts, callbacks, and data policy controls included.