Know What Your AI Did
Every request through Raptor is logged with full metadata. Debugging, compliance, analytics—all covered.
What’s Logged
| Field | Example |
|---|---|
| request_id | abc123-def456-... |
| timestamp | 2024-01-15T10:30:00Z |
| method | POST |
| path | /v1/chat/completions |
| status | 200 |
| cache_hit | true / false |
| latency_ms | 5 |
| upstream_latency_ms | 450 |
| firewall_result | null / blocked |
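Put together, a single evidence record carrying the fields above might look like the following sketch. Values are illustrative, and we assume upstream latency is absent on a cache hit since no provider call is made:

```python
# Illustrative evidence record; field names from the table above.
record = {
    "request_id": "abc123-def456-...",
    "timestamp": "2024-01-15T10:30:00Z",
    "method": "POST",
    "path": "/v1/chat/completions",
    "status": 200,
    "cache_hit": True,
    "latency_ms": 5,              # time spent inside the proxy
    "upstream_latency_ms": None,  # assumption: no upstream call on a cache hit
    "firewall_result": None,      # null when no firewall rule triggered
}
```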
Dashboard
View all requests in your dashboard:
- Go to Traces in the sidebar
- Filter by date, status, cache hit, or agent
- Click a row for full request details
Every response includes a request ID:
X-Raptor-Request-Id: abc123-def456-...
Use this to look up the full evidence record in your dashboard.
Agent Tracking
Tag requests by agent for multi-agent systems:
from openai import OpenAI

client = OpenAI(
    base_url="https://proxy.raptordata.dev/v1",
    default_headers={
        "X-Raptor-Api-Key": "rpt_...",
        "X-Raptor-Workspace-Id": "...",
        "X-Raptor-Agent-Id": "support-bot"  # Add this
    }
)
Then filter by agent in your dashboard.
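In a system with several agents, the tag is just another default header, so a small helper keeps the clients consistent. A sketch (the helper name and argument names are our own; only the X-Raptor-* header names come from the docs above):

```python
def raptor_headers(api_key: str, workspace_id: str, agent_id: str) -> dict:
    # Build the default headers for one agent's client. Each agent gets its
    # own X-Raptor-Agent-Id so its traffic can be filtered in the dashboard.
    return {
        "X-Raptor-Api-Key": api_key,
        "X-Raptor-Workspace-Id": workspace_id,
        "X-Raptor-Agent-Id": agent_id,
    }

# One header set per agent; pass as default_headers= when constructing each client.
support_headers = raptor_headers("rpt_...", "...", "support-bot")
billing_headers = raptor_headers("rpt_...", "...", "billing-bot")
```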
Compliance
Raptor evidence logs support common compliance requirements:
| Requirement | How Raptor Helps |
|---|---|
| Audit trail | Every request logged with timestamp and metadata |
| Data retention | Configurable retention periods |
| Access control | Workspace isolation, API key scoping |
| Request tracing | Unique request IDs for debugging |
Retention
| Plan | Retention |
|---|---|
| Free | 7 days |
| Pro | 30 days |
| Enterprise | Custom |
Evidence logging is completely async. Logs are sent to a background worker via an in-memory channel. Your requests never wait for logging.
Request → Response (immediate)
└──→ Log channel → Background worker → Database
Zero impact on request latency. We buffer up to 10,000 log entries and batch-write to the database.
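The pattern above can be sketched in Python: a bounded queue stands in for the log channel, and a background worker drains it in batches. The 10,000-entry buffer comes from the text above; the batch size, drop-on-full policy, and function names are assumptions for illustration, not Raptor's actual implementation:

```python
import queue
import threading

LOG_BUFFER = queue.Queue(maxsize=10_000)  # buffer size from the docs above
BATCH_SIZE = 100                          # assumption: batch size not specified

def log_event(entry: dict) -> None:
    # Called on the request path: never block. If the buffer is full,
    # drop the entry rather than add latency to the request.
    try:
        LOG_BUFFER.put_nowait(entry)
    except queue.Full:
        pass

def worker(write_batch, stop: threading.Event) -> None:
    # Background worker: accumulate entries and flush them in batches,
    # standing in for the batch database write.
    batch = []
    while not (stop.is_set() and LOG_BUFFER.empty()):
        try:
            batch.append(LOG_BUFFER.get(timeout=0.05))
        except queue.Empty:
            pass
        if batch and (len(batch) >= BATCH_SIZE or LOG_BUFFER.empty()):
            write_batch(batch)
            batch = []
```

Because `log_event` only does a non-blocking enqueue, the request path pays a constant, tiny cost regardless of how slow the database write is.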