
OpenAI Integration

Works with both the Python and TypeScript SDKs. Your existing code stays the same: point the base URL at the Raptor proxy and pass two extra headers.

Setup

from openai import OpenAI

client = OpenAI(
    api_key="sk-your-openai-key",
    base_url="https://proxy.raptordata.dev/v1",
    default_headers={
        "X-Raptor-Api-Key": "rpt_your-key",
        "X-Raptor-Workspace-Id": "your-workspace-id"
    }
)

Chat Completions

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)

Streaming

stream = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True
)

for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")

Tools / Function Calling

Tool definitions pass through to OpenAI unchanged, exactly as with a direct connection:
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get weather for a location",
        "parameters": {
            "type": "object",
            "properties": {"location": {"type": "string"}},
            "required": ["location"]
        }
    }
}]

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Weather in Paris?"}],
    tools=tools
)
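
If the model decides to call the tool, the reply carries `tool_calls` on `response.choices[0].message`, with arguments encoded as a JSON string. The sketch below uses a plain dict mirroring that wire shape (the SDK returns typed objects), and the `get_weather` body is a placeholder, not a real implementation:

```python
import json

# Placeholder implementation; a real one would query a weather API.
def get_weather(location: str) -> str:
    return f"Sunny in {location}"

# Sample dict mirroring one entry of message.tool_calls.
tool_call = {
    "id": "call_123",
    "type": "function",
    "function": {"name": "get_weather", "arguments": '{"location": "Paris"}'},
}

# Arguments arrive as a JSON string and must be decoded before dispatch.
args = json.loads(tool_call["function"]["arguments"])
if tool_call["function"]["name"] == "get_weather":
    result = get_weather(**args)

# Feed the result back to the model as a "tool" role message.
tool_message = {
    "role": "tool",
    "tool_call_id": tool_call["id"],
    "content": result,
}
print(tool_message["content"])
```

Append `tool_message` to the conversation and make a second `chat.completions.create` call to let the model produce its final answer.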

Embeddings

response = client.embeddings.create(
    model="text-embedding-3-small",
    input="Hello world"
)
embedding = response.data[0].embedding
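
Embeddings are typically compared with cosine similarity. OpenAI's embedding vectors are reported to be unit-normalized, in which case a plain dot product ranks identically, but the full formula is safer in general. A small stdlib-only sketch on toy vectors standing in for `response.data[i].embedding`:

```python
import math

def cosine_similarity(a, b):
    # dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors; real embeddings have hundreds or thousands of dimensions.
v1 = [0.1, 0.3, 0.5]
v2 = [0.2, 0.1, 0.4]
print(round(cosine_similarity(v1, v2), 4))
```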

Check Cache Status

response = client.with_raw_response.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello"}]
)
completion = response.parse()  # the usual ChatCompletion object
print(f"Cache: {response.headers.get('x-raptor-cache')}")
print(f"Latency: {response.headers.get('x-raptor-latency-ms')}ms")

Response Headers

| Header | Description |
| --- | --- |
| X-Raptor-Cache | `hit` or `miss` |
| X-Raptor-Latency-Ms | Total Raptor overhead in milliseconds |
| X-Raptor-Upstream-Latency-Ms | Time spent waiting for OpenAI |
| X-Raptor-Request-Id | Unique ID for debugging |
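
These headers make it easy to track how well the cache is performing across requests. A minimal sketch, assuming the lowercased header names as HTTP clients typically expose them (`CacheStats` is a hypothetical helper, not part of any SDK):

```python
# Hypothetical helper: tally cache hits and misses from response headers.
class CacheStats:
    def __init__(self):
        self.hits = 0
        self.misses = 0

    def record(self, headers):
        # Headers are assumed lowercased, e.g. from response.headers.
        if headers.get("x-raptor-cache") == "hit":
            self.hits += 1
        else:
            self.misses += 1

    @property
    def hit_rate(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0

stats = CacheStats()
# Stand-in header dicts for three proxied requests.
for h in [{"x-raptor-cache": "hit"},
          {"x-raptor-cache": "miss"},
          {"x-raptor-cache": "hit"}]:
    stats.record(h)
print(f"hit rate: {stats.hit_rate:.0%}")
```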

Error Handling

Errors from OpenAI pass through unchanged. Raptor adds its own 403 responses for firewall blocks:
from openai import APIStatusError

try:
    response = client.chat.completions.create(...)
except APIStatusError as e:
    if e.status_code == 403 and "blocked_by_firewall" in str(e):
        print("Blocked by Raptor firewall")
    else:
        raise

All OpenAI features work: GPT-4, GPT-4o, o1, embeddings, images, audio, and more. If it works with OpenAI, it works through Raptor.