# OpenAI Integration
Raptor works with both the official Python and TypeScript OpenAI SDKs. Your existing code stays the same: point the client at the Raptor base URL and add the two Raptor headers.
## Setup
```python
from openai import OpenAI

client = OpenAI(
    api_key="sk-your-openai-key",
    base_url="https://proxy.raptordata.dev/v1",
    default_headers={
        "X-Raptor-Api-Key": "rpt_your-key",
        "X-Raptor-Workspace-Id": "your-workspace-id"
    }
)
```
```typescript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'sk-your-openai-key',
  baseURL: 'https://proxy.raptordata.dev/v1',
  defaultHeaders: {
    'X-Raptor-Api-Key': 'rpt_your-key',
    'X-Raptor-Workspace-Id': 'your-workspace-id'
  }
});
```
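In practice you will likely load these values from the environment rather than hardcoding them. A minimal sketch; `RAPTOR_API_KEY` and `RAPTOR_WORKSPACE_ID` are placeholder variable names for this example, not names Raptor requires:

```typescript
// Same client as above, with credentials read from environment variables.
// The Raptor variable names are placeholders chosen for this sketch.
const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: 'https://proxy.raptordata.dev/v1',
  defaultHeaders: {
    'X-Raptor-Api-Key': process.env.RAPTOR_API_KEY ?? '',
    'X-Raptor-Workspace-Id': process.env.RAPTOR_WORKSPACE_ID ?? ''
  }
});
```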
## Chat Completions
```python
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)
```
```typescript
const response = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello!' }]
});
console.log(response.choices[0].message.content);
```
## Streaming
```python
stream = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True
)
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")
```
```typescript
const stream = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Tell me a story' }],
  stream: true
});
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
```
## Function Calling

Function calling works exactly as it does with the official SDK:
```python
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get weather for a location",
        "parameters": {
            "type": "object",
            "properties": {"location": {"type": "string"}},
            "required": ["location"]
        }
    }
}]

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Weather in Paris?"}],
    tools=tools
)
```
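The TypeScript version has the same shape. The loop at the end is a sketch of reading back any tool calls the model makes; it assumes the standard `tool_calls` field on the response message:

```typescript
const tools = [{
  type: 'function' as const,
  function: {
    name: 'get_weather',
    description: 'Get weather for a location',
    parameters: {
      type: 'object',
      properties: { location: { type: 'string' } },
      required: ['location']
    }
  }
}];

const response = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Weather in Paris?' }],
  tools
});

// Inspect any tool calls the model requested.
for (const call of response.choices[0].message.tool_calls ?? []) {
  if (call.type === 'function') {
    console.log(call.function.name, call.function.arguments);
  }
}
```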
## Embeddings
```python
response = client.embeddings.create(
    model="text-embedding-3-small",
    input="Hello world"
)
embedding = response.data[0].embedding
```
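The TypeScript equivalent:

```typescript
const response = await client.embeddings.create({
  model: 'text-embedding-3-small',
  input: 'Hello world'
});
const embedding = response.data[0].embedding;
```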
## Check Cache Status
```python
# .with_raw_response hangs off the resource, not the client, and
# returns the response directly (no context manager needed).
response = client.chat.completions.with_raw_response.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello"}]
)
print(f"Cache: {response.headers.get('x-raptor-cache')}")
print(f"Latency: {response.headers.get('x-raptor-latency-ms')}ms")
```
```typescript
const { data, response } = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello' }]
}).withResponse();

console.log(`Cache: ${response.headers.get('x-raptor-cache')}`);
```
| Header | Description |
|---|---|
| `X-Raptor-Cache` | `hit` or `miss` |
| `X-Raptor-Latency-Ms` | Total Raptor overhead |
| `X-Raptor-Upstream-Latency-Ms` | Time waiting for OpenAI |
| `X-Raptor-Request-Id` | Unique ID for debugging |
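To capture all of the diagnostics at once, one option is to iterate the raw headers and filter on the prefix. A sketch, assuming every Raptor header uses the `x-raptor-` prefix shown above:

```typescript
const { response } = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello' }]
}).withResponse();

// Fetch-style Headers iterate as lowercase [name, value] pairs.
for (const [name, value] of response.headers) {
  if (name.startsWith('x-raptor-')) {
    console.log(`${name}: ${value}`);
  }
}
```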
## Error Handling
Errors from OpenAI pass through unchanged. The one error Raptor adds is a firewall block, which surfaces as a 403:
```python
from openai import APIStatusError

try:
    response = client.chat.completions.create(...)
except APIStatusError as e:
    # Unlike the APIError base class, APIStatusError always has a status_code.
    if e.status_code == 403 and "blocked_by_firewall" in str(e):
        print("Blocked by Raptor firewall")
    else:
        raise
```
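The TypeScript SDK surfaces the same error. A sketch of the equivalent check, assuming the `blocked_by_firewall` marker appears in the error message as it does above (`OpenAI` here is the class imported in Setup):

```typescript
try {
  await client.chat.completions.create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: 'Hello' }]
  });
} catch (err) {
  // OpenAI.APIError carries the HTTP status of the failed request.
  if (err instanceof OpenAI.APIError && err.status === 403 &&
      String(err.message).includes('blocked_by_firewall')) {
    console.log('Blocked by Raptor firewall');
  } else {
    throw err;
  }
}
```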
All OpenAI features work: GPT-4, GPT-4o, o1, embeddings, images, audio, and more. If it works with OpenAI, it works through Raptor.