# Migrate from OpenAI

Switch from the direct OpenAI API to AI Gateway with a single-line change.

## Overview
Migrating from direct OpenAI API calls to AI Gateway is simple: change the base URL and the API key. All your existing code works unchanged, and you gain caching, rate limiting, cost tracking, and fallback support.
## Quick Migration

### Before (Direct OpenAI)
```ts
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
});
```

### After (AI Gateway)
```diff
  import OpenAI from 'openai';

  const openai = new OpenAI({
-   apiKey: process.env.OPENAI_API_KEY,
+   baseURL: 'https://api.transactional.dev/ai/v1',
+   apiKey: process.env.GATEWAY_API_KEY, // gw_sk_xxx
  });

  // All existing code works unchanged!
  const response = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'Hello!' }],
  });
```

## Step-by-Step Migration
### 1. Get Your Gateway API Key

- Go to the AI Gateway Dashboard
- Click **API Keys**
- Create a new key (starts with `gw_sk_`)
### 2. Add Your OpenAI Provider Key

- Go to **Provider Keys** in the dashboard
- Add your OpenAI API key
- AI Gateway will use this key to call OpenAI on your behalf
### 3. Update Your Code
Change the base URL and API key:
```ts
const openai = new OpenAI({
  baseURL: 'https://api.transactional.dev/ai/v1',
  apiKey: process.env.GATEWAY_API_KEY,
});
```

### 4. Use As Normal
All your existing code works:
```ts
// Chat completions
const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
});

// Streaming
const stream = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Tell me a story' }],
  stream: true,
});
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? '');
}

// Function calling
const toolResponse = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [...],
  tools: [...],
});

// JSON mode
const json = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [...],
  response_format: { type: 'json_object' },
});

// Vision
const vision = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'What is this?' },
        { type: 'image_url', image_url: { url: '...' } },
      ],
    },
  ],
});
```

## Python Migration
### Before

```python
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["OPENAI_API_KEY"]
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)
```

### After
```diff
  import os

  from openai import OpenAI

  client = OpenAI(
-     api_key=os.environ["OPENAI_API_KEY"]
+     base_url="https://api.transactional.dev/ai/v1",
+     api_key=os.environ["GATEWAY_API_KEY"]
  )

  # All existing code works unchanged!
  response = client.chat.completions.create(
      model="gpt-4o",
      messages=[{"role": "user", "content": "Hello!"}]
  )
```

## What You Get
After migration, you automatically get:
| Feature | Description |
|---|---|
| Semantic Caching | Automatic response caching for identical/similar requests |
| Cost Tracking | Real-time cost monitoring per request |
| Rate Limiting | Protect your API keys with configurable limits |
| Fallback | Automatic failover to backup providers |
| Analytics | Usage dashboards and insights |
| Request Logs | Full request/response logging |
| Multi-Provider | Use Anthropic, Google, etc. with same code |
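Several of these features surface as response headers on every request (see Accessing Gateway Headers below). As a sketch, a small helper could interpret them in one place; the header names `x-cache`, `x-cost-total`, and `x-provider` come from this guide's examples, while the helper itself is hypothetical, not part of any SDK:

```ts
// Hypothetical helper: summarize AI Gateway response headers.
// Accepts any Headers-like lookup function so it works with fetch or SDK responses.
function summarizeGatewayHeaders(get: (name: string) => string | null) {
  return {
    cacheHit: get('x-cache') === 'HIT',          // semantic cache result
    costUsd: Number(get('x-cost-total') ?? '0'), // per-request cost
    provider: get('x-provider') ?? 'unknown',    // which upstream served it
  };
}
```

With a `fetch` response you would call it as `summarizeGatewayHeaders((n) => response.headers.get(n))`.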
## Environment Variables
Update your environment:
```bash
# Remove or keep for fallback
# OPENAI_API_KEY=sk-xxx

# Add Gateway key
GATEWAY_API_KEY=gw_sk_xxx
```

## Framework Examples
### Next.js

```ts
// lib/openai.ts
import OpenAI from 'openai';

export const openai = new OpenAI({
  baseURL: 'https://api.transactional.dev/ai/v1',
  apiKey: process.env.GATEWAY_API_KEY!,
});
```

### Express
```ts
// lib/openai.ts
import OpenAI from 'openai';

export const openai = new OpenAI({
  baseURL: 'https://api.transactional.dev/ai/v1',
  apiKey: process.env.GATEWAY_API_KEY,
});
```

### Vercel AI SDK
```ts
import { createOpenAI } from '@ai-sdk/openai';

const openai = createOpenAI({
  baseURL: 'https://api.transactional.dev/ai/v1',
  apiKey: process.env.GATEWAY_API_KEY!,
});
```

## Using Multiple Providers
With AI Gateway, switch between providers by changing the model:
```ts
// OpenAI
const openaiResponse = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [...],
});

// Anthropic (same client!)
const anthropicResponse = await openai.chat.completions.create({
  model: 'claude-3-5-sonnet',
  messages: [...],
});

// Google (same client!)
const googleResponse = await openai.chat.completions.create({
  model: 'gemini-1.5-pro',
  messages: [...],
});
```

## Accessing Gateway Headers
Get caching and cost info from response headers:
```ts
// Using fetch directly
const response = await fetch('https://api.transactional.dev/ai/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${process.env.GATEWAY_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'Hello' }],
  }),
});

// Check headers
console.log('Cache:', response.headers.get('x-cache')); // HIT or MISS
console.log('Cost:', response.headers.get('x-cost-total'));
console.log('Provider:', response.headers.get('x-provider'));
```

## Rollback
If you need to roll back, revert the changes:
```diff
  const openai = new OpenAI({
-   baseURL: 'https://api.transactional.dev/ai/v1',
-   apiKey: process.env.GATEWAY_API_KEY,
+   apiKey: process.env.OPENAI_API_KEY,
  });
```

## Troubleshooting
### 401 Unauthorized

- Verify your Gateway API key is correct (starts with `gw_sk_`)
- Check that the key hasn't been revoked
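A quick sanity check before digging further: make sure the app isn't still passing a raw OpenAI key (`sk-...`) where the Gateway key belongs. A minimal sketch of such a check (the function is illustrative, not part of any SDK):

```ts
// Hypothetical startup check: does the configured key look like a Gateway key?
// Gateway keys start with "gw_sk_"; raw OpenAI keys start with "sk-".
function isGatewayKey(key: string | undefined): boolean {
  return typeof key === 'string' && key.startsWith('gw_sk_');
}
```

A caller might fail fast at boot, e.g. `if (!isGatewayKey(process.env.GATEWAY_API_KEY)) throw new Error('GATEWAY_API_KEY missing or malformed');`.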
### Provider Key Not Found

- Add your OpenAI API key in the Gateway dashboard under **Provider Keys**
### Rate Limited

- Check your rate limit settings in the Gateway dashboard
- Upgrade your plan for higher limits
## Next Steps
- AI Gateway Overview - Full feature documentation
- Caching - Configure semantic caching
- Fallback - Set up provider fallbacks
- Cost Tracking - Monitor spending