Supported Providers
Complete list of supported AI providers and their models.
Overview
AI Gateway supports multiple LLM providers through a unified OpenAI-compatible API. Add your provider API keys in the dashboard to enable access to their models.
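Because the API is OpenAI-compatible, requests use the same chat-completions shape regardless of the upstream provider. A minimal sketch of building such a request body (the types and helper here are illustrative, not part of any official SDK):

```typescript
// Minimal sketch: build an OpenAI-compatible chat completion request body.
// `ChatMessage`, `ChatRequest`, and `buildChatRequest` are illustrative names.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

interface ChatRequest {
  model: string;
  messages: ChatMessage[];
  stream?: boolean;
}

function buildChatRequest(
  model: string,
  messages: ChatMessage[],
  stream = false
): ChatRequest {
  if (messages.length === 0) throw new Error("messages must not be empty");
  return { model, messages, stream };
}

// The same shape works for any configured provider's models:
const body = buildChatRequest("gpt-4o", [{ role: "user", content: "Hello" }]);
```

Only the `model` value changes when you switch providers; the rest of the request stays identical.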
Provider Matrix
| Provider | Status | Models | Features |
|---|---|---|---|
| OpenAI | Fully Supported | GPT-4o, GPT-4-turbo, o1, o1-mini | Streaming, Functions, Vision |
| Anthropic | Fully Supported | Claude Opus 4, Claude Sonnet 4, Claude 3.5, Claude 3 | Streaming, Tool Use |
| Google AI | Coming Soon | Gemini 2.0, Gemini 1.5 Pro, Gemini 1.5 Flash | - |
| AWS Bedrock | Coming Soon | Claude, Llama 3, Titan | - |
| Azure OpenAI | Coming Soon | GPT-4, GPT-3.5 | - |
Adding Provider Keys
1. Navigate to AI Gateway Settings
2. Under "Provider Keys", click Add Key
3. Select your provider from the dropdown
4. Paste your API key
5. Click Save
Key Security
- Provider keys are encrypted at rest
- Keys are never exposed in logs or responses
- You can rotate keys at any time
- Deleting a key immediately revokes access
Model Naming
Use the provider's model names directly:
```js
// OpenAI
model: 'gpt-4o'
model: 'gpt-4-turbo'
model: 'gpt-3.5-turbo'
model: 'o1'
model: 'o1-mini'

// Anthropic
model: 'claude-3-5-sonnet'
model: 'claude-3-opus'
model: 'claude-3-haiku'
model: 'claude-sonnet-4'
model: 'claude-opus-4'
```
Provider Priority
When you have multiple providers configured, you can set priority for fallback:
- Go to Settings > Provider Keys
- Drag providers to reorder priority
- The first provider is used by default
- If it fails, the next provider is tried
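The priority behavior above can be sketched as a try-in-order loop. Real provider calls are asynchronous; the sketch below uses synchronous calls to keep it small, and `ProviderCall` is a hypothetical abstraction, not part of any official SDK:

```typescript
// Sketch of priority-ordered fallback: try each provider in order until one succeeds.
// `ProviderCall` is a hypothetical stand-in for a provider request.
type ProviderCall = (prompt: string) => string;

function completeWithFallback(providers: ProviderCall[], prompt: string): string {
  let lastError: unknown;
  // `providers` is ordered by priority, highest first.
  for (const call of providers) {
    try {
      return call(prompt); // first provider that succeeds wins
    } catch (err) {
      lastError = err; // remember the failure and try the next provider
    }
  }
  throw new Error(`All providers failed; last error: ${String(lastError)}`);
}
```

The key design point is that failures are swallowed until the list is exhausted, so a single provider outage never surfaces to the caller as long as a lower-priority provider can serve the request.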
Provider-Specific Limits
| Provider | Default Rate Limit | Max Tokens | Context Window |
|---|---|---|---|
| OpenAI | 3,500 RPM | 4,096-128K | 128K (GPT-4o) |
| Anthropic | 1,000 RPM | 4,096-200K | 200K (Claude 3.5) |
These limits are set by the providers themselves, not by AI Gateway; the gateway respects them and passes provider rate-limit errors through to your application.
Using Multiple Providers
Fallback Configuration
Set up automatic failover between providers:
```js
// Primary request goes to OpenAI
// If it fails, AI Gateway tries Anthropic
const response = await openai.chat.completions.create({
  model: 'gpt-4o', // Primary
  messages: [...],
});
```
Configure fallback mapping in Settings:
| Primary Model | Fallback Model |
|---|---|
| gpt-4o | claude-3-5-sonnet |
| gpt-3.5-turbo | claude-3-haiku |
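A mapping like the table above amounts to a simple lookup from primary model to fallback model. A sketch (the map contents mirror the example configuration; `resolveFallback` is an illustrative helper, not a gateway API):

```typescript
// Sketch of a model-level fallback map matching the example configuration.
const fallbackModel: Record<string, string> = {
  "gpt-4o": "claude-3-5-sonnet",
  "gpt-3.5-turbo": "claude-3-haiku",
};

function resolveFallback(primary: string): string | undefined {
  // undefined means no fallback is configured for this model
  return fallbackModel[primary];
}
```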
Load Balancing
Coming soon: Distribute requests across providers based on cost, latency, or availability.
Checking Provider Status
View provider health in the dashboard:
- Go to AI Gateway > Analytics
- See real-time status for each provider
- View error rates and latency by provider
Next Steps
- OpenAI Integration - Full OpenAI setup guide
- Anthropic Integration - Full Anthropic setup guide
- Fallback Configuration - Set up automatic failover