Quickstart
Get started with AI Gateway in 5 minutes.
Prerequisites
Before you begin, you'll need:
- A Transactional account (sign up here)
- An API key from at least one provider (OpenAI, Anthropic, etc.)
Step 1: Add a Provider Key
First, add your LLM provider's API key to AI Gateway:
- Navigate to AI Gateway in your dashboard
- Go to the Settings tab
- Under "Provider Keys", click Add Key
- Select your provider (e.g., OpenAI)
- Paste your API key and click Save
Tip: You can add multiple providers for fallback support.
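Fallback means a request can be retried against a second provider when the first fails. A minimal sketch of that idea (the provider functions below are illustrative stand-ins, not the gateway's internals):

```typescript
// Each provider is modeled as an async function that either returns a
// completion string or throws. Providers are tried in order.
type Provider = (prompt: string) => Promise<string>;

async function completeWithFallback(
  providers: Provider[],
  prompt: string,
): Promise<string> {
  let lastError: unknown;
  for (const provider of providers) {
    try {
      return await provider(prompt);
    } catch (err) {
      lastError = err; // remember the failure, try the next provider
    }
  }
  throw lastError; // every provider failed
}

// Example: the primary always fails, the secondary succeeds.
const primary: Provider = async () => { throw new Error("provider down"); };
const secondary: Provider = async (p) => `echo: ${p}`;

completeWithFallback([primary, secondary], "Hello!").then(console.log);
// logs "echo: Hello!"
```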
Step 2: Create a Gateway API Key
Generate an API key to authenticate your requests:
- In the AI Gateway dashboard, go to the Settings tab
- Under "Gateway API Keys", click Create API Key
- Give your key a name (e.g., "Production")
- Copy the generated key (it starts with gw_sk_)
Important: Store this key securely. You won't be able to see it again.
Step 3: Make Your First Request
Using the OpenAI SDK (Recommended)
Install the OpenAI SDK:

```shell
npm install openai
```

Configure it to use AI Gateway:
```typescript
import OpenAI from 'openai';

const openai = new OpenAI({
  baseURL: 'https://api.transactional.dev/ai/v1',
  apiKey: process.env.GATEWAY_API_KEY, // gw_sk_...
});

const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [
    { role: 'user', content: 'Hello, how are you?' }
  ],
});

console.log(response.choices[0].message.content);
```

Using curl
```shell
curl -X POST https://api.transactional.dev/ai/v1/chat/completions \
  -H "Authorization: Bearer gw_sk_your_key_here" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "user", "content": "Hello!"}
    ]
  }'
```

Using Python
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.transactional.dev/ai/v1",
    api_key="gw_sk_your_key_here"
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

print(response.choices[0].message.content)
```

Step 4: View Your Request
After making a request:
- Go to the Requests tab in AI Gateway
- You should see your request with:
  - Model used
  - Token count
  - Latency
  - Cost
  - Cache status
Step 5: Enable Caching (Optional)
Enable response caching to reduce costs:
- Go to Settings > Cache Settings
- Toggle "Enable Caching" on
- Set a TTL (e.g., 3600 seconds = 1 hour)
- Click Save
Now, identical requests will return cached responses instantly.
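"Identical" here means the request parameters match exactly; conceptually, a cache like this keys responses on the model plus messages. A rough illustration of that idea (not the gateway's actual implementation):

```typescript
import { createHash } from "node:crypto";

// Derive a cache key from the request parameters. Two requests with the
// same model and messages produce the same key, so the second can be
// served from cache instead of hitting the provider.
function cacheKey(
  model: string,
  messages: { role: string; content: string }[],
): string {
  return createHash("sha256")
    .update(JSON.stringify({ model, messages }))
    .digest("hex");
}

const a = cacheKey("gpt-4o", [{ role: "user", content: "Hello!" }]);
const b = cacheKey("gpt-4o", [{ role: "user", content: "Hello!" }]);
const c = cacheKey("gpt-4o", [{ role: "user", content: "Hi!" }]);

console.log(a === b); // true  — identical requests share a key
console.log(a === c); // false — different content, different key
```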
Using Different Providers
AI Gateway translates requests to each provider's format automatically:
OpenAI Models
```typescript
const response = await openai.chat.completions.create({
  model: 'gpt-4o', // or gpt-4-turbo, gpt-3.5-turbo, o1, o1-mini
  messages: [{ role: 'user', content: 'Hello!' }],
});
```

Anthropic Models
```typescript
const response = await openai.chat.completions.create({
  model: 'claude-3-5-sonnet', // or claude-3-opus, claude-3-haiku
  messages: [{ role: 'user', content: 'Hello!' }],
});
```

Note: You must have the corresponding provider key added in Settings.
Environment Variables
We recommend storing your gateway key in an environment variable:
```shell
# .env
GATEWAY_API_KEY=gw_sk_your_key_here
```

```typescript
const openai = new OpenAI({
  baseURL: 'https://api.transactional.dev/ai/v1',
  apiKey: process.env.GATEWAY_API_KEY,
});
```

What's Next?
Now that you're up and running:
- Configure Caching - Reduce costs with response caching
- Set Up Fallbacks - Automatic provider failover
- Enable Streaming - Real-time streaming responses
- View Analytics - Track costs and usage
Troubleshooting
401 Unauthorized
- Check that your gateway API key is correct
- Ensure the key starts with gw_sk_
- Verify the key hasn't been revoked
400 Bad Request - Provider not configured
- Add the provider's API key in Settings
- Check that you've selected the correct provider
429 Too Many Requests
- You've hit rate limits
- Check your plan's rate limits in Settings
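If you hit 429s under load, a common client-side mitigation is retrying with exponential backoff. A minimal sketch of that pattern (a generic helper, not part of the gateway or the OpenAI SDK):

```typescript
// Retry an async call, doubling the delay after each failure.
async function withRetries<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt + 1 >= maxAttempts) throw err; // out of attempts
      const delay = baseDelayMs * 2 ** attempt;  // 500ms, 1s, 2s, ...
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}

// Example: fails twice (simulating 429 responses), then succeeds.
let calls = 0;
const flaky = async () => {
  calls++;
  if (calls < 3) throw new Error("429 Too Many Requests");
  return "ok";
};

withRetries(flaky, 3, 10).then(console.log); // logs "ok"
```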
Need help?
- Check the API Errors reference
- Contact support