# Troubleshooting

Common issues and solutions for AI Gateway.
## Common Issues
### Authentication Errors
#### 401 Unauthorized - Invalid API Key

**Symptoms:**

```json
{
  "error": {
    "code": "unauthorized",
    "message": "Invalid or missing API key"
  }
}
```

**Solutions:**

- Verify your key starts with `gw_sk_`
- Check the key hasn't been revoked in Settings
- Ensure the Authorization header format is correct: `Authorization: Bearer gw_sk_your_key_here`
- Check for extra whitespace in the key
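The last three checks can be combined in a small helper. This is a minimal sketch: the `gw_sk_` prefix and header format come from the list above, but the helper name `buildAuthHeader` is ours, not part of any SDK.

```javascript
// Sanity-check a gateway key before use: trim stray whitespace (a common
// copy/paste or .env artifact) and verify the documented gw_sk_ prefix.
function buildAuthHeader(rawKey) {
  const key = rawKey.trim();
  if (!key.startsWith('gw_sk_')) {
    throw new Error('API key should start with gw_sk_');
  }
  return { Authorization: `Bearer ${key}` };
}
```

Failing fast here turns a confusing 401 from the gateway into an immediate, local error.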
#### 403 Forbidden - IP Not Allowed

**Symptoms:**

```json
{
  "error": {
    "code": "ip_not_allowed",
    "message": "Request IP is not in the allowlist"
  }
}
```

**Solutions:**

- Check your current IP: `curl https://api.transactional.dev/ip`
- Add your IP to the allowlist in Settings > Security
- If using a proxy/load balancer, ensure `X-Forwarded-For` is set
### Provider Errors
#### 400 Bad Request - Provider Not Configured

**Symptoms:**

```json
{
  "error": {
    "code": "provider_not_configured",
    "message": "No API key configured for provider 'openai'"
  }
}
```

**Solutions:**

- Go to Settings
- Add the provider's API key under "Provider Keys"
- Verify the key is valid with the provider
#### 400 Bad Request - Model Not Found

**Symptoms:**

```json
{
  "error": {
    "code": "model_not_found",
    "message": "Model 'gpt-5' not found"
  }
}
```

**Solutions:**

- Check available models: `GET /ai/v1/models`
- Verify the model name's spelling
- Ensure you have the provider configured for that model
### Rate Limiting
#### 429 Too Many Requests

**Symptoms:**

```json
{
  "error": {
    "code": "rate_limit_exceeded",
    "message": "Rate limit exceeded. Retry after 30 seconds."
  }
}
```

**Solutions:**

- Check the `Retry-After` header for the wait time
- Implement exponential backoff
- Upgrade your plan for higher limits
- Enable caching to reduce requests
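Exponential backoff that prefers the server's `Retry-After` hint can be sketched like this. The `withBackoff` helper and the `err.status` / `err.retryAfter` fields are illustrative assumptions, not part of the gateway SDK; adapt them to however your client surfaces 429 responses.

```javascript
// Retry a request on 429, waiting either the server-provided Retry-After
// (in seconds) or an exponentially growing delay, whichever is available.
async function withBackoff(fn, { maxRetries = 3, baseMs = 500 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (err.status !== 429 || attempt >= maxRetries) throw err;
      const waitMs = err.retryAfter
        ? err.retryAfter * 1000 // honor the server's hint
        : baseMs * 2 ** attempt; // 500ms, 1s, 2s, ...
      await new Promise((resolve) => setTimeout(resolve, waitMs));
    }
  }
}
```

Adding a little random jitter to `waitMs` is also common, so that many clients rate-limited at once don't all retry at the same instant.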
### Connection Issues
#### Timeout Errors

**Symptoms:**

- Request hangs for 30+ seconds
- `ETIMEDOUT` or `ECONNRESET` errors

**Solutions:**

- Check provider status pages
- Implement request timeouts:

```javascript
const response = await openai.chat.completions.create(
  {
    model: 'gpt-4o',
    messages: [...],
  },
  { timeout: 30000 } // 30 seconds
);
```

- Configure fallback providers
#### SSL/TLS Errors

**Symptoms:**

- `UNABLE_TO_VERIFY_LEAF_SIGNATURE`
- Certificate errors

**Solutions:**

- Ensure your system time is correct
- Update your CA certificates
- Check for corporate proxy interference
### Response Issues
#### Empty or Truncated Responses

**Symptoms:**

- `choices[0].message.content` is null or empty
- Response cuts off mid-sentence

**Solutions:**

- Check `finish_reason`:
  - `length` - Increase `max_tokens`
  - `content_filter` - Review the content policy
- For Anthropic models, always set `max_tokens`:

```javascript
await openai.chat.completions.create({
  model: 'claude-3-5-sonnet',
  max_tokens: 4096, // Required for Anthropic
  messages: [...],
});
```
#### Unexpected Model Response

**Symptoms:**

- Model doesn't follow instructions
- Response format is wrong

**Solutions:**

- Check that your system prompt is included
- Verify the message order (system, user, assistant)
- Try lowering temperature for more deterministic output
- Use JSON mode for structured output: `response_format: { type: 'json_object' }`
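Even with JSON mode it is worth parsing the reply defensively, since some models occasionally wrap the object in a Markdown code fence. This `parseJsonReply` helper is our own illustration, not a gateway API:

```javascript
// Parse a JSON-mode reply, stripping an optional ```json ... ``` wrapper
// before handing the text to JSON.parse. Throws if the payload still
// isn't valid JSON, so callers can retry or fall back.
function parseJsonReply(content) {
  const trimmed = content.trim().replace(/^```(?:json)?\s*|\s*```$/g, '');
  return JSON.parse(trimmed);
}
```

Call it on `completion.choices[0].message.content` and wrap the call in a try/catch if you want to retry on malformed output.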
### Caching Issues
#### Cache Not Working

**Symptoms:**

- `X-Cache: MISS` on repeated identical requests
- No cost savings from caching

**Solutions:**

- Verify caching is enabled in Settings
- Check that requests are truly identical:
  - Same model
  - Same messages (exact text)
  - Same temperature and other parameters
- Ensure you're not sending `X-Cache-Control: no-cache`
#### Stale Cache Responses

**Symptoms:**

- Getting outdated responses

**Solutions:**

- Adjust the cache TTL in Settings
- Force a fresh response: `headers: { 'X-Cache-Control': 'no-cache' }`
- Clear the cache in Settings > Cache Settings
### Streaming Issues
#### Stream Interruptions

**Symptoms:**

- Stream stops mid-response
- `SSE error` messages

**Solutions:**

- Handle stream errors:

```javascript
try {
  for await (const chunk of stream) {
    // Process chunk
  }
} catch (error) {
  if (error.code === 'ECONNRESET') {
    // Retry the request
  }
}
```

- Check network stability
- Implement reconnection logic
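Reconnection logic can be sketched as a wrapper that restarts the stream when the connection resets. `startStream` and `consumeWithRetry` are illustrative names (any function returning an async iterable of chunks works); note that a naive restart replays the response from the beginning, so discard or deduplicate partial output from the failed attempt.

```javascript
// Consume a chunk stream, restarting it on ECONNRESET up to maxRetries
// times. Any other error, or exhausting the retries, is rethrown.
async function consumeWithRetry(startStream, onChunk, maxRetries = 2) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      for await (const chunk of startStream()) {
        onChunk(chunk);
      }
      return; // stream completed cleanly
    } catch (err) {
      if (err.code !== 'ECONNRESET' || attempt === maxRetries) throw err;
      // fall through: restart the stream from scratch
    }
  }
}
```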
#### No Streaming Output

**Symptoms:**

- `stream: true` but receiving a complete response

**Solutions:**

- Verify you're iterating the stream:

```javascript
const stream = await openai.chat.completions.create({
  stream: true,
  // ...
});

// Must iterate!
for await (const chunk of stream) {
  // ...
}
```

- Check that your HTTP client supports streaming
## Debugging Tips

### Enable Debug Logging

```javascript
const openai = new OpenAI({
  baseURL: 'https://api.transactional.dev/ai/v1',
  apiKey: process.env.GATEWAY_API_KEY,
});

// Log all requests
openai.on('request', (request) => {
  console.log('Request:', request.method, request.path);
});

openai.on('response', (response) => {
  console.log('Response:', response.status);
  console.log('X-Cache:', response.headers.get('x-cache'));
});
```

### Check Request Headers
View what headers are being sent:
```bash
curl -v https://api.transactional.dev/ai/v1/chat/completions \
  -H "Authorization: Bearer gw_sk_xxx" \
  -H "Content-Type: application/json" \
  -d '{"model":"gpt-4o","messages":[{"role":"user","content":"Hi"}]}'
```

### View Request in Dashboard
- Go to AI Gateway > Requests
- Find your request by timestamp
- View full request/response details
- Check for error messages
## Getting Help

If you're still stuck:

- **Check the docs** - Search for your specific error
- **View status page** - status.transactional.dev
- **Contact support** - support@transactional.dev

Include in your support request:

- Request ID (from the `X-Request-ID` header)
- Timestamp of the issue
- Error message received
- Code snippet (without API keys)