Troubleshooting

Common issues and solutions for AI Gateway.

Common Issues

Authentication Errors

401 Unauthorized - Invalid API Key

Symptoms:

{
  "error": {
    "code": "unauthorized",
    "message": "Invalid or missing API key"
  }
}

Solutions:

  1. Verify your key starts with gw_sk_
  2. Check the key hasn't been revoked in Settings
  3. Ensure the Authorization header format is correct:
    Authorization: Bearer gw_sk_your_key_here
    
  4. Check for extra whitespace in the key
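Steps 1, 3, and 4 can be checked programmatically before any request goes out. A minimal sketch (`buildAuthHeaders` is an illustrative helper, not part of any SDK; the `gw_sk_` prefix and `Bearer` format come from the steps above):

```javascript
// Sketch only: validate and normalize a gateway key before building headers.
function buildAuthHeaders(rawKey) {
  const apiKey = (rawKey ?? '').trim(); // step 4: strip accidental whitespace
  if (!apiKey.startsWith('gw_sk_')) {
    // step 1: gateway keys start with gw_sk_
    throw new Error('Invalid gateway key: expected a gw_sk_ prefix');
  }
  // step 3: correct Authorization header format
  return {
    Authorization: `Bearer ${apiKey}`,
    'Content-Type': 'application/json',
  };
}
```

Running this once at startup surfaces a malformed key immediately instead of as a 401 at request time.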

403 Forbidden - IP Not Allowed

Symptoms:

{
  "error": {
    "code": "ip_not_allowed",
    "message": "Request IP is not in the allowlist"
  }
}

Solutions:

  1. Check your current IP: curl https://api.transactional.dev/ip
  2. Add your IP to the allowlist in Settings > Security
  3. If using a proxy/load balancer, ensure X-Forwarded-For is set

Provider Errors

400 Bad Request - Provider Not Configured

Symptoms:

{
  "error": {
    "code": "provider_not_configured",
    "message": "No API key configured for provider 'openai'"
  }
}

Solutions:

  1. Go to Settings
  2. Add the provider's API key under "Provider Keys"
  3. Verify the key is valid with the provider

400 Bad Request - Model Not Found

Symptoms:

{
  "error": {
    "code": "model_not_found",
    "message": "Model 'gpt-5' not found"
  }
}

Solutions:

  1. Check available models: GET /ai/v1/models
  2. Verify model name spelling
  3. Ensure you have the provider configured for that model

Rate Limiting

429 Too Many Requests

Symptoms:

{
  "error": {
    "code": "rate_limit_exceeded",
    "message": "Rate limit exceeded. Retry after 30 seconds."
  }
}

Solutions:

  1. Check Retry-After header for wait time
  2. Implement exponential backoff
  3. Upgrade your plan for higher limits
  4. Enable caching to reduce requests
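Steps 1 and 2 can be combined: honor the server's Retry-After header when present, and otherwise double the wait on each attempt. A hedged sketch (`withBackoff` and the fetch-style response shape are our assumptions, not gateway APIs):

```javascript
// Sketch: retry a request on 429, preferring the Retry-After header,
// falling back to exponential backoff. Returns the last response if
// retries are exhausted.
async function withBackoff(makeRequest, { maxRetries = 5, baseDelayMs = 1000 } = {}) {
  for (let attempt = 0; ; attempt++) {
    const response = await makeRequest();
    if (response.status !== 429 || attempt === maxRetries) return response;
    // Retry-After is given in seconds; use it when present
    const retryAfter = Number(response.headers.get('retry-after'));
    const delayMs = retryAfter > 0 ? retryAfter * 1000 : 2 ** attempt * baseDelayMs;
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
}
```

Adding random jitter to the delay is also common when many clients may hit the limit simultaneously.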

Connection Issues

Timeout Errors

Symptoms:

  • Request hangs for 30+ seconds
  • ETIMEDOUT or ECONNRESET errors

Solutions:

  1. Check provider status pages for ongoing incidents
  2. Implement request timeouts:
    const response = await openai.chat.completions.create({
      model: 'gpt-4o',
      messages: [...],
    }, {
      timeout: 30000, // 30 seconds
    });
  3. Configure fallback providers
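If you handle fallbacks client-side, one approach for step 3 is to try models in order until one succeeds (a sketch under our own assumptions: the model list is an example, and whether the gateway also offers server-side fallbacks is not covered here):

```javascript
// Sketch: attempt each model in turn, moving on when a request fails
// (e.g. with ETIMEDOUT). The helper name is illustrative.
async function completeWithFallback(client, messages, models = ['gpt-4o', 'claude-3-5-sonnet']) {
  let lastError;
  for (const model of models) {
    try {
      return await client.chat.completions.create(
        { model, messages, max_tokens: 1024 },
        { timeout: 30_000 } // per-request timeout, as in step 2 above
      );
    } catch (error) {
      lastError = error; // remember the failure and try the next model
    }
  }
  throw lastError; // every model failed
}
```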

SSL/TLS Errors

Symptoms:

  • UNABLE_TO_VERIFY_LEAF_SIGNATURE
  • Certificate errors

Solutions:

  1. Ensure your system time is correct
  2. Update your CA certificates
  3. Check for corporate proxy interference

Response Issues

Empty or Truncated Responses

Symptoms:

  • choices[0].message.content is null or empty
  • Response cuts off mid-sentence

Solutions:

  1. Check finish_reason:
    • length - Increase max_tokens
    • content_filter - Review content policy
  2. For Anthropic models, always set max_tokens:
    await openai.chat.completions.create({
      model: 'claude-3-5-sonnet',
      max_tokens: 4096,  // Required for Anthropic
      messages: [...],
    });

Unexpected Model Response

Symptoms:

  • Model doesn't follow instructions
  • Response format is wrong

Solutions:

  1. Check your system prompt is included
  2. Verify message order (system, user, assistant)
  3. Try lowering temperature for more deterministic output
  4. Use JSON mode for structured output:
    response_format: { type: 'json_object' }
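Solutions 2–4 can be combined in a single request body. A sketch (the model name and prompts are examples; note that JSON mode typically requires mentioning JSON in a message):

```javascript
// Example request body combining message ordering, low temperature,
// and JSON mode. Values are illustrative.
const body = {
  model: 'gpt-4o',
  temperature: 0, // solution 3: more deterministic output
  response_format: { type: 'json_object' }, // solution 4: JSON mode
  messages: [
    // solution 2: system message first, then the user turn
    { role: 'system', content: 'Reply with a JSON object like {"sentiment": "positive"}.' },
    { role: 'user', content: 'I love this product!' },
  ],
};
```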

Caching Issues

Cache Not Working

Symptoms:

  • X-Cache: MISS on repeated identical requests
  • No cost savings from caching

Solutions:

  1. Verify caching is enabled in Settings
  2. Check requests are truly identical:
    • Same model
    • Same messages (exact text)
    • Same temperature and other parameters
  3. Ensure you're not sending X-Cache-Control: no-cache
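How the gateway derives cache keys is internal, but conceptually two requests can only share a cache entry if their meaningful content matches exactly. An illustrative way to compare two request bodies while ignoring JSON key order (a conceptual sketch, not the gateway's actual key function):

```javascript
// Sketch: serialize a request body with keys in sorted order, so two
// bodies compare equal iff their content (not key order) is identical.
function stableStringify(value) {
  if (Array.isArray(value)) return '[' + value.map(stableStringify).join(',') + ']';
  if (value !== null && typeof value === 'object') {
    return '{' + Object.keys(value).sort()
      .map((key) => JSON.stringify(key) + ':' + stableStringify(value[key]))
      .join(',') + '}';
  }
  return JSON.stringify(value);
}
```

Any difference in model, message text, temperature, or other parameters produces a different serialization, and therefore a cache MISS.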

Stale Cache Responses

Symptoms:

  • Getting outdated responses

Solutions:

  1. Adjust cache TTL in Settings
  2. Force fresh response:
    headers: { 'X-Cache-Control': 'no-cache' }
  3. Clear cache in Settings > Cache Settings

Streaming Issues

Stream Interruptions

Symptoms:

  • Stream stops mid-response
  • SSE error messages

Solutions:

  1. Handle stream errors:
    try {
      for await (const chunk of stream) {
        // Process chunk
      }
    } catch (error) {
      if (error.code === 'ECONNRESET') {
        // Retry the request
      }
    }
  2. Check network stability
  3. Implement reconnection logic
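Steps 1 and 3 can be combined into a small wrapper that retries on connection resets (a sketch: `startStream` and `processChunk` are placeholders for your own code):

```javascript
// Sketch: consume a stream, retrying the whole request when the
// connection is reset, up to maxRetries times.
async function consumeWithRetry(startStream, processChunk, maxRetries = 2) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      for await (const chunk of await startStream()) {
        processChunk(chunk);
      }
      return; // stream completed cleanly
    } catch (error) {
      if (error.code !== 'ECONNRESET' || attempt === maxRetries) throw error;
      // otherwise fall through and retry the request
    }
  }
}
```

Note that a retry replays the stream from the beginning, so deduplicate already-processed chunks if that matters for your use case.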

No Streaming Output

Symptoms:

  • stream: true is set, but a single complete response arrives instead of chunks

Solutions:

  1. Verify you're iterating the stream:
    const stream = await openai.chat.completions.create({
      stream: true,
      // ...
    });
     
    // Must iterate!
    for await (const chunk of stream) {
      // ...
    }
  2. Check your HTTP client supports streaming

Debugging Tips

Enable Debug Logging

The OpenAI SDK doesn't emit request events, but you can pass a custom fetch to log every request and response, including the gateway's X-Cache header:

const openai = new OpenAI({
  baseURL: 'https://api.transactional.dev/ai/v1',
  apiKey: process.env.GATEWAY_API_KEY,
  // Wrap fetch to log each request/response pair
  fetch: async (url, init) => {
    console.log('Request:', init?.method ?? 'GET', url);
    const response = await fetch(url, init);
    console.log('Response:', response.status);
    console.log('X-Cache:', response.headers.get('x-cache'));
    return response;
  },
});

Check Request Headers

View what headers are being sent:

curl -v https://api.transactional.dev/ai/v1/chat/completions \
  -H "Authorization: Bearer gw_sk_xxx" \
  -H "Content-Type: application/json" \
  -d '{"model":"gpt-4o","messages":[{"role":"user","content":"Hi"}]}'

View Request in Dashboard

  1. Go to AI Gateway > Requests
  2. Find your request by timestamp
  3. View full request/response details
  4. Check for error messages

Getting Help

If you're still stuck:

  1. Check the docs - Search for your specific error
  2. View status page - status.transactional.dev
  3. Contact support - support@transactional.dev

Include in your support request:

  • Request ID (from X-Request-ID header)
  • Timestamp of the issue
  • Error message received
  • Code snippet (without API keys)