Error Handling

Understanding and handling AI Gateway errors.

Error Response Format

All errors follow this format:

{
  "error": {
    "code": "error_code",
    "message": "Human-readable error message",
    "type": "error_type",
    "param": "parameter_name",
    "provider_error": { }
  }
}
| Field | Type | Description |
|---|---|---|
| code | string | Machine-readable error code |
| message | string | Human-readable description |
| type | string | Error category |
| param | string | Parameter that caused the error (if applicable) |
| provider_error | object | Original provider error (if applicable) |
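
The envelope above maps naturally onto a small type in application code. As a sketch (the interface and helper names here are illustrative, not part of any Gateway SDK):

```typescript
// Illustrative types for the error envelope shown above.
interface GatewayError {
  code: string;
  message: string;
  type: string;
  param?: string;
  provider_error?: Record<string, unknown>;
}

// Extract the error object from a parsed response body, if present.
function parseGatewayError(body: unknown): GatewayError | null {
  if (typeof body === 'object' && body !== null && 'error' in body) {
    return (body as { error: GatewayError }).error;
  }
  return null;
}
```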

HTTP Status Codes

| Status | Meaning | When It Occurs |
|---|---|---|
| 400 | Bad Request | Invalid request format or parameters |
| 401 | Unauthorized | Invalid or missing API key |
| 403 | Forbidden | Key lacks permission |
| 404 | Not Found | Resource doesn't exist |
| 429 | Too Many Requests | Rate limit exceeded |
| 500 | Internal Error | Server or provider error |
| 502 | Bad Gateway | Provider unavailable |
| 503 | Service Unavailable | Gateway temporarily unavailable |

Error Types

Authentication Errors

401 Unauthorized

{
  "error": {
    "code": "unauthorized",
    "message": "Invalid or missing API key",
    "type": "authentication_error"
  }
}

Causes:

  • Missing Authorization header
  • Invalid API key format
  • Revoked or expired key

Solutions:

  • Verify key starts with gw_sk_
  • Check key hasn't been revoked
  • Ensure header format is Bearer gw_sk_xxx
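
The first and third checks can be automated with a trivial guard before any request is sent. A minimal sketch (the helper names are illustrative, and the only assumption is the gw_sk_ prefix described above):

```typescript
// Sanity-check a Gateway key before attaching it as a Bearer token.
// Keys are expected to start with the gw_sk_ prefix described above.
function isValidKeyFormat(key: string | undefined): key is string {
  return typeof key === 'string' && key.startsWith('gw_sk_') && key.length > 'gw_sk_'.length;
}

// Build the Authorization header value, failing early on a malformed key.
function authHeader(key: string | undefined): string {
  if (!isValidKeyFormat(key)) {
    throw new Error('API key is missing or malformed (expected gw_sk_... prefix)');
  }
  return `Bearer ${key}`;
}
```

Failing fast on a malformed key turns a runtime 401 into an immediate, local error with a clearer message.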

403 Forbidden

{
  "error": {
    "code": "forbidden",
    "message": "API key does not have permission for this resource",
    "type": "authentication_error"
  }
}

Causes:

  • Key lacks required permissions
  • Organization doesn't have feature access

Request Errors

400 Bad Request - Invalid Parameters

{
  "error": {
    "code": "invalid_request",
    "message": "Invalid value for 'temperature': must be between 0 and 2",
    "type": "invalid_request_error",
    "param": "temperature"
  }
}

Common causes:

  • Invalid temperature (must be 0-2)
  • Invalid top_p (must be 0-1)
  • Missing required model field
  • Missing required messages array
  • Empty messages array

400 Bad Request - Provider Not Configured

{
  "error": {
    "code": "provider_not_configured",
    "message": "No API key configured for provider 'openai'. Add your API key in Settings.",
    "type": "configuration_error"
  }
}

Solution: Add the provider's API key in Settings.

400 Bad Request - Model Not Found

{
  "error": {
    "code": "model_not_found",
    "message": "Model 'gpt-5' not found. Check available models at GET /ai/v1/models",
    "type": "invalid_request_error",
    "param": "model"
  }
}

Solution: Use a valid model ID. Check the Models API for available models.

Rate Limit Errors

429 Too Many Requests

{
  "error": {
    "code": "rate_limit_exceeded",
    "message": "Rate limit exceeded. Please retry after 30 seconds.",
    "type": "rate_limit_error"
  }
}

Headers:

Retry-After: 30
X-RateLimit-Limit: 200
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1706140830

Solutions:

  • Wait for Retry-After seconds
  • Implement exponential backoff
  • Upgrade your plan for higher limits
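
The first two solutions can be combined by reading the headers shown above. A sketch (the helper name is illustrative, and it assumes header names arrive lowercased, as Node's HTTP clients provide them):

```typescript
// Decide how long to wait after a 429, preferring Retry-After and
// falling back to the X-RateLimit-Reset epoch timestamp.
function rateLimitDelayMs(headers: Record<string, string>, fallbackSec = 30): number {
  const retryAfter = Number(headers['retry-after']);
  if (Number.isFinite(retryAfter) && retryAfter > 0) {
    return retryAfter * 1000;
  }
  const reset = Number(headers['x-ratelimit-reset']); // Unix epoch seconds
  if (Number.isFinite(reset) && reset > 0) {
    return Math.max(0, reset * 1000 - Date.now());
  }
  return fallbackSec * 1000;
}
```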

Provider Errors

500 Provider Error

{
  "error": {
    "code": "provider_error",
    "message": "OpenAI API returned an error",
    "type": "provider_error",
    "provider_error": {
      "code": "context_length_exceeded",
      "message": "This model's maximum context length is 128000 tokens..."
    }
  }
}

Common provider errors:

  • Context length exceeded
  • Content policy violation
  • Invalid model parameters
  • Provider rate limits

502 Bad Gateway

{
  "error": {
    "code": "provider_unavailable",
    "message": "Unable to connect to provider. Trying fallback...",
    "type": "provider_error"
  }
}

Note: If fallback is configured, AI Gateway will automatically retry with another provider.

Handling Errors

TypeScript

import OpenAI from 'openai';

const openai = new OpenAI({
  baseURL: 'https://api.transactional.dev/ai/v1',
  apiKey: process.env.GATEWAY_API_KEY,
});

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

try {
  const response = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'Hello!' }],
  });
} catch (error) {
  if (error instanceof OpenAI.APIError) {
    console.error(`Error ${error.status}: ${error.message}`);

    if (error.status === 429) {
      // Rate limited - wait for Retry-After (seconds), then retry
      const retryAfter = Number(error.headers?.['retry-after'] ?? 30);
      await sleep(retryAfter * 1000);
    } else if (error.status === 400) {
      // Bad request - fix the offending parameter
      console.error('Invalid request:', error.error?.param);
    } else if (error.status === 500) {
      // Provider error - check provider_error for details
      console.error('Provider error:', error.error?.provider_error);
    }
  }
}

Python

import os
import time

from openai import OpenAI, APIError, RateLimitError

client = OpenAI(
    base_url="https://api.transactional.dev/ai/v1",
    api_key=os.environ["GATEWAY_API_KEY"]
)

try:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Hello!"}]
    )
except RateLimitError as e:
    # Rate limited - wait for Retry-After (seconds), then retry
    retry_after = int(e.response.headers.get("retry-after", 30))
    time.sleep(retry_after)
except APIError as e:
    print(f"Error {e.status_code}: {e.message}")

Exponential Backoff

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function callWithBackoff<T>(fn: () => Promise<T>, maxRetries = 5): Promise<T> {
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await fn();
    } catch (error: any) {
      // Retry on rate limits (429) and server/provider errors (5xx)
      if (error.status === 429 || error.status >= 500) {
        const delay = Math.min(1000 * Math.pow(2, i), 30000); // cap at 30s
        console.log(`Retrying in ${delay}ms...`);
        await sleep(delay);
        continue;
      }
      throw error; // non-retryable: fail fast
    }
  }
  throw new Error('Max retries exceeded');
}

Error Codes Reference

| Code | Status | Description |
|---|---|---|
| unauthorized | 401 | Invalid or missing API key |
| forbidden | 403 | Insufficient permissions |
| invalid_request | 400 | Invalid request parameters |
| model_not_found | 400 | Model doesn't exist |
| provider_not_configured | 400 | Provider API key not added |
| rate_limit_exceeded | 429 | Too many requests |
| provider_error | 500 | Upstream provider error |
| provider_unavailable | 502 | Cannot reach provider |
| service_unavailable | 503 | Gateway temporarily down |
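
When wiring these codes into retry logic, a reasonable rule of thumb (an assumption of this sketch, not stated Gateway policy) is that only the rate-limit and upstream-availability codes indicate transient conditions:

```typescript
// Codes from the table above that typically indicate a transient condition.
const RETRYABLE_CODES = new Set([
  'rate_limit_exceeded',
  'provider_error',
  'provider_unavailable',
  'service_unavailable',
]);

function isRetryable(code: string): boolean {
  return RETRYABLE_CODES.has(code);
}
```

The 4xx codes are deliberately excluded: retrying an invalid request, a bad key, or a missing model will fail the same way every time.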

Next Steps