Transactional

Quickstart

Get started with AI Gateway in 5 minutes.

Prerequisites

Before you begin, you'll need:

  • A Transactional account (sign up here)
  • An API key from at least one provider (OpenAI, Anthropic, etc.)

Step 1: Add a Provider Key

First, add your LLM provider's API key to AI Gateway:

  1. Navigate to AI Gateway in your dashboard
  2. Go to the Settings tab
  3. Under "Provider Keys", click Add Key
  4. Select your provider (e.g., OpenAI)
  5. Paste your API key and click Save

Tip: You can add multiple providers for fallback support.
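The fallback behavior can be pictured like this. This is a conceptual sketch only, not the gateway's internal routing logic, and the `callWithFallback` helper name is ours:

```javascript
// Conceptual sketch only -- not the gateway's internals. With multiple
// provider keys configured, a failed call can fall through to the next
// provider in order.
async function callWithFallback(providers, request) {
  let lastError;
  for (const call of providers) {
    try {
      return await call(request); // first provider to succeed wins
    } catch (err) {
      lastError = err; // remember the failure, try the next provider
    }
  }
  throw lastError; // every provider failed
}
```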

Step 2: Create a Gateway API Key

Generate an API key to authenticate your requests:

  1. In the AI Gateway dashboard, go to the Settings tab
  2. Under "Gateway API Keys", click Create API Key
  3. Give your key a name (e.g., "Production")
  4. Copy the generated key (starts with gw_sk_)

Important: Store this key securely. You won't be able to see it again.
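Because the key is shown only once, it's worth failing fast at startup if it's missing or malformed. A minimal sketch (the `requireGatewayKey` helper is ours, not part of any SDK):

```javascript
// Fail fast at startup if the gateway key is absent or malformed.
// Helper name is illustrative, not part of any SDK.
function requireGatewayKey(env = process.env) {
  const key = env.GATEWAY_API_KEY;
  if (!key) {
    throw new Error('GATEWAY_API_KEY is not set');
  }
  if (!key.startsWith('gw_sk_')) {
    throw new Error('GATEWAY_API_KEY should start with gw_sk_');
  }
  return key;
}
```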

Step 3: Make Your First Request

Install the OpenAI SDK:

npm install openai

Configure it to use AI Gateway:

import OpenAI from 'openai';
 
const openai = new OpenAI({
  baseURL: 'https://api.transactional.dev/ai/v1',
  apiKey: process.env.GATEWAY_API_KEY, // gw_sk_...
});
 
const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [
    { role: 'user', content: 'Hello, how are you?' }
  ],
});
 
console.log(response.choices[0].message.content);

Using curl

curl -X POST https://api.transactional.dev/ai/v1/chat/completions \
  -H "Authorization: Bearer gw_sk_your_key_here" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "user", "content": "Hello!"}
    ]
  }'

Using Python

from openai import OpenAI
 
client = OpenAI(
    base_url="https://api.transactional.dev/ai/v1",
    api_key="gw_sk_your_key_here"
)
 
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)
 
print(response.choices[0].message.content)

Step 4: View Your Request

After making a request:

  1. Go to the Requests tab in AI Gateway
  2. You should see your request with:
    • Model used
    • Token count
    • Latency
    • Cost
    • Cache status

Step 5: Enable Caching (Optional)

Enable response caching to reduce costs:

  1. Go to Settings > Cache Settings
  2. Toggle "Enable Caching" on
  3. Set a TTL (e.g., 3600 seconds = 1 hour)
  4. Click Save

Now, identical requests made within the TTL will return the cached response instead of hitting the provider again.
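One way to picture why only identical requests hit the cache: the cached entry is keyed on the request itself. The sketch below is illustrative, not the gateway's actual implementation (real gateways typically hash the serialized body):

```javascript
// Illustrative sketch, not the gateway's implementation: a cache entry is
// keyed on the serialized request, so only byte-identical requests match.
const cacheKey = (body) => JSON.stringify({ model: body.model, messages: body.messages });

const a = cacheKey({ model: 'gpt-4o', messages: [{ role: 'user', content: 'Hello!' }] });
const b = cacheKey({ model: 'gpt-4o', messages: [{ role: 'user', content: 'Hello!' }] });
const c = cacheKey({ model: 'gpt-4o', messages: [{ role: 'user', content: 'Hi!' }] });
// a === b: cache hit; a !== c: different prompt, cache miss
```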

Using Different Providers

AI Gateway translates requests to each provider's format automatically:

OpenAI Models

const response = await openai.chat.completions.create({
  model: 'gpt-4o',  // or gpt-4-turbo, gpt-3.5-turbo, o1, o1-mini
  messages: [{ role: 'user', content: 'Hello!' }],
});

Anthropic Models

const response = await openai.chat.completions.create({
  model: 'claude-3-5-sonnet',  // or claude-3-opus, claude-3-haiku
  messages: [{ role: 'user', content: 'Hello!' }],
});

Note: You must have the corresponding provider key added in Settings.
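Conceptually, the gateway picks a provider based on the model name. The exact routing rules are internal to AI Gateway, so the sketch below is an assumption for illustration only:

```javascript
// Assumed routing by model-name prefix -- illustrative only, not the
// gateway's actual rules.
function providerFor(model) {
  if (model.startsWith('gpt-') || model.startsWith('o1')) return 'openai';
  if (model.startsWith('claude-')) return 'anthropic';
  throw new Error(`No provider key configured for model "${model}"`);
}
```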

Environment Variables

We recommend storing your gateway key in an environment variable:

# .env
GATEWAY_API_KEY=gw_sk_your_key_here

Then reference it in your code:

const openai = new OpenAI({
  baseURL: 'https://api.transactional.dev/ai/v1',
  apiKey: process.env.GATEWAY_API_KEY,
});

What's Next?

Now that you're up and running, explore the rest of the AI Gateway docs.

Troubleshooting

401 Unauthorized

  • Check that your gateway API key is correct
  • Ensure the key starts with gw_sk_
  • Verify the key hasn't been revoked

400 Bad Request - Provider not configured

  • Add the provider's API key in Settings
  • Check that you've selected the correct provider

429 Too Many Requests

  • You've hit rate limits
  • Check your plan's rate limits in Settings
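If you keep hitting 429s, a client-side retry with exponential backoff usually helps. A sketch (the `withBackoff` helper is ours, and the delay values are examples, not documented limits):

```javascript
// Retry a request-returning function on 429, doubling the delay each attempt.
// Helper and defaults are illustrative, not part of any SDK.
async function withBackoff(fn, { retries = 3, baseMs = 500 } = {}) {
  for (let attempt = 0; ; attempt++) {
    const res = await fn();
    if (res.status !== 429 || attempt >= retries) return res;
    await new Promise((resolve) => setTimeout(resolve, baseMs * 2 ** attempt));
  }
}
```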

Need help?