
# Chat Completions

Send chat completion requests through AI Gateway.

## Endpoint

```
POST https://api.transactional.dev/ai/v1/chat/completions
```

## Request

### Headers

| Header | Required | Description |
|---|---|---|
| `Authorization` | Yes | `Bearer gw_sk_your_key` |
| `Content-Type` | Yes | `application/json` |
| `X-Cache-Control` | No | `no-cache` to skip caching |

### Body Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| `model` | string | Yes | Model ID (e.g., `gpt-4o`, `claude-3-5-sonnet`) |
| `messages` | array | Yes | Array of message objects |
| `temperature` | number | No | Sampling temperature (0-2), default: 1 |
| `max_tokens` | number | No | Maximum tokens to generate |
| `top_p` | number | No | Nucleus sampling (0-1) |
| `frequency_penalty` | number | No | Frequency penalty (-2 to 2) |
| `presence_penalty` | number | No | Presence penalty (-2 to 2) |
| `stop` | string/array | No | Stop sequences |
| `stream` | boolean | No | Enable streaming |
| `tools` | array | No | Function/tool definitions |
| `tool_choice` | string/object | No | Tool selection mode |
| `response_format` | object | No | Response format (e.g., JSON mode) |
| `user` | string | No | User ID for tracking |

### Message Object

```typescript
interface Message {
  role: 'system' | 'user' | 'assistant' | 'tool';
  content: string | ContentPart[];
  name?: string;
  tool_calls?: ToolCall[];
  tool_call_id?: string;
}
```
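As a concrete illustration, a conversation that round-trips through a tool can be assembled from these message shapes as plain dictionaries (a sketch; the tool name `get_weather` and call ID `call_abc123` are illustrative values):

```python
# One message per role, matching the Message interface above.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the weather in Paris?"},
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [
            {
                "id": "call_abc123",
                "type": "function",
                "function": {"name": "get_weather", "arguments": '{"location":"Paris"}'},
            }
        ],
    },
    # The tool result is sent back with role "tool" and the matching tool_call_id.
    {"role": "tool", "tool_call_id": "call_abc123", "content": '{"temp_c": 18}'},
]

roles = [m["role"] for m in messages]
print(roles)  # → ['system', 'user', 'assistant', 'tool']
```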

## Examples

### Basic Request

```bash
curl -X POST https://api.transactional.dev/ai/v1/chat/completions \
  -H "Authorization: Bearer gw_sk_your_key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Hello!"}
    ]
  }'
```

### With Parameters

```bash
curl -X POST https://api.transactional.dev/ai/v1/chat/completions \
  -H "Authorization: Bearer gw_sk_your_key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "user", "content": "Write a haiku about coding"}
    ],
    "temperature": 0.7,
    "max_tokens": 100,
    "user": "user-123"
  }'
```

### With Function Calling

```bash
curl -X POST https://api.transactional.dev/ai/v1/chat/completions \
  -H "Authorization: Bearer gw_sk_your_key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "user", "content": "What is the weather in Paris?"}
    ],
    "tools": [
      {
        "type": "function",
        "function": {
          "name": "get_weather",
          "description": "Get current weather for a location",
          "parameters": {
            "type": "object",
            "properties": {
              "location": {"type": "string", "description": "City name"}
            },
            "required": ["location"]
          }
        }
      }
    ],
    "tool_choice": "auto"
  }'
```

### With Streaming

```bash
curl -X POST https://api.transactional.dev/ai/v1/chat/completions \
  -H "Authorization: Bearer gw_sk_your_key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "user", "content": "Tell me a story"}
    ],
    "stream": true
  }'
```

### JSON Mode

```bash
curl -X POST https://api.transactional.dev/ai/v1/chat/completions \
  -H "Authorization: Bearer gw_sk_your_key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "system", "content": "You output valid JSON."},
      {"role": "user", "content": "List 3 fruits with their colors"}
    ],
    "response_format": {"type": "json_object"}
  }'
```
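In JSON mode the assistant's `content` is a JSON string, so it can be parsed directly. A minimal sketch, using an illustrative reply for the request above:

```python
import json

# Sample content such as the JSON-mode request might return (illustrative).
content = '{"fruits": [{"name": "apple", "color": "red"}, {"name": "banana", "color": "yellow"}, {"name": "kiwi", "color": "green"}]}'

data = json.loads(content)
print([f["name"] for f in data["fruits"]])  # → ['apple', 'banana', 'kiwi']
```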

## Response

### Success Response

```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1706140800,
  "model": "gpt-4o",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 20,
    "completion_tokens": 10,
    "total_tokens": 30
  }
}
```

### Streaming Response

With `"stream": true`, the response is delivered as server-sent events. Each chunk is a `data:` line carrying a partial delta, and the stream ends with `data: [DONE]`:

```
data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1706140800,"model":"gpt-4o","choices":[{"index":0,"delta":{"content":"Hello"},"finish_reason":null}]}

data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1706140800,"model":"gpt-4o","choices":[{"index":0,"delta":{"content":"!"},"finish_reason":null}]}

data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1706140800,"model":"gpt-4o","choices":[{"index":0,"delta":{},"finish_reason":"stop"}]}

data: [DONE]
```
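A minimal sketch of accumulating the streamed text from chunks like those above (the `data:` lines are hardcoded here; in practice they arrive over the HTTP connection):

```python
import json

lines = [
    'data: {"choices":[{"index":0,"delta":{"content":"Hello"},"finish_reason":null}]}',
    'data: {"choices":[{"index":0,"delta":{"content":"!"},"finish_reason":null}]}',
    'data: {"choices":[{"index":0,"delta":{},"finish_reason":"stop"}]}',
    "data: [DONE]",
]

text = ""
for line in lines:
    payload = line[len("data: "):]
    if payload == "[DONE]":           # sentinel that terminates the stream
        break
    delta = json.loads(payload)["choices"][0]["delta"]
    text += delta.get("content", "")  # the final chunk has an empty delta

print(text)  # → Hello!
```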

### Function Call Response

```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1706140800,
  "model": "gpt-4o",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": null,
        "tool_calls": [
          {
            "id": "call_abc123",
            "type": "function",
            "function": {
              "name": "get_weather",
              "arguments": "{\"location\":\"Paris\"}"
            }
          }
        ]
      },
      "finish_reason": "tool_calls"
    }
  ],
  "usage": {
    "prompt_tokens": 50,
    "completion_tokens": 20,
    "total_tokens": 70
  }
}
```
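To complete the round-trip, execute the requested function locally and send the result back as a `role: "tool"` message. A sketch, with a stand-in `get_weather` implementation:

```python
import json

# Stand-in for your own tool implementation.
def get_weather(location: str) -> dict:
    return {"location": location, "temp_c": 18}

TOOLS = {"get_weather": get_weather}

# A tool_calls entry from a finish_reason == "tool_calls" response:
tool_call = {
    "id": "call_abc123",
    "type": "function",
    "function": {"name": "get_weather", "arguments": '{"location":"Paris"}'},
}

fn = TOOLS[tool_call["function"]["name"]]
args = json.loads(tool_call["function"]["arguments"])  # arguments arrive as a JSON string
result = fn(**args)

# Append this to `messages` and call the endpoint again for the final answer.
tool_message = {
    "role": "tool",
    "tool_call_id": tool_call["id"],
    "content": json.dumps(result),
}
print(tool_message["content"])
```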

### Response Headers

| Header | Description |
|---|---|
| `X-Request-ID` | Unique request identifier |
| `X-Provider` | Provider that served the request |
| `X-Cache` | `HIT` or `MISS` |
| `X-Cost-Total` | Total cost in USD |
| `X-RateLimit-Remaining` | Remaining requests |
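These headers can be folded into per-request telemetry. A small sketch over a plain header map (the sample values are illustrative):

```python
# Extract the gateway's diagnostic headers from a response's header map.
def summarize_gateway_headers(headers: dict) -> dict:
    return {
        "request_id": headers.get("X-Request-ID"),
        "provider": headers.get("X-Provider"),
        "cache_hit": headers.get("X-Cache") == "HIT",
        "cost_usd": float(headers.get("X-Cost-Total", "0")),
    }

summary = summarize_gateway_headers({
    "X-Request-ID": "req_123",
    "X-Provider": "openai",
    "X-Cache": "MISS",
    "X-Cost-Total": "0.0021",
})
print(summary)
```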

## Finish Reasons

| Reason | Description |
|---|---|
| `stop` | Natural completion or stop sequence |
| `length` | Max tokens reached |
| `tool_calls` | Model wants to call a tool |
| `content_filter` | Content was filtered |
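A sketch of branching on `finish_reason` after a non-streaming completion:

```python
# Truncated output ("length") usually warrants retrying with a higher max_tokens.
def handle_choice(choice: dict) -> str:
    reason = choice["finish_reason"]
    if reason == "stop":
        return choice["message"]["content"]
    if reason == "tool_calls":
        return "run the requested tools, then call the endpoint again"
    if reason == "length":
        return "output truncated: raise max_tokens or continue the generation"
    if reason == "content_filter":
        return "response was filtered: revise the prompt"
    raise ValueError(f"unexpected finish_reason: {reason!r}")

print(handle_choice({"finish_reason": "stop", "message": {"content": "Hi!"}}))  # → Hi!
```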

## SDK Examples

### TypeScript

```typescript
import OpenAI from 'openai';

const openai = new OpenAI({
  baseURL: 'https://api.transactional.dev/ai/v1',
  apiKey: process.env.GATEWAY_API_KEY,
});

const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Hello!' }
  ],
  temperature: 0.7,
});

console.log(response.choices[0].message.content);
```

### Python

```python
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.transactional.dev/ai/v1",
    api_key=os.environ["GATEWAY_API_KEY"]
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ],
    temperature=0.7
)

print(response.choices[0].message.content)
```

## Error Responses

See Error Handling for the complete error reference.

### Common Errors

| Status | Code | Description |
|---|---|---|
| 400 | `invalid_request` | Missing or invalid parameters |
| 401 | `unauthorized` | Invalid API key |
| 429 | `rate_limit_exceeded` | Too many requests |
| 500 | `provider_error` | Upstream provider error |
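A 429 is retryable. A sketch of exponential backoff with jitter around any request callable (`RateLimitError` here is a stand-in for your HTTP client's 429 exception):

```python
import random
import time

class RateLimitError(Exception):
    pass

# Retry `send` on rate limits, doubling the delay each attempt and adding jitter.
def with_backoff(send, max_retries: int = 3, base_delay: float = 0.0):
    for attempt in range(max_retries + 1):
        try:
            return send()
        except RateLimitError:
            if attempt == max_retries:
                raise
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))

# Demo: a callable that fails twice before succeeding (base_delay=0 avoids sleeping).
attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise RateLimitError("429")
    return "ok"

result = with_backoff(flaky)
print(result)  # → ok
```

In production, set `base_delay` to something like 0.5 seconds and honor any `Retry-After` header the gateway returns.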

## Next Steps