POST https://api.transactional.dev/ai/v1/chat/completions
| Header | Required | Description |
|---|---|---|
| Authorization | Yes | Bearer gw_sk_your_key |
| Content-Type | Yes | application/json |
| X-Cache-Control | No | no-cache to skip caching |
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Model ID (e.g., gpt-4o, claude-3-5-sonnet) |
| messages | array | Yes | Array of message objects |
| temperature | number | No | Sampling temperature (0-2), default: 1 |
| max_tokens | number | No | Maximum tokens to generate |
| top_p | number | No | Nucleus sampling (0-1) |
| frequency_penalty | number | No | Frequency penalty (-2 to 2) |
| presence_penalty | number | No | Presence penalty (-2 to 2) |
| stop | string/array | No | Stop sequences |
| stream | boolean | No | Enable streaming |
| tools | array | No | Function/tool definitions |
| tool_choice | string/object | No | Tool selection mode |
| response_format | object | No | Response format (e.g., JSON mode) |
| user | string | No | User ID for tracking |
interface Message {
role: 'system' | 'user' | 'assistant' | 'tool';
content: string | ContentPart[];
name?: string;
tool_calls?: ToolCall[];
tool_call_id?: string;
}
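As a sketch, here is a conversation that conforms to the shape above, including a tool round-trip. The tool_call_id, tool name, and tool output are illustrative values, not gateway requirements:

```typescript
// Illustrative conversation matching the Message shape above.
type ChatMessage = {
  role: 'system' | 'user' | 'assistant' | 'tool';
  content: string | null;
  tool_call_id?: string;
  tool_calls?: {
    id: string;
    type: 'function';
    function: { name: string; arguments: string };
  }[];
};

const messages: ChatMessage[] = [
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'What is the weather in Paris?' },
  // Assistant turn that requested a tool (content is null while calling tools)
  {
    role: 'assistant',
    content: null,
    tool_calls: [{
      id: 'call_abc123',
      type: 'function',
      function: { name: 'get_weather', arguments: '{"location":"Paris"}' },
    }],
  },
  // Tool result, linked back to the call via tool_call_id
  { role: 'tool', content: '{"temp_c":18}', tool_call_id: 'call_abc123' },
];

console.log(messages.length); // 4
```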
curl -X POST https://api.transactional.dev/ai/v1/chat/completions \
-H "Authorization: Bearer gw_sk_your_key" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4o",
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Hello!"}
]
}'
curl -X POST https://api.transactional.dev/ai/v1/chat/completions \
-H "Authorization: Bearer gw_sk_your_key" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4o",
"messages": [
{"role": "user", "content": "Write a haiku about coding"}
],
"temperature": 0.7,
"max_tokens": 100,
"user": "user-123"
}'
curl -X POST https://api.transactional.dev/ai/v1/chat/completions \
-H "Authorization: Bearer gw_sk_your_key" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4o",
"messages": [
{"role": "user", "content": "What is the weather in Paris?"}
],
"tools": [
{
"type": "function",
"function": {
"name": "get_weather",
"description": "Get current weather for a location",
"parameters": {
"type": "object",
"properties": {
"location": {"type": "string", "description": "City name"}
},
"required": ["location"]
}
}
}
],
"tool_choice": "auto"
}'
curl -X POST https://api.transactional.dev/ai/v1/chat/completions \
-H "Authorization: Bearer gw_sk_your_key" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4o",
"messages": [
{"role": "user", "content": "Tell me a story"}
],
"stream": true
}'
curl -X POST https://api.transactional.dev/ai/v1/chat/completions \
-H "Authorization: Bearer gw_sk_your_key" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4o",
"messages": [
{"role": "system", "content": "You output valid JSON."},
{"role": "user", "content": "List 3 fruits with their colors"}
],
"response_format": {"type": "json_object"}
}'
{
"id": "chatcmpl-abc123",
"object": "chat.completion",
"created": 1706140800,
"model": "gpt-4o",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "Hello! How can I help you today?"
},
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 20,
"completion_tokens": 10,
"total_tokens": 30
}
}
Each chunk arrives as a server-sent events data: line:
data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1706140800,"model":"gpt-4o","choices":[{"index":0,"delta":{"content":"Hello"},"finish_reason":null}]}
data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1706140800,"model":"gpt-4o","choices":[{"index":0,"delta":{"content":"!"},"finish_reason":null}]}
data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1706140800,"model":"gpt-4o","choices":[{"index":0,"delta":{},"finish_reason":"stop"}]}
data: [DONE]
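The chunks above can be assembled client-side. A minimal sketch, assuming each data: line carries the OpenAI-style chunk shape shown above and the stream ends with the [DONE] sentinel:

```typescript
// Accumulate assistant text from SSE "data:" lines.
interface Chunk {
  choices: { delta: { content?: string }; finish_reason: string | null }[];
}

function collectStream(lines: string[]): string {
  let text = '';
  for (const line of lines) {
    if (!line.startsWith('data: ')) continue;
    const payload = line.slice('data: '.length);
    if (payload === '[DONE]') break; // end-of-stream sentinel
    const chunk: Chunk = JSON.parse(payload);
    text += chunk.choices[0]?.delta.content ?? '';
  }
  return text;
}

const sample = [
  'data: {"choices":[{"delta":{"content":"Hello"},"finish_reason":null}]}',
  'data: {"choices":[{"delta":{"content":"!"},"finish_reason":null}]}',
  'data: {"choices":[{"delta":{},"finish_reason":"stop"}]}',
  'data: [DONE]',
];
console.log(collectStream(sample)); // "Hello!"
```

In production you would read these lines incrementally from the HTTP response body rather than from an array.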
{
"id": "chatcmpl-abc123",
"object": "chat.completion",
"created": 1706140800,
"model": "gpt-4o",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": null,
"tool_calls": [
{
"id": "call_abc123",
"type": "function",
"function": {
"name": "get_weather",
"arguments": "{\"location\":\"Paris\"}"
}
}
]
},
"finish_reason": "tool_calls"
}
],
"usage": {
"prompt_tokens": 50,
"completion_tokens": 20,
"total_tokens": 70
}
}
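When finish_reason is tool_calls, the client is expected to run the tool and send the result back as a tool-role message. A sketch of that handling, using the response shape above (the tool result value is illustrative, and the actual tool implementation is yours):

```typescript
// Response shaped like the tool-call example above.
const response = {
  choices: [{
    message: {
      role: 'assistant',
      content: null as string | null,
      tool_calls: [{
        id: 'call_abc123',
        type: 'function',
        function: { name: 'get_weather', arguments: '{"location":"Paris"}' },
      }],
    },
    finish_reason: 'tool_calls',
  }],
};

const call = response.choices[0].message.tool_calls[0];
// Arguments arrive as a JSON string and must be parsed.
const args = JSON.parse(call.function.arguments); // { location: "Paris" }

// Run your own tool implementation, then reply with a tool-role message
// linked back via tool_call_id, and call the API again with it appended.
const toolMessage = {
  role: 'tool',
  tool_call_id: call.id,
  content: JSON.stringify({ temp_c: 18 }), // illustrative tool result
};
console.log(args.location, toolMessage.tool_call_id);
```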
| Header | Description |
|---|---|
| X-Request-ID | Unique request identifier |
| X-Provider | Provider that served the request |
| X-Cache | HIT or MISS |
| X-Cost-Total | Total cost in USD |
| X-RateLimit-Remaining | Remaining requests |
| Reason | Description |
|---|---|
| stop | Natural completion or stop sequence |
| length | Max tokens reached |
| tool_calls | Model wants to call a tool |
| content_filter | Content was filtered |
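Clients typically branch on these values. A small sketch mapping each finish_reason from the table to a description (the wording is illustrative):

```typescript
// Branch on finish_reason values from the table above.
function describeFinish(reason: string): string {
  switch (reason) {
    case 'stop': return 'completed normally';
    case 'length': return 'truncated at max_tokens';
    case 'tool_calls': return 'model requested a tool call';
    case 'content_filter': return 'output was filtered';
    default: return `unknown reason: ${reason}`;
  }
}

console.log(describeFinish('length')); // "truncated at max_tokens"
```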
import OpenAI from 'openai';
const openai = new OpenAI({
baseURL: 'https://api.transactional.dev/ai/v1',
apiKey: process.env.GATEWAY_API_KEY,
});
const response = await openai.chat.completions.create({
model: 'gpt-4o',
messages: [
{ role: 'system', content: 'You are a helpful assistant.' },
{ role: 'user', content: 'Hello!' }
],
temperature: 0.7,
});
console.log(response.choices[0].message.content);
import os

from openai import OpenAI
client = OpenAI(
base_url="https://api.transactional.dev/ai/v1",
api_key=os.environ["GATEWAY_API_KEY"]
)
response = client.chat.completions.create(
model="gpt-4o",
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Hello!"}
],
temperature=0.7
)
print(response.choices[0].message.content)
See Error Handling for the complete error reference.
| Status | Code | Description |
|---|---|---|
| 400 | invalid_request | Missing or invalid parameters |
| 401 | unauthorized | Invalid API key |
| 429 | rate_limit_exceeded | Too many requests |
| 500 | provider_error | Upstream provider error |
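For 429 responses, retrying with exponential backoff is the usual approach. A minimal sketch of the delay schedule (the base delay and cap are illustrative choices, not gateway requirements; honor a Retry-After header if one is present):

```typescript
// Exponential backoff with a cap, suitable for retrying rate-limited requests.
function backoffMs(attempt: number, baseMs = 500, capMs = 8000): number {
  return Math.min(capMs, baseMs * 2 ** attempt);
}

// attempt 0 -> 500 ms, 1 -> 1000 ms, doubling until capped at 8000 ms
console.log([0, 1, 2, 3, 4, 5].map((a) => backoffMs(a)));
// [ 500, 1000, 2000, 4000, 8000, 8000 ]
```

Adding random jitter to each delay helps avoid synchronized retries from many clients.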