OpenAI Integration

Add tracing and observability to your OpenAI API calls.

Overview

The Observability SDK provides automatic tracing for OpenAI API calls. Track every completion, measure latency, monitor costs, and debug issues with full visibility.

Installation

npm install @transactional/observability openai

Setup

Basic Integration

Wrap your OpenAI client with the observability wrapper:

import OpenAI from 'openai';
import { initObservability, wrapOpenAI } from '@transactional/observability';
 
// Initialize observability
initObservability({
  dsn: process.env.TRANSACTIONAL_OBSERVABILITY_DSN!,
});
 
// Create and wrap the OpenAI client
const openai = wrapOpenAI(new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
}));

With Custom Options

const openai = wrapOpenAI(
  new OpenAI({ apiKey: process.env.OPENAI_API_KEY }),
  {
    // Default user for all traces
    userId: 'system',
 
    // Default metadata
    metadata: {
      environment: process.env.NODE_ENV,
      version: process.env.APP_VERSION,
    },
 
    // Enable error tracking
    captureErrors: true,
  }
);

Usage Examples

Chat Completions

All completions are automatically traced:

const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Explain quantum computing' },
  ],
});
 
// Trace automatically captures:
// - Model name
// - Input messages
// - Output response
// - Token usage (prompt, completion, total)
// - Latency
// - Cost

Streaming

Streaming completions are fully traced:

const stream = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Tell me a story' }],
  stream: true,
});
 
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
 
// Trace captures full streamed response and token counts

Function Calling

Tool usage is automatically captured:

const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'What is the weather in Paris?' }],
  tools: [
    {
      type: 'function',
      function: {
        name: 'get_weather',
        description: 'Get current weather',
        parameters: {
          type: 'object',
          properties: {
            location: { type: 'string' },
          },
        },
      },
    },
  ],
});
 
// Trace includes tool calls and their arguments

Embeddings

Embedding calls are traced:

const embedding = await openai.embeddings.create({
  model: 'text-embedding-3-small',
  input: 'Hello world',
});
 
// Trace captures model, input text, and dimensions

Adding Context

Per-Request Context

Add context to individual requests:

const response = await openai.chat.completions.create(
  {
    model: 'gpt-4o',
    messages: [...],
  },
  {
    // Observability context
    observability: {
      name: 'summarize-article',
      userId: 'user-123',
      sessionId: 'session-456',
      metadata: {
        articleId: 'article-789',
        feature: 'summarization',
      },
      tags: ['summarize', 'articles'],
    },
  }
);

Global Context

Set context that applies to all requests:

import { getObservability } from '@transactional/observability';
 
const obs = getObservability();
 
// Set user context
obs.setUser({ id: 'user-123', email: 'user@example.com' });
 
// Set tags
obs.setTags({ environment: 'production', team: 'ai' });

Manual Tracing

For complex workflows, use manual tracing:

import { getObservability } from '@transactional/observability';
 
const obs = getObservability();
 
async function generateWithRetry(prompt: string) {
  const trace = obs.trace({
    name: 'generate-with-retry',
    input: { prompt },
  });
 
  try {
    for (let attempt = 1; attempt <= 3; attempt++) {
      const generation = obs.generation({
        name: `attempt-${attempt}`,
        modelName: 'gpt-4o',
        input: { prompt },
      });
 
      try {
        const response = await openai.chat.completions.create({
          model: 'gpt-4o',
          messages: [{ role: 'user', content: prompt }],
        });
 
        await generation.end({
          output: response.choices[0].message,
          promptTokens: response.usage?.prompt_tokens,
          completionTokens: response.usage?.completion_tokens,
        });
 
        await trace.end({ output: response.choices[0].message });
        return response;
      } catch (error) {
        await generation.error(error as Error);
        if (attempt === 3) throw error;
      }
    }
  } catch (error) {
    await trace.error(error as Error);
    throw error;
  }
}

Error Tracking

Capture OpenAI errors for debugging:

import { getObservability } from '@transactional/observability';
 
const obs = getObservability();
 
try {
  const response = await openai.chat.completions.create({...});
} catch (error) {
  // Capture the error with context (messages and retryCount come from
  // the surrounding request scope)
  obs.captureException(error as Error, {
    tags: {
      provider: 'openai',
      model: 'gpt-4o',
    },
    extra: {
      prompt: messages[0].content,
      attempt: retryCount,
    },
  });
  throw error;
}

What Gets Traced

| API Method | Traced | Details Captured |
| --- | --- | --- |
| chat.completions.create | Yes | Model, messages, response, tokens, cost |
| chat.completions.create (streaming) | Yes | Full response, tokens |
| embeddings.create | Yes | Model, input, dimensions |
| moderations.create | Yes | Input, categories |
| images.generate | Coming Soon | - |
| audio.transcriptions.create | Coming Soon | - |

Viewing Traces

Dashboard

  1. Go to Observability Dashboard
  2. Select your project
  3. Click Traces to see all OpenAI calls

Trace Details

Each trace shows:

  • Model used
  • Input messages
  • Output response
  • Token breakdown (prompt, completion, total)
  • Cost calculation
  • Latency
  • Any errors
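The cost shown in a trace is derived from the token counts and the model's per-token pricing. As a rough sketch of how such a calculation works (the prices below are illustrative placeholders, not the SDK's actual pricing table, and real prices change over time):

```typescript
// Illustrative pricing only; not the SDK's real pricing table.
// Prices are expressed per million tokens.
const PRICES_PER_MILLION_TOKENS: Record<string, { input: number; output: number }> = {
  'gpt-4o': { input: 2.5, output: 10 },
};

function estimateCostUSD(
  model: string,
  promptTokens: number,
  completionTokens: number,
): number {
  const price = PRICES_PER_MILLION_TOKENS[model];
  if (!price) return 0; // unknown model: no price data
  return (promptTokens * price.input + completionTokens * price.output) / 1_000_000;
}

// 1,000 prompt tokens + 500 completion tokens on 'gpt-4o'
console.log(estimateCostUSD('gpt-4o', 1000, 500)); // 0.0075
```

Because cost is a pure function of model and token counts, the dashboard can recompute it per trace without any extra API calls.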

Best Practices

1. Wrap Once, Use Everywhere

// lib/openai.ts
import OpenAI from 'openai';
import { wrapOpenAI } from '@transactional/observability';
 
export const openai = wrapOpenAI(new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
}));
// Use the wrapped client everywhere
import { openai } from '@/lib/openai';

2. Use Sessions for Conversations

const response = await openai.chat.completions.create(
  { model: 'gpt-4o', messages },
  {
    observability: {
      sessionId: `chat-${conversationId}`,
      userId: user.id,
    },
  }
);

3. Add Meaningful Names

const response = await openai.chat.completions.create(
  { model: 'gpt-4o', messages },
  {
    observability: {
      name: 'customer-support-reply',  // Descriptive name
    },
  }
);

4. Track Feature Usage

const response = await openai.chat.completions.create(
  { model: 'gpt-4o', messages },
  {
    observability: {
      tags: ['feature:summarization', 'tier:pro'],
    },
  }
);

Troubleshooting

Traces Not Appearing

  1. Verify initObservability() is called before creating the client
  2. Check that wrapOpenAI() is used
  3. Confirm DSN is correct

Missing Token Counts

Some streaming responses may not include token counts. The SDK estimates tokens when not provided.
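The SDK's actual estimator is internal, but a common fallback heuristic is roughly four characters per token for English text. A minimal sketch, assuming that heuristic:

```typescript
// Rough heuristic only (~4 characters per token for English text);
// this is not the SDK's real estimator, just an approximation of the idea.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

console.log(estimateTokens('Hello world')); // 3
```

To get exact counts on streams, the OpenAI Chat Completions API also accepts `stream_options: { include_usage: true }`, which makes the final stream chunk carry a `usage` object.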

High Latency in Traces

If observability adds noticeable latency:

  1. Check network connectivity to the observability API
  2. Increase batchSize to reduce API calls
  3. Ensure flushInterval is appropriate
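The reason `batchSize` helps: trace events are buffered and sent together, so a larger batch means fewer network round trips, while `flushInterval` bounds how long events wait in the buffer. A minimal sketch of the idea (hypothetical class, not the SDK's internals):

```typescript
// Hypothetical sketch of trace batching; not the SDK's actual implementation.
type TraceEvent = { name: string };

class TraceBatcher {
  private buffer: TraceEvent[] = [];
  private sent: TraceEvent[][] = [];

  constructor(private batchSize: number) {}

  add(event: TraceEvent): void {
    this.buffer.push(event);
    // Send as soon as a full batch accumulates
    if (this.buffer.length >= this.batchSize) this.flush();
  }

  flush(): void {
    if (this.buffer.length === 0) return;
    // In the real SDK this would be one HTTP request to the observability API;
    // a timer tied to flushInterval would also call flush() periodically.
    this.sent.push(this.buffer);
    this.buffer = [];
  }

  get batchesSent(): number {
    return this.sent.length;
  }
}

const batcher = new TraceBatcher(10);
for (let i = 0; i < 25; i++) batcher.add({ name: `trace-${i}` });
batcher.flush(); // final partial batch
console.log(batcher.batchesSent); // 3 batches instead of 25 requests
```

With `batchSize: 10`, 25 events cost 3 requests instead of 25; raising `batchSize` further trades per-call overhead against buffer staleness, which is what `flushInterval` caps.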

Next Steps