Migrate from LangChain

Add observability to your existing LangChain application with callback handlers.

Overview

Already using LangChain? Add full observability with a single callback handler. All your chains, agents, and LLM calls are automatically traced.

Quick Migration

Before (Direct LangChain)

import { ChatOpenAI } from '@langchain/openai';
 
const model = new ChatOpenAI({ modelName: 'gpt-4o' });
 
const response = await model.invoke('Explain quantum computing');

After (With Observability)

import { ChatOpenAI } from '@langchain/openai';
+ import { initObservability } from '@transactional/observability';
+ import { TransactionalCallbackHandler } from '@transactional/observability/langchain';
 
+ initObservability({ dsn: process.env.TRANSACTIONAL_OBSERVABILITY_DSN! });
 
const model = new ChatOpenAI({ modelName: 'gpt-4o' });
 
+ const handler = new TransactionalCallbackHandler();
 
- const response = await model.invoke('Explain quantum computing');
+ const response = await model.invoke('Explain quantum computing', {
+   callbacks: [handler],
+ });

Step-by-Step Migration

1. Install the SDK

npm install @transactional/observability

2. Get Your DSN

  1. Go to Observability Dashboard
  2. Create or select a project
  3. Copy the DSN from project settings
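
The DSN is typically stored in an environment variable, for example in a .env file. The value below is a placeholder; use the actual DSN copied from your project settings:

```shell
# .env — placeholder value; substitute the DSN from your project settings
TRANSACTIONAL_OBSERVABILITY_DSN="https://public-key@observability.example.com/project-id"
```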

3. Initialize Observability

Add initialization at your application entry point:

// app.ts or index.ts
import { initObservability } from '@transactional/observability';
 
initObservability({
  dsn: process.env.TRANSACTIONAL_OBSERVABILITY_DSN!,
});

4. Create a Callback Handler

import { TransactionalCallbackHandler } from '@transactional/observability/langchain';
 
// Create a handler for each request/conversation
const handler = new TransactionalCallbackHandler({
  sessionId: 'conversation-123',  // Optional
  userId: 'user-456',             // Optional
});

5. Pass to LangChain Calls

// Chat models
const response = await model.invoke('Hello', {
  callbacks: [handler],
});
 
// Chains
const result = await chain.invoke({ input: 'Hello' }, {
  callbacks: [handler],
});
 
// Agents
const agentResult = await executor.invoke({ input: 'Hello' }, {
  callbacks: [handler],
});

Migration Examples

Chat Models

import { ChatOpenAI } from '@langchain/openai';
import { TransactionalCallbackHandler } from '@transactional/observability/langchain';
 
const model = new ChatOpenAI({ modelName: 'gpt-4o' });
const handler = new TransactionalCallbackHandler({ userId: 'user-123' });
 
const response = await model.invoke(
  [{ role: 'user', content: 'Hello!' }],
  { callbacks: [handler] }
);

Chains

import { LLMChain } from 'langchain/chains';
import { PromptTemplate } from '@langchain/core/prompts';
import { TransactionalCallbackHandler } from '@transactional/observability/langchain';
 
const chain = new LLMChain({
  llm: model,
  prompt: PromptTemplate.fromTemplate('Summarize: {text}'),
});
 
const handler = new TransactionalCallbackHandler({
  sessionId: 'summarization-session',
});
 
const result = await chain.invoke(
  { text: 'Long article here...' },
  { callbacks: [handler] }
);

Retrieval QA

import { RetrievalQAChain } from 'langchain/chains';
import { TransactionalCallbackHandler } from '@transactional/observability/langchain';
 
const handler = new TransactionalCallbackHandler({
  userId: user.id,
  sessionId: `qa-${conversationId}`,
  metadata: { type: 'rag' },
});
 
// Assumes `model` and `retriever` are defined elsewhere
const qaChain = RetrievalQAChain.fromLLM(model, retriever);
 
// Passing callbacks at invoke time traces both retrieval and the LLM call
const result = await qaChain.invoke(
  { query: 'What is the refund policy?' },
  { callbacks: [handler] }
);

Agents

import { AgentExecutor, createOpenAIFunctionsAgent } from 'langchain/agents';
import { TransactionalCallbackHandler } from '@transactional/observability/langchain';
 
const handler = new TransactionalCallbackHandler({
  userId: user.id,
  metadata: { agentType: 'functions' },
});
 
// `executor` is an AgentExecutor built from createOpenAIFunctionsAgent
const result = await executor.invoke(
  { input: 'What is the weather in Paris?' },
  { callbacks: [handler] }
);
 
// Trace captures:
// - Agent planning steps
// - Tool calls and results
// - Final response

LCEL (LangChain Expression Language)

import { ChatOpenAI } from '@langchain/openai';
import { StringOutputParser } from '@langchain/core/output_parsers';
import { TransactionalCallbackHandler } from '@transactional/observability/langchain';
 
const model = new ChatOpenAI({ modelName: 'gpt-4o' });
const chain = model.pipe(new StringOutputParser());
 
const handler = new TransactionalCallbackHandler();
 
const result = await chain.invoke('Hello', {
  callbacks: [handler],
});

Handler Configuration

Per-Request Options

const handler = new TransactionalCallbackHandler({
  // Group related traces
  sessionId: 'conversation-123',
 
  // Track which user
  userId: 'user-456',
 
  // Custom metadata
  metadata: {
    environment: 'production',
    feature: 'chat',
    version: '1.0.0',
  },
});

Reusing Handlers

Handlers are lightweight, so don't share one across unrelated requests. Create a new handler for each independent request or conversation:

// API route - new handler per request
app.post('/chat', async (req, res) => {
  const handler = new TransactionalCallbackHandler({
    userId: req.user?.id,
    sessionId: req.body.conversationId,
  });
 
  const result = await chain.invoke(
    { input: req.body.message },
    { callbacks: [handler] }
  );
 
  res.json(result);
});

What Gets Traced

The callback handler automatically captures:

LangChain Event    Trace Type    Details Captured
LLM start          Generation    Model name, prompts
LLM end            -             Response, tokens, cost
Chain start        Span          Chain name, inputs
Chain end          -             Outputs
Tool start         Span          Tool name, inputs
Tool end           -             Outputs
Retriever start    Span          Query
Retriever end      -             Documents
Agent action       Span          Action, inputs
Error              -             Error message, stack

Trace Structure Example

For a RAG chain, the trace looks like:

Trace: qa-chain
├── Span: retrieval
│   └── Generation: embedding (text-embedding-3-small)
├── Span: format-documents
└── Generation: llm-response (gpt-4o)
    ├── Input: 1,234 tokens
    ├── Output: 456 tokens
    └── Cost: $0.0234

Framework Examples

Next.js API Route

// app/api/chat/route.ts
import { ChatOpenAI } from '@langchain/openai';
import { TransactionalCallbackHandler } from '@transactional/observability/langchain';
 
export async function POST(request: Request) {
  const { message, userId, sessionId } = await request.json();
 
  const model = new ChatOpenAI({ modelName: 'gpt-4o' });
  const handler = new TransactionalCallbackHandler({
    userId,
    sessionId,
  });
 
  const response = await model.invoke(message, {
    callbacks: [handler],
  });
 
  return Response.json({ message: response.content });
}

Express

import { ChatOpenAI } from '@langchain/openai';
import { TransactionalCallbackHandler } from '@transactional/observability/langchain';
 
app.post('/chat', async (req, res) => {
  const handler = new TransactionalCallbackHandler({
    userId: req.user?.id,
    sessionId: req.body.sessionId,
  });
 
  const model = new ChatOpenAI({ modelName: 'gpt-4o' });
 
  const response = await model.invoke(req.body.message, {
    callbacks: [handler],
  });
 
  res.json({ message: response.content });
});

Error Tracking

Capture LangChain errors:

import { getObservability } from '@transactional/observability';
 
const obs = getObservability();
 
try {
  const result = await chain.invoke({ input }, { callbacks: [handler] });
} catch (error) {
  obs.captureException(error as Error, {
    tags: { framework: 'langchain' },
    extra: { input },
  });
  throw error;
}

Verifying Migration

  1. Make a LangChain call with the callback handler
  2. Go to Observability Dashboard
  3. Select your project and click Traces
  4. You should see the full trace hierarchy

Common Issues

Traces Not Appearing

  1. Ensure initObservability() is called at startup
  2. Verify the callback handler is passed in callbacks array
  3. Check DSN is correct
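
For the first and third items, it can help to fail fast at startup when the DSN variable is missing, instead of silently dropping traces. A minimal sketch (the `requireDsn` helper is hypothetical, not part of the SDK):

```typescript
// Hypothetical startup guard (not part of the SDK): throw immediately when
// the DSN environment variable is missing, instead of silently dropping traces.
function requireDsn(env: Record<string, string | undefined> = process.env): string {
  const dsn = env.TRANSACTIONAL_OBSERVABILITY_DSN;
  if (!dsn) {
    throw new Error('TRANSACTIONAL_OBSERVABILITY_DSN is not set');
  }
  return dsn;
}

// Usage at your entry point:
// initObservability({ dsn: requireDsn() });
```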

Nested Chains Not Showing

Pass the callback handler to all nested components:

// Pass to the chain, not just the top-level invoke
const chain = new LLMChain({
  llm: model,
  prompt,
  callbacks: [handler],  // Handler at chain level
});
 
// OR pass at invoke time
await chain.invoke({ input }, { callbacks: [handler] });

Missing Token Counts

Some models don't return token counts. The SDK estimates based on text length when actual counts aren't available.
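
The SDK's exact heuristic is internal, but a length-based estimate generally follows the common rule of thumb of roughly four characters per token for English text. A sketch of the idea:

```typescript
// Rough length-based token estimate (illustrative only — the SDK's actual
// heuristic is internal). ~4 characters per token is a common rule of thumb
// for English text; expect real counts to differ.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}
```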

Next Steps