# LangChain Integration
Automatic tracing for LangChain applications.
## Overview
The Observability SDK provides a LangChain callback handler that automatically traces all LLM calls, chain executions, and agent steps.
## Installation

```bash
npm install @transactional/observability @langchain/core @langchain/openai
```

## Setup
Initialize the SDK and create a callback handler:
```typescript
import { initObservability } from '@transactional/observability';
import { TransactionalCallbackHandler } from '@transactional/observability/langchain';

// Initialize once at startup
initObservability({
  dsn: process.env.TRANSACTIONAL_OBSERVABILITY_DSN!,
});

// Create a handler for each conversation/request
const handler = new TransactionalCallbackHandler({
  sessionId: 'conversation-123', // Optional
  userId: 'user-456',            // Optional
  metadata: {                    // Optional
    environment: 'production',
  },
});
```

## Basic Usage
### With Chat Models
```typescript
import { ChatOpenAI } from '@langchain/openai';

const model = new ChatOpenAI({ modelName: 'gpt-4o' });

const response = await model.invoke('Explain quantum computing', {
  callbacks: [handler],
});
```

### With Chains
```typescript
import { LLMChain } from 'langchain/chains';
import { PromptTemplate } from '@langchain/core/prompts';

const chain = new LLMChain({
  llm: model,
  prompt: PromptTemplate.fromTemplate('Summarize: {text}'),
  callbacks: [handler],
});

const result = await chain.invoke({ text: 'Long article here...' });
```

### With Agents
```typescript
import { AgentExecutor, createOpenAIFunctionsAgent } from 'langchain/agents';

// `agent` and `tools` are assumed to be created elsewhere
// (e.g. via createOpenAIFunctionsAgent)
const executor = AgentExecutor.fromAgentAndTools({
  agent,
  tools,
  callbacks: [handler],
});

const result = await executor.invoke({ input: 'What is the weather?' });
```

## Handler Options
```typescript
const handler = new TransactionalCallbackHandler({
  // Group traces into a session (e.g., a conversation)
  sessionId: 'conversation-123',

  // Track which user made the request
  userId: 'user-456',

  // Add custom metadata to all traces
  metadata: {
    environment: 'production',
    version: '1.0.0',
  },
});
```

## What Gets Traced
The handler automatically captures:
| Event | Trace Type | Details Captured |
|---|---|---|
| Chain start | Span | Chain name, inputs |
| LLM start | Generation | Model name, prompts |
| LLM end | - | Outputs, token counts |
| Chain end | - | Outputs |
| Tool start | Span | Tool name, inputs |
| Tool end | - | Outputs |
| Error | - | Error message, stack trace |
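The events above nest: a chain's span contains the generations and tool spans that ran inside it. As a rough illustration of how paired start/end callbacks can be assembled into that nested structure, here is a minimal, self-contained sketch; the names (`TraceBuilder`, `TraceNode`, `open`, `close`) are hypothetical and are not part of the SDK's public API.

```typescript
// Illustrative sketch only: a simplified model of how start/end callback
// events could build a nested trace tree. Not the SDK's actual internals.

type TraceNode = {
  kind: 'span' | 'generation';
  name: string;
  children: TraceNode[];
};

class TraceBuilder {
  root: TraceNode = { kind: 'span', name: 'trace', children: [] };
  private stack: TraceNode[] = [this.root];

  // Chain/tool start events open a span; LLM start opens a generation
  open(kind: TraceNode['kind'], name: string): void {
    const node: TraceNode = { kind, name, children: [] };
    this.stack[this.stack.length - 1].children.push(node);
    this.stack.push(node);
  }

  // The matching end event closes the innermost open node
  close(): void {
    if (this.stack.length > 1) this.stack.pop();
  }
}

// A chain that calls a tool, then an LLM:
const t = new TraceBuilder();
t.open('span', 'qa-chain');     // chain start
t.open('span', 'search-tool');  // tool start
t.close();                      // tool end
t.open('generation', 'gpt-4o'); // LLM start
t.close();                      // LLM end
t.close();                      // chain end
// t.root now holds: qa-chain -> [search-tool (span), gpt-4o (generation)]
```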
## Trace Structure
For a typical RAG chain, the trace structure looks like:
```
Trace: qa-chain
├── Span: retrieval
│   └── Generation: embedding (text-embedding-3-small)
├── Span: format-docs
└── Generation: llm-response (gpt-4o)
```
## Example: RAG Application
```typescript
import { initObservability } from '@transactional/observability';
import { TransactionalCallbackHandler } from '@transactional/observability/langchain';
import { ChatOpenAI } from '@langchain/openai';
import { RetrievalQAChain } from 'langchain/chains';
import { MemoryVectorStore } from 'langchain/vectorstores/memory';

initObservability({ dsn: process.env.TRANSACTIONAL_OBSERVABILITY_DSN! });

async function answerQuestion(userId: string, question: string) {
  const handler = new TransactionalCallbackHandler({
    sessionId: `qa-${userId}`,
    userId,
    metadata: { type: 'qa' },
  });

  const model = new ChatOpenAI({
    modelName: 'gpt-4o',
    callbacks: [handler],
  });

  // `retriever` is assumed to be created elsewhere,
  // e.g. from a MemoryVectorStore via vectorStore.asRetriever()
  const chain = RetrievalQAChain.fromLLM(model, retriever, {
    callbacks: [handler],
  });

  const result = await chain.invoke({ query: question });
  return result.text;
}
```

## Example: Streaming
```typescript
import { ChatOpenAI } from '@langchain/openai';

const model = new ChatOpenAI({
  modelName: 'gpt-4o',
  streaming: true,
});

const stream = await model.stream('Tell me a story', {
  callbacks: [handler],
});

for await (const chunk of stream) {
  process.stdout.write(chunk.content);
}
// The trace is completed automatically with full token counts
```

## Multiple Handlers
You can use multiple callback handlers:
```typescript
import { ConsoleCallbackHandler } from '@langchain/core/tracers/console';

const response = await model.invoke('Hello', {
  callbacks: [
    handler,
    new ConsoleCallbackHandler(), // Also log to console
  ],
});
```

## Troubleshooting
### Traces not appearing

- Ensure `initObservability()` is called before creating handlers
- Verify the handler is passed in the `callbacks` array
- Check that your DSN is correct
### Missing token counts
Some models don't return token counts. The SDK will estimate tokens based on the text length when actual counts aren't available.
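The SDK's exact estimation formula isn't documented here, but a common rough heuristic for English text is about four characters per token. A minimal sketch of that approach (the function name `estimateTokens` is hypothetical, not the SDK's implementation):

```typescript
// Rough, illustrative token estimate: ~4 characters per token for English
// text. A common heuristic, not the SDK's actual formula.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

const estimate = estimateTokens('Explain quantum computing'); // 25 chars -> 7
```

Estimates of this kind are only approximate; when the model does report usage, the reported counts should always be preferred.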
### Nested chains not showing
Make sure you pass the callback handler to all chains and models in your pipeline, not just the top-level chain.