# Migrate from LangChain
Use AI Gateway as your LLM backend for LangChain applications.
## Overview
LangChain applications can use AI Gateway for all LLM calls. This gives you centralized cost tracking, caching, rate limiting, and easy provider switching while keeping your LangChain code intact.
## Quick Migration
### Before (Direct Provider)

```typescript
import { ChatOpenAI } from '@langchain/openai';

const model = new ChatOpenAI({
  modelName: 'gpt-4o',
  openAIApiKey: process.env.OPENAI_API_KEY,
});

const response = await model.invoke('Hello!');
```

### After (AI Gateway)
```diff
  import { ChatOpenAI } from '@langchain/openai';

  const model = new ChatOpenAI({
    modelName: 'gpt-4o',
-   openAIApiKey: process.env.OPENAI_API_KEY,
+   openAIApiKey: process.env.GATEWAY_API_KEY,
+   configuration: {
+     baseURL: 'https://api.transactional.dev/ai/v1',
+   },
  });

  // All existing code works unchanged!
  const response = await model.invoke('Hello!');
```

## Step-by-Step Migration
### 1. Get Your Gateway API Key
- Go to AI Gateway Dashboard
- Click API Keys
- Create a new key (starts with `gw_sk_`)
### 2. Add Your Provider Keys
- Go to Provider Keys in the dashboard
- Add your OpenAI, Anthropic, or other provider keys
- AI Gateway will use these to call providers on your behalf
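After these two steps, keys live in two places: provider keys stay in the dashboard, and only the Gateway key goes into your app's environment. For local development that typically means one line in a `.env` file (placeholder value shown; never commit real keys):

```shell
# .env — only the Gateway key belongs in your app;
# provider keys are configured in the dashboard, not here.
GATEWAY_API_KEY=gw_sk_xxxxxxxxxxxxxxxx
```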
### 3. Update Your LangChain Models
Configure LangChain models to use AI Gateway:
```typescript
import { ChatOpenAI } from '@langchain/openai';

const model = new ChatOpenAI({
  modelName: 'gpt-4o',
  openAIApiKey: process.env.GATEWAY_API_KEY,
  configuration: {
    baseURL: 'https://api.transactional.dev/ai/v1',
  },
});
```

## Migration Examples
### ChatOpenAI
```typescript
import { ChatOpenAI } from '@langchain/openai';

const model = new ChatOpenAI({
  modelName: 'gpt-4o',
  openAIApiKey: process.env.GATEWAY_API_KEY,
  configuration: {
    baseURL: 'https://api.transactional.dev/ai/v1',
  },
  temperature: 0.7,
  maxTokens: 1024,
});

const response = await model.invoke([
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'Hello!' },
]);
```

### Using Anthropic Models
Use Anthropic models through the OpenAI-compatible interface:
```typescript
import { ChatOpenAI } from '@langchain/openai';

// Use ChatOpenAI with Anthropic model names
const claude = new ChatOpenAI({
  modelName: 'claude-3-5-sonnet', // Anthropic model via Gateway
  openAIApiKey: process.env.GATEWAY_API_KEY,
  configuration: {
    baseURL: 'https://api.transactional.dev/ai/v1',
  },
});

const response = await claude.invoke('Explain quantum computing');
```

### With Chains
```typescript
import { ChatOpenAI } from '@langchain/openai';
import { LLMChain } from 'langchain/chains';
import { PromptTemplate } from '@langchain/core/prompts';

const model = new ChatOpenAI({
  modelName: 'gpt-4o',
  openAIApiKey: process.env.GATEWAY_API_KEY,
  configuration: {
    baseURL: 'https://api.transactional.dev/ai/v1',
  },
});

const chain = new LLMChain({
  llm: model,
  prompt: PromptTemplate.fromTemplate('Summarize: {text}'),
});

const result = await chain.invoke({ text: 'Long article here...' });
```

### With Agents
```typescript
import { ChatOpenAI } from '@langchain/openai';
import { AgentExecutor, createOpenAIFunctionsAgent } from 'langchain/agents';

const model = new ChatOpenAI({
  modelName: 'gpt-4o',
  openAIApiKey: process.env.GATEWAY_API_KEY,
  configuration: {
    baseURL: 'https://api.transactional.dev/ai/v1',
  },
});

// `tools` and `prompt` are defined elsewhere in your application
const agent = await createOpenAIFunctionsAgent({
  llm: model,
  tools,
  prompt,
});

const executor = AgentExecutor.fromAgentAndTools({
  agent,
  tools,
});

const result = await executor.invoke({ input: 'What is the weather?' });
```

### With RAG
```typescript
import { ChatOpenAI, OpenAIEmbeddings } from '@langchain/openai';
import { RetrievalQAChain } from 'langchain/chains';

const model = new ChatOpenAI({
  modelName: 'gpt-4o',
  openAIApiKey: process.env.GATEWAY_API_KEY,
  configuration: {
    baseURL: 'https://api.transactional.dev/ai/v1',
  },
});

// Embeddings also work through Gateway
const embeddings = new OpenAIEmbeddings({
  openAIApiKey: process.env.GATEWAY_API_KEY,
  configuration: {
    baseURL: 'https://api.transactional.dev/ai/v1',
  },
});

// `retriever` is built from your vector store elsewhere
const chain = RetrievalQAChain.fromLLM(model, retriever);
const result = await chain.invoke({ query: 'What is the refund policy?' });
```

### Streaming
Streaming works normally:
```typescript
import { ChatOpenAI } from '@langchain/openai';

const model = new ChatOpenAI({
  modelName: 'gpt-4o',
  openAIApiKey: process.env.GATEWAY_API_KEY,
  configuration: {
    baseURL: 'https://api.transactional.dev/ai/v1',
  },
  streaming: true,
});

const stream = await model.stream('Tell me a story');
for await (const chunk of stream) {
  process.stdout.write(chunk.content);
}
```

## Multi-Provider Setup
Create models for different providers:
```typescript
import { ChatOpenAI } from '@langchain/openai';

const gatewayConfig = {
  openAIApiKey: process.env.GATEWAY_API_KEY,
  configuration: {
    baseURL: 'https://api.transactional.dev/ai/v1',
  },
};

// OpenAI GPT-4
const gpt4 = new ChatOpenAI({
  ...gatewayConfig,
  modelName: 'gpt-4o',
});

// Anthropic Claude
const claude = new ChatOpenAI({
  ...gatewayConfig,
  modelName: 'claude-3-5-sonnet',
});

// Google Gemini
const gemini = new ChatOpenAI({
  ...gatewayConfig,
  modelName: 'gemini-1.5-pro',
});

// Use whichever model you need
const response = await claude.invoke('Hello!');
```

## Creating a Reusable Factory
```typescript
// lib/langchain.ts
import { ChatOpenAI } from '@langchain/openai';

export function createGatewayModel(modelName: string, options?: {
  temperature?: number;
  maxTokens?: number;
  streaming?: boolean;
}) {
  return new ChatOpenAI({
    modelName,
    openAIApiKey: process.env.GATEWAY_API_KEY,
    configuration: {
      baseURL: 'https://api.transactional.dev/ai/v1',
    },
    temperature: options?.temperature ?? 0.7,
    maxTokens: options?.maxTokens,
    streaming: options?.streaming ?? false,
  });
}
```

```typescript
// Usage
import { createGatewayModel } from '@/lib/langchain';

const model = createGatewayModel('gpt-4o', {
  temperature: 0.5,
  maxTokens: 1024,
});
```

## What You Get
After migration, you automatically get:
| Feature | Description |
|---|---|
| Semantic Caching | Automatic response caching |
| Cost Tracking | Real-time cost monitoring |
| Rate Limiting | Configurable request limits |
| Fallback | Automatic failover to backup providers |
| Multi-Provider | Switch providers by changing model name |
| Analytics | Usage dashboards |
| Request Logs | Full request/response logging |
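Rate limiting and fallback are enforced server-side, but when limits are exhausted your client still sees an error (conventionally HTTP 429). A minimal retry sketch is below; the helper name and the `status` field on the error are assumptions for illustration, not part of any SDK:

```typescript
// Hypothetical helper: retry a call when the Gateway signals rate
// limiting (HTTP 429). Any other error is rethrown immediately.
async function invokeWithRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      const status = (err as { status?: number }).status;
      if (status !== 429) throw err; // only retry rate-limit errors
      // Exponential backoff: 500 ms, 1000 ms, 2000 ms, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}

// Usage with any model from the examples above:
// const response = await invokeWithRetry(() => model.invoke('Hello!'));
```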
## Combining with Observability
Use both AI Gateway and Observability for complete visibility:
```typescript
import { ChatOpenAI } from '@langchain/openai';
import { initObservability } from '@transactional/observability';
import { TransactionalCallbackHandler } from '@transactional/observability/langchain';

// Initialize observability
initObservability({
  dsn: process.env.TRANSACTIONAL_OBSERVABILITY_DSN!,
});

// Create model with AI Gateway
const model = new ChatOpenAI({
  modelName: 'gpt-4o',
  openAIApiKey: process.env.GATEWAY_API_KEY,
  configuration: {
    baseURL: 'https://api.transactional.dev/ai/v1',
  },
});

// Create observability handler
const handler = new TransactionalCallbackHandler({
  userId: 'user-123',
  sessionId: 'session-456',
});

// Use both
const response = await model.invoke('Hello!', {
  callbacks: [handler],
});

// You get:
// - AI Gateway: caching, cost tracking, rate limiting
// - Observability: tracing, analytics, debugging
```

## Rollback
If you need to roll back, remove the Gateway configuration:
```diff
  const model = new ChatOpenAI({
    modelName: 'gpt-4o',
-   openAIApiKey: process.env.GATEWAY_API_KEY,
-   configuration: {
-     baseURL: 'https://api.transactional.dev/ai/v1',
-   },
+   openAIApiKey: process.env.OPENAI_API_KEY,
  });
```

## Troubleshooting
### Model Not Found
- Use Gateway model names (e.g., `claude-3-5-sonnet`, not `claude-3-5-sonnet-20241022`)
- Check available models in the dashboard
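If your existing code uses dated provider-specific model IDs, a small alias table keeps the rename in one place. The mapping below is purely illustrative; check the dashboard for the real list of Gateway model names:

```typescript
// Illustrative mapping from dated provider IDs to Gateway model names.
// These entries are examples, not an authoritative list.
const GATEWAY_MODEL_ALIASES: Record<string, string> = {
  'claude-3-5-sonnet-20241022': 'claude-3-5-sonnet',
  'gpt-4o-2024-08-06': 'gpt-4o',
};

// Resolve a model name, passing through anything not in the table.
function toGatewayModel(name: string): string {
  return GATEWAY_MODEL_ALIASES[name] ?? name;
}
```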
### Authentication Error
- Verify your Gateway API key is correct (starts with `gw_sk_`)
- Ensure provider keys are configured in the dashboard
### Configuration Not Applied
Make sure you're passing the configuration object:
```typescript
const model = new ChatOpenAI({
  modelName: 'gpt-4o',
  openAIApiKey: process.env.GATEWAY_API_KEY,
  configuration: { // This is required!
    baseURL: 'https://api.transactional.dev/ai/v1',
  },
});
```

## Next Steps
- AI Gateway Overview - Full feature documentation
- Caching - Configure semantic caching
- Fallback - Set up provider fallbacks
- Observability Integration - Add tracing