Migrate from LangChain

Use AI Gateway as your LLM backend for LangChain applications.

Overview

LangChain applications can use AI Gateway for all LLM calls. This gives you centralized cost tracking, caching, rate limiting, and easy provider switching while keeping your LangChain code intact.

Quick Migration

Before (Direct Provider)

import { ChatOpenAI } from '@langchain/openai';
 
const model = new ChatOpenAI({
  modelName: 'gpt-4o',
  openAIApiKey: process.env.OPENAI_API_KEY,
});
 
const response = await model.invoke('Hello!');

After (AI Gateway)

import { ChatOpenAI } from '@langchain/openai';
 
const model = new ChatOpenAI({
  modelName: 'gpt-4o',
-  openAIApiKey: process.env.OPENAI_API_KEY,
+  openAIApiKey: process.env.GATEWAY_API_KEY,
+  configuration: {
+    baseURL: 'https://api.transactional.dev/ai/v1',
+  },
});
 
// All existing code works unchanged!
const response = await model.invoke('Hello!');

Step-by-Step Migration

1. Get Your Gateway API Key

  1. Go to AI Gateway Dashboard
  2. Click API Keys
  3. Create a new key (starts with gw_sk_)
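Once the key is in your environment, it can help to fail fast on a missing or malformed value before any LangChain call is made. A minimal sketch (the `gw_sk_` prefix check mirrors the key format above; the helper name is ours, not part of any SDK):

```typescript
// Sketch: validate the Gateway key format before wiring up models.
// Gateway keys start with gw_sk_ (see the dashboard step above);
// anything else is likely a raw provider key pasted by mistake.
function isGatewayKey(key: string | undefined): key is string {
  return typeof key === 'string' && key.startsWith('gw_sk_');
}

// e.g. at application startup:
// if (!isGatewayKey(process.env.GATEWAY_API_KEY)) {
//   throw new Error('GATEWAY_API_KEY is missing or malformed');
// }
```

Throwing at startup turns a silent 401 at request time into an immediate, obvious configuration error.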

2. Add Your Provider Keys

  1. Go to Provider Keys in the dashboard
  2. Add your OpenAI, Anthropic, or other provider keys
  3. AI Gateway will use these to call providers on your behalf

3. Update Your LangChain Models

Configure LangChain models to use AI Gateway:

import { ChatOpenAI } from '@langchain/openai';
 
const model = new ChatOpenAI({
  modelName: 'gpt-4o',
  openAIApiKey: process.env.GATEWAY_API_KEY,
  configuration: {
    baseURL: 'https://api.transactional.dev/ai/v1',
  },
});

Migration Examples

ChatOpenAI

import { ChatOpenAI } from '@langchain/openai';
 
const model = new ChatOpenAI({
  modelName: 'gpt-4o',
  openAIApiKey: process.env.GATEWAY_API_KEY,
  configuration: {
    baseURL: 'https://api.transactional.dev/ai/v1',
  },
  temperature: 0.7,
  maxTokens: 1024,
});
 
const response = await model.invoke([
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'Hello!' },
]);

Using Anthropic Models

Use Anthropic models through the OpenAI-compatible interface:

import { ChatOpenAI } from '@langchain/openai';
 
// Use ChatOpenAI with Anthropic model names
const claude = new ChatOpenAI({
  modelName: 'claude-3-5-sonnet',  // Anthropic model via Gateway
  openAIApiKey: process.env.GATEWAY_API_KEY,
  configuration: {
    baseURL: 'https://api.transactional.dev/ai/v1',
  },
});
 
const response = await claude.invoke('Explain quantum computing');

With Chains

import { ChatOpenAI } from '@langchain/openai';
import { LLMChain } from 'langchain/chains';
import { PromptTemplate } from '@langchain/core/prompts';
 
const model = new ChatOpenAI({
  modelName: 'gpt-4o',
  openAIApiKey: process.env.GATEWAY_API_KEY,
  configuration: {
    baseURL: 'https://api.transactional.dev/ai/v1',
  },
});
 
const chain = new LLMChain({
  llm: model,
  prompt: PromptTemplate.fromTemplate('Summarize: {text}'),
});
 
const result = await chain.invoke({ text: 'Long article here...' });

With Agents

import { ChatOpenAI } from '@langchain/openai';
import { AgentExecutor, createOpenAIFunctionsAgent } from 'langchain/agents';
 
const model = new ChatOpenAI({
  modelName: 'gpt-4o',
  openAIApiKey: process.env.GATEWAY_API_KEY,
  configuration: {
    baseURL: 'https://api.transactional.dev/ai/v1',
  },
});
 
// `tools` and `prompt` are defined elsewhere in your application
const agent = await createOpenAIFunctionsAgent({
  llm: model,
  tools,
  prompt,
});
 
const executor = AgentExecutor.fromAgentAndTools({
  agent,
  tools,
});
 
const result = await executor.invoke({ input: 'What is the weather?' });

With RAG

import { ChatOpenAI, OpenAIEmbeddings } from '@langchain/openai';
import { RetrievalQAChain } from 'langchain/chains';
 
const model = new ChatOpenAI({
  modelName: 'gpt-4o',
  openAIApiKey: process.env.GATEWAY_API_KEY,
  configuration: {
    baseURL: 'https://api.transactional.dev/ai/v1',
  },
});
 
// Embeddings also work through Gateway
const embeddings = new OpenAIEmbeddings({
  openAIApiKey: process.env.GATEWAY_API_KEY,
  configuration: {
    baseURL: 'https://api.transactional.dev/ai/v1',
  },
});
 
// `retriever` comes from your existing vector store setup
const chain = RetrievalQAChain.fromLLM(model, retriever);
const result = await chain.invoke({ query: 'What is the refund policy?' });

Streaming

Streaming works through the Gateway exactly as it does against the provider directly:

const model = new ChatOpenAI({
  modelName: 'gpt-4o',
  openAIApiKey: process.env.GATEWAY_API_KEY,
  configuration: {
    baseURL: 'https://api.transactional.dev/ai/v1',
  },
  streaming: true,
});
 
const stream = await model.stream('Tell me a story');
 
for await (const chunk of stream) {
  process.stdout.write(chunk.content);
}
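If you need the full response text as well as the incremental chunks, the stream can be folded into a single string. A generic sketch (not a LangChain API; it works for any async iterable of chunks carrying a string `content` field, which is the common case for text-only streams):

```typescript
// Sketch: accumulate streamed chunks into one string while still
// emitting each chunk incrementally via the optional onChunk callback.
async function collectStream(
  stream: AsyncIterable<{ content: string }>,
  onChunk?: (text: string) => void,
): Promise<string> {
  let full = '';
  for await (const chunk of stream) {
    full += chunk.content;
    onChunk?.(chunk.content);
  }
  return full;
}

// const full = await collectStream(await model.stream('Tell me a story'),
//   (text) => process.stdout.write(text));
```

Note that LangChain chunk `content` can be a richer type for multimodal models; this sketch assumes plain text.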

Multi-Provider Setup

Create models for different providers:

import { ChatOpenAI } from '@langchain/openai';
 
const gatewayConfig = {
  openAIApiKey: process.env.GATEWAY_API_KEY,
  configuration: {
    baseURL: 'https://api.transactional.dev/ai/v1',
  },
};
 
// OpenAI GPT-4
const gpt4 = new ChatOpenAI({
  ...gatewayConfig,
  modelName: 'gpt-4o',
});
 
// Anthropic Claude
const claude = new ChatOpenAI({
  ...gatewayConfig,
  modelName: 'claude-3-5-sonnet',
});
 
// Google Gemini
const gemini = new ChatOpenAI({
  ...gatewayConfig,
  modelName: 'gemini-1.5-pro',
});
 
// Use whichever model you need
const response = await claude.invoke('Hello!');
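Since every handle shares `gatewayConfig`, switching providers is just a matter of picking a model name, so the choice can be made dynamically. A small routing sketch (the task labels and fallback are ours, and the model identifiers are illustrative; the dashboard lists the names actually available to your account):

```typescript
// Sketch: choose a Gateway model name per task category.
// Task labels and the default are illustrative, not part of the Gateway API.
const MODEL_FOR_TASK: Record<string, string> = {
  chat: 'gpt-4o',
  reasoning: 'claude-3-5-sonnet',
  longContext: 'gemini-1.5-pro',
};

function modelForTask(task: string): string {
  return MODEL_FOR_TASK[task] ?? 'gpt-4o'; // fall back to a default model
}

// const model = new ChatOpenAI({ ...gatewayConfig, modelName: modelForTask('reasoning') });
```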

Creating a Reusable Factory

// lib/langchain.ts
import { ChatOpenAI } from '@langchain/openai';
 
export function createGatewayModel(modelName: string, options?: {
  temperature?: number;
  maxTokens?: number;
  streaming?: boolean;
}) {
  return new ChatOpenAI({
    modelName,
    openAIApiKey: process.env.GATEWAY_API_KEY,
    configuration: {
      baseURL: 'https://api.transactional.dev/ai/v1',
    },
    temperature: options?.temperature ?? 0.7,
    maxTokens: options?.maxTokens,
    streaming: options?.streaming ?? false,
  });
}

// Usage
import { createGatewayModel } from '@/lib/langchain';
 
const model = createGatewayModel('gpt-4o', {
  temperature: 0.5,
  maxTokens: 1024,
});

What You Get

After migration, you automatically get:

Feature           Description
Semantic Caching  Automatic response caching
Cost Tracking     Real-time cost monitoring
Rate Limiting     Configurable request limits
Fallback          Automatic failover to backup providers
Multi-Provider    Switch providers by changing model name
Analytics         Usage dashboards
Request Logs      Full request/response logging

Combining with Observability

Use both AI Gateway and Observability for complete visibility:

import { ChatOpenAI } from '@langchain/openai';
import { initObservability } from '@transactional/observability';
import { TransactionalCallbackHandler } from '@transactional/observability/langchain';
 
// Initialize observability
initObservability({
  dsn: process.env.TRANSACTIONAL_OBSERVABILITY_DSN!,
});
 
// Create model with AI Gateway
const model = new ChatOpenAI({
  modelName: 'gpt-4o',
  openAIApiKey: process.env.GATEWAY_API_KEY,
  configuration: {
    baseURL: 'https://api.transactional.dev/ai/v1',
  },
});
 
// Create observability handler
const handler = new TransactionalCallbackHandler({
  userId: 'user-123',
  sessionId: 'session-456',
});
 
// Use both
const response = await model.invoke('Hello!', {
  callbacks: [handler],
});
 
// You get:
// - AI Gateway: caching, cost tracking, rate limiting
// - Observability: tracing, analytics, debugging

Rollback

If you need to roll back, remove the Gateway configuration:

const model = new ChatOpenAI({
  modelName: 'gpt-4o',
-  openAIApiKey: process.env.GATEWAY_API_KEY,
-  configuration: {
-    baseURL: 'https://api.transactional.dev/ai/v1',
-  },
+  openAIApiKey: process.env.OPENAI_API_KEY,
});

Troubleshooting

Model Not Found

  • Use Gateway model names (e.g., claude-3-5-sonnet, not claude-3-5-sonnet-20241022)
  • Check available models in the dashboard

Authentication Error

  • Verify your Gateway API key is correct (starts with gw_sk_)
  • Ensure provider keys are configured in the dashboard

Configuration Not Applied

Make sure you're passing the configuration object:

const model = new ChatOpenAI({
  modelName: 'gpt-4o',
  openAIApiKey: process.env.GATEWAY_API_KEY,
  configuration: {  // This is required!
    baseURL: 'https://api.transactional.dev/ai/v1',
  },
});

Next Steps