# Quickstart

Get started with Observability in 5 minutes.
## Prerequisites

Before you begin, you'll need:
- A Transactional account (sign up here)
- Node.js 18+ (for TypeScript/JavaScript projects)
- An AI application that calls LLMs
## Step 1: Create a Project

1. Navigate to **Observability** in your dashboard
2. Click **New Project**
3. Enter a name for your project (e.g., "My AI App")
4. Select your platform (Node.js, React, Next.js, etc.)
5. Click **Create Project**
## Step 2: Get Your DSN

After creating your project, you'll see your Data Source Name (DSN). This is a unique identifier that connects your application to your project:

```text
https://pk_abc123...@api.transactional.dev/observability/42
```

Copy this DSN - you'll need it to configure the SDK.

**Security Note:** Your DSN contains a public key that's safe to use in server-side code. Never expose secret keys in client-side code.
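If you ever need to inspect a DSN programmatically (for logging or validation), its parts map onto a standard URL. Here is a minimal sketch assuming the `https://<publicKey>@<host>/observability/<projectId>` shape shown above; the `parseDsn` helper and its field names are illustrative, not part of the SDK:

```typescript
// Parse a DSN of the form https://<publicKey>@<host>/observability/<projectId>.
// Hypothetical helper for illustration -- the SDK handles this internally.
function parseDsn(dsn: string) {
  const url = new URL(dsn);
  const projectId = url.pathname.split('/').pop() ?? '';
  return {
    publicKey: url.username, // e.g. "pk_abc123"
    host: url.host,          // e.g. "api.transactional.dev"
    projectId,               // e.g. "42"
  };
}

const parsed = parseDsn('https://pk_abc123@api.transactional.dev/observability/42');
console.log(parsed.publicKey); // "pk_abc123"
console.log(parsed.host);      // "api.transactional.dev"
console.log(parsed.projectId); // "42"
```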
## Step 3: Install the SDK

Install the Observability SDK in your project:

```bash
npm install @transactional/observability
```

## Step 4: Initialize the SDK
Add the initialization code to your application's entry point:

```typescript
import { initObservability } from '@transactional/observability';

initObservability({
  dsn: process.env.TRANSACTIONAL_OBSERVABILITY_DSN,
});
```

Add your DSN to your environment variables:

```bash
TRANSACTIONAL_OBSERVABILITY_DSN=https://pk_abc123...@api.transactional.dev/observability/42
```

## Step 5: Track Your First Trace
Now you can start tracking LLM calls:
```typescript
import { getObservability } from '@transactional/observability';
import OpenAI from 'openai';

const openai = new OpenAI();

async function chat(userMessage: string) {
  const obs = getObservability();

  // Create a trace for this conversation
  const trace = obs.trace({
    name: 'chat',
    input: { userMessage },
    userId: 'user-123', // Optional: track by user
  });

  try {
    // Track the LLM generation
    const generation = obs.generation({
      name: 'openai-completion',
      modelName: 'gpt-4o',
      input: {
        messages: [{ role: 'user', content: userMessage }],
      },
    });

    // Make your LLM call
    const response = await openai.chat.completions.create({
      model: 'gpt-4o',
      messages: [{ role: 'user', content: userMessage }],
    });

    // End the generation with the result
    await generation.end({
      output: response.choices[0].message,
      promptTokens: response.usage?.prompt_tokens,
      completionTokens: response.usage?.completion_tokens,
    });

    // End the trace
    await trace.end({
      output: { response: response.choices[0].message.content },
    });

    return response.choices[0].message.content;
  } catch (error) {
    await trace.error(error as Error);
    throw error;
  }
}
```

## Step 6: View Your Traces
1. Go to your project in the dashboard
2. Click on the **Traces** tab
3. You should see your first trace with all the details
## Using with LangChain

If you're using LangChain, tracing is even simpler:

```typescript
import { TransactionalCallbackHandler } from '@transactional/observability/langchain';
import { ChatOpenAI } from '@langchain/openai';

const handler = new TransactionalCallbackHandler({
  sessionId: 'conversation-123',
});

const model = new ChatOpenAI({ modelName: 'gpt-4o' });

// All calls are automatically traced!
const response = await model.invoke('Hello!', {
  callbacks: [handler],
});
```

## What's Next?
Now that you're up and running, explore more features:
- Traces - Understand the trace model
- Sessions - Group traces into conversations
- Analytics - Monitor your AI spending
- LangChain Integration - Full LangChain setup guide
## Troubleshooting

### Traces not appearing
- Verify your DSN is correct
- Check that `initObservability()` is called before any tracing
- Ensure your application can reach `api.transactional.dev`
- Check the browser/server console for errors
### Missing token counts

Make sure you're passing `promptTokens` and `completionTokens` to `generation.end()`. If your LLM provider doesn't return token counts, the SDK will estimate them.
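If you want a manual fallback while debugging, a common rough heuristic is ~4 characters per token for English text. This sketch is a hypothetical stand-in, not the SDK's actual estimator:

```typescript
// Rough token estimate (~4 characters per token for English text).
// Hypothetical fallback heuristic -- not the SDK's built-in estimator.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

const prompt = 'Hello, how are you?';
const completion = 'I am doing well, thanks for asking!';

console.log(estimateTokens(prompt));     // 5
console.log(estimateTokens(completion)); // 9

// You could then pass these estimates yourself, e.g.:
// await generation.end({
//   output: completion,
//   promptTokens: estimateTokens(prompt),
//   completionTokens: estimateTokens(completion),
// });
```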
## Need help?

- Check our API Reference
- Contact support