Overview
Monitor, trace, and optimize your AI applications with comprehensive LLM observability.
What is Observability?
Observability gives you end-to-end monitoring for your AI-powered applications: track every LLM call, monitor costs, analyze performance, capture errors, and debug issues with detailed trace analysis.
Key Capabilities
- Trace Analysis - Full visibility into every LLM call with inputs, outputs, and timing
- Cost Tracking - Real-time cost monitoring per model, per trace, and per user
- Session Replay - Replay multi-turn conversations to understand AI behavior
- Model Registry - Centralized registry of LLM models with pricing data
- Error Tracking - Capture, group, and manage application errors with rich context
Why Observability?
Building AI applications is complex. Without proper observability, you face:
- Hidden Costs - LLM calls can be expensive, and costs spiral without visibility
- Debugging Challenges - AI behavior is hard to reproduce without full trace data
- Performance Issues - Latency problems are difficult to identify without metrics
- Compliance Gaps - Many industries require audit trails of AI decisions
Observability solves these problems with purpose-built monitoring for AI applications.
Key Features
Full Trace Visibility
See exactly what's happening in your AI pipelines:
- Complete request/response logging for every LLM call
- Nested spans for complex chains and agents
- Token usage and cost calculation per call
- Latency breakdown by component
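To make the nested-span model concrete, here is a small sketch in plain TypeScript, independent of the SDK. The `Span` shape and the example durations are illustrative only, not the SDK's actual types; it shows how a trace tree rolls up into a per-component latency breakdown.

```typescript
// Illustrative span tree: a trace with a retrieval step and an LLM call.
interface Span {
  name: string;
  durationMs: number; // wall-clock time spent in this span
  children?: Span[];
}

const trace: Span = {
  name: 'chat-completion',
  durationMs: 1450,
  children: [
    { name: 'retrieval', durationMs: 220 },
    { name: 'gpt-4o generation', durationMs: 1180 },
  ],
};

// Walk the tree and accumulate time per component name.
function latencyBreakdown(
  span: Span,
  out: Record<string, number> = {},
): Record<string, number> {
  out[span.name] = (out[span.name] ?? 0) + span.durationMs;
  for (const child of span.children ?? []) latencyBreakdown(child, out);
  return out;
}

console.log(latencyBreakdown(trace));
// { 'chat-completion': 1450, retrieval: 220, 'gpt-4o generation': 1180 }
```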
Cost Management
Keep your AI spending under control:
- Real-time cost tracking per model and provider
- Per-user and per-trace cost attribution
- Budget alerts when spending exceeds thresholds
- Historical cost analytics and trends
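Per-call cost attribution boils down to token counts multiplied by model pricing. The sketch below shows the arithmetic; the rates in the table are made-up placeholders (real pricing comes from the Model Registry), and the function names are illustrative, not SDK API.

```typescript
// Placeholder pricing table (USD per 1M tokens) -- real rates live in the Model Registry.
const pricing: Record<string, { inputPerM: number; outputPerM: number }> = {
  'gpt-4o': { inputPerM: 2.5, outputPerM: 10 },
};

// Cost of one call = input tokens * input rate + output tokens * output rate.
function callCost(model: string, inputTokens: number, outputTokens: number): number {
  const p = pricing[model];
  if (!p) throw new Error(`unknown model: ${model}`);
  return (inputTokens * p.inputPerM + outputTokens * p.outputPerM) / 1_000_000;
}

console.log(callCost('gpt-4o', 1000, 500)); // 0.0075
```

Summing these per-call costs by trace ID or user ID gives the per-trace and per-user attribution described above.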
Developer-First SDK
Integrate with a few lines of code:
```typescript
import { initObservability, getObservability } from '@transactional/observability';

initObservability({
  dsn: 'your-dsn-here',
});

const obs = getObservability();

// Create a trace
const trace = obs.trace({
  name: 'chat-completion',
  userId: 'user-123',
});

// Track LLM generation
const generation = obs.generation({
  name: 'gpt-4o',
  modelName: 'gpt-4o',
  input: { messages: [...] },
});
```

Framework Integrations
Native support for popular AI frameworks:
- LangChain - Automatic tracing via callback handler
- Vercel AI SDK - Wrapper for seamless integration
- OpenAI - Direct SDK support
- Anthropic - Direct SDK support
Error Tracking
Capture and manage errors across your applications:
- Automatic error grouping using fingerprinting
- Stack traces with source map support
- User and request context capture
- Breadcrumbs for debugging user journeys
- Alerts for new issues, regressions, and error spikes
```typescript
import { getObservability } from '@transactional/observability';

const obs = getObservability();

// Capture errors
obs.captureException(error, {
  tags: { feature: 'checkout' },
  user: { id: 'user-123' },
});

// Add breadcrumbs
obs.addBreadcrumb({
  type: 'user',
  message: 'User clicked checkout',
});
```

Learn more in the Error Tracking Guide.
Getting Started
- Create a Project - Set up your first Observability project in the dashboard
- Get Your DSN - Copy the Data Source Name from project settings
- Install the SDK - Add the SDK to your application
- Start Tracing - Begin capturing LLM calls automatically
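For step 3, installation is a single command (package name taken from the import statements above; use your package manager of choice):

```shell
# Add the Observability SDK to your project
npm install @transactional/observability
```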
Ready to get started? Check out our Quickstart Guide.
Next Steps
- Quickstart Guide - Get up and running in 5 minutes
- TypeScript SDK - Full SDK documentation
- LangChain Integration - Using with LangChain
- Error Tracking - Capture and manage application errors
- API Reference - REST API documentation