Traces
Understanding the trace model and lifecycle in Observability.
What is a Trace?
A trace represents a complete unit of work in your AI application, from the initial user request to the final response. Traces capture the full context of what happened, including all LLM calls, processing steps, and timing information.
Trace Structure
Trace: chat-completion
├── metadata: { userId, sessionId, tags }
├── input: { userMessage }
├── output: { response }
├── startTime: 2024-01-15T10:30:00Z
├── endTime: 2024-01-15T10:30:02Z
├── duration: 2000ms
└── children:
    ├── Span: retrieve-context
    │   └── Generation: embed-query
    └── Generation: generate-response
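The hierarchy above can be modeled as a recursive type. The shapes below are an illustrative sketch, not the SDK's actual type definitions:

```typescript
// Hypothetical types mirroring the tree above -- not the SDK's real types.
type ObservationType = 'SPAN' | 'GENERATION';

interface Observation {
  type: ObservationType;
  name: string;
  children?: Observation[]; // spans may nest further observations
}

interface Trace {
  name: string;
  metadata?: Record<string, unknown>;
  input?: object;
  output?: object;
  startTime: Date;
  endTime?: Date;
  children: Observation[];
}

// The example trace from the diagram:
const chatCompletion: Trace = {
  name: 'chat-completion',
  startTime: new Date('2024-01-15T10:30:00Z'),
  endTime: new Date('2024-01-15T10:30:02Z'),
  children: [
    {
      type: 'SPAN',
      name: 'retrieve-context',
      children: [{ type: 'GENERATION', name: 'embed-query' }],
    },
    { type: 'GENERATION', name: 'generate-response' },
  ],
};
```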
Trace Lifecycle
1. Creation
A trace begins when you call trace():
const trace = obs.trace({
  name: 'chat-completion',
  input: { userMessage: 'Hello!' },
  userId: 'user-123',
  sessionId: 'session-456',
});

2. Active
During the trace, you add observations (generations, spans):
const generation = obs.generation({
  name: 'openai-completion',
  modelName: 'gpt-4o',
  input: { messages: [...] },
});

// LLM call happens here

await generation.end({
  output: { response: '...' },
  promptTokens: 150,
  completionTokens: 50,
});

3. Completion
End the trace with output or error:
// Success
await trace.end({
  output: { response: 'Hello! How can I help?' },
});

// Or error
await trace.error(new Error('Something went wrong'));

4. Flushing
The SDK batches traces and sends them to the server periodically:
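The batching behavior can be sketched as a buffer that drains in batches, either when it fills or when flushed explicitly. This is illustrative only; the SDK's actual internals and names will differ:

```typescript
// Illustrative batcher: buffers items and sends them in batches,
// either when the buffer reaches maxBatchSize or when flush() is called.
class BatchQueue<T> {
  private buffer: T[] = [];

  constructor(
    private send: (batch: T[]) => Promise<void>,
    private maxBatchSize = 20,
  ) {}

  async add(item: T): Promise<void> {
    this.buffer.push(item);
    if (this.buffer.length >= this.maxBatchSize) {
      await this.flush();
    }
  }

  // Drain everything still buffered -- what a shutdown() must do.
  async flush(): Promise<void> {
    if (this.buffer.length === 0) return;
    const batch = this.buffer;
    this.buffer = [];
    await this.send(batch);
  }
}
```

A periodic timer calling `flush()` gives the "sends periodically" behavior; `obs.shutdown()` corresponds to a final `flush()` before the process exits.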
// Force flush (e.g., before shutdown)
await obs.shutdown();

Trace Properties
| Property | Type | Description |
|---|---|---|
| id | string | Unique trace identifier |
| name | string | Human-readable name |
| input | object | Input data |
| output | object | Output data (set after end) |
| userId | string | Associated user |
| sessionId | string | Session for grouping |
| metadata | object | Arbitrary metadata |
| tags | string[] | Tags for filtering |
| startTime | Date | When the trace started |
| endTime | Date | When the trace ended |
| duration | number | Duration in milliseconds |
| status | string | 'running', 'completed', or 'error' |
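Note that duration is derivable from the two timestamps. For the example trace shown earlier:

```typescript
// Duration in milliseconds, computed from the example timestamps above.
const startTime = new Date('2024-01-15T10:30:00Z');
const endTime = new Date('2024-01-15T10:30:02Z');
const duration = endTime.getTime() - startTime.getTime();
console.log(duration); // 2000
```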
Trace Attributes
Required Attributes
- name: Descriptive name for the trace
Recommended Attributes
- userId: Track which user initiated the request
- sessionId: Group related traces
- input: The initial input data
Optional Attributes
- metadata: Environment, version, feature flags
- tags: Filterable labels
Example: Complete Trace
import { getObservability } from '@transactional/observability';
async function handleChatRequest(userId: string, message: string) {
  const obs = getObservability();

  // Start trace
  const trace = obs.trace({
    name: 'chat-request',
    input: { message },
    userId,
    sessionId: `chat-${userId}`,
    metadata: {
      environment: 'production',
      version: '1.2.0',
    },
    tags: ['chat', 'support'],
  });

  try {
    // Add a span for context retrieval
    const retrievalSpan = obs.observation({
      type: 'SPAN',
      name: 'retrieve-context',
      input: { query: message },
    });
    const context = await retrieveContext(message);
    await retrievalSpan.end({
      output: { documentCount: context.length },
    });

    // Add a generation for the LLM call
    const generation = obs.generation({
      name: 'generate-response',
      modelName: 'gpt-4o',
      input: {
        messages: [
          { role: 'system', content: 'You are helpful.' },
          { role: 'user', content: message },
        ],
      },
    });
    const response = await openai.chat.completions.create({
      model: 'gpt-4o',
      messages: [...],
    });
    await generation.end({
      output: response.choices[0].message,
      promptTokens: response.usage?.prompt_tokens,
      completionTokens: response.usage?.completion_tokens,
    });

    // End trace successfully
    await trace.end({
      output: { response: response.choices[0].message.content },
    });

    return response.choices[0].message.content;
  } catch (error) {
    // End trace with error
    await trace.error(error as Error);
    throw error;
  }
}

Viewing Traces
Dashboard
- Go to Observability
- Select your project
- Click Traces tab
- View trace list with:
  - Name
  - Duration
  - Status
  - Token count
  - Cost
Trace Detail View
Click a trace to see:
- Complete timeline
- All child observations
- Input/output data
- Token usage breakdown
- Cost calculation
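Cost calculation is typically token counts multiplied by per-token model rates. A sketch with placeholder prices (these are illustrative values, not real gpt-4o pricing; check your provider's current rates):

```typescript
// Illustrative only: these rates are placeholders, not real model pricing.
const ratesPerMillionTokens = { prompt: 2.5, completion: 10 };

function estimateCostUsd(promptTokens: number, completionTokens: number): number {
  return (
    (promptTokens / 1_000_000) * ratesPerMillionTokens.prompt +
    (completionTokens / 1_000_000) * ratesPerMillionTokens.completion
  );
}

// 150 prompt + 50 completion tokens (as in the lifecycle example):
// 150e-6 * 2.5 + 50e-6 * 10 = $0.000875 at these rates
const cost = estimateCostUsd(150, 50);
```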
Trace ID Propagation
Pass trace context across services:
// Service A: Create trace
const trace = obs.trace({ name: 'api-request' });

// Pass trace ID to Service B
const response = await fetch('/service-b', {
  headers: {
    'X-Trace-ID': trace.id,
  },
});

// Service B: Continue trace
const traceId = request.headers.get('X-Trace-ID');
const childSpan = obs.observation({
  type: 'SPAN',
  name: 'service-b-processing',
  traceId, // Links to parent trace
});

Best Practices
1. Name Traces Descriptively
// Good
trace({ name: 'user-chat-completion' });
trace({ name: 'document-summarization' });
trace({ name: 'code-review-request' });
// Bad
trace({ name: 'request' });
trace({ name: 'process' });

2. Always End Traces
Use try/catch (or try/finally) so the trace is always ended, even when the operation throws:
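This pattern can be factored into a reusable wrapper. The sketch below assumes only a minimal trace shape; the interface and names are illustrative, not the SDK's actual API:

```typescript
// Minimal shape this helper needs -- illustrative, not the SDK's types.
interface EndableTrace {
  end(args: { output: unknown }): Promise<void>;
  error(err: Error): Promise<void>;
}

// Runs fn, always ending the trace: end() on success, error() on failure.
async function withTrace<T>(trace: EndableTrace, fn: () => Promise<T>): Promise<T> {
  try {
    const result = await fn();
    await trace.end({ output: result });
    return result;
  } catch (err) {
    await trace.error(err as Error);
    throw err;
  }
}
```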
const trace = obs.trace({ name: 'operation' });
try {
  // Your code
  await trace.end({ output: result });
} catch (error) {
  await trace.error(error);
  throw error;
}

3. Include Useful Metadata
trace({
  name: 'chat',
  metadata: {
    environment: process.env.NODE_ENV,
    version: process.env.APP_VERSION,
    feature: 'chat-v2',
  },
});

4. Use Sessions for Conversations
// Same sessionId for all turns in a conversation
trace({
  name: 'chat-turn',
  sessionId: `conversation-${conversationId}`,
});

Next Steps
- Sessions - Group related traces
- Generations - Track LLM calls
- Spans - Add custom spans