Transactional

Spans

Creating custom spans for non-LLM operations in your AI pipeline.

What is a Span?

A span represents a timed operation within a trace that isn't an LLM call. Use spans to track retrieval, processing, database queries, API calls, and other steps in your AI pipeline.

Span Structure

Span: retrieve-documents
├── type: SPAN
├── name: retrieve-documents
├── input: { query: '...' }
├── output: { documents: [...] }
├── startTime: 2024-01-15T10:30:00Z
├── endTime: 2024-01-15T10:30:01Z
├── duration: 1000ms
└── metadata: { source: 'pinecone' }

Creating Spans

const obs = getObservability();
 
// Create a span
const span = obs.observation({
  type: 'SPAN',
  name: 'retrieve-documents',
  input: { query: userQuery, topK: 5 },
});
 
// Do your work
const documents = await vectorStore.similaritySearch(userQuery, 5);
 
// End the span
await span.end({
  output: {
    documentCount: documents.length,
    documents: documents.map(d => d.id),
  },
});

Span Types

Retrieval Spans

Track vector search and document retrieval:

const retrievalSpan = obs.observation({
  type: 'SPAN',
  name: 'vector-search',
  input: {
    query: userQuery,
    topK: 10,
    filter: { category: 'docs' },
  },
  metadata: {
    vectorStore: 'pinecone',
    index: 'production',
  },
});
 
const results = await pinecone.query({
  vector: embedding,
  topK: 10,
  filter: { category: { $eq: 'docs' } },
});
 
await retrievalSpan.end({
  output: {
    matchCount: results.matches.length,
    topScore: results.matches[0]?.score,
  },
});

Processing Spans

Track data transformation:

const processingSpan = obs.observation({
  type: 'SPAN',
  name: 'format-context',
  input: { documentCount: documents.length },
});
 
const formattedContext = documents
  .map(doc => `## ${doc.title}\n${doc.content}`)
  .join('\n\n');
 
await processingSpan.end({
  output: {
    contextLength: formattedContext.length,
    truncated: formattedContext.length > 10000,
  },
});

API Call Spans

Track external API calls:

const apiSpan = obs.observation({
  type: 'SPAN',
  name: 'fetch-user-profile',
  input: { userId: user.id },
});
 
const profile = await fetch(`/api/users/${user.id}`).then(r => r.json());
 
await apiSpan.end({
  output: { hasPreferences: !!profile.preferences },
});

Database Spans

Track database operations:

const dbSpan = obs.observation({
  type: 'SPAN',
  name: 'save-conversation',
  input: { messageCount: messages.length },
});
 
await db.insert(conversations).values({
  userId,
  messages: JSON.stringify(messages),
});
 
await dbSpan.end({
  output: { success: true },
});

Nested Spans

Create hierarchies for complex operations:

// Parent span
const ragSpan = obs.observation({
  type: 'SPAN',
  name: 'rag-pipeline',
  input: { query: userQuery },
});
 
  // Child span: embedding
  const embedSpan = obs.observation({
    type: 'SPAN',
    name: 'generate-embedding',
    parentObservationId: ragSpan.id,
  });
 
  const embedding = await generateEmbedding(userQuery);
  await embedSpan.end({ output: { dimensions: embedding.length } });
 
  // Child span: retrieval
  const retrieveSpan = obs.observation({
    type: 'SPAN',
    name: 'retrieve-documents',
    parentObservationId: ragSpan.id,
  });
 
  const docs = await retrieve(embedding);
  await retrieveSpan.end({ output: { count: docs.length } });
 
// End parent
await ragSpan.end({
  output: { documentCount: docs.length },
});

Span Properties

Property              Type     Description
id                    string   Unique identifier
type                  string   Always 'SPAN'
name                  string   Human-readable name
input                 object   Input data
output                object   Output data
startTime             Date     When span started
endTime               Date     When span ended
duration              number   Duration in ms
metadata              object   Additional context
parentObservationId   string   Parent span/generation ID
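
The properties above can be sketched as a TypeScript interface. This is an illustrative shape only, not the SDK's actual type definitions; the `SpanRecord` name and the optionality of each field are assumptions:

```typescript
// Hypothetical shape of a span record, based on the properties table above.
interface SpanRecord {
  id: string;
  type: 'SPAN';
  name: string;
  input?: Record<string, unknown>;
  output?: Record<string, unknown>;
  startTime: Date;
  endTime?: Date;
  duration?: number; // milliseconds
  metadata?: Record<string, unknown>;
  parentObservationId?: string;
}

// Example record matching the structure shown at the top of this page.
const example: SpanRecord = {
  id: 'span_123',
  type: 'SPAN',
  name: 'retrieve-documents',
  input: { query: '...' },
  output: { documents: ['doc_1', 'doc_2'] },
  startTime: new Date('2024-01-15T10:30:00Z'),
  endTime: new Date('2024-01-15T10:30:01Z'),
  duration: 1000,
  metadata: { source: 'pinecone' },
};
```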

Example: Complete RAG Pipeline

import { getObservability } from '@transactional/observability';
 
async function ragPipeline(query: string): Promise<string> {
  const obs = getObservability();
 
  const trace = obs.trace({
    name: 'rag-query',
    input: { query },
  });
 
  try {
    // Step 1: Generate query embedding
    const embedSpan = obs.observation({
      type: 'SPAN',
      name: 'embed-query',
      input: { query },
    });
 
    const queryEmbedding = await openai.embeddings.create({
      model: 'text-embedding-3-small',
      input: query,
    });
 
    await embedSpan.end({
      output: { dimensions: queryEmbedding.data[0].embedding.length },
      metadata: { tokens: queryEmbedding.usage.total_tokens },
    });
 
    // Step 2: Retrieve relevant documents
    const retrieveSpan = obs.observation({
      type: 'SPAN',
      name: 'retrieve-documents',
      input: { topK: 5 },
    });
 
    const results = await vectorStore.query({
      vector: queryEmbedding.data[0].embedding,
      topK: 5,
    });
 
    await retrieveSpan.end({
      output: {
        documentCount: results.matches.length,
        topScore: results.matches[0]?.score,
      },
    });
 
    // Step 3: Format context
    const formatSpan = obs.observation({
      type: 'SPAN',
      name: 'format-context',
      input: { documentCount: results.matches.length },
    });
 
    const context = results.matches
      .map(m => m.metadata.content)
      .join('\n\n');
 
    await formatSpan.end({
      output: { contextLength: context.length },
    });
 
    // Step 4: Generate response (LLM call)
    const generation = obs.generation({
      name: 'generate-response',
      modelName: 'gpt-4o',
      input: {
        messages: [
          { role: 'system', content: `Context:\n${context}` },
          { role: 'user', content: query },
        ],
      },
    });
 
    const response = await openai.chat.completions.create({
      model: 'gpt-4o',
      messages: [
        { role: 'system', content: `Context:\n${context}` },
        { role: 'user', content: query },
      ],
    });
 
    await generation.end({
      output: response.choices[0].message,
      promptTokens: response.usage?.prompt_tokens,
      completionTokens: response.usage?.completion_tokens,
    });
 
    const answer = response.choices[0].message.content ?? '';
 
    await trace.end({ output: { answer } });
 
    return answer;
  } catch (error) {
    await trace.error(error as Error);
    throw error;
  }
}

Viewing Spans

In Trace View

Spans appear in the trace timeline:

  1. Go to Traces
  2. Click a trace
  3. See spans alongside generations
  4. View timing and duration

Performance Analysis

Use spans to identify bottlenecks:

  • Sort by duration
  • Filter slow spans
  • Analyze span patterns

Best Practices

1. Name Spans by Operation

// Good - describes the operation
obs.observation({ type: 'SPAN', name: 'retrieve-documents' });
obs.observation({ type: 'SPAN', name: 'validate-input' });
obs.observation({ type: 'SPAN', name: 'format-response' });
 
// Bad - too generic
obs.observation({ type: 'SPAN', name: 'step1' });
obs.observation({ type: 'SPAN', name: 'process' });

2. Include Relevant Input/Output

// Good - meaningful context
const span = obs.observation({
  type: 'SPAN',
  name: 'search',
  input: { query, topK: 5, filter: 'docs' },
});
 
await span.end({
  output: { resultCount: 5, topScore: 0.95 },
});
 
// Bad - no context
obs.observation({ type: 'SPAN', name: 'search' });
span.end({});

3. Always End Spans

const span = obs.observation({...});
try {
  const result = await doWork();
  await span.end({ output: result });
} catch (error) {
  await span.error(error);
  throw error;
}
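
The try/catch pattern above can be factored into a small wrapper so every span is ended or errored exactly once. This is a sketch, assuming the `obs.observation` / `span.end` / `span.error` API shown in this guide; the `withSpan` name and its signature are not part of the SDK:

```typescript
// Minimal structural type for what this helper needs from the SDK.
interface SpanLike {
  end(data: { output?: unknown }): Promise<void>;
  error(err: Error): Promise<void>;
}
interface ObsLike {
  observation(opts: { type: 'SPAN'; name: string; input?: unknown }): SpanLike;
}

// Run `work` inside a span, ending it on success and erroring it on failure.
async function withSpan<T>(
  obs: ObsLike,
  name: string,
  input: unknown,
  work: () => Promise<T>,
): Promise<T> {
  const span = obs.observation({ type: 'SPAN', name, input });
  try {
    const result = await work();
    await span.end({ output: result });
    return result;
  } catch (error) {
    await span.error(error as Error);
    throw error;
  }
}
```

Usage then becomes a one-liner per step, e.g. `await withSpan(obs, 'retrieve-documents', { query }, () => retrieve(query))`, with no way to forget the `end` call.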

4. Use Metadata for Context

obs.observation({
  type: 'SPAN',
  name: 'vector-search',
  metadata: {
    vectorStore: 'pinecone',
    index: 'production-v2',
    namespace: 'docs',
  },
});

Next Steps