Dashboard

Overview of the Observability analytics dashboard.

Overview

The Observability dashboard provides real-time insights into your AI applications. Monitor traces, analyze costs, track performance, and debug issues from a single view.

Dashboard Sections

Summary Cards

The top of the dashboard shows key metrics at a glance:

| Metric | Description |
| --- | --- |
| Total Traces | Number of traces in selected period |
| Total Tokens | Sum of all tokens used |
| Total Cost | Sum of all LLM costs |
| Avg Latency | Average trace duration |
| Error Rate | Percentage of failed traces |
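
The summary-card metrics above amount to a simple aggregation over trace records. A minimal sketch, assuming an illustrative `Trace` record (not the platform's actual schema):

```python
from dataclasses import dataclass

# Illustrative trace record; field names are assumptions, not the real schema.
@dataclass
class Trace:
    tokens: int         # total tokens used by the trace
    cost: float         # total LLM cost in USD
    duration_ms: float  # end time minus start time
    error: bool         # True if the trace failed

def summarize(traces: list[Trace]) -> dict:
    """Compute the five summary-card metrics for the selected period."""
    n = len(traces)
    if n == 0:
        return {"total_traces": 0, "total_tokens": 0, "total_cost": 0.0,
                "avg_latency_ms": 0.0, "error_rate_pct": 0.0}
    return {
        "total_traces": n,
        "total_tokens": sum(t.tokens for t in traces),
        "total_cost": sum(t.cost for t in traces),
        "avg_latency_ms": sum(t.duration_ms for t in traces) / n,
        "error_rate_pct": 100.0 * sum(t.error for t in traces) / n,
    }
```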

Time Range Selector

Filter data by time period:

  • Last hour
  • Last 24 hours
  • Last 7 days
  • Last 30 days
  • Custom range

Traces Chart

Visualize trace volume over time:

  • Stacked by status (success, error)
  • Hover for detailed counts
  • Click to drill into specific time periods

Token Usage Chart

Track token consumption:

  • Input vs output tokens
  • By model
  • Trend over time

Cost Chart

Monitor spending:

  • Daily/hourly breakdown
  • By model comparison
  • Cumulative view
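
Cost figures like these are typically derived from token counts and per-model prices. A sketch of that calculation, using hypothetical per-million-token rates (not actual provider pricing):

```python
# Hypothetical (input, output) prices per million tokens; illustrative only.
PRICES = {
    "gpt-4o": (2.50, 10.00),
    "claude-3-5-sonnet": (3.00, 15.00),
}

def generation_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost of one LLM call: token counts scaled by per-million-token rates."""
    in_rate, out_rate = PRICES[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000
```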

Filters

Filter data by:

  • Model: gpt-4o, claude-3-5-sonnet, etc.
  • Status: Success, Error
  • User: Specific user IDs
  • Tags: Custom tags
  • Session: Specific sessions

Full-text search across:

  • Trace names
  • Input content
  • Output content
  • Metadata
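
Applied client-side, the filters and full-text search above reduce to predicate matching over trace records. A sketch, with field names that are assumptions rather than the real schema:

```python
def matches(trace: dict, model: str = None, status: str = None,
            query: str = None) -> bool:
    """True if a trace passes the given filters and full-text query."""
    if model and trace.get("model") != model:
        return False
    if status and trace.get("status") != status:
        return False
    if query:
        # Search across trace name, input, output, and metadata values.
        haystack = " ".join([
            trace.get("name", ""),
            trace.get("input", ""),
            trace.get("output", ""),
            " ".join(str(v) for v in trace.get("metadata", {}).values()),
        ]).lower()
        return query.lower() in haystack
    return True
```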

Quick Actions

  • Export: Download data as CSV/JSON
  • Share: Generate shareable link
  • Alert: Create alert from current view

Dashboard Tabs

Traces Tab

List view of all traces:

| Column | Description |
| --- | --- |
| Name | Trace name |
| Status | Success/Error indicator |
| Duration | Time from start to end |
| Tokens | Total tokens used |
| Cost | Total cost |
| Time | When trace occurred |

Click a trace to open its detailed view.

Sessions Tab

Group traces by session:

| Column | Description |
| --- | --- |
| Session ID | Session identifier |
| Traces | Number of traces |
| Duration | Session duration |
| Tokens | Total session tokens |
| Cost | Total session cost |
| User | Associated user |
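
Grouping like the Sessions tab can be sketched by bucketing traces on their session ID (the record shape here is an assumption):

```python
from collections import defaultdict

def group_by_session(traces: list[dict]) -> dict[str, dict]:
    """Aggregate per-session trace count, tokens, and cost."""
    sessions = defaultdict(lambda: {"traces": 0, "tokens": 0, "cost": 0.0})
    for t in traces:
        s = sessions[t["session_id"]]
        s["traces"] += 1
        s["tokens"] += t["tokens"]
        s["cost"] += t["cost"]
    return dict(sessions)
```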

Generations Tab

Filter to LLM calls only:

| Column | Description |
| --- | --- |
| Model | Model used |
| Prompt Tokens | Input tokens |
| Completion Tokens | Output tokens |
| Latency | Response time |
| Cost | Generation cost |

Models Tab

Aggregate by model:

| Column | Description |
| --- | --- |
| Model | Model name |
| Requests | Total requests |
| Tokens | Total tokens |
| Avg Latency | Average response time |
| Cost | Total cost |
| Error Rate | Failure percentage |
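
Per-model rollups like the Models tab can be sketched the same way, adding average latency and error rate (again with an assumed record shape):

```python
from collections import defaultdict

def aggregate_by_model(generations: list[dict]) -> dict[str, dict]:
    """Roll up requests, tokens, avg latency, and error rate per model."""
    raw = defaultdict(lambda: {"requests": 0, "tokens": 0,
                               "latency_ms": 0.0, "errors": 0})
    for g in generations:
        m = raw[g["model"]]
        m["requests"] += 1
        m["tokens"] += g["tokens"]
        m["latency_ms"] += g["latency_ms"]
        m["errors"] += g["error"]
    return {
        model: {
            "requests": m["requests"],
            "tokens": m["tokens"],
            "avg_latency_ms": m["latency_ms"] / m["requests"],
            "error_rate_pct": 100.0 * m["errors"] / m["requests"],
        }
        for model, m in raw.items()
    }
```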

Trace Detail View

Click any trace to see:

Timeline

Visual representation of the trace:

Trace: chat-completion (2.5s)
├── Span: retrieve-context (500ms)
│   └── Generation: embed-query (200ms)
└── Generation: generate-response (1.8s)
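
A timeline like the one above can be rendered by walking the span tree depth-first. A sketch, assuming a nested-dict node structure rather than any real SDK type:

```python
def render(node: dict, prefix: str = "", is_last: bool = True,
           is_root: bool = True) -> list[str]:
    """Render a trace's span tree as indented lines, like the timeline above."""
    label = f"{node['kind']}: {node['name']} ({node['duration']})"
    if is_root:
        lines = [label]
        child_prefix = ""
    else:
        lines = [prefix + ("└── " if is_last else "├── ") + label]
        child_prefix = prefix + ("    " if is_last else "│   ")
    children = node.get("children", [])
    for i, child in enumerate(children):
        lines += render(child, child_prefix, i == len(children) - 1, False)
    return lines
```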

Input Panel

Full input data:

{
  "userMessage": "What is machine learning?",
  "userId": "user-123",
  "sessionId": "session-456"
}

Output Panel

Full output data:

{
  "response": "Machine learning is a subset of AI...",
  "model": "gpt-4o",
  "tokens": 500
}

Metrics

  • Start time
  • End time
  • Duration
  • Total tokens
  • Total cost
  • Status

Metadata

Custom metadata attached to the trace:

{
  "environment": "production",
  "version": "1.2.0",
  "feature": "chat-v2"
}
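
Metadata like this is what makes slicing the dashboard by environment or feature possible. A client-side sketch of the equivalent filtering, with an assumed record shape:

```python
def filter_by_metadata(traces: list[dict], **criteria) -> list[dict]:
    """Keep traces whose metadata matches every given key/value pair."""
    return [
        t for t in traces
        if all(t.get("metadata", {}).get(k) == v for k, v in criteria.items())
    ]
```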

Customizing the Dashboard

Saved Views

Save custom filter combinations:

  1. Apply desired filters
  2. Click Save View
  3. Name your view
  4. Access from Saved Views dropdown

Default Time Range

Set default time range:

  1. Go to Settings
  2. Open Dashboard Defaults
  3. Select your preferred time range

Column Visibility

Choose visible columns:

  1. Click column selector icon
  2. Toggle columns on/off
  3. Drag to reorder

Keyboard Shortcuts

| Shortcut | Action |
| --- | --- |
| / | Focus search |
| r | Refresh data |
|  | Navigate time range |
| esc | Close detail panel |
| j k | Navigate list |

Next Steps