Alerts

Configure notifications for new issues, regressions, and error thresholds.

Overview

Error alerts notify your team when issues need attention. Configure alerts for:

  • New Issues - First occurrence of a unique error
  • Regressions - A resolved issue reoccurs
  • Thresholds - Error count exceeds a limit

Alert Trigger Types

New Issue

Triggers when an error with a new fingerprint is captured - the first occurrence of a unique error type.

Alert: New Issue Detected
Issue: TypeError: Cannot read property 'name' of undefined
First Seen: 2024-01-15 10:30:00 UTC
Environment: production

Use for:

  • Critical errors you want to know about immediately
  • Monitoring new deployments
  • Production error awareness

Regression

Triggers when a previously resolved issue receives a new occurrence.

Alert: Issue Regression
Issue: TypeError: Cannot read property 'name' of undefined
Originally Resolved: 2024-01-10
New Occurrence: 2024-01-15 10:30:00 UTC

Use for:

  • Ensuring fixes are actually deployed
  • Catching issues reintroduced by new code
  • Monitoring flaky errors

Threshold

Triggers when error count exceeds a limit within a time window.

Alert: Error Threshold Exceeded
Issue: PaymentError: Card declined
Count: 50 errors in 5 minutes
Threshold: 25 errors in 5 minutes

Use for:

  • Detecting error spikes
  • Monitoring high-volume endpoints
  • Service degradation alerts

Notification Channels

Email

Send alerts to team email addresses:

{
  channel: 'EMAIL',
  channelConfig: {
    recipients: [
      'oncall@example.com',
      'errors@example.com',
    ],
  },
}

Slack

Send alerts to Slack channels:

{
  channel: 'SLACK',
  channelConfig: {
    webhookUrl: 'https://hooks.slack.com/services/xxx/yyy/zzz',
    channel: '#errors', // Optional override
  },
}

Webhook

Send alerts to custom HTTP endpoints:

{
  channel: 'WEBHOOK',
  channelConfig: {
    url: 'https://api.example.com/error-webhook',
    headers: {
      'Authorization': 'Bearer xxx',
      'X-Custom-Header': 'value',
    },
  },
}

Creating Alert Rules

Dashboard

  1. Go to Observability > Your Project > Settings
  2. Click Alert Rules
  3. Click Create Rule
  4. Configure:
    • Name: Descriptive name
    • Trigger Type: New Issue, Regression, or Threshold
    • Channel: Email, Slack, or Webhook
    • Filters: Environment, severity (optional)

API

// Create an alert rule via API
const response = await fetch(`/api/observability/projects/${projectId}/alerts`, {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer YOUR_API_KEY',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    name: 'Production Critical Errors',
    triggerType: 'NEW_ISSUE',
    channel: 'SLACK',
    channelConfig: {
      webhookUrl: 'https://hooks.slack.com/services/xxx',
    },
    isEnabled: true,
    environments: ['production'],
    severities: ['ERROR', 'FATAL'],
    cooldownMinutes: 60,
  }),
});

Alert Configuration

Rule Properties

Property          Description                            Required
name              Human-readable rule name               Yes
triggerType       NEW_ISSUE, REGRESSION, or THRESHOLD    Yes
channel           EMAIL, SLACK, or WEBHOOK               Yes
channelConfig     Channel-specific configuration         Yes
isEnabled         Whether rule is active                 No (default: true)
environments      Filter by environment                  No
severities        Filter by severity level               No
cooldownMinutes   Minimum time between alerts            No (default: 60)
thresholdConfig   For THRESHOLD type only                Required for THRESHOLD
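As an illustration of these requirements, a client-side pre-flight check might look like the sketch below. The field names come from the table above; the validator itself is an example, not part of any official SDK:

```javascript
// Illustrative validator for alert rule objects, mirroring the property
// table above. Field names are documented; the checks are an example.
function validateAlertRule(rule) {
  const errors = [];
  for (const key of ['name', 'triggerType', 'channel', 'channelConfig']) {
    if (rule[key] == null) errors.push(`${key} is required`);
  }
  if (!['NEW_ISSUE', 'REGRESSION', 'THRESHOLD'].includes(rule.triggerType)) {
    errors.push('triggerType must be NEW_ISSUE, REGRESSION, or THRESHOLD');
  }
  if (rule.triggerType === 'THRESHOLD' && rule.thresholdConfig == null) {
    errors.push('thresholdConfig is required for THRESHOLD rules');
  }
  return errors;
}
```

Running this before calling the API surfaces configuration mistakes locally instead of as a 400 response.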

Threshold Configuration

{
  triggerType: 'THRESHOLD',
  thresholdConfig: {
    count: 50,         // Error count
    windowMinutes: 5,  // Time window
  },
}
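To picture how a THRESHOLD rule behaves, here is a rough sliding-window model: errors are counted over the trailing window, and the rule fires once the count reaches the limit. This illustrates the semantics only; it is not the server's actual implementation:

```javascript
// Illustrative sliding-window threshold check, not the real server logic.
function makeThresholdChecker({ count, windowMinutes }) {
  const timestamps = []; // ms timestamps of recent errors
  return function record(nowMs) {
    timestamps.push(nowMs);
    // Drop events that fell out of the trailing window.
    const cutoff = nowMs - windowMinutes * 60 * 1000;
    while (timestamps.length && timestamps[0] < cutoff) timestamps.shift();
    return timestamps.length >= count; // true → alert should fire
  };
}
```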

Environment Filtering

Limit alerts to specific environments:

{
  name: 'Production Alerts Only',
  environments: ['production'],
  // Won't trigger for staging/development errors
}

Severity Filtering

Limit alerts to specific severity levels:

{
  name: 'Critical Errors Only',
  severities: ['ERROR', 'FATAL'],
  // Won't trigger for DEBUG, INFO, or WARNING
}

Available severities:

  • DEBUG
  • INFO
  • WARNING
  • ERROR
  • FATAL

Cooldown / Rate Limiting

Prevent alert fatigue with cooldown periods:

{
  name: 'Rate Limited Alerts',
  cooldownMinutes: 60, // At most one alert per hour for this rule
}

Cooldown behavior:

  • After an alert triggers, subsequent triggers are suppressed
  • Cooldown resets after the specified time
  • Each rule has its own cooldown
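The behavior above can be modeled as a small gate that remembers when the rule last fired. Again, this is a sketch of the semantics, not the service's code:

```javascript
// Illustrative per-rule cooldown gate.
function makeCooldownGate(cooldownMinutes) {
  let lastFiredMs = -Infinity; // never fired yet
  return function shouldSend(nowMs) {
    if (nowMs - lastFiredMs < cooldownMinutes * 60 * 1000) {
      return false; // still inside the cooldown window: suppress
    }
    lastFiredMs = nowMs; // fire and restart the cooldown
    return true;
  };
}
```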

Slack Integration

Creating a Slack Webhook

  1. Go to your Slack workspace settings
  2. Navigate to Apps > Manage > Custom Integrations
  3. Click Incoming Webhooks
  4. Click Add to Slack
  5. Choose a channel and click Add Incoming Webhooks integration
  6. Copy the webhook URL
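Once you have the URL, you can sanity-check it by posting a minimal payload; Slack's incoming webhooks require only a `text` field. A small sketch (the placeholder URL is not real):

```javascript
// Build the minimal request Slack incoming webhooks accept.
function buildSlackTest(text) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ text }),
  };
}

// Run once with your real webhook URL to confirm delivery:
// await fetch('https://hooks.slack.com/services/xxx/yyy/zzz',
//   buildSlackTest('Webhook test from alert setup'));
```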

Slack Alert Format

🚨 New Issue: TypeError: Cannot read property 'name' of undefined

Project: my-project
Environment: production
Severity: ERROR
First Seen: 2024-01-15 10:30:00 UTC

Culprit: src/components/UserCard.tsx:42

View Issue: https://app.transactional.dev/observability/issues/xxx

Channel Override

{
  channel: 'SLACK',
  channelConfig: {
    webhookUrl: 'https://hooks.slack.com/services/xxx',
    channel: '#critical-errors', // Override webhook's default channel
  },
}

Webhook Integration

Webhook Payload

{
  "alertType": "NEW_ISSUE",
  "issue": {
    "id": "issue-abc123",
    "title": "TypeError: Cannot read property 'name' of undefined",
    "culprit": "src/components/UserCard.tsx:42",
    "severity": "ERROR",
    "platform": "REACT",
    "firstSeen": "2024-01-15T10:30:00Z",
    "lastSeen": "2024-01-15T10:30:00Z",
    "occurrenceCount": 1,
    "userCount": 1
  },
  "project": {
    "id": "proj-xyz789",
    "name": "my-project",
    "slug": "my-project"
  },
  "organization": {
    "id": 123,
    "name": "My Org"
  },
  "timestamp": "2024-01-15T10:30:05Z",
  "dashboardUrl": "https://app.transactional.dev/observability/issues/xxx"
}
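On the receiving side, a handler typically branches on alertType. The field names below match the documented payload; the routing decisions themselves are only an example:

```javascript
// Example routing logic for a webhook receiver. Payload fields follow the
// documented shape; the page/channel choices are illustrative.
function routeAlert(payload) {
  switch (payload.alertType) {
    case 'NEW_ISSUE':
      // Page only for fatal severity; everything else goes to chat.
      return { page: payload.issue.severity === 'FATAL', channel: '#errors' };
    case 'REGRESSION':
      return { page: false, channel: '#regressions' };
    case 'THRESHOLD':
      // Spikes are treated as incidents.
      return { page: true, channel: '#incidents' };
    default:
      return { page: false, channel: '#errors-misc' };
  }
}
```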

Webhook Security

Include authentication headers:

{
  channel: 'WEBHOOK',
  channelConfig: {
    url: 'https://api.example.com/webhook',
    headers: {
      'Authorization': 'Bearer secret-token',
      'X-Webhook-Secret': 'hmac-secret',
    },
  },
}

Example Configurations

Production Monitoring

// New issues in production
{
  name: 'Production New Issues',
  triggerType: 'NEW_ISSUE',
  channel: 'SLACK',
  channelConfig: {
    webhookUrl: process.env.SLACK_WEBHOOK_URL,
    channel: '#prod-errors',
  },
  environments: ['production'],
  cooldownMinutes: 30,
}

Critical Error Pager

// Fatal errors to PagerDuty
{
  name: 'Critical Error Pager',
  triggerType: 'NEW_ISSUE',
  channel: 'WEBHOOK',
  channelConfig: {
    url: 'https://events.pagerduty.com/v2/enqueue',
    headers: {
      'Content-Type': 'application/json',
    },
  },
  severities: ['FATAL'],
  cooldownMinutes: 5,
}

Regression Detection

// Notify when resolved issues come back
{
  name: 'Regression Alert',
  triggerType: 'REGRESSION',
  channel: 'EMAIL',
  channelConfig: {
    recipients: ['engineering@example.com'],
  },
  environments: ['production', 'staging'],
  cooldownMinutes: 120,
}

Error Spike Detection

// Alert when errors spike
{
  name: 'Error Spike Alert',
  triggerType: 'THRESHOLD',
  thresholdConfig: {
    count: 100,
    windowMinutes: 5,
  },
  channel: 'SLACK',
  channelConfig: {
    webhookUrl: process.env.SLACK_WEBHOOK_URL,
    channel: '#incidents',
  },
  environments: ['production'],
  cooldownMinutes: 15,
}

Managing Alert Rules

List Rules

GET /api/observability/projects/{projectId}/alerts

Get Rule

GET /api/observability/projects/{projectId}/alerts/{alertId}

Update Rule

PATCH /api/observability/projects/{projectId}/alerts/{alertId}

Delete Rule

DELETE /api/observability/projects/{projectId}/alerts/{alertId}

Toggle Rule

// Disable
PATCH /api/observability/projects/{projectId}/alerts/{alertId}
{ "isEnabled": false }
 
// Enable
PATCH /api/observability/projects/{projectId}/alerts/{alertId}
{ "isEnabled": true }
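These endpoints all share one URL shape, so a small request builder keeps call sites tidy. The paths come from this page; the helper and its parameter names are illustrative:

```javascript
// Illustrative helper that assembles requests for the alert rule
// management endpoints listed above.
function alertRuleRequest({ projectId, alertId, method = 'GET', body, apiKey }) {
  const base = `/api/observability/projects/${projectId}/alerts`;
  return {
    url: alertId ? `${base}/${alertId}` : base,
    method,
    headers: {
      'Authorization': `Bearer ${apiKey}`,
      'Content-Type': 'application/json',
    },
    body: body ? JSON.stringify(body) : undefined,
  };
}

// e.g. disable a rule:
// const req = alertRuleRequest({ projectId: 'proj-xyz789', alertId: 'a1',
//   method: 'PATCH', body: { isEnabled: false }, apiKey: 'YOUR_API_KEY' });
// await fetch(req.url, req);
```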

Best Practices

1. Start with New Issue Alerts

Begin with new issue alerts to catch novel errors:

{
  name: 'All New Issues',
  triggerType: 'NEW_ISSUE',
  channel: 'SLACK',
  environments: ['production'],
}

2. Use Severity Filtering

Don't alert on everything:

{
  severities: ['ERROR', 'FATAL'], // Ignore DEBUG, INFO, WARNING
}

3. Set Appropriate Cooldowns

Prevent alert fatigue:

// Low priority: 2 hours
{ cooldownMinutes: 120 }
 
// Normal: 1 hour
{ cooldownMinutes: 60 }
 
// Critical: 15 minutes
{ cooldownMinutes: 15 }

4. Separate Channels by Severity

// Critical → PagerDuty
{
  name: 'Critical to Pager',
  severities: ['FATAL'],
  channel: 'WEBHOOK',
  channelConfig: { url: 'https://pagerduty...' },
}
 
// Normal → Slack
{
  name: 'Errors to Slack',
  severities: ['ERROR'],
  channel: 'SLACK',
  channelConfig: { webhookUrl: '...' },
}

5. Environment-Specific Rules

// Production: immediate notification
{
  name: 'Production Alerts',
  environments: ['production'],
  cooldownMinutes: 15,
}
 
// Staging: one alert per day is enough
{
  name: 'Staging Summary',
  environments: ['staging'],
  cooldownMinutes: 1440, // 24 hours
}

Troubleshooting

Alerts Not Triggering

  1. Check rule is enabled - Verify isEnabled: true
  2. Check filters - Environment and severity must match
  3. Check cooldown - May be in cooldown period
  4. Check channel config - Webhook URL, email addresses

Too Many Alerts

  1. Increase cooldown - Raise cooldownMinutes
  2. Add filters - Filter by environment or severity
  3. Use thresholds - Switch from NEW_ISSUE to THRESHOLD

Slack Messages Not Arriving

  1. Check webhook URL - Verify it's valid
  2. Check channel access - Webhook may not have access
  3. Check Slack logs - Look for failed webhook deliveries

Next Steps