# Alerts

Configure notifications for new issues, regressions, and error thresholds.

## Overview

Error alerts notify your team when issues need attention. Configure alerts for:

- New Issues - First occurrence of a unique error
- Regressions - A resolved issue reoccurs
- Thresholds - Error count exceeds a limit

## Alert Trigger Types

### New Issue

Triggers when an error with a new fingerprint is captured - the first occurrence of a unique error type.
```
Alert: New Issue Detected
Issue: TypeError: Cannot read property 'name' of undefined
First Seen: 2024-01-15 10:30:00 UTC
Environment: production
```
Use for:
- Critical errors you want to know about immediately
- New deployments monitoring
- Production error awareness
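Conceptually, a new-issue trigger can be modeled as a set of known fingerprints: the alert fires only when an error's fingerprint has never been seen before. A minimal sketch (the `fingerprintOf` helper and in-memory set are illustrative; the real service derives fingerprints from stack traces and persists known issues):

```typescript
// Sketch: a new-issue trigger fires only for fingerprints never seen before.
const seenFingerprints = new Set<string>();

// Illustrative fingerprint: real grouping also considers stack frames.
function fingerprintOf(errorType: string, message: string): string {
  return `${errorType}:${message}`;
}

function shouldFireNewIssueAlert(errorType: string, message: string): boolean {
  const fp = fingerprintOf(errorType, message);
  if (seenFingerprints.has(fp)) return false; // already a known issue
  seenFingerprints.add(fp);
  return true; // first occurrence of this unique error
}
```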
### Regression
Triggers when a previously resolved issue receives a new occurrence.
```
Alert: Issue Regression
Issue: TypeError: Cannot read property 'name' of undefined
Originally Resolved: 2024-01-10
New Occurrence: 2024-01-15 10:30:00 UTC
```
Use for:
- Ensuring fixes are actually deployed
- Catching issues reintroduced by new code
- Monitoring flaky errors
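The regression trigger amounts to a status check: a new occurrence against a resolved issue fires the alert and reopens the issue. A minimal sketch (the `Issue` type and statuses are assumptions for illustration, not the service's actual schema):

```typescript
// Sketch: a regression trigger fires when an occurrence arrives for an
// issue that was previously marked resolved.
type IssueStatus = 'OPEN' | 'RESOLVED';

interface Issue {
  id: string;
  status: IssueStatus;
}

function shouldFireRegressionAlert(issue: Issue): boolean {
  if (issue.status !== 'RESOLVED') return false;
  issue.status = 'OPEN'; // reopen the issue on regression
  return true;
}
```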
### Threshold
Triggers when error count exceeds a limit within a time window.
```
Alert: Error Threshold Exceeded
Issue: PaymentError: Card declined
Count: 50 errors in 5 minutes
Threshold: 25 errors in 5 minutes
```
Use for:
- Detecting error spikes
- Monitoring high-volume endpoints
- Service degradation alerts
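The threshold trigger amounts to counting occurrences in a rolling time window and firing once the count exceeds the configured limit. A minimal in-memory sketch (illustrative only; the real service aggregates counts server-side):

```typescript
// Sketch: count occurrences inside a rolling window; fire when the
// count exceeds the configured limit.
class ThresholdTrigger {
  private timestamps: number[] = [];

  constructor(private count: number, private windowMinutes: number) {}

  // Record one occurrence at `nowMs`; returns true when the threshold
  // is exceeded within the window.
  record(nowMs: number): boolean {
    const windowStart = nowMs - this.windowMinutes * 60_000;
    this.timestamps.push(nowMs);
    // Drop occurrences that have fallen out of the window
    this.timestamps = this.timestamps.filter((t) => t >= windowStart);
    return this.timestamps.length > this.count;
  }
}
```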
## Notification Channels

### Email

Send alerts to team email addresses:

```js
{
  channel: 'EMAIL',
  channelConfig: {
    recipients: [
      'oncall@example.com',
      'errors@example.com',
    ],
  },
}
```

### Slack
Send alerts to Slack channels:

```js
{
  channel: 'SLACK',
  channelConfig: {
    webhookUrl: 'https://hooks.slack.com/services/xxx/yyy/zzz',
    channel: '#errors', // Optional override
  },
}
```

### Webhook
Send alerts to custom HTTP endpoints:

```js
{
  channel: 'WEBHOOK',
  channelConfig: {
    url: 'https://api.example.com/error-webhook',
    headers: {
      'Authorization': 'Bearer xxx',
      'X-Custom-Header': 'value',
    },
  },
}
```

## Creating Alert Rules
### Dashboard

1. Go to Observability > Your Project > Settings
2. Click Alert Rules
3. Click Create Rule
4. Configure:
   - Name: Descriptive name
   - Trigger Type: New Issue, Regression, or Threshold
   - Channel: Email, Slack, or Webhook
   - Filters: Environment, severity (optional)
### API

```js
// Create an alert rule via API
const response = await fetch('/api/observability/projects/{projectId}/alerts', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer YOUR_API_KEY',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    name: 'Production Critical Errors',
    triggerType: 'NEW_ISSUE',
    channel: 'SLACK',
    channelConfig: {
      webhookUrl: 'https://hooks.slack.com/services/xxx',
    },
    isEnabled: true,
    environments: ['production'],
    severities: ['ERROR', 'FATAL'],
    cooldownMinutes: 60,
  }),
});
```

## Alert Configuration
### Rule Properties

| Property | Description | Required |
|---|---|---|
| `name` | Human-readable rule name | Yes |
| `triggerType` | `NEW_ISSUE`, `REGRESSION`, or `THRESHOLD` | Yes |
| `channel` | `EMAIL`, `SLACK`, or `WEBHOOK` | Yes |
| `channelConfig` | Channel-specific configuration | Yes |
| `isEnabled` | Whether the rule is active | No (default: `true`) |
| `environments` | Filter by environment | No |
| `severities` | Filter by severity level | No |
| `cooldownMinutes` | Minimum time between alerts | No (default: 60) |
| `thresholdConfig` | Threshold settings | Required for `THRESHOLD` |
### Threshold Configuration

```js
{
  triggerType: 'THRESHOLD',
  thresholdConfig: {
    count: 50,        // Error count
    windowMinutes: 5, // Time window
  },
}
```

### Environment Filtering
Limit alerts to specific environments:
```js
{
  name: 'Production Alerts Only',
  environments: ['production'],
  // Won't trigger for staging/development errors
}
```

### Severity Filtering
Limit alerts to specific severity levels:
```js
{
  name: 'Critical Errors Only',
  severities: ['ERROR', 'FATAL'],
  // Won't trigger for DEBUG, INFO, or WARNING
}
```

Available severities: `DEBUG`, `INFO`, `WARNING`, `ERROR`, `FATAL`
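If you want to alert on everything at or above a given level, the `severities` array can be derived from the severity order. A small sketch (`severitiesAtOrAbove` is a hypothetical helper, not part of the API):

```typescript
// Sketch: derive a severities filter from a minimum level, using the
// order DEBUG < INFO < WARNING < ERROR < FATAL.
const SEVERITY_ORDER = ['DEBUG', 'INFO', 'WARNING', 'ERROR', 'FATAL'] as const;
type Severity = (typeof SEVERITY_ORDER)[number];

// Hypothetical helper: everything at or above `min` severity.
function severitiesAtOrAbove(min: Severity): Severity[] {
  return SEVERITY_ORDER.slice(SEVERITY_ORDER.indexOf(min));
}
```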
### Cooldown / Rate Limiting
Prevent alert fatigue with cooldown periods:
```js
{
  name: 'Rate Limited Alerts',
  cooldownMinutes: 60, // At most one alert per hour for this rule
}
```

Cooldown behavior:
- After an alert triggers, subsequent triggers are suppressed
- Cooldown resets after the specified time
- Each rule has its own cooldown
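The behavior above can be sketched as a per-rule timestamp check (in-memory state for illustration; the real service persists the last-fired time per rule):

```typescript
// Sketch: suppress alerts for a rule until `cooldownMinutes` have
// elapsed since that rule last fired. Each rule is tracked separately.
const lastFiredAt = new Map<string, number>();

function passesCooldown(ruleId: string, cooldownMinutes: number, nowMs: number): boolean {
  const last = lastFiredAt.get(ruleId);
  if (last !== undefined && nowMs - last < cooldownMinutes * 60_000) {
    return false; // still in cooldown; suppress this alert
  }
  lastFiredAt.set(ruleId, nowMs); // cooldown resets from this trigger
  return true;
}
```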
## Slack Integration

### Creating a Slack Webhook
1. Go to your Slack workspace settings
2. Navigate to Apps > Manage > Custom Integrations
3. Click Incoming Webhooks
4. Click Add to Slack
5. Choose a channel and click Add Incoming Webhooks integration
6. Copy the webhook URL
### Slack Alert Format

```
🚨 New Issue: TypeError: Cannot read property 'name' of undefined
Project: my-project
Environment: production
Severity: ERROR
First Seen: 2024-01-15 10:30:00 UTC
Culprit: src/components/UserCard.tsx:42
View Issue: https://app.transactional.dev/observability/issues/xxx
```
### Channel Override

```js
{
  channel: 'SLACK',
  channelConfig: {
    webhookUrl: 'https://hooks.slack.com/services/xxx',
    channel: '#critical-errors', // Override webhook's default channel
  },
}
```

## Webhook Integration
### Webhook Payload

```json
{
  "alertType": "NEW_ISSUE",
  "issue": {
    "id": "issue-abc123",
    "title": "TypeError: Cannot read property 'name' of undefined",
    "culprit": "src/components/UserCard.tsx:42",
    "severity": "ERROR",
    "platform": "REACT",
    "firstSeen": "2024-01-15T10:30:00Z",
    "lastSeen": "2024-01-15T10:30:00Z",
    "occurrenceCount": 1,
    "userCount": 1
  },
  "project": {
    "id": "proj-xyz789",
    "name": "my-project",
    "slug": "my-project"
  },
  "organization": {
    "id": 123,
    "name": "My Org"
  },
  "timestamp": "2024-01-15T10:30:05Z",
  "dashboardUrl": "https://app.transactional.dev/observability/issues/xxx"
}
```

### Webhook Security
Include authentication headers:
```js
{
  channel: 'WEBHOOK',
  channelConfig: {
    url: 'https://api.example.com/webhook',
    headers: {
      'Authorization': 'Bearer secret-token',
      'X-Webhook-Secret': 'hmac-secret',
    },
  },
}
```

## Example Configurations
### Production Monitoring

```js
// New issues in production
{
  name: 'Production New Issues',
  triggerType: 'NEW_ISSUE',
  channel: 'SLACK',
  channelConfig: {
    webhookUrl: process.env.SLACK_WEBHOOK_URL,
    channel: '#prod-errors',
  },
  environments: ['production'],
  cooldownMinutes: 30,
}
```

### Critical Error Pager
```js
// Fatal errors to PagerDuty
{
  name: 'Critical Error Pager',
  triggerType: 'NEW_ISSUE',
  channel: 'WEBHOOK',
  channelConfig: {
    url: 'https://events.pagerduty.com/v2/enqueue',
    headers: {
      'Content-Type': 'application/json',
    },
  },
  severities: ['FATAL'],
  cooldownMinutes: 5,
}
```

### Regression Detection
```js
// Notify when resolved issues come back
{
  name: 'Regression Alert',
  triggerType: 'REGRESSION',
  channel: 'EMAIL',
  channelConfig: {
    recipients: ['engineering@example.com'],
  },
  environments: ['production', 'staging'],
  cooldownMinutes: 120,
}
```

### Error Spike Detection
```js
// Alert when errors spike
{
  name: 'Error Spike Alert',
  triggerType: 'THRESHOLD',
  thresholdConfig: {
    count: 100,
    windowMinutes: 5,
  },
  channel: 'SLACK',
  channelConfig: {
    webhookUrl: process.env.SLACK_WEBHOOK_URL,
    channel: '#incidents',
  },
  environments: ['production'],
  cooldownMinutes: 15,
}
```

## Managing Alert Rules
### List Rules

```
GET /api/observability/projects/{projectId}/alerts
```

### Get Rule

```
GET /api/observability/projects/{projectId}/alerts/{alertId}
```

### Update Rule

```
PATCH /api/observability/projects/{projectId}/alerts/{alertId}
```
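For example, an update can be issued with a standard `fetch` call, mirroring the create example above. A sketch (`buildUpdateRequest` is a hypothetical helper; substitute your own API key):

```typescript
// Sketch: assemble a PATCH request for updating an alert rule.
function buildUpdateRequest(projectId: string, alertId: string, patch: object) {
  return {
    url: `/api/observability/projects/${projectId}/alerts/${alertId}`,
    method: 'PATCH' as const,
    headers: {
      'Authorization': 'Bearer YOUR_API_KEY',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(patch),
  };
}

// Usage (not executed here):
// const req = buildUpdateRequest('proj-xyz789', 'alert-123', { cooldownMinutes: 30 });
// await fetch(req.url, { method: req.method, headers: req.headers, body: req.body });
```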
### Delete Rule

```
DELETE /api/observability/projects/{projectId}/alerts/{alertId}
```
### Toggle Rule

```
// Disable
PATCH /api/observability/projects/{projectId}/alerts/{alertId}
{ "isEnabled": false }

// Enable
PATCH /api/observability/projects/{projectId}/alerts/{alertId}
{ "isEnabled": true }
```

## Best Practices
### 1. Start with New Issue Alerts

Begin with new issue alerts to catch novel errors:

```js
{
  name: 'All New Issues',
  triggerType: 'NEW_ISSUE',
  channel: 'SLACK',
  environments: ['production'],
}
```

### 2. Use Severity Filtering
Don't alert on everything:
```js
{
  severities: ['ERROR', 'FATAL'], // Ignore DEBUG, INFO, WARNING
}
```

### 3. Set Appropriate Cooldowns
Prevent alert fatigue:
```js
// Low priority: 2 hours
{ cooldownMinutes: 120 }

// Normal: 1 hour
{ cooldownMinutes: 60 }

// Critical: 15 minutes
{ cooldownMinutes: 15 }
```

### 4. Separate Channels by Severity
```js
// Critical → PagerDuty
{
  name: 'Critical to Pager',
  severities: ['FATAL'],
  channel: 'WEBHOOK',
  channelConfig: { url: 'https://pagerduty...' },
}

// Normal → Slack
{
  name: 'Errors to Slack',
  severities: ['ERROR'],
  channel: 'SLACK',
  channelConfig: { webhookUrl: '...' },
}
```

### 5. Environment-Specific Rules
```js
// Production: immediate notification
{
  name: 'Production Alerts',
  environments: ['production'],
  cooldownMinutes: 15,
}

// Staging: daily digest is enough
{
  name: 'Staging Summary',
  environments: ['staging'],
  cooldownMinutes: 1440, // 24 hours
}
```

## Troubleshooting
### Alerts Not Triggering

- Check rule is enabled - Verify `isEnabled: true`
- Check filters - Environment and severity must match
- Check cooldown - The rule may be in its cooldown period
- Check channel config - Verify the webhook URL or email addresses
### Too Many Alerts

- Increase cooldown - Raise `cooldownMinutes`
- Add filters - Filter by environment or severity
- Use thresholds - Switch from `NEW_ISSUE` to `THRESHOLD`
### Slack Messages Not Arriving
- Check webhook URL - Verify it's valid
- Check channel access - Webhook may not have access
- Check Slack logs - Look for failed webhook deliveries
## Next Steps
- Overview - Error tracking introduction
- Capturing Errors - Learn capture methods
- Context - Add debugging context