Setting up Predictive Monitoring for a Website
Predictive monitoring detects signs of impending failure before it happens. Classic monitoring reacts: CPU hits 90% → alert. Predictive monitoring warns: CPU is growing at +2% per hour and will reach 90% in 6 hours (i.e., it is at 78% now). The difference is time for preventive action.
Prediction methods
Trend Analysis (linear regression). Fits a line to the metric over the last N hours and extrapolates. Simple to implement; works for monotonic trends (memory leaks, queue accumulation). A minimal sketch follows this list.
Seasonality-aware forecasting. Accounts for daily and weekly patterns. Prophet (Facebook/Meta) or Holt-Winters (ETS). Suitable for metrics with regular cycles.
Anomaly Detection. ML models flag abnormal behavior without preset thresholds. Isolation Forest, or an LSTM for time series.
SLO Burn Rate. Not a forecast of the future, but an early indicator: if the error budget burns 14.4x faster than the target rate, a 30-day budget is exhausted in 30 / 14.4 ≈ 2 days.
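A minimal sketch of the first method, assuming hourly samples are already loaded into a NumPy array (the CPU values and the 90% threshold here are illustrative):

import numpy as np

def hours_until_threshold(values: np.ndarray, threshold: float,
                          step_hours: float = 1.0) -> float | None:
    """Fit a line to hourly samples and extrapolate to a threshold.

    Returns the estimated hours until the metric crosses `threshold`,
    or None if the current trend never reaches it.
    """
    t = np.arange(len(values)) * step_hours
    slope, intercept = np.polyfit(t, values, 1)  # least-squares line
    if slope <= 0:
        return None  # flat or falling trend never crosses an upper threshold
    current = intercept + slope * t[-1]
    return max((threshold - current) / slope, 0.0)

# Example: CPU at ~78% growing ~2%/hour -> ~6 hours until 90%
cpu = np.array([70, 72, 74, 76, 78], dtype=float)
print(hours_until_threshold(cpu, threshold=90))  # ~6.0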
Prometheus: trend-based alerting
Simple prediction using predict_linear():
# Predict when disk fills
- alert: DiskWillFillSoon
  expr: |
    predict_linear(node_filesystem_avail_bytes{mountpoint="/"}[6h], 24 * 3600) < 0
  for: 30m
  labels:
    severity: warning
  annotations:
    summary: "Disk on {{ $labels.instance }} will be full in < 24 hours"
    predicted_free_in_24h: "{{ $value | humanize1024 }}B"
# Predict memory growth
- alert: MemoryLeakDetected
  expr: |
    predict_linear(node_memory_MemAvailable_bytes[2h], 4 * 3600)
      < 0.1 * node_memory_MemTotal_bytes
  for: 15m
  labels:
    severity: warning
  annotations:
    summary: "Memory may be exhausted in ~4 hours on {{ $labels.instance }}"
Burn Rate Alert (SLO-based):
- alert: FastBurnRate
  expr: |
    (
      sum(rate(http_requests_total{status=~"5.."}[1h]))
      /
      sum(rate(http_requests_total[1h]))
    ) > 14.4 * (1 - 0.999)
  for: 5m
  labels:
    severity: critical
  annotations:
    summary: "Error budget burning 14.4x faster than target: 30-day budget exhausts in ~2 days"
AWS CloudWatch Anomaly Detection
CloudWatch Anomaly Detection provides built-in ML with no model tuning:
resource "aws_cloudwatch_metric_alarm" "cpu_anomaly" {
alarm_name = "cpu-anomaly-detection"
comparison_operator = "GreaterThanUpperThreshold"
evaluation_periods = 2
threshold_metric_id = "e1"
alarm_description = "CPU anomaly detected"
metric_query {
id = "e1"
expression = "ANOMALY_DETECTION_BAND(m1, 2)"
label = "CPUUtilization (Expected)"
return_data = true
}
metric_query {
id = "m1"
return_data = false
metric {
metric_name = "CPUUtilization"
namespace = "AWS/EC2"
period = 300
stat = "Average"
dimensions = {
InstanceId = aws_instance.app.id
}
}
}
}
ANOMALY_DETECTION_BAND(m1, 2) predicts the expected range of the metric (accounting for seasonality); the alarm fires when the actual value exceeds the upper bound of a band 2 standard deviations wide.
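The same band can be inspected outside the alarm, e.g. to plot expected vs. actual values. A sketch using boto3's get_metric_data (the instance ID is a placeholder, and the band query assumes an anomaly detection model already exists for the metric):

from datetime import datetime, timedelta, timezone
import boto3

cw = boto3.client('cloudwatch')
now = datetime.now(timezone.utc)

resp = cw.get_metric_data(
    StartTime=now - timedelta(hours=6),
    EndTime=now,
    MetricDataQueries=[
        {   # the raw metric
            'Id': 'm1',
            'ReturnData': True,
            'MetricStat': {
                'Metric': {
                    'Namespace': 'AWS/EC2',
                    'MetricName': 'CPUUtilization',
                    'Dimensions': [{'Name': 'InstanceId',
                                    'Value': 'i-0123456789abcdef0'}],
                },
                'Period': 300,
                'Stat': 'Average',
            },
        },
        {   # expected band: same expression the alarm uses
            'Id': 'e1',
            'Expression': 'ANOMALY_DETECTION_BAND(m1, 2)',
            'ReturnData': True,
        },
    ],
)

for result in resp['MetricDataResults']:
    print(result['Id'], result['Label'], result['Values'][:3])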
Facebook Prophet for complex patterns
For metrics with pronounced weekly/daily patterns:
from prophet import Prophet
import pandas as pd
import boto3

def fetch_metric_history(metric_name: str, days: int = 90) -> pd.DataFrame:
    cw = boto3.client('cloudwatch')
    # ... fetch from CloudWatch or Prometheus
    return df  # columns: ds (datetime), y (value)

def predict_metric(metric_name: str, hours_ahead: int = 24) -> dict:
    df = fetch_metric_history(metric_name)

    model = Prophet(
        seasonality_mode='multiplicative',
        daily_seasonality=True,
        weekly_seasonality=True,
        changepoint_prior_scale=0.05,
    )
    model.fit(df)

    future = model.make_future_dataframe(periods=hours_ahead, freq='h')
    forecast = model.predict(future)

    # The last hours_ahead rows are the prediction
    predictions = forecast.tail(hours_ahead)[['ds', 'yhat', 'yhat_lower', 'yhat_upper']]

    # Find when the prediction first exceeds the threshold
    threshold = get_threshold(metric_name)  # external helper: per-metric threshold
    breach_time = predictions[predictions['yhat'] > threshold]['ds'].min()

    return {
        'metric': metric_name,
        'predicted_breach': breach_time.isoformat() if pd.notna(breach_time) else None,
        'hours_until_breach': (
            (breach_time - pd.Timestamp.now()).total_seconds() / 3600
            if pd.notna(breach_time) else None
        ),
    }
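Run this on a schedule (cron, Lambda) and turn the result into an alert. A hypothetical wiring, where notify() and the metric names stand in for whatever pager or ticket integration you use:

# Hypothetical scheduler loop: notify() and the metric names are placeholders
for metric in ['requests_per_second', 'db_connections']:
    result = predict_metric(metric)
    hours = result['hours_until_breach']
    if hours is not None and hours < 24:
        notify(
            severity='warning' if hours > 4 else 'critical',
            message=f"{metric} predicted to breach threshold "
                    f"in {hours:.1f}h ({result['predicted_breach']})",
        )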
Practical scenarios
Disk fill prediction. predict_linear() in Prometheus is the standard approach. Alert 24-48 hours before the disk fills.
Memory leak detection. Monotonic memory growth under stable load is a sign of a leak. Alert when the growth rate exceeds a threshold.
Proactive scaling. AWS Predictive Scaling analyzes historical traffic and scales the ASG before the peak period (see the sketch after this list).
Database degradation. Rising P95 query time at stable RPS is a sign of index degradation or table bloat: predict_linear(pg_query_duration_p95[2h], 6 * 3600) > SLO_threshold.
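A sketch of enabling predictive scaling via boto3; the ASG name and target value are assumptions, and the same policy can be expressed in Terraform as an aws_autoscaling_policy with policy_type = "PredictiveScaling":

import boto3

autoscaling = boto3.client('autoscaling')

# Scale the group ahead of forecast load; 'web-asg' is a placeholder name
autoscaling.put_scaling_policy(
    AutoScalingGroupName='web-asg',
    PolicyName='predictive-cpu',
    PolicyType='PredictiveScaling',
    PredictiveScalingConfiguration={
        'MetricSpecifications': [{
            'TargetValue': 50.0,  # keep average CPU around 50%
            'PredefinedMetricPairSpecification': {
                'PredefinedMetricType': 'ASGCPUUtilization',
            },
        }],
        'Mode': 'ForecastAndScale',   # or 'ForecastOnly' to evaluate first
        'SchedulingBufferTime': 300,  # launch instances 5 min before forecast need
    },
)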
Integrating predictions into alerts
Predictive alerts should lead to actions, not panic:
- Alert "disk fills in 24 hours" → create low-priority ticket, don't wake at night
- Alert "error budget exhausts in 2 hours" → wake on-call immediately
Configure via Alertmanager routes:
routes:
  - match:
      alertname: DiskWillFillSoon
    receiver: ticket-only  # Create ticket, don't page
  - match:
      alertname: FastBurnRate
    receiver: pagerduty-critical
Implementation timeline
- predict_linear alerts in Prometheus — 1-2 days
- CloudWatch Anomaly Detection — 1 day
- SLO burn rate alerts — 1-2 days
- Prophet-based forecasting service — 5-10 days
- Alert integration + tuning — 2-3 days