Implementing automatic resource scaling based on load
Auto-scaling is the ability of infrastructure to automatically add or remove resources in response to load changes. Without it, you either overpay for resources during quiet times or the site crashes during peak load. Properly configured scaling solves both problems simultaneously.
Scaling levels
Vertical (Scale Up): increasing the power of a single instance. Rarely automatic: on AWS, changing an EC2 instance type requires stopping the instance, so in practice it means manual intervention and brief downtime. Suitable for stateful components (databases).
Horizontal (Scale Out): adding new instances/pods. Preferred for stateless services: capacity is added without downtime, though new instances still need time to boot and warm up.
Metrics for scaling
Choosing what to scale on matters more than the mechanism itself. The wrong metric means scaling fires too late or too early.
| Metric | When to use | Notes |
|---|---|---|
| CPU Utilization | Compute-intensive tasks | Lags: scaling starts only after degradation has begun |
| Request Rate (RPS) | Web servers, APIs | Requires baseline calibration |
| Queue Depth | Async processing | Optimal for queue-based architectures |
| Response Time (P95) | SLO-oriented approach | Most accurate, but harder to configure |
| Custom business metric | Specific scenarios | Requires additional integration (see sketch below) |
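Getting a custom business metric to the scaler is the integration work the table mentions. A minimal sketch, assuming the metric is published to CloudWatch from an application hook or cron job; the namespace, metric name, and value are hypothetical:

```bash
# Publish the current number of active checkout sessions
# as a custom CloudWatch metric (hypothetical names and value)
aws cloudwatch put-metric-data \
  --namespace "MyApp/Business" \
  --metric-name ActiveCheckouts \
  --unit Count \
  --value 1234
```

A target tracking or step scaling policy can then reference this metric through a customized metric specification.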
AWS Auto Scaling Group
resource "aws_autoscaling_group" "app" {
name = "app-asg"
min_size = 2
max_size = 20
desired_capacity = 3
vpc_zone_identifier = var.private_subnet_ids
launch_template {
id = aws_launch_template.app.id
version = "$Latest"
}
health_check_type = "ELB"
health_check_grace_period = 60
target_group_arns = [aws_lb_target_group.app.arn]
}
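The launch template referenced by the ASG is not shown here; a minimal sketch of what aws_launch_template.app might contain, with the AMI variable and instance type as placeholders:

```hcl
# Hypothetical launch template for the ASG above;
# var.app_ami_id and the instance type are placeholders
resource "aws_launch_template" "app" {
  name_prefix   = "app-"
  image_id      = var.app_ami_id
  instance_type = "t3.medium"

  lifecycle {
    create_before_destroy = true
  }
}
```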
```hcl
# Target Tracking: keep average CPU at 60%
resource "aws_autoscaling_policy" "cpu_tracking" {
  name                      = "cpu-tracking"
  autoscaling_group_name    = aws_autoscaling_group.app.name
  policy_type               = "TargetTrackingScaling"
  # Time for a new instance to boot and start serving traffic
  estimated_instance_warmup = 60

  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }
    target_value = 60.0
  }
}
```
React fast to growth, remove resources slowly so the load has time to stabilize. With ASG target tracking this asymmetry is built in: scale-out starts as soon as the alarm fires, while scale-in is deliberately conservative. Explicit scale_in_cooldown/scale_out_cooldown settings (e.g. 60 s out vs 300 s in) exist in Application Auto Scaling, not in aws_autoscaling_policy; a sketch follows below.
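For services managed through Application Auto Scaling (ECS, DynamoDB, Aurora replicas and others), the cooldowns are set explicitly. A minimal sketch for an ECS service, assuming hypothetical cluster and service names:

```hcl
# Hypothetical ECS service scaling target; names are placeholders
resource "aws_appautoscaling_target" "ecs" {
  min_capacity       = 2
  max_capacity       = 20
  resource_id        = "service/app-cluster/app-service"
  scalable_dimension = "ecs:service:DesiredCount"
  service_namespace  = "ecs"
}

resource "aws_appautoscaling_policy" "ecs_cpu" {
  name               = "cpu-tracking"
  policy_type        = "TargetTrackingScaling"
  resource_id        = aws_appautoscaling_target.ecs.resource_id
  scalable_dimension = aws_appautoscaling_target.ecs.scalable_dimension
  service_namespace  = aws_appautoscaling_target.ecs.service_namespace

  target_tracking_scaling_policy_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ECSServiceAverageCPUUtilization"
    }
    target_value       = 60
    scale_in_cooldown  = 300 # remove slowly
    scale_out_cooldown = 60  # react fast
  }
}
```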
Kubernetes Horizontal Pod Autoscaler (HPA)
```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app
  minReplicas: 2
  maxReplicas: 50
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 60
  - type: Pods
    pods:
      metric:
        name: http_requests_per_second
      target:
        type: AverageValue
        averageValue: "100"
```
The custom metric http_requests_per_second comes from Prometheus and is exposed to the HPA through the Prometheus Adapter (the custom.metrics.k8s.io API). When several metrics are listed, the HPA takes the highest replica count that any of them demands.
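The HPA analogue of asymmetric cooldowns is the behavior field. A sketch that mirrors the ASG values above, appended under the same HPA spec:

```yaml
# Added under spec: of the HPA above; fast scale-up,
# conservative scale-down, mirroring the ASG example
behavior:
  scaleUp:
    stabilizationWindowSeconds: 0
  scaleDown:
    stabilizationWindowSeconds: 300
    policies:
    - type: Percent
      value: 50          # remove at most 50% of pods per minute
      periodSeconds: 60
```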
KEDA: scaling based on external sources
KEDA (Kubernetes Event-Driven Autoscaling) scales pods by metrics from external systems: Redis, RabbitMQ, Kafka, SQS.
```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: queue-processor
spec:
  scaleTargetRef:
    name: worker-deployment
  minReplicaCount: 1
  maxReplicaCount: 30
  triggers:
  - type: rabbitmq
    metadata:
      host: amqp://rabbitmq:5672/
      queueName: tasks
      queueLength: "50"  # 1 pod per 50 messages in the queue
```
With minReplicaCount: 0, KEDA can scale the deployment down to zero pods when the queue is empty, something the stock HPA cannot do.
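In practice the AMQP connection string carries credentials and belongs in a Secret rather than inline metadata. A minimal sketch using KEDA's TriggerAuthentication; the Secret name and key are hypothetical:

```yaml
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: rabbitmq-auth
spec:
  secretTargetRef:
  - parameter: host          # maps to the trigger's host field
    name: rabbitmq-secret    # hypothetical Secret name
    key: connection-string   # hypothetical key
```

The trigger then references it with authenticationRef: {name: rabbitmq-auth} instead of an inline host.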
Predictive scaling
AWS Predictive Scaling forecasts load from historical data (a minimum of 24 hours of history is required; forecasts use up to two weeks) and proactively adds resources ahead of demand. It is effective for patterns with regular peaks (morning traffic, business-hours activity).
resource "aws_autoscaling_policy" "predictive" {
name = "predictive"
autoscaling_group_name = aws_autoscaling_group.app.name
policy_type = "PredictiveScaling"
predictive_scaling_configuration {
mode = "ForecastAndScale"
scheduling_buffer_time = 300 # Start 5 min before predicted peak
max_capacity_breach_behavior = "IncreaseMaxCapacity"
metric_specification {
target_value = 60
predefined_scaling_metric_specification {
predefined_metric_type = "ASGAverageCPUUtilization"
}
predefined_load_metric_specification {
predefined_metric_type = "ASGTotalNetworkIn"
}
}
}
}
Scaling test
Load test before production launch:
```bash
# k6 for load generation
k6 run --vus 1000 --duration 10m script.js

# Watch the ASG in real time
watch -n5 "aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names app-asg \
  --query 'AutoScalingGroups[0].Instances[*].InstanceId' \
  --output table"
```
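The script.js above is not shown in the original; a minimal sketch of what it might contain (the target URL is a placeholder):

```javascript
import http from 'k6/http';
import { check, sleep } from 'k6';

export default function () {
  // Hypothetical endpoint; point this at the service under test
  const res = http.get('https://app.example.com/');
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1);
}
```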
Check: reaction time to load growth, no downtime during scale-out, correct connection draining during scale-in.
Implementation timeline
- ASG with Target Tracking (AWS) — 2-3 days
- HPA + Prometheus Adapter (Kubernetes) — 3-5 days
- KEDA for queue-based workloads — 2-3 days
- Predictive scaling — 1-2 days (once enough metric history has accumulated)
- Load testing + cooldown tuning — 2-3 days