Configuring Redis Sentinel for High Availability
Redis Sentinel is a monitoring and automatic failover system for a master-replica Redis deployment. Unlike Redis Cluster, Sentinel doesn't shard data: the entire dataset lives on one master with replicas. Sentinel monitors the master and, if it fails, automatically promotes a replica to be the new master and serves the new master's address to clients.
Sentinel is suitable when the data volume fits in one server's RAM but automatic failover is needed without manual intervention.
Architecture
Minimal setup: 1 master + 2 replicas + 3 Sentinel instances. Three Sentinels are needed for a working quorum: when they lose the connection to the master, the Sentinels vote on starting a failover, and a majority is required (quorum ≥ 2). With fewer Sentinels, split-brain is possible.
Sentinel can run on the same servers as Redis—separate machines are not required.
Server 1: Redis Master (6379) + Sentinel (26379)
Server 2: Redis Replica 1 (6379) + Sentinel (26379)
Server 3: Redis Replica 2 (6379) + Sentinel (26379)
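Why exactly three: a failover must be authorized by a majority of all known Sentinels, floor(N/2) + 1. A quick sketch of the arithmetic (plain shell, no Redis required) shows that with two Sentinels the majority is also two, so losing either one blocks failover entirely:

```shell
# Majority needed to authorize a failover is floor(N/2) + 1.
for n in 2 3 5; do
  majority=$(( n / 2 + 1 ))
  echo "sentinels=$n majority=$majority tolerated_failures=$(( n - majority ))"
done
# sentinels=2 majority=2 tolerated_failures=0
# sentinels=3 majority=2 tolerated_failures=1
# sentinels=5 majority=3 tolerated_failures=2
```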
Configuring Master and Replicas
Master /etc/redis/redis.conf (server 1):
bind 0.0.0.0
port 6379
requirepass RedisPassword123
masterauth RedisPassword123
maxmemory 4gb
maxmemory-policy volatile-lru
appendonly yes
appendfsync everysec
protected-mode no
Replica /etc/redis/redis.conf (server 2, 3):
bind 0.0.0.0
port 6379
requirepass RedisPassword123
masterauth RedisPassword123
replicaof 192.168.1.10 6379
replica-read-only yes
replica-lazy-flush no
appendonly yes
appendfsync everysec
protected-mode no
replica-read-only yes: replicas accept only read commands; all writes go through the master.
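To confirm the topology from a replica once everything is up (a check against the live setup; the IP follows the layout above, and --no-auth-warning just suppresses the CLI's password warning):

```shell
# Ask Replica 1 about its role and master link;
# expect role:slave and master_link_status:up in the output
redis-cli -h 192.168.1.11 -a RedisPassword123 --no-auth-warning INFO replication

# A write on a read-only replica is rejected:
redis-cli -h 192.168.1.11 -a RedisPassword123 --no-auth-warning SET key value
# (error) READONLY You can't write against a read only replica.
```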
Sentinel Configuration
/etc/redis/sentinel.conf on each server (only sentinel announce-ip changes):
port 26379
daemonize yes
logfile /var/log/redis/sentinel.log
# Master name, IP, port, quorum
sentinel monitor mymaster 192.168.1.10 6379 2
# Password to connect to Redis
sentinel auth-pass mymaster RedisPassword123
# Milliseconds without response before master is considered unavailable
sentinel down-after-milliseconds mymaster 5000
# Parallel replica switchover
sentinel parallel-syncs mymaster 1
# Failover timeout
sentinel failover-timeout mymaster 60000
# Notification on failover
sentinel notification-script mymaster /opt/redis/notify.sh
# Announce own IP (important with NAT/Docker)
sentinel announce-ip 192.168.1.10
sentinel announce-port 26379
quorum 2: 2 of the 3 Sentinels must agree that the master is unreachable before it is marked objectively down and failover begins. With down-after-milliseconds 5000, failover starts roughly 5–15 seconds after the master goes down.
Starting Sentinel:
redis-sentinel /etc/redis/sentinel.conf
# or
redis-server /etc/redis/sentinel.conf --sentinel
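In production, running Sentinel under systemd is preferable to launching it by hand. On Debian/Ubuntu the redis-sentinel package ships a unit for this (the unit name may differ on other distros):

```shell
# Enable and start Sentinel as a supervised service
systemctl enable --now redis-sentinel
systemctl status redis-sentinel
```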
Status Check
# Connect to Sentinel
redis-cli -p 26379
# Master info
SENTINEL masters
SENTINEL master mymaster
# Replica list
SENTINEL replicas mymaster
# Sentinel list
SENTINEL sentinels mymaster
# Manually initiate failover (for testing)
SENTINEL failover mymaster
After SENTINEL failover mymaster, Sentinel promotes one replica to master and rewrites the configs. The old master (if still alive) becomes a replica of the new master.
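Before trusting the setup, SENTINEL CKQUORUM verifies that enough Sentinels are currently reachable to reach quorum and authorize a failover (run against the live deployment):

```shell
redis-cli -p 26379 SENTINEL ckquorum mymaster
# e.g.: OK 3 usable sentinels. Quorum and failover authorization can be reached
```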
Connecting Applications via Sentinel
Clients don't connect directly to the master. They connect to a Sentinel, ask it for the current master address, then open a connection to that address. After a failover, clients obtain the new master address from Sentinel the same way.
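The discovery step looks like this at the command level. The Sentinel reply below is simulated so the sketch runs standalone; against the live setup you would capture the output of redis-cli -p 26379 SENTINEL get-master-addr-by-name mymaster, which prints the master IP on the first line and the port on the second:

```shell
# Simulated reply of: SENTINEL get-master-addr-by-name mymaster
reply='192.168.1.11
6379'
ip=$(echo "$reply" | sed -n 1p)     # first line: master IP
port=$(echo "$reply" | sed -n 2p)   # second line: master port
echo "connect to ${ip}:${port}"     # connect to 192.168.1.11:6379
```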
PHP phpredis:
$sentinel = new RedisSentinel(
host: '192.168.1.10',
port: 26379,
timeout: 2.5,
persistent: null,
retryInterval: 100,
readTimeout: 2.5,
auth: 'RedisPassword123'
);
$masterInfo = $sentinel->master('mymaster');
// ['name' => 'mymaster', 'ip' => '192.168.1.11', 'port' => '6379', ...]
Predis with Sentinel support:
use Predis\Client;
$client = new Client(
[
'tcp://192.168.1.10:26379',
'tcp://192.168.1.11:26379',
'tcp://192.168.1.12:26379',
],
[
'replication' => 'sentinel',
'service' => 'mymaster',
'parameters' => [
'password' => 'RedisPassword123',
],
]
);
Laravel config/database.php with Sentinel:
'redis' => [
'client' => 'predis',
'options' => [
'replication' => 'sentinel',
'service' => env('REDIS_SENTINEL_SERVICE', 'mymaster'),
'parameters' => [
'password' => env('REDIS_PASSWORD'),
'database' => 0,
],
],
'default' => [
['host' => '192.168.1.10', 'port' => 26379],
['host' => '192.168.1.11', 'port' => 26379],
['host' => '192.168.1.12', 'port' => 26379],
],
],
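The env() calls in the config above expect these entries in .env (values taken from this article's setup):

```
REDIS_SENTINEL_SERVICE=mymaster
REDIS_PASSWORD=RedisPassword123
```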
Notification Script on Failover
/opt/redis/notify.sh:
#!/bin/bash
# Sentinel calls notification scripts with exactly two arguments:
# the event type and the event description
EVENT_TYPE=$1
EVENT_DESCRIPTION=$2
TELEGRAM_BOT_TOKEN="your_token"
TELEGRAM_CHAT_ID="your_chat_id"
MESSAGE="Redis Sentinel Event: $EVENT_TYPE
Details: $EVENT_DESCRIPTION
Time: $(date)"
curl -s -X POST "https://api.telegram.org/bot${TELEGRAM_BOT_TOKEN}/sendMessage" \
-d "chat_id=${TELEGRAM_CHAT_ID}" \
-d "text=${MESSAGE}"
chmod +x /opt/redis/notify.sh
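The two-argument contract can be checked without hitting Telegram. This stand-in (a local sketch, not part of the deployed script) prints the message instead of sending it, with a sample event description of the kind Sentinel passes:

```shell
# Stand-in for notify.sh: same argument contract, prints instead of curl
notify() {
  local event_type=$1
  local event_description=$2
  printf 'Redis Sentinel Event: %s\nDetails: %s\n' "$event_type" "$event_description"
}

# Example invocation (sample event description)
notify "+sdown" "master mymaster 192.168.1.10 6379"
```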
Simulating Failover
Test before production:
# On server with master—stop Redis
systemctl stop redis
# Observe in Sentinel logs
tail -f /var/log/redis/sentinel.log
# After 5–15 seconds, one of the replicas becomes the master
redis-cli -p 26379 SENTINEL master mymaster
# ip field shows new master
# Start old master again—it becomes a replica
systemctl start redis
redis-cli -p 26379 SENTINEL replicas mymaster
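While running the test, a simple loop makes the address flip visible (it polls the Sentinel from the walkthrough once per second; stop with Ctrl-C):

```shell
# Watch the current master address during a failover test
while true; do
  redis-cli -p 26379 SENTINEL get-master-addr-by-name mymaster | paste -sd: -
  sleep 1
done
```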
Sentinel vs. Cluster
Sentinel: for datasets that fit on one server. Automatic failover, read scaling via replicas, simple setup. No horizontal write scaling.
Cluster: for data larger than one server's RAM, with horizontal write scaling. More complex setup and limitations on multi-key operations.
Timeline
Sentinel setup with 3 servers, automatic failover testing, client configuration, and notifications—1–2 business days.