Setting Up Automatic Failover for Primary Server Failure

Automatic failover is switching traffic to a backup server without human intervention. The goal: reduce RTO (Recovery Time Objective) from "until someone wakes up" to 30-120 seconds. For e-commerce or SaaS, this is the difference between losing 5 minutes of revenue and losing an hour.

Failover Levels and Where Each Applies

DNS-level (Route 53 Health Checks, Cloudflare Failover). The simplest approach: a health check probes the primary every 10-30 seconds and, on failure, repoints the DNS record to the backup server's IP. Failover latency is TTL plus detection time, typically 60-300 seconds. Suitable for most web applications.

Load Balancer (AWS ALB/NLB, nginx upstream). Health checks at the balancer level switch traffic in 5-30 seconds, but both servers usually have to live in the same cloud or region, and the balancer itself must be redundant or it becomes the new single point of failure. A minimal nginx sketch follows.
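
For the nginx upstream variant, a minimal sketch, assuming the application listens on port 8080 on both servers (IPs and the upstream name are placeholders). The backup server receives traffic only while the primary is marked down:

# /etc/nginx/conf.d/failover.conf (sketch; IPs are placeholders)
upstream app_backend {
    server 10.0.0.1:8080 max_fails=3 fail_timeout=10s;  # primary
    server 10.0.0.2:8080 backup;   # used only while primary is marked down
}

server {
    listen 80;
    location / {
        proxy_pass http://app_backend;
        # retry the backup within the same client request on errors
        proxy_next_upstream error timeout http_502 http_503;
    }
}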

VRRP / Keepalived (bare metal / VPS). A virtual IP migrates between servers when the master fails; switching takes 2-5 seconds. The classic choice for on-premise and dedicated setups.

Database failover. A separate concern: the application must learn the address of the new primary DB. Patroni (PostgreSQL), MHA (MySQL), and AWS RDS Multi-AZ handle this automatically; a trimmed Patroni sketch follows.
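
For illustration, a trimmed Patroni configuration, assuming an etcd cluster as the DCS. All names, addresses, and paths are placeholders, and a real config needs authentication and bootstrap settings beyond this sketch:

# /etc/patroni/patroni.yml (trimmed sketch, not a complete config)
scope: pgcluster            # cluster name, identical on all nodes
name: node1                 # unique per node
restapi:
  listen: 0.0.0.0:8008
  connect_address: 10.0.0.1:8008
etcd3:
  hosts: 10.0.0.10:2379,10.0.0.11:2379,10.0.0.12:2379
bootstrap:
  dcs:
    ttl: 30                 # leader lock TTL; failover starts when it expires
    loop_wait: 10
postgresql:
  listen: 0.0.0.0:5432
  connect_address: 10.0.0.1:5432
  data_dir: /var/lib/postgresql/16/main

Applications typically reach the current primary through HAProxy or Patroni's REST API health checks rather than a hardcoded host.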

Implementation on AWS Route 53

Route 53 Failover Policy:
  Primary record → 1.2.3.4 (primary server)
    Health check: HTTPS GET /health, port 443
    Failure threshold: 3 consecutive failures
    Request interval: 10 seconds
  Secondary record → 5.6.7.8 (backup server)
    Evaluate target health: Yes
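
The same health check can be created from the AWS CLI; a sketch, with example.com standing in for the primary's domain:

# Sketch: create the Route 53 health check (domain is a placeholder)
aws route53 create-health-check \
  --caller-reference "primary-health-$(date +%s)" \
  --health-check-config '{
    "Type": "HTTPS",
    "FullyQualifiedDomainName": "example.com",
    "Port": 443,
    "ResourcePath": "/health",
    "RequestInterval": 10,
    "FailureThreshold": 3
  }'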

The /health endpoint in the application should check real state: the database is reachable, the cache responds, disk space is not exhausted. Return 200 only when the application is fully operational.
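
A quick manual test of that contract (the domain is a placeholder); the status code and response time are visible at a glance:

# Expect "200" and a fast response only when DB, cache, and disk are all OK
curl -sS -o /dev/null -w '%{http_code} %{time_total}s\n' https://example.com/health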

Keepalived for Bare Metal / VPS

# /etc/keepalived/keepalived.conf on PRIMARY
vrrp_script check_app {
    script "/usr/local/bin/check_app.sh"   # exit 0 = healthy, non-zero = failed
    interval 5     # run the check every 5 seconds
    weight -20     # subtract 20 from priority while the check is failing
    fall 2         # mark failed after 2 consecutive failures
    rise 2         # mark healthy after 2 consecutive successes
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51   # must match on both nodes
    priority 100           # the BACKUP node uses a lower value, e.g. 90
    advert_int 1           # VRRP advertisement interval, in seconds
    virtual_ipaddress {
        192.168.1.100/24   # the floating IP that clients connect to
    }
    track_script {
        check_app
    }
}

The check_app.sh script verifies application availability locally. After two consecutive failed checks, the master's effective priority drops to 80 (100 minus the weight of 20), and the BACKUP server with priority 90 takes over the virtual IP.
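
What exactly the script checks is up to you. A minimal sketch, assuming a local app on port 8080, local PostgreSQL and Redis, and a 90% disk threshold; it enforces the same conditions the /health endpoint should:

#!/usr/bin/env bash
# /usr/local/bin/check_app.sh -- exit 0 = healthy, non-zero = trigger failover
set -euo pipefail

# Application answers locally
curl -sf -o /dev/null --max-time 3 http://127.0.0.1:8080/health

# PostgreSQL accepts connections
pg_isready -q -h 127.0.0.1 -p 5432

# Redis responds to PING
redis-cli -h 127.0.0.1 ping > /dev/null

# Root filesystem below 90% usage
[ "$(df --output=pcent / | tail -1 | tr -dc '0-9')" -lt 90 ]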

Data Synchronization Between Servers

Failover is meaningless without current data on the backup server:

  • Database: master-slave replication (PostgreSQL streaming replication, MySQL GTID replication). Monitor replication lag and alert if it exceeds 30 seconds (a lag-check sketch follows this list)
  • Files: lsyncd (near-realtime rsync) or S3-compatible storage as a shared point
  • Sessions: Redis with replication, or sticky sessions through the balancer
  • Configuration: Ansible pull from a shared git repository
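
The lag check itself can be a few lines of shell run from cron on the replica; a sketch, assuming local psql access as a user allowed to connect:

#!/usr/bin/env bash
# Sketch: run on the replica; exits non-zero (for your alerting) if lag > 30s
LAG=$(psql -tAc "SELECT COALESCE(EXTRACT(EPOCH FROM now() - pg_last_xact_replay_timestamp()), 0)::int")
if [ "$LAG" -gt 30 ]; then
    echo "replication lag ${LAG}s exceeds 30s" >&2
    exit 1
fi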

Testing Failover

Regular drills are mandatory. Failover that hasn't been tested is failover that won't work when needed.

Check protocol:

  1. Verify monitoring captures baseline state
  2. Simulate failure: systemctl stop nginx or iptables -I INPUT -p tcp --dport 80 -j DROP on primary
  3. Record time until switching
  4. Verify functionality through backup server
  5. Restore primary, verify switchback

Target metrics: detection time < 30s, switch time < 60s, total RTO < 120s.
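
Steps 2-4 of the protocol can be timed with a small script; a sketch, assuming SSH access to the primary and example.com as the public entry point (both are placeholders):

#!/usr/bin/env bash
# Sketch: measure detection + switch time during a drill
START=$(date +%s)
ssh primary.example.com 'sudo systemctl stop nginx'   # step 2: simulate failure

# Step 3: poll until the service answers again (now from the backup)
until curl -sf -o /dev/null --max-time 2 https://example.com/health; do
    sleep 1
done
echo "Recovered in $(( $(date +%s) - START ))s"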

"Split-brain" State and How to Avoid It

Issue: both servers decide they are primary, both hold the virtual IP or accept writes, and the datasets diverge. The classic remedy is fencing (STONITH): on conflict, the losing node is forcibly shut down. In PostgreSQL/Patroni it is solved through a DCS (etcd, Consul, ZooKeeper) acting as arbiter: only the holder of the leader lock may accept writes.
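
On VPS platforms a frequent trigger is blocked multicast: the nodes stop seeing each other's VRRP advertisements and both promote themselves. Switching keepalived to unicast removes that particular cause (node IPs below are placeholders); fencing remains the safety net for everything else:

# Addition to vrrp_instance VI_1 on the primary (mirrored on the backup)
unicast_src_ip 192.168.1.10    # this node
unicast_peer {
    192.168.1.11               # the other node
}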

Setup Timeline

  • DNS failover (Route 53 / Cloudflare) — 1-2 days
  • Keepalived + data synchronization — 3-5 days
  • Full scheme with DB failover (Patroni) — 5-10 days
  • Testing and documentation — 1-2 days

Monitoring Failover Events

Each switchover is an incident that requires investigation. Alertmanager or PagerDuty captures the event, a ticket is created automatically in Jira/Linear, and the post-incident step is a root cause analysis: why did the primary fail?
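
For the Keepalived scheme, one way to capture the event is a notify hook: keepalived runs the script on every state transition if you add notify "/usr/local/bin/notify.sh" to the vrrp_instance block. A sketch using the PagerDuty Events API v2 (the routing key is a placeholder):

#!/usr/bin/env bash
# /usr/local/bin/notify.sh -- keepalived calls it as: notify.sh <TYPE> <NAME> <STATE>
STATE=$3
[ "$STATE" = "MASTER" ] || exit 0    # page only when this node takes over

curl -s -X POST https://events.pagerduty.com/v2/enqueue \
  -H "Content-Type: application/json" \
  -d "{
    \"routing_key\": \"YOUR_PAGERDUTY_ROUTING_KEY\",
    \"event_action\": \"trigger\",
    \"payload\": {
      \"summary\": \"VRRP failover: $(hostname) became MASTER\",
      \"source\": \"$(hostname)\",
      \"severity\": \"critical\"
    }
  }"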