Polygon Node Deployment
A Polygon node becomes necessary when public RPC stops being enough: Alchemy and Infura rate-limit requests at peak load, offer no uptime guarantees, and a serious dApp cannot afford to depend on someone else's infrastructure. Polygon PoS is not an Ethereum L2 but a sidechain with its own validator set and a two-layer Heimdall/Bor architecture.
Polygon PoS Node Architecture
A Polygon PoS node consists of two components that must run simultaneously:
Heimdall is the validator layer (Tendermint-based). It handles the consensus layer, checkpoint finalization on Ethereum mainnet, and the bridge between Polygon and Ethereum. Default ports: 26656 (P2P), 1317 (REST API).
Bor is the execution layer, a go-ethereum fork. It actually executes transactions and smart contracts and is compatible with the Ethereum JSON-RPC API. Default ports: 30303 (P2P), 8545 (HTTP RPC), 8546 (WebSocket).
Both components must run and stay in sync: Bor pulls validator-set and checkpoint data from Heimdall's API. A common deployment mistake is starting Bor before Heimdall has caught up to the chain head.
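That startup ordering can be scripted. A minimal sketch, assuming `curl` and `jq` are installed and Heimdall's Tendermint RPC listens on its default port 26657 (the function names here are illustrative, not part of any official tooling):

```shell
# heimdall_synced: given Tendermint /status JSON as $1, succeed only when
# sync_info.catching_up is false, i.e. Heimdall has caught up to the head.
heimdall_synced() {
  echo "$1" | jq -e '.result.sync_info.catching_up == false' > /dev/null
}

# wait_for_heimdall: poll the local status endpoint until Heimdall is synced.
# Defined but not invoked here; call it from your startup script before
# launching Bor.
wait_for_heimdall() {
  until heimdall_synced "$(curl -s localhost:26657/status)"; do
    echo "Heimdall still catching up, waiting..."
    sleep 30
  done
  echo "Heimdall synced, safe to start Bor"
}
```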
Server Requirements
| Node Type | CPU | RAM | Disk | Network |
|---|---|---|---|---|
| Full node (archive) | 16+ cores | 64+ GB | 8+ TB NVMe | 1 Gbps |
| Full node (pruned) | 8 cores | 32 GB | 500 GB NVMe | 500 Mbps |
| Sentry node | 4 cores | 16 GB | 200 GB | 250 Mbps |
An archive node keeps the full history of all states; it is needed for eth_getStorageAt on historical blocks and for analytics. A pruned node is sufficient for most dApps.
The Polygon chain is busy and state grows quickly: six months after deployment the disk will be noticeably fuller, so provision with headroom.
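"Provision with headroom" is easy to make actionable with a small cron-able check. A sketch, where the datadir path and the 80% threshold are assumptions to adjust for your layout:

```shell
# disk_guard: warn when the partition holding a node datadir crosses a
# usage threshold (80% here; both path and threshold are illustrative).
disk_guard() {
  dir="${1:-/}"
  used=$(df --output=pcent "$dir" | tail -n 1 | tr -dc '0-9')
  if [ "$used" -ge 80 ]; then
    echo "WARN: $dir partition at ${used}%, plan disk expansion"
    return 1
  fi
  echo "OK: $dir partition at ${used}%"
}
```

Run it from cron against the Bor datadir (e.g. `disk_guard /var/lib/bor`) and wire the WARN line into your alerting.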
Deployment via Official Scripts
Polygon maintains an official Ansible playbook:
git clone https://github.com/maticnetwork/node-ansible
cd node-ansible
# Edit inventory.yml: specify IP and SSH keys
# Choose type: sentry, validator or fullnode
ansible-playbook playbooks/network.yml \
--inventory inventory.yml \
-e "node_type=sentry network_version=mainnet node_name=my-polygon-node"
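For orientation, an inventory in standard Ansible syntax might look like the sketch below; the exact host/group layout node-ansible expects may differ, so verify against the template shipped in the repo (the IP, user, and key path are placeholders):

```yaml
# inventory.yml (sketch; verify structure against the repo's own template)
all:
  hosts:
    my-polygon-node:
      ansible_host: 203.0.113.10                    # your server IP
      ansible_user: ubuntu
      ansible_ssh_private_key_file: ~/.ssh/polygon_node
```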
Manual deployment (more control):
# Heimdall
git clone https://github.com/maticnetwork/heimdall
cd heimdall && make install
heimdalld init --chain-id heimdall-137
# Download genesis: https://raw.githubusercontent.com/maticnetwork/heimdall/master/builder/files/genesis-mainnet-v1.json
heimdalld start --rest-server
# Bor (in separate process)
git clone https://github.com/maticnetwork/bor
cd bor && make bor
bor server --config /path/to/config.toml
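For unattended operation both daemons are normally run as systemd services, with Bor ordered after Heimdall. A sketch, where the binary paths, user, and config locations are assumptions:

```ini
# /etc/systemd/system/heimdalld.service (sketch; adjust paths and user)
[Unit]
Description=Heimdall Daemon
After=network-online.target

[Service]
User=polygon
ExecStart=/usr/local/bin/heimdalld start --rest-server
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

# /etc/systemd/system/bor.service (sketch)
[Unit]
Description=Bor Daemon
After=heimdalld.service      # start only once Heimdall is up
Requires=heimdalld.service

[Service]
User=polygon
ExecStart=/usr/local/bin/bor server --config /etc/bor/config.toml
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```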
Snapshots: Save Synchronization Time
Synchronization from scratch takes several weeks, so use snapshots; the Polygon Foundation publishes them regularly:
# Heimdall snapshot
aria2c -x6 -s6 "https://snapshot-download.polygon.technology/snapshots/heimdall/mainnet/latest.tar.gz"
# Bor snapshot (several TB for archive)
aria2c -x6 -s6 "https://snapshot-download.polygon.technology/snapshots/bor/mainnet/..."
aria2c instead of wget: multithreaded, segmented downloads run 3–5x faster on files this large.
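After downloading, the archive is unpacked into the node's data directory with the daemon stopped. A sketch assuming default home directories; verify the directory layout your specific snapshot expects before extracting:

```shell
# extract_snapshot: unpack a snapshot archive into a data directory.
# The paths in the example below are assumptions, not guaranteed defaults.
extract_snapshot() {
  snap="$1"
  datadir="$2"
  [ -f "$snap" ] || { echo "snapshot $snap not found" >&2; return 1; }
  mkdir -p "$datadir"
  tar -xzf "$snap" -C "$datadir"
}

# Example (run with the daemon stopped):
#   sudo systemctl stop heimdalld
#   extract_snapshot latest.tar.gz "$HOME/.heimdalld/data"
#   sudo systemctl start heimdalld
```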
Bor Configuration for Production
# config.toml
[jsonrpc]
enabled = true
host = "0.0.0.0" # in production bind to 127.0.0.1 and put nginx in front
port = 8545
[jsonrpc.ws]
enabled = true
port = 8546
[p2p]
maxpeers = 50
[cache]
cache = 4096 # in MB; increase if you have more RAM
Don't expose the RPC port directly to the internet. Nginx with rate limiting and an IP whitelist in front of the node is mandatory; otherwise your node becomes a public RPC and burns its resources on other people's requests.
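A minimal nginx sketch of that protection; the domain, allowed subnet, rate limit, and upstream port are placeholders to replace with your own values:

```nginx
# /etc/nginx/conf.d/polygon-rpc.conf (sketch; IPs and limits are placeholders)
limit_req_zone $binary_remote_addr zone=rpc:10m rate=20r/s;

server {
    listen 443 ssl;
    server_name rpc.example.com;
    # ssl_certificate / ssl_certificate_key omitted for brevity

    location / {
        allow 198.51.100.0/24;   # your application servers
        deny  all;
        limit_req zone=rpc burst=40 nodelay;
        proxy_pass http://127.0.0.1:8545;
    }
}
```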
Monitoring Synchronization
// Check sync status via JSON-RPC (ethers v6)
import { JsonRpcProvider } from 'ethers'
const provider = new JsonRpcProvider('http://localhost:8545')         // your node
const publicProvider = new JsonRpcProvider('https://polygon-rpc.com') // public reference
const latestBlock = await provider.getBlockNumber()
const publicBlock = await publicProvider.getBlockNumber()
const lag = publicBlock - latestBlock
if (lag > 10) console.error(`Node is lagging by ${lag} blocks!`)
Heimdall status: `curl localhost:26657/status`; `catching_up: false` in `sync_info` means the node is synchronized.
What Gets Done in 1–3 Days
- selection of the node type for your workload;
- deployment of Heimdall + Bor on your server;
- snapshot download and application for fast synchronization;
- systemd services for autostart;
- an nginx proxy with basic protection;
- monitoring (Grafana + Prometheus with Bor/Heimdall metrics);
- handover of the endpoint for integration into your application.