Deploying a BSC Node
BSC (BNB Smart Chain) is an EVM-compatible network with Proof of Staked Authority (PoSA) consensus and 21 active validators. A BSC node takes significantly more disk space than an Ethereum one: as of 2024, archive chaindata exceeds 3 TB, making the choice of node type a critical decision.
Types of BSC Nodes
| Type | Size | Capabilities | Use Case |
|---|---|---|---|
| Full Node (fast sync) | ~1.5 TB | Current state + recent history | RPC for dApps, monitoring |
| Full Node (snap sync) | ~700 GB | Current state | Quick start |
| Archive Node | 3+ TB | Complete history of all states | eth_call on old blocks, analytics |
| Validator Node | ~700 GB | Consensus participation | BNB staking |
For most tasks (own RPC, event indexing, contract interaction), a Full Node with snap sync is sufficient.
Server Requirements
CPU: 16+ vCPU (BSC is more CPU-intensive than Ethereum due to its more frequent blocks)
RAM: 32 GB minimum, 64 GB recommended for stable operation
Disk: 2 TB NVMe SSD (HDD is not suitable: its I/O is too slow)
Net: 100 Mbps+, preferably low latency to Asian peers
OS: Ubuntu 22.04 LTS
BSC produces a block every ~3 seconds (vs 12 s for Ethereum). This means 4x more blocks in the same period and proportionally more I/O and CPU load during sync.
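The block-rate arithmetic is easy to check directly:

```shell
# Blocks per day at a 3 s (BSC) vs 12 s (Ethereum) block time
echo "BSC: $((86400 / 3)) blocks/day"    # 28800
echo "ETH: $((86400 / 12)) blocks/day"   # 7200
```

28,800 vs 7,200 blocks per day is the 4x factor mentioned above.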
Installing geth (BSC Fork)
BSC runs on bsc, BNB Chain's fork of go-ethereum:
# Option 1: build from source (requires the Go toolchain)
apt-get update && apt-get install -y build-essential git golang-go
# git clone https://github.com/bnb-chain/bsc.git && cd bsc && make geth

# Option 2: pre-built binary (recommended for production)
wget https://github.com/bnb-chain/bsc/releases/latest/download/geth_linux
chmod +x geth_linux
mv geth_linux /usr/local/bin/geth
# Check version
geth version
Configuration
Download genesis file and network config:
mkdir -p /data/bsc && cd /data/bsc
# Download genesis and mainnet config
wget https://github.com/bnb-chain/bsc/releases/latest/download/mainnet.zip
unzip mainnet.zip
# Initialize genesis block
geth --datadir /data/bsc init genesis.json
Configuration file config.toml:
[Eth]
NetworkId = 56
SyncMode = "snap" # snap for quick start, full for full verification
[Eth.TxPool]
Locals = []
Journal = "transactions.rlp"
Rejournal = 3600000000000
PriceLimit = 3000000000 # 3 Gwei minimum (BSC specific)
PriceBump = 10
AccountSlots = 512
GlobalSlots = 10240
AccountQueue = 256
GlobalQueue = 5120
Lifetime = 10800000000000
[Node]
DataDir = "/data/bsc"
HTTPHost = "0.0.0.0"
HTTPPort = 8545
HTTPVirtualHosts = ["*"]
HTTPModules = ["eth", "net", "web3", "txpool"]
WSHost = "0.0.0.0"
WSPort = 8546
WSModules = ["eth", "net", "web3"]
[Node.P2P]
MaxPeers = 50
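The Rejournal and Lifetime values above look opaque because geth encodes TOML durations in nanoseconds. A quick sanity check of the numbers used in this config:

```shell
# geth serializes TOML durations as nanoseconds
echo "Rejournal: $((3600000000000 / 1000000000)) s"    # 3600 s = 1 hour
echo "Lifetime:  $((10800000000000 / 1000000000)) s"   # 10800 s = 3 hours
```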
Launch
# --cache 16384 allocates a 16 GB in-memory cache, critical for sync speed.
# --history.transactions takes a block count (0 = keep the full tx index);
# it supersedes the deprecated --txlookuplimit. The BSC-specific --diffsync
# flag found in older guides has been removed from recent bsc releases.
geth \
--config /data/bsc/config.toml \
--datadir /data/bsc \
--cache 16384 \
--rpc.allow-unprotected-txs \
--history.transactions 0 \
--syncmode snap \
--snapshot \
2>&1 | tee /var/log/bsc-node.log
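Once the node is up and the HTTP RPC is listening, a quick sanity check confirms you are talking to BSC mainnet; its chainId is 56, i.e. 0x38 in hex (host and port follow the config above):

```shell
# BSC mainnet chainId is 56:
printf 'expected result: 0x%x\n' 56

# Query the node (adjust the host if RPC is bound elsewhere):
curl -s -X POST http://127.0.0.1:8545 \
  -H 'Content-Type: application/json' \
  --data '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}' \
  || echo "RPC not reachable yet"
```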
Systemd Unit
[Unit]
Description=BSC Full Node
After=network.target
[Service]
Type=simple
User=bsc
ExecStart=/usr/local/bin/geth \
--config /data/bsc/config.toml \
--datadir /data/bsc \
--cache 16384 \
--syncmode snap \
--snapshot
Restart=on-failure
RestartSec=10
LimitNOFILE=65536
StandardOutput=journal
StandardError=journal
[Install]
WantedBy=multi-user.target
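Assuming the unit is saved as /etc/systemd/system/bsc.service (the file name and the bsc user are this guide's choices, not requirements), installation looks like:

```shell
# Create the service user referenced by the unit (one-time)
useradd --system --home /data/bsc --shell /usr/sbin/nologin bsc
chown -R bsc:bsc /data/bsc

# Register and start the service
systemctl daemon-reload
systemctl enable --now bsc
journalctl -u bsc -f    # follow the node's logs
```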
Monitoring Sync
# Current node block
geth attach --exec "eth.blockNumber" /data/bsc/geth.ipc
# Sync status
geth attach --exec "eth.syncing" /data/bsc/geth.ipc
# Returns false when synced, or an object with progress
# Number of peers
geth attach --exec "net.peerCount" /data/bsc/geth.ipc
# If 0, there is a P2P connectivity problem (firewall, closed ports)
Ports to open: 30303/tcp and 30303/udp for P2P. RPC (8545) and WS (8546) — only on localhost or behind reverse proxy with auth.
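With ufw as an example (any firewall works; iptables/nftables rules are equivalent), the sketch above translates to:

```shell
# P2P must be reachable from outside
ufw allow 30303/tcp
ufw allow 30303/udp
# RPC/WS stay local: reject them at the edge so only
# the reverse proxy on this host can reach them
ufw deny 8545/tcp
ufw deny 8546/tcp
```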
Snap Sync: What Happens
In snap sync, the node doesn't download or verify each historical block. Instead, it downloads a snapshot of the current state (state trie), then catches up with fresh blocks. This is much faster (hours instead of days), but the node cannot respond to requests like eth_call or eth_getBalance for historical blocks — only recent ones.
Snap sync takes ~6–24 hours on NVMe SSD with good connectivity.
Nginx Reverse Proxy with Rate Limiting
Don't expose RPC directly. Minimal protection:
# In the http block: define the shared limit zone first
limit_req_zone $binary_remote_addr zone=rpc_limit:10m rate=100r/s;

upstream bsc_rpc {
    server 127.0.0.1:8545;
}

server {
    listen 443 ssl;
    server_name rpc.yourdomain.com;
    # ssl_certificate / ssl_certificate_key omitted for brevity

    location / {
        limit_req zone=rpc_limit burst=50 nodelay;
        proxy_pass http://bsc_rpc;
        proxy_set_header Host $host;
    }
}
An unprotected RPC quickly starts receiving requests from bots scanning the network.
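A crude way to see the limiter in action (rpc.yourdomain.com is the placeholder from the config above): fire more requests than the burst allows and count status codes; 503 is nginx's default limit_req rejection response.

```shell
# Send 200 rapid requests and tally the HTTP status codes
for i in $(seq 1 200); do
  curl -s -o /dev/null -w '%{http_code}\n' \
    -X POST https://rpc.yourdomain.com/ \
    -H 'Content-Type: application/json' \
    -d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
done | sort | uniq -c
# a mix of 200s and 503s means the limiter is working
```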