Arbitrum Node Deployment
Arbitrum One is an optimistic rollup on top of Ethereum. An Arbitrum node doesn't just "synchronize the blockchain": it verifies L2 state relative to L1, and its configuration determines what level of trust you actually get.
Types of Arbitrum Nodes
- **Full node**: synchronizes all transactions and state but does not verify fraud proofs independently. Sufficient for most use cases (application RPC, indexing).
- **Archive node**: stores the full history of all states. Needed for eth_getStorageAt on historical blocks and for test forks. Requires significantly more disk space.
- **Validator node**: actively verifies state assertions and can challenge fraudulent ones. Requires an ETH stake; intended for exchange-level operators and protocol teams.
For the vast majority of tasks, a full node or an archive node is what you need.
System Requirements
| Type | CPU | RAM | SSD | Network |
|---|---|---|---|---|
| Full node | 4 cores | 8 GB | 500 GB NVMe | 100 Mbps |
| Archive node | 8 cores | 16 GB | 3+ TB NVMe | 200 Mbps |
Arbitrum needs NVMe storage: an HDD is catastrophically slow for synchronization. On AWS, c6i.xlarge fits a full node and r6i.2xlarge an archive node.
Deployment via Docker
The official method is Docker Compose. The Arbitrum node (nitro) requires a connection to an Ethereum L1 node (Geth, Erigon) or an L1 RPC endpoint:
```yaml
# docker-compose.yml
services:
  nitro:
    image: offchainlabs/nitro-node:v3.2.0-d81324d
    ports:
      - "8547:8547"   # HTTP RPC
      - "8548:8548"   # WebSocket
    volumes:
      - ./arbitrum-data:/home/user/.arbitrum
    command:
      - --l1.url=https://mainnet.infura.io/v3/${INFURA_KEY}
      - --l2.chain-id=42161
      - --http.api=eth,net,web3,debug
      - --http.corsdomain=*
      - --http.addr=0.0.0.0
      - --http.vhosts=*
      - --ws.port=8548
      - --ws.addr=0.0.0.0
      - --ws.api=eth,net,web3,debug
      - --node.data-availability.enable=false
    restart: unless-stopped
```
For an archive node, add to the `command` list:

```yaml
      - --node.caching.archive
```
Snapshots for Fast Synchronization
Syncing Arbitrum One from scratch takes several days. Arbitrum provides snapshots for a quick start:
```bash
# Download the latest snapshot (several hundred GB for a full node)
curl -O https://snapshot.arbitrum.foundation/arb1/nitro-genesis.tar

# Extract into the data directory
tar -xvf nitro-genesis.tar -C ./arbitrum-data/
```
After restoring a snapshot, the node only has to catch up the difference, which takes from a few hours to a day.
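The catch-up time can be estimated with simple arithmetic. A sketch, where both rates are assumptions for illustration (measure your own node's sync speed from its logs):

```python
def catchup_hours(lag_blocks: int, sync_blocks_per_sec: float) -> float:
    """Hours needed to replay `lag_blocks` at an observed sync rate."""
    return lag_blocks / sync_blocks_per_sec / 3600

# Example: a week-old snapshot, assuming ~4 blocks/s chain growth
# and an assumed ~100 blocks/s replay speed on NVMe.
week_of_blocks = 4 * 86_400 * 7
print(round(catchup_hours(week_of_blocks, 100), 1))  # ~6.7 hours
```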
Monitoring Synchronization Status
```bash
# Current node block
curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
  http://localhost:8547

# Synchronization status
curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_syncing","params":[],"id":1}' \
  http://localhost:8547
# false — synchronized; an object with currentBlock/highestBlock — still syncing
```
A useful metric is lag behind the chain head. Alert if the node falls more than 100 blocks behind: that is a sign of an L1 RPC or disk I/O problem.
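The lag check described above is easy to script. A minimal sketch in Python: the parsing logic follows the JSON-RPC responses shown above, while the alert threshold and the idea of polling a local endpoint are assumptions to adapt to your monitoring stack:

```python
import json

def parse_block_number(rpc_response: str) -> int:
    """Parse an eth_blockNumber JSON-RPC response body into an int."""
    return int(json.loads(rpc_response)["result"], 16)

def sync_lag(syncing_result) -> int:
    """Block lag from an eth_syncing result; 0 when the node reports false."""
    if syncing_result is False:
        return 0
    return int(syncing_result["highestBlock"], 16) - int(syncing_result["currentBlock"], 16)

def should_alert(lag_blocks: int, threshold: int = 100) -> bool:
    """Fire once the node falls more than `threshold` blocks behind."""
    return lag_blocks > threshold

# Example with a canned response (a live check would POST to http://localhost:8547):
resp = '{"jsonrpc":"2.0","id":1,"result":"0x112a880"}'
print(parse_block_number(resp))        # 18000000
print(should_alert(sync_lag(False)))   # False
```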
Nitro vs Classic
Arbitrum Classic (the old architecture, used until ~2022) covers data up to block 22207817; Nitro covers everything after. If you need historical data from before the Nitro migration, you either run a separate classic node or use Arbitrum's official archive RPC for old blocks.
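In practice this split means request routing by block number. A minimal sketch, where the endpoint URLs and function name are illustrative and the cutover block is taken from the text above:

```python
# First Nitro block on Arbitrum One; Classic serves everything before it.
NITRO_GENESIS_BLOCK = 22207817

def rpc_for_block(block_number: int,
                  classic_url: str = "http://classic:8547",  # hypothetical endpoints
                  nitro_url: str = "http://nitro:8547") -> str:
    """Pick the RPC endpoint that can actually serve this historical block."""
    return classic_url if block_number < NITRO_GENESIS_BLOCK else nitro_url

print(rpc_for_block(1_000_000))    # http://classic:8547
print(rpc_for_block(250_000_000))  # http://nitro:8547
```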
Using the Node
After synchronization the node serves standard JSON-RPC on http://localhost:8547. Applications in the same Docker network can use http://nitro:8547. For public access, put nginx with rate limiting in front of the node.
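The nginx front mentioned above can be sketched as follows; the domain, zone size, and rate are placeholders to tune for your traffic:

```nginx
# Sketch: per-IP rate limiting in front of the nitro RPC port
limit_req_zone $binary_remote_addr zone=rpc:10m rate=20r/s;

server {
    listen 80;
    server_name rpc.example.com;   # placeholder domain

    location / {
        limit_req zone=rpc burst=40 nodelay;  # absorb short spikes, drop sustained floods
        proxy_pass http://127.0.0.1:8547;
        proxy_set_header Host $host;
    }
}
```

`burst` lets clients briefly exceed the steady rate without errors; requests beyond it get HTTP 503 (or 429 if you set `limit_req_status`).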
Deploying a full node from a snapshot takes 4–8 hours (download plus catch-up); an archive node from scratch takes 1–3 days.







