Blockchain Node Monitoring Setup

Blockchain Node Monitoring

The node went down at 3 AM, your application started returning errors, and users couldn't make transactions. You found out at 9 AM from the first complaining user. That is typical without monitoring. With proper monitoring, you get an alert within 2 minutes of the problem starting, not 6 hours later.

What to Monitor

For any node (go-ethereum, Bor, Bitcoin Core, Solana validator), the key metrics are the same:

Synchronization is the main metric. A node can be running yet lag hundreds of blocks behind the network:

  • Lag in blocks: current_block - network_head_block
  • Normal: 0–5 blocks lag
  • Alert: > 10 blocks for EVM networks, > 2 for Solana
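The lag calculation above can be sketched as a plain JSON-RPC comparison between the local node and a trusted reference endpoint. A minimal sketch using only the standard library; both URLs are placeholders to substitute with your own:

```python
import json
import urllib.request

LOCAL_RPC = "http://localhost:8545"       # assumption: your node's RPC port
REFERENCE_RPC = "https://example-rpc.invalid"  # placeholder: any trusted public RPC

def rpc_block_number(url: str) -> int:
    """Ask an EVM node for its head block via eth_blockNumber."""
    payload = json.dumps({"jsonrpc": "2.0", "method": "eth_blockNumber",
                          "params": [], "id": 1}).encode()
    req = urllib.request.Request(url, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return int(json.load(resp)["result"], 16)  # result is hex, e.g. "0x10d4f"

def sync_lag(local_block: int, network_block: int) -> int:
    """Lag in blocks: 0-5 is normal; alert above 10 on EVM networks."""
    return max(network_block - local_block, 0)
```

Run `sync_lag(rpc_block_number(LOCAL_RPC), rpc_block_number(REFERENCE_RPC))` on a timer and alert on the result.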

Peer count: the number of connected peers. Below 3, the node is likely isolated from the network; at 0, suspect a network problem or a wrong config.

RPC availability: does the node respond to requests? eth_blockNumber is the simplest health check.

System metrics: CPU, RAM, disk space. An archive node's disk grows 1–2 GB per day; without monitoring it fills up within a few months and the node stops.
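Given that growth rate, disk headroom is easy to estimate. A minimal sketch; the mount point and the 2 GB/day default are assumptions to adjust for your node:

```python
import shutil

DATA_DIR = "/"  # assumption: mount point holding the node's chain data

def days_until_full(free_bytes: int, growth_gb_per_day: float = 2.0) -> float:
    """Days of disk headroom left at a steady growth rate (default 2 GB/day)."""
    return (free_bytes / 1024**3) / growth_gb_per_day

def headroom_days() -> float:
    """Headroom for the data directory right now."""
    return days_until_full(shutil.disk_usage(DATA_DIR).free)
```

Alert when the result drops below, say, 30 days, so there is time to provision a bigger disk.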

Monitoring Stack: Prometheus + Grafana

Standard approach for production:

go-ethereum exports metrics in Prometheus format out of the box:

geth --metrics --metrics.addr 127.0.0.1 --metrics.port 6060

Available: chain_head_block, p2p_peers, rpc_calls, timing metrics.

For nodes without native Prometheus support, write a small exporter:

# Simple exporter for any EVM node
import time

from prometheus_client import Gauge, start_http_server
from web3 import Web3

node_block = Gauge('node_current_block', 'Current block number')
node_peers = Gauge('node_peer_count', 'Number of peers')

w3 = Web3(Web3.HTTPProvider('http://localhost:8545'))

def collect():
    node_block.set(w3.eth.block_number)
    node_peers.set(w3.net.peer_count)

if __name__ == '__main__':
    start_http_server(9101)  # endpoint for Prometheus to scrape
    while True:
        collect()
        time.sleep(15)

Grafana dashboards: ready-made dashboards for most nodes (Ethereum, Polygon, BSC) are available on Grafana.com. Import one by ID and adapt it to your stack.

Alerting

Alert rules are defined on the Prometheus side; Alertmanager (part of the Prometheus stack) then routes the resulting alerts to notification channels. A rules file:

# alerts.yml
groups:
  - name: blockchain-node
    rules:
      - alert: NodeSyncLag
        expr: (network_head_block - node_current_block) > 10
        for: 2m
        labels:
          severity: critical
        annotations:
          summary: "Node is lagging {{ $value }} blocks behind"

      - alert: NodeRPCDown
        expr: up{job="ethereum-node"} == 0
        for: 1m
        labels:
          severity: critical

      - alert: DiskSpaceLow
        expr: (node_filesystem_avail_bytes / node_filesystem_size_bytes) < 0.15
        for: 5m
        labels:
          severity: warning

Notification channels (Telegram, Slack, PagerDuty) are wired up via Alertmanager receivers. For small teams, Telegram is sufficient.
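As an illustration, a minimal Alertmanager receiver for Telegram. Native telegram_configs support requires Alertmanager v0.24 or newer; the token and chat ID are placeholders:

```yaml
# alertmanager.yml (fragment)
route:
  receiver: team-telegram
  group_by: ['alertname']
  repeat_interval: 4h

receivers:
  - name: team-telegram
    telegram_configs:
      - bot_token: '<telegram-bot-token>'   # placeholder
        chat_id: -1001234567890             # placeholder: group chat ID
        parse_mode: 'HTML'
```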

Uptime Checks: External Monitoring

Prometheus monitors from the inside. You also need external monitoring, in case the server itself or its network goes down:

  • Uptime Kuma (self-hosted)—simple, supports HTTP, TCP, JSON-query checks
  • Better Uptime / BetterStack—SaaS, convenient for small teams
  • Healthchecks.io—for cron-like checks that should run regularly

For a node: an HTTP check that POSTs {"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1} to the RPC endpoint. If it returns 200 with a result, the node is alive at a basic level.
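That probe reduces to one request plus a status-and-result check. A minimal standard-library sketch; the RPC URL is whatever your uptime checker targets:

```python
import json
import urllib.request

def is_healthy(status: int, body: bytes) -> bool:
    """HTTP 200 plus a JSON-RPC 'result' field counts as alive."""
    if status != 200:
        return False
    try:
        return "result" in json.loads(body)
    except ValueError:
        return False

def probe(rpc_url: str, timeout: float = 5.0) -> bool:
    """One eth_blockNumber round-trip; False on any network error."""
    payload = json.dumps({"jsonrpc": "2.0", "method": "eth_blockNumber",
                          "params": [], "id": 1}).encode()
    req = urllib.request.Request(rpc_url, data=payload,
                                 headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return is_healthy(resp.status, resp.read())
    except OSError:  # URLError, refused connections, timeouts
        return False
```

Checking for a result field (not just a 200) matters: a node behind a proxy can return 200 with an error body while the node itself is down.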

Logs

Centralized logging is a separate topic; the minimum is journald with rotation plus grep over error levels. For multiple nodes, use Loki + Grafana.

Critical patterns in geth logs to alert on: "database corruption", "fatal error", "peer discovery disabled".
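Matching those patterns in a log stream (for example, piped from journalctl -u geth -f) can be sketched in a few lines; the pattern list is the one above, matched case-insensitively:

```python
import sys

CRITICAL_PATTERNS = (
    "database corruption",
    "fatal error",
    "peer discovery disabled",
)

def critical_lines(lines):
    """Yield only log lines containing a critical pattern (case-insensitive)."""
    for line in lines:
        lowered = line.lower()
        if any(pattern in lowered for pattern in CRITICAL_PATTERNS):
            yield line

if __name__ == '__main__':
    for line in critical_lines(sys.stdin):
        print(line, end="")  # replace with a webhook/Telegram call in practice
```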

What Gets Set Up in 2–3 Days

  • Prometheus + Grafana on your server or in the cloud
  • Metrics exporter for your node (go-ethereum, Bor, Bitcoin Core, or other)
  • Alerts to Telegram/Slack on sync lag, RPC unavailability, and system metrics
  • External uptime monitoring
  • Dashboard with current state and metrics history