Blockchain Infrastructure Development

We design and develop full-cycle blockchain solutions: from smart contract architecture to launching DeFi protocols, NFT marketplaces and crypto exchanges. Security audits, tokenomics, integration with existing infrastructure.

Developing Blockchain Infrastructure

"Infrastructure" is everything between blockchain and your product. Smart contracts are written, but without nodes, indexers, event processing pipelines, and monitoring they're useless in production. Infrastructure errors kill projects that passed smart contract audits: lost events, RPC provider downtime during peak load, no replay mechanism when service crashes.

Blockchain Infrastructure Layers

┌─────────────────────────────────────────────────────┐
│                  Application Layer                   │
│           (Frontend, API, Business Logic)            │
├─────────────────────────────────────────────────────┤
│               Data Access Layer                      │
│        (GraphQL API, REST API, WebSocket)            │
├─────────────────────────────────────────────────────┤
│               Indexing Layer                         │
│    (The Graph / custom indexer / event processor)    │
├─────────────────────────────────────────────────────┤
│               Node Layer                             │
│      (Archive node / full node / light client)       │
├─────────────────────────────────────────────────────┤
│               Blockchain Layer                       │
│        (Ethereum / L2 / custom chain)                │
└─────────────────────────────────────────────────────┘

Each layer has its own reliability and scaling requirements. Let's examine each in turn.

Node Layer: Own Node or RPC Provider

When to Run Your Own Node

RPC providers (Infura, Alchemy, QuickNode) are the right choice for roughly 80% of projects. They take on operations, provide an SLA, and scale automatically.

Your own node is needed when:

  • An archive node is required for historical queries (eth_call on past blocks) — providers charge a premium for archive access
  • Rate limits are critical — at 100k+ requests/day a provider becomes expensive or starts throttling
  • Privacy matters — the provider sees every request, which is sensitive for protocols where tx metadata is valuable
  • Specific methods are needed — some debug/trace methods are unavailable from providers

Running Geth/Reth

# Reth (Rust Ethereum) — syncs faster than Geth
reth node \
  --chain mainnet \
  --http \
  --http.addr 0.0.0.0 \
  --http.port 8545 \
  --http.api eth,net,web3,debug,trace \
  --ws \
  --ws.addr 0.0.0.0 \
  --ws.port 8546 \
  --datadir /data/reth
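
In production the node usually runs under a process supervisor so it restarts after crashes. A minimal systemd unit sketch (the binary path, user, and restart policy here are assumptions, not official Reth defaults):

```ini
# /etc/systemd/system/reth.service — hypothetical paths and user
[Unit]
Description=Reth Ethereum node
After=network-online.target

[Service]
User=reth
ExecStart=/usr/local/bin/reth node \
  --chain mainnet \
  --http --http.addr 0.0.0.0 --http.port 8545 \
  --datadir /data/reth
Restart=on-failure
RestartSec=5
LimitNOFILE=1048576

[Install]
WantedBy=multi-user.target
```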

Requirements (Ethereum full node):

  • CPU: 4+ cores
  • RAM: 16+ GB (32 GB recommended)
  • SSD: 2+ TB NVMe (not HDD, not SATA SSD)
  • Bandwidth: 25+ Mbps stable

Reth sync time: ~24–48 hours for a full node, ~2–4 days for an archive node.

RPC Multiplexing

Even with a single provider you need failover. The pattern is a load balancer with health checks:

import { JsonRpcProvider } from 'ethers';

// Rotates requests across several RPC endpoints, skipping unhealthy ones.
class RpcMultiplexer {
  private providers: JsonRpcProvider[];
  private healthStatus: Map<string, boolean> = new Map();

  constructor(endpoints: string[]) {
    this.providers = endpoints.map(url => new JsonRpcProvider(url));
    this.startHealthChecks();
  }

  async getHealthyProvider(): Promise<JsonRpcProvider> {
    const healthy = this.providers.filter(
      (p, i) => this.healthStatus.get(String(i)) !== false
    );
    if (healthy.length === 0) throw new Error('No healthy RPC providers');
    return healthy[Math.floor(Math.random() * healthy.length)];
  }

  private startHealthChecks(): void {
    // Probe every endpoint with a cheap call every 15 seconds.
    setInterval(async () => {
      for (let i = 0; i < this.providers.length; i++) {
        try {
          await this.providers[i].getBlockNumber();
          this.healthStatus.set(String(i), true);
        } catch {
          this.healthStatus.set(String(i), false);
        }
      }
    }, 15_000);
  }
}
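
The multiplexer picks a healthy endpoint, but callers still need retry logic when a request fails mid-flight. A minimal sketch of a failover wrapper (the `withFailover` helper and its signature are illustrative, not part of ethers):

```typescript
// Tries each async attempt in order; returns the first success,
// rethrows the last error if every attempt fails.
async function withFailover<T>(
  attempts: Array<() => Promise<T>>,
): Promise<T> {
  let lastError: unknown;
  for (const attempt of attempts) {
    try {
      return await attempt();
    } catch (e) {
      lastError = e; // fall through to the next endpoint
    }
  }
  throw lastError;
}
```

Usage: `withFailover(providers.map(p => () => p.getBlockNumber()))`.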

Indexing Layer: The Graph vs Custom Indexer

The Graph

For most EVM projects The Graph is the right choice. You write a subgraph (AssemblyScript handlers plus a GraphQL schema), deploy it to Subgraph Studio, and get a GraphQL API.

Pros: decentralized (when used via the Graph Network), fast start, GraphQL out of the box.

Cons: limited handler capabilities (no arbitrary HTTP calls, no complex logic), AssemblyScript instead of TypeScript, inconvenient debugging.

Custom Indexer

A custom indexer is needed when: event processing is complex, multiple contracts have interdependencies, cross-chain data is required, or off-chain sources must be integrated.

import { JsonRpcProvider } from 'ethers';
import { Pool } from 'pg';

// CONTRACT_ADDRESSES, processLog, updateCursor and startRealtimeIndexing
// are project-specific and implemented elsewhere.
class EventIndexer {
  private db: Pool;
  private provider: JsonRpcProvider;

  async indexFromBlock(startBlock: number): Promise<void> {
    let currentBlock = startBlock;
    const headBlock = await this.provider.getBlockNumber();

    while (currentBlock <= headBlock) {
      // Most RPC providers cap getLogs ranges; fetch 1000 blocks per batch.
      const batch = Math.min(currentBlock + 999, headBlock);
      
      const logs = await this.provider.getLogs({
        fromBlock: currentBlock,
        toBlock: batch,
        address: CONTRACT_ADDRESSES,
      });

      await this.db.query('BEGIN');
      try {
        for (const log of logs) {
          await this.processLog(log);
        }
        await this.updateCursor(batch);
        await this.db.query('COMMIT');
      } catch (e) {
        await this.db.query('ROLLBACK');
        throw e;
      }

      currentBlock = batch + 1;
    }

    await this.startRealtimeIndexing(headBlock);
  }
}

Key principles: cursor-based resumption, transactionality, idempotency, reorg handling.
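
Idempotency usually comes down to a unique key per log, so reprocessing after a crash or a reorg replay cannot create duplicates. A sketch of the idea (the in-memory Set stands in for a UNIQUE (tx_hash, log_index) constraint with INSERT ... ON CONFLICT DO NOTHING in Postgres):

```typescript
interface IndexedLog {
  transactionHash: string;
  logIndex: number;
}

// (tx hash, log index) uniquely identifies a log on the canonical chain.
function logKey(log: IndexedLog): string {
  return `${log.transactionHash}:${log.logIndex}`;
}

// Returns true only on first insertion; a replayed log is a no-op,
// mirroring INSERT ... ON CONFLICT DO NOTHING.
function insertOnce(seen: Set<string>, log: IndexedLog): boolean {
  const key = logKey(log);
  if (seen.has(key)) return false;
  seen.add(key);
  return true;
}
```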

Event Processing Pipeline

For high-load systems, use Kafka as the event bus. Separating the listener from the processor lets each scale independently and survive the other's failures.
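
The point of the split: the listener only fetches logs and enqueues them, while the processor consumes at its own pace, so a slow handler never makes the listener fall behind the chain. An in-memory sketch of the decoupling (a real deployment would put Kafka or Redis Streams between the two halves):

```typescript
// Minimal async queue decoupling a fast producer from a slow consumer.
class EventQueue<T> {
  private items: T[] = [];

  push(item: T): void {
    this.items.push(item); // listener returns immediately
  }

  async drain(handler: (item: T) => Promise<void>): Promise<void> {
    while (this.items.length > 0) {
      const item = this.items.shift()!;
      await handler(item); // processor applies backpressure here
    }
  }
}
```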

Monitoring

Prometheus + Grafana. Key metrics: indexer lag behind the head block, transaction processing rate, RPC error rate, block processing time.

Alerts:

  • indexer_lag_blocks > 100 → critical
  • rpc_errors_total rate > 10/min → warning
  • block_processing_seconds > 30 → warning
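
The thresholds above map directly onto Prometheus alerting rules. A sketch (metric names follow the list above; the `for:` durations are assumptions to be tuned per project):

```yaml
groups:
  - name: indexer
    rules:
      - alert: IndexerLagCritical
        expr: indexer_lag_blocks > 100
        for: 5m
        labels:
          severity: critical
      - alert: RpcErrorRateHigh
        # per-second rate converted to per-minute
        expr: rate(rpc_errors_total[5m]) * 60 > 10
        for: 5m
        labels:
          severity: warning
      - alert: SlowBlockProcessing
        expr: block_processing_seconds > 30
        for: 5m
        labels:
          severity: warning
```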

Key Management and Secrets

Privilege separation:

  • Read-only keys (indexers, API): on-chain read only
  • Transaction signing keys (bots, automation): minimal rights, AWS KMS or HSM
  • Admin keys (Gnosis Safe multisig): contract management only, never in automation
  • Rotation — rotate API keys every 90 days

Typical Project Phases

Phase           Content                                        Time
Assessment      Requirements, architecture, pattern selection  1–3 days
Node setup      Node/RPC config, multiplexing                  3–5 days
Indexer         Subgraph or custom indexer                     1–2 weeks
Event pipeline  Kafka/Redis, processors, webhooks              3–5 days
Monitoring      Prometheus + Grafana + alerts                  2–3 days
Load testing    Stress test, optimization                      2–3 days
Documentation   Runbook, incident response                     1–2 days

Total for a typical DeFi protocol: 3–6 weeks to production-ready infrastructure with monitoring and docs.