Setting Up CockroachDB for Web Application

CockroachDB is a distributed SQL database that speaks the PostgreSQL wire protocol. It provides horizontal scaling without manual sharding, automatic data replication, and multi-region deployments: capabilities standard PostgreSQL cannot offer without serious add-ons. If your application already uses a Postgres driver, switching to CockroachDB requires minimal code changes.

When CockroachDB makes sense

Global applications with users in multiple regions, where low latency is needed everywhere. Systems that cannot tolerate downtime: CockroachDB survives the loss of a node without interruption. Horizontal write scaling, which standard Postgres cannot do. Compliance requirements that mandate storing data in specific regions.

Installation (single-node for development)

wget -qO - https://binaries.cockroachdb.com/cockroach-latest.linux-amd64.tgz | tar xz
mv cockroach-*/cockroach /usr/local/bin/

# Run without SSL for dev
cockroach start-single-node --insecure --background \
  --store=/var/lib/cockroachdb \
  --listen-addr=localhost:26257 \
  --http-addr=localhost:8080 \
  --log-dir=/var/log/cockroachdb

Creating a database:

cockroach sql --insecure
CREATE DATABASE myapp;
CREATE USER myapp WITH PASSWORD 'strong_password';
GRANT ALL ON DATABASE myapp TO myapp;

Production cluster (three nodes)

# Generate the CA once, then a node certificate for each node (listing that node's addresses)
cockroach cert create-ca --certs-dir=/etc/cockroachdb/certs --ca-key=/etc/cockroachdb/certs/ca.key
cockroach cert create-node 10.0.0.1 localhost $(hostname) --certs-dir=/etc/cockroachdb/certs --ca-key=/etc/cockroachdb/certs/ca.key
cockroach cert create-client root --certs-dir=/etc/cockroachdb/certs --ca-key=/etc/cockroachdb/certs/ca.key

# Start node 1
cockroach start \
  --certs-dir=/etc/cockroachdb/certs \
  --advertise-addr=10.0.0.1 \
  --join=10.0.0.1,10.0.0.2,10.0.0.3 \
  --store=/var/lib/cockroachdb \
  --background

# After all nodes start — initialize cluster
cockroach init --certs-dir=/etc/cockroachdb/certs --host=10.0.0.1

Schema and migrations

Syntax is close to PostgreSQL, with a few CockroachDB extensions: STRING is an alias for TEXT, and indexes can be declared inline in CREATE TABLE:

CREATE TABLE users (
    id          UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    email       STRING NOT NULL UNIQUE,
    name        STRING NOT NULL,
    created_at  TIMESTAMPTZ NOT NULL DEFAULT now()
);

CREATE TABLE orders (
    id          UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    user_id     UUID NOT NULL REFERENCES users(id),
    status      STRING NOT NULL DEFAULT 'pending',
    total       DECIMAL(10,2) NOT NULL,
    created_at  TIMESTAMPTZ NOT NULL DEFAULT now(),

    INDEX idx_orders_user (user_id, created_at DESC),
    INDEX idx_orders_status (status, created_at DESC)
);

CockroachDB's documentation recommends UUID primary keys: monotonically increasing integers send every insert to the same range, creating a hotspot on one node.
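As a small illustration, keys can also be generated in the application with Node's built-in crypto.randomUUID (a sketch; the column names match the users table above):

```typescript
import { randomUUID } from 'node:crypto'

// Generating the key in the application instead of relying on the
// gen_random_uuid() column default. Either way, random UUIDs spread
// inserts across ranges, unlike a monotonically increasing counter.
function newUserRow(email: string, name: string) {
  return { id: randomUUID(), email, name }
}

const a = newUserRow('alice@example.com', 'Alice')
const b = newUserRow('bob@example.com', 'Bob')
```

Sequential integer keys still work in CockroachDB, but every insert then lands on the range holding the current maximum, which is exactly the hotspot described above.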

Multi-region setup

-- Enable geo-partitioning (regions must match the --locality flags the nodes were started with)
ALTER DATABASE myapp SET PRIMARY REGION 'eu-central-1';
ALTER DATABASE myapp ADD REGION 'us-east-1';
ALTER DATABASE myapp ADD REGION 'ap-southeast-1';

-- User table with regional attachment
ALTER TABLE users SET LOCALITY REGIONAL BY ROW;

-- After this, each row is homed in the region stored in its hidden
-- crdb_region column, which defaults to the region of the gateway
-- node that inserted the row

Connecting from Node.js

CockroachDB works with any PostgreSQL driver:

import { Pool } from 'pg'

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  // DATABASE_URL = postgresql://myapp:strong_password@10.0.0.1:26257/myapp?sslmode=require
  max: 25,
  idleTimeoutMillis: 30000,
  connectionTimeoutMillis: 5000,
})

// CockroachDB recommends retry logic for serializable transactions
async function withRetry<T>(fn: () => Promise<T>, maxRetries = 3): Promise<T> {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await fn()
    } catch (err: any) {
      // 40001 — serialization failure (transaction needs retry)
      if (err.code === '40001' && attempt < maxRetries - 1) {
        const delay = Math.min(100 * Math.pow(2, attempt), 2000)
        await new Promise(r => setTimeout(r, delay + Math.random() * 100))
        continue
      }
      throw err
    }
  }
  throw new Error('max retries exceeded')
}

// Transaction with retries
async function transferFunds(fromId: string, toId: string, amount: number) {
  return withRetry(async () => {
    const client = await pool.connect()
    try {
      await client.query('BEGIN')
      const { rows: [from] } = await client.query(
        'SELECT balance FROM accounts WHERE id = $1 FOR UPDATE', [fromId]
      )
      if (!from) throw new Error('account not found')
      if (from.balance < amount) throw new Error('insufficient funds')

      await client.query(
        'UPDATE accounts SET balance = balance - $1 WHERE id = $2', [amount, fromId]
      )
      await client.query(
        'UPDATE accounts SET balance = balance + $1 WHERE id = $2', [amount, toId]
      )
      await client.query('COMMIT')
    } catch (e) {
      await client.query('ROLLBACK')
      throw e
    } finally {
      client.release()
    }
  })
}
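To see the retry behavior without a live cluster, here is a standalone usage sketch: flakyOp is a hypothetical operation that fails twice with the retryable code 40001 and then succeeds. The withRetry helper is repeated verbatim from above so the snippet runs on its own.

```typescript
// Same retry helper as above, repeated so this demo is self-contained.
async function withRetry<T>(fn: () => Promise<T>, maxRetries = 3): Promise<T> {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await fn()
    } catch (err: any) {
      // 40001 — serialization failure (transaction needs retry)
      if (err.code === '40001' && attempt < maxRetries - 1) {
        const delay = Math.min(100 * Math.pow(2, attempt), 2000)
        await new Promise(r => setTimeout(r, delay + Math.random() * 100))
        continue
      }
      throw err
    }
  }
  throw new Error('max retries exceeded')
}

// Hypothetical operation: throws a retryable serialization error
// (code 40001) on the first two attempts, then succeeds.
let attempts = 0
async function flakyOp(): Promise<string> {
  attempts++
  if (attempts < 3) {
    const err: any = new Error('restart transaction')
    err.code = '40001'
    throw err
  }
  return 'committed'
}

withRetry(flakyOp).then(result => {
  // Succeeds on the third attempt; a non-40001 error would be
  // rethrown immediately without retrying.
  console.log(result, attempts) // committed 3
})
```

Note that only code 40001 triggers a retry: application errors such as "insufficient funds" propagate to the caller on the first attempt.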

Monitoring via built-in UI

# DB Console available on port 8080
# Key sections:
# - Statements: top queries by execution time
# - Network Latency: latency between nodes
# - Replication: range replication status

# CLI monitoring
cockroach node status --certs-dir=/etc/cockroachdb/certs --host=10.0.0.1
cockroach debug zip debug.zip --certs-dir=/etc/cockroachdb/certs --host=10.0.0.1

Backups

-- Full backup to S3
BACKUP INTO 's3://my-bucket/cockroachdb-backups?AWS_ACCESS_KEY_ID=...&AWS_SECRET_ACCESS_KEY=...'
  AS OF SYSTEM TIME '-10s';

-- Incremental
BACKUP INTO LATEST IN 's3://my-bucket/cockroachdb-backups?...';

-- Schedule
CREATE SCHEDULE daily_backup
  FOR BACKUP INTO 's3://my-bucket/cockroachdb-backups?...'
  RECURRING '@daily'
  FULL BACKUP ALWAYS;

Timelines

Setting up a three-node cluster in one region, including monitoring: 2–3 days. Configuring multi-region with geo-partitioning: 3–5 days. Migration from PostgreSQL (schema, data, application, load testing): 1–2 weeks.