Setting Up CockroachDB for a Web Application
CockroachDB is a distributed SQL database that speaks the PostgreSQL wire protocol. It provides horizontal scaling without manual sharding, automatic data replication, and multi-region deployments: capabilities that standard PostgreSQL lacks without significant add-ons. If your application already uses a Postgres driver, switching to CockroachDB requires minimal code changes.
When CockroachDB makes sense
Global applications with users in multiple regions that need low latency everywhere. Systems that cannot tolerate downtime: CockroachDB survives the loss of a node without interruption. Workloads that need horizontal write scaling, which standard Postgres cannot provide. Compliance requirements that data be stored in specific regions.
Installation (single-node for development)
wget -qO - https://binaries.cockroachdb.com/cockroach-latest.linux-amd64.tgz | tar xz
mv cockroach-*/cockroach /usr/local/bin/
# Run without SSL for dev
cockroach start-single-node --insecure --background \
  --store=/var/lib/cockroachdb \
  --listen-addr=localhost:26257 \
  --http-addr=localhost:8080 \
  --log-dir=/var/log/cockroachdb
Creating a database:
cockroach sql --insecure
CREATE DATABASE myapp;
CREATE USER myapp WITH PASSWORD 'strong_password';
GRANT ALL ON DATABASE myapp TO myapp;
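The application connects to the new database over the standard Postgres wire protocol, just on port 26257 instead of 5432. A small helper like the following (a hypothetical convenience, not part of CockroachDB or pg) can assemble the connection string from parts; the user and database names match the statements above:

```typescript
// Hypothetical helper: build a CockroachDB connection URL from parts.
// CockroachDB uses the same postgresql:// URL shape as regular Postgres;
// only the default port (26257) differs.
function buildDatabaseUrl(opts: {
  user: string
  password: string
  host: string
  database: string
  port?: number
  sslmode?: 'disable' | 'require' | 'verify-full'
}): string {
  const port = opts.port ?? 26257
  const sslmode = opts.sslmode ?? 'require'
  const auth = `${encodeURIComponent(opts.user)}:${encodeURIComponent(opts.password)}`
  return `postgresql://${auth}@${opts.host}:${port}/${opts.database}?sslmode=${sslmode}`
}

const url = buildDatabaseUrl({
  user: 'myapp',
  password: 'strong_password',
  host: 'localhost',
  database: 'myapp',
  sslmode: 'disable', // matches the --insecure dev setup above
})
console.log(url)
// postgresql://myapp:strong_password@localhost:26257/myapp?sslmode=disable
```

Note that in --insecure mode the password is not actually checked; it only matters once the cluster runs with certificates.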
Production cluster (three nodes)
# Generate the CA once; then create a node certificate for each node,
# listing every address that node is reachable at
cockroach cert create-ca --certs-dir=/etc/cockroachdb/certs --ca-key=/etc/cockroachdb/certs/ca.key
cockroach cert create-node 10.0.0.1 localhost $(hostname) --certs-dir=/etc/cockroachdb/certs --ca-key=/etc/cockroachdb/certs/ca.key
cockroach cert create-client root --certs-dir=/etc/cockroachdb/certs --ca-key=/etc/cockroachdb/certs/ca.key
# Start node 1
cockroach start \
  --certs-dir=/etc/cockroachdb/certs \
  --advertise-addr=10.0.0.1 \
  --join=10.0.0.1,10.0.0.2,10.0.0.3 \
  --store=/var/lib/cockroachdb \
  --background
# After all nodes start — initialize cluster
cockroach init --certs-dir=/etc/cockroachdb/certs --host=10.0.0.1
Schema and migrations
Syntax is practically identical to PostgreSQL:
CREATE TABLE users (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  email STRING NOT NULL UNIQUE,
  name STRING NOT NULL,
  created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE TABLE orders (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  user_id UUID NOT NULL REFERENCES users(id),
  status STRING NOT NULL DEFAULT 'pending',
  total DECIMAL(10,2) NOT NULL,
  created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
  INDEX idx_orders_user (user_id, created_at DESC),
  INDEX idx_orders_status (status, created_at DESC)
);
Use UUIDs as primary keys, as above: sequential integer IDs send every insert to the tail of the same index range, creating a hotspot on a single node, while random UUIDs spread writes across the cluster.
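The hotspot effect can be illustrated with a toy model. CockroachDB splits the key space into ranges; the sketch below is an illustration of the principle only, not of CockroachDB's actual range logic:

```typescript
// Toy model of write distribution across ranges (illustration only).
// Keys are bucketed by their position within a fixed key space.
function bucketCounts(keys: number[], ranges: number, keySpace: number): number[] {
  const counts = new Array(ranges).fill(0)
  for (const k of keys) {
    const idx = Math.min(ranges - 1, Math.floor((k / keySpace) * ranges))
    counts[idx]++
  }
  return counts
}

const keySpace = 1_000_000
// Sequential inserts: ids 900_000..900_999, all landing in one hot range.
const sequential = Array.from({ length: 1000 }, (_, i) => 900_000 + i)
// UUID-like inserts: uniformly random keys spread over all ranges.
const random = Array.from({ length: 1000 }, () => Math.floor(Math.random() * keySpace))

console.log(bucketCounts(sequential, 10, keySpace))
// [0, 0, 0, 0, 0, 0, 0, 0, 0, 1000]  (one node takes every write)
console.log(bucketCounts(random, 10, keySpace)) // roughly even across buckets
```

With sequential keys, a single range (and the node holding it) absorbs the whole write load; random keys balance it.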
Multi-region setup
-- Configure database regions (each must match a --locality set at node startup)
ALTER DATABASE myapp SET PRIMARY REGION 'eu-central-1';
ALTER DATABASE myapp ADD REGION 'us-east-1';
ALTER DATABASE myapp ADD REGION 'ap-southeast-1';
-- User table with regional attachment
ALTER TABLE users SET LOCALITY REGIONAL BY ROW;
-- After this, each row is homed in a region via the hidden crdb_region column,
-- set automatically from the gateway node's region or explicitly on insert
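An application can also set crdb_region explicitly when it knows where a user lives. A hypothetical mapping from country code to the nearest of the three regions above might look like this (the country assignments are an illustrative assumption):

```typescript
// Hypothetical helper: pick the nearest crdb_region for a user.
// Region names match the ALTER DATABASE statements above.
type Region = 'eu-central-1' | 'us-east-1' | 'ap-southeast-1'

const REGION_BY_COUNTRY: Record<string, Region> = {
  DE: 'eu-central-1', FR: 'eu-central-1', PL: 'eu-central-1',
  US: 'us-east-1', CA: 'us-east-1', BR: 'us-east-1',
  SG: 'ap-southeast-1', JP: 'ap-southeast-1', AU: 'ap-southeast-1',
}

function regionFor(countryCode: string): Region {
  // Fall back to the database's primary region for unmapped countries.
  return REGION_BY_COUNTRY[countryCode.toUpperCase()] ?? 'eu-central-1'
}

// Usage sketch with the pg driver: set crdb_region explicitly on insert.
// await pool.query(
//   'INSERT INTO users (crdb_region, email, name) VALUES ($1, $2, $3)',
//   [regionFor(user.country), user.email, user.name]
// )
```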
Connecting from Node.js
CockroachDB works with any PostgreSQL driver:
import { Pool } from 'pg'

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  // DATABASE_URL = postgresql://myapp:PASSWORD@HOST:26257/myapp?sslmode=require
  max: 25,
  idleTimeoutMillis: 30000,
  connectionTimeoutMillis: 5000,
})
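Since the dev setup above runs with --insecure, it is easy to ship a non-TLS connection string by accident. A small startup guard (a hypothetical addition, not part of pg) can refuse to run in production without sslmode:

```typescript
// Hypothetical startup guard: reject non-TLS connection strings outside dev.
// The WHATWG URL parser handles postgresql:// URLs, including query params.
function assertSecureUrl(databaseUrl: string, env: string): void {
  const sslmode = new URL(databaseUrl).searchParams.get('sslmode') ?? 'disable'
  if (env === 'production' && (sslmode === 'disable' || sslmode === 'allow')) {
    throw new Error(`refusing to start: sslmode=${sslmode} in production`)
  }
}

// Passes: insecure is fine in development.
assertSecureUrl('postgresql://myapp:pw@localhost:26257/myapp?sslmode=disable', 'development')
// Throws: sslmode=disable is rejected in production.
// assertSecureUrl('postgresql://myapp:pw@db:26257/myapp?sslmode=disable', 'production')
```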
// CockroachDB recommends retry logic for serializable transactions
async function withRetry<T>(fn: () => Promise<T>, maxRetries = 3): Promise<T> {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await fn()
    } catch (err: any) {
      // 40001 — serialization failure (transaction needs retry)
      if (err.code === '40001' && attempt < maxRetries - 1) {
        const delay = Math.min(100 * Math.pow(2, attempt), 2000)
        await new Promise(r => setTimeout(r, delay + Math.random() * 100))
        continue
      }
      throw err
    }
  }
  throw new Error('max retries exceeded')
}
// Transaction with retries
async function transferFunds(fromId: string, toId: string, amount: number) {
  return withRetry(async () => {
    const client = await pool.connect()
    try {
      await client.query('BEGIN')
      const { rows: [from] } = await client.query(
        'SELECT balance FROM accounts WHERE id = $1 FOR UPDATE', [fromId]
      )
      if (!from) throw new Error('account not found')
      if (from.balance < amount) throw new Error('insufficient funds')
      await client.query(
        'UPDATE accounts SET balance = balance - $1 WHERE id = $2', [amount, fromId]
      )
      await client.query(
        'UPDATE accounts SET balance = balance + $1 WHERE id = $2', [amount, toId]
      )
      await client.query('COMMIT')
    } catch (e) {
      // ROLLBACK can itself fail if the connection died; ignore that error
      await client.query('ROLLBACK').catch(() => {})
      throw e
    } finally {
      client.release()
    }
  })
}
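The retry path can be exercised without a database by throwing a synthetic error carrying SQLSTATE 40001. The sketch below re-implements the same withRetry shape so it runs standalone:

```typescript
// Standalone re-sketch of the withRetry helper above, exercised with a
// simulated serialization failure instead of a real database.
async function withRetry<T>(fn: () => Promise<T>, maxRetries = 3): Promise<T> {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await fn()
    } catch (err: any) {
      if (err.code === '40001' && attempt < maxRetries - 1) {
        // Exponential backoff with jitter, capped at 2s.
        const delay = Math.min(100 * 2 ** attempt, 2000)
        await new Promise(r => setTimeout(r, delay + Math.random() * 100))
        continue
      }
      throw err
    }
  }
  throw new Error('max retries exceeded')
}

// Simulated flaky transaction: fails twice with 40001, then succeeds.
let attempts = 0
const flaky = async () => {
  attempts++
  if (attempts < 3) throw Object.assign(new Error('restart transaction'), { code: '40001' })
  return 'committed'
}

withRetry(flaky).then(result => console.log(result, 'after', attempts, 'attempts'))
// → committed after 3 attempts
```

Any non-40001 error (like the insufficient-funds check above) is rethrown immediately rather than retried, which is the behavior you want for genuine application errors.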
Monitoring via built-in UI
# DB Console available on port 8080
# Key sections:
# - Statements: top queries by execution time
# - Network Latency: latency between nodes
# - Replication: range replication status
# CLI monitoring
cockroach node status --certs-dir=/etc/cockroachdb/certs --host=10.0.0.1
cockroach debug zip debug.zip --certs-dir=/etc/cockroachdb/certs --host=10.0.0.1
Backups
-- Full backup to S3
BACKUP INTO 's3://my-bucket/cockroachdb-backups?AWS_ACCESS_KEY_ID=...&AWS_SECRET_ACCESS_KEY=...'
AS OF SYSTEM TIME '-10s';
-- Incremental
BACKUP INTO LATEST IN 's3://my-bucket/cockroachdb-backups?...';
-- Schedule
CREATE SCHEDULE daily_backup
  FOR BACKUP INTO 's3://my-bucket/cockroachdb-backups?...'
  RECURRING '@daily'
  FULL BACKUP ALWAYS;
Timelines
Setting up a three-node cluster in one region, including monitoring: 2–3 days. Configuring multi-region with geo-partitioning: 3–5 days. Migrating from PostgreSQL (schema, data, application changes, load testing): 1–2 weeks.