Setting up Serverless Database (DynamoDB / PlanetScale / Neon)
A serverless database is a managed database that scales automatically (including down to zero) and is billed by consumption rather than uptime. The right choice depends on your data model and access patterns.
Comparison of Options
| DB | Type | Scale | Best Scenario |
|---|---|---|---|
| DynamoDB | Key-value / Document | Unlimited | High throughput, predictable queries |
| PlanetScale | MySQL-compatible | Up to terabyte | Relational data, GitHub-style branching |
| Neon | PostgreSQL | Small-medium | Full SQL, dev environments |
| FaunaDB | Document / Relational | Medium | Multi-region consistency |
| Upstash | Redis | Small-medium | Cache, queues, rate limiting |
DynamoDB: Designing for Serverless
DynamoDB requires thinking about data access before schema design. Poorly designed DynamoDB tables are expensive and slow.
Single-Table Design — entire domain in one table, PK/SK encode entity type:
```python
# E-commerce schema
# PK           | SK          | Data
# USER#u123    | PROFILE     | {name, email}
# USER#u123    | ORDER#o456  | {status, total, items}
# USER#u123    | ORDER#o789  | {status, total, items}
# ORDER#o456   | ITEM#i001   | {product_id, qty, price}
# PRODUCT#p001 | METADATA    | {title, description}
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('ecommerce')

def get_user_with_orders(user_id: str) -> list:
    """Fetch all of a user's orders with a single query."""
    response = table.query(
        KeyConditionExpression=Key('PK').eq(f'USER#{user_id}')
                               & Key('SK').begins_with('ORDER#')
    )
    return response['Items']

def put_order(user_id: str, order: dict):
    with table.batch_writer() as batch:
        # User → order link
        batch.put_item(Item={
            'PK': f'USER#{user_id}',
            'SK': f'ORDER#{order["id"]}',
            **order
        })
        # Direct access by order_id
        batch.put_item(Item={
            'PK': f'ORDER#{order["id"]}',
            'SK': 'METADATA',
            'GSI1PK': f'STATUS#{order["status"]}',
            'GSI1SK': order['created_at'],
            **order
        })
```
GSI (Global Secondary Index) — covers alternative access patterns (e.g. all orders with a given status, regardless of user).
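For example, fetching all orders in a given status via GSI1 only requires the index name and a key condition. A minimal sketch that builds the kwargs for boto3's `Table.query()` as plain expression strings (so the helper itself has no SDK dependency); the `STATUS#`/date key shapes follow the schema above, and the helper name is hypothetical:

```python
def orders_by_status_query(status: str, since_iso: str) -> dict:
    """Build query() kwargs for the GSI1 index: all orders in a status
    created on or after the given ISO date."""
    return {
        'IndexName': 'GSI1',
        'KeyConditionExpression': 'GSI1PK = :pk AND GSI1SK >= :since',
        'ExpressionAttributeValues': {
            ':pk': f'STATUS#{status}',
            ':since': since_iso,
        },
    }

# usage: table.query(**orders_by_status_query('SHIPPED', '2024-01-01'))
```

Note that the query never scans the base table: the index's partition key (`GSI1PK`) carries the status, and its sort key (`GSI1SK`) carries the timestamp, so the range condition is served by the index alone.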
DynamoDB On-Demand vs Provisioned
On-Demand: pay per read/write. No capacity planning. Ideal for unpredictable traffic or new apps.
Provisioned + Auto Scaling: set a baseline of RCU/WCU; auto scaling absorbs spikes. Cheaper for predictable load.
```hcl
resource "aws_dynamodb_table" "ecommerce" {
  name         = "ecommerce"
  billing_mode = "PAY_PER_REQUEST" # On-demand
  hash_key     = "PK"
  range_key    = "SK"

  attribute {
    name = "PK"
    type = "S"
  }
  attribute {
    name = "SK"
    type = "S"
  }
  attribute {
    name = "GSI1PK"
    type = "S"
  }
  attribute {
    name = "GSI1SK"
    type = "S"
  }

  global_secondary_index {
    name            = "GSI1"
    hash_key        = "GSI1PK"
    range_key       = "GSI1SK"
    projection_type = "ALL"
  }

  ttl {
    attribute_name = "expires_at"
    enabled        = true
  }
}
```
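To choose between the two billing modes, a back-of-the-envelope estimate helps. The sketch below assumes a perfectly flat write load; the per-unit prices are illustrative placeholders, not current AWS rates — check the DynamoDB pricing page before deciding:

```python
# Break-even sketch for write costs. Prices are ASSUMED placeholders,
# not current AWS rates.
ON_DEMAND_WRITE_PER_M = 1.25    # $ per million write request units (assumed)
PROVISIONED_WCU_HOUR = 0.00065  # $ per WCU-hour (assumed)
HOURS_PER_MONTH = 730

def monthly_write_cost(writes_per_second: float) -> dict:
    """Compare monthly write cost under both billing modes for a flat load."""
    writes_per_month = writes_per_second * 3600 * HOURS_PER_MONTH
    on_demand = writes_per_month / 1_000_000 * ON_DEMAND_WRITE_PER_M
    # Provisioned capacity must cover the peak; with flat traffic the
    # baseline equals the average, which is the best case for provisioned.
    provisioned = writes_per_second * PROVISIONED_WCU_HOUR * HOURS_PER_MONTH
    return {'on_demand': round(on_demand, 2),
            'provisioned': round(provisioned, 2)}
```

With flat traffic, provisioned wins by a wide margin; the spikier the load (the higher the peak-to-average ratio, which forces the provisioned baseline up), the more the balance tips toward on-demand.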
Neon: Serverless PostgreSQL
Neon separates compute and storage. Compute scales to zero when idle, storage billed by volume.
```python
import os

import psycopg2

# Connection string from the Neon dashboard.
# Neon recommends connection pooling via PgBouncer; DATABASE_URL should
# already point at the pooler endpoint.
conn = psycopg2.connect(os.environ['DATABASE_URL'])

# Standard PostgreSQL — no SDK specifics
user_id = 'u123'  # example user id
with conn.cursor() as cur:
    cur.execute("SELECT * FROM orders WHERE user_id = %s", (user_id,))
    orders = cur.fetchall()
```
Branching in Neon — create DB branch like git branch in seconds:
```bash
neon branches create --name feature/new-schema --parent main

# Test the migration on the branch
neon connection-string feature/new-schema
# → postgresql://user:password@<branch-endpoint>/neondb

# After a successful test — apply to main via standard migrations
```
PlanetScale
MySQL-compatible, branching database workflow, no foreign key constraints (Vitess-based):
```typescript
import { connect } from '@planetscale/database'

const conn = connect({
  host: process.env.DATABASE_HOST,
  username: process.env.DATABASE_USERNAME,
  password: process.env.DATABASE_PASSWORD
})

// HTTP-based protocol — works from Lambda without connection overhead
const results = await conn.execute(
  'SELECT * FROM orders WHERE user_id = ?',
  [userId]
)
```
PlanetScale uses an HTTP-based protocol instead of TCP — a particularly good fit for Lambda, where persistent connections are impractical.
Connection Pooling for Lambda
Lambda opens a new DB connection on each cold start. With 100 concurrent Lambdas that means 100 connections — enough to exhaust a typical PostgreSQL/MySQL connection limit.
Solutions:
- RDS Proxy (AWS) — a connection pooler in front of RDS, transparent to the app
- PgBouncer — self-hosted pooler in front of PostgreSQL
- Neon/PlanetScale — built-in pooling in managed service
- Prisma Accelerate — connection pooler + query cache for Prisma ORM
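Even with a pooler, it helps to open the connection once per execution environment rather than once per invocation: anything created at module scope in a Lambda survives across warm invocations. A minimal sketch of the pattern, with `connect_fn` injected so it carries no driver dependency (in a real handler you would pass `psycopg2.connect` or similar, pointed at the pooler endpoint):

```python
# Cache the connection at module scope so warm Lambda invocations reuse it.
_conn = None

def get_connection(connect_fn, dsn: str):
    """Open the connection once per execution environment (i.e. per cold start)."""
    global _conn
    if _conn is None:  # only true on a cold start
        _conn = connect_fn(dsn)
    return _conn
```

On a cold start `_conn` is `None` and `connect_fn` runs once; every warm invocation reuses the same object. Combined with a pooler, even the cold-start connections land on PgBouncer rather than directly on the database.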
Setup Timeline
- DynamoDB single-table design + basic operations — 3-5 days
- Neon / PlanetScale connection + pooling — 1-2 days
- Migrating existing DB to serverless — 5-14 days