Setting up Paperclip with PostgreSQL for a production environment
Paperclip does not ship configured for PostgreSQL in production, so the setup must be done deliberately. Orchestrating AI agents generates a large volume of records: every agent action, every LLM call, every task result is written to the database. Proper configuration is critical for performance and reliability.
PostgreSQL configuration
Performance tuning:
shared_buffers = ~25% of RAM, effective_cache_size = ~75% of RAM; raise work_mem for complex sorts and joins, and wal_buffers for the write-heavy load. Keep max_connections modest and pool connections through PgBouncer.
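A postgresql.conf sketch of these settings, assuming a dedicated 64 GB host (the absolute values are illustrative; scale them to the actual machine):

```
# postgresql.conf -- illustrative values for a 64 GB dedicated host
shared_buffers = 16GB            # ~25% of RAM
effective_cache_size = 48GB      # ~75% of RAM (planner hint, not an allocation)
work_mem = 64MB                  # per sort/hash node; raise for complex queries
wal_buffers = 16MB               # helps the write-heavy agent-log workload
max_connections = 200            # keep modest; pool via PgBouncer instead
```

Note that work_mem applies per sort/hash operation, not per connection, so a single complex query can use it several times over.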
Indexes: analyze the Paperclip schema and add composite indexes matching the most frequent query patterns (agent_id + created_at, organization_id + status), plus partial indexes covering only active tasks.
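For example (table and column names are assumptions about the Paperclip schema, not its actual DDL):

```sql
-- Composite indexes for the frequent lookup patterns above.
-- CONCURRENTLY avoids locking writes on a live table.
CREATE INDEX CONCURRENTLY idx_agent_actions_agent_created
    ON agent_actions (agent_id, created_at);

CREATE INDEX CONCURRENTLY idx_tasks_org_status
    ON tasks (organization_id, status);

-- Partial index: covers only active tasks, keeping it small and hot in cache.
CREATE INDEX CONCURRENTLY idx_tasks_active
    ON tasks (agent_id)
    WHERE status = 'active';
```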
Partitioning: The agent log table is partitioned by date (RANGE partitioning). Retention policy: automatic archiving and purge of old partitions.
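A sketch of the RANGE partitioning scheme, assuming a hypothetical agent_logs table partitioned by month:

```sql
CREATE TABLE agent_logs (
    id          bigint GENERATED ALWAYS AS IDENTITY,
    agent_id    bigint NOT NULL,
    payload     jsonb,
    created_at  timestamptz NOT NULL
) PARTITION BY RANGE (created_at);

-- One partition per month; create the next one ahead of time (e.g. via cron).
CREATE TABLE agent_logs_2024_06 PARTITION OF agent_logs
    FOR VALUES FROM ('2024-06-01') TO ('2024-07-01');

-- Retention: detach the oldest partition, archive it, then drop it.
ALTER TABLE agent_logs DETACH PARTITION agent_logs_2024_06;
DROP TABLE agent_logs_2024_06;
```

Dropping a whole partition is a metadata operation, far cheaper than a bulk DELETE plus vacuum on a monolithic table.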
Replication: Streaming replication on replica for read-heavy query offloading and failover. pg_auto_failover or Patroni for automatic failover.
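The key streaming-replication settings, sketched for a primary/replica pair (hostnames and the replication role are placeholders):

```
# primary: postgresql.conf
wal_level = replica
max_wal_senders = 10

# replica: typically initialized with pg_basebackup -R, which writes:
primary_conninfo = 'host=primary-db port=5432 user=replicator'
hot_standby = on          # allow read-only queries on the replica
```

Read-heavy dashboards and analytics can then point at the replica, while pg_auto_failover or Patroni handles promotion on primary failure.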
Backups: pg_basebackup + WAL archiving to S3 (pgBackRest). Point-in-time recovery to any second. RTO: <1 hour, RPO: <1 minute.
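A minimal pgBackRest configuration for S3-backed backups with WAL archiving (bucket, region, stanza name, and data path are assumptions for illustration):

```ini
; /etc/pgbackrest/pgbackrest.conf
[global]
repo1-type=s3
repo1-s3-bucket=paperclip-backups
repo1-s3-region=us-east-1
repo1-retention-full=2

[paperclip]
pg1-path=/var/lib/postgresql/16/main
```

WAL archiving is wired up in postgresql.conf with `archive_mode = on` and `archive_command = 'pgbackrest --stanza=paperclip archive-push %p'`; continuous WAL shipping is what makes the <1 minute RPO and point-in-time recovery possible.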
Monitoring
pg_stat_statements for slow query analysis. Prometheus postgres_exporter + Grafana dashboard. Alerts: bloat, long-running queries, replication lag, connection pool exhaustion.
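A typical slow-query check against pg_stat_statements (the extension must be listed in shared_preload_libraries; column names are those of PostgreSQL 13+):

```sql
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Top 10 queries by cumulative execution time.
SELECT query,
       calls,
       round(total_exec_time::numeric, 1) AS total_ms,
       round(mean_exec_time::numeric, 1)  AS mean_ms
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```

The same statistics feed postgres_exporter, so the Grafana dashboard and the alerts on long-running queries draw from one source.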