On April 12, 2025, at 16:22 UTC, our primary PostgreSQL database connection pool was exhausted, causing severe API degradation for 45 minutes. A combination of factors led to the incident: (1) a slow analytical query that held connections for 8+ minutes, (2) connection leaks in a recently deployed background job (order-sync-worker v1.2.0), and (3) a traffic spike from a marketing campaign that increased connection demand by 40%. With all 200 connections in use, new requests queued and eventually timed out. P99 API latency reached 30 seconds. Resolution required killing the long-running query, rolling back the leaky worker, and temporarily increasing the pool size.
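The exhaustion mechanics above can be illustrated with a toy bounded pool. This is a minimal sketch, not our production stack: the `ConnectionPool`, `checkout`, and `checkin` names are hypothetical, and strings stand in for real PostgreSQL connections. It shows both the leak pattern (checking out without returning, as the worker did) and the fix (a context manager that always returns the connection):

```python
import queue
from contextlib import contextmanager

class PoolExhaustedError(Exception):
    pass

class ConnectionPool:
    """Toy bounded pool; strings stand in for real DB connections."""

    def __init__(self, max_size, checkout_timeout=1.0):
        self._free = queue.Queue(maxsize=max_size)
        self._timeout = checkout_timeout
        for i in range(max_size):
            self._free.put(f"conn-{i}")

    def checkout(self):
        # Raw checkout: the caller must remember to return the
        # connection. Forgetting to do so is the leak pattern.
        try:
            return self._free.get(timeout=self._timeout)
        except queue.Empty:
            raise PoolExhaustedError("no free connections") from None

    def checkin(self, conn):
        self._free.put(conn)

    @contextmanager
    def connection(self):
        # Context manager returns the connection even if the caller
        # raises -- the safe pattern the leaky worker lacked.
        conn = self.checkout()
        try:
            yield conn
        finally:
            self.checkin(conn)

if __name__ == "__main__":
    pool = ConnectionPool(max_size=2, checkout_timeout=0.1)
    # Two checkouts with no checkin exhaust the pool.
    pool.checkout()
    pool.checkout()
    try:
        with pool.connection():
            pass
    except PoolExhaustedError:
        # This is what queued API requests hit once all
        # connections were held by the slow query and the leak.
        print("pool exhausted")
```

In the real system, the same dynamic played out at a pool size of 200: the analytical query and the leaked worker connections reduced the free set while the campaign traffic raised demand, so checkouts queued until their timeouts expired.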