Notes from FOSS Asia 2026 (Mar 8-10, Bangkok), part 1:
It’s been said many times already — at small to medium scale, PostgreSQL can cover a surprising number of use cases: relational database, document store (JSONB), cache, full-text search, vector search, and yes — a message queue.
There was a great talk about using PostgreSQL as a queue, and the approach is simpler than you’d think.
The idea#
Instead of adding RabbitMQ, Redis, or SQS to your stack — just use a table:

The key columns: name, key, status, payload (JSONB), retry_count, and timestamps. The table is partitioned by enqueue_dt for performance.
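A minimal sketch of what such a table might look like. The column list comes from the description above; the exact types, defaults, and status values are assumptions:

```sql
CREATE TABLE message_queue (
    id          bigint      GENERATED ALWAYS AS IDENTITY,
    name        text        NOT NULL,                     -- logical queue name
    key         text        NOT NULL,                     -- routing / dedup key
    status      text        NOT NULL DEFAULT 'pending',   -- pending | success | failure
    payload     jsonb       NOT NULL,
    retry_count int         NOT NULL DEFAULT 0,
    enqueue_dt  timestamptz NOT NULL DEFAULT now(),
    updated_dt  timestamptz NOT NULL DEFAULT now()
) PARTITION BY RANGE (enqueue_dt);
```

A partial index on `status = 'pending'` keeps the hot path fast, since consumers only ever scan unprocessed rows.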
The secret sauce: SELECT ... FOR UPDATE SKIP LOCKED#
This is what makes PostgreSQL work as a queue. A consumer grabs pending messages and locks them in a single transaction — other consumers skip already-locked rows instead of waiting. No double processing, no external locking mechanism.
The consumer workflow:
- `SELECT ... FOR UPDATE SKIP LOCKED` — grab and lock pending messages
- Process each message — call APIs, run business logic
- `UPDATE` status (success/failure)
- `COMMIT`
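The whole workflow can be sketched as one transaction. Table and column names follow the assumed schema above; the batch size and ids are illustrative:

```sql
BEGIN;

-- Grab and lock up to 10 pending messages. Concurrent consumers
-- skip rows that are already locked instead of blocking on them.
SELECT id, payload
FROM message_queue
WHERE status = 'pending'
ORDER BY enqueue_dt
LIMIT 10
FOR UPDATE SKIP LOCKED;

-- ...process each message in application code...

UPDATE message_queue
SET status = 'success', updated_dt = now()
WHERE id = ANY ('{1,2,3}');  -- ids returned by the SELECT above

COMMIT;
```

The row locks are released at `COMMIT`; if the consumer crashes mid-transaction, the locks vanish with the connection and the messages become visible to other consumers again.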
Partitioning for scale#
Partitioning by date (PARTITION BY RANGE (enqueue_dt)) gives you:
- Smaller indexes = faster queries
- Drop old partitions instantly (vs. a slow `DELETE`)
- `VACUUM` becomes trivial because it only touches a small slice of data
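With declarative partitioning, retiring old messages is a metadata operation. A sketch, using the assumed table name from above with illustrative monthly ranges:

```sql
-- One partition per month (names and date ranges are illustrative)
CREATE TABLE message_queue_2026_03 PARTITION OF message_queue
    FOR VALUES FROM ('2026-03-01') TO ('2026-04-01');

-- Retiring a month of processed messages is instant catalog work,
-- not a row-by-row DELETE:
DROP TABLE message_queue_2026_02;
```

New partitions have to exist before rows land in their range, so partition creation is typically automated with a scheduled job.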
When to reach for something bigger#
If you don’t want a DIY solution and need visibility timeouts, dead letter queues, message archiving, and a proper API around it — check out PGMQ. It’s a PostgreSQL extension that wraps all of this into a clean interface, similar to AWS SQS:
SELECT pgmq.create('my_queue');
SELECT pgmq.send('my_queue', '{"event": "order_created"}'::jsonb);
SELECT * FROM pgmq.read('my_queue', 30, 1); -- 30s visibility timeout

On top of the queue semantics, PGMQ also ships client libraries for most languages.
