RabbitMQ Complete Guide: Message Queues, Exchanges & Patterns
RabbitMQ is the most widely deployed open-source message broker. It accepts messages from producers, routes them through exchanges to queues using flexible binding rules, and delivers them to consumers. If your application needs to decouple services, distribute work across workers, buffer traffic spikes, or implement event-driven architectures, RabbitMQ is a proven choice used by millions of production systems worldwide.
This guide covers everything from installation to production clustering: the AMQP protocol, exchange types, queue configuration, message durability, dead letter exchanges, retry patterns, Python and Node.js integration with full code examples, monitoring, performance tuning, and how RabbitMQ compares to Kafka.
Table of Contents
- What Is RabbitMQ and AMQP
- Installation
- Core Concepts
- Exchange Types
- Publishing and Consuming Messages
- Python Integration with Pika
- Node.js Integration with amqplib
- Message Acknowledgment and Durability
- Dead Letter Exchanges and Retry Patterns
- Clustering and High Availability
- Monitoring and Management UI
- RabbitMQ vs Kafka
- Best Practices and Common Patterns
- Performance Tuning
- Frequently Asked Questions
What Is RabbitMQ and AMQP
RabbitMQ is a message broker that implements the Advanced Message Queuing Protocol (AMQP 0-9-1). A message broker sits between producers (applications that send messages) and consumers (applications that receive and process messages). Instead of services talking directly to each other, they communicate through the broker, which decouples them in time, space, and implementation.
AMQP defines the wire protocol that clients and brokers use to communicate. The key entities in AMQP are:
- Connection — a TCP connection between your application and the broker
- Channel — a lightweight virtual connection multiplexed over a single TCP connection
- Exchange — receives messages from producers and routes them to queues based on rules
- Queue — a buffer that stores messages until consumers retrieve them
- Binding — a rule that links an exchange to a queue with a routing key or pattern
The message flow is: Producer sends a message to an exchange with a routing key. The exchange evaluates its bindings and routes the message to one or more queues. Consumers subscribed to those queues receive the messages. RabbitMQ also supports MQTT, STOMP, and HTTP via plugins, but AMQP is the native protocol.
Installation
Docker (recommended for development)
The fastest way to get RabbitMQ running with the management UI enabled:
# Run RabbitMQ with management plugin on ports 5672 (AMQP) and 15672 (UI)
docker run -d --name rabbitmq \
  -p 5672:5672 \
  -p 15672:15672 \
  -e RABBITMQ_DEFAULT_USER=admin \
  -e RABBITMQ_DEFAULT_PASS=secretpassword \
  rabbitmq:3-management
# Verify it's running
docker logs rabbitmq
For a full Docker Compose setup with persistent data:
# docker-compose.yml
services:
  rabbitmq:
    image: rabbitmq:3-management
    container_name: rabbitmq
    ports:
      - "5672:5672"
      - "15672:15672"
    environment:
      RABBITMQ_DEFAULT_USER: admin
      RABBITMQ_DEFAULT_PASS: secretpassword
      RABBITMQ_DEFAULT_VHOST: /
    volumes:
      - rabbitmq_data:/var/lib/rabbitmq
    healthcheck:
      test: ["CMD", "rabbitmq-diagnostics", "check_port_connectivity"]
      interval: 30s
      timeout: 10s
      retries: 3

volumes:
  rabbitmq_data:
Ubuntu / Debian
# Install Erlang and RabbitMQ from official repos
sudo apt-get update && sudo apt-get install -y erlang-base rabbitmq-server
sudo rabbitmq-plugins enable rabbitmq_management
sudo systemctl enable --now rabbitmq-server
macOS
brew install rabbitmq
# Start as a service
brew services start rabbitmq
# Management UI at http://localhost:15672 (guest/guest)
Core Concepts
Understanding RabbitMQ requires grasping how messages flow from producers through exchanges to queues and finally to consumers.
Virtual Hosts
Virtual hosts (vhosts) provide logical separation within a single RabbitMQ instance. Each vhost has its own set of exchanges, queues, bindings, and permissions. Think of them like namespaces or databases in a database server. The default vhost is /.
# Create a vhost
rabbitmqctl add_vhost my_app
# Set permissions for a user on the vhost
rabbitmqctl set_permissions -p my_app admin ".*" ".*" ".*"
Queues
Queues store messages until consumers process them. Key queue properties:
- Durable — survives broker restarts (the queue definition, not necessarily the messages)
- Exclusive — used by only one connection and deleted when that connection closes
- Auto-delete — deleted when the last consumer unsubscribes
- Arguments — optional settings like TTL, max length, dead letter exchange
Bindings and Routing Keys
A binding connects an exchange to a queue. When you publish a message, you specify a routing key. The exchange uses the routing key and its type to decide which queues receive the message. The exact matching logic depends on the exchange type.
Exchange Types
Direct Exchange
Routes messages to queues where the binding key exactly matches the routing key. This is the simplest and most common pattern for point-to-point messaging.
Producer --[routing_key="order.created"]--> Direct Exchange
    |
    +--> Queue "orders" (binding_key="order.created") --> Consumer
    +--> Queue "audit" (binding_key="order.created") --> Audit Consumer
The default exchange (empty string name) is a direct exchange pre-declared by RabbitMQ. Every queue is automatically bound to it with its queue name as the binding key, so you can publish directly to any queue by name.
Topic Exchange
Routes messages based on wildcard pattern matching on the routing key. Routing keys are dot-separated words. Bindings can use * (matches exactly one word) and # (matches zero or more words).
Producer --[routing_key="log.error.payment"]--> Topic Exchange
    |
    +--> Queue "all-logs" (binding_key="log.#") --> matches
    +--> Queue "error-logs" (binding_key="log.error.*") --> matches
    +--> Queue "payment-logs" (binding_key="log.*.payment") --> matches
    +--> Queue "info-logs" (binding_key="log.info.*") --> no match
Fanout Exchange
Broadcasts every message to all bound queues, ignoring routing keys entirely. Use this for notification systems, event broadcasting, or when every consumer needs every message.
Producer --[any routing_key]--> Fanout Exchange
    |
    +--> Queue "email-service" --> Email Consumer
    +--> Queue "sms-service" --> SMS Consumer
    +--> Queue "push-service" --> Push Consumer
Headers Exchange
Routes based on message header attributes instead of routing keys. You specify headers in the binding with an x-match argument: all means every header must match, any means at least one must match. Useful for complex routing that does not fit into key-based patterns.
Publishing and Consuming Messages
The basic workflow is: open a connection, create a channel, declare exchanges and queues, publish or consume. Here is the conceptual flow using the RabbitMQ CLI for testing:
# Publish a test message using rabbitmqadmin
rabbitmqadmin publish exchange=amq.default routing_key=test_queue \
payload="Hello RabbitMQ" properties='{"delivery_mode": 2}'
# Consume (get) a message
rabbitmqadmin get queue=test_queue ackmode=ack_requeue_false
# List queues and their message counts
rabbitmqctl list_queues name messages consumers
Python Integration with Pika
Pika is the most popular Python client for RabbitMQ. Install it with pip install pika.
Publisher
import pika
import json
# Establish connection
connection = pika.BlockingConnection(
    pika.ConnectionParameters(
        host='localhost',
        port=5672,
        credentials=pika.PlainCredentials('admin', 'secretpassword'),
        virtual_host='/'
    )
)
channel = connection.channel()
# Declare a durable exchange and queue
channel.exchange_declare(exchange='orders', exchange_type='direct', durable=True)
channel.queue_declare(queue='order_processing', durable=True)
channel.queue_bind(queue='order_processing', exchange='orders', routing_key='order.new')
# Publish a persistent message
message = json.dumps({'order_id': 12345, 'amount': 99.99, 'customer': 'alice'})
channel.basic_publish(
    exchange='orders',
    routing_key='order.new',
    body=message,
    properties=pika.BasicProperties(
        delivery_mode=2,  # persistent message
        content_type='application/json',
        headers={'priority': 'high'}
    )
)
print(f"Published: {message}")
connection.close()
Consumer
import pika
import json
import time
def process_order(ch, method, properties, body):
    """Callback for each message received."""
    order = json.loads(body)
    print(f"Processing order {order['order_id']} for ${order['amount']}")
    try:
        # Simulate processing work
        time.sleep(1)
        print(f"Order {order['order_id']} completed")
        ch.basic_ack(delivery_tag=method.delivery_tag)
    except Exception as e:
        print(f"Error processing order: {e}")
        # Reject and requeue on failure
        ch.basic_nack(delivery_tag=method.delivery_tag, requeue=True)

connection = pika.BlockingConnection(
    pika.ConnectionParameters(
        host='localhost',
        credentials=pika.PlainCredentials('admin', 'secretpassword')
    )
)
channel = connection.channel()
channel.queue_declare(queue='order_processing', durable=True)
# Prefetch 1 message at a time for fair dispatch
channel.basic_qos(prefetch_count=1)
channel.basic_consume(queue='order_processing', on_message_callback=process_order)
print("Waiting for orders. Press Ctrl+C to exit.")
channel.start_consuming()
Topic Exchange with Python
import pika
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
# Declare a topic exchange for log routing
channel.exchange_declare(exchange='logs', exchange_type='topic', durable=True)
# Bind queues to specific patterns
channel.queue_declare(queue='all_logs', durable=True)
channel.queue_bind(queue='all_logs', exchange='logs', routing_key='log.#')
channel.queue_declare(queue='error_logs', durable=True)
channel.queue_bind(queue='error_logs', exchange='logs', routing_key='log.error.*')
# Publish log messages with different routing keys
channel.basic_publish(exchange='logs', routing_key='log.error.auth',
                      body='Authentication failed for user bob')
channel.basic_publish(exchange='logs', routing_key='log.info.auth',
                      body='User alice logged in successfully')
channel.basic_publish(exchange='logs', routing_key='log.error.payment',
                      body='Payment gateway timeout')
# error_logs queue gets 2 messages (error.auth and error.payment)
# all_logs queue gets all 3 messages
connection.close()
Node.js Integration with amqplib
Install with npm install amqplib. The amqplib library provides both callback and promise-based APIs.
Publisher
const amqp = require('amqplib');
async function publishOrder(order) {
  const connection = await amqp.connect('amqp://admin:secretpassword@localhost');
  const channel = await connection.createChannel();

  const exchange = 'orders';
  const routingKey = 'order.new';

  await channel.assertExchange(exchange, 'direct', { durable: true });
  await channel.assertQueue('order_processing', { durable: true });
  await channel.bindQueue('order_processing', exchange, routingKey);

  const message = JSON.stringify(order);
  channel.publish(exchange, routingKey, Buffer.from(message), {
    persistent: true,
    contentType: 'application/json'
  });

  console.log(`Published order: ${order.orderId}`);
  setTimeout(() => connection.close(), 500);
}
publishOrder({ orderId: 12345, amount: 99.99, customer: 'alice' });
Consumer
const amqp = require('amqplib');
async function startConsumer() {
  const connection = await amqp.connect('amqp://admin:secretpassword@localhost');
  const channel = await connection.createChannel();

  await channel.assertQueue('order_processing', { durable: true });
  channel.prefetch(1); // Fair dispatch: one message at a time

  console.log('Waiting for orders...');
  channel.consume('order_processing', async (msg) => {
    if (!msg) return;
    const order = JSON.parse(msg.content.toString());
    console.log(`Processing order ${order.orderId}`);
    try {
      // Simulate async processing
      await new Promise(resolve => setTimeout(resolve, 1000));
      console.log(`Order ${order.orderId} completed`);
      channel.ack(msg);
    } catch (err) {
      console.error(`Failed to process order: ${err.message}`);
      channel.nack(msg, false, true); // requeue
    }
  });
}
startConsumer().catch(console.error);
Message Acknowledgment and Durability
RabbitMQ provides several mechanisms to prevent message loss. Understanding all three layers is critical for production systems.
Consumer Acknowledgments
When a consumer receives a message, it must acknowledge it. Until acknowledged, the message stays in the queue and will be redelivered if the consumer disconnects. There are three acknowledgment modes:
- basic_ack — message processed successfully, remove it from the queue
- basic_nack — processing failed, optionally requeue; a RabbitMQ extension that can also negatively acknowledge multiple messages at once
- basic_reject — the original AMQP 0-9-1 method; like nack but for a single message only
# Manual acknowledgment patterns
def on_message(ch, method, properties, body):
    try:
        result = process(body)
        ch.basic_ack(delivery_tag=method.delivery_tag)
    except TransientError:
        # Requeue for retry
        ch.basic_nack(delivery_tag=method.delivery_tag, requeue=True)
    except PermanentError:
        # Send to dead letter exchange, do not requeue
        ch.basic_nack(delivery_tag=method.delivery_tag, requeue=False)
Publisher Confirms
Publisher confirms let the producer know that the broker has received and persisted the message. Without confirms, a producer has no way to know if a message was lost due to a network issue or broker crash.
channel.confirm_delivery()
try:
    channel.basic_publish(
        exchange='orders',
        routing_key='order.new',
        body=json.dumps(order),
        properties=pika.BasicProperties(delivery_mode=2),
        mandatory=True  # Return message if no queue matches
    )
    print("Message confirmed by broker")
except pika.exceptions.UnroutableError:
    print("Message could not be routed to any queue")
Durability Checklist
For messages to survive a broker restart, you need all three:
- Durable queue — queue_declare(queue='name', durable=True)
- Persistent message — delivery_mode=2 in message properties
- Publisher confirms — channel.confirm_delivery()
Dead Letter Exchanges and Retry Patterns
A dead letter exchange (DLX) receives messages that cannot be delivered normally. Messages are dead-lettered when:
- A consumer rejects the message with requeue=False
- The message TTL expires
- The queue exceeds its maximum length
Setting Up a Dead Letter Exchange
import pika
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
# Declare the dead letter exchange and queue
channel.exchange_declare(exchange='dlx', exchange_type='direct', durable=True)
channel.queue_declare(queue='dead_letters', durable=True)
channel.queue_bind(queue='dead_letters', exchange='dlx', routing_key='failed')
# Declare the main queue with DLX configuration
channel.queue_declare(
    queue='tasks',
    durable=True,
    arguments={
        'x-dead-letter-exchange': 'dlx',
        'x-dead-letter-routing-key': 'failed',
        'x-message-ttl': 60000  # Messages expire after 60 seconds
    }
)
connection.close()
Retry with Exponential Backoff
Use a delay queue pattern: failed messages go to a delay queue with a TTL. When the TTL expires, they are dead-lettered back to the original queue for another attempt.
# Create retry queues with increasing delays (5s, 15s, 60s)
delays = [5000, 15000, 60000]
for i, delay in enumerate(delays):
    channel.queue_declare(queue=f'retry_{i}', durable=True, arguments={
        'x-dead-letter-exchange': '',          # default exchange
        'x-dead-letter-routing-key': 'tasks',  # back to main queue
        'x-message-ttl': delay
    })

def on_message(ch, method, properties, body):
    retry_count = (properties.headers or {}).get('x-retry-count', 0)
    try:
        process(body)
        ch.basic_ack(delivery_tag=method.delivery_tag)
    except Exception:
        if retry_count < 3:
            # Republish to the next delay queue with an incremented counter
            ch.basic_publish(exchange='', routing_key=f'retry_{retry_count}',
                             body=body, properties=pika.BasicProperties(
                                 delivery_mode=2,
                                 headers={'x-retry-count': retry_count + 1}))
        else:
            # Retries exhausted: park the message for inspection
            ch.basic_publish(exchange='dlx_final', routing_key='dead',
                             body=body,
                             properties=pika.BasicProperties(delivery_mode=2))
        # Ack only after the republish, so the message is never lost
        ch.basic_ack(delivery_tag=method.delivery_tag)
Clustering and High Availability
A RabbitMQ cluster connects multiple broker nodes so they share users, vhosts, queues, exchanges, bindings, and runtime parameters. Clustering increases capacity and throughput; for high availability you also need a replicated queue type such as quorum queues, covered below.
Setting Up a Cluster
All cluster nodes must share the same Erlang cookie. With the cookie in place, additional nodes join the cluster via rabbitmqctl:
# On each node, ensure the same Erlang cookie
echo "secret_cookie_value" > /var/lib/rabbitmq/.erlang.cookie
chmod 400 /var/lib/rabbitmq/.erlang.cookie
# From node 2 and 3, join the cluster
rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl join_cluster rabbit@rabbit1
rabbitmqctl start_app
# Verify cluster status
rabbitmqctl cluster_status
Quorum Queues
Quorum queues are the recommended replicated queue type for production. They use the Raft consensus algorithm to replicate data across cluster nodes, providing strong data safety guarantees. Quorum queues replace the older mirrored (HA) queues which are deprecated.
# Declare a quorum queue (replicated across cluster nodes)
channel.queue_declare(
    queue='critical_orders',
    durable=True,
    arguments={
        'x-queue-type': 'quorum',
        'x-quorum-initial-group-size': 3,  # replicate across 3 nodes
        'x-delivery-limit': 5  # built-in redelivery limit
    }
)
Streams
RabbitMQ Streams (introduced in 3.9) are an append-only log data structure, similar to Kafka topics. Streams allow multiple consumers to read the same data independently, support time-based and offset-based consumption, and are optimized for high-throughput scenarios.
# Declare a stream
channel.queue_declare(
    queue='events_stream',
    durable=True,
    arguments={
        'x-queue-type': 'stream',
        'x-max-length-bytes': 5_000_000_000,  # 5 GB retention
        'x-stream-max-segment-size-bytes': 100_000_000  # 100 MB segments
    }
)
Monitoring and Management UI
RabbitMQ ships with a built-in management plugin that provides a web UI and HTTP API for monitoring and administration.
Management UI
Access the management UI at http://localhost:15672. It shows:
- Overview: connections, channels, exchanges, queues, message rates
- Queue details: message counts, consumer count, memory usage, message rates
- Connection and channel details with per-channel flow control status
- Node health: memory usage, disk space, file descriptors, Erlang processes
HTTP API
# Get overview (cluster-wide stats)
curl -u admin:secretpassword http://localhost:15672/api/overview
# List queues with message counts
curl -u admin:secretpassword http://localhost:15672/api/queues | jq '.[] | {name, messages, consumers}'
# Get specific queue details
curl -u admin:secretpassword http://localhost:15672/api/queues/%2F/order_processing
# Health check endpoint
curl -u admin:secretpassword http://localhost:15672/api/health/checks/alarms
Prometheus and Grafana
For production monitoring, enable the Prometheus plugin. See the Prometheus & Grafana guide for dashboard setup.
# Enable Prometheus metrics endpoint
rabbitmq-plugins enable rabbitmq_prometheus
# Metrics available at http://localhost:15692/metrics
# Key metrics to monitor:
# - rabbitmq_queue_messages (messages ready + unacked)
# - rabbitmq_queue_consumers
# - rabbitmq_connections
# - rabbitmq_channel_messages_published_total
# - rabbitmq_channel_messages_delivered_total
# - rabbitmq_node_mem_used
CLI Diagnostics
# Check node health
rabbitmq-diagnostics check_running
rabbitmq-diagnostics check_port_connectivity
rabbitmq-diagnostics status
# List connections and channels
rabbitmqctl list_connections name state channels
rabbitmqctl list_channels name consumer_count messages_unacknowledged
# Check for memory and disk alarms
# (rabbitmqctl node_health_check is deprecated in modern releases)
rabbitmq-diagnostics alarms
RabbitMQ vs Kafka
Both are message brokers but with fundamentally different architectures and trade-offs. See the Apache Kafka Complete Guide for details on Kafka.
| Feature | RabbitMQ | Kafka |
|---|---|---|
| Model | Message broker (push) | Distributed log (pull) |
| Message retention | Deleted after ack | Retained on disk (configurable) |
| Routing | Rich (exchanges, bindings, patterns) | Topic-partition only |
| Throughput | Tens of thousands msg/s | Millions msg/s |
| Message replay | No (unless using Streams) | Yes (offset-based) |
| Priority queues | Yes | No |
| Message ordering | Per-queue FIFO | Per-partition FIFO |
| Best for | Task distribution, RPC, routing | Event streaming, log aggregation |
Choose RabbitMQ when you need complex routing logic, message priority, request-reply (RPC) patterns, per-message acknowledgment, or integration with Celery for Python task processing.
Choose Kafka when you need event replay, very high throughput, stream processing, multiple independent consumer groups reading the same data, or long-term event storage.
Best Practices and Common Patterns
Work Queue (Competing Consumers)
Multiple consumers share a queue to distribute CPU-intensive tasks, and RabbitMQ round-robins messages among them. Use prefetch_count=1 so a busy consumer is not assigned new messages while idle ones sit empty.
# Fair dispatch: each consumer gets one message at a time
channel.basic_qos(prefetch_count=1)
channel.basic_consume(queue='tasks', on_message_callback=process_task)
Publish/Subscribe (Fanout)
Every consumer gets a copy of every message. Each consumer declares its own exclusive queue and binds it to the fanout exchange.
Request/Reply (RPC)
The client sends a request with a reply_to queue and a correlation_id. The server processes the request and publishes the response to the reply queue with the matching correlation ID.
import uuid

corr_id = str(uuid.uuid4())
result = channel.queue_declare(queue='', exclusive=True)
reply_queue = result.method.queue

# Send request with reply_to and correlation_id
channel.basic_publish(
    exchange='', routing_key='rpc_queue', body='compute_sum(1, 2)',
    properties=pika.BasicProperties(
        reply_to=reply_queue, correlation_id=corr_id))
# Server publishes response to reply_queue with matching correlation_id
Connection Management
- Use one TCP connection per application instance, multiple channels on that connection
- Implement connection recovery with automatic reconnection logic
- Close channels and connections gracefully on shutdown
- Set heartbeat intervals (default 60s) to detect dead connections
General Best Practices
- Always use durable queues and persistent messages for important data
- Enable publisher confirms for critical message flows
- Set prefetch count to limit unacknowledged messages per consumer
- Configure dead letter exchanges on every production queue
- Use quorum queues instead of classic mirrored queues for replication
- Monitor queue depth and consumer count — growing queues indicate consumers cannot keep up
- Keep messages small (under 128 KB); use object storage for large payloads and pass references
- Name queues and exchanges descriptively: order.processing, not q1
Performance Tuning
Prefetch Count
The prefetch count controls how many unacknowledged messages a consumer can hold. Too low (1) means the consumer waits for the broker between each message. Too high means one consumer hogs messages while others sit idle.
# For CPU-bound tasks: low prefetch (1-5)
channel.basic_qos(prefetch_count=1)
# For I/O-bound tasks (HTTP calls, DB queries): higher prefetch (10-50)
channel.basic_qos(prefetch_count=25)
# For very fast consumers: even higher (100-250)
channel.basic_qos(prefetch_count=100)
Batch Publishing
If you need to publish many messages, batch them to reduce round-trips. With publisher confirms, you can confirm entire batches instead of individual messages.
channel.confirm_delivery()
# Publish a batch of messages
for order in orders:
    channel.basic_publish(
        exchange='orders', routing_key='order.new',
        body=json.dumps(order),
        properties=pika.BasicProperties(delivery_mode=2))
# Note: pika's BlockingChannel waits for a confirm on each publish.
# For true batched confirms, use an asynchronous client and track
# confirms via a callback.
Lazy Queues
Lazy queues move messages to disk as early as possible, keeping only a small buffer in memory. Use them for queues that may accumulate millions of messages (backlog scenarios). Note that since RabbitMQ 3.12, classic queues behave this way by default and the x-queue-mode argument is ignored.
channel.queue_declare(
    queue='bulk_imports',
    durable=True,
    arguments={'x-queue-mode': 'lazy'}
)
Erlang VM Tuning
# /etc/rabbitmq/rabbitmq.conf
# (comments must be on their own line in the new-style config format)

# Memory high watermark: default 0.4, raise for busy brokers
vm_memory_high_watermark.relative = 0.6

# Block publishers when free disk falls below this limit
disk_free_limit.absolute = 2GB

# Handle connection bursts
tcp_listen_options.backlog = 4096

# Larger TCP buffers for throughput
tcp_listen_options.sndbuf = 196608
tcp_listen_options.recbuf = 196608

# Set the file descriptor limit in systemd: LimitNOFILE=65536
Frequently Asked Questions
What is the difference between RabbitMQ and Apache Kafka?
RabbitMQ is a traditional message broker that routes messages through exchanges to queues and deletes them after consumer acknowledgment. It excels at complex routing patterns, per-message acknowledgment, priority queues, and request-reply workflows. Kafka is a distributed commit log designed for high-throughput event streaming where messages are retained on disk and can be replayed by multiple consumer groups. Use RabbitMQ when you need smart routing, task distribution, or RPC patterns. Use Kafka when you need event replay, very high throughput, stream processing, or multiple independent consumers reading the same data. See the Kafka Complete Guide for a deep dive.
How do I choose the right exchange type in RabbitMQ?
Use direct exchanges when you need exact routing key matching for point-to-point messaging, like routing tasks to specific worker queues. Use topic exchanges when you need pattern-based routing with wildcards (* and #) for flexible pub/sub scenarios, like routing logs by severity and source. Use fanout exchanges when every bound queue should receive a copy of every message, like broadcasting notifications. Use headers exchanges when you need to route based on message header attributes instead of routing keys, useful for complex multi-attribute routing.
How do I make RabbitMQ messages durable and survive broker restarts?
You need three things: (1) declare the queue as durable=True so the queue definition survives a restart, (2) set the message delivery_mode=2 (persistent) so the message is written to disk, and (3) use publisher confirms so the producer knows the broker has persisted the message. Even with these settings, there is a small window where a message could be lost if the broker crashes before fsyncing. For critical data, combine durable queues, persistent messages, publisher confirms, and consumer acknowledgments.
What is a dead letter exchange and when should I use one?
A dead letter exchange (DLX) receives messages that cannot be delivered or processed. Messages are dead-lettered when a consumer rejects them with requeue=False, when they expire due to TTL, or when the queue exceeds its max length. Use DLX for retry patterns (route failed messages to a delay queue, then back to the original queue), for capturing failed messages for analysis, and for implementing bounded retry logic with exponential backoff. Every production RabbitMQ deployment should configure dead letter exchanges.
How many connections and channels should my application use?
Use one TCP connection per application instance and create multiple channels on that connection for concurrent operations. Each channel is a lightweight virtual connection multiplexed over the TCP connection. A good rule is one channel per thread or coroutine. Avoid creating a new connection per publish or consume operation, as TCP connections are expensive. For publishers, one channel is usually enough unless you need publisher confirms on independent streams. For consumers, use one channel per consumer to isolate prefetch counts and acknowledgments.