
RabbitMQ Performance Tuning: Optimizing Throughput and Latency

Posted on September 13, 2025 by admin

Introduction

RabbitMQ is fast out of the box, but performance drops quickly when it is not configured well.
Two key metrics are:

  • Throughput: how many messages per second RabbitMQ can handle.
  • Latency: how long it takes for a message to move from producer to consumer.

This article explains simple ways to tune RabbitMQ for better throughput and lower latency. It is a continuation of my earlier posts on High Availability in RabbitMQ: Clustering and Mirrored Queues Explained and Scaling Microservices with RabbitMQ: Patterns and Best Practices. Together, they give a broader view of RabbitMQ performance, reliability, and scaling patterns.

Hardware and System Settings

  1. Use SSD storage
    • Faster disk = faster persistence for durable queues.
  2. Increase memory
    • RabbitMQ keeps messages in RAM as much as possible. More RAM = faster.
  3. Tune file descriptors
    • RabbitMQ needs an open file handle for every connection, plus handles for queue and message store files.
    • Increase ulimit -n to something like 65535 (see the sketch after this list).
  4. Use multiple cores
    • The Erlang runtime spreads queues and connections across all available cores. More CPU cores = more parallel handling.
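Example (raising the file descriptor limit on Linux): a minimal sketch only, assuming a systemd-managed host where the broker runs as the standard rabbitmq-server service; adjust paths and values for your environment.

# /etc/systemd/system/rabbitmq-server.service.d/limits.conf
[Service]
LimitNOFILE=65535

# Apply the override and check what the broker reports:
sudo systemctl daemon-reload && sudo systemctl restart rabbitmq-server
rabbitmq-diagnostics status    # look for the "File Descriptors" limit in the output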

RabbitMQ Configuration

a. Prefetch Count (QoS)

  • Controls how many unacknowledged messages the broker will deliver to a consumer at once.
  • Low value = fair distribution across consumers, but lower throughput.
  • Higher value = more throughput, but a slow consumer can pile up unacked messages and be overloaded.

Example in Go:

ch.Qos(50, 0, false) // allow 50 unacked messages per consumer
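For context, below is a minimal, self-contained consumer sketch that applies this prefetch setting. The import path assumes the maintained rabbitmq/amqp091-go client (the older streadway/amqp API is the same); the connection URL and queue name are illustrative.

package main

import (
    "log"

    amqp "github.com/rabbitmq/amqp091-go"
)

func main() {
    // Connect and open a channel (URL is illustrative).
    conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
    if err != nil {
        log.Fatal("Dial failed:", err)
    }
    defer conn.Close()

    ch, err := conn.Channel()
    if err != nil {
        log.Fatal("Channel failed:", err)
    }
    defer ch.Close()

    // Allow up to 50 unacked messages in flight to this consumer.
    if err := ch.Qos(50, 0, false); err != nil {
        log.Fatal("Qos failed:", err)
    }

    // Manual acks (autoAck=false), so the prefetch limit actually bounds in-flight work.
    msgs, err := ch.Consume("task_queue", "", false, false, false, false, nil)
    if err != nil {
        log.Fatal("Consume failed:", err)
    }

    for d := range msgs {
        // ... process the message ...
        d.Ack(false)
    }
}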

Publisher Confirms vs Transactions

RabbitMQ supports two ways to make sure published messages are safely stored:

  1. Transactions
    • Similar to database transactions.
    • Producer starts a transaction, publishes messages, and commits.
    • If commit fails, messages are rolled back.
    • Very reliable but slow, because it blocks.
    Go Example (Transactions):
if err := ch.Tx(); err != nil { // put the channel into transactional mode
    log.Fatal("Failed to start transaction:", err)
}

err := ch.Publish("", "task_queue", false, false, amqp.Publishing{
    ContentType: "text/plain",
    Body:        []byte("Hello Transaction!"),
})
if err != nil {
    ch.TxRollback() // discard everything published since the transaction started
    log.Println("Transaction rolled back")
} else {
    ch.TxCommit() // blocks until the broker has accepted the messages
    log.Println("Transaction committed")
}
  2. Publisher Confirms
    • More modern and faster.
    • Enable confirms with ch.Confirm(false).
    • Broker sends ACK/NACK after message is stored.
    • Works asynchronously, much lighter than transactions.
    Go Example (Publisher Confirms):
if err := ch.Confirm(false); err != nil { // put the channel into confirm mode
    log.Fatal("Failed to enable confirms:", err)
}

// Register for confirmations before publishing.
ackChan := ch.NotifyPublish(make(chan amqp.Confirmation, 1))

err := ch.Publish("", "task_queue", false, false, amqp.Publishing{
    ContentType: "text/plain",
    Body:        []byte("Hello Confirm!"),
})
if err != nil {
    log.Fatal("Publish failed:", err)
}

confirm := <-ackChan
if confirm.Ack {
    log.Println("Message confirmed by broker")
} else {
    log.Println("Message not confirmed (NACK)")
}

Comparison

Feature        | Transactions (Tx) | Publisher Confirms
Reliability    | Strong            | Strong
Speed          | Slow              | Fast
Behavior       | Blocking          | Async
Best Use Case  | Rare              | Recommended

Connection Management

  • Reuse channels instead of opening a new one for every operation.
  • Avoid opening thousands of short-lived connections; each one costs a TCP and AMQP handshake.
  • Use a connection pool (or a few long-lived connections with many channels) for efficiency, as sketched below.
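A minimal sketch of that idea: one long-lived connection that hands out a fixed set of reusable channels. The channelPool type and its methods are illustrative helpers, not part of the amqp client, and the import path again assumes rabbitmq/amqp091-go.

package main

import (
    "log"

    amqp "github.com/rabbitmq/amqp091-go"
)

// channelPool is an illustrative helper: one shared connection, a fixed set of reusable channels.
type channelPool struct {
    conn  *amqp.Connection
    chans chan *amqp.Channel
}

func newChannelPool(url string, size int) (*channelPool, error) {
    conn, err := amqp.Dial(url)
    if err != nil {
        return nil, err
    }
    p := &channelPool{conn: conn, chans: make(chan *amqp.Channel, size)}
    for i := 0; i < size; i++ {
        ch, err := conn.Channel()
        if err != nil {
            return nil, err
        }
        p.chans <- ch
    }
    return p, nil
}

// Get borrows a channel; Put returns it so other goroutines can reuse it.
func (p *channelPool) Get() *amqp.Channel   { return <-p.chans }
func (p *channelPool) Put(ch *amqp.Channel) { p.chans <- ch }

func main() {
    pool, err := newChannelPool("amqp://guest:guest@localhost:5672/", 4)
    if err != nil {
        log.Fatal("Pool setup failed:", err)
    }

    // Borrow a channel, use it, and return it instead of opening a new one.
    ch := pool.Get()
    err = ch.Publish("", "task_queue", false, false, amqp.Publishing{
        ContentType: "text/plain",
        Body:        []byte("hello from a pooled channel"),
    })
    pool.Put(ch)
    if err != nil {
        log.Fatal("Publish failed:", err)
    }
}

Handing channels out one at a time also keeps each channel in use by a single goroutine at a time, which sidesteps any questions about sharing one channel concurrently.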
See also: Reliable Messaging with RabbitMQ: Acknowledgments, Durability, and Persistence
Category: RabbitMQ
