Demystifying Producer Flow Control and Resource Usages in Apache ActiveMQ (I)

One of the very first lessons you learn straight after starting your journey with Apache ActiveMQ is what Producer Flow Control is (abbreviated as PFC hereafter). And you learn it the hard way!

Producer Flow Control is typically described as the "culprit" for seemingly frozen brokers and stuck queues and topics; your producer app is trying to dispatch messages to the broker but something exogenous and exotic is blocking it.

As soon as you learn about the existence of this mechanism, you scream out loud: "#*@!&, why would someone come up with such a silly functionality?". Read on to understand why PFC makes so much sense in a messaging system.

So, what really is Producer Flow Control?

Producer Flow Control is two things at once: (1) a defence mechanism for the broker, and (2) a method to guarantee operational continuity in the face of unplanned messaging volumes. It slows down or temporarily halts fast producers in a non-intrusive manner while slow consumers happily digest messages at their own pace.

Without Producer Flow Control, you incur risks such as a destination overflowing the JVM heap and blowing up with a java.lang.OutOfMemoryError, or a single destination hijacking broker resources and penalising all other destinations.

Having said that, there is another option instead of silently halting producers: to send back an exception to the producer. But that method is far more intrusive and requires the client's source code to cater for this scenario. Mind you, ActiveMQ can be configured to take this approach too.
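For the record, this alternative behaviour is switched on via the `sendFailIfNoSpace` attribute of the `systemUsage` element, which makes the broker throw an exception back at the producer instead of blocking its send. A minimal sketch (the 64 mb limit is purely illustrative):

```xml
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost">
  <systemUsage>
    <!-- sendFailIfNoSpace="true": fail the producer's send() with an
         exception instead of silently blocking it -->
    <systemUsage sendFailIfNoSpace="true">
      <memoryUsage>
        <memoryUsage limit="64 mb"/>
      </memoryUsage>
    </systemUsage>
  </systemUsage>
</broker>
```

A related attribute, `sendFailIfNoSpaceAfterTimeout`, blocks the producer for the given number of milliseconds first and only then fails the send, offering a middle ground between the two behaviours.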

Note that if Producer Flow Control is being triggered too often, it carries a hidden message: your actual messaging volume and rate requirements are underserved by the configuration/infrastructure you've put in place. You will need to optimise your configuration, scale up or scale out (vertical or horizontal scaling).

So now you see why PFC makes a lot of sense in a messaging system. Fair enough, another topic altogether is whether PFC should be enabled out-of-the-box and whether the default 1 MB memory limit per destination is adequate. But that's a different story.
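For the impatient: both knobs live in `policyEntry` elements inside the broker's `destinationPolicy`. A sketch with illustrative values, showing how PFC can be tuned, or switched off, per destination:

```xml
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <!-- all queues: keep PFC on, raise the per-destination memory limit -->
      <policyEntry queue=">" producerFlowControl="true" memoryLimit="5 mb"/>
      <!-- all topics: PFC disabled (illustrative, not a recommendation) -->
      <policyEntry topic=">" producerFlowControl="false"/>
    </policyEntries>
  </policyMap>
</destinationPolicy>
```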

Point #1: Resources are not infinite

See, messaging systems are highly concurrent, inherently volatile platforms whose sweet spot is the real-time exchange of messages. Real-time entails that message consumers are able to keep up with producers. The worst nightmare of a message broker is fast producers paired with slow consumers, but regrettably, this is a very common situation.

When this happens, the broker must use its resources to buffer up messages somewhere in between, right? But resources aren't infinite, they are limited by nature. So the broker only has so much to play with.

Point #2: YOU are in the driver's seat: define global boundaries

Apache ActiveMQ allows you to be in total control of resource usage. Messaging systems quickly become the backbone of an enterprise, and thus they need to be highly predictable platforms. That's why it's good that you explicitly define the boundaries within which ActiveMQ runs, or else things will unravel in unpredictable ways.

Apache ActiveMQ understands two levels of limits: global limits (<systemUsage /> config element) and per-destination memory limits. This post will tackle the former, while in the next one we'll talk about the latter.

Global limits. There are three limits that govern the entirety of the AMQ instance's operations. Once you set them, ActiveMQ will watch that they are honoured:

  • Max. memory usage => the amount of RAM/virtual memory that your broker is entitled to use. It ultimately translates into the quota of Java heap your broker can use to buffer up messages. (<memoryUsage /> config element)
  • Max. persistence store usage => as per the JMS spec, PERSISTENT messages must be kept in durable storage until they are fully delivered and acknowledged by all interested consumers, so that they survive broker restarts and possible crashes. This global limit defines the maximum size allocation for that store. (<storeUsage /> config element)
  • Max. temporary storage usage => the maximum amount of disk usage the broker may use to temporarily keep NON-PERSISTENT messages around while consumers are unavailable to process them. This store DOES NOT survive broker restarts or crashes. It is simply used as a buffer for that broker session. (<tempUsage /> config element)
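Putting the three limits together, a typical `systemUsage` fragment in activemq.xml looks like the following sketch (the limits shown are purely illustrative; size them for your own workload and hardware):

```xml
<systemUsage>
  <systemUsage>
    <!-- heap quota for buffering messages -->
    <memoryUsage>
      <memoryUsage limit="512 mb"/>
    </memoryUsage>
    <!-- disk quota for the persistent message store -->
    <storeUsage>
      <storeUsage limit="50 gb"/>
    </storeUsage>
    <!-- disk quota for spooling NON-PERSISTENT messages -->
    <tempUsage>
      <tempUsage limit="10 gb"/>
    </tempUsage>
  </systemUsage>
</systemUsage>
```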

Stay tuned for subsequent blog posts

In the next blog posts, I'll tell you more about how to define per-destination limits in ActiveMQ, how ActiveMQ behaves when these limits are approached and/or breached (how Producer Flow Control kicks in), how the different message cursors affect that behaviour, and what happens when PFC is disabled.

4 Responses so far.

  1. Unknown says:

    Thanks for this post.

    Is it true that PFC works only for non-persistent messages?
    That's what I'm seeing in my tests, anyway.


    Regards

  2. PFC works for any delivery mode, persistent or non-persistent. However, it depends on the cursor settings and memory limits you set both on the destination and globally. Stay tuned for the next blog post, where I'll delve deeper into all of this!

    -- Raúl Kripalani.

  3. Brent says:

    Do you know whether the global limits are affected when using BlobMessages?

    Since a BlobMessage is passing the large payload 'out-of-band' so to speak, is ActiveMQ taking into account the disk space of the out-of-band payloads?

    I have a situation where I'm running out of disk space on my AMQ server. It is all due to the payloads of my BlobMessages. Then AMQ gets in a weird state and my consumers are getting "Unexpected EOF" when trying to read the payloads. What I am hoping for is a way to limit my producer based upon disk space of the server...not just disk space of the messages under the control of AMQ. Because BlobMessages are out-of-band, I'm guessing the flow control is not helping me.

    Thanks,

  4. Max E says:

    Very helpful post which succinctly explains many ActiveMQ concepts. Can't wait for the next parts.

    Thanks,
