

Status Not under consideration
Created by Guest
Created on Dec 2, 2022

IBM MQ Performance - Guarantee read latency so that downstream systems and queues don't fall over during synchronous communication

Good Day,


Given all the problems that I have experienced in the data centres, there are other issues that also need to be looked at and resolved. With hybrid VSAN disk and caching tiers, a whole application can fall over, or fail at some random point in time, because queue reading no longer keeps up with the disk speed.

When reads hit a different (slower) caching tier, all the downstream consumers are exposed to that disk latency, which ultimately reduces the speed at which they can read messages off a queue, because the synchronous TCP round trip slows down with the disk read latency.


This is what I suggest to your Mercedes-internal IBM queue teams on how they would need to structure things to ensure that no downstream queue reader is exposed to an increase in latency, and no synchronous application fails or becomes slower than necessary, when storage space runs out or writes land on a slower storage tier.

By keeping the technique simple for everyone, IBM then handles the complexity of ensuring that message reads never see more than 1 ms of latency.


The IBM queue system should be designed to ensure that the latency exposed to the consumer of the queue remains, at all times, below a 1 ms disk response for reads.

We can't just add more load-balanced consumers to our applications all over the place for every downstream application; that is going to add a lot more cost to the end systems.


The current design architecture, as far as I am concerned, can't guarantee this, because the consumer of the queue is directly exposed to the VSAN disk group's uncached latency when the cache is full or items get cycled out.

Also, if this is a shared SSD, then one would basically have to reserve IO capacity on it for each VM host.

I think the best approach would be an IBM queue architecture where queues are stored on different disk tiers, and IBM MQ is configured to batch and run many workers that transfer traffic back and forth between the primary queue on SSD and a slower, variable-latency VSAN queue.

This would provide all downstream consumers with a latency guarantee of less than 1 ms for, say, 400 application queue consumers.

Ideally, the upstream remote queues write to the SSD disk; however, when it is full, they must switch over to writing directly to the VSAN queue. The VSAN disk queue then attempts to replay its traffic onto the SSD queue for consumption.

The SSD queue used for replay from VSAN can have an in-memory buffer as well, because the downstream should read it quickly, so there is no need to persist to the SSD if the message is already persisted on VSAN.
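As a rough sketch of what I mean by spill-and-replay, here is a minimal Python illustration, using in-memory queues as stand-ins for the SSD and VSAN tiers. All class and tier names here are hypothetical and are not part of MQ's actual architecture or API:

```python
import queue
import threading

class TieredQueue:
    """Hypothetical two-tier queue: a small, fast 'SSD' tier backed by a
    larger, slower 'VSAN' overflow tier. Consumers only ever read the
    fast tier; a replay worker drains the overflow back into it."""

    def __init__(self, fast_capacity):
        self.fast = queue.Queue(maxsize=fast_capacity)  # primary (SSD) tier
        self.overflow = queue.Queue()                   # overflow (VSAN) tier
        # The replay worker moves spilled messages back to the fast tier
        # as the consumers drain it.
        self._replayer = threading.Thread(target=self._replay, daemon=True)
        self._replayer.start()

    def put(self, msg):
        try:
            self.fast.put_nowait(msg)   # write to SSD while it has room
        except queue.Full:
            self.overflow.put(msg)      # spill to VSAN when SSD is full

    def get(self, timeout=None):
        # Consumers see only the fast tier, never the slow tier's latency.
        return self.fast.get(timeout=timeout)

    def _replay(self):
        while True:
            msg = self.overflow.get()   # blocks until something has spilled
            self.fast.put(msg)          # blocks until the fast tier drains
```

The point of the sketch is that the consumer's read path never touches the overflow tier directly; the replay worker absorbs the slow tier's latency.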

This can be driven by a prediction algorithm. IBM then massively scales the threads and workers across the distributed VSAN storage and multiple physical disks to get the concurrent speed needed. On VSAN disk you increase the concurrent bandwidth, not the per-disk speed, which overall increases the throughput.

Just like one does with quad-pumping memory on the clock signals for DDR.
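To illustrate the bandwidth-over-speed point: no single disk gets faster, but running transfers in parallel raises aggregate throughput. A hypothetical sketch, with sleeps standing in for slow disk reads (nothing here is MQ-specific):

```python
import time
from concurrent.futures import ThreadPoolExecutor

READ_LATENCY_S = 0.05  # pretend each VSAN read takes 50 ms

def slow_read(block_id):
    """Stand-in for one read against a slow, high-latency disk."""
    time.sleep(READ_LATENCY_S)
    return block_id

def serial_reads(n):
    """One worker: total time grows linearly with the number of reads."""
    start = time.perf_counter()
    results = [slow_read(i) for i in range(n)]
    return results, time.perf_counter() - start

def parallel_reads(n, workers=8):
    """Many workers: total time stays near a single read's latency."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(slow_read, range(n)))
    return results, time.perf_counter() - start
```

With 8 workers, 8 reads finish in roughly one read's latency instead of eight; the per-read latency is unchanged, only the concurrent bandwidth grows.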


If you imagine the situation where VSAN's variable disk latency is exposed to all downstream servers, then all downstream services are going to need massive load balancing and complicated infrastructure and application instances to deal with the highly variable upstream system, at every single point in the chain.

This is bad.

It would be best if, at the main ingestion point, after some routing rules are applied, messages could be stored in a central system designed to spill to a slower VSAN disk to persist traffic when there is an issue.

Best would be for you to change the architecture such that the upstream guarantees sub-1 ms latency for, say, 400 workers at an X MB message size, for both enqueue and dequeue operations.

You would have to benchmark such system configurations and publish the limits of each, so that the system can uphold its guaranteed performance contract for latency and throughput.

We would then choose from these benchmarked system designs.
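The benchmarking I am asking for could start as simply as measuring per-message enqueue and dequeue latency percentiles. A hypothetical sketch against an in-memory queue; a real harness would of course target an actual queue manager, and the function name and defaults are my own:

```python
import queue
import statistics
import time

def benchmark_queue(n_messages=10_000, payload=b"x" * 1024):
    """Hypothetical micro-benchmark: measure per-message enqueue and
    dequeue latency against an in-memory stand-in for the queue manager,
    and report the 99th-percentile latency in milliseconds."""
    q = queue.Queue()
    enq, deq = [], []
    for _ in range(n_messages):
        t0 = time.perf_counter()
        q.put(payload)
        enq.append(time.perf_counter() - t0)
    for _ in range(n_messages):
        t0 = time.perf_counter()
        q.get()
        deq.append(time.perf_counter() - t0)

    def p99(samples):
        # quantiles(n=100) yields 99 cut points; index 98 is the 99th percentile
        return statistics.quantiles(samples, n=100)[98]

    return {"enqueue_p99_ms": p99(enq) * 1e3,
            "dequeue_p99_ms": p99(deq) * 1e3}
```

A vendor-published table of such numbers, per message size and worker count, is what would let us choose a configuration that upholds the latency contract.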

Kind Regards,


Wesley Oliver

Idea priority Low
  • Admin
    Mark Taylor
    Dec 13, 2022

    While this is an interesting suggestion, this is not something we would expect to do in MQ. As a result we are declining this Idea. While declined, the Idea can still be commented upon.