Good Day,
Given all the problems I have experienced in the data centres, there are other issues that also need to be looked at and resolved. With hybrid vSAN disk groups and their extra caching layers, a whole application can fall over, or fail at some random point in time, because queue reads no longer keep up once a read misses the cache and hits a different, slower storage medium. That exposes every downstream consumer to the disk latency and ultimately reduces the speed at which they can read messages off a queue, because the synchronous TCP round trip slows down with every disk-latency read.
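To put rough numbers on that effect (my own illustrative arithmetic, not measurements from the actual environment): a consumer that reads synchronously can complete at most one read per round trip, so the read latency directly caps its throughput.

```python
# Rough, illustrative arithmetic only -- the latency figures below are
# assumptions, not measurements from the actual vSAN environment.

def max_sync_throughput(read_latency_ms: float) -> float:
    """Upper bound on messages/second for a single synchronous consumer."""
    return 1000.0 / read_latency_ms

for tier, latency_ms in [("SSD cache hit", 0.5),
                         ("capacity tier / cache miss", 10.0)]:
    print(f"{tier:28s} {latency_ms:5.1f} ms  ->  "
          f"<= {max_sync_throughput(latency_ms):6.0f} msg/s per consumer")
```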
This is what I suggest to your Mercedes-internal IBM MQ teams on how they would need to structure things, so that no downstream queue reader is exposed to an increase in latency when the synchronous application path becomes slower than needed because storage space runs out or reads hit a slower layer of storage. Keeping things technically simple for everyone means IBM MQ then handles the complexity of ensuring that no message read sees more than 1 ms of latency. The IBM MQ system design should ensure that the latency exposed to the consumer of the queue remains, at all times, below a 1 ms disk response for reads.
We can’t just add more load-balanced consumers to our applications all over the place for every downstream application; it is going to add a lot more cost to the end systems.
The current architecture, as far as I am concerned, can’t guarantee this, because the consumer of the queue is directly exposed to the un-cached latency of the vSAN disk group when the cache is full or items get cycled out of it.
Also, if this is a shared SSD, then one would basically have to reserve IO capacity on it for each VM host.
I think the best would be an IBM MQ architecture where queues are stored on different disks, and IBM MQ is configured to batch and run many workers that transfer traffic back and forth between the primary queue on SSD and a secondary queue on the slower, variable-latency vSAN. This would give all downstream consumers a latency guarantee of less than 1 ms for, say, 400 application queue consumers. Ideally, the upstream remote queues write into the SSD queue; however, when it is full they must switch over to writing directly to the vSAN queue, and the vSAN queue will then attempt to replay its traffic onto the SSD queue for consumption. The replay path from vSAN to SSD can also use an in-memory buffer, because replayed messages should be read by the downstream quickly, so there is no need to persist them to the SSD again if they are already persisted on vSAN.
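A minimal sketch of that spill-over and replay flow, using plain Python queues to stand in for the SSD tier, the vSAN tier, and the in-memory replay buffer (the class and method names are my own illustration, not an IBM MQ API):

```python
# Illustrative model only: queue.Queue stands in for the SSD-backed queue,
# the vSAN-backed queue, and the in-memory replay buffer. This is not IBM MQ
# API; it only shows the intended flow of messages between the tiers.
import queue
import threading

class TieredQueue:
    def __init__(self, ssd_capacity: int):
        self.ssd = queue.Queue(maxsize=ssd_capacity)   # fast primary tier
        self.vsan = queue.Queue()                      # slower spill-over tier
        self.replay_buffer = queue.Queue()             # in-memory, already persisted on vSAN
        threading.Thread(target=self._replay_worker, daemon=True).start()

    def put(self, msg):
        """Producers write to SSD while it has space, otherwise spill to vSAN."""
        try:
            self.ssd.put_nowait(msg)
        except queue.Full:
            self.vsan.put(msg)

    def _replay_worker(self):
        """Drain the vSAN tier back toward consumers via the in-memory buffer."""
        while True:
            msg = self.vsan.get()
            self.replay_buffer.put(msg)   # no second persist to SSD needed

    def get(self):
        """Consumers prefer replayed in-memory messages, then poll the SSD tier."""
        while True:
            try:
                return self.replay_buffer.get_nowait()
            except queue.Empty:
                pass
            try:
                return self.ssd.get(timeout=0.01)
            except queue.Empty:
                continue

if __name__ == "__main__":
    q = TieredQueue(ssd_capacity=2)
    for i in range(5):            # two land on "SSD", three spill to "vSAN"
        q.put(f"msg-{i}")
    print([q.get() for _ in range(5)])
```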
The switch-over can be driven by a prediction algorithm. IBM MQ then scales the threads and workers aggressively across the distributed vSAN storage and its multiple physical disks to get the concurrency needed. On the vSAN disks you increase the concurrent bandwidth rather than the per-disk speed, which overall increases the throughput, just like one does with quad-pumped DDR memory on the clock signals.
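As a rough illustration of that point (the figures below are assumptions, not vSAN measurements): each capacity disk keeps its own latency, but draining several of them concurrently multiplies the aggregate throughput.

```python
# Assumed figures for illustration only -- per-disk latency stays the same,
# while aggregate throughput grows with the number of disks drained in parallel.
per_disk_latency_ms = 10.0                            # one capacity-tier read
per_disk_throughput = 1000.0 / per_disk_latency_ms    # msg/s from one disk

for disks in (1, 4, 8, 16):
    aggregate = disks * per_disk_throughput
    print(f"{disks:2d} disks drained in parallel: "
          f"latency still ~{per_disk_latency_ms:.0f} ms, "
          f"aggregate ~{aggregate:5.0f} msg/s")
```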
If you imagine the situation where the variable vSAN disk latency is exposed to all downstream servers, then every downstream service is going to need massive load balancing, complicated infrastructure, and extra application instances to cope with that highly variable upstream behaviour at every single point in the chain. This is bad.
It would be better if the main ingestion point, potentially after some routing rules are applied, could store messages in a central system that is designed to fall back to the slower vSAN disk to persist traffic when there is an issue.
The best would be for you to change the architecture such that the upstream guarantees sub-1 ms latency for, say, 400 workers at X MB message size, for both enqueue and dequeue operations. You would have to benchmark such system configurations and publish the limits of each one, which would then have to uphold its guaranteed performance contract for latency and throughput. We would then choose from these benchmarked system designs.
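As an example of the kind of benchmark I mean (a minimal sketch with assumed parameters, measuring an in-process stand-in rather than a real IBM MQ queue manager):

```python
# Minimal benchmark sketch: 400 workers enqueue and dequeue fixed-size messages
# against an in-process queue.Queue stand-in, and we report latency percentiles.
# A real benchmark would target the actual queue manager and its SSD/vSAN tiers.
import queue
import statistics
import threading
import time

WORKERS = 400
MESSAGES_PER_WORKER = 50
MESSAGE_SIZE_BYTES = 1 * 1024 * 1024      # "X MB" placeholder: 1 MB assumed here
PAYLOAD = b"x" * MESSAGE_SIZE_BYTES

q = queue.Queue()
enqueue_latencies, dequeue_latencies = [], []
lock = threading.Lock()

def worker():
    local_enq, local_deq = [], []
    for _ in range(MESSAGES_PER_WORKER):
        t0 = time.perf_counter()
        q.put(PAYLOAD)
        t1 = time.perf_counter()
        q.get()
        t2 = time.perf_counter()
        local_enq.append((t1 - t0) * 1000.0)
        local_deq.append((t2 - t1) * 1000.0)
    with lock:
        enqueue_latencies.extend(local_enq)
        dequeue_latencies.extend(local_deq)

threads = [threading.Thread(target=worker) for _ in range(WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

for name, samples in (("enqueue", enqueue_latencies), ("dequeue", dequeue_latencies)):
    samples.sort()
    p50 = statistics.median(samples)
    p99 = samples[int(len(samples) * 0.99) - 1]
    print(f"{name}: p50={p50:.3f} ms  p99={p99:.3f} ms  (target < 1 ms)")
```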
Kind Regards,
Wesley Oliver
While this is an interesting suggestion, this is not something we would expect to do in MQ. As a result we are declining this Idea. While declined, the Idea can still be commented upon.