Hi, thanks for the detail.
Imagine this use case: we have thousands of input messages per second to process, with a message-affinity requirement. With MQ's first-in-first-out approach it is very difficult to reach the required performance, and the solution cannot be scaled horizontally. With Kafka and a consumer group, we can scale the message flow with a multi-threaded/additional-instances approach and deploy the same flow to many brokers for parallel processing. Message affinity is still maintained, since the consumer group coordinates each consumer (a thread in a flow) onto its designated partition. Say we have 1,000 partitions: we can then run 1,000 parallel threads processing messages concurrently without losing message affinity. This would be very difficult to do with IBM MQ.
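As an illustration of why this works, here is a minimal sketch of key-based partition assignment (plain Java; String.hashCode() stands in for the murmur2 hash the real Kafka client uses, and the account-style keys are invented for the example):

```java
import java.util.HashSet;
import java.util.Set;

public class AffinityDemo {
    // Kafka's default partitioner picks a partition by hashing the record
    // key (murmur2 in the real client; String.hashCode() here as a simple
    // stand-in), so every message with the same key lands on the same
    // partition and keeps its ordering relative to that key.
    static int partitionFor(String key, int numPartitions) {
        return Math.floorMod(key.hashCode(), numPartitions);
    }

    public static void main(String[] args) {
        int partitions = 1000;
        // All messages for the same (invented) account key map to one
        // partition, so per-key ordering survives parallel consumption.
        System.out.println("ACCT-42 -> partition " + partitionFor("ACCT-42", partitions));
        // Distinct keys spread across partitions, allowing up to
        // 'partitions' consumer threads to work concurrently.
        Set<Integer> used = new HashSet<>();
        for (int i = 0; i < 10_000; i++) {
            used.add(partitionFor("ACCT-" + i, partitions));
        }
        System.out.println("distinct partitions used by 10,000 keys: " + used.size());
    }
}
```

Because the consumer group assigns each partition to exactly one consumer thread, per-key ordering is preserved no matter how many threads run in parallel.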
However, in this typical read-process-write approach, i.e. KafkaConsumer node -> ESQL processing -> KafkaProducer node, the benefits of Kafka and of horizontal scaling cannot be achieved, due to the missing exactly-once and transactional support.
As you mentioned, local transactions are now supported in Kafka, so I really hope IIB/ACE can be enhanced to support exactly-once and transactional processing. I can see the benefit of exactly-once, transactional, high-performance, horizontally scalable processing with IIB and Kafka.
Thank you for raising this RFE. When the KafkaProducer and KafkaConsumer nodes were first added to the product (IIB / ACE), the functionality to provide an exactly-once level of message delivery was not available with the Kafka protocol. Since then, the protocol has evolved, and this is now a reasonable request. IIB / ACE does of course have a long history of providing predictable transactional behaviour. A traditional message flow interacting with MQ queueing, for example, gives three operational possibilities:
1. Non-transactional: all operations are disjoint.
2. Local transactions: all or none of the messaging operations (publication and consumption) occur, but other operations (such as database updates) are not linked to the messaging transaction.
3. XA transactions: database updates are linked to the MQ transaction; all or nothing.
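The "all or none" semantics of option 2 can be sketched with a toy in-memory model (plain Java; the queue fields and the processOne method are invented for illustration and are not part of any IIB/ACE, MQ, or Kafka API):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class LocalTxSketch {
    // Toy stand-in for a local transaction: the consume and the publish
    // either both take effect or neither does.
    final Deque<String> inputQueue = new ArrayDeque<>();
    final List<String> outputQueue = new ArrayList<>();

    // Returns true if the unit of work committed, false if it rolled back.
    boolean processOne(boolean failMidway) {
        String msg = inputQueue.peek();        // read under syncpoint; not yet committed
        if (msg == null) return false;
        outputQueue.add("processed:" + msg);   // staged publication
        if (failMidway) {
            outputQueue.remove("processed:" + msg); // roll back the staged publication
            return false;                      // the input message stays on the queue
        }
        inputQueue.poll();                     // commit: the get and the put succeed together
        return true;
    }

    public static void main(String[] args) {
        LocalTxSketch tx = new LocalTxSketch();
        tx.inputQueue.add("order-1");
        System.out.println("failed flow committed? " + tx.processOne(true));  // false: nothing moved
        System.out.println("retry committed?       " + tx.processOne(false)); // true: message moved
    }
}
```

After the failed attempt the message is still on the input queue and nothing has been published, so a retry can safely reprocess it.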
Initially, Kafka had no way of providing either option 2 or option 3, so rather than try to imitate a level of message reliability which Kafka could not fully support, we implemented a set of behaviours which we could explain clearly and which we hoped would be easy for implementers to understand.
For the KafkaProducer node, we exposed the 'acks' property, which allows the user to request confirmation that a publication has occurred. When acks are enabled and a response is received, we propagate to the Out terminal, returning the message metadata, so the user knows the message has been published. However, if no response is received from the Kafka server within the configured timeout interval, we do not know whether or not the message has been published, and we propagate to the Failure terminal.
For the KafkaConsumer node, we chose to manually acknowledge the receipt of each message to Kafka (that is, to save the consumer offset to Kafka) immediately before the message is delivered to the message flow. This gives an at-most-once level of message reliability, which again we felt would be well understood.
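A toy simulation of this commit-before-processing behaviour shows how a crash mid-flow leaves a message acknowledged but never processed (all names here are invented for the example; this is not the real consumer implementation):

```java
import java.util.ArrayList;
import java.util.List;

public class AtMostOnceSketch {
    // Toy model of commit-before-processing: the consumer offset is saved
    // immediately before the flow runs, so a crash during processing means
    // the message is skipped on restart (at-most-once).
    static List<String> run(List<String> topic, int crashWhileProcessing) {
        List<String> processed = new ArrayList<>();
        int committedOffset = 0;
        while (committedOffset < topic.size()) {
            String msg = topic.get(committedOffset);
            committedOffset++;                       // ack (save offset) first ...
            if (committedOffset - 1 == crashWhileProcessing) {
                continue;                            // ... crash mid-flow: restart resumes
            }                                        //     past this message, so it is lost
            processed.add(msg);                      // the flow processes the message
        }
        return processed;
    }

    public static void main(String[] args) {
        System.out.println(run(List.of("m0", "m1", "m2"), 1)); // prints [m0, m2]
    }
}
```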
In our private prototyping, we also investigated acknowledging the receipt of each message at the end of the message flow (rather than the start) in order to give something like an "exactly once" level of message processing; but with many potential reasons why obtaining an acknowledgement can fail (especially when dealing with remote Kafka brokers across high-latency networks), we often observed unpredictable behaviour. Things have now moved on, however, and Kafka is improving all the time; specifically, it has introduced:
- The option to group multiple publications into a 'transaction' which is then committed as a single unit
- The option for consumers to consume only "committed" messages
- The option for consumers to save their position as part of the "publication transaction"
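For readers experimenting with this pattern outside the product, the Apache Kafka client settings that correspond to the three capabilities listed above look roughly like this (property names are from the standard Kafka client configuration; the transactional.id value is an arbitrary example):

```properties
# Producer side: group publications into a transaction committed as one unit
transactional.id=flow-tx-1          # example value; must be stable per producer instance
enable.idempotence=true
acks=all

# Consumer side: read only committed messages, and leave offset commits to
# the producer's transaction (the flow saves its position via the producer)
isolation.level=read_committed
enable.auto.commit=false
```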
So, by combining these aspects, in theory Kafka now also has the ability to support message pattern 2 described above: linking the consumption of an input message from Kafka with the production of one or more output messages to Kafka. We think this would be a good first use case to look into to help direct this RFE, and we would be interested in users' thoughts on it.

A final thought for this update: until now, the vast majority of integration patterns involving Kafka which have been presented by our customer base have prioritised high performance above higher-level qualities of service such as exactly-once delivery. We also note that so far this RFE has gained 5 votes from the community and, so far at least, no further comments in support of the use case. However, we are watching the Kafka protocol evolve with interest and we will continue to monitor this RFE closely. The status of the RFE is updated to Uncommitted Candidate.