Status: Future consideration
Workspace: App Connect
Created by: Guest
Created on: Mar 2, 2022

Support schema registry integration with Kafka

When producing and consuming messages to/from Kafka we need schema registry support, as all production deployments use a registry. Event Streams already supports Apicurio (apicur.io), and ACE should be able to do the same. On the producer side, when the flow uses a Kafka user whose schema registry ACL allows writing new schemas, registration becomes transparent.

Idea priority: Urgent
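
For context, this is roughly what "transparent" registry integration looks like with the plain Kafka Java client and Confluent's Avro serde - a minimal sketch, assuming a Confluent-compatible registry; the broker address, registry URL, topic and record schema are placeholders, and the classes and properties shown come from the Confluent serializer library, not from ACE:

```java
import java.util.Properties;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AvroProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker:9092");                       // placeholder broker
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        // The Confluent serde serializes the record and registers/looks up its schema for us.
        props.put("value.serializer",
                  "io.confluent.kafka.serializers.KafkaAvroSerializer");
        props.put("schema.registry.url", "https://registry.example.com");    // placeholder registry
        // With a user whose ACL allows writing schemas, auto-registration is what makes this transparent.
        props.put("auto.register.schemas", "true");

        Schema schema = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"Order\",\"fields\":[{\"name\":\"id\",\"type\":\"string\"}]}");
        GenericRecord order = new GenericData.Record(schema);
        order.put("id", "42");

        try (KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("orders", "42", order));
        }
    }
}
```

The request is essentially to get this behaviour from the KafkaProducer/KafkaConsumer nodes without dropping down to custom code.
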
  • Guest | May 22, 2024

    Our specific use case relates to facilitating interaction with serialised Avro on Confluent, but any kind of improvement in that area would be welcome. At the moment almost everything we do requires custom logic in JavaCompute nodes, turning ACE into a dumb wrapper for Java code, which is less than desirable.

    Support for Avro schemas in a similar vein to what is available for XSDs. Having a direct link to the schema registry would be nice but, honestly, not as important. Based on those, conversion from/to either a new Avro parser or just the regular JSON one. And then being able to link a topic to the current schema, so it knows how to serialise/deserialise the message tree. Some support for magic bytes might be required here, but seeing as different vendors have different solutions for them, I would settle for being able to just provide a setting to denote how much has to be trimmed from the front (a sketch of that trimming follows this comment).
    More fine-grained control over the commit when acting as a consumer would also be nice, as the node currently commits on propagation and you have to build in transactionality manually through other means.
    On the schema lifecycle, I don't expect a fully managed solution. Honestly, it can already be worked around with a byte-stream reader, interpreting the magic bytes and routing, but a slightly more out-of-the-box way of working might be useful. For example, if the consumer node could propagate some information about the magic bytes in the local environment or message properties, that would make life easier.
    With XSD, generation of Java classes with JAXB is provided out of the box and the runtime comes with Jackson, so something similar for Avro would be greatly appreciated.

    I assume most of these will already be on their radar, but regardless, these are some of the features we could really use.
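
Purely as an illustration of the "trim the magic bytes" workaround described in the comment above: registries that use the Confluent wire format prefix each Avro payload with one magic byte plus a 4-byte schema ID, which today has to be stripped in custom code before the bytes can be handed to an Avro decoder. A minimal plain-Java sketch (the writer schema is assumed to be known to the caller; inside ACE the bytes would typically come from the BLOB domain in a JavaCompute node):

```java
import java.nio.ByteBuffer;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryDecoder;
import org.apache.avro.io.DecoderFactory;

public class ConfluentWireFormat {
    // Confluent wire format: 1 magic byte (0x00) + 4-byte schema ID + Avro binary body.
    private static final int HEADER_LENGTH = 5;

    public static GenericRecord decode(byte[] kafkaValue, Schema writerSchema) throws Exception {
        ByteBuffer buffer = ByteBuffer.wrap(kafkaValue);
        if (buffer.get() != 0) {
            throw new IllegalArgumentException("Unknown magic byte - not Confluent wire format");
        }
        int schemaId = buffer.getInt();   // could be used to look the schema up in the registry
        byte[] body = new byte[kafkaValue.length - HEADER_LENGTH];
        buffer.get(body);

        BinaryDecoder decoder = DecoderFactory.get().binaryDecoder(body, null);
        return new GenericDatumReader<GenericRecord>(writerSchema).read(null, decoder);
    }
}
```
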

  • Guest | Jan 31, 2024

    Is this feature being considered to make the KafkaProducer node a schema-aware client?

    This would benefit many ACE users and add value to the ACE ecosystem.

  • Ben Thompson (Admin) | Mar 23, 2022

    RFE Review. Thank you for taking the time to submit this idea for enhancement, which we agree would benefit the product. In the past we have extended the Kafka message flow nodes to assist with use cases involving schema registries - the provision of options to set and retrieve Kafka custom header properties helps flow developers deal with the association between a Kafka message and the schema which describes its format:


    https://www.ibm.com/docs/en/app-connect/12.0?topic=enterprise-setting-retrieving-kafka-custom-header-properties


    One popular serialization/deserialization format is Avro, and being able to interrogate Kafka custom header properties in a JavaCompute node can help a user write code to select a relevant schema when interpreting the content of a Kafka message.
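
    A hedged sketch of the kind of JavaCompute logic this enables today, reading a header value and using it to decide how to interpret the payload. The local environment path and the header name avro.schema.name are assumptions for illustration only; the page linked above documents the exact location and property names:

```java
import com.ibm.broker.javacompute.MbJavaComputeNode;
import com.ibm.broker.plugin.MbElement;
import com.ibm.broker.plugin.MbException;
import com.ibm.broker.plugin.MbMessageAssembly;

public class SchemaHeaderRouting extends MbJavaComputeNode {
    public void evaluate(MbMessageAssembly assembly) throws MbException {
        // The local-environment path below is an assumption; check the linked
        // documentation for the location used by your ACE version.
        MbElement header = assembly.getLocalEnvironment().getRootElement()
                .getFirstElementByPath("Kafka/Input/CustomHeaderProperties/avro.schema.name");

        String schemaName = (header != null) ? header.getValueAsString() : "default";
        // A real flow would branch on schemaName here, e.g. to pick a parser or message model.

        // Pass the assembly on unchanged.
        getOutputTerminal("out").propagate(assembly);
    }
}
```
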


    Regarding the specifics of the idea, we like the concept of tying together the process of reading/writing the message with the process of injecting/retrieving the schema from the registry. We also note that in many circumstances flow developers might want a built-in function for doing these tasks but not necessarily want to execute the injection/retrieval of the schema into the registry at the same time (potentially this could be controlled by separate flow logic, or only invoked for certain messages depending on their characteristics, or perhaps served from a set of commonly used schemas cached in memory). Any thoughts on these aspects would be appreciated, as they would help us devise the best externals for this feature (e.g. a separate node for the purpose of injection/retrieval, or extra properties on the existing KafkaConsumer / KafkaRead / KafkaProducer nodes?).
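
    On the caching point, a rough sketch of what an in-memory cache of commonly used schemas could look like if built in flow code today: keyed by schema ID and populated lazily from a Confluent-compatible registry REST endpoint (GET /schemas/ids/{id}). The registry URL is a placeholder, and Jackson is used for the JSON response since it already ships with the runtime:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.avro.Schema;

public class SchemaCache {
    private final ConcurrentMap<Integer, Schema> cache = new ConcurrentHashMap<>();
    private final HttpClient http = HttpClient.newHttpClient();
    private final ObjectMapper mapper = new ObjectMapper();
    private final String registryUrl;   // e.g. "https://registry.example.com" (placeholder)

    public SchemaCache(String registryUrl) {
        this.registryUrl = registryUrl;
    }

    /** Returns the schema for the given ID, fetching it from the registry only once. */
    public Schema schemaFor(int schemaId) {
        return cache.computeIfAbsent(schemaId, id -> {
            try {
                HttpRequest request = HttpRequest.newBuilder(
                        URI.create(registryUrl + "/schemas/ids/" + id)).GET().build();
                HttpResponse<String> response =
                        http.send(request, HttpResponse.BodyHandlers.ofString());
                // Confluent-compatible registries return {"schema": "<avro schema json>"}.
                String raw = mapper.readTree(response.body()).get("schema").asText();
                return new Schema.Parser().parse(raw);
            } catch (Exception e) {
                throw new RuntimeException("Failed to fetch schema " + id, e);
            }
        });
    }
}
```
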

    Status of the Idea is updated to Future Consideration.