Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Post your ideas

Start by posting ideas and requests to enhance a product or service. Take a look at ideas others have posted, and upvote the ones that matter to you.

  1. Post an idea

  2. Upvote ideas that matter most to you

  3. Get feedback from the IBM team to refine your idea

Help IBM prioritize your ideas and requests

The IBM team may need your help to refine an idea, so they may ask for more information or feedback. The offering management team will then decide whether they can begin working on it. If they can start during the next development cycle, they will put the idea on the priority list. Each team at IBM works on a different schedule: some ideas can be implemented right away, while others must be placed on a later schedule.

Receive a notification on the decision

Some ideas can be implemented at IBM, while others may not fit within the development plans for the product. In either case, the team will let you know as soon as possible. In some cases, we may be able to find alternatives for ideas that cannot be implemented in a reasonable time.

If you encounter any issues accessing the Ideas portal, please send an email describing the issue to for resolution. For more information about IBM's Ideas program, visit

Status Not under consideration
Created by Guest
Created on Apr 30, 2021

Disable auto-restart of ACE pods when configuration is updated

We are running the EUS version of ACE (App Connect Enterprise) on the EUS version of CP4I, on OpenShift 4.6.21 on Azure.

When an ACE "Configuration" object such as odbc.ini or server.conf.yaml is updated, it triggers an automatic restart of all the ACE pods. These restarts are beginning to cause havoc in our production systems.
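For illustration, such settings are delivered to the integration servers as a Configuration custom resource managed by the App Connect operator. The following is a hedged sketch only; the resource name and namespace are invented, and the exact apiVersion and field names should be checked against the operator version in use:

```yaml
# Sketch of an ACE Configuration object holding an odbc.ini
# (names and namespace are hypothetical; verify apiVersion against your operator)
apiVersion: appconnect.ibm.com/v1beta1
kind: Configuration
metadata:
  name: my-odbc-ini
  namespace: ace
spec:
  type: odbc
  # contents holds the base64-encoded odbc.ini file
  contents: <base64-encoded odbc.ini>
```

It is an update to an object like this that currently triggers the restart of every pod referencing it.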

This is a real challenge for us, as not all pods can go down at any time without consequences. The fact that all pods go down at the same time can also cause issues if some integrations depend on services from other pods. Furthermore, long-running tasks would potentially have to roll back and start over, which causes delays.

Today we have around 50 integrations living in around 20 pods in our production cluster. Our best-effort approach today is to try to find a good time, during off-peak hours, to modify ODBC settings so that the "damage" is minimized. When we are at 200 integrations over x pods, we don't think we can keep the level of control needed to maintain a stable production environment with this kind of auto-restart. We have to be able to control this manually: modifying odbc.ini as we see fit, and then rebuilding the pods at a time that suits us to pick up the new ODBC details.
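The manual control described above could, in principle, look like a staged restart of one integration server at a time during a maintenance window. This is a hedged sketch, not the product's behaviour: the deployment names are hypothetical, and it assumes the integration servers run as ordinary Deployments that `oc rollout` can target:

```shell
# Sketch: restart integration servers one at a time, at a time of our choosing
# (deployment names are hypothetical)
oc rollout restart deployment/is-orders
oc rollout status deployment/is-orders --watch

# only proceed to the next server once the first is healthy again
oc rollout restart deployment/is-billing
oc rollout status deployment/is-billing --watch
```

The point of the idea is that the operator performs the equivalent of this for every pod at once, with no way to defer or stage it.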

The workaround right now: it is currently possible to overcome this limitation by making the change from the CLI via an "oc apply" on the Configuration, which is an upsert operation. However, we were told in the PMR that this may no longer be possible in the future, in order to keep the CLI behaviour in sync with the ACE Dashboard GUI (where an update operation triggers a mutating webhook that restarts all ACE pods at once).
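For completeness, the CLI workaround mentioned above amounts to applying the edited Configuration manifest directly, which (today) bypasses the dashboard's webhook-driven restart. The file name here is hypothetical:

```shell
# Sketch of the current workaround: upsert the Configuration from the CLI
# ("my-odbc-config.yaml" is a hypothetical file containing the edited Configuration)
oc apply -f my-odbc-config.yaml
```

As noted, this path is not guaranteed to keep working, which is why a supported way to disable or defer the auto-restart is being requested.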

Idea priority Urgent
  • Admin
    Andy Garratt
    Nov 16, 2021

    Updating the status of this idea following James' and Abu's conversation.