Integration


This is an IBM Automation portal for Integration products. To view all of your ideas submitted to IBM, create and manage groups of Ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find additional information about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.


Status: Not under consideration
Created by: Guest
Created on: Apr 30, 2021

Disable auto-restart of ACE pods when configuration is updated

We are running the EUS version of ACE (App Connect Enterprise) on the EUS version of CP4I, on OpenShift 4.6.21 on Azure.

When an ACE "Configuration" object such as odbc.ini or server.conf.yaml is updated, it triggers an automatic restart of all the ACE pods. These restarts are beginning to cause havoc in our production systems.
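For context, this is roughly the shape of such a Configuration object in our environment. This is a hedged sketch only: the resource name, namespace, and file name are made up, and the exact spec fields (contents here; some configuration types use secretName instead) should be verified against the operator version in use.

    # Hypothetical sketch of a Configuration carrying an odbc.ini file.
    cat <<'EOF' > odbc-configuration.yaml
    apiVersion: appconnect.ibm.com/v1beta1
    kind: Configuration
    metadata:
      name: odbc-datasources        # hypothetical name
      namespace: ace                # hypothetical namespace
    spec:
      type: odbc
      description: ODBC data source definitions
      contents: "<base64-encoded odbc.ini goes here>"
    EOF

Every ACE pod that references this Configuration is restarted as soon as the object is updated.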

This is a real challenge for us, as not all pods can go down at an arbitrary time without consequences. The fact that all pods go down at the same time can also cause issues when some integrations depend on services provided by other pods. Furthermore, long-running tasks may have to roll back and start over, which causes delays.

Today we have around 50 integrations living in around 20 pods in our production cluster. Our best-effort approach is to find a time during off-peak hours to modify ODBC settings so that the "damage" is minimized. When we are at 200 integrations over x pods, we do not think we can keep the level of control needed to maintain a stable production environment with this kind of auto-restart. We have to be able to control this manually: modify odbc.ini as we see fit, then restart the pods at a time that suits us so they pick up the new ODBC details (see the staggered restart sketch below).
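What we are asking for is the ability to do something like the following instead. This is a hedged sketch with hypothetical deployment names, assuming each integration server is backed by a Deployment and that the operator tolerates a manual rolling restart:

    # Restart one integration server at a time, at a moment of our choosing.
    oc rollout restart deployment/is-orders -n ace
    oc rollout status deployment/is-orders -n ace      # wait until healthy
    oc rollout restart deployment/is-billing -n ace
    oc rollout status deployment/is-billing -n ace

That way, integrations that depend on each other are never down at the same time, and each server picks up the new ODBC details in turn.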

The workaround right now: it is currently possible to overcome this limitation by making the change from the CLI via an "oc apply" on the Configuration, which is an upsert operation. However, we were told in the PMR that this may no longer be possible in the future, in order to keep the CLI functionality in sync with the ACE Dashboard GUI (where an update operation triggers a mutating webhook that restarts all ACE pods at once).
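For reference, a sketch of that workaround as we apply it today, reusing the hypothetical file and resource names from the Configuration sketch above:

    # 1. Encode the updated odbc.ini and paste the output into
    #    spec.contents of odbc-configuration.yaml.
    base64 -w0 odbc.ini
    # 2. Upsert from the CLI rather than the ACE Dashboard; in our
    #    environment this path does not trigger the restart webhook.
    oc apply -f odbc-configuration.yaml
    # 3. Restart the pods later, at a time that suits us (see the
    #    staggered restart sketch above).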


Idea priority: Urgent
  • Admin
    Andy Garratt
    Nov 16, 2021

    Updating the status of this following James' and Abu's conversation.