Integration


This is an IBM Automation portal for Integration products. To view all of your ideas submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible to all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post your own idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.


Status: Functionality already exists
Workspace: App Connect
Created by: Guest
Created on: Sep 1, 2021

Support for OpenShift and Kubernetes native log outputs

From CP4I 2020.4.1 onwards, Cloud Pak logging is integrated with OpenShift cluster logging running on the EFK (Elasticsearch, Fluentd, Kibana) stack. Fluentd is able to consume logs that adhere to the Kubernetes native log output conventions, making log integration seamless.
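
For reference, OpenShift cluster logging of that era is typically enabled by creating a ClusterLogging custom resource. The following is a minimal sketch based on the OpenShift 4.x cluster logging operator; the node count, storage class, and sizing are illustrative assumptions only, not values taken from this idea:

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: elasticsearch              # the "E" in EFK
    elasticsearch:
      nodeCount: 3
      storage:
        storageClassName: gp2        # illustrative storage class
        size: 200G
      redundancyPolicy: SingleRedundancy
  visualization:
    type: kibana                     # the "K" in EFK
    kibana:
      replicas: 1
  collection:
    logs:
      type: fluentd                  # the "F" in EFK; tails container stdout/stderr
      fluentd: {}

With this in place, Fluentd collects anything a container writes to stdout/stderr, which is why native log output support in ACE would make the integration seamless.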


Support for Kubernetes native log outputs such as stderr and stdout is highly desirable for ACE native logging, including the LOG statement and the TRACE node.




Idea priority: High
  • Admin
    Ben Thompson
    Nov 5, 2021

    As per the previous comment ... assumed safe to close. Cheers!

  • Admin
    Ben Thompson
    Oct 4, 2021

    Hi Colin,

    Just checking in to see if you'd been able to confirm whether you have what you need here ... Fine either way if you need a bit longer to check things out ... but if we don't hear anything further by the beginning of next month then we'll assume we're safe to close down the request.

    Cheers,

    Ben

  • Guest
    Sep 24, 2021

    Looking at the current ACE container description (github.com/ot4i/ace-docker):


    Logging

    The logs from the integration server running within the container are written to standard out. The log entries can be output in two formats: basic and json.



    That may be sufficient to ensure the container logs are collected by Fluentd and integrated with OpenShift cluster logging. I will run some more tests with OpenShift/CP4I and ACE.

  • Admin
    Ben Thompson
    Sep 22, 2021

    Idea / RFE Review. Thank you very much for taking the time to raise this enhancement request ... There seem to be three separate aspects to this query:


    1. Ability for an integration server to send "ACE native log output" to standard out

    2. Ability for the LOG statement in an ESQL Compute node to send text to standard out

    3. Ability for the TRACE node in a message flow to send text to standard out


    Taking these in turn ...


    For aspect 1, it would be good to get clarity on what is meant by "ACE native log output". My interpretation is that this refers to the output produced when an integration server logs an event of interest, which in earlier versions of the on-premise product involved writing to the syslog or the Windows event viewer. Typically this kind of log entry occurs because a message flow node experiences some kind of error, or because an administration action is requested which cannot be fulfilled ... and it is produced by the product runtime independently of any logging actions which a specific message flow developer might want to happen as part of their coded functionality in the flow. It should be noted that this is really a requirement for the behaviour of the ACE container, rather than for the behaviour of the integration server process itself, which is already capable of logging in this manner ... The ACE container (github.com/ot4i/ace-docker) also offers an environment variable named LOG_FORMAT which can be used for configuring this. It can take these values:


    basic: human-readable output for use in development when using docker logs or kubectl logs

    json: output for pushing into an ELK stack for searching and visualising in Kibana


    ... so if I have understood correctly what was meant by "ACE native log output", I think we already have this capability today and nothing further is needed.
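
    By way of illustration, a minimal sketch of wiring LOG_FORMAT into a Kubernetes Deployment might look like the following; the metadata names and image reference are placeholders for illustration rather than anything official:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: ace-server                      # placeholder name
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: ace-server
      template:
        metadata:
          labels:
            app: ace-server
        spec:
          containers:
            - name: ace
              image: my-registry/ace:latest # placeholder; use your ACE container image
              env:
                - name: LOG_FORMAT
                  value: "json"             # "basic" gives human-readable development output

    With LOG_FORMAT set to json, whatever the container writes to standard out is structured for collection by Fluentd and searching in Kibana.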


    For aspect 2, there are Log settings in the server.conf.yaml file which can be used to control the format of the entries which are written to standard out. One of the options is the "ibmjson" format ...


    Log:
      consoleLog: true
      outputFormat: 'ibmjson'


    If you set this in the server.conf.yaml and then use a Compute node with a LOG statement such as this:


    LOG EVENT SEVERITY 1 CATALOG 'BIPmsgs' MESSAGE 2951 VALUES(1,2,3,4);


    ... and then look at console.log / standard out, you should see an entry like this:


    {"type":"ace_message","ibm_product":"IBM App Connect Enterprise","ibm_recordtype":"log","host":"LAPTOP-HH1E1OQN","module":"integration_server.TEST_SERVER","ibm_serverName":"TEST_SERVER","ibm_processName":"","ibm_processId":"24272","ibm_threadId":"22728","ibm_datetime":"2021-09-22T21:43:28.880266","loglevel":"INFO","message":"2951I: Event generated by user code. Additional information : '1' '2' '3' '4' 'Logger.Compute' '{5}' '{6}' '{7}' '{8}' '{9}' ","ibm_message_detail":"2951I: Event generated by user code. Additional information : '1' '2' '3' '4' 'Logger.Compute' '{5}' '{6}' '{7}' '{8}' '{9}' \nThe event was generated by an SQL LOG or THROW statement. This is the normal behavior of these statements. \nSince this is a user generated event, the user action is determined by the message flow and the type of event. ","ibm_messageId":"2951I","ibm_sequence":"1632343408880_0000000000001"}


    For aspect 3, if you keep the same settings in server.conf.yaml, and then use a trace node in the message flow with its properties as follows:


    Destination = Local Error Log

    File path = <leave this blank>

    Pattern =
      Another
      Example

    Message catalog = BIPmsgs

    Message number = 2951


    ... then push data through the trace node in the flow and you should see an entry like this:


    {"type":"ace_message","ibm_product":"IBM App Connect Enterprise","ibm_recordtype":"log","host":"LAPTOP-HH1E1OQN","module":"integration_server.TEST_SERVER","ibm_serverName":"TEST_SERVER","ibm_processName":"","ibm_processId":"24272","ibm_threadId":"24792","ibm_datetime":"2021-09-22T21:58:37.508696","loglevel":"ERROR","message":"2951I: Event generated by user code. Additional information : 'Another\r","ibm_message_detail":"2951I: Event generated by user code. Additional information : 'Another\r\nExample\r\n' 'Logger.Trace' '{2}' '{3}' '{4}' '{5}' '{6}' '{7}' '{8}' '{9}' \nThe event was generated by an SQL LOG or THROW statement. This is the normal behavior of these statements. \nSince this is a user generated event, the user action is determined by the message flow and the type of event. ","ibm_messageId":"2951E","ibm_sequence":"1632344317508_0000000000001"}


    So, in summary I think the product in its current form has quite a lot of what is being asked for here ... but of course without further deeper investigation there could be aspects of the above messages which are not helpful for your use case or for Fluentd, such as a preferred styling, or more assistance for taking data out of the message body / logical tree for insertion, etc. ... so we're updating the status of the idea to "Need more information" ... could you let us know your thoughts on the above?