In our organisation we use Ansible Automation Platform (AAP) to manage our inventory, schedule and execute our playbooks, store our secrets, handle authentication and authorization, logging, and so on. AAP is the standard enterprise solution for managing and orchestrating hosts within a business environment. Within AAP, Ansible does not run on a traditional control node but in so-called "Execution Environments": Podman container images that act as the control node. They contain ansible-core, ansible-runner, Python, additional system dependencies and additional Ansible collections.
(Note: execution environments are Podman containers and therefore temporary by nature. That is to say: a container is spun up at the start of a playbook run and deleted after completion. Any data and reports transferred to the EE are deleted with it.)
We noticed a few places in the playbooks where the design does not take execution environments into account; they were written with the more traditional control node in mind. Below are a few examples:
collect_scan_results_unix.yml:
---
- name: "Get the list of packages with scan results from UNIX/Linux endpoints"
  find:
    paths: "{{ lmt_scanner_output_path_unix | default(lmt_scanner_path_unix + '/output') }}"
    use_regex: true
    patterns: ['^\d{12}-.+-\d{10}\.tar\.gz$']
  register: files_to_copy_unix

- name: "Fetch packages with scan results from UNIX/Linux endpoints"
  fetch:
    src: "{{ item.path }}"
    dest: "{{ lmt_local_file_storage_path }}/{{ lmt_scan_result_packages_folder }}/"
    flat: true
  register: fetched_files
  with_items: "{{ files_to_copy_unix.files }}"
  loop_control:
    label: "{{ item.path | basename }}"

- name: "Clean up packages on UNIX/Linux endpoints"
  file:
    state: absent
    path: "{{ item.path }}"
  with_items: "{{ files_to_copy_unix.files }}"
  when: fetched_files is succeeded
  ignore_errors: true
  loop_control:
    label: "{{ item.path | basename }}"
In this case lmt_local_file_storage_path is set to the ./lmt_file_storage directory. Because the playbook executes inside the execution environment container rather than on a control node, the fetched files are stored inside the container and disappear when it is removed.
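One possible direction, sketched below under assumptions: the packages fetched into the execution environment could be pushed to persistent storage before the container is torn down. Here "filestore" is a hypothetical inventory host with long-lived storage and lmt_persistent_storage_path is a hypothetical variable; uploading to an object store instead would serve the same purpose:

```yaml
# Hedged sketch: push the fetched packages off the ephemeral execution
# environment before the container is deleted. "filestore" and
# lmt_persistent_storage_path are hypothetical names, not part of the
# existing playbooks.
- name: "Push scan result packages to persistent storage"
  copy:
    src: "{{ lmt_local_file_storage_path }}/{{ lmt_scan_result_packages_folder }}/"
    dest: "{{ lmt_persistent_storage_path }}/scan_results/"
  delegate_to: filestore
  run_once: true
```

With delegate_to, the copy module reads from the local side of the connection (here, the execution environment) and writes to the delegated host, so the data survives the container teardown; run_once suffices because all fetched files are aggregated in one local directory.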
lmt_collect_troubleshooting_data
The next example I discovered is the collect_troubleshooting_data playbook. The support team asks for the reports generated by this playbook when a ticket is opened, so it is important that the playbook runs correctly to keep that process smooth. In the collect_troubleshooting_data_unix task:
- name: "Fetch troubleshooting data archive from UNIX/Linux endpoints"
  fetch:
    src: "{{ lmt_scanner_path_unix }}/work/{{ ansible_host }}_{{ unix_endpoint_id.stdout }}.tar.gz"
    dest: "{{ lmt_local_file_storage_path }}/{{ lmt_troubleshooting_data_folder }}/"
    flat: true
  register: fetched_files

- name: "Remove troubleshooting data archive from UNIX/Linux endpoints"
  file:
    state: absent
    path: "{{ lmt_scanner_path_unix }}/work/{{ ansible_host }}_{{ unix_endpoint_id.stdout }}.tar.gz"
  when: fetched_files is succeeded
Again, the data is transferred to the execution environment rather than to a control node. Here the consequences are more severe: the archive is removed from the endpoint after the fetch, and the fetched copy is deleted together with the execution environment, as there is no host acting as a control node within AAP. Therefore the lmt_collect_troubleshooting_data playbook is not compatible with the Ansible Automation Platform.
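A minimal sketch of a fix, under the same assumptions as above (a hypothetical persistent "filestore" host and a hypothetical lmt_persistent_storage_path variable): copy the archive off the execution environment immediately after the fetch, and only then remove it from the endpoint:

```yaml
# Hedged sketch: secure the troubleshooting archive before the execution
# environment is destroyed. "filestore" and lmt_persistent_storage_path
# are hypothetical names.
- name: "Fetch troubleshooting data archive from UNIX/Linux endpoints"
  fetch:
    src: "{{ lmt_scanner_path_unix }}/work/{{ ansible_host }}_{{ unix_endpoint_id.stdout }}.tar.gz"
    dest: "{{ lmt_local_file_storage_path }}/{{ lmt_troubleshooting_data_folder }}/"
    flat: true
  register: fetched_files

- name: "Copy the archive off the execution environment"
  copy:
    src: "{{ fetched_files.dest }}"   # path on the EE, returned by fetch
    dest: "{{ lmt_persistent_storage_path }}/troubleshooting/"
  delegate_to: filestore
  when: fetched_files is succeeded

- name: "Remove troubleshooting data archive from UNIX/Linux endpoints"
  file:
    state: absent
    path: "{{ lmt_scanner_path_unix }}/work/{{ ansible_host }}_{{ unix_endpoint_id.stdout }}.tar.gz"
  when: fetched_files is succeeded
```

Ordering the copy before the endpoint cleanup means the data exists in at least one durable place at every point in the run.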
Please update the code to be compatible with the AAP environment, so that the reports remain available for troubleshooting any issue we register with IBM or others.
Thanks in advance.