ansible / ansible-builder

An Ansible execution environment builder

License: Other

Python 87.86% Makefile 1.28% Shell 10.16% Jinja 0.39% Dockerfile 0.31%
ansible-dev-tools

ansible-builder's Introduction


Ansible Builder

Ansible Builder is a tool that automates the process of building execution environments using the schemas and tooling defined in various Ansible Collections and by the user.

See the readthedocs page for ansible-builder at:

https://ansible-builder.readthedocs.io/en/stable/

Get Involved:

  • We use GitHub issues to track bug reports and feature ideas
  • Want to contribute? Check out our guide
  • Join us in the #ansible-builder channel on Libera.chat IRC
  • For the full list of Ansible email Lists, IRC channels and working groups, check out the Ansible Mailing lists page of the official Ansible documentation.

Code of Conduct

We ask all of our community members and contributors to adhere to the Ansible code of conduct. If you have questions, or need assistance, please reach out to our community team at [email protected]

License

Apache License v2.0

ansible-builder's People

Contributors

akasurde, alancoding, andersson007, ansible-zuul[bot], beeankha, dependabot[bot], duckinator, eqrx, felixfontein, jctanner, kdelee, matburt, mscherer, nitzmahone, oranod, pabelanger, samccann, samdoran, shanemcd, shrews, sivel, spredzy, ssbarnea, stanislav-zaprudskiy, tanganellilore, thedoubl3j, timgrt, w4hf, webknjaz, wenottingham


ansible-builder's Issues

Clean error messages for missing files

If a file referenced inside the EE spec doesn't exist, the tool fails with a raw traceback.

$ ansible-builder create -f alan/alan_ee.yml -c alan/bc
Traceback (most recent call last):
  File "/Users/alancoding/.virtualenvs/ansible-builder_test/bin/ansible-builder", line 11, in <module>
    load_entry_point('ansible-builder', 'console_scripts', 'ansible-builder')()
  File "/Users/alancoding/Documents/repos/ansible-builder/ansible_builder/cli.py", line 34, in run
    ab = prepare()
  File "/Users/alancoding/Documents/repos/ansible-builder/ansible_builder/cli.py", line 30, in prepare
    return AnsibleBuilder(**vars(args))
  File "/Users/alancoding/Documents/repos/ansible-builder/ansible_builder/main.py", line 19, in __init__
    build_context=self.build_context)
  File "/Users/alancoding/Documents/repos/ansible-builder/ansible_builder/main.py", line 65, in __init__
    self.build_steps()
  File "/Users/alancoding/Documents/repos/ansible-builder/ansible_builder/main.py", line 71, in build_steps
    [self.steps.append(step) for step in GalaxySteps(containerfile=self)]
  File "/Users/alancoding/Documents/repos/ansible-builder/ansible_builder/main.py", line 93, in __new__
    copy(src, dest)
  File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.7/lib/python3.7/shutil.py", line 248, in copy
    copyfile(src, dst, follow_symlinks=follow_symlinks)
  File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.7/lib/python3.7/shutil.py", line 120, in copyfile
    with open(src, 'rb') as fsrc:
FileNotFoundError: [Errno 2] No such file or directory: 'adjusted_requirements.yml'

where

$ cat alan/alan_ee.yml 
---
# subset of what may eventually be the container to replace the Ansible venv in AWX
version: 1
dependencies:
  galaxy: adjusted_requirements.yml
  python: requirements.txt
  system:
    - git  # for project updates
 
additional_build_steps: |
  RUN some-command
  ENV SOME_VAR=some_value

and the file adjusted_requirements.yml does not exist.
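A minimal sketch of a cleaner failure mode, with hypothetical names (`DefinitionError` and `validate_file` are illustrative, not the project's actual API): check every referenced file up front and raise a user-facing error instead of letting shutil.copy blow up.

```python
import os


class DefinitionError(Exception):
    """User-facing error in the execution environment definition."""


def validate_file(path, role):
    # Fail early with a readable message instead of a shutil traceback
    if not os.path.exists(path):
        raise DefinitionError(
            "The {0} file {1}, referenced in the EE spec, does not exist.".format(role, path)
        )
    return path
```

The CLI entry point could then catch DefinitionError, print str(exc), and exit non-zero.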

Figure out how to handle existing build context folder

Right now, if a folder already exists at the location given for -c (the build context), new files are merged with whatever is already in there.

This is probably undesirable, because any files that were not created anew will almost certainly not be used, and that goes against best practices.

Inadvertently including files that are not necessary for building an image results in a larger build context and larger image size. This can increase the time to build the image, time to pull and push it, and the container runtime size.

So if the build context already exists, then sensible default behaviors could be:

  • delete the folder
  • error
  • warn

feedback requested

Maybe this could become a toggle via a --force option?
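A sketch of the error-by-default behavior with a --force escape hatch (the function name and error text are assumptions, not the project's actual implementation):

```python
import os
import shutil


def prepare_build_context(path, force=False):
    """Create the build context folder, refusing to silently merge into a stale one."""
    if os.path.exists(path) and os.listdir(path):
        if not force:
            raise RuntimeError(
                "Build context {0} already exists; pass --force to replace it.".format(path)
            )
        # --force: start from a clean slate so no stale files end up in the image
        shutil.rmtree(path)
    os.makedirs(path, exist_ok=True)
```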

Update README w/ General Information

Before getting too "in the weeds" with documentation, let's fill out some basic info in the README file (description of the project, maintainer list).

Support python installs in venv

We discussed the possibility that collections will use python dependencies that conflict with the base image's system dependencies. This is one reason python venvs are used, so users may want to have their python dependencies installed in a dedicated venv.

Granted it would then be on the user to properly set ansible_python_interpreter in their vars.

Not saying this is top priority, but it may eventually be a nice feature to install all python deps in a designated venv instead of pip-installing to the system.

TIL there was an issue to add an env var for this, but it didn't make it in, back in 2014:
ansible/ansible#6345

Scrub the python requirement files for duplicate and unnecessary requirements

Installing the "awx" case in the examples right now will give this error:

Double requirement given: ansible>=2.9.0 (from -r /usr/share/ansible/collections/ansible_collections/ovirt/ovirt/requirements.txt (line 1)) (already in ansible (from -r /usr/share/ansible/collections/ansible_collections/ansible/netcommon/requirements.txt (line 1)), name='ansible')
The command '/bin/sh -c pip3 install     -r /usr/share/ansible/collections/ansible_collections/ansible/netcommon/requirements.txt     -r /usr/share/ansible/collections/ansible_collections/ansible/posix/requirements.txt     -r /usr/share/ansible/collections/ansible_collections/awx/awx/requirements.txt     -r /usr/share/ansible/collections/ansible_collections/azure/azcollection/requirements-azure.txt     -r /usr/share/ansible/collections/ansible_collections/community/vmware/requirements.txt     -r /usr/share/ansible/collections/ansible_collections/google/cloud/requirements.txt     -r /usr/share/ansible/collections/ansible_collections/ovirt/ovirt/requirements.txt' returned a non-zero code: 1

All ansible entries can be safely removed from this command, and the same goes for a number of common test libraries.

Any duplicate entries can be safely removed if it is confirmed that the version ranges do not conflict.

However, reading the requirements file in the first place is a larger challenge; some existing implementations of that could serve as references.

Note that this issue does not rise to the level of any form of "dependency resolution". This is only slightly more complicated than parsing python requirements.txt. Our bar for success for this specific work item is that the cataloged requirements of collections migrated from Ansible 2.9 can mutually install without error.
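A rough sketch of the intended scrubbing, deliberately well short of dependency resolution (names are illustrative; a production version would parse lines with the packaging library rather than a regex):

```python
import re


def deduplicate(requirements, exclude=('ansible',)):
    """Collapse duplicate entries by distribution name, merging their specifiers.

    `exclude` drops packages the base image already provides (e.g. ansible).
    """
    merged = {}
    for line in requirements:
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        # Crude name/specifier split; no attempt to verify ranges don't conflict
        match = re.match(r'^([A-Za-z0-9_.\-]+)\s*(.*)$', line)
        if not match:
            continue
        name, spec = match.group(1).lower(), match.group(2)
        if name in exclude:
            continue
        merged.setdefault(name, [])
        if spec:
            merged[name].append(spec)
    return [name + ','.join(specs) for name, specs in merged.items()]
```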

Catch errors from parsing collection python requirements

Example:

$ ansible-builder introspect --sanitize ~/Documents/repos/collection-dependencies-demo/target/
Traceback (most recent call last):
  File "/Users/alancoding/.virtualenvs/ansible-builder/lib/python3.7/site-packages/pkg_resources/_vendor/packaging/requirements.py", line 90, in __init__
    req = REQUIREMENT.parseString(requirement_string)
  File "/Users/alancoding/.virtualenvs/ansible-builder/lib/python3.7/site-packages/pkg_resources/_vendor/pyparsing.py", line 1654, in parseString
    raise exc
  File "/Users/alancoding/.virtualenvs/ansible-builder/lib/python3.7/site-packages/pkg_resources/_vendor/pyparsing.py", line 1644, in parseString
    loc, tokens = self._parse( instring, 0 )
  File "/Users/alancoding/.virtualenvs/ansible-builder/lib/python3.7/site-packages/pkg_resources/_vendor/pyparsing.py", line 1402, in _parseNoCache
    loc,tokens = self.parseImpl( instring, preloc, doActions )
  File "/Users/alancoding/.virtualenvs/ansible-builder/lib/python3.7/site-packages/pkg_resources/_vendor/pyparsing.py", line 3417, in parseImpl
    loc, exprtokens = e._parse( instring, loc, doActions )
  File "/Users/alancoding/.virtualenvs/ansible-builder/lib/python3.7/site-packages/pkg_resources/_vendor/pyparsing.py", line 1406, in _parseNoCache
    loc,tokens = self.parseImpl( instring, preloc, doActions )
  File "/Users/alancoding/.virtualenvs/ansible-builder/lib/python3.7/site-packages/pkg_resources/_vendor/pyparsing.py", line 3205, in parseImpl
    raise ParseException(instring, loc, self.errmsg, self)
pkg_resources._vendor.pyparsing.ParseException: Expected stringEnd (at char 10), (line:1, col:11)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/alancoding/.virtualenvs/ansible-builder/lib/python3.7/site-packages/pkg_resources/__init__.py", line 3108, in __init__
    super(Requirement, self).__init__(requirement_string)
  File "/Users/alancoding/.virtualenvs/ansible-builder/lib/python3.7/site-packages/pkg_resources/_vendor/packaging/requirements.py", line 94, in __init__
    requirement_string[e.loc:e.loc + 8]))
pkg_resources.extern.packaging.requirements.InvalidRequirement: Invalid requirement, parse error at "'in confi'"

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/alancoding/.virtualenvs/ansible-builder/bin/ansible-builder", line 33, in <module>
    sys.exit(load_entry_point('ansible-builder', 'console_scripts', 'ansible-builder')())
  File "/Users/alancoding/Documents/repos/ansible-builder/ansible_builder/cli.py", line 30, in run
    data['python'] = sanitize_requirements(data['python'])
  File "/Users/alancoding/Documents/repos/ansible-builder/ansible_builder/requirements.py", line 23, in sanitize_requirements
    for req in parsed:
  File "/Users/alancoding/Documents/repos/requirements-parser/requirements/parser.py", line 50, in parse
    yield Requirement.parse(line)
  File "/Users/alancoding/Documents/repos/requirements-parser/requirements/requirement.py", line 239, in parse
    return cls.parse_line(line)
  File "/Users/alancoding/Documents/repos/requirements-parser/requirements/requirement.py", line 217, in parse_line
    pkg_req = Req.parse(line)
  File "/Users/alancoding/.virtualenvs/ansible-builder/lib/python3.7/site-packages/pkg_resources/__init__.py", line 3155, in parse
    req, = parse_requirements(s)
  File "/Users/alancoding/.virtualenvs/ansible-builder/lib/python3.7/site-packages/pkg_resources/__init__.py", line 3101, in parse_requirements
    yield Requirement(line)
  File "/Users/alancoding/.virtualenvs/ansible-builder/lib/python3.7/site-packages/pkg_resources/__init__.py", line 3110, in __init__
    raise RequirementParseError(str(e))
pkg_resources.RequirementParseError: Invalid requirement, parse error at "'in confi'"

The root cause of this traceback is that a collection has a requirements.txt file which is not formatted correctly. We should never error out with a traceback like this; it also doesn't give any actionable information (it is not clear which collection had the bad requirements).

This may be subject to discussion, but my proposal would be:

  • do not halt execution in this case, but instead, ignore the bad requirements file
  • issue a warning to the console, giving the bad requirements file and the name of the collection involved

In practice, this should be uncommon, so there is no particular urgency for solving this.
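The proposal could look roughly like this (all names here are illustrative; `parse_line` is a minimal stand-in for whatever real requirement parser is used):

```python
import re


def parse_line(line):
    # Minimal stand-in for a real requirement parser; raises ValueError on junk
    if not re.match(r'^[A-Za-z0-9_.\-]+\s*([<>=!~][=<>]?\s*\S+.*)?$', line.strip()):
        raise ValueError('not a valid requirement')
    return line.strip()


def sanitize_requirements(collection_requirements):
    """Collect valid requirement lines, warning (not crashing) on bad ones.

    `collection_requirements` maps collection name -> list of raw lines.
    """
    sanitized = []
    for collection, lines in collection_requirements.items():
        for line in lines:
            try:
                sanitized.append(parse_line(line))
            except ValueError as exc:
                # Name the offending collection so the warning is actionable
                print('WARNING: ignoring invalid requirement {0!r} from collection {1}: {2}'.format(
                    line, collection, exc))
    return sanitized
```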

Error if an unrecognized YAML key is used in EE.yml

When we parse the execution-environments.yml file, we should error early if an unrecognized key is provided. Currently an unrecognized key is silently ignored and lookups return None.

For example, this is an invalid execution-environment.yml

[chadams@chadams-work ansible-builder-play]$ cat execution-environment.yml 
---
version: 1
base_image: 'quay.io/ansible/ansible-runner:stable-2.10.devel'
requirements:
  galaxy: requirements.yml
  python: requirements.txt
  system: bindep.txt

(requirements: should actually be dependencies:)
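An early-validation sketch; the allowed-key set below is an assumption inferred from the examples in this document, not the project's actual schema:

```python
# Illustrative set of recognized top-level keys for a version-1 EE spec
ALLOWED_KEYS = {'version', 'base_image', 'dependencies', 'additional_build_steps'}


def validate_keys(definition):
    """Reject unrecognized top-level keys early, before any build work starts."""
    unknown = set(definition) - ALLOWED_KEYS
    if unknown:
        raise ValueError(
            "Unknown key(s) in execution-environment.yml: {0}. "
            "Allowed keys are: {1}.".format(sorted(unknown), sorted(ALLOWED_KEYS))
        )
```

With this in place, the `requirements:` typo above would fail immediately with a message naming the bad key.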

Comment and space out Containerfile syntax

With #59, this is what a Containerfile may look like:

FROM docker.io/ansible/ansible-runner:devel

ADD requirements.yml /build/

RUN ansible-galaxy role install -r /build/requirements.yml --roles-path /usr/share/ansible/roles
RUN ansible-galaxy collection install -r /build/requirements.yml --collections-path /usr/share/ansible/collections
ADD introspect.py /usr/local/bin/introspect
RUN chmod +x /usr/local/bin/introspect
RUN introspect --write-bindep /build/bindep_combined.txt
ADD bindep_output.txt /build/
RUN dnf -y install $(cat /build/bindep_output.txt)
ADD requirements_combined.txt /build/
RUN pip3 install --upgrade -r /build/requirements_combined.txt

Compare to human-written files:

https://github.com/ansible/ansible-runner/blob/devel/Dockerfile

Note the abundance of whitespace and comments.

ansible-builder should space out different commands (obviously between phases, but even more than that) and write full-line comments where merited.

consider helping the user find the `--container-runtime` option

ansible-builder build --tag my-ee
You do not have podman installed, please specify a different container runtime for this command.

Change above error message to include --container-runtime and, stretch here, include the list of runtimes that the user does have.
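One way to build that message, sketched with shutil.which (the KNOWN_RUNTIMES list and function name are assumptions):

```python
import shutil

# Runtimes the tool knows how to drive; illustrative, not exhaustive
KNOWN_RUNTIMES = ('podman', 'docker')


def runtime_hint(requested='podman'):
    """Build a friendlier error message when the requested runtime is missing."""
    available = [r for r in KNOWN_RUNTIMES if shutil.which(r)]
    msg = ("You do not have {0} installed. Specify a different runtime "
           "with --container-runtime.".format(requested))
    if available:
        # Stretch goal from the issue: list what the user *does* have
        msg += " Detected on this system: {0}.".format(', '.join(available))
    return msg
```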

Support collection tarball installation

In some cases, we want to be able to install a collection tarball from the local file system. This could be because we don't have access to Galaxy, or because we are testing an unreleased version of the code.

We do this today by not using a requirements file and using the CLI directly:

ansible-galaxy collection install  vyos-vyos-1.1.1-dev2.tar.gz ansible-netcommon-1.4.2-dev1.tar.gz

Test compatibility with ansible-test

I've collected an example with community.aws

ERROR: tests/unit/compat/mock.py:101:12: import-outside-toplevel: Import outside toplevel (_io)
See documentation for help: https://docs.ansible.com/ansible/devel/dev_guide/testing/sanity/pylint.html
Running sanity test 'replace-urlopen' with Python 3.7
Running sanity test 'rstcheck' with Python 3.7
Traceback (most recent call last):
  File "/Users/alancoding/.virtualenvs/awx_collection/bin/ansible-test", line 7, in <module>
    exec(compile(f.read(), __file__, 'exec'))
  File "/Users/alancoding/Documents/repos/ansible/bin/ansible-test", line 28, in <module>
    main()
  File "/Users/alancoding/Documents/repos/ansible/bin/ansible-test", line 24, in main
    cli_main()
  File "/Users/alancoding/Documents/repos/ansible/test/lib/ansible_test/_internal/cli.py", line 173, in main
    args.func(config)
  File "/Users/alancoding/Documents/repos/ansible/test/lib/ansible_test/_internal/sanity/__init__.py", line 191, in command_sanity
    result = test.test(args, sanity_targets, version)
  File "/Users/alancoding/Documents/repos/ansible/test/lib/ansible_test/_internal/sanity/rstcheck.py", line 80, in test
    results = parse_to_list_of_dict(pattern, stderr)
  File "/Users/alancoding/Documents/repos/ansible/test/lib/ansible_test/_internal/util.py", line 712, in parse_to_list_of_dict
    raise Exception('Pattern "%s" did not match values:\n%s' % (pattern, '\n'.join(unmatched)))
Exception: Pattern "^(?P<path>[^:]*):(?P<line>[0-9]+): \((?P<level>INFO|WARNING|ERROR|SEVERE)/[0-4]\) (?P<message>.*)$" did not match values:
/Users/alancoding/.virtualenvs/awx_collection/bin/python: No module named rstcheck

(was resolved locally with pip install rstcheck)

This was after installing and running ansible-test sanity. The CLI ansible-test allows running inside of docker. We should produce an EE for community.aws, feed it into ansible-test, and verify that it does not fail on missing rstcheck.

Process direct requirements and additional build steps from the user's spec

Example execution-environment.yml:

(EDIT: direct listing of system dependencies not supported after all)

---
version: 1
dependencies:
  python: requirements.txt
  system:
    - git

additional_build_steps: |
  RUN some-command
  ENV SOME_VAR=some_value

This tool should output a Container file with:

  • pip install for requirements
  • dnf? install for system requirement
  • additional lines for whatever is given in the additional specs

Schema of EE spec is subject to change, but for now we want to just get something in.

Note that galaxy in dependencies is processed in the current code base.

Implement Custom Verbosity Level

Allow user to set a custom verbosity level using -v or --verbosity when running builder CLI commands.

This work should get done after a lot of the other ansible-builder issues have been completed, since we want to know exactly what to be verbose about.

Establish pattern for running integration tests by invoking script that outputs xml test result

We want to be able to execute integration tests from the testing pipeline/provider of our choice.

To make our integration tests easy to execute from whatever location, and consume the results, have both the test setup and test execution wrapped in a bash script that after the test run, leaves a junit style xml file as output.

Suggested implementation:

  1. Create a folder for integration tests that is not run during normal unit tests.
  2. In that folder, create a conftest.py file with a fixture like this (may take some small changes to work here): complete proxy cli fixture from runner
  3. Create a test file, and in that file create a test that does something simple like this pseudocode:

def test_help(cli):
    result = cli(['ansible-builder', 'help'])
    assert 'something that is in help text' in result.stdout

  4. Now that you have a test, create a shell script that runs it.

Acceptance criteria:

  • user/CI only needs to call tools/run_integration.sh (or whatever name), and the script takes care of any dependency installs, runs the tests, and leaves behind an artifacts/results.xml file that is always named the same thing

In the future more tests can be added, and this will be transparent to whatever CI runner is running the tests. If there are system requirements this is assumed to be taken care of by the CI runner, so please just document these dependencies.

Optional additional features:

  • accept RUNNER_FORK and RUNNER_BRANCH environment variables to choose what runner to install from git
  • accept TESTEXPR environment variable to pass to pytest -k test selector option

Output from `build` command needs more info

ansible-builder build seems to do something for a while, but then it just says Complete! Build context is at: context. This does not tell me what the image is named/tagged or how to actually find it.

I am learning that the default tag name and container runtime are defined here https://github.com/ansible/ansible-builder/blob/ec481d8c8a8dd995614aa1f46baccad4e2253715/ansible_builder/constants.py and can be overridden with these cli options:
https://github.com/ansible/ansible-builder/blob/devel/ansible_builder/cli.py#L47-L69

Would be nice if it streamed the standard out from the build so I could see what it was doing, and/or printed the command it was running.

For example, running podman build -t foo against the context dir prints this out:

podman build -t foo context
STEP 1: FROM shanemcd/ansible-runner
STEP 2: ADD requirements-galaxy.yml /build/
--> Using cache 3c38d111ddb5973a7b0cf7da0c880e92a626ab1833c5f541282474e8b523069c
STEP 3: RUN ansible-galaxy role install -r /build/requirements-galaxy.yml --roles-path /usr/share/ansible/roles
--> Using cache 486039834b6973edf273357cae57aa5aad6a6f63eae662d78f08ed47dc10143a
STEP 4: RUN ansible-galaxy collection install -r /build/requirements-galaxy.yml --collections-path /usr/share/ansible/
collections
--> Using cache 8ab174613b064c74ccaf40fd38177a6bab9cd4255cf5ad2bad951a1524942b76
STEP 5: COMMIT foo
--> 8ab174613b0
8ab174613b064c74ccaf40fd38177a6bab9cd4255cf5ad2bad951a1524942b76

Then it could do something like:

podman images -a foo
REPOSITORY                        TAG      IMAGE ID       CREATED          SIZE
localhost/foo                     latest   8ab174613b06   43 minutes ago   673 MB

then you would see the result
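The streaming behavior asked for above could be sketched like this (illustrative, not the project's actual subprocess handling):

```python
import subprocess


def run_streaming(command):
    """Echo the command, then stream its combined output line by line."""
    print('Running command:\n  {0}'.format(' '.join(command)))
    process = subprocess.Popen(
        command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True
    )
    lines = []
    for line in process.stdout:
        print(line, end='')  # show progress as the build runs, not after
        lines.append(line)
    process.wait()
    return process.returncode, lines
```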

Reduce image size by removing dnf/pip cache

Argument is made here https://www.redhat.com/sysadmin/tiny-containers that dnf steps would more ideally be done with cache cleanup in the same command:

RUN dnf install -y nginx \
  	&& dnf clean all \
  	&& rm -rf /var/cache/yum

We do a dnf install:

"RUN dnf -y install $(bindep -b -f {0})".format(file)

We could just add this syntax to our command. Presumably we should check image sizes before / after.

I'm not saying it needs to be done, pending feedback; just filing an issue for this.


A similar thing may be possible for pip, via either --no-cache-dir or removing ~/.cache/pip.
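Concretely, applying that to the generated steps might look like this (a sketch; the command text is an assumption based on the snippets quoted above, and image sizes would still need to be checked before/after):

```python
def dnf_install_step(bindep_file):
    # One RUN keeps the dnf cache out of the committed layer entirely
    return (
        "RUN dnf -y install $(bindep -b -f {0}) "
        "&& dnf clean all && rm -rf /var/cache/yum /var/cache/dnf"
    ).format(bindep_file)


def pip_install_step(requirements_file):
    # --no-cache-dir avoids writing ~/.cache/pip in the first place
    return "RUN pip3 install --no-cache-dir --upgrade -r {0}".format(requirements_file)
```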

Process system requirements

for my own notes, from this comment:

#22 (comment)

https://opendev.org/opendev/system-config/src/branch/master/docker/python-builder/scripts/assemble

the critical syntax looks like:

bindep -l newline >> /output/bindep/run.txt || true
yum -y install $(cat /output/bindep/run.txt)

(we may or may not use the intermediary file)

In my testing, the -q option (for quiet) is also needed, and, very importantly, a newline is needed at the end of the bindep.txt file.

This will be implemented as a RUN in the image.

Ansible Galaxy requirements not being installed in Execution Environment?

I tried following the directions in the Introduction to Ansible Builder blog post that was published earlier today, but am running into at least one, maybe two issues:

I have my execution-environment.yml file defined like this:

---
version: 1
requirements:
  galaxy: requirements.yml
  python: requirements.txt
  system: bindep.txt

And I have my requirements.yml file like this:

---
collections:
  - kubernetes.core

roles:
  - geerlingguy.k8s_manifests

When I run the command ansible-builder build --tag my_first_ee_image --container-runtime docker (feature request: it would be really nice to not have to specify --container-runtime docker on every run; I'm on a Mac where it's harder to get Podman running efficiently), it outputs the following:

File context/introspect.py is already up-to-date.
Writing partial Containerfile without collection requirements
Running command:
  docker build -f context/Dockerfile -t my_first_ee_image context
#1 [internal] load .dockerignore
#1 sha256:49a257367e154b85e381edac2ee05fe07a1d503c587c8b207da5b2a7a01b3509
#1 transferring context: 2B done
#1 DONE 0.0s

#2 [internal] load build definition from Dockerfile
#2 sha256:cd5e1f427e5efb2d9d548bf83d35afc89e41b75e81e1e2337ee4a216d0108eba
#2 transferring dockerfile: 85B done
#2 DONE 0.0s

#3 [internal] load metadata for quay.io/ansible/ansible-runner:devel
#3 sha256:a2ee052f444f48b483ef7462f9f15dc496066ca125add16c92e38bb837b6dd2b
#3 DONE 0.3s

#4 [1/1] FROM quay.io/ansible/ansible-runner:devel@sha256:37a2a888bbed04a6c776c88bebd8af1c1e1674c6a765bcb7ae2f6cecff2f6314
#4 sha256:68c9e03fc26cbba1796f8c07c6ee67fc90f742dee2d8ccaf2d322aaf087d315e
#4 CACHED

#5 exporting to image
#5 sha256:e8c613e07b0b7ff33893b694f7759a10d42e180f2b4dc349fb57dc6b71dcab00
#5 exporting layers done
#5 writing image sha256:3acbe395d410ba8ca131d974c7dbe32239e1c56d27788405bc62b346cba4841d done
#5 naming to docker.io/library/my_first_ee_image done
#5 DONE 0.0s
Running command:
  docker run --rm -v /Users/jgeerling/Downloads/builder/context:/context:Z my_first_ee_image python3 /context/introspect.py
python: {}
system: {}

Rewriting Containerfile to capture collection requirements
Running command:
  docker build -f context/Dockerfile -t my_first_ee_image context
#2 [internal] load build definition from Dockerfile
#2 sha256:0cb11f131087ead2dae6e96d04945228e344098b915b3f5564dbe1d4a878ab33
#2 transferring dockerfile: 85B done
#2 DONE 0.0s

#1 [internal] load .dockerignore
#1 sha256:82edaef0797864832622ec1b814ca7c0310edff9017d21d0ea6ebd51dc3f792c
#1 transferring context: 2B done
#1 DONE 0.0s

#3 [internal] load metadata for quay.io/ansible/ansible-runner:devel
#3 sha256:a2ee052f444f48b483ef7462f9f15dc496066ca125add16c92e38bb837b6dd2b
#3 DONE 0.2s

#4 [1/1] FROM quay.io/ansible/ansible-runner:devel@sha256:37a2a888bbed04a6c776c88bebd8af1c1e1674c6a765bcb7ae2f6cecff2f6314
#4 sha256:68c9e03fc26cbba1796f8c07c6ee67fc90f742dee2d8ccaf2d322aaf087d315e
#4 CACHED

#5 exporting to image
#5 sha256:e8c613e07b0b7ff33893b694f7759a10d42e180f2b4dc349fb57dc6b71dcab00
#5 exporting layers done
#5 writing image sha256:3acbe395d410ba8ca131d974c7dbe32239e1c56d27788405bc62b346cba4841d done
#5 naming to docker.io/library/my_first_ee_image done
#5 DONE 0.0s
Complete! The build context can be found at: context

But I don't see any step where it installs the requirements (either collection or roles), nor do I see them when I run the container and look around inside with docker run -it --rm my_first_ee_image:latest /bin/bash.

Am I doing something wrong, or is the EE not including the dependencies I need to run my playbooks?

suppress warning about tmp build dir not being in collections path on build host

In the output of ansible-builder build I see:

  ansible-galaxy collection install -r reqs.yml -p /tmp/ansible_builder_jn54s8dn
[WARNING]: The specified collections path '/tmp/ansible_builder_jn54s8dn' is not part of the
configured Ansible collections paths    
'/home/elijah/.ansible/collections:/usr/share/ansible/collections'. The installed collection
won't be picked up in an Ansible run.  

That is because /tmp/ansible_builder_jn54s8dn is not on my local machine's Ansible collections path, which is normal: the collection is not meant for execution on my local machine; rather, it is going to get placed in the collections path inside the container.

This warning is a bit confusing, so it would be good to suppress it. I bet there is an env var we can set when calling ansible-galaxy.
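If adding the temp dir to the configured collections paths does silence it, the call could be assembled like this (whether ANSIBLE_COLLECTIONS_PATHS suppresses this exact warning is an assumption to verify, and the function name is hypothetical):

```python
import os


def galaxy_install_command(requirements_file, install_path):
    """Build the ansible-galaxy install command plus an environment for it."""
    env = dict(os.environ)
    # Assumption: making the temp dir a configured path stops the warning
    env['ANSIBLE_COLLECTIONS_PATHS'] = install_path
    cmd = ['ansible-galaxy', 'collection', 'install',
           '-r', requirements_file, '-p', install_path]
    return cmd, env
```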

Agree on solution for community.kubernetes.helm and kubectl

In #22 we discussed how we may use bindep to process system dependencies, and how this will resolve into a set of dnf installs.

Importantly, this would have allowed for a consistent set of requirements, consistently using the system package manager.

We have identified our first notable exception.

https://github.com/ansible-collections/community.kubernetes/blob/3e971e0ad36a05a9ad63f442204c2dd59bbf558e/plugins/modules/helm.py#L23

The community.kubernetes.helm module, unsurprisingly, requires helm. This is not available as a dnf package to be installed, and to get it on a supported OS like RHEL, steps look like this:

https://snapcraft.io/install/helm/rhel

sudo dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
sudo dnf upgrade
sudo rpm -ivh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
sudo subscription-manager repos --enable "rhel-*-optional-rpms" --enable "rhel-*-extras-rpms"
sudo yum update
sudo yum install snapd
sudo systemctl enable --now snapd.socket
sudo ln -s /var/lib/snapd/snap /snap

(tl;dr it requires a different package manager, snap, so install snap, then install helm)

We don't have any need for this particular module in the AWX internals, but we do have a need for community.kubernetes, since it's a primary dependency for OpenShift-native job isolation https://github.com/ansible/awx/blob/2385e47ac313110235758c098ca0188fb430f5a6/requirements/collections_requirements.yml#L11

How could this possibly be handled?

  • allow collection-level metadata to specify "additional build steps" and dump whatever they need in there
  • don't attempt to support this requirement, leave as exercise for user

Even assuming that we can get away with not supporting this, kubectl itself may be a problem, see the AWX solution:

https://github.com/ansible/awx/blob/89b087ffb61ed5cbad2360329e7e28abd5b6ba25/installer/roles/image_build/templates/Dockerfile.j2#L143-L145

Note that this is a little more than a blind dnf install command. So we might need some increased form of flexibility even to support this.

Build caching regression

Subsequent runs of ansible-builder should be near-instant when inputs don't change. I believe this regressed in #57: since we're always re-writing context/requirements.txt, the pip install steps are always re-run, even when the contents of requirements.txt don't change.
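One common fix is to rewrite a context file only when its content actually changes, so an unchanged requirements.txt cannot invalidate the runtime's layer cache (a sketch under that assumption, not the project's actual fix):

```python
import os


def write_file(path, content):
    """Write `content` to `path` only if it differs from what is already there.

    Returns True when the file was (re)written, False when it was left alone.
    """
    if os.path.exists(path):
        with open(path, 'r') as existing:
            if existing.read() == content:
                return False  # unchanged; don't disturb the build cache
    with open(path, 'w') as target:
        target.write(content)
    return True
```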

Add comments to combined requirements file, denoting what collections require each

Here's an example of a combined set of python requirements:

$ ansible-builder introspect --sanitize test/data/

Sanitized dependencies for test/data/
python:
- pyvcloud>=14,>=18.0.10
- pytz
- tacacs_plus
system:
- test/bindep/bindep.txt

The python dependencies will get written to the requirements.txt file inside of the build context folder.

In this case, 2 collections at the target location list pyvcloud (simply as a test case). The two different version ranges come from the two different collections.

It would obviously be nice for the user if we could add a comment to each of these lines that lists what collection the requirement comes from. That would be immensely helpful for debugging complex execution environments.

This will require a change to the output structure of the introspect.py script, so I want to have this fully discussed and vetted early.
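The rendered output could look like this (`annotate_requirements` and its input shape are hypothetical, illustrating one possible structure for introspect.py's output):

```python
def annotate_requirements(collection_requirements):
    """Render requirements.txt lines with a comment naming the source collection.

    `collection_requirements` maps collection name -> list of requirement strings.
    """
    lines = []
    for collection, reqs in sorted(collection_requirements.items()):
        for req in reqs:
            # The trailing comment is what makes complex EEs debuggable
            lines.append('{0}  # from collection {1}'.format(req, collection))
    return '\n'.join(lines)
```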

Support some means of customizing Galaxy server settings

The galaxy CLI install commands are part of the Containerfile. Because it's a clean, replicable, environment it will use the Ansible defaults. There needs to be a means of passing a token and other settings for a private account to an Automation Hub server.

The challenge here is that we can't have any credentials left anywhere in the image, in intermediary layers, and maybe not in the build context at all. We will want building with the build context to be replicable, but that seems impossible to reconcile with providing a protected secret token.


My only readily available solution that doesn't involve tweaking the requirements:

  • perform ansible-galaxy collection download as a subprocess before starting any form of building, and store the output in build context
  • ADD the downloaded content to the image
  • Inside the container, install the downloaded content

Downside is that this would require a relatively recent Ansible version.

Some other options are probably possible, but will require discussion.

Better error message when you do not have podman

podman is the default, but I have Docker on my machine. I run a command using the default and get this:

$ ansible-builder build \
>   -f test/data/pytz/execution-environment.yml \
>   -c my_new_bc \
>   -t my_new_tag
Writing partial Containerfile without collection requirements
Running command:
  podman build -f my_new_bc/Containerfile -t my_new_tag my_new_bc
FileNotFoundError: [Errno 2] No such file or directory: 'podman': 'podman'

This is going to be a common thing to hit, and the program can give much better guidance than this.
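
A minimal up-front check could catch this before any work starts; the wording and the --container-runtime hint shown here are only a suggestion:

```python
import shutil
import sys


def ensure_runtime(runtime="podman"):
    """Fail early, with actionable guidance, when the selected container
    runtime is not installed, instead of surfacing a bare
    FileNotFoundError from the subprocess call."""
    if shutil.which(runtime) is None:
        sys.exit(
            f"Container runtime '{runtime}' was not found on your PATH.\n"
            f"Install it, or select a different runtime, for example:\n"
            f"  ansible-builder build --container-runtime docker ..."
        )
    return runtime
```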

Add support to install ansible collection as tarball

In some cases, we want to be able to install a collection tarball from the local file system. This could be because we don't have access to Galaxy, or because we are testing an unreleased version of the code.

We do this today by skipping the requirements file and using the CLI directly:

ansible-galaxy collection install vyos-vyos-1.1.1-dev2.tar.gz ansible-netcommon-1.4.2-dev1.tar.gz
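
To support this in a requirements file, the installer could treat entries that look like tarball paths differently from Galaxy names; this split is hypothetical, not current behavior:

```python
def split_requirements(entries):
    """Split requirement entries into Galaxy collection names and local
    tarballs; tarballs would be copied into the build context and
    installed by path inside the image."""
    galaxy, tarballs = [], []
    for entry in entries:
        (tarballs if entry.endswith(".tar.gz") else galaxy).append(entry)
    return galaxy, tarballs


galaxy, tarballs = split_requirements(
    ["community.vmware", "vyos-vyos-1.1.1-dev2.tar.gz"]
)
print(galaxy)    # ['community.vmware']
print(tarballs)  # ['vyos-vyos-1.1.1-dev2.tar.gz']
```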

Need flag to force Dockerfile or Containerfile, regardless of container runtime

We have a downstream requirement to use podman for builds. However, when using the podman container runtime, we hardcode Containerfile.

In some cases, people run podman but still expect a Dockerfile to be used. The current workaround is to create a symlink:

ln -s Containerfile Dockerfile

But we should expose a way for the user to pick the filename, independent of the container runtime.
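
The selection logic could look like this, with an explicit user choice overriding the runtime convention (the option and function names here are hypothetical):

```python
def containerfile_name(container_runtime, override=None):
    """Pick the output filename: an explicit user-supplied name wins;
    otherwise follow the runtime's convention."""
    if override:
        return override
    return "Containerfile" if container_runtime == "podman" else "Dockerfile"


print(containerfile_name("podman"))                         # Containerfile
print(containerfile_name("podman", override="Dockerfile"))  # Dockerfile
```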

Default verbose level to 2

The first time I ran ansible-builder, I thought nothing was happening. Consider making verbosity level 2 the default and adding a silence option.

introspect.py should move into ansible-runner image

We should move introspect.py into ansible-runner, rather than having every EE repo contain a copy of it.

This might take a bit of work, given that we hardcode some 'execution-environment' filenames / variables. But it's not impossible.

File upstream collection pull requests

This is really just a place to collect links:

https://github.com/ansible/ansible-builder/pull/44/files#diff-836eec5ffe9ae50553c6e473eec31adf

Replace tmpdir install of collections with a container image layer

Assuming #34 is merged...

Installing collections to a temporary directory on the host machine will be the status quo.

To talk in blunter terms: I'm using a Mac. When I run ansible-builder create, the tool runs ansible-galaxy collection install in a subprocess and installs collections to a temporary directory on my Mac. It then crawls those folders to find out where the requirements files for the collections live.

@shanemcd has a vision for replacement of this, in my words:

Create an intermediate Containerfile which implements the Galaxy steps. Those look like:

FROM shanemcd/ansible-runner

ADD requirements.yml /build/

RUN ansible-galaxy role install -r /build/requirements.yml --roles-path /usr/share/ansible/roles
RUN ansible-galaxy collection install -r /build/requirements.yml --collections-path /usr/share/ansible/collections

(maybe it won't have the roles at this point, not important)

Now, build an image corresponding to this Containerfile. We may call this the "introspection" or "discovery" layer.

Use this image to crawl the installed collections (in /usr/share/ansible/collections inside the container), and apply all the same logic we already use in the host machine tmpdir. More logic is likely to be added on top of this, in particular combining all the python requirements files into a single file.

Next, create the actual Containerfile, and allow it to use cached layers from the introspection layer. Subsequent layers will do the pip install of the content discovered from the introspection layer.

Failure in YAML parsing when using podman aliased as docker

Do a dnf install of docker and you don't actually get Docker; you get a wrapper around podman. This actually works just fine, except for a tiny detail in the implementation of ansible-builder:

E               STEP 4: COMMIT builder-test-f1c1303d-b
E               2d20e4c5a4049c6efc138ab39fd5fc66750ef0f7af4df7a15fa2c32d6bd8d5b1
E               Running command:
E                 docker run --rm builder-test-f1c1303d-b introspect
E               Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
E               python: []
E               system: []
E               
E               , stderr: yaml.scanner.ScannerError: mapping values are not allowed here
E                 in "<unicode string>", line 2, column 7:
E                   python: []
E                         ^

test/integration/conftest.py:41: Failed

The problem is that the docker CLI issues a warning, and that warning pollutes the stdout of commands which are expected to be YAML.

The solution to this could be either a workaround, or simply erroring out with a better message.
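
The workaround could be as simple as stripping the known banner line before handing stdout to the YAML parser; the filtering below is only a sketch:

```python
PODMAN_EMULATION_BANNER = "Emulate Docker CLI using podman"


def clean_yaml_output(stdout):
    """Drop podman's docker-emulation banner so the remaining text can be
    parsed as YAML."""
    return "\n".join(
        line for line in stdout.splitlines()
        if not line.startswith(PODMAN_EMULATION_BANNER)
    )


polluted = (
    "Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.\n"
    "python: []\n"
    "system: []"
)
print(clean_yaml_output(polluted))
```

A more robust fix would be to avoid mixing diagnostics into the stream at all, e.g. by writing the introspection result to a file inside the container.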

YAML Exception

Catch YAML exceptions and warn that execution-environment.yml is not a valid YAML file, where appropriate (in the ansible_builder/main.py file).

Prepend steps don't work since moving to multi-stage builds

Example:

---
version: 1
dependencies:
  galaxy: requirements/collections_requirements.yml
  system: requirements/bindep_requirements.txt
additional_build_steps:
  prepend:
    - RUN pip3 install --upgrade pip setuptools
    - RUN dnf config-manager --set-enabled epel

yields this Containerfile:

FROM quay.io/ansible/ansible-runner:devel as builder

RUN pip3 install --upgrade pip setuptools
RUN dnf config-manager --set-enabled epel
ADD requirements.yml /build/

RUN ansible-galaxy role install -r /build/requirements.yml --roles-path /usr/share/ansible/roles
RUN ansible-galaxy collection install -r /build/requirements.yml --collections-path /usr/share/ansible/collections

RUN mkdir -p /usr/share/ansible/roles /usr/share/ansible/collections

FROM quay.io/ansible/ansible-runner:devel

COPY --from=builder /usr/share/ansible/roles /usr/share/ansible/roles
COPY --from=builder /usr/share/ansible/collections /usr/share/ansible/collections

ADD bindep_output.txt /build/
RUN dnf -y install $(cat /build/bindep_output.txt)
ADD requirements_combined.txt /build/
RUN pip3 install --upgrade -r /build/requirements_combined.txt

But that's not right, because RUN dnf config-manager --set-enabled epel happens inside of the builder image and later RUN dnf -y install $(cat /build/bindep_output.txt) happens inside of the 2nd (and final) image.

The prepend steps should probably act on the 2nd image, not the first ephemeral image
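
In other words, the generator could emit the user's prepend steps at the top of the final stage rather than the builder stage; a toy sketch of that ordering (function and arguments hypothetical):

```python
def assemble_containerfile(base_image, prepend_steps, galaxy_steps, final_steps):
    """Sketch: keep the galaxy install in the throwaway builder stage, but
    emit the user's 'prepend' steps at the start of the final stage, where
    the later dnf/pip steps actually run."""
    lines = [f"FROM {base_image} as builder"]
    lines += galaxy_steps
    lines.append(f"FROM {base_image}")
    lines += prepend_steps  # these now affect the image that ships
    lines += final_steps
    return lines


cf = assemble_containerfile(
    "quay.io/ansible/ansible-runner:devel",
    prepend_steps=["RUN dnf config-manager --set-enabled epel"],
    galaxy_steps=["ADD requirements.yml /build/"],
    final_steps=["RUN dnf -y install $(cat /build/bindep_output.txt)"],
)
print("\n".join(cf))
```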

Process collection python requirements metadata

Example execution-environment.yml

---
# subset of what may eventually be the container to replace the Ansible venv in AWX
version: 1
dependencies:
  galaxy: scm_metadata.yml

where the file scm_metadata.yml contains:

collections:
   - name: https://github.com/alancoding/azure.git
     version: ee
     type: git

Just to link it, this references the tree https://github.com/AlanCoding/azure/tree/ee which importantly contains:

https://github.com/AlanCoding/azure/blob/ee/meta/execution-environment.yml

  
dependencies:
  python:
  - packaging
  - requests[security]
  - xmltodict
  - azure-cli-core==2.0.35
  - azure-cli-nspkg==3.0.2
  - azure-common==1.1.11
  - azure-mgmt-authorization==0.51.1
  - azure-mgmt-batch==5.0.1
  - azure-mgmt-cdn==3.0.0
  - azure-mgmt-compute==10.0.0
  - azure-mgmt-containerinstance==1.4.0
  - azure-mgmt-containerregistry==2.0.0
  - azure-mgmt-containerservice==4.4.0
  - azure-mgmt-dns==2.1.0
  - azure-mgmt-keyvault==1.1.0
  - azure-mgmt-marketplaceordering==0.1.0
  - azure-mgmt-monitor==0.5.2
  - azure-mgmt-network==4.0.0
  - azure-mgmt-nspkg==2.0.0
  - azure-mgmt-redis==5.0.0
  - azure-mgmt-resource==2.1.0
  - azure-mgmt-rdbms==1.4.1
  - azure-mgmt-servicebus==0.5.3
  - azure-mgmt-sql==0.10.0
  - azure-mgmt-storage==3.1.0
  - azure-mgmt-trafficmanager==0.50.0
  - azure-mgmt-web==0.41.0
  - azure-nspkg==2.0.0
  - azure-storage==0.35.1
  - msrest==0.6.10
  - msrestazure==0.6.2
  - azure-keyvault==1.0.0a1
  - azure-graphrbac==0.40.0
  - azure-mgmt-cosmosdb==0.5.2
  - azure-mgmt-hdinsight==0.1.0
  - azure-mgmt-devtestlabs==3.0.0
  - azure-mgmt-loganalytics==0.2.0
  - azure-mgmt-automation==0.1.1
  - azure-mgmt-iothub==0.7.0
  - azure >= 2.0.0
  system: []
version: 1

Building that should produce an image with those python requirements, which should run azure.azcollection modules inside of playbooks.

ansible-builder exits with exit code 0 when build command failed

Say that I give bindep -q ... in a RUN command.

This is incorrect syntax, but the tool continues as if everything went fine.

$ ./examples/run.sh subversion

Running Example: subversion
Untagged: exec-env-subversion:latest
Deleted: sha256:a30fbf2489e0b8d3d0d2cadd41caaf860ee1dced3681819e686897b51ffb22be
Deleted: sha256:0a38f62a710a8c73de9d1f8c0d21dbf7899dfee66f1c4e7a2fc1fcfcbad72282
Writing partial Containerfile without collection requirements
Running command:
  docker build -f bc/Dockerfile -t exec-env-subversion bc
Sending build context to Docker daemon  7.168kB
Step 1/3 : FROM shanemcd/ansible-runner
 ---> f0c79677d9b1
Step 2/3 : ADD introspect.py /usr/local/bin/introspect
 ---> Using cache
 ---> cba3da5faf2e
Step 3/3 : RUN chmod +x /usr/local/bin/introspect
 ---> Using cache
 ---> 13c1dc7b98f2
Successfully built 13c1dc7b98f2
Successfully tagged exec-env-subversion:latest
Running command:
  docker run --rm exec-env-subversion introspect
[]

No collections requirements file found, skipping ansible-galaxy install...
Rewriting Containerfile to capture collection requirements
Running command:
  docker build -f bc/Dockerfile -t exec-env-subversion bc
Sending build context to Docker daemon  8.192kB
Step 1/6 : FROM shanemcd/ansible-runner
 ---> f0c79677d9b1
Step 2/6 : ADD introspect.py /usr/local/bin/introspect
 ---> Using cache
 ---> cba3da5faf2e
Step 3/6 : RUN chmod +x /usr/local/bin/introspect
 ---> Using cache
 ---> 13c1dc7b98f2
Step 4/6 : ADD bindep.txt /build/
 ---> Using cache
 ---> a8e7d75fea57
Step 5/6 : RUN pip3 install bindep
 ---> Using cache
 ---> 303631a88289
Step 6/6 : RUN yum -y install $(bindep -q -f /build/bindep.txt)
 ---> Running in 74f5bd447791
usage: bindep [-h] [--brief] [--file FILENAME] [--profiles]
              [--list_all {newline,csv}] [--version]
              [profile [profile ...]]
bindep: error: unrecognized arguments: -q
usage: yum install [-c [config file]] [-q] [-v] [--version]
                   [--installroot [path]] [--nodocs] [--noplugins]
                   [--enableplugin [plugin]] [--disableplugin [plugin]]
                   [--releasever RELEASEVER] [--setopt SETOPTS]
                   [--skip-broken] [-h] [--allowerasing] [-b | --nobest] [-C]
                   [-R [minutes]] [-d [debug level]] [--debugsolver]
                   [--showduplicates] [-e ERRORLEVEL] [--obsoletes]
                   [--rpmverbosity [debug level name]] [-y] [--assumeno]
                   [--enablerepo [repo]] [--disablerepo [repo] | --repo
                   [repo]] [--enable | --disable] [-x [package]]
                   [--disableexcludes [repo]] [--repofrompath [repo,path]]
                   [--noautoremove] [--nogpgcheck] [--color COLOR] [--refresh]
                   [-4] [-6] [--destdir DESTDIR] [--downloadonly]
                   [--comment COMMENT] [--bugfix] [--enhancement]
                   [--newpackage] [--security] [--advisory ADVISORY]
                   [--bz BUGZILLA] [--cve CVES]
                   [--sec-severity {Critical,Important,Moderate,Low}]
                   [--forcearch ARCH]
                   PACKAGE [PACKAGE ...]
yum install: error: the following arguments are required: PACKAGE
The command '/bin/sh -c yum -y install $(bindep -q -f /build/bindep.txt)' returned a non-zero code: 2
Complete! The build context can be found at: bc

If the build command gives a non-zero exit code, the tool should bail right then.
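
The fix could be as small as checking the return code of every build subprocess and exiting immediately (a sketch, not the tool's actual command runner):

```python
import subprocess
import sys


def run_command(cmd):
    """Run a build command and bail right away on a non-zero exit code,
    instead of printing 'Complete!' regardless."""
    result = subprocess.run(cmd)
    if result.returncode != 0:
        sys.exit(f"Command failed (rc={result.returncode}): {' '.join(cmd)}")
    return result


run_command([sys.executable, "-c", "print('build step ok')"])
```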

Ability to produce future-proof Containerfile

A couple of scenarios of concern:

  • An execution environment spec with un-versioned collection requirement: someone builds a Containerfile with requirements.txt inside of its build context. Collection releases new version with new or removed python dependencies, and new images built with Containerfile will now fail to run content
  • An execution environment spec with versioned collection: a dependency of the primary collection releases a new version with changed python or system requirement, so same basic scenario happens, content may fail to run

Brainstorm of a couple of solutions:

  • Leave various un-versioned forms of requirements around, document that Containerfile is not meant to be a long-lived artifact. Establish that the container image is the product of concern, and that the Containerfile itself is intended to be intermediary, and not kept around.
    • Potentially set some kind of time-to-live, and fail the image build if a certain time has passed
  • Pin requirements of all forms, so any requirements.txt, requirements.yml, or bindep.txt would be processed when ansible-builder is run, and do something like a pip freeze for every case, producing a file with pinned versions for all types of dependencies, and only put that in the build context.
  • Put the installed collections into the build context, and only ADD them in the Containerfile, so that results of ansible-galaxy collection install will not give different outcomes from one build to the next

Files referenced in -f option should be relative to the file, not cwd

consider:

$ cat alan/alan_ee.yml 
---
# subset of what may eventually be the container to replace the Ansible venv in AWX
version: 1
dependencies:
  galaxy: scm_metadata.yml
  python: requirements.txt
  system:
    - git  # for project updates
 
additional_build_steps: |
  RUN some-command
  ENV SOME_VAR=some_value

Now if I run this command from the directory containing "alan" (so alan_ee.yml is nested one level down):

$ ansible-builder create -f alan/alan_ee.yml -c alan/bc
Traceback (most recent call last):
  File "/Users/alancoding/.virtualenvs/ansible-builder_test/bin/ansible-builder", line 11, in <module>
    load_entry_point('ansible-builder', 'console_scripts', 'ansible-builder')()
  File "/Users/alancoding/Documents/repos/ansible-builder/ansible_builder/cli.py", line 34, in run
    ab = prepare()
  File "/Users/alancoding/Documents/repos/ansible-builder/ansible_builder/cli.py", line 30, in prepare
    return AnsibleBuilder(**vars(args))
  File "/Users/alancoding/Documents/repos/ansible-builder/ansible_builder/main.py", line 19, in __init__
    build_context=self.build_context)
  File "/Users/alancoding/Documents/repos/ansible-builder/ansible_builder/main.py", line 65, in __init__
    self.build_steps()
  File "/Users/alancoding/Documents/repos/ansible-builder/ansible_builder/main.py", line 71, in build_steps
    [self.steps.append(step) for step in GalaxySteps(containerfile=self)]
  File "/Users/alancoding/Documents/repos/ansible-builder/ansible_builder/main.py", line 93, in __new__
    copy(src, dest)
  File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.7/lib/python3.7/shutil.py", line 248, in copy
    copyfile(src, dst, follow_symlinks=follow_symlinks)
  File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.7/lib/python3.7/shutil.py", line 120, in copyfile
    with open(src, 'rb') as fsrc:
FileNotFoundError: [Errno 2] No such file or directory: 'scm_metadata.yml'

it fails. But it works when run from inside that directory:

$ cd alan/
$ ansible-builder create -f alan_ee.yml -c bc
Complete! Build context is at: bc

The tool should not behave differently depending on where you run it from, because that would kill portability of the ansible-builder project folders.
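
The fix is to anchor every referenced path to the directory of the definition file itself; a minimal sketch:

```python
import os


def resolve_relative_to(ee_file, referenced_path):
    """Resolve a path referenced in the EE definition relative to the
    definition file itself, not the current working directory."""
    if os.path.isabs(referenced_path):
        return referenced_path
    base = os.path.dirname(os.path.abspath(ee_file))
    return os.path.join(base, referenced_path)


print(resolve_relative_to("alan/alan_ee.yml", "scm_metadata.yml"))
```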

Write out sources into subdirectory in build context

There is a use case with some downstream build tooling that expects a Dockerfile in the top-level directory. This is not a major issue, but it forces all other files in the context folder to be top-level.

We should allow the location of the Dockerfile / Containerfile to be a config option, and then use proper path information for files located in the context folder.

Expose ansible-galaxy CLI flags (eg: --pre)

We'd like to be able to pass custom flags to the ansible-galaxy CLI tool, in this case --pre. This allows us to use pre-release versions of collections in our testing environments.

Local execution use case?

Is this project purely directed at setting up execution environments for AWX / Tower use cases?

Specifically, would the community be able to use this project to distribute containerized execution environments that engineers can use on their local workstations to execute Ansible code?

To that end, is there a way to specify Ansible / Python versions for a given execution environment?

What about inclusion of other binary dependencies that can't be handled through bindep? (namely terraform, kubectl, eksctl, etc...)

Support special-purpose bindep profiles

We have a proposal that we would document certain bindep profiles that, if included by a collection, would trigger certain behavior by ansible-builder. We have discussed:

  • epel - if a collection had entries with this profile, then the epel repo would be enabled before installing those requirements
  • compile - if a collection had entries with this profile, it would indicate that the package is needed to install other requirements (python ones specifically), but is not required in the final build. Several are being dealt with in the https://github.com/ansible/awx-ee repo

This would be a notable addition to the core feature set of ansible-builder, and would need to be documented. Ping @pabelanger
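
A very simplified scan for such profile tags might look like the following; real bindep syntax (platform selectors, negation, multiple selectors per entry) is considerably richer, and the package names below are hypothetical:

```python
def entries_with_profile(bindep_lines, profile):
    """Simplified scan for bindep entries carrying a given profile tag.
    Real bindep parsing handles platform selectors and negation too."""
    tag = f"[{profile}]"
    return [line.split()[0] for line in bindep_lines if tag in line]


system_reqs = [
    "gcc [compile]",         # hypothetical: needed only to build wheels
    "sshpass",
    "libcurl-devel [epel]",  # hypothetical: requires the epel repo
]
if entries_with_profile(system_reqs, "epel"):
    print("would enable the epel repo before installing")
```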

Systematically test that collection imports work

We need integration testing for this obviously, and we don't yet have a framework for that.

Throwing an idea out there:

Introduce a custom module that attempts to import a variable or module given in its parameters.

Do the relatively simple task of cataloging the HAS_* variables in each collection, ex:

$ grep -r "HAS_.*=" theforeman/foreman/
theforeman/foreman//plugins/module_utils/foreman_helper.py:    HAS_APYPIE = True
theforeman/foreman//plugins/module_utils/foreman_helper.py:    HAS_APYPIE = False
theforeman/foreman//plugins/module_utils/foreman_helper.py:    HAS_PYYAML = True
theforeman/foreman//plugins/module_utils/foreman_helper.py:    HAS_PYYAML = False
theforeman/foreman//plugins/callback/foreman.py:    HAS_REQUESTS = True
theforeman/foreman//plugins/callback/foreman.py:    HAS_REQUESTS = False
theforeman/foreman//plugins/modules/subnet.py:    HAS_IPADDRESS = True
theforeman/foreman//plugins/modules/subnet.py:    HAS_IPADDRESS = False
theforeman/foreman//plugins/modules/content_upload.py:    HAS_DEBFILE = True
theforeman/foreman//plugins/modules/content_upload.py:    HAS_DEBFILE = False
theforeman/foreman//plugins/modules/content_upload.py:    HAS_RPM = True
theforeman/foreman//plugins/modules/content_upload.py:    HAS_RPM = False
$ grep -r "HAS_.*=" community/vmware/
community/vmware//plugins/connection/vmware_tools.py:    HAS_REQUESTS = True
community/vmware//plugins/connection/vmware_tools.py:    HAS_REQUESTS = False
community/vmware//plugins/connection/vmware_tools.py:    HAS_URLLIB3 = True
community/vmware//plugins/connection/vmware_tools.py:        HAS_URLLIB3 = True
community/vmware//plugins/connection/vmware_tools.py:        HAS_URLLIB3 = False
community/vmware//plugins/connection/vmware_tools.py:    HAS_PYVMOMI = True
community/vmware//plugins/connection/vmware_tools.py:    HAS_PYVMOMI = False
community/vmware//plugins/inventory/vmware_vm_inventory.py:    HAS_REQUESTS = True
community/vmware//plugins/inventory/vmware_vm_inventory.py:    HAS_REQUESTS = False
community/vmware//plugins/inventory/vmware_vm_inventory.py:    HAS_PYVMOMI = True
community/vmware//plugins/inventory/vmware_vm_inventory.py:    HAS_PYVMOMI = False
community/vmware//plugins/inventory/vmware_vm_inventory.py:    HAS_VSPHERE = True
community/vmware//plugins/inventory/vmware_vm_inventory.py:    HAS_VSPHERE = False
community/vmware//plugins/module_utils/vmware.py:    HAS_REQUESTS = True
community/vmware//plugins/module_utils/vmware.py:    HAS_REQUESTS = False
community/vmware//plugins/module_utils/vmware.py:    HAS_PYVMOMI = True
community/vmware//plugins/module_utils/vmware.py:    HAS_PYVMOMIJSON = hasattr(VmomiSupport, 'VmomiJSONEncoder')
community/vmware//plugins/module_utils/vmware.py:    HAS_PYVMOMI = False
community/vmware//plugins/module_utils/vmware.py:    HAS_PYVMOMIJSON = False
community/vmware//plugins/module_utils/vmware_rest_client.py:    HAS_REQUESTS = True
community/vmware//plugins/module_utils/vmware_rest_client.py:    HAS_REQUESTS = False
community/vmware//plugins/module_utils/vmware_rest_client.py:    HAS_PYVMOMI = True
community/vmware//plugins/module_utils/vmware_rest_client.py:    HAS_PYVMOMI = False
community/vmware//plugins/module_utils/vmware_rest_client.py:    HAS_VSPHERE = True
community/vmware//plugins/module_utils/vmware_rest_client.py:    HAS_VSPHERE = False
community/vmware//plugins/module_utils/vca.py:    HAS_PYVCLOUD = True
community/vmware//plugins/module_utils/vca.py:    HAS_PYVCLOUD = False
community/vmware//plugins/modules/vmware_content_deploy_template.py:HAS_VAUTOMATION_PYTHON_SDK = False
community/vmware//plugins/modules/vmware_content_deploy_template.py:    HAS_VAUTOMATION_PYTHON_SDK = True
community/vmware//plugins/modules/vmware_guest_facts.py:    HAS_VSPHERE = True
community/vmware//plugins/modules/vmware_guest_facts.py:    HAS_VSPHERE = False
community/vmware//plugins/modules/vmware_guest.py:HAS_PYVMOMI = False
community/vmware//plugins/modules/vmware_guest.py:    HAS_PYVMOMI = True
community/vmware//plugins/modules/vmware_vm_vss_dvs_migrate.py:    HAS_PYVMOMI = True
community/vmware//plugins/modules/vmware_vm_vss_dvs_migrate.py:    HAS_PYVMOMI = False
community/vmware//plugins/modules/vmware_content_library_manager.py:HAS_VAUTOMATION_PYTHON_SDK = False
community/vmware//plugins/modules/vmware_content_library_manager.py:    HAS_VAUTOMATION_PYTHON_SDK = True
community/vmware//plugins/modules/vmware_guest_info.py:    HAS_VSPHERE = True
community/vmware//plugins/modules/vmware_guest_info.py:    HAS_VSPHERE = False
community/vmware//plugins/modules/vmware_resource_pool.py:    HAS_PYVMOMI = True
community/vmware//plugins/modules/vmware_resource_pool.py:    HAS_PYVMOMI = False
community/vmware//plugins/modules/vmware_content_deploy_ovf_template.py:HAS_VAUTOMATION_PYTHON_SDK = False
community/vmware//plugins/modules/vmware_content_deploy_ovf_template.py:    HAS_VAUTOMATION_PYTHON_SDK = True
community/vmware//plugins/modules/vmware_vsan_cluster.py:    HAS_PYVMOMI = True
community/vmware//plugins/modules/vmware_vsan_cluster.py:    HAS_PYVMOMI = False
community/vmware//plugins/modules/vmware_migrate_vmk.py:    HAS_PYVMOMI = True
community/vmware//plugins/modules/vmware_migrate_vmk.py:    HAS_PYVMOMI = False
community/vmware//plugins/modules/vmware_dvs_host.py:    HAS_COLLECTIONS_COUNTER = True
community/vmware//plugins/modules/vmware_dvs_host.py:    HAS_COLLECTIONS_COUNTER = False
community/vmware//plugins/modules/vmware_dns_config.py:    HAS_PYVMOMI = True
community/vmware//plugins/modules/vmware_dns_config.py:    HAS_PYVMOMI = False
community/vmware//plugins/modules/vmware_vsan_health_info.py:    HAS_PYVMOMI = True
community/vmware//plugins/modules/vmware_vsan_health_info.py:    HAS_PYVMOMIJSON = hasattr(VmomiSupport, 'VmomiJSONEncoder')
community/vmware//plugins/modules/vmware_vsan_health_info.py:    HAS_PYVMOMI = False
community/vmware//plugins/modules/vmware_vsan_health_info.py:    HAS_PYVMOMIJSON = False
community/vmware//plugins/modules/vmware_vsan_health_info.py:    HAS_VSANPYTHONSDK = True
community/vmware//plugins/modules/vmware_vsan_health_info.py:    HAS_VSANPYTHONSDK = False
community/vmware//plugins/modules/vmware_vmkernel_ip_config.py:    HAS_PYVMOMI = True
community/vmware//plugins/modules/vmware_vmkernel_ip_config.py:    HAS_PYVMOMI = False

You can see that some of these may be easier than others. However, even between the two of these, there are only a tiny handful of unique HAS_ variables. While those are duplicated a lot, they tend to still have definitions in module_utils.

The re-definition in individual modules like:

https://github.com/ansible-collections/vmware/blob/main/plugins/modules/vmware_vsan_health_info.py#L105

seems to be completely unnecessary and could be removed. So that supports the argument that we could just focus on verifying that the HAS_ variables from module_utils are True.
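
The cataloging step itself is easy to sketch with a regex over plugin source; a test harness inside the built image could then import each module_utils module and assert the collected flags are True:

```python
import re

HAS_VAR = re.compile(r"^\s*(HAS_[A-Z0-9_]+)\s*=", re.MULTILINE)


def catalog_has_vars(source_text):
    """Collect the unique HAS_* flag names assigned in a plugin file."""
    return sorted(set(HAS_VAR.findall(source_text)))


sample = (
    "try:\n"
    "    import requests\n"
    "    HAS_REQUESTS = True\n"
    "except ImportError:\n"
    "    HAS_REQUESTS = False\n"
)
print(catalog_has_vars(sample))  # ['HAS_REQUESTS']
```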
