ansible-collections / ansible.utils
A collection of Ansible utilities for the content creator.
License: GNU General Public License v3.0
Version 3.1 of Jinja2 removed deprecated code, including environmentfilter: pallets/jinja#1544
Attempting to execute Ansible operations that use filter plugins with Jinja2 >= 3.1 results in:
[WARNING]: Skipping plugin (/opt/python3/lib/python3.10/site-packages/ansible/plugins/filter/core.py) as it seems to be invalid: cannot import name 'environmentfilter' from 'jinja2.filters' (/opt/python3/lib/python3.10/site-packages/jinja2/filters.py)
This issue appears to have been addressed in core Ansible here: ansible/ansible#74667
filter
ansible [core 2.12.3]
config file = None
configured module search path = ['/home/jenkins/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/python3/lib/python3.10/site-packages/ansible
ansible collection location = /home/jenkins/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.10.4 (main, Mar 24 2022, 16:14:08) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]
jinja version = 3.1.1
libyaml = False
# /opt/python3/lib/python3.10/site-packages/ansible_collections
Collection Version
----------------- -------
community.general 4.6.0
<no output>
Linux (EL7)
Python 3.10.4 compiled from source
Jinja2 3.1.1
Ansible 5.5.0 (according to pip3 freeze)
Ansible Core 2.12.3 (according to pip3 freeze)
Same results with Ansible 2.10.7 and Ansible Base 2.10.9 using Python 3.9.10.
Install Jinja2 version 3.1.x
Execute playbook
No Python import error
[WARNING]: Skipping plugin (/opt/python3/lib/python3.10/site-packages/ansible/plugins/filter/core.py) as it seems to be invalid: cannot import name 'environmentfilter' from 'jinja2.filters' (/opt/python3/lib/python3.10/site-packages/jinja2/filters.py)
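The rename behind this warning can be handled with a small compatibility shim. This is a sketch under the assumption that the code should keep working across Jinja2 releases: jinja2 >= 3.0 exports pass_environment, while older releases export environmentfilter (removed in 3.1); the no-op fallback is purely illustrative.

```python
# Compatibility shim sketch: prefer the new decorator name, fall back to
# the old one, and degrade to a no-op so this file still imports even
# without jinja2 installed (that last branch is for illustration only).
def _identity(func):
    return func

try:
    from jinja2 import pass_environment  # jinja2 >= 3.0
except ImportError:
    try:
        from jinja2 import environmentfilter as pass_environment  # jinja2 < 3.0
    except ImportError:
        pass_environment = _identity  # jinja2 not available

@pass_environment
def upper_filter(environment, value):
    # The first argument receives the Jinja2 environment when the filter
    # is invoked by the template engine.
    return str(value).upper()
```

A shim like this avoids the ImportError regardless of which Jinja2 release is installed.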
Currently update_fact reports an error if the variable or subfield doesn't exist. It would be really nice if the missing variable or subfield could be created automatically. To avoid breaking existing patterns, an additional argument could be added to update_fact so that the old behavior remains the default.
update_fact
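A sketch of the requested behavior, using an illustrative helper (set_path and its signature are hypothetical, not part of update_fact):

```python
# Hypothetical helper: set a dotted path in a nested dict, creating any
# missing intermediate dicts along the way.
def set_path(data, path, value):
    keys = path.split(".")
    node = data
    for key in keys[:-1]:
        node = node.setdefault(key, {})  # create missing levels
    node[keys[-1]] = value
    return data

facts = {"a": {"b": 1}}
set_path(facts, "a.c.d", 2)
# facts is now {"a": {"b": 1, "c": {"d": 2}}}
```

With an opt-in argument, update_fact could keep raising its current error whenever creation is not desired.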
The ansible.utils requirements.txt does not pin dependency versions strictly, which is bad practice precisely because of issues like this one. In short, as the title says, textfsm 1.1.3 has issues and cannot be installed, which breaks anything that uses ansible.utils.
issue in the textfsm repo google/textfsm#105
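One middle ground between no pins and fully strict pins is an exclusion constraint; a hypothetical requirements.txt fragment (bounds are illustrative, not the collection's actual pins):

```text
# Accept 1.1.x releases but skip the broken 1.1.3 release
textfsm>=1.1.0,!=1.1.3,<2.0
```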
Switch the "filter" key to "name" in docs; check other plugin types for the same; request from toshio.
Dear maintainers,
This is important for your collections!
In accordance with the Community decision, we have created the news-for-maintainers repository for announcements of changes impacting collection maintainers (see the examples) instead of Issue 45, which will be closed soon.
To receive the announcements, click the Watch button in the upper right corner of the repository's home page and subscribe to Issues.
We would also like to remind you about the Bullhorn contributor newsletter, which has recently started to be released weekly. To see what it looks like, check the past releases. Please subscribe and talk to the Community via Bullhorn!
Join us in #ansible-social (for news reporting & chat), #ansible-community (for discussing collection & maintainer topics), and other channels on Matrix/IRC.
Help the Community and the Steering Committee to make right decisions by taking part in discussing and voting on the Community Topics that impact the whole project and the collections in particular. Your opinion there will be much appreciated!
Thank you!
In both the devel and milestone branches for ansible-core 2.14, a regression was introduced which causes ansible-test sanity to fail if the constant DOCUMENTATION is used more than once anywhere in a lookup plugin.
A workaround was put in place: retrieve the constant from globals() by matching its lowercased name.
When the below ansible-core issue is fixed, these changes need to be removed.
This is what was done in each of the ansible.utils lookup plugins.
schema = [v for k, v in globals().items() if k.lower() == "documentation"]
This is the ansible-core issue that needs to be resolved before removing the code from the ansible.utils lookup plugins:
ansible/ansible#77764
This is the PR where the lookup plugin changes went in:
Related #172
lookup plugins
When passing facts to fact_diff, adding ignore_lines causes the process to treat a dict like a list, keeping only its keys as strings (and discarding the values).
Specifically these lines (60-69):
self._before = [
    line
    for line in self._before
    if not any(regex.match(str(line)) for regex in self._skip_lines)
]
self._after = [
    line
    for line in self._after
    if not any(regex.match(str(line)) for regex in self._skip_lines)
]
sub_plugins/fact_diff/native.py
ansible [core 2.13.1]
- name: Show the difference in json format
ansible.utils.fact_diff:
before: "{{ before }}"
after: "{{ after }}"
plugin:
name: ansible.utils.native
vars:
skip_lines:
- '^example.*'
It should properly handle dicts; instead it breaks when skip_lines is included, causing two different facts to appear identical, because only key names are compared and not their values.
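A sketch of a line-based workaround, assuming the intent of skip_lines is to filter rendered text lines rather than dict keys (filter_lines is an illustrative name, not part of the collection):

```python
import json
import re

# Illustrative workaround: render structured data to text first, so that
# skip_lines patterns match rendered lines instead of bare dict keys.
def filter_lines(data, skip_patterns):
    if isinstance(data, (dict, list)):
        text = json.dumps(data, indent=2)
    else:
        text = str(data)
    regexes = [re.compile(p) for p in skip_patterns]
    return [
        line
        for line in text.splitlines()
        if not any(r.match(line.strip()) for r in regexes)
    ]

kept = filter_lines("example line\nkeep me", ["^example.*"])
# kept == ["keep me"]
```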
Add a new plugin resource_list_compare that generates the final resource list after comparing the base and the provided resource lists, honoring bang ('!') entries.
EXAMPLE:
- set_fact:
    resources:
      - '!all'
      - 'interfaces'
      - 'l2_interfaces'
      - 'l3_interfaces'
- set_fact:
    network_resource_list:
      modules:
        - 'acl_interfaces'
        - 'acls'
        - 'bgp_address_family'
        - 'bgp_global'
        - 'interfaces'
        - 'l2_interfaces'
        - 'l3_interfaces'
        - 'lacp'
        - 'lacp_interfaces'
        - 'lag_interfaces'
        - 'lldp_global'
        - 'lldp_interfaces'
        - 'logging_global'
        - 'ospf_interfaces'
        - 'ospfv2'
        - 'ospfv3'
        - 'prefix_lists'
        - 'route_maps'
        - 'static_routes'
        - 'vlans'
- name: set facts for data and criteria
  set_fact:
    final_network_resources: "{{ network_resource_list['modules'] | resource_list_compare(resources) }}"
RESULT:
final_network_resources=['interfaces', 'l2_interfaces', 'l3_interfaces']
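A Python sketch of the comparison semantics implied by the example ('!all' clears the selection, a leading '!' excludes a single entry, plain names re-include; this is an illustrative reading, not the actual plugin):

```python
# Illustrative semantics for the proposed resource_list_compare: entries
# are applied in order; 'all'/'!all' select or clear everything, '!name'
# removes one entry, and a plain name adds it if the base list has it.
def resource_list_compare(available, requested):
    selected = set()
    for entry in requested:
        if entry == "all":
            selected = set(available)
        elif entry == "!all":
            selected = set()
        elif entry.startswith("!"):
            selected.discard(entry[1:])
        elif entry in available:
            selected.add(entry)
    return sorted(selected)

modules = ["acl_interfaces", "acls", "interfaces",
           "l2_interfaces", "l3_interfaces", "vlans"]
result = resource_list_compare(
    modules, ["!all", "interfaces", "l2_interfaces", "l3_interfaces"]
)
# result == ['interfaces', 'l2_interfaces', 'l3_interfaces']
```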
Hi, I'm not sure if this is the expected behavior: remove_keys applied on [ "" ] always returns [ ] instead of [ "" ].
ansible.utils.remove_keys
ansible [core 2.14.3]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/ubuntu/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
ansible collection location = /home/ubuntu/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0] (/usr/bin/python3)
jinja version = 3.0.3
libyaml = True
Same problem with:
ansible-playbook [core 2.12.3.post0]
config file = None
configured module search path = ['/home/runner/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/site-packages/ansible
ansible collection location = /runner/requirements_collections:/home/runner/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible-playbook
python version = 3.8.12 (default, Sep 21 2021, 00:10:52) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)]
jinja version = 2.10.3
libyaml = True
# /home/ubuntu/.ansible/collections/ansible_collections
Collection Version
----------------- -------
ansible.utils 2.9.0
community.general 6.5.0
kubernetes.core 2.4.0
CONFIG_FILE() = /etc/ansible/ansible.cfg
Distributor ID: Ubuntu
Description: Ubuntu 22.04.2 LTS
Release: 22.04
Codename: jammy
Not environment dependent.
File test.yaml
:
apiVersion: v1
items:
- apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
creationTimestamp: "2023-03-08T10:06:46Z"
labels:
kubernetes.io/bootstrapping: rbac-defaults
name: extension-apiserver-authentication-reader
namespace: kube-system
resourceVersion: "184"
uid: 99df70e6-6d4f-442a-8b95-bc4b2138b589
rules:
- apiGroups:
- ""
resourceNames:
- extension-apiserver-authentication
resources:
- configmaps
verbs:
- get
- list
- watch
kind: List
metadata:
resourceVersion: ""
Playbook to run:
---
- name: playbook test remove_keys
hosts: localhost
tasks:
- name: debug file content without remove_keys
ansible.builtin.debug:
msg: "{{ lookup('file','test.yaml') | from_yaml }}"
- name: debug file content with remove_keys
ansible.builtin.debug:
msg: "{{ lookup('file','test.yaml') | from_yaml | ansible.utils.remove_keys(target=['creationTimestamp','resourceVersion']) }}"
remove_keys should not remove the empty string from the apiGroups list:
TASK [debug file content with remove_keys] **************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": {
"apiVersion": "v1",
"items": [
{
"apiVersion": "rbac.authorization.k8s.io/v1",
"kind": "Role",
"metadata": {
"annotations": {
"rbac.authorization.kubernetes.io/autoupdate": "true"
},
"labels": {
"kubernetes.io/bootstrapping": "rbac-defaults"
},
"name": "extension-apiserver-authentication-reader",
"namespace": "kube-system",
"uid": "99df70e6-6d4f-442a-8b95-bc4b2138b589"
},
"rules": [
{
"apiGroups": [
""
],
"resourceNames": [
"extension-apiserver-authentication"
],
"resources": [
"configmaps"
],
"verbs": [
"get",
"list",
"watch"
]
}
]
}
],
"kind": "List",
"metadata": {}
}
}
remove_keys removes the empty string from the apiGroups list (the output from the from_yaml filter is OK).
Same result with matching_parameter=regex
TASK [debug file content with remove_keys] **************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": {
"apiVersion": "v1",
"items": [
{
"apiVersion": "rbac.authorization.k8s.io/v1",
"kind": "Role",
"metadata": {
"annotations": {
"rbac.authorization.kubernetes.io/autoupdate": "true"
},
"labels": {
"kubernetes.io/bootstrapping": "rbac-defaults"
},
"name": "extension-apiserver-authentication-reader",
"namespace": "kube-system",
"uid": "99df70e6-6d4f-442a-8b95-bc4b2138b589"
},
"rules": [
{
"apiGroups": [],
"resourceNames": [
"extension-apiserver-authentication"
],
"resources": [
"configmaps"
],
"verbs": [
"get",
"list",
"watch"
]
}
]
}
],
"kind": "List",
"metadata": {}
}
}
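A minimal sketch of a likely root cause (this is an assumption about the implementation, not a confirmed diagnosis): filtering list items with a plain truthiness test drops empty strings, while an explicit None check preserves them.

```python
values = ["", "configmaps"]

# Truthiness test: the empty string is falsy and gets dropped.
buggy = [v for v in values if v]              # ["configmaps"]

# Explicit check: only genuinely missing values are dropped.
fixed = [v for v in values if v is not None]  # ["", "configmaps"]
```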
netutils is very handy for the average infrastructure engineer. There are a number of functions that would be ideal to create filter_plugins from.
If the maintainers are interested, I can pick out some components that do not overlap with the current collection filter_plugins.
I would be interested in contributing, assuming the feature idea gets some blessing.
We do not currently publish the documentation strings from sub-plugins in this collection. Should we make it available somewhere? Move it into the related module plugins?
Existing sub-plugins
cli-parser sub-plugin docstring
cli-parser module plugin docstring
cli-parser module published documentation
All sub-plugins
Ansible 4 and later
The from_xml filter returns a string instead of a Python dictionary, contrary to what its documentation reports and to what the from_json and from_yaml filters do.
from_xml
ansible [core 2.11.3]
config file = /home/user/workspace/ansible-playbooks-2/ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/user/workspace/ansible-playbooks-2/.direnv/python-3.8.10/lib/python3.8/site-packages/ansible
ansible collection location = /home/user/workspace/ansible-playbooks-2/collections
executable location = /home/user/workspace/ansible-playbooks-2/.direnv/python-3.8.10/bin/ansible
python version = 3.8.10 (default, Nov 26 2021, 20:14:08) [GCC 9.3.0]
jinja version = 3.0.1
libyaml = True
# /home/user/workspace/ansible-playbooks-2/collections/ansible_collections
Collection Version
------------- -------
ansible.utils 2.4.2
AGNOSTIC_BECOME_PROMPT(/home/user/workspace/ansible-playbooks-2/ansible.cfg) = False
ANSIBLE_FORCE_COLOR(env: ANSIBLE_FORCE_COLOR) = True
CALLBACKS_ENABLED(/home/user/workspace/ansible-playbooks-2/ansible.cfg) = ['timer', 'ansible.posix.profile_tasks']
COLLECTIONS_PATHS(/home/user/workspace/ansible-playbooks-2/ansible.cfg) = ['/home/user/workspace/ansible-playbooks-2/collections']
DEFAULT_CONNECTION_PLUGIN_PATH(/home/user/workspace/ansible-playbooks-2/ansible.cfg) = ['/home/user/workspace/ansible-playbooks-2/connection_plugins']
DEFAULT_FORKS(/home/user/workspace/ansible-playbooks-2/ansible.cfg) = 10
DEFAULT_HOST_LIST(env: ANSIBLE_INVENTORY) = ['/home/user/workspace/ansible-playbooks-2/inventories/test']
DEFAULT_TIMEOUT(/home/user/workspace/ansible-playbooks-2/ansible.cfg) = 10
DEFAULT_VAULT_PASSWORD_FILE(/home/user/workspace/ansible-playbooks-2/ansible.cfg) = /home/user/workspace/ansible-playbooks-2/inventories/vault_password.sh
DISPLAY_SKIPPED_HOSTS(/home/user/workspace/ansible-playbooks-2/ansible.cfg) = False
HOST_KEY_CHECKING(env: ANSIBLE_HOST_KEY_CHECKING) = False
INVENTORY_ANY_UNPARSED_IS_FAILED(/home/user/workspace/ansible-playbooks-2/ansible.cfg) = True
INVENTORY_ENABLED(/home/user/workspace/ansible-playbooks-2/ansible.cfg) = ['host_list', 'ini', 'constructed']
INVENTORY_IGNORE_EXTS(/home/user/workspace/ansible-playbooks-2/ansible.cfg) = ['~', '.orig', '.bak', '.cfg', '.retry', '.pyc', '.pyo', 'LICENSE', '.md', '.txt', 'secrets.yml', 'vars.yml', 'ssh_private_key']
PLAYBOOK_DIR(env: ANSIBLE_PLAYBOOK_DIR) = /home/user/workspace/ansible-playbooks-2
RETRY_FILES_ENABLED(/home/user/workspace/ansible-playbooks-2/ansible.cfg) = True
RETRY_FILES_SAVE_PATH(/home/user/workspace/ansible-playbooks-2/ansible.cfg) = /tmp
TASK_TIMEOUT(env: ANSIBLE_TASK_TIMEOUT) = 180
VARIABLE_PRECEDENCE(/home/user/workspace/ansible-playbooks-2/ansible.cfg) = ['all_inventory', 'groups_inventory', 'all_plugins_play', 'groups_plugins_play', 'all_plugins_inventory', 'groups_plugins_inventory']
Ubuntu 20.04.3 LTS (Focal Fossa)
- hosts: localhost
diff: true
gather_facts: false
tasks:
- debug:
msg: "{{ (some_xml | ansible.utils.from_xml)['root']['leaf'] }}"
vars:
some_xml: |
<?xml version="1.0" encoding="UTF-8"?>
<root>
<leaf>
<attribute>foobar</attribute>
</leaf>
</root>
This should print
"msg": {
"attribute": "foobar"
}
fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'str object' has no attribute 'root'\n\nThe error appears to be in '/home/john/workspace_tagpay/ansible-playbooks-2/foo.tmp.yml': line 6, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n tasks:\n - debug:\n ^ here\n"}
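For reference, the documented behavior would yield a nested dictionary. This stdlib-only sketch shows the shape the filter is expected to return (a simplified conversion, not the filter's actual implementation):

```python
import xml.etree.ElementTree as ET

# Simplified XML-to-dict conversion, just enough to show the expected
# shape: leaf elements become their text, others become nested dicts.
def to_dict(elem):
    children = list(elem)
    if not children:
        return elem.text
    return {child.tag: to_dict(child) for child in children}

root = ET.fromstring("<root><leaf><attribute>foobar</attribute></leaf></root>")
result = {root.tag: to_dict(root)}
# result["root"]["leaf"]["attribute"] == "foobar"
```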
The documentation at https://docs.ansible.com/ansible/latest/collections/ansible/utils/docsite/filters_ipaddr.html#subnet-manipulation incorrectly states in multiple places that the optional first parameter can be a subnet size; the function instead expects the prefix length of the subnet. It took me a while to figure out why I couldn't get subnets of size 4 in 192.168.0.0/16 and similar:
- name: asdf
debug:
msg: |
{{ item | ansible.utils.ipsubnet(2) }}
{{ item | ansible.utils.ipsubnet(16) }}
{{ item | ansible.utils.ipsubnet(20) }}
{{ item | ansible.utils.ipsubnet(80) }}
{{ item | ansible.utils.ipsubnet(96) }}
loop:
- '192.168.0.0/16'
- 'fc00:0:0:0:1::/80'
Output:
TASK [asdf] **************************************************************************************************************************************************************************************************************************************
ok: [localhost] => (item=192.168.0.0/16) => {
"msg": "0\n1\n16\nFalse\nFalse\n"
}
ok: [localhost] => (item=fc00:0:0:0:1::/80) => {
"msg": "0\n0\n0\n1\n65536\n"
}
ansible.utils.ipsubnet
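The distinction is easy to see with the stdlib ipaddress module: the argument there is a prefix length, and the number of addresses per subnet follows from it.

```python
import ipaddress

net = ipaddress.ip_network("192.168.0.0/16")
# Splitting a /16 into /20 subnets: 2**(20-16) == 16 subnets,
# each holding 2**(32-20) == 4096 addresses.
subnets = list(net.subnets(new_prefix=20))
len(subnets)              # 16
subnets[0].num_addresses  # 4096
```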
Relates to ansible-collections/news-for-maintainers#24.
It's been detected that this repository contains an ignore-2.14.txt file but not an ignore-2.15.txt file.
Actions needed:
Copy tests/sanity/ignore-2.14.txt to tests/sanity/ignore-2.15.txt, otherwise the CI will start failing.
Add the stable-2.15 branch to your CI workflow files. If you use GitHub Actions, see the content of .github/workflows or, if you use Azure Pipelines, community.postgresql/.azure-pipelines/azure-pipelines.yml. If not added, this will violate the Collection Requirements for collections included in the Ansible package.
To read more about the topic, see the Ignores guide.
To read more about the context, see the announcement.
Thank you!
Please implement a new filter that returns either the first or last X bits of an IP address.
Example:
I have an IP address like 1234:4321:abcd:dcba::17. Now I would like to get just the last 80 bits of the address for my firewall rule.
1234:4321:abcd:dcba::17
-> expanded: 1234:4321:abcd:dcba:0000:0000:0000:0017
-> filtered: dcba:0:0:0:17
ansible.utils.ipaddr
In my case I need to derive a portion of an IP address to make a firewall rule from it.
I don't want to use the IP address itself, as it wouldn't match (Complicated IPv6 story).
So I just want to have the network portion and the interface identifier.
1234:4321:abcd:dcba::17 | ansible.utils.ipcut(last, 80)
dcba:0:0:0:17
1234:4321:abcd:dcba::17 | ansible.utils.ipcut(first, 64)
1234:4321:abcd:dcba
Edit:
As I need that for nftables, the result needs to be in expanded format.
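A stdlib sketch of the requested behavior (ip_last_bits and ip_first_bits are hypothetical names mirroring the proposed ipcut, not existing filters):

```python
import ipaddress

# Keep only the last `bits` bits of an address, as an integer.
def ip_last_bits(address, bits):
    return int(ipaddress.ip_address(address)) & ((1 << bits) - 1)

# Keep only the first `bits` bits of an address, as an integer.
def ip_first_bits(address, bits):
    addr = ipaddress.ip_address(address)
    return int(addr) >> (addr.max_prefixlen - bits)

hex(ip_last_bits("1234:4321:abcd:dcba::17", 80))
# '0xdcba0000000000000017'  -> dcba:0:0:0:17
hex(ip_first_bits("1234:4321:abcd:dcba::17", 64))
# '0x12344321abcddcba'      -> 1234:4321:abcd:dcba
```

Rendering the integer back into expanded hextet form for nftables would be a separate formatting step.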
The default RefResolver in jsonschema correctly loads external references; the validate module does not.
ansible.utils.validate
ansible [core 2.14.5]
config file = /home/user/ansible_home/ansible.cfg
configured module search path = ['/home/user/ansible_home/plugins/modules']
ansible python module location = /home/user/venv/lib64/python3.11/site-packages/ansible
ansible collection location = /home/user/ansible_dev/collections:/home/user/ansible_home/collections:/home/user/playbooks/collections
executable location = /home/user/venv/bin/ansible
python version = 3.11.3 (main, Apr 5 2023, 00:00:00) [GCC 12.2.1 20221121 (Red Hat 12.2.1-4)] (/home/user/venv/bin/python3)
jinja version = 3.1.2
libyaml = True
# /home/user/ansible_home/collections/ansible_collections
Collection Version
------------- -------
ansible.utils 2.9.0
# /home/user/venv/lib/python3.11/site-packages/ansible_collections
Collection Version
------------- -------
ansible.utils 2.9.0
# /home/user/venv/lib64/python3.11/site-packages/ansible_collections
Collection Version
------------- -------
ansible.utils 2.9.0
ANSIBLE_HOME(env: ANSIBLE_HOME) = /home/user/ansible_home
BECOME_PLUGIN_PATH(/home/user/ansible_home/ansible.cfg) = ['/home/user/ansible_home/plugins/become']
COLLECTIONS_PATHS(/home/user/ansible_home/ansible.cfg) = ['/home/user/ansible_dev/collections', '/home/user/ansible_home/collections', '/home/user/playbooks/collections']
CONFIG_FILE() = /home/user/ansible_home/ansible.cfg
DEFAULT_ACTION_PLUGIN_PATH(/home/user/ansible_home/ansible.cfg) = ['/home/user/ansible_home/plugins/action']
DEFAULT_CACHE_PLUGIN_PATH(/home/user/ansible_home/ansible.cfg) = ['/home/user/ansible_home/plugins/cache']
DEFAULT_CALLBACK_PLUGIN_PATH(/home/user/ansible_home/ansible.cfg) = ['/home/user/ansible_home/plugins/callback']
DEFAULT_CONNECTION_PLUGIN_PATH(/home/user/ansible_home/ansible.cfg) = ['/home/user/ansible_home/plugins/connection']
DEFAULT_FILTER_PLUGIN_PATH(/home/user/ansible_home/ansible.cfg) = ['/home/user/ansible_home/plugins/filter']
DEFAULT_HOST_LIST(/home/user/ansible_home/ansible.cfg) = ['/home/user/code/netbox_inventory']
DEFAULT_INVENTORY_PLUGIN_PATH(/home/user/ansible_home/ansible.cfg) = ['/home/user/ansible_home/plugins/inventory']
DEFAULT_LOOKUP_PLUGIN_PATH(/home/user/ansible_home/ansible.cfg) = ['/home/user/ansible_home/plugins/lookup']
DEFAULT_MODULE_PATH(/home/user/ansible_home/ansible.cfg) = ['/home/user/ansible_home/plugins/modules']
DEFAULT_MODULE_UTILS_PATH(/home/user/ansible_home/ansible.cfg) = ['/home/user/ansible_home/plugins/module_utils']
DEFAULT_ROLES_PATH(/home/user/ansible_home/ansible.cfg) = ['/home/user/ansible_home/roles']
DEFAULT_STRATEGY_PLUGIN_PATH(/home/user/ansible_home/ansible.cfg) = ['/home/user/ansible_home/plugins/strategy']
DEFAULT_TERMINAL_PLUGIN_PATH(/home/user/ansible_home/ansible.cfg) = ['/home/user/ansible_home/plugins/terminal']
DEFAULT_TEST_PLUGIN_PATH(/home/user/ansible_home/ansible.cfg) = ['/home/user/ansible_home/plugins/test']
DEFAULT_VARS_PLUGIN_PATH(/home/user/ansible_home/ansible.cfg) = ['/home/user/ansible_home/plugins/vars']
INVENTORY_ENABLED(/home/user/ansible_home/ansible.cfg) = ['host_list', 'script', 'auto', 'yaml', 'ini', 'toml', 'constructed']
PERSISTENT_COMMAND_TIMEOUT(/home/user/ansible_home/ansible.cfg) = 120
RETRY_FILES_ENABLED(/home/user/ansible_home/ansible.cfg) = False
RETRY_FILES_SAVE_PATH(/home/user/ansible_home/ansible.cfg) = /home/user/.ansible-retry
Fedora Remix 37 on WSL
https://gist.github.com/bk2zsto/f9df69ed86a5b0558b103bfaab3a63d1
validate loads external references and/or errors when they are not resolved
validate ignores external references
$ python /tmp/debug.py
Validation of 192.0.2.0 failed with combined
Validation of 192.0.2.0 failed with referring
$ ansible-playbook /tmp/debug.yml
PLAY [localhost] *******************************************************************************************************
TASK [setup test_data] *************************************************************************************************
ok: [localhost]
TASK [validate with combined schema] ***********************************************************************************
ok: [localhost] => (item=cidr)
failed: [localhost] (item=not_cidr) => {"ansible_loop_var": "item", "changed": false, "errors": [{"data_path": "network", "expected": "^[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}/[0-9]{1,2}", "found": "192.0.2.0", "json_path": "$.network", "message": "'192.0.2.0' does not match '^[0-9]{1,3}\\\\.[0-9]{1,3}\\\\.[0-9]{1,3}\\\\.[0-9]{1,3}/[0-9]{1,2}'", "relative_schema": {"$comment": "CIDR, IPv4-only", "pattern": "^[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}/[0-9]{1,2}", "type": "string"}, "schema_path": "properties.network.pattern", "validator": "pattern"}], "item": {"name": "not_cidr", "network": "192.0.2.0"}, "msg": "Validation errors were found.\nAt 'properties.network.pattern' '192.0.2.0' does not match '^[0-9]{1,3}\\\\.[0-9]{1,3}\\\\.[0-9]{1,3}\\\\.[0-9]{1,3}/[0-9]{1,2}'. "}
...ignoring
TASK [validate with referring schema] **********************************************************************************
ok: [localhost] => (item=cidr)
ok: [localhost] => (item=not_cidr) <==============================
PLAY RECAP *************************************************************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=1
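For context, resolving an internal JSON Schema "$ref" means replacing the referring node with the node it points to; a minimal sketch of that mechanism (not jsonschema's actual resolver):

```python
# Minimal illustration of internal "#/..." reference resolution: walk
# the root document along the reference path and return the target node.
def resolve_ref(node, root):
    if isinstance(node, dict) and "$ref" in node:
        target = root
        for part in node["$ref"].lstrip("#/").split("/"):
            target = target[part]
        return target
    return node

schema = {
    "definitions": {"cidr": {"type": "string", "pattern": "/"}},
    "properties": {"network": {"$ref": "#/definitions/cidr"}},
}
resolved = resolve_ref(schema["properties"]["network"], schema)
# resolved == {"type": "string", "pattern": "/"}
```

A failing validation on the referring schema (as the combined schema produces) would indicate the reference was actually resolved; silently passing suggests it was ignored.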
When using /31 IPv4 and /127 IPv6 networks for point-to-point connections, the network_in_network filter fails when validating the second IP address in the network. The same check with network_in_usable works, because that function has special handling for networks of two addresses. Since network_in_usable is by definition a subset of network_in_network, the latter should also succeed for these networks.
ipaddr
ansible 2.10.16
config file = /Users/CENSORED/test/ansible.cfg
configured module search path = ['/Users/CENSORED/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/CENSORED/test/venv/lib/python3.9/site-packages/ansible
executable location = /Users/CENSORED/test/venv/bin/ansible
python version = 3.9.9 (main, Nov 21 2021, 03:16:13) [Clang 13.0.0 (clang-1300.0.29.3)]
# /Users/CENSORED/test/collections/ansible_collections
Collection Version
----------------- -------
ansible.netcommon 2.5.0
Not dependent on config
MacOS 12.1
- hosts: localhost
tasks:
- name: Test filter
debug:
msg:
- "network_in_network IPv4: {{ '192.168.1.0/31'|ansible.netcommon.network_in_network('192.168.1.1') }}"
- "network_in_network IPv6: {{ '2001:db8::/127'|ansible.netcommon.network_in_network('2001:db8::1') }}"
This should return True both times because the last IP address in a /31 is valid.
The result is False
PLAY [localhost] *************************************************************************************************************
TASK [Test ipaddr filters] ***************************************************************************************************
ok: [localhost] => {}
MSG:
['network_in_network IPv4: False', 'network_in_network IPv6: False']
PLAY RECAP *******************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
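The stdlib agrees that both addresses of a /31 (and of a /127) belong to the network, which supports the expected result:

```python
import ipaddress

# In a /31 there is no separate network/broadcast address; both
# addresses are part of the network. Same for an IPv6 /127.
v4 = ipaddress.ip_address("192.168.1.1") in ipaddress.ip_network("192.168.1.0/31")
v6 = ipaddress.ip_address("2001:db8::1") in ipaddress.ip_network("2001:db8::/127")
# v4 and v6 are both True
```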
It looks like you are actively working on this collection and release new versions on galaxy. However, you don't tag them in this repository, which is a requirement for collections that are part of the community package.
Please tag your release, otherwise we will consider removing ansible.utils
from ansible.
Not to put too fine a point on it, but it looks like an easy thing to fix where we don't have to discuss much ;-)
When installing the requirements.txt for ansible.utils, jsonschema gets downgraded to 3.2.0, and pip complains that ansible-lint requires jsonschema>=4.5.1. I'm not sure whether upgrading jsonschema to 4.5.1 after installing the ansible.utils requirements.txt would have an adverse effect on ansible.utils.
Unsure
ansible [core 2.12.5]
config file = /Users/bryan.hundven/Projects/p-git01.on24.com/netops/centos_base/ansible.cfg
configured module search path = ['/Users/bryan.hundven/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/bryan.hundven/Projects/venv/lib/python3.9/site-packages/ansible
ansible collection location = /Users/bryan.hundven/.ansible/collections:/usr/share/ansible/collections
executable location = /Users/bryan.hundven/Projects/venv/bin/ansible
python version = 3.9.12 (main, May 8 2022, 18:05:47) [Clang 13.1.6 (clang-1316.0.21.2)]
jinja version = 3.1.1
libyaml = True
# /Users/bryan.hundven/.ansible/collections/ansible_collections
Collection Version
------------- -------
ansible.utils 2.6.1
N/A
MacOS
Linux
Windows
pip3 install -U ansible-lint
ansible-galaxy collection install -f ansible.utils
pip3 install -U -r ~/.ansible/collections/ansible_collections/ansible/utils/requirements.txt
No package conflicts
Package conflicts
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
ansible-lint 6.2.1 requires jsonschema>=4.5.1, but you have jsonschema 3.2.0 which is incompatible.
subnet_of_test does not work with IPv6 addresses
subnet_of_test
ansible [core 2.14.3]
config file = /workspaces/ansible.cfg
configured module search path = ['/home/vscode/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /workspaces/.pyenv/versions/3.10.4/envs/ansible.venv/lib/python3.10/site-packages/ansible
ansible collection location = /workspaces/collections
executable location = /workspaces/.pyenv/versions/ansible.venv/bin/ansible
python version = 3.10.4 (main, Mar 2 2023, 17:18:28) [GCC 10.2.1 20210110] (/workspaces/.pyenv/versions/3.10.4/envs/ansible.venv/bin/python3.10)
jinja version = 3.1.2
libyaml = True
# /workspaces/collections/ansible_collections
Collection Version
------------- ----------
ansible.utils 2.10.0-dev
# /workspaces/.pyenv/versions/3.10.4/envs/ansible.venv/lib/python3.10/site-packages/ansible_collections
Collection Version
------------- -------
ansible.utils 2.9.0
CACHE_PLUGIN(/workspaces/ansible.cfg) = ansible.builtin.jsonfile
CACHE_PLUGIN_CONNECTION(/workspaces/ansible.cfg) = .cache.facts.json
CACHE_PLUGIN_TIMEOUT(/workspaces/ansible.cfg) = 86400000
COLLECTIONS_PATHS(/workspaces/ansible.cfg) = ['/workspaces/collections']
CONFIG_FILE() = /workspaces/ansible.cfg
DEFAULT_BECOME(/workspaces/ansible.cfg) = True
DEFAULT_BECOME_FLAGS(/workspaces/ansible.cfg) = -l
DEFAULT_BECOME_METHOD(/workspaces/ansible.cfg) = su
DEFAULT_FORCE_HANDLERS(/workspaces/ansible.cfg) = True
DEFAULT_FORKS(/workspaces/ansible.cfg) = 50
DEFAULT_HOST_LIST(/workspaces/ansible.cfg) = ['/workspaces/inventory.ini']
DEFAULT_JINJA2_NATIVE(/workspaces/ansible.cfg) = True
DEFAULT_MANAGED_STR(/workspaces/ansible.cfg) = This file is managed by ansible - local changes will be lost
DEFAULT_REMOTE_USER(/workspaces/ansible.cfg) = root
DEFAULT_TRANSPORT(/workspaces/ansible.cfg) = ssh
DEFAULT_VAULT_PASSWORD_FILE(/workspaces/ansible.cfg) = /workspaces/.vpass
DEPRECATION_WARNINGS(/workspaces/ansible.cfg) = True
INVENTORY_ENABLED(/workspaces/ansible.cfg) = ['ini']
SYSTEM_WARNINGS(/workspaces/ansible.cfg) = True
Debian 11 in dev container with pinned python and dependency versions for reproducible runs in CI/CD.
'fe80::1234/64' is ansible.utils.subnet_of 'fe80::/10'
Expected: evaluates to true.
Actual: evaluates to false.
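The stdlib confirms the expected result (strict=False is needed because fe80::1234/64 has host bits set):

```python
import ipaddress

net = ipaddress.ip_network("fe80::1234/64", strict=False)  # -> fe80::/64
supernet = ipaddress.ip_network("fe80::/10")
net.subnet_of(supernet)  # True
```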
We are happy to announce that the registration for the Ansible Contributor Summit is open!
This is a great opportunity for interested people to meet, discuss related topics, share their stories and opinions, get the latest important updates and just to hang out together.
There will be different announcements & presentations by Community, Core, Cloud, Network, and other teams.
Current contributors will be happy to share their stories and experience with newcomers.
There will be links to interactive self-paced instruqt scenarios shared during the event that help newcomers learn different aspects of development.
Online on Matrix and Youtube. Tuesday, April 12, 2022, 12:00 - 20:00 UTC.
Add the event to your calendar. Use the ical URL (for example, in Google Calendar "Add other calendars" > "Import from URL") instead of importing the .ics file so that any updates to the event will be reflected in your calendar.
Check out the Summit page:
We are looking forward to seeing you!:)
Extend at least remove_keys from the recursive filter plugins (remove_keys | replace_keys | keep_keys) to support target inputs in the form that ansible.utils.to_paths generates.
If a data structure is more complex and has a key that is matched by target on multiple levels, you might want to be more specific about which key you really want to remove.
##example.yaml
a:
b:
c:
e:
- 0
- 1
d:
e:
- True
- False
##Playbook
vars_files:
- "example.yaml"
tasks:
- ansible.builtin.set_fact:
paths: "{{ a|ansible.utils.to_paths }}"
# TASK [ansible.builtin.set_fact] ********************************************
# ok: [localhost] => changed=false
# ansible_facts:
# paths:
# b.c.e[0]: 0
# b.c.e[1]: 1
# b.d.e[0]: True
# b.d.e[1]: False
- debug:
msg: "{{ a|ansible.utils.remove_keys(target=['^b.d.e'], matching_parameter='regex') }}"
# TASK [debug] *****************************************
# ok: [localhost] => {
# "msg": {
# "b": {
# "c": {
# "e": [
# 0,
# 1
# ]
# }
# }
# }
# }
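A sketch of the requested path-aware removal (remove_path is an illustrative helper, not part of the collection):

```python
# Illustrative: delete only the key addressed by a dotted path, leaving
# identically named keys elsewhere in the structure untouched.
def remove_path(data, path):
    keys = path.split(".")
    node = data
    for key in keys[:-1]:
        node = node[key]
    node.pop(keys[-1], None)
    return data

data = {"b": {"c": {"e": [0, 1]}, "d": {"e": [True, False]}}}
remove_path(data, "b.d.e")
# data == {"b": {"c": {"e": [0, 1]}, "d": {}}}
```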
I wanted to work on a filter. To start, I wanted to write a unit test that replicated my failure, but I was initially unable to run the current tests. I tried to execute them in a container, and it failed to gather the necessary requirements (netaddr).
Unit Tests in ansible.utils for ipsubnet
ansible --version
ansible [core 2.12.2]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/bbbb/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.10/site-packages/ansible
ansible collection location = /home/bbbb/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.10.2 (main, Jan 17 2022, 00:00:00) [GCC 11.2.1 20211203 (Red Hat 11.2.1-7)]
jinja version = 3.0.1
libyaml = True
Look at the steps to reproduce. In this case it is the head of the repo.
No change.
$ ansible-config dump --only-changed
Fedora 35 Workstation, fresh installation, using the Podman container runtime and the base container image in ansible-test.
git clone https://github.com/ansible-collections/ansible.utils.git
cd ansible.utils
ansible-galaxy collection build .
ansible-galaxy collection install -p . ansible-utils-*.tar.gz
ansible-test units --docker base tests/unit/plugins/filter/test_ipsubnet.py --requirements --verbose
All current tests pass.
An excerpt of the failures is below. All tests for ipsubnet fail until netaddr is present.
[gw1] [100%] FAILED ../../../ansible/test/lib/ansible_test/_data/::TestIpSubnet::test_ipvsubnet_filter_subnet_with_1st_index
====================================================== FAILURES ======================================================
_____________________________________ TestIpSubnet.test_ipvsubnet_address_subnet _____________________________________
[gw0] linux -- Python 3.10.0 /usr/bin/python3.10
self = <ansible_collections.ansible.utils.tests.unit.plugins.filter.test_ipsubnet.TestIpSubnet testMethod=test_ipvsubnet_address_subnet>
    def test_ipvsubnet_address_subnet(self):
        """convert address to subnet"""
        args = ["", address, ""]
        result = _ipsubnet(*args)
>       self.assertEqual(result, "192.168.144.5/32")
E       AssertionError: False != '192.168.144.5/32'
(This is a fairly trivial fix; reported in the main issue list, but I was redirected here)
When using a playbook which uses e.g. ansible.utils.ipv4('public'), I see an error message:
The ipv4 filter requires python's netaddr be installed on the ansible controller
Trying to parse this, I ended up perplexed – 'in what sense does "python" have a network address; WTF?!' – and it took a web search to realise that it's referring to the Python netaddr
package. Fairly obvious in retrospect, of course (and I have in fact used that package before), but it took me a good 10 minutes to work out what the error message was reporting, rather than getting an immediate 'aha!'.
I suggest changing the wording to
The ipv4 filter requires that the Python netaddr package be installed on the ansible controller
(One for the intern...?)
ansible-playbook
ansible [core 2.14.3]
config file = /Users/Shared/common/checkouts/itm/ansible/ansible.cfg
configured module search path = ['/Users/norman/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/Shared/common/checkouts/itm/ansible/ansible-venv/lib/python3.9/site-packages/ansible
ansible collection location = /Users/norman/.ansible/collections:/usr/share/ansible/collections
executable location = /Users/Shared/common/checkouts/itm/ansible/ansible-venv/bin/ansible
python version = 3.9.6 (default, Oct 18 2022, 12:41:40) [Clang 14.0.0 (clang-1400.0.29.202)] (/Users/Shared/common/checkouts/itm/ansible/ansible-venv/bin/python)
jinja version = 3.1.2
libyaml = True
# /Users/Shared/common/checkouts/itm/ansible/ansible-venv/lib/python3.9/site-packages/ansible_collections
Collection Version
------------- -------
ansible.utils 2.9.0
CONFIG_FILE() = /Users/Shared/common/checkouts/itm/ansible/ansible.cfg
DEFAULT_HOST_LIST(/Users/Shared/common/checkouts/itm/ansible/ansible.cfg) = ['/Users/Shared/common/checkouts/itm/ansible/inventory/hosts']
DEFAULT_MANAGED_STR(/Users/Shared/common/checkouts/itm/ansible/ansible.cfg) = Ansible managed: modified on %Y-%m-%d %H:%M:%S
DEFAULT_ROLES_PATH(/Users/Shared/common/checkouts/itm/ansible/ansible.cfg) = ['/Users/Shared/common/checkouts/itm/ansible/roles']
DEFAULT_VAULT_PASSWORD_FILE(/Users/Shared/common/checkouts/itm/ansible/ansible.cfg) = /Users/Shared/common/checkouts/itm/ansible/.vault_pass
RETRY_FILES_ENABLED(/Users/Shared/common/checkouts/itm/ansible/ansible.cfg) = False
macOS 13.2.1
Run any playbook which exploits features of the Python netaddr
package, while that package isn't installed on the controller node
I expected an easy-to-parse/non-confusing error message
I got a confusing (clear only in retrospect) error message
Something like this....
try:
    from ansible.module_utils.common.arg_spec import ArgumentSpecValidator

    HAS_ANSIBLE_ARG_SPEC_VALIDATOR = True
except ImportError:
    HAS_ANSIBLE_ARG_SPEC_VALIDATOR = False


def validate(self):
    """The public validate method

    Check for the argspec validation that is coming in 2.11,
    and change the check accordingly.
    """
    if HAS_ANSIBLE_ARG_SPEC_VALIDATOR:
        if self._schema_format == "doc":
            self._convert_doc_to_schema()
        conditionals = self._schema_conditionals or {}
        validator = ArgumentSpecValidator(self._schema["argument_spec"], **conditionals)
        result = validator.validate(self._data)
        valid = not bool(result.error_messages)
        return valid, result.error_messages, result.validated_parameters
    else:
        return self._validate()
Update the network user guide to point to the ansible.utils.cli_parse
module, as ansible.netcommon.cli_parse
is deprecated as of the ansible.netcommon
2.0.0 release.
Add new docs to explain ansible.utils.validate
plugin usage.
Add support for using the following format options: https://python-jsonschema.readthedocs.io/en/stable/validate/#validating-formats
"The format keyword allows for basic semantic validation on certain kinds of string values that are commonly used."
@ganeshrn has provided some preliminary working modifications to the plugin for me to test.
ansible.utils.validation -> jsonschema.py
The Python library used for the ansible.utils.validation module, as it pertains to jsonschema, has additional support for jsonschema's format keywords, which help validate common formats like emails, IPs, and URLs without specifying regex patterns.
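A minimal sketch of what enabling format validation could look like with the jsonschema library (assuming jsonschema is installed; `Draft7Validator` and `FormatChecker` are its documented APIs):

```python
import jsonschema

# Without a FormatChecker, "format" is annotation-only and invalid values
# pass validation; attaching one turns format keywords into real checks.
schema = {"type": "string", "format": "ipv4"}
validator = jsonschema.Draft7Validator(
    schema, format_checker=jsonschema.FormatChecker()
)

assert not list(validator.iter_errors("192.168.0.1"))  # valid IPv4
assert list(validator.iter_errors("not-an-ip"))        # format violation reported
```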
ipaddr('prefix') returns [prefix]
instead of just prefix
like ansible.netcommon.ipaddr did before the migration.
ipaddr('prefix')
ansible [core 2.12.3]
python version = 3.9.10 (main, Jan 15 2022, 11:40:53) [Clang 13.0.0 (clang-1300.0.29.3)]
jinja version = 3.0.3
libyaml = True
# /Users/.../ansible/.venv-ansible/lib/python3.9/site-packages/ansible_collections
Collection Version
------------- -------
ansible.utils 2.5.0
# /Users/.../.ansible/collections/ansible_collections
Collection Version
------------- -------
ansible.utils 2.5.1
DEFAULT_STDOUT_CALLBACK(/Users/.../ansible/ansible.cfg) = yaml
- set_fact:
    ansible_default_ipv4:
      address: "10.10.184.252"
      netmask: "255.255.240.0"
- debug:
    msg: "{{ (ansible_default_ipv4.address + '/' + ansible_default_ipv4.netmask) | ansible.netcommon.ipaddr('prefix') }}"
Expected result of debug: 20
Actual result of debug: [20]
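For reference, the expected scalar behaviour can be reproduced with Python's stdlib ipaddress module (a sketch, not the filter's actual implementation):

```python
import ipaddress

# ip_interface accepts "address/netmask" notation directly
iface = ipaddress.ip_interface("10.10.184.252/255.255.240.0")
prefix = iface.network.prefixlen  # 20, a plain int rather than [20]
```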
Under certain conditions, update_fact will prevent variables from being substituted within the variables it manipulates.
ansible.utils.update_fact
ansible [core 2.11.4]
config file = /Users/pookey/.ansible.cfg
configured module search path = ['/Users/pookey/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/homebrew/Cellar/ansible/4.5.0/libexec/lib/python3.9/site-packages/ansible
ansible collection location = /Users/pookey/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/homebrew/bin/ansible
python version = 3.9.7 (default, Sep 3 2021, 04:31:11) [Clang 12.0.5 (clang-1205.0.22.9)]
jinja version = 3.0.1
libyaml = True
ansible-galaxy collection list ansible.utils
# /opt/homebrew/Cellar/ansible/4.5.0/libexec/lib/python3.9/site-packages/ansible_collections
Collection Version
------------- -------
ansible.utils 2.4.0
# /Users/pookey/.ansible/collections/ansible_collections
Collection Version
------------- -------
ansible.utils 2.4.3
Tested on OSX and Linux
---
- hosts: localhost
  vars:
    env_name: moo
    datadog_checks:
      mysql:
        - host: "{{ env_name }}.moo"
      codedeploy:
        logs: []
  tasks:
    - debug:
        var: datadog_checks.mysql[0].host
    - name: update fact
      ansible.utils.update_fact:
        updates:
          - path: "datadog_checks['codedeploy']['logs']"
            value: "{{ datadog_checks['codedeploy']['logs'] + ['/path/to/logfile'] }}"
      register: new_dd
    - name: replace the datadog_checks array
      set_fact:
        datadog_checks: "{{ new_dd.datadog_checks }}"
    - debug:
        var: datadog_checks.mysql[0].host
The second debug should print 'moo.moo', in the same way the first debug does.
Instead, I get {{ env_name }}.moo
as the output.
❯ ansible-playbook ./test.yaml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [localhost] *****************************************************************************************************************************************************************
TASK [Gathering Facts] ***********************************************************************************************************************************************************
ok: [localhost]
TASK [debug] *********************************************************************************************************************************************************************
ok: [localhost] => {
"datadog_checks.mysql[0].host": "moo.moo"
}
TASK [update fact] ***************************************************************************************************************************************************************
changed: [localhost]
TASK [replace the datadog_checks array] ******************************************************************************************************************************************
ok: [localhost]
TASK [debug] *********************************************************************************************************************************************************************
ok: [localhost] => {
"datadog_checks.mysql[0].host": "{{ env_name }}.moo"
}
PLAY RECAP ***********************************************************************************************************************************************************************
localhost : ok=5 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Hi.
If the point of the ipaddr filters is to parse/modify valid values and return False
(or discard them in lists) for invalid ones, then why do some invalid values produce warnings (and even errors)?
The doc contains an ''
example which is shown to be discarded silently, but in actuality it produces a warning.
Chaining multiple ipaddr-family and other filters becomes hard, as one needs to take care to manually filter out anything that might break the chain, instead of just relying on the ipaddr filter to parse what is parseable. Even chaining two ipaddr filters together will result in a warning if the value does not pass through the first, because False
is now invalid.
This behavior was introduced at some point, but why? And what is the plan for the "breaking change in future"?
Just placing this before singular values in ipaddr filters could simplify a lot of stuff:
if isinstance(value, (int, str)) and value:
    # proceed with filtering
    pass
else:
    return False
ipaddr
ansible [core 2.14.2]
config file = None
configured module search path = ['/home/username/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
ansible collection location = /home/username/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.11.1 (main, Dec 31 2022, 10:23:59) [GCC 12.2.0] (/usr/bin/python3)
jinja version = 3.0.3
libyaml = True
ansible.utils 2.8.0
CONFIG_FILE() = None
Debian testing
---
- hosts: localhost
  gather_facts: no
  vars:
    potential_ips:
      - 127.0.0.1
      - 192.168.0.0/24
      - 192.168.0.14/24
      - 8.8.8.8
      - 8.8.8.0/27
      - ''
      - some string
      - []
      - yes
      - no
      - 0
      - - 1
        - 2
        - noip
        - ''
        - False
      - 1
      #- 1.12
      #- {key: value}
  tasks:
    - name: ipaddr per item
      debug:
        msg: |-
          {{ item }} => {{ item | ansible.utils.ipaddr }}
      loop: "{{ potential_ips }}"
    - name: ipaddr on list
      debug:
        msg: |-
          {{ potential_ips | ansible.utils.ipaddr }}
Everything invalid is silently discarded; ints and valid strings are parsed.
TASK [ipaddr per item] *********************************************************
ok: [localhost] => (item=127.0.0.1) => {
"msg": "127.0.0.1 => 127.0.0.1"
}
ok: [localhost] => (item=192.168.0.0/24) => {
"msg": "192.168.0.0/24 => 192.168.0.0/24"
}
ok: [localhost] => (item=192.168.0.14/24) => {
"msg": "192.168.0.14/24 => 192.168.0.14/24"
}
ok: [localhost] => (item=8.8.8.8) => {
"msg": "8.8.8.8 => 8.8.8.8"
}
ok: [localhost] => (item=8.8.8.0/27) => {
"msg": "8.8.8.0/27 => 8.8.8.0/27"
}
ok: [localhost] => (item=) => {
"msg": " => False"
}
ok: [localhost] => (item=some string) => {
"msg": "some string => False"
}
ok: [localhost] => (item=[]) => {
"msg": "[] => []"
}
ok: [localhost] => (item=True) => {
"msg": "True => False"
}
ok: [localhost] => (item=False) => {
"msg": "False => False"
}
ok: [localhost] => (item=0) => {
"msg": "0 => False"
}
ok: [localhost] => (item=[1, 2, 'noip', '', False]) => {
"msg": "[1, 2, 'noip', '', False] => ['0.0.0.1', '0.0.0.2']"
}
ok: [localhost] => (item=1) => {
"msg": "1 => 0.0.0.1"
}
ok: [localhost] => (item=1) => {
"msg": "1.12 => False"
}
ok: [localhost] => (item=1) => {
"msg": "{key: value} => False"
}
TASK [ipaddr on list] **********************************************************
ok: [localhost] => {
"msg": [
"127.0.0.1",
"192.168.0.0/24",
"192.168.0.14/24",
"8.8.8.8",
"8.8.8.0/27",
[
"0.0.0.1",
"0.0.0.2"
],
"0.0.0.1"
]
}
TASK [ipaddr per item] *********************************************************
ok: [localhost] => (item=127.0.0.1) => {
"msg": "127.0.0.1 => 127.0.0.1"
}
ok: [localhost] => (item=192.168.0.0/24) => {
"msg": "192.168.0.0/24 => 192.168.0.0/24"
}
ok: [localhost] => (item=192.168.0.14/24) => {
"msg": "192.168.0.14/24 => 192.168.0.14/24"
}
ok: [localhost] => (item=8.8.8.8) => {
"msg": "8.8.8.8 => 8.8.8.8"
}
ok: [localhost] => (item=8.8.8.0/27) => {
"msg": "8.8.8.0/27 => 8.8.8.0/27"
}
[WARNING]: The value '' is not a valid IP address or network, passing this
value to ipaddr filter might result in breaking change in future.
ok: [localhost] => (item=) => {
"msg": " => False"
}
ok: [localhost] => (item=some string) => {
"msg": "some string => False"
}
ok: [localhost] => (item=[]) => {
"msg": "[] => []"
}
[WARNING]: The value 'True' is not a valid IP address or network, passing this
value to ipaddr filter might result in breaking change in future.
ok: [localhost] => (item=True) => {
"msg": "True => False"
}
[WARNING]: The value 'False' is not a valid IP address or network, passing this
value to ipaddr filter might result in breaking change in future.
ok: [localhost] => (item=False) => {
"msg": "False => False"
}
[WARNING]: The value '0' is not a valid IP address or network, passing this
value to ipaddr filter might result in breaking change in future.
ok: [localhost] => (item=0) => {
"msg": "0 => False"
}
ok: [localhost] => (item=[1, 2]) => {
"msg": "[1, 2] => ['0.0.0.1', '0.0.0.2']"
}
ok: [localhost] => (item=1) => {
"msg": "1 => 0.0.0.1"
}
TASK [ipaddr on list] **********************************************************
[WARNING]: The value '' is not a valid IP address or network, passing this
value to ipaddr filter might result in breaking change in future.
[WARNING]: The value 'True' is not a valid IP address or network, passing this
value to ipaddr filter might result in breaking change in future.
[WARNING]: The value 'False' is not a valid IP address or network, passing this
value to ipaddr filter might result in breaking change in future.
[WARNING]: The value '0' is not a valid IP address or network, passing this
value to ipaddr filter might result in breaking change in future.
ok: [localhost] => {
"msg": [
"127.0.0.1",
"192.168.0.0/24",
"192.168.0.14/24",
"8.8.8.8",
"8.8.8.0/27",
[
"0.0.0.1",
"0.0.0.2"
],
"0.0.0.1"
]
}
If the float or dict values are uncommented, it results in an error.
fatal: [localhost]: FAILED! => {"msg": "Unrecognized type <<class 'float'>> for ipaddr filter <value>"}
fatal: [localhost]: FAILED! => {"msg": "Unrecognized type <<class 'dict'>> for ipaddr filter <value>"}
to_xml defaults to indenting XML documents with tabs, as xmltodict does.
The xmltodict library's unparse
function supports an indent
parameter that can be used to specify a different indent character (for example, spaces).
ansible.utils.to_xml
The plugin should have two additional parameters:
The parameters have to be translated before being passed to the unparse
function, since that function only supports a single indent parameter specifying the string to use for indentation.
- name: Define JSON data
  ansible.builtin.set_fact:
    data:
      "interface-configurations":
        "@xmlns": "http://cisco.com/ns/yang/Cisco-IOS-XR-ifmgr-cfg"
        "interface-configuration":
- debug:
    msg: "{{ data | ansible.utils.to_xml(indent='space', indent_width=2) }}"
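A sketch of how the two proposed parameters could be collapsed into the single `indent` string that xmltodict's `unparse` accepts (the helper name `to_xml_with_indent` is hypothetical; `pretty` and `indent` are documented `unparse` keywords):

```python
import xmltodict

# Hypothetical helper: translate the proposed indent/indent_width options
# into the one `indent` string that xmltodict.unparse understands.
def to_xml_with_indent(data, indent="tabs", indent_width=4):
    char = " " if indent == "space" else "\t"
    return xmltodict.unparse(data, pretty=True, indent=char * indent_width)

xml = to_xml_with_indent({"a": {"b": "x"}}, indent="space", indent_width=2)
# child elements end up indented with two spaces
```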
If you are open to adding this feature, I am available to make a pull request for this.
The README.md file (https://github.com/ansible-collections/ansible.utils/blame/main/README.md#L140) states:
Use of the latest version of black is required for formatting (black -l79)
However:
- black in the zuul CI appears to be pinned to an older version, 19.3b0.
- While black does have stability guarantees, some changes are expected across versions. Here is the black stability policy: https://black.readthedocs.io/en/latest/the_black_code_style/index.html#stability-policy
- Running black -l79 in a local workspace does not match the formatting by black in the zuul CI.
It might be useful to run the zuul CI with the --diff
option instead of the --check
option. That would make it easier to identify formatting differences between CI and the local workspace.
black (python formatting)
When the ansible-tox-linters
job runs in CI, the following output may be generated. However, we can't tell what would change. This makes it challenging to identify specific differences between black
running in CI vs local workspace.
would reformat /home/zuul/src/github.com/ansible-collections/ansible.utils/tests/unit/plugins/filter/test_ipsubnet.py
would reformat /home/zuul/src/github.com/ansible-collections/ansible.utils/tests/unit/plugins/sub_plugins/validate/test_config.py
All done! 💥 💔 💥
2 files would be reformatted, 163 files would be left unchanged.
ERROR: InvocationError for command /home/zuul/src/github.com/ansible-collections/ansible.utils/.tox/linters/bin/black -v -l79 --check . (exited with code 1)
I'm trying to figure out if the issue is caused by the version of black or by configuration. The formatting diffs are for function documentation. Example:
https://github.com/ansible-collections/ansible.utils/pull/131/files#diff-21f3ea4214265402c6b1447e96218830431ff1090545d358a4df63d425e095acL692
- """ Check if string is a HW/MAC address and filter it """
+ """Check if string is a HW/MAC address and filter it"""
In the current utils implementation, this method: https://github.com/ansible-collections/ansible.utils/blob/5fe1d93eb0f07051fc119bf24ef407cad559ea1b/plugins/plugin_utils/base/ipaddress_utils.py#[…]6 is restricted to the ipaddress utility. We can make it generic so that it can be used across the utils collection.
ipaddress_utils.py
On 31 Mar 2023, ansible-navigator 3.0.0 was released which dropped support for python 3.8.
This impacts the integration testing matrix that currently includes ansible 2.9 and python 3.8.
An additional workflow was added to work around this issue until such time the decision is made to change the main workflow.
The above workflow can be used as a workaround but does not run integration tests with ansible 2.9 or python 3.8 which may or may not be required.
The fact_diff module is great for comparing facts, but I often use the assert module to compare vars. It would be useful to be able to use a fact_diff-type function in the fail message of an assert task.
fact_diff
When the ipaddr
filter moved from ansible.netcommon
to ansible.utils
it changed behaviour and breaks downstream usage.
ipaddr
For more information see ansible-collections/ansible.netcommon#375
Based on the community decision to use true/false
for boolean values in documentation and examples, we ask that you evaluate booleans in this collection and consider changing any that do not use true/false
(lowercase).
See documentation block format for more info (specifically, option defaults).
If you have already implemented this or decide not to, feel free to close this issue.
P.S. This is auto-generated issue, please raise any concerns here
Add xml_to_json
filter plugin to convert a xml
string to json
structure using xmltodict
as default parsing engine.
{{ some_xml_string_variable | xml_to_json(engine='xmltodict') }}
Add json_to_xml
filter plugin to convert a json
string to xml
structure using xmltodict
as default parsing engine.
{{ some_json_string_variable | json_to_xml(engine='xmltodict') }}
As we are migrating the ipaddr filters from netcommon to utils,
we need to update the https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters_ipaddr.html documentation.
The "2600:1f1c:1b3:8f00::/56" | ipsubnet(120, 0)
filter never returns. Internally, the filter invokes the netaddr.subnet
function, which attempts to create 18446744073709551616 values.
The ipsubnet
filter creates an IPNetwork with "2600:1f1c:1b3:8f00::/56" and prefix length 120.
The ipsubnet
filter invokes the netaddr.subnet
function at https://github.com/ansible-collections/ansible.utils/blob/main/plugins/filter/ipsubnet.py#L283
return str(list(value.subnet(query))[index])
In the netaddr
module, the subnet() function calculates the max number of subnets:
width = self._module.width
max_subnets = 2 ** (width - self.prefixlen) // 2 ** (width - prefixlen)
If the network prefix is /56 and the prefix length is /120, then max_subnets = 18446744073709551616.
In the worst case, the network address is /0 and the prefix length is 128, then max_subnets = 2 ^ 128 // 2 ^ 0, which is 2 ^ 128. If the count
argument is set to 0, this will cause unbounded CPU usage and memory allocation.
So netaddr.subnet
never returns.
ipsubnet filter
I have pasted the output below, but the problem is reproduced when running unit tests in the main
branch of ansible.utils
module as of 01/24/2022.
ansible [core 2.12.1]
config file = None
configured module search path = ['/home/serosset/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/serosset/.local/lib/python3.8/site-packages/ansible
ansible collection location = /home/serosset/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/rh/rh-python38/root/usr/local/bin/ansible
python version = 3.8.11 (default, Sep 1 2021, 12:33:46) [GCC 9.3.1 20200408 (Red Hat 9.3.1-2)]
jinja version = 3.0.3
libyaml = True
Because the problem is exposed when running the test_ipsubnet.py unit tests inside a docker image, the applicable collections and configuration are what's inside the docker container. I'm not sure how to get that information. But see the summary; I think the problem can be root-caused to a specific line in the ipsubnet
function in the main branch.
ansible-test units --python=3.10 --docker -vvvv tests/unit/plugins/filter/test_ipsubnet.py
Check out the code from #131, then run the following command:
# ansible-test units --python=3.10 --docker -vvvv tests/unit/plugins/filter/test_ipsubnet.py
The following filter never completes: "2600:1f1c:1b3:8f00::/56" | ipsubnet(120, 0)
The problem gets worse when the network prefix has a smaller value and the first argument of ipsubnet
has a higher value. In the worst case, this would cause 2 ^ 128 iterations.
"2600:1f1c:1b3:8f00::/56" | ipsubnet(120, 0)
should return 2600:1f1c:1b3:8f00::/120
"2600:1f1c:1b3:8f00::/56" | ipsubnet(120, 4)
should return 2600:1f1c:1b3:8f00::400/120
This could be calculated in constant time. Instead, the ipsubnet
filter attempts to create all possible subnets and then gets the subnet at index X. However, the list of all subnets could contain 2 ^ 128 entries in the worst case.
As a result, the ipsubnet
filter iterates through a very large number of elements and never returns.
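The constant-time calculation can be sketched with the stdlib ipaddress module (the helper name is hypothetical; the real filter uses netaddr):

```python
import ipaddress

# Constant-time sketch: compute the index-th /new_prefix subnet
# arithmetically instead of materialising all
# 2 ** (new_prefix - prefixlen) subnets.
def ipsubnet_at_index(network, new_prefix, index):
    net = ipaddress.ip_network(network)
    step = 2 ** (net.max_prefixlen - new_prefix)  # addresses per subnet
    base = int(net.network_address) + index * step
    return str(ipaddress.ip_network((base, new_prefix)))

ipsubnet_at_index("2600:1f1c:1b3:8f00::/56", 120, 0)  # '2600:1f1c:1b3:8f00::/120'
ipsubnet_at_index("2600:1f1c:1b3:8f00::/56", 120, 4)  # '2600:1f1c:1b3:8f00::400/120'
```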
$ ansible --version
ansible [core 2.13.1]
$ ansible-galaxy collection list ansible.utils
# /Users/akira/envs/a6/lib/python3.9/site-packages/ansible_collections
Collection Version
------------- -------
ansible.utils 2.6.1
$ ansible-config dump --only-changed
HOST_KEY_CHECKING(/Users/akira/Documents/ansible/ansible.cfg) = False
Cisco IOS 15.9(3)M3
The task is below:
- name: show ip ospf neighbor
  ansible.utils.cli_parse:
    command: show ip ospf neighbor
    parser:
      name: ansible.netcommon.ntc_templates
  register: result_show_ip_ospf_neighbor
  until:
    - (result_show_ip_ospf_neighbor.parsed | length) >= 2
    - result_show_ip_ospf_neighbor.parsed[1].state == 'FULL/DR'
Run the show ip ospf neighbor
command until the second ospf neighbor is in FULL
state.
When retrying with the until directive, the value of the register variable is not as intended, and the error 'dict object' has no attribute 'parsed'
occurs.
TASK [show ip ospf neighbor] **********************************************************************************************************************************
task path: /Users/akira/Documents/ansible/ios_test.yml:10
redirecting (type: connection) ansible.builtin.network_cli to ansible.netcommon.network_cli
redirecting (type: terminal) ansible.builtin.ios to cisco.ios.ios
redirecting (type: cliconf) ansible.builtin.ios to cisco.ios.ios
FAILED - RETRYING: [ios01]: show ip ospf neighbor (3 retries left).Result was: {
"attempts": 1,
"changed": false,
"parsed": [
{
"address": "10.1.1.1",
"dead_time": "00:00:32",
"interface": "GigabitEthernet0/3",
"neighbor_id": "192.168.1.105",
"priority": "1",
"state": "FULL/DR"
}
],
"retries": 4,
"stdout": "Neighbor ID Pri State Dead Time Address Interface\n192.168.1.105 1 FULL/DR 00:00:32 10.1.1.1 GigabitEthernet0/3",
"stdout_lines": [
"Neighbor ID Pri State Dead Time Address Interface",
"192.168.1.105 1 FULL/DR 00:00:32 10.1.1.1 GigabitEthernet0/3"
]
}
fatal: [ios01]: FAILED! => {
"msg": "The conditional check '(result_show_ip_ospf_neighbor.parsed | length) >= 2' failed. The error was: error while evaluating conditional ((result_show_ip_ospf_neighbor.parsed | length) >= 2): 'dict object' has no attribute 'parsed'"
}
When displaying the value of the register variable at the time of retry, it contained the error parameters are mutually exclusive: command|text
. But I used only command
.
- name: show ip ospf neighbor
  ansible.utils.cli_parse:
    command: show ip ospf neighbor
    parser:
      name: ansible.netcommon.ntc_templates
  register: result_show_ip_ospf_neighbor
  until:
    - false # for debug
The result is below:
...omitted...
FAILED - RETRYING: [ios01]: show ip ospf neighbor (2 retries left).Result was: {
"attempts": 2,
"changed": false,
"errors": [
"parameters are mutually exclusive: command|text"
],
"msg": "argspec validation failed for cli_parse module plugin",
"retries": 4
}
...omitted...
ansible.utils shows errors on docs.ansible.com that prevent the plugin documentation from displaying for users.
2.14
I was surprised that there is no filter for getting expanded ipv6 addresses in ansible.utils
collection.
I would like to get something as:
# {{ '::1' | ansible.utils.ipv6('expand') }}
'0000:0000:0000:0000:0000:0000:0000:0001'
And for ipv6 addresses compression:
# {{ '0000:0000:0000:0000:0000:0000:0000:0001' | ansible.utils.ipv6('compress') }}
'::1'
Also, I think it would be great to have a filter that converts an ipv6 address to x509 notation:
# {{ '::1' | ansible.utils.ipv6('x509') }}
'0:0:0:0:0:0:0:1'
When the subjectAltName extension contains an iPAddress, the address MUST be stored in the octet string in "network byte order", as specified in [RFC791]. The least significant bit (LSB) of each octet is the LSB of the corresponding byte in the network address. For IP version 4, as specified in [RFC791], the octet string MUST contain exactly four octets. For IP version 6, as specified in [RFC2460], the octet string MUST contain exactly sixteen octets.
IPv6 addresses must be in the form
x:x:x:x:x:x:x:x
, exactly 8 groups of one to four hexadecimal digits separated by colons. The compressed zero form using "::" is not accepted.
I think it would be like ansible.utils.ip4_hex
filter or ansible.utils.ipwrap
wrapper.
It can be also implemented as separate module, e.g. ansible.utils.ip6_form
:
ansible.utils.ip6_form('compress')
ansible.utils.ip6_form('expand')
ansible.utils.ip6_form('x509')
In general, for processing ipv6 addresses in different forms with other filters. In particular, x509 notation filter for ipv6 will be used for self-signed certificates. Getting expanded ipv6 using regular expressions is a very complicated and unreliable way.
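All three forms can be derived with Python's stdlib ipaddress module, which such a filter could wrap (a sketch; the join expression for the x509 form is my own construction based on the RFC text above):

```python
import ipaddress

addr = ipaddress.IPv6Address("::1")

expanded = addr.exploded    # '0000:0000:0000:0000:0000:0000:0000:0001'
compressed = ipaddress.IPv6Address(expanded).compressed  # '::1'

# x509 form: exactly 8 groups, leading zeros stripped, no "::" compression
x509 = ":".join(g.lstrip("0") or "0" for g in addr.exploded.split(":"))
# '0:0:0:0:0:0:0:1'
```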
Example of use:
- community.crypto.openssl_csr_pipe:
    privatekey_path: "{{ privkey.filename }}"
    common_name: "{{ ansible_fqdn }}"
    subject_alt_name:
      - "IP:{{ ansible_default_ipv4.address }}"
      - "IP:{{ ansible_default_ipv6.address | ansible.utils.ip6_form('x509') }}"
When using to_xml, it always returns the XML declaration, which is only needed once in a file.
Here is an example showing the problem.
server.xml.j2
<?xml version="1.0" encoding="UTF-8"?>
<!-- {{ ansible_managed }} -->
<!-- Stop! Contact your administrator if any changes to this file are needed!! -->
<Server port="{{ instance.server.shutdown_port }}">
{% for listener in (instance.server.listensers | default(tomcat_server_listeners)) %}
{{ listener | ansible.utils.to_xml }}
{% endfor %}
{% for gnr in (instance.server.gnrs | default(tomcat_server_gnrs)) %}
{{ gnr | ansible.utils.to_xml }}
{% endfor %}
<Service name="Catalina">
{% for executor in (instance.server.executors | default(tomcat_server_executors)) %}
{{ executor | ansible.utils.to_xml }}
{% endfor %}
{% for connector in instance.server.connectors %}
{{ connector | ansible.utils.to_xml }}
{% endfor %}
<Engine name="Catalina" defaultHost="{{ instance.server.default_host | default(tomcat_server_default_host) }}">
{% for realm in (instance.server.realms | default(tomcat_server_realms)) %}
{{ realm | ansible.utils.to_xml }}
{% endfor %}
{% for host in (instance.server.hosts | default(tomcat_server_hosts)) %}
{{ host | ansible.utils.to_xml }}
{% endfor %}
</Engine>
</Service>
</Server>
server.xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- This file is managed by Ansible! All changes will be lost!! -->
<!-- Stop! Contact your administrator if any changes to this file are needed!! -->
<Server port="8081">
<?xml version="1.0" encoding="utf-8"?>
<Listener className="org.apache.catalina.startup.VersionLoggerListener"></Listener>
<?xml version="1.0" encoding="utf-8"?>
<Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on"></Listener>
<?xml version="1.0" encoding="utf-8"?>
<Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener"></Listener>
<?xml version="1.0" encoding="utf-8"?>
<Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener"></Listener>
<?xml version="1.0" encoding="utf-8"?>
<Listener className="org.apache.catalina.core.ThreadLocalLeakPreventionListener"></Listener>
<?xml version="1.0" encoding="utf-8"?>
<GlobalNamingResources>
<Resource name="UserDatabase" auth="Container" type="org.apache.catalina.UserDatabase" description="User database that can be updated and saved" factory="org.apache.catalina.users.MemoryUserDatabaseFactory" pathname="conf/tomcat-users.xml"></Resource>
</GlobalNamingResources>
<Service name="Catalina">
<?xml version="1.0" encoding="utf-8"?>
<Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443"></Connector>
<Engine name="Catalina" defaultHost="localhost">
<?xml version="1.0" encoding="utf-8"?>
<Realm className="org.apache.catalina.realm.LockOutRealm">
<Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase">
<CredentialHandler className="org.apache.catalina.realm.SecretKeyCredentialHandler" algorithm="PBKDF2WithHmacSHA512" iterations="100000" keyLength="256" saltLength="16"></CredentialHandler>
</Realm>
</Realm>
<?xml version="1.0" encoding="utf-8"?>
<Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true">
<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs" prefix="localhost_access_log." suffix=".txt" pattern='%h %l %u %t "%r" %s %b'></Valve>
</Host>
</Engine>
</Service>
</Server>
Indentation issues aside, to_xml when used as a filter should not emit <?xml version="1.0" encoding="UTF-8"?> before every element, or should at least offer an option to disable it.
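Until such an option exists, the repeated declarations can be stripped in post-processing. A minimal sketch using only the standard library (the variable names are illustrative, not part of ansible.utils):

```python
import re

# Sample of the filter's output, with a declaration before every element
raw = (
    '<?xml version="1.0" encoding="utf-8"?>\n'
    '<Server port="8081">\n'
    '<?xml version="1.0" encoding="utf-8"?>\n'
    '<Listener className="org.apache.catalina.startup.VersionLoggerListener"></Listener>\n'
    '</Server>\n'
)

# Drop every XML declaration, then restore a single one at the top
body = re.sub(r'<\?xml[^>]*\?>\n?', '', raw)
cleaned = '<?xml version="1.0" encoding="utf-8"?>\n' + body
```

This is a workaround for consumers of the filter output, not a fix for the filter itself.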
With the changes from PR149, ansible.utils raises the base AnsibleError for invalid input values instead of AnsibleFilterError. Where possible, the more specific error should be raised. An example of the impact can be seen with ansible-lint, where the AnsibleError triggers the jinja[invalid] linting rule. If the filters raised AnsibleFilterError instead, ansible-lint could see that the problem lies with the filter, not with the template itself.
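For reference, the pattern the report asks for looks roughly like this inside a filter plugin. In real plugin code AnsibleFilterError is imported from ansible.errors; a stand-in class is defined here so the sketch runs without Ansible installed, and the filter name is made up for illustration:

```python
# In a real plugin this would be:
#   from ansible.errors import AnsibleFilterError
# A stand-in is defined so the sketch runs stand-alone.
class AnsibleFilterError(Exception):
    pass


def strict_ipaddr(value):
    # Raising the filter-specific error lets tooling such as ansible-lint
    # attribute the failure to the filter rather than to the template.
    if not isinstance(value, str):
        raise AnsibleFilterError(
            f"Unrecognized type <{type(value)}> for ipaddr filter <value>"
        )
    return value


class FilterModule:
    """Entry point Ansible uses to discover filters in a plugin file."""

    def filters(self):
        return {"strict_ipaddr": strict_ipaddr}
```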
ansible.utils filters
ansible [core 2.13.4]
config file = None
configured module search path = ['/home/harm/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/harm/.virtualenvs/ansible-tmp/lib/python3.10/site-packages/ansible
ansible collection location = /home/harm/.ansible/collections:/usr/share/ansible/collections
executable location = /home/harm/.virtualenvs/ansible-tmp/bin/ansible
python version = 3.10.4 (main, Jun 29 2022, 12:14:53) [GCC 11.2.0]
jinja version = 3.1.2
libyaml = True
ansible-lint 6.7.0 using ansible 2.13.4
Collection Version
------------- -------
ansible.utils 2.6.1
Distributor ID: Ubuntu
Description: Ubuntu 22.04.1 LTS
Release: 22.04
Codename: jammy
$ pip install ansible ansible-lint netaddr
$ ansible-galaxy collection install ansible.utils
```yaml
---
- name: Play
hosts: localhost
gather_facts: false
tasks:
# This PASS
- name: Debug
ansible.builtin.debug:
msg: "{{ '127.0.0.1' | ansible.utils.ipaddr }}"
# This FAILS
- name: Debug
ansible.builtin.debug:
msg: "{{ somevar | ansible.utils.ipaddr }}"
vars:
somevar: '127.0.0.1'
...
$ ansible-lint play.yml
WARNING Overriding detected file kind 'yaml' with 'playbook' for given positional argument: play.yml
WARNING Listing 1 violation(s) that are fatal
jinja: Unrecognized type <<class 'ansible.template.AnsibleUndefined'>> for ipaddr filter <value> (jinja[invalid])
play.yml:13 Task/Handler: Debug
You can skip specific rules or tags by adding them to your configuration file:
# .config/ansible-lint.yml
warn_list: # or 'skip_list' to silence them completely
- jinja[invalid] # Rule that looks inside jinja2 templates.
Finished with 1 failure(s), 0 warning(s) on 1 files.
After updating to Ansible community 5.6.0, the ipaddr
filter gives an error in one of our playbooks. The error did not occur in 5.5.0 and, per semver rules, 5.6.0 should be backwards compatible.
Originally reported in ansible/ansible#77480; was told to submit here instead.
ipaddr
$ ansible --version
ansible [core 2.12.4]
config file = None
configured module search path = ['/Users/riendeau/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/riendeau/venvs/ansible5x/lib/python3.9/site-packages/ansible
ansible collection location = /Users/riendeau/.ansible/collections:/usr/share/ansible/collections
executable location = /Users/riendeau/venvs/ansible5x/bin/ansible
python version = 3.9.10 (v3.9.10:f2f3f53782, Jan 13 2022, 17:02:14) [Clang 6.0 (clang-600.0.57)]
jinja version = 3.0.3
libyaml = True
$ ansible-galaxy collection list ansible.utils
# /Users/riendeau/venvs/ansible5x/lib/python3.9/site-packages/ansible_collections
Collection Version
------------- -------
ansible.utils 2.5.2
$ ansible-config dump --only-changed
COLOR_DEBUG(env: ANSIBLE_COLOR_DEBUG) = bright gray
Mac OS
---
- name: Test playbook
hosts: localhost
tasks:
- name: Set fact 1
set_fact:
ip1: "172.20.0.1"
- name: Set fact 2
set_fact:
ip2: "{{ ((ip1 | ipaddr('int')) + 6) | ipaddr }}"
- debug: msg="ip2={{ ip2 }}"
Expected behavior seen in Ansible community 5.5.0:
$ ansible-playbook ~/Desktop/ipaddr-filter-test.yml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Test playbook] *********************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] *******************************************************************************************************************************************************************************************************************
ok: [localhost]
TASK [Set fact 1] ************************************************************************************************************************************************************************************************************************
ok: [localhost]
TASK [Set fact 2] ************************************************************************************************************************************************************************************************************************
ok: [localhost]
TASK [debug] *****************************************************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "ip2=172.20.0.7"
}
PLAY RECAP *******************************************************************************************************************************************************************************************************************************
localhost : ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
$ ansible-playbook ~/Desktop/ipaddr-filter-test.yml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Test playbook] *********************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] *******************************************************************************************************************************************************************************************************************
ok: [localhost]
TASK [Set fact 1] ************************************************************************************************************************************************************************************************************************
ok: [localhost]
TASK [Set fact 2] ************************************************************************************************************************************************************************************************************************
[DEPRECATION WARNING]: Use 'ansible.utils.ipaddr' module instead. This feature will be removed from ansible.netcommon in a release after 2024-01-01. Deprecation warnings can be disabled by setting deprecation_warnings=False in
ansible.cfg.
fatal: [localhost]: FAILED! => {"msg": "Unrecognized type <<class 'int'>> for ipaddr filter <value>"}
PLAY RECAP *******************************************************************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
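The arithmetic the failing task performs can be reproduced with Python's standard-library ipaddress module, which offers a reasonable workaround while the filter rejects int input (variable names mirror the playbook above):

```python
import ipaddress

ip1 = "172.20.0.1"
# ipaddr('int') converts an address to its integer form; adding 6 and
# converting back mirrors the "Set fact 2" task without the filter chain.
ip2 = str(ipaddress.ip_address(int(ipaddress.ip_address(ip1)) + 6))
# ip2 is "172.20.0.7"
```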