paloaltonetworks / pan-os-ansible

Ansible collection for easy automation of Palo Alto Networks next generation firewalls and Panorama, in both physical and virtual form factors.

Home Page: https://pan.dev/ansible/docs/panos

License: Apache License 2.0


pan-os-ansible's Introduction

PAN-OS Ansible Collection


An Ansible collection that automates configuration and operational tasks on Palo Alto Networks Next-Generation Firewalls, in both physical and virtualized form factors, using the PAN-OS API.

Tested Ansible Versions

This collection is tested with the most current Ansible releases. Ansible versions before 2.14 are not supported.

Python Version

The minimum Python version for this collection is Python 3.9.

Installation

Install this collection using the Ansible Galaxy CLI:

ansible-galaxy collection install paloaltonetworks.panos
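Alternatively, the collection can be pinned in a requirements.yml file for reproducible installs (the version constraint below is only illustrative):

  collections:
    - name: paloaltonetworks.panos
      version: ">=2.12.2"

Then install it with:

ansible-galaxy collection install -r requirements.yml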

Usage

Refer to modules by their fully qualified collection name (FQCN):

  tasks:
    - name: Get the system info
      paloaltonetworks.panos.panos_op:
        provider: '{{ device }}'
        cmd: 'show system info'
      register: res

    - name: Show the system info
      ansible.builtin.debug:
        msg: '{{ res.stdout }}'

(Note that use of the collections key at the play level is now discouraged; prefer the FQCN as shown above.)
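The device provider value referenced in the tasks above is a dictionary of connection details. A minimal sketch with placeholder values (the same keys appear in the module argument dumps later on this page; panos_password is assumed to be defined elsewhere, e.g. in a vault):

  vars:
    device:
      ip_address: '192.0.2.1'
      username: 'admin'
      password: "{{ panos_password }}"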

Releasing, changelogs, versioning and deprecation

There is currently no set release frequency for major and minor versions. Patch versions have no planned frequency; they are released only to fix issues or address security concerns.

Changelog details are created automatically; the most recent changes can be found here, and the full history is here.

Semantic versioning is adhered to for this project.

Deprecations are done by version number, not by date or by age of release. Breaking change deprecations will only be made with major versions.

Support

As of version 2.12.2, this Collection of Ansible Modules for PAN-OS is certified on Ansible Automation Hub and officially supported for Ansible subscribers. Ansible subscribers can engage for support through their usual route towards Red Hat.

For those who are not Ansible subscribers, this Collection of Ansible Modules is also published on Ansible Galaxy to be freely used under an as-is, best-effort support policy. These modules should be seen as community supported, and Palo Alto Networks will contribute its expertise as and when possible. We do not provide technical support or help in using or troubleshooting the components of the project through our normal support options, such as Palo Alto Networks support teams or ASC (Authorized Support Centers) partners and backline support options. The underlying product used by the scripts or templates (the VM-Series firewall) is still supported, but only for product functionality, not for help in deploying or using the template or script itself.

Unless explicitly tagged, all projects or work posted in our GitHub repository (at https://github.com/PaloAltoNetworks) or sites other than our official Downloads page on https://support.paloaltonetworks.com are provided under the best effort policy.

pan-os-ansible's People

Contributors

acelebanski, alperenkose, barloff-st, bvaradinov-c, dapperdeer, dependabot[bot], fosix, itdependsnetworks, itsamemarkus, jamesholland-uk, jonasreineke, mgollo, michalbil, mrichardson03, nembery, nothing4you, ntwrkguru, patrikkaren, pmalinen, rbcollins123, refaelazi, ryanmerolle, sebastianczech, semantic-release-bot, shinmog, shourai, smigii, spacebjorn, stealthllama, undodelete


pan-os-ansible's Issues

Layer3 Subinterface creation throws an error but creates configuration

Describe the bug

When I execute this playbook extract on Panorama v9.1.2:

# Add a subinterface
    - name: add subinterface to template
      panos_l3_subinterface:
          provider: '{{ provider }}'
          template: 'Test_Template'
          name: 'ae1.1234'
          zone_name: 'Trust'
          tag: 1234 
          ip: '10.10.10.1/24'
          vr_name: 'Router1'
          enable_dhcp: false

I get the following error back, but the creation succeeds:

fatal: [panorama]: FAILED! => {"changed": false, "msg": "Failed setref: layer3 'ae1.1234' is not a valid reference"}

Expected behavior

The module should not throw an error when the creation succeeds.

Current behavior

Throws the error above.

Possible solution

I think this is because the module is trying to set a 'layer3' reference on the subinterface, which is not required when configuring via the GUI.

Steps to reproduce

As above.

Screenshots

None

panos_ha fails, list index out of range

Describe the bug
When the playbook attempts to run the panos_ha task, it fails with "list index out of range".

Expected behavior
It should be able to complete as expected

Current behavior
When executing the playbook, other panos modules execute correctly; only the panos_ha module fails.

Possible solution
Unknown

Steps to reproduce
Execute Playbook

Relevant Playbook Configuration:
- name: set ports to HA mode
  panos_interface:
    provider: '{{ provider }}'
    if_name: "{{ item }}"
    mode: "ha"
    enable_dhcp: false
    commit: false
  with_items:
    - "ethernet1/1"
    - "ethernet1/2"

- name: Configure Active/Standby HA
  panos_ha:
    provider: "{{ provider }}"
    state: present
    ha_peer_ip: "10.1.1.6"
    ha_peer_ip_backup: "10.1.2.6"
    ha1_ip_address: "10.1.1.5"
    ha1_netmask: "255.255.255.252"
    ha1_port: "ha1-a"
    ha1b_ip_address: "10.1.1.5"
    ha1b_netmask: "255.255.255.252"
    ha1b_port: "ha1-b"
    ha2_ip_address: "10.1.3.5"
    ha2_netmask: "255.255.255.252"
    ha2_port: "ethernet1/1"
    ha2b_ip_address: "10.1.4.5"
    ha2b_netmask: "255.255.255.252"
    ha2b_port: "ethernet1/2"

Error Message:
TASK [set ports to HA mode] *****************************************************************************************************************************************************************************************************************
ok: [-Redacted-] => (item=ethernet1/1)
ok: [-Redacted-] => (item=ethernet1/2)

TASK [Configure Active/Standby HA] **********************************************************************************************************************************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: IndexError: list index out of range
fatal: [M-PA-CLINSYS]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File "/root/.ansible/tmp/ansible-tmp-1593696616.73-40067-259423980504753/AnsiballZ_panos_ha.py", line 102, in \n _ansiballz_main()\n File "/root/.ansible/tmp/ansible-tmp-1593696616.73-40067-259423980504753/AnsiballZ_panos_ha.py", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File "/root/.ansible/tmp/ansible-tmp-1593696616.73-40067-259423980504753/AnsiballZ_panos_ha.py", line 40, in invoke_module\n runpy.run_module(mod_name='ansible_collections.paloaltonetworks.panos.plugins.modules.panos_ha', init_globals=None, run_name='main', alter_sys=True)\n File "/usr/lib/python2.7/runpy.py", line 188, in run_module\n fname, loader, pkg_name)\n File "/usr/lib/python2.7/runpy.py", line 82, in _run_module_code\n mod_name, mod_fname, mod_loader, pkg_name)\n File "/usr/lib/python2.7/runpy.py", line 72, in _run_code\n exec code in run_globals\n File "/tmp/ansible_panos_ha_payload_g89nNP/ansible_panos_ha_payload.zip/ansible_collections/paloaltonetworks/panos/plugins/modules/panos_ha.py", line 456, in \n File "/tmp/ansible_panos_ha_payload_g89nNP/ansible_panos_ha_payload.zip/ansible_collections/paloaltonetworks/panos/plugins/modules/panos_ha.py", line 443, in main\nIndexError: list index out of range\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}

Your Environment:
ansible 2.9.10
Device: PA-3260
Pan OS 9.0.4

Tech support file module

It would be a great enhancement to be able to pull tech-support files from a range of firewalls using Ansible, e.g. to collect the data basis for a Best Practice Assessment.

pan_ike_gateway - ikev2 only - still adds ikev1_crypto_profile 'default'

Describe the bug

When selecting version = ikev2, ikev1_crypto_profile is still added as 'default', since that parameter has a 'default' setting. Adding this IKEv1 crypto profile causes creation of the IKE gateway to error out on commit, because a 'default' IKEv1 crypto profile does not exist. This is also not apparent when looking directly at the GUI, because it will still say 'ikev2'.
Also, if version ikev2 is set, then no ikev1 variables should be given defaults either.

Expected behavior

Setting version to ikev2 only should disallow IKEv1 variables.

Current behavior

Setting version to ikev2 only still adds an IKEv1 crypto profile of "default".

Error on commit:
Details:
. Validation Error:
. network -> ike -> gateway -> IXXXXXXX -> protocol -> ikev1 -> ike-crypto-profile constraints failed : default crypto profile doesn't exist
. network -> ike -> gateway -> XXXXXX -> protocol -> ikev1 -> ike-crypto-profile is invalid

Possible solution

ikev1_crypto_profile should not have a default set and should be a manual setting

Steps to reproduce

  1. Push ansible config:
  - name: Add IKE gateway config to the firewall
    panos_ike_gateway:
      provider: '{{ pan_provider }}'
      state: 'present'
      name: 'Some Name'
      enable_passive_mode: False
      interface: 'ethernet1/1'
      ikev2_crypto_profile: 'IKE-some-name'
      peer_ip_value: '1.1.1.1'
      pre_shared_key: 'some key'
      template: 'panorama template'
      version: 'ikev2'
      commit: 'False'

Screenshots

Context

Trying to create and commit new IPSec tunnels and have to set ikev2-preferred to get this to work.

Your Environment

Standard 2 firewall setup with panorama

  • Version used: ansible 2.9.6
  • Environment name and version (e.g. Chrome 59, node.js 5.4, python 3.7.3): python version = 3.6.9
  • Operating System and version (desktop or mobile): Ubuntu bionic beaver
  • Link to your project:

Feature Request: generate statsdump file

It would be a great enhancement to be able to pull stats-dump files from a range of firewalls using Ansible, e.g. to collect the data basis for a Security Lifecycle Report.

Strange timestamp using panos_op

I'm having a strange issue when trying to use Ansible to get a list of all the unused rules from Panorama: the timestamps come out weird in the output. (I need the timestamps, since I will later delete the unused rules that were created 3+ months ago.)


  tasks:
    - name: Get hit count for rule
      paloaltonetworks.panos.panos_op:
#        provider: '{{ device }}'
        ip_address: 1.1.1.1
        api_key: '13371337'
        cmd:  |
          <show><rule-hit-count><device-group><entry name='fw01'><pre-rulebase><entry name='security'><rules><all/></rules></entry></pre-rulebase></entry></device-group></rule-hit-count></show>
        cmd_is_xml: True
      changed_when: false
      register: initial
    - debug:
        msg: "{{ (initial.stdout )}}"

When running the command from the CLI, it looks like this:

2020-08-06 09:37:56
<response status="success"><result>Rule Name                                                         Rule Usage          Rule Create Timestamp         Rule Modify Timestamp        
---------------------------------------------------------------------------------------------------------------------------------------------------------
133711510-32724-01                                             Partially Used      Mon Oct  7 16:48:14 2019      Mon Oct  7 16:48:14 2019     
test-userid                                                       Partially Used      Mon Oct  7 16:48:14 2019      Tue Jan 14 15:53:38 2020     
133711522-32724-02                                             Unused              Mon Oct  7 16:48:14 2019      Mon Oct  7 16:48:14 2019     
133711540-32724-08                                             Unused              Mon Oct  7 16:48:14 2019      Mon Oct  7 16:48:14 2019     
133711528-32724-03                                             Used                Mon Oct  7 16:48:14 2019      Mon Oct  7 16:48:14 2019     
133711542-32724-09                                             Unused              Mon Oct  7 16:48:14 2019      Mon Oct  7 16:48:14 2019     
133711534-32724-04                                             Used                Mon Oct  7 16:48:14 2019      Mon Oct  7 16:48:14 2019     
133711544-32724-10                                             Unused              Mon Oct  7 16:48:14 2019      Mon Oct  7 16:48:14 2019     
133711535-32724-05                                             Unused              Mon Oct  7 16:48:14 2019      Mon Oct  7 16:48:14 2019     
133711547-32724-11                                             Used                Mon Oct  7 16:48:14 2019      Mon Oct  7 16:48:14 2019     
133711536-32724-06                                             Used                Mon Oct  7 16:48:14 2019      Mon Oct  7 16:48:14 2019     
133711549-32724-12                                             Unused              Mon Oct  7 16:48:14 2019      Mon Oct  7 16:48:14 2019     
133711537-32724-07                                             Used                Mon Oct  7 16:48:14 2019      Mon Oct  7 16:48:14 2019     
133711551-32724-13                                             Used                Mon Oct  7 16:48:14 2019      Mon Oct  7 16:48:14 2019     
133711552-32724-14                                             Unused              Mon Oct  7 16:48:14 2019      Mon Oct  7 16:48:14 2019     
20190819                                                          Used                Mon Oct  7 16:48:14 2019      Tue Mar 31 14:32:54 2020     

However, the output from Ansible looks like this:


ication-timestamp>1570697945</rule-modification-timestamp></entry><entry name="20200127094600"><rule-state>Used</rule-state><all-connected>yes</all-connected><rule-creation-timestamp>1580114892</rule-creation-timestamp><rule-modification-timestamp>1580123975</rule-modification-timestamp></entry><entry name="20200107130000"><rule-state>Partial</rule-state><all-connected>yes</all-connected><rule-creation-timestamp>1578398026</rule-creation-timestamp><rule-modification-timestamp>1578406587</rule-modification-timestamp></entry><entry name="20190829151302"><rule-state>Used</rule-state><all-connected>yes</all-connected><rule-creation-timestamp>1570459694</rule-creation-timestamp><rule-modification-timestamp>1575033480</rule-modification-timestamp></entry><entry name="Any-any"><rule-state>Used</rule-state><all-connected>yes</all-connected><rule-creation-timestamp>1570459694</rule-creation-timestamp><rule-modification-timestamp>1570459694</rule-modification-timestamp></entry><entry
    name="global-deny"><rule-state>Used</rule-state><all-connected>yes</all-connected><rule-creation-timestamp>1570459694</rule-creation-timestamp><rule-modification-timestamp>1570459694</rule-modification-timestamp></entry></rules></entry></rule-base></entry></device-group></rule-hit-count></result></response>

TASK [debug] ********************************************************************************************************************************************

  msg:
    response:
      '@status': success
      result:
        rule-hit-count:
          device-group:
            entry:
              '@name': fw01
              rule-base:
                entry:
                  '@name': security
                  rules:
                    entry:
                    - '@name': 133711510-32724-01
                      all-connected: 'yes'
                      rule-creation-timestamp: '1570459694'
                      rule-modification-timestamp: '1570459694'
                      rule-state: Partial
                    - '@name': test-userid
                      all-connected: 'yes'
                      rule-creation-timestamp: '1570459694'
                      rule-modification-timestamp: '1579013618'
                      rule-state: Partial
                    - '@name': 133711522-32724-02
                      all-connected: 'yes'
                      rule-creation-timestamp: '1570459694'
                      rule-modification-timestamp: '1570459694'
                      rule-state: Unused
                    - '@name': 133711540-32724-08
                      all-connected: 'yes'
                      rule-creation-timestamp: '1570459694'
                      rule-modification-timestamp: '1570459694'
                      rule-state: Unused
                    - '@name': 133711528-32724-03
                      all-connected: 'yes'
                      rule-creation-timestamp: '1570459694'
                      rule-modification-timestamp: '1570459694'
                      rule-state: Used
                    - '@name': 133711542-32724-09
                      all-connected: 'yes'
                      rule-creation-timestamp: '1570459694'
                      rule-modification-timestamp: '1570459694'
                      rule-state: Unused
                    - '@name': 133711534-32724-04
                      all-connected: 'yes'
                      rule-creation-timestamp: '1570459694'
                      rule-modification-timestamp: '1570459694'
                      rule-state: Used
                    - '@name': 133711544-32724-10
                      all-connected: 'yes'
                      rule-creation-timestamp: '1570459694'
                      rule-modification-timestamp: '1570459694'
                      rule-state: Unused
                    - '@name': 133711535-32724-05
                      all-connected: 'yes'
                      rule-creation-timestamp: '1570459694'
                      rule-modification-timestamp: '1570459694'
                      rule-state: Unused
                    - '@name': 133711547-32724-11
                      all-connected: 'yes'
                      rule-creation-timestamp: '1570459694'
                      rule-modification-timestamp: '1570459694'
                      rule-state: Used
                    - '@name': 133711536-32724-06
                      all-connected: 'yes'
                      rule-creation-timestamp: '1570459694'
                      rule-modification-timestamp: '1570459694'
                      rule-state: Used
                    - '@name': 133711549-32724-12
                      all-connected: 'yes'
                      rule-creation-timestamp: '1570459694'
                      rule-modification-timestamp: '1570459694'
                      rule-state: Unused
                    - '@name': 133711537-32724-07
                      all-connected: 'yes'
                      rule-creation-timestamp: '1570459694'
                      rule-modification-timestamp: '1570459694'
                      rule-state: Used
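A possible workaround for readability, assuming the registered structure shown in the debug output above, is to convert the epoch values with Ansible's built-in strftime filter (depending on the module version, initial.stdout may first need an explicit from_json):

    - name: Show rule timestamps as readable dates
      ansible.builtin.debug:
        msg: "{{ item['@name'] }} modified {{ '%Y-%m-%d %H:%M:%S' | strftime(item['rule-modification-timestamp'] | int) }}"
      loop: "{{ initial.stdout.response.result['rule-hit-count']['device-group'].entry['rule-base'].entry.rules.entry }}"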

Change per-module commit behavior

Is your feature request related to a problem?

Most (but not all) of the modules include a commit parameter that defaults to true. This performs a commit on the target host each time a module is called. This is an anti-pattern, in that Ansible modules should be designed to perform one specific function. There is a separate panos_commit module that should be used in Ansible playbooks instead of per-module commits.

Describe the solution you'd like

Initially, change the default behavior of the commit parameter to false. Then deprecate that functionality out of the modules over time.

Describe alternatives you've considered

Use the panos_commit module in Ansible playbooks instead.
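A sketch of that pattern (the address object task is illustrative; module and parameter names follow those referenced in this issue):

  tasks:
    - name: Create address object without committing
      paloaltonetworks.panos.panos_address_object:
        provider: '{{ provider }}'
        name: 'MyNet'
        value: '192.0.2.0/24'
        commit: false

    - name: Commit once at the end of the play
      paloaltonetworks.panos.panos_commit:
        provider: '{{ provider }}'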

Additional context

Too often, users of these Ansible modules forget about the per-module commits and playbook runs result in multiple commits that are not necessary and slow down playbook execution.

panos_admpwd hangs when connecting to AWS Palo Alto Firewall

Describe the bug

panos_admpwd hangs when trying to establish a connection with a newly provisioned AWS-based Palo Alto Firewall. SSHing into the firewall from the terminal command line (with the appropriate SSH private key of course) works.

Expected behavior

The panos_admpwd module should be applying a password to the specified user in order to allow for the other modules to be used, as they require a username and password rather than an SSH private key file.

Current behavior

That playbook never gets past establishing a connection with the newly created firewall.

Playbook in question:

---
- name: Provision AWS Resources
  hosts: panos
  gather_facts: no
  vars_files:
    - ./vars/default_vars.yml
    - ./credentials/pan_credentials.yml

  tasks:

    - name: Configure admin password
      paloaltonetworks.panos.panos_admpwd:
        ip_address: "{{ inventory_hostname }}"
        username: "admin"
        key_filename: "{{ working_dir }}/{{ ec2_prefix }}-key-private.pem"
        newpassword: "{{ PAN_PASSWORD }}"
      register: result
      until: not result|failed
      retries: 10
      delay: 30

Results of the playbook:

TASK [Configure admin password] ***************************************************************************************************************************************************************
task path: /Users/michaelford/My Local Documents/github-workspace/ansible-panos/secure_firewalls.yml:20
Read vars_file './vars/default_vars.yml'
Read vars_file './credentials/pan_credentials.yml'
Read vars_file './vars/default_vars.yml'
Read vars_file './credentials/pan_credentials.yml'
<18.214.16.238> ESTABLISH SSH CONNECTION FOR USER: admin
<18.214.16.238> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="/tmp/mford-panos-demo-key-private.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="admin"' -o ConnectTimeout=10 -o ControlPath=/Users/michaelford/.ansible/cp/ecf716a88b 18.214.16.238 '/bin/sh -c '"'"'echo ~admin && sleep 0'"'"''

Result of logging into the firewall on the terminal:

$ ssh admin@18.214.16.238 -i /tmp/mford-panos-demo-key-private.pem
Last login: Sun Mar  8 14:13:41 2020 from 45.56.142.132
Welcome admin.
admin@PA-VM> 

Context

If this module does not work properly, operators will be forced to manually add passwords to firewalls, raising the barrier to entry for using Ansible automation with Palo Alto firewalls.

Your Environment

facing issue with creating aggregate sub interface

Facing an issue with creating an aggregate subinterface on Panorama.

Describe the bug

# create ae8.100
- name: create sub interface
  ip_address: '1.1.1.1'
  username: 'admin'
  password: '****'
  name: "ae8.100"
  tag: 100
  zone_name: "TEST"
  template: "TEST"

When running this, it gives the following error:
"changed": false , "msg": "failed refresh: Object doen't exist: /config/devices/entry[@name='localdomain']/template/entry[@name='TEST']/config/devices/entry[@name='name='localhost.localdomain']/network/interface/Ethernet/entry[name='ae8]"

Instead of going to the aggregate interface, it is going to ethernet.

Expected behavior

it should create ae8.100

Current behavior

Possible solution

Steps to reproduce

Screenshots

Context

Your Environment

  • Version used:
  • Environment name and version (e.g. Chrome 59, node.js 5.4, python 3.7.3):
  • Operating System and version (desktop or mobile):
  • Link to your project:

panos_interface (and friends): cannot set IPv6 addresses

While the interface-related modules allow enabling IPv6 on an interface, it doesn't seem to be possible to actually configure any addresses.

Expected behavior

There should be some way to add IPv6 addresses to an interface.

Current behavior

As far as I can tell, pan-os-ansible currently doesn't allow for IPv6 addressing configuration at all.

Possible interface sketch:

panos_vlan_interface:
  provider: "{{ provider }}"
  name: vlan.42
  zone_name: twilight_zone
  vr_name: default
  ipv6_enabled: yes
  ip:
    - '198.51.100.42/24'
  ipv6:
    - address: '2001:db8::42/64'
      advertise_enabled: yes

Context

Trying to figure out how to drive a pair of PA-5200s. If there's a more suitable way for setting up IPv6 addressing I'd happily try something else.

panos_administrator cannot modify Panorama device admins

Describe the bug

In a previous version we were able to use panos_administrator to modify the administrators of the Panorama device itself. Now a template is required even when we don't want to modify a template.

Expected behavior

The template field should be optional for Panorama so that the panorama device administrators can be updated as well.

Current behavior

An error is raised that the template or template stack is required:
" msg: Specify either the template or the template stack."

Steps to reproduce

Pretty straightforward, just try to add an administrator on a panorama device without a template.

Context

We have a fair number of network engineers and we need to be able to automatically sync the panorama administrators with our local LDAP information.

Your Environment

  • Version used:
    2.4.1

  • Environment name and version (e.g. Chrome 59, node.js 5.4, python 3.7.3):
    ansible==2.9.10
    cffi==1.14.0
    cryptography==2.9.2
    Jinja2==2.11.2
    MarkupSafe==1.1.1
    pan-python==0.16.0
    pandevice==0.14.0
    pycparser==2.20
    PyYAML==5.3.1
    six==1.15.0
    xmltodict==0.12.0

python version = 3.7.3

  • Operating System and version (desktop or mobile):
    OSX or CentOS 7.

panos_type_cmd :: get: xml.parsers.expat.ExpatError: junk after document element while getting node attributes

Overview

panos_type_cmd :: get: module fails when the get call on an XPATH returns a list of items.
XPATH : /config/devices/entry[@name='localhost.localdomain']/template/entry/@name

It works fine when the list returns only one element, e.g. for XPATH /config/devices/entry/@name:

"entry": {
            "@name": "localhost.localdomain"
        }

Description

When a get call to an XPATH returns a list of items (for example, the list of all available template names), this module fails because the line below fails.

panos_type_cmd.py: Line 211
obj_dict = xmltodict.parse(xml_output)

Expected behavior

The get call should return the list of elements instead of failing, e.g.:

{
            "entry": [
                {
                    "@name": "Template-1"
                },
                {
                    "@name": "Template-2"
                },
                {
                    "@name": "Template-3"
                },
                {
                    "@name": "Template-4"
                },
                {
                    "@name": "Template-5"
                },
                {
                    "@name": "Template-7"
                },
                {
                    "@name": "Template-8"
                },
                {
                    "@name": "Template-9"
                },
                {
                    "@name": "Template-10"
                }
            ]
     }

Current behavior

Module failure:
Traceback (most recent call last):\n File \"/home/mylocaluser/.ansible/tmp/ansible-tmp-1590179869.8708434-3691-1200177165009/AnsiballZ_panos_type_cmd.py\", line 102, in <module>\n _ansiballz_main()\n File \"/home/mylocaluser/.ansible/tmp/ansible-tmp-1590179869.8708434-3691-1200177165009/AnsiballZ_panos_type_cmd.py\", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/home/mylocaluser/.ansible/tmp/ansible-tmp-1590179869.8708434-3691-1200177165009/AnsiballZ_panos_type_cmd.py\", line 40, in invoke_module\n runpy.run_module(mod_name='ansible_collections.paloaltonetworks.panos.plugins.modules.panos_type_cmd', init_globals=None, run_name='__main__', alter_sys=True)\n File \"/usr/lib/python3.8/runpy.py\", line 206, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib/python3.8/runpy.py\", line 96, in _run_module_code\n _run_code(code, mod_globals, init_globals,\n File \"/usr/lib/python3.8/runpy.py\", line 86, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_panos_type_cmd_payload_8f6o73ac/ansible_panos_type_cmd_payload.zip/ansible_collections/paloaltonetworks/panos/plugins/modules/panos_type_cmd.py\", line 220, in <module>\n File \"/tmp/ansible_panos_type_cmd_payload_8f6o73ac/ansible_panos_type_cmd_payload.zip/ansible_collections/paloaltonetworks/panos/plugins/modules/panos_type_cmd.py\", line 209, in main\n File \"/home/mylocaluser/ansible-venv/lib/python3.8/site-packages/xmltodict.py\", line 327, in parse\n parser.Parse(xml_input, True)\nxml.parsers.expat.ExpatError: junk after document element: line 3, column 2\n

Possible solution

import xml.parsers.expat

if xml_output is not None:
    try:
        obj_dict = xmltodict.parse(xml_output)
    except xml.parsers.expat.ExpatError:
        # The get call returned multiple sibling elements; wrap them in a
        # single root element so the output is well-formed XML.
        xml_output = '<entries>' + xml_output + '</entries>'
        obj_dict = xmltodict.parse(xml_output)
    json_output = json.dumps(obj_dict)
Wrapping the xml_output list within <entries> </entries> fixed my issue.

Steps to reproduce

  1. Set up multiple templates (in the Panorama GUI: Network > Template pull-down).
  2. Use the play below to pull all the available template names.

  - name: get available templates
    panos_type_cmd:
      provider: '{{ provider }}'
      cmd: get
      xpath: |
        /config/devices/entry[@name='localhost.localdomain']/template/entry/@name
    register: result

Screenshots

API explorer results (screenshot omitted).

Context

I had to perform a certain action against each template; for that purpose I had to pull the list of available templates dynamically and update the config using the 'set' method.

Environment

  • Version used:
  • Python 3.8.2
  • ansible 2.9.7
  • paloaltonetworks.panos:1.1.0
  • Panorama 9.0.8

Read provider parameters from environment variables

Instead of having to specify the credentials as parameters in every task, it is common for a module to be able to grab values from specific environment variables.
The suggestion is to have the following environment variables:

  • PANOS_IP_ADDRESS
  • PANOS_USERNAME
  • PANOS_PASSWORD
  • PANOS_API_KEY
  • PANOS_PORT

Then pan_device_auth would be set accordingly, based on module.params or os.getenv.
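Until such support exists, a hedged workaround is to populate the provider dictionary from those environment variables with Ansible's env lookup (variable names as proposed above):

  vars:
    provider:
      ip_address: "{{ lookup('env', 'PANOS_IP_ADDRESS') }}"
      username: "{{ lookup('env', 'PANOS_USERNAME') }}"
      password: "{{ lookup('env', 'PANOS_PASSWORD') }}"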

not all modules are idempotent

Not all of the modules are idempotent. For example, when I enable BGP with panos_bgp, then configure a peer group with panos_bgp_peer_group, then add a bunch of peers with panos_bgp_peer, and then run the same playbook again, the panos_bgp task will remove the peers and peer group, resulting in a changed state instead of ok.

Expected behavior

Run a playbook with these tasks twice:

  - name: Configure BGP on a Virtual Router
    panos_bgp:
      provider: '{{ provider }}'
      state: present
      vr_name: default
      router_id: "{{ router_id }}"
      local_as: "{{ local_as }}"
      install_route: True
      reject_default_route: False
      commit: false

  - name: Configure BGP Peer Group
    panos_bgp_peer_group:
      provider: '{{ provider }}'
      state: present
      name: "{{ peer_group_name }}"
      vr_name: default
      commit: False

  - name: Configure BGP Peer
    panos_bgp_peer:
      provider: '{{ provider }}'
      state: present
      name: "{{ peer_name }}"
      enable: true
      peer_as: "{{ peer_as }}"
      local_interface_ip: "{{ peer_local_if_ip }}"
      local_interface: "{{ peer_local_if }}"
      peer_address_ip: "{{ peer_addr_ip }}"
      peer_group: "{{ peer_group_name }}"
      vr_name: default
      commit: False

On the second run, each task should display ok.

Current behavior

When run a second time, all tasks display changed

Possible solution

I believe the source of the problem is the apply_state function in module_utils/network/panos/panos.py. There, the generated object only includes child types that exist in the new version of the object, instead of leaving the existing children untouched.

Your Environment

  • Version used:
  • Ubuntu 18.04, with pandevice 0.14.0

panos_import: Failed to import config to Pan OS Device

Describe the bug

Failed to import the configuration to the PAN-OS device.

Expected behavior

It should be able to import the config every time. Sometimes it works, and sometimes it fails to import the config.

Current behavior

90% of the time it is unable to import:

TASK [Import configuration file into FW] ********************************************************************************************************************************************************************************************************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: AttributeError: 'PanXapiError' object has no attribute 'message'
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n  File \"/tmp/ansible_panos_import_payload_75kydk6b/ansible_panos_import_payload.zip/ansible_collections/paloaltonetworks/panos/plugins/modules/panos_import.py\", line 180, in main\n  File \"/tmp/ansible_panos_import_payload_75kydk6b/ansible_panos_import_payload.zip/ansible_collections/paloaltonetworks/panos/plugins/modules/panos_import.py\", line 99, in import_file\n  File \"/usr/local/lib/python3.6/dist-packages/pan/xapi.py\", line 637, in keygen\n    raise PanXapiError(self.status_detail)\npan.xapi.PanXapiError: URLError: reason: [Errno 113] No route to host\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File \"/root/.ansible/tmp/ansible-tmp-1585567740.856-61302474294564/AnsiballZ_panos_import.py\", line 102, in <module>\n    _ansiballz_main()\n  File \"/root/.ansible/tmp/ansible-tmp-1585567740.856-61302474294564/AnsiballZ_panos_import.py\", line 94, in _ansiballz_main\n    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n  File \"/root/.ansible/tmp/ansible-tmp-1585567740.856-61302474294564/AnsiballZ_panos_import.py\", line 40, in invoke_module\n    runpy.run_module(mod_name='ansible_collections.paloaltonetworks.panos.plugins.modules.panos_import', init_globals=None, run_name='__main__', alter_sys=True)\n  File \"/usr/lib/python3.6/runpy.py\", line 205, in run_module\n    return _run_module_code(code, init_globals, run_name, mod_spec)\n  File \"/usr/lib/python3.6/runpy.py\", line 96, in _run_module_code\n    mod_name, mod_spec, pkg_name, script_name)\n  File \"/usr/lib/python3.6/runpy.py\", line 85, in _run_code\n    exec(code, run_globals)\n  File \"/tmp/ansible_panos_import_payload_75kydk6b/ansible_panos_import_payload.zip/ansible_collections/paloaltonetworks/panos/plugins/modules/panos_import.py\", line 193, in <module>\n  File \"/tmp/ansible_panos_import_payload_75kydk6b/ansible_panos_import_payload.zip/ansible_collections/paloaltonetworks/panos/plugins/modules/panos_import.py\", line 183, in main\nAttributeError: 'PanXapiError' object has no attribute 'message'\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}

Possible solution

Manual deployment is working but not through ansible.

Steps to reproduce

  1. Try to deploy 4 to 5 firewalls through a script, one by one.
  2. Sometimes it fails for all the OVAs, and sometimes it works for 1 or 2 OVAs.

Your Environment

$ ansible --version
ansible 2.9.6
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.9 (default, Nov 7 2019, 10:44:02) [GCC 8.3.0]

Using paloalto collections.

panos_l2_subinterface: fails on aggregate interfaces

Describe the bug

The module fails when used with aggregate interfaces.

panos_l2_subinterface:
  provider: "{{ provider }}"
  name: ae1.42
  tag: 42
  vlan_name: some_vlan
  zone_name: twilight_zone

Expected behavior

Creation of subinterface ae1.42 on aggregate interface ae1.

Current behavior

Execution fails with the following message:

Failed refresh: Object doesn't exist: /config/devices/entry[@name='localhost.localdomain']/network/interface/ethernet/entry[@name='ae1']

Possible solution

When the named interface doesn't exist within the ethernet section of the config tree, try looking it up as an aggregate.

Context

Evaluating how to deploy a pair of PA-5200s, each dual-connected to the switches (thus the need for link aggregation).

panos_l3_subinterface: no commit parameter

Is your feature request related to a problem?

Most of the modules have a commit parameter, but this one doesn't.

Describe the solution you'd like

Add a commit parameter to this module to make it consistent with the rest.

Describe alternatives you've considered

N/A

Additional context

Most modules have this parameter; it would be good to keep this one consistent with the others.

panos_ha fails on vm-series in Azure

Running an Ansible playbook to configure an active-passive HA cluster in Azure fails.

Describe the bug

I am running a playbook to configure interfaces, routing etc. of a pair of vm-series 300 in Azure. I am trying to also automate the HA config and have added the following to the playbook:

- name: Configure Active/Standby HA
  panos_ha:
    provider: "{{provider}}"
    state: present
    ha_peer_ip: "10.75.0.53"
    ha1_ip_address: "10.75.0.52"
    ha1_netmask: "255.255.255.240"
    ha1_port: "management"
    ha2_port: "ethernet1/3"
    ha_mode: "active-passive"
    commit: "false"

Expected behavior

HA configuration succeeds.

Current behavior

TASK [Configure Active/Standby HA] ******************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Failed apply: high-availability -> interface -> ha1-backup unexpected here\n high-availability -> interface is invalid"}

Possible solution

Steps to reproduce

Screenshots

Context

Your Environment

Using the paloaltonetworks.panos:1.1.0 collection with ansible 2.9.4 on Ubuntu 16.04.06 (WSL).

panos_commit bug

  1. panos_commit does not have check-mode functionality for commit to Panorama and push to devices.
  2. All committed configuration from Panorama is pushed to devices, despite specifying a particular user via panos_commit: admins: [abc].

Describe the bug

panos_commit doesn't have check-mode functionality like other modules, which provide a preview option.

Expected behavior

Check mode should be present, allowing the user to preview the changes being committed to Panorama and pushed to devices.

Current behavior

panos_commit commits both the intended and previously pending configurations to Panorama, and then pushes the merged configuration to the devices. This is leading to outages, since unintended configurations from previously committed changes are getting merged with the current commit.

Possible solution

panos_commit should provide functionality to preview the configs being committed to Panorama, and also a preview of the config being pushed to devices.
In the absence of a preview/check feature, Panorama commits of previous versions get merged with the current configuration commit and pushed to devices.

Steps to reproduce

  1. Commit a change to panorama manually via gui and don't push to devices.
  2. Make a second policy change via panoroma/automation and use panos_commit to push to device group.
  3. Final outcome: both commits will go to the device group without the knowledge of the owner of change 2.

Current Code Snippet from panos_commit.py:

module = AnsibleModule(
    argument_spec=helper.argument_spec,
    supports_check_mode=False,
    required_one_of=helper.required_one_of,
)

Context

Your Environment

  • Version used: 2.8.5
  • Environment name and version (e.g. Chrome 59, node.js 5.4, python 3.7.3): python 2.7
  • Operating System and version (desktop or mobile):
  • Link to your project:

Hitcount?

Is there any way to get hit counts from policies with the current set of modules?
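One approach, mirroring the "Strange timestamp using panos_op" issue above, is to run the rule-hit-count operational command through panos_op (a sketch; the device group and rulebase names are placeholders):

  - name: Get hit counts for all security rules
    paloaltonetworks.panos.panos_op:
      provider: '{{ provider }}'
      cmd: "<show><rule-hit-count><device-group><entry name='fw01'><pre-rulebase><entry name='security'><rules><all/></rules></entry></pre-rulebase></entry></device-group></rule-hit-count></show>"
      cmd_is_xml: true
    register: hit_counts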

panos_loadcfg: Export config module

Hi,

While reading the documentation, I saw there is an import parameter that allows importing a previously offloaded config. I am wondering if there is an export parameter that would allow exporting config files.

Request to Add Module for Configuring Dashboard Widgets

Is your feature request related to a problem?

I have not seen any dashboard-widget-related requests so far.

Describe the solution you'd like

A request to add a module/functionality where we can add desired widget for the dashboard.
"You can also decide which widgets to display or hide so that you see only those you want to monitor."

Describe alternatives you've considered

None so far.

Additional context

This task on my playbook is on hold and pending. Hoping for your help on this.

Thank you very much for your support.

panos_facts interfaces fails on IPv6

The panos_facts module tries to pull the "name" out of the IPv6Address objects, but there is no name, only "address". So, if panos_facts is run with gather_subset: ['interfaces'], it fails on the name attribute.

Expected behavior

panos_facts module with gather_subset: ['interfaces'] should not fail on IPv6 as there is built in support in the module.

Current behavior

The module fails either with the default gather_subset of ['!config'] or gather_subset: ['interfaces'].

Possible solution

I played with changing line 346 from:

iface_info['ipv6'].append(child.name)

to:

iface_info['ipv6'].append(child.address)

and it appeared to fix the issue.

Steps to reproduce

Use the panos_facts module with the following:

panos_facts:
  provider: '{{ provider }}'
  gather_subset: ['interfaces']

Ensure an IPv6 address is assigned to one of the interfaces.

Screenshots

I don't have any as I have limitations on what I can transfer from work computers etc.

Context

It is consistent; I have had the problem both with manually configured IPv6 addresses and with address objects in that field.

Your Environment

PanOS 8.1.12 is my current setup
Galaxy role version 2.4.1
Pandevice version 0.14.0

panos_address_object without Description is not idempotent

Describe the bug

When the description parameter is omitted from the task, the description key is always presented as changing. The description parameter is listed as optional, so the module should not attempt to make a change when nothing is set.

Expected behavior

I'd expect that if no description is provided and the rest of the rule is present that there would be no changes made.

Current behavior

-<entry name="MyNet"><ip-netmask>192.0.2.0/24</ip-netmask></entry>
\ No newline at end of file
+<entry name="MyNet"><ip-netmask>192.0.2.0/24</ip-netmask><description /></entry>
\ No newline at end of file

Possible solution

I need to take a look at the code to see why description is still being sent if not defined.

Steps to reproduce

  1. Create an address object task with only name, type, and value defined
  2. Continuously run a playbook with the task
  3. It comes back with a change being made every time

Context

Looking to have a fully idempotent configuration.

Don't seem to be able to use panos modules inside roles

Describe the bug

I don't seem to be able to use any of the modules when I use them inside a role.

Expected behavior

To be able to use any module in the collections in a role.

Current behavior

I have the following playbook that calls a role that I made.

---
- name: Config PA
  gather_facts: no
  connection: local
  hosts: all
  
  collections:
    - paloaltonetworks.panos

  roles:
    - pa-role

The pa-role has the following task in it:

  - name: Add IPv4 default route
    panos_static_route:
      provider: '{{ provider }}'
      name: 'default'
      destination: '0.0.0.0/0'
      nexthop: "{{ defult_route.ipv4 }}"

When I run this playbook I get the following error:

ERROR! couldn't resolve module/action 'panos_static_route'. This often indicates a misspelling, missing collection, or incorrect module path.

If I copy that task into a standalone playbook, it works.

---
- name: Config PA
  gather_facts: no
  connection: local
  hosts: all
  
  collections:
    - paloaltonetworks.panos
  
  tasks:
  - name: Add IPv4 default route
    panos_static_route:
      provider: '{{ provider }}'
      name: 'default'
      destination: '0.0.0.0/0'
      nexthop: "{{ defult_route.ipv4 }}"

Steps to reproduce

The snippets above are pulled from the code that I'm using; they should be enough to reproduce it.
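A hedged workaround, in line with the FQCN guidance in this collection's README: reference the module by its fully qualified name inside the role task (depending on the Ansible version, declaring the collection in the role's meta/main.yml may also work):

  - name: Add IPv4 default route
    paloaltonetworks.panos.panos_static_route:
      provider: '{{ provider }}'
      name: 'default'
      destination: '0.0.0.0/0'
      nexthop: "{{ defult_route.ipv4 }}"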

Rename pandevice import statements to panos

Is your feature request related to a problem?

The import statements are changing with the rebrand of pandevice to pan-os-python. The package structure will stay the same, but this:

from pandevice import firewall
from pandevice import network

will need to be renamed to:

from panos import firewall
from panos import network

A beta release of pan-os-python is already out.

panos_loadcfg: Unable to commit the config on Pan OS Device

Describe the bug

Even if the panos_import module imports the config successfully, panos_loadcfg is unable to load it correctly.

Expected behavior

It should be able to load as expected.

Current behavior

Loading one firewall at a time works. But when trying to configure 4 to 5 firewalls one by one through a script/Ansible, it breaks and is not able to commit the configs correctly.

Possible solution

No resolution.

Steps to reproduce

100% reproducible in my environment. I am deploying these OVAs on the VMware platform.

  1. Deploy a provisioning server with a private network in the 192.168.1.X subnet, using 192.168.1.2.
  2. Deploy the firewall OVA in the same subnet. It will pick the 192.168.1.1 IP.
  3. From the provisioning server, copy the config using panos_import and apply it using panos_loadcfg.
  4. Then change the interface to the correct one for the management IP.
  5. When running these playbooks one by one manually, it works.

Playbooks:

# cat 01_firewall-config.yml
---
- name: Firewall Configuration
  hosts: localhost
  connection: local
  gather_facts: false
  vars_files:
    - 01_PrimaryDC_Mgmt_FWA.yml

  collections:
    - paloaltonetworks.panos

  tasks:
    - name: Import configuration file into FW
      panos_import:
        ip_address: '{{ ip_address }}'
        username: '{{ username }}'
        password: '{{ password }}'
        file: '{{ config_file }}'
        category: 'configuration'
      register: result

    - name: Load the configuration file to FW
      panos_loadcfg:
        ip_address: '{{ ip_address }}'
        username: '{{ username }}'
        password: '{{ password }}'
        file: '{{ result.filename }}'
        commit: True

Error Message:

PLAY [Firewall Configuration] *******************************************************************************************************************************************************************************************************************************************************************************************

TASK [Import configuration file into FW] ********************************************************************************************************************************************************************************************************************************************************************************
changed: [localhost]

TASK [Load the configuration file to FW] ********************************************************************************************************************************************************************************************************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: pan.xapi.PanXapiError: Commit job was not queued since auto-commit not yet finished successfully. Please use "commit force" to schedule commit job
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n  File \"/root/.ansible/tmp/ansible-tmp-1585567312.7959883-164436344259963/AnsiballZ_panos_loadcfg.py\", line 102, in <module>\n    _ansiballz_main()\n  File \"/root/.ansible/tmp/ansible-tmp-1585567312.7959883-164436344259963/AnsiballZ_panos_loadcfg.py\", line 94, in _ansiballz_main\n    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n  File \"/root/.ansible/tmp/ansible-tmp-1585567312.7959883-164436344259963/AnsiballZ_panos_loadcfg.py\", line 40, in invoke_module\n    runpy.run_module(mod_name='ansible_collections.paloaltonetworks.panos.plugins.modules.panos_loadcfg', init_globals=None, run_name='__main__', alter_sys=True)\n  File \"/usr/lib/python3.6/runpy.py\", line 205, in run_module\n    return _run_module_code(code, init_globals, run_name, mod_spec)\n  File \"/usr/lib/python3.6/runpy.py\", line 96, in _run_module_code\n    mod_name, mod_spec, pkg_name, script_name)\n  File \"/usr/lib/python3.6/runpy.py\", line 85, in _run_code\n    exec(code, run_globals)\n  File \"/tmp/ansible_panos_loadcfg_payload_tc554zho/ansible_panos_loadcfg_payload.zip/ansible_collections/paloaltonetworks/panos/plugins/modules/panos_loadcfg.py\", line 130, in <module>\n  File \"/tmp/ansible_panos_loadcfg_payload_tc554zho/ansible_panos_loadcfg_payload.zip/ansible_collections/paloaltonetworks/panos/plugins/modules/panos_loadcfg.py\", line 124, in main\n  File \"/usr/local/lib/python3.6/dist-packages/pan/xapi.py\", line 902, in commit\n    raise PanXapiError(self.status_detail)\npan.xapi.PanXapiError: Commit job was not queued since auto-commit not yet finished successfully. Please use \"commit force\" to schedule commit job\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}

PLAY RECAP **************************************************************************************************************************************************************************************************************************************************************************************************************
localhost                  : ok=1    changed=1    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0

Context

For every DC we deploy 4 Palo Alto firewalls: 2 for management and 2 for customers.
In one go we do such a deployment in 2 DCs, in primary and secondary mode.
But the lack of automation capability in Palo Alto is a huge drawback; we have to rethink this product.

Your Environment

ansible 2.9.6 and Pan OS 8.5

Panorama - "Scope \"shared\" is not allowed"

Describe the bug

Using panos_security_rule and leaving out an explicit device_group generates the following error:

fatal: [localhost]: FAILED! => {"changed": false, "msg": "Scope \"shared\" is not allowed"}

Expected behavior

panos_security_rule should push to shared scope when leaving device_group key out of task.

Current behavior

panos_security_rule correctly recognizes a specific device_group and adds the rule as expected.
When leaving out device_group, it correctly defaults to "shared", but the error message is shown: shared scope is not allowed.

User with admin rights pushes the playbook to Panorama.

Steps to reproduce

If this isn't specific to my environment, a simple panos_security_rule task without device_group, pushed to Panorama, should invoke the same error.

Your Environment

fatal: [localhost]: FAILED! => {
    "changed": false, 
    "invocation": {
        "module_args": {
            "action": "allow", 
            "antivirus": null, 
            "api_key": null, 
            "application": [
                "any"
            ], 
            "category": [
                "any"
            ], 
            "commit": false, 
            "data_filtering": null, 
            "description": "Rule test", 
            "destination_ip": [
                "TEST-SERVERS"
            ], 
            "destination_zone": [
                "any"
            ], 
            "device_group": "shared", 
            "devicegroup": null, 
            "disable_server_response_inspection": false, 
            "disabled": false, 
            "existing_rule": null, 
            "file_blocking": null, 
            "group_profile": null, 
            "hip_profiles": [
                "any"
            ], 
            "icmp_unreachable": null, 
            "ip_address": null, 
            "location": null, 
            "log_end": true, 
            "log_setting": null, 
            "log_start": false, 
            "negate_destination": false, 
            "negate_source": false, 
            "negate_target": null, 
            "operation": null, 
            "password": null, 
            "port": 443, 
            "provider": {
                "api_key": null, 
                "ip_address": "10.10.10.10", 
                "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", 
                "port": 443, 
                "serial_number": null, 
                "username": "myusername"
            }, 
            "rule_name": "ACL-TEST-SERVERS", 
            "rule_type": "universal", 
            "rulebase": null, 
            "schedule": null, 
            "service": [
                "TEST-SERVICES"
            ], 
            "source_ip": [
                "any"
            ], 
            "source_user": [
                "TEST-USER"
            ], 
            "source_zone": [
                "PROD"
            ], 
            "spyware": null, 
            "state": "present", 
            "tag_name": [
                "AUTOMATED"
            ], 
            "target": null, 
            "url_filtering": null, 
            "username": "admin", 
            "vsys": "vsys1", 
            "vulnerability": null, 
            "wildfire_analysis": null
        }
    }, 
    "msg": "Scope \"shared\" is not allowed"
}
  • Version used: ansible 2.9.10
  • Environment name and version (e.g. Chrome 59, node.js 5.4, python 3.7.3): python version = 2.7.17
  • Operating System and version (desktop or mobile): Ubuntu 19.10 with 5.3.0-62-generic kernel

Commit by particular user in panos modules

Is your feature request related to a problem?

While using any of the PAN-OS modules, a commit-all is performed by default if any configuration is changed. However, in our production Panorama, many team members are working with their individual unsaved changes. Commit-all is not very helpful and might be dangerous if changes from other members get committed and pushed.

Describe the solution you'd like

All the PAN-OS modules should commit the changes made by the user, rather than committing everything.

Modules should include an option to commit partial changes, changes made by the user only, or all changes.

Describe alternatives you've considered

Making use of panos_commit module
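As noted at the top of this issue, panos_commit accepts an admins parameter for a partial, per-user commit; a minimal sketch, with the username as a placeholder:

  - name: Commit only changes made by a specific admin
    paloaltonetworks.panos.panos_commit:
      provider: '{{ provider }}'
      admins: ['abc']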

Additional context

panos_bgp: missing parameter for BFD Profile selection

Is your feature request related to a problem?

In the PAN-OS Virtual Router BGP configuration, on the General tab, one of the options is to select a BFD profile. Currently, according to the panos_bgp module documentation, there is no parameter to set that.

Describe the solution you'd like

Add parameter for BFD profile selection

Describe alternatives you've considered

None

Additional context

PANOS version 9.0.5

panos_tag_object errors when attempting to remove GUI generated color (Maroon)

Describe the bug

Setup

  • Create a tag on the Panorama GUI with Maroon (color19)
  • Run playbook to remove unwanted tags

Execution

  1. Gather tag objects from panos_object_facts module
  2. Determine which tags belong, which need to be deleted
  3. Attempt to delete using state: absent on the panos_tag_object module; it errors out with:
value of color must be one of: red, green, blue, yellow, copper, orange, purple, gray, light green, cyan, light gray, blue gray, lime, black, gold, brown, got: color19

Expected behavior

I'd expect the objects to be deleted.

Current behavior

Ansible errors out due to the unexpected color value.

Possible solution

If the state is absent, skip the color check?

  • Attempted to run without the color key, and that errored as well:
MODULE_STDERR:

Traceback (most recent call last):
  File "/root/.ansible/tmp/ansible-tmp-1589395215.5506418-2467-34486907132534/AnsiballZ_panos_tag_object.py", line 102, in <module>
    _ansiballz_main()
  File "/root/.ansible/tmp/ansible-tmp-1589395215.5506418-2467-34486907132534/AnsiballZ_panos_tag_object.py", line 94, in _ansiballz_main
    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
  File "/root/.ansible/tmp/ansible-tmp-1589395215.5506418-2467-34486907132534/AnsiballZ_panos_tag_object.py", line 40, in invoke_module
    runpy.run_module(mod_name='ansible_collections.paloaltonetworks.panos.plugins.modules.panos_tag_object', init_globals=None, run_name='__main__', alter_sys=True)
  File "/usr/local/lib/python3.7/runpy.py", line 205, in run_module
    return _run_module_code(code, init_globals, run_name, mod_spec)
  File "/usr/local/lib/python3.7/runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "/usr/local/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/tmp/ansible_panos_tag_object_payload_phjvd2zb/ansible_panos_tag_object_payload.zip/ansible_collections/paloaltonetworks/panos/plugins/modules/panos_tag_object.py", line 148, in <module>
  File "/tmp/ansible_panos_tag_object_payload_phjvd2zb/ansible_panos_tag_object_payload.zip/ansible_collections/paloaltonetworks/panos/plugins/modules/panos_tag_object.py", line 125, in main
  File "/usr/local/lib/python3.7/site-packages/pandevice/objects.py", line 178, in color_code
    raise ValueError("Color '{0}' is not valid".format(color_name))
ValueError: Color 'None' is not valid

Steps to reproduce

  1. Gather tag objects with the panos_object_facts module
  2. Determine which tags belong and which need to be deleted
  3. Attempt to delete using state: absent on the panos_tag_object module; it errors out as shown above (a minimal repro sketch follows)
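A minimal task reproducing step 3 (the tag name is hypothetical):

- name: Remove a tag created in the GUI with Maroon (color19)
  panos_tag_object:
    provider: '{{ provider }}'
    name: 'unwanted-tag'   # hypothetical tag name
    state: 'absent'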

Context

Trying to delete tags that should not be in the environment.

Your Environment

ansible 2.9.9

latest (2020-05-13) Galaxy instance of the collection, fresh install to Docker

panos_security_rule reports an error, but achieves its goal

Describe the bug

When creating a new security rule with panos_security_rule with a location specified (the location and existing_rule parameters are used), the task reports as failed with "msg": "; ", but it achieves its goal and successfully creates the rule in the correct location.
When running the same task without a location specified (no location and existing_rule), the task is reported as ok.

Expected behavior

The playbook task should report OK.

Current behavior

The playbook task is reported as FAILED (but the planned changes are implemented as expected).

Possible solution

Use ignore_errors: yes so that the playbook continues with the next tasks.

Steps to reproduce

  - name: Create Security Policy
    panos_security_rule:
      rule_name: 'security rule'
      tag_name: ['tag1']
      rule_type: 'interzone'
      source_zone: ['DMZ', 'TRUST']
      source_ip: ['source-{{ item.name }}','source-{{ item.name2 }}']
      destination_zone: ['DMZ', 'TRUST']
      destination_ip: ['source-{{ item.name }}','source-{{ item.name2 }}']
      service: ['any']
      action: 'allow'
      location: 'before'
      existing_rule: 'intrazone-custom'
      state: 'present'
      provider: '{{ provider }}'
      commit: false
    loop: '{{ rule }}'

Screenshots

Context

Part of the configuration (i.e. address objects are pushed to firewalls from Panorama)

Your Environment

  • Version used: PaloAltoNetworks.paloaltonetworks, v2.4.1
  • Environment name and version (e.g. Chrome 59, node.js 5.4, python 3.7.3): Python 3.6.9
  • Operating System and version (desktop or mobile): Ubuntu 18.04.4 LTS
  • Link to your project: N/A

panos_security_rule_facts module does not support the 'shared' device-group for Panorama

Describe the bug

The panos_security_rule_facts module does not support the shared device_group parameter value. This leads to a situation where shared policies cannot be retrieved from Panorama.

Expected behavior

panos_security_rule_facts should work with the shared device_group (it is also the default value when the target device is Panorama).

Current behavior

- name: Getting shared security policies from panorama
  paloaltonetworks.panos.panos_security_rule_facts:
    provider: '{{ network_provider }}'
    all_details: 'yes'
  register: shared_security_policies_list

TASK [net_sec_policy_get : Getting shared security policies from panorama] *******************************************************************************************************************
fatal: [panorama.bisinfo.org]: FAILED! => {"changed": false, "msg": "Scope \"shared\" is not allowed"}

However, this works for any other device group:

- name: Getting all security policies from all device-groups
  paloaltonetworks.panos.panos_security_rule_facts:
    provider: '{{ network_provider }}'
    all_details: 'yes'
    device_group: '{{ item }}'
  loop: '{{ panorama_device_groups }}'
  register: dg_security_policies_list
TASK [net_sec_policy_get : Getting all security policies from all device-groups] **********************************************************************************************************************
ok: [panorama.bisinfo.org] => (item=DC)
ok: [panorama.bisinfo.org] => (item=Switzerland)
ok: [panorama.bisinfo.org] => (item=ftu2-022t_ftu2-024t)
ok: [panorama.bisinfo.org] => (item=fz02-021t_fz02-023t)
ok: [panorama.bisinfo.org] => (item=Guest)
ok: [panorama.bisinfo.org] => (item=OOB)
ok: [panorama.bisinfo.org] => (item=ftu2-062t_fz02-063t)
ok: [panorama.bisinfo.org] => (item=Perimeter)
ok: [panorama.bisinfo.org] => (item=CH)
ok: [panorama.bisinfo.org] => (item=ftu2-012t_fz02-011t)

Steps to reproduce

  1. Execute the panos_security_rule_facts module against Panorama, either omitting the device_group parameter or setting it to 'shared' (the default)

Context

We are currently unable to get shared policies from Panorama.

Your Environment

  • Version used: 1.1.0 (collection newest version)
  • Environment name and version (e.g. Chrome 59, node.js 5.4, python 3.7.3): python 2.7.5, ansible: 2.9.7
  • Operating System and version (desktop or mobile): RHEL7

panos_facts does not support Panorama

Describe the bug

When using the panos_facts module, a Panorama device is not supported for any facts scope.

Expected behavior

panos_facts should gather facts from a Panorama device. The module documentation says: "Certain subsets might be supported by Panorama."

Current behavior

When using the panos_facts module against a Panorama device, the error below is shown:
"msg": "This module is for firewall facts only"

The above is returned for every gather_subset value:
all, system, session, interfaces, ha, routing, vr, vsys and config

Steps to reproduce

  1. Use the panos_facts module against a Panorama device with any gather_subset option; a minimal reproduction is sketched below
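A minimal reproduction sketch (the provider variable is assumed to point at the Panorama device):

- name: Gather system facts from Panorama (fails with the error above)
  panos_facts:
    provider: '{{ provider }}'
    gather_subset: ['system']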

Context

We have an environment where all of the firewalls are managed via the Panorama central management system. All configuration for all firewalls is kept on it, and currently we can't get facts from it, especially config, ha-mode, model, etc.

Your Environment

  • Version used: 1.1.0 (collection) ansible version: 2.9.7
  • Environment name and version (e.g. Chrome 59, node.js 5.4, python 3.7.3): python version = 2.7.5
  • Operating System and version (desktop or mobile): RHEL7

Create layer3 subinterface with panos_l3_subinterface

Hi, I don't know why, but I have a problem when I run my playbook. The error is:

FAILED! => {"changed": false, "msg": "Interface name does not have \".\" in it"}

My code is:

- name: Create layer3 subinterface eth1
  panos_l3_subinterface:
    provider: '{{ provider }}'
    template: "{{ context }}_Network_Template"
    name: "ethernet1/1"
    tag: 2807
    zone_name: "{{ context }}_LAN"
    vr_name: "{{ context }}_VR"

Does anyone know why? Thank you.
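For what it's worth, the error text suggests the subinterface name itself must contain the dot notation; a sketch of the likely fix, keeping the reporter's other values:

- name: Create layer3 subinterface eth1
  panos_l3_subinterface:
    provider: '{{ provider }}'
    template: "{{ context }}_Network_Template"
    name: "ethernet1/1.2807"   # subinterface name must include "." per the error
    tag: 2807
    zone_name: "{{ context }}_LAN"
    vr_name: "{{ context }}_VR"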

panos_ipsec_profile

What:
The panos_ipsec_profile module requires a lifetime setting even when state: absent.

What did you try:

- name: Remove IPSec crypto config
  panos_ipsec_profile:
    provider: '{{ provider }}'
    name: "{{ vpn_name }}-IPSec"
    template: "{{ pano_template }}"
    commit: no
    state: absent

What did you expect?
Removal of the IPSec profile.

What did actually happen?

The full traceback is:
WARNING: The below traceback may *not* be related to the actual failure.
  File "/tmp/ansible_panos_ipsec_profile_payload_P1Psbn/ansible_panos_ipsec_profile_payload.zip/ansible/module_utils/basic.py", line 1523, in _check_required_one_of
    check_required_one_of(spec, param)
  File "/tmp/ansible_panos_ipsec_profile_payload_P1Psbn/ansible_panos_ipsec_profile_payload.zip/ansible/module_utils/common/validation.py", line 96, in check_required_one_of
    raise TypeError(to_native(msg))
fatal: [IP.IP.IP.IP]: FAILED! => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "invocation": {
        "module_args": {
            "commit": false,
            "dh_group": "group2",
            "name": "Name-IPSec",
            "port": 443,
            "provider": {
                "api_key": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
                "ip_address": "IP.IP.IP.IP"
            },
            "state": "absent",
            "template": "name-of-template",
            "username": "admin"
        }
    },
    "msg": "one of the following is required: lifetime_seconds, lifetime_minutes, lifetime_hours, lifetime_days"
}

If I add a lifetime option, it works (see the workaround sketch below).
This is probably related to the requirement on this line:
https://github.com/PaloAltoNetworks/ansible-pan/blob/132e28f6b6a5f5fd8df871c3e25ffbb4ec575fd3/library/panos_ipsec_profile.py#L134
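Based on the report, a workaround sketch (the lifetime value is arbitrary and exists only to satisfy the required_one_of check):

- name: Remove IPSec crypto config (workaround)
  panos_ipsec_profile:
    provider: '{{ provider }}'
    name: "{{ vpn_name }}-IPSec"
    template: "{{ pano_template }}"
    lifetime_hours: 1   # arbitrary; the module demands a lifetime even for state: absent
    commit: no
    state: absent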

panos_aggregate_interface.py - Please Add LACP Support

When creating aggregate interfaces, we also need to configure LACP.

Describe the solution you'd like

Add LACP Support for Aggregate Interfaces

Describe alternatives you've considered

None

Additional context

This is key to creating aggregate interfaces for many customers; a hypothetical sketch follows.
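A hypothetical sketch of what the requested support could look like (the lacp_* parameter names are invented for illustration):

- name: Create aggregate interface with LACP (proposed)
  panos_aggregate_interface:
    provider: '{{ provider }}'
    if_name: 'ae1'
    mode: 'layer3'
    lacp_enable: true    # hypothetical parameter
    lacp_mode: 'active'  # hypothetical parameter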

Support batch mode in panos_software

It would be nice if it were possible to use an image already uploaded to the device (uploaded with Panorama).
Some of our remote sites have quite slow and unreliable internet connections. We usually prepare the upload a day or so before the upgrade.
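A sketch of how this might look, assuming a toggle to skip the download step on panos_software (whether the module honours a pre-staged image this way is exactly what this request asks for):

- name: Install PAN-OS from an image already staged on the device
  panos_software:
    provider: '{{ provider }}'
    version: '9.1.3'   # example version
    download: false    # assumed toggle: use the image uploaded earlier
    install: true
    restart: true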

New Module: panos_export

Describe the solution you'd like

Export operations are not currently supported by a module. The new module should perform all the operations supported by the export API command type, documented here (a hypothetical usage sketch follows the list):

  • Configuration
  • Certificates/Keys
  • Response pages
  • Packet captures
  • Technical support data
  • Stats dump files
  • Device State
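A hypothetical usage sketch (module and parameter names are invented, since the module does not exist yet):

- name: Export device state (proposed panos_export module)
  panos_export:
    provider: '{{ provider }}'
    category: 'device-state'       # hypothetical parameter
    filename: 'device_state.tgz'   # hypothetical parameter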

panos_commit doesn't commit to multiple device groups at once

Describe the bug

Compared to the Panorama GUI commit-all, the panos_commit module does NOT push changes to multiple device groups at once/in parallel.

Expected behavior

The panos_commit module should allow changes to be pushed to multiple device groups at once, in parallel.

Current behavior

Currently, the panos_commit module supports a single device group at a time; when used with a loop, it pushes to the device groups serially.

Possible solution

  • Allow a parallel push for device groups
  • Change the data type of device_group to a list

Steps to reproduce

We have more than one device group in our environment and would like to push changes to all of the device groups at once/in parallel, similar to what we do in the GUI.

Below is an example where I have more than one device group and use the loop functionality:
device_group_list: ['device_group_1', 'device_group_2', 'device_group_3']

- name: Commit changes on Panorama and push to Device Groups
  panos_commit:
    ip_address: "{{ panorama_ip }}"
    username: "{{ username }}"
    password: "{{ password }}"
    admins: ['admin']
    device_group: "{{ item }}"
  loop: "{{ device_group_list }}"
  loop_control:
    loop_var: item
  register: commit_output
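A hypothetical sketch of the requested behaviour (a list-valued device_group is the proposal, not a current parameter):

- name: Commit on Panorama and push to all device groups in parallel (proposed)
  panos_commit:
    ip_address: "{{ panorama_ip }}"
    username: "{{ username }}"
    password: "{{ password }}"
    admins: ['admin']
    device_group: "{{ device_group_list }}"   # proposed: accept a list
  register: commit_output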

Screenshots

Context

My Ansible playbook is part of an end-to-end flow that automates security policy creation. However, as the number of device groups grows, the automation takes longer just to complete the push to all of the device groups, which increases the overall execution time of the end-to-end flow involving different systems. I am hoping to reduce this implementation time.

Your Environment

  • Version used: Palo Alto galaxy Module - 2.4.1
  • Environment name and version (e.g. Chrome 59, node.js 5.4, python 3.7.3): Ansible Tower and Ansible version - 2.9.5
  • Operating System and version (desktop or mobile):
  • Link to your project:

panos_op Pipe through a command

Hi,

I tried the panos_op module and found that I can't do anything with '|'.
Am I doing something wrong, or is that really missing?
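For illustration, the kind of task in question might look like this (the exact command is an assumption; the report does not include one):

- name: Run an op command containing a pipe
  panos_op:
    provider: '{{ provider }}'
    cmd: 'show system info | match serial'   # hypothetical piped command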

ansible 2.9.8
using collection

Thanks

Content Module

Need a module to update content packages, i.e. Apps and Threats.
One module to download and one to install; a hypothetical sketch follows.
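A hypothetical sketch of the requested pair of modules (module names are invented):

- name: Download the latest content package (proposed module)
  panos_content_download:   # hypothetical module name
    provider: '{{ provider }}'

- name: Install the latest content package (proposed module)
  panos_content_install:    # hypothetical module name
    provider: '{{ provider }}'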

Panorama: Option to Retrieve license keys from license server

Is your feature request related to a problem?

The requested functionality for the playbook is that the license to be configured comes from the "retrieve license keys from license server" option.

Describe the solution you'd like

Add this option alongside using an authcode with panos_lic.py or uploading a license key with panos_import.py.

Describe alternatives you've considered

Requesting different functionality from the client for selecting/inputting the license. A possible workaround sketch follows.
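One possible interim approach via panos_op, assuming the device can reach the licensing server (the op command mirrors the CLI request license fetch):

- name: Retrieve license keys from the license server
  panos_op:
    provider: '{{ provider }}'
    cmd: 'request license fetch'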

Additional context

I am stuck finding the right module to implement this.

Thank you very much.

Registering tags to IP addresses may fail

Registering an extra tag to an existing IP address while specifying both the old and new tags fails to add the new tag. See the excerpt below:

Show current mappings

    - name: List the IP address to tag mapping to show empty list
      panos_registered_ip_facts:
        provider: "{{ dag }}"
      register: list

    - debug:
        msg: "{{ list | pprint }}"

TASK [List the IP address to tag mapping to show empty list] *************************************************************************************************************************
ok: [localhost]

TASK [debug] *************************************************************************************************************************************************************************

ok: [localhost] => {
    "msg": {
        "changed": false,
        "failed": false,
        "results": {}
    }
}

Register an IP with a single tag

    - name: Register an IP address with 'first-tag'
      panos_registered_ip:
        provider: '{{ dag }}'
        ips: '192.168.1.1'
        tags:  [ 'first_tag' ]
        state: 'present'
      register: list

    - debug:
        msg: "{{ list | pprint }}"

TASK [Register an IP address with 'first-tag'] ***************************************************************************************************************************************
changed: [localhost]

TASK [debug] *************************************************************************************************************************************************************************

ok: [localhost] => {
    "msg": {
        "changed": true,
        "failed": false,
        "results": {
            "192.168.1.1": [
                "first_tag"
            ]
        }
    }
}

Show the mappings again to confirm above result

    - name: List the IP address to tag mapping
      panos_registered_ip_facts:
        provider: "{{ dag }}"
      register: list

    - debug:
        msg: "{{ list | pprint }}"

TASK [List the IP address to tag mapping] ********************************************************************************************************************************************
ok: [localhost]

TASK [debug] *************************************************************************************************************************************************************************

ok: [localhost] => {
    "msg": {
        "changed": false,
        "failed": false,
        "results": {
            "192.168.1.1": [
                "first_tag"
            ]
        }
    }
}

Now register a second tag while also specifying the first tag. Note that the second tag is not added!

    - name: Register same IP address with 'first-tag' and 'second-tag'
      panos_registered_ip:
        provider: '{{ dag }}'
        ips: '192.168.1.1'
        tags:  [ 'first_tag', 'second_tag' ]
        state: 'present'
      register: list

    - debug:
        msg: "{{ list | pprint }}"

TASK [Register same IP address with 'first-tag' and 'second-tag'] ********************************************************************************************************************
ok: [localhost]

TASK [debug] *************************************************************************************************************************************************************************

ok: [localhost] => {
    "msg": {
        "changed": false,
        "failed": false,
        "results": {
            "192.168.1.1": [
                "first_tag"
            ]
        }
    }
}

Verify again

    - name: List the IP address to tag mapping again to show 'second-tag' is not added
      panos_registered_ip_facts:
        provider: "{{ dag }}"
      register: list

    - debug:
        msg: "{{ list | pprint }}"

TASK [List the IP address to tag mapping again to show 'second-tag' is not added] ****************************************************************************************************
ok: [localhost]

TASK [debug] *************************************************************************************************************************************************************************

ok: [localhost] => {
    "msg": {
        "changed": false,
        "failed": false,
        "results": {
            "192.168.1.1": [
                "first_tag"
            ]
        }
    }
}

Only a mapping for first_tag is listed.
This is probably caused by the comparison of the existing IP list on the firewall with the IP list to be changed. The existing list is determined by retrieving IP addresses that have any of the given tags. The proper way may be to determine the existing list by selecting only the IP addresses that have all of the given tags.

The following code in panos_registered_ip.py may need to be updated:

        # get_registered_ip(tags=tags) matches IPs that carry ANY of the given
        # tags, so an IP that already has one of the tags is treated as fully
        # registered and the new tag is never added.
        registered_ips = device.userid.get_registered_ip(tags=tags)

        if state == 'present':
            # Check to see if IPs actually need to be registered.
            to_add = set(ips) - set(registered_ips.keys())
            if to_add:
                if not module.check_mode:
                    device.userid.register(ips, tags=tags)
                changed = True

Unregistering IP addresses may fail in the same way.
