A Quick Look at Ansible

In the modern age of networking, or IT in general, if you've worked in the industry for more than five seconds you'll have heard the term software defined... fill in the blank, or network automation. They're the buzzwords of the era and everyone wants them, even if they don't know what they actually mean or do. Personally, I've never been a fan of software defined or automated anything. I've always been a firm believer that when it comes to automation in networking, you auto-not use it. In saying that, I haven't had an extensive amount of experience with software defined networks or automation tools, but the little experience I have had hasn't changed my view at all. There are aspects of automation that I do find useful, such as zero touch deployments, and... well, that's about all I can think of right now. I'm yet to see the benefit of writing a script to run the same commands on a router or switch that you could run by logging into the CLI yourself, and much faster too, because there's no need to write code in some language only to find you missed a colon or a space somewhere and then spend hours troubleshooting it. You just enter the commands you want and you're done. Anything you can write a script to do, you can also write a config template for and simply copy and paste into the CLI of a device.

In saying all of that, I realise we have already reached the point of no return now that everyone has poured so much time and money into SDN and automation tools, so I decided to take a little delve into some basic automation using Ansible in my home lab to see if it would change my mind or enlighten me in some way. Keep in mind as you read these posts that I am in no way a programmer and never will be. In fact, one of the things I loved about being a network engineer is that I don't have to write scripts or program anything at all, and if I do need a script, someone else has already written it and I can copy and paste. I would appreciate any feedback in the comments on any playbooks that I do write up though, as I can guarantee they are not elegant or well written. I may not like this stuff, but that doesn't mean I don't want to learn and improve my skills.

For my Ansible lab I created a basic vanilla Alma Linux VM and installed pip and Ansible on it. I'll be using the virtual NX-OS switches in my VXLAN lab for testing the playbooks, but may in future do another post on my physical lab devices or even my actual home networking kit. Installing Ansible is a straightforward and very well documented process which can easily be found among the plethora of other guides and documentation on the Ansible website here. One thing I will add to the install process, just in case someone else hits the same problem I did, is that I had to run the below command to get Ansible to install and function properly.

[ray@wrlabansbl01 ~]$ pip install --user ansible-pylibssh
Collecting ansible-pylibssh
  Downloading ansible_pylibssh-1.2.2-cp39-cp39-manylinux_2_28_x86_64.whl (2.8 MB)
     |████████████████████████████████| 2.8 MB 4.7 MB/s            
Installing collected packages: ansible-pylibssh
Successfully installed ansible-pylibssh-1.2.2
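
For completeness, the base Ansible install itself is just a pip install (this is the part that's already well documented on the Ansible site), so something along the lines of the below. Treat this as a generic example rather than a transcript from my VM.

python3 -m pip install --user ansible
ansible --version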

Once I had my Ansible server ready to go, it was time to get started. 

Inventory File

The inventory file in Ansible is exactly that: an inventory of the devices and device options within your network for Ansible to use when running your playbooks. There are many ways to create an inventory file, and you can even set up an inventory folder structure depending on your environment's needs, but for the purposes of this post and my little lab environment, I've just created a single inventory file that contains the information I require for my virtual NX-OS switches. The inventory file can be in either YAML or INI format, but I've chosen YAML for two reasons: one, I find it nicer to look at and read, and two, it's covered in the Cisco exams.

To create an inventory file, log into your Ansible server and first create a basic folder structure. For example, create a root folder called ansible, then under it create two more folders, one called inventories and one called playbooks. Now that you have a very basic folder structure, create your inventory.yml file inside the inventories directory. When creating an inventory file you can specify your devices as individual hosts, or you can configure them in groups and sub-groups. The ability to group devices gives you a hierarchical structure for your inventory configuration and greater control over which devices a playbook runs against. For example, you could have a group structure like the one below, where I have created groups based on device function in my lab environment such as leafs, spines etc., or you could group them by NX-OS version or geographical location.

leafs:
  hosts:
    wrlablfsw01:
      ansible_host: 10.199.200.30
    wrlablfsw02:
      ansible_host: 10.199.200.31

spines:
  hosts:
    wrlabspsw01:
      ansible_host: 10.199.200.20

bordergw:
  hosts:
    wrlabbgsw01:
      ansible_host: 10.199.200.11

core:
  hosts:
    wrlabcr01:
      ansible_host: 10.199.200.10

vxlan:
  hosts:
    wrlablfsw01:
      ansible_host: 10.199.200.30
    wrlablfsw02:
      ansible_host: 10.199.200.31
    wrlabspsw01:
      ansible_host: 10.199.200.20

network_lab:
  children:
    leafs:
    spines:
    bordergw:
    core:

lab_switches:
  children:
    leafs:
    spines:

When you're creating groups for your devices, the idea is to group them in such a way that you can run playbooks against the whole group. For example, if you only want to run a playbook against your spine switches, you can specify spines in your run command options, or if you want to run it against all of your lab devices, you can specify network_lab as the group. You can also group devices based on your playbook variables. For example, if you have a playbook that requires the same variable across multiple hosts, you can create a parent group for those hosts that contains those particular variables. 
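
As a quick example of what that looks like at run time, you can narrow a playbook run down to one of your groups with the --limit option. The playbook file name here is just a placeholder.

ansible-playbook -i inventories/inventory.yml playbooks/show_interfaces.yml --limit spines
ansible-playbook -i inventories/inventory.yml playbooks/show_interfaces.yml --limit network_lab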

If you decide not to create groups, or even an inventory file for that matter, you can also specify all of these options on the Ansible command line. One more thing to note is that if you don't create any groups, all devices in your inventory file are by default in two groups: one called all and one called ungrouped. If you do group your devices, they are still in the group called all, but instead of ungrouped they belong to the groups you created. Also note that hosts can be in multiple groups. In my example inventory file above, the leaf and spine switches appear in both their leafs and spines groups and in the vxlan group. This allows you to create various playbooks and run them based on group function, device type or even network location, depending on your structure. 
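
If you want to sanity check how Ansible actually sees your groups, including the built in all group, the ansible-inventory command will draw the hierarchy for you:

ansible-inventory -i inventories/inventory.yml --graph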

In the inventory example above, you will also see that I have placed some groups in a parent/child format. This gives me yet another way to group like devices together. For example, you might have a prod or non-prod parent group that contains all hosts in your production or non-production environment, or a parent group for all devices in a geographical location. The example above is a very simple and basic inventory file that really only contains the IP address of each host and the group it belongs to. In your own inventory file you can also configure a number of other attributes for each host, such as the username and password to connect with or which Ansible connection plugin to use. You can also specify these via the command line, but isn't the whole point of automation to automate these things? To set these attributes in your inventory file, you configure them either under each host or under a group, as in the example below, depending on what you're setting and what it will be used for. Attributes are configured as vars in your inventory file and there are a number of them you can specify. 

network_lab:
  children:
    leafs:
    spines:
    bordergw:
    core:
  vars:
    ansible_connection: ansible.netcommon.network_cli
    ansible_network_os: nxos
    ansible_user: ansible

The above attributes tell Ansible that the devices in this group are NX-OS boxes and that the username to log in with is ansible. The ansible_connection line specifies which connection plugin to use, in this case network_cli, while ansible_network_os tells that plugin which network platform it is talking to. There are so many plugins and modules available for Ansible, and so many examples of how to configure and use them, that I won't go much further into it here, especially since I'm still learning myself.
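
For what it's worth, the same sort of vars can also be set on an individual host instead of a group if one device needs something different. A minimal made-up example (the admin username is just for illustration):

core:
  hosts:
    wrlabcr01:
      ansible_host: 10.199.200.10
      ansible_user: admin
      ansible_connection: ansible.netcommon.network_cli
      ansible_network_os: nxos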

Playbooks

Ansible uses playbooks to define its automation tasks. There's a lot of information and documentation already out there on how to create playbooks and the various modules and options, so there's not much point in repeating it all here. I'll just provide a brief overview. 

Ansible playbooks are written in YAML format and are reusable, repeatable scripts that run a set of tasks in sequence, completing each task before moving on to the next. Each playbook contains one or more plays, with each play containing one or more tasks. You can create playbooks to do any number of things, from installing and configuring applications on a machine to writing a base configuration for a switch, and there is a large number of plugins and utilities you can use to complete these tasks. I can easily see how Ansible would be useful for server folks who build and deploy servers daily and need a quick, repeatable process, but in all honesty, when it comes to networks I still find you might as well just have a config template and copy and paste it. The amount of work to write and run a script is about the same, if not more, than using a config template. Anyway, back to the lab.

When you write a playbook, you need to specify the hosts that the play is for. When it comes to Ansible and network automation, it works differently to Linux/Windows automation in that the play's tasks are executed on the Ansible control node, not on the network device itself. This is because Ansible modules normally need Python on the managed host, and most networking gear can't run them that way.

Example Playbooks

Personally, I find that the best way to learn or understand something is to do it, so let me run through a couple of example playbooks I created for network tasks. It took me quite some time and a lot of trial and error, but I eventually managed to create a few playbooks that do a few things. This first playbook simply logs into the devices in your inventory file and outputs the details of all physical and layer 3 interfaces on each device.

---
- name: NX-OS Playbook
  hosts: network_lab
  tasks:

  - name: Gather Interface facts
    cisco.nxos.nxos_facts:
      gather_subset: interfaces
  - name: Display Interfaces facts
    debug:
      msg: "The host {{ ansible_net_hostname }} has the following interfaces {{ ansible_net_interfaces }}"

  - name: Gather L3 Interface facts
    cisco.nxos.nxos_facts:
      gather_subset: interfaces
      gather_network_resources: l3_interfaces
  - name: Display Interfaces facts
    debug:
      msg: "The host {{ ansible_net_hostname }} has the following layer 3 interfaces {{ ansible_net_all_ipv4_addresses }}"

The above playbook uses the nxos_facts module to gather information about the device, in this particular instance its interfaces. I then used the debug msg feature to output the list of interfaces on that specific host. This doesn't serve any purpose other than my own visual proof that the command ran successfully. The second part of the playbook does the same thing, but only for the layer 3 interfaces. All in all, this wasn't too difficult to do. One thing I will mention is: always check your spaces at the beginning of each line. The indentation of your commands is crucial, and I've lost count of the number of times I've been troubleshooting a script only to find I hadn't indented a line correctly. With YAML playbooks you start in column one and each nested level is indented a further two spaces. For example, in the above, the line below cisco.nxos.nxos_facts is indented two spaces further than the line before it. For more information on the cisco.nxos.nxos_facts module, see the Ansible website. 
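
One habit that saved me a fair bit of indentation pain is getting Ansible to parse the playbook before actually running it. The playbook file name below is just an example.

ansible-playbook -i inventories/inventory.yml playbooks/interface_facts.yml --syntax-check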

This next playbook will output the BGP neighbour summary using the command show bgp ipv4 unicast summary. This playbook takes advantage of the nxos_command module. 

- name: NX-OS Basic Config
  hosts: core

  tasks:

  - name: Show BGP Peers
    cisco.nxos.nxos_command:
      commands: 
        - command: show bgp ipv4 unicast summary
          output: json
    register: output

  - name: Display BGP Peers
    debug:
      msg: "{{ output.stdout }}"

In the above example, I use the nxos_command module to run an actual command on the switch itself. The register keyword tells Ansible to store the result of the task in a variable called output, which in this case will contain the output of the show bgp ipv4 unicast summary command. Once again, I've used debug simply to display the output of the play for my own confirmation that it worked. 
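
One small thing that tripped me up: the registered variable holds a list of results, one entry per command in the commands list, so if you only want the first command's output you can index into it. A minimal sketch:

  - name: Display only the first command output
    debug:
      msg: "{{ output.stdout[0] }}"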

The next playbook makes use of the nxos_hostname and nxos_feature modules to configure the device hostname and enable some required features. 

- name: NX-OS Base Config
  hosts: core 
  tasks:

# Configure the device hostname

  - name: Configure device hostname
    cisco.nxos.nxos_hostname:
      config:
        hostname: "{{ inventory_hostname }}"

# Enable the required features

  - name: Enable tacacs features
    cisco.nxos.nxos_feature:
      feature: tacacs+
      state: enabled

  - name: Enable OSPF Feature
    cisco.nxos.nxos_feature:
      feature: ospf
      state: enabled

  - name: Enable BGP Feature
    cisco.nxos.nxos_feature:
      feature: bgp
      state: enabled

  - name: Enable OSPFv3 Feature
    cisco.nxos.nxos_feature:
      feature: ospfv3
      state: enabled

  - name: Enable SVI Feature
    cisco.nxos.nxos_feature:
      feature: interface-vlan
      state: enabled

  - name: Enable BFD Feature
    cisco.nxos.nxos_feature:
      feature: bfd
      state: enabled

  - name: Enable SCP Server Feature
    cisco.nxos.nxos_feature:
      feature: scp-server
      state: enabled

This one I think is pretty self explanatory. The hostname module will set the device hostname to whatever is defined in the inventory file, via the built-in variable inventory_hostname. Note that if a value in YAML starts with a variable, you must wrap the whole thing in quotes, with a " before the variable and one at the end of the line. The rest of the playbook enables the various features that I will be using on my lab switches, such as TACACS+, BGP, OSPF etc. 
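
A quick made-up snippet to illustrate the quoting rule:

# This fails to parse because YAML treats the leading {{ as the start of a dictionary
hostname: {{ inventory_hostname }}

# This works because the quotes make it a plain string for the template engine to fill in
hostname: "{{ inventory_hostname }}"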

This next playbook is the method I used to configure an ACL. The Ansible documentation shows a few different ways of doing this, but this was the only one I could get to work correctly. It's not a particularly easy way to do it, nowhere near as simple as just writing permit this, deny that, but it worked. 

- name: NX-OS Base Config
  hosts: core 
  tasks:

## Multiple methods to configure ACLS ##

  - name: Configure VTY ACL
    cisco.nxos.nxos_acls:
      config:
        - afi: ipv4
          acls:
            - name: ACL_VTY_ACCESS
              aces:
                - sequence: 10
                  grant: permit
                  protocol: tcp
                  source:
                    prefix: 10.1.10.0/24
                  destination:
                    any: true
                    port_protocol:
                      eq: 22
                - sequence: 20
                  grant: permit
                  protocol: tcp
                  source:
                    prefix: 10.1.3.0/24
                  destination:
                    any: true
                    port_protocol:
                      eq: 22
                - sequence: 30
                  grant: permit
                  protocol: tcp
                  source:
                    prefix: 10.1.1.0/24
                  destination:
                    any: true
                    port_protocol:
                      eq: 22
      state: overridden

The ACL is pretty straightforward; it simply allows SSH from the subnets 10.1.10.0/24, 10.1.3.0/24 and 10.1.1.0/24. In all honesty I can't recall exactly why the overridden state was needed to get it to work, but as I understand it, it tells the module to replace whatever ACL configuration is already on the device with what's defined in the play. I had quite a lot of trouble with ACLs, but eventually got this to apply correctly. 

This next playbook configures SNMPv3 access to the local device. 

- name: NX-OS Base Config
  hosts: core
  vars:
    snmp_svr: 10.1.1.4
    snmp_usr: cacti
    snmp_priv: PrivPasswd
    snmp_pass: SnmpPasswd
  
  tasks:

# Configure SNMP users and SNMP v3

  - name: Configure SNMPv3 and User
    cisco.nxos.nxos_snmp_server:
      config:
        aaa_user:
          cache_timeout: 36000
        contact: networks@wr-mem.net
        location: HOME lab
        hosts:
          - host: "{{ snmp_svr }}"
            traps: true
            version: '3'
            auth: NMS
        users:
          auth:
            - user: "{{ snmp_usr }}"
              group: network-operator
              authentication:
                algorithm: sha
                password: "{{ snmp_pass }}"
                localized_key: true
                priv:
                  privacy_password: "{{ snmp_priv }}"
                  aes_128: true
          use_acls:
            - user: "{{ snmp_usr }}"
              ipv4: ACL_SNMP_ACCESS

In the above playbook, notice that I have specified certain variables in the playbook itself. With Ansible there are multiple places you can define variables, ranging from within the playbook itself, to the inventory, to dedicated group and host variable files. Also note that I've specified the SNMP passwords in plain text inside the playbook file. I've only done that here as an example; in the real world you would take advantage of Ansible Vault to encrypt your passwords and sensitive information. 
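
As an example of one of those other places (not something I'm actually using in this little lab), you can drop variables into a file named after a group and Ansible will pick them up automatically for every host in that group. A hypothetical inventories/group_vars/network_lab.yml might look like this:

ansible_connection: ansible.netcommon.network_cli
ansible_network_os: nxos
ansible_user: ansible
snmp_svr: 10.1.1.4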

And last but not least, this next playbook configures the TACACS+ servers. 

- name: NX-OS Base Config
  hosts: core
  vars:
    tacacs1: 10.1.1.5
    tacacs2: 10.1.1.6
    tacacs_key: Mys3cr3t!

  tasks:

# Configure the tacacs servers

  - name: Configure Primary TACACs Servers
    cisco.nxos.nxos_aaa_server_host:
      state: present
      server_type: tacacs
      tacacs_port: default
      host_timeout: 10
      address: "{{ tacacs1 }}"
      key: "{{ tacacs_key }}"

  - name: Configure Secondary TACACs Servers
    cisco.nxos.nxos_aaa_server_host:
      state: present
      server_type: tacacs
      tacacs_port: default
      host_timeout: 10
      address: "{{ tacacs2 }}"
      key: "{{ tacacs_key }}"

Once again, this is pretty self explanatory, and the server addresses and TACACS+ key are stored directly in the playbook for demonstration purposes only. 

All in all this didn't take me too long to get working. The hardest part was the ACLs; I'm not sure what I was doing wrong, but I had a lot of trouble with them. In the end I have a single playbook that applies a very simple base config with everything I need on the lab switches, and it's easy to deploy. The last thing to do is run the playbooks. To run a playbook, you use the command ansible-playbook -i <inventory-file> /path/to/playbook. 

/usr/local/bin/ansible-playbook -i /inventories/inventory.yml /playbooks/lab_base_config.yml 
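
One option I found handy while testing (a generic Ansible flag, not something specific to these playbooks) is a dry run, which reports what would change without pushing any config. Most of the NX-OS resource modules support it, although not every module does.

/usr/local/bin/ansible-playbook -i /inventories/inventory.yml /playbooks/lab_base_config.yml --check --diff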

You can also create an ansible.cfg file in your ansible root directory that contains certain settings, such as the default inventory file, which means you don't need the -i option when you run your ansible-playbook command. 
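
A minimal sketch of what that ansible.cfg could look like, assuming my lab folder layout:

[defaults]
inventory = ./inventories/inventory.yml
host_key_checking = False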

Using Ansible Vault

I mentioned earlier about encrypting variables in a vault file that you can then use in your playbooks. This is done using the ansible-vault command. To create a new vault file, use the command ansible-vault create /file/name.yml. You can also view or edit a vault file using the view and edit keywords. Once you enter the create command, you will be prompted for a password. This password is used to encrypt and decrypt the contents of the vault file and is also required when running a playbook that uses an encrypted variable. Once you have entered the password, you will be presented with a blank text document where you can enter the data you want encrypted. For example, below I create a file called my_secret.yml containing a variable my_secret that holds my super secret password. 

[ray@wrlabansbl01 ansible]$ ansible-vault create my_secret.yml
[ray@wrlabansbl01 ansible]$ ansible-vault view my_secret.yml 
my_secret: s3cr3tp@ss

This creates an encrypted file, and I can then reference the variable my_secret in a playbook to supply the password. You can verify that the contents of the file are actually encrypted on disk by simply cat-ing it. 

[ray@wrlabansbl01 ansible]$ cat my_secret.yml 
$ANSIBLE_VAULT;1.1;AES256
6562366361636262363774594834626461303764313439313936663234323234393832633139653435303375705995478
6561383531633439333363363865626264643232326430630a33376463386565646439636332653453769194673
3335343432623664313464376466366138626335623635616537383662363465313762362885448948818914705478456395379141152475544
6434363133643436360a336332643565626539363332373035316131653930333331666230663861784
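
To actually consume that vaulted variable, one standard approach (a sketch, not one of my lab playbooks) is to pull the vault file in with vars_files and let the playbook reference the variable as normal:

- name: NX-OS Base Config
  hosts: core
  vars_files:
    - my_secret.yml
  tasks:

  - name: Prove the vaulted variable decrypts
    debug:
      msg: "The secret is {{ my_secret }}"

You then run the playbook with the --ask-vault-pass option (or point at a password file with --vault-password-file) so Ansible can decrypt my_secret.yml on the fly.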

And that's it for this post. While I'm still not amazed or even slightly impressed by automation, I did enjoy creating these scripts and learning about Ansible. Would I have been able to configure five lab switches with the same config in a fraction of the time just by copying and pasting a template? Hell yes! But that's not the point. 

Thanks for checking out my blog. If you've noticed anything missing or have any questions, please leave a comment and let me know. 

 
