Friday, November 22, 2019
Checking existence of an attribute value within a dictionary in Ansible
When you have to search within a list of dictionaries for the existence of a certain attribute value, you can use the 'selectattr' filter.
You specify the attribute you are interested in and the values to check it against; after converting the result to a list, you take its length. If the length is greater than zero, your list contains at least one item matching the criteria.
downlink_ports|selectattr('type','in','dslam,isam,lightspan')|list|length > 0
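As a rough illustration, here is a pure-Python equivalent of that expression, using hypothetical data (note that Jinja's 'in' test against the string 'dslam,isam,lightspan' is a substring check):

```python
# Hypothetical data shaped like the downlink_ports variable above
downlink_ports = [
    {"port": "Gi0/1", "type": "dslam"},
    {"port": "Gi0/2", "type": "olt"},
]

# Equivalent of: downlink_ports|selectattr('type','in','dslam,isam,lightspan')|list|length > 0
matches = [p for p in downlink_ports if p["type"] in "dslam,isam,lightspan"]
print(len(matches) > 0)
```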
References:
https://jinja.palletsprojects.com/en/master/templates/#list-of-builtin-tests
http://www.oznetnerd.com/jinja2-selectattr-filter/
Saturday, August 31, 2019
Formatting numbers in your Ansible/Jinja templates
Quite often you may have to use an index somewhere in your templates. I think the most common scenario is tracking the loop index within a 'for' loop in Jinja2 or a loop in Ansible.
One such scenario is shown below, where I'm trying to produce the configuration for the uplink ports of a router. Notice that I'm using the index variable to differentiate the configuration sections of each port and store each one in a different file.
- name: Generate uplink ports config
  template:
    src: configtemplates/{{ ansible_network_os }}/uplink_ports_cfg
    dest: deviceconfigs/{{ inventory_hostname }}/{{ index }}_uplink_ports_cfg
  delegate_to: localhost
  changed_when: false
  loop: "{{ uplink_ports }}"
  loop_control:
    index_var: index
  when: uplink_ports is defined
The filenames produced look like the following.
0_downlink_ports_cfg
1_downlink_ports_cfg
2_downlink_ports_cfg
..
8_downlink_ports_cfg
9_downlink_ports_cfg
10_downlink_ports_cfg
After producing the config sections, I'll use the 'assemble' module in Ansible to merge all the sections into a common file. I expect the merging to follow the index sequence, which is the default behavior of the 'assemble' module, based on filename sorting.
And everything works well, as long as the index is smaller than 10. If you exceed 10, the filename sorting of 'assemble' will mess things up, merging the 10th section before the 2nd.
In this case you can start indexing with a two-digit number, and thankfully there's an easy way to do that. Just replace '{{ index }}' with '{{ "%02d"|format(index) }}'.
The filenames produced in this case are in the following format, and 'assemble' merges them correctly.
00_downlink_ports_cfg
01_downlink_ports_cfg
02_downlink_ports_cfg
..
08_downlink_ports_cfg
09_downlink_ports_cfg
10_downlink_ports_cfg
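A quick pure-Python sketch shows why the padding matters for lexicographic filename sorting (the filenames here are illustrative):

```python
# 'assemble' merges fragments in lexicographic filename order
unpadded = ["%d_cfg" % i for i in range(11)]
padded = ["%02d_cfg" % i for i in range(11)]  # same formatting as "%02d"|format(index)
print(sorted(unpadded)[:3])  # ['0_cfg', '10_cfg', '1_cfg'] -- 10 sorts before 1
print(sorted(padded)[:3])    # ['00_cfg', '01_cfg', '02_cfg'] -- correct order
```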
Labels:
ansible,
jinja2,
network automation,
python
Checking duplicate interfaces or addresses in Ansible
Consider the scenario that you have the following variable consisting of information about the uplink ports of a router.
uplink_ports:
  - {port: "Te0/0/26", ip: 10.10.10.11/31, peer_name: neighbor1, peer_port: Gi0/0/2 }
  - {port: "Te0/0/27", ip: 10.10.10.13/31, peer_name: neighbor2, peer_port: Gi0/0/2 }
Before proceeding to config generation and applying it on the router, it's a good idea to check for duplicates. Very often, usually when we copy-paste, we forget to change all the parameters, and this may result in unexpected failures on the network.
One of the approaches is to check for duplicates using the 'assert' module. You actually ask Ansible to check certain conditions and report back with either a success or a fail message.
- name: Check uplink ports for duplicates
  assert:
    that: uplink_ports|map(attribute='port')|list|length == uplink_ports|map(attribute='port')|list|unique|length
    fail_msg: "Duplicates exist in your uplink ports variable. Please revise."
  delegate_to: localhost
  changed_when: false
The tricky part here is the condition you specify to the module. As you see, we compare
"uplink_ports|map(attribute='port')|list|length" to "uplink_ports|map(attribute='port')|list|unique|length", but what does it mean?
- map(attribute='port')|list ==> Will produce a list of items including only the 'port' key of our variable
- unique ==> Will remove all the duplicates from the previous list
- length ==> Will calculate the length of the list
So, we compare the length of the list to the length of the same list after having removed the duplicates. This means if our variable had duplicates in the first place, the length of the lists won't match. If no duplicates existed, the length would be the same before and after the 'unique' operation.
You can actually assert multiple conditions at once. If you want to check both the 'port' and the 'ip' keys for duplicates you can do the following.
- name: Check uplink ports for duplicates
  assert:
    that:
      - uplink_ports|map(attribute='port')|list|length == uplink_ports|map(attribute='port')|list|unique|length
      - uplink_ports|map(attribute='ip')|list|length == uplink_ports|map(attribute='ip')|list|unique|length
    fail_msg: "Duplicates exist in your uplink ports variable. Please revise."
  delegate_to: localhost
  changed_when: false
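For illustration, a pure-Python sketch of the same length-versus-unique comparison, with hypothetical data (Python's set() plays the role of the 'unique' filter):

```python
# Pure-Python equivalent of the assert conditions above
uplink_ports = [
    {"port": "Te0/0/26", "ip": "10.10.10.11/31"},
    {"port": "Te0/0/27", "ip": "10.10.10.11/31"},  # duplicate ip, on purpose
]
ports = [p["port"] for p in uplink_ports]  # map(attribute='port')|list
ips = [p["ip"] for p in uplink_ports]      # map(attribute='ip')|list
print(len(ports) == len(set(ports)))  # no duplicate ports
print(len(ips) == len(set(ips)))      # duplicate ip caught
```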
Labels:
ansible,
duplicates,
jinja2,
network automation
Wednesday, August 28, 2019
Working with dynamic inventories in Ansible using PHP (part 2)
Following the discussion in part 1, here is the PHP code that returns the JSON structure required by Ansible. It's quite complex and there are probably better ways to do it, but this is my way.. It is optimized so that a single loop over each table populates the structure as needed.
The database behind the scenes is PostgreSQL and you can see the structure below. Don't focus on the exact SQL query strings, as they are based on my specific data model. Depending on your model, you will have to write your own queries. In the end, irrespective of your data model, you need two table structures: one with Group/Level associations and another with Group/Hostname associations. In my case I don't need to include any 'vars' section, so it's not shown below.
Concerning the group membership table, it's quite straightforward. The group table, however, requires some special attention. You need to carefully set the grouplevel of each group, keeping in mind the 'modulo 100' and 'modulo 20' rules. Each group with a grouplevel that is a multiple of 100 will be a major group. Each group with a grouplevel that is a multiple of 20 will be a subgroup within the major group. It may sound complex, but you actually do it once and you don't need to change it frequently.
[Screenshot: group table]
[Screenshot: group membership table]
Below you can see the exact PHP code, along with some comments to help you understand how it works. I have tried to remove some non-critical parts to make the code more readable. Normally you should add error checking in several places.
<?php
// Create connection to database
$conn = pg_connect($conn_string);
// Query the group table that allows building of groups and children
$sql = "SELECT groupname, grouplevel FROM YOUR_TABLE ORDER BY grouplevel ASC";
$groups = pg_query($sql);
// Query the group membership table that allows building of hosts in groups
$sql = "SELECT groupname, hostname FROM YOUR_TABLE ORDER BY groupname ASC";
$groupmembers = pg_query($sql);
// Set some helper variables
$response = array();
$parentgroup = "";
$subgroup = "";
$subgrouplevel = 0;
// We loop once over the group list. We create the respective arrays and
// identify the children of each group based on the grouplevel hierarchy.
// The hierarchy is based on a modulo 100 function for major groups and
// modulo 20 for subgroups. Very important to keep in mind that the group
// list is sorted based on grouplevel.
// To avoid conflicts with subgroups we restrict the subgroup range to +20 from
// the subgroup level
// Example of the hierarchy we achieve for groups residing in the range 200-299:
// 200 ( 201, 202, 203, 220, 240), 220 ( 221, 222), 240 ( 241, 242)
while ($row = pg_fetch_array($groups, null, PGSQL_ASSOC)) {
    $group = trim($row['groupname']);
    $level = trim($row['grouplevel']);
    if (!array_key_exists($group, $response)) $response[$group] = array();
    // if true we have identified a major group; just store the name
    if ($level % 100 == 0) $parentgroup = $group;
    else
    {
        // if true we have identified a subgroup; store the name and the level and continue
        if ($level % 20 == 0) { $subgroup = $group; $subgrouplevel = $level; }
        // at the next iteration, if we are within the subgroup limits,
        // we set the current group as a subgroup child
        if ($level > $subgrouplevel && $level < ($subgrouplevel + 20))
        {
            if (!array_key_exists('children', $response[$subgroup])) $response[$subgroup]['children'] = array();
            array_push($response[$subgroup]['children'], $group);
        }
        else // otherwise we set the current group as a parentgroup child
        {
            if (!array_key_exists('children', $response[$parentgroup])) $response[$parentgroup]['children'] = array();
            array_push($response[$parentgroup]['children'], $group);
        }
    }
}
// We have finished setting children for each group. Time to deal with hosts
// We loop over the group membership list. We identify the hosts and set them
// to their respective groups
while ($row = pg_fetch_array($groupmembers, null, PGSQL_ASSOC)) {
    $group = trim($row['groupname']);
    $host = trim($row['hostname']);
    if (!array_key_exists('hosts', $response[$group])) $response[$group]['hosts'] = array();
    array_push($response[$group]['hosts'], $host);
}
// release resources and close connection to database
pg_free_result($groupmembers);
pg_free_result($groups);
pg_close($conn);
// encode the array as JSON and send it back
echo json_encode($response);
?>
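To make the grouping rules easier to follow, here is a rough Python sketch of the same modulo-100/modulo-20 logic (the group names and levels are hypothetical, chosen to mirror the example output below):

```python
# Rough re-implementation of the modulo-100 / modulo-20 hierarchy logic.
# Input must be sorted by level, like the SQL query's ORDER BY grouplevel ASC.
def build_children(groups):
    response, parent, sub, sublevel = {}, "", "", 0
    for name, level in groups:
        response.setdefault(name, {})
        if level % 100 == 0:
            parent = name                    # major group
        else:
            if level % 20 == 0:
                sub, sublevel = name, level  # subgroup
            if sublevel < level < sublevel + 20:
                response[sub].setdefault("children", []).append(name)
            else:
                response[parent].setdefault("children", []).append(name)
    return response

# Hypothetical levels for the 200-299 range, as in the comments above
demo = [("south", 200), ("crete", 201), ("athens", 220),
        ("lab", 221), ("islands", 240), ("rodos", 241)]
print(build_children(demo))
```

Feeding it these sample groups yields the same children structure that the PHP loop builds from the group table.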
Executing the code, you would get something like the following JSON.
{"south":{"children":["crete","athens","islands"]},"crete":{"hosts":["switch2"]},"athens":{"children":["lab"]},"lab":{"hosts":["router1"]},"islands":{"children":["mikonos","rodos"]},"mikonos":[],"rodos":[],"north":{"children":["thessaloniki"]},"thessaloniki":{"hosts":["switch1"]},"os":{"children":["ios","iosxr","nxos","junos"]},"ios":{"hosts":["router1"]},"iosxr":[],"nxos":{"hosts":["switch1"]},"junos":[],"function":{"children":["metro","core","datacenter"]},"metro":{"hosts":["router1"]},"core":[],"datacenter":{"children":["spine","leaf"]},"spine":{"hosts":["switch1"]},"leaf":[]}
This JSON structure is accepted by Ansible and works quite well, as you can see below. I use the dynamic inventory and ask Ansible to return the groups associated with a specific host.
ansible-playbook -i ./get_inventory.php --limit router1 get_host_groups.yml
PLAY [all] *********************************************************************************************************
TASK [show group associations for the host(s)] *********************************************************************************************************
ok: [router1 -> localhost] =>
msg:
- athens
- function
- ios
- lab
- metro
- os
- south
PLAY RECAP **********************************************************************************************************
router1 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
As you can see, inheritance works well: although I have set only the 'lab', 'ios' and 'metro' groups for my host, it is also associated with the parent groups, which are not explicitly specified in the membership table.
Labels:
ansible,
dynamic inventory,
json,
network automation,
php,
postgresql
Tuesday, August 27, 2019
Working with dynamic inventories in Ansible using PHP (part 1)
As you probably know, Ansible has a great group inheritance mechanism. If you build the inventory file carefully, the benefits of inheritance are significant.
If you have a host belonging to group 'childgroup' and this group is a child of 'parentgroup' the variables set in the 'parentgroup' are inherited by the 'childgroup'. This way you avoid setting variables in multiple places, which is considered a best practice when writing playbooks.
Having to maintain a text file with few groups and hosts works quite well, but when you want to scale, you probably want to keep a database with your groups, hosts and the respective membership. The choice of database is a matter of personal preference, in my case I chose PostgreSQL.
There are several ways to do that and Ansible accepts several types of dynamic inventories. Instead of giving a filename as inventory in the command line, you specify an executable file written in any language you like, that returns json encoded data. I'm more familiar with PHP so I decided to use it instead of Python or another language. The command line looks like the following
ansible-playbook -i get_inventory.php my_sample_playbook.yml
The data that is returned by the script must be in the format shown below and is documented at Developing dynamic inventory
{
    "group001": {
        "hosts": ["host001", "host002"],
        "vars": {
            "var1": true
        },
        "children": ["group002"]
    },
    "group002": {
        "hosts": ["host003", "host004"],
        "vars": {
            "var2": 500
        },
        "children": []
    }
}
Keep in mind that you don't have to return the 'vars' and 'children' sections if you don't actually utilize them. Ansible will accept the data structure without any complaint, even with just the 'hosts' section.
Well.. it's quite easy to say it.. but not so easy to develop a script that will return this kind of structure.. We'll see that in part 2 of this story!
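Since any language that can print JSON will do, a minimal Python sketch producing the documented structure could look like this:

```python
import json

# Minimal inventory matching the documented format above
inventory = {
    "group001": {
        "hosts": ["host001", "host002"],
        "vars": {"var1": True},
        "children": ["group002"],
    },
    "group002": {
        "hosts": ["host003", "host004"],
        "vars": {"var2": 500},
        "children": [],
    },
}
print(json.dumps(inventory))
```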
Labels:
ansible,
dynamic inventory,
json,
network automation,
php,
postgresql
Saturday, August 24, 2019
Creating access-list wildcard masks for Cisco in Ansible
Working with access lists for Cisco IOS in Ansible is almost a nightmare in itself, because you need to handle the exact position of each entry and you may have to remove the complete access list before you do anything.
One more thing to take into account is handling of wildcard bits. If you have defined your variables in CIDR notation you need to calculate the wildcard (or don't-care) bits before actually using them.
Just recently I found out there is a filter available in Ansible that does exactly this calculation. It's an option of the 'ipaddr' filter called 'hostmask'. This filter doesn't seem to be very popular and I found very few references online; nevertheless, it works quite well!
{{ mycidrvariable | ipaddr('hostmask') }}
For example if you apply this filter on '10.10.8.16/28' you will get '0.0.0.15'
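If you want to verify the result outside of Ansible, Python's standard ipaddress module exposes the same calculation:

```python
import ipaddress

# hostmask is the bitwise inverse of the netmask (the ACL wildcard bits)
net = ipaddress.ip_network("10.10.8.16/28")
print(net.hostmask)  # 0.0.0.15
```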
Tuesday, August 6, 2019
To be or not to be.. using Declarative Intent modules in Ansible?
Declarative Intent modules in Ansible are device specific modules that configure a specific feature on a networking device. Such modules are nxos_bgp, nxos_ntp, iosxr_bgp and several others.
Sounds like a great thing.. there are certain cases where they really help, but when you start digging deeper you sometimes get a not-so-nice surprise.
Consider the scenario where you want to create several config sections for a device and push them to the device. You have actually two main options
- Use multiple Jinja2 templates, create the final config file and push it to the device
- Use multiple declarative intent modules, one for each feature you need to configure
Someone might say 'Why should I bother learning Jinja2 and use a template.. Let's go for the easier path using a specific module'. That's what I initially thought, and I started preparing my config using such modules.
For simple tasks and small playbooks the modules are quite good, but if you start writing more complex ones and use more 'exotic' features, then you'll certainly have problems. That's what I found out:
- If you want to build a playbook that will create config for several platforms, you will need the respective module for each one of them. Guess what.. There is no parity between platforms for each module. You may find the specific module for nxos, but no module for ios or ios-xr.
- If you are lucky enough to find the required modules, you realize that one of them supports the vrf option that you need, but the rest don't.
- If you are still lucky and get to a point where you want to optimize the execution of your playbook, by using the 'aggregate' option, you realize it's supported only for a few of your modules. For the rest you just wait..
- If you got to this point, you are really lucky and your playbook executes quite well. But then you realize that each module you call executes a 'show running-config' on your device. This happens for every module, every time you call it. What if you have a device with a long config that takes some time to return? Not so effective, don't you think?
These are some of the problems I got through and so I decided to go with Jinja2 for that playbook. Working with Jinja2 had also some constraints, but the final result was much better.
I'm not saying that declarative intent modules are not useful. Sometimes they certainly help, but don't be fooled, everything comes with a cost..
Labels:
ansible,
jinja2,
network automation
Sunday, August 4, 2019
Simple list vs dictionary in Ansible (and how easily you can mess up)
I've been working on a relatively simple task in Ansible, namely create the VRF configuration for a Cisco router.
I would use a 'for' loop in a Jinja2 template and iterate over a variable that holds my VRF parameters.
Now, it depends on how you have declared your variable: as a list or as a dictionary. There are valid use cases for either method; the point is to understand what you're doing and why..
List example
router_vrfs:
  - {vrf_name: "VRF1", vrf_rd: 100, vrf_import_rt: "1:100", vrf_export_rt: "1:100"}
  - {vrf_name: "VRF2", vrf_rd: 200, vrf_import_rt: "1:300", vrf_export_rt: "1:200"}
  - {vrf_name: "VRF3", vrf_rd: 200, vrf_import_rt: "1:300", vrf_export_rt: "1:300"}
Dictionary example
router_vrfs:
  "VRF1": {vrf_rd: 100, vrf_import_rt: "1:100", vrf_export_rt: "1:100"}
  "VRF2": {vrf_rd: 200, vrf_import_rt: "1:300", vrf_export_rt: "1:200"}
  "VRF3": {vrf_rd: 200, vrf_import_rt: "1:300", vrf_export_rt: "1:300"}
If you have declared a list, it's an indexed list and you can use router_vrfs[0], router_vrfs[1], router_vrfs[2], or you can use a 'for' loop to access one item after the other, as shown below.
{% for data in router_vrfs %}
vrf context {{ data.vrf_name }}
  rd {{ router_loopback0 }}:{{ data.vrf_rd }}
  address-family ipv4 unicast
    route-target import {{ data.vrf_import_rt }}
    route-target export {{ data.vrf_export_rt }}
{% endfor %}
If you have declared a dictionary, things are a bit more complex. In this case you actually need to define two variables within your loop and use the 'items()' function on the variable. In the following snippet you can also see that I'm using a 'sort' filter, because dictionaries are unordered by default and you could get a different result every time.
{% for name, data in router_vrfs.items()|sort(false,true) %}
vrf context {{ name }}
  rd {{ router_loopback0 }}:{{ data.vrf_rd }}
  address-family ipv4 unicast
    route-target import {{ data.vrf_import_rt }}
    route-target export {{ data.vrf_export_rt }}
{% endfor %}
And now the messy part..
What would happen if you declare a dictionary and by mistake put a dash (-) at the beginning of each line?
Wrong dictionary example
router_vrfs:
  - "VRF1": {vrf_rd: 100, vrf_import_rt: "1:100", vrf_export_rt: "1:100"}
  - "VRF2": {vrf_rd: 200, vrf_import_rt: "1:300", vrf_export_rt: "1:200"}
  - "VRF3": {vrf_rd: 200, vrf_import_rt: "1:300", vrf_export_rt: "1:300"}
You have just created a simple list, in which each element is a dictionary!
Each element contains only one key/value pair, but the damage is done. You can see that if you try to use any of the above 'for' loop examples, nothing will work. Instead, you would need a nested loop like the following, which is just the wrong way of doing things..
{% for dict in router_vrfs %}
{% for key, value in dict.items() %}
...
{% endfor %}
{% endfor %}
So, beware how you declare variables!
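As a rough Python illustration, here are the three YAML shapes as the data structures Ansible actually sees, and the loops each one forces (abbreviated to two keys):

```python
# The three declarations from above, as Python structures
as_list = [{"vrf_name": "VRF1", "vrf_rd": 100}, {"vrf_name": "VRF2", "vrf_rd": 200}]
as_dict = {"VRF1": {"vrf_rd": 100}, "VRF2": {"vrf_rd": 200}}
wrong = [{"VRF1": {"vrf_rd": 100}}, {"VRF2": {"vrf_rd": 200}}]  # list of one-key dicts!

for data in as_list:                        # list: one loop variable
    print(data["vrf_name"], data["vrf_rd"])
for name, data in sorted(as_dict.items()):  # dict: two loop variables, sorted
    print(name, data["vrf_rd"])
for d in wrong:                             # the nested loop the mistake forces
    for name, data in d.items():
        print(name, data["vrf_rd"])
```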
Labels:
ansible,
jinja2,
network automation
Thursday, June 27, 2019
How to upload an existing project to Gitlab
1. Visit the GitLab site and create a new project following the online instructions
2. Install git on your local computer
3. Configure git username & email
git config --global user.name "username"
git config --global user.email "myemail@example.com"
4. Initialize git in your project directory and upload files
cd your_project_dir
git init
git remote add origin https://gitlab.com/myusername/myproject.git
git add .
git commit -m "Initial version"
git push -u origin master
In case it fails with a rejection you may need to execute "git pull" first and then "git push" as per above.
Wednesday, March 13, 2019
Executing an ansible playbook from within PHP
Ansible seems to be the perfect tool to create a device inventory and keep track of all the devices in the network. With this in mind, I decided to write a web application in PHP with a PostgreSQL database that would hold the data of each device. Below you can see the web app.
Starting with the basics, I used the 'ios_facts' module to get the data and insert it into the database. The Ansible playbook was executed manually; it went through the devices, connecting to one after the other, and the database was updated with the new data.
That worked very well until I decided to trigger the execution of the Ansible playbook from within PHP, in order to create a more dynamic inventory or execute a playbook against a certain network device. I also wanted to get the output from the execution of the playbook and display it to the web user.
Since PHP runs as the www-data user, the privileges for executing anything are rather limited. This is what I had to do in order to make it work.
1. Create a user www-data in PostgreSQL and grant 'connect' privileges to my database
2. Grant 'insert', 'update' & 'select' privileges to the www-data user for the table I was interested in
3. Put the Ansible playbook in the directory where the PHP application files existed
4. Use the PHP function passthru to execute the playbook and get the output back to the web application, as shown below
<?php
passthru("/usr/bin/ansible-playbook -i myinventory mytest.yml");
?>
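For comparison, a rough Python equivalent of the passthru() call would use subprocess; here a simple echo stands in for the actual ansible-playbook invocation, whose path is an assumption about your setup:

```python
import subprocess

# Run a command and capture its output, much like passthru() streams it back.
# Swap ["echo", ...] for the real call, e.g.
# ["/usr/bin/ansible-playbook", "-i", "myinventory", "mytest.yml"]
result = subprocess.run(["echo", "PLAY RECAP"], capture_output=True, text=True)
print(result.stdout.strip())
```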
5. Create an 'ansible.cfg' file in the directory of the playbook to disable host key checking
****ansible.cfg****
[defaults]
host_key_checking = False
6. Modify write permissions of the application directory, to allow Ansible write on the disk
And this is the output that I get on my browser after executing the script. This is just a Javascript alert, but you get the point..
Please keep in mind that my application is running in an internal lab network, where the security of the application is not an issue. The steps above were taken just to make things work in an internal lab environment. You shouldn't take such actions in a production environment, where the security of the application and the network itself is critical.