Preface
If you have ended up on this blog post, you are probably looking either for guidance on how to use the Ansible URI module, or for guidance on how to automate NSX-T with Ansible.
For those of you in the second category, I should mention that there are existing modules maintained by VMware which you can use to automate NSX-T. You can find them on VMware's GitHub page.
So, why am I writing this, you might ask? There are several reasons. The main one is that the modules available online are not updated as often as I would like: the NSX-T API changes quite a bit from version to version, and in many cases I cannot afford to wait five months until the official modules are updated. Another reason is that I like experimenting.
In this post I will show some examples of how the Ansible URI module can be used to automate the NSX-T REST API, but a very similar approach can be used to automate the vSphere REST API or any other REST API.
Environment and Versions
For this example I have deployed a fresh Python venv and installed ansible and jmespath using pip. The following versions were installed (I am sure you can use newer versions if you are doing this in the future):
- Ansible: 2.10.4
- jmespath: 0.10.0
And in my lab I have NSX-T 3.1.0.
REST API and URI
I am sure you know what a REST API is: an API which is consumed by sending requests with HTTP methods such as GET, POST, PUT, PATCH and DELETE to an API endpoint.
The Ansible URI module allows you to use Ansible to work with REST APIs.
When working with a REST API, you will have to deal with so-called payloads (JSON or XML) and with response codes, which tell you whether a request was successful or not.
I am sure you can find much more in-depth information on the Wikipedia page covering REST.
The main concept we need for these examples is that when we get information from the REST API, we receive a response in a certain format (JSON in the case of NSX-T), and when we need to send information to the API, we have to send it in a certain format as well (again JSON for NSX-T). That is why you will see me using JSON templates later.
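Before we get to the NSX-T examples, here is a minimal, generic sketch of what that looks like with the URI module; the endpoint and variable names are purely hypothetical and only illustrate the GET / response code / JSON pattern:

- name: Example GET request against a REST API (hypothetical endpoint)
  uri:
    url: "https://api.example.com/v1/items"
    method: GET
    headers:
      Accept: "application/json"
    status_code: 200            # anything else makes the task fail
    return_content: yes
  register: api_response

- name: Print the parsed JSON response
  debug:
    msg: "{{ api_response.json }}"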
Examples
Create Basic Overlay Segment
To create an Overlay Segment we will need to send a PATCH request to the NSX-T Policy API with a payload describing the details of the Segment we want to create. The payload itself can be created using the Jinja2 templating engine and provided to the playbook as part of the execution.
The playbook we will use to create this Segment is very simple; we will name it create_overlay_segment.yml and this is how it looks:
---
- hosts: 127.0.0.1
  connection: local
  gather_facts: false
  tasks:
    - name: Creating a Segment
      uri:
        url: "https://{{ nsx_manager }}/policy/api/v1/infra"
        force_basic_auth: yes
        validate_certs: "{{validate_certs}}"
        headers:
          Accept: "application/json"
          Content-Type: "application/json"
        user: "{{ nsx_username }}"
        password: "{{ nsx_password }}"
        method: PATCH
        body: "{{ lookup('template','./create_overlay_segment.json.j2') | to_json }}"
        status_code: "200"
        body_format: json
Let's break it down and discuss the important aspects:
- hosts: 127.0.0.1 and connection: local – These two lines tell Ansible that this operation is to be executed on the local host and there is no need to connect to any remote machine.
- gather_facts: false – as we are executing an API call from the local host, there is no need to gather facts, so it is disabled.
- uri – here we specify the module to be used. If you follow Ansible 2.10 standards you can also specify its FQCN, which is ansible.builtin.uri.
- url – this variable defines the full path to the API endpoint. In my example I have replaced the FQDN of the NSX Manager with a variable which I will provide at execution time.
- force_basic_auth: yes – tells the URI module to use Basic Auth, which is simply username and password based authentication. If you are using vIDM as your identity source you will need to do some customisation. More details here.
- validate_certs: "{{validate_certs}}" – defines whether the SSL certificate of the NSX Manager should be validated; it needs to be set to no if self-signed certificates are used.
- headers – sets the necessary HTTP headers for the API endpoint; in this case we only specify that we want to work with the REST API using JSON payloads.
- user and password – self-explanatory, these variables define the username and password used to authenticate against the NSX Manager. In our examples these will be provided as clear text at execution time, but in real production environments Ansible Vault or Ansible Tower custom credentials can be used (see the sketch after this list).
- method – defines the HTTP method to be used for this call.
- body – this is the payload which will be sent to the API endpoint as part of this call. In this example the body is generated with the help of a Jinja2 template stored in the same folder as the playbook.
- status_code – this is the status code expected from the API endpoint for this call to be considered successful.
- body_format – here we specify once more that the body sent to the endpoint is in JSON format.
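Since Ansible Vault was mentioned above, here is a minimal sketch of how the credentials could be kept out of the command line. The file name nsx_credentials.yml is just an example; encrypt it with ansible-vault encrypt nsx_credentials.yml and add --ask-vault-pass when running the playbook:

---
# Sketch: load NSX credentials from an Ansible Vault encrypted vars file
# (nsx_credentials.yml is a hypothetical file defining nsx_username and nsx_password)
- hosts: 127.0.0.1
  connection: local
  gather_facts: false
  vars_files:
    - nsx_credentials.yml
  tasks:
    - name: Creating a Segment
      uri:
        url: "https://{{ nsx_manager }}/policy/api/v1/infra"
        user: "{{ nsx_username }}"
        password: "{{ nsx_password }}"
        # ... the rest of the task stays exactly as shown above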
Here is the Jinja2 file we will use as the payload template. If you are wondering where I took this template from, there are many examples in the NSX-T API guide. We will name the file create_overlay_segment.json.j2.
{ "resource_type": "Infra", "children": [ { "resource_type": "ChildSegment", "Segment": { "resource_type": "Segment", "transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/{{ tz_id }}", "id": "{{ segment_name }}", "display_name": "{{ segment_name }}" } } ] }
As you can see, the template itself requires a couple of variables to be provided.
- tz_id – the Overlay Transport Zone ID. This can be captured in the NSX Manager GUI or with yet another API call.
- segment_name – This is the name which will be used for the segment.
Having both the playbook and the template in the same folder, we can use the ansible-playbook command with the --extra-vars parameter to execute it. Here is an example command (note: the command is long, so I broke it into several lines):
ansible-playbook create_overlay_segment.yml --extra-vars \
"\
nsx_manager='nsxmanager.example.com' \
nsx_username='admin' \
nsx_password='$upaDup4Pa$$' \
validate_certs="no" \
segment_name='Segment-01' \
tz_id='f3231de2-6ff0-41c2-ab3b-fe4c2212e720' \
"
Get Transport Zone ID by name
In this example we will use the NSX-T Manager search API to get the Transport Zone ID from the Transport Zone name. This can be useful in many cases, including the previous example where we created an Overlay Segment.
This time the approach will be a bit different. Instead of sending a PATCH or POST request, we will send a GET request to the API endpoint. We will get the response in JSON format and will need to parse it to find the information we are looking for (the ID in our example). So the playbook will consist of two tasks:
- Send a GET request to search API, get the information required and save it into a variable.
- Parse through the response body and fetch the ID out of it.
Here is the playbook with these two tasks (plus an optional debug task):
---
- hosts: 127.0.0.1
  connection: local
  gather_facts: false
  tasks:
    - name: Get info on specified Transport Zone ({{tz_name}})
      uri:
        url: "https://{{ nsx_manager }}/api/v1/search/query?query=resource_type:TransportZone%20AND%20display_name:{{tz_name}}"
        force_basic_auth: yes
        validate_certs: "{{validate_certs}}"
        headers:
          Accept: "application/json"
          Content-Type: "application/json"
        user: "{{ nsx_username }}"
        password: "{{ nsx_password }}"
        method: GET
        body_format: json
      register: output_tz

    - name: Identifying ID of Transport Zone ({{tz_name}})
      set_fact:
        tz_id: "{{ item.id }}"
      loop: "{{ output_tz.json | json_query('results[*]') }}"

    # Optional task
    - name: Output Transport Zone ID on Screen
      debug:
        msg: "{{tz_id}}"
We know most of the options and variables from the previous example, so let's break down those which are different.
- url – as you can see, we are using the search (aka query) API for this call. It was introduced in NSX-T 3.0, so if you are on an older version you will need to use a different approach here.
- register: output_tz – this line saves the output of the call in the first task to a variable named output_tz.
- set_fact: tz_id: "{{ item.id }}" – in the second task we run a loop over the content of output_tz. As soon as we find an item which contains the id key, we assign its value to the tz_id Ansible variable. To know which key to look for in the output, check the API guide.
- loop: "{{ output_tz.json | json_query('results[*]') }}" – this setting initiates the loop over the content of the output_tz variable.
As a result of this playbook you will have the Transport Zone ID saved in the tz_id variable, and you can use it in other tasks. For testing purposes the playbook above already includes an optional third debug task to print the ID on screen.
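As a side note, the loop can be avoided entirely by letting jmespath extract the ID in a single expression. A minimal sketch, assuming the search query returned exactly one matching Transport Zone:

- name: Identifying ID of Transport Zone ({{tz_name}}) without a loop
  set_fact:
    tz_id: "{{ output_tz.json | json_query('results[0].id') }}"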
To run the playbook, change to the folder where it is stored and run something like this:
ansible-playbook get_tz_id_by_name.yml --extra-vars \
"\
nsx_manager='nsxmanager.example.com' \
nsx_username='admin' \
nsx_password='$upaDup4Pa$$' \
validate_certs="no" \
tz_name='overlay-tz' \
"
Create Tier-0 Gateway
OK, let's cover one more example and create an Active/Standby Tier-0 Gateway running on a two-node Edge Node Cluster, with a couple of external uplinks and a default gateway configured. The API call to do this was covered in one of my previous articles here, but in this example I will also show how you can combine several tasks to capture the needed data, such as the Edge Cluster ID, before you run the actual API call to create the Tier-0 Gateway.
Here is the Playbook which will do what we need.
---
- hosts: 127.0.0.1
  connection: local
  gather_facts: false
  tasks:
    - name: Collecting Edge Cluster IDs
      uri:
        url: "https://{{ nsx_manager }}/api/v1/edge-clusters"
        force_basic_auth: yes
        validate_certs: no
        headers:
          Accept: "application/json"
          Content-Type: "application/json"
        user: "{{ nsx_username }}"
        password: "{{ nsx_password }}"
        method: GET
        body_format: json
      register: output_edgecl

    - name: Identifying Edge cluster ID
      no_log: True
      set_fact:
        edge_cluster_id: "{{ item.id }}"
      when:
        - item.display_name == edge_cluster_name
      loop: "{{ output_edgecl.json | json_query('results[*]') }}"

    - name: Create ACTIVE/STANDBY T0 with Uplinks and Default GW
      uri:
        url: "https://{{ nsx_manager }}/policy/api/v1/infra"
        force_basic_auth: yes
        validate_certs: no
        headers:
          Accept: "application/json"
          Content-Type: "application/json"
        user: "{{ nsx_username }}"
        password: "{{ nsx_password }}"
        method: PATCH
        body: "{{ lookup('template','./create_T0.json.j2') | to_json }}"
        status_code: "200"
        body_format: json
Let's take a look at each task separately:
- Collecting Edge Cluster IDs – This task runs an API call to fetch details of all Edge Node Clusters available in the environment and stores the output in the output_edgecl variable. You might ask why not use the search API for this. The problem is that the search API does not support Edge Node Clusters for now; I hope it will in the future.
- Identifying Edge cluster ID – This task parses the content stored in the output_edgecl variable, fetches the Edge Node Cluster ID based on the cluster display name and stores it in the edge_cluster_id variable (an alternative without the loop is shown in the sketch after this list).
- Create ACTIVE/STANDBY T0 with Uplinks and Default GW – this task executes the API call to create the Tier-0 Gateway with all the settings provided.
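As a side note, the second task could also be written without the loop and the when condition by filtering on display_name directly in the jmespath expression. A sketch, assuming the cluster name exists and is unique:

- name: Identifying Edge cluster ID via a jmespath filter
  set_fact:
    edge_cluster_id: "{{ output_edgecl.json | json_query(query) | first }}"
  vars:
    query: "results[?display_name=='{{ edge_cluster_name }}'].id"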
The create_T0.json.j2 payload template looks like this:
{ "resource_type":"Infra", "children":[ { "resource_type":"ChildTier0", "marked_for_delete":"false", "Tier0":{ "resource_type":"Tier0", "id":"{{ t0_name_id }}", "ha_mode":"ACTIVE_STANDBY", "children":[ { "resource_type":"ChildLocaleServices", "LocaleServices":{ "edge_cluster_path":"/infra/sites/default/enforcement-points/default/edge-clusters/{{ edge_cluster_id }}", "resource_type":"LocaleServices", "id":"{{ t0_name_id }}-SR", "children":[ { "Tier0Interface":{ "edge_path":"/infra/sites/default/enforcement-points/default/edge-clusters/{{edge_cluster_id}}/edge-nodes/0", "segment_path":"/infra/segments/{{ t0_uplink_ls_name }}", "type":"EXTERNAL", "resource_type":"Tier0Interface", "id":"{{ t0_uplink1_name }}", "display_name":"{{ t0_uplink1_name }}", "children":[ ], "marked_for_delete":false, "subnets":[ { "ip_addresses":[ "{{ t0_uplink1_ip }}" ], "prefix_len":"{{ t0_uplinks_subnetmask }}" } ] }, "resource_type":"ChildTier0Interface", "marked_for_delete":false }, { "Tier0Interface":{ "edge_path":"/infra/sites/default/enforcement-points/default/edge-clusters/{{ edge_cluster_id }}/edge-nodes/1", "segment_path":"/infra/segments/{{ t0_uplink_ls_name }}", "type":"EXTERNAL", "resource_type":"Tier0Interface", "id":"{{ t0_uplink2_name }}", "display_name":"{{ t0_uplink2_name }}", "children":[ ], "marked_for_delete":false, "subnets":[ { "ip_addresses":[ "{{ t0_uplink2_ip }}" ], "prefix_len":"{{ t0_uplinks_subnetmask }}" } ] }, "resource_type":"ChildTier0Interface", "marked_for_delete":false } ] } }, { "resource_type":"ChildStaticRoutes", "marked_for_delete":false, "StaticRoutes":{ "network":"0.0.0.0/0", "next_hops":[ { "ip_address":"{{ t0_uplinks_default_gw }}", "admin_distance":1 } ], "resource_type":"StaticRoutes", "id":"Default", "display_name":"Default", "children":[ ], "marked_for_delete":false } } ] } } ] }
And of course, to execute the playbook you can use --extra-vars as usual. The command will look like this:
ansible-playbook create_T0.yml --extra-vars \
"\
nsx_manager='nsxmanager.example.com' \
nsx_username='admin' \
nsx_password='$upaDup4Pa$$' \
t0_name_id='t0-example' \
t0_uplink_ls_name='segment-t0-uplinks' \
t0_uplink1_name='t0-uplink1' \
t0_uplink2_name='t0-uplink2' \
t0_uplink1_ip='10.0.0.74' \
t0_uplink2_ip='10.0.0.75' \
t0_uplinks_subnetmask='26' \
t0_uplinks_default_gw='10.0.0.65' \
edge_cluster_name='test_etn_cluster01' \
"
Some Additional Remarks and Last Words
As you might have seen, I suggest executing the playbooks while providing the needed variables using --extra-vars, but that is not the only way. For example, you could make the playbooks more interactive by using Ansible prompts, or do a combination where most of the variables are provided using --extra-vars but the password is prompted for. With Ansible prompts you can hide the password while it is typed, and you can even encrypt it as part of the prompt. Ansible is a very powerful tool, so experiment.
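To illustrate the prompt approach, here is a minimal sketch of a play header where the NSX password is prompted for at run time instead of being passed on the command line (private: yes keeps the input hidden while typing):

---
- hosts: 127.0.0.1
  connection: local
  gather_facts: false
  vars_prompt:
    - name: nsx_password
      prompt: "Enter the NSX Manager password"
      private: yes          # input is not echoed to the screen
  tasks:
    # ... the rest of the playbook stays the same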
Hope this post was useful and let me know in the comments if you have additional questions.