Life is Better With Ansible

Earlier this year I intended to do a multi-part blog series on Ansible. If you are not familiar with Ansible, it’s basically a configuration management tool that fits into a configuration management/infrastructure-as-code approach to managing your infrastructure.

In the past I’ve worked extensively with Puppet, SaltStack, and Chef, but I never played around with Ansible until about a year ago. At a high level, Ansible is a tool written in Python with “playbooks” written in yaml. One of the great advantages of Ansible is its fairly low barrier to entry: it can be one giant self-contained playbook, or it can be used in a broader configuration management strategy, which is what I intend to touch on throughout this series. All you need are a couple of Python dependencies that are likely pre-installed with your Linux distribution, an SSH connection, and a user with sudo permissions if you are using it to manage configuration files and install packages. There is a paid version of Ansible, but frankly the free version covers every use case I need; your mileage and support needs may differ from mine.

One of the ways in which Ansible is quite different from other config management tools is that it is agentless. This can be an advantage if you are working in an environment where you do not want changes to be applied automatically, or if you are worried about the resources an agent consumes on the box. Generally speaking there isn’t really a right or wrong way to structure your Ansible project, which is both the beauty and the difficulty of Ansible: if you want to build a scalable, well-thought-out configuration management setup, foresight and planning are essential. Much like Chef’s Supermarket or the Puppet Forge, there is Ansible Galaxy, where you can find pre-built Ansible roles; however, there aren’t as many available as for Chef or Puppet, and quality varies greatly between them. In my current role the only two Galaxy roles I’ve used are the Datadog and Zabbix agent roles, as those met my needs and are rather well documented and straightforward. Galaxy roles can also offer good insight into how to structure an Ansible role and how to make use of its various pieces.
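A quick illustration of the agentless model: from a control machine with nothing but SSH access, you can run ad-hoc commands against your hosts without installing anything on them first. The inventory filename and group name below are hypothetical:

```shell
# Ping every host in a hypothetical inventory over SSH (no agent required)
ansible all -i inventory.yml -m ping

# Install a package on the webservers group, escalating with sudo (-b)
ansible webservers -i inventory.yml -b -m package -a "name=htop state=present"
```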

While this post is a general overview, I promise the coming installments will have more to offer, including instructions on installing Ansible, building a basic project, and building out roles, playbooks, and inventory files. Before I wrap up this post I want to cover the basic components of the Ansible strategy that I’ll be moving forward with in this series.

Inventory: The inventory is a file that can be structured either as an INI file or as a yaml file. An inventory file consists of a set of groups, the hosts they contain, host-level variables, and group or global variables. I’ve found yaml to be the better format because the rest of what we’re working with is yaml, which keeps things familiar, and because you can use encrypted variables from Ansible Vault within a yaml inventory file. In general I’ve found it makes the most sense to structure inventory files around environments (dev, qa, uat, stage, production, etc.), and in cases where multiple regions are involved, to use an inventory file for each region, as this helps keep things organized.
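As a rough sketch of what such a yaml inventory might look like — the hostnames and variables here are hypothetical:

```yaml
# inventory/production.yml -- hostnames and variables are hypothetical
all:
  vars:
    env: production            # global variable
  children:
    webservers:
      hosts:
        web01.example.com:
        web02.example.com:
          http_port: 8080      # host-level variable
    dbservers:
      vars:
        postgres_version: "11" # group-level variable
      hosts:
        db01.example.com:
```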

Vault: Ansible Vault isn’t so much part of the file structure of a project as it is an important tool. Ansible Vault lets us encrypt strings so that we can safely store values like AWS key pairs, DB creds, and more in version control without having them exposed in plain text. This of course means the vault password you use should not itself be stored in version control, as that would defeat the purpose.
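For example, a single value can be encrypted from the command line and the resulting block pasted straight into an inventory or vars file; the secret and variable name here are just placeholders:

```shell
# Prompts for a vault password, then prints a !vault-tagged yaml block
# that can be pasted directly into a yaml inventory or vars file
ansible-vault encrypt_string 'S3cr3t!' --name 'db_password'
```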

Roles: Roles are not a requirement within Ansible, but they are incredibly useful if you are planning to scale Ansible. It makes the most sense to divide roles into logical functions, say Web, DB, Datadog, Varnish, Users, or whatever you may have in your server farms. Within these logical roles we can handle things such as installing packages, adding configuration files, and making sure that services are enabled to autostart and are running. If you intend to use similar configurations across multiple environments, you will want to be sure to templatize configs and turn values into variables. Variables can be associated at the role level, playbook level, or inventory level. We’ll see more later on how to decide what’s the best place to put things.
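As a sketch, a role’s tasks file covering install, config, and service state might look like this; the role and file names are hypothetical:

```yaml
# roles/web/tasks/main.yml -- a hypothetical web role
- name: Install nginx
  package:
    name: nginx
    state: present

- name: Deploy nginx config from a template (values filled in per environment)
  template:
    src: nginx.conf.j2
    dest: /etc/nginx/nginx.conf
  notify: restart nginx        # handler defined in roles/web/handlers/main.yml

- name: Ensure nginx autostarts and is running
  service:
    name: nginx
    state: started
    enabled: yes
```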

Playbooks: The actual runbook for what Ansible is supposed to do. This is the essential bit of Ansible: you can run your entire config management within a single playbook, or, as I will show you, use playbooks to define server roles such as webservers, dbservers, etc., and include any relevant variables, some sanity checks around execution strategy, the privilege escalation (become) method, and the list of roles to apply. In general my preference is to include just a few housekeeping items at the top and then a yaml list of roles, as this scales well and keeps playbooks clean and readable. One thing to note is that, unlike Puppet, Ansible executes a playbook strictly top to bottom. You will want to keep this in mind as you determine which order to apply your roles in; for example, if a set of directories needs to be owned by a user you’re creating, the user role may need to come before the role that creates the directories.
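Putting that together, a minimal playbook in this style might look like the following; the group and role names are hypothetical, and note the role order:

```yaml
# site.yml -- a hypothetical playbook; roles are applied top to bottom
- hosts: webservers
  become: yes        # escalate with sudo
  serial: 2          # sanity check: only touch two hosts at a time
  roles:
    - users          # create users first so later roles can chown to them
    - web
    - datadog
```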

Thanks for tuning in; we will continue with more fun with Ansible in the coming installments.

A Better Way To Parse CloudWatch Logs

Let’s face it: as much as I love AWS, the experience of viewing logs in CloudWatch is moderately awful and annoying. That is, until I stumbled upon a fantastic GitHub project called awslogs.

awslogs can be installed as a pip package with a simple pip install awslogs. You will also need the AWS CLI installed and configured with a key and secret key in order to make use of it. Once it’s installed it’s a pleasure to work with. I won’t waste time describing its usage, as it’s laid out rather nicely in the README on the GitHub project. Check it out here!
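Just as a quick taste (the log group name below is hypothetical; the README covers the rest):

```shell
# List the log groups available to your configured AWS credentials
awslogs groups

# Fetch the last two hours of a log group and follow new events
awslogs get /aws/lambda/my-function ALL --start='2h ago' --watch
```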

Python httpstat To Troubleshoot Connectivity

I’ve recently been working to diagnose intermittent latency issues with some HTTP calls that go through an outbound proxy. To determine whether the latency originates at the destination server itself or at the proxy, I needed to do some digging.

After spending some time crafting a script in Python, I found that I wanted more granular output than the options available in Python requests could give me. That’s when I stumbled upon a great GitHub project called httpstat, which leverages some fancy curl options and displays the latency of the request in a waterfall style right in your terminal.

The process to install is a simple pip install httpstat. To use it, simply follow the README on the GitHub repo. In my case I needed to repeat a request several times in rapid succession and needed to include a few hundred lines of SOAP xml in the request. To do that I did the following:
– Create a soap.xml file containing just the SOAP xml
– Create a python script such as the example below

from subprocess import call

# The URL below is a placeholder; replace it with the endpoint you are testing
call(
    "httpstat https://your-endpoint.example.com/service "
    "-X POST --data-binary @/absolute/path/to/soap.xml "
    "--header 'Content-type: text/xml'",
    shell=True,
)

Assuming you’ve updated the above python snippet with a proper URI, absolute path, and the headers required, you can now call your python script from the CLI and you will see output similar to this (note the IPs have been obscured on purpose):

Connected to 111.222.333.444:443 from 444.555.666.777:37436

HTTP/1.1 100 Continue

HTTP/1.1 200 OK
Cache-Control: private, max-age=0
Content-Type: application/soap+xml; charset=utf-8
Server: Microsoft-IIS/7.5
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Date: Tue, 21 May 2019 21:15:28 GMT
Content-Length: 1254

Body stored in: /tmp/tmpHm_Ts0

  DNS Lookup   TCP Connection   TLS Handshake   Server Processing   Content Transfer
[     4ms    |       9ms      |     83ms      |       215ms       |       377ms      ]
             |                |               |                   |                  |
    namelookup:4ms            |               |                   |                  |
                        connect:13ms          |                   |                  |
                                    pretransfer:96ms              |                  |
                                                      starttransfer:311ms            |

For bonus points, if you want to repeatedly call the request to generate multiple samples, you can simply run a bash one-liner for loop such as this (the script name is a placeholder for whatever you called yours):

 for i in `seq 1 20` ; do python your_script.py ; done 

Hopefully this has been helpful.