
Ansible RESTful Dynamic Inventory with Node.js

Most examples of Ansible dynamic inventory scripts are written in Python or Bash, but if you want to pull your inventory from a RESTful API, e.g. the OpenStack Nova Compute API, then the language of the web – aka JavaScript – could be a better fit.

So, let's first have a look at a simple example in JavaScript of how the inventory data is constructed:
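Here is a minimal sketch – the group name, host addresses and variable are invented for illustration:

    #!/usr/bin/env node
    // A trivial Ansible dynamic inventory script: print the JSON structure
    // that Ansible expects on stdout. Group and hosts are made up.

    var inventory = {
      webservers: {
        hosts: ['10.0.0.11', '10.0.0.12'],
        vars: { ansible_ssh_user: 'core' }
      },
      _meta: { hostvars: {} }
    };

    console.log(JSON.stringify(inventory, null, 2));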

If we run this script we get the following results:
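For the sketch above, that is:

    {
      "webservers": {
        "hosts": [
          "10.0.0.11",
          "10.0.0.12"
        ],
        "vars": {
          "ansible_ssh_user": "core"
        }
      },
      "_meta": {
        "hostvars": {}
      }
    }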

Running this from Ansible gives us:
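For example – the script filename is hypothetical, the hosts must actually be reachable over SSH, and the output shown is illustrative:

    chmod +x simple-inventory.js
    ansible all -i simple-inventory.js -m ping

    10.0.0.11 | success >> {
        "changed": false,
        "ping": "pong"
    }

    10.0.0.12 | success >> {
        "changed": false,
        "ping": "pong"
    }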

So, let's take the next step and write an inventory script to pull fixed or floating IP addresses from the OpenStack Nova Compute RESTful API. Here's my script to demo this:
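A sketch along these lines, assuming Keystone v2.0 on port 5000 and Nova on port 8774 – the controller address, credentials and tenant are placeholders to edit for your own cloud:

    #!/usr/bin/env node
    // Ansible dynamic inventory from the OpenStack Nova Compute API (sketch).
    // Controller address, credentials, tenant and ports are assumptions --
    // adjust them for your own cloud.

    var http = require('http');

    var OS_HOST = '192.168.0.10';   // OpenStack controller (placeholder)
    var OS_USER = 'demo';
    var OS_PASSWORD = 'secret';
    var OS_TENANT = 'demo';
    var ADDRESS_TYPE = 'floating';  // or 'fixed'

    // Authenticate against Keystone (v2.0) to get a token and the tenant id.
    function authenticate(callback) {
      var body = JSON.stringify({
        auth: {
          tenantName: OS_TENANT,
          passwordCredentials: { username: OS_USER, password: OS_PASSWORD }
        }
      });
      var req = http.request({
        host: OS_HOST, port: 5000, path: '/v2.0/tokens', method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          'Content-Length': Buffer.byteLength(body)
        }
      }, function (res) {
        var data = '';
        res.on('data', function (chunk) { data += chunk; });
        res.on('end', function () {
          var access = JSON.parse(data).access;
          callback(access.token.id, access.token.tenant.id);
        });
      });
      req.end(body);
    }

    // List the tenant's servers from Nova and build the inventory from
    // their fixed or floating IP addresses.
    function buildInventory(token, tenantId) {
      var req = http.request({
        host: OS_HOST, port: 8774,
        path: '/v2/' + tenantId + '/servers/detail', method: 'GET',
        headers: { 'X-Auth-Token': token }
      }, function (res) {
        var data = '';
        res.on('data', function (chunk) { data += chunk; });
        res.on('end', function () {
          var inventory = { openstack: { hosts: [] }, _meta: { hostvars: {} } };
          JSON.parse(data).servers.forEach(function (server) {
            Object.keys(server.addresses).forEach(function (network) {
              server.addresses[network].forEach(function (address) {
                if (address['OS-EXT-IPS:type'] === ADDRESS_TYPE) {
                  inventory.openstack.hosts.push(address.addr);
                }
              });
            });
          });
          console.log(JSON.stringify(inventory, null, 2));
        });
      });
      req.end();
    }

    authenticate(buildInventory);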

This script first authenticates the user for the tenant, then calls the Nova RESTful API with the authentication token, getting a list of servers and their details.  The inventory is then generated from either the fixed or floating IP addresses.

Run this script to see the JSON it generates:
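The filename here is hypothetical; Ansible passes --list, which the sketch above simply ignores:

    chmod +x nova-inventory.js
    ./nova-inventory.js --list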

Which, from my OpenStack, generates:
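Illustrative output from the sketch above – the addresses are invented, and yours will differ:

    {
      "openstack": {
        "hosts": [
          "172.16.1.21",
          "172.16.1.22"
        ]
      },
      "_meta": {
        "hostvars": {}
      }
    }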

We can now run this from Ansible using:
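Again, the script name is hypothetical – use whatever you saved the inventory script as:

    ansible all -i nova-inventory.js -m ping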

Success!

So, in summary, JavaScript and Node.js (or io.js) are suitable candidates for Ansible dynamic inventory scripting, especially when you want to get data from a RESTful API.

PasTmon Passive Application Response Time Monitoring a CoreOS Cluster

The PasTmon Passive Application Response Time Monitor project (which I run) has just released pre-built Docker images of the pastmonweb front-end and the pastmonsensor. These make deploying a PasTmon response-time monitoring solution a whole lot easier.

Here’s how I deployed PasTmon to my development CoreOS cluster.

[Figure: PasTmon deployed into a CoreOS cluster]

The following instructions are available on the pastmonweb information page. Clone the project from GitHub – it contains all of the service unit files – onto the frontend cluster node:
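The repository URL below is a placeholder – use the one given on the pastmonweb information page:

    git clone <pastmon CoreOS units repository URL>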

Edit the unit files, pastmon-web@.service and pastmon-sensor@.service, to select the version of the docker image you want (currently “latest” and “0.16”):
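The version appears as the image tag in the unit's docker run line; this is a sketch – the image name and surrounding flags are assumptions, so check the actual unit file:

    # pastmon-web@.service (image name and elided flags assumed)
    ExecStart=/usr/bin/docker run --name pastmon-web ... pastmon/pastmon-web:0.16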

You can instead create a local.conf file to override the selected version – but that applies only to the node the service runs on. Editing the version in the unit file before submitting it, as above, sets the version for the whole cluster.

Next edit the pastmon-web@.service file to bind it to the frontend node of the cluster:

You can do this using either MachineMetadata or the MachineID from /etc/machine-id.
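For example, in the unit's [X-Fleet] section – the metadata key/value and the machine id shown are placeholders:

    [X-Fleet]
    # Pin to nodes tagged with this metadata...
    MachineMetadata=role=frontend
    # ...or pin to one specific node by its id (from /etc/machine-id):
    # MachineID=1234567890abcdef1234567890abcdef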

Submit all of the unit files to fleet:
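For example, from the directory containing the unit files:

    fleetctl submit pastmon-web@.service pastmon-web-discovery@.service pastmon-sensor@.service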

Start the pastmonweb services:
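Instance number 1 is an arbitrary choice here:

    fleetctl start pastmon-web@1.service pastmon-web-discovery@1.service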

The pastmon-web-discovery@.service is actually a "sidekick" that registers the pastmonweb service as active in etcd – which provides the host and port details to the pastmonsensor instances running on the other nodes in the cluster.

Once the web service is running (the first time will take a few minutes to download the docker image) you can point your browser at http://your-front-end-floating-ip:8080.  You should see a login screen for the PasTmon web app, like this:

[Screenshot: PasTmon web login screen]

You can log in with the default credentials – user: "admin", password: "admin".

Next we can start the pastmon-sensor services on the remaining nodes in the cluster (the pastmonweb service also contains its own sensor) by running:
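    fleetctl start pastmon-sensor@{1..6}.service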

The “1..6” here means to start 6 instances numbered 1 through 6.

These should automatically discover the web service and connect to its PostgreSQL database on port 5432. After a while you should start to see measurement data in the web UI.

Here are a couple of screenshots of what to expect:

[Screenshot: summary view]

[Screenshot: rtt_avg plot]

This one shows the five-minute average of network round-trip times for the PostgreSQL server running in the pastmonweb container.

The pastmon sensor containers are configured to share the network namespace of the CoreOS cluster node they run on – so each sensor can see all of the traffic of all of the containers running on that node.
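In Docker terms this is host networking, so the sensor unit's run line would include --net=host – the image name and elided flags below are assumptions:

    # pastmon-sensor@.service (%i is the fleet/systemd instance number)
    ExecStart=/usr/bin/docker run --net=host --name pastmon-sensor-%i ... pastmon/pastmon-sensor:0.16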