
LXD Getty Issues, wtmp grows over time

I’ve been running LXD containers, of various Linux flavours, for the past year or so, to run unit tests for a DevOps tool in development. I recently discovered that /var/log/wtmp, and in some cases /var/log/messages, had been filling up. In fact, wtmp had grown to the point that the ‘last’ command simply hung trying to read it.

/var/log/messages was filling up with repeated entries from init endlessly respawning getty processes on the containers’ virtual terminals.

So here are the fixes for each Linux flavour where I identified the problem:

CentOS 6

Edit /etc/init/tty.conf to comment out all entries and reboot.
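For example, something like this comments out every active line (a sketch; hand-editing works just as well):

```
sed -i.bak 's/^\([^#]\)/#\1/' /etc/init/tty.conf
reboot
```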

CentOS 7

CentOS 7 did not seem to experience the problem.

Gentoo

Edit /etc/inittab and comment out the c1:...agetty... line, then reboot.
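For reference, the entry in question typically looks something like the following in a stock Gentoo inittab (exact arguments vary; shown here already commented out):

```
# TERMINALS
#c1:12345:respawn:/sbin/agetty 38400 tty1 linux
```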

OpenSUSE 13.2

This is systemd, so first list the getty units:
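Something along these lines (a sketch, not necessarily the exact invocation I used):

```
systemctl list-units 'getty*'
```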

This should list some units named getty@tty[1-4].service, so these need stopping and disabling:
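For example:

```
systemctl stop getty@tty{1..4}.service
systemctl disable getty@tty{1..4}.service
```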

and reboot.

Oracle Linux 6

Edit /etc/init/tty.conf to comment out all entries and reboot (the same fix as for CentOS 6 above).

Oracle Linux 7

This is systemd again, so apply the same fix as for OpenSUSE 13.2 above: list the getty@tty[1-4].service units, stop and disable them, then reboot.

Ubuntu

Versions 14 and 16 both seemed to be fine.

Brackets Dockerfile Syntax Highlighter Using the Jacob Lexical Tokenizer

Brackets is an excellent open-source code editor, available from http://brackets.io. Originally from Adobe, it is now a community-developed project on GitHub: https://github.com/adobe/brackets.

It comes with a lot of plugin extensions for pretty much everything you would need: Git integration, linters (code quality analysis tools), language syntax highlighters, and so on.

Recently I have started contributing my own syntax highlighters for M4 macros and Dockerfiles, and it is the latter project that this blog post is about.

Under the hood, Brackets uses CodeMirror to provide language syntax highlighting. It comes with a range of language “modes”, which are really just JavaScript modules that statefully tokenize code into CSS styles for syntax colouring/highlighting. They can also handle indenting and commenting.

I wrote my original extensions in a similar manner, hand-coding the state machine and tokenizing the code using regular expressions. However, I quickly realised, with my Dockerfile extension, that this code had become too convoluted and difficult to maintain. Just look at this code in my project’s history…

Now, my background is in C coding, with experience of tools like Lex/Flex and Yacc/Bison. Flex is an open-source lexical analyzer and Bison a grammar parser. What I wanted was something similar, but for JavaScript. On searching, I found Jacob (also available via NPM here), which provides both of these capabilities in one tool. It seemed the lexer component of Jacob would be an ideal way of coding, and hopefully simplifying, my Dockerfile extension.

Installing Jacob was easy:
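It is on NPM, so something like:

```
npm install jacob
```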

I created a Dockerfile.jacoblex file. This provides a lexical definition of the language I wanted to parse and tokenize. The file is divided into three sections, separated by %%.

The first section declares the lexer’s module name, in this case dockerlex.

The next section defines named regular expressions:

In this case, just a regex matching all of the Dockerfile’s possible keywords.
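I won’t reproduce the file exactly here, but in flex-style terms the definition looked roughly like this (keyword list per the Dockerfile reference of the time):

```
directive   (FROM|MAINTAINER|RUN|CMD|LABEL|EXPOSE|ENV|ADD|COPY|ENTRYPOINT|VOLUME|USER|WORKDIR|ONBUILD)
```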

The final section defines the parsing rules and state-machine.  Here is a simple example. This parses a comment and returns the ‘COMMENT’ token:
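Roughly (a sketch; the real rule is in the complete file linked below):

```
#.*      { return 'COMMENT'; }
```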

A more complicated example, using the above named regex:
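Reconstructed from the description that follows, it looked roughly like this (the state name and the this.pushState()/this.popState() calls are from the original; the surrounding syntax is an approximation):

```
{directive}     { this.pushState('DOCKDIR'); return 'def'; }

<DOCKDIR>\n     { this.popState(); }
```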

The first part of this rule matches on the {directive} (Dockerfile keywords) and then uses this.pushState() to advance the state machine, e.g. to DOCKDIR, so the rules associated with that state, denoted by <DOCKDIR>, can then be applied. The method this.popState(), as its name implies, reverts to the previous state on the stack.

This is just a taster; you can view the complete file here.

The lexer module is then generated from this file using the jacob command-line tool.

This creates the JavaScript file dockerlex.js, which can be imported into my extension’s main.js script:
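Brackets extensions are RequireJS/AMD modules, so the import looked something like this (a sketch; the exact path is in the linked main.js):

```
define(function (require, exports, module) {
    "use strict";

    // pull in the Jacob-generated lexer module
    var dockerlex = require("dockerlex");
    // ...
});
```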

Integrating the generated lexer into a custom CodeMirror Mode proved a little challenging, until I realised that I could simply 1) use the lexer itself as the mode’s State object, and 2) extend the Stream object to provide the extra methods expected by Jacob.

Here I create the mode’s state object:
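A sketch of the idea; the constructor name here is an assumption, not the actual generated API:

```
function makeLexerState() {
    // the Jacob-generated lexer instance itself serves as the mode state
    return new dockerlex.Lexer();
}
```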

and extend the stream object with these methods:
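For illustration only; the method names below follow Jacob’s StringReader, and the real set is in the linked main.js:

```
function extendStream(stream) {
    // Jacob's lexer reads from a StringReader-like object; CodeMirror's
    // StringStream already has peek()/next(), which we adapt here
    stream.more = function () {
        return this.peek() != null;   // any input left on this line?
    };
    stream.nextChar = function () {   // hypothetical name
        return this.next();
    };
    return stream;
}
```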

These were taken and tweaked from Jacob’s own StringReader object.

As CodeMirror feeds my tokenizer the stream line by line, I needed to think carefully about how the lexer could work (e.g. the regex ‘$’ anchor does not work, requiring an alternative approach using this.input.more()), and also to reapply the stream on each iteration.

The start state is created using:
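That is, in the CodeMirror mode definition, something like:

```
startState: function () {
    return makeLexerState();
}
```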

Then for each iteration, I ensured the lexer’s input was reset to the current stream object:
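In the mode’s token() callback, roughly as follows (setInput is an assumed name for the lexer’s input-reset method):

```
token: function (stream, state) {
    state.setInput(extendStream(stream));   // reapply the stream on each call
    var tok = state.nextToken();
    return tok ? tok.name : null;           // the token name doubles as the style
}
```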

The call to state.nextToken() in fact invokes the lexer generated by Jacob. The returned token’s name attribute is then passed back as the syntax highlighting style name (e.g. ‘def’, ‘string’, ‘error’, etc.).

I realised CodeMirror’s internal copyState() method couldn’t fully copy the lexer state object, so I coded a custom method:
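Something like this (the stateStack field name is an assumption; the real method is in the linked main.js):

```
copyState: function (state) {
    // shallow-copy the lexer object, then clone its mutable state stack
    var copy = Object.create(Object.getPrototypeOf(state));
    Object.keys(state).forEach(function (k) {
        copy[k] = state[k];
    });
    if (Array.isArray(copy.stateStack)) {
        copy.stateStack = copy.stateStack.slice();
    }
    return copy;
}
```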

and also added a blankLine() method to pass a dummy newline to the lexer, as CodeMirror normally drops empty lines.
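Again a sketch:

```
blankLine: function (state) {
    // CodeMirror skips empty lines, so feed the lexer a dummy newline
    state.setInput("\n");
    state.nextToken();
}
```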

You can view the complete main.js script on GitHub here.

Finally, I was able to switch CodeMirror syntax highlighting to use its built-in “Shell” mode when my lexer encountered either a RUN or CMD Dockerfile directive.

In main.js the bashMode was retrieved from CodeMirror using:
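CodeMirror’s getMode() API does this; config is the options object CodeMirror passes to the mode:

```
var bashMode = CodeMirror.getMode(config, "shell");
```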

and when state.localMode is set by the lexer, above, the nested shell code is tokenized using:
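A sketch of the hand-off (state.localState is an assumed companion field holding the shell mode’s own state):

```
if (state.localMode) {
    var style = state.localMode.token(stream, state.localState);
    // drop back to the Dockerfile lexer at end of line,
    // unless the line ends in a '\' continuation
    if (stream.eol() && !/\\$/.test(stream.string)) {
        state.localMode = null;
    }
    return style;
}
```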

The check for a ‘\’ at the end of the line allows line continuation, i.e. multi-line shell scriptlets on these directives.

The resulting code and jacoblex rules are, in my opinion, much easier to understand, and will save me much pain in ongoing support.

The full project can be viewed here.

There are a few screenshots on the GitHub project page.



Creating CoreOS Services with Cross Node Dependency using etcd

When I was putting together an architecture for deploying PasTmon sensors across a CoreOS cluster for a previous blog post, PasTmon Passive Application Response Time Monitoring a CoreOS Cluster, I wanted the Fleet service units coded so that the pastmon-sensors would have the pastmon-web service as a cross-node dependency. The plan was for the sensors to start only once the web/database service had started, but this dependency needed to operate across all nodes in the cluster.

At first I thought I could achieve this using [Unit] directives like Requires/Wants, so I tried:
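The snippet below is a reconstruction of that attempt (the dependency was on the discovery sidekick unit described next):

```
[Unit]
Description=pastmon-sensor
After=pastmon-web-discovery@1.service
Requires=pastmon-web-discovery@1.service
```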

simply following the examples shown in the CoreOS documentation.

The unit called pastmon-web-discovery@1.service is a sidekick unit that BindsTo the actual pastmon-web service, pastmon-web@%i.service, registering its hostname and database port in etcd:
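A sketch of such a sidekick, following the standard CoreOS pattern and the TTL/interval described below (the etcd key path and value format here are assumptions):

```
[Unit]
Description=pastmon-web-discovery
BindsTo=pastmon-web@%i.service
After=pastmon-web@%i.service

[Service]
ExecStart=/bin/sh -c "while true; do \
    etcdctl set /services/pastmon-web '%H:5432' --ttl 60; \
    sleep 45; \
  done"
ExecStop=/usr/bin/etcdctl rm /services/pastmon-web

[X-Fleet]
MachineOf=pastmon-web@%i.service
```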

Firing up pastmon-web, with its sidekick, followed by the sensors across the rest of the nodes in the cluster, all worked fine. However, if the CoreOS cluster failed or was rebooted, the services came back up out of order and required manual intervention. It was clear that the [Unit] After and Requires directives only applied to the node the unit was started on, and not across the whole cluster.

Actually, this kind of made sense when I thought about it. The [X-Fleet] section of the unit means just that: “cross fleet (cluster)”. At the time of writing, there does not appear to be any support in this section for cross-cluster unit dependencies (though I did find a few discussions around, and requests for, this feature in the CoreOS forums).

To resolve this, I realised I could leverage the existing etcd web-service registration as a pre-start condition in the sensor units. The etcd key has a time-to-live (--ttl) of 60 seconds and is re-registered every 45 seconds, as long as the pastmon-web service it is bound to is running.

So here is my fixed pastmon-sensor unit using the etcd Pre-Start test:
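Reconstructed sketch; the docker run details are elided, and the ExecStartPre and Restart settings follow the description below:

```
[Unit]
Description=pastmon-sensor

[Service]
# fail fast if the web service has not registered itself in etcd
ExecStartPre=/usr/bin/etcdctl get /services/pastmon-web
ExecStart=/usr/bin/docker run ...
Restart=on-failure
RestartSec=10
StartLimitInterval=0

[X-Fleet]
Conflicts=pastmon-sensor@*.service
```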

The etcdctl get command will fail with a non-zero return code if the key is not present. Specifying ExecStartPre= without the ‘-’ prefix (i.e. = rather than =-) means this failure stops the unit from starting.

The Restart settings then tell the unit to automatically restart on failure, after a delay of 10 seconds, and to retry forever.

I tested these again, crashing and rebooting the cluster, and they restarted in the correct order every time. Perfect.

All of the code above is available in gbevan/pastmon on GitHub.

Ansible RESTful Dynamic Inventory with Node.js

Most examples of Ansible dynamic inventory are coded in Python or Bash, but if you want to access a RESTful API to get your inventory, e.g. from the OpenStack Nova Compute API, then the language of the web, JavaScript, could be a better approach.

So, let’s first have a look at a simple JavaScript example of how the inventory data is constructed:
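Here’s a minimal sketch along those lines (group name and addresses are placeholders). Ansible invokes the script with --list and expects the inventory as JSON on stdout:

```
#!/usr/bin/env node

var inventory = {
    webservers: {
        hosts: ["10.0.0.10", "10.0.0.11"],
        vars: { http_port: 80 }
    },
    _meta: { hostvars: {} }
};

if (process.argv[2] === "--host") {
    // per-host variables; empty here, as _meta above covers them
    console.log(JSON.stringify({}));
} else {
    console.log(JSON.stringify(inventory));
}
```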

If we run this script directly, we get the inventory JSON printed to stdout.

Running this from Ansible shows the group’s hosts being picked up:
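For example (the script name is a placeholder):

```
ansible webservers -i ./inventory.js --list-hosts
```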

So, let’s take the next step and write an inventory script to pull fixed or floating IP addresses from the OpenStack Nova Compute RESTful API. Here’s my script to demo this:
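What follows is a reconstruction in the same spirit, not the original script: Keystone v2 password auth (current at the time), then the Nova servers/detail call. Endpoint hosts, ports and environment variable names are placeholders:

```
#!/usr/bin/env node

var http = require("http");

var keystone = { host: "controller", port: 5000 };
var nova     = { host: "controller", port: 8774 };
var ipType   = process.env.OS_IP_TYPE || "floating";   // or "fixed"

// small helper: JSON request/response over HTTP
function jsonRequest(opts, body, cb) {
    var req = http.request(opts, function (res) {
        var data = "";
        res.on("data", function (chunk) { data += chunk; });
        res.on("end", function () { cb(JSON.parse(data)); });
    });
    if (body) {
        req.setHeader("Content-Type", "application/json");
        req.write(JSON.stringify(body));
    }
    req.end();
}

// 1) authenticate the user for the tenant
jsonRequest({
    host: keystone.host,
    port: keystone.port,
    path: "/v2.0/tokens",
    method: "POST"
}, {
    auth: {
        tenantName: process.env.OS_TENANT_NAME,
        passwordCredentials: {
            username: process.env.OS_USERNAME,
            password: process.env.OS_PASSWORD
        }
    }
}, function (auth) {
    var token = auth.access.token;

    // 2) get the list of servers and their details
    jsonRequest({
        host: nova.host,
        port: nova.port,
        path: "/v2/" + token.tenant.id + "/servers/detail",
        method: "GET",
        headers: { "X-Auth-Token": token.id }
    }, null, function (result) {
        // 3) build the inventory from fixed or floating addresses
        var hosts = [];
        result.servers.forEach(function (server) {
            Object.keys(server.addresses).forEach(function (net) {
                server.addresses[net].forEach(function (addr) {
                    if (addr["OS-EXT-IPS:type"] === ipType) {
                        hosts.push(addr.addr);
                    }
                });
            });
        });
        console.log(JSON.stringify({ openstack: { hosts: hosts } }));
    });
});
```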

This script first authenticates the user for the tenant, then calls the Nova RESTful API with the authentication token, getting a list of servers and their details.  The inventory is then generated from either the fixed or floating IP addresses.

Run the script directly to see the JSON it generates.

From my OpenStack, this produced an inventory of the instances’ addresses, much like the simple example above.

We can now run this from Ansible using:
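For example (script name again a placeholder):

```
ansible openstack -i ./openstack_inventory.js -m ping
```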

Success!

So, in summary, JavaScript and Node.js (or io.js) are suitable candidates for Ansible dynamic inventory scripting, especially when you want to get data from a RESTful API.

PasTmon Passive Application Response Time Monitoring a CoreOS Cluster

The PasTmon Passive Application Response Time Monitor project (which I run) has just released pre-built Docker images of the pastmonweb front-end and the pastmonsensor. These make deploying a PasTmon response-time monitoring solution a whole lot easier.

Here’s how I deployed PasTmon to my development CoreOS cluster.

PasTmon deployed into a CoreOS cluster

The following instructions are available on the pastmonweb information page. Clone the project from GitHub (this contains all of the service unit files) onto the frontend cluster node:
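That is (repository per the project links in this post):

```
git clone https://github.com/gbevan/pastmon.git
cd pastmon
```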

Edit the unit files, pastmon-web@.service and pastmon-sensor@.service, to select the version of the Docker image you want (currently “latest” and “0.16”):
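The relevant part is the image tag on the docker run line, something like this (the image name here is illustrative, not the actual registry path):

```
ExecStart=/usr/bin/docker run ... pastmon/pastmon-web:0.16
```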

You can instead create a local.conf file to override the selected version, but this only applies to the node that the service runs on. Editing the version, as above, before submitting the unit file allows the version to be set for the whole cluster.

Next, edit the pastmon-web@.service file to bind it to the frontend node of the cluster:
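For example, in the unit’s [X-Fleet] section (the metadata tag and id are placeholders):

```
[X-Fleet]
MachineMetadata=role=frontend
# or, alternatively:
# MachineID=<contents of /etc/machine-id on the frontend node>
```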

You can do this either using the MachineMetadata or MachineID from /etc/machine-id.

Submit all of the unit files to fleet:
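For example:

```
fleetctl submit *.service
```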

Start the pastmonweb services:
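That is, the web unit and its discovery sidekick (instance 1 here):

```
fleetctl start pastmon-web@1.service pastmon-web-discovery@1.service
```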

The pastmon-web-discovery@.service is actually a “sidekick” that registers the pastmonweb service as active in etcd, which provides host and port details to the pastmon-sensors running on the other nodes in the cluster.

Once the web service is running (the first time will take a few minutes to download the Docker image), you can point your browser at http://your-front-end-floating-ip:8080. You should see the login screen for the PasTmon web app.


You can log in with the default credentials: user “admin”, password “admin”.

Next we can start the pastmon-sensor services on the remaining nodes in the cluster (the pastmonweb service also contains its own sensor) by running:
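Reconstructing from the note below:

```
fleetctl start pastmon-sensor@{1..6}.service
```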

The “1..6” here means to start 6 instances numbered 1 through 6.

These should automatically discover the web service and connect to its PostgreSQL database on port 5432. After a while you should start to see measurement data in the web UI.

Here are a couple of screenshots of what to expect: a summary view, and an rtt_avg chart.

The rtt_avg chart shows the per-5-minute average of network round-trip times for the PostgreSQL server running on the pastmonweb container.

The pastmon-sensor containers are configured to share the network namespace of the CoreOS cluster node they run on, so the sensors can see all of the traffic of all of the containers running on that node.
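In Docker terms this is typically achieved by running the container with the host’s network stack, along these lines (the image name is illustrative):

```
docker run --net=host pastmon/pastmon-sensor:0.16
```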

Adding a RESTful API on a Unix Domain Socket to a MEAN Stack Application

Why would you want to do this? Well, it provides the ability to expose your API to command-line utilities. For example, Docker does exactly this for its CLI.

Here is an example server API layered on a default MEAN stack scaffold app.js:
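A sketch of the idea; the route, payload and socket path are placeholders, not the original code:

```
// added to the MEAN scaffold's app.js: a second Express app listening
// on a Unix domain socket instead of a TCP port
var express = require('express');

var api = express();

api.get('/status', function (req, res) {
    res.json({ status: 'ok' });
});

// http.Server.listen() accepts a filesystem path for a Unix socket
// (remove any stale socket file first in real code)
api.listen('/tmp/mean-api.sock', function () {
    console.log('CLI API listening on /tmp/mean-api.sock');
});
```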

Here is an example command-line client, using Node.js, that accesses the above server API:
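The client side uses Node’s http module with its socketPath option:

```
// cli.js: call the API over the Unix domain socket
var http = require('http');

var req = http.request({
    socketPath: '/tmp/mean-api.sock',
    path: '/status',
    method: 'GET'
}, function (res) {
    var body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () { console.log(body); });
});

req.on('error', function (err) { console.error(err); });
req.end();
```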

Finally, running the CLI prints the JSON response from the API.


Example of simple responsive layout with Angular Material

I’ve been playing with Angular Material lately, and decided to have a look at responsive layouts, to have a web page dynamically reorganise itself according to the device viewing it (and its orientation).

So, as a simple proof of concept, here’s some CSS to style the panes, plus some HTML with the Angular Material attributes:
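The markup below is reconstructed from the description that follows; the content and CSS classes are placeholders, while the layout/hide/show attributes are as described:

```
<div layout="row" hide show-gt-md>
    <!-- wide, side-by-side panes for screens over 960px -->
    <div class="pane">Left pane</div>
    <div class="pane">Right pane</div>
</div>

<div layout="column" hide show-sm show-md>
    <!-- narrow, top-down panes for screens under 960px -->
    <div class="pane">Top pane</div>
    <div class="pane">Bottom pane</div>
</div>
```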

The layout=”row” gives a side-by-side layout and “column” provides top-down.

In the first <div>, the “hide show-gt-md” causes the tag and its contents to show only on devices wider than 960px, whereas the second <div>’s “hide show-sm show-md” means it is displayed only on narrower devices (literally, show-sm means show on devices less than 600px wide, and show-md less than 960px).

I tried this simple test on a Samsung Galaxy Tab Pro 10.1″ and, as I rotated its orientation, it smoothly transitioned between a wide side-by-side layout in landscape and a narrow top-down layout in portrait. Neat.

How to edit the top-left logo in the Mean.io appserver from a custom package module

This is easy. In the core system package’s public/views/header.html you will find the default pull-left div containing an <a> tag with a value of “MEAN”, which is displayed in the top-left of your website:
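Roughly like this (a reconstruction; the actual Mean.io markup may differ slightly, and the token value is a placeholder):

```
<div class="pull-left">
    <a mean-token="'header'" href="/">MEAN</a>
</div>
```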

We can edit this dynamically via the “mean-token” argument using an AngularJS directive (in your custom package’s public/directives/ folder) like this:
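A sketch of such a directive (the module, token and brand names are placeholders):

```
// public/directives/brand.js in the custom package
angular.module('mean.mypackage').directive('meanToken', function () {
    return {
        restrict: 'A',
        link: function (scope, element, attrs) {
            // replace the default "MEAN" text with our own brand
            if (attrs.meanToken === "'header'") {
                element.text('MyBrand');
            }
        }
    };
});
```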


Experimental Script to create a CoreOS Cluster in OpenStack

This is an experimental CoreOS cluster creator script for OpenStack Nova with Cinder:
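As a sketch of the general approach rather than the actual script (image, flavor, sizes and the cloud-config template are placeholders, using the nova/cinder CLIs of the time):

```
#!/bin/bash
# fresh etcd discovery URL for a 3-node cluster
DISCOVERY=$(curl -s 'https://discovery.etcd.io/new?size=3')
sed "s|DISCOVERY_URL|$DISCOVERY|" cloud-config.tmpl > cloud-config.yaml

for i in 1 2 3; do
    # a persistent Cinder volume per node
    VOL_ID=$(cinder create --display-name coreos-vol-$i 10 | awk '/ id /{print $4}')

    # boot a CoreOS node with the volume attached and the cloud-config injected
    nova boot --image coreos --flavor m1.small \
         --block-device-mapping vdb=$VOL_ID::: \
         --user-data cloud-config.yaml \
         coreos-node-$i
done
```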