
Thursday, 11 January 2018

Writing a Puppet External Node Classifier in Python

I recently had cause to try and troubleshoot someone's issues with node classification in their Puppet Enterprise infrastructure, and found that a faulty External Node Classifier configuration was at the root of it.

This got me thinking about whether it is feasible to write an ENC in Python, without it messing up the entire Puppet Enterprise setup, so I thought I'd give it a try.

First things first, I should warn you that using an ENC on Puppet Enterprise is entirely possible, but not covered by your Support agreement. If you have issues with node classification, and you are using any ENC, you will be politely told that you're on your own. As well as this, you do render parts of your PE Console redundant, although the reporting sections will still be entirely functional and valid.

With that out of the way, here is what I did. I hope it may be useful to someone else starting out on writing an ENC in Python.

I used a Vagrant install of PE 2017.2.2 on an Ubuntu 14.04 server, simply because setting up Python 3 on a CentOS box is just too much trouble, and Ubuntu has it pre-installed. You will also need the PyYAML Python module installed. Ensure the Puppet Master node is fully functional before you start, and it is useful if you also have an Agent node spun up. In my lab, the Master's FQDN was pe-201722-master.puppetdebug.vlan and the Agent's FQDN was pe-201722-agent.puppetdebug.vlan.

I used the Puppet documentation at as a starting point and my completed ENC can be seen at

However, let me take you through some of the specifics.
I followed the instructions in the docs to enable the ENC, but I found that I also had to add the following keys to the common.yaml file in the production environment's hieradata directory:

puppet_enterprise::profile::master::classifier::node_terminus: 'exec'
puppet_enterprise::puppet_master_host: 'pe-201722-master.puppetdebug.vlan'

I have commented the Python code to illustrate what each section does and why I've included some of the things I have. For example, in the section below, I have sliced up the FQDN to extract only the specific substring that describes the node's role, and then added this to the parameters dictionary inside the classification dictionary:

# Parameters section
# This sets the node_role based on a section of the hostname
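To make that concrete, here is a minimal sketch of the shape such a script takes. The slicing scheme and the node_role parameter name are my own choices for this lab's FQDNs (pe-201722-master.puppetdebug.vlan and so on); the full script in the Gist does more, but the skeleton is the same: take the node name as the only argument and print a YAML classification to stdout.

```python
#!/usr/bin/env python3
# Minimal ENC sketch: Puppet calls this with the node's certname/FQDN
# as its only argument and expects a YAML classification on stdout.
import sys

import yaml  # PyYAML


def classify(fqdn):
    # 'pe-201722-agent.puppetdebug.vlan' -> shortname 'pe-201722-agent'
    shortname = fqdn.split('.')[0]
    # Parameters section
    # This sets the node_role based on a section of the hostname:
    # here, the last dash-separated field (an assumption for my lab's
    # naming convention).
    node_role = shortname.split('-')[-1]
    return {
        'classes': {},  # classes to apply to the node go here
        'parameters': {'node_role': node_role},
        'environment': 'production',
    }


if __name__ == '__main__':
    if len(sys.argv) != 2:
        # A non-zero exit tells Puppet the node could not be classified
        sys.exit('usage: enc.py <certname>')
    print(yaml.safe_dump(classify(sys.argv[1]), default_flow_style=False))
```

An executable like this is what the exec node terminus points at; the classes, parameters and environment keys are the standard top-level keys Puppet expects in ENC output.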

The ENC script now works entirely as expected, and changes made in Hiera to the puppet_enterprise profile parameters (to alter the default behaviour of PE components) are propagated and applied correctly. However, I have a few observations based on this test.

  • This works really well if you have data such as server role or environment encoded into the hostname or FQDN of your nodes somehow. If you want to use facts to classify nodes, like the "using a fact to create a rule for a Node Group in the PE Console" workflow, it is not going to work unless you find a way to allow the ENC script to also query the PuppetDB Query API. I believe this is possible, but not with the ENC script in the Github Gist link above.
  • Using an ENC may allow a slightly more automated way of classifying nodes if your workflow uses hostnames to describe node roles etc., but without knowledge of the language the ENC is written in (Python 3 in this case) it is not nearly as easy to change the classification rules. The PE Console is pretty Puppet Noob friendly, and as it is supported by Puppet Support, it is easier to get help in making changes to your classification rules. 
  • The choice of language in which the ENC is written may have implications for your sysadmins or Puppet admins, should the person who wrote the ENC decide to move on from your organisation. 
All in all, using an ENC to manage your classification is a good strategy if you are comfortable with the limitations it imposes. I found this a valuable lesson in thinking about classification, and in applying Python skills I'm comfortable with to node classification, which I've normally done using either the PE Console or Hiera. Hopefully it may prove useful to someone else starting out on the same path.

Wednesday, 14 May 2014

Citrix Receiver on Ubuntu 14.04

Woohoo, got my new work laptop today, a Lenovo Thinkpad T430.

I didn't even boot into Windows, just stuck the Ubuntu CD in and installed 14.04 64 bit.

I now have it pretty much set up the way I like it, but Citrix Receiver (of all things!) gave me some troubles and I thought I'd better document what I did for next time I decide to re-install.

Firstly, the Ubuntu Wiki page on installing the ICA Client is very good. It can be found at and there is even a section on installing Citrix Receiver 12.1 on 64 bit 14.04. It is a great shame that Citrix have created a .deb which will not install, and which needs to be fixed and repackaged before it is usable, but the instructions are simple to follow, and it installed correctly when I followed them.

Then the next gotcha hit. Open XenDesktop, and it opens full screen, even when Windowed mode is selected in the ICA Client configuration. That's a problem right there, as there is no simple way of getting back to your Ubuntu Desktop, other than using some extra key combos or logging off the XenDesktop. Neither of these makes for an easy or productive work experience.

I asked around on the net and found very little help, except in the Citrix community forums, where one guy suggested I set the desired Horizontal and Vertical resolution globally. This worked, and I thought I'd document it here too, if only for my own use in the future.

The file I edited to force the resolutions I wanted was:


and the values I changed were from




This means that when I open my XenDesktop now, it opens in a 1024x768 window which I can then maximise like any other window. I can put the maximised XenDesktop window in one workspace and my other stuff in the other 3 workspaces, and easily flick back and forth between them.
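The file name and values have been lost from this page, but for what it's worth: on the installs I've seen, the global ICA Client settings live in wfclient.ini, and the keys controlling the session size are DesiredHRES and DesiredVRES. Treat the exact path and key names as assumptions for your Receiver version; matching the 1024x768 window just described, the idea looks like this:

```ini
; wfclient.ini (assumed location and key names; check your version)
[WFClient]
DesiredHRES=1024
DesiredVRES=768
```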

Et voila! Productivity restored!

Thursday, 8 May 2014

I want to break out, I want to breeeeaaaaaak out.....

I know, I know, the worst Queen pun ever.
But one of the coolest things about the Raspberry Pi is that you can use the GPIO pins to interface with other bits of electronics, and my second Pi (the first is my Media Centre at home, running RaspBMC) is now set up with a breakout board and case to make it a little more robust.

Not sure why this uploaded upside down....
I got the breakout board from RasPi Mart on Ebay and it is actually a really good deal for around 6GBP.

Here is a closeup:

You will notice that there are 8 GPIO pins broken out (P0 to P7) as well as the UART, I2C, SPI, etc. I found it difficult to find detailed information on which pin on the breakout corresponds to which BCM GPIO pin. I traced them, and being the nice sort of fellow I am, I'll detail them for you below. Bear in mind this is a Rev 2 Pi and this board is labelled "Raspberry Pi GPIO Extension Board V2.2", so if you don't have those precise items, YMMV.

Breakout Pin   BCM Pin   Notes
P1             GPIO18    Also PWM pin.
P2             GPIO27    Was GPIO21 on Rev1 Pi.
P7             GPIO4     Also the 1Wire pin by default.
I2C SDA        GPIO2     Was GPIO0 on Rev1 Pi.
I2C SCL        GPIO3     Was GPIO1 on Rev1 Pi.
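Since the silkscreen labels don't match the BCM numbers, I find a small lookup table handy when writing Python against these pins (for example with RPi.GPIO in BCM mode). This just encodes my traced mapping from the table above, so the same Rev 2 / board V2.2 caveats apply:

```python
# Mapping from this breakout board's silkscreen labels to BCM GPIO
# numbers, as traced on a Rev 2 Pi with the board labelled
# "Raspberry Pi GPIO Extension Board V2.2". Only the pins I traced
# are included; other boards/revisions may differ.
BREAKOUT_TO_BCM = {
    'P1': 18,        # also the PWM pin
    'P2': 27,        # was GPIO21 on a Rev 1 Pi
    'P7': 4,         # also the 1-Wire pin by default
    'I2C SDA': 2,    # was GPIO0 on a Rev 1 Pi
    'I2C SCL': 3,    # was GPIO1 on a Rev 1 Pi
}


def bcm_pin(label):
    """Return the BCM GPIO number for a breakout silkscreen label."""
    return BREAKOUT_TO_BCM[label]
```

With RPi.GPIO you would then call GPIO.setmode(GPIO.BCM) and address pins via bcm_pin('P1') rather than guessing from the silkscreen.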

I hope the above is useful to you, and helps you get the most out of your Raspberry Pi GPIO experimenting!

Wednesday, 7 May 2014

Time Flies.....


Has it seriously been a year? Time really has flown by.....

So to recap, still a penguin head, still working at Citrix, still liking my job, and still left with too little time to do what I want after work, family, etc.

Couple of things you'll hear me talking about over the next while though, Arduino and Raspberry Pi.

I've been doing a fair bit of stuff with Arduino over the last few months, which has taught me a lot about using libraries, general electronics, I2C, SPI, LCD screens, accelerometers, relays and the like, but this has led me in a great big circle back to the Raspberry Pi, and more specifically what it can do with its GPIO pins.

Expect to hear a fair bit about my struggles over the next little while, as I attempt to get my head around Python and using it to make the Pi do great things in the real world. At least, that's the idea.

Anyway, it's good to be back, and I'll be a little more diligent in keeping up to date in the future....

Tuesday, 21 May 2013

Honey I'm hoooome!


I'm back! Work has been particularly busy this last while and I've had a few other things to sort out, but I'm back into waffle mode, and for the odd person who stumbles across my meandering posts, I'm sorry but there will be more of them.

So, to take stock:
Little has changed. I'm still using Ubuntu with xmonad for my everyday work, I'm still working with RHEL and Solaris at Citrix, and I'm still kept busy by my two boys; at 3 1/2 and 1 1/2 they are hard work but lots of fun.

I have been bitten by a bug though. On holiday I tried sailing for the first time ever, having spent a huge amount of time on various powerboats since I was a child, and I'm hooked on the idea of getting places on water using only the wind. Expect to hear more about this as time goes on...

Anyway, I'll keep posting, and probably no-one will keep reading, but for the odd person that does, thanks, and feel free to tell me how much you (don't) like my ramblings.

Thursday, 10 January 2013

A Tale of 2 Disks

I'm sorry if you were waiting for part 2 of the LVM saga, it will come, but in the meantime, I've something else to talk about.

I had a normal 500GB hard drive in my work laptop and was reasonably happy with the performance: with Ubuntu 12.04 booting to an xmonad Window Manager, it took about 55-60 seconds from power button to login window.

However, some of my work colleagues have put SSDs into their computers, both desktop and laptop, and that gave me the idea to try that too.

Because I do a lot of work on my XenDesktop at work, and I also extensively use ShareFile and Dropbox, the size of the SSD was less important to me than the reviews it received. I store a lot of the non sensitive stuff I use on some form of cloud service, so a smaller, faster disk was fine by me.

I got a 128GB Crucial M4 SSD and installed Ubuntu 12.04 on it, again using xmonad as my WM. And boy, does it make a difference! Boot from power on to login screen now takes 19 seconds. Copying some large files from my backups to the new system was way faster than I'd dared hope; in fact, copying about 6GB of files, a mix of large and small, took around 3 minutes. I can tell you that I was very happy.

In general use I find things are faster too, opening 500MB+ files in Wireshark is so much quicker, file transfers from anywhere are so much quicker and generally the whole system feels more responsive.

So, if you've been thinking about SSDs, stop it! Get yourself one and see what a difference it really makes. With prices so low now, there is no real reason not to use one for the OS of a computer. Even if the data is stored elsewhere, having the OS and the programs you use a lot on an SSD really does make a difference. Just remember to enable the TRIM feature....
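For completeness, enabling TRIM on that era of Ubuntu meant either running fstrim periodically or mounting the SSD's ext4 filesystems with the discard option. A sketch of the fstab approach, where the UUID is obviously a placeholder for your own:

```
# /etc/fstab: the 'discard' option enables online TRIM for ext4 on the SSD
UUID=<your-root-uuid>  /  ext4  discard,errors=remount-ro  0  1
```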

Monday, 7 January 2013

Fun with LVM (Part 1)

Ok, I lied a little, it's not really a lot of fun, but I'm hoping to create something useful (to me at least) here.

I don't work with LVM extremely frequently, it's one of those things I set up and tend to forget about, so when I need to make some changes, I find I have to look at the same old man pages again and again. I thought I could make my life easier and create a single post with most of the information I need in one place.

What makes this blog post different to any of the other millions already out there, I hear you say? Nothing, except that I'll find this one easier to find and hopefully writing it all up will cement it in my own head. So, here goes:

My lab has the following layout, all created in a VM:
HDD1   8GB       250MB /boot | LVM PV1 | 1GB swap
HDD2   8GB       LVM PV2
HDD3   15GB     MDRAID Disk1
HDD4   15GB     MDRAID Disk2
HDD5   15GB     MDRAID Disk3

The three mdraid disks are combined into a 30GB RAID5 volume, with a single LVM PV (LVM PV3) created on top.

PV1 and PV3 were added to a single Volume Group (named lab), with separate LVs for /, /var, /home and /var/log, and Ubuntu Server Edition 12.04 installed on top. LVM PV2 is unused for now, and not all of the available space in the VG is used.

So we have a server with the following output from df -h after a clean install:

stefan@lvm-lab:~$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/lab-os    4.7G  683M  3.8G  16% /
udev                  241M  4.0K  241M   1% /dev
tmpfs                 100M  292K   99M   1% /run
none                  5.0M     0  5.0M   0% /run/lock
none                  248M     0  248M   0% /run/shm
/dev/sda2             223M   25M  187M  12% /boot
/dev/mapper/lab-home  4.7G  198M  4.3G   5% /home
/dev/mapper/lab-var    14G  473M   13G   4% /var
/dev/mapper/lab-log   7.5G  257M  6.9G   4% /var/log

So we can then start using some of the commands to see what's going on and make some changes.

The first useful command is pvdisplay, which lists the Physical Volumes present in the system. Bear in mind that this will list any volume, physical or RAID, which has an LVM volume flag set. On my lab, I get the following output:

stefan@lvm-lab:~$ sudo pvdisplay
  --- Physical volume ---
  PV Name               /dev/md0
  VG Name               lab
  PV Size               29.99 GiB / not usable 0
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              7678
  Free PE               0
  Allocated PE          7678
  PV UUID               Marv0q-qLZJ-HWOt-mVUC-FKoS-zqT8-K5C214
  --- Physical volume ---
  PV Name               /dev/sda3
  VG Name               lab
  PV Size               6.84 GiB / not usable 4.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              1749
  Free PE               1560
  Allocated PE          189
  PV UUID               rIZb3g-GZdJ-yuGY-UC2k-iG0G-Q4f3-2TmS9R
  "/dev/sdb1" is a new physical volume of "8.00 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdb1
  VG Name            
  PV Size               8.00 GiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               jgePi7-SkVo-7kp2-KUdQ-Ajel-MQFU-Ubs4PH

You'll note that the first entry is the RAID array and the second is the LVM section of the first disk. You'll also note that the other 8GB disk is listed, even though it has not been added to a VG yet.

A similar command, vgdisplay, can be used to list all Volume Groups; in this lab I only have a single VG, but this would list all VGs present:

stefan@lvm-lab:~$ sudo vgdisplay
  --- Volume group ---
  VG Name               lab
  System ID          
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  5
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                4
  Open LV               4
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               36.82 GiB
  PE Size               4.00 MiB
  Total PE              9427
  Alloc PE / Size       7867 / 30.73 GiB
  Free  PE / Size       1560 / 6.09 GiB
  VG UUID               BgXqr1-gDCJ-OLzC-dYgV-2qa1-nzL2-95jpZx

As expected, there is also a similar command to view the Logical Volumes in the VG: lvdisplay.

stefan@lvm-lab:~$ sudo lvdisplay
  --- Logical volume ---
  LV Name                /dev/lab/os
  VG Name                lab
  LV UUID                fYXl5R-NlHb-44Gs-rQWh-IiRE-3aaQ-fHusdk
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                4.66 GiB
  Current LE             1192
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     4096
  Block device           252:0
  --- Logical volume ---
  LV Name                /dev/lab/home
  VG Name                lab
  LV UUID                GUGN3K-ICHB-VnvE-GvcK-16nP-Q20e-pyAqse
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                4.66 GiB
  Current LE             1192
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     4096
  Block device           252:1
  --- Logical volume ---
  LV Name                /dev/lab/log
  VG Name                lab
  LV UUID                OagOZY-fvan-ji02-92UA-bv2r-EQuS-SEebSH
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                7.45 GiB
  Current LE             1907
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     4096
  Block device           252:2
  --- Logical volume ---
  LV Name                /dev/lab/var
  VG Name                lab
  LV UUID                0wztzb-mQOj-5Gp8-L21T-TIKP-1gcT-cermwT
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                13.97 GiB
  Current LE             3576
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     4096
  Block device           252:3

So there is a pattern emerging here: the commands are generally the same, and just the prefix (pv-, vg- or lv-) changes. There are also a number of switches which can be used to shape how each command runs. For example, to see information on only a single Logical Volume, with units in kB, you would run the following:

stefan@lvm-lab:~$ sudo lvdisplay --units K /dev/lab/var

The man page for each command does a good job of explaining these various switches and options.

In the next installment, I'll be looking at how a series of commands can be used to interact with the LVM elements, such as adding PVs to an existing or new VG, altering the size of a VG, altering the size and/or number of LVs in a VG, and also look at some gotchas when it comes to the filesystems on the Logical Volumes.