The tale of the Docker

I have come late in life to the wonderful world of virtual machines. Docker is my new best friend.

Too long have I spent putting a program on a machine, using it, then later updating something unrelated, and finding that my program doesn’t work. But no more. Earlier this year I finally got around to buying a dedicated computer to use as a server, installed Docker on it, and have never looked back.

Given my neophyte enthusiasm, I thought it might be worthwhile sharing what Docker is, why it’s so good, and what I am actually doing with it.

What is this “virtual machine” thing?

For years, when I was adding some new software to my machines I would just go ahead and install it. Then time passed. I updated my machines with some new operating system or security update. And guess what – the software I used only occasionally had stopped working, and I'd have to go and find the latest version, usually with a new whizz-bang interface, update it and learn it all over again.

The beauty of using a virtual machine is that all these software packages can exist in their own self-contained operating systems underneath the main computer. When you update the computer it doesn’t matter to them – they just carry on in their own little worlds regardless. They can share some basic infrastructure, but they don’t have to. (I should note that I use the terms “virtual machine” and “container” pretty much interchangeably, but they are actually different. The differences aren’t important for this discussion.)
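As a small illustration of that isolation (assuming Docker is installed and you're happy to pull the official python images), two incompatible versions of the same software can run side by side without either being installed on the host:

```shell
# Each container carries its own userland, so these two Python
# versions coexist without touching the host machine at all.
docker run --rm python:3.8-slim python --version
docker run --rm python:3.12-slim python --version
```

The `--rm` flag throws each container away afterwards; update the host as much as you like and neither is affected.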

For a long time I worried that all this containerisation would introduce so much inefficiency that things would grind to a halt. But that’s just not the case. For the most part (with the possible exception of the most intensive computational tasks) containers work just fine.

Picture of Docker architecture
Multiple applications using the same Docker infrastructure

There are a number of virtual machine programs out there, but perhaps the best known is Docker. Since the mid-2010s it has become ubiquitous in a remarkably short time. Once you have containerised something, it's a small conceptual leap to realise that you can run that container on pretty much any hardware, and the door is opened to running containers across multiple servers, depending on where there is spare CPU power.

But anyway, the purpose of this blog post is mainly to talk about what I am doing with Docker in the context of a home automation and messing about scenario, especially since I moved to a gigabit fibre connection at home. Perhaps the easiest way to do that is to talk through the containers that I am running at the moment.

How did this list get so long?

It’s only when I took a step back and looked at how my Docker use has exploded over the last few months that I realised how quickly I have become dependent on it. Here is the list of containers I am currently running.

List of containers

These are in alphabetical order, and I’ll run through them like that. But I want to start with one nearer the bottom – Portainer.

Portainer is a container which exists only to make managing containers easier. That picture above of my containers – that's a screenshot from Portainer. It's not essential, but it makes running and managing containers much more approachable, even if you are happy with command-line interfaces. Highly recommended.
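For anyone wanting to try it, getting Portainer going is pleasingly short. This is a sketch based on Portainer's documented Community Edition setup; the published port and image tag may differ depending on your version:

```shell
# A persistent volume for Portainer's own data...
docker volume create portainer_data

# ...then the container itself, with the Docker socket mounted
# so it can see and manage its fellow containers.
docker run -d --name portainer --restart=always \
  -p 9443:9443 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest
```

Mounting the Docker socket is the trick that lets one container manage all the others.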

From the top

So, starting again from the top. appdaemon is part of my home automation system. It can (and in my case does) interface with Home Assistant (of which more later). But it also allows Python programming, giving much more flexibility. (I believe it can also be used for running wall-mounted tablets to interface with Home Assistant, but that’s not for me.)

The next three (all starting with “cgate”) are containers to interface with my home’s structured lighting system – a slightly obscure Australian system called C-Bus. The first container translates a serial interface to the lighting system into a network socket. The second is the management system for the lights (not needed for everyday use). And the third translates between that management system and MQTT – a home automation communication protocol.

duplicati is the next container. If you have ever used Time Machine on a Mac then you’ll be familiar with the concept. It’s a free, open-source backup program which allows for encrypted automatic backups. At the moment I only use it for backing up my containers’ data folders, but I have big plans.

The next two are parts of my home automation system. home-assistant is a great hub for any home which has multiple types of internet-capable device in it. It will integrate pretty much anything into one easy-to-use interface to observe, control and automate. There are several ways of running it, and my preference is to use Docker. There will be another blog post about my home automation decisions in the future. But one of them was to use habridge to link Home Assistant to my Amazon Echos, without too much information leaving my home.

Librespeed is a container with an internal-only speed test, so I can check to see if my home network is running ok if there are issues. Very handy; very boring!

The next two take us back to the home automation arena. MQTT is a protocol for internet-of-things devices to talk to one another, and – via mosquitto – it is the backbone on which I base my home automation wherever I can. It’s small, reliable, and – once you overcome its foibles – easy to use. A bit like node-red, which is a neat graphical programming tool, useful for wiring things together. I’m not using that much at the moment, but have in the past and will in the future.
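To give a flavour of mosquitto in action, here's a sketch using the command-line clients that ship with it (assuming the broker is reachable on localhost and configured to allow anonymous local connections – recent mosquitto versions are locked down by default):

```shell
# Subscribe to everything under the 'home/' topic tree
# (-v prints the topic alongside each message)...
mosquitto_sub -h localhost -t 'home/#' -v &

# ...then publish a message; the subscriber above should
# print: home/lounge/light ON
mosquitto_pub -h localhost -t home/lounge/light -m ON
```

That publish/subscribe pattern – devices fire messages at topics, and anything interested listens – is all Home Assistant needs to tie everything together.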

ntp is a tiny container which runs a network time daemon. For some reason my router, which used to provide this service, stopped doing so, and I needed a quick alternative. A perfect use for a Docker container.
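Spinning up a replacement really is a one-liner. The image name below (cturra/ntp, a community image wrapping chrony) is an example rather than a recommendation – any small NTP image will do:

```shell
# Run an NTP daemon in a container, exposing UDP port 123 to the LAN.
docker run -d --name ntp --restart=always -p 123:123/udp cturra/ntp

# Check it answers (replace the address with your Docker host's).
sntp 192.168.1.x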

Now we move on to media – probably the most computationally intensive use I have for Docker. Plex is a media server, which I use to manage films and TV programmes I have on a hard drive on my network. I’m not sure I like it much – too much flexibility at the cost of usability – but it works. Tautulli is an assistant for Plex which helps manage users – actually not much use for me, but handy if you share content. And rtorrent is a torrenting program I use for downloading if I want to use my main machine for something else.

Of the remaining five, two – tasmoadmin and zigbee2mqttAssistant – are back in the world of home automation. The first is a front end for managing multiple devices running the Tasmota firmware. The second is a front end for managing a program which interfaces between Zigbee and MQTT. Handy when they are needed, but they don’t get much use.

The remaining three containers act together to give a wide range of graphical information about my home network. unifi-poller takes data from my UDM-Pro, and puts it into an influxdb database. Then Grafana interrogates that database to produce pretty pictures. This is a wonderful integration, and is a great demonstration of the power of user-generated software.
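For the curious, the shape of that stack is roughly as follows. The image names are the real ones; the unifi-poller environment variable names are from memory, so treat them as placeholders and check the unifi-poller documentation before relying on this:

```shell
# A shared network so the containers can find each other by name.
docker network create monitoring

# The time-series database that unifi-poller writes into.
docker run -d --name influxdb --network monitoring influxdb:1.8

# Grafana, which queries InfluxDB and draws the pretty pictures.
docker run -d --name grafana --network monitoring -p 3000:3000 grafana/grafana

# The poller itself, pointed at the controller and the database.
# (The UP_* variable names here are an assumption – consult the docs.)
docker run -d --name unifi-poller --network monitoring \
  -e UP_INFLUXDB_URL=http://influxdb:8086 \
  -e UP_UNIFI_DEFAULT_URL=https://192.168.1.1 \
  -e UP_UNIFI_DEFAULT_USER=readonly \
  -e UP_UNIFI_DEFAULT_PASS=changeme \
  golift/unifi-poller
```

Three containers, one network, and a read-only account on the controller – that's the whole integration.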

Screen shot of unifi-poller
An example of unifi-poller in action

Summary

Having a program delivered with its own operating system is great. No more having to track down libraries and install them, and no more worrying about what breaks on an update. In just a few months I have become a convert to the world of Docker. The next step for me is to start exploring ways of freeing myself from one set of hardware and having my Docker containers run on whatever CPU is free. Unraid, I’m coming…

Credits

Thanks to @andasta from Unsplash for the original picture of a docker (now cropped)
