Sometimes, old hardware lasts a very, very long time. Until recently, I had an iPhone 7, which ran iOS 12 almost perfectly - the battery could have been better, but in general it was good. I upgraded to an 8 recently, mostly for the battery and because work (who provides my phone) had a spare one in the cupboard.
In 2008, I left the BBC and started working from home in London, for a friend in Denmark. We were doing email archiving - very much like AfterMail, where I met him - so I needed a machine I could run Exchange and Active Directory on. I got a well-specced (for the time) Mac Mini - Core 2 Duo, 500GB disk, 8GB RAM (the most it could take). It served me as a VMware Fusion server for the 9 months or so I worked for them.
Since then, it’s been used for various things, mostly as an iTunes server, serving media to the Apple TV, or storing our music collection. Over the years, I swapped out the DVD drive and hard drive for a 256GB SSD and a 500GB SSHD I had spare. The SSD gave the old machine a big boost in performance.
With the advent of Docker, I started to run various services on it, mostly to learn how it worked. After a while I ran into some Mac-specific issues, so I fired up VirtualBox and ran Linux in a VM, with Docker containers inside that.
While that worked, I was basically not using the Mac for anything else, so I switched the OS to Ubuntu 18.04 (the current LTS) and ran the Docker containers on bare metal. The hardest bit of the upgrade was finding a monitor (in this case, our TV) to do the install with. Once installed, it just lives on the network, in the loft in the studio, next to the NAS and the UPS.
Low maintenance and extreme ROI doesn’t even begin to describe what I’ve had out of this little machine. It’s been fantastic.
So my setup out there is the Mac Mini and a Synology DS418j with 4x 4TB of disk (12TB usable). The Mini does most of the compute work, and the NAS provides the disk and a few other bits.
Why would I want to do this?
Mostly, it’s fun! I use Docker a lot at work, in various forms, so putting things in containers at home is a logical extension of that - it just makes sense to containerise all the things.
The concept of containers makes it really easy to package up something - anything from a full working application like Homebridge, to a single shell script - and run it in its own space, so that the underlying OS is none the wiser.
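As a minimal sketch of what that looks like for the shell-script case - the script name here is just an example - a Dockerfile can be as small as this:
# wrap a single shell script in its own container (example script name)
FROM ubuntu:18.04
COPY do-the-thing.sh /usr/local/bin/do-the-thing.sh
RUN chmod +x /usr/local/bin/do-the-thing.sh
ENTRYPOINT ["/usr/local/bin/do-the-thing.sh"]
Build it with docker build . -t do-the-thing, and the script runs the same way anywhere the image does.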
What would I change or add if I had to / could?
Let’s say the Mac Mini dies and I have a bunch of spare cash lying around. Most likely, I’d get an equivalently small Wintel box and run Linux on it again. Maybe I’d get one of the newer, more powerful x86-based Synology NAS units and run containers on that.
Most likely, I’d not buy another Mac Mini, but only because the new ones are overpriced unless you want to run macOS - there are a lot of more powerful Intel machines out there if you just want to run Linux, Intel NUCs being one of them. I’d get a 32GB NUC with 4-6 cores, I think.
So far, tho, this machine has been rock solid. It’s 10 years old, and there is no reason to shelve it.
I’d also love some kind of orchestration tool - something like a very, very light version of Kube. The Mini could run Kube, I think, but it’s overkill for what I’m doing. I would like some way to build a container locally on my laptop, push it to the repository, then run an API command to restart the running container with the new image. I have this “working” using make and bash, but it’s no ECS or Kube. Maybe that’s something I can do later.
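For what it’s worth, the flow is roughly this - a sketch only, with placeholder image and host names, not my actual script:
#!/usr/bin/env bash
# rough sketch of the laptop-side deploy flow - image name and hostname are placeholders
set -euo pipefail

IMAGE=docker.home.local/homebridge

# build and push from the (fast) laptop
docker build . -t "$IMAGE"
docker push "$IMAGE"

# then ssh to the Mini and bounce the container onto the new image
ssh ubuntu@mini.home.local "
  docker pull $IMAGE &&
  docker rm -f homebridge &&
  docker run -d --restart=always --net=host --name homebridge $IMAGE
"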
I’d also add a build system which monitored a git repository and rebuilt containers on commit. I could run Concourse, which we use at work, but I do maybe one or two container builds a month, so that’s also overkill. There’s probably enough resource to do it, but for now I have a registry on the Mini, so I can build on my (relatively) fast laptop and push the resulting container, rather than using the (relatively) slow Mini to do the builds.
What sort of resources am I using for this?
I have the basic Mini, with 8GB of RAM. It’s running Ubuntu 18.04, and it’s using about 1.5GB of RAM, with about 6GB used as cache. A similarly specced NUC would be about $400 NZD.
top - 05:37:07 up 1 day, 9:10, 2 users, load average: 0.06, 0.13, 0.13
Tasks: 197 total, 1 running, 142 sleeping, 0 stopped, 0 zombie
%Cpu(s): 2.7 us, 2.5 sy, 0.0 ni, 94.5 id, 0.2 wa, 0.0 hi, 0.2 si, 0.0 st
KiB Mem : 7907236 total, 1431748 free, 1244500 used, 5230988 buff/cache
KiB Swap: 4194300 total, 4194300 free, 0 used. 6465064 avail Mem
The main software installed on here is Docker - there isn’t much else.
# install supporting stuff
sudo apt-get install \
apt-transport-https \
ca-certificates \
curl \
software-properties-common
# install GPG keys for docker
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
# add the x86 repo
sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
# update
sudo apt-get update
# install docker-ce
sudo apt-get install docker-ce
# make sure the ubuntu user can use the docker command
sudo usermod -aG docker ubuntu
# test it with hello-world
sudo docker run hello-world
# turn it on, so it comes up on boot.
sudo systemctl enable docker
The OS is out-of-the-box Ubuntu 18.04, the current LTS version. It’s not overly exciting, and I’m sure I could find a more pared-down OS, but this works for me, as I’m very comfortable with Ubuntu. If I were doing this in the cloud, I’d be using Amazon Linux 2 - I tried it for this, but it’s VM or cloud only.
The Mini also has a 500GB SSHD mounted on /mnt/data, but I have 200GB free on the 256GB SSD, so I’ve got no reason to use it, and no inclination to take the machine apart to remove it. I could use it as backup, I guess.
Things to run on a 10 year old machine
There are some things that are good to run on this machine, and some things it’s totally unsuitable for. Transcoding or anything like that is a non-starter - it doesn’t have the cores or the grunt to do it. But there are plenty of other things it can do.
UNMS
UNMS is the Ubiquiti Network Management System - UBNT’s free management tool, designed for people running wide-area ISP networks built around their routers and WiFi gear.
For me, it also works to “manage” the EdgeRouter X ($99 NZD) I have as the router/gateway for my fibre connection. Like most things in this setup, it’s total overkill, but it was fun to set up and play with.
Ideally, I’d replace my WiFi kit with Ubiquiti UniFi gear, or maybe AmpliFi, but the combination of the EdgeRouter X and some older Apple AirPort Extremes has proven to work great. Eventually, I’ll need to revisit this when some of this gear dies - but again, six-year-old gear is still going strong, and I don’t need to replace it yet.
Installing UNMS is easy, tho it’s a case of curl | sh, which has serious security implications (let’s download and run a script from the internet!). I trust Ubiquiti in this case, but there’s a lot out there I don’t.
UNMS makes its own user, and uses docker-compose to set up the various containers it needs, including postgres, rabbitmq, fluent, redis, netflow and nginx.
If I was running a wireless ISP, it’d be awesome, but as it is, it’s a nice way to manage the router.
Plex
Plex is the godfather of media servers, descended from the now-ancient Xbox Media Center (XBMC). It’s a very full-featured media centre, and can serve media to multiple clients, including our Apple TV and iOS devices, as well as devices outside the network.
Plex takes a fair bit of effort to get running, but the container helps a lot - mostly you just need to get the command-line settings right. The provided base container - plexinc/pms-docker - works great out of the box.
I store the media on the NAS and serve it to the Mac Mini over NFS, so Plex just sees a local folder. Transcoding anything is slow, so I have all the clients set to stream (maximum bitrate). This seems counterintuitive, but it works. Even with a few devices running, the Mini is barely breaking 10% CPU, as it’s mostly just moving bits around on the network.
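The mount itself is just NFS - the NAS hostname and export path below are made up, but it looks something like this in /etc/fstab:
# NFS mount of the NAS media share (hostname and export path are examples)
nas.home.local:/volume1/media   /home/ubuntu/plex/media   nfs   defaults,ro   0   0
You can also mount it by hand first to test it: sudo mount -t nfs nas.home.local:/volume1/media /home/ubuntu/plex/media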
It’s running in --net=host mode, which isn’t ideal, but the other option is fixing ports, which I don’t want to do.
I bought a Plex Pass, but I’m not sure if I need it for this. $40/year wasn’t bad, given we use it almost every day.
docker pull plexinc/pms-docker:latest
docker run \
--restart=always \
-d \
--name plex \
--network=host \
-e TZ="Pacific/Auckland" \
-e PLEX_CLAIM="claim-nnnnnnn" \
-v ~/plex/config:/config \
-v ~/plex/transcode:/transcode \
-v ~/plex/media:/data \
plexinc/pms-docker:latest
Homebridge
Homebridge is a Node-based hub for HomeKit, so you can use Siri and Home.app to control various non-HomeKit things around the house. I’ve written about it a number of times before:
- Homebridge - Homekit without Homekit hardware
- More Homebridge - AWS IOT, Dash Buttons, SQS, Broadlink RM3 Mini
- More IOT switch fun with Sonoff and Tasmota
For this, I built my own container. There might be workable ones out there now, but there weren’t when I started. It used to rely on the server having local services available, but I appear to have installed everything inside the container - avahi, libmds and others. You also have to use --net=host to get this working, which is a bit yucky, but mdns wants to control a lot of stuff.
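The Dockerfile is roughly along these lines - a sketch, not the exact file; the Node version and plugin list are just examples:
# sketch of a self-contained homebridge image - base version and plugin list are examples
FROM node:10-slim

# the avahi/mdns compatibility bits homebridge needs for HomeKit advertising
RUN apt-get update && apt-get install -y \
      libavahi-compat-libdnssd-dev avahi-daemon dbus \
    && rm -rf /var/lib/apt/lists/*

# homebridge itself plus whichever platform plugins you use
RUN npm install -g --unsafe-perm homebridge homebridge-platform-wemo

VOLUME /root/.homebridge
# a real image probably also starts dbus/avahi here - kept minimal for the sketch
CMD ["homebridge", "-U", "/root/.homebridge"]
Run it with --net=host and a volume for the config, e.g. docker run -d --restart=always --net=host -v ~/homebridge:/root/.homebridge docker.home.local/homebridge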
I have various components installed for the switches and things around the house
* Wemo (I have 4 switches)
* Broadlink RM (I have 2 Broadlink RM Mini RF emitters to control our heating)
* Tasmota switches to control the Sonoff switches, which run the Tasmota alternative firmware. These communicate via MQTT, provided by Mosquitto. This controls the outside tank pump (plus outside temperature/humidity), the spa temperature (but not the spa pump or heater, sadly), and some lights in the bedroom (plus temperature and humidity, as that room gets very humid and damp)
* A temperature and humidity plugin which reads from MQTT and provides the values to Homekit.
* My own plugin - https://github.com/nicwise/homebridge-platform-sqs - which listens on an SQS queue and toggles a switch when a message comes in. I use this with a pair of Amazon Dash Buttons which control the outside (spa) lights and the heating (on at 20 degrees, and off)
Mostly, I use this as a big, fairly intelligent scheduler. I detest Siri, so there is no “Siri, turn the lights on”, tho that does work.
Nginx - frontend to all the things
Nginx is now my HTTPS server of choice, as it’s very easy to set up and very performant. I have it controlling ports 80 and 443, and proxying to other containers which want to listen on those standard ports.
It also handles SSL termination, so I can run Let’s Encrypt certificates locally - HTTPS all the things. It handles frontend services for
* s3.home.local -> Minio
* terraform.home.local -> Anthology
* unms.home.local -> UNMS (including websockets)
* docker.home.local -> docker registry
* Anything else I care to run which wants its own hostname on port 443
DNS names are provided by the router (static host names); I’d prefer to use Pi-hole for this, but I’ve just not managed to get it working yet.
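Each of those is just a small server block. This one is a sketch for the Minio frontend, assuming the shared TLS settings live in ssl.conf (as in the registry config further down) and Minio is published on localhost port 7180 (as in the Minio section below):
server {
    listen 443 ssl http2;
    server_name s3.home.local;

    include /etc/nginx/conf.d/ssl.conf;   # cert, key and TLS settings

    location / {
        proxy_pass http://localhost:7180;  # the Minio container's published port
        proxy_http_version 1.1;
        proxy_set_header Host $host;
    }
}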
Protip: Make sure you don’t take nginx down before pulling the latest image from the registry, as the registry is fronted by … nginx!
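In other words, pull first, restart second - something like this (the nginx image name and run flags here are illustrative, not my exact setup):
# pull the new image while the registry (fronted by nginx) is still reachable...
docker pull docker.home.local/nginx:latest
# ...and only then recreate the container
docker rm -f nginx
docker run -d --restart=always -p 80:80 -p 443:443 --name nginx docker.home.local/nginx:latest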
Anthology - Terraform registry
Anthology is a basic Terraform registry, which is where you can store Terraform modules. You don’t need one to use Terraform, but it was something I wanted to play around with. I use Terraform for everything at work, and all my personal AWS infrastructure - the host this blog is on, plus a few others, DNS, CloudFront etc - is set up using Terraform.
Anthology backs onto S3, and I use Minio for that locally. Minio lets you set an Access Key and Secret, so as long as the one in Minio and the one here match, you’re good to go.
I still don’t have a good way to upload to a registry, or manage the content. The official one backs onto GitHub, but I think I need to write something which packages modules locally and pushes them to S3, and there doesn’t appear to be anything around yet which does that. It’s not hard to do in bash tho, just not very repeatable.
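Something like this would do it - a sketch only; in particular the namespace/name/provider/version key layout is my guess at what Anthology expects, so check that before relying on it:
#!/usr/bin/env bash
# package a local Terraform module and push it to the Minio-backed bucket
# the bucket key layout below is an assumption - verify against Anthology's docs
set -euo pipefail

NAMESPACE=nic MODULE=network PROVIDER=aws VERSION=1.0.0

tar -czf "${MODULE}-${VERSION}.tgz" -C "./modules/${MODULE}" .

AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE \
AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY \
aws s3 cp "${MODULE}-${VERSION}.tgz" \
  "s3://anthology/${NAMESPACE}/${MODULE}/${PROVIDER}/${VERSION}/module.tgz" \
  --endpoint-url https://s3.home.local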
docker pull erikvanbrakel/anthology
docker run -d --restart=always \
--name anthology -p 7080:7080 \
-e "AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE" \
-e "AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" \
--dns=192.168.1.1 \
erikvanbrakel/anthology --port=7080 \
--backend=s3 \
--s3.bucket=anthology \
--s3.endpoint=https://s3.home.local
Minio - local S3
Minio is a local server which exposes an API compatible with the AWS S3 API, to a high level of detail. It can support multi-server setups, redundancy, mirroring and a load of other stuff. You could run a cloud storage business off this software - I just use it as a dumb blob store.
I use the default container, and point it at the local file system to store its files. The Access Key and Secret are just random strings which have to match up with whatever client (Anthology, in this case) you are using.
docker pull minio/minio
docker run -d --restart=always \
-p 7180:9000 --name minio \
-v /home/ubuntu/minio/data:/data:rw \
-v /home/ubuntu/minio/config:/root/.minio:rw \
-e "MINIO_ACCESS_KEY=AKIAIOSFODNN7EXAMPLE" \
-e "MINIO_SECRET_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" \
minio/minio server /data
Docker Registry
This provides a local version of the Docker Registry, which is sort of the non-UI version of Docker Hub. Again I’m using the official container - registry - as I trust Docker to provide something which is secure and patched.
I back this onto the local file system, tho the NAS would be just as good. I did have to make sure that nginx allowed larger file uploads for this - some of the container layers can get quite large; 350MB isn’t uncommon for something with Ubuntu in it!
That’s fairly easy with nginx tho:
server {
    listen 80 http2;
    listen 443 ssl http2;

    server_name docker.home.local;
    include /etc/nginx/conf.d/ssl.conf;

    client_max_body_size 500M;   # <-- the important bit

    location / {
        proxy_pass http://docker;
        proxy_http_version 1.1;
        proxy_set_header Connection keep-alive;
        ...
docker pull registry
docker run \
-d --restart=always \
-p 7181:5000 --name registry \
-v /home/ubuntu/registry:/var/lib/registry \
registry
Once this is up, you can just tag your images using the hostname, push to it, then pull from it. Easy.
docker build . -t docker.home.local/homebridge
docker push docker.home.local/homebridge
...
docker pull docker.home.local/homebridge
docker run docker.home.local/homebridge
Mosquitto
Mosquitto is a local MQTT broker I use for the Tasmota / Sonoff switches. I didn’t do much to set this up, and most of it is documented here.
Again, I’m using the provided eclipse-mosquitto container.
docker pull eclipse-mosquitto:latest
docker run \
-d --restart=always \
--name mosquitto \
-p 1883:1883 -p 9001:9001 \
-v /home/ubuntu/mosquitto:/mosquitto:rw \
eclipse-mosquitto:latest
Most likely, you want to set it up with a username and password. That’s on my to-do list, as this is all local to my internal network.
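When I get to it, it’s only a couple of steps - a sketch, with an example username, using the tools already in the eclipse-mosquitto image:
# generate a password file inside the mounted config directory (example user)
docker exec -it mosquitto mosquitto_passwd -c /mosquitto/config/passwd homeuser

# then in /home/ubuntu/mosquitto/config/mosquitto.conf:
#   allow_anonymous false
#   password_file /mosquitto/config/passwd

# and restart the broker to pick it up
docker restart mosquitto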
Some of my own ones
- Copy Media - runs every 5 mins, looks for a specific file extension in a folder, and moves any matching files somewhere else. It’s basically a bash script, in a container, fired by cron (there’s a sketch of it after this list).
- Certs - refreshes the local *.home.local SSL certificate with Let’s Encrypt, using DNS as the source of truth, and pushes it to the various local places I need it. I have to do this every 3 months, so it’s important that it’s easy, bulletproof, and somewhat automated. So, like a good SRE, I put it in a container and wrote a runbook ;-)
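The Copy Media script mentioned above is about as simple as it sounds - this is a sketch with example paths and extension, not the real thing:
#!/usr/bin/env bash
# look for finished .mkv files and move them into the media tree (paths are examples)
set -euo pipefail

SRC=/mnt/data/incoming
DEST=/home/ubuntu/plex/media

# only touch files older than 5 minutes, so nothing half-written gets moved
find "$SRC" -name '*.mkv' -mmin +5 -print0 |
  while IFS= read -r -d '' file; do
    mv -- "$file" "$DEST/"
  done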
Other things
I’m thinking about dumping Bitbucket and putting GitLab - in a container - on the Mini. That would definitely be backed onto the NAS (or backed up onto the NAS). I’ve not had a problem with Bitbucket, but it’s one less thing to have out there in the cloud. It does mean I’d need to be on the VPN to get to my repos, but most of the time that’s not an issue. I’m not sure how I’d do it for remote builds like CodeBuild, which I use to build and deploy some Lambdas in AWS - maybe I can use Bitbucket as a mirror.
There’s life left in the ol’ girl!
OK, that’s usually said of a boat (or a spaceship), but there’s definitely life left in this old Mac Mini. It’s not really stretched by what I’m doing with it, and it’s still providing a lot of value after 10 years. I don’t need a super-powerful server at home - there are only three of us, and the cat has very low computing requirements.
Docker and containers are a technology that isn’t going away any time soon, especially when you consider that “serverless” is really just “containers with hosts you don’t manage, and a great lifecycle story”. Knowing how containers work, and running them for real, is a very useful skill to have - even if “real” is just a few things to play around with at home. It’s something I think every developer needs to be exposed to now - it’s not optional. And mostly, it’s fun.