Like a lot of engineers, Nick Jones never met a piece of hardware that didn’t spark joy. Or at least enough interest to keep around collecting dust until the next idea sparked.
One night at the pub, Jones, community engineering lead at Mesosphere, and his former colleague Matt Illingworth realized that if they combined their spare parts, they could build a “very small but serviceable” cloud platform: “A little too much for ‘homelab’ meddling, but definitely enough to do something interesting,” Jones says.
Our Homebrew series highlights how OpenStack powers more than giant global data centers, showing how Stackers are using it at home. Here we’re stretching the definition a little: this deployment is tucked away in a former bunker. Not exactly a cluster in the closet, but decidedly in line with the hacker-hobbyist spirit.
Now that the flame was lit, the pair ticked over options for where to put it. As luck would have it, a friend had plunked down for a decommissioned nuclear bunker tucked into the southern highlands of Scotland near Comrie. In one epic weekend, these brave hearts drove from Manchester to build a rack, install the hardware, deploy bootstrap and out-of-band infrastructure, configure basic networking and test it enough to manage it remotely with some confidence. All in time to drive back 252 miles for their day jobs.
Figuring out what to call their creation kept them occupied from Glasgow to Lancaster. Illingworth wanted something that was universally liked (skirting the horror of both vegans and many tech-conference attendees) and settled on sausage. That decision, in turn, flavored the names of the virtual machines: chipolata, hotdog, saveloy, cumberland and bratwurst.
“Anyway, it seemed like a good idea after having been awake for about 18 hours,” Jones says.
“Anyone who still thinks OpenStack is hard to deploy and manage is dead wrong,” Jones says. Some 14 months and two upgrades later (made painless with Kolla, Jones adds) it’s still afloat. But perhaps not for long: they’re looking for folks to chip in to pay for costs and who might be interested in using it, too. (You can get in touch with Jones through his website.)
Superuser asked Jones a few questions about the particulars.
Tell us more about the hardware.
It’s a hobby project, so hopefully we won’t be shamed for the pitiful state of the hardware, but it’s good enough to be of use!
Right now it’s running on a selection of vintage HP BL460c G6 blades – 10 of them in total at the minute, each with 192GB RAM and a pair of mirrored SSDs. This gives us a reasonable amount of density and serviceable I/O, although they’re power hungry since they’re a very old generation of Xeon. Currently on 1GbE networking but we’re hoping to switch that out to 10GbE soon.
What are you running on it?
In terms of services, aside from the ‘standard’ OpenStack services, it also runs Magnum for Kubernetes clusters on demand and Designate for DNS-as-a-Service. The one service we don’t yet run is Cinder, so there’s no persistent storage available, but as with the networking upgrade we’re hoping to add a small amount of that in the not-too-distant future, again probably on donated hardware. No object storage either. Given the hardware, we’d probably deploy Ceph to take care of both of those.
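For readers unfamiliar with those two services, here’s a rough sketch of what asking Magnum for a Kubernetes cluster and Designate for a DNS zone looks like with the standard `openstack` CLI. All of the names (cluster, template, zone, record) are hypothetical, and the commands of course need credentials for a cloud that runs these services:

```shell
# Ask Magnum for a Kubernetes cluster from an existing template
# (template name and node count are illustrative).
openstack coe cluster create dev-cluster \
  --cluster-template k8s-template \
  --node-count 3

# Fetch a kubeconfig for the new cluster once it's ready.
openstack coe cluster config dev-cluster --dir ~/.kube

# Ask Designate for a DNS zone, then add an A record to it
# (zone name, email and address are illustrative).
openstack zone create --email admin@example.com sausage.example.com.
openstack recordset create sausage.example.com. www \
  --type A --record 203.0.113.10
```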
Who’s using it and what are they doing with it?
Most of the users of the platform have found it useful to be able to spin up a handful of pretty big (over 16GB RAM) VMs in order to do remote development work. It’s really handy for people who don’t want to run big Devstack or Minikube (for example) clusters locally on their laptops and who’d rather just SSH into somewhere else to do that sort of thing, without worrying about the really expensive bill they’d face pretty much everywhere else. With enough of us who find such a service useful all clubbing together, it just about covers the costs of running it.
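Launching one of those remote dev VMs is standard Nova territory. As a hedged sketch (the flavor, image, network and key names below are all hypothetical, as is the floating IP), it looks something like:

```shell
# Boot a large dev VM; flavor/image/network/key names are illustrative.
openstack server create dev-box \
  --flavor m1.xlarge \
  --image ubuntu-18.04 \
  --network private \
  --key-name laptop

# Attach a floating IP so it can be reached over SSH.
openstack floating ip create public
openstack server add floating ip dev-box 203.0.113.20

ssh ubuntu@203.0.113.20
```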
Longer-term plans are to put the configuration for the whole platform online and welcome pull requests to add or change configuration for various services – along with comprehensive testing, of course. This would probably appeal to a subset of OpenStack developers who’d like to test how their service runs on a public cloud.
More on the specifics of the deployment can be found on his blog.
Got an OpenStack homebrew story? Get in touch: editorATopenstack.org
All photos courtesy Nick Jones.
- OpenStack Homebrew Club: Meet the sausage cloud - July 31, 2019