Adrian Otto, a principal architect at Rackspace, is the project technical lead (PTL) for Magnum, an API service developed by the OpenStack containers team to make container management tools such as Docker and Kubernetes available as first-class resources in OpenStack. Magnum officially joined the OpenStack project list in March 2015, upon a unanimous approval vote by the Technical Committee.
Tech publications have exploded with container news, some even calling it “container mania.” Is this a real thing, and what led to the eruption of coverage?
Containers are certainly popular. I’m not sure about the term “mania,” as that suggests that the excitement level is excessive. The excitement about containers is happening because we are collectively realizing how useful they could be to address some of our very real and painful problems.
That excitement is justified. Containers significantly simplify the problem set that we currently address with configuration management tools such as Chef or Puppet. They allow us to have much more dynamic systems than virtual machines allow, because the chunks of data you need to move around to use containers are so much smaller than virtual machine images. Once you fathom what possibilities that opens up, it is truly exciting.
The turning point in the popularity of containers really came from the efforts that Docker made toward making container technology accessible and simplifying it to the extent that any system administrator and any developer can use it without a custom kernel, and without intimate knowledge of exactly how containers work internally. We’ve had most of the technology in the Linux kernel for at least six or seven years, but the innovation that Docker brought was the concept of the container image, and the layering features associated with that concept. That was really the missing piece that needed to be added in order to bring containers to the mainstream. They did a remarkable job with it.
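The layering idea Otto describes is visible in any Dockerfile: each instruction produces its own cached, reusable layer, so a rebuild or a push only has to move the layers that changed. A minimal sketch for illustration (the base image and package choices are assumptions, not from the interview):

```dockerfile
# Base layer: shared by every image built from alpine:3.18.
FROM alpine:3.18

# Package layer: cached across rebuilds as long as this line is unchanged.
RUN apk add --no-cache python3

# Application layer: only this small layer changes when app.py changes.
COPY app.py /app/app.py

CMD ["python3", "/app/app.py"]
```

Because the base and package layers are cached and shared, shipping a new version of the application moves only the thin top layer, which is a large part of why container images are so much lighter to distribute than full virtual machine images.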
People expressed interest in containers in the recent OpenStack user survey. What advice would you give for evaluating the right solution?
I would advise those evaluating containers to recognize that most of the tools for working with containers are new. Docker is only two years old, and most of the tools in that ecosystem are even newer than that, so we should set our expectations accordingly. Complex, mission-critical software systems have about a five-year maturation cycle, regardless of whether the effort is open source. That’s about how long it takes to discover and work through the full range of edge cases in a complex software system. Container software is moving very quickly, but we should be realistic about the relative maturity level of these new tools versus the ones that have been around for much longer. If you step into a container evaluation expecting everything you currently get from your virtualization platform, you are setting yourself up for disappointment. You should plan to use virtual machines and containers in combination, not as substitutes.
What should everyone know about containers?
Containers are not a newer/smaller virtualization. They are different, and have a different set of benefits. If you use them for the right reasons (deployment simplicity, implementing immutable infrastructure, simplifying configuration management, app portability, etc.) then you will be delighted. If you expect them to replace your hypervisor and VMs, then you ought to think about it again after aligning the container benefits with your set of needs.
If what you want is security isolation between neighboring applications, then stick with virtual machines for that. If what you want is more dense packing of applications all belonging to the same user, group, or organization, where the security isolation is not a major concern, then containers are a better tool for that than virtual machines because they have lower overhead by sharing a single kernel. I would also point out to users that once they have a container image for their application, they can move it to any compute environment they want (bare metal servers, virtual machines, etc.). That level of application portability is better than almost any other approach we have today.
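The shared-kernel point is easy to see on any machine with Docker installed: a container reports the host's kernel version, because it has no kernel of its own. The commands below are an illustrative sketch (the `alpine` image is an assumption) and require a running Docker daemon:

```shell
# The container shares the host's kernel rather than booting its own,
# so both commands report the same kernel version.
uname -r
docker run --rm alpine uname -r

# Portability: the same image runs unchanged on bare metal, in a VM,
# or on any other host with a container runtime.
docker run --rm alpine echo "same image, any compute environment"
```

This absence of a per-instance guest kernel is exactly the lower overhead Otto mentions, and the image is the unit of portability he describes.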
If you missed Otto speaking at the OpenStack Summit Vancouver, catch this video of his 90-minute Docker workshop.