Containers are the price of admission to the modern platform, says Google’s Kelsey Hightower, but they’re just the start.

At the recent Container World conference, Hightower, formerly of CoreOS and now a developer advocate at the company that created Kubernetes, talked about the huge gap between deploying applications and managing them in production. He also discussed how to move beyond the single-machine programming model and adopt API-driven distributed systems to unlock the true value of containers.

For starters, Hightower doesn’t believe the hype about them. “Containers are the thing we talk about today, that we’re excited about. Five years from now we’ll hate them all. Guarantee it, we’ll hate them all.”

He prefers to talk about what can be done with this new set of ideas and concepts ("moving up the stack of thinking"), treating the container as just a box that allows users to embrace the concept of platform-as-a-system.

“Containers are not platforms. When people say ‘We’re going to adopt containers,’ to my mind it’s like saying, ‘We’re going to adopt .zip files.’” What they mean, he says, is that they are planning to adopt Mesos, Swarm, Kubernetes and the like; they are considering adopting a platform for the first time. “The container thing isn’t that exciting anymore,” he says, pausing to solicit applause from the audience for Docker “for making containers real.”

A lot of platforms look like Kubernetes now, he adds, and the interesting part about this platform is the API server.

Current platforms are like Linux distros, he says: each is a platform, but its APIs are scattered across bash shells, cron jobs, package managers and assorted shell scripts. Kubernetes, by contrast, is organized around an API server: give it a container, and all the components around it will do something fantastic with it, Hightower says.
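The API-server idea Hightower describes can be sketched as a toy declarative store: clients record the state they want, and independent components watch that store and act on it. This is an illustrative sketch only, not the real Kubernetes API; names like `ApiServer` and `Scheduler` here are invented for the example.

```python
# Toy sketch of a declarative, API-driven platform (hypothetical, not
# the real Kubernetes API): users declare *what* they want in one place,
# and independent components watch the store and converge on it.

class ApiServer:
    """Single source of truth: a store of desired-state records."""
    def __init__(self):
        self.desired = {}   # name -> spec submitted by users
        self.actual = {}    # name -> what the platform has realized

    def apply(self, name, spec):
        self.desired[name] = spec

class Scheduler:
    """One of many components watching the API server."""
    def reconcile(self, api):
        for name, spec in api.desired.items():
            if name not in api.actual:
                # "Do something fantastic with it": place the container.
                api.actual[name] = {"image": spec["image"], "node": "node-1"}

api = ApiServer()
api.apply("web", {"image": "nginx:1.25"})   # the user hands over a container
Scheduler().reconcile(api)                  # a component acts on it
print(api.actual["web"])
```

The point of the shape, as in the talk, is that the user interacts with one API, while any number of components (scheduler, rollout controller, and so on) can build on the same records.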

“First we’ll place it onto a machine — step one, that’s the obvious thing, rolling updates — then once the app is deployed, what happens next?” He then launched into a demo built around the iconic ’80s video game Tetris, where the goal is to place each falling block in the right spot based on what is on the screen at that moment.

“Users hit one button and blocks fall down, stacking up randomly: it’s fully automated, completely hands-off. But you can see to the left and to the right that CPU and memory are totally being lost. You’re just losing on this one. It’s totally automated, but you have no resource awareness. Your scripts aren’t designed to handle any real-time conditions.”

In the Kubernetes world, instead of being assigned statically to a machine or handled by scripts, each workload is evaluated and a placement decision is made. Bin packing is used: as the workload “falls,” the scheduler decides where it should land, moving it to the right place.
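The bin-packing decision can be sketched in a few lines: match each workload's CPU and memory request against the free capacity of each node, and pick the node that fits most tightly. This is a deliberately simplified illustration; the real Kubernetes scheduler filters and scores nodes on many more criteria.

```python
# Hedged sketch of bin-packing placement: choose the node with the
# least leftover capacity that can still hold the workload.
# Illustrative only -- not the actual Kubernetes scheduling algorithm.

def best_fit(workload, nodes):
    """Return the tightest-fitting node for a workload, or None."""
    candidates = [
        n for n in nodes
        if n["cpu_free"] >= workload["cpu"] and n["mem_free"] >= workload["mem"]
    ]
    if not candidates:
        return None  # nothing fits: the workload stays pending
    # Smallest combined leftover capacity = tightest fit.
    return min(
        candidates,
        key=lambda n: (n["cpu_free"] - workload["cpu"])
                    + (n["mem_free"] - workload["mem"]),
    )

nodes = [
    {"name": "node-a", "cpu_free": 4.0, "mem_free": 8.0},
    {"name": "node-b", "cpu_free": 1.0, "mem_free": 2.0},
]
chosen = best_fit({"cpu": 0.5, "mem": 1.0}, nodes)
print(chosen["name"])  # node-b: it fits more tightly than node-a
```

Packing workloads tightly like this is what "defrags" a cluster over time, in contrast to the resource-blind Tetris scripts in the demo.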

All of that looks great with new development, but there’s reason to curb container enthusiasm if you’re in a legacy environment. “[For the] enterprise, the scenario is not so pretty: you start out in a different world, it’s not greenfield, you already have some things,” Hightower says. You can still use resource managers and get some benefit from them, he says. “One of those benefits is to fill in the blanks — you install these things right underneath your set-up — whether you’re running OpenStack or VMware, you can carve out resources within that cluster. Over time, as you do more and more workloads, you start to defrag your cluster…You get benefits even in the brownfield world.”

But if you’re in the real enterprise and you work with databases, then what can you do? “Absolutely nothing. You’re screwed. [laughs] I’ll be honest: this stuff doesn’t work in all environments,” he says. “That’s the thing most people don’t talk about. You just can’t put every workload in every operation…There are things to think about when you adopt one of these platforms.”

Since the best things happen with modern architectures, Hightower then ran through a demo deploying an app and provisioning it with an SSL certificate at run time. In this microservices example, some details change (the certificate, IIRC) but the core APIs remain the same, ‘automagically.’ “If I scale to 1,000 instances they’d all get the certs, and the background would refresh the certificate without notifying the developers or changing any of the Kubernetes API. This is what makes these kinds of platforms really powerful. It’s not just about deploying applications, it’s about being able to build tools like this and use all of the abstractions just to make it work.”
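The pattern behind that demo can be sketched as follows: instances hold a reference to the certificate in a shared store rather than a copy baked in at deploy time, so a background refresher can rotate it without touching any instance or deployment spec. The class and secret names here are hypothetical stand-ins; the actual demo used Kubernetes APIs and secrets.

```python
# Sketch of certificate rotation through a shared abstraction
# (hypothetical names -- not the Kubernetes secrets API itself).

class SecretStore:
    """Shared store the platform manages; the app only holds a reference."""
    def __init__(self):
        self._secrets = {}

    def put(self, name, value):
        self._secrets[name] = value

    def get(self, name):
        return self._secrets[name]

class AppInstance:
    """Reads the cert by name on every use instead of caching a copy."""
    def __init__(self, store, secret_name):
        self.store = store
        self.secret_name = secret_name

    def current_cert(self):
        return self.store.get(self.secret_name)

store = SecretStore()
store.put("tls-cert", "cert-v1")
instances = [AppInstance(store, "tls-cert") for _ in range(1000)]

# A background refresher rotates the cert; no instance is redeployed
# and no deployment spec changes.
store.put("tls-cert", "cert-v2")
print(all(i.current_cert() == "cert-v2" for i in instances))
```

The design choice is indirection: because every instance resolves the secret by name, scaling to 1,000 instances and rotating the certificate are independent operations.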

This is where the road splits from the platforms that try to do everything for the user. To answer the question of why a user would take on what today is fairly tedious work — creating all these YAML files, working with the whole system from the outside, and learning how to deploy and configure every single app — Hightower says it’s important to look past containers.

The most important thing is the API server; without it, the rest of a Kubernetes deployment (the agents, the nodes, the scheduler, the proxy and so on) is just a collection of pieces running your containers.

“Most people stop talking about these — they’re getting to the point where they’re not excited anymore — and this is a good thing,” he says. “Because we don’t want to be talking about container packaging forever. But these run times do provide value in communicating to the kernel, so we don’t have to do this in every platform. Then they fade away.”

 

Cover Photo // CC BY NC