OpenStack and Kubernetes are becoming a standard that allows users to benefit from both virtual machines and containers for their cloud-native applications, says Catalyst Cloud’s Feilong Wang.


Platform services have come a long way. Not only are they becoming more popular, but they’re also driving true multi-cloud interoperability, says Feilong Wang, head of R&D at Catalyst Cloud, a public cloud based in New Zealand built on OpenStack.

The combination of OpenStack and Kubernetes is becoming a standard option that allows users to benefit from both virtual machines and containers for their cloud-native applications, he adds. Wang and colleague Xingchao Yu talked about the company’s journey and shared a demo at the recent Linux Conf Australia.

Managed vs. metal

The first question is why you would use a managed Kubernetes service instead of building something on bare metal yourself. Wang cites two recent examples of the difficulties involved: Atlassian’s experience of running Kubernetes “the hard way” and a GitHub discussion about the production readiness of Microsoft AKS. “Don’t get me wrong, I’m just saying that building a production-ready Kubernetes service is not easy, that’s why we wanted to share our journey,” he clarifies.

What does production-ready mean, anyway?

There are as many opinions on this as there are engineers, but at Catalyst they define it using four factors: strong data security, high availability/resiliency, good performance/scalability and ease of use. Here’s a breakdown of each:

Strong data security
• RBAC backed by Keystone
• Network policies
• Rolling upgrades and patching
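To make the network-policy point concrete, a common starting posture is a default-deny policy that blocks all ingress to a namespace until explicit rules allow it. The sketch below is generic Kubernetes, not Catalyst-specific, and the namespace name is hypothetical:

```shell
# Apply a default-deny ingress policy to the (hypothetical) "production"
# namespace: pods there accept no inbound traffic until further
# NetworkPolicy objects explicitly allow it.
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}        # empty selector matches every pod in the namespace
  policyTypes:
    - Ingress            # Ingress listed with no rules means deny all ingress
EOF
```

Note that this only takes effect if the cluster’s network plugin enforces NetworkPolicy, which is part of why the choice of networking stack matters for a managed offering.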

Good performance/scalability
• Network performance
• Storage performance
• Time to deploy the cluster
• Horizontal scalability (auto-scaling)

High availability/resiliency
• Highly available master nodes
• Highly available worker nodes
• Auto-healing

And, finally, ease of use. Wang says most of Catalyst’s customers use Terraform or Ansible to talk to the OpenStack API; for container orchestration with Magnum, Catalyst also provides an API to manage the cluster. He also points out that they release all the work they’ve done so that the community can benefit from it. “We upstream everything – we don’t have any secret code,” he adds. As for what they’re working on upstream, the current list includes health checks and auto-healing, rolling upgrades, an Octavia ingress controller and an ingress controller for the DNS service Designate.
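For readers who haven’t used Magnum, cluster lifecycle operations are exposed through the standard OpenStack CLI; a typical flow looks roughly like this (template and cluster names here are illustrative, not Catalyst’s actual names, and the commands need valid OpenStack credentials):

```shell
# Create a cluster from an existing Magnum cluster template,
# with an HA control plane (3 masters) and 3 workers.
openstack coe cluster create my-k8s \
  --cluster-template k8s-template \
  --master-count 3 \
  --node-count 3

# Check status, then fetch kubeconfig credentials once it's ready.
openstack coe cluster show my-k8s
openstack coe cluster config my-k8s --dir ~/.kube
```

After the `config` step, `kubectl` works against the new cluster like any other, which is what lets Terraform or Ansible drive the whole workflow end to end.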

It hasn’t always been smooth sailing, however. Wang talks about a lag in creating a production cluster: around 15 minutes, or many cups of coffee in engineer time. Users get two load balancers in front of multiple masters, and under the hood OpenStack needs to create at least four virtual machines just for those load balancers. Add three master nodes and two or three worker nodes, and that’s what it takes to get into production. It’s a “pretty big stack,” but “we’re working on improving that,” he says. Another limitation, Wang notes, is pulling Kubernetes Docker images from Docker Hub. When a user creates a cluster, OpenStack Magnum needs to fetch those images from wherever they are stored — most likely outside New Zealand — which adds latency. Catalyst plans to run a local container registry to resolve the issue.

Xingchao Yu, cloud engineer at Catalyst Cloud, offered this demo to show how users can create Kubernetes clusters in just a few clicks using two templates.

Catch the entire 45-minute session here.

Superuser