Do you have an OpenStack infrastructure at your fingertips? Curious about supporting Containers on OpenStack? We believe these are two very complementary technologies for operations teams in traditional data centers. With this in mind, enabling a simple and integrated approach to deploying and operating the container infrastructure will be key.

We believe that these technologies tend to be a good fit because:

  • Container schedulers and runtimes rely on Operating Systems, which in turn need virtual or physical infrastructure to be deployed and managed
  • OpenStack makes the consumption of infrastructure to support Operating Systems and Container Runtimes simple by providing an open API focused on compute, storage, networking, and other services
  • OpenStack provides an abstraction layer above storage platforms through Cinder
  • No surprise here, but we believe strongly that the benefits of embracing containers apply across all types of use cases, including persistent applications like databases. With OpenStack as our infrastructure provider and storage abstraction layer, running persistent applications becomes straightforward!

This post is going to focus on our orchestration package for delivering a working Docker cluster that is able to leverage external storage provided by OpenStack Cinder. Why is this important? The formal deployment of container clusters in OpenStack is handled by Magnum, which leverages Heat templates. This post moves us toward augmenting Magnum so that it deploys and configures container infrastructure inclusive of external storage support for containers.

The following video shows an OpenStack Heat template being used to deploy and configure our container cluster. It walks through the process and concludes by demonstrating containers using persistent storage provided by OpenStack Cinder.
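The persistent storage shown at the end of the video relies on a Docker volume driver (REX-Ray, in our case) that talks to Cinder on the cluster's behalf. A minimal sketch of that flow from one of the Docker hosts, assuming REX-Ray is already installed and configured against the OpenStack cloud (the volume name, size, and container image here are illustrative):

```shell
# Create a Docker volume backed by a new Cinder volume;
# REX-Ray translates this into a Cinder "create volume" call.
docker volume create --driver rexray --opt size=8 --name pgdata

# Run a database container with its data directory on that volume.
# The Cinder volume is attached to this host and mounted automatically.
docker run -d --name mydb -v pgdata:/var/lib/postgresql/data postgres

# List volumes; pgdata shows up with the rexray driver. If the container
# is later rescheduled to another host, the same named volume can be
# re-attached there and the data persists.
docker volume ls
```

Because the volume lives in Cinder rather than on the host's local disk, the data outlives both the container and the instance it happened to run on.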

Under the covers, here are some of the basics of what is occurring.

  • Heat is responsible for creating a number of cloud resources.
  • An autoscaling group of ‘m1.medium’ instances to act as Docker container servers is created.
  • Scaling policies are used to keep that group of servers active with at least three (but no more than five) instances.
  • A ‘swarm master’ instance is deployed for future experimentation with Docker Swarm.
  • A private subnet for the Docker servers to live in is configured and a router is deployed for communication with the outside world.
  • Necessary security groups and policies needed to keep the deployment accessible and secure are also configured.
Go ahead and check it out for yourself and download the template [here](https://github.com/drumulonimbus/rexray-openstack)!
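The resources described above can be sketched in a Heat Orchestration Template along these lines. This is a simplified illustration of the shape of the template, not the actual one from the repository; the image and network names are assumptions:

```yaml
heat_template_version: 2015-04-30

description: Illustrative sketch of the Docker cluster resources.

resources:
  # Autoscaling group of m1.medium instances acting as Docker hosts.
  docker_servers:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 3          # keep at least three servers active
      max_size: 5          # but no more than five
      resource:
        type: OS::Nova::Server
        properties:
          flavor: m1.medium
          image: docker-host-image   # assumed image name
          networks:
            - network: docker-private-net   # assumed private subnet

  # Policy that grows the group by one instance when triggered.
  scale_up_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: docker_servers }
      scaling_adjustment: 1

  # Security group keeping the deployment accessible but locked down,
  # e.g. allowing the Docker daemon's TLS port.
  docker_secgroup:
    type: OS::Neutron::SecurityGroup
    properties:
      rules:
        - protocol: tcp
          port_range_min: 2376
          port_range_max: 2376
```

The swarm master instance, router, and remaining security group rules follow the same pattern as additional `resources:` entries.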

Thank you for taking the time to read through this and we appreciate any feedback.

This post first appeared on EMC {code}'s blog.

Superuser is always interested in how-tos and other contributions, please get in touch: [email protected]

Cover Photo // CC BY NC