Project team lead Ales Komarek on what this project can do for you.


OpenStack-Salt became an official project under the Big Tent in May 2016.

The roots of this project date back three years, to when we were searching for a provisioning tool to automate the deployment of our OpenStack Folsom cloud. I had already worked with SaltStack for a year at the time, so configuring and deploying OpenStack was a welcome challenge that pushed our use of SaltStack to its very limits. Fortunately, SaltStack proved to be the right tool to fulfill all of our requirements and helped us automate not only OpenStack but eventually all of our software systems, monitoring, continuous integration and deployment (CI/CD) pipelines and multiple application stacks.

This article presents a basic introduction to OpenStack-Salt and outlines our major goals. It’s structured to answer some of the most frequent questions that come to mind when looking at a configuration management project like this.

Is it just another config management tool?

“Why do we need any more config management tools? Aren’t there enough already? We have Chef, Ansible, Puppet-based solutions…”
This is the most common reaction that we get from community members. The reality is that OpenStack-Salt is not really just another implementation of an OpenStack deployer based on SaltStack.

The main difference is the ability to set up and maintain operational workflows. As Lachlan Evenson and Jakub Pavlik [pointed out](http://www.tcpcloud.eu/en/blog/2016/06/27/making-openstack-production-ready–kubernetes-and-openstack-salt-part-1/), a deployment tool alone is not enough.

Well-defined topologies and related workflows are what we need. Our mission is not to create a tool suited only for deploying OpenStack on a laptop in 30 minutes; that is what DevStack is designed for. We have built a solution that can scale environments to hundreds of nodes and provide life-cycle management, monitoring, backups and documentation. That's what people really need.

The OpenStack-Salt project uses SaltStack as an essential piece of a larger puzzle, integrating various other technologies, each playing its role in a complete ecosystem. Combining all of these services gives us the ability to create operational-level workflows that can deliver complete service development and deployment pipelines from source to production.

What about serialized know-how?

The project does not aim to match the total configurability of DevStack, but focuses instead on implementing best practices. We started it on three production environments, and over time that changed my mindset from a developer's point of view to an operations-focused one. OpenStack-Salt is maintained by people who know how to run large environments and have solid operational knowledge. The goal is not to parametrize every possible option, but rather to present "best practice" configuration setups.

Let me give you an example. Recently, a request came from the community to support Qpid alongside the RabbitMQ message bus. We wondered why, because Qpid is not widely used in the community and has high-availability issues. Will this feature ever be used in a production environment, or is it just development for development's sake? The goal is not to provide every option, but to help people in operations with tuned parameters. Supporting every possible option would end up in overly complex tooling that no one could use.

What really makes me and everyone in the OpenStack-Salt community happy is the reaction from Thomas Hatch (founder of SaltStack) regarding the state of the project and its relation to the official Salt community:

“I REALLY like what you did with pillar, using pillar as a high level configuration is a great way to go when making reusable states, very nice!!

I am not a big fan of reclass, but you have abstracted it in the right way making it very clean, again, very nice.

… while we deploy OpenStack fairly often we end up deploying it in a more custom way per deployment, whereas your approach is a much better top down flexible design.”

This brings us to the next question concerning our metadata model.

What about Reclass as infrastructure-as-code?

Reclass usually gets the most controversial feedback from the community. The project is maintained by Martin F. Krafft, who drives most of its development. What Reclass really brings to the project is the ability to model whole infrastructures: not only OpenStack services, but all infrastructure and support services (monitoring, logging, backups, firewalls, documentation), along with routers, switches and bare-metal servers.

The metadata model relies on two simple principles: interpolation and merging. Interpolation gives us the ability to reference any other parameter in the model and thus reduce duplicate definitions. Deep parameter merging allows us to split systems and services into metadata fragments that are easy to manage. The final model is the result of many merged service metadata fragments with multiple interpolated parameters. There are several newer options for managing metadata for Salt, for example PillarStack, which also solves parameter interpolation and merging and may replace Reclass in the future. We plan to add the ability to use plain pillar trees in the near future.
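To make the two principles concrete, here is a minimal sketch of what interpolation and merging look like in Reclass-style YAML. The class and parameter names (`service.database`, `mysql_host` and so on) are hypothetical, not taken from the actual OpenStack-Salt metadata:

```yaml
# classes/service/database.yml -- one reusable metadata fragment
parameters:
  _param:
    mysql_host: 10.0.0.10
  database:
    server:
      host: ${_param:mysql_host}   # interpolation: reference another parameter
      port: 3306

# nodes/ctl01.yml -- a node composes many fragments via classes
classes:
  - service.database
  - service.messaging
parameters:
  _param:
    mysql_host: 10.0.0.20          # deep merge: overrides the fragment default
```

After merging and interpolation, the node's final pillar contains `database:server:host: 10.0.0.20`, because the node-level override wins and the reference is resolved last.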

What service-oriented or container services can OpenStack-Salt support?

The big question is whether to use virtual machines or containers to run the services. With the introduction of Docker and Dockerfiles, many teams abandoned configuration management tools in favor of a new standard for setting up container content. With all service configuration already covered by Salt configuration management, we felt the need to combine the two approaches. The Dockerfile is generated from a simple template that invokes a Salt configuration run to build the actual container image content. The images are stored in a local registry and used in Docker or Kubernetes clusters. We only had to add the container entry point and disable the init-managed service startup.
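A rough sketch of the idea follows. This is an illustrative generated Dockerfile under assumed names (the base image, formula name `keystone.server` and entry point are examples, not the project's actual templates):

```dockerfile
# Hypothetical generated Dockerfile: the image content is produced by a
# masterless salt-call run against the same formulas used on host systems.
FROM ubuntu:14.04

# Install Salt and copy in the formulas and pillar metadata
RUN apt-get update && apt-get install -y salt-minion
COPY salt/ /srv/salt/
COPY pillar/ /srv/pillar/

# Apply the service's Salt states locally to build the image content
RUN salt-call --local state.apply keystone.server

# Only the entry point is container-specific; the service runs in the
# foreground instead of being started by init inside the container
ENTRYPOINT ["keystone-all"]
```

The key design choice is that the same Salt formulas produce both host-based and containerized services, so the container image is just another rendering of the same configuration model.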

The transition to containers was a big test of the modularity and service decomposition of all our modules. Fortunately, we had chosen the right level of service composition granularity and were able to model every scenario so far without overhauling any of the Salt formulas. Now we can freely combine host-based services with containerized micro-services within one application stack.

Get involved

I am really happy to see the project growing under the OpenStack Big Tent. If you're interested in any of the topics mentioned above, come to one of our weekly IRC meetings on Tuesdays at 13:00 GMT in the #openstack-meeting-4 channel, or just write to our #openstack-salt channel anytime.
We are always looking for new challenges and opportunities.

_This article was contributed by Ales Komarek, PTL of OpenStack-Salt and a software architect at tcp cloud. Superuser is always interested in community contributions; get in touch: [email protected]_

Cover Photo // CC BY NC