The OpenStack Foundation announced that its Ironic software is powering millions of cores of compute all over the world, turning bare metal into automated infrastructure ready for today’s mix of virtualized and containerized workloads. Some 30 organizations joined for the initial launch of the OpenStack Ironic Bare Metal Program, and Superuser is running a series of case studies to explore how people are using it.
Platform9 customers have been using Ironic for more than two years; here, members of the team talk about why they adopted it and what benefits they’ve seen.
First, a little background on the San Francisco-based startup: Platform9 provides a software-as-a-service platform to deploy and operate OpenStack hybrid clouds across KVM, VMware and public cloud environments, helping customers stand up an OpenStack hybrid cloud in minutes. In addition, Platform9’s Managed Kubernetes service lets customers deploy and run Kubernetes on the infrastructure of their choice: OpenStack, bare metal, VMware, public clouds, or at the edge. Many of their customers’ workloads are suitable for running in a virtualized environment; however, some workloads and requirements demand running directly on bare metal, bypassing the hypervisor.
Why did you select OpenStack Ironic for your bare metal provisioning in your product?
Internally within Platform9, our test teams need to provision and deploy our software for validation on a variety of hardware targets: different hardware manufacturers (HPE, Dell, etc.), processor types (AMD, Intel), GPU resources (Nvidia), storage types (Optane), and special resources (SR-IOV, DPDK, etc.). This was a manual, time-consuming process that slowed our software release velocity and impacted release schedules.
We selected OpenStack Ironic to address the above customer needs as well as our internal need to speed up our hardware testing process and deliver our services faster to our customers.
What was your solution before implementing Ironic?
Without Ironic, the whole process from racking and stacking a server to the time users got access to it would take weeks. Because of this, the teams that acquired those resources would assume “ownership” of them. If another team needed the same resources with a different configuration, it would again take a lot of manual networking configuration, provisioning and deployment, so teams were often reluctant to share resources even when they were not fully utilized. This led to a lot of under-utilization and increased costs, as different teams would either procure their own resources or wait around until a resource was released by another team.
With Ironic, on the other hand, a variety of hardware resources can be pooled together, and when users need a specific type of bare metal resource, such as an HPE box with an AMD processor, they can simply request the flavor that Ironic has already discovered and provisioned and instantly deploy their workloads on that server in a self-service manner. Internally, Platform9 had tried to solve this problem with Cobbler and a lot of homegrown automation. That approach required considerable time, effort and maintenance of the automation code, which could have been better spent improving and adding value to our product. Using Ironic simplified and streamlined the bare metal provisioning and automation of our testing environment.
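As a rough sketch of that self-service flow (the flavor, image and network names below are illustrative, and this assumes an operator has already enrolled and inspected the nodes in Ironic):

```shell
# List the bare metal flavors the operator has published;
# each flavor maps to a class of enrolled hardware.
openstack flavor list

# Request an instance on, e.g., an HPE/AMD node by picking the
# matching flavor; the scheduler places it on a free Ironic node.
openstack server create \
    --flavor bm.hpe.amd \
    --image ubuntu-20.04 \
    --network provisioning-net \
    demo-baremetal-01
```

The user never touches the switch, BMC or PXE setup; they consume bare metal through the same API and CLI used for virtual machines.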
What benefits does Ironic provide your users?
The biggest benefit is time savings, without a doubt. With Ironic, provisioning bare metal servers is orders of magnitude faster, going from the weeks or months it used to take with manual methods to just under 20 minutes. The effect this has on our testing times, and consequently on our software release timelines, is revolutionary. In our customer environments, a whole manual ticketing process spanning racking and stacking, switch configuration, network configuration, server provisioning and operating system deployment is completely automated and replaced by a simple single-click self-service experience.
From an administration standpoint, Ironic also reduces management overhead significantly by providing a centralized operational console with complete visibility into all resources: their location (racks, etc.) and their specific configurations (CPU, RAM, storage and special hardware such as GPUs). As a result, maintenance and updates also become fast and easy. Another big benefit is repeatability: the ability to provision a bare metal server with exactly the same images and configuration makes it possible to treat servers as “cattle” and swap them out without impacting availability or reliability in production.
And finally, as with other OpenStack projects, Ironic supports the notion of plugins, so you can use switches from Juniper, Cisco or Arista, allowing the network automation (virtual LAN configuration, for example) to be agnostic to whichever switches are in use. This is extremely important from our product perspective, as it allows us to integrate easily into a variety of customer networking environments.
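One common way this switch-agnostic automation is wired up is through a Neutron ML2 mechanism driver such as networking-generic-switch; the fragment below is only an illustrative sketch, with placeholder hostnames and credentials, of how a top-of-rack switch might be registered so that VLAN changes are driven automatically during node provisioning:

```ini
# Illustrative ml2_conf.ini fragment (names and credentials are placeholders).
[ml2]
mechanism_drivers = genericswitch

# One section per managed switch; device_type selects the driver backend.
[genericswitch:arista-rack1]
device_type = netmiko_arista_eos
ip = 192.0.2.10
username = admin
password = secret
```

Swapping the switch vendor then becomes a matter of changing `device_type` and credentials, with no change to the provisioning workflow itself.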
What feedback do you have for the upstream OpenStack Ironic team?
More documentation! There are a lot of areas in the product that feel lightly documented, in particular which releases certain features landed in. The Ironic Inspector rules and modules were fun to reverse engineer, but shouldn’t have required it.
You’ll find an overview of Ironic on the project Wiki.
Discussion of the project takes place in #openstack-ironic on irc.freenode.net. This is a great place to jump in and start your Ironic adventure. The channel is very welcoming to new users – no question is a wrong question!
The team also holds one-hour weekly meetings at 1500 UTC on Mondays in the #openstack-ironic room on irc.freenode.net, chaired by Julia Kreger (TheJulia) or Dmitry Tantsur (dtantsur).
Stay tuned for more case studies from organizations participating in the initial launch of the program.