Managing 2,000 nodes, with 10,000 servers to come, for the world’s largest machine.

The OpenStack Foundation announced that its Ironic software is powering millions of cores of compute all over the world, turning bare metal into automated infrastructure ready for today’s mix of virtualized and containerized workloads. Some 30 organizations joined for the initial launch of the OpenStack Ironic Bare Metal Program. Superuser is running a series of case studies to explore how people are using it.

The European Organization for Nuclear Research, known as CERN, provides facilities and resources, including compute and storage, to scientists all around the world for their fundamental research.

CERN has been using Ironic in production for about 18 months and it’s now the standard tool to provision all new hardware deliveries to their end users. Team members tell us that they currently have around 2,000 nodes managed by Ironic, but aim to enroll the majority of their remaining 10,000 servers over the course of the next 12 months. Work to integrate pre-production burn-in, up-front performance validation and retirement workflows is currently ongoing in collaboration with the upstream team.

Why did you select OpenStack Ironic for your bare metal provisioning?

For several years, the CERN IT department has been providing compute resources to the laboratory’s experiments and administrative services via an OpenStack-based private cloud. Ironic was the natural choice to complement the service’s offering of virtual machines and container clusters by physical machines: the users access physical resources via the same interfaces and workflows as they already do for virtual machines and containers.
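In practice, this means a physical machine is requested through the same Nova workflow used for a virtual machine, with only the flavor selecting bare metal. A minimal sketch, assuming illustrative flavor, image and network names (not CERN's actual ones):

```
# Requesting a bare metal server looks identical to requesting a VM:
# only the flavor differs, mapping to an Ironic resource class instead
# of a virtual flavor. All names below are hypothetical.
openstack server create \
    --flavor baremetal-general \
    --image CentOS-8-x86_64 \
    --key-name mykey \
    --network provisioning-net \
    physical-worker-01
```

Because the request goes through the standard Nova API, existing tooling, quotas and user workflows carry over unchanged.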

What was your solution before implementing Ironic?

Before Ironic, the provisioning workflows were entirely based on tools built in-house, with a correspondingly man-power-intensive maintenance workload. Parts of these workflows have now been moved to Ironic, in particular the more user-facing ones. We’re actively working on moving additional workflows, such as the initial burn-in or performance verification, and are doing so together with the upstream community.

What other technologies does your OpenStack Ironic deployment interact with?

One of the reasons we chose Ironic was that the APIs are identical to the ones used for virtual machines in Nova. As we have a substantial number of container clusters created via OpenStack Magnum, this interface similarity now allows for provisioning of (mostly Kubernetes) clusters on bare metal machines. Removing the additional virtualization tax is relevant for performance-sensitive applications, such as the experiments’ analysis code, which runs on our batch system.
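A bare metal Magnum cluster can be sketched as follows; because Magnum drives Nova, pointing a cluster template at a bare metal flavor is enough to land the cluster on physical machines. Template, image and flavor names here are hypothetical:

```
# A Kubernetes cluster template backed by a bare metal flavor;
# Magnum then drives Nova/Ironic exactly as it would for VMs.
# All names are illustrative, not CERN's actual configuration.
openstack coe cluster template create k8s-baremetal \
    --coe kubernetes \
    --image fedora-coreos-latest \
    --flavor baremetal-general \
    --external-network public

openstack coe cluster create analysis-cluster \
    --cluster-template k8s-baremetal \
    --node-count 4
```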

What benefits does Ironic provide?

The CERN IT department provides resources to all Large Hadron Collider (LHC) experiments and needs to make sure that the provisioned resources are correctly accounted for. The integration of physical resources into OpenStack via Ironic not only simplifies resource allocation, e.g. with respect to quotas, but also streamlines the accounting process (since all accounting information will eventually come from a single source).

What feedback do you have for the upstream OpenStack Ironic team?

The Ironic team has been helpful to work with, both for first-time deployers who may have encountered issues or questions during setup and for new contributors who have needed clarifications or want to suggest improvements to the code base. As an operator, we have particularly appreciated the balance between constructive feedback when proposing a new feature and the need to maintain a code base that must also consider other use cases and backwards compatibility.

Learn more

You’ll find an overview of Ironic on the project Wiki.

Discussion of the project takes place in #openstack-ironic on irc.freenode.net. This is a great place to jump in and start your Ironic adventure. The channel is very welcoming to new users – no question is a wrong question!

The team also holds one-hour weekly meetings at 1500 UTC on Mondays in the #openstack-ironic room on irc.freenode.net, chaired by Julia Kreger (TheJulia) or Dmitry Tantsur (dtantsur).

Stay tuned for more case studies from organizations participating in the initial launch of the program.

Cover image: © CERN