Superuser talked to participants at the Ops Mid-Cycle Meetup about what they learned during the two-day session and what they’d like to see in the future.
The two-day sessions gathered about 150 operators from companies including Rackspace, Time Warner, NTT, Bluebox, Red Hat and Go Daddy in Philadelphia. Local host and sponsor Comcast kept everyone fed and caffeinated.
What brought you to the Ops meetup and what did you find most useful?
Carol Barrett, director, enterprise platform marketing, Intel Corporation and organizer of OpenStack’s Win the Enterprise Team.
“I came to get feedback and get information on requirements for monitoring tools. The monitoring session was really rich with information. These are folks who have been working with OpenStack for a while, and they have a really strong sense of the challenges and how to bring the tools to create solutions that they can actually operate and manage in a reasonable way. So it’s really encouraging for me to hear the amount of experience people have and the tips and tricks they are willing to share.”
Barrett says one of her big takeaways – and a goal for future Mid-Cycles – is to find a way to harness all that fast-flowing information on the etherpads in a way that’s easier to use.
“We need to find a way to capture all of it, so it can be consumed by other people, discovered and scaled up. That way, the next time we have an Ops meetup, and we don’t have 180 people — which is already an impressive number — but 360 people, and they are new people bringing in ideas, new tools and new approaches, we have a way to get it all down and share it.”
What do you get most out of these Ops meetups?
“The reason I love coming here is to hear the good and the bad about the various projects of OpenStack from the people who are responsible for it in production. Without that kind of feedback, you can’t make anything successful. It’s an opportunity for people to give a lot of feedback and most people here aren’t shy at all. And the conversation isn’t dominated by the traditional loud voices in OpenStack, and by that I mean a developer perspective or the business side.
Here you get people who actually have to deal with whether the code you wrote solved problems or created them. Once we know that, we can hopefully solve more problems and cause less pain. That’s why I love these things and why every PTL should be here.”
What would you like to see more of?
“I want to see less of the format where the moderator asks who is running what and gets a show of hands. I’d like more ‘show me your actual deployment,’ ‘show me your hardware specs,’ ‘show me your network diagram,’ etc.
Not so much war stories but [people sharing] the tools that they had to build around this whole environment so they can keep it running. I’d also like to know the kinds of things they’re looking for and the gaps they have to fill. That way the operators can share knowledge and learn from one another — and hopefully the projects can integrate where appropriate and make sure everything works.”
What needs to happen moving forward?
Matt Joyce, Big Switch Networks
“The obvious two areas that seem to be still problematic for operators are RabbitMQ and Neutron. The approaches are vastly different across Neutron, but with RabbitMQ we really just need to figure out how to run the diagnostics and how to analyze a problem in RabbitMQ. If we get those documented, then people should be able to figure out how to fix it – fingers crossed.”
“Neutron is a bigger problem. From my perspective, and this is coming from my job, the problem is that the switches are black-boxed too much. Everyone does stuff a little differently across them and it’s kind of impossible to build an orchestration layer on top of all the things you need, which is why you end up with this massively complex mess of code called Neutron… There are a whole bunch of people trying to solve the same problem in different ways, and I’m not sure where that will lead us.”
What did you hear from the operators?
Mike Perez, in a vintage OpenStack tee.
“The times that Cinder did pop up [in conversations] were about when people end up with a volume that’s in a bad state. As I understand it right now, operators go and update the state of a volume, and if it’s in a bad state they’ll set it to "active" when they think it’s in a good state.
We’ve already talked about this issue at our Mid-Cycle meetup, and we do have a patch, but it’s not going into Kilo. Ideally, if a volume is stuck in an attaching state (so it’s trying to attach to an instance) and an operator wants to bring it back to available, they could tell Cinder "I want this in an available state" and Cinder would talk to the storage back end and try to resolve the state there.
It’s an error-prone thing to expect somebody to look at what the storage back end is saying and then go back to the database and trust themselves to set it back into an "active" state. Having the ability to make state changes in Cinder from the administrator’s point of view is a good thing.
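The manual workaround Perez describes maps onto the admin-only `reset-state` command in the Cinder CLI, which is a sketch of it under one important caveat: `reset-state` only rewrites the status in Cinder’s database and does not reconcile anything with the storage back end — which is exactly the gap the proposed patch is meant to close. The volume ID below is a placeholder, and the `--attach-status` flag depends on your client version.

```shell
# Hypothetical volume ID; substitute your own.
VOLUME_ID=9d7e2c4a-1111-2222-3333-444455556666

# Inspect the volume -- suppose its status is stuck at "attaching".
cinder show "$VOLUME_ID"

# Admin-only: force the status back to "available".
# This only updates Cinder's database record; it does NOT talk to the
# storage back end, so verifying the back end's actual state is still
# on the operator.
cinder reset-state --state available "$VOLUME_ID"

# If a stale attachment record lingers, reset the attach status as well
# (flag availability varies by python-cinderclient version).
cinder reset-state --attach-status detached "$VOLUME_ID"
```

This is the error-prone, trust-yourself step the interview describes: the operator, not Cinder, is responsible for checking that the database state matches reality on the back end.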
Overall, it didn’t seem like Cinder came up that much; I don’t know if that’s good or bad. Are we doing a good job, or are people not doing anything interesting enough to break it? (Laughs.) I don’t know.
A lot of the things people are asking for are things that I’m aware of — so for me it’s more about being present, letting people know I care. Listening to them.
For the future, Nova had a session that put Sean Dague in the hot seat — I wouldn’t mind doing that, hearing what people are trying to do with Cinder and the problems they’re having. A lot of the different issues I just see from the tests, and I don’t hear from actual users about what their problems are.”
Cover Photo: Thierry Carrez, director of OpenStack engineering, leads a discussion on tags at the Mid-Cycle meetup. All photos by Nicole Martinelli. // CC BY NC