Cloud construction kits are the cornerstone of one company’s successful OpenStack strategy.
These easy-to-assemble public clouds first attracted small and medium businesses to Russian hosting provider Servionica’s MakeCloud. Four years after the launch, large retail and insurance companies are also constructing public clouds with it.
OpenStack functionality allowed Servionica to become not just another virtual machine (VM) hosting provider, but to offer its customers instruments for building a company’s complete IT infrastructure in minutes, in other words, IT “infrastructure construction sets.”
Viacheslav Samarin, director of cloud services and products at Servionica, shared the story behind building the OpenStack-based public cloud and the customers who are using it.
What was the main purpose of building the OpenStack-based cloud?
Servionica is a part of i-Teco, one of the largest IT groups in Russia. Over the years, we have been building private clouds for enterprise customers based on the world’s leading vendors’ solutions and technologies. In 2009, we also launched our own VMware-based cloud platform and started to provide managed clouds to our enterprise customers, but we saw increasing demand for full customer self-service.
More customers wanted to manage their cloud resources themselves in a fully automated way: to order resources online, reconfigure them, view usage statistics and pay for consumed resources. Medium and small businesses were looking for an economical solution for their needs. It was also important for many corporate customers to get more than just a virtual server; they wanted an instrument that would allow them to build a complete virtual IT infrastructure in minutes rather than days or even hours. So we decided we needed to build another cloud to satisfy all these needs. As a provider, we were looking for a platform that would ensure fast deployment of new functionality, allowing us to keep up with changing market demand.
Why did you choose OpenStack for your cloud?
Thanks to our experience in building private clouds, we knew a lot about vendors’ platforms. In 2012, none of them were ready to be used to build a true public cloud without a lot of additional work, effort and customization put into them. We also did not like the idea of our new business depending on a platform vendor’s road map and not being able to influence any new functionality rollout.
So we decided to look at open source solutions. After playing with several of them (installing each on a couple of old laptops), our engineers and developers decided on OpenStack. We wanted to have good reasons behind our choice, so we looked at the community and found it somewhat similar to the Linux community. New OpenStack releases were coming out every six months, which made us believe there was good, dynamic work being done by the community. Documentation for the Diablo release and guides for deployment, administration and development were far from complete in 2012, but improved quickly with every release. Although we spent a lot of time discussing our decision almost four years ago, today we think it was quite intuitive, but it turned out to be a smart guess.
What was it like launching the pilot OpenStack cloud, your first experience with open source?
It took us six months, and we started with Essex. We studied the documentation, but much of the functionality described either did not work or did not work as described. We worked on fixing bugs, as well as writing new code to add functions and features for the services we wanted to launch.
There were modules essential for launching a fully automated public cloud that were missing among the OpenStack projects. Therefore, we found a partner that provided billing, a customer account area and a front-end portal: we integrated with Velvica’s platform to get this functionality and still use it as part of our MakeCloud infrastructure. Instead of the OpenStack dashboard, we developed our own control panel to implement the user scenarios we designed.
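A custom control panel like this drives the same REST APIs that the OpenStack dashboard uses. As a rough illustration (all names and values below are hypothetical, and details vary by release), these are the kinds of request bodies such a panel would build, assuming Keystone v3 password authentication and the Nova compute API:

```python
def keystone_auth_payload(user, password, project):
    """Body for POST /v3/auth/tokens -- Keystone v3 password auth,
    scoped to a project. The response carries an X-Subject-Token
    header used for subsequent API calls."""
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": user,
                        "domain": {"name": "Default"},
                        "password": password,
                    }
                },
            },
            "scope": {
                "project": {"name": project, "domain": {"name": "Default"}}
            },
        }
    }


def server_create_payload(name, flavor_ref, image_ref, network_id):
    """Body for POST /v2.1/servers -- a Nova boot request attaching
    the new VM to one tenant network."""
    return {
        "server": {
            "name": name,
            "flavorRef": flavor_ref,
            "imageRef": image_ref,
            "networks": [{"uuid": network_id}],
        }
    }


# A panel button "create VM" would send something like:
auth = keystone_auth_payload("demo-user", "s3cret", "demo-project")
boot = server_create_payload("web-1", "flavor-id", "image-id", "net-id")
```

The point is that nothing in Horizon is privileged: any front end that can issue these HTTP calls can implement its own user scenarios on top of OpenStack.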
An important decision we had to make was which storage to use. We tried GlusterFS and Sheepdog, but both were quite immature and did not provide the performance we needed. We launched our service on IBM Storwize V3700, which worked but had performance issues, and its Cinder driver also raised security concerns. Today we use NetApp FAS8020 and are very happy with its performance and ONTAP technologies.
It is worth mentioning that after many years of experience with vendors’ software and hardware, learning to work with open source was quite special. The initial expectations of our management and business people about the amount of R&D and in-house software development were set based on a quick assessment of the documentation available for the code and the projects. Very soon we learned that much more work was required. Fortunately, we had a team of very bright DevOps engineers working on our project. Today, after four years of experience, we have learned to assess the scope of work and plan it much better.
In mid-December 2012, we went operational.
When going commercial, what problems did you face and how did you solve them?
It took us four months to go from pilot to commercial. Meanwhile, Grizzly was released, and in April 2013 we started to provide commercial services based on it.
With the growth of our customer base and of the load on our platform, the need for additional development and optimization was obvious. To decrease latency, we optimized how Neutron worked with our database. We fully automated additional network functions on our platform so that our customers could order them as services with just a click of a mouse: VPN (based on Cloudpipe), DNS (based on Designate) and load balancing as a service (based on Equilibrium). To make it easier for our customers to build IT infrastructure on our platform, we introduced additional functionality, such as choosing a default gateway when connecting a VM to several networks and floating IP auto allocation.
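The floating IP auto-allocation mentioned above boils down to handing a customer the first free address from a public pool. A minimal sketch of that logic (illustrative only; the real service goes through the Neutron API rather than a local pool, and the addresses below are documentation examples):

```python
import ipaddress


def auto_allocate_floating_ip(pool_cidr, allocated):
    """Return the first free host address in the pool and mark it
    as used. Raises if the pool is exhausted."""
    network = ipaddress.ip_network(pool_cidr)
    for ip in network.hosts():
        addr = str(ip)
        if addr not in allocated:
            allocated.add(addr)
            return addr
    raise RuntimeError("floating IP pool %s exhausted" % pool_cidr)


# Example: two VMs request floating IPs from a small public pool
# in which the first address is already taken by the gateway.
in_use = {"203.0.113.1"}
first = auto_allocate_floating_ip("203.0.113.0/29", in_use)   # 203.0.113.2
second = auto_allocate_floating_ip("203.0.113.0/29", in_use)  # 203.0.113.3
```

Wrapping such an allocation behind a single button is what turns a multi-step manual task into the one-click experience the platform aims for.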
As we were not satisfied with how the Storwize plugin worked, we tried several other storage systems and eventually replaced Storwize with NetApp. Based on our tests and experience, NetApp's performance and features, such as snapshots and WAFL, in conjunction with its OpenStack integration, make it the best choice for an OpenStack cloud provider.
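For readers wanting to reproduce a similar setup, a NetApp ONTAP array is wired into Cinder through its driver configuration. A minimal sketch for an NFS-backed ONTAP cluster backend (hostnames, credentials and section names here are placeholders, and available options vary by OpenStack release):

```ini
[DEFAULT]
enabled_backends = netapp-nfs

[netapp-nfs]
volume_backend_name = netapp-nfs
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = nfs
netapp_server_hostname = fas8020.example.com
netapp_login = openstack
netapp_password = CHANGE_ME
netapp_vserver = cloud-svm
nfs_shares_config = /etc/cinder/nfs_shares
```

With a backend like this in place, Cinder snapshots and clones can be offloaded to the array's own ONTAP mechanisms rather than handled host-side.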
What are your future plans for the public cloud?
Initially, we targeted small and medium-sized companies with our service. For example, we have been providing IT services to software development companies, IT infrastructure for CRM and commercial systems to wholesale companies, and mostly VMs to private users. Over time, some larger companies got interested. Our approach of providing a cloud “construction set” that allows a company to build complete IT infrastructure in minutes turned out to help not only SMBs but enterprise customers as well.
One of our large users is the widely known Russian retail company DA! («ДА!»), to which we provide a test zone. Another example is JSC «SOGAZ» (АО «СОГАЗ»), one of the largest Russian insurers at the federal level, which uses the OpenStack-based cloud to host its production information systems.
However, certain requirements of enterprise customers are definitely more advanced. For example, significant attention is paid to advanced monitoring and backup functionality. Our near-term plans and longer-term road map include satisfying customer requests for such advanced services. They will eventually be available on our public platform for all users.
We are also getting an increasing number of requests to build private on-premise clouds for our customers based on our platform. Over the last year, we have completed three pilots and three commercial projects for our customers.