Ildiko Vancsa talks to Mike Kelly of DataCentred about the challenges facing the “magicians” of high performance computing.


Many outsiders probably think that in the world of science and HPC (high performance computing) there’s only room for supercomputers and the magicians who operate them.

However, while that's partially true, this area is much closer to the technology we all know than one might expect. And nothing proves this better than the fact that research and science are among the earliest and most prevalent use cases for OpenStack. (To learn more about them, check out our OpenStack in Science web page.)

Bringing together a group of technical specialists and researchers to solve the conundrum of how to use cloud computing technology efficiently for research workloads, the RCUK Cloud Working Group recently held its second annual workshop in London. The event was an amazing opportunity to bridge the gap between the magicians of science and the magicians of technology, sharing expertise to produce better and more efficient solutions.

In contrast to other communities, this is a very open ecosystem where all aspects of the challenges of moving and running research workloads on public and private clouds are discussed, from the financial and legal through to the technical. When comparing the public cloud giants, OpenStack should be considered an integral part of that landscape rather than merely a private-cloud player.

It’s also very important to underline the importance of open design and development that helps OpenStack to provide a de facto standard set of APIs. This is crucial from several perspectives one of which is multi-cloud, where users are considering multiple providers to run workloads with the goal of interoperability.

To get some insights on current and future challenges, we talked to Dr. Mike Kelly, CEO of DataCentred, for his perspective as a public-cloud provider.

As the largest UK-owned and UK-operated public OpenStack cloud, can you share your views on the importance of interoperability and open standards?

Open standards to date have really been one of the key reasons to use OpenStack. Our public cloud users like what we’re doing, partly because it gives them the benefits of a very rich tool set, with stable and open APIs. However, the fact that OpenStack is an open system and is based on a series of open components is extremely important. Our users want a public cloud which supports, and to some extent mirrors, the open source-based applications that they are deploying.

In the research community, OpenStack is a widely used, existing standard, and in this community portability and interoperability are almost taken for granted. Non-research users like OpenStack public clouds for the same reasons, but they probably envisage a world where eventually there will be a good number of alternative OpenStack clouds. In such a multi-cloud OpenStack world, the interoperability of those clouds, collectively, would become an important factor: federation, resource sharing and guaranteed interoperability all matter.

The cornerstone of scientific research is data, including storing and analyzing that data in high volumes. But what about data sovereignty and borders? For instance, why did HMRC select OpenStack as the platform that serves all UK taxpayers?

Data sovereignty is terribly important. HMRC chose to use public cloud, but were looking to use a UK-based operator. They are open source application developers, succeeding in moving legacy systems into a more modern development and deployment framework. OpenStack fits well with this general approach, but many UK users want to keep sensitive UK personal data, even when it's transient, in known locations in the UK. There are many general computing situations where location matters. Clients clearly value the ability to place data and to manipulate it with adjacent computing power.

What do you see as the next challenge for public clouds?

Containers and community and hybrid clouds.

Containers make the nature of the underlying cloud less important to the application. The cloud tool set remains extremely important to the operator, but less so to the application user. Public cloud operators and cloud software projects need to make sure that containers are very well integrated into tool sets and systems as a whole, that they are easy to use, and give good performance.
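For a sense of what that integration can look like in practice, here is a hedged sketch using the openstacksdk Python library against a cloud that runs the Magnum container-infrastructure service; the cloud name "mycloud" and the template name "k8s-template" are assumptions for illustration, not anything DataCentred ships:

    # Sketch only: assumes an OpenStack cloud with Magnum deployed,
    # a clouds.yaml entry named "mycloud" and an existing Kubernetes
    # cluster template named "k8s-template" (both hypothetical).
    import openstack

    conn = openstack.connect(cloud="mycloud")

    # Find the pre-defined Kubernetes cluster template.
    template = conn.container_infrastructure_management.find_cluster_template(
        "k8s-template")

    # Ask the cloud to provision a small container cluster; once it is
    # up, applications talk to Kubernetes rather than to the cloud itself.
    cluster = conn.container_infrastructure_management.create_cluster(
        name="demo-cluster",
        cluster_template_id=template.id,
        node_count=2,
    )
    print(cluster.id)

The design point the sketch illustrates is the one Kelly makes: the cloud's tool set does the heavy lifting of provisioning, while the application only sees the container platform running on top.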

Public clouds need to be able to integrate readily with local and remote private and community clouds. Users tend to segment workloads into public-facing and definitely-not-public-facing, and want secure gateways to bridge between the public side and the private or community side.

Community clouds are shared between users with similar security and use models. Ease of interfacing between public clouds and local private or community clouds will be important.

Get involved!

If you plan to extend your OpenStack private cloud to OpenStack-based public cloud options or you’d just like to explore the available providers around the globe, visit the Public Cloud Marketplace for more information.

If you’re running a public cloud with OpenStack, you might be interested in joining our newly formed PublicCloud Working Group to find solutions to future challenges in this area together.