New OpenStack initiative could address interoperability questions

The dust-up over OpenStack interoperability may be partly a matter of semantics, but it highlights some growing pains for the open-source software.

Yesterday, I wrote about the OpenStack Foundation preparing to work harder on ensuring that clouds that call themselves OpenStack are truly interoperable. I’ve since talked to HP and had another conversation with Josh McKenty, Piston’s CTO and an OpenStack board member, and have some more details to share about the future of OpenStack interoperability.

Currently, to call a cloud service OpenStack, a provider has to implement Nova and Swift, OpenStack’s compute and storage components, McKenty said. In practice, however, implementing Nova and Swift may not be enough to ensure interoperability, since there are currently no directives requiring service providers to implement particular APIs.

McKenty is leading a new project at OpenStack aimed at improving the interoperability situation. The project has broad support from the OpenStack community but has not been voted on, so it isn’t official quite yet, he said.

The idea is to develop what the group is calling Refstack, a reference implementation of OpenStack against which providers can benchmark their services. The foundation also hopes to offer automated testing so that vendors can check their own services against the reference implementation and receive a scorecard on their compliance.
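The article doesn’t describe what such a scorecard would actually contain, but the general idea can be sketched as a capability check: compare the APIs a provider exposes against a required set and report the gaps. The list of required APIs and the sample provider data below are purely hypothetical illustrations, not real Refstack code.

```python
# Hypothetical sketch of a Refstack-style compliance scorecard.
# The required-API list and the provider data are illustrative only.

REQUIRED_APIS = {
    "compute:create_server",
    "compute:list_servers",
    "compute:get_server_ip",
    "storage:put_object",
    "storage:get_object",
}

def scorecard(provider_apis):
    """Return (percent compliant, sorted list of missing APIs)."""
    supported = REQUIRED_APIS & set(provider_apis)
    missing = sorted(REQUIRED_APIS - supported)
    percent = 100 * len(supported) // len(REQUIRED_APIS)
    return percent, missing

# Example: a provider that implemented everything except one storage API.
pct, gaps = scorecard([
    "compute:create_server",
    "compute:list_servers",
    "compute:get_server_ip",
    "storage:put_object",
])
print(pct, gaps)  # 80 ['storage:get_object']
```

A real test suite would exercise live endpoints rather than compare name lists, but the output, a percentage plus a list of gaps, is what a vendor-facing scorecard boils down to.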

With a more granular interoperability policy, providers would have clearer guidance on the specific features and APIs they must implement to ensure interoperability.

There were previous proposals to work on interoperability but they came about too early – “half the people involved didn’t even have products yet,” McKenty said.

In the meantime, many OpenStack cloud services may be technically following the current interoperability guidelines without being easily interoperable in practice.

“The question of interoperability is a tricky one to draw a line on,” McKenty said. You could set a very low bar and say that if a user can create a virtual machine and find out what its IP address is, “every public OpenStack cloud is interoperable in that sense,” he said. But raise the bar a bit and try to use certain third-party tools built for OpenStack, such as CloudEnvy for spinning up new instances, and they may not work without libraries developed to account for differences among the services.

The goal is to make OpenStack clouds interoperable without relying on libraries that smooth out their differences.

“The reason I’m so concerned with interoperability, and this may be more of my personal passion than a rational view of the ecosystem, is I don’t believe we’ll have just a handful of OpenStack clouds,” McKenty said. He thinks there will be thousands or tens of thousands of OpenStack clouds, making the need for interoperability between them, particularly in hybrid environments, critical.

Meanwhile, HP sounds a bit like a case in point. I've heard criticism from a number of people noting that HP's cloud offering strays from OpenStack. But the company, which has people working with McKenty on the new interoperability plans, said it is committed to interoperability. “We are firmly adhering to the interoperability guidelines provided by OpenStack,” said Roger Levy, vice president and general manager of HP Cloud Services. “We don’t add proprietary APIs, so if someone builds a workload adhering to the API specifications from OpenStack, it should work on our cloud and it should be portable,” he said.

Read more of Nancy Gohring's "To the Cloud" blog and follow the latest IT news at ITworld. Follow Nancy on Twitter at @ngohring. For the latest IT news, analysis and how-tos, follow ITworld on Twitter and Facebook.
