April 17, 2013, 7:00 AM — A panel discussion at the OpenStack Summit about interoperability issues – scheduled in response to the articles I wrote last week on the subject – shows that OpenStack community members recognize that interoperability is an important issue but are still trying to figure out how to handle it.
The discussion highlights some natural growing pains that OpenStack faces and shows that despite the momentum behind it, the group must make some tough decisions or risk smothering some of that momentum.
For instance, vendors might have to give up some technologies that could give them an advantage for the sake of interoperability. “We have to decide that being interoperable is more important than what a customer at Rackspace is screaming about or some developer insists they have to have to make the coolest thing in the cloud,” said Troy Toman, vice president of engineering for Rackspace.
Developers of other technologies have faced similar moments, when they must decide to draw a line and define the technology rather than continue adding to it and risk fragmented implementations.
“If we don’t figure that out, we’ll lose this opportunity,” Toman said.
And yet, the decision isn’t always quite that easy. “What happens when a standard gets baked too early and it crushes innovation?” asked Josh McKenty, CTO of Piston. “Innovation moves somewhere else.”
However, Toman suggested that specifying an interoperable implementation might be easier than it seems. It’s fair to point out that HP’s and Rackspace’s OpenStack clouds aren’t interoperable, he acknowledged: “They aren’t.”
But there already are essentially de facto standards for how to implement OpenStack in a way that’s interoperable; the community just hasn’t been good at spelling them out to everyone.
“We’ve left it up to every implementer. I made a guess, the guys at HP made a guess. Some have guessed more alike than others,” said Toman. “We have to get better at articulating.”
McKenty and others are working on a solution, the so-called Refstack implementation, but it’s clear that there are still questions in the community about how to approach interoperability. There was debate over whether interoperability could be achieved solely based on APIs, without any regard to the underlying implementation.
But most panelists agreed that the implementation itself is important. “I think the bigger challenge is making sure people are grabbing the same sets of code,” said Bernard Golden, VP of enterprise solutions for Enstratius. Two service providers may implement entirely different sets of code yet both label it Grizzly, the name of the latest release. “From a user perspective, it’s a nightmare,” he said.
“One challenge with OpenStack is we have an enormous number of config options,” McKenty agreed.
McKenty’s proposal for Refstack is to issue an OpenStack implementation that defines interoperability and to let vendors test against it. Each vendor would get a scorecard showing which APIs it supports based on which version of OpenStack. “You can take that scorecard and present it to customers and at some point in the future the board could use that to regulate use of the trademark,” he said.
Despite their differences, there was one thing the panelists agreed on: that interoperability is crucial. “There’s a real business benefit to it,” said Monty Taylor, who works full time on OpenStack for HP. “The business benefit is that we’re able to grow the market and able to do things together that none of us are able to do individually.”
Read more of Nancy Gohring's "To the Cloud" blog and follow the latest IT news at ITworld. Follow Nancy on Twitter at @ngohring. For the latest IT news, analysis and how-tos, follow ITworld on Twitter and Facebook.