Facebook open sources hardware specs? Not really.

Open Compute Project is disruptive, cool, but not true open source (yet).


Yesterday I saw a cool bit of news: Facebook released the hardware specs and layout for its Prineville, OR data center in an effort to "open source" the specs and release a disruptive force in the hardware vendor/data center sector.

While I will definitely give them a solid 10.0 on the latter goal, I'm not so sure this is really an open source play.

The Open Compute Project is huge, don't get me wrong. By releasing the specifications and mechanical designs for the servers and data center in Prineville, Facebook has in one fell swoop set an incredibly high bar for those who would want to make their own datacenters.

Datacenter and hardware vendors are going to love hearing "hey, why can't we get our specs to be like Facebook's?" And, given the efficiency ratings and green aspects of the Open Compute specs, a lot of people are going to be clamoring for this design.

One big green aspect is the power usage effectiveness (PUE) figure, which is the ratio of total facility power to the power actually delivered to the IT equipment. Facebook is reporting an initial PUE of 1.07, which translates into roughly 93 percent of grid power reaching the motherboards. The industry average PUE, the Open Compute Project reports, is something like 1.5.
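To see where that 93 percent figure comes from: since PUE is total facility power divided by IT equipment power, the fraction of incoming power that reaches the IT gear is simply 1/PUE. A quick back-of-the-envelope sketch:

```python
def it_power_fraction(pue):
    """Fraction of facility power that reaches the IT equipment.

    PUE = total facility power / IT equipment power,
    so the share delivered to the IT gear is 1 / PUE.
    """
    return 1.0 / pue

print(f"Facebook's reported PUE of 1.07: {it_power_fraction(1.07):.0%}")   # ~93%
print(f"Industry-average PUE of 1.5:     {it_power_fraction(1.5):.0%}")    # ~67%
```

By that measure, the average datacenter loses a third of its grid power to cooling, power conversion, and other overhead before it ever does useful work.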

Yeah, that's nice.

Looking at the specs, it's very easy to see how disruptive this is going to be. Facebook opted for commodity parts, building everything from the ground up. In fact, Facebook made a point of mentioning that branding was specifically removed from the server chassis. They even eliminated screws, both to speed up servicing and to cut each server's weight.

Now, compare specs like that against the commercial hardware and datacenter packages that are out there, and you will see how those vendors are quickly going to have to adapt to this new level of efficiency, or die trying.

I think it's great. Commoditization of the datacenter and a push towards efficiency is much needed, and let's face it, Facebook is the 800-lb gorilla that can get such an initiative off the ground.

But open source? No, not really.

Yes, they have released the specs in all manner of formats, so it's open. But I have doubts about just how collaborative this will be. The initial language of the announcement says that if someone finds a better way to do something, they should let Facebook know. One would assume that Facebook and the contributor of the idea will benefit from the change, but how fast will those changes propagate to other participants in the Open Compute Project?

And how will such changes be governed? Is Facebook going to share that responsibility? Or will it just keep pointing to an open set of documents as its open policy? These aspects need to be established if Open Compute is to be a thriving open source community.

Again, I think that Open Compute is a great idea, and I applaud the move.

But for now I think it's more of a preliminary open standards move than an open source one.
