November 08, 2012, 5:49 PM — Facebook has overcome some of the limitations of the Apache Hadoop data processing platform, its engineers assert.
Facebook has released the source code for Corona, a system for scheduling workloads on the Apache Hadoop data processing platform. Engineers at the social networking company claim Corona is superior to the scheduler built into Hadoop's MapReduce framework.
In tests, the Corona scheduler was able to put more than 95% of a cluster to work on jobs, whereas the MapReduce scheduler could utilize at most 70% of a cluster, Facebook said.
By using its clusters more efficiently, Facebook can analyze more information with existing hardware; moving from 70% to 95% utilization means the same machines can, in principle, take on roughly a third more work (0.95 / 0.70 ≈ 1.36). Corona offers a number of additional benefits as well, including faster scheduling of workloads and a more flexible way of upgrading the software.
Facebook announced the release of Corona in a blog post written by several of the engineers who contributed to the software, including Avery Ching, Ravi Murthy, Dmytro Molkov, Ramkumar Vadali and Paul Yang.
Facebook's operations and users generate more than half a petabyte of data each day, which more than 1,000 Facebook employees analyze, mostly through the Apache Hive query engine.
Typically, analysis jobs running on Hadoop are scheduled through the MapReduce framework, which breaks each job into many smaller tasks (parallel map tasks that process slices of the input, and reduce tasks that aggregate the results) so the work can be executed across many computers at once.
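Facebook has not published its own job code, but the general shape of a MapReduce job is well established. The sketch below is the classic word-count example built on the standard Hadoop API, included only to illustrate what a scheduler has to place on a cluster: many parallel map tasks followed by reduce tasks. The class names are generic Hadoop types, not anything from Corona.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Illustrative only: the canonical Hadoop word-count job, showing how
// one job fans out into many map tasks and a set of reduce tasks.
public class WordCount {

  // Each map task processes one split of the input in parallel,
  // emitting a (word, 1) pair for every token it sees.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Each reduce task receives all counts for a given word and sums them.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = new Job(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // local pre-aggregation
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

Every map and reduce task in a job like this is a unit of work the scheduler must assign to a free slot somewhere in the cluster, and it is that assignment step which, according to Facebook, became the bottleneck at peak load.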
Facebook ran into issues using MapReduce, however. The scheduler could not keep all the computers supplied with work. "At peak load, cluster utilization would drop precipitously due to scheduling overhead," the blog stated.
Another issue, the Facebook team said, was that MapReduce introduced delays before jobs began executing. In addition, the framework offered no easy way to schedule non-MapReduce jobs on the same cluster, and software upgrades required downtime, which meant stopping jobs that were running at the time.
Facebook's engineers designed the Corona scheduler to avoid these limitations: it scales more easily, makes better use of cluster resources, offers lower latency for small jobs and can be upgraded without taking the system down.