January 04, 2012, 8:10 AM — After nearly seven years of development and fine-tuning, the Apache Hadoop data processing framework is finally ready for full production use, the developers of the software announced Wednesday.
The project team behind Apache Hadoop has released version 1.0 of its platform. "Users can be much more confident that this release will be supported by the open source community," said Apache Hadoop vice president Arun Murthy. "There is no more confusion over which version of Hadoop to use for which feature."
Three new additions in particular helped make this release worthy of the 1.0 designation, Murthy explained. End-to-end security is the chief feature: Hadoop can now be secured across an entire network using the Kerberos network authentication protocol. As a result, enterprises can now trust their Hadoop deployments with sensitive and personal data. The second feature, the webhdfs REST API (representational state transfer application programming interface), lets administrators and programmers interact with Hadoop through Web technologies they already understand, making Hadoop accessible to a wider range of organizations. Finally, this version is the first to fully support HBase, a non-relational database that gives administrators a familiar table-based structure for storing their data.
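Because webhdfs exposes HDFS operations as ordinary HTTP requests, any tool that speaks HTTP can talk to the filesystem. A rough sketch of the URL scheme (the hostname and file path below are illustrative; 50070 was the default NameNode web port in Hadoop 1.0):

```python
# Sketch of the webhdfs URL scheme. The host, port, and paths here are
# illustrative assumptions, not values from the article.

def webhdfs_url(host, port, path, op, **params):
    """Build a webhdfs request URL of the form
    http://<host>:<port>/webhdfs/v1/<path>?op=<OP>[&key=value...]"""
    query = "&".join([f"op={op}"] + [f"{k}={v}" for k, v in params.items()])
    return f"http://{host}:{port}/webhdfs/v1{path}?{query}"

# Reading a file becomes a plain HTTP GET -- curl, a browser, or any
# scripting language can issue it without Hadoop client libraries:
url = webhdfs_url("namenode.example.com", 50070, "/user/alice/data.csv", "OPEN")
```

That HTTP-native design is what makes the API approachable: no Java client or RPC stack is required on the caller's side.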
Lucene developer Doug Cutting, along with Mike Cafarella, created Hadoop in 2005 as an implementation of Google's MapReduce programming model, a technique for analyzing data spread across many servers. Cutting later went to work for Yahoo to help the portal company apply the technology to its search service, a deployment that eventually spanned more than 40,000 servers.
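The MapReduce model itself is simple to illustrate in miniature: a map step emits key-value pairs from each input record, the framework shuffles the pairs into groups by key, and a reduce step combines each group. The toy word count below mimics those three phases in plain Python on one machine; a real Hadoop job expresses the same logic through Hadoop's APIs and distributes each phase across servers:

```python
from collections import defaultdict

def map_phase(records):
    # map: each input line emits (word, 1) pairs
    for line in records:
        for word in line.split():
            yield (word, 1)

def shuffle(pairs):
    # shuffle: group values by key, as the framework does between phases
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # reduce: combine each key's values into a single result
    return {key: sum(values) for key, values in groups.items()}

counts = reduce_phase(shuffle(map_phase(["big data", "big servers"])))
# counts == {"big": 2, "data": 1, "servers": 1}
```

Because the map and reduce steps operate on independent records and independent keys, each can run in parallel on a different server, which is what lets the model scale to the server counts described above.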
Hadoop can be used to store and analyze large data sets, often called big data. Although originally designed to aid large search services, the technology is increasingly finding a home within enterprises as well, Murthy said. The project has at least 35 code committers and hundreds of other contributors.