Cloudera moves Hadoop beyond MapReduce

Cloudera CDH increases the ways in which data can be analyzed on Hadoop

By IDG News Service

With the latest update to its Apache Hadoop distribution, Cloudera has opened the platform to data processing frameworks beyond the customary MapReduce, the company announced Tuesday.

Version 4 of Cloudera's Distribution including Apache Hadoop (CDH) also comes with a number of resiliency improvements that should allow organizations to "run more critical workloads on the system," said Charles Zedlewski, Cloudera vice president of products.

CDH4 expands the range of computational processes that can be executed on Hadoop, Zedlewski explained. Typically, Hadoop uses MapReduce, which splits a data analysis task across multiple nodes and then collects the results as each node completes its portion of the job.
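To make that split-and-collect pattern concrete, here is the canonical word-count job, written as a minimal sketch against Hadoop's standard Java MapReduce API; the class names are illustrative only. The mapper runs on each node against its slice of the input, and the reducer gathers the per-node results.

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    // Map phase: runs on each node against its local slice of the input,
    // emitting a (word, 1) pair for every token it sees.
    public class WordCountMapper
            extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer tokens = new StringTokenizer(value.toString());
            while (tokens.hasMoreTokens()) {
                word.set(tokens.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reduce phase: collects the results from all nodes for each word
    // and sums them into a final count.
    class WordCountReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values,
                              Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }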

CDH4 introduces a new feature called coprocessors, which allows software programs to be embedded with the data itself. The programs are executed when certain conditions are met, such as when the average of a set of numbers hits a predefined threshold. The idea is similar to database triggers and stored procedures. The programs reside with the data, which is spread across multiple servers.
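As a rough illustration, the sketch below uses the RegionObserver coprocessor hook from the HBase 0.92-era API that CDH4 ships; the table column and threshold are hypothetical, and later HBase releases changed the postPut signature. The observer runs inside each region server, next to the data it watches, and fires after every write to the column.

    import java.io.IOException;
    import java.util.List;

    import org.apache.hadoop.hbase.KeyValue;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
    import org.apache.hadoop.hbase.coprocessor.ObserverContext;
    import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
    import org.apache.hadoop.hbase.regionserver.wal.WALEdit;
    import org.apache.hadoop.hbase.util.Bytes;

    // A trigger-like coprocessor: after each write to the (hypothetical)
    // "metrics:reading" column, check whether the value crossed a cutoff.
    public class ThresholdObserver extends BaseRegionObserver {
        private static final byte[] FAMILY = Bytes.toBytes("metrics");
        private static final byte[] QUALIFIER = Bytes.toBytes("reading");
        private static final double THRESHOLD = 100.0;  // illustrative cutoff

        @Override
        public void postPut(ObserverContext<RegionCoprocessorEnvironment> ctx,
                            Put put, WALEdit edit, boolean writeToWAL)
                throws IOException {
            List<KeyValue> kvs = put.get(FAMILY, QUALIFIER);
            for (KeyValue kv : kvs) {
                double reading = Bytes.toDouble(kv.getValue());
                if (reading > THRESHOLD) {
                    // React in place: this sketch just logs, but the hook
                    // could update an aggregate or send a notification.
                    System.out.println("Threshold exceeded for row "
                            + Bytes.toString(kv.getRow()) + ": " + reading);
                }
            }
        }
    }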

Coprocessors allow for more flexibility than a MapReduce operation. "We can now do more real-time or continuous operation on data in motion," Zedlewski said. "This allows you to push data-intensive operations into the data layer and parallelize the workload there."

CDH4 also allows users to implement their own data analysis frameworks apart from MapReduce. "You no longer have to shoehorn all your user workloads into one paradigm," Zedlewski said. "MapReduce is a very linear process, but sometimes things need to work on an iterative process."
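The plumbing behind this in Hadoop 2 is YARN, which separates cluster resource management from the MapReduce programming model so that other frameworks can request containers on the same cluster. The sketch below shows a framework submitting itself through the YARN client API as it stabilized in later Hadoop 2 releases (CDH4's early YARN packaging differed in places); the application name and launch command are placeholders.

    import java.util.Collections;

    import org.apache.hadoop.yarn.api.records.ApplicationId;
    import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
    import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
    import org.apache.hadoop.yarn.api.records.Resource;
    import org.apache.hadoop.yarn.client.api.YarnClient;
    import org.apache.hadoop.yarn.client.api.YarnClientApplication;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;
    import org.apache.hadoop.yarn.util.Records;

    public class SubmitFramework {
        public static void main(String[] args) throws Exception {
            YarnConfiguration conf = new YarnConfiguration();
            YarnClient yarnClient = YarnClient.createYarnClient();
            yarnClient.init(conf);
            yarnClient.start();

            // Ask the ResourceManager for a new application slot.
            YarnClientApplication app = yarnClient.createApplication();
            ApplicationSubmissionContext context =
                    app.getApplicationSubmissionContext();
            context.setApplicationName("my-iterative-framework");  // placeholder

            // Describe the container that runs the framework's master process.
            ContainerLaunchContext amContainer =
                    Records.newRecord(ContainerLaunchContext.class);
            amContainer.setCommands(Collections.singletonList(
                    "java -jar my-framework-master.jar"));  // placeholder command
            context.setAMContainerSpec(amContainer);

            // Request the memory and CPU the master needs, drawn from the
            // same pool that MapReduce jobs use.
            Resource capability = Records.newRecord(Resource.class);
            capability.setMemory(512);
            capability.setVirtualCores(1);
            context.setResource(capability);

            ApplicationId appId = yarnClient.submitApplication(context);
            System.out.println("Submitted application " + appId);
        }
    }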

One example of a program that could run on CDH4 is Apache Hama, a bulk synchronous parallel computing framework that can be used for scientific calculations. Hama "can work on the same data as MapReduce. It can borrow the same CPU and memory that the MapReduce jobs use," Zedlewski said.
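To give a flavor of the bulk synchronous parallel model, here is a sketch modeled on Hama's own pi estimator, written against the Hama 0.x BSP API under the assumption that the API matches this shape; the sample count and output key are arbitrary. Each peer computes locally, exchanges messages, and waits at a barrier, which is the iterative pattern MapReduce handles awkwardly.

    import java.io.IOException;

    import org.apache.hadoop.io.DoubleWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hama.bsp.BSP;
    import org.apache.hama.bsp.BSPPeer;
    import org.apache.hama.bsp.sync.SyncException;

    // Each peer estimates pi by Monte Carlo sampling, sends its local
    // estimate to a designated master peer, and synchronizes; the master
    // then averages the estimates and writes the result.
    public class PiEstimator extends
            BSP<NullWritable, NullWritable, Text, DoubleWritable, DoubleWritable> {
        private static final int SAMPLES = 100000;

        @Override
        public void bsp(BSPPeer<NullWritable, NullWritable, Text,
                        DoubleWritable, DoubleWritable> peer)
                throws IOException, SyncException, InterruptedException {
            int inside = 0;
            for (int i = 0; i < SAMPLES; i++) {
                double x = 2.0 * Math.random() - 1.0;
                double y = 2.0 * Math.random() - 1.0;
                if (x * x + y * y <= 1.0) {
                    inside++;
                }
            }
            String master = peer.getPeerName(0);  // first peer acts as master
            peer.send(master, new DoubleWritable(4.0 * inside / SAMPLES));
            peer.sync();  // the BSP superstep barrier

            if (peer.getPeerName().equals(master)) {
                double sum = 0.0;
                int count = peer.getNumCurrentMessages();
                DoubleWritable msg;
                while ((msg = peer.getCurrentMessage()) != null) {
                    sum += msg.get();
                }
                peer.write(new Text("pi-estimate"),
                        new DoubleWritable(sum / count));
            }
        }
    }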

CDH4 comes with a number of other features as well, all of them adapted from the latest versions of the open-source components that make up the Hadoop platform, such as the HDFS file system and the HBase database system.

The new distribution also tackles one of Hadoop's fundamental weaknesses: the file system's reliance on a single namenode to direct all traffic. The namenode keeps track of where all the data in a Hadoop cluster resides, so if it stops working correctly, the entire system becomes unusable. CDH4 addresses the problem by letting administrators set up a standby namenode that automatically takes over should the primary fail.
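As a rough sketch of what that looks like in practice, HDFS high availability is driven by hdfs-site.xml properties along these lines, where the nameservice "mycluster" and the hosts nn1host/nn2host are hypothetical placeholders; automatic failover additionally requires a ZooKeeper quorum and the failover-controller daemon.

    <!-- hdfs-site.xml (sketch): one logical nameservice backed by an
         active/standby namenode pair. -->
    <property>
      <name>dfs.nameservices</name>
      <value>mycluster</value>
    </property>
    <property>
      <name>dfs.ha.namenodes.mycluster</name>
      <value>nn1,nn2</value>
    </property>
    <property>
      <name>dfs.namenode.rpc-address.mycluster.nn1</name>
      <value>nn1host:8020</value>
    </property>
    <property>
      <name>dfs.namenode.rpc-address.mycluster.nn2</name>
      <value>nn2host:8020</value>
    </property>
    <!-- Clients address "mycluster"; this proxy fails over between the
         two namenodes transparently. -->
    <property>
      <name>dfs.client.failover.proxy.provider.mycluster</name>
      <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <!-- Let a ZooKeeper-based failover controller promote the standby
         automatically when the active namenode dies. -->
    <property>
      <name>dfs.ha.automatic-failover.enabled</name>
      <value>true</value>
    </property>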
