How to use Hadoop to overcome storage limitations

By Frank J. Ohlhorst, CIO

What's more, Hadoop helps to remove much of the management overhead associated with large data sets. Operationally, as an organization's data is loaded into a Hadoop platform, the software breaks it down into manageable pieces, which are then automatically spread across different servers. Because the data is distributed, there is no single place to go to access it. Hadoop keeps track of where each piece resides, and further protects the information by storing multiple copies. Resiliency is enhanced, because if a server goes offline or fails, the data can be automatically replicated from a known good copy.
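
To make the idea concrete, here is a toy Python sketch (not Hadoop's actual code) of how a file might be carved into fixed-size blocks and each block assigned to several nodes. The block size and replication factor mirror common HDFS defaults, but the node names and the simple round-robin placement policy are invented purely for illustration.

    # Toy illustration of the concept behind HDFS storage: split data into
    # fixed-size blocks, then keep several copies of each block on
    # different nodes. This is NOT Hadoop's implementation.

    BLOCK_SIZE = 128 * 1024 * 1024   # 128 MB, a common HDFS default
    REPLICATION = 3                  # HDFS's default replication factor

    def split_into_blocks(data: bytes, block_size: int = BLOCK_SIZE):
        """Break a byte string into fixed-size blocks."""
        return [data[i:i + block_size] for i in range(0, len(data), block_size)]

    def place_replicas(blocks, nodes, replication: int = REPLICATION):
        """Assign each block to `replication` distinct nodes, round-robin style."""
        placement = {}
        for idx in range(len(blocks)):
            placement[idx] = [nodes[(idx + r) % len(nodes)] for r in range(replication)]
        return placement

    if __name__ == "__main__":
        nodes = ["node1", "node2", "node3", "node4"]
        # Tiny block size and dummy data, just to show the placement output.
        blocks = split_into_blocks(b"0123456789abcdef", block_size=4)
        for block_id, replicas in place_replicas(blocks, nodes).items():
            print(f"block {block_id} -> {replicas}")

If "node2" fails, every block it held still exists on two other nodes, which is the property the paragraph above describes.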

How Hadoop Goes Further

The Hadoop paradigm goes several steps further when it comes to working with data. Take, for example, the limitations associated with a traditional, centralized database system, which may consist of a large disk drive connected to a server-class system that features multiple processors. In that scenario, analytics is limited by the performance of the disk and, ultimately, the number of processors that can be brought to bear.

With a Hadoop deployment, every server in the cluster can participate in the processing of the data through Hadoop's capability to spread the work and the data across the cluster. In other words, an indexing job works by sending code to each of the servers in the cluster and each server then operates on its own little piece of the data. Results are then delivered back as a unified whole. With Hadoop, the process is referred to as MapReduce, where the code and processes are mapped to all the servers and the results are reduced into a single set.
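
As a concrete sketch of the MapReduce pattern, the two Python scripts below follow the Hadoop Streaming convention of reading from standard input and writing tab-separated key/value pairs. The classic word-count job is used because it shows the map and reduce roles clearly; in a real deployment the two parts would live in separate files (called mapper.py and reducer.py here) and be shipped to every node in the cluster.

    # mapper.py -- runs on each node against its own slice of the input;
    # emits one "word<TAB>1" pair per word.
    import sys

    for line in sys.stdin:
        for word in line.strip().split():
            print(f"{word}\t1")

    # reducer.py -- receives the mapper output sorted by key, so identical
    # words arrive on consecutive lines; sums the counts per word.
    import sys

    current_word, current_count = None, 0
    for line in sys.stdin:
        word, count = line.rstrip("\n").split("\t", 1)
        if word == current_word:
            current_count += int(count)
        else:
            if current_word is not None:
                print(f"{current_word}\t{current_count}")
            current_word, current_count = word, int(count)
    if current_word is not None:
        print(f"{current_word}\t{current_count}")

A job like this is typically submitted with the Hadoop Streaming jar, along the lines of hadoop jar hadoop-streaming-*.jar -files mapper.py,reducer.py -mapper mapper.py -reducer reducer.py -input <input path> -output <output path>, though the jar location and paths depend on your installation.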

That process is what makes Hadoop so good at dealing with large amounts of data. Hadoop spreads the data out and can handle complex computational questions by harnessing all of the available cluster processors to work in parallel.

Understanding Hadoop and Extract, Transform and Load

However, venturing into the world of Hadoop is not a plug-and-play experience. There are certain prerequisites, hardware requirements and configuration chores that must be addressed to ensure success. The first step consists of understanding and defining the analytics process. Luckily, most IT leaders are familiar with business analytics (BA) and BI processes and can relate to the most common process layer used -- the extract, transform and load (ETL) layer -- and the critical role it plays when building BA/BI solutions.
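
As a hypothetical example of where ETL logic can land in a Hadoop job, the sketch below is a streaming-style mapper that extracts records from raw CSV input, transforms them (dropping malformed rows, normalizing an amount and a region code), and emits clean, tab-separated records for the load stage. The field names and input layout are invented for illustration.

    # Hypothetical streaming mapper carrying the "transform" step of ETL.
    # Input: CSV lines of the form customer_id,amount,region (made up here).
    import sys
    import csv

    reader = csv.reader(sys.stdin)
    for row in reader:
        if len(row) != 3:
            continue                                 # extract: skip malformed records
        customer_id, raw_amount, region = row
        try:
            amount = round(float(raw_amount), 2)     # transform: normalize the amount
        except ValueError:
            continue                                 # transform: drop unparseable values
        region = region.strip().upper()              # transform: standardize region codes
        # load: emit a clean, tab-separated record for the next stage to consume
        print(f"{customer_id}\t{region}\t{amount}")

The point is not the specific fields but the placement: the same ETL thinking that drives a traditional BA/BI pipeline maps directly onto the code that Hadoop distributes across the cluster.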


Originally published on CIO.