World's most powerful big data machines charted on Graph 500

IBM's BlueGene/Q dominates an emerging ranking of data processing supercomputers

By IDG News Service | Big Data, supercomputers

The Top500 is no longer the only ranking game in town: make way for the Graph 500, which tracks how well supercomputers handle big-data-styled workloads.

So while a new Cray supercomputer took first place on the Top500, it was another machine, Lawrence Livermore National Laboratory's Sequoia, that proved most adept at processing data-intensive workloads on the Graph 500.

Such differences in ranking between the two scales highlight the changing ways in which the world's most powerful supercomputers are being used. An increasing number of high performance computing (HPC) machines are being put to work on data analysis, rather than the traditional duties of modeling and simulation.

"I look around the exhibit floor [of the Supercomputing 2012 conference], and I'm hard-pressed to find a booth that is not doing big data or analytics. Everyone has recognized that data is a new workload for HPC," said David Bader, a computational science professor at the Georgia Institute of Technology who helps oversee the Graph 500.

The Graph 500 was created to chart how well the world's largest computers handle such data-intensive workloads. The latest edition of the list was released at the SC12 supercomputing conference, being held this week in Salt Lake City.

In a nutshell, the Graph 500 benchmark looks at "how fast [a system] can trace through random memory addresses," Bader said. With data-intensive workloads, "the bottleneck in the machine is often your memory bandwidth rather than your peak floating point processing rate," he added.
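The kind of pointer-chasing Bader describes can be illustrated with a breadth-first search, the style of graph traversal the Graph 500 is built around. The sketch below is a minimal, illustrative version (the toy graph and function names are ours, not part of the benchmark): each hop jumps to an effectively random vertex's adjacency list, so the dominant cost is scattered memory access, not arithmetic.

```python
from collections import deque

def bfs(adj, root):
    """Breadth-first search over an adjacency-list graph.

    Each step follows edges to effectively arbitrary vertex IDs, so
    performance is bound by memory latency and bandwidth rather than
    floating-point throughput -- the access pattern a Graph 500-style
    workload stresses.
    """
    parent = {root: root}
    frontier = deque([root])
    edges_traversed = 0
    while frontier:
        v = frontier.popleft()
        for w in adj.get(v, ()):   # scattered reads across memory
            edges_traversed += 1
            if w not in parent:
                parent[w] = v      # record BFS tree parent
                frontier.append(w)
    return parent, edges_traversed

# Toy 4-vertex graph; the real benchmark runs on enormous synthetic graphs
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
parent, edges = bfs(adj, 0)
```

Dividing the edge count by the traversal time gives a traversed-edges-per-second figure, which is the flavor of metric the Graph 500 ranks machines by, in place of Linpack's floating-point rate.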

The approach is markedly different from the Top500's. The well-known Top500 list relies on the Linpack benchmark, which was created in 1974. Linpack measures how effectively a supercomputer executes floating-point operations, which underpin mathematically intensive computations such as weather modeling and other three-dimensional simulations.
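For contrast with the traversal sketch above, a Linpack-style workload solves a dense system of linear equations, so nearly every inner-loop operation is a floating-point multiply-add. The toy solver below is our own illustrative sketch (Gaussian elimination without the pivoting a real Linpack implementation uses), meant only to show why such a kernel is limited by peak FLOP rate rather than memory bandwidth.

```python
def solve_dense(A, b):
    """Solve A x = b by Gaussian elimination on a small dense matrix.

    Illustrative only: no pivoting, so it assumes nonzero diagonals.
    The inner loops stream through contiguous rows doing multiply-adds,
    the regular, FLOP-dominated pattern Linpack measures.
    """
    n = len(A)
    A = [row[:] for row in A]   # work on copies
    b = b[:]
    for k in range(n):                      # forward elimination
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]      # floating-point multiply-add
            b[i] -= f * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):          # back substitution
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# 4x + y = 1, x + 3y = 2
x = solve_dense([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```

The contrast is the point of the two lists: this kernel's cost scales with arithmetic throughput, while the BFS kernel's cost scales with how fast memory can serve scattered reads.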

The Graph 500, in contrast, places greater emphasis on how well a computer can search through a large data set. "Big data has a lot of irregular and unstructured data sets, irregular accesses to memory, and much more reliance on memory bandwidth and memory transactions than on floating point performance," Bader said.
