Supercomputers face growing resilience problems

As the number of components in large supercomputers grows, so does the possibility of component failure

IDG News Service | Hardware

Basically, the researchers' approach consists of running multiple copies, or "clones," of a program simultaneously and then comparing the answers. The software, called RedMPI, is run in conjunction with the Message Passing Interface (MPI), a library for splitting running applications across multiple servers so the different parts of the program can be executed in parallel.

RedMPI intercepts and copies every MPI message that an application sends, and sends copies of the message to the clone (or clones) of the program. If different clones calculate different answers, the numbers can be recalculated on the fly, which saves the time and resources it would take to rerun the entire program.
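RedMPI's interception happens inside the MPI layer itself, but the basic compare-and-correct idea can be sketched in a few lines. The Python below is a simplified stand-alone illustration, not RedMPI code; the injected corruption and the majority-vote policy are assumptions made for the example.

```python
# Simplified illustration of the clone-and-compare idea behind redundant
# execution. This is NOT RedMPI code; the functions and the majority-vote
# policy are hypothetical stand-ins for the real MPI-level interception.

from collections import Counter

def compute_partial_result(data, clone_id):
    """Stand-in for one clone's share of the computation.
    In a real run each clone would be a full MPI rank; here a small
    corruption is injected on one clone to mimic a silent error."""
    total = sum(data)
    if clone_id == 1:  # pretend clone 1 suffers silent data corruption
        total += 1
    return total

def vote(values):
    """Compare the 'messages' the clones would send; if they disagree,
    use the majority value instead of rerunning the whole job."""
    value, count = Counter(values).most_common(1)[0]
    all_agreed = count == len(values)
    return value, all_agreed

data = list(range(1000))
clone_results = [compute_partial_result(data, clone_id) for clone_id in range(3)]
value, all_agreed = vote(clone_results)
print("clone results:", clone_results)
print("all agreed:", all_agreed, "- value used:", value)
```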

"Implementing redundancy is not expensive. It may be high in the number of core counts that are needed, but it avoids the need for rewrites with checkpoint restarts," Fiala said. "The alternative is, of course, to simply rerun jobs until you think you have the right answer."

Fiala recommended running two backup copies of each program, for triple redundancy. Though running multiple copies of a program would initially consume more resources, over time it could prove more efficient because programs would not need to be rerun to verify their answers. Checkpointing might also become unnecessary when multiple copies are run, saving further system resources.
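The trade-off Fiala describes can be framed as a rough cost model. The parameters in the sketch below (job size, checkpoint overhead, expected number of reruns) are placeholders to plug your own numbers into, not figures from the presentation.

```python
# Rough cost model for the redundancy trade-off: triple redundancy multiplies
# the core-hours of a single run, while the non-redundant alternative pays for
# reruns and periodic checkpointing instead. All values are illustrative.

def redundant_cost(job_hours, copies=3):
    """Core-hours when every rank is cloned `copies` times and silent
    errors are corrected on the fly (no reruns, no checkpoints)."""
    return copies * job_hours

def rerun_cost(job_hours, expected_runs, checkpoint_overhead=0.10):
    """Core-hours when a single copy runs with checkpoint/restart and is
    rerun `expected_runs` times on average before the answer is trusted."""
    return expected_runs * job_hours * (1 + checkpoint_overhead)

if __name__ == "__main__":
    job = 100_000  # core-hours for one clean run (made-up example)
    print("triple redundancy:        ", redundant_cost(job))
    print("rerun with checkpointing: ", rerun_cost(job, expected_runs=2.5))
```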

"I think the idea of doing redundancy is actually a great idea. [For] very large computations, involving hundreds of thousands of nodes, there certainly is a chance that errors will creep in," said Ethan Miller, a computer science professor at the University of California Santa Cruz, who attended the presentation. But he said the approach may be not be suitable given the amount of network traffic that such redundancy might create. He suggested running all the applications on the same set of nodes, which could minimize internode traffic.

In another presentation, Ana Gainaru, a Ph.D. student at the University of Illinois at Urbana-Champaign, presented a technique for analyzing log files to predict when system failures will occur.

The work combines signal analysis with data mining. Signal analysis is used to characterize normal behavior, so that when a failure occurs it can be easily spotted. Data mining looks for correlations between separately reported failures. Other researchers have shown that multiple failures are sometimes correlated with each other, because a failure in one technology may affect performance in others, according to Gainaru. For instance, when a network card fails, it will soon hobble other system processes that rely on network communication.
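A toy version of the mining step might look like the following: scan a failure log and count which event types tend to occur within a short window of one another. The log format, event names, and window size here are invented for illustration; the actual work layers signal analysis of normal behavior on top of this kind of correlation search.

```python
# Toy correlation mining over a failure log: count pairs of event types
# that occur within a short time window of each other. Event names,
# timestamps, and the window size are made up for this example.

from collections import Counter
from itertools import combinations

# (timestamp_seconds, event_type) pairs, as might be extracted from system logs
events = [
    (100, "nic_failure"),
    (130, "mpi_timeout"),
    (135, "job_abort"),
    (900, "disk_error"),
    (2000, "nic_failure"),
    (2040, "mpi_timeout"),
]

WINDOW = 120  # seconds within which two events are treated as correlated

pair_counts = Counter()
for (t1, e1), (t2, e2) in combinations(sorted(events), 2):
    if e1 != e2 and t2 - t1 <= WINDOW:
        pair_counts[tuple(sorted((e1, e2)))] += 1

# Most frequently co-occurring failure types come out on top, e.g. a NIC
# failure followed shortly by MPI timeouts.
for pair, count in pair_counts.most_common():
    print(pair, count)
```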
