Exascale unlikely before 2020 due to budget woes

Prototype systems still eyed for 2018, but only if Congress approves billions in funding, say U.S. DOE officials

Computerworld |  Hardware, Department of Energy, high performance computing

China, in particular, has been investing heavily in large HPC systems and in its own microprocessor and interconnect technologies.

The U.S. set up some strict criteria for its exascale effort.

The system needs to consume relatively little power while serving as a platform for a wide range of applications. The government also wants exascale research spending to lead to marketable technologies that can help the IT industry.

The U.S. plan, when delivered to Congress, will call for building two or three prototype systems by 2018. Once a technology approach is proven, the U.S. will order anywhere from one to three exascale systems, said Harrod.

Exascale system development poses a unique set of power, memory, concurrency and resiliency challenges.

Resiliency refers to the ability to keep a massive system, with millions of cores, continuously running despite component failures. "I think resiliency is going to be a great challenge and it really would be nice if the computer would stay up for more than a couple of hours," said Harrod.
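Harrod's "couple of hours" remark follows from simple failure arithmetic. The sketch below assumes independent components with a hypothetical five-year mean time between failures (MTBF); the figures are illustrative, not DOE numbers.

```python
# Back-of-envelope system reliability: with N independent components that
# each fail at a constant rate, the whole system's mean time between
# failures is roughly the component MTBF divided by N.
# The 5-year component MTBF below is an illustrative assumption.

def system_mtbf_hours(component_mtbf_hours, n_components):
    """System MTBF assuming independent, exponentially distributed failures."""
    return component_mtbf_hours / n_components

component_mtbf = 5 * 365 * 24  # ~43,800 hours between failures per component

for n in (10_000, 100_000, 1_000_000):
    mtbf = system_mtbf_hours(component_mtbf, n)
    print(f"{n:>9} components -> system MTBF ~ {mtbf:.2f} h")
```

At a million components the machine would, on this simple model, fail every few minutes unless the hardware and software are designed to ride through failures.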

The scale of the challenge is evident in the power goals.

The U.S. wants an exascale system that needs no more than 20 megawatts (MW) of power. In contrast, the leading petascale systems in operation today use as much as 8 MW or more.
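The gap is easier to see as energy efficiency. The sketch below assumes a roughly 10-petaflop machine drawing 8 MW as the petascale baseline; that pairing is illustrative, not a measurement of any specific system.

```python
# Energy-efficiency gap implied by the 20 MW exascale target.
# The ~10-petaflop / 8 MW petascale baseline is an assumed, illustrative pairing.

def gflops_per_watt(flops, watts):
    """Sustained billions of floating-point operations per second per watt."""
    return flops / watts / 1e9

exascale_target = gflops_per_watt(1e18, 20e6)  # 1 exaflop in 20 MW
petascale_today = gflops_per_watt(1e16, 8e6)   # ~10 petaflops in 8 MW

print(f"exascale target : {exascale_target:.1f} GFLOPS/W")
print(f"petascale today : {petascale_today:.2f} GFLOPS/W")
print(f"required gain   : {exascale_target / petascale_today:.0f}x")
```

Under these assumptions, hitting a thousandfold performance increase within a 20 MW envelope requires roughly a 40x jump in energy efficiency.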

Although processor capability remains important, it is no longer the center of attention in exascale system design.

Dave Turek, vice president of exascale systems at IBM, said the real change with exascale systems isn't around the microprocessor, especially in the era of big data. "It's really settled around the idea of data and minimizing data movement as the principal design philosophy behind what comes in the future," he said.

In today's systems, data has to travel a long way, which uses up power. Datasets "being generated are so large that it's basically impractical to write the data out to disk and bring it all back in to analyze it," said Harrod.
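Why dumping everything to disk stops scaling can be shown with a quick sketch. The dataset sizes and aggregate I/O bandwidths below are illustrative assumptions, not figures from the article.

```python
# Time to write a simulation dataset out to storage at a given aggregate
# I/O bandwidth. All sizes and bandwidths are illustrative assumptions.

def dump_time_hours(dataset_bytes, bandwidth_bytes_per_s):
    """Hours needed to stream a dataset to disk at the given bandwidth."""
    return dataset_bytes / bandwidth_bytes_per_s / 3600

PB = 1e15  # one petabyte in bytes

for size_pb, bw_gbs in ((1, 100), (10, 100), (100, 1000)):
    hours = dump_time_hours(size_pb * PB, bw_gbs * 1e9)
    print(f"{size_pb:>4} PB at {bw_gbs:>5} GB/s -> {hours:.1f} h to write out")
```

Even at an aggressive terabyte-per-second file system, a 100 PB dataset takes more than a day to write and read back, which is why analysis is moving toward the data rather than the data moving to disk.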

"We need systems that have large memory capacity," said Harrod. "If we limit the memory capacity, we limit the ability to execute the applications as they need to be run."

Exascale systems require a new programming model, and for now there isn't one.

High performance computing allows scientists to model, simulate and visualize processes. The systems can run endless scenarios to test hypotheses, such as discovering how a drug may interact with a cell or how a solar cell operates.

Larger systems allow scientists to expand resolution, or look at problems in finer detail, as well as increase the amount of physics included in any problem.

The U.S. research effort would aim to fully utilize the potential of exascale and achieve "one billion concurrency."
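The billion figure follows from dividing the machine's aggregate rate by what a single thread can do. The per-thread rate below (~1 GHz, one operation per cycle) is an illustrative assumption.

```python
# Where "one billion concurrency" comes from: an exaflop machine whose
# threads each retire on the order of 1e9 operations per second needs
# roughly a billion operations in flight at once.
# The per-thread rate is an illustrative assumption.

ops_per_second_total = 1e18       # 1 exaflop
ops_per_second_per_thread = 1e9   # ~1 GHz, one op per cycle (assumed)

concurrency = ops_per_second_total / ops_per_second_per_thread
print(f"required concurrency ~ {concurrency:.0e}")  # ~1e+09
```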
