If it's June, it's time for the twice-yearly TOP500 list of supercomputers, compiled by researchers in the U.S. and Germany. The list itself is a bit controversial for a few reasons, which we will get into, and more often than not, it's simply for bragging rights.
And as usual, the top bragging rights go to Intel, with 393 of the 500 systems running some kind of Intel Xeon processor. AMD was second, with 49 systems, a good showing all things considered. IBM's Power architecture was right behind it with 48, giving at least one good showing to the RISC camp. Oracle/Sun showed at least flickers of life; it had four SPARC-based machines on the list.
Intel now powers nearly 80 percent of the systems on this list, including 98 percent of the new ones, something unthinkable 20 years ago. Back then it was all RISC systems. Obviously the advent of x86-64 made the difference; Intel would never have been a competitor with the 32-bit processor's 4GB memory limit.
Tianhe-2, the top supercomputer in the world, has a petabyte of memory. Think about that for a minute. That's 1,000 terabytes of memory, and the theoretical addressing limit of a 64-bit processor is 16 exabytes. Whoever the memory maker was on the Tianhe-2 contract, they made a mint.
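The address-space arithmetic behind the 32-bit vs. 64-bit point is easy to verify; a quick sketch:

```python
# Why x86-64 mattered: the addressable-memory ceilings of 32- and 64-bit pointers.
GB = 2 ** 30   # one (binary) gigabyte, in bytes
PB = 2 ** 50   # one (binary) petabyte, in bytes

limit_32bit = 2 ** 32   # every byte a 32-bit address can reach
limit_64bit = 2 ** 64   # every byte a 64-bit address can reach

print(limit_32bit // GB)   # 4     -- the old 4GB ceiling
print(limit_64bit // PB)   # 16384 -- petabytes, i.e. 16 exabytes
```

Tianhe-2's petabyte of RAM, enormous as it is, uses only a tiny fraction of what a 64-bit address can theoretically reach.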
The top 10 is quite a mix of systems: AMD Opteron, Fujitsu SPARC64 and quite a few IBM Power machines constitute the mix. IBM's Blue Gene/Q is the most common system in the top ten, with four entries for Big Blue. As you go down the list, it becomes rather consistent: Xeon E5-powered clusters running Linux.
Just to show how crazy this list has grown, the lowest-performing system on the list now delivers 96.6 Tflop/s, compared to 76.5 Tflop/s six months ago. Number 500 on the June 2013 list would have been number 322 on the list from six months ago. The total combined performance of all 500 systems is 223 Pflop/s, up from 162 Pflop/s six months ago and 123 Pflop/s one year ago.
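The growth rates implied by those figures are worth spelling out; a quick check using the numbers quoted above:

```python
# Figures from the June 2013 list, as quoted above.
entry_now, entry_prev = 96.6, 76.5            # Tflop/s, slowest machine on the list
total_now, total_prev, total_year = 223.0, 162.0, 123.0   # Pflop/s, whole list combined

print(f"Entry bar grew {100 * (entry_now / entry_prev - 1):.0f}% in six months")
print(f"Combined performance grew {100 * (total_now / total_prev - 1):.0f}% in six months")
print(f"...and {100 * (total_now / total_year - 1):.0f}% in one year")
```

That works out to roughly 26 percent growth at the bottom of the list and over 80 percent growth in total list performance year-over-year.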
The TOP500 list is compiled by supercomputing researchers at the University of Mannheim in Germany, the University of Tennessee and Lawrence Berkeley National Laboratory. They have acknowledged in the past that the list isn't a complete snapshot of what is out there, because some organizations might want to keep what they have private. You know the NSA doesn't want to brag about the systems behind PRISM.
Then there's the issue of how you define supercomputing. The massive data centers run by Microsoft and Facebook, for example, probably have more compute power under their roofs. But they are used differently: they are highly virtualized, and workloads move around the hardware all the time. In a supercomputing environment, you don't virtualize, because the overhead would slow the system down.
All of the systems on this list are ranked with the Linpack benchmark, a set of Fortran subroutines first developed in the 1970s to measure the performance of the supercomputers of that era. Mind you, a supercomputer of 1980 pales in comparison to your smartphone.
So there has been a revolt against the TOP500 list and Linpack. The National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign has stopped submitting scores to Top500.org because it doesn't believe in the test, even though the NCSA regularly had a machine on that list.
The administrators of the NCSA argue that Linpack doesn't adequately measure the performance of today's supercomputers. Linpack times the solution of a dense system of linear equations, an algorithm designed in the 1970s and 1980s. The NCSA favors benchmarks such as its Sustained Petascale Performance test, a collection of 12 benchmarks that are complete applications drawn from its own research teams.
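To make concrete what Linpack actually measures, here is a minimal pure-Python sketch of the same style of test: solve a dense system Ax = b by Gaussian elimination, then convert the conventional operation count (2/3·n³ + 2·n²) into a flop rate. This is an illustration of the idea, not the actual HPL code the list uses.

```python
import random
import time

def solve_dense(A, b):
    """Gaussian elimination with partial pivoting: the same kind of
    dense solve that the Linpack benchmark times."""
    n = len(A)
    for k in range(n):
        # Pivot on the largest entry in column k to keep the solve stable.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # Back substitution on the resulting upper-triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

n = 150
random.seed(1)
A = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n)]
b = [random.gauss(0, 1) for _ in range(n)]
A_orig = [row[:] for row in A]   # keep copies; solve_dense works in place
b_orig = b[:]

start = time.perf_counter()
x = solve_dense(A, b)
elapsed = time.perf_counter() - start

flops = (2 / 3) * n ** 3 + 2 * n ** 2   # conventional Linpack operation count
print(f"{flops / elapsed / 1e6:.1f} Mflop/s")
```

The NCSA's complaint, in these terms, is that a single dense solve exercises floating-point units and little else, while real scientific applications also stress memory bandwidth, interconnects and I/O.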
They have a point about the benchmark being outdated. Futuremark, the maker of PC benchmarking software, releases revisions every few years to keep up with new CPU and GPU architectures. You can't expect a math algorithm from the 1970s to still be representative today, even if it has been updated along the way.
The question then becomes how many follow NCSA's lead. Then again, one dissenter can have quite an impact, as we have learned over and over. Let's see what develops between now and November, when the list is updated again.