
NVIDIA helps build world’s most powerful supercomputer

by Pete Mason on 28 October 2010, 10:32

Tags: NVIDIA (NASDAQ:NVDA)


NVIDIA wants us to believe that the future of high-performance computing lies in using GPUs, but now it actually has some proof to back up those claims. The company has announced that a new supercomputer built with its add-in boards has beaten the reigning champ to become the most powerful computer on the planet.

The Tianhe-1A is housed at the National Supercomputer Centre in Tianjin in North-eastern China. Built from the combination of 7,168 Tesla M2050 GPUs - the HPC version of the GTX 470 - and 14,336 unnamed multi-core CPUs, the system is able to reach 2.507 petaflops on the LINPACK benchmark. This beats the previous fastest supercomputer - the Cray XT5 'Jaguar' at Oak Ridge National Labs in the US - by around 40 per cent.

As well as providing a lot of raw compute power, the GPUs are significantly more efficient than an equivalent CPU-only system would be. NVIDIA estimates that more than 50,000 processors would be needed to build an equally powerful computer, which would require twice the floor space and three times as much power.

According to Guangming Liu, the man in charge of the facility housing the new supercomputer, "the performance and efficiency of Tianhe-1A was simply not possible without GPUs...the scientific research that is now possible with a system of this scale is almost without limits".

The announcement of this system - which is already fully operational - means that NVIDIA GPUs power two of the world's three fastest supercomputers. The other, Nebulae, is based in Shenzhen, China, and uses a collection of older-generation Tesla cards. The fastest AMD-powered system is housed at the same facility as the new Tianhe-1A, and makes use of several thousand dual-GPU Radeon 4870 X2s to place seventh on the TOP500 ranking.



HEXUS Forums :: 15 Comments

That's a lot of folding power…..

Wonder what size PSU they use? Chernobyl?
GPUs - average 128 processing cores

CPUs - average 4 cores

How long before we see GPUs taking over more traditional CPU processing in PCs?
(Yes, I know we have CUDA for some stuff - but it's very niche.)
If they use this to fold/crunch, they'd solve the problems in a matter of weeks…

That's just insane, in the membrane.
From the little I know about GPU architecture, the performance improvement of using stream processors vs. regular CPUs really depends on the data you are processing, how you are processing it and how the code to do this task is written. Some operations simply wouldn't work on GPUs, others would be incredibly inefficient, but then you have tasks that benefit from spectacular performance improvements by using GPUs.

Supercomputers are usually given tasks that require processing a mass of data in a particular way. Give it the task, walk away for X amount of time and return to find your results. Just like rendering a 3D movie really. This is why GPU-based systems are far more efficient, as that is what stream processors are designed to do.

Also, bear in mind that it may use 7,168 Tesla M2050 GPUs to do the heavy lifting, but the system still requires “14,336 unnamed multi-core CPUs” to manage that data, run the OS, manage comms between nodes, etc.

Personally I would be more impressed if they had 10,000 GPUs and 1,000 CPUs in something like this :)
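The point above about workload shape can be sketched in a few lines. This is a minimal illustration in plain Python (no GPU required, and the function names are made up for the example, not from any real API): an element-wise operation has no dependencies between iterations, so thousands of stream processors could each take one element at once, whereas a running-total loop needs the previous step's result before it can start the next one, which leaves most of a GPU idle.

```python
def data_parallel_scale(values, factor):
    """GPU-friendly shape: every element is independent, so the work
    could be split across thousands of stream processors at once."""
    return [v * factor for v in values]

def serial_chain(values):
    """GPU-unfriendly shape: each step depends on the previous result,
    so the work cannot be spread across many cores."""
    total = 0
    results = []
    for v in values:
        total = total + v  # step n needs the output of step n-1
        results.append(total)
    return results

print(data_parallel_scale([1, 2, 3, 4], 10))  # -> [10, 20, 30, 40]
print(serial_chain([1, 2, 3, 4]))             # -> [1, 3, 6, 10]
```

Real GPU code (e.g. CUDA kernels on those Tesla M2050s) adds memory-transfer and scheduling concerns on top of this, but the independence-between-elements property is the thing that decides whether a task maps well onto stream processors at all.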

Chalk this one up to massive marketing spin for nVidia
The lab also doubles up as a heat source for the city.