Folding@Home project to run on GPUs

On 2 October 2006, Stanford University's Folding@Home distributed computing project will release a beta client in which a large chunk of the GROMACS core run by the Folding application executes on the GPU. Until now, all of the computation performed by a participating machine has taken place on its main CPU core or cores.
The GPU client will initially be available only for ATI R580-based graphics boards (the Radeon X1900 and X1950 series), with R520-based products gaining support a little further down the line. NVIDIA GPUs aren't currently supported at all.
BrookGPU is the GPGPU compiler and runtime framework hosting the GROMACS core on the GPU, and this is the largest use of BrookGPU to date. The Pande Group -- headed by Stanford's Associate Professor of Chemistry and Structural Biology, Vijay Pande -- worked with Stanford's GPGPU researchers to bring up the new core on the GPU.
The Folding guys say that running on R580 offers 20-40x speedups versus running on the CPU (although which CPU they're measuring against isn't clear), with even further speedups possible through new software approaches and optimisations.
The client is beta in nature largely because the program hits the GPU so hard, and for so long, that the project can't yet take any official risks with the heat produced by the GPU in other people's PCs. They do take that risk with the CPU, but there the user has more control over how much of the CPU the client is allowed to use.
It seems ATI's threaded architecture -- where fine branching granularity and decoupled texturing afford the chip plenty of latency hiding -- along with its mighty fragment processing power, is the reason R580 is the GPU of choice, mapping nicely onto what BrookGPU asks of a GPU in the Folding@Home case.
More on the client when it's released, since we'll get it when you do!
The Folding@Home project

A little on the Folding@Home project itself before we sign off. It concerns the assembly of proteins, which take on their working shapes by folding their structures. Protein misfolding is responsible for a number of diseases, so analysing folding and misfolding is key to understanding the diseases it causes.
The thing is, the computing power needed to simulate the process is outrageous: even with exclusive access to every research processor in the world, the project wouldn't reach its goal of computationally simulating the folding of a given protein in real time. So to reach that goal, they set out to find as many processors as possible and to use software to exploit them as effectively as possible.
Distributed computing is the means to that goal: the public donate time on their computers to do the computation, and now the GPU is helping out as well. Advances in software, better use of both CPU and GPU, and the simple aim of getting more people to help out are all key to what Folding@Home hopes to achieve.
They've managed to make the Folding@Home computations almost embarrassingly parallel, which means they can exploit given resources very effectively, and the GPU is the next step towards more quickly analysing the folding and misfolding of proteins for the efforts that need the data the most.
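To give a flavour of what "embarrassingly parallel" means here, the sketch below shows the general shape of such a workload: each work unit is computed entirely independently, with no communication between workers, so units can be farmed out to any number of CPUs, machines, or GPUs. This is a minimal illustration only; the function names and the work-unit format are hypothetical, not Folding@Home's actual protocol.

```python
from concurrent.futures import ThreadPoolExecutor

def simulate_work_unit(unit_id: int) -> int:
    # Hypothetical stand-in for one folding work unit: each unit
    # depends only on its own input, never on another unit's result.
    return sum(i * i for i in range(unit_id * 1000))

def run_units(unit_ids):
    # Because the units are independent, they can be distributed to
    # any pool of workers with no synchronisation between them.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(simulate_work_unit, unit_ids))

if __name__ == "__main__":
    results = run_units(range(1, 5))
    print(f"completed {len(results)} work units")
```

The absence of inter-unit communication is exactly what lets a project like this scale to hundreds of thousands of donated machines: adding a worker never forces the others to wait.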