
NVIDIA: Intel's own research backs up GeForce GPU supremacy

by Tarinder Sandhu on 24 June 2010, 15:38

Tags: Intel (NASDAQ:INTC), NVIDIA (NASDAQ:NVDA)

Quick Link: HEXUS.net/qayu6


GPU is just plain better, says NVIDIA

In a strange twist of events, NVIDIA is recommending that the general public read some Intel-produced research. The research in question is a paper titled "Debunking the 100X GPU vs. CPU myth: an evaluation of throughput computing on CPU and GPU".

Intel's research identifies a number of computing kernels that are amenable to parallel processing, making them particularly well suited to today's multi-core CPUs and GPUs. Demonstrating that modern CPUs and GPUs aren't wholly different with respect to processing power, Intel provides numbers, pulled from a 14-kernel evaluation, that show NVIDIA's GeForce GTX 280 is, on average, 2.5x faster than a Core i7 960 CPU once optimisations have been applied.

NVIDIA's Andy Keane has a distinctly different take on the results, as you might expect. Delving deeper into the paper, the GJK throughput test runs 15.2x faster on the GeForce GTX 280 than on the Core i7 960, largely because the CPU implementation fails to use the chip's SIMD units efficiently, and Keane wastes little time in bringing this fact to our attention in his blog.

Keane writes: "It’s a rare day in the world of technology when a company you compete with stands up at an important conference and declares that your technology is *only* up to 14 times faster than theirs. In fact in all the 26 years I’ve been in this industry, I can’t recall another time I’ve seen a company promote competitive benchmarks that are an order of magnitude slower." He then goes on to provide numerous examples where GPU speed-up exceeds 100x, citing research conducted by leading universities.

What's more, unable to resist a dig, Keane adds: "Many of you will know, this [GeForce GTX 280] is our previous generation GPU, and we believe the codes that were run on the GTX 280 were run right out-of-the-box, without any optimization. In fact, it’s actually unclear from the technical paper what codes were run and how they were compared between the GPU and CPU. It wouldn’t be the first time the industry has seen Intel using these types of claims with benchmarks."

We reckon that NVIDIA is going to be increasing resources behind its Tesla GPGPU cards, as the high-margin line has an established foothold in the lucrative high-performance computing space. While commentators agree that modern GPUs are faster than their CPU counterparts in massively-parallel tasks, we see a CPU-and-GPU hybrid setup powering a range of supercomputers in the year ahead, because a great many tasks can only be computed in a serial manner.

Intel's paper clearly seeks to press the point that CPUs remain a good fit for customers who want huge computational power, yet Intel is more than willing to harness the virtues of parallel processing through its Many Integrated Core (MIC) architecture, which will debut with the Knights Corner product, due in 2011.



HEXUS Forums :: 5 Comments

As always with GPU tasks, it only applies to parallel applications. I can find thousands of serial applications which work orders of magnitude better on a CPU, and you can't simply run them in parallel on a GPU because of the GPU's architectural limitations.

A > B if C. A < B if not C.
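The commenter's point about serial applications is the familiar serial-fraction ceiling. Amdahl's law (not named in the thread, but the standard formalisation of the argument) can be sketched in a few lines of Python:

```python
def amdahl_speedup(parallel_fraction, n_workers):
    """Overall speed-up when only parallel_fraction of the work
    can be spread across n_workers (Amdahl's law)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_workers)

# Even with effectively unlimited GPU cores, a task that is 10% serial
# can never run more than 10x faster overall.
print(round(amdahl_speedup(0.90, 10_000), 2))  # -> 9.99
print(round(amdahl_speedup(0.99, 10_000), 1))  # -> 99.0
```

This is why a 100x speed-up claim implicitly assumes the workload is almost entirely parallelisable.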
A compute kernel isn't the same as an OS kernel. A compute kernel is a set of operations to perform on streaming data, whereas an OS kernel is the thing that controls all the inter-process messaging, memory protection, etc. GPUs wouldn't generally be able to run an OS kernel well as there's a lot of one-time branchy, unpredictable code. A compute kernel is very well suited to streaming data, SIMD, and parallelism.
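To make the distinction concrete: a compute kernel is essentially one small, pure function applied independently to every element of a data stream. A minimal sketch in Python (illustrative only; SAXPY is a textbook example chosen here, not code from the article, and a real GPU version would be written in CUDA C):

```python
def saxpy_kernel(a, x_i, y_i):
    """Single-element SAXPY: a * x_i + y_i.
    No branching on shared state, no dependency between elements --
    which is exactly why a GPU can run thousands of instances at once."""
    return a * x_i + y_i

def launch(kernel, a, x, y):
    # A GPU would execute every iteration of this loop concurrently;
    # the serial comprehension here just models the same
    # data-parallel semantics.
    return [kernel(a, xi, yi) for xi, yi in zip(x, y)]

result = launch(saxpy_kernel, 2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0])
print(result)  # -> [12.0, 24.0, 36.0]
```

An OS kernel, by contrast, is dominated by the branchy, stateful, run-once code this model deliberately excludes.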
I know - I work in CUDA. I'm merely pointing out that if you narrow down the parameters enough, then sure, a GPU will outperform a CPU for a specific task.
Oh, I wasn't criticising you at all - more the article itself. :)
I swear the article originally included something about kernels driving Operating Systems, which isn't the same sort of kernel at all. Has the article been edited?