Review: BFG (NVIDIA) GeForce GTX 280: does it rock our world?

by Tarinder Sandhu on 16 June 2008, 14:01

Tags: GeForce GTX 280, NVIDIA (NASDAQ:NVDA), BFG Technologies

Quick Link: HEXUS.net/qanni

Anything else to add?

GeForce GTX 280 is a monolithic GPU, meaning that it's based on a single die. Packing in 240 stream processors, up 88 per cent over the previous generation, and connecting them to 32 ROPs and those to a 512-bit memory interface, GTX 280 is, inevitably, a big, big (read expensive-to-make) GPU.
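
As a back-of-the-envelope illustration of what a 512-bit interface buys - taking the widely quoted 1,107MHz (2,214MHz effective) GDDR3 as a working assumption, with the board's exact speeds covered on the next page - peak memory bandwidth works out as:

    512 bits ÷ 8 × 2,214MHz ≈ 141.7GB/s

That's roughly double what the 256-bit GeForce 9800 GTX can manage.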

Making matters worse, it's still based on a 65nm manufacturing process - as per the 9-series GPUs - and carries some 1.4bn (yes, billion) transistors. That will make it hot, power-hungry, and expensive to the end consumer.

Architecturally, GTX 280 is an incremental upgrade on the 18-month-old G80. Admittedly, it will be the fastest NVIDIA single-GPU card to date - the vital stats tell us that much - but it's rather like a GeForce 9800 GTX that's been working out for a long time.

And a great deal of the performance gain derives simply from the GPU being larger - more threads, more processors, more shading, texturing, memory and bandwidth.

NVIDIA does also offer features such as double-precision capability, three-issue FLOPS, and Stream Out performance, but we've seen them from ATI already.
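
To put 'three-issue FLOPS' into numbers: each stream processor can, in theory, retire a multiply-add plus a multiply every clock - three floating-point operations. Assuming the quoted 1,296MHz shader clock, that gives a theoretical peak of:

    240 SPs × 3 FLOPS × 1,296MHz ≈ 933 GFLOPS (single-precision)

It's the sort of headline figure marketing loves and real games rarely see.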

What happened to using GDDR4 or GDDR5 memory for technology leadership?

And what happened to reducing power-draw by moving on to a 55nm process?

GeForce GTX 280 certainly isn't the architectural jump that, dare we say, the epoch-making G80 was.

Non-architecture stuff

Not strictly related to the architecture, GTX 280 now employs ATI-Avivo-like 10-bit colour processing through all stages of the pipeline, including output. Fine if you've got a 10-bit display panel handy, but everyone else - and that's practically everybody - will have the output dithered down to match the monitor's native depth, usually 8-bit.
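
To make that dithering step concrete, here's a minimal CUDA-style sketch - ours, not NVIDIA's display-pipeline code, with the kernel name and threshold matrix purely illustrative - of quantising a 10-bit channel down to 8 bits with simple ordered dithering:

    // Illustrative sketch only - not NVIDIA's display-pipeline code.
    // Quantise a 10-bit colour channel (0-1023) down to 8 bits (0-255)
    // using 2x2 ordered dithering, one thread per pixel.
    // Typical launch: dim3 block(16,16); grid sized to cover the frame.
    __global__ void dither10to8(const unsigned short *src, unsigned char *dst,
                                int width, int height)
    {
        // 2x2 ordered-dither thresholds, one step per code of the 2 bits dropped
        const int bayer2[2][2] = { {0, 2},
                                   {3, 1} };

        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= width || y >= height)
            return;

        int v10       = src[y * width + x];    // 10-bit value, 0..1023
        int threshold = bayer2[y & 1][x & 1];  // 0..3, matches the 2 bits lost
        int v8        = (v10 + threshold) >> 2; // dither, then drop 2 bits

        dst[y * width + x] = (unsigned char)min(v8, 255);
    }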

GTX 280 also carries the same VP2 video-processing engine and Hybrid Power support as the GeForce 9-series GPUs: on a compatible chipset with an IGP, the discrete, wattage-hungry card can be turned off completely when not running heavy 3D, with display output routed through the IGP instead, to save power.

Continuing on this theme, idle-power requirements have been dropped to just 25W through intelligent clock-gating, rising to only 35W when playing high-def content. Our testing has shown that the GTX 280's clocks drop to 300/100/200MHz (core/shader/memory) in pure 2D mode.

All good, sensible stuff, but it's dwarfed by the under-heavy-load figure of 236W.

The card interfaces with the system via PCIe 2.0, of course, and you'll need to use both eight-pin and six-pin power connectors to get it up and running.

SLI is present, naturally, and you can - funds permitting - tie three of these boards together for three-way SLI. Adding successive GPUs brings diminishing returns, though, and some games, such as Crysis, barely scale per GPU at all.
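
As an illustration of what diminishing returns mean in practice - and these are purely made-up scaling figures, not benchmark results - if a single card managed 30fps, a second card scaling at 80 per cent would lift that to around 54fps, and a third adding only 40 per cent of a card's worth would reach roughly 66fps: three times the hardware for little more than twice the frame rate.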

Coming back to CUDA and GPGPU

Remember we talked about graphics architectures that were designed for more than just gaming?

NVIDIA's marketing focus for 2008 appears to rest on 'educating' potential buyers about the multi-purpose nature of GeForce 8-series and 9-series cards through its Optimised PC campaign.

Now, it needs compelling arguments for using a GPU in a non-gaming environment if it is to steal some of the CPU's thunder.

Right now, a CUDA-written application from Elemental Technologies, dubbed BadaBOOM, is doing the rounds. Its raison d'être is to transcode media from one format to another - usually a standard-definition source (DVD) to an on-the-go, reduced-resolution file encoded in H.264, making it suitable for Sony's PSP and Apple's iPod/iPhone.

Leveraging all those teeny-weeny stream processors as a parallel-computing architecture, NVIDIA reckons that BadaBOOM, when run on a GeForce GTX 280, can transcode at around 10x the speed of a quad-core CPU.
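
For a flavour of why this kind of workload maps so well onto 240 stream processors - and this is our own, massively simplified sketch, emphatically not BadaBOOM's code - here's one embarrassingly parallel stage of a transcode, downscaling the frame, written as a CUDA kernel:

    // Illustrative only - not BadaBOOM's code. A 2x2 box downscale of an
    // 8-bit greyscale frame, one thread per output pixel, to show the
    // one-thread-per-pixel style of work that lets hundreds of stream
    // processors chew on a frame simultaneously.
    // Typical launch: dim3 block(16,16); grid sized to the output frame.
    __global__ void downscale2x(const unsigned char *src, unsigned char *dst,
                                int srcWidth, int srcHeight)
    {
        int dstWidth  = srcWidth  / 2;
        int dstHeight = srcHeight / 2;

        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= dstWidth || y >= dstHeight)
            return;

        // Average the corresponding 2x2 block of source pixels
        int sx  = x * 2, sy = y * 2;
        int sum = src[ sy      * srcWidth + sx    ]
                + src[ sy      * srcWidth + sx + 1]
                + src[(sy + 1) * srcWidth + sx    ]
                + src[(sy + 1) * srcWidth + sx + 1];

        dst[y * dstWidth + x] = (unsigned char)(sum / 4);
    }

Every output pixel is independent of its neighbours, so the hardware is free to work on as many of them at once as it has processors to spare.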

We note that BadaBOOM, once released, will run on all GeForce 8-/9-series cards, so it's not a GTX 280-only feature.

Further practical CUDA-based GPGPU plug-ins will be released later in the year, with Adobe's Photoshop and Premiere Pro getting the treatment.

All of this GPGPU speed-up has significant promise for the home user, but we'll only be able to see its true worth once a number of plug-ins have been released and benchmarked against CPU-only performance.

Remember, remember that CUDA isn't limited to GeForce GTX 280, no matter how the marketing spiel is slanted. Got a GeForce 8600 GT? The plug-ins will work, albeit not quite as fast as on higher-end models.

Cut to the chase, pretty please!

We've highlighted the salient architectural points in our technical evaluation of the GeForce GTX 280. None of it means much, though, without reference to speeds, and we'll talk about those on the next page...