
Review: GPU-accelerated video encoding. Who is better, AMD or NVIDIA?

by James Smith on 28 October 2009, 12:09

Tags: AMD (NYSE:AMD), NVIDIA (NASDAQ:NVDA), HIS Graphics, BFG Technologies

Quick Link: HEXUS.net/qaukz


CPU encoding comparisons

CyberLink MediaShow Espresso - YouTube encode, CPU only (time in seconds; lower is better)
BFG GeForce GTX 295 1,792MB: 195
HIS Radeon HD 5870 1,024MB: 196.66


As expected, performance is almost identical between the two cards when using just the CPU. The small difference in performance is easily attributable to the typical variance that can occur between runs.
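For readers who want to gauge that run-to-run variance on their own systems, a minimal timing harness along the following lines does the job. The encoder command is a hypothetical stand-in - MediaShow Espresso itself is driven through its GUI - so substitute any encoder you can script.

```python
import statistics
import subprocess
import time

# Hypothetical command-line encoder; MediaShow Espresso is GUI-driven,
# so substitute whatever scriptable tool you're benchmarking.
ENCODE_CMD = ["my_encoder", "--profile", "youtube", "input.m2ts", "out.mp4"]
RUNS = 5

times = []
for _ in range(RUNS):
    start = time.perf_counter()
    subprocess.run(ENCODE_CMD, check=True)    # one full encode pass
    times.append(time.perf_counter() - start)

print(f"mean: {statistics.mean(times):.2f}s, "
      f"stdev: {statistics.stdev(times):.2f}s over {RUNS} runs")
```

A standard deviation of a second or two on an encode lasting over three minutes is exactly the sort of noise that explains the near-identical CPU-only scores above.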

CyberLink MediaShow Espresso - YouTube encode, GPU accelerated (time in seconds; lower is better)
BFG GeForce GTX 295 1,792MB: 146
HIS Radeon HD 5870 1,024MB: 114.66

In stark contrast to the gaming benchmarks conducted in our reviews, the HIS Radeon HD 5870 1,024MB outstrips the GTX 295 by approximately 21 per cent in this scenario. This isn't a complete surprise, as the theoretical peak compute performance of the Radeon HD 5870 is over 50 per cent higher than that of the GeForce GTX 295.
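As a quick sanity check, here's the arithmetic behind both percentages. The encode times come from the chart above; the TFLOPS values are the commonly quoted single-precision peak ratings for each card, not figures measured in this test.

```python
# Encode times from the GPU-accelerated chart (seconds; lower is better).
gtx295_time = 146.0
hd5870_time = 114.66

# The HD 5870 completes the encode in roughly 21% less time.
saving = (gtx295_time - hd5870_time) / gtx295_time
print(f"HD 5870 advantage: {saving:.1%}")       # -> 21.5%

# Commonly quoted single-precision peak ratings (TFLOPS).
hd5870_peak = 2.72    # 1,600 stream processors @ 850MHz x 2 ops/clock
gtx295_peak = 1.79    # 2 GPUs x 240 shaders @ 1,242MHz x 3 ops/clock

gap = hd5870_peak / gtx295_peak - 1
print(f"Theoretical compute gap: {gap:.0%}")    # -> 52%
```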

For further testing, we tried running the benchmark with two Radeon HD 4850 512MB cards installed and compared performance against a single card. There was no measurable difference, regardless of whether CrossFireX was enabled through Catalyst Control Centre or not. Given that the application doesn't appear to scale across multiple GPUs, NVIDIA, whose GTX 295 is itself a dual-GPU card, may well have to wait until the next-generation Fermi architecture arrives before performance increases in this particular application.

Image quality - CPU

In the first part of the test, we encode using the CPU alone. Let's take a look at the input file, which serves as a reference:


Input file

Image quality is good, as you would expect from HD footage. The application then converts this into a lower-quality version whose bit-rate is almost 1/20th of the original. The resolution, too, is lowered to 720p.
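To put that conversion in concrete terms, the snippet below sketches an equivalent downscale-and-re-encode using ffmpeg. It is purely illustrative of what a low-bit-rate 720p profile does - it is not the pipeline MediaShow Espresso uses, and the 2Mbit/s target is an assumed figure.

```python
import subprocess

# Illustrative downscale to 720p with a constrained H.264 bit-rate,
# roughly the kind of conversion a YouTube upload profile performs.
# The 2Mbit/s target is an assumption, not MediaShow Espresso's setting.
subprocess.run([
    "ffmpeg", "-i", "input.m2ts",
    "-vf", "scale=-2:720",   # scale height to 720 pixels, keep aspect ratio
    "-c:v", "libx264",       # encode video as H.264
    "-b:v", "2M",            # target bit-rate
    "output.mp4",
], check=True)
```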

CPU encoding with HIS Radeon HD 5870 GPU


CPU encoding with BFG GeForce GTX 295 GPU

Theoretically, the output should be identical between test systems. The eagle-eyed amongst you may have noticed slight differences in the areas with severe artefacts. The same slight differences occur across multiple runs on exactly the same hardware, so they are likely down to small variances in how the encoder compresses the footage on each pass.
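One way to put a number on such differences is to compare matching frames from the two outputs. The sketch below computes PSNR between two frame grabs saved as images; the filenames are placeholders, and it assumes numpy and Pillow are available.

```python
import math

import numpy as np
from PIL import Image

def psnr(path_a: str, path_b: str) -> float:
    """Peak signal-to-noise ratio between two same-sized images, in dB."""
    a = np.asarray(Image.open(path_a).convert("RGB"), dtype=np.float64)
    b = np.asarray(Image.open(path_b).convert("RGB"), dtype=np.float64)
    mse = np.mean((a - b) ** 2)
    if mse == 0:
        return math.inf  # bit-identical frames
    return 10 * math.log10(255.0 ** 2 / mse)

# Placeholder filenames: the same frame grabbed from each encode.
print(f"PSNR: {psnr('frame_run1.png', 'frame_run2.png'):.2f} dB")
```

High PSNR values (40dB and above) indicate the encodes are visually near-identical even when they aren't bit-identical.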

Due to the relatively low bit-rate the YouTube profile employs, a lot of the fine detail - such as the individual strands of hair, as well as much of the background detail - is lost. Even an advanced codec such as H.264 clearly can't work miracles when there's a lot of movement and a fairly low bit-rate.