Review: NVIDIA Cg

by Ryszard Sommefeldt on 13 June 2002, 00:00

Tags: NVIDIA (NASDAQ:NVDA)




Introduction

If you've been following the graphics card industry over the past few years, you'll have seen a couple of big shifts, both in how graphics hardware works and in its performance. Graphics performance has scaled much faster than CPU performance, to the point where it's now easy to become CPU bound when running a graphics application: the GPU sits idle for part of its time while it waits for the CPU to feed it data.

The second shift is in the way the hardware works. We've progressed from the CPU doing all the work, with the graphics card acting as little more than a dumb framebuffer for the CPU to render into, through on-card triangle setup engines that relieved the CPU of some of the geometry work, right up to today. We're now at a stage where GPUs have fully programmable effects hardware, multiple geometry setup and transformation engines, full hardware lighting for multiple lights, hardware anti-aliasing and all the other technology a modern GPU implements.

It's a shift from the CPU doing all the work to the GPU doing almost everything needed to render a complex 3D scene at any kind of decent framerate.

With these kinds of shifts, especially now that the GPU has fully programmable effects units giving the developer control over the output of the graphics pipeline, comes complexity. Not just in terms of transistor count, but to the point where the hardware is overtaking the developer's ability to make use of it.

Thinking back to when x86 CPUs were just appearing, CPU programming was done at the instruction level, with programmers writing directly to the CPU in assembly language or machine code. You accessed the registers directly and wrote in instructions the CPU could understand without translation.

As x86 CPUs got more complex, writing to the hardware directly became harder and more time consuming. So a higher level of abstraction was developed in the form of programming languages, C for example, where you write a program in higher-level, more human-readable terms and let a compiler do the hard work of translating your C code down to the machine code or assembly language you used to write by hand.

That layer of abstraction lets the programmer do more work in the same amount of time, and the languages and compilers keep adding features for developers to take advantage of. Add to this the future proofing an abstraction layer brings: any target CPU with a C compiler and runtime can run your programs.

Fast forward to today and you can see that GPUs have evolved in much the same way. They are complex beasts, and developers interact with the computation and rendering pipeline by writing shader programs: essentially small programs written directly against the registers and instruction set the GPU exposes. Developers have had a little while to contend with this direct programmability of the pipeline and have already produced some stunning effects.

But GPU development moves at a rapid rate, much faster than the Moore's Law pace that still holds true for today's CPUs, and the complexity and capability of a GPU has outstripped the developer's ability to use it.

With a six-month product cycle at NVIDIA and game production taking two years or more (Duke Nukem Whenever, anyone?), you can see where the problem occurs: GPUs are introducing capabilities faster than developers can make use of them. So the same thing that happened in the CPU industry, the development of a level of abstraction above the raw machine code in the form of a programming language, has now happened in the GPU industry.

And that is what NVIDIA is announcing today: the release of a GPU programming language called Cg, short for 'C for Graphics'.
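
To give a flavour of what that abstraction looks like, here's a minimal Cg-style vertex program sketch (our own illustration rather than NVIDIA's sample code): it transforms a vertex into clip space with a modelview-projection matrix and passes the vertex colour straight through, the sort of job that previously meant hand-writing vertex shader code register by register.

    // Minimal Cg-style vertex program sketch (illustrative only).
    // The semantics after the colons bind each field to a hardware register.
    struct appdata
    {
        float4 position : POSITION;   // object-space vertex position
        float4 colour   : COLOR0;     // per-vertex colour
    };

    struct vfconn
    {
        float4 HPos : POSITION;       // clip-space position for the rasteriser
        float4 Col0 : COLOR0;         // colour handed on to the pixel stage
    };

    vfconn main(appdata IN, uniform float4x4 ModelViewProj)
    {
        vfconn OUT;
        OUT.HPos = mul(ModelViewProj, IN.position);  // transform to clip space
        OUT.Col0 = IN.colour;                        // pass the colour through
        return OUT;
    }

The compiler, rather than the developer, then worries about turning that into whatever register-level code the target GPU actually runs.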