How will it compete?

A kind of CPU?
On top of the presently supported C, via CUDA, there will also be support for the standards-based OpenCL, DirectCompute and DX11, as well as C++. The latter is made possible by a new, low-level instruction set, known as Parallel Thread eXecution 2.0 (PTX 2.0), that caters for a 40-bit, 1TB unified address space.
NVIDIA says that the GPU will also run the likes of Python and Java, although just how effective coding via a 'wrapper' will be is, perhaps, debatable.
Jen-Hsun Huang, NVIDIA's CEO, has spoken about Fermi in a parlance more familiar from discussions of CPU architecture. With constant references to cores, caches, and hierarchy, one could be forgiven for thinking he was describing an Intel Larrabee-esque design, and we'd be very interested to see how Intel's effort shapes up when formally announced.
Better than Radeon HD 5870?
From what we know thus far, it's difficult to determine whether Fermi will be a better 'card' than AMD's RV870, which is in shops today. Given clock-speeds reasonably analogous to the GTX 2xx series', Fermi should have a memory-bandwidth advantage and, perhaps, win out in the double-precision stakes, but we'd be surprised if it trounced AMD's best in a gaming environment.
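To see where that bandwidth advantage comes from, here's a back-of-the-envelope sketch. The HD 5870 figures are its launch specs (256-bit GDDR5 at 4.8Gbps per pin); Fermi's 384-bit bus has been disclosed, but its memory clock hasn't, so the 4.0Gbps rate below is purely our assumption.

```python
def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak theoretical bandwidth in GB/s: bus width (bits) x per-pin rate (Gbps) / 8."""
    return bus_width_bits * data_rate_gbps / 8

hd5870 = bandwidth_gbs(256, 4.8)  # 256-bit GDDR5 at 4.8Gbps -> 153.6 GB/s
fermi = bandwidth_gbs(384, 4.0)   # 384-bit GDDR5 at an assumed 4.0Gbps -> 192.0 GB/s

print(f"HD 5870: {hd5870:.1f} GB/s, Fermi (assumed clocks): {fermi:.1f} GB/s")
```

Even at a slower per-pin rate than AMD's part, the wider bus leaves Fermi ahead on paper.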
NVIDIA is making much of the compute design, driven on by its massive investment in CUDA, and Fermi can be thought of as much an HPC tool as a gaming card. AMD's design, however, isn't just a basic pixel-pusher. It, too, supports the IEEE 754-2008 standard, runs double-precision at a mighty fast rate and can, we believe, handle multiple kernels.
The big difference is that Radeon HD 5870 is out today, e-tailing for £300. The higher-end Fermi cards will cost at least as much, we imagine, because fitting in 3bn transistors - and yes, NVIDIA and AMD count them differently - must mean a die-size appreciably larger than HD 5870's 334mm². Yields are inextricably linked to die-sizes, so producing a 50 per cent larger die (than HD 5870) will, ceteris paribus, mean more dies ruined by wafer flaws and fewer good chips per wafer. Going big(ger) is bad for business.
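The yield argument can be sketched with the simple Poisson yield model, Y = exp(-D x A), where D is defect density and A is die area. The defect density below is an assumption for illustration only - real figures for TSMC's 40nm process aren't public - but the relative gap between the two die sizes holds regardless of the exact value.

```python
import math

def poisson_yield(die_area_mm2, defects_per_cm2):
    """Fraction of flaw-free dies under the Poisson yield model Y = exp(-D*A)."""
    area_cm2 = die_area_mm2 / 100.0
    return math.exp(-defects_per_cm2 * area_cm2)

D = 0.5  # assumed defects per cm^2, for illustration only

hd5870_yield = poisson_yield(334, D)        # HD 5870's 334mm² die
fermi_yield = poisson_yield(334 * 1.5, D)   # a die 50 per cent larger

print(f"HD 5870-sized die: {hd5870_yield:.1%}, Fermi-sized die: {fermi_yield:.1%}")
```

The larger die doesn't just yield proportionally worse - the exponential means every extra square millimetre compounds the damage, which is the crux of the cost argument above.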
AMD's hit the ground running with its high-end GPUs and will see healthy sales before Fermi ever gets packaged inside a retail box. Time to market is telling, especially in the run-up to the festive season, and NVIDIA's procrastination may well cause financial pain - in the short term at least. We expect the card to hit the shelves no earlier than February 2010, giving AMD a clear four-month-plus run with DX11 parts in the channel.
Ultimately, NVIDIA has detailed Fermi as a forward-looking, programmable architecture that's aimed at enlarging the company's footprint in the lucrative HPC space - somewhat pre-empting Intel's Larrabee - whilst keeping desktop and mobile 'gaming' customers happy. We'll find out more when NVIDIA divulges the intricacies of the design in coming weeks and months.