
High-performance computing from IBM and Intel

by Tarinder Sandhu on 8 May 2008, 08:46

Tags: Intel (NASDAQ:INTC), IBM (NYSE:IBM)

Quick Link: HEXUS.net/qammw


How does it work?

The Montpellier HPCJCC is a collaboration between IBM, Intel, and Cisco, housed at IBM's Executive Briefing Centre. The hardware on site enables clients to benchmark and optimise their particular software code, with recommendations on a cluster-based implementation - nodes linked together to form a supercomputer - derived from the results. For example, a medium-sized company could be best served with a rack full of 4P Tigerton-based systems, or a number of 2P Xeon 5400s might better fit the bill. If space is at a premium, the HPCJCC may recommend the adoption of blades, while energy-conscious users could be directed towards, say, low-voltage Harpertowns (50W TDP) instead.

Some clients may require significant help and advice in optimising their current software to run better on multi-core servers. Others with considerable HPC experience, such as Tetra Pak, may simply need to fine-tune their application(s) to operate at peak efficiency on the latest iteration of the Xeon platform, with little additional assistance. The Centre provides both - for a fee, of course.

Client code used to evaluate platforms can be executed remotely, via secure log-ins, but, according to the Centre's Francois Thomas, datasets can be large enough to require the client to grab a rack, jump on a plane, and bring it, personally, to the HPCJCC. Remotely-initiated benchmarks can complete in seconds, but some, Thomas said somewhat laconically, run for weeks.

Clients may also request performance projections up to three years in the future, wanting to know how future platforms will perform with their software. That's why the HPCJCC needs to work closely with Intel (and AMD) to evaluate how upcoming platforms - Nehalem, Westmere, and Sandy Bridge, for example - will change the performance landscape.

Make no mistake, however: the HPCJCC is a sales tool, designed to custom-fit HPC requirements for customers who, most likely, have infrastructure in place and want to compare its performance against what Intel and its partners offer in the high-performance computing ecosystem, or for companies whose growth relies on expanding their IT hardware. Based on the software optimisation and hardware recommendation(s), a parallel operation then goes for the hard sell, attempting to convince folk that the triumvirate of IBM/Intel/Cisco is the way to go.

We questioned whether the HPC space was large enough to warrant such a Centre. Dave Jursik, VP of IBM's worldwide Deep Computing (HPC) sales, presented data showing that, whilst server demand is growing inexorably, the HPC subset - defined as cluster implementations costing between $50,000 and $500,000, sans support - is growing faster still, at around 19 per cent last year.

Of course, HPC isn't limited to those with a minimum $50,000 spend. A number of companies will provide turnkey HPC solutions for $10,000, or less, and Intel's very own Skulltrail represents a modicum of high-performance computing, really.

The server room

IBM uses part of the Executive Briefing Centre to run mission-critical mainframes for an eclectic array of clients. As with any large-scale server room, it's loud, windy, and houses expensive hardware.




The HPC cluster, used by clients to benchmark and evaluate performance, is shoehorned into a smaller section.



And here it is

The IBM System Cluster comprises the three right-hand racks in the picture above. It is primarily powered by 32 IBM HS21 XM blades, each housing two Xeon 5300- or 5400-series processors - 65nm and 45nm quad-core CPUs, respectively - giving a total of 256 cores on tap.
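
For the curious, the core count, and a ballpark figure for what it adds up to, can be sanity-checked with some simple arithmetic. The 3.0GHz clock and four double-precision FLOPs per core per cycle below are our own illustrative assumptions rather than figures supplied by IBM:

# Rough sanity check on the cluster's size - our own arithmetic, not an official IBM figure.
blades = 32            # IBM HS21 XM blades in the cluster
sockets_per_blade = 2  # two Xeon 5300/5400-series CPUs per blade
cores_per_socket = 4   # Clovertown and Harpertown are both quad-core

total_cores = blades * sockets_per_blade * cores_per_socket
print(total_cores)     # 256, matching the figure quoted above

# Illustrative theoretical peak, assuming 3.0GHz parts (the same E5472 assumption
# as the pricing estimate later) and four double-precision FLOPs per core per cycle.
peak_tflops = total_cores * 3.0e9 * 4 / 1e12
print(round(peak_tflops, 2))  # ~3.07 TFLOPS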





Adding some more oomph to the proceedings is an IBM x3950 M2 - a 4P (16-core) Tigerton-based box. We'd imagine that its Xeon 7300-series processors will be swapped out when the Dunnington core becomes widely available later this year.

Hooking the nodes together, alongside Ethernet, is some Cisco InfiniBand kit, comprising a switch and ConnectX-based 4x DDR HCAs, shown below the Tigerton node.


Messy, messy, messy, huh?
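
As a quick refresher on what that 4x DDR InfiniBand fabric delivers per link, the sums below use the standard DDR signalling rate and 8b/10b encoding overhead - generic InfiniBand figures, not numbers supplied by Cisco on the day:

# Per-direction bandwidth of a 4x DDR InfiniBand link (standard rates, not Cisco-supplied figures).
lanes = 4                       # the '4x' link width
signalling_gbps_per_lane = 5.0  # DDR signalling rate per lane
encoding_efficiency = 0.8       # 8b/10b encoding: 8 data bits for every 10 on the wire

raw_gbps = lanes * signalling_gbps_per_lane  # 20 Gb/s on the wire
data_gbps = raw_gbps * encoding_efficiency   # 16 Gb/s of usable data per direction
print(raw_gbps, data_gbps, data_gbps / 8)    # 20.0 16.0 2.0 (i.e. roughly 2GB/s each way)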

Some back-of-the-envelope calculations suggest that the retail price of the hardware, assuming the use of 3.0GHz (80W) Xeon E5472s and 16GiB of FB-DIMM per node, is well over $50,000 - which is still relatively small for an HPC installation.
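
Our sums ran along these lines; the per-processor figure is an assumed 2008 list price for the 3.0GHz Xeon E5472, not a vendor quote, and it ignores the memory, blades, chassis and InfiniBand kit entirely:

# Simplified back-of-the-envelope sum: the per-CPU price is our assumed 2008 list figure, not a vendor quote.
blades = 32
cpus_per_blade = 2
assumed_cpu_price_usd = 1000  # rough list price assumed for a 3.0GHz Xeon E5472

cpu_spend = blades * cpus_per_blade * assumed_cpu_price_usd
print(cpu_spend)  # $64,000 for the processors alone, before the FB-DIMMs, blades and InfiniBand are counted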



But then, at a higher tier, there are enterprise-class mainframes such as the IBM System z10, powered by the eponymous CPU - up to 64 of them in the E64 variant - together with 1.5TiB of memory. The E64 machine weighs in at over two tonnes and draws 27.5kW of input power.