
High-performance computing from IBM and Intel

by Tarinder Sandhu on 8 May 2008, 08:46

Tags: Intel (NASDAQ:INTC), IBM (NYSE:IBM)

Quick Link: HEXUS.net/qammw


Background: what's the need for high-performance computing?

Essentially, Intel's 45nm-based Core microarchitecture concurrently covers the server/workstation, desktop and mobile spaces. The implementation may be, and often is, different in each case - varying in per-socket cores, on-chip cache, FSB speed, and memory requirements - yet the underlying execution units have a common heritage. That's precisely why the enthusiast-oriented Skulltrail platform, pushed as the ultimate gaming platform, doubles up as a decent workstation. AMD, too, shares a common underpinning between its server and desktop processors, the Opteron and Phenom SKUs.

Servers may use the same Xeon CPUs but the mentality is somewhat different. Here, the aim is to pack as much compute and memory-addressing power as possible into a small form-factor, usually referenced with respect to the industry-standard 1U footprint, measuring 19in wide by 1.75in high.

For example, the recently-announced IBM X3450 1U-based server houses two quad-core Xeons - up to 3.2GHz and with a 1,600MHz FSB - along with 16 slots for fully-buffered DIMMs and an x16 PCIe Gen 2.0 slot for processor-to-high-performance-I/O connectivity, usually via high-speed InfiniBand.
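To put that spec in perspective, here's a rough back-of-envelope calculation of such a node's theoretical peak - a minimal Python sketch, assuming four double-precision FLOPs per core per cycle (typical of 45nm Core-microarchitecture Xeons); sustained throughput in real workloads will be lower.

```python
# Rough theoretical peak for a dual-socket, quad-core Xeon node.
# The 4 FLOPs-per-core-per-cycle figure is an assumption (SSE multiply
# plus add per cycle on 45nm Core-microarchitecture parts), not a
# vendor-quoted number.
sockets = 2
cores_per_socket = 4
clock_ghz = 3.2
flops_per_cycle = 4

peak_gflops = sockets * cores_per_socket * clock_ghz * flops_per_cycle
print(f"Theoretical peak: {peak_gflops:.1f} GFLOPS per 1U node")
# => Theoretical peak: 102.4 GFLOPS per 1U node
```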

Such compute power, you may think, would suffice, but meat-and-drink activities such as web-serving or database-querying have proliferated to the extent that companies such as HEXUS require multiple nodes (1U servers) to serve our content effectively.

Thinking bigger still, the world of high-performance computing, once the domain of well-funded universities and Big Business, requires incredible compute power - hundreds and hundreds of cores - with minimal space and power investment. Want to (somewhat) accurately predict what the weather will be doing in three days' time? Want real-time, high-resolution medical imaging? How about calculating the likelihood of striking oil and gas in a designated area? HPC serves an eclectic bunch of ecosystems, each with its own requirements.

Frankly, you'll need teraFLOPS of performance, if not orders of magnitude more, so whilst today's quad-core CPUs from Intel and AMD churn out impressive numbers, hundreds or thousands of them need to be leveraged together, complemented by virtualisation-aware software, to burn through ever-so-complex calculations whose results impact on our daily lives. That, in a nutshell, is high-performance computing - racks and racks of densely-populated servers working in tandem.
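Carrying the arithmetic from the node sketch above one step further shows why a teraFLOPS target quickly translates into racks of servers. The 70-per-cent sustained-efficiency figure and the 10-TFLOPS target below are purely illustrative assumptions.

```python
# How many of the ~102-GFLOPS nodes from the sketch above does a
# teraFLOPS-class target need? Efficiency and target are illustrative.
import math

peak_gflops_per_node = 102.4   # theoretical peak from the earlier sketch
sustained_efficiency = 0.7     # assumed fraction of peak achieved in practice
target_tflops = 10             # hypothetical 10-TFLOPS cluster

sustained_per_node = peak_gflops_per_node * sustained_efficiency
nodes_needed = math.ceil(target_tflops * 1000 / sustained_per_node)
racks_needed = math.ceil(nodes_needed / 42)   # 42 x 1U servers per full rack
print(f"{nodes_needed} nodes (~{racks_needed} full 42U racks)")
# => 140 nodes (~4 full 42U racks)
```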

A quad-core Intel Core 2 Extreme system is cool, a 4P Barcelona-based Opteron system is cooler still, but a high-performance cluster (aka supercomputer) is, well, sub-zero cool. At HEXUS we look at most of the latest kit from industry heavyweights such as Intel, AMD, and NVIDIA, primarily based around the single-user desktop and mobile environments, and we figured it was high time to see how the HPC world provides computing solutions to its range of customers. For example, if a weather-forecasting company turns up with new code, how would it be tested and what would IBM recommend?

Adding to the complexity is the need to run company-generated code - your particular application, be it imaging, weather prediction and so on - on the hardware, and that code needs to be optimised for multi-core machines. The IBM HPCJCC aims to provide this and lots more, so let's take a look.
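Before we do, a quick flavour of what 'optimised for multi-core machines' means in practice: the hedged sketch below splits an embarrassingly parallel workload across a node's cores using Python's standard multiprocessing module. Real HPC codes would typically reach for MPI or OpenMP, and the sum-of-squares kernel here is purely a stand-in.

```python
# Minimal illustration of spreading an embarrassingly parallel workload
# across the cores of a single node. The kernel is a trivial stand-in
# for a real application (imaging, weather prediction, etc.).
from multiprocessing import Pool, cpu_count

def process_chunk(bounds):
    # Each worker handles an independent slice of the problem.
    start, end = bounds
    return sum(i * i for i in range(start, end))

if __name__ == "__main__":
    n = 10_000_000
    workers = cpu_count()                  # e.g. 8 on the dual quad-core node above
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], n)        # last chunk absorbs any remainder

    with Pool(workers) as pool:
        total = sum(pool.map(process_chunk, chunks))
    print(total)
```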