Tera-flops and Terabytes
In our look at Abel Weinrib's Day Zero opening presentation, we mentioned Tera-scale networking as one of the technologies in Intel's high-risk, high-reward innovation pipeline. So what's Tera-scale about, then? Lasers, people. Lasers.
Today's dual-core processors are just the tip of the iceberg, as Intel sees it. Even quad-core is puny compared with what is to come. Tera-scale computing means tera-flops of processing power, spread across many cores, crunching away on data sets that are terabytes in size.
On the hardware side, we're talking multi-core architectures with cores specialised for particular tasks, along with memory that can scale with the processors. All of this must, of course, be coupled with energy efficiency.
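To see why memory has to scale with the cores, a back-of-envelope sum helps. The figures below are our own illustrative assumptions, not Intel's numbers: if each floating-point operation pulls roughly one byte of operand data from memory, a tera-flop chip needs on the order of a terabyte per second of sustained bandwidth.

```python
# Back-of-envelope sketch: why memory must scale with the cores.
# Assumptions (ours, for illustration): a 1 teraflop target, and roughly
# one byte of off-chip memory traffic per floating-point operation.
flops_per_second = 1e12      # 1 tera-flop of compute
bytes_per_flop = 1.0         # assumed memory traffic per operation

required_bandwidth = flops_per_second * bytes_per_flop  # bytes/second
print(f"Sustained memory bandwidth needed: {required_bandwidth / 1e9:.0f} GB/s")
```

Even if real code touches memory far less often than this, the gap between that figure and what a conventional memory bus delivers is the point: feeding the cores is as hard a problem as building them.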
However, the problems in this shift towards massively parallelised processing don't rest solely in the laps of hardware designers. The software needs to leverage the new magic that's being conjured up in Intel's boiling pot. There's little point having six cores waiting for data from two cores that are taking their time, for example. New compilers and libraries need to be developed to allow software to be written that will actually make use of Tera-scale architectures.
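The cores-waiting-on-cores problem above can be sketched in a few lines. This toy scheduler is our own illustration, not an Intel library: it compares carving the work into fixed per-core chunks against handing each task to whichever core frees up first.

```python
# Toy scheduler sketch: why naive static partitioning leaves cores idle.
# The task costs and both schedulers are illustrative assumptions.
import heapq

tasks = [8, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]  # uneven work items (arbitrary units)

def static_makespan(tasks, cores):
    """Split tasks into contiguous equal-count chunks, one chunk per core."""
    chunk = len(tasks) // cores
    loads = [sum(tasks[i * chunk:(i + 1) * chunk]) for i in range(cores)]
    return max(loads)  # the slowest core holds everyone else up

def dynamic_makespan(tasks, cores):
    """Greedily hand each task (largest first) to the least-loaded core."""
    loads = [0] * cores            # per-core finish times, kept as a min-heap
    heapq.heapify(loads)
    for cost in sorted(tasks, reverse=True):
        heapq.heappush(loads, heapq.heappop(loads) + cost)
    return max(loads)

print(static_makespan(tasks, 4))   # one unlucky core gets the heavy task plus more
print(dynamic_makespan(tasks, 4))  # the heavy task no longer drags a full chunk with it
```

With four cores, the static split finishes in 10 time units while the greedy version finishes in 8, because the light tasks flow to idle cores instead of queueing behind the heavy one. Compilers, runtimes and libraries for Tera-scale hardware would need to make this kind of balancing automatic.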
When processor, software and interconnecting hardware combine, we get Intel's favourite product term: the platform. With Tera-scale we'll see virtualisation continue to grow, so operating systems will need to scale well. I/O between devices and networking between computers are also key.