System Buses & Bandwidth

by Parm Mann on 21 October 2008, 00:00

In computing terms, system buses connect various components to the motherboard's core logic and, often, to each other. Modern PCs run a multitude of high-speed buses, from the interconnects linking the chipset to the CPU, graphics card, memory, and peripherals. Further, system buses also encompass features you know well; USB (Universal Serial Bus) and FireWire being cases in point. Bandwidth requirements are now such that system buses need to transfer masses of data between devices, making the chipset designer's task that much more difficult. We'll take a look at the handful of modern system buses required to make your PC tick, and conjecture on what you're likely to see in 2006 and beyond.

The technology and bandwidth concerns

Stating the obvious, computers become faster year on year; it's something we have come to expect ever since Gordon Moore, co-founder of Intel, observed that the number of transistors on a chip, and with it computing power, would roughly double every 24 months. The rise in computing power has necessitated an increase in system bandwidth, which in turn has led to the need for faster and faster buses to handle the increased traffic. That's precisely why newer buses have had to be introduced that can scale with the bandwidth requirements presented by modern CPUs and graphics cards in particular.

Buses come in various forms but all, basically, do the same job of moving data and power around a subsystem. Taking Intel's i975X chipset as an example of efficient bus implementation, let's take a close look at the technologies involved.

Take a look at the various numbers highlighted on the above block diagram and you'll note the kind of bandwidth required today. A classic two-bridge chipset needs a connecting system bus; here, Intel chooses a Direct Media Interface (DMI) bus running at a concurrent 2GB/s (enough, effectively, to transfer the contents of three CDs every second), with overall speed calculated in terms of bus width and clock. The 128-bit-wide system memory bus, when used in conjunction with DDR2-667-rated memory, runs faster still, with overall bandwidth topping out at 10.7GB/s ((128 bits / 8) x 667MT/s), of which 8.5GB/s can be transferred to the CPU via the 1066MHz FSB. In short, huge bandwidth figures are required to do the behind-the-scenes work that you don't see.
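The width-times-speed arithmetic above can be sketched in a few lines of Python; this is purely illustrative, with the function name our own, and it reproduces the memory-bus and FSB figures quoted for the i975X:

```python
# Peak bandwidth = (bus width in bytes) x (transfers per second).
# A minimal sketch; 1GB is treated as 1000MB, as in the quoted figures.

def peak_bandwidth_gbs(bus_width_bits, transfers_per_sec_millions):
    """Return peak bus bandwidth in GB/s."""
    bytes_per_transfer = bus_width_bits / 8
    return bytes_per_transfer * transfers_per_sec_millions / 1000

# 128-bit dual-channel DDR2-667 memory bus: (128/8) x 667MT/s
memory = peak_bandwidth_gbs(128, 667)   # ~10.7 GB/s

# 64-bit front-side bus at 1066MT/s: (64/8) x 1066MT/s
fsb = peak_bandwidth_gbs(64, 1066)      # ~8.5 GB/s

print(f"Memory bus: {memory:.1f} GB/s, FSB: {fsb:.1f} GB/s")
```

Note how the memory bus out-paces the FSB; the CPU can never quite drain the memory subsystem at full tilt.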

Modern system buses need bandwidth, bandwidth, and more bandwidth in order to connect high-speed peripherals to the core logic. That's why PCI-Express was introduced in 2004, with Intel among its chief backers. The PCIe protocol allows bus 'lanes', each operating at a bi-directional 250MB/s, to be teamed up to provide, you guessed it, greater bandwidth. Its physical manifestation is the x16 slot (16 lanes, naturally) used by a PCI-Express graphics card, offering up to 8GB/s of juicy bandwidth. Additionally, being a flexible system interconnect, single or multiple PCIe lanes can be used to hook up high-speed peripherals (Gigabit Ethernet or FireWire controllers, for instance) to the core logic, supplanting the archaic, limited PCI bus architecture that was prevalent for nearly a decade beforehand. The PCI-Express bus offers a forward-looking architecture that will scale as, inevitably, more system traffic is generated in years to come.

The need for high system bus speeds and low operating latencies pervades all modern chipsets. The popular Universal Serial Bus (USB) operates at up to 60MB/s per port and, lately, on-board SATA2, the conduit used for the latest iteration of hard drives, hums along at a raw 375MB/s per port (around 300MB/s of payload once encoding overhead is accounted for). Each system bus differs in the manner of data transfer and control, but all need to offer huge bandwidth with the least amount of CPU overhead possible. Even your basic £399 PC will have multiple buses shuttling gigabytes of data around every second, whether you know it or not.
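To put those per-port figures side by side, here's a small, purely illustrative comparison of the peak rates quoted in this article (the dictionary and labels are our own):

```python
# Per-port peak figures quoted above. These are raw signalling rates;
# real-world throughput is lower, e.g. SATA's 8b/10b encoding trims
# 3Gbit/s down to roughly 300MB/s of usable payload.

ports_mbs = {
    "USB 2.0": 60,               # 480Mbit/s / 8
    "SATA2 (3Gbit/s)": 375,      # 3000Mbit/s / 8, before encoding overhead
    "PCIe x1 (per direction)": 250,
}

for name, mbs in sorted(ports_mbs.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:>25}: {mbs} MB/s peak")
```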

System buses, then, may be implemented with differing widths, speeds, and protocols, yet all work towards the common goal of facilitating the seamless transfer of data and power that defines a present-day PC.

The market

The market for which system buses are designed is an ever-changing one. Faster and wider buses are constantly being architected for the next generation of chipsets, which will require even more bandwidth and power than the ones in use today. Broadly speaking, and taking the consumer-level market into account, core logics are designed for either Intel's or AMD's CPUs. We've looked at the basic bus setup pertaining to Intel's high-end CPUs. AMD's S939 processors, though, thanks to an on-chip memory controller, do away with the need for an ultra-fast bus to connect system memory to the chipset.

AMD also differs from Intel in the way it uses buses for its Athlon 64/Opteron processors. The CPUs connect to the chipset via a low-latency, point-to-point HyperTransport I/O link that resides on each processor, with bandwidth scaling up to 8GB/s based on a 1GHz HyperTransport clock. What's more, the HyperTransport bus can be used for CPUs to communicate with each other. Take AMD's Opteron 800-series processors as an example: each CPU has three high-bandwidth coherent HyperTransport links that can be used to communicate with other CPUs in a multi-way system. HyperTransport, designed by AMD and a number of partners, is evidence of a bus designed for a CPU-specific purpose, and other adopters, such as Apple, have seen the benefits of a single bus that can replace a number of older buses in one fell swoop. Indeed, with PCI-Express and HyperTransport, AMD appears to have the best formula for piping data in and around a system.
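The 8GB/s HyperTransport figure falls out of the same width-times-speed arithmetic used earlier; a sketch, assuming the common 16-bit, double-pumped link configuration (the function name is our own):

```python
# Hedged sketch: a 16-bit HyperTransport link at 1GHz is double-pumped
# (two transfers per clock), giving 2GT/s; counting both directions of
# the point-to-point link yields the 8GB/s aggregate quoted above.

def ht_bandwidth_gbs(link_bits, clock_mhz):
    """Aggregate HyperTransport link bandwidth in GB/s, both directions."""
    transfers = clock_mhz * 2                           # DDR signalling
    per_direction = (link_bits / 8) * transfers / 1000  # GB/s one way
    return per_direction * 2                            # both directions

print(ht_bandwidth_gbs(16, 1000))  # 8.0 GB/s
```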

The market is such that a handful of chipset companies design the core logics for all PCs, and, over time, system bus improvements are introduced gradually in an ongoing evolutionary process. With that in mind, let's see who the current movers and shakers of the chipset and, by inference, system bus world are.

The players

It's of little surprise that Intel manufactures a large proportion of chipsets for its own processors. System bus design and implementation, therefore, also falls on its shoulders, and the i975X, shown above, is the latest and greatest to come from Chipzilla. Two graphics card vendors, NVIDIA and ATI, amongst others, have also been busy designing core logics to support the millions of Intel CPUs churned out of its fabs. Both use a broadly similar system bus setup to Intel's, with PCI-Express featuring prominently; it's a particularly useful bus for implementing the multi-GPU setups, be it CrossFire (ATI) or SLI (NVIDIA), that high-end systems tend to ship with.

The common consensus is that system bus design will need to accommodate greater bandwidth in the future, just as CPUs house more transistors and complexity with every passing year. The challenge is to keep pace with new technologies that demand greater bandwidth, and the introduction of the multi-lane PCI-Express bus, in particular, will help chipset designers meet this need. The one aspect to take away after reading this is that system buses can never have enough bandwidth or speed. Put simply, faster is better.
