Guest: bzykbzyk.
IP: *.nas1.honolulu1.hi.us.da.qwest.net
05.07.04, 05:51
Delicious stuff with lots of room for growth!
Today's popular PCI bus is a 32-bit, 33MHz shared-bus architecture with a bus-
wide limitation of 133MB/s.
PCI Express trumps that with point-to-point communications and bandwidth
increases to the tune of 500MB/s (250MB/s each way) for what is being called
a "PCI Lane." The serialized, point-to-point nature has allowed Intel to take
16 of these lanes and hook 'em up to the x16 graphics port (16 lanes, get it?)
for, you guessed it, 8GB/s of bandwidth (but again, it's only 4GB/s each
way). The specification allows for lanes to be combined in x2, x4, x8, x12,
x16, and even x32 configurations.
Most PCI Express boards for the next few years will also support legacy PCI
slots. AGP, however, gets kicked to the curb.
You can hit the reviews for the hard numbers, but what's clear is that this
marks the introduction of a new foundation from which we can expect great
things down the road, even if we won't see much payoff in terms of
price/performance anytime soon.
For the gamers, the case is clear: you could swing PCI Express if you're
willing to drop the cash on a PCI Express video card, but the payoff isn't
that strong (yet), and frankly, currently available systems more or less
outperform it right now anyway, thanks to AMD.
Furthermore, DDR2 memory at 533MHz isn't really outperforming older DDR400
RAM, so for the most part, we're left with a "next year, this will be great"
kind of vibe, not unlike the original launch of PCI or even the constant
droning about AGP this or that.
Like a router in a network or a telephone switchbox, the switch routes bus
traffic and establishes point-to-point connections between any two
communicating devices on a system. To return to our office analogy from the
previous section, each employee has his or her own private line to the front
desk; so instead of shouting over a shared line to get a particular
employee's attention, the front desk secretary uses a switchboard to connect
employees directly to incoming callers and to each other.
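The switchboard model above can be sketched in a few lines of Python. The device names and port numbers here are purely illustrative; a real PCIe switch routes packets by memory-address ranges, not by name:

```python
# Toy model of the switchboard: the switch knows which port each device
# sits on and sets up a private src->dst path for each transfer, instead
# of broadcasting every transaction on one shared bus. Device names and
# port numbers below are made up for illustration only.

class Switch:
    def __init__(self):
        self.ports = {}  # device name -> port number

    def attach(self, device, port):
        self.ports[device] = port

    def route(self, src, dst, payload):
        # Point-to-point connection: only src and dst are involved;
        # no other attached device ever sees this traffic.
        return (self.ports[src], self.ports[dst], payload)

switch = Switch()
switch.attach("cpu", 0)
switch.attach("gpu", 1)
switch.attach("nic", 2)
print(switch.route("cpu", "gpu", b"frame data"))  # -> (0, 1, b'frame data')
```

The point of the sketch is the contrast with the shared bus: nothing is "shouted" to every device, so two pairs of devices could in principle talk at the same time without contending for the same wires.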
Backwards compatibility
You've probably heard that PCIe is backwards-compatible with PCI, and that
operating systems can boot on and use a PCIe-based system without
modification. So you're no doubt wondering how PCI's load-store model,
described previously, can be compatible with the switched packet-based model
outlined here. The answer is more straightforward than you might think.
PCI and PCI Express, like many computer systems designed to transmit data,
implement a part of the OSI network stack. This article is not the place for
a detailed breakdown of a network stack, but the basic idea behind it is easy
enough to grasp.
PCI implements the first four layers of the OSI stack, which specify the
physical aspects of transmission (i.e., the wire-level signals) up through
the higher-level load-store interface that software uses to send and receive
via PCI.
PCI Express's designers have left this load-store-based, flat memory model
unchanged. So a legacy application that wants to communicate via PCIe still
executes a read from or a write to a specific address. The next two stack
levels down, however, take this read or write request and convert it into a
packet by appending routing and flow control information, as well as CRC
information, placing it in a frame, and then sending it to its destination.
So the application still thinks that it's reading from or writing to a memory
address when it talks to a PCI device, but behind the scenes there's a
totally different network of protocols and signals at work shuffling that
read or write request along to its destination.
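That wrapping can be sketched roughly in code. The field sizes and framing bytes below are invented for illustration and do not match the real PCIe packet format; they just mirror the steps described above — attach routing info, append a CRC, frame it:

```python
import struct
import zlib

# Rough sketch of the layering described above. The application issues a
# plain store to an address; the lower layers turn it into a routed,
# CRC-protected frame. All field sizes and framing bytes here are
# invented and do NOT match the real PCIe packet format.

def to_packet(address, data):
    # "Transaction" step: prepend routing information (the target address).
    return struct.pack(">Q", address) + data

def to_frame(packet):
    # "Link" step: append a CRC computed over the packet, then frame it.
    crc = struct.pack(">I", zlib.crc32(packet))
    return b"\x02" + packet + crc + b"\x03"

# From the application's point of view this is just a memory write...
frame = to_frame(to_packet(0xE0000000, b"\xde\xad\xbe\xef"))
# ...but what actually travels over the link is a framed, checksummed packet:
# start marker + 8-byte address + 4-byte payload + 4-byte CRC + end marker.
print(len(frame))
```

The receiving end would strip the frame, verify the CRC, and hand the load/store request back up the stack — which is why software above the top layer never notices the difference.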
This brings us back to the topic of command and control signals. As I
hinted at earlier, PCIe takes all PCI side-band signals and converts them to
MSI signals (which are load-store) so that they can be encapsulated into
packets and routed just like any other read/write traffic. Of course, this
means that all types of PCIe traffic — whether command or read/write, or
address or data — are transmitted over a single bus.
It's important to note at this point that the two pairs of bus traffic types
are logically divided under PCIe, even if they're not physically separated
onto different buses. The first two types of traffic, address and data, are
combined in the form of the packet. The core of a packet consists of an address
combined with a chunk of data; so the packet structure fuses these two types.
The packets themselves, though, generally fall into the two other categories:
command and read/write. In fact, literature on a packet-based bus system like
PCIe or RAMBUS will often talk of command packets and data packets, the
latter being the more common name for what I'm calling read/write packets.
A link that's composed of a single lane is called an x1 link; a link composed
of two lanes is called an x2 link; a link composed of four lanes is called an
x4 link, etc. PCIe supports x1, x2, x4, x8, x12, x16, and x32 link widths.
PCIe's bandwidth gains over PCI are considerable. A single lane is capable of
transmitting 2.5Gbps in each direction, simultaneously. Add two lanes
together to form an x2 link and you've got 5 Gbps, and so on with each link
width. These high transfer speeds are good news, and will enable a new
class of applications, like...
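That per-lane arithmetic is easy to check. First-generation PCIe uses 8b/10b encoding, so 10 bits on the wire carry 8 bits of data, which is how a raw 2.5Gbps lane works out to 250MB/s of usable bandwidth in each direction:

```python
# First-generation PCIe link bandwidth. Each lane signals 2.5 Gbps per
# direction; with 8b/10b encoding, only 8 of every 10 line bits are data.

RAW_GBPS = 2.5        # per lane, per direction
EFFICIENCY = 8 / 10   # 8b/10b line code

def mb_per_sec(lanes):
    """Usable MB/s in each direction for an xN link."""
    return lanes * RAW_GBPS * 1e9 * EFFICIENCY / 8 / 1e6

for width in (1, 2, 4, 8, 16, 32):
    print(f"x{width:<2}: {mb_per_sec(width):6.0f} MB/s each way")
# x1 gives 250 MB/s each way (the 500 MB/s "per lane" figure counts both
# directions); x16 gives 4000 MB/s each way, i.e. the 8 GB/s total quoted
# earlier for the x16 graphics port.
```

Since bandwidth scales linearly with lane count, the only question for a given slot is how many lanes the chipset actually wires up to it.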
PCIe links hang off of both the northbridge and the southbridge. Just as the
northbridge and southbridge combine with the CPU to fill the role of PCI
host (or root), the northbridge and southbridge join with each other to
fulfill the role of the PCIe switch. In Intel's design, the north and south
bridges are PCIe switches connected by a single, high-bandwidth PCIe link.