D'Arcy Lemay · 12-30-2004
History of PCI
I started in computers in what I consider the "bad old days". Unlike cars from the '60s that baby boomers remember through rose-colored glasses, I haven't forgotten the nightmare of configuring an ISA or VL-Bus based system. It was slow, sorting out picky IRQs and jumper settings was my least favorite thing in the world, and let's face it, the slots were just plain huge.
Moving on to a PCI based system seemed like a dream. Back then, PCI took over and became the path through which every part of the computer communicated. It linked the north and south bridges; ATA and SCSI controllers passed through it to memory and the CPU, as did internal fax modems, network cards, sound cards, and of course dedicated graphics controllers. It was a huge improvement over other solutions of the time, thanks not only to its extra bandwidth (133MB/s), but to the introduction of "bus mastering". Instead of the CPU having to preside over every interaction, a separate controller handled the traffic between devices.
This allowed for "hardware abstraction": as far as the CPU was concerned, every device on the PCI bus was simply another part of the memory address space. Think of it as mailing a letter. You know the address you are sending to, you write it on the envelope, and you leave it up to the postman to deliver it. You aren't concerned with the letter after it leaves your hand; it's someone else's job from there on out. The same thing happens when the CPU wants to write data somewhere, for example sending data to the sound card to output to the speakers, or to the modem to send out over the phone line. On the other end, the devices hanging off of the PCI bus wait to hear their name called.
To them, it's like being in a hospital waiting room. You sit patiently until the nurse calls on you. Then it's your turn to go and chat with the doctor all on your own, while everyone else waits their turn. You come back, sit down, and whoever is next in line gets to do their thing. Everything has to go through the nurse; you can't even talk to your fellow patients while she is busy with someone else. In other words, only one person can talk at a time, much like being on a conference call. That's fine in a normal conversation between two people, or even with three-way calling.
Trying to coordinate 10 or more people, though, can become an awful mess. No one feels they are getting a proper chance to say what they want to say, and having all of those lines open at once adds a lot of background noise to the whole proceedings. It becomes even worse when one person hogs the phone. This is what happens when you put something like a dedicated graphics card, which constantly needs to be fed new information, on a shared bus. The other problem with graphics on the PCI bus was bandwidth. As higher resolutions became common, and video moved beyond a flat 2D background with a few rarely-changing colors, graphics traffic alone pushed the limits of what a 33MHz, 32-bit bus could transfer, to say nothing of other traffic such as audio and disk access. As a result, two things happened. One was the creation of dedicated buses such as AGP to deal with a single high-bandwidth device. The other was to extend the usefulness of the PCI bus design by creating Barry Bonds-like versions of it.
Take the skinny one and make it bigger was the thought process, and PCI-X was born. They doubled the pin count (and as a result, the number of parallel traces carrying data), and added versions with doubled clock speeds. This corrected the immediate bandwidth issues; however, it made the other inherent problems of PCI worse. Those who were building computers as ATA grew into the dominant consumer format for permanent storage know what I'm talking about. Any time you add speed and width to a parallel interface, crosstalk (interference between one wire and its neighbor) increases and tolerance decreases. The result is a more expensive product, because you have to add yet more wires in between (the 40-wire ATA cable doubled to 80, with all the extras going to ground rather than carrying data between the actual connected wires) and pay more attention to the noise each individual connection adds to the bus. This is one reason there has not been a proliferation of PCI-X products on the desktop. Instead, just as with AGP, more independent single-device buses showed up. The connection between the north and south bridges gained its own bus, as did certain network controllers, USB, Parallel ATA and Serial ATA, and other high-bandwidth devices. A current non-PCI Express board is filled with all kinds of separate protocols, speeds, masters, slaves, specifications and so on. Trying to integrate all of that onto one PCB must be an absolute nightmare.