This section hasn't been revised for quite a few years and is largely useless now. There is any amount of this sort of thing elsewhere on the web these days, so we can think of more important things to do than update this old page.

The computer industry is notorious for using obscure technical terms. Sometimes this is just to make life difficult for non-technicians. More often, it's because computers are complex and full of new technologies, and it's simply not possible to describe them without using at least some of the specialist language.

Also see part two: N to Z.

Accelerated Graphics Port (AGP) The now all but universal AGP interface is more-or-less a "super PCI" designed especially for video cards. In addition to allowing higher-speed data transfer than the PCI bus, AGP makes provision for direct access between the graphics adaptor and main RAM, which means that the graphics sub-system is not limited to using just its own RAM — though this is a rather pointless feature, as using main system RAM for graphics is very slow. AGP-equipped main boards first appeared on the market in late 1997. For quite some time AGP was more hype than substance, having the usual host of bleeding-edge teething issues but offering little or no performance advantage over PCI. With time it matured into the fast, stable and practical interface that it is now. Within broad limits, the various interface speed variations (4X, 8X and so on) are not particularly significant; the raw capabilities of the card itself are much more important.

(Photo: part of an LX mainboard with, from the left, two black 16-bit ISA slots, four white PCI slots, and a brown AGP slot.)

Areal Density Long the Holy Grail of hard drive manufacturing. Areal density measures the amount of data that can be squeezed onto a given area of a disc. You can increase areal density either by placing the tracks closer together (track density), or by making each bit shorter so that more of them can fit onto each track (linear density). In practice, both methods are used. The benefits of increasing areal density are enormous. Obviously, it gives more storage per disc, allowing higher capacity drives. Less obviously, it allows the designer to use a smaller number of discs and read/write heads, which reduces complexity, weight, and power consumption, and saves money. It increases reliability too. Most important of all, an increase in areal density usually means an increase in linear density, which translates directly into increased performance — because the more data passes under the head per second, the faster the data transfer rate.

Unfortunately, the very high track densities of recent drives, say from about the 10GB per platter drives of 1999 onwards, have made it increasingly difficult to position the read head with the required accuracy. Typical seek times of mass-market and high capacity drives have suffered, and high performance drives now have to sacrifice areal density to achieve good seek performance.
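The relationships above reduce to simple arithmetic. This is an illustrative sketch only — the function names are ours and the figures are invented for the example, not taken from any real drive:

```python
# Areal density = track density x linear density, and the sustained
# transfer rate scales with linear density times rotational speed.

def areal_density(tracks_per_inch, bits_per_inch):
    """Bits per square inch of disc surface."""
    return tracks_per_inch * bits_per_inch

def transfer_rate_bits(bits_per_inch, track_circumference_in, rpm):
    """Bits passing under the head per second on one track."""
    revs_per_second = rpm / 60
    return bits_per_inch * track_circumference_in * revs_per_second

# Doubling linear density doubles capacity *and* the transfer rate:
base    = transfer_rate_bits(200_000, 6.0, 7200)
doubled = transfer_rate_bits(400_000, 6.0, 7200)
assert doubled == 2 * base
```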

BIOS: Basic Input Output System. The hard-coded program that controls the start-up of your computer and is responsible for both finding and loading your operating system, and for providing the operating software with an interface to the hardware. Although it is usual to mean 'the main board BIOS' when you use the term, many other components have a BIOS too, notably add-on IDE controllers, video cards, and SCSI host adaptors.

Bus Sometimes spelled "buss", particularly in older electronics texts. In electronics, a bus is any conductor that is shared or common. The six or eight ISA slots in a 286 board, for example, form a bus because they are connected together and all share exactly the same signals. The PCI bus is broadly similar, but notice that AGP is a port, not a bus: there is only one single AGP connector, and even if you were to wire in another slot it would not work, as the signals cannot be shared.

Bus Speed The speed your CPU chip runs at is not the speed your computer as a whole runs at. In the early days, all parts of the system ran at the same speed: most XTs, for example, ran at 8MHz. Gradually, the slowest parts of the system were split off onto separate clocks. First to go was the I/O section: 286 and 386 systems pegged the ISA bus at 8MHz, while the main board and the rest of the system ran at anything up to 40MHz.

With the later-model 486s, the CPU itself was split off, allowing the main board to remain at a practical 33 or 40MHz while the CPU, being much smaller and easier to clock up without problems, was free to reach 133MHz. Most recently it has become common to split the RAM off too: many 100MHz Super 7 boards could run their RAM at 66MHz (to save buying expensive PC-100 RAM), and current systems run a bewildering variety of main board and RAM speed combinations, anywhere between 100 and 400MHz.
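The relationship between bus clock and CPU clock is plain multiplication; a quick sketch (values chosen for illustration):

```python
# The CPU core clock is the bus clock times a multiplier set on the
# board (or, later, locked into the chip itself).

def core_clock(bus_mhz, multiplier):
    return bus_mhz * multiplier

# A 100MHz Super 7 board running a chip at a 3.5x multiplier:
print(core_clock(100, 3.5))   # 350.0 — e.g. a K6-2/350
```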

In the Socket 7 systems of the later part of the '90s, bus speed was critical as it determined not only the speed of the main board (which was and still is the limiting factor on communication between all other system components), and usually the speed of the RAM, but also the speed of the cache RAM. In general, bus speed was about twice as important as RAM speed. In modern systems with on-chip cache, bus speed is less critical as the cache RAM is unaffected by it; nevertheless it remains a very important performance factor. Consider, for example, the underwhelming performance of DDR-equipped Pentium III boards: although they sported 266MHz DDR RAM, because of their low 133MHz bus speed they could not take advantage of it.

Over the years, many of the best CPUs have run higher than normal bus speeds: the immortal 386DX-40 (40MHz) outperformed many a 25 or 33MHz 486, the Pentium 100 (66MHz) was excellent in its day, as was the 6x86-200 (75MHz). The Athlon Classic (200MHz) was noticeably faster than competing CPUs, and the K6-2/300 (100MHz) was outstanding. But like all good things, it can be overdone. Fast bus speed is difficult to achieve reliably: the 486DX-50 (50MHz) and the Pentium-66 (66MHz) were never quite worth trusting; the 6x86-200 (75MHz) and the 40MHz bus 486 socket chips (486DX-40, 80, and 120, 5x86-120) needed careful attention to motherboard choice, and the 83MHz Cyrix chips were very difficult to get right.

Cache Usually pronounced "cash" — it is indeed a French word, which is why that pronunciation is the correct one — though "kaysh" is common; "kashay" belongs to the unrelated word cachet. A small amount of fast storage used to accelerate a larger amount of slower storage. Modern computers use several layers of cache: primary cache on the CPU itself to accelerate the secondary cache, which accelerates the main RAM, part of which is, in turn, used to accelerate the hard drive.
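The layering idea — a small fast store answering for a big slow one — can be modelled in a few lines. This is purely an illustrative sketch; the names and sizes are ours:

```python
# A tiny software model of a cache: look in the small fast store first,
# fall back to the big slow one on a miss, and evict the oldest entry
# when the cache is full.

slow_storage = {n: n * n for n in range(10_000)}  # stands in for RAM/disk
cache = {}                                        # the small, fast layer
CACHE_SIZE = 4

def read(key):
    if key in cache:                  # hit: fast path
        return cache[key]
    value = slow_storage[key]         # miss: go to the slow layer
    if len(cache) >= CACHE_SIZE:      # full: evict the oldest entry
        cache.pop(next(iter(cache)))  # dicts keep insertion order
    cache[key] = value
    return value
```

Real caches use smarter eviction policies (least-recently-used and friends), but the hit/miss structure is the same.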

Clean Room A term with two quite different meanings. The first is obvious and literal: the part of an electronics plant where it's necessary to keep the environment absolutely clean to avoid contamination of the manufacturing process. (Photo: IBM Microelectronics.)

The second meaning is quite different: a clean room design aims to duplicate the characteristics and function of an existing part without looking at the original. There are various tales of the extraordinary measures early monopoly breakers took to not only come up with independent work-alike parts, but to be able to prove that they had never examined the originals. Phoenix and C&T (Chips and Technologies) were amongst the first to use clean room techniques to break IBM's BIOS monopoly and thus make the PC as we know it possible. C&T went on to do a clean room 386 CPU too, which they later abandoned. (Guess who eventually bought C&T — a delightful irony!) There was great excitement in the industry at the first clean room X86 CPU chips from Cyrix and (to a lesser extent) AMD. These days, creating a compatible clean room X86 seems to be almost routine, and the leading manufacturers are more interested in developing their own innovations. With the gradual acceptance of broad cross-licence agreements, we are seeing a healthy trend to take competition out of the clean room — and the courtroom — and back into the development lab.

Co-Pro Maths Co-processor for a CPU. See NPU.

FPU Floating Point Unit. See NPU.

FUD Fear, Uncertainty and Doubt. A classic technique of dirty marketing. Instead of marketing your product on its merits, undermine the competition's product instead. You don't have to actually find anything wrong with it, just hint that its future is uncertain, that its compatibility is suspect, or that the company making it has shaky finances. Then sit back and let unfounded rumours do their thing. (If any of the rumours happen to be true, then so much the better.) Be careful not to actually say anything concrete — if you make a particular accusation it might be refuted. Just hint in general terms. Your competitor, of course, is now in a very awkward position: if they say nothing, the rumours grow. And if they reply, they are helping you in your campaign to undermine them. Remember to keep it non-technical: you are not trying to convince informed industry insiders — you need some real facts to do that — you just need to plant a tiny seed of uncertainty or doubt in the emotions of the average consumer. Most people don't have the time, the training, or the desire to investigate competing products thoroughly, so they'll go with the one they "feel safe" with.

FUD is probably the single most effective marketing tool in the computer industry. Most of the really well established major firms are good at it — IBM, Adaptec and Creative are obvious examples — but Intel are particularly skilled, and Microsoft are in a class of their own: undisputed world champions.

Giant Magneto-Resistive Heads See Magneto-Resistive Heads below.

IPC Instructions Per Clock. A measure of CPU efficiency. The more advanced the design, the more instructions the CPU is able to execute per clock-tick. On the other hand, the greater complexity of a highly super-scalar chip means that it is often difficult to make the part work at higher clock rates. The Athlon and the Pentium 4 make good examples: on typical code the Athlon averages roughly 0.9 instructions per clock, the Pentium 4 only about 0.6 IPC: so the Pentium 4 has to be clocked around fifty percent faster to deliver equal performance — which, as it happens, is pretty much exactly what happens in real life. See scalar and super-scalar.
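The arithmetic behind that comparison is just clock speed times IPC. A sketch with illustrative figures in the 3:2 ratio the entry describes (not measured values):

```python
# Delivered performance is roughly clock speed times IPC.

def performance(clock_mhz, ipc):
    return clock_mhz * ipc

efficient_chip = performance(1000, 0.9)   # 900 "units of work"
deep_pipeline  = performance(1500, 0.6)   # 900 too — but it needed a
                                          # 50% clock advantage to tie
assert abs(efficient_chip - deep_pipeline) < 1e-9
```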

Latency In hard drives, this is the average time it takes for the requested data to arrive under the read head after the head has been positioned over the correct track. Latency is obviously directly related to the RPM of the drive — the faster the drive spins, the smaller the latency and the better the performance. For something on the practical effect of this, see the entry on the performance of a popular but high-latency drive.
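On average the wanted sector is half a revolution away when the head arrives, so rotational latency follows directly from spindle speed. A quick back-of-envelope sketch:

```python
# Average rotational latency is the time for half a revolution.

def avg_latency_ms(rpm):
    seconds_per_rev = 60 / rpm
    return seconds_per_rev / 2 * 1000   # convert to milliseconds

print(round(avg_latency_ms(5400), 2))   # 5.56ms for a 5400RPM drive
print(round(avg_latency_ms(7200), 2))   # 4.17ms for a 7200RPM drive
```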

Latency is important in many other areas too, particularly CPUs and RAM. While CPU clock speed and RAM data transfer rates have improved ten-fold over the last few years, RAM latency has actually got worse — this means that when the CPU requests information that isn't already in the cache, there is a long pause while it waits for the RAM to start sending.

LBA (Logical Block Addressing) Traditional CHS addressing (see Sector Translation below) limits hard drives to 1024 cylinders, 16 heads and 63 sectors per track. A sector is 512 bytes, so the biggest drive a CHS system can address is 528MB. To access a drive bigger than this, you need LBA. An LBA controller doesn't care how many cylinders, heads and sectors the drive has: it simply numbers sectors from zero and counts up from there. All current IDE drives use LBA — it came in with Enhanced IDE around 1995. SCSI drives have had LBA for a lot longer than that. (IDE LBA does have a number of upper limits of its own, which the drive manufacturers reach and then, eventually, find a way around. There are detailed but readable articles about this at several of the hard drive manufacturers' web sites.)
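The 528MB ceiling and the flat LBA numbering are easy to illustrate. The helper name here is ours; the conversion and the geometry limits are the standard IDE ones:

```python
# CHS numbers sectors from 1 within each track; LBA just counts flat
# from 0. The classic conversion, with the traditional IDE geometry
# limits as defaults:

def chs_to_lba(cyl, head, sector, heads=16, sectors_per_track=63):
    return (cyl * heads + head) * sectors_per_track + (sector - 1)

# The ceiling: 1024 cylinders x 16 heads x 63 sectors x 512 bytes
print(1024 * 16 * 63 * 512)        # 528482304 bytes — the "528MB" limit

assert chs_to_lba(0, 0, 1) == 0                        # the very first sector
assert chs_to_lba(1023, 15, 63) == 1024 * 16 * 63 - 1  # the very last one
```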

Magneto-Resistive Heads See Thin Film Heads first. Magneto-Resistive read heads are a newer and more accurate way of reading data back off a hard drive. A traditional thin film head relies on the current induced in it as it passes through the various magnetic fields encoded onto the disc. A Magneto-Resistive head uses a completely different method. All materials have some resistance to electrical current: with some it is very high (glass, rubber, plastic), with others very low (copper, iron, gold). A very few substances have variable resistance — they pass current more easily when they are in a magnetic field. A Magneto-Resistive read head has a tiny nickel-iron strip embedded in it. By measuring the voltage drop across this strip, the drive electronics can detect when it passes over a magnetised region of the disc, and thus reproduce the data. As it happens, an MR head can be smaller, faster, more tolerant of errors, and can read a weaker signal. This allows a four-fold increase in areal density, which translates directly into increased storage capacity and speed, without loss of reliability. Most of the major hard drive makers have more detailed articles on MR technology; start with the material from the people who invented the MR head, IBM Storage.

The Giant Magneto-Resistive head takes the process a little further, and more than doubles the sensitivity again. This allows still higher areal density, further reducing cost and increasing performance. Essentially, a GMR head has three very thin layers: a pinned layer of MR material which is kept in a single fixed magnetic orientation, a conductive copper (non-MR) spacer, and a second MR layer which is free to change magnetic orientation as the disc rotates beneath it. Current flows more freely when the two MR layers are oriented in the same direction, less freely when they are opposed. Once again, the drive electronics measure the change in current flow and thus reproduce the data. IBM had their first GMR drive in production in January 1998; the other manufacturers took a year or two longer. IBM Storage have a technical but quite readable article on GMR heads if you want to know more.

Moore's Law: Proposed by Intel founder Gordon Moore, whose observation — strictly about transistor counts doubling every year or two — is popularly restated, as here, as CPU power increasing by about fifty percent each year. It has proven to be astonishingly consistent over the last 20 years or so — though it may not be for too much longer. With every year that passes it gets harder to find ways of improving the breed. Eventually, the process of CPU development must run into the basic laws of physics and slow down or stop. In the meantime, the effect of Moore's Law is to put the CPU manufacturers in a Red Queen's race (running as hard as they can just to stay in the same place). Fifty percent a year doesn't sound so hard, but think about it this way: if you are developing a new chip, for every week it takes you to get it off the drawing board and into volume production, you have to make it go almost an extra one percent faster, just to stay level with your competition. It's a very tough industry and only the best survive.
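That "one percent a week" figure is easy to check with compound-growth arithmetic:

```python
# Fifty percent a year, compounded weekly: how much faster per week?

weekly_factor = 1.5 ** (1 / 52)
print(round((weekly_factor - 1) * 100, 2))   # ~0.78 percent per week

# So "almost one percent per week of development time" is about right:
assert 1.007 ** 52 < 1.5 < 1.008 ** 52
```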

MFM (Modified Frequency Modulation) See RLL. A data encoding method used by older hard drives — typically 20 or 40MB — and still used by all floppy drives.

MMX (Multi-Media eXtensions) The first SIMD extension to be added to the X86 instruction set. Introduced by the Intel Pentium MMX and included as a matter of course in all more recent designs. Of little practical use, but followed by much more powerful extensions in 3DNow and SSE. See SIMD.


Also see part two: N to Z.