RISC

This article is about the processor. For other uses see: RISC (disambiguation)


RISC is an acronym that stands for Reduced Instruction Set Computer (or Computing).

RISC is a CPU design philosophy that followed from the discovery that many of the features that were included in traditional CPU designs for speed were being ignored by the programs that were running on them. In addition, the speed of the CPU in relation to the memory it accessed was increasing. This led to a number of techniques to streamline processing within the CPU, while at the same time attempting to reduce the total number of memory accesses.

More modern terminology refers to these designs as "load-store," for reasons that will become clear below.

Pre-RISC design philosophy

One of the basic design principles of all processors is to increase speed by providing some very fast memory for storing temporary data, known as "registers." For instance, almost every CPU includes a command for adding two numbers. The basic operation of the CPU would be to load the two numbers into registers, add them together, store the result in another register, and finally copy the result from that register back out to main memory.

However, registers have the downside of being somewhat complex to implement. Each one is represented by transistors on the chip, whereas main memory tends to be much simpler and less expensive. In addition, the registers add to the complexity of the wiring, because the main processing unit needs to be wired to all of them in order to be able to use them all equally.

As a result, many CPU designs limited the use of registers in one way or another. Some included very few, even though this seriously limited speed. Others dedicated their registers to specific tasks in order to reduce complexity; for instance, one might be able to apply math only to one or two of the registers, while storing the result in any of them.

In the microcomputer world of the 1970s this was even more of an issue because the CPUs were very slow—in fact they tended to be slower than the memory they talked to. In these cases it made sense to eliminate almost all of the registers and instead provide the programmer with a number of ways of dealing with the external memory to make their task easier.

Given the addition example, most CPU designs strove to create a command that would do all of the work automatically: load up the two numbers to be added, add them, and then store the result back out directly. Another version would read the two numbers from memory, but store the result in a register. Another version would read one from memory and the other from a register and store to memory again. And so on.
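To make the contrast concrete, here is a minimal sketch in Python of the same addition expressed both ways: once as a single memory-to-memory command of the kind described above, and once as the load-store sequence discussed later in this article. The instruction names and the tiny register file are invented for illustration, not taken from any real architecture.

    # A toy machine: a few memory cells and a small register file.
    memory = {0x100: 7, 0x104: 5, 0x108: 0}
    regs = [0] * 8

    # Pre-RISC style: one command reads both operands from memory and writes
    # the result back to memory, hiding three memory accesses inside one op.
    def add_mem_mem(dest, src1, src2):
        memory[dest] = memory[src1] + memory[src2]

    # Load-store style: only LOAD and STORE touch memory; ADD works on registers.
    def load(rd, addr):
        regs[rd] = memory[addr]

    def store(rs, addr):
        memory[addr] = regs[rs]

    def add(rd, ra, rb):
        regs[rd] = regs[ra] + regs[rb]

    # The same computation expressed both ways:
    add_mem_mem(0x108, 0x100, 0x104)

    load(1, 0x100)    # r1 <- mem[0x100]
    load(2, 0x104)    # r2 <- mem[0x104]
    add(3, 1, 2)      # r3 <- r1 + r2
    store(3, 0x108)   # mem[0x108] <- r3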

The general goal at the time was to provide every possible addressing mode for every instruction, a principle known as "orthogonality." This led to some complexity on the CPU, but in theory each possible command could be tuned individually, making the design faster than if the programmer used simpler commands.

The ultimate expression of this sort of design can be seen at two ends of the power spectrum: the 6502 at one end, and the VAX at the other. The $25 single-chip 6502 effectively had only a single register, and by careful tuning of the memory interface it was still able to outperform designs running at much higher clock speeds (like the 4 MHz Zilog Z80). The VAX was a minicomputer whose initial implementation required three racks of equipment for a single CPU; it was notable for the amazing variety of memory access styles it supported, and for the fact that every one of them was available for every instruction.

RISC design philosophy

In the late 1970s research at IBM (and similar projects elsewhere) demonstrated that the majority of these "orthogonal" addressing modes were ignored by most programs. This was a side effect of the increasing use of compilers to generate programs, as opposed to writing them in assembly language. The compilers tended to use only a small subset of the available features, largely a side effect of the attempt to keep the compilers themselves small. The market was clearly moving to even wider use of compilers, diluting the usefulness of these orthogonal modes even more.

At about the same time, CPUs started to run faster than the memory they talked to. Even in the late 1970s it was apparent that this disparity was going to continue to grow for at least the next decade, by which time the CPU would be tens to hundreds of times faster than the memory. This meant that the advantages of tuning any one addressing mode would be completely overwhelmed by the slow speed at which it took place.

Meanwhile, new ideas about how to dramatically increase performance were starting to gel. One of these ideas was to include a pipeline which would break down instructions into steps, and work on one step of several different instructions at the same time. A normal processor might read an instruction, decode it, fetch the memory the instruction asked for, perform the operation, and then write the results back out. The key to pipelining is that the processor can start reading the next instruction as soon as it finishes reading the last, meaning that there are now two instructions being worked on (one is being read, the next is being decoded), and after another cycle there will be three. While no single instruction completed any faster, the next one would complete immediately after it. The illusion was of a much faster system.
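The overlap is easier to see cycle by cycle. The following Python sketch prints which instruction occupies each of the five stages named above on each cycle; the stage names follow the text, and the program is just five placeholder instructions.

    # Five pipeline stages, in the order described in the text.
    stages = ["read", "decode", "mem", "execute", "writeback"]
    instructions = ["i1", "i2", "i3", "i4", "i5"]

    # Instruction n enters stage s on cycle n + s, so the whole program
    # drains in len(instructions) + len(stages) - 1 cycles.
    total_cycles = len(instructions) + len(stages) - 1
    for cycle in range(total_cycles):
        occupancy = []
        for s, stage in enumerate(stages):
            n = cycle - s
            slot = instructions[n] if 0 <= n < len(instructions) else "--"
            occupancy.append(f"{stage}:{slot}")
        print(f"cycle {cycle + 1}:  " + "  ".join(occupancy))

    # Each instruction still takes five cycles, but once the pipeline is full
    # one instruction completes every cycle: 9 cycles in total instead of 25.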

Another solution was to use several processing elements inside the processor and run them in parallel. Instead of working on one instruction to add two numbers, these superscalar processors would look at the next instruction in the pipeline and attempt to run it at the same time in an identical unit. This is not easy to do, however, as many instructions depend on the results of other instructions.
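Here is a sketch of that dependence problem, using an invented three-register instruction format: two adjacent instructions can only be dispatched to parallel units if the second does not read a register the first writes. Only this read-after-write case is checked; real designs must track other hazards as well.

    # An instruction is (destination, source1, source2).
    def can_issue_together(first, second):
        dest, _, _ = first
        _, src1, src2 = second
        return dest not in (src1, src2)

    print(can_issue_together(("r3", "r1", "r2"), ("r5", "r6", "r7")))  # True: independent
    print(can_issue_together(("r3", "r1", "r2"), ("r4", "r3", "r7")))  # False: needs r3 first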

Both of these techniques relied on increasing speed by adding complexity to the basic layout of the CPU, as opposed to the instructions running on it. With chip space being a finite quantity, in order to include these features something else would have to be removed to make room. RISC addressed these issues by designing processors with more registers and fewer commands. By cutting out many of the commands from the traditional designs, one would end up with a simpler core logic that would leave room for "other things."

Yet another solution came from practical measurements on real-world programs. Andrew Tanenbaum summed up many of these, demonstrating that most processors were vastly overdesigned. For instance, he showed that 98% of all the constants in a program would fit in 13 bits, yet almost every CPU design dedicated some multiple of 8 bits to storing them, typically 8, 16 or 32. Taking this fact into account suggests that a machine should allow for constants to be stored in unused bits of the instruction itself, decreasing the number of memory accesses. Of course you can only do this if the instructions are small and have leftover room.
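As an illustration of how a small constant can ride along inside the instruction word, here is a sketch using an invented 32-bit layout (a 6-bit op-code, two 5-bit register numbers, a 13-bit signed immediate, and three unused bits); it is not any real architecture's encoding.

    def encode_addi(opcode, rd, rs, imm):
        # The constant must fit in 13 signed bits, which by Tanenbaum's
        # figures covers 98% of the constants in typical programs.
        assert -(1 << 12) <= imm < (1 << 12)
        return (opcode << 26) | (rd << 21) | (rs << 16) | (imm & 0x1FFF)

    def decode_imm(word):
        imm = word & 0x1FFF
        return imm - (1 << 13) if imm & (1 << 12) else imm  # sign-extend

    word = encode_addi(opcode=0x08, rd=3, rs=1, imm=-42)
    print(hex(word), decode_imm(word))  # the constant needs no extra memory access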

It was the small number of modes and commands that resulted in the term Reduced Instruction Set. This terminology is not entirely accurate, as RISC designs often have huge command sets of their own. The real difference is the philosophy of doing everything in registers and loading and saving the data to and from them; this is why the design is more properly referred to as load-store. Over time the older design technique became known as Complex Instruction Set Computer, or CISC, although this was largely to give it a different name for comparison purposes.

The long and short of it is that for any given level of general performance, a RISC chip will typically have many fewer transistors dedicated to the core logic. This allows the designers considerable flexibility; they can, for instance:

  • increase the size of the register set
  • implement measures to increase internal parallelism
  • add huge caches
  • add other functionality, like I/O and timers for microcontrollers
  • add vector (SIMD) processors like AltiVec
  • build the chips on older lines, which would otherwise go unused
  • do nothing; offer the chip for low-power or size-limited applications

Meanwhile, since the basic design is simpler, development costs are lower. In theory this meant that RISC developers could easily afford to develop chips with similar power to the most advanced CISC designs, but do so for a fraction of the development cost. After a few generations, CISC would simply not be able to keep up.

Features which are generally found in RISC designs include:

  • uniform instruction encoding (e.g. the op-code is always in the same bit positions in each instruction, which is always one word long), which allows faster decoding (a decoding sketch follows this list)
  • a homogeneous register set, allowing any register to be used in any context and simplifying compiler design (although there are almost always separate integer and floating point register files)
  • simple addressing modes, with more complex modes replaced by sequences of simple arithmetic instructions
  • few data types supported in hardware (for example, some CISC machines had instructions for dealing with byte strings—such instructions are unlikely to be found on a RISC machine)
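The following sketch shows why the fixed layout matters for decoding, reusing the invented field positions from the earlier encoding example: every field can be sliced out of the same bit positions without first working out the instruction's length or format.

    def decode(word):
        # Fixed-width instructions: the fields are always in the same place.
        return {
            "opcode": (word >> 26) & 0x3F,
            "rd":     (word >> 21) & 0x1F,
            "rs":     (word >> 16) & 0x1F,
            "imm":    word & 0x1FFF,
        }

    print(decode(0x20611FD6))  # all fields extracted in one step, with no
                               # variable-length parsing needed beforehand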

RISC designs are also more likely to feature a Harvard memory model, where the instruction stream and the data stream are conceptually separated; this means that writing to the memory addresses where code is held might not have any effect on the instructions the processor executes (because the CPU has separate instruction and data caches), at least until a special synchronization instruction is issued. On the upside, this allows both caches to be accessed separately, which can often improve performance.
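Here is a sketch of the effect just described, modelling the two caches as Python dictionaries; the function names are illustrative only.

    main_memory = {0x40: "old instruction"}
    icache = {}   # instruction cache
    dcache = {}   # data cache

    def fetch_instruction(addr):
        if addr not in icache:
            icache[addr] = main_memory[addr]
        return icache[addr]

    def store_data(addr, value):
        dcache[addr] = value
        main_memory[addr] = value     # write-through, for simplicity

    def sync():
        icache.clear()                # the "special synchronization instruction"

    print(fetch_instruction(0x40))      # "old instruction"
    store_data(0x40, "new instruction") # the program rewrites its own code
    print(fetch_instruction(0x40))      # still "old instruction": the icache is stale
    sync()
    print(fetch_instruction(0x40))      # "new instruction"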

Many of these early RISC designs also shared a not-so-nice feature, the "branch delay slot." A branch delay slot is an instruction space immediately following a jump or branch. The instruction in this space is executed whether or not the branch is taken (in other words the effect of the branch is delayed). This instruction keeps the ALU of the CPU busy for the extra time normally needed to perform a branch. Nowadays the branch delay slot is considered an unfortunate side effect of a particular strategy for implementing some RISC designs, and modern RISC designs generally do away with it (such as PowerPC, more recent versions of SPARC, and MIPS).
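A sketch of the delay-slot behaviour, using an invented instruction format: the instruction sitting in the slot after the branch executes even though the branch is taken.

    program = [
        ("set",    "r1", 0),    # 0
        ("branch", 4),          # 1: jump to instruction 4 ...
        ("add",    "r1", 100),  # 2: ... but this delay-slot instruction still runs
        ("add",    "r1", 999),  # 3: never reached
        ("halt",),              # 4
    ]

    regs, pc = {"r1": 0}, 0
    while program[pc][0] != "halt":
        op = program[pc]
        if op[0] == "set":
            regs[op[1]] = op[2]
            pc += 1
        elif op[0] == "add":
            regs[op[1]] += op[2]
            pc += 1
        elif op[0] == "branch":
            slot = program[pc + 1]       # execute the delay-slot instruction first
            if slot[0] == "add":
                regs[slot[1]] += slot[2]
            pc = op[1]                   # then redirect the program counter

    print(regs["r1"])   # 100: the slot instruction ran even though the branch was taken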

Early RISC

The first system that would today be known as RISC wasn't called that at the time; it was the CDC 6600 supercomputer, designed in 1964 by Seymour Cray. At the time, memory performance wasn't as pressing a problem as it would become in the 1980s, but I/O in general was consuming much of the CPU's time. Cray's solution was to use a simple but very highly tuned CPU (with 74 op-codes, compared with the 8086's 400) and a series of specialized controllers to handle I/O. This may not sound like the system outlined above, but in fact if one considers the I/O processors to be the equivalent of the load/store commands, the overall design is similar.

The most public RISC designs, however, were the results of university research programs run with funding from the DARPA VLSI Program. The VLSI Program, practically unknown today, led to a huge number of advances in chip design, fabrication, and even computer graphics.

Berkeley's RISC project started in 1980 under the direction of David Patterson, based on gaining performance through the use of pipelining and an aggressive use of registers known as register windows. In a normal CPU there are a small number of registers, and a program can use all of them; in the Berkeley design there are a huge number of registers (128), but a program can use only a small number of them (8) at a time. The idea was to allow a program to make very fast procedure calls by limiting each procedure to 8 registers; a call, and its return, could then be handled simply by moving the pointer that selects which set of 8 registers is currently in use.
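Here is a sketch of that mechanism, using the figures from the text (a 128-register file, 8 registers visible at a time); window overflow handling, and the window overlap that real designs use to pass arguments between caller and callee, are left out.

    PHYS_REGS = 128
    WINDOW = 8

    physical = [0] * PHYS_REGS
    window_base = 0    # first physical register of the current procedure's window

    def reg_read(n):               # n is 0..7, as the running procedure sees it
        return physical[window_base + n]

    def reg_write(n, value):
        physical[window_base + n] = value

    def call():                    # no registers are saved to memory:
        global window_base         # the window simply slides along
        window_base += WINDOW

    def ret():
        global window_base
        window_base -= WINDOW

    reg_write(0, 42)     # caller puts a value in its r0
    call()
    reg_write(0, 7)      # the callee's r0 is a different physical register
    ret()
    print(reg_read(0))   # 42: the caller's registers survived the call untouched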

The RISC project delivered the RISC-I processor in 1982. Consisting of only 44,420 transistors (compared with averages of about 100,000 in newer designs of the era) RISC-I had only 32 instructions and yet completely outperformed any other single-chip design. They followed this up with the 40,760 transistor, 39 instruction RISC-II in 1983, which ran over three times as fast as RISC-I.

At about the same time, John Hennessy started a similar project called MIPS at Stanford University in 1981. MIPS focused almost entirely on the pipeline, making sure it could be kept as "full" as possible. Although pipelining was already in use in other designs, several features of the MIPS chip made its pipeline far faster. The most important, and perhaps annoying, of these features was the demand that all instructions be able to complete in one cycle. This demand allowed the pipeline to be run at much higher speeds (there was no need for induced delays) and is responsible for much of the processor's speed. However, it also had the negative side effect of eliminating many potentially useful instructions, like multiply or divide.

Interestingly, the earliest attempt to make a chip-based RISC CPU was a project at IBM which started in 1975, predating both of the projects above. Named after the building where the project ran, the work led to the IBM 801 CPU family, which was used widely inside IBM hardware. The 801 was eventually produced in single-chip form as the ROMP in 1981, which stood for Research (Office Products Division) Mini Processor. As the name implies, this CPU was designed for "mini" tasks, and when IBM released the IBM RT-PC based on the design in 1986, the performance was not acceptable. Nevertheless the 801 inspired several research projects, including new ones at IBM that would eventually lead to their POWER system.

In the early years, the RISC efforts were well known, but largely confined to the university labs that had created them. The Berkeley effort became so well known that it eventually became the name for the entire concept. Many in the computer industry argued that the performance benefits were unlikely to translate into real-world settings, and that this was why no one was using the designs. But starting in 1986, all of the RISC research projects started delivering products. In fact, almost all modern RISC processors are direct copies of the RISC-II design.

Modern RISC

Berkeley's research was not directly commercialized, but the RISC-II design was used by Sun Microsystems to develop the SPARC, by Pyramid to develop their line of mid-range multi-processor machines, and by almost every other company a few years later. It was Sun's use of a RISC chip in their new machines that demonstrated that RISC's benefits were real, and their machines quickly outpaced the competition and essentially took over the entire workstation market.

John Hennessy left Stanford to commercialize the MIPS design, starting the company known as MIPS Technologies Inc. Their first design was a second-generation MIPS chip known as the R2000. MIPS designs went on to become one of the most used RISC chips when they were included in the Nintendo game consoles.

IBM learned from the RT-PC failure and would go on to design the RS/6000 based on their new POWER architecture. They then moved their existing S/370 mainframes to POWER chips, and found much to their surprise that even the very complex instruction set (dating to the S/360 from 1964) ran considerably faster. The result was the new System/390 series which continues to be sold today as the zSeries. POWER would also find itself moving "down" in scale to produce the PowerPC design, which eliminated many of the "IBM only" instructions and created a single-chip implementation. Today the PowerPC is used in all Apple Macintosh machines, as well as being one of the most commonly used CPUs for automotive applications (some cars have over 10 of them inside).

Almost all other vendors quickly joined. In the UK, similar research efforts resulted in the INMOS Transputer and the ARM line, which is a huge success today. Companies with existing CISC designs also quickly joined the revolution. Intel released the i860 and i960 by the late 1980s; AMD released their 29000, which would go on to become the most popular RISC design in the early 1990s.

Today RISC CPUs (and microcontrollers) represent the vast majority of all CPUs in use. The RISC design technique offers power in even small sizes, and thus has come to completely dominate the market for low-power "embedded" CPUs. Embedded CPUs are by far the most common market for processors: consider that a family with one or two PCs may own several dozen devices with embedded processors. RISC has also completely taken over the market for larger workstations. After the release of the Sun SPARCstation the other vendors rushed to compete with RISC based solutions of their own. Even the mainframe world is now completely RISC based.

This is surprising in view of the domination of the Intel x86 in the desktop PC market and the commodity server market. Although RISC was indeed able to scale up in speed quite quickly and cheaply, Intel simply applied massive amounts of effort and cash. If it costs ten times as much to double the performance of their CPUs, no matter: they have ten times the cash. In fact they have more, and Intel's CPUs continue to make great (and to many, surprising) strides in performance. In any complexity comparison the latest Intel chips fare poorly, often using twice as many transistors for similar speed. However, this makes little difference to the end consumer, for whom the only considerations are outright speed and compatibility with older machines.

RISC designs have led to a number of successful platforms and architectures, some of the larger ones being:

  • ARM
  • MIPS
  • POWER and PowerPC
  • SPARC

A meaningless term?

As the original RISC architectures evolved, the ones that have survived (notably the PowerPC) have acquired a variety of additional instructions, some of which are for quite elaborate operations useful for graphics. Implementations of the remaining CISC architecture in wide use, the x86, were optimized for a subset of the original operations and also gained additional instructions to support graphics. With this convergence in design it is widely questioned whether such distinctions are particularly meaningful in the context of the computer market of 2003.

For this reason a number of people have started to use the term load-store to describe RISC chips, because this is the key element to all RISC designs. Instead of the CPU itself handling all sorts of addressing modes, a single separate unit is dedicated solely to handling all load and store operations.
