CS10051 Review - Chapter 5 - Paul Durand

Sections 001/002 and 005/006 - Spring 2007

Concepts

Overview

This chapter studies the design and organization of computers from a higher level of abstraction, that of the Von Neumann model. The stored program concept, sequential execution of instructions, and the four major subsystems of the Von Neumann architecture – memory, input/output, the arithmetic/logic unit, and the control unit – are examined in detail. Using a hypothetical instruction set, the chapter traces through the fetch, decode, and execute phases of the execution of a program. It ends with a discussion of the future of computer systems and non-Von Neumann models, including parallel processing computers.

Chapter Objectives

 

Introduction

In order to understand how a computer “really works,” it is helpful to describe it as a collection of functional units, each of which performs some required task, such as input/output or information storage. Computer organization is the area of computer science that focuses on this level of abstraction. While all the units described in this chapter are implemented in terms of circuits and gates, they are easier to understand when abstracted a little. Recall that the construction of a hierarchy of abstractions is a key organizing principle of this text.

The Components of a Computer System

In Chapter 1 the development of the modern computer system was discussed, and the Von Neumann architecture was mentioned. This is an abstract design for a computer system, which has become the “standard” architecture for almost all modern computers. A Von Neumann machine consists of four main subsystems: a memory, some way to do input/output, an arithmetic/logic unit, and a control unit. If you look back at Chapter 1, these are essentially the same components envisioned by Charles Babbage. This sort of computer executes one instruction at a time, in sequence. Von Neumann introduced the stored program concept, in which the program itself is stored in the computer’s memory, encoded in the same way as data. This turned out to be a very important idea in the development of modern computers.

Memory and Cache

The memory subsystem manages the information – program and data – that the computer operates on. A computer memory is typically random access memory (RAM): data is stored in cells, usually 8 bits long, which are numbered sequentially. Values are stored and retrieved using the cell’s memory address, and all cells take equal time to store or fetch. The maximum address space of a memory is bounded by the size of an address: if an address is N bits long, then there may be at most 2^N cells. Each memory cell has two values associated with it: its address and its contents.
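As a quick worked example of that bound (the address widths below are just common illustrative sizes):

```python
# Maximum number of addressable cells for an N-bit address is 2**N.
for n in (8, 16, 32):
    print(f"{n}-bit address -> {2**n:,} cells")
# 8-bit address -> 256 cells
# 16-bit address -> 65,536 cells
# 32-bit address -> 4,294,967,296 cells
```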

The memory system must support two operations: “fetching” the contents of a cell when given its address, and “storing” a new value into a cell when given its address. Two special registers are set aside for these memory operations. The MAR (memory address register) holds the address of the memory cell being accessed or changed; the MDR (memory data register) is used either to receive the result of a fetch or to hold the new value for a store. Selecting a specific memory cell, given the address in the MAR, requires a decoder circuit (or, more often, two decoders operating on a two-dimensional grid of cells) to signal exactly one memory cell. A fetch/store controller circuit decides whether a fetch or a store is required at each time step.
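A minimal sketch of that protocol in Python; the memory size and the function names here are illustrative, not part of the text’s model:

```python
# Toy model of the memory subsystem: the MAR selects a cell, the MDR carries data.
memory = [0] * 256          # 256 cells, one per 8-bit address
MAR = 0                     # memory address register
MDR = 0                     # memory data register

def fetch(address):
    """Copy the contents of the addressed cell into the MDR."""
    global MAR, MDR
    MAR = address           # 1. place the address in the MAR
    MDR = memory[MAR]       # 2. the decoder selects the cell; contents flow to the MDR
    return MDR

def store(address, value):
    """Write a new value into the addressed cell."""
    global MAR, MDR
    MAR = address           # 1. place the address in the MAR
    MDR = value             # 2. place the new value in the MDR
    memory[MAR] = MDR       # 3. the fetch/store controller signals a store

store(42, 7)
print(fetch(42))            # 7
```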

A cache memory is a set of relatively small, very fast memory cells used to hold data values that are currently in use: most programs tend to use and reuse the same memory cells and their neighbors (a property called locality of reference). When accessing data, the computer first looks in the cache. If the data is not there, the computer then looks in normal RAM. This allows efficient memory access without the prohibitive cost of building all of RAM from fast, but expensive, memory.
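That lookup order can be sketched as follows; the dictionary-based cache and the keep-everything policy are simplifications for illustration (a real cache is fixed-size and must evict old entries):

```python
# Toy cache: check the small fast store first, fall back to RAM on a miss.
ram = {addr: addr * 2 for addr in range(256)}   # stand-in for main memory
cache = {}                                      # small, fast memory

def read(address):
    if address in cache:        # cache hit: fast path
        return cache[address]
    value = ram[address]        # cache miss: slow RAM access
    cache[address] = value      # keep a copy, expecting reuse
    return value
```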

Input/Output and Mass Storage

The input/output subsystem allows the computer to communicate with the outside world, including archival storage for information the system needs. Input/output devices vary widely: keyboard, mouse, monitor, printer, hard drive, tape storage, etc. Each has a different mode of communication. Direct access storage devices, like hard drives and DVDs, use an organization similar to memory systems, but with much slower access times. On such devices, data is stored in concentric rings called tracks. To look up a particular data value, the device must move a read/write head to the correct track, and then wait for the disk to spin around to the exact physical location of the data. Sequential access storage devices, like tape drives, store data in sequence on a magnetic tape. Access times vary depending on where on the tape the data is stored. Such devices are typically used for backup, to inexpensively archive data in case of a system failure.
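As a rough worked example of why even direct access storage is slow (all timing figures below are invented for illustration):

```python
# Rough model of a single disk access (figures invented for illustration).
seek_ms       = 8.0      # move the read/write head to the correct track
rotation_ms   = 4.0      # average wait for the data to spin under the head
ram_access_ms = 0.00005  # ~50 nanoseconds for a RAM access, for comparison

disk_access_ms = seek_ms + rotation_ms
print(disk_access_ms / ram_access_ms)   # the disk is ~240,000x slower here
```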

Most I/O devices are extremely slow compared to even RAM access times, let alone CPU speeds. An I/O controller is a special-purpose computer that acts as a liaison between the CPU and an I/O device. It receives read or write requests, stores the pertinent address information, and allows the CPU to go on to some other task. When the I/O request is complete, it “interrupts” the CPU to tell it the data is ready.
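One way to picture that hand-off is the sketch below, where a background thread stands in for the controller and a callback stands in for the interrupt; this is an analogy, not a hardware description:

```python
import threading
import time

def io_controller(request, on_complete):
    """Stand-in for an I/O controller: do the slow transfer, then 'interrupt'."""
    time.sleep(0.1)                       # the slow device operation
    on_complete(f"data for {request}")    # the 'interrupt' back to the CPU

def interrupt_handler(data):
    print("CPU interrupted:", data)

# The CPU hands off the request and immediately goes on to other work.
threading.Thread(target=io_controller,
                 args=("block 17", interrupt_handler)).start()
print("CPU continues with other tasks...")
```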

The Arithmetic/Logic Unit

The arithmetic/logic unit (ALU) is the subsystem that performs actual computations: basic arithmetic, comparison, and logic operations. Modern computers often combine the ALU with the control unit into a single processor, but the two remain conceptually distinct. Many modern computers have more than one ALU, or a dedicated unit for real-number (floating-point) operations. The ALU contains some number of registers, very fast memory locations that are referred to by special names rather than by addresses. Typically a register may pass its value as an input to the ALU’s computing circuitry, or may have its value set as the output of that circuitry. Modern computers have up to several hundred registers, with perhaps 16 to 64 attached to the ALU.

The circuitry of the ALU performs a number of arithmetic, comparison, and logical operations (for example: addition, subtraction, less-than, AND). When the ALU performs a computation, all of its sub-circuits operate on the input data, which is copied from specified registers; a multiplexor then selects the one answer that was actually requested and discards the rest.
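A sketch of that design in Python, where a dictionary lookup plays the role of the multiplexor (the operation names are illustrative):

```python
# Toy ALU: every sub-circuit computes its result; a multiplexor picks one.
def alu(a, b, select):
    results = {
        "ADD":      a + b,
        "SUBTRACT": a - b,
        "LESS":     int(a < b),   # comparison results are 0 or 1
        "AND":      a & b,        # bitwise logical AND
    }
    return results[select]        # the multiplexor: one output out of many

print(alu(6, 3, "ADD"))           # 9
print(alu(6, 3, "LESS"))          # 0
```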

The Control Unit

The control unit manages the execution of the program stored in the computer’s memory. It repeatedly fetches instructions from memory, decodes them to determine what should be done, and sends the appropriate signals to the other parts of the computer. Instructions are stored in machine code: each operation the hardware can perform is assigned an unsigned binary number called its op code. An instruction is stored in memory as its op code followed by the correct number of memory addresses for that operation, each stored as a fixed-size binary number. A typical instruction set contains operations for data transfer, arithmetic, comparison, and branching. Data transfer operations move values around. Arithmetic operations perform computations, storing the result in a specified location. Comparison operations set special-purpose bits to reflect the outcome of the comparison, for later reference. Branch operations alter the normal sequence of execution: normally, the control unit executes each instruction in memory in sequence, but a branch operation tells it to jump to a specified location and continue execution there.
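To make the encoding concrete, here is one hypothetical format (a 4-bit op code followed by two 12-bit address fields; the format and the op-code numbers are invented for illustration, not the text’s instruction set):

```python
# Hypothetical machine-code format: 4-bit op code + two 12-bit address fields.
OPCODES = {"LOAD": 0, "STORE": 1, "ADD": 2, "COMPARE": 3, "JUMP": 4}

def encode(op, addr1, addr2=0):
    """Pack an instruction into a single 28-bit integer."""
    return (OPCODES[op] << 24) | (addr1 << 12) | addr2

def decode(instruction):
    """Split a 28-bit instruction back into its op code and address fields."""
    op    = instruction >> 24
    addr1 = (instruction >> 12) & 0xFFF
    addr2 = instruction & 0xFFF
    return op, addr1, addr2

word = encode("ADD", 100, 101)
print(decode(word))              # (2, 100, 101)
```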

The control unit needs two special registers, the program counter (PC) and the instruction register (IR). The program counter contains the address of the next instruction to be executed. The IR holds the code for the current instruction while it is being decoded. An instruction decoder circuit converts the op code into a signal to the correct subsystem of the computer.
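The fetch phase can then be written as a short series of register transfers; this self-contained sketch is, again, only illustrative:

```python
# Fetch phase, written as register transfers (toy model).
memory = [0] * 256
PC  = 0                    # program counter: address of the next instruction
IR  = 0                    # instruction register: holds the current instruction
MAR = 0
MDR = 0

def fetch_phase():
    global PC, IR, MAR, MDR
    MAR = PC               # 1. send the PC's address to memory
    MDR = memory[MAR]      # 2. fetch the instruction into the MDR
    IR  = MDR              # 3. move it into the IR for decoding
    PC  = PC + 1           # 4. point the PC at the next instruction
```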

Putting All the Pieces Together – the Von Neumann Architecture

The previous sections described all the pieces of a Von Neumann computer. These subsystems are connected by a bus, a set of wires through which they communicate with each other. The control unit is in charge of the overall process. A detailed example describing each phase of the control unit’s operation is given in the chapter. Note how computer scientists have already abstracted away from the gate-level description of the computer to work at the level of machine language. Further abstraction will improve the computer’s ease of use and therefore increase the complexity of tasks that can be undertaken.
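A compact sketch of the whole fetch/decode/execute cycle, using a tiny hypothetical machine (the instruction format and op codes are invented for illustration):

```python
# Minimal Von Neumann cycle: fetch, decode, execute, repeat.
# Instructions are (op, address) pairs stored in the same memory as the data.
memory = [("LOAD", 10), ("ADD", 11), ("STORE", 12), ("HALT", 0),
          0, 0, 0, 0, 0, 0,          # unused cells 4-9
          5, 7, 0]                   # data at addresses 10, 11, 12
PC, ACC = 0, 0                       # program counter and an accumulator register

while True:
    IR = memory[PC]                  # fetch the next instruction
    PC += 1
    op, addr = IR                    # decode it
    if op == "LOAD":                 # execute it
        ACC = memory[addr]
    elif op == "ADD":
        ACC = ACC + memory[addr]
    elif op == "STORE":
        memory[addr] = ACC
    elif op == "HALT":
        break

print(memory[12])                    # 12  (5 + 7)
```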

The Future: Non-Von Neumann Architectures

The Von Neumann architecture has dominated computer design to the present day. However, as more complex and computationally intensive applications are undertaken, the computer industry has begun to reach the physical limitations of the current architecture and its underlying circuits. An important area of research in computer science is the exploration of new architectures that may be faster. Parallel processing is one important area of research into non-Von Neumann computers. Different models have been explored, including SIMD and MIMD. SIMD (single instruction, multiple data) computers apply the same instruction to multiple pieces of data simultaneously. MIMD (multiple instruction, multiple data) computers have different processors running different programs on different data at the same time. Modern supercomputers are typically parallel machines, and even most modern PCs may have several processors within them to improve processing speed.
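To get a rough feel for the SIMD idea in software (the process pool below merely stands in for an array of processors; this is an analogy, not a hardware model):

```python
from multiprocessing import Pool

def step(x):
    return x * x                      # the one instruction applied to every element

if __name__ == "__main__":
    data = [1, 2, 3, 4, 5, 6, 7, 8]
    with Pool(4) as pool:             # four workers stand in for four processors
        print(pool.map(step, data))   # same operation, multiple data, at once
# [1, 4, 9, 16, 25, 36, 49, 64]
```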

Key Terms

 

Things You Should Know How To Do