
Computer architecture is the theory behind the design of a computer. Just as a building architect sets the principles and goals of a building project as the basis for the draftsman's plans, a computer architect sets out the computer architecture as the basis for the actual design specifications.

The term has several usages; it can refer to:

  • The design of a computer's CPU architecture, instruction set, addressing modes, and techniques such as SIMD and MIMD parallelism (a short SIMD sketch follows this list).
  • More general wider-scale hardware architectures, such as cluster computing and Non-Uniform Memory Access (NUMA) architectures.
  • A less formal usage: a description of the requirements (especially speed and interconnection requirements) or the design implementation for the various parts of a computer, such as memory, the motherboard, electronic peripherals, or, most commonly, the CPU.
  • Architecture is often defined as the set of machine attributes a programmer must understand in order to program the specific computer successfully (i.e., to reason about what a program will do when executed). For example, the instructions and the widths of the operands they manipulate are part of the architecture; the frequency at which the system operates, by contrast, is not. This definition reveals the two main concerns of computer architects: (1) design hardware that behaves the way programmers expect it to; (2) use existing implementation technologies (e.g., semiconductors) to build the best computer possible ("best" can be defined in many ways, as described under Design goals). The latter concern is often referred to as microarchitecture.
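
To make the SIMD idea from the first bullet concrete, here is a minimal sketch using the GCC/Clang vector extension (the vector_size attribute; the type name v4i and the sample values are choices made for this illustration, not a standard API). A single vector add operates on four 32-bit lanes at once:

    #include <stdio.h>

    /* "v4i" is a name chosen for this sketch: four 32-bit integer lanes. */
    typedef int v4i __attribute__((vector_size(16)));

    int main(void) {
        v4i a = {1, 2, 3, 4};
        v4i b = {10, 20, 30, 40};
        v4i c = a + b;   /* one vector add updates all four lanes at once */
        for (int i = 0; i < 4; i++)
            printf("%d ", c[i]);
        printf("\n");
        return 0;
    }

On a target with SIMD hardware (e.g., SSE or NEON) the compiler can lower the vector add to a single instruction; elsewhere it falls back to scalar code.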


Design goals

The most common goals in computer architecture revolve around the tradeoffs between cost and performance (i.e. speed), although other considerations, such as size, weight, reliability, feature set, expandability and power consumption, may be factors as well.

Cost

Generally cost is held constant, determined by either system or commercial requirements, and speed and storage capacity are adjusted to meet the cost target.

Performance

Computer retailers describe the performance of their machines in terms of clock speed (usually in MHz or GHz), the number of cycles per second of the CPU's main clock. This metric is somewhat misleading, however: a machine with a higher clock rate does not necessarily deliver higher performance. Modern CPUs can execute multiple instructions per clock cycle, which dramatically speeds up a program. Other factors also influence speed, such as the mix of functional units, bus speeds, available memory, and the type and order of instructions in the programs being run.
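
As a worked illustration (all numbers here are assumed): execution time is roughly instructions × cycles per instruction ÷ clock rate, so a 2 GHz machine averaging one cycle per instruction finishes before a 3 GHz machine averaging two.

    #include <stdio.h>

    int main(void) {
        double instructions = 1.0e9;              /* assumed program size             */
        double t_a = instructions * 2.0 / 3.0e9;  /* machine A: 3 GHz, 2 cycles/instr */
        double t_b = instructions * 1.0 / 2.0e9;  /* machine B: 2 GHz, 1 cycle/instr  */
        printf("A: %.2f s  B: %.2f s\n", t_a, t_b);  /* B finishes first */
        return 0;
    }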

But there are also different types of speed. Interrupt latency is the guaranteed maximum response time of the system to an electronic event (e.g., when a disk drive finishes moving some data). This number is affected by a very wide range of design choices; for example, adding a cache usually makes worst-case latency worse (slower) while making other things faster. Computers that control machinery usually need low interrupt latencies, because the machinery cannot, or should not, wait: computer-controlled anti-lock brakes, for instance, must not wait for the computer to finish what it is doing before braking. Low latencies can often be had very inexpensively.

Benchmarking tries to take all these factors into account by measuring the time a computer takes to run through a series of test programs. Although benchmarking shows strengths, it may not settle the choice between computers, because the measured machines often split on different measures: one system might handle scientific applications quickly, while another might play popular video games more smoothly. Furthermore, designers have been known to add special features to their products, in hardware or software, that let a specific benchmark execute quickly but offer no similar advantage on other, more general computational tasks. Naïve users are apt to be unaware of such deceptive tricks.
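
A minimal sketch of the timing idea behind benchmarks, assuming a POSIX system (the arithmetic loop is an arbitrary stand-in workload, not a real benchmark suite):

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        struct timespec t0, t1;
        volatile double acc = 0.0;             /* volatile keeps the loop from being optimized away */
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long i = 1; i <= 50000000L; i++)  /* assumed stand-in workload */
            acc += 1.0 / (double)i;
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double s = (double)(t1.tv_sec - t0.tv_sec)
                 + (double)(t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("workload: %.3f s (result %f)\n", s, acc);
        return 0;
    }

Real benchmark suites run many different programs, repeat each run, and report aggregate scores rather than a single timing.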

The general scheme of optimization is to find the costs of the different parts of the computer. In a balanced computer system, the sustained data rate is the same for all parts of the system, and cost is allocated proportionally to ensure this. The exact form of the computer system will depend on the constraints and goals it was optimized for.
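
A toy sketch of the bottleneck reasoning behind balance, with assumed, illustrative rates for three parts: the whole system can sustain only the minimum rate over its parts, which is why a balanced design tries to equalize them.

    #include <stdio.h>

    int main(void) {
        /* Assumed, illustrative sustained rates for three parts of a system. */
        const char *name[] = { "cpu-memory bus", "main memory", "storage" };
        double rate_mb_s[] = { 12800.0, 6400.0, 550.0 };
        int n = sizeof rate_mb_s / sizeof rate_mb_s[0], slow = 0;
        for (int i = 1; i < n; i++)
            if (rate_mb_s[i] < rate_mb_s[slow])
                slow = i;
        /* The pipeline sustains only the slowest stage's rate, so money
         * spent speeding up the other stages past it is wasted. */
        printf("system limited to %.0f MB/s by the %s\n",
               rate_mb_s[slow], name[slow]);
        return 0;
    }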

Virtual memory

Another common design consideration involves virtual memory.

Historically, random-access memory has been thousands of times more expensive per byte than rotating mechanical storage (hard drives, in a modern computer).

For businesses and many general computing tasks, it is a worthwhile compromise to ensure the computer never runs out of memory, an event that would halt the program and greatly inconvenience the user.

Instead of halting the program, many computer systems save less-frequently-used blocks of memory to the rotating mechanical storage. In essence, the mechanical storage becomes an extension of main memory. However, mechanical storage is thousands of times slower than electronic memory.

Thus, almost all general-purpose computing systems use "virtual memory" and also have unpredictable interrupt latencies.
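
A toy sketch of the mechanism, assuming a least-recently-used (LRU) replacement policy and a made-up page-reference string: when a program touches a page that is not resident, the coldest resident page is evicted, standing in for the operating system writing it out to mechanical storage.

    #include <stdio.h>

    #define FRAMES 3   /* assumed tiny number of physical frames, for clarity */

    int main(void) {
        int  frame[FRAMES] = { -1, -1, -1 };  /* page held by each frame (-1 = empty) */
        long stamp[FRAMES] = {  0,  0,  0 };  /* time each frame was last used        */
        int  refs[] = { 1, 2, 3, 1, 4, 2 };   /* assumed page-reference string        */
        int  n = sizeof refs / sizeof refs[0];
        for (long t = 0; t < n; t++) {
            int page = refs[t], hit = -1, victim = 0;
            for (int i = 0; i < FRAMES; i++) {
                if (frame[i] == page) hit = i;         /* page already resident?  */
                if (stamp[i] < stamp[victim]) victim = i;  /* least recently used */
            }
            if (hit >= 0) { stamp[hit] = t + 1; continue; }
            if (frame[victim] != -1)   /* a real eviction, not an empty frame */
                printf("page %d evicts page %d (cold page written to disk)\n",
                       page, frame[victim]);
            frame[victim] = page;
            stamp[victim] = t + 1;
        }
        return 0;
    }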

A few operating systems contain a real-time scheduler. Such a scheduler keeps critical pieces of code and data in solid-state RAM and guarantees a minimum amount of CPU time and a maximum interrupt latency.
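
A minimal sketch of how a program might request such treatment on a POSIX/Linux system, using mlockall to pin its pages in RAM and sched_setscheduler to ask for a fixed real-time priority (the priority value 50 is an assumption; these calls typically require elevated privileges):

    #include <stdio.h>
    #include <sched.h>
    #include <sys/mman.h>

    int main(void) {
        /* Pin all current and future pages in RAM so page faults cannot occur. */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
            perror("mlockall");
        /* Request a fixed priority under the kernel's FIFO real-time policy. */
        struct sched_param sp = { .sched_priority = 50 };  /* assumed priority level */
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
            perror("sched_setscheduler");
        /* ... time-critical control loop would run here ... */
        return 0;
    }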

Computer architecture on a future horizon

A very notable approach still in the research phase, called configurable computing, could potentially break the structural limits of conventional processing architectures. Here the compiler translates program code into intermediate code for runtime-reconfigurable field-programmable gate arrays: for the lifetime of a program object, the configurable logic is wired into exactly the calculating structure that object requires. Since many such objects can potentially operate in parallel on streaming data, this ultimately constitutes an advanced parallel processing architecture. Configurable computing can be categorized under computing in memory, which is inspired by the function of the biological brain, in which processor and memory ultimately cannot be distinguished from each other.

See also

  • CPU design
  • Orthogonal instruction set
  • List of computer architecture topics

