The Architecture of Computer Hardware, Systems Software and Networking
In computer engineering, computer architecture is a description of the structure of a computer system made from component parts. It can sometimes be a high-level description that ignores details of the implementation. At a more detailed level, the description may include the instruction set architecture design, microarchitecture design, logic design, and implementation.
The first documented computer architecture was in the correspondence between Charles Babbage and Ada Lovelace describing the Analytical Engine. While building the Z1 computer in 1936, Konrad Zuse described in two patent applications for his future projects that machine instructions could be stored in the same storage used for data, i.e., the stored-program concept. Two other early and important examples are John von Neumann's 1945 paper, First Draft of a Report on the EDVAC, which described an organization of logical elements, and Alan Turing's more detailed Proposed Electronic Calculator for the Automatic Computing Engine, also from 1945.
The term "architecture" in computer literature can be traced to the work of Lyle R. Johnson and Frederick P. Brooks, Jr., members of the Machine Organization department in IBM's main research center in 1959. Johnson had the opportunity to write a proprietary research communication about the Stretch, an IBM-developed supercomputer for Los Alamos National Laboratory (at the time known as Los Alamos Scientific Laboratory). To describe the level of detail for discussing the luxuriously embellished computer, he noted that his description of formats, instruction types, hardware parameters, and speed enhancements were at the level of "system architecture", a term that seemed more useful than "machine organization".
Brooks went on to help develop the IBM System/360 line of computers (whose descendants continue today as the IBM Z line), in which "architecture" became a noun defining "what the user needs to know". Later, computer users came to use the term in many less explicit ways.
There are also other, more specialized descriptions used in computer architecture, chiefly at large design houses such as Intel. These include macroarchitecture (architectural layers more abstract than the microarchitecture), assembly-level instruction set architecture (an abstract assembly language common to a group of machines), and pin architecture (the hardware functions a microprocessor must provide to the surrounding platform).
Computer architecture is concerned with balancing the performance, efficiency, cost, and reliability of a computer system. The case of instruction set architecture can be used to illustrate the balance of these competing factors. More complex instruction sets enable programmers to write more space-efficient programs, since a single instruction can encode some higher-level abstraction (such as the x86 LOOP instruction). However, longer and more complex instructions take longer for the processor to decode and can be more costly to implement effectively. The increased complexity of a large instruction set also creates more room for unreliability when instructions interact in unexpected ways.
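As a concrete sketch of this density-versus-decode-cost trade-off (the lowerings in the comments are illustrative, not the output of any particular compiler), the counted loop below can be encoded either with x86's single, complex LOOP instruction or with a pair of simpler instructions; modern compilers generally prefer the simpler pair because it decodes and executes faster:

```c
#include <stdio.h>

int main(void) {
    int sum = 0;
    /* A compiler may lower this counted loop in at least two ways:
     *
     *   Compact, complex encoding (one instruction does more work):
     *       loop  top        ; decrement ECX; branch to top if ECX != 0
     *
     *   Longer, simpler encoding (cheaper to decode and schedule):
     *       dec   ecx        ; decrement the counter
     *       jnz   top        ; branch if the counter is not zero
     *
     * The first is denser in memory; the second is typically faster on
     * modern x86 cores, which is why compilers usually avoid LOOP.
     */
    for (int i = 10; i > 0; i--)
        sum += i;
    printf("%d\n", sum); /* prints 55 */
    return 0;
}
```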
An instruction set architecture (ISA) is the interface between the computer's software and hardware, and can also be viewed as the programmer's view of the machine. Computers do not understand high-level programming languages such as Java or C++. A processor only understands instructions encoded in some numerical fashion, usually as binary numbers. Software tools, such as compilers, translate those high-level languages into instructions that the processor can understand.
ISAs vary in quality and completeness. A good ISA compromises between programmer convenience (how easy the code is to understand), size of the code (how much code is required to do a specific action), cost of the computer to interpret the instructions (more complexity means more hardware is needed to decode and execute the instructions), and speed of the computer (more complex decoding hardware means longer decode time). Memory organization defines how instructions interact with memory, and how different parts of memory interact with one another.
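To make the idea of numerically encoded instructions concrete, here is a minimal sketch of a fetch-decode-execute loop for a hypothetical accumulator-style ISA; the 16-bit instruction format and the opcodes are invented for illustration, not taken from any real machine:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical 16-bit instruction word: 4-bit opcode, 12-bit operand. */
enum { OP_HALT = 0x0, OP_LOADI = 0x1, OP_ADDI = 0x2, OP_PRINT = 0x3 };

int main(void) {
    /* A tiny program, already encoded as numbers:
       load 40 into the accumulator, add 2, print, halt. */
    uint16_t program[] = { 0x1028, 0x2002, 0x3000, 0x0000 };
    uint16_t pc = 0;   /* program counter */
    int32_t acc = 0;   /* accumulator register */

    for (;;) {
        uint16_t insn    = program[pc++];   /* fetch */
        uint16_t opcode  = insn >> 12;      /* decode: top 4 bits */
        uint16_t operand = insn & 0x0FFF;   /* decode: low 12 bits */
        switch (opcode) {                   /* execute */
        case OP_LOADI: acc = operand;       break;
        case OP_ADDI:  acc += operand;      break;
        case OP_PRINT: printf("%d\n", acc); break;  /* prints 42 */
        case OP_HALT:  return 0;
        }
    }
}
```

A wider opcode field would leave fewer bits for operands, and variable-length or more elaborate formats would need correspondingly more decode hardware; this is exactly the compromise described above.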
Computer organization also helps plan the selection of a processor for a particular project. Multimedia projects may need very rapid data access, while virtual machines may need fast interrupts. Sometimes certain tasks need additional components as well. For example, a computer capable of running a virtual machine needs virtual memory hardware so that the memory of different virtual computers can be kept separated. Computer organization and features also affect power consumption and processor cost.
Once an instruction set and microarchitecture have been designed, a practical machine must be developed. This design process is called the implementation. Implementation is usually not considered architectural design, but rather hardware design engineering. Implementation can be further broken down into several steps: logic implementation, which designs the circuits at the logic-gate level; circuit implementation, which designs the circuits at the transistor level; physical implementation, which lays the circuits out on the chip; and design validation, which tests the machine as a whole.
The exact form of a computer system depends on the constraints and goals. Computer architectures usually trade off standards, power versus performance, cost, memory capacity, latency (how long it takes information to travel from its source to its destination), and throughput. Sometimes other considerations, such as features, size, weight, reliability, and expandability, are also factors.
Modern computer performance is often described in instructions per cycle (IPC), which measures the efficiency of the architecture at any clock frequency; a higher IPC means the computer does more work per clock cycle. Older computers had IPC counts as low as 0.1, while modern processors easily reach nearly 1. Superscalar processors may reach three to five IPC by executing several instructions per clock cycle.
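As a back-of-the-envelope illustration (the workload size, IPC, and clock rate below are made-up figures, not measurements), execution time follows directly from instruction count, IPC, and clock frequency:

```c
#include <stdio.h>

int main(void) {
    /* Illustrative numbers only. */
    double instructions = 1e9;   /* 1-billion-instruction workload */
    double ipc          = 2.0;   /* instructions completed per cycle */
    double clock_hz     = 3.0e9; /* 3 GHz clock */

    /* time = instructions / (IPC * clock frequency) */
    double seconds = instructions / (ipc * clock_hz);
    printf("%.3f s\n", seconds); /* ~0.167 s */
    return 0;
}
```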
Counting machine-language instructions would be misleading because they can do varying amounts of work in different ISAs. The "instruction" in the standard measurements is not a count of the ISA's machine-language instructions, but a unit of measurement, usually based on the speed of the VAX computer architecture.
Benchmarking takes all these factors into account by measuring the time a computer takes to run through a series of test programs. Although benchmarking shows strengths, it should not be the sole basis for choosing a computer. Often the machines being measured split on different measures. For example, one system might handle scientific applications quickly, while another might render video games more smoothly. Furthermore, designers may target and add special features to their products, through hardware or software, that permit a specific benchmark to execute quickly but do not offer similar advantages to general tasks.
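For a sense of what the core of such a measurement looks like, here is a minimal wall-clock timing harness, assuming a POSIX system; the workload function is a stand-in for a real benchmark kernel:

```c
#include <stdio.h>
#include <time.h>

/* Toy workload standing in for a benchmark kernel; the volatile
   sink keeps the compiler from optimizing the loop away. */
static volatile long sink;
static void workload(void) {
    long s = 0;
    for (long i = 0; i < 100000000L; i++)
        s += i;
    sink = s;
}

int main(void) {
    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);
    workload();
    clock_gettime(CLOCK_MONOTONIC, &end);
    double elapsed = (end.tv_sec - start.tv_sec)
                   + (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("elapsed: %.3f s\n", elapsed);
    return 0;
}
```

Real benchmark suites repeat such runs many times and report statistics, precisely because a single timing is sensitive to caches, frequency scaling, and background activity.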
Power efficiency is another important measurement in modern computers. Higher power efficiency can often be traded for lower speed or higher cost. The typical measurement when referring to power consumption in computer architecture is MIPS/W (millions of instructions per second per watt). For example, a processor that retires two billion instructions per second while drawing 10 W delivers 2,000 MIPS / 10 W = 200 MIPS/W.
The Architecture of Computer Hardware, Systems Software and Networking is designed to help students majoring in information technology (IT) and information systems (IS) understand the structure and operation of computers and computer-based devices. Requiring only basic computer skills, this accessible textbook introduces the basic principles of system architecture and explores current technological practices and trends using clear, easy-to-understand language. Throughout the text, numerous relatable examples, subject-specific illustrations, and in-depth case studies reinforce key learning points and show students how important concepts are applied in the real world.
Computer architecture is the engineering of a computer system through the careful design of its organization, using innovative mechanisms and integrating software techniques, to achieve a set of performance goals.
Some examples of embedded systems include ATMs, cell phones, printers, thermostats, calculators, and videogame consoles. Handheld computers or PDAs are also considered embedded devices because of the nature of their hardware design, even though they are more expandable in software terms. The line dividing embedded from general-purpose devices continues to blur as devices grow more capable.
The field of embedded system research is rich with potential because it combines two factors. First, the system designer usually has control over both the hardware design and the software design, unlike general-purpose computing. Second, embedded systems are built upon a wide range of disciplines, including computer architecture (processor architecture and microarchitecture, memory system design), compilers, schedulers and operating systems, and real-time systems. Combining these two factors means that barriers between these fields can be broken down, enabling synergy between them and resulting in optimizations that are greater than the sum of their parts.
The challenge for parallel computer architects is to provide hardware and software mechanisms to extract and exploit parallelism for performance on a broad class of applications, not just the huge scientific applications used by supercomputers. Reaching this goal requires advances in processors, interconnection networks, memory systems, compilers, programming languages, and operating systems. Some mechanisms allow processors to share data, communicate, and synchronize more efficiently. Others make it easier for programmers to write correct programs. Still others enable the system to maximize performance while minimizing power consumption.
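As a minimal software-level sketch of one such mechanism (a mutex guarding shared data, assuming a POSIX system; compile with -pthread), two threads synchronize their updates to a shared counter:

```c
#include <pthread.h>
#include <stdio.h>

/* A shared counter protected by a mutex: a software synchronization
   mechanism built on the hardware's atomic operations. */
static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);   /* acquire: other threads wait here */
        counter++;                   /* critical section */
        pthread_mutex_unlock(&lock); /* release */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("%ld\n", counter); /* 2000000 with the lock; unpredictable without it */
    return 0;
}
```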
Reliable/fault-tolerant computing deals with techniques for keeping a computer system operating normally despite the occurrence of failures. A failure may be permanent, in which case a component cannot function properly after the failure, or transient, in which case a component suffers a temporary fault (such as loss of data) but remains functional afterward. Failures may occur in hardware components or in software components, the latter due to bugs in code.
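As a minimal sketch of the kind of mechanism used to catch such transient faults (single-bit even parity is the textbook example here, not a description of any particular system's scheme):

```c
#include <stdint.h>
#include <stdio.h>

/* Even parity over one byte: the stored parity bit makes the total
   number of 1 bits even, so any single flipped bit is detectable. */
static uint8_t parity(uint8_t byte) {
    uint8_t p = 0;
    while (byte) {
        p ^= byte & 1; /* XOR-accumulate each bit */
        byte >>= 1;
    }
    return p;
}

int main(void) {
    uint8_t data = 0x5A;                   /* value written to memory */
    uint8_t stored_parity = parity(data);

    data ^= 0x08;                          /* a transient fault flips bit 3 */

    if (parity(data) != stored_parity)
        puts("single-bit error detected"); /* detected, though not corrected */
    return 0;
}
```

Parity only detects errors; real memory subsystems typically use ECC codes that can also correct single-bit errors.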
At North Carolina State University, we cover fault tolerance and computer security briefly in several courses at the undergraduate and introductory graduate levels, and cover them extensively in an advanced graduate-level course. Our research program addresses fault tolerance and computer security concerns in various components of the computer system, such as at the processor microarchitecture level, the memory system architecture level, and the system software level. Our past projects include pioneering research efforts in the memory subsystem.