
Harvard or von Neumann?

The von Neumann architecture is a computer design model that uses a processing unit and a single separate storage structure to hold both instructions and data. It is named after the mathematician and early computer scientist John von Neumann, who knew of Alan Turing's seminal hypothetical idea of a 'universal computing machine', which had been published in 1936. Such a computer implements a universal Turing machine, and it is the common «referential model» for specifying sequential architectures, in contrast with parallel architectures. A stored-program computer is generally a computer with this design, although as modern computers are usually of this type, the term has fallen into disuse.

The earliest computing machines had fixed programs. Some very simple computers still use this design, either for simplicity or training purposes. For example, a desk calculator (in principle) is a fixed program computer. It can do basic mathematics, but it cannot be used as a word processor or to run video games. To change the program of such a machine, you have to re-wire, re-structure, or even re-design the machine. Indeed, the earliest computers were not so much «programmed» as they were «designed». «Reprogramming», when it was possible at all, was a laborious process, starting with flow charts and paper notes, followed by detailed engineering designs, and then the often-arduous process of physically re-wiring and re-building the machine.

The idea of the stored-program computer changed all that. By defining an instruction set architecture and expressing the computation as a series of instructions (the program), designers made the machine much more flexible. By treating those instructions in the same way as data, a stored-program machine can easily change the program, and can do so under program control.
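To make the idea concrete, here is a minimal sketch in C of a stored-program machine. The toy instruction set (LOAD, ADD, STORE, HALT) and its encoding are invented for illustration; the point is that one flat memory array holds both the program and the data it operates on, which is the von Neumann arrangement.

/* A minimal sketch of the stored-program idea: one flat memory array
 * holds both the instructions and the data they operate on. The
 * opcode set and encoding are invented for this illustration. */
#include <stdio.h>

enum { HALT = 0, LOAD = 1, ADD = 2, STORE = 3 };

int main(void) {
    /* One memory: cells 0..8 hold the program, cells 9..11 the data. */
    int mem[16] = {
        LOAD, 9,        /* acc = mem[9]       */
        ADD, 10,        /* acc += mem[10]     */
        STORE, 11,      /* mem[11] = acc      */
        HALT,
        0, 0,           /* padding            */
        2, 3, 0         /* operands at 9, 10; result at 11 */
    };

    int pc = 0, acc = 0;
    for (;;) {
        int op = mem[pc++];
        if (op == HALT) break;
        int addr = mem[pc++];
        if (op == LOAD)  acc = mem[addr];
        if (op == ADD)   acc += mem[addr];
        if (op == STORE) mem[addr] = acc;
    }
    printf("2 + 3 = %d\n", mem[11]);  /* prints 5 */
    return 0;
}

Because the program is just numbers in memory, replacing it with a different computation requires no re-wiring, only writing different values into the array.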

The terms «von Neumann architecture» and «stored-program computer» are generally used interchangeably, and that usage is followed in this article. However, the Harvard architecture concept should also be mentioned as a design which stores the program in a modifiable form, but without using the same physical storage or format as for general data.

A stored-program design also lets programs modify themselves while running, effectively allowing the computer to program itself. One early motivation for such a facility was the need for a program to increment or otherwise modify the address portion of instructions, which had to be done manually in early designs. This became less important when index registers and indirect addressing became customary features of machine architecture. Self-modifying code has largely fallen out of favor, since it is very hard to understand and debug, as well as inefficient under modern processor pipelining and caching schemes.
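The following sketch, using the same invented toy machine as above, shows the historical technique just described: the program sums an array by incrementing the address field of its own ADD instruction. The extra opcodes INC, DEC and JNZ are again illustrative, not from any real instruction set.

/* Sketch of self-modifying code in the toy machine: the program walks
 * an array by patching the operand of its own ADD instruction, the
 * technique that index registers later made unnecessary. */
#include <stdio.h>

enum { HALT, LOAD, ADD, STORE, INC, DEC, JNZ };

int main(void) {
    int mem[40] = {0};
    int prog[] = {
        ADD, 20,      /* 0: acc += mem[20] -- this operand gets patched */
        INC, 1,       /* 2: mem[1]++ : the program edits its own ADD    */
        DEC, 31,      /* 4: loop counter--                              */
        JNZ, 31, 0,   /* 6: if mem[31] != 0 goto 0                      */
        STORE, 30,    /* 9: mem[30] = acc                               */
        HALT          /* 11 */
    };
    for (int i = 0; i < (int)(sizeof prog / sizeof prog[0]); i++)
        mem[i] = prog[i];
    mem[20] = 5; mem[21] = 6; mem[22] = 7;  /* array to sum  */
    mem[31] = 3;                            /* element count */

    int pc = 0, acc = 0;
    for (;;) {
        int op = mem[pc++];
        if (op == HALT) break;
        int a = mem[pc++];
        switch (op) {
        case LOAD:  acc = mem[a];  break;
        case ADD:   acc += mem[a]; break;
        case STORE: mem[a] = acc;  break;
        case INC:   mem[a]++;      break;
        case DEC:   mem[a]--;      break;
        case JNZ:   { int t = mem[pc++]; if (mem[a]) pc = t; } break;
        }
    }
    printf("sum = %d\n", mem[30]);  /* prints 18 */
    return 0;
}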

On a large scale, the ability to treat instructions as data is what makes assemblers, compilers and other automated programming tools possible. One can «write programs which write programs». On a smaller scale, I/O-intensive machine instructions such as the BITBLT primitive used to modify images on a bitmap display were once thought to be impossible to implement without custom hardware. It was later shown that these instructions could be implemented efficiently by «on-the-fly compilation» technology, e.g. code-generating programs, one form of self-modifying code that has remained popular.
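As a hedged illustration of «on-the-fly compilation», the sketch below generates a few bytes of x86-64 machine code at run time and calls them as a function. It assumes an x86-64 processor and a POSIX system that still permits writable-and-executable mappings; hardened systems may refuse the mmap call, and casting a data pointer to a function pointer is formally undefined in ISO C, though this is how real code generators work.

/* Sketch of run-time code generation: emit "mov eax, n; ret" into an
 * executable page and call it. Platform assumptions: x86-64, POSIX. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    unsigned char code[] = {
        0xB8, 0, 0, 0, 0,   /* mov eax, imm32 (constant patched below) */
        0xC3                /* ret */
    };
    int n = 42;
    memcpy(code + 1, &n, 4);          /* "compile" the constant in */

    void *page = mmap(NULL, sizeof code,
                      PROT_READ | PROT_WRITE | PROT_EXEC,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED) { perror("mmap"); return 1; }
    memcpy(page, code, sizeof code);

    int (*fn)(void) = (int (*)(void))page;   /* treat the data as code */
    printf("generated function returned %d\n", fn());  /* 42 */
    munmap(page, sizeof code);
    return 0;
}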

There are drawbacks to the von Neumann design. Aside from the von Neumann bottleneck described below, program modifications can be quite harmful, either by accident or design. In some simple stored-program computer designs, a malfunctioning program can damage itself, other programs, or the operating system, possibly leading to a crash. This ability for programs to create and modify other programs is also frequently exploited by malware. A buffer overflow is one very common example of such a malfunction. Malware might use a buffer overflow to smash the call stack, overwrite the existing program, and then proceed to modify other program files on the system to further propagate the malware to other machines. Memory protection and other forms of access control can help protect against both accidental and malicious program modification.
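The sketch below illustrates the overflow mechanism in C. The struct layout is contrived so that the corruption is visible and the write stays inside one object; a real stack-smashing attack exploits exactly the same unchecked copy, but its outcome is undefined behavior and depends on the platform.

/* Sketch of a buffer overflow: an unchecked copy into a fixed-size
 * buffer spills into the adjacent field. For illustration only. */
#include <stdio.h>
#include <string.h>

struct frame {
    char buf[8];        /* fixed-size input buffer           */
    int  is_admin;      /* adjacent data an attacker targets */
};

int main(void) {
    struct frame f = { "", 0 };
    /* 11 characters plus the terminating NUL = 12 bytes: fills the
     * 8-byte buffer and spills 4 bytes into is_admin. */
    const char *input = "AAAAAAAAAAA";
    strcpy(f.buf, input);                  /* no bounds check        */
    printf("is_admin = %d\n", f.is_admin); /* likely nonzero now     */
    return 0;
}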

By contrast with a von Neumann architecture computer, in which both instructions and data are stored in the same memory system and (without the complexity of a cache) must be accessed in turn, the original Harvard architecture computer, the Harvard Mark I, employed entirely separate memory systems to store instructions and data. The CPU fetched the next instruction and loaded or stored data simultaneously and independently. The physical separation of instruction and data memory is sometimes held to be the distinguishing feature of modern Harvard architecture computers. However, with entire computer systems being integrated onto single chips, the use of different memory technologies for instructions (e.g., flash memory) and data (typically read/write memory) in von Neumann machines is becoming popular. The true distinction of a Harvard machine is that instruction and data memory occupy different address spaces. In other words, a memory address does not uniquely identify a storage location (as it does in a von Neumann machine); you also need to know the memory space (instruction or data) to which the address applies.
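Reworking the earlier von Neumann sketch makes the address-space point visible: in the toy Harvard machine below (same invented encoding), instruction fetches always read imem and loads/stores always read dmem, so the numeric address 0 names two different storage locations.

/* Sketch of Harvard address spaces: the same numeric address refers
 * to one cell in instruction memory and another in data memory. */
#include <stdio.h>

enum { HALT, LOAD, ADD, STORE };

int imem[16] = { LOAD, 0, ADD, 1, STORE, 2, HALT };  /* instruction space */
int dmem[16] = { 2, 3, 0 };                          /* data space        */

int main(void) {
    int pc = 0, acc = 0;
    for (;;) {
        int op = imem[pc++];               /* fetches always hit imem  */
        if (op == HALT) break;
        int a = imem[pc++];
        if (op == LOAD)  acc = dmem[a];    /* loads/stores always dmem */
        if (op == ADD)   acc += dmem[a];
        if (op == STORE) dmem[a] = acc;
    }
    /* Address 0 meant "the LOAD opcode" in imem and "the value 2" in dmem. */
    printf("2 + 3 = %d\n", dmem[2]);  /* prints 5 */
    return 0;
}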

A pure Harvard architecture computer suffers from the disadvantage that mechanisms must be provided to separately load the program to be executed into instruction memory and any data to be operated upon into data memory. Additionally, modern Harvard architecture machines often use a read-only technology for the instruction memory and read/write technology for the data memory. This allows the computer to begin execution of a pre-loaded program as soon as power is applied. The data memory will at this time be in an unknown state, so it is not possible to provide any kind of pre-defined data values to the program.

The solution is to provide a hardware pathway and machine language instructions so that the contents of the instruction memory can be read as if they were data. Initial data values can then be copied from the instruction memory into the data memory when the program starts. If the data is not to be modified (for example, if it is a constant value, such as pi, or a text string), it can be accessed by the running program directly from instruction memory without taking up space in data memory (which is often at a premium).
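A minimal sketch of this startup copy, modeled on what a C runtime (crt0) does on a Harvard-style microcontroller, is given below. The two arrays merely stand in for flash and RAM, and the names are invented; on real hardware the addresses and sizes come from the linker.

/* Sketch of crt0-style data initialization: the initial values of
 * read/write variables are kept in instruction (flash) memory and
 * copied into RAM before the program proper runs. Simulated here. */
#include <stdio.h>
#include <string.h>

static const unsigned char flash_data_image[] = { 1, 2, 3, 4 };  /* in "flash" */
static unsigned char ram_data[sizeof flash_data_image];          /* in "RAM"   */

static void startup_copy(void) {
    /* On a pure Harvard machine this step needs the special pathway
     * described above; plain memcpy stands in for it here. */
    memcpy(ram_data, flash_data_image, sizeof flash_data_image);
}

int main(void) {
    startup_copy();              /* runs before any user code        */
    ram_data[0] += 10;           /* data memory is read/write        */
    printf("%d %d\n", ram_data[0], flash_data_image[0]);  /* 11 1 */
    return 0;
}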

Most modern computers that are documented as Harvard Architecture are, in fact, Modified Harvard Architecture. The Modified Harvard Architecture is a variation of the Harvard computer architecture that allows the contents of the instruction memory to be accessed as if they were data.
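As a concrete example of such access, AVR microcontrollers (a classic Modified Harvard family) expose it through avr-libc's pgmspace facilities. The fragment below assumes an AVR target and toolchain and will not compile for a desktop machine.

/* Modified Harvard access on AVR: PROGMEM places the constant in
 * flash, i.e. in instruction memory. Requires avr-gcc and avr-libc. */
#include <avr/pgmspace.h>

const char greeting[] PROGMEM = "hello";   /* constant kept in flash */

char read_first(void)
{
    /* A plain dereference would read the data address space;
     * pgm_read_byte() issues the special program-memory read
     * (the AVR LPM instruction). */
    return pgm_read_byte(&greeting[0]);    /* returns 'h' */
}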

Harvard or von Neumann?

Three characteristics of Harvard architecture machines may be used to distinguish them from von Neumann machines:

1. Instruction and data memories occupy different address spaces. That is, there is an address 'zero' in instruction space that refers to an instruction storage location and also an address 'zero' in data space that refers to a distinct data storage location. By contrast, a von Neumann machine stores both instructions and data in a single address space, so address 'zero' refers to only one thing and whether the binary pattern in that location is interpreted as an instruction or data is defined by how the program is written. This characteristic unambiguously identifies a Harvard machine; that is, if instruction and data memories occupy different address spaces then the architecture is Harvard, not von Neumann.

2. Instruction and data memories have separate hardware pathways to the central processing unit (CPU). This is pretty much the whole point of modern Harvard machines and why they still co-exist with the more flexible and general von Neumann architecture. Separate memory pathways to the CPU allow instructions to be fetched and data to be accessed at the same time without the considerable extra complexity of a cache. Therefore, when performance is important but a cache is impractical (due to complexity or the difficulty of predicting execution speed) and the extra difficulty of programming a Harvard machine is acceptable, this becomes the architecture of choice. However, a von Neumann machine with independent instruction and data caches also has separate hardware pathways to the CPU (for precisely the same purpose of increasing speed). Some processors are referred to as Harvard architecture even though instructions and data occupy the same address space because they cache instructions and data separately and pass them to the CPU via separate hardware pathways. As a result, this characteristic is no longer unambiguous. From a programmer's point of view, a processor with a single address space for instructions and data is programmed in the same way whether or not it has a cache and is therefore a von Neumann machine. From the point of view of the CPU designer, simultaneous access to instructions and data may appear sufficiently important to warrant a special term to distinguish the results from a von Neumann machine with no cache or a unified cache.

3. Instruction and data memories are implemented in different ways. The original Harvard machine, the Mark I, stored instructions on a punched paper tape and data in electro-mechanical counters. This, however, was entirely due to the limitations of technology available at the time. Modern embedded computer systems (for example, the microcontroller in a digital camera) need to store their software programs without power and without the disk drives used in general-purpose computers. Therefore, instructions are stored in a read-only memory technology. Read/write memory (which loses its contents when power is removed) is only used for data storage. There is no obstacle to combining different memory technologies in a single address space and thus building a von Neumann machine with read-only instructions and read/write data. So, this characteristic of the original Harvard machine is no longer relevant as a distinction from von Neumann machines.


Turing machine; arduous; debug; overflow; malfunction; fetch; value; pi; text string; pathway

8. Read the text and translate it into Russian without a dictionary.

extrapolatable to a range of use cases. Only later on did 'internals' such as «the way by which the CPU performs internally and accesses addresses in memory,» mentioned above, slip into the definition of computer architecture.

Historical Perspective

Early Usage in Computer Context

The term «architecture» in computer literature can be traced to the work of Lyle R. Johnson and Frederick P. Brooks, Jr., members in 1959 of the Machine Organization department in IBM's main research center. Johnson had occasion to write a proprietary research communication about Stretch, an IBM-developed supercomputer for Los Alamos Scientific Laboratory; in attempting to characterize his chosen level of detail for discussing the luxuriously embellished computer, he noted that his description of formats, instruction types, hardware parameters, and speed enhancements aimed at the level of «system architecture», a term that seemed more useful than «machine organization». Subsequently Brooks, one of the Stretch designers, started Chapter 2 of a book (Planning a Computer System: Project Stretch, ed. W. Buchholz, 1962) by writing, «Computer architecture, like other architecture, is the art of determining the needs of the user of a structure and then designing to meet those needs as effectively as possible within economic and technological constraints.» Brooks went on to play a major role in the development of the IBM System/360 line of computers, where «architecture» gained currency as a noun with the definition «what the user needs to know.» Later the computer world would employ the term in many less-explicit ways.

The first mention of the term architecture in the refereed computer literature is in a 1964 article describing the IBM System/360. The article defines architecture as the set of «attributes of a system as seen by the programmer, i.e., the conceptual structure and functional behavior, as distinct from the organization of the data flow and controls, the logical design, and the physical implementation.» In this definition, the programmer's perspective of the computer's functional behavior is the key. The conceptual structure part of an architecture description makes the functional behavior comprehensible, and

proprietary, embellish, enhancement, currency, comprehensible

9. Read the text and translate it into English. Speak on the distinct system of computer architecture classification.