Hockney R.W., Jesshope C.R. Parallel Computers. Architecture, Programming and Algorithms

  • djvu file
  • size 3.56 MB
Adam Hilger, 1981. — 432 p.
The 1980s are likely to be the decade of the parallel computer, and it is the purpose of this book to provide an introduction to the topic. Although many computers have displayed examples of parallel or concurrent operation since the 1950s, it was not until 1974-5 that the first computers appeared that were designed specifically to use parallelism in order to operate efficiently on vectors or arrays of numbers. These computers were based either on executing in parallel the various subfunctions of an arithmetic operation, in the same manner as a factory assembly line (pipelined computers such as the CDC STAR and the TI ASC), or on replicating complete arithmetic units (processor arrays such as the ILLIAC IV). There were many problems in these early designs, but by 1980 several major manufacturers were offering parallel computers on a commercial basis, as opposed to the previous research projects. The main examples are the CRAY-1 (actually first installed in 1976) and the CDC CYBER 205 pipelined computers, and the ICL DAP and Burroughs BSP processor arrays. Unfortunately, since we wrote this material Burroughs have experienced problems in the production of the BSP and, although a prototype machine was built, Burroughs have withdrawn from this project. However, this still remains a very interesting design, and its demise perhaps gives more insight into this field. Pipelined designs are also becoming popular as processors to attach to minicomputers for signal processing or the analysis of seismic data, and also as attachments to micro-based systems. Examples are the FPS AP-120B, FPS-164, Data General AP/130 and IBM 3838.
Parallelism has been introduced in the above designs because improvements in circuit speeds alone cannot produce the required performance. This is evident also in the design studies produced for the proposed Numerical Aerodynamic Simulation Facility at NASA Ames. This is to be based on a computer capable of 10⁹ floating-point operations per second. The CDC proposal for this machine comprises four high-performance pipelined units, whereas the Burroughs design is an array of 512 replicated arithmetic units. The advent of very large-scale integration (VLSI) as a reliable chip manufacturing process provides a technology that can produce very large arrays of simple processing elements (PEs). The ICL DAP (4096 PEs) and the Goodyear Aerospace MPP are early examples of processor arrays that are likely to be developed within the decade to take advantage of VLSI technology. It may be claimed that the above designs are of interest only to large scientific laboratories and are not likely to make an impact on the mass of computer users. Most experience shows, however, that advances in computer architecture first made for the scientific market do later become part of the general computing scene.
It seems therefore that the need for greater parallelism in design (i.e. the demand for greater performance than circuit speed alone can give), coupled with the technology for implementing highly parallel designs (VLSI), is likely to make parallelism in computer architecture a growth area in the 1980s. It is the purpose of this book to explain the principles of both pipelined and array-like designs and to classify them, to show how these principles have actually been implemented in a number of successful current designs (the CRAY-1, CYBER 205, FPS AP-120B, ICL DAP and Burroughs BSP), and to compare the performance of the different designs on a number of substantial applications (matrix operations, FFT, Poisson-solving). The advent of highly parallel architectures also introduces the problem of designing numerical algorithms that execute efficiently on them, and computer languages in which these algorithms can be expressed. We regard the algorithmic and language aspects of parallelism as being just as important as the architectural, and devote chapters specifically to them.
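As a plain illustration of the architectural distinction the authors draw (a sketch in C, not an example taken from the book), the code below shows the kind of element-wise vector operation that both classes of machine are built to accelerate: a pipelined computer streams successive elements through the stages of a single arithmetic pipeline, while a processor array assigns one element to each processing element and evaluates them simultaneously. The function name and the serial formulation are purely illustrative.

#include <stddef.h>

/* Element-wise vector addition a = b + c, written as an ordinary serial
 * loop. On a pipelined machine the n independent additions are streamed
 * through one segmented floating-point pipeline; on a processor array
 * each of the n additions is carried out by its own processing element.
 * The scalar loop below is only a serial reference point for that comparison.
 */
void vector_add(size_t n, const double *b, const double *c, double *a)
{
    for (size_t i = 0; i < n; i++)
        a[i] = b[i] + c[i];
}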
Pipelined computers
Processor arrays
Parallel languages
Parallel algorithms
Future developments