Nick Maclaren
University of Cambridge Computer Laboratory
New Museums Site, Pembroke Street, Cambridge CB2 3QG, England
Email: nmm1@cam.ac.uk
Tel.: +44 (0)1223 334761
Fax: +44 (0)1223 334679

We start with a program design or algorithm and convert it into our programming language (i.e. we code it); the compiler and run-time system then convert that into machine operations (i.e. we compile and run it). To get the best performance, the algorithm must be fast and both conversion steps must be done efficiently. With the advent of highly parallel computers, the biggest problem has become how to distribute and update data consistently yet efficiently.
Fortran was designed for traditional scientific calculations, and Fortran 90 and HPF are well suited to coding matrix-based algorithms for SIMD (vector) supercomputers. There are now several compilers that can convert existing vector code to run on parallel computers, and we shall discuss some of the techniques and problems involved. C and C++ are usually less efficient than Fortran on supercomputers because of problems with compilation, and we shall see why.
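The difficulty can be sketched in C. The first loop below is the equivalent of the Fortran 90 array statement Y = Y + A*X, but a C compiler must allow for the two arrays overlapping (aliasing, taken up below), which can block vectorisation; the restrict qualifier later added in C99 is one illustrative way of asserting that they do not. The function names here are purely illustrative, not taken from the paper.

    /* Equivalent of the Fortran 90 array statement  Y = Y + A*X.
       A vectorising compiler must allow for x and y overlapping,
       so it cannot always generate vector code; Fortran dummy
       arguments are assumed not to alias.                          */
    void axpy(double *y, const double *x, double a, int n)
    {
        int i;
        for (i = 0; i < n; i++)
            y[i] += a * x[i];
    }

    /* C99's restrict qualifier lets the programmer assert that the
       arrays do not overlap, restoring the Fortran assumption.     */
    void axpy_restrict(double * restrict y, const double * restrict x,
                       double a, int n)
    {
        int i;
        for (i = 0; i < n; i++)
            y[i] += a * x[i];
    }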
At one time, functional programming was heralded as the way to use highly parallel computers, but it does not work in practice. One method that does work is explicit message passing (e.g. Occam or MPI), but many people find it difficult to use. We shall consider one aspect of the problem (aliasing) and a few ways in which conventional high-level languages could handle it better. Many of these have existed for 25 or more years in less common languages (e.g. Algol 68 or Reduce).
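For concreteness, explicit message passing looks like the following minimal MPI sketch in C, in which rank 0 sends a block of data to rank 1; the data, buffer size and process layout are illustrative only. Every detail of distribution and synchronisation is the programmer's responsibility, which is what many people find difficult.

    #include <mpi.h>
    #include <stdio.h>

    /* Rank 0 sends a block of doubles to rank 1; run with at least
       two processes, e.g. "mpirun -np 2 ./a.out".                   */
    int main(int argc, char **argv)
    {
        int rank;
        double buffer[100] = {0.0};
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            buffer[0] = 42.0;          /* data to distribute          */
            MPI_Send(buffer, 100, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(buffer, 100, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                     &status);
            printf("rank 1 received %f\n", buffer[0]);
        }

        MPI_Finalize();
        return 0;
    }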