High Performance Computing and Communications Glossary 2.1

A significant part of the material of this glossary was adapted from material originally written by Gregory V. Wilson which appeared as "A Glossary of Parallel Computing Terminology" (IEEE Parallel & Distributed Technology, February 1993), and is being re-printed in the same author's "Practical Parallel Programming" (MIT Press, 1995). Several people have contributed additions to this glossary, especially Jack Dongarra, Geoffrey Fox and many of my colleagues at Edinburgh and Syracuse.

Original version is from NPAC at <URL:http://nhse.npac.syr.edu/hpccgloss/>

Original author: Ken Hawick, khawick@cs.adelaide.edu.au

See also the index of all letters and the full list of entries (very large)

Sections: A B C D E F G H I J K L M N O P Q R S T U V W X Y Z


D4 (n.) An AT&T-specified framing format for T1 facilities. Designates every 193rd bit as reserved for framing and synchronization information. The term originates from the fourth generation digital channel bank.

DACS (n.) Digital Access Cross-connect System, a switch that enables test access and switching of digital signals in a T-carrier system.

data cache (n.) a cache that holds data but does not hold instructions.

data dependency (n.) a situation existing between two statements if one statement can store into a location that is later accessed by the other statement. See also dependence.

data flow analysis (n.) process of finding dependencies among instructions.

data flow graph (n.) (1) machine language for a data flow computer; (2) result of data flow analysis.

data parallelism (n.) A model of parallel computing in which a single operation can be applied to all elements of a data structure simultaneously. Typically, these data structures are arrays, and the operations are either arithmetic operations that act independently on every array element, or reduction operations. See also array processor, processor array, SIMD and vector processor.
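As a minimal sketch (the glossary is language-neutral; Python is used here purely for illustration), the defining property is that one operation is applied to every element, with no dependence between elements:

```python
def data_parallel_map(op, data):
    """Apply one operation to every element of an array.
    Each application is independent, so a SIMD machine or processor
    array could perform all of them in a single simultaneous step."""
    return [op(x) for x in data]      # sequential here; parallel in spirit

def reduce_sum(data):
    """A reduction: combines all elements into one value, the other
    common kind of operation in the data-parallel model."""
    total = 0
    for x in data:
        total += x
    return total
```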

data-driven (n.) a data flow architecture in which execution of instructions depends on availability of operands.

dataflow (n.) A model of parallel computing in which programs are represented as dependence graphs and each operation is automatically blocked until the values on which it depends are available. The parallel functional and parallel logic programming models are very similar to the dataflow model.

dead code (n.) A portion of a program that does not have to be executed (because the values it calculates will never be used) or that will never be entered. Compiler optimization usually removes sections of dead code. See also dependence.

deadlock (n.) A situation in which each possible activity is blocked, waiting on some other activity that is also blocked. If a directed graph represents how activities depend on others, then deadlock arises if and only if there is a cycle in this graph. See also dependence graph.
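The cycle condition above can be checked directly. The following sketch (illustrative Python, not part of the original glossary) represents the "waits-for" relationships as a directed graph and reports deadlock exactly when a cycle exists:

```python
def has_deadlock(waits_for):
    """waits_for maps each activity to the activities it is blocked on.
    Deadlock exists iff this directed graph contains a cycle, detected
    here by depth-first search with three-colour marking."""
    WHITE, GREY, BLACK = 0, 1, 2          # unvisited / on DFS stack / done
    colour = {a: WHITE for a in waits_for}

    def visit(a):
        colour[a] = GREY
        for b in waits_for.get(a, []):
            if colour.get(b, WHITE) == GREY:      # back edge: cycle found
                return True
            if colour.get(b, WHITE) == WHITE and visit(b):
                return True
        colour[a] = BLACK
        return False

    return any(colour[a] == WHITE and visit(a) for a in waits_for)
```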

decision problem (n.) a problem whose solution, if any, is found by satisfying a set of constraints.

declustered (adj.) Describing a file system that distributes the blocks of individual files between several disks. This contrasts with a traditional file system, in which all blocks of a single file are placed on the same disk. See also striped.

DECnet (n.) See DNA.

decomposition (n.) A division of a data structure into substructures that can be distributed separately, or a technique for dividing a computation into subcomputations that can be executed separately. The most common decomposition strategies in parallel computing are: functional decomposition; geometric decomposition and iterative decomposition.

dedicated throughput (n.) the number of results returned for a single job per time unit.

demand-driven (n.) a data flow architecture in which execution of an instruction depends on both the availability of its operands and a request for the result.

dependence (n.) The relationship of a calculation B to a calculation A if changes to A, or to the ordering of A and B, could affect B. If A and B are calculations in a program, for example, then B is dependent on A if B uses values calculated by A. There are four types of dependence: true dependence, where B uses values calculated by A; antidependence, where A uses values overwritten by B; output dependence, where A and B both write to the same variables; control dependence, where B's execution is controlled by values set in A. Dependence is also used in message routing to mean that some activity X cannot proceed until another activity Y has completed. For example, if X and Y are messages attempting to pass through a region with limited buffer space, and Y currently holds some or all of the buffer, X may depend on Y releasing some buffer space before proceeding.
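Three of the four dependence types can be computed mechanically from the variables each statement reads and writes. A small illustrative sketch (Python chosen arbitrarily; control dependence is omitted because it requires the program's branch structure, not just read/write sets):

```python
def classify_dependences(a_reads, a_writes, b_reads, b_writes):
    """A executes before B; each argument is a set of variable names.
    Returns the set of dependence types of B on A."""
    kinds = set()
    if a_writes & b_reads:
        kinds.add("true")      # B uses values calculated by A
    if a_reads & b_writes:
        kinds.add("anti")      # A uses values overwritten by B
    if a_writes & b_writes:
        kinds.add("output")    # A and B both write the same variable
    return kinds
```

For example, if A is `x = y + 1` and B is `z = x * 2`, then B has a true dependence on A through `x`.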

dependence analysis (n.) an analysis by a compiler or precompiler that reveals which portions of a program depend on the prior completion of other portions of the program. Dependence analysis usually relates statements in an iterative code construct.

dependence graph (n.) A directed graph whose nodes represent calculations and whose edges represent dependencies among those calculations. If the calculation represented by node k depends on the calculations represented by nodes i and j, then the dependence graph contains the edges i->k and j->k. See also compiler optimization, dataflow, dependence.

depth (n.) parallel time complexity.

deque (n.) a double ended queue; that is a list of elements on which insertions and deletions can be performed at both the front and rear.
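Python's standard library happens to provide exactly this structure as collections.deque, which makes a convenient illustration of the four end operations:

```python
from collections import deque

d = deque([2, 3])
d.appendleft(1)       # insertion at the front
d.append(4)           # insertion at the rear
front = d.popleft()   # deletion from the front
rear = d.pop()        # deletion from the rear
```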

deterministic model (n.) a task model in which precedence relations between tasks and the execution time needed by each task are fixed and known before the schedule is devised.

diameter (n.) The distance across a graph, measured by the number of links traversed. Diameter is usually taken to mean maximum diameter (i.e. the greatest internode distance in the graph), but it can also mean the average of all internode distances. Diameter is sometimes used as a measure of the goodness of a topology.
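The maximum diameter can be computed by breadth-first search from every node, taking the largest link count found. A short sketch (illustrative Python; the graph is given as undirected adjacency lists and assumed connected):

```python
from collections import deque

def distances_from(graph, start):
    """Link counts from start to every reachable node, by BFS."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def diameter(graph):
    """Greatest internode distance, assuming the graph is connected."""
    return max(max(distances_from(graph, s).values()) for s in graph)
```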

direct mapping (n.) a cache that has a set associativity of one so that each item has a unique place in the cache at which it can be stored.
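Because the set associativity is one, the slot for an address is a pure function of the address. A sketch of that mapping (illustrative Python; sizes are assumed to be powers of two, as is typical):

```python
def direct_mapped_slot(address, num_lines, line_size):
    """With set associativity of one, an address maps to exactly
    one cache line; return its (line index, tag)."""
    block = address // line_size   # which memory block holds the address
    index = block % num_lines      # the block's unique slot in the cache
    tag = block // num_lines       # identifies which block occupies the slot
    return index, tag
```

Two blocks whose indices coincide (here, addresses 0 and 128 with 8 lines of 16 bytes) must evict each other, which is the characteristic weakness of direct mapping.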

direct memory access (n.) See DMA.

direct method (n.) Any technique for solving a system of equations that computes the solution in a fixed, finite number of arithmetic operations, rather than by successive approximation. LU decomposition with back substitution is an example of a direct method. See also indirect method.
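A compact sketch of one such direct method, Gaussian elimination with partial pivoting followed by back substitution (illustrative pure Python; a production code would use a tuned library routine):

```python
def solve_direct(a, b):
    """Solve a x = b directly: a fixed number of arithmetic steps,
    no iteration toward a tolerance. a is a list of row lists."""
    n = len(a)
    a = [row[:] for row in a]      # work on copies
    b = b[:]
    for k in range(n):
        # partial pivoting: bring the largest remaining entry to the diagonal
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        a[k], a[p] = a[p], a[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):  # eliminate column k below the diagonal
            m = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= m * a[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):  # back substitution
        s = sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / a[i][i]
    return x
```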

direct naming (n.) a message passing scheme in which source and destination designators are the names of processes.

directed graph (n.) a graph in which the edges have an orientation, denoted by arrowheads.

disjoint memory (n.) Memory that appears to the user to be divided amongst many separate address spaces. In a multicomputer, each processor typically has its own private memory and manages requests to it from processes running on other processors. Disjoint memory is more commonly called distributed memory, but the memory of many shared memory computers is physically distributed.

disk striping (n.) technique of interleaving a disk file across two or more disk drives to enhance input/output performance. The performance gain is a function of the number of drives and channels used.
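The interleaving itself is a simple round-robin mapping from file block number to (drive, offset). A sketch (illustrative Python, not tied to any particular file system):

```python
def stripe_location(block_number, num_drives):
    """Round-robin striping: block k lives on drive k mod d at
    offset k // d. Consecutive blocks land on different drives,
    so they can be transferred in parallel."""
    return block_number % num_drives, block_number // num_drives
```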

distributed computer (n.) A computer made up of many smaller and potentially independent computers, such as a network of workstations. This architecture is increasingly studied because of its cost effectiveness and flexibility. Distributed computers are often heterogeneous. See also multi-processor, multicomputer.

distributed memory (n.) Memory that is physically distributed amongst several modules. A distributed memory architecture may appear to users to have a single address space and a single shared memory or may appear as disjoint memory made up of many separate address spaces.

divide and conquer (n.) a problem solving methodology that involves partitioning a problem into subproblems, solving the subproblems, and then combining the solutions to the subproblems into a solution for the original problem.
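Merge sort is the classic instance: partition the list, solve each half, combine the sorted halves. A sketch (illustrative Python; the two recursive calls are independent, so on a parallel machine they could be executed simultaneously):

```python
def merge_sort(xs):
    """Divide and conquer: split, solve subproblems, combine."""
    if len(xs) <= 1:                  # base case: trivially sorted
        return xs
    mid = len(xs) // 2
    left = merge_sort(xs[:mid])       # subproblem 1
    right = merge_sort(xs[mid:])      # subproblem 2 (independent of 1)
    merged, i, j = [], 0, 0           # combine the two sub-solutions
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```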

DLCI (n.) Data Link Connection Identifier, a frame relay header field that identifies the destination of the packet.

DMA (n.) Direct Memory Access; allows devices on a bus to access memory without requiring intervention by the CPU.

DNA (n.) Digital Network Architecture is Digital Equipment Corporation's proprietary digital network architecture and is also known as DECnet.

domain (n.) That part of a larger computing resource allocated for the sole use of a specific user or group of users. See also space sharing.

domain decomposition (n.) See decomposition.

DRAM (n.) Dynamic RAM; memory which periodically needs refreshing, and is therefore usually slower than SRAM but is cheaper to produce.

DS0 (n.) Digital service hierarchy level 0 with a maximum channel capacity of 64kbps.

DS1 (n.) Digital service hierarchy level 1 with a maximum channel capacity of 1.544Mbps. This term is used interchangeably with T1. There are 24 DS0 channels per DS1.

DS3 (n.) Digital service hierarchy level 3 with a maximum channel capacity of 44.736Mbps. This term is used interchangeably with T3. There are 28 DS1 channels per DS3.

dusty deck (n.) A term applied to old programs (usually Fortran or Cobol). The term is derived from the image of a deck of punched cards grown dusty over the years.

dynamic channel naming (n.) a message passing scheme allowing source and destination designators to be established at run time.

dynamic decomposition (n.) a task allocation policy that assumes tasks are generated at execution time. See also decomposition.