Newest entries are first. Older changes can be found here.

30th July 1996

/parallel/theory/formal/csp/jeremy-martin/
The Design and Construction of Deadlock-Free Concurrent Systems by Jeremy Martin <jeremy.martin@oucs.ox.ac.uk>, Oxford University Computing Services, 13 Banbury Road, Oxford, OX2 6NN, UK; Tel: 01865 273236 PhD Thesis, University of Buckingham, UK, 1996
/parallel/theory/formal/csp/jeremy-martin/thesis.ps.gz
The Design and Construction of Deadlock-Free Concurrent Systems by Jeremy Martin <jeremy.martin@oucs.ox.ac.uk>, Oxford University Computing Services, 13 Banbury Road, Oxford, OX2 6NN, UK; Tel: 01865 273236 PhD Thesis, University of Buckingham, UK, 1996 ABSTRACT: It is a difficult task to produce software which is guaranteed never to fail, but it is a vital goal for which to strive in many real-life situations. The problem is especially complex in the field of parallel programming, where there are extra things that can go wrong. A particularly serious problem is deadlock. Here we consider how to construct systems which are guaranteed deadlock-free by design. Design rules, old and new, which eliminate deadlock are catalogued, and their theoretical foundation illuminated. Then the development of a software engineering tool is described which proves deadlock-freedom by verifying adherence to these methods. Use of this tool is illustrated with several case studies. The thesis concludes with a discussion of related issues of parallel program reliability. (A minimal example of the kind of deadlock cycle the thesis addresses is sketched after the chapter listing below.)
/parallel/theory/formal/csp/jeremy-martin/frontispiece.ps.gz
Frontispiece i-ix (10 pages, 1-10)
/parallel/theory/formal/csp/jeremy-martin/introduction.html
/parallel/theory/formal/csp/jeremy-martin/introduction.ps.gz
Introduction 1-4 (4 pages, 11-14)
/parallel/theory/formal/csp/jeremy-martin/chapter1.ps.gz
Chapter 1. CSP and Deadlock 5-33 (29 pages, 15-43)
/parallel/theory/formal/csp/jeremy-martin/chapter2.ps.gz
Chapter 2. Design Rules for Deadlock Freedom 34-61 (28 pages, 44-71)
/parallel/theory/formal/csp/jeremy-martin/chapter3.ps.gz
Chapter 3. A Tool for Proving Deadlock-Freedom 62-105 (44 pages, 72-115)
/parallel/theory/formal/csp/jeremy-martin/chapter4.ps.gz
Chapter 4. Engineering Applications 106-123 (18 pages, 116-133)
/parallel/theory/formal/csp/jeremy-martin/conclusions.ps.gz
Conclusions and Directions for Future Work 124-129 (6 pages, 134-139)
/parallel/theory/formal/csp/jeremy-martin/references.ps.gz
References 130-133 (4 pages, 140-143)
/parallel/theory/formal/csp/jeremy-martin/appendixa.ps.gz
Appendix A: Partial Orders 134-135 (2 pages, 144-145)
/parallel/theory/formal/csp/jeremy-martin/appendixb.ps.gz
Appendix B: Graphs and Digraphs 136-141 (6 pages, 146-151)
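
For readers new to the area, the kind of deadlock the thesis sets out to eliminate can be shown in a few lines. The sketch below is not taken from the thesis (which works in the CSP/occam setting); it is the classic cycle of ungranted requests written as an MPI program, and it assumes exactly two processes:

    // Classic deadlock: each process issues a blocking synchronous
    // send and neither ever reaches its receive, so each waits on
    // the other forever.  Run with exactly two processes.
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        int peer = 1 - rank;        // the other process
        int out = rank, in = -1;
        MPI_Status status;

        // Both processes send first: a cycle in the waits-for graph.
        MPI_Ssend(&out, 1, MPI_INT, peer, 0, MPI_COMM_WORLD);        // blocks forever
        MPI_Recv(&in, 1, MPI_INT, peer, 0, MPI_COMM_WORLD, &status); // never reached

        MPI_Finalize();
        return 0;
    }

Breaking the symmetry (for example, having even-numbered processes send first and odd-numbered processes receive first) removes the cycle; design rules of this general kind are among those the thesis catalogues and proves sound.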

25th July 1996

/parallel/events/pdpta96
Updated PDPTA96 conference details: program, registration form and exhibition information.
/parallel/internet/www/sites/europe/germany/
Updated German WWW sites with DKRZ below
http://www.dkrz.de/index-eng.html
German Climate Research Center / Deutsches Klimarechenzentrum (DKRZ) by Kerstin Kleese <kleese@dkrz.de> In English and also in German at http://www.dkrz.de/. See parallel computing pages at http://www.dkrz.de/dkrz/parallel/parallel-english.html (English) and http://www.dkrz.de/dkrz/parallel/parallel.html (German)
/parallel/teaching/books/aspects-computational-science
Aspects of Computational Science by Jaap Hollenberg <sondjaap@horus.sara.nl> A textbook on high-performance computing edited by Aad van der Steen of the Academic Computer Centre in Utrecht, with contributions from 8 other Dutch scientists in the field. Published by NCF, the Dutch National Computing Facilities Foundation.

The book contains a course in the broad field of "high-performance computing", intended for graduate or postdoctoral students from universities and vocational high schools, and for technical and research workers. The course is self-contained, but some general knowledge (e.g. of a high-level programming language) is assumed. The material can be used by lecturers in courses on computers, computer science or computational science, but can also be used by students directly without further guidance.

/parallel/journals/cpe-commercial-dist-env
"Concurrency: Practice and Experience: Special Issue - Commercial Applications of Distributed Computing Environments" by Mark Baker <mab@npac.syr.edu>, http://www.npac.syr.edu/users/mab/homepage/, Northeast Parallel Architectures Center, 111 College Place, Syrcuse University, Syracuse, NY 13244-4100, USA; Tel: +1 315 443 2083; FAX: +1 315 443 1973 Call for papers for journal planned for Spring 1997, published by John Wiley & Sons Ltd.

Topics: Experiences using a CMS package with commercial applications; Comparisons of the performance characteristics encountered when running applications under different environments; Experiences of using similar computing environments on different hardware platforms; Load balancing applications; Experiences of using various environments, from Virtual Shared Memory to Message Passing; Application responsiveness and scalability; Parameter estimation for performance modelling of applications; Monitoring and modeling applications; Resource management and configuration; Task management and synchronisation; Collaborative tools and techniques; Authentication, security and information surety; The use of emerging technologies to help tackle problem solving in a distributed computing environment and others.

Deadlines: Abstracts: 15th September 1996; Abstracts approved: 30th September 1996; Full papers: 15th December 1996.

/parallel/events/irregular96
Updated IRREGULAR'96 event to include program and registration form.

24th July 1996

/parallel/events/ipps97
"11th International Parallel Processing Symposium and Symposium on New Directions in Parallel and Concurrent Computing" (IPPS '97 / PARCON 97) by Stephane Ubeda <Stephane.Ubeda@lip.ens-lyon.fr> Call for papers and participation for conference being held from 1st-5th April 1997 at University of Geneva, Geneva, Switzerland. Sponsored by IEEE Computer Society Technical Committee on Parallel Processing (TCPP) in cooperation with ACM SIGARCH; University of Geneva; European Association for Theoretical Computer Science (EATCS); Swiss Special Interest Group on Parallelism (SIPAR) and SPEEDUP Society.

Topics: Parallel Architectures; Memory Hierarchies; Parallel Algorithms; Scientific Computing; Parallel Languages; Programming Environments; Parallelizing Compilers; Special Purpose Processors; VLSI Systems; Performance Modeling/Evaluation; Signal & Image Processing Systems; Parallel Implementations of Application Tasks; Interconnection Networks and Implementation Technologies and others.

Deadlines: Papers: 20th September 1996; Notification: 13th December 1996; Camera-ready papers: 20th January 1997; Workshop proposals: 30th August 1996; Tutorial proposals: 31st October 1996; Exhibits: 31st October 1996.

See also http://cuiwww.unige.ch/~ipps97

/parallel/libraries/numerical/linear-algebra/plapack-paper
Towards Usable and Lean Parallel Linear Algebra Libraries by Almadena Chtchelkanova; Carter Edwards; John Gunnels; Greg Morrow; James Overfelt and Robert A. van de Geijn <rvdg@cs.utexas.edu>. Announcement of Technical Report TR-96-09, Department of Computer Sciences, University of Texas, May 1996. Submitted to Supercomputing 96.

See also http://www.cs.utexas.edu/users/rvdg/abstracts/SC96.html ABSTRACT: In this paper, we introduce a new parallel library effort, as part of the PLAPACK project, that attempts to address discrepancies between the needs of applications and parallel libraries. A number of contributions are made, including a new approach to matrix distribution, new insights into layering parallel linear algebra libraries, and the application of "object-based" programming techniques which have recently become popular for (parallel) scientific libraries. We present an overview of a prototype library, the SL_Library, which incorporates these ideas. Preliminary performance data shows that this more application-centric approach to libraries does not necessarily adversely impact performance, compared to more traditional approaches.

/parallel/libraries/others/scanmacs
SCANMACS - Scan Macros for Regularly Distributed Arrays by Peter A. Dinda <pdinda@cs.cmu.edu> A small set of C macros that lets you instantiate high-performance scan (parallel prefix) functions for regularly (i.e. HPF block-cyclic style) distributed arrays.
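
The scan operation itself is easily demonstrated with MPI's built-in MPI_Scan. The sketch below computes an inclusive prefix sum across processes; it makes no attempt to reproduce the SCANMACS macros or their handling of block-cyclic distributions:

    // Inclusive prefix sum: after MPI_Scan, process i holds the sum
    // of the values contributed by processes 0..i.
    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int val = rank + 1;             // each process contributes one value
        int prefix = 0;
        MPI_Scan(&val, &prefix, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
        std::printf("process %d: prefix sum = %d\n", rank, prefix);

        MPI_Finalize();
        return 0;
    }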
/parallel/environments/para++.announce
Para++: C++ Bindings for Message Passing Libraries by Eric Dillon <Eric.Dillon@loria.fr> The aim of Para++ is to provide C++ bindings for any message passing library. With them, the use of message passing libraries is simplified and made more attractive, without significant performance loss. Para++ is implemented with MPI.

See also http://www.loria.fr/para++/parapp.html

/parallel/environments/mpi/mpi-io.announce
MPI-IO: A Parallel File I/O Interface for MPI by Bill Nitzberg <nitzberg@nas.nasa.gov> Announcement of MPI-IO, a proposed standard interface for parallel I/O (reading and writing files from parallel applications) which provides a high-level interface to describe the partitioning of file data among processes, a collective interface describing complete transfers of global data structures between process memories and files, full support for asynchronous I/O operations, and a hints interface for supporting machine dependent optimizations.

See also http://lovelace.nas.nasa.gov/MPI-IO/
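
For flavour, here is a sketch written with the names that were later standardised in the I/O chapter of MPI-2, into which this proposal fed; the 1996 proposal's own names and signatures may differ in detail. Each process writes its own block of a shared file in a single collective call:

    // Partitioned, collective file output: process i owns block i.
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        const int N = 1024;             // elements per process
        int rank, buf[N];
        MPI_File fh;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        for (int i = 0; i < N; i++) buf[i] = rank;

        MPI_File_open(MPI_COMM_WORLD, (char *)"data.out",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

        // Each process writes at its own offset in one collective call.
        MPI_Offset offset = (MPI_Offset)rank * N * sizeof(int);
        MPI_File_write_at_all(fh, offset, buf, N, MPI_INT, MPI_STATUS_IGNORE);

        MPI_File_close(&fh);
        MPI_Finalize();
        return 0;
    }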

/parallel/languages/fortran/f90/f90-95-explained
Fortran 90/95 Explained by Michael Metcalf <Michael.METCALF@cern.ch> and John Reid. Fortran 95 is a revision of the ISO Fortran 90 standard based on the interpretations that have been requested following its implementation and use. In addition, new features to keep ISO Fortran aligned with High Performance Fortran have been added, along with a small number of other improvements. It is now in its final stages of formal approval.

This volume represents a thorough revision of "Fortran 90 Explained". It includes more detailed explanations of many features with more examples (giving about 18 additional pages), as well as new appendices (on avoiding Fortran 77 extensions and an extended pointer example, a further 12 pages). Also, it incorporates all the interpretations, and has a completely new chapter on Fortran 95 (18 pages). It is a complete and authoritative description of Fortran 90/95.

Published by Oxford University Press, Oxford and New York, 1996, ISBN 0 19 851888 9. See also http://www.oup.co.uk/ (UK) or http://www.oup-usa.org (US)

/parallel/environments/mpi/oompi.announce
Object Oriented MPI (OOMPI) by Jeff Squyres <jsquyres@lsc.nd.edu> A full-featured class library for MPI (1.1) from Laboratory for Scientific Computing of the Department of Computer Science and Engineering at the University of Notre Dame.

Provides full MPI-1.1 functionality; implemented as a thin layer on top of the C MPI bindings; offers convenient and intuitive object-oriented abstractions for message passing and uses many of the powerful semantic features of the C++ language, such as data typing, polymorphism, etc.

See http://www.cse.nd.edu/~lsc/research/oompi/
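
OOMPI's own class names are not reproduced here. The hypothetical wrapper below (the class name Port and its operators are inventions of this sketch, not OOMPI's API) merely illustrates the approach the announcement describes: a thin, type-safe layer over the C bindings built from C++ overloading, so the user never passes MPI_INT or buffer lengths by hand:

    #include <mpi.h>

    class Port {                        // hypothetical name, not from OOMPI
        int peer_;                      // rank of the process we talk to
    public:
        explicit Port(int peer) : peer_(peer) {}

        // Overload resolution selects the MPI datatype from the C++ type.
        Port &operator<<(int x) {
            MPI_Send(&x, 1, MPI_INT, peer_, 0, MPI_COMM_WORLD);
            return *this;
        }
        Port &operator>>(int &x) {
            MPI_Status status;
            MPI_Recv(&x, 1, MPI_INT, peer_, 0, MPI_COMM_WORLD, &status);
            return *this;
        }
    };

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0)      { Port p(1); p << 42; }       // send an int to rank 1
        else if (rank == 1) { int v; Port p(0); p >> v; } // receive it

        MPI_Finalize();
        return 0;
    }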

/parallel/journals/informatica-pdds
"Informatica - Special Issue on Parallel and Distributed Database Systems" by Katarzyna M Paprzycka <kmpst6+@pitt.edu> Call for papers for special issue of journal.

Topics: Distributed database modeling and design techniques; Parallel and distributed object management; Interoperability in multidatabase systems; Parallel on-line transaction processing; Parallel and distributed query optimization; Parallel and distributed active databases; Parallel and distributed real-time databases; Multimedia and hypermedia databases; Databases and programming systems; Mobile computing and databases; Transactional workflow control; Parallel and distributed algorithms; Temporal databases; Data mining/Knowledge discovery; Use of distributed database technology in managing engineering, biological, geographic, spatial, scientific, and statistical data; Scheduling and resource management and others.

Deadlines: Papers: 1st November 1996; Notification: 1st March 1997.

22nd July 1996

/parallel/events/hpcasia97
HPC ASIA'97 Conference and Exhibition by Joon Kwon Kim <hpc97@seri.re.kr>, http://www.seri.re.kr/HPC97.html Call for papers for conference being held from 21-25th April 1997 at Seoul, Korea. Hosted by Supercomputer Center, Systems Engineering Research Institute in cooperation with Parallel Processing Symposium Society of KISS.

Topics: Computational Chemistry and Biomedical Application; Communication and Computer Networks; Computational Physics / Astronomy; Computer Architecture; Computing Applications (Electronics, Fluid Dynamics, Meteorology/Environmental Science, Solid Mechanics); Data Mining; Parallel Algorithms; Parallel Programming Languages and Tools; High Performance Parallel System and Performance Evaluation; Scalable I/O; Applications on Information Superhighway; Scientific Visualization and Workstation Clustering.

Deadlines: Papers, Tutorials, Round-tables, Panels, Visual Presentations, Research Exhibits and Exhibitors: 15th November 1996.

19th July 1996

http://www.hpcc.ecs.soton.ac.uk/events.html
Writing Data Parallel Programs with HPF Call for attendance for course being held from 22-23rd July 1996 at Southampton, UK.

A two-day course offered in association with EPCC. We will follow the EPCC notes and course structure. Speakers from Edinburgh and Southampton will present the 8 sections of the course, interspersed with practical sessions using NA Software's HPF compiler on the Meiko CS-2. This course will only take place if there is sufficient interest. (Free to members of UK higher education institutions.)

16th July 1996

/parallel/events/hpcn-europe97
High-Performance Computing and Networking, Europe 1997 (HPCN Europe 97) by Jaap Hollenberg <sondjaap@horus.sara.nl> Call for papers, posters and workshops for conference being held from 28-30th April 1997 at Vienna, Austria.

Topics: End-user HPCN applications, computational science and computer science research in HPCN.

Deadlines: Extended Abstracts / Full Papers: 1st November 1996; Posters: 1st November 1996; Workshops: 1st November 1996; Notification: 1st February 1997.

See also http://www.wins.uva.nl/hpcn/ (From 15th August 1996)

/parallel/groups/wotug/java
Java Threads Workshop Created area for this workshop, being held from 23rd-24th September 1996 at the University of Kent at Canterbury. Includes announcement of workshop and on-line registration.
/parallel/internet/usenet/comp.parallel/FAQ/
Updated to latest versions of parts 02, 04, 06, 08, 10, 18 and 20 of the comp.parallel newsgroup FAQ.

15th July 1996

/parallel/vendors/
Added Parsys Ltd. Home page at http://www.parsys.com/

9th July 1996

/parallel/architecture/communications/io/pario/papers/Kotz/kotz:agents.ps.Z
Transportable Agents Support Worldwide Applications by Robert Gray <dfk@cs.dartmouth.edu> To appear in 1996 SIGOPS European Workshop. ABSTRACT: Worldwide applications exist in an environment that is inherently distributed, dynamic, heterogeneous, insecure, unreliable, and unpredictable. In particular, the latency and bandwidth of network connections varies tremendously from place to place and time to time, particularly when considering wireless networks, mobile devices, and satellite connections. Applications in this environment must be able to adapt to different and changing conditions. We believe that transportable autonomous agents provide an excellent mechanism for the construction of such applications. We describe our prototype transportable-agent system and several applications.
/parallel/architecture/communications/io/pario/papers/Kotz/nieuwejaar:galley.ps.Z
The Galley Parallel File System by Nils Nieuwejaar and David Kotz <dfk@cs.dartmouth.edu>. Copyright 1996 by ACM. Appeared in International Conference on Supercomputing, May 1996, pp. 374-381. ABSTRACT: As the I/O needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. This interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. We discuss Galley's file structure and application interface, as well as an application that has been implemented using that interface.
/parallel/architecture/communications/io/pario/papers/Kotz/nieuwejaar:galley-perf.ps.Z
Performance of the Galley Parallel File System by Nils Nieuwejaar and David Kotz <dfk@cs.dartmouth.edu>. Copyright 1996 by ACM. Appeared in IOPADS '96, May 1996, pp. 83-94. IOPADS is the Workshop on I/O in Parallel and Distributed Systems. ABSTRACT: As the I/O needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. This interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. Initial experiments, reported in this paper, indicate that Galley is capable of providing high-performance I/O to applications that access data in patterns that have been observed to be common.

8th July 1996

/parallel/transputer/software/drivers/linux/transputer-08b.tar.gz
Device-driver for B004/B008 Transputers for Linux Version 0.8b by Christoph Niemann <niemann@swt.ruhr-uni-bochum.de> Works via an INMOS B004-compatible link interface (most link interfaces should support this mode because it uses simple I/O instructions). Requires Linux 1.2.X or above; works up to 1.2.13, ELF and a.out.
/parallel/languages/code/
(Added descriptions of more CODE papers.) CODE is a visual parallel programming system. Programs are created by drawing and then annotating a directed graph that shows the structure of the parallel program. Nodes represent sequential computations; data flows on arcs interconnecting the nodes, which run in parallel.
/parallel/languages/code/CodeICS92.ps.Z
The CODE 2.0 Graphical Parallel Programming Language by Peter Newton <newton@cs.utexas.edu> and James C. Browne <browne@cs.utexas.edu>. Department of Computer Sciences, University of Texas at Austin, Austin, TX 78712, USA. ABSTRACT: CODE 2.0 is a graphical parallel programming system that targets the three goals of ease of use, portability, and production of efficient parallel code. Ease of use is provided by an integrated graphical/textual interface, a powerful dynamic model of parallel computation, and an integrated concept of program component reuse. Portability is approached by the declarative expression of synchronization and communication operators at a high level of abstraction in a manner which cleanly separates overall computation structure from the primitive sequential computations that make up a program. Execution efficiency is approached through a systematic class hierarchy that supports hierarchical translation refinement including special case recognition. This paper reports results obtained through experimental use of a prototype implementation of the CODE 2.0 system. CODE 2.0 represents a major conceptual advance over its predecessor systems (CODE 1.0 and CODE 1.2) in terms of the expressive power of the model of computation which is implemented and in potential for attaining efficiency across a wide spectrum of parallel architectures through the use of class hierarchies as a means of mapping from logical to executable program representations.
/parallel/languages/code/DissBook.ps.Z
A Unified Approach To Concurrent Debugging by Syed Irfan Hyder. PhD Thesis, University of Texas at Austin, December 1994. ABSTRACT: Debugging is a process that involves establishing relationships between several entities: The behavior specified in the program, P, the model/predicate of the expected behavior, M, and the observed execution behavior, E. The thesis of our approach is that a consistent representation for P, M and E greatly simplifies the problem of concurrent debugging, both from the viewpoint of the programmer attempting to debug a program and from the viewpoint of the implementer of debugging facilities. Provision of such a consistent representation becomes possible when sequential behavior is separated from concurrent or parallel structuring. Given this separation, the program becomes a set of sequential actions and relationships among these actions. The debugging process, then, becomes a matter of specifying and determining relations on the set of program actions. The relations are specified in P, modeled in M and observed in E. This simplifies debugging because it allows the programmer to think in terms of the program which he understands. It also simplifies the development of a unified debugging system because all of the different approaches to concurrent debugging become instances of the establishment of relationships between the actions. We define a formal model of concurrent debugging in which the entire debugging process is specified in terms of program actions. This unified model of concurrent debugging places all of the approaches to debugging of parallel programs such as execution replay, race detection, model/predicate checking, execution history displays and animation, which are commonly formulated as disjoint facilities, in a single, uniform framework. We have also developed a feasibility demonstration prototype of a debugger implementing this unified model of concurrent debugging in the context of the CODE 2.0 parallel programming system. This implementation demonstrates and validates the claims of integration of debugging facilities in a single framework. It is further the case that the unified model of debugging greatly simplifies the construction of a concurrent debugger. All of the capabilities previously regarded as separate for debugging of parallel programs, both in shared memory models of execution and distributed memory models of execution, have been given an implementation in this prototype.
/parallel/languages/code/DistrExecEnvironments.ps.Z
"Distributed Execution Environments for the CODE 2.0 Parallel Programming System" by Rajeev Mandayam Vokkarne Master Thesis dissertation, The University of Texas at Austin, May 1995. ABSTRACT: Writing parallel programs which are both efficient and portable has been a major barrier to effective utilization of parallel computer architectures. One means of obtaining portable parallel programs is to express the parallelism in a declarative abstract manner. The conventional wisdom is that the difficulty of translation of abstract specifications to executable code leads to loss of efficiency in execution. This thesis demonstrates that programs written in the CODE 2.0 representation where parallel structure is defined in declarative abstract forms can be straightforwardly compiled to give efficient execution on the distributed execution environment defined by the Parallel Virtual Machine (PVM) system. The CODE 2.0 model of programming casts parallel programs as dynamic hierarchical dependence graphs where the nodes are sequential computations and the arcs define the dependencies among the nodes. Both partitioned and shared name spaces are supported. This abstract representation of parallel structure is independent of implementation architecture. The challenge is to compile this abstract parallel structure to an efficient executable program. CODE 2.0 was originally implemented on the Sequent Symmetry shared memory multiprocessor and was shown to give executable code which was competitive with good hand coded programs in this environment. This thesis demonstrates that CODE 2.0 programs can be compiled for efficient execution on a distributed memory execution environment with a modest amount of effort. The environment chosen for this demonstration was PVM. PVM was chosen because it is available on a variety of distributed memory parallel computer architectures. Development of the translator from CODE 2.0 to the PVM execution environment required only a modest amount of effort. Translations to other distributed execution environments can probably be accomplished with a few man-weeks of effort. The efficiency of the executable is demonstrated by comparing the measured execution time of several parallel programs to hand-coded versions of the same algorithms.
/parallel/languages/code/DissUnifiedAppConcDbg.ps.Z
A Unified Approach To Concurrent Debugging by Syed Irfan Hyder. PhD dissertation, University of Texas at Austin, USA, December 1994. Abstract as for DissBook.ps.Z above.
/parallel/languages/code/Exp_Code_Hence.ps.Z
"Experiences with CODE and HeNCE in Visual Programming for Parallel Computing" by James C. Browne <browne@cs.utexas.edu>; Jack Dongarra; Syed I. Hyder; Keith Moor and Peter Newton <newton@cs.utexas.edu>. ABSTRACT: Visual programming has particular appeal for explicit parallel programming, particularly coarse grain MIMD programming. Explicitly parallel programs are multi-dimensional objects; the natural representations of a parallel program are annotated directed graphs: data flow graphs, control flow graphs, etc. where the nodes of the graphs are sequential computations. A visually based `directed graph' representation of parallel programs is thus more natural than a pure text string language where multi-dimensional structures must be implicitly defined. The naturalness of the annotated directed graph representation of parallel programs enables methods for programming and debugging which are qualitatively different and arguably superior to the conventional practice based on pure text string languages. Two visually-oriented parallel programming systems, CODE 2.0 and HeNCE, will be used to illustrate these concepts. The benefits of visually-oriented realizations of these models for program structure capture, performance analysis and debugging will be explored. It is only by actually implementing and using visual parallel programming languages that we have been able to fully evaluate their merits.
/parallel/languages/code/KleynDissBook.ps.Z
"A High Level Language for Specifying Graph-Based Languages and Their Programming Environments" by Michiel Florian Eugene Kleyn D.Phil Thesis Dissertation, University Of Texas At Austin, USA. August 1995. ABSTRACT: This dissertation addresses the problem of creating interactive graphical programming environments for visual programming languages that are based on directed graph models of computation. Such programming environments are essential to using these languages but their complexity makes them difficult and time consuming to construct. The dissertation describes a high level specification language, Glide, for defining integrated graphical/textual programming environments for such languages. It also describes the design of a translation system, Glider , which generates an executable representation from specifications in the Glide language. Glider is a programming environment generator; it automates the task of creating the programming environments used for developing programs in graphbased visual languages. The capabilities supported by the synthesized programming environments include both program capture and animation of executing programs. The significant concepts developed for this work and embodied in the abstractions provided by the Glide language are: an approach to treating programs as structured data in a way that allows an integrated representation of graph and text structure; a means to navigate through the structure to identify program components; a query language to concisely identify collections of components in the structure so that selective views of program components can be specified; a unified means of representing changes to the structure so that editing, execution, and animation semantics associated with the language can all be captured in a uniform way; and a means to associate the graphical capabilities of user interface libraries with displaying components of the language. The data modeling approach embodied in the Glide specification language is a powerful new way of representing graph-based visual languages. The approach extends the traditional restricted mechanisms for specifying composition of text language structure. The extensions allow programming in visual languages to be expressed as a seamless extension of programming in text-based languages. A data model of a graph-based visual language specif ied in Glide forms the basis for specifying the program editing, language execution semantics, and program animation in a concise and abstract way.
/parallel/languages/code/KleynRecursiveTypes.ps.Z
Data Types for Graph-Based Visual Programming by Michiel F. Kleyn <kleyn@cs.utexas.edu>, Department of Computer Sciences, The University of Texas at Austin, Austin TX 78712, USA ABSTRACT: This paper argues the appropriateness of using data types with sharing to characterize the underlying data structures of a large category of graphical programming interfaces - those interfaces which include building programs by interactively manipulating graphical elements in graphs as well as editing characters and words in text. The paper examines the difficulties in providing direct formal representations of the definitions and manipulations of such 'graph' data types that allow sharing of structure. The problem of formalizing the class is shown to be closely related to similar problems that arise in many different areas including the specification of abstract data types, functional programming, and models of object-oriented and network databases. The paper presents the particular approach used in the context of our work on a high-level specification language for describing interactive graphical programming environments and an associated generator.
/parallel/languages/code/newton_diss.tar.Z
"A Graphical Retargetable Parallel Programming Environment and Its Efficient Implementation" by Peter Newton <newton@cs.utexas.edu> Dissertation, Dept. of Computer Sciences, University of Texas at Austin. December 1993 ABSTRACT: This dissertation addresses the problem of facilitating the development of efficiently executing programs for multiple-instruction multi-datastream (MIMD) parallel computers. The family of MIMD parallel computer architectures is the most flexible and most widely applicable means of meeting requirements for very high performance computation. It is widely accepted, however, that current methods of preparing programs for these systems are inadequate and are the primary bottleneck for attainment of these machines' potential.

It is difficult to write programs which are both correct and efficient even for a single MIMD parallel architecture. A program which is efficient in execution on one member of this architecture class is often either not portable at all to different members of the architecture class, or if portability is possible, the efficiency attained is usually not satisfactory on any architecture.

The conceptual basis of the approach we have taken to the problem of programming MIMD parallel architectures is to raise the level of abstraction at which parallel program structures are expressed and to move to a compositional approach to programming. The CODE 2.0 model of parallel programming permits parallel programs to be created by composing basic units of computation and defining relationships among them. It expresses the communication and synchronization relationships of units of computation as abstract dependencies. Runtime-determined communication structures can be expressed.

Ready access to these abstractions is provided by a flexible graphical interface in which the user can specify them in terms of extended directed graphs. Both ease of preparation of correct programs and compilation to efficient execution on multiple target architectures are enabled. The compositional approach to programming focuses the programmer's attention upon the structure of the program, rather than development of small unit transformations. In the CODE 2.0 system, the units of computation are prepared using conventional sequential programming languages along with declaratively specified conditions under which the unit is enabled for execution.

The system is built upon a unique object-oriented model of compilation in which communication and synchronization mechanisms are implemented by parameterized class templates which are used to custom tailor the translation of abstract specifications in communication and synchronization to efficient local models.

The attainment of the goals of the research is measured in the following ways. There have been several uses of the CODE 2.0 system by casual users in parallel programming classes. The results are uniformly positive; the programs which are developed are simple and easy to read, and execute at least as efficiently as programs written in conventional parallel languages. Experimental measurement of the execution behavior of benchmark programs has shown that the executable code generated by CODE 2.0 is efficient, often within 5% of the performance of hand-generated parallel programs and sometimes more efficient. Portability with retention of efficiency of execution has been demonstrated by implementations in two different execution environments: the synchronous message paradigm given by Ada, and the shared-memory environment of the Sequent Dynix operating system.

/parallel/languages/code/SC93tut.ps.Z
"SC 93 Graph/Visual Abstract Models and tools in Parallel Computation" by J.C. Browne <browne@cs.utexas.edu> and Peter Newton <newton@cs.utexas.edu>. Dept. of Computer Sciences, University of Texas at Austin.
/parallel/languages/code/wet94.ps.Z
Chapter 1: Visual Programming and Parallel Computing by Peter Newton ABSTRACT: Visual programming languages have a number of advantages for parallel computing. They integrate well with programming environments and graphical program behavior visualization tools, and they present programmers with useful abstractions that aid them in understanding the large-scale structure of their programs. Such understanding is important for achieving good execution performance of parallel programs. Furthermore, graphical programming languages can be easier for non-specialists to use than other explicitly parallel languages since they relieve the programmer of the need to directly use low-level primitives such as message sends or locks. This paper discusses some of these general advantages and presents simple examples in the existing visual parallel programming languages, HeNCE and CODE 2.0.
/parallel/tools/editors/folding-editors/fe-mh/fe-0.3.tz
FE Folding Editor Version 0.3 by Michael Haardt <michael@cantor.informatik.rwth-aachen.de>
/parallel/standards/mpi/anl/errata-1.1.ps.Z
MPI Report V1.1 Errata
/parallel/environments/pvm3/distribution/pvm3.3.11.tar.gz
/parallel/environments/pvm3/distribution/pvm3.3.11.tar.Z
/parallel/environments/pvm3/distribution/pvm3.3.11.shar.Z
/parallel/environments/pvm3/distribution/pvm3.3.11.tar.Z.uu.Z
PVM V3.3.11: Parallel Virtual Machine System by A. L. Beguelin; J. J. Dongarra; G. A. Geist; W. C. Jiang; R. J. Manchek; B. K. Moore and V. S. Sunderam. University of Tennessee, Knoxville TN, USA; Oak Ridge National Laboratory, Oak Ridge TN, USA; Emory University, Atlanta GA, USA.
/parallel/environments/pvm3/distribution/pvm3310to11.Z
Patch from PVM V3.3.10 to V3.3.11
/parallel/standards/mpi/anl/mpi2/apr2496a.ps.Z
/parallel/standards/mpi/anl/mpi2/apr2496a.dvi
Minutes of MPI Meeting April 24-26, 1996, Chicago, USA by William Gropp <gropp@mcs.anl.gov> An unedited set of minutes taken during this MPI meeting. This contains both a summary of some of the discussions and official, binding votes of the MPI Forum. [binary document]
/parallel/transputer/software/compilers/gcc/pereslavl/ttools/ttools-2.0alpha35.tar.gz
Ttools V2.0 alpha 35 by Yury Shevchuk <sizif@botik.ru> A free package comprising assembler, linker, loader(s), and auxiliary programs intended for software development for transputers. It is designed to work with gcc-t800, a port of GNU C/C++ Compiler to transputers.
/parallel/simulation/communications/chaos/docs/guide.ps.gz
A Guide to Literature on Chaotic Routing
/parallel/standards/mpi/mpimap/MPIMap.tar.Z
MPIMap 1.1.2 - A tool for visualizing MPI datatypes by John May <johnmay@llnl.gov> Must run on the target parallel machine for which the MPI code is being developed, since it calls MPI to determine datatype layouts and sizes.

Designed to work only with the MPICH implementation of MPI and has been tested with versions V1.0.10 through V1.0.13. It also requires Tcl and Tk; it has been tested with Tcl7.3/Tk3.6, Tcl7.4/Tk4.0 and Tcl7.5/Tk4.1, and works best with a colour display.
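
The reason the tool must run on the target machine is that datatype sizes and extents are implementation and platform dependent, so the only reliable way to discover a layout is to ask the local MPI library. The sketch below shows the kind of MPI-1 query involved; it illustrates the principle only and is not MPIMap's source code:

    // Build a derived datatype (an int followed by a double) and ask
    // the local MPI for its size (bytes of data) and extent (span
    // including any alignment padding).
    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int          blocklens[2] = { 1, 1 };
        MPI_Aint     displs[2]    = { 0, (MPI_Aint)sizeof(int) };
        MPI_Datatype types[2]     = { MPI_INT, MPI_DOUBLE };
        MPI_Datatype pair;
        MPI_Type_struct(2, blocklens, displs, types, &pair);
        MPI_Type_commit(&pair);

        int      size;
        MPI_Aint extent;
        MPI_Type_size(pair, &size);
        MPI_Type_extent(pair, &extent);
        std::printf("size = %d bytes, extent = %ld bytes\n",
                    size, (long)extent);

        MPI_Type_free(&pair);
        MPI_Finalize();
        return 0;
    }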

/parallel/transputer/software/compilers/gcc/pereslavl/
GCC T800 backend V8 update
/parallel/transputer/software/compilers/gcc/pereslavl/gcc-2.7.2/changes8
Changes in gcc-2.7.2-t800.8
/parallel/transputer/software/compilers/gcc/pereslavl/gcc-2.7.2/gcc-2.7.2-t800.8.dif.gz
gcc-2.7.2 for t800 (source diff) V8
/parallel/transputer/software/compilers/gcc/pereslavl/gcc-2.7.2/patch8.gz
Patch from V7 to V8
/parallel/environments/pvm3/tkpvm/
tkPvm is the result of a wedding. The husband is pvm3.3.x (preferably 3.3.10) and the wife is Tcl7.5/Tk4.1. As usual with a marriage, both sides profit from the combination. See also http://www.nici.kun.nl/tkpvm/
/parallel/environments/pvm3/tkpvm/tkpvm1.0.tar.gz
/parallel/environments/pvm3/tkpvm/tkpvm1.0.README
tkPvm Version 1.0 by Jan Nijtmans <nijtmans@nici.kun.nl>, http://www.nici.kun.nl/~nijtmans/, Nijmegen Institute of Cognition and Information (NICI), Netherlands
