Newest entries are first. Older changes can be found here.

30th November 1995

/parallel/languages/sisal/distribution/OSC-13.0.3.tar.Z
Optimizing Sisal Compiler (OSC) V13.0.3 Native Compiler and Debugger by Pat Miller <patmiller@llnl.gov>; Scott Denton <smd@llnl.gov>; Rea Simpson; David Cann; S. Harikrishnan and Rod Oldehoeft. CRG/OSC Development Crew, Lawrence Livermore National Laboratory, L-306, Livermore, CA 94550, USA. Tel: +1 (510) 423-0309. Contains compiler and run-time library software for running SISAL programs on various machines: the SISAL compiler (osc); a run-time support library written in C; a utility program for multiprocessing; and manual pages and utilities. Ported to: SGI IRIS with IRIX 4.04; Cray C90 with UNICOS 7.c; Meiko CS-2 with Solaris 2.1; IBM RS6000 with AIX; Sun 3 with UNIX 4.2; Sun Sparc 10 with Solaris 2.3; DEC Decstation with ULTRIX V4.3 R44; Mac with MachTen 2.1.1; PC x486 with LINUX and Cray T3D with UNICOS. Bugs to sisal-bugs@sisal.llnl.gov
/parallel/languages/sisal/distribution/sisal.tutorial.files.tar.Z
SISAL Tutorial Files Available in HTML, MacWrite, PostScript and plain text.
/parallel/languages/sisal/distribution/mini-faq.html
/parallel/languages/sisal/distribution/mini-faq.txt
Short version of the full SISAL FAQ (frequently asked questions)
/parallel/standards/mpi/anl/sut-1.0.25.tar.Z
Scalable Unix Tools V1.0.25 by William Gropp <gropp@mcs.anl.gov> and Ewing Lusk <lusk@mcs.anl.gov>. Mathematics and Computer Science Division, Argonne National Laboratory, USA. Includes the tools pps, pls, load, gload, prun, pkill, prm, pdistrib, pfind, fps, pfps etc. and a paper. Requires rsh. Bug reports to sut-maint@mcs.anl.gov

29th November 1995

/parallel/libraries/communication/c4/ds++-951128.tar.gz
DS++ - the C++ Data Structure Library of 28th November 1995 Author: Geoffrey Furnish <furnish@dino.ph.utexas.edu>, http://dino.ph.utexas.edu/~furnish, Institute for Fusion Studies, University of Texas at Austin, Austin, TX 78712, USA

28th November 1995

/parallel/events/hpcs96
10th Annual International Conference on High Performance Computers (HPCS 96) by Frank Dehne <dehne@scs.carleton.ca> Call for papers for Conference being held from 5th-7th June 1996 at Citadel Inn, Ottawa, Canada. Organised by: SUPER*CAN; the Universities of Ottawa and Carleton; IEEE (Ottawa) and OPCOM. Topics: Parallel, distributed or vector systems in these topic areas: robotics; embedded systems; signal and image processing; metacentres; parallel distributed processing; photonics networks; neural networks adapted to high performance computers; speech synthesis; real time data processing; visual presentation of parallel systems; performance measures and telecommunications. Deadlines: Preliminary Abstracts: 15th December 1995; Full Papers: 5th February 1996; Notification: 5th March 1996 and Final Papers: 31st March 1996. See also http://www.ieee.ca/supercan/hpcs96.html/
/parallel/environments/pvm3/glenda/glenda.tar.Z
Updated: "Glenda 1.0 Distribution" by Ray Seyfarth <seyfarth@whale.st.usm.edu>; Jerry Bickham and Suma Arumugham. University of Southern Mississippi, Hattiesburg, MS, USA. Includes sources, documentation and examples.
/parallel/environments/pvm3/emory-vss/pvm_naskernels.tar.Z
PVM versions of 5 NAS Parallel Benchmark kernels by Anders Alund. Benchmarks contained: pvmep, pvmcg, pvmmg, pvmmfr, pvmmis
/parallel/environments/pvm3/pvanim/pvanimOL.tar.Z
Updated: "PVaniM 2.0: Online and Postmortem Visualization Support for PVM" by Brad Topol, Georgia Institute of Technology; John T. Stasko <stasko@cc.gatech.edu>, Georgia Institute of Technology and Vaidy Sunderam, Emory University. The PVaniM 2.0 system provides online and postmortem visualization support as well as rudimentary I/O for long running, communication-intensive PVM applications. PVaniM 2.0 provides these features while using several techniques to keep system perturbation to a minimum. Questions, comments and complaints to: pvanim@cc.gatech.edu
/parallel/environments/chimp/ssp/ssp_95-vispat.ps.Z
VISPAT (VISualisation of Performance Analysis and Tuning) - Application Engineering Tools for MPI and PUL by Patricio R. Domingues <patricio@ssp.epcc.ed.ac.uk> EPCC Technical Report EPCC-SS95-12. ABSTRACT: VISPAT is a post-mortem visualisation tool based on the concept of program execution phases. It consists of several graphic displays, each of which presents a different aspect of the parallel program under consideration. Execution related information is collected at run-time in trace files by using calls to an instrumentation library. The processing of the trace files by VISPAT results in a graphical playback of all recorded run-time events. This report describes the enhancements and changes performed in VISPAT during this year's project.
/parallel/environments/chimp/ssp/ssp_94-vispat.ps.Z
VISPAD (VISualisation tool for Performance Analysis and Debugging) - Application Engineering Tools for MPI and PUL by Kesavan; Shanmugam and Konstantinos. ABSTRACT: This report describes the adaptation of VISPAD, a visualisation tool for performance analysis and debugging, from the CHIMP message passing system to the recently established MPI standard. VISPAD is a post-mortem visualisation tool based on the concept of program execution phases. It consists of a number of displays, each of which presents a different aspect of the parallel program under consideration. Execution related information is collected at run-time in trace files by using calls to an instrumentation library. The processing of the trace files by VISPAD results in a graphical playback of all the recorded run-time events. The process of adapting VISPAD to MPI included a restructuring of the instrumentation library, the implementation of an instrumented version of the MPI interface, changing the format of the trace files, the adaptation of existing displays, and the introduction of two new displays. The latter serve the purpose of visualising the rich set of communication operations supported by MPI.
/parallel/environments/chimp/ssp/ssp_93-vispat.ps.Z
VISPAD (VISualisation tool for Performance Analysis and Debugging) - Application Engineering Tools for CHIMP and PUL by K-J. Wierenga, September 1993. ABSTRACT: This project is concerned with the implementation of a visualisation tool for performance analysis and debugging - VISPAD. The tool's interface is based on Anna Hondroudakis' thesis work on visualisation tools for parallel applications. VISPAD processes information produced by a run of a parallel application. Information about the application run is recorded in trace files by instrumented versions of the CHIMP and PUL libraries and by instrumentation library calls added to the application. VISPAD can then be used to provide postmortem visualisation by replaying the application run from the information in the trace files. Visualisation is provided by a number of graphical displays, which show different aspects of the performance of the parallel application. In this way, it is hoped, the programmer will be assisted in the debugging and optimisation of her/his program. In the nine weeks of the project, three of VISPAD's displays were implemented. The Navigation Display provides a rich system of temporal abstractions (phases) to present a concise view of the application run to the user, allowing her/him to easily locate particular areas of interest. The Membership Matrix Display shows how the different processes in the parallel application join various SAP groups and the way group memberships change over time. The CHIMP Level Animation Display reconstructs CHIMP communications between processes.
/parallel/environments/pvm3/tkpvm/
Update to TkPVM patches for Tcl 7.5a2, Tk 4.1a2 and Tix 4.0+a1
/parallel/standards/mpi/anl/using/examples/advanced/nbodyfinal.c
/parallel/standards/mpi/anl/using/examples/advanced/pipe.c
/parallel/standards/mpi/anl/using/examples.tar.Z
Updated "Using MPI" examples and all the examples in one file.
/parallel/environments/lam/distribution/lam60-doc.tar.gz
LAM 6.0 Documentation by LAM Project <lam@tbag.osc.edu>, http://www.osc.edu/lam.html, Ohio Supercomputer Center, Ohio State University, USA. Copyright 1995 The Ohio State University. Available under the GNU General Public License version 2 or later. Contains manual pages, documentation, tutorials and examples.
/parallel/environments/pvm3/povray/pvmpov28.tar.gz.txt
Patches to add parallel processing to POV-Ray. Requires PVM 3.3. Author: Andreas Dilger <adilger@enel.ucalgary.ca>, http://www-mddsp.enel.ucalgary.ca/People/adilger/, Micronet Research Group, Dept of Electrical & Computer Engineering, University of Calgary, Canada
/parallel/environments/pvm3/povray/pvmpov28.tar.gz
PVM'd POV-Ray by Andreas Dilger <adilger@enel.ucalgary.ca>, http://www-mddsp.enel.ucalgary.ca/People/adilger/, Micronet Research Group, Dept of Electrical & Computer Engineering, University of Calgary, Canada. A PVM patch for POV-Ray, based on original work by Brad Kline of Cray Research Inc.

27th November 1995

/parallel/events/hinet96
2nd International Workshop on High-Speed Network Computing (HiNet '96) by Dr. Mounir Hamdi <hamdi@cs.ust.hk> Call for papers for workshop at the 10th International Parallel Processing Symposium (IPPS '96) being held at Sheraton Waikiki Hotel, Honolulu, Hawaii, USA. Topics: Computing paradigms for high speed networks; High-speed network protocols for parallel and distributed processing; Programming environments and tools for high-speed network computing; Performance evaluation and simulation; Experimentation and test-beds; Architecture and topology of high-speed networks; Switching and routing techniques; Distributed data-base design and processing on high speed networks; Scheduling and load balancing aspects of high-speed network computing; Efficient communication interfaces; Mobile computing; Mapping parallel algorithms onto high-speed networks; Algorithm design and analysis for high-speed networks and Applications and software tools. Deadlines: Papers: 11th December 1995; Notification of acceptance: 10th January 1996 and Camera ready papers: 9th February 1996. See also http://www.usc.edu/dept/ceng/prasanna/home.html/ and http://www.cs.ust.hk/Postings/Hinet.html/
/parallel/events/europar96-par-lang-prog
Euro-Par'96 Workshop #5: Parallel Languages and Programming by Ian Foster <itf@dalek.mcs.anl.gov> Call for papers for workshop being held from 27th-29th August 1996 at ENS Lyon, France. Topics: Parallel & agent languages; Heterogeneous systems; Application-specific languages; Programming methodology; Programming models; Formal verification; Runtime systems & compilers and Computer-aided derivation. Deadlines: Paper submission: 4th February 1996; Electronic submissions: 18th February 1996; Notification of acceptance: 10th May 1996; Final papers: 10th June 1996. See also http://www.ens-lyon.fr/LIP/europar96/
/parallel/events/europar96-sched-load-bal
Euro-Par'96 Workshop 17: Scheduling and Load Balancing by Apostolos Gerasoulis <gerasoul@cs.rutgers.edu> Call for papers for workshop being held from 27th-29th August 1996 at ENS Lyon, France. Topics: Static and dynamic scheduling; Load balancing and resource management; Program and data partitioning; Compile-time and run-time optimization; Software tools and programming systems with respect to scheduling and load balancing on homogeneous or heterogeneous platforms; Scheduling and load balancing in scientific computing and multi-media applications and others. Deadlines: Paper submission: 4th February 1996; Electronic submissions: 18th February 1996; Notification of acceptance: 10th May 1996; Final papers: 10th June 1996. See also http://www.ens-lyon.fr/LIP/europar96/

16th November 1995

/parallel/environments/pvm3/tape-pvm/ReadMe
Overview of files and patches Author: Eric Maillet <maillet@imag.fr>
/parallel/environments/pvm3/tape-pvm/tape0.9pl7.tgz
Tape/Pvm 0.9 Patch level 7 sources including instructions on setting up, building and installing the distribution. Changes: removed the clock synchronisation tasks - clock synchronisation is now done using successive calls to pvm_hostsync. The user interface to Tape/Pvm is unchanged. Author: Eric Maillet <maillet@imag.fr>, LMC-IMAG, Grenoble, France
/parallel/teaching/hpctec/epcc/tech-watch/epic.tar.Z
EPCC-TEC EPIC Interactive Course Package - High Performance Fortran by Mario Antonioletti <epcc-tec@epcc.ed.ac.uk> and Alistair Ewing. Edinburgh Parallel Computing Centre, Edinburgh, UK. Slides.
/parallel/simulation/emulation/sb-pram/fork95/
Fork95 compiler for a synchronous massively-parallel MIMD machine, the SB-PRAM. FORK is an imperative parallel programming language that supports a synchronous data parallel programming style as well as a recursive divide-and-conquer paradigm. The SB-PRAM simulator runs on Sun workstations. See also http://www-wjp.cs.uni-sb.de/fork95/fork95.html
/parallel/simulation/emulation/sb-pram/fork95/Announcement
Announcement of Fork95 compiler for SB-PRAM Author: Christoph Kessler <kessler@verleihnix.uni-sb.de>
/parallel/simulation/emulation/sb-pram/fork95/README
Overview of SB-PRAM and Fork95 Authors: Michael Bosch <hirbli@cs.uni-sb.de>; Stefan Franziskus <stefran@cs.uni-sb.de> and Christoph W. Kessler. University of Saarbruecken, Germany.
/parallel/simulation/emulation/sb-pram/fork95/SBPRAM-Fork95.tar.gz
SB-PRAM and Fork95 simulator sources Authors: Michael Bosch <hirbli@cs.uni-sb.de>; Stefan Franziskus <stefran@cs.uni-sb.de> and Christoph W. Kessler. University of Saarbruecken, Germany.

15th November 1995

/parallel/vendors/tmc/chapter11-pr
Thinking Machines Corporation Files Plan to Emerge from Bankruptcy Protection by Joshua Spiewak <jss@Think.COM> Press release from Thinking Machines Corporation.
/parallel/jobs/axiomtech-usa
Axiom Technology, Madison, Alabama, USA is seeking individuals experienced in the multiprocessor market. See also http://www.axiomtech.com/ Author: Ted Thompson <Ted.Thompson@axiomtech.com>
/parallel/jobs/adelade-australia-researcher
The Computer Science Department, University of Adelaide, Australia requires a Fellow/Research Fellow/Postdoctoral Fellow for the Distributed High Performance Computing Project (DHPC). Requirements: research record in distributed computing, supercomputing and/or distributed data management. See also http://www.cs.adelaide.edu.au/~dhpc/DHPC.html and ftp://cisr.anu.edu.au/pub/DHPC/ Author: Matthew Wilson <matt@chook.cs.adelaide.edu.au>
/parallel/jobs/utaustin-grad-study
The Computational and Applied Mathematics program at the University of Texas at Austin, USA is looking for graduate students in this area. See also: http://www.ticam.utexas.edu Author: Robert van de Geijn <rvdg@cs.utexas.edu>
/parallel/jobs/epcc-96-summer-students
Edinburgh Parallel Computing Centre's Summer Scholarship Programme is looking for students to spend 10 weeks working on EPCC's HPC systems in Summer 1996. Requirements: 2 years undergrad study; some programming experience. See also http://www.epcc.ed.ac.uk/ssp/ Author: EPCC SSP <epccssp@epcc.edinburgh.ac.uk>
/parallel/jobs/llnl-meiko-hw-eng
Lawrence Livermore National Laboratory, California, USA needs a hardware support and maintenance engineer for their Meiko MPP CS-2 system. Requirements: CS-2 and Solaris experience. US citizens only. Author: John Fuchs-Chesney <jfc@interserv.com>
/parallel/jobs/emory-postdoc
The Department of Mathematics and Computer Science at Emory University, USA invites applications for a Postdoctoral Research Associate in the areas of distributed and collaborative computing. PhD required. Author: Post Doctoral Position <postdoc@mathcs.emory.edu>
/parallel/events/bulgaria-num-apps
First Bulgarian Workshop on Numerical Analysis and Applications by Katarzyna M Paprzycka <kmpst6+@pitt.edu> Call for papers for workshop being held from 24th-26th June 1996 at Russe, Bulgaria. Sponsored by the ACM Special Interest Group on Numerical Mathematics and the Society for Industrial and Applied Mathematics. Topics: Numerical Linear Algebra; Numerical Methods for Differential Equations; Numerical Modeling; High Performance Scientific Computing and others. Deadlines: Abstract: 1st December 1995; Full papers: 28th February 1996.
/parallel/events/bsp-worldwide
BSP Worldwide Inaugural Meeting by Bob McLatchie <rcfm@ecs.oxford.ac.uk> Call for participation in meeting being held on 4th December 1995 at Oxford University Computing Laboratory, Wolfson Building, Parks Road, Oxford, UK.
/parallel/events/ukparallel96
UK Parallel 1996 Call for papers for conference being held from 3rd-5th July 1996 at University of Surrey, Guildford, UK. Sponsored by BCS Parallel Processing Specialist Group (PPSG). Topics: application experience; algorithm development; benchmarking; performance and system evaluation; management issues; architecture of parallel computer systems; networks for parallel and distributed computers; compilation techniques; libraries and applications development environments and others. Deadlines: Extended Abstracts: 11th January 1996; Notification: 29th February 1996; Full papers: 31st March 1996; Extended Abstracts for experience sessions: 1st April 1996. See also http://orc.ee.surrey.ac.uk/UKPAR96/
/parallel/events/iwpia95
International Workshop on Parallel Image Analysis (IWPIA'95) by Jean-Marc Nicod <Jean-Marc.Nicod@cri.ens-lyon.fr> Call for participation, programme and registration form for workshop being held from 7-8th December 1995 at Ecole Normale Superieure de Lyon, Lyon, France. The workshop will consist of invited and contributed papers on models, algorithms, and architectures for parallel image analysis. See also http://www.ens-lyon.fr/LIP/IWPIA/
/parallel/events/wdag96
10th International Workshop on Distributed Algorithms (WDAG'96) by Aleta Ricciardi <aleta@ece.utexas.edu> Call for papers for workshop being held from 9th-11th October 1996 at Bologna, Italy. Sponsored by Department of Computer Science, University of Bologna, CaberNet: ESPRIT Network of Excellence in Distributed Systems Architectures, Italian National Research Council CNR-GNI. Topics: Algorithms for control and communication; Distributed searching, resource discovery and retrieval; Network protocols, applications and services; Fault tolerance and high availability; Real-time distributed systems; Algorithms for dynamic topology management; Mobile computing; Distributed intelligent agents; Issues in synchrony, asynchrony, scalability and real-time; Issues in replicated data management, consistency vs. availability; Security in distributed systems; Self stabilization; Wait-free algorithms; Techniques and paradigms for the design and analysis of distributed systems and others. Deadlines: Papers: 26th April 1996; Notification: 20th June 1996; Camera-ready papers: 15th July 1996. See also http://www.cs.unibo.it/wdag96/
/parallel/events/cosmase-intro-par
COSMASE Course on Parallel Computation: A Practical Introduction to High Performance Parallel Computing by Mark SAWLEY <sawley@imhfhp33.epfl.ch> Call for participation for course being held from 12th-16th February 1996 at Lausanne, Switzerland. The aim of this practical course is to provide a broad overview of the essential components of present-day high performance parallel computing. See also http://imhefwww.epfl.ch/COSMASE/
/parallel/events/irregular96
Third International Workshop on Parallel Algorithms For Irregularly Structured Problems (IRREGULAR'96) by Jose D. P. Rolim <rolim@cui.unige.ch> Call for papers for workshop being held from 19th-21st August 1996 at Santa Barbara, USA. Sponsored by the IFIP WG 10.3, the EATCS, the Laboratoire de l'Informatique du Parallelisme de l'ENS Lyon, the University of Geneva and the University of California at Santa Barbara. Topics: applications, approximating and randomized methods, automatic synthesis, branch and bound, combinatorial optimization, compiling, computer vision, load balancing, parallel data structures, scheduling and mapping, sparse matrix and symbolic computation and others. Deadlines: Extended abstracts: 15th March 1996; Notification: 19th May 1996; Camera-ready papers: 1st June 1996.
/parallel/vendors/telmat
Added information for Telmat Multinode

14th November 1995

/parallel/environments/chimp/release/
CHIMP (Common High-level Interface to Message Passing) was developed by the Edinburgh Parallel Computing Centre at the University of Edinburgh in an effort to provide a portable, programmable and efficient message passing library upon which EPCC's application work could be based. Since its inception in 1991 CHIMP has evolved from prototype through Version 1 to the current Version 2 interface specification.
/parallel/environments/chimp/release/chimpv2.1.1c.tar.Z
CHIMP V2.1.1c source distribution by R. Alasdair A. Bruce; James G. Mills and A. Gordon Smith. Edinburgh Parallel Computing Centre, James Clerk Maxwell Building, The King's Buildings, Edinburgh EH9 3JZ, UK. CHIMP Version 2.0 is designed to run on a variety of different platforms: Sun workstations running SunOS 4.1.x; Sun workstations running Solaris 2.x; Silicon Graphics running IRIX 4; Silicon Graphics running IRIX 5; IBM RS/6000 running AIX 3.2; Sequent Symmetry; DEC Alpha AXP running OSF/1; Meiko Computing Surface - transputer node; Meiko Computing Surface - i860 node (MK096); Meiko Computing Surface - SPARC node.
/parallel/environments/chimp/release/chimp-axposf.tar.Z
CHIMP binary distribution for DEC Alpha AXP with OSF/1
/parallel/environments/chimp/release/chimp-sgi5.tar.Z
CHIMP binary distribution for Silicon Graphics with IRIX 5
/parallel/environments/chimp/release/chimp-sun4.tar.Z
CHIMP binary distribution for Sun Sparc workstations running SunOS 4.1.x
/parallel/environments/chimp/release/chimp-sun5.tar.Z
CHIMP binary distribution for Sun Sparc workstations running Solaris 2.x
/parallel/environments/pvm3/pvanim/pvanimOL.tar.Z
PVaniM 2.0: Online and Postmortem Visualization Support for PVM by Brad Topol, Georgia Institute of Technology; John T. Stasko <stasko@cc.gatech.edu>, Georgia Institute of Technology and Vaidy Sunderam, Emory University. The PVaniM 2.0 system provides online and postmortem visualization support as well as rudimentary I/O for long running, communication-intensive PVM applications. PVaniM 2.0 provides these features while using several techniques to keep system perturbation to a minimum.
/parallel/languages/sisal/distribution/OSC-MANUAL.12.7.tar.Z
Optimizing Sisal Compiler Manual
/parallel/environments/pvm3/distribution/pvm3.3.10.tar.gz
/parallel/environments/pvm3/distribution/pvm3.3.10.tar.Z
/parallel/environments/pvm3/distribution/pvm3.3.10.shar.Z
/parallel/environments/pvm3/distribution/pvm3.3.10.tar.Z.uu.Z
PVM V3.3.10: Parallel Virtual Machine System by A. L. Beguelin; J. J. Dongarra; G. A. Geist; W. C. Jiang; R. J. Manchek; B. K. Moore and V. S. Sunderam. University of Tennessee, Knoxville TN, USA; Oak Ridge National Laboratory, Oak Ridge TN, USA; Emory University, Atlanta GA, USA. PVM is a software system that enables a collection of heterogeneous computers to be used as a coherent and flexible concurrent computational resource. The individual computers may be shared- or local-memory multiprocessors, vector supercomputers, specialized graphics engines, or scalar workstations, interconnected by a variety of networks such as Ethernet or FDDI. User programs written in C, C++ or Fortran access PVM through library routines.
/parallel/environments/pvm3/distribution/pvm339to10.Z
Patch from PVM V3.3.9 to V3.3.10
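For readers new to the PVM library described above, here is a minimal master/worker sketch, written as C++ calling the standard PVM C interface. It is illustrative only: it assumes pvm3.h and libpvm3 are installed, that the pvmd daemon is already running, and that the compiled binary is installed under the (assumed) name "hello" where PVM can spawn it.

    #include <cstdio>
    #include "pvm3.h"

    int main()
    {
        int mytid = pvm_mytid();          /* enroll in the virtual machine */
        if (mytid < 0) {
            std::fprintf(stderr, "no PVM daemon reachable\n");
            return 1;
        }
        if (pvm_parent() == PvmNoParent) {
            /* Master: spawn one worker copy of this program ("hello" is
               an assumed executable name) and wait for its message. */
            int tid;
            pvm_spawn((char *)"hello", (char **)0, PvmTaskDefault,
                      (char *)"", 1, &tid);
            pvm_recv(-1, -1);             /* any source, any message tag */
            char buf[64];
            pvm_upkstr(buf);
            std::printf("master received: %s\n", buf);
        } else {
            /* Worker: pack a string and send it to the parent task. */
            pvm_initsend(PvmDataDefault);
            pvm_pkstr((char *)"hello from worker");
            pvm_send(pvm_parent(), 1);
        }
        pvm_exit();                       /* leave the virtual machine */
        return 0;
    }

The same calls work unchanged from C; PVM's strength is that the master and worker tasks may run on machines of different architectures within one virtual machine.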
/parallel/transputer/documentation/st020-450/datasheets/st20450.ps.gz
SGS-Thomson ST20-450 Transputer Datasheet. 32-bit microprocessor - 32-bit CPU; 0-40 MHz processor clock; 32 MIPS at 40MHz; fast integer/bit operations; 16K on-chip SRAM; 160Mbytes/s max bandwidth; Programmable memory interface: 4 separately configurable regions, 8/16/32-bits wide, support for mixed memory, 2 cycle external access, support for page mode DRAM; Serial communications; 4 OS-Links - 5/10/20 Mbits/s Link0, 10/20 Mbits/s Link1-3; Event channel; Vectored interrupt subsystem - fully prioritized interrupts, 8 levels of preemption, 500 ns response time; Power management - low power operation, power down mode; Professional toolset support - ANSI C compiler and libraries, INQUEST advanced debugging tools; Technology - 0 to 40 MHz processor clock, 0.5 micron process technology, 3V operation, 3V outputs/bi-directionals, 5V inputs, 208 pin PQFP package; Test Access Port. 106 pages. 4MB uncompressed.
/parallel/standards/mpi/anl/oct23.ps.Z
/parallel/standards/mpi/anl/oct23.dvi
Minutes of MPI Meeting October 23-25, 1995 by Rusty Lusk <lusk@mcs.anl.gov> An unedited set of minutes taken during this MPI meeting. This contains both a summary of some of the discussions and official, binding votes of the MPI Forum.

7th November 1995

/parallel/environments/lam/distribution/tutorials/mpi_ezstart.tut
MPI: It's Easy to Get Started
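In the spirit of that tutorial, the following is a minimal sketch of a complete MPI program. It uses only the standard MPI C binding (MPI-1 defines no C++ binding, so C++ programs call the C routines directly) and assumes an installed MPI implementation such as LAM or MPICH.

    #include <cstdio>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        MPI_Init(&argc, &argv);               /* start up MPI */

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's number */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* total process count   */

        std::printf("Hello from process %d of %d\n", rank, size);

        MPI_Finalize();                       /* shut down MPI */
        return 0;
    }

Started with the implementation's launcher, for example mpirun -np 4 a.out under MPICH, each of the four processes prints its own rank.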

6th November 1995

/parallel/books/kluwer/symbolic-analysis-par-compilers
Symbolic Analysis for Parallelizing Compilers by Mohammad Reza Haghighat <mohammad@csrd.uiuc.edu> Overview and table of contents.
/parallel/languages/fortran/adaptor/
ADAPTOR (Automatic DAta Parallelism TranslatOR) is a tool that transforms data parallel programs written in Fortran with array extensions, parallel loops, and layout directives to parallel programs with explicit message passing.
/parallel/languages/fortran/adaptor/README
ADAPTOR v3.1 (October 1995) overview. Author: Dr. Thomas Brandes <brandes@gmd.de>, German National Research Center for Computer Science (GMD), P.O. Box 1316, 53731 Sankt Augustin, Germany. Tel: +49 2241-14-2492; Fax: +49 2241-14-2181
/parallel/languages/fortran/adaptor/adp_3.1.tar.gz
/parallel/languages/fortran/adaptor/adp_3.1.tar.Z
ADAPTOR v3.1 source, documentation and examples. Requires a Fortran 77 compilation system. Supported on: CM-5; iPSC/860 and Intel Paragon; Net of SUN Sparc Workstations; Net of IBM Risc Workstations; Meiko CS1, CS2; SGI Multiprocessor Systems; IBM SP; NEC Cenju-3.
/parallel/languages/fortran/adaptor/hpf_examples.tar.Z
High Performance Fortran examples for ADAPTOR.
/parallel/languages/fortran/adaptor/docs/iguide.ps.Z
Adaptor Installation Guide Version 3.1 by Dr. Thomas Brandes <brandes@gmd.de>, German National Research Center for Computer Science (GMD), P.O. Box 1316, 53731 Sankt Augustin, Germany. Tel: +49 2241-14-2492; Fax: +49 2241-14-2181
/parallel/languages/fortran/adaptor/docs/uguide.ps.Z
Adaptor Users Guide Version 3.1 by Dr. Thomas Brandes <brandes@gmd.de>, German National Research Center for Computer Science (GMD), P.O. Box 1316, 53731 Sankt Augustin, Germany. Tel: +49 2241-14-2492; Fax: +49 2241-14-2181
/parallel/languages/fortran/adaptor/docs/pguide.ps.Z
Adaptor Programmers Guide Version 3.1
/parallel/teaching/hpctec/epcc/tech-watch/vzecca-hpf.tar.gz
Writing Data Parallel Programs with High Performance Fortran. Slides. 200 pages.
/parallel/environments/pvm3/xab3/Pyxis_PDL_Retreat_10_95.ps.Z
Programming Distributed Multimedia by Adam Beguelin <adamb@cs.cmu.edu>, http://www.cs.cmu.edu/~adamb.html, School of Computer Science, Carnegie Mellon University. Slides. 11 pages.

2nd November 1995

/parallel/occam/examples/fft.occ
Efficient Transputer FFT Author: Herman Roebbers <herman7@iaehv.nl>
/parallel/transputer/books/
The complete text of a textbook on programming transputer systems in C can be found at http://cs.smith.edu/~thiebaut/transputer/descript.html Author: Dominique Thiebaut <thiebaut@grendel.csc.smith.edu>, Smith College, USA.
/parallel/languages/sisal/distribution/tools/README.html
TreeView README File
/parallel/languages/sisal/distribution/tools/ssi.html
Simple Sisal pictures
/parallel/environments/pvm3/distribution/ncwn.html
Network Computing Working Notes by V. S. Sunderam
/parallel/standards/mpi/anl/using/errata.dvi
/parallel/standards/mpi/anl/using/errata.ps.Z
Updated: "Using MPI" book, by Gropp, Lusk, and Skjellum errata.
/parallel/standards/mpi/anl/using/examples/libraries/linalg/LA_library2.c
/parallel/standards/mpi/anl/using/examples.tar.Z
Updated examples and all the examples in one file.
/parallel/environments/pvm3/emory-vss/parviz.ps.Z
ICDCS'95 paper on parallel visualization with John Stasko and Brad Topol of Georgia Tech.
/parallel/environments/pvm3/emory-vss/pious_fgcs.ps.Z
Comparison of parallel vs. distributed I/O; issues in distributed I/O mechanisms over general purpose networks; the PIOUS file model, interface, implementation and experiences. To appear in FGCS 95.
/parallel/events/siwork96
Workshop on Workstations (SIWORK'96) by Clemens Cap <cap@ifi.unizh.ch> Call for papers for workshop being held from 14th-15th May 1996 at the Department of Computer Science, University of Zurich, Switzerland. Topics: Organization and application of workstations and networks; System and network management; Workstations as an access to the data highway; Security of workstation networks; Hardware, software and systems architecture; Workstations, multimedia and virtual reality; Mobile computing and others. Deadlines: Papers: 8th January 1996. See also http://www.ifi.unizh.ch/groups/cap/conferences/workshop.html
/parallel/environments/mpi/unify/reports/OO/ppuc_mpi++.ps.Z
Explicit Parallel Programming in C++ based on the Message-Passing Interface (MPI) by Anthony Skjellum; Ziyang Lu; Purushotham V. Bangalore and Nathan Doss. Department of Computer Science and NSF Engineering Research Center for Computational Field Simulation, Mississippi State, MS 39762, USA. ABSTRACT: Explicit parallel programming using the Message Passing Interface (MPI), a de facto standard created by the MPI Forum, is quickly becoming the strategy of choice for performance-portable parallel application programming on multicomputers and networks of workstations, so it is inevitably of interest to C++ programmers who use such systems. MPI programming is currently undertaken in C and/or Fortran-77, via the language bindings defined by the MPI Forum. While the committee deferred the job of defining a C++ binding for MPI to MPI-2, it is already possible to develop parallel programs in C++ using MPI, with the added help of one of several support libraries. These systems all strive to enable immediate C++ programming based on MPI. The first such enabling system, MPI++, is the focus of this chapter. MPI++ was an early effort on our part to let us leverage MPI while programming in C++. Here this system is, to a large extent, our vehicle to illustrate the value added of C++ in a message passing environment and, conversely, the value of MPI towards parallel programming with C++. We will be describing a performance-conscious alternative to exploit parallelism with C++ without the benefit of a portable, mature compiler environment suitable for a network of workstations or massively parallel computer. We emphasize performance, portability, good performance and good portability at the same time, and good design issues throughout, and that will put constraints on how eagerly we exploit certain features of C++ when creating our parallel environment on top of MPI.
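To suggest the kind of value added the chapter discusses, here is a small illustrative sketch of how C++ object lifetimes can tie MPI initialisation and cleanup together over the standard C binding. These classes are hypothetical, not the MPI++ interface itself, whose declarations are not reproduced here.

    #include <cstdio>
    #include <mpi.h>

    /* Hypothetical RAII-style wrappers: construction initialises MPI,
       destruction finalises it, so cleanup cannot be forgotten. This
       is one convenience a C++ layer over MPI can provide. */
    class Environment {
    public:
        Environment(int &argc, char **&argv) { MPI_Init(&argc, &argv); }
        ~Environment() { MPI_Finalize(); }
    };

    class Communicator {
    public:
        explicit Communicator(MPI_Comm c = MPI_COMM_WORLD) : comm(c) {}
        int rank() const { int r; MPI_Comm_rank(comm, &r); return r; }
        int size() const { int s; MPI_Comm_size(comm, &s); return s; }
    private:
        MPI_Comm comm;
    };

    int main(int argc, char *argv[])
    {
        Environment env(argc, argv);      /* MPI_Init happens here */
        Communicator world;
        std::printf("process %d of %d\n", world.rank(), world.size());
        return 0;                         /* ~Environment calls MPI_Finalize */
    }

Because the wrapper stores its communicator, rank and size queries need no explicit MPI_COMM_WORLD argument; class libraries such as MPI++ build on the same idea for sends, receives and collective operations.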

Copyright © 1995 Dave Beckett, University of Kent at Canterbury, UK.