Newest entries are first. Older changes can be found here.

29th February 1996

/parallel/vendors/elcom/wserver/
Updated Elcom's Transputer Iserver for Windows 3.1, Windows 95 and Windows NT.
/parallel/vendors/elcom/wserver/ws95l32.zip
Windows Server Package 95 (Lite). The shareware version of WSP95 (some features are absent; the Windows library contains ~100 functions instead of the ~210 in the full version). Works with the INMOS ANSI C Toolset (IMSD7214) and B004/B008-compatible transputer boards under Windows 95 and Windows NT. Registration: $20.
/parallel/vendors/elcom/wserver/ws95l32s.zip
Windows Server Package 95 (Lite) for Windows 3.1 with Win32s. The same package as above, but working under Windows 3.1 with Win32s (version 1.30 required; available from Microsoft or the Elcom home page). Do not install it under Windows 95/NT!
/parallel/vendors/elcom/wserver/wsprel.zip
Windows Server Package 95 Release Notes.
/parallel/vendors/elcom/wserver/wspprog1.zip
Windows Server Package 95 Programmer's Guide, part 1.
/parallel/vendors/elcom/wserver/wspprog2.zip
Windows Server Package 95 Programmer's Guide, part 2.
/parallel/vendors/elcom/wserver/wspref1.zip
Windows Server Package 95 Programmer's Reference, part 1.
/parallel/vendors/elcom/wserver/wspref2.zip
Windows Server Package 95 Programmer's Reference, part 2.

26th February 1996

/parallel/transputer/software/compilers/gcc/pereslavl/
/parallel/transputer/software/compilers/gcc/pereslavl/gcc-2.7.2/changes4
Changes in gcc-2.7.2-t800.4
/parallel/transputer/software/compilers/gcc/pereslavl/gcc-2.7.2/gcc-2.7.2-t800.4.dif.gz
gcc-2.7.2 for t800 (source diff) V4
/parallel/transputer/software/compilers/gcc/pereslavl/gcc-2.7.2/patch4.gz
Patch from V3 to V4
/parallel/transputer/software/compilers/gcc/pereslavl/gcc-2.7.2/ttools/ttools-2.0alpha34.tar.gz
Ttools V2.0 alpha 34 by Yury Shevchuk <sizif@botik.ru>. A free package comprising an assembler, linker, loader(s) and auxiliary programs intended for software development for transputers. It is designed to work with gcc-t800, a port of the GNU C/C++ compiler to transputers.
/parallel/environments/pvm3/pgpvm/pgpvm.tar.gz
Updated version of PGPVM: Performance Visualization support for PVM
/parallel/standards/hippi/hippi-6400-ph_0.1.ps.gz
/parallel/standards/hippi/hippi-6400-ph_0.1.pdf
High-Performance Parallel Interface - 6400 Mbit/s Physical Layer (HIPPI-6400-PH) by Roger Cummings <Roger_Cummings@Stortek.com>, Storage Technology Corporation, 2270 South 88th Street, Louisville, CO 80028-0268, USA; Tel: +1 303 661-6357; FAX: +1 303 684-8196 and Carl Zeitler <zeitler@ausvm6.vnet.ibm.com>, IBM Corporation, MS 9440, 11400 Burnet Road, Austin, TX 78758, USA; Tel: +1 512 838-1797; FAX: +1 512 838-3822. February 16, 1996. NOTE: This is an internal working document of X3T11, a Technical Committee of Accredited Standards Committee X3. As such, this is not a completed standard. The contents are actively being modified by X3T11. This document is made available for review and comment only. ABSTRACT: This standard specifies a physical-level, point-to-point, full-duplex link interface for transmitting digital data at 6400 Mbit/s over parallel copper cables across distances of TBD m, or over parallel fiber-optic cables across distances of TBD m. Small fixed-size micro-packets provide an efficient, low-latency structure for small messages, and a building block for large messages. Services are provided for transporting data streams specified by HIPPI-PH, ANSI X3.183-1991, which is limited to 25 m distances and 800 or 1600 Mbit/s data rates.
/parallel/standards/hippi/hippi-atm_1.6.ps.gz
/parallel/standards/hippi/hippi-atm_1.6.pdf
High-Performance Parallel Interface - Mapping to Asynchronous Transfer Mode (HIPPI-ATM) by Roger Cummings <Roger_Cummings@Stortek.com>, Storage Technology Corporation, 2270 South 88th Street, Louisville, CO 80028-0268, USA; Tel: +1 303 661-6357; FAX: +1 303 684-8196; Carl Zeitler <zeitler@ausvm6.vnet.ibm.com>, IBM Corporation, MS 9440, 11400 Burnet Road, Austin, TX 78758, USA; Tel: +1 512 838-1797; FAX: +1 512 838-3822 and Don Tolmie <det@lanl.gov>, Los Alamos National Laboratory, CIC-5, MS-B255, Los Alamos, NM 87545, USA; Tel: +1 505 667-5502; FAX: +1 505 665-7793. Version 1.6. NOTE: This is an internal working document of X3T11, a Technical Committee of Accredited Standards Committee X3. As such, this is not a completed standard. The contents are actively being modified by X3T11. This document is made available for review and comment only. ABSTRACT: This standard defines the frame formats and protocol definitions for encapsulation of High Performance Parallel Interface - Mechanical, Electrical, and Signalling Protocol Specification (HIPPI-PH) packets for transfer over Asynchronous Transfer Mode (ATM) equipment, or for use with other media. An informative annex describes an IP Router for use between HIPPI and ATM systems.

19th February 1996

/parallel/architecture/communications/io/pario/papers/Kotz/kotz:flexibility.ps.Z
Flexibility and Performance of Parallel File Systems by David Kotz <dfk@cs.dartmouth.edu> and Nils Nieuwejaar. ACM Operating Systems Review 30(2), April 1996 (to appear). ABSTRACT: Many scientific applications for high-performance multiprocessors have tremendous I/O requirements. As a result, the I/O system is often the limiting factor of application performance. Several new parallel file systems have been developed in recent years, each promising better performance for some class of parallel applications. As we gain experience with parallel computing, and parallel file systems in particular, it becomes increasingly clear that a single solution does not suit all applications. For example, it appears to be impossible to find a single appropriate interface, caching policy, file structure, or disk management strategy. Furthermore, the proliferation of file-system interfaces and abstractions makes application portability a significant problem. We propose that the traditional functionality of parallel file systems be separated into two components: a fixed core that is standard on all platforms, encapsulating only primitive abstractions and interfaces, and a set of high-level libraries to provide a variety of abstractions and application-programmer interfaces (APIs). We think of this approach as the "RISC" of parallel file-system design. We present our current and next-generation file systems as examples of this structure. Their features, such as a three-dimensional file structure, strided read and write interfaces, and I/O-node programs, are specifically designed with the flexibility and performance necessary to support a wide range of applications.
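A strided read interface of the kind mentioned in the abstract lets an application fetch many regularly spaced records in a single request instead of issuing one call per record. A minimal sketch in Python of what such an access pattern computes (the function name and file layout here are illustrative assumptions, not the paper's actual API):

```python
import struct

def strided_read(path, offset, record_size, stride, count):
    """Read `count` fixed-size records starting at `offset`, with
    `stride` bytes between the start of one record and the next.
    A real parallel file system would service this as one request."""
    records = []
    with open(path, "rb") as f:
        for i in range(count):
            f.seek(offset + i * stride)
            records.append(f.read(record_size))
    return records

# Toy demonstration: a file of 8 four-byte integers 0..7;
# read every second integer (records 0, 2, 4, 6).
with open("demo.dat", "wb") as f:
    for i in range(8):
        f.write(struct.pack("<i", i))

recs = strided_read("demo.dat", offset=0, record_size=4, stride=8, count=4)
values = [struct.unpack("<i", r)[0] for r in recs]
print(values)  # [0, 2, 4, 6]
```

Expressing the whole pattern in one call is what gives the file system room to optimize, e.g. by batching the disk accesses.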
/parallel/environments/pvm3/emory-vss/omisdoc.ps.gz
OMIS - On-line Monitoring Interface Specification by Thomas Ludwig <fludwig@informatik.tu-muenchen.de>, Lehrstuhl für Rechnertechnik und Rechnerorganisation, Institut für Informatik (LRR-TUM), Technische Universität München, D-80290 München, Germany; Tel: +49-89-2105-2042; FAX: +49-89-2105-8232; Roland Wismüller <wismuell@informatik.tu-muenchen.de>; Vaidy Sunderam <vss@mathcs.emory.edu>, Mathematics & Computer Science, Emory University, Atlanta, Georgia 30322, USA; Tel: +1-404-727-5926; FAX: +1-404-727-5611 and Arndt Bode <bodeg@informatik.tu-muenchen.de>. See http://wwwbode.informatik.tu-muenchen.de/omis/ ABSTRACT: The On-line Monitoring Interface Specification (OMIS) aims at defining an open interface for connecting on-line software development tools to parallel programs running in a distributed environment. Interactive tools like debuggers and performance analyzers, and automatic tools like load balancers, are typical representatives of the class of tools considered. The current situation is characterized by the fact that tools either follow the off-line paradigm, having access only to trace data and not to the running program, or else they are on-line oriented but suffer from the following deficiencies: they do not support interoperability in the sense that different tools can be used simultaneously - not even tools from the same developer. Furthermore, no unified environment exists where the same tools can be used for parallel programs running on different target architectures. A reason for this situation can be found in the lack of systematic development of monitoring systems, i.e. systems which provide a tool with the necessary runtime information about the application programs and make it possible to even manipulate the program run. The goal of the OMIS project is to specify an interface which is appropriate for a large set of different tools.
Having an agreed-on on-line monitoring interface facilitates tool development in that tool implementation and monitoring-system implementation are decoupled. Bringing n tools to m systems (consisting of hardware, operating system, programming libraries etc.) is reduced in complexity from n × m to n + m. In addition, it will eventually be possible to simultaneously use tools from different developers and to compose unified tool environments. The research group at LRR-TUM will implement an OMIS-compliant monitoring system for the PVM programming model running on a network of workstations. Several interactive and automatic tools will be connected to this concrete system. The present document defines the goals of the OMIS project and lists necessary requirements for such a monitoring system. We describe the system model OMIS is primarily intended for and give an outline of the available services of the interface. A special section gives details on how to extend OMIS, as this is an indispensable feature for future tool development. We would appreciate feedback on the design of OMIS. If you would like to see particular issues incorporated into this specification document, you are invited to contact the authors.
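The n × m versus n + m claim is easy to make concrete: without a common interface, every tool/system pair needs its own bespoke adapter; with an agreed interface, each tool and each system implements the interface once. A toy calculation (the counts are arbitrary example numbers):

```python
tools, systems = 6, 4

# Without a common interface: one bespoke adapter per (tool, system) pair.
adapters_without = tools * systems

# With an agreed interface such as OMIS: each tool targets the interface
# once, and each system implements it once.
adapters_with = tools + systems

print(adapters_without, adapters_with)  # 24 10
```

The gap widens as more tools and platforms appear, which is the economic argument for a standard monitoring interface.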
/parallel/transputer/software/compilers/gcc/pereslavl/
Updated GCC T800 backend (unofficial). See also http://www.botik.ru/~sizif/ttools/
/parallel/transputer/software/compilers/gcc/pereslavl/README.T800
Transputer gcc backend overview.
/parallel/transputer/software/compilers/gcc/pereslavl/gcc-2.7.2/
gcc 2.7.2 for transputer architecture (patches to GNU release)
/parallel/transputer/software/compilers/gcc/pereslavl/gcc-2.7.2/changes3
Changes in gcc-2.7.2-t800.3
/parallel/transputer/software/compilers/gcc/pereslavl/gcc-2.7.2/gcc-2.7.2-t800.3.dif.gz
gcc-2.7.2 for t800 (source diff) V3
/parallel/transputer/software/compilers/gcc/pereslavl/gcc-2.7.2/
Earlier sources are present too
/parallel/transputer/software/compilers/gcc/pereslavl/ttools/
Transputer tools - assembler, linker, loader and auxiliary programs.

12th February 1996

/parallel/events/rovpia96
International Conference on Robotics, Vision and Parallel Processing for Industrial Automation (ROVPIA '96) by Mohd Noh Karsiti <eemnoh@cs.usm.my> Call for papers for conference being held from 28th-30th November 1996 at Ipoh, Malaysia. Organised by the School of Electrical and Electronic Engineering, Perak Campus of University of Science, Malaysia. Topics: Robotics and Computer Vision; Adaptive Control Systems; Multivariable Systems; Guided Vehicles; Fuzzy Control Systems; Intelligent Instrumentation and Control Systems; Manufacturing Automation; Vision Inspection Systems; Pattern Recognition; Multidimensional Signal Processing; Medical Imaging; Speech Processing; Acoustic Signal Processing; Image Processing; Parallel Processing; Parallel Architecture; Multiprocessing and Distributed Systems; Neural Nets and Applications; and others. Deadlines: Abstract: 15th April 1996; Acceptance: 10th May 1996 and Camera-ready papers: 10th September 1996. See also http://www.eng.usm.my/rovpia/
/parallel/events/hpce-uk
High Performance Computational Engineering in the UK Call for attendance at meeting being held from 18th-19th March 1996 at Daresbury Laboratory, Warrington, UK on using the UK's flagship computing facilities, in particular the Cray T3D MPP system. Organised by High Performance Computing in Engineering (HPCE) project. See also http://www.dl.ac.uk/TCSC/CompEng/MEETINGS/ws0396.html/
/parallel/events/imacs15
15TH IMACS World Congress 1997 on Scientific Computation, Applied Mathematics and Simulation (IMACS 15) by IMACS-97 (ralf) <imacs97@diana.first.gmd.de> Call for papers and sessions for conference being held from 24th-30th August 1997 at Berlin, Germany. Organised by IMACS - The International Association for Mathematics and Computers in Simulation. Topics: Methods for ODE's and PDE's; Integral Equations; Computational Linear Algebra; Parallel Computing; Computer Arithmetic; Computational Physics/Chemistry/Biology; Computational Acoustics; Nonlinear Science; Knowledge-based Systems; Symbolic Computation; Modelling and Simulation; Applications in Engineering, Control Systems, and Robotics, Biology, Medicine, Economics, the Environment. Deadlines: Papers: 1st December 1996; Notification: 28th February 1997 and Camera-ready papers: 30th April 1997. See also http://www.first.gmd.de/imacs97/
/parallel/events/hipc3
3rd International Conference on High Performance Computing (HiPC 3) by D N Jayasimha <djayasim@magnus.acs.ohio-state.edu> Call for papers for conference being held from 19th-22nd December 1996 at Trivandrum, India. In cooperation with IEEE Computer Society; ACM SIGARCH; Centre for Development of Advanced Computing (C-DAC), India; CSIR Centre for Mathematical Modelling and Computer Simulation, India; Supercomputer Education and Research Centre, India; Tata Institute of Fundamental Research, India and Indian Institutes of Technology. Topics: Parallel Algorithms; Scientific Computation; Parallel Architectures; Visualization; Parallel Languages & Compilers; Network Based Computing; Distributed Systems; Signal & Image Processing Systems; Programming Environments; Supercomputing Applications and others. Deadlines: Papers: 10th April 1996; Tutorials: 3rd June 1996; Notification: 30th June 1996; Camera-ready papers: 1st August 1996. See also http://www.usc.edu/dept/ceng/prasanna/home.html
/parallel/events/iwpcpp
International Workshop on Parallel C++ (IWPC++) by Yutaka Ishikawa <ishikawa@trc.rwcp.or.jp> Call for participation for workshop being held from 3rd-12th March 1996 at Kanazawa, Ishikawa Prefecture, Japan. Organised by Real World Computing Partnership in cooperation with the Japan Society for Software Science and Technology ISOTAS'96 Workshop. Topics: C++ Language Extensions; C++ Compiler/Runtime Techniques; C++ Class/Template Libraries; C++ Parallel Applications; C++ Parallel Programming Environment and Interoperability in C++. See also http://www.jaist.ac.jp/misc/meetings/ISOTAS96/
/parallel/events/HPCNeurope96
High Performance Computing and Networking Europe 1996 (HPCN Europe '96) by Jaap Hollenberg <sondjaap@horus.sara.nl> Call for participation and exhibitors and full program for conference being held from 15th-19th April 1996 at Palais des Congres, Brussels, Belgium. See also http://www.fwi.uva.nl/HPCN/
/parallel/environments/pvm3/clpvm/
CL-PVM: A set of Common Lisp functions that interfaces Common Lisp (KCL, AKCL, or GCL) to the C-based library of PVM. CL-PVM also offers a set of tools to help use it effectively with Lisp and MAXIMA tasks. Author: Paul S. Wang <pwang@monkey.mcs.kent.edu>, Institute for Computational Mathematics, Kent State University, Kent, OH, USA
/parallel/environments/pvm3/clpvm/Announcement
Announcement of CL-PVM
/parallel/environments/pvm3/clpvm/clpvm.ftp.readme
Overview of CL-PVM
/parallel/environments/pvm3/clpvm/clpvm.tar.gz
CL-PVM 1.6 distribution containing source code, examples, manual pages, documentation and tools.
/parallel/algorithms/genetic/pgapack/
PGAPack is a general-purpose, data-structure-neutral, parallel genetic algorithm library. It is intended to provide most capabilities desired in a genetic algorithm library, in an integrated, seamless, and portable manner. See also http://www.mcs.anl.gov/pgapack.html Author: David Levine <levine@mcs.anl.gov>, http://www.mcs.anl.gov/home/levine, Argonne National Laboratory, Argonne, Illinois 60439, USA; Tel: +1 (708)-252-6735; FAX: +1 (708)-252-5986
/parallel/algorithms/genetic/pgapack/Announcement
Announcement of PGAPack
/parallel/algorithms/genetic/pgapack/README
Top level README for PGAPack V1.0
/parallel/algorithms/genetic/pgapack/pgapack-1.0.tar.Z
PGAPack 1.0 distribution by David Levine <levine@mcs.anl.gov>, including the source code, examples, user guide and manual pages. Written in ANSI C using the MPI message-passing interface. It has been used with MPICH, a freely available implementation of MPI.
/parallel/algorithms/genetic/pgapack/user_guide.ps.Z
Users Guide to the PGAPack Parallel Genetic Algorithm Library by David Levine <levine@mcs.anl.gov>
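As a rough illustration of what a genetic algorithm library such as PGAPack automates, here is a generic, toy GA in Python for the classic "maximize the number of 1-bits" problem (this is a self-contained sketch of the technique, not PGAPack's actual C API; all names are invented):

```python
import random

random.seed(42)

STRING_LEN = 32       # length of each binary string (the "chromosome")
POP_SIZE = 40
GENERATIONS = 60

def evaluate(individual):
    # Toy fitness function: count of 1-bits, to be maximized.
    return sum(individual)

def tournament(pop):
    # Pick two individuals at random; the fitter one wins.
    a, b = random.sample(pop, 2)
    return a if evaluate(a) >= evaluate(b) else b

def crossover(p1, p2):
    # Single-point crossover at a random cut position.
    cut = random.randrange(1, STRING_LEN)
    return p1[:cut] + p2[cut:]

def mutate(ind, rate=1.0 / STRING_LEN):
    # Flip each bit independently with small probability.
    return [bit ^ 1 if random.random() < rate else bit for bit in ind]

pop = [[random.randint(0, 1) for _ in range(STRING_LEN)]
       for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    pop = [mutate(crossover(tournament(pop), tournament(pop)))
           for _ in range(POP_SIZE)]

best = max(evaluate(ind) for ind in pop)
print("best fitness:", best, "of", STRING_LEN)
```

A library like PGAPack packages exactly these pieces - representation, selection, crossover, mutation and the generation loop - behind a uniform interface, and can additionally distribute the fitness evaluations across processors via MPI.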
/parallel/libraries/numerical/finite-element-meshes/
METIS: Unstructured Graph Partitioning and Sparse Matrix Ordering System. See also http://www.cs.umn.edu/~karypis/metis/metis.html and the papers described in the Announcement below.
/parallel/libraries/numerical/finite-element-meshes/Announcement
Announcement of METIS 2.0. Author: George Karypis <karypis@in19.arc.umn.edu>, http://www.cs.umn.edu/~karypis, Computer Science Department, University of Minnesota, USA
/parallel/libraries/numerical/finite-element-meshes/INSTALL
METIS installation document. METIS requires Unix and an ANSI C compiler and has been tested on AIX 3.2.5, SunOS 4.1, Solaris 2.4, Irix 5.3 and Unicos. Author: George Karypis <karypis@in19.arc.umn.edu>, http://www.cs.umn.edu/~karypis, Computer Science Department, University of Minnesota, USA
/parallel/libraries/numerical/finite-element-meshes/manual.ps.Z
METIS 2.0 Manual
/parallel/libraries/numerical/finite-element-meshes/metis-2.0.tar.gz
METIS 2.0 Distribution by George Karypis <karypis@in19.arc.umn.edu>, http://www.cs.umn.edu/~karypis and Vipin Kumar. Computer Science Department, University of Minnesota, USA. Contains the source code, documentation, manual pages and graphs. Includes manual.ps.Z and INSTALL files.
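The usual quality measure for a graph partitioner like METIS is the edge-cut: the number of edges whose endpoints land in different parts. A small self-contained sketch of the objective (this illustrates only the metric, not METIS's multilevel partitioning algorithm; the example graph is invented):

```python
def edge_cut(edges, part):
    """Number of edges whose endpoints fall in different parts.
    `edges` is a list of (u, v) pairs; `part` maps vertex -> part id."""
    return sum(1 for u, v in edges if part[u] != part[v])

# A 6-vertex example: two triangles joined by a single bridge edge.
edges = [(0, 1), (1, 2), (0, 2),     # triangle A
         (3, 4), (4, 5), (3, 5),     # triangle B
         (2, 3)]                     # bridge

good = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}   # split at the bridge
bad  = {0: 0, 1: 1, 2: 0, 3: 1, 4: 0, 5: 1}   # arbitrary split

print(edge_cut(edges, good), edge_cut(edges, bad))  # 1 5
```

In a parallel finite-element computation the edge-cut approximates communication volume between processors, which is why partitioners minimize it while keeping the parts balanced.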
/parallel/tools/editors/folding-editors/fe-mh/
FE Folding Editor. See also the design goals page at http://cantor.informatik.rwth-aachen.de/~michael/projects/fe-design.html and the reference manual at http://cantor.informatik.rwth-aachen.de/~michael/projects/fe.html
/parallel/tools/editors/folding-editors/fe-mh/Announcement
Announcement of FE. Author: Michael Haardt <michael@cantor.informatik.rwth-aachen.de>
/parallel/tools/editors/folding-editors/fe-mh/fe-0.2.tz
FE Version 0.2 by Michael Haardt <michael@cantor.informatik.rwth-aachen.de>
/parallel/tools/editors/folding-editors/fe-mh/fe-0.1.tz
FE Version 0.1 by Michael Haardt <michael@cantor.informatik.rwth-aachen.de>
/parallel/standards/mpi/anl/workingnote/nextgen.ps.Z
MPICH Working Note: The Second-Generation ADI for the MPICH Implementation of MPI by William Gropp and Ewing Lusk. ABSTRACT: In this paper we describe an abstract device interface (ADI) that may be used to efficiently implement the Message Passing Interface (MPI). After experience with a first-generation ADI that made certain assumptions about the devices and tradeoffs in the design, it has become clear that, particularly on systems with low-latency communication, the first-generation ADI design imposes too much additional latency. In addition, the first-generation design is awkward for heterogeneous systems, complex for noncontiguous messaging, and inadequate at error handling. The design in this note describes a new ADI that provides lower latency in common cases and is still easy to implement, while retaining many opportunities for customization to any advanced capabilities that the underlying hardware may support.
/parallel/simulation/communications/chaos/simulator/chaosSim.tar.Z
Chaos router simulator package by Kevin Bolding <kwb@cs.washington.edu>; Sung-Eun Choi <sungeun@cs.washington.edu>; Melanie Fulghum <mel@cs.washington.edu>; Neil McKenzie <mckenzie@cs.washington.edu>; Thu Nguyen <thu@cs.washington.edu> and Wayne Ohlrich <ohlrich@cs.washington.edu>. Compiled routing network simulator with the capability to handle various types of networks of varying sizes.

7th February 1996

/parallel/events/vecpar96
2nd International Meeting on Vector and Parallel Processing (Systems and Applications) (VECPAR'96) by Jose M. Laginha M. Palma <jpalma@garfield.fe.up.pt> Details of conference being held at Faculdade de Engenharia da Universidade do Porto, Porto, Portugal. Topics: Architectures, operating systems, environments, software tools and languages; Numerical and symbolic algorithms; Applications in Science and Engineering; Industrial and commercial systems and applications; and Signal processing, image processing and image synthesis. Deadlines: Papers and Posters: 29th March 1996; Notification: 10th May 1996 and Final papers: 30th August 1996. See also http://garfield.fe.up.pt:8001/~vecpar96/
/parallel/events/oopar
Object-Oriented Approaches to Parallel Programming by Mike Quinn <M.J.Quinn@ecs.soton.ac.uk> Details of workshop being held at Chilworth Manor Conference Centre, Chilworth (near Southampton), England. Sponsored by: EPSRC and the University of Southampton. A one-day workshop with invited speakers. See also http://www.hpcc.ecs.soton.ac.uk/~mjq/abstracts.html/
/parallel/standards/mpi/anl/
Release 1.0.12 of the MPI Chameleon implementation (MPICH) and updated documents for it (below).
/parallel/standards/mpi/anl/mpich-1.0.12.tar.Z
MPI Chameleon implementation version 1.0.12
/parallel/standards/mpi/anl/patch1.0.11-1.0.12.Z
/parallel/standards/mpi/anl/patch1.0.11-1.0.12
Patch from MPICH 1.0.11 to MPICH 1.0.12
/parallel/standards/mpi/anl/userguide.ps.Z
Users' Guide to mpich, a Portable Implementation of MPI
/parallel/standards/mpi/anl/install.ps.Z
Installation Guide to mpich, a Portable Implementation of MPI
/parallel/standards/mpi/anl/manwww.tar.Z
HTML versions of the manual pages for MPI and MPE functions.
/parallel/standards/mpi/anl/nupshot.tar.Z
Nupshot: A performance visualization tool

6th February 1996

/parallel/
Over 1,000,000 files served. On Tuesday 6th February 1996 at 02:13 GMT, the millionth file was sent from the IPCA. It was part of the gcc for transputer distribution, sent to a user at the University of Florida, USA.

1st February 1996

/parallel/standards/mpi/anl/misc/mpich-1.0.12.tar.Z
MPI Chameleon implementation version 1.0.12
/parallel/standards/mpi/anl/misc/mpe-1.0.12.tar.Z
MPE extensions for MPICH 1.0.12: timing routines; logging routines; real-time graphics routines and parallel I/O routines.
/parallel/standards/mpi/winmpi/thesis.ps.Z
/parallel/standards/mpi/winmpi/thesis.zip
Message-Passing Interface (MPI) for Microsoft Windows 3.1 by Joerg Meyer <jmeyer1@unocss.unomaha.edu> MSc thesis, University of Nebraska at Omaha. ABSTRACT: Parallel computing offers the potential to push the performance of computer systems into new dimensions. Exploiting parallelism, concurrent tasks cooperate in solving huge computational problems. The theoretical foundations of parallel processing are well-established, and numerous types of parallel computers and environments are commercially available. The main obstacle to a broad application of parallel technology is the lack of parallel programming standards. This research aims to promote the acceptance of the Message-Passing Interface (MPI), which provides the means for writing portable software on a wide variety of parallel computers under UNIX. This thesis outlines the development and implementation of MPI for MS-Windows 3.1, which we call WinMPI. The goal of WinMPI is two-fold: 1. as a development tool, to allow the easy and inexpensive implementation of parallel software, and 2. as a learning tool, to provide a larger group of computer users the opportunity to gain first experience with parallel programming. We give an introduction to the MPI standard, illustrated by an MPI example program. We discuss design and implementation issues for WinMPI, focusing on the simulation of a UNIX-like run-time environment in Windows. Among others, preemptive multitasking, process start-up, and shared-memory communication are addressed. Special consideration is given to the implementation of WinMPI applications. We describe instructions for porting MPI applications between WinMPI and UNIX. Users with some background in MPI programming will find sufficient information to succeed in employing WinMPI for parallel programming projects.
/parallel/standards/hippi/hippi-serial_2.2.ps.gz
/parallel/standards/hippi/hippi-serial_2.2.pdf
High-Performance Parallel Interface - Serial Specification V2.2 (HIPPI-Serial) by Roger Cummings <Roger_Cummings@Stortek.com>, Storage Technology Corporation, 2270 South 88th Street, Louisville, CO 80028-0268, USA; Tel: +1 303 661-6357; FAX: +1 303 684-8196 and Don Tolmie <det@lanl.gov>, Los Alamos National Laboratory, CIC-5, MS-B255, Los Alamos, NM 87545, USA; Tel: +1 505 667-5502; FAX: +1 505 665-7793. This is a working draft proposed ANSI standard. December 18, 1995. NOTE: This is an internal working document of X3T11, a Technical Committee of Accredited Standards Committee X3. As such, this is not a completed standard. The contents are actively being modified by X3T11. This document is made available for review and comment only. ABSTRACT: This standard specifies a physical-level interface for transmitting digital data at 800 Mbit/s or 1600 Mbit/s serially over fiber-optic or coaxial cables across distances of up to 10 km. The signalling sequences and protocol used are compatible with HIPPI-PH, ANSI X3.183-1991, which is limited to 25m distances. HIPPI-Serial may be used as an external extender for HIPPI-PH ports, or may be integrated as a host's native interface without HIPPI-PH.
/parallel/standards/hippi/hippi-serial_2.2_changes.ps.gz
/parallel/standards/hippi/hippi-serial_2.2_changes.pdf
Changes between HIPPI-Serial Rev 2.1 and Rev 2.2. All of the changes are essentially editorial in nature, although some also affect technical content. The changes clarify and correct the document.
/parallel/standards/hippi/hippi-sc_3.0.ps.gz
/parallel/standards/hippi/hippi-sc_3.0.pdf
High-Performance Parallel Interface - Physical Switch Control (HIPPI-SC). A maintenance copy of ANSI X3.222-1993. Sept 28, 1995. ABSTRACT: This standard provides a protocol for controlling physical layer switches which are based on the High-Performance Parallel Interface, a simple high-performance point-to-point interface for transmitting digital data at peak data rates of 800 or 1600 Mbit/s between data-processing equipment.
/parallel/standards/hippi/hippi-sc_3.0_changes.ps.gz
/parallel/standards/hippi/hippi-sc_3.0_changes.pdf
Changes between HIPPI-SC Rev 2.9 and Rev 3.0



Copyright © 1996 Dave Beckett, University of Kent at Canterbury, UK.