Scientific Programming
Volume 6, Issue 2, Pages 201-214

Implementation and Performance of DSMPI

Luis M. Silva,1 João Gabriel Silva,1 and Simon Chapple2

1Departamento Engenharia Informática, Universidade de Coimbra-POLO II, Vila Franca-3030 Coimbra, Portugal
2Quadstone Ltd., 16 Chester Street, Edinburgh, EH3 7RA, Scotland

Received 26 September 1995; Accepted 26 March 1996

Copyright © 1997 Hindawi Publishing Corporation. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Distributed shared memory (DSM) has been recognized as an alternative programming model for exploiting the parallelism in distributed memory systems because it provides a higher level of abstraction than simple message passing. DSM combines the simple programming model of shared memory with the scalability of distributed memory machines. This article presents DSMPI, a parallel library that runs on top of MPI and provides a DSM abstraction. It offers an easy-to-use programming interface, is fully portable, and supports heterogeneity. For the sake of flexibility, it supports different coherence protocols and models of consistency. We present performance results taken on a network of workstations and on a Cray T3D which show that DSMPI can be competitive with MPI for some applications.
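To illustrate the DSM idea the abstract describes (shared-memory semantics layered on top of message passing), the following is a minimal, hypothetical sketch; it is not DSMPI's actual interface. It assumes a single "home node" that owns all shared data and serializes every access through request/reply messages, which trivially yields sequential consistency. Threads and queues stand in for MPI processes and messages; the names `ToyDSM`, `read`, and `write` are illustrative inventions.

```python
import threading
import queue

class ToyDSM:
    """Toy DSM: one 'home node' thread owns the shared data; clients
    touch it only through request/reply messages, mimicking a DSM
    abstraction built over message passing (sequentially consistent,
    since the single home node serializes all accesses)."""

    def __init__(self):
        self.requests = queue.Queue()   # stands in for the message channel
        self.store = {}                 # the "shared memory", owned by the home node
        threading.Thread(target=self._serve, daemon=True).start()

    def _serve(self):
        # Home node loop: handle one request at a time, in arrival order.
        while True:
            op, key, value, reply = self.requests.get()
            if op == "write":
                self.store[key] = value
                reply.put(None)         # acknowledge the write
            elif op == "read":
                reply.put(self.store.get(key))

    def write(self, key, value):
        reply = queue.Queue()
        self.requests.put(("write", key, value, reply))
        reply.get()                     # block until the home node acknowledges

    def read(self, key):
        reply = queue.Queue()
        self.requests.put(("read", key, None, reply))
        return reply.get()

dsm = ToyDSM()
dsm.write("x", 42)
print(dsm.read("x"))  # 42
```

A real DSM library such as DSMPI would instead distribute and replicate the data, which is why the coherence protocols and relaxed consistency models mentioned in the abstract become necessary; this sketch sidesteps all of that by centralizing ownership.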