Scientific Programming
Volume 9, Issue 2-3, Pages 73-81
http://dx.doi.org/10.1155/2001/712152

Overlapping Communication and Computation with OpenMP and MPI

Timothy H. Kaiser1 and Scott B. Baden2

1University of California, San Diego, San Diego Supercomputer Center, MC 0505, 9500 Gilman Drive, La Jolla, CA 92093-0505, USA
2Computer Science and Engineering Department, University of California, San Diego, 9500 Gilman Drive, Mail Stop 0114, La Jolla, CA 92093-0114, USA

Received 29 January 2002; Accepted 29 January 2002

Copyright © 2001 Hindawi Publishing Corporation. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Machines composed of a distributed collection of shared-memory or SMP nodes are becoming common platforms for parallel computing, and OpenMP can be combined with MPI on many such machines. Motivations for combining OpenMP and MPI are discussed. While OpenMP is typically used to exploit loop-level parallelism, it can also be used to enable coarse-grain parallelism, potentially leading to less overhead. We show how coarse-grain OpenMP parallelism can also be used to overlap MPI communication and computation in stencil-based grid programs, such as a program performing Gauss-Seidel iteration with red-black ordering. Spatial subdivision, or domain decomposition, is used to assign a portion of the grid to each thread. One thread is assigned a null calculation region so that it is free to perform communication. Example calculations were run on an IBM SP using both the Kuck & Associates and IBM compilers.
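To make the scheme concrete, the following is a minimal C sketch (not the authors' code) of the coarse-grain pattern the abstract describes: inside a single OpenMP parallel region, thread 0 receives a null compute region and performs the MPI halo exchange, while the remaining threads update the interior of the grid. The one-dimensional row decomposition, the grid size, and the plain five-point update (a simple stand-in for the paper's red-black Gauss-Seidel sweep) are illustrative assumptions.

    /* A sketch of overlapping MPI halo exchange with OpenMP computation.
     * Thread 0 gets an empty ("null") slice of the grid and instead does
     * the communication; threads 1..nthreads-1 update interior rows that
     * do not depend on the incoming halo data. */
    #include <mpi.h>
    #include <omp.h>

    #define NX 1024                       /* local rows (illustrative)    */
    #define NY 1024                       /* local columns (illustrative) */

    static double u[NX + 2][NY + 2];      /* grid with one-cell halo      */

    static void exchange_halos(int up, int down)
    {
        /* Send top interior row up, receive bottom halo from below. */
        MPI_Sendrecv(u[1],      NY + 2, MPI_DOUBLE, up,   0,
                     u[NX + 1], NY + 2, MPI_DOUBLE, down, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        /* Send bottom interior row down, receive top halo from above. */
        MPI_Sendrecv(u[NX],     NY + 2, MPI_DOUBLE, down, 1,
                     u[0],      NY + 2, MPI_DOUBLE, up,   1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    static void iterate(int up, int down)
    {
        #pragma omp parallel
        {
            int tid = omp_get_thread_num();
            int nthreads = omp_get_num_threads();

            if (tid == 0) {
                /* Communication thread: its compute region is empty. */
                exchange_halos(up, down);
            } else {
                /* Compute threads: split rows 2..NX-1, which are
                 * independent of the halo, among threads 1..nthreads-1. */
                int workers = nthreads - 1;
                int rows    = NX - 2;
                int chunk   = (rows + workers - 1) / workers;
                int first   = 2 + (tid - 1) * chunk;
                int last    = first + chunk - 1;
                if (last > NX - 1) last = NX - 1;

                for (int i = first; i <= last; i++)
                    for (int j = 1; j <= NY; j++)
                        u[i][j] = 0.25 * (u[i-1][j] + u[i+1][j]
                                        + u[i][j-1] + u[i][j+1]);
            }
            #pragma omp barrier
            /* After the barrier, rows 1 and NX can be updated using the
             * freshly received halo data (omitted for brevity). */
        }
    }

    int main(int argc, char **argv)
    {
        int provided, rank, size;
        /* Only thread 0 calls MPI, so FUNNELED thread support suffices. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int up   = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
        int down = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;

        for (int iter = 0; iter < 100; iter++)
            iterate(up, down);

        MPI_Finalize();
        return 0;
    }

Because a single thread is dedicated to MPI calls, MPI_THREAD_FUNNELED support is sufficient, and the communication cost is hidden provided the interior update takes at least as long as the halo exchange.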