Scientific Programming
Volume 11 (2003), Issue 2, Pages 95-104
http://dx.doi.org/10.1155/2003/278167

OpenMP Issues Arising in the Development of Parallel BLAS and LAPACK Libraries

C. Addison,² Y. Ren,¹ and M. van Waveren³

1Fujitsu European Centre for Information Technology, Hayes, UK
2Department of Computer Science, University of Manchester, Manchester, England
3Fujitsu Systems Europe, Toulouse, France

Received 12 May 2003; Accepted 12 May 2003

Copyright © 2003 Hindawi Publishing Corporation. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Dense linear algebra libraries need to cope efficiently with a range of input problem sizes and shapes. Inherently this means that parallel implementations have to exploit parallelism wherever it is present. While OpenMP allows relatively fine-grained parallelism to be exploited in a shared memory environment, it currently lacks features to make it easy to partition computation over multiple array indices or to overlap sequential and parallel computations. The inherently flexible nature of shared memory paradigms such as OpenMP poses other difficulties when it becomes necessary to optimise performance across successive parallel library calls. Notions borrowed from distributed memory paradigms, such as explicit data distributions, help address some of these problems, but the focus on data rather than work distribution appears misplaced in an SMP context.
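To illustrate the kind of difficulty the abstract alludes to, the sketch below shows a common workaround for partitioning work over two array (block) indices when the OpenMP work-sharing directive only distributes a single loop: the two block indices are fused by hand into one flat index. This is a minimal, hypothetical C/OpenMP example, not code from the paper; the block counts, block size and the placeholder per-block computation are assumptions chosen purely for illustration.

#include <stdio.h>
#include <omp.h>

#define NB 4    /* blocks per dimension (illustrative value) */
#define BS 64   /* block size (illustrative value)           */

static double A[NB * BS][NB * BS];

int main(void)
{
    /* OpenMP distributes the iterations of one loop, so a two-dimensional
       grid of blocks is exposed by fusing the block indices (ib, jb) into
       a single flat index k and recovering them inside the loop body. */
    #pragma omp parallel for schedule(static)
    for (int k = 0; k < NB * NB; k++) {
        int ib = k / NB;   /* row-block index    */
        int jb = k % NB;   /* column-block index */
        for (int i = ib * BS; i < (ib + 1) * BS; i++)
            for (int j = jb * BS; j < (jb + 1) * BS; j++)
                A[i][j] = 1.0 / (i + j + 1);   /* placeholder block work */
    }

    printf("A[0][0] = %f\n", A[0][0]);
    return 0;
}

Manual index fusion of this sort keeps all NB × NB blocks available for distribution across threads, at the cost of obscuring the loop structure; the issues discussed in this paper arise because such restructuring has to be done explicitly by the library writer.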