Abstract

The present paper describes a parallel preconditioned algorithm for the solution of partial eigenvalue problems for large sparse symmetric matrices on parallel computers. Namely, we consider the Deflation-Accelerated Conjugate Gradient (DACG) algorithm accelerated by factorized-sparse-approximate-inverse- (FSAI-) type preconditioners. We present an enhanced parallel implementation of the FSAI preconditioner and make use of the recently developed Block FSAI-IC preconditioner, which combines the FSAI and the Block Jacobi-IC preconditioners. Results on large matrices arising from the finite element discretization of geomechanical models reveal that DACG accelerated by these types of preconditioners is competitive with the publicly available parallel hypre package, especially in the computation of a few of the leftmost eigenpairs. The parallel DACG code accelerated by FSAI is written in Fortran 90 with MPI and exhibits good scalability up to one thousand processors.

1. Introduction

The computation by iterative methods of the partial eigenspectrum (the $s$ leftmost eigenpairs) of the generalized eigenproblem
$$A\mathbf{u} = \lambda B\mathbf{u}, \qquad (1.1)$$
where $A, B \in \mathbb{R}^{n \times n}$ are large sparse symmetric positive definite (SPD) matrices, is an important and difficult task in many applications. It has become increasingly widespread owing to the development in the last twenty years of robust and computationally efficient schemes and corresponding software packages. Among the best-known approaches for SPD matrices are the implicitly restarted Arnoldi method (equivalent to the Lanczos technique for this class of matrices) [1, 2], the Jacobi-Davidson (JD) algorithm [3], and schemes based on the preconditioned conjugate gradient minimization of the Rayleigh quotient [4, 5].

The basic idea of the latter is to minimize the Rayleigh quotient
$$q(\mathbf{x}) = \frac{\mathbf{x}^T A \mathbf{x}}{\mathbf{x}^T B \mathbf{x}} \qquad (1.2)$$
in a subspace orthogonal to the previously computed eigenvectors via a preconditioned CG-like procedure. Among the different variants of this technique we chose the Deflation-Accelerated Conjugate Gradient (DACG) scheme [4, 6], which has been shown to be competitive with the Jacobi-Davidson method and with the PARPACK package [7]. As in any other approach, the choice of the preconditioning technique is a key factor to accelerate, and in some cases even to allow for, convergence of DACG. To accelerate DACG in a parallel environment we selected the Factorized Sparse Approximate Inverse (FSAI) preconditioner introduced in [8]. We have developed a parallel implementation of this algorithm which has displayed excellent performance in both the setup phase and the application phase within a Krylov subspace solver [9–11]. The effectiveness of the FSAI preconditioner in the acceleration of DACG is compared to that of the Block FSAI-IC preconditioner, recently developed in [12], which combines the FSAI and Block Jacobi-IC preconditioners and has given good results on a small number of processors for the solution of SPD linear systems and of large eigenproblems [13]. We used the resulting parallel codes to compute a few of the leftmost eigenpairs of a set of large test matrices arising from the finite element discretization of geomechanical models. The reported results show that DACG preconditioned with either FSAI or BFSAI is a scalable and robust algorithm for the partial solution of SPD eigenproblems. The parallel performance of DACG is also compared to that of the publicly available parallel package hypre [14], which implements a number of preconditioners that can be used in combination with the Locally Optimal Block PCG (LOBPCG) iterative eigensolver [15]. The results presented in this paper show that the parallel DACG code accelerated by FSAI exhibits good scalability up to one thousand processors and displays performance comparable to that of hypre, especially when a small number of eigenpairs is sought.

The outline of the paper is as follows: in Section 2 we describe the DACG algorithm; in Sections 3 and 4 we recall the definition and properties of the FSAI and BFSAI preconditioners, respectively. Section 5 contains the numerical results obtained with the proposed algorithm in the eigensolution of very large SPD matrices with up to almost 7 million unknowns and $3 \times 10^8$ nonzeros. A comparison with the hypre eigensolver code is also included. Section 6 ends the paper with some conclusions.

2. The DACG Iterative Eigensolver and Implementation

The DACG algorithm sequentially computes the eigenpairs, starting from the leftmost one $(\lambda_1, \mathbf{u}_1)$. To evaluate the $j$th eigenpair, $j > 1$, DACG minimizes the Rayleigh quotient (RQ) in a subspace orthogonal to the $j-1$ eigenvectors previously computed. More precisely, DACG minimizes the Rayleigh quotient
$$q(\mathbf{z}) = \frac{\mathbf{z}^T A \mathbf{z}}{\mathbf{z}^T \mathbf{z}}, \qquad (2.1)$$
where
$$\mathbf{z} = \mathbf{x} - U_j \left( U_j^T \mathbf{x} \right), \qquad U_j = \left[ \mathbf{u}_1, \ldots, \mathbf{u}_{j-1} \right], \qquad \mathbf{x} \in \mathbb{R}^n. \qquad (2.2)$$
The first eigenpair $(\lambda_1, \mathbf{u}_1)$ is obtained by minimization of (2.1) with $\mathbf{z} = \mathbf{x}$ ($U_1 = \emptyset$). Denoting by $M$ the preconditioning matrix, that is, $M \approx A^{-1}$, the $s$ leftmost eigenpairs are computed by the conjugate gradient procedure [6] described in Algorithm 1.

Choose tolerance tol; set $U = \emptyset$.
DO $j = 1, s$
  (1) Choose $\mathbf{x}_0$ such that $U^T \mathbf{x}_0 = \mathbf{0}$; set $k = 0$, $\beta_0 = 0$;
  (2) $\mathbf{x}^A_0 = A \mathbf{x}_0$, $\gamma = \mathbf{x}_0^T \mathbf{x}^A_0$, $\eta = \mathbf{x}_0^T \mathbf{x}_0$, $q_0 \equiv q(\mathbf{x}_0) = \gamma/\eta$, $\mathbf{r}_0 = \mathbf{x}^A_0 - q_0 \mathbf{x}_0$;
  (3) REPEAT
      (3.1) $\mathbf{g}_k \equiv \nabla q(\mathbf{x}_k) = (2/\eta)\,\mathbf{r}_k$;
      (3.2) $\mathbf{g}^M_k = M \mathbf{g}_k$;
      (3.3) IF $k > 0$ THEN $\beta_k = \mathbf{g}_k^T(\mathbf{g}^M_k - \mathbf{g}^M_{k-1}) / \mathbf{g}_{k-1}^T \mathbf{g}^M_{k-1}$;
      (3.4) $\tilde{\mathbf{p}}_k = \mathbf{g}^M_k + \beta_k \mathbf{p}_{k-1}$;
      (3.5) $\mathbf{p}_k = \tilde{\mathbf{p}}_k - U(U^T \tilde{\mathbf{p}}_k)$;
      (3.6) $\mathbf{p}^A_k = A \mathbf{p}_k$;
      (3.7) $\alpha_k = \arg\min_t \{ q(\mathbf{x}_k + t\,\mathbf{p}_k) \} = \bigl(\gamma d - \eta b + \sqrt{\Delta}\bigr) / \bigl(2(bc - ad)\bigr)$, with
            $a = \mathbf{p}_k^T \mathbf{x}^A_k$, $b = \mathbf{p}_k^T \mathbf{p}^A_k$, $c = \mathbf{p}_k^T \mathbf{x}_k$, $d = \mathbf{p}_k^T \mathbf{p}_k$,
            $\Delta = (\gamma d - \eta b)^2 - 4(bc - ad)(\eta a - \gamma c)$;
      (3.8) $\mathbf{x}_{k+1} = \mathbf{x}_k + \alpha_k \mathbf{p}_k$, $\mathbf{x}^A_{k+1} = \mathbf{x}^A_k + \alpha_k \mathbf{p}^A_k$;
      (3.9) $\gamma = \gamma + 2a\alpha_k + b\alpha_k^2$, $\eta = \eta + 2c\alpha_k + d\alpha_k^2$;
      (3.10) $q_{k+1} \equiv q(\mathbf{x}_{k+1}) = \gamma/\eta$;
      (3.11) $k = k + 1$;
      (3.12) $\mathbf{r}_k = \mathbf{x}^A_k - q_k \mathbf{x}_k$;
      UNTIL $|q_{k+1} - q_k| / q_{k+1} <$ tol;
  (4) $\lambda_j = q_k$, $\mathbf{u}_j = \mathbf{x}_k / \sqrt{\eta}$, $U = [U, \mathbf{u}_j]$.
END DO
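For illustration only, a minimal serial sketch of Algorithm 1 in Python/NumPy is given below. It is our own rendering, not the Fortran 90/MPI code described later, and the function and variable names are assumptions; the preconditioner is passed as a generic callable apply_M, and the exit test uses the absolute relative change of the Rayleigh quotient.

import numpy as np

def dacg(A, apply_M, s=1, tol=1e-10, maxit=5000, seed=0):
    # Serial sketch of Algorithm 1 (DACG): computes the s leftmost eigenpairs
    # of the SPD matrix A, using apply_M(v) ~ A^{-1} v as preconditioner.
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    U = np.zeros((n, 0))                      # previously computed eigenvectors
    lam = []
    for j in range(s):
        x = rng.standard_normal(n)
        x -= U @ (U.T @ x)                    # step (1): U^T x_0 = 0
        xA = A @ x
        gamma, eta = x @ xA, x @ x            # step (2)
        q = gamma / eta
        r = xA - q * x
        p = np.zeros(n)
        g_old = gM_old = None
        for k in range(maxit):
            g = (2.0 / eta) * r               # (3.1) gradient of the RQ
            gM = apply_M(g)                   # (3.2) preconditioned gradient
            beta = 0.0 if k == 0 else g @ (gM - gM_old) / (g_old @ gM_old)  # (3.3)
            p = gM + beta * p                 # (3.4)
            p -= U @ (U.T @ p)                # (3.5) deflate the search direction
            pA = A @ p                        # (3.6)
            a, b, c, d = p @ xA, p @ pA, p @ x, p @ p   # (3.7) exact line search
            delta = (gamma * d - eta * b) ** 2 \
                    - 4.0 * (b * c - a * d) * (eta * a - gamma * c)
            alpha = (gamma * d - eta * b + np.sqrt(delta)) / (2.0 * (b * c - a * d))
            x, xA = x + alpha * p, xA + alpha * pA      # (3.8)
            gamma += 2.0 * a * alpha + b * alpha ** 2   # (3.9)
            eta += 2.0 * c * alpha + d * alpha ** 2
            q_new = gamma / eta               # (3.10)
            r = xA - q_new * x                # (3.12)
            g_old, gM_old = g, gM
            rel = abs(q_new - q) / abs(q_new)
            q = q_new
            if rel < tol:
                break
        lam.append(q)                         # step (4)
        U = np.column_stack([U, x / np.sqrt(eta)])
    return np.array(lam), U

For instance, with A a SciPy sparse SPD matrix and apply_M = lambda v: v (no preconditioning), dacg(A, apply_M, s=3) returns rough approximations of the three leftmost eigenpairs; plugging in the FSAI application of Section 3 gives a serial analogue of FSAI-DACG.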

The schemes relying on Rayleigh quotient optimization are quite attractive for parallel computations; however, preconditioning is an essential feature to ensure practical convergence. When seeking an eigenpair $(\lambda_j, \mathbf{u}_j)$, it can be proved that the number of iterations is proportional to the square root of the condition number $\xi_j = \kappa(H_j)$ of the Hessian of the Rayleigh quotient at the stationary point $\mathbf{u}_j$ [4]. It turns out that $H_j$ is similar to $(A - \lambda_j I)M$, which is not SPD. However, $H_j$ operates on the space orthogonal to the previous eigenvectors, so that the only relevant eigenvalues are the positive ones. In the nonpreconditioned case (i.e., $M = I$) we would have
$$\kappa(H_j) \approx \frac{\lambda_n}{\lambda_{j+1} - \lambda_j}, \qquad (2.3)$$
whereas in the ideal case $M \equiv A^{-1}$ we have
$$\kappa(H_j) \approx \frac{\lambda_j}{\lambda_{j+1} - \lambda_j} = \xi_j \ll \frac{\lambda_n}{\lambda_{j+1} - \lambda_j}. \qquad (2.4)$$
Therefore, even though $A^{-1}$ is not the optimal preconditioner for $A - \lambda_j I$, if $M$ is a good preconditioner of $A$ then the condition number $\kappa(H_j)$ will approach $\xi_j$.

3. The FSAI Preconditioner

The FSAI preconditioner, initially proposed in [8, 16], has later been developed and implemented in parallel by Bergamaschi and Martínez in [9]. Here we only briefly recall its main features. Given an SPD matrix $A$, the FSAI preconditioner approximately factorizes its inverse as a product of two sparse triangular matrices,
$$A^{-1} \approx W^T W. \qquad (3.1)$$
The choice of nonzeros in $W$ is based on a sparsity pattern, which in our work is that of $\widetilde{A}^d$, where $\widetilde{A}$ is the result of prefiltration [10] of $A$, that is, of dropping all elements below a threshold parameter $\delta$. The entries of $W$ are computed by minimizing the Frobenius norm of $I - WL$, where $L$ is the exact Cholesky factor of $A$, without forming $L$ explicitly. The computed $W$ is then sparsified by dropping all elements below a second tolerance parameter $\varepsilon$. The final FSAI preconditioner therefore depends on three parameters: $\delta$, the prefiltration threshold; $d$, the power of $A$ generating the sparsity pattern (we allow $d \in \{1, 2, 4\}$ in our experiments); and $\varepsilon$, the postfiltration threshold.
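To make the construction concrete, the following serial Python/SciPy sketch (our own illustration, not the parallel Fortran 90 code of Section 3.1) computes $W$ row by row; the pattern argument stands for the lower triangle of the prescribed sparsity pattern (e.g., that of $\widetilde{A}^d$), pre- and postfiltration are omitted, and the usual normalization diag$(WAW^T) = 1$ replaces the unknown diagonal of $L$.

import numpy as np
import scipy.sparse as sp

def fsai_factor(A, pattern):
    # Sketch of the FSAI setup: a sparse lower triangular W with A^{-1} ~= W^T W,
    # built row by row over the given pattern (cf. the Frobenius norm minimization above).
    A = sp.csr_matrix(A)
    P = sp.csr_matrix(sp.tril(pattern))       # allowed lower triangular pattern
    n = A.shape[0]
    rows, cols, vals = [], [], []
    for i in range(n):                        # each row is independent: this loop
        J = P.indices[P.indptr[i]:P.indptr[i + 1]]    # is what gets parallelized
        J = np.union1d(J, [i]).astype(int)    # the diagonal is always kept
        Aj = A[J, :][:, J].toarray()          # small dense SPD submatrix
        k = int(np.searchsorted(J, i))        # position of the diagonal entry
        e = np.zeros(len(J)); e[k] = 1.0
        w = np.linalg.solve(Aj, e)            # one small dense SPD solve per row
        w /= np.sqrt(w[k])                    # scaling: diag(W A W^T) = 1
        rows += [i] * len(J); cols += list(J); vals += list(w)
    return sp.csr_matrix((vals, (rows, cols)), shape=(n, n))

The preconditioner is then applied as $M\mathbf{g} = W^T(W\mathbf{g})$, that is, with two sparse matrix-vector products per DACG iteration.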

3.1. Parallel Implementation of FSAI-DACG

We have developed a parallel code, written in Fortran 90, which exploits the MPI library for exchanging data among the processors. We used a block row distribution of all matrices ($A$, $W$, and $W^T$), that is, with complete rows assigned to different processors. All these matrices are stored in static data structures in CSR format.

Regarding the preconditioner computation, we stress that each row $i$ of the FSAI factor $W$ is computed independently of the others, by solving a small SPD dense linear system of size $n_i$, equal to the number of nonzeros allowed in row $i$ of $W$. Some of the rows of $A$ that contribute to forming this linear system may be nonlocal to the processor owning row $i$ and must be received from other processors. To this aim we implemented a routine called get_extra_rows which carries out all the row exchanges among the processors before the computation of $W$ starts, which then proceeds entirely in parallel. Since the number of nonlocal rows needed by each processor is relatively small, we chose to temporarily replicate these rows in auxiliary data structures. Once $W$ is obtained, a parallel transposition routine provides every processor with its portion of $W^T$.
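The internals of get_extra_rows are not reported in the paper; purely as an illustration of the communication pattern just described, a sketch with Python and mpi4py (all names and the row representation below are our assumptions, not the actual Fortran/MPI routine) could look as follows.

from mpi4py import MPI

def get_extra_rows(comm, needed, owner_of, local_rows):
    # Sketch of the row exchange performed before the FSAI setup.
    #   needed     : global indices of the nonlocal rows this rank requires
    #   owner_of   : callable mapping a global row index to its owning rank
    #   local_rows : dict {global index: row data} held by this rank
    # Returns the requested nonlocal rows, temporarily replicated locally.
    size = comm.Get_size()
    requests = [[] for _ in range(size)]
    for i in needed:                          # group the requests by owner
        requests[owner_of(i)].append(i)
    incoming = comm.alltoall(requests)        # indices other ranks ask of us
    replies = [[(i, local_rows[i]) for i in req] for req in incoming]
    received = comm.alltoall(replies)         # ship the rows back
    return {i: row for chunk in received for i, row in chunk}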

The DACG iterative solver is essentially based on scalar and matrix-vector products. We made use of an optimized parallel matrix-vector product developed in [17], which has proved effective on up to 1024 processors.

4. Block FSAI-IC Preconditioning

The Block FSAI-IC preconditioner, BFSAI-IC in the following, is a recent development for the parallel solution of symmetric positive definite (SPD) linear systems. Assume that $D$ is an arbitrary nonsingular block diagonal matrix consisting of $n_b$ equal-size blocks.

Let $\mathcal{S}_L$ and $\mathcal{S}_{BD}$ be a sparse lower triangular and a dense block diagonal nonzero pattern, respectively, for an $n \times n$ matrix. Even though not strictly necessary, for the sake of simplicity assume that $\mathcal{S}_{BD}$ consists of $n_b$ diagonal blocks of equal size $m = n/n_b$, and let $D \in \mathbb{R}^{n \times n}$ be an arbitrary full-rank matrix with nonzero pattern $\mathcal{S}_{BD}$.

Consider the set of lower block triangular matrices $F$ with a prescribed nonzero pattern $\mathcal{S}_{BL}$ and minimize over this set the Frobenius norm
$$\| D - FL \|_F, \qquad (4.1)$$
where $L$ is the exact lower Cholesky factor of an SPD matrix $A$. A matrix $F$ satisfying the minimality condition (4.1) for a given $D$ is the lower block triangular factor of BFSAI-IC. Recalling the definition of the classical FSAI preconditioner, it can be noticed that BFSAI-IC is a block generalization of the FSAI concept.

The differentiation of (4.1) with respect to the unknown entries $[F]_{ij}$, $(i,j) \in \mathcal{S}_{BL}$, yields $n$ independent dense subsystems which, as in the standard FSAI case, do not require the explicit knowledge of $L$. The effect of applying $F$ to $A$ is to concentrate the largest entries of the preconditioned matrix $FAF^T$ into $n_b$ diagonal blocks. However, as $D$ is arbitrary, it is still not ensured that $FAF^T$ is better suited than $A$ for an iterative method, so it is necessary to precondition $FAF^T$ again. As $FAF^T$ resembles a block diagonal matrix, an efficient technique relies on using a block diagonal matrix which collects an approximation of the inverse of each diagonal block $B_{ib}$ of $FAF^T$.

It is easy to show that $F$ is guaranteed to exist for SPD matrices and that each $B_{ib}$ is SPD, too [12]. Using an IC decomposition with partial fill-in for each block $B_{ib}$ and collecting the lower IC factors in the block diagonal matrix $J$, the resulting preconditioned matrix reads
$$J^{-1} F A F^T J^{-T} = W A W^T \qquad (4.2)$$
with the final preconditioner
$$M = W^T W = F^T J^{-T} J^{-1} F. \qquad (4.3)$$
$M$ in (4.3) is the BFSAI-IC preconditioner of $A$.
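Applying $M$ in (4.3) to a vector thus requires one product with $F$, two triangular solves with the block diagonal factor $J$, and one product with $F^T$. A serial Python/SciPy sketch of this application (our own illustration, with $J$ stored as a single sparse lower triangular matrix) is given below; in the parallel code each diagonal block of $J$ is local to one process, so the two triangular solves decouple block by block.

import scipy.sparse as sp
from scipy.sparse.linalg import spsolve_triangular

def apply_bfsai_ic(F, J, v):
    # Apply M = F^T J^{-T} J^{-1} F (the BFSAI-IC preconditioner) to v.
    #   F : lower block triangular factor of (4.1), CSR format
    #   J : block diagonal collection of the lower IC factors of the
    #       diagonal blocks B_ib of F A F^T, CSR format
    t = F @ v
    y = spsolve_triangular(J, t, lower=True)                     # J^{-1} (F v)
    z = spsolve_triangular(sp.csr_matrix(J.T), y, lower=False)   # J^{-T} ( ... )
    return F.T @ z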

For its computation, BFSAI-IC needs the selection of $n_b$ and $\mathcal{S}_L$. The basic requirement for the number of blocks $n_b$ is to be larger than or equal to the number of computing cores $p$. From a practical viewpoint, however, the most efficient choice in terms of both wall clock time and iteration count is to keep the blocks as large as possible, thus implying $n_b = p$. Hence, $n_b$ is by default set equal to $p$. By contrast, the choice of $\mathcal{S}_L$ is theoretically more challenging and still not completely clear. A widely accepted option for other approximate inverses, such as FSAI or SPAI, is to select the nonzero pattern of $A^d$ for small values of $d$ on the basis of the Neumann series expansion of $A^{-1}$. Following a similar approach, in the BFSAI construction we select $\mathcal{S}_L$ as the lower block triangular pattern of $A^d$. As the nonzeros located in the diagonal blocks are not used for the computation of $F$, a larger value of $d$, say 3 or 4, can still be used.

Though theoretically not necessary, three additional user-specified parameters are worth introducing in order to better control the memory occupation and the BFSAI-IC density:
(1) $\varepsilon$ is a postfiltration parameter that allows for dropping the smallest entries of $F$; in particular, $[F]_{ij}$ is neglected if $|[F]_{ij}| < \varepsilon \|\mathbf{f}_i\|_2$, where $\mathbf{f}_i$ is the $i$th row of $F$;
(2) $\rho_B$ is a parameter that controls the fill-in of $B_{ib}$ and determines the maximum allowable number of nonzeros for each row of $B_{ib}$ in addition to the corresponding entries of $A$; quite obviously, only the largest $\rho_B$ entries are retained;
(3) $\rho_L$ is a parameter that controls the fill-in of each IC factor $\widetilde{L}_{ib}$, denoting the maximum allowable number of nonzeros for each row of $\widetilde{L}_{ib}$ in addition to the corresponding entries of $B_{ib}$.

An OpenMP implementation of the algorithms above is available in [18].

5. Numerical Results

In this section we examine the performance of the parallel DACG preconditioned by both FSAI and BFSAI in the partial solution of four large sparse eigenproblems. The test cases, which we briefly describe below, are taken from different real engineering mechanical applications. In detail, they are as follows.
(i) FAULT-639 is obtained from a structural problem discretizing a faulted gas reservoir with tetrahedral finite elements and triangular interface elements [19]. The interface elements are used with a penalty formulation to simulate the fault behavior. The problem arises from a 3D discretization with three displacement unknowns associated with each node of the grid.
(ii) PO-878 arises in the simulation of the consolidation of a real gas reservoir of the Po Valley, Italy, used for underground gas storage purposes (for details, see [20]).
(iii) GEO-1438 is obtained from a geomechanical problem discretizing a region of the earth's crust subject to underground deformation. The computational domain is a box with an areal extent of 50 × 50 km and a depth of 10 km, consisting of regularly shaped tetrahedral finite elements. The problem arises from a 3D discretization with three displacement unknowns associated with each node of the grid [21].
(iv) CUBE-6091 arises from the equilibrium of a concrete cube discretized by a regular unstructured tetrahedral grid.

Matrices FAULT-639 and GEO-1438 are publicly available in the University of Florida Sparse Matrix Collection at http://www.cise.ufl.edu/research/sparse/matrices/.

In Table 1 we report sizes and nonzeros of the four matrices together with three of the most significant eigenvalues for each problem.

The computational performance of FSAI is compared to that obtained by using BFSAI as implemented in [12]. The comparison is done by evaluating the number of iterations $n_{\mathrm{iter}}$ needed to converge to the same tolerance, the wall clock times in seconds $T_{\mathrm{prec}}$ and $T_{\mathrm{iter}}$ for the preconditioner computation and for the eigensolver to converge, respectively, and the total time $T_{\mathrm{tot}} = T_{\mathrm{prec}} + T_{\mathrm{iter}}$. All tests are performed on the IBM SP6/5376 cluster at the CINECA Centre for High Performance Computing, equipped with IBM Power6 processors at 4.7 GHz, with 168 nodes, 5376 computing cores, and 21 Tbyte of internal network RAM. The FSAI-DACG code is written in Fortran 90 and compiled with the -O4 -q64 -qarch=pwr6 -qtune=pwr6 -qnoipa -qstrict -bmaxdata:0x70000000 options. For the BFSAI-IC code only an OpenMP implementation is presently available.

To study the parallel performance we use a strong scaling measure, that is, we observe how the CPU time varies with the number of processors for a fixed total problem size. Denote by $T_p$ the total CPU elapsed time in seconds on $p$ processors. We introduce a relative measure of the parallel efficiency achieved by the code, the pseudo speedup $S_p^{(\bar{p})}$, computed with respect to the smallest number of processors ($\bar{p}$) used to solve a given problem. Accordingly, we denote by $E_p^{(\bar{p})}$ the corresponding efficiency:
$$S_p^{(\bar{p})} = \frac{\bar{p}\,T_{\bar{p}}}{T_p}, \qquad E_p^{(\bar{p})} = \frac{S_p^{(\bar{p})}}{p} = \frac{\bar{p}\,T_{\bar{p}}}{p\,T_p}. \qquad (5.1)$$
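As a quick numerical check of (5.1), the few Python lines below compute the pseudo speedup and efficiency from a table of timings; the timings in the example are invented for illustration and are not the measured values reported in the tables that follow.

def scaling_measures(T, p_bar):
    # T: dict {number of processors p: total wall clock time T_p};
    # p_bar: smallest processor count used for the given problem.
    return {p: {"S": p_bar * T[p_bar] / T[p],              # pseudo speedup (5.1)
                "E": p_bar * T[p_bar] / (p * T[p])}        # parallel efficiency
            for p in T}

print(scaling_measures({4: 100.0, 8: 53.0, 16: 29.0}, p_bar=4))
# e.g. on 16 processors: S = 4*100/29 ~ 13.8, E ~ 0.86 (with these invented timings)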

5.1. FSAI-DACG Results

In this section we report the results of our FSAI-DACG implementation in the computation of the 10 leftmost eigenpairs of the 4 test problems. We used the exit test described in the DACG algorithm (see Algorithm 1) with tol $= 10^{-10}$. The results are summarized in Table 2. As the FSAI parameters, we chose $\delta = 0.1$, $d = 4$, and $\varepsilon = 0.1$ for all the test matrices. This combination of parameters produces, on average, the best (or close to the best) performance of the iterative procedure. Note that, for a fixed problem, the number of iterations does not change with the number of processors. The scalability of the code is very satisfactory in both the setup stage (preconditioner computation) and the iterative phase.

5.2. BFSAI-IC-DACG Results

We present in this section the results of DACG accelerated by the BFSAI-IC preconditioner for the approximation of the $s = 10$ leftmost eigenpairs of the matrices described above.

Table 3 provides the iteration count and total CPU time for BFSAI-DACG with different combinations of the parameters needed to construct the BFSAI-IC preconditioner for matrix PO-878, using from 2 to 8 processors. It can be seen from Table 3 that the assessment of the optimal parameters $\varepsilon$, $\rho_B$, and $\rho_L$ is not an easy task, since the number of iterations may vary considerably with the number of processors. We chose in this case the combination of parameters producing the second smallest total time with $p = 2, 4, 8$ processors. After intensive testing on all the test problems, we similarly selected the "optimal" values used in the numerical experiments reported in Table 4:
(i) FAULT-639: $d = 3$, $\varepsilon = 0.05$, $\rho_B = 10$, $\rho_L = 60$;
(ii) PO-878: $d = 3$, $\varepsilon = 0.05$, $\rho_B = 10$, $\rho_L = 10$;
(iii) GEO-1438: $d = 3$, $\varepsilon = 0.05$, $\rho_B = 10$, $\rho_L = 50$;
(iv) CUBE-6091: $d = 2$, $\varepsilon = 0.01$, $\rho_B = 0$, $\rho_L = 10$.

The user-specified parameters for BFSAI-IC given above provide evidence that it is important to build a dense preconditioner, based on the lower nonzero pattern of $A^3$ (except for CUBE-6091, which is built on a regular discretization), with the aim of decreasing the number of DACG iterations. In any case, the cost of computing such a dense preconditioner turns out to be almost negligible with respect to the wall clock time needed to iterate to convergence.

We recall that, presently, the BFSAI-IC code is implemented in OpenMP, so the results in terms of CPU time are significant only for $p \le 8$. For this reason the number of iterations reported in Table 4 is obtained with an increasing number of blocks $n_b$ and with $p = 8$ processors. This iteration count is representative of a potential implementation of BFSAI-DACG in an MPI (or hybrid OpenMP-MPI) environment, since the number of iterations depends only on the number of blocks, irrespective of the number of processors.

The only meaningful comparison between FSAI-DACG and BFSAI-DACG can therefore be carried out in terms of iteration counts, which are smaller for BFSAI-DACG on a small number of processors. The gap between FSAI and BFSAI iterations narrows as the number of processors increases.

5.3. Comparison with the LOBPCG Eigensolver Provided by hypre

In order to assess the effectiveness of our preconditioning within the proposed DACG algorithm with respect to publicly available parallel eigensolvers, the results given in Tables 2 and 4 are compared with those obtained by the schemes implemented in the hypre software package [14]. The Locally Optimal Block Preconditioned Conjugate Gradient method (LOBPCG) [15] is tested with the different preconditioners developed in the hypre project, that is, algebraic multigrid (AMG), diagonal scaling (DS), approximate inverse (ParaSails), additive Schwarz (Schwarz), and incomplete LU (Euclid). The hypre preconditioned CG is used for the inner iterations within LOBPCG. For details on the implementation of the LOBPCG algorithm, see, for instance, [22]. The selected preconditioner, ParaSails, is in its turn based on the FSAI preconditioner, so that the differences between the FSAI-DACG and ParaSails-LOBPCG performances should be ascribed mainly to the different eigensolvers rather than to the preconditioners.
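hypre's LOBPCG is driven through its own C interface, which we do not reproduce here; for a small-scale serial analogue of these runs one can use the LOBPCG implementation shipped with SciPy, which likewise accepts an arbitrary preconditioner. The sketch below is our own illustration and is not the configuration actually used for the tables.

import numpy as np
from scipy.sparse.linalg import lobpcg, LinearOperator

def leftmost_eigenpairs_lobpcg(A, apply_M, s=10, tol=1e-10, maxiter=2000, seed=0):
    # Compute the s leftmost eigenpairs of the SPD matrix A with SciPy's LOBPCG,
    # preconditioned by the callable apply_M (e.g., an FSAI-like application).
    n = A.shape[0]
    X = np.random.default_rng(seed).standard_normal((n, s))   # block of s vectors (bl = s)
    M = LinearOperator((n, n), matvec=apply_M)
    vals, vecs = lobpcg(A, X, M=M, tol=tol, maxiter=maxiter, largest=False)
    return vals, vecs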

We first carried out a preliminary set of runs with the aim of assessing the optimal value of the block size parameter bl, that is, the size of the subspace in which the eigenvectors are sought. Obviously, bl $\ge s = 10$ is required. We fixed the number of processors to 16 and obtained the results summarized in Table 5 for different values of bl $\in [10, 15]$. We found that only in problem CUBE-6091 does a value of bl larger than 10, namely bl $= 12$, yield an improvement in the CPU time. Note that we repeated this comparison with different numbers of processors and obtained analogous results.

Table 6 presents the number of iterations and timings obtained with the LOBPCG algorithm of the hypre package. The LOBPCG wall clock time is obtained with the preconditioner allowing for the best performance on the specific problem at hand, that is, ParaSails for all the problems. Using AMG as the preconditioner did not allow for convergence in three cases out of four, the only exception being the FAULT-639 problem, for which the CPU timings were however much larger than with ParaSails.

All matrices have to be preliminarily scaled by their maximum coefficient in order to allow for convergence. To make the comparison meaningful, the outer iterations of the different methods are stopped when the average relative error measure of the computed leftmost eigenpairs becomes smaller than $10^{-10}$, so as to obtain an accuracy comparable to that of the other codes. We also report in Table 6 the number of inner preconditioned CG iterations (pcgitr).

To better compare our FSAI-DACG with the LOBPCG method, we plot in Figure 1 the total CPU time versus the number of processors for the two codes. FSAI-DACG and LOBPCG provide very similar scalability, with the latter code performing slightly better on average. On the FAULT-639 problem, DACG proves faster than LOBPCG, irrespective of the number of processors employed.

Finally, we carried out a comparison of the two eigensolvers in the computation of the leftmost eigenpair only. Differently from LOBPCG, which performs a simultaneous approximation of all the selected eigenpairs, DACG computes the selected eigenpairs sequentially. For this reason, DACG should in principle be the better choice when just one eigenpair is sought. We investigated this feature, and the results are summarized in Table 7. We include the total CPU time and iteration count needed by LOBPCG and FSAI-DACG to compute the leftmost eigenpair with 16 processors. For the LOBPCG code we report only the number of outer iterations.

The parameters used to construct the FSAI preconditioner for these experiments are as follows:
(1) FAULT-639: $\delta = 0.1$, $d = 2$, $\varepsilon = 0.05$;
(2) PO-878: $\delta = 0.2$, $d = 4$, $\varepsilon = 0.1$;
(3) GEO-1438: $\delta = 0.1$, $d = 2$, $\varepsilon = 0.1$;
(4) CUBE-6091: $\delta = 0.0$, $d = 1$, $\varepsilon = 0.05$.

These parameters differ from those employed to compute the FSAI preconditioner in the assessment of the 10 leftmost eigenpairs and have been selected so as to produce a preconditioner that is relatively cheap to compute; otherwise, the setup time would prevail over the iteration time. Similarly, to compute just one eigenpair with LOBPCG we need to set a different value of pcgitr, the number of inner iterations. As can be seen from Table 7, in the majority of the test cases LOBPCG takes less time to compute 2 eigenpairs than just 1. FSAI-DACG proves more efficient than the best LOBPCG on problems PO-878 and GEO-1438. On the remaining two problems the slow convergence exhibited by DACG is probably due to the small relative separation $\xi_1$ between $\lambda_1$ and $\lambda_2$.

6. Conclusions

We have presented the parallel DACG algorithm for the partial eigensolution of large sparse SPD matrices. The scalability of DACG, accelerated with FSAI-type preconditioners, has been studied on a set of very large test matrices arising from real engineering mechanical applications. Our FSAI-DACG code has shown performance comparable to that of the LOBPCG eigensolver within the well-known public domain package hypre. The numerical results reveal not only that the scalability achieved by our code is roughly identical to that of hypre, but also that, in some instances, FSAI-DACG proves more efficient in terms of absolute CPU time. In particular, for the computation of the leftmost eigenpair, FSAI-DACG is more convenient in 2 problems out of 4.

Acknowledgment

The authors acknowledge the CINECA Iscra Award SCALPREC (2011) for the availability of HPC resources and support.