Journal of Applied Mathematics
Volume 2012 (2012), Article ID 872901, 14 pages
http://dx.doi.org/10.1155/2012/872901
Research Article

Parallel Rayleigh Quotient Optimization with FSAI-Based Preconditioning

Department of Mathematical Methods and Models for Scientific Applications, University of Padova, Via Trieste 63, 35121 Padova, Italy

Received 2 November 2011; Revised 1 February 2012; Accepted 3 February 2012

Academic Editor: Massimiliano Ferronato

Copyright © 2012 Luca Bergamaschi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The present paper describes a parallel preconditioned algorithm for the solution of partial eigenvalue problems for large sparse symmetric matrices on parallel computers. Namely, we consider the Deflation-Accelerated Conjugate Gradient (DACG) algorithm accelerated by factorized-sparse-approximate-inverse- (FSAI-) type preconditioners. We present an enhanced parallel implementation of the FSAI preconditioner and make use of the recently developed Block FSAI-IC preconditioner, which combines the FSAI and the Block Jacobi-IC preconditioners. Results on matrices of large size arising from finite element discretization of geomechanical models reveal that DACG accelerated by these types of preconditioners is competitive with the publicly available parallel hypre package, especially in the computation of a few of the leftmost eigenpairs. The parallel DACG code accelerated by FSAI is written in MPI-Fortran 90 and exhibits good scalability up to one thousand processors.

1. Introduction

The computation by iterative methods of a partial eigenspectrum ($s$ eigenpairs) of the generalized eigenproblem
$$A\mathbf{u} = \lambda B\mathbf{u}, \tag{1.1}$$
where $A, B \in \mathbb{R}^{n \times n}$ are large sparse symmetric positive definite (SPD) matrices, is an important and difficult task in many applications. It has become increasingly widespread owing to the development in the last twenty years of robust and computationally efficient schemes and corresponding software packages. Among the most well-known approaches for the important class of SPD matrices are the implicitly restarted Arnoldi method (equivalent to the Lanczos technique for this type of matrices) [1, 2], the Jacobi-Davidson (JD) algorithm [3], and schemes based on preconditioned conjugate gradient minimization of the Rayleigh quotient [4, 5].

The basic idea of the latter is to minimize the Rayleigh quotient
$$q(\mathbf{x}) = \frac{\mathbf{x}^T A \mathbf{x}}{\mathbf{x}^T B \mathbf{x}} \tag{1.2}$$
in a subspace which is orthogonal to the previously computed eigenvectors via a preconditioned CG-like procedure. Among the different variants of this technique we chose the Deflation-Accelerated Conjugate Gradient (DACG) scheme [4, 6], which has been shown to be competitive with the Jacobi-Davidson method and with the PARPACK package [7]. As in any other approach, the choice of the preconditioning technique is a key factor for our DACG method to accelerate and, in some cases, even to allow for convergence. To accelerate DACG in a parallel environment we selected the Factorized Sparse Approximate Inverse (FSAI) preconditioner introduced in [8]. We have developed a parallel implementation of this algorithm which has displayed excellent performance in both the setup phase and the application phase within a Krylov subspace solver [9–11]. The effectiveness of the FSAI preconditioner in the acceleration of DACG is compared to that of the Block FSAI-IC preconditioner, recently developed in [12], which combines the FSAI and the Block Jacobi-IC preconditioners and has given good results on a small number of processors for the solution of SPD linear systems and of large eigenproblems [13]. We used the resulting parallel codes to compute a few of the leftmost eigenpairs of a set of test matrices of large size arising from finite element discretization of geomechanical models. The reported results show that DACG preconditioned with either FSAI or BFSAI is a scalable and robust algorithm for the partial solution of SPD eigenproblems. The parallel performance of DACG is also compared to that of the publicly available parallel package hypre [14], which implements a number of preconditioners that can be used in combination with the Locally Optimal Block PCG (LOBPCG) iterative eigensolver [15]. The results presented in this paper show that the parallel DACG code accelerated by FSAI exhibits good scalability up to one thousand processors and displays comparable performance with respect to hypre, especially when a low number of eigenpairs is sought.

The outline of the paper is as follows: in Section 2 we describe the DACG algorithm; in Sections 3 and 4 we recall the definition and properties of the FSAI and BFSAI preconditioners, respectively. Section 5 contains the numerical results obtained with the proposed algorithms in the eigensolution of very large SPD matrices of size up to almost 7 million unknowns and $3 \times 10^8$ nonzeros. A comparison with the hypre eigensolver code is also included. Section 6 ends the paper with some conclusions.

2. The DACG Iterative Eigensolver and Implementation

The DACG algorithm computes the eigenpairs sequentially, starting from the leftmost one $(\lambda_1, \mathbf{u}_1)$. To evaluate the $j$th eigenpair, $j > 1$, DACG minimizes the Rayleigh quotient (RQ) in a subspace orthogonal to the $j-1$ eigenvectors previously computed. More precisely, DACG minimizes the Rayleigh quotient
$$q(\mathbf{z}) = \frac{\mathbf{z}^T A \mathbf{z}}{\mathbf{z}^T \mathbf{z}}, \tag{2.1}$$
where
$$\mathbf{z} = \mathbf{x} - U_j \left(U_j^T \mathbf{x}\right), \quad U_j = \left[\mathbf{u}_1, \ldots, \mathbf{u}_{j-1}\right], \quad \mathbf{x} \in \mathbb{R}^n. \tag{2.2}$$
The first eigenpair $(\lambda_1, \mathbf{u}_1)$ is obtained by minimization of (2.1) with $\mathbf{z} = \mathbf{x}$ ($U_1 = \emptyset$). Denoting by $M$ the preconditioning matrix, that is, $M \approx A^{-1}$, the $s$ leftmost eigenpairs are computed by the conjugate gradient procedure [6] described in Algorithm 1.

Algorithm 1: DACG Algorithm.
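As a concrete illustration of the deflated Rayleigh quotient minimization underlying DACG, the following Python sketch implements a simplified preconditioned gradient variant with a two-dimensional Rayleigh-Ritz "line search". It is only a minimal sketch of the idea, not the authors' Algorithm 1: the function name, the line-search strategy, and the tolerance handling are illustrative assumptions.

```python
# Minimal sketch (not Algorithm 1 of the paper): preconditioned gradient
# minimization of the Rayleigh quotient with deflation against the
# previously computed eigenvectors. All names and choices are illustrative.
import numpy as np

def deflated_rq_minimization(A, apply_M, U, x0, tol=1e-10, maxit=10000):
    """Approximate the leftmost eigenpair of the SPD matrix A restricted to
    the orthogonal complement of span(U).
    A       : (n, n) symmetric positive definite ndarray
    apply_M : callable applying a preconditioner M ~ A^{-1} to a vector
    U       : (n, j-1) matrix of already converged eigenvectors (orthonormal);
              an (n, 0) matrix for the first eigenpair
    """
    x = x0 - U @ (U.T @ x0)            # deflate the initial guess
    x /= np.linalg.norm(x)
    for _ in range(maxit):
        Ax = A @ x
        lam = x @ Ax                    # Rayleigh quotient (||x|| = 1)
        g = Ax - lam * x                # gradient of q at x (up to a factor 2)
        if np.linalg.norm(g) <= tol * lam:
            break
        d = apply_M(g)                  # preconditioned search direction
        d -= U @ (U.T @ d)              # keep the direction deflated
        d -= (x @ d) * x                # and orthogonal to the current iterate
        d /= np.linalg.norm(d)
        # exact "line search": Rayleigh-Ritz step in span{x, d}
        S = np.column_stack([x, d])     # orthonormal 2-column basis
        w, V = np.linalg.eigh(S.T @ (A @ S))
        x = S @ V[:, 0]                 # Ritz vector of the smallest Ritz value
        x /= np.linalg.norm(x)
    return x @ (A @ x), x
```

With `apply_M = lambda v: v` this reduces to the non-preconditioned scheme; in the paper the role of `apply_M` is played by the application of the FSAI or BFSAI-IC preconditioner.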

The schemes relying on Rayleigh quotient optimization are quite attractive for parallel computations; however, preconditioning is an essential feature to ensure practical convergence. When seeking an eigenpair $(\lambda_j, \mathbf{u}_j)$, it can be proved that the number of iterations is proportional to the square root of the condition number $\kappa(H_j)$ of the Hessian of the Rayleigh quotient at the stationary point $\mathbf{u}_j$ [4]. It turns out that $H_j$ is similar to $(A - \lambda_j I)M$, which is not SPD. However, $H_j$ operates on the space orthogonal to the previous eigenvectors, so that the only important eigenvalues are the positive ones. In the non-preconditioned case (i.e., $M = I$) we would have
$$\kappa(H_j) = \frac{\lambda_N - \lambda_j}{\lambda_{j+1} - \lambda_j}, \tag{2.3}$$
whereas in the ideal case $M \equiv A^{-1}$ we have
$$\kappa(H_j) \approx \frac{\lambda_{j+1}}{\lambda_{j+1} - \lambda_j} = \xi_j \le \frac{\lambda_N}{\lambda_{j+1} - \lambda_j}. \tag{2.4}$$
Therefore, even though $A^{-1}$ is not the optimal preconditioner for $A - \lambda_j I$, if $M$ is a good preconditioner of $A$ then the condition number $\kappa(H_j)$ will approach $\xi_j$.
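As a small worked example of the estimates (2.3) and (2.4), with a purely illustrative spectrum: for $\lambda_j = 1$, $\lambda_{j+1} = 1.5$, and $\lambda_N = 10^4$, the non-preconditioned case gives $\kappa(H_j) = (10^4 - 1)/0.5 \approx 2 \times 10^4$, while the ideal-case value is $\xi_j \approx 1.5/0.5 = 3$; since the iteration count grows with $\sqrt{\kappa(H_j)}$, the expected number of iterations drops by roughly a factor of 80.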

3. The FSAI Preconditioner

The FSAI preconditioner, initially proposed in [8, 16], has been later developed and implemented in parallel by Bergamaschi and Martínez in [9]. Here, we only shortly recall the main features of this preconditioner. Given an SPD matrix 𝐴 the FSAI preconditioner approximately factorizes its inverse as a product of two sparse triangular matrices as 𝐴1𝑊𝑇𝑊.(3.1) The choice of nonzeros in 𝑊 is based on a sparsity pattern which in our work may be the same as 𝐴𝑑 where 𝐴 is the result of prefiltration [10] of 𝐴, that is, dropping of all elements below of a threshold parameter 𝛿. The entries of 𝑊 are computed by minimizing the Frobenius norm of 𝐼𝑊𝐿, where 𝐿 is the exact Cholesky factor of 𝐴, without forming explicitly the matrix 𝐿. The computed 𝑊 is then sparsified by dropping all the elements which are below a second tolerance parameter (𝜀). The final FSAI preconditioner is therefore related to the following three parameters: 𝛿, prefiltration threshold; 𝑑, power of 𝐴 generating the sparsity pattern (we allow 𝑑{1,2,4} in our experiments); 𝜀, postfiltration threshold.
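The row-by-row construction implied by the Frobenius-norm characterization can be sketched as follows with SciPy: each row of $W$ solves a small dense SPD system on the index set given by its sparsity pattern (assumed here to include the diagonal), and is then scaled so that $WAW^T$ has unit diagonal. This is only a serial sketch; the pattern generation, prefiltration, and postfiltration of the actual Fortran/MPI code are not reproduced, and the helper name is hypothetical.

```python
# Sketch of the row-wise FSAI construction: for each row i, solve a small
# dense SPD system on the indices of the prescribed lower triangular
# pattern, then scale so that the preconditioned matrix W A W^T has unit
# diagonal. Illustrative only.
import numpy as np
import scipy.sparse as sp

def fsai_factor(A, pattern):
    """A       : SPD sparse matrix (CSR)
       pattern : lower triangular sparsity pattern (sparse, includes diagonal)
       returns : sparse lower triangular W with A^{-1} ~ W^T W"""
    A = sp.csr_matrix(A)
    pattern = sp.csr_matrix(sp.tril(pattern))
    n = A.shape[0]
    rows, cols, vals = [], [], []
    for i in range(n):
        idx = np.sort(pattern.indices[pattern.indptr[i]:pattern.indptr[i + 1]])
        Ai = A[idx, :][:, idx].toarray()        # small dense SPD submatrix
        e = np.zeros(len(idx)); e[-1] = 1.0     # unit vector for the diagonal
        w = np.linalg.solve(Ai, e)              # unscaled row of W
        w /= np.sqrt(w[-1])                     # scaling: (W A W^T)_ii = 1
        rows.extend([i] * len(idx)); cols.extend(idx); vals.extend(w)
    return sp.csr_matrix((vals, (rows, cols)), shape=(n, n))
```

Within the DACG iteration the preconditioner is then applied as $\mathbf{v} \mapsto W^T(W\mathbf{v})$, that is, two sparse matrix-vector products per application.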

3.1. Parallel Implementation of FSAI-DACG

We have developed a parallel code, written in Fortran 90, which exploits the MPI library for exchanging data among the processors. We used a block row distribution of all matrices ($A$, $W$, and $W^T$), that is, with complete rows assigned to different processors. All these matrices are stored in static data structures in CSR format.

Regarding the preconditioner computation, we stress that each row $i$ of the matrix $W$ of the FSAI preconditioner is computed independently of the others, by solving a small SPD dense linear system of size $n_i$, equal to the number of nonzeros allowed in row $i$ of $W$. Some of the rows of $A$ which contribute to forming this linear system may be nonlocal to the processor owning row $i$ and must be received from other processors. To this aim we implemented a routine, called get_extra_rows, which carries out all the row exchanges among the processors before the computation of $W$ starts, which then proceeds entirely in parallel. Since the number of nonlocal rows needed by each processor is relatively small, we chose to temporarily replicate these rows in auxiliary data structures. Once $W$ is obtained, a parallel transposition routine provides every processor with its portion of $W^T$.

The DACG iterative solver is essentially based on scalar and matrix-vector products. We made use of an optimized parallel matrix-vector product, developed in [17], which has shown its effectiveness on up to 1024 processors.
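The following mpi4py sketch illustrates the block-row parallelization pattern used here (each rank owns a contiguous block of rows of the matrix and the corresponding pieces of the vectors). It is a deliberately naive illustration, not the paper's Fortran code nor the optimized routine of [17]: the full vector is reassembled with an allgather before each product, whereas the optimized version restricts communication to the entries actually needed by the local nonzeros.

```python
# Naive block-row distributed matrix-vector product and dot product
# (illustrative only; communication is not optimized).
import numpy as np
import scipy.sparse as sp
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

def distributed_matvec(A_local, x_local):
    # gather the pieces of x owned by all ranks into one global vector
    x_global = np.concatenate(comm.allgather(x_local))
    return A_local @ x_global          # rows owned by this rank only

def distributed_dot(x_local, y_local):
    return comm.allreduce(float(x_local @ y_local), op=MPI.SUM)

# toy usage: a 1D Laplacian distributed by blocks of rows
n = 1000
counts = [n // size + (r < n % size) for r in range(size)]
start = sum(counts[:rank]); stop = start + counts[rank]
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
A_local = A[start:stop, :]
x_local = np.ones(counts[rank])
y_local = distributed_matvec(A_local, x_local)
print(rank, distributed_dot(y_local, y_local))
```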

4. Block FSAI-IC Preconditioning

The Block FSAI-IC preconditioner, BFSAI-IC in the following, is a recent development for the parallel solution of symmetric positive definite (SPD) linear systems. Its construction is based on an arbitrary nonsingular block diagonal matrix $D$ consisting of $n_b$ equal-size blocks, defined more precisely below.

Let $\mathcal{S}_L$ and $\mathcal{S}_{BD}$ be a sparse lower triangular and a dense block diagonal nonzero pattern, respectively, for an $n \times n$ matrix. Even though not strictly necessary, for the sake of simplicity assume that $\mathcal{S}_{BD}$ consists of $n_b$ diagonal blocks of equal size $m = n/n_b$, and let $D \in \mathbb{R}^{n \times n}$ be an arbitrary full-rank matrix with nonzero pattern $\mathcal{S}_{BD}$.

Consider the set of lower block triangular matrices $F$ with a prescribed nonzero pattern $\mathcal{S}_L$ and minimize over this set the Frobenius norm
$$\| D - FL \|_F, \tag{4.1}$$
where $L$ is the exact lower Cholesky factor of an SPD matrix $A$. A matrix $F$ satisfying the minimality condition (4.1) for a given $D$ is the lower block triangular factor of BFSAI-IC. Recalling the definition of the classical FSAI preconditioner, it can be noticed that BFSAI-IC is a block generalization of the FSAI concept.

The differentiation of (4.1) with respect to the unknown entries $[F]_{ij}$, $(i,j) \in \mathcal{S}_L$, yields the solution of $n$ independent dense subsystems which, as in the standard FSAI case, do not require the explicit knowledge of $L$. The effect of applying $F$ to $A$ is to concentrate the largest entries of the preconditioned matrix $FAF^T$ into $n_b$ diagonal blocks. However, as $D$ is arbitrary, it is still not ensured that $FAF^T$ is better than $A$ in an iterative method, so it is necessary to precondition $FAF^T$ again. As $FAF^T$ resembles a block diagonal matrix, an efficient technique relies on using a block diagonal matrix which collects an approximation of the inverse of each diagonal block $B_{ib}$ of $FAF^T$.

It is easy to show that $F$ is guaranteed to exist for SPD matrices and that each $B_{ib}$ is SPD, too [12]. Using an IC decomposition with partial fill-in for each block $B_{ib}$ and collecting the lower IC factors in the block diagonal matrix $J$, the resulting preconditioned matrix reads
$$J^{-1} F A F^T J^{-T} = W A W^T, \tag{4.2}$$
with the final preconditioner
$$M = W^T W = F^T J^{-T} J^{-1} F. \tag{4.3}$$
$M$ in (4.3) is the BFSAI-IC preconditioner of $A$.
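In practice $M$ is never formed explicitly: applying it to a vector requires one product with $F$, two triangular solves with the block IC factors collected in $J$, and one product with $F^T$. A minimal SciPy sketch of this application step follows, assuming the sparse factors $F$ and $J$ are already available; the function name is illustrative.

```python
# Apply the BFSAI-IC preconditioner M = F^T J^{-T} J^{-1} F to a vector v,
# given the sparse factors F (lower block triangular) and J (block diagonal,
# collecting the lower IC factors of the diagonal blocks of F A F^T).
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve_triangular

def apply_bfsai_ic(F, J, v):
    Jl = sp.csr_matrix(J)
    y = F @ v                                                     # y = F v
    y = spsolve_triangular(Jl, y, lower=True)                     # y = J^{-1} y
    y = spsolve_triangular(sp.csr_matrix(Jl.T), y, lower=False)   # y = J^{-T} y
    return F.T @ y                                                # M v = F^T y
```

Since $J$ is block diagonal, the two triangular solves decouple into $n_b$ independent solves, one per block, which is what makes the application naturally parallel.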

For its computation, BFSAI-IC needs the selection of $n_b$ and $\mathcal{S}_L$. The basic requirement for the number of blocks $n_b$ is to be larger than or equal to the number of computing cores $p$. From a practical viewpoint, however, the most efficient choice in terms of both wall clock time and iteration count is to keep the blocks as large as possible, thus implying $n_b = p$. Hence, $n_b$ is by default set equal to $p$. By contrast, the choice of $\mathcal{S}_L$ is theoretically more challenging and still not completely clear. A widely accepted option for other approximate inverses, such as FSAI or SPAI, is to select the nonzero pattern of $A^d$ for small values of $d$, on the basis of the Neumann series expansion of $A^{-1}$. Using a similar approach, in the BFSAI construction we select $\mathcal{S}_L$ as the lower block triangular pattern of $A^d$. As the nonzeros located in the diagonal blocks are not used for the computation of $F$, a larger value of $d$, say 3 or 4, can still be used.
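A simple way to generate such patterns, shown below for illustration only, is to work with the Boolean structure of $A$: the pattern of $A^d$ is obtained by repeated symbolic products and its lower triangular part is then extracted. The production code presumably builds the pattern symbolically without forming these intermediate matrices; the helper name is hypothetical.

```python
# Build the lower triangular part of the nonzero pattern of A^d, the kind of
# pattern used for FSAI (and, blockwise, for BFSAI-IC). Purely structural:
# only the positions of the nonzeros matter, not their values.
import scipy.sparse as sp

def lower_pattern_of_power(A, d):
    S = sp.csr_matrix(A, copy=True)
    S.data[:] = 1.0                    # Boolean structure of A (0/1 values)
    P = S.copy()
    for _ in range(d - 1):
        P = P @ S                      # positions of the nonzeros of A^k
        P.data[:] = 1.0                # keep structure only
    return sp.tril(P, format="csr")    # lower triangular part of the pattern
```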

Though theoretically not necessary, three additional user-specified parameters are worth introducing in order to better control the memory occupation and the BFSAI-IC density:
(1) $\varepsilon$ is a postfiltration parameter that allows for dropping the smallest entries of $F$. In particular, $[F]_{ij}$ is neglected if $|[F]_{ij}| < \varepsilon \|\mathbf{f}_i\|_2$, where $\mathbf{f}_i$ is the $i$th row of $F$;
(2) $\rho_B$ is a parameter that controls the fill-in of $B_{ib}$ and determines the maximum allowable number of nonzeros for each row of $B_{ib}$ in addition to the corresponding entries of $A$. Quite obviously, only the largest $\rho_B$ entries are retained;
(3) $\rho_L$ is a parameter that controls the fill-in of each IC factor $L_{ib}$, denoting the maximum allowable number of nonzeros for each row of $L_{ib}$ in addition to the corresponding entries of $B_{ib}$.

An OpenMP implementation of the algorithms above is available in [18].

5. Numerical Results

In this section we examine the performance of parallel DACG preconditioned by both FSAI and BFSAI in the partial solution of four large-size sparse eigenproblems. The test cases, which we briefly describe below, are taken from different real engineering mechanical applications. In detail, they are as follows.
(i) FAULT-639 is obtained from a structural problem discretizing a faulted gas reservoir with tetrahedral finite elements and triangular interface elements [19]. The interface elements are used with a penalty formulation to simulate the behavior of the faults. The problem arises from a 3D discretization with three displacement unknowns associated with each node of the grid.
(ii) PO-878 arises in the simulation of the consolidation of a real gas reservoir in the Po Valley, Italy, used for underground gas storage purposes (for details, see [20]).
(iii) GEO-1438 is obtained from a geomechanical problem discretizing a region of the earth's crust subject to underground deformation. The computational domain is a box with an areal extent of 50 × 50 km and a depth of 10 km, consisting of regularly shaped tetrahedral finite elements. The problem arises from a 3D discretization with three displacement unknowns associated with each node of the grid [21].
(iv) CUBE-6091 arises from the equilibrium of a concrete cube discretized by a regular unstructured tetrahedral grid.

Matrices FAULT-639 and GEO-1438 are publicly available in the University of Florida Sparse Matrix Collection at http://www.cise.ufl.edu/research/sparse/matrices/.

In Table 1 we report sizes and nonzeros of the four matrices together with three of the most significant eigenvalues for each problem.

Table 1: Size, number of nonzeros, and three representative eigenvalues of the test matrices.

The computational performance of FSAI is compared with that obtained by using BFSAI as implemented in [12]. The comparison is carried out by evaluating the number of iterations $n_{\mathrm{iter}}$ needed to converge to the same tolerance, the wall clock times in seconds $T_{\mathrm{prec}}$ and $T_{\mathrm{iter}}$ for the preconditioner computation and for the eigensolver to converge, respectively, and the total time $T_{\mathrm{tot}} = T_{\mathrm{prec}} + T_{\mathrm{iter}}$. All tests are performed on the IBM SP6/5376 cluster at the CINECA Centre for High Performance Computing, equipped with IBM Power6 processors at 4.7 GHz, with 168 nodes, 5376 computing cores, and 21 Tbyte of internal network RAM. The FSAI-DACG code is written in Fortran 90 and compiled with the -O4 -q64 -qarch=pwr6 -qtune=pwr6 -qnoipa -qstrict -bmaxdata:0x70000000 options. For the BFSAI-IC code only an OpenMP implementation is presently available.

To study parallel performance we use a strong scaling measure, that is, we observe how the CPU times vary with the number of processors for a fixed total problem size. Denote by $T_p$ the total CPU elapsed time expressed in seconds on $p$ processors. We introduce a relative measure of the parallel efficiency achieved by the code, the pseudo speedup $S_p^{(\bar{p})}$, computed with respect to the smallest number of processors ($\bar{p}$) used to solve a given problem, together with the corresponding efficiency $E_p^{(\bar{p})}$:
$$S_p^{(\bar{p})} = \frac{\bar{p}\, T_{\bar{p}}}{T_p}, \qquad E_p^{(\bar{p})} = \frac{S_p^{(\bar{p})}}{p} = \frac{\bar{p}\, T_{\bar{p}}}{p\, T_p}. \tag{5.1}$$
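For instance, (5.1) can be evaluated directly from a table of measured times; the following small helper shows the computation (the timing values in the usage line are made up for illustration only).

```python
# Pseudo speedup and efficiency as defined in (5.1), computed from measured
# wall clock times.
def pseudo_speedup(times, p_bar):
    """times: dict mapping processor count p -> total time T_p (seconds);
       p_bar: smallest processor count used for this problem."""
    T_bar = times[p_bar]
    S = {p: p_bar * T_bar / T_p for p, T_p in times.items()}
    E = {p: S[p] / p for p in times}
    return S, E

# illustrative, made-up timings
S, E = pseudo_speedup({4: 100.0, 8: 52.0, 16: 28.0, 32: 16.0}, p_bar=4)
print(S[32], E[32])   # speedup 25.0 and efficiency ~0.78 relative to p = 4
```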

5.1. FSAI-DACG Results

In this section we report the results of our FSAI-DACG implementation in the computation of the 10 leftmost eigenpairs of the 4 test problems. We used the exit test described in the DACG algorithm (see Algorithm 1) with tol $= 10^{-10}$. The results are summarized in Table 2. As the FSAI parameters, we chose $\delta = 0.1$, $d = 4$, and $\varepsilon = 0.1$ for all the test matrices. This combination of parameters produces, on average, the best (or close to the best) performance of the iterative procedure. Note that, for a fixed problem, the number of iterations does not change with the number of processors. The scalability of the code is very satisfactory in both the setup stage (preconditioner computation) and the iterative phase.

Table 2: Number of iterations, timings, and scalability indices for FSAI-DACG in the computation of the 10 leftmost eigenpairs of the four test problems.
5.2. BFSAI-IC-DACG Results

We present in this section the results of DACG accelerated by the BFSAI-IC preconditioner for the approximation of the 𝑠=10 leftmost eigenpairs of the matrices described above.

Table 3 provides the iteration count and total CPU time for BFSAI-DACG with different combinations of the parameters needed to construct the BFSAI-IC preconditioner for matrix PO-878, using from 2 to 8 processors. It can be seen from Table 3 that the assessment of the optimal parameters $\varepsilon$, $\rho_B$, and $\rho_L$ is not an easy task, since the number of iterations may vary considerably depending on the number of processors. We chose in this case the combination of parameters producing the second smallest total time with $p = 2, 4, 8$ processors. After intensive testing on all the test problems, we similarly selected the "optimal" values used in the numerical experiments reported in Table 4:
(i) FAULT-639: $d = 3$, $\varepsilon = 0.05$, $\rho_B = 10$, $\rho_L = 60$;
(ii) PO-878: $d = 3$, $\varepsilon = 0.05$, $\rho_B = 10$, $\rho_L = 10$;
(iii) GEO-1438: $d = 3$, $\varepsilon = 0.05$, $\rho_B = 10$, $\rho_L = 50$;
(iv) CUBE-6091: $d = 2$, $\varepsilon = 0.01$, $\rho_B = 0$, $\rho_L = 10$.

Table 3: Performance of BFSAI-DACG for matrix PO-878 with 2 to 8 processors and different parameter values.
Table 4: Number of iterations for BFSAI-DACG in the computations of the 10 leftmost eigenpairs.

The user-specified parameters for BFSAI-IC given above provide evidence that it is important to build a dense preconditioner based on the lower nonzero pattern of $A^3$ (except for CUBE-6091, which is built on a regular discretization), with the aim of decreasing the number of DACG iterations. In any case, the cost of computing such a dense preconditioner appears to be almost negligible with respect to the wall clock time needed to iterate to convergence.

We recall that, presently, the BFSAI-IC code is implemented in OpenMP, so the results in terms of CPU time are significant only for $p \le 8$. For this reason the numbers of iterations reported in Table 4 are obtained with an increasing number of blocks $n_b$ and with $p = 8$ processors. These iteration numbers account for a potential implementation of BFSAI-DACG under the MPI (or hybrid OpenMP-MPI) environment, as the number of iterations depends only on the number of blocks, irrespective of the number of processors.

The only meaningful comparison between FSAI-DACG and BFSAI-DACG can therefore be carried out in terms of iteration numbers, which are smaller for BFSAI-DACG when a small number of processors is used. The gap between the FSAI and BFSAI iteration counts reduces as the number of processors increases.

5.3. Comparison with the LOBPCG Eigensolver Provided by hypre

In order to validate the effectiveness of our preconditioning within the proposed DACG algorithm against already available public parallel eigensolvers, the results given in Tables 2 and 4 are compared with those obtained by the schemes implemented in the hypre software package [14]. The Locally Optimal Block Preconditioned Conjugate Gradient method (LOBPCG) [15] is experimented with, using the different preconditioners developed in the hypre project, that is, algebraic multigrid (AMG), diagonal scaling (DS), approximate inverse (ParaSails), additive Schwarz (Schwarz), and incomplete LU (Euclid). The hypre preconditioned CG is used for the inner iterations within LOBPCG. For details on the implementation of the LOBPCG algorithm, see, for instance, [22]. The selected preconditioner, ParaSails, is in turn based on the FSAI preconditioner, so that the difference between the FSAI-DACG and ParaSails-LOBPCG performances should be ascribed mainly to the different eigensolvers rather than to the preconditioners.

We first carried out a preliminary set of runs with the aim of assessing the optimal value of the block size parameter bl, that is, the size of the subspace in which the eigenvectors are sought. Obviously it must be bl $\ge s = 10$. We fixed the number of processors to 16 and obtained the results summarized in Table 5 with different values of bl $\in [10, 15]$. We found that only in problem CUBE-6091 does a value of bl larger than 10, namely bl = 12, yield an improvement in the CPU time. Note that we also made this comparison with different numbers of processors and obtained analogous results.

Table 5: Iterations and CPU time for the iterative solver of LOBPCG-hypre preconditioned by Parasails with different values of bl and 𝑝=16 processors.

Table 6 presents the number of iterations and timings using the LOBPCG algorithm in the hypre package. The LOBPCG wall clock time is obtained with the preconditioner allowing for the best performance on the specific problem at hand, that is, ParaSails for all the problems. Using AMG as the preconditioner did not allow for convergence in three cases out of four, the only exception being the FAULT-639 problem, for which the CPU timings were however much larger than with ParaSails.

Table 6: Number of iterations, timings, and scalability of LOBPCG-hypre preconditioned by Parasails.

All matrices have to be preliminarily scaled by their maximum coefficient in order to allow for convergence. To make the comparison meaningful, the outer iterations of the different methods are stopped when the average relative error measure of the computed leftmost eigenpairs gets smaller than $10^{-10}$, in order to obtain an accuracy comparable to that of the other codes. We also report in Table 6 the number of inner preconditioned CG iterations (pcgitr).

To better compare our FSAI-DACG with the LOBPCG method, we depict in Figure 1 the total CPU time versus the number of processors for the two codes. FSAI-DACG and LOBPCG provide very similar scalability, with the latter code performing slightly better on average. On the FAULT-639 problem, DACG proves faster than LOBPCG, irrespective of the number of processors employed.

Figure 1: Comparison between FSAI-DACG and LOBPCG-hypre in terms of total CPU time for different number of processors.

Finally, we have carried out a comparison of the two eigensolvers in the computation of only the leftmost eigenpair. Unlike LOBPCG, which performs a simultaneous approximation of all the selected eigenpairs, DACG computes the selected eigenpairs sequentially. For this reason, DACG should be the better choice, at least in principle, when just one eigenpair is sought. We investigated this feature, and the results are summarized in Table 7. We include the total CPU time and iteration count needed by LOBPCG and FSAI-DACG to compute the leftmost eigenpair with 16 processors. For the LOBPCG code we report only the number of outer iterations.

Table 7: Performance of LOBPCG-hypre with Parasails and 16 processors in the computation of the smallest eigenvalue using bl = 1 and bl = 2 and FSAI-DACG.

The parameters used to construct the FSAI preconditioner for these experiments are as follows:
(1) FAULT-639: $\delta = 0.1$, $d = 2$, $\varepsilon = 0.05$;
(2) PO-878: $\delta = 0.2$, $d = 4$, $\varepsilon = 0.1$;
(3) GEO-1438: $\delta = 0.1$, $d = 2$, $\varepsilon = 0.1$;
(4) CUBE-6091: $\delta = 0.0$, $d = 1$, $\varepsilon = 0.05$.

These parameters differ from those employed to compute the FSAI preconditioner in the assessment of the 10 leftmost eigenpairs and have been selected in order to produce a preconditioner that is relatively cheap to compute, since otherwise the setup time would prevail over the iteration time. Similarly, to compute just one eigenpair with LOBPCG we need to set a different value of pcgitr, the number of inner iterations. As can be seen from Table 7, in the majority of the test cases LOBPCG takes less time to compute 2 eigenpairs than just 1. FSAI-DACG proves more efficient than the best LOBPCG on problems PO-878 and GEO-1438. On the remaining two problems the slow convergence exhibited by DACG is probably due to the small relative separation between $\lambda_1$ and $\lambda_2$, that is, a large condition number $\xi_1$.

6. Conclusions

We have presented the parallel DACG algorithm for the partial eigensolution of large sparse SPD matrices. The scalability of DACG, accelerated with FSAI-type preconditioners, has been studied on a set of test matrices of very large size arising from real engineering mechanical applications. Our FSAI-DACG code has shown performance comparable to that of the LOBPCG eigensolver within the well-known public domain package hypre. The numerical results reveal not only that the scalability achieved by our code is roughly identical to that of hypre, but also that, in some instances, FSAI-DACG proves more efficient in terms of absolute CPU time. In particular, for the computation of the leftmost eigenpair, FSAI-DACG is more convenient in 2 problems out of 4.

Acknowledgment

The authors acknowledge the CINECA Iscra Award SCALPREC (2011) for the availability of HPC resources and support.

References

1. P. Arbenz, U. Hetmaniuk, R. Lehoucq, and R. Tuminaro, "A comparison of eigensolvers for large-scale 3D modal analysis using AMG-preconditioned iterative methods," International Journal for Numerical Methods in Engineering, vol. 64, no. 2, pp. 204–236, 2005.
2. R. B. Lehoucq and D. C. Sorensen, "Deflation techniques for an implicitly restarted Arnoldi iteration," SIAM Journal on Matrix Analysis and Applications, vol. 17, no. 4, pp. 789–821, 1996.
3. G. L. G. Sleijpen and H. A. Van der Vorst, "A Jacobi-Davidson iteration method for linear eigenvalue problems," SIAM Journal on Matrix Analysis and Applications, vol. 17, no. 2, pp. 401–425, 1996.
4. L. Bergamaschi, G. Gambolati, and G. Pini, "Asymptotic convergence of conjugate gradient methods for the partial symmetric eigenproblem," Numerical Linear Algebra with Applications, vol. 4, no. 2, pp. 69–84, 1997.
5. A. V. Knyazev and A. L. Skorokhodov, "Preconditioned gradient-type iterative methods in a subspace for partial generalized symmetric eigenvalue problems," SIAM Journal on Numerical Analysis, vol. 31, no. 4, pp. 1226–1239, 1994.
6. L. Bergamaschi, G. Pini, and F. Sartoretto, "Approximate inverse preconditioning in the parallel solution of sparse eigenproblems," Numerical Linear Algebra with Applications, vol. 7, no. 3, pp. 99–116, 2000.
7. L. Bergamaschi and M. Putti, "Numerical comparison of iterative eigensolvers for large sparse symmetric positive definite matrices," Computer Methods in Applied Mechanics and Engineering, vol. 191, no. 45, pp. 5233–5247, 2002.
8. L. Yu. Kolotilina and A. Yu. Yeremin, "Factorized sparse approximate inverse preconditionings. I. Theory," SIAM Journal on Matrix Analysis and Applications, vol. 14, no. 1, pp. 45–58, 1993.
9. L. Bergamaschi and A. Martínez, "Parallel acceleration of Krylov solvers by factorized approximate inverse preconditioners," in VECPAR 2004, M. Daydé et al., Ed., vol. 3402 of Lecture Notes in Computer Science, pp. 623–636, Springer, Heidelberg, Germany, 2005.
10. L. Bergamaschi, A. Martínez, and G. Pini, "An efficient parallel MLPG method for poroelastic models," Computer Modeling in Engineering & Sciences, vol. 49, no. 3, pp. 191–215, 2009.
11. L. Bergamaschi and A. Martínez, "Parallel inexact constraint preconditioners for saddle point problems," in Proceedings of the 17th International Conference on Parallel Processing (Euro-Par'11), E. Jeannot, R. Namyst, and J. Roman, Eds., vol. 6853 of Lecture Notes in Computer Science, part 2, pp. 78–89, Springer, Bordeaux, France, 2011.
12. C. Janna, M. Ferronato, and G. Gambolati, "A block FSAI-ILU parallel preconditioner for symmetric positive definite linear systems," SIAM Journal on Scientific Computing, vol. 32, no. 5, pp. 2468–2484, 2010.
13. M. Ferronato, C. Janna, and G. Pini, "Efficient parallel solution to large-size sparse eigenproblems with block FSAI preconditioning," Numerical Linear Algebra with Applications, in press.
14. Lawrence Livermore National Laboratory, hypre User Manual, Software Version 1.6.0, Center for Applied Scientific Computing (CASC), University of California, 2001.
15. A. V. Knyazev, "Toward the optimal preconditioned eigensolver: locally optimal block preconditioned conjugate gradient method," SIAM Journal on Scientific Computing, vol. 23, no. 2, pp. 517–541, 2001.
16. L. Yu. Kolotilina, A. A. Nikishin, and A. Yu. Yeremin, "Factorized sparse approximate inverse preconditionings. IV. Simple approaches to rising efficiency," Numerical Linear Algebra with Applications, vol. 6, no. 7, pp. 515–531, 1999.
17. A. Martínez, L. Bergamaschi, M. Caliari, and M. Vianello, "A massively parallel exponential integrator for advection-diffusion models," Journal of Computational and Applied Mathematics, vol. 231, no. 1, pp. 82–91, 2009.
18. C. Janna, M. Ferronato, and N. Castelletto, "BFSAI-IC OpenMP implementation," Release V1.0, January 2011, http://www.dmsa.unipd.it/~ferronat/software.html.
19. M. Ferronato, C. Janna, and G. Gambolati, "Mixed constraint preconditioning in computational contact mechanics," Computer Methods in Applied Mechanics and Engineering, vol. 197, no. 45–48, pp. 3922–3931, 2008.
20. N. Castelletto, M. Ferronato, G. Gambolati et al., "3D geomechanics in UGS projects: a comprehensive study in northern Italy," in Proceedings of the 44th US Rock Mechanics Symposium, Salt Lake City, Utah, USA, 2010.
21. P. Teatini, M. Ferronato, G. Gambolati, D. Bau, and M. Putti, "Anthropogenic Venice uplift by seawater pumping into a heterogeneous aquifer system," Water Resources Research, vol. 46, Article ID W11547, 16 pages, 2010.
22. A. V. Knyazev and M. E. Argentati, "Implementation of a preconditioned eigensolver using hypre," 2005, http://math.ucdenver.edu/~rargenta/index_files/rep220.pdf.