Journal of Applied Mathematics
Volume 2013 (2013), Article ID 767042, 10 pages
http://dx.doi.org/10.1155/2013/767042
Research Article

Parallel RFSAI-BFGS Preconditioners for Large Symmetric Eigenproblems

L. Bergamaschi1 and A. Martínez2

1Department of Civil, Environmental, and Architectural Engineering, University of Padua, Via Trieste 63, 35100 Padova, Italy
2Department of Mathematics, University of Padua, Via Trieste 63, 35100 Padova, Italy

Received 9 April 2013; Revised 18 July 2013; Accepted 1 August 2013

Academic Editor: D. R. Sahu

Copyright © 2013 L. Bergamaschi and A. Martínez. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We propose a parallel preconditioner for the Newton method in the computation of the leftmost eigenpairs of large and sparse symmetric positive definite matrices. A sequence of preconditioners, starting from an enhanced approximate inverse RFSAI (Bergamaschi and Martínez, 2012) and enriched by a BFGS-like update formula, is proposed to accelerate the preconditioned conjugate gradient solution of the linearized Newton system for $Au = q(u)u$, $q(u)$ being the Rayleigh quotient. In a previous work (Bergamaschi and Martínez, 2013) the sequence of preconditioned Jacobians is proven to remain close to the identity matrix if the initial preconditioned Jacobian is so. Numerical results on matrices arising from various realistic problems with size up to 1.5 million unknowns account for the efficiency and the scalability of the proposed low-rank update of the RFSAI preconditioner. The overall RFSAI-BFGS preconditioned Newton algorithm has shown efficiency comparable to that of a well-established eigenvalue solver on all the test problems.

1. Introduction

The computation of the leftmost eigenpairs of a symmetric positive definite (SPD) matrix is a common task in many scientific applications. Typical examples are offered by the vibrational analysis of mechanical structures [1], the spectral superposition approach for the solution of large sets of 1st order linear differential equations [2], and the approximation of the generalized inverse of the graph Laplacian [3], to mention a few.

We denote by $0 < \lambda_1 \le \lambda_2 \le \cdots \le \lambda_n$ the eigenvalues of $A$ and by $v_1, v_2, \ldots, v_n$ the corresponding (normalized) eigenvectors.

In this paper, we propose to use the Newton method on the unit sphere [4, 5], or Newton-Grassmann method, which constructs a sequence of vectors $\{u_k\}$ by solving the linear systems
$$J_k\, s = -r_k, \qquad J_k = (I - u_k u_k^T)(A - \theta_k I)(I - u_k u_k^T), \qquad (3)$$
where $\theta_k = u_k^T A u_k$ is the Rayleigh quotient and $r_k = A u_k - \theta_k u_k$, ensuring that the correction $s$ be orthogonal to $u_k$. Then the next approximation is set as $u_{k+1} = w/\|w\|$, where $w = u_k + s$. Linear system (3) is shown to be better conditioned than the corresponding unprojected system with matrix $A - \theta_k I$. For SPD matrices, this scheme is shown to converge cubically to the wanted eigenvector, provided that system (3) is solved exactly. The same linear system represents the correction equation in the well-known Jacobi-Davidson method [6], which in its turn can be viewed as an accelerated inexact Newton method [7]. When $A$ is SPD and the leftmost eigenpairs are being sought, it has been proved in [8] that the preconditioned conjugate gradient (PCG) method can be employed in the solution of the correction equation.
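To make the scheme concrete, the following sketch (illustrative Python/NumPy, not the authors' code; function and variable names are our own) performs one inexact Newton-Grassmann step by applying the projected operator of (3) matrix-free inside an unpreconditioned CG solve:

```python
# A minimal sketch of one Newton-Grassmann step: the projected operator of (3)
# is applied matrix-free inside an (inexact) CG solve of the correction equation.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def newton_grassmann_step(A, u, maxiter=100):
    u = u / np.linalg.norm(u)
    theta = u @ (A @ u)                   # Rayleigh quotient theta_k
    r = A @ u - theta * u                 # eigenresidual r_k (orthogonal to u)
    n = u.size

    def J_matvec(s):
        # J s = (I - u u^T)(A - theta I)(I - u u^T) s
        s = s - u * (u @ s)
        w = A @ s - theta * s
        return w - u * (u @ w)

    J = LinearOperator((n, n), matvec=J_matvec, dtype=float)
    s, info = cg(J, -r, maxiter=maxiter)  # inexact solve of (3)
    s -= u * (u @ s)                      # keep the correction orthogonal to u
    w = u + s
    return w / np.linalg.norm(w)          # next unit-norm approximation
```

Note that, consistently with Remark 1 below, CG can safely be applied to the singular operator $J_k$: the right-hand side $-r_k$ is orthogonal to $u_k$ and $J_k$ maps that subspace into itself.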

Starting from the findings in [9–11], the main contribution of this paper is the development of a sequence of preconditioners for the PCG solution of the Newton correction equation (3), based on the BFGS update of a given parallel initial preconditioner for the coefficient matrix $A$. A similar approach has been used in [12], where a rank-two modification of a given preconditioner is used to accelerate MINRES in the framework of the inexact Rayleigh quotient iteration. Updating (cheaply) a given preconditioner to obtain a new preconditioner for a slightly changed linear system is also common in optimization problems; see, for example, [13–15].

The initial Newton vector is obtained after a small number of iterations of a conjugate gradient procedure for the minimization of the Rayleigh quotient (DACG, [16]). As the initial preconditioner we choose RFSAI [17], which is built by recursively applying the FSAI (factorized sparse approximate inverse) preconditioner developed in [18]. We elected a factorized approximate inverse preconditioner (AIP) since it is more naturally parallelizable than preconditioners based on ILU factorizations. Moreover, factorized variants of AIP provide better approximations to $A^{-1}$ for the same amount of storage than nonfactorized ones, because the product of their factors can express a matrix with more nonzeros than the total number stored in the factors [19]. The RFSAI formulation enhances this characteristic, since the final preconditioner is implicitly built as a product of four or more triangular factors.

The combined DACG-Newton algorithm is used in the approximation of eigenpairs of a number of matrices arising from various realistic applications, with size up to roughly 1.5 million and tens of millions of nonzeros (see Table 1). Numerical results show that, in the solution of the correction equation, the PCG method preconditioned by BFGS displays much faster convergence than the same method when the preconditioner is kept fixed during the Newton process, in every test case. Moreover, the proposed approach is shown to outperform the pure DACG method.

2. BFGS Sequence of Preconditioners

2.1. Choice of the Initial Preconditioner

Following the developments in [17], we propose an implicit enlargement of the sparsity pattern using a banded target matrix $B$; the lower factor of the FSAI preconditioner is obtained by minimizing $\|B - GL\|_F$ (with $L$ the exact Cholesky factor of $A$, which never needs to be formed) over the set of matrices $G$ having a fixed sparsity pattern. Denoting by $G_{\mathrm{out}}$ the result of this minimization, we compute explicitly the preconditioned matrix $A_1 = G_{\mathrm{out}} A G_{\mathrm{out}}^T$ and then evaluate a second FSAI factor $G_{\mathrm{in}}$ for $A_1$. Thus, the final preconditioner can be written as the product $P = G_{\mathrm{out}}^T G_{\mathrm{in}}^T G_{\mathrm{in}} G_{\mathrm{out}}$. This RFSAI (recursive FSAI) procedure can be iterated a number of times to yield a preconditioner written as a product of several triangular factors. This preconditioner has shown its efficiency in the acceleration of PCG on realistic geomechanical problems of very large size [17].

We denote by FSAIout the procedure which computes the factor $G_{\mathrm{out}}$ by minimizing $\|B - GL\|_F$, where $B$ is an arbitrary banded matrix. FSAIout depends on the nband parameter (the bandwidth of $B$) in addition to the usual FSAI parameters: prefiltration threshold, power of $A$ defining the sparsity pattern, and postfiltration parameter.

The second preconditioner factor, $G_{\mathrm{in}}$, is the result of the FSAI procedure applied to the whole product matrix $A_1 = G_{\mathrm{out}} A G_{\mathrm{out}}^T$, with its own prefiltration, pattern, and postfiltration parameters. The steps to obtain the final RFSAI preconditioner are summarized in Algorithm 1.

Algorithm 1: RFSAI computation.
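For concreteness, the following dense sketch shows the basic FSAI kernel of [18] that RFSAI applies recursively; it is illustrative Python with the identity target (the banded-target variant of [17] generalizes the right-hand sides), and the function name and pattern format are our assumptions:

```python
import numpy as np

def fsai_factor(A, pattern):
    """Dense FSAI sketch: pattern[i] is the sorted list of column indices
    j <= i (with i included) allowed in row i of the lower factor G."""
    n = A.shape[0]
    G = np.zeros((n, n))
    for i in range(n):
        idx = pattern[i]
        k = idx.index(i)                            # position of the diagonal entry
        e = np.zeros(len(idx)); e[k] = 1.0
        y = np.linalg.solve(A[np.ix_(idx, idx)], e) # small SPD system per row
        G[i, idx] = y / np.sqrt(y[k])               # scaling: diag(G A G^T) = I
    return G                                        # G^T G approximates A^{-1}
```

As a quick sanity check, with pattern[i] = list(range(i + 1)) (a full lower triangle) the product $G A G^T$ reproduces the identity up to rounding.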

Following the idea described in [10], we propose a sequence of preconditioners for the Newton systems based on the BFGS rank-two update. To precondition the initial Newton system we choose a projected RFSAI preconditioner,
$$B_0 = (I - u_0 u_0^T)\, P\, (I - u_0 u_0^T),$$
where $P$ is the RFSAI approximation of $A^{-1}$ and $u_0$ is the initial eigenvector guess.

2.2. Update of Initial Preconditioner by BFGS-Like Rank-Two Corrections

A sequence of projected preconditioners for the subsequent linear systems may be defined by the BFGS-like recursion
$$B_{k+1} = \left(I - \frac{s_k y_k^T}{y_k^T s_k}\right) B_k \left(I - \frac{y_k s_k^T}{y_k^T s_k}\right) + \frac{s_k s_k^T}{y_k^T s_k}, \qquad (6)$$
where $s_k$ is the solution of the previous correction equation and $y_k = r_{k+1} - r_k$; each $B_{k+1}$ is then projected onto the subspace orthogonal to $u_{k+1}$, consistently with $B_0$. In view of the cubic convergence of the Newton process ($\|r_{k+1}\| \ll \|r_k\|$), we used the residual $-r_k$ instead of $y_k$.

Theorem 3 of Section 3 will state that the preconditioner $B_{k+1}$ defined in (6) is SPD (in the appropriate subspace) if $B_k$ is so.

3. Theoretical Analysis of the Preconditioner

3.1. Finding the Smallest Eigenpair

We recall in this section the main theoretical results regarding the sequence of preconditioners previously defined. Differently from the classical papers which study BFGS convergence properties, here our Jacobian matrix $J_k$ is singular whatever $u_k$ is; in particular, it is singular when $u_k$ is equal to the exact eigenvector. The theoretical analysis of the “goodness” of the preconditioner will therefore be completely different from that proposed, for example, in [10, 20], though it obtains similar results. In the following developments we will indicate by $v$ the exact eigenvector corresponding to the smallest exact eigenvalue $\lambda$. The error vector at step $k$ is denoted by $e_k = u_k - v$, while the error in the eigenvalue approximation is $\varepsilon_k = \theta_k - \lambda$ (> 0). It is easily proved that there is a constant $C$ independent of $k$ such that $\varepsilon_k \le C \|e_k\|^2$.

Remark 1. At first sight the Jacobian matrix $J_k$ in the correction equation is singular, but this does not matter, since the PCG algorithm is run within the subspace of vectors orthogonal to $u_k$ (in fact, the right-hand side $-r_k$ also belongs to this subspace). Thus, the notions of positive definiteness, eigenvalue distribution, condition number, norms, and so forth apply as usual, but with respect to matrices restricted to this subspace.

The following lemma will bound the extremal eigenvalues of $J_k$ in the subspace orthogonal to $u_k$.

Lemma 2. There is a positive number $\delta_0$ such that if $\|e_k\| \le \delta_0$ then $J_k$ is SPD in the subspace orthogonal to $u_k$. Moreover, there are constants $0 < c_1 \le c_2$, depending only on the spectrum of $A$, such that $c_1 \le x^T J_k x \le c_2$ for every unit norm vector $x$ orthogonal to $u_k$.

Proof. See [21].

The previous lemma allows us to prove that the preconditioner $B_{k+1}$ defined in (6) is SPD, as stated in the following theorem.

Theorem 3. If the correction equation is solved exactly, then any matrix defined by (6) is SPD, and hence the projected preconditioner $B_{k+1}$ is SPD in the subspace orthogonal to $u_{k+1}$.

Proof. See [21].

Let us define the difference between the preconditioned Jacobian and the identity matrix as
$$E_k = J_k^{1/2} B_k J_k^{1/2} - I,$$
which has the same spectrum as $B_k J_k - I$. Since by definition $J_k u_k = 0$, $u_k$ is the eigenvector of $J_k$ corresponding to the zero eigenvalue. Hence, since also $B_k u_k = 0$, the difference $E_k$ can be regarded as an operator on the subspace orthogonal to $u_k$. The following technical lemma will bound the norm of $B_{k+1}$ in terms of that of $B_k$. $B_k$ being SPD, we can define its norm in the space orthogonal to $u_k$ as
$$\|B_k\| = \max_{\|x\| = 1,\; x \perp u_k} x^T B_k x.$$

Lemma 4. There is a positive number $\delta_1$ such that if $\|e_k\| \le \delta_1$ then $\|B_{k+1}\| \le \|B_k\|\,(1 + c\,\|e_k\|)$ for a suitable constant $c$.

Proof. See [21].

The next lemma will relate the norms of the difference $J_{k+1} - J_k$ of two consecutive Jacobians and of the error vector $e_k$.

Lemma 5. There exists a positive number $\delta_2$ s.t. if $\|e_k\| \le \delta_2$ then $\|J_{k+1} - J_k\| \le c_J \|e_k\|$ for a suitable constant $c_J$.

Proof. See [21].

Before stating Theorem 7 we need one last preliminary result: also the difference between the square roots of two consecutive Jacobians is bounded in terms of the norm of the error vector.

Lemma 6. Let $\Delta_k = J_{k+1}^{1/2} - J_k^{1/2}$. Then there is a positive number $\delta_3$ s.t. if $\|e_k\| \le \delta_3$ then $\|\Delta_k\| \le c_\Delta \|e_k\|$ for a suitable constant $c_\Delta$.

Proof. See [21].

The following theorem will state the main theoretical result of this section: the so-called bounded deterioration [22] of the preconditioner at step $k+1$ with respect to that of step $k$. In other words, it can be proved that the distance of the preconditioned matrix from the identity matrix at step $k+1$ is less than or equal to that at step $k$ plus a constant that may be as small as desired, depending on the closeness of $u_k$ to the exact eigenvector. We also report the proof of this theorem, which is taken from [21].

Theorem 7. Let $u_k$ be such that $B_k$ is SPD in the subspace orthogonal to $u_k$; there is a positive number $\delta_4$ s.t. if $\|e_k\| \le \delta_4$ then
$$\|E_{k+1}\| \le \|E_k\| + c\,\|e_k\|$$
for a suitable constant $c$.

Proof. After defining $w_k = J_k^{1/2} s_k$, we can write
$$J_k^{1/2} B_{k+1} J_k^{1/2} = P_k \bigl(J_k^{1/2} B_k J_k^{1/2}\bigr) P_k + \frac{w_k w_k^T}{w_k^T w_k},$$
where we set $P_k = I - \dfrac{w_k w_k^T}{w_k^T w_k}$ and used the fact that, for an exact solve, $y_k = -r_k = J_k s_k$; $P_k$ is an orthogonal projector since $P_k = P_k^T = P_k^2$. It follows that
$$J_k^{1/2} B_{k+1} J_k^{1/2} - I = P_k E_k P_k.$$
To bound $E_{k+1} = J_{k+1}^{1/2} B_{k+1} J_{k+1}^{1/2} - I$, we will use Lemmas 4 and 6 as follows: writing $J_{k+1}^{1/2} = J_k^{1/2} + \Delta_k$ yields
$$E_{k+1} = P_k E_k P_k + \Delta_k B_{k+1} J_k^{1/2} + J_k^{1/2} B_{k+1} \Delta_k + \Delta_k B_{k+1} \Delta_k.$$
Now taking norms, and using $\|P_k\| \le 1$ together with the boundedness of $\|J_k^{1/2}\|$ (Lemma 2) and the bounds $\|\Delta_k\| \le c_\Delta \|e_k\|$ (Lemma 6) and $\|B_{k+1}\| \le \|B_k\|(1 + c\|e_k\|)$ (Lemma 4), we finally have
$$\|E_{k+1}\| \le \|E_k\| + \tilde{c}\,\|e_k\|.$$
Setting $c = \tilde{c}$ completes the proof.

3.2. Computing Several Eigenpairs

When seeking an eigenpair different from the first one, say $(\lambda_j, v_j)$, the Jacobian matrix changes as
$$J_k = (I - Q Q^T)(A - \theta_k I)(I - Q Q^T),$$
where $Q = [v_1, \ldots, v_{j-1}, u_k]$ is the matrix whose first $j-1$ columns are the previously computed eigenvectors. Analogously, the preconditioner must also be chosen orthogonal to the columns of $Q$, as
$$B_k = (I - Q Q^T)\,\widehat{B}_k\,(I - Q Q^T).$$
The theoretical analysis developed in Section 3.1 applies with small technical variants also in this case, since it is readily proved that $J_k Q = B_k Q = 0$. The most significant changes regard the definitions of $e_k$ and $\varepsilon_k$ and the statement of Lemma 2 (and the proof of Lemma 4 that uses its results); namely, the lower bound for the smallest eigenvalue of $J_k$, which in the general case must hold for every unit norm vector $x$ such that $Q^T x = 0$.

4. Implementation

4.1. Choosing an Initial Eigenvector Guess

As mentioned in Section 1, another important issue for the efficiency of the Newton approach to eigenvalue computation is the appropriate choice of the initial guess. We propose here to perform some preliminary iterations using the DACG eigensolver [16, 23, 24], which is based on the preconditioned (nonlinear) conjugate gradient minimization of the Rayleigh quotient. This method has proven very robust, and not particularly sensitive to the initial vector, in the computation of a few eigenpairs of large SPD matrices. Recently, in [25], the DACG method has been compared with the Jacobi-Davidson method in parallel environments, showing similar performance when seeking a small number of eigenpairs.
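As an illustration of this preliminary phase, here is a hedged sketch of a DACG-like iteration: a preconditioned nonlinear CG minimization of the Rayleigh quotient with an exact line search performed in span{u, p}. The beta formula (Fletcher-Reeves) and all names are our simplifying assumptions and may differ in detail from the authors' DACG [16].

```python
import numpy as np
from scipy.linalg import eigh

def dacg_sketch(A, apply_M, u, n_iter=50):
    u = u / np.linalg.norm(u)
    g_old = h_old = p = None
    for _ in range(n_iter):
        q = u @ (A @ u)                      # Rayleigh quotient (unit-norm u)
        g = 2.0 * (A @ u - q * u)            # gradient of q on the unit sphere
        h = apply_M(g)                       # preconditioned gradient
        p = -h if p is None else -h + (g @ h) / (g_old @ h_old) * p
        g_old, h_old = g, h
        V = np.column_stack([u, p])          # exact line search in span{u, p}
        w, Z = eigh(V.T @ (A @ V), V.T @ V)  # 2x2 generalized eigenproblem
        u = V @ Z[:, 0]                      # Ritz vector of smallest Ritz value
        u = u / np.linalg.norm(u)
    return u @ (A @ u), u                    # eigenvalue estimate, eigenvector
```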

4.2. Implementation of the BFGS Preconditioner Update

At a certain nonlinear iteration level $k$, we need to compute $z = B_k t$, where $t$ is the residual of the linear system at the current PCG iteration. Let us suppose we compute an initial preconditioner $P \approx A^{-1}$. Then, at the initial nonlinear iteration $k = 0$, we simply have $B_0 = P$ (projected as described in Section 2.1). At step $k$ the preconditioner is defined recursively by (6), projected as described in Section 3.2. To compute the vector $z = B_k t$, we first observe that $t$ is orthogonal to $u_k$, so there is no need to apply the rightmost projection. Application of the preconditioner to the vector $t$ can be performed at the price of $2k$ dot products and $2k$ daxpys, as described in Algorithm 2. The scalar products $y_j^T s_j$, which appear in the denominators of (6), can be computed once and for all before starting the solution of the $k$th linear system. Last, the obtained vector must be orthogonalized against the columns of $V$, the matrix of the previously computed eigenvectors, by a classical Gram-Schmidt procedure.

Algorithm 2: Computation of $z = B_k t$ for the BFGS preconditioner.
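As a concrete illustration of Algorithm 2, the sketch below applies the BFGS-updated preconditioner of (6) through the standard L-BFGS two-loop recursion, realizing the $2k$ dot products and $2k$ daxpys mentioned above; the list-based storage, the callable for the initial RFSAI application, and all names are our assumptions, not the authors' Fortran implementation.

```python
# The paper takes y_j = -r_j (see Section 2.2); here Y holds whatever was stored.
import numpy as np

def apply_bfgs_preconditioner(apply_B0, S, Y, rho, t, U):
    """z = B_k t.  S, Y: lists of stored s_j, y_j; rho[j] = 1/(y_j^T s_j),
    precomputed once per nonlinear iteration; apply_B0 applies the initial
    (RFSAI) preconditioner; U holds the previously computed eigenvectors."""
    k = len(S)
    alpha = np.empty(k)
    q = t.copy()
    for j in range(k - 1, -1, -1):        # first loop: k dots, k daxpys
        alpha[j] = rho[j] * (S[j] @ q)
        q -= alpha[j] * Y[j]
    z = apply_B0(q)                       # initial preconditioner
    for j in range(k):                    # second loop: k dots, k daxpys
        beta = rho[j] * (Y[j] @ z)
        z += (alpha[j] - beta) * S[j]
    for col in U.T:                       # classical Gram-Schmidt vs. eigenvectors
        z -= (col @ z) * col
    return z
```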

4.3. PCG Solution of the Correction Equation

As a Krylov subspace solver for the correction equation, we chose the preconditioned conjugate gradient (PCG) method, since the Jacobian has been shown to be SPD in the subspace orthogonal to $u_k$. Regarding the implementation of PCG, we mainly refer to [8], where the author shows that it is possible to solve the linear system in the subspace orthogonal to $u_k$, and hence the projection step needed in the application of $B_k$ can be skipped. Moreover, we adopted the exit strategy for the linear system solution described in that paper, which allows for stopping the PCG iteration, in addition to the classical exit test based on a tolerance on the relative residual and on the maximum number of iterations, whenever the eigenresidual of the current solution is small enough or when the decrease of the eigenresidual is slower than the decrease of the linear system residual, because in this case further iterating does not improve the accuracy of the eigenvector. Note that this dynamic exit strategy implicitly defines an inexact Newton method, since the correction equation is not solved “exactly”, that is, up to machine precision.
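A hedged sketch of this dynamic exit test (the variable names and the rate comparison are our reading of the criterion; see [8] for the precise formulation):

```python
# Stop PCG when the eigenresidual meets the outer tolerance, or when it starts
# to decrease more slowly than the linear system residual, since further inner
# iterations then no longer improve the eigenvector.
def dynamic_exit(eig_res, eig_res_prev, lin_res, lin_res_prev, tol_eig):
    if eig_res <= tol_eig:                       # eigenpair accurate enough
        return True
    eig_rate = eig_res / max(eig_res_prev, 1e-300)
    lin_rate = lin_res / max(lin_res_prev, 1e-300)
    return eig_rate > lin_rate                   # eigenresidual stagnating
```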

We have implemented the PCG method as described in Algorithm 5.1 of [8], with the obvious difference in the application of the preconditioner, which in this case is accomplished as described in Algorithm 2.

4.4. Implementation of the DACG-Newton Method

The BFGS preconditioner defined in Algorithm 2 suffers from two main drawbacks, namely, the increasing memory cost for storing the vectors $s_j$ and $y_j$ and the increasing cost of the preconditioner application with the iteration index $k$. Note that these drawbacks are common to many iterative schemes, such as sparse (limited memory) Broyden implementations [26], GMRES [27], and the Arnoldi method for eigenvalue problems [28]. If the number of nonlinear iterations is high, the application of the BFGS update may be too costly as compared with the expected reduction in the iteration number. To this aim we define $k_{\max}$, the maximum number of rank-two corrections we allow. When the nonlinear iteration counter $k$ is larger than $k_{\max}$, the oldest vectors $s_j$, $y_j$ are substituted with the last computed $s_k$, $y_k$. The vectors are stored in a matrix with $n$ rows and $2 k_{\max}$ columns.
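A minimal bookkeeping sketch of this limited-memory strategy (the list-based storage, names, and drop-oldest policy are our assumptions):

```python
# Keep at most k_max (s_j, y_j) pairs and the precomputed scalars
# rho_j = 1 / (y_j^T s_j) used by Algorithm 2.
def store_pair(S, Y, rho, s, y, k_max):
    if len(S) == k_max:                 # buffer full:
        S.pop(0); Y.pop(0); rho.pop(0)  # discard the oldest correction
    S.append(s); Y.append(y); rho.append(1.0 / (y @ s))
```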

The implementation of our DACG-Newton method for computing the leftmost eigenpairs of large SPD matrices is described in Algorithm 3.

Algorithm 3: DACG-Newton algorithm.

4.5. Parallel Implementation

We have developed a parallel code which implements the construction of the RFSAI preconditioner and its application, together with the BFGS updates, inside a parallel PCG solver. The resulting program is written in Fortran 90 and exploits the MPI library for exchanging data among the processors. We use a block row distribution of all matrices, that is, with complete rows assigned to different processors. All the matrices are stored in static data structures in CSR format.
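To illustrate the block-row layout, here is a toy distributed matrix-vector product in Python with mpi4py (the paper's code is Fortran 90/MPI, and the optimized kernel of [32] is far more sophisticated; this naive Allgatherv version only demonstrates the data distribution):

```python
import numpy as np
from mpi4py import MPI

def distributed_matvec(A_local, x_local, counts, comm):
    """Return this rank's rows of y = A x.  A_local: local block of rows
    (CSR, full column dimension); x_local: locally owned slice of x;
    counts: number of rows owned by each rank."""
    displs = np.zeros(len(counts), dtype=np.int64)
    displs[1:] = np.cumsum(counts)[:-1]
    x = np.empty(int(np.sum(counts)))
    # assemble the full vector on every rank (no communication overlap)
    comm.Allgatherv(x_local, [x, counts, displs, MPI.DOUBLE])
    return A_local @ x                    # local sparse matvec on owned rows
```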

Parallelization of the FSAI preconditioner, which is the basis of the parallel RFSAI construction, has been performed and tested, for example, in [29–31], where prefiltration and postfiltration have been implemented together with an a priori sparsity pattern based on the nonzeros of a (small) power of $A$.

The code also makes use of an optimized parallel matrix-vector product, developed in [32], which has shown its effectiveness on up to 1024 processors.

5. Numerical Results

5.1. Test Problems

We report the results of our experiments with the RFSAI preconditioner in the solution of a set of problems of large size.

The test matrices are realistic examples arising from 3D FE discretizations of fluid flow and geomechanical models, described in detail as follows:
(1) FLOW3D-663 arises from a tetrahedral FE discretization of an elliptic PDE describing fluid flow in porous media.
(2) EMILIA-923 arises from the regional geomechanical model of a deep hydrocarbon reservoir. It is obtained by discretizing the structural problem with tetrahedral finite elements. Due to the complex geometry of the geological formation, it was not possible to obtain a computational grid characterized by regularly shaped elements.
(3) LONG-1103 is the structural SPD block obtained from a 3D coupled consolidation problem of a geological formation, discretized with tetrahedral finite elements on a highly irregular computational grid.
(4) GEO-1438 arises from a regional geomechanical model of the sedimentary basin underlying the Venice lagoon, discretized by linear FE with randomly heterogeneous properties. The computational domain is a box, 10 km deep, consisting of regularly shaped tetrahedral finite elements.

Matrices 2 to 4 are publicly available in the University of Florida Sparse Matrix Collection at http://www.cise.ufl.edu/research/sparse/matrices/. The size and number of nonzero elements for each matrix are provided in Table 1.

Table 1: Size and number of nonzeros (nnz) of the test matrices.

All tests are performed on the IBM BlueGene/Q cluster at the CINECA Centre for HPC, equipped with IBM PowerA2 processors running at 1.6 GHz, with 10,240 nodes, 163,840 computing cores, and 164 TB of internal network RAM. The Fortran 90 code is compiled with the native IBM xlf compiler using the -O3 -qarch=qp -qtune=qp options.

5.2. Results on a Fixed Number of Processors

We now compare the performance of the DACG-Newton algorithm for various values of $k_{\max}$. We tested the proposed algorithm in the computation of the 10 smallest eigenpairs of the aforementioned large matrices arising from various realistic applications. In all the runs, unless differently specified, we selected the values of the parameters as reported in Table 2.

Table 2: Default values of parameters.

We will also compute the fill-in of the initial preconditioner, defined as the ratio between the number of nonzeros of the RFSAI factors and the number of nonzeros of $A$. In Table 3, we report the RFSAI parameters selected for the four test problems.

Table 3: RFSAI parameters used for the eigensolution of the four test problems.

We first analyze the influence of the parameter $k_{\max}$ on the convergence of the Newton-DACG method. To this aim we provide, in Table 4, the outer iteration number, the total number of matrix-vector products (MVP), and the CPU time in evaluating 10 eigenpairs of matrix EMILIA-923 for different values of $k_{\max}$. From the table, we notice that iteration number and CPU time monotonically decrease with increasing $k_{\max}$. Moreover, for the smallest values of $k_{\max}$, the iterative process reached the maximum number of iterations before fulfilling the exit test on the relative residual. Another evidence of the efficiency of the BFGS update is given in Figure 1, where we plot the relative residual norm versus the number of linear iterations in converging to eigenpair number 3 of matrix FLOW3D-663. Note that, with $k_{\max} = 0$, that is, using the constant RFSAI preconditioner, the Newton phase could not reach the prescribed accuracy within the maximum number of outer iterations.

Table 4: Influence of the parameter $k_{\max}$ on the number of iterations of the Newton phase. Problem EMILIA-923.
Figure 1: Convergence profile of the relative residual norm versus cumulative inner iterations for eigenvalue number 3, matrix FLOW3D-663.

In Table 5, we report the number of matrix-vector products and the CPU times for Newton-DACG as compared with “pure” DACG, that is, DACG run directly to the final tolerance. In every test problem DACG-Newton proves superior to DACG.

Table 5: Comparison between DACG and Newton-DACG for the four test problems. Results on problem LONG-1103 are obtained using a different value of $k_{\max}$.

In particular, for problem FLOW3D-663, the improvement provided by DACG-Newton is impressive. The DACG method, if run directly to the final tolerance, is very slow due to the very small relative separation between consecutive eigenvalues, which drives the convergence of this PCG-like solver [16, 25]. In fact, this relative separation also influences the convergence of Newton's method, being related to the condition number of the Jacobian. However, the (RFSAI-BFGS preconditioned) DACG-Newton algorithm seems less sensitive to this ill-conditioning.

5.3. Parallel Results and Scalability

We now analyze the parallel efficiency of the previously described eigensolver preconditioned by RFSAI-BFGS. We use a strong scaling measure to see how CPU times vary with the number of processors for a fixed total problem size. Denoting by $T_p$ the total CPU elapsed time expressed in seconds on $p$ processors, we define relative measures of the parallel efficiency and speedup of our code: the pseudo speedup $S_p^{(\bar p)} = \bar p\, T_{\bar p} / T_p$, computed with respect to the smallest number of processors $\bar p$ used to solve a given problem, and the corresponding efficiency $E_p^{(\bar p)} = S_p^{(\bar p)} / p$.

The results obtained with the four test matrices are presented in Tables 6, 7, 8, and 9. As a general comment, we may observe that the overall code scales well for the three largest matrices, up to 256 processors (512 for problem GEO-1438). The parallel pseudo efficiency, as expected, decreases with a growing number of processors, but it is roughly 50% for the largest number of processors employed.

Table 6: Iterations, timings, and pseudo efficiencies for problem FLOW3D-663.
Table 7: Iterations, timings, and pseudo efficiencies for problem EMILIA-923.
Table 8: Iterations, timings, and pseudo efficiencies for problem LONG-1103.
Table 9: Iterations, timings, and pseudo efficiencies for problem GEO-1438.

Regarding the FLOW3D-663 problem, which is characterized by a relatively small dimension and high sparsity (roughly 15 nonzeros per row), the parallel efficiency starts to worsen at a smaller number of processors.

6. Conclusion

We have proposed a parallel RFSAI-BFGS preconditioner for the acceleration of the PCG method in the approximate solution of the linearized Newton systems arising in the evaluation of a number of the leftmost eigenpairs of large SPD matrices. We have shown that updating an initial preconditioner (here RFSAI) by a low-rank correction based on the BFGS formula produces significant savings in the total number of iterations of the inner solver (the PCG method). Newton's algorithm, preconditioned by RFSAI-BFGS and aided by a number of initial DACG iterations to obtain a good initial eigenvector guess, has been completely parallelized and run on the IBM BlueGene/Q located at CINECA, Bologna, Italy. The scalability results are very satisfactory in view of the size and number of nonzeros of the selected problems. In particular, for the largest problem, an efficiency of 72% is obtained with 256 processors. Future research will investigate the relations between the proposed accelerated Newton method and the well-established Jacobi-Davidson method.

Acknowledgments

The work of the first author has been partially supported by the Spanish Grant MTM2010-18674. The authors also acknowledge the CINECA Iscra Award SPREAD (2012) for the availability of HPC resources and support.

References

1. K. J. Bathe, Finite Element Procedures in Engineering Analysis, Prentice-Hall, Englewood Cliffs, NJ, USA, 1982.
2. G. Gambolati, “On time integration of groundwater flow equations by spectral methods,” Water Resources Research, vol. 29, no. 4, pp. 1257–1267, 1993.
3. E. Bozzo and M. Franceschet, “Approximations of the generalized inverse of the graph Laplacian matrix,” Internet Mathematics, vol. 8, no. 4, pp. 456–481, 2012.
4. V. Simoncini and L. Eldén, “Inexact Rayleigh quotient-type methods for eigenvalue computations,” BIT Numerical Mathematics, vol. 42, no. 1, pp. 159–182, 2002.
5. M. A. Freitag and A. Spence, “Rayleigh quotient iteration and simplified Jacobi-Davidson method with preconditioned iterative solves,” Linear Algebra and Its Applications, vol. 428, no. 8-9, pp. 2049–2060, 2008.
6. G. L. G. Sleijpen and H. A. van der Vorst, “Jacobi-Davidson iteration method for linear eigenvalue problems,” SIAM Review, vol. 42, no. 2, pp. 267–293, 2000.
7. D. R. Fokkema, G. L. G. Sleijpen, and H. A. van der Vorst, “Accelerated inexact Newton schemes for large systems of nonlinear equations,” SIAM Journal on Scientific Computing, vol. 19, no. 2, pp. 657–674, 1998.
8. Y. Notay, “Combination of Jacobi-Davidson and conjugate gradients for the partial symmetric eigenproblem,” Numerical Linear Algebra with Applications, vol. 9, no. 1, pp. 21–44, 2002.
9. L. Bergamaschi, R. Bru, A. Martínez, and M. Putti, “Quasi-Newton preconditioners for the inexact Newton method,” Electronic Transactions on Numerical Analysis, vol. 23, pp. 76–87, 2006.
10. L. Bergamaschi, R. Bru, and A. Martínez, “Low-rank update of preconditioners for the inexact Newton method with SPD Jacobian,” Mathematical and Computer Modelling, vol. 54, no. 7-8, pp. 1863–1873, 2011.
11. L. Bergamaschi, R. Bru, A. Martínez, and M. Putti, “Quasi-Newton acceleration of ILU preconditioners for nonlinear two-phase flow equations in porous media,” Advances in Engineering Software, vol. 46, no. 1, pp. 63–68, 2012.
12. F. Xue and H. C. Elman, “Convergence analysis of iterative solvers in inexact Rayleigh quotient iteration,” SIAM Journal on Matrix Analysis and Applications, vol. 31, no. 3, pp. 877–899, 2009.
13. S. Bellavia, V. De Simone, D. di Serafino, and B. Morini, “Efficient preconditioner updates for shifted linear systems,” SIAM Journal on Scientific Computing, vol. 33, no. 4, pp. 1785–1809, 2011.
14. S. Bellavia, D. Bertaccini, and B. Morini, “Nonsymmetric preconditioner updates in Newton-Krylov methods for nonlinear systems,” SIAM Journal on Scientific Computing, vol. 33, no. 5, pp. 2595–2619, 2011.
15. S. Bellavia, V. De Simone, D. di Serafino, and B. Morini, “A preconditioning framework for sequences of diagonally modified linear systems arising in optimization,” SIAM Journal on Numerical Analysis, vol. 50, no. 6, pp. 3280–3302, 2012.
16. L. Bergamaschi, G. Gambolati, and G. Pini, “Asymptotic convergence of conjugate gradient methods for the partial symmetric eigenproblem,” Numerical Linear Algebra with Applications, vol. 4, no. 2, pp. 69–84, 1997.
17. L. Bergamaschi and A. Martínez, “Banded target matrices and recursive FSAI for parallel preconditioning,” Numerical Algorithms, vol. 61, no. 2, pp. 223–241, 2012.
18. L. Yu. Kolotilina and A. Yu. Yeremin, “Factorized sparse approximate inverse preconditionings. I. Theory,” SIAM Journal on Matrix Analysis and Applications, vol. 14, no. 1, pp. 45–58, 1993.
19. M. Benzi and M. Tůma, “A comparative study of sparse approximate inverse preconditioners,” Applied Numerical Mathematics, vol. 30, no. 2, pp. 305–340, 1999.
20. L. Bergamaschi, R. Bru, A. Martínez, J. Mas, and M. Putti, “Low-rank update of preconditioners for the nonlinear Richards equation,” Mathematical and Computer Modelling, vol. 57, no. 7-8, pp. 1933–1941, 2013.
21. L. Bergamaschi and A. Martínez, “Efficiently preconditioned inexact Newton methods for large symmetric eigenvalue problems,” submitted.
22. C. T. Kelley, Iterative Methods for Optimization, vol. 18 of Frontiers in Applied Mathematics, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, Pa, USA, 1999.
23. L. Bergamaschi, G. Pini, and F. Sartoretto, “Approximate inverse preconditioning in the parallel solution of sparse eigenproblems,” Numerical Linear Algebra with Applications, vol. 7, no. 3, pp. 99–116, 2000.
24. L. Bergamaschi and M. Putti, “Numerical comparison of iterative eigensolvers for large sparse symmetric positive definite matrices,” Computer Methods in Applied Mechanics and Engineering, vol. 191, no. 45, pp. 5233–5247, 2002.
25. L. Bergamaschi, A. Martínez, and G. Pini, “Parallel Rayleigh quotient optimization with FSAI-based preconditioning,” Journal of Applied Mathematics, vol. 2012, Article ID 872901, 14 pages, 2012.
26. S. G. Nash and J. Nocedal, “A numerical study of the limited memory BFGS method and the truncated-Newton method for large scale optimization,” SIAM Journal on Optimization, vol. 1, no. 3, pp. 358–372, 1991.
27. Y. Saad and M. H. Schultz, “GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems,” SIAM Journal on Scientific and Statistical Computing, vol. 7, no. 3, pp. 856–869, 1986.
28. R. B. Lehoucq and D. C. Sorensen, “Deflation techniques for an implicitly restarted Arnoldi iteration,” SIAM Journal on Matrix Analysis and Applications, vol. 17, no. 4, pp. 789–821, 1996.
29. L. Bergamaschi and Á. Martínez, “Parallel acceleration of Krylov solvers by factorized approximate inverse preconditioners,” in Proceedings of the 6th International Conference on High Performance Computing for Computational Science (VECPAR '05), M. Daydé, Ed., vol. 3402 of Lecture Notes in Computer Science, pp. 623–636, Springer, Heidelberg, Germany, 2005.
30. L. Bergamaschi, Á. Martínez, and G. Pini, “An efficient parallel MLPG method for poroelastic models,” Computer Modeling in Engineering & Sciences, vol. 49, no. 3, pp. 191–216, 2009.
31. L. Bergamaschi and A. Martínez, “Parallel inexact constraint preconditioners for saddle point problems,” in Proceedings of the 17th International Conference on Parallel Processing, E. Jeannot, R. Namyst, and J. Roman, Eds., vol. 6853 of Lecture Notes in Computer Science, pp. 78–89, Springer, Bordeaux, France, 2011.
32. A. Martínez, L. Bergamaschi, M. Caliari, and M. Vianello, “A massively parallel exponential integrator for advection-diffusion models,” Journal of Computational and Applied Mathematics, vol. 231, no. 1, pp. 82–91, 2009.