International Journal of Mathematics and Mathematical Sciences
Volume 2009 (2009), Article ID 179481, 6 pages
http://dx.doi.org/10.1155/2009/179481
Research Article

On the Relation between the AINV and the FAPINV Algorithms

Davod Khojasteh Salkuyeh and Hadi Roohani

Department of Mathematics, University of Mohaghegh Ardabili, P.O. Box 179, Ardabil, Iran

Received 26 July 2009; Accepted 3 November 2009

Academic Editor: Victor Nistor

Copyright © 2009 Davod Khojasteh Salkuyeh and Hadi Roohani. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The approximate inverse (AINV) and the factored approximate inverse (FAPINV) are two known algorithms in the field of preconditioning of linear systems of equations. Both of these algorithms compute a sparse approximate inverse of a matrix $A$ in the factored form and are based on computing two sets of vectors which are $A$-biconjugate. The AINV algorithm computes the inverse factors $W$ and $Z$ of a matrix independently of each other, as opposed to the FAPINV algorithm, where the computations of the inverse factors are coupled. In this paper, we show that, without any dropping, removing this dependence of the computations of the inverse factors in the FAPINV algorithm results in the AINV algorithm.

1. Introduction

Consider the linear system of equations
$$ Ax = b, \tag{1.1} $$
where the coefficient matrix $A \in \mathbb{R}^{n \times n}$ is nonsingular, large, and sparse, and $b \in \mathbb{R}^{n}$. Such linear systems are often solved by Krylov subspace methods such as GMRES (see Saad and Schultz [1], Saad [2]) and BiCGSTAB (see van der Vorst [3], Saad [2]) in conjunction with a suitable preconditioner. A preconditioner is a matrix $M$ such that $Mv$ can be computed cheaply for a given vector $v$ and such that the system $MAx = Mb$ is easier to solve than (1.1). Usually, to this end one intends to find $M$ such that $MA \approx I$, where $I$ is the identity matrix. There are various methods to compute such an appropriate matrix $M$ (see Benzi [4], Benzi and Tuma [5], Saad [2]). The factored approximate inverse (FAPINV) (Lee and Zhang [6, 7], Luo [8–10], Zhang [11, 12]) and the approximate inverse (AINV) (see Benzi and Tuma [13, 14]) are among the algorithms for computing an approximate inverse of $A$ in the factored form. In fact, both of these methods compute a unit lower triangular matrix $W$, a unit upper triangular matrix $Z$, and a diagonal matrix $D$ such that $WAZ \approx D$, that is, $A^{-1} \approx Z D^{-1} W$. In this case, the matrix $M = Z D^{-1} W$ may be used as a preconditioner for (1.1). It is well known that the AINV algorithm is free from breakdown for the class of $H$-matrices [13].
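To illustrate how such a factored approximate inverse is used in practice, the following sketch (illustrative code, not taken from the paper; the function name make_preconditioner and the crude diagonal stand-in for $W$, $Z$, $D$ are ours) applies $M = Z D^{-1} W$ as a preconditioner inside SciPy's GMRES. Applying $M$ to a vector amounts to one multiplication by $W$, one diagonal scaling, and one multiplication by $Z$.

```python
# Sketch: applying a factored approximate inverse M = Z * inv(D) * W as a
# preconditioner for GMRES. W, Z are sparse triangular factors and d holds the
# diagonal of D; in practice they would come from an AINV/FAPINV-type routine.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, gmres

def make_preconditioner(W, Z, d):
    """Return M as a LinearOperator with matvec v -> Z (D^{-1} (W v))."""
    n = W.shape[0]
    def matvec(v):
        return Z @ ((W @ v) / d)   # three cheap sparse/vector operations
    return LinearOperator((n, n), matvec=matvec)

if __name__ == "__main__":
    # A small, diagonally dominant test matrix (stands in for a large sparse A).
    n = 100
    A = sp.diags([-1.0, 4.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)

    # For illustration only: a crude "approximate inverse" built from the
    # diagonal (W = Z = I, D = diag(A)); a real W, Z, D would be produced by
    # the AINV or FAPINV algorithms discussed in this paper.
    W = sp.identity(n, format="csr")
    Z = sp.identity(n, format="csr")
    d = A.diagonal()

    M = make_preconditioner(W, Z, d)
    x, info = gmres(A, b, M=M)
    print("converged:", info == 0, " residual:", np.linalg.norm(b - A @ x))
```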

The main idea of the FAPINV algorithm was first introduced by Luo (see Luo [8–10]). The algorithm was then investigated further by Zhang in [12]. Since in this procedure the factorization is performed in the backward direction, we call it the BFAPINV (backward FAPINV) algorithm. In [11], Zhang proposed an alternative procedure which computes the factorization in the forward direction, and we call it the FFAPINV (forward FAPINV) algorithm. In [7], Lee and Zhang showed that the BFAPINV algorithm is free from breakdown for $H$-matrices. It can be easily seen that the FFAPINV algorithm is free from breakdown for $H$-matrices as well. In the left-looking AINV algorithm (see Benzi and Tuma [13, 14]), the inverse factors are computed quite independently of each other. In contrast, in the FFAPINV algorithm, the inverse factors $W$ and $Z$ are not computed completely independently of each other. In this paper, starting from the FFAPINV algorithm without any dropping, we obtain a procedure which bypasses this dependence. Then we show that this procedure is equivalent to the left-looking AINV algorithm. In the same way, one can see that the right-looking AINV algorithm (see Benzi and Tuma [13]) can be obtained from the BFAPINV algorithm.

In Section 2, we give a brief description of the FFAPINV algorithm. The main results are given in Section 3. Section 4 is devoted to some concluding remarks.

2. A Review of the FFAPINV Algorithm

Let $W$ and $Z$ be the inverse factors of $A$, that is,
$$ A^{-1} = Z D^{-1} W, \tag{2.1} $$
where $D = \operatorname{diag}(d_1, \ldots, d_n)$,
$$ W = \begin{bmatrix} w_1 \\ \vdots \\ w_n \end{bmatrix}, \qquad Z = \begin{bmatrix} z_1, \ldots, z_n \end{bmatrix}, $$
in which the $w_i$'s and $z_i$'s are the rows and columns of $W$ and $Z$, respectively. Using (2.1) we obtain
$$ WAZ = D. \tag{2.2} $$
From the structure of the matrices $W$ and $Z$ ($W$ is unit lower triangular and $Z$ is unit upper triangular), we have
$$ w_i = e_i^{T} + \sum_{j=1}^{i-1} \alpha_{ij} e_j^{T}, \qquad z_i = e_i + \sum_{j=1}^{i-1} \beta_{ij} e_j, \tag{2.3} $$
for some $\alpha_{ij}$'s and $\beta_{ij}$'s, where $e_i$ is the $i$th column of the identity matrix.

First of all, we see that
$$ z_1 = e_1, \qquad w_1 = e_1^{T}, \qquad d_1 = w_1 A z_1 = a_{11}. $$
Now let $i$ be fixed. Then from (2.2) and (2.3) and for $j = 1, \ldots, i-1$, we have
$$ w_j A z_i = 0, $$
and writing $z_i$ in terms of $e_i$ and the previously computed vectors $z_1, \ldots, z_{i-1}$ gives the coefficient of $z_j$ as $-\,w_j a_i / d_j$, where $a_i$ is the $i$th column of $A$. Therefore
$$ z_i = e_i - \sum_{j=1}^{i-1} \frac{w_j a_i}{d_j}\, z_j. $$
In the same manner,
$$ w_i = e_i^{T} - \sum_{j=1}^{i-1} \frac{a^{(i)} z_j}{d_j}\, w_j, $$
where $a^{(i)}$ is the $i$th row of $A$. Putting these results together gives Algorithm 1 for computing the inverse factors of $A$.

Algorithm 1: The FFAPINV algorithm without dropping.
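As a concrete illustration of Algorithm 1, the following dense Python sketch (not the authors' code; the function name ffapinv and the dense-matrix setting are ours, chosen for exposition) computes the factors via the recurrences above: each $z_i$ is built from the previously computed rows $w_j$ and each $w_i$ from the previously computed columns $z_j$, so the two factors are computed together.

```python
# Sketch of the FFAPINV process without dropping (dense arithmetic, for
# exposition only): W is built row by row, Z column by column, and the two
# computations are coupled through the previously computed rows/columns.
import numpy as np

def ffapinv(A):
    """Return (W, Z, d) with W unit lower triangular (rows w_i),
    Z unit upper triangular (columns z_i), and d the diagonal of D = W A Z."""
    n = A.shape[0]
    W = np.zeros((n, n))  # rows w_i
    Z = np.zeros((n, n))  # columns z_i
    d = np.zeros(n)
    for i in range(n):
        w = np.zeros(n); w[i] = 1.0     # w_i initialized to e_i^T
        z = np.zeros(n); z[i] = 1.0     # z_i initialized to e_i
        for j in range(i):
            z -= (W[j] @ A[:, i] / d[j]) * Z[:, j]   # uses rows w_j and column a_i of A
            w -= (A[i, :] @ Z[:, j] / d[j]) * W[j]   # uses columns z_j and row a^(i) of A
        d[i] = w @ A[:, i]              # d_i = w_i a_i (equivalently a^(i) z_i)
        W[i] = w
        Z[:, i] = z
    return W, Z, d

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((6, 6)) + 6 * np.eye(6)   # safely nonsingular example
    W, Z, d = ffapinv(A)
    # Biconjugation check: W A Z should be (numerically) diagonal.
    print(np.allclose(W @ A @ Z, np.diag(d)))
```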

Some observations can be made here. It can be easily seen that (see, e.g., Salkuyeh [15])
$$ d_i = w_i a_i = a^{(i)} z_i. \tag{2.9} $$
In this algorithm, the computations of the inverse factors $W$ and $Z$ are tightly coupled: each $z_i$ requires the previously computed $w_j$'s, and each $w_i$ requires the previously computed $z_j$'s. The algorithm needs the columns of the strictly upper triangular part of $A$ for computing $Z$ and the strictly lower triangular part of $A$ for computing $W$. A sparse approximate inverse of $A$ in the factored form is computed by inserting some dropping strategies into Algorithm 1.
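This identity can be checked numerically. The following small numpy experiment (illustrative only; the diagonally dominant test matrix and the unpivoted elimination are our own choices, under the notation $W$ with rows $w_i$, $Z$ with columns $z_i$, and $D = WAZ$) computes the exact inverse factors of a small matrix and verifies $WAZ = D$ and $d_i = w_i a_i = a^{(i)} z_i$.

```python
# Numerical illustration of the identities above: for the exact inverse
# factors (no dropping), d_i = w_i a_i = a^(i) z_i, where a_i and a^(i) are
# the i-th column and row of A.
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n)) + 6 * np.eye(n)   # diagonally dominant: no pivoting needed

# Unpivoted LDU factorization A = L * diag(d) * Uu (Doolittle elimination).
U = A.copy()
L = np.eye(n)
for k in range(n - 1):
    mult = U[k + 1:, k] / U[k, k]
    L[k + 1:, k] = mult
    U[k + 1:] -= np.outer(mult, U[k])
d = np.diag(U).copy()
Uu = U / d[:, None]          # unit upper triangular factor

W = np.linalg.inv(L)         # unit lower triangular, rows w_i
Z = np.linalg.inv(Uu)        # unit upper triangular, columns z_i

print(np.allclose(W @ A @ Z, np.diag(d)))                     # W A Z = D
print(np.allclose([W[i] @ A[:, i] for i in range(n)], d))     # d_i = w_i a_i
print(np.allclose([A[i, :] @ Z[:, i] for i in range(n)], d))  # d_i = a^(i) z_i
```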

3. Main Results

At the beginning of this section we mention that all of the results presented in this section are valid only when we do not use any dropping. As we mentioned in the previous section, the computations of the inverse factors $W$ and $Z$ in the FFAPINV algorithm are tightly coupled. In this section, we extract from Algorithm 1 a procedure in which the computations of the inverse factors are done independently. We also show that the resulting algorithm is equivalent to the left-looking AINV algorithm.

From (2.2) we have $AZ = W^{-1} D$. Obviously, the right-hand side of the latter equation is a lower triangular matrix and its diagonal entries are $d_1, \ldots, d_n$. Therefore
$$ a^{(j)} z_i = 0, \quad j = 1, \ldots, i-1, \qquad a^{(i)} z_i = d_i. \tag{3.1} $$
Premultiplying both sides of (2.3) by $a^{(j)}$, $j = 1, \ldots, i-1$, from the left, we obtain
$$ a^{(j)} z_i = a_{ji} + \sum_{k=1}^{i-1} \beta_{ik} a_{jk}. $$
Taking into account (3.1), we obtain
$$ a_{ji} + \sum_{k=1}^{i-1} \beta_{ik} a_{jk} = 0, \quad j = 1, \ldots, i-1. $$
Therefore, writing $z_i$ in terms of the previously computed vectors, $z_i = e_i + \sum_{j=1}^{i-1} c_{ij} z_j$, and using (3.1) once more, we get
$$ c_{ij} = -\,\frac{a_{ji} + \sum_{k=1}^{j-1} c_{ik}\, a^{(j)} z_k}{d_j}, \quad j = 1, \ldots, i-1. $$
Hence we can state a procedure for computing the inverse factor $Z$ without any need of the inverse factor $W$ as follows:

(1) $z_1 = e_1$, $d_1 = a_{11}$
(2) For $i = 2, \ldots, n$, Do
(3)  For $j = 1, \ldots, i-1$, Do
(4)   $c_{ij} = -\big(a_{ji} + \sum_{k=1}^{j-1} c_{ik}\, a^{(j)} z_k\big)\big/ d_j$
(5)  EndDo
(6)  $z_i = e_i + \sum_{j=1}^{i-1} c_{ij} z_j$
(7)  $d_i = a^{(i)} z_i$
(8) EndDo

By some modifications this algorithm can be converted into a simpler form, avoiding extra computations. Letting $z_i^{(j)} = e_i + \sum_{k=1}^{j} c_{ik} z_k$ (so that $z_i^{(0)} = e_i$ and $z_i = z_i^{(i-1)}$), steps (3)–(7) may be written as follows:

(i) $z_i^{(0)} = e_i$
(ii) For $j = 1, \ldots, i-1$, Do
(iii)  $\alpha_j = -c_{ij} = \big(a_{ji} + \sum_{k=1}^{j-1} c_{ik}\, a^{(j)} z_k\big)\big/ d_j$
(iv)  $z_i^{(j)} = z_i^{(j-1)} - \alpha_j z_j$
(v) EndDo
(vi) $z_i = z_i^{(i-1)}$
(vii) $d_i = a^{(i)} z_i$.

Obviously, the parameter $\alpha_j$ at step (iii) of this procedure can be computed via
$$ \alpha_j = \frac{a^{(j)} z_i^{(j-1)}}{d_j}, $$
since $a^{(j)} z_i^{(j-1)} = a_{ji} + \sum_{k=1}^{j-1} c_{ik}\, a^{(j)} z_k$. We have $AZ = W^{-1} D$. This shows that the matrix $AZ$ is a lower triangular matrix. Therefore, since $Z$ is a unit upper triangular matrix, we deduce that steps (i)–(vii) use only the rows of $A$ together with the previously computed $z_j$'s and $d_j$'s, and make no reference to the factor $W$. On the other hand, from (2.9), in step (vii) of this procedure we can replace $w_i a_i$ by $a^{(i)} z_i$. Now, by using the above results, we can summarize an algorithm for computing $Z$ as in Algorithm 2.

Algorithm 2: Left-looking AINV algorithm without dropping.
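As a concrete illustration of the left-looking computation (again not the authors' code; the function name ainv_left_looking and the dense setting are ours), the following Python sketch computes $Z$ and $D$ from the rows of $A$ alone and obtains $W$ by applying the same routine to $A^{T}$:

```python
# Sketch of the left-looking AINV process without dropping (dense, for
# exposition): Z and D are computed from the rows of A only, with no
# reference to W; W is obtained by applying the same routine to A^T.
import numpy as np

def ainv_left_looking(A):
    """Return (Z, d): Z unit upper triangular (columns z_i), d = diag(W A Z)."""
    n = A.shape[0]
    Z = np.zeros((n, n))
    d = np.zeros(n)
    for i in range(n):
        z = np.zeros(n); z[i] = 1.0            # z_i initialized to e_i
        for j in range(i):
            alpha = (A[j, :] @ z) / d[j]       # uses row a^(j) and the partially updated z_i
            z -= alpha * Z[:, j]
        Z[:, i] = z
        d[i] = A[i, :] @ z                     # d_i = a^(i) z_i
    return Z, d

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((6, 6)) + 6 * np.eye(6)
    Z, d = ainv_left_looking(A)
    W = ainv_left_looking(A.T)[0].T            # W from the rows of A^T (columns of A)
    print(np.allclose(W @ A @ Z, np.diag(d)))  # the independently computed factors are A-biconjugate
```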

This algorithm is known as the left-looking AINV algorithm (see Benzi and Tuma [13, 14]). We observe that the left-looking AINV algorithm can be extracted from the FFAPINV algorithm. This algorithm computes $Z$ by working on the rows of $A$. Obviously, the factor $W$ can be computed via the same algorithm by working on the rows of $A^{T}$, that is, the columns of $A$. In the same way, one can obtain the right-looking AINV algorithm from the BFAPINV algorithm.

4. Conclusions

In this paper, we have shown that the AINV and FAPINV algorithms are strongly related. In fact, we have shown that the AINV algorithm can be extracted from the FAPINV algorithm by some modifications. Although, without any dropping, the two algorithms compute the inverse factors of a matrix in different ways, the results are the same. Hence many of the properties of each of these algorithms carry over to the other one. For example, in Benzi and Tuma [13] it has been shown that the right-looking AINV algorithm without any dropping rule is well defined for $H$-matrices. Therefore, we conclude that the BFAPINV algorithm is well defined for $H$-matrices as well.

Acknowledgment

The authors would like to thank one of the referees for helpful suggestions.

References

  1. Y. Saad and M. H. Schultz, “GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems,” SIAM Journal on Scientific and Statistical Computing, vol. 7, no. 3, pp. 856–869, 1986.
  2. Y. Saad, Iterative Methods for Sparse Linear Systems, PWS Press, New York, NY, USA, 2nd edition, 1995.
  3. H. A. van der Vorst, “Bi-CGSTAB: a fast and smoothly converging variant of Bi-CG for the solution of nonsymmetric linear systems,” SIAM Journal on Scientific and Statistical Computing, vol. 13, no. 2, pp. 631–644, 1992.
  4. M. Benzi, “Preconditioning techniques for large linear systems: a survey,” Journal of Computational Physics, vol. 182, no. 2, pp. 418–477, 2002.
  5. M. Benzi and M. Tuma, “A comparative study of sparse approximate inverse preconditioners,” Applied Numerical Mathematics, vol. 30, no. 2-3, pp. 305–340, 1999.
  6. E.-J. Lee and J. Zhang, “A two-phase preconditioning strategy of sparse approximate inverse for indefinite matrices,” Tech. Rep. 476-07, Department of Computer Science, University of Kentucky, Lexington, Ky, USA, 2007.
  7. E.-J. Lee and J. Zhang, “Factored approximate inverse preconditioners with dynamic sparsity patterns,” Tech. Rep. 488-07, Department of Computer Science, University of Kentucky, Lexington, Ky, USA, 2007.
  8. J.-G. Luo, “An incomplete inverse as a preconditioner for the conjugate gradient method,” Computers & Mathematics with Applications, vol. 25, no. 2, pp. 73–79, 1993.
  9. J.-G. Luo, “A new class of decomposition for inverting asymmetric and indefinite matrices,” Computers & Mathematics with Applications, vol. 25, no. 4, pp. 95–104, 1993.
  10. J.-G. Luo, “A new class of decomposition for symmetric systems,” Mechanics Research Communications, vol. 19, pp. 159–166, 1992.
  11. J. Zhang, A procedure for computing factored approximate inverse, M.S. dissertation, Department of Computer Science, University of Kentucky, Lexington, Ky, USA, 1999.
  12. J. Zhang, “A sparse approximate inverse preconditioner for parallel preconditioning of general sparse matrices,” Applied Mathematics and Computation, vol. 130, no. 1, pp. 63–85, 2002.
  13. M. Benzi and M. Tuma, “A sparse approximate inverse preconditioner for nonsymmetric linear systems,” SIAM Journal on Scientific Computing, vol. 19, no. 3, pp. 968–994, 1998.
  14. M. Benzi and M. Tuma, “Numerical experiments with two approximate inverse preconditioners,” BIT, vol. 38, no. 2, pp. 234–241, 1998.
  15. D. K. Salkuyeh, “ILU preconditioning based on the FAPINV algorithm,” submitted.