International Journal of Mathematics and Mathematical Sciences
Volume 2012, Article ID 134653, 11 pages
Research Article

A Rapid Numerical Algorithm to Compute Matrix Inversion

F. Soleymani

Department of Applied Mathematics, School of Mathematical Sciences, Ferdowsi University of Mashhad, Mashhad, Iran

Received 22 March 2012; Revised 1 June 2012; Accepted 1 June 2012

Academic Editor: Taekyun Kim

Copyright © 2012 F. Soleymani. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


The aim of the present work is to suggest and establish a numerical algorithm based on matrix multiplications for computing approximate inverses. It is shown theoretically that the scheme possesses seventh-order convergence and thus converges rapidly. Some discussion of the choice of the initial value needed to preserve the convergence rate is given, and it is also shown in numerical examples that the proposed scheme can easily be used to provide robust preconditioners.

1. Introduction

Let us consider a nonsingular N×N matrix A with real or complex elements. It is well known that its inverse exists and can be found by direct methods such as LU or QR decomposition; see, for example, [1]. When the matrix inverse must be computed, the method of choice is probably Gaussian elimination with partial pivoting (GEPP). The resulting residual bounds and possible backward errors may be much smaller in this case; see [2] (the subsection on the "use and abuse of the matrix inverse").

An effective tool for computing approximate inverses of the matrix A is iteration-type methods, which are based on matrix multiplications and are of great interest and accuracy when implemented on parallel machines. In fact, one way is to construct iterative methods of high convergence order that compute the matrix inverse numerically for all types of matrices (especially ill-conditioned ones).

A clear use of such schemes is that one may apply them to find A^{-1} and then, by an easy matrix-vector multiplication, compute the solution of the linear system of equations A x = b. Another use is in constructing approximate inverse preconditioners: a very robust approximate preconditioner can easily be constructed using one, two, or three steps of such iterative methods, and the resulting left-preconditioned system would be

P^{-1} A x = P^{-1} b,   (1.1)

wherein P^{-1} ≈ A^{-1}.
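Since the paper's own experiments use Mathematica, the following is only a minimal Python/NumPy sketch of this preconditioning idea, with an illustrative 2×2 matrix of our own choosing; here P^{-1} is built with a few steps of the classical Schulz iteration reviewed in Section 2.

```python
import numpy as np

# Illustrative example (not from the paper): build P^{-1} ~ A^{-1} with a few
# Schulz steps, then form the left-preconditioned system P^{-1} A x = P^{-1} b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.ones(2)
Pinv = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))  # initial guess
for _ in range(10):
    Pinv = Pinv @ (2 * np.eye(2) - A @ Pinv)   # Schulz step: V <- V(2I - AV)
A_hat, b_hat = Pinv @ A, Pinv @ b              # left-preconditioned system (1.1)
print(np.linalg.cond(A_hat) <= np.linalg.cond(A))
```

The preconditioned matrix is close to the identity, so its condition number is essentially 1.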

Such approximate inverse preconditioners can be robust competitors to classical or modern methods such as AINV or FAPINV; see, for example, [3, 4]. The approximate inverse (AINV) and the factored approximate inverse (FAPINV) are two well-known algorithms in the field of preconditioning of linear systems of equations. Both of these algorithms compute a sparse approximate inverse of the matrix A in factored form and are based on computing two sets of vectors that are A-biconjugate.

In this paper, in order to cope with very ill-conditioned matrices, or to find the preconditioner P^{-1} in fewer iterations and with high accuracy, we propose an efficient iterative method for finding A^{-1} numerically. Theoretical analysis and numerical experiments show that the new method is more effective than the existing ones for constructing approximate inverse preconditioners.

The rest of the paper is organized as follows. Section 2 is devoted to a brief review of the available literature. The main contribution of this paper is given in Section 3. Subsequently, the method is examined in Section 4. Finally, concluding remarks are presented in Section 5.

2. Background

Several methods of various orders have been proposed for approximating (rectangular or square) matrix inverses, such as those based on the minimum residual iterations [5] and the Hotelling-Bodewig algorithm [6].

The Hotelling-Bodewig algorithm [6] is defined as

V_{n+1} = V_n (2 I − A V_n),  n = 0, 1, 2, …,   (2.1)

where I is the identity matrix. Note that throughout this paper all matrices are assumed to have the same dimension unless stated otherwise.
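As a quick illustration (ours, not the paper's), iteration (2.1) can be sketched in Python/NumPy on a small test matrix; the initial guess V_0 = A^T/(‖A‖_1 ‖A‖_∞) anticipates Way 2 of Section 3.

```python
import numpy as np

# Hotelling-Bodewig (Schulz) iteration (2.1) on an illustrative 2x2 matrix.
A = np.array([[4.0, 1.0], [2.0, 5.0]])
V = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))  # initial guess
for _ in range(20):
    V = V @ (2 * np.eye(2) - A @ V)   # V_{n+1} = V_n (2I - A V_n)
print(np.allclose(V, np.linalg.inv(A)))
```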

In 2011, Li et al. [7] presented the following third-order method:

V_{n+1} = V_n (3 I − A V_n (3 I − A V_n)),  n = 0, 1, 2, …,   (2.2)

and also proposed another third-order iterative method for approximating A^{-1}:

V_{n+1} = [I + (1/4) (I − V_n A) (3 I − V_n A)^2] V_n,  n = 0, 1, 2, ….   (2.3)
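A small Python/NumPy experiment (our own sketch, illustrative matrix) makes the order difference visible: the residual ‖I − A V_n‖ of the third-order step (2.2) falls much faster than that of the quadratic iteration (2.1).

```python
import numpy as np

# Compare residual decay of (2.1) (second order) and (2.2) (third order)
# on an illustrative 2x2 matrix with the same initial guess.
A = np.array([[4.0, 1.0], [2.0, 5.0]])
I = np.eye(2)
V0 = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
V2, V3 = V0.copy(), V0.copy()
for _ in range(5):
    V2 = V2 @ (2 * I - A @ V2)                      # (2.1)
    V3 = V3 @ (3 * I - A @ V3 @ (3 * I - A @ V3))   # (2.2)
    print(np.linalg.norm(I - A @ V2), np.linalg.norm(I - A @ V3))
```

After five passes the third-order residual has already reached the rounding floor, while the quadratic one has not.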

It is interesting to mention that the method (2.2) can be found in Chapter 5 of [8].

As another method from the existing literature, Krishnamurthy and Sen suggested the following sixth-order iteration [8] for the same purpose:

V_{n+1} = V_n (2 I − A V_n) (3 I − A V_n (3 I − A V_n)) (I − A V_n (I − A V_n))
        = V_n (I + (I − A V_n) (I + (I − A V_n) (I + (I − A V_n) (I + (I − A V_n) (I + (I − A V_n)))))),   (2.4)

where n = 0, 1, 2, ….
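The two forms in (2.4) are algebraically identical: both equal V_n Σ_{k=0}^{5} (I − A V_n)^k. A short NumPy check (our own sketch, illustrative random matrix) confirms this.

```python
import numpy as np

# Verify that the factored form of (2.4) equals V_n * sum_{k=0}^{5} (I - A V_n)^k.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) + 5 * np.eye(5)   # illustrative test matrix
V = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
I = np.eye(5)
T = A @ V
factored = V @ (2 * I - T) @ (3 * I - T @ (3 * I - T)) @ (I - T @ (I - T))
E = I - T
series = V @ sum(np.linalg.matrix_power(E, k) for k in range(6))
print(np.allclose(factored, series))
```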

For further reading refer to [9, 10].

3. An Accurate Seventh-Order Method

This section contains a new high-order algorithm for finding A^{-1} numerically. In order to deal with very ill-conditioned linear systems, to find efficient preconditioners rapidly, or to find robust approximate inverses, we suggest the following matrix multiplication-based iterative method:

V_{n+1} = (1/16) V_n [120 I + A V_n (−393 I + A V_n (735 I + A V_n (−861 I + A V_n (651 I + A V_n (−315 I + A V_n (93 I + A V_n (−15 I + A V_n)))))))],   (3.1)

for any n = 0, 1, 2, …, wherein I is the identity matrix, and the sequence of iterates {V_n}_{n=0}^{∞} converges to A^{-1} for a good initial guess.
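A minimal Python/NumPy sketch of one step of (3.1) follows (the paper's own code in Section 4 is in Mathematica; the matrix and initial guess here are illustrative); the degree-8 bracket is evaluated by Horner's rule.

```python
import numpy as np

# One step of (3.1), with the bracket evaluated by Horner's rule.
def seventh_order_step(A, V):
    I = np.eye(A.shape[0])
    T = A @ V
    P = -15 * I + T                                # innermost factor of (3.1)
    for c in (93, -315, 651, -861, 735, -393):     # remaining coefficients, inner to outer
        P = c * I + T @ P
    return (1.0 / 16.0) * V @ (120 * I + T @ P)

A = np.array([[4.0, 1.0], [2.0, 5.0]])             # illustrative matrix
V = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
for _ in range(4):
    V = seventh_order_step(A, V)
print(np.linalg.norm(np.eye(2) - A @ V))           # residual after a few steps
```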

Theorem 3.1. Assume that A = [a_{i,j}]_{N×N} is an invertible matrix with real or complex elements. If the initial guess V_0 satisfies

‖I − A V_0‖ < 1,   (3.2)

then the iteration (3.1) converges to A^{-1} with at least seventh order of convergence.

Proof. In order to prove the convergence behavior of (3.1), we assume that ‖I − A V_0‖ < 1, and set E_0 = I − A V_0 and E_n = I − A V_n. We then have

E_{n+1} = I − A V_{n+1}
= I − (1/16) A V_n [120 I + A V_n (−393 I + A V_n (735 I + A V_n (−861 I + A V_n (651 I + A V_n (−315 I + A V_n (93 I + A V_n (−15 I + A V_n)))))))]
= I − (1/16) [120 (A V_n) − 393 (A V_n)^2 + 735 (A V_n)^3 − 861 (A V_n)^4 + 651 (A V_n)^5 − 315 (A V_n)^6 + 93 (A V_n)^7 − 15 (A V_n)^8 + (A V_n)^9]
= −(1/16) (−4 I + A V_n)^2 (−I + A V_n)^7
= (1/16) (3 I + (I − A V_n))^2 (I − A V_n)^7
= (1/16) (3 I + E_n)^2 E_n^7
= (1/16) (9 I + 6 E_n + E_n^2) E_n^7
= (1/16) (E_n^9 + 6 E_n^8 + 9 E_n^7).   (3.3)

Hence, according to the above simplifications, it is easy to obtain

‖E_{n+1}‖ ≤ (1/16) ‖E_n^9 + 6 E_n^8 + 9 E_n^7‖ ≤ (1/16) (‖E_n‖^9 + 6 ‖E_n‖^8 + 9 ‖E_n‖^7).   (3.4)

Furthermore, since ‖E_0‖ < 1 and ‖E_1‖ ≤ ‖E_0‖^7 < 1, we get

‖E_{n+1}‖ ≤ ‖E_n‖^7 ≤ ‖E_{n−1}‖^{7^2} ≤ ⋯ ≤ ‖E_0‖^{7^{n+1}} < 1,   (3.5)

where the right-hand side of (3.5) tends to zero as n → ∞. That is to say,

I − A V_n → 0   (3.6)

as n → ∞, and thus, by (3.6), we obtain

V_n → A^{-1},  as n → ∞.   (3.7)
We must now establish the seventh order of convergence of (3.1), given that, by the discussion above, (3.1) converges to A^{-1} under the assumption of Theorem 3.1. To this end, we denote by ε_n = V_n − A^{-1} the error matrix of the iterative procedure (3.1). We have

I − A V_{n+1} = (1/16) [(I − A V_n)^9 + 6 (I − A V_n)^8 + 9 (I − A V_n)^7].   (3.8)

Equation (3.8) yields

A (A^{-1} − V_{n+1}) = (1/16) [A^9 (A^{-1} − V_n)^9 + 6 A^8 (A^{-1} − V_n)^8 + 9 A^7 (A^{-1} − V_n)^7],   (3.9)

A^{-1} − V_{n+1} = (1/16) [A^8 (A^{-1} − V_n)^9 + 6 A^7 (A^{-1} − V_n)^8 + 9 A^6 (A^{-1} − V_n)^7].   (3.10)

Using (3.10), we attain

ε_{n+1} = (1/16) [A^8 ε_n^9 − 6 A^7 ε_n^8 + 9 A^6 ε_n^7],   (3.11)

and taking norms of both sides gives

‖ε_{n+1}‖ ≤ (1/16) (‖A^8 ε_n^9‖ + ‖6 A^7 ε_n^8‖ + ‖9 A^6 ε_n^7‖),   (3.12)

and consequently

‖ε_{n+1}‖ ≤ (1/16) [9 ‖A‖^6 + 6 ‖A‖^7 ‖ε_n‖ + ‖A‖^8 ‖ε_n‖^2] ‖ε_n‖^7.   (3.13)

This shows that the method (3.1) converges to A^{-1} with at least seventh order of convergence, which concludes the proof.
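The key identity of (3.3), E_{n+1} = (1/16)(E_n^9 + 6 E_n^8 + 9 E_n^7), is a polynomial identity in A V_n, so it can be checked numerically for any matrix; the sketch below (our own, illustrative random matrix) does exactly that for one step of (3.1).

```python
import numpy as np

# Check E_{n+1} = (1/16)(E^9 + 6E^8 + 9E^7) with E = I - A V_n, for one step of (3.1).
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)) + 4 * np.eye(4)    # illustrative matrix
V = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
I = np.eye(4)
T = A @ V
V_next = (1 / 16) * V @ (120 * I + T @ (-393 * I + T @ (735 * I + T @ (-861 * I
         + T @ (651 * I + T @ (-315 * I + T @ (93 * I + T @ (-15 * I + T))))))))
E = I - T
lhs = I - A @ V_next
rhs = (1 / 16) * (np.linalg.matrix_power(E, 9)
                  + 6 * np.linalg.matrix_power(E, 8)
                  + 9 * np.linalg.matrix_power(E, 7))
print(np.allclose(lhs, rhs))
```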

Remark 3.2. From Theorem 3.1, we can conclude the following points. (1) From (3.4), one knows that condition (3.2) may be weakened: in fact, we only need the spectral radius of A V_0 to be less than one for the convergence of the new method (3.1). In this case, the choice of V_0 may be obtained according to estimate formulas for the spectral radius ρ(A V_0) (see, e.g., [11]). (2) In some experiments, and to reduce the computational cost, the matrix multiplications may be carried out on vector and parallel processors. (3) Finally, for the choice of V_0 there exist many different forms. We discuss this issue after Theorem 3.3, based on the literature.
We now give a property of the scheme (3.1). This property shows that {V_n}_{n=0}^{∞} of (3.1) may be applied not only to the left-preconditioned linear system V_n A x = V_n b but also to the right-preconditioned linear system A V_n y = b, where x = V_n y.

Theorem 3.3. Let again A = [a_{i,j}]_{N×N} be a nonsingular real or complex matrix. If

A V_0 = V_0 A   (3.14)

is valid, then, for the sequence {V_n}_{n=0}^{∞} of (3.1),

A V_n = V_n A   (3.15)

holds for all n = 1, 2, ….

Proof. We proceed by mathematical induction. First, since A V_0 = V_0 A, we have

A V_1 = A [(1/16) V_0 (120 I + A V_0 (−393 I + A V_0 (735 I + A V_0 (−861 I + A V_0 (651 I + A V_0 (−315 I + A V_0 (93 I + A V_0 (−15 I + A V_0))))))))]
= (1/16) V_0 A (120 I + V_0 A (−393 I + V_0 A (735 I + V_0 A (−861 I + V_0 A (651 I + V_0 A (−315 I + V_0 A (93 I + V_0 A (−15 I + V_0 A))))))))
= (1/16) V_0 (120 I + V_0 A (−393 I + V_0 A (735 I + V_0 A (−861 I + V_0 A (651 I + V_0 A (−315 I + V_0 A (93 I + V_0 A (−15 I + V_0 A)))))))) A
= V_1 A.   (3.16)

Equation (3.16) shows that (3.15) is true for n = 1. Now suppose that A V_n = V_n A holds; then a straightforward calculation as in (3.16) shows that, for all n ≥ 1,

A V_{n+1} = A [(1/16) V_n (120 I + A V_n (−393 I + A V_n (735 I + A V_n (−861 I + A V_n (651 I + A V_n (−315 I + A V_n (93 I + A V_n (−15 I + A V_n))))))))]
= (1/16) V_n A (120 I + V_n A (−393 I + V_n A (735 I + V_n A (−861 I + V_n A (651 I + V_n A (−315 I + V_n A (93 I + V_n A (−15 I + V_n A))))))))
= V_{n+1} A.   (3.17)
This concludes the proof.

Note that, according to the literature [7, 9, 12, 13], to find an initial value V_0 that preserves the convergence order of such iterative methods, we need to fulfill at least the condition given in Remark 3.2 or Theorem 3.1. We list some ways for this purpose in what follows.

Way 1. If the matrix A is strictly diagonally dominant, choose V_0 = diag(1/a_{11}, 1/a_{22}, …, 1/a_{NN}), where the a_{ii} are the diagonal elements of A.

Way 2. For a general matrix A, choose V_0 = A^T/(‖A‖_1 ‖A‖_∞), wherein T stands for transpose, ‖A‖_1 = max_j {Σ_{i=1}^{N} |a_{ij}|}, and ‖A‖_∞ = max_i {Σ_{j=1}^{N} |a_{ij}|}.

Way 3. If Ways 1 and 2 fail, use V_0 = αI, where I is the identity matrix and α ∈ ℝ should be determined adaptively such that ‖I − αA‖ < 1.
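The three choices can be sketched as follows (Python/NumPy; the helper names and the test matrix are ours, not the paper's):

```python
import numpy as np

# The three initial-guess strategies; function names are ours for illustration.
def v0_way1(A):                       # Way 1: inverse of the diagonal
    return np.diag(1.0 / np.diag(A))

def v0_way2(A):                       # Way 2: A^T / (||A||_1 ||A||_inf)
    return A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))

def v0_way3(A, alpha):                # Way 3: alpha*I with ||I - alpha*A|| < 1
    return alpha * np.eye(A.shape[0])

A = np.array([[10.0, 1.0], [2.0, 8.0]])   # strictly diagonally dominant example
V0 = v0_way2(A)
print(np.linalg.norm(np.eye(2) - A @ V0) < 1)   # convergence condition (3.2)
```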

4. Numerical Testing

In this section, experiments are presented to demonstrate the capability of the suggested method. For solving a square linear system of equations of the general form A x = b, wherein A ∈ ℂ^{N×N}, we can now propose the following efficient algorithm:

x_{n+1} = V_{n+1} b,  while n = 0, 1, 2, ….
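Under this scheme, each pass updates V_n by (3.1) and then forms x_{n+1} = V_{n+1} b. A Python/NumPy sketch with illustrative data (the paper's experiments themselves use Mathematica):

```python
import numpy as np

# Solve A x = b by iterating (3.1) and setting x_{n+1} = V_{n+1} b each pass.
def seventh_order_step(A, V):
    I = np.eye(A.shape[0])
    T = A @ V
    P = -15 * I + T
    for c in (93, -315, 651, -861, 735, -393):   # Horner coefficients of (3.1)
        P = c * I + T @ P
    return (1 / 16) * V @ (120 * I + T @ P)

A = np.array([[4.0, 1.0], [2.0, 5.0]])           # illustrative system
b = np.ones(2)
V = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
for _ in range(5):
    V = seventh_order_step(A, V)
    x = V @ b                                    # x_{n+1} = V_{n+1} b
print(np.allclose(A @ x, b))
```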

The programming package Mathematica 8 [14, 15] has been used in this section. For numerical comparisons, we have used the methods (2.1) and (2.2), the sixth-order method of Krishnamurthy and Sen (2.4), and the new algorithm (3.1).

As noted in Section 1, such high-order schemes are very fruitful in providing robust preconditioners for linear systems. In fact, we believe that only one full iteration of the scheme (3.1), even for large sparse systems, with SparseArray[mat] defined to reduce the computational load of the matrix multiplications, is enough to find acceptable preconditioners for linear systems. In any case, using parallel computation with simple commands in Mathematica 8 may reduce the computational burden even further.

Test Problem 1
Consider the linear system A x = b, wherein A is the large sparse complex matrix defined by

n = 1000;
A = SparseArray[{Band[{1, 120}] -> -2., Band[{950, 1}] -> 2. - I,
    Band[{-700, 18}] -> 1., Band[{1, 1}] -> 23., Band[{1, 100}] -> 0.2,
    Band[{214, -124}] -> 1., Band[{6, 800}] -> 1.1}, {n, n}, 0.];
The dimension of this matrix is 1000, with complex elements in its structure. The right-hand side vector is taken as b = (1, 1, …, 1)^T. Although page limitations do not allow us to provide the full form of such matrices, their structure can easily be drawn. Figure 1 illustrates the plot and the array plot of this matrix.

Figure 1: The matrix plot (a) and the array plot (b) of the coefficient complex matrix in test problem 1.

Now, we expect to find robust approximate inverses for A in fewer iterations by the high-order iterative methods. Furthermore, as described in the previous sections, the approximate inverses can be used for left or right preconditioned systems.

Table 1 clearly shows the efficiency of the proposed iterative method (3.1) in finding approximate inverses, by listing the number of iterations and the obtained residual norm. When working with sparse matrices, an important factor that clearly affects the computational cost of the iterative method is the number of nonzero elements. The considered sparse complex matrix A in Test Problem 1 has 3858 nonzero elements at the beginning. We list the number of nonzero elements for the different matrix multiplication-based schemes in Table 1. It is obvious that the new scheme is much better than (2.1) and (2.2); compared to (2.4), its number of nonzero elements is higher, but this is completely satisfactory in view of the better residual norms obtained. In fact, if one lets (2.4) cycle further in order to reach a residual norm correct up to seven decimal places, the resulting number of nonzero elements will exceed the corresponding one for (3.1).

Table 1: Results of comparisons for the test problem 1.

At this stage, if the user is satisfied with the obtained residual norm for solving the large sparse linear system, the process can be stopped; otherwise, one can use the attained approximate inverse as a left or right preconditioner and solve the resulting preconditioned linear system, which has a low condition number, using the LinearSolve command. Note that the code implementing Test Problem 1 for the method (3.1) is given as follows:

b = SparseArray[Table[1, {k, n}]];
DA = Diagonal[A];
B = SparseArray[Table[(1/DA[[i]]), {i, 1, n}]];
Id = SparseArray[{{i_, i_} -> 1}, {n, n}];
X = SparseArray[DiagonalMatrix[B]];
Do[X1 = SparseArray[X];
   AV = Chop[SparseArray[A.X1]];
   X = Chop[(1/16) X1.SparseArray[(120 Id + AV.(-393 Id + AV.(735 Id +
       AV.(-861 Id + AV.(651 Id + AV.(-315 Id + AV.(93 Id +
       AV.(-15 Id + AV))))))))]];
   Print[X];
   L[i] = Norm[b - SparseArray[A.(X.b)]];
   Print["The residual norm of the linear system solution is:"
     Column[{i}, Frame -> All, FrameStyle -> Directive[Blue]]
     Column[{L[i]}, Frame -> All, FrameStyle -> Directive[Blue]]],
  {i, 1}]

In what follows, we examine the matrix inverse-finding iterative methods of this paper on a dense matrix which is of importance in applications.

Test Problem 2
Consider the linear system A x = b, wherein A is the dense matrix defined by

n = 40;
A = Table[Sin[(x*y)]/(x + y) - 1., {x, n}, {y, n}];
The right-hand side vector is again taken as b = (1, 1, …, 1)^T. Figure 2 illustrates the plot and the array plot of this matrix. Since such plots are not very illustrative for dense matrices, we have also drawn, in Figure 3, 3D plots of the coefficient matrix of Test Problem 2 using zero-order interpolation, alongside its approximate inverse obtained from the method (3.1) after 11 iterations. We further expect an identity-like structure for the product of A and its approximate inverse, and this is clear in part (c) of Figure 3.

Figure 2: The matrix plot (a) and the array plot (b) of the coefficient matrix in Test Problem 2.
Figure 3: The 3D plot of the structure of the coefficient matrix (a) in the Test Problem 2, its approximate inverse (b) by the method (3.1) and their multiplication to obtain a similar output to the identity matrix (c).

Table 2 shows the number of iterations and the obtained residual norm for the different methods, in order to reveal the efficiency of the proposed iteration. There is a clear reduction in computational steps for the proposed method (3.1) in contrast to the other existing well-known methods of various orders in the literature. Table 2 further confirms the use of such iterative inverse finders for constructing robust approximate inverse preconditioners: the condition number of the coefficient matrix in Test Problem 2 is 18137.2, which is quite high for a 40×40 matrix, yet, as furnished in Table 2, the condition number of the preconditioned matrix after the specified number of iterations is very small.
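An analogous effect can be reproduced in a small Python/NumPy sketch of our own (not the paper's test problem): for the 4×4 Hilbert matrix, whose condition number is about 1.55·10^4, iterating (3.1) from V_0 = A^T/(‖A‖_1 ‖A‖_∞) drives the condition number of the preconditioned matrix V_n A down toward 1.

```python
import numpy as np

# Illustrative preconditioning experiment on the 4x4 Hilbert matrix:
# iterate (3.1) and watch the condition number of V A collapse.
def seventh_order_step(A, V):
    I = np.eye(A.shape[0])
    T = A @ V
    P = -15 * I + T
    for c in (93, -315, 651, -861, 735, -393):   # Horner coefficients of (3.1)
        P = c * I + T @ P
    return (1 / 16) * V @ (120 * I + T @ P)

n = 4
H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
V = H.T / (np.linalg.norm(H, 1) * np.linalg.norm(H, np.inf))
for _ in range(12):
    V = seventh_order_step(H, V)
print(np.linalg.cond(V @ H) < np.linalg.cond(H))
```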

Table 2: Results of comparisons for the test problem 2 with condition number 18137.2.

We should note here that the computational time required for implementing all the methods in Tables 1 and 2 for Test Problems 1 and 2 is less than one second, and for this reason we have not listed the timings. For the second test problem, we have used V_0 = A^T/(‖A‖_1 ‖A‖_∞).

5. Concluding Remarks

In the present paper, the author, along with the helpful suggestions of the reviewers, has developed an iterative method for matrix inversion. Note that such high-order iterative methods are very efficient for very ill-conditioned linear systems or for finding robust approximate inverse preconditioners. We have shown analytically that the suggested method (3.1) reaches seventh order of convergence. Moreover, the efficacy of the new scheme was illustrated numerically in Section 4 by applying it to a sparse matrix and to an ill-conditioned dense matrix. All the numerical results confirm the theoretical analysis and show the efficiency of (3.1).


Acknowledgment

The author would like to take this opportunity to sincerely acknowledge the many valuable suggestions made by the two anonymous reviewers, and especially the third suggestion of reviewer 1, which have substantially improved the quality of this paper.


References

1. T. Sauer, Numerical Analysis, Pearson, 2nd edition, 2011.
2. N. J. Higham, Accuracy and Stability of Numerical Algorithms, Society for Industrial and Applied Mathematics, 2nd edition, 2002.
3. D. K. Salkuyeh and H. Roohani, "On the relation between the AINV and the FAPINV algorithms," International Journal of Mathematics and Mathematical Sciences, vol. 2009, Article ID 179481, 6 pages, 2009.
4. M. Benzi and M. Tuma, "Numerical experiments with two approximate inverse preconditioners," BIT, vol. 38, no. 2, pp. 234–241, 1998.
5. E. Chow and Y. Saad, "Approximate inverse preconditioners via sparse-sparse iterations," SIAM Journal on Scientific Computing, vol. 19, no. 3, pp. 995–1023, 1998.
6. G. Schulz, "Iterative Berechnung der reziproken Matrix," Zeitschrift für Angewandte Mathematik und Mechanik, vol. 13, pp. 57–59, 1933.
7. H.-B. Li, T.-Z. Huang, Y. Zhang, V.-P. Liu, and T.-V. Gu, "Chebyshev-type methods and preconditioning techniques," Applied Mathematics and Computation, vol. 218, no. 2, pp. 260–270, 2011.
8. E. V. Krishnamurthy and S. K. Sen, Numerical Algorithms: Computations in Science and Engineering, Affiliated East-West Press, New Delhi, India, 1986.
9. G. Codevico, V. Y. Pan, and M. V. Barel, "Newton-like iteration based on a cubic polynomial for structured matrices," Numerical Algorithms, vol. 36, no. 4, pp. 365–380, 2004.
10. W. Li and Z. Li, "A family of iterative methods for computing the approximate inverse of a square matrix and inner inverse of a non-square matrix," Applied Mathematics and Computation, vol. 215, no. 9, pp. 3433–3442, 2010.
11. H. B. Li, T. Z. Huang, S. Shen, and H. Li, "A new upper bound for spectral radius of block iterative matrices," Journal of Computational Information Systems, vol. 1, no. 3, pp. 595–599, 2005.
12. V. Y. Pan and R. Schreiber, "An improved Newton iteration for the generalized inverse of a matrix, with applications," SIAM Journal on Scientific and Statistical Computing, vol. 12, no. 5, pp. 1109–1130, 1991.
13. K. Moriya and T. Nodera, "A new scheme of computing the approximate inverse preconditioner for the reduced linear systems," Journal of Computational and Applied Mathematics, vol. 199, no. 2, pp. 345–352, 2007.
14. S. Wagon, Mathematica in Action, Springer, 3rd edition, 2010.
15. S. Wolfram, The Mathematica Book, Wolfram Media, 5th edition, 2003.