International Journal of Mathematics and Mathematical Sciences
Volume 2012, Article ID 134653, 11 pages
https://doi.org/10.1155/2012/134653

Research Article | Open Access

Citation: F. Soleymani, "A Rapid Numerical Algorithm to Compute Matrix Inversion", International Journal of Mathematics and Mathematical Sciences, vol. 2012, Article ID 134653, 11 pages, 2012.

A Rapid Numerical Algorithm to Compute Matrix Inversion

Academic Editor: Taekyun Kim
Received: 22 Mar 2012
Revised: 01 Jun 2012
Accepted: 01 Jun 2012
Published: 03 Sep 2012

Abstract

The aim of the present work is to suggest and establish a numerical algorithm based on matrix multiplications for computing approximate inverses. It is shown theoretically that the scheme possesses seventh-order convergence, and thus it converges rapidly. Some discussions on the choice of the initial value to preserve the convergence rate are given, and it is also shown in numerical examples that the proposed scheme can readily be used to construct robust preconditioners.

1. Introduction

Let us consider a nonsingular square matrix $A_{N\times N}$ with real or complex elements. It is well known that its inverse exists and can be found by direct methods such as the LU or QR decompositions; see, for example, [1]. When an explicit inverse is required, the method of choice is probably Gaussian elimination with partial pivoting (GEPP); the resulting residual bounds and possible backward errors may be much smaller in this case, see [2] (subsection on the "Use and abuse of the matrix inverse").

An effective tool for computing approximate inverses of the matrix $A$ is to use iteration-type methods, which are based on matrix multiplications and are of great interest and accuracy when implemented on parallel machines. In fact, one way is to construct iterative methods of high order of convergence to find the matrix inverse numerically for all types of matrices (especially for ill-conditioned ones).

A clear use of such schemes is that one may apply them to find $A^{-1}$ and then compute the solution of the linear system of equations $Ax = b$ by an easy matrix-vector multiplication. Another use is in constructing approximate inverse preconditioners: a very robust approximate preconditioner can easily be constructed using one, two, or three steps of such iterative methods, and the resulting left preconditioned system reads

$$\hat{A}x = \hat{b}, \tag{1.1}$$

wherein $\hat{A} = P^{-1}A$, $\hat{b} = P^{-1}b$, and $P^{-1} \approx A^{-1}$.
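As a minimal illustration of this use (not part of the original paper), the following Wolfram Language sketch applies a given approximate inverse V, obtained, e.g., from a few steps of an iterative inverse finder, as a left preconditioner; the function name precondSolve is our own choice.

precondSolve[A_?MatrixQ, b_?VectorQ, V_?MatrixQ] :=
  LinearSolve[V.A, V.b]  (* solve the left preconditioned system (1.1) *)

(* Since V approximates Inverse[A], the product V.A is close to the identity,
   so the preconditioned system is much better conditioned than A x == b. *)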

The approximate inverse preconditioners so obtained can be robust competitors to classical or modern methods such as AINV or FAPINV; see, for example, [3, 4]. The approximate inverse (AINV) and the factored approximate inverse (FAPINV) are two known algorithms in the field of preconditioning of linear systems of equations. Both of these algorithms compute a sparse approximate inverse of the matrix $A$ in factored form and are based on computing two sets of vectors which are $A$-biconjugate.

In this paper, in order to cope with very ill-conditioned matrices, or to find the preconditioner $P^{-1}$ in fewer iterations and with high accuracy, we will propose an efficient iterative method for finding $A^{-1}$ numerically. Theoretical analysis and numerical experiments show that the new method is more effective than the existing ones for constructing approximate inverse preconditioners.

The rest of the paper is organized as follows. Section 2 is devoted to a brief review of the available literature. The main contribution of this paper is given in Section 3. Subsequently, the method is examined in Section 4. Finally, concluding remarks are presented in Section 5.

2. Background

Several methods of various orders have been proposed for approximating (rectangular or square) matrix inverses, such as those based on minimum residual iterations [5] and the Hotelling-Bodewig algorithm [6].

The Hotelling-Bodewig algorithm [6] is defined as

$$V_{n+1} = V_n\left(2I - AV_n\right), \quad n = 0, 1, 2, \ldots, \tag{2.1}$$

where $I$ is the identity matrix. Note that throughout this paper we consider matrices of the same dimension unless stated otherwise.
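A minimal Wolfram Language sketch of iteration (2.1) may look as follows; the function name schulzInverse, the tolerance, and the iteration cap are our own choices, not part of the original algorithm.

schulzInverse[A_?MatrixQ, V0_?MatrixQ, tol_ : 10.^-10, maxIter_ : 100] :=
  Module[{V = V0, Id = IdentityMatrix[Length[A]], n = 0},
    (* repeat V <- V (2I - A V) until the residual I - A V is small *)
    While[Norm[Id - A.V, "Frobenius"] > tol && n < maxIter,
      V = V.(2 Id - A.V); n++];
    V]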

In 2011, Li et al. [7] presented the following third-order method:

$$V_{n+1} = V_n\left(3I - AV_n\left(3I - AV_n\right)\right), \quad n = 0, 1, 2, \ldots, \tag{2.2}$$

and also proposed another third-order iterative method for approximating $A^{-1}$:

$$V_{n+1} = \left[I + \frac{1}{4}\left(I - V_nA\right)\left(3I - V_nA\right)^2\right]V_n, \quad n = 0, 1, 2, \ldots. \tag{2.3}$$

It is interesting to note that the method (2.2) can also be found in Chapter 5 of [8].

As another method from the existing literature, Krishnamurthy and Sen suggested the following sixth-order iteration method [8] for the same purpose:

$$\begin{aligned} V_{n+1} &= V_n\left(2I - AV_n\right)\left(3I - AV_n\left(3I - AV_n\right)\right)\left(I - AV_n\left(I - AV_n\right)\right)\\ &= V_n\left(I + \left(I - AV_n\right)\left(I + \left(I - AV_n\right)\left(I + \left(I - AV_n\right)\left(I + \left(I - AV_n\right)\left(I + \left(I - AV_n\right)\right)\right)\right)\right)\right), \end{aligned} \tag{2.4}$$

where $n = 0, 1, 2, \ldots$.
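For comparison purposes later on, one step of (2.4) can be coded directly from its nested form; ksStep is a name we introduce (the symbol E is avoided since it is reserved in Mathematica).

ksStep[A_?MatrixQ, V_?MatrixQ] :=
  Module[{Id = IdentityMatrix[Length[A]], R},
    R = Id - A.V;  (* residual matrix I - A V_n *)
    (* V (I + R + R^2 + R^3 + R^4 + R^5), written in nested Horner-like form *)
    V.(Id + R.(Id + R.(Id + R.(Id + R.(Id + R)))))]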

For further reading refer to [9, 10].

3. An Accurate Seventh-Order Method

This section contains a new high-order algorithm for finding $A^{-1}$ numerically. In order to deal with very ill-conditioned linear systems, to find efficient preconditioners rapidly, or to find robust approximate inverses, we suggest the following matrix multiplication-based iterative method:

$$V_{n+1} = \frac{1}{16}V_n\left(120I + AV_n\left(-393I + AV_n\left(735I + AV_n\left(-861I + AV_n\left(651I + AV_n\left(-315I + AV_n\left(93I + AV_n\left(-15I + AV_n\right)\right)\right)\right)\right)\right)\right)\right), \tag{3.1}$$

for any $n = 0, 1, 2, \ldots$, wherein $I$ is the identity matrix, and the sequence of iterates $\{V_n\}_{n=0}^{\infty}$ converges to $A^{-1}$ for a good initial guess.
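In code, one step of (3.1) reads as follows; this is a direct transcription of the nested factorization above, and the name step7 is our own.

step7[A_?MatrixQ, V_?MatrixQ] :=
  Module[{Id = IdentityMatrix[Length[A]], AV = A.V},
    (1/16) V.(120 Id + AV.(-393 Id + AV.(735 Id + AV.(-861 Id +
        AV.(651 Id + AV.(-315 Id + AV.(93 Id + AV.(-15 Id + AV))))))))]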

Theorem 3.1. Assume that $A = [a_{i,j}]_{N\times N}$ is an invertible matrix with real or complex elements. If the initial guess $V_0$ satisfies

$$\left\|I - AV_0\right\| < 1, \tag{3.2}$$

then the iteration (3.1) converges to $A^{-1}$ with at least seventh order of convergence.

Proof. In order to prove the convergence behavior of (3.1), we assume that $\|I - AV_0\| < 1$, and set $E_0 = I - AV_0$ and, in general, $E_n = I - AV_n$. We then have

$$\begin{aligned}
E_{n+1} &= I - AV_{n+1}\\
&= I - \frac{1}{16}AV_n\left(120I + AV_n\left(-393I + AV_n\left(735I + AV_n\left(-861I + AV_n\left(651I + AV_n\left(-315I + AV_n\left(93I + AV_n\left(-15I + AV_n\right)\right)\right)\right)\right)\right)\right)\right)\\
&= I - \frac{1}{16}AV_n\left(120I - 393AV_n + 735\left(AV_n\right)^2 - 861\left(AV_n\right)^3 + 651\left(AV_n\right)^4 - 315\left(AV_n\right)^5 + 93\left(AV_n\right)^6 - 15\left(AV_n\right)^7 + \left(AV_n\right)^8\right)\\
&= -\frac{1}{16}\left(-4I + AV_n\right)^2\left(-I + AV_n\right)^7\\
&= \frac{1}{16}\left(3I + \left(I - AV_n\right)\right)^2\left(I - AV_n\right)^7\\
&= \frac{1}{16}\left(3I + E_n\right)^2E_n^7 = \frac{1}{16}\left(9I + 6E_n + E_n^2\right)E_n^7\\
&= \frac{1}{16}\left(E_n^9 + 6E_n^8 + 9E_n^7\right).
\end{aligned} \tag{3.3}$$

Hence, according to the above simplifications, it is easy to obtain

$$\left\|E_{n+1}\right\| \le \frac{1}{16}\left\|E_n^9 + 6E_n^8 + 9E_n^7\right\| \le \frac{1}{16}\left(\left\|E_n\right\|^9 + 6\left\|E_n\right\|^8 + 9\left\|E_n\right\|^7\right). \tag{3.4}$$

Furthermore, since $\|E_0\| < 1$ and $\|E_1\| \le \|E_0\|^7 < 1$, we get

$$\left\|E_{n+1}\right\| \le \left\|E_n\right\|^7 \le \left\|E_{n-1}\right\|^{7^2} \le \cdots \le \left\|E_0\right\|^{7^{n+1}} < 1, \tag{3.5}$$

where the right-hand side of (3.5) tends to zero as $n \to \infty$. That is to say,

$$I - AV_n \to 0 \quad \text{as } n \to \infty, \tag{3.6}$$

and thus, by (3.6),

$$V_n \to A^{-1} \quad \text{as } n \to \infty. \tag{3.7}$$
We must now establish the seventh order of convergence for (3.1), given that, by the discussion above, (3.1) converges to $A^{-1}$ under the assumption of Theorem 3.1. To this end, we denote by $\varepsilon_n = V_n - A^{-1}$ the error matrix in the iterative procedure (3.1). We have

$$I - AV_{n+1} = \frac{1}{16}\left(\left(I - AV_n\right)^9 + 6\left(I - AV_n\right)^8 + 9\left(I - AV_n\right)^7\right). \tag{3.8}$$

Equation (3.8) yields

$$A\left(A^{-1} - V_{n+1}\right) = \frac{1}{16}\left(A^9\left(A^{-1} - V_n\right)^9 + 6A^8\left(A^{-1} - V_n\right)^8 + 9A^7\left(A^{-1} - V_n\right)^7\right), \tag{3.9}$$

$$A^{-1} - V_{n+1} = \frac{1}{16}\left(A^8\left(A^{-1} - V_n\right)^9 + 6A^7\left(A^{-1} - V_n\right)^8 + 9A^6\left(A^{-1} - V_n\right)^7\right). \tag{3.10}$$

Using (3.10), we attain

$$\varepsilon_{n+1} = \frac{1}{16}\left(A^8\varepsilon_n^9 - 6A^7\varepsilon_n^8 + 9A^6\varepsilon_n^7\right), \tag{3.11}$$

which, by taking norms of both sides, gives

$$\left\|\varepsilon_{n+1}\right\| \le \frac{1}{16}\left(\left\|A^8\varepsilon_n^9\right\| + \left\|6A^7\varepsilon_n^8\right\| + \left\|9A^6\varepsilon_n^7\right\|\right), \tag{3.12}$$

and consequently

$$\left\|\varepsilon_{n+1}\right\| \le \frac{1}{16}\left(9\|A\|^6 + 6\|A\|^7\left\|\varepsilon_n\right\| + \|A\|^8\left\|\varepsilon_n\right\|^2\right)\left\|\varepsilon_n\right\|^7. \tag{3.13}$$

This shows that the method (3.1) converges to $A^{-1}$ with at least seventh order of convergence. This concludes the proof.
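The error bound (3.4) can be probed numerically. In the hedged sketch below (the 10×10 random test matrix, variable names, and iteration count are our own choices), once the residual norm drops below one, each printed residual should be bounded by the seventh power of the previous one; step7 is the sketch given earlier in this section.

SeedRandom[1];
n = 10;
A = RandomReal[{-1, 1}, {n, n}];
V = Transpose[A]/(Norm[A, 1] Norm[A, Infinity]);  (* WAY 2 initial guess, see below *)
Do[
  before = Norm[IdentityMatrix[n] - A.V, "Frobenius"];
  V = step7[A, V];
  after = Norm[IdentityMatrix[n] - A.V, "Frobenius"];
  Print[{k, before, after, before^7}],  (* expect after <= before^7 once before < 1 *)
  {k, 8}]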

Remark 3.2. Regarding Theorem 3.1 above, we can conclude some points as follows.
(1) From (3.4), one knows that the condition (3.2) may be weakened: in fact, we only need the spectral radius of the residual matrix $I - AV_0$ to be less than one for the convergence of the new method (3.1). In this case, the choice of $V_0$ may be guided by estimate formulas for the spectral radius (see, e.g., [11]).
(2) In some experiments, and to reduce the computational cost, the matrix multiplications may be carried out on vector or parallel processors.
(3) Finally, for the choice of $V_0$ there exist many different options. We will discuss this issue after Theorem 3.3, based on the literature.
We now give a property of the scheme (3.1). This property shows that the sequence $\{V_n\}_{n=0}^{\infty}$ of (3.1) may be applied not only to the left preconditioned linear system $V_nAx = V_nb$ but also to the right preconditioned linear system $AV_ny = b$, where $x = V_ny$.

Theorem 3.3. Let again $A = [a_{i,j}]_{N\times N}$ be a nonsingular real or complex matrix. If

$$AV_0 = V_0A \tag{3.14}$$

is valid, then, for the sequence $\{V_n\}_{n=0}^{\infty}$ of (3.1),

$$AV_n = V_nA \tag{3.15}$$

holds for all $n = 1, 2, \ldots$.

Proof. Mathematical induction is employed herein. First, since $AV_0 = V_0A$, we have

$$\begin{aligned}
AV_1 &= \frac{1}{16}AV_0\left(120I + AV_0\left(-393I + AV_0\left(735I + AV_0\left(-861I + AV_0\left(651I + AV_0\left(-315I + AV_0\left(93I + AV_0\left(-15I + AV_0\right)\right)\right)\right)\right)\right)\right)\right)\\
&= \frac{1}{16}V_0A\left(120I + V_0A\left(-393I + V_0A\left(735I + V_0A\left(-861I + V_0A\left(651I + V_0A\left(-315I + V_0A\left(93I + V_0A\left(-15I + V_0A\right)\right)\right)\right)\right)\right)\right)\right)\\
&= \frac{1}{16}V_0\left(120I + V_0A\left(-393I + V_0A\left(735I + V_0A\left(-861I + V_0A\left(651I + V_0A\left(-315I + V_0A\left(93I + V_0A\left(-15I + V_0A\right)\right)\right)\right)\right)\right)\right)\right)A\\
&= V_1A.
\end{aligned} \tag{3.16}$$

Equation (3.16) shows that (3.15) is true for $n = 1$. Now suppose that $AV_n = V_nA$ holds for some $n \ge 1$; then a straightforward calculation using the same commutation as in (3.16) shows that

$$AV_{n+1} = \frac{1}{16}AV_n\left(120I + AV_n\left(-393I + \cdots + AV_n\left(-15I + AV_n\right)\right)\right) = \frac{1}{16}V_n\left(120I + V_nA\left(-393I + \cdots + V_nA\left(-15I + V_nA\right)\right)\right)A = V_{n+1}A. \tag{3.17}$$
This concludes the proof.
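Theorem 3.3 can be checked numerically; the sketch below is our own construction and uses $V_0 = \alpha I$, which always commutes with $A$ (note that the WAY 2 guess $A^T/(\|A\|_1\|A\|_\infty)$ of the next paragraph satisfies (3.14) only for normal matrices). step7 is the sketch from earlier in this section.

SeedRandom[2];
n = 6;
A = RandomReal[{-1, 1}, {n, n}];
V = 0.01 IdentityMatrix[n];  (* V0 = alpha I satisfies A.V0 == V0.A *)
Do[V = step7[A, V], {3}];
Print[Norm[A.V - V.A, "Frobenius"]/Norm[V, "Frobenius"]]
(* the relative commutation error stays at round-off level, ~ 10^-16 *)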

Note that, according to the literature [7, 9, 12, 13], in order to find an initial value $V_0$ that preserves the convergence order of such iterative methods, we need to fulfill at least the condition given in Remark 3.2 or Theorem 3.1. We list some ways for this purpose in what follows; a code sketch is given after the list.

WAY 1. If the matrix $A$ is strictly diagonally dominant, choose $V_0 = \mathrm{diag}(1/a_{11}, 1/a_{22}, \ldots, 1/a_{nn})$, where $a_{ii}$ are the diagonal elements of $A$.

WAY 2. For a general matrix $A$, choose $V_0 = A^T/(\|A\|_1\|A\|_\infty)$, wherein $T$ stands for transpose, $\|A\|_1 = \max_j\{\sum_{i=1}^{n}|a_{ij}|\}$, and $\|A\|_\infty = \max_i\{\sum_{j=1}^{n}|a_{ij}|\}$.

WAY 3. If Ways 1-2 fail, use $V_0 = \alpha I$, where $I$ is the identity matrix and $\alpha \in \mathbb{R}$ should be determined adaptively such that $\|I - \alpha A\| < 1$.
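A hedged Wolfram Language sketch of these strategies follows; chooseV0 is our own name, only WAYS 1-2 are automated here, and WAY 3's $\alpha$ is left to the caller.

chooseV0[A_?MatrixQ] :=
  Module[{n = Length[A], d = Diagonal[A]},
    If[And @@ Table[2 Abs[d[[i]]] > Total[Abs[A[[i]]]], {i, n}],
      DiagonalMatrix[1/d],                         (* WAY 1: strict diagonal dominance *)
      Transpose[A]/(Norm[A, 1] Norm[A, Infinity])  (* WAY 2: general default *)
    ]]
(* WAY 3 fallback, if needed: V0 = alpha IdentityMatrix[n] with Norm[I - alpha A] < 1 *)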

4. Numerical Testing

In this section, experiments are presented to demonstrate the capability of the suggested method. For solving a square linear system of equations of the general form $Ax = b$, wherein $A \in \mathbb{C}^{N\times N}$, we can now propose the following efficient algorithm: compute $V_{n+1}$ by (3.1) and set $x_{n+1} = V_{n+1}b$, $n = 0, 1, 2, \ldots$.
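A compact, hedged sketch of this algorithm is shown below; the names solveViaInverse, step7, and chooseV0 are ours, carried over from the sketches in Section 3.

solveViaInverse[A_?MatrixQ, b_?VectorQ, tol_ : 10.^-8, maxIter_ : 50] :=
  Module[{V = chooseV0[A], n = 0},
    While[Norm[b - A.(V.b)] > tol && n < maxIter,
      V = step7[A, V]; n++];  (* refine V_n ~ Inverse[A] via (3.1) *)
    V.b]                      (* x_{n+1} = V_{n+1} b *)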

The programming package MATHEMATICA 8 [14, 15] has been used in this section. For numerical comparisons, we have used the methods (2.1) and (2.2), the sixth-order method of Krishnamurthy and Sen (2.4), and the new algorithm (3.1).

As noted in Section 1, such high-order schemes are very fruitful in providing robust preconditioners for linear systems. In fact, we believe that only one full iteration of the scheme (3.1), even for large sparse systems, and with SparseArray[mat] defined to reduce the computational load of the matrix multiplications, is enough to find acceptable preconditioners for linear systems. Moreover, using parallel computation with simple commands in MATHEMATICA 8 may reduce the computational burden even more.

Test Problem 1
Consider the linear system $Ax = b$, wherein $A$ is the large sparse complex matrix defined by

n = 1000;
A = SparseArray[{Band[{1, 120}] -> -2., Band[{950, 1}] -> 2. - I,
    Band[{-700, 18}] -> 1., Band[{1, 1}] -> 23., Band[{1, 100}] -> 0.2,
    Band[{214, -124}] -> 1., Band[{6, 800}] -> 1.1}, {n, n}, 0.];
The dimension of this matrix is 1000, with complex elements in its structure. The right-hand side vector is considered to be $b = (1, 1, \ldots, 1)^T$. Although page limitations do not allow us to provide the full form of such matrices, their structure can easily be drawn. Figure 1 illustrates the plot and the array plot of this matrix.

Now, we expect to find robust approximate inverses for $A$ in fewer iterations by the high-order iterative methods. Furthermore, as described in the previous sections, the approximate inverses can be used in left or right preconditioned systems.

Table 1 clearly shows the efficiency of the proposed iterative method (3.1) in finding approximate inverses, by listing the number of iterations and the obtained residual norm. When working with sparse matrices, an important factor which clearly affects the computational cost of an iterative method is the number of nonzero elements. The considered sparse complex matrix $A$ in Test Problem 1 has 3858 nonzero elements at the beginning. We list the number of nonzero elements produced by the different matrix multiplication-based schemes in Table 1. It is obvious that the new scheme is much better than (2.1) and (2.2); compared with (2.4), its number of nonzero elements is higher, but this is completely satisfactory in view of the better residual norm obtained. In fact, if one lets (2.4) cycle further in order to reach a residual norm correct up to seven decimal places, the resulting number of nonzero elements will exceed the corresponding one of (3.1).


Table 1

Iterative methods            (2.1)         (2.2)         (2.4)         (3.1)
Number of iterations         3             2             1             1
Number of nonzero elements   126035        137616        65818         119792
Residual norm                3.006×10⁻⁷    2.628×10⁻⁷    1.428×10⁻⁵    9.077×10⁻⁷

At this time, if the user is satisfied with the obtained residual norm for solving the large sparse linear system, the process can be stopped; otherwise, one can use the attained approximate inverse as a left or right preconditioner and solve the resulting preconditioned linear system, which has a low condition number, using the LinearSolve command. The code implementing Test Problem 1 for the method (3.1) is given as follows:

b = SparseArray[Table[1, {k, n}]];
DA = Diagonal[A];
B = SparseArray[Table[1/DA[[i]], {i, 1, n}]];
Id = SparseArray[{{i_, i_} -> 1}, {n, n}];
X = SparseArray[DiagonalMatrix[B]];  (* WAY 1 initial guess from the diagonal *)
Do[
  X1 = SparseArray[X];
  AV = Chop[SparseArray[A.X1]];
  X = Chop[(1/16) X1.SparseArray[120 Id + AV.(-393 Id + AV.(735 Id +
        AV.(-861 Id + AV.(651 Id + AV.(-315 Id + AV.(93 Id +
        AV.(-15 Id + AV)))))))]];
  Print[X];  (* displays the SparseArray object *)
  L[i] = Norm[b - SparseArray[A.(X.b)]];
  Print["The residual norm of the linear system solution is: ",
    Column[{i}, Frame -> All, FrameStyle -> Directive[Blue]], " ",
    Column[{L[i]}, Frame -> All, FrameStyle -> Directive[Blue]]],
  {i, 1}]

In what follows, we try to examine the matrix inverse-finding iterative methods of this paper on a dense matrix which is of importance in applications.

Test Problem 2
Consider the linear system $Ax = b$, wherein $A$ is the dense matrix defined by

n = 40;
A = Table[Sin[x*y]/(x + y) - 1., {x, n}, {y, n}];
The right-hand side vector is again considered to be $b = (1, 1, \ldots, 1)^T$. Figure 2 illustrates the plot and array plot of this matrix. Because such plots are not illustrative for dense matrices, we have drawn 3-D plots of the dense coefficient matrix of Test Problem 2 using zero-order interpolation, alongside its approximate inverse obtained from the method (3.1) after 11 iterations, in Figure 3. We further expect the product of $A$ and its approximate inverse to behave like the identity matrix, and this is clear in part (c) of Figure 3.

Table 2 shows the number of iterations and the obtained residual norm for the different methods, in order to reveal the efficiency of the proposed iteration. There is a clear reduction in computational steps for the proposed method (3.1) in contrast to the other well-known methods of various orders in the literature. Table 2 further confirms the use of such iterative inverse finders for constructing robust approximate inverse preconditioners: the condition number of the coefficient matrix in Test Problem 2 is 18137.2, which is quite high for a 40×40 matrix, yet, as reported in Table 2, the condition number of the preconditioned matrix after the specified number of iterations is very close to one.



Table 2

Iterative methods              (2.1)         (2.2)         (2.4)         (3.1)
Number of iterations           29            18            11            10
Condition number of V_n A      1.00135       1.01234       1.01780       1.00114
Residual norm                  6.477×10⁻⁷    5.916×10⁻⁶    8.517×10⁻⁶    5.482×10⁻⁷

We should note here that the computational time required for implementing all the methods in Tables 1 and 2 for Test Problems 1 and 2 is less than one second, and for this reason we have not listed timings herein. For the second test problem, we have used $V_0 = A^T/(\|A\|_1\|A\|_\infty)$.

5. Concluding Remarks

In the present paper, the author, along with the helpful suggestions of the reviewers, has developed an iterative method for finding the inverse of matrices. Note that such high-order iterative methods are very efficient for very ill-conditioned linear systems or for finding robust approximate inverse preconditioners. We have shown analytically that the suggested method (3.1) reaches seventh order of convergence. Moreover, the efficacy of the new scheme was illustrated numerically in Section 4 by applying it to a sparse matrix and an ill-conditioned dense matrix. All the numerical results confirm the theoretical aspects and show the efficiency of (3.1).

Acknowledgments

The author would like to take this opportunity to sincerely acknowledge the many valuable suggestions made by the two anonymous reviewers, especially the third suggestion of reviewer 1, which substantially improved the quality of this paper.

References

1. T. Sauer, Numerical Analysis, Pearson, 2nd edition, 2011.
2. N. J. Higham, Accuracy and Stability of Numerical Algorithms, Society for Industrial and Applied Mathematics, 2nd edition, 2002.
3. D. K. Salkuyeh and H. Roohani, "On the relation between the AINV and the FAPINV algorithms," International Journal of Mathematics and Mathematical Sciences, vol. 2009, Article ID 179481, 6 pages, 2009.
4. M. Benzi and M. Tuma, "Numerical experiments with two approximate inverse preconditioners," BIT, vol. 38, no. 2, pp. 234–241, 1998.
5. E. Chow and Y. Saad, "Approximate inverse preconditioners via sparse-sparse iterations," SIAM Journal on Scientific Computing, vol. 19, no. 3, pp. 995–1023, 1998.
6. G. Schulz, "Iterative Berechnung der reziproken Matrix," Zeitschrift für Angewandte Mathematik und Mechanik, vol. 13, pp. 57–59, 1933.
7. H.-B. Li, T.-Z. Huang, Y. Zhang, V.-P. Liu, and T.-V. Gu, "Chebyshev-type methods and preconditioning techniques," Applied Mathematics and Computation, vol. 218, no. 2, pp. 260–270, 2011.
8. E. V. Krishnamurthy and S. K. Sen, Numerical Algorithms—Computations in Science and Engineering, Affiliated East-West Press, New Delhi, India, 1986.
9. G. Codevico, V. Y. Pan, and M. V. Barel, "Newton-like iteration based on a cubic polynomial for structured matrices," Numerical Algorithms, vol. 36, no. 4, pp. 365–380, 2004.
10. W. Li and Z. Li, "A family of iterative methods for computing the approximate inverse of a square matrix and inner inverse of a non-square matrix," Applied Mathematics and Computation, vol. 215, no. 9, pp. 3433–3442, 2010.
11. H. B. Li, T. Z. Huang, S. Shen, and H. Li, "A new upper bound for spectral radius of block iterative matrices," Journal of Computational Information Systems, vol. 1, no. 3, pp. 595–599, 2005.
12. V. Y. Pan and R. Schreiber, "An improved Newton iteration for the generalized inverse of a matrix, with applications," SIAM Journal on Scientific and Statistical Computing, vol. 12, no. 5, pp. 1109–1130, 1991.
13. K. Moriya and T. Nodera, "A new scheme of computing the approximate inverse preconditioner for the reduced linear systems," Journal of Computational and Applied Mathematics, vol. 199, no. 2, pp. 345–352, 2007.
14. S. Wagon, Mathematica in Action, Springer, 3rd edition, 2010.
15. S. Wolfram, The Mathematica Book, Wolfram Media, 5th edition, 2003.

Copyright © 2012 F. Soleymani. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

