Abstract

A stable numerical method is proposed for matrix inversion. The new method is accompanied by a theoretical analysis establishing its twelfth-order convergence. A discussion of how to achieve this convergence order using an appropriate initial value is presented. The application of the new scheme to finding the Moore-Penrose inverse is also worked out analytically. The efficiency of the contributed iterative method is illustrated by solving some numerical examples.

1. Introduction and Preliminary Notes

It is well known that the inverse of a square matrix A, which is also known as a reciprocal matrix, is a matrix A^(-1) such that AA^(-1) = A^(-1)A = I, where I is the identity matrix. A nonsingular matrix can be inverted using methods such as Gauss-Jordan elimination or Gaussian elimination. Such schemes fall into the category of direct methods for this purpose.

The direct methods cannot properly handle sparse matrices possessing sparse inverses, which arise in the numerical solution of integral equations [1]. On the other hand, methods such as the conjugate gradient method for symmetric positive definite matrices and GMRES are effective for large sparse linear systems. However, a problem occurs when the coefficient matrix of the linear system being solved is ill-conditioned. To remedy this, one may apply a preconditioner to the system, although constructing a good preconditioner is not an easy task [2].

An iterative method for preconditioning is the SPAI (sparse approximate inverse preconditioner) algorithm [3]. Given a sparse matrix A, the SPAI algorithm computes a sparse approximate inverse M by minimizing ||AM - I|| in the Frobenius norm. The approximate inverse is thus computed explicitly and can be applied as a preconditioner within an iterative method.

There are other types of schemes which can be considered iteration methods although they have different structures; see, for example, [4, 5]. In such iterative methods, an approximate inverse of a matrix (or, if the matrix is rectangular, its Moore-Penrose inverse) is refined at each iteration. Consequently, users can solve linear systems (with multiple right-hand side vectors) iteratively, or use the approximate inverses in sensitivity analysis and in preconditioning a linear system. Methods of this type are the focus here.

Several known methods have been proposed for approximating the matrix inverse, such as those based on the so-called minimum residual iterations and the Hotelling-Bodewig algorithm [6]. The Hotelling-Bodewig algorithm is defined by V_(n+1) = V_n(2I - AV_n), n = 0, 1, 2, ..., (1) where I is the identity matrix. Schulz in [7] found that the eigenvalues of I - AV_0 must have magnitudes less than 1 to ensure convergence, which is a key element in designing more efficient Schulz-type iterative methods.
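For illustration, the Hotelling-Bodewig step V_(n+1) = V_n(2I - AV_n) can be sketched in a few lines of Python (the paper's own computations use Mathematica; the function name, starting guess, and tolerance below are illustrative assumptions, not part of the original):

```python
import numpy as np

def schulz_inverse(A, tol=1e-10, max_iter=100):
    """Hotelling-Bodewig (Schulz) iteration V_{n+1} = V_n (2I - A V_n).

    Converges quadratically when the eigenvalues of I - A V_0 have
    magnitudes less than 1 (Schulz's condition)."""
    n = A.shape[0]
    I = np.eye(n)
    # A standard computable starting guess: V_0 = A^T / (||A||_1 ||A||_inf).
    V = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    for _ in range(max_iter):
        V_new = V @ (2 * I - A @ V)
        if np.linalg.norm(V_new - V, np.inf) < tol:
            return V_new
        V = V_new
    return V

A = np.array([[4.0, 1.0], [2.0, 3.0]])
V = schulz_inverse(A)
print(np.allclose(V @ A, np.eye(2)))  # True
```

The starting guess shown keeps the spectrum of AV_0 inside the unit interval for this test matrix, so the quadratic convergence of (1) applies.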

In 2011, Li et al. in [8] theoretically investigated and also proposed the following third-order iterative method for finding A^(-1): V_(n+1) = V_n(3I - AV_n(3I - AV_n)), n = 0, 1, 2, .... (2)

The iterative method (2) can also be found in Chapter 5 of the textbook [9]. As another example from this primary source, the authors provided a twelfth-order method. For further reading, one may refer to [10-12].
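As a concrete instance of this third-order family, the Chebyshev-type matrix iteration V_(n+1) = V_n(3I - AV_n(3I - AV_n)) can be sketched as follows; this is the standard form such third-order Schulz-type schemes take, given here as an illustrative example (names and parameters are assumptions):

```python
import numpy as np

def chebyshev_inverse(A, tol=1e-10, max_iter=100):
    """Third-order Schulz-type (Chebyshev) iteration:
       V_{n+1} = V_n (3I - A V_n (3I - A V_n))."""
    n = A.shape[0]
    I = np.eye(n)
    # Same computable starting guess as for the second-order scheme.
    V = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    for _ in range(max_iter):
        AV = A @ V
        V_new = V @ (3 * I - AV @ (3 * I - AV))
        if np.linalg.norm(V_new - V, np.inf) < tol:
            return V_new
        V = V_new
    return V

A = np.array([[4.0, 1.0], [2.0, 3.0]])
V = chebyshev_inverse(A)
print(np.allclose(A @ V, np.eye(2)))  # True
```

Each cycle costs three matrix-matrix products (AV is reused), cubing the residual I - AV_n per step instead of squaring it.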

In this paper, we propose an efficient iterative method for finding A^(-1) numerically. The theoretical convergence of the method is studied. We also discuss the application of the new scheme to finding the Moore-Penrose inverse (also known as the pseudoinverse) of rectangular or singular matrices. It is also proven analytically that the new method is asymptotically stable in general. Some large-scale sparse matrices are taken as examples to demonstrate a clear reduction of the execution time when the new algorithm is applied.

The rest of the paper is organized as follows. The main contribution of this paper is given in Sections 2 and 3. Section 2 is devoted to the analysis of convergence, which shows that the method can be used for the pseudoinverse as well. Section 3 studies, thoroughly and for the first time, the stability of this high-order Schulz-type iterative method for finding generalized inverses. Section 4 covers the choice of the initial guess/value needed to preserve the convergence order. Subsequently, the method is examined numerically in Section 5. Finally, concluding remarks are presented in Section 6.

2. A New Method and Its Convergence Study

By applying the following nonlinear equation solver (for recent developments on root-finding methods, refer to [13]) to the nonlinear matrix equation AV = I, we obtain a fixed-point iteration (6) for matrix inversion. Simplifying (6) by proper factorizing yields the scheme (7), wherein the sequence of iterates {V_n} converges to A^(-1) for a good initial value V_0. Such a guess, V_0, will be discussed in Section 4.

Theorem 1. Let A be a nonsingular real or complex matrix. If the initial approximation V_0 satisfies ||I - AV_0|| < 1, then the iterative method (7) converges with at least twelfth order to A^(-1).

Proof. Let ||I - AV_0|| < 1, and for the sake of simplicity let E_n = I - AV_n stand for the residual matrix. It is straightforward to obtain E_(n+1) = E_n^12. Hence, we attain ||E_(n+1)|| <= ||E_n||^12. In addition, since ||E_0|| < 1, by relation (10) and mathematical induction, we obtain ||E_n|| <= ||E_0||^(12^n). Since ||E_0|| < 1, it follows that ||E_n|| -> 0 as n -> infinity. That is, I - AV_n -> 0 when n -> infinity, and thus V_n -> A^(-1), as n -> infinity.
Now, we must show the twelfth order of convergence using the sequence {V_n}. To do this, we denote e_n = A^(-1) - V_n as the error matrix in the iterative procedure (7). Since E_n = A e_n, we have A e_(n+1) = E_(n+1) = E_n^12 = (A e_n)^12. By multiplying by A^(-1) on the left side, we have e_(n+1) = A^(-1)(A e_n)^12, which, by taking an arbitrary matrix norm, results in ||e_(n+1)|| <= ||A||^11 ||e_n||^12. That is, the iteration (7) converges with at least twelfth order to A^(-1). This concludes the proof.

At this point, we discuss an application of (7) to finding generalized inverses. The Moore-Penrose inverse of a complex matrix A (also called the pseudoinverse), denoted by A^+, is the unique matrix X satisfying the following four Penrose equations: AXA = A, XAX = X, (AX)^* = AX, (XA)^* = XA, wherein A^* is the conjugate transpose of A. Ben-Israel and his colleagues in [14, 15] used the method (1) with the starting value V_0 = alpha*A^*, (19) where 0 < alpha < 2/rho(AA^*) and rho(.) denotes the spectral radius. The authors in [15] further showed that the sequence generated by (1) with this starting value converges to the pseudoinverse.
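The four Penrose equations can be verified numerically. The helper below is an illustrative sketch (not from the paper) that checks whether a candidate X is the Moore-Penrose inverse of A:

```python
import numpy as np

def is_moore_penrose(A, X, tol=1e-8):
    """Check the four Penrose equations:
    (1) AXA = A, (2) XAX = X, (3) (AX)^* = AX, (4) (XA)^* = XA."""
    return (np.allclose(A @ X @ A, A, atol=tol)
            and np.allclose(X @ A @ X, X, atol=tol)
            and np.allclose((A @ X).conj().T, A @ X, atol=tol)
            and np.allclose((X @ A).conj().T, X @ A, atol=tol))

A = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])  # rank-1, rectangular
X = np.linalg.pinv(A)
print(is_moore_penrose(A, X))  # True
```

The deliberately rank-deficient rectangular test matrix shows that the four equations single out A^+ even when A has no ordinary inverse.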

In the following theorem, we show analytically that for singular or rectangular matrices, scheme (7) with the initial approximation (19) converges to the Moore-Penrose generalized inverse.

Theorem 2. For the sequence {V_k} generated by the iterative Schulz-type method (7) with the initial value (19), and any k >= 0, it holds that (AV_k)^* = AV_k, (V_k A)^* = V_k A, V_k A A^+ = V_k, A^+ A V_k = V_k. (21)

Proof. We prove the conclusion by induction on k. For k = 0, considering (19), the first two equations of (21) can be demonstrated simply, and thus we only verify the last two equations, which follow from V_0 = alpha*A^* and the defining properties of A^+. Assume now that the conclusion holds for some k > 0; we show that it continues to hold for k + 1. Using the iterative method (7) together with the induction hypothesis, the first equality in (21) holds for k + 1, and the second equality can be proved in a similar way. For the third equality in (21), using the assumption V_k A A^+ = V_k and the iterative method (7), we can write down V_(k+1) A A^+ = V_(k+1). Hence, the third equality in (21) holds for k + 1. The fourth equality can similarly be proved, and the desired result follows.

The iterative method (7) is rich in matrix-matrix multiplications. Hence, in order to reduce the computational load of these multiplications when dealing with large sparse matrices, it is enough to store the iterates via SparseArray[mat], so that the arithmetic is restricted to the nonzero elements.
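The same idea carries over to other environments. The sketch below is an illustrative Python analogue of the Mathematica SparseArray usage (the tridiagonal test matrix and iteration count are arbitrary assumptions), keeping every product in sparse arithmetic:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import norm as spnorm

n = 200
# Sparse tridiagonal test matrix stored in CSR format.
A = sp.diags([-1.0, 4.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
I = sp.identity(n, format="csr")

# Schulz steps performed entirely on sparse operands; only nonzero entries
# take part in each product.  (The inverse itself fills in, so in practice
# one would also drop small entries to keep the iterate sparse.)
V = (A.T / (spnorm(A, 1) * spnorm(A, np.inf))).tocsr()
for _ in range(12):
    V = V @ (2 * I - A @ V)

print(np.allclose((A @ V).toarray(), np.eye(n), atol=1e-6))  # True
```

For this diagonally dominant test matrix the spectral radius of I - AV_0 is below 1, so a handful of quadratic steps reaches machine accuracy.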

Remark 3. The (inverse-finder) informational efficiency index is defined as the ratio of rho to theta, where rho and theta stand for the local order of convergence and the number of matrix-matrix products per cycle, respectively. The proposed method of this paper requires 8 matrix-matrix multiplications to achieve convergence order 12. The resulting informational index, 12/8 = 1.5, is higher than the corresponding indices of schemes (1), (2), (3), and (4).

Before stating the main theorem for finding the Moore-Penrose inverse, it is required to recall that for A with singular values sigma_1 >= sigma_2 >= ... >= sigma_r > 0 and the initial approximation (19) with 0 < alpha < 2/sigma_1^2, the eigenvalues of I - V_0 A associated with the range of A^* have magnitudes less than 1. We use this fact in the following theorem so as to find the theoretical order of the proposed method (7) for finding the Moore-Penrose inverse (see [16] for more details).

Theorem 4. For A with singular values sigma_1 >= sigma_2 >= ... >= sigma_r > 0, the sequence {V_k} generated by (7) with the initial approximation (19) converges to the Moore-Penrose inverse A^+ with twelfth order, provided that 0 < alpha < 2/sigma_1^2.

Proof. Set F_k = A^+ - V_k; the iteration (7) relates F_(k+1) to F_k. On the other hand, from the defining equations of the Moore-Penrose inverse A^+ and Theorem 2, we have A^+ A V_k = V_k and V_k A A^+ = V_k. The use of these relations, for any matrix norm, yields a bound on the residual. Applying (25), which implies that the initial residual has norm less than 1, and a similar reasoning as in (10)-(12), one can obtain the convergence of the residuals to zero.
Finally, using the properties of the Moore-Penrose inverse and Theorem 2, it is now easy to find the error inequality of the new scheme (7): ||A^+ - V_(k+1)|| <= ||A||^11 ||A^+ - V_k||^12. Thus, V_k -> A^+; that is, the sequence of (7) converges to the Moore-Penrose inverse with twelfth order as k -> infinity. This ends the proof.

3. Stability

We investigate the stability of (7) for finding A^+ (or, in the simplified case, A^(-1)) in a neighborhood of the solution. Note that if the iteration is not self-correcting, that is, if errors made at one stage are not subsequently damped, then the inevitable rounding errors introduced into the iteration may accumulate to the point where they overwhelm the answer. Thus, we should either show that the proposed method is self-correcting or furnish an analysis showing that rounding errors remain under control. This is done in what follows. In fact, we analyze how a small perturbation at the kth iterate is amplified or damped along the subsequent iterates. Note that this procedure has recently been applied to a general family of methods for matrix inversion in [17].

Theorem 5. The sequence generated by (7) with the initial approximation (19) is asymptotically stable for finding the Moore-Penrose generalized inverse.

Proof. Let Delta_k denote the numerical perturbation introduced at the kth iterate of (7), so that the computed iterate is V_k + Delta_k. Here, we perform a first-order error analysis; that is, we formally neglect quadratic or higher terms such as (Delta_k)^2. This formal manipulation is meaningful if Delta_k is sufficiently small. Substituting the perturbed iterate into (7), discarding the terms that are very close to the zero matrix, carrying out some algebraic manipulation with the identities of Theorem 2, and using the fact that V_k is close to A^+ for sufficiently large k, we attain the bound (35) on Delta_(k+1) in terms of Delta_k. From (35), we can conclude that the perturbation at the (k+1)th iterate is bounded. Therefore, the sequence generated by (7) is asymptotically stable. This ends the proof.

Corollary 6. By using the matrix identity in (27), bound (35) can be simplified further into (36); using (36) recursively, one attains the very simple bound (37).

Remark 7. In the case of finding the regular inverse of a nonsingular matrix, that is, when A^+ = A^(-1), (37) shows that the perturbation remains bounded, and so the matrix method is strongly numerically stable. Consequently, in the case of finding A^+, the matrix method (7) is asymptotically stable. Of course, since the iteration is not self-correcting in the general case, proceeding beyond convergence may cause a serious increase in error.

4. Initial Value

The iterative methods discussed so far are sensitive to the choice of the initial guess/value used to start the process. As a matter of fact, the high accuracy and efficiency of such iterative algorithms are guaranteed only if the initial value satisfies the appropriate condition given in Theorem 1. Thus, in order to preserve the convergence order, we present some ways from the literature to remedy this difficulty, although an efficient way for square or rectangular matrices is (19).

For a symmetric positive definite (SPD) matrix A, one can easily use the Householder-John theorem to attain a suitable initial value V_0 [8].

If the square matrix A is diagonally dominant, one may apply the approach given in [18] and use V_0 = diag(1/a_11, 1/a_22, ..., 1/a_nn), wherein a_ii is the ith diagonal entry of A. Note that this choice is particularly fruitful when solving PDE discretizations. Some further generalizations of such an initial matrix are given in [19].
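A short sketch of this diagonal starting value, with a hypothetical diagonally dominant test matrix, confirms that it satisfies the convergence condition of Theorem 1:

```python
import numpy as np

def diagonal_initial_guess(A):
    """V_0 = diag(1/a_11, ..., 1/a_nn) for a diagonally dominant A."""
    return np.diag(1.0 / np.diag(A))

# Strictly diagonally dominant matrix, e.g., from a 1-D finite-difference stencil.
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
V0 = diagonal_initial_guess(A)

# Spectral radius of I - A V0 below 1 triggers Schulz-type convergence.
rho = max(abs(np.linalg.eigvals(np.eye(3) - A @ V0)))
print(rho < 1)  # True
```

For this stencil the spectral radius is about 0.35, comfortably inside the unit disk.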

Although the two abovementioned ways are efficient, they cannot be applied to find an initial guess/value for the inverse of a general input matrix. For instance, for large-scale matrices which do not possess the above structures, they may fail to provide convergence. Hence, we here take into account the suboptimal way (40) of producing V_0, as given by Pan and Schreiber in [20].
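Since formula (40) itself is not reproduced above, the sketch below uses a commonly quoted computable scaling of this kind, V_0 = A^*/(||A||_1 ||A||_inf), as an assumed stand-in rather than a verbatim transcription of (40). Because sigma_1(A)^2 <= ||A||_1 ||A||_inf, this scaling places the spectrum of I - AV_0 inside the unit disk for any nonsingular A:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))   # generic dense test matrix

# Always-computable scaling: V_0 = A^* / (||A||_1 ||A||_inf).
V0 = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))

# All eigenvalues of I - A V0 then have magnitudes strictly less than 1.
rho = max(abs(np.linalg.eigvals(np.eye(50) - A @ V0)))
print(rho < 1)  # True
```

The eigenvalues of A V0 here are sigma_i^2 / (||A||_1 ||A||_inf), which lie in (0, 1] for a nonsingular A, so the residual I - AV_0 is a contraction.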

We note that choosing the initial value as above satisfies the condition needed to enter the convergence phase. Some ways of updating the initial matrix for sparse matrices are put forward in [21].

In what follows, we provide an algorithm for rapidly improving an initial matrix for square matrices. In fact, an LU factorization can be computed rapidly for almost all kinds of square nonsingular matrices in linear algebra programming packages. In Mathematica, the one-argument command LinearSolve[A] provides an LU factorization of A very quickly. Then, by applying this LU decomposition to the columns of the identity matrix one at a time, one can update the columns of an initial matrix produced by other strategies such as (40) with the corresponding columns of the true inverse. Such a procedure is illustrated in Algorithm 1, and the idea is implemented in Algorithm 2.

(1) Given the nonsingular matrix A
(2) construct an initial matrix V_0 (e.g., by (40))
(3) obtain the LU decomposition of the matrix A, for example LU = LinearSolve[A]
(4) for i = 1, ..., num, solve LU[e_i], where e_i is the ith column of the identity matrix
(5) update the corresponding column of V_0 with this column of the real A^(-1)
end for.

initial1[A_, num_] :=
  Quiet@Module[{n = Dimensions[A][[1]], i = 1, LU = LinearSolve[A], Id, mat, ith},
    Id = SparseArray[{{k_, k_} -> 1.}, {n, n}];
    mat = (1/Norm[A, "Frobenius"])*ConjugateTranspose[A];
    While[i <= num, {ith = LU[Id[[All, n + 1 - i]]];
      mat[[All, n + 1 - i]] = ith; i++;}]; mat];

initial2[A_, num_] :=
  Quiet@Module[{n = Dimensions[A][[1]], i = 1, LU, Id, mat, ith},
    Id = SparseArray[{{k_, k_} -> 1.}, {n, n}, 0];
    LU = LinearSolve[A];
    mat = Quiet[initial1[A, num]];
    While[i <= num, {ith = LU[Id[[All, i]]];
      mat[[All, i]] = ith; i++;}]; mat];

The function initial1[A_,num_] takes the nonsingular matrix A and the number of columns that users wish to update, replacing the last num columns of the approximate inverse with columns of the real inverse, while the function initial2[A_,num_] works from both ends. That is, if, for example, num=10, it updates the first and the last 10 columns of the approximate inverse as rapidly as possible.
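A Python analogue of initial1 (an illustrative re-implementation using SciPy's LU routines, not the paper's code) makes the column-updating idea explicit: factor A once, then overwrite selected columns of the approximate inverse with exact columns of A^(-1):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def refine_columns(A, V, num):
    """Overwrite the last `num` columns of the approximate inverse V with
    the exact columns of A^{-1}, reusing one LU factorization of A."""
    n = A.shape[0]
    lu = lu_factor(A)                 # factor once, reuse for every column
    for i in range(num):
        col = n - 1 - i
        e = np.zeros(n)
        e[col] = 1.0                  # column of the identity, from the end
        V[:, col] = lu_solve(lu, e)   # exact column of the true inverse
    return V

A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
V = A.T / np.linalg.norm(A, 'fro')**2   # crude starting approximation
V = refine_columns(A, V, 2)
print(np.allclose(V[:, 1:], np.linalg.inv(A)[:, 1:]))  # True
```

Each refined column costs only one triangular solve, so updating a few columns is far cheaper than forming the whole inverse.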

Next, we conduct some numerical tests to support the theoretical results given in Section 2 using the initial value discussed herein.

5. Computational Aspects

In this section, some experiments are presented to demonstrate the capability of the proposed method. The programming package Mathematica 8 [22] has been used in the demonstrations, and all numerical computations are carried out in machine precision.

Large sparse matrices cannot be handled easily in dense form; their storage must be done in sparse form, for both input and output, to remain accessible and economical in real applications. Methods like (7) are powerful in finding an approximate inverse, or a robust approximate inverse preconditioner, in a small number of steps and a short computational time, with the approximate inverse also produced in sparse form.

The running time was measured with the command AbsoluteTiming[], which reports the elapsed CPU time (in seconds) for each experiment. The computer used has the following specification: Microsoft Windows XP, Intel(R) Pentium(R) 4 CPU at 3.20 GHz, with 4 GB of RAM.

Experiment 1. This test is devoted to the application of the Schulz-type iterative methods to finding the pseudoinverse of 30 large random complex matrices of size 1500 x 1800, defined as shown in Algorithm 3.

m = 1500; k = 1800; number = 30; SeedRandom[];
Table[A[l] = SparseArray[{Band[{400, 10}, {m, k}] -> Random[] - I,
     Band[{10, 200}, {m, k}] -> {1.1, -Random[]},
     Band[{-60, 100}] -> -3.02, Band[{-100, 500}] -> 3.1 I}, {m, k}, 0.];, {l, number}];
Id = SparseArray[{{i_, i_} -> 1.}, {m, m}, 0.];

The results of comparisons for these random matrices of size 1500 x 1800 are reported in Figures 1, 2, and 3 in terms of the number of iterations and the computational time. The compared methods are (1), denoted by "Schulz"; (2), denoted by "Chebyshev"; (4), denoted by "KMS"; and the new iterative scheme (7), denoted by "PM". A saving in the elapsed time under the chosen stopping criterion can be observed for the studied method (7). In this test, the initial matrix has been computed for each random matrix by V0[j] = ConjugateTranspose[A[j]]*(1./((SingularValueList[A[j], 1][[1]])^2)), while the maximum number of iterations is set to 100.
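The starting value used in this experiment, V_0 = A^*/sigma_1^2, can be exercised with the classical second-order Schulz step on a small random rectangular matrix; this is an illustrative Python sketch (the paper's method (7) follows the same pattern with a higher-order polynomial in AV):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 5))              # small rectangular test matrix

# Starting value V_0 = A^* / sigma_1^2, matching the experiment's V0[j].
sigma1 = np.linalg.svd(A, compute_uv=False)[0]
V = A.conj().T / sigma1**2

# Second-order Schulz step (1); higher-order members of the family
# replace 2I - A V by a longer polynomial in A V.
for _ in range(25):
    V = V @ (2 * np.eye(8) - A @ V)

print(np.allclose(V, np.linalg.pinv(A), atol=1e-8))  # True
```

With this scaling the nonzero eigenvalues of AV_0 lie in (0, 1], so the iterates converge to the pseudoinverse rather than diverging, in line with the Ben-Israel condition recalled in Section 2.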

6. Concluding Remarks

It is well known that the matrix inverse and generalized inverses are important in applied fields of natural science, arising in the solution of various systems of linear and nonlinear equations, eigenvalue problems, and linear least squares problems. Iterative methods are often effective, especially for large-scale systems with sparsity, and Schulz-type methods are great tools for preconditioning such systems or for finding pseudoinverses.

The Hotelling-Bodewig algorithm is simple to describe and analyze and is numerically stable. This motivated the development of an iterative method of the same type in this paper.

In this paper, we have shown that the suggested method (7) reaches twelfth order of convergence. The stability of the new method was also studied in detail, establishing that the new scheme is asymptotically stable. The efficacy of the new scheme was illustrated numerically in Section 5. Based on the numerical results obtained, one can conclude that the presented method is useful. Further extensions of the new scheme to other generalized inverses (such as the ones in [23, 24]) are left for future work.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The research of the first author (F. Khaksar Haghani) is financially supported by Shahrekord Branch, Islamic Azad University, Shahrekord, Iran.