Abstract

A method with a high convergence rate for finding approximate inverses of nonsingular matrices is suggested and established analytically. An extension of the introduced computational scheme to general square matrices is defined. The extended method can be used for finding the Drazin inverse. The application of the scheme to large sparse test matrices, alongside its use in preconditioning linear systems of equations, is presented to clarify the contribution of the paper.

1. Introduction

Computing the inverse of large nonsingular matrices is a difficult and time-consuming task, so the application of high-order algorithms to this problem is very desirable. Generally speaking, in a wide variety of topics one must compute the inverse, or more generally the generalized inverses, to comprehend significant features of the problem involved [1]. An example arises in phased-array radar, where target tracking is a recursive prediction-correction process in which Kalman filtering is extensively used; see [2, 3]. Target equations are modeled explicitly such that the position, velocity, and potentially higher derivatives of each measurement are approximated by the track filter as a state vector. The estimation error associated with the state vector is modeled by a covariance matrix, which is then used in subsequent computations. To be more precise, this matrix is updated in each iteration of the track filter, and finding the inverse in the next iteration can make use of the inverse from the present iteration. In such circumstances, fast and efficient iterative algorithms are required.

There are several techniques to tackle this problem, which basically divide into two main classes: direct solvers, such as Gaussian elimination with partial pivoting (GEPP), which require a massive amount of computation and memory for large scale problems, and iterative methods of the Schulz-type class, in which an approximation of the matrix inverse (by using a threshold) can be refined per step up to the desired accuracy.

Almost all direct methods for matrix inversion require high accuracy in the computations to attain proper solutions, as they are not tolerant of errors in the computed matrices. In contrast, an iterative method compensates for individual and accumulated round-off errors, since it is a process of successive refinement.

In this paper, we focus on presenting and demonstrating a new iterative method to find approximate inverse matrices as fast as possible, with close attention to reducing the computational time. Toward this goal, a theoretical discussion will also be given to show the behavior of the proposed scheme. An interesting point of the contribution is that it can easily be applied to complex matrices as well as to the computation of the Drazin inverse.

To clarify the procedure, we now recall some well-known methods. Perhaps the most common technique to compute the inverse of a nonsingular complex matrix $A$ is the Schulz method given in [4] as follows:
$$V_{n+1} = V_n(2I - AV_n), \qquad n = 0, 1, 2, \ldots, \tag{1}$$
where $I$ is the identity matrix of the same dimension as $A$. The scheme (1) became popular in the 1980s due to the emergence of parallel machines.

Such iterative methods are sensitive to the initial guess/value $V_0$ used to start the process and converge to $A^{-1}$. In practice, the Schulz-type methods are efficient (especially for structured matrices), but a difficulty lies in the initial approximation of $A^{-1}$. This need was fulfilled by providing some appropriate initial approximations in the literature. For example, Rajagopalan in [5] gave an initial approximation for the inverse matrix by considering different norms:
$$V_0 = \frac{A^T}{\|A\|_1\,\|A\|_\infty}. \tag{2}$$
In fact, by choosing (2), we attain
$$\rho(I - AV_0) = \max_i\left|1 - \frac{\sigma_i^2}{\|A\|_1\|A\|_\infty}\right|, \tag{3}$$
where $\sigma_i$ are the singular values of $A$. Based on the fact that
$$\|A\|_2^2 \le \|A\|_1\,\|A\|_\infty, \tag{4}$$
we could have $\rho(I - AV_0) < 1$, which suggests that by choosing (2), the iterative scheme (1) will be almost always convergent for regular matrices.

Some other ways to choose the initial approximation are listed in Table 1, where $A^T$ and $A^*$ are the transpose and the conjugate transpose of the complex matrix $A$, respectively, and $n$ stands for the size of the square matrix.

A vast discussion on choosing the initial approximation is given in [6]. For instance, Pan and his coworkers discussed possible ways to reduce the computational load of Schulz-type iteration methods for structured matrices such as Toeplitz or Hankel matrices; see, for example, [7]. To illustrate further, for a symmetric positive definite (SPD) matrix $A$, one can choose $V_0$ as follows:
$$V_0 = \alpha I, \qquad 0 < \alpha < \frac{2}{\lambda_{\max}(A)}, \tag{5}$$
where $\lambda_{\max}(A)$ denotes the largest eigenvalue of $A$, which guarantees $\rho(I - AV_0) < 1$.

Another interesting choice, based on [8], applies to diagonally dominant matrices and is fruitful when dealing with the large scale sparse systems arising from the discretization of differential equations:
$$V_0 = \mathrm{diag}\!\left(\frac{1}{a_{11}}, \frac{1}{a_{22}}, \ldots, \frac{1}{a_{nn}}\right), \tag{6}$$
with $a_{ii}$ as the diagonal elements of $A$.
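For illustration, the choices (2) and (6) are straightforward to realize in the Mathematica register used later in Section 5. The following is a minimal sketch; the helper names are ours, not the paper's:

V0Rajagopalan[A_?MatrixQ] := Transpose[A]/(Norm[A, 1] Norm[A, Infinity]);  (* choice (2) *)
V0Diagonal[A_?MatrixQ] := DiagonalMatrix[1./Normal[Diagonal[A]]];  (* choice (6), for diagonally dominant A *)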

Let us now review some high-order iterative methods for matrix inversion. The motivation for higher order methods is the fact that (1) is too slow at the beginning of the process, before reaching the convergence phase, for general matrices, and this increases the computational load of the whole matrix-inversion algorithm [9].

Li et al. in [10] investigated a third-order iteration of the form
$$V_{n+1} = V_n\big(3I - AV_n(3I - AV_n)\big) \tag{7}$$
and also proposed another iterative method of the same order for finding $A^{-1}$, given by
$$V_{n+1} = \left[I + \frac{1}{4}(I - V_nA)(3I - V_nA)^2\right]V_n. \tag{8}$$

Note that a general procedure for constructing such methods was given in ([11], Chapter 5). Krishnamurthy and Sen provided the following fourth-order method:
$$V_{n+1} = V_n(I + R_n)(I + R_n^2), \tag{9}$$
in which $R_n = I - AV_n$. As another example, a ninth-order method of this family can be presented as
$$V_{n+1} = V_n\big(I + R_n + R_n^2\big)\big(I + R_n^3 + R_n^6\big). \tag{10}$$

In the sequel, we present a new iteration for matrix inversion. The following section covers our main contribution, a new efficient ninth-order inverse-finding iterative method, and proves the main theorem. Next, in Section 3, we discuss the complexity of the iterative methods to identify, theoretically, the most efficient method. In Section 4, we analytically discuss the application of the new algorithm to the computation of the Drazin inverse, which is of interest in numerical analysis. Section 5 applies the suggested iteration to finding robust approximate inverses of large sparse matrices with real or complex entries in detail, and also presents the application of the new scheme to the preconditioning of practical problems; a clear reduction in the computational time needed to attain the desired accuracy will be observed there. Finally, in Section 6, our concluding remarks are furnished.

2. A New Method for Matrix Inversion

A new scheme can be designed by applying an efficient nonlinear (scalar) equation solver to the matrix equation $AV = I$ and then obtaining an iterative process via a proper factorization. In this way, applying such a scheme to $AV = I$, we are able to find the fixed-point iteration
$$V_{n+1} = V_n\left(\sum_{k=0}^{8}(I - AV_n)^k + \frac{7}{8}(I - AV_n)^9 + \frac{1}{2}(I - AV_n)^{10} + \frac{1}{8}(I - AV_n)^{11}\right). \tag{12}$$
Now, by simplification (factorization of the polynomial in $AV_n$), we suggest the following matrix iteration:
$$\psi_n = -7I + AV_n\big(9I + AV_n(-5I + AV_n)\big), \qquad \phi_n = AV_n\,\psi_n,$$
$$V_{n+1} = -\frac{1}{8}\,V_n\,\psi_n\big(12I + \phi_n(6I + \phi_n)\big), \qquad n = 0, 1, 2, \ldots, \tag{13}$$
wherein $I$ is the identity matrix and the sequence of iterates $\{V_n\}_{n\ge 0}$ converges to $A^{-1}$ under the condition given below.

In numerical mathematics, it is both useful and essential to know the behavior of an approximation method. Therefore, we now establish the order of convergence of (13) in Theorem 1.

Theorem 1. Let $A$ be a nonsingular complex matrix. If the initial approximation $V_0$ satisfies
$$\|I - AV_0\| < 1, \tag{14}$$
then the iterative method (13) converges with ninth order to $A^{-1}$.

Proof. We use the notation $E_n = I - AV_n$ and subsequently $E_{n+1} = I - AV_{n+1}$. Then, a direct algebraic manipulation of (13) yields
$$E_{n+1} = I - AV_{n+1} = \frac{1}{8}(I - AV_n)^9(2I - AV_n)^3 = \frac{1}{8}E_n^9(I + E_n)^3. \tag{15}$$
Hence, by taking an arbitrary matrix norm on both sides of (15), we attain
$$\|E_{n+1}\| \le \frac{1}{8}\|E_n\|^9\big(1 + \|E_n\|\big)^3. \tag{16}$$
In addition, since $\|E_0\| < 1$ implies $\frac{1}{8}(1 + \|E_0\|)^3 < 1$, by relation (16) we obtain that
$$\|E_1\| \le \frac{1}{8}\|E_0\|^9(1 + \|E_0\|)^3 < \|E_0\|^9 < 1. \tag{17}$$
Now, if we consider $\|E_n\| < 1$, therefore
$$\|E_{n+1}\| \le \|E_n\|^9 < 1. \tag{18}$$
Using mathematical induction, we obtain
$$\|E_n\| \le \|E_0\|^{9^n}, \qquad n \ge 0. \tag{19}$$
Furthermore, we get that
$$\lim_{n\to\infty}\|E_n\| = 0. \tag{20}$$
That is, $AV_n \to I$ when $n \to \infty$, and
$$\lim_{n\to\infty} V_n = A^{-1}. \tag{21}$$
Thus, the new method (13) converges to the inverse of the matrix $A$ in the case $\rho(I - AV_0) < 1$, where $\rho(\cdot)$ is the spectral radius. Now, we prove that the order of convergence of the sequence $\{V_n\}_{n\ge 0}$ is at least nine. Let $\varepsilon_n = V_n - A^{-1}$ denote the error matrix; afterwards
$$A\varepsilon_n = AV_n - I = -E_n. \tag{22}$$
The identity (22) in conjunction with (15) implies that
$$A\varepsilon_{n+1} = -E_{n+1} = -\frac{1}{8}E_n^9(I + E_n)^3 = \frac{1}{8}(A\varepsilon_n)^9(I - A\varepsilon_n)^3. \tag{23}$$
Therefore, using the invertibility of $A$, it follows immediately that
$$\varepsilon_{n+1} = \frac{1}{8}A^{-1}(A\varepsilon_n)^9(I - A\varepsilon_n)^3. \tag{24}$$
By taking any subordinate norm of (24), we obtain
$$\|\varepsilon_{n+1}\| \le \frac{1}{8}\,\|A^{-1}\|\,\|A\|^9\,\|I - A\varepsilon_n\|^3\,\|\varepsilon_n\|^9. \tag{25}$$
Consequently, it is proved that the iterative formula (13) converges to $A^{-1}$, and the order of this method is at least nine.
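As a quick independent sanity check of the residual identity (15), one can verify it symbolically in the commuting scalar model, where $a = 1$, $v = 1 - e$, and one step of (13) reduces to a scalar map. This is only a check of the algebra, not a substitute for the proof; the function name nextResidual is ours:

(* Scalar model of one step of (13): with a = 1 and residual e = 1 - a v,
   the new residual should equal (1/8) e^9 (1 + e)^3, as in (15). *)
nextResidual[e_] := Module[{x1 = 1 - e, x2, x3},
  x2 = -7 + x1 (9 + x1 (-5 + x1));
  x3 = x1 x2;
  1 - (-1/8) (1 - e) x2 (12 + x3 (6 + x3))];
Simplify[nextResidual[e] - (1/8) e^9 (1 + e)^3]  (* evaluates to 0 *)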

The Schulz-type iterations are strongly numerically stable; that is, they have a self-correcting characteristic and are essentially based upon matrix multiplication per iterative step. Such multiplication is effectively parallelizable, in particular for structured matrices represented in compressed form.

The iterative scheme (13) can efficiently be combined with sparse techniques in order to reduce the computational load of the matrix-by-matrix multiplications per step. We should also point out that even if the matrix $A$ is singular, the Schulz-type methods, including the scheme (13), converge to the Moore-Penrose inverse when a proper initial matrix is used. A full discussion of this feature of this type of iterative methods is given in [12].

Note that the matrix $V_n$ produced by (13), under a certain condition (when $AV_n = V_nA$), may be applied not only to the left preconditioned linear system $V_nAx = V_nb$ but also to the right preconditioned linear system $AV_ny = b$, where $x = V_ny$. In fact, an important application of the new method (13) is in preconditioning linear systems of equations. The experimental results in Section 5 will show that the preconditioner obtained from (13) may lead to nicely clustered eigenvalue distributions of the preconditioned matrices and, hence, to fast convergence of preconditioned Krylov subspace iteration methods, such as GMRES and BiCGSTAB, for solving some classes of large sparse systems of linear equations.
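As an illustration of the intended use, the following is a minimal sketch under the assumption that an approximate inverse V has already been produced by (13); the option names follow Mathematica's documented Krylov interface for LinearSolve:

(* Sketch: use the approximate inverse V as a preconditioner inside
   Mathematica's built-in Krylov solver with BiCGSTAB. *)
x = LinearSolve[A, b,
   Method -> {"Krylov", Method -> "BiCGSTAB", "Preconditioner" -> (V.# &)}];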

3. Complexity of the Methods

Let us consider the computational complexity of the iterative processes (1), (7), (8), (9), (10), and (13), since they all converge to $A^{-1}$ under the same condition. From a theoretical standpoint, and assuming a uniform cost for the arithmetic operations, typical of floating point computations, we consider the inverse-finder informational efficiency index. It uses two parameters, $\rho$ and $\theta$, which stand for the rate of convergence and the number of matrix-by-matrix multiplications in floating point arithmetic, respectively. The comparative index can then be expressed as
$$\mathrm{IIEI} = \frac{\rho}{\theta}. \tag{26}$$

Hence, from a theoretical point of view, a favorable method must reach a high order $\rho$ with as few matrix multiplications $\theta$ as possible (i.e., attain as large a value of (26) as possible).
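For instance, counting the matrix-by-matrix products in the factored form (13) gives $\theta = 7$: one for $AV_n$, two inside $\psi_n$, one for $\phi_n = AV_n\psi_n$, one for $\phi_n(6I + \phi_n)$, and two in the final product. Under the definition (26) assumed above, a minimal worked comparison against the Schulz scheme (1), which uses two products per step, reads
$$\mathrm{IIEI}_{(1)} = \frac{2}{2} = 1, \qquad \mathrm{IIEI}_{(13)} = \frac{9}{7} \approx 1.286.$$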

In Table 2, we compare the number of matrix multiplications per step, the rate of convergence, and the index (26) for the different methods. The results show that the new method established in Section 2 is better than the others. In fact, by comparing these results, one can see that the iterative process (13) reduces the computational complexity by using a smaller number of basic operations and leads to a better equilibrium between high speed and operational cost.

4. Application in Finding the Drazin Inverse

In 1958, Drazin in [13] introduced a different kind of generalized inverse in associative rings and semigroups, one that does not have the reflexivity property but commutes with the element. The importance of this kind of inverse and of its computation was later fully brought out by Wilkinson in [14]. This motivated many authors to develop direct or iterative methods for this important problem; see, for example, [1].

Definition 2. The smallest nonnegative integer $k$, such that $\mathrm{rank}(A^{k+1}) = \mathrm{rank}(A^k)$, is called the index of $A$ and is denoted by $k = \mathrm{ind}(A)$.

Definition 3. Let $A$ be an $n \times n$ complex matrix; the Drazin inverse of $A$, denoted by $A^D$, is the unique matrix $X$ satisfying
$$A^{k+1}X = A^k, \qquad XAX = X, \qquad AX = XA, \tag{27}$$
where $k = \mathrm{ind}(A)$ is the index of $A$.

Note that if $\mathrm{ind}(A) = 1$, the matrix $A^D$ is called the group inverse of $A$. Also, if $A$ is nonsingular, then it is easily seen that $\mathrm{ind}(A) = 0$ and $A^D = A^{-1}$. Note that the idempotent matrix $AA^D$ is the projector on $R(A^k)$ along $N(A^k)$, where $R(A^k)$ denotes the range of $A^k$ and $N(A^k)$ is the null space of $A^k$.
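For later use, a small numerical checker for the three conditions in (27) can be sketched as follows; the function name and the $10^{-8}$ threshold are our own choices, not the paper's:

(* Check the defining Drazin equations (27) up to a numerical threshold. *)
DrazinConditionsQ[A_, X_, k_] := Module[{Ak = MatrixPower[A, k]},
  And[Norm[MatrixPower[A, k + 1].X - Ak] < 10^-8,  (* A^(k+1) X = A^k *)
      Norm[X.A.X - X] < 10^-8,                     (* X A X = X *)
      Norm[A.X - X.A] < 10^-8]]                    (* A X = X A *)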

Wei in [15] proved that the general solution of the square singular linear system $Ax = b$, with $b \in R(A^k)$, can be obtained using the Drazin inverse as $x = A^Db$, where $k = \mathrm{ind}(A)$.

In 2004, Li and Wei in [16] proved that the matrix method of Schulz (1) can be used to find the Drazin inverse of square matrices with real or complex spectra. They proposed the initial matrix
$$V_0 = \alpha A^k, \tag{28}$$
where the parameter $\alpha$ must be chosen so that the condition $\|AA^D - AV_0\| < 1$ is satisfied. Using an initial matrix of the form (28) yields a matrix method for finding the famous Drazin inverse with quadratic convergence.

As a consequence, we can present the iterative method (13), with its ninth order of convergence, for finding the Drazin inverse, where the initial approximation is chosen as
$$V_0 = \frac{2}{\mathrm{tr}(A^{k+1})}\,A^k, \tag{29}$$
wherein $\mathrm{tr}(\cdot)$ stands for the trace of an arbitrary square matrix.

In what follows, we use the following auxiliary results.

Proposition 4 (see [17]). Let $A \in \mathbb{C}^{n\times n}$ and $\varepsilon > 0$ be given. There is at least one matrix norm $\|\cdot\|$ such that
$$\rho(A) \le \|A\| \le \rho(A) + \varepsilon, \tag{30}$$
where $\rho(A)$ denotes the spectral radius of $A$.

Proposition 5 (see [18]). If $P$ denotes the projector on a space $R$ along a space $N$, then (i) $PA = A$ if and only if $R(A) \subseteq R$; (ii) $AP = A$ if and only if $N \subseteq N(A)$.

Theorem 6. Let $A$ be a singular square matrix. Also, suppose that the initial approximation $V_0$ is chosen by means of (28). Then the sequence $\{V_n\}_{n\ge 0}$ defined by the iterative method (13) satisfies the following error estimate when finding the Drazin inverse:
$$\|A^D - V_n\| \le \|A^D\|\,\|AA^D - AV_0\|^{9^n}. \tag{31}$$

Proof. Consider the notation $E_n = AA^D - AV_n$ and subsequently the residual matrix $E_0 = AA^D - AV_0$. Then, similarly as in (15), we get
$$E_{n+1} = AA^D - AV_{n+1} = \frac{1}{8}E_n^9\big(AA^D + E_n\big)^3. \tag{32}$$
By taking an arbitrary matrix norm on both sides of (32), we attain
$$\|E_{n+1}\| \le \frac{1}{8}\|E_n\|^9\,\|AA^D + E_n\|^3. \tag{33}$$
In addition, since $\|E_0\| < 1$, by relation (33), we obtain that $\|E_1\| \le \|E_0\|^9$. Similarly, $\|E_2\| \le \|E_0\|^{9^2}$. Using mathematical induction, we obtain $\|E_n\| \le \|E_0\|^{9^n}$ for each $n \ge 0$. This implies that
$$\lim_{n\to\infty}\|AA^D - AV_n\| = 0. \tag{34}$$
Since $V_0$ is chosen as in (28), $AV_0 = V_0A$ immediately follows. Further, using this fact in conjunction with (12), which implies that each $V_n$ is a polynomial in $A$ and hence $AV_n = V_nA$, we conclude that
$$R(V_n) \subseteq R(A^k), \qquad n \ge 0. \tag{35}$$
Similarly, if we rewrite (13) in the form
$$V_{n+1} = -\frac{1}{8}\,\bar\psi_n\big(12I + \bar\phi_n(6I + \bar\phi_n)\big)\,V_n, \qquad \bar\psi_n = -7I + V_nA\big(9I + V_nA(-5I + V_nA)\big), \quad \bar\phi_n = V_nA\,\bar\psi_n, \tag{36}$$
it is not difficult to verify that
$$N(A^k) \subseteq N(V_n), \qquad n \ge 0. \tag{37}$$
Now, an application of the well-known results from [12] in conjunction with Proposition 5 and (35), (37) immediately yields
$$V_nAA^D = V_n = AA^DV_n. \tag{38}$$
Therefore, the error matrix satisfies
$$A^D - V_n = A^D\big(AA^D - AV_n\big). \tag{39}$$
From the last identity and (34), we have
$$\|A^D - V_n\| \le \|A^D\|\,\|AA^D - AV_n\| \le \|A^D\|\,\|AA^D - AV_0\|^{9^n}, \tag{40}$$
which is a confirmation of (31).

The following result is a consequence of Theorem 6.

Corollary 7. If the conditions of Theorem 6 are satisfied and the initial iteration $V_0$ is chosen such that
$$\|AA^D - AV_0\| < 1, \tag{42}$$
then the iterative method (13) converges to $A^D$.

Therefore, our goal is to find initial approximations satisfying (42). In accordance with Proposition 4, the parameter $\alpha$ in (28) must satisfy the following inequality to ensure convergence in the Drazin inverse case:
$$\max_{1\le i\le t}\big|1 - \alpha\lambda_i^{k+1}\big| < 1, \tag{43}$$
where $\lambda_1, \ldots, \lambda_t$ are the nonzero eigenvalues of $A$.
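In practice, the requirement (43) can be checked numerically for the trace-based parameter behind (29); the sketch below assumes A and k are in scope, and the $10^{-8}$ threshold separating zero from nonzero eigenvalues is our assumption:

(* Verify condition (43) for alpha = 2/tr(A^(k+1)), the choice used in (29). *)
alpha = 2/Tr[MatrixPower[A, k + 1]];
nonzeroEigs = Select[Eigenvalues[Normal[A]], Abs[#] > 10^-8 &];
Max[Abs[1 - alpha nonzeroEigs^(k + 1)]] < 1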

Theorem 8. Let $A$ be a singular square matrix with $\mathrm{ind}(A) = k \ge 1$, and let the sequences $\{V_n\}$ and $\{E_n\}$ be defined as in Theorem 6. Then the sequence defined by the iterative method (13) converges with ninth order to $A^D$ if the initial approximation $V_0$ is in accordance with (42).

Proof. Now, by considering $\varepsilon_n = A^D - V_n$ as the error matrix for finding the Drazin inverse, we have
$$A\varepsilon_n = AA^D - AV_n = E_n. \tag{44}$$
Taking into account (32) and using elementary algebraic transformations, we further derive
$$A\varepsilon_{n+1} = \frac{1}{8}(A\varepsilon_n)^9\big(AA^D + A\varepsilon_n\big)^3. \tag{45}$$
Now, using the idempotent property $(AA^D)^2 = AA^D$, the commutation $AA^D = A^DA$, and the following consequence of (39),
$$\varepsilon_n = A^DA\varepsilon_n, \tag{46}$$
we obtain for each $n \ge 0$ the following:
$$\varepsilon_{n+1} = A^DA\varepsilon_{n+1}. \tag{47}$$
From (47) and (45),
$$\varepsilon_{n+1} = \frac{1}{8}A^D(A\varepsilon_n)^9\big(AA^D + A\varepsilon_n\big)^3. \tag{48}$$
Therefore, using the absorption property $AA^D\,A\varepsilon_n = A\varepsilon_n\,AA^D = A\varepsilon_n$,
$$\varepsilon_{n+1} = \frac{1}{8}A^D(A\varepsilon_n)^9\big(AA^D + 3A\varepsilon_n + 3(A\varepsilon_n)^2 + (A\varepsilon_n)^3\big). \tag{49}$$
Finally, applying (39), it is now easy to find the error inequality of the new scheme (13), using (49) and the second condition of (27), when finding the Drazin inverse, as follows:
$$\|\varepsilon_{n+1}\| \le \frac{1}{8}\,\|A^D\|\,\|A\|^9\,\big(\|AA^D\| + 3\|A\varepsilon_n\| + 3\|A\varepsilon_n\|^2 + \|A\varepsilon_n\|^3\big)\,\|\varepsilon_n\|^9. \tag{50}$$
Therefore, since (42) is satisfied, it follows from (31) that $\lim_{n\to\infty}V_n = A^D$. Furthermore, the inequalities in (50) immediately lead to the conclusion that $\varepsilon_n \to 0$ as $n \to \infty$ with the ninth order of convergence.

5. Numerical Aspects

Using the programming package Mathematica 8 [19], in this section we apply our iterative method to some practical numerical tests and compare it with the existing methods in order to demonstrate the applicability and the consistency of the numerical results with the theoretical aspects illustrated in Sections 2-4.

For the numerical comparisons in this section, we have used the methods (1), (7), (8), (9), (10), and (13), denoted by "Schulz", "Li et al. I", "Li et al. II", "KMS4", "KMS9", and the "Proposed method", respectively. We have carried out the numerical tests with machine precision on a computer running Microsoft Windows XP with an Intel(R) Pentium(R) 4 CPU at 3.20 GHz and 4 GB of RAM. In all computations, the running time in seconds was measured using AbsoluteTiming.

In sparse-matrix algebra, iteration methods such as (1), (7), (8), (9), (10), and (13) should be coded in sparse form, using well-known commands such as SparseArray, to reduce the computational burden and preserve the sparsity of the approximate inverse at each computing step. This is done herein, along with a threshold, by applying Chop to each approximate inverse $V_n$.
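Concretely, one sparsity-preserving step may be organized as follows; this is a sketch in which PM is the iteration body given later in Algorithm 3 and tol is an assumed drop threshold:

(* One step of (13) in sparse arithmetic: apply the iteration, drop
   entries below tol with Chop, and repack the result as a SparseArray. *)
sparseStep[V_, tol_] := SparseArray[Chop[PM[V], tol]];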

Now, we apply the above inverse finders for finding the approximate inverses of the following large sparse matrices.

Test Problem 1. In this test, 10 large sparse random complex matrices of dimension 5000 are considered, as shown in Algorithm 1.

n = 5000; number = 10; SeedRandom[123];
Table[A[j] = SparseArray[{Band[{-100, 1100}] -> RandomReal[20],
     Band[{1, 1}] -> 2., Band[{1000, -50}, {n - 20, n - 25}] -> {2.8, RandomReal[] + I},
     Band[{600, 150}, {n - 100, n - 400}] -> {-RandomReal[3], 3. + 3 I}}, {n, n},
     0.], {j, number}];

In this test, the stopping criterion is based on the difference of successive iterates, $\|V_{n+1} - V_n\|_\infty$, and the maximum number of iterations allowed is set to 75. Note that in this test the initial choice $V_0$ has been constructed using the different available ways discussed in Table 1.

The results of the comparisons for this test are presented in Figures 1, 2, 3, and 4. In Figure 1, we show the number of iterations required by the different methods to attain the desired accuracy. Since higher order methods obviously require fewer iterations to converge, we focus on the computational time needed to satisfy the desired tolerance using three different initial approximations from Table 1. As can be observed, for all three forms of the initial guess and in all 10 test problems (Figures 2-4), our iterative method (13) beats the other existing schemes, in line with the theoretical results of Table 2.

The attained results confirm the robustness of the proposed iterative method (13) through a clear reduction in the number of iterations and the elapsed time.

The practical applications of the new scheme (13) fall within many problems, as discussed in Section 1. For instance, in solving second-kind integral equations by a wavelet-like approach, the problem reduces to inverting a large sparse matrix that possesses a sparse inverse as well [20]. In the rest of this section, we apply our new iterative method as a robust technique to produce accurate preconditioners for accelerating modern iterative solvers, such as GMRES or BiCGSTAB, for solving large scale sparse linear systems; see, for example, [21].

Test Problem 2. Consider solving the following boundary value problem (BVP) using discretization; in such problems, in order to capture all the shocks and the behavior of the solution, a very fine grid of points is required:
$$w''(t) + 2w'(t) - 3w(t) = 0, \qquad t \in [0, 2], \quad w(0) = w(2) = e^3. \tag{51}$$
To solve (51) using 3-point discretization, we assume that at the grid points $t_i = ih$, with $h = 2/(n+1)$, the exact solution is denoted by $w(t_i)$ and the approximate solution is denoted by $w_i$, for any $i = 1, \ldots, n$.

Thus, (51) can be written in discretized form in the Mathematica language (see Algorithm 2), wherein $n$ is the number of interior grid points, which results in an $n \times n$ sparse linear system of equations. In this test, we consider a fixed tolerance for the final solution of the linear systems. The results of solving this system "without preconditioning" and "with left preconditioning" are given in Tables 3 and 4. Note that in Tables 3 and 4, for example, PBiCGSTAB-(8)-II stands for the left preconditioned linear system solved by the iterative solver BiCGSTAB, applying the preconditioner (approximate inverse) obtained by the scheme (8) after 2 iterations.

h = (2. - 0)/(n + 1); Subscript[t, 0] = 0; Subscript[w, 0] =
         E^3; Subscript[w, n + 1] = Exp[3];
Table[N[(Subscript[w, i - 1] - 2 Subscript[w, i] + Subscript[w, i + 1])/
      h^2 - 3 Subscript[w, i] +
     2 (Subscript[w, i + 1] - Subscript[w, i - 1])/(2 h) == 0], {i, 1, n}];

The numerical results clearly support the efficiency of the method (13); a clear reduction in the elapsed time is observable. Even for the GMRES solver, which had failed (not convergent after 1500 iterations to the considered tolerance), a simple preconditioner obtained by the new method (13) improved the situation significantly. Note that in this test we have constructed $V_0$ for the compared methods using (6).

Test Problem 3. The aim of this example is to apply the discussions of Section 4 to finding the Drazin inverse of a square matrix taken from [16], with known index $k = \mathrm{ind}(A)$. To simplify the process, we write a general code in the programming package Mathematica for the iterative process (13) to find the (pseudo-)inverse or the Drazin inverse of arbitrary matrices (see Algorithm 3).

PM[X_] :=
  With[{Id = SparseArray[{{i_, i_} -> 1.}, {n, n}]},
   X1 = A.X; X2 = -7 Id + X1.(9 Id + X1.(-5 Id + X1)); X3 = X1.X2;
   (-1/8) X.X2.(12 Id + X3.(6 Id + X3))];
InitialMatrix[A_] := 1/SingularValueList[A, 1][[1]]^2 ConjugateTranspose[A];
InitialDrazin[A_] := 2/Tr[MatrixPower[A, k + 1]] MatrixPower[A, k];
DrazinInverse[A_, tolerance_] := If[k == 0,
   Module[{X0 = InitialMatrix[A]}, FixedPoint[(PM[#] &), X0,
     SameTest -> (Norm[#1 - #2, Infinity] <= tolerance &)]],
   Module[{X0 = InitialDrazin[A]}, FixedPoint[(PM[#] &), X0,
     SameTest -> (Norm[#1 - #2, Infinity] <= tolerance &)]]];

The two-argument function DrazinInverse[A_, tolerance_] takes an arbitrary matrix and the tolerance from the user and obtains its Drazin inverse, the index $k$ being known. In this case, by choosing a suitable tolerance, we obtain the Drazin inverse of the test matrix.

Checking the conditions of Definition 3 for the computed matrix confirms that they are satisfied to high accuracy, which supports the theoretical discussion.

6. Conclusion

In many engineering applications, extracting the diagonal or all the entries of the inverse of a given (typically large and sparse) matrix is an important part of the computation, for example, in electronic structure calculations, especially for models based on effective one-electron Hamiltonians such as tight-binding models, or in modern statistics; thus, developing high-order efficient Schulz-type methods is of practical interest.

In this paper, we have studied a high-order iterative method (13) for matrix inversion. A convergence analysis of the algorithm has been given, together with a discussion of the choice of the initial value used to start the process and preserve the convergence order. We have also discussed under what conditions the new method can be applied to finding the Drazin inverse of square matrices having real or complex spectra.

As a result, the total computational time of the suggested iteration (13) is remarkably low in contrast with the existing methods of this type, both for constructing approximate inverses and for preconditioning.

Working on the extension of the proposed method (13) to interval matrix inversion [22] can be considered as future work in this field of study.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The second author gratefully acknowledges support from the Research Project 174013 of the Serbian Ministry of Science.