Abstract and Applied Analysis
Volume 2014, Article ID 563787, 7 pages
http://dx.doi.org/10.1155/2014/563787
Research Article

A New Iterative Method for Finding Approximate Inverses of Complex Matrices

1Department of Applied Mathematics, School of Mathematical Sciences, Ferdowsi University of Mashhad, Mashhad, Iran
2Department of Mathematics and Applied Mathematics, School of Mathematical and Natural Sciences, University of Venda, Private Bag X5050, Thohoyandou 0950, South Africa

Received 24 May 2014; Revised 25 July 2014; Accepted 28 July 2014; Published 14 September 2014

Academic Editor: Juan R. Torregrosa

Copyright © 2014 M. Kafaei Razavi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper presents a new iterative method for computing the approximate inverse of nonsingular complex matrices. An analytical discussion of the method is included to demonstrate its convergence behavior; in fact, it is proven that the suggested scheme possesses tenth order of convergence. Finally, its performance is illustrated by numerical examples on different matrices.

1. Introduction

The solution of linear algebraic systems of the form Ax = b, where A is an n × n matrix (all matrices in this paper are of the same dimension, unless clearly stated otherwise) and b is a given vector, is central to many numerical simulations and is often the most time-consuming part of a computation. Direct methods, which essentially work by finding the inverse of the coefficient matrix A, are very robust, and they tend to require a predictable amount of resources in terms of time and storage. Their main drawback lies in their heavy demand for time and memory, which normally puts them out of consideration, especially when A is sparse.

In such cases, iterative methods can be taken into account. The relaxation and Krylov subspace methods are of this type and are reliable in solving large scale sparse systems [1]. On the other hand, there is another class of iterative methods, called Schulz-type iterations in the literature (see, e.g., [2]). These techniques are based on finding robust approximate inverses of a given matrix.

The oldest technique of this type is the Schulz method [3], defined by

V_{k+1} = V_k (2I - A V_k),  k = 0, 1, 2, ...,  (1)

where I is the identity matrix. In the general case, it is known to converge when ρ(I - A V_0) < 1, where V_0 is the initial approximation and ρ(·) denotes the spectral radius. Such schemes are also useful for sensitivity analysis when accurate approximate inverses are needed for both square and rectangular matrices. Notice that, for rectangular matrices, one may obtain their generalized inverse using such iterative methods [4].
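As a concrete illustration, the following is a minimal MATHEMATICA sketch of iteration (1); the initial value of [11] (recalled in Section 3) is used, and the tolerance and iteration cap are illustrative choices rather than prescriptions of this paper.

(* Minimal sketch of the Schulz iteration (1): V <- V (2 I - A V).     *)
(* The tolerance and the iteration cap are illustrative choices.       *)
schulzInverse[a_?MatrixQ, tol_ : 10.^-10, maxIter_ : 100] :=
  Module[{n = Length[a], id, v, k = 0},
    id = IdentityMatrix[n];
    (* initial guess V0 = A^*/(||A||_1 ||A||_oo); see Section 3 and [11] *)
    v = ConjugateTranspose[a]/(Norm[a, 1] Norm[a, Infinity]);
    While[Norm[id - a.v, "Frobenius"] > tol && k < maxIter,
      v = v.(2 id - a.v); k++];
    v]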

These solvers are also the method of choice in certain areas such as circuits, power system networks, and chemical plant modeling [5]. A practical application of Schulz-type methods, which has recently attracted the attention of numerical analysts [6], is preconditioning. In fact, starting from a suitable initial value V_0 [7], one is able to produce approximate inverse preconditioners of any desired accuracy and then solve the preconditioned linear system as rapidly as possible.

Hence, in this work we are interested in deriving a new iterative method, belonging to the class of Schulz-type methods, for finding approximate inverses.

Remark 1. We use the matrix 2-norm in all subsequent derivations and discussions unless another norm is clearly stated.

The rest of this paper is organized as follows. In the next section, we briefly review some of the existing Schulz-type methods and provide a new mathematical proof for one of the hitherto unproved iterative methods. The main contribution of this paper is presented in Section 3, which is also devoted to the analysis of convergence. Section 4 covers the numerical simulations, together with some notes for obtaining the best results in practical implementations. Finally, concluding remarks are drawn in Section 5.

2. Preliminaries

Li et al. in [6] presented the classical third-order method

V_{k+1} = V_k (3I - 3 A V_k + (A V_k)^2),  (2)

and also proposed another third-order iterative method for finding A^{-1}, denoted here by (3) and written in right-product form; see [6] for its explicit formula.

W. Li and Z. Li in [8] proposed a fourth-order iteration method, denoted here by (4); we refer to [8] for its explicit form.

In fact, a whole family of methods was developed in [8]. We draw attention to the point that the iterative methods (2) and (4) can also be found in the textbook [9]. Moreover, we suggest that the iterative method (3) can be rewritten in an equivalent left-product form, denoted by (5).

This structure is cheaper to evaluate because of its Horner-like (nested) multiplications and, subsequently, it lowers round-off errors by avoiding the explicit calculation of matrix powers, which is costly. It should be remarked that the authors in [6] did not provide a mathematical proof for (3) or for its equivalent form (5). Thus, herein we prove its order of convergence, thereby completing the analysis of [6].
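For illustration, the nested (Horner-like) evaluation of a cubic matrix polynomial times V can be sketched as follows; the coefficients c0, ..., c3 are placeholders, since they depend on the particular scheme, and only four matrix-matrix products are consumed.

(* Horner-like evaluation of (c0 I + c1 M + c2 M^2 + c3 M^3).V with M = A.V, *)
(* using 4 matrix-matrix products and no explicit matrix powers.             *)
(* The coefficients c0, ..., c3 are placeholders for illustration.           *)
hornerStep[a_?MatrixQ, v_?MatrixQ, {c0_, c1_, c2_, c3_}] :=
  Module[{id = IdentityMatrix[Length[a]], m},
    m = a.v;                                   (* product 1 *)
    (c0 id + m.(c1 id + m.(c2 id + c3 m))).v]  (* products 2, 3, 4 *)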

Theorem 2. Assume that A is an invertible matrix with real or complex components. If the initial value V_0 satisfies ||I - V_0 A|| < 1, then the iteration (5) converges cubically to A^{-1}.

Proof. In order to prove the convergence of (5), denote E_k = I - V_k A for k ≥ 0. A direct manipulation of (5) shows that E_{k+1} is a polynomial in E_k whose lowest-degree term is of degree three, so that ||E_{k+1}|| ≤ ||E_k||^3 whenever ||E_0|| < 1. Thus, we obtain ||E_k|| ≤ ||E_0||^{3^k}. Moreover, since ||E_0|| < 1 by hypothesis, this bound tends to zero when k → ∞. That is, V_k A → I when k → ∞, and thus, for (5), we attain V_k → A^{-1} as k → ∞. This shows the convergence.
Now, we must show its third order. Toward this end, we denote by ε_k = A^{-1} - V_k the error matrix of the iterative procedure (5). Using a similar methodology as in [10], note that E_k = I - V_k A = (A^{-1} - V_k) A = ε_k A, and hence ε_{k+1} A = E_{k+1}, whose expansion in E_k starts with a cubic term. Taking norms on both sides consequently yields

||ε_{k+1}|| ≤ c ||A||^3 ||A^{-1}|| ||ε_k||^3,

with a constant c that does not depend on k. This reveals that the iterative method (5) converges to A^{-1} with at least third order of convergence. The proof is complete.
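As a quick numerical sanity check of cubic convergence, one can track the residual of the third-order scheme (2), whose error matrix obeys I - A V_{k+1} = (I - A V_k)^3; the random, diagonally shifted test matrix below is an illustrative assumption, not one of the test problems of this paper.

(* Sanity check of cubic convergence for the third-order scheme (2):  *)
(* the residual norm should roughly cube at every step.               *)
n = 50; SeedRandom[1];
a = RandomReal[{-1, 1}, {n, n}] + n IdentityMatrix[n]; (* well conditioned *)
id = IdentityMatrix[n];
v = Transpose[a]/(Norm[a, 1] Norm[a, Infinity]);
Do[
  m = a.v;                        (* product 1 *)
  v = v.(3 id - m.(3 id - m));    (* products 2 and 3: one step of (2) *)
  Print[Norm[id - a.v, "Frobenius"]], (* extra product, for monitoring only *)
  {5}]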

3. A Novel Method

Let I be the identity matrix. We aim at constructing an iterative method in which the sequence of iterates {V_k} converges to A^{-1} for an appropriate initial guess. Our proposed scheme, denoted by (16) in what follows, is built from eight matrix-matrix products per cycle, and its derivation will be pointed out in Section 4. In numerical mathematics, it is essential to know the theoretical behavior of an approximate method. In what follows, we prove the convergence order of (16).

Theorem 3. Assume that A is an invertible matrix with real or complex entries. If the initial guess V_0 satisfies ||I - V_0 A|| < 1, then the iteration (16) converges to A^{-1} with at least tenth order of convergence.

Proof. In order to prove the convergence of (16), we consider the same assumptions and notation as in the proof of Theorem 2 and again set E_k = I - V_k A. Expanding (16) shows that E_{k+1} is a polynomial in E_k whose lowest-degree term is of degree ten, so that ||E_{k+1}|| ≤ ||E_k||^{10} whenever ||E_0|| < 1. Thus, we obtain ||E_k|| ≤ ||E_0||^{10^k}. Moreover, since ||E_0|| < 1, this bound tends to zero when k → ∞; that is, V_k A → I, and thus, for (16), we obtain V_k → A^{-1} as k → ∞. We must now illustrate the tenth order of convergence for (16). To this aim, we denote by ε_k = A^{-1} - V_k the error matrix of the iterative procedure (16). Since E_k = ε_k A, the expansion above yields, after taking norms on both sides,

||ε_{k+1}|| ≤ c ||A||^{10} ||A^{-1}|| ||ε_k||^{10},

with a constant c independent of k. This shows that the method (16) converges to A^{-1} with at least tenth order of convergence. The proof is now complete.

A simple corollary of Theorems 2 and 3 is that, under the same conditions and the same initial value, higher order methods arrive at the convergence phase faster than lower order methods, and this reduces the required number of iterations.
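As a rough worked example, assume the ideal contraction ||E_k|| ≤ ||E_0||^{p^k} for a method of order p, take ||E_0|| = 0.9, and target ||E_k|| ≤ 10^{-16}. Then one needs

p^k ≥ (16 ln 10)/(-ln 0.9) ≈ 350,

that is, k = 9 steps for p = 2, k = 6 for p = 3, k = 5 for p = 4, and only k = 3 for p = 10.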

Note that the sequence of (16) may be applied not only to the left preconditioned linear system V_k A x = V_k b but also to the right preconditioned linear system A V_k y = b, where x = V_k y, provided that the initial matrix V_0 satisfies the corresponding convergence condition of Theorem 3.

The iterative methods discussed so far are sensitive to the initial guess chosen to start the process. As a matter of fact, the high accuracy and efficiency of such iterative algorithms are guaranteed only if the initial value V_0 satisfies the appropriate condition given in Theorems 2 and 3.

Thus, in order to preserve the convergence order, we remind the reader that an efficient way of producing V_0, as given in [11], is V_0 = A^*/(||A||_1 ||A||_∞). Another adaptive way is V_0 = αI, where I is the identity matrix and α should be determined so that ||I - αA|| < 1.
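These two choices can be coded directly; a minimal sketch follows, in which the scaled-identity variant is stated for Hermitian positive definite A with the illustrative choice α = 1/||A||_∞.

(* Initial guesses for Schulz-type iterations.                          *)
(* Pan-Schreiber [11]: guarantees rho(I - A V0) < 1 for nonsingular A.  *)
panSchreiberV0[a_?MatrixQ] :=
  ConjugateTranspose[a]/(Norm[a, 1] Norm[a, Infinity])

(* Scaled identity V0 = alpha I; alpha = 1/||A||_oo is one illustrative *)
(* choice that makes ||I - alpha A|| < 1 when A is Hermitian positive   *)
(* definite.                                                            *)
scaledIdentityV0[a_?MatrixQ] := IdentityMatrix[Length[a]]/Norm[a, Infinity]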

Remark 4. The new iteration (16) reaches tenth order by using 8 matrix-matrix multiplications, while the schemes (1), (2), and (4) reach second, third, and fourth orders, respectively, by consuming 2, 3, and 4 matrix-matrix multiplications. Hence, the contributed method has a lower computational cost than its competitors; this superiority will be clarified in Section 4. It should also be remarked that iterations of any convergence order for nonsingular square matrices are constructed in Section 6 of Chapter 2 of the book [12], whereas the general approach for rectangular matrices is discussed in Chapter 5 of [9] and in the recent paper [13]. In fact, in those constructions a convergence order p is always attained at the cost of p matrix-matrix products, as with (1), which reaches order 2 using two matrix-matrix multiplications.

Remark 5. Two important matters must be mentioned at this point to clarify why a higher order (efficient) method such as (16), which needs 8 matrix-matrix products to reach at least tenth order of convergence, is practical. First, following the comparative informational efficiency index of inverse-finders [14], defined by E = p/θ, wherein p and θ stand for the convergence order and the number of matrix-matrix products, the informational efficiency of (16), that is, E = 10/8 = 1.25, beats that of its competitors: E = 2/2 = 1 for (1), E = 1 for (2) and (4), and E = 3/4 = 0.75 for (3) in its form (5). Second, the significance of the new scheme shows itself in the implementation. To illustrate further, such iterations depend entirely on their initial matrices; though there are certain efficient ways of finding V_0, in general such initial approximations need a high number of iterations to arrive at the convergence phase. On the other hand, each cycle of a Schulz-type method includes one stopping criterion based on a matrix norm, and this imposes a heavier burden on low order methods than on high order methods such as (16). Because the computation of a matrix norm (with the appropriate choice of norm differing between dense and large sparse matrices) takes a considerable amount of time, the higher number of steps required by a lower order method is costlier than the lower number of steps of a higher order one.

Remark 6. The index just used differs from the classical efficiency index defined in [15]. Traub in [15] discussed why an informational efficiency index is needed; in fact, the kind of efficiency index a user applies depends on the situation being dealt with. Note that the cost of each iteration (step) is governed by the number of matrix-matrix products, the order, and the computation of a stopping criterion. Thus, the informational index is meaningful in this case, because it measures the gain brought each time a matrix-matrix product, together with the order and the stopping criterion, is computed.

4. Numerical Reports

In this section, some experiments are presented to demonstrate the capability of the suggested method. The computer algebra system MATHEMATICA 8 [16, 17] has been used, with double precision in all calculations. For numerical comparisons, we have used the methods (1), (2), (5), and (16). The computer specifications are Microsoft Windows XP, Intel(R) Pentium(R) 4 CPU at 3.20 GHz, and 4 GB of RAM.

Experiment 1. In this test, 10 sparse random complex matrices of dimension 2500 are considered.
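Since the exact generation commands are tied to the experiment, the following is only a plausible MATHEMATICA sketch of how such a test matrix can be produced; the density (here about 10^4 nonzero entries) and the diagonal shift ensuring invertibility are illustrative assumptions, not the authors' recipe.

(* A plausible generator for a sparse random complex test matrix.  *)
(* Density and diagonal shift are illustrative assumptions.        *)
randomSparseComplex[n_Integer, m_Integer] :=
  SparseArray[
    Thread[RandomInteger[{1, n}, {m, 2}] ->
      RandomComplex[{-1 - I, 1 + I}, m]], {n, n}] +
  SparseArray[Band[{1, 1}] -> N[n], {n, n}]

a = randomSparseComplex[2500, 10000];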

In this test, the stopping criterion is a fixed tolerance on a matrix norm of the residual, and the maximum number of iterations allowed is set to 100. Note that in this test the initial choice V_0 has been constructed by the rule of [11] recalled in Section 3. We also plot the condition numbers of the 10 test matrices in Figure 1.

Figure 1: Comparison of condition numbers for the 10 test sparse matrices in Experiment 1.

The results of the comparisons for this test problem are presented in Figures 2 and 3. We have distinguished the curves by various symbols (circle, triangle, and so forth) alongside different colors. The attained results re-verify the robustness of the proposed iterative method (16) by a clear reduction in the number of iterations and in the elapsed time. Note that, in the figures of this paper, Schulz, KSM, MM, and the proposed method (PM) stand for (1), (2), (5), and (16), respectively.

Figure 2: Comparison of the number of iterations for the 10 test sparse matrices in Experiment 1.
Figure 3: Comparison of the computational times for the 10 test sparse matrices in Experiment 1.

In general, iterative Schulz-type methods are very useful for large sparse matrices having a sparse inverse, or when only an approximate inverse is needed.

After a few iterations of such methods, the computed approximate inverse may become dense, and thus the whole procedure might be slow. To remedy this, a threshold can be imposed in the implemented algorithm; conveniently, this can be done with the built-in command Chop. In Experiment 1, this thresholding has been applied to each iterate. This implementation technique is also fruitful for preconditioning.
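A minimal sketch of this thresholding step follows; the drop tolerance 10^-10 is an illustrative value, not the one fixed in the experiment.

(* Zero out entries of the current iterate below a drop tolerance and  *)
(* re-wrap as a SparseArray; the tolerance 10^-10 is illustrative.     *)
v = SparseArray[Chop[v, 10.^-10]];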

We also point out that the construction of (16) is based on applying a nonlinear equation solver (in a similar spirit to [3]) to the matrix equation V^{-1} - A = 0, employing, for example, a two-point divided difference; this gives us the aimed tenth-order method (16).
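The scalar model behind such constructions may clarify the idea; it is shown here with Newton's method for brevity, while divided-difference (secant-type) solvers follow the same pattern. Applying Newton's method to f(v) = v^{-1} - a gives

v_{k+1} = v_k - f(v_k)/f'(v_k) = v_k (2 - a v_k),

which becomes the Schulz step (1) under the formal substitutions v → V, a → A; higher-order root-finders applied to the same equation generate higher-order inverse-finders in the same manner.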

Remark 7. As discussed before, an important application of such solvers is to provide robust preconditioners for solving linear systems of equations. To illustrate further, we have carefully chosen the following practical example, in which the GMRES solver fails to solve the system resulting from a discretization when the maximum number of steps allowed to GMRES is 2000.

Experiment 2. We consider solving a boundary value problem (BVP), referred to as (29) below, using a finite difference discretization over a given range with a prescribed number of discretization nodes.
The solution is computed, up to a prescribed tolerance, in the MATHEMATICA environment.
The coefficient matrix and the right-hand-side vector for the BVP (29) are then assembled in sparse form.
Just like other problems of this kind, the most time-consuming step is solving the sparse system Ax = b. Herein, we choose the Krylov subspace method GMRES to solve the system, first without preconditioning and then with the approximate inverses V produced by the iterative methods (1), (2) and (5), and (16), respectively, to solve the left preconditioned system VAx = Vb.
In this way, the system becomes well behaved and we expect to find the solution of the BVP (29) in less computational time than with nonpreconditioned GMRES. The timing results for this purpose are given in Table 1. We should remark that in this test V_0 = diag(1/a_11, ..., 1/a_nn) has been chosen as the initial approximation based on [18], where a_ii is the i-th diagonal entry of A.
As is obvious from Table 1, nonpreconditioned GMRES fails to converge even after 2000 iteration steps, while the required accuracy can be attained using preconditioning. The best time, beating all the other approaches, comes from the new scheme (16). The left preconditioned system resulting from (16), denoted (16)-PGMRES, is reliable in solving ill-conditioned linear systems.
Note that the time reported for PGMRES is the total time of constructing the initial matrix, obtaining the preconditioner, and solving the left preconditioned system VAx = Vb by GMRES.
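A sketch of this workflow, reusing the schulzInverse sketch from Section 1 (any of the iterations (1), (2), (5), or (16) could be substituted to produce V); the GMRES tolerance is an illustrative value, and a and b denote the assembled coefficient matrix and right-hand side.

(* Left-preconditioned GMRES workflow (sketch): build an approximate  *)
(* inverse V, then solve V.A x = V.b with a Krylov method.            *)
v = schulzInverse[a];  (* or the tenth-order scheme (16) *)
x = LinearSolve[v.a, v.b,
      Method -> {"Krylov", Method -> "GMRES", "Tolerance" -> 10.^-8}];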

Table 1: Comparison of the computational time in solving the linear system in Experiment 2.

Experiment 3. In order to compare the preconditioners obtained from the new method with the famous preconditioners of the literature resulting from incomplete LU factorizations [1], we turn to solving sparse linear systems Ax = b of dimension 841 using BiCGSTAB. The matrix has been chosen from the MatrixMarket database as A = ExampleData[{"Matrix", "YOUNG1C"}], while the right-hand side vector, and hence the reference solution, is fixed in advance for all runs. Figure 4 shows the plot of the matrix A.
The left preconditioned systems using the approximate inverses of Schulz (1), of KSM (2), and of the proposed method PM (16), along with the well-known preconditioning techniques ILU0, ILUT, and ILUTP, have been tested, while the initial vector has in all cases been chosen automatically by MATHEMATICA 8. The timing results for different tolerances (residual norms) are reported in Figure 5.
Note that, after a few iterations, the computed preconditioner of a Schulz-type method may become dense. As before, we must choose a strategy to control the sparsity of the preconditioner; here this is done by applying the same Chop-based thresholding to the obtained approximate inverses.

Figure 4: The plot of the matrix A in Experiment 3.
Figure 5: Comparison of the computational times for various preconditioners in Experiment 3.

5. Conclusions

In this work, we have developed an iterative method for finding approximate inverses of complex matrices, belonging to the well-known class of Schulz-type methods.

We have proposed a simpler form (5) of the recently published method (3) and proved analytically that the scheme possesses third-order convergence.

We have shown that the suggested method (16) reaches tenth order of convergence. We have moreover discussed that (16) can be applied to left and right preconditioned systems under a certain condition. The efficacy of the new scheme was illustrated numerically using the computer programming package MATHEMATICA.

One may note that the approximate inverse obtained per step of algorithm (5) or (16) can also easily be used as a preconditioner to reduce the ill-conditioning of a system, allowing users to apply iterative methods such as GMRES or BiCGSTAB to solve large scale sparse linear systems of algebraic equations efficiently.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

  1. Y. Saad, Iterative Methods for Sparse Linear Systems, SIAM, Philadelphia, Pa, USA, 2nd edition, 2003.
  2. F. Soleimani, F. Soleymani, A. Cordero, and J. R. Torregrosa, "On the extension of Householder's method for weighted Moore–Penrose inverse," Applied Mathematics and Computation, vol. 231, pp. 407–413, 2014.
  3. G. Schulz, "Iterative Berechnung der reziproken Matrix," Zeitschrift für Angewandte Mathematik und Mechanik, vol. 13, pp. 57–59, 1933.
  4. A. Ben-Israel and T. N. E. Greville, Generalized Inverses, Springer, Berlin, Germany, 2nd edition, 2003.
  5. M. Benzi, "Preconditioning techniques for large linear systems: a survey," Journal of Computational Physics, vol. 182, no. 2, pp. 418–477, 2002.
  6. H.-B. Li, T.-Z. Huang, Y. Zhang, X.-P. Liu, and T.-X. Gu, "Chebyshev-type methods and preconditioning techniques," Applied Mathematics and Computation, vol. 218, no. 2, pp. 260–270, 2011.
  7. V. Y. Pan, M. Van Barel, X. Wang, and G. Codevico, "Iterative inversion of structured matrices," Theoretical Computer Science, vol. 315, no. 2-3, pp. 581–592, 2004.
  8. W. Li and Z. Li, "A family of iterative methods for computing the approximate inverse of a square matrix and inner inverse of a non-square matrix," Applied Mathematics and Computation, vol. 215, no. 9, pp. 3433–3442, 2010.
  9. E. V. Krishnamurthy and S. K. Sen, Numerical Algorithms: Computations in Science and Engineering, Affiliated East-West Press, New Delhi, India, 2007.
  10. F. Toutounian and F. Soleymani, "An iterative method for computing the approximate inverse of a square matrix and the Moore–Penrose inverse of a non-square matrix," Applied Mathematics and Computation, vol. 224, pp. 671–680, 2013.
  11. V. Y. Pan and R. Schreiber, "An improved Newton iteration for the generalized inverse of a matrix, with applications," SIAM Journal on Scientific and Statistical Computing, vol. 12, no. 5, pp. 1109–1130, 1991.
  12. E. Isaacson and H. B. Keller, Analysis of Numerical Methods, John Wiley & Sons, New York, NY, USA, 1966.
  13. L. Weiguo, L. Juan, and Q. Tiantian, "A family of iterative methods for computing Moore–Penrose inverse of a matrix," Linear Algebra and Its Applications, vol. 438, no. 1, pp. 47–56, 2013.
  14. F. Soleymani, "A fast convergent iterative solver for approximate inverse of matrices," Numerical Linear Algebra with Applications, vol. 21, pp. 439–452, 2014.
  15. J. F. Traub, Iterative Methods for the Solution of Equations, Prentice Hall, New York, NY, USA, 1964.
  16. http://reference.wolfram.com/language/tutorial/LinearAlgebraInMathematicaOverview.
  17. S. Wolfram, The Mathematica Book, Wolfram Media, 5th edition, 2003.
  18. L. Grosz, "Preconditioning by incomplete block elimination," Numerical Linear Algebra with Applications, vol. 7, no. 7-8, pp. 527–541, 2000.