Research Article | Open Access

# A Matrix Iteration for Finding Drazin Inverse with Ninth-Order Convergence

**Academic Editor:** Sofiya Ostrovska

#### Abstract

The aim of this paper is twofold. First, a matrix iteration for finding approximate inverses of nonsingular square matrices is constructed. Second, how the new method can be applied to compute the Drazin inverse is discussed. It is theoretically proven that the contributed method possesses ninth-order convergence. Numerical studies are brought forward to support the analytical parts.

#### 1. Preliminary Notes

Let $\mathbb{C}^{m\times n}$ and $\mathbb{C}_r^{m\times n}$ denote the set of all complex $m\times n$ matrices and the set of all complex $m\times n$ matrices of rank $r$, respectively. By $A^*$, $R(A)$, $\mathrm{rank}(A)$, and $N(A)$, we denote the conjugate transpose, the range, the rank, and the null space of $A\in\mathbb{C}^{m\times n}$, respectively.

Important matrix-valued functions are, for example, the inverse $A^{-1}$, the (principal) square root $A^{1/2}$, and the matrix sign function. Their evaluation for large matrices arising from partial differential equations or integral equations (e.g., resulting from wavelet-like methods) is not an easy task and needs techniques exploiting appropriate structures of the matrices $A$ and $A^{-1}$.

In this paper, we focus on the matrix inverse for square matrices. To this end, we construct a matrix iterative method for finding approximate inverses quickly. It is proven that the new method possesses the high convergence order nine using only seven matrix-matrix multiplications. We will then discuss how to apply the new method to the Drazin inverse. The Drazin inverse is investigated in matrix theory (particularly in the topic of generalized inverses) and also in ring theory; see, for example, [1].

Generally speaking, applying Schröder's general method (often called the Schröder-Traub sequence [2]) to the nonlinear matrix equation $AX = I$, one obtains the following scheme [3]:
$$X_{k+1} = X_k\left(I + R_k + R_k^2 + \cdots + R_k^{m-1}\right), \quad k = 0, 1, 2, \ldots, \quad (1)$$
of order $m$, requiring $m$ matrix multiplications via Horner's scheme, where $R_k = I - AX_k$.

The application of such (fixed-point type) matrix iterative methods is not limited to matrix inversion for square nonsingular matrices [4, 5]. In fact, under some fair conditions, one may construct a sequence of iterates converging to the Moore-Penrose inverse [6], the weighted Moore-Penrose inverse [7], the Drazin inverse [8], or the outer inverse in the field of generalized inverses. Such extensions, alongside the asymptotic stability of matrix iterations of the form (1), encouraged many authors to present new schemes or to work on the application of such methods in different fields of science and engineering; see, for example, [9–12].

Choosing $m = 2$ and $m = 3$ in (1) reduces it to the well-known methods of Schulz [13] and Chebyshev for matrix inversion. Note that any method extracted from the Sen-Prabhu scheme (1) requires $m$ matrix-matrix multiplications to achieve convergence order $m$. In this work, we are interested in proposing a new scheme in which a convergence order $\rho$ can be attained with fewer than $\rho$ matrix-matrix multiplications.

It is of great importance to arrive at the convergence phase with a valid initial value in matrix iterative methods. An interesting initial matrix was developed and introduced by Ben-Israel and Greville in [14] as follows:
$$X_0 = \alpha A^*, \quad (2)$$
where $0 < \alpha < 2/\sigma_1^2(A)$ and $\sigma_1(A)$ is the largest singular value of $A$, once the user wants to find the Moore-Penrose inverse.
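To make the setting concrete, the following is a minimal NumPy sketch of the classical second-order Schulz iteration $X_{k+1} = X_k(2I - AX_k)$ started from a Ben-Israel/Greville-type initial matrix. The particular scaling $X_0 = A^*/(\|A\|_1\|A\|_\infty)$ is one standard safe choice and is an assumption here, not necessarily the exact parameter used in [14]; the successive-iterate stopping rule mirrors the experiments of Section 4.

```python
import numpy as np

def schulz_inverse(A, tol=1e-12, max_iter=100):
    """Approximate A^{-1} via the quadratically convergent Schulz
    iteration X_{k+1} = X_k (2I - A X_k).

    The starting value X0 = A^* / (||A||_1 ||A||_inf) is a standard safe
    scaling of Ben-Israel/Greville type: since ||A||_2^2 <= ||A||_1 ||A||_inf,
    the spectral radius of I - A X0 is below 1 for nonsingular A.
    """
    n = A.shape[0]
    I = np.eye(n, dtype=A.dtype)
    X = A.conj().T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    for _ in range(max_iter):
        X_new = X @ (2 * I - A @ X)          # one Schulz step
        if np.linalg.norm(X_new - X) < tol:  # stop on two successive iterates
            return X_new
        X = X_new
    return X

# Usage: invert a small, well-conditioned complex matrix.
A = np.array([[4.0 + 1j, 1.0], [2.0, 5.0 - 1j]])
X = schulz_inverse(A)
```

The same skeleton accommodates any member of the family (1); only the update line changes.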

The rest of the paper is organized as follows. Section 2 describes the contribution of the paper alongside a convergence analysis, while Section 3 extends the new method to finding the Drazin inverse. Section 4 is devoted to computational examples. Section 5 concludes the paper.

#### 2. A New Method

Let us consider the *inverse-finder informational efficiency index* [15], which states that if $\rho$ and $\theta$ stand for the rate of convergence and the number of matrix-by-matrix multiplications in floating point arithmetic for a matrix inversion method, then the index is
$$\mathrm{IIEI} = \frac{\rho}{\theta}. \quad (3)$$

Based on (3), we must design a method in which the number of matrix-matrix products is smaller than the local convergence order. The first such attempts date back to the early work of Ostrowski [16], who suggested that a robust way of achieving this goal is proper matrix factorization.

Now, let us first apply the following *new* nonlinear equation solver:
on the matrix equation $F(X) = X^{-1} - A = 0$. Next, we obtain
where $R_k = I - AX_k$. Note that background on the construction of iterative methods for solving nonlinear equations and their applications may be found in [17–20].

Using proper factorization, we attain the following iteration for matrix inversion, in which $X_0$ is an initial approximation to $A^{-1}$:

The scheme (6) falls into the category of Schulz-type methods and requires seven matrix-by-matrix multiplications per cycle to provide approximate inverses. Let us now prove the convergence rate of (6), using the theory of matrix analysis [21], in what follows.

Theorem 1. *Let $A$ be a nonsingular complex matrix. If the initial value $X_0$ satisfies
$$\|I - AX_0\| < 1, \quad (7)$$
then the matrix iterative method (6) converges with ninth order to $A^{-1}$.*

*Proof.* Let (7) hold, and further denote $R_k = I - AX_k$, $k \geq 0$. It is straightforward to obtain
$$R_{k+1} = R_k^9. \quad (8)$$
The rest of the proof is similar to that of Theorem 2.1 of [22] and is hence omitted.

#### 3. Extension to the Drazin Inverse

The Drazin inverse, named after Drazin [23], is a generalized inverse which has spectral properties similar to those of the ordinary inverse of a given square matrix. In some cases, it also provides a solution of a given system of linear equations. Note that the Drazin inverse of a matrix $A$ in many respects resembles the true inverse of $A$.

*Definition 2.* The smallest nonnegative integer $k$, such that $\mathrm{rank}(A^{k+1}) = \mathrm{rank}(A^k)$, is called the index of $A$ and denoted by $\mathrm{ind}(A)$.

*Definition 3.* Let $A$ be an $n\times n$ complex matrix; the Drazin inverse of $A$, denoted by $A^D$, is the unique matrix $X$ satisfying the following: (1) $A^{k+1}X = A^k$,
(2) $XAX = X$,
(3) $AX = XA$,
where $k$ is the index of $A$.
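The three defining equations are easy to verify numerically. The sketch below builds a matrix with a known Drazin inverse from the block-diagonal form $A = \mathrm{diag}(C, N)$ with $C$ invertible and $N$ nilpotent, for which $A^D = \mathrm{diag}(C^{-1}, 0)$; the particular blocks are illustrative choices, not taken from the paper.

```python
import numpy as np

# A = diag(C, N): C invertible (1x1), N nilpotent (2x2 shift matrix).
C = np.array([[2.0]])
N = np.array([[0.0, 1.0],
              [0.0, 0.0]])
A = np.block([[C, np.zeros((1, 2))],
              [np.zeros((2, 1)), N]])

# For this block structure the Drazin inverse is diag(C^{-1}, 0).
AD = np.block([[np.linalg.inv(C), np.zeros((1, 2))],
               [np.zeros((2, 1)), np.zeros((2, 2))]])

k = 2  # index of A: rank(A) = 2 > rank(A^2) = rank(A^3) = 1

# Verify the three conditions of Definition 3.
cond1 = np.allclose(np.linalg.matrix_power(A, k + 1) @ AD,
                    np.linalg.matrix_power(A, k))   # A^{k+1} X = A^k
cond2 = np.allclose(AD @ A @ AD, AD)                # X A X = X
cond3 = np.allclose(A @ AD, AD @ A)                 # A X = X A
```

All three conditions hold, while $AA^D \neq I$ here, which is exactly how $A^D$ differs from an ordinary inverse on singular matrices.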

The Drazin inverse has applications in the theory of finite Markov chains, as well as in the study of differential equations and singular linear difference equations and so forth [24]. To illustrate further, the solution of singular systems has been studied by several authors. For instance, in [25], an analytical solution for continuous systems using the Drazin inverse was presented.

Note that a projection matrix $P$, defined as a matrix such that $P^2 = P$, has $\mathrm{ind}(P) \leq 1$ and Drazin inverse $P^D = P$. Also, if $N$ is a nilpotent matrix (e.g., a shift matrix), then $N^D = 0$. See [26] for more.

In 2004, Li and Wei [27] proved that the Schulz matrix method (the case $m = 2$ in (1)) can be used for finding the Drazin inverse of square matrices. They proposed an initial matrix of the form $X_0 = \alpha A^l$, $l \geq \mathrm{ind}(A)$, where the parameter $\alpha$ must be chosen such that a convergence condition on the initial residual is satisfied. Using this initial matrix yields a numerically (asymptotically) stable method for finding the famous Drazin inverse with quadratic convergence.
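The Li–Wei idea can be sketched as follows: run the Schulz iteration, but start from a scaled power of $A$ so that every iterate stays in the right subspaces and the limit is $A^D$ rather than $A^{-1}$. The scaling $X_0 = A^k/\mathrm{tr}(A^{k+1})$ used below is an illustrative choice that happens to work for the test matrix, not necessarily the exact parameter proposed in [27].

```python
import numpy as np

def schulz_drazin(A, k, tol=1e-12, max_iter=200):
    """Schulz iteration X_{k+1} = X_k (2I - A X_k) started from a
    scaled power of A, in the spirit of Li and Wei's approach.

    Because X0 = A^k / tr(A^{k+1}) is a polynomial in A (an illustrative
    scaling; the exact parameter in [27] may differ), every iterate
    remains a polynomial in A, and the limit is the Drazin inverse A^D.
    """
    n = A.shape[0]
    I = np.eye(n)
    Ak = np.linalg.matrix_power(A, k)
    X = Ak / np.trace(Ak @ A)
    for _ in range(max_iter):
        X_new = X @ (2 * I - A @ X)
        if np.linalg.norm(X_new - X) < tol:
            return X_new
        X = X_new
    return X

# Usage: A = diag(C, N) with C invertible and N a nilpotent shift, so
# ind(A) = 2 and the exact Drazin inverse is diag(C^{-1}, 0).
C = np.array([[2.0, 1.0], [0.0, 5.0]])
A = np.block([[C, np.zeros((2, 2))],
              [np.zeros((2, 2)), np.array([[0.0, 1.0], [0.0, 0.0]])]])
AD = schulz_drazin(A, k=2)
```

The nilpotent block of $X_0$ is exactly zero and stays zero throughout, so the iteration converges to $\mathrm{diag}(C^{-1}, 0)$.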

Using the above descriptions, it is easy to apply the efficient method (6) for finding the Drazin inverse in what follows.

Theorem 4. *Let $A$ be a square matrix and $k = \mathrm{ind}(A)$. Choosing the initial approximation as
**
or
**
wherein $\mathrm{tr}(\cdot)$ stands for the trace of a square matrix and $\|\cdot\|$ is a matrix norm, the iterative method (6) converges with ninth order to $A^D$.*

*Proof.* Consider the notation $R_k = AA^D - AX_k$ for the residual matrix in finding the Drazin inverse. Then, similar to (8), we get
$$R_{k+1} = R_k^9. \quad (12)$$
By taking an arbitrary matrix norm of both sides of (12), we attain
$$\|R_{k+1}\| \leq \|R_k\|^9. \quad (13)$$
In addition, since $\|R_0\| < 1$ (the result of choosing the appropriate initial matrices (10) and (11)), relation (13) gives $\|R_1\| \leq \|R_0\|^9 < 1$. Similarly, $\|R_k\| < 1$ for all $k \geq 1$. Using mathematical induction, we obtain
$$\|R_{k+1}\| \leq \|R_0\|^{9^{k+1}}. \quad (14)$$
So, the sequence $\{\|R_k\|\}_{k\geq 0}$ is strictly monotonically decreasing. Now, by considering $E_k = A^D - X_k$ as the error matrix for finding the Drazin inverse, we have
$$E_k = A^D R_k, \quad (15)$$
since $A^D A A^D = A^D$ and $A^D A X_k = X_k$. Taking into account (12), the identity $R_k = AE_k$, and elementary algebraic transformations, we further derive
$$E_{k+1} = A^D R_k^9 = A^D (A E_k)^9. \quad (16)$$
It is now easy to find the error inequality of the new scheme (6) using (16) as follows:
$$\|E_{k+1}\| \leq \|A^D\|\,\|A\|^9\,\|E_k\|^9. \quad (17)$$

Therefore, the inequalities in (17) immediately lead to the conclusion that $X_k \to A^D$ as $k \to \infty$ with ninth order of convergence.

Theorem 5. *Under the same assumptions as in Theorem 4, the iterative method (6) is asymptotically stable for finding the Drazin inverse.*

*Proof.* The steps of proving the asymptotic stability of (6) are similar to those recently taken for a general family of methods in [28]. Hence, the proof is omitted.

*Remark 6.* It should be remarked that the generalization of the proposed scheme to generalized outer inverses, that is, $A_{T,S}^{(2)}$ inverses, is straightforward according to the recent work [29].

*Remark 7.* The new iteration (6) is free of matrix powers in its implementation, which allows one to apply it easily for finding generalized inverses.

#### 4. Numerical Experiments

We herein present several numerical tests to illustrate the efficiency of the new iterative method for computing approximate inverses. MATHEMATICA 8 [30] has been employed in our calculations. The numerical tests were carried out in machine precision on a computer running Microsoft Windows XP with an Intel(R) Pentium(R) 4 CPU at 3.20 GHz and 4 GB of RAM.

For comparisons, we have used the methods “Schulz” ($m = 2$), “Chebyshev” ($m = 3$), and “KMS” in (1), together with the proposed method (6). The elapsed CPU time (in seconds) for the examples was measured with the command AbsoluteTiming[].

*Example 8.* The computation of approximate inverses for 10 dense random complex matrices of dimension 100 is considered and compared as follows: `n = 100; number = 10; SeedRandom[1234]; Table[A[l] = RandomComplex[{-2. + I, 2. - I}, {n, n}];, {l, number}];`

Note that I denotes the imaginary unit. For this example, the stopping criterion is $\|X_{k+1} - X_k\| \leq \epsilon$ and the maximum number of iterations allowed is set to 100. The initial choice has been constructed using (2) with an appropriate $\alpha$. The results of the comparisons are presented in Figures 1 and 2. As can be observed, in all 10 test problems, our iterative method (6) beats the other existing schemes.
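A rough Python analogue of this experiment can be sketched as follows. The Schulz method stands in for (6), the dimension is reduced for brevity, and the complex sampling interval imitates the Mathematica call; the seed, tolerance, and iteration cap mirror the setup above but are otherwise illustrative.

```python
import numpy as np

rng = np.random.default_rng(1234)    # seeded, in the spirit of SeedRandom[1234]
n, number = 20, 10                   # dimension reduced from 100 for brevity

residuals = []
for trial in range(number):
    # Complex entries with real part in [-2, 2] and imaginary part in
    # [-1, 1], mimicking RandomComplex[{-2. + I, 2. - I}, {n, n}].
    A = rng.uniform(-2, 2, (n, n)) + 1j * rng.uniform(-1, 1, (n, n))
    I = np.eye(n, dtype=complex)
    # Safe starting value of Ben-Israel/Greville type.
    X = A.conj().T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    for _ in range(100):                  # iteration cap of 100, as in the text
        X_new = X @ (2 * I - A @ X)       # Schulz step (the order-two case)
        if np.linalg.norm(X_new - X) <= 1e-8:
            X = X_new
            break
        X = X_new
    residuals.append(np.linalg.norm(I - A @ X))
```

Each trial terminates with a small residual norm $\|I - AX_k\|$, confirming that the computed matrices are genuine approximate inverses.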

*Example 9.* The computation of approximate inverses for 10 dense random complex matrices of dimension 200 is considered and compared as follows: `n = 200; number = 10; SeedRandom[1234]; Table[A[l] = RandomComplex[{-2. + I, 2. - I}, {n, n}];, {l, number}];`

Note again that I denotes the imaginary unit. For this test, the stopping criterion for finding the approximate inverses of the generated random matrices is $\|X_{k+1} - X_k\| \leq \epsilon$, and the maximum number of iterations allowed is set to 100, with the initial choice as in Example 8. The numerics are provided in Figures 3 and 4. The results are in harmony with the theoretical aspects and show that the new method is efficient for matrix inversion.

*Example 10.* The computation of approximate inverses for 10 dense random matrices of size 200 is investigated in what follows: `n = 200; number = 10; SeedRandom[1]; Table[A[l] = RandomReal[{0, 1}, {n, n}];, {l, number}];`

Here, the stopping criterion is $\|X_{k+1} - X_k\| \leq \epsilon$. The results are provided in Figures 5 and 6, which indicate that the new method is better in terms of the number of iterations and behaves similarly to the other well-known schemes of Schulz and Chebyshev in terms of computational time.

The order of convergence and the number of matrix-matrix products are not the only factors governing the efficiency of an algorithm per computing step in matrix iterations. Generally speaking, the stopping criterion is another important factor, which can indirectly affect the computational time of an algorithm in implementations, especially when trying to find generalized inverses.

Although in the above implementations we considered a stopping criterion based on two successive iterates, this is not a reliable termination when dealing with some large ill-conditioned matrices. A more reliable stopping criterion in the above examples is the residual norm $\|I - AX_k\|$.
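A residual-based stopping rule of this kind can be sketched as follows; Schulz again stands in for (6), and the tolerance is illustrative.

```python
import numpy as np

def invert_with_residual_stop(A, tol=1e-10, max_iter=100):
    """Schulz iteration stopped on the residual ||I - A X_k|| rather
    than on the successive-iterate difference ||X_{k+1} - X_k||.

    The residual controls the true error directly, since
    A^{-1} - X_k = A^{-1}(I - A X_k) implies
    ||A^{-1} - X_k|| <= ||A^{-1}|| * ||I - A X_k||.
    """
    n = A.shape[0]
    I = np.eye(n)
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    for _ in range(max_iter):
        R = I - A @ X
        if np.linalg.norm(R) <= tol:
            break
        X = X @ (I + R)      # equivalent to X_{k+1} = X_k (2I - A X_k)
    return X

# Usage:
A = np.array([[4.0, 1.0], [2.0, 5.0]])
X = invert_with_residual_stop(A)
```

The extra cost is negligible here because the residual $R$ is needed by the update anyway; in general, one residual evaluation per step is the price of the more trustworthy test.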

*Example 11.* The aim of this example is to apply the discussions of Section 3 to finding the Drazin inverse of the following square matrix (taken from [27]):

with $k = \mathrm{ind}(A)$. Using (6), the stopping termination on two successive iterates, and the initial matrix (10), we obtained the Drazin inverse as follows:

Due to the efficiency of the new method, we did not compare different methods for this test and only report the results of (6). Checking the three conditions of Definition 3 for the computed matrix supports the theoretical discussions.

#### 5. Summary

In this paper, we have developed a high-order matrix method for finding approximate inverses of nonsingular square matrices. It has been proven that the contributed method reaches convergence order nine using seven matrix-matrix multiplications, which makes its informational index $9/7 \approx 1.29$.

We have also discussed the importance of the well-known Drazin inverse and how to find it numerically by the proposed method. Numerical examples were also employed to support the underlying theory of the paper.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgments

This project was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, under Grant no. 159/130/1434. The authors, therefore, acknowledge with thanks the technical and financial support of DSR.

#### References

- [1] B. Cantó, C. Coll, and E. Sánchez, “Identifiability for a class of discretized linear partial differential algebraic equations,” *Mathematical Problems in Engineering*, vol. 2011, Article ID 510519, 12 pages, 2011.
- [2] J. F. Traub, *Iterative Methods for the Solution of Equations*, Prentice-Hall, Englewood Cliffs, NJ, USA, 1964.
- [3] S. K. Sen and S. S. Prabhu, “Optimal iterative schemes for computing the Moore-Penrose matrix inverse,” *International Journal of Systems Science*, vol. 8, pp. 748–753, 1976.
- [4] F. Soleymani, “A rapid numerical algorithm to compute matrix inversion,” *International Journal of Mathematics and Mathematical Sciences*, vol. 2012, Article ID 134653, 11 pages, 2012.
- [5] F. Soleymani, “A new method for solving ill-conditioned linear systems,” *Opuscula Mathematica*, vol. 33, no. 2, pp. 337–344, 2013.
- [6] F. Toutounian and F. Soleymani, “An iterative method for computing the approximate inverse of a square matrix and the Moore-Penrose inverse of a non-square matrix,” *Applied Mathematics and Computation*, vol. 224, pp. 671–680, 2013.
- [7] F. Soleymani, P. S. Stanimirović, and M. Z. Ullah, “An accelerated iterative method for computing weighted Moore-Penrose inverse,” *Applied Mathematics and Computation*, vol. 222, pp. 365–371, 2013.
- [8] F. Soleymani and P. S. Stanimirović, “A higher order iterative method for computing the Drazin inverse,” *The Scientific World Journal*, vol. 2013, Article ID 708647, 11 pages, 2013.
- [9] X. Liu, H. Jin, and Y. Yu, “Higher-order convergent iterative method for computing the generalized inverse and its application to Toeplitz matrices,” *Linear Algebra and Its Applications*, vol. 439, no. 6, pp. 1635–1650, 2013.
- [10] G. Montero, L. González, E. Flórez, M. D. García, and A. Suárez, “Approximate inverse computation using Frobenius inner product,” *Numerical Linear Algebra with Applications*, vol. 9, no. 3, pp. 239–247, 2002.
- [11] F. Soleymani, “On a fast iterative method for approximate inverse of matrices,” *Communications of the Korean Mathematical Society*, vol. 28, no. 2, pp. 407–418, 2013.
- [12] X. Sheng, “Execute elementary row and column operations on the partitioned matrix to compute M-P inverse A^{†},” *Abstract and Applied Analysis*, vol. 2014, Article ID 596049, 6 pages, 2014.
- [13] G. Schulz, “Iterative Berechnung der reziproken Matrix,” *Zeitschrift für Angewandte Mathematik und Mechanik*, vol. 13, pp. 57–59, 1933.
- [14] A. Ben-Israel and T. N. E. Greville, *Generalized Inverses*, Springer, New York, NY, USA, 2nd edition, 2003.
- [15] F. Soleymani, “A fast convergent iterative solver for approximate inverse of matrices,” *Numerical Linear Algebra with Applications*, 2013.
- [16] A. M. Ostrowski, “Sur quelques transformations de la série de Liouville-Neumann,” *Comptes Rendus de l'Académie des Sciences*, vol. 206, pp. 1345–1347, 1938.
- [17] F. Soleimani, F. Soleymani, and S. Shateyi, “Some iterative methods free from derivatives and their basins of attraction for nonlinear equations,” *Discrete Dynamics in Nature and Society*, vol. 2013, Article ID 301718, 10 pages, 2013.
- [18] F. Soleymani and D. K. R. Babajee, “Computing multiple roots using a class of quartically convergent methods,” *Alexandria Engineering Journal*, vol. 52, pp. 531–541, 2013.
- [19] F. Soleymani, “Efficient optimal eighth-order derivative-free methods for nonlinear equations,” *Japan Journal of Industrial and Applied Mathematics*, vol. 30, no. 2, pp. 287–306, 2013.
- [20] J. R. Torregrosa, I. K. Argyros, C. Chun, A. Cordero, and F. Soleymani, “Iterative methods for nonlinear equations or systems and their applications,” *Journal of Applied Mathematics*, vol. 2013, Article ID 656953, 2 pages, 2013.
- [21] G. W. Stewart and J. G. Sun, *Matrix Perturbation Theory*, Academic Press, Boston, Mass, USA, 1990.
- [22] M. Zaka Ullah, F. Soleymani, and A. S. Al-Fhaid, “An efficient matrix iteration for computing weighted Moore-Penrose inverse,” *Applied Mathematics and Computation*, vol. 226, pp. 441–454, 2014.
- [23] M. P. Drazin, “Pseudo-inverses in associative rings and semigroups,” *The American Mathematical Monthly*, vol. 65, pp. 506–514, 1958.
- [24] I. Kyrchei, “Explicit formulas for determinantal representations of the Drazin inverse solutions of some matrix and differential matrix equations,” *Applied Mathematics and Computation*, vol. 219, no. 14, pp. 7632–7644, 2013.
- [25] S. L. Campbell, C. D. Meyer, Jr., and N. J. Rose, “Applications of the Drazin inverse to linear systems of differential equations with singular constant coefficients,” *SIAM Journal on Applied Mathematics*, vol. 31, no. 3, pp. 411–425, 1976.
- [26] L. Zhao, “The expression of the Drazin inverse with rank constraints,” *Journal of Applied Mathematics*, vol. 2012, Article ID 390592, 10 pages, 2012.
- [27] X. Li and Y. Wei, “Iterative methods for the Drazin inverse of a matrix with a complex spectrum,” *Applied Mathematics and Computation*, vol. 147, no. 3, pp. 855–862, 2004.
- [28] F. Soleymani and P. S. Stanimirović, “A note on the stability of a pth order iteration for finding generalized inverses,” *Applied Mathematics Letters*, vol. 28, pp. 77–81, 2014.
- [29] P. S. Stanimirović and F. Soleymani, “A class of numerical algorithms for computing outer inverses,” *Journal of Computational and Applied Mathematics*, vol. 263, pp. 236–245, 2014.
- [30] M. Trott, *The Mathematica GuideBook for Numerics*, Springer, New York, NY, USA, 2006.

#### Copyright

Copyright © 2014 A. S. Al-Fhaid et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.