Discrete Dynamics in Nature and Society

Volume 2018, Article ID 6758302, 8 pages

https://doi.org/10.1155/2018/6758302

## An Improved Computationally Efficient Method for Finding the Drazin Inverse

^{1}Mathematics Department, College of Science, King Saud University, Riyadh, Saudi Arabia

^{2}Departamento de Matemática, Universidad de Tarapacá, Casilla 7D, Arica, Chile

Correspondence should be addressed to Haifa Bin Jebreen; hjebreen@ksu.edu.sa

Received 2 July 2018; Revised 20 September 2018; Accepted 2 October 2018; Published 17 October 2018

Guest Editor: Tarek F. Ibrahim

Copyright © 2018 Haifa Bin Jebreen and Yurilev Chalco-Cano. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The Drazin inverse is one of the most significant inverses in matrix theory, and its computation is an intensive and useful task. The objective of this work is to propose a computationally effective iterative scheme for finding the Drazin inverse. The convergence is investigated analytically under a suitable choice of the initial matrix. The theoretical discussion is supported by several experiments showing the stability and convergence of the proposed method.

#### 1. Introductory Notes

Methods for calculating generalized inverses are a topic of current investigation in computational mathematics (see, e.g., [1, 2]). An enormous amount of work on generalized inverses has been done during the past 63 years. E. H. Moore (1920) was a pioneer in the area of generalized algebraic matrix inverses, but intense activity started in 1955 [3, 4]. Note that R. Penrose (1955, 1956), C. R. Rao and S. K. Mitra (1971), and G. H. Golub and W. Kahan (1965) are just a few of the names that have contributed significantly to the area of generalized matrix inverses and their applications.

The terms "minimum-norm least-squares inverse," "Moore-Penrose inverse," and "pseudo-inverse" of a matrix are used far more often than the term "Drazin inverse." Nevertheless, this unique inverse is widely used for solving singular linear systems and linear optimization problems.

The fundamental partitioning method of Greville for calculating generalized inverses was discussed in [3]. This method demands many operations and consequently accumulates more round-off errors. Many computational schemes for the Drazin inverse suffer from numerical instability or slow convergence, and accordingly it is worthwhile to study and propose new iterations for this purpose.

It is known that cumulative rounding errors can be canceled completely only via "symbolic computations." In that setting, variables are handled in exact form, so no precision is lost during the computation. However, once the dimension of the input matrix is large, calculating its Drazin inverse by symbolic methods becomes very time-consuming. This has encouraged researchers to develop computationally stable and fast iterative methods for this purpose; see, e.g., [5–7].

The author in [8] investigated that, in the case of the square singular linear system
$$Ax = b, \qquad A \in \mathbb{C}^{n\times n}, \quad b \in \mathcal{R}(A^{k}), \quad k = \mathrm{ind}(A),$$
the general solution can be represented as follows:
$$x = A^{D} b + z, \qquad z \in \mathcal{N}(A),$$
using the Drazin inverse $A^{D}$; in particular, $x = A^{D} b$ is the unique solution lying in $\mathcal{R}(A^{k})$.
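As a small illustration of this representation (the matrices below are our own example, not taken from [8]): for a singular $A$ with a known Drazin inverse and a right-hand side $b \in \mathcal{R}(A^{k})$, the vector $A^{D}b$ solves the system.

```python
import numpy as np

# Hypothetical example: A is idempotent and singular, so ind(A) = 1 and its
# Drazin (group) inverse is A itself, since A*A*A = A, A*A = A*A.
A = np.array([[1.0, 1.0],
              [0.0, 0.0]])
AD = A.copy()                  # for an idempotent matrix, A^D = A
b = A @ np.array([3.0, 2.0])   # guarantees b lies in R(A^k); here b = (5, 0)
x = AD @ b                     # particular solution via the Drazin inverse
print(np.allclose(A @ x, b))   # True: A x = b
```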

Let $\mathbb{C}^{m\times n}$ and $\mathbb{C}_r^{m\times n}$ indicate the collection of all $m \times n$ complex matrices and of those having rank $r$, respectively. Furthermore, for $A \in \mathbb{C}^{m\times n}$, we indicate by $A^{*}$, $\mathcal{R}(A)$, $\mathrm{rank}(A)$, and $\mathcal{N}(A)$ the conjugate transpose, the range, the rank, and the null space of the matrix $A$, respectively.

In the study of associative rings and semigroups, Drazin in the fundamental work [9] introduced a different kind of generalized inverse, one which does not satisfy $AXA = A$ in general but commutes with the element. The significance of this kind of inverse and its calculation was subsequently elaborated by Wilkinson in [10].

Before continuing, we recall several useful definitions.

*Definition 1. *The smallest nonnegative integer $k$ such that
$$\mathrm{rank}\left(A^{k+1}\right) = \mathrm{rank}\left(A^{k}\right)$$
is named the index of $A$ and is denoted by $\mathrm{ind}(A)$.

*Definition 2. *Assume that $A$ is an $n \times n$ complex matrix; then the Drazin inverse of the matrix $A$, denoted $A^{D}$, is the unique matrix $X$ that satisfies the following conditions:
$$A^{k+1} X = A^{k}, \qquad X A X = X, \qquad A X = X A,$$
wherein $k = \mathrm{ind}(A)$ is the index of the matrix $A$.
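To make Definition 2 concrete, the three conditions can be checked numerically for a small matrix whose Drazin inverse is known in closed form; the example matrix below is our own, chosen for illustration.

```python
import numpy as np

# Hypothetical example: A is block-diagonal with an invertible part (the scalar 2)
# and a 2x2 nilpotent part, so ind(A) = 2 and A^D = diag(0.5, 0, 0) in closed form.
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
AD = np.array([[0.5, 0.0, 0.0],
               [0.0, 0.0, 0.0],
               [0.0, 0.0, 0.0]])
k = 2  # index of A: rank(A^3) == rank(A^2)

# The three defining conditions of the Drazin inverse:
c1 = np.allclose(np.linalg.matrix_power(A, k + 1) @ AD, np.linalg.matrix_power(A, k))
c2 = np.allclose(AD @ A @ AD, AD)
c3 = np.allclose(A @ AD, AD @ A)
print(c1, c2, c3)  # True True True
```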

Recall that once $\mathrm{ind}(A) = 1$, $A^{D}$ is named the group inverse of $A$ and is denoted by $A^{\#}$. In addition, if $A$ is nonsingular, then $\mathrm{ind}(A) = 0$, and it is straightforward to observe that $A^{D} = A^{-1}$.

Note that the idempotent matrix $A A^{D}$ is the projector on $\mathcal{R}(A^{k})$ along $\mathcal{N}(A^{k})$, where $\mathcal{R}(A^{k})$ denotes the range of $A^{k}$ and $\mathcal{N}(A^{k})$ is the null space of $A^{k}$. Also, if $A$ is nilpotent, then $A^{D} = 0$; see [11–14] for more.

Matrix iterations are sensitive to the choice of the first approximation $X_0$ used to initiate the scheme and obtain convergence to $A^{D}$. Practically speaking, iterative methods (like the Schulz-type iterations) are effective, particularly for sparse matrices having sparse inverses, but the choice of the initial value remains an issue. However, this requirement has been relaxed by several suitable initial values furnished in the literature. A broad discussion on choosing the initial matrix is given in [15].

In this work, we concentrate on proposing and investigating a novel matrix iteration for calculating the Drazin inverse quickly and efficiently, with a clear focus on decreasing the elapsed CPU time in contrast to several well-known competitors from the literature. To this end, an analytical discussion is furnished to establish the behavior of the presented method. It is proven that the novel approach attains a high convergence order using a smaller number of matrix-matrix multiplications; i.e., it has a better computational efficiency index.

After a short introduction in Section 1 and a brief review of the most common iterative schemes for calculating the Drazin inverse in Section 2, Section 3 unfolds our contribution as a novel high-order efficient method. Section 4 furnishes a discussion of its convergence rate. Next, in Section 5, we study the complexity of the iterative methods in order to select the best iterative expression analytically. In Section 6, we numerically investigate the application and usefulness of the novel scheme for the calculation of the Drazin inverse; a clear decrease in the elapsed CPU time is observed there. Ultimately, Section 7 brings forward a summary and concluding comments.

#### 2. The Literature

One of the fundamental techniques for calculating the inverse of a nonsingular complex matrix is the Schulz method given in [16] as follows:
$$X_{k+1} = X_k \left(2I - A X_k\right), \qquad k = 0, 1, 2, \dots,$$
wherein $I$ is the identity matrix of the same dimension as $A$.
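For reference, the Schulz iteration for a nonsingular matrix can be sketched in a few lines. The initial value $X_0 = A^{*}/(\|A\|_1 \|A\|_\infty)$ used here is one standard convergence-guaranteeing choice for the nonsingular case, not the Drazin-specific initial matrix discussed below.

```python
import numpy as np

# Minimal sketch of the Schulz iteration X_{k+1} = X_k (2I - A X_k) for a
# nonsingular real matrix A.
def schulz_inverse(A, tol=1e-12, max_iter=100):
    n = A.shape[0]
    I = np.eye(n)
    # Standard initial value guaranteeing ||I - A X_0|| < 1:
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    for _ in range(max_iter):
        X_new = X @ (2 * I - A @ X)
        if np.linalg.norm(X_new - X, np.inf) < tol:
            return X_new
        X = X_new
    return X

A = np.array([[4.0, 1.0], [2.0, 3.0]])
X = schulz_inverse(A)
print(np.allclose(X, np.linalg.inv(A)))  # True
```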

In 2004, Li and Wei in [17] proved that the matrix method of Schulz (5) can be applied for calculating the Drazin inverse of square matrices having real or complex eigenvalues. They proposed the following initial matrix:
$$X_0 = \alpha A^{k}, \qquad k = \mathrm{ind}(A),$$
wherein the parameter $\alpha$ should be selected such that the following criterion holds:
$$\max_{1 \le i \le t} \left|1 - \alpha \lambda_i^{k+1}\right| < 1,$$
where $\lambda_1, \dots, \lambda_t$ are the nonzero eigenvalues of $A$. Using the initial matrix of the form (6) along with (5) results in a quadratically convergent iterative scheme for calculating the well-known Drazin inverse.

Let us now briefly review some higher-order iteration schemes for this purpose. The motivation for seeking more efficient schemes is the fact that (5) is slow during its initial iterates, which increases the computational burden of the overall inversion process.

Li et al. in [18] studied and discussed an iterative method of the following form,
$$X_{k+1} = X_k \left(3I - A X_k \left(3I - A X_k\right)\right),$$
with cubic convergence rate, and also presented another scheme of the same order for calculating $A^{D}$ (and similarly $A^{\#}$, using a suitable initial value), as provided in the following formula:
$$X_{k+1} = \left(3I - X_k A \left(3I - X_k A\right)\right) X_k.$$

Recall that a general way of deriving similar iterations was furnished in [19, Chapter 5]. In fact, Krishnamurthy and Sen proposed the following quartically convergent scheme:
$$X_{k+1} = X_k \left(I + R_k + R_k^{2} + R_k^{3}\right),$$
in which $R_k = I - A X_k$. As another instance, a ninth-order matrix iteration can be written as follows:
$$X_{k+1} = X_k \left(I + R_k + R_k^{2}\right)\left(I + R_k^{3} + R_k^{6}\right).$$
Generally speaking, applying Schröder's general iteration (often named the Schröder-Traub sequence [20]) to the nonlinear matrix equation
$$X^{-1} - A = 0,$$
one can obtain the following scheme [21]:
$$X_{k+1} = X_k \left(I + R_k + R_k^{2} + \cdots + R_k^{p-1}\right),$$
of order $p$, needing $p$ Horner matrix products, wherein $R_k = I - A X_k$.
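The general Schröder-Traub scheme above can be sketched compactly; the Horner-style loop below performs the stated number of matrix products per step, and the quartic case recovers the Krishnamurthy-Sen iteration. The nonsingular test matrix and initial value are our own illustration.

```python
import numpy as np

# Sketch of one order-p hyperpower step X <- X (I + R + ... + R^{p-1}),
# with R = I - A X, evaluated in Horner form using p matrix products.
def hyperpower_step(A, X, p):
    n = A.shape[0]
    I = np.eye(n)
    R = I - A @ X           # product 1
    S = I + R               # Horner accumulator for I + R + ... + R^{p-1}
    for _ in range(p - 2):  # p - 2 further products
        S = I + R @ S
    return X @ S            # final product: p products in total

A = np.array([[3.0, 1.0], [1.0, 2.0]])
X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
for _ in range(6):
    X = hyperpower_step(A, X, 4)  # p = 4: the quartic scheme
print(np.allclose(X, np.linalg.inv(A)))  # True
```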

#### 3. Proposing a New Method

Now, we propose a high-order scheme which is computationally efficient in terms of the number of matrix-matrix multiplications. Let us first consider the tenth-order method obtained from (13) as follows:
$$X_{k+1} = X_k \left(I + R_k + R_k^{2} + \cdots + R_k^{9}\right), \qquad R_k = I - A X_k.$$
To improve the performance of this scheme, we factorize (14) as much as possible so as to decrease the number of matrix-matrix products. First we can obtain
$$X_{k+1} = X_k \left(I + R_k\right)\Big(I + B_k\big(I + B_k\left(I + B_k\left(I + B_k\right)\right)\big)\Big), \qquad B_k = R_k^{2},$$
which includes seven matrix-matrix multiplications.

By further factorizations and simplifications, we can propose the following iterative method,
$$X_{k+1} = X_k \left(I + R_k\right) C_k,$$
with only 6 matrix-matrix products per cycle, where
$$B_k = R_k^{2}, \qquad C_k = \left(B_k^{2} + 2 B_k + \tfrac{5}{3} I\right)\left(B_k^{2} - B_k + \tfrac{4}{3} I\right) - \tfrac{11}{9} I.$$

Obtaining $B_k$ and $C_k$ so as to reduce the number of matrix-matrix products is new and not easy. In fact, one has to solve a system of nonlinear equations in a symbolic environment in order to find coefficients for which the factorization reproduces (14).

Schulz-type iterations such as (16) are numerically stable; i.e., they have the self-correcting characteristic and rely purely on matrix multiplications per cycle. Matrix multiplication is efficiently parallelizable, particularly for structured matrices. Method (16) can also be combined with sparsity-preserving techniques so as to decrease the burden of the matrix-matrix products per cycle.

We can now apply (16), with tenth-order convergence rate, for calculating the Drazin inverse when the first value is selected as follows:
$$X_0 = \alpha A^{k},$$
or
$$X_0 = \frac{2}{\mathrm{tr}\left(A^{k+1}\right)}\, A^{k},$$
wherein $\mathrm{tr}(\cdot)$ stands for the trace of a square matrix and $k = \mathrm{ind}(A)$ is the index of $A$.
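A minimal sketch of a tenth-order Schulz-type cycle of this kind, together with the trace-based initial value, is given below. The particular quadratic factorization used for the degree-nine polynomial is one possible six-product choice, included purely for illustration: it reproduces $I + R_k + \cdots + R_k^{9}$, but its exact coefficients are an assumption, not necessarily those of (16)-(17). The test matrix is our own example.

```python
import numpy as np

# Sketch: tenth-order Schulz-type step for the Drazin inverse with six
# matrix-matrix products per cycle and X_0 = 2 A^k / tr(A^{k+1}).
# NOTE: the coefficients 5/3, 4/3, 11/9 below are an illustrative factorization
# (verified to reproduce I + R + R^2 + ... + R^9), assumed for this sketch.
def drazin_tenth_order(A, k, n_iter=10):
    n = A.shape[0]
    I = np.eye(n)
    Ak = np.linalg.matrix_power(A, k)
    X = (2.0 / np.trace(Ak @ A)) * Ak          # trace-based initial value
    for _ in range(n_iter):
        R = I - A @ X                          # product 1
        B = R @ R                              # product 2
        B2 = B @ B                             # product 3
        C = (B2 + 2 * B + (5 / 3) * I) @ (B2 - B + (4 / 3) * I) - (11 / 9) * I  # product 4
        X = X @ ((I + R) @ C)                  # products 5 and 6
    return X

# Singular 4x4 example: eigenvalues 1 and 2 plus a nilpotent block, ind(A) = 2,
# so the Drazin inverse is diag(1, 0.5, 0, 0).
A = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 2.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0, 0.0]])
XD = drazin_tenth_order(A, k=2)
print(np.allclose(XD, np.diag([1.0, 0.5, 0.0, 0.0])))  # True
```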

Before providing the main theorems regarding the convergence analysis and the rate of convergence of the proposed method, we recall the following lemmas.

Proposition 3 (see [22]). *Let $A \in \mathbb{C}^{n \times n}$ and $\varepsilon > 0$ be given. There is at least one matrix norm $\|\cdot\|$ such that
$$\rho(A) \le \|A\| \le \rho(A) + \varepsilon,$$
where $\rho(A) = \max\{|\lambda| : \lambda \in \sigma(A)\}$ and $\sigma(A)$ indicates the set of all eigenvalues of $A$.*

Proposition 4 (see [22]). *If $P$ indicates the projector on a space $M$ along a space $N$, then*(i)*$PA = A$ iff $\mathcal{R}(A) \subseteq M$;*(ii)*$AP = A$ iff $N \subseteq \mathcal{N}(A)$.*

#### 4. Convergence Analysis

*Theorem 5. Assume that $A \in \mathbb{C}^{n \times n}$ is a singular square matrix with $k = \mathrm{ind}(A)$. In addition, let the initial value $X_0$ be selected via (6) or (18). Then the matrices generated via (16) satisfy the following error estimate for calculating the Drazin inverse:
$$\left\|A^{D} - X_n\right\| \le \left\|A^{D}\right\| \left\|A A^{D} - A X_0\right\|^{10^{n}}.$$
Also, the convergence order is ten.*

*Proof. *Since $X_0$ is a polynomial in $A$, it follows by induction from (16) that every iterate $X_n$ is a polynomial in $A$ as well. Consequently, $A X_n = X_n A$, $\mathcal{R}(X_n) \subseteq \mathcal{R}(A^{k})$, and $\mathcal{N}(A^{k}) \subseteq \mathcal{N}(X_n)$. Since the idempotent matrix $A A^{D}$ is the projector on $\mathcal{R}(A^{k})$ along $\mathcal{N}(A^{k})$, Proposition 4 leads to
$$A A^{D} X_n = X_n, \qquad X_n A A^{D} = X_n.$$
Now set $R_n = I - A X_n$ and define the residual matrix
$$E_n = A A^{D} - A X_n = A A^{D} \left(I - A X_n\right).$$
Since (16) is a factorized form of (14), we have
$$A X_{n+1} = A X_n \sum_{i=0}^{9} R_n^{i} = \left(I - R_n\right) \sum_{i=0}^{9} R_n^{i} = I - R_n^{10},$$
i.e., $I - A X_{n+1} = \left(I - A X_n\right)^{10}$. Multiplying this identity by the idempotent matrix $A A^{D}$, which commutes with $A X_n$, we obtain
$$E_{n+1} = A A^{D} \left(I - A X_n\right)^{10} = \left(A A^{D} \left(I - A X_n\right)\right)^{10} = E_n^{10}.$$
Applying an arbitrary matrix norm on this relation yields
$$\|E_{n+1}\| \le \|E_n\|^{10}, \qquad \|E_n\| \le \|E_0\|^{10^{n}}.$$
Since $X_0$ is chosen as in (6) or (18), Proposition 3 guarantees the existence of a matrix norm for which $\|E_0\| = \|A A^{D} - A X_0\| < 1$, and therefore $E_n \to 0$ as $n \to \infty$. Finally, using $A^{D} = A^{D} A A^{D}$ and $A^{D} A X_n = X_n$, the error matrix satisfies
$$A^{D} - X_n = A^{D} \left(A A^{D} - A X_n\right) = A^{D} E_n,$$
whence
$$\left\|A^{D} - X_n\right\| \le \left\|A^{D}\right\| \left\|A A^{D} - A X_0\right\|^{10^{n}},$$
which is a confirmation of (21). These inequalities show that $X_n \to A^{D}$ as $n \to \infty$ with tenth order of convergence. The proof is now complete.

*Remark 6. *An important practical issue is finding the initial approximation $X_0$. In accordance with Proposition 3, the parameter $\alpha$ in $X_0 = \alpha A^{k}$ must satisfy the following relation to guarantee convergence in the Drazin inverse case:
$$\max_{1 \le i \le t} \left|1 - \alpha \lambda_i^{k+1}\right| < 1,$$
where $\lambda_1, \dots, \lambda_t$ are the nonzero eigenvalues of $A$ and $k = \mathrm{ind}(A)$.

#### 5. Efficiency Comparison

The computational complexity of computing a matrix inverse by direct methods is $O(mn^{2})$, where the order of the (rectangular) matrix is $m \times n$. Broadly speaking, if the $O(n^{3})$ complexity of the general square case is brought down to $O(n^{\omega})$ with $\omega < 3$, say $\omega = 2.9$, then it is definitely an achievement. Reducing the complexity by a constant factor alone is less attractive, since what we have currently in 2018 is exa-flops computing speed, and desktop and laptop computers are available in the hundreds of millions worldwide, unlike the situation in the mid-20th century.

Thus, we calculate and compare theoretically the computational efficiencies of the various schemes (5), (8), (9), (10), (11), and (16), since they all converge to $A^{D}$ upon the choice of a suitable initial value (6).

By considering a unit cost for the arithmetic operations, typical of floating point calculations, we can take into account the inverse-finder informational efficiency index. This index applies two values $p$ and $\theta$, which are the convergence order and the number of matrix-matrix products, respectively. Hence, this index is given by [20]
$$\mathrm{IIEI} = \frac{p}{\theta}.$$
Apart from this index, another useful one, which is mainly in agreement with the elapsed CPU time of iterative methods in numerical linear algebra, is the computational efficiency index defined by [23]
$$\mathrm{CEI} = p^{1/\theta}.$$
Therefore, from an analytical standpoint, a fruitful scheme should achieve a high convergence order with fewer matrix products than its competitors (viz., a larger $\mathrm{CEI}$).
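The index $\mathrm{CEI} = p^{1/\theta}$ is trivial to tabulate; in the following sketch the convergence orders and per-cycle product counts are those assumed in this section, not measured values.

```python
# Sketch: computational efficiency index CEI = p**(1/theta) for a method of
# order p using theta matrix-matrix products per cycle.
methods = {
    "Schulz (p = 2, theta = 2)": (2, 2),
    "cubic (p = 3, theta = 3)": (3, 3),
    "quartic (p = 4, theta = 4)": (4, 4),
    "ninth-order (p = 9, theta = 6)": (9, 6),
    "proposed (p = 10, theta = 6)": (10, 6),
}
for name, (p, theta) in methods.items():
    print(f"{name}: CEI = {p ** (1 / theta):.4f}")
```

With these counts, the proposed scheme attains the largest value, $10^{1/6} \approx 1.468$, consistent with the comparison made in the text.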

In Table 1, we provide a comparison of the number of matrix products and the rate of convergence, accompanied by the indices (41) and (42), for the various schemes. The results indicate that the proposed scheme of Section 3 is more efficient than its competitors.