Abstract

We investigate a new higher order iterative method for computing the generalized inverse $A^{(2)}_{T,S}$ of a given matrix $A$. We also discuss how the new method can be applied to finding approximate inverses of nonsingular square matrices. A convergence analysis is included to show that the proposed scheme has at least fifteenth-order convergence. Some numerical tests are also presented to show the superiority of the new method.

1. Introduction

The traditional generalized inverses (the Moore-Penrose inverse, the weighted Moore-Penrose inverse, the Drazin inverse, the group inverse, the Bott-Duffin inverse, and so forth) are of special interest in matrix theory. They are extensively used in statistics, control theory, power systems, nonlinear equations, optimization, and numerical analysis. Most of these generalized inverses are outer inverses with a prescribed range $T$ and null space $S$. For a given complex matrix $A$, the unique matrix $X$ such that $XAX = X$, $\mathcal{R}(X) = T$, and $\mathcal{N}(X) = S$ is known as the outer inverse (or 2-inverse) of $A$ with prescribed range $T$ and null space $S$, denoted by $A^{(2)}_{T,S}$. It also plays an important role in singular differential and difference equations, in the stable approximation of ill-posed problems, and in linear and nonlinear problems [1, 2]. As a result, it has been extensively studied, and many methods [3–16] have been proposed in the literature for computing it. The techniques for this problem basically fall into two classes: direct solvers, such as Gaussian elimination with partial pivoting (GEPP), and iterative solvers. Direct methods usually cost much in both time and space to achieve the desired accuracy, and sometimes they fail to work at all. In contrast, an iterative method compensates for individual and accumulated round-off errors, since it is a process of successive refinement.

The well-known second-order Schulz method [17] is Newton's method applied to matrix inversion, $X_{k+1} = X_k(2I - AX_k)$, where $I$ is the identity matrix. In 2011, Li et al. [18] gave the locally cubically convergent scheme $X_{k+1} = X_k\bigl(3I - AX_k(3I - AX_k)\bigr)$ and also proposed another iterative method, $X_{k+1} = \bigl[I + \frac{1}{4}(I - X_kA)(3I - X_kA)^2\bigr]X_k$. Krishnamurthy and Sen [19] provided the fourth-order method $X_{k+1} = X_k(I + R_k + R_k^2 + R_k^3)$, in which $R_k = I - AX_k$. As another example from this primary source, the authors provided the twelfth-order method $X_{k+1} = X_k\sum_{i=0}^{11} R_k^i$.
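As a concrete illustration, the second-order Schulz iteration above can be sketched in a few lines of NumPy. The starting guess $X_0 = A^{*}/(\|A\|_1\|A\|_\infty)$ is a classical safe choice that guarantees $\rho(I - AX_0) < 1$ for nonsingular $A$; it is an assumption for this sketch, not a prescription from this paper:

```python
import numpy as np

def schulz_inverse(A, tol=1e-12, max_iter=100):
    """Second-order Schulz (Newton) iteration X_{k+1} = X_k (2I - A X_k)."""
    n = A.shape[0]
    I = np.eye(n)
    # Classical safe starting value: X0 = A^T / (||A||_1 * ||A||_inf)
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    for _ in range(max_iter):
        X = X @ (2 * I - A @ X)
        if np.linalg.norm(I - A @ X, np.inf) < tol:
            break
    return X
```

Each step costs two matrix-by-matrix multiplications, and the residual norm is roughly squared at every iteration.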

It is known that the Schulz iteration is numerically stable. Unfortunately, the method is too slow at the beginning of the process of finding $A^{(2)}_{T,S}$. To remedy this drawback and achieve a better balance between high speed and operational cost, we define a new higher order iterative method.

The aim of the present paper is to introduce and investigate a new higher order iterative method for computing the generalized inverse $A^{(2)}_{T,S}$ of a given matrix $A$. We also discuss how the new method can be applied to finding approximate inverses of nonsingular square matrices. A convergence analysis is included to show that the proposed scheme has at least fifteenth-order convergence. Some numerical tests are also presented to show the superiority of the new method.

The paper is organized as follows. Section 1 is the introduction. In Section 2, some preliminaries and notation are given. In Section 3, we analytically discuss the application of the new algorithm to the computation of $A^{(2)}_{T,S}$; the proposed iterative method for computing the matrix inverse, together with its convergence analysis, is also given there. In Section 4, we discuss the complexity of the iterative methods in order to identify, theoretically, the most efficient one. Some numerical examples are worked out in Section 5, and the results obtained are compared with existing methods.

2. Preliminaries

In this section, we describe some concepts used in this paper. Let $\mathbb{C}^{m\times n}$ denote the set of $m \times n$ complex matrices, and let $T$ and $S$ denote subspaces of $\mathbb{C}^n$ and $\mathbb{C}^m$, respectively. In addition, for $A \in \mathbb{C}^{m\times n}$, the symbols $\mathcal{R}(A)$, $\mathcal{N}(A)$, $A^*$, $\rho(A)$, and $\|A\|_F$ denote the range, the null space, the conjugate transpose, the spectral radius, and the Frobenius norm of $A$, respectively. It is well known that the sequence $\{A^k\}$ converges to zero, that is, $\lim_{k\to\infty} A^k = 0$, if and only if $\rho(A) < 1$.
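The convergence fact at the end of this section is easy to check numerically; the two matrices below are illustrative examples chosen for this sketch, not taken from the paper:

```python
import numpy as np

def rho(M):
    """Spectral radius of M: largest eigenvalue magnitude."""
    return max(abs(np.linalg.eigvals(M)))

A = np.array([[0.5, 0.6], [0.0, 0.9]])   # rho(A) = 0.9 < 1
B = np.array([[1.1, 0.0], [0.3, 0.2]])   # rho(B) = 1.1 > 1

# A^k -> 0 exactly when rho(A) < 1; powers of B blow up instead.
Ak = np.linalg.matrix_power(A, 200)
Bk = np.linalg.matrix_power(B, 200)
print(np.linalg.norm(Ak), np.linalg.norm(Bk))
```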

The following lemmas are needed in what follows.

Lemma 1. If $P_{L,M}$ denotes the projector on a space $L$ along a space $M$, then (i) $P_{L,M}A = A$ if and only if $\mathcal{R}(A) \subseteq L$; (ii) $AP_{L,M} = A$ if and only if $\mathcal{N}(A) \supseteq M$.

Lemma 2. Let $A \in \mathbb{C}^{m\times n}$ be of rank $r$; let $T$ and $S$ be subspaces of $\mathbb{C}^n$ and $\mathbb{C}^m$ of dimensions $s$ and $m - s$, respectively, with $s \le r$. Then, $A$ has a unique outer inverse $X$, denoted $A^{(2)}_{T,S}$, such that $\mathcal{R}(X) = T$ and $\mathcal{N}(X) = S$ if and only if $AT \oplus S = \mathbb{C}^m$.

It is well known that, for $A \in \mathbb{C}^{m\times n}$, the Moore-Penrose inverse $A^{\dagger}$, the Drazin inverse $A^{D}$, the weighted Moore-Penrose inverse $A^{\dagger}_{M,N}$, and the weighted Drazin inverse $A_{d,W}$ can be represented by (i) $A^{\dagger} = A^{(2)}_{\mathcal{R}(A^*),\mathcal{N}(A^*)}$; (ii) $A^{D} = A^{(2)}_{\mathcal{R}(A^k),\mathcal{N}(A^k)}$, where $k = \operatorname{ind}(A)$; (iii) $A^{\dagger}_{M,N} = A^{(2)}_{\mathcal{R}(N^{-1}A^*M),\mathcal{N}(N^{-1}A^*M)}$; (iv) $A_{d,W} = A^{(2)}_{\mathcal{R}(A(WA)^l),\mathcal{N}(A(WA)^l)}$, where $l = \operatorname{ind}(WA)$.
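Representation (i) can be verified numerically: the Moore-Penrose inverse returned by `numpy.linalg.pinv` is an outer inverse, and its range/null-space characterization forces the two Penrose symmetry conditions. The random rank-deficient test matrix is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3)) @ rng.standard_normal((3, 5))  # rank <= 3
X = np.linalg.pinv(A)  # Moore-Penrose inverse A^dagger

# X is an outer inverse: X A X = X
assert np.allclose(X @ A @ X, X)
# R(X) = R(A^*) and N(X) = N(A^*) imply the Penrose symmetry conditions
assert np.allclose((A @ X).conj().T, A @ X)
assert np.allclose((X @ A).conj().T, X @ A)
```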

3. A New Method for Finding the Generalized Inverse

Inspired and motivated by the classical Schulz method (1) for finding the inverse of a matrix $A$, it should be pointed out that Newton's method for the corresponding matrix equation [17] is as follows: To construct a new matrix iteration, we must find a nonlinear equation solver that is more efficient than Newton's method when applied to (7). Toward this goal, we apply the following rational iteration function: on the matrix equation (7), obtaining the following efficient high-order method after simplification: Now, by further simplification, we suggest the following matrix iteration: where $I$ is the identity matrix.

Note that it is also known that, for $G \in \mathbb{C}^{n\times m}$ with $\mathcal{R}(G) = T$ and $\mathcal{N}(G) = S$, and the initial approximation $X_0 = \beta G$ with a suitable scalar $\beta$, the range and null space of every iterate are preserved, $\mathcal{R}(X_k) \subseteq T$ and $\mathcal{N}(X_k) \supseteq S$. We use this fact in the following theorem to establish the theoretical order of the proposed method (10) for finding the generalized inverse $A^{(2)}_{T,S}$.

Theorem 3. Let $A \in \mathbb{C}^{m\times n}$ be of rank $r$, let $T$ be a subspace of $\mathbb{C}^n$ of dimension $s \le r$, and let $S$ be a subspace of $\mathbb{C}^m$ of dimension $m - s$. Suppose $G \in \mathbb{C}^{n\times m}$ satisfies $\mathcal{R}(G) = T$ and $\mathcal{N}(G) = S$, and let the initial approximation be $X_0 = \beta G$. Then the sequence $\{X_k\}$ generated by (9) satisfies the following error estimate when finding $A^{(2)}_{T,S}$:

Proof. We use the notations that and subsequently . Then, By taking an arbitrary matrix norm on both sides of (13), we obtain In addition, since , by relation (14) we obtain that Now, if we consider , then Using mathematical induction, we obtain Using this fact, with and , in conjunction with (9), we conclude that It is not difficult to verify that Therefore, the error matrix satisfies From the last identity and (17), we have which confirms (12).

Theorem 4. Let $A \in \mathbb{C}^{m\times n}$ be of rank $r$, let $T$ be a subspace of $\mathbb{C}^n$ of dimension $s \le r$, and let $S$ be a subspace of $\mathbb{C}^m$ of dimension $m - s$. Suppose $G \in \mathbb{C}^{n\times m}$ satisfies $\mathcal{R}(G) = T$ and $\mathcal{N}(G) = S$. If the initial approximation $X_0$ is chosen in accordance with (11), then the sequence $\{X_k\}$ generated by (9) converges to $A^{(2)}_{T,S}$ with fifteenth order.

Proof. From (13), we know Set . We have On the other hand, from conditions (18) and (19), we obtain According to relations (22)–(25), we have Therefore It is now easy to find the error inequality of the high-order iteration as follows:
Thus, $\lim_{k\to\infty} X_k = A^{(2)}_{T,S}$; that is, the sequence $\{X_k\}$ converges to the generalized inverse $A^{(2)}_{T,S}$ with at least fifteenth order as $k \to \infty$. This ends the proof.

Since the Moore-Penrose inverse, the weighted Moore-Penrose inverse, the Drazin inverse, and the weighted Drazin inverse of a matrix always exist, the corollary below follows immediately from Theorem 4.

Corollary 5. Suppose the conditions of Theorem 3 hold. Let be defined by Theorem 3. Denote ; then the sequence generated by (9), using the initial approximation and such that , converges if and only if one of the following holds: (i) with ; (ii) with , ; (iii) with , where and are Hermitian positive definite matrices of orders and , respectively; (iv) with , , where .

By taking special matrices, we can obtain some desired results. We now discuss the application of (9) to computing the matrix inverse.

Corollary 6. Let $A$ be a nonsingular real or complex matrix. If the initial approximation $X_0$ satisfies then the iterative method (9) converges to $A^{-1}$ with at least fifteenth order.
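Since the paper's factorized fifteenth-order formula is not reproduced here, the sketch below uses the classical hyperpower iteration of order 15, $X_{k+1} = X_k\sum_{i=0}^{14}(I - AX_k)^i$, which attains the same convergence order for nonsingular $A$; the starting value is the same safe choice as before (an assumption of this sketch, not taken from the paper):

```python
import numpy as np

def hyperpower15(A, tol=1e-13, max_iter=20):
    """Fifteenth-order hyperpower iteration for A^{-1} (Horner form)."""
    n = A.shape[0]
    I = np.eye(n)
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    for _ in range(max_iter):
        R = I - A @ X
        S = I
        for _ in range(14):      # Horner: S = I + R(I + R(...)) = sum_{i=0}^{14} R^i
            S = I + R @ S
        X = X @ S
        if np.linalg.norm(I - A @ X, np.inf) < tol:
            break
    return X
```

In this unfactorized form each step costs 16 matrix-by-matrix multiplications; factorized fifteenth-order schemes such as the one proposed in this paper aim to realize the same order with fewer multiplications per step.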

Proof. From (13), we know Hence, by taking an arbitrary matrix norm on both sides of (30), we obtain In addition, since , by relation (13) we obtain that Now, if we consider , then Using mathematical induction, we obtain Furthermore, we get that That is, $X_k \to A^{-1}$ when $k \to \infty$ and
Thus, the new method (9) converges to the inverse of the matrix $A$ in the case $\rho(I - AX_0) < 1$, where $\rho(\cdot)$ is the spectral radius. Now, we prove that the order of convergence of the sequence $\{X_k\}$ is at least fifteen. Let $E_k = I - AX_k$ denote the residual error matrix; then The identity (13) in conjunction with (37) implies that Therefore, using the invertibility of $A$, it follows immediately that By taking any subordinate norm in (39), we obtain Consequently, it is proved that the iterative formula (9) converges to $A^{-1}$, and the order of this method is at least fifteen.

4. Complexity of the Methods

Let us consider two parameters $p$ and $\theta$, which stand for the rate of convergence and the number of matrix-by-matrix multiplications per iteration in floating-point arithmetic, respectively. Then the comparative efficiency index can be expressed as $E = p^{1/\theta}$. According to Table 1, the iterative process (9) reduces the computational complexity by using fewer basic operations and leads to a better balance between high speed and operational cost.
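As an illustration of this index, the snippet below evaluates $E = p^{1/\theta}$ for a few classical schemes; the $\theta$ counts are the standard ones for those methods in their usual forms, not values copied from Table 1:

```python
# Efficiency index E = p**(1/theta): p = convergence order,
# theta = matrix-by-matrix multiplications per iteration.
methods = {
    "Schulz (p=2)":       (2, 2),
    "Cubic scheme (p=3)": (3, 3),
    "Hyperpower (p=4)":   (4, 4),
    "Hyperpower (p=15)":  (15, 16),  # unfactorized Horner form
}
for name, (p, theta) in methods.items():
    print(f"{name:20s} E = {p ** (1 / theta):.4f}")
```

Note that a higher order alone does not guarantee a better index: the unfactorized order-15 hyperpower iteration scores below Schulz, which is precisely why factorizations that cut the multiplication count matter.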

5. Numerical Examples

In this section, we illustrate numerical experiments with our method and give some theoretical and numerical comparisons with existing solution methods. All numerical experiments were executed in Matlab 7.13.0.564 (R2011b) on an Intel(R) Core(TM)2 Duo CPU T6500 at 2.10 GHz with 2 GB of RAM, running Windows XP Professional Version 2002 Service Pack 3.

Example 1. Consider the singular M-matrix in [20]: Clearly, . Now, take given by such that , . We must choose so as to satisfy the convergence condition . The Drazin inverse is obtained as

Table 2 compares the number of iterations and the error bounds of the existing methods [17, 18] and the fastest method of [20], with the stopping criterion . It is found that our method takes fewer iterations and gives more accurate estimates of the Drazin inverse than the methods compared.

Example 2. Consider the following matrix investigated in [21, Example 3.2]: Then . Now, with .

According to Table 3, our algorithm needs only 5 iterations to converge with . We obtain . Consider

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

Xiaoji Liu and Zemeng Zuo were supported by the Guangxi Natural Science Foundation (2013GXNSFAA019008), the Key Project of Education Department of Guangxi (201202ZD031), Project supported by the National Science Foundation of China (11361009), and Science Research Project 2013 of the China-ASEAN Study Center (Guangxi Science Experiment Center) of Guangxi University for Nationalities.