#### Abstract

We first study the complexity of the algorithm presented in Guo and Huang (2010). After that, a new explicit formula for the computation of the Moore-Penrose inverse of a singular or rectangular matrix is presented. This new approach is based on a modified Gauss-Jordan elimination process. The complexity of the new method is analyzed and found to be less computationally demanding than that of the algorithm in Guo and Huang (2010). Finally, an illustrative example is given to demonstrate the corresponding improvements of the algorithm.

#### 1. Introduction

Throughout this paper we use the following notation. Let $\mathbb{C}^n$ and $\mathbb{C}^{m\times n}_r$ denote the $n$-dimensional complex space and the set of $m\times n$ complex matrices with rank $r$. For a matrix $A$, $R(A)$ and $N(A)$ are the range and null space of $A$; $\operatorname{rank}(A)$ and $A^*$ denote the rank and the conjugate transpose of $A$, while $A^\dagger$ and $\|A\|_F$ denote the M-P inverse and Frobenius norm, respectively.

In 1920, Moore [1] defined a new inverse of a matrix by projection matrices. Moore's definition of the generalized inverse of an $m\times n$ matrix $A$ is equivalent to the existence of an $n\times m$ matrix $X$ such that
$$AX = P_{R(A)}, \qquad XA = P_{R(X)},$$
where $P_L$ is the orthogonal projector on the subspace $L$. Unaware of Moore's work, in 1955 Penrose [2] showed that there exists a unique matrix $X$ satisfying the four conditions
$$AXA = A, \qquad XAX = X, \qquad (AX)^* = AX, \qquad (XA)^* = XA,$$
where $*$ denotes the conjugate transpose. These conditions are equivalent to Moore's conditions. The unique matrix satisfying these conditions is known as the Moore-Penrose inverse (abbreviated M-P inverse) and is denoted by $A^\dagger$. For a subset $\{i,j,\dots,k\}$ of the set $\{1,2,3,4\}$, the set of matrices satisfying equations $(i),(j),\dots,(k)$ from among the four conditions is denoted by $A\{i,j,\dots,k\}$. These concepts can be found in Ben-Israel and Greville's famous book [3] or Campbell and Meyer's book [4]. In [3, 4], the following statement is shown to be valid for a rectangular matrix.
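The four Penrose conditions are easy to test numerically. A minimal sketch using numpy's built-in `pinv` (the sample matrix is ours, purely for illustration):

```python
import numpy as np

# A rectangular sample matrix (hypothetical data, for illustration only).
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])

X = np.linalg.pinv(A)  # numerically computed M-P inverse

# Check the four Penrose conditions up to floating-point tolerance.
assert np.allclose(A @ X @ A, A)              # AXA = A
assert np.allclose(X @ A @ X, X)              # XAX = X
assert np.allclose((A @ X).conj().T, A @ X)   # (AX)* = AX
assert np.allclose((X @ A).conj().T, X @ A)   # (XA)* = XA
print("all four Penrose conditions hold")
```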

Lemma 1 (see [3, 4]). *Let $A \in \mathbb{C}^{m\times n}_r$. A matrix $X$ is the M-P inverse of $A$ if and only if $X$ is a $\{2\}$-inverse of $A$ with range $R(A^*)$ and null space $N(A^*)$.*

In the last fifty years, many specialists and scholars have investigated the generalized inverse and published many articles and books. Its perturbation theory was introduced in [5–7], and algebraic perturbations in [8, 9]. Some results on minors, Cramer rules, and sums involving $A^\dagger$ can be found in [10–13]. A large number of papers [9, 14–18] use various methods, iterative or not, for computing $A^\dagger$.

One handy method of computing the inverse of a nonsingular matrix $A$ is the Gauss-Jordan elimination procedure: executing elementary row operations on the pair $[A \mid I]$ transforms it into $[I \mid A^{-1}]$. Moreover, Gauss-Jordan elimination can be used to determine whether a matrix is nonsingular or not. However, one cannot directly use this method to obtain a generalized inverse of a rectangular matrix or a square singular matrix.
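For a nonsingular matrix, the procedure on the pair $[A \mid I]$ can be sketched as follows (our own illustration, omitting the pivot selection a robust implementation would add):

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert a nonsingular matrix by row-reducing [A | I] to [I | A^{-1}].

    A minimal sketch without pivot selection; it assumes every pivot
    encountered is nonzero (a production version would permute rows).
    """
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])  # augmented pair [A | I]
    for k in range(n):
        M[k] /= M[k, k]                 # scale pivot row so the pivot is 1
        for i in range(n):
            if i != k:
                M[i] -= M[i, k] * M[k]  # eliminate column k in the other rows
    return M[:, n:]                     # right half now holds A^{-1}

A = np.array([[2.0, 1.0], [1.0, 3.0]])
print(gauss_jordan_inverse(A))
```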

Recently, the author of [19] proposed a Gauss-Jordan elimination algorithm to compute $A^\dagger$. More recently, Ji [20] improved the algorithm of [19] and pointed out that fewer multiplications and divisions are required. Following these lines, Stanimirović and Petković [21] extended the method of [20]. However, these three algorithms also require switching blocks of certain matrices. Guo and Huang [22] executed elementary row and column transformations for computing the M-P inverse by applying rank equalities of the matrix. They did not analyze the complexity of their algorithm. In this paper, we first study the total number of arithmetic operations of the GH-algorithm, then improve it, and present an alternative explicit formula for the M-P inverse of a matrix; the improvement reduces the total number of arithmetic operations. We must point out that neither the GH-algorithm nor our algorithm needs to switch blocks of certain matrices in the process of computation.

The paper is organized as follows. The computational complexity of the GH-algorithm (Algorithm 3) for computing the M-P inverse is surveyed in the next section. In Section 3, we derive a novel explicit expression for $A^\dagger$, propose a new Gauss-Jordan elimination procedure based on this formula, and study the computational complexity of the new approach (Algorithm 7). In Section 4, an illustrative example is presented to explain the corresponding improvements of the algorithm.

#### 2. The Computational Complexity of GH-Algorithm

In [22], Guo and Huang gave a method of elementary transformations for computing the M-P inverse by applying rank equalities of the matrix.

Lemma 2. *Suppose that , , , . If
**
then . In particular, when , and .*

In [22], the authors also considered an algorithm based on Lemma 2, which was stated as follows.

*Algorithm 3. *The M-P inverse GH-algorithm is as follows.

(1) Compute the partitioned matrix .

(2) Make the block matrix of become by applying elementary transformations. Meanwhile, the block matrices of and are accordingly transformed, and a new partitioned matrix is obtained.

(3) Make the block matrices of and be zero matrices by applying the matrix , which yields
Then

Nevertheless, Guo and Huang [22] did not analyze the complexity of the numerical algorithm. In the following theorem, we will study the total number of arithmetic operations.

Theorem 4. *Let ; the total number of multiplications and divisions required in Algorithm 3 to compute M-P inverse is
**
Moreover, is bounded above by .*

*Proof. *It needs multiplications to compute . row pivoting steps and column pivoting steps are needed to transform the partitioned matrix into following the . The first row pivoting step involves nonzero columns in . Thus, it needs divisions and multiplications, with a total of multiplications and divisions. On the second row pivoting step, there is one less column in the first part of the pair, leaving nonzero columns to deal with; this pivoting step requires operations. Following the same idea, the th pivoting step requires operations. So it requires .

For simplicity, assume that . Following the same line, the column pivoting steps require multiplications and divisions.

Then resume elementary row and column operations on the matrix to transform it into , which requires multiplications; this is the count of .

Therefore, the total number of operations needed for computation of is

Define a function for fixed and . Since , we have
which means that is monotonically increasing over when . Therefore takes its maximum value when , which implies

#### 3. Main Results

The Gauss-Jordan row and column elimination procedure for the M-P inverse of a matrix by Guo and Huang is based on the partitioned matrix . In this section, we will first propose a modified Gauss-Jordan elimination process to compute and then summarize an algorithm of this method. Finally, the complexity of the algorithm is analyzed.

Theorem 5. *Let ; there exist an elementary row operation matrix and an elementary column operation matrix , such that
**
where and are the row and column reduced echelon form of , respectively.**Further, there exists an elementary row operation matrix such that
**
Then
*

*Proof. *For , there exist two elementary row and column operation matrices and , such that

It is easy to check that and ; then the matrix is nonsingular, which implies that there exists another elementary row operation matrix such that
The above formula also shows that .

Denote ; it is obvious that and . If we can prove , then and .

In fact, is a full column rank matrix and is an invertible matrix, which implies that .

By direct computation, we obtain that

This means that is a 2-inverse of with and . From Lemma 1, we know that .
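The characterization used here via Lemma 1 can also be checked numerically: the computed inverse should be a $\{2\}$-inverse whose range and null space match those of $A^*$. A sketch with numpy (the rank-deficient sample matrix is ours; rank comparisons stand in for the subspace equalities):

```python
import numpy as np

# Rank-1 sample matrix (ours, for illustration): A = f g^T.
A = np.outer([1.0, 2.0, 3.0], [1.0, 0.0, 1.0])
X = np.linalg.pinv(A)

# {2}-inverse property: XAX = X.
assert np.allclose(X @ A @ X, X)

rank = np.linalg.matrix_rank
# R(X) = R(A*): appending the columns of A* to those of X must not
# increase the rank beyond rank(A).
assert rank(np.hstack([X, A.conj().T])) == rank(A)
# N(X) = N(A*) holds iff the row spaces of X and A* coincide.
assert rank(np.vstack([X, A.conj().T])) == rank(A)
print("X is a {2}-inverse with range R(A*) and null space N(A*)")
```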

*Remark 6. *The representation of in Theorem 5 is consistent with the one in [3], although we use a Gauss-Jordan elimination procedure.

According to the representation introduced in Theorem 5, we summarize the following algorithm for computing the M-P inverse.

*Algorithm 7. *The M-P inverse Sheng algorithm is as follows.

(1) Input: .

(2) Execute elementary row operations on the first rows of the partitioned matrix to obtain , where is a reduced row-echelon matrix.

(3) Perform elementary column operations on the first columns of the partitioned matrix to obtain , where the matrix has reduced column-echelon form.

(4) Compute and form the block matrix

(5) Execute elementary row operations on the first rows of the partitioned matrix to obtain .

(6) Make the block matrices of and be zero matrices by applying elementary row and column transformations, respectively, through the matrix , which yields
Then .
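The block formulas of Algorithm 7 are not reproduced here, but, as Remark 6 notes, the resulting representation is consistent with the one in [3], namely the full-rank-factorization formula $A^\dagger = G^*(F^*AG^*)^{-1}F^*$ for $A = FG$. A minimal sketch of that reading, extracting $F$ and $G$ from the Gauss-Jordan reduced row-echelon form (the function name and sample matrix are ours, not from the paper):

```python
import numpy as np
from sympy import Matrix

def full_rank_pinv(A):
    """M-P inverse via a full-rank factorization A = F G obtained from
    Gauss-Jordan elimination, using A^+ = G*(F* A G*)^{-1} F*.

    Exact arithmetic via sympy keeps the rank decision reliable.
    """
    M = Matrix(A)
    R, pivots = M.rref()        # reduced row-echelon form and pivot columns
    r = len(pivots)
    G = R[:r, :]                # r nonzero rows of the RREF (full row rank)
    F = M[:, list(pivots)]      # pivot columns of A itself (full column rank)
    X = G.H * (F.H * M * G.H).inv() * F.H
    return np.array(X, dtype=float)

A = [[1, 2, 3],
     [2, 4, 6]]                 # rank-1 sample matrix (hypothetical)
X = full_rank_pinv(A)
print(X)
```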

The next theorem analyzes the computational complexity of Algorithm 7.

Theorem 8. *The total number of multiplications and divisions required for Algorithm 7 to compute M-P inverse of a matrix is
**Further, the upper bound of is less than when .*

*Proof. *For a matrix with rank , pivoting steps are needed to transform the partitioned matrix into . The first pivoting step involves nonzero columns in . Thus, it needs divisions and multiplications, with a total of multiplications and divisions. On the second pivoting step, there is one less column in the first part of the pair, leaving nonzero columns to deal with; this pivoting step requires operations. Following the same idea, the th pivoting step requires operations. So these pivoting steps require
multiplications and divisions to reach the matrix .

Similarly, it needs multiplications and divisions to change the matrix into .

For simplicity, assume that and , which follows from the fact that and are reduced row-echelon and column-echelon matrices, respectively.

multiplications are required to form under the above assumption. Since every row of the partitioned matrix has nonzero elements, each pivoting step needs multiplications and divisions. Thus, it requires multiplications and divisions to obtain the matrix .

Then resume elementary row and column operations on the matrix to transform it into . The complexity of this process is multiplications, which is the count to compute .

Hence, the total complexity of Algorithm 7 is

With fixed and , define a function for . Then we have
which implies that is also monotonically increasing over when .

Therefore, when , attains its maximum value, which yields

Furthermore, we give two remarks: one explains the computational speed and the other shows how to improve the accuracy of Algorithm 7.

*Remark 9. *The algorithm in this paper does not need to switch blocks of certain matrices in the process of computation, unlike the existing algorithms in [19–21]. Its computational complexity is at most about multiplications and divisions, which is less than that of the GH-algorithm [22], requiring multiplications and divisions, when both are applied to the case of for .

*Remark 10. *In order to improve the accuracy of the algorithm, one should select suitable nonzero entries in the pivot row and column at each step of the Gauss-Jordan elimination. This improvement is based on the fact that the Gauss-Jordan elimination is applied to matrices containing a nonnegligible number of zero elements.
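As a numerical illustration of Remark 10 (our own sketch, not part of the paper's algorithm), partial pivoting, that is, choosing the entry of largest magnitude in the pivot column, avoids the loss of accuracy caused by tiny pivots:

```python
import numpy as np

def gj_solve(A, b, pivoting=True):
    """Solve A x = b by Gauss-Jordan elimination on the pair [A | b].

    With pivoting=True, the row with the largest-magnitude entry in the
    current column is swapped in first, in the spirit of Remark 10.
    """
    n = len(b)
    M = np.hstack([A.astype(float), b.reshape(-1, 1)])
    for k in range(n):
        if pivoting:
            p = k + np.argmax(np.abs(M[k:, k]))  # best available pivot row
            M[[k, p]] = M[[p, k]]
        M[k] /= M[k, k]
        for i in range(n):
            if i != k:
                M[i] -= M[i, k] * M[k]
    return M[:, -1]

A = np.array([[1e-14, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0])               # exact solution is close to (1, 1)
print(gj_solve(A, b, pivoting=True))   # pivoting keeps full accuracy
print(gj_solve(A, b, pivoting=False))  # tiny pivot can degrade accuracy
```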

#### 4. Numerical Examples

In this section, we will use a numerical example to demonstrate our results. A handy method is used to compute the M-P inverse of a low-order matrix.

*Example 1. *Use Algorithm 7 to compute the M-P inverse of the matrix in [21], where
*Solution.* Execute elementary row operations on the first four rows of the partitioned matrix ; we have
Then perform elementary column operations on the first three columns of matrix , which yields
Denote
By computing, we have

We execute elementary row operations on the first two rows of the partitioned matrix again; we have

One then resumes elementary row and column operations on , which results in
Then we can obtain
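Hand computations like the one above can be cross-checked in exact rational arithmetic, which avoids rounding noise. A sketch with sympy's built-in `Matrix.pinv` (the rank-deficient stand-in matrix below is ours; the actual matrix of [21] is not reproduced here):

```python
from sympy import Matrix

# Arbitrary 4x3 rank-2 stand-in matrix (not the matrix from [21]).
A = Matrix([[1, 0, 1],
            [0, 1, 1],
            [1, 1, 2],
            [0, 0, 0]])

X = A.pinv()  # exact M-P inverse with rational entries

# Verify the four Penrose conditions exactly (no floating-point tolerance).
assert A * X * A == A
assert X * A * X == X
assert (A * X).T == A * X   # real entries, so * reduces to transpose
assert (X * A).T == X * A
print(X)
```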

#### Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgments

This project was supported by NSF China (no. 11226200), the Anhui Provincial Natural Science Foundation (no. 10040606Q47), and the University Natural Science Research Key Project of Anhui Province (KJ2013A204).