Research Article | Open Access

# Execute Elementary Row and Column Operations on the Partitioned Matrix to Compute M-P Inverse

**Academic Editor:** Sergei V. Pereverzyev

#### Abstract

We first study the complexity of the algorithm presented in Guo and Huang (2010). We then derive a new explicit formula for the computation of the Moore-Penrose inverse of a singular or rectangular matrix. This new approach is based on a modified Gauss-Jordan elimination process. The complexity of the new method is analyzed and found to be less computationally demanding than that of Guo and Huang (2010). Finally, an illustrative example is presented to demonstrate the corresponding improvements of the algorithm.

#### 1. Introduction

Throughout this paper we use the following notation. Let $\mathbb{C}^n$ and $\mathbb{C}^{m\times n}_r$ denote the $n$-dimensional complex space and the set of $m \times n$ complex matrices of rank $r$, respectively. For a matrix $A \in \mathbb{C}^{m\times n}$, $R(A)$ and $N(A)$ are the range and null space of $A$; $\operatorname{rank}(A)$ and $A^*$ denote the rank and the conjugate transpose of $A$, while $A^\dagger$ and $\|A\|_F$ denote the M-P inverse and the Frobenius norm of $A$, respectively.

In 1920, Moore [1] defined a new inverse of a matrix by means of projection matrices. Moore's definition of the generalized inverse of an $m \times n$ matrix $A$ is equivalent to the existence of an $n \times m$ matrix $G$ such that $AG = P_{R(A)}$ and $GA = P_{R(G)}$, where $P_L$ is the orthogonal projector on the subspace $L$. Unaware of Moore's work, Penrose [2] showed in 1955 that for every matrix $A$ there exists a unique matrix $X$ satisfying the four conditions
$$(1)\ AXA = A, \quad (2)\ XAX = X, \quad (3)\ (AX)^* = AX, \quad (4)\ (XA)^* = XA, \tag{2}$$
where $*$ denotes the conjugate transpose. These conditions are equivalent to Moore's conditions. The unique matrix $X$ satisfying these conditions is known as the Moore-Penrose inverse (abbreviated M-P inverse) of $A$ and is denoted by $A^\dagger$. For a subset $\{i, j, k\}$ of the set $\{1, 2, 3, 4\}$, the set of matrices $X$ satisfying equations $(i)$, $(j)$, $(k)$ from among (2) is denoted by $A\{i, j, k\}$. These concepts can be found in Ben-Israel and Greville's famous book [3] or Campbell and Meyer's book [4]. Following [3, 4], the next statement holds for a rectangular matrix.
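As a quick numerical illustration (Python with NumPy; not part of the original paper), the four Penrose conditions can be checked directly for any candidate inverse:

```python
import numpy as np

def satisfies_penrose(A, X, tol=1e-10):
    """Check the four Penrose conditions for a candidate M-P inverse X of A."""
    c1 = np.allclose(A @ X @ A, A, atol=tol)             # (1) AXA = A
    c2 = np.allclose(X @ A @ X, X, atol=tol)             # (2) XAX = X
    c3 = np.allclose((A @ X).conj().T, A @ X, atol=tol)  # (3) (AX)* = AX
    c4 = np.allclose((X @ A).conj().T, X @ A, atol=tol)  # (4) (XA)* = XA
    return c1 and c2 and c3 and c4

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))   # a rectangular test matrix
X = np.linalg.pinv(A)             # reference M-P inverse
print(satisfies_penrose(A, X))    # True
```

By uniqueness, any matrix passing all four checks (up to rounding) is the M-P inverse.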

Lemma 1 (see [3, 4]). *Let $A \in \mathbb{C}^{m\times n}_r$. Then $X$ is the M-P inverse of $A$ if and only if $X$ is a $\{2\}$-inverse of $A$ with range $R(A^*)$ and null space $N(A^*)$.*

Over the past fifty years, many specialists and scholars have investigated the generalized inverse and published numerous articles and books. Its perturbation theory was introduced in [5–7], and algebraic perturbations were studied in [8, 9]. Minors, Cramer rules, and sums involving the M-P inverse can be found in [10–13]. A large number of papers [9, 14–18] use various methods, iterative or not, for computing $A^\dagger$.

One handy method of computing the inverse of a nonsingular matrix $A$ is the Gauss-Jordan elimination procedure, which executes elementary row operations on the pair $(A \mid I)$ to transform it into $(I \mid A^{-1})$. Moreover, Gauss-Jordan elimination can be used to determine whether a matrix is nonsingular. However, this method cannot be applied directly to compute a generalized inverse of a rectangular matrix or a square singular matrix.
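This classical procedure can be sketched as follows (a minimal Python/NumPy illustration, with partial pivoting added for stability; not from the paper):

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert a nonsingular matrix by row-reducing the pair (A | I) to (I | A^{-1})."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])  # the augmented pair (A | I)
    for j in range(n):
        p = j + np.argmax(np.abs(M[j:, j]))      # partial pivoting for stability
        if np.isclose(M[p, j], 0.0):
            raise ValueError("matrix is singular")
        M[[j, p]] = M[[p, j]]                    # swap pivot row into place
        M[j] /= M[j, j]                          # scale pivot row
        for i in range(n):                       # eliminate column j in all other rows
            if i != j:
                M[i] -= M[i, j] * M[j]
    return M[:, n:]                              # right half is now A^{-1}

A = np.array([[2.0, 1.0], [1.0, 3.0]])
print(np.allclose(gauss_jordan_inverse(A) @ A, np.eye(2)))  # True
```

If a zero pivot column is encountered, the matrix is singular, which is why the procedure does not extend directly to generalized inverses.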

Recently, Sheng and Chen [19] proposed a Gauss-Jordan elimination algorithm to compute $A^\dagger$. More recently, Ji [20] improved the algorithm of [19] and pointed out that fewer multiplications and divisions are required. Following these lines, Stanimirović and Petković [21] extended the method of [20] to outer inverses. However, these three algorithms also require switching blocks of a certain matrix. Guo and Huang [22] executed elementary row and column transformations for computing the M-P inverse by applying rank equalities of a matrix; they did not analyze the complexity of their algorithm. In this paper, we first study the total number of arithmetic operations of the GH-algorithm, then improve it, and present an alternative explicit formula for the M-P inverse of a matrix; the improvement reduces the total number of arithmetic operations. We point out that neither the GH-algorithm nor our algorithm needs to switch blocks of a certain matrix in the process of computation.

The paper is organized as follows. The computational complexity of the GH-algorithm (Algorithm 3) for computing the M-P inverse is studied in the next section. In Section 3, we derive a novel explicit expression for $A^\dagger$, propose a new Gauss-Jordan elimination procedure based on this formula (Algorithm 7), and analyze its computational complexity. In Section 4, an illustrative example is presented to demonstrate the corresponding improvements of the algorithm.

#### 2. The Computational Complexity of GH-Algorithm

In [22], Guo and Huang gave a method of elementary transformations for computing the M-P inverse by applying rank equalities of a matrix.

Lemma 2. *Suppose that , , , . If
**
Then . In particular, when , and .*

In [22], the authors also considered an algorithm based on Lemma 2, which was stated as follows.

*Algorithm 3* (M-P inverse GH-algorithm).
(1) Compute the partitioned matrix .
(2) Reduce the block matrix of to by applying elementary transformations; meanwhile, the block matrices of and are transformed accordingly. A new partitioned matrix is obtained.
(3) Reduce the block matrices of and to zero matrices by applying the matrix , which yields
Then

Nevertheless, Guo and Huang [22] did not analyze the complexity of the numerical algorithm. In the following theorem, we will study the total number of arithmetic operations.

Theorem 4. *Let ; the total number of multiplications and divisions required in Algorithm 3 to compute M-P inverse is
**
Moreover, is bounded above by .*

*Proof. *It needs multiplications to compute . Then row pivoting steps and column pivoting steps are needed to transform the partitioned matrix into , following the . The first row pivoting step involves nonzero columns in . Thus, it needs divisions and multiplications, with a total of multiplications and divisions. On the second row pivoting step, there is one less column in the first part of the pair, so there are nonzero columns to deal with; this pivoting step requires operations. Following the same idea, the th row pivoting step requires operations. So the row pivoting steps require in total.

For simplicity, assume that . Following the same line, the column pivoting steps require multiplications and divisions.

Then resume elementary row and column operations on the matrix to transform it into , which requires multiplications; this is the cost of computing .

Therefore, the total number of operations needed for computation of is

Define a function for fixed and . Since , we have
which means that is monotonically increasing over when . Therefore takes its maximum value when , which implies

#### 3. Main Results

The Gauss-Jordan row and column elimination procedure of Guo and Huang for the M-P inverse of a matrix is based on the partitioned matrix . In this section, we first propose a modified Gauss-Jordan elimination process to compute $A^\dagger$ and then summarize this method as an algorithm. Finally, the complexity of the algorithm is analyzed.

Theorem 5. *Let ; there exist an elementary row operation matrix and an elementary column operation matrix , such that
**
where and are the row and column reduced echelon form of , respectively.**Further, there exists an elementary row operation matrix such that
**
Then
*

*Proof. *For , there exist two elementary row and column operation matrices and , such that

It is easy to check that and ; then the matrix is nonsingular, which implies that there exists another elementary row operation matrix such that
The above formula also shows that .

Denote ; it is obvious that and . If we can prove , then and .

In fact, is a full column rank matrix and is an invertible matrix, which implies that .

By direct computation, we obtain that

This means that is a $\{2\}$-inverse of with and . From Lemma 1, we know that .

*Remark 6. *The representation of in Theorem 5 is consistent with the one in [3], although we use a Gauss-Jordan elimination procedure.

According to the representation introduced in Theorem 5, we summarize the following algorithm for computing M-P inverse .

*Algorithm 7* (M-P inverse Sheng algorithm).
(1) Input: .
(2) Execute elementary row operations on the first rows of the partitioned matrix to obtain , where is a reduced row-echelon matrix.
(3) Perform elementary column operations on the first columns of the partitioned matrix to obtain , where has reduced column-echelon form.
(4) Compute and form the block matrix
(5) Execute elementary row operations on the first rows of the partitioned matrix to obtain .
(6) Reduce the block matrices of and to zero matrices by applying elementary row and column transformations, respectively, through the matrix , which yields
Then .
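Algorithm 7's elementary-operation bookkeeping depends on the displayed block matrices, but the representation it recovers (cf. Remark 6, which notes consistency with [3]) can be sketched via a full-rank factorization $A = FG$ read off from Gauss-Jordan row reduction, using $A^\dagger = G^*(GG^*)^{-1}(F^*F)^{-1}F^*$. The following Python/NumPy sketch is illustrative only; the function names are not from the paper:

```python
import numpy as np

def full_rank_factorization(A, tol=1e-10):
    """Row-reduce A to reduced row-echelon form R; then A = F G, where
    F = pivot columns of A (full column rank r) and G = nonzero rows of R."""
    R = A.astype(float).copy()
    m, n = R.shape
    pivots, row = [], 0
    for col in range(n):
        if row == m:
            break
        p = row + np.argmax(np.abs(R[row:, col]))  # best pivot in this column
        if abs(R[p, col]) < tol:
            continue                               # no pivot here; next column
        R[[row, p]] = R[[p, row]]
        R[row] /= R[row, col]                      # normalize pivot row
        for i in range(m):                         # eliminate above and below
            if i != row:
                R[i] -= R[i, col] * R[row]
        pivots.append(col)
        row += 1
    F = A[:, pivots].astype(float)                 # m x r
    G = R[:row, :]                                 # r x n
    return F, G

def mp_inverse(A):
    """A^dagger = G*(G G*)^{-1} (F* F)^{-1} F*  (Ben-Israel & Greville [3])."""
    F, G = full_rank_factorization(A)
    return G.T @ np.linalg.inv(G @ G.T) @ np.linalg.inv(F.T @ F) @ F.T

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 0.0, 1.0]])                    # rank 2, singular
print(np.allclose(mp_inverse(A), np.linalg.pinv(A)))  # True
```

For complex matrices the transposes above would become conjugate transposes; the real case suffices to illustrate the structure.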

Based on Algorithm 7, the next theorem analyzes its computational complexity.

Theorem 8. *The total number of multiplications and divisions required for Algorithm 7 to compute M-P inverse of a matrix is
**Further, the upper bound of is less than when .*

*Proof. *For a matrix with rank , pivoting steps are needed to transform the partitioned matrix into . The first pivoting step involves nonzero columns in . Thus, it needs divisions and multiplications, with a total of multiplications and divisions. On the second pivoting step, there is one less column in the first part of the pair, so there are nonzero columns to deal with; this pivoting step requires operations. Following the same idea, the th pivoting step requires operations. So these pivoting steps require
multiplications and divisions to reach the matrix .

Similarly, it needs multiplications and divisions to change the matrix into .

For simplicity, assume that and , which follows from the fact that and are reduced row-echelon and reduced column-echelon matrices, respectively.

multiplications are required to form under the above assumption. Since every row of the partitioned matrix has nonzero elements, each pivoting step needs multiplications and divisions. Thus, it requires multiplications and divisions to obtain the matrix .

Then resume elementary row and column operations on the matrix to transform it into . The complexity of this process is multiplications, which is the cost of computing .

Hence, the total number of complexity of Algorithm 7 is

With fixed and , define a function for . Then we have
which implies that is also monotonically increasing over when .

Therefore, when , obtains its maximum value, which yields

Furthermore, we give two remarks: one explains the computational cost, and the other shows how to improve the accuracy of Algorithm 7.

*Remark 9. *The algorithm in this paper does not need to switch blocks of a certain matrix in the process of computation, unlike the existing algorithms in [19–21]. Its worst-case computational complexity is about multiplications and divisions, which is less than that of the GH-algorithm [22], requiring multiplications and divisions, when both are applied to the case of for .

*Remark 10. *To improve the accuracy of the algorithm, one should select a nonzero entry of the pivot row and column in each step of the Gauss-Jordan elimination. This improvement is based on the fact that the Gauss-Jordan elimination is applied to matrices containing a nonnegligible number of zero elements.
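The effect described in Remark 10 can be illustrated with a small experiment (Python/NumPy, not part of the paper): eliminating on a near-zero pivot destroys the computed inverse in floating point, while selecting a larger pivot entry preserves it.

```python
import numpy as np

def invert(A, pivot=True):
    """Gauss-Jordan inversion; pivot=True selects the largest remaining entry
    in each column as the pivot, pivot=False always uses the diagonal entry."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])
    for j in range(n):
        if pivot:
            p = j + np.argmax(np.abs(M[j:, j]))
            M[[j, p]] = M[[p, j]]
        M[j] /= M[j, j]
        for i in range(n):
            if i != j:
                M[i] -= M[i, j] * M[j]
    return M[:, n:]

A = np.array([[1e-20, 1.0], [1.0, 1.0]])  # tiny leading entry
print(np.allclose(invert(A, pivot=False) @ A, np.eye(2)))  # False: accuracy lost
print(np.allclose(invert(A, pivot=True) @ A, np.eye(2)))   # True
```

Without pivot selection, dividing by the entry of magnitude $10^{-20}$ amplifies rounding error until the computed inverse is useless; choosing the larger entry as pivot avoids this.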

#### 4. Numerical Examples

In this section, we will use a numerical example to demonstrate our results. The method is applied by hand to compute $A^\dagger$ of a low-order matrix.

*Example 1. *Use Algorithm 7 to compute the M-P inverse of the matrix in [21], where
*Solution.* Executing elementary row operations on the first four rows of the partitioned matrix , we have
Then perform elementary column operations on the first three columns of matrix , which yields
Denote
By computing, we have

We execute elementary row operations on the first two rows of the partitioned matrix again, obtaining

One then resumes elementary row and column operations on , which results in
Then we can obtain

#### Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgments

This project was supported by NSF China (no. 11226200), the Anhui Provincial Natural Science Foundation (no. 10040606Q47), and the University Natural Science Research Key Project of Anhui Province (KJ2013A204).

#### References

1. E. H. Moore, “On the reciprocal of the general algebraic matrix,” *Bulletin of the American Mathematical Society*, vol. 26, pp. 394–395, 1920.
2. R. Penrose, “A generalized inverse for matrices,” *Proceedings of the Cambridge Philosophical Society*, vol. 51, pp. 406–413, 1955.
3. A. Ben-Israel and T. N. E. Greville, *Generalized Inverses: Theory and Applications*, Springer, New York, NY, USA, 2nd edition, 2003.
4. S. L. Campbell and C. D. Meyer, Jr., *Generalized Inverses of Linear Transformations*, Dover, New York, NY, USA, 1979.
5. G. W. Stewart, “On the continuity of the generalized inverse,” *SIAM Journal on Applied Mathematics*, vol. 17, pp. 33–45, 1969.
6. G. W. Stewart, “On the perturbation of pseudo-inverses, projections and linear least squares problems,” *SIAM Review*, vol. 19, no. 4, pp. 634–662, 1977.
7. P.-Å. Wedin, “Perturbation theory for pseudo-inverses,” *BIT Numerical Mathematics*, vol. 13, no. 2, pp. 217–232, 1973.
8. J. Ji, “The algebraic perturbation method for generalized inverses,” *Journal of Computational Mathematics*, vol. 7, no. 4, pp. 327–333, 1989.
9. L. Kramarz, “Algebraic perturbation methods for the solution of singular linear systems,” *Linear Algebra and Its Applications*, vol. 36, pp. 79–88, 1981.
10. J. Miao and A. Ben-Israel, “Minors of the Moore-Penrose inverse,” *Linear Algebra and Its Applications*, vol. 195, pp. 191–207, 1993.
11. J. Cai and G. L. Chen, “On the representation of ${A}^{+}$, ${A}_{MN}^{+}$ and its applications,” *Numerical Mathematics*, vol. 24, no. 4, pp. 320–326, 2002 (Chinese).
12. J. A. Fill and D. E. Fishkind, “The Moore-Penrose generalized inverse for sums of matrices,” *SIAM Journal on Matrix Analysis and Applications*, vol. 21, no. 2, pp. 629–635, 2000.
13. J. Ji, “Explicit expressions of the generalized inverses and condensed Cramer rules,” *Linear Algebra and Its Applications*, vol. 404, no. 1–3, pp. 183–192, 2005.
14. J.-F. Cai, M. K. Ng, and Y.-M. Wei, “Modified Newton's algorithm for computing the group inverses of singular Toeplitz matrices,” *Journal of Computational Mathematics*, vol. 24, no. 5, pp. 647–656, 2006.
15. X. Chen and J. Ji, “Computing the Moore-Penrose inverse of a matrix through symmetric rank-one updates,” *American Journal of Computational Mathematics*, vol. 1, pp. 147–151, 2011.
16. M. D. Petković and P. S. Stanimirović, “Iterative method for computing the Moore-Penrose inverse based on Penrose equations,” *Journal of Computational and Applied Mathematics*, vol. 235, no. 6, pp. 1604–1613, 2011.
17. Y. Wei, J. Cai, and M. K. Ng, “Computing Moore-Penrose inverses of Toeplitz matrices by Newton's iteration,” *Mathematical and Computer Modelling*, vol. 40, no. 1-2, pp. 181–191, 2004.
18. V. N. Katsikis, D. Pappas, and A. Petralias, “An improved method for the computation of the Moore-Penrose inverse matrix,” *Applied Mathematics and Computation*, vol. 217, no. 23, pp. 9828–9834, 2011.
19. X. Sheng and G. Chen, “A note of computation for M-P inverse ${A}^{\dagger}$,” *International Journal of Computer Mathematics*, vol. 87, no. 10, pp. 2235–2241, 2010.
20. J. Ji, “Gauss-Jordan elimination methods for the Moore-Penrose inverse of a matrix,” *Linear Algebra and Its Applications*, vol. 437, no. 7, pp. 1835–1844, 2012.
21. P. S. Stanimirović and M. D. Petković, “Gauss-Jordan elimination method for computing outer inverses,” *Applied Mathematics and Computation*, vol. 219, no. 9, pp. 4667–4679, 2013.
22. W. Guo and T. Huang, “Method of elementary transformation to compute Moore-Penrose inverse,” *Applied Mathematics and Computation*, vol. 216, no. 5, pp. 1614–1617, 2010.

#### Copyright

Copyright © 2014 Xingping Sheng. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.