Research Article | Open Access
Alireza Ataei, "Improved Qrginv Algorithm for Computing Moore-Penrose Inverse Matrices", International Scholarly Research Notices, vol. 2014, Article ID 641706, 5 pages, 2014. https://doi.org/10.1155/2014/641706
Improved Qrginv Algorithm for Computing Moore-Penrose Inverse Matrices
Katsikis et al. (2011) presented a computational method for calculating the Moore-Penrose inverse of an arbitrary matrix (including singular and rectangular matrices). In this paper, an improved version of this method is presented for computing the pseudoinverse of an m × n real matrix A with rank r. Numerical experiments show that the resulting pseudoinverse matrix is reasonably accurate and that its computation time is significantly less than that obtained by Katsikis et al.
Let R^(m×n) denote the set of all m × n matrices over the field of real numbers R. The symbols A^T and rank(A) will stand for the transpose and rank of A, respectively. For a matrix A in R^(m×n), the Moore-Penrose inverse of A, denoted by A^+, is the unique matrix X in R^(n×m) satisfying the following equations: (i) AXA = A; (ii) XAX = X; (iii) (AX)^T = AX; (iv) (XA)^T = XA.
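As a concrete numerical check of the four defining equations, the following Python/NumPy sketch verifies them for a hypothetical rank-deficient matrix (the paper itself works in MATLAB; NumPy's built-in pseudoinverse is used here only to illustrate the characterization):

```python
import numpy as np

# Hypothetical example: a 3x3 matrix of rank 2 (third row is a linear
# combination of the first two).
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

X = np.linalg.pinv(A)  # NumPy's Moore-Penrose inverse

tol = 1e-10
# The four Penrose equations characterize X = A^+ uniquely.
ok1 = np.allclose(A @ X @ A, A, atol=tol)      # (i)   A X A = A
ok2 = np.allclose(X @ A @ X, X, atol=tol)      # (ii)  X A X = X
ok3 = np.allclose((A @ X).T, A @ X, atol=tol)  # (iii) (A X)^T = A X
ok4 = np.allclose((X @ A).T, X @ A, atol=tol)  # (iv)  (X A)^T = X A
```

All four conditions hold simultaneously only for the Moore-Penrose inverse; dropping any one of them admits other generalized inverses.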
A lot of work concerning generalized inverses has been carried out, in both finite and infinite dimensions (e.g., [1–3]). There are several methods for computing the Moore-Penrose inverse matrix [2, 4–8]. In a recent article, Katsikis et al. provided an improved method for the computation of the Moore-Penrose inverse matrix.
In this paper, we aim to improve their method so that it can be used for any kind of matrix: square or rectangular, full rank or not. The numerical examples show that our method is competitive in terms of accuracy, is much faster than the commonly used methods, and can also be applied to large sparse matrices.
This paper is organized as follows. In Section 2 the improved version of this method is presented for computing the pseudoinverse of an m × n real matrix with rank r. In Section 3, the numerical results for some test matrices are given. Section 4 is devoted to the concluding remarks.
2. Improved Qrginv Method (IMqrginv)
In the work of Katsikis et al., a method (qrginv) for computing the Moore-Penrose inverse of an arbitrary matrix was presented. They made use of the QR-factorization, as well as an algorithm based on a known reverse order law for generalized inverse matrices, and they also applied a method (ginv), presented in an earlier work, based on a full rank Cholesky factorization of possibly singular symmetric positive matrices.
In the current paper, we improve the qrginv algorithm by combining the QR-factorization obtained via Gram-Schmidt orthonormalization (GSO) with Theorem 1, so as to compute the Moore-Penrose inverse of arbitrary matrices (including singular and rectangular matrices) faster. We should note that we do not need to invoke the ginv algorithm. In order to support and state our achievement we first recall Gram-Schmidt orthonormalization (GSO) and the QR-factorization.
2.1. The Gram-Schmidt Procedure
Let us recall a generalization of the Gram-Schmidt orthonormalization process (shortly GSO) which also applies to singular matrices. Let {a_1, ..., a_n} be a set of vectors spanning a subspace W. The process generates a set of mutually orthonormal vectors {q_1, ..., q_r} having the property that {q_1, ..., q_r} is an orthonormal basis for W. It is obtained as follows: for j = 1, ..., n, set v_j = a_j minus the sum, over the previously accepted vectors q_i, of (q_i^T a_j) q_i; whenever v_j is nonzero, the next vector q_k = v_j / ||v_j|| is accepted, and vectors with v_j = 0 are skipped.
The integer r found by the GSO process is the dimension of the subspace W. The indices of the accepted vectors identify a maximal linearly independent subset of {a_1, ..., a_n}.
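The GSO procedure above can be sketched in Python/NumPy as follows (the paper works in MATLAB; the function name, tolerance, and test matrix are illustrative):

```python
import numpy as np

def gso(A, tol=1e-8):
    """Generalized Gram-Schmidt orthonormalization: returns Q, whose columns
    are an orthonormal basis of the column space of A, together with the
    indices of a maximal linearly independent subset of the columns.
    Columns that are (numerically) dependent on earlier ones are skipped."""
    q_cols, idx = [], []
    for j in range(A.shape[1]):
        v = A[:, j].astype(float).copy()
        for q in q_cols:
            v -= (q @ A[:, j]) * q       # remove components along earlier q_i
        norm = np.linalg.norm(v)
        if norm > tol:                   # accept only independent directions
            q_cols.append(v / norm)
            idx.append(j)
    Q = np.column_stack(q_cols) if q_cols else np.zeros((A.shape[0], 0))
    return Q, idx
```

For a rank-deficient input, the number of columns of the returned Q equals the rank r found by the process.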
2.2. The QR-Factorization
Let us recall the QR-factorization for arbitrary matrices (including singular and rectangular matrices).
Let the orthonormal set {q_1, ..., q_r} be obtained from the set of columns of A in R^(m×n), with rank(A) = r, by the GSO process described in Section 2.1. Then there exist matrices P, Q, and R such that AP = QR, where (i) P is a permutation matrix (therefore orthogonal); (ii) Q = [Q_1 Q_2], where Q_1 and Q_2 denote matrices whose columns are an orthonormal basis of the range of A and of its orthogonal complement, respectively; (iii) R = [R_1; 0], where R_1 is an upper triangular matrix with rank(R_1) = r. From AP = QR one obtains A = Q_1 R_1 P^T. It follows that A has a full rank factorization: a nonzero matrix can be expressed as the product of a matrix of full column rank and a matrix of full row rank. In fact, for given A in R^(m×n) with rank(A) = r > 0 there exist matrices F in R^(m×r) and G in R^(r×n) such that A = FG. Such a factorization, the so-called full rank factorization, turns out to be a powerful tool in the study of generalized inverses.
The following theorem is due to MacDuffee, who apparently was the first to point out that a full rank factorization of a matrix leads to an explicit formula for its Moore-Penrose inverse A^+.
Theorem 1. If the matrix A in R^(m×n), with rank(A) = r > 0, has a full rank factorization A = FG, then A^+ = G^T (F^T A G^T)^(-1) F^T.
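The formula of Theorem 1 can be illustrated numerically. The following Python/NumPy sketch builds a hypothetical full rank factorization A = FG and compares the resulting expression with NumPy's pseudoinverse (the factor matrices are invented for illustration only):

```python
import numpy as np

# Hypothetical full rank factorization A = F G:
F = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])          # 3 x 2, full column rank
G = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])     # 2 x 3, full row rank
A = F @ G                           # a 3 x 3 matrix of rank 2

# Theorem 1 (MacDuffee): A^+ = G^T (F^T A G^T)^(-1) F^T.
# F^T A G^T = (F^T F)(G G^T) is an invertible r x r matrix.
M = F.T @ A @ G.T
A_pinv = G.T @ np.linalg.inv(M) @ F.T
```

The r × r matrix M is invertible precisely because F has full column rank and G has full row rank, which is what makes the explicit formula well defined.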
As a direct consequence of Theorem 1, we have the following.
Corollary 2. Let A in R^(m×n), rank(A) = r, and let AP = QR be the QR-factorization of A, with Q_1 and R_1 as in Section 2.2. Then A^+ = P R_1^T (R_1 R_1^T)^(-1) Q_1^T.
Proof. With F = Q_1, G = R_1 P^T, and A = FG in Theorem 1 one obtains F^T A G^T = Q_1^T Q_1 R_1 P^T P R_1^T = R_1 R_1^T, and consequently A^+ = P R_1^T (R_1 R_1^T)^(-1) Q_1^T.
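Corollary 2 translates directly into a short procedure. Below is a Python/NumPy sketch of the idea (the paper's IMqrginv is MATLAB code; NumPy's qr routine has no column pivoting, so the permutation P of Corollary 2 is taken as the identity here, and the function name and tolerance are illustrative):

```python
import numpy as np

def imqrginv_sketch(A, tol=1e-8):
    """Sketch of the Corollary 2 formula via an unpivoted QR-factorization.
    Rows of R that are negligible (below tol in absolute value) are dropped,
    leaving A ~= Q1 R1 with Q1 of full column rank and R1 of full row rank."""
    Q, R = np.linalg.qr(A)                   # A = Q R, R upper triangular
    keep = np.max(np.abs(R), axis=1) > tol   # rows of R above the tolerance
    R1 = R[keep, :]                          # r x n, full row rank
    Q1 = Q[:, keep]                          # m x r, orthonormal columns
    # Corollary 2 (with P = I): A^+ = R1^T (R1 R1^T)^(-1) Q1^T
    return R1.T @ np.linalg.solve(R1 @ R1.T, Q1.T)
```

Because Q_1 has orthonormal columns, only the small r × r system R_1 R_1^T must be solved; no Cholesky factorization of a possibly singular matrix is needed, which is the source of the speedup over qrginv.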
The function IMqrginv provides all implementation details of the above solution in MATLAB code. To calculate the rank of A, one needs only the number of columns having at least one entry above a tolerance level in absolute value. This tolerance is set equal to the value also used by Katsikis et al. and turns out to provide accurate results.
3. Numerical Examples
In this section, we compare the performance of the proposed method (the IMqrginv function) to that of Katsikis et al. for the computation of Moore-Penrose inverse matrices. Testing of qrginv and IMqrginv was performed separately for random singular matrices and for singular matrices with "large" condition numbers from the Matrix Computation Toolbox. We also tested the proposed method on some sparse matrices and obtained very fast and accurate results. Specifically, the MATLAB 7.11 (R2010b) Service Pack 3 version of the software was used on an Intel Core 2 Duo 8400 processor running at 2.26 GHz with 3 GB of RAM under the Windows Vista Professional 32-bit operating system.
3.1. Random Singular Matrices
We compare the performance of the proposed method IMqrginv to that of the qrginv function. In the same way as Katsikis et al., we tested a series of random singular square matrices which are rank deficient. In addition, the accuracy of the results is examined with the matrix 2-norm of the error matrices corresponding to the four properties characterizing the Moore-Penrose inverse, shown in Table 1. The per-coefficient computation error in the error matrices is small in all cases. The computation time (in seconds) is reported in Table 1. We observe that the computation time of the IMqrginv method is substantially less than that of the qrginv method.
3.2. Singular Matrices
In this section we use a set of singular test matrices obtained from the function matrix in the Matrix Computation Toolbox (which includes test matrices from MATLAB itself). The condition numbers of the test matrices range over several orders of magnitude. Since the matrices are of relatively small size, and so as to measure the time needed for each algorithm to compute the Moore-Penrose inverse accurately, each algorithm was run a number of distinct times; the reported time is the mean time over these replications. For each case, the time responses together with the error results are presented in Tables 2, 3, 4, 5, 6, 7, 8, and 9. We observe that the computation time of the IMqrginv method is shorter than that of the qrginv method for all matrices, so it proves to be more efficient.
3.3. Matrix-Market Sparse Matrices
For sparse matrices, we have chosen some matrices from the Matrix-Market collection. Following the same approach as Katsikis et al., we construct rank deficient test matrices as block matrices built from a chosen matrix A and a zero matrix. The chosen matrices with their properties are shown in Table 10. The results of the methods are presented in Table 11. We observe that the Moore-Penrose inverses obtained by IMqrginv are reasonably accurate in all cases; the computation time required by the IMqrginv method is significantly less than the time required by the qrginv method. On the other hand, the accuracy of the IMqrginv method is somewhat lower than that of the qrginv method; however, in some cases, the accuracy of the results of both methods is low. We conclude that the IMqrginv method is a robust and efficient tool for obtaining the Moore-Penrose inverse of large sparse and rank deficient matrices.