International Scholarly Research Notices


Research Article | Open Access

Volume 2014 | Article ID 641706 | 5 pages | https://doi.org/10.1155/2014/641706

Improved Qrginv Algorithm for Computing Moore-Penrose Inverse Matrices

Academic Editor: M. Hermann
Received: 24 Jan 2014
Accepted: 17 Feb 2014
Published: 12 Mar 2014

Abstract

Katsikis et al. (2011) presented a computational method for calculating the Moore-Penrose inverse of an arbitrary matrix (including singular and rectangular matrices). In this paper, an improved version of this method is presented for computing the pseudoinverse of an m × n real matrix A with rank r. Numerical experiments show that the resulting pseudoinverse matrix is reasonably accurate and that its computation time is significantly less than that of the method of Katsikis et al.

1. Introduction

Let R^{m×n} denote the set of all m × n matrices over the field of real numbers R. The symbols A^T and rank(A) will stand for the transpose and the rank of A, respectively. For a matrix A ∈ R^{m×n}, the Moore-Penrose inverse of A, denoted by A^†, is the unique matrix X ∈ R^{n×m} satisfying the following equations: (i) AXA = A, (ii) XAX = X, (iii) (AX)^T = AX, (iv) (XA)^T = XA.
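These four conditions can be checked numerically, for example with MATLAB's built-in pinv function; the small singular matrix below is chosen only for illustration:

% Check the four Penrose conditions numerically with MATLAB's built-in pinv.
A = [1 2 3; 2 4 6; 1 0 1];        % a 3 x 3 singular matrix of rank 2
X = pinv(A);                      % Moore-Penrose inverse computed by MATLAB
disp(norm(A*X*A - A))             % (i)   A*X*A = A
disp(norm(X*A*X - X))             % (ii)  X*A*X = X
disp(norm((A*X)' - A*X))          % (iii) (A*X)^T = A*X
disp(norm((X*A)' - X*A))          % (iv)  (X*A)^T = X*A

All four residuals are of the order of machine precision.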

A lot of work concerning generalized inverses has been carried out, in finite and infinite dimensions (e.g., [1-3]). There are several methods for computing the Moore-Penrose inverse matrix [2, 4-8]. In a recent article [9], an improved method for the computation of the Moore-Penrose inverse matrix was provided.

In this paper, we aim to improve their method so that it can be used for any kind of matrix, square or rectangular, full rank or not. The numerical examples show that our method is competitive in terms of accuracy, is much faster than the commonly used methods, and can also be used for large sparse matrices.

This paper is organized as follows. In Section 2, the improved version of this method is presented for computing the pseudoinverse of an m × n real matrix A with rank r. In Section 3, the numerical results for some test matrices are given. Section 4 is devoted to concluding remarks.

2. Improved Qrginv Method (IMqrginv)

In [9], a method (qrginv) for computing the Moore-Penrose inverse of an arbitrary matrix was presented. The authors made use of the QR-factorization, together with an algorithm based on a known reverse order law for generalized inverse matrices; they also applied a method (ginv), presented in [4], based on a full rank Cholesky factorization of possibly singular symmetric positive matrices.

In the current paper, we improve the qrginv algorithm, using the QR-factorization obtained by Gram-Schmidt orthonormalization (GSO) together with Theorem 1, in order to compute the Moore-Penrose inverse of arbitrary matrices (including singular and rectangular matrices) faster. We should note that we do not invoke the ginv algorithm. In order to support and state our achievement, we first recall Gram-Schmidt orthonormalization (GSO) and the QR-factorization.

2.1. The Gram-Schmidt Procedure

Let us recall a generalization of the Gram-Schmidt orthonormalization process (GSO, for short) which can be applied to singular matrices. Let {a_1, a_2, ..., a_n} be a set of vectors spanning a subspace W. The process generates a set of mutually orthonormal vectors {q_1, q_2, ..., q_r}, r ≤ n, having the property that {q_1, ..., q_r} is an orthonormal basis for W. The set {q_1, ..., q_r} is obtained using the GSO process as follows: the vectors a_1, a_2, ..., a_n are processed in turn, and for the current vector a_k one computes

v = a_k - sum_{j=1}^{i-1} (q_j^T a_k) q_j,

where q_1, ..., q_{i-1} are the orthonormal vectors produced so far. If ||v|| ≠ 0, the next orthonormal vector is taken as q_i = v / ||v|| and the index k_i = k is recorded; otherwise a_k is discarded.

The integer r found by the GSO process is the dimension of the subspace W. The integers k_1 < k_2 < ... < k_r are the indices of a maximal linearly independent subset of {a_1, a_2, ..., a_n}.
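A minimal MATLAB sketch of this generalized GSO process is given below; the function name, interface, and tolerance are our own illustrative choices, not taken from the paper:

function [Q, idx] = gso_sketch(A, tol)
% Generalized Gram-Schmidt orthonormalization for a possibly singular A.
% Q   : columns form an orthonormal basis of the column space of A
% idx : indices of a maximal linearly independent subset of the columns of A
if nargin < 2, tol = 1e-5; end            % illustrative dependence tolerance
[m, n] = size(A);
Q = zeros(m, 0);
idx = [];
for k = 1:n
    v = A(:, k) - Q * (Q' * A(:, k));     % remove components along previous q's
    if norm(v) > tol                      % keep only linearly independent columns
        Q = [Q, v / norm(v)];
        idx = [idx, k];
    end
end
end

In floating point arithmetic a reorthogonalization pass is usually added for stability; the sketch keeps only the essential steps.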

2.2. The QR-Factorization

Let us recall the QR-factorization for arbitrary matrices (including singular and rectangular matrices).

Let the orthonormal set {q_1, ..., q_r} be obtained from the set of column vectors {a_1, ..., a_n} of A ∈ R^{m×n}, with rank(A) = r, by the GSO process described in Section 2.1, and let Q_r = [q_1, ..., q_r] be the corresponding matrix. Then there exist matrices Q, R, and P such that A P = Q R, where (i) P is a permutation matrix (therefore orthogonal); (ii) Q = [Q_r  Q_{m-r}], where Q_r and Q_{m-r} denote matrices whose columns are an orthonormal basis of the range of A and of its orthogonal complement, respectively; (iii) R = [R_r; 0], where R_r ∈ R^{r×n} is an upper triangular matrix with rank(R_r) = r. From this factorization one obtains A = Q_r R_r P^T. It follows that A has a QR-factorization. A nonzero matrix can be expressed as the product of a matrix of full column rank and a matrix of full row rank. In fact, for a given A ∈ R^{m×n} with rank(A) = r > 0, there exist matrices F ∈ R^{m×r} and G ∈ R^{r×n} such that A = FG [2]. Such a factorization, the so-called full rank factorization, turns out to be a powerful tool in the study of generalized inverses.
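For illustration, such a full rank factorization can be obtained from MATLAB's QR factorization with column pivoting; the matrix, the names F and G, and the tolerance below are our own choices:

% A full rank factorization A = F*G obtained from column-pivoted QR.
A = [1 2 3; 2 4 6; 1 0 1; 0 2 2];     % a 4 x 3 matrix of rank 2
[Q, R, P] = qr(A);                    % A*P = Q*R, with P a permutation matrix
tol = 1e-5;                           % illustrative rank-detection tolerance
r = sum(any(abs(R) > tol, 2));        % numerical rank = number of nonzero rows of R
F = Q(:, 1:r);                        % full column rank factor (m x r)
G = R(1:r, :) * P';                   % full row rank factor    (r x n)
disp(norm(A - F*G))                   % of the order of round-off error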

The following theorem is due to MacDuffee [10], who apparently was the first to point out that a full rank factorization of a matrix leads to an explicit formula for its Moore-Penrose inverse A^†.

Theorem 1. If A ∈ R^{m×n}, with rank(A) = r > 0, has a full rank factorization A = FG, then A^† = G^T (F^T A G^T)^{-1} F^T.

As a direct consequence of Theorem 1, we have the following.

Corollary 2. Let A ∈ R^{m×n}, rank(A) = r, and let A = Q_r R_r P^T be the QR-factorization of A obtained above. Then A^† = P R_r^T (R_r R_r^T)^{-1} Q_r^T.

Proof. With F = Q_r, G = R_r P^T, and A = Q_r R_r P^T in Theorem 1, one obtains F^T A G^T = Q_r^T Q_r R_r P^T P R_r^T = R_r R_r^T, and consequently A^† = G^T (F^T A G^T)^{-1} F^T = P R_r^T (R_r R_r^T)^{-1} Q_r^T.

The function IMqrginv provides all implementation details of the above solution in MATLAB code. To calculate the numerical rank, one needs only the number of columns having at least one value above a tolerance level in absolute terms. This tolerance is set to the same value used by Katsikis et al. [9] and turns out to provide accurate results.
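A minimal MATLAB sketch consistent with Corollary 2 is given below; it is not the paper's exact code, and the function name and the tolerance value are illustrative assumptions:

function X = imqrginv_sketch(A)
% Sketch of an IMqrginv-style computation of the Moore-Penrose inverse,
% based on Corollary 2: A^+ = P * R_r' * (R_r*R_r')^(-1) * Q_r'.
tol = 1e-5;                           % illustrative tolerance for rank detection
[Q, R, P] = qr(A);                    % column-pivoted QR: A*P = Q*R
r = sum(any(abs(R) > tol, 2));        % numerical rank = number of nonzero rows of R
Qr = Q(:, 1:r);                       % orthonormal basis of the range of A
Rr = R(1:r, :);                       % full row rank triangular factor
X = P * (Rr' / (Rr * Rr')) * Qr';     % Moore-Penrose inverse of A
end

The only matrix that needs to be inverted is the r × r matrix R_r R_r^T, which is handled above by a linear solve (the / operator) rather than an explicit inverse.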

3. Numerical Examples

In this section, we compare the performance of the proposed method (the IMqrginv function) to that of Katsikis et al. [9] for the computation of Moore-Penrose inverse matrices. Testing of qrginv and IMqrginv was performed separately for random singular matrices and for singular matrices with "large" condition numbers from the Matrix Computation Toolbox (see [11]). We also tested the proposed method on some sparse matrices and obtained very fast and accurate results. Specifically, the MATLAB 7.11 (R2010b) Service Pack 3 version of the software was used on an Intel Core 2 Duo 8400 processor running at 2.26 GHz with 3 GB of RAM, under the Windows Vista Professional 32-bit operating system.

3.1. Random Singular Matrices

We compare the performance of the proposed method IMqrginv to that of [9] (the qrginv function). In the same way as [4], we tested a series of random singular square matrices of increasing size, which are rank deficient. In addition, the accuracy of the results is examined with the matrix 2-norm of the error matrices corresponding to the four properties characterizing the Moore-Penrose inverse. The computation error per coefficient in the error matrices is small in all cases. The computation time (in seconds) is reported in Table 1. We observe that the computation time of the IMqrginv method is substantially less than that of the qrginv method.
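The experiment can be reproduced along the following lines, using the imqrginv_sketch function of Section 2; the size and rank chosen here are illustrative, not the exact values behind Table 1:

% Timing and accuracy check on a random rank deficient matrix.
n = 400;  r = n/2;                    % illustrative size and rank
A = randn(n, r) * randn(r, n);        % random n x n matrix of rank r
tic;  X = imqrginv_sketch(A);  t = toc;
e1 = norm(A*X*A - A, 2);              % errors in the four Penrose conditions
e2 = norm(X*A*X - X, 2);
e3 = norm((A*X)' - A*X, 2);
e4 = norm((X*A)' - X*A, 2);
fprintf('time = %.4f s, errors = %.2e %.2e %.2e %.2e\n', t, e1, e2, e3, e4);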


Method Time

qrginv 0.0317
IMqrginv 0.0137

qrginv 0.1176
IMqrginv 0.0786

qrginv 1.0584
IMqrginv 0.8236

qrginv 10.8336
IMqrginv 8.2976

qrginv 81.1429
IMqrginv 55.0746

3.2. Singular Matrices

In this section we use a set of singular test matrices obtained from the function matrix in the Matrix Computation Toolbox [11] (which includes test matrices from MATLAB itself). The condition numbers of the test matrices range over several orders of magnitude. Since the matrices are of relatively small size, and in order to measure the time needed by each algorithm to compute the Moore-Penrose inverse accurately, each algorithm was run a number of distinct times; the reported time is the mean over these replications. For each case, the time responses together with the error results are presented in Tables 2, 3, 4, 5, 6, 7, 8, and 9. We observe that the computation time of the IMqrginv method is shorter than that of the qrginv method for all matrices, so the proposed method proves to be more efficient.
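A sketch of this timing procedure is given below; it assumes the toolbox gateway function matrix(k, n), which returns the k-th test matrix of order n, and the matrix index, order, and number of replications are our own illustrative choices:

% Mean computation time over several replications for one toolbox test matrix.
% Assumes the Matrix Computation Toolbox is on the path and that its gateway
% function matrix(k, n) returns the k-th test matrix of order n.
k = 3;  n = 200;  reps = 100;         % illustrative choices
A = matrix(k, n);
t = zeros(reps, 1);
for i = 1:reps
    tic;  X = imqrginv_sketch(A);  t(i) = toc;
end
fprintf('mean time = %.4f s, ||AXA - A|| = %.2e\n', mean(t), norm(A*X*A - A, 2));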


Method Time

qrginv 0.0376
IMqrginv 0.0187


Method Time

qrginv 0.2816
IMqrginv 0.2334


Method Time

qrginv 0.0377
IMqrginv 0.0165


Method Time

qrginv 0.0279
IMqrginv 0.0099


Method Time

qrginv 0.0244
IMqrginv 0.0102


Method Time

qrginv 0.0276
IMqrginv 0.0138


Method Time

qrginv 0.0247
IMqrginv 0.0117


Method Time

qrginv 0.0154
IMqrginv 0.0035

3.3. Matrix-Market Sparse Matrices

For sparse matrices, we have chosen some matrices from the Matrix Market collection [11]. We follow the same method as in [8]: each rank deficient test matrix is built from one of the chosen matrices A and a zero matrix O of appropriate order. The chosen matrices with their properties are shown in Table 10, and the results of the methods are presented in Table 11. We observe that the Moore-Penrose inverses obtained by IMqrginv are reasonably accurate in all cases and that the computation time required by the IMqrginv method is significantly less than the time required by the qrginv method. On the other hand, the accuracy of the IMqrginv method is somewhat lower than that of the qrginv method; moreover, in some cases the accuracy of both methods is low. We can conclude that the IMqrginv method is a robust and efficient tool for obtaining the Moore-Penrose inverse of large sparse and rank deficient matrices.
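A sketch of how such a test can be set up is shown below; mmread is the MATLAB reader distributed with the Matrix Market collection, the file name is only an example, and the zero-block augmentation is merely one illustrative way of producing a rank deficient test matrix, not necessarily the exact construction of [8]:

% Build a rank deficient test matrix from a Matrix Market file and test it.
A = mmread('sherman1.mtx');                     % e.g., SHERMAN1 (1000 x 1000, sparse)
[m, n] = size(A);
AZ = full([A, sparse(m, n); sparse(m, 2*n)]);   % illustrative zero-block augmentation
tic;  X = imqrginv_sketch(AZ);  t = toc;
fprintf('time = %.4f s, error = %.2e\n', t, norm(AZ*X*AZ - AZ, 2));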


Matrix m n nnz Cond

WELL1033 1033 320 4732 Not available
WELL1850 1850 712 8758 Not available
ILLC1033 1033 320 4732 Not available
ILLC1850 1850 712 8758 Not available
WATT1 1856 1856 11360
GR-30-30 900 900 4322
ADD20 2395 2395 17319
NOS3 960 960 8402
SHERMAN1 1000 1000 3750


Matrix Method Time

WELL1033_Z qrginv 0.6277
WELL1033_Z IMqrginv 0.3300

WELL1850_Z qrginv 3.8271
WELL1850_Z IMqrginv 1.7559

ILLC1033_Z qrginv 0.6119
ILLC1033_Z IMqrginv 0.3786

ILLC1850_Z qrginv 3.9197
ILLC1850_Z IMqrginv 1.6791

WATT1_Z qrginv 10.5110 6.1068
WATT1_Z IMqrginv 3.1018 6.1068

GR-30-30_Z qrginv 3.1095
GR-30-30_Z IMqrginv 1.6834

ADD20_Z qrginv 60.9176
ADD20_Z IMqrginv 59.0933

NOS3_Z qrginv 4.3836
NOS3_Z IMqrginv 3.0214