Journal of Applied Mathematics

Research Article | Open Access

Volume 2012 | Article ID 262034 | 12 pages | https://doi.org/10.1155/2012/262034

Successive Matrix Squaring Algorithm for Computing the Generalized Inverse

Xiaoji Liu and Yonghui Qin

Academic Editor: J. Biazar
Received: 12 Jun 2012
Accepted: 29 Nov 2012
Published: 27 Dec 2012

Abstract

We investigate successive matrix squaring (SMS) algorithms for computing the generalized inverse A_{T,S}^{(2)} of a given matrix A.

1. Introduction

Throughout this paper, the symbol C^{m×n} denotes the set of all m × n complex matrices. Let A ∈ C^{m×n}, and let the symbols R(A), N(A), σ(A), and ||·|| stand for the range, the null space, and the spectrum of the matrix A, and the matrix norm, respectively.

A matrix X is called a {2}-inverse of a matrix A if XAX = X holds. The symbols A^†, ind(A), and A^D denote, respectively, the Moore-Penrose inverse, the index, and the Drazin inverse of A; obviously, both A^† and A^D are particular {2}-inverses (see [1] for details). Let A ∈ C^{m×n}, let T be a subspace of C^n, and let S be a subspace of C^m. If there exists a unique matrix X such that XAX = X, R(X) = T, and N(X) = S, then X is called the {2}-inverse of A with the prescribed range T and null space S, denoted by A_{T,S}^{(2)}.

As is well known [1], the generalized inverse A_{T,S}^{(2)} of a given matrix with prescribed range and null space is important in many branches of applied mathematics, such as stable approximation of ill-posed problems, linear and nonlinear problems involving rank-deficient generalized inverses, and applications in statistics [2]. In particular, the generalized inverse A_{T,S}^{(2)} plays an important role in iterative methods for solving nonlinear equations [1, 2].

In recent years, successive matrix squaring algorithms have been investigated for computing the generalized inverse of a given matrix [3–7]. In [3], the authors exhibit a deterministic iterative algorithm for linear system solution and matrix inversion based on a repeated matrix squaring scheme. Wei derives a successive matrix squaring (SMS) algorithm to approximate the Drazin inverse in [4]. Wei et al. in [5] derive an SMS algorithm to approximate the weighted generalized inverse A_{M,N}^{+}, which can be expressed in the form of successive squaring of a composite matrix. Stanimirović and Cvetković-Ilić derive an SMS algorithm to approximate an outer generalized inverse with prescribed range and null space of a given matrix in [6]. In [7], the authors introduce a new algorithm based on the SMS method that uses the strategy of displacement rank in order to find various outer inverses with prescribed ranges and null spaces of a square Toeplitz matrix.

In this paper, based on [3–5], we investigate successive matrix squaring algorithms for computing the generalized inverse A_{T,S}^{(2)} of a matrix A in Section 2, and we give a numerical example illustrating our results in Section 3.

The following lemma shows that the generalized inverse A_{T,S}^{(2)} exists and is unique.

Lemma 1.1 (see [1, Theorem 2.14]). Let A ∈ C^{m×n} with rank r, let T be a subspace of C^n of dimension s ≤ r, and let S be a subspace of C^m of dimension m − s. Then A has a {2}-inverse X such that R(X) = T and N(X) = S if and only if AT ⊕ S = C^m, in which case X is unique.

The following notions are stated in Banach spaces, but they remain valid in finite-dimensional spaces. Throughout this paper, let X and Y denote Banach spaces, and let B(X, Y) stand for the set of all bounded linear operators from X to Y; in particular, B(X) = B(X, X). We now state two lemmas which are formulated for Banach spaces but can also be used in the finite-dimensional setting.

Lemma 1.2 (see [8, Section 4]). Let A ∈ B(X, Y), and let T and S be, respectively, closed subspaces of X and Y. Then the following statements are equivalent: (i) A has a {2}-inverse B ∈ B(Y, X) such that R(B) = T and N(B) = S; (ii) A(T) is a complemented subspace of Y, the restriction A|_T : T → A(T) is invertible, and A(T) ⊕ S = Y.

Lemma 1.3 (see [9, Section 3]). Suppose that the conditions of Lemma 1.2 are satisfied and take the space decompositions as indicated. Then A has the following matrix form, in which the leading block is invertible. Moreover, B = A_{T,S}^{(2)} has the following matrix form:

From (1.5), we obtain the following projections (see [9]):

2. Main Result

In this section, we consider successive matrix squaring (SMS) algorithms for computing the generalized inverse A_{T,S}^{(2)}.

Let G ∈ C^{n×m} with R(G) = T and N(G) = S, and define the sequence {X_k} in C^{n×m} by the iterative form (2.1) ([10, Theorem 2.2] treats the generalized inverse in the infinite-dimensional case). The authors of [10] proved that the iteration (2.1) converges to the generalized inverse A_{T,S}^{(2)} if and only if the corresponding spectral radius is less than 1 (for the proof, see [11] and [10, Theorem 2.1]).
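Since the display of (2.1) is not reproduced above, the following sketch assumes the first-order form common in the SMS literature [4–6]: X_{k+1} = P X_k + Q with P = I − βGA, Q = βG, and X_0 = Q, where G has range T and null space S. The names P, Q, β, and G are assumptions of this illustration, not symbols fixed by this text.

```python
import numpy as np

# Assumed first-order form of (2.1): X_{k+1} = P X_k + Q with
# P = I - beta*G@A and Q = beta*G (names taken from the SMS
# literature, not from a display reproduced in this text).
def first_order_iteration(A, G, beta, steps):
    n = A.shape[1]
    P = np.eye(n) - beta * G @ A
    Q = beta * G
    X = Q.copy()
    for _ in range(steps):
        X = P @ X + Q
    return X

# Illustration: for an invertible A and G = I (so T = C^n, S = {0}),
# the fixed point of X = P X + Q is X = A^{-1}.
A = np.array([[2.0, 0.0], [0.0, 3.0]])
G = np.eye(2)
X = first_order_iteration(A, G, beta=0.2, steps=60)
print(np.allclose(X, np.linalg.inv(A)))  # spectral radius of P is 0.6 < 1
```

Convergence of this scheme is linear with ratio equal to the spectral radius of P; the SMS idea is to reach the iterate X_{2^k} after only k squarings of a composite matrix.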

In the following, we give the algorithm for computing the generalized inverse A_{T,S}^{(2)} of a matrix A.

Let and . It is not difficult to see that the above fact can be written as follows: From (2.2) and letting , we have

By (2.3), we see that the iterate of (2.1) equals the upper-right block of the corresponding matrix power. Accordingly, we define the new iterative form as follows:

From the new iterative form (2.4), we arrive at

Assume that , and by (2.5), we have

By (2.4)–(2.6), we have Algorithm 1.

Input: the initial value matrices and the accuracy value ε;
Output: the matrix computed by the algorithm;
Begin: assign the first matrix its initial value;
assign the second matrix its initial value;
compute the composite matrix;
compute the error between the current and the previous iterate;
while the error is not smaller than ε, do
apply the loop function (the squaring step);
recompute the error between the current and the previous iterate;
end while;
multiply the resulting block by the scaling matrix and assign the product to the output;
End the algorithm.
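Algorithm 1 amounts to repeatedly squaring a composite block matrix and reading off its upper-right block. A minimal numerical sketch, assuming the composite matrix [[P, Q], [0, I]] with P = I − βGA and Q = βG used throughout the SMS literature (these symbols are assumptions, since the paper's own displays are not reproduced here):

```python
import numpy as np

def sms(A, G, beta, tol=1e-12, max_squarings=60):
    """Successive matrix squaring sketch: square the composite block
    matrix [[P, Q], [0, I]] and read off its upper-right block."""
    m, n = A.shape                      # A is m x n, G is n x m
    P = np.eye(n) - beta * G @ A        # assumed names from the SMS literature
    Q = beta * G
    T = np.block([[P, Q], [np.zeros((m, n)), np.eye(m)]])
    X = T[:n, n:]
    for _ in range(max_squarings):
        X_old = X
        T = T @ T                       # one squaring doubles the iterate index
        X = T[:n, n:]
        if np.linalg.norm(X - X_old) < tol:
            break
    return X

# With invertible A and G = I, the upper-right block tends to A^{-1}.
A = np.array([[2.0, 1.0], [0.0, 3.0]])
X = sms(A, np.eye(2), beta=0.2)
print(np.allclose(X, np.linalg.inv(A)))
```

Since each squaring doubles the exponent, k squarings reproduce 2^k steps of the first-order iteration, which is the source of the step-count savings reported in Section 3.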

From (2.4)–(2.6) and Algorithm 1, we obtain the following result.

Theorem 2.1. Let A ∈ C^{m×n}. Then the sequence (2.4) converges to the generalized inverse A_{T,S}^{(2)} if and only if the corresponding spectral radius is less than 1; in this case, the error estimate (2.7) holds.

Proof. From the proofs in [11] and [10, Theorem 2.1], and according to (2.4), (2.5), and (2.6), we easily obtain the first part of the theorem. In the following, we prove only the last part, that is, that the inequality (2.7) holds.
By applying (2.5) and (2.6), we obtain
By the iteration (2.4) and (2.9), we arrive at

The following corollary gives the same result as [6, Theorem 2.3]. It presents an explicit representation of the generalized inverse A_{T,S}^{(2)} and shows that the sequence (2.4) converges to a {2}-inverse of a given matrix via its full-rank decomposition.

Corollary 2.2. Let the full-rank decomposition be as in [6, Theorem 2.3]. Then the sequence (2.4) converges to the {2}-inverse A_{T,S}^{(2)} if and only if the corresponding spectral radius is less than 1; in this case, the corresponding representation holds.

Proof. From Theorem 2.1 and by [6, Theorem 2.3], we have the result.
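One way to make the full-rank route of Corollary 2.2 concrete is the well-known representation A_{T,S}^{(2)} = F (G A F)^{-1} G, where W = FG is a full-rank decomposition of a matrix W with R(W) = T and N(W) = S (cf. [12]). The symbols W, F, and G here are illustrative, since the corollary's own displays are not reproduced. A quick numerical check that the resulting X is a {2}-inverse:

```python
import numpy as np

# Illustrative full-rank representation (cf. [12]): with W = F G a
# full-rank decomposition prescribing T = R(W) and S = N(W), the matrix
# X = F (G A F)^{-1} G is the {2}-inverse with that range and null
# space, provided G A F is invertible.  W, F, G are example names.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
F = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])            # 3 x 2, full column rank
G = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])       # 2 x 3, full row rank
X = F @ np.linalg.inv(G @ A @ F) @ G
print(np.allclose(X @ A @ X, X))       # X is a {2}-inverse of A
```

The identity X A X = X holds algebraically whenever G A F is invertible, since X A X = F (GAF)^{-1} (GAF) (GAF)^{-1} G = X.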

In the following, we consider an improvement of the iterative form (2.1) (see [11] for computing the Moore-Penrose inverse and the Drazin inverse in the matrix case and [10, Theorem 2.2] for computing the generalized inverse in the infinite-dimensional case). Let the following block matrix be given; then

By induction if has the following form: then

Similarly to the iterative form (2.4), we also define the new iterative scheme (2.18). Note that from (2.18),

Then, by (2.18) and (2.19), we arrive at

From (2.14)–(2.20), we find that to compute the generalized inverse A_{T,S}^{(2)} one only needs to compute the corresponding block of the matrix power. Similarly to Algorithm 1, we also obtain Algorithm 2.

Input: the matrices and the accuracy value ε;
Output: the matrix computed by the algorithm;
Begin: assign the first matrix its initial value;
assign the second matrix its initial value;
compute the product of the two matrices and assign it to the next iterate;
repeat this computation for the remaining products in the same way;
compute the product with the last matrix and assign it in the same way;
assign to the accumulator the sum of the computed matrices;
take the norm and assign its value to the error;
while the error is not smaller than ε, do
(the iteration count is capped at 500 steps; in practice far fewer are needed)
for k = 1 : 500
compute the product of the given matrix and the current iterate, and assign it to the new matrix;
add the new matrix to the accumulator;
end for
compute the error between the current and the previous iterate;
end while;
multiply the result by the scaling matrix and assign the product to the output;
End the algorithm.
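The improved schemes above trace back to the hyperpower family of [11]. Its order-2 member, the Newton–Schulz iteration X_{k+1} = X_k(2I − A X_k), is a convenient standalone illustration of how the accuracy doubles per step. The starting choice X_0 = αA^T with 0 < α < 2/σ_max(A)^2, converging to the Moore–Penrose inverse A^†, is a standard fact and not a display from this paper:

```python
import numpy as np

# Order-2 hyperpower (Newton-Schulz) iteration, the simplest member of
# the family studied in [11]: X_{k+1} = X_k (2I - A X_k).  With
# X_0 = alpha * A^T and 0 < alpha < 2 / sigma_max(A)^2 it converges to
# the Moore-Penrose inverse A^+ (a standard fact, not this paper's display).
def newton_schulz(A, steps=50):
    alpha = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / sigma_max(A)^2
    X = alpha * A.T
    I = np.eye(A.shape[0])
    for _ in range(steps):
        X = X @ (2 * I - A @ X)               # residual is squared each step
    return X

A = np.array([[1.0, 0.0], [2.0, 1.0], [0.0, 1.0]])  # full column rank
X = newton_schulz(A)
print(np.allclose(X, np.linalg.pinv(A)))
```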

Analogously to Theorem 2.1, by Algorithm 2 and the sequence (2.18), we also have the following theorem.

Theorem 2.3. Let A ∈ C^{m×n}. Then the sequence (2.18) converges to the generalized inverse A_{T,S}^{(2)} if and only if the corresponding spectral radius is less than 1; in this case, the corresponding error estimate holds.

Proof. Similarly to the proof of [10, Theorem 2.1], we can prove the first part of this theorem. Analogously to the proof of Theorem 2.1, we finish the proof of the theorem.

In the following, we extend the sequence (2.4) as follows. By (2.26) and by induction, we have

Under this assumption, we easily have

Similarly, from (2.23) and (2.25), we obtain the following result.

Theorem 2.4. Let A ∈ C^{m×n}. Then the extended sequence defined above converges to the generalized inverse A_{T,S}^{(2)} if and only if the corresponding spectral radius is less than 1; in this case, the corresponding error estimate holds.

Proof. From (2.25), using the extended sequence in place of (2.4) in Theorem 2.1, we easily obtain that the sequence converges to the generalized inverse A_{T,S}^{(2)} if and only if the corresponding spectral radius is less than 1. Similarly to the formula (2.29), we obtain the stated representation, where the quantities are the same as in Theorem 2.1.

In the following, we consider the dual iterative form.

Let G ∈ C^{n×m}, and define the sequence in C^{n×m} by the following iterative form (see [11] and [10, Theorem 2.3]):

Let and . It is not difficult to see that the above fact can be written as follows:

From iterative forms (2.26) and (2.29), we have the following theorem.

Theorem 2.5. Let A ∈ C^{m×n}. Then the dual sequence defined above converges to the generalized inverse A_{T,S}^{(2)} if and only if the corresponding spectral radius is less than 1; in this case, the corresponding error estimate holds.

Similarly to Corollary 2.2, we have the result as follows.

Corollary 2.6. Let the full-rank decomposition be as in Corollary 2.2. Then the dual sequence converges to the {2}-inverse A_{T,S}^{(2)} if and only if the corresponding spectral radius is less than 1; in this case, the corresponding representation holds.

In the following, we consider the improvement of the iterative form (2.29) (see [11] for computing the Moore-Penrose inverse and the Drazin inverse in the matrix case and [10, Theorem 2.3] for computing the generalized inverse in the infinite-dimensional case):

Similarly to (2.14), we have

Analogously to Theorem 2.5, by Algorithm 2 and (2.36), we obtain the following theorem.

Theorem 2.7. Let A ∈ C^{m×n}. Then the sequence defined above converges to the generalized inverse A_{T,S}^{(2)} if and only if the corresponding spectral radius is less than 1; in this case, the corresponding error estimate holds.

Dually, we give the SMS algorithm for computing the generalized inverse A_{T,S}^{(2)}, which is analogous to the iterative form (2.23), and omit the proofs:

Similarly to Theorem 2.4, from (2.35) and (2.39), we obtain the following result.

Theorem 2.8. Let A ∈ C^{m×n}. Then the sequence defined above converges to the generalized inverse A_{T,S}^{(2)} if and only if the corresponding spectral radius is less than 1; in this case, the corresponding error estimate holds.

3. Example

Here is an example to verify the effectiveness of the SMS method.

Example 3.1. Let the matrix and the parameters be given as follows. Take the initial value as above; by (2.2), we have the composite matrix, and from [10, 12] we easily obtain the generalized inverse A_{T,S}^{(2)}:

Then, from Algorithm 1, we obtain

But by the iteration (2.1), we get

From the data in (3.5) and (3.6), we obtain Table 1.


Table 1

Method    Iteration (2.1)    Algorithm 1
Steps     5                  2

From (3.5), (3.6), and Table 1, we see that only two steps are needed by Algorithm 1, but five steps by the iterative form (2.1).
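The two-versus-five step count in Table 1 reflects a general phenomenon: k squarings of the composite matrix reproduce 2^k steps of the first-order iteration. A sketch under the same assumed forms as in Section 2 (P = I − βGA and Q = βG are illustrative names, with G = I here):

```python
import numpy as np

# Compare pass counts: the first-order scheme X_{k+1} = P X_k + Q versus
# successive squaring of the composite matrix [[P, Q], [0, I]].
A = np.array([[2.0, 1.0], [0.0, 3.0]])
n = A.shape[0]
beta = 0.2
P = np.eye(n) - beta * A
Q = beta * np.eye(n)
target = np.linalg.inv(A)
tol = 1e-10

X = Q.copy()                      # linear iteration
lin_steps = 0
while np.linalg.norm(X - target) > tol:
    X = P @ X + Q
    lin_steps += 1

T = np.block([[P, Q], [np.zeros((n, n)), np.eye(n)]])
sq_steps = 0                      # successive squaring
while np.linalg.norm(T[:n, n:] - target) > tol:
    T = T @ T
    sq_steps += 1

print(lin_steps, sq_steps)        # squaring needs roughly log2 as many passes
```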

Acknowledgments

X. Liu is supported by the National Natural Science Foundation of China (11061005), College of Mathematics and Computer Science, Guangxi University for Nationalities, Nanning, China, and Y. Qin is supported by the Innovation Project of Guangxi University for Nationalities (gxun-chx2011075), College of Mathematics and Computer Science, Guangxi University for Nationalities, Nanning, China.

References

  1. A. Ben-Israel and T. N. E. Greville, Generalized Inverses: Theory and Applications, vol. 15 of CMS Books in Mathematics, Springer, New York, NY, USA, 2nd edition, 2003.
  2. A. J. Getson and F. C. Hsuan, {2}-Inverses and Their Statistical Application, vol. 47 of Lecture Notes in Statistics, Springer, New York, NY, USA, 1988.
  3. B. Codenotti, M. Leoncini, and G. Resta, “Repeated matrix squaring for the parallel solution of linear systems,” in PARLE '92 Parallel Architectures and Languages Europe, vol. 605 of Lecture Notes in Computer Science, pp. 725–732, Springer, Berlin, Germany, 1992.
  4. Y. Wei, “Successive matrix squaring algorithm for computing the Drazin inverse,” Applied Mathematics and Computation, vol. 108, no. 2-3, pp. 67–75, 2000.
  5. Y. Wei, H. Wu, and J. Wei, “Successive matrix squaring algorithm for parallel computing the weighted generalized inverse A_{M,N}^{+},” Applied Mathematics and Computation, vol. 116, no. 3, pp. 289–296, 2000.
  6. P. S. Stanimirović and D. S. Cvetković-Ilić, “Successive matrix squaring algorithm for computing outer inverses,” Applied Mathematics and Computation, vol. 203, no. 1, pp. 19–29, 2008.
  7. M. Miladinović, S. Miljković, and P. Stanimirović, “Modified SMS method for computing outer inverses of Toeplitz matrices,” Applied Mathematics and Computation, vol. 218, no. 7, pp. 3131–3143, 2011.
  8. D. S. Djordjević and P. S. Stanimirović, “On the generalized Drazin inverse and generalized resolvent,” Czechoslovak Mathematical Journal, vol. 51(126), no. 3, pp. 617–634, 2001.
  9. D. S. Djordjević and P. S. Stanimirović, “Splittings of operators and generalized inverses,” Publicationes Mathematicae Debrecen, vol. 59, no. 1-2, pp. 147–159, 2001.
  10. X. Liu, Y. Yu, and C. Hu, “The iterative methods for computing the generalized inverse A_{T,S}^{(2)} of the bounded linear operator between Banach spaces,” Applied Mathematics and Computation, vol. 214, no. 2, pp. 391–410, 2009.
  11. X.-Z. Chen and R. E. Hartwig, “The hyperpower iteration revisited,” Linear Algebra and Its Applications, vol. 233, pp. 207–229, 1996.
  12. B. Zheng and G. Wang, “Representation and approximation for generalized inverse A_{T,S}^{(2)}: revisited,” Journal of Applied Mathematics & Computing, vol. 22, no. 3, pp. 225–240, 2006.

Copyright © 2012 Xiaoji Liu and Yonghui Qin. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
