Abstract

The definition of convergence of an infinite product of scalars is extended to infinite usual and Kronecker products of matrices. The new definitions are less restrictive than invertible convergence. Whereas invertible convergence requires the factor matrices to be invertible, in this study we do not assume that the matrices are invertible. Some sufficient conditions for these kinds of convergence are studied. Further, some matrix sequences which are convergent to the Moore-Penrose inverse and, more generally, to outer inverses are also studied. The results are derived here by considering the related well-known methods, namely, the Euler-Knopp, Newton-Raphson, and Tikhonov methods. Finally, we provide some numerical examples for computing both generalized inverses of arbitrary matrices of large dimension by using MATLAB and compare the results of several of the methods.

1. Introduction and Preliminaries

A scalar infinite product $\prod_{n=1}^{\infty} a_n$ of complex numbers is said to converge if $a_n$ is nonzero for $n$ sufficiently large, say $n \geq N$, and $\lim_{m\to\infty}\prod_{n=N}^{m} a_n$ exists and is nonzero. If this is so, then $\prod_{n=1}^{\infty} a_n$ is defined to be $\left(\prod_{n=1}^{N-1} a_n\right)\left(\lim_{m\to\infty}\prod_{n=N}^{m} a_n\right)$. With this definition, a convergent infinite product vanishes if and only if one of its factors vanishes.

Let $\{A_i\}_{i=1}^{\infty}$ be a sequence of $n \times n$ matrices. In [1], Daubechies and Lagarias defined the convergence of an infinite product of matrices, without the adverb "invertibly," as follows.
(i) An infinite product of matrices is said to be right convergent if $\lim_{m\to\infty} A_1 A_2 \cdots A_m$ exists, in which case $\prod_{i=1}^{\infty} A_i = \lim_{m\to\infty} A_1 A_2 \cdots A_m$.
(ii) An infinite product of matrices is said to be left convergent if $\lim_{m\to\infty} A_m A_{m-1} \cdots A_1$ exists, in which case $\prod_{i=\infty}^{1} A_i = \lim_{m\to\infty} A_m A_{m-1} \cdots A_1$.

The idea of invertible convergence of a sequence of matrices was introduced by Trench [2, 3] as follows. An infinite product of matrices $\prod_{i=\infty}^{k} A_i$ is said to be invertibly convergent if there is an integer $N \geq k$ such that $A_i$ is invertible for $i \geq N$, and $L = \lim_{m\to\infty} A_m A_{m-1}\cdots A_N$ exists and is invertible. In this case, $\prod_{i=\infty}^{k} A_i = L A_{N-1}\cdots A_k$.

Let us recall some concepts that will be used below. Throughout, we consider matrices over the field $\mathbb{C}$ of complex numbers or the field $\mathbb{R}$ of real numbers. The set of $m$-by-$n$ complex matrices is denoted by $\mathbb{C}^{m\times n}$; for simplicity, when $m = n$, we write $\mathbb{C}^{n}$ instead of $\mathbb{C}^{n\times n}$. The notations $A^T$, $A^*$, $A^\dagger$, $A^{(2)}_{T,S}$, $\operatorname{rank}(A)$, $R(A)$, $N(A)$, $\rho(A)$, $\|A\|_2$, $\|A\|$, and $\sigma(A)$ stand, respectively, for the transpose, conjugate transpose, Moore-Penrose inverse, outer inverse, rank, range, null space, spectral radius, spectral norm, a matrix norm, and the set of all eigenvalues of a matrix $A$.

The Moore-Penrose and outer inverses of an arbitrary matrix (including singular and rectangular matrices) are very useful in various applications in control system analysis, statistics, singular differential and difference equations, Markov chains, iterative methods, least-squares problems, perturbation theory, neural network problems, and many other subjects found in the literature (see, e.g., [4–14]).

It is well known that the Moore-Penrose inverse (MPI) of a matrix $A \in \mathbb{C}^{m\times n}$ is defined to be the unique solution $X \in \mathbb{C}^{n\times m}$ of the following four matrix equations (see, e.g., [4, 11, 14–20]): $AXA = A$, $XAX = X$, $(AX)^* = AX$, and $(XA)^* = XA$, and is often denoted by $X = A^\dagger$. In particular, when $A$ is a square nonsingular matrix, then $A^\dagger$ reduces to $A^{-1}$.
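As a quick numerical illustration, the following MATLAB sketch checks that the built-in pinv satisfies the four Penrose equations; the small test matrix is an assumption chosen only for this illustration.

% A small test matrix (chosen only for illustration).
A = [1 2; 2 4; 0 1];
X = pinv(A);                        % built-in Moore-Penrose inverse
res1 = norm(A*X*A - A);             % AXA = A
res2 = norm(X*A*X - X);             % XAX = X
res3 = norm((A*X)' - A*X);          % (AX)* = AX
res4 = norm((X*A)' - X*A);          % (XA)* = XA
disp([res1 res2 res3 res4])         % all four residuals are of the order of rounding error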

For arbitrary $A \in \mathbb{C}^{m\times n}$ and $b \in \mathbb{C}^{m}$, it holds (see, e.g., [14, 18]) that $x = A^\dagger b$ satisfies the normal equations $A^*Ax = A^*b$ and lies in $R(A^*)$. Thus, $A^\dagger b$ is the unique minimum-norm least-squares solution of the linear least-squares problem (see, e.g., [14, 21, 22]) $\min_{x}\|Ax - b\|_2$. It is also well known that the singular value decomposition of any rectangular matrix $A \in \mathbb{C}^{m\times n}$ with $\operatorname{rank}(A) = r$ is given by $A = U\begin{pmatrix}\Sigma & 0\\ 0 & 0\end{pmatrix}V^*$, where $U \in \mathbb{C}^{m\times m}$ and $V \in \mathbb{C}^{n\times n}$ are unitary, $\Sigma = \operatorname{diag}(\sigma_1, \sigma_2, \ldots, \sigma_r)$ is a diagonal matrix with diagonal entries $\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_r > 0$, and $\sigma_1, \ldots, \sigma_r$ are the singular values of $A$, that is, $\sigma_1^2, \ldots, \sigma_r^2$ are the nonzero eigenvalues of $A^*A$. This decomposition is extremely useful for representing the MPI of $A$ by [20, 23] $A^\dagger = V\begin{pmatrix}\Sigma^{-1} & 0\\ 0 & 0\end{pmatrix}U^*$, where $\Sigma^{-1}$ is a diagonal matrix with diagonal entries $\sigma_1^{-1}, \ldots, \sigma_r^{-1}$.
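A minimal MATLAB sketch of this SVD-based representation (the test matrix and the rank tolerance are assumptions made only for illustration) is the following.

A = [4 0; 3 -5; 0 0];               % any rectangular test matrix
[U, S, V] = svd(A);                 % A = U*S*V'
s = diag(S);
tol = max(size(A)) * eps(max(s));   % tolerance for deciding which singular values are nonzero
r = sum(s > tol);                   % numerical rank
Sinv = zeros(size(A'));             % n-by-m block of reciprocal singular values
Sinv(1:r, 1:r) = diag(1 ./ s(1:r));
Apinv = V * Sinv * U';              % SVD representation of the Moore-Penrose inverse
norm(Apinv - pinv(A), 'fro')        % agrees with MATLAB's pinv up to rounding error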

Furthermore, the spectral norm of $A$ is defined by $\|A\|_2 = \sigma_1(A)$, where $\sigma_1(A)$ and $\sigma_r(A)$ are, respectively, the largest and smallest nonzero singular values of $A$.

Generally speaking, the outer inverse $A^{(2)}_{T,S}$ of a matrix $A \in \mathbb{C}^{m\times n}$ is the unique matrix $X \in \mathbb{C}^{n\times m}$ satisfying the following equations (see, e.g., [20, 24–27]): $XAX = X$, $R(X) = T$, and $N(X) = S$, where $T$ is a subspace of $\mathbb{C}^{n}$ of dimension $s \leq r$, and $S$ is a subspace of $\mathbb{C}^{m}$ of dimension $m - s$.

As we see in [13, 20, 24–29], it is a well-known fact that several important generalized inverses, such as the Moore-Penrose inverse $A^\dagger$, the weighted Moore-Penrose inverse $A^\dagger_{M,N}$, the Drazin inverse $A^D$, and so forth, are all outer inverses $A^{(2)}_{T,S}$ with prescribed range $T$ and null space $S$. In this case, the Moore-Penrose inverse can be represented in outer inverse form as follows [27]: $A^\dagger = A^{(2)}_{R(A^*),\,N(A^*)}$. Also, the representation and characterization of the outer generalized inverse $A^{(2)}_{T,S}$ have been considered by many authors (see, e.g., [15, 16, 20, 27, 30, 31]).

Finally, given two matrices $A = [a_{ij}] \in \mathbb{C}^{m\times n}$ and $B \in \mathbb{C}^{p\times q}$, the Kronecker product of $A$ and $B$ is defined by (see, e.g., [5, 7, 32–35]) $A \otimes B = [a_{ij}B] \in \mathbb{C}^{mp\times nq}$. Furthermore, the Kronecker product enjoys the following well-known and important properties.
(i) The Kronecker product is associative and distributive with respect to matrix addition.
(ii) If $A$, $B$, $C$, and $D$ are matrices of compatible dimensions, then $(A\otimes B)(C\otimes D) = AC \otimes BD$.
(iii) If $A$ and $B$ are positive definite matrices, then for any real number $r$, we have $(A\otimes B)^r = A^r \otimes B^r$.
(iv) If $A \in \mathbb{C}^{m\times n}$ and $B \in \mathbb{C}^{p\times q}$, then $(A\otimes B)^\dagger = A^\dagger \otimes B^\dagger$.
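Properties (ii) and (iv) are easy to check numerically; the following MATLAB sketch uses arbitrary random test matrices, which are assumptions made only for this check.

A = rand(3, 2);  B = rand(2, 4);    % arbitrary test matrices
C = rand(2, 3);  D = rand(4, 2);
% Property (ii): mixed-product rule (A kron B)(C kron D) = (AC) kron (BD)
norm(kron(A, B) * kron(C, D) - kron(A*C, B*D), 'fro')
% Property (iv): Moore-Penrose inverse of a Kronecker product
norm(pinv(kron(A, B)) - kron(pinv(A), pinv(B)), 'fro')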

2. Convergent Moore-Penrose Inverse of Matrices

First, we consider how to compute $A^\dagger$ by using sequence methods. The key to the results below is the following two lemmas, due to Wei [23] and Wei and Wu [17], respectively.

Lemma 2.1. Let $A \in \mathbb{C}^{m\times n}$ be a matrix. Then $A^\dagger = \big((A^*A)|_{R(A^*)}\big)^{-1}A^*$, where $(A^*A)|_{R(A^*)}$ is the restriction of $A^*A$ on $R(A^*)$.
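In computational terms this representation amounts to the identity $A^\dagger = (A^*A)^\dagger A^*$, with the inverse of the restriction realized through the Moore-Penrose inverse of $A^*A$; a one-line MATLAB check (the random test matrix is an assumption) is shown below.

A = rand(5, 3);                          % any test matrix
norm(pinv(A'*A) * A' - pinv(A), 'fro')   % essentially zero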

Lemma 2.2. Let $A \in \mathbb{C}^{m\times n}$ with $\operatorname{rank}(A) = r$ and singular values $\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_r > 0$. Suppose $\Omega$ is an open set such that $[\sigma_r^2, \sigma_1^2] \subseteq \Omega$. Let $\{f_n\}$ be a family of continuous real-valued functions on $\Omega$ with $f_n(x) \to 1/x$ uniformly on $[\sigma_r^2, \sigma_1^2]$. Then $\lim_{n\to\infty} f_n(A^*A)A^* = A^\dagger$. Moreover, for each $n$, we have $\|f_n(A^*A)A^* - A^\dagger\|_2 \leq \max_{1\leq i\leq r}\sigma_i\left|f_n(\sigma_i^2) - \sigma_i^{-2}\right|$.

It is well known that the inverse of an invertible operator can be calculated by interpolating the function $f(x) = 1/x$; in a similar manner, we will approximate the Moore-Penrose inverse by interpolating the function $f(x) = 1/x$ and using Lemmas 2.1 and 2.2.

One way to produce a family of functions which is suitable for use in Lemma 2.2 is to employ the well-known Euler-Knopp method. A series $\sum_{k=0}^{\infty} a_k$ is said to be Euler-Knopp summable with parameter $\alpha > 0$ to the value $s$ if the sequence $\{s_n\}$ defined by $s_n = \sum_{k=0}^{n}\sum_{j=0}^{k}\binom{k}{j}\alpha^{j+1}(1-\alpha)^{k-j}a_j$ converges to $s$. If $a_k = (1-x)^k$, so that $\sum_{k=0}^{\infty} a_k = 1/x$ for $|1-x| < 1$, then we obtain, as the Euler-Knopp transform of the series $\sum_{k=0}^{\infty}(1-x)^k$, the sequence $\{f_n\}$ given by $f_n(x) = \alpha\sum_{k=0}^{n}(1-\alpha x)^k$. Clearly $f_n(x) \to 1/x$ uniformly on any compact subset of the set $E_\alpha = \{x : |1-\alpha x| < 1\}$.

Another way to produce a family of functions which is also suitable for use in Lemma 2.2 is to employ the well-known Newton-Raphson method. This can be done by generating a sequence $\{t_n\}$, where $t_{n+1} = t_n(2 - x t_n)$ for a suitable $t_0$. Suppose that for $\alpha > 0$ we define a sequence of functions $\{g_n\}$ by $g_0(x) = \alpha$ and $g_{n+1}(x) = g_n(x)\big(2 - x g_n(x)\big)$. In fact, $1 - x g_{n+1}(x) = \big(1 - x g_n(x)\big)^2$. Iterating on this equality, it follows that $1 - x g_n(x) = (1-\alpha x)^{2^n}$, so that $g_n(x) \to 1/x$ if $x$ is confined to a compact subset of $E_\alpha$. Then there is a constant $c < 1$ (depending on this compact set) with $|1-\alpha x| \leq c$ and $|g_n(x) - 1/x| \leq c^{2^n}/|x|$.

According to the variational definition, $A^\dagger b$ is the vector which minimizes the functional $F(x) = \|Ax - b\|_2^2$ and also has the smallest 2-norm among all such minimizing vectors. The idea of Tikhonov's regularization [36, 37] of order zero is to approximately minimize both the functional and the norm by minimizing the functional $F_\alpha$ defined by $F_\alpha(x) = \|Ax - b\|_2^2 + \alpha\|x\|_2^2$, where $\alpha > 0$. The minimum of this functional occurs at the unique stationary point of $F_\alpha$, that is, the vector $x_\alpha$ which satisfies $\nabla F_\alpha(x_\alpha) = 0$. The gradient of $F_\alpha$ is given by $\nabla F_\alpha(x) = 2A^*(Ax - b) + 2\alpha x$, and hence the unique minimizer $x_\alpha$ satisfies $(A^*A + \alpha I)x_\alpha = A^*b$. On intuitive grounds, it seems reasonable to expect that $x_\alpha \to A^\dagger b$ as $\alpha \to 0^+$. Therefore, if we define sequences of functions by using the Euler-Knopp method, the Newton-Raphson method, and the idea of Tikhonov's regularization mentioned above, then we get the following theorem.

Theorem 2.3. Let $A \in \mathbb{C}^{m\times n}$ with $\operatorname{rank}(A) = r$ and singular values $\sigma_1 \geq \cdots \geq \sigma_r > 0$, and let $0 < \alpha < 2/\sigma_1^2$. Then
(i) the sequence defined by $X_n = \alpha\sum_{k=0}^{n}(I - \alpha A^*A)^k A^*$ (2.16) converges to $A^\dagger$. Furthermore, the error estimate is given by $\|X_n - A^\dagger\|_2 \leq q^{n+1}/\sigma_r$ (2.17), where $q = \max_{1\leq i\leq r}|1-\alpha\sigma_i^2| < 1$.
(ii) The sequence defined by $X_0 = \alpha A^*$, $X_{n+1} = X_n(2I - AX_n)$ (2.18) converges to $A^\dagger$. Furthermore, the error estimate is given by $\|X_n - A^\dagger\|_2 \leq q^{2^n}/\sigma_r$ (2.19), where $q = \max_{1\leq i\leq r}|1-\alpha\sigma_i^2| < 1$.
(iii) For $\alpha > 0$, the matrix $(A^*A + \alpha I)^{-1}A^*$ is well defined and $\lim_{\alpha\to 0^+}(A^*A + \alpha I)^{-1}A^* = A^\dagger$ (2.20). Thus, the error estimate is given by $\|(A^*A + \alpha I)^{-1}A^* - A^\dagger\|_2 \leq \alpha/\big(\sigma_r(\sigma_r^2 + \alpha)\big)$ (2.21).

Proof. (i) It follows from that , and hence we apply Lemma 2.2 if we choose the parameter $\alpha$ in such a way that , where is defined by (2.7). We may choose $\alpha$ such that . If we use the sequence defined by , it is easy to verify that uniformly on any compact subset of . Hence, if , then, applying Lemma 2.2, we get . But it is easy to see from (2.22) that , where is given by (2.16). This is surely the case if ; then, for such $\alpha$, we have the representation . Note that if we set , then we get (2.16).
To derive an error estimate for the Euler-Knopp method, suppose that . If the sequence is defined as in (2.22), then Therefore, since , By for and , it follows that where is given by Clearly, and therefore . From Lemma 2.2, we establish (2.17).
(ii) Using the Newton-Raphson iterations in (2.8)–(2.11) in conjunction with Lemma 2.2, we see that the sequence defined by has the property that uniformly in . If we set , then we get (2.18).
If and , then we see that , where is given by (2.30). It follows as in (2.11), and hence, from Lemma 2.2, we get the error bound as in (2.19).
(iii) If we set $f_\alpha(x) = 1/(x+\alpha)$ in Lemma 2.2 and use the idea of Tikhonov's regularization as in (2.12)–(2.15), then it is easy to get (2.20) and (2.21).
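As a computational illustration of the three sequences in Theorem 2.3, the following MATLAB sketch is consistent with the standard forms of the Euler-Knopp, Newton-Raphson, and Tikhonov methods described above; the test matrix, the choice alpha = 1/sigma_1^2, the iteration counts, and the regularization parameter are assumptions made only for this illustration.

A = [2 0 1; 0 1 0];                 % any matrix with full row rank r = 2
alpha = 1 / norm(A, 2)^2;           % satisfies 0 < alpha < 2/sigma_1(A)^2
% (i) Euler-Knopp type iteration: X_{n+1} = X_n + alpha*A'*(I - A*X_n), X_0 = alpha*A'
X = alpha * A';
for n = 1:200
    X = X + alpha * A' * (eye(2) - A*X);
end
errEK = norm(X - pinv(A), 'fro');
% (ii) Newton-Raphson (Schulz) iteration: X_{n+1} = X_n*(2I - A*X_n), X_0 = alpha*A'
Y = alpha * A';
for n = 1:20
    Y = Y * (2*eye(2) - A*Y);
end
errNR = norm(Y - pinv(A), 'fro');
% (iii) Tikhonov regularization: X_alpha = (A'*A + a*I)^(-1)*A' for small a > 0
a = 1e-10;
Z = (A'*A + a*eye(3)) \ A';
errTK = norm(Z - pinv(A), 'fro');
disp([errEK errNR errTK])           % all three errors are small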

Huang and Zhang [28] presented the following sequence (2.34), which is convergent to $A^\dagger$: , where is called an acceleration parameter and is chosen so as to minimize the bound on the maximum distance of any nonzero singular value of from 1. They chose according to the first term of the sequence (2.34) with , and let ; then the acceleration parameters and have the following sequences: . We point out that the iteration (2.18) is a special case of the acceleration iteration (2.34). Further, we note that the above methods and the first-order iterative methods used by Ben-Israel and Greville [4] for computing $A^\dagger$ are sets of instructions for generating a sequence converging to $A^\dagger$.

Similarly, Liu et al. [25] introduced some necessary and sufficient conditions for iterative convergence to the generalized inverse $A^{(2)}_{T,S}$ and its existence, and estimated the error bounds of the iterative methods for approximating $A^{(2)}_{T,S}$ by defining the sequence (2.36) in the following way: , where with . Then the iteration (2.36) converges if and only if , equivalently, . In this case, if , , and , then exists and converges to $A^{(2)}_{T,S}$. Furthermore, the error estimate is given by , where .

What is the best value to choose in order to achieve good convergence? Unfortunately, finding it may be very difficult and still requires further study. If is a subset of and , then, analogously to [38, Example 4.1], we can have

3. Convergent Infinite Products of Matrices

Trench [3, Definition 1] defined invertible convergence of an infinite product of matrices under the assumption that the matrices are invertible for all $i \geq N$ (where $N$ is an integer). Here, we give less restrictive definitions of convergence of the infinite products $\prod_{i=k}^{\infty} A_i$ and $\prod_{i=\infty}^{k} A_i$ of complex matrices as follows.

Definition 3.1. Let $\{A_i\}_{i=k}^{\infty}$ be a sequence of $n\times n$ matrices. An infinite product $\prod_{i=k}^{\infty} A_i$ is said to be convergent if there is an integer $N \geq k$ such that $A_i \neq 0$ ($A_i$ may be invertible or not) for $i \geq N$, and $Q = \lim_{m\to\infty} A_N A_{N+1}\cdots A_m$ exists and is nonzero. In this case, we define $\prod_{i=k}^{\infty} A_i = A_k A_{k+1}\cdots A_{N-1}Q$. Similarly, an infinite product $\prod_{i=\infty}^{k} A_i$ converges if there is an integer $N \geq k$ such that $A_i \neq 0$ for $i \geq N$, and $P = \lim_{m\to\infty} A_m A_{m-1}\cdots A_N$ exists and is nonzero. In this case, we define $\prod_{i=\infty}^{k} A_i = P A_{N-1}\cdots A_k$. In the above Definition 3.1, the limit matrix may be singular even if $A_i$ is nonsingular for all $i$, and it may be singular if $A_i$ is singular for some $i$. However, this definition does not require that $A_i$ be invertible for large $i$.

Definition 3.2. Let $\{A_i\}_{i=k}^{\infty}$ be a sequence of $n\times n$ matrices. Then an infinite product $\prod_{i=\infty}^{k} A_i$ is said to be invertibly convergent if there is an integer $N \geq k$ such that $A_i$ is invertible for $i \geq N$, and $P = \lim_{m\to\infty} A_m A_{m-1}\cdots A_N$ exists and is invertible. In this case, we define $\prod_{i=\infty}^{k} A_i = P A_{N-1}\cdots A_k$. Definitions 3.1 and 3.2 have the following obvious consequence.

Theorem 3.3. Let $\{A_i\}_{i=k}^{\infty}$ be a sequence of $n\times n$ matrices such that the infinite products $\prod_{i=k}^{\infty} A_i$ and $\prod_{i=\infty}^{k} A_i$ are invertibly convergent. Then both infinite products are convergent, but the converse is, in general, not true.

Theorem 3.4. Let $\{A_i\}_{i=k}^{\infty}$ be a sequence of $n\times n$ matrices. (i) If the infinite product is convergent, then , where . (ii) If the infinite product converges, then .

Proof. (i) Suppose that is convergent such that when . Let . Then , where . Therefore, . Since , we then have But if is invertible when , then is invertible and Similarly, it is easy to prove (ii).

If the infinite products and are invertibly convergent in Theorem 3.4, then we get the following corollary.

Corollary 3.5. Let $\{A_i\}_{i=k}^{\infty}$ be a sequence of $n\times n$ matrices. (i) If the infinite product is invertibly convergent, then . (ii) If the infinite product is invertibly convergent, then , where .

The main reason for interest in the products above is to generate matrix sequences for solving matrix problems such as singular linear systems and singular coupled matrix equations. For example, Cao [21] and Shi et al. [22] constructed general stationary and nonstationary iterative processes generated by for solving the singular linear system , and Leizarowitz [39] established conditions for weak ergodicity of products, existence of optimal strategies for controlled Markov chains, and growth properties of certain linear nonautonomous differential equations based on a sequence (an infinite product) of stochastic matrices . Also, as discussed in [2], the motivation for Definition 3.1 stems from a question about linear systems of difference equations and coupled matrix equations: under what conditions on do solutions of, for instance, the system approach a finite nonzero limit whenever ? A system has linear asymptotic equilibrium if and only if is invertible for every and is invertibly convergent, whereas a system has the so-called least-squares linear asymptotic equilibrium if for every and converges.

Because of Theorem 3.4, we consider only infinite products of the form $\prod_{m=k}^{\infty}(I + B_m)$, where $\lim_{m\to\infty} B_m = 0$. We will write $P_m = (I + B_k)(I + B_{k+1})\cdots(I + B_m)$ for $m \geq k$. The following theorem provides the convergence and invertible convergence of the infinite product $\prod_{m=k}^{\infty}(I + B_m)$; the proof is omitted here.

Theorem 3.6. The infinite product $\prod_{m=k}^{\infty}(I + B_m)$ converges (invertibly converges) if $\sum_{m=k}^{\infty}\|B_m\| < \infty$ for some matrix norm $\|\cdot\|$.
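A small MATLAB sketch of this sufficient condition (the particular perturbations B_m = M/m^2 are assumptions chosen only so that the norms are summable) computes partial products and shows that they settle down.

n = 3;  M = rand(n);
P = eye(n);  P1000 = [];
for m = 1:2000
    P = P * (eye(n) + M/m^2);       % partial product (I+B_1)(I+B_2)...(I+B_m)
    if m == 1000, P1000 = P; end
end
norm(P - P1000, 'fro')              % small: the partial products settle down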

The following theorem relates the convergence of an infinite product to the asymptotic behavior of least-squares solutions of a related system of difference equations.

Theorem 3.7. The infinite product converges if and only if, for some integer , the matrix difference equation (3.15) has a least-squares solution such that . In this case,

Proof. Suppose that converges. Choose so that for , and let . Define Then is a solution of (3.15) such that .
Conversely, suppose that (3.15) has a least-squares solution such that . Then where . Therefore, Letting shows that which implies that converges.
From (3.14) and (3.18), we get which proves the first expression in (3.16). From (3.14) and (3.20), we get which proves the second expression in (3.16).

Remark 3.8. If the infinite product is invertibly convergent in Theorem 3.7, then

Theorem 3.7 indicates the connection between the convergence of an infinite product of matrices and the asymptotic properties of least-squares solutions of the matrix difference equation defined in (3.15). We can say that (3.15) has least-squares linear asymptotic equilibrium if every least-squares solution of (3.15) approaches a finite limit as .

For example, Ding and Chen [5], later generalized by Kılıçman and Al-Zhour [8], studied the convergence of least-squares solutions of the coupled Sylvester matrix equations (3.24), where , , are given constant matrices and and are the unknown matrices to be solved. If the coupled Sylvester matrix equations determined by (3.24) have a unique solution and , then the iterative solutions and given by [5, 8], where and are full column-rank and full row-rank matrices, respectively, converge to and for any finite initial values and .

The convergence factor in (3.26) may not be the best and may be conservative. In fact, there exists a best convergence factor for which a fast convergence rate of to and of to can be obtained, as in the numerical examples given by Cao [21] and Kılıçman and Al-Zhour [8]. How to find the connections between the convergence of infinite products of matrices and least-squares solutions of the coupled Sylvester matrix equations in (3.24) requires further research.

4. Numerical Examples

Here, we give some numerical examples for computing the outer inverse and the Moore-Penrose inverse by applying the sequence methods studied and derived in Section 2. The results in this section are obtained by choosing the Frobenius norm ($\|\cdot\|_F$) and using MATLAB.

Example 4.1. Consider the matrix Let .
Take

Here . Clearly , , and . By computing, we have In order to satisfy , we get that should satisfy the following .

From the iteration (2.5) in [24, Theorem 2.2]: let , and let and be given subspaces of such that there exists . Then the sequence in defined in the following way converges to if and only if and (where and ).

In this case, if , then

Thus we have Tables 1 and 2, respectively, where Table 1 illustrates that is the best value such that is reached in the least number of iteration steps; the reason is that such a value is calculated by using (2.38). Thus, for an appropriate , the iteration is better than the iteration (4.4) (cf. Tables 1 and 2). With respect to the error bound, the iterations for almost all are also better. Let us take the error bound smaller than ; for instance, the number of iteration steps in Table 1 is smaller than that in Table 2. In practice, however, we also consider the quantity in order to stop the iteration, since there exist cases such as . For example, for , where is the machine precision, the iteration for needs only 3 steps. Therefore, in general, the iteration (2.36) is better than the iteration (4.4) for an appropriate . Note that the iterations in both Tables 1 and 2 indicate a faster convergence for the quantity than for the quantity in Table 1 and the quantity in Table 2, since each of and is an upper bound for the quantity , and finding the best or least upper bound for this quantity requires further research.

Example 4.2. Consider the matrix Then by computing we have

Thus, (see Tables 3 and 4).

Example 4.3. We generate a random matrix by using MATLAB, and then we obtain the results shown in Tables 5 and 6.
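The exact matrices and parameters behind Tables 5 and 6 are not reproduced here; the following MATLAB sketch only illustrates the kind of experiment described, using a Newton-Raphson (Schulz) type iteration for the Moore-Penrose inverse of a random matrix together with a Frobenius-norm stopping rule of the kind mentioned in Example 4.1 (the dimensions, starting value, and step limit are assumptions).

m = 200;  n = 150;
A = rand(m, n);                     % random test matrix generated by MATLAB
X = A' / norm(A, 2)^2;              % starting value X_0 = alpha*A' with alpha = 1/sigma_1(A)^2
for k = 1:100
    Xnew = X * (2*eye(m) - A*X);    % Newton-Raphson (Schulz) step
    if norm(Xnew - X, 'fro') <= eps * norm(Xnew, 'fro')
        X = Xnew;  break;           % stop when successive iterates agree to machine precision
    end
    X = Xnew;
end
k, norm(X - pinv(A), 'fro')         % number of steps and the Frobenius-norm error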

Note that, from Tables 3, 4, 5, and 6, it is clear that the quantities , and , , become smaller and smaller and go to zero as increases in both iterations (2.34) and (2.18). We can also conclude that both iterations have almost the same speed of convergence when the dimension of the matrix is not very large, but the acceleration iteration (2.34) is better than the iteration (2.18) when the dimension of the matrix is very large, provided an appropriate acceleration parameter is used.

5. Concluding Remarks

In this paper, we have studied some matrix sequences converging to the Moore-Penrose inverse and the outer inverse of an arbitrary matrix $A$. The key to deriving matrix sequences convergent to the weighted Drazin and weighted Moore-Penrose inverses is Lemma 2.2. Some sufficient conditions for the convergence of the infinite products and of matrices are also derived. In our opinion, it is worth establishing connections between the convergence of infinite products of matrices and least-squares solutions of singular linear systems as well as singular coupled matrix equations.

6. Acknowledgments

The authors express their sincere thanks to the referee(s) for careful reading of the manuscript and several helpful suggestions. The authors also gratefully acknowledge that this research was partially supported by Ministry of Science, Technology and Innovations (MOSTI), Malaysia under the e-Science Grant 06-01-04-SF1050.