Complexity

Volume 2017 (2017), Article ID 6429725, 27 pages

https://doi.org/10.1155/2017/6429725

## Conditions for Existence, Representations, and Computation of Matrix Generalized Inverses

^{1}Faculty of Science and Mathematics, Department of Computer Science, University of Niš, Višegradska 33, 18000 Niš, Serbia

^{2}Faculty of Computer Science, Goce Delčev University, Goce Delčev 89, 2000 Štip, Macedonia

^{3}Aristoteleion Panepistimion, Thessalonikis, Greece

Correspondence should be addressed to Predrag S. Stanimirović; pecko@pmf.ni.ac.rs

Received 3 January 2017; Accepted 18 April 2017; Published 5 June 2017

Academic Editor: Sigurdur F. Hafstein

Copyright © 2017 Predrag S. Stanimirović et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Conditions for the existence and representations of -, -, and -inverses which satisfy certain conditions on ranges and/or null spaces are introduced. These representations are applicable to complex matrices and involve solutions of certain matrix equations. Algorithms arising from the introduced representations are developed. In particular, these algorithms can be used to compute the Moore-Penrose inverse, the Drazin inverse, and the usual matrix inverse. The implementation of the introduced algorithms is defined on the set of real matrices and is based on the Simulink implementation of GNN models for solving the involved matrix equations. In this way, we develop computational procedures which generate various classes of inner and outer generalized inverses on the basis of resolving certain matrix equations. As a consequence, some new relationships between the problem of solving matrix equations and the problem of numerical computation of generalized inverses are established. The theoretical results are applicable to complex matrices, and the developed algorithms are applicable to both time-varying and time-invariant real matrices.

#### 1. Introduction, Motivation, and Preliminaries

Let $\mathbb{C}^{m\times n}$ and $\mathbb{C}_{r}^{m\times n}$ (resp., $\mathbb{R}^{m\times n}$ and $\mathbb{R}_{r}^{m\times n}$) denote the set of all complex (resp., real) $m\times n$ matrices and of all complex (resp., real) $m\times n$ matrices of rank $r$. As usual, the notation $I$ denotes the unit matrix of an appropriate order. Further, by $A^{*}$, $\mathcal{R}(A)$, $\operatorname{rank}(A)$, and $\mathcal{N}(A)$ are denoted the conjugate transpose, the range, the rank, and the null space of $A\in\mathbb{C}^{m\times n}$.

The problem of pseudoinverse computation leads to the so-called Penrose equations:

(1) $AXA=A$, (2) $XAX=X$, (3) $(AX)^{*}=AX$, (4) $(XA)^{*}=XA$.

For a subset $S\subseteq\{1,2,3,4\}$, the set of all matrices obeying the conditions contained in $S$ is denoted by $A\{S\}$. Any matrix from $A\{S\}$ is called an $S$-inverse of $A$ and is denoted by $A^{(S)}$. By $A\{S\}_{s}$ is denoted the set of all $S$-inverses of $A$ of rank $s$. For any matrix $A$ there exists a unique element in the set $A\{1,2,3,4\}$, called the Moore-Penrose inverse of $A$, which is denoted by $A^{\dagger}$. The Drazin inverse of a square matrix $A$ is the unique matrix $X$ which fulfills matrix equation (2) in conjunction with $AX=XA$ and $A^{k+1}X=A^{k}$, $k=\operatorname{ind}(A)$, and it is denoted by $A^{\mathrm{D}}$. Here, the notation $\operatorname{ind}(A)$ denotes the index of a square matrix $A$ and it is defined by $\operatorname{ind}(A)=\min\{k\geq 0:\operatorname{rank}(A^{k})=\operatorname{rank}(A^{k+1})\}$. In the case $\operatorname{ind}(A)=1$, the Drazin inverse becomes the group inverse $A^{\#}$. For other important properties of generalized inverses see [1, 2].
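As a small numerical illustration (a sketch in Python/NumPy; the matrix below is our own example, not one from the paper), the four Penrose equations can be checked directly for the Moore-Penrose inverse returned by `numpy.linalg.pinv`:

```python
import numpy as np

# A rank-deficient rectangular matrix (our own example).
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])        # rank(A) = 1

X = np.linalg.pinv(A)                  # Moore-Penrose inverse of A

# The four Penrose equations; in the real case * is the plain transpose.
assert np.allclose(A @ X @ A, A)       # (1) AXA = A
assert np.allclose(X @ A @ X, X)       # (2) XAX = X
assert np.allclose((A @ X).T, A @ X)   # (3) (AX)* = AX
assert np.allclose((X @ A).T, X @ A)   # (4) (XA)* = XA
```

Any matrix satisfying only a subset of these four equations is the corresponding $S$-inverse; the Moore-Penrose inverse is the unique matrix satisfying all four.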

An element satisfying (resp., ) is denoted by (resp., ). If satisfies both the conditions and it is denoted by . The set of all -inverses of with the prescribed range (resp., prescribed kernel ) is denoted by (resp., ). Definitions and notation used in the further text are from the books by Ben-Israel and Greville [1] and Wang et al. [2].

Full-rank representation of -inverses with the prescribed range and null space is determined in the next proposition, which originates from [3].

Proposition 1 (see [3]). *Let , let be a subspace of of dimension , and let be a subspace of of dimension . In addition, suppose that satisfies , . Let have an arbitrary full-rank decomposition; that is, . If has a -inverse , then:*
*(1) is an invertible matrix;*
*(2) .*
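The full-rank representation of Proposition 1 can be illustrated numerically. In its standard form (see [3]), if $W=FG$ is a full-rank decomposition with $\mathcal{R}(W)=T$, $\mathcal{N}(W)=S$, and $GAF$ is invertible, then $X=F(GAF)^{-1}G$ is the outer inverse $A^{(2)}_{T,S}$. The following sketch (our own example matrices) forms this $X$ and verifies the outer-inverse property:

```python
import numpy as np

A = np.array([[2.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])           # a 4x3 matrix of rank 2

# Full-rank factors: F spans the prescribed range T, N(G) is the
# prescribed null space S.
F = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])                # 3x2, full column rank
G = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])      # 2x4, full row rank

GAF = G @ A @ F
assert np.linalg.matrix_rank(GAF) == 2    # invertibility condition of Proposition 1

X = F @ np.linalg.inv(GAF) @ G            # X = F (G A F)^{-1} G

assert np.allclose(X @ A @ X, X)          # X is an outer ({2}-)inverse of A
# R(X) = R(F): stacking X's columns next to F does not raise the rank.
assert np.linalg.matrix_rank(np.hstack([X, F])) == np.linalg.matrix_rank(F)
```

If $GAF$ is singular, the representation is not applicable, which is exactly the drawback discussed below.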

The Moore-Penrose inverse , the Drazin inverse , and the group inverse are generalized inverses for appropriate choice of subspaces and . For example, the following is valid for a rectangular matrix [2]:

The full-rank representation has been applied in numerical calculations. For example, such a representation has been exploited to define the determinantal representation of the inverse in [3] or the determinantal representation of the set in [4]. Many iterative methods for computing outer inverses with the prescribed range and null space have been developed. An outline of these numerical methods can be found in [5–13].

A drawback of the representation given in Proposition 1 arises from the fact that it is based on the full-rank decomposition and gives the representation of . Besides, it requires invertibility of ; otherwise it is not applicable. Finally, representations of outer inverses with only the range or the null space given, as well as representations of inner inverses with the prescribed range and/or null space, are not covered. For this purpose, our further motivation is the well-known representations of generalized inverses and , given by the Urquhart formula. The Urquhart formula originated in [14] and was later extended in [2, Theorem 1.3.3] and [1, Theorem 13, p. 72]. We restate it for the sake of completeness.

Proposition 2 (Urquhart formula). *Let , , , and , where is a fixed but arbitrary element of . Then:*
*(1) if and only if ;*
*(2) and if and only if ;*
*(3) and if and only if ;*
*(4) if and only if ;*
*(5) if and only if .*

Further motivation is the notion of a -inverse of an element in a semigroup, introduced by Drazin in [15]. Following the result from [9], the representation of outer inverses given in Proposition 1 investigates -inverses. Our aim is to consider representations and computations of -inverses, where and could be different.

Finally, our intention is to define appropriate numerical algorithms for computing generalized inverses in both the time-varying and time-invariant cases. For this purpose, we observed that the neural dynamic approach has been exploited as a powerful tool in solving matrix algebra problems, due to its parallel distributed nature as well as its convenience of hardware implementation. Recently, many authors have shown great interest in computing the inverse or the pseudoinverse of square and full-rank rectangular matrices on the basis of gradient-based recurrent neural networks (GNNs) or Zhang neural networks (ZNNs). Neural network models for the inversion and pseudo-inversion of square and full-row or full-column rank rectangular matrices were developed in [16–18]. Various recurrent neural networks for computing generalized inverses of rank-deficient matrices were introduced in [19–23]. RNNs designed for calculating the pseudoinverse of rank-deficient matrices were created in [21]. Three recurrent neural networks for computing the weighted Moore-Penrose inverse were introduced in [22]. A feedforward neural network architecture for computing the Drazin inverse was proposed in [19]. The dynamic equation and induced gradient recurrent neural network for computing the Drazin inverse were defined in [24]. Two gradient-based RNNs for generating outer inverses with prescribed range and null space in the time-invariant case were introduced in [25]. Two additional dynamic state equations and corresponding gradient-based RNNs for generating the class of outer inverses of time-invariant real matrices were proposed in [26].

The global organization of the paper is as follows. Conditions for the existence and representations of generalized inverses included in (4) are given in Section 2. Numerical algorithms arising from the representations derived in Section 2 are defined in Section 3. In this way, Section 3 defines algorithms for computing various classes of inner and outer generalized inverses by means of derived solutions of certain matrix equations. The main particular cases, as well as the overall computational complexity of the introduced algorithms, are presented in the same section. Illustrative simulation and numerical examples are presented in Section 4.

#### 2. Existence and Representations of Generalized Inverses


Theorem 3 provides a theoretical basis for computing outer inverses with the prescribed range space.

Theorem 3. *Let and .*

*(a) The following statements are equivalent:*
*(i) There exists a -inverse of satisfying , denoted by .*
*(ii) There exists such that .*
*(iii) .*
*(iv) .*
*(v) , for some (equivalently every) .*

*(b) If the statements in (a) are true, then the set of all outer inverses with the prescribed range is represented by (5). Moreover, representation (6) holds, where is arbitrary but fixed.*

*Proof.* (a) . Let such that and . Then and , for some and , so .

. As we know, . On the other hand, taking into account for some , it follows that , and hence .

. Let be an arbitrary -inverse of . As implies , for some , it follows that . Let , for some , and set . Then , and by and it follows that is a -inverse of which satisfies .

. This result is well-known.

(b) From the proofs of and , and the fact that implies , it follows that , and hence (5) holds.

According to Theorem 1 [1, Section 2] (or [2, Theorem 1.2.5]), condition (v) ensures consistency of the matrix equation and gives its general solution , whence we obtain . This proves that (6) is true.
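For completeness, the classical solvability result invoked here (Theorem 1 of [1, Section 2]; [2, Theorem 1.2.5]) can be restated in its standard generic notation (the symbols $A$, $B$, $D$, $Y$ below are those of the cited theorem, not necessarily the matrices of the present proof):

```latex
% The linear matrix equation A X B = D is consistent if and only if
%   A A^{(1)} D B^{(1)} B = D
% for some (and then for every) A^{(1)} \in A\{1\}, B^{(1)} \in B\{1\}.
% In that case, the general solution is
\[
  X = A^{(1)} D B^{(1)} + Y - A^{(1)} A Y B B^{(1)},
\]
% where Y is an arbitrary matrix of appropriate dimensions.
```

Substituting the concrete matrices of each theorem into this template yields the parameterized representations used throughout this section.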

*Remark 4.* Five equivalent conditions for the existence and representations of the class of generalized inverses were given in [27, Theorem 1]. Theorem 3 gives two new and important conditions, (i) and (v). These conditions are related to the solvability of certain matrix equations. Further, the representations of generalized inverses were presented in [27, Theorem 2]. Theorem 3 gives two new and important representations: the second representation in (5) and representation (6).

Theorem 5 provides a theoretical basis for computing outer inverses with the prescribed kernel. To the best of our knowledge, these results are new in the literature.

Theorem 5. *Let and .*

*(a) The following statements are equivalent:*
*(i) There exists a -inverse of satisfying , denoted by .*
*(ii) There exists such that .*
*(iii) .*
*(iv) .*
*(v) , for some (equivalently every) .*

*(b) If the statements in (a) are true, then the set of all outer inverses with the prescribed null space is represented by . Moreover, , where is an arbitrary fixed matrix from .*

*Proof.* The proof is analogous to the proof of Theorem 3.

Theorem 6 is a theoretical basis for computing a -inverse with the prescribed range and null space.

Theorem 6. *Let , , and .*

*(a) The following statements are equivalent:*
*(i) There exists a -inverse of satisfying and .*
*(ii) There exist such that and .*
*(iii) There exist such that and .*
*(iv) There exist and such that , , and .*
*(v) There exist and such that and .*
*(vi) , .*
*(vii) .*
*(viii) and , for some (equivalently every) .*

*(b) If the statements in (a) are true, then the unique -inverse of with the prescribed range and null space is represented by (14), for arbitrary and arbitrary satisfying and .*

*Proof.* (a) . Let be such that , , and . Then there exists such that . Also, and satisfy and , for some , . This further implies . According to , for some , it follows that , and thus . Further, by , for some , it follows that , which yields .

. Let be an arbitrary -inverse of . Since implies , for some , it follows that . Similarly, implies , for some , and . Let , for some , and set . Then , and by , , and it follows that is a -inverse of which satisfies , .

. This statement follows from [2, Theorem 1.1.3, p. 3].

. This is evident.

. Let be arbitrary matrices such that and . Then , whence . Thus, (ii) holds.

. such that and . Then , which means that (iv) is true.

. Let and such that , , and . Then , which confirms (v).

. Let and such that and . Then , and hence (iv) holds.

. Let and such that , , and , and set . Then , by and it follows that , and by it follows that . Therefore, (i) is true.

(b) According to the proofs of (i)⇒(ii) and (iv)⇒(i) and the fact that and , for , imply , it follows that , and hence (14) holds.

*Remark 7.* After comparing Theorem 6 with the Urquhart formula given in Proposition 2, it is evident that conditions (vi) and (vii) of Theorem 6 could be derived from the Urquhart results. All other conditions are based on the solutions of certain matrix equations, and they are new.

In addition, comparing the representations of Theorem 6 with the full-rank representation restated from [3] in Proposition 1, it is notable that the representations given in Theorem 6 do not require computation of a full-rank factorization of the matrix . More precisely, the representations of from Theorem 6 reduce to the full-rank representation of from Proposition 1 in the case when is a full-rank factorization of and is invertible.

It is worth mentioning that Drazin in [15] generalized the concept of the outer inverse with the prescribed range and null space by introducing the notion of a -inverse in a semigroup. In the matrix case, this concept can be defined as follows. Let , , , and . Then, we call a -inverse of if the following two relations hold: . It is easy to see that is a -inverse of if and only if is a -inverse of satisfying and .

The next theorem can be used for computing a -inverse of satisfying .

Theorem 8. *Let and .*

*(a) The following statements are equivalent:*
*(i) There exists a -inverse of satisfying .*
*(ii) There exists such that .*
*(iii) .*
*(iv) , for some (equivalently every) .*
*(v) .*

*(b) If the statements in (a) are true, then the set of all inner inverses of whose range is contained in is represented by (30). Moreover, representation (31) holds, where and are arbitrary but fixed.*

*Proof.* (a) . Let such that and . Then , for some , so .

. Let , for some . Then . Since the opposite inclusion always holds, we conclude that .

. Let be an arbitrary -inverse of . By it follows that , for some , so we have that. Let , for some , and set . It is clear that , and by we obtain the fact that .

. This follows from [2, Theorem 1.1.3, p. 3].

(b) On the basis of the fact that implies and the arguments used in the proofs of and , we have that , which confirms that (30) is true.

Once again, according to Theorem 1 [1, Section 2] (or Theorem 1.2.5 [2]), we have that , where and are arbitrary elements, whence we obtain that , and hence (31) is true.

Theorem 9 can be used for computing a -inverse of satisfying . Its proof is dual to the proof of Theorem 8.

Theorem 9. *Let and .*

*(a) The following statements are equivalent:*
*(i) There exists a -inverse of satisfying .*
*(ii) There exists such that .*
*(iii) .*
*(iv) , for some (equivalently every) .*
*(v) .*

*(b) If the statements in (a) are true, then the set of all inner inverses of whose null space is contained in is represented by . Moreover, , where and are arbitrary but fixed.*

Theorem 10 provides several equivalent conditions for the existence and representations for computing a -inverse with the prescribed range.

Theorem 10. *Let and .*

*(a) The following statements are equivalent:*
*(i) There exists a -inverse of satisfying , denoted by .*
*(ii) There exist such that and .*
*(iii) There exists such that and .*
*(iv) and .*
*(v) .*
*(vi) and , for some (equivalently every) .*

*(b) If the statements in (a) are true, then the set of all -inverses with the prescribed range is represented by (38).*

*Proof.* (a) First we note that the implication and the equivalences and follow directly from Theorems 3 and 8. Also, follows from Theorem 1.1.3 [2] (or Example 10 [1, Section 1]).

. If we set , where is an arbitrary element, then (vi) implies that and .

. If such that and , then by Theorem 3 we obtain the fact that is a -inverse of satisfying , and clearly is also a -inverse of .

. This implication is evident.

(b) If the statements in (a) hold, then the statements of Theorems 3 and 8 also hold, and from these two theorems it follows directly that (38) is valid.

Theorem 11 provides several equivalent conditions for the existence and representations of .

Theorem 11. *Let and .*

*(a) The following statements are equivalent:*
*(i) There exists a -inverse of satisfying , denoted by .*
*(ii) There exist such that and .*
*(iii) There exists such that and .*
*(iv) and .*
*(v) .*
*(vi) and , for some (equivalently every) .*

*(b) If the statements in (a) are true, then the set of all -inverses with the range is given by*

Theorem 12 is a theoretical basis for computing a -inverse with the predefined range and null space.

Theorem 12. *Let , , and .*

*(a) The following statements are equivalent:*
*(i) There exists a -inverse of satisfying and , denoted by .*
*(ii) There exist and such that , , , and .*
*(iii) , , , and .*
*(iv) , .*
*(v) .*
*(vi) , , , and , for some (equivalently every) and .*

*(b) If the statements in (a) are true, then the unique -inverse of with the prescribed range and null space is represented by (40), for arbitrary , , and and arbitrary and satisfying and .*

*Proof.* (a) The equivalence of the statements and (vi) follows immediately from Theorem 10 and its dual. The equivalence follows immediately from part (4) of the famous Urquhart formula [2, Theorem 1.3.7].

(b) Let and be arbitrary matrices satisfying and , and set . Seeing that and , according to (v) we obtain the fact that and . This implies that , which means that is a -inverse of satisfying and , and hence the second equality in (40) is true.

The same arguments confirm the validity of the first equality in (40).

*Corollary 13. Theorem 6 is equivalent to Theorem 12 in the case .*

*Proof.* According to the assumptions, the output of Theorem 6 becomes . Then the proof follows from the uniqueness of this kind of generalized inverse.

*Remark 14.* It is evident that only condition (v) of Theorem 12 can be derived from the Urquhart results. All other conditions are based on the solutions of certain matrix equations, and they are introduced in Theorem 12. Also, the first two representations in (40) are introduced in the present research.

#### 3. Algorithms and Implementation Details


The representations presented in Section 2 provide two different frameworks for computing generalized inverses. The first approach arises from the direct computation of various generalizations or certain variants of the Urquhart formula, derived in Section 2. The second approach enables computation of generalized inverses by means of solving certain matrix equations.

The dynamical-system approach is one of the most important parallel tools for solving various basic linear algebra problems. Also, Zhang neural networks (ZNN) as well as gradient neural networks (GNN) have been simulated for finding a real-time solution of the linear time-varying matrix equation . Simulation results confirm the efficiency of the ZNN and GNN approaches in solving both time-varying and time-invariant linear matrix equations. We refer to [28, 29] for further details. In the case of constant coefficient matrices , it is necessary to use the linear GNN of the form . The generalized nonlinearly activated GNN model (GGNN model) is applicable in both the time-varying and the time-invariant case and possesses the form (43), where is an odd and monotonically increasing function applicable element-wise to the elements of a real matrix ; that is, , wherein is an odd and monotonically increasing function. Also, the scaling parameter could be chosen as large as possible in order to accelerate the convergence. The convergence can be proved only for the situation with constant coefficient matrices .
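As an illustration of the GNN approach in the constant-coefficient case, the following sketch (our own Python/NumPy reconstruction, assuming the standard linear gradient design dV/dt = -γ·Aᵀ(AVB - D)Bᵀ for the equation AVB = D, taken here with B = D = I) integrates the flow with forward-Euler steps; for a full-column-rank A the state converges to the Moore-Penrose inverse:

```python
import numpy as np

# Linear GNN flow V' = -gamma * A^T (A V - I): the gradient descent flow
# of ||A V - I||_F^2, whose equilibrium for full-column-rank A is A^+.
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [1.0, 1.0]])               # 3x2, full column rank
m, n = A.shape
D = np.eye(m)

gamma, dt, steps = 10.0, 1e-3, 5000      # gain, step size, iterations
V = np.zeros((n, m))                     # initial state V(0) = 0
for _ in range(steps):
    V += dt * (-gamma * A.T @ (A @ V - D))   # forward-Euler step

assert np.allclose(V, np.linalg.pinv(A), atol=1e-6)
```

A larger gain γ accelerates convergence, but with explicit Euler integration the step size must shrink accordingly to keep the iteration stable (roughly dt·γ·λmax(AᵀA) < 2).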

Besides the linear activation function, , in the present paper we use the power-sigmoid activation function
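A common concrete form of the power-sigmoid activation, widely used in the GNN/ZNN literature, is sketched below; the parameter values p = 3 and ξ = 4 are typical choices, and the exact constants used in the paper are not visible in this extract:

```python
import numpy as np

def power_sigmoid(x, p=3, xi=4.0):
    """Power-sigmoid activation: x**p for |x| >= 1, and a scaled odd
    sigmoid for |x| < 1.  With odd p the function is odd and
    monotonically increasing, as the GGNN model requires."""
    x = np.asarray(x, dtype=float)
    sig = (1 + np.exp(-xi)) / (1 - np.exp(-xi)) \
        * (1 - np.exp(-xi * x)) / (1 + np.exp(-xi * x))
    return np.where(np.abs(x) >= 1, x**p, sig)
```

The scaling factor in the sigmoid branch makes the two branches meet at |x| = 1, so the activation is continuous; applied element-wise to the residual matrix, it amplifies small errors and thus speeds up convergence compared with the linear activation.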

Theorem 3 provides not only criteria for the existence of an outer inverse with the prescribed range, but also a method for computing such an inverse. Namely, the problem of computing a -inverse of satisfying boils down to the problem of computing a solution to the matrix equation , where is an unknown matrix taking values in . If is an arbitrary solution to this equation, then a -inverse of satisfying can be computed as .
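To make this recipe concrete, here is a small sketch (our own example; since the matrix equation itself is blanked in this extract, we assume the standard construction in which the unknown V solves BVAB = B and the outer inverse is recovered as X = BV):

```python
import numpy as np

A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])          # 3x3, rank 2
B = np.array([[1.0],
              [0.0],
              [0.0]])                    # prescribed range R(B)
# Existence condition: rank(AB) = rank(B).
assert np.linalg.matrix_rank(A @ B) == np.linalg.matrix_rank(B)

# One particular solution V of B V A B = B (here V = (AB)^+ works,
# because rank(AB) = rank(B) forces N(AB) = N(B)).
V = np.linalg.pinv(A @ B)
assert np.allclose(B @ V @ A @ B, B)     # V solves the matrix equation

X = B @ V                                # candidate outer inverse
assert np.allclose(X @ A @ X, X)         # X is a {2}-inverse of A
# R(X) = R(B): appending B's columns to X does not raise the rank.
assert np.linalg.matrix_rank(np.hstack([X, B])) == np.linalg.matrix_rank(B)
```

In the paper itself this equation is solved dynamically by the GGNN model rather than by a direct pseudoinverse, which is what makes the method applicable in a neural-network setting.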

*The Simulink implementation of Algorithm 1 in the set of real matrices is based on GGNN model (43) for solving the matrix equation and it is presented in Figure 5. The Simulink Scope and Display Block denoted by display input signals corresponding to the solution of the matrix equation with respect to the time . The underlying GGNN model in Figure 5 is The Display Block denoted by displays inputs signals corresponding to the solution .*