Abstract

Conditions for the existence and representations of -, -, and -inverses which satisfy certain conditions on ranges and/or null spaces are introduced. These representations are applicable to complex matrices and involve solutions of certain matrix equations. Algorithms arising from the introduced representations are developed. In particular, these algorithms can be used to compute the Moore-Penrose inverse, the Drazin inverse, and the usual matrix inverse. The implementation of the introduced algorithms is defined on the set of real matrices and is based on the Simulink implementation of GNN models for solving the involved matrix equations. In this way, we develop computational procedures which generate various classes of inner and outer generalized inverses by solving certain matrix equations. As a consequence, some new relationships between the problem of solving matrix equations and the problem of numerically computing generalized inverses are established. The theoretical results are applicable to complex matrices, and the developed algorithms are applicable to both time-varying and time-invariant real matrices.

1. Introduction, Motivation, and Preliminaries

Let and (resp., and ) denote the set of complex (resp., real) matrices and all complex (resp., real) matrices of rank . As usual, the notation denotes the unit matrix of an appropriate order. Further, , , , and denote the conjugate transpose, the range, the rank, and the null space of .

The problem of computing pseudoinverses leads to the so-called Penrose equations:The set of all matrices obeying the conditions contained in is denoted by . Any matrix from is called the -inverse of and is denoted by . The set of all -inverses of of rank is denoted by . For any matrix there exists a unique element in the set , called the Moore-Penrose inverse of , which is denoted by . The Drazin inverse of a square matrix is the unique matrix which fulfills matrix equation (2) in conjunction with , and it is denoted by . Here, the notation denotes the index of a square matrix, defined by In the case , the Drazin inverse reduces to the group inverse . For other important properties of generalized inverses, see [1, 2].
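In standard notation, the four Penrose equations for a matrix and an unknown read (1) AXA = A, (2) XAX = X, (3) (AX)* = AX, and (4) (XA)* = XA. Although the paper's implementation is in MATLAB Simulink, a minimal NumPy sketch can check that the Moore-Penrose inverse satisfies all four conditions:

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])
X = np.linalg.pinv(A)   # Moore-Penrose inverse via the SVD

ok1 = np.allclose(A @ X @ A, A)             # (1) A X A = A
ok2 = np.allclose(X @ A @ X, X)             # (2) X A X = X
ok3 = np.allclose((A @ X).conj().T, A @ X)  # (3) A X is Hermitian
ok4 = np.allclose((X @ A).conj().T, X @ A)  # (4) X A is Hermitian
```

Dropping some of the four conditions yields exactly the larger classes of -inverses discussed in the text.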

An element satisfying (resp., ) is denoted by (resp., ). If satisfies both conditions and , it is denoted by . The set of all -inverses of with the prescribed range (resp., prescribed kernel ) is denoted by (resp., ). Definitions and notation used in the sequel are taken from the books by Ben-Israel and Greville [1] and Wang et al. [2].

The full-rank representation of -inverses with prescribed range and null space is given in the next proposition, which originates from [3].

Proposition 1 (see [3]). Let , let be a subspace of of dimension , and let be a subspace of of dimension . In addition, suppose that satisfies ,  . Let have an arbitrary full-rank decomposition; that is, . If has a -inverse , then(1) is an invertible matrix;(2).
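Under the standard reading of Proposition 1, if W = FG is a full-rank decomposition and GAF is invertible, then X = F(GAF)^{-1}G is an outer inverse of A with range R(F) and null space N(G). A NumPy sketch, in which the names A, F, and G are illustrative placeholders for the symbols above:

```python
import numpy as np

A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 0.0],
              [1.0, 1.0, 2.0]])
# Full-rank decomposition W = F G of a rank-2 matrix W (names illustrative).
F = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
G = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
GAF = G @ A @ F                      # must be invertible by Proposition 1
X = F @ np.linalg.inv(GAF) @ G       # the outer inverse F (G A F)^{-1} G

is_outer = np.allclose(X @ A @ X, X)
rank_X = np.linalg.matrix_rank(X)    # equals 2, the rank of the decomposition
```

The identity XAX = F(GAF)^{-1}(GAF)(GAF)^{-1}G = X holds whenever GAF is invertible, which mirrors item (1) of the proposition.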

The Moore-Penrose inverse , the Drazin inverse , and the group inverse are generalized inverses for appropriate choice of subspaces and . For example, the following is valid for a rectangular matrix [2]:

The full-rank representation has been applied in numerical calculations. For example, such a representation has been exploited to define the determinantal representation of the inverse in [3] or the determinantal representation of the set in [4]. Many iterative methods for computing outer inverses with prescribed range and null space have been developed. An outline of these numerical methods can be found in [5–13].

A drawback of the representation given in Proposition 1 arises from the fact that it is based on the full-rank decomposition and gives the representation of . Besides, it requires the invertibility of ; otherwise, it is not applicable. Finally, representations of outer inverses with only the range or only the null space prescribed, as well as representations of inner inverses with prescribed range and/or null space, are not covered. For this purpose, our further motivation comes from the well-known representations of the generalized inverses and given by the Urquhart formula. The Urquhart formula originated in [14] and was later extended in [2, Theorem  1.3.3] and [1, Theorem  13, P. 72]. We restate it for the sake of completeness.

Proposition 2 (Urquhart formula). Let , , , and , where is a fixed but arbitrary element of . Then(1) if and only if ;(2) and if and only if ;(3) and if and only if ;(4) if and only if ;(5) if and only if .
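A numerical sketch of the Urquhart-type construction: with the Moore-Penrose inverse chosen as the inner inverse, X = U(VAU)†V is always a {2}-inverse of A, while the rank conditions in Proposition 2 decide which further properties (inner inverse, prescribed range or kernel) hold. The names U and V stand in for the symbols lost above:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 5))
U = rng.standard_normal((5, 2))   # candidate range factor
V = rng.standard_normal((2, 4))   # candidate null-space factor
# Urquhart-type construction with the Moore-Penrose inverse as the chosen
# inner inverse of V A U.
X = U @ np.linalg.pinv(V @ A @ U) @ V

is_outer = np.allclose(X @ A @ X, X)   # holds for any choice of U and V
```

The outer-inverse identity follows from M†MM† = M† applied to M = VAU, independently of the rank conditions.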

Further, our motivation comes from the notion of a -inverse of an element in a semigroup, introduced by Drazin in [15]. Following the result from [9], the representation of outer inverses given in Proposition 1 covers -inverses. Our aim is to consider representations and computations of -inverses, where and could be different.

Finally, our intention is to define appropriate numerical algorithms for computing generalized inverses in both the time-varying and time-invariant cases. For this purpose, we observe that the neural dynamic approach has been exploited as a powerful tool for solving matrix algebra problems, due to its parallel distributed nature as well as its convenience for hardware implementation. Recently, many authors have shown great interest in computing the inverse or the pseudoinverse of square and full-rank rectangular matrices on the basis of gradient-based recurrent neural networks (GNNs) or Zhang neural networks (ZNNs). Neural network models for the inversion and pseudoinversion of square and full-row or full-column rank rectangular matrices were developed in [16–18]. Various recurrent neural networks for computing generalized inverses of rank-deficient matrices were introduced in [19–23]. RNNs designed for calculating the pseudoinverse of rank-deficient matrices were created in [21]. Three recurrent neural networks for computing the weighted Moore-Penrose inverse were introduced in [22]. A feedforward neural network architecture for computing the Drazin inverse was proposed in [19]. The dynamic equation and the induced gradient recurrent neural network for computing the Drazin inverse were defined in [24]. Two gradient-based RNNs for generating outer inverses with prescribed range and null space in the time-invariant case were introduced in [25]. Two additional dynamic state equations and corresponding gradient-based RNNs for generating the class of outer inverses of time-invariant real matrices were proposed in [26].

The global organization of the paper is as follows. Conditions for the existence and representations of the generalized inverses included in (4) are given in Section 2. Numerical algorithms arising from the representations derived in Section 2 are defined in Section 3. In this way, Section 3 defines algorithms for computing various classes of inner and outer generalized inverses by means of derived solutions of certain matrix equations. The main particular cases are presented in the same section, as well as the global computational complexity of the introduced algorithms. Illustrative simulation and numerical examples are presented in Section 4.

2. Existence and Representations of Generalized Inverses

Theorem 3 provides a theoretical basis for computing outer inverses with the prescribed range space.

Theorem 3. Let and .(a)The following statements are equivalent:(i)There exists a -inverse of satisfying , denoted by .(ii)There exists such that .(iii).(iv).(v), for some (equivalently every) .(b)If the statements in (a) are true, then the set of all outer inverses with the prescribed range is represented byMoreover,where is arbitrary but fixed.

Proof. (a) . Let such that and . Then and , for some and , so .
. As we know, . On the other hand, taking into account for some , it follows that , and hence .
. Let be an arbitrary -inverse of . As implies , for some , it follows that. Let , for some , and set . Thenand by and it follows that is a -inverse of which satisfies .
. This result is well-known.
(b) From the proofs of and , and the fact that implies , it follows thatand hence (5) holds.
According to Theorem  1 [1, Section  2] (or [2, Theorem  1.2.5]), condition (v) ensures the consistency of the matrix equation and gives its general solution whence we obtain This proves that (6) is true.
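The classical result invoked here reads: AXB = D is solvable if and only if AA^{(1)}DB^{(1)}B = D, in which case the general solution is X = A^{(1)}DB^{(1)} + Y − A^{(1)}AYBB^{(1)} for an arbitrary matrix Y. A NumPy sketch with the Moore-Penrose inverse as the chosen {1}-inverse of each coefficient:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((2, 5))
X0 = rng.standard_normal((3, 2))
D = A @ X0 @ B                          # makes A X B = D consistent
Ap, Bp = np.linalg.pinv(A), np.linalg.pinv(B)

consistent = np.allclose(A @ Ap @ D @ Bp @ B, D)   # solvability criterion
Y = rng.standard_normal((3, 2))                    # arbitrary matrix
X = Ap @ D @ Bp + Y - Ap @ A @ Y @ B @ Bp          # general solution formula
solves = np.allclose(A @ X @ B, D)
```

Substituting the formula into AXB and using AA†A = A and BB†B = B shows that the Y-dependent terms cancel, so every choice of Y yields a solution.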

Remark 4. Five equivalent conditions for the existence and representations of the class of generalized inverses were given in [27, Theorem  1]. Theorem 3 gives two new and important conditions, (i) and (v). These conditions are related to the solvability of certain matrix equations. Further, representations of generalized inverses were presented in [27, Theorem  2]. Theorem 3 gives two new and important representations: the second representation in (5) and representation (6).

Theorem 5 provides a theoretical basis for computing outer inverses with the prescribed kernel. To the best of our knowledge, these results are new in the literature.

Theorem 5. Let and .(a)The following statements are equivalent:(i)There exists a -inverse of satisfying , denoted by .(ii)There exists such that .(iii).(iv).(v), for some (equivalently every) .(b)If the statements in (a) are true, then the set of all outer inverses with the prescribed null space is represented byMoreover,where is an arbitrary fixed matrix from .

Proof. The proof is analogous to the proof of Theorem 3.

Theorem 6 is a theoretical basis for computing a -inverse with the prescribed range and null space.

Theorem 6. Let , , and .(a)The following statements are equivalent:(i)There exists a -inverse of satisfying and .(ii)There exist such that and .(iii)There exist such that and .(iv)There exist and such that , , and .(v)There exist and such that and .(vi), .(vii).(viii) and , for some (equivalently every) .(b)If the statements in (a) are true, then the unique -inverse of with the prescribed range and null space is represented byfor arbitrary and arbitrary satisfying and .

Proof. (a) . Let be such that , , and . Then there exists such that . Also, and satisfy and , for some , . This further implies. According to , for some , it follows thatand thus . Further, by , for some , it follows thatwhich yields .
. Let be an arbitrary -inverse of . Since implies , for some , it follows thatSimilarly, implies , for some and. Let , for some , and set . Thenand by , , and it follows that is a -inverse of which satisfies , .
. This statement follows from [2, Theorem  1.1.3, P. 3].
. This is evident.
. Let be arbitrary matrices such that and . ThenwhenceThus, (ii) holds.
. Let be such that and . Then which means that (iv) is true.
. Let and such that , , and . Thenwhich confirms (v).
. Let and such that and . Thenand hence (iv) holds.
. Let and such that , , and , and set . Thenby and it follows that , and by it follows that . Therefore, (i) is true.
(b) According to the proofs of (i)⇒(ii) and (iv)⇒(i) and the fact that and , for , imply , it follows that and hence (14) holds.

Remark 7. After a comparison of Theorem 6 with the Urquhart formula given in Proposition 2, it is evident that conditions (vi) and (vii) of Theorem 6 could be derived using the Urquhart results. All other conditions are based on the solutions of certain matrix equations, and they are new.
In addition, comparing the representations of Theorem 6 with the full-rank representation restated from [3] in Proposition 1, it is notable that the representations given in Theorem 6 do not require the computation of a full-rank factorization of the matrix . More precisely, the representations of from Theorem 6 reduce to the full-rank factorization of from Proposition 1 in the case when is a full-rank factorization of and is invertible.
It is worth mentioning that Drazin in [15] generalized the concept of the outer inverse with the prescribed range and null space by introducing the concept of a -inverse in a semigroup. In the matrix case, this concept can be defined as follows. Let , , , and . Then, we call a -inverse of if the following two relations hold:It is easy to see that is a -inverse of if and only if is a -inverse of satisfying and .

The next theorem can be used for computing a -inverse of satisfying .

Theorem 8. Let and .(a)The following statements are equivalent:(i)There exists a -inverse of satisfying .(ii)There exists such that .(iii).(iv), for some (equivalently every) .(v).(b)If the statements in (a) are true, then the set of all inner inverses of whose range is contained in is represented byMoreover,where and are arbitrary but fixed.

Proof. (a) . Let such that and . Then , for some , so .
. Let , for some . Then . Since the opposite inclusion always holds, we conclude that .
. Let be an arbitrary -inverse of . By it follows that , for some , so we have that. Let , for some , and set . It is clear that , and by we obtain .
. This follows from [2, Theorem  1.1.3, P. 3].
(b) On the basis of the fact that implies and the arguments used in the proofs of and , we have that which confirms that (30) is true.
Once again, according to Theorem  1 [1, Section  2] (or Theorem  1.2.5 [2]) we have thatwhere and are arbitrary elements, whence we obtain that and hence (31) is true.

Theorem 9 can be used for computing a -inverse of satisfying . Its proof is dual to the proof of Theorem 8.

Theorem 9. Let and .(a)The following statements are equivalent:(i)There exists a -inverse of satisfying .(ii)There exists such that .(iii).(iv), for some (equivalently every) .(v).(b)If the statements in (a) are true, then the set of all inner inverses of whose null space is contained in is represented byMoreover,where and are arbitrary but fixed.

Theorem 10 provides several equivalent conditions for the existence and representations for computing a -inverse with the prescribed range.

Theorem 10. Let and .(a)The following statements are equivalent:(i)There exists a -inverse of satisfying , denoted by .(ii)There exist such that and .(iii)There exists such that and .(iv) and .(v).(vi) and , for some (equivalently every) .(b)If the statements in (a) are true, then the set of all -inverses with the prescribed range is represented by

Proof. (a) First we note that the implication and the equivalences and follow directly from Theorems 3 and 8. Also, follows from Theorem  1.1.3 [2] (or Example  10 [1, Section  1]).
. If we set , where is an arbitrary element, then (vi) implies that and .
. If such that and , then by Theorem 3 we obtain the fact that is a -inverse of satisfying , and clearly is also a -inverse of .
. This implication is evident.
(b) If the statements in (a) hold, then the statements of Theorems 3 and 8 also hold, and from these two theorems it follows directly that (38) is valid.

Theorem 11 provides several equivalent conditions for the existence and representations of .

Theorem 11. Let and .(a)The following statements are equivalent:(i)There exists a -inverse of satisfying , denoted by .(ii)There exist such that and .(iii)There exists such that and .(iv) and .(v).(vi) and , for some (equivalently every) .(b)If the statements in (a) are true, then the set of all -inverses with the range is given by

Theorem 12 is a theoretical basis for computing a -inverse with the predefined range and null space.

Theorem 12. Let , , and .(a)The following statements are equivalent:(i)There exists a -inverse of satisfying and , denoted by .(ii)There exist and such that , , , and .(iii), , , and .(iv), .(v).(vi), , , and , for some (equivalently every) and .(b)If the statements in (a) are true, then the unique -inverse of with the prescribed range and null space is represented byfor arbitrary , , and and arbitrary and satisfying and .

Proof. (a) The equivalence of the statements and (vi) follows immediately from Theorem 10 and its dual. The equivalence follows immediately from part (4) of the famous Urquhart formula [2, Theorem  1.3.7].
(b) Let and be arbitrary matrices satisfying and , and set . Since and , according to (v) we obtain and . This implies that which means that is a -inverse of satisfying and , and hence the second equality in (40) is true.
The same arguments confirm the validity of the first equality in (40).

Corollary 13. Theorem 6 is equivalent to Theorem 12 in the case .

Proof. According to the assumptions, the output of Theorem 6 becomes . The proof then follows from the uniqueness of this kind of generalized inverse.

Remark 14. It is evident that only conditions (v) of Theorem 12 can be derived from the Urquhart results. All other conditions are based on the solutions of certain matrix equations and they are introduced in Theorem 12. Also, the first two representations in (40) are introduced in the present research.

3. Algorithms and Implementation Details

The representations presented in Section 2 provide two different frameworks for computing generalized inverses. The first approach arises from the direct computation of various generalizations or certain variants of the Urquhart formula, derived in Section 2. The second approach enables computation of generalized inverses by means of solving certain matrix equations.

The dynamical-system approach is one of the most important parallel tools for solving various basic linear algebra problems. Also, Zhang neural networks (ZNN) as well as gradient neural networks (GNN) have been simulated for finding a real-time solution of the linear time-varying matrix equation . Simulation results confirm the efficiency of the ZNN and GNN approach in solving both time-varying and time-invariant linear matrix equations. We refer to [28, 29] for further details. In the case of constant coefficient matrices , one can use the linear GNN of the formThe generalized nonlinearly activated GNN model (GGNN model) is applicable in both the time-varying and time-invariant cases and possesses the formwhere is an odd and monotonically increasing function applicable elementwise to the elements of a real matrix ; that is, , wherein is an odd and monotonically increasing function. Also, the scaling parameter can be chosen as large as possible in order to accelerate the convergence. Convergence can be proved only in the situation with constant coefficient matrices .
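For a time-invariant matrix equation of the form AVB = D, the linear GNN flow is the gradient flow of the residual norm, dV/dt = −γ Aᵀ(AVB − D)Bᵀ (this form is assumed from the standard gradient derivation in the cited works). A minimal explicit-Euler sketch, where the step size and gain are illustrative tuning choices rather than the paper's Simulink settings:

```python
import numpy as np

# Linear GNN flow for the time-invariant matrix equation A V B = D:
#   dV/dt = -gamma * A^T (A V B - D) B^T,
# i.e., the gradient flow of (1/2) * ||A V B - D||_F^2.
A = np.array([[1.0, 0.2, 0.0],
              [0.0, 1.0, 0.3],
              [0.1, 0.0, 1.0]])
B = np.array([[1.0, 0.1],
              [0.0, 1.0]])
V_true = np.array([[1.0, -1.0],
                   [0.5,  2.0],
                   [0.0,  1.0]])
D = A @ V_true @ B                     # consistent right-hand side

gamma, dt, steps = 1.0, 1e-2, 5000     # gain and Euler step: illustrative tuning
V = np.zeros_like(V_true)              # zero initial state, as in the experiments
for _ in range(steps):
    E = A @ V @ B - D                  # residual of the matrix equation
    V -= dt * gamma * (A.T @ E @ B.T)  # explicit Euler step of the GNN flow

residual = np.linalg.norm(A @ V @ B - D)
```

Increasing gamma accelerates convergence of the continuous flow, but the discrete Euler step must shrink accordingly to remain stable; the Simulink implementation delegates this trade-off to a stiff ODE solver such as ode15s.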

Besides the linear activation function, , in the present paper we use the power-sigmoid activation function
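One common form of the power-sigmoid activation in the GNN/ZNN literature uses a power branch outside the unit interval and a scaled sigmoid inside it; the parameters p (odd, p ≥ 3) and ξ (≥ 2) below are assumptions, not necessarily the paper's exact settings:

```python
import numpy as np

def power_sigmoid(x, p=3, xi=4.0):
    """Power-sigmoid activation, applied elementwise; the parameters p (odd,
    p >= 3) and xi (>= 2) are assumptions, not the paper's exact settings."""
    x = np.asarray(x, dtype=float)
    sig = ((1 + np.exp(-xi)) / (1 - np.exp(-xi))
           * (1 - np.exp(-xi * x)) / (1 + np.exp(-xi * x)))
    # Power branch outside [-1, 1]; the normalization above makes the two
    # branches agree at x = +/- 1.
    return np.where(np.abs(x) >= 1, x ** p, sig)

xs = np.linspace(-2.0, 2.0, 401)
ys = power_sigmoid(xs)
is_odd = np.allclose(power_sigmoid(-xs), -ys)
is_increasing = bool(np.all(np.diff(ys) > 0))
```

The checks confirm the two properties required of the activation in the GGNN model: it is odd and monotonically increasing.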

Theorem 3 provides not only criteria for the existence of an outer inverse with the prescribed range, but also a method for computing such an inverse. Namely, the problem of computing a -inverse of satisfying boils down to the problem of computing a solution to the matrix equation , where is an unknown matrix taking values in . If is an arbitrary solution to this equation, then a -inverse of satisfying can be computed as .

The Simulink implementation of Algorithm 1 in the set of real matrices is based on GGNN model (43) for solving the matrix equation and is presented in Figure 5. The Simulink Scope and the Display Block denoted by display input signals corresponding to the solution of the matrix equation with respect to the time . The underlying GGNN model in Figure 5 is The Display Block denoted by displays input signals corresponding to the solution .

Require: Time-varying matrices and .
 (1) Verify .
  If these conditions are satisfied then continue.
  (2) Solve the matrix equation with respect to .
  (3) Return .
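The pipeline of Algorithm 1 can be sketched with a direct solver in place of the GGNN step, using the classical fact that when rank(AB) = rank(B) the matrix X = B(AB)† is a {2}-inverse of A with range R(B); the names A and B are illustrative stand-ins for the symbols above:

```python
import numpy as np

def outer_inverse_with_range(A, B):
    """Sketch of the Algorithm 1 pipeline with a direct solver in place of the
    GGNN step: verify the existence condition, then return an outer inverse of
    A whose range equals R(B)."""
    if np.linalg.matrix_rank(A @ B) != np.linalg.matrix_rank(B):
        raise ValueError("rank(A B) != rank(B): no {2}-inverse with range R(B)")
    return B @ np.linalg.pinv(A @ B)        # X = B (A B)^dagger

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
B = np.array([[1.0],
              [1.0],
              [0.0]])
X = outer_inverse_with_range(A, B)

is_outer = np.allclose(X @ A @ X, X)    # {2}-inverse property
range_ok = np.allclose(X @ (A @ B), B)  # X A B = B, so R(B) is contained in R(X)
```

Since R(X) is trivially contained in R(B), the identity XAB = B pins down R(X) = R(B) exactly.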

The block subsystem implements the power-sigmoid activation function and it is presented in Figure 1.

Theorem 5 reduces the problem of computing a -inverse of satisfying to the problem of computing a solution to the matrix equation , where is an unknown matrix taking values in . Then .

The Simulink implementation of Algorithm 2 which is based on the GGNN model for solving and computing is presented in Figure 6. The underlying GGNN model in Figure 6 is

Require: Time-varying matrices and .
 (1) Verify .
  If these conditions are satisfied then continue.
 (2) Solve the matrix equation with respect to an unknown matrix .
 (3) Return .
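Dually, the Algorithm 2 pipeline can be sketched with a direct solver: when rank(CA) = rank(C), the matrix X = (CA)†C is a {2}-inverse of A with null space N(C) (the name C for the kernel-defining matrix is illustrative):

```python
import numpy as np

def outer_inverse_with_kernel(A, C):
    """Sketch of the Algorithm 2 pipeline with a direct solver in place of the
    GGNN step: verify the existence condition, then return an outer inverse of
    A whose null space equals N(C)."""
    if np.linalg.matrix_rank(C @ A) != np.linalg.matrix_rank(C):
        raise ValueError("rank(C A) != rank(C): no {2}-inverse with kernel N(C)")
    return np.linalg.pinv(C @ A) @ C        # X = (C A)^dagger C

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 0.0]])
C = np.array([[1.0, 0.0, 1.0]])
X = outer_inverse_with_kernel(A, C)

is_outer = np.allclose(X @ A @ X, X)     # {2}-inverse property
kernel_ok = np.allclose((C @ A) @ X, C)  # C A X = C forces N(X) = N(C)
```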

The Display Block denoted by displays input signals corresponding to the solution of the matrix equation with respect to simulation time. The Display Block denoted by displays input signals corresponding to the solution .

Theorem 6 provides a powerful representation of a -inverse of satisfying and . Also, it suggests the following procedure for computing those generalized inverses. First, it is necessary to verify whether . If this is true, then by Theorem 6 it follows that the equations and are solvable and have the same sets of solutions. We compute an arbitrary solution of the equation , and then is the desired -inverse of .

The Simulink implementation of the GGNN model for solving and computing the outer inverse defined in Algorithm 3 is presented in Figure 2. The underlying GGNN model in Figure 2 isThe implementation of the dual approach, based on the solution of and generating the outer inverse , is presented in Figure 4. The underlying GGNN model in Figure 4 is

Require: Time-varying matrices , and .
 (1) Verify .
  If these conditions are satisfied then continue.
 (2) Solve the matrix equation with respect to an unknown matrix .
 (3) Return .

Theorem 8 can be used in a similar way to Theorem 3: if the equation is solvable and its solution is computed, then a -inverse of satisfying is computed as . The corresponding computational procedure is given in Algorithm 4.

Require: Time-varying matrices and .
 (1) Check the condition .
  If this condition is satisfied then continue.
 (2) Solve the matrix equation with respect to .
 (3) Return a -inverse of satisfying .

Similarly, Theorem 9 can be used for computing a -inverse of satisfying , as it is presented in Algorithm 5.

Require: Time-varying matrices and .
 (1) Check the condition .
  If this condition is satisfied then continue.
 (2) Solve the matrix equation with respect to an unknown matrix .
 (3) Return a -inverse of satisfying .

An algorithm for computing a -inverse with the prescribed range is based on Theorem 10. According to this theorem we first check the condition . If it is satisfied, then the equation is solvable and we compute an arbitrary solution to this equation, after which we compute a -inverse of satisfying as . By Theorem 10, is also a -inverse of . Algorithm 1 differs from Algorithm 6 only in the first step. Therefore, the implementation of Algorithm 6 uses the Simulink implementation of Algorithm 1 in the case when .

Require: Time-varying matrices and .
 (1) Check the condition .
  If this condition is satisfied then continue.
 (2) Solve the matrix equation with respect to an unknown matrix .
 (3) Return a -inverse of satisfying .

Similarly, Theorem 11 provides an algorithm for computing . The implementation of Algorithm 7 uses the Simulink implementation of Algorithm 2 in the case .

Require: Time-varying matrices and .
 (1) Check the condition .
  If this condition is satisfied then continue.
 (2) Solve the matrix equation with respect to an unknown matrix .
 (3) Return a -inverse of satisfying .

Theorem 12 suggests the following procedure for computing a -inverse of satisfying and . First we check the condition . If this is true, then the equations and are solvable, and we compute an arbitrary solution to the first one and an arbitrary solution of the second one. According to Theorem 12, is a -inverse of with and .

The Simulink implementation of Algorithm 8 based on the GGNN models for solving and and computing is presented in Figure 8. In this case, it is necessary to implement two parallel GGNN models of the form

Require: Time-varying matrices , and .
Require: Verify .
  If these conditions are satisfied then continue.
 (1) Solve the matrix equation with respect to an unknown matrix .
 (2) Solve the matrix equation with respect to an unknown matrix .
 (3) Return .

There is also an alternative way to compute a -inverse of with and . Namely, first we check whether . If this is true, then by Theorem 12 it follows that there exists a -inverse of with the prescribed range and null space , and each such inverse is also a -inverse of . Therefore, to compute a -inverse of having the range and null space we have to compute a -inverse of with and in exactly the same way as in Algorithm 3. In other words, we compute an arbitrary solution to the equation , and then is the desired -inverse of .
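The alternative route can be sketched with the Urquhart-type formula X = B(CAB)†C, which under the rank conditions rank(CAB) = rank(B) = rank(C) = rank(A) yields the {1,2}-inverse with range R(B) and null space N(C); the names are illustrative and a direct solve replaces the GGNN:

```python
import numpy as np

# A has rank 2; B and C prescribe the range and null space (names illustrative).
A = np.array([[2.0, 0.0, 1.0],
              [0.0, 3.0, 0.0],
              [0.0, 0.0, 0.0]])
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
C = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
# Here rank(C A B) = rank(B) = rank(C) = rank(A) = 2, so the construction
# yields the {1,2}-inverse with range R(B) and null space N(C).
X = B @ np.linalg.pinv(C @ A @ B) @ C

is_inner = np.allclose(A @ X @ A, A)   # {1}-inverse property
is_outer = np.allclose(X @ A @ X, X)   # {2}-inverse property
```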

3.1. Complexity of Algorithms

The general computational pattern for computing generalized inverses is based on the general representation , where the matrices satisfy various conditions imposed in the proposed algorithms.

The first approach is based on the computation of an involved inner inverse , and it can be described in three main steps:(1)Compute the matrix product .(2)Compute an inner inverse of , for example, .(3)Compute the generalized inverse as the matrix product .

The second general computational pattern for computing generalized inverses can be described in three main steps:(1)Compute matrix products included in the required linear matrix equation.(2)Solve the generated matrix equation with respect to the unknown matrix .(3)Compute the generalized inverse of as the matrix product which includes .

According to the first approach, the complexity of computing generalized inverses can be estimated as follows:(1)Complexity of the matrix product +(2)Complexity to compute an inner inverse of +(3)Complexity to compute the matrix product

According to the second approach, the complexity of computing generalized inverses can be expressed according to the rule:(1)Complexity of the matrix products included in the required matrix equation which should be solved+(2)Complexity of solving the linear matrix equation generated in (1)+(3)Complexity of the matrix products required in the final representation

Let us compare the complexities of the two representations from (14). Two possible approaches are available. The first approach assumes the computation and the second one assumes , where . The complexity of computing is(1)complexity of the matrix product ,+(2)complexity of the computation of ,+(3)complexity of the matrix products required in the final representation .

The complexity of computing the second expression in (14) is(1)complexity of the matrix products ,+(2)complexity of solving the appropriate linear matrix equation with respect to ,+(3)complexity of the matrix product .
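A rough operation-count sketch of the first pattern, assuming classical dense multiplication (2mnp flops for an m×n by n×p product) and an SVD-based inner inverse; the sizes and the constant in the pseudoinverse estimate are illustrative assumptions:

```python
def matmul_flops(m, n, p):
    """Multiply-add count of a dense (m x n) @ (n x p) product."""
    return 2 * m * n * p

def pinv_flops(m, n):
    """Rough SVD-based pseudoinverse cost; the constant 14 is an assumption."""
    return 14 * m * n * min(m, n)

# Hypothetical sizes: A is m x n, the range-defining factor is n x k with k << n.
m, n, k = 200, 150, 10
step1 = matmul_flops(m, n, k)   # form the m x k product
step2 = pinv_flops(m, k)        # inner inverse of the small m x k matrix
step3 = matmul_flops(n, k, m)   # final product yielding the n x m inverse
total = step1 + step2 + step3
```

When k is small, the two matrix products dominate; the iterative second pattern instead trades the inner-inverse cost for a per-step residual evaluation of the GNN flow.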

3.2. Particular Cases

The main particular cases of Theorem 6 can be derived directly and listed as follows.(a)In the case the outer inverse becomes .(b)If is nonsingular and , then the outer inverse becomes the usual inverse .

Then the matrix equation becomes and .(c)In the case or when is a full-rank factorization of , it follows that .(d)The choice , , , or the full-rank factorization implies .(e)The choice , , or the full-rank factorization produces .(f)In the case when is invertible, the inverse matrix can be generated by two choices: and .(g)Theorem 6 and the full-rank representation of - and -inverses from [30] are a theoretical basis for computing - and -inverses with the prescribed range and null space.(h)Further, Theorems 3 and 5 provide a way to characterize - and -inverses of a matrix.
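The Drazin-inverse case among the particular choices above can be realized numerically through the classical identity A^D = A^k (A^{2k+1})† A^k, valid for any k ≥ ind(A); taking k = n is always sufficient. A NumPy sketch:

```python
import numpy as np

def drazin(A):
    """Drazin inverse via the classical identity
    A^D = A^k (A^(2k+1))^dagger A^k, valid for any k >= ind(A);
    k = n is always sufficient."""
    n = A.shape[0]
    Ak = np.linalg.matrix_power(A, n)
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * n + 1)) @ Ak

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])   # ind(A) = 2: nontrivial nilpotent part
AD = drazin(A)

commute = np.allclose(A @ AD, AD @ A)
outer = np.allclose(AD @ A @ AD, AD)
stabilize = np.allclose(np.linalg.matrix_power(A, 3) @ AD,
                        np.linalg.matrix_power(A, 2))  # A^(k+1) A^D = A^k
```

The three assertions check exactly the defining equations of the Drazin inverse recalled in the Introduction.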

Corollary 15. Let and .(a)The following statements are equivalent:(i)There exists a -inverse of satisfying and .(ii)There exist such that and .(iii)There exist such that and .(iv)There exist and such that , , and .(v)There exist and such that and .(vi), .(vii).(viii) and , for some (equivalently every) .(b)If the statements in (a) are true, then the unique -inverse of with the prescribed range and null space is represented byfor arbitrary and arbitrary satisfying and .

Proof. (a) This part of the proof is a particular case of Theorem 6.
(b) According to the general representation of outer inverses with prescribed range and null space, it follows that Now, it suffices to verify that satisfies Penrose equation (4). For this purpose, it is useful to use the known result , which implies and then . Hence, (50) holds.

Corollary 16. Let and .(a)The following statements are equivalent:(i)There exists a -inverse of satisfying and .(ii)There exist such that and .(iii)There exist such that and .(iv)There exist and such that , , and .(v)There exist and such that and .(vi), .(vii).(viii) and , for some (equivalently every) .(b)If the statements in (a) are true, then the unique -inverse of with the prescribed range and null space is represented byfor arbitrary and arbitrary satisfying and .

Corollary 17 shows the equivalence between the first representation given in (53) of Corollary 16 and Corollary  1 from [31].

Corollary 17. Let and satisfy . Then

Proof. It suffices to verify Indeed, since , it follows that Now, the proof can be completed using the evident fact that is a Hermitian matrix.

In the dual case, Corollary 18 is an additional result to Corollary  1 from [31].

Corollary 18. Let and satisfy . Then

Proof. In this case, the identitycan be verified similarly.

Theorem 19. Let . Then

Proof. The equalitiesfollow immediately from Theorem 3.
Let , that is, , , and , and set . Then Conversely, let and , for some . According to (5) we have that . On the other hand, by and it follows that , which is well known to be equivalent to . Thus, .

The following theorem can be verified in a similar way.

Theorem 20. Let . Then

4. Numerical Examples

All numerical experiments are performed starting from the zero initial condition. The MATLAB and Simulink version is 8.4 (R2014b).

Example 21. Consider (a) This part of the example illustrates the results of Theorem 6 and is based on the implementation of Algorithm 3. The matrices satisfy , , and . Since the conditions in (vii) of Theorem 6 are not satisfied, there is no exact solution of the system of matrix equations and . The outer inverse can be computed using the RNN approach, as follows. The Simulink implementation of Algorithm 3, which is based on the GGNN model for solving the matrix equation , gives the result presented in Figure 2. The display denoted by shows an approximate solution of the matrix equation . The time interval is , the solver is ode15s, the power-sigmoid activation is selected, and .

Step 1. Solve the matrix equation with respect to using an appropriate adaptation of the GGNN approach developed in [28, 29] and restated in (43). In this particular case, the model becomesThe matrix is of full column rank, and it possesses the left inverse . Therefore, the matrix equation is equivalent to the equation . Then the GGNN model (64) reduces to the well-known GNN model for computing the pseudoinverse of . The GNN models for computing the pseudoinverse of rank-deficient matrices were introduced and described in [21]. We further confirm the results derived in MATLAB Simulink by means of the programming package Mathematica. Mathematica gives which coincides with the result displayed in in Figure 2.
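The left inverse of a full-column-rank matrix used in this reduction is (BᵀB)^{-1}Bᵀ, which coincides with the Moore-Penrose inverse; a quick NumPy check, with B an illustrative stand-in for the matrix in the example:

```python
import numpy as np

# For a full-column-rank matrix B, the left inverse is (B^T B)^{-1} B^T,
# which coincides with the Moore-Penrose inverse of B.
B = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [0.0, 3.0]])
left = np.linalg.inv(B.T @ B) @ B.T

is_left_inverse = np.allclose(left @ B, np.eye(2))
matches_pinv = np.allclose(left, np.linalg.pinv(B))
```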

Step 2. The matrix is shown in Figure 2, in the display denoted by . The residual norm of is equal to .
As a confirmation, Mathematica gives which coincides with the contents of the Display Block denoted as ATS2 in Figure 2. Further, the matrix is an approximate solution of the matrix equations and . Also, is an approximate solution of (28), since Therefore, the equations in (28) are satisfied. In addition, (29) is satisfied by the definition of . Therefore, is an approximate -inverse of .
Trajectories of the entries in the matrix generated inside the time , using and ode15s solver, are presented in Figure 3.
(b) The dual approach in Theorem 6, as well as in the implementation of Algorithm 3, is based on the solution of and the associated outer inverse The Simulink implementation of the GGNN model which is based on the matrix equation and the matrix product gives the result presented in Figure 4. The display denoted by represents an approximate solution of the matrix equation . The time interval is , the solver is ode15s, the linear activation is selected, and .
Since the matrix is right invertible, the matrix equation gives the dual form of the matrix equation for computing ; that is, .
Therefore, both and are approximations of the same outer inverse of , equal to . Indeed, it can be verified that and satisfy
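The dual flow for a right-invertible (full-row-rank) matrix can be sketched analogously. Again this is an illustrative Euler discretization with hypothetical data, not the paper's Simulink model:

```python
import numpy as np

def gnn_right_pinv(A, gamma=100.0, dt=1e-4, steps=20000):
    """Forward-Euler integration of the dual GNN dynamics
    dV/dt = -gamma * (V A - I) A^T; for a full-row-rank A the
    unique equilibrium is A^T (A A^T)^{-1}, i.e. the pseudoinverse A^+."""
    m, n = A.shape
    V = np.zeros((n, m))
    I = np.eye(n)
    for _ in range(steps):
        V += dt * (-gamma) * ((V @ A - I) @ A.T)
    return V

# Hypothetical full-row-rank matrix.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 2.0, 1.0]])
V = gnn_right_pinv(A)
```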
(c) The goal of this part of the example is to illustrate Theorem 3 and Algorithm 1. The matrices and satisfy , so that it is justifiable to search for a solution of the matrix equation and the associated outer inverse . In order to highlight the results derived by the implementation of Algorithm 1, it is important to mention that . On the other hand, the Simulink implementation gives another element from , different from . The matrix is presented in Figure 5. The display denoted by represents an approximate solution of the matrix equation . The time interval is and the solver is ode15s.
(d) The goal of this part of the example is to illustrate Theorem 5 and Algorithm 2. Since , it is justifiable to search for a solution of the matrix equation . The Simulink implementation of the GGNN model which is based on the matrix equation gives the result presented in Figure 6. The display denoted by represents an approximation of . The display denoted by represents the matrix product . The time interval is and the solver is ode15s. The power-sigmoid activation function is used. The corresponding outer inverse of is .
It is important to mention that the results and given by the implementation of Algorithm 2 are different from the pseudoinverse of and , since

Example 22. The aim of the present example is a verification of Theorem 6 and Algorithm 3 in the important case . For this purpose, we consider the same matrix as in Example 21. The Mathematica function PseudoInverse gives the following exact Moore-Penrose inverse of : It can be approximated using the Simulink implementation of Algorithm 3 corresponding to the choice . Indeed, according to Example 21, the Simulink implementation of Algorithm 3 approximates the outer inverse The implementation and generated results are presented in Figure 7. The GGNN model underlying the implementation is The display denoted by represents an approximate solution of the matrix equation and the display denoted by represents an approximation of . The time interval is , the solver is ode15s, and the scaling parameter is set to .
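Since this example targets the Moore-Penrose inverse, an approximation can be checked directly against the four Penrose equations. A small helper, with a hypothetical test matrix (not the example's data):

```python
import numpy as np

def penrose_residuals(A, X):
    """Residual norms of the four Penrose equations:
    (1) A X A = A, (2) X A X = X, (3) (A X)^* = A X, (4) (X A)^* = X A."""
    AX, XA = A @ X, X @ A
    return (np.linalg.norm(AX @ A - A),
            np.linalg.norm(XA @ X - X),
            np.linalg.norm(AX.conj().T - AX),
            np.linalg.norm(XA.conj().T - XA))

# Hypothetical test matrix; numpy's pinv should satisfy all four equations.
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [1.0, 1.0]])
residuals = penrose_residuals(A, np.linalg.pinv(A))
```

A candidate -inverse is checked the same way, with only the relevant subset of the four residuals required to vanish.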

Require: Time-varying matrices , and .
Require: Verify .
  If these conditions are satisfied then continue.
 (1) Solve the matrix equation with respect to an unknown matrix .
 (2) Return .

Example 23. Let us consider the same matrix as in Example 21 and The matrices and are generated with the purpose of illustrating Theorem 12 and Algorithms 8 and 9. Conditions (iv) and (v) of Theorem 12 are satisfied. Therefore, the results generated by Algorithms 8 and 9 are expected to coincide.
The Simulink implementation of Algorithm 9 generates the results presented in Figure 8. The simulation is performed over the time interval , with the scaling constant , and the selected solver is ode15s.
The Simulink implementation of Algorithm 8 generates the results presented in Figure 9. The time interval is , , and the solver is ode15s.
As a verification, Mathematica gives the following result: Let us observe that and are very close with respect to the Frobenius norm, since In the case and , the matrix equations and are satisfied, since
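The closeness claims of this example can be reproduced in miniature: given two candidate inverses, compare them in the Frobenius norm and check the outer-inverse identity X A X = X. The matrices below are illustrative stand-ins for the outputs of Algorithms 8 and 9, not the example's data:

```python
import numpy as np

# Illustrative stand-ins for the two algorithm outputs.
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [1.0, 1.0]])
X1 = np.linalg.pinv(A)                         # hypothetical result of one run
X2 = X1 + 1e-9                                 # slightly perturbed second run
dist = np.linalg.norm(X1 - X2, 'fro')          # agreement in the Frobenius norm
res = np.linalg.norm(X1 @ A @ X1 - X1, 'fro')  # outer-inverse identity X A X = X
```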

Example 24. (a) Consider the time-varying symmetric matrix , belonging to the set of matrices of rank from [32]: The Moore-Penrose inverse of is equal to . Figure 10 shows the Simulink-based computation of in the time period using the solver ode15s and the parameter .
Trajectories of approximations of the entries of the matrix over the time interval , generated using , are presented in Figure 11. It is evident that these trajectories follow the graphs of the corresponding closed-form expressions for the entries of .
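The observation that the computed trajectories follow closed-form entry expressions can be illustrated on a toy time-varying matrix. Here A(t) = u(t)u(t)^T with the unit vector u(t) = (sin t, cos t) is a hypothetical stand-in for the rank-deficient symmetric matrix of the example; for such a matrix the pseudoinverse equals A(t) itself, so the exact trajectories are available in closed form:

```python
import numpy as np

def A(t):
    # Hypothetical time-varying symmetric rank-1 matrix u(t) u(t)^T,
    # with u(t) = (sin t, cos t) a unit vector for every t.
    u = np.array([np.sin(t), np.cos(t)])
    return np.outer(u, u)

# Since u(t) has unit norm, (u u^T)^+ = u u^T, so the pseudoinverse
# trajectories coincide with the entries of A(t) itself.
ts = np.linspace(0.0, 10.0, 201)
traj = np.array([np.linalg.pinv(A(t)) for t in ts])
```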
(b) Now, consider the following matrices and in conjunction with : The outer inverse of corresponding to and is equal to . Its computation in the time period , using the solver ode15s and the parameter , is presented in Figure 12.

Example 25. Here we discuss the behaviour of Algorithm 3 in the case when the condition is not satisfied. For this purpose, let us consider the matrices These matrices do not satisfy the requirement of Algorithm 3, since On the other hand, the conditions and are valid, so the conditions required in Algorithms 1 and 2 hold. An application of Algorithm 3 over the time interval , with the scaling constant and the ode15s solver, gives the results for and presented in Figure 13.
An application of the dual case of Algorithm 3 over the time interval , with the scaling constant and the ode15s solver, gives the results for and presented in Figure 14.
Trajectories of the elements of the matrix over the time interval are presented in Figure 15.
According to the obtained results, the following can be concluded. (1) The matrix equation is not satisfied, since . This is to be expected, since the conditions are not satisfied, nor is the matrix invertible. Similarly, the matrix equation is not satisfied, since . (2) Both the matrices and are approximations of , since This means that the solutions of the matrix equations and given by the GNN model approximate the solution of the GNN model corresponding to the matrix equations and , respectively, which is equal to . (3) Accordingly, the output denoted by approximates the outer inverse to five decimal places. In conclusion, the Simulink implementation of Algorithm 3 computes the outer inverse which satisfies condition (29) from the definition of the -inverse, but not condition (28) from the same definition. In other words, satisfies neither nor . (4) Observations and finally imply that the GGNN model can be used for online time-varying pseudoinversion of both the matrices and .
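Conclusion (2) reflects a general fact: when the equation A X = B is inconsistent, the gradient (GNN) flow started from zero converges to the least-squares solution A^+ B rather than to an exact solution. A numerical sketch with a hypothetical inconsistent system (full-column-rank A, with B's column outside the range of A):

```python
import numpy as np

def gnn_solve(A, B, gamma=100.0, dt=1e-4, steps=20000):
    """Forward-Euler integration of the GNN flow dX/dt = -gamma * A^T (A X - B)
    from the zero initial state; for full-column-rank A its fixed point is the
    unique least-squares solution of A X = B, namely pinv(A) @ B."""
    X = np.zeros((A.shape[1], B.shape[1]))
    for _ in range(steps):
        X += dt * (-gamma) * (A.T @ (A @ X - B))
    return X

# Hypothetical inconsistent system: B's column is not in the range of A.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
B = np.array([[1.0], [0.0], [0.0]])
X = gnn_solve(A, B)
```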

5. Conclusion

The contribution of the present paper is both theoretical and computational. Conditions for the existence and representations of -, -, and -inverses with some assumptions on their ranges and null spaces are proposed. A new computational framework for these generalized inverses is developed. This approach arises from the derived general representations and involves solutions of certain matrix equations. In general, the methods and algorithms proposed in the present paper are aimed at the computation of various classes of generalized inverses of the form , where are solutions of the proposed matrix equations, solvable under specified conditions.

We chose to apply the GGNN approach to find solutions of the required matrix equations, and we use the Simulink implementation of the underlying RNN models. This choice allows us to extend the derived algorithms to time-varying matrices. Such an approach also makes it possible to compute two types of generalized inverses, namely, inner and/or outer inverses of and inner inverses of the matrix product . Illustrative numerical examples and simulation examples are presented to demonstrate the validity of the derived theoretical results and the proposed methods.

It is worth mentioning that the blurring process which is applied to the original image and produces the blurred image is expressed in the form of a certain matrix equation of the form , wherein it is assumed that , , where (resp., ) is the length of the horizontal (resp., vertical) blurring in pixels. Solutions of these types of matrix equations based on the pseudoinverses of and and on least-squares solutions were investigated in [33–35]. A possible application of the proposed algorithms in finding least-squares solutions of matrix equation (83) could be a useful direction for further research.
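The deblurring model just mentioned can be sketched as a two-sided matrix equation Hc X Hr = G, whose minimum-norm least-squares solution is pinv(Hc) @ G @ pinv(Hr). The blur matrices below are illustrative Gaussian kernels chosen for good conditioning, not the structured blur matrices of the cited works:

```python
import numpy as np

def blur_matrix(n, sigma):
    # Illustrative symmetric Gaussian blur matrix (hypothetical; well-conditioned
    # for small sigma), not the structured matrices from [33-35].
    i = np.arange(n)
    return np.exp(-((i[:, None] - i[None, :]) ** 2) / (2.0 * sigma ** 2))

n = 5
Hc = blur_matrix(n, 0.5)                                # vertical blur
Hr = blur_matrix(n, 0.4)                                # horizontal blur
X_true = np.arange(n * n, dtype=float).reshape(n, n)    # stand-in "image"
G = Hc @ X_true @ Hr                                    # blurred observation
# Minimum-norm least-squares solution of Hc X Hr = G:
X_rec = np.linalg.pinv(Hc) @ G @ np.linalg.pinv(Hr)
```

Since both blur matrices here are invertible, the restoration is exact up to rounding; in the rank-deficient case the same formula yields the least-squares restoration.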

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

The first and second authors gratefully acknowledge support from the project supported by the Ministry of Education and Science of the Republic of Serbia, Grant no. 174013. The first and third authors gratefully acknowledge support from the project “Applying Direct Methods for Digital Image Restoring” of the Goce Delčev University.