Mathematical Problems in Engineering
Volume 2013, Article ID 760293, 10 pages
http://dx.doi.org/10.1155/2013/760293
Research Article

Without Diagonal Nonlinear Requirements: The More General $P$-Critical Dynamical Analysis for UPPAM Recurrent Neural Networks

Institute for Information and System Science, School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an 710049, China

Received 18 September 2013; Accepted 12 November 2013

Academic Editor: Qintao Gan

Copyright © 2013 Xi Chen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Continuous-time recurrent neural networks (RNNs) play an important part in practical applications. Recently, because it can guarantee the convergence of equilibria that lie on the boundary between stability and instability, the study of the critical dynamical behaviors of RNNs has attracted particular attention. In this paper, a new asymptotic stability theorem and two corollaries are presented for the unified RNNs, that is, the UPPAM RNNs. The results in this paper are established under the general $P$-critical condition, which improves substantially upon the existing critical convergence and stability results, and, most importantly, the compulsory requirement of a diagonally nonlinear activation mapping imposed in most recent research is removed. As a result, the theory in this paper can be applied more generally.

1. Introduction

Neural networks have been developing rapidly into an important technology for about 30 years, and they can be grouped into two main types according to network structure: feed-forward neural networks (FNNs) and recurrent neural networks (RNNs). RNNs are neural networks with feedback loops, in which the output of each neuron in a layer feeds back to the inputs of the other neurons. RNNs are dynamic systems whose states vary as time passes. The crucial foundation of RNNs lies in their dynamical properties, such as global convergence, asymptotic stability, and exponential stability. Therefore, the analysis of such dynamical behaviors is the first and necessary step for any practical design and application of RNNs, such as recognition and classification, adaptive control, and optimization.

In recent years, for various individual models, considerable efforts have been devoted to the stability analysis of RNNs without and with delay (see, e.g., [1–18] and the references therein). In order to generalize those results, the authors of [19] point out that most of the exponential stability results are given under the condition that a discriminant matrix defined by the RNNs is positive definite. This matrix is built from the weight matrix $W$ of the network, an arbitrary positive definite diagonal matrix $D$, and the diagonal matrix $\Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$, with each $\lambda_i$ being the Lipschitz constant of the $i$th component $g_i$ of the activation operator $G$ of the network. When another discriminant matrix of a similar form is negative definite, the RNNs are exponentially unstable. Further, they present the concept of the special critical condition and some special critical convergence analysis for two kinds of RNN models. In [20, 21], the general critical condition is defined, from which we can see that the critical condition is really an essential gap between stability and instability for the given RNNs; that is, on one side of this gap the RNNs are sure to be stable, and on the other side the RNNs are certainly unstable. When the RNNs fall into this gap, there exist stable trajectories as well as unstable ones. Studying the dynamical behaviors of RNNs under the critical condition is called critical analysis. The goal of critical analysis is to find the least restrictions that assure the stability of RNNs, which usually corresponds to the design of the connection matrices (which are contained in the critical condition). To extend the application fields, and especially to loosen the design of the connection matrices, it is obviously quite important to study the critical dynamics of RNNs. Since studying the general critical dynamics is quite difficult, for convenience the general critical analysis is often replaced by the $P$-critical study.

On the other hand, in order to uncover deeper characteristics beyond the boundedness and monotone nondecreasing properties of the commonly used activation operators, [22] provides two novel concepts: the uniformly pseudoprojection antimonotone (UPPAM) operator and the UPPAM RNNs. The UPPAM operator can formalize most of the activation operators, and the UPPAM RNNs can likewise represent most individual RNN models. Thus, the UPPAM RNNs are called the unified RNNs. Further, it is conjectured that only by studying the dynamics of the UPPAM RNNs can one discriminate the similarities and redundancies among the dynamics results for the known individual RNN models; this conjecture is subsequently affirmed in [22, 23] for discrete-time RNNs and for continuous-time RNNs, respectively.

For all that, we should notice one more important thing: whether for the various individual RNN models or for the UPPAM RNNs, almost all of the dynamics conclusions (including the latest critical results) are based on the hypothesis that the activation operators are diagonally nonlinear; that is, the nonlinear activation operator is defined by $G(x) = (g_1(x_1), g_2(x_2), \ldots, g_n(x_n))^T$ (here each $g_i$ is a one-dimensional nonlinear function). The diagonally nonlinear requirement arises in dynamics analysis because, when considering the derivative of a constructed energy function, an inner product is always produced, and it is hard to handle if one does not use the diagonal nonlinearity. It is obvious that this requirement on the activation operator is quite strict and goes against the biological basis as well as applications.
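To make the distinction concrete, the following Python sketch contrasts a diagonally nonlinear activation, which applies an independent scalar function to each coordinate, with a genuinely coupled operator such as the nearest point projection onto a Euclidean ball; both operators are illustrative choices, not taken from the paper.

```python
import numpy as np

def diagonal_activation(x):
    # Diagonal nonlinearity: G(x) = (g_1(x_1), ..., g_n(x_n))^T,
    # each coordinate is transformed independently.
    return np.tanh(x)

def ball_projection(x, radius=1.0):
    # Nearest point projection onto the Euclidean ball: a non-diagonal
    # activation, since every output coordinate depends on the whole
    # vector through its norm.
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else (radius / nrm) * x

x = np.array([2.0, -0.5, 1.5])
print(diagonal_activation(x))  # coordinate-wise transform
print(ball_projection(x))      # couples all coordinates
```

The theory developed below covers operators of the second kind, which fall outside all analyses that require the diagonal structure.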

In the current paper, we are devoted to removing the diagonally nonlinear requirement on the activation operators in dynamics analysis and to answering the question of what dynamical behavior the unified recurrent neural networks (i.e., the UPPAM RNNs) exhibit under the $P$-critical condition (in which the discriminant matrix defined by the network equals an arbitrary nonnegative definite matrix $P$). By using the uniform antimonotonicity as well as the pseudoprojection property, and by combining the Lyapunov functional method with the LaSalle invariance principle, we obtain a global convergence and asymptotic stability theorem and some corollaries, which improve the recent dynamics results in three aspects. Firstly, the theorem and corollaries obtained in this paper are for the general RNNs, so they can be applied directly to CNN, BSB, BCOp-type, and other commonly used specific models. Secondly, the $P$-critical analysis is more general than the special critical analysis, let alone the noncritical analyses. Thirdly, and most importantly, the diagonal nonlinearity requirement on the activation operator, which is a restrictive condition, is removed.

2. Basic Definitions

Static RNNs and local field RNNs typically represent the two fundamental modeling approaches in current neural network research, which are, respectively, modeled by
$$\tau \frac{dx(t)}{dt} = -x(t) + G(Wx(t) + q), \quad (1)$$
$$\tau \frac{du(t)}{dt} = -u(t) + W G(u(t)) + q, \quad (2)$$
where $x(t)$ is the neural state vector, $u(t)$ is the local field vector, $W$ is the synaptic weight matrix, $\tau$ is a positive constant, $q$ is a fixed external bias vector, and $G$ is the nonlinear activation operator.
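Assuming the standard static and local field forms (1) and (2) reconstructed above (as in [25]), the two models can be simulated side by side with forward Euler integration; the weights, bias, and saturating activation below are illustrative stand-ins, not data from the paper.

```python
import numpy as np

def G(v):
    # Linear saturating activation, a typical UPPAM-type operator
    # (it equals the projection onto the box [-1, 1]^n).
    return np.clip(v, -1.0, 1.0)

def simulate(rhs, x0, dt=1e-3, steps=20000):
    # Forward-Euler integration of x' = rhs(x).
    x = x0.copy()
    for _ in range(steps):
        x = x + dt * rhs(x)
    return x

n = 3
rng = np.random.default_rng(0)
W = 0.3 * rng.standard_normal((n, n))   # hypothetical weight matrix
q = rng.standard_normal(n)              # hypothetical bias vector
tau = 1.0

static_rhs = lambda x: (-x + G(W @ x + q)) / tau        # system (1)
local_field_rhs = lambda u: (-u + W @ G(u) + q) / tau   # system (2)

print("static state settles near:", simulate(static_rhs, rng.standard_normal(n)))
print("local field settles near:", simulate(local_field_rhs, rng.standard_normal(n)))
```

With small weights both trajectories settle; the interesting regime studied in this paper is precisely when the usual sufficient conditions for such settling hold only with equality.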

We now recall some notions and notation (taking system (1) as an example). A constant vector $x^*$ is said to be an equilibrium state of system (1) if $x^* = G(Wx^* + q)$; that is, $x^*$ is a fixed point of the operator $x \mapsto G(Wx + q)$. $x^*$ is said to be stable if any trajectory of system (1) stays within a small neighborhood of $x^*$ whenever the initial state is close to $x^*$, and it is said to be attractive if there is a neighborhood $\Omega(x^*)$, called the attraction basin of $x^*$, such that any trajectory of system (1) initialized from a state in $\Omega(x^*)$ approaches $x^*$ as time goes to infinity. $x^*$ is said to be globally asymptotically stable on $\mathcal{D}$ if it is both stable and attractive, with attraction basin $\mathcal{D}$. System (1) is said to be globally convergent on $\mathcal{D}$ if for every initial point $x_0 \in \mathcal{D}$, the trajectory $x(t; x_0)$ converges to an equilibrium state of system (1) (the limit of $x(t; x_0)$ may not be the same for different $x_0$).

Here we give some definitions concerning the activation operator $G$ and general nonlinear operators. Denote the range of $G$ by $V$.

Definition 1 (see [22]). The nonlinear activation operator $G$ is said to be an $\alpha$-uniformly antimonotone operator if there is a positive constant $\alpha$ such that
$$\langle x - y, G(x) - G(y) \rangle \ge \alpha \, \|G(x) - G(y)\|^2, \quad \forall x, y \in \mathbb{R}^n.$$

In [22], it is shown that most of the common nonlinear activation operators possess the uniform antimonotonicity property, such as the nearest point projection operator, the linear saturating operator, the signum operator, and the pseudoprojection operator.
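Taking the inequality form assumed in Definition 1 above at face value, the $\alpha$-uniform antimonotonicity of a concrete operator can be spot-checked by random sampling; the sketch below tests the linear saturating operator with $\alpha = 1$, the value attained by every nearest point projection onto a convex set.

```python
import numpy as np

def G(v):
    # Linear saturating operator = projection onto the box [-1, 1]^n.
    return np.clip(v, -1.0, 1.0)

def antimonotone_margin(G, x, y, alpha):
    # Margin of the assumed Definition 1 inequality:
    # <x - y, G(x) - G(y)> - alpha * ||G(x) - G(y)||^2, nonnegative if it holds.
    d = G(x) - G(y)
    return np.dot(x - y, d) - alpha * np.dot(d, d)

rng = np.random.default_rng(1)
margins = [antimonotone_margin(G, rng.uniform(-3, 3, 4),
                               rng.uniform(-3, 3, 4), alpha=1.0)
           for _ in range(20000)]
print("worst margin:", min(margins))  # expected >= 0 (up to rounding)
```

A sampling check of this kind cannot prove the property, but a single negative margin disproves a conjectured value of $\alpha$.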

Definition 2 (see [22]). An operator $G$ is said to be a pseudoprojection if there exists a positive definite diagonal matrix $Q$ such that $G(G(x)) = G(x)$ and $\langle x - G(x), Q(z - G(x)) \rangle \le 0$, for all $x \in \mathbb{R}^n$ and $z \in V$. Then $G$ is called a $Q$-projection.

Obviously, all the projection operators are pseudoprojection operators (here $Q = I$, where $I$ refers to the identity matrix), and most of the concrete activation operators appearing in various RNNs are pseudoprojection operators.

In [22], when the operator $G$ has both the pseudoprojection and the uniform antimonotonicity properties, it is called a uniformly pseudoprojection antimonotone (UPPAM) operator. Specifically, we say $G$ is a $(Q, \alpha)$-UPPAM operator whenever it is a $Q$-projection and an $\alpha$-uniformly antimonotone operator. It is worthwhile to note that the UPPAM operator provides a very appropriate, unified framework within which most of the known RNN models can be embedded and uniformly studied [21–23].

The following definition of the nonlinear norm is similar to that of the matrix norm.

Definition 3 (see [20]). Suppose that $F$ is a nonlinear operator, $B$ is a nonsingular matrix, and $x_0$ is a given vector. One calls $\|F\|_{B, x_0}$ the nonlinear norm of $F$ if it is defined as follows:
$$\|F\|_{B, x_0} = \sup_{x \ne x_0} \frac{\|B(F(x) - F(x_0))\|}{\|B(x - x_0)\|}.$$

Obviously, for any given matrix $A$, $\|A\|_{B, x_0} = \|BAB^{-1}\|$. Additionally, for any constant $c$, one has $\|cF\|_{B, x_0} = |c| \, \|F\|_{B, x_0}$.
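Assuming the supremum form of Definition 3 above, a crude Monte Carlo lower estimate of the nonlinear norm of the operator $F(x) = G(Wx + q)$ can be computed by sampling; everything named below (the weights, bias, and box-projection activation) is a hypothetical example.

```python
import numpy as np

def nonlinear_norm_estimate(F, B, x0, n_samples=20000, scale=5.0, rng=None):
    # Monte Carlo lower bound for the assumed nonlinear norm
    #   sup_{x != x0} ||B (F(x) - F(x0))|| / ||B (x - x0)||.
    if rng is None:
        rng = np.random.default_rng(0)
    best, Fx0, n = 0.0, F(x0), x0.size
    for _ in range(n_samples):
        x = x0 + scale * rng.standard_normal(n)
        num = np.linalg.norm(B @ (F(x) - Fx0))
        den = np.linalg.norm(B @ (x - x0))
        best = max(best, num / den)
    return best

W = np.array([[0.2, 0.1], [0.0, 0.3]])
q = np.array([0.5, -0.2])
F = lambda x: np.clip(W @ x + q, -1.0, 1.0)  # F(x) = G(Wx + q)
print(nonlinear_norm_estimate(F, np.eye(2), np.zeros(2)))
```

Sampling only ever produces a lower bound, so it is useful for falsifying a conjectured norm bound rather than proving one.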

Definition 4. One takes $l_g$ as the minimum Lipschitz constant of a one-dimensional operator $g$, which is defined as follows:
$$l_g = \sup_{s \ne t} \frac{|g(s) - g(t)|}{|s - t|}.$$

Without loss of generality, throughout this paper we assume that each $l_i > 0$. Here, let $l_i$ be the minimum Lipschitz constant of $g_i$ and $L = \mathrm{diag}(l_1, l_2, \ldots, l_n)$. The matrix $L$ is said to be the minimum Lipschitz matrix of operator $G$.
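For concreteness, the minimum Lipschitz constants of typical scalar activations can be estimated numerically from the difference-quotient definition assumed above; the component functions below are illustrative choices, and the resulting diagonal matrix plays the role of the assumed minimum Lipschitz matrix $L$.

```python
import numpy as np

def min_lipschitz_constant(g, lo=-10.0, hi=10.0, m=100001):
    # Grid estimate of l = sup_{s != t} |g(s) - g(t)| / |s - t|;
    # for differentiable g this approaches sup |g'|.
    s = np.linspace(lo, hi, m)
    return np.max(np.abs(np.diff(g(s)) / np.diff(s)))

gs = [np.tanh, lambda s: np.clip(s, -1.0, 1.0), lambda s: 0.5 * s]
L = np.diag([min_lipschitz_constant(g) for g in gs])
print(L)  # approximately diag(1, 1, 0.5) for these sample activations
```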

3. The Global Convergence Theorems of RNNs

In this section, the global convergence and asymptotic stability theorem and corollaries for RNNs with UPPAM operators, for both systems (1) and (2), will be established under the $P$-critical condition. We consider the networks of form (1) first.

Suppose that $G$ is the nonlinear activation operator. For any $x \in \mathbb{R}^n$, define $F(x) = G(Wx + q)$, and define $\mathcal{F}$ as the fixed point set of $F$. Then, by Brouwer's fixed point theorem, $F$ has at least one fixed point. As a result, the equilibrium state set of (1) is not empty.
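In practice an equilibrium can often be located by simple fixed-point iteration on $F(x) = G(Wx + q)$, which converges whenever the composite map happens to be contractive; the sketch below uses a hypothetical box-projection activation and illustrative small weights.

```python
import numpy as np

def G(v):
    return np.clip(v, -1.0, 1.0)   # projection-type activation (illustrative)

def equilibrium(W, q, x0=None, tol=1e-12, max_iter=100000):
    # Fixed-point iteration for x = G(Wx + q). Brouwer's theorem
    # guarantees existence in general; the iteration converges when
    # the map x -> G(Wx + q) is contractive (e.g., small ||W||).
    x = np.zeros(W.shape[0]) if x0 is None else x0.copy()
    for _ in range(max_iter):
        x_new = G(W @ x + q)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

W = np.array([[0.2, 0.1], [0.1, 0.2]])
q = np.array([0.3, -0.4])
x_star = equilibrium(W, q)
print(x_star, np.allclose(x_star, G(W @ x_star + q)))  # fixed point check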

Note that for any positive definite matrix $D$, there exists an orthogonal matrix $P$ ($P^T P = I$) such that $D = P \, \mathrm{diag}(\mu_1, \mu_2, \ldots, \mu_n) \, P^T$ (here each $\mu_i > 0$ is an eigenvalue of $D$). If we define $D^{1/2} = P \, \mathrm{diag}(\sqrt{\mu_1}, \ldots, \sqrt{\mu_n}) \, P^T$, then it is clear that $(D^{1/2})^2 = D$ and $D^{1/2}$ is invertible. Such a matrix is denoted by $\sqrt{D}$; that is, $\sqrt{D} = D^{1/2}$.
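This is the standard symmetric square root construction; a minimal numerical version, using numpy's symmetric eigendecomposition, is:

```python
import numpy as np

def sqrt_pd(D):
    # Symmetric square root of a positive definite matrix:
    # D = P diag(mu) P^T with P orthogonal, so sqrt(D) = P diag(sqrt(mu)) P^T.
    mu, P = np.linalg.eigh(D)
    return P @ np.diag(np.sqrt(mu)) @ P.T

D = np.array([[2.0, 0.5], [0.5, 1.0]])
S = sqrt_pd(D)
print(np.allclose(S @ S, D))   # True: S is invertible with S^2 = D
```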

The following is the global convergence and asymptotic stability theorem for system (1). Suppose that $\mathcal{D}$ is a bounded, closed, and convex subset of $\mathbb{R}^n$.

Theorem 5. Let $G$ be a $(Q, \alpha)$-UPPAM operator, and let each $g_i$ be monotonically increasing and continuous. Suppose there is a nonnegative definite matrix $P$ for which the $P$-critical condition determined by the network holds, and the nonlinear norm satisfies $\|F\|_{B, x^*} \le 1$ for a matrix $B = \sqrt{D}$ constructed as above, where $F(x) = G(Wx + q)$. Then RNN system (1) is globally convergent on $\mathcal{D}$ whenever its equilibrium state set is disconnected. Furthermore, when $x^*$ is the unique equilibrium point of (1), then $x^*$ is globally asymptotically stable on $\mathcal{D}$.
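The matrix part of such a condition can be checked numerically. The discriminant form used below, $D\Lambda^{-1} - (DW + W^T D)/2$, is borrowed from the cited critical analyses [19–21] as a stand-in, since the exact expression of the paper's discriminant was lost in extraction; the matrices are illustrative.

```python
import numpy as np

def is_nonneg_definite(M, tol=1e-10):
    # A symmetric matrix is nonnegative definite iff its smallest
    # eigenvalue is >= 0 (up to numerical tolerance).
    return np.linalg.eigvalsh((M + M.T) / 2).min() >= -tol

W = np.array([[0.5, 0.2], [0.2, 0.5]])   # hypothetical weight matrix
Lam = np.diag([1.0, 1.0])                # Lipschitz constants of activations
D = np.eye(2)                            # positive definite diagonal matrix
M = D @ np.linalg.inv(Lam) - (D @ W + W.T @ D) / 2
print(is_nonneg_definite(M))             # True: critical-type condition holds
```

In the noncritical analyses one would require strict positive definiteness; the point of the critical theory is that the boundary case, smallest eigenvalue exactly zero, is still allowed.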

Proof. Denote the trajectory of (1) starting from $x_0$ by $x(t; x_0)$. Define a Lyapunov energy function along the trajectory; a direct calculation of its derivative, using the $P$-critical condition and the nonnegative definiteness of $P$, shows that the derivative is nonpositive. Moreover, since $G$ is a $(Q, \alpha)$-UPPAM operator, the uniform antimonotonicity and pseudoprojection properties imply that the derivative vanishes if and only if $x(t)$ is an equilibrium state of (1).
Furthermore, since the trajectory is bounded and the equilibrium state set is disconnected, the LaSalle invariance principle [24] yields that RNN model (1) is globally convergent on $\mathcal{D}$. When $x^*$ is the unique equilibrium point, it is easy to deduce that $x^*$ is both attractive and stable on $\mathcal{D}$ since $\mathcal{D}$ is bounded; that is, $x^*$ is globally asymptotically stable on $\mathcal{D}$. Thus, Theorem 5 is proved.

Theorem 5 gives the global convergence and asymptotic stability result for UPPAM RNNs without the diagonal nonlinearity requirement under the $P$-critical condition; however, the nonlinear norm condition in Theorem 5 is not easy to verify directly. To make it more practical, we present the following corollary.

Corollary 6. Assume that $G$ is a $(Q, \alpha)$-UPPAM operator with each $g_i$ being monotonically increasing and continuous. If there exists a nonnegative diagonal matrix $P$ such that the $P$-critical condition holds and the weight matrix $W$ satisfies the corresponding matrix norm bound (with respect to $B = \sqrt{D}$), then RNN model (1) is globally convergent on $\mathcal{D}$ whenever its equilibrium state set is disconnected. Moreover, when $x^*$ is the unique equilibrium point of (1), then $x^*$ is globally asymptotically stable on $\mathcal{D}$.

Proof. Since $G$ is $\alpha$-uniformly antimonotone, the defining inequality allows the nonlinear norm of $F(x) = G(Wx + q)$ to be bounded by a matrix norm built from the weight matrix $W$. Hence, whenever the matrix norm bound stated in the corollary holds, the nonlinear norm condition of Theorem 5 is satisfied. Corollary 6 is then proved from Theorem 5.

Correspondingly, we can deduce the critical global convergence and asymptotic stability conclusions for RNN system (2).

Corollary 7. Assume that $G$ is a $(Q, \alpha)$-UPPAM operator with each $g_i$ being monotonically increasing and continuous. If there exists a nonnegative diagonal matrix $P$ such that the $P$-critical condition holds and one of the following conditions is satisfied, then RNN model (2) is globally convergent on $\mathcal{D}$ whenever its equilibrium state set is disconnected; moreover, when $u^*$ is the unique equilibrium point of (2), then $u^*$ is globally asymptotically stable on $\mathcal{D}$: (i) for a suitable nonsingular matrix $B$, the nonlinear norm of the operator defined by the network is at most 1; (ii) the weight matrix $W$ satisfies the corresponding matrix norm bound.

Proof. For any trajectory $u(t; u_0)$ of (2) starting from $u_0$, let $x(t)$ be the solution of (1) with a suitable initial value; then, by the uniqueness of solutions of differential equations, it is easy to verify that $u(t) = Wx(t) + q$, and for every equilibrium state $x^*$ of (1), $u^* = Wx^* + q$ is an equilibrium state of (2). Thus, by [25], the convergence of $u(t; u_0)$ to an equilibrium state of (2) can be shown by studying the asymptotic behavior of $x(t)$. The conclusion of Corollary 7 then readily follows from Theorem 5 and Corollary 6.

Remark 8. In this section, we have presented the $P$-critical convergence and stability results for the uniformly pseudoprojection antimonotone RNNs. In detail, we obtain the global convergence and asymptotic stability of the static UPPAM RNNs under the condition that either the nonlinear norm defined by the network is at most 1 or the matrix norms given by the network satisfy a bounded requirement, and the corresponding results for the local field UPPAM RNNs are also given.
It should be noticed that in the theorem and the two corollaries obtained here, the diagonally nonlinear requirement on the activation operators has been removed directly, a basic hypothesis in nearly all previous dynamics analyses. Thus, we improve most of the existing results for RNNs, whether obtained under noncritical conditions or under critical conditions (see, e.g., [6, 13, 19, 21, 25–28] and the references therein). In addition, we know from [20] that the $P$-critical analysis of RNNs captures the essential boundary between stability and instability of RNNs, so the discussion of the $P$-critical dynamics of RNNs without the diagonally nonlinear requirement is quite meaningful both in theory and in applications. Further, since the UPPAM RNNs can formalize most of the existing individual RNN models, the analysis of the dynamical behaviors of UPPAM RNNs can yield unified conclusions for RNNs and can discriminate the similarities and redundancies among the dynamics results for the known individual models. In particular, the results achieved here can be applied directly to many RNN models and can substantially improve the main results for those models, for example, the cellular neural networks (CNNs) [7–11], the brain-state-in-a-box neural networks (BSB NNs) [29, 30], the BCOp-type RNNs [31], and other commonly used specific models.

4. Illustrative Examples

In this section, we provide several illustrative examples to demonstrate the validity of the convergence and stability results formulated in the previous section.

Example 1. Consider the following RNN of the form (1), referred to as system (25).

In this example, the activation operator and the minimum Lipschitz constants of its components are easy to identify, and the unique equilibrium state can be computed directly.

For any positive diagonal matrix $D$, it is easy to verify that the corresponding discriminant matrix is neither positive definite nor even nonnegative definite. That is, none of the noncritical and critical conclusions in the literature (see, e.g., [19, 27, 32]) can be used here. But Theorem 5 can be applied to this example: the projection operator involved is a $(Q, \alpha)$-UPPAM operator.

It then remains a matter of direct computation: estimating the relevant inner products from below and above and combining the two resulting bounds shows that the required inequality always holds, so the nonlinear norm condition of Theorem 5 is satisfied.

According to Theorem 5, system (25) is globally asymptotically stable on $\mathcal{D}$. Figure 1 depicts the time responses of the state variables of the system starting from random initial points, which confirms that the proposed condition in Theorem 5 ensures the global asymptotic stability of the RNN.

Figure 1: Transient behaviors of the RNN in system (25) with random initial points.
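Because the concrete data of system (25) was lost in extraction, the following sketch only illustrates, with hypothetical weights and a placeholder activation, how transient plots like Figure 1 are produced: integrate system (1) from several random initial points and plot each state coordinate over time.

```python
import numpy as np
import matplotlib.pyplot as plt

def G(v):
    # Placeholder projection-type activation (box projection).
    return np.clip(v, -1.0, 1.0)

W = np.array([[0.4, 0.1], [0.1, 0.4]])   # hypothetical weights; the
q = np.array([0.2, -0.1])                # original data of (25) is lost
dt, T = 1e-2, 30.0
t = np.arange(0.0, T, dt)

rng = np.random.default_rng(2)
for _ in range(5):                        # several random initial points
    x = rng.uniform(-2, 2, 2)
    traj = np.empty((t.size, 2))
    for k in range(t.size):
        traj[k] = x
        x = x + dt * (-x + G(W @ x + q))  # forward Euler for system (1)
    plt.plot(t, traj)
plt.xlabel("t"); plt.ylabel("state"); plt.title("Transient behaviors")
plt.show()
```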

Example 2. Consider the following RNN of the form (2), referred to as system (33), with weight matrix $W$ and external bias vector $q$ specified for this example.

Obviously, this example is established on a general (non-diagonal) projection operator, so none of the diagonally nonlinear conclusions in the literature [20, 28, 32] can be used here. But Corollary 7, established in Section 3, can be applied to Example 2. In fact, the activation operator here is the nearest point projection onto the unit ball, and a direct calculation with two arbitrary points of its domain verifies the UPPAM properties and yields the required norm estimate.

Taking the matrices of this example, it is easy to verify the matrix norm bound of Corollary 7. According to Corollary 7, system (33) is globally convergent on $\mathcal{D}$. Figure 2 depicts the time responses of the state variables of system (33) starting from random initial points.

Figure 2: Transient behaviors of the RNN in system (33) with random initial points.
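Since the data of system (33) was likewise lost, the sketch below stands in with hypothetical weights for the local field model (2) and numerically checks global convergence by letting several random trajectories settle; note that global convergence allows different initial points to reach different equilibria.

```python
import numpy as np

def G(u):
    # Placeholder projection-type activation.
    return np.clip(u, -1.0, 1.0)

# Hypothetical data standing in for the lost W and q of system (33).
W = np.array([[0.3, -0.2], [0.1, 0.25]])
q = np.array([0.1, 0.4])

def settle(u0, dt=1e-2, steps=20000):
    u = u0.copy()
    for _ in range(steps):
        u = u + dt * (-u + W @ G(u) + q)   # forward Euler for system (2)
    return u

rng = np.random.default_rng(3)
limits = np.array([settle(rng.uniform(-3, 3, 2)) for _ in range(8)])
print(np.round(limits, 6))   # settled states of the sampled trajectories
```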

Example 3. Consider the following RNN of the form (1), referred to as system (37), with weight matrix $W$ and external bias vector $q$ specified for this example.

Like Example 2, none of the diagonally nonlinear conclusions in the literature [20, 28, 32] can be used here, but this example can be proved to be globally convergent by Corollary 6. Here the activation operator equals the one in Example 2; furthermore, its minimum Lipschitz matrix is easy to compute, so the activation operator here is a UPPAM operator. Just as in the proof for Example 2, we obtain the required norm estimate.

Taking the matrices of this example, it is easy to verify the matrix norm bound of Corollary 6. According to Corollary 6, system (37) is globally convergent on $\mathcal{D}$. Figure 3 depicts the time responses of the state variables of system (37) starting from random initial points.

Figure 3: Transient behaviors of the RNN in system (37) with random initial points.

5. Conclusion

Two basic dynamical behaviors, global convergence and asymptotic stability, of both static and local field RNNs with UPPAM operators have been studied under the $P$-critical condition. It has been proved that when the nonlinear norm determined by the network is bounded, the RNN with a UPPAM operator possesses the convergence and stability properties whenever the discriminant matrix defined by the network equals an arbitrary nonnegative definite matrix $P$. Compared with the existing dynamics analyses, the results in this paper extend most of the dynamics conclusions achieved so far: the requirements on the network type, the critical form, and the character of the activation operator in the available literature have been clearly relaxed. Some typical RNNs with UPPAM activation operators, such as most of the CNN, BSB, and BCOp-type networks, can apply the theory obtained here to judge their dynamical behavior directly. The significance of the results lies not only in providing further insight into the essential dynamical behavior of RNNs, but also in enlarging their field of application.

In this paper, we achieved only the global convergence and asymptotic stability results for UPPAM RNNs and did not discuss another important dynamical behavior, namely, the exponential stability of UPPAM RNNs; this is under our current investigation.

Conflict of Interests

The authors declare that they have no conflict of interests.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (nos. 11101327 and 11171270), the National Basic Research Program of China (973 Program) (no. 2013C13329406), and the Fundamental Research Funds for the Central Universities (nos. xjj20100087 and 2011jdhz30).

References

1. T. Chen and S. I. Amari, “New theorems on global convergence of some dynamical systems,” Neural Networks, vol. 14, no. 3, pp. 251–255, 2001.
2. T. Chen, “Global convergence of delayed dynamical systems,” IEEE Transactions on Neural Networks, vol. 12, no. 6, pp. 1532–1536, 2001.
3. P. van den Driessche and X. Zou, “Global attractivity in delayed Hopfield neural network models,” SIAM Journal on Applied Mathematics, vol. 58, no. 6, pp. 1878–1890, 1998.
4. Y. Fang and T. G. Kincaid, “Stability analysis of dynamical neural networks,” IEEE Transactions on Neural Networks, vol. 7, no. 4, pp. 996–1006, 1996.
5. Z.-H. Guan, G. Chen, and Y. Qin, “On equilibria, stability, and instability of Hopfield neural networks,” IEEE Transactions on Neural Networks, vol. 11, no. 2, pp. 534–540, 2000.
6. X.-B. Liang and J. Si, “Global exponential stability of neural networks with globally Lipschitz continuous activations and its application to linear variational inequality problem,” IEEE Transactions on Neural Networks, vol. 12, no. 2, pp. 349–359, 2001.
7. L. O. Chua and L. Yang, “Cellular neural networks: theory,” IEEE Transactions on Circuits and Systems, vol. 35, no. 10, pp. 1257–1272, 1988.
8. L. O. Chua and T. Roska, Cellular Neural Networks and Visual Computing: Foundations and Applications, Cambridge University Press, Cambridge, UK, 2002.
9. J. Park, H.-Y. Kim, Y. Park, and S.-W. Lee, “A synthesis procedure for associative memories based on space-varying cellular neural networks,” Neural Networks, vol. 14, no. 1, pp. 107–113, 2001.
10. T. Roska and J. Vandewalle, Cellular Neural Networks, Wiley, Chichester, UK, 1995.
11. A. Slavova, Cellular Neural Networks: Dynamics and Modelling, vol. 16 of Mathematical Modelling: Theory and Applications, Kluwer Academic, Dordrecht, The Netherlands, 2003.
12. M. Forti and A. Tesi, “New conditions for global stability of neural networks with application to linear and quadratic programming problems,” IEEE Transactions on Circuits and Systems, vol. 42, no. 7, pp. 354–366, 1995.
13. X. Liu and T. Chen, “A new result on the global convergence of Hopfield neural networks,” IEEE Transactions on Circuits and Systems, vol. 49, no. 10, pp. 1514–1516, 2002.
14. R. F. Rao and Z. L. Pu, “LMI-based stability criterion of impulsive T-S fuzzy dynamic equations via fixed point theory,” Abstract and Applied Analysis, vol. 2013, Article ID 261353, 9 pages, 2013.
15. R. F. Rao, S. M. Zhong, and X. R. Wang, “Stochastic stability criteria with LMI conditions for Markovian jumping impulsive BAM neural networks with mode-dependent time-varying delays and nonlinear reaction-diffusion,” Communications in Nonlinear Science and Numerical Simulation, vol. 19, no. 1, pp. 258–273, 2014.
16. R. F. Rao, S. M. Zhong, and X. R. Wang, “Delay-dependent exponential stability for Markovian jumping stochastic Cohen-Grossberg neural networks with p-Laplace diffusion and partially known transition rates via a differential inequality,” Advances in Difference Equations, vol. 2013, Article ID 183, 2013.
17. R. Rao, X. Wang, S. Zhong, and Z. Pu, “LMI approach to exponential stability and almost sure exponential stability for stochastic fuzzy Markovian-jumping Cohen-Grossberg neural networks with nonlinear p-Laplace diffusion,” Journal of Applied Mathematics, vol. 2013, Article ID 396903, 21 pages, 2013.
18. R. F. Rao and Z. L. Pu, “Stability analysis for impulsive stochastic fuzzy p-Laplace dynamic equations under Neumann or Dirichlet boundary condition,” Boundary Value Problems, vol. 2013, Article ID 133, 2013.
19. J. Peng, Z.-B. Xu, H. Qiao, and B. Zhang, “A critical analysis on global convergence of Hopfield-type neural networks,” IEEE Transactions on Circuits and Systems, vol. 52, no. 4, pp. 804–814, 2005.
20. C. Qiao and Z. Xu, “On the P-critical dynamics analysis of projection recurrent neural networks,” Neurocomputing, vol. 73, no. 13–15, pp. 2783–2788, 2010.
21. C. Qiao and Z. Xu, “Critical dynamics study on recurrent neural networks: globally exponential stability,” Neurocomputing, vol. 77, no. 1, pp. 205–211, 2012.
22. Z. B. Xu and C. Qiao, “Towards a unified recurrent neural network theory: the uniformly pseudo-projection-anti-monotone net,” Acta Mathematica Sinica (English Series), vol. 27, no. 2, pp. 377–396, 2011.
23. C. Qiao, W. F. Jing, and Z. B. Xu, “The UPPAM continuous-time RNN model and its critical dynamics,” Neurocomputing, vol. 77, pp. 205–211, 2013.
24. J. P. LaSalle, The Stability of Dynamical Systems, SIAM, Philadelphia, Pa, USA, 1987.
25. Z.-B. Xu, H. Qiao, J. Peng, and B. Zhang, “A comparative study of two modeling approaches in neural networks,” Neural Networks, vol. 17, no. 1, pp. 73–85, 2004.
26. Y. Yang and J. Cao, “Solving quadratic programming problems by delayed projection neural network,” IEEE Transactions on Neural Networks, vol. 17, no. 6, pp. 1630–1634, 2006.
27. H. Qiao, J. Peng, Z.-B. Xu, and B. Zhang, “A reference model approach to stability analysis of neural networks,” IEEE Transactions on Systems, Man, and Cybernetics B, vol. 33, no. 6, pp. 925–936, 2003.
28. C. Qiao and Z. B. Xu, “New critical analysis on global convergence of recurrent neural networks with projection mappings,” vol. 4493 of Lecture Notes in Computer Science, pp. 131–139, Springer, Berlin, Germany, 2007.
29. J.-H. Li, A. N. Michel, and W. Porod, “Analysis and synthesis of a class of neural networks: linear systems operating on a closed hypercube,” IEEE Transactions on Circuits and Systems, vol. 36, no. 11, pp. 1405–1422, 1989.
30. I. Varga, G. Elek, and S. H. Zak, “On the brain-state-in-a-convex-domain neural models,” Neural Networks, vol. 9, no. 7, pp. 1173–1184, 1996.
31. Y. S. Xia and J. Wang, “On the stability of globally projected dynamical systems,” Journal of Optimization Theory and Applications, vol. 106, no. 1, pp. 129–150, 2000.
32. C. Qiao and Z. Xu, “A critical global convergence analysis of recurrent neural networks with general projection mappings,” Neurocomputing, vol. 72, no. 7–9, pp. 1878–1886, 2009.