Abstract and Applied Analysis
Volume 2012 (2012), Article ID 109319, 14 pages
Global Convergence for Cohen-Grossberg Neural Networks with Discontinuous Activation Functions
1School of Mathematics and Physics, Anhui University of Technology, Ma'anshan 243002, China
2School of Computer Science, Anhui University of Technology, Ma'anshan 243002, China
Received 12 September 2012; Accepted 23 October 2012
Academic Editor: Sabri Arik
Copyright © 2012 Yanyan Wang and Jianping Zhou. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Cohen-Grossberg neural networks with discontinuous activation functions are considered. Using the property of M-matrices and a generalized Lyapunov-like approach, uniqueness is proved for the state solutions and corresponding output solutions, as well as for the equilibrium point and corresponding output equilibrium point of the considered neural networks. Meanwhile, global exponential stability of the equilibrium point is obtained. Furthermore, by the contraction mapping principle, the uniqueness and global exponential stability of the limit cycle are established. Finally, an example is given to illustrate the effectiveness of the obtained results.
Recently, various types of neural networks, with or without time delays, have been widely investigated owing to their broad applicability [1–32]. Considerable research interest has focused on Cohen-Grossberg neural networks (CGNNs) and their various generalizations, due to their potential applications in classification, associative memory, and parallel computation, and their ability to solve optimization problems. This class of neural networks was proposed by Cohen and Grossberg [1] in 1983 and can be modeled as
$$\dot{x}_i(t) = a_i(x_i(t))\Big[-b_i(x_i(t)) + \sum_{j=1}^{n} t_{ij} f_j(x_j(t)) + I_i\Big], \quad i = 1, \dots, n,$$
where $n$ is the number of neurons in the network, $x_i$ denotes the state variable associated with the $i$th neuron, $a_i$ represents an amplification function, and $b_i$ is an appropriately behaved function. $t_{ij}$ represents the connection strength between neurons: if the output from neuron $j$ excites (resp., inhibits) neuron $i$, then $t_{ij} \ge 0$ (resp., $t_{ij} \le 0$). The activation function $f_j$ shows how neurons respond to each other. CGNNs include many famous ecological systems and neural networks as special cases, such as the Lotka-Volterra system, the Gilpin-Ayala competition system, the Eigen-Schuster system, and the Hopfield neural networks [1–3], where the Hopfield neural networks can be described as follows:
$$\dot{x}_i(t) = -b_i x_i(t) + \sum_{j=1}^{n} t_{ij} f_j(x_j(t)) + I_i, \quad i = 1, \dots, n.$$
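As a concrete illustration of the model above, the following sketch (all coefficients here are hypothetical, not taken from the paper) integrates a two-neuron CGNN of this form by forward Euler, with a smooth activation, and checks that trajectories from different initial states settle at the same equilibrium:

```python
import numpy as np

def simulate_cgnn(T, I, a, b, f, x0, dt=1e-3, steps=20000):
    """Forward-Euler integration of
    dx_i/dt = a_i(x_i) * ( -b_i(x_i) + sum_j T_ij f_j(x_j) + I_i )."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * a(x) * (-b(x) + T @ f(x) + I)
    return x

# Hypothetical two-neuron network; note that -T is an M-matrix.
T = np.array([[-3.0, 0.5],
              [0.5, -3.0]])
I = np.array([1.0, -1.0])
a = lambda x: 1.0 + 0.5 / (1.0 + x**2)   # bounded amplification, in [1, 1.5]
b = lambda x: x                          # linear self-inhibition
f = np.tanh                              # smooth activation (the paper's focus is the discontinuous case)

x_star = simulate_cgnn(T, I, a, b, f, x0=[2.0, -2.0])
x_alt = simulate_cgnn(T, I, a, b, f, x0=[-5.0, 5.0])
print(x_star, x_alt)  # both runs reach the same equilibrium
```

The agreement between the two runs is exactly the kind of global convergence the M-matrix conditions below are designed to guarantee.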
For CGNNs, the dynamical behavior has been studied extensively in the literature; we refer to [4–10, 27–29] and the references cited therein. In [4], by using the concept of Lyapunov diagonal stability (LDS) and a linear matrix inequality approach, some criteria were given to ensure global stability and global exponential stability. Yuan and Cao in [5] considered global asymptotic stability of delayed Cohen-Grossberg neural networks via nonsmooth analysis. Robust exponential stability of delayed Cohen-Grossberg neural networks is discussed in [10]. In [27], the authors studied the stochastic stability of a class of Cohen-Grossberg neural networks in which the interconnections and delays are time varying.
In the above papers, a common feature is that the activation functions are assumed to be continuous, or even Lipschitz continuous. However, in [11], Forti and Nistri pointed out that neural networks modeled by differential equations with a discontinuous right-hand side are important and frequently arise in practice. In order to model discrete-time cellular neural networks, a conceptually analogous model based on hard comparators was used [12]. The class of neural networks introduced in [13] to deal with linear and nonlinear programming problems can be considered another important example. Those networks make use of constraint neurons with diode-like input-output activations. Once again, in order to ensure satisfaction of the constraints, the diodes are required to have a very high slope in the conducting region; that is, they should approximate the discontinuous characteristic of an ideal diode [14]. When dealing with dynamical systems containing high-slope nonlinear elements, a system of differential equations with a discontinuous right-hand side is often used, rather than a model with high but finite slope [15]. The reason for analyzing the ideal discontinuous case is that such analysis can reveal crucial features of the dynamics, such as the possibility that trajectories are confined for some time intervals to discontinuity surfaces. Another interesting phenomenon peculiar to discontinuous systems is the possibility that trajectories converge to an equilibrium point in finite time [16, 17], which is of special interest for designing real-time neural optimization solvers.
In [11], Forti and Nistri discussed the global convergence of neural networks with discontinuous neuron activations by means of the concepts and results on differential equations with a discontinuous right-hand side introduced by Filippov [21]. In [18], they extended the results in [11] under the assumption that the interconnection matrix is an M-matrix or H-matrix. In [19], without assuming the boundedness or the continuity of the neuron activations, the authors presented sufficient conditions for the global stability of neural networks with time delay based on linear matrix inequalities. Also, in [20], they presented some sufficient conditions for the global stability and exponential stability of a class of CGNNs by using the LDS property, and provided an estimate of the convergence rate. In [24–26], the authors discussed the stability or multistability of neural networks with discontinuous activation functions. However, [11, 24–26] have shown that convergence of the state does not imply convergence of the outputs. In addition, in practical applications, the result of a neural computation is usually the steady-state neuron output, rather than the asymptotic value of the state. Hence, in this paper, we study the global convergence of CGNNs with discontinuous activation functions, where the interconnection matrix is assumed to be an M-matrix or H-matrix. Firstly, using the property of M-matrices and a generalized Lyapunov-like approach, we prove the uniqueness of the state solutions and corresponding output solutions, as well as of the equilibrium point and corresponding output equilibrium point, for the considered neural networks. Then, global exponential stability of the unique equilibrium point is discussed and the exponential convergence rate is estimated. Also, by the contraction mapping principle, the global exponential stability of the limit cycle is established. Finally, we use a numerical example to illustrate the effectiveness of the theoretical results. The rest of the paper is organized as follows.
In Section 2, model description and preliminaries are presented. The main results are stated in Section 3. In Section 4, an example is given to show the validity of the obtained results. Finally, in Section 5, the conclusions are drawn.
Notations. Throughout the paper, the transpose and the inverse of any square matrix $A$ are denoted by $A^T$ and $A^{-1}$, respectively. For a vector $x \in \mathbb{R}^n$, $x > 0$ means $x_i > 0$ for $i = 1, \dots, n$. For $x, y \in \mathbb{R}^n$, $\langle x, y \rangle$ denotes the scalar product of $x$ and $y$.
2. Model Description and Preliminaries
Throughout this paper, we make the following assumptions.
(A1) Each amplification function $a_i$ is continuous and satisfies $0 < \underline{a}_i \le a_i(\rho) \le \overline{a}_i$ for all $\rho \in \mathbb{R}$, where $\underline{a}_i$ and $\overline{a}_i$ are positive constants, $i = 1, \dots, n$.
(A2) The interconnection matrix $T$ is nonsingular; that is, $\det T \neq 0$.
Moreover, $f$ is supposed to belong to the following class of discontinuous functions.
Definition 2.1 (see [11] (function class $\mathcal{D}$)). $f \in \mathcal{D}$ if and only if, for $i = 1, \dots, n$, the following conditions hold:
(i) $f_i$ is bounded on $\mathbb{R}$;
(ii) $f_i$ is piecewise continuous on $\mathbb{R}$; namely, $f_i$ is continuous on $\mathbb{R}$ except at a countable set of points of discontinuity $\rho$, where there exist finite right and left limits $f_i(\rho^+)$ and $f_i(\rho^-)$, respectively; moreover, $f_i$ has finitely many discontinuity points in any compact interval of $\mathbb{R}$;
(iii) $f_i$ is nondecreasing on $\mathbb{R}$.
Denote the set of discontinuity points of $f_i$ by $\mathcal{S}_i$, $i = 1, \dots, n$.
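As a sanity check of Definition 2.1, a prototypical class-$\mathcal{D}$ nonlinearity (a hard comparator, i.e., a sign-like function) can be verified numerically against conditions (i)–(iii); the function and the evaluation grid here are illustrative choices, not taken from the paper:

```python
import numpy as np

def f_step(rho):
    """A prototypical class-D activation: bounded, nondecreasing,
    piecewise continuous with a single jump at rho = 0 (hard comparator)."""
    return np.where(rho >= 0, 1.0, -1.0)

grid = np.linspace(-5, 5, 2001)
vals = f_step(grid)

bounded = np.all(np.abs(vals) <= 1.0)        # condition (i)
nondecreasing = np.all(np.diff(vals) >= 0)   # condition (iii)
# condition (ii): finite one-sided limits at the single discontinuity point 0
left_limit, right_limit = f_step(-1e-12), f_step(1e-12)
print(bounded, nondecreasing, left_limit, right_limit)
```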
Sometimes, $f$ is supposed to belong to the next class of discontinuous functions, which is included in $\mathcal{D}$.
Definition 2.2 (see [18] (function class $\mathcal{L}$)). $f \in \mathcal{L}$ if and only if $f \in \mathcal{D}$ and, for $i = 1, \dots, n$, $f_i$ is locally Lipschitz at every continuity point, with Lipschitz constant $\ell_i > 0$; that is, $|f_i(\rho_1) - f_i(\rho_2)| \le \ell_i |\rho_1 - \rho_2|$ for all continuity points $\rho_1, \rho_2$ in the same interval of continuity.
For model (1.1) or model (2.1) with a discontinuous right-hand side, the meaning of a solution of the Cauchy problem needs to be explained. In this paper, solutions in the sense of Filippov [21] are considered, whose definition is given next.
Let , where .
Definition 2.3. A function $x(t)$ is a solution (in the sense of Filippov) of (2.1) on an interval, with initial condition $x(0) = x_0$, if $x(t)$ is absolutely continuous on any compact subinterval and, for almost all (a.a.) $t$, $x(t)$ satisfies the differential inclusion associated with (2.1). Let $x(t)$, $t \ge 0$, be a solution of model (2.1). For a.a. $t \ge 0$, one obtains the corresponding output solution $y(t)$ of model (2.1): $y(t)$ is a bounded measurable function, uniquely defined by the state solution $x(t)$ for a.a. $t \ge 0$.
Definition 2.4 (equilibrium point). $x^* \in \mathbb{R}^n$ is an equilibrium point of model (2.1) if and only if the following algebraic inclusion is satisfied: $0 \in -b(x^*) + T\,\overline{\mathrm{co}}[f(x^*)] + I$, where $\overline{\mathrm{co}}[f(x^*)]$ denotes the closed convex hull of the values of $f$ at $x^*$.
In this paper, we also need the following definitions and lemma.
Definition 2.6 (see [23]). Let $A = (a_{ij})_{n \times n}$ be a square matrix. Matrix $A$ is said to be an M-matrix if and only if $a_{ij} \le 0$ for each $i \neq j$, and all successive principal minors of $A$ are positive.
Definition 2.7 (see [23]). Let $A = (a_{ij})_{n \times n}$ be a square matrix. Matrix $A$ is said to be an H-matrix if and only if the comparison matrix $\mathcal{M}(A)$ of $A$, which is defined by $\mathcal{M}(A)_{ii} = |a_{ii}|$ and $\mathcal{M}(A)_{ij} = -|a_{ij}|$ for $i \neq j$, is an M-matrix.
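Both definitions are mechanical to check numerically. The sketch below (matrix entries are illustrative) tests the off-diagonal sign pattern and the successive principal minors for the M-matrix property, and builds the comparison matrix for the H-matrix test:

```python
import numpy as np

def is_M_matrix(A):
    """Definition 2.6: off-diagonal entries <= 0 and all successive
    (leading) principal minors positive, checked via determinants."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    off_diag_ok = all(A[i, j] <= 0 for i in range(n) for j in range(n) if i != j)
    minors_ok = all(np.linalg.det(A[:k, :k]) > 0 for k in range(1, n + 1))
    return off_diag_ok and minors_ok

def comparison_matrix(A):
    """Definition 2.7: m_ii = |a_ii|, m_ij = -|a_ij| for i != j."""
    A = np.asarray(A, dtype=float)
    M = -np.abs(A)
    np.fill_diagonal(M, np.abs(np.diag(A)))
    return M

def is_H_matrix(A):
    return is_M_matrix(comparison_matrix(A))

A = np.array([[3.0, -1.0], [-1.0, 2.0]])   # an M-matrix
B = np.array([[3.0, 1.0], [1.0, 2.0]])     # not an M-matrix, but an H-matrix
print(is_M_matrix(A), is_M_matrix(B), is_H_matrix(B))
```

The matrix B fails the sign pattern of Definition 2.6 but its comparison matrix coincides with A, so B is an H-matrix.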
Lemma 2.8 (see [23]). Suppose that $A$ is an M-matrix. Then, there exists a vector $\xi > 0$ such that $A\xi > 0$.
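The lemma can be made constructive: a nonsingular M-matrix has an entrywise nonnegative inverse, so $\xi = A^{-1}\mathbf{1}$ is a positive vector satisfying $A\xi = \mathbf{1} > 0$. A quick check on an illustrative matrix:

```python
import numpy as np

# Lemma 2.8 constructively: for an M-matrix A, the inverse is entrywise
# nonnegative, so xi = A^{-1} 1 is positive and satisfies A xi = 1 > 0.
A = np.array([[3.0, -1.0],
              [-1.0, 2.0]])
xi = np.linalg.solve(A, np.ones(2))
print(xi, A @ xi)
```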
All results of this paper hold under one of the following assumptions: (a) $-T$ is an M-matrix; (b) $-T$ is an H-matrix such that $t_{ii} < 0$, $i = 1, \dots, n$.
Assumptions (a) and (b) can be applied to cooperative neural networks [22] and cooperative-competitive neural networks, respectively.
From [18], the result that $T$ is LDS under (a) or (b) can be obtained; hence, all results in [20] hold. So, for any input $I$, model (2.1) has a bounded absolutely continuous solution $x(t)$ for $t \ge 0$ which satisfies $x(0) = x_0$. Meanwhile, there exists an equilibrium point of model (2.1).
If $-T$ is an M-matrix, then there exists $\xi > 0$ such that $-T\xi > 0$. If $-T$ is an H-matrix such that $t_{ii} < 0$, then there exists $\xi > 0$ such that $\mathcal{M}(-T)\xi > 0$. Using the positive vector $\xi$, we define a distance in $\mathbb{R}^n$ as follows: for any $x, y \in \mathbb{R}^n$, define $d_\xi(x, y) = \sum_{i=1}^{n} \xi_i |x_i - y_i|$.
Definition 2.9. The equilibrium point $x^*$ of (2.1) is said to be globally exponentially stable if there exist constants $M > 0$ and $\varepsilon > 0$ such that, for any solution $x(t)$ of model (2.1), we have $d_\xi(x(t), x^*) \le M\, d_\xi(x(0), x^*)\, e^{-\varepsilon t}$ for $t \ge 0$.
Also, we can consider the CGNNs with periodic input:
$$\dot{x}_i(t) = a_i(x_i(t))\Big[-b_i(x_i(t)) + \sum_{j=1}^{n} t_{ij} f_j(x_j(t)) + I_i(t)\Big], \quad i = 1, \dots, n,$$
where $I(t) = (I_1(t), \dots, I_n(t))^T$ is a periodic input vector with period $\omega$.
Definition 2.10. A periodic orbit $x^*(t)$ of Cohen-Grossberg networks is said to be globally exponentially stable if there exist constants $M > 0$ and $\varepsilon > 0$ such that, for any solution $x(t)$ of model (2.13), we have $d_\xi(x(t), x^*(t)) \le M\, d_\xi(x(0), x^*(0))\, e^{-\varepsilon t}$ for $t \ge 0$.
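The exponential decay exponent in Definition 2.9 can be estimated numerically from a simulated trajectory. The sketch below (a hypothetical network with unit amplification and smooth activation, not the paper's example) measures the distance to the equilibrium at two times and extracts an empirical rate:

```python
import numpy as np

T = np.array([[-2.0, 0.5], [0.5, -2.0]])   # hypothetical; -T is an M-matrix
I = np.array([1.0, -1.0])
dt = 1e-3

def flow(x, steps):
    # forward-Euler flow of dx/dt = -x + T tanh(x) + I
    x = np.array(x, dtype=float)
    for _ in range(steps):
        x = x + dt * (-x + T @ np.tanh(x) + I)
    return x

x_star = flow(np.zeros(2), 30000)              # equilibrium (long-run state)
x0 = np.array([4.0, -4.0])
d0 = np.linalg.norm(x0 - x_star)
d5 = np.linalg.norm(flow(x0, 5000) - x_star)   # distance after t = 5

eps_hat = -np.log(d5 / d0) / 5.0               # empirical decay exponent
print(eps_hat)
```

A positive `eps_hat` bounded away from zero is the numerical signature of the exponential envelope required by the definition.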
3. Main Results
In this section, we shall establish some sufficient conditions to ensure the uniqueness of solutions, equilibrium point, output equilibrium point, and limit cycle as well as the global exponential stability of the state solutions.
Because a Filippov solution involves a set-valued map, in general a discontinuous differential equation may have multiple solutions starting from a given initial condition. Next, it will be shown that uniqueness of solutions of model (2.1) can be obtained under assumptions (A1) and (A2).
Theorem 3.1. Under assumptions (A1) and (A2), if $f \in \mathcal{D}$ and $-T$ is an M-matrix, or $-T$ is an H-matrix such that $t_{ii} < 0$, then, for any $x_0 \in \mathbb{R}^n$, there is a unique solution $x(t)$ of model (2.1) with initial condition $x(0) = x_0$, which is defined and bounded for all $t \ge 0$. Meanwhile, the corresponding output solution of model (2.1) is uniquely defined and bounded for a.a. $t \ge 0$.
Proof. We only need to prove uniqueness. Let $x(t)$ and $z(t)$ be two solutions of model (2.1) with the same initial condition $x(0) = z(0) = x_0$.
Define $V(t) = d_\xi(x(t), z(t)) = \sum_{i=1}^{n} \xi_i |x_i(t) - z_i(t)|$. Computing the time derivative of $V(t)$ along the solutions of (2.1), and using (A1) together with the M-matrix (or H-matrix) property of $-T$, one obtains $\dot{V}(t) \le c\,V(t)$ for a.a. $t \ge 0$, where $c > 0$ is a constant. Integrating this inequality between $0$ and $t$, we have $V(t) \le V(0)e^{ct} = 0$, and hence $x(t) = z(t)$ for any $t \ge 0$; that is, the solution of model (2.1) with initial condition $x_0$ is unique.
From (2.5), the output solution corresponding to $x(t)$ is uniquely defined and bounded for a.a. $t \ge 0$. The proof of Theorem 3.1 is completed.
Remark 3.2. Under assumptions (A1) and (A2), if $f \in \mathcal{D}$ and $-T$ is an M-matrix, or $-T$ is an H-matrix such that $t_{ii} < 0$, then, for any input $I$, model (2.1) has a unique equilibrium point and a unique corresponding output equilibrium point. Indeed, from these assumptions, $T$ is LDS; hence, from Theorem 6 in [20], model (2.1) has a unique equilibrium point. By Definition 2.5, it is easily seen that the corresponding output equilibrium point is unique.
Next, global exponential stability of the equilibrium point of model (2.1) and the uniqueness and global exponential stability of the limit cycle of model (2.13) are addressed. The results are given in the following theorems.
Theorem 3.3. Under assumptions (A1) and (A2), if $f \in \mathcal{D}$ and $-T$ is an M-matrix, or $-T$ is an H-matrix such that $t_{ii} < 0$, then, for any input $I$, model (2.1) has a unique equilibrium point $x^*$, which is globally exponentially stable.
Proof. Let $x(t)$, $t \ge 0$, be the solution of model (2.1) such that $x(0) = x_0$, and, for a.a. $t \ge 0$, let $y(t)$ be the corresponding output solution. For the equilibrium point $x^*$, let $y^*$ be the corresponding output equilibrium point.
Since $-T\xi > 0$, we can choose a small $\varepsilon > 0$ such that the inequality remains strict after perturbation by $\varepsilon$. Define $V(t) = e^{\varepsilon t} \sum_{i=1}^{n} \xi_i \left| \int_{x_i^*}^{x_i(t)} \frac{ds}{a_i(s)} \right|$. Computing the time derivative of $V(t)$ along the solutions of (2.1), it follows that $\dot{V}(t) \le 0$ for a.a. $t \ge 0$, where the M-matrix (or H-matrix) property of $-T$ is used.
On the other hand, by (A1), $V(t) \ge e^{\varepsilon t} d_\xi(x(t), x^*)/\bar{a}$ and $V(0) \le d_\xi(x(0), x^*)/\underline{a}$, where $\bar{a} = \max_i \overline{a}_i$ and $\underline{a} = \min_i \underline{a}_i$.
So, the following inequality holds: $d_\xi(x(t), x^*) \le (\bar{a}/\underline{a})\, d_\xi(x(0), x^*)\, e^{-\varepsilon t}$, $t \ge 0$; that is, $x^*$ is globally exponentially stable.
Remark 3.4. Since $\varepsilon$ depends on the bounds of the amplification functions, the exponential convergence rate can be estimated by means of the maximal allowable value of $\varepsilon$ in the above inequality. From this, one can see that the amplification functions have a key effect on the convergence rate of the considered model.
Next, the uniqueness and the exponential stability of the limit cycle for model (2.13) are given.
Theorem 3.5. Under assumptions (A1) and (A2), if $f \in \mathcal{D}$ and $-T$ is an M-matrix, or $-T$ is an H-matrix such that $t_{ii} < 0$, then model (2.13) has a unique globally exponentially stable limit cycle.
Proof. Let $x(t)$ and $z(t)$ be two solutions of model (2.13) with initial conditions $x(0) = x_0$ and $z(0) = z_0$, respectively.
Define $V(t) = e^{\varepsilon t} \sum_{i=1}^{n} \xi_i \left| \int_{z_i(t)}^{x_i(t)} \frac{ds}{a_i(s)} \right|$. Similar to the proof of Theorem 3.3, the following inequality holds: $d_\xi(x(t), z(t)) \le (\bar{a}/\underline{a})\, d_\xi(x_0, z_0)\, e^{-\varepsilon t}$, $t \ge 0$. Define a mapping $P: \mathbb{R}^n \to \mathbb{R}^n$ by $P(x_0) = x(\omega; x_0)$; then $P^m(x_0) = x(m\omega; x_0)$ for every positive integer $m$. We can choose a positive integer $m$ such that, for a positive constant $\delta < 1$, $(\bar{a}/\underline{a})\, e^{-\varepsilon m \omega} \le \delta$. And, from (3.14), we have $d_\xi(P^m(x_0), P^m(z_0)) \le \delta\, d_\xi(x_0, z_0)$; that is, $P^m$ is a contraction. By the contraction mapping principle, there exists a unique fixed point $x_0^*$ such that $P^m(x_0^*) = x_0^*$. In addition, $P^m(P(x_0^*)) = P(P^m(x_0^*)) = P(x_0^*)$; that is, $P(x_0^*)$ is also a fixed point of $P^m$. By the uniqueness of the fixed point of the mapping $P^m$, $P(x_0^*) = x_0^*$; that is, $x(\omega; x_0^*) = x_0^*$. Let $x(t; x_0^*)$ be the state of model (2.13) with initial condition $x_0^*$; then $x(t + \omega; x_0^*)$ is also a state of model (2.13), with initial state $x(\omega; x_0^*) = x_0^*$; hence, from Theorem 3.1, $x(t + \omega; x_0^*) = x(t; x_0^*)$ for all $t \ge 0$. Hence, $x(t; x_0^*)$ is an isolated periodic orbit of model (2.13) with period $\omega$, that is, a limit cycle of model (2.13). From (3.14), we can obtain that it is globally exponentially stable. The proof of Theorem 3.5 is completed.
Remark 3.6. Similar to the results given in [11], global convergence of the output solutions in finite time can also be discussed; this is embodied in the following example, and the detailed results are omitted.
4. Illustrative Example
In this section, we shall give an example to illustrate the effectiveness of our results.
Example 4.1. Consider the following CGNN model (4.1). Obviously, $-T$ is an M-matrix. Also, the subsets considered in this example are the same as those in Example 1 in [18], which are depicted in detail in Figure 3 of [18].
Firstly, we choose a constant input. The equilibrium point of model (4.1) and the corresponding output equilibrium point are then uniquely determined. Global convergence of the state and the output in finite time can be obtained. Figure 1 depicts the behavior of the state solution and the output solution for the chosen initial condition.
Then, we choose a different input, which yields a different equilibrium point and output equilibrium point of model (4.1). Simulation results on the global convergence in finite time of the state solution and the corresponding output solution are depicted in Figure 3.
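Since the paper's coefficient matrices for (4.1) are not reproduced above, the following sketch runs a hypothetical two-neuron discontinuous CGNN in the same spirit: hard-comparator activations with $-T$ an M-matrix, integrated by crude forward Euler (ignoring sliding-mode subtleties, which do not arise for these coefficients). The output settles in finite time, well before the state reaches its equilibrium:

```python
import numpy as np

# Hypothetical coefficients (not the paper's): -T is an M-matrix.
T = np.array([[-2.0, 0.5],
              [0.5, -2.0]])
I = np.array([3.0, -3.0])
sign = lambda x: np.where(x > 0, 1.0, np.where(x < 0, -1.0, 0.0))

dt, x = 1e-3, np.array([-1.0, 1.0])
outputs = []
for _ in range(10000):                    # integrate 10 time units
    x = x + dt * (-x + T @ sign(x) + I)   # dx/dt = -x + T sign(x) + I
    outputs.append(sign(x).copy())

print(outputs[-1], x)  # steady output vs. state still only near equilibrium
```

Here the output $\mathrm{sign}(x(t))$ locks onto its steady value as soon as both states cross zero, early in the run, illustrating the finite-time output convergence discussed in Remark 3.6.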
5. Conclusions
In this paper, by using the property of M-matrices and a generalized Lyapunov-like approach, global convergence of CGNNs possessing discontinuous activation functions has been investigated under the condition that the neuron interconnection matrix belongs to the class of M-matrices or H-matrices. Uniqueness has been proved for the equilibrium point and the corresponding output equilibrium point of the considered neural networks. It has also been proved that, for the considered model, the solution starting from a given initial condition is unique. Meanwhile, global exponential stability of the equilibrium point has been obtained for any input. Furthermore, by the contraction mapping principle, the uniqueness and global exponential stability of the limit cycle have been established.
Acknowledgments
The authors appreciate the editor's work and the reviewers' insightful comments and constructive suggestions. This work is supported by the Excellent Youthful Talent Foundation of Colleges and Universities of Anhui Province of China under Grant no. 2011SQRL030ZD.
References
- M. A. Cohen and S. Grossberg, “Absolute stability of global pattern formation and parallel memory storage by competitive neural networks,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 13, no. 5, pp. 815–826, 1983.
- K. Gopalsamy, “Global asymptotic stability in a periodic Lotka-Volterra system,” Australian Mathematical Society B, vol. 27, no. 1, pp. 66–72, 1985.
- X. Liao, S. Yang, S. Chen, and Y. Fu, “Stability of general neural networks with reaction-diffusion,” Science in China F, vol. 44, pp. 389–395, 2001.
- W. Lu and T. Chen, “New conditions on global stability of Cohen-Grossberg neural networks,” Neural Computation, vol. 15, no. 5, pp. 1173–1189, 2003.
- K. Yuan and J. Cao, “An analysis of global asymptotic stability of delayed Cohen-Grossberg neural networks via nonsmooth analysis,” IEEE Transactions on Circuits and Systems I, vol. 52, no. 9, pp. 1854–1861, 2005.
- J. Cao and X. Li, “Stability in delayed Cohen-Grossberg neural networks: LMI optimization approach,” Physica D, vol. 212, no. 1-2, pp. 54–65, 2005.
- C. C. Hwang, C. J. Cheng, and T. L. Liao, “Globally exponential stability of generalized Cohen-Grossberg neural networks with delays,” Physics Letters A, vol. 319, no. 1-2, pp. 157–166, 2003.
- J. Cao and J. Liang, “Boundedness and stability for Cohen-Grossberg neural network with time-varying delays,” Journal of Mathematical Analysis and Applications, vol. 296, no. 2, pp. 665–685, 2004.
- J. Zhang, Y. Suda, and H. Komine, “Global exponential stability of Cohen-Grossberg neural networks with variable delays,” Physics Letters A, vol. 338, no. 1, pp. 44–50, 2005.
- T. Chen and L. Rong, “Robust global exponential stability of Cohen-Grossberg neural networks with time delays,” IEEE Transactions on Neural Networks, vol. 15, no. 1, pp. 203–205, 2004.
- M. Forti and P. Nistri, “Global convergence of neural networks with discontinuous neuron activations,” IEEE Transactions on Circuits and Systems I, vol. 50, no. 11, pp. 1421–1435, 2003.
- H. Harrer, J. A. Nossek, and R. Stelzl, “An analog implementation of discrete-time cellular neural networks,” IEEE Transactions on Neural Networks, vol. 3, no. 3, pp. 466–476, 1992.
- M. P. Kennedy and L. O. Chua, “Neural networks for nonlinear programming,” IEEE Transactions on Circuits and Systems I, vol. 35, no. 5, pp. 554–562, 1988.
- L. O. Chua, C. A. Desoer, and E. S. Kuh, Linear and Nonlinear Circuits, McGraw-Hill, New York, NY, USA, 1987.
- V. I. Utkin, Sliding Modes and Their Application in Variable Structure Systems, MIR, Moscow, Russia, 1978.
- J.-P. Aubin and A. Cellina, Differential Inclusions, Springer, Berlin, Germany, 1984.
- B. E. Paden and S. S. Sastry, “A calculus for computing Filippov's differential inclusion with application to the variable structure control of robot manipulators,” IEEE Transactions on Circuits and Systems, vol. 34, no. 1, pp. 73–82, 1987.
- M. Forti, “M-matrices and global convergence of discontinuous neural networks,” International Journal of Circuit Theory and Applications, vol. 35, no. 2, pp. 105–130, 2007.
- W. Lu and T. Chen, “Dynamical behaviors of delayed neural network systems with discontinuous activation functions,” Neural Computation, vol. 18, no. 3, pp. 683–708, 2006.
- W. Lu and T. Chen, “Dynamical behaviors of Cohen-Grossberg neural networks with discontinuous activation functions,” Neural Networks, vol. 18, no. 3, pp. 231–242, 2005.
- A. F. Filippov, “Differential equations with discontinuous right-hand side,” American Mathematical Society Translations, vol. 42, pp. 199–231, 1964.
- M. Hirsch, “Convergent activation dynamics in continuous time networks,” Neural Networks, vol. 2, pp. 331–349, 1989.
- D. Hershkowitz, “Recent directions in matrix stability,” Linear Algebra and its Applications, vol. 171, pp. 161–186, 1992.
- H. Wu, “Global stability analysis of a general class of discontinuous neural networks with linear growth activation functions,” Information Sciences, vol. 179, no. 19, pp. 3432–3441, 2009.
- G. Huang and J. Cao, “Multistability of neural networks with discontinuous activation function,” Communications in Nonlinear Science and Numerical Simulation, vol. 13, no. 10, pp. 2279–2289, 2008.
- H. Wu and C. Shan, “Stability analysis for periodic solution of BAM neural networks with discontinuous neuron activations and impulses,” Applied Mathematical Modelling, vol. 33, no. 6, pp. 2564–2574, 2009.
- C. Huang and J. Cao, “Stochastic dynamics of nonautonomous Cohen-Grossberg neural networks,” Abstract and Applied Analysis, vol. 2011, Article ID 297147, 17 pages, 2011.
- X. Yang, C. Huang, D. Zhang, and Y. Long, “Dynamics of Cohen-Grossberg neural networks with mixed delays and impulses,” Abstract and Applied Analysis, vol. 2008, Article ID 432341, 14 pages, 2008.
- Q. Liu and W. Zheng, “Bifurcation of a Cohen-Grossberg neural network with discrete delays,” Abstract and Applied Analysis, vol. 2012, Article ID 909385, 11 pages, 2012.
- M. J. Park, O. M. Kwon, J. H. Park, S. M. Lee, and E. J. Cha, “Synchronization criteria for coupled stochastic neural networks with time-varying delays and leakage delay,” Journal of the Franklin Institute, vol. 349, no. 5, pp. 1699–1720, 2012.
- O. M. Kwon, S. M. Lee, J. H. Park, and E. J. Cha, “New approaches on stability criteria for neural networks with interval time-varying delays,” Applied Mathematics and Computation, vol. 218, no. 19, pp. 9953–9964, 2012.
- O. M. Kwon, J. H. Park, S. M. Lee, and E. J. Cha, “A new augmented Lyapunov-Krasovskii functional approach to exponential passivity for neural networks with time-varying delays,” Applied Mathematics and Computation, vol. 217, no. 24, pp. 10231–10238, 2011.