Abstract and Applied Analysis
Volume 2013 (2013), Article ID 270791, 9 pages
http://dx.doi.org/10.1155/2013/270791
Research Article

Asymptotic Behavior of Switched Stochastic Delayed Cellular Neural Networks via Average Dwell Time Method

1College of Mathematics and Computing Science, Changsha University of Science and Technology, Changsha, Hunan 410114, China
2Hunan Provincial Center for Disease Control and Prevention, Changsha, Hunan 410005, China
3School of Business, Hunan Normal University, Changsha, Hunan 410081, China

Received 28 January 2013; Accepted 31 March 2013

Academic Editor: Zhichun Yang

Copyright © 2013 Hanfeng Kuang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The asymptotic behavior of a class of switched stochastic cellular neural networks (CNNs) with mixed delays (discrete time-varying delays and distributed time-varying delays) is investigated in this paper. Employing the average dwell time (ADT) approach, stochastic analysis techniques, and the linear matrix inequality (LMI) technique, some novel sufficient conditions on the asymptotic behavior (the mean-square ultimate boundedness, the existence of an attractor, and the mean-square exponential stability) are established. A numerical example is provided to illustrate the effectiveness of the proposed results.

1. Introduction

Since Chua and Yang’s seminal work on cellular neural networks (CNNs) in 1988 [1, 2], CNNs have been successfully applied in various areas such as signal processing, pattern recognition, associative memory, and optimization problems (see, e.g., [3–5]). From a practical point of view, both in biological and man-made neural networks, the processing of moving images and pattern recognition problems require the introduction of delays in the signals transmitted among the cells [6, 7]. Beyond the widely used discrete delays, distributed delays arise because neural networks usually have a spatial extent due to the presence of a multitude of parallel pathways with a variety of axon sizes and lengths. The mathematical model can be described by the following differential equations:

$$\dot{x}_i(t) = -c_i x_i(t) + \sum_{j=1}^{n} a_{ij} f_j(x_j(t)) + \sum_{j=1}^{n} b_{ij} f_j(x_j(t-\tau(t))) + \sum_{j=1}^{n} d_{ij} \int_{t-\eta(t)}^{t} f_j(x_j(s))\,ds + I_i, \quad (1)$$

where $i = 1, 2, \ldots, n$, and $n$ corresponds to the number of units in a neural network; $x_i(t)$ denotes the potential (or voltage) of cell $i$ at time $t$; $f_j(\cdot)$ denotes a nonlinear output function; $c_i > 0$ denotes the rate with which cell $i$ resets its potential to the resting state when isolated from other cells and external inputs; $a_{ij}$, $b_{ij}$, $d_{ij}$ denote the strengths of connectivity between cells $j$ and $i$, respectively; $I_i$ denotes the external input; and $\tau(t)$ and $\eta(t)$ correspond to the discrete time-varying delays and distributed time-varying delays, respectively.
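To make the structure of such a model concrete, the following Python sketch evaluates one step of the right-hand side of a mixed-delay CNN, assuming the conventional form described above; all matrices, the tanh activation, and the inputs are illustrative placeholders, not values from this paper.

```python
import numpy as np

def cnn_rhs(x, x_delayed, x_dist_integral, C, A, B, D, I, f=np.tanh):
    """Evaluate the drift of a mixed-delay CNN at one time instant.

    x               -- current state x(t)
    x_delayed       -- state at the discrete delay, x(t - tau(t))
    x_dist_integral -- approximation of the distributed-delay integral
                       of f(x(s)) over [t - eta(t), t]
    C               -- diagonal self-feedback (reset) rates c_i
    A, B, D         -- connection weight matrices
    I               -- constant external input
    """
    return (-C @ x + A @ f(x) + B @ f(x_delayed)
            + D @ x_dist_integral + I)

# Tiny two-neuron illustration with made-up weights.
C = np.diag([1.0, 1.0])
A = np.array([[0.2, -0.1], [0.1, 0.3]])
B = np.array([[0.1, 0.0], [0.0, 0.1]])
D = np.array([[0.05, 0.0], [0.0, 0.05]])
I = np.array([0.1, -0.1])
x = np.zeros(2)
drift = cnn_rhs(x, x, np.zeros(2), C, A, B, D, I)
```

At the origin all activation terms vanish, so the drift reduces to the external input vector.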

Neural networks are inherently nonlinear; in the real world, nonlinear problems are not exceptional but regular phenomena, and nonlinearity is the nature of matter and its development [8, 9]. Although discrete delays combined with distributed delays can usually provide a good approximation of the underlying model, most real models are often affected by external perturbations of great uncertainty. For instance, in electronic implementations, it was realized that stochastic disturbances are mostly inevitable owing to thermal noise. As Haykin [10] points out, in real nervous systems synaptic transmission is a noisy process brought on by random fluctuations in the release of neurotransmitters and other probabilistic causes. Consequently, noise is unavoidable and should be taken into consideration in modeling. Moreover, it has been well recognized that a CNN can be stabilized or destabilized by certain stochastic inputs. Therefore, it is of significant importance to consider stochastic effects in delayed neural networks. One approach to the mathematical incorporation of such effects is to use probabilistic threshold models. However, the previous literature has focused mainly on the stability of stochastic neural networks with delays [11–14]. Actually, the study of dynamical systems involves not only stability properties but also other dynamic behaviors, such as ultimate boundedness and attractors. However, there are very few results on the ultimate boundedness and attractors of stochastic neural networks [15–17]. Hence, discussing the asymptotic behavior of neural networks with mixed delays is valuable and meaningful.

On the other hand, neural networks often exhibit a special characteristic of network mode switching; that is, a neural network sometimes has finitely many modes that switch from one to another at different times according to a switching law generated by a switching logic. As an important class of hybrid systems, switched systems arise in many practical processes. The analysis of switched systems has drawn considerable attention in recent papers, since such systems have numerous applications in the control of mechanical systems, computer communities, the automotive industry, electric power systems, and many other fields [18–22]. Most recently, the stability analysis of switched neural systems has been further investigated, mainly based on Lyapunov functions [23, 24]. It is worth noting that the average dwell time (ADT) approach is an effective method for switched systems, which avoids the need for a common Lyapunov function and can be adopted to obtain less conservative stability conditions. For instance, based on the average dwell time method, stability problems have been discussed for uncertain switched Cohen-Grossberg neural networks with interval time-varying delay and distributed time-varying delay in [25]. In [26], the average dwell time method has been utilized to obtain sufficient conditions for the exponential stability and the weighted L2-gain of a class of switched systems.

However, it is worth emphasizing that when the activation functions are unbounded, as in some special applications, the existence of an equilibrium point cannot be guaranteed [27]. Therefore, in these circumstances, discussing the stability of an equilibrium point of switched neural networks becomes infeasible, which motivated us to consider the ultimate boundedness and attractor of switched neural networks. Unfortunately, as far as we know, the asymptotic behavior of switched systems with mixed time delays has not been investigated yet, let alone that of switched stochastic systems. Such research is challenging and interesting, since it integrates switched hybrid systems with stochastic systems, and is thus theoretically and practically significant. This suggests that the asymptotic behavior of switched stochastic neural networks with mixed delays should be studied intensively.

Motivated by the above analysis, the main purpose of this paper is to derive sufficient conditions on the asymptotic behavior (the mean-square ultimate boundedness, the existence of an attractor, and the mean-square exponential stability) of the switched stochastic system. This paper is organized as follows. In Section 2, the considered model of a switched stochastic CNN with mixed delays is presented. Some necessary assumptions, definitions, and lemmas are also given in this section. In Section 3, the mean-square ultimate boundedness and attractor of the proposed model are studied. A numerical example is provided to demonstrate the effectiveness of the theoretical results in Section 4, and we conclude this paper in Section 5.

2. Problem Formulation

In general, a stochastic cellular neural network with mixed delays can be described as follows:

$$dx(t) = \Big[-Cx(t) + Af(x(t)) + Bf(x(t-\tau(t))) + D\int_{t-\eta(t)}^{t} f(x(s))\,ds + I\Big]\,dt + \rho(t, x(t), x(t-\tau(t)))\,d\omega(t), \quad (2)$$

where $x(t) = (x_1(t), \ldots, x_n(t))^T$ is the state vector, $C = \mathrm{diag}(c_1, \ldots, c_n)$, $A = (a_{ij})_{n\times n}$, $B = (b_{ij})_{n\times n}$, $D = (d_{ij})_{n\times n}$, $I = (I_1, \ldots, I_n)^T$, $f(x(\cdot)) = (f_1(x_1(\cdot)), \ldots, f_n(x_n(\cdot)))^T$, $\rho(\cdot,\cdot,\cdot)$ is a matrix-valued function, and $\omega(t)$ is an $n$-dimensional Brownian motion defined on a complete probability space $(\Omega, \mathcal{F}, P)$ with a natural filtration $\{\mathcal{F}_t\}_{t\ge 0}$ (i.e., $\mathcal{F}_t = \sigma\{\omega(s) : 0 \le s \le t\}$).

By introducing the switching signal into system (2) and taking a set of such neural networks as the individual subsystems, the switched system can be obtained, described as

$$dx(t) = \Big[-C_{\sigma(t)}x(t) + A_{\sigma(t)}f(x(t)) + B_{\sigma(t)}f(x(t-\tau(t))) + D_{\sigma(t)}\int_{t-\eta(t)}^{t} f(x(s))\,ds + I_{\sigma(t)}\Big]\,dt + \rho_{\sigma(t)}(t, x(t), x(t-\tau(t)))\,d\omega(t), \quad (3)$$

where $\sigma(t): [0, +\infty) \to \{1, 2, \ldots, N\}$ is the switching signal. At each time instant $t$, the index $i$ of the active subsystem (i.e., $\sigma(t) = i$) means that the $i$th subsystem is activated.
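As a rough illustration of how such a switched stochastic delayed network evolves, the following Euler-Maruyama sketch simulates two hypothetical subsystems under a periodic switching signal; the subsystem data, the constant initial history, the additive noise intensity, and the omission of the distributed-delay term are all simplifying assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(subsystems, switch_signal, x0, tau, dt=1e-3, T=1.0, noise=0.01):
    """Euler-Maruyama sketch of a switched stochastic delayed network.

    Drift of subsystem k:  -C_k x + A_k tanh(x) + B_k tanh(x(t - tau));
    the distributed-delay term of the full model is omitted for brevity,
    and a constant initial history equal to x0 is assumed.
    """
    tau_steps = int(tau / dt)
    xs = [np.array(x0, float)] * (tau_steps + 1)        # constant history
    for step in range(int(T / dt)):
        C, A, B = subsystems[switch_signal(step * dt)]
        x, x_del = xs[-1], xs[-1 - tau_steps]
        drift = -C @ x + A @ np.tanh(x) + B @ np.tanh(x_del)
        dW = rng.normal(0.0, np.sqrt(dt), size=x.shape)  # Brownian increment
        xs.append(x + drift * dt + noise * dW)
    return np.array(xs)

# Two made-up stable subsystems; switch every 0.25 time units.
subsystems = {0: (2.0 * np.eye(2), 0.2 * np.eye(2), 0.1 * np.eye(2)),
              1: (1.5 * np.eye(2), 0.1 * np.eye(2), 0.2 * np.eye(2))}
sigma = lambda t: int(t // 0.25) % 2
traj = simulate(subsystems, sigma, x0=[1.0, -1.0], tau=0.05)
```

With both subsystems strongly damped and small noise, the trajectory stays bounded, in the spirit of the mean-square ultimate boundedness studied below.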

For the convenience of discussion, it is necessary to introduce some notation. $\mathbb{R}^n$ denotes the $n$-dimensional Euclidean space. $A \ge B$ ($A > B$) means that each pair of corresponding elements of $A$ and $B$ satisfies the inequality "$\ge$" ("$>$"). $A$ is called a positive (negative) matrix if $A > 0$ ($A < 0$). $A^T$ denotes the transpose of any square matrix $A$, and the symbol "*" within a matrix represents the symmetric term of the matrix. $\lambda_{\min}(A)$ denotes the minimum eigenvalue of matrix $A$, and $\lambda_{\max}(A)$ denotes the maximum eigenvalue of matrix $A$. $E$ denotes the unit matrix.

Let $C([-\bar{\tau}, 0]; \mathbb{R}^n)$ denote the Banach space of continuous functions mapping $[-\bar{\tau}, 0]$ into $\mathbb{R}^n$, equipped with the topology of uniform convergence, where $\bar{\tau} = \max\{\tau, \eta\}$. For any $\varphi \in C([-\bar{\tau}, 0]; \mathbb{R}^n)$, we define $\|\varphi\| = \sup_{-\bar{\tau} \le s \le 0} |\varphi(s)|$.

The initial conditions for system (3) are given in the form $x(s) = \xi(s)$, $s \in [-\bar{\tau}, 0]$, where $\xi$ belongs to the family of all $\mathcal{F}_0$-measurable bounded $C([-\bar{\tau}, 0]; \mathbb{R}^n)$-valued random variables.

Throughout this paper, we assume that the following assumptions are always satisfied. The discrete time-varying delay $\tau(t)$ and the distributed time-varying delay $\eta(t)$ satisfy $0 \le \tau(t) \le \tau$, $\dot{\tau}(t) \le \mu$, and $0 \le \eta(t) \le \eta$, where $\tau$, $\eta$, $\mu$ are scalars. There exist constants $l_j^-$ and $l_j^+$, $j = 1, 2, \ldots, n$, such that $l_j^- \le \frac{f_j(u) - f_j(v)}{u - v} \le l_j^+$ for all $u, v \in \mathbb{R}$ with $u \ne v$. Moreover, we define $L_1 = \mathrm{diag}(l_1^- l_1^+, \ldots, l_n^- l_n^+)$ and $L_2 = \mathrm{diag}\big(\tfrac{l_1^- + l_1^+}{2}, \ldots, \tfrac{l_n^- + l_n^+}{2}\big)$. We assume that $\rho$ is locally Lipschitz continuous and satisfies the following condition: $\mathrm{trace}[\rho^T(t, x, y)\rho(t, x, y)] \le \|M_1 x\|^2 + \|M_2 y\|^2$, where $M_1$, $M_2$ are constant matrices with appropriate dimensions.

Some definitions and lemmas are introduced as follows.

Definition 1 (see [15]). System (2) is said to be mean-square ultimately bounded if there exists a constant vector $B > 0$ such that, for any initial value $\xi$, there is a $T(\xi) > 0$ such that, for all $t \ge T(\xi)$, the solution $x(t; \xi)$ of system (2) satisfies $\mathbb{E}\|x(t)\|^2 \le B$.
In this case, the set $\{x : \mathbb{E}\|x\|^2 \le B\}$ is said to be an attractor of system (2) in the mean-square sense.
Clearly, the proposition above is equivalent to $\limsup_{t \to \infty} \mathbb{E}\|x(t)\|^2 \le B$.

Definition 2 (see [28]). For any switching signal $\sigma(t)$, there is a corresponding switching sequence $\{t_0, t_1, \ldots, t_k, \ldots\}$, meaning that the $\sigma(t_k)$th subsystem is activated during $[t_k, t_{k+1})$, where $k$ denotes the switching ordinal number. Given any finite constants $T_2 > T_1 \ge 0$, denote the number of discontinuities of the switching signal $\sigma$ over the time interval $(T_1, T_2)$ by $N_\sigma(T_1, T_2)$. If $N_\sigma(T_1, T_2) \le N_0 + (T_2 - T_1)/\tau_a$ holds for $\tau_a > 0$, $N_0 \ge 0$, then $\tau_a$ is called the average dwell time and $N_0$ the chatter bound.
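The ADT property of Definition 2 can be checked directly on a recorded switching sequence. A minimal sketch follows; the switching instants, the grid resolution, and the periodic example are illustrative assumptions.

```python
import numpy as np

def switches_in(instants, t0, t1):
    """Count switching instants falling in the half-open interval (t0, t1]."""
    return sum(1 for t in instants if t0 < t <= t1)

def satisfies_adt(instants, tau_a, N0, t_end, grid=200):
    """Check N(t0, t1) <= N0 + (t1 - t0)/tau_a on a grid of sub-intervals."""
    pts = np.linspace(0.0, t_end, grid)
    return all(
        switches_in(instants, a, b) <= N0 + (b - a) / tau_a
        for i, a in enumerate(pts) for b in pts[i + 1:]
    )

# A signal switching every 0.5 time units has average dwell time 0.5:
# it satisfies the ADT bound for tau_a = 0.5 but not for tau_a = 2.0.
instants = [0.5 * k for k in range(1, 20)]
ok = satisfies_adt(instants, tau_a=0.5, N0=1, t_end=10.0)
bad = satisfies_adt(instants, tau_a=2.0, N0=1, t_end=10.0)
```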

Lemma 3. Let $x$ and $y$ be any $n$-dimensional real vectors, let $P$ be a positive semidefinite matrix, and let $\varepsilon > 0$ be a scalar. Then the following inequality holds: $2x^T P y \le \varepsilon x^T P x + \varepsilon^{-1} y^T P y$.

Lemma 4 (see [29]). For any positive definite constant matrix $M \in \mathbb{R}^{n \times n}$ and a scalar $r > 0$, if there exists a vector function $\omega: [0, r] \to \mathbb{R}^n$ such that the integrals $\int_0^r \omega(s)\,ds$ and $\int_0^r \omega^T(s) M \omega(s)\,ds$ are well defined, then $\big(\int_0^r \omega(s)\,ds\big)^T M \big(\int_0^r \omega(s)\,ds\big) \le r \int_0^r \omega^T(s) M \omega(s)\,ds$.

3. Main Results

Let $C^{2,1}(\mathbb{R}^n \times \mathbb{R}_+; \mathbb{R}_+)$ denote the family of all nonnegative functions $V(x, t)$ on $\mathbb{R}^n \times \mathbb{R}_+$ which are continuously twice differentiable in $x$ and once differentiable in $t$. If $V \in C^{2,1}(\mathbb{R}^n \times \mathbb{R}_+; \mathbb{R}_+)$, define an operator $\mathcal{L}V$ associated with the general stochastic system $dx(t) = g(t)\,dt + \rho(t)\,d\omega(t)$ as $\mathcal{L}V(x, t) = V_t(x, t) + V_x(x, t)g(t) + \frac{1}{2}\mathrm{trace}[\rho^T(t) V_{xx}(x, t) \rho(t)]$, where $V_t = \partial V/\partial t$, $V_x = (\partial V/\partial x_1, \ldots, \partial V/\partial x_n)$, and $V_{xx} = (\partial^2 V/\partial x_i \partial x_j)_{n \times n}$.

Theorem 5. If there are constants , such that , , we denote , as follows: For a given constant , if there exist positive definite matrices , , , , , , , , , such that the following condition holds: then system (2) is mean-square ultimately bounded.

Proof. Consider the following positive definite Lyapunov functional: where Then, by Itô’s formula, the operator along the trajectory of system (2) can be obtained: From the assumptions, Lemma 3, and (19), we can get
Similarly, calculating the operator (), along the trajectory of system (2), one can get
According to Lemma 4, the following inequalities can be obtained: Then, we can get
On the other hand, it follows from Assumption that we can easily obtain
Then we obtain Similarly, one can get
Denote and, combining with (16)–(26), we can get where
Integrating both sides of (28) over the time interval and then taking expectations yields where .
Therefore, one obtains which implies If one chooses , then, for any initial value , there is , such that for all . According to Definition 1, we have for all . That is to say, system (2) is mean-square ultimately bounded. This completes the proof.

Theorem 6. If all of the conditions of Theorem 5 hold, then there exists an attractor for the solutions of system (2).

Proof. If one chooses , Theorem 5 shows that, for any , there is , such that for all . Denote this set by . Clearly, it is closed, bounded, and invariant. Furthermore, . Therefore, it is an attractor for the solutions of system (2). This completes the proof.

Corollary 7. If, in addition to all the conditions of Theorem 5, , , and for all , then system (2) has a trivial solution , and the trivial solution of system (2) is mean-square exponentially stable.

Proof. If and , then , and it is obvious that system (2) has a trivial solution . From Theorem 5, one has where . Therefore, the trivial solution of system (2) is mean-square exponentially stable. This completes the proof.

Following Theorem 5–Corollary 7, we now present conditions for the mean-square ultimate boundedness of the switched system (3) by applying the average dwell time method.

Theorem 8. If there are constants , such that , , we denote , as For a given constant , if there exist positive definite matrices , , , , , , , , such that the following condition holds where then system (3) is mean-square ultimately bounded for any switching signal with average dwell time satisfying where .

Proof. Define the Lyapunov functional candidate
From (16) and (32), we have the following result: where , is a positive constant.
When , the th subsystem is activated; from (39) and Theorem 5, we can get where is a positive constant, , , .
Since the system state is continuous, it follows from (40) that where , .
If one chooses , then, for any initial value , there is , such that for all . According to Definition 1, we have for all . That is to say, system (3) is mean-square ultimately bounded, and the proof is completed.

Remark 9. In this paper, we construct two piecewise functions , to remove the restrictive conditions and from the results, which reduces the conservatism of the obtained results and also avoids computational complexity.

Remark 10. Condition (35) is given in the form of linear matrix inequalities, which is less conservative than algebraic formulations. Furthermore, by using the MATLAB LMI toolbox, we can check the feasibility of (35) straightforwardly without tuning any parameters.
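The paper's conditions are verified with MATLAB's LMI toolbox. As a hedged stand-in, the following Python sketch checks the simplest Lyapunov-type LMI, $A^T P + P A < 0$, $P > 0$, for an illustrative Hurwitz matrix $A$ (not taken from the paper's example) by solving the associated Lyapunov equation via Kronecker products:

```python
import numpy as np

# Illustrative placeholder matrix (Hurwitz: eigenvalues in the open left
# half-plane), not the system data of the paper's example.
A = np.array([[-2.0, 0.5],
              [0.3, -1.5]])
n = A.shape[0]
Q = np.eye(n)

# Solve A^T P + P A = -Q.  With row-major vec(X) = X.flatten(),
# vec(A^T P) = kron(A^T, I) vec(P) and vec(P A) = kron(I, A^T) vec(P).
K = np.kron(A.T, np.eye(n)) + np.kron(np.eye(n), A.T)
P = np.linalg.solve(K, -Q.flatten()).reshape(n, n)
P = (P + P.T) / 2                                  # symmetrize numerically

# Feasibility check: P > 0 and A^T P + P A < 0.
feasible = bool(np.linalg.eigvalsh(P).min() > 0
                and np.linalg.eigvalsh(A.T @ P + P @ A).max() < 0)
```

For the full LMI (35), a semidefinite programming solver (e.g., MATLAB's `feasp`) would be used instead; this sketch only illustrates the feasibility-checking idea.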

Theorem 11. If all of the conditions of Theorem 8 hold, then there exists an attractor for the solutions of system (3).

Proof. If one chooses , Theorem 8 shows that, for any , there is , such that for all . Denote this set by . Clearly, it is closed, bounded, and invariant. Furthermore, . Therefore, it is an attractor for the solutions of system (3). This completes the proof.

Corollary 12. If, in addition to all the conditions of Theorem 8, , and for all , then system (3) has a trivial solution , and the trivial solution of system (3) is mean-square exponentially stable.

Proof. If and for all , then it is obvious that system (3) has a trivial solution . From Theorem 8, one has where . Therefore, the trivial solution of system (3) is mean-square exponentially stable. This completes the proof.

Remark 13. Our assumption on the activation functions is less conservative than that in [17], since the constants and are allowed to be positive, negative, or zero. Hence, the resulting activation functions can be nonmonotonic and are more general than the usual forms. Moreover, unlike the bounded case, there may be no equilibrium point for the switched system (3) under this assumption. For this reason, investigating the asymptotic behavior (the ultimate boundedness and the existence of an attractor) of a switched system with mixed delays is more complex and challenging.

Remark 14. In this paper, the chatter bound is a positive integer, which is of more practical significance and includes the models in [16, 25, 26] as special cases.

Remark 15. When there is no switching, the switched delay system (3) reduces to the usual stochastic CNN with delays; in this case, the attractor and ultimate boundedness are discussed in [17]. When the noise term vanishes, the model in our paper becomes a switched CNN with mixed delays; to the best of our knowledge, there are no published results in this respect yet. Thus, the main results of this paper are novel. Moreover, when uncertainties appear in the switched stochastic CNN system (3), the corresponding results can be obtained by applying a similar method as in [25].

4. Illustrative Examples

In this section, we give a numerical example to demonstrate the validity and effectiveness of our results. Consider a switched cellular neural network with two subsystems.

Consider the switched stochastic cellular neural network system (3) with (), , , and the connection weight matrices as follows:

From the assumptions, we can obtain , , and .

Therefore, for , by solving LMIs (35), we get

Using (37), we can get the average dwell time .
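For reference, in the standard ADT framework of [28] the minimal average dwell time takes the form $\tau_a^* = \ln\mu/\lambda$, where $\mu$ bounds the jump of the Lyapunov function at switching instants and $\lambda$ is the decay rate of each active subsystem. The numeric values below are illustrative placeholders, not those of this example.

```python
import math

def adt_bound(mu, lam):
    """Minimal average dwell time tau_a* = ln(mu) / lambda (standard ADT form).

    mu  -- bound on the Lyapunov function's jump at switching (mu >= 1)
    lam -- exponential decay rate of each active subsystem (lam > 0)
    """
    if mu < 1 or lam <= 0:
        raise ValueError("need mu >= 1 and lambda > 0")
    return math.log(mu) / lam

tau_star = adt_bound(mu=2.0, lam=0.5)   # illustrative values
```

Any switching signal whose average dwell time exceeds this bound preserves the decay established for the individual subsystems.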

5. Conclusions

In this paper, we have studied switched stochastic cellular neural networks with discrete time-varying delays and distributed time-varying delays. With the help of the average dwell time approach, novel multiple Lyapunov-Krasovskii functional methods, and some inequality techniques, we obtained new sufficient conditions guaranteeing the mean-square ultimate boundedness, the existence of an attractor, and the mean-square exponential stability. A numerical example was also given to demonstrate our results. Furthermore, the derived conditions are presented in the form of LMIs, which is less conservative than algebraic formulations and can easily be checked in practice by the effective LMI toolbox in MATLAB.

Acknowledgments

The authors are extremely grateful to Prof. Zhichun Yang and the anonymous reviewers for their constructive and valuable comments, which have contributed much to the improvement of this paper. This work was jointly supported by the National Natural Science Foundation of China under Grant no. 11101053, the Key Project of the Chinese Ministry of Education under Grant no. 211118, the Excellent Youth Foundation of the Educational Committee of Hunan Province under Grant no. 10B002, the Hunan Provincial NSF under Grant no. 11JJ1001, the National Science and Technology Major Projects of China under Grant no. 2012ZX10001001-006, and the Scientific Research Funds of the Hunan Provincial Science and Technology Department of China under Grant no. 2012SK3096.

References

  1. L. O. Chua and L. Yang, “Cellular neural networks: theory,” IEEE Transactions on Circuits and Systems, vol. 35, no. 10, pp. 1257–1272, 1988.
  2. L. O. Chua and L. Yang, “Cellular neural networks: applications,” IEEE Transactions on Circuits and Systems, vol. 35, no. 10, pp. 1273–1290, 1988.
  3. D. Liu and A. Michel, “Cellular neural networks for associative memories,” IEEE Transactions on Circuits and Systems II, vol. 40, pp. 119–121, 1993.
  4. H.-C. Chen, Y.-C. Hung, C.-K. Chen, T.-L. Liao, and C.-K. Chen, “Image-processing algorithms realized by discrete-time cellular neural networks and their circuit implementations,” Chaos, Solitons & Fractals, vol. 29, no. 5, pp. 1100–1108, 2006.
  5. P. Venetianer and T. Roska, “Image compression by delayed CNNs,” IEEE Transactions on Circuits and Systems I, vol. 45, pp. 205–215, 1998.
  6. T. Roska, T. Boros, P. Thiran, and L. Chua, “Detecting simple motion using cellular neural networks,” in Proceedings of the International Workshop Cellular Neural Networks Application, pp. 127–138, 1990.
  7. T. Roska and L. Chua, “Cellular neural networks with nonlinear and delay-type template,” International Journal of Circuit Theory and Applications, vol. 20, pp. 469–481, 1992.
  8. F. Wen and X. Yang, “Skewness of return distribution and coefficient of risk premium,” Journal of Systems Science & Complexity, vol. 22, no. 3, pp. 360–371, 2009.
  9. F. Wen, Z. Li, C. Xie, and S. David, “Study on the fractal and chaotic features of the Shanghai composite index,” Fractals—Complex Geometry Patterns and Scaling in Nature and Society, vol. 20, no. 2, pp. 133–140, 2012.
  10. S. Haykin, Neural Networks, Prentice-Hall, Englewood Cliffs, NJ, USA, 1994.
  11. C. Huang and J. Cao, “Almost sure exponential stability of stochastic cellular neural networks with unbounded distributed delays,” Neurocomputing, vol. 72, pp. 3352–3356, 2009.
  12. C. Huang and J. Cao, “On pth moment exponential stability of stochastic Cohen-Grossberg neural networks with time-varying delays,” Neurocomputing, vol. 73, pp. 986–990, 2010.
  13. R. Rakkiyappan and P. Balasubramaniam, “Delay-dependent asymptotic stability for stochastic delayed recurrent neural networks with time varying delays,” Applied Mathematics and Computation, vol. 198, no. 2, pp. 526–533, 2008.
  14. H. Zhao, N. Ding, and L. Chen, “Almost sure exponential stability of stochastic fuzzy cellular neural networks with delays,” Chaos, Solitons & Fractals, vol. 40, no. 4, pp. 1653–1659, 2009.
  15. B. Li and D. Xu, “Mean square asymptotic behavior of stochastic neural networks with infinitely distributed delay,” Neurocomputing, vol. 72, pp. 3311–3317, 2009.
  16. L. Wan, Q. Zhou, P. Wang, and J. Li, “Ultimate boundedness and an attractor for stochastic Hopfield neural networks with time-varying delays,” Nonlinear Analysis: Real World Applications, vol. 13, no. 2, pp. 953–958, 2012.
  17. L. Wan and Q. Zhou, “Attractor and ultimate boundedness for stochastic cellular neural networks with delays,” Nonlinear Analysis: Real World Applications, vol. 12, no. 5, pp. 2561–2566, 2011.
  18. W. Yu, J. Cao, and W. Lu, “Synchronization control of switched linearly coupled neural networks with delay,” Neurocomputing, vol. 73, pp. 858–866, 2010.
  19. L. Wu, Z. Feng, and W. Zheng, “Exponential stability analysis for delayed neural networks with switching parameters: average dwell time approach,” IEEE Transactions on Neural Networks, vol. 21, pp. 1396–1407, 2010.
  20. C. W. Wu and L. O. Chua, “A simple way to synchronize chaotic systems with applications to secure communication systems,” International Journal of Bifurcation and Chaos, vol. 3, pp. 1619–1627, 1993.
  21. W. Yu, J. Cao, and K. Yuan, “Synchronization of switched system and application in communication,” Physics Letters A, vol. 372, pp. 4438–4445, 2008.
  22. C. Maia and M. Goncalves, “Application of switched adaptive system to load forecasting,” Electric Power Systems Research, vol. 78, pp. 721–727, 2008.
  23. H. Wu, X. Liao, W. Feng, S. Guo, and W. Zhang, “Robust stability analysis of uncertain systems with two additive time-varying delay components,” Applied Mathematical Modelling, vol. 33, no. 12, pp. 4345–4353, 2009.
  24. X. Yang, C. Huang, and Q. Zhu, “Synchronization of switched neural networks with mixed delays via impulsive control,” Chaos, Solitons & Fractals, vol. 44, no. 10, pp. 817–826, 2011.
  25. J. Lian and K. Zhang, “Exponential stability for switched Cohen-Grossberg neural networks with average dwell time,” Nonlinear Dynamics, vol. 63, no. 3, pp. 331–343, 2011.
  26. T.-F. Li, J. Zhao, and G. M. Dimirovski, “Stability and L2-gain analysis for switched neutral systems with mixed time-varying delays,” Journal of the Franklin Institute, vol. 348, no. 9, pp. 2237–2256, 2011.
  27. C. Huang and J. Cao, “Convergence dynamics of stochastic Cohen-Grossberg neural networks with unbounded distributed delays,” IEEE Transactions on Neural Networks, vol. 22, pp. 561–572, 2011.
  28. J. Hespanha and A. Morse, “Stability of switched systems with average dwell time,” in Proceedings of the 38th IEEE Conference on Decision and Control, pp. 2655–2660, 1999.
  29. K. Gu, “An integral inequality in the stability problem of time-delay systems,” in Proceedings of the 39th IEEE Conference on Decision and Control, pp. 2805–2810, 2000.