Abstract and Applied Analysis
Volume 2013 (2013), Article ID 631734, 9 pages
http://dx.doi.org/10.1155/2013/631734
Research Article

Dynamical Behaviors of Stochastic Hopfield Neural Networks with Both Time-Varying and Continuously Distributed Delays

1Department of Mathematics, Zhaoqing University, Zhaoqing 526061, China
2School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430072, China
3School of Mathematics and Physics, Wuhan Textile University, Wuhan 430073, China

Received 24 December 2012; Revised 19 March 2013; Accepted 20 March 2013

Academic Editor: Chuangxia Huang

Copyright © 2013 Qinghua Zhou et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper investigates dynamical behaviors of stochastic Hopfield neural networks with both time-varying and continuously distributed delays. By employing the Lyapunov functional theory and linear matrix inequality, some novel criteria on asymptotic stability, ultimate boundedness, and weak attractor are derived. Finally, an example is given to illustrate the correctness and effectiveness of our theoretical results.

1. Introduction

Hopfield neural networks [1] have been extensively studied in the past years and have found many applications in different areas such as pattern recognition, associative memory, and combinatorial optimization. Such applications heavily depend on dynamical behaviors such as stability, uniform boundedness, ultimate boundedness, attractors, bifurcation, and chaos. As is well known, time delays are unavoidably encountered in the implementation of neural networks. Since time delays, as a source of instability and poor performance, always appear in many neural networks owing to the finite speed of information processing, the stability analysis of delayed neural networks has received considerable attention. However, in recent publications, most research on delayed neural networks has been restricted to the simple case of discrete delays. Since a neural network usually has a spatial nature due to the presence of a multitude of parallel pathways with a variety of axon sizes and lengths, it is desirable to model it by introducing distributed delays. Therefore, both discrete and distributed delays should be taken into account when modeling realistic neural networks [2, 3].

On the other hand, it has now been well recognized that stochastic disturbances are also ubiquitous owing to thermal noise in electronic implementations. Therefore, it is important to understand how these disturbances affect the networks. Many results on stochastic neural networks have been reported in [4–24]. Some sufficient criteria on the stability of uncertain stochastic neural networks were derived in [4–7]. Almost sure exponential stability of stochastic neural networks was studied in [8–10]. In [11–16], mean square exponential stability and pth moment exponential stability of stochastic neural networks were discussed. The stability of stochastic impulsive neural networks was discussed in [17–19]. The stability of stochastic neural networks with Markovian jumping parameters was investigated in [20–22]. Passivity analysis for stochastic neural networks was discussed in [23, 24]. These references mainly considered the stability of the equilibrium point of stochastic delayed neural networks. What can be said about the asymptotic behaviors when the equilibrium point does not exist?

Besides the stability property, boundedness and attractors are also foundational concepts of dynamical systems. They play an important role in investigating the uniqueness of equilibrium, global asymptotic stability, global exponential stability, the existence of periodic solutions, control, and synchronization [25]. Recently, ultimate boundedness and attractors of several classes of neural networks with time delays have been reported in [26–33]. Some sufficient criteria were derived in [26, 27], but these results hold only under constant delays. Subsequently, in [28], the globally robust ultimate boundedness of integrodifferential neural networks with uncertainties and varying delays was studied. After that, some sufficient criteria on the ultimate boundedness of neural networks with both varying and unbounded delays were derived in [29], but the systems concerned are deterministic ones. In [30, 31], a series of criteria on the boundedness, global exponential stability, and existence of periodic solutions for nonautonomous recurrent neural networks was established. In [32, 33], the ultimate boundedness and weak attractor of stochastic neural networks with time-varying delays were discussed. To the best of our knowledge, for stochastic neural networks with mixed time delays, there are few published results on ultimate boundedness and weak attractors. Therefore, the questions of ultimate boundedness, weak attractor, and asymptotic stability for stochastic Hopfield neural networks with mixed time delays are important and meaningful.

The rest of this paper is organized as follows. Some preliminaries are given in Section 2, main results are presented in Section 3, a numerical example is given in Section 4, and conclusions are drawn in Section 5.

2. Preliminaries

Consider the following stochastic Hopfield neural networks with both time-varying and continuously distributed delays, in which x(t) is the state vector associated with the neurons; C = diag(c_1, ..., c_n), where c_i represents the rate with which the ith unit resets its potential to the resting state in isolation when disconnected from the network and the external stochastic perturbation; A, B, and D represent the connection weight matrix, the delayed connection weight matrix, and the distributively delayed connection weight matrix, respectively; J_i denotes the external bias on the ith unit; f, g, and h denote activation functions; the delay kernel k is a real-valued nonnegative continuous function defined on [0, +∞); σ_1 and σ_2 are diffusion coefficient matrices; w(t) is a one-dimensional Brownian motion (Wiener process), which is defined on a complete probability space (Ω, F, P) with a natural filtration {F_t} generated by {w(s) : 0 ≤ s ≤ t}; τ(t) is the transmission delay; and the initial conditions associated with system (1) are of the form x(s) = ξ(s), s ∈ (−∞, 0], where ξ is an F_0-measurable, bounded, and continuous random variable defined on (−∞, 0].
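The displayed model equation was lost in extraction. Under the standard formulation of stochastic Hopfield networks with mixed delays, and using symbol names consistent with the surrounding description (all symbol choices here are assumptions, since the original display is unrecoverable), system (1) plausibly takes the form:

```latex
\begin{aligned}
dx(t) ={} & \Big[-C\,x(t) + A\, f\big(x(t)\big) + B\, g\big(x(t-\tau(t))\big) \\
          & \quad + D \int_{-\infty}^{t} k(t-s)\, h\big(x(s)\big)\, ds + J\Big]\, dt \\
          & + \big[\sigma_1\, x(t) + \sigma_2\, x(t-\tau(t))\big]\, dw(t).
\end{aligned}
```

The drift collects the self-feedback, instantaneous, discretely delayed, and distributively delayed terms named in the text, and the diffusion uses the two coefficient matrices the text mentions.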

Throughout this paper, we always assume that the following condition holds. (A1) For f, g, and h in (1), there exist constants l_i^−, l_i^+, m_i^−, and m_i^+ such that the sector inequalities below hold. Moreover, there exist a constant μ > 0 and a matrix K such that the delay kernel satisfies the corresponding integrability condition.
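The inequalities constituting (A1) were lost in extraction. A typical sector-type assumption matching the description (the constant names and the kernel condition below are assumptions) reads:

```latex
\begin{gathered}
l_i^- \le \frac{f_i(u)-f_i(v)}{u-v} \le l_i^+, \qquad
m_i^- \le \frac{g_i(u)-g_i(v)}{u-v} \le m_i^+, \qquad u \ne v, \\[4pt]
\exists\, \mu > 0,\ K = (\kappa_{ij}) : \quad
\int_0^{\infty} e^{\mu s}\, k_{ij}(s)\, ds \le \kappa_{ij} < \infty .
\end{gathered}
\]
```

Conditions of this shape are what make the Lyapunov estimates in Section 3 go through: the sector bounds control the activation terms, and the exponentially weighted kernel integral controls the distributed-delay term.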

In what follows, P > 0 (resp., P ≥ 0) means that the matrix P is symmetric positive definite (resp., positive semidefinite). P^T and P^{−1} denote the transpose and inverse of the matrix P. λ_max(P) and λ_min(P) represent the maximum and minimum eigenvalues of the matrix P, respectively.

Definition 1. System (1) is said to be stochastically ultimately bounded if, for any ε ∈ (0, 1), there is a positive constant C = C(ε) such that the solution x(t) of system (1) satisfies
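The displayed bound in Definition 1 did not survive extraction; the standard notion of stochastic ultimate boundedness, with the constant C = C(ε) named in the definition, is:

```latex
\limsup_{t \to \infty} \mathbb{P}\{\, |x(t)| \le C \,\} \ge 1 - \varepsilon .
```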

Lemma 2 (see [34]). Let S = [S11, S12; S12^T, S22] with S11 = S11^T and S22 = S22^T depend affinely on x. Then the linear matrix inequality S < 0 is equivalent to (1) S22 < 0 and (2) S11 − S12 S22^{−1} S12^T < 0.

3. Main Results

Theorem 3. System (1) is stochastically ultimately bounded provided that the delay τ(t) satisfies the stated bounds and there exist positive definite matrices P and Q_i such that the following linear matrix inequality holds, where ∗ denotes the corresponding symmetric terms,

Proof. The key step of the proof is to show that there exists a positive constant C, which is independent of the initial data, such that (8) holds. If (8) holds, then it follows from Chebyshev's inequality that, for any ε > 0 and C sufficiently large, the tail probability is small, which implies that (4) holds. Now, we begin to prove that (8) holds.
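The Chebyshev step referenced here can be made explicit. Assuming (8) bounds the second moment by a constant C_1 (the name C_1 is an assumption for illustration), Chebyshev's inequality gives, for any C > 0:

```latex
\mathbb{P}\{\, |x(t)| > C \,\} \le \frac{\mathbb{E}|x(t)|^2}{C^2},
\qquad\text{so}\qquad
\limsup_{t \to \infty} \mathbb{P}\{\, |x(t)| > C \,\} \le \frac{C_1}{C^2} \le \varepsilon
\quad \text{for } C \ge \sqrt{C_1/\varepsilon}.
```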
From the linear matrix inequality condition and Lemma 2, one may obtain the estimate below.
Hence, there exists a sufficiently small λ > 0 such that the following holds, where I is the identity matrix,
Consider the Lyapunov-Krasovskii functional as follows: where
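The concrete functional in (13) was lost in extraction. A typical Lyapunov-Krasovskii candidate for systems with both time-varying and distributed delays (the weights P, Q_1, and q_j below are assumptions) has the shape:

```latex
V(t, x_t) = x^T(t)\, P\, x(t)
+ \int_{t-\tau(t)}^{t} x^T(s)\, Q_1\, x(s)\, ds
+ \sum_{j=1}^{n} q_j \int_0^{\infty} k_j(s) \int_{t-s}^{t} h_j^2\big(x_j(u)\big)\, du\, ds .
```

The quadratic term handles the instantaneous state, the first integral compensates the discrete delay, and the double integral compensates the distributed delay through the kernel.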
Then, it can be obtained by Itô's formula in [35] that where in which the following inequality is used:
On the other hand, it follows from (A1) that for ,
Similarly, one may obtain
Therefore, from (15)–(21), it follows that where
Thus, one may obtain (26), where the constant is independent of the initial data. Equation (26) implies that (8) holds. The proof is completed.

Theorem 3 shows that there exists a constant C > 0, independent of the initial data, such that the bound in (8) holds for any solution. Let the set A be denoted by
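The displayed definition of the set was lost in extraction; in settings like the companion work [33], the attracting set is a ball determined by the moment bound (the exact radius below is an assumption):

```latex
\mathcal{A} = \Big\{\, x \in \mathbb{R}^n : \|x\| \le \sqrt{C} \,\Big\},
```

so that the bound limsup E|x(t)|^2 ≤ C from Theorem 3 places the solutions near this set in mean square.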

Clearly, A is closed, bounded, and invariant. Moreover, the solutions enter A with probability no less than 1 − ε, which means that A attracts the solutions infinitely many times with probability no less than 1 − ε; so we may say that A is a weak attractor for the solutions.

Theorem 4. Suppose that all conditions of Theorem 3 hold; then there exists a weak attractor for the solutions of system (1).

Theorem 5. Suppose that all conditions of Theorem 3 hold and that system (1) admits the zero solution; then the zero solution of system (1) is mean square exponentially stable and almost surely exponentially stable.

Proof. If system (1) admits the zero solution, then the constant term in the estimate vanishes. By (25) and the semimartingale convergence theorem used in [35], the zero solution of system (1) is almost surely exponentially stable. It follows from (26) that the zero solution of system (1) is mean square exponentially stable.

Remark 6. If one takes the corresponding matrix to be zero in Theorem 3, then the condition on the derivative of τ(t) is not required. Furthermore, in that case τ(t) may be nondifferentiable, or the bound on its derivative may be unknown.

Remark 7. Assumption (A1) is less conservative than that in [32] since the constants are allowed to be positive, negative, or zero. System (1) includes mixed time delays, which makes it more complex than that in [33]. The systems concerned in [26–31] are deterministic, so the stochastic system studied in this paper is more complex and realistic.
When σ_1 = σ_2 = 0, system (1) becomes the following deterministic system:

Definition 8. System (29) is said to be uniformly bounded if, for each H > 0, there exists a constant C = C(H) > 0 such that any initial time and any initial function bounded by H imply that the corresponding solution of (29) is bounded by C for all later times.

Theorem 9. System (29) is uniformly bounded provided that the delay τ(t) satisfies the stated bounds and there exist positive definite matrices such that the following linear matrix inequality holds, where ∗ denotes the corresponding symmetric terms and the remaining blocks are the same as in Theorem 3.

Proof. From the LMI condition, there exists a sufficiently small λ > 0 such that the inequality below holds, where I is the identity matrix and the remaining quantities are defined as in Theorem 3.
We still consider the Lyapunov-Krasovskii functional in (13). From (16)–(21), one may obtain the corresponding estimate, where the notation is the same as in (24). It then follows that system (29) is uniformly bounded.

4. An Example

Example 1. Consider system (1) with , , and

The activation functions f, g, and h satisfy assumption (A1), from which the corresponding constants are computed directly. By using MATLAB's LMI Control Toolbox [34], based on Theorem 3, such a system is stochastically ultimately bounded when the feasible matrices satisfy
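The feasible matrices produced by the LMI toolbox are not reproduced here. As a purely illustrative sketch (the candidate matrix P below is hypothetical, not taken from the paper), one can double-check that a solver's output is symmetric positive definite, as Theorem 3 requires, using Sylvester's criterion on the leading principal minors:

```python
# Sylvester's criterion: a symmetric matrix is positive definite
# iff every leading principal minor has a positive determinant.

def det(m):
    """Determinant by Laplace expansion (fine for small matrices)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += ((-1) ** j) * m[0][j] * det(minor)
    return total

def is_positive_definite(m):
    """Check P > 0 via Sylvester's criterion (m assumed symmetric)."""
    n = len(m)
    return all(det([row[:k] for row in m[:k]]) > 0 for k in range(1, n + 1))

# Hypothetical 2x2 candidate P from an LMI solver (illustrative only).
P = [[2.0, -0.5],
     [-0.5, 1.0]]
print(is_positive_definite(P))  # minors 2.0 and 1.75 are positive -> True
```

In practice one would verify the full block LMI of Theorem 3 with a dedicated solver; this sketch only illustrates the definiteness check on a single block.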

From Figure 1, it is easy to see that the system is stochastically ultimately bounded.

Figure 1: Time trajectories (a) as well as the phase portrait (b) for the system in Example 1.

5. Conclusions

A proper Lyapunov functional and linear matrix inequalities are employed to investigate the ultimate boundedness, stability, and weak attractor of stochastic Hopfield neural networks with both time-varying and continuously distributed delays. Novel sufficient criteria are established. From the proposed conditions, one can readily prove that the zero solution of such a network is mean square exponentially stable and almost surely exponentially stable by applying the semimartingale convergence theorem.

Acknowledgments

The authors thank the editor and the reviewers for their insightful comments and valuable suggestions. This work was supported by the National Natural Science Foundation of China (nos. 10801109, 11271295, and 11047114), Science and Technology Research Projects of Hubei Provincial Department of Education (D20131602), and Young Talent Cultivation Projects of Guangdong (LYM09134).

References

  1. C. Bai, “Stability analysis of Cohen-Grossberg BAM neural networks with delays and impulses,” Chaos, Solitons & Fractals, vol. 35, no. 2, pp. 263–267, 2008.
  2. T. Lv and P. Yan, “Dynamical behaviors of reaction-diffusion fuzzy neural networks with mixed delays and general boundary conditions,” Communications in Nonlinear Science and Numerical Simulation, vol. 16, no. 2, pp. 993–1001, 2011.
  3. J. Liu, X. Liu, and W.-C. Xie, “Global convergence of neural networks with mixed time-varying delays and discontinuous neuron activations,” Information Sciences, vol. 183, pp. 92–105, 2012.
  4. H. Huang and G. Feng, “Delay-dependent stability for uncertain stochastic neural networks with time-varying delay,” Physica A, vol. 381, no. 1-2, pp. 93–103, 2007.
  5. W.-H. Chen and X. M. Lu, “Mean square exponential stability of uncertain stochastic delayed neural networks,” Physics Letters A, vol. 372, no. 7, pp. 1061–1069, 2008.
  6. M. Hua, X. Liu, F. Deng, and J. Fei, “New results on robust exponential stability of uncertain stochastic neural networks with mixed time-varying delays,” Neural Processing Letters, vol. 32, no. 3, pp. 219–233, 2010.
  7. P. Balasubramaniam, S. Lakshmanan, and R. Rakkiyappan, “LMI optimization problem of delay-dependent robust stability criteria for stochastic systems with polytopic and linear fractional uncertainties,” International Journal of Applied Mathematics and Computer Science, vol. 22, no. 2, pp. 339–351, 2012.
  8. C. Huang and J. Cao, “Almost sure exponential stability of stochastic cellular neural networks with unbounded distributed delays,” Neurocomputing, vol. 72, no. 13–15, pp. 3352–3356, 2009.
  9. H. Zhao, N. Ding, and L. Chen, “Almost sure exponential stability of stochastic fuzzy cellular neural networks with delays,” Chaos, Solitons & Fractals, vol. 40, no. 4, pp. 1653–1659, 2009.
  10. C. Huang, P. Chen, Y. He, L. Huang, and W. Tan, “Almost sure exponential stability of delayed Hopfield neural networks,” Applied Mathematics Letters, vol. 21, no. 7, pp. 701–705, 2008.
  11. C. Huang and J. Cao, “On pth moment exponential stability of stochastic Cohen-Grossberg neural networks with time-varying delays,” Neurocomputing, vol. 73, no. 4–6, pp. 986–990, 2010.
  12. C. X. Huang, Y. G. He, L. H. Huang, and W. J. Zhu, “pth moment stability analysis of stochastic recurrent neural networks with time-varying delays,” Information Sciences, vol. 178, no. 9, pp. 2194–2203, 2008.
  13. C. Huang, Y. He, and H. Wang, “Mean square exponential stability of stochastic recurrent neural networks with time-varying delays,” Computers and Mathematics with Applications, vol. 56, no. 7, pp. 1773–1778, 2008.
  14. R. Rakkiyappan and P. Balasubramaniam, “Delay-dependent asymptotic stability for stochastic delayed recurrent neural networks with time varying delays,” Applied Mathematics and Computation, vol. 198, no. 2, pp. 526–533, 2008.
  15. S. Lakshmanan and P. Balasubramaniam, “Linear matrix inequality approach for robust stability analysis for stochastic neural networks with time-varying delay,” Chinese Physics B, vol. 20, no. 4, Article ID 040204, 2011.
  16. S. Lakshmanan, A. Manivannan, and P. Balasubramaniam, “Delay-distribution-dependent stability criteria for neural networks with time-varying delays,” Dynamics of Continuous, Discrete & Impulsive Systems A, vol. 19, no. 1, pp. 1–14, 2012.
  17. Q. Song and Z. Wang, “Stability analysis of impulsive stochastic Cohen-Grossberg neural networks with mixed time delays,” Physica A, vol. 387, no. 13, pp. 3314–3326, 2008.
  18. X. Li, “Existence and global exponential stability of periodic solution for delayed neural networks with impulsive and stochastic effects,” Neurocomputing, vol. 73, no. 4–6, pp. 749–758, 2010.
  19. C. H. Wang, Y. G. Kao, and G. W. Yang, “Exponential stability of impulsive stochastic fuzzy reaction-diffusion Cohen-Grossberg neural networks with mixed delays,” Neurocomputing, vol. 89, pp. 55–63, 2012.
  20. Q. Zhu, C. Huang, and X. Yang, “Exponential stability for stochastic jumping BAM neural networks with time-varying and distributed delays,” Nonlinear Analysis, vol. 5, no. 1, pp. 52–77, 2011.
  21. P. Balasubramaniam and M. Syed Ali, “Stochastic stability of uncertain fuzzy recurrent neural networks with Markovian jumping parameters,” International Journal of Computer Mathematics, vol. 88, no. 5, pp. 892–904, 2011.
  22. Y. F. Wang, P. Lin, and L. S. Wang, “Exponential stability of reaction-diffusion high-order Markovian jump Hopfield neural networks with time-varying delays,” Nonlinear Analysis, vol. 13, no. 3, pp. 1353–1361, 2012.
  23. P. Balasubramaniam and G. Nagamani, “Global robust passivity analysis for stochastic interval neural networks with interval time-varying delays and Markovian jumping parameters,” Journal of Optimization Theory and Applications, vol. 149, no. 1, pp. 197–215, 2011.
  24. P. Balasubramaniam and G. Nagamani, “Global robust passivity analysis for stochastic fuzzy interval neural networks with time-varying delays,” Expert Systems with Applications, vol. 39, pp. 732–742, 2012.
  25. P. Wang, D. Li, and Q. Hu, “Bounds of the hyper-chaotic Lorenz-Stenflo system,” Communications in Nonlinear Science and Numerical Simulation, vol. 15, no. 9, pp. 2514–2520, 2010.
  26. X. Liao and J. Wang, “Global dissipativity of continuous-time recurrent neural networks with time delay,” Physical Review E, vol. 68, no. 1, Article ID 016118, 7 pages, 2003.
  27. S. Arik, “On the global dissipativity of dynamical neural networks with time delays,” Physics Letters A, vol. 326, no. 1-2, pp. 126–132, 2004.
  28. X. Y. Lou and B. T. Cui, “Global robust dissipativity for integro-differential systems modeling neural networks with delays,” Chaos, Solitons & Fractals, vol. 36, no. 2, pp. 469–478, 2008.
  29. Q. Song and Z. Zhao, “Global dissipativity of neural networks with both variable and unbounded delays,” Chaos, Solitons & Fractals, vol. 25, no. 2, pp. 393–401, 2005.
  30. H. Jiang and Z. Teng, “Global exponential stability of cellular neural networks with time-varying coefficients and delays,” Neural Networks, vol. 17, no. 10, pp. 1415–1425, 2004.
  31. H. Jiang and Z. Teng, “Boundedness, periodic solutions and global stability for cellular neural networks with variable coefficients and infinite delays,” Neurocomputing, vol. 72, no. 10–12, pp. 2455–2463, 2009.
  32. L. Wan and Q. H. Zhou, “Attractor and ultimate boundedness for stochastic cellular neural networks with delays,” Nonlinear Analysis, vol. 12, no. 5, pp. 2561–2566, 2011.
  33. L. Wan, Q. Zhou, P. Wang, and J. Li, “Ultimate boundedness and an attractor for stochastic Hopfield neural networks with time-varying delays,” Nonlinear Analysis, vol. 13, no. 2, pp. 953–958, 2012.
  34. S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, SIAM, Philadelphia, Pa, USA, 1994.
  35. X. Mao, Stochastic Differential Equations and Applications, Horwood Publishing, 1997.