Abstract

This paper investigates the dynamical behaviors of stochastic Hopfield neural networks with both time-varying and continuously distributed delays. By employing Lyapunov functional theory and linear matrix inequalities, some novel criteria on asymptotic stability, ultimate boundedness, and weak attractors are derived. Finally, an example is given to illustrate the correctness and effectiveness of the theoretical results.

1. Introduction

Hopfield neural networks [1] have been extensively studied in the past years and have found many applications in different areas such as pattern recognition, associative memory, and combinatorial optimization. Such applications heavily depend on dynamical behaviors such as stability, uniform boundedness, ultimate boundedness, attractors, bifurcation, and chaos. As is well known, time delays are unavoidably encountered in the implementation of neural networks. Since time delays, a source of instability and poor performance, always appear in many neural networks owing to the finite speed of information processing, the stability analysis of delayed neural networks has received considerable attention. However, most of this research has been restricted to the simple case of discrete delays. Since a neural network usually has a spatial nature due to the presence of a multitude of parallel pathways with a variety of axon sizes and lengths, it is desirable to model it by introducing distributed delays. Therefore, both discrete and distributed delays should be taken into account when modeling realistic neural networks [2, 3].

On the other hand, it has now been well recognized that stochastic disturbances are also ubiquitous, owing to thermal noise in electronic implementations. Therefore, it is important to understand how these disturbances affect the networks. Many results on stochastic neural networks have been reported in [4–24]. Some sufficient criteria on the stability of uncertain stochastic neural networks were derived in [4–7]. Almost sure exponential stability of stochastic neural networks was studied in [8–10]. In [11–16], mean square exponential stability and pth moment exponential stability of stochastic neural networks were discussed. The stability of stochastic impulsive neural networks was discussed in [17–19]. The stability of stochastic neural networks with Markovian jumping parameters was investigated in [20–22]. The passivity of stochastic neural networks was analyzed in [23, 24]. These references mainly considered the stability of the equilibrium point of stochastic delayed neural networks. A natural question is how to characterize the asymptotic behaviors of such networks when an equilibrium point does not exist.

Besides the stability property, boundedness and attractors are also foundational concepts of dynamical systems. They play an important role in investigating the uniqueness of equilibria, global asymptotic stability, global exponential stability, the existence of periodic solutions, control, and synchronization [25]. Recently, ultimate boundedness and attractors of several classes of neural networks with time delays have been reported in [26–33]. Some sufficient criteria were derived in [26, 27], but these results hold only under constant delays. Subsequently, in [28], the globally robust ultimate boundedness of integrodifferential neural networks with uncertainties and time-varying delays was studied. After that, some sufficient criteria on the ultimate boundedness of neural networks with both time-varying and unbounded delays were derived in [29], but the systems concerned there are deterministic. In [30, 31], a series of criteria on the boundedness, global exponential stability, and existence of periodic solutions for nonautonomous recurrent neural networks were established. In [32, 33], the ultimate boundedness and weak attractors of stochastic neural networks with time-varying delays were discussed. To the best of our knowledge, for stochastic neural networks with mixed time delays, there are few published results on ultimate boundedness and weak attractors. Therefore, the questions of ultimate boundedness, weak attractors, and asymptotic stability for stochastic Hopfield neural networks with mixed time delays are important and meaningful.

The rest of this paper is organized as follows. Some preliminaries are given in Section 2, the main results are presented in Section 3, a numerical example is given in Section 4, and conclusions are drawn in Section 5.

2. Preliminaries

Consider the following stochastic Hopfield neural networks with both time-varying and continuously distributed delays:

dx(t) = [−Cx(t) + Af(x(t)) + Bg(x(t − τ(t))) + D ∫_{−∞}^{t} K(t − s)h(x(s)) ds + J] dt + [σ_1 x(t) + σ_2 x(t − τ(t))] dw(t), (1)

in which x(t) = (x_1(t), ..., x_n(t))^T is the state vector associated with the neurons; C = diag(c_1, ..., c_n), where c_i > 0 represents the rate with which the ith unit will reset its potential to the resting state in isolation when being disconnected from the network and the external stochastic perturbation; A = (a_ij)_{n×n}, B = (b_ij)_{n×n}, and D = (d_ij)_{n×n} represent the connection weight matrix, the delayed connection weight matrix, and the distributively delayed connection weight matrix, respectively; J = (J_1, ..., J_n)^T, where J_i denotes the external bias on the ith unit; f, g, and h denote the activation functions; K(·) = diag(k_1(·), ..., k_n(·)), where each delay kernel k_j is a real-valued nonnegative continuous function defined on [0, +∞); σ_1 and σ_2 are the diffusion coefficient matrices; w(t) is a one-dimensional Brownian motion (Wiener process), which is defined on a complete probability space (Ω, F, P) with a natural filtration {F_t}_{t≥0} generated by {w(s) : 0 ≤ s ≤ t}; τ(t) is the transmission delay; and the initial conditions associated with system (1) are of the form x(s) = ξ(s), s ∈ (−∞, 0], where ξ is an F_0-measurable, bounded, and continuous R^n-valued random variable defined on (−∞, 0].
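To make the model concrete, the following minimal Euler–Maruyama sketch in Python simulates a two-neuron instance of system (1). All numerical data in it (C, A, B, D, J, σ_1, σ_2, the constant delay, the exponential kernel, and its truncation window) are illustrative assumptions and are not taken from the example of Section 4.

import numpy as np

rng = np.random.default_rng(0)
n, dt, T = 2, 1e-3, 20.0
steps = int(T / dt)

C = np.diag([5.0, 5.0])                      # reset rates (assumed)
A = np.array([[0.3, -0.2], [0.1, 0.4]])      # connection weights (assumed)
B = np.array([[0.2, 0.1], [-0.1, 0.3]])      # delayed weights (assumed)
D = np.array([[0.1, 0.0], [0.0, 0.1]])       # distributed-delay weights (assumed)
J = np.array([0.5, -0.5])                    # external bias (assumed)
sigma1, sigma2 = 0.1 * np.eye(n), 0.1 * np.eye(n)  # diffusion matrices (assumed)

tau, Tk = 0.5, 5.0                           # constant delay; kernel truncation window
d_tau, d_k = int(tau / dt), int(Tk / dt)
f = g = h = np.tanh                          # activations satisfying (A1)
kernel = lambda s: np.exp(-s)                # assumed delay kernel k(s) = e^{-s}

x = np.zeros((steps + 1, n))
x[0] = rng.standard_normal(n)                # history before t = 0 is frozen at x[0]

for t in range(steps):
    x_tau = x[max(t - d_tau, 0)]
    lo = max(t - d_k, 0)
    ages = (t - np.arange(lo, t + 1)) * dt   # elapsed times t - s on the window
    conv = (kernel(ages)[:, None] * h(x[lo:t + 1])).sum(axis=0) * dt
    drift = -C @ x[t] + A @ f(x[t]) + B @ g(x_tau) + D @ conv + J
    dw = np.sqrt(dt) * rng.standard_normal() # scalar Brownian increment
    x[t + 1] = x[t] + drift * dt + (sigma1 @ x[t] + sigma2 @ x_tau) * dw

Sample paths of x(t) produced in this way are the kind of trajectories visualized in Figure 1 of Section 4 for the paper's own data.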

Throughout this paper, one always supposes that the following condition holds. (A1) For f, g, and h in (1), there are always constants l_i^-, l_i^+, m_i^-, and m_i^+ such that

l_i^- ≤ (f_i(u) − f_i(v))/(u − v) ≤ l_i^+ and m_i^- ≤ (g_i(u) − g_i(v))/(u − v) ≤ m_i^+ for all u, v ∈ R, u ≠ v, i = 1, ..., n.

Moreover, there exist a constant and a matrix such that h satisfies an analogous condition.
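For instance, the common choice f_i(u) = tanh u satisfies the first condition in (A1) with l_i^- = 0 and l_i^+ = 1: by the mean value theorem, for u ≠ v,

\[
\frac{\tanh u - \tanh v}{u - v} = \operatorname{sech}^{2}\theta \in (0, 1]
\quad \text{for some } \theta \text{ between } u \text{ and } v.
\]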

In the sequel, A > 0 (resp., A ≥ 0) means that the matrix A is symmetric positive definite (resp., positive semidefinite). A^T and A^{-1} denote the transpose and the inverse of the matrix A. λ_max(A) and λ_min(A) represent the maximum and minimum eigenvalues of the matrix A, respectively.

Definition 1. System (1) is said to be stochastically ultimately bounded if, for any ε ∈ (0, 1), there is a positive constant C_ε = C(ε) such that the solution x(t; ξ) of system (1) satisfies

lim sup_{t→∞} P{|x(t; ξ)| ≤ C_ε} ≥ 1 − ε. (4)

Lemma 2 (see [34]). Let S = (S_11, S_12; S_12^T, S_22) be a symmetric matrix depending affinely on x, where S_11 = S_11^T and S_22 = S_22^T. Then, the linear matrix inequality S < 0 is equivalent to each of the following conditions: (1) S_22 < 0 and S_11 − S_12 S_22^{-1} S_12^T < 0; (2) S_11 < 0 and S_22 − S_12^T S_11^{-1} S_12 < 0.
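Since Lemma 2 underpins the LMI manipulations below, a quick numerical sanity check can be run in Python; the random matrix S here is an arbitrary stand-in, not one of the matrices of Theorem 3.

import numpy as np

rng = np.random.default_rng(1)
m = 3
R = rng.standard_normal((2 * m, 2 * m))
S = -(R @ R.T) - 0.1 * np.eye(2 * m)      # a random symmetric negative definite matrix
S11, S12, S22 = S[:m, :m], S[:m, m:], S[m:, m:]

def neg_def(M):
    # True when the symmetric matrix M is negative definite.
    return bool(np.all(np.linalg.eigvalsh(M) < 0))

# Lemma 2 (Schur complement): since S < 0, both equivalent conditions must hold:
# S22 < 0 with S11 - S12 S22^{-1} S12^T < 0, and symmetrically with S11.
assert neg_def(S)
assert neg_def(S22) and neg_def(S11 - S12 @ np.linalg.solve(S22, S12.T))
assert neg_def(S11) and neg_def(S22 - S12.T @ np.linalg.solve(S11, S12))
print("Schur complement conditions verified on this sample.")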

3. Main Results

Theorem 3. System (1) is stochastically ultimately bounded provided that τ(t) is bounded and satisfies τ′(t) ≤ μ < 1 and there exist some matrices such that the following linear matrix inequality holds, where ∗ denotes the corresponding symmetric terms.

Proof. The key step of the proof is to show that there exists a positive constant K, independent of the initial data, such that

lim sup_{t→∞} E|x(t; ξ)|^2 ≤ K. (8)

If (8) holds, then it follows from Chebyshev's inequality that, for any ε ∈ (0, 1) and C_ε = (K/ε)^{1/2},

lim sup_{t→∞} P{|x(t; ξ)| > C_ε} ≤ lim sup_{t→∞} E|x(t; ξ)|^2 / C_ε^2 ≤ ε,

which implies that (4) holds. Now, we begin to prove that (8) holds.
From the linear matrix inequality of Theorem 3 and Lemma 2, one may obtain the reduced matrix inequality. Hence, there exists a sufficiently small λ > 0 such that the inequality still holds after the perturbation by λI, where I is the identity matrix.
Consider the Lyapunov–Krasovskii functional V(x(t), t) given in (13).
Then, it can be obtained by Itô's formula in [35] that the estimate (15) holds, in which an elementary quadratic inequality is used.
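For the reader's convenience, recall the form of the operator L associated with Itô's formula (a standard fact from [35]): for dx(t) = μ(t) dt + σ(t) dw(t) and V ∈ C^{2,1}(R^n × R_+; R_+),

\[
dV(x(t), t) = \mathcal{L}V(x(t), t)\, dt + V_x(x(t), t)\, \sigma(t)\, dw(t),
\qquad
\mathcal{L}V(x, t) = V_t(x, t) + V_x(x, t)\, \mu(t)
+ \tfrac{1}{2}\operatorname{trace}\bigl(\sigma^{T}(t)\, V_{xx}(x, t)\, \sigma(t)\bigr).
\]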
On the other hand, it follows from (A1) that, for each i, the corresponding estimates on the activation terms hold.
Similarly, one may obtain the analogous bounds.
Therefore, from (15)–(21), it follows that the resulting estimate of LV holds.
Thus, one may obtain (26), and (26) implies that (8) holds. The proof is completed.

Theorem 3 shows that there exist C_ε > 0 and T > 0 such that P{|x(t; ξ)| ≤ C_ε} ≥ 1 − ε for any t ≥ T. Let B_ε be denoted by

B_ε = {x ∈ R^n : |x| ≤ C_ε}.

Clearly, B_ε is closed, bounded, and invariant. Moreover, lim sup_{t→∞} P{x(t; ξ) ∈ B_ε} ≥ 1 − ε, which means that B_ε attracts the solutions infinitely many times with probability no less than 1 − ε; so we may say that B_ε is a weak attractor for the solutions.

Theorem 4. Suppose that all conditions of Theorem 3 hold. Then, there exists a weak attractor B_ε for the solutions of system (1).

Theorem 5. Suppose that all conditions of Theorem 3 hold, J = 0, and f(0) = g(0) = h(0) = 0. Then, the zero solution of system (1) is mean square exponentially stable and almost surely exponentially stable.

Proof. If J = 0 and f(0) = g(0) = h(0) = 0, then zero is a solution of system (1) and the constant K in (8) can be taken to be zero. By (25) and the semimartingale convergence theorem used in [35], the zero solution of system (1) is almost surely exponentially stable. It follows from (26) that the zero solution of system (1) is mean square exponentially stable.
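For completeness, the convergence result invoked above is the nonnegative semimartingale convergence theorem of [35]: let A(t) and U(t) be continuous adapted increasing processes with A(0) = U(0) = 0 a.s., let M(t) be a real-valued continuous local martingale with M(0) = 0 a.s., and let ζ be a nonnegative F_0-measurable random variable such that

\[
X(t) = \zeta + A(t) - U(t) + M(t) \ge 0, \qquad t \ge 0.
\]

If lim_{t→∞} A(t) < ∞ a.s., then lim_{t→∞} X(t) exists and is finite a.s. and lim_{t→∞} U(t) < ∞ a.s.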

Remark 6. If one takes the corresponding matrix in the Lyapunov–Krasovskii functional to be zero in Theorem 3, then it is not required that τ′(t) ≤ μ < 1. Furthermore, with this choice, τ(t) may be nondifferentiable, or the boundedness of τ′(t) may be unknown.

Remark 7. Assumption (A1) is less conservative than that in [32], since the constants l_i^-, l_i^+, m_i^-, and m_i^+ are allowed to be positive, negative, or zero. System (1) includes mixed time delays, which makes it more complex than the system considered in [33]. The systems concerned in [26–31] are deterministic, so the stochastic system studied in this paper is more complex and realistic.
When σ_1 = σ_2 = 0, system (1) becomes the following deterministic system:

dx(t)/dt = −Cx(t) + Af(x(t)) + Bg(x(t − τ(t))) + D ∫_{−∞}^{t} K(t − s)h(x(s)) ds + J. (29)

Definition 8. System (29) is said to be uniformly bounded if, for each H > 0, there exists a constant C = C(H) > 0 such that t_0 ∈ R, ξ bounded and continuous on (−∞, 0], and ‖ξ‖ ≤ H imply |x(t; t_0, ξ)| ≤ C for all t ≥ t_0, where ‖ξ‖ = sup_{s ≤ 0} |ξ(s)|.

Theorem 9. System (29) is uniformly bounded provided that τ(t) is bounded and satisfies τ′(t) ≤ μ < 1 and there exist some matrices such that the following linear matrix inequality holds, where ∗ denotes the corresponding symmetric terms and the remaining blocks are the same as in Theorem 3.

Proof. From the linear matrix inequality above, there exists a sufficiently small λ > 0 such that the strengthened inequality still holds, where I is the identity matrix and the remaining notations are the same as in Theorem 3.
We still consider the Lyapunov–Krasovskii functional in (13). From (16)–(21), one may obtain the corresponding estimate of the derivative of V along (29), in which the constant is the same as in (24). Noting that V dominates a positive multiple of |x(t)|^2, one concludes that system (29) is uniformly bounded.

4. An Example

Example 1. Consider system (1) with the following coefficient matrices and parameters.

The activation functions f_i and g_i satisfy assumption (A1), from which one computes the constants required in (A1). By using MATLAB's LMI Control Toolbox [34], based on Theorem 3, such a system is stochastically ultimately bounded, since the linear matrix inequality of Theorem 3 is verified to be feasible.
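As a reproducibility aid, a feasibility check of this kind can also be run outside MATLAB. The Python sketch below uses cvxpy on a simplified delay-independent LMI for the linear delayed part dx/dt = −Cx(t) + Ax(t − τ) with constant delay; the matrices C and A and the LMI structure are illustrative assumptions, not the full LMI of Theorem 3.

import cvxpy as cp
import numpy as np

n = 2
C = np.diag([5.0, 5.0])                    # assumed reset-rate matrix
A = np.array([[0.3, -0.2], [0.1, 0.4]])    # assumed delayed weight matrix

P = cp.Variable((n, n), symmetric=True)
Q = cp.Variable((n, n), symmetric=True)
eps = 1e-6

# V = x^T P x + integral over [t - tau, t] of x(s)^T Q x(s) ds gives, for
# constant tau, the classical block LMI [[-(C^T P + P C) + Q, P A], [A^T P, -Q]] < 0.
M = cp.bmat([[-(C.T @ P + P @ C) + Q, P @ A],
             [A.T @ P, -Q]])
M = 0.5 * (M + M.T)                        # symmetrize for the PSD constraint
constraints = [P >> eps * np.eye(n),
               Q >> eps * np.eye(n),
               M << -eps * np.eye(2 * n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)                  # any SDP-capable solver works
print(prob.status)                         # "optimal" means the LMI is feasible

A feasible pair (P, Q) certifies asymptotic stability of the simplified linear delayed system; verifying the full criterion of Theorem 3 would require coding its complete LMI blocks.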

From Figure 1, it is easy to see that the state x(t) of system (1) is stochastically ultimately bounded.

5. Conclusions

A proper Lyapunov functional and linear matrix inequalities are employed to investigate the ultimate boundedness, stability, and weak attractor of stochastic Hopfield neural networks with both time-varying and continuously distributed delays. Novel sufficient criteria on asymptotic stability, ultimate boundedness, and weak attractors are derived. From the proposed sufficient conditions, the zero solution of such networks is readily shown to be mean square exponentially stable and almost surely exponentially stable by applying the semimartingale convergence theorem.

Acknowledgments

The authors thank the editor and the reviewers for their insightful comments and valuable suggestions. This work was supported by the National Natural Science Foundation of China (nos. 10801109, 11271295, and 11047114), Science and Technology Research Projects of Hubei Provincial Department of Education (D20131602), and Young Talent Cultivation Projects of Guangdong (LYM09134).