Abstract

This paper investigates the dynamical behaviors of stochastic Hopfield neural networks with mixed time delays. The mixed time delays under consideration comprise both discrete time-varying delays and distributed time delays. By employing the theory of stochastic functional differential equations and the linear matrix inequality (LMI) approach, some novel criteria for asymptotic stability, ultimate boundedness, and the weak attractor are derived. Finally, a numerical example is given to illustrate the correctness and effectiveness of the theoretical results.

1. Introduction

The well-known Hopfield neural networks were first introduced by Hopfield [1, 2] in the early 1980s. Since then, both the mathematical analysis and the practical applications of Hopfield neural networks have gained considerable research attention. Hopfield neural networks have already been successfully applied in many different areas, such as combinatorial optimization, knowledge acquisition, and pattern recognition; see, for example, [3–5]. In both biological and artificial neural networks, the interactions between neurons are generally asynchronous, which gives rise to inevitable signal transmission delays. Also, in the electronic implementation of analog neural networks, the time delay is usually time-varying due to the finite switching speed of amplifiers. Note that continuously distributed delays have gained particular attention, since a neural network usually has a spatial nature due to the presence of a multitude of parallel pathways with a variety of axon sizes and lengths.

Recently, it has been well recognized that stochastic disturbances are ubiquitous and inevitable in various systems, ranging from electronic implementations to biochemical systems; they are mainly caused by thermal noise, environmental fluctuations, and different orders of ongoing events in the overall systems [6, 7]. Therefore, considerable attention has been paid to the dynamics of stochastic neural networks, and many results on stochastic neural networks with delays have been reported in the literature; see, for example, [8–30] and the references therein. Among them, some sufficient criteria on the stability of uncertain stochastic neural networks were derived in [8–10]. Almost sure exponential stability of stochastic neural networks was discussed in [11–15]. In [16–22], mean square exponential stability and pth moment exponential stability of stochastic neural networks were investigated. Some sufficient criteria on the exponential stability of impulsive stochastic neural networks were established in [23–26]. In [27], the stability of discrete-time stochastic neural networks was analyzed, while the exponential stability of stochastic neural networks with Markovian jump parameters was investigated in [28–30]. These references mainly considered the stability of the equilibrium point of stochastic neural networks. A natural question arises: what can be studied when the equilibrium point does not exist?

Besides stability, boundedness and the attractor are also foundational concepts of dynamical systems. They play an important role in investigating the uniqueness of the equilibrium, global asymptotic stability, global exponential stability, the existence of periodic solutions, and control and synchronization [31, 32], among others. Recently, the ultimate boundedness and attractors of several classes of neural networks with time delays have been reported. Some sufficient criteria were derived in [33, 34], but these results hold only under constant delays. Subsequently, in [35], the globally robust ultimate boundedness of integrodifferential neural networks with uncertainties and varying delays was studied. After that, some sufficient criteria on the ultimate boundedness of neural networks with both varying and unbounded delays were derived in [36], but the systems concerned are deterministic ones. In [37, 38], a series of criteria on the boundedness, global exponential stability, and existence of periodic solutions for nonautonomous recurrent neural networks were established. In [39–41], the ultimate boundedness and attractor of stochastic Hopfield neural networks with time-varying delays were discussed. To the best of our knowledge, for stochastic neural networks with mixed time delays, there are few published results on the ultimate boundedness and weak attractor. Therefore, the questions of ultimate boundedness, weak attractor, and asymptotic stability for stochastic Hopfield neural networks with mixed time delays are important and meaningful.

The rest of the paper is organized as follows: some preliminaries are given in Section 2, Section 3 presents our main results, and a numerical example and conclusions are given in Sections 4 and 5, respectively.

2. Preliminaries

Consider the following stochastic Hopfield neural networks with mixed time delays:
$$dx(t)=\left[-Cx(t)+Af(x(t))+Bg(x(t-\tau(t)))+D\int_{t-\tau(t)}^{t}g(x(s))\,ds+J\right]dt+\left[\sigma_{1}x(t)+\sigma_{2}x(t-\tau(t))\right]dw(t), \quad (1)$$
where $x(t)=(x_{1}(t),\ldots,x_{n}(t))^{T}$ is the state vector associated with the $n$ neurons; $C=\mathrm{diag}(c_{1},\ldots,c_{n})$, $c_{i}>0$, represents the rate with which the $i$th unit will reset its potential to the resting state in isolation when disconnected from the network and the external stochastic perturbation; $A=(a_{ij})_{n\times n}$, $B=(b_{ij})_{n\times n}$, and $D=(d_{ij})_{n\times n}$ represent the connection weight matrices; $J=(J_{1},\ldots,J_{n})^{T}$, where $J_{i}$ denotes the external bias on the $i$th unit; $f(x)=(f_{1}(x_{1}),\ldots,f_{n}(x_{n}))^{T}$ and $g(x)=(g_{1}(x_{1}),\ldots,g_{n}(x_{n}))^{T}$ denote the activation functions; $\sigma_{1}$ and $\sigma_{2}$ are the diffusion coefficient matrices; $w(t)$ is a one-dimensional Brownian motion defined on a complete probability space $(\Omega,\mathcal{F},P)$ with a natural filtration $\{\mathcal{F}_{t}\}_{t\ge 0}$ generated by $\{w(s):0\le s\le t\}$; and there exists a positive constant $\tau$ such that the transmission delay $\tau(t)$ satisfies
$$0\le\tau(t)\le\tau. \quad (2)$$

The initial conditions are given in the following form:
$$x(s)=\xi(s),\quad -\tau\le s\le 0, \quad (3)$$
where $\xi=\{\xi(s):-\tau\le s\le 0\}$ is a $C([-\tau,0];\mathbb{R}^{n})$-valued, $\mathcal{F}_{0}$-measurable random variable satisfying $E\|\xi\|^{2}<\infty$, where $\|\xi\|=\sup_{-\tau\le s\le 0}|\xi(s)|$, $|\cdot|$ is the Euclidean norm, and $C([-\tau,0];\mathbb{R}^{n})$ is the space of all continuous $\mathbb{R}^{n}$-valued functions defined on $[-\tau,0]$.

Let
$$F(x_{t},t)=-Cx(t)+Af(x(t))+Bg(x(t-\tau(t)))+D\int_{t-\tau(t)}^{t}g(x(s))\,ds+J,\qquad G(x_{t},t)=\sigma_{1}x(t)+\sigma_{2}x(t-\tau(t)), \quad (4)$$
where $x_{t}=\{x(t+\theta):-\tau\le\theta\le 0\}$. Then system (1) can be written as
$$dx(t)=F(x_{t},t)\,dt+G(x_{t},t)\,dw(t). \quad (5)$$
Throughout this paper, the following assumption will be considered.
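No numerical scheme is discussed in the paper; purely as an illustration of system (1), the following Euler–Maruyama sketch simulates a two-neuron instance, with all parameter values, the activation functions, and the delay function assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-neuron parameters (assumed for this sketch, not from the paper)
C = np.diag([2.0, 2.0])
A = np.array([[0.2, -0.1], [0.1, 0.3]])
B = np.array([[0.1, 0.2], [-0.2, 0.1]])
D = np.array([[0.05, 0.0], [0.0, 0.05]])
J = np.array([0.5, -0.5])
sigma1 = 0.1 * np.eye(2)
sigma2 = 0.1 * np.eye(2)
f = g = np.tanh                                # activations satisfying (A1)

tau, dt, T = 1.0, 0.001, 20.0                  # delay bound, step size, horizon
n_hist = int(tau / dt)                         # history points on [-tau, 0]
n_steps = int(T / dt)

X = np.zeros((n_hist + n_steps + 1, 2))
X[: n_hist + 1] = np.array([0.3, -0.2])        # constant initial segment xi

for k in range(n_hist, n_hist + n_steps):
    t = (k - n_hist) * dt
    tau_t = 0.5 * tau * (1.0 + np.sin(t))      # time-varying delay, 0 <= tau(t) <= tau
    d = int(round(tau_t / dt))
    x_now, x_del = X[k], X[k - d]
    # distributed-delay term: Riemann sum of g(x(s)) over [t - tau(t), t]
    integral = g(X[k - d : k]).sum(axis=0) * dt if d > 0 else np.zeros(2)
    drift = -C @ x_now + A @ f(x_now) + B @ g(x_del) + D @ integral + J
    dw = np.sqrt(dt) * rng.standard_normal()   # scalar Brownian increment
    X[k + 1] = x_now + drift * dt + (sigma1 @ x_now + sigma2 @ x_del) * dw
```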

(A1) There exist constants $l_{i}^{-}$, $l_{i}^{+}$, $m_{i}^{-}$, and $m_{i}^{+}$ such that
$$l_{i}^{-}\le\frac{f_{i}(u)-f_{i}(v)}{u-v}\le l_{i}^{+},\qquad m_{i}^{-}\le\frac{g_{i}(u)-g_{i}(v)}{u-v}\le m_{i}^{+}$$
for all $u,v\in\mathbb{R}$ with $u\ne v$ and $i=1,\ldots,n$.

Remark 1. It follows from [42] that, under assumption (A1), system (1) has a global solution on $t\ge 0$. Moreover, under assumption (A1), it is not difficult to prove that $F$ and $G$ satisfy the local Lipschitz condition in [43].

Remark 2. We note that assumption (A1) is less conservative than the corresponding assumptions in [8, 9, 39], since the constants $l_{i}^{-}$, $l_{i}^{+}$, $m_{i}^{-}$, and $m_{i}^{+}$ are allowed to be positive, negative, or zero.
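For instance, for the common choice $f_{i}=\tanh$, one may take $l_{i}^{-}=0$ and $l_{i}^{+}=1$, while $f_{i}=\sin$ yields the negative lower bound $l_{i}^{-}=-1$. The short sketch below (an illustration, not from the paper; the helper name sector_bounds is hypothetical) estimates such constants numerically.

```python
import numpy as np

def sector_bounds(f, lo=-10.0, hi=10.0, num=200001):
    """Estimate constants (l_minus, l_plus) with
    l_minus <= (f(u) - f(v)) / (u - v) <= l_plus for u != v,
    via difference quotients on a fine grid."""
    u = np.linspace(lo, hi, num)
    q = np.diff(f(u)) / np.diff(u)
    return q.min(), q.max()

print(sector_bounds(np.tanh))  # ~ (0.0, 1.0)
print(sector_bounds(np.sin))   # ~ (-1.0, 1.0): negative constants are allowed
```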

The notation $A>0$ (resp., $A\ge 0$) means that matrix $A$ is symmetric positive definite (resp., positive semidefinite). $A^{T}$ denotes the transpose of the matrix $A$. $\lambda_{\min}(A)$ represents the minimum eigenvalue of matrix $A$. Denote by $C([-\tau,0];\mathbb{R}^{n})$ the family of continuous functions from $[-\tau,0]$ to $\mathbb{R}^{n}$. Let $C^{2,1}(\mathbb{R}^{n}\times\mathbb{R}_{+};\mathbb{R}_{+})$ be the family of all continuous nonnegative functions $V(x,t)$ defined on $\mathbb{R}^{n}\times\mathbb{R}_{+}$ such that they are continuously twice differentiable in $x$ and once in $t$. Given $V\in C^{2,1}(\mathbb{R}^{n}\times\mathbb{R}_{+};\mathbb{R}_{+})$, we define the functional $\mathcal{L}V:C([-\tau,0];\mathbb{R}^{n})\times\mathbb{R}_{+}\to\mathbb{R}$ by
$$\mathcal{L}V(\varphi,t)=V_{t}(\varphi(0),t)+V_{x}(\varphi(0),t)F(\varphi,t)+\frac{1}{2}\,\mathrm{trace}\left[G^{T}(\varphi,t)V_{xx}(\varphi(0),t)G(\varphi,t)\right],$$
where $V_{t}(x,t)=\partial V(x,t)/\partial t$, $V_{x}(x,t)=(\partial V(x,t)/\partial x_{1},\ldots,\partial V(x,t)/\partial x_{n})$, and $V_{xx}(x,t)=(\partial^{2}V(x,t)/\partial x_{i}\partial x_{j})_{n\times n}$.

The following lemmas will be used in establishing our main results.

Lemma 3 (see [44]). For any positive definite matrix $M>0$, scalar $\gamma>0$, and vector function $\omega:[0,\gamma]\to\mathbb{R}^{n}$ such that the integrations concerned are well defined, the following inequality holds:
$$\left(\int_{0}^{\gamma}\omega(s)\,ds\right)^{T}M\left(\int_{0}^{\gamma}\omega(s)\,ds\right)\le\gamma\int_{0}^{\gamma}\omega^{T}(s)M\omega(s)\,ds.$$
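As a quick numerical sanity check of Lemma 3 (not part of the original paper), one can discretize the integrals on a midpoint grid and verify the inequality for a random positive definite $M$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, gamma, num = 3, 2.0, 2000
ds = gamma / num
s = (np.arange(num) + 0.5) * ds                    # midpoint grid on [0, gamma]

R = rng.standard_normal((n, n))
M = R @ R.T + n * np.eye(n)                        # a positive definite matrix
omega = np.stack([np.sin(s), np.cos(2 * s), s], axis=1)

I = omega.sum(axis=0) * ds                         # approximate integral of omega
lhs = I @ M @ I
rhs = gamma * np.einsum('si,ij,sj->', omega, M, omega) * ds
print(lhs, rhs)                                    # lhs <= rhs, as the lemma asserts
assert lhs <= rhs + 1e-9
```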

Lemma 4 (see [43]). Suppose that system (5) satisfies the local Lipschitz condition and the following assumptions hold.

(A2) There are two functions $V\in C^{2,1}(\mathbb{R}^{n}\times\mathbb{R}_{+};\mathbb{R}_{+})$ and $U\in C(\mathbb{R}^{n}\times[-\tau,\infty);\mathbb{R}_{+})$ and two probability measures $\mu_{1}$ and $\mu_{2}$ on $[-\tau,0]$ such that $\lim_{|x|\to\infty}\inf_{t\ge 0}V(x,t)=\infty$, while for all $(\varphi,t)\in C([-\tau,0];\mathbb{R}^{n})\times\mathbb{R}_{+}$,
$$\mathcal{L}V(\varphi,t)\le\alpha_{1}-\alpha_{2}V(\varphi(0),t)+\alpha_{3}\int_{-\tau}^{0}V(\varphi(\theta),t+\theta)\,d\mu_{1}(\theta)-\alpha_{4}U(\varphi(0),t)+\alpha_{5}\int_{-\tau}^{0}U(\varphi(\theta),t+\theta)\,d\mu_{2}(\theta),$$
where $\alpha_{1}\ge 0$, $\alpha_{2}>\alpha_{3}\ge 0$, and $\alpha_{4}\ge\alpha_{5}\ge 0$.

(A3) There is a pair of positive constants $c$ and $p$ such that
$$c|x|^{p}\le V(x,t),\quad (x,t)\in\mathbb{R}^{n}\times\mathbb{R}_{+}.$$

Then the unique global solution $x(t)$ to system (5) obeys
$$\limsup_{t\to\infty}E|x(t)|^{p}\le\frac{\alpha_{1}}{c\,\varepsilon_{0}},$$
where
$$\varepsilon_{0}=\min\{\varepsilon,\ \alpha_{2}-\alpha_{3}\},$$
while $\varepsilon>0$ is the unique root to the following equation:
$$\alpha_{2}=\varepsilon+\alpha_{3}e^{\varepsilon\tau}.$$
If, furthermore, $\alpha_{1}=0$, then
$$\limsup_{t\to\infty}\frac{1}{t}\log|x(t)|\le-\frac{\varepsilon}{p}\quad\text{a.s.}$$
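For concreteness, the root $\varepsilon$ of the scalar equation $\alpha_{2}=\varepsilon+\alpha_{3}e^{\varepsilon\tau}$ can be computed numerically; the sketch below uses scipy's brentq with illustrative values for $\alpha_{2}$, $\alpha_{3}$, and $\tau$.

```python
import numpy as np
from scipy.optimize import brentq

def epsilon_root(alpha2, alpha3, tau):
    """Unique positive root of alpha2 = eps + alpha3 * exp(eps * tau),
    assuming alpha2 > alpha3 > 0 (cf. Lemma 4)."""
    h = lambda eps: eps + alpha3 * np.exp(eps * tau) - alpha2
    # h(0) = alpha3 - alpha2 < 0, h(alpha2 - alpha3) >= 0, and h is increasing,
    # so the root lies in (0, alpha2 - alpha3]
    return brentq(h, 0.0, alpha2 - alpha3)

print(epsilon_root(2.0, 0.5, 1.0))  # ~0.84 for these illustrative values
```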

3. Main Results

Theorem 5. Suppose that there exist matrices $P>0$ and $Q_{i}\ge 0$ and positive constants $\lambda_{1}$ and $\lambda_{2}$ such that
$$\lambda_{1}I\le P\le\lambda_{2}I\quad\text{and}\quad\Xi<0,$$
where the block matrix $\Xi$ is assembled from $P$, $Q_{i}$, the system matrices $C$, $A$, $B$, $D$, $\sigma_{1}$, $\sigma_{2}$, and the sector bounds in (A1), and $*$ means the symmetric terms.
Then, the following results hold.
(i) System (1) is stochastically ultimately bounded; that is, for any $\delta\in(0,1)$, there exists a positive constant $C=C(\delta)$ such that the solution $x(t)$ of system (1) satisfies
$$\limsup_{t\to\infty}P\{|x(t)|\le C\}\ge 1-\delta.$$
(ii) If $\alpha_{1}=0$, where $\alpha_{1}$ is the same as defined in Lemma 4,
then
$$\limsup_{t\to\infty}\frac{1}{t}\log|x(t)|\le-\frac{\varepsilon}{2}\quad\text{a.s.};$$
that is, the trivial solution of system (1) is almost surely exponentially stable.
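LMI conditions of this kind can be checked with standard semidefinite programming tools. Since the detailed blocks of $\Xi$ are not reproduced here, the following cvxpy sketch solves a simplified stand-in feasibility problem of the same shape ($\lambda_{1}I\le P\le\lambda_{2}I$ together with a block condition linear in $P$); the data matrices and the form of the stand-in $\Xi$ are assumptions, not the matrix of Theorem 5.

```python
import numpy as np
import cvxpy as cp

n = 2
C = np.diag([2.0, 2.0])                      # illustrative system data (assumed)
A = np.array([[0.2, -0.1], [0.1, 0.3]])

P = cp.Variable((n, n), symmetric=True)
lam1 = cp.Variable(nonneg=True)
lam2 = cp.Variable(nonneg=True)

# Stand-in for the block LMI of Theorem 5 (the true Xi has more blocks)
Xi = -(P @ C + C.T @ P) + P @ A + A.T @ P

constraints = [
    P >> lam1 * np.eye(n),                   # lambda_1 I <= P
    lam2 * np.eye(n) >> P,                   # P <= lambda_2 I
    Xi << -1e-6 * np.eye(n),                 # Xi < 0, strictness via small margin
    lam1 >= 1e-3,
]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print(prob.status)
print(P.value)
```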

Proof. Let the Lyapunov function be $V(x)=x^{T}Px$. Applying Itô's formula in [42] to $V$ along system (1), one may obtain
$$dV(x(t))=\mathcal{L}V(x_{t},t)\,dt+2x^{T}(t)PG(x_{t},t)\,dw(t),$$
where $\mathcal{L}V$ is the functional defined in Section 2. From Lemma 3, it follows that
$$\left(\int_{t-\tau(t)}^{t}g(x(s))\,ds\right)^{T}M\left(\int_{t-\tau(t)}^{t}g(x(s))\,ds\right)\le\tau\int_{t-\tau}^{t}g^{T}(x(s))\,M\,g(x(s))\,ds$$
for any matrix $M>0$. From (A1), it follows that, for $i=1,\ldots,n$, the terms involving $f_{i}(x_{i}(t))$ admit sector bounds with the constants $l_{i}^{-}$ and $l_{i}^{+}$; similarly, one derives the corresponding bounds for $g_{i}$ at the delayed state.
Further, combining the above estimates, one derives
$$\mathcal{L}V(x_{t},t)\le\alpha_{1}-\alpha_{2}V(x(t))+\alpha_{3}\int_{-\tau}^{0}V(x(t+\theta))\,d\mu_{1}(\theta)-\alpha_{4}U(x(t))+\alpha_{5}\int_{-\tau}^{0}U(x(t+\theta))\,d\mu_{2}(\theta),$$
where the constants $\alpha_{1},\ldots,\alpha_{5}$ are determined by the LMI conditions of the theorem and satisfy $\alpha_{2}>\alpha_{3}\ge 0$ and $\alpha_{4}\ge\alpha_{5}\ge 0$. Let $c=\lambda_{1}$ and $p=2$. Then it follows from Lemma 4 that
$$\limsup_{t\to\infty}E|x(t)|^{2}\le\frac{\alpha_{1}}{\lambda_{1}\varepsilon_{0}},$$
where $\varepsilon_{0}$ is the same as defined in Lemma 4. Therefore, for any $\delta\in(0,1)$, it follows from Chebyshev's inequality that (i) holds with
$$C=\sqrt{\frac{\alpha_{1}}{\delta\lambda_{1}\varepsilon_{0}}}.$$
If, furthermore, $\alpha_{1}=0$, then it follows from Lemma 4 that (ii) holds. The proof is complete.

Theorem 5 shows that, for any $\delta\in(0,1)$, there exists $C=C(\delta)>0$ such that, for any solution $x(t)$ of system (1), $\limsup_{t\to\infty}P\{|x(t)|\le C\}\ge 1-\delta$. Denote
$$\mathcal{B}_{C}=\{x\in\mathbb{R}^{n}:|x|\le C\}.$$
Clearly, $\mathcal{B}_{C}$ is closed, bounded, and invariant. Moreover, the solutions return to $\mathcal{B}_{C}$ with probability no less than $1-\delta$, which means that $\mathcal{B}_{C}$ attracts the solutions infinitely many times with probability no less than $1-\delta$, so we may say that $\mathcal{B}_{C}$ is a weak attractor for the solutions.

Theorem 6. Suppose that all conditions of Theorem 5 hold. Then there exists a weak attractor $\mathcal{B}_{C}$ for the solutions of system (1).

Remark 7. Compared with [39–41], assumption (A1) is less conservative than that in [39], and system (1) includes mixed time delays, which makes it more complex than the systems in [39–41]. In addition, Lemma 4 is used for the first time to investigate the dynamical behaviors of stochastic neural networks with mixed time delays, and the bound required on $\mathcal{L}V$ may take a much weaker form. Our results deal not only with the asymptotic moment estimation but also with the pathwise (almost sure) estimation.

4. Numerical Example

In this section, a numerical example is presented to demonstrate the validity and effectiveness of our theoretical results.

Example 8. Consider a two-dimensional instance of the stochastic Hopfield neural networks with mixed time delays (1), where the system matrices $C$, $A$, $B$, $D$, $\sigma_{1}$, $\sigma_{2}$, the bias $J$, and the delay $\tau(t)$ are specified numerically and $w(t)$ is a one-dimensional Brownian motion, so that assumption (A1) is satisfied. By using the Matlab LMI Control Toolbox [45] and based on Theorem 5, such a system is stochastically ultimately bounded, with feasible solutions $P$, $Q_{i}$ and positive constants $\lambda_{1}$, $\lambda_{2}$ obtained from the LMI conditions.
From the feasible solutions obtained above, we compute the constant $C$ and the attracting set $\mathcal{B}_{C}$. For the system in Example 8, Figure 1(a) shows the time trajectories, and Figure 1(b) shows the set $\mathcal{B}_{C}$ together with several typical phase portraits for different initial values. From Figure 1, one can easily find that the trajectories are almost all attracted by the set $\mathcal{B}_{C}$.
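As a rough empirical counterpart of Figure 1 (an illustration only; the matrices of Example 8 come from the LMI Toolbox and are not reproduced here), one can measure how often a simulated path, such as the Euler–Maruyama path X from the sketch in Section 2, stays inside the ball of radius $C$:

```python
import numpy as np

def fraction_inside(path, radius, burn_in=0):
    """Empirical fraction of time a sample path stays in the ball |x| <= radius."""
    norms = np.linalg.norm(path[burn_in:], axis=1)
    return float(np.mean(norms <= radius))

# Example with the Euler-Maruyama path X from the sketch in Section 2
# (placeholder parameters, not the data of Example 8):
# print(fraction_inside(X, radius=1.5, burn_in=len(X) // 2))
```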

5. Conclusion

In this paper, by using the theory of stochastic functional differential equations and the linear matrix inequality approach, new sufficient criteria for the asymptotic stability, ultimate boundedness, and attractor of stochastic Hopfield neural networks with mixed time delays are established. A numerical example is also presented to demonstrate the correctness of the theoretical results.

Acknowledgments

The authors thank the Editor and the Reviewers for their detailed comments and valuable suggestions. This work was supported by the National Natural Science Foundation of China (no. 11271295, 10926128, and 11047114), Science and Technology Research Projects of Hubei Provincial Department of Education (no. Q20111607, Q20111611, D20131602, and D20131604), and Young Talent Cultivation Projects of Guangdong (LYM09134).