Abstract

This paper investigates the dynamical behaviors of stochastic Cohen-Grossberg neural networks with delays and reaction-diffusion terms. By employing the Lyapunov method, the Poincaré inequality, and matrix techniques, some sufficient criteria on ultimate boundedness, the existence of a weak attractor, and asymptotic stability are obtained. Finally, a numerical example is given to illustrate the correctness and effectiveness of the theoretical results.

1. Introduction

Cohen and Grossberg proposed and investigated Cohen-Grossberg neural networks in 1983 [1]. Hopfield neural networks, recurrent neural networks, cellular neural networks, and bidirectional associative memory neural networks are special cases of this model. Since then, Cohen-Grossberg neural networks have been widely studied in the literature; see, for example, [2–12] and the references therein.

Strictly speaking, diffusion effects cannot be avoided in neural networks when electrons move in asymmetric electromagnetic fields. Therefore, the activations must be allowed to vary in space as well as in time. In [13–19], the authors gave some stability conditions for reaction-diffusion neural networks, but these conditions were independent of the diffusion effects.

On the other hand, it has been well recognized that stochastic disturbances are ubiquitous and inevitable in various systems, ranging from electronic implementations to biochemical systems, and they are mainly caused by thermal noise, environmental fluctuations, and different orders of ongoing events in the overall systems [20, 21]. Therefore, considerable attention has been paid to the dynamics of stochastic neural networks, and many results on the stability of stochastic neural networks have been reported in the literature; see, for example, [22–38] and the references therein.

The above references mainly considered the stability of the equilibrium point of neural networks. What can be studied when an equilibrium point does not exist? Besides stability, boundedness and attractors are also foundational concepts of dynamical systems, and they play an important role in investigating the uniqueness of the equilibrium, global asymptotic stability, global exponential stability, the existence of periodic solutions, and so on [39, 40]. Recently, results on the ultimate boundedness and attractors of several classes of neural networks with time delays have been reported. In [41], the globally robust ultimate boundedness of integro-differential neural networks with uncertainties and time-varying delays was studied. Some sufficient criteria on the ultimate boundedness of deterministic neural networks with both time-varying and unbounded delays were derived in [42]. In [43, 44], a series of criteria on the boundedness, global exponential stability, and existence of periodic solutions for nonautonomous recurrent neural networks were established. In [45, 46], some criteria on the ultimate boundedness and attractors of stochastic neural networks were derived. To the best of our knowledge, however, there are few results on the ultimate boundedness and attractors of stochastic reaction-diffusion neural networks.

The questions of ultimate boundedness, attractors, and stability for stochastic reaction-diffusion Cohen-Grossberg neural networks with time-varying delays are therefore both important and meaningful.

The rest of the paper is organized as follows. Some preliminaries are given in Section 2, the main results are presented in Section 3, and a numerical example and the conclusions are given in Sections 4 and 5, respectively.

2. Model Description and Assumptions

Consider the following stochastic Cohen-Grossberg neural network with delays and diffusion terms:
$$
\begin{aligned}
du_i(t,x) = {}& \Bigg[ \sum_{k=1}^{m} \frac{\partial}{\partial x_k}\Big( D_{ik}\,\frac{\partial u_i(t,x)}{\partial x_k} \Big) - a_i\big(u_i(t,x)\big)\Big( b_i\big(u_i(t,x)\big) - \sum_{j=1}^{n} c_{ij}\, f_j\big(u_j(t,x)\big) \\
&\quad - \sum_{j=1}^{n} d_{ij}\, g_j\big(u_j(t-\tau_j(t),x)\big) - J_i \Big) \Bigg]\,dt + \sum_{j=1}^{l} \sigma_{ij}\big(u_i(t,x)\big)\,dw_j(t),
\end{aligned}
\tag{2.1}
$$
for $t \ge 0$, $x \in \Omega$, and $1 \le i \le n$, subject to the Neumann boundary condition $\partial u_i/\partial \nu = 0$ on $[0,\infty) \times \partial\Omega$ and the initial condition $u_i(s,x) = \phi_i(s,x)$ for $(s,x) \in [-\tau,0] \times \Omega$. In the above model, $n$ is the number of neurons in the network; $x = (x_1,\dots,x_m)^T \in \Omega$ is the space variable; $u_i(t,x)$ is the state variable of the $i$th neuron at time $t$ and in space $x$; $f_j$ and $g_j$ denote the activation functions of the $j$th unit at time $t$ and in space $x$; the constant $\tau \ge 0$ is an upper bound of the delays; $a_i(\cdot)$ represents an amplification function; $b_i(\cdot)$ is an appropriately behaved function; $c_{ij}$ and $d_{ij}$ denote the connection strengths of the $j$th unit on the $i$th unit, respectively; $\tau_j(t)$ corresponds to the transmission delay and satisfies $0 \le \tau_j(t) \le \tau$; $J_i$ denotes the external bias on the $i$th unit; $D_{ik} \ge 0$ is the diffusion function; $\Omega$ is a compact set with smooth boundary $\partial\Omega$ and measure $\operatorname{mes}\Omega > 0$ in $\mathbb{R}^m$; $\phi = (\phi_1,\dots,\phi_n)^T$ is the initial boundary value; $w(t) = (w_1(t),\dots,w_l(t))^T$ is an $l$-dimensional Brownian motion defined on a complete probability space $(\Omega_0,\mathcal{F},P)$ with a natural filtration $\{\mathcal{F}_t\}_{t\ge 0}$ generated by $\{w(s) : 0 \le s \le t\}$, where we associate $\Omega_0$ with the canonical space generated by all $\{w_i(t)\}$ and denote by $\mathcal{F}$ the associated $\sigma$-algebra generated by $\{w(t)\}$ with the probability measure $P$.

System (2.1) has the following matrix form:
$$
du(t,x) = \Big[ \nabla\cdot\big( \mathcal{D} \ast \nabla u(t,x) \big) - A\big(u(t,x)\big)\Big( B\big(u(t,x)\big) - C f\big(u(t,x)\big) - D g\big(u(t-\tau(t),x)\big) - J \Big) \Big]\,dt + \sigma\big(u(t,x)\big)\,dw(t),
\tag{2.2}
$$
where $u(t,x) = (u_1(t,x),\dots,u_n(t,x))^T$, $A(u) = \operatorname{diag}\big(a_1(u_1),\dots,a_n(u_n)\big)$, $B(u) = (b_1(u_1),\dots,b_n(u_n))^T$, $C = (c_{ij})_{n\times n}$, $D = (d_{ij})_{n\times n}$, $f(u) = (f_1(u_1),\dots,f_n(u_n))^T$, $g(u) = (g_1(u_1),\dots,g_n(u_n))^T$, $J = (J_1,\dots,J_n)^T$, $\sigma(u) = (\sigma_{ij}(u_i))_{n\times l}$, and $\nabla\cdot(\mathcal{D}\ast\nabla u)$ denotes the vector whose $i$th component is $\sum_{k=1}^{m} \frac{\partial}{\partial x_k}\big( D_{ik}\,\partial u_i/\partial x_k \big)$.
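To make the model concrete, the following minimal Python sketch simulates an instance of (2.1) on a one-dimensional domain by the Euler–Maruyama method, with a finite-difference Laplacian and zero-flux (Neumann) boundary conditions. All functions and parameter values below (the amplification $a_i$, the behaved function $b_i$, tanh activations, a constant delay, and a linear noise intensity) are illustrative assumptions and are not taken from this paper.

```python
import numpy as np

# Illustrative parameters (assumptions, not taken from the paper).
n, m_x = 2, 50                 # neurons, spatial grid points
L, T = 1.0, 1.0                # domain length, time horizon
dx, dt = L / (m_x - 1), 1e-4   # grid spacing, time step
tau = 0.1                      # constant transmission delay tau_j(t) = tau
Ddiff = np.array([0.5, 0.3])   # diffusion coefficients D_i
C = np.array([[0.2, -0.1], [0.1, 0.2]])       # connection matrix (c_ij)
Dmat = np.array([[0.1, 0.05], [-0.05, 0.1]])  # delayed connections (d_ij)
J = np.array([0.1, -0.1])      # external biases J_i
sig = 0.05                     # noise intensity in sigma_i(u) = sig * u

a = lambda u: 1.0 + 0.5 / (1.0 + u**2)  # bounded positive amplification, as in (A3)
b = lambda u: 2.0 * u                   # behaved function b_i
f = np.tanh                             # activations f_j = g_j = tanh

x = np.linspace(0.0, L, m_x)
u = np.vstack([0.5 * np.cos(np.pi * x), -0.3 * np.cos(2 * np.pi * x)])
hist = [u.copy() for _ in range(int(tau / dt) + 1)]  # constant history on [-tau, 0]
rng = np.random.default_rng(0)

for _ in range(int(T / dt)):
    u_tau = hist[0]                                  # state at time t - tau
    up = np.pad(u, ((0, 0), (1, 1)), mode="edge")    # ghost cells -> zero flux
    lap = (up[:, 2:] - 2.0 * u + up[:, :-2]) / dx**2
    drift = Ddiff[:, None] * lap - a(u) * (
        b(u) - C @ f(u) - Dmat @ f(u_tau) - J[:, None]
    )
    u = u + drift * dt + sig * u * np.sqrt(dt) * rng.standard_normal(u.shape)
    hist.append(u.copy())
    hist.pop(0)

print("L2-norm of the state at time T:", np.sqrt(np.sum(u**2) * dx))
```

With the dissipative choice $b_i(u) = 2u$ dominating the bounded couplings, the simulated $L^2$-norm is expected to remain bounded, in the spirit of the ultimate boundedness results of Section 3.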

Let $L^2(\Omega)$ be the space of real Lebesgue measurable functions on $\Omega$; it is a Banach space under the $L^2$-norm $\|v\|_2 = \big(\int_\Omega |v(x)|^2\,dx\big)^{1/2}$, and for $u = (u_1,\dots,u_n)^T$ we write $\|u\|_2 = \big(\sum_{i=1}^{n}\|u_i\|_2^2\big)^{1/2}$. Note that $u(t,\cdot)$ is an $L^2$-valued function and the initial value $\phi$ is an $\mathcal{F}_0$-measurable $\mathcal{C}$-valued random variable, where $\mathcal{C} = C\big([-\tau,0]; L^2(\Omega;\mathbb{R}^n)\big)$ is the space of all continuous $L^2$-valued functions defined on $[-\tau,0]$ with the norm $\|\phi\|_{\mathcal{C}} = \sup_{-\tau\le s\le 0}\|\phi(s)\|_2$.

The following assumptions and lemmas will be used in establishing our main results.

(A1) There exist constants $l_i^-$, $l_i^+$, $k_i^-$, and $k_i^+$ such that, for all $s_1, s_2 \in \mathbb{R}$ with $s_1 \neq s_2$,
$$
l_i^- \le \frac{f_i(s_1) - f_i(s_2)}{s_1 - s_2} \le l_i^+, \qquad k_i^- \le \frac{g_i(s_1) - g_i(s_2)}{s_1 - s_2} \le k_i^+.
$$

(A2) Each $\tau_j(t)$ is differentiable, and there exist constants $\tau > 0$ and $\mu$ such that $0 \le \tau_j(t) \le \tau$ and $\dot{\tau}_j(t) \le \mu$ for all $t \ge 0$.

(A3) $a_i(\cdot)$ is bounded, positive, and continuous; that is, there exist constants $\underline{a}_i$ and $\overline{a}_i$ such that $0 < \underline{a}_i \le a_i(s) \le \overline{a}_i$ for $s \in \mathbb{R}$, $1 \le i \le n$.

Lemma 2.1 (Poincaré inequality [47]). Assume that a real-valued function $v \in H^1(\Omega)$ satisfies $\int_\Omega v(x)\,dx = 0$, where $\Omega$ is a bounded domain of $\mathbb{R}^m$ with a smooth boundary $\partial\Omega$. Then
$$
\lambda_1 \int_\Omega |v(x)|^2\,dx \le \int_\Omega |\nabla v(x)|^2\,dx,
$$
where $\lambda_1$ is the lowest positive eigenvalue of the Neumann boundary problem
$$
-\Delta \varphi(x) = \lambda\,\varphi(x), \quad x \in \Omega, \qquad \frac{\partial \varphi}{\partial \nu}\Big|_{\partial\Omega} = 0,
$$
$\nabla$ is the gradient operator, and $\Delta$ is the Laplace operator.

Remark 2.2. Assumption (A1) is less conservative than the corresponding assumptions in [26, 28], since the constants $l_i^-$, $l_i^+$, $k_i^-$, and $k_i^+$ are allowed to be positive, negative, or zero; that is, the activation functions in (A1) are not assumed to be monotonic, differentiable, or bounded. Assumption (A2) is weaker than those given in [23, 27, 30], since $\dot{\tau}_j(t)$ is not required to be zero or smaller than 1, and $\mu$ is allowed to take any value.

Remark 2.3. According to the eigenvalue theory of elliptic operators, the lowest positive eigenvalue $\lambda_1$ is determined only by the domain $\Omega$ [47].
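For instance, in one space dimension $\lambda_1$ can be computed explicitly. For the interval $\Omega = (0, L) \subset \mathbb{R}$, the Neumann problem in Lemma 2.1 reads
$$
-\varphi''(x) = \lambda\,\varphi(x), \quad x \in (0, L), \qquad \varphi'(0) = \varphi'(L) = 0,
$$
whose eigenpairs are $\varphi_k(x) = \cos(k\pi x/L)$ and $\lambda_k = (k\pi/L)^2$ for $k = 0, 1, 2, \dots$, so that $\lambda_1 = \pi^2/L^2$; enlarging the domain therefore decreases $\lambda_1$.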

The notation $X > 0$ (resp., $X \ge 0$) means that the matrix $X$ is symmetric positive definite (resp., positive semidefinite). $X^T$ denotes the transpose of the matrix $X$, and $\lambda_{\min}(X)$ represents the minimum eigenvalue of the symmetric matrix $X$. Throughout, $E[\cdot]$ denotes the mathematical expectation with respect to $P$.

3. Main Results

Theorem 3.1. Suppose that assumptions (A1)–(A3) hold and that there exist matrices of appropriate dimensions, positive definite or positive semidefinite as indicated, such that the following linear matrix inequality holds:

(A4) the block matrix built from these matrices and the data of system (2.1) is negative definite, where $*$ denotes the symmetric terms of the matrix.

Then system (2.1) is stochastically ultimately bounded; that is, for any $\varepsilon \in (0,1)$ there is a positive constant $C = C(\varepsilon)$ such that the solution $u(t,x)$ of system (2.1) satisfies
$$
\limsup_{t\to\infty} P\big\{\|u(t,\cdot)\|_2 \le C\big\} \ge 1 - \varepsilon.
$$

Proof. If $\mu < 1$, then it follows from (A4) that there exists a sufficiently small $\varepsilon_0 > 0$ such that the perturbed linear matrix inequality (3.4) still holds.
If $\mu \ge 1$, then it follows from (A4) that there exists a sufficiently small $\varepsilon_0 > 0$ such that the corresponding perturbed inequality holds, where the block matrices are the same as in (3.4).
Consider the following Lyapunov functional:
Applying the Itô formula in [48] to this functional along (2.2), one obtains
From assumptions (A1)–(A4), one obtains
From the boundary condition, Green's formula, and Lemma 2.1, one obtains, for each $i$,
$$
\int_\Omega u_i \sum_{k=1}^{m} \frac{\partial}{\partial x_k}\Big( D_{ik}\,\frac{\partial u_i}{\partial x_k} \Big)\,dx
= -\int_\Omega \sum_{k=1}^{m} D_{ik} \Big( \frac{\partial u_i}{\partial x_k} \Big)^2 dx
\le -\lambda_1 \Big( \min_{1\le k\le m} D_{ik} \Big) \int_\Omega u_i^2\,dx,
$$
where the boundary integral produced by Green's formula vanishes because of the zero-flux boundary condition.
Substituting (3.10) and (3.11) into (3.9), we obtain the corresponding estimate, in which the coefficient takes one of two values according to whether $\mu < 1$ or $\mu \ge 1$.
In addition, it follows from (A1) that the activation terms admit quadratic bounds in the state; similarly, one obtains the analogous bounds for the delayed terms.
From (3.13)–(3.15), one derives, in either case, a differential inequality for the expectation of the Lyapunov functional. Solving this inequality, one obtains (3.20): there exists a constant $M > 0$, independent of the initial data, such that $\limsup_{t\to\infty} E\|u(t,\cdot)\|_2^2 \le M$.
For any $\varepsilon \in (0,1)$, set $C = \sqrt{M/\varepsilon}$. By Chebyshev's inequality and (3.20), we obtain
$$
\limsup_{t\to\infty} P\big\{\|u(t,\cdot)\|_2 > C\big\} \le \limsup_{t\to\infty} \frac{E\|u(t,\cdot)\|_2^2}{C^2} \le \varepsilon,
$$
which implies $\limsup_{t\to\infty} P\{\|u(t,\cdot)\|_2 \le C\} \ge 1 - \varepsilon$. The proof is completed.

Theorem 3.1 shows that there exists $T > 0$ such that $P\{\|u(t,\cdot)\|_2 \le C\} \ge 1 - \varepsilon$ for all $t \ge T$. Let $\mathcal{A}_C$ be denoted by
$$
\mathcal{A}_C = \big\{ u \in L^2(\Omega;\mathbb{R}^n) : \|u\|_2 \le C \big\}.
$$
Clearly, $\mathcal{A}_C$ is closed, bounded, and invariant. Moreover, the solutions enter $\mathcal{A}_C$ with probability no less than $1 - \varepsilon$, which means that $\mathcal{A}_C$ attracts the solutions infinitely many times with probability no less than $1 - \varepsilon$; so we may say that $\mathcal{A}_C$ is a weak attractor for the solutions.

Theorem 3.2. Suppose that all conditions of Theorem 3.1 hold. Then there exists a weak attractor $\mathcal{A}_C$ for the solutions of system (2.1).

Theorem 3.3. Suppose that all conditions of Theorem 3.1 hold and that the zero function is a solution of system (2.1). Then the zero solution of system (2.1) is mean-square exponentially stable.
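Here mean-square exponential stability is understood in the standard sense: there exist constants $M \ge 1$ and $\gamma > 0$ such that
$$
E\|u(t,\cdot)\|_2^2 \le M\,e^{-\gamma t}\, \sup_{-\tau \le s \le 0} E\|\phi(s)\|_2^2, \qquad t \ge 0.
$$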

Remark 3.4. Assumption (A4) depends on $\lambda_1$ and $\mu$, so the criteria on stability, ultimate boundedness, and the weak attractor depend on the diffusion effects and on the derivative of the delays, while they are independent of the magnitude of the delays.

4. An Example

In this section, a numerical example is presented to demonstrate the validity and effectiveness of our theoretical results.

Example 4.1. Consider an instance of system (2.1) with specified coefficients, activation functions, delays, and diffusion terms, where $w(t)$ is a one-dimensional Brownian motion. The constants required in assumptions (A1)–(A3) are then computed directly from these data. By using the Matlab LMI Toolbox to solve the linear matrix inequality in (A4), it follows from Theorem 3.1 that such a system is stochastically ultimately bounded.
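For readers without access to the Matlab LMI Toolbox, feasibility of a linear matrix inequality can also be checked with open-source tools. The following Python sketch uses CVXPY on a simple Lyapunov-type LMI with placeholder data; the matrix A and the structure of the inequality are illustrative assumptions, not the actual blocks of (A4).

```python
import cvxpy as cp
import numpy as np

# Illustrative data (placeholders, not the matrices of Example 4.1).
A = np.array([[-2.0, 0.5],
              [0.3, -1.5]])            # a Hurwitz "system" matrix
n = A.shape[0]
eps = 1e-6

P = cp.Variable((n, n), symmetric=True)   # Lyapunov matrix, P > 0
Q = cp.Variable((n, n), symmetric=True)   # slack matrix, Q >= 0
S = cp.Variable((n, n), symmetric=True)   # symmetric alias for the LMI block

constraints = [
    P >> eps * np.eye(n),             # P positive definite
    Q >> 0,                           # Q positive semidefinite
    S == A.T @ P + P @ A + Q,         # define the LMI block
    S << -eps * np.eye(n),            # strict negative definiteness
]
prob = cp.Problem(cp.Minimize(0), constraints)  # pure feasibility problem
prob.solve(solver=cp.SCS)

print("LMI feasibility status:", prob.status)
if prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE):
    print("P =\n", P.value)
```

The problem minimizes the constant $0$ subject to the matrix constraints, so any returned status of optimal certifies feasibility of the LMI for the given data.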

5. Conclusion

In this paper, sufficient criteria on ultimate boundedness, the existence of a weak attractor, and stability have been established for stochastic reaction-diffusion Cohen-Grossberg neural networks with delays by using the Lyapunov method, the Poincaré inequality, and matrix techniques. The criteria depend on the diffusion effects and on the derivative of the delays, and they are independent of the magnitude of the delays.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (nos. 11271295, 10926128, 11047114, and 71171152), Science and Technology Research Projects of Hubei Provincial Department of Education (nos. Q20111607 and Q20111611) and Young Talent Cultivation Projects of Guangdong (LYM09134).