Journal of Applied Mathematics
Volume 2012 (2012), Article ID 693163, 12 pages
Research Article

Exponential Stability for a Class of Stochastic Reaction-Diffusion Hopfield Neural Networks with Delays

1College of Information Science and Engineering, Ocean University of China, Qingdao 266100, China
2Department of Mathematics, Ocean University of China, Qingdao 266100, China

Received 7 August 2011; Accepted 28 November 2011

Academic Editor: Jitao Sun

Copyright © 2012 Xiao Liang and Linshan Wang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


This paper studies the asymptotic behavior of a class of delayed reaction-diffusion Hopfield neural networks driven by finite-dimensional Wiener processes. Some new sufficient conditions are established to guarantee the mean square exponential stability of this system by using Poincaré's inequality and stochastic analysis techniques. The proof of the almost sure exponential stability for this system is carried out by using the Burkholder-Davis-Gundy inequality, the Chebyshev inequality, and the Borel-Cantelli lemma. Finally, an example is given to illustrate the effectiveness of the proposed approach, and a simulation is carried out in Matlab.

1. Introduction

Recently, the dynamics of Hopfield neural networks with reaction-diffusion terms have been deeply investigated, because their various generalizations have been widely used in practical engineering problems such as pattern recognition, associative memory, and combinatorial optimization (see [1–3]). Under closer scrutiny, however, a more realistic model should include some of the past states of the system; the theory of functional differential equations has been extensively developed [4, 5], and many authors have considered the asymptotic behavior of neural networks with delays [6–9]. Moreover, random perturbation is unavoidable in practice [3, 10]; by including environmental noise in these systems we obtain a more accurate model [3, 11–16]. This paper is therefore devoted to the exponential stability of the following delayed reaction-diffusion Hopfield neural networks driven by finite-dimensional Wiener processes:
\[
\begin{aligned}
du_i(t,x)&=\Bigl[\sum_{j=1}^{l}\frac{\partial}{\partial x_j}\Bigl(D_{ij}(x)\frac{\partial u_i}{\partial x_j}\Bigr)-a_iu_i+\sum_{j=1}^{n}c_{ij}f_j\bigl(u_j(t-r,x)\bigr)\Bigr]dt+\sum_{j=1}^{m}g_{ij}\bigl(u_i(t-r,x)\bigr)\,dW_j,\\
\frac{\partial u_i}{\partial\nu}\Big|_{\partial\mathcal{O}}&=0,\quad t\ge 0,\\
u_i(\theta,x)&=\phi_i(\theta,x),\quad x\in\mathcal{O}\subset\mathbb{R}^{l},\ \theta\in[-r,0],\ i=1,2,\ldots,n.
\end{aligned}
\tag{1.1}
\]

There are $n$ neural network units in this system, and $u_i(t,x)$ denotes the potential of cell $i$ at time $t$ and position $x$. The $a_i$ are positive constants denoting the rate at which the $i$th unit resets its potential to the resting state in isolation, when disconnected from the network and external inputs, and $c_{ij}$ are the output connection weights from the $j$th neuron to the $i$th neuron. The $f_j$ are the activation functions of the network, and $r$ is the time delay of a neuron. $\mathcal{O}$ denotes an open, bounded, and connected subset of $\mathbb{R}^l$ with a sufficiently regular boundary $\partial\mathcal{O}$, $\nu$ is the unit outward normal on $\partial\mathcal{O}$, $\partial u_i/\partial\nu=(\nabla u_i,\nu)_{\mathbb{R}^l}$, and the $g_{ij}$ are noise intensities. The initial data $\phi_i$ are $\mathcal{F}_0$-measurable and bounded functions, almost surely.

We denote by $(\Omega,\mathcal{F},P)$ a complete probability space with a filtration $\{\mathcal{F}_t\}_{t\ge 0}$ satisfying the usual conditions (see [10]). $W_i(t)$, $i=1,2,\ldots,m$, are scalar standard Brownian motions defined on $(\Omega,\mathcal{F},P)$.

For convenience, we rewrite system (1.1) in the vector form
\[
\begin{aligned}
du&=\bigl[\nabla\cdot(D(x)\circ\nabla u)-Au+Cf(u(t-r))\bigr]dt+G(u(t-r))\,dW,\\
\frac{\partial u(t,x)}{\partial\nu}\Big|_{\partial\mathcal{O}}&=0,\quad t\ge 0,\qquad u(\theta,x)=\phi(\theta,x),
\end{aligned}
\tag{1.2}
\]
where $C=(c_{ij})_{n\times n}$, $u=(u_1,u_2,\ldots,u_n)^T$, $\nabla u=(\nabla u_1,\ldots,\nabla u_n)^T$, $W=(W_1,W_2,\ldots,W_m)^T$, $f(u)=(f_1(u_1),f_2(u_2),\ldots,f_n(u_n))^T$, $A=\operatorname{Diag}(a_1,a_2,\ldots,a_n)$, $\phi=(\phi_1,\phi_2,\ldots,\phi_n)^T$, $G(u)=(g_{ij}(u_i))_{n\times m}$, $D=(D_{ij})_{n\times l}$, and $D\circ\nabla u=(D_{ij}\,\partial u_i/\partial x_j)_{n\times l}$ is the Hadamard product of the matrices $D$ and $\nabla u$; for the definition of the divergence operator $\nabla\cdot$, we refer to [2, 3].

2. Preliminaries and Notations

In this paper, we introduce the Hilbert spaces $H\triangleq L^2(\mathcal{O})$ and $V\triangleq H^1(\mathcal{O})$; following [17–19], $V\hookrightarrow H=H'\hookrightarrow V'$, where $H'$, $V'$ denote the duals of $H$, $V$, respectively, the injections are continuous, and the embedding is compact. $\|\cdot\|$ and $|\cdot|$ denote the norms in $H$ and $V$, respectively.

$U\triangleq(L^2(\mathcal{O}))^n$ is the space of vector-valued Lebesgue measurable functions on $\mathcal{O}$; it is a Banach space under the norm $\|u\|_U=\bigl(\sum_{i=1}^n\|u_i(x)\|^2\bigr)^{1/2}$.

$C\triangleq C([-r,0],U)$ is the Banach space of all continuous functions from $[-r,0]$ to $U$, equipped with the sup-norm $\|\phi\|_C=\sup_{-r\le s\le 0}\|\phi(s)\|_U$.

With any continuous $\mathcal{F}_t$-adapted $U$-valued stochastic process $u(t)\colon\Omega\to U$, $t\ge -r$, we associate a continuous $\mathcal{F}_t$-adapted $C$-valued stochastic process $u_t\colon\Omega\to C$, $t\ge 0$, by setting $u_t(s,x)(\omega)=u(t+s,x)(\omega)$, $s\in[-r,0]$, $x\in\mathcal{O}$.

$C_b^{\mathcal{F}_0}$ denotes the space of all bounded continuous processes $\phi\colon[-r,0]\times\Omega\to U$ such that $\phi(\theta,\cdot)$ is $\mathcal{F}_0$-measurable for each $\theta\in[-r,0]$ and $E\|\phi\|_C<\infty$.

$\mathcal{L}(K)$ is the set of all bounded linear operators from $K$ into $K$; equipped with the operator norm, it is a Banach space.

In this paper, we assume the following.

H1: $f_i$ and $g_{ij}$ are Lipschitz continuous with positive Lipschitz constants $k_1$, $k_2$ such that $|f_i(u)-f_i(v)|\le k_1|u-v|$ and $|g_{ij}(u)-g_{ij}(v)|\le k_2|u-v|$ for all $u,v\in\mathbb{R}$, and $f_i(0)=0$, $g_{ij}(0)=0$.

H2: There exists $\alpha>0$ such that $D_{ij}(x)\ge\alpha/l$.

H3: $\eta=2\alpha\beta^2+2k_3-nk_1^2\sigma^2e^{r}-mk_2^2e^{r}-2>0$, where $k_3=\min_i\{a_i\}$ and $\sigma=\max_{i,j}\{|c_{ij}|\}$.

Remark 2.1. We can infer from H1 that system (1.1) has the equilibrium solution $u(t,x,\omega)\equiv 0$.
Let us define the linear operator $\mathfrak{A}$ as follows:
\[
\mathfrak{A}\colon\Pi(\mathfrak{A})\subset U\to U,\qquad \mathfrak{A}u=\nabla\cdot(D(x)\circ\nabla u),
\tag{2.1}
\]
where $\Pi(\mathfrak{A})=\{u\in(H^2(\mathcal{O}))^n:\ \partial u/\partial\nu|_{\partial\mathcal{O}}=0\}$.

Lemma 2.2 (Poincaré's inequality). Let $\mathcal{O}$ be a bounded domain in $\mathbb{R}^l$ and let $\phi$ belong to a collection of twice differentiable functions defined on $\mathcal{O}$ into $\mathbb{R}$; then
\[
\|\phi\|\le\beta^{-1}|\phi|,
\tag{2.2}
\]
where the constant $\beta$ depends on the size of $\mathcal{O}$.
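As a concrete illustration (ours, not part of the paper): on an interval $(0,L)$ with Neumann boundary conditions and mean-zero functions, the sharp constant in (2.2) is $\beta=\pi/L$, attained by $\phi(x)=\cos(\pi x/L)$. The following sketch verifies the inequality numerically for the first few Neumann eigenfunctions, using midpoint-rule quadrature.

```python
# Illustration (ours, not from the paper) of the Poincare inequality (2.2),
# ||phi|| <= beta^{-1} |phi|, on the interval O = (0, L) with Neumann
# boundary conditions. For mean-zero functions the sharp constant is
# beta = pi/L, attained by phi(x) = cos(pi x / L).
import math

def l2_norm(f, L, n=20000):
    """L^2(0, L) norm of f, approximated by the midpoint rule."""
    h = L / n
    return math.sqrt(sum(f((i + 0.5) * h) ** 2 for i in range(n)) * h)

L = 20.0            # interval length used in the example of Section 4
beta = math.pi / L  # sharp Poincare constant for (0, L), Neumann case

for k in (1, 2, 3):  # mean-zero Neumann eigenfunctions cos(k pi x / L)
    phi = lambda x, k=k: math.cos(k * math.pi * x / L)
    dphi = lambda x, k=k: -(k * math.pi / L) * math.sin(k * math.pi * x / L)
    lhs, rhs = l2_norm(phi, L), l2_norm(dphi, L) / beta
    assert lhs <= rhs + 1e-6      # equality holds for k = 1
    print(f"k={k}: ||phi|| = {lhs:.4f} <= beta^-1 |phi| = {rhs:.4f}")
```

Note that $\beta=\pi/20\ge 1/20$ on the domain $[0,20]$, consistent with the lower bound quoted in the simulation example of Section 4.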

Lemma 2.3. Consider the equation
\[
\frac{du}{dt}=\mathfrak{A}u,\quad t\ge 0,\qquad u(0)=\phi.
\tag{2.3}
\]
For every $\phi\in U$, let $u(t)=S(t)\phi$ denote the solution of (2.3); then $S(t)$ is a contraction map on $U$.

Proof. Taking the inner product of (2.3) with $u(t)$ in $U$ and employing the Gaussian (divergence) theorem and condition H2, we get $(\mathfrak{A}u,u)\le-\alpha|u|^2_{(H^1(\mathcal{O}))^n}$, where $(\cdot,\cdot)$ is the inner product in $U$ and $|u|_{(H^1(\mathcal{O}))^n}$ denotes the norm of $(H^1(\mathcal{O}))^n$ (see [3]). This means
\[
\frac12\frac{d}{dt}\|u(t)\|^2_U+\alpha|u(t)|^2_{(H^1(\mathcal{O}))^n}\le 0.
\tag{2.4}
\]
Thanks to the Poincaré inequality, one obtains
\[
\frac{d}{dt}\|u(t)\|^2_U+2\alpha\beta^2\|u(t)\|^2_U\le 0.
\tag{2.5}
\]
Multiplying both sides of the inequality by $e^{2\alpha\beta^2t}$, we have
\[
\frac{d}{dt}\bigl(e^{2\alpha\beta^2t}\|u(t)\|^2_U\bigr)\le 0.
\tag{2.6}
\]
Integrating the above inequality from $0$ to $t$, we obtain
\[
\|u(t)\|^2_U\le e^{-2\alpha\beta^2t}\|\phi\|^2_U.
\tag{2.7}
\]
By the definition of $\|S(t)\|_{\mathcal{L}(U)}$, we have $\|S(t)\|_{\mathcal{L}(U)}\le 1$.

Definition 2.4 (see [20–22]). A stochastic process $u(t)\colon[-r,+\infty)\times\Omega\to U$ is called a global mild solution of (1.1) if (i) $u(t)$ is adapted to $\mathcal{F}_t$; (ii) $u(t)$ is measurable, with $\int_0^{\infty}\|u(t)\|^2_U\,dt<\infty$ almost surely and
\[
\begin{aligned}
u(t)&=S(t)\phi(0)-\int_0^tS(t-s)Au\,ds+\int_0^tS(t-s)Cf(u(s-r))\,ds+\int_0^tS(t-s)G(u(s-r))\,dW,\\
u(t)&=\phi\in C_b^{\mathcal{F}_0},\quad t\in[-r,0],
\end{aligned}
\tag{2.8}
\]
for all $t\in[-r,+\infty)$ with probability one.

Definition 2.5. Equation (1.1) is said to be almost surely exponentially stable if, for any solution $u(t,x,\omega)$ with initial data $\phi\in C_b^{\mathcal{F}_0}$, there exists a positive constant $\lambda$ such that
\[
\limsup_{t\to\infty}\frac{\ln\|u_t\|_C}{t}\le-\lambda\quad\text{almost surely},\qquad u_t\in C.
\tag{2.9}
\]

Definition 2.6. System (1.1) is said to be exponentially stable in the mean square sense if there exist positive constants $\kappa$ and $\alpha$ such that, for any solution $u(t,x,\omega)$ with initial condition $\phi\in C_b^{\mathcal{F}_0}$, one has
\[
E\|u_t\|^2_C\le\kappa e^{-\alpha(t-t_0)},\quad t\ge t_0,\ u_t\in C.
\tag{2.10}
\]

3. Main Result

Theorem 3.1. Suppose conditions H1–H3 hold; then (1.1) is exponentially stable in the mean square sense.

Proof. Let $u$ be the mild solution of (1.1); thanks to the Itô formula, we observe that
\[
\begin{aligned}
d\bigl(e^{\lambda t}u_i^2\bigr)={}&\lambda e^{\lambda t}u_i^2\,dt+e^{\lambda t}\,2u_i\Bigl[\sum_{j=1}^l\frac{\partial}{\partial x_j}\Bigl(D_{ij}\frac{\partial u_i}{\partial x_j}\Bigr)-a_iu_i+\sum_{j=1}^nc_{ij}f_j\bigl(u_j(t-r)\bigr)\Bigr]dt\\
&+e^{\lambda t}G_iG_i^T\,dt+2e^{\lambda t}u_iG_i\,dW,\qquad G_i=(G_{i1},G_{i2},\ldots,G_{im}),
\end{aligned}
\tag{3.1}
\]
where $\lambda$ is a positive constant that will be fixed below. Then, integrating between $0$ and $t$, we find that
\[
\begin{aligned}
e^{\lambda t}u_i^2(t)={}&\phi_i^2(0)+\int_0^t\lambda e^{\lambda s}u_i^2\,ds+2\int_0^te^{\lambda s}u_i\sum_{j=1}^l\frac{\partial}{\partial x_j}\Bigl(D_{ij}\frac{\partial u_i}{\partial x_j}\Bigr)ds-2\int_0^te^{\lambda s}a_iu_i^2\,ds\\
&+2\int_0^te^{\lambda s}u_i\sum_{j=1}^nc_{ij}f_j\bigl(u_j(s-r)\bigr)ds+\int_0^te^{\lambda s}G_iG_i^T\,ds+2\int_0^te^{\lambda s}u_iG_i\,dW.
\end{aligned}
\tag{3.2}
\]
Integrating the above equation over $\mathcal{O}$, by virtue of Fubini's theorem, we get
\[
\begin{aligned}
e^{\lambda t}\|u_i\|^2={}&\|\phi_i(0)\|^2+\lambda\int_0^te^{\lambda s}\int_{\mathcal{O}}u_i^2\,dx\,ds+\int_0^te^{\lambda s}\int_{\mathcal{O}}2u_i\sum_{j=1}^l\frac{\partial}{\partial x_j}\Bigl(D_{ij}\frac{\partial u_i}{\partial x_j}\Bigr)dx\,ds\\
&-2\int_0^te^{\lambda s}\int_{\mathcal{O}}a_iu_i^2\,dx\,ds+2\int_0^te^{\lambda s}\int_{\mathcal{O}}u_i\sum_{j=1}^nc_{ij}f_j\bigl(u_j(s-r)\bigr)dx\,ds\\
&+\int_0^te^{\lambda s}\int_{\mathcal{O}}G_iG_i^T\,dx\,ds+2\int_0^te^{\lambda s}\int_{\mathcal{O}}u_iG_i\,dx\,dW.
\end{aligned}
\tag{3.3}
\]
Taking the expectation on both sides of the last equation, by [3, 10, 16] the stochastic integral vanishes:
\[
2E\int_0^t\int_{\mathcal{O}}e^{\lambda s}u_iG_i\,dx\,dW=0.
\tag{3.4}
\]
Then, by Fubini's theorem, we have
\[
\begin{aligned}
e^{\lambda t}E\|u_i\|^2={}&E\|\phi_i(0)\|^2+\lambda\int_0^te^{\lambda s}\int_{\mathcal{O}}Eu_i^2\,dx\,ds+2E\int_0^te^{\lambda s}\int_{\mathcal{O}}u_i\sum_{j=1}^l\frac{\partial}{\partial x_j}\Bigl(D_{ij}\frac{\partial u_i}{\partial x_j}\Bigr)dx\,ds\\
&-2\int_0^te^{\lambda s}\int_{\mathcal{O}}a_iEu_i^2\,dx\,ds+2E\int_0^te^{\lambda s}\int_{\mathcal{O}}u_i\sum_{j=1}^nc_{ij}f_j\bigl(u_j(s-r)\bigr)dx\,ds+E\int_0^te^{\lambda s}\int_{\mathcal{O}}G_iG_i^T\,dx\,ds\\
\triangleq{}&I_1+I_2+I_3+I_4+I_5+I_6.
\end{aligned}
\tag{3.5}
\]
We observe that
\[
I_1\le E\|\phi_i(0)\|^2\le\sup_{\theta\in[-r,0]}E\|\phi_i(\theta)\|^2,
\tag{3.6}
\]
\[
I_2\le\lambda\int_0^t\int_{\mathcal{O}}e^{\lambda s}Eu_i^2\,dx\,ds=\lambda\int_0^te^{\lambda s}E\|u_i\|^2\,ds.
\tag{3.7}
\]
From the Neumann boundary condition, by means of Green's formula and H2 (see [3, 6, 7]), we know
\[
I_3=2E\int_0^t\int_{\mathcal{O}}e^{\lambda s}u_i\sum_{j=1}^l\frac{\partial}{\partial x_j}\Bigl(D_{ij}\frac{\partial u_i}{\partial x_j}\Bigr)dx\,ds=-2E\int_0^t\int_{\mathcal{O}}e^{\lambda s}\sum_{j=1}^lD_{ij}\Bigl(\frac{\partial u_i}{\partial x_j}\Bigr)^2dx\,ds\le-2\alpha\int_0^te^{\lambda s}E|u_i|^2\,ds\le-2\alpha\beta^2\int_0^te^{\lambda s}E\|u_i\|^2\,ds.
\tag{3.8}
\]
Then, by the positivity of the $a_i$, one gets the relation
\[
I_4\le-2\int_0^t\int_{\mathcal{O}}e^{\lambda s}a_iEu_i^2\,dx\,ds\le-2k_3\int_0^te^{\lambda s}E\|u_i\|^2\,ds,
\tag{3.9}
\]
where $k_3=\min\{a_1,a_2,\ldots,a_n\}>0$.
By using the Young inequality as well as condition H1, we have
\[
\begin{aligned}
I_5&\le2E\int_0^t\int_{\mathcal{O}}e^{\lambda s}u_i\sum_{j=1}^nc_{ij}f_j\,dx\,ds\le\int_0^t\int_{\mathcal{O}}e^{\lambda s}\Bigl[E|u_i|^2+E\Bigl|\sum_{j=1}^nc_{ij}f_j\Bigr|^2\Bigr]dx\,ds\\
&\le\int_0^t\int_{\mathcal{O}}e^{\lambda s}\Bigl[E|u_i|^2+\sigma^2\sum_{j=1}^nE\bigl|f_j\bigl(u_j(s-r)\bigr)\bigr|^2\Bigr]dx\,ds\le\int_0^t\int_{\mathcal{O}}e^{\lambda s}\Bigl[E|u_i|^2+\sigma^2k_1^2\sum_{j=1}^nE\bigl|u_j(s-r)\bigr|^2\Bigr]dx\,ds\\
&\le\int_0^te^{\lambda s}\bigl[E\|u_i\|^2+\sigma^2k_1^2E\|u(s-r)\|^2_U\bigr]ds,
\end{aligned}
\tag{3.10}
\]
where $\sigma=\max\{|c_{ij}|\}$, and
\[
I_6\le\int_0^t\int_{\mathcal{O}}e^{\lambda s}EG_iG_i^T\,dx\,ds\le mk_2^2\int_0^te^{\lambda s}E\|u_i(s-r)\|^2\,ds.
\tag{3.11}
\]
We infer from (3.6)–(3.11) that
\[
e^{\lambda t}E\|u_i(t)\|^2\le\sup_{\theta\in[-r,0]}E\|\phi_i(\theta)\|^2-\bigl(2\alpha\beta^2+2k_3-1-\lambda\bigr)\int_0^te^{\lambda s}E\|u_i\|^2\,ds+\sigma^2k_1^2\int_0^te^{\lambda s}E\|u(s-r)\|^2_U\,ds+mk_2^2\int_0^te^{\lambda s}E\|u_i(s-r)\|^2\,ds.
\tag{3.12}
\]
Summing (3.12) from $i=1$ to $i=n$, we obtain
\[
e^{\lambda t}E\|u\|^2_U\le E\|\phi\|^2_C-\bigl(2\alpha\beta^2+2k_3-1-\lambda\bigr)\int_0^te^{\lambda s}E\|u\|^2_U\,ds+\bigl(nk_1^2\sigma^2+mk_2^2\bigr)\int_0^te^{\lambda s}E\|u(s-r)\|^2_U\,ds.
\tag{3.13}
\]
Since
\[
\int_0^te^{\lambda s}E\|u(s-r)\|^2_U\,ds\le e^{\lambda r}\int_{-r}^te^{\lambda s}E\|u(s)\|^2_U\,ds\le e^{2\lambda r}\int_{-r}^0E\|\phi(s)\|^2_U\,ds+e^{\lambda r}\int_0^te^{\lambda s}E\|u(s)\|^2_U\,ds\le re^{2\lambda r}E\|\phi\|^2_C+e^{\lambda r}\int_0^te^{\lambda s}E\|u(s)\|^2_U\,ds,
\tag{3.14}
\]
we deduce from the previous inequalities that
\[
e^{\lambda t}E\|u\|^2_U\le-c_1\int_0^te^{\lambda s}E\|u\|^2_U\,ds+c_2,
\tag{3.15}
\]
where $c_1=2\alpha\beta^2+2k_3-1-nk_1^2\sigma^2e^{\lambda r}-mk_2^2e^{\lambda r}-\lambda$ and $c_2=\bigl(1+mk_2^2re^{2\lambda r}+nk_1^2\sigma^2re^{2\lambda r}\bigr)E\|\phi\|^2_C$; we choose $\lambda=1$ so that $c_1=\eta>0$. By using the classical Gronwall inequality we see that
\[
e^{\lambda t}E\|u\|^2_U\le c_2e^{-\eta t};
\tag{3.16}
\]
in other words,
\[
E\|u\|^2_U\le c_2e^{-(\eta+1)t}.
\tag{3.17}
\]
So, for $t+\theta\ge t/2\ge 0$, we also have
\[
E\|u(t+\theta)\|^2_U\le c_2e^{-(\eta+1)(t+\theta)}\le c_2e^{-\kappa t},\quad\theta\in[-r,0],\ \kappa=\frac{\eta+1}{2},
\tag{3.18}
\]
and we can conclude that
\[
E\|u_t\|^2_C\le c_2e^{-\kappa t}.
\tag{3.19}
\]

Theorem 3.2. If the system (1.1) satisfies hypotheses H1–H3, then it is almost surely exponentially stable.

Proof. Let $u(t)$ be the mild solution of (1.1). By Definition 2.4 as well as the inequality $\bigl(\sum_{i=1}^na_i\bigr)^2\le n\sum_{i=1}^na_i^2$, $a_i\in\mathbb{R}$, we have
\[
\begin{aligned}
E\sup_{N\le t\le N+1}\|u(t)\|^2_U\le{}&4E\sup_{N\le t\le N+1}\|S(t-N+1)u(N-1)\|^2_U+4E\sup_{N\le t\le N+1}\Bigl\|\int_{N-1}^tAS(t-s)u\,ds\Bigr\|^2_U\\
&+4E\sup_{N\le t\le N+1}\Bigl\|\int_{N-1}^tS(t-s)Cf(u(s-r))\,ds\Bigr\|^2_U+4E\sup_{N\le t\le N+1}\Bigl\|\int_{N-1}^tS(t-s)G(u(s-r))\,dW\Bigr\|^2_U\\
\triangleq{}&I_1+I_2+I_3+I_4.
\end{aligned}
\tag{3.20}
\]
Using the contraction property of the map $S(t)$ and the result of Theorem 3.1, we find
\[
I_1\le4\sup_{N\le t\le N+1}E\|S(t-N+1)u(N-1)\|^2_U\le4E\|u_{N-1}\|^2_C\le4c_2e^{-\kappa(N-1)}.
\tag{3.21}
\]
By the Hölder inequality, we obtain
\[
\begin{aligned}
I_2&\le4\sup_{N\le t\le N+1}E\Bigl\|\int_{N-1}^tAS(t-s)u\,ds\Bigr\|^2_U\le4\sup_{N\le t\le N+1}(t-N+1)\int_{N-1}^tE\|AS(t-s)u\|^2_U\,ds\le8\sup_{N\le t\le N+1}\int_{N-1}^tE\|Au\|^2_U\,ds\\
&\le8k_4^2\int_{N-1}^{N+1}E\|u\|^2_U\,ds\le8k_4^2\int_{N-1}^{N+1}E\|u_s\|^2_C\,ds\le8k_4^2c_2\int_{N-1}^{N+1}e^{-\kappa s}\,ds\le8k_4^2\rho_1e^{-\kappa(N-1)},
\end{aligned}
\tag{3.22}
\]
where $\rho_1=c_2/\kappa$ and $k_4=\max\{a_1,a_2,\ldots,a_n\}$.
By virtue of Theorem 3.1, the Hölder inequality, and H1, we have
\[
\begin{aligned}
I_3&\le4\sup_{N\le t\le N+1}E\Bigl\|\int_{N-1}^tS(t-s)Cf(u(s-r))\,ds\Bigr\|^2_U\le4\sup_{N\le t\le N+1}(t-N+1)E\int_{N-1}^t\|Cf(u(s-r))\|^2_U\,ds\\
&\le8\sigma^2\sup_{N\le t\le N+1}E\int_{N-1}^t\|f(u(s-r))\|^2_U\,ds\le8k_1^2\sigma^2\int_{N-1}^{N+1}E\|u(s-r)\|^2_U\,ds\\
&\le8k_1^2\sigma^2\int_{N-1}^{N+1}E\|u_s\|^2_C\,ds\le8k_1^2\sigma^2c_2\int_{N-1}^{N+1}e^{-\kappa s}\,ds\le8k_1^2\sigma^2\rho_1e^{-\kappa(N-1)}.
\end{aligned}
\tag{3.23}
\]
Then, by the Burkholder-Davis-Gundy inequality (see [18, 22]), there exists $c_3$ such that
\[
\begin{aligned}
I_4&\le4\sup_{N\le t\le N+1}E\Bigl\|\int_{N-1}^tS(t-s)G(u(s-r))\,dW\Bigr\|^2_U\le4c_3\sup_{N\le t\le N+1}E\int_{N-1}^t\|S(t-s)G(u(s-r))I\|^2_U\,ds\\
&\le4c_3k_2^2\sup_{N\le t\le N+1}\int_{N-1}^tE\|u(s-r)\|^2_U\,ds\le4c_3k_2^2\int_{N-1}^{N+1}E\|u_s\|^2_C\,ds\le4c_3k_2^2c_2\int_{N-1}^{N+1}e^{-\kappa s}\,ds\le4c_3k_2^2\rho_1e^{-\kappa(N-1)},
\end{aligned}
\tag{3.24}
\]
where $I=(1,1,\ldots,1)^T$ is an $m$-dimensional vector.
We can deduce from (3.21)–(3.24) that
\[
E\sup_{N\le t\le N+1}\|u(t)\|^2_U\le\rho_2e^{-\kappa(N-1)},
\tag{3.25}
\]
where $\rho_2=4c_2+\bigl(8k_4^2+8k_1^2\sigma^2+4c_3k_2^2\bigr)\rho_1$.
Thus, for any positive constants $\varepsilon_N$, the Chebyshev inequality yields
\[
P\Bigl(\sup_{N\le t\le N+1}\|u(t)\|_U>\varepsilon_N\Bigr)\le\frac{1}{\varepsilon_N^2}E\sup_{N\le t\le N+1}\|u(t)\|^2_U\le\frac{\rho_2}{\varepsilon_N^2}e^{-\kappa(N-1)}.
\tag{3.26}
\]
By the Borel-Cantelli lemma, we see that
\[
\limsup_{t\to\infty}\frac{\ln\|u(t)\|^2_U}{t}\le-\kappa\quad\text{almost surely}.
\tag{3.27}
\]
This completes the proof of the theorem.

4. Simulation

Consider the following two-dimensional stochastic reaction-diffusion recurrent neural network with delay:
\[
\begin{aligned}
du_1(t,x)&=\bigl[10\Delta u_1-7u_1+1.3\tanh\bigl(u_1(t-1,x)\bigr)\bigr]dt+u_1(t-1,x)\,dW,\\
du_2(t,x)&=\bigl[10\Delta u_2-7u_2+\tanh\bigl(u_1(t-1,x)\bigr)-\tanh\bigl(u_2(t-1,x)\bigr)\bigr]dt+u_2(t-1,x)\,dW,\\
\frac{\partial u_i(t,0)}{\partial x}&=\frac{\partial u_i(t,20)}{\partial x}=0,\quad t\ge 0,\\
u_1(\theta,x)&=\cos(0.2\pi x),\quad u_2(\theta,x)=\cos(0.1\pi x),\quad x\in[0,20],\ \theta\in[-1,0].
\end{aligned}
\tag{4.1}
\]

Here $\Delta$ is the Laplace operator. We have $\beta\ge 1/20$, $\alpha=10$, $k_1=1$, $k_2=1$, $k_3=7$, $\sigma=1.3$, $n=2$, and $\eta>0$; by Theorems 3.1 and 3.2, this system is exponentially stable in the mean square sense as well as almost surely exponentially stable. The results are shown in Figures 1, 2, and 3.
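Hypothesis H3 can be verified directly for this example. The short script below (ours, not from the paper) evaluates $\eta=2\alpha\beta^2+2k_3-nk_1^2\sigma^2e^{r}-mk_2^2e^{r}-2$ with the constants just listed, taking the conservative lower bound $\beta=1/20$ and $r=1$, $m=1$:

```python
# Check of hypothesis H3 for example (4.1); all constants are taken from
# the text above, with beta = 1/20 used as a conservative lower bound.
import math

alpha, beta = 10.0, 1.0 / 20.0   # diffusion bound and Poincare constant
k1, k2, k3 = 1.0, 1.0, 7.0       # Lipschitz constants and k3 = min a_i
sigma, n, m, r = 1.3, 2, 1, 1.0  # max |c_ij|, neurons, noise dims, delay

eta = (2 * alpha * beta ** 2 + 2 * k3
       - n * k1 ** 2 * sigma ** 2 * math.exp(r)
       - m * k2 ** 2 * math.exp(r) - 2)
print(f"eta = {eta:.4f}")  # eta ≈ 0.1439 > 0, so H3 is satisfied
assert eta > 0
```

A larger Poincaré constant $\beta$ would only increase $\eta$, so the condition holds a fortiori.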

Figure 1
Figure 2
Figure 3

We use the forward Euler method to simulate this example [23–25]. We choose the time step $\Delta t=0.01$ and the space step $\Delta x=1$, so that $\delta=\Delta t/\Delta x^2=0.01$.
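The scheme just described can be sketched as follows (our reconstruction in Python; the authors' own simulation was in Matlab, and only the grid parameters above are taken from the paper). It discretizes the Laplacian by explicit central differences with reflected (Neumann) endpoints, keeps a buffer of the last $r/\Delta t$ time levels for the delayed terms, and drives both components with the same Brownian increment, as in (4.1):

```python
# Sketch (ours, not the authors' Matlab code) of a forward Euler /
# Euler-Maruyama scheme for system (4.1): explicit finite differences in
# space, a delay buffer of r/dt past time levels, shared Wiener increment.
import math, random

dt, dx, r = 0.01, 1.0, 1.0            # time step, space step, delay
L, T = 20, 5.0                        # domain length and final time
nx, lag = int(L / dx) + 1, int(r / dt)
delta = dt / dx ** 2                  # = 0.01, the ratio quoted above

def laplacian(u, i):
    """Second difference with reflecting (Neumann) endpoints."""
    left = u[i - 1] if i > 0 else u[i + 1]
    right = u[i + 1] if i < nx - 1 else u[i - 1]
    return (left - 2 * u[i] + right) / dx ** 2

x = [i * dx for i in range(nx)]
# history on [-r, 0]: constant-in-theta initial data, as in (4.1)
hist1 = [[math.cos(0.2 * math.pi * xi) for xi in x] for _ in range(lag + 1)]
hist2 = [[math.cos(0.1 * math.pi * xi) for xi in x] for _ in range(lag + 1)]

random.seed(0)
for step in range(int(T / dt)):
    u1, u2 = hist1[-1], hist2[-1]
    v1, v2 = hist1[-lag - 1], hist2[-lag - 1]  # states at time t - r
    dW = random.gauss(0.0, math.sqrt(dt))      # one Wiener increment, shared
    new1 = [u1[i] + dt * (10 * laplacian(u1, i) - 7 * u1[i]
                          + 1.3 * math.tanh(v1[i])) + v1[i] * dW
            for i in range(nx)]
    new2 = [u2[i] + dt * (10 * laplacian(u2, i) - 7 * u2[i]
                          + math.tanh(v1[i]) - math.tanh(v2[i])) + v2[i] * dW
            for i in range(nx)]
    hist1.append(new1); hist2.append(new2)
    hist1.pop(0); hist2.pop(0)                 # keep only the last r/dt levels

# mean square exponential stability shows up as decay of the discrete U-norm
norm = math.sqrt(sum(a * a + b * b for a, b in zip(hist1[-1], hist2[-1])) * dx)
print(f"||u(T)||_U ~ {norm:.3e}")
```

With these steps the diffusive stability ratio is $10\,\Delta t/\Delta x^2=0.1$, well inside the stability region of the explicit scheme, and the discrete $U$-norm of the sample path decays rapidly, consistent with Theorems 3.1 and 3.2.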


Acknowledgments

The authors wish to thank the referees for their suggestions and comments. We are also indebted to the editors for their help. This work was supported by the National Natural Science Foundation of China (no. 11171374) and the Natural Science Foundation of Shandong Province (no. ZR2011AZ001).


References

1. X. X. Liao and Y. L. Gao, “Stability of Hopfield neural networks with reaction-diffusion terms,” Acta Electronica Sinica, vol. 28, pp. 78–82, 2000.
2. L. S. Wang and D. Xu, “Global exponential stability of Hopfield reaction-diffusion neural networks with time-varying delays,” Science in China F, vol. 46, no. 6, pp. 466–474, 2003.
3. L. S. Wang and Y. F. Wang, “Stochastic exponential stability of the delayed reaction-diffusion interval neural networks with Markovian jumping parameters,” Physics Letters A, vol. 356, pp. 346–352, 2008.
4. J. K. Hale and V. S. M. Lunel, Introduction to Functional-Differential Equations, vol. 99 of Applied Mathematical Sciences, Springer, Berlin, Germany, 1993.
5. S.-E. A. Mohammed, Stochastic Functional Differential Equations, vol. 99 of Research Notes in Mathematics, Pitman, London, UK, 1984.
6. L. S. Wang and Y. Y. Gao, “Global exponential robust stability of reaction-diffusion interval neural networks with time-varying delays,” Physics Letters A, vol. 305, pp. 343–348, 2006.
7. L. S. Wang, R. Zhang, and Y. Wang, “Global exponential stability of reaction-diffusion cellular neural networks with S-type distributed time delays,” Nonlinear Analysis: Real World Applications, vol. 10, no. 2, pp. 1101–1113, 2009.
8. H. Y. Zhao and G. L. Wang, “Existence of periodic oscillatory solution of reaction-diffusion neural networks with delays,” Physics Letters A, vol. 343, pp. 372–382, 2005.
9. J. G. Lu and L. J. Lu, “Global exponential stability and periodicity of reaction-diffusion recurrent neural networks with distributed delays and Dirichlet boundary conditions,” Chaos, Solitons and Fractals, vol. 39, no. 4, pp. 1538–1549, 2009.
10. X. Mao, Stochastic Differential Equations and Applications, Horwood, 1997.
11. J. Sun and L. Wan, “Convergence dynamics of stochastic reaction-diffusion recurrent neural networks with delays,” International Journal of Bifurcation and Chaos in Applied Sciences and Engineering, vol. 15, no. 7, pp. 2131–2144, 2005.
12. M. Itoh and L. O. Chua, “Complexity of reaction-diffusion CNN,” International Journal of Bifurcation and Chaos in Applied Sciences and Engineering, vol. 16, no. 9, pp. 2499–2527, 2006.
13. X. Lou and B. Cui, “New criteria on global exponential stability of BAM neural networks with distributed delays and reaction-diffusion terms,” International Journal of Neural Systems, vol. 17, pp. 43–52, 2007.
14. Q. Song and Z. Wang, “Dynamical behaviors of fuzzy reaction-diffusion periodic cellular neural networks with variable coefficients and delays,” Applied Mathematical Modelling, vol. 33, no. 9, pp. 3533–3545, 2009.
15. Q. Song, J. Cao, and Z. Zhao, “Periodic solutions and its exponential stability of reaction-diffusion recurrent neural networks with distributed time delays,” Nonlinear Analysis B, vol. 8, pp. 345–361, 2007.
16. K. Liu, “Lyapunov functionals and asymptotic stability of stochastic delay evolution equations,” Stochastics and Stochastics Reports, vol. 63, no. 1-2, pp. 1–26, 1998.
17. R. Temam, Infinite-Dimensional Dynamical Systems in Mechanics and Physics, vol. 68 of Applied Mathematical Sciences, Springer, New York, NY, USA, 1988.
18. G. Da Prato and J. Zabczyk, Stochastic Equations in Infinite Dimensions, vol. 44 of Encyclopedia of Mathematics and its Applications, Cambridge University Press, Cambridge, UK, 1992.
19. I. Chueshov, Introduction to the Theory of Infinite-Dimensional Dissipative Systems, Acta, Kharkov, 2002.
20. R. Jahanipur, “Stochastic functional evolution equations with monotone nonlinearity: existence and stability of the mild solutions,” Journal of Differential Equations, vol. 248, no. 5, pp. 1230–1255, 2010.
21. T. Taniguchi, “Almost sure exponential stability for stochastic partial functional-differential equations,” Stochastic Analysis and Applications, vol. 16, no. 5, pp. 965–975, 1998.
22. T. Caraballo, K. Liu, and A. Truman, “Stochastic functional partial differential equations: existence, uniqueness and asymptotic decay property,” Proceedings of the Royal Society of London A, vol. 456, no. 1999, pp. 1775–1802, 2000.
23. M. Kamrani and S. M. Hosseini, “The role of coefficients of a general SPDE on the stability and convergence of a finite difference method,” Journal of Computational and Applied Mathematics, vol. 234, no. 5, pp. 1426–1434, 2010.
24. D. J. Higham, “Mean-square and asymptotic stability of the stochastic theta method,” SIAM Journal on Numerical Analysis, vol. 38, no. 3, pp. 753–769, 2000.
25. P. E. Kloeden and E. Platen, Numerical Solution of Stochastic Differential Equations, vol. 23, Springer, Berlin, Germany, 3rd edition, 1999.