Abstract

This work addresses the asymptotic stability of a class of impulsive cellular neural networks with time-varying delays and reaction-diffusion terms. By using the impulsive integral inequality of Gronwall-Bellman type and the Hardy-Sobolev inequality, together with piecewise continuous Lyapunov functions, we establish some new and concise sufficient conditions ensuring the global exponential asymptotic stability of the equilibrium point. The proposed stability criteria apply to the Dirichlet boundary condition and are shown to depend on all of the reaction-diffusion coefficients, the dimension of the space, the delay, and the bounds of the spatial variables. Two examples are finally given to illustrate the effectiveness of the obtained results.

1. Introduction

Cellular neural networks (CNNs), proposed by Chua and Yang in 1988 [1, 2], have been the focus of a number of investigations due to their potential applications in various fields such as optimization, linear and nonlinear programming, associative memory, pattern recognition, and computer vision [3-7]. Moreover, since time delays are unavoidable owing to the finite switching speed of neurons and amplifiers in the implementation of neural networks, delayed cellular neural networks (DCNNs) were subsequently introduced to solve dynamic image processing and pattern recognition problems. Such applications of CNNs and DCNNs depend heavily on dynamical behaviors such as stability, convergence, and oscillation [8, 9]. In particular, stability analysis has been a major concern in the design and application of CNNs and DCNNs. The stability of CNNs and DCNNs is a subject of current interest, and considerable theoretical effort has been put into this topic, with many good results reported (see, e.g., [10-13]).

With reference to neural networks, however, it is noteworthy that the state of electronic networks is often subject to instantaneous perturbations. On this account, the networks experience abrupt changes at certain instants, which may be caused by switching phenomena, frequency changes, or other sudden noise; that is, the networks often exhibit impulsive effects [14, 15]. For instance, according to Arbib [16] and Haykin [17], when a stimulus from the body or the external environment is received by receptors, electrical impulses are conveyed to the neural net and impulsive effects arise naturally in the net. As a consequence, in the past few years, scientists have become increasingly interested in the influence that impulses may have on CNNs and DCNNs, and a large number of stability criteria have been derived (see, e.g., [18-22]).

In reality, besides impulsive effects, diffusion effects cannot be ignored, since diffusion is unavoidable when electrons move in asymmetric electromagnetic fields. As such, a model of neural networks with both impulses and reaction-diffusion describes the evolution of the systems in question more accurately, and it is necessary to consider the effects of both diffusion and impulses on the stability of CNNs and DCNNs.

In the past years, there have been a few theoretical contributions to the stability of CNNs and DCNNs with impulses and diffusion. For instance, Qiu [23] formulated a mathematical model of impulsive neural networks with time-varying delays and reaction-diffusion terms described by impulsive partial differential equations and studied, via a delay impulsive differential inequality, the problem of global exponential stability, presenting several stability criteria. Remarkably, all of the stability criteria obtained in [23] are independent of the diffusion. In 2008, Li and Song [24] investigated a class of impulsive Cohen-Grossberg neural networks with time-varying delays and reaction-diffusion terms. By establishing a delay inequality with impulsive initial conditions and using M-matrix theory, they gave some sufficient conditions ensuring global exponential stability of the equilibrium point. Analogous to [23], the stability criteria proposed in [24] are also independent of the diffusion. More recently, in 2010, Pan et al. [25] investigated a class of impulsive Cohen-Grossberg neural networks with time-varying delays and reaction-diffusion. With the aid of the delay impulsive differential inequality quoted in [23], several sufficient conditions ensuring global exponential stability of the equilibrium point were derived. In particular, different from [23, 24], the estimate of the exponential convergence rate in [25] depends on the reaction-diffusion.

In this paper, unlike the methods of impulsive differential inequalities and the Poincaré inequality used in [25], we adopt the techniques of the impulsive integral inequality of Gronwall-Bellman type and the Hardy-Sobolev inequality to investigate the problem of global exponential asymptotic stability for impulsive cellular neural networks with time-varying delays and reaction-diffusion terms. Different from the existing research, we find that, besides the reaction-diffusion coefficients, the dimension of the space and the bounds of the spatial variables also influence the stability.

The rest of the paper is organized as follows. In Section 2, the model of impulsive delayed cellular neural networks with reaction-diffusion terms and Dirichlet boundary condition is formulated, and some facts and lemmas are introduced for later reference. By means of the impulsive integral inequality of Gronwall-Bellman type as well as the Hardy-Sobolev inequality, we discuss the global exponential asymptotic stability and develop some new criteria in Section 3. Finally, two illustrative examples are given in Section 4 to verify the effectiveness of our results.

2. Preliminaries

Let $R^n$ denote the $n$-dimensional Euclidean space, and let $\Omega\subset R^m$ be a bounded open set containing the origin, with smooth boundary $\partial\Omega$ and $\mathrm{mes}\,\Omega>0$. Let $R_+=[0,+\infty)$ and $t_0\in R_+$.

We consider the following impulsive neural networks with time delays and reaction-diffusion terms:
$$\frac{\partial u_i(t,x)}{\partial t}=\sum_{s=1}^{m}\frac{\partial}{\partial x_s}\Big(D_{is}\frac{\partial u_i(t,x)}{\partial x_s}\Big)-a_iu_i(t,x)+\sum_{j=1}^{n}b_{ij}f_j\big(u_j(t,x)\big)+\sum_{j=1}^{n}c_{ij}f_j\big(u_j(t-\tau_j(t),x)\big),\quad t\ge t_0,\ t\ne t_k,\ x\in\Omega,\ i=1,2,\ldots,n,\ k=1,2,\ldots,\tag{2.1}$$
$$u_i(t_k+0,x)=u_i(t_k,x)+P_{ik}\big(u_i(t_k,x)\big),\quad x\in\Omega,\ k=1,2,\ldots,\ i=1,2,\ldots,n,\tag{2.2}$$
where $n$ corresponds to the number of units in the neural network; $x=(x_1,\ldots,x_m)^T\in\Omega$; $u_i(t,x)$ denotes the state of the $i$th neuron at time $t$ and in space $x$; the smooth functions $D_{is}=D_{is}(t,x,u)\ge0$ represent the transmission diffusion operators of the $i$th unit; the activation functions $f_j(u_j(t,x))$ stand for the output of the $j$th unit at time $t$ and in space $x$; $b_{ij}$, $c_{ij}$, $a_i$ are constants: $b_{ij}$ indicates the strength of the $j$th unit on the $i$th unit at time $t$ and in space $x$, $c_{ij}$ denotes the strength of the $j$th unit on the $i$th unit at time $t-\tau_j(t)$ and in space $x$, where $\tau_j(t)$ corresponds to the transmission delay along the axon of the $j$th unit and satisfies $0\le\tau_j(t)\le\tau$ ($\tau=\mathrm{const}$) as well as $\tau_j'(t)<1-1/h$ ($h>0$), and $a_i>0$ represents the rate with which the $i$th unit resets its potential to the resting state in isolation when disconnected from the network and external inputs at time $t$ and in space $x$. The fixed moments $t_k$ ($k=1,2,\ldots$) are called impulsive moments and satisfy $0\le t_0<t_1<t_2<\cdots$ and $\lim_{k\to\infty}t_k=\infty$; $u_i(t_k+0,x)$ and $u_i(t_k-0,x)$ denote the right-hand and left-hand limits of $u_i(t,x)$ at time $t_k$ and in space $x$, respectively. $P_{ik}(u_i(t_k,x))$ stands for the abrupt change of $u_i(t,x)$ at the impulsive moment $t_k$ and in space $x$.
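To make the structure of system (2.1)-(2.2) concrete, the following minimal sketch integrates a small instance of the model with an explicit finite-difference scheme. It is offered purely as an illustration and is not part of the analysis: it uses one spatial dimension (whereas the stability criteria of Section 3 require $m\ge3$), a constant delay, a constant initial history, and hypothetical coefficient values.

import numpy as np
from collections import deque

# Minimal 1-D finite-difference sketch of (2.1)-(2.2); all numbers are illustrative only.
L, N = 2.0, 101                        # spatial interval (0, L) and number of grid points
x = np.linspace(0.0, L, N); dx = x[1] - x[0]
dt = 1e-4                              # explicit Euler step, dt <= dx^2 / (2 * max D)
tau = 0.5                              # constant delay tau_j(t) = tau
steps_delay = int(round(tau / dt))

n = 2
D = np.array([1.0, 1.2])               # diffusion lower bounds D_{is} >= D > 0 (constant here)
a = np.array([6.5, 6.5])
b = np.array([[0.2, 0.3], [0.1, 0.4]])
c = np.array([[0.1, 0.2], [0.1, 0.3]])
f = lambda u: 0.25 * (np.abs(u + 1.0) - np.abs(u - 1.0))   # Lipschitz activation, l_j = 1/2
theta = 1.3                            # impulsive gain: u(t_k + 0) = (1 - theta) * u(t_k)
impulse_times = [1.0, 2.0, 3.0, 4.0]

# history buffer: hist[0] = u(t - tau, .), hist[-1] = u(t, .), each of shape (n, N);
# the constant history plays the role of the initial function phi in (2.3)
u0 = 0.5 * np.tile(np.sin(np.pi * x / L), (n, 1))          # zero on the boundary
hist = deque((u0.copy() for _ in range(steps_delay + 1)), maxlen=steps_delay + 1)

t, next_imp, step = 0.0, 0, 0
while t < 5.0:
    u, u_del = hist[-1], hist[0]
    lap = np.zeros_like(u)
    lap[:, 1:-1] = (u[:, 2:] - 2.0 * u[:, 1:-1] + u[:, :-2]) / dx**2
    u_new = u + dt * (D[:, None] * lap - a[:, None] * u + b @ f(u) + c @ f(u_del))
    u_new[:, 0] = u_new[:, -1] = 0.0                       # Dirichlet condition (2.4)
    t += dt; step += 1
    if next_imp < len(impulse_times) and t >= impulse_times[next_imp]:
        u_new = (1.0 - theta) * u_new                      # jump (2.2) with P_ik(u) = -theta * u
        next_imp += 1
    hist.append(u_new)
    if step % 10000 == 0:                                  # discrete analogue of the norm (2.6)
        print(f"t = {t:4.1f}   ||u|| ~ {np.sqrt(dx * np.sum(u_new ** 2)):.3e}")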

Denote by $u(t,x)=u(t,x;t_0,\varphi)$, $u\in R^n$, the solution of system (2.1)-(2.2) satisfying the initial condition
$$u(s,x;t_0,\varphi)=\varphi(s,x),\quad t_0-\tau\le s\le t_0,\ x\in\Omega,\tag{2.3}$$
and the Dirichlet boundary condition
$$u(t,x;t_0,\varphi)=0,\quad t\ge t_0,\ x\in\partial\Omega,\tag{2.4}$$
where the vector-valued function $\varphi(s,x)=(\varphi_1(s,x),\ldots,\varphi_n(s,x))^T$ is such that $\int_\Omega\sum_{i=1}^{n}\varphi_i^2(s,x)\,\mathrm{d}x$ is bounded on $[t_0-\tau,t_0]$ and each $\varphi_i(s,x)$ ($i=1,2,\ldots,n$) is continuously differentiable with respect to $s$ on $[t_0-\tau,t_0]$.

The solution $u(t,x)=u(t,x;t_0,\varphi)=(u_1(t,x;t_0,\varphi),\ldots,u_n(t,x;t_0,\varphi))^T$ of problem (2.1)-(2.4) is, in the time variable $t$, a piecewise continuous function with discontinuities of the first kind at the points $t_k$ ($k=1,2,\ldots$), where it is continuous from the left; that is, the following relations hold:
$$u_i(t_k-0,x)=u_i(t_k,x),\qquad u_i(t_k+0,x)=u_i(t_k,x)+P_{ik}\big(u_i(t_k,x)\big).\tag{2.5}$$

Throughout this paper, the norm of $u(t,x;t_0,\varphi)$ is defined by
$$\|u(t,x;t_0,\varphi)\|_\Omega=\Big(\sum_{i=1}^{n}\int_\Omega u_i^2(t,x;t_0,\varphi)\,\mathrm{d}x\Big)^{1/2}.\tag{2.6}$$

Before moving on, we introduce two hypotheses.
(H1) Each activation function $f_i$ satisfies $f_i(0)=0$, and there exists a constant $l_i>0$ such that $|f_i(y_1)-f_i(y_2)|\le l_i|y_1-y_2|$ for all $y_1,y_2\in R$, $i=1,2,\ldots,n$.
(H2) The functions $P_{ik}(u_i(t_k,x))$ are continuous on $R$ and $P_{ik}(0)=0$, $i=1,2,\ldots,n$, $k=1,2,\ldots$.

According to (H1) and (H2), it is easy to see that problem (2.1)-(2.4) admits an equilibrium point $u=0$.

Definition 2.1. The equilibrium point $u=0$ of problem (2.1)-(2.4) is said to be globally exponentially stable if there exist constants $\kappa>0$ and $M\ge1$ such that
$$\|u(t,x;t_0,\varphi)\|_\Omega\le M\|\varphi\|_\Omega e^{-\kappa(t-t_0)},\quad t\ge t_0,\tag{2.7}$$
where $\|\varphi\|_\Omega^2=\sup_{t_0-\tau\le s\le t_0}\sum_{i=1}^{n}\int_\Omega\varphi_i^2(s,x)\,\mathrm{d}x$.

Lemma 2.2 (Gronwall-Bellman-type impulsive integral inequality [26]). Assume that
(A1) the sequence $\{t_k\}$ satisfies $0\le t_0<t_1<t_2<\cdots$, with $\lim_{k\to\infty}t_k=\infty$;
(A2) $q\in PC^1[R_+,R]$ and $q(t)$ is left-continuous at $t_k$, $k=1,2,\ldots$;
(A3) $p\in C[R_+,R_+]$ and, for $t\ge t_0$,
$$q(t)\le c+\int_{t_0}^{t}p(s)q(s)\,\mathrm{d}s+\sum_{t_0<t_k<t}\eta_k q(t_k),\tag{2.8}$$
where $\eta_k\ge0$ ($k=1,2,\ldots$) and $c=\mathrm{const}$.
Then
$$q(t)\le c\prod_{t_0<t_k<t}(1+\eta_k)\exp\Big(\int_{t_0}^{t}p(s)\,\mathrm{d}s\Big),\quad t\ge t_0.\tag{2.9}$$
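As a quick plausibility check (not a proof), the bound (2.9) can be compared numerically with the extremal case of (2.8), namely the impulsive equation $q'(t)=p(t)q(t)$ with jumps $q(t_k+0)=(1+\eta_k)q(t_k)$ and $q(t_0)=c$. The following Python sketch does this for arbitrary test data; all numerical values below are hypothetical.

import numpy as np

# integrate the worst case of (2.8) by explicit Euler and compare with the bound (2.9)
t0, T, dt = 0.0, 5.0, 1e-4
c = 2.0
p = lambda t: 0.3 + 0.1 * np.sin(t)          # p in C[R_+, R_+]
t_imp = np.array([1.0, 2.5, 4.0])            # impulsive moments t_k
eta = np.array([0.5, 0.2, 0.8])              # eta_k >= 0

t_grid = np.arange(t0, T + dt, dt)
q = np.empty_like(t_grid)
q[0] = c
next_imp = 0
for i in range(1, len(t_grid)):
    q[i] = q[i - 1] + dt * p(t_grid[i - 1]) * q[i - 1]        # Euler step of q' = p q
    if next_imp < len(t_imp) and t_grid[i] > t_imp[next_imp]:
        q[i] *= 1.0 + eta[next_imp]                           # jump at t_k
        next_imp += 1

# right-hand side of (2.9); the integral of p is evaluated by the trapezoidal rule
P = np.concatenate(([0.0], np.cumsum(dt * 0.5 * (p(t_grid[1:]) + p(t_grid[:-1])))))
prod = np.array([np.prod(1.0 + eta[t_imp < s]) for s in t_grid])
bound = c * prod * np.exp(P)

print("max of q - bound:", np.max(q - bound))   # non-positive up to discretization error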

Lemma 2.3 (Hardy-Sobolev inequality [27]). Let $\Omega\subset R^m$ ($m\ge3$) be a bounded open set containing the origin and $u\in H^1(\Omega)=\{\omega:\omega\in L^2(\Omega),\ D_i\omega=\partial\omega/\partial x_i\in L^2(\Omega),\ 1\le i\le m\}$. Then there exists a positive constant $C_m=C_m(\Omega)$ such that
$$\frac{(m-2)^2}{4}\int_\Omega\frac{u^2}{|x|^2}\,\mathrm{d}x\le\int_\Omega|\nabla u|^2\,\mathrm{d}x+C_m\int_{\partial\Omega}u^2\,\mathrm{d}\sigma.\tag{2.10}$$
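Under the Dirichlet condition (2.4) the surface term in (2.10) disappears, which is precisely how Lemma 2.3 is used below. The following short Python check evaluates both sides of the resulting inequality for the radial test function $u(x)=1-|x|^2$ on the unit ball in $R^3$, a hypothetical test case in which both integrals reduce to one-dimensional radial integrals.

import numpy as np

# check (m-2)^2/4 * int u^2/|x|^2 dx <= int |grad u|^2 dx for u = 1 - |x|^2 on the unit ball, m = 3
m = 3
r = np.linspace(1e-6, 1.0, 200001)            # radius, avoiding the singular point r = 0
w = 4.0 * np.pi * r**2                        # surface measure of the sphere of radius r
u = 1.0 - r**2                                # radial profile of u, vanishing at r = 1
du = -2.0 * r                                 # radial derivative, so |grad u| = 2 r

def radial_integral(g):
    # trapezoidal rule for int_0^1 g(r) * 4*pi*r^2 dr
    gw = g * w
    return np.sum(0.5 * (gw[1:] + gw[:-1]) * np.diff(r))

lhs = (m - 2)**2 / 4.0 * radial_integral(u**2 / r**2)   # exact value 8*pi/15 ~ 1.676
rhs = radial_integral(du**2)                            # exact value 16*pi/5 ~ 10.053
print(f"LHS = {lhs:.4f}  <=  RHS = {rhs:.4f}  :", lhs <= rhs)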

Lemma 2.4. If $a>0$ and $b>0$, then $ab\le(1/\varepsilon)a^2+\varepsilon b^2$ holds for any $\varepsilon>0$.

3. Main Results

Theorem 3.1. Assume that
(1) for $x=(x_1,\ldots,x_m)^T\in\Omega$ ($m\ge3$), there exists a constant $\beta$ such that $|x|^2=\sum_{s=1}^{m}x_s^2<\beta$; in addition, there exists a constant $D>0$ such that $D_{is}=D_{is}(t,x,u)\ge D>0$; denote $D(m-2)^2/(2\beta)=\chi$;
(2) $P_{ik}(u_i(t_k,x))=-\theta_{ik}u_i(t_k,x)$, $0\le\theta_{ik}\le2$;
(3) there exists a constant $\gamma$ satisfying $\gamma+\lambda+\rho he^{\gamma\tau}>0$ as well as $\lambda+\rho he^{\gamma\tau}<0$, where $\lambda=\max_{i=1,\ldots,n}\big(-\chi-2a_i+\sum_{j=1}^{n}(b_{ij}^2+c_{ij}^2)\big)+\rho$ and $\rho=n\max_{i=1,\ldots,n}(l_i^2)$.
Then the equilibrium point $u=0$ of problem (2.1)-(2.4) is globally exponentially stable with convergence rate $-(\lambda+\rho he^{\gamma\tau})/2$.

Proof. Multiplying both sides of (2.1) by $u_i(t,x)$ and integrating with respect to the spatial variable $x$ over $\Omega$, we get
$$\frac{\mathrm{d}\int_\Omega u_i^2(t,x)\,\mathrm{d}x}{\mathrm{d}t}=2\sum_{s=1}^{m}\int_\Omega u_i(t,x)\frac{\partial}{\partial x_s}\Big(D_{is}\frac{\partial u_i(t,x)}{\partial x_s}\Big)\,\mathrm{d}x-2a_i\int_\Omega u_i^2(t,x)\,\mathrm{d}x+2\sum_{j=1}^{n}b_{ij}\int_\Omega u_i(t,x)f_j\big(u_j(t,x)\big)\,\mathrm{d}x+2\sum_{j=1}^{n}c_{ij}\int_\Omega u_i(t,x)f_j\big(u_j(t-\tau_j(t),x)\big)\,\mathrm{d}x,\quad t\ge t_0,\ t\ne t_k,\ k=1,2,\ldots.\tag{3.1}$$
Regarding the right-hand side of (3.1), by the Green formula, the Dirichlet boundary condition, Lemma 2.3, and condition (1) of Theorem 3.1, the first term becomes
$$2\sum_{s=1}^{m}\int_\Omega u_i(t,x)\frac{\partial}{\partial x_s}\Big(D_{is}\frac{\partial u_i(t,x)}{\partial x_s}\Big)\,\mathrm{d}x=-2\sum_{s=1}^{m}\int_\Omega D_{is}\Big(\frac{\partial u_i(t,x)}{\partial x_s}\Big)^2\mathrm{d}x\le-2D\int_\Omega|\nabla u_i(t,x)|^2\,\mathrm{d}x\le-\frac{D(m-2)^2}{2}\int_\Omega\frac{u_i^2(t,x)}{|x|^2}\,\mathrm{d}x\le-\frac{D(m-2)^2}{2\beta}\int_\Omega u_i^2(t,x)\,\mathrm{d}x=-\chi\int_\Omega u_i^2(t,x)\,\mathrm{d}x.\tag{3.2}$$
Moreover, we derive from (H1) that
$$2\sum_{j=1}^{n}b_{ij}\int_\Omega u_i(t,x)f_j\big(u_j(t,x)\big)\,\mathrm{d}x\le2\sum_{j=1}^{n}|b_{ij}|\int_\Omega|u_i(t,x)||f_j(u_j(t,x))|\,\mathrm{d}x\le2\sum_{j=1}^{n}\int_\Omega l_j|b_{ij}||u_i(t,x)||u_j(t,x)|\,\mathrm{d}x\le\sum_{j=1}^{n}\int_\Omega\big(b_{ij}^2u_i^2(t,x)+l_j^2u_j^2(t,x)\big)\,\mathrm{d}x,$$
$$2\sum_{j=1}^{n}c_{ij}\int_\Omega u_i(t,x)f_j\big(u_j(t-\tau_j(t),x)\big)\,\mathrm{d}x\le2\sum_{j=1}^{n}|c_{ij}|\int_\Omega|u_i(t,x)||f_j(u_j(t-\tau_j(t),x))|\,\mathrm{d}x\le2\sum_{j=1}^{n}\int_\Omega l_j|c_{ij}||u_i(t,x)||u_j(t-\tau_j(t),x)|\,\mathrm{d}x\le\sum_{j=1}^{n}\int_\Omega\big(c_{ij}^2u_i^2(t,x)+l_j^2u_j^2(t-\tau_j(t),x)\big)\,\mathrm{d}x.\tag{3.3}$$
Consequently, substituting (3.2) and (3.3) into (3.1) produces
$$\frac{\mathrm{d}\int_\Omega u_i^2(t,x)\,\mathrm{d}x}{\mathrm{d}t}\le-\chi\int_\Omega u_i^2(t,x)\,\mathrm{d}x-2a_i\int_\Omega u_i^2(t,x)\,\mathrm{d}x+\sum_{j=1}^{n}\int_\Omega\big(b_{ij}^2u_i^2(t,x)+l_j^2u_j^2(t,x)\big)\,\mathrm{d}x+\sum_{j=1}^{n}\int_\Omega\big(c_{ij}^2u_i^2(t,x)+l_j^2u_j^2(t-\tau_j(t),x)\big)\,\mathrm{d}x\tag{3.4}$$
for $t\ge t_0$, $t\ne t_k$, $k=1,2,\ldots$.
We define a Lyapunov function $V_i(t)=\int_\Omega u_i^2(t,x)\,\mathrm{d}x$. It is easy to see that $V_i(t)$ is a piecewise continuous function with points of discontinuity of the first kind at $t_k$ ($k=1,2,\ldots$), where it is continuous from the left, that is, $V_i(t_k-0)=V_i(t_k)$ ($k=1,2,\ldots$). In addition, owing to $V_i(t_0+0)=V_i(t_0)$ and the following estimate derived from condition (2) of Theorem 3.1,
$$u_i^2(t_k+0,x)=\big(-\theta_{ik}u_i(t_k,x)+u_i(t_k,x)\big)^2=(1-\theta_{ik})^2u_i^2(t_k,x)\le u_i^2(t_k,x),\quad k=1,2,\ldots,\tag{3.5}$$
we have
$$V_i(t_k+0)\le V_i(t_k),\quad k=0,1,2,\ldots.\tag{3.6}$$
Put $t\in(t_k,t_{k+1})$, $k=0,1,2,\ldots$. Then, for the derivative $\mathrm{d}V_i(t)/\mathrm{d}t$ of $V_i$ along problem (2.1)-(2.4), it follows from (3.4) that
$$\frac{\mathrm{d}V_i(t)}{\mathrm{d}t}\le-\chi\int_\Omega u_i^2(t,x)\,\mathrm{d}x-2a_i\int_\Omega u_i^2(t,x)\,\mathrm{d}x+\sum_{j=1}^{n}\int_\Omega\big(b_{ij}^2u_i^2(t,x)+l_j^2u_j^2(t,x)\big)\,\mathrm{d}x+\sum_{j=1}^{n}\int_\Omega\big(c_{ij}^2u_i^2(t,x)+l_j^2u_j^2(t-\tau_j(t),x)\big)\,\mathrm{d}x\le\Big(-\chi-2a_i+\sum_{j=1}^{n}b_{ij}^2+\sum_{j=1}^{n}c_{ij}^2\Big)V_i(t)+\max_{i=1,\ldots,n}\big(l_i^2\big)\sum_{j=1}^{n}V_j(t)+\max_{i=1,\ldots,n}\big(l_i^2\big)\sum_{j=1}^{n}V_j\big(t-\tau_j(t)\big),\quad t\in(t_k,t_{k+1}),\ k=0,1,2,\ldots.\tag{3.7}$$
Choose $V(t)$ of the form $V(t)=\sum_{i=1}^{n}V_i(t)$. From (3.7) one then reads
$$\frac{\mathrm{d}V(t)}{\mathrm{d}t}\le\Big[\max_{i=1,\ldots,n}\Big(-\chi-2a_i+\sum_{j=1}^{n}\big(b_{ij}^2+c_{ij}^2\big)\Big)+n\max_{i=1,\ldots,n}\big(l_i^2\big)\Big]V(t)+n\max_{i=1,\ldots,n}\big(l_i^2\big)\sum_{j=1}^{n}V_j\big(t-\tau_j(t)\big)=\lambda V(t)+\rho\sum_{j=1}^{n}V_j\big(t-\tau_j(t)\big),\quad t\in(t_k,t_{k+1}),\ k=0,1,2,\ldots.\tag{3.8}$$
Construct $\widetilde{V}(t)=e^{\gamma(t-t_0)}V(t)$, where $\gamma$ satisfies $\gamma+\lambda+\rho he^{\gamma\tau}>0$ and $\lambda+\rho he^{\gamma\tau}<0$. Evidently, $\widetilde{V}(t)$ is also a piecewise continuous function with points of discontinuity of the first kind at $t_k$ ($k=1,2,\ldots$), where it is continuous from the left, that is, $\widetilde{V}(t_k-0)=\widetilde{V}(t_k)$ ($k=1,2,\ldots$). Moreover, at $t=t_k$ ($k=0,1,2,\ldots$), we find by use of (3.6)
$$\widetilde{V}(t_k+0)\le\widetilde{V}(t_k),\quad k=0,1,2,\ldots.\tag{3.9}$$
Set $t\in(t_k,t_{k+1})$, $k=0,1,2,\ldots$. By virtue of (3.8), one has
$$\frac{\mathrm{d}\widetilde{V}(t)}{\mathrm{d}t}=\gamma e^{\gamma(t-t_0)}V(t)+e^{\gamma(t-t_0)}\frac{\mathrm{d}V(t)}{\mathrm{d}t}\le\gamma e^{\gamma(t-t_0)}V(t)+\Big(\lambda V(t)+\rho\sum_{j=1}^{n}V_j\big(t-\tau_j(t)\big)\Big)e^{\gamma(t-t_0)}=(\gamma+\lambda)\widetilde{V}(t)+\rho e^{\gamma(t-t_0)}\sum_{j=1}^{n}V_j\big(t-\tau_j(t)\big),\quad t\in(t_k,t_{k+1}),\ k=0,1,2,\ldots.\tag{3.10}$$
Choose $\varepsilon>0$ small enough. Integrating (3.10) from $t_k+\varepsilon$ to $t$ gives
$$\widetilde{V}(t)\le\widetilde{V}(t_k+\varepsilon)+(\gamma+\lambda)\int_{t_k+\varepsilon}^{t}\widetilde{V}(s)\,\mathrm{d}s+\int_{t_k+\varepsilon}^{t}\rho e^{\gamma(s-t_0)}\sum_{j=1}^{n}V_j\big(s-\tau_j(s)\big)\,\mathrm{d}s,\quad t\in(t_k,t_{k+1}),\ k=0,1,2,\ldots,\tag{3.11}$$
which yields, after letting $\varepsilon\to0$ in (3.11),
$$\widetilde{V}(t)\le\widetilde{V}(t_k+0)+(\gamma+\lambda)\int_{t_k}^{t}\widetilde{V}(s)\,\mathrm{d}s+\int_{t_k}^{t}\rho e^{\gamma(s-t_0)}\sum_{j=1}^{n}V_j\big(s-\tau_j(s)\big)\,\mathrm{d}s,\quad t\in(t_k,t_{k+1}),\ k=0,1,2,\ldots.\tag{3.12}$$
We now proceed to estimate the value of $\widetilde{V}(t)$ at $t=t_{k+1}$, $k=0,1,2,\ldots$. For small enough $\varepsilon>0$, we put $t=t_{k+1}-\varepsilon$. An application of (3.12) then leads to, for $k=0,1,2,\ldots$,
$$\widetilde{V}(t_{k+1}-\varepsilon)\le\widetilde{V}(t_k+0)+(\gamma+\lambda)\int_{t_k}^{t_{k+1}-\varepsilon}\widetilde{V}(s)\,\mathrm{d}s+\int_{t_k}^{t_{k+1}-\varepsilon}\rho e^{\gamma(s-t_0)}\sum_{j=1}^{n}V_j\big(s-\tau_j(s)\big)\,\mathrm{d}s.\tag{3.13}$$
If we let $\varepsilon\to0$ in (3.13), there results
$$\widetilde{V}(t_{k+1}-0)\le\widetilde{V}(t_k+0)+(\gamma+\lambda)\int_{t_k}^{t_{k+1}}\widetilde{V}(s)\,\mathrm{d}s+\int_{t_k}^{t_{k+1}}\rho e^{\gamma(s-t_0)}\sum_{j=1}^{n}V_j\big(s-\tau_j(s)\big)\,\mathrm{d}s,\quad k=0,1,2,\ldots.\tag{3.14}$$
Note that $\widetilde{V}(t_{k+1}-0)=\widetilde{V}(t_{k+1})$ for $k=0,1,2,\ldots$. Thus,
$$\widetilde{V}(t_{k+1})\le\widetilde{V}(t_k+0)+(\gamma+\lambda)\int_{t_k}^{t_{k+1}}\widetilde{V}(s)\,\mathrm{d}s+\int_{t_k}^{t_{k+1}}\rho e^{\gamma(s-t_0)}\sum_{j=1}^{n}V_j\big(s-\tau_j(s)\big)\,\mathrm{d}s\tag{3.15}$$
holds for $k=0,1,2,\ldots$. Combining (3.12) and (3.15), we then arrive at
$$\widetilde{V}(t)\le\widetilde{V}(t_k+0)+(\gamma+\lambda)\int_{t_k}^{t}\widetilde{V}(s)\,\mathrm{d}s+\int_{t_k}^{t}\rho e^{\gamma(s-t_0)}\sum_{j=1}^{n}V_j\big(s-\tau_j(s)\big)\,\mathrm{d}s,\quad t\in(t_k,t_{k+1}],\ k=0,1,2,\ldots.\tag{3.16}$$
This, together with (3.9), results in
$$\widetilde{V}(t)\le\widetilde{V}(t_k)+(\gamma+\lambda)\int_{t_k}^{t}\widetilde{V}(s)\,\mathrm{d}s+\int_{t_k}^{t}\rho e^{\gamma(s-t_0)}\sum_{j=1}^{n}V_j\big(s-\tau_j(s)\big)\,\mathrm{d}s\tag{3.17}$$
for $t\in(t_k,t_{k+1}]$, $k=0,1,2,\ldots$.
Recalling the assumptions that $0\le\tau_j(t)\le\tau$ and $\tau_j'(t)<1-1/h$ ($h>0$), and substituting $\theta=s-\tau_j(s)$, we have
$$\int_{t_k}^{t}\rho e^{\gamma(s-t_0)}\sum_{j=1}^{n}V_j\big(s-\tau_j(s)\big)\,\mathrm{d}s=\sum_{j=1}^{n}\int_{t_k-\tau_j(t_k)}^{t-\tau_j(t)}\rho e^{\gamma(\theta+\tau_j(s)-t_0)}V_j(\theta)\frac{1}{1-\tau_j'(s)}\,\mathrm{d}\theta\le\rho he^{\gamma\tau}\sum_{j=1}^{n}\int_{t_k-\tau_j(t_k)}^{t-\tau_j(t)}e^{\gamma(\theta-t_0)}V_j(\theta)\,\mathrm{d}\theta.\tag{3.18}$$
Hence,
$$\widetilde{V}(t)\le\widetilde{V}(t_k)+(\gamma+\lambda)\int_{t_k}^{t}\widetilde{V}(s)\,\mathrm{d}s+\rho he^{\gamma\tau}\sum_{j=1}^{n}\int_{t_k-\tau_j(t_k)}^{t-\tau_j(t)}e^{\gamma(s-t_0)}V_j(s)\,\mathrm{d}s,\quad t\in(t_k,t_{k+1}],\ k=0,1,2,\ldots.\tag{3.19}$$
By an induction argument, we reach
$$\begin{aligned}\widetilde{V}(t_k)&\le\widetilde{V}(t_{k-1})+(\gamma+\lambda)\int_{t_{k-1}}^{t_k}\widetilde{V}(s)\,\mathrm{d}s+\rho he^{\gamma\tau}\sum_{j=1}^{n}\int_{t_{k-1}-\tau_j(t_{k-1})}^{t_k-\tau_j(t_k)}e^{\gamma(s-t_0)}V_j(s)\,\mathrm{d}s,\\&\ \ \vdots\\ \widetilde{V}(t_2)&\le\widetilde{V}(t_1)+(\gamma+\lambda)\int_{t_1}^{t_2}\widetilde{V}(s)\,\mathrm{d}s+\rho he^{\gamma\tau}\sum_{j=1}^{n}\int_{t_1-\tau_j(t_1)}^{t_2-\tau_j(t_2)}e^{\gamma(s-t_0)}V_j(s)\,\mathrm{d}s,\\ \widetilde{V}(t_1)&\le\widetilde{V}(t_0)+(\gamma+\lambda)\int_{t_0}^{t_1}\widetilde{V}(s)\,\mathrm{d}s+\rho he^{\gamma\tau}\sum_{j=1}^{n}\int_{t_0-\tau_j(t_0)}^{t_1-\tau_j(t_1)}e^{\gamma(s-t_0)}V_j(s)\,\mathrm{d}s.\end{aligned}\tag{3.20}$$
Therefore,
$$\begin{aligned}\widetilde{V}(t)&\le\widetilde{V}(t_0)+(\gamma+\lambda)\int_{t_0}^{t}\widetilde{V}(s)\,\mathrm{d}s+\rho he^{\gamma\tau}\sum_{j=1}^{n}\int_{t_0-\tau_j(t_0)}^{t-\tau_j(t)}e^{\gamma(s-t_0)}V_j(s)\,\mathrm{d}s\\&\le\widetilde{V}(t_0)+(\gamma+\lambda)\int_{t_0}^{t}\widetilde{V}(s)\,\mathrm{d}s+\rho he^{\gamma\tau}\sum_{j=1}^{n}\int_{t_0-\tau_j(t_0)}^{t}e^{\gamma(s-t_0)}V_j(s)\,\mathrm{d}s\\&=\widetilde{V}(t_0)+\big(\gamma+\lambda+\rho he^{\gamma\tau}\big)\int_{t_0}^{t}\widetilde{V}(s)\,\mathrm{d}s+\rho he^{\gamma\tau}\sum_{j=1}^{n}\int_{t_0-\tau_j(t_0)}^{t_0}e^{\gamma(s-t_0)}V_j(s)\,\mathrm{d}s,\quad t\in(t_k,t_{k+1}],\ k=0,1,2,\ldots.\end{aligned}\tag{3.21}$$
Since
$$\rho he^{\gamma\tau}\sum_{j=1}^{n}\int_{t_0-\tau_j(t_0)}^{t_0}e^{\gamma(s-t_0)}V_j(s)\,\mathrm{d}s\le\rho he^{\gamma\tau}\sum_{j=1}^{n}\int_{t_0-\tau}^{t_0}V_j(s)\,\mathrm{d}s=\rho he^{\gamma\tau}\int_{t_0-\tau}^{t_0}\sum_{j=1}^{n}\int_\Omega\varphi_j^2(s,x)\,\mathrm{d}x\,\mathrm{d}s\le\tau\rho he^{\gamma\tau}\|\varphi\|_\Omega^2,\tag{3.22}$$
we claim
$$\widetilde{V}(t)\le V(t_0)+\tau\rho he^{\gamma\tau}\|\varphi\|_\Omega^2+\big(\gamma+\lambda+\rho he^{\gamma\tau}\big)\int_{t_0}^{t}\widetilde{V}(s)\,\mathrm{d}s,\quad t\in(t_k,t_{k+1}],\ k=0,1,2,\ldots.\tag{3.23}$$
According to Lemma 2.2, we claim
$$\widetilde{V}(t)\le\big(V(t_0)+\tau\rho he^{\gamma\tau}\|\varphi\|_\Omega^2\big)\exp\big(\big(\gamma+\lambda+\rho he^{\gamma\tau}\big)(t-t_0)\big),\quad t\ge t_0,\tag{3.24}$$
which reduces to
$$\|u(t,x;t_0,\varphi)\|_\Omega\le\sqrt{1+\tau\rho he^{\gamma\tau}}\,\|\varphi\|_\Omega\exp\Big(\frac{\lambda+\rho he^{\gamma\tau}}{2}(t-t_0)\Big),\quad t\ge t_0.\tag{3.25}$$
This completes the proof.

Remark 3.2. According to the conditions of Theorem 3.1, we see that the reaction-diffusion terms do influence the stability of problem (2.1)-(2.4). Moreover, besides the reaction-diffusion coefficients, the dimension of the space and the bounds of the spatial variables also have an effect on the stability of the equilibrium point $u=0$.

Theorem 3.3. Assume that
(1) for $x=(x_1,\ldots,x_m)^T\in\Omega$ ($m\ge3$), there exists a constant $\beta$ such that $|x|^2=\sum_{s=1}^{m}x_s^2<\beta$; in addition, there exists a constant $D>0$ such that $D_{is}=D_{is}(t,x,u)\ge D>0$; denote $D(m-2)^2/(2\beta)=\chi$;
(2) $P_{ik}(u_i(t_k,x))=-\theta_{ik}u_i(t_k,x)$, $1-\sqrt{1+\alpha}\le\theta_{ik}\le1+\sqrt{1+\alpha}$, $\alpha\ge0$;
(3) $\inf_{k=1,2,\ldots}(t_k-t_{k-1})>\mu$;
(4) there exists a constant $\gamma$ satisfying $\gamma+\lambda+\rho he^{\gamma\tau}>0$ and $\lambda+\rho he^{\gamma\tau}+\ln(1+\alpha)/\mu<0$, where $\lambda=\max_{i=1,\ldots,n}\big(-\chi-2a_i+\sum_{j=1}^{n}(b_{ij}^2+c_{ij}^2)\big)+\rho$ and $\rho=n\max_{i=1,\ldots,n}(l_i^2)$.
Then the equilibrium point $u=0$ of problem (2.1)-(2.4) is globally exponentially stable with convergence rate $-\frac{1}{2}\big(\lambda+\rho he^{\gamma\tau}+\ln(1+\alpha)/\mu\big)$.

Proof. Define a Lyapunov function $V$ of the form $V(t)=\sum_{i=1}^{n}V_i(t)$, where $V_i(t)=\int_\Omega u_i^2(t,x)\,\mathrm{d}x$. Obviously, $V(t)$ is a piecewise continuous function with points of discontinuity of the first kind at $t_k$, $k=1,2,\ldots$, where it is continuous from the left, that is, $V(t_k-0)=V(t_k)$ ($k=1,2,\ldots$). Furthermore, for $t=t_k$ ($k=0,1,2,\ldots$), it follows from condition (2) of Theorem 3.3 that
$$u_i^2(t_k+0,x)-u_i^2(t_k,x)=\big[(1-\theta_{ik})^2-1\big]u_i^2(t_k,x)\le\alpha u_i^2(t_k,x).\tag{3.26}$$
Thereby,
$$V(t_k+0)\le\alpha V(t_k)+V(t_k),\quad k=0,1,2,\ldots.\tag{3.27}$$
Construct another Lyapunov function defined by $\widetilde{V}(t)=e^{\gamma(t-t_0)}V(t)$, where $\gamma$ satisfies $\gamma+\lambda+\rho he^{\gamma\tau}>0$ and $\lambda+\rho he^{\gamma\tau}+\ln(1+\alpha)/\mu<0$. Then $\widetilde{V}(t)$ is also a piecewise continuous function with points of discontinuity of the first kind at $t_k$, $k=1,2,\ldots$, where it is continuous from the left, that is, $\widetilde{V}(t_k-0)=\widetilde{V}(t_k)$ ($k=1,2,\ldots$). And for $t=t_k$ ($k=0,1,2,\ldots$), it results from (3.27) that
$$\widetilde{V}(t_k+0)\le\alpha\widetilde{V}(t_k)+\widetilde{V}(t_k),\quad k=0,1,2,\ldots.\tag{3.28}$$
Set $t\in(t_k,t_{k+1}]$, $k=0,1,2,\ldots$. Following the same procedure as in Theorem 3.1, we get
$$\widetilde{V}(t)\le\widetilde{V}(t_k+0)+(\gamma+\lambda)\int_{t_k}^{t}\widetilde{V}(s)\,\mathrm{d}s+\rho he^{\gamma\tau}\sum_{j=1}^{n}\int_{t_k-\tau_j(t_k)}^{t-\tau_j(t)}e^{\gamma(\theta-t_0)}V_j(\theta)\,\mathrm{d}\theta,\quad t\in(t_k,t_{k+1}],\ k=0,1,2,\ldots.\tag{3.29}$$
The relations (3.28) and (3.29) yield
$$\widetilde{V}(t)\le\widetilde{V}(t_k)+\alpha\widetilde{V}(t_k)+(\gamma+\lambda)\int_{t_k}^{t}\widetilde{V}(s)\,\mathrm{d}s+\rho he^{\gamma\tau}\sum_{j=1}^{n}\int_{t_k-\tau_j(t_k)}^{t-\tau_j(t)}e^{\gamma(\theta-t_0)}V_j(\theta)\,\mathrm{d}\theta,\quad t\in(t_k,t_{k+1}],\ k=0,1,2,\ldots.\tag{3.30}$$
By an induction argument, we arrive at
$$\begin{aligned}\widetilde{V}(t_k)&\le\widetilde{V}(t_{k-1})+\alpha\widetilde{V}(t_{k-1})+(\gamma+\lambda)\int_{t_{k-1}}^{t_k}\widetilde{V}(s)\,\mathrm{d}s+\rho he^{\gamma\tau}\sum_{j=1}^{n}\int_{t_{k-1}-\tau_j(t_{k-1})}^{t_k-\tau_j(t_k)}e^{\gamma(\theta-t_0)}V_j(\theta)\,\mathrm{d}\theta,\\&\ \ \vdots\\ \widetilde{V}(t_2)&\le\widetilde{V}(t_1)+\alpha\widetilde{V}(t_1)+(\gamma+\lambda)\int_{t_1}^{t_2}\widetilde{V}(s)\,\mathrm{d}s+\rho he^{\gamma\tau}\sum_{j=1}^{n}\int_{t_1-\tau_j(t_1)}^{t_2-\tau_j(t_2)}e^{\gamma(\theta-t_0)}V_j(\theta)\,\mathrm{d}\theta,\\ \widetilde{V}(t_1)&\le\widetilde{V}(t_0)+\alpha\widetilde{V}(t_0)+(\gamma+\lambda)\int_{t_0}^{t_1}\widetilde{V}(s)\,\mathrm{d}s+\rho he^{\gamma\tau}\sum_{j=1}^{n}\int_{t_0-\tau_j(t_0)}^{t_1-\tau_j(t_1)}e^{\gamma(\theta-t_0)}V_j(\theta)\,\mathrm{d}\theta.\end{aligned}\tag{3.31}$$
Hence,
$$\begin{aligned}\widetilde{V}(t)&\le\widetilde{V}(t_0)+\alpha\widetilde{V}(t_0)+(\gamma+\lambda)\int_{t_0}^{t}\widetilde{V}(s)\,\mathrm{d}s+\rho he^{\gamma\tau}\sum_{j=1}^{n}\int_{t_0-\tau_j(t_0)}^{t-\tau_j(t)}e^{\gamma(\theta-t_0)}V_j(\theta)\,\mathrm{d}\theta+\alpha\sum_{t_0<t_k<t}\widetilde{V}(t_k)\\&\le\widetilde{V}(t_0)+\alpha\widetilde{V}(t_0)+\big(\gamma+\lambda+\rho he^{\gamma\tau}\big)\int_{t_0}^{t}\widetilde{V}(s)\,\mathrm{d}s+\rho he^{\gamma\tau}\sum_{j=1}^{n}\int_{t_0-\tau_j(t_0)}^{t_0}e^{\gamma(\theta-t_0)}V_j(\theta)\,\mathrm{d}\theta+\alpha\sum_{t_0<t_k<t}\widetilde{V}(t_k),\quad t\in(t_k,t_{k+1}],\ k=0,1,2,\ldots.\end{aligned}\tag{3.32}$$
Introducing the estimate $\rho he^{\gamma\tau}\sum_{j=1}^{n}\int_{t_0-\tau_j(t_0)}^{t_0}e^{\gamma(\theta-t_0)}V_j(\theta)\,\mathrm{d}\theta\le\tau\rho he^{\gamma\tau}\|\varphi\|_\Omega^2$, shown in the proof of Theorem 3.1, into (3.32), the expression becomes
$$\widetilde{V}(t)\le V(t_0)+\alpha V(t_0)+\tau\rho he^{\gamma\tau}\|\varphi\|_\Omega^2+\big(\gamma+\lambda+\rho he^{\gamma\tau}\big)\int_{t_0}^{t}\widetilde{V}(s)\,\mathrm{d}s+\alpha\sum_{t_0<t_k<t}\widetilde{V}(t_k),\quad t\in(t_k,t_{k+1}],\ k=0,1,2,\ldots.\tag{3.33}$$
It then results from Lemma 2.2 that
$$\widetilde{V}(t)\le\big((\alpha+1)V(t_0)+\tau\rho he^{\gamma\tau}\|\varphi\|_\Omega^2\big)\prod_{t_0<t_k<t}(1+\alpha)\exp\big(\big(\gamma+\lambda+\rho he^{\gamma\tau}\big)(t-t_0)\big)=\big((\alpha+1)V(t_0)+\tau\rho he^{\gamma\tau}\|\varphi\|_\Omega^2\big)(1+\alpha)^k\exp\big(\big(\gamma+\lambda+\rho he^{\gamma\tau}\big)(t-t_0)\big),\quad t\in(t_k,t_{k+1}],\ k=0,1,2,\ldots.\tag{3.34}$$
On the other hand, since $\inf_{k=1,2,\ldots}(t_k-t_{k-1})>\mu$, one has $k<(t_k-t_0)/\mu$. Thereby,
$$(1+\alpha)^k<\exp\Big(\frac{\ln(1+\alpha)}{\mu}\big(t_k-t_0\big)\Big)\le\exp\Big(\frac{\ln(1+\alpha)}{\mu}\big(t-t_0\big)\Big).\tag{3.35}$$
Then (3.34) can be rewritten as
$$\widetilde{V}(t)\le\big((\alpha+1)V(t_0)+\tau\rho he^{\gamma\tau}\|\varphi\|_\Omega^2\big)\exp\Big(\Big(\gamma+\lambda+\rho he^{\gamma\tau}+\frac{\ln(1+\alpha)}{\mu}\Big)(t-t_0)\Big),\tag{3.36}$$
which implies
$$\|u(t,x;t_0,\varphi)\|_\Omega\le\sqrt{\alpha+1+\tau\rho he^{\gamma\tau}}\,\|\varphi\|_\Omega\exp\Big(\frac{1}{2}\Big(\lambda+\rho he^{\gamma\tau}+\frac{\ln(1+\alpha)}{\mu}\Big)(t-t_0)\Big),\quad t\ge t_0.\tag{3.37}$$
The proof is completed.

Remark 3.4. Theorem 3.1 is in fact the special case of Theorem 3.3 obtained by choosing $\alpha=0$. Owing to Lemma 2.4, we know that
$$2\sum_{j=1}^{n}b_{ij}\int_\Omega u_i(t,x)f_j\big(u_j(t,x)\big)\,\mathrm{d}x\le\sum_{j=1}^{n}\int_\Omega\Big(\varepsilon_1b_{ij}^2u_i^2(t,x)+\frac{l_j^2}{\varepsilon_1}u_j^2(t,x)\Big)\,\mathrm{d}x,$$
$$2\sum_{j=1}^{n}c_{ij}\int_\Omega u_i(t,x)f_j\big(u_j(t-\tau_j(t),x)\big)\,\mathrm{d}x\le\sum_{j=1}^{n}\int_\Omega\Big(\varepsilon_2c_{ij}^2u_i^2(t,x)+\frac{l_j^2}{\varepsilon_2}u_j^2(t-\tau_j(t),x)\Big)\,\mathrm{d}x\tag{3.38}$$
hold for any $\varepsilon_1,\varepsilon_2>0$.
In the sequel, following the same procedures as in Theorems 3.1 and 3.3, we obtain the following theorems.

Theorem 3.5. Assume that
(1) for $x=(x_1,\ldots,x_m)^T\in\Omega$ ($m\ge3$), there exists a constant $\beta$ such that $|x|^2=\sum_{s=1}^{m}x_s^2<\beta$; in addition, there exists a constant $D>0$ such that $D_{is}=D_{is}(t,x,u)\ge D>0$; denote $D(m-2)^2/(2\beta)=\chi$;
(2) $P_{ik}(u_i(t_k,x))=-\theta_{ik}u_i(t_k,x)$, $0\le\theta_{ik}\le2$;
(3) there exist constants $\gamma$ and $\varepsilon_1,\varepsilon_2>0$ such that $\gamma+\lambda+\rho he^{\gamma\tau}>0$ and $\lambda+\rho he^{\gamma\tau}<0$, where $\lambda=\max_{i=1,\ldots,n}\big(-\chi-2a_i+\sum_{j=1}^{n}(\varepsilon_1b_{ij}^2+\varepsilon_2c_{ij}^2)\big)+(n/\varepsilon_1)\max_{i=1,\ldots,n}(l_i^2)$ and $\rho=(n/\varepsilon_2)\max_{i=1,\ldots,n}(l_i^2)$.
Then the equilibrium point $u=0$ of problem (2.1)-(2.4) is globally exponentially stable with convergence rate $-(\lambda+\rho he^{\gamma\tau})/2$.

Theorem 3.6. Assume that
(1) for $x=(x_1,\ldots,x_m)^T\in\Omega$ ($m\ge3$), there exists a constant $\beta$ such that $|x|^2=\sum_{s=1}^{m}x_s^2<\beta$; in addition, there exists a constant $D>0$ such that $D_{is}=D_{is}(t,x,u)\ge D>0$; denote $D(m-2)^2/(2\beta)=\chi$;
(2) $P_{ik}(u_i(t_k,x))=-\theta_{ik}u_i(t_k,x)$, $1-\sqrt{1+\alpha}\le\theta_{ik}\le1+\sqrt{1+\alpha}$, $\alpha\ge0$;
(3) $\inf_{k=1,2,\ldots}(t_k-t_{k-1})>\mu$;
(4) there exist constants $\gamma$ and $\varepsilon_1,\varepsilon_2>0$ such that $\gamma+\lambda+\rho he^{\gamma\tau}>0$ and $\lambda+\rho he^{\gamma\tau}+\ln(1+\alpha)/\mu<0$, where $\lambda=\max_{i=1,\ldots,n}\big(-\chi-2a_i+\sum_{j=1}^{n}(\varepsilon_1b_{ij}^2+\varepsilon_2c_{ij}^2)\big)+(n/\varepsilon_1)\max_{i=1,\ldots,n}(l_i^2)$ and $\rho=(n/\varepsilon_2)\max_{i=1,\ldots,n}(l_i^2)$.
Then the equilibrium point $u=0$ of problem (2.1)-(2.4) is globally exponentially stable with convergence rate $-\frac{1}{2}\big(\lambda+\rho he^{\gamma\tau}+\ln(1+\alpha)/\mu\big)$.
Further, on the condition that $|P_{ik}(u_i(t_k,x))|\le\theta_{ik}|u_i(t_k,x)|$, where $\theta_{ik}^2\le(\alpha-1)/2$ and $\alpha\ge1$, we obtain, for $t=t_k$ ($k=1,2,\ldots$),
$$u_i^2(t_k+0,x)-u_i^2(t_k,x)=\big(P_{ik}(u_i(t_k,x))+u_i(t_k,x)\big)^2-u_i^2(t_k,x)\le2u_i^2(t_k,x)+2P_{ik}^2(u_i(t_k,x))-u_i^2(t_k,x)\le2u_i^2(t_k,x)+2\theta_{ik}^2u_i^2(t_k,x)-u_i^2(t_k,x)\le\alpha u_i^2(t_k,x).\tag{3.39}$$
Identical with the proof of Theorem 3.3, we then obtain the following theorem.

Theorem 3.7. Assume that
(1) for $x=(x_1,\ldots,x_m)^T\in\Omega$ ($m\ge3$), there exists a constant $\beta$ such that $|x|^2=\sum_{s=1}^{m}x_s^2<\beta$; in addition, there exists a constant $D>0$ such that $D_{is}=D_{is}(t,x,u)\ge D>0$; denote $D(m-2)^2/(2\beta)=\chi$;
(2) $|P_{ik}(u_i(t_k,x))|\le\theta_{ik}|u_i(t_k,x)|$, where $\theta_{ik}^2\le(\alpha-1)/2$ and $\alpha\ge1$;
(3) $\inf_{k=1,2,\ldots}(t_k-t_{k-1})>\mu$;
(4) there exist constants $\gamma$ and $\varepsilon_1,\varepsilon_2>0$ such that $\gamma+\lambda+\rho he^{\gamma\tau}>0$ and $\lambda+\rho he^{\gamma\tau}+\ln(1+\alpha)/\mu<0$, where $\lambda=\max_{i=1,\ldots,n}\big(-\chi-2a_i+\sum_{j=1}^{n}(\varepsilon_1b_{ij}^2+\varepsilon_2c_{ij}^2)\big)+(n/\varepsilon_1)\max_{i=1,\ldots,n}(l_i^2)$ and $\rho=(n/\varepsilon_2)\max_{i=1,\ldots,n}(l_i^2)$.
Then the equilibrium point $u=0$ of problem (2.1)-(2.4) is globally exponentially stable with convergence rate $-\frac{1}{2}\big(\lambda+\rho he^{\gamma\tau}+\ln(1+\alpha)/\mu\big)$.

Remark 3.8. Different from Theorems 3.1-3.6, the impulsive part in Theorem 3.7 may be nonlinear, which makes it more widely applicable. In fact, Theorems 3.1-3.6 can be regarded as special cases of Theorem 3.7.

4. Examples

Example 4.1. Consider the following impulsive reaction-diffusion delayed neural network:
$$\frac{\partial u_i(t,x)}{\partial t}=\sum_{s=1}^{m}\frac{\partial}{\partial x_s}\Big(D_{is}\frac{\partial u_i(t,x)}{\partial x_s}\Big)-a_iu_i(t,x)+\sum_{j=1}^{n}b_{ij}f_j\big(u_j(t,x)\big)+\sum_{j=1}^{n}c_{ij}f_j\big(u_j(t-\tau_j(t),x)\big),\quad t\ge0,\ t\ne t_k,\ x\in\Omega,\ k=1,2,\ldots,\ i=1,\ldots,n,\tag{4.1}$$
with the impulsive effects characterized by
$$u_1(t_k+0,x)=u_1(t_k,x)-1.343u_1(t_k,x),\qquad u_2(t_k+0,x)=u_2(t_k,x)-1.343u_2(t_k,x),\quad k=1,2,\ldots,\ x\in\Omega,\tag{4.2}$$
and initial condition (2.3) and Dirichlet condition (2.4), where $n=2$, $m=4$, $\Omega=\{(x_1,\ldots,x_4)^T:\sum_{i=1}^{4}x_i^2<4\}$, $a_1=a_2=6.5$,
$$(D_{is})_{2\times4}=\begin{pmatrix}1.2&2.3&2.5&3.1\\1.8&3.2&2.7&3.4\end{pmatrix},\qquad(b_{ij})_{2\times2}=\begin{pmatrix}0.2&3\\1.3&0.13\end{pmatrix},\qquad(c_{ij})_{2\times2}=\begin{pmatrix}0.1&0.2\\0.1&0.3\end{pmatrix},$$
$f_j(u_j)=\frac{1}{4}(|u_j+1|-|u_j-1|)$, $0\le\tau_j(t)\le0.5$, and $\tau_j'(t)<0$. For $\beta=4$ and $D=1.2$, we compute $\chi=D(m-2)^2/(2\beta)=0.6$. This, together with the chosen $l_i=1/2$, yields
$$\lambda=\max_{i=1,\ldots,n}\Big(-\chi-2a_i+\sum_{j=1}^{n}\big(b_{ij}^2+c_{ij}^2\big)\Big)+n\max_{i=1,\ldots,n}\big(l_i^2\big)=-4,\qquad\rho=n\max_{i=1,\ldots,n}\big(l_i^2\big)=\frac{1}{2}.\tag{4.3}$$
By selecting $\gamma=2.6$, $\tau=0.5$, and $h=1$, we estimate
$$\gamma+\lambda+\rho he^{\gamma\tau}=2.6-4+\tfrac{1}{2}e^{1.3}>0,\qquad\lambda+\rho he^{\gamma\tau}=-4+\tfrac{1}{2}e^{1.3}<0.\tag{4.4}$$
According to Theorem 3.1, we therefore conclude that the system in Example 4.1 is globally exponentially stable.
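The two inequalities in (4.4) are easily verified numerically; a short Python check is sketched below, with $\lambda$ and $\rho$ taken as computed in (4.3) and the remaining constants as selected above.

import numpy as np

lam, rho = -4.0, 0.5
gamma, tau, h = 2.6, 0.5, 1.0
cond1 = gamma + lam + rho * h * np.exp(gamma * tau)      # must be positive
cond2 = lam + rho * h * np.exp(gamma * tau)              # must be negative
print(f"gamma + lambda + rho*h*e^(gamma*tau) = {cond1:.4f} > 0 : {cond1 > 0}")
print(f"lambda + rho*h*e^(gamma*tau)         = {cond2:.4f} < 0 : {cond2 < 0}")
print(f"convergence rate -(lambda + rho*h*e^(gamma*tau))/2 = {-cond2 / 2:.4f}")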

Example 4.2. Consider the following impulsive reaction-diffusion delayed neural network:
$$\frac{\partial u_i(t,x)}{\partial t}=\sum_{s=1}^{m}\frac{\partial}{\partial x_s}\Big(D_{is}\frac{\partial u_i(t,x)}{\partial x_s}\Big)-a_iu_i(t,x)+\sum_{j=1}^{n}b_{ij}f_j\big(u_j(t,x)\big)+\sum_{j=1}^{n}c_{ij}f_j\big(u_j(t-\tau_j(t),x)\big),\quad t\ge0,\ t\ne t_k,\ x\in\Omega,\ k=1,2,\ldots,\ i=1,\ldots,n,\tag{4.5}$$
with the impulsive effects featured by
$$u_1(t_k+0,x)=u_1(t_k,x)+\arctan\big(0.5u_1(t_k,x)\big),\qquad u_2(t_k+0,x)=u_2(t_k,x)+\arctan\big(0.5u_2(t_k,x)\big),\quad k=1,2,\ldots,\ x\in\Omega,\tag{4.6}$$
and initial condition (2.3) and Dirichlet condition (2.4), where $n=2$, $m=4$, $\Omega=\{(x_1,\ldots,x_4)^T:\sum_{i=1}^{4}x_i^2<4\}$, $a_1=a_2=6.5$, $(D_{is})_{2\times4}$, $(b_{ij})_{2\times2}$, and $(c_{ij})_{2\times2}$ as in Example 4.1, $f_j(u_j)=\frac{1}{4}(|u_j+1|-|u_j-1|)$, $0\le\tau_j(t)\le0.5$, $\tau_j'(t)<0$, and $\inf_{k=1,2,\ldots}(t_k-t_{k-1})>1$. For $\beta=4$ and $D=1.2$, we compute $\chi=0.6$. This, together with $l_i=1/2$ and $\varepsilon_1=\varepsilon_2=1$, yields
$$\rho=\frac{n}{\varepsilon_2}\max_{i=1,\ldots,n}\big(l_i^2\big)=\frac{1}{2},\qquad\lambda=\max_{i=1,\ldots,n}\Big(-\chi-2a_i+\sum_{j=1}^{n}\big(\varepsilon_1b_{ij}^2+\varepsilon_2c_{ij}^2\big)\Big)+\frac{n}{\varepsilon_1}\max_{i=1,\ldots,n}\big(l_i^2\big)=-4.\tag{4.7}$$
Select $\alpha=1.5$ by setting $\theta_{ik}=0.5$; note that $|\arctan(0.5u)|\le0.5|u|$, so $|P_{ik}(u_i(t_k,x))|\le\theta_{ik}|u_i(t_k,x)|$ with $\theta_{ik}^2=0.25\le(\alpha-1)/2$. Hence, letting $\mu=1$, $\gamma=3$, $\tau=0.5$, and $h=1$, we compute
$$\gamma+\lambda+\rho he^{\gamma\tau}=3-4+\tfrac{1}{2}e^{1.5}>0,\qquad\lambda+\rho he^{\gamma\tau}+\frac{\ln(1+\alpha)}{\mu}=-4+\tfrac{1}{2}e^{1.5}+\ln2.5<0.\tag{4.8}$$
It then follows from Theorem 3.7 that the system in Example 4.2 is globally exponentially stable.
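Analogously, the inequalities in (4.8) can be confirmed by the following short Python check, with $\lambda$ and $\rho$ as in (4.7) and $\alpha$, $\mu$, $\gamma$, $\tau$, $h$ as chosen above.

import numpy as np

lam, rho = -4.0, 0.5
alpha, mu, gamma, tau, h = 1.5, 1.0, 3.0, 0.5, 1.0
cond1 = gamma + lam + rho * h * np.exp(gamma * tau)                    # must be positive
cond2 = lam + rho * h * np.exp(gamma * tau) + np.log(1 + alpha) / mu   # must be negative
print(f"gamma + lambda + rho*h*e^(gamma*tau)          = {cond1:.4f} > 0 : {cond1 > 0}")
print(f"lambda + rho*h*e^(gamma*tau) + ln(1+alpha)/mu = {cond2:.4f} < 0 : {cond2 < 0}")
print(f"convergence rate = {-cond2 / 2:.4f}")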

Acknowledgment

This work was supported by the National Natural Science Foundation of China under Grant 60904028.