
Journal of Applied Mathematics

Volume 2012 (2012), Article ID 405939, 10 pages

http://dx.doi.org/10.1155/2012/405939

## Stability of the Stochastic Reaction-Diffusion Neural Network with Time-Varying Delays and *p*-Laplacian

^{1}College of Civil Engineering and Architecture, Sanming University, Sanming 365004, China

^{2}Department of Mathematics and Physics, Huaihai Institute of Technology, Lianyungang 222005, China

^{3}Science and Technology on Microsystems Laboratory, Shanghai Institute of MicroSystems and Information Technology, CAS, Shanghai 200050, China

Received 22 October 2012; Accepted 9 November 2012

Academic Editor: Maoan Han

Copyright © 2012 Pan Qingfei et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The main aim of this paper is to discuss moment exponential stability for a stochastic reaction-diffusion neural network with time-varying delays and *p*-Laplacian. Using the Itô formula, a delay differential inequality, and the characteristics of the neural network, algebraic conditions for the moment exponential stability of the nonconstant equilibrium solution are derived. An example is also given for illustration.

#### 1. Introduction

In many neural networks, time delays cannot be avoided. For example, in electronic neural networks, time delays arise from the finite switching speed of amplifiers. In fact, time delays are often encountered in various engineering, biological, and economical systems. On the other hand, when designing a neural network to solve a problem such as optimization or pattern recognition, we need foremost to guarantee that the neural network model is globally asymptotically stable. However, the existence of time delays frequently causes oscillation, divergence, or instability in neural networks. In recent years, the stability of neural networks with or without delays has become a topic of great theoretical and practical importance (see [1–16]).

The stability of neural networks described by partial differential equations was studied in [6, 7]. Stochastic differential equations were employed to study the stability of neural networks in [8–11], while [12, 13] used stochastic partial differential equations to analyze this question. In [15], the authors studied almost exponential stability for a stochastic recurrent neural network with time-varying delays. In addition, moment exponential stability for a stochastic reaction-diffusion neural network with time-varying delays is discussed in [16].

In this paper, we consider the stochastic reaction-diffusion neural network with time-varying delays and *p*-Laplacian as follows: In (1.1), , is a common number. is a bounded convex domain with smooth boundary and measure . denotes the number of neurons in the neural network, corresponds to the state of the th neuron at time and in space , and is an amplification function. is the output. denotes the output of the th neuron at time and in space , namely, the activation function, which shows how neurons respond to each other. is an m-dimensional Brownian motion defined on a complete probability space with a natural filtration (i.e., ). , . denotes the intensity of the stochastic perturbation. The functions and are subject to certain conditions to be specified later. is a real constant matrix representing the weights of the neuron interconnections, namely, denotes the strength of the th neuron on the th neuron at time and in space , and corresponds to the axonal signal transmission delay.
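The displayed system (1.1) did not survive extraction here. Purely as a hedged sketch of the general shape such models take (the symbols $u_i$, $a_i$, $b_{ij}$, $f_j$, $\sigma_{il}$, and $\tau_j$ below are illustrative assumptions, not necessarily the authors' exact notation), a stochastic reaction-diffusion neural network with *p*-Laplacian diffusion and time-varying delays typically reads:

```latex
% Illustrative sketch only -- notation is assumed, not taken from (1.1).
\mathrm{d}u_i(t,x) = \Big[ \nabla\cdot\big(|\nabla u_i|^{p-2}\nabla u_i\big)
    - a_i\big(u_i(t,x)\big)
    + \sum_{j=1}^{n} b_{ij}\, f_j\big(u_j(t-\tau_j(t),x)\big) \Big]\,\mathrm{d}t
    + \sum_{l=1}^{m} \sigma_{il}\big(u_i(t,x)\big)\,\mathrm{d}w_l(t),
\qquad (t,x)\in[0,\infty)\times\Omega,\quad i=1,\dots,n,
```

with a zero-flux (Neumann) boundary condition on $\partial\Omega$ and given initial data on $[-\tau,0]\times\Omega$; the $p$-Laplacian term $\nabla\cdot(|\nabla u_i|^{p-2}\nabla u_i)$ reduces to the ordinary Laplacian when $p=2$.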

#### 2. Definitions and Lemmas

Throughout this paper, unless otherwise specified, let denote the Euclidean norm. Define and , where . Denote by the family of continuous functions from to . For every and , denote by × the family of all -measurable × -valued random variables such that , where denotes the expectation of the random variable .

*Definition 2.1. *The is called a solution of problem (1.1)–(1.3) if it satisfies the following conditions , , and : (1) is adapted to ; (2) for , , and , where ; (3) for , , it holds that

*Definition 2.2. *The is called a nonconstant equilibrium solution of problem (1.1)–(1.3) if and only if satisfies (1.1) and (1.2).

*Definition 2.3. *The nonconstant equilibrium solution of (1.1) with respect to the given norm is said to be exponentially stable in the th moment if there are constants , such that every stochastic field solution of problem (1.1)–(1.3) satisfies
namely,
The constant on the right-hand side of (2.3) is called the Lyapunov exponent of the convergence of every solution of problem (1.1)–(1.3) to the equilibrium with respect to the norm .
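The displayed estimates (2.2)–(2.3) were also lost in extraction. As a hedged sketch of the standard form of such a definition (with assumed notation $u^{*}$ for the equilibrium, $\phi$ for the initial data, and $\|\cdot\|$ for the chosen norm), $p$th-moment exponential stability usually means:

```latex
% Standard shape of a p-th moment exponential stability estimate
% (assumed notation, not the authors' exact display).
\mathbb{E}\,\|u(t,\cdot)-u^{*}\|^{p} \;\le\;
    M\,\mathbb{E}\,\|\phi-u^{*}\|^{p}\, e^{-\lambda t}, \qquad t\ge 0,
```

equivalently $\limsup_{t\to\infty} t^{-1}\log \mathbb{E}\,\|u(t,\cdot)-u^{*}\|^{p} \le -\lambda$, with $-\lambda$ playing the role of the Lyapunov exponent.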

In order to obtain th moment exponential stability for a nonconstant equilibrium solution of problem (1.1)–(1.3), we need the following lemmas.

Lemma 2.4 (see [17]). *Let and be two real matrices. The continuous function satisfies the delay differential inequalities
**
If for and and is an M-matrix, then there are constants , such that
**
where are the initial functions, denotes the right-hand upper derivative, and represents a norm.*

Lemma 2.5 (see [10]). *Let , then there are positive constants and for any such that
*

*Remark 2.6. *If , Lemma 2.5 also holds with .

Suppose that , , and are Lipschitz continuous and that the following conditions hold: , , , , for all , , where , , and are positive constants.

#### 3. Main Result

Let be a solution of problem (1.1)–(1.3) and be a nonconstant equilibrium solution of problem (1.1)–(1.3).

Theorem 3.1. *Let and – hold. Assume that there are positive constants such that the matrix is an M-matrix, where
**
then the nonconstant equilibrium solution of problem (1.1)–(1.3) is exponentially stable in the th moment with respect to the norm ; that is, there are constants and such that, for any and any ,
*
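The hypothesis of Theorem 3.1 is that a matrix assembled from the network parameters is a nonsingular M-matrix. This condition can be checked numerically. The sketch below (the function name and the sample matrices are ours, not taken from the paper) uses one standard characterization: non-positive off-diagonal entries together with positive leading principal minors.

```python
import numpy as np

def is_nonsingular_m_matrix(A, tol=1e-12):
    """Check one standard characterization of a nonsingular M-matrix:
    all off-diagonal entries are non-positive and all leading
    principal minors are positive."""
    A = np.asarray(A, dtype=float)
    n, m = A.shape
    if n != m:
        raise ValueError("A must be square")
    # Off-diagonal entries must be <= 0.
    off_diag = A - np.diag(np.diag(A))
    if np.any(off_diag > tol):
        return False
    # Leading principal minors must all be positive.
    return all(np.linalg.det(A[:k, :k]) > tol for k in range(1, n + 1))

# Sample matrices (illustrative only, not from the paper):
A = np.array([[ 2.0, -1.0],
              [-1.0,  2.0]])   # diagonally dominant, M-matrix
B = np.array([[ 1.0, -2.0],
              [-2.0,  1.0]])   # det = -3 < 0, not an M-matrix
```

For `A` the leading principal minors are 2 and 3, so the test passes; `B` fails because its determinant is negative.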

*Proof. *Set . For every and , by means of the Itô formula and , one has that
Integrating both sides of inequality (3.3) over and setting , one has that
Set . By (1.2), one has that
where is unit outer cotangent vector on . By (3.4), (3.5), (H1), and Young’s inequality, one has that
where and are defined by (3.1).

For , integrating both sides of (3.6) from to and then taking expectations on both sides, by the properties of Brownian motion one has that
Since the integrals and are finite, by the Fubini theorem [18] and (3.7), one obtains that

Set . Dividing both sides of inequality (3.8) by and letting , one obtains the following inequality:
By Lemma 2.4, there are positive constants , such that
where is the initial value. Set ; then
By (3.11) and Lemma 2.5, one obtains that

In order to prove Theorem 3.1, we need the following lemma.

Lemma 3.2. *The nonconstant equilibrium solution of the problem (1.1)–(1.3), satisfies .*

*Proof. *Set . Similar to (3.8) in the proof of Theorem 3.1, one has that
By (3.13) and the assumption that is an M-matrix, one obtains that
Because of and , one has that .

We now continue the proof of Theorem 3.1.

By Lemma 3.2, one has that
where is a common number. We deduce that every solution of problem (1.1)–(1.3) satisfies
and therefore the nonconstant equilibrium solution of problem (1.1)–(1.3) is exponentially stable in the th moment with respect to the norm . The proof of Theorem 3.1 is complete.

In order to illustrate the application of the theorem, we give an example.

*Example 3.3. *Discuss the stochastic reaction-diffusion neural network with time-varying delays and *p*-Laplacian as follows:
where
Set as a nonconstant equilibrium solution of (3.17) and (3.18). One can derive that
Taking and , one has that
and is an M-matrix. The nonconstant equilibrium solution of (3.17) and (3.18) is exponentially stable in the 3rd moment with respect to the norm .

*Remark 3.4. *Theorem 3.1 extends the corresponding results in [12, 13, 16] to the setting of the *p*-Laplacian.

#### Acknowledgments

The authors thank the reviewers for their constructive comments. This work is supported by the National Science Foundation of China (no. 10971240).

#### References

1. X. Liao and J. Wang, “Global dissipativity of continuous-time recurrent neural networks with time delay,” *Physical Review E*, vol. 68, no. 1, Article ID 016118, 6 pages, 2003.
2. L. Wang and D. Xu, “Stability for Hopfield neural networks with time delay,” *Journal of Vibration and Control*, vol. 8, no. 1, pp. 13–18, 2002.
3. D. Xu, H. Zhao, and H. Zhu, “Global dynamics of Hopfield neural networks involving variable delays,” *Computers & Mathematics with Applications*, vol. 42, no. 1-2, pp. 39–45, 2001.
4. L. S. Wang and D. Y. Xu, “Stability analysis of Hopfield neural networks with time delay,” *Applied Mathematics and Mechanics*, vol. 23, no. 1, pp. 65–70, 2002.
5. X. Liao, X. Mao, J. Wang, and Z. Zeng, “Algebraic conditions of stability for Hopfield neural network,” *Science in China F*, vol. 47, no. 1, pp. 113–125, 2004.
6. X. X. Liao, S. Z. Yang, S. J. Cheng, and Y. Shen, “Stability of general neural networks with reaction-diffusion,” *Science in China E*, vol. 32, no. 1, pp. 87–94, 2002 (Chinese).
7. L. Wang and D. Xu, “Global exponential stability of Hopfield reaction-diffusion neural networks with time-varying delays,” *Science in China F*, vol. 46, no. 6, pp. 466–474, 2003.
8. S. Blythe, X. Mao, and X. Liao, “Stability of stochastic delay neural networks,” *Journal of the Franklin Institute*, vol. 338, no. 4, pp. 481–495, 2001.
9. S. Hu, X. Liao, and X. Mao, “Stochastic Hopfield neural networks,” *Journal of Physics A*, vol. 36, pp. 2235–2249, 2003.
10. J. R. Niu, Z. F. Zhang, and D. Y. Xu, “Exponential stability in mean square of a stochastic Cohen-Grossberg neural network with time-varying delays,” *Chinese Journal of Engineering Mathematics*, vol. 22, no. 6, pp. 1001–1005, 2005 (Chinese).
11. Y. Shen, M. H. Jiang, and H. S. Yao, “Exponential stability of cellular neural networks,” *Acta Mathematica Scientia A*, vol. 25, no. 2, pp. 264–268, 2005 (Chinese).
12. Q. Luo, F. Deng, J. Bao, B. Zhao, and Y. Fu, “Stabilization of stochastic Hopfield neural network with distributed parameters,” *Science in China F*, vol. 47, no. 6, pp. 752–762, 2004.
13. Z. Zifang, *The stability of uncertain dynamical systems and applications [M.S. thesis]*, Sichuan University, 2004.
14. S. J. Long and L. Xiang, “A nonlinear measure method for stability of Hopfield neural networks with time-varying delays,” *Journal of Sichuan University*, vol. 44, no. 2, pp. 249–252, 2007 (Chinese).
15. Z.-F. Zhang and D.-Y. Xu, “A note on stability of stochastic delay neural networks,” *Chinese Journal of Engineering Mathematics*, vol. 27, no. 4, pp. 720–730, 2010.
16. Z. F. Zhang and J. Deng, “Stability of stochastic reaction-diffusion Hopfield neural network with time-varying delays,” *Journal of Sichuan University*, vol. 47, no. 3, pp. 251–256, 2010.
17. D. Xu, “Stability criteria of large-scale systems of the neutral type,” in *Proceedings of the 12th IMACS World Congress on Scientific Computations*, R. Vichnevetsky, P. Borne, and J. Vignes, Eds., vol. 1, p. 213, Gerfidn-cite Scientifique, Paris, France, 1988.
18. W. Rudin, *Real and Complex Analysis*, McGraw-Hill, 1974.