Research Article | Open Access

# Stochastic Stability of Neural Networks with Both Markovian Jump Parameters and Continuously Distributed Delays

**Academic Editor:** Manuel De La Sen

#### Abstract

The problem of stochastic stability is investigated for a class of neural networks with both Markovian jump parameters and continuously distributed delays. The jumping parameters are modeled as a continuous-time, finite-state Markov chain. By constructing appropriate Lyapunov-Krasovskii functionals, some novel stability conditions are obtained in terms of linear matrix inequalities (LMIs). The proposed LMI-based criteria are computationally efficient, as they can be checked easily by recently developed algorithms for solving LMIs. A numerical example is provided to show the effectiveness of the theoretical results and to demonstrate that the LMI criteria in the earlier literature fail for the considered model. The results obtained in this paper improve and generalize those given in the previous literature.

#### 1. Introduction

In recent years, neural networks (especially recurrent neural networks, Hopfield neural networks, and cellular neural networks) have been successfully applied in many areas such as signal processing, image processing, pattern recognition, fault diagnosis, associative memory, and combinatorial optimization; see, for example, [1–5]. One of the most important tasks in these applications is to study the stability of the equilibrium point of neural networks; that is, a major concern is to find stability conditions for the equilibrium point. To this end, extensive literature has been produced; see, for example, [6–22] and the references therein. It should be noted that the methods in the literature have seldom considered the case in which the systems have Markovian jump parameters, owing to the mathematical difficulties involved. However, neural networks in real life often exhibit a phenomenon of information latching. It is recognized that a way of dealing with this information-latching problem is to extract finite-state representations (also called modes or clusters). In fact, such a neural network with information latching may have finitely many modes, the modes may switch (or jump) from one to another at different times, and the switching (or jumping) between two arbitrary modes can be governed by a Markov chain. Hence, neural networks with Markovian jump parameters are of great significance in modeling a class of neural networks with finite modes.

On the other hand, time delay is frequently a major source of instability and poor performance in neural networks (e.g., see [6, 23, 24]), and so the stability analysis of neural networks with time delays is an important research topic. The existing works on neural networks with time delays can be classified into three categories: constant delays, time-varying delays, and distributed delays. It is noticed that most works in the literature have focused on the former two simple cases: constant delays or time-varying delays (e.g., see [6, 8–10, 12–16, 19–22]). However, as pointed out in [18], neural networks usually have a spatial nature due to the presence of a multitude of parallel pathways with a variety of axon sizes and lengths, and so it is desirable to model them by introducing continuously distributed delays over a certain duration of time, such that the distant past has less influence than the recent behavior of the state. However, only a few researchers have discussed neural networks with continuously distributed delays [18, 25]. Therefore, there remains ample room for developing novel, improved stability conditions.

Motivated by the above discussion, the objective of this paper is to study the stability of a class of neural networks with both Markovian jump parameters and continuously distributed delays. Moreover, to make the model more general and practical, the factor of noise disturbance is considered in this paper, since noise disturbance is also a major source of instability [7]. To the best of the authors' knowledge, the stability analysis problem for a class of stochastic neural networks with both Markovian jump parameters and continuously distributed delays has so far remained an open problem. Therefore, this paper is the first attempt to introduce and investigate the problem of stochastic stability for such a class of neural networks. By utilizing the Lyapunov stability theory and the linear matrix inequality (LMI) technique, some novel delay-dependent conditions are obtained to guarantee the stochastically asymptotic stability of the equilibrium point. The proposed LMI-based criteria are computationally efficient, as they can be checked easily by recently developed standard algorithms, such as interior-point methods [24], for solving LMIs. Finally, a numerical example is provided to illustrate the effectiveness of the theoretical results and to demonstrate that the LMI criteria in the earlier literature fail for the considered model. The results obtained in this paper improve and generalize those given in the previous literature.

The remainder of this paper is organized as follows. In Section 2, the model of a class of stochastic neural networks with both Markovian jump parameters and continuously distributed delays is introduced, and some assumptions needed in this paper are presented. By means of the Lyapunov-Krasovskii functional approach, our main results are established in Section 3. In Section 4, a numerical example is given to show the effectiveness of the obtained results. Finally, conclusions are drawn in Section 5.

*Notation.* Throughout this paper, the following notations will be used. $\mathbb{R}^n$ and $\mathbb{R}^{n \times n}$ denote the $n$-dimensional Euclidean space and the set of all $n \times n$ real matrices, respectively. The superscript “$T$” denotes the transpose of a matrix or vector. $\operatorname{trace}(\cdot)$ denotes the trace of the corresponding matrix, and $I$ denotes the identity matrix with compatible dimensions. For square matrices $A$ and $B$, the notation $A > B$ (respectively, $\geq$, $<$, $\leq$) denotes that $A - B$ is a positive-definite (positive-semidefinite, negative-definite, negative-semidefinite) matrix. Let $w(t) = (w_1(t), \ldots, w_m(t))^T$ be an $m$-dimensional Brownian motion defined on a complete probability space $(\Omega, \mathcal{F}, P)$ with a natural filtration $\{\mathcal{F}_t\}_{t \geq 0}$. Also, let $\tau > 0$ and $C([-\tau, 0]; \mathbb{R}^n)$ denote the family of continuous functions $\varphi$ from $[-\tau, 0]$ to $\mathbb{R}^n$ with the uniform norm $\|\varphi\| = \sup_{-\tau \leq \theta \leq 0} |\varphi(\theta)|$. Denote by $L^2_{\mathcal{F}_0}([-\tau, 0]; \mathbb{R}^n)$ the family of all $\mathcal{F}_0$-measurable, $C([-\tau, 0]; \mathbb{R}^n)$-valued stochastic variables $\xi = \{\xi(\theta): -\tau \leq \theta \leq 0\}$ such that $\int_{-\tau}^{0} E|\xi(\theta)|^2 \, d\theta < \infty$, where $E$ stands for the expectation operator with respect to the given probability measure $P$.

#### 2. Model Description and Problem Formulation

Let $\{r(t), t \geq 0\}$ be a right-continuous Markov chain on a complete probability space $(\Omega, \mathcal{F}, P)$ taking values in a finite state space $S = \{1, 2, \ldots, N\}$ with generator $\Gamma = (\gamma_{ij})_{N \times N}$ given by
$$P\{r(t + \Delta) = j \mid r(t) = i\} = \begin{cases} \gamma_{ij}\Delta + o(\Delta), & i \neq j, \\ 1 + \gamma_{ii}\Delta + o(\Delta), & i = j, \end{cases} \tag{2.1}$$
where $\Delta > 0$ and $\lim_{\Delta \to 0} o(\Delta)/\Delta = 0$. Here, $\gamma_{ij} \geq 0$ is the transition rate from $i$ to $j$ if $i \neq j$, while $\gamma_{ii} = -\sum_{j \neq i} \gamma_{ij}$.
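The generator description above translates directly into a simulation recipe: in state $i$ the chain waits an exponential holding time with rate $-\gamma_{ii}$ and then jumps to $j \neq i$ with probability $\gamma_{ij}/(-\gamma_{ii})$. A minimal sketch (the two-state generator below is illustrative, not taken from this paper):

```python
import numpy as np

def simulate_ctmc(gamma, i0, t_end, rng):
    """Simulate a right-continuous Markov chain r(t) with generator gamma.

    The holding time in state i is Exp(-gamma[i, i]); on leaving i, the
    chain jumps to j != i with probability gamma[i, j] / (-gamma[i, i]).
    Returns the jump times and the visited states.
    """
    times, states = [0.0], [i0]
    t, i = 0.0, i0
    while True:
        rate = -gamma[i, i]
        t += rng.exponential(1.0 / rate)
        if t >= t_end:
            break
        probs = gamma[i].copy()
        probs[i] = 0.0
        i = rng.choice(len(probs), p=probs / rate)
        times.append(t)
        states.append(i)
    return np.array(times), np.array(states)

# Illustrative two-state generator: rows sum to zero, off-diagonals >= 0.
gamma = np.array([[-3.0, 3.0],
                  [2.0, -2.0]])
rng = np.random.default_rng(42)
times, states = simulate_ctmc(gamma, i0=0, t_end=50.0, rng=rng)
```

With two states, every jump necessarily lands in the other state, so the simulated path alternates between modes 0 and 1.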

In this paper we consider a class of neural networks with both Markovian jump parameters and continuously distributed delays, which is described by the following integro-differential equation:
$$\dot{x}(t) = -C(r(t))\,x(t) + A(r(t))\,f(x(t)) + B(r(t))\,g(x(t - \tau)) + D(r(t))\int_{-\infty}^{t} K(t - s)\,h(x(s))\,ds + J, \tag{2.2}$$
where $x(t) = (x_1(t), \ldots, x_n(t))^T$ is the state vector associated with the $n$ neurons, and the diagonal matrix $C(r(t))$ has positive entries $c_i(r(t)) > 0$. The matrices $A(r(t))$, $B(r(t))$, and $D(r(t))$ are, respectively, the connection weight matrix, the discretely delayed connection weight matrix, and the distributively delayed connection weight matrix. $f(\cdot)$, $g(\cdot)$, and $h(\cdot)$ denote the neuron activation functions, and $J$ denotes a constant external input vector. The constant $\tau \geq 0$ denotes the time delay, and $K(\cdot) = (k_1(\cdot), \ldots, k_n(\cdot))^T$ denotes the delay kernel vector, where each $k_j$ is a real-valued nonnegative continuous function defined on $[0, \infty)$ such that $\int_0^\infty k_j(s)\,ds = 1$ for $j = 1, 2, \ldots, n$.
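The normalization condition on the delay-kernel components and the resulting distributed-delay term can be checked numerically. The exponential kernel $k(s) = e^{-s}$ below is a common illustrative choice, not one prescribed by this paper:

```python
import numpy as np
from scipy.integrate import quad

# Illustrative exponential delay kernel; it satisfies the normalization
# \int_0^\infty k(s) ds = 1 required of each component k_j.
k = lambda s: np.exp(-s)
total, _ = quad(k, 0.0, np.inf)

# Distributed-delay term \int_0^\infty k(s) h(x(t-s)) ds for a constant
# history x(t-s) = x0 and activation h = tanh; by normalization this
# reduces to h(x0).
x0 = 0.5
term, _ = quad(lambda s: k(s) * np.tanh(x0), 0.0, np.inf)
```

Because the kernel integrates to one, a constant history passes through the distributed-delay term unchanged, which is exactly why the normalization is imposed.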

In this paper we will investigate a more general model in which environmental noise is considered on system (2.2), and so this model can be written as the following integro-differential equation:
$$dx(t) = \Big[-C(r(t))\,x(t) + A(r(t))\,f(x(t)) + B(r(t))\,g(x(t - \tau)) + D(r(t))\int_{-\infty}^{t} K(t - s)\,h(x(s))\,ds + J\Big]\,dt + \sigma(t, x(t), x(t - \tau), r(t))\,dw(t), \tag{2.3}$$
where $\sigma$ is the noise perturbation.

Throughout this paper, the following conditions are supposed to hold.

*Assumption 2.1. *There exist six diagonal matrices $U^- = \operatorname{diag}(u_1^-, \ldots, u_n^-)$, $U^+ = \operatorname{diag}(u_1^+, \ldots, u_n^+)$, $V^- = \operatorname{diag}(v_1^-, \ldots, v_n^-)$, $V^+ = \operatorname{diag}(v_1^+, \ldots, v_n^+)$, $W^- = \operatorname{diag}(w_1^-, \ldots, w_n^-)$, and $W^+ = \operatorname{diag}(w_1^+, \ldots, w_n^+)$ satisfying
$$u_j^- \leq \frac{f_j(x) - f_j(y)}{x - y} \leq u_j^+, \qquad v_j^- \leq \frac{g_j(x) - g_j(y)}{x - y} \leq v_j^+, \qquad w_j^- \leq \frac{h_j(x) - h_j(y)}{x - y} \leq w_j^+$$
for all $x, y \in \mathbb{R}$ with $x \neq y$, and $j = 1, 2, \ldots, n$.

*Assumption 2.2. *There exist two positive definite matrices $\Sigma_1$ and $\Sigma_2$ such that
$$\operatorname{trace}\big[\sigma^T(t, x, y, i)\,\sigma(t, x, y, i)\big] \leq x^T \Sigma_1 x + y^T \Sigma_2 y$$
for all $t \geq 0$ and $x, y \in \mathbb{R}^n$, $i \in S$.

*Assumption 2.3. *

Under Assumptions 2.1 and 2.2, it is well known (see, e.g., Mao [16]) that, for any given initial data, (2.3) has a unique equilibrium point. Now, let $x^* = (x_1^*, \ldots, x_n^*)^T$ be the unique equilibrium point of (2.3), and set $y(t) = x(t) - x^*$. Then we can rewrite system (2.3) as
$$dy(t) = \Big[-C(r(t))\,y(t) + A(r(t))\,F(y(t)) + B(r(t))\,G(y(t - \tau)) + D(r(t))\int_{-\infty}^{t} K(t - s)\,H(y(s))\,ds\Big]\,dt + \bar{\sigma}(t, y(t), y(t - \tau), r(t))\,dw(t), \tag{2.6}$$
where $F(y(t)) = f(y(t) + x^*) - f(x^*)$, $G(y(t - \tau)) = g(y(t - \tau) + x^*) - g(x^*)$, $H(y(s)) = h(y(s) + x^*) - h(x^*)$, and $\bar{\sigma}(t, y(t), y(t - \tau), r(t)) = \sigma(t, y(t) + x^*, y(t - \tau) + x^*, r(t))$.

Noting the facts that $F(0) = G(0) = H(0) = 0$ and $\bar{\sigma}(t, 0, 0, i) \equiv 0$, the trivial solution of system (2.6) exists. Hence, to prove the stability of the equilibrium point $x^*$ of (2.3), it is sufficient to prove the stability of the trivial solution of system (2.6). On the other hand, by Assumption 2.1 we have
$$u_j^- \leq \frac{F_j(x)}{x} \leq u_j^+, \qquad v_j^- \leq \frac{G_j(x)}{x} \leq v_j^+, \qquad w_j^- \leq \frac{H_j(x)}{x} \leq w_j^+$$
for all $x \neq 0$, $j = 1, 2, \ldots, n$.

Let $y(t; \xi)$ denote the state trajectory of system (2.6) from the given initial data, and note that system (2.6) admits a trivial solution $y(t; 0) \equiv 0$ corresponding to the zero initial data. For simplicity, we write $y(t; \xi) = y(t)$. Let $C^{2,1}$ denote the family of all nonnegative functions $V(y, t, i)$ which are continuously twice differentiable in $y$ and differentiable in $t$. If $V \in C^{2,1}$, then along the trajectory of system (2.6) we define an operator $\mathcal{L}V$ in the usual way. Now we give the definition of stochastic asymptotic stability for system (2.6).

*Definition 2.4. *The equilibrium point of (2.6) (or, equivalently, of (2.3)) is said to be stochastically asymptotically stable in the mean square if, for every admissible initial data $\xi$, the following equality holds:
$$\lim_{t \to \infty} E\,|y(t; \xi)|^2 = 0.$$
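For reference, for a Markovian jump SDE with drift $\varphi(t)$ and diffusion $\bar{\sigma}(t)$ in mode $r(t) = i$, the operator $\mathcal{L}V$ introduced above takes the standard form (see, e.g., Mao [16]); the symbols below follow the usual convention and are not reproduced verbatim from this paper:

```latex
\mathcal{L}V(y, t, i) = V_t(y, t, i) + V_y(y, t, i)\,\varphi(t)
  + \tfrac{1}{2}\operatorname{trace}\!\left[\bar{\sigma}^{T}(t)\, V_{yy}(y, t, i)\,\bar{\sigma}(t)\right]
  + \sum_{j=1}^{N} \gamma_{ij}\, V(y, t, j),
```

where $V_t$, $V_y$, and $V_{yy}$ denote the partial derivatives of $V$, and the last sum accounts for the Markovian switching between modes.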

In the sequel, for simplicity, when $r(t) = i \in S$, the matrices $C(r(t))$, $A(r(t))$, $B(r(t))$, and $D(r(t))$ will be written as $C_i$, $A_i$, $B_i$, and $D_i$, respectively.

#### 3. Main Results and Proofs

In this section, the stochastic asymptotic stability in the mean square of the equilibrium point of system (2.6) is investigated under Assumptions 2.1–2.3.

Theorem 3.1. *Under Assumptions 2.1–2.3, the equilibrium point of (2.6) (or (2.3) equivalently) is stochastically asymptotically stable in the mean square, if there exist positive scalars , positive definite matrices , , , , , and four positive diagonal matrices , , , and such that the following LMIs hold:
**
where the symbol “$*$” denotes the symmetric term of the matrix:
*

*Proof. *Fixing $i \in S$ arbitrarily, consider the following Lyapunov-Krasovskii functional:
where
For simplicity, denote by . Then it follows from (2.10) and (2.6) that
On the other hand, by Assumption 2.2 and condition (3.2) we obtain
which together with (3.8) gives
Also, from direct computations, it follows that
It should be mentioned that the above calculation has applied the following elementary inequality: $2a^T b \leq a^T X a + b^T X^{-1} b$, which holds for any vectors $a, b \in \mathbb{R}^n$ and any positive definite matrix $X \in \mathbb{R}^{n \times n}$, with $a$ and $b$ chosen appropriately for each term.
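A quick numerical sanity check of the completing-the-square bound $2a^T b \leq a^T X a + b^T X^{-1} b$, which is the kind of bound applied in the estimate above (it follows from $\|X^{1/2}a - X^{-1/2}b\|^2 \geq 0$):

```python
import numpy as np

def bound_holds(a, b, X, tol=1e-9):
    """Check 2 a^T b <= a^T X a + b^T X^{-1} b for positive definite X."""
    return 2.0 * a @ b <= a @ X @ a + b @ np.linalg.inv(X) @ b + tol

rng = np.random.default_rng(7)
n = 4
checks = []
for _ in range(200):
    a = rng.standard_normal(n)
    b = rng.standard_normal(n)
    M = rng.standard_normal((n, n))
    X = M @ M.T + np.eye(n)  # positive definite by construction
    checks.append(bound_holds(a, b, X))
```

The free matrix $X$ is what gives LMI-based criteria their flexibility: it becomes a decision variable when the resulting conditions are collected into LMIs.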

Furthermore, it follows from the conditions (3.3) and (3.4) that

On the other hand, by Assumption 2.1 we have

Hence, by (3.8)–(3.14), we get
where
By condition (3.1), there must exist a scalar such that . Setting , it is clear that . Taking the mathematical expectation on both sides of (3.15), we obtain
Applying the Dynkin formula and from (3.18), it follows that
and so
which implies that the equilibrium point of (2.3) (or (2.6) equivalently) is stochastically asymptotically stable in the mean square. This completes the proof.

*Remark 3.2. *Theorem 3.1 provides a sufficient condition for the generalized neural network (2.3) to be stochastically asymptotically stable in the mean square at its equilibrium point. The condition is easy to verify and can be applied in practice, as it can be checked by recently developed algorithms for solving LMIs.

*Remark 3.3. *The generalized neural network (2.3) is quite general, since it accounts for many factors including noise perturbations, Markovian jump parameters, and continuously distributed delays. Furthermore, the constants $u_j^\pm$, $v_j^\pm$, $w_j^\pm$ in Assumption 2.1 are allowed to be *positive, negative, or zero*. To the best of our knowledge, the generalized neural network (2.3) has never been considered in the previous literature. Hence, the LMI criteria from all the previous literature fail to cover our results.

*Remark 3.4. *If we take , then system (2.3) can be written as
If we take , then system (2.3) can be written as
If we do not consider noise perturbations, then system (2.3) can be written as
To the best of our knowledge, even systems (3.21)–(3.23) still have not been investigated in the previous literature.

*Remark 3.5. *We now illustrate that the neural network (2.3) generalizes some neural networks considered in the earlier literature. For example, if we take
then system (2.3) can be written as
System (3.25) was discussed by Liu et al. [26] and Wang et al. [27], although the delays are time-varying in [27]. Nevertheless, we point out that system (3.25) can be generalized to neural networks with time-varying delays without any difficulty. If we take and do not consider noise perturbations, then system (2.3) can be written as
The stability analysis for system (3.26) was investigated by Wang et al. [20]. In [15], Lou and Cui also considered system (3.26) with .

The next five corollaries follow directly from Theorem 3.1, and so we omit their proofs.

Corollary 3.6. *Under Assumptions 2.1–2.3, the equilibrium point of (3.21) is stochastically asymptotically stable in the mean square, if there exist positive scalars , positive definite matrices , , , and three positive diagonal matrices , , and such that the following LMIs hold:
**
where the symbol “$*$” denotes the symmetric term of the matrix:
*

Corollary 3.7. *Under Assumptions 2.1–2.3, the equilibrium point of (3.22) is stochastically asymptotically stable in the mean square, if there exist positive scalars , positive definite matrices , , , and three positive diagonal matrices , , and such that the following LMIs hold:
**
where the symbol “$*$” denotes the symmetric term of the matrix:
*

Corollary 3.8. *Under Assumptions 2.1–2.3, the equilibrium point of (3.23) is stochastically asymptotically stable in the mean square, if there exist positive scalars , positive definite matrices , , , , , and three positive diagonal matrices , , and such that the following LMIs hold:
**
where the symbol “$*$” denotes the symmetric term of the matrix:
*

Corollary 3.9. *Under Assumptions 2.1–2.3, the equilibrium point of (3.25) is stochastically asymptotically stable in the mean square, if there exist positive scalars , positive definite matrices , , , , , and four positive diagonal matrices , , , and such that the following LMIs hold:
**
where the symbol “$*$” denotes the symmetric term of the matrix:
*

Corollary 3.10. *Under Assumption 2.1, the equilibrium point of (3.26) is stochastically asymptotically stable in the mean square, if there exist positive scalars , positive definite matrices , , , , , and three positive diagonal matrices , , and such that the following LMIs hold:
**
where the symbol “$*$” denotes the symmetric term of the matrix:
*

*Remark 3.11. *As discussed in Remark 3.4, Corollaries 3.6–3.8 are “new” since the corresponding systems have never been considered in the previous literature. The system of Corollary 3.9 was discussed by Liu et al. [26] and Wang et al. [27], although the delays are time-varying in [27]. Nevertheless, we point out that our results can be generalized to neural networks with time-varying delays without any difficulty. The system of Corollary 3.10 has been discussed by Wang et al. [20] and Lou and Cui [15], but our conditions are weaker than those in [15, 20], for the constants in Corollary 3.10 are allowed to be *positive, negative, or zero*.

#### 4. Illustrative Example

In this section, a numerical example is given to illustrate the effectiveness of the obtained results.

*Example 4.1. *Consider a two-dimensional stochastic neural network with both Markovian jump parameters and continuously distributed delays:
where $w(t)$ is a two-dimensional Brownian motion and $r(t)$ is a right-continuous Markov chain taking values in $S = \{1, 2\}$ with generator
Let
then system (4.1) satisfies Assumption 2.1 with . Take
then system (4.1) satisfies Assumptions 2.2 and 2.3 with .

Other parameters of the network (4.1) are given as follows: Here we let . By using the Matlab LMI Toolbox, we obtain the following feasible solution of the LMIs (3.1)–(3.4): Therefore, it follows from Theorem 3.1 that the network (4.1) is stochastically asymptotically stable in the mean square.
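The specific LMIs (3.1)–(3.4) are not reproduced here, but the general workflow of certifying stability through a matrix inequality can be sketched on the classical Lyapunov inequality $A^T P + P A < 0$: solve the associated Lyapunov equation and confirm definiteness via eigenvalues. The matrix $A$ below is illustrative, not the network (4.1):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov  # solves A X + X A^T = Q

# Illustrative Hurwitz matrix (all eigenvalues in the open left half-plane).
A = np.array([[-2.0, 1.0],
              [0.5, -3.0]])

# Find P with A^T P + P A = -I, so the LMI A^T P + P A < 0 is feasible.
P = solve_continuous_lyapunov(A.T, -np.eye(2))

# Certify: P symmetric positive definite, A^T P + P A negative definite.
eig_P = np.linalg.eigvalsh((P + P.T) / 2)
eig_LMI = np.linalg.eigvalsh(A.T @ P + P @ A)
```

Dedicated SDP solvers (like the Matlab LMI Toolbox used above, or interior-point methods [24]) carry out the same kind of feasibility search when the LMIs contain several coupled matrix variables.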

By using the Euler-Maruyama numerical scheme with the parameters given above and a small step size, simulation results are obtained as follows: Figure 1 is the state response of model 1 (i.e., the network (4.1) when $r(t) = 1$) with a given initial condition, and Figure 2 is the state response of model 2 (i.e., the network (4.1) when $r(t) = 2$) with a given initial condition.
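For a Markovian-switching SDE, the Euler-Maruyama scheme advances the state with the drift of the currently active mode plus a scaled Gaussian increment, while the mode jumps with probability $\gamma_{ij}\,\Delta t$ per step. The two-mode linear system below is a stand-in with assumed matrices, not the network (4.1):

```python
import numpy as np

def euler_maruyama_switching(drifts, sigma, gamma, x0, r0, dt, n_steps, rng):
    """Euler-Maruyama for dx = A_{r(t)} x dt + sigma dw with Markov switching.

    drifts: list of mode matrices A_i; gamma: generator of the mode chain.
    At each step the mode jumps i -> j (j != i) w.p. gamma[i, j] * dt.
    """
    x, r = np.array(x0, dtype=float), r0
    traj = [x.copy()]
    for _ in range(n_steps):
        # Mode transition (first-order approximation of the chain).
        p = gamma[r] * dt
        p[r] = 1.0 + gamma[r, r] * dt
        r = rng.choice(len(p), p=p)
        # State update: drift of the active mode + diffusion increment.
        dw = rng.standard_normal(x.shape) * np.sqrt(dt)
        x = x + drifts[r] @ x * dt + sigma * dw
        traj.append(x.copy())
    return np.array(traj)

# Two illustrative stable modes and a two-state generator.
A0 = np.array([[-2.0, 0.1], [0.1, -2.0]])
A1 = np.array([[-3.0, -0.2], [0.2, -3.0]])
gamma = np.array([[-1.0, 1.0], [2.0, -2.0]])
rng = np.random.default_rng(0)
traj = euler_maruyama_switching([A0, A1], 0.05, gamma, [1.0, -1.0], 0,
                                dt=0.001, n_steps=10000, rng=rng)
```

With both modes stable, the simulated trajectory decays toward the origin apart from small noise-driven fluctuations, mirroring the behavior seen in Figures 1 and 2.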

*Remark 4.2. *As discussed in Remarks 3.2–3.11, the LMI criteria from the previous literature (e.g., Liu et al. [26], Wang et al. [20, 27], Lou and Cui [15], etc.) fail for Example 4.1, since Example 4.1 involves noise perturbations, Markovian jump parameters, and continuously distributed delays simultaneously.

#### 5. Concluding Remarks

In this paper we have investigated the stochastic stability analysis problem for a class of neural networks with both Markovian jump parameters and continuously distributed delays. It is worth mentioning that the obtained stability condition is delay-dependent, which makes it less conservative than delay-independent criteria when the delay is small. Furthermore, the stability criteria obtained in this paper are expressed in terms of LMIs, which can be solved easily by recently developed algorithms. A numerical example is given to show the reduced conservatism and the effectiveness of our results. The results obtained in this paper improve and generalize those given in the previous literature. On the other hand, it should be noted that the explicit rate of convergence for the considered system is not given in this paper, since continuously distributed delays are difficult to handle in that respect. Therefore, investigating the explicit rate of convergence for the considered system remains an open issue. Finally, we point out that it is possible to generalize our results to a class of neural networks with uncertainties. Research on this topic is in progress.

#### Acknowledgments

The authors would like to thank the editor and five anonymous referees for their helpful comments and valuable suggestions regarding this paper. This work was jointly supported by the National Natural Science Foundation of China (10801056, 60874088), the Natural Science Foundation of Guangdong Province (06300957), K. C. Wong Magna Fund in Ningbo University, and the Specialized Research Fund for the Doctoral Program of Higher Education (20070286003).

#### References

1. A. Cichocki and R. Unbehauen, *Neural Networks for Optimization and Signal Processing*, John Wiley & Sons, New York, NY, USA, 1993.
2. L. O. Chua and L. Yang, “Cellular neural networks: applications,” *IEEE Transactions on Circuits and Systems*, vol. 35, no. 10, pp. 1273–1290, 1988.
3. G. Joya, M. A. Atencia, and F. Sandoval, “Hopfield neural networks for optimization: study of the different dynamics,” *Neurocomputing*, vol. 43, pp. 219–237, 2002.
4. W.-J. Li and T. Lee, “Hopfield neural networks for affine invariant matching,” *IEEE Transactions on Neural Networks*, vol. 12, no. 6, pp. 1400–1410, 2001.
5. S. S. Young, P. D. Scott, and N. M. Nasrabadi, “Object recognition using multilayer Hopfield neural network,” *IEEE Transactions on Image Processing*, vol. 6, no. 3, pp. 357–372, 1997.
6. S. Arik, “Stability analysis of delayed neural networks,” *IEEE Transactions on Circuits and Systems I*, vol. 47, no. 7, pp. 1089–1092, 2000.
7. S. Blythe, X. Mao, and X. Liao, “Stability of stochastic delay neural networks,” *Journal of the Franklin Institute*, vol. 338, no. 4, pp. 481–495, 2001.
8. J. Cao, A. Chen, and X. Huang, “Almost periodic attractor of delayed neural networks with variable coefficients,” *Physics Letters A*, vol. 340, no. 1–4, pp. 104–120, 2005.
9. J. Cao and J. Wang, “Global exponential stability and periodicity of recurrent neural networks with time delays,” *IEEE Transactions on Circuits and Systems I*, vol. 52, no. 5, pp. 920–931, 2005.
10. W.-H. Chen and X. Lu, “Mean square exponential stability of uncertain stochastic delayed neural networks,” *Physics Letters A*, vol. 372, no. 7, pp. 1061–1069, 2008.
11. W.-H. Chen and W. X. Zheng, “Global asymptotic stability of a class of neural networks with distributed delays,” *IEEE Transactions on Circuits and Systems I*, vol. 53, no. 3, pp. 644–652, 2006.
12. H. Huang, D. W. C. Ho, and J. Lam, “Stochastic stability analysis of fuzzy Hopfield neural networks with time-varying delays,” *IEEE Transactions on Circuits and Systems II*, vol. 52, no. 5, pp. 251–255, 2005.
13. M. P. Joy, “Results concerning the absolute stability of delayed neural networks,” *Neural Networks*, vol. 13, no. 6, pp. 613–616, 2000.
14. Y. Liu, Z. Wang, and X. Liu, “On global exponential stability of generalized stochastic neural networks with mixed time-delays,” *Neurocomputing*, vol. 70, no. 1–3, pp. 314–326, 2006.
15. X. Lou and B. Cui, “Delay-dependent stochastic stability of delayed Hopfield neural networks with Markovian jump parameters,” *Journal of Mathematical Analysis and Applications*, vol. 328, no. 1, pp. 316–326, 2007.
16. X. Mao, *Stochastic Differential Equations and Their Applications*, Horwood Publishing Series in Mathematics & Applications, Horwood Publishing, Chichester, UK, 1997.
17. Y. S. Moon, P. Park, W. H. Kwon, and Y. S. Lee, “Delay-dependent robust stabilization of uncertain state-delayed systems,” *International Journal of Control*, vol. 74, no. 14, pp. 1447–1455, 2001.
18. J. H. Park, “On global stability criterion of neural networks with continuously distributed delays,” *Chaos, Solitons & Fractals*, vol. 37, no. 2, pp. 444–449, 2008.
19. R. Rakkiyappan and P. Balasubramaniam, “Delay-dependent asymptotic stability for stochastic delayed recurrent neural networks with time varying delays,” *Applied Mathematics and Computation*, vol. 198, no. 2, pp. 526–533, 2008.
20. Z. Wang, Y. Liu, L. Yu, and X. Liu, “Exponential stability of delayed recurrent neural networks with Markovian jumping parameters,” *Physics Letters A*, vol. 356, no. 4-5, pp. 346–352, 2006.
21. L. Wan and J. Sun, “Mean square exponential stability of stochastic delayed Hopfield neural networks,” *Physics Letters A*, vol. 343, no. 4, pp. 306–318, 2005.
22. Q. Zhou and L. Wan, “Exponential stability of stochastic delayed Hopfield neural networks,” *Applied Mathematics and Computation*, vol. 199, no. 1, pp. 84–89, 2008.
23. P. Baldi and A. F. Atiya, “How delays affect neural dynamics and learning,” *IEEE Transactions on Neural Networks*, vol. 5, no. 4, pp. 612–621, 1994.
24. S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, *Linear Matrix Inequalities in System and Control Theory*, vol. 15 of *SIAM Studies in Applied Mathematics*, SIAM, Philadelphia, Pa, USA, 1994.
25. H. Yang and T. Chu, “LMI conditions for stability of neural networks with distributed delays,” *Chaos, Solitons & Fractals*, vol. 34, no. 2, pp. 557–563, 2007.
26. Y. Liu, Z. Wang, and X. Liu, “On delay-dependent robust exponential stability of stochastic neural networks with mixed time delays and Markovian switching,” *Nonlinear Dynamics*, vol. 54, no. 3, pp. 199–212, 2008.
27. G. Wang, J. Cao, and J. Liang, “Exponential stability in the mean square for stochastic neural networks with mixed time-delays and Markovian jumping parameters,” *Nonlinear Dynamics*, vol. 57, no. 1-2, pp. 209–218, 2009.

#### Copyright

Copyright © 2009 Quanxin Zhu and Jinde Cao. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.