Abstract and Applied Analysis

Volume 2014 (2014), Article ID 842976, 15 pages

http://dx.doi.org/10.1155/2014/842976
Research Article

Robust Exponential Stabilization of Stochastic Delay Interval Recurrent Neural Networks with Distributed Parameters and Markovian Jumping by Using Periodically Intermittent Control

1College of Mathematics and Statistics, South Central University for Nationalities, Wuhan 430074, China

2School of Automation Science and Engineering, South China University of Technology, Guangzhou 510641, China

3College of Automation, Huazhong University of Science and Technology, Wuhan 430074, China

4College of Science, Huazhong Agriculture University, Wuhan 430070, China

Received 13 January 2014; Accepted 14 February 2014; Published 27 April 2014

Academic Editor: Zhengguang Wu

Copyright © 2014 Junhao Hu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We consider a class of stochastic delay recurrent neural networks with distributed parameters and Markovian jumping. It is assumed that the coefficients of these neural networks belong to given interval matrices. Several sufficient conditions ensuring robust exponential stabilization are derived by using periodically intermittent control and a Lyapunov functional. The obtained results are easy to verify and implement and improve upon existing results. Finally, an example with numerical simulations is given to illustrate the presented criteria.

1. Introduction

In recent decades, neural network dynamics has been widely studied because of its applications to associative memory, signal processing, pattern classification, and quadratic optimization. Liao and Mao [1, 2] first investigated the stability of stochastic neural networks in 1996. Using Razumikhin-type theorems, the stability of stochastic neural networks with variable delays was considered in [3]. When electrons move in an asymmetric electromagnetic field, diffusion phenomena cannot be ignored. Luo et al. [4] gave several algebraic criteria for stochastic Hopfield neural networks with distributed parameters by using an average Lyapunov function. The asymptotic stability of stochastic reaction-diffusion systems was also established in [5]. The asymptotic behavior of several classes of neural networks with reaction-diffusion terms has been reported in [6–9]. Hu et al. [10] discussed the exponential stability and synchronization of delayed neural networks with reaction-diffusion terms by impulsive control.

However, the parameters of neural networks are always subject to uncertainties and errors. Taking these uncertainties and errors into account, Xu et al. [11] investigated the stochastic exponential robust stability of interval neural networks with reaction-diffusion terms and mixed delays by applying the vector Lyapunov function method and M-matrix theory. Wang and Gao [12] studied the global exponential robust stability of reaction-diffusion interval neural networks with time-varying delays by means of topological degree theory and the Lyapunov functional method. In addition, a sufficient condition was presented for the robust global exponential stability of interval reaction-diffusion Hopfield neural networks with distributed delays by constructing a Lyapunov functional and utilizing some inequality techniques [13].

Neural networks driven by continuous-time Markov chains have also been used to model many practical neural networks, because such networks may experience abrupt changes in their structure and parameters caused by phenomena such as component failures or repairs, changing subsystem interconnections, and abrupt environmental disturbances. The exponential stability and stabilization of recurrent neural networks with Markovian jumping were discussed in [14–20]. The robust stability of stochastic delayed additive neural networks with Markovian jumping was investigated in [21]. Mao [22] studied the stability of stochastic delay interval systems with Markovian jumping by linear matrix inequalities.

Many control approaches have been developed to stabilize and synchronize systems, such as impulsive control [23] and intermittent control [24–29]. Gan [24–26] established the exponential synchronization of three classes of stochastic delay neural networks via periodically intermittent control. Hu et al. [27, 28] investigated the exponential stabilization and synchronization of delayed neural networks. Huang et al. [29] studied the stabilization of delayed chaotic neural networks by periodically intermittent control.

In this paper, we consider a class of stochastic delay interval recurrent neural networks with distributed parameters and Markovian switching whose activation functions are more general than the Lipschitz continuous activation functions in [24–26] and the monotone activation functions in [27–29]. By means of an average Lyapunov functional and periodically intermittent control, several sufficient conditions ensuring robust exponential stabilization are given. The organization of this paper is as follows. Some preliminaries are given in Section 2. In Section 3, the robust exponential stabilization of these stochastic neural networks is proved. An example with numerical simulations is given to illustrate the effectiveness of the obtained results in Section 4.

2. Preliminaries

Throughout this paper, unless otherwise specified, we let $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t \ge 0}, P)$ be a complete probability space with a filtration $\{\mathcal{F}_t\}_{t \ge 0}$ satisfying the usual conditions (i.e., it is right-continuous and $\mathcal{F}_0$ contains all $P$-null sets). Let $\mathbb{R}^n$ be the $n$-dimensional Euclidean space and let $|\cdot|$ be the Euclidean norm in $\mathbb{R}^n$. Assume that $X \subset \mathbb{R}^m$ is a bounded compact set with smooth boundary $\partial X$ and $\operatorname{mes} X > 0$. Let $C([-\tau, 0] \times X; \mathbb{R}^n)$ denote the family of continuous functions from $[-\tau, 0] \times X$ to $\mathbb{R}^n$ with the supremum norm. Denote by $C^b_{\mathcal{F}_0}([-\tau, 0] \times X; \mathbb{R}^n)$ the family of all bounded, $\mathcal{F}_0$-measurable, $C([-\tau, 0] \times X; \mathbb{R}^n)$-valued random variables. Let $w(t) = (w_1(t), \dots, w_n(t))^T$ be an $n$-dimensional Brownian motion defined on the probability space. Let $r(t)$, $t \ge 0$, be a right-continuous Markov chain on the probability space taking values in a finite state space $S = \{1, 2, \dots, N\}$ with generator $\Gamma = (\gamma_{ij})_{N \times N}$, whose transition probabilities are recalled below. Here, $\gamma_{ij} \ge 0$ is the transition rate from $i$ to $j$ if $i \ne j$, while $\gamma_{ii} = -\sum_{j \ne i} \gamma_{ij}$. We assume that the Markov chain $r(\cdot)$ is independent of the Brownian motion $w(\cdot)$. It is well known that almost every sample path of $r(t)$ is a right-continuous step function with a finite number of simple jumps in any finite subinterval of $[0, \infty)$.
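
In the standard formulation of a continuous-time Markov chain with generator $\Gamma = (\gamma_{ij})_{N \times N}$, the transition probabilities satisfy, as $\Delta \downarrow 0$,
$$
P\{ r(t+\Delta) = j \mid r(t) = i \} =
\begin{cases}
\gamma_{ij}\,\Delta + o(\Delta), & i \neq j, \\
1 + \gamma_{ii}\,\Delta + o(\Delta), & i = j .
\end{cases}
$$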

In this paper, we consider a class of stochastic delay interval recurrent neural networks with distributed parameters and Markovian jumping, denoted as system (3), for $t \ge 0$ and $x \in X$, where $n$ denotes the number of neurons in the network; $x$ is the space variable; $X$ is a bounded compact set with smooth boundary $\partial X$ and $\operatorname{mes} X > 0$ in the space $\mathbb{R}^m$; $u_i(t, x)$ corresponds to the state variable of the $i$th neuron at position $x$ and time $t$; $D_{ik}$ denotes the transmission diffusion operator along the $i$th neuron; $c_i$ denotes the charging time constant or passive decay rate of the $i$th neuron; $a_{ij}$ and $b_{ij}$ denote the connection weight and the delayed connection weight of the $j$th neuron on the $i$th neuron, respectively; $\tau(t)$ corresponds to the transmission delay and satisfies $0 \le \tau(t) \le \tau$ for all $t \ge 0$; and $\sigma_i$ denotes the stochastic perturbation to the $i$th neuron. A representative form of such a system is sketched below.
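
As a sketch of the typical form of such a system, under the notation just described (the exact display of (3) may differ in its details), one may write, for $i = 1, \dots, n$, $t \ge 0$, and $x \in X$,
$$
\begin{aligned}
du_i(t,x) ={}& \Bigl[\, \sum_{k=1}^{m} \frac{\partial}{\partial x_k}\Bigl( D_{ik}\, \frac{\partial u_i(t,x)}{\partial x_k} \Bigr) - c_i(r(t))\, u_i(t,x) + \sum_{j=1}^{n} a_{ij}(r(t))\, f_j\bigl(u_j(t,x)\bigr) \\
&\; + \sum_{j=1}^{n} b_{ij}(r(t))\, g_j\bigl(u_j(t-\tau(t),x)\bigr) \Bigr]\, dt + \sigma_i\bigl(t,\, u_i(t,x),\, u_i(t-\tau(t),x),\, r(t)\bigr)\, dw_i(t),
\end{aligned}
$$
with Dirichlet boundary condition $u_i(t,x) = 0$ on $[-\tau, \infty) \times \partial X$ and initial data $u_i(s,x) = \varphi_i(s,x)$ on $[-\tau, 0] \times X$, where $f_j$ and $g_j$ denote the activation functions.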

The boundary condition of system (3) is given by (4), and the initial value of system (3) is given by (5). Moreover, $C(r(t))$, $A(r(t))$, and $B(r(t))$ are interval connection weight matrices for each value of $r(t)$ in $S$ with the initial value $r(0) = r_0$, and $D(r(t))$ is an interval transmission diffusion operator matrix for each value of $r(t)$ in $S$ with the initial value $r(0) = r_0$.

For convenience, we introduce the following notation:

Definition 1. The stochastic process $u(t, x) = (u_1(t, x), \dots, u_n(t, x))^T$ is called a solution of system (3)–(5) if it satisfies the following conditions: (i) $u(t, x)$ is adapted to $\{\mathcal{F}_t\}_{t \ge 0}$; (ii) for every $t \ge 0$ and $x \in X$, $u(t, x)$ satisfies the boundary condition (4) and the initial condition (5); (iii) for every $t \ge 0$, $u(t, x)$ satisfies the integral form of (3) $P$-a.s.

Definition 2. System (3)–(5) is said to be robustly exponentially stable in the $p$th moment if, for any choice of the coefficient matrices within their respective intervals, the solution of system (3)–(5) satisfies an exponential decay estimate in the $p$th moment of the type sketched below.
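
A typical way this requirement is expressed (a sketch, with hypothetical constants $M \ge 1$ and $\lambda > 0$, and with $\varphi$ denoting the initial data) is
$$
\mathbb{E}\!\int_X \lvert u(t,x;\varphi)\rvert^{p}\,dx
\;\le\; M\, e^{-\lambda t}\, \sup_{-\tau \le s \le 0}\, \mathbb{E}\!\int_X \lvert \varphi(s,x)\rvert^{p}\,dx,
\qquad t \ge 0 .
$$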

To guarantee the existence and uniqueness of the solution to system (3)–(5) (see [30, 31]), we impose the following assumptions:
(H1) For $j = 1, \dots, n$, the neuron activation functions are bounded, vanish at zero, and satisfy sector-type bounds whose lower and upper slopes are constants (not necessarily positive).
(H2) For $i = 1, \dots, n$ and each $r \in S$, there exists a positive constant such that the noise intensity $\sigma_i$ grows at most linearly in its state and delayed-state arguments, and $\sigma_i(t, 0, 0, r) = 0$.
(H3) The time-varying delay function $\tau(t)$ satisfies $0 \le \tau(t) \le \tau$ and $\dot{\tau}(t) \le \mu < 1$ for $t \ge 0$, where $\tau$ and $\mu$ are constants.
A typical concrete formulation of these conditions is sketched below.
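
For concreteness, a typical way such assumptions are written (a sketch, with hypothetical constants $l_j^{-}$, $l_j^{+}$, $k_j^{-}$, $k_j^{+}$, $\rho_i$, $\tau$, $\mu$ rather than the exact quantities of this paper) is
$$
\begin{aligned}
&\text{(H1)}\quad l_j^{-} \le \frac{f_j(u)-f_j(v)}{u-v} \le l_j^{+},
\qquad k_j^{-} \le \frac{g_j(u)-g_j(v)}{u-v} \le k_j^{+}, \qquad u \neq v,\\
&\text{(H2)}\quad \bigl|\sigma_i(t,u,v,r)\bigr|^2 \le \rho_i\,\bigl(u^2 + v^2\bigr),
\qquad \sigma_i(t,0,0,r) = 0,\\
&\text{(H3)}\quad 0 \le \tau(t) \le \tau, \qquad \dot{\tau}(t) \le \mu < 1 .
\end{aligned}
$$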

It is well known that, depending on the choice of the parameters and time-varying delays, a neural network may exhibit phenomena such as instability, divergence, oscillation, and chaos [32, 33].

In order to stabilize the origin of system (3)–(5), we introduce a periodically intermittent controller, referred to as (13), in which $k_i > 0$ is the control gain for $i = 1, \dots, n$, $T > 0$ denotes the control period, and $0 < \delta < T$ is called the control width. A sketch of its typical form is given below.
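
A standard periodically intermittent state-feedback law matching this description (a sketch; the feedback form and the symbols $v_i$, $k_i$, $T$, $\delta$ are assumptions for illustration rather than the exact display of (13)) is
$$
v_i(t,x) =
\begin{cases}
-\,k_i\, u_i(t,x), & t \in [\,\ell T,\; \ell T+\delta\,),\\
0, & t \in [\,\ell T+\delta,\; (\ell+1)T\,),
\end{cases}
\qquad \ell = 0, 1, 2, \dots,\; i = 1, \dots, n .
$$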

Then, system (3) under the periodically intermittent controller (13) is described by the closed-loop equations obtained by adding the control input to the right-hand side of (3).

Lemma 3 (see [10]). Let $m$ be a positive integer, let $l$ be a positive constant, let $X$ be the cube $|x_k| < l$ for $k = 1, \dots, m$, and let $h(x)$ be a real-valued function belonging to $C^1(X)$ which vanishes on the boundary $\partial X$ of $X$; that is, $h(x)|_{\partial X} = 0$. Then the Poincaré-type inequality stated below holds.
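
In the notation of Lemma 3, this inequality takes the standard form (this is the usual statement of such lemmas; the constant may be adjusted to the precise shape of $X$):
$$
\int_X h^2(x)\,dx \;\le\; l^2 \int_X \left| \frac{\partial h(x)}{\partial x_k} \right|^2 dx ,
\qquad k = 1, \dots, m .
$$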

3. Robust Exponential Stabilization

In this section, we design suitable control gains, a control period, and a control width such that system (3)–(5) under the controller (13) achieves robust exponential stability in the $p$th moment. For convenience, we introduce some notation; the quantities appearing in it are nonnegative constants satisfying the stated conditions. In the following, we give a further assumption:

(H4) There exists a positive constant such that the corresponding inequality holds.

We consider an auxiliary function of a scalar variable $\varepsilon$. It is easy to see that its value at $\varepsilon = 0$ is negative. On the other hand, the function is continuous on $[0, \infty)$ and tends to $+\infty$ as $\varepsilon \to \infty$. Then there exists a positive constant $\varepsilon_1$ such that the function vanishes at $\varepsilon_1$ and is negative for $\varepsilon \in (0, \varepsilon_1)$.

Let ; then we have

Similarly, there exists a positive constant $\varepsilon_2$ such that

Let ; we have
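
The existence of the constants $\varepsilon_1$ and $\varepsilon_2$ above follows from a standard intermediate-value argument. As an illustration (with a hypothetical function $F_1$ and hypothetical constants $a_1 > a_2 \ge 0$, which are not the exact quantities of this paper), consider
$$
F_1(\varepsilon) \;=\; \varepsilon \;-\; a_1 \;+\; a_2\, e^{\varepsilon \tau}, \qquad \varepsilon \ge 0 .
$$
Since $F_1(0) = a_2 - a_1 < 0$, $F_1$ is continuous and strictly increasing, and $F_1(\varepsilon) \to +\infty$ as $\varepsilon \to \infty$, there is a unique $\varepsilon_1 > 0$ with $F_1(\varepsilon_1) = 0$ and $F_1(\varepsilon) < 0$ on $(0, \varepsilon_1)$; the constant $\varepsilon_2$ is obtained in the same way from the second auxiliary function.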

We give another assumption:

(H5) , where , .

Theorem 4. Under assumptions (H1)–(H5), the origin of system (3)–(5) under the periodically intermittent controller (13) is robustly exponentially stable in the $p$th moment.

Proof. Let us define the average Lyapunov-Krasovskii functional (see [4]).

By the generalized Itô formula (see [31]), we obtain an expression for the differential of this functional along the trajectories of the controlled system. By Lemma 3.1 in [22], the terms on its right-hand side can be estimated further. By the fundamental inequality, we then obtain (32).

By using the fundamental inequality, we have

Similarly, we have

Further, we also have
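
The fundamental inequalities used in the estimates above are Young-type inequalities. A standard form employed in $p$th moment arguments (stated here for reference; the exact variants used in (33)–(35) may differ) is, for $a, b \ge 0$, $\epsilon > 0$, and $p \ge 2$,
$$
a^{\,p-1}\, b \;\le\; \frac{(p-1)\,\epsilon}{p}\, a^{p} \;+\; \frac{1}{p\,\epsilon^{\,p-1}}\, b^{p} .
$$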

Substituting (33)–(35) into (32), we obtain (36), where

Substituting (36) into (30), we obtain (38). By Lemma 3 and the boundary condition (4), we have

Substituting these into (38), we get

Similarly, for , we can obtain where , .

By the Gronwall inequality, we have
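
For reference, the classical Gronwall inequality states that if a nonnegative function $y$ satisfies, for constants $\alpha, \beta \ge 0$,
$$
y(t) \;\le\; \alpha \;+\; \beta \int_{t_0}^{t} y(s)\, ds, \qquad t \ge t_0,
$$
then $y(t) \le \alpha\, e^{\beta (t - t_0)}$ for all $t \ge t_0$.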

Combining (40) and (42), we summarize the following:
(I) for , from (40), we have
(II) for , from (42), we get
(III) for , from (40), we have
(IV) for , from (42), we have

Repeating the above procedure, we obtain that, for , ,

Moreover, for , . Hence, for any , we always have

By (28) and (49), we have

Finally, note the estimate (51). Under assumption (H5), the assertion of Theorem 4 follows from (50) and (51).

Corollary 5. Under assumptions (H1)–(H3), the origin of system (3)–(5) under the periodically intermittent controller (13) is robustly exponentially stable in the $p$th moment if the following conditions hold: (I) ; (II) , where , .

Proof. In Theorem 4, take the corresponding constants for all indices as indicated; then the conditions above simplify. Under condition (I), select the constant accordingly, and Corollary 5 follows immediately from Theorem 4.

In Theorem 4, we choose for , and ; then