Discrete Dynamics in Nature and Society
Volume 2016 (2016), Article ID 4958217, 19 pages
http://dx.doi.org/10.1155/2016/4958217
Research Article

Attractor and Boundedness of Switched Stochastic Cohen-Grossberg Neural Networks

1School of Mathematics and Statistics, Changsha University of Science and Technology, Changsha, Hunan 410114, China
2Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China

Received 15 February 2016; Accepted 5 April 2016

Academic Editor: Guoqiang Hu

Copyright © 2016 Chuangxia Huang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We address the problem of the stochastic attractor and boundedness of a class of switched stochastic Cohen-Grossberg neural networks (SCGNN) with discrete and infinitely distributed delays. With the help of stochastic analysis techniques, the Lyapunov-Krasovskii functional method, the linear matrix inequality (LMI) technique, and the average dwell time (ADT) approach, some novel sufficient conditions are established for the mean-square uniformly ultimate boundedness, the existence of a stochastic attractor, and the mean-square exponential stability of the switched stochastic Cohen-Grossberg neural networks. Finally, illustrative examples and their simulations are provided to demonstrate the effectiveness of the proposed results.

1. Introduction

In the last few decades, theoretical and applied research on artificial neural networks has attracted worldwide attention. This is largely due to successful hardware implementations and their various applications, such as classification, associative memories, parallel computation, optimization, and signal processing [1, 2]. It is recognized that such applications of neural networks depend heavily on their dynamic behaviors, such as stability properties, periodic oscillatory behavior, boundedness, and attractors (see [3–16] and the references therein).

Since the seminal work by Cohen and Grossberg [17], Cohen-Grossberg neural networks have been intensively studied [2, 18–22]. During hardware implementation, time delays inevitably arise owing to the finite switching speed of the amplifiers and the communication time, so it is important to incorporate delays into neural network models. Generally speaking, there are two kinds of delays: discrete delays and distributed delays [2, 16, 23]. The use of discrete delays in models of delayed feedback provides a good approximation in simple circuits consisting of a small number of cells. When the neural network has a spatial extent due to the presence of a multitude of parallel pathways with a variety of axon sizes and lengths, it is necessary to incorporate continuously distributed delays. Distributed delays include both finite and infinite delays [2, 5, 18, 24].

In real nervous systems, synaptic transmission is a noisy process brought about by random fluctuations in the release of neurotransmitters and other probabilistic causes [19, 25–27]. It is well known that for stochastic neural networks it is rather difficult to analyze dynamic properties because of the introduction of noise. Such studies are nevertheless important for understanding the dynamic characteristics of neuron behavior in stochastic environments. For instance, during the implementation of Kalman filter training, stochastic neural networks characterized by zero-mean white noise have been successfully employed [2].

On the other hand, neural networks are complex, large-scale nonlinear dynamical systems; during hardware implementation, the connection topology of a network may change very quickly, and link failures or new link creations often bring about a switching connection topology [2]. To obtain a deep and clear understanding of the dynamics of such a complex system, one of the usual approaches is to investigate the corresponding switched neural network. As a special class of hybrid systems, switched neural networks are composed of a family of continuous-time or discrete-time subsystems and a rule that orchestrates the switching among the subsystems [28]. In general, the switching rule is a piecewise constant function of the state or of time. The logical rule that orchestrates switching between these subsystems generates switching signals [29]. Recently, switched systems have found numerous applications in the control of mechanical systems, the automotive industry, aircraft and air traffic control, switching power converters, and many other fields [30]. In [28], Huang et al. were the first to investigate the robust stability of switched Hopfield neural networks with time-varying delays under an arbitrary switching rule. The average dwell time approach provides an effective tool for studying the stability of switched systems. Wu et al. used the average dwell time approach to analyze the exponential stability of continuous-time switched delayed neural networks in [31]. In [27], the average dwell time and LMI methods were utilized to discuss the exponential synchronization of switched stochastic competitive neural networks with mixed delays. In addition, [32] focused on the delay-dependent global robust asymptotic stability problem of uncertain switched Hopfield neural networks (USHNNs) with discrete interval and distributed time-varying delays and a time delay in the leakage term. Moreover, parametric uncertainty, which often destroys the stability of a system, is commonly encountered owing to modeling inaccuracies or changes in the environment of the model. To deal with the difficulties brought about by uncertainty, the exponential stability analysis and control of various uncertain systems have received great research attention [33]; in [34], the parametric uncertainty is assumed to be norm-bounded. Unfortunately, up to now, few researchers have considered the mean-square uniformly ultimate boundedness and stochastic attractors of switched SCGNN with discrete delays and infinitely distributed delays.

However, the available literature mainly considers the stability properties of switched neural networks. In fact, besides stability, boundedness and attractors are also foundational concepts for dynamical neural networks, playing important roles in the investigation of the uniqueness of equilibrium points (periodic solutions), global asymptotic stability, global exponential stability, and synchronization [35]. To the best of the authors' knowledge, few researchers have considered the uniformly ultimate boundedness and attractors of switched CGNN with discrete delays and distributed delays.

Inspired by the above discussion, the objectives of this paper are to study the mean-square uniformly ultimate boundedness and the stochastic attractor of switched SCGNN with discrete delays and infinitely distributed delays by employing stochastic analysis techniques, the Lyapunov-Krasovskii functional method, the linear matrix inequality (LMI) technique, and the average dwell time (ADT) approach. In addition, parametric uncertainty is considered and assumed to be norm-bounded.

The mean-square uniformly ultimate boundedness (MSUUB) conditions obtained here are derived in terms of linear matrix inequalities (LMIs), which can be easily solved with the MATLAB LMI Control Toolbox. All of the above-mentioned reasons motivate us to investigate the problems of MSUUB and the stochastic attractor for switched SCGNN in this paper. Numerical examples are provided to demonstrate the feasibility and effectiveness of the proposed criteria.
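To make the computational side concrete, the following minimal sketch checks the feasibility of a Lyapunov-type LMI in Python with CVXPY, as a freely available alternative to the MATLAB LMI Control Toolbox mentioned above. The matrix A and all numerical values are illustrative placeholders, not data from this paper, and the inequality shown is a simple stand-in for the paper's conditions.

# Hedged sketch: feasibility of the Lyapunov LMI  A^T P + P A < 0, P > 0,
# as a stand-in for the paper's MSUUB conditions.  The matrix A below is
# an arbitrary illustrative example.  Requires: pip install cvxpy numpy
import cvxpy as cp
import numpy as np

A = np.array([[-2.0, 0.5],
              [0.3, -1.5]])          # illustrative stable system matrix
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6                           # enforce strict inequalities numerically
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]

prob = cp.Problem(cp.Minimize(0), constraints)   # pure feasibility problem
prob.solve()
print("LMI feasible:", prob.status == cp.OPTIMAL)
if prob.status == cp.OPTIMAL:
    print("P =\n", P.value)

Any solution P returned by the solver certifies feasibility; in the paper's setting, the decision variables would instead be the positive-definite matrices appearing in Theorem 10.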

The rest of this paper is organized as follows. Some preliminaries are given in Section 2. We present some basic definitions and notations, as well as some lemmas needed in later sections. In Section 3, we present some sufficient conditions of MSUUB and stochastic attractor for switched stochastic CGNN. In Section 4, an example is presented to illustrate the effectiveness of the proposed approach. The conclusions are summarized in Section 5.

Notations. The superscript $T$ stands for matrix transposition; $\mathbb{R}^n$ denotes the $n$-dimensional Euclidean space; the notation $P > 0$ means that $P$ is real symmetric and positive-definite; $I$ and $0$ represent the identity matrix and a zero matrix, respectively; $\operatorname{diag}\{\cdots\}$ stands for a block-diagonal matrix; and $\lambda_{\min}(A)$ ($\lambda_{\max}(A)$) denotes the minimum (maximum) eigenvalue of a symmetric matrix $A$. In symmetric block matrices or long matrix expressions, an asterisk $*$ is used to represent a term that is induced by symmetry.

2. Preliminaries and Problem Formulation

Itô's formula plays a key role in the dynamic analysis of stochastic systems. To facilitate understanding, some related results are cited here (see [36] for details). Consider a general stochastic system
$$dx(t) = f(x(t), t)\,dt + g(x(t), t)\,d\omega(t), \quad t \ge t_0, \tag{1}$$
with initial value $x(t_0) = x_0 \in \mathbb{R}^n$, where $\omega(t)$ is an $m$-dimensional Brownian motion defined on the probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t \ge 0}, P)$, $f : \mathbb{R}^n \times \mathbb{R}^+ \to \mathbb{R}^n$, and $g : \mathbb{R}^n \times \mathbb{R}^+ \to \mathbb{R}^{n \times m}$. Let $C^{2,1}(\mathbb{R}^n \times \mathbb{R}^+; \mathbb{R}^+)$ be the family of all nonnegative functions $V(x, t)$ on $\mathbb{R}^n \times \mathbb{R}^+$ which are continuously once differentiable in $t$ and twice differentiable in $x$. For $V \in C^{2,1}(\mathbb{R}^n \times \mathbb{R}^+; \mathbb{R}^+)$, define an operator $\mathcal{L}V$ from $\mathbb{R}^n \times \mathbb{R}^+$ to $\mathbb{R}$ by
$$\mathcal{L}V(x, t) = V_t(x, t) + V_x(x, t)\, f(x, t) + \frac{1}{2}\operatorname{trace}\left[g^T(x, t)\, V_{xx}(x, t)\, g(x, t)\right],$$
where $V_t = \partial V / \partial t$, $V_x = (\partial V / \partial x_1, \ldots, \partial V / \partial x_n)$, and $V_{xx} = \left(\partial^2 V / \partial x_i\, \partial x_j\right)_{n \times n}$.
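As a concrete instance of the operator defined above, consider the quadratic function $V(x, t) = x^T P x$ with $P > 0$, which is the basic building block of the Lyapunov-Krasovskii functionals used in Section 3. Since $V_t = 0$, $V_x = 2x^T P$, and $V_{xx} = 2P$, the operator reduces to
$$\mathcal{L}V(x, t) = 2x^T P\, f(x, t) + \operatorname{trace}\left[g^T(x, t)\, P\, g(x, t)\right].$$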

In practical systems, neural network models are disturbed by environmental noise. Therefore, in this paper, we consider the stochastic Cohen-Grossberg neural network with mixed time delays described by the following stochastic nonlinear integrodifferential equations:
$$\begin{aligned} dx_i(t) = {}& -\alpha_i(x_i(t))\bigg[\beta_i(x_i(t)) - \sum_{j=1}^{n}\left(a_{ij} + \Delta a_{ij}(t)\right) f_j(x_j(t)) - \sum_{j=1}^{n}\left(b_{ij} + \Delta b_{ij}(t)\right) f_j\left(x_j(t - \tau(t))\right) \\ & - \sum_{j=1}^{n}\left(c_{ij} + \Delta c_{ij}(t)\right) \int_{-\infty}^{t} k_j(t - s)\, f_j(x_j(s))\,ds - J_i\bigg]\,dt + \sum_{j=1}^{n} \sigma_{ij}\left(t, x_j(t), x_j(t - \tau(t))\right) d\omega_j(t), \quad i = 1, 2, \ldots, n. \end{aligned} \tag{2}$$
For convenience, system (2) can be rewritten in the following vector form:
$$\begin{aligned} dx(t) = {}& -\alpha(x(t))\bigg[\beta(x(t)) - \left(A + \Delta A(t)\right) f(x(t)) - \left(B + \Delta B(t)\right) f\left(x(t - \tau(t))\right) \\ & - \left(C + \Delta C(t)\right) \int_{-\infty}^{t} K(t - s)\, f(x(s))\,ds - J\bigg]\,dt + \sigma\left(t, x(t), x(t - \tau(t))\right) d\omega(t), \end{aligned} \tag{3}$$
where $A = (a_{ij})_{n \times n}$, $B = (b_{ij})_{n \times n}$, and $C = (c_{ij})_{n \times n}$ are known constant matrices with appropriate dimensions; $x(t) = (x_1(t), \ldots, x_n(t))^T$ is the neural state vector associated with the $n$ neurons at time $t$; $\alpha(x(t)) = \operatorname{diag}\{\alpha_1(x_1(t)), \ldots, \alpha_n(x_n(t))\}$ represents the amplification function; $\beta(x(t)) = (\beta_1(x_1(t)), \ldots, \beta_n(x_n(t)))^T$ denotes the behavior function; $f(x(t)) = (f_1(x_1(t)), \ldots, f_n(x_n(t)))^T$ is the neuron activation function; $K(t - s) = \operatorname{diag}\{k_1(t - s), \ldots, k_n(t - s)\}$; $J = (J_1, \ldots, J_n)^T$ is the constant external input vector; $\sigma(t, x(t), x(t - \tau(t)))$ is the diffusion coefficient matrix; and $\omega(t)$ is an $n$-dimensional Brownian motion satisfying $\mathbb{E}\{d\omega(t)\} = 0$ and $\mathbb{E}\{d\omega^2(t)\} = dt$, defined on a complete probability space $(\Omega, \mathcal{F}, P)$ with a natural filtration $\{\mathcal{F}_t\}_{t \ge 0}$ generated by $\{\omega(s) : 0 \le s \le t\}$, where we associate $\Omega$ with the canonical space generated by $\omega(t)$ and denote by $\mathcal{F}$ the associated $\sigma$-algebra generated by $\{\omega(t)\}$ with the probability measure $P$.

The discrete time-varying delay $\tau(t)$ satisfies
$$0 \le \tau(t) \le \tau, \qquad \dot{\tau}(t) \le \mu < 1,$$
and the delay kernel $k_j(\cdot)$ is a real-valued continuous function defined on $[0, +\infty)$ that satisfies, for each $j = 1, 2, \ldots, n$,
$$\int_{0}^{+\infty} k_j(s)\,ds = 1.$$
Moreover, there exists a scalar $\varepsilon > 0$ such that
$$\int_{0}^{+\infty} k_j(s)\, e^{\varepsilon s}\,ds < +\infty.$$
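For instance, the exponential kernel $k_j(s) = e^{-s}$ satisfies both requirements:
$$\int_{0}^{+\infty} e^{-s}\,ds = 1, \qquad \int_{0}^{+\infty} e^{\varepsilon s}\, e^{-s}\,ds = \frac{1}{1 - \varepsilon} < +\infty \quad \text{for any } 0 < \varepsilon < 1.$$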

As usual, the initial condition associated with system (3) is given in the form
$$x(s) = \phi(s), \quad s \in (-\infty, 0],$$
where the initial value function $\phi$ belongs to $L^2_{\mathcal{F}_0}((-\infty, 0]; \mathbb{R}^n)$, the family of all $\mathcal{F}_0$-measurable $C((-\infty, 0]; \mathbb{R}^n)$-valued random variables satisfying $\sup_{s \le 0} \mathbb{E}\|\phi(s)\|^2 < \infty$, in which $\mathbb{E}\{\cdot\}$ denotes expectation with respect to the probability measure $P$ and $C((-\infty, 0]; \mathbb{R}^n)$ denotes the family of all continuous $\mathbb{R}^n$-valued functions on $(-\infty, 0]$.

We can describe the switched stochastic Cohen-Grossberg neural network as follows:
$$\begin{aligned} dx(t) = {}& -\alpha(x(t))\bigg[\beta(x(t)) - \left(A_{\rho(t)} + \Delta A_{\rho(t)}(t)\right) f(x(t)) - \left(B_{\rho(t)} + \Delta B_{\rho(t)}(t)\right) f\left(x(t - \tau(t))\right) \\ & - \left(C_{\rho(t)} + \Delta C_{\rho(t)}(t)\right) \int_{-\infty}^{t} K(t - s)\, f(x(s))\,ds - J\bigg]\,dt + \sigma_{\rho(t)}\left(t, x(t), x(t - \tau(t))\right) d\omega(t), \end{aligned}$$
where the function $\rho(t) : [0, +\infty) \to \{1, 2, \ldots, N\}$ is the switching signal, which is deterministic, piecewise constant, and right continuous.
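To illustrate how trajectories of such a switched system can be generated numerically, the following Python sketch applies the Euler-Maruyama scheme to a two-subsystem network with a single discrete delay. For brevity, the distributed-delay term and the parametric uncertainties are dropped, and every parameter value is an illustrative placeholder chosen to be consistent with the standing assumptions (H1)-(H4) stated below; none of it is data from this paper.

# Hedged sketch: Euler-Maruyama simulation of a switched stochastic
# Cohen-Grossberg network with one discrete delay (distributed-delay
# term omitted for brevity).  All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, dt, T = 2, 1e-3, 10.0
tau = 0.5                                    # discrete delay
steps, delay_steps = int(T / dt), int(tau / dt)

# Two subsystems: connection matrices A_k, B_k (illustrative values)
A = [np.array([[0.2, -0.1], [0.1, 0.3]]),
     np.array([[0.1, 0.2], [-0.2, 0.1]])]
B = [np.array([[-0.1, 0.1], [0.2, -0.1]]),
     np.array([[0.2, -0.1], [0.1, 0.1]])]
J = np.array([0.1, -0.1])                    # external input

alpha = lambda x: 1.0 + 0.5 / (1.0 + x**2)   # bounded amplification (H2)
beta = lambda x: 2.0 * x                     # behavior function (H3)
f = np.tanh                                  # activation satisfying (H1)
sigma_diff = lambda x, xd: 0.1 * (x + xd)    # diffusion, linear growth (H4)

dwell = 1.0                                  # dwell time of the signal
switch = lambda t: int(t // dwell) % 2       # periodic switching signal

x = np.zeros((steps + 1, n))
x[:delay_steps + 1] = 0.5                    # constant initial history
for i in range(delay_steps, steps):
    k = switch(i * dt)
    xd = x[i - delay_steps]                  # delayed state x(t - tau)
    drift = -alpha(x[i]) * (beta(x[i]) - A[k] @ f(x[i])
                            - B[k] @ f(xd) - J)
    dW = rng.normal(0.0, np.sqrt(dt), n)     # Brownian increments
    x[i + 1] = x[i] + drift * dt + sigma_diff(x[i], xd) * dW

print("final state:", x[-1])                 # trajectory remains bounded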

To continue our discussion, we give the following basic assumptions.

(H1) There exist constants $l_j^-$ and $l_j^+$, $j = 1, 2, \ldots, n$, such that
$$l_j^- \le \frac{f_j(u) - f_j(v)}{u - v} \le l_j^+, \quad \forall u, v \in \mathbb{R},\ u \ne v.$$

(H2) There exist positive constants $\underline{\alpha}_i$ and $\overline{\alpha}_i$, for all $i = 1, 2, \ldots, n$, such that
$$\underline{\alpha}_i \le \alpha_i(u) \le \overline{\alpha}_i, \quad \forall u \in \mathbb{R}.$$

(H3) There exist positive constants $\gamma_i$, $i = 1, 2, \ldots, n$, such that
$$\frac{\beta_i(u) - \beta_i(v)}{u - v} \ge \gamma_i, \quad \forall u, v \in \mathbb{R},\ u \ne v.$$

(H4) We assume that the stochastic term $\sigma(t, x(t), x(t - \tau(t)))$ is locally Lipschitz continuous and satisfies the linear growth condition as well. Moreover, $\sigma$ satisfies the following condition:
$$\operatorname{trace}\left[\sigma^T(t, x, y)\, \sigma(t, x, y)\right] \le \left\|\Sigma_1 x\right\|^2 + \left\|\Sigma_2 y\right\|^2,$$
where $\Sigma_k$ ($k = 1, 2$) are known constant matrices with appropriate dimensions.

(H5) $\Delta A(t)$, $\Delta B(t)$, and $\Delta C(t)$ are unknown matrices that represent the time-varying parameter uncertainties and are assumed to be of the following form:
$$\Delta A(t) = M_1 F_1(t) N_1, \qquad \Delta B(t) = M_2 F_2(t) N_2, \qquad \Delta C(t) = M_3 F_3(t) N_3, \tag{13}$$
where $M_1$, $M_2$, $M_3$, $N_1$, $N_2$, and $N_3$ are known real constant matrices with appropriate dimensions. $F_1(t)$, $F_2(t)$, and $F_3(t)$ may be time-varying matrices with Lebesgue measurable elements bounded by
$$F_k^T(t)\, F_k(t) \le I, \quad k = 1, 2, 3. \tag{14}$$

Remark 1. The constants $l_j^-$ and $l_j^+$ can be positive, negative, or zero. Therefore, the activation functions considered here are more general than the usual forms $|f_j(u)| \le K_j$, $|f_j(u) - f_j(v)| \le l_j |u - v|$, and $0 \le \frac{f_j(u) - f_j(v)}{u - v} \le l_j$.
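For instance, $f_j(u) = \tanh(u)$ satisfies (H1) with $l_j^- = 0$ and $l_j^+ = 1$, while the non-monotonic choice $f_j(u) = \frac{1}{2}\left(|u + 1| - |u - 1|\right) - \frac{u}{4}$ satisfies (H1) with the negative lower bound $l_j^- = -\frac{1}{4}$ and $l_j^+ = \frac{3}{4}$.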

Remark 2. It is worth mentioning that the structure of the parametric uncertainties of the form (13) and (14) is more general than those in the previous literature [28, 34, 37], where only more restrictive uncertainty structures were discussed. Recently, in [35], the attractor and boundedness of stochastic Cohen-Grossberg neural networks without parametric uncertainties were investigated.

Definition 3 (see [38]). System (3) is said to be mean-square uniformly ultimately bounded if there exists a constant $B > 0$ such that, for any constant $\delta > 0$, there is $T = T(\delta) > 0$ such that, for all $t \ge t_0 + T$ and $\|\phi\| \le \delta$, the solution $x(t; t_0, \phi)$ of system (3) satisfies
$$\mathbb{E}\left\|x(t; t_0, \phi)\right\|^2 \le B.$$
In this case, the set $\mathcal{B} = \{x : \mathbb{E}\|x\|^2 \le B\}$ is called the attractor of the solutions of system (3) in the mean-square sense. Clearly, the statement above is equivalent to
$$\limsup_{t \to \infty} \mathbb{E}\left\|x(t; t_0, \phi)\right\|^2 \le B.$$

Definition 4 (see [39]). For the switching signal $\rho(t)$, construct a switching sequence $\{(i_0, t_0), (i_1, t_1), \ldots, (i_k, t_k), \ldots\}$, $i_k \in \{1, 2, \ldots, N\}$, $k = 0, 1, \ldots$, where $t_0$ is the initial time and $t_k$ denotes the $k$th switching instant. Moreover, $\rho(t) = i_k$ for $t \in [t_k, t_{k+1})$ means that the $i_k$th subsystem is activated, and $N$ denotes the number of the subsystems. For each $t > T \ge 0$, let $N_\rho(T, t)$ denote the number of discontinuities of $\rho(t)$ in the interval $(T, t)$. If there exist $N_0 \ge 0$ and $T_a > 0$ such that
$$N_\rho(T, t) \le N_0 + \frac{t - T}{T_a}$$
holds, then $T_a$ is called the average dwell time and $N_0$ is the chatter bound.
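To illustrate Definition 4 computationally, the following minimal Python sketch checks whether a recorded sequence of switching instants respects the average dwell time bound $N_\rho(T, t) \le N_0 + (t - T)/T_a$. The switching instants and the parameters $T_a$ and $N_0$ are illustrative values, not data from this paper.

# Hedged sketch: verify the average-dwell-time bound
# N(T, t) <= N_0 + (t - T) / T_a for a recorded switching signal.
# The switching instants below are illustrative, not from the paper.
import itertools

def satisfies_adt(switch_times, T_a, N_0):
    """Check the ADT bound over every interval of switching instants."""
    for i, j in itertools.combinations(range(len(switch_times) + 1), 2):
        # interval endpoints; index 0 is treated as the initial time 0
        lo = 0.0 if i == 0 else switch_times[i - 1]
        hi = switch_times[j - 1]
        n_switches = j - i        # discontinuities inside (lo, hi]
        if n_switches > N_0 + (hi - lo) / T_a:
            return False
    return True

times = [1.0, 2.2, 3.1, 4.5, 6.0]             # illustrative instants
print(satisfies_adt(times, T_a=0.8, N_0=1))   # True for this sequence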

Remark 5. It should be pointed out that, for the chatter bound $N_0$, in our work we take $N_0 = 0$, which is preferable to the choices previously reported in [31, 37]. The case in which a common Lyapunov function exists for all subsystems implies that the switching signal can be arbitrary. Hence, the results reported in this paper are more general than those established for arbitrary switching signals in the previous literature [28, 37].

To obtain the main results of this paper, we introduce the following lemmas.

Lemma 6 (see [40]). For any positive-definite constant matrix $M \in \mathbb{R}^{n \times n}$, scalar $r > 0$, and vector function $\omega : [0, r] \to \mathbb{R}^n$ such that the integrations concerned are well defined, the following inequalities hold:
(1) Jensen's inequality:
$$\left(\int_0^r \omega(s)\,ds\right)^T M \left(\int_0^r \omega(s)\,ds\right) \le r \int_0^r \omega^T(s)\, M\, \omega(s)\,ds;$$
(2) for a scalar kernel $k : [0, +\infty) \to [0, +\infty)$ with convergent integrals,
$$\left(\int_0^{+\infty} k(s)\,\omega(s)\,ds\right)^T M \left(\int_0^{+\infty} k(s)\,\omega(s)\,ds\right) \le \int_0^{+\infty} k(s)\,ds \int_0^{+\infty} k(s)\, \omega^T(s)\, M\, \omega(s)\,ds.$$
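In the scalar case ($n = 1$, $M = 1$), part (1) reduces to a direct consequence of the Cauchy-Schwarz inequality:
$$\left(\int_0^r \omega(s)\,ds\right)^2 = \left(\int_0^r 1 \cdot \omega(s)\,ds\right)^2 \le \left(\int_0^r 1^2\,ds\right)\left(\int_0^r \omega^2(s)\,ds\right) = r \int_0^r \omega^2(s)\,ds.$$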

Lemma 7 (see [41]). Let $D$, $S$, and $F$ be real matrices of appropriate dimensions such that $F^T F \le I$. Then, for any scalar $\varepsilon > 0$ and vectors $x$ and $y$ with appropriate dimensions, the following inequality is true:
$$2x^T D F S y \le \varepsilon^{-1} x^T D D^T x + \varepsilon\, y^T S^T S y.$$

Lemma 8 (see [41]). For any real matrices $X$ and $Y$ and one positive-definite matrix $P > 0$, the following matrix inequality holds:
$$X^T Y + Y^T X \le X^T P X + Y^T P^{-1} Y.$$

Lemma 9 (see [42], Schur's complement). The LMI
$$\begin{pmatrix} Q(x) & S(x) \\ S^T(x) & R(x) \end{pmatrix} > 0,$$
with $Q(x) = Q^T(x)$ and $R(x) = R^T(x)$, is equivalent to
$$R(x) > 0, \qquad Q(x) - S(x)\, R^{-1}(x)\, S^T(x) > 0.$$
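As a quick numerical sanity check of the equivalence in Lemma 9, the following Python sketch compares the positive definiteness of a randomly generated symmetric block matrix with that of its Schur complement; all matrices are illustrative random examples.

# Hedged sketch: numerically spot-check the Schur-complement equivalence
# for a random symmetric block matrix [[Q, S], [S^T, R]].
import numpy as np

rng = np.random.default_rng(1)
n = 3
Q = rng.normal(size=(n, n)); Q = Q @ Q.T + 2 * np.eye(n)   # symmetric PD
S = rng.normal(size=(n, n)) * 0.1
R = rng.normal(size=(n, n)); R = R @ R.T + 2 * np.eye(n)   # symmetric PD

block = np.block([[Q, S], [S.T, R]])
is_pd = lambda A: np.all(np.linalg.eigvalsh(A) > 0)

lhs = is_pd(block)                                         # the LMI > 0
rhs = is_pd(R) and is_pd(Q - S @ np.linalg.inv(R) @ S.T)   # Schur test
print(lhs, rhs, lhs == rhs)                                # True True True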

3. Main Results

Theorem 10. For given constants $\tau$, $\mu$, and $\varepsilon$, if there exist positive scalars and positive-definite matrices such that the following LMI conditions hold, then system (3) is mean-square uniformly ultimately bounded.

Proof. Choose a Lyapunov-Krasovskii functional $V(x(t), t)$ composed of a quadratic term and several delay-integral terms. By applying the Itô differential formula, we compute the stochastic derivative of $V$ along the trajectory of system (3), and with the infinitesimal operator we deduce an expression for $\mathcal{L}V$. Denote $\underline{\alpha} = \min_{1 \le i \le n} \underline{\alpha}_i$ and $\overline{\alpha} = \max_{1 \le i \le n} \overline{\alpha}_i$; then, according to assumptions (H2) and (H3), we obtain bounds on the amplification and behavior terms. By assumption (H4), the trace of the diffusion term can be bounded as well. Combining inequalities (29) and (30), we derive an estimate for the first component of the functional.
Similarly, calculating the operator $\mathcal{L}$ along the trajectory of system (3) for the remaining components, one can get analogous estimates. By Lemma 6(1), it follows that the finite-interval integral terms can be bounded, and similarly one can derive the corresponding bounds for the other components. Using Lemma 6(2), we bound the distributed-delay terms, and similar estimates follow for the remaining ones. Then, noting the kernel condition and the Cauchy-Schwarz inequality, a further bound is obtained. Based on assumption (H1), it is easy to see that the activation terms satisfy sector-type inequalities. From Lemma 8, the required matrix inequality holds true. Combining (27)–(43), we can derive the desired estimate.