Abstract

We address the problem of the stochastic attractor and boundedness of a class of switched Cohen-Grossberg neural networks (CGNN) with discrete and infinitely distributed delays. With the help of stochastic analysis techniques, the Lyapunov-Krasovskii functional method, the linear matrix inequality (LMI) technique, and the average dwell time (ADT) approach, some novel sufficient conditions are established for the mean-square uniformly ultimate boundedness, the existence of a stochastic attractor, and the mean-square exponential stability of the switched Cohen-Grossberg neural networks. Finally, illustrative examples and their simulations are provided to demonstrate the effectiveness of the proposed results.

1. Introduction

In the last few decades, theoretical and applied research on artificial neural networks has attracted worldwide attention. This is largely due to successful hardware implementations and their various applications, such as classification, associative memories, parallel computation, optimization, and signal processing [1, 2]. It is recognized that such applications of neural networks depend heavily on their dynamic behaviors, such as stability properties, periodic oscillatory behavior, attractors, and boundedness (see [3–16] and the references therein).

Since the seminal work by Cohen and Grossberg [17], Cohen-Grossberg neural networks have been intensively studied [2, 18–22]. In hardware implementations, time delays inevitably arise from the finite switching speed of the amplifiers and from communication time, so it is important to incorporate delays into neural network models. Generally speaking, there are two kinds of delays: discrete delays and distributed delays [2, 16, 23]. The use of discrete delays in models of delayed feedback provides a good approximation in simple circuits consisting of a small number of cells. When a neural network has a spatial extent due to the presence of a multitude of parallel pathways with a variety of axon sizes and lengths, it is necessary to incorporate continuously distributed delays. Distributed delays may be finite or infinite [2, 5, 18, 24].

In real nervous systems, synaptic transmission is a noisy process brought about by random fluctuations in the release of neurotransmitters and other probabilistic causes [19, 25–27]. It is well known that the dynamic properties of stochastic neural networks are rather difficult to analyze due to the introduction of noise. Such studies are nevertheless important for understanding the dynamic characteristics of neuron behavior in stochastic environments. For instance, in the implementation of Kalman filter training, stochastic neural networks characterized by zero-mean white noise have been successfully employed [2].

On the other hand, neural networks are complex, large-scale nonlinear dynamical systems; during hardware implementation, the connection topology of a network may change very quickly, and link failures or the creation of new links often cause the connection topology to switch [2]. To obtain a deep and clear understanding of the dynamics of such complex systems, one common approach is to investigate switched neural networks. As a special class of hybrid systems, switched neural networks are composed of a family of continuous-time or discrete-time subsystems and a rule that orchestrates the switching among them [28]. In general, the switching rule is a piecewise constant function dependent on the state or time, and the logical rule that orchestrates switching between the subsystems generates switching signals [29]. Recently, switched systems have found numerous applications in the control of mechanical systems, the automotive industry, aircraft and air traffic control, switching power converters, and many other fields [30]. In [28], Huang et al. were the first to investigate the robust stability of switched Hopfield neural networks with time-varying delays under an arbitrary switching rule. The average dwell time approach provides an effective tool for studying the stability of switched systems. Wu et al. used the average dwell time approach to analyze the exponential stability of continuous-time switched delayed neural networks in [31]. In [27], the average dwell time and LMI methods were utilized to discuss the exponential synchronization of switched stochastic competitive neural networks with mixed delays. In addition, [32] focused on the delay-dependent global robust asymptotic stability problem of uncertain switched Hopfield neural networks (USHNNs) with discrete interval and distributed time-varying delays as well as time delay in the leakage term. Moreover, parametric uncertainty, which often destroys the stability of systems, is commonly encountered due to modeling inaccuracies or changes in the environment of the model. To deal with the difficulties brought about by uncertainty, exponential stability analysis and control of various uncertain systems have received great research attention [33]; the parametric uncertainty is assumed to be norm-bounded in [34]. Unfortunately, up to now, few researchers have considered the mean-square uniformly ultimate boundedness and stochastic attractors of switched stochastic Cohen-Grossberg neural networks (SCGNN) with discrete delays and infinitely distributed delays.

However, the available literature mainly considers the stability properties of switched neural networks. In fact, besides stability, boundedness and attractors are also foundational concepts of dynamical neural networks, which play important roles in the investigation of the uniqueness of equilibrium points (periodic solutions), global asymptotic stability, global exponential stability, and synchronization [35]. To the best of the authors' knowledge, few researchers have considered the uniformly ultimate boundedness and attractors of switched CGNN with discrete delays and distributed delays.

Inspired by the above discussions, the objective of this paper is to study the mean-square uniformly ultimate boundedness and stochastic attractors of switched SCGNN with discrete delays and infinitely distributed delays by employing stochastic analysis techniques, the Lyapunov-Krasovskii functional method, the linear matrix inequality (LMI) technique, and the average dwell time (ADT) approach. In addition, parametric uncertainty is considered and assumed to be norm-bounded.

The mean-square uniformly ultimate boundedness (MSUUB) conditions obtained here are expressed in terms of linear matrix inequalities (LMIs), which can be solved efficiently by the MATLAB LMI control toolbox. All of the above reasons motivate us to investigate the problems of MSUUB and stochastic attractors for switched SCGNN in this paper. Numerical examples are provided to demonstrate the feasibility and effectiveness of the proposed criteria.
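As a rough illustration of this workflow in open-source tools, the following Python/CVXPY sketch checks feasibility of a simple Lyapunov-type LMI, $P \succ 0$ and $A^T P + P A \prec 0$. The matrix `A` is arbitrary test data, and this stand-in LMI is far simpler than the actual conditions derived in Section 3; the sketch only shows the mechanics of posing and solving an LMI feasibility problem.

```python
# Minimal LMI feasibility sketch with CVXPY (an open-source stand-in for the
# MATLAB LMI toolbox workflow; the actual LMIs of Section 3 are much larger).
import cvxpy as cp
import numpy as np

A = np.array([[-2.0, 0.5],
              [0.3, -1.5]])          # a Hurwitz test matrix (hypothetical)

n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)

eps = 1e-6
constraints = [P >> eps * np.eye(n),                  # P positive definite
               A.T @ P + P @ A << -eps * np.eye(n)]   # Lyapunov LMI

prob = cp.Problem(cp.Minimize(0), constraints)        # pure feasibility
prob.solve(solver=cp.SCS)

print("status:", prob.status)        # 'optimal' means the LMI is feasible
print("P =\n", P.value)
```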

The rest of this paper is organized as follows. In Section 2, we give some preliminaries: basic definitions and notation, as well as the lemmas needed in later sections. In Section 3, we present sufficient conditions for MSUUB and the existence of a stochastic attractor for switched stochastic CGNN. In Section 4, an example is presented to illustrate the effectiveness of the proposed approach. The conclusions are summarized in Section 5.

Notations. The superscript "$T$" stands for matrix transposition; $\mathbb{R}^n$ denotes the $n$-dimensional Euclidean space; the notation $P > 0$ means that $P$ is real symmetric and positive definite; $I$ and $0$ represent the identity matrix and a zero matrix, respectively; $\mathrm{diag}\{\cdots\}$ stands for a block-diagonal matrix; and $\lambda_{\min}(Q)$ ($\lambda_{\max}(Q)$) denotes the minimum (maximum) eigenvalue of the symmetric matrix $Q$. In symmetric block matrices or long matrix expressions, a "$*$" is used to represent a term that is induced by symmetry.

2. Preliminaries and Problem Formulation

Itô's formula plays a key role in the dynamic analysis of stochastic systems. To facilitate understanding, some related results are cited here (see [36] for details). Consider a general stochastic system
$$dx(t) = f(x(t), t)\,dt + g(x(t), t)\,d\omega(t) \tag{1}$$
on $t \ge t_0$ with initial value $x(t_0) = x_0 \in \mathbb{R}^n$, where $\omega(t)$ is $m$-dimensional Brownian motion defined on a complete probability space, $f: \mathbb{R}^n \times \mathbb{R}_+ \to \mathbb{R}^n$, and $g: \mathbb{R}^n \times \mathbb{R}_+ \to \mathbb{R}^{n \times m}$. Let $C^{2,1}(\mathbb{R}^n \times \mathbb{R}_+; \mathbb{R}_+)$ be the family of all nonnegative functions $V(x, t)$ that are continuously differentiable once in $t$ and twice in $x$. For $V \in C^{2,1}(\mathbb{R}^n \times \mathbb{R}_+; \mathbb{R}_+)$, define an operator $\mathcal{L}V$ from $\mathbb{R}^n \times \mathbb{R}_+$ to $\mathbb{R}$ by
$$\mathcal{L}V(x, t) = V_t(x, t) + V_x(x, t) f(x, t) + \frac{1}{2}\,\mathrm{trace}\bigl[g^T(x, t) V_{xx}(x, t) g(x, t)\bigr],$$
where $V_t = \partial V / \partial t$, $V_x = (\partial V / \partial x_1, \ldots, \partial V / \partial x_n)$, and $V_{xx} = (\partial^2 V / \partial x_i \partial x_j)_{n \times n}$.
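As a simple worked illustration of the operator (our own example, not taken from [36]), apply $\mathcal{L}$ to the quadratic function $V(x) = x^T P x$ with a constant matrix $P = P^T > 0$. Since $V_t = 0$, $V_x = 2 x^T P$, and $V_{xx} = 2P$, one obtains
$$\mathcal{L}V(x, t) = 2 x^T P f(x, t) + \mathrm{trace}\bigl[g^T(x, t)\, P\, g(x, t)\bigr].$$
Bounds of exactly this type drive the estimates in Section 3.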

In practical systems, neural network models are disturbed by environmental noise. Therefore, in this paper we consider the stochastic Cohen-Grossberg neural networks with mixed time delays described by the following stochastic nonlinear integrodifferential equations:
$$\begin{aligned} dx_i(t) = {}& -\alpha_i(x_i(t))\Bigl[\beta_i(x_i(t)) - \sum_{j=1}^{n} a_{ij} f_j(x_j(t)) - \sum_{j=1}^{n} b_{ij} f_j(x_j(t - \tau(t))) \\ & - \sum_{j=1}^{n} c_{ij} \int_{-\infty}^{t} k_j(t - s) f_j(x_j(s))\,ds - J_i\Bigr]\,dt + \sum_{j=1}^{m} g_{ij}\bigl(t, x(t), x(t - \tau(t))\bigr)\,d\omega_j(t). \end{aligned} \tag{2}$$
For convenience, system (2) can be rewritten in the following vector form:
$$dx(t) = -\alpha(x(t))\Bigl[\beta(x(t)) - A f(x(t)) - B f(x(t - \tau(t))) - C \int_{-\infty}^{t} K(t - s) f(x(s))\,ds - J\Bigr]\,dt + g\bigl(t, x(t), x(t - \tau(t))\bigr)\,d\omega(t), \tag{3}$$
where $A = (a_{ij})_{n \times n}$, $B = (b_{ij})_{n \times n}$, $C = (c_{ij})_{n \times n}$ are known constant matrices with appropriate dimensions, $x(t) = (x_1(t), \ldots, x_n(t))^T$ is the neural state vector associated with the $n$ neurons at time $t$, $\alpha(x(t)) = \mathrm{diag}\{\alpha_1(x_1(t)), \ldots, \alpha_n(x_n(t))\}$ represents the amplification function, $\beta(x(t)) = (\beta_1(x_1(t)), \ldots, \beta_n(x_n(t)))^T$ denotes the behavior function, $f(x(t)) = (f_1(x_1(t)), \ldots, f_n(x_n(t)))^T$ is the vector of neuron activation functions, $K(\cdot) = \mathrm{diag}\{k_1(\cdot), \ldots, k_n(\cdot)\}$ is the delay kernel matrix, $J$ is the constant external input vector, $g(\cdot)$ is the diffusion coefficient matrix, and $\omega(t)$ is an $m$-dimensional Brownian motion satisfying $\mathbb{E}\{d\omega(t)\} = 0$ and $\mathbb{E}\{d\omega(t)^2\} = dt$, defined on a complete probability space $(\Omega, \mathcal{F}, P)$ with a natural filtration $\{\mathcal{F}_t\}_{t \ge 0}$ generated by $\{\omega(s): 0 \le s \le t\}$, where we associate $\Omega$ with the canonical space generated by $\omega(t)$ and denote by $\mathcal{F}$ the associated $\sigma$-algebra generated by $\omega(t)$ with the probability measure $P$.

The discrete time-varying delay satisfies $0 \le \tau_1 \le \tau(t) \le \tau_2$ and $\dot{\tau}(t) \le \mu$, and each delay kernel $k_j$ is a real-valued continuous function defined on $[0, +\infty)$ that satisfies, for each $j = 1, 2, \ldots, n$, $\int_0^{+\infty} k_j(s)\,ds = 1$. Moreover, there exists a scalar $\varepsilon > 0$ such that $\int_0^{+\infty} e^{\varepsilon s} k_j(s)\,ds < +\infty$, so that the distributed-delay term involving the kernel matrix $K(\cdot)$ in (3) is well defined.

As usual, the initial condition associated with system (3) is given in the form $x(s) = \phi(s)$, $s \in (-\infty, 0]$, where the initial value function $\phi$ belongs to $L^2_{\mathcal{F}_0}((-\infty, 0]; \mathbb{R}^n)$, the family of all $\mathcal{F}_0$-measurable $C((-\infty, 0]; \mathbb{R}^n)$-valued random variables satisfying $\sup_{s \le 0} \mathbb{E}\|\phi(s)\|^2 < \infty$, in which $\mathbb{E}\{\cdot\}$ denotes expectation with respect to the probability measure $P$ and $C((-\infty, 0]; \mathbb{R}^n)$ denotes the family of all continuous $\mathbb{R}^n$-valued functions on $(-\infty, 0]$.

We can describe the switched stochastic Cohen-Grossberg neural networks as follows:
$$dx(t) = -\alpha(x(t))\Bigl[\beta(x(t)) - A_{\sigma(t)} f(x(t)) - B_{\sigma(t)} f(x(t - \tau(t))) - C_{\sigma(t)} \int_{-\infty}^{t} K(t - s) f(x(s))\,ds - J\Bigr]\,dt + g\bigl(t, x(t), x(t - \tau(t))\bigr)\,d\omega(t), \tag{8}$$
where the function $\sigma(t): [0, +\infty) \to \{1, 2, \ldots, N\}$ is the switching signal, which is deterministic, piecewise constant, and right continuous.
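To make the switched model concrete, the following Python sketch simulates a hypothetical two-neuron instance of such a system by the Euler-Maruyama method. Every parameter value, the switching signal, and the omission of the distributed-delay term are simplifying assumptions for illustration only; this is not the example system of Section 4.

```python
# Euler-Maruyama simulation of a hypothetical two-neuron switched stochastic
# CGNN. All parameters below are illustrative placeholders, not the paper's
# data; the infinitely distributed delay term is dropped for brevity.
import numpy as np

rng = np.random.default_rng(0)

dt, T = 1e-3, 10.0
steps = int(T / dt)
tau = 0.5                                   # discrete delay (placeholder)
d = int(tau / dt)                           # delay measured in time steps

# Connection weight matrices of the two subsystems (placeholders).
A = [np.array([[0.2, -0.1], [0.1, 0.3]]),
     np.array([[0.1, 0.2], [-0.2, 0.1]])]
B = [np.array([[0.1, 0.0], [0.0, 0.1]]),
     np.array([[0.05, 0.1], [0.1, 0.05]])]
J = np.array([0.1, -0.1])                   # external input (placeholder)

alpha = lambda x: 1.0 + 0.5 / (1.0 + x**2)  # bounded amplification function
beta = lambda x: 1.5 * x                    # behavior function
f = np.tanh                                 # activation function
g = lambda x, xd: 0.1 * x + 0.1 * xd        # diffusion term (linear growth)

def sigma(t):
    """Piecewise-constant switching signal alternating every 2.0 time units."""
    return int(t // 2.0) % 2

x = np.zeros((steps + 1, 2))
x[: d + 1] = np.array([0.5, -0.5])          # constant initial history

for k in range(d, steps):
    i = sigma(k * dt)
    xk, xd = x[k], x[k - d]
    drift = -alpha(xk) * (beta(xk) - A[i] @ f(xk) - B[i] @ f(xd) - J)
    dW = rng.normal(scale=np.sqrt(dt), size=2)
    x[k + 1] = xk + drift * dt + g(xk, xd) * dW

print("state at t = T:", x[-1])             # trajectories remain bounded here
```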

To continue our discussion, we give the following basic assumptions.

(H1) There exist constants $l_j^-$ and $l_j^+$, $j = 1, 2, \ldots, n$, such that, for all $u, v \in \mathbb{R}$ with $u \ne v$,
$$l_j^- \le \frac{f_j(u) - f_j(v)}{u - v} \le l_j^+.$$

(H2) There exist positive constants $\underline{\alpha}_i$, $\bar{\alpha}_i$, for all $i = 1, 2, \ldots, n$, such that $\underline{\alpha}_i \le \alpha_i(u) \le \bar{\alpha}_i$ for all $u \in \mathbb{R}$.

(H3) There exist positive constants $\gamma_i$ such that
$$\frac{\beta_i(u) - \beta_i(v)}{u - v} \ge \gamma_i \quad \text{for all } u \ne v.$$

(H4) We assume that the stochastic term $g(t, x, y)$ is locally Lipschitz continuous and satisfies the linear growth condition. Moreover, $g$ satisfies the following condition:
$$\mathrm{trace}\bigl[g^T(t, x, y)\, g(t, x, y)\bigr] \le \|\Sigma_1 x\|^2 + \|\Sigma_2 y\|^2,$$
where $\Sigma_i$ ($i = 1, 2$) are known constant matrices with appropriate dimensions.

(H5) The $\Delta A(t)$, $\Delta B(t)$, and $\Delta C(t)$ are unknown matrices that represent the time-varying parameter uncertainties and are assumed to be of the following form:
$$\Delta A(t) = H_1 F_1(t) E_1, \quad \Delta B(t) = H_2 F_2(t) E_2, \quad \Delta C(t) = H_3 F_3(t) E_3, \tag{13}$$
where $H_1$, $H_2$, $H_3$, $E_1$, $E_2$, and $E_3$ are known real constant matrices with appropriate dimensions. $F_1(t)$, $F_2(t)$, and $F_3(t)$ may be time-varying matrices with Lebesgue measurable elements bounded by
$$F_i^T(t) F_i(t) \le I, \quad i = 1, 2, 3. \tag{14}$$

Remark 1. The constants $l_j^-$ and $l_j^+$ can be positive, negative, or zero. Therefore, the activation functions here are more general than the usual forms, such as $|f_j(u) - f_j(v)| \le l_j |u - v|$ with $l_j > 0$, $j = 1, 2, \ldots, n$, or $0 \le \frac{f_j(u) - f_j(v)}{u - v} \le l_j$.

Remark 2. It is worth mentioning that the structures of the parametric uncertainties of the forms (13) and (14) are more general than those in the previous literature [28, 34, 37], where only special cases of such structures were discussed. Recently, in [35], the attractor and boundedness of stochastic Cohen-Grossberg neural networks without parametric uncertainties were investigated.

Definition 3 (see [38]). System (3) is said to be mean-square uniformly ultimately bounded if there exists a constant $B > 0$ such that, for any constant $\delta > 0$, there is $T = T(\delta) > 0$ such that, for all $t \ge T$ and all initial values $\phi$ with $\sup_{s \le 0} \mathbb{E}\|\phi(s)\|^2 \le \delta$, the solution $x(t)$ of system (3) satisfies
$$\mathbb{E}\|x(t)\|^2 \le B.$$
In this case, the set $\mathcal{B} = \{x \in \mathbb{R}^n : \mathbb{E}\|x\|^2 \le B\}$ is called the attractor of the solutions of system (3) in the mean-square sense. Clearly, the statement above is equivalent to
$$\limsup_{t \to +\infty} \mathbb{E}\|x(t)\|^2 \le B.$$

Definition 4 (see [39]). For the switching signal $\sigma(t)$, construct a switching sequence $\{(i_0, t_0), (i_1, t_1), \ldots, (i_k, t_k), \ldots\}$, $i_k \in \{1, 2, \ldots, N\}$, $k = 0, 1, \ldots$, where $t_0$ is the initial time and $t_k$ denotes the $k$th switching instant. Moreover, $\sigma(t) = i_k$ for $t \in [t_k, t_{k+1})$ means that the $i_k$th subsystem is activated, and $N$ denotes the number of the subsystems. For each $t > T \ge 0$, let $N_\sigma(T, t)$ denote the number of discontinuities of $\sigma(t)$ in the interval $(T, t)$. If there exist $N_0 \ge 0$ and $T_a > 0$ such that
$$N_\sigma(T, t) \le N_0 + \frac{t - T}{T_a}$$
holds, then $T_a$ is called the average dwell time and $N_0$ is the chatter bound.
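For a concrete switching sequence, the average dwell time bound of Definition 4 can be checked directly. The sketch below uses hypothetical switching instants, a chatter bound $N_0 = 1$, and $T_a = 1.2$ (the paper itself takes $N_0 = 0$; see Remark 5), and verifies $N_\sigma(T, t) \le N_0 + (t - T)/T_a$ over a grid of intervals.

```python
# Check the average dwell time bound N_sigma(T, t) <= N0 + (t - T)/Ta
# for a concrete switching sequence (hypothetical instants).
import numpy as np

switch_times = np.array([1.0, 2.4, 3.6, 5.2, 6.5, 8.0])  # t_1 < t_2 < ...
N0, Ta = 1.0, 1.2                      # chatter bound, average dwell time

def N_sigma(T, t):
    """Number of switching instants in the open interval (T, t)."""
    return int(np.sum((switch_times > T) & (switch_times < t)))

grid = np.linspace(0.0, 10.0, 101)
ok = all(N_sigma(T, t) <= N0 + (t - T) / Ta
         for i, T in enumerate(grid) for t in grid[i + 1:])
print("ADT bound satisfied on the grid:", ok)   # True for this sequence
```

Since the minimum gap between the hypothetical instants is at least $T_a$, any interval containing $k$ switches has length greater than $(k - 1)T_a$, so the bound holds with $N_0 = 1$.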

Remark 5. It should be pointed out that, for the chatter bound $N_0$, in our work we take $N_0 = 0$, which is preferable to the choices previously reported in [31, 37]. The case $\mu = 1$ is equivalent to the existence of a common Lyapunov function for all subsystems, which implies that the switching signal can be arbitrary. Hence, the results reported in this paper are more general than those obtained for arbitrary switching signals in the previous literature [28, 37].

To obtain the main results of this paper, we introduce the following lemmas.

Lemma 6 (see [40]). For any positive-definite constant matrix $M \in \mathbb{R}^{n \times n}$, scalar $r > 0$, and vector function $x(s): [0, r] \to \mathbb{R}^n$ such that the integrations below are well defined, the following hold:
(1) Jensen's inequality:
$$\Bigl(\int_0^r x(s)\,ds\Bigr)^T M \Bigl(\int_0^r x(s)\,ds\Bigr) \le r \int_0^r x^T(s) M x(s)\,ds;$$
(2)
$$\Bigl(\int_{-r}^{0}\int_{t+\theta}^{t} x(s)\,ds\,d\theta\Bigr)^T M \Bigl(\int_{-r}^{0}\int_{t+\theta}^{t} x(s)\,ds\,d\theta\Bigr) \le \frac{r^2}{2} \int_{-r}^{0}\int_{t+\theta}^{t} x^T(s) M x(s)\,ds\,d\theta.$$
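As a quick numerical sanity check of part (1), the following sketch compares both sides of Jensen's inequality for a randomly generated positive-definite matrix $M$ and a sampled vector function, with integrals approximated by Riemann sums; it is merely an illustration, not part of the proof machinery.

```python
# Numerical spot-check of Jensen's inequality (Lemma 6, part (1)):
# (int x ds)' M (int x ds) <= r * int x' M x ds, for M > 0.
import numpy as np

rng = np.random.default_rng(1)
n, r, N = 3, 2.0, 2000                    # dimension, interval length, samples
ds = r / N

Q = rng.normal(size=(n, n))
M = Q @ Q.T + n * np.eye(n)               # a positive-definite matrix

s = np.linspace(0.0, r, N, endpoint=False)
x = np.stack([np.sin(s + k) for k in range(n)])   # sampled x(s) in R^n

ix = x.sum(axis=1) * ds                   # int_0^r x(s) ds  (Riemann sum)
lhs = ix @ M @ ix
rhs = r * ds * np.einsum('in,ij,jn->', x, M, x)   # r * int x' M x ds

print(f"lhs = {lhs:.4f} <= rhs = {rhs:.4f}:", lhs <= rhs)
```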

Lemma 7 (see [41]). Let $D$, $E$, and $F$ be real matrices of appropriate dimensions such that $F^T F \le I$. Then, for any scalar $\varepsilon > 0$ and vectors $x$ and $y$ with appropriate dimensions, the following inequality is true:
$$2 x^T D F E y \le \varepsilon^{-1} x^T D D^T x + \varepsilon\, y^T E^T E y.$$

Lemma 8 (see [41]). For any real matrices $X$ and $Y$ and one positive-definite matrix $P > 0$, the following matrix inequality holds:
$$X^T Y + Y^T X \le X^T P X + Y^T P^{-1} Y.$$

Lemma 9 (see [42], Schur's complement). The LMI
$$\begin{pmatrix} Q(x) & S(x) \\ S^T(x) & R(x) \end{pmatrix} > 0,$$
with $Q(x) = Q^T(x)$ and $R(x) = R^T(x)$, is equivalent to
$$R(x) > 0, \qquad Q(x) - S(x) R^{-1}(x) S^T(x) > 0.$$
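For illustration, the following sketch checks Schur's complement numerically on arbitrary test matrices: positive definiteness of the block matrix coincides with positive definiteness of $R$ together with that of $Q - S R^{-1} S^T$.

```python
# Numerical illustration of Schur's complement (Lemma 9):
# [[Q, S], [S', R]] > 0  <=>  R > 0 and Q - S R^{-1} S' > 0.
import numpy as np

rng = np.random.default_rng(2)
n = 3

def is_pd(X):
    """True if the symmetric matrix X is positive definite."""
    return bool(np.all(np.linalg.eigvalsh(X) > 0))

Q0 = rng.normal(size=(n, n)); Q = Q0 @ Q0.T + np.eye(n)   # test data
R0 = rng.normal(size=(n, n)); R = R0 @ R0.T + np.eye(n)
S = 0.1 * rng.normal(size=(n, n))

block = np.block([[Q, S], [S.T, R]])
schur = Q - S @ np.linalg.inv(R) @ S.T

print("block PD:", is_pd(block))
print("R PD and Schur complement PD:", is_pd(R) and is_pd(schur))
```

Both printed values agree, as the lemma guarantees; making `S` large enough breaks both simultaneously.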

3. Main Results

Theorem 10. For given constants $\tau_1$, $\tau_2$, and $\mu$, if there exist positive scalars and positive-definite matrices of appropriate dimensions such that the following LMI conditions hold, then system (3) is mean-square uniformly ultimately bounded.

Proof. Choose the Lyapunov-Krasovskii functional (25). By applying the Itô differential formula, we compute the stochastic derivative of $V$ along the trajectory of system (3), and with the infinitesimal operator $\mathcal{L}$ we deduce a bound on $\mathcal{L}V$. According to assumptions (H2) and (H3), we obtain inequality (29), and by assumption (H4) we have (30). Combining inequalities (29) and (30), we derive the corresponding estimate.
Then we easily derive bounds for the remaining terms. Similarly, calculating the operator $\mathcal{L}$ along the trajectory of system (3) and using Lemma 6(1), it follows that the single-integral terms are bounded, and analogous bounds hold for the other terms. Using Lemma 6(2), we bound the double- and triple-integral terms. Then, noting the kernel condition and the Cauchy-Schwarz inequality, the infinitely distributed delay term can be estimated. Based on assumption (H1), the activation terms are bounded as well, and from Lemma 8 a further inequality holds. Combining (27)–(43), we derive (46). Integrating both sides of (46) over the time interval and taking expectations, and noting from (25) the lower and upper bounds of $V$, we conclude that if one chooses the constant $B$ appropriately, then for any constant $\delta > 0$ there is $T > 0$ such that $\mathbb{E}\|x(t)\|^2 \le B$ for all $t \ge T$. According to Definition 3, system (3) is mean-square uniformly ultimately bounded. This completes the proof.

Note from (25) that there exists a constant vector such that (52) holds. Thus, combining (50) and (52) leads to the exponential estimate (53).

Theorem 11. If all of the conditions of Theorem 10 hold, then there exists an attractor $\mathcal{B}$ for the solutions of system (3).

Proof. If one chooses $B$ as in Theorem 10, then for any $\delta > 0$ there is $T > 0$ such that $\mathbb{E}\|x(t)\|^2 \le B$ for all $t \ge T$. Let $\mathcal{B} = \{x \in \mathbb{R}^n : \mathbb{E}\|x\|^2 \le B\}$. Clearly, $\mathcal{B}$ is closed, bounded, and invariant. Furthermore, $\limsup_{t \to +\infty} \mathbb{E}\|x(t)\|^2 \le B$. Therefore, $\mathcal{B}$ is a stochastic attractor for the solutions of system (3).

Corollary 12. In addition to all of the conditions of Theorem 10, if $J = 0$ and $g(t, 0, 0) \equiv 0$, then system (3) has a trivial solution $x(t) \equiv 0$, and the trivial solution of system (3) is mean-square exponentially stable.

Proof. If $J = 0$ and $g(t, 0, 0) \equiv 0$, then it is obvious that system (3) has a trivial solution $x(t) \equiv 0$. From Theorem 10, one obtains an exponential decay estimate for $\mathbb{E}\|x(t)\|^2$. Therefore, the trivial solution of system (3) is mean-square exponentially stable. This completes the proof.

In practice, parameter uncertainties in neural networks are unavoidable. To account for this phenomenon, in this section we investigate the mean-square uniformly ultimate boundedness of switched stochastic systems with uncertainties by applying the average dwell time approach.

Now, we consider the switched stochastic Cohen-Grossberg neural networks with unknown parameters as follows:
$$\begin{aligned} dx(t) = {}& -\alpha(x(t))\Bigl[\beta(x(t)) - \bigl(A_{\sigma(t)} + \Delta A_{\sigma(t)}(t)\bigr) f(x(t)) - \bigl(B_{\sigma(t)} + \Delta B_{\sigma(t)}(t)\bigr) f(x(t - \tau(t))) \\ & - \bigl(C_{\sigma(t)} + \Delta C_{\sigma(t)}(t)\bigr) \int_{-\infty}^{t} K(t - s) f(x(s))\,ds - J\Bigr]\,dt + g\bigl(t, x(t), x(t - \tau(t))\bigr)\,d\omega(t). \end{aligned} \tag{55}$$

Theorem 13. For given constants $\tau_1$, $\tau_2$, $\mu$, and $\lambda > 0$, if there exist positive scalars, positive-definite matrices, and matrices of appropriate dimensions such that the LMI conditions (56) and (57) hold, then system (55) is mean-square uniformly ultimately bounded for any switching signal with average dwell time satisfying (59).

Proof. Let us consider the same Lyapunov functional candidate (60). Then we compute the stochastic derivative of $V$ along the trajectory of system (55). In view of the proof of Theorem 10, we only need to treat the terms involving the uncertainties. By assumption (H5) and Lemma 7, we obtain the corresponding bounds. Then, along the same lines as in Theorem 10, the required estimate can be deduced. By Schur's complement lemma, the LMI (56) is equivalent to (64). Pre- and postmultiplying both sides of LMI (64) by the congruence transformation matrix in (66), we derive the desired LMI.
Therefore, when $t \in [t_k, t_{k+1})$, the $i_k$th subsystem is activated. From the proof of Theorem 10 and (53), there exists a positive constant vector such that (67) holds.
As the system state is continuous, it follows from (67) that the corresponding bound holds across the switching instants. If one chooses the average dwell time according to (59), then for any constant $\delta > 0$ there is $T > 0$ such that $\mathbb{E}\|x(t)\|^2 \le B$ for all $t \ge T$. According to Definition 3, the switched stochastic Cohen-Grossberg neural network (55) is mean-square uniformly ultimately bounded. This completes the proof.

Theorem 14. If all of the conditions of Theorem 13 hold, then there exists an attractor $\mathcal{B}$ for the solutions of system (55).

Proof. If one chooses the average dwell time according to (59), then Theorem 13 shows that, for any $\delta > 0$, there is $T > 0$ such that $\mathbb{E}\|x(t)\|^2 \le B$ for all $t \ge T$. Let $\mathcal{B} = \{x \in \mathbb{R}^n : \mathbb{E}\|x\|^2 \le B\}$. Clearly, $\mathcal{B}$ is closed, bounded, and invariant. Furthermore, $\limsup_{t \to +\infty} \mathbb{E}\|x(t)\|^2 \le B$. Therefore, $\mathcal{B}$ is an attractor for the solutions of system (55).

Corollary 15. In addition to all of the conditions of Theorem 13, if $J = 0$ and $g(t, 0, 0) \equiv 0$, then system (55) has a trivial solution $x(t) \equiv 0$, and the trivial solution of system (55) is mean-square exponentially stable.

Proof. If $J = 0$ and $g(t, 0, 0) \equiv 0$, then it is obvious that system (55) has a trivial solution $x(t) \equiv 0$. From Theorem 13, one obtains an exponential decay estimate for $\mathbb{E}\|x(t)\|^2$. Therefore, the trivial solution of system (55) is mean-square exponentially stable. This completes the proof.

Remark 16. It is noteworthy that the time-varying delay is restricted to the interval $[\tau_1, \tau_2]$ and that the lower bound of the time delay may not be equal to 0. In previous work, such as [19, 24, 26, 37], the commonly used Lyapunov functional carries delay information only from 0 up to an upper bound $\tau_2$, through terms of the form $\int_{t - \tau_2}^{t} x^T(s) Q x(s)\,ds$. In this paper, a new Lyapunov functional is constructed, which contains the information of the lower bound $\tau_1$ through terms of the form $\int_{t - \tau_2}^{t - \tau_1} x^T(s) Q x(s)\,ds$. Thus the methods of this paper can be adopted to discuss the dynamic behaviors of interval stochastic switched Cohen-Grossberg neural networks with time delays. Therefore, a time-varying delay ranging from $\tau_1$ to $\tau_2$ is more general and less conservative for the neural network models. If the lower bound of the time delay satisfies $\tau_1 = 0$, then our results reduce to the traditional time delay results.

Remark 17. It is known that noise disturbance is a major source of instability and poor performance in real neural networks. If the noise term vanishes, that is, $g(t, \cdot, \cdot) \equiv 0$, the switched stochastic Cohen-Grossberg neural networks (8) degenerate into ordinary switched Cohen-Grossberg neural networks, whose exponential stability has been studied in [22] and robust stability in [37]. In addition, when the amplification function satisfies $\alpha_i(\cdot) \equiv 1$ and the behavior function $\beta_i(\cdot)$ is linear, the switched Cohen-Grossberg neural networks turn into the famous switched Hopfield neural networks, which have been investigated in [28] without distributed time delay and in [32] for global robust asymptotic stability with finite distributed time delay. However, infinite distributed time delay was not taken into account in those works. Therefore, the results developed in this paper are more general than those reported in [28, 32, 37].

Remark 18. If $N = 1$, that is, there is only one subsystem, then the switched stochastic Cohen-Grossberg neural networks (8) degenerate into ordinary stochastic Cohen-Grossberg neural networks without switching. The attractor and boundedness of stochastic Cohen-Grossberg neural networks with delays have been discussed in [35] by a LaSalle-type theorem, and their stability has been studied in [2, 18, 19]. So our results generalize these previous results.

Remark 19. The triple integral terms included in the Lyapunov-Krasovskii functional of this paper lead to new dynamic criteria. We make full use of Lemma 6 and do not ignore any terms, which reduces the conservatism of the proposed method. This can be verified from the numerical example discussed in Section 4.

Remark 20. It should be mentioned that the nonlinear activation function in [2, 18, 24, 35, 37, 38] is required to satisfy $f_j(0) = 0$; in our paper, this assumption is removed. In assumption (H1), the constants $l_j^-$ and $l_j^+$ are allowed to be positive, negative, or zero, whereas the corresponding constant is restricted to be zero or positive in [2, 18, 26, 35]. Moreover, assumption (H2) is weaker than those given in [26, 35], where the amplification function $\alpha_i(\cdot)$ is required to be differentiable. The usual condition imposed on the behavior function $\beta_i(\cdot)$ is differentiability in [18, 37] or a monotonicity and growth condition in [26]. One can construct behavior functions for which assumption (H3) of this paper obviously holds, yet the conditions in [26] cannot be satisfied.

4. Illustrative Examples

In this section, we present examples to show the effectiveness of the proposed method. Let $N = 2$ and consider a switched stochastic neural network with two subsystems.

Example 1. Consider the following switched stochastic Cohen-Grossberg neural network (73) with unknown parameters, where $\Delta A_i(t)$, $\Delta B_i(t)$, and $\Delta C_i(t)$ are uncertainties satisfying the structure in (13) and (14) with known norm-bounded matrices. Let the amplification, behavior, and activation functions and the delay parameters be chosen so that assumptions (H1)–(H4) hold, and let the connection weight matrices be as follows.

Subsystem 1. Consider

Subsystem 2. Consider

From assumptions (H1) and (H2), we can directly obtain the corresponding bound constants.

Therefore, by solving LMIs (56) and (57), we obtain a feasible solution. Using (59), we can then compute the average dwell time bound and choose the dwell time accordingly. A switching signal with this average dwell time is shown in Figure 1.
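In ADT results of this type, the bound (59) typically takes the form $T_a > T_a^* = \ln\mu/\lambda$, where $\mu$ relates the Lyapunov functionals of adjacent subsystems and $\lambda$ is the decay rate. The snippet below evaluates it for hypothetical values of $\mu$ and $\lambda$, since the actual values produced by LMIs (56) and (57) are not reproduced here.

```python
# Average dwell time bound Ta* = ln(mu)/lambda (cf. (59)); mu and lam are
# hypothetical placeholders, not the values obtained from LMIs (56)-(57).
import math

mu, lam = 1.5, 0.4          # placeholder: V_i <= mu * V_j, decay rate lambda
Ta_star = math.log(mu) / lam
print(f"choose any average dwell time Ta > {Ta_star:.3f}")
```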

The mean-square exponential stability of system (55) with the given initial value is shown in Figure 2. With the help of MATLAB, the time evolution of the state variables of system (55) is shown in Figure 3. Under the above conditions, phase portraits of simulations of system (55) under the given initial conditions are shown in Figure 4.

5. Conclusion

This paper has studied the boundedness problem for a class of switched stochastic Cohen-Grossberg neural networks with norm-bounded parameter uncertainties via the average dwell time approach. By employing the multiple Lyapunov-Krasovskii functionals (25) and (60), we have derived new sufficient conditions guaranteeing the mean-square uniformly ultimate boundedness, the existence of an attractor, and the mean-square exponential stability. A numerical example has been presented to demonstrate the effectiveness and merits of the proposed method. It is expected that the approach presented in this paper can easily be extended to the analysis of other neural networks.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

The work is partially supported by the National Natural Science Foundation of China (numbers 11101053 and 71471020), China Postdoctoral Science Foundation (numbers 2014M550097 and 2015T80144), Hunan Provincial Natural Science Foundation (number 16JJ1015), Scientific Research Fund of Hunan Provincial Education Department (number 15A003), and Hunan Provincial Innovation Foundation for Postgraduate (number CX2015B373).