Abstract and Applied Analysis
Volume 2014 (2014), Article ID 505164, 17 pages
Exponential Synchronization for Stochastic Neural Networks with Mixed Time Delays and Markovian Jump Parameters via Sampled Data
1School of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, China
2Department of Applied Mathematics, Yanshan University, Qinhuangdao 066004, China
Received 17 October 2013; Accepted 31 December 2013; Published 5 March 2014
Academic Editor: Chuanzhi Bai
Copyright © 2014 Yingwei Li and Xueqing Guo. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
The exponential synchronization problem for stochastic neural networks (SNNs) with mixed time delays and Markovian jump parameters under a sampled-data controller is investigated. Based on a novel Lyapunov-Krasovskii functional, stochastic analysis theory, and the linear matrix inequality (LMI) approach, we derive novel sufficient conditions guaranteeing that the master system exponentially synchronizes with the slave system. A design method for the desired sampled-data controller is also proposed. To capture the dynamical behaviors of the system more fully, both Markovian jump parameters and stochastic disturbances are considered, where the stochastic disturbances take the form of a Brownian motion. The results obtained in this paper are less conservative than previous results in the literature. Finally, two numerical examples are given to illustrate the effectiveness of the proposed methods.
Neural networks, such as Hopfield neural networks, cellular neural networks, Cohen-Grossberg neural networks, and bidirectional associative neural networks, are very important nonlinear circuit networks and, in the past few decades, have been extensively studied due to their potential applications in classification, signal and image processing, parallel computing, associative memories, optimization, cryptography, and so forth; see [1–7]. Many results dealing with the dynamics of various neural networks, such as stability, periodic oscillation, bifurcation, and chaos, have been obtained by applying the Lyapunov stability theory; see, for example, [8–10] and the references therein. As a special case, synchronization issues of neural network systems have also been extensively investigated, and many criteria have been developed to guarantee the global synchronization of the network systems in [11–17].
It has been widely reported that a neural network sometimes has finite modes that switch from one mode to another at different times; such a switching (jumping) signal between different neural network models can be governed by a Markovian chain; see [18–25] and the references therein. This class of systems has the advantage of modeling dynamic systems subject to abrupt variations in their structures and has many applications, such as target tracking problems, manufacturing processes, and fault-tolerant systems. In , delay-interval-dependent stability criteria are obtained for neural networks with Markovian jump parameters and time-varying delays, based on the free-weighting matrix method and the LMI technique. In , by introducing some free-weighting matrices, delay-dependent stochastic exponential synchronization conditions are derived for chaotic neural networks with Markovian jump parameters and mixed time delays in terms of the Jensen inequality and linear matrix inequalities.
It is well known that noise disturbances widely exist in biological networks due to environmental uncertainties; such disturbances are a major source of instability and can lead to poor performance in neural networks. Such systems are described by stochastic differential systems, which have been used efficiently in modeling many practical problems arising in engineering, physics, and science. Therefore, the theory of stochastic differential equations has also attracted much attention in recent years, and many results have been reported in the literature [26–30]. In addition to noise disturbances, time delay is another major source of instability and poor performance in neural networks; see, for example, [31–35]. Time delays are often encountered in real neural networks, and their existence may cause oscillation or instability, which is harmful to the applications of neural networks. Therefore, the stability analysis of neural networks with time delays has been widely studied in the literature.
On the other hand, with the rapid development of computer hardware, sampled-data control technology has shown superiority over other control approaches, because it is difficult to guarantee that the state variables transmitted to controllers are continuous in many real-world applications. In , Wu et al. investigated the synchronization problem of neural networks with time-varying delay under sampled-data control in the presence of a constant input delay. In , by using a sampled-data controller, the global synchronization of chaotic Lur'e systems is discussed, and sufficient conditions are obtained in terms of effective synchronization linear matrix inequalities by constructing new discontinuous Lyapunov functionals. Wu et al. studied the sampled-data synchronization of Markovian jump neural networks with time-varying delay; some new and useful synchronization conditions in the framework of the input delay approach and the linear matrix inequality technique are derived in .
Motivated by the above discussion, in this paper we study the delay-dependent exponential synchronization of neural networks with stochastic perturbation, discrete and distributed time-varying delays, and Markovian jump parameters. Here, it should be mentioned that our results are delay dependent, which depend on not only the upper bounds of time delays but also their lower bounds. Moreover, the derivatives of time delays are not necessarily zero or smaller than one since several free matrices are introduced in our results. By constructing an appropriate Lyapunov-Krasovskii functional based on delay partitioning, several improved delay-dependent criteria are developed to achieve the exponential synchronization in mean square in terms of linear matrix inequalities. Two numerical examples are also provided to demonstrate the advantage of the theoretical results.
The rest of this paper is organized as follows. In Section 2, the model of a stochastic neural network with both mixed time delays and Markovian jump parameters under sampled-data control is introduced, together with some definitions and lemmas. Exponential synchronization criteria are established for neural networks with both Markovian jump parameters and mixed time delays via sampled data in Section 3. In Section 4, exponential synchronization is proved for stochastic neural networks with both Markovian jump parameters and mixed time delays under sampled-data control. In Section 5, two illustrative examples are given to demonstrate the validity of the proposed results. Finally, some conclusions are drawn in Section 6.
Notations. Throughout this paper, $\mathbb{R}$ denotes the set of real numbers, $\mathbb{R}^n$ denotes the $n$-dimensional Euclidean space, and $\mathbb{R}^{m \times n}$ denotes the set of all $m \times n$ real matrices. For any matrix $A$, $A^T$ denotes the transpose of $A$. If $A$ is a real symmetric matrix, $A > 0$ ($A < 0$) means that $A$ is positive definite (negative definite). $\lambda_{\min}(A)$ and $\lambda_{\max}(A)$ represent the minimum and maximum eigenvalues of the real symmetric matrix $A$, respectively. $(\Omega, \mathcal{F}, \mathbb{P})$ is a complete probability space, where $\Omega$ is the sample space, $\mathcal{F}$ is the $\sigma$-algebra of subsets of the sample space, and $\mathbb{P}$ is the probability measure on $\mathcal{F}$. $\operatorname{diag}\{\cdots\}$ denotes a block-diagonal matrix and $\operatorname{col}\{\cdots\}$ stands for a matrix column with blocks given by the matrices in $\{\cdots\}$. $\mathbb{E}\{\cdot\}$ denotes the expectation operator with respect to the probability measure $\mathbb{P}$. Given the column vectors , , , , and . $\dot{x}(t)$ denotes the derivative of $x(t)$, and $*$ represents the symmetric term in a matrix. Matrices, if their dimensions are not explicitly stated, are assumed to have compatible dimensions for algebraic operations.
2. Model Description and Preliminaries
Let be a right-continuous Markovian chain on the probability space ( ) taking values in a finite state space with generator given by where and . Here, is the transition rate from to if at time , and is the transition rate from to at time .
Remark 1. The probability defined in (1) is called the time-homogeneous transition probability, which depends only on the time interval ; that is, it does not depend on the starting point . Moreover, for the time-homogeneous transition probability defined in (1), the following two properties should be satisfied: Accordingly, for any satisfying the conditions in (2), the matrix is called the probability transition matrix for the right-continuous Markovian chain .
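As a concrete illustration of how a right-continuous Markovian chain evolves from its generator, the following is a minimal simulation sketch (not part of the paper; the two-mode generator and all parameters are hypothetical): holding times in a mode are exponentially distributed with rate equal to the negated diagonal entry of the generator, and jumps are drawn from the normalized off-diagonal rates.

```python
import numpy as np

# Illustrative sketch: simulate a right-continuous Markovian chain r(t)
# from a generator Pi whose off-diagonal entries pi_ij are transition
# rates and whose rows sum to zero.
def simulate_markov_chain(Pi, r0, T, rng):
    t, r, path = 0.0, r0, [(0.0, r0)]
    n = Pi.shape[0]
    while t < T:
        rate = -Pi[r, r]                  # total exit rate from current mode
        if rate <= 0:                     # absorbing mode: no further jumps
            break
        t += rng.exponential(1.0 / rate)  # holding time ~ Exp(rate)
        probs = Pi[r].copy()
        probs[r] = 0.0
        probs /= rate                     # jump distribution over other modes
        r = rng.choice(n, p=probs)
        path.append((t, r))
    return path

rng = np.random.default_rng(0)
Pi = np.array([[-2.0, 2.0],               # hypothetical two-mode generator
               [3.0, -3.0]])
path = simulate_markov_chain(Pi, 0, 10.0, rng)
```

The jump times returned in `path` form a strictly increasing sequence, and between consecutive jump times the chain stays in one mode, matching the right-continuity described above.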
Fix a probability space and consider the neural networks with mixed time delays and Markovian jump described by the following differential equation system: where is the neuron state vector and is the state of the neuron at time ; denotes the neuron activation function; is a diagonal matrix with positive entries; , , and are, respectively, the connection weight matrix, the discretely delayed connection weight matrix, and the distributively delayed connection weight matrix; is an external input vector; and denote the discrete delay and the distributed delay.
Throughout this paper, we make the following assumptions.

Assumption 1. There exist positive constants , , and such that

Assumption 2. Each activation function in (3) is continuous and bounded, and there exist constants and such that where and .
Remark 2. In the earlier literature, the activation functions are supposed to be continuous, differentiable, monotonically increasing, and bounded. Moreover, the constants for or the constants for . However, in this paper, the activation functions need not be monotonically increasing, and the sector condition imposed here is more general than the usual Lipschitz-type conditions. Moreover, the constants and are allowed to be positive, negative, or zero. Hence, Assumption 2 of this paper is weaker than those given in the earlier literature (see, e.g., [39, 40]).
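To see that a standard activation function satisfies a sector-type condition of the kind used in Assumption 2, the following sketch (purely illustrative, not from the paper) checks numerically that f(x) = tanh(x) has difference quotients bounded between the constants 0 and 1:

```python
import numpy as np

# Sketch: verify that f(x) = tanh(x) satisfies the sector condition
#   l_minus <= (f(a) - f(b)) / (a - b) <= l_plus
# with l_minus = 0 and l_plus = 1, for randomly sampled pairs (a, b).
def sector_ratio(f, a, b):
    return (f(a) - f(b)) / (a - b)

rng = np.random.default_rng(1)
a = rng.uniform(-5.0, 5.0, 1000)
b = rng.uniform(-5.0, 5.0, 1000)
mask = np.abs(a - b) > 1e-9              # avoid division by zero
ratios = sector_ratio(np.tanh, a[mask], b[mask])
# every difference quotient of tanh lies in the sector [0, 1]
```

Note that tanh is monotonically increasing, so its sector constants are nonnegative; Assumption 2 is more permissive, since the bounding constants may also be negative or zero.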
In this paper, we consider system (3) as the master system and a slave system for (3) can be described by the following equation: where and for are matrices given in (3) and is the appropriate control input.
In order to investigate the problem of exponential synchronization between systems (3) and (6), we define the error signal . Therefore, the error dynamical system between (3) and (6) is given as follows: where . It can be found that the functions satisfy the following condition: where and .
The control signal is assumed to be generated by using a zero-order-hold function with a sequence of hold times . Therefore, the mode-independent state feedback controller takes the following form: where is a sampled-data feedback controller gain matrix to be determined, is a discrete measurement of at the sampling instant , and . It is assumed that for any integer , where is a positive scalar and represents the largest sampling interval.
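The zero-order-hold mechanism above can be illustrated with a toy simulation (the scalar dynamics and gain below are hypothetical stand-ins, not the systems of this paper): the control input is updated only at the sampling instants and held constant in between.

```python
import numpy as np

# Toy sketch of zero-order-hold sampled-data feedback u(t) = K * e(t_k)
# for t in [t_k, t_{k+1}): the held input is refreshed only every h time
# units. The scalar error dynamics de/dt = a*e + u is integrated with the
# Euler method; a, K, h are hypothetical choices.
def simulate_zoh(a, K, e0, h, dt, T):
    e = e0
    u = K * e0            # input held from the first sample at t = 0
    t_next = h            # next sampling instant
    es = []
    for t in np.arange(0.0, T, dt):
        if t >= t_next:   # sampling instant: update the held input
            u = K * e
            t_next += h
        e += dt * (a * e + u)
        es.append(e)
    return np.array(es)

# With a + K < 0 and a sufficiently small sampling interval h,
# the error decays toward zero despite the piecewise-constant input.
es = simulate_zoh(a=0.5, K=-2.0, e0=1.0, h=0.1, dt=0.001, T=5.0)
```

This is exactly the practical setting motivating the analysis: between samples the controller receives no new state information, so the largest sampling interval enters the stability conditions.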
For convenience, in the following, each possible value of is denoted by , . Then we have , , , and , where , , and , for any , are known constant matrices of appropriate dimensions. The system (10) can be written as
The first purpose of this paper is to design a controller of the form (9) to achieve exponential synchronization of the master system (3) and the slave system (6). In other words, we are interested in finding a feedback gain matrix such that the error system (11) is exponentially stable.
As mentioned earlier, it is often the case in practice that a neural network is disturbed by environmental noises that affect the stability of its equilibrium. Motivated by this, we consider a stochastic system described by a set of stochastic uncertain recurrent neural networks with mixed time delays: where is an n-dimensional Brownian motion defined on a complete probability space satisfying and , and is the noise intensity function matrix.
And a slave system for (12) can be described by the following equation:
The mode-independent state feedback controller is made as the form of (9) and each possible value of is denoted by , ; then we have the final stochastic error system:
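For intuition about how a stochastic error system of this type behaves, the following is a minimal Euler-Maruyama sketch for a scalar toy analogue of system (14) (the drift and noise coefficients are hypothetical, and the Markovian jumps are omitted); the sample average of the squared error over many paths illustrates exponential decay in mean square.

```python
import numpy as np

# Sketch: Euler-Maruyama discretization of the scalar toy SDE
#   de(t) = -a * e(t) dt + sigma * e(t) dw(t),
# a stand-in for the stochastic error system; a, sigma, dt are hypothetical.
def euler_maruyama(a, sigma, e0, dt, n_steps, rng):
    e = np.empty(n_steps + 1)
    e[0] = e0
    for k in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))   # Brownian increment over dt
        e[k + 1] = e[k] + (-a * e[k]) * dt + sigma * e[k] * dw
    return e

rng = np.random.default_rng(2)
paths = np.array([euler_maruyama(2.0, 0.3, 1.0, 0.01, 500, rng)
                  for _ in range(200)])
mean_sq = (paths ** 2).mean(axis=0)          # empirical E|e(t)|^2
```

Since 2a > sigma**2 here, the second moment decays exponentially, which is the mean-square behavior that the synchronization criteria of this paper are designed to guarantee for the full system.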
We impose the following assumption: is locally Lipschitz continuous and satisfies
The second purpose of this paper is to find a feedback gain matrix in the controller of the form (9) to ensure that the error system (14) is exponentially stable, so that the master system (12) and the slave system (13) are exponentially synchronized.
To state our main results, the following definition and lemmas are first introduced, which are essential for the proof in the sequel.
Definition 3. Master system and slave system are said to be exponentially synchronous if error system is exponentially stable; that is, for any initial condition defined on the interval , , the following condition is satisfied:
Lemma 4 (the Jensen inequality, see ). For any constant matrix , , scalar , vector function , , such that the integrations concerned are well defined; then
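The Jensen inequality of Lemma 4 can be spot-checked numerically by replacing the integrals with Riemann sums; the sketch below (illustrative only, with an arbitrary positive definite matrix and vector function) confirms the inequality for one example.

```python
import numpy as np

# Sketch: discrete check of the Jensen inequality
#   gamma * \int_0^gamma w(s)' M w(s) ds >= (\int_0^gamma w ds)' M (\int w ds)
# for a hypothetical M > 0 and a smooth vector function w(s).
rng = np.random.default_rng(3)
A = rng.standard_normal((2, 2))
M = A @ A.T + 2.0 * np.eye(2)                 # a positive definite matrix
gamma, n = 1.5, 2000
s = np.linspace(0.0, gamma, n)
w = np.stack([np.sin(3.0 * s), np.cos(s) + s])  # arbitrary w(s), shape (2, n)
ds = gamma / n
lhs = gamma * sum(w[:, k] @ M @ w[:, k] * ds for k in range(n))
v = w.sum(axis=1) * ds                        # Riemann sum of \int w(s) ds
rhs = v @ M @ v
# the discrete analogue of Lemma 4 holds: lhs >= rhs
```

In the discretized form the inequality is exactly the Cauchy-Schwarz inequality in the inner product induced by M, which is why it holds for any choice of M > 0 and w.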
Lemma 5 (the Schur complement). Given one positive definite matrix and constant matrices and , where , if and only if
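The Schur complement lemma can likewise be checked numerically for a concrete symmetric block matrix (the blocks below are hypothetical): the full matrix is negative definite exactly when the lower-right block and the Schur complement of that block are both negative definite.

```python
import numpy as np

# Sketch: Schur complement check for the symmetric block matrix
#   [[A, B], [B.T, C]] < 0  iff  C < 0 and A - B C^{-1} B.T < 0,
# illustrated on hypothetical 2x2 blocks.
def is_neg_def(X):
    return bool(np.all(np.linalg.eigvalsh(X) < 0))

A = np.array([[-4.0, 1.0], [1.0, -3.0]])
B = np.array([[0.5, 0.2], [0.1, 0.4]])
C = np.array([[-2.0, 0.3], [0.3, -1.5]])

full = np.block([[A, B], [B.T, C]])
schur = A - B @ np.linalg.inv(C) @ B.T
equivalent = is_neg_def(full) == (is_neg_def(C) and is_neg_def(schur))
```

This equivalence is what allows the nonlinear matrix conditions arising in the proofs to be recast as LMIs solvable by standard convex optimization tools.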
Lemma 6 (see ). For any constant matrix , symmetric positive definite matrix , two functions and satisfying , and vector function such that the integrations concerned are well defined, let where and . Then the following inequality holds:
3. Exponential Synchronization for Markovian Jump Neural Networks with Mixed Time Delays via Sampled Data
To present the main results of this section, we denote , , , and .
Proof. Denote and
and then consider the following Lyapunov functional for error system (11):
Let be the weak infinitesimal generator of the random process along system (11). Next, we will compute , , , , , , and along the trajectories of the error system (11), respectively, for :

According to Lemma 4, it follows that

On the other hand, denote

When , by Lemma 4, we have that

It is clear from (21) that

Moreover,

Based on the lower bounds lemma of , we have from (38)–(40) that
Note that, when or , we have or , respectively. Thus, (41) still holds.
Inspired by the free-weighting matrix approach , we can find that, for any appropriately dimensioned matrix , . Hence, the following inequality holds: where .
From (44), we can immediately get that
Furthermore, according to error system (11), for any appropriately dimensioned matrix and scalar , the following equality is satisfied:
On the other hand, we have from (8) that, for any , which is equivalent to where denotes the unit column vector with one element on its row and zeros elsewhere. Thus, for any appropriately dimensioned diagonal matrices , the following inequality holds : which implies
Similarly, for any appropriately dimensioned diagonal matrices , , and , the following inequalities also hold:
Adding the left-hand sides of (46)–(51) to and letting , we have from (29)–(36), (42), and (45) that, for ,
According to Lemma 5, (24) is equivalent to which implies
From (23) and (55), it can be seen that
Thus, we can show from (52)–(56) that
From the definition of , , and , there exist positive scalars , , , , and such that the following inequality holds:
Define a new function , where and . It can be found that where
Due to the fact that and , we can find a sufficiently small scalar such that and , which implies .
On the other hand, from (57) and (58) we have
By using Dynkin's formula, for , we have where .
Consequently, by changing the integration sequence, the following inequalities hold:
After substituting (63) into the right side of (62) and then using , we can obtain where .
So, Then it can be shown that, for any , where .
Consequently, according to the Lyapunov-Krasovskii stability theory and Definition 3, we know that the error system (11) is exponentially stable. This completes the proof.
4. Exponential Synchronization for Stochastic Neural Networks with Mixed Time Delays and Markovian Jump via Sampled Data
In this section, some sufficient conditions of exponential synchronization for stochastic error system (14) are obtained by employing the Lyapunov-Krasovskii functionals.
Theorem 8. Under Assumptions , , and , for given scalar , the error system (14) is globally exponentially stable, which ensures that the master system (12) and slave system (13) are stochastically synchronized, if there exist positive scalars , symmetric positive definite matrices , , , , , , , and , positive definite matrices , , , and , and matrices , , , , , and , such that, for any , the following matrix inequalities hold: where , , , and is given as follows: where
Proof. Let . Then, the system (14) can be written as
To analyze the stability of error system (14), we construct the following stochastic Lyapunov functional candidate: where
Let be the weak infinitesimal operator of stochastic process along the trajectories of error system (14). Then we obtain that