## Stochastic Systems: Modeling, Analysis, Synthesis, Control, and their Applications to Engineering


# Master-Slave Synchronization of Stochastic Neural Networks with Mixed Time-Varying Delays

**Academic Editor:** Xue-Jun Xie

#### Abstract

This paper investigates the problem of master-slave synchronization for stochastic neural networks with both time-varying and distributed time-varying delays. Using the drive-response concept, the LMI approach, and a generalized convex combination, a novel synchronization criterion is obtained in terms of LMIs; the condition depends heavily on the upper and lower bounds of both the state delay and the distributed one. Moreover, the addressed systems include some well-known network models as special cases, which means that our methods extend existing ones. Finally, two numerical examples are given to demonstrate the effectiveness of the presented scheme.

#### 1. Introduction

In the past decade, synchronization of chaotic systems has attracted considerable attention since the pioneering work of Pecora and Carroll [1], which showed that, when certain conditions are satisfied, a chaotic system (the slave/response system) may become synchronized to another identical chaotic system (the master/drive system) if the master system sends driving signals to the slave one. It is now widely known that synchronization, or chaos synchronization, has many benefits in various engineering fields, such as secure communication [2], image processing [3], and harmonic oscillation generation. Synchronization also appears in language development, where it gives rise to a common vocabulary, while agents' synchronization in organization management improves their work efficiency. Recently, chaos synchronization has been widely investigated due to its great application potential. In particular, since artificial neural network models can exhibit chaotic behaviors [4, 5], their synchronization has become an important area of study; see [6–23] and the references therein. As special complex networks, delayed neural networks have also been found to exhibit complex and unpredictable behaviors, including stable equilibria, periodic oscillations, bifurcation, and chaotic attractors [24–27]. At present, many works dealing with chaos synchronization phenomena in delayed neural networks have appeared. Using various techniques such as the LMI tool, the M-matrix approach, and Jensen's inequality, some elegant results have been derived for the global synchronization of various delayed neural networks, including discrete-time ones, in [6–14]. Moreover, some authors have considered problems of adaptive synchronization and related synchronization schemes in [15, 16].

Meanwhile, it is worth noting that, like time delays and parameter uncertainties, noises are ubiquitous in both natural and man-made systems, and the stochastic effects on neural networks have drawn particular attention. Thus a large number of elegant results concerning the dynamics of stochastic neural networks have already been presented in [17–23, 28, 29]. Since noise can induce stability, instability, and oscillations in a system, and by virtue of the stability theory for stochastic differential equations, there has been increasing interest in the study of synchronization for delayed neural networks with stochastic perturbations [17–23]. Based on the LMI technique, some novel results on global synchronization have been derived in [17–19] for networks involving distributed delays or of neutral type. The works [20–23] have considered adaptive synchronization and lag synchronization for stochastic delayed neural networks. However, the control schemes in [17–19] cannot handle the case where the upper bound of the delay's derivative is not less than 1, and the results presented in [20–23] are not formulated in terms of LMIs, which makes them inconvenient to check with recently developed algorithms. Meanwhile, to better reflect practical situations, distributed delays should be taken into consideration, and some researchers have begun to give preliminary discussions in [9–11, 19]. It is also worth pointing out that the range of the time delays considered in [17–23] is from 0 to an upper bound. In practice, the delay may vary within a range whose lower bound is not restricted to be 0. Thus the criteria in the above literature can be conservative because they do not use the information on the lower bound of the delay.
Meanwhile, it has been verified that the convex combination idea is more efficient than some earlier techniques for tackling time-varying delays; furthermore, this idea still needs improvement, since it has not taken the distributed delay into consideration [30]. Yet few authors have employed an improved convex combination to study stochastic neural networks with both time-varying and distributed time-varying delays and proposed a less conservative, easy-to-test control scheme for exponential synchronization, which constitutes the main focus of the present work.

Motivated by the above discussion, this paper focuses on exponential synchronization for a broad class of stochastic neural networks with mixed time-varying delays, in which the two involved delays belong to given intervals. The form of the addressed networks includes several well-known neural network models as special cases. Using the drive-response concept and the Lyapunov stability theorem, a memory control law is proposed which guarantees the exponential synchronization of the drive system and the response one. Finally, two illustrative examples are given to show that the obtained results improve some earlier reported works.

*Notation 1. *For symmetric matrix (resp., ) means that is a positive-definite (resp., positive-semidefinite) matrix; represent the transposes of matrices and , respectively. For denotes the family of continuous functions from to with the norm . Let be a complete probability space with a filtration satisfying the usual conditions; is the family of all -measurable -valued random variables such that , where stands for the mathematical expectation operator with respect to the given probability measure ; denotes the identity matrix with an appropriate dimension and with denoting the symmetric term in a symmetric matrix.

#### 2. Problem Formulations

Consider the following stochastic neural networks with time-varying delays described by where is the neuron state vector, represents the neuron activation function, is a constant external input vector, and are the connection weight matrix, the delayed connection weight matrix, and the distributed-delay connection weight matrix, respectively.

In this paper, we consider system (2.1) as the master system and the slave system as follows: with , where are constant matrices similar to the relevant ones in (2.1) and is the appropriate control input that will be designed in order to achieve a certain control objective. In practical situations, the output signals of the drive system (2.1) can be received by the response one (2.2).

The following assumptions are imposed on systems (2.1) and (2.2) throughout the paper. (A1) Here and denote the time-varying delay and the distributed one satisfying and we introduce , and . (A2) Each function is locally Lipschitz, and there exist positive scalars and such that for all . Here, we denote and . (A3) For the constants , the neuron activation functions in (2.1) are bounded and satisfy (A4) In system (2.2), the function is locally Lipschitz continuous and satisfies the linear growth condition as well. Moreover, satisfies the following condition: where are the known constant matrices of appropriate dimensions.
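Several displayed conditions in (A2)–(A4) were lost in extraction and cannot be recovered exactly. For orientation, sector-type activation bounds of the kind used in such assumptions are conventionally written as follows (a standard form only, not the paper's exact display; the bound names $l_i^{-}$, $l_i^{+}$ are illustrative):

```latex
l_i^{-} \;\le\; \frac{f_i(u) - f_i(v)}{u - v} \;\le\; l_i^{+},
\qquad \forall\, u, v \in \mathbb{R},\ u \ne v,\ i = 1, \dots, n.
```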

Let be the error state; subtracting (2.1) from (2.2) yields the synchronization error dynamical system as follows: where . One can check that the function satisfies , and Moreover, we denote , , and In this paper, we adopt the following definition.

*Definition 2.1 (see [18]). *For the system (2.6) and every initial condition , the trivial solution is globally exponentially stable in the mean square, if there exist two positive scalars such that
where stands for the mathematical expectation and are the initial conditions of systems (2.1) and (2.2), respectively.
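The displayed inequality in Definition 2.1 has been stripped above. In the standard formulation of mean-square exponential stability (cf. [18]), the bound takes the following form — a reconstruction sketch, in which the symbols $\alpha$, $\beta$, $\bar{\tau}$, $\phi$, $\psi$ are our illustrative names:

```latex
\mathbb{E}\,\|e(t)\|^{2} \;\le\; \alpha\, e^{-\beta t}
\sup_{-\bar{\tau} \le s \le 0} \mathbb{E}\,\|\phi(s) - \psi(s)\|^{2},
\qquad t \ge 0 .
```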

In many real applications, we are interested in designing a memoryless state-feedback controller , where is a constant gain matrix. In this paper, for the special case that information on the size of is available, we consider a delayed feedback controller of the following form: substituting it into system (2.6) yields The purpose of this paper is then to design a controller in (2.10) such that the slave system (2.2) synchronizes with the master system (2.1).
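The controller display in (2.10) is missing above; delayed state-feedback laws of this kind are conventionally written as below (a sketch; the gain names $K_1$, $K_2$ are ours, and the memoryless case mentioned first corresponds to $K_2 = 0$):

```latex
u(t) \;=\; K_{1}\, e(t) \;+\; K_{2}\, e\bigl(t - \tau(t)\bigr),
```

with constant gain matrices $K_1$ and $K_2$ to be designed.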

#### 3. Main Results

In this section, some lemmas are introduced firstly.

Lemma 3.1 (see [18]). *For any symmetric matrix , scalar , vector function such that the integrations concerned are well defined, then .*
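The lemma's displayed inequality is missing above. The standard Jensen-type integral inequality it refers to (cf. [18]) reads as follows — the symbols $M$, $r$, $\omega$ are our illustrative names:

```latex
\left( \int_{0}^{r} \omega(s)\, ds \right)^{\!T} M
\left( \int_{0}^{r} \omega(s)\, ds \right)
\;\le\; r \int_{0}^{r} \omega(s)^{T} M\, \omega(s)\, ds,
\qquad M = M^{T} > 0,\ r > 0 .
```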

Lemma 3.2 (see [19]). *Given constant matrices , where , then the linear matrix inequality (LMI) is equivalent to the condition: ,.*
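The equivalence stated in Lemma 3.2 is the standard Schur complement lemma; its stripped display conventionally reads (block names $S_{11}$, $S_{12}$, $S_{22}$ are ours):

```latex
\begin{bmatrix} S_{11} & S_{12} \\ S_{12}^{T} & S_{22} \end{bmatrix} < 0
\quad \Longleftrightarrow \quad
S_{22} < 0 \ \text{ and } \ S_{11} - S_{12}\, S_{22}^{-1} S_{12}^{T} < 0 .
```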

Lemma 3.3 (see [31]). *Suppose that are the constant matrices of the appropriate dimensions, , and , then the inequality holds, if the four inequalities hold simultaneously.*
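The formulas of Lemma 3.3 are stripped above; the underlying convex-combination principle is standard: a matrix-valued function that is affine in two scalar parameters is negative definite on a box if and only if it is negative definite at the box's four vertices. A hedged sketch (the symbols are ours, not the paper's):

```latex
\Xi(\theta_{1}, \theta_{2}) = \Xi_{0} + \theta_{1}\Xi_{1} + \theta_{2}\Xi_{2} < 0
\quad \forall\, \theta_{i} \in [\underline{\theta}_{i}, \overline{\theta}_{i}]
\;\Longleftrightarrow\;
\Xi(\theta_{1}, \theta_{2}) < 0 \ \text{at the four vertices}\
(\theta_{1}, \theta_{2}) \in \{\underline{\theta}_{1}, \overline{\theta}_{1}\}
\times \{\underline{\theta}_{2}, \overline{\theta}_{2}\}.
```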

Next, a novel criterion is presented for the exponential stability of system (2.11), which guarantees that the master system (2.1) and the slave system (2.2) achieve synchronization.

Theorem 3.4. *Suppose that assumptions (A1)–(A4) hold. Then system (2.11) has one equilibrium point and is globally exponentially stable in the mean square, if there exist matrices diagonal matrices matrices , and one scalar such that the matrix inequalities (3.1)-(3.2) hold:
**
where and
**
With
*

*Proof. *Denoting , we represent system (2.11) as the following equivalent form:
Now, together with assumptions (A1) and (A2), we construct the following Lyapunov-Krasovskii functional:
where
with setting , and . In the following, the weak infinitesimal operator of the stochastic process is given in [32].

By employing (A1) and (A2) and directly computing , it follows from any matrices that
where and .

Now adding the terms on the right side of (3.8)–(3.11) to and employing (2.5), (3.1), it is easy to obtain
Based on methods in [33] and (2.7), for any diagonal matrices , the following inequality can be achieved:
From (A1), for any diagonal matrix , one can yield
Furthermore, for any constant matrices , we can obtain
where
Then together with the methods in [28, 29], combining (3.12)–(3.15) yields
where are presented in (3.2) and
Together with Lemmas 3.2 and 3.3, the nonlinear matrix inequalities in (3.2) can guarantee to be true. Therefore, there must exist a negative scalar such that
Taking the mathematical expectation of (3.19), we can deduce , which indicates that the dynamics of system (2.11) are globally asymptotically stable in the mean square. Based on in (3.6) and direct computation, there must exist three positive scalars such that
Letting , we can deduce
By changing the integration sequence, it can be deduced that
Substituting the terms (3.22) into the relevant ones in (3.21), it is easy to have
where . Choose a sufficiently small scalar such that . Then, . By direct computation, there must exist a positive scalar such that
Meanwhile, . Thus with (3.24), one can obtain
which indicates that system (2.11) is globally exponentially stable in the mean square, and the proof is completed.

*Remark 3.5. *As for systems (2.1) and (2.2), much of the existing literature has paid attention to with a positive-definite diagonal matrix, which can be checked as a special case of assumption (A3). Also, in Theorem 3.4, it can be verified that in (3.17) was not simply enlarged by , but equivalently guaranteed by utilizing the two matrix inequalities (3.2) and Lemma 3.3, which can be more effective than the techniques employed in [18, 28, 29]. Moreover, we compute and estimate in (3.11) more efficiently than existing works, because some previously ignored terms have been taken into consideration.

In order to show the design of the estimator gain matrices and , a simple transformation is made to obtain the following theorem.

Theorem 3.6. *Suppose that assumptions (A1)–(A4) hold and set . Then systems (2.1) and (2.2) can exponentially achieve master-slave synchronization in the mean square, if there exist matrices diagonal matrices matrices , and one scalar such that the LMIs in (3.26)-(3.27) hold
**
where are similar to the relevant ones in (3.2), and
**
with
**
Moreover, the estimation gains and . *

*Proof. *Letting and setting in (3.2) of Theorem 3.4, it is easy to derive the result and the detailed proof is omitted here.

*Remark 3.7. *Theorem 3.6 presents a novel delay-dependent criterion guaranteeing that systems (2.1) and (2.2) achieve master-slave synchronization in an exponential way. The method is formulated in terms of LMIs; therefore, by using the LMI Toolbox in MATLAB, it is straightforward and convenient to check the feasibility of the proposed results without tuning any parameters. Moreover, the systems addressed in this paper include some famous networks in [17, 19–21, 23] as special cases, even when the delay is not differentiable.
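The paper's full LMIs (3.26)-(3.27) are not reproduced here, and it checks them with MATLAB's LMI Toolbox. As a small-scale illustration of the same feasibility idea in Python, the basic Lyapunov LMI $A^{T}P + PA < 0$, $P > 0$ can be tested by solving a Lyapunov equation and checking positive definiteness (the matrix `A` below is a hypothetical stand-in, not the paper's data):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical stable system matrix (stand-in for error dynamics).
A = np.array([[-4.0, 1.0],
              [0.5, -3.0]])

# Feasibility of  A^T P + P A < 0, P > 0  is equivalent to the
# solution P of  A^T P + P A = -I  being positive definite.
P = solve_continuous_lyapunov(A.T, -np.eye(2))
feasible = bool(np.all(np.linalg.eigvalsh(P) > 0))
print(feasible)
```

Full synchronization LMIs with slack variables would instead be fed to a semidefinite-programming solver, but the feasibility test reduces to the same positive-definiteness check.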

*Remark 3.8. *By setting in (3.6) and employing similar methods, Theorems 3.4 and 3.6 become applicable without taking the upper bound on the derivative of into consideration, which means that Theorems 3.4 and 3.6 remain true even when this bound is unknown.

*Remark 3.9. *As is well known, most of the free-weighting matrices in Theorems 3.4 and 3.6 cannot help reduce the conservatism and only increase the computational complexity. Thus we can choose the simplified slack matrices as follows:
with matrices . Though the number of matrix variables in (3.30) is much smaller than that in (3.2) and (3.27), the numerical examples given in this paper demonstrate that the simplified criteria reduce the conservatism as effectively as Theorems 3.4 and 3.6 do.

#### 4. Numerical Examples

In this section, two numerical examples will be given to illustrate the effectiveness of the proposed results.

*Example 4.1. *Consider the drive system (2.1) and response one (2.2) of delayed neural networks as follows:
Then it is easy to check that , and
By setting and utilizing Theorem 3.6, then the estimator gain matrices and in (2.10) can be worked out
Furthermore, as for , and setting , we can obtain the following estimator gain matrices by using Theorem 3.6 and Remark 3.8:
which means that the obtained results still hold when the time delay is not differentiable. However, the methods proposed in [17–19] fail to solve the synchronization problem even without the distributed delay.
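Example 4.1's system matrices and gains were lost in extraction, so they cannot be reproduced. The master-slave simulation itself can still be sketched: the Euler-Maruyama loop below uses entirely hypothetical parameters (a 2-neuron tanh network, a strong stabilizing feedback gain `K1`, and multiplicative noise vanishing at zero error) and merely illustrates that, under such a gain, the slave state tracks the master in the mean-square sense:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed, not the paper's actual data).
C = np.eye(2)                          # self-feedback matrix
A = np.array([[2.0, -0.1],
              [-5.0, 3.0]])            # connection weight matrix
K1 = -10.0 * np.eye(2)                 # hypothetical stabilizing gain

f = np.tanh                            # activation function

dt, T = 1e-3, 5.0
x = np.array([0.4, 0.6])               # master state
y = np.array([-0.8, 1.2])              # slave state
e0 = np.linalg.norm(y - x)             # initial error norm

for _ in range(int(T / dt)):
    dw = rng.normal(0.0, np.sqrt(dt), size=2)   # Brownian increment
    e = y - x
    # Master dynamics (deterministic part only, for simplicity).
    x = x + (-C @ x + A @ f(x)) * dt
    # Slave dynamics: same form plus feedback u = K1 e and a small
    # multiplicative noise term that vanishes at e = 0.
    y = y + (-C @ y + A @ f(y) + K1 @ e) * dt + 0.1 * e * dw

eT = np.linalg.norm(y - x)             # final error norm
print(eT < 0.1 * e0)
```

With the chosen gain, the linearized error dynamics are strongly contracting, so the error norm shrinks by several orders of magnitude over the simulated horizon; weaker gains (or none) would leave the trajectories unsynchronized.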

*Example 4.2. *As a special case, we consider the master system (2.1) of delayed stochastic neural networks as follows:
where , and , . It can be verified that , and . The activation functions can be taken as . The corresponding slave system can be