Abstract

The generalized projective synchronization (GPS) between two different neural networks with nonlinear coupling and mixed time delays is considered. Several kinds of nonlinear feedback controllers are designed to achieve GPS between two such different neural networks. Some results for the GPS of these neural networks are proved theoretically by using the Lyapunov stability theory and the LaSalle invariance principle. Moreover, by comparison, we determine an optimal nonlinear controller among them and provide an adaptive update law for it. Computer simulations are provided to show the effectiveness and feasibility of the proposed methods.

1. Introduction

Over the past two decades, the synchronization of complex networks has attracted a great deal of attention due to its potential applications in various fields, such as physics, mathematics, secure communication, engineering, automatic control, biology, and sociology [1–9]. In the literature, many synchronization patterns, which define the correlated in-time behaviors among the nodes of a dynamical network, have been widely studied, for example, complete synchronization [10–12], lag synchronization [13–15], anti-synchronization [16–18], phase synchronization [19–21], projective synchronization [22–32], and so on. Projective synchronization reflects a kind of proportionality between the synchronized states, so it is an interesting research topic with many applications. For instance, if this proportional feature is applied to M-nary digital communication, the communication speed can be accelerated substantially. In view of this merit, many researchers have devoted themselves to generalized projective synchronization.

Recent years have witnessed many achievements on projective synchronization between two identical complex dynamical networks [22–30]. We introduce three typical references here. In [28], Chen et al. studied projective synchronization of time-delayed chaotic systems in a drive-response complex network, where the nodes are not partially linear and the scaling factors differ from each other. In [29], Feng et al. investigated projective-anticipating and projective-lag synchronization on complex dynamical networks composed of a large number of interconnected components, in which the node dynamics were time-delayed chaotic systems without the limitation of partial linearity. In [30], Wang et al. explored the problem of outer synchronization between two complex networks with the same topological structure and time-varying coupling delay, introducing a new mixed outer synchronization behavior. In addition, a novel nonfragile linear state feedback controller was designed to realize the mixed outer synchronization between the two networks, and the result was proved analytically by using the Lyapunov-Krasovskii stability theory.

However, in the real world, synchronizing two different complex networks is closer to reality. Here, "different" means that the drive and response networks have different node dynamics, different numbers of nodes, or different topological structures. Recently, some related works have appeared, such as [31, 32]. In [31], Zheng et al. investigated adaptive projective synchronization between two complex networks with different topological structures whose systems contained time-varying delays. Some results on topology identification were also obtained, which can be seen as a highlight of that paper. In [32], the generalized projective synchronization with the above three differences was investigated based on the LaSalle invariance principle. However, the complex network model there has only linear coupling and coupling time delay.

Due to the finite information transmission and processing speeds among the units, connection delays must be taken into account in the realistic modeling of many large networks with communication. Therefore, it is important to study the effect of time delay on the synchronization of coupled systems. Usually, time delay involves two parts. One is the delay inside the systems, called the internal delay. The other is caused by the exchange of information between systems and is referred to as the coupling delay. Moreover, nonlinear functions can describe more natural phenomena. Hence, both the internal delay and nonlinear coupling functions are introduced into the neural networks considered in this paper. What is more, the nonlinear functions in the drive and response networks are also different. In particular, three comparable nonlinear controllers are presented to realize the GPS based on the LaSalle invariance principle and some basic inequalities, and an optimal nonlinear controller is determined by comparing them. In the last theorem, we apply an adaptive control technique to the optimal nonlinear controller in order to make the feedback control gain small enough.

Notation. Throughout this paper, $R^n$ and $R^{m\times n}$ denote the $n$-dimensional Euclidean space and the set of $m\times n$ real matrices, respectively. $\lambda_{\min}(A)$ represents the smallest eigenvalue of a symmetric matrix $A$. $\otimes$ denotes the Kronecker product. The superscript $T$ in $x^T$ or $A^T$ denotes the transpose of the vector $x\in R^n$ or the matrix $A\in R^{m\times n}$. $I_n$ is the $n\times n$ identity matrix.

2. Model Description and Preliminaries

Consider a general neural network with mixed time delays consisting of $N_1$ nodes and nonlinear couplings, which is described as follows:

$$\dot{x}_i(t) = -C_1 x_i(t) + A_1 f_1(x_i(t)) + B_1 g_1(x_i(t-\tau_1)) + \sum_{j=1}^{N_1} G_{ij}\,\Gamma_1 h_1(x_j(t)) + \sum_{j=1}^{N_1} G^{\tau}_{ij}\,\Gamma_2 h_1(x_j(t-\tau_2)), \quad i\in\mathcal{I}_1, \qquad (2.1)$$

where $x_i(t)=(x_{i1}(t),x_{i2}(t),\ldots,x_{in}(t))^T\in R^n$ ($i\in\mathcal{I}_1=\{1,2,\ldots,N_1\}$) is the state vector of the $i$th node at time $t$; $C_1=\operatorname{diag}\{c_{11},c_{12},\ldots,c_{1n}\}$ is the decay constant matrix with $c_{1m}>0$ ($m\in\{1,2,\ldots,n\}$); $A_1=(a^1_{ij})_{n\times n}$ and $B_1=(b^1_{ij})_{n\times n}$ are system matrices; $f_1(x_i(t))=[f_{11}(x_{i1}(t)),f_{12}(x_{i2}(t)),\ldots,f_{1n}(x_{in}(t))]^T$, $g_1(x_i(t))=[g_{11}(x_{i1}(t)),g_{12}(x_{i2}(t)),\ldots,g_{1n}(x_{in}(t))]^T$, and $h_1(x_i(t))=[h_{11}(x_{i1}(t)),h_{12}(x_{i2}(t)),\ldots,h_{1n}(x_{in}(t))]^T$ are the continuous activation functions of the neurons; the positive constants $\tau_1$ and $\tau_2$ are the internal delay and the coupling delay, respectively; $\Gamma_1$ and $\Gamma_2$ are the inner coupling matrices at times $t$ and $t-\tau_2$, respectively, which describe the way the components of each pair of connected nodes are linked; $G=(G_{ij})_{N_1\times N_1}$ and $G^{\tau}=(G^{\tau}_{ij})_{N_1\times N_1}$ are coupling configuration matrices, which are not necessarily irreducible or symmetric.

In this paper, the neural network (2.1) is used as the drive network, and the response neural network consisting of $N_2$ nodes is expressed by

$$\dot{y}_i(t) = -C_2 y_i(t) + A_2 f_2(y_i(t)) + B_2 g_2(y_i(t-\tau_1)) + \sum_{j=1}^{N_2} H_{ij}\,\Gamma_1 h_2(y_j(t)) + \sum_{j=1}^{N_2} H^{\tau}_{ij}\,\Gamma_2 h_2(y_j(t-\tau_2)) + u_i(t), \quad i\in\mathcal{I}_2, \qquad (2.2)$$

where $y_i(t)=(y_{i1}(t),y_{i2}(t),\ldots,y_{in}(t))^T\in R^n$ ($i\in\mathcal{I}_2=\{1,2,\ldots,N_2\}$) is the state vector of the $i$th node at time $t$. Without loss of generality, we suppose $N_1\ge N_2>0$. $C_2=\operatorname{diag}\{c_{21},c_{22},\ldots,c_{2n}\}$ is the decay constant matrix with $c_{2m}>0$ ($m\in\{1,2,\ldots,n\}$); $A_2=(a^2_{ij})_{n\times n}$ and $B_2=(b^2_{ij})_{n\times n}$ are system matrices; $f_2(y_i(t))=[f_{21}(y_{i1}(t)),\ldots,f_{2n}(y_{in}(t))]^T$, $g_2(y_i(t))=[g_{21}(y_{i1}(t)),\ldots,g_{2n}(y_{in}(t))]^T$, and $h_2(y_i(t))=[h_{21}(y_{i1}(t)),\ldots,h_{2n}(y_{in}(t))]^T$ are the continuous activation functions of the neurons; $\tau_1$, $\tau_2$, $\Gamma_1$, and $\Gamma_2$ have the same meaning as in (2.1); $u_i(t)$ is the controller to be designed; $H=(H_{ij})_{N_2\times N_2}$ and $H^{\tau}=(H^{\tau}_{ij})_{N_2\times N_2}$ are coupling configuration matrices, which are not necessarily irreducible or symmetric either.
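For concreteness, the right-hand side of a network of the form (2.1) can be sketched in NumPy as follows. This is an illustrative sketch, not code from the paper: the function name `drive_rhs`, the row-stacked state layout, and the random parameters below are all assumptions, and the activation functions are passed in as callables (the simulations in Section 4 use tanh).

```python
import numpy as np

def drive_rhs(x, x_tau1, x_tau2, C1, A1, B1, G, G_tau, Gam1, Gam2, f1, g1, h1):
    """Right-hand side of a drive network of the form (2.1).

    x, x_tau1, x_tau2 : (N1, n) arrays holding x_i(t), x_i(t - tau1), x_i(t - tau2)
    stacked row-wise.  Returns an (N1, n) array whose i-th row is dx_i/dt.
    """
    # node-local dynamics: -C1 x_i + A1 f1(x_i) + B1 g1(x_i(t - tau1))
    local = -x @ C1.T + f1(x) @ A1.T + g1(x_tau1) @ B1.T
    # nonlinear couplings:
    #   sum_j G_ij Gam1 h1(x_j(t)) + sum_j G^tau_ij Gam2 h1(x_j(t - tau2))
    coupled = G @ h1(x) @ Gam1.T + G_tau @ h1(x_tau2) @ Gam2.T
    return local + coupled

# Illustrative sizes and random parameters (not the simulation values of Section 4).
rng = np.random.default_rng(0)
n, N1 = 2, 6
C1, A1, B1 = np.diag([1.0, 1.0]), rng.normal(size=(n, n)), rng.normal(size=(n, n))
G, G_tau = rng.normal(size=(N1, N1)), rng.normal(size=(N1, N1))
x = rng.normal(size=(N1, n))
dx = drive_rhs(x, x, x, C1, A1, B1, G, G_tau, np.eye(n), np.eye(n),
               np.tanh, np.tanh, np.tanh)
```

Because each inner coupling matrix acts linearly on every node, the double sum over nodes and components reduces to two matrix products, which is also how the Kronecker products $G\otimes\Gamma_1$ appearing later in the proofs arise.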

Now, two mathematical definitions for the generalized projective synchronization are introduced as follows.

Definition 2.1. If there is a nonzero constant $\sigma$ such that

$$\lim_{t\to+\infty}\|y_i(t)-\sigma x_i(t)\|=0, \quad i\in\mathcal{I}_2, \qquad (2.3)$$

then the GPS between neural networks (2.1) and (2.2) is said to be achieved. The parameter $\sigma$ is called the scaling factor.

Definition 2.2. A continuous function $f(\cdot): R\to R$ is said to belong to the nonnegative-bound function class, denoted $f\in\mathrm{NBF}(\xi)$, if there exists a positive scalar $\xi$ such that

$$0 \le \frac{f(x)-f(y)}{x-y} \le \xi \qquad (2.4)$$

holds for any $x,y\in R$ with $x\neq y$.
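As a quick sanity check on this definition, the activation tanh used in the simulations of Section 4 belongs to NBF(1): by the mean value theorem the difference quotient equals $1-\tanh^2(c)$ for some $c$ between $x$ and $y$, which lies in $(0,1]$. A minimal randomized spot-check (the sample sizes and tolerance are arbitrary choices):

```python
import numpy as np

# Definition 2.2 asks that (f(x) - f(y))/(x - y) stay in [0, xi] for x != y.
# For f = tanh the quotient equals 1 - tanh(c)^2 for some c between x and y,
# which lies in (0, 1], so tanh is in NBF(1).  Spot-check on random pairs:
rng = np.random.default_rng(1)
x = rng.uniform(-10.0, 10.0, size=10_000)
y = rng.uniform(-10.0, 10.0, size=10_000)
q = (np.tanh(x) - np.tanh(y)) / (x - y)
ok = bool(np.all(q >= 0.0) and np.all(q <= 1.0 + 1e-9))  # tolerance for rounding
```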
The following hypothesis is used throughout the paper.

Assumption 2.3. For the activation functions $f_{1i}, f_{2i}, g_{1i}, g_{2i}, h_{1i}, h_{2i}$ ($i\in\{1,2,\ldots,n\}$), there exist positive constants $\theta_i, \beta_i, \mu_i, \gamma_i, \pi_i, \delta_i$ ($i\in\{1,2,\ldots,n\}$) such that $f_{1i}(\cdot)\in\mathrm{NBF}(\theta_i)$, $f_{2i}(\cdot)\in\mathrm{NBF}(\beta_i)$, $g_{1i}(\cdot)\in\mathrm{NBF}(\mu_i)$, $g_{2i}(\cdot)\in\mathrm{NBF}(\gamma_i)$, $h_{1i}(\cdot)\in\mathrm{NBF}(\pi_i)$, and $h_{2i}(\cdot)\in\mathrm{NBF}(\delta_i)$.
For convenience, we denote

$$\theta=\max_{1\le i\le n}\theta_i,\quad \beta=\max_{1\le i\le n}\beta_i,\quad \mu=\max_{1\le i\le n}\mu_i,\quad \gamma=\max_{1\le i\le n}\gamma_i,\quad \pi=\max_{1\le i\le n}\pi_i,\quad \delta=\max_{1\le i\le n}\delta_i. \qquad (2.5)$$

Lemma 2.4 (see [33]). For a matrix $B=(b_{ij})\in R^{p\times q}$, denote $\alpha(B)=\frac{1}{2}\max\{p,q\}\max_{i,j}|b_{ij}|$; then

$$x^T B y \le \alpha(B)\,\big(x^T x + y^T y\big) \qquad (2.6)$$

holds for all $x\in R^p$, $y\in R^q$.
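The bound follows from $x^TBy \le \max_{i,j}|b_{ij}|\sum_{i,j}|x_i||y_j|$ together with $|x_i||y_j|\le(x_i^2+y_j^2)/2$. A minimal randomized check of the inequality (the sample sizes and tolerance are arbitrary choices, not from the paper):

```python
import numpy as np

def alpha(B):
    """alpha(B) = (1/2) * max{p, q} * max_{i,j} |b_ij| for B in R^{p x q} (Lemma 2.4)."""
    p, q = B.shape
    return 0.5 * max(p, q) * np.abs(B).max()

# Randomized check of x^T B y <= alpha(B) * (x^T x + y^T y).
rng = np.random.default_rng(2)
violations = 0
for _ in range(1000):
    p, q = rng.integers(1, 6, size=2)          # random shapes in {1, ..., 5}
    B = rng.normal(size=(p, q))
    x, y = rng.normal(size=p), rng.normal(size=q)
    if x @ B @ y > alpha(B) * (x @ x + y @ y) + 1e-12:
        violations += 1
```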

3. GPS between Two Different Neural Networks with Mixed Time Delays

In this section, we study the GPS between two different neural networks with mixed time delays by means of the LaSalle invariance principle, the Lyapunov direct method, and the nonlinear feedback control technique.

Define the synchronization errors between the drive network (2.1) and the response network (2.2) as $e_i(t)=y_i(t)-\sigma x_i(t)$, $i\in\mathcal{I}_2$; then we have the following error system:

$$\begin{aligned}\dot{e}_i(t) ={}& -C_2 e_i(t) + (C_1-C_2)\sigma x_i(t) + A_2 f_2(y_i(t)) - \sigma A_1 f_1(x_i(t)) + B_2 g_2(y_i(t-\tau_1)) - \sigma B_1 g_1(x_i(t-\tau_1)) \\
&+ \sum_{j=1}^{N_2} H_{ij}\Gamma_1 h_2(y_j(t)) - \sigma\sum_{j=1}^{N_1} G_{ij}\Gamma_1 h_1(x_j(t)) + \sum_{j=1}^{N_2} H^{\tau}_{ij}\Gamma_2 h_2(y_j(t-\tau_2)) - \sigma\sum_{j=1}^{N_1} G^{\tau}_{ij}\Gamma_2 h_1(x_j(t-\tau_2)) + u_i(t), \quad i\in\mathcal{I}_2. \end{aligned} \qquad (3.1)$$

Theorem 3.1. Suppose Assumption 2.3 holds. If the nonlinear controllers are chosen as

$$\begin{aligned}u_i(t) ={}& \sigma C_2 x_i(t) - C_1 y_i(t) - k_i e_i(t) - A_2 f_2(\sigma x_i(t)) + \sigma A_1 f_1\Big(\frac{y_i(t)}{\sigma}\Big) - B_2 g_2(\sigma x_i(t-\tau_1)) + \sigma B_1 g_1\Big(\frac{y_i(t-\tau_1)}{\sigma}\Big) \\
&- \sum_{j=1}^{N_2}\Big[H_{ij}\Gamma_1 h_2(\sigma x_j(t)) + H^{\tau}_{ij}\Gamma_2 h_2(\sigma x_j(t-\tau_2))\Big] + \sigma\sum_{j=1}^{N_2}\Big[G_{ij}\Gamma_1 h_1\Big(\frac{y_j(t)}{\sigma}\Big) + G^{\tau}_{ij}\Gamma_2 h_1\Big(\frac{y_j(t-\tau_2)}{\sigma}\Big)\Big] \\
&+ \sigma\sum_{j=N_2+1}^{N_1}\Big[G_{ij}\Gamma_1 h_1(x_j(t)) + G^{\tau}_{ij}\Gamma_2 h_1(x_j(t-\tau_2))\Big], \quad i\in\mathcal{I}_2, \end{aligned} \qquad (3.2)$$

where the $k_i$ are the feedback control gains, let $k=\min_{i\in\mathcal{I}_2}\{k_i\}$, and if

$$\begin{aligned}k \ge{}& -\lambda_{\min}(C_1+C_2) + \alpha(A_2)(\beta^2+1) + \alpha(B_2)(1+\gamma^2) + \alpha(A_1)\Big(|\sigma|+\frac{\theta^2}{|\sigma|}\Big) + \alpha(B_1)\Big(|\sigma|+\frac{\mu^2}{|\sigma|}\Big) \\
&+ \alpha(H\otimes\Gamma_1)(\delta^2+1) + \alpha(H^{\tau}\otimes\Gamma_2)(1+\delta^2) + \alpha(G\otimes\Gamma_1)\Big(|\sigma|+\frac{\pi^2}{|\sigma|}\Big) + \alpha(G^{\tau}\otimes\Gamma_2)\Big(|\sigma|+\frac{\pi^2}{|\sigma|}\Big) + \epsilon_1, \end{aligned} \qquad (3.3)$$

where $\epsilon_1$ is a positive constant, then the GPS between the two neural networks (2.1) and (2.2) can be achieved.

Proof. Consider the Lyapunov functional candidate

$$V(t)=\frac{1}{2}\sum_{i=1}^{N_2} e_i^T(t)e_i(t) + \Big(\frac{\alpha(B_1)\mu^2}{|\sigma|}+\alpha(B_2)\gamma^2\Big)\sum_{i=1}^{N_2}\int_{t-\tau_1}^{t} e_i^T(s)e_i(s)\,ds + \Big(\frac{\alpha(G^{\tau}\otimes\Gamma_2)\pi^2}{|\sigma|}+\alpha(H^{\tau}\otimes\Gamma_2)\delta^2\Big)\sum_{i=1}^{N_2}\int_{t-\tau_2}^{t} e_i^T(s)e_i(s)\,ds. \qquad (3.4)$$

Calculating $\dot{V}$ with respect to $t$ along the solution of (3.1), and noticing the nonlinear feedback controllers (3.2), one has

$$\begin{aligned}\dot{V}(t)\big|_{(3.1)} ={}& \sum_{i=1}^{N_2} e_i^T(t)\dot{e}_i(t) + \Big(\frac{\alpha(B_1)\mu^2}{|\sigma|}+\alpha(B_2)\gamma^2\Big)\sum_{i=1}^{N_2}\big[e_i^T(t)e_i(t)-e_i^T(t-\tau_1)e_i(t-\tau_1)\big] \\
&+ \Big(\frac{\alpha(G^{\tau}\otimes\Gamma_2)\pi^2}{|\sigma|}+\alpha(H^{\tau}\otimes\Gamma_2)\delta^2\Big)\sum_{i=1}^{N_2}\big[e_i^T(t)e_i(t)-e_i^T(t-\tau_2)e_i(t-\tau_2)\big] \\
={}& \sum_{i=1}^{N_2} e_i^T(t)\Big[-(C_1+C_2)e_i(t)-k_i e_i(t)+A_2\big(f_2(y_i(t))-f_2(\sigma x_i(t))\big)+B_2\big(g_2(y_i(t-\tau_1))-g_2(\sigma x_i(t-\tau_1))\big) \\
&\quad+\sum_{j=1}^{N_2}H_{ij}\Gamma_1\big(h_2(y_j(t))-h_2(\sigma x_j(t))\big)+\sum_{j=1}^{N_2}H^{\tau}_{ij}\Gamma_2\big(h_2(y_j(t-\tau_2))-h_2(\sigma x_j(t-\tau_2))\big) \\
&\quad+\sigma A_1\Big(f_1\Big(\frac{y_i(t)}{\sigma}\Big)-f_1(x_i(t))\Big)+\sigma B_1\Big(g_1\Big(\frac{y_i(t-\tau_1)}{\sigma}\Big)-g_1(x_i(t-\tau_1))\Big) \\
&\quad+\sigma\sum_{j=1}^{N_2}G_{ij}\Gamma_1\Big(h_1\Big(\frac{y_j(t)}{\sigma}\Big)-h_1(x_j(t))\Big)+\sigma\sum_{j=1}^{N_2}G^{\tau}_{ij}\Gamma_2\Big(h_1\Big(\frac{y_j(t-\tau_2)}{\sigma}\Big)-h_1(x_j(t-\tau_2))\Big)\Big] \\
&+\Big(\frac{\alpha(B_1)\mu^2}{|\sigma|}+\alpha(B_2)\gamma^2\Big)\sum_{i=1}^{N_2}\big[e_i^T(t)e_i(t)-e_i^T(t-\tau_1)e_i(t-\tau_1)\big]+\Big(\frac{\alpha(G^{\tau}\otimes\Gamma_2)\pi^2}{|\sigma|}+\alpha(H^{\tau}\otimes\Gamma_2)\delta^2\Big)\sum_{i=1}^{N_2}\big[e_i^T(t)e_i(t)-e_i^T(t-\tau_2)e_i(t-\tau_2)\big]. \end{aligned} \qquad (3.5)$$

By utilizing Lemma 2.4 and Assumption 2.3, we have the following four inequalities. The first one is

$$\sum_{i=1}^{N_2} e_i^T(t)A_2\big(f_2(y_i(t))-f_2(\sigma x_i(t))\big) \le \alpha(A_2)\sum_{i=1}^{N_2}\Big[e_i^T(t)e_i(t)+\big(f_2(y_i(t))-f_2(\sigma x_i(t))\big)^T\big(f_2(y_i(t))-f_2(\sigma x_i(t))\big)\Big] \le \alpha(A_2)\big(\beta^2+1\big)\sum_{i=1}^{N_2}e_i^T(t)e_i(t). \qquad (3.6)$$

Denote $H_2(e(t))=\big[(h_2(y_1(t))-h_2(\sigma x_1(t)))^T,(h_2(y_2(t))-h_2(\sigma x_2(t)))^T,\ldots,(h_2(y_{N_2}(t))-h_2(\sigma x_{N_2}(t)))^T\big]^T$ and $e(t)=(e_1^T(t),e_2^T(t),\ldots,e_{N_2}^T(t))^T$; then we can get the second inequality:

$$\begin{aligned}\sum_{i=1}^{N_2}\sum_{j=1}^{N_2} e_i^T(t)H_{ij}\Gamma_1\big(h_2(y_j(t))-h_2(\sigma x_j(t))\big) &= e^T(t)(H\otimes\Gamma_1)H_2(e(t)) \le \alpha(H\otimes\Gamma_1)\big[e^T(t)e(t)+H_2^T(e(t))H_2(e(t))\big] \\
&\le \alpha(H\otimes\Gamma_1)\big(\delta^2+1\big)\sum_{i=1}^{N_2}e_i^T(t)e_i(t),\end{aligned} \qquad (3.7)$$

and the third one is as follows:

$$\sum_{i=1}^{N_2} e_i^T(t)\,\sigma A_1\Big(f_1\Big(\frac{y_i(t)}{\sigma}\Big)-f_1(x_i(t))\Big) \le |\sigma|\,\alpha(A_1)\sum_{i=1}^{N_2}\Big(1+\frac{\theta^2}{\sigma^2}\Big)e_i^T(t)e_i(t) = \alpha(A_1)\Big(|\sigma|+\frac{\theta^2}{|\sigma|}\Big)\sum_{i=1}^{N_2}e_i^T(t)e_i(t). \qquad (3.8)$$

Let $H_1(e(t))=\big[(h_1(y_1(t)/\sigma)-h_1(x_1(t)))^T,(h_1(y_2(t)/\sigma)-h_1(x_2(t)))^T,\ldots,(h_1(y_{N_2}(t)/\sigma)-h_1(x_{N_2}(t)))^T\big]^T$; thus, we have the last one:

$$\sigma\sum_{i=1}^{N_2}\sum_{j=1}^{N_2} e_i^T(t)G_{ij}\Gamma_1\Big(h_1\Big(\frac{y_j(t)}{\sigma}\Big)-h_1(x_j(t))\Big) = \sigma e^T(t)(G\otimes\Gamma_1)H_1(e(t)) \le |\sigma|\,\alpha(G\otimes\Gamma_1)\sum_{i=1}^{N_2}\Big(1+\frac{\pi^2}{\sigma^2}\Big)e_i^T(t)e_i(t) = \alpha(G\otimes\Gamma_1)\Big(|\sigma|+\frac{\pi^2}{|\sigma|}\Big)\sum_{i=1}^{N_2}e_i^T(t)e_i(t). \qquad (3.9)$$
Similarly, we can obtain the following four inequalities:

$$\sum_{i=1}^{N_2} e_i^T(t)B_2\big(g_2(y_i(t-\tau_1))-g_2(\sigma x_i(t-\tau_1))\big) \le \alpha(B_2)\sum_{i=1}^{N_2}\big[e_i^T(t)e_i(t)+\gamma^2 e_i^T(t-\tau_1)e_i(t-\tau_1)\big], \qquad (3.10)$$

$$\sum_{i=1}^{N_2}\sum_{j=1}^{N_2} e_i^T(t)H^{\tau}_{ij}\Gamma_2\big(h_2(y_j(t-\tau_2))-h_2(\sigma x_j(t-\tau_2))\big) \le \alpha(H^{\tau}\otimes\Gamma_2)\sum_{i=1}^{N_2}\big[e_i^T(t)e_i(t)+\delta^2 e_i^T(t-\tau_2)e_i(t-\tau_2)\big], \qquad (3.11)$$

$$\sum_{i=1}^{N_2} e_i^T(t)\,\sigma B_1\Big(g_1\Big(\frac{y_i(t-\tau_1)}{\sigma}\Big)-g_1(x_i(t-\tau_1))\Big) \le |\sigma|\,\alpha(B_1)\sum_{i=1}^{N_2}\Big[e_i^T(t)e_i(t)+\frac{\mu^2}{\sigma^2}e_i^T(t-\tau_1)e_i(t-\tau_1)\Big], \qquad (3.12)$$

$$\sigma\sum_{i=1}^{N_2}\sum_{j=1}^{N_2} e_i^T(t)G^{\tau}_{ij}\Gamma_2\Big(h_1\Big(\frac{y_j(t-\tau_2)}{\sigma}\Big)-h_1(x_j(t-\tau_2))\Big) \le |\sigma|\,\alpha(G^{\tau}\otimes\Gamma_2)\sum_{i=1}^{N_2}\Big[e_i^T(t)e_i(t)+\frac{\pi^2}{\sigma^2}e_i^T(t-\tau_2)e_i(t-\tau_2)\Big]. \qquad (3.13)$$
Substituting (3.6)–(3.13) into (3.5), we can obtain

$$\begin{aligned}\dot{V}(t) \le{}& \Big[-\lambda_{\min}(C_1+C_2)+\alpha(A_2)(\beta^2+1)+\alpha(B_2)(1+\gamma^2)+\alpha(A_1)\Big(|\sigma|+\frac{\theta^2}{|\sigma|}\Big)+\alpha(B_1)\Big(|\sigma|+\frac{\mu^2}{|\sigma|}\Big) \\
&+\alpha(H\otimes\Gamma_1)(\delta^2+1)+\alpha(H^{\tau}\otimes\Gamma_2)(1+\delta^2)+\alpha(G\otimes\Gamma_1)\Big(|\sigma|+\frac{\pi^2}{|\sigma|}\Big)+\alpha(G^{\tau}\otimes\Gamma_2)\Big(|\sigma|+\frac{\pi^2}{|\sigma|}\Big)-k\Big]e^T(t)e(t). \end{aligned} \qquad (3.14)$$
Taking account of condition (3.3), we have $\dot{V}(t)\le-\epsilon_1 e^T(t)e(t)\le 0$.
Clearly, $S=\{e_i(t)=0,\ i\in\mathcal{I}_2\}$ is the largest invariant set contained in $\{\dot{V}(t)=0\}$. By the LaSalle invariance principle, the trajectory of (3.1) starting from any initial values asymptotically converges to the largest invariant set $S$; namely, $\lim_{t\to+\infty}\|e_i(t)\|=0$, $i\in\mathcal{I}_2$. Hence, the GPS between neural networks (2.1) and (2.2) is realized. The proof is completed.

In Theorem 3.1, the generalized projective synchronization of two different neural networks with time delays has been investigated by choosing suitable nonlinear feedback controllers. However, the controllers $u_i$ given by (3.2) require large feedback control gains, which is not practical. Hence, it is desirable to improve the scheme so as to make the feedback control gains as small as possible. We now give the following improved scheme.

Theorem 3.2. Suppose Assumption 2.3 holds. If the nonlinear controllers are chosen as

$$\begin{aligned}u_i(t) ={}& \sigma(C_2-C_1)x_i(t) - A_2 f_2(\sigma x_i(t)) + \sigma A_1 f_1(x_i(t)) - B_2 g_2(\sigma x_i(t-\tau_1)) + \sigma B_1 g_1(x_i(t-\tau_1)) \\
&- \sum_{j=1}^{N_2}\Big[H_{ij}\Gamma_1 h_2(\sigma x_j(t)) + H^{\tau}_{ij}\Gamma_2 h_2(\sigma x_j(t-\tau_2))\Big] + \sigma\sum_{j=1}^{N_1}\Big[G_{ij}\Gamma_1 h_1(x_j(t)) + G^{\tau}_{ij}\Gamma_2 h_1(x_j(t-\tau_2))\Big] - k_i e_i(t), \quad i\in\mathcal{I}_2, \end{aligned} \qquad (3.15)$$

where the $k_i$ are the feedback gains, denote $k=\min_{i\in\mathcal{I}_2}\{k_i\}$, and if

$$k \ge -\lambda_{\min}(C_2) + \alpha(A_2)(\beta^2+1) + \alpha(B_2)(1+\gamma^2) + \alpha(H\otimes\Gamma_1)(\delta^2+1) + \alpha(H^{\tau}\otimes\Gamma_2)(1+\delta^2) + \epsilon_2, \qquad (3.16)$$

where $\epsilon_2$ is a positive constant, then the GPS between neural networks (2.1) and (2.2) can be achieved.

Proof. Select the following Lyapunov functional candidate:

$$V(t)=\frac{1}{2}\sum_{i=1}^{N_2} e_i^T(t)e_i(t) + \alpha(B_2)\gamma^2\sum_{i=1}^{N_2}\int_{t-\tau_1}^{t} e_i^T(s)e_i(s)\,ds + \alpha(H^{\tau}\otimes\Gamma_2)\delta^2\sum_{i=1}^{N_2}\int_{t-\tau_2}^{t} e_i^T(s)e_i(s)\,ds. \qquad (3.17)$$

Differentiating $V$ with respect to time along (3.1), we have

$$\begin{aligned}\frac{dV}{dt}\Big|_{(3.1)} ={}& \sum_{i=1}^{N_2} e_i^T(t)\dot{e}_i(t) + \alpha(B_2)\gamma^2\sum_{i=1}^{N_2}\big[e_i^T(t)e_i(t)-e_i^T(t-\tau_1)e_i(t-\tau_1)\big] + \alpha(H^{\tau}\otimes\Gamma_2)\delta^2\sum_{i=1}^{N_2}\big[e_i^T(t)e_i(t)-e_i^T(t-\tau_2)e_i(t-\tau_2)\big] \\
={}& \sum_{i=1}^{N_2} e_i^T(t)\Big[-C_2 e_i(t)-k_i e_i(t)+A_2\big(f_2(y_i(t))-f_2(\sigma x_i(t))\big)+B_2\big(g_2(y_i(t-\tau_1))-g_2(\sigma x_i(t-\tau_1))\big) \\
&\quad+\sum_{j=1}^{N_2}H_{ij}\Gamma_1\big(h_2(y_j(t))-h_2(\sigma x_j(t))\big)+\sum_{j=1}^{N_2}H^{\tau}_{ij}\Gamma_2\big(h_2(y_j(t-\tau_2))-h_2(\sigma x_j(t-\tau_2))\big)\Big] \\
&+\alpha(B_2)\gamma^2\sum_{i=1}^{N_2}\big[e_i^T(t)e_i(t)-e_i^T(t-\tau_1)e_i(t-\tau_1)\big]+\alpha(H^{\tau}\otimes\Gamma_2)\delta^2\sum_{i=1}^{N_2}\big[e_i^T(t)e_i(t)-e_i^T(t-\tau_2)e_i(t-\tau_2)\big]. \end{aligned} \qquad (3.18)$$
Substituting (3.6), (3.7), (3.10), and (3.11) into (3.18), we can obtain

$$\dot{V}(t) \le \Big[-\lambda_{\min}(C_2)+\alpha(A_2)(\beta^2+1)+\alpha(B_2)(1+\gamma^2)+\alpha(H\otimes\Gamma_1)(\delta^2+1)+\alpha(H^{\tau}\otimes\Gamma_2)(1+\delta^2)-k\Big]e^T(t)e(t). \qquad (3.19)$$

By condition (3.16), we have $\dot{V}(t)\le-\epsilon_2 e^T(t)e(t)\le 0$. In light of the proof of Theorem 3.1, the GPS between neural networks (2.1) and (2.2) can also be achieved under the nonlinear controllers (3.15).

Remark 3.3. Comparing the infimum of the feedback control gain $k$ in Theorem 3.2 with that in Theorem 3.1 shows that

$$k_{\mathrm{res}} = -\lambda_{\min}(C_1) + \alpha(A_1)\Big(|\sigma|+\frac{\theta^2}{|\sigma|}\Big) + \alpha(B_1)\Big(|\sigma|+\frac{\mu^2}{|\sigma|}\Big) + \alpha(G\otimes\Gamma_1)\Big(|\sigma|+\frac{\pi^2}{|\sigma|}\Big) + \alpha(G^{\tau}\otimes\Gamma_2)\Big(|\sigma|+\frac{\pi^2}{|\sigma|}\Big)$$

is the extra part of the gain required in Theorem 3.1 beyond that needed in Theorem 3.2. Usually, it can be considered that $k_{\mathrm{res}}>0$, which will be demonstrated in the simulations.
Furthermore, in order to obtain much smaller feedback control gains, we choose more suitable controllers as follows.

Theorem 3.4. Suppose Assumption 2.3 holds. Under the nonlinear controllers

$$\begin{aligned}u_i(t) ={}& \sigma C_2 x_i(t) - C_1 y_i(t) - A_2 f_2(\sigma x_i(t)) + \sigma A_1 f_1(x_i(t)) - B_2 g_2(\sigma x_i(t-\tau_1)) + \sigma B_1 g_1(x_i(t-\tau_1)) \\
&- \sum_{j=1}^{N_2}\Big[H_{ij}\Gamma_1 h_2(\sigma x_j(t)) + H^{\tau}_{ij}\Gamma_2 h_2(\sigma x_j(t-\tau_2))\Big] + \sigma\sum_{j=1}^{N_1}\Big[G_{ij}\Gamma_1 h_1(x_j(t)) + G^{\tau}_{ij}\Gamma_2 h_1(x_j(t-\tau_2))\Big] - k_i e_i(t), \quad i\in\mathcal{I}_2, \end{aligned} \qquad (3.20)$$

where the $k_i$ are the feedback control gains, denote $k=\min_{i\in\mathcal{I}_2}\{k_i\}$, and if

$$k \ge -\lambda_{\min}(C_1+C_2) + \alpha(A_2)(\beta^2+1) + \alpha(B_2)(1+\gamma^2) + \alpha(H\otimes\Gamma_1)(\delta^2+1) + \alpha(H^{\tau}\otimes\Gamma_2)(1+\delta^2) + \epsilon_3, \qquad (3.21)$$

where $\epsilon_3$ is some positive constant, then the GPS between the two neural networks (2.1) and (2.2) can be achieved.

Proof. Choose the same Lyapunov functional as (3.17) in the proof of Theorem 3.2; then we can get

$$\begin{aligned}\frac{dV}{dt}\Big|_{(3.1)} ={}& \sum_{i=1}^{N_2} e_i^T(t)\Big[-(C_1+C_2)e_i(t)-k_i e_i(t)+A_2\big(f_2(y_i(t))-f_2(\sigma x_i(t))\big)+B_2\big(g_2(y_i(t-\tau_1))-g_2(\sigma x_i(t-\tau_1))\big) \\
&\quad+\sum_{j=1}^{N_2}H_{ij}\Gamma_1\big(h_2(y_j(t))-h_2(\sigma x_j(t))\big)+\sum_{j=1}^{N_2}H^{\tau}_{ij}\Gamma_2\big(h_2(y_j(t-\tau_2))-h_2(\sigma x_j(t-\tau_2))\big)\Big] \\
&+\alpha(B_2)\gamma^2\sum_{i=1}^{N_2}\big[e_i^T(t)e_i(t)-e_i^T(t-\tau_1)e_i(t-\tau_1)\big]+\alpha(H^{\tau}\otimes\Gamma_2)\delta^2\sum_{i=1}^{N_2}\big[e_i^T(t)e_i(t)-e_i^T(t-\tau_2)e_i(t-\tau_2)\big]. \end{aligned} \qquad (3.22)$$
Combining (3.6), (3.7), (3.10), (3.11), and (3.22), we can change inequality (3.19) into

$$\dot{V}(t) \le \Big[-\lambda_{\min}(C_1+C_2)+\alpha(A_2)(\beta^2+1)+\alpha(B_2)(1+\gamma^2)+\alpha(H\otimes\Gamma_1)(\delta^2+1)+\alpha(H^{\tau}\otimes\Gamma_2)(1+\delta^2)-k\Big]e^T(t)e(t). \qquad (3.23)$$

Taking condition (3.21) into account, the GPS can be realized under the nonlinear controllers (3.20).

Remark 3.5. It is easy to see that the infimum of $k$ in (3.21) has one more term, $-\lambda_{\min}(C_1)$, than (3.16). Because $C_1$ is a positive definite diagonal matrix, $-\lambda_{\min}(C_1)<0$, which demonstrates that the required $k$ in Theorem 3.4 is smaller than the one in Theorem 3.2.
If the two neural networks (2.1) and (2.2) have identical numbers of nodes, node dynamics, and topological structures, that is, $N_1=N_2$, $C_1=C_2$, $A_1=A_2$, $B_1=B_2$, $f_1=f_2$, $g_1=g_2$, $h_1=h_2$, $G_{ij}=H_{ij}$, $G^{\tau}_{ij}=H^{\tau}_{ij}$, and $\Gamma_1=\Gamma_2$, the error system (3.1) can be rewritten as follows:

$$\begin{aligned}\dot{e}_i(t) ={}& -C_1 e_i(t) + A_1\big(f_1(y_i(t))-\sigma f_1(x_i(t))\big) + B_1\big(g_1(y_i(t-\tau_1))-\sigma g_1(x_i(t-\tau_1))\big) + u_i(t) \\
&+ \sum_{j=1}^{N_1} G_{ij}\Gamma_1\big(h_1(y_j(t))-\sigma h_1(x_j(t))\big) + \sum_{j=1}^{N_1} G^{\tau}_{ij}\Gamma_2\big(h_1(y_j(t-\tau_2))-\sigma h_1(x_j(t-\tau_2))\big), \quad i\in\mathcal{I}_1. \end{aligned} \qquad (3.24)$$
Thus, we can obtain the following corollary for synchronizing the error system (3.24).

Corollary 3.6. Suppose the two neural networks (2.1) and (2.2) have identical numbers of nodes, node dynamics, and topological structures. If the nonlinear controllers

$$\begin{aligned}u_i(t) ={}& -A_1\big(f_1(\sigma x_i(t))-\sigma f_1(x_i(t))\big) - B_1\big(g_1(\sigma x_i(t-\tau_1))-\sigma g_1(x_i(t-\tau_1))\big) - k_i e_i(t) \\
&- \sum_{j=1}^{N_1} G_{ij}\Gamma_1\big(h_1(y_j(t))-\sigma h_1(x_j(t))\big) - \sum_{j=1}^{N_1} G^{\tau}_{ij}\Gamma_2\big(h_1(y_j(t-\tau_2))-\sigma h_1(x_j(t-\tau_2))\big), \quad i\in\mathcal{I}_1, \end{aligned} \qquad (3.25)$$

are employed, where the $k_i$ are the feedback control gains, denote $k=\min_{i\in\mathcal{I}_1}\{k_i\}$, and when $k \ge -\lambda_{\min}(C_1)+\alpha(A_1)(\beta^2+1)+\alpha(B_1)(1+\gamma^2)$, the error system (3.24) can be synchronized.

Proof. We construct the Lyapunov functional as follows:

$$V(t)=\frac{1}{2}\sum_{i=1}^{N_1} e_i^T(t)e_i(t) + \alpha(B_1)\gamma^2\sum_{i=1}^{N_1}\int_{t-\tau_1}^{t} e_i^T(s)e_i(s)\,ds. \qquad (3.26)$$

By the same method as in the above theorems, we can conclude that the error system (3.24) is synchronized when $k \ge -\lambda_{\min}(C_1)+\alpha(A_1)(\beta^2+1)+\alpha(B_1)(1+\gamma^2)$.
Furthermore, let us assume that system (3.24) has no time-delay terms; then system (3.24) reduces to

$$\dot{e}_i(t) = -C_1 e_i(t) + A_1\big(f_1(y_i(t))-\sigma f_1(x_i(t))\big) + \sum_{j=1}^{N_1} G_{ij}\Gamma_1\big(h_1(y_j(t))-\sigma h_1(x_j(t))\big) + u_i(t), \quad i\in\mathcal{I}_1. \qquad (3.27)$$
Then, a simpler corollary can be produced.

Corollary 3.7. The controllers

$$u_i(t) = -A_1\big(f_1(\sigma x_i(t))-\sigma f_1(x_i(t))\big) - \sum_{j=1}^{N_1} G_{ij}\Gamma_1\big(h_1(y_j(t))-\sigma h_1(x_j(t))\big) - k_i e_i(t), \quad i\in\mathcal{I}_1, \qquad (3.28)$$

can be applied to synchronize the system (3.27) when the feedback control gain $k$ satisfies $k \ge -\lambda_{\min}(C_1)+\alpha(A_1)(\beta^2+1)$.
Similarly, in the drive network (2.1) and the response network (2.2), if $A_1=A_2=I_n$, $B_1=B_2=0$, $h_1$ and $h_2$ are linear functions, and $G=H=0$, the error system (3.1) under the controllers (3.30) below can be rewritten as follows:

$$\dot{e}_i(t) = -(C_1+C_2)e_i(t) + f_2(y_i(t)) - f_2(\sigma x_i(t)) + \sum_{j=1}^{N_2} H^{\tau}_{ij}\Gamma_2\, e_j(t-\tau_2) - k_i e_i(t), \quad i\in\mathcal{I}_2. \qquad (3.29)$$

Thus, we can obtain the following corollary for synchronizing the error system (3.29).

Corollary 3.8. Apply the nonlinear controllers

$$u_i(t) = \sigma C_2 x_i(t) - C_1 y_i(t) - f_2(\sigma x_i(t)) + \sigma f_1(x_i(t)) - \sigma\sum_{j=1}^{N_2} H^{\tau}_{ij}\Gamma_2\, x_j(t-\tau_2) + \sigma\sum_{j=1}^{N_1} G^{\tau}_{ij}\Gamma_2\, x_j(t-\tau_2) - k_i e_i(t), \quad i\in\mathcal{I}_2, \qquad (3.30)$$

where the $k_i$ are the feedback control gains, and denote $k=\min_{i\in\mathcal{I}_2}\{k_i\}$. When $k \ge -\lambda_{\min}(C_1+C_2)+2\alpha(H^{\tau}\otimes\Gamma_2)+1$, the GPS between the two neural networks (2.1) and (2.2) can be achieved.

Proof. We construct the Lyapunov functional as follows:

$$V(t)=\frac{1}{2}\sum_{i=1}^{N_2} e_i^T(t)e_i(t) + \alpha(H^{\tau}\otimes\Gamma_2)\sum_{i=1}^{N_2}\int_{t-\tau_2}^{t} e_i^T(s)e_i(s)\,ds. \qquad (3.31)$$

By the same method as in the above theorems, we can conclude that the GPS is achieved when $k \ge -\lambda_{\min}(C_1+C_2)+2\alpha(H^{\tau}\otimes\Gamma_2)+1$. Moreover, this conclusion also covers the result of [32].
It is easy to see that the theoretical feedback gains given in the above results (Theorems 3.1–3.4) are rather conservative, usually much larger than actually needed; clearly, it is desirable to make the feedback gains as small as possible. Here, the adaptive technique is adopted to achieve this goal.

Theorem 3.9. Suppose that Assumption 2.3 holds and the feedback controllers are chosen as (3.20). If the feedback control gains satisfy the update law

$$\dot{k}_i = \varrho_i\, e_i^T(t)e_i(t), \quad i\in\mathcal{I}_2, \qquad (3.32)$$

where the $\varrho_i$ are arbitrary positive constants, then the GPS between the neural networks (2.1) and (2.2) can be realized.

Proof. We construct the Lyapunov functional as follows:

$$V(t)=\frac{1}{2}\sum_{i=1}^{N_2} e_i^T(t)e_i(t) + \frac{1}{2}\sum_{i=1}^{N_2}\frac{1}{\varrho_i}\big(k_i-\varrho\big)^2 + \alpha(B_2)\gamma^2\sum_{i=1}^{N_2}\int_{t-\tau_1}^{t} e_i^T(s)e_i(s)\,ds + \alpha(H^{\tau}\otimes\Gamma_2)\delta^2\sum_{i=1}^{N_2}\int_{t-\tau_2}^{t} e_i^T(s)e_i(s)\,ds, \qquad (3.33)$$

where $\varrho$ is a sufficiently large positive constant to be determined.
Let

$$V_1(t)=\frac{1}{2}\sum_{i=1}^{N_2}\frac{1}{\varrho_i}\big(k_i-\varrho\big)^2. \qquad (3.34)$$

Then, by (3.32), we have

$$\dot{V}_1(t)=\sum_{i=1}^{N_2}\big(k_i-\varrho\big)\,e_i^T(t)e_i(t). \qquad (3.35)$$
Combining (3.22) and (3.35) with (3.33), one can obtain the following inequality:

$$\dot{V}(t) \le \Big[-\lambda_{\min}(C_1+C_2)+\alpha(A_2)(\beta^2+1)+\alpha(B_2)(1+\gamma^2)+\alpha(H\otimes\Gamma_1)(\delta^2+1)+\alpha(H^{\tau}\otimes\Gamma_2)(1+\delta^2)-\varrho\Big]e^T(t)e(t). \qquad (3.36)$$

Since $\varrho$ can be taken sufficiently large, the bracketed term is negative; then, following the arguments in the proofs of the preceding three theorems, the conclusion of Theorem 3.9 is easily verified.
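To illustrate how the update law (3.32) behaves, the following scalar toy example (a single node, no coupling, no delays; the constants `c`, `a`, `sigma`, `rho`, the initial values, and the step size are all illustrative choices, not values from the paper) applies a controller in the spirit of (3.20) together with the adaptive gain $\dot{k}=\varrho\,e^2$:

```python
import numpy as np

# Drive:    x' = -c x + a tanh(x)
# Response: y' = -c y + a tanh(y) + u
# Controller in the spirit of (3.20): cancel the sigma-mismatch of the drive
# nonlinearity and add adaptive feedback -k e, where e = y - sigma x.
c, a, sigma, rho = 1.0, 2.0, -0.7, 5.0
dt, steps = 1e-3, 60_000
x, y, k = 0.3, -1.0, 0.0
for _ in range(steps):                     # explicit Euler integration
    e = y - sigma * x
    u = -a * (np.tanh(sigma * x) - sigma * np.tanh(x)) - k * e
    x += dt * (-c * x + a * np.tanh(x))
    y += dt * (-c * y + a * np.tanh(y) + u)
    k += dt * rho * e * e                  # adaptive update law (3.32)
e_final, k_final = abs(y - sigma * x), k
```

In this toy setting the closed-loop error satisfies $e\dot{e} \le (|a|-c-k)e^2$ because $|\tanh y - \tanh(\sigma x)|\le|e|$; the gain $k(t)$ grows only while the error is nonzero, and once it exceeds $|a|-c$ the error decays exponentially while $k$ settles at a finite value, mirroring the behavior reported for $k(t)$ in Simulation 3.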

4. Numerical Simulations

In this section, several examples are given to verify the conclusions established above. Consider the two neural networks (2.1) and (2.2) with the following parameters:

$$C_1=\begin{pmatrix}1&0\\0&1\end{pmatrix},\quad C_2=\begin{pmatrix}0.97&0\\0&1.1\end{pmatrix},\quad A_1=\begin{pmatrix}2&-0.11\\-5&3.2\end{pmatrix},\quad A_2=\begin{pmatrix}2.1&-0.1\\-5.1&3.2\end{pmatrix},\quad B_1=\begin{pmatrix}-1.6&-0.1\\-0.18&-2.4\end{pmatrix},\quad B_2=\begin{pmatrix}-1.5&0\\-0.15&-2.3\end{pmatrix},\quad \Gamma_1=\Gamma_2=\begin{pmatrix}1&0\\0&1\end{pmatrix},$$

$$G=\begin{pmatrix}-5&1&1&1&1&1\\1&-5&1&1&1&1\\1&1&-3&1&0&0\\1&1&1&-4&1&0\\1&1&0&1&-4&1\\1&1&0&0&1&-3\end{pmatrix},\quad G^{\tau}=\begin{pmatrix}-1&0&1&0&0&0\\0&-1&0&0&1&0\\0&1&-1&0&0&0\\0&0&1&-1&0&0\\0&1&0&0&-1&0\\0&1&0&0&0&-1\end{pmatrix},\quad H=\begin{pmatrix}-3&1&1&1\\1&-3&1&1\\1&1&-3&1\\1&1&1&-3\end{pmatrix},\quad H^{\tau}=\begin{pmatrix}-1&0&1&0\\1&-1&0&0\\0&1&-1&0\\0&1&0&-1\end{pmatrix}, \qquad (4.1)$$

and $f_{ji}(x)=g_{ji}(x)=h_{ji}(x)=\tanh(x)$ for $j=1,2$, with $N_1=6$ and $N_2=4$.

Simulation 1
The four graphs in Figure 1 show the motion traces of the drive and response systems. Comparing Figures 1(b), 1(c), and 1(d) with Figure 1(a) leads to a simple observation: because the parameters of the response system are very similar to those of the drive system, the shape of the motion orbit of the response system resembles that of the drive system, but the biases of the four traces from the zero orbit are rather different.

Simulation 2
Setting $\theta=\mu=\pi=\beta=\gamma=\delta=1$, it is easy to verify that Assumption 2.3 holds. Choose $x_{ij}(t)=100(12-3i-j)$ and $y_{ij}(t)=100(8+3i+j)$ for $t\le 0$ as the initial values, and define

$$E_x=\sum_{i>j}\|x_i(t)-x_j(t)\|,\quad E_y=\sum_{i>j}\|y_i(t)-y_j(t)\|,\quad E_{xy}=\sum_{i=1}^{6}\sum_{j=1}^{4}\|x_i(t)+y_j(t)\| \qquad (4.2)$$

to measure the process of projective synchronization.
In Figure 2, the top three plots present the synchronization process of the drive system with $\sigma=-0.7$, $0.7$, and $1$, respectively; the middle three show the evolution of $E_y$ as $t\to+\infty$ for the same three projective factors; and the bottom three display the synchronization traces of the two neural networks for the three values of $\sigma$. The nine plots show that, for the different $\sigma$, the required synchronization times all lie in the interval [4, 6], although the amplitudes of the nine curves differ markedly. In the same way, for the three projective factors we can give the infima of the feedback control gains: $k\ge 40.65$ with $\sigma=-0.7$, $k\ge 40.65$ with $\sigma=0.7$, and $k\ge 39.33$ with $\sigma=1$.

Remark 4.1. The reason why we apply three different nonlinear controllers in Theorems 3.1, 3.2, and 3.4 is to find a smaller feedback control gain. By computing, we have $k\ge 40.65$ for Theorem 3.1, $k\ge 21.43$ for Theorem 3.2, and $k\ge 20.43$ for Theorem 3.4, respectively. This shows that the simulations agree with our theoretical comparison.

Simulation 3
In this simulation, we verify the synchronization process of the two neural networks when adaptive control is applied to the response system. Figure 3 shows the time evolution of $E_x$, $E_y$, $E_{xy}$, and $k(t)$; the fourth plot, Figure 3(d), tells us that the feedback control gain tends to approximately 11.15 as $t\to+\infty$, which is far lower than 39.33. This demonstrates that the adaptive control method can diminish the feedback control gain.

5. Conclusion

The GPS between two neural networks with mixed time delays and different parameters was investigated in this paper. By means of the Lyapunov stability theory, the GPS was realized under the control of three nonlinear controllers. By comparison, we found that the nonlinear controller in Theorem 3.4 was simpler and ensured the GPS more easily, making it better suited to practical design. Based on this optimal nonlinear controller, an adaptive update technique was designed to keep the feedback control gain sufficiently small. Eventually, several numerical simulations verified the validity of these results.

Acknowledgments

The authors thank the referees and the editor for their valuable comments on this paper. This work was supported by the National Natural Science Foundation of China under Grant no. 61070087, the Guangdong Education University Industry Cooperation Projects (2009B090300355), the Foundation for Distinguished Young Talents in Higher Education of Guangdong (LYM11115), the Shenzhen Basic Research Project (JC200903120040A, JC201006010743A), and the Shenzhen Polytechnic Youth Innovation Project (2210k3010020).