Abstract

We study complete synchronization of complex dynamical networks described by linearly coupled ordinary differential equation systems (LCODEs). Here, the coupling is time-varying in both network structure and reaction dynamics. Inspired by our previous work (Lu et al. (2007-2008)), the extended Hajnal diameter is introduced and used to measure synchronization in a general differential system. We then find that the Hajnal diameter of the linear system induced by the time-varying coupling matrix and the largest Lyapunov exponent of the synchronized system play the key roles in the synchronization analysis of LCODEs with identity inner coupling matrix. As an application, we obtain a general sufficient condition guaranteeing that a directed time-varying graph reaches consensus. An example with numerical simulation is provided to show the effectiveness of the theoretical results.

1. Introduction

Complex networks have been widely used in the theoretical analysis of complex systems, such as the Internet, the World Wide Web, communication networks, and social networks. A complex dynamical network is a large set of interconnected nodes, where each node possesses a (nonlinear) dynamical system and the interaction between nodes is described as diffusion. Among such models, linearly coupled ordinary differential equation systems (LCODEs) form a large class of dynamical systems with continuous time and state.

The LCODEs are usually formulated as follows:
\[
\dot{x}^{i}(t)=f\bigl(x^{i}(t),t\bigr)+c\sum_{j=1}^{m}l_{ij}(t)\,\Gamma x^{j}(t),\quad i=1,\ldots,m,
\]
where $t$ stands for the continuous time, $x^{i}(t)\in\mathbb{R}^{n}$ denotes the state vector of the $i$th node, $f$ represents the node dynamics of the uncoupled system, $c$ denotes the coupling strength, $l_{ij}(t)$ denotes the interaction between nodes $i$ and $j$, and $\Gamma\in\mathbb{R}^{n\times n}$ denotes the inner coupling matrix. The LCODE model is widely used to describe systems in nature and engineering. For example, the authors of [1] studied spike-burst neural activity and the transitions to a synchronized state using a model of linearly coupled bursting neurons; the dynamics of linearly coupled Chua circuits were studied with application to image processing and many other cases in [2].
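For readers who prefer a computational picture, the following is a minimal Python sketch of an LCODE of the above form with identity inner coupling matrix; the node dynamics f, the time-varying coupling matrix L(t), the coupling strength, and all numerical values are illustrative assumptions, not the systems analyzed later in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative node dynamics f(x, t): a weakly damped oscillator
# (a placeholder, not the Rossler system used in Section 6).
def f(x, t):
    return np.array([x[1], -x[0] - 0.1 * x[1]])

def L(t):
    # Assumed time-varying coupling matrix with zero row sums (3 nodes).
    base = np.array([[-2.0, 1.0, 1.0],
                     [1.0, -2.0, 1.0],
                     [1.0, 1.0, -2.0]])
    return (1.0 + 0.5 * np.sin(t)) * base

def lcode_rhs(t, y, c, m, n):
    x = y.reshape(m, n)                     # states of the m nodes
    coupling = c * L(t) @ x                 # inner coupling matrix = identity
    dx = np.array([f(x[i], t) for i in range(m)]) + coupling
    return dx.ravel()

m, n, c = 3, 2, 0.5
x0 = np.random.default_rng(0).normal(size=m * n)
sol = solve_ivp(lcode_rhs, (0.0, 50.0), x0, args=(c, m, n), max_step=0.05)
x_final = sol.y[:, -1].reshape(m, n)
print("spread between nodes at t = 50:", np.abs(x_final - x_final[0]).max())
```

Printing the spread between node states gives a rough, purely numerical indication of whether the trajectories approach the synchronization manifold.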

For decades, a large number of papers have focused on the dynamical behaviors of coupled systems [3–5], especially their synchronization characteristics. The word “synchronization” comes from Greek; in this paper the concept of local complete synchronization (synchronization for simplicity) is considered (see Definition 3). For more details, we refer the readers to [6] and the references therein.

Synchronization of coupled systems has attracted a great deal of attention [7–9]. For instance, in [7], the authors considered the synchronization of a network of linearly coupled and not necessarily identical oscillators; in [8], the authors studied globally exponential synchronization for linearly coupled neural networks with time-varying delay and impulsive disturbances. Synchronization of networks with time-varying topologies was studied in [10–16]. For example, in [10], the authors established the global stability of total synchronization in networks with different topologies; in [16], the authors showed that the network will synchronize with the time-varying topology if the time average is achieved sufficiently fast.

Synchronization of LCODEs has also been addressed in [17–19]. In [17], a mathematical analysis was presented of the synchronization phenomena of LCODEs with a single coupling delay; in [18], based on a geometrical analysis of the synchronization manifold, the authors proposed a novel approach to investigate the stability of the synchronization manifold of coupled oscillators; in [19], the authors proposed new conditions for synchronization of networks of linearly coupled dynamical systems with non-Lipschitz right-hand sides. The great majority of the studies mentioned above focused on static networks whose connectivity and coupling strengths are constant in time. In many applications, however, the interaction between individuals may change dynamically. For example, communication links between agents may be unreliable due to disturbances and/or subject to communication range limitations.

In this paper, we consider synchronization of LCODEs with time-varying coupling. Similar to [17–19], time-varying coupling is used to represent the interaction between individuals. In [6, 13], it was shown that the Lyapunov exponents of the synchronized system and the Hajnal diameter of the variational equation play key roles in the analysis of synchronization in discrete-time dynamical networks. In this paper, we extend these results to continuous-time dynamical network systems. Different from [11, 16], where synchronization of fast-switching systems was discussed, we focus on a framework of synchronization analysis with general temporal variation of the network topology. Additional contributions of this paper are that we explicitly show that (a) the largest projection Lyapunov exponent of a system is equal to the logarithm of the Hajnal diameter, and (b) the largest Lyapunov exponent of the transverse space is equal to the largest projection Lyapunov exponent under some proper conditions.

The paper is organized as follows: in Section 2, some necessary definitions, lemmas, and hypotheses are given; in Section 3, synchronization of generalized coupled differential systems is discussed; in Section 4, criteria for the synchronization of LCODEs are obtained; in Section 5, we obtain a sufficient condition ensuring that a directed time-varying graph reaches consensus; in Section 6, an example with numerical simulation is provided to show the effectiveness of the theoretical results; the paper is concluded in Section 7.

Notations. denotes the -dimensional vector with all components zero except the th component 1, denotes the -dimensional column vector with each component 1; for a set in some Euclidean space , denotes the closure of , denotes the complementary set of , and ; for , denotes some vector norm, and for any matrix , denotes some matrix norm induced by the vector norm, for example, and ; for a matrix , denotes the matrix with ; for a real matrix , denotes its transpose, and for a complex matrix , denotes its conjugate transpose; for a set in some Euclidean space , , where ; denotes the cardinality of the set ; denotes the floor function, that is, the largest integer not more than the real number ; denotes the Kronecker product; for a set in some Euclidean space , denotes the Cartesian product ( times).

2. Preliminaries

In this section we will give some necessary definitions, lemmas, and hypotheses. Consider the following general coupled differential system: with initial state , where denotes the initial time, denotes the continuous time, and denotes the variable state of the th node, .

For the functions , , we make the following assumption.

Assumption 1. (a) There exists a function such that for all , , and ; (b) for any , is -smooth for all , and by denotes the Jacobian matrix of with respect to ; (c) there exists a locally bounded function such that for all ; (d) is uniformly locally Lipschitz continuous: there exists a locally bounded function such that for all and ; (e) and are both measurable for .

We say a function is locally bounded if for any compact set , there exists such that holds for all .

The first item of Assumption 1 ensures that the diagonal synchronization manifold is an invariant manifold for (2).

If is the synchronized state, then it satisfies

Since is -smooth, can be denoted by the corresponding continuous semiflow of the intrinsic system (5). For , we make the following assumption.

Assumption 2. The system (5) has an asymptotically stable attractor: there exists a compact set such that (a) is invariant under the system (5), that is, for all ; (b) there exists an open bounded neighborhood of such that ; (c) is topologically transitive; that is, there exists such that , the limit set of the trajectory , is equal to [3].

Definition 3. Local complete synchronization (synchronization for simplicity) is defined in the sense that the set is an asymptotically stable attractor in . That is, for the coupled dynamical system (2), differences between components converge to zero if the initial states are picked sufficiently near , that is, if the components are all close to the attractor and if their differences are sufficiently small.

Next we give some lemmas which will be used later; their proofs are given in the appendix.

Lemma 4. Under Assumption 1, one has for all and , where .

Lemma 5. Under Assumptions 1 and 2, there exists a compact neighborhood of such that for all and .

Let , where . We have the following variational equation near the synchronized state : or in matrix form: where denotes the Jacobian matrix for simplicity.

From [20], we have the following result on the existence, uniqueness, and continuous dependence on initial conditions for (2) and (9).

Lemma 6. Under Assumption 2, each of the differential equations (2) and (9) has a unique solution which is continuously dependent on the initial condition.

Thus, the solution of the linear system (9) can be written in matrix form.

Definition 7. The solution matrix of the system (9) is defined as follows. Let , where denotes the th column and is the solution of the following Cauchy problem:

Immediately, according to Lemma 6, we can conclude that the solution of the following Cauchy problem can be written as .

We define the time-varying Jacobian matrix in the following way: with , where is the collection of all the subsets of .

Definition 8. For a time-varying system denoted by , we define the Hajnal diameter of the variational system (9) as follows: where for a matrix in block matrix form: with , its Hajnal diameter is defined as follows: where .
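As a computational aside, the Hajnal diameter of a single matrix is simply the largest norm of the difference between two (block) rows; the short sketch below, with an assumed Frobenius norm and toy matrices, illustrates the two extreme cases.

```python
import numpy as np

def hajnal_diameter(A, block=1):
    """Hajnal diameter of A viewed as a stack of (block x n) block rows:
    the largest norm of the difference between two block rows."""
    m = A.shape[0] // block
    rows = [A[i * block:(i + 1) * block, :] for i in range(m)]
    return max(np.linalg.norm(rows[i] - rows[j])
               for i in range(m) for j in range(m))

# A row-stochastic matrix with identical rows has Hajnal diameter 0 ...
P_sync = np.tile([[0.2, 0.3, 0.5]], (3, 1))
# ... while the identity matrix (no mixing at all) has a large one.
print(hajnal_diameter(P_sync), hajnal_diameter(np.eye(3)))
```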

Lemma 9 (Gronwall-Beesack's inequality). If a function satisfies the following condition: where and are some measurable functions, then one has

Based on Assumption 1, for the solution matrix , we have the following lemma.

Lemma 10. Under Assumption 1, one has the following:(1), where denotes the solution matrix of the following Cauchy problem: (2)for any given and the compact set given in Lemma 5, is bounded for all and and equicontinuous with respect to .

Let be a matrix with satisfying (a) for some orthogonal matrix and all ; (b) is also an orthogonal matrix in . We also write and its inverse in the form where and . According to Lemma 10, we have Since which implies that each row of is located in the subspace orthogonal to the subspace , we can conclude that . Then, we have where denotes the common row sum of as defined in Lemma 10, , denotes a matrix, and we omit its accurate expression. One can see that is the solution matrix of the following linear differential system.

Definition 11. We define the following linear differential system by the projection variational system of (9) along the directions : where .
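A small numerical sketch of the projection construction above: build an orthogonal matrix whose first column is the normalized all-ones vector (one possible choice, obtained here from a QR factorization), and observe that conjugating an assumed zero-row-sum coupling matrix by it produces a first column of zeros, so the remaining coordinates describe the transverse directions.

```python
import numpy as np

def transverse_basis(m):
    """Orthogonal matrix whose first column is ones/sqrt(m); the remaining
    columns span the subspace orthogonal to the synchronization direction."""
    M = np.eye(m)
    M[:, 0] = 1.0 / np.sqrt(m)
    Q, _ = np.linalg.qr(M)
    if Q[0, 0] < 0:          # fix the sign so the first column is +ones/sqrt(m)
        Q = -Q
    return Q

m = 4
T = transverse_basis(m)
L = np.array([[-1., 1., 0., 0.],        # an assumed coupling matrix
              [0., -1., 1., 0.],        # with zero row sums
              [0., 0., -1., 1.],
              [1., 0., 0., -1.]])
proj = T.T @ L @ T
print(np.allclose(proj[:, 0], 0.0))     # True: L @ ones = 0
```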

Definition 12. For any time varying variational system , we define the Lyapunov exponent of the variational system (9) as follows: where and .

Similarly, we can define the projection Lyapunov exponents by the following projection time-varying variation: that is, where and . Let

Then, we have the following lemma.

Lemma 13. .

Remark 14. From Lemma 13, we can see that the largest projection Lyapunov exponent is independent of the choice of matrix .

Consider the time-varying driven by some metric dynamical system MDS, where is the compact state space, is the -algebra, is the probability measure, and is a continuous semiflow. Then, the variational equation (9) is independent of the initial time and can be rewritten as follows: In this case, we denote the solution matrix, the projection solution matrix, and the solution matrix on the synchronization space by , , and , respectively. For simplicity, we write them as , , and , respectively. Also, we write the Lyapunov exponents and the projection Lyapunov exponent as follows: We add the following assumption.

Assumption 15. (a) is a continuous semiflow; (b) is a continuous map for all .

The following facts concern linear differential systems. For more details, we refer the readers to [21]. For a continuous scalar function , we denote its Lyapunov exponent by The following properties will be used later: (1) , where , , are constants; (2) if , which is finite, then ; (3) ; (4) for a vector-valued or matrix-valued function , we define .

For the following linear differential system: where , a transformation is said to be a Lyapunov transformation if satisfies (1) ; (2) , , are bounded for all . It can be seen that the class of Lyapunov transformations forms a group and that the linear system for should be where . Then, we say that system (30) is a reducible system of system (29). We define the adjoint system of (29) by If is the fundamental matrix of (29), then is the fundamental matrix of (31). Thus, we say that system (29) is a regular system if (29) and its adjoint system (31) have convergent Lyapunov exponent series: and , respectively, which satisfy for , or if its reducible system (30) is regular.

Lemma 16. Suppose that Assumptions 1, 2, and 15 are satisfied. Let be the Lyapunov exponents of the variational system (26), where correspond to the synchronization space and the remaining correspond to the transverse space. Let and . If (a) the linear system (17) is a regular system, (b) for all , (c) , then .

3. General Synchronization Analysis

In this section we provide a methodology based on the previous theoretical analysis to judge whether a general differential system can be synchronized or not.

Theorem 17. Suppose that is the compact subset given in Lemma 5, and Assumptions 1 and 2 are satisfied. If then the coupled system (2) is synchronized.

Proof. The main techniques of the proof come from [3, 6] with some modifications. Let be the semiflow of the uncoupled system (5). By condition (32), there exist satisfying and such that , and . For each , there must exist such that for all . According to the equicontinuity of , there exists such that for any , for all . According to the compactness of , there exists a finite positive number set with for all such that for any , there exists such that for all . Let be the collective state, which is the solution of the coupled system (2) with initial condition , . And let be the solution of the synchronization state equation (5) with initial condition . Then, letting , we have where , , , are obtained by the mean value theorem for differentiable functions. Letting , we can write the equations above in matrix form: and denote its solution matrix by . Then, for any there exists such that for all and , according to the third item of Assumption 1. Then, we have By Lemma 9, we have Let Picking sufficiently small such that for each , there exists such that and for all .
Thus, we prove synchronization step by step.
For any , there exists such that Therefore, we have , which implies that and .
Then, reinitializing at time with condition and continuing as above, we obtain that . Namely, the coupled system (2) is synchronized. Furthermore, from the proof, we can conclude that the convergence is exponential with rate , where , and uniform with respect to and . This completes the proof.

Remark 18. According to Assumption 2, which states that the attractor is asymptotically stable, and the properties of the compact neighborhood given in Lemma 5, we can conclude that the quantity is independent of the choice of .

Suppose that the time variation is driven by some MDS and that there exists a metric dynamical system , where is the product -algebra on , is the probability measure, and . From Theorem 17, we have the following.

Corollary 19. Suppose that the conditions in Lemma 16 are satisfied, is compact in the topology defined in this MDS, the semiflow is continuous, and on the Jacobian matrix is continuous. Let be the Lyapunov exponents of this MDS with multiplicity and correspond to the synchronization space. If where denotes the ergodic probability measure set supported in the MDS , then the coupled system (2) is synchronized.

4. Synchronization of LCODEs with Identity Inner Coupling Matrix and Time-Varying Couplings

In this section we study synchronization of linearly coupled ordinary differential equation systems (LCODEs) with time-varying couplings. Consider the following LCODEs with identity inner coupling matrix: where denotes the state variable of the th node, is a differentiable map, denotes the coupling strength, and denotes the coupling coefficient from node to node at time , for all , which are supposed to satisfy the following assumption. Here, we highlight that the inner coupling matrix is the identity matrix.

Assumption 20. (a) , are measurable and ; (b) there exists such that for all .
Similarly, we can define the Hajnal diameter of the following linear system: Let be the fundamental solution matrix of the system (42). Then, its solution matrix can be written as . Thus, the Hajnal diameter of the system (42) can be defined as follows:

By Theorem 17, we have the following theorem.

Theorem 21. Suppose Assumptions 1, 2, and 20 are satisfied. Let be the largest Lyapunov exponent of the synchronized system , that is, If , then the LCODEs (41) is synchronized.

Proof. Consider the variational equation of (41): Let be the solution matrix of the synchronized state system (17) and be the solution matrix of the linear system (42). We can see that is the solution matrix of the variational system (45). Then, This implies that the Hajnal diameter of the variational system (45) is less than . This completes the proof according to Theorem 17.

For the linear system (42), we firstly have the following lemma.

Lemma 22 (see [22]). is a stochastic matrix.
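Lemma 22 can be illustrated numerically: integrating the matrix equation dU/dt = c L(t) U with U(s, s) = I for a coupling matrix with zero row sums and nonnegative off-diagonal entries keeps the row sums of U equal to one, and the Hajnal diameter of the resulting solution matrix measures how close its rows have become. The coupling matrix below is an assumed toy example, not the network of Section 6.

```python
import numpy as np
from scipy.integrate import solve_ivp

def L(t):
    # Assumed coupling with zero row sums and nonnegative off-diagonal entries.
    a = 1.0 + 0.5 * np.cos(t)
    return np.array([[-a, a, 0.0],
                     [0.0, -1.0, 1.0],
                     [1.0, 0.0, -1.0]])

def solution_matrix(s, t, c, m):
    """Approximate U(t, s) for dU/dt = c L(t) U, U(s, s) = I."""
    rhs = lambda tau, u: (c * L(tau) @ u.reshape(m, m)).ravel()
    sol = solve_ivp(rhs, (s, t), np.eye(m).ravel(), max_step=0.01)
    return sol.y[:, -1].reshape(m, m)

def hajnal_diameter(U):
    return max(np.linalg.norm(U[i] - U[j], ord=np.inf)
               for i in range(U.shape[0]) for j in range(U.shape[0]))

U = solution_matrix(0.0, 20.0, 1.0, 3)
print(np.allclose(U.sum(axis=1), 1.0))   # row sums stay one (cf. Lemma 22)
print(hajnal_diameter(U))                # small when the rows become similar
```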

From Lemmas 13 and 16, we have the following corollary.

Corollary 23. , where denotes the largest one of all the projection Lyapunov exponents of system (41). Moreover, if the conditions in Lemma 16 are satisfied, then , where denotes the largest one of all the Lyapunov exponents corresponding to the transverse space, that is, the space orthogonal to the synchronization space.

If is periodic, we have the following.

Corollary 24. Suppose that is periodic. Let , , be the Floquet multipliers of the linear system (42). Then, there exists one multiplier denoted by and .
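Numerically, Corollary 24 can be checked by integrating the linear system over one period to obtain the monodromy matrix, whose eigenvalues are the Floquet multipliers; the periodic coupling below is an assumed toy example.

```python
import numpy as np
from scipy.integrate import solve_ivp

T_period, c, m = 2 * np.pi, 1.0, 3

def L(t):
    # Assumed 2*pi-periodic coupling with zero row sums.
    w = 1.0 + 0.5 * np.sin(t)
    return w * np.array([[-2.0, 1.0, 1.0],
                         [1.0, -2.0, 1.0],
                         [1.0, 1.0, -2.0]])

def monodromy():
    rhs = lambda t, u: (c * L(t) @ u.reshape(m, m)).ravel()
    sol = solve_ivp(rhs, (0.0, T_period), np.eye(m).ravel(),
                    max_step=0.005, rtol=1e-9, atol=1e-12)
    return sol.y[:, -1].reshape(m, m)

mults = np.linalg.eigvals(monodromy())
print(np.sort(np.abs(mults))[::-1])
# One multiplier equals 1 (the solution matrix is row-stochastic); the Floquet
# exponents of the remaining multipliers are log|mu_k| / T_period.
```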

If is driven by some MDS, from Corollaries 19 and 23, we have the following corollary.

Corollary 25. Suppose is continuous on and conditions in Lemma 16 are satisfied. Let , , , be the Lyapunov exponents of the linear system (42) with , and . If , then the coupled system (41) is synchronized.

Let be the set consisting of all compact time intervals in and be the set consisting of all graphs with vertex set .

Define where is a graph with vertex set and its edge set is defined as follows: there exists an edge from vertex to vertex if and only if . Namely, we say that there is a -edge from vertex to across .

Definition 26. We say that the LCODEs (41) has a -spanning tree across the time interval if the corresponding graph has a spanning tree.

For a stochastic matrix , let where , , and . Then, we can also define that is -scrambling if .
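Both conditions introduced above are easy to test numerically for a given matrix: a delta-spanning tree amounts to a reachability test on the graph whose edges correspond to entries of size at least delta, and the scrambling coefficient is the minimum over row pairs of the summed entrywise minima. The sketch below implements these tests under assumed conventions (in particular, the edge orientation and the example matrix are illustrative).

```python
import numpy as np

def has_spanning_tree(A, delta):
    """True if the digraph with an edge j -> i whenever A[i, j] >= delta
    has a spanning tree (some root reaches every vertex)."""
    m = A.shape[0]
    adj = A >= delta                      # adj[i, j]: edge from j to i
    for root in range(m):
        seen, stack = {root}, [root]
        while stack:
            u = stack.pop()
            for v in range(m):
                if adj[v, u] and v not in seen:   # edge u -> v
                    seen.add(v)
                    stack.append(v)
        if len(seen) == m:
            return True
    return False

def scrambling_coefficient(A):
    """eta(A) = min over row pairs of sum_k min(a_ik, a_jk); A is
    delta-scrambling when eta(A) >= delta."""
    m = A.shape[0]
    return min(np.minimum(A[i], A[j]).sum()
               for i in range(m) for j in range(m) if i != j)

A = np.array([[0.6, 0.4, 0.0],
              [0.3, 0.4, 0.3],
              [0.0, 0.5, 0.5]])
print(has_spanning_tree(A, 0.3), scrambling_coefficient(A))
```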

Theorem 27. Suppose Assumption 20 is satisfied. if and only if there exist and such that the LCODEs (41) has a -spanning tree across any -length time interval.

Remark 28. Different from [16], we do not need to assume that has zero column sums and that the time average is achieved sufficiently fast.

Before proving this theorem, we need the following lemma.

Lemma 29. If the LCODEs (41) has a -spanning tree across any -length time interval, then there exist and such that is -scrambling for any -length time interval.

Proof of Theorem 27. Sufficiency. From Lemma 29, we can conclude that there exist , , and such that is -scrambling across any -length time interval and . For any , let , where is an integer and and , . Then, we have For the first inequality, we use the results in [23, 24]. This implies .
Necessity. Suppose that for any and , there exists such that does not have a -spanning tree. According to the condition, there exist , , and such that for all and and . Thus, picking , , , and , there exist two vertex sets and such that if and , or and . For each and , we have Then, Let . According to Lemma 9, we have Similarly, we can conclude that for all . Without loss of generality, we suppose and , where and are integers with . Then, we can write in the following matrix form: where and correspond to the vertex subsets and , respectively. Immediately, we have . Let . We let Let with and , , . Then, Also, This implies , which leads to a contradiction with . Therefore, we can conclude the necessity.

5. Consensus Analysis of Multiagent System with Directed Time-Varying Graphs

If we let , , and in system (41), then we have In this case, if Assumption 20 is satisfied, the synchronization analysis of system (57) becomes another important research field, namely, consensus problems.

Definition 30. We say the differential system (57) reaches consensus if for any , as for all .

From the graph viewpoint, the coefficient matrix of (57) equals the negative of the graph Laplacian associated with the digraph at time , where is a weighted digraph (or directed graph) with vertices, node set , edge set , and weighted adjacency matrix with nonnegative adjacency elements . An edge of is denoted by if there is a directed edge from vertex to vertex at time . The adjacency elements associated with the edges of the graph are positive, that is, , for all . It is assumed that for all . The in-degree and out-degree of node at time are, respectively, defined as follows: The degree matrix of the digraph at time is defined as . The graph Laplacian associated with the digraph at time is defined as

Let be defined as before. We say that the digraph has a -spanning tree across the time interval if has a spanning tree.
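The graph-theoretic objects of this section can be assembled in a few lines; the sketch below uses the common convention L = D - A, with D the diagonal matrix of row sums of the weighted adjacency matrix, which yields a Laplacian with zero row sums (the adjacency matrix and the degree convention are assumptions for illustration and may differ from the paper's in/out-degree convention).

```python
import numpy as np

def graph_laplacian(A):
    """L = D - A with D = diag(row sums of A), so L has zero row sums."""
    return np.diag(A.sum(axis=1)) - A

# Assumed weighted digraph on 4 vertices (A[i, j] > 0: edge from v_j to v_i).
A = np.array([[0.0, 1.0, 0.0, 0.5],
              [0.0, 0.0, 2.0, 0.0],
              [1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
Lap = graph_laplacian(A)
print(np.allclose(Lap.sum(axis=1), 0.0))   # zero row sums
print(np.linalg.eigvals(Lap))              # one eigenvalue is always 0
```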

Theorem 31. Suppose Assumption 20 is satisfied. The system (57) reaches consensus if and only if there exist and such that the corresponding digraph has a -spanning tree across any -length time interval.

Proof. Since , we have in Theorem 21. This completes the proof according to Theorems 27 and 21.

Remark 32. This theorem is a part of Theorem 17 in [25].

6. Numerical Examples

In this section, a numerical example is given to demonstrate the effectiveness of the presented results on synchronization of LCODEs with time-varying couplings. The Lyapunov exponents are computed numerically. In this way, we can verify the synchronization criterion and analyze synchronization numerically. We use the Rössler system [16, 26] as the node dynamics, where , , and . Figure 1 shows the dynamical behaviors of the Rössler system (60) with a random initial value in , which includes a chaotic attractor [16, 26].
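For reproducibility, a minimal sketch of the Rössler node dynamics is given below; the parameter values a = b = 0.2 and c = 5.7 are commonly used chaotic values assumed for illustration and need not coincide with those used in the simulations of this section.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Rossler node dynamics with assumed placeholder parameters.
A_R, B_R, C_R = 0.2, 0.2, 5.7

def rossler(t, s):
    x, y, z = s
    return [-y - z, x + A_R * y, B_R + z * (x - C_R)]

s0 = np.random.default_rng(1).uniform(-1.0, 1.0, size=3)
sol = solve_ivp(rossler, (0.0, 200.0), s0, max_step=0.01)
print(sol.y[:, -1])    # a point on (or near) the chaotic attractor
```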

The time-varying network topology used here is an NW small-world network with time-varying coupling, which was introduced as the blinking model in [11, 27]. The time-varying network model is generated as follows: we divide the time axis into intervals of length , and in each interval we (a) begin with the nearest-neighbor coupled network consisting of nodes arranged in a ring, where each node is adjacent to its -nearest neighbor nodes, and (b) add a connection between each pair of nodes with probability , which usually is a random number between ; for more details, we refer the readers to [11]. Figure 2 shows the time-varying structure of shortcut connections in the blinking model with and .
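A short routine for generating one switching interval of the blinking small-world coupling described above: start from a ring in which every node is linked to its k nearest neighbours on each side and switch each remaining shortcut on independently with probability p, then form the graph Laplacian (whose negative would serve as the coupling matrix). The parameter names and conventions are assumptions for illustration.

```python
import numpy as np

def blinking_laplacian(m, k, p, rng):
    """Graph Laplacian of one realization of the blinking NW small-world model:
    a ring where each node is connected to its k nearest neighbours on each
    side, plus shortcuts switched on independently with probability p."""
    A = np.zeros((m, m))
    for i in range(m):
        for d in range(1, k + 1):                 # ring edges
            A[i, (i + d) % m] = A[i, (i - d) % m] = 1.0
    for i in range(m):
        for j in range(i + 1, m):                 # random shortcuts
            if A[i, j] == 0.0 and rng.random() < p:
                A[i, j] = A[j, i] = 1.0
    return np.diag(A.sum(axis=1)) - A

rng = np.random.default_rng(2)
tau, p, m, k = 0.1, 0.01, 100, 2
L_current = blinking_laplacian(m, k, p, rng)      # redrawn every tau time units
print(L_current.shape, np.allclose(L_current.sum(axis=1), 0.0))
```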

In this example, the parameters are taken as , , , and . Then the blinking small-world network can be generated with the coupling graph Laplacian . The dynamical network system can be described as follows:

Let denote the maximum distance between nodes at time . Let , for some sufficiently large and . Let be defined as in Corollary 25. As described in Corollary 25, two steps are needed for verification: (a) calculating the largest Lyapunov exponent of the uncoupled synchronized system (60), and (b) calculating the second largest Lyapunov exponent of the linear system (42). In detail, we use Wolf's method [28] to compute and the Jacobian method [29] to compute the Lyapunov spectrum of (42). More details can be found in [28–30]. Figure 3 shows the convergence of the maximum distance between nodes during the topology evolution for different coupling strengths . It can be seen from Figure 3 that the dynamical network system (61) can be synchronized with and .
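The quantities in steps (a) and (b) can be approximated with a standard QR-reorthonormalization (Jacobian-based) scheme; the sketch below is a generic implementation of that idea, not a reproduction of the routines of [28, 29], and it is applied to an assumed periodic coupling for illustration. For the coupled linear system, the second largest exponent returned plays the role of the quantity used in Corollary 25.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lyapunov_spectrum(M, n, t_end, dt=0.5):
    """QR-based estimate of the Lyapunov exponents of du/dt = M(t) u:
    propagate an orthonormal frame over short windows, re-orthonormalize,
    and average log of the diagonal of R over time."""
    Q = np.eye(n)
    sums = np.zeros(n)
    t = 0.0
    while t < t_end:
        rhs = lambda s, y: (M(s) @ y.reshape(n, n)).ravel()
        sol = solve_ivp(rhs, (t, t + dt), Q.ravel(), max_step=0.01)
        Q, R = np.linalg.qr(sol.y[:, -1].reshape(n, n))
        sums += np.log(np.abs(np.diag(R)))
        t += dt
    return np.sort(sums / t_end)[::-1]

# Example: exponents of dv/dt = c * L(t) v for an assumed periodic coupling.
c = 0.5
def M(t):
    w = 1.0 + 0.5 * np.sin(t)
    return c * w * np.array([[-2.0, 1.0, 1.0],
                             [1.0, -2.0, 1.0],
                             [1.0, 1.0, -2.0]])

print(lyapunov_spectrum(M, 3, 200.0))   # largest ~ 0; second largest < 0
```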

We pick the time length 200. Let and . The initial states are chosen randomly from the interval . Figure 4 shows the variation of and with respect to the coupling strength . It can be seen that the region of the coupling strength where is negative coincides with that of synchronization, that is, where is near zero. This verifies the theoretical result (Corollary 25). In addition, we find that is the threshold for synchronizing the coupled systems in this case.

7. Conclusions

In this paper, we present a theoretical framework for synchronization analysis of general coupled differential dynamical systems. The extended Hajnal diameter is introduced to measure synchronization. The coupling between nodes is time-varying in both network structure and reaction dynamics. Inspired by the approaches in [6, 13], we show that the Hajnal diameter of the linear system induced by the time-varying coupling matrix and the largest Lyapunov exponent of the synchronized system play the key roles in the synchronization analysis of LCODEs. These results extend the synchronization analysis of discrete-time networks in [6] to the continuous-time case. As an application, we obtain a very general sufficient condition ensuring that a directed time-varying graph reaches consensus, and the way we obtain this result is different from [25]. An example with numerical simulation is provided to show the effectiveness of the theoretical results. Additional contributions of this paper are that we explicitly show that the largest projection Lyapunov exponent, the Hajnal diameter, and the largest Lyapunov exponent of the transverse space are equal to each other in coupled differential systems (see Lemmas 13 and 16), which was proved in [6] for coupled discrete-time systems.

Appendix

Proof of Lemma 5. Let be a bounded open neighborhood of satisfying and . This implies if , is an open set due to the continuity of the semiflow , and for all . Let . We claim that there exists such that for all .
For any , let and . We can conclude that . We will prove in the following that there exists such that . Otherwise, there always exists for . Let . We have (i) and (ii) . For any limit point of , can be either finite or infinite. In both cases, which implies . However, claim (i) implies that , which contradicts claim (ii). This completes the proof by letting .

Proof of Lemma 10. (a) For any initial condition with the form , the solution of (11) can be according to Lemma 4. This implies the first claim in this lemma.
(b) According to Lemma 5, there exists such that , the solution of (5), satisfies for all and . So, there exists such that according to the third item of Assumption 1. Write the solution of (11) as Then, According to Lemma 9, we have . This implies that for all and .
For any , let and be the solutions of the synchronized state equation (5) with initial conditions and , respectively. We have By Lemma 9, we have for all and . Also, according to the fourth item of Assumption 1, there must exist such that for all and . Then, let , , and . We have According to Lemma 9, This implies for all . This completes the proof.

Proof of Lemma 13. We define the projection joint spectral radius as follows: First, we will prove that . For any , there exists such that for all and . This implies that for some , all and all . Thus, there exist some and some matrix function such that for all and , where denotes a matrix, and we omit its accurate expression. So, we can conclude that for some , all , and . This implies that , that is, due to the arbitrariness of . Conversely, for any , there exists such that for some , all , and , where the first rows of . Then, for some , all , and , where and denotes a matrix, and we omit its accurate expression. This implies that holds for some , all , and . Therefore, we can conclude that . So, .
Second, it is clear that . We will prove that . Otherwise, there exists some satisfying . If so, there exists a sequence as , , and with such that for all . Then, there exists a subsequence with . Let be a normalized orthogonal basis of . And, let . We have for all . Thus, there exists such that for all . This implies , which contradicts . This implies . Therefore, we can conclude . The proof is completed.

Proof of Lemma 16. Let . We have
Write , where . Then, we have Thus, we can write its solution by
We write , , and by , , and , respectively for simplicity.
Case 1 (). We can conclude that and From Cauchy-Buniakowski-Schwarz inequality, we have
Claim 1 (). Consider the linear system By its regularity and the boundedness of its coefficients, there exists a Lyapunov transform such that, letting , we obtain the transformed linear system Let be the solution matrix, which satisfies that and are lower-triangular. Its Lyapunov exponents can be written as follows: which are just the Lyapunov exponents of the regular linear system (A.18), . We have and This implies By induction, we can conclude that for all . For , due to the lower-triangularity of the matrix .
Considering the lower-triangular matrix , its transpose can be regarded as the solution matrix of the adjoint system of (A.18): which is also regular. By the same arguments, we can conclude that for all , for all , and for all . Therefore, for each , This implies that .
Noting that So, . This leads to . This implies that . Thus, can be concluded due to .
Case 2 (). For any with , there exists such that
for all . Define the subspace of : which is well defined due to . For each with initial condition , we have and according to the arguments above. Thus, we have . Since , define the transverse space and . This completes the proof.

Proof of Lemma 22. Since satisfies Assumption 20, if the initial condition is , then the solution must be , which implies that each row sum of is one. Next, we prove that all elements of are nonnegative. Consider the column of , denoted by , which can be regarded as the solution of the following equation: For any , if is the index with , we have . This implies that is always nondecreasing for all . Therefore, holds for all and . We can conclude that is a stochastic matrix. The proof is completed.

Proof of Lemma 29. Consider the following Cauchy problem: Noting that , we have . For each , since for all and , we have So, if there exists a -edge from vertex to across , then we have . Let . We can see that has a spanning tree across any -length time interval. Therefore, according to [31, 32], there exist and such that is scrambling across any -length time interval. The lemma is proved.

Acknowledgments

This work is jointly supported by the National Key Basic Research and Development Program (no. 2010CB731403), the National Natural Sciences Foundation of China under Grant (nos. 61273211 and 61273309), the Shanghai Rising-Star Program (no. 11QA1400400), the Marie Curie International Incoming Fellowship from the European Commission under Grant FP7-PEOPLE-2011-IIF-302421, and the Laboratory of Mathematics for Nonlinear Science, Fudan University.