Abstract and Applied Analysis
Volume 2015, Article ID 363251, 12 pages
http://dx.doi.org/10.1155/2015/363251
Research Article

Exponential Robust Consensus of Multiagent Systems with Markov Jump Parameters

1Institute of Mathematics and Statistics, Baise University, Baise, Guangxi 533000, China
2Institute of Mathematics and Statistics, Chongqing University of Technology, Chongqing 400054, China
3School of Automation Science and Engineering, South China University of Technology, Guangzhou, Guangdong 510640, China

Received 29 July 2015; Revised 17 September 2015; Accepted 20 September 2015

Academic Editor: Elena Litsyn

Copyright © 2015 He Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The exponential robust consensus of stochastic multiagent systems is studied. The coupling structures of the systems switch according to a Markov jump process; that is, the multiagent systems contain Markov jump parameters. Sufficient conditions for almost surely exponential robust consensus are derived by means of stochastic analysis and the matrix inequality approach. Finally, two simulations are presented to demonstrate the validity of the theoretical results.

1. Introduction

As is well known, the study of multiagent systems has attracted considerable attention in recent years, partly due to its extensive applications in fields such as flocking, formation control, and cooperative control of unmanned vehicles. The way information is exchanged among agents plays a key role in understanding the coordinated behavior of each individual. Therefore, the consensus problem has become an interesting and unavoidable topic for guaranteeing agreement on certain properties or objectives. In particular, the design of the consensus protocol is a critical issue when tackling the consensus control problem for multiagent systems. Constructing interaction algorithms under which all agents reach agreement on variables of common interest, based on limited or unreliable information exchange, is thus of significant importance. Consequently, a great variety of consensus protocols [1–8] have been proposed for first-order or second-order multiagent systems from various perspectives, including communication time delays, deterministic interaction topologies, measurement uncertainties, and communication errors. For instance, the consensus problem was considered in [6] for multiagent systems under a fixed connected communication topology where the information was either fully or partially available. In [7], a consensus protocol was obtained by using algebraic graph theory and stochastic tools for second-order multiagent systems with communication noises under directed fixed and switching topologies. In [8], a distributed dynamic output feedback algorithm was developed for multiagent systems with communication errors, using only the relative output information between each individual and its neighbors under a fixed communication topology.

In many practical systems, the interaction or coordination structure among agents may change abruptly for various reasons, such as random failures and repairs of components, changes in the interconnections of subsystems, signal channels, or the environment. Such systems are a special kind of stochastic dynamic system with finitely many operation modes, which may switch from one mode to another at different times according to certain laws. Under these circumstances, a Markov jump process determined by a Markov chain is employed to describe the abrupt phenomena and the switching of the topology structure of various dynamic systems. For example, Markov jump stochastic Cohen-Grossberg neural networks with partially known transition rates were considered in [9, 10], where new criteria for stochastic global exponential stability were obtained. With uncertain Markov transition probabilities, a linear distributed inference protocol was proposed in [11] for the average consensus problem of distributed inference in wireless sensor networks. In addition, a class of Markovian switching complex networks with mixed time-varying delays was investigated in [12] via a delay-partition approach.

Multiagent systems are no exception, and a series of works on multiagent systems with Markovian switching topologies has appeared; see [13–17]. In [14], based on graph theory and stochastic analysis, the containment control problem was studied for discrete- and continuous-time second-order and even high-order multiagent systems under Markovian switching topologies, and sufficient conditions for mean square containment control were achieved. Regarding distributed output feedback control for Markov jump multiagent systems, the work in [15] considered the case where the information available to each individual depends only on the noisy output and the Markov jump parameters, and a distributed output feedback control algorithm was provided. The following studies all concern leader-following control for Markov jump multiagent systems: in [16], where the interconnection information among leaders is subject to unexpected changes, conditions guaranteeing consensus were derived by employing the reciprocally convex approach, a Lyapunov-Krasovskii functional, and linear matrix inequalities. The containment tracking problem was investigated in [17], which not only presented the eventual convergence points of the followers but also obtained necessary and sufficient criteria for containment control of Markovian switching multiagent systems.

A fact that cannot be neglected is that noises of various forms often occur in practical systems, due to environmental disturbances during transmission, quantization errors, and measurement errors. As shown in [18], systems with noises can diverge if traditional consensus protocols are used. Moreover, as observed in [19], many methods, including the lifting technique and stochastic Lyapunov theory, fail in the consensus analysis of linear discrete-time multiagent systems because of the presence of noises and delays; necessary and sufficient conditions were obtained there by adopting a new consensus algorithm. Compared with previous works, the work in [7] considered a distributed protocol without relative velocity information for consensus of second-order multiagent systems with communication noises under directed fixed and switching topologies; the consensus algorithms derived in the noisy measurement case were more general and guarantee asymptotic mean square convergence.

Motivated by the above studies on consensus of Markovian jumping multiagent systems, there is still room for development of the consensus problem for multiagent systems with noises under Markovian switching. In this paper, exponential robust consensus of Markovian jumping multiagent systems is investigated. The model investigated here is general: at each node, the uncoupled system can exhibit various dynamical behaviors. The left eigenvectors of the coupling matrices corresponding to the eigenvalue 0 play a key role in the geometric analysis of the consensus manifold. A suitable Lyapunov functional is constructed to analyze the robust consensus of the stochastic multiagent systems, and criteria ensuring that the systems reach exponential robust consensus almost surely are derived by means of the matrix inequality approach. Finally, two numerical simulations are used to illustrate the validity of these conditions.

The paper is organized as follows. Section 2 presents some preliminaries. The main results and proofs, that is, the analysis of the consensus protocol for the stochastic multiagent system, are given in Section 3. Simulation results are shown in Section 4, and conclusions are given in Section 5.

2. Preliminaries

2.1. Preliminary in Graph Theory

This subsection introduces some preliminaries from graph theory.

Let G = (V, E, A) be a weighted directed network of order N, where V = {v_1, ..., v_N} is the vertex set of the directed graph G and E ⊆ V × V is the directed edge set, each edge being an ordered pair of vertices in V. Let N_i denote the neighborhood of the vertex v_i. A = (a_ij) is the weighted adjacency matrix, with a_ij > 0 if and only if (v_j, v_i) is an edge of G, and a_ij = 0 otherwise; moreover, a_ii = 0 for all i. The in-degree of node v_i is defined as deg(v_i) = Σ_j a_ij. The degree matrix of G is D = diag(deg(v_1), ..., deg(v_N)), and the Laplacian matrix of G is defined as L = D − A.
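As a concrete illustration of these definitions, the degree and Laplacian matrices can be assembled from a weighted adjacency matrix; the 3-node digraph below is a hypothetical example, not one from this paper:

```python
import numpy as np

# Hypothetical weighted adjacency matrix A of a 3-node directed graph;
# a_ij > 0 means there is a directed edge from node j to node i.
A = np.array([[0.0, 1.0, 0.5],
              [2.0, 0.0, 0.0],
              [0.0, 1.5, 0.0]])

# The in-degree of node i is the i-th row sum of A.
in_degrees = A.sum(axis=1)

# Degree matrix D and Laplacian L = D - A.
D = np.diag(in_degrees)
L = D - A

# By construction every row of the Laplacian sums to zero,
# so the all-ones vector is a right eigenvector for eigenvalue 0.
print(L.sum(axis=1))  # -> [0. 0. 0.]
```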

A network is undirected if, whenever there is a connection between nodes v_i and v_j in G, a_ij = a_ji > 0, and a_ij = a_ji = 0 otherwise. A network is directed if a connection from node v_j to v_i gives a_ij > 0, and a_ij = 0 otherwise. A (directed) path of G from vertex v_i to v_j is a sequence of distinct vertices starting at v_i and ending at v_j such that each consecutive pair of vertices forms a directed edge in E. The graph G is said to contain a spanning tree if there exists a vertex v_r, called the root vertex, from which there is a directed path to every other vertex of G.

Remark 1. The Laplacian matrix L has a simple eigenvalue zero, and all its other eigenvalues have positive real parts, if and only if the directed network G has a directed spanning tree. Let ξ = (ξ_1, ..., ξ_N)^T be a left eigenvector of the Laplacian matrix L corresponding to the eigenvalue 0, normalized so that ξ_i ≥ 0 and Σ_i ξ_i = 1.
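The left eigenvector described in Remark 1 can be computed numerically as a null vector of the transpose of the Laplacian; a small sketch using an assumed undirected 3-node chain graph for illustration:

```python
import numpy as np

# Assumed 3-node graph (an undirected chain), used only for illustration.
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
L = np.diag(A.sum(axis=1)) - A

# A left eigenvector of L for eigenvalue 0 is a right null vector of L^T.
eigvals, eigvecs = np.linalg.eig(L.T)
k = int(np.argmin(np.abs(eigvals)))   # index of the (near-)zero eigenvalue
xi = np.real(eigvecs[:, k])
xi = xi / xi.sum()                    # normalize so the entries sum to 1
```

For this symmetric example the normalized left eigenvector is the uniform vector (1/3, 1/3, 1/3); for a general digraph with a spanning tree its entries weight the agents' influence on the consensus value.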

2.2. Model Description

Suppose that represents the position of agent (). In this paper, we consider only the consensus problem for the positions of the agents. The exponential robust consensus protocol with Markov jump parameters is presented as follows: where is the nonlinear dynamic term of agent , is a connection matrix, and is a Laplacian matrix: for and for . and are the global coupling strengths, and one has . is an external disturbance function, and is a standard white noise. Since the system above is driven by a standard white noise, system (1) can be rewritten in the form of an Itô stochastic differential equation, in which is a standard Brownian motion. The Markovian process is independent of the Brownian motion ; it is a right-continuous Markovian process on the probability space , taking values in a finite state space with generator given by , where and is the transition rate from to if ; otherwise, .
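A Markov jump parameter of this kind can be sampled from its generator by drawing exponential holding times and then jumping according to the off-diagonal rates; the sketch below uses an assumed 2-state generator, not a generator from this paper:

```python
import numpy as np

def simulate_markov_chain(Q, r0, T, rng):
    """Sample a path of a continuous-time Markov chain on {0, ..., n-1}
    with generator Q (rows summing to zero) up to time T, from state r0."""
    t, state = 0.0, r0
    jump_times, states = [0.0], [r0]
    while True:
        rate = -Q[state, state]             # total jump rate out of `state`
        t += rng.exponential(1.0 / rate)    # exponential holding time
        if t >= T:
            break
        probs = Q[state].copy()
        probs[state] = 0.0
        probs = probs / rate                # jump probabilities to other states
        state = int(rng.choice(len(Q), p=probs))
        jump_times.append(t)
        states.append(state)
    return jump_times, states

rng = np.random.default_rng(0)
Q = np.array([[-2.0, 2.0],
              [3.0, -3.0]])                 # assumed 2-state generator
times, states = simulate_markov_chain(Q, 0, 10.0, rng)
```

The returned path is piecewise constant and right-continuous, matching the description of the jump process above.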

Throughout this paper, let be a complete probability space with a filtration satisfying the usual conditions. is a scalar Brownian motion defined on this probability space; , and for (see Definition 4). is Lipschitz continuous; that is, there exists a positive constant such that .

2.3. Definitions and Lemmas

This subsection presents the definitions, the lemma, and the remarks used in the sequel.

Definition 2. A system is said to achieve exponential robust consensus almely surely if there exist two positive constants and such that Namely,

Lemma 3 (for more details, see [20]). If , then for any stopping times there is as long as the integrations involved exist and are finite.

Definition 4 (see [21]). Function class : let be a positive definite diagonal matrix and a diagonal matrix. denotes the class of continuous functions satisfying for some , all , and .

Remark 5. The matrix is allowed to be any matrix. Hence, the resulting nonlinear term of the agent may be nonmonotonic and more general than the usual sigmoid functions and Lipschitz-type conditions. In fact, function class can contain many common chaotic systems, for example, the Lorenz system [22], the Chen system [23], and the Lü system [24].
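For instance, the Lorenz system named above can serve as the nonlinear node dynamics; a minimal sketch with the classical parameter values (the time step and step count are illustrative choices):

```python
import numpy as np

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz system, a classical chaotic dynamic."""
    x, y, z = state
    return np.array([sigma * (y - x),
                     x * (rho - z) - y,
                     x * y - beta * z])

# A few explicit Euler steps from a standard initial condition.
state = np.array([1.0, 1.0, 1.0])
dt = 1e-3
for _ in range(1000):
    state = state + dt * lorenz(state)
```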

Remark 6. It is worth pointing out that most existing results concern consensus without noise; for example, the system in [25] is considered as , whereas we introduce noise into the network considered here. Since most real-world systems are inevitably affected by noise, considering its influence on system (1) makes our discussion more meaningful and more realistic.

3. Main Results

In fact, model errors inevitably occur in the process of constructing multiagent systems; they arise from fluctuations of information transmission under external disturbances. In order to characterize the consensus of multiagent systems, a natural and efficient way is to analyze the stability of the corresponding error systems.

Let , where is the left eigenvector of associated with eigenvalue 0, and one can obtain the error dynamical system of system (1) as follows:

Then one obtains the reduced error system as follows:, , such that

Therefore, by investigating the stability of the error dynamical system (9), one can address the almost surely exponential robust consensus of the stochastic multiagent system (1). In the following, we present stability criteria ensuring that the error dynamical system (9) is exponentially robustly stable; that is, the stochastic multiagent system (1) with Markov jump parameters achieves exponential robust consensus almost surely.

Theorem 7. Assume that the graph topologies have a spanning tree in the directed network . System (1) achieves exponential robust consensus almost surely if there exist a series of positive definite diagonal matrices , a series of matrices (where ), a diagonal matrix , and a series of positive constants such that the following condition holds: where is the minimum positive constant such that holds, , and

Proof. Construct a Lyapunov functional as follows:
Obviously, the weak infinitesimal generator is given as follows:
To obtain , we analyze each term of (14) in turn.
Let us start with the first term of (14), and let , where the value of depends on the matrix . It is not difficult to see that
due to and .
Then, one can obtain from (15) that
where and for , . It is worth pointing out that
Finally, one can obtain from the last term of (14) that
In view of (15)–(18), there is
As is well known, . Let be a space composed of . The columns of the matrix form a basis of the space , where
Now, can be written as , where . Then, one has
According to Lemma 3 (the generalized Itô formula), there exists a positive constant such that
One obtains from (13) that . Then, there is
Then, it is obvious that
Consequently, system (9) achieves exponential robust stability in the mean square sense. Then, in terms of Theorem 7.24 in [26], system (1) achieves exponential robust consensus almost surely.
The proof is completed.

Remark 8. In Theorem 7, we pick a fixed matrix since the nonlinear term of each agent is the same. For system (1), it should be pointed out that the value of depends on the matrix , . Note that the system is analyzed only at . By utilizing the stochastic method, an exponential robust consensus criterion for the stochastic multiagent network has been given. This implies that the state-coupled stochastic multiagent network can be validly forced to the objective convergence trajectory by employing coupling controllers.

Remark 9. In [27], the impact of uncertainties and stochastic coupling on synchronization performance is discussed. System (1) differs from the systems in [27]: the multiagent system considered in this paper takes Markov jump parameters into account and is therefore more practical.

Remark 10. Reference [25] also studied the consensus of system (1) with Markov jump structures. It was pointed out in [25] that if the expectation of with respect to the stationary distribution satisfies , system (9) achieves consensus almost surely. If we pick the following matrix and , the result of [25] in the case cannot verify the consensus of system (9) because . But our result remains valid. Moreover, compared with [25], we obtain a condition for exponential robust consensus almost surely, which implies that our result extends that of [25].

The condition of Theorem 7 is too complex to solve directly for the matrices because of the matrices associated with . To facilitate the controller design problem, the following theorem, which is equivalent to Theorem 7, is given.

Theorem 11. Assume that the graph topologies have a spanning tree in the directed network . System (1) achieves exponential robust consensus almost surely if there exist a series of positive definite diagonal matrices , a series of vectors (where ), a diagonal matrix , and a series of positive constants such that the following condition holds: where is the minimum positive constant such that holds, , and

Proof. Based on the Schur complement, there is
Since , , one can obtain
Then, the proof is completed according to the conclusion of Theorem 7.
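The Schur complement step used above can be checked numerically: for a symmetric block matrix M = [[A, B], [B^T, C]] with C negative definite, M is negative definite if and only if A − B C⁻¹ B^T is. The blocks below are illustrative assumptions, not matrices from the theorem:

```python
import numpy as np

# Illustrative symmetric blocks with C negative definite (assumed values).
A = np.array([[-4.0, 1.0],
              [1.0, -3.0]])
B = np.array([[1.0],
              [0.5]])
C = np.array([[-2.0]])

M = np.block([[A, B],
              [B.T, C]])
schur = A - B @ np.linalg.inv(C) @ B.T

def is_neg_def(X):
    """A symmetric matrix is negative definite iff all eigenvalues are < 0."""
    return bool(np.all(np.linalg.eigvalsh(X) < 0))

# The full block matrix and its Schur complement agree on definiteness.
print(is_neg_def(M), is_neg_def(schur))  # -> True True
```

This equivalence is what lets the bilinear condition of Theorem 7 be rewritten as the linear matrix inequality of Theorem 11.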

Assume that every agent is governed only by the internal communication among all agents; namely, the nonlinear term of each agent does not exist. Then, system (1) can be described as
We analyze only the communication exchange term of the multiagent system. In the following corollary, we give a condition ensuring that multiagent system (30), composed of agents, achieves exponential consensus almost surely.

Corollary 12. Assume that the graph topologies have a spanning tree in the directed network . System (30) achieves exponential robust consensus almost surely if there exist a series of positive definite diagonal matrices and a series of matrices (where ) such that the following condition holds: where and

Proof. This follows directly from Theorem 7.

4. Simulations

In this section, two examples are presented to demonstrate the validity of the main theoretical results. Consider the following exponential consensus protocol governed by a Markov jump differential equation: where

Consider the state space with three states, denoted by . Pick the coupling matrices (or connection coefficients) associated with the state space as follows:
Then the left eigenvectors of the coupling matrices are given as follows (each matrix may have many left eigenvectors with respect to the eigenvalue 0; we simply pick one arbitrarily):
In view of Theorem 7, it is obvious that one can obtain

We take , , and as follows:

Pick up the transition matrix and as follows:

4.1. Simulation 1

In this subsection, an example is proposed to demonstrate the validity of Theorem 7.

Consider agent described by the following system:

It is obvious that and . Pick and . In view of Theorem 7 and (35)–(39), if the state of system (33) governed by (40) is , one has
and the corresponding eigenvalues are , respectively.

For , one has
and the corresponding eigenvalues are , respectively.

For , one has
and the corresponding eigenvalues are , respectively. From Theorem 7, we know that system (33), composed of agents governed by (40), achieves exponential robust consensus almost surely with the given control law.

In the following simulation, the initial values are chosen randomly. The trajectory of system (33) is shown in Figure 1. For , agent-, agent-, and agent- represent the three states of agent , respectively. In view of Figure 1, one can see that, for state , the trajectories of the three agents merge into a common trajectory over time , where . The dynamics of each individual agent is shown in Figure 2. Define the error of system (33) as . In view of Figure 3, it is easy to see that the system error quickly converges to a neighborhood of 0; the error graph thus effectively demonstrates the almost surely exponential robust consensus of system (33). From Figures 1, 2, and 3, the simulation results clearly demonstrate the validity of the achieved theoretical result.

Figure 1: Dynamic of multiagent system (33) with being given by (40).
Figure 2: Dynamics of each agent of multiagent system (33) with being given by (40).
Figure 3: Dynamics of consensus error among agents in multiagent system (33).
Figure 4: Dynamics of multiagent system (30) with .
Figure 5: Dynamics of synchronization error among agents in multiagent system (30) with .
4.2. Simulation 2

In this subsection, in order to analyze the influence of stochastic disturbances on the dynamics of the coupled system, we now consider the exponential robust consensus model (30). The coefficients and parameters are the same as those in Simulation 1 except for the following matrices:

In view of Corollary 12, (37)–(39), and (44), if the state of system (30) governed by (40) is , one has
and the corresponding eigenvalues are , respectively.

For , one has
and the corresponding eigenvalues are ; ; , respectively.

For , one has
and the corresponding eigenvalues are , respectively. From Corollary 12, we know that system (30) (system (33) with ) achieves exponential robust consensus with the given control law.

In the following simulation, the initial values are again chosen randomly. The trajectory of system (30) is shown in Figure 4. In view of Figure 4, one can see that, for state , the trajectories of the three agents merge into a common trajectory over time , where . Define the error of system (30) as . In view of Figure 5, it is easy to see that the error of system (30) quickly converges to a neighborhood of 0; the error graph thus effectively demonstrates the almost surely exponential robust consensus of system (30). From Figures 4 and 5, the simulation results clearly demonstrate the validity of the achieved theoretical result.
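Simulations of this kind can be reproduced in outline with an Euler-Maruyama scheme for the switched stochastic dynamics. The coupling Laplacians, generator, noise intensity, and initial values below are assumed for illustration and are not the paper's exact parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

def laplacian(A):
    return np.diag(A.sum(axis=1)) - A

# Assumed coupling Laplacians for a 3-agent network in two Markov states.
L_modes = [laplacian(np.array([[0.0, 1.0, 1.0],
                               [1.0, 0.0, 1.0],
                               [1.0, 1.0, 0.0]])),
           laplacian(np.array([[0.0, 2.0, 0.0],
                               [0.0, 0.0, 2.0],
                               [2.0, 0.0, 0.0]]))]
Q = np.array([[-1.0, 1.0],
              [1.0, -1.0]])        # assumed generator of the jump process

dt, T, c, sigma = 1e-3, 5.0, 2.0, 0.05
x = np.array([1.0, -2.0, 0.5])    # initial positions, chosen arbitrarily
mode = 0
err0 = np.ptp(x)                  # initial disagreement (max - min)

for _ in range(int(T / dt)):
    # Markov jump: leave the current mode with probability ~ -q_ii * dt.
    if rng.random() < -Q[mode, mode] * dt:
        mode = 1 - mode
    # Euler-Maruyama step of the noisy linear consensus dynamics.
    dW = rng.normal(scale=np.sqrt(dt), size=3)
    x = x + (-c * L_modes[mode] @ x) * dt + sigma * dW

err_T = np.ptp(x)                 # final disagreement
```

With additive noise the disagreement does not vanish exactly but settles in a small noise-sized neighborhood of zero, which is qualitatively what the error plots in Figures 3 and 5 show.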

5. Conclusions

In this paper, the exponential robust consensus of stochastic multiagent systems with Markov jump coupling structures has been studied. Sufficient conditions for almost surely exponential robust consensus were derived by means of stochastic analysis and the matrix inequality approach, and two simulations confirmed the validity of the theoretical results.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grants 11401062 and 61374104. The authors would like to express sincere appreciation to the editor and anonymous reviewers for their valuable comments which have led to an improvement in the presentation of the paper.

References

  1. Y. Zhang and Y.-P. Tian, “Consentability and protocol design of multi-agent systems with stochastic switching topology,” Automatica, vol. 45, no. 5, pp. 1195–1201, 2009.
  2. Y. Liu and Y. Jia, “H∞ consensus control for multi-agent systems with linear coupling dynamics and communication delays,” International Journal of Systems Science, vol. 43, no. 1, pp. 50–62, 2012.
  3. H. Zhao and J. H. Park, “Group consensus of discrete-time multi-agent systems with fixed and stochastic switching topologies,” Nonlinear Dynamics, vol. 77, no. 4, pp. 1297–1307, 2014.
  4. Q. Zhang, S. Chen, and C. Yu, “Impulsive consensus problem of second-order multi-agent systems with switching topologies,” Communications in Nonlinear Science and Numerical Simulation, vol. 17, no. 1, pp. 9–16, 2012.
  5. W. Yu, G. Chen, M. Cao, and J. Kurths, “Second-order consensus for multiagent systems with directed topologies and nonlinear dynamics,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 40, no. 3, pp. 881–891, 2010.
  6. X. Wang, K. Zhao, Z. You, and L. Zheng, “A nonlinear consensus protocol of multiagent systems considering measuring errors,” Mathematical Problems in Engineering, vol. 2013, Article ID 794346, 8 pages, 2013.
  7. S. Djaidja, Q. Wu, and H. Fang, “Consensus of double-integrator multi-agent systems without relative state derivatives under communication noises and directed topologies,” Journal of the Franklin Institute, vol. 352, no. 3, pp. 897–912, 2015.
  8. X. Yang and J. Wang, “Distributed robust H∞ consensus control of multiagent systems with communication errors using dynamic output feedback protocol,” Mathematical Problems in Engineering, vol. 2013, Article ID 979087, 12 pages, 2013.
  9. R. Rao, S. Zhong, and X. Wang, “Delay-dependent exponential stability for Markovian jumping stochastic Cohen-Grossberg neural networks with p-Laplace diffusion and partially known transition rates via a differential inequality,” Advances in Difference Equations, vol. 2013, article 183, 2013.
  10. W. Zhang, “Stability analysis of Markovian jumping impulsive stochastic delayed RDCGNNs with partially known transition probabilities,” Advances in Difference Equations, vol. 2015, article 102, 2015.
  11. W. I. Kim, R. Xiong, Q. Zhu, and J. Wu, “Average consensus analysis of distributed inference with uncertain Markovian transition probability,” Mathematical Problems in Engineering, vol. 2013, Article ID 505848, 7 pages, 2013.
  12. X. Wang, J.-A. Fang, A. Dai, and W. Zhou, “Global synchronization for a class of Markovian switching complex networks with mixed time-varying delays in the delay-partition approach,” Advances in Difference Equations, vol. 2014, article 248, 2014.
  13. B.-C. Wang and J.-F. Zhang, “Mean field games for large-population multiagent systems with Markov jump parameters,” SIAM Journal on Control and Optimization, vol. 50, no. 4, pp. 2308–2334, 2012.
  14. G. Miao and T. Li, “Mean square containment control problems of multi-agent systems under Markov switching topologies,” Advances in Difference Equations, vol. 2015, article 157, 2015.
  15. B.-C. Wang and J.-F. Zhang, “Distributed output feedback control of Markov jump multi-agent systems,” Automatica, vol. 49, no. 5, pp. 1397–1402, 2013.
  16. M. J. Park, O. M. Kwon, J. H. Park, S. M. Lee, and E. J. Cha, “Randomly changing leader-following consensus control for Markovian switching multi-agent systems with interval time-varying delays,” Nonlinear Analysis: Hybrid Systems, vol. 12, pp. 117–131, 2014.
  17. W. Li, L. Xie, and J.-F. Zhang, “Containment control of leader-following multi-agent systems with Markovian switching network topologies and measurement noises,” Automatica, vol. 51, pp. 263–267, 2015.
  18. L. Xiao, S. Boyd, and S.-J. Kim, “Distributed average consensus with least-mean-square deviation,” Journal of Parallel and Distributed Computing, vol. 67, no. 1, pp. 33–46, 2007.
  19. S. Liu, L. Xie, and H. Zhang, “Distributed consensus for multi-agent systems with delays and noises in transmission channels,” Automatica, vol. 47, no. 5, pp. 920–934, 2011.
  20. A. V. Skorokhod, Asymptotic Methods in the Theory of Stochastic Differential Equations, vol. 78, American Mathematical Society, Providence, RI, USA, 2009.
  21. W. Lu and T. Chen, “New approach to synchronization analysis of linearly coupled ordinary differential systems,” Physica D: Nonlinear Phenomena, vol. 213, no. 2, pp. 214–230, 2006.
  22. E. N. Lorenz, “Deterministic nonperiodic flow,” Journal of the Atmospheric Sciences, vol. 20, no. 2, pp. 130–141, 1963.
  23. G. Chen and T. Ueta, “Yet another chaotic attractor,” International Journal of Bifurcation and Chaos, vol. 9, no. 7, pp. 1465–1466, 1999.
  24. J. Lü and G. Chen, “A new chaotic attractor coined,” International Journal of Bifurcation and Chaos in Applied Sciences and Engineering, vol. 12, no. 3, pp. 659–661, 2002.
  25. B. Liu, W. Lu, and T. Chen, “Synchronization in complex networks with stochastically switching coupling structures,” IEEE Transactions on Automatic Control, vol. 57, no. 3, pp. 754–760, 2012.
  26. X. Mao and C. Yuan, Stochastic Differential Equations with Markovian Switching, Imperial College Press, London, UK, 2006.
  27. Y. Tang, H. Gao, and J. Kurths, “Distributed robust synchronization of dynamical networks with stochastic coupling,” IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 61, no. 5, pp. 1508–1519, 2014.