Abstract and Applied Analysis
Volume 2013 (2013), Article ID 158731, 9 pages
http://dx.doi.org/10.1155/2013/158731
Research Article

Distributed Consensus for Discrete-Time Directed Networks of Multiagents with Time-Delays and Random Communication Links

1Department of Mathematics, Yangzhou University, Yangzhou 225002, China
2Department of Engineering, Faculty of Engineering and Science, University of Agder, 4898 Grimstad, Norway
3School of Computer Science and Technology, Nanjing University of Science and Technology, Nanjing 210094, China

Received 11 May 2013; Accepted 5 June 2013

Academic Editor: Zidong Wang

Copyright © 2013 Yurong Liu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper is concerned with the leader-following consensus problem in mean-square for a class of discrete-time multiagent systems. The multiagent systems under consideration are directed and contain arbitrary discrete time-delays. The communication links are assumed to be time-varying and stochastic. It is also assumed that some agents in the network are well informed and act as leaders, while the others are followers. By introducing novel Lyapunov functionals and employing some new analytical techniques, sufficient conditions are derived to guarantee the leader-following consensus in mean-square for the concerned multiagent systems, so that all the agents are steered to an anticipated state target. A numerical example is presented to illustrate the main results.

1. Introduction

In recent years, the multiagent distributed coordination problem has attracted many researchers, since it has broad applications in satellite formation flying, cooperative search by unmanned air vehicles, scheduling of automated highway systems, air traffic control, and distributed optimization of multiple mobile robotic systems. In many applications involving multiagent systems, one of the most fundamental problems is that groups of agents need to agree upon certain quantities of interest, which is called the consensus or agreement problem in the literature. Consensus problems have a long history in the field of computer science [1]. Many distributed control and estimation strategies are designed based on consensus algorithms [2–8], and consensus problems are used to model many different phenomena involving information flow among agents, including flocking, swarming, synchronization, distributed decision making, and schooling; see, for example, the survey paper [9]. Consensus problems for networked dynamic systems have been extensively studied in the last few years [10–12].

Usually, algebraic graph theory [13] acts as a good framework for analyzing consensus problems; see, for example, [10, 11, 14, 15]. In this framework, each agent is modeled as a vertex of a graph, and an edge of the graph joins node j to node i if agent i is receiving information from agent j. Models and algorithms for consensus have been reported by a number of investigators. In [16], Vicsek et al. proposed a simple discrete-time model to simulate a group of autonomous agents moving in the plane with the same speed but different headings. Vicsek’s model is in essence a simplified version of the model introduced earlier by Reynolds [17]. Based on algebraic graph theory [18], it has been shown that the network connectivity is a key factor in reaching consensus [11, 14, 15]. It has also been proved that consensus in a network with a dynamically changing topology can be reached if and only if the time-varying network topology contains a spanning tree frequently enough as the network evolves with time [11, 14]. Recently, stochastic-approximation-type algorithms with a decreasing step size have been developed, and almost sure convergence has been established for consensus seeking; see, for example, [19] and the references therein. It has been recognized that time-delay is unavoidable in signal transmission and is also one of the main sources of instability and poor performance of systems [20–22]. Recently, multiagent networks with time-delay have started to receive some initial attention [15, 23, 24].

On the other hand, in many multiagent systems, some agents are well informed and serve as leaders, while the others track the leaders and act as followers. It has been reported that the leader-following configuration is an energy-saving mechanism [25] found in many biological systems, and that it can also enhance the communication and orientation of the flock [26]. The leader-following consensus has been an active area of research [14, 27, 28]. Such a leader-following consensus problem was considered in [14], where it was proved that if all the agents are jointly connected with their leader, their states converge to that of the leader as time goes on. Reference [28] studied a leader-following consensus problem for a multiagent system with a varying-velocity leader and time-varying delays, where the interaction graph among the followers was switching and balanced. Reference [27] investigated the leader-following consensus problem for higher-order multiagent systems. Unfortunately, the delayed networks considered so far for the leader-following consensus problem are almost all continuous-time multiagent systems, and the leader-following consensus problem for discrete-time multiagent systems with time-delays and random communication links has received little research attention. Hence, it is our intention in this paper to tackle such an important yet challenging problem.

In this paper, we investigate the leader-following consensus problem for discrete-time directed multiagent systems with time-delays and random communication links. By constructing new Lyapunov functionals and employing some analytical techniques, sufficient conditions for the leader-following consensus in mean-square are established for the multiagent system, so that all the agents are steered to an anticipated state target. A numerical example is used to illustrate the proposed theory.

2. Problem Formulation

Throughout this paper, ℕ and ℤ⁺ stand for the set of natural numbers and the set of positive integers, respectively; ℝ, ℝ^n, and ℝ^{n×m} denote, respectively, the set of real numbers, the n-dimensional Euclidean space, and the set of all n × m real matrices. The superscript T represents the transpose of a matrix, and |·| may stand for either the absolute value of a real number or the standard Euclidean norm, depending on the context. In an underlying probability space (Ω, ℱ, ℙ), E[·] and D[·] denote, respectively, the mean and the variance of a random variable, and E[·|ℱ] will mean the expectation conditional on the σ-algebra ℱ.

Consider N agents distributed according to a directed graph G = (V, E, A) with a set of nodes V = {1, 2, …, N}, a set of edges E ⊆ V × V, and a weighted adjacency matrix A = (a_ij) with nonnegative adjacency elements a_ij. In G, the ith node represents the ith agent, and a directed edge (simply called an edge) from node j to node i, denoted as an ordered pair (j, i), represents a unidirectional information exchange link from node j to node i; that is, agent i can receive or obtain information from agent j, but not necessarily vice versa. The set of neighbors of node i is denoted by N_i = {j ∈ V : (j, i) ∈ E}. The weighted adjacency matrix A of a weighted directed graph G is defined such that a_ij is a positive weight if and only if (j, i) ∈ E (so there is no edge between a node and itself; that is, a_ii = 0 for all i ∈ V). In other words, a_ij > 0 if (j, i) ∈ E, and a_ij = 0 otherwise. A directed path (simply called a path) of length s from node i to node j (i ≠ j) is a sequence of edges (i, r_1), (r_1, r_2), …, (r_{s−1}, j) with distinct intermediate nodes r_1, …, r_{s−1}. A graph is said to be strongly connected if there exists a path between any two distinct nodes in it. For convenience of presentation, the two names, agent and node, will be used interchangeably.
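As an aside for readers who wish to experiment, the strong-connectivity property used throughout can be checked mechanically from the adjacency matrix. The following sketch (not from the paper; the helper name and example matrices are illustrative) runs a breadth-first search from every node; since a digraph is strongly connected exactly when its reverse is, traversing along the entries a_ij > 0 in either orientation gives the same verdict.

```python
from collections import deque

def is_strongly_connected(A):
    """Check strong connectivity of the directed graph encoded by the
    (possibly weighted) adjacency matrix A, where A[i][j] > 0 is read
    as 'agent i receives information from agent j'. The BFS traverses
    the reversed information-flow edges, which is harmless: a digraph
    is strongly connected iff its reverse is."""
    n = len(A)
    for start in range(n):
        seen = {start}
        queue = deque([start])
        while queue:
            i = queue.popleft()
            for j in range(n):
                if A[i][j] > 0 and j not in seen:
                    seen.add(j)
                    queue.append(j)
        # if some node is unreachable from 'start', connectivity fails
        if len(seen) != n:
            return False
    return True

ring = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]    # directed 3-cycle
chain = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]   # directed path, not strongly connected
```

For a weighted adjacency matrix, only the zero/nonzero pattern matters here.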

Now consider the dynamics of N agents distributed over a directed graph G. Let x_i(k) denote the state of node i at time k, let x(k) denote the state of the system accordingly, and let A be the weighted adjacency matrix associated with the graph. In general, the dynamics of a discrete-time multiagent network with fixed topology are described by system (1), where τ_ij is the time-delay of the information transmission from node j to node i.
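The exact update law is given in (1); as an illustration only, the sketch below simulates an assumed delayed weighted-averaging rule of the common form x_i(k+1) = Σ_j a_ij x_j(k − τ_ij) with row-stochastic weights. The weight matrix, delay pattern, and helper name are our own choices, not the paper's.

```python
import numpy as np

def step_delayed_consensus(history, A, tau):
    """One step of an assumed delayed-averaging model in the spirit of
    system (1): x_i(k+1) = sum_j A[i, j] * x_j(k - tau[i][j]), with
    row-stochastic A. history[-1 - d] holds the state at time k - d."""
    n = A.shape[0]
    x_next = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if A[i, j] > 0:
                x_next[i] += A[i, j] * history[-1 - tau[i][j]][j]
    return x_next

# a strongly connected 3-node ring with self-loops (rows sum to one)
A = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5]])
tau = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]     # bounded per-link delays
history = [np.array([1.0, 2.0, 3.0])] * 2   # constant initial history
for _ in range(300):
    history.append(step_delayed_consensus(history, A, tau))
```

Because every update is a convex combination of past states, the states stay within the initial range while the spread between agents shrinks toward a common value.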

Remark 1. The consensus problem for the multiagent system (1) was considered in [29], the consensus problem for its continuous-time counterpart was investigated in [24], and system (1) without time-delays has also been investigated extensively; see, for example, [10, 19] and the references therein.

In the multiagent network (1), it is assumed that there is no communication failure between agents. However, during signal exchange among the sensor nodes, an important source of uncertainty is signal loss, which may be caused by temporary extreme deterioration of the link quality, for instance, due to blocking objects traveling between the transmitter and the receiver [30]. Therefore, we consider the general case where each communication link is subject to some probability distribution. Assume that the weighted adjacency matrix A(k) = (a_ij(k)) is time-varying, with each a_ij(k) being a random variable, and denote the mean and the variance of a_ij(k) by ā_ij and σ_ij², respectively. As usual, we assume that ā_ij > 0 if and only if (j, i) ∈ E, and the set of neighbors of node i is denoted by N_i accordingly.
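One concrete way to realize such randomly failing links is a Bernoulli packet-loss model: each nominal weight survives independently with some probability, and the weight lost to failed links is returned to the diagonal so that every realized row still sums to one. This is only a sketch of one plausible distribution, not the paper's exact model; the function name and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_links(A, p, rng):
    """Sample one realization of a random adjacency matrix under an
    assumed Bernoulli failure model: each link survives with
    probability p, an agent always 'hears' itself, and the weight of
    failed links is added back on the diagonal so each realized row
    still sums to one (A is assumed row-stochastic)."""
    n = A.shape[0]
    mask = rng.random((n, n)) < p
    np.fill_diagonal(mask, True)
    Ak = np.where(mask, A, 0.0)
    Ak[np.arange(n), np.arange(n)] += 1.0 - Ak.sum(axis=1)
    return Ak

A = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5]])   # nominal row-stochastic weights
Ak = sample_links(A, 0.8, rng)
```

With p = 1 the nominal matrix is recovered; with p = 0 every agent keeps only its own state.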

Now, the dynamics of the discrete-time multiagent network with random communication links are given by system (3), where, as in the previous discussion, τ_ij is the time-delay of the information transmission from node j to node i; this convention is adopted hereafter.

In practical applications, it is often important to steer the state of each agent in a network to a fixed objective. In this paper, we consider the regulation of the multiagent network (3) so that all agents can reach a common objective. Suppose that some of the agents are well informed and act as leaders. Specifically, let s be the anticipated state target. By relabeling the agents if necessary, we assume without loss of generality that the first l agents serve as leaders and the other ones act as followers. Consider the controlled multiagent network (4), where the control gains involved are given constants.
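To illustrate the leader-following mechanism in the simplest setting, the sketch below uses an assumed delay-free, deterministic version of the controlled network: all agents average over their neighbors with row-stochastic weights, and a single leader additionally feels a proportional pull toward the target. The weights, gain value, and helper name are our own illustrative choices, not the paper's.

```python
import numpy as np

def controlled_step(x, A, s, gains):
    """One step of an assumed delay-free controlled update: every agent
    averages its neighbors' states via the row-stochastic matrix A, and
    each leader i additionally feels a pull gains[i] * (s - x[i])
    toward the common target s."""
    x_next = A @ x
    for i, c in gains.items():
        x_next[i] += c * (s - x[i])
    return x_next

A = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5]])   # strongly connected, rows sum to one
s = 4.0                            # anticipated state target
gains = {0: 0.3}                   # agent 0 acts as the (only) leader
x = np.array([1.0, 2.0, 3.0])
for _ in range(300):
    x = controlled_step(x, A, s, gains)
```

Because A is row-stochastic, the consensus state s·1 is a fixed point of these dynamics, and the leader's feedback makes the error system a strict contraction, so all agents are steered to the target.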

Definition 2. The multiagent network (3) is said to reach leader-following consensus on a state target s in mean-square if, for any solution of system (4), it always holds that

lim_{k→∞} E|x_i(k) − s|² = 0,  i = 1, 2, …, N.

In this paper, we investigate the leader-following consensus problem in mean-square for the discrete-time multiagent system (3). By constructing novel Lyapunov functionals and employing some new analytical techniques, sufficient conditions are established to ensure the leader-following consensus in mean-square for the multiagent system (3).

3. Main Results and Proofs

This section is devoted to the leader-following consensus analysis for system (3). Before introducing our main results, let us make some necessary preparations.

Assume that the random variables a_ij(k) are independent with respect to i, j, and k and are also independent of the initial states.

Let with for , , otherwise . Then, (4) is rewritten as

Denote , and . Then, all row sums of both and are one, and all row sums of are zero; namely,

Also denote the variance of random variable by , where is its standard deviation. Notice that is the variance of random variable , and for is dependent with respect to and . Therefore, we have

For the interaction topology of multiagent system (3), we make the following assumption.

Assumption 3. The graph G is strongly connected; namely, the adjacency matrix A is irreducible.

Lemma 4 (see [31]). Let M be a nonnegative matrix (that is, M ≥ 0 entrywise), and let ρ(M) be its spectral radius (called the Perron root of M). In addition, suppose that the directed graph associated with M is strongly connected (i.e., M is irreducible); then there is a positive vector ξ such that ξᵀM = ρ(M)ξᵀ.

From Lemma 4, it follows readily that there exists a positive left eigenvector of the matrix concerned, which will be used in the sequel. Also, we make the following assumption.
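The positive left eigenvector promised by Lemma 4 can be approximated numerically by power iteration on the transpose. The sketch below (a rough numerical illustration, not a robust eigensolver; the function name and example matrix are ours) does so for a row-stochastic example, for which the Perron root is 1.

```python
import numpy as np

def perron_left_vector(A, iters=500):
    """Approximate, by power iteration on A.T, the positive left
    eigenvector xi of a nonnegative irreducible matrix A, i.e. a
    vector with xi^T A = rho(A) xi^T as in Lemma 4 (Perron-Frobenius)."""
    v = np.ones(A.shape[0])
    for _ in range(iters):
        v = A.T @ v
        v /= v.sum()                 # keep the iterate normalized and positive
    rho = (A.T @ v).sum() / v.sum()  # at convergence this ratio equals rho(A)
    return v, rho

A = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5]])   # irreducible and row-stochastic
xi, rho = perron_left_vector(A)
```

For this doubly stochastic example the left eigenvector is uniform; normalization by the sum keeps the iterate positive throughout.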

Assumption 5. Assume that

Remark 6. Notice that the condition in Assumption 5 means that the set consisting of the first l nodes (the leaders) and their neighbors contains all the nodes of the network.

We are now in a position to introduce the main results of this paper.

Theorem 7. Consider the multiagent systems (3) and (4). Suppose that Assumptions 3 and 5 are satisfied and that the required condition on the random communication links holds. Then, the multiagent network (3) reaches leader-following consensus on the state target s in mean-square.

Proof. After introducing suitable error variables, the controlled network (6) can be rewritten as system (21).
Given the initial value of network (21), denote by F_k the σ-algebra consisting of all events induced by the random variables a_ij(m) up to time k.
To prove that the multiagent network (3) reaches leader-following consensus on the state target s in mean-square, it suffices to prove the mean-square stability of system (21). To this end, we construct a suitable Lyapunov functional. Then, for system (21), using (8)–(11), we carry out a computation whose terms can be bounded as in (27)–(29); from (27)–(29) it follows that (30) holds, and a straightforward computation yields (31).
Noticing the equality in (7), it follows readily that (32) holds. Substituting (31) and (32) into (30) yields (33), and it is easy to see that (34) holds.
Substituting (34) into (33) results in an inequality which implies the desired convergence. Employing the Lyapunov stability theory, we can then deduce that the consensus error tends to zero in mean-square. This completes the proof of the theorem.

Remark 8. In Theorem 7, the required condition always holds when the variances of the random communication links are sufficiently small. In particular, when the interaction topology of the multiagent system is deterministic, the system (3) and the controlled network (4) reduce, respectively, to systems (38) and (39). In this case, the condition is always satisfied, and from Theorem 7 we have the following corollary.

Corollary 9. Consider the multiagent systems (38) and (39). Under Assumptions 3 and 5, the multiagent network (38) reaches leader-following consensus on the state target s.

In the previous discussion, we have considered only scalar individual states; it is straightforward to extend the results to the case where the individual states are vectors. Consider the multiagent system (40) of N nodes with vector-valued states, with the controlled network given by (41). We have the following results.

Theorem 10. Consider the multiagent systems (40) and (41). Suppose that Assumptions 3 and 5 are satisfied and that the corresponding condition holds. Then, the multiagent network (40) reaches the leader-following consensus on the state target in mean-square.

Proof. The proof of this theorem is similar to that of Theorem 7; the only modification needed is to replace some scalar multiplications by Kronecker products of matrices, and we omit the details here.

4. A Numerical Example

In this section, we present a numerical example to illustrate the proposed methods.

Example 1. Consider the multiagent networks (3) and (4); for simplicity, we consider a small network of agents. The interaction topology between the agents is shown in Figure 1(a), and the other parameters are taken as follows. Clearly, the network topology is strongly connected, and the remaining conditions of Theorem 7 can be verified by a straightforward computation. Therefore, by Theorem 7, the multiagent network (3) reaches the leader-following consensus on the anticipated state target in mean-square. With the above parameters and a set of initial values produced in a stochastic way, the numerical simulation shown in Figure 1(b) matches well with the theoretical results.
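As a complement to Figure 1, the following sketch simulates one sample path of a leader-following network with bounded delays and Bernoulli link failures. All parameters here (ring topology, delays, failure probability, leader gain, target) are our own illustrative choices, not the ones used in the paper, and a single trajectory can only illustrate, not prove, mean-square convergence.

```python
import numpy as np

rng = np.random.default_rng(1)
n, s, tau_max, p = 4, 2.0, 1, 0.95
A = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 0.5, 0.5],
              [0.5, 0.0, 0.0, 0.5]])            # strongly connected ring, rows sum to one
tau = rng.integers(0, tau_max + 1, size=(n, n))  # fixed per-link delays in {0, 1}
np.fill_diagonal(tau, 0)                         # no self-delay
gain = {0: 0.4}                                  # agent 0 is the informed leader

def sample_A(A, p, rng):
    """Bernoulli link failures (assumed model): a failed link's weight
    is returned to the diagonal so each realized row still sums to one."""
    mask = rng.random(A.shape) < p
    np.fill_diagonal(mask, True)
    Ak = np.where(mask, A, 0.0)
    Ak[np.arange(len(A)), np.arange(len(A))] += 1.0 - Ak.sum(axis=1)
    return Ak

hist = [np.zeros(n) for _ in range(tau_max)]
hist.append(rng.normal(0.0, 1.0, n))             # random initial states
for k in range(3000):
    Ak = sample_A(A, p, rng)
    x_next = np.array([sum(Ak[i, j] * hist[-1 - tau[i, j]][j] for j in range(n))
                       for i in range(n)])
    for i, c in gain.items():                    # leader is pulled toward s
        x_next[i] += c * (s - hist[-1][i])
    hist.append(x_next)
```

The target state s·1 is invariant under every realized update, so along the simulated trajectory all four agents are driven toward the target despite the delays and random link failures.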

Figure 1: Numerical simulation.

5. Conclusions

We have investigated the leader-following consensus problem in mean-square for a class of discrete-time multiagent systems. The network under study is directed and contains arbitrary time-delays and random communication links. Some agents in the network are well informed and serve as leaders. By employing novel Lyapunov functionals and analytical techniques, sufficient conditions have been established to ensure the leader-following consensus in mean-square for the multiagent system. A numerical example has been given to demonstrate the proposed approach.

Acknowledgments

This work was supported in part by the International Travel Grant 2011/R3 sponsored by the Royal Society of the UK, the National Natural Science Foundation of China under Grant 61074129, and the Natural Science Foundation of Jiangsu Province of China under Grant BK2012682.

References

  1. N. A. Lynch, Distributed Algorithms, Morgan Kaufmann, San Francisco, Calif, USA, 1997.
  2. D. Ding, Z. Wang, H. Dong, and H. Shu, “Distributed H∞ state estimation with stochastic parameters and nonlinearities through sensor networks: the finite-horizon case,” Automatica, vol. 48, no. 8, pp. 1575–1585, 2012.
  3. D. Ding, Z. Wang, B. Shen, and H. Shu, “H∞ state estimation for discrete-time complex networks with randomly occurring sensor saturations and randomly varying sensor delays,” IEEE Transactions on Neural Networks and Learning Systems, vol. 23, no. 5, pp. 725–736, 2012.
  4. H. Dong, Z. Wang, and H. Gao, “Distributed filtering for a class of time-varying systems over sensor networks with quantization errors and successive packet dropouts,” IEEE Transactions on Signal Processing, vol. 60, no. 6, pp. 3164–3173, 2012.
  5. J. Hu, Z. Wang, H. Gao, and L. K. Stergioulas, “Extended Kalman filtering with stochastic nonlinearities and multiple missing measurements,” Automatica, vol. 48, no. 9, pp. 2007–2015, 2012.
  6. J. Hu, Z. Wang, Y. Niu, and L. K. Stergioulas, “H∞ sliding mode observer design for a class of nonlinear discrete time-delay systems: a delay-fractioning approach,” International Journal of Robust and Nonlinear Control, vol. 22, no. 16, pp. 1806–1826, 2012.
  7. J. Liang, Z. Wang, B. Shen, and X. Liu, “Distributed state estimation in sensor networks with randomly occurring nonlinearities subject to time-delays,” ACM Transactions on Sensor Networks, vol. 9, no. 1, article 4, 2012.
  8. R. Olfati-Saber, “Distributed Kalman filter with embedded consensus filters,” in Proceedings of the 44th IEEE Conference on Decision and Control, and the European Control Conference (CDC-ECC '05), pp. 8179–8184, Seville, Spain, December 2005.
  9. W. Ren, R. W. Beard, and E. M. Atkins, “A survey of consensus problems in multi-agent coordination,” in Proceedings of the American Control Conference (ACC '05), vol. 3, pp. 1859–1864, June 2005.
  10. Y. Hatano and M. Mesbahi, “Agreement over random networks,” IEEE Transactions on Automatic Control, vol. 50, no. 11, pp. 1867–1872, 2005.
  11. W. Ren and R. W. Beard, “Consensus seeking in multiagent systems under dynamically changing interaction topologies,” IEEE Transactions on Automatic Control, vol. 50, no. 5, pp. 655–661, 2005.
  12. B. Shen, Z. Wang, and X. Liu, “Sampled-data synchronization control of dynamical networks with stochastic sampling,” IEEE Transactions on Automatic Control, vol. 57, no. 10, pp. 2644–2650, 2012.
  13. C. Godsil and G. Royle, Algebraic Graph Theory, Springer, New York, NY, USA, 2001.
  14. A. Jadbabaie, J. Lin, and A. S. Morse, “Coordination of groups of mobile autonomous agents using nearest neighbor rules,” IEEE Transactions on Automatic Control, vol. 48, no. 6, pp. 988–1001, 2003.
  15. R. Olfati-Saber and R. M. Murray, “Consensus problems in networks of agents with switching topology and time-delays,” IEEE Transactions on Automatic Control, vol. 49, no. 9, pp. 1520–1533, 2004.
  16. T. Vicsek, A. Czirók, E. Ben-Jacob, I. Cohen, and O. Shochet, “Novel type of phase transition in a system of self-driven particles,” Physical Review Letters, vol. 75, no. 6, pp. 1226–1229, 1995.
  17. C. W. Reynolds, “Flocks, herds, and schools: a distributed behavioral model,” Computer Graphics, vol. 21, no. 4, pp. 25–34, 1987.
  18. M. Fiedler, “Algebraic connectivity of graphs,” Czechoslovak Mathematical Journal, vol. 23, no. 98, pp. 298–305, 1973.
  19. M. Huang and J. H. Manton, “Stochastic consensus seeking with noisy and directed inter-agent communication: fixed and randomly varying topologies,” IEEE Transactions on Automatic Control, vol. 55, no. 1, pp. 235–241, 2010.
  20. H. Dong, Z. Wang, and H. Gao, “Fault detection for Markovian jump systems with sensor saturations and randomly varying nonlinearities,” IEEE Transactions on Circuits and Systems I, vol. 59, no. 10, pp. 2354–2362, 2012.
  21. J. Hu, Z. Wang, H. Gao, and L. K. Stergioulas, “Robust sliding mode control for discrete stochastic systems with mixed time delays, randomly occurring uncertainties, and randomly occurring nonlinearities,” IEEE Transactions on Industrial Electronics, vol. 59, no. 7, pp. 3008–3015, 2012.
  22. Z. Wang, B. Shen, H. Shu, and G. Wei, “Quantized H∞ control for nonlinear stochastic time-delay systems with missing measurements,” IEEE Transactions on Automatic Control, vol. 57, no. 6, pp. 1431–1444, 2012.
  23. J. Hu, Z. Wang, B. Shen, and H. Gao, “Gain-constrained recursive filtering with stochastic nonlinearities and probabilistic sensor delays,” IEEE Transactions on Signal Processing, vol. 61, no. 5, pp. 1230–1238, 2013.
  24. J. Lu, D. W. C. Ho, and J. Kurths, “Consensus over directed static networks with arbitrary finite communication delays,” Physical Review E, vol. 80, no. 6, Article ID 066121, 2009.
  25. D. Hummel, “Formation flight as an energy-saving mechanism,” Israel Journal of Zoology, vol. 41, no. 3, pp. 261–278, 1995.
  26. M. Andersson and J. Wallander, “Kin selection and reciprocity in flight formation,” Behavioral Ecology, vol. 15, no. 1, pp. 158–162, 2004.
  27. W. Ni and D. Cheng, “Leader-following consensus of multi-agent systems under fixed and switching topologies,” Systems & Control Letters, vol. 59, no. 3-4, pp. 209–217, 2010.
  28. K. Peng and Y. Yang, “Leader-following consensus problem with a varying-velocity leader and time-varying delays,” Physica A, vol. 388, no. 2-3, pp. 193–208, 2009.
  29. Y. Liu, D. W. C. Ho, and Z. Wang, “A new framework for consensus for discrete-time directed networks of multi-agents with distributed delays,” International Journal of Control, vol. 85, no. 11, pp. 1755–1765, 2012.
  30. M. Huang, S. Dey, G. N. Nair, and J. H. Manton, “Stochastic consensus over noisy networks with Markovian and arbitrary switches,” Automatica, vol. 46, no. 10, pp. 1571–1583, 2010.
  31. R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, UK, 1985.