Mathematical Problems in Engineering

Special Issue: Resource-Constrained Signal Processing in Sensor Networks

Research Article | Open Access

Won Il Kim, Rong Xiong, Qiuguo Zhu, Jun Wu, "Average Consensus Analysis of Distributed Inference with Uncertain Markovian Transition Probability", Mathematical Problems in Engineering, vol. 2013, Article ID 505848, 7 pages, 2013. https://doi.org/10.1155/2013/505848

Average Consensus Analysis of Distributed Inference with Uncertain Markovian Transition Probability

Academic Editor: Shuli Sun
Received: 18 Jun 2013
Revised: 09 Oct 2013
Accepted: 29 Oct 2013
Published: 20 Nov 2013

Abstract

The average consensus problem of distributed inference in a wireless sensor network under Markovian communication topology of uncertain transition probability is studied. A sufficient condition for average consensus of linear distributed inference algorithm is presented. Based on linear matrix inequalities and numerical optimization, a design method of fast distributed inference is provided.

1. Introduction

During the past few decades, consensus problems of multiagent systems with information exchange have been extensively studied, due to their widespread applications in autonomous spacecraft, unmanned air vehicles, mobile robots, and distributed sensor networks. Olfati-Saber and Murray introduced in [1, 2] a theoretical framework for solving consensus problems. In [3, 4], consensus problems of first-order integrator systems were studied based on algebraic graph theory. In [5, 6], consensus problems of directed second-order systems were presented. In [5], the authors provided a necessary and sufficient condition for reaching mean square consensus of discrete-time second-order systems. Consensus conditions of continuous-time second-order systems were studied in [6] by a Linear Matrix Inequality (LMI) approach.

Among consensus problems, the average consensus problem, which requires distributed computation of the average of the initial states of a network [1, 2], is particularly challenging. For a strongly connected network, [1] proved that the average consensus problem is solvable if and only if the network is balanced. Discrete-time average consensus plays a key role in distributed inference in sensor networks. In networks of fixed topology, [7] gave necessary and sufficient conditions for linear distributed inference to achieve average consensus. A design method was also presented in [7] to implement linear distributed inference with the fastest consensus. Because of noisy communication channels, link failures often occur in a real network. Therefore, it is meaningful to study distributed inference in networks of switching topology. Through a common Lyapunov function, a result of [1] states that distributed inference reaches average consensus in a network of switching topology if the network remains strongly connected and balanced. Reference [8] modeled a network of switching topology using a Bernoulli process and established a necessary and sufficient condition for average consensus of distributed inference; the condition is related to a mean Laplacian matrix.

The Bernoulli process in [8] means that the network link failure events are temporally independent. From the viewpoint of engineering, it is more reasonable to consider network link failures with temporal dependence. The most famous and most tractable stochastic process with temporal dependence is the Markov chain, in which any future event is independent of the past events and depends only on the present event. This motivates us to model a network of switching topology using a Markov chain and hence to study distributed inference using Markovian jump linear system methods [9–12]. In practice, the transition probabilities of a Markov chain are not known precisely a priori, and only estimated values of the transition probabilities are available. Hence, this paper considers networks with Markovian communication of uncertain transition probability.

In fact, in research on networked control systems, the Markov chain has been used by several authors to describe random communication in networks. Reference [13] provided a Markov-chain packet-loss model in networked control. Under network communication with update times driven by a Markov chain, [14] gave stability conditions for model-based networked control systems. Networked control systems with bounded packet losses and transmission delays were modeled through a Markov chain in [15]. The networked predictive control system in [16] adopted two Markov chains to describe data transmission in both the controller-actuator channel and the sensor-controller channel.

In this paper, ℕ is used to denote the set of all nonnegative integers. The n × n real identity matrix is denoted by I_n. Let 1 be the vector whose elements are all equal to 1. The Euclidean norm is denoted by ‖·‖. If a matrix P is positive (negative) definite, it is denoted by P > 0 (P < 0). The notation * within a matrix represents the symmetric term of the matrix. The expected value is represented by E[·].

The paper is organized as follows. Section 2 contains a description of the network and linear distributed inference. Section 3 presents average consensus conditions and a design method. Numerical simulation results are in Section 4. Finally, Section 5 draws conclusions.

2. Network Description

Consider distributed inference in a wireless sensor network consisting of n sensors. Each sensor collects a local measurement about the situation of the environment. It is assumed that these local measurements are independent and identically distributed random variables. The goal of inference is for all sensors to reach the global measurement, namely the average of the n local measurements, so that the true situation of the environment can be monitored convincingly.

This paper studies iterative distributed inference. Define the set ℰ, which includes all realizable undirected links in the wireless sensor network. At the kth iteration (k ∈ ℕ), the successful communication links in the wireless sensor network are described by the set ℰ(k) ⊆ ℰ. A pair (i, j) ∈ ℰ(k) means that the sensors i and j communicate with each other at k. A pair (i, j) ∈ ℰ but (i, j) ∉ ℰ(k) means that there is no communication link between the sensors i and j at k. Due to noisy communication channels and a limited network power budget, ℰ(k) is assumed to be random and is modeled as follows. Given m distinct subsets ℰ₁, …, ℰ_m ⊆ ℰ, let θ(k) be a stochastic process taking values in {1, …, m} and driven by a Markov chain with a transition probability matrix Π = [π_st], where π_st = Pr{θ(k + 1) = t | θ(k) = s} ≥ 0 for all s, t, and Σ_t π_st = 1 for all s. However, these π_st are not known precisely. Each π_st is expressed as π_st = π̂_st + Δ_st with a known π̂_st and an unknown Δ_st whose absolute value is less than a given positive constant δ_st. For any s, Σ_t π̂_st = 1 and Σ_t Δ_st = 0. This paper models ℰ(k) = ℰ_{θ(k)}. The neighborhood of sensor i at k is denoted by N_i(k) = {j : (i, j) ∈ ℰ(k)}, and the element number of this set is denoted by |N_i(k)|.
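The switching model above can be simulated directly. The following sketch (our code, not the paper's; the transition matrix and edge sets are illustrative assumptions) samples the Markov chain θ(k) that selects which edge set is active at each iteration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical nominal transition matrix (rows sum to 1).
Pi_hat = np.array([[0.6, 0.3, 0.1],
                   [0.2, 0.5, 0.3],
                   [0.3, 0.3, 0.4]])

# Three hypothetical edge sets on 4 nodes (undirected links).
edge_sets = [
    [(0, 1), (1, 2), (2, 3)],          # E_1: a path
    [(0, 1), (1, 2), (2, 3), (3, 0)],  # E_2: a cycle
    [(0, 2), (1, 3)],                  # E_3: two disjoint links
]

def simulate_theta(Pi, k_max, theta0=0, rng=rng):
    """Sample theta(0), ..., theta(k_max) from the chain with transition matrix Pi."""
    theta = [theta0]
    for _ in range(k_max):
        # Next state depends only on the present state (Markov property).
        theta.append(rng.choice(len(Pi), p=Pi[theta[-1]]))
    return theta

theta = simulate_theta(Pi_hat, 1000)
# At iteration k, the realized link set is edge_sets[theta[k]].
```

In the paper's setting the true transition matrix differs from Pi_hat by a bounded perturbation; the simulation structure is unchanged.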

For each sensor i, set its initial state x_i(0) to its local measurement. At the kth iteration, each sensor i obtains its neighbors' states and updates its own state using the following linear iteration law: x_i(k + 1) = x_i(k) + h Σ_{j ∈ N_i(k)} (x_j(k) − x_i(k)), where h is the weight parameter which is assigned by designers. Our study of the above distributed inference has two objectives: one is to derive a condition for the convergence of every x_i(k) to the average x̄ = (1/n) Σ_{j=1}^{n} x_j(0) in the sense of mean square; the other is to find an h such that fast convergence is achieved.
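The iteration law can be sketched in a few lines. This is a minimal illustration (our code; the graph, step size, and initial states are assumptions) with a fixed connected topology, where the states converge to the average of the initial data:

```python
import numpy as np

def consensus_step(x, edges, h):
    """One iteration: x_i <- x_i + h * sum over neighbors j of (x_j - x_i)."""
    x_new = x.copy()
    for i, j in edges:             # each undirected link updates both endpoints
        x_new[i] += h * (x[j] - x[i])
        x_new[j] += h * (x[i] - x[j])
    return x_new

edges = [(0, 1), (1, 2), (2, 3)]   # a path graph on 4 sensors
x = np.array([1.0, 3.0, 5.0, 7.0])
# The pairwise updates are symmetric, so the sum (and hence the
# average, 4.0 here) is preserved exactly at every iteration.
for _ in range(200):
    x = consensus_step(x, edges, h=0.3)
# All entries are now close to the initial average 4.0.
```

The step size must be small enough relative to the graph (here h = 0.3 works for this path graph); too large an h makes the iteration diverge.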

3. Average Consensus Analysis

3.1. Convergence Condition

Denote x(k) = [x₁(k), …, x_n(k)]ᵀ. The system in Section 2 can be described as x(k + 1) = W_{θ(k)} x(k), where W_s is a matrix with entries [W_s]_{ij}. For (i, j) ∈ ℰ_s with i ≠ j, [W_s]_{ij} = h. For (i, j) ∉ ℰ_s with i ≠ j, [W_s]_{ij} = 0, and [W_s]_{ii} = 1 − h |N_i| under ℰ_s; that is, W_s = I_n − h L_s, with L_s the Laplacian matrix of the graph with edge set ℰ_s. From (2)~(6), it is seen that [W_s]_{ij} = [W_s]_{ji}, since the links are undirected, and hence each W_s is symmetric with W_s 1 = 1 and 1ᵀ W_s = 1ᵀ.
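These structural properties are easy to verify numerically. The sketch below (our notation, not the paper's code) builds W_s = I − h·L_s from an assumed edge set and checks that its row sums and column sums equal 1, which is what preserves the average during the iteration:

```python
import numpy as np

def laplacian(n, edges):
    """Graph Laplacian of an undirected graph on n nodes."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    return L

n, h = 4, 0.3                      # assumed sizes for the sketch
W = np.eye(n) - h * laplacian(n, [(0, 1), (1, 2), (2, 3)])
ones = np.ones(n)
# L @ 1 = 0 and L is symmetric, so W @ 1 = 1 and 1^T W = 1^T.
print(np.allclose(W @ ones, ones), np.allclose(ones @ W, ones))  # True True
```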

Furthermore, (10)~(12) imply that 1ᵀ x(k + 1) = 1ᵀ x(k) for all k. Then, we have (1/n) 1ᵀ x(k) = x̄ for all k, which means that the average of the states is invariant. Denote e(k) = x(k) − x̄ 1. The iterative distributed inference is said to reach average consensus in the mean square sense if lim_{k→∞} E[‖e(k)‖²] = 0 for any initial condition x(0) and any θ(0).

Theorem 1. The linear distributed inference (7) reaches average consensus by the choice of h if there exist m positive definite matrices P₁, …, P_m such that, for all s ∈ {1, …, m}, Σ_{t=1}^{m} π_st W_sᵀ P_t W_s − P_s < 0 holds for every admissible π_st with |Δ_st| ≤ δ_st.

Proof. Assume θ(k) = s at time step k. From (9), (14), and (16), one has e(k + 1) = W_s e(k).
We now consider the stochastic Lyapunov function V(e(k), θ(k)) = e(k)ᵀ P_{θ(k)} e(k).
Then for all s, we have E[V(e(k + 1), θ(k + 1)) | e(k), θ(k) = s] − V(e(k), s) = e(k)ᵀ (Σ_{t=1}^{m} π_st W_sᵀ P_t W_s − P_s) e(k).
Denote Q_s = P_s − Σ_{t=1}^{m} π_st W_sᵀ P_t W_s.
When the conditions in Theorem 1 are satisfied, we have E[V(e(k + 1), θ(k + 1)) | e(k), θ(k) = s] − V(e(k), s) ≤ −λ_min(Q_s) ‖e(k)‖², where λ_min(Q_s) denotes the minimal eigenvalue of Q_s, and λ_min(Q_s) > 0.
Therefore, for all k, for all x(0), and for all θ(0), E[V(e(k), θ(k))] decreases along nonzero e(k) and is bounded below by zero. This means lim_{k→∞} E[‖e(k)‖²] = 0.
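The mean-square decay asserted by Theorem 1 can be observed in a Monte Carlo experiment. The sketch below is purely illustrative (our code; the transition matrix, edge sets, and step size are assumptions, not the paper's example): it averages ‖e(k)‖² over many runs of the Markovian-switching iteration and watches it shrink:

```python
import numpy as np

rng = np.random.default_rng(1)
n, h, runs, k_max = 4, 0.3, 200, 60   # assumed sizes for the sketch

edge_sets = [[(0, 1), (1, 2), (2, 3)],
             [(0, 1), (1, 2), (2, 3), (3, 0)],
             [(0, 2), (1, 3)]]
Pi = np.array([[0.6, 0.3, 0.1],
               [0.2, 0.5, 0.3],
               [0.3, 0.3, 0.4]])

def laplacian(n, edges):
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    return L

# One doubly stochastic iteration matrix per communication situation.
Ws = [np.eye(n) - h * laplacian(n, es) for es in edge_sets]

msq = np.zeros(k_max + 1)             # Monte Carlo estimate of E[||e(k)||^2]
for _ in range(runs):
    x = rng.normal(size=n)
    e = x - x.mean()                  # disagreement vector e(0)
    s = 0
    msq[0] += e @ e
    for k in range(k_max):
        e = Ws[s] @ e                 # e(k+1) = W_{theta(k)} e(k)
        msq[k + 1] += e @ e
        s = rng.choice(3, p=Pi[s])    # Markovian topology switch
msq /= runs
# msq should decay toward zero as k grows.
```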

3.2. Optimal Design

From the above proof, it is seen that the conditions in Theorem 1 result not only in average consensus but also in a decreasing E[V(e(k), θ(k))]. Moreover, V(e(k), θ(k)) ≥ λ_min(P_{θ(k)}) ‖e(k)‖² implies that if E[V(e(k), θ(k))] converges to zero, then E[‖e(k)‖²] also converges to zero. Therefore, the decrease rate of E[V(e(k), θ(k))] can express the convergence speed of distributed inference. The following theorem is about the decrease rate of E[V(e(k), θ(k))].

Theorem 2. Given α ∈ (0, 1) and h, if there exist m positive definite matrices P₁, …, P_m such that condition (25) holds for all s ∈ {1, …, m}, then in linear distributed inference (7), for any nonzero e(k), E[V(e(k + 1), θ(k + 1)) | e(k), θ(k) = s] < α V(e(k), s).

Proof. According to the Schur complement [17], condition (25) can be rewritten as Σ_{t=1}^{m} π_st W_sᵀ P_t W_s − α P_s < 0.
From (27), for any nonzero e(k), one has e(k)ᵀ (Σ_{t=1}^{m} π_st W_sᵀ P_t W_s) e(k) < α e(k)ᵀ P_s e(k) for all s and for all admissible π_st. From (28), (19), and (20), it is known that E[V(e(k + 1), θ(k + 1)) | e(k), θ(k) = s] = e(k)ᵀ (Σ_{t=1}^{m} π_st W_sᵀ P_t W_s) e(k), and hence the claimed decrease rate follows.

Condition (25) in Theorem 2 is an LMI. We denote condition (25) as F(h, α) < 0 and, for any h, define α*(h) = inf{α ∈ (0, 1] : F(h, α) < 0 is feasible}. Using the LMI toolbox of MATLAB, α*(h) can be computed by Algorithm 1.

Choose a tolerance ε > 0;
Choose α_u large enough such that F(h, α_u) < 0 has solutions;
Choose α_l small enough such that F(h, α_l) < 0 has no solution;
repeat until α_u − α_l ≤ ε
  α ← (α_u + α_l)/2;
  Solve F(h, α) < 0;
  if F(h, α) < 0 has solutions
    α_u ← α;
  else
    α_l ← α;
end (repeat)
Set α*(h) = α_u.
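The bisection in Algorithm 1 can be sketched independently of any LMI solver. In the paper the feasibility test is the LMI F(h, α) < 0 solved with MATLAB's LMI toolbox; here `feasible` is a stand-in oracle (monotone in α, with a hypothetical threshold 0.73) so that the search logic itself can be exercised:

```python
def bisect_alpha(feasible, alpha_lo, alpha_hi, eps=1e-6):
    """Find the smallest alpha with feasible(alpha) True, to tolerance eps."""
    assert feasible(alpha_hi) and not feasible(alpha_lo)
    while alpha_hi - alpha_lo > eps:
        alpha = 0.5 * (alpha_lo + alpha_hi)
        if feasible(alpha):
            alpha_hi = alpha       # feasible: shrink the upper end
        else:
            alpha_lo = alpha       # infeasible: raise the lower end
    return alpha_hi                # feasible value within eps of the optimum

alpha_true = 0.73                  # hypothetical optimum for the stand-in oracle
result = bisect_alpha(lambda a: a >= alpha_true, 0.0, 1.0)
print(round(result, 4))  # → 0.73
```

Because feasibility of F(h, α) < 0 is monotone in α, bisection converges to α*(h) within the chosen tolerance in O(log(1/ε)) solver calls.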

For an h with α*(h) < 1, it is known from Theorems 1 and 2 that linear distributed inference (7) reaches average consensus and that α*(h) is a bound on the convergence speed. Since a smaller value of α*(h) gives a faster convergence speed, the fast distributed inference problem is addressed as min_h α*(h), which is an unconstrained optimization problem of only one variable. Many existing numerical optimization methods [18] can be utilized to solve this problem efficiently. When the minimizer h* satisfies α*(h*) < 1, the optimal parameter h* provides a fast linear distributed inference which reaches average consensus.
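The outer problem is a one-variable minimization. As a stand-in for α*(h), which needs an LMI solver, the sketch below (our code, with an assumed fixed graph) minimizes the per-step contraction factor max_i |1 − h·λ_i| over the nonzero Laplacian eigenvalues; like α*(h), it is a scalar function of h alone, so a simple grid search suffices:

```python
import numpy as np

def laplacian(n, edges):
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    return L

# Nonzero Laplacian eigenvalues of an assumed 4-node path graph.
lam = np.linalg.eigvalsh(laplacian(4, [(0, 1), (1, 2), (2, 3)]))[1:]

hs = np.linspace(0.01, 0.99, 981)                  # grid over the scalar h
rho = np.array([np.abs(1 - h * lam).max() for h in hs])
h_star = hs[rho.argmin()]
# For this surrogate the optimum is h* = 2/(lam_min + lam_max) = 0.5.
print(round(h_star, 2))  # → 0.5
```

Any standard one-dimensional method (golden-section search, bounded scalar minimization) could replace the grid; each function evaluation just reruns Algorithm 1 at the candidate h.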

4. Numerical Example

In this section, we present simulation results for average consensus of distributed inference in a simple sensor network. The network has 10 sensor nodes and switches among three possible communication situations, illustrated in Figure 1. The estimated transition probability matrix and the bounds on the estimation errors are specified for the simulation. Using the computation procedure in Section 3, the optimization problem (32) is solved; the graph of α*(h) is displayed in Figure 2, from which the optimal weight and the corresponding bound are obtained. For the communication situation in Figure 1(1), we use the design method in [7] of minimizing the asymptotic convergence factor and obtain a fixed-topology weight. The design method in [7] is also applied to the other 2 situations in Figure 1 to obtain the corresponding weights.

In order to compare our method with that in [7], the initial state of each sensor node is selected as follows.

Thus, from (1), we have the corresponding average value of the initial states. The real transition probability matrix is set within the assumed uncertainty bounds. Figures 3, 4, and 5 show the state curves of all sensor nodes under our design and the designs based on [7], respectively. It can be seen that all sensor states converge to the average and that our design has a faster convergence rate than the designs based on [7] have. The faster convergence arises because our method considers the random switching among the 3 communication situations, while the method of [7] considers only 1 communication situation.

5. Conclusion

The distributed average consensus problem in sensor networks has been studied under a Markovian switching communication topology of uncertain transition probabilities. Stochastic Lyapunov functions have been employed to investigate average consensus of linear distributed inference. A sufficient condition of average consensus has been proposed based on feasibility of a set of coupled LMIs. The design problem of fast distributed inference has been solved by numerical optimization techniques.

Acknowledgment

This work was supported by 973 Program of China (Grant 2009CB320603).

References

  1. R. Olfati-Saber and R. M. Murray, “Consensus problems in networks of agents with switching topology and time-delays,” IEEE Transactions on Automatic Control, vol. 49, no. 9, pp. 1520–1533, 2004.
  2. R. Olfati-Saber and R. M. Murray, “Consensus protocols for networks of dynamic agents,” in Proceedings of the American Control Conference, pp. 951–956, June 2003.
  3. L. Moreau, “Stability of multiagent systems with time-dependent communication links,” IEEE Transactions on Automatic Control, vol. 50, no. 2, pp. 169–182, 2005.
  4. A. Jadbabaie, J. Lin, and A. S. Morse, “Coordination of groups of mobile autonomous agents using nearest neighbor rules,” IEEE Transactions on Automatic Control, vol. 48, no. 6, pp. 988–1001, 2003.
  5. W. Ren and R. W. Beard, “Consensus seeking in multiagent systems under dynamically changing interaction topologies,” IEEE Transactions on Automatic Control, vol. 50, no. 5, pp. 655–661, 2005.
  6. Y. Hatano and M. Mesbahi, “Agreement over random networks,” IEEE Transactions on Automatic Control, vol. 50, no. 11, pp. 1867–1872, 2005.
  7. L. Xiao and S. Boyd, “Fast linear iterations for distributed averaging,” Systems & Control Letters, vol. 53, no. 1, pp. 65–78, 2004.
  8. S. Kar and J. M. F. Moura, “Sensor networks with random links: topology design for distributed consensus,” IEEE Transactions on Signal Processing, vol. 56, no. 7, part 2, pp. 3315–3326, 2008.
  9. Y. Ji, H. J. Chizeck, X. Feng, and K. A. Loparo, “Stability and control of discrete-time jump linear systems,” Control Theory and Advanced Technology, vol. 7, no. 2, pp. 247–270, 1991.
  10. O. L. V. Costa and M. D. Fragoso, “Stability results for discrete-time linear systems with Markovian jumping parameters,” Journal of Mathematical Analysis and Applications, vol. 179, no. 1, pp. 154–178, 1993.
  11. J. Xiong, J. Lam, H. Gao, and D. W. C. Ho, “On robust stabilization of Markovian jump systems with uncertain switching probabilities,” Automatica, vol. 41, no. 5, pp. 897–903, 2005.
  12. L. Zhang and E.-K. Boukas, “H∞ control for discrete-time Markovian jump linear systems with partly unknown transition probabilities,” International Journal of Robust and Nonlinear Control, vol. 19, no. 8, pp. 868–883, 2009.
  13. P. Seiler and R. Sengupta, “An H∞ approach to networked control,” IEEE Transactions on Automatic Control, vol. 50, no. 3, pp. 356–364, 2005.
  14. L. A. Montestruque and P. Antsaklis, “Stability of model-based networked control systems with time-varying transmission times,” IEEE Transactions on Automatic Control, vol. 49, no. 9, pp. 1562–1572, 2004.
  15. J. Yu, L. Wang, and M. Yu, “Switched system approach to stabilization of networked control systems,” International Journal of Robust and Nonlinear Control, vol. 21, no. 17, pp. 1925–1946, 2011.
  16. Y. Xia, G.-P. Liu, M. Fu, and D. Rees, “Predictive control of networked systems with random delay and data dropout,” IET Control Theory and Applications, vol. 3, no. 11, pp. 1476–1486, 2009.
  17. S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, vol. 15 of SIAM Studies in Applied Mathematics, SIAM, Philadelphia, Pa, USA, 1994.
  18. J. Nocedal and S. J. Wright, Numerical Optimization, Springer Series in Operations Research, Springer, New York, NY, USA, 1999.

Copyright © 2013 Won Il Kim et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

