Abstract

Considering any connected network with unknown initial states for all nodes, the nearest-neighbor rule is utilized for each node to update its own state at every discrete-time step. The distributed function calculation problem is defined as one node computing some function of the initial values of all the nodes based on its own observations. In this paper, taking into account uncertainties in the network and observations, an algorithm is proposed to compute and explicitly characterize the value of the function in question when the number of successive observations is large enough. When the number of successive observations is not large enough, we provide an approach to obtain the tightest possible bounds on such a function by using linear programming optimization techniques. Simulations are provided to demonstrate the theoretical results.

1. Introduction

Consider a network where each node $i$ has knowledge of its initial value $x_i(0)$. The objective for each node is to compute some function of the initial values of all nodes, that is, $f(x_1(0), x_2(0), \ldots, x_n(0))$, under the constraint that each node transmits its value to its neighbors and updates its own value with some protocol at each time-step. Such a problem is called distributed function calculation (DFC), and it has received considerable attention in recent years [1, 2] due to the development of distributed networks and systems.

Various protocols have been designed to achieve such a goal, for example, the nearest-neighbor rule [3]. Moreover, it leads to a well-studied field in control theory, distributed consensus [3–5], which can be viewed as a special case of the DFC problem; some model predictive control (MPC) methods have been proposed to solve the consensus problem [6–8]. When the observations of each node are noise-free, [2] shows that, instead of requiring asymptotic convergence, finitely many observations are enough to compute a predefined function. Reference [9] shows that consensus function calculation can be performed in a minimal number of time-steps, and, furthermore, this minimal number of time-steps can be fully characterized algebraically and graphically [10].

When it comes to a noisy network where each node only obtains a noisy (or uncertain) measurement of its neighbors' values (e.g., the transmission channel between nodes is noisy (bounded noise) and quantized (given a certain quantization scheme)), the estimation of the initial values becomes a very challenging problem. It has been shown in [11] that each node can obtain an unbiased estimate of any desired linear function as a linear combination of the noisy values received from its neighbors, along with its own values, using only the first-order statistics of the noise. If the second-order statistics of the noise are also known, the variance of the estimation error can be minimized in a distributed manner by refining the estimate of the linear function. Quantization and other communication issues are addressed and analyzed in [12].

Instead of stimulating the network with different initial conditions [11] to characterize statistical properties, the focus of this paper is on the best estimate of the initial condition of the network given a number of successive observations. The main contribution (Section 3) is an algorithm to solve for the initial values of all nodes from uncertain output measurements, for example, outputs with bounded channel noises and quantization, over a finite (minimal) estimation horizon. The minimal number of time-steps is characterized for fully recovering the initial states of all nodes from merely the information of a randomly chosen node and its neighbors.

Furthermore, when the minimal number of time-steps is not reached, we propose an approach to compute upper and lower bounds on the initial values of all the nodes. These bounds are the tightest possible given the system model, the output measurements, and the rough bounds on the unknown initial state. After obtaining the exact value or the tightest bounds, the exact value of, or bounds on, any objective function related to $x(0)$ can therefore be computed. Simulations on undirected and directed networks are given in Section 4 to verify the theoretical results. Finally, Section 5 summarizes the paper.

The notation is fairly standard. For a matrix $A$, $A_{ij}$ denotes the element in the $i$th row and $j$th column. For a column vector $x$, $x_i$ denotes its $i$th element; similarly, for a row vector $c$, $c_i$ denotes its $i$th element. We denote by $\mathbf{1}$ the column vector with all entries equal to $1$. For vectors $x$ and $y$, $x \le y$ and $x < y$ denote element-wise inequalities. $I_n$ denotes the identity matrix of dimension $n$.

2. Problem Formulation

The underlying weighted directed graph of the considered network is denoted by $\mathcal{G} = (\mathcal{V}, \mathcal{E}, A)$, where $\mathcal{V} = \{1, \ldots, n\}$ is the set of nodes, $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ is the set of edges, and $A = [a_{ij}]$ is the adjacency matrix, with nonnegative element $a_{ij} > 0$ when there is a link from node $j$ to node $i$ and $a_{ij} = 0$ when there is no such link. Let $x_i(k)$ denote the state of node $i$, which might represent a physical quantity such as attitude, position, temperature, voltage, and so forth.

Considering the classical protocol, the dynamics of a network of discrete-time integrator agents is defined by
$x_i(k+1) = \sum_{j=1}^{n} w_{ij}(k)\, x_j(k), \quad i = 1, \ldots, n,$ (1)
where $k$ is the discrete-time index and $w_{ij}(k)$ is the $(i,j)$ entry of a row-stochastic matrix $W(k)$ at the discrete-time index $k$, with the additional assumption that, for all $i$ and $j$, $w_{ij}(k) > 0$ if $j \in \mathcal{N}_i \cup \{i\}$ and $w_{ij}(k) = 0$ otherwise. Intuitively, the information state of each node is updated as the weighted average of its current state and the current states of its neighbors. Equation (1) can be written in matrix form as
$x(k+1) = W(k)\, x(k).$ (2)
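
As an illustrative sketch of the update (2), the following Python snippet iterates the nearest-neighbor rule on a hypothetical 4-node path graph with arbitrarily chosen row-stochastic weights (not an example taken from the paper):

```python
import numpy as np

# Hypothetical 4-node path graph; the weights are a simple row-stochastic
# choice made here for illustration, not the ones used in the paper.
W = np.array([
    [0.6, 0.4, 0.0, 0.0],
    [0.3, 0.4, 0.3, 0.0],
    [0.0, 0.3, 0.4, 0.3],
    [0.0, 0.0, 0.4, 0.6],
])
assert np.allclose(W.sum(axis=1), 1.0)   # row-stochastic

x = np.array([1.0, 3.0, 5.0, 7.0])       # initial states x(0)
for k in range(50):
    x = W @ x                            # x(k+1) = W x(k), nearest-neighbor rule
print(x)  # all entries approach the same consensus value c^T x(0)
```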

Definition 1 (DFC for the $i$th node [2]). Let $g(x_1(0), \ldots, x_n(0))$ be a function of the initial values; $g$ is calculable by node $i$ if it can be calculated by node $i$ after running iteration (2) for a sufficiently large number of time-steps.

Remark 2. In particular, we are interested in distributed linear function calculation; that is, $g$ is linear. The distributed average consensus problem [4] falls into this category.

Definition 3 (distributed asymptotic consensus [13]). System (2) is said to asymptotically achieve distributed consensus if $\lim_{k \to \infty} x(k) = \mathbf{1} c^{T} x(0)$, where $\mathbf{1}$ is the column vector with all components equal to $1$ and $c^{T}$ is some constant row vector. In other words, the values of all nodes converge to the same linear combination $c^{T} x(0)$ of the initial node values $x(0)$.

The set of all computable linear $g$ is characterized, and an algorithm for each node to compute its corresponding $g$ is proposed, in [2] for the noiseless case. However, there can be noise and uncertainty caused by quantization, both in the process and in the observation. Taking such uncertainty into account, the system can be generalized and modeled as a discrete-time network system of the form
$x(k+1) = W x(k) + B w(k), \quad y_i(k) = C_i x(k) + D_i w(k),$ (3)
for $k = 0, 1, \ldots, L-1$, where $L$ is the estimation horizon and $x(k)$, $y_i(k)$, and $w(k)$ are the state, output, and noise vectors (the word "noise" here has a broader meaning, e.g., noise, quantization error, and uncertainty), respectively. $W$ and $B$ are the interconnection matrix and the transmission channel noise and quantization distribution matrix, respectively, while $D_i$ are the output channel disturbance distribution matrices. $C_i$ is a $(d_i + 1) \times n$ matrix (where $d_i$ is the degree of node $i$) with a single $1$ in each row, encoding the information available to node $i$: its own and its neighbors' values at each time-step.
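
Since $C_i$ merely selects the state entries that node $i$ can see (its own and its neighbors'), it can be built directly from the adjacency matrix. The sketch below assumes the convention above that $a_{ij} > 0$ means node $i$ receives from node $j$; the adjacency matrix shown is illustrative only, not taken from the paper.

```python
import numpy as np

def selection_matrix(A, i):
    """Build C_i: one row per observed node (node i itself and its in-neighbors),
    with a single 1 per row picking that node's state."""
    n = A.shape[0]
    observed = [i] + [j for j in range(n) if j != i and A[i, j] > 0]
    C = np.zeros((len(observed), n))
    for row, j in enumerate(observed):
        C[row, j] = 1.0
    return C

# Illustrative adjacency matrix: node 0 receives from nodes 1 and 2.
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 1, 0, 0]], dtype=float)
print(selection_matrix(A, 0))  # 3 x 4 matrix selecting x_0, x_1, x_2
```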

3. Solution to Robust DFC Problem

3.1. Output Model

Assuming that each node in the network has some memory and computational ability, this section proposes a new scheme for each node to solve the DFC problem subject to bounded channel noises by using the system model (3) together with output measurements over a finite estimation horizon. Firstly, a necessary and sufficient condition under which each node obtains the exact initial state value is derived. It is shown that when this condition is satisfied, the initial state value can be easily solved by matrix calculations using the local observations of each node.

Then, for the case in which the exact calculation of the initial state value is not possible (i.e., the necessary and sufficient condition does not hold), an approach to estimate the initial condition of the network is proposed. This approach computes upper and lower bounds on the initial state value by using Linear Programming (LP) optimization techniques, and these bounds are the tightest possible given the network model, the available output measurements, and the a priori rough bounds on the initial state and noises.

The first problem of exact initial state calculation is formulated as follows.

Problem 1. With the system dynamics defined as in (3), assume that the system matrices $W$, $B$, $C_i$, and $D_i$ are given and that (A1) the pair $(W, C_i)$ is observable and (A2) the noise $w(k)$ is unknown but otherwise arbitrary. Find the necessary and sufficient condition such that the exact value of the initial condition $x(0)$ can be solved for any admissible noise.

Here, assume that the successive observations $y(0), y(1), \ldots, y(L-1)$ are available for, say, node $i$ (we drop the subscript $i$ for notational convenience, writing $C = C_i$ and $D = D_i$). For the estimation horizon $L$, let
$\bar{y} = [y^{T}(0), y^{T}(1), \ldots, y^{T}(L-1)]^{T}$, $\bar{w} = [w^{T}(0), w^{T}(1), \ldots, w^{T}(L-1)]^{T}$, $\mathcal{O} = [C^{T}, (CW)^{T}, \ldots, (CW^{L-1})^{T}]^{T}$, and $\mathcal{M}$ the block lower-triangular matrix with blocks $\mathcal{M}_{k,m} = CW^{k-1-m}B$ for $m < k$, $\mathcal{M}_{k,k} = D$, and $\mathcal{M}_{k,m} = 0$ for $m > k$. (4)
Using an iterative computation, the system algebraic formulation is described as
$y(k) = CW^{k} x(0) + \sum_{m=0}^{k-1} CW^{k-1-m} B\, w(m) + D\, w(k), \quad k = 0, 1, \ldots, L-1.$ (5)
For simplicity, the output over the estimation horizon is represented by
$\bar{y} = \mathcal{O}\, x(0) + \mathcal{M}\, \bar{w}.$ (6)
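
The stacked matrices in (4)–(6) can be assembled numerically as follows (a sketch under the reconstructed model (3), where the same noise vector $w(k)$ enters through $B$ and $D$):

```python
import numpy as np

def stack_observations(W, B, C, D, L):
    """Return (O, M) such that ybar = O @ x0 + M @ wbar over an L-step horizon,
    where ybar stacks y(0..L-1) and wbar stacks w(0..L-1)."""
    p, q = C.shape[0], B.shape[1]        # output and noise dimensions per step
    O = np.vstack([C @ np.linalg.matrix_power(W, k) for k in range(L)])
    M = np.zeros((p * L, q * L))
    for k in range(L):                   # block row k corresponds to y(k)
        M[k*p:(k+1)*p, k*q:(k+1)*q] = D  # direct feedthrough of w(k)
        for m in range(k):               # propagated process noise w(m), m < k
            M[k*p:(k+1)*p, m*q:(m+1)*q] = C @ np.linalg.matrix_power(W, k-1-m) @ B
    return O, M
```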

3.2. Exact Solution to DFC

Theorem 4 solves this problem.

Theorem 4. Let all data and assumptions be as given in Problem 1; then the exact value of the initial state $x(0)$ can be solved from (6) for any disturbance $\bar{w}$ if and only if there exists $Z$ whose rows span the left null space of $\mathcal{M}$ such that
$\operatorname{rank}(Z\mathcal{O}) = n.$ (7)

Proof. It is seen from (6) that $x(0)$ can be solved for all $\bar{w}$ if and only if there always exists $\Phi$ such that
$\Phi\mathcal{M} = 0 \quad \text{and} \quad \Phi\mathcal{O} = I_n,$ (8)
in which case $x(0) = \Phi\bar{y}$. Note that all $\Phi$ satisfying $\Phi\mathcal{M} = 0$ in (8) are given by $\Phi = \Gamma Z$, where $\Gamma$ is a free variable and the rows of $Z$ span the left null space of $\mathcal{M}$. Then condition (8) is satisfied if and only if there always exists $\Gamma$ such that
$\Gamma Z\mathcal{O} = I_n$ (9)
is satisfied. Note that there always exists $\Gamma$, which is a left inverse of $Z\mathcal{O}$ and can be solved from (9), if and only if $Z\mathcal{O}$ has full column rank, so that $\operatorname{rank}(Z\mathcal{O}) = n$, where $Z$ can be obtained by using singular value decomposition [14].

Remark 5. The idea of Theorem 4 is that, even if the noise/quantization is unknown, we can still cancel its effect by simple matrix calculations, namely by finding $\Phi$ that satisfies (8).
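
A minimal numerical realization of Theorem 4 and Remark 5 (a sketch assuming the stacked form (6)): compute $Z$ from the SVD of $\mathcal{M}$, verify condition (7), and recover $x(0)$ through a left inverse of $Z\mathcal{O}$.

```python
import numpy as np

def exact_recovery(O, M, ybar, tol=1e-9):
    """Recover x(0) exactly from ybar = O x0 + M wbar for arbitrary wbar,
    provided condition (7) holds; return None otherwise."""
    n = O.shape[1]
    # Rows of Z span the left null space of M (computed via SVD, as in the proof).
    U, s, _ = np.linalg.svd(M)
    rank_M = int(np.sum(s > tol))
    Z = U[:, rank_M:].T                   # Z @ M == 0
    ZO = Z @ O
    if np.linalg.matrix_rank(ZO, tol) < n:
        return None                       # condition (7) fails: not enough measurements
    # Gamma is a left inverse of Z O; then x0 = Gamma Z ybar, and the noise cancels.
    Gamma = np.linalg.pinv(ZO)
    return Gamma @ (Z @ ybar)
```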

Remark 6. It is noted that $Z$ has a rank that increases as the horizon length $L$ increases and that $\mathcal{O}$ has full column rank as long as $L$ is larger than or equal to the observability index of the pair $(W, C)$. Therefore, condition (7) is satisfied if and only if $L$ is large enough such that both $Z$ and $\mathcal{O}$ have rank larger than or equal to $n$.

Corollary 7. Let all data and assumptions be as given in Theorem 4. Assume that $D$ has full column rank and take $\mu$ as the observability index of the pair $(W, C)$; then the exact value of the initial state $x(0)$ can be solved from (6) for any disturbance $\bar{w}$ if and only if the measurement horizon length $L$ is no smaller than $\mu$ and large enough that the left null space of $\mathcal{M}$ has dimension at least $n$.

Proof. $D$ has full column rank, which implies that $\mathcal{M}$ always has full column rank. So there exists $Z$, of rank equal to the dimension of the left null space of $\mathcal{M}$, whose rows span that null space. Note that $\mathcal{O}$ has full column rank whenever $L \ge \mu$; hence (7) is satisfied if and only if $L \ge \mu$ and the left null space of $\mathcal{M}$ has dimension at least $n$.
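
In the spirit of Remark 6 and Corollary 7, the smallest admissible horizon can be found in practice by increasing $L$ and testing condition (7) directly (a sketch reusing the stack_observations helper assumed above):

```python
import numpy as np

def minimal_horizon(W, B, C, D, L_max=50, tol=1e-9):
    """Smallest L for which condition (7) holds, i.e. rank(Z O) = n, or None."""
    n = W.shape[0]
    for L in range(1, L_max + 1):
        O, M = stack_observations(W, B, C, D, L)   # helper sketched earlier
        U, s, _ = np.linalg.svd(M)
        Z = U[:, int(np.sum(s > tol)):].T          # left null space of M
        if Z.shape[0] > 0 and np.linalg.matrix_rank(Z @ O, tol) == n:
            return L
    return None
```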

Theorem 4 gives the necessary and sufficient condition which ensures that the exact value of $x(0)$ can be calculated. However, it may not always be implementable due to the limited number of successive observations in practical network systems. Hence, it is useful to investigate techniques for estimating the initial state when the output measurements are not sufficient. The rest of this section studies the case when condition (7) is not satisfied. A state estimation approach is proposed by which the initial state is estimated by solving for its tightest upper and lower bounds. LP optimization techniques are used to obtain the bounds, and the upper and lower bounds are dealt with separately.

3.3. Optimal Bounds for DFC

Here, assume that rough upper and lower bounds $\bar{x}_0$, $\underline{x}_0$ and $\bar{w}^{+}$, $\bar{w}^{-}$ on the initial state $x(0)$ and the disturbances $\bar{w}$, respectively, are available and have the form
$\underline{x}_0 \le x(0) \le \bar{x}_0, \quad \bar{w}^{-} \le \bar{w} \le \bar{w}^{+}.$

Let $x_j(0)$ represent the $j$th element of the initial state vector $x(0)$, and let $\bar{\gamma}_j$ and $\underline{\gamma}_j$ stand for the tightest upper and lower bounds on $x_j(0)$, where $j = 1, \ldots, n$, given the system model, the output measurements over the estimation horizon, and the a priori rough bounds on $x(0)$ and $\bar{w}$; then the problem of obtaining the tightest upper and lower bounds on $x_j(0)$ is formulated as follows.

Problem 2. Let the system dynamics be as defined in (3). Assume that the rough upper and lower bounds on $x(0)$ and $\bar{w}$ are known and that the output measurements $\bar{y}$ are given. Find the tightest upper and lower bounds $\bar{\gamma}_j$ and $\underline{\gamma}_j$, for $j = 1, \ldots, n$, defined by
$\bar{\gamma}_j = \max\, x_j(0)$ and $\underline{\gamma}_j = \min\, x_j(0)$, subject to (6) and the rough bounds above.

The following theorem shows that the tightest upper bound on each $x_j(0)$ can be obtained by solving an LP problem. Note that we use boldface to denote variables in the optimization.

Theorem 8. Let all data be as defined previously and let $\gamma_j \in \mathbb{R}$. Then $\gamma_j \ge x_j(0)$ for every $x(0)$ and $\bar{w}$ consistent with (6) and the rough bounds if and only if there exist multipliers $\lambda$, $\mu \ge 0$, and $\nu \ge 0$ satisfying the linear conditions in (16) below.

Proof. Since every admissible pair $(x(0), \bar{w})$ satisfies the equality constraint (6) and the rough bounds above, a direct manipulation verifies that an identity combining (6), weighted by the multiplier $\lambda$, with the box constraints, weighted by $\mu \ge 0$ and $\nu \ge 0$, holds for all such $x(0)$ and $\bar{w}$.
It follows that a sufficient condition for $\gamma_j \ge x_j(0)$ subject to (6) and the rough bounds is the set of linear conditions (16) on $(\gamma_j, \lambda, \mu, \nu)$ obtained from this identity. That the condition is also necessary follows from Farkas' Lemma [15]. Note that evaluating the smallest such $\gamma_j$ subject to (16) is equivalent to solving the following LP problem:
$\bar{\gamma}_j = \min\, \gamma_j$ subject to (16), over the variables $\gamma_j$, $\lambda$, $\mu \ge 0$, and $\nu \ge 0$. (17)
Therefore, the tightest upper bound on each $x_j(0)$, $j = 1, \ldots, n$, can be computed by solving (17).
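
The same tightest bounds can also be computed in the equivalent primal form of the LP, which is often the most convenient way to implement (17) and (18) in practice. The sketch below uses scipy.optimize.linprog with the stacked model and box bounds assumed above (the multiplier form of (16) is not reproduced here); by LP duality the optimal values should coincide with the bounds characterized through the multipliers in Theorems 8 and 9.

```python
import numpy as np
from scipy.optimize import linprog

def tightest_bounds(O, M, ybar, x_lo, x_hi, w_lo, w_hi, j):
    """Tightest lower/upper bounds on x_j(0) subject to
    ybar = O x0 + M wbar,  x_lo <= x0 <= x_hi,  w_lo <= wbar <= w_hi."""
    n, q = O.shape[1], M.shape[1]
    A_eq = np.hstack([O, M])                    # decision vector z = [x0; wbar]
    bounds = list(zip(x_lo, x_hi)) + list(zip(w_lo, w_hi))
    c = np.zeros(n + q)
    c[j] = 1.0                                  # objective: x_j(0)
    lo = linprog(c, A_eq=A_eq, b_eq=ybar, bounds=bounds, method="highs")
    hi = linprog(-c, A_eq=A_eq, b_eq=ybar, bounds=bounds, method="highs")
    return lo.fun, -hi.fun                      # (lower bound, upper bound)
```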

Similarly, the next theorem solves the optimal lower bound on each $x_j(0)$, $j = 1, \ldots, n$. Since the proof follows that of Theorem 8, it is omitted.

Theorem 9. Let all data be as defined above and let $\beta_j \in \mathbb{R}$. Then $\beta_j \le x_j(0)$ for every $x(0)$ and $\bar{w}$ consistent with (6) and the rough bounds if and only if there exist multipliers $\lambda$, $\mu \ge 0$, and $\nu \ge 0$ satisfying conditions analogous to those defined in (16). Similarly to (17), evaluating the tightest lower bound on $x_j(0)$ is equivalent to solving the following LP problem:
$\underline{\gamma}_j = \max\, \beta_j$ subject to the analogue of (16), over the variables $\beta_j$, $\lambda$, $\mu \ge 0$, and $\nu \ge 0$. (18)

Remark 10. From Theorems 8 and 9, it can be seen that necessary and sufficient conditions for $\gamma_j$ and $\beta_j$ to be upper and lower bounds on $x_j(0)$ are given in the form of the LP conditions in (17) and (18), respectively. The tightest bounds on the initial state are then obtained by solving these LP problems.

Remark 11. Since LP problems can be solved efficiently, another benefit of the proposed method is that it admits a recursive online implementation; that is, at each time-step $k$, node $i$ uses Theorems 8 and 9 to compute the upper and lower bounds on the initial state until condition (7) is satisfied, at which point the initial condition can be fully recovered.
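
A sketch of the recursive online use described in Remark 11, assuming the helper functions sketched above and a stream of local measurements $y(0), y(1), \ldots$ (for simplicity, the noise is assumed to lie in a symmetric box of radius w_bound):

```python
import numpy as np

def online_estimation(W, B, C, D, measurements, x_lo, x_hi, w_bound, j):
    """Refine bounds on x_j(0) as measurements y(0), y(1), ... arrive;
    switch to exact recovery once condition (7) holds."""
    ys = []
    lo, hi = x_lo[j], x_hi[j]                    # start from the rough bounds
    for y in measurements:
        ys.append(y)
        L = len(ys)
        O, M = stack_observations(W, B, C, D, L)
        ybar = np.concatenate(ys)
        x0 = exact_recovery(O, M, ybar)
        if x0 is not None:                       # condition (7) satisfied: exact value
            return x0[j], x0[j]
        q = M.shape[1]
        lo, hi = tightest_bounds(O, M, ybar, x_lo, x_hi,
                                 -w_bound * np.ones(q), w_bound * np.ones(q), j)
        print(f"L={L}: {lo:.4f} <= x_{j}(0) <= {hi:.4f}")
    return lo, hi
```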

4. Simulation

4.1. Directed Network Example

We consider a directed network whose underlying graphical representation is shown in Figure 1, together with its corresponding adjacency matrix. Taking a fixed sampling time, the system matrices $W$, $B$, $C$, and $D$ are obtained accordingly. We randomly choose an initial state $x(0)$ to stimulate the system. It is assumed that there are transmission noises between nodes which are zero-mean and bounded. Also, rough upper and lower bounds on the initial state $x(0)$ and the noise $\bar{w}$ are assumed to be available. From (7), the minimal number of time-steps required for direct calculation of $x(0)$ is determined.

Therefore, in the scenario where there are enough output measurements over this minimal horizon, condition (7) is satisfied, so there exists $\Phi$ satisfying (8) such that $x(0) = \Phi\bar{y}$. By carrying out this matrix calculation, the initial state is exactly recovered.

In the scenario where output measurements are available only over a shorter horizon, full recovery of $x(0)$ is not possible. By using the method of Theorems 8 and 9, the tightest upper and lower bounds on $x(0)$ are computed. Since the difference between the upper and lower bounds is small, $x(0)$ is well estimated.

By using the exact value of, or the tightest bounds on, $x(0)$, the exact value of, or bounds on, any objective function related to $x(0)$ can therefore be obtained. For example, the true average consensus value of this system, that is, $\frac{1}{n}\mathbf{1}^{T}x(0)$, can be bounded by our method, and the resulting bounds are very tight with respect to the noise level.
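
To make this concrete, a conservative interval bound on any linear objective $c^{T}x(0)$ follows from the element-wise bounds by sign-aware interval arithmetic (a sketch with illustrative numbers, not the paper's data; the tightest bound on $c^{T}x(0)$ would instead be obtained by rerunning the LP with $c^{T}x(0)$ as the objective):

```python
import numpy as np

def linear_objective_bounds(c, x_lower, x_upper):
    """Interval bound on c^T x(0) given element-wise bounds on x(0)."""
    lo = np.where(c >= 0, c * x_lower, c * x_upper).sum()
    hi = np.where(c >= 0, c * x_upper, c * x_lower).sum()
    return lo, hi

# Average consensus value: c = (1/n) * ones. Bounds below are illustrative only.
x_lower = np.array([0.9, 2.8, 4.7, 6.9])
x_upper = np.array([1.1, 3.2, 5.3, 7.1])
n = len(x_lower)
print(linear_objective_bounds(np.ones(n) / n, x_lower, x_upper))
```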

Figure 2 illustrates the bounds on the average consensus value as the horizon of output measurements increases up to the length at which the exact average consensus value is solved. It shows that the bounds become tighter and tighter as more measurement information becomes available.

5. Conclusion

We have studied the problem of distributed function calculation in a connected network operating with noise. In particular, we focus on the nearest-neighbor protocol, where each node updates its own value at each step as a linear combination of its own previous value and those of its neighbors. Each node tries to compute a linear function of the initial states of all nodes. An algorithm to compute the function in question is proposed for when the number of successive observations is large enough; when it is not, we propose an approach to obtain the tightest possible bounds on the function based on the available observations.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (NNSFC) under Grant no. 61322304.