Abstract

Accurate location information plays an important role in wireless sensor networks, since many mission applications depend on it. This paper proposes a fully distributed localization algorithm based on the concept of data fusion, allowing the full intra-network information, including the correlations among estimates, to take part in the estimation. By properly constructing and updating the estimates as well as the corresponding covariance matrices, the algorithm fuses neighborhood information to generate more accurate location estimates at a fast speed. Finally, numerical simulations are presented to demonstrate the effectiveness of the algorithm.

1. Introduction

Wireless sensor networks (WSNs) have attracted considerable attention since large numbers of simple sensor nodes working cooperatively can achieve much more complex tasks, such as environmental monitoring, target tracking, and synthetic aperture. With the rapid development of small-sized, low-cost smart sensor nodes, large-scale WSN deployments have become viable and promising.

Sensor node localization is crucial in most of the above-mentioned cooperative tasks, because the location information plays an important role in the coordinated performance. Generally, localization techniques can be classified into centralized approaches and distributed approaches, and the latter are believed to be more efficient and flexible in large-scale networks. Although the existing algorithms can give solutions to the localization problem, there is still considerable room for improvement.

In this paper, we adopt the perspective of data fusion and propose a distributed localization algorithm, namely, the constructed neighborhood Kalman filter (CNKF). It is a fully distributed algorithm in which each sensor node cooperates only with its neighboring sensor nodes to localize itself from noisy distance measurements. The algorithm is easily implementable for large-scale networks and makes full use of the intra-network information.

The rest of this paper is organized as follows. Preliminaries are given in Section 2. Related works are reviewed in Section 3. The CNKF algorithm is proposed in detail in Section 4. Simulations are given and discussed in Section 5. Finally, conclusions are given in Section 6.

2. Preliminaries

2.1. The Kalman Filter

We review in this section the principal equations of the Kalman filter (in its extended form for nonlinear measurements) in order to introduce the necessary notation for the following sections. The Kalman filter is a real-time estimation algorithm that estimates the "state" (the variable desired to be estimated) by predicting and correcting it at discrete moments.

The main preconditions of the Kalman filter are twofold. Firstly, the change of the state should be predictable; that is, an equation describing how the state changes with time is necessary. Such an equation is named the "state equation." Secondly, the measurement and the state should be quantitatively related; that is, a functional relation between the measurement and the state is required. Such an equation is named the "measurement equation."

At each moment, the procedure of the Kalman filter can be summarized as follows. (a) State prediction: according to the state equation, predict the theoretical value of the state based on the estimated value at the previous moment. (b) Measurement prediction: according to the measurement equation, predict the theoretical value of the measurement based on the predicted value of the state. (c) Measurement observation: observe the actual value of the measurement. (d) State correction: find the difference between the predicted and actual values of the measurement and compensate the predicted value of the state accordingly. Since steps (b), (c), and (d) as a whole refine the predicted value of the state, they can also be called "state filtering."

The formulas of the Kalman filter are reviewed as follows, acting as a basic framework. These formulas come from a rigorous derivation, which can be found in [1]; hence the detailed derivation is omitted in this paper.

Without loss of generality, assume that the state equation may be written as
$$x_k = \Phi_{k-1} x_{k-1} + w_{k-1}. \tag{1}$$

Similarly, assume that the measurement equation may be written as
$$z_k = h(x_k) + v_k, \tag{2}$$
where $k$ is the time stamp to distinguish variables at different moments, $\Phi_k$ is the state-transition matrix, $x_k$ is the state vector, and $z_k$ is the measurement vector. $w_k$ and $v_k$ (the process noise and measurement noise, resp.) are Gaussian white-noise processes whose means and covariance functions are given by
$$E[w_k] = 0, \quad E[v_k] = 0, \quad E\!\left[w_k w_j^T\right] = Q_k \delta_{kj}, \quad E\!\left[v_k v_j^T\right] = R_k \delta_{kj}, \quad E\!\left[w_k v_j^T\right] = 0, \tag{3}$$
where the symbols $E[\cdot]$ and $(\cdot)^T$ denote the expectation and matrix transpose, respectively, and $\delta_{kj}$ is the Kronecker delta.

Assume that the initial guesses of the mean and covariance of the state vector at time $k = 0$ are given by
$$\hat{x}_0 = E[x_0], \qquad P_0 = E\!\left[(x_0 - \hat{x}_0)(x_0 - \hat{x}_0)^T\right]. \tag{4}$$
After being initialized, the Kalman filter enters the iterative procedure.

State Prediction. At each moment $k$, the state at time $k$ can be predicted based on the estimated state at time $k-1$ according to (1). The predicted state and its covariance can be obtained as
$$\hat{x}_k^- = \Phi_{k-1}\hat{x}_{k-1}, \qquad P_k^- = \Phi_{k-1} P_{k-1} \Phi_{k-1}^T + Q_{k-1}. \tag{5}$$

State Filtering. Once the state has been predicted, the theoretical measurement at time $k$ can be predicted according to (2),
$$\hat{z}_k = h\!\left(\hat{x}_k^-\right), \tag{6}$$
and the minimum variance estimate of the state immediately following the measurement $z_k$ is given by
$$\hat{x}_k = \hat{x}_k^- + K_k\left(z_k - \hat{z}_k\right), \tag{7}$$
where the Kalman gain matrix is given by
$$K_k = P_k^- H_k^T \left(H_k P_k^- H_k^T + R_k\right)^{-1}, \tag{8}$$
and the measurement sensitivity matrix is given by
$$H_k = \left.\frac{\partial h}{\partial x}\right|_{x = \hat{x}_k^-}. \tag{9}$$

It is worthwhile to note that (7) is the optimal way to correct the predicted state, so that the estimated state achieves the minimum variance. Following the estimated state, the covariance matrix of the estimate is given by
$$P_k = \left(I - K_k H_k\right) P_k^-. \tag{10}$$
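To make the framework above concrete, the following is a minimal numerical sketch of one predict-correct cycle in Python with NumPy. The system matrices, the noise levels, and the linear measurement model used in the example are illustrative assumptions, not part of the original formulation.

```python
import numpy as np

def kf_predict(x_hat, P, Phi, Q):
    """State prediction, eq. (5): propagate estimate and covariance."""
    x_pred = Phi @ x_hat
    P_pred = Phi @ P @ Phi.T + Q
    return x_pred, P_pred

def kf_update(x_pred, P_pred, z, h, H, R):
    """State filtering, eqs. (6)-(10): correct the prediction with measurement z."""
    z_pred = h(x_pred)                                  # eq. (6)
    S = H @ P_pred @ H.T + R                            # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)                 # eq. (8)
    x_hat = x_pred + K @ (z - z_pred)                   # eq. (7)
    P = (np.eye(len(x_hat)) - K @ H) @ P_pred           # eq. (10)
    return x_hat, P

# Illustrative 2-D static state observed through an assumed linear model.
Phi = np.eye(2); Q = 1e-4 * np.eye(2)                   # static state, small process noise
H = np.eye(2); R = 0.5 * np.eye(2)                      # assumed measurement model
x_hat, P = np.zeros(2), np.eye(2)                       # initial guesses, eq. (4)
z = np.array([1.0, 2.0])                                # one synthetic measurement
x_pred, P_pred = kf_predict(x_hat, P, Phi, Q)
x_hat, P = kf_update(x_pred, P_pred, z, lambda x: H @ x, H, R)
```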

The above constitutes the formulaic framework of the Kalman filter. One may also consult the literature for a closer understanding of how it works in various applications (see, e.g., [2, 3]).

2.2. The Covariance Intersection Fusion

The necessity and principal equations of the fusion of correlated estimates will be reviewed in this subsection.

When following the Kalman filtering scheme, the estimates to be fused must satisfy the assumption that they are independent. However, in many situations the estimates are highly correlated while the degree of the correlation is not known exactly; applying the Kalman filter there might lead to overconfidence in the accuracy of the fused estimate and result in divergent iterations. In this case, the covariance intersection algorithm is suitable, providing a conservative fusion of these correlated estimates [4].

Given two estimates $(\hat{a}, P_a)$ and $(\hat{b}, P_b)$ of the same quantity, the covariance intersection algorithm can be summarized as
$$P_c^{-1} = \omega P_a^{-1} + (1-\omega) P_b^{-1}, \qquad P_c^{-1}\hat{c} = \omega P_a^{-1}\hat{a} + (1-\omega) P_b^{-1}\hat{b}, \tag{11}$$
where $\omega \in [0, 1]$ is the weight. The weight can be further optimized with respect to different performance criteria, such as minimizing the trace or the determinant of $P_c$.
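As an illustration, the following sketch implements (11) and chooses the weight by a simple grid search that minimizes the trace of the fused covariance; the grid resolution is an assumption made for brevity.

```python
import numpy as np

def covariance_intersection(a, Pa, b, Pb, n_grid=100):
    """Fuse two possibly correlated estimates via eq. (11).

    The weight omega is chosen by grid search to minimize trace(Pc),
    one of the optimization criteria mentioned in the text.
    """
    best = None
    for omega in np.linspace(0.0, 1.0, n_grid + 1):
        info = omega * np.linalg.inv(Pa) + (1.0 - omega) * np.linalg.inv(Pb)
        Pc = np.linalg.inv(info)                       # fused covariance
        if best is None or np.trace(Pc) < best[0]:
            c = Pc @ (omega * np.linalg.inv(Pa) @ a
                      + (1.0 - omega) * np.linalg.inv(Pb) @ b)
            best = (np.trace(Pc), c, Pc)
    return best[1], best[2]
```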

2.3. Definitions

In this paper, all the sensor nodes in the network under study are assumed to be equipped with the same wireless communication and computation modules. Sensor nodes are divided into two types: ordinary nodes and anchor nodes.

Definition 1 (ordinary nodes). The ordinary nodes refer to the sensor nodes that do not know their locations. We denote by $\mathcal{O}$ the set of the ordinary nodes in the network.

Definition 2 (anchor nodes). The anchor nodes refer to the sensor nodes that are aware of their locations with respect to a given coordinate frame. We denote by $\mathcal{A}$ the set of the anchor nodes in the network.

Definition 3 (neighbors). If two sensor nodes can communicate with each other over a short distance, they are called neighbors. It is assumed that the distance between a pair of neighbors can be measured. If the actual distance between nodes $i$ and $j$ is $d_{ij}$, then the distance measurement is given by
$$\tilde{d}_{ij} = d_{ij} + \varepsilon_{ij}, \tag{12}$$
where $\varepsilon_{ij}$ is the measurement noise (see, e.g., [5]) following $\varepsilon_{ij} \sim \mathcal{N}(0, \sigma_{ij}^2)$, and $\sigma_{ij}$ is influenced by the distance $d_{ij}$ and the noise ratio $\eta$, simulated by
$$\sigma_{ij} = \eta\, d_{ij}. \tag{13}$$
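A small sketch of this measurement model, assuming the proportional noise form in (13):

```python
import numpy as np

rng = np.random.default_rng(0)

def measure_distance(p_i, p_j, eta):
    """Simulate a noisy neighbor distance measurement, eqs. (12)-(13)."""
    d = np.linalg.norm(p_i - p_j)          # true distance d_ij
    sigma = eta * d                        # assumed proportional noise level
    return d + rng.normal(0.0, sigma)      # noisy measurement
```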

Problem Statement. Suppose that there exists a static WSN consisting of ordinary nodes and anchor nodes, where the neighboring distance measurements among the nodes are available but noisy. Given an initial guess of the localization as the starting condition, the problem is how to achieve a much better localization quickly and accurately.

3. Related Works

General overviews of localization approaches for WSNs can be found in the literature (see, e.g., [6–10]). Localization approaches can be broadly classified into the centralized type and the distributed type, and the latter is widely believed to be more efficient for large-scale networks because communicating only with neighboring sensor nodes is much easier than communicating with all the sensor nodes in the network [11].

In distributed approaches, incremental algorithms and concurrent algorithms are the two fundamental schools of thought. Incremental algorithms determine the location information node by node. That is, at each moment, an incremental algorithm starts with three (or more) neighboring anchor nodes to solve for the location of a fourth node located inside the overlapped area, and the fourth node then serves as a new anchor node at the next moment as soon as it knows its location (see, e.g., [12–14]). The main drawback of incremental algorithms is that errors may accumulate and grow as the propagation spreads [8, 15]. Assistant strategies can be adopted to improve the localization accuracy of incremental algorithms, such as using more stable anchor nodes [16] and utilizing local geometrical constraints [17].

Unlike incremental algorithms, concurrent algorithms determine the location information of all the sensor nodes simultaneously: all the sensor nodes behave identically and update their location information in parallel without interference, and the global localization of the whole network is achieved asymptotically by neighboring cooperation over time (see, e.g., [15, 18–20]). Among concurrent algorithms, mass-spring relaxation is the most popular, treating location estimates and distance measurements as masses and springs, respectively, in an imaginary mass-spring system. By relaxing the masses and letting them move under the spring forces, the final location estimates are obtained when the system reaches equilibrium [15]. There are also algorithms that consider statistical characteristics, such as [21–23], from which it can be seen that location estimates can be further refined by statistical approaches under certain conditions such as linearity assumptions and a priori knowledge.

Recently, a so-called distributed Kalman filter has been studied (see, e.g., [24, 25]), which focuses on the problem of overlapping estimation. Overlapping estimation means that separate objects are interested in an overlapped common state and estimate it cooperatively. The distributed Kalman filter requires all the objects (i.e., sensor nodes) to be able to observe the overlapped common state, and therefore it cannot be applied directly to the localization problem. Nevertheless, the thought behind the distributed Kalman filter offers much inspiration.

4. Constructed Neighborhood Kalman Filter

We will propose the constructed neighborhood Kalman filter for distributed localization in this section. Overall, the main idea of the CNKF can be summarized in four points. (a) Construct a proper form of the state containing not only the ego node's location but also the neighbors' locations. By doing this, the states of any two neighboring ordinary nodes contain an overlapped part, which provides each ordinary node with the possibility of cooperating with its neighbors. (b) Exploit the static nature of the sensor network. Note that the state constructed by each ordinary node contains the neighbors' locations, so the state would be very hard to predict if the neighbors were able to move. Fortunately, the nodes in static sensor networks are known to be static; in other words, the change of the state should be zero. Therefore, the state equation (17) is suitable for predicting the state. (c) Use the Kalman filter to locally refine the state, and use the covariance intersection algorithm to cooperatively refine the overlapped part of the state. (d) Reconstruct the state by neighboring cooperation, and reconstruct the covariance so that it approximately satisfies both positive definiteness (in order to guarantee consecutive iterations) and near-minimum variance (in order to approach the optimal estimation). In detail, the procedures and formulas of the CNKF are as follows.

State Establishment. For each sensor node $i$, all its neighbors can be categorized into two disjoint lists, which are the list of ordinary neighbors $\mathcal{O}_i$ and the list of anchor neighbors $\mathcal{A}_i$, respectively. The total numbers of ordinary neighbors and anchor neighbors (the lengths of the two lists) are denoted by $m_i$ and $n_i$. For convenience, refer to the $u$th ordinary neighbor and the $v$th anchor neighbor of sensor node $i$ as $o_i(u)$ and $a_i(v)$, with $u = 1, \ldots, m_i$ and $v = 1, \ldots, n_i$. Denote the actual location of sensor node $i$ by $p_i \in \mathbb{R}^2$; then the state is established from the location vectors of both node $i$ and all its neighbors,
$$X_i = \left[p_i^T,\; p_{o_i(1)}^T, \ldots, p_{o_i(m_i)}^T,\; p_{a_i(1)}^T, \ldots, p_{a_i(n_i)}^T\right]^T, \tag{14}$$
whose dimension is $2(1 + m_i + n_i)$. Write the location of sensor node $j$ as estimated by sensor node $i$ as $\hat{p}_j^i$; then the estimated state can be written as
$$\hat{X}_i = \left[(\hat{p}_i^{\,i})^T,\; (\hat{p}_{o_i(1)}^{\,i})^T, \ldots, (\hat{p}_{o_i(m_i)}^{\,i})^T,\; (\hat{p}_{a_i(1)}^{\,i})^T, \ldots, (\hat{p}_{a_i(n_i)}^{\,i})^T\right]^T. \tag{15}$$
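The following sketch shows one plausible way a node might assemble this state from its neighbor lists; the data layout (a dictionary of 2-D location estimates) and the function name are hypothetical.

```python
import numpy as np

def build_state(i, ordinary_nbrs, anchor_nbrs, loc_est):
    """Stack ego and neighbor location estimates into X_i, eq. (15).

    loc_est maps a node id to node i's 2-D estimate of that node's location.
    """
    order = [i] + list(ordinary_nbrs) + list(anchor_nbrs)
    return np.concatenate([loc_est[j] for j in order])   # dimension 2(1 + m_i + n_i)
```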

Denote the covariance of the state vector $X_i$ by $P_i$, and denote the subblock of $P_i$ corresponding to the estimate $\hat{p}_j^{\,i}$ by $P_i^{jj}$. It is clear that $P_i^{jj}$ is a main diagonal block of $P_i$:
$$P_i = \begin{bmatrix} P_i^{ii} & \ast & \cdots & \ast \\ \ast & P_i^{o_i(1)o_i(1)} & \cdots & \ast \\ \vdots & \vdots & \ddots & \vdots \\ \ast & \ast & \cdots & P_i^{a_i(n_i)a_i(n_i)} \end{bmatrix}. \tag{16}$$

Because the sensor nodes being studied are static, the following differential equation is suitable for describing the change of the state:
$$\dot{X}_i(t) = 0. \tag{17}$$

State Prediction. By (17), the prediction of the state and covariance at time $k$ can be given by
$$\hat{X}_{i,k}^- = \hat{X}_{i,k-1}, \qquad P_{i,k}^- = P_{i,k-1} + Q_{i,k-1}, \tag{18}$$
where $Q_{i,k-1}$ is the process noise covariance of node $i$.

Local Filtering. At time stamp $k$, the measurement vector is formed from all the neighboring distance measurements,
$$Z_{i,k} = \left[\tilde{d}_{i,o_i(1)}, \ldots, \tilde{d}_{i,o_i(m_i)},\; \tilde{d}_{i,a_i(1)}, \ldots, \tilde{d}_{i,a_i(n_i)}\right]^T, \tag{19}$$
whose dimension is $m_i + n_i$.

Similar to the Kalman filter, the local minimum variance estimate of the state immediately following the measurement can be obtained as
$$\hat{X}_{i,k}^+ = \hat{X}_{i,k}^- + K_{i,k}\left(Z_{i,k} - h_i(\hat{X}_{i,k}^-)\right). \tag{20}$$

The covariance matrix associated with the local minimum variance estimate can be obtained as
$$P_{i,k}^+ = \left(I - K_{i,k} H_{i,k}\right) P_{i,k}^-, \tag{21}$$
where the Kalman gain matrix is given by
$$K_{i,k} = P_{i,k}^- H_{i,k}^T \left(H_{i,k} P_{i,k}^- H_{i,k}^T + R_{i,k}\right)^{-1}. \tag{22}$$
Note that the relationship equation between the noise-free measurement and the state vector is
$$h_i(X_i) = \left[\left\|p_i - p_{o_i(1)}\right\|, \ldots, \left\|p_i - p_{o_i(m_i)}\right\|,\; \left\|p_i - p_{a_i(1)}\right\|, \ldots, \left\|p_i - p_{a_i(n_i)}\right\|\right]^T. \tag{23}$$
Therefore, the measurement sensitivity matrix in (21) and (22) can be determined as
$$H_{i,k} = \left.\frac{\partial h_i}{\partial X_i}\right|_{X_i = \hat{X}_{i,k}^-}, \tag{24}$$
whose row corresponding to neighbor $j$ contains the unit row vector $(p_i - p_j)^T / \|p_i - p_j\|$ in the columns of $p_i$ and its negative in the columns of $p_j$, evaluated at the predicted state.
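To illustrate (23) and (24), here is a sketch of the measurement function and its Jacobian for a single node, with the state laid out as in (15); the helper name and layout conventions are assumptions.

```python
import numpy as np

def h_and_jacobian(X, n_nbrs):
    """Distance measurement model h_i and sensitivity matrix H_i.

    X stacks the ego location followed by n_nbrs neighbor locations (eq. (15));
    returns the predicted distances (eq. (23)) and the Jacobian (eq. (24)).
    """
    p_ego = X[:2]
    h = np.zeros(n_nbrs)
    H = np.zeros((n_nbrs, len(X)))
    for l in range(n_nbrs):
        p_nbr = X[2 + 2 * l : 4 + 2 * l]
        diff = p_ego - p_nbr
        d = np.linalg.norm(diff)
        h[l] = d
        u = diff / d                      # unit vector from neighbor to ego
        H[l, :2] = u                      # derivative w.r.t. ego location
        H[l, 2 + 2 * l : 4 + 2 * l] = -u  # derivative w.r.t. neighbor location
    return h, H
```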

Neighborhood Block Fusion. In this stage, each ordinary node fuses the blocks of its estimated state with its neighbors. Without loss of generality, the average fusion is considered. The average weight is determined by the number of estimates being fused; since the location of node $i$ is estimated by node $i$ itself and by its $m_i$ ordinary neighbors, the weight follows
$$\omega_i = \frac{1}{m_i + 1}. \tag{25}$$

The ego's location estimate is fused following
$$\bar{p}_i = \bar{P}_i^{ii}\,\omega_i\left[\left(P_{i}^{ii,+}\right)^{-1}\hat{p}_i^{\,i,+} + \sum_{u=1}^{m_i}\left(P_{o_i(u)}^{ii,+}\right)^{-1}\hat{p}_i^{\,o_i(u),+}\right], \tag{26}$$
and the corresponding subblock covariance is fused following
$$\left(\bar{P}_i^{ii}\right)^{-1} = \omega_i\left[\left(P_{i}^{ii,+}\right)^{-1} + \sum_{u=1}^{m_i}\left(P_{o_i(u)}^{ii,+}\right)^{-1}\right], \tag{27}$$
where $P_{o_i(u)}^{ii,+}$ denotes the subblock of neighbor $o_i(u)$'s locally filtered covariance corresponding to node $i$'s location, and $\hat{p}_i^{\,o_i(u),+}$ is that neighbor's locally filtered estimate of node $i$'s location.

State and Covariance Reconstruction. In this stage, each ordinary node reconstructs the state estimate and the covariance by combining the fused blocks. The reconstructed state is given by
$$\hat{X}_{i,k} = \left[\bar{p}_i^{\,T},\; \bar{p}_{o_i(1)}^{\,T}, \ldots, \bar{p}_{o_i(m_i)}^{\,T},\; p_{a_i(1)}^{T}, \ldots, p_{a_i(n_i)}^{T}\right]^T, \tag{28}$$
and the main diagonal blocks of the reconstructed covariance matrix are given by
$$P_{i,k}^{jj} = \bar{P}_j^{jj}, \qquad j \in \left\{i,\, o_i(1), \ldots, o_i(m_i)\right\}. \tag{29}$$

From the definition of the state, it is easy to see that the off-diagonal blocks of the state covariance have little correlation with those of the neighbors. Therefore, the reconstructed covariance is approximately given by the block-diagonal matrix
$$P_{i,k} \approx \operatorname{blkdiag}\left(P_{i,k}^{ii},\; P_{i,k}^{o_i(1)o_i(1)}, \ldots, P_{i,k}^{o_i(m_i)o_i(m_i)},\; P_{i,k}^{a_i(1)a_i(1)}, \ldots, P_{i,k}^{a_i(n_i)a_i(n_i)}\right). \tag{30}$$
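Below is a sketch of the block fusion and reconstruction steps under the equal-weight covariance intersection reading of (25)-(30); the message-passing layer is omitted, and the variable layout is an assumption.

```python
import numpy as np
from scipy.linalg import block_diag

def fuse_block(estimates):
    """Equal-weight covariance intersection over a list of (p_hat, P_block).

    Implements eqs. (25)-(27): each entry is one node's estimate of the
    same location together with the matching covariance subblock.
    """
    w = 1.0 / len(estimates)                            # eq. (25)
    info = sum(w * np.linalg.inv(P) for _, P in estimates)
    vec = sum(w * np.linalg.inv(P) @ p for p, P in estimates)
    P_bar = np.linalg.inv(info)                         # eq. (27)
    return P_bar @ vec, P_bar                           # eq. (26)

def reconstruct(fused_blocks):
    """Stack fused location blocks and rebuild a block-diagonal covariance.

    fused_blocks is the ordered list of (p_bar, P_bar) pairs for the ego
    node and its neighbors; eqs. (28)-(30).
    """
    X = np.concatenate([p for p, _ in fused_blocks])
    P = block_diag(*[Pb for _, Pb in fused_blocks])     # positive definite by construction
    return X, P
```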

It is worthwhile to note that, since all the diagonal blocks of $P_{i,k}$ are determined by the covariance intersection algorithm, they are approximately optimized. Because all the diagonal blocks of $P_{i,k}$ are treated as independent, the overall reconstructed covariance is also near-minimum. Furthermore, the positive definiteness of the reconstructed covariance matrix guarantees the consecutive iterations of the CNKF.

It is also worthwhile to note that the CNKF is sensitive to the initial guess of the state. The reason is twofold. Firstly, the distance measurement is not a linear projection of the location but a highly nonlinear and nonconvex one, so the solution might fall into an ambiguous or wrong optimum if the starting point is not "close" enough to the true value. Secondly, the CNKF uses the first-order approximation to model the nonlinear measurement, as shown in (24), so the iterations may fail to converge if the error of the initial guess is large. Fortunately, some works (see, e.g., [26]) that focus on finding a good initial guess for localization can be helpful. Therefore, the best practice is to use another algorithm first to find a good initial guess and then use the CNKF to achieve a much more accurate localization.

5. Simulation Results

In the experiments, we run the CNKF algorithm on WSNs with random topologies by numerical simulation. One hundred ordinary nodes are placed randomly inside a 100 m by 100 m square area, and another four anchor nodes are placed at the four corners of the square. As an instance, Figure 1 shows an example distribution of all the sensor nodes with the communication radius $r$. In the figure, red stars represent the anchor nodes, yellow circles represent the ordinary nodes, and the grey lines among nodes indicate the communication relationships.

As a contrast algorithm, the well-known AFL algorithm from [15] is adopted. The reason for comparing with the AFL is twofold. Firstly, the AFL is known to achieve relatively high localization accuracy compared with many other algorithms. Secondly, the AFL is a fully distributed and concurrent algorithm just like the CNKF, while many other algorithms are not fully distributed or concurrent, as reviewed in Section 3.

The principal updating equation of the AFL comes from the mass-spring relaxation model and can be given by
$$\hat{p}_i \leftarrow \hat{p}_i + \lambda \sum_{j \in \mathcal{N}_i} \left(\hat{d}_{ij} - \tilde{d}_{ij}\right) \hat{u}_{ij}, \tag{31}$$
where $\hat{d}_{ij} = \|\hat{p}_i - \hat{p}_j\|$ denotes the distance estimate between nodes $i$ and $j$, $\hat{u}_{ij}$ denotes the unit vector from node $i$ to node $j$, $\mathcal{N}_i$ is the set of neighbors of node $i$, and $\lambda$ is a convergence coefficient (usually small in order to guarantee convergence) influencing the converging speed and accuracy.
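A minimal sketch of this relaxation step under (31); the step size, the data layout, and the handling of anchors (kept fixed by simply not updating them) are illustrative assumptions.

```python
import numpy as np

def afl_step(p_hat, nbrs, d_meas, lam=0.05):
    """One mass-spring relaxation update per eq. (31).

    p_hat: dict of current 2-D location estimates; nbrs: dict of neighbor
    lists; d_meas: dict of measured distances keyed by node pair (i, j).
    """
    new = {}
    for i, p_i in p_hat.items():
        force = np.zeros(2)
        for j in nbrs[i]:
            diff = p_hat[j] - p_i
            d_est = np.linalg.norm(diff)            # estimated distance
            u_ij = diff / d_est                     # unit vector from i to j
            d_ij = d_meas[(i, j)] if (i, j) in d_meas else d_meas[(j, i)]
            force += (d_est - d_ij) * u_ij          # stretched spring pulls i toward j
        new[i] = p_i + lam * force
    return new
```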

In the simulations, rough initial location estimates are given to all the ordinary nodes, whose errors are about 1 meter on each axis. Accordingly, the covariance matrices are all initialized to the identity matrix. The distance measurements are corrupted by Gaussian noises as in (12), with the noise ratio $\eta$. The process noise matrices $Q$ are set to diagonal matrices whose diagonal elements are positive and very small, because the nodes in the network are static. The measurement noise matrices $R$ are also constructed as diagonal matrices, whose diagonal elements are given by the corresponding variances $\sigma_{ij}^2$ in (13).

Figure 2 shows the overall performance of the CNKF, where the curves represent the estimate errors of all the ordinary nodes in both the $x$- and $y$-axes. It can be seen that the estimate errors decrease gradually as the iterations proceed, implying that the CNKF is effective in reducing the errors and refining the location estimates. In order to show more detailed results, two metrics are borrowed as follows to evaluate the accuracy performance of localization algorithms.

Performance Metric 1 (mean localization error). The mean localization error (MLE) is used to indicate the average individual localization error over all the ordinary nodes, given by
$$\mathrm{MLE} = \frac{1}{|\mathcal{O}|} \sum_{i \in \mathcal{O}} \left\|\hat{p}_i - p_i\right\|. \tag{32}$$

Performance Metric 2 (global energy ratio). The global energy ratio (GER) is used to indicate the structural error of the localization of the network, given by
$$\mathrm{GER} = \sqrt{\frac{2}{N(N-1)} \sum_{i < j} \left(\frac{\hat{d}_{ij} - d_{ij}}{d_{ij}}\right)^2}, \tag{33}$$
where $N$ is the total number of nodes and the sum runs over all node pairs.
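A short sketch computing both metrics under the reconstructions in (32) and (33):

```python
import numpy as np
from itertools import combinations

def mle(p_hat, p_true):
    """Mean localization error over ordinary nodes, eq. (32)."""
    return np.mean([np.linalg.norm(p_hat[i] - p_true[i]) for i in p_hat])

def ger(p_hat, p_true):
    """Global energy ratio over all node pairs, eq. (33)."""
    errs = []
    for i, j in combinations(sorted(p_hat), 2):
        d_true = np.linalg.norm(p_true[i] - p_true[j])
        d_est = np.linalg.norm(p_hat[i] - p_hat[j])
        errs.append(((d_est - d_true) / d_true) ** 2)
    return np.sqrt(np.mean(errs))       # mean over N(N-1)/2 pairs, then sqrt
```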

First of all, a common case is simulated to illustrate the improvement of the CNKF, with the simulation conditions (communication radius $r$ and noise ratio $\eta$) set to be the same for both the CNKF and the AFL. Figures 3 and 4 show the comparison of the estimate errors of the two algorithms during the first 60 iterations. The MLE and GER metrics both decrease more rapidly and more sharply when using the CNKF, implying that the CNKF quickly achieves convergence with high accuracy in this case.

Furthermore, in order to examine the steady-state performance of the CNKF, many more experiments under various cases are simulated. We record the data after 2000 iterations as the steady-state results and calculate the MLE and GER metrics by averaging the data over the repeated experiments. Table 1 shows the corresponding comparison results. It can be seen from Table 1 that both the MLE and GER metrics are improved by at least 57%, which means that the CNKF achieves a much smaller estimate error in comparison with the AFL; in other words, the localization accuracy of the CNKF is significantly higher. The reason behind this result mainly lies in the full use of the intra-network information: following the data-fusion perspective, each estimate is treated as a distribution instead of a simple point, which allows the correlated estimates to produce a much better refinement after fusion. It is worthwhile to note that the performance of the CNKF is largely insensitive to the average connectivity of the network, which is beneficial in applications of flexible and large-scale networks.

It is also necessary to point out that the CNKF would probably diverge and fail to achieve the localization when the initial guess of the locations has a large deviation. Therefore, in the experiments, we have assumed that a good guess (whose error is no more than 1 meter) is already available. As noted in Section 4, initialization algorithms or practical means can be used to provide such a guess.

6. Conclusion

This paper deals with the localization problem of WSNs and proposes a fully distributed localization algorithm named the CNKF. The CNKF is based on the concept of data fusion, allowing all the intra-network information to take part in the estimating process, and it is capable of estimating sensor locations at a fast speed and with high accuracy. The key points of the algorithm are fourfold. The first is the construction of an appropriate form of the states, so that neighboring nodes can work under a collaborative mechanism. The second is the use of the static nature of the network to predict the states. The third is the filtering and fusing scheme that refines the estimated states both locally and cooperatively. The fourth is the reconstruction scheme that updates the states and the corresponding covariance matrices, so that the near-optimality and continuation of the iterative estimation can be guaranteed. Finally, in order to validate the algorithm, a series of experiments including comparisons is performed by simulation. The results show that the proposed algorithm is effective and suitable for wireless sensor networks.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The authors acknowledge the support of the National Natural Science Foundation of China (Grant no. ).