Mathematical Problems in Engineering

Volume 2016 (2016), Article ID 2510903, 8 pages

http://dx.doi.org/10.1155/2016/2510903

## Neighborhood Kalman Estimation for Distributed Localization in Wireless Sensor Networks

^{1}Qian Xuesen Laboratory of Space Technology, China Academy of Space Technology, Beijing 100094, China

^{2}Department of Precision Instrument, Tsinghua University, Beijing 100084, China

Received 10 November 2015; Revised 7 January 2016; Accepted 26 January 2016

Academic Editor: Paolo Addesso

Copyright © 2016 Xiaochu Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Accurate location information plays an important role in the performance of wireless sensor networks, since many mission applications depend on it. This paper proposes a fully distributed localization algorithm based on the concept of data fusion, allowing the full intranetwork information, including the correlations among estimates, to take part in the algorithm. By properly constructing and updating the estimates as well as the corresponding covariance matrices, the algorithm can fuse intranetwork information to quickly generate more accurate estimates of the sensor locations. Finally, numerical simulations are given as examples to demonstrate the effectiveness of the algorithm.

#### 1. Introduction

Wireless sensor networks (WSNs) have attracted considerable attention, since large numbers of simple sensor nodes working cooperatively can achieve much more complex tasks, such as environmental monitoring, target tracking, and synthetic aperture sensing. With the rapid development of small-sized and low-cost smart sensor nodes, large-scale deployed WSNs have become viable and promising.

Sensor node localization is crucial in most of the above-mentioned cooperative tasks, because the location information plays an important role in the coordinated performance. Generally, localization techniques can be classified into centralized approaches and distributed approaches, and the latter are believed to be more efficient and flexible in large-scale networks. Although existing algorithms can give solutions to the localization problem, there is still great potential for further research.

In this paper, we adopt the perspective of data fusion and propose a distributed localization algorithm, namely, the constructed neighborhood Kalman filter (CNKF). It is a fully distributed algorithm where each sensor node cooperates only with its neighboring sensor nodes to obtain its localization from noisy distance measurements. The algorithm is easily implementable for large-scale networks and can make full use of the intranetwork information.

The rest of this paper is organized as follows. Preliminaries are given in Section 2. Related works are reviewed in Section 3. The CNKF algorithm is proposed in detail in Section 4. Simulations are given and discussed in Section 5. Finally, conclusions are given in Section 6.

#### 2. Preliminaries

##### 2.1. The Kalman Filter

We will review in this section the principal equations of the Kalman filter in order to introduce the necessary notation for the following sections. The Kalman filter is a real-time estimation algorithm that estimates the “state” (i.e., the variable to be estimated) by predicting and correcting it at discrete moments.

The main preconditions of the Kalman filter are twofold. Firstly, the change of the state should be predictable. That is, an equation describing how the state is changing with time is necessary. Such an equation is named “state equation.” Secondly, the measurement and the state should be quantitatively related. That is, a functional equation between the measurement and the state is desired. Such an equation is named “measurement equation.”

At each moment, the procedure of the Kalman filter can be summarized as follows. (a) State prediction: according to the state equation, predict the theoretical value of the state based on the estimated value at the previous moment. (b) Measurement prediction: according to the measurement equation, predict the theoretical value of the measurement based on the predicted value of the state. (c) Measurement observation: observe the actual value of the measurement. (d) State correction: find the difference between the predicted and actual values of the measurement and use it to compensate the predicted value of the state. Since steps (b), (c), and (d) as a whole refine the predicted value of the state, they are also collectively called “state filtering.”

The formulas of the Kalman filter are reviewed as follows, acting as a basic framework. These formulas were derived rigorously in [1], and hence the detailed derivation is omitted in this paper.

Without loss of generality, assume that the state equation may be written as
$$x_k = \Phi_{k-1} x_{k-1} + \Gamma_{k-1} w_{k-1}.$$

Similarly, assume that the measurement equation may be written as
$$z_k = h_k(x_k) + v_k,$$
where $k$ is the time stamp used to distinguish variables at different moments, $\Phi_{k-1}$ and $\Gamma_{k-1}$ are the state-transition and noise-input matrices, and $x_k$ is the state vector. $w_k$ and $v_k$ (the process noise and measurement noise, resp.) are Gaussian white-noise processes whose means and covariance functions are given by
$$E[w_k] = 0, \quad E\big[w_k w_j^T\big] = Q_k \delta_{kj}, \qquad E[v_k] = 0, \quad E\big[v_k v_j^T\big] = R_k \delta_{kj},$$
where the symbols $E[\cdot]$ and $(\cdot)^T$ denote the expectation and the matrix transpose, respectively, and $\delta_{kj}$ is the Kronecker delta.

Assume that the initial guesses of the mean and covariance of the state vector at time $k = 0$ are given by
$$\hat{x}_0 = E[x_0], \qquad P_0 = E\big[(x_0 - \hat{x}_0)(x_0 - \hat{x}_0)^T\big].$$
After being initialized, the Kalman filter enters the iterative procedure.

*State Prediction*. At each moment $k$, the state at time $k$ can be predicted from the estimated state at time $k-1$ according to the state equation. The predicted state and its covariance are
$$\hat{x}_k^- = \Phi_{k-1} \hat{x}_{k-1}, \qquad P_k^- = \Phi_{k-1} P_{k-1} \Phi_{k-1}^T + \Gamma_{k-1} Q_{k-1} \Gamma_{k-1}^T.$$

*State Filtering*. Once the state has been predicted, the theoretical measurement at time $k$ can be predicted according to the measurement equation,
$$\hat{z}_k = h_k\big(\hat{x}_k^-\big),$$
and the minimum variance estimate of the state immediately following the measurement $z_k$ is given by
$$\hat{x}_k = \hat{x}_k^- + K_k\big(z_k - \hat{z}_k\big),$$
where the Kalman gain matrix is given by
$$K_k = P_k^- H_k^T \big(H_k P_k^- H_k^T + R_k\big)^{-1},$$
and the measurement sensitivity matrix is given by
$$H_k = \left.\frac{\partial h_k}{\partial x}\right|_{x = \hat{x}_k^-}.$$

It is worthwhile to note that the correction above is the optimal way to correct the predicted state, so that the estimated state achieves the minimum variance. Following the estimated state, the covariance matrix of the estimate is given by
$$P_k = \big(I - K_k H_k\big) P_k^-.$$
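The predict/correct cycle above can be sketched in code for the simplest possible case: a static scalar state observed directly ($\Phi = 1$, $h(x) = x$, $H = 1$). The noise variances, the initial guess, and the true state value below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

x_true = 5.0          # static state (a "random constant"), assumed value
Q, R = 0.0, 0.25      # process / measurement noise variances (assumed)

x_hat, P = 0.0, 10.0  # initial guesses of the mean and covariance
for _ in range(200):
    # (a) state prediction: the state stays constant, covariance grows by Q
    x_pred, P_pred = x_hat, P + Q
    # (b)+(c) predicted vs. actually observed measurement
    z = x_true + rng.normal(0.0, np.sqrt(R))
    # (d) state correction with the Kalman gain
    K = P_pred / (P_pred + R)
    x_hat = x_pred + K * (z - x_pred)
    P = (1.0 - K) * P_pred

print(round(x_hat, 2), round(P, 4))
```

After 200 corrections the covariance shrinks roughly like $R/n$, so the estimate ends up very close to the true value despite the poor initial guess.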

The above is the formulaic framework of the Kalman filter. One may also consult further literature for a closer understanding of how it works in many applications (see, e.g., [2, 3]).

##### 2.2. The Covariance Intersection Fusion

The necessity and principal equations of the fusion of correlated estimates will be reviewed in this subsection.

When following the Kalman filtering scheme, the estimates to be fused must satisfy the assumption that they are independent. However, in many conditions where estimates are highly correlated but the degree of correlation is not exactly known, applying the Kalman filter may lead to overconfidence in the accuracy of the fused estimate and result in divergent iterations. In this case, the covariance intersection algorithm is suitable, as it provides a conservative fused estimate for such correlated estimates [4].

Given two estimates $(x_a, P_a)$ and $(x_b, P_b)$, the covariance intersection algorithm can be summarized as
$$P^{-1} = \omega P_a^{-1} + (1 - \omega) P_b^{-1}, \qquad P^{-1} x = \omega P_a^{-1} x_a + (1 - \omega) P_b^{-1} x_b,$$
where $\omega \in [0, 1]$ is the weight. The weight can be further optimized with respect to different performance criteria, such as minimizing the trace or the determinant of $P$.
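The fusion rule above can be sketched as follows for two 2-D estimates, with the weight chosen by a simple grid search that minimizes the trace of the fused covariance. The function name, the grid resolution, and the numeric estimates are illustrative assumptions.

```python
import numpy as np

def covariance_intersection(x_a, P_a, x_b, P_b):
    """Fuse two (possibly correlated) estimates in covariance-intersection
    form, picking the weight omega by grid search on trace(P)."""
    inv_a, inv_b = np.linalg.inv(P_a), np.linalg.inv(P_b)
    best = None
    for omega in np.linspace(0.0, 1.0, 101):
        P = np.linalg.inv(omega * inv_a + (1.0 - omega) * inv_b)
        x = P @ (omega * inv_a @ x_a + (1.0 - omega) * inv_b @ x_b)
        if best is None or np.trace(P) < best[0]:
            best = (np.trace(P), x, P)
    return best[1], best[2]

# two estimates that are accurate along different axes
x_a, P_a = np.array([1.0, 0.0]), np.diag([1.0, 4.0])
x_b, P_b = np.array([0.0, 1.0]), np.diag([4.0, 1.0])
x_f, P_f = covariance_intersection(x_a, P_a, x_b, P_b)
print(np.trace(P_f) <= min(np.trace(P_a), np.trace(P_b)))
```

Note that the fused covariance is never smaller than the truth allows, which is exactly the conservativeness that makes the method safe for correlated inputs.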

##### 2.3. Definitions

In this paper, all the sensor nodes in the network being studied are assumed to be equipped with the same wireless communication modules and computation modules. Sensor nodes are divided into two types, the ordinary nodes and the anchor nodes.

*Definition 1 (ordinary nodes). *The ordinary nodes refer to the sensor nodes that do not know their locations. We denote by $\mathcal{O}$ the set of the ordinary nodes in the network.

*Definition 2 (anchor nodes). *The anchor nodes refer to the sensor nodes that are aware of their locations with respect to a given coordinate frame. We denote by $\mathcal{A}$ the set of the anchor nodes in the network.

*Definition 3 (neighbors). *If two sensor nodes can communicate with each other over a short distance, they are called neighbors. It is assumed that the distance between a pair of neighbors can be measured. If the actual distance between nodes $i$ and $j$ is $d_{ij}$, then the distance measurement is given by
$$\tilde{d}_{ij} = d_{ij} + \varepsilon_{ij},$$
where $\varepsilon_{ij}$ is the measurement noise (see, e.g., [5]) following $\varepsilon_{ij} \sim \mathcal{N}(0, \sigma_{ij}^2)$, and $\sigma_{ij}$ is influenced by the distance $d_{ij}$ and the noise ratio $\eta$, simulated by
$$\sigma_{ij} = \eta\, d_{ij}.$$
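The ranging model in Definition 3 (noise standard deviation proportional to the true distance through a noise ratio) can be simulated as below; the noise ratio value and function name are illustrative assumptions.

```python
import numpy as np

def measure_distance(p_i, p_j, noise_ratio, rng):
    """Simulate one noisy neighboring distance measurement: the noise is
    zero-mean Gaussian with a standard deviation that scales with range."""
    d = np.linalg.norm(p_i - p_j)   # actual distance between the two nodes
    sigma = noise_ratio * d         # noise level grows with the distance
    return d + rng.normal(0.0, sigma)

rng = np.random.default_rng(1)
p_i, p_j = np.array([0.0, 0.0]), np.array([30.0, 40.0])  # true distance 50 m
samples = [measure_distance(p_i, p_j, 0.02, rng) for _ in range(2000)]
print(round(np.mean(samples), 1), round(np.std(samples), 2))
```

With a 2% noise ratio at 50 m, the sample mean stays near 50 m and the sample standard deviation near 1 m, matching the model.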

*Problem Statement*. Suppose that there exists a static WSN consisting of ordinary nodes and anchor nodes, and the neighboring distance measurements among the nodes are available but noisy. Given an initial guess of the localization as the starting condition, the problem is how to achieve a much more accurate localization quickly.

#### 3. Related Works

General overviews of localization approaches for WSNs can be found in much of the literature (see, e.g., [6–10]). Localization approaches can be broadly classified into the centralized type and the distributed type, and the latter is widely believed to be more efficient for large-scale networks, because communicating only with the neighboring sensor nodes is much easier than communicating with all the sensor nodes in the network [11].

In distributed approaches, incremental algorithms and concurrent algorithms are the two fundamental ideas. Incremental algorithms determine the location information node by node. That is, at each moment, they start with three (or more) neighboring anchor nodes to solve the location of a fourth node located inside the overlapped area, and the fourth node then serves as a new anchor node at the next moment as soon as it knows its location (see, e.g., [12–14]). The main drawback of incremental algorithms is that errors may accumulate and grow as the propagation spreads [8, 15]. Some assistant strategies can be adopted to improve their localization accuracy, such as using more stable anchor nodes [16] and utilizing local geometrical constraints [17].

Unlike incremental algorithms, concurrent algorithms determine the location information of all the sensor nodes simultaneously: all the sensor nodes behave identically and update their location information in parallel without interference, and the global localization of the whole network is then achieved asymptotically by neighboring cooperation as time goes on (see, e.g., [15, 18–20]). Among concurrent algorithms, mass-spring relaxation is the most popular, treating location estimates and distance measurements as masses and springs, respectively, in an imaginary mass-spring system. By relaxing the masses and letting them move under the spring forces, the final location estimates are obtained when the system reaches equilibrium [15]. There are also algorithms that consider statistical characteristics, such as [21–23], from which it can be seen that location estimates can be further refined by statistical approaches under certain conditions such as linearity assumptions and a priori knowledge.

Recently, a so-called distributed Kalman filter has been studied (see, e.g., [24, 25]), which focuses on the problem of overlapping estimation. Overlapping estimation means that different separate objects are interested in an overlapped common state and estimate it cooperatively. The distributed Kalman filter requires all the objects (i.e., sensor nodes) to be able to observe the overlapped common state, and therefore it cannot be applied directly to the localization problem. Nevertheless, the idea behind the distributed Kalman filter brings much inspiration to researchers.

#### 4. Constructed Neighborhood Kalman Filter

We will now propose the constructed neighborhood Kalman filter for distributed localization. Overall, the main idea of the CNKF can be summarized in four points. (a) Construct a proper form of the state that contains not only the ego’s location but also the neighbors’ locations. By doing this, the states of any two neighboring ordinary nodes contain an overlapped part, which gives each ordinary node the opportunity to cooperate with its neighbors. (b) Make good use of the static nature of the network. Note that the state constructed by each ordinary node contains the neighbors’ locations, so the state would be very hard to predict if the neighbors were able to move. Fortunately, the nodes in static sensor networks are known to be static; in other words, the change of the state is zero. Therefore, a static state equation is suitable for predicting the state. (c) Use the Kalman filter to locally refine the state, and use the covariance intersection algorithm to cooperatively refine the overlapped part of the state. (d) Properly reconstruct the state by neighboring cooperation, and reconstruct the covariance so that it approximately satisfies both positive definiteness (in order to guarantee the consecutive iterations) and near-minimum variance (in order to approach the optimal estimation). In detail, the procedures and formulas of the CNKF are as follows.

*State Establishment*. For each sensor node $i$, all its neighbors can be categorized into two disjoint lists: the list of ordinary neighbors $\mathcal{N}_i^o$ and the list of anchor neighbors $\mathcal{N}_i^a$. The total numbers of ordinary neighbors and anchor neighbors (the lengths of the two lists) are denoted by $n_i^o$ and $n_i^a$. For convenience, refer to the $m$th ordinary neighbor and the $m$th anchor neighbor of sensor node $i$ as $o_m$ and $a_m$, with $m = 1, \ldots, n_i^o$ and $m = 1, \ldots, n_i^a$, respectively. Denote the actual location of sensor node $i$ by $p_i$; then the state is established from the location vectors of both node $i$ and all its neighbors,
$$s_i = \big[p_i^T,\; p_{o_1}^T, \ldots, p_{o_{n_i^o}}^T,\; p_{a_1}^T, \ldots, p_{a_{n_i^a}}^T\big]^T,$$
whose dimension is $2(1 + n_i^o + n_i^a)$. Write the estimated location vector of sensor node $j$ as estimated by sensor node $i$ as $\hat{p}_j^{(i)}$; then the estimated state can be written as
$$\hat{s}_i = \big[(\hat{p}_i^{(i)})^T,\; (\hat{p}_{o_1}^{(i)})^T, \ldots,\; (\hat{p}_{a_{n_i^a}}^{(i)})^T\big]^T.$$

Denote the covariance of the state vector $\hat{s}_i$ by $P_i$ and the subblock of the covariance corresponding to $\hat{p}_j^{(i)}$ by $P_i^{(j)}$. It is clear that $P_i^{(j)}$ is a main diagonal block of $P_i$.

Because the sensor nodes being studied are static, the following differential equation is suitable for describing the change of the state:
$$\dot{s}_i = 0.$$

*State Prediction*. By the static state equation, the predictions of the state and covariance at time $k$ are given by
$$\hat{s}_i^-(k) = \hat{s}_i(k-1), \qquad P_i^-(k) = P_i(k-1).$$

*Local Filtering*. At time stamp $k$, the measurement vector is formed by all the neighboring distance measurements,
$$z_i(k) = \big[\tilde{d}_{i,o_1}, \ldots, \tilde{d}_{i,o_{n_i^o}},\; \tilde{d}_{i,a_1}, \ldots, \tilde{d}_{i,a_{n_i^a}}\big]^T,$$
whose dimension is $n_i^o + n_i^a$.

Similar to the Kalman filter, the local minimum variance estimate of the state immediately following the measurement can be obtained as
$$\hat{s}_i(k) = \hat{s}_i^-(k) + K_i(k)\big(z_i(k) - h_i(\hat{s}_i^-(k))\big).$$

The covariance matrix associated with the local minimum variance estimate can be obtained as
$$P_i(k) = \big(I - K_i(k) H_i(k)\big) P_i^-(k),$$
where the Kalman gain matrix is given by
$$K_i(k) = P_i^-(k) H_i^T(k) \big(H_i(k) P_i^-(k) H_i^T(k) + R_i(k)\big)^{-1}.$$
Note that the relationship between the noise-free measurement and the state vector is
$$h_i(s_i) = \big[\|p_i - p_{o_1}\|, \ldots, \|p_i - p_{o_{n_i^o}}\|,\; \|p_i - p_{a_1}\|, \ldots, \|p_i - p_{a_{n_i^a}}\|\big]^T.$$
Therefore, the measurement sensitivity matrix in the two equations above can be determined as
$$H_i(k) = \left.\frac{\partial h_i}{\partial s_i}\right|_{s_i = \hat{s}_i^-(k)}.$$
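Each row of the measurement sensitivity matrix is the gradient of one neighboring distance $\|p_i - p_j\|$ with respect to the stacked state. The sketch below computes one such row for a simplified state holding only the ego location followed by a single neighbor location (the function name and points are illustrative), and verifies the analytic gradient against finite differences.

```python
import numpy as np

def range_jacobian_row(p_i, p_j):
    """Row of H for the measurement ||p_i - p_j||, with the state ordered
    as [p_i; p_j]: derivative u w.r.t. p_i, and -u w.r.t. p_j."""
    d = np.linalg.norm(p_i - p_j)
    u = (p_i - p_j) / d              # unit vector from neighbor toward ego
    return np.concatenate([u, -u])

p_i, p_j = np.array([2.0, 1.0]), np.array([5.0, 5.0])  # true distance 5
row = range_jacobian_row(p_i, p_j)

# finite-difference check of the analytic gradient
f = lambda v: np.linalg.norm(v[:2] - v[2:])
s = np.concatenate([p_i, p_j])
eps = 1e-6
num = np.zeros(4)
for k in range(4):
    ds = np.zeros(4)
    ds[k] = eps
    num[k] = (f(s + ds) - f(s - ds)) / (2 * eps)
print(np.allclose(row, num, atol=1e-5))
```

In the full CNKF state, the same two nonzero blocks simply land in the columns of the ego and of the measured neighbor, with zeros elsewhere.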

*Neighborhood Block Fusion*. In this stage, each ordinary node fuses the blocks of the estimated state with its neighbors. Without loss of generality, the average fusion is considered, where the weight is determined by the number of neighbors involved:
$$\omega_i = \frac{1}{n_i^o + 1}.$$

The ego’s location is fused following
$$\hat{p}_i^{f} = P^{f,(i)} \sum_{j \in \{i\} \cup \mathcal{N}_i^o} \omega_i \big(P_j^{(i)}\big)^{-1} \hat{p}_i^{(j)},$$
and the corresponding subblock covariance is fused following
$$\big(P^{f,(i)}\big)^{-1} = \sum_{j \in \{i\} \cup \mathcal{N}_i^o} \omega_i \big(P_j^{(i)}\big)^{-1}.$$

*State and Covariance Reconstruction*. In this stage, each ordinary node reconstructs the state estimate and the covariance by combining the fused blocks. The reconstructed state is obtained by replacing each ordinary-node location block of the estimated state with the corresponding fused estimate,
$$\hat{s}_i = \big[(\hat{p}_i^{f})^T,\; (\hat{p}_{o_1}^{f})^T, \ldots, (\hat{p}_{o_{n_i^o}}^{f})^T,\; (\hat{p}_{a_1}^{(i)})^T, \ldots, (\hat{p}_{a_{n_i^a}}^{(i)})^T\big]^T,$$
and the main diagonal blocks of the reconstructed covariance matrix are given by the corresponding fused subblock covariances,
$$P_i^{(j)} = P^{f,(j)}, \quad j \in \{i\} \cup \mathcal{N}_i^o.$$

From the definition of the state, it is easy to see that the nondiagonal blocks of the state covariance have little correlation with those of the neighbors. Therefore, the reconstructed covariance is approximately given by the block-diagonal matrix formed from the fused subblocks,
$$P_i \approx \operatorname{blkdiag}\big(P_i^{(i)},\; P_i^{(o_1)}, \ldots,\; P_i^{(a_{n_i^a})}\big).$$
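The block fusion step can be sketched as follows: a node’s own location block is fused with the corresponding blocks held by its ordinary neighbors, using equal information weights $1/(n+1)$ in the covariance-intersection form reviewed in Section 2.2. The function name and the numeric estimates are illustrative assumptions, one plausible reading of the average fusion described above.

```python
import numpy as np

def fuse_blocks(estimates, covariances):
    """Information-weighted average of several estimates of the same
    location block, with equal weights 1/n over ego + neighbors."""
    n = len(estimates)
    w = 1.0 / n
    info = sum(w * np.linalg.inv(P) for P in covariances)
    P_f = np.linalg.inv(info)
    x_f = P_f @ sum(w * np.linalg.inv(P) @ x
                    for x, P in zip(estimates, covariances))
    return x_f, P_f

# ego's own estimate plus two ordinary neighbors' estimates of the
# same location, with different confidence levels
xs = [np.array([10.2, 20.1]), np.array([9.8, 19.7]), np.array([10.1, 20.3])]
Ps = [np.eye(2) * v for v in (1.0, 2.0, 0.5)]
x_f, P_f = fuse_blocks(xs, Ps)
print(np.round(x_f, 2))
```

As expected, the fused location leans toward the most confident estimate (the one with covariance $0.5 I$) rather than being a plain arithmetic mean.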

It is worthwhile to note that since all the diagonal blocks of the reconstructed covariance are determined by the covariance intersection algorithm, they are approximately optimized. Because all the diagonal blocks are treated as independent, the overall reconstructed covariance is also near-minimum. Furthermore, the positive definiteness of the reconstructed covariance matrix guarantees the consecutive iterations of the CNKF.

It is also worthwhile to note that the CNKF is sensitive to the initial guess of the state, for two reasons. Firstly, the distance measurement is not a linear projection of the location but a highly nonlinear and nonconvex one, so the solution might fall into an ambiguous or wrong optimum if the starting point is not “close” enough to the true value. Secondly, the CNKF uses a first-order approximation to model the nonlinear measurement, so the iterations may fail to converge if the error of the initial guess is large. Fortunately, some works (see, e.g., [26]) that focus on finding a good initial guess for localization can be helpful. Therefore, the best practice is to use another algorithm first to find a good initial guess and then use the CNKF to achieve a much more accurate localization.

#### 5. Simulation Result

In the experiments, we run the CNKF algorithm on WSNs with random topologies via numerical simulations. One hundred ordinary nodes are placed randomly inside a 100 m by 100 m square area, and four anchor nodes are placed at the four corners of the square. As an instantiation, Figure 1 shows a distribution example of all the sensor nodes with a given communication radius. In the graph, red stars represent the anchor nodes, yellow circles represent the ordinary nodes, and the grey lines among nodes indicate the communication relationships.
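The simulation setup described above can be sketched as follows: ordinary nodes placed uniformly at random in the square, anchors at the corners, and a symmetric neighbor relation wherever two nodes are within the communication radius. The radius value (and the random seed) here are illustrative assumptions, since the original figure's radius value is not available.

```python
import numpy as np

rng = np.random.default_rng(2016)

# 100 ordinary nodes uniform in a 100 m x 100 m square, 4 corner anchors
ordinary = rng.uniform(0.0, 100.0, size=(100, 2))
anchors = np.array([[0.0, 0.0], [0.0, 100.0], [100.0, 0.0], [100.0, 100.0]])
nodes = np.vstack([ordinary, anchors])

radius = 20.0  # assumed communication radius

# pairwise distances and the resulting neighbor (adjacency) relation
diff = nodes[:, None, :] - nodes[None, :, :]
dist = np.linalg.norm(diff, axis=-1)
adjacency = (dist > 0) & (dist <= radius)  # symmetric, no self-links

print(nodes.shape, round(adjacency.sum(axis=1).mean(), 1))
```

The printed mean degree gives a quick sanity check on connectivity: with these parameters each node has on the order of a dozen neighbors, enough for the neighborhood fusion to operate.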