International Journal of Navigation and Observation


Research Article | Open Access

Volume 2012 |Article ID 281592 | 10 pages | https://doi.org/10.1155/2012/281592

A Comparison of Parametric and Sample-Based Message Representation in Cooperative Localization

Academic Editor: Elena Lohan
Received: 14 Apr 2012
Accepted: 18 Jul 2012
Published: 12 Sep 2012

Abstract

Location awareness is a key enabling feature and fundamental challenge in present and future wireless networks. Most current localization methods rely on existing infrastructure and thus lack the flexibility and robustness necessary for large ad hoc networks. In this paper, we build upon SPAWN (sum-product algorithm over a wireless network), which determines node locations through iterative message passing, but does so at a high computational cost. We compare different message representations for SPAWN in terms of performance and complexity and investigate several types of cooperation based on censoring. Our results, based on experimental data with ultra-wideband (UWB) nodes, indicate that parametric message representation combined with simple censoring can give excellent performance at relatively low complexity.

1. Introduction

Location awareness has the potential to revolutionize a diverse array of present and future technologies. Accurate knowledge of a user's location is essential for a wide variety of commercial, military, and social applications, including next-generation cellular services [1, 2], sensor networks [3, 4], search-and-rescue [5, 6], military target tracking [7, 8], health care monitoring [9, 10], robotics [11, 12], data routing [13, 14], and logistics [15, 16]. Typically, only a small fraction of the nodes in the network, known as anchors, have prior knowledge about their location. The remaining nodes, known as agents, must determine their locations through a process of localization or positioning. The ad hoc and often dynamic nature of wireless networks requires distributed and autonomous localization methods. Moreover, location-aware wireless networks are frequently deployed in unknown environments and hence can rely only on minimal (if any) infrastructure, human maintenance, and a priori location information.

Cooperation is an emerging paradigm for localization in which agents take advantage of network connections and interagent measurements to improve their location estimates. Non-Bayesian cooperative localization in wireless sensor networks is discussed in [17]. Different variations of Bayesian cooperation have been considered, including Monte-Carlo sequential estimation [18] and nonparametric belief propagation in static networks [19]. For a comprehensive overview of Bayesian and non-Bayesian cooperative localization in wireless networks, we refer the reader to [20], which also introduces a distributed cooperative algorithm for large-scale mobile networks called SPAWN (sum-product algorithm over a wireless network). This message-passing algorithm achieves improved localization accuracy and coverage compared to other methods and will serve as the basic algorithm in this paper.

The complexity and cost associated with the SPAWN algorithm depend largely on how messages are represented for computation and transmission. As wireless networks typically operate under tight power and resource constraints, the choice of message representation heavily impacts the feasibility and ease of implementation of the algorithm. The method of message representation and the ensuing tradeoff between communication cost and localization performance are thus of great practical importance in the deployment of realistic localization systems. In practice, particle-based messages do not lend themselves well to wireless exchange between devices, due to their high computational complexity and communication overhead [21]. Other message-passing methods have been developed that rely on parametric message representation, thus alleviating these drawbacks but limiting representational flexibility. In particular, in [22], the expectation propagation algorithm is considered with Gaussian messages, while in [23], variational message passing with parametric messages is shown to exhibit low complexity. A variation of SPAWN combining GPS and UWB was evaluated in [24], using a collection of parametric distributions with ellipsoidal, conic, and cylindrical shapes.

This paper addresses the need for accurate, resource-efficient localization with an in-depth comparison of various message representations for SPAWN. We describe and evaluate different parametric and nonparametric message representations in terms of complexity and accuracy. Additionally, we analyze the performance of various cooperative schemes and message representations in a simulated large-scale ultra-wide bandwidth (UWB) network using experimental UWB ranging data. UWB is an attractive choice for ranging and communication due to its ability to resolve multipath [25, 26], penetrate obstacles [27], and provide high resolution distance measurements [28, 29]. Recent research advances in UWB signal acquisition [30, 31], multiuser interference [32, 33], multipath channels [34, 35], non-line-of-sight (NLOS) propagation [28, 29], and time-of-arrival estimation [36] increase the potential for highly accurate UWB-based localization systems in harsh environments. Consequently, significant attention has been paid to both algorithm design [37ā€“42] and fundamental limits of accuracy [43ā€“46] for UWB localization. It is expected that UWB will be exploited in future location-aware systems that utilize coexisting networks of sensors, controllers, and peripheral devices [47, 48].

2. Problem Formulation

We consider a wireless network of $N$ nodes in an environment $\mathcal{E}$. Time is slotted, with nodes moving independently from time slot to time slot. The position of node $i$ at time $t$ is described by the random variable $\mathbf{x}_i^{(t)}$; the vector of all positions is denoted by $\mathbf{x}^{(t)}$. At each time $t$, node $i$ may collect internal position-related measurements $\mathbf{z}_{i,\mathrm{self}}^{(t)}$, for example, from an inertial measurement unit. The set of all internal measurements is denoted by $\mathbf{z}_{\mathrm{self}}^{(t)}$. Within the network, nodes communicate with each other via wireless transmissions. We denote the set of nodes from which node $i$ can receive transmissions at time $t$ by $S_{\to i}^{(t)}$. Note that a communication link need not be bidirectional; that is, $j \in S_{\to i}^{(t)}$ does not imply $i \in S_{\to j}^{(t)}$. Using packets received from $j \in S_{\to i}^{(t)}$, node $i$ may collect a set of relative measurements, represented by the vector $\mathbf{z}_{j \to i}^{(t)}$, which we will limit to distance measurements. We denote the set of all relative measurements made in the network at time $t$ by $\mathbf{z}_{\mathrm{rel}}^{(t)}$, and the full set of relative and internal measurements by $\mathbf{z}^{(t)}$.

The objective of the localization problem is for each node $i$ to determine the a posteriori distribution $p(\mathbf{x}_i^{(t)} \mid \mathbf{z}^{(1:t)})$ of its position $\mathbf{x}_i^{(t)}$ at each time $t$, given all information up to and including time $t$.

3. A Brief Introduction to SPAWN

In [20], we proposed a cooperative localization algorithm by factorizing the joint distribution $p(\mathbf{x}^{(0:T)} \mid \mathbf{z}^{(1:T)})$, formulating the problem as a factor graph with temporal and spatial constraints, and applying the sum-product algorithm. This leads to a distributed algorithm, known as SPAWN, presented in Algorithm 1. The aim of SPAWN is to compute a belief $b_i^{(t)}(\mathbf{x}_i^{(t)})$ available to node $i$ at the end of any time slot $t$, which serves as an approximation of the marginal a posteriori distribution $p(\mathbf{x}_i^{(t)} \mid \mathbf{z}^{(1:t)})$. Note that each operation of SPAWN requires only information local to an individual node. Information is shared between nodes via physical transmissions. Each node can therefore perform the computations in Algorithm 1 using its local information and transmissions received from neighboring nodes.

( 1 ) Initialize beliefs $b_i^{(0)}(\mathbf{x}_i^{(0)}) = p(\mathbf{x}_i^{(0)})$, $\forall i$
( 2 ) for $t = 1$ to $T$ do  {time index}
( 3 )   for all $i$ do
( 4 )     Mobility update:
          $\tilde{b}_i^{(t)}(\mathbf{x}_i^{(t)}) \propto \int p(\mathbf{x}_i^{(t)} \mid \mathbf{x}_i^{(t-1)})\, p(\mathbf{z}_{i,\mathrm{self}}^{(t)} \mid \mathbf{x}_i^{(t-1)}, \mathbf{x}_i^{(t)})\, b_i^{(t-1)}(\mathbf{x}_i^{(t-1)})\, \mathrm{d}\mathbf{x}_i^{(t-1)}$  (A-1)
( 5 )   end for
( 6 )   Initialize $b_i^{(t)}(\mathbf{x}_i^{(t)}) = \tilde{b}_i^{(t)}(\mathbf{x}_i^{(t)})$, $\forall i$
( 7 )   for $l = 1$ to $N_{\mathrm{it}}$ do  {iteration index; begin cooperative update}
( 8 )     for all $i$ do
( 9 )       for all $j \in S_{\to i}^{(t)}$ do
( 10 )        Receive and convert $b_j^{(t)}(\mathbf{x}_j^{(t)})$ to a distribution $m_{j \to i}^{(l)}(\mathbf{x}_i^{(t)})$:
              $m_{j \to i}^{(l)}(\mathbf{x}_i^{(t)}) \propto \int p(\mathbf{z}_{j \to i}^{(t)} \mid \mathbf{x}_i^{(t)}, \mathbf{x}_j^{(t)})\, b_j^{(t)}(\mathbf{x}_j^{(t)})\, \mathrm{d}\mathbf{x}_j^{(t)}$  (A-2)
( 11 )        Update and broadcast $b_i^{(t)}(\mathbf{x}_i^{(t)})$:
              $b_i^{(t)}(\mathbf{x}_i^{(t)}) \propto \tilde{b}_i^{(t)}(\mathbf{x}_i^{(t)}) \prod_{k \in S_{\to i}^{(t)}} m_{k \to i}^{(l)}(\mathbf{x}_i^{(t)})$  (A-3)
( 12 )      end for
( 13 )    end for
( 14 )  end for  {end cooperative update}
( 15 ) end for  {end current time step}

Observe that Algorithm 1 contains a number of key steps:
(i) Mobility update (line 4), requiring knowledge of the mobility models $p(\mathbf{x}_i^{(t)} \mid \mathbf{x}_i^{(t-1)})$ and of the self-measurement likelihood functions $p(\mathbf{z}_{i,\mathrm{self}}^{(t)} \mid \mathbf{x}_i^{(t-1)}, \mathbf{x}_i^{(t)})$.
(ii) Message conversion (line 10) of position information from neighboring devices to account for relative measurements, requiring knowledge of the neighbors and of the relative measurement likelihood functions $p(\mathbf{z}_{j \to i}^{(t)} \mid \mathbf{x}_i, \mathbf{x}_j)$.
(iii) Belief update (line 11), to fuse information from the mobility update with information from the current neighbors.
The first two operations can be interpreted as message filtering, while the latter operation is a message multiplication. How these operations can be implemented in practice is the topic of Section 4.

4. Message Representation

4.1. Key Operations

In SPAWN, probabilistic information is exchanged and computed through messages. The manner in which these messages are represented for transmission between nodes and internal computation is closely related to the complexity and performance of the localization algorithm. In traditional communications problems, such as decoding, messages can be represented efficiently and exactly through, for instance, log-likelihood ratios [49]. In SPAWN, exact representation is impossible, so we must resort to different types of approximate message representations. Any representation must be able to capture the salient properties of the true message and must enable efficient computation of the key steps in SPAWN, namely, message filtering (A-1)-(A-2) and message multiplication (A-3). We consider three types of message representation: discretized, sample-based, and parametric.

For convenience, we introduce some new notation. For the filtering operation, the incoming message is denoted by $p_X(\mathbf{x})$, the filtering operation by $h(\mathbf{x}, \mathbf{y})$, and the outgoing message by $p_Y(\mathbf{y})$, with

$p_Y(\mathbf{y}) \propto \int h(\mathbf{x}, \mathbf{y})\, p_X(\mathbf{x})\, \mathrm{d}\mathbf{x}.$  (1)

For the multiplication operation, we assume $M$ incoming messages $p_X^{(i)}(\mathbf{x})$ ($i = 1, \ldots, M$) over a single variable $X$, and an outgoing message

$\phi_X(\mathbf{x}) \propto \prod_{i=1}^{M} p_X^{(i)}(\mathbf{x}).$  (2)

Note that (1) maps to (A-1) through the association

$\mathbf{x} \to \mathbf{x}_i^{(t-1)}$, $\mathbf{y} \to \mathbf{x}_i^{(t)}$, $h(\mathbf{x}, \mathbf{y}) \to p(\mathbf{x}_i^{(t)} \mid \mathbf{x}_i^{(t-1)})\, p(\mathbf{z}_{i,\mathrm{self}}^{(t)} \mid \mathbf{x}_i^{(t-1)}, \mathbf{x}_i^{(t)})$, $p_X(\mathbf{x}) \to b_i^{(t-1)}(\mathbf{x}_i^{(t-1)})$.  (3)

Similarly, (1) maps to (A-2) through the association

$\mathbf{x} \to \mathbf{x}_j^{(t)}$, $\mathbf{y} \to \mathbf{x}_i^{(t)}$, $h(\mathbf{x}, \mathbf{y}) \to p(\mathbf{z}_{j \to i}^{(t)} \mid \mathbf{x}_i^{(t)}, \mathbf{x}_j^{(t)})$, $p_X(\mathbf{x}) \to b_j^{(t)}(\mathbf{x}_j^{(t)})$.  (4)

4.2. Discretized Message Representation

A naive but simple approach to representing a continuous distribution $p_X(\mathbf{x})$ is to uniformly discretize the domain of $X$, yielding a set of quantization points $Q = \{\mathbf{x}_1, \ldots, \mathbf{x}_R\}$. The distribution is then approximated as a finite list of values, $\{p_X(\mathbf{x}_k)\}_{k=1}^{R}$. The filtering operation becomes

$p_Y(\mathbf{y}_k) \propto \sum_{l=1}^{R} h(\mathbf{x}_l, \mathbf{y}_k)\, p_X(\mathbf{x}_l),$  (5)

requiring $\mathcal{O}(R^2)$ operations. The multiplication becomes

$\phi_X(\mathbf{x}_k) \propto \prod_{i=1}^{M} p_X^{(i)}(\mathbf{x}_k),$  (6)

requiring $\mathcal{O}(RM)$ operations. Because $R$ scales exponentially with the dimensionality of $X$, and a large number of points is required in every dimension to capture fine features of the messages, discretization is impractical for SPAWN in UWB localization.
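The two discretized operations (5)-(6) are simple enough to sketch directly. The one-dimensional grid, the toy transition kernel, and all names below are illustrative, not taken from the paper:

```python
import math

# Discretized representation sketch: messages are value lists over a fixed grid.
grid = [i * 0.5 for i in range(21)]          # R = 21 points on [0, 10]

def normalize(msg):
    s = sum(msg)
    return [v / s for v in msg]

def filter_msg(p_x, h):
    # Eq. (5): p_Y(y_k) ~ sum_l h(x_l, y_k) p_X(x_l)  -- O(R^2) operations
    return normalize([sum(h(x, y) * p for x, p in zip(grid, p_x))
                      for y in grid])

def multiply_msgs(msgs):
    # Eq. (6): phi(x_k) ~ prod_i p_i(x_k)  -- O(RM) operations
    out = [1.0] * len(grid)
    for m in msgs:
        out = [o * v for o, v in zip(out, m)]
    return normalize(out)

# Toy example: uniform prior filtered by a Gaussian transition kernel (drift +1),
# then multiplied by a Gaussian message centred at x = 5.
uniform = normalize([1.0] * len(grid))
h = lambda x, y: math.exp(-(y - x - 1.0) ** 2 / 2.0)
shifted = filter_msg(uniform, h)
peaked = multiply_msgs([shifted, normalize([math.exp(-(x - 5.0) ** 2) for x in grid])])
```

The quadratic cost of `filter_msg` is visible in its nested loops; it is exactly this scaling (in every dimension of the position variable) that makes discretization impractical here.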

4.3. Sample-Based Message Representation

A sample-based message representation, as used in [19, 50], overcomes the drawback of discretization by representing messages as samples, concentrated where the messages have significant mass. Before describing the detailed implementation of the filtering and multiplication operations, we give a brief overview of generic sampling techniques (see also [51, 52]) and kernel density estimation (KDE).

4.3.1. Background: Sampling and Kernel Density Estimation

We say that a list of samples with associated weights $\{\mathbf{x}_k, w_k\}_{k=1}^{R}$ is a representation of a distribution $p_X(\mathbf{x})$ if, for any integrable function $g(\mathbf{x})$, we have the approximation

$I = \int g(\mathbf{x})\, p_X(\mathbf{x})\, \mathrm{d}\mathbf{x} \approx \sum_{k=1}^{R} w_k\, g(\mathbf{x}_k).$  (7)

Popular methods for obtaining the list of weighted samples include (i) direct sampling, where we draw $R$ i.i.d. samples from $p_X(\mathbf{x})$, each with weight $1/R$, and (ii) importance sampling, where we draw $R$ i.i.d. samples from a distribution $q_X(\mathbf{x})$ whose support includes the support of $p_X(\mathbf{x})$ and set the weight corresponding to sample $\mathbf{x}_k$ to $w_k = p_X(\mathbf{x}_k)/q_X(\mathbf{x}_k)$. In both cases, it is easily verified that the approximation is unbiased with mean $I$ and a variance that decreases with $R$ (and that depends on $q_X(\mathbf{x})$ in the case of importance sampling). Most importantly, the variance does not depend on the dimensionality of $\mathbf{x}$.

A variation of importance sampling that is biased but often has smaller variance is obtained by setting the weights as $w_k \propto p_X(\mathbf{x}_k)/q_X(\mathbf{x}_k)$, normalized so that $\sum_k w_k = 1$. This approach has the additional benefit of not requiring knowledge of the normalization constants of $p_X(\mathbf{x})$ or $q_X(\mathbf{x})$. A list of $R$ equally weighted samples can be obtained from $\{\mathbf{x}_k, w_k\}_{k=1}^{R}$ through resampling, that is, by drawing (with replacement) $R$ samples from the probability mass function defined by $\{\mathbf{x}_k, w_k\}_{k=1}^{R}$.
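As a concrete illustration of self-normalized importance sampling and resampling, the sketch below estimates a second moment via (7). The target and proposal (a standard normal with its normalizer deliberately dropped, and a uniform proposal) are toy choices, not the paper's distributions:

```python
import math
import random

random.seed(0)

# Self-normalized importance sampling: target p(x) ~ exp(-x^2 / 2) (unnormalized),
# proposal q = Uniform(-5, 5).  Since q is constant, w_k ~ p(x_k).
R = 20000
samples = [random.uniform(-5.0, 5.0) for _ in range(R)]
w = [math.exp(-x * x / 2.0) for x in samples]
total = sum(w)
w = [v / total for v in w]                     # normalize: sum_k w_k = 1

# Eq. (7) with g(x) = x^2; the true second moment of N(0, 1) is 1.
second_moment = sum(wk * x * x for wk, x in zip(w, samples))

# Resampling: draw R equally weighted samples (with replacement).
resampled = random.choices(samples, weights=w, k=R)
```

Note that the unknown normalizer of the target never appears, which is exactly the benefit mentioned in the text.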

For numerical stability, weights are often computed and stored in the logarithmic domain, that is, $\lambda_k = \log p_X(\mathbf{x}_k) - \log q_X(\mathbf{x}_k)$. When the distributions involved contain exponentials or products, the log-domain representation is also computationally efficient. Operations such as additions can be evaluated efficiently in the log-domain as well, using the Jacobian logarithm [49, pages 90-94]. Once all $R$ log-domain weights are computed, they are translated, exponentiated, and normalized: $w_k \propto \exp(\lambda_k - \max_l \lambda_l)$.
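A minimal sketch of this log-domain bookkeeping, with toy log-weights chosen so that direct exponentiation would underflow to zero:

```python
import math

# Toy log-weights far below the range of math.exp (exp(-745) is already 0.0
# in double precision); translating by max(lambda) keeps the largest term at 1.
lam = [-1000.2, -1001.7, -999.9]

m = max(lam)
w = [math.exp(l - m) for l in lam]
s = sum(w)
w = [v / s for v in w]                  # normalized weights, numerically safe

# Jacobian logarithm: log(e^a + e^b) evaluated without leaving the log domain.
def jac_log(a, b):
    hi, lo = max(a, b), min(a, b)
    return hi + math.log1p(math.exp(lo - hi))

log_sum = lam[0]
for l in lam[1:]:
    log_sum = jac_log(log_sum, l)       # running log-sum of all weights
```

The pairwise `jac_log` accumulation is the scalar form of the Jacobian logarithm referenced in the text; it never materializes any of the underflowing exponentials.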

Given a sample representation $\{\mathbf{x}_k, w_k\}_{k=1}^{R}$ of a distribution $p_X(\mathbf{x})$, we obtain a kernel density estimate of $p_X(\mathbf{x})$ as

$\hat{p}_X(\mathbf{x}) = \sum_{k=1}^{R} w_k\, K_\sigma(\mathbf{x} - \mathbf{x}_k),$  (8)

where $K_\sigma(\mathbf{x})$ is the so-called kernel with bandwidth $\sigma$. The kernel is a symmetric distribution whose width is tuned through $\sigma$. For instance, a two-dimensional Gaussian kernel is given by

$K_\sigma(\mathbf{x}) = \frac{1}{2\pi\sigma^2} \exp\left(-\frac{\|\mathbf{x}\|^2}{2\sigma^2}\right).$  (9)

While the choice of kernel affects the quality of the estimate to some limited extent (e.g., in an MMSE sense, where the error is $\int |p_X(\mathbf{x}) - \hat{p}_X(\mathbf{x})|^2\, p_X(\mathbf{x})\, \mathrm{d}\mathbf{x}$), the crucial parameter is the bandwidth $\sigma$, which must be estimated from the samples $\{\mathbf{x}_k, w_k\}_{k=1}^{R}$. A large $\sigma$ makes $\hat{p}_X(\mathbf{x})$ smooth, but the estimate may then no longer capture the interesting features of $p_X(\mathbf{x})$; when $\sigma$ is too small, $\hat{p}_X(\mathbf{x})$ may exhibit artificial structure not present in $p_X(\mathbf{x})$ [53].
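A small sketch of the kernel density estimate (8) with the Gaussian kernel (9); the sample set and the fixed bandwidth are arbitrary illustrations (the paper estimates the bandwidth from the samples):

```python
import math

# Two-dimensional Gaussian kernel, Eq. (9).
def gauss_kernel_2d(x, sigma):
    return math.exp(-(x[0] ** 2 + x[1] ** 2) / (2 * sigma ** 2)) / (2 * math.pi * sigma ** 2)

# KDE, Eq. (8): weighted sum of kernels centred on the samples.
def kde(x, samples, weights, sigma):
    return sum(w * gauss_kernel_2d((x[0] - s[0], x[1] - s[1]), sigma)
               for s, w in zip(samples, weights))

samples = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
weights = [0.25] * 4
density_centre = kde((0.5, 0.5), samples, weights, sigma=0.5)   # near the mass
density_far = kde((5.0, 5.0), samples, weights, sigma=0.5)      # far from it
```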

With this background in sampling techniques and KDE, we return to the problem at hand: filtering and multiplication of messages.

4.3.2. Message Filtering

We assume a representation of $p_X(\mathbf{x})$ as $\{\mathbf{x}_k, w_k\}_{k=1}^{R}$ and wish to obtain a representation of $p_Y(\mathbf{y}) \propto \int h(\mathbf{x}, \mathbf{y})\, p_X(\mathbf{x})\, \mathrm{d}\mathbf{x}$. Let us interpret $h(\mathbf{x}, \mathbf{y})$ as a conditional distribution $p_{Y|X}(\mathbf{y} \mid \mathbf{x})$, up to an arbitrary constant. Suppose we can draw samples $\{[\mathbf{x}_k, \mathbf{y}_k], w_k\}_{k=1}^{R} \sim p_{Y|X}(\mathbf{y} \mid \mathbf{x})\, p_X(\mathbf{x})$; then $\{\mathbf{y}_k, w_k\}_{k=1}^{R}$ forms a sample representation of $p_Y(\mathbf{y})$. The problem thus reverts to drawing samples from $p_{Y|X}(\mathbf{y} \mid \mathbf{x})\, p_X(\mathbf{x})$. This can be accomplished as follows: first, for every sample $\mathbf{x}_k$, draw $\mathbf{y}_k \sim q_{Y|X}(\mathbf{y} \mid \mathbf{x}_k)$ from some proposal distribution $q_{Y|X}$. Second, set the weight of sample $[\mathbf{x}_k, \mathbf{y}_k]$ to

$v_k = w_k\, \frac{p_{Y|X}(\mathbf{y}_k \mid \mathbf{x}_k)}{q_{Y|X}(\mathbf{y}_k \mid \mathbf{x}_k)}.$  (10)

Finally, renormalize the weights to $v_k / \sum_l v_l$. The complexity of the filtering operation scales as $\mathcal{O}(R)$, a significant improvement over the $\mathcal{O}(R^2)$ of discretization. In addition, $R$ can generally be much smaller in a sample-based representation.
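The recipe above can be sketched as follows. The one-dimensional Gaussian kernels stand in for the paper's two-dimensional models and are purely illustrative:

```python
import math
import random

random.seed(1)

# Sample-based filtering, Eq. (10): propagate each sample x_k through a
# proposal q(y|x_k), then reweight by p(y|x_k) / q(y|x_k).
R = 5000
xs = [random.gauss(0.0, 1.0) for _ in range(R)]        # p_X = N(0, 1)
wx = [1.0 / R] * R

def p_y_given_x(y, x):                                 # target kernel N(x + 2, 0.5^2)
    return math.exp(-(y - x - 2.0) ** 2 / (2 * 0.25)) / math.sqrt(2 * math.pi * 0.25)

def q_pdf(y, x):                                       # broader proposal N(x + 2, 1)
    return math.exp(-(y - x - 2.0) ** 2 / 2.0) / math.sqrt(2 * math.pi)

ys, vy = [], []
for x, w in zip(xs, wx):
    y = random.gauss(x + 2.0, 1.0)                     # draw y ~ q(y|x)
    ys.append(y)
    vy.append(w * p_y_given_x(y, x) / q_pdf(y, x))     # Eq. (10)
s = sum(vy)
vy = [v / s for v in vy]

# p_Y is N(2, 1.25), so the weighted mean should be close to 2.
mean_y = sum(w * y for w, y in zip(vy, ys))
```

The single loop over the samples makes the $\mathcal{O}(R)$ scaling of the filtering step explicit.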

Let us consider some examples of the filtering operation in SPAWN.
(i) Mobility update (A-1): let $p_X(\mathbf{x})$ be the belief before movement (represented by $\{\mathbf{x}_k, w_k\}_{k=1}^{R}$) and $p_Y(\mathbf{y})$ the belief after movement. Assume that we can measure perfectly the distance traveled (given by $z_{\mathrm{self}}$) but have no information regarding the direction, and furthermore that the direction is chosen uniformly in $(0, 2\pi]$. In that case,

$h(\mathbf{x}, \mathbf{y}) \propto \delta\left(z_{\mathrm{self}} - \|\mathbf{x} - \mathbf{y}\|\right),$  (11)

where $\delta$ is a Dirac delta function, so that $q_{Y|X}(\mathbf{y} \mid \mathbf{x}_k) = p_{Y|X}(\mathbf{y} \mid \mathbf{x}_k) \propto \delta(z_{\mathrm{self}} - \|\mathbf{x}_k - \mathbf{y}\|)$ is a reasonable choice. For every $\mathbf{x}_k$, we can now draw values $\mathbf{y} = \mathbf{x}_k + r\,[\cos\theta\ \ \sin\theta]^T$ by drawing $\theta_k \sim \mathcal{U}(0, 2\pi)$ and setting $r = z_{\mathrm{self}}$, leading to $\mathbf{y}_k = \mathbf{x}_k + z_{\mathrm{self}}\,[\cos\theta_k\ \ \sin\theta_k]^T$, with $v_k \propto w_k$.
(ii) Ranging update (A-2): let $p_X(\mathbf{x})$ be a message (represented by $\{\mathbf{x}_k, w_k\}_{k=1}^{R}$) from a node with which we have performed ranging, resulting in a range estimate $z$. Let $h(\mathbf{x}, \mathbf{y}) = p_{Z|D}(z \mid d)$, where $d = \|\mathbf{x} - \mathbf{y}\|$. Note that $p_{Z|D}(z \mid d)$ is a likelihood function, since the measurement $z$ is known. Assume that we have a model of the ranging performance in the form of distributions $p_{Z|D}(z \mid d)$ for any value of $d$. We then sample $p_Y(\mathbf{y})$ as follows: for every $\mathbf{x}_k$, draw $\mathbf{y} = \mathbf{x}_k + r\,[\cos\theta\ \ \sin\theta]^T$ by drawing $\theta \sim \mathcal{U}(0, 2\pi)$ and $r \sim q_{R|Z}(r \mid z)$, for some well-chosen $q_{R|Z}(r \mid z)$ (e.g., a Gaussian with mean equal to the distance estimate $z$ and a standard deviation that is sufficiently large with respect to the standard deviation of $p_{Z|D}(z \mid d)$ for any $d$). The weights are set to

$v_k = w_k\, \frac{p_{Z|D}(z \mid r_k)}{q_{R|Z}(r_k \mid z)}.$  (12)
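The ranging-update example (ii) can be sketched as follows, with a single-Gaussian ranging likelihood standing in for the paper's empirical UWB model (an assumption for illustration only):

```python
import math
import random

random.seed(2)

# Ranging update, Eq. (12): the neighbour's samples are pushed out on a ring
# of radius ~z, drawing r from a deliberately wide proposal q(r|z).
z = 10.0                                   # measured range
sigma_rng = 0.3                            # assumed ranging std (illustrative)
xs = [(random.gauss(0.0, 0.1), random.gauss(0.0, 0.1)) for _ in range(4000)]

def p_z_given_d(z_, d):                    # unnormalized Gaussian likelihood
    return math.exp(-(z_ - d) ** 2 / (2 * sigma_rng ** 2))

ys, vs = [], []
for (x1, x2) in xs:
    theta = random.uniform(0.0, 2 * math.pi)
    r = random.gauss(z, 2 * sigma_rng)     # r ~ q(r|z), twice as wide as p
    ys.append((x1 + r * math.cos(theta), x2 + r * math.sin(theta)))
    q = math.exp(-(r - z) ** 2 / (2 * (2 * sigma_rng) ** 2))
    vs.append(p_z_given_d(z, r) / q)       # v_k = w_k p(z|r_k)/q(r_k|z), w_k uniform
s = sum(vs)
vs = [v / s for v in vs]

# The weighted samples should lie on a ring of radius ~z around the origin.
mean_radius = sum(w * math.hypot(y1, y2) for w, (y1, y2) in zip(vs, ys))
```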

4.3.3. Message Multiplication

Here we assume message representations $\{\mathbf{x}_k^{(i)}, w_k^{(i)}\}_{k=1}^{R}$ for $p_X^{(i)}(\mathbf{x})$, $i = 1, \ldots, M$. In contrast to the discretization approach, we cannot directly compute $\prod_{i=1}^{M} p_X^{(i)}(\mathbf{x})$ for arbitrary values of $\mathbf{x}$. Rather, for every message $p_X^{(i)}(\mathbf{x})$, we create a KDE $\hat{p}_X^{(i)}(\mathbf{x}) = \sum_{k=1}^{R} w_k^{(i)} K_{\sigma^{(i)}}(\mathbf{x} - \mathbf{x}_k^{(i)})$ with a Gaussian kernel and a bandwidth estimated using the methods of [53]. Suppose we now draw $R$ samples $\{\mathbf{x}_k\}$ from a distribution $q_X(\mathbf{x})$; then the weights are

$v_k \propto \frac{\prod_{i=1}^{M} \hat{p}_X^{(i)}(\mathbf{x}_k)}{q_X(\mathbf{x}_k)},$  (13)

which can be computed efficiently in the log-domain. A reasonable choice for $q_X(\mathbf{x})$ is one of the incoming messages $p_X^{(i)}(\mathbf{x})$ (e.g., the one with the smallest entropy) or a mixture of the incoming messages. The computational complexity of the message multiplication scales as $\mathcal{O}(MR^2)$. This appears worse than the discretized case (complexity $\mathcal{O}(MR)$), but note that $R$ is much smaller for sample-based representations than for discretization (e.g., $R = 10^3$ or $R = 10^4$ for the sample-based representation, compared to $R = 10^8$ for discretization).
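A one-dimensional sketch of the KDE-based multiplication (13), using one incoming message as the proposal $q_X$; the fixed bandwidth replaces the bandwidth estimation of [53], and the toy Gaussian messages are illustrative:

```python
import math
import random

random.seed(3)

R, sigma = 1000, 0.3

msg1 = [random.gauss(0.0, 1.0) for _ in range(R)]      # samples of N(0, 1)
msg2 = [random.gauss(1.0, 1.0) for _ in range(R)]      # samples of N(1, 1)

def kde(x, samples):
    # Equally weighted Gaussian KDE with fixed bandwidth sigma.
    return sum(math.exp(-(x - s) ** 2 / (2 * sigma ** 2)) for s in samples) / (
        R * sigma * math.sqrt(2 * math.pi))

# Use msg1 as proposal: v_k ~ khat1(x_k) * khat2(x_k) / khat1(x_k) = khat2(x_k).
# The kde() call inside the loop is the O(R^2) bottleneck discussed in the text.
xs = msg1
vs = [kde(x, msg2) for x in xs]
s = sum(vs)
vs = [v / s for v in vs]

# The exact product N(0,1) * N(1,1) is N(0.5, 0.5); the weighted mean should
# land near 0.5 (KDE smoothing perturbs it slightly).
product_mean = sum(w * x for w, x in zip(vs, xs))
```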

4.4. Parametric Message Representation
4.4.1. Choosing a Suitable Parameterization

From the previous section, it is clear that the bottleneck of the sample-based message representation lies in the message multiplication, which scales quadratically with the number of samples. An alternative approach is to represent each message by a set of parameters (e.g., a Gaussian distribution characterized by a mean and covariance matrix). In contrast to the sample-based message representation, which can represent messages of any shape, parametric representations must be specially tailored to the problem at hand. For example, single two-dimensional Gaussian parametric messages are utilized in [22] for localization with both range and angle measurements. Our choice of parametric message is based on the following observations:
(i) For the filtering operation with a two-dimensional Gaussian input $p_X(\mathbf{x})$, the output $p_Y(\mathbf{y})$ can be approximated by a circular distribution with the same mean, for both the mobility update (A-1) and the ranging update (A-2).
(ii) Multiplying Gaussian distributions yields a Gaussian distribution.
(iii) The multiplication of multiple circular distributions can be approximated by a Gaussian distribution or a mixture of Gaussian distributions.

We will use as a basic building block the following distribution in two dimensions:

$\mathcal{D}\left(\mathbf{x}; m_1, m_2, \sigma^2, \rho\right) = \frac{1}{C(\sigma^2, \rho)} \exp\left\{ -\frac{\left[\sqrt{(x_1 - m_1)^2 + (x_2 - m_2)^2} - \rho\right]^2}{2\sigma^2} \right\},$  (14)

where $[m_1, m_2]$ is the midpoint of the distribution, $\rho$ is the radius, $\sigma^2$ is the variance, and $C(\sigma^2, \rho)$ is a normalization constant equal to

$C\left(\sigma^2, \rho\right) = 2\pi\sigma^2 \left[ \exp\left(-\frac{\rho^2}{2\sigma^2}\right) + \frac{1}{2}\sqrt{\frac{2\pi\rho^2}{\sigma^2}} \left(1 + \operatorname{erf}\sqrt{\frac{\rho^2}{2\sigma^2}}\right) \right].$  (15)

As a special case, we note that when $\rho = 0$, (14) reverts to a two-dimensional Gaussian. Moreover, we will represent all messages as a mixture of two distributions of the type (14), so that

$p_X(\mathbf{x}) = \frac{1}{2}\mathcal{D}\left(\mathbf{x}; m_1^{(a)}, m_2^{(a)}, \sigma^2, \rho\right) + \frac{1}{2}\mathcal{D}\left(\mathbf{x}; m_1^{(b)}, m_2^{(b)}, \sigma^2, \rho\right),$  (16)

which can be represented by the six-dimensional vector $[m_1^{(a)}, m_2^{(a)}, m_1^{(b)}, m_2^{(b)}, \rho, \sigma^2]$. We will denote the family of distributions of the form (16) by $\mathcal{D}_2$. Note that it is straightforward to extend this distribution, designed for two-dimensional localization systems, to three-dimensional systems. Before we describe the message filtering and message multiplication operations, let us first show how the parameters of (14) can be estimated from a list of samples.
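The building block (14) and its normalization constant (15) can be checked numerically. The sketch below verifies that $\rho = 0$ reverts to a Gaussian and that the density integrates to one; the parameter values are arbitrary:

```python
import math

# Normalization constant, Eq. (15).
def norm_const(sigma2, rho):
    return 2 * math.pi * sigma2 * (
        math.exp(-rho ** 2 / (2 * sigma2))
        + 0.5 * math.sqrt(2 * math.pi * rho ** 2 / sigma2)
        * (1 + math.erf(math.sqrt(rho ** 2 / (2 * sigma2)))))

# Ring-shaped density, Eq. (14), with midpoint (m1, m2), radius rho, variance sigma2.
def d_pdf(x1, x2, m1, m2, sigma2, rho):
    r = math.hypot(x1 - m1, x2 - m2)
    return math.exp(-(r - rho) ** 2 / (2 * sigma2)) / norm_const(sigma2, rho)

# With rho = 0 the density must coincide with a two-dimensional Gaussian.
g = d_pdf(0.3, -0.4, 0.0, 0.0, 1.0, 0.0)
gauss = math.exp(-(0.3 ** 2 + 0.4 ** 2) / 2) / (2 * math.pi)

# Riemann sum of a ring (rho = 2, sigma = 0.2) over [-4, 4]^2: should be ~1.
h = 0.05
total = sum(d_pdf(i * h, j * h, 0.0, 0.0, 0.04, 2.0) * h * h
            for i in range(-80, 81) for j in range(-80, 81))
```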

4.4.2. ML Estimation of the Parameters $m_1, m_2, \sigma^2, \rho$

Given a list of samples $\{\mathbf{x}_k\}_{k=1}^{R}$, we can estimate the parameters $[m_1, m_2, \sigma^2, \rho]$ as follows. The midpoint $\mathbf{m} = [m_1, m_2]$ is estimated by

$\hat{\mathbf{m}} = \frac{1}{R} \sum_{k=1}^{R} \mathbf{x}_k.$  (17)

To find the radius $\rho$ and variance $\sigma^2$ of the $\mathcal{D}$-distribution, we use maximum likelihood (ML) estimation, assuming the $R$ samples are independent. Introducing $\boldsymbol{\alpha} = [\rho\ \ \sigma^2]^T$, we find

$\hat{\boldsymbol{\alpha}}_{\mathrm{ML}} = \arg\max_{\boldsymbol{\alpha}} \underbrace{\sum_{k=1}^{R} \log p_X\left(\mathbf{x}_k; \hat{\mathbf{m}}, \boldsymbol{\alpha}\right)}_{\Lambda(\boldsymbol{\alpha})}.$  (18)

Treating the log-likelihood function (LLF) $\Lambda(\boldsymbol{\alpha})$ as an objective function, we find its maximum through the gradient ascent iteration

$\hat{\boldsymbol{\alpha}}^{(n+1)} = \hat{\boldsymbol{\alpha}}^{(n)} + \varepsilon\, \nabla_{\boldsymbol{\alpha}} \Lambda(\boldsymbol{\alpha}) \big|_{\boldsymbol{\alpha} = \hat{\boldsymbol{\alpha}}^{(n)}},$  (19)

where $\varepsilon$ is a suitably small step size, and the gradient vector can be approximated using finite differences. To initialize (19), we consider two preliminary estimates of $\sigma^2$ and $\rho$: one assuming $\rho \ll \sigma$ and a second assuming $\rho \gg \sigma$. The LLF is evaluated for both preliminary solutions, and the one with the larger log-likelihood is used as the initial estimate in (19).
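A sketch of this ML fit on synthetic ring data: the mean estimate (17) followed by finite-difference gradient ascent (19) on the LLF (18). The step size, sample count, iteration count, and initial guess are illustrative choices, not the paper's:

```python
import math
import random

random.seed(4)

def norm_const(sigma2, rho):                        # Eq. (15)
    return 2 * math.pi * sigma2 * (
        math.exp(-rho ** 2 / (2 * sigma2))
        + 0.5 * math.sqrt(2 * math.pi * rho ** 2 / sigma2)
        * (1 + math.erf(math.sqrt(rho ** 2 / (2 * sigma2)))))

def loglik(samples, m, rho, sigma2):                # LLF, Eq. (18)
    c = math.log(norm_const(sigma2, rho))
    return sum(-(math.hypot(x1 - m[0], x2 - m[1]) - rho) ** 2 / (2 * sigma2) - c
               for (x1, x2) in samples)

# Synthetic ring data: radius 3, radial std 0.4, centre (1, -1).
data = []
for _ in range(2000):
    th = random.uniform(0.0, 2 * math.pi)
    r = random.gauss(3.0, 0.4)
    data.append((1.0 + r * math.cos(th), -1.0 + r * math.sin(th)))

m_hat = (sum(x for x, _ in data) / len(data),
         sum(y for _, y in data) / len(data))       # Eq. (17)

rho, sigma2 = 1.0, 1.0                              # crude initial guess
eps, delta = 2e-5, 1e-4
ll_start = loglik(data, m_hat, rho, sigma2)
for _ in range(400):                                # gradient ascent, Eq. (19)
    g_rho = (loglik(data, m_hat, rho + delta, sigma2)
             - loglik(data, m_hat, rho - delta, sigma2)) / (2 * delta)
    g_sig = (loglik(data, m_hat, rho, sigma2 + delta)
             - loglik(data, m_hat, rho, sigma2 - delta)) / (2 * delta)
    rho, sigma2 = rho + eps * g_rho, max(sigma2 + eps * g_sig, 1e-3)
ll_end = loglik(data, m_hat, rho, sigma2)
```

The true parameters here are $\rho = 3$ and $\sigma^2 = 0.16$; the ascent should move the estimates from the crude initial guess toward those values while the LLF increases.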

4.4.3. Message Filtering

To perform message filtering, we exploit the fact that sample-based message filtering is a low-complexity operation. We decompose $p_X(\mathbf{x})$, represented in parametric form, into its two mixture components. From each component, we draw $R/2$ samples and perform sample-based message filtering, as outlined in Section 4.3.2. We then estimate the new $\mathcal{D}$-parameters for each mixture component using the ML method described above. We thus obtain $p_Y(\mathbf{y})$ in parametric form. The complexity of this operation scales as $\mathcal{O}(R)$.

4.4.4. Message Multiplication

The motivation for using the parametric message representation is to avoid the complexity associated with sample-based message multiplication. Given $M$ distributions $p_X^{(i)}(\mathbf{x}) \in \mathcal{D}_2$, our goal is to compute

$\phi_X(\mathbf{x}) = \prod_{i=1}^{M} p_X^{(i)}(\mathbf{x}).$  (20)

Typically, $\phi_X(\mathbf{x}) \notin \mathcal{D}_2$, so we approximate $\phi_X(\mathbf{x})$ by $q_X^*(\mathbf{x}) \in \mathcal{D}_2$, obtained by projecting $\phi_X(\mathbf{x})$ onto the family $\mathcal{D}_2$:

$q_X^*(\cdot) = \arg\min_{q_X \in \mathcal{D}_2} D_{\mathrm{KL}}\left(q_X \,\|\, \phi_X\right),$  (21)

where $D_{\mathrm{KL}}(\cdot \| \cdot)$ denotes the Kullback-Leibler (KL) divergence, defined as

$D_{\mathrm{KL}}\left(q_X \,\|\, \phi_X\right) = \int q_X(\mathbf{x}) \log \frac{q_X(\mathbf{x})}{\phi_X(\mathbf{x})}\, \mathrm{d}\mathbf{x}.$  (22)

Observe that all elements of $\mathcal{D}_2$ are characterized by the parameter vector $\mathbf{p} \triangleq [m_1^{(a)}, m_2^{(a)}, m_1^{(b)}, m_2^{(b)}, \rho, \sigma^2]^T$, so the optimization (21) is a six-dimensional problem over all possible $\mathbf{p}$. The divergence $D_{\mathrm{KL}}(q_X \| \phi_X)$ for an arbitrary $\mathbf{p} \in \mathbb{R}^4 \times \mathbb{R}_+^2$ can be determined using Monte-Carlo integration as follows. We rewrite (22) as

$D_{\mathrm{KL}}\left(q_X \,\|\, \phi_X\right) = \int q_X(\mathbf{x})\, f(\mathbf{x})\, \mathrm{d}\mathbf{x},$  (23)

where $f(\mathbf{x}) = \log q_X(\mathbf{x}) - \sum_{i=1}^{M} \log p_X^{(i)}(\mathbf{x})$. By drawing $R$ weighted samples $\{\mathbf{x}_k, w_k\}_{k=1}^{R}$ from $q_X(\mathbf{x})$ (e.g., through importance sampling), we can approximate (23) by

$D_{\mathrm{KL}}\left(q_X \,\|\, \phi_X\right) \approx \sum_{k=1}^{R} w_k\, f\left(\mathbf{x}_k\right).$  (24)

Using this approximation, the six-dimensional optimization problem (21) is solved through gradient descent, similar to (19). The complexity of this operation scales as $\mathcal{O}(RM)$. The initial estimate of $\mathbf{p}$ is obtained through a set of heuristics: we first decide whether $\phi_X(\mathbf{x})$ can reasonably be represented by a distribution in $\mathcal{D}_2$. If not, the outgoing message is not computed. Otherwise, we use a geometric argument to find at most two midpoints. The initial estimates of $\rho$ and $\sigma^2$ are set to a small constant value.
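The Monte-Carlo divergence estimate (23)-(24) can be sketched in one dimension, where a Gaussian-versus-Gaussian case gives the closed form $\mu^2/2$ as a check; the 1-D setting and unit variances are simplifications for illustration:

```python
import math
import random

random.seed(5)

def log_gauss(x, mu):
    # Log-density of N(mu, 1).
    return -(x - mu) ** 2 / 2.0 - 0.5 * math.log(2 * math.pi)

def kl_mc(mu, R=100000):
    # Eqs. (23)-(24) with q = N(0, 1), phi = N(mu, 1), direct sampling from q
    # (so all weights equal 1/R); f(x) = log q(x) - log phi(x).
    total = 0.0
    for _ in range(R):
        x = random.gauss(0.0, 1.0)
        total += log_gauss(x, 0.0) - log_gauss(x, mu)
    return total / R

kl_est = kl_mc(1.0)     # closed form for these two Gaussians: mu^2 / 2 = 0.5
```

In the paper's setting $f(\mathbf{x})$ instead sums the $M$ log-messages, but the estimator has exactly this shape.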

4.5. Comparison of Message Representations

The complexities of the discretized, sample-based, and parametric message representations are compared in Table 1.



Table 1: Complexity of the filtering and multiplication operations for the different message representations.

Approach       Operation       Complexity    Value of R

Discretized    Filtering       O(R^2)        Large
Discretized    Multiplication  O(RM)         Large
Sample-based   Filtering       O(R)          Small
Sample-based   Multiplication  O(R^2 M)      Small
Parametric     Filtering       O(R)          Small
Parametric     Multiplication  O(RM)         Small

5. Performance Analysis

In this section, we compare the performance of the SPAWN algorithm with sample-based versus parametric message representation in a simulated wireless network. We also analyze the use of different subsets of information in the algorithm and its effect on localization performance.

5.1. Simulation Setup and Performance Measures

We simulate a large-scale ultra-wide bandwidth (UWB) network in a 100 m × 100 m homogeneous environment, with 100 uniformly distributed agents and 13 fixed anchors in a grid configuration. Each node is able to measure its range to other nodes within 20 meters. The simulated ranging measurements are drawn independently from the UWB ranging model developed in [54]. The model, based on data collected in a variety of indoor scenarios, consists of three component Gaussian densities, where the mean and variance of each component are experimentally determined functions of the true distance between the ranging nodes. To decouple the effect of mobility from that of the message representation, we consider a single time slot in which every agent has a uniform a priori distribution over the environment $\mathcal{E}$. SPAWN was run for $N_{\mathrm{it}} = 20$ iterations, though convergence was generally achieved well before 10 iterations. For the sample-based representation, the number of samples is set to $R = 2048$ unless otherwise stated.

We quantify localization performance using the complementary cumulative distribution function (CCDF) of the localization error $e = \|\mathbf{x}_i - \hat{\mathbf{x}}_i\|$, where $\hat{\mathbf{x}}_i$ is the estimated location of node $i$, taken as the mean of the belief, similar to [18]. To estimate the CCDF, we consider 50 random network topologies and collect position estimates at every iteration for every agent. Note that a CCDF of 0.01 at an error of, say, $e = 1$ m means that 99% of the nodes have an error of less than 1 meter.
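The empirical CCDF used here is straightforward to compute; the error values below are made up for illustration:

```python
# Empirical CCDF: fraction of errors strictly greater than the threshold e,
# so a small CCDF value at e means most nodes localize to within e.
def ccdf(errors, e):
    return sum(1 for v in errors if v > e) / len(errors)

# Illustrative localization errors (metres) for ten agents.
errors = [0.2, 0.4, 0.5, 0.8, 1.2, 3.0, 0.1, 0.9, 1.1, 0.3]
at_1m = ccdf(errors, 1.0)       # fraction of nodes with error above 1 m
```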

5.2. Cooperation with Censoring

In Section 3, we considered processing messages between all neighboring pairs of nodes. However, information from neighbors may not always be useful: (i) when the receiving node's belief is already very informative (e.g., concentrated around the mean); or (ii) when the transmitting node's belief is very uninformative. To better understand how much cooperative information is beneficial to localization, we will consider varying the subset of nodes that broadcast and update their location beliefs at each iteration. We distinguish between these subsets by the level of cooperative information they induce in the algorithm. The level of cooperative information indicates how each node utilizes information from its neighbors at each iteration.

We introduce the following terminology: a distribution is said to be "sufficiently informative" when 95% of its probability mass is located within 2 m of its mean; a node becomes a virtual anchor when its belief is sufficiently informative; a virtual bianchor is a node with a bimodal belief, each mode of which is sufficiently informative; and a node that is neither a virtual anchor nor a virtual bianchor is called a blind agent. We are now ready to introduce four levels of cooperative information at each iteration:
(i) Level 1 (L1): virtual anchors broadcast their beliefs, while all other nodes censor their belief broadcast. Virtual anchors do not update their beliefs.
(ii) Level 2 (L2): virtual anchors and virtual bianchors broadcast their beliefs, while blind agents censor their belief broadcast. Virtual anchors do not update their beliefs.
(iii) Level 3 (L3): all nodes broadcast their beliefs. Virtual anchors do not update their beliefs.
(iv) Level 4 (L4): all nodes broadcast their beliefs. All nodes update their beliefs.
In terms of cooperation, L4 utilizes more cooperative information than L3, L3 more than L2, and L2 more than L1. In this sense, each level utilizes a strict subset of the cooperative information used by the next.
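The "sufficiently informative" test that drives these censoring levels can be sketched for sample-based beliefs as follows; the beliefs are synthetic, while the 2 m / 95% thresholds are those of the text:

```python
import math
import random

random.seed(6)

# A belief (list of position samples) is "sufficiently informative" when at
# least 95% of its mass lies within 2 m of its mean; such a node is a virtual
# anchor and would broadcast under levels L1-L2.
def is_virtual_anchor(samples, radius=2.0, mass=0.95):
    mx = sum(x for x, _ in samples) / len(samples)
    my = sum(y for _, y in samples) / len(samples)
    inside = sum(1 for x, y in samples if math.hypot(x - mx, y - my) <= radius)
    return inside / len(samples) >= mass

# A tightly concentrated belief versus a belief spread over a 20 m x 20 m area.
tight = [(random.gauss(0.0, 0.5), random.gauss(0.0, 0.5)) for _ in range(2000)]
diffuse = [(random.uniform(-10.0, 10.0), random.uniform(-10.0, 10.0)) for _ in range(2000)]
```

An extension to virtual bianchors would apply the same test per mode of a bimodal belief.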

From previous sections, we know that the algorithm complexity scales linearly in 𝑀, the number of incoming messages in the multiplication operation. Since censoring reduces the number of messages a node receives, the level of cooperative information directly affects the algorithm's computational cost, with lower levels requiring less computation.
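A back-of-the-envelope cost model consistent with the scaling discussed in this paper (linear in the number of incoming messages 𝑀, and quadratic in the number of samples 𝑅 for the sample-based representation, as noted in Section 5.3) might look as follows; the function name and the numbers are illustrative assumptions, not the paper's exact operation counts:

```python
def per_node_cost(num_messages, num_samples):
    """Illustrative per-node cost of the message multiplication step:
    linear in the number of incoming messages M (num_messages) and
    quadratic in the number of samples R (num_samples)."""
    return num_messages * num_samples ** 2

# Censoring under L1/L2 shrinks M: a node hearing from 2 neighbors
# instead of 4 does half the work at the same sample budget.
full = per_node_cost(4, 500)      # all neighbors broadcast
censored = per_node_cost(2, 500)  # blind agents censored
```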

5.3. Numerical Results

We now examine how localization performance varies with the algorithm parameters. In particular, numerical results show the effect of message representation (sample-based or parametric) and level of cooperative information (L1, L2, L3, or L4) on the CCDF of the localization error.

We first consider the localization performance as a function of the number of samples 𝑅 and the level of cooperative information. Figure 1 displays the CCDF at 𝑒 = 1 m after 10 iterations. As expected, for any level of cooperative information, the CCDF decreases as the number of samples increases. However, this decrease comes at a cost in computation time: as 𝑅 is increased, the per-node complexity grows quadratically. Figure 1 also shows that levels L1, L2, and L4 are less sensitive to 𝑅 than L3 and that each generally outperforms L3. This effect is particularly pronounced when 𝑅 is small: L3 broadcasts more complex distributions than L2 and L1, and these elaborate distributions are not accurately represented with a small number of samples.
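For concreteness, the CCDF evaluated at a given error threshold is simply the fraction of agents whose localization error exceeds that threshold; a minimal sketch (the function name is ours):

```python
import numpy as np

def ccdf_at(errors, threshold=1.0):
    """Empirical CCDF of the localization error: the fraction of
    nodes whose error (in meters) exceeds `threshold`."""
    errors = np.asarray(errors, dtype=float)
    return float(np.mean(errors > threshold))

# Example: four agents, two of which miss by more than 1 m.
print(ccdf_at([0.2, 0.5, 1.5, 3.0]))  # 0.5
```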

Second, we investigate the level of cooperative information and its effect on localization performance, with numerical results shown in Figures 2 and 3 after 𝑁it = 20 iterations. Note that each curve exhibits a "floor" because there is always some subset of nodes with insufficient information to localize without ambiguity, whether due to lack of connectivity or to large flip ambiguities. Let us focus on the sample-based representation in Figure 2 and consider the effect of the level of cooperative information on localization performance. In general, L4 has the best performance in terms of accuracy and floor. Intuitively, one might expect L3 to have the next best performance, followed by L2 and then L1. However, Figure 2 demonstrates that in some cases L3 has poorer accuracy than L2 and a similar floor. This effect can be explained as follows. Agents that do not become virtual anchors within 𝑁it = 20 iterations tend to have large localization errors, creating a floor. Such agents comprise 1.7% of the total nodes for L1 and 0.3% for both L2 and L3. Since L2 and L3 have a similar fraction of agents that do not become virtual anchors, they have similar floors. In addition, the accuracy of the beliefs of agents that have become virtual anchors turns out to be highest for L1, followed by L2 and then L3, because L3 uses less reliable information than L2, which in turn uses less reliable information than L1. The final CCDF depends both on the fraction of virtual anchors (lowest for L1) and on the accuracy of those virtual anchors (highest for L1). Note that L4 cannot be compared in this context, since it has no concept of a virtual anchor.

We now move on to the parametric representation, still in Figure 2. We observe that L4 has the lowest overall CCDF for any 𝑒, for both types of message representation. For parametric messages, the differences among the levels of cooperative information are smaller, and we generally obtain better performance (for 𝑒 < 1 m) than with sample-based messages.

Finally, in Figure 3, we evaluate the convergence speed of the different message representations and levels of cooperation for a fixed error of 1 m. Parametric messages generally lead to faster convergence and lower CCDF than their sample-based counterparts: with the parametric representation, levels L2, L3, and L4 all converge in around 5 iterations to a final CCDF at 𝑒 = 1 m of around 0.01. Our results show that more cooperative information leads to faster improvement in accuracy; the lowest level of cooperative information, L1, is consistently slower to converge and less accurate. However, higher levels of cooperative information also require the computation and representation of more complicated distributions, so convergence issues may occur for L3 and L4. We also see that the parametric message representation performs approximately as well as or better than the sample-based representation in terms of both convergence and accuracy, while requiring much less execution time. Overall, parametric message representations yield a better performance/complexity tradeoff, because the parametric distributions are well tailored to the localization problem and the homogeneous simulation environment.

6. Conclusions and Extensions

In this paper, we considered different message representations for Bayesian cooperative localization in wireless networks: a generic sample-based representation and a tailored parametric representation. We used experimentally derived UWB ranging models to evaluate the performance of SPAWN as a function of message representation and level of cooperative information. Our results show that the tradeoffs between message representation, cooperative information, localization accuracy, and algorithm convergence are not straightforward and should be tailored to the scenario.

Through large-scale network simulations, we demonstrated that more cooperative information may improve localization accuracy but also increases the complexity of messages. Higher levels of cooperative information do not always correspond to an improvement in localization accuracy or convergence rate. As complicated distributions associated with location-uncertain nodes are computed and transmitted, the resulting increases in computational complexity and signal interference can actually reduce localization performance. It may therefore be advantageous to broadcast only confident information in cooperative localization networks, especially considering the resources saved by a censoring policy.

We also demonstrated that, although parametric messages have less representational flexibility, they can outperform the nonparametric (sample-based) representation at a much lower computational cost. In our simulations, the parametric representation achieved a lower probability of outage for errors under 1 m while converging in as few or fewer iterations than the sample-based representation. Clearly, a parametric representation well tailored to the localization scenario is desirable in terms of both resource efficiency and localization accuracy.

The use of parametric distributions for localization can be extended to (i) different ranging models, (ii) different types of measurements, and (iii) more general scenarios. In terms of ranging models, the proposed distributions can be applied as long as the typical distributions arising in SPAWN roughly resemble a distribution in the 𝒟2-family. Note that a Gaussian ranging error satisfies this criterion, as would many other, more realistic, models. Other models, such as those derived from received signal strength, will require different types of parametric distributions. The same comment applies to the use of different types of measurements. For instance, with angle-of-arrival measurements, the parametric distributions should include a collection of linear distributions. Finally, more general scenarios may require tailor-made distributions. With NLOS measurements that can be modeled as biased Gaussians [20], for example, mixtures of 𝒟2 distributions would easily accommodate LOS/NLOS propagation without relying on explicit NLOS identification.

References

  1. J. J. Caffery and G. L. Stüber, "Overview of radiolocation in CDMA cellular systems," IEEE Communications Magazine, vol. 36, no. 4, pp. 38–45, 1998.
  2. A. H. Sayed, A. Tarighat, and N. Khajehnouri, "Network-based wireless location: challenges faced in developing techniques for accurate wireless location information," IEEE Signal Processing Magazine, vol. 22, no. 4, pp. 24–40, 2005.
  3. A. Mainwaring, D. Culler, J. Polastre, R. Szewczyk, and J. Anderson, "Wireless sensor networks for habitat monitoring," in Proceedings of the 1st ACM International Workshop on Wireless Sensor Networks and Applications (WSNA '02), pp. 88–97, ACM Press, September 2002.
  4. T. He, C. Huang, B. M. Blum, J. A. Stankovic, and T. Abdelzaher, "Range-free localization schemes for large scale sensor networks," in Proceedings of the 9th Annual International Conference on Mobile Computing and Networking (MobiCom '03), pp. 81–95, September 2003.
  5. K. Pahlavan, X. Li, and J. P. Mäkelä, "Indoor geolocation science and technology," IEEE Communications Magazine, vol. 40, no. 2, pp. 112–118, 2002.
  6. S. J. Ingram, D. Harmer, and M. Quinlan, "Ultrawideband indoor positioning systems and their use in emergencies," in Proceedings of the Position Location and Navigation Symposium (PLANS '04), pp. 706–715, April 2004.
  7. C.-Y. Chong and S. P. Kumar, "Sensor networks: evolution, opportunities, and challenges," Proceedings of the IEEE, vol. 91, no. 8, pp. 1247–1256, 2003.
  8. H. Yang and B. Sikdar, "A protocol for tracking mobile targets using sensor networks," in Proceedings of the 1st IEEE International Workshop on Sensor Network Protocols and Applications, pp. 71–81, May 2003.
  9. X. Ji and H. Zha, "Sensor positioning in wireless ad-hoc sensor networks using multidimensional scaling," in Proceedings of the 23rd Annual Joint Conference of the IEEE Computer and Communications Societies (INFOCOM '04), vol. 4, pp. 2652–2661, March 2004.
  10. R. A. Marjamaa, P. M. Torkki, M. I. Torkki, and O. A. Kirvelä, "Time accuracy of a radio frequency identification patient tracking system for recording operating room timestamps," Anesthesia & Analgesia, vol. 102, no. 4, pp. 1183–1186, 2006.
  11. D. Fox, W. Burgard, F. Dellaert, and S. Thrun, "Monte Carlo localization: efficient position estimation for mobile robots," in Proceedings of the 16th National Conference on Artificial Intelligence (AAAI '99), pp. 343–349, Orlando, Fla, USA, July 1999.
  12. J. J. Leonard and H. F. Durrant-Whyte, "Mobile robot localization by tracking geometric beacons," IEEE Transactions on Robotics and Automation, vol. 7, no. 3, pp. 376–382, 1991.
  13. R. Jain, A. Puri, and R. Sengupta, "Geographical routing using partial information for wireless ad hoc networks," IEEE Personal Communications, vol. 8, no. 1, pp. 48–57, 2001.
  14. H. Frey, "Scalable geographic routing algorithms for wireless ad hoc networks," IEEE Network, vol. 18, no. 4, pp. 18–22, 2004.
  15. R. J. Fontana and S. J. Gunderson, "Ultra-wideband precision asset location system," in Proceedings of IEEE Conference on Ultra Wideband Systems and Technologies (UWBST '02), vol. 21, no. 1, pp. 147–150, Baltimore, Md, USA, May 2002.
  16. W. C. Chung and D. Ha, "An accurate ultra wideband (UWB) ranging for precision asset location," in Proceedings of IEEE Conference on Ultra Wideband Systems and Technologies (UWBST '03), pp. 389–393, November 2003.
  17. N. Patwari, J. N. Ash, S. Kyperountas, A. O. Hero III, R. L. Moses, and N. S. Correal, "Locating the nodes: cooperative localization in wireless sensor networks," IEEE Signal Processing Magazine, vol. 22, no. 4, pp. 54–69, 2005.
  18. M. Castillo-Effen, W. A. Moreno, M. A. Labrador, and K. P. Valavanis, "Adapting sequential Monte-Carlo estimation to cooperative localization in wireless sensor networks," in Proceedings of IEEE International Conference on Mobile Ad Hoc and Sensor Systems (MASS '06), pp. 656–661, Vancouver, Canada, October 2006.
  19. A. T. Ihler, J. W. Fisher III, R. L. Moses, and A. S. Willsky, "Nonparametric belief propagation for self-localization of sensor networks," IEEE Journal on Selected Areas in Communications, vol. 23, no. 4, pp. 809–819, 2005.
  20. H. Wymeersch, J. Lien, and M. Z. Win, "Cooperative localization in wireless networks," Proceedings of the IEEE, vol. 97, no. 2, pp. 427–450, 2009.
  21. A. T. Ihler, J. W. Fisher, and A. S. Willsky, "Particle filtering under communications constraints," in Proceedings of the 13th IEEE/SP Workshop on Statistical Signal Processing, pp. 89–94, Bordeaux, France, July 2005.
  22. M. Welling and J. J. Lim, "A distributed message passing algorithm for sensor localization," in Proceedings of the 17th International Conference on Artificial Neural Networks (ICANN '07), pp. 767–775, September 2007.
  23. C. Pedersen, T. Pedersen, and B. H. Fleury, "A variational message passing algorithm for sensor self-localization in wireless networks," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '11), pp. 2158–2162, Saint Petersburg, Russia, July 2011.
  24. M. Caceres, F. Penna, H. Wymeersch, and R. Garello, "Hybrid cooperative positioning based on distributed belief propagation," IEEE Journal on Selected Areas in Communications, vol. 29, no. 10, pp. 1948–1958, 2011.
  25. M. Z. Win and R. A. Scholtz, "Characterization of ultra-wide bandwidth wireless indoor channels: a communication-theoretic view," IEEE Journal on Selected Areas in Communications, vol. 20, no. 9, pp. 1613–1627, 2002.
  26. M. Z. Win and R. A. Scholtz, "On the robustness of ultra-wide bandwidth signals in dense multipath environments," IEEE Communications Letters, vol. 2, no. 2, pp. 51–53, 1998.
  27. A. F. Molisch, J. R. Foerster, and M. Pendergrass, "Channel models for ultrawideband personal area networks," IEEE Wireless Communications, vol. 10, no. 6, pp. 14–21, 2003.
  28. J.-Y. Lee and R. A. Scholtz, "Ranging in a dense multipath environment using an UWB radio link," IEEE Journal on Selected Areas in Communications, vol. 20, no. 9, pp. 1677–1683, 2002.
  29. D. Dardari, A. Conti, U. Ferner, A. Giorgetti, and M. Z. Win, "Ranging with ultrawide bandwidth signals in multipath environments," Proceedings of the IEEE, vol. 97, no. 2, pp. 404–425, 2009.
  30. W. Suwansantisuk and M. Z. Win, "Multipath aided rapid acquisition: optimal search strategies," IEEE Transactions on Information Theory, vol. 53, no. 1, pp. 174–193, 2007.
  31. H. Xu and L. Yang, "Timing with dirty templates for low-resolution digital UWB receivers," IEEE Transactions on Wireless Communications, vol. 7, no. 1, pp. 54–59, 2008.
  32. M. Z. Win, P. C. Pinto, and L. A. Shepp, "A mathematical theory of network interference and its applications," Proceedings of the IEEE, vol. 97, no. 2, pp. 205–230, 2009.
  33. N. C. Beaulieu and D. J. Young, "Designing time-hopping ultrawide bandwidth receivers for multiuser interference environments," Proceedings of the IEEE, vol. 97, no. 2, pp. 255–284, 2009.
  34. M. Z. Win, G. Chrisikos, and A. F. Molisch, "Wideband diversity in multipath channels with nonuniform power dispersion profiles," IEEE Transactions on Wireless Communications, vol. 5, no. 5, pp. 1014–1022, 2006.
  35. M. Z. Win, G. Chrisikos, and N. R. Sollenberger, "Performance of Rake reception in dense multipath channels: implications of spreading bandwidth and selection diversity order," IEEE Journal on Selected Areas in Communications, vol. 18, no. 8, pp. 1516–1525, 2000.
  36. C. Falsi, D. Dardari, L. Mucchi, and M. Z. Win, "Time of arrival estimation for UWB localizers in realistic environments," EURASIP Journal on Advances in Signal Processing, vol. 2006, Article ID 32082, pp. 1–13, 2006.
  37. S. Venkatesh and R. M. Buehrer, "Non-line-of-sight identification in ultra-wideband systems based on received signal statistics," IET Microwaves, Antennas & Propagation, vol. 1, no. 6, pp. 1120–1130, 2007.
  38. D. B. Jourdan, J. J. Deyst Jr., M. Z. Win, and N. Roy, "Monte Carlo localization in dense multipath environments using UWB ranging," in Proceedings of IEEE International Conference on Ultra-Wideband (ICU '05), pp. 314–319, September 2005.
  39. S. Venkatesh and R. M. Buehrer, "Multiple-access design for ad hoc UWB position-location networks," in Proceedings of IEEE Wireless Communications and Networking Conference (WCNC '06), vol. 4, pp. 1866–1873, Las Vegas, Nev, USA, April 2006.
  40. D. Dardari, A. Conti, J. Lien, and M. Z. Win, "The effect of cooperation on UWB-based positioning systems using experimental data," EURASIP Journal on Advances in Signal Processing, vol. 2008, Article ID 513873, 2008.
  41. S. Venkatesh and R. M. Buehrer, "Power control in UWB position-location networks," in Proceedings of IEEE International Conference on Communications (ICC '06), vol. 9, pp. 3953–3959, Istanbul, Turkey, June 2006.
  42. W. C. Headley, C. R. C. M. da Silva, and R. M. Buehrer, "Indoor location positioning of non-active objects using ultra-wideband radios," in Proceedings of IEEE Radio and Wireless Symposium (RWS '07), pp. 105–108, Long Beach, Calif, USA, January 2007.
  43. D. B. Jourdan, D. Dardari, and M. Z. Win, "Position error bound for UWB localization in dense cluttered environments," IEEE Transactions on Aerospace and Electronic Systems, vol. 44, no. 2, pp. 613–628, 2008.
  44. Y. Shen and M. Z. Win, "Fundamental limits of wideband localization—part I: a general framework," IEEE Transactions on Information Theory, vol. 56, no. 10, pp. 4956–4980, 2010.
  45. Y. Shen, H. Wymeersch, and M. Z. Win, "Fundamental limits of wideband localization—part II: cooperative networks," IEEE Transactions on Information Theory, vol. 56, no. 10, pp. 4981–5000, 2010.
  46. Y. Shen and M. Z. Win, "On the accuracy of localization systems using wideband antenna arrays," IEEE Transactions on Communications, vol. 58, no. 1, pp. 270–280, 2010.
  47. S. Gezici, Z. Tian, G. B. Giannakis et al., "Localization via ultra-wideband radios: a look at positioning aspects for future sensor networks," IEEE Signal Processing Magazine, vol. 22, no. 4, pp. 70–84, 2005.
  48. I. Oppermann, M. Hämäläinen, and J. Iinatti, UWB Theory and Applications, John Wiley & Sons, 2004.
  49. H. Wymeersch, Iterative Receiver Design, Cambridge University Press, 2007.
  50. D. Fox, W. Burgard, H. Kruppa, and S. Thrun, "A Monte Carlo algorithm for multi-robot localization," Tech. Rep. CMU-CS-99-120, Computer Science Department, Carnegie Mellon University, Pittsburgh, Pa, USA, 1999.
  51. D. MacKay, Information Theory, Inference and Learning Algorithms, Cambridge University Press, 2003.
  52. A. Doucet, S. Godsill, and C. Andrieu, "On sequential Monte Carlo sampling methods for Bayesian filtering," Statistics and Computing, vol. 10, no. 3, pp. 197–208, 2000.
  53. Z. Botev, Nonparametric Density Estimation via Diffusion Mixing, Postgraduate Series, The University of Queensland, 2007, http://espace.library.uq.edu.au/view/UQ:120006.
  54. J. Lien, A framework for cooperative localization in ultra-wideband wireless networks [M.S. thesis], Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, Mass, USA, 2007.

Copyright © 2012 Jaime Lien et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

