A corrigendum for this article has been published.

Journal of Electrical and Computer Engineering

Volume 2018, Article ID 5763461, 11 pages

https://doi.org/10.1155/2018/5763461

## Log-PF: Particle Filtering in Logarithm Domain

Correspondence should be addressed to Christian Gentner; christian.gentner@dlr.de

Received 25 August 2017; Revised 7 November 2017; Accepted 6 December 2017; Published 1 March 2018

Academic Editor: Víctor Elvira

Copyright © 2018 Christian Gentner et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

This paper presents a particle filter, called Log-PF, based on particle weights represented on a logarithmic scale. In practical systems, particle weights may approach zero, which can cause numerical problems. Therefore, calculations using particle weights and probability densities in the logarithmic domain provide more accurate results. Additionally, calculations in the logarithmic domain improve the computational efficiency for distributions containing exponentials or products of functions. To provide efficient calculations, the Log-PF exploits the Jacobian logarithm, which is used to compute sums of exponentials. We introduce the weight calculation, weight normalization, resampling, and point estimation in the logarithmic domain. For point estimation, we derive the calculation of the minimum mean square error (MMSE) and maximum a posteriori (MAP) estimates. In particular, in situations where sensors are very accurate, the Log-PF achieves a substantial performance gain. We show the performance of the derived Log-PF in three simulations, where the Log-PF is more robust than its standard particle filter counterpart. Particularly, we show the benefits of computing all steps in the logarithmic domain by an example based on Rao-Blackwellization.

#### 1. Introduction

Many scientific problems involve dynamic systems, for example, in navigation applications. Dynamic systems can be described by state-space models where the state is only observable by noisy measurements. Recursive Bayesian filters are algorithms to estimate an unknown probability density function (PDF) of the state recursively by measurements over time. Such a filter consists of two steps: prediction and update. In the prediction step, the PDF of the state is calculated based on the system model. During the update step, the current measurement is used to correct the prediction based on the measurement model. In this way, the posterior PDF of the state is estimated recursively over time. Particle filters (PFs) are implementations of recursive Bayesian filters which approximate the posterior PDF by a set of random samples, called particles, with associated weights. Several types of PFs have been developed over the last few years [1–8]. They differ in their choice of the importance sampling density and the resampling step.

A common choice is to set the importance sampling density equal to the prior, as in the bootstrap filtering algorithm [1]. However, if the width of the likelihood distribution is too small in comparison to the width of the prior distribution, or if measurements are located in the tail of the prior distribution, this choice may fail; see [4]. These situations may arise when sensors are very accurate or when measurements change rapidly over time, such that the particle states after the prediction step might be located in the tail of the likelihood. Additionally, the numerical representation of numbers may limit the computational accuracy through floating point errors. In these situations, a common remedy is the likelihood particle filter (LPF) [3, 5]. The LPF uses the likelihood distribution as the importance sampling density and the prior for the weight update. The LPF is recommended when the width of the likelihood distribution is much smaller than that of the prior and, accordingly, the posterior density is more similar to the likelihood than to the prior. However, in many situations it is impossible to draw samples from the likelihood distribution. Furthermore, the LPF is not suitable for an underdetermined system, where the number of measurements per time instant is lower than the number of states. Additionally, using the likelihood as proposal distribution might increase the variance of the simulated samples according to [5].

In this paper, we derive a PF that operates in logarithmic domain (log-domain), called Log-PF. The Log-PF represents the weights in log-domain which enables a more accurate representation of low weights with a limited number of bits. Particularly, when the involved distributions contain exponentials or products of functions, the log-domain representation is computationally more efficient [9]. The derived Log-PF uses the Jacobian logarithm [10–12] to describe all steps of the PF, including weight update, weight normalization, resampling, and point estimations in log-domain. In this paper, we derive the minimum mean square error (MMSE) and the maximum a posteriori (MAP) point estimators.

The paper is structured as follows: First, we describe standard PFs in Section 2; thereafter, we derive the proposed Log-PF in Section 3. Afterwards, we derive in Section 4 two point estimators in log-domain: Section 4.1 describes the MMSE estimator and Section 4.2 the MAP estimator. We evaluate the Log-PF by simulations and compare the results to standard PF implementations and Kalman filters (KFs) in Section 5. Particularly, we show by an example based on Rao-Blackwellization the benefits of computing all steps in log-domain. For distributed particle filters like [13], similar results are expected. Finally, Section 6 concludes the paper. Throughout the paper, we will use the following notations:

(i) All vectors are interpreted as column vectors.
(ii) $\mathbf{I}$ denotes an identity matrix.
(iii) Matrices are denoted by bold capital letters and vectors by bold small letters.
(iv) $x_i$ denotes the $i$-th element of vector $\mathbf{x}$.
(v) $\mathcal{N}(\mu, \sigma^2)$ denotes a Gaussian distributed (scalar or multivariate) random variable with mean $\mu$ and variance $\sigma^2$.
(vi) $\operatorname{E}\{x\}$ stands for the expectation or sample mean of $x$.
(vii) $i = n, \ldots, m$ stands for all integer numbers starting from $n$ to $m$, thus $i \in \{n, n+1, \ldots, m\}$.
(viii) $\hat{x}$ denotes the estimate of $x$.
(ix) $\{a_s\}_{s \in \mathcal{S}}$ defines the set of elements $a_s$ for $s \in \mathcal{S}$ with $\mathcal{S} \subset \mathbb{N}$.

#### 2. Particle Filtering

PFs represent the probability density of the state vector $\mathbf{x}_k$ at time step $k$ by $N_\mathrm{p}$ particles. According to [1–3, 5] and assuming a first-order hidden Markov model, the posterior filtered density is approximated as

$$p\left(\mathbf{x}_k \mid \mathbf{z}_{1:k}\right) \approx \sum_{s=1}^{N_\mathrm{p}} w_k^{(s)} \, \delta\left(\mathbf{x}_k - \mathbf{x}_k^{(s)}\right), \tag{1}$$

where $\mathbf{z}_{1:k}$ defines the measurement vectors for the time steps $1, \ldots, k$, $\delta(\cdot)$ stands for the Dirac distribution, $\mathbf{x}_k^{(s)}$ denotes the $s$-th particle state, and $w_k^{(s)}$ denotes the normalized weight with

$$\sum_{s=1}^{N_\mathrm{p}} w_k^{(s)} = 1. \tag{2}$$

The unnormalized weight is denoted by $\tilde{w}_k^{(s)}$, while the weight update is calculated as

$$\tilde{w}_k^{(s)} = w_{k-1}^{(s)} \, \frac{p\left(\mathbf{z}_k \mid \mathbf{x}_k^{(s)}\right) p\left(\mathbf{x}_k^{(s)} \mid \mathbf{x}_{k-1}^{(s)}\right)}{q\left(\mathbf{x}_k^{(s)} \mid \mathbf{x}_{k-1}^{(s)}, \mathbf{z}_k\right)}, \tag{3}$$

with the importance density $q\left(\mathbf{x}_k^{(s)} \mid \mathbf{x}_{k-1}^{(s)}, \mathbf{z}_k\right)$, the likelihood distribution $p\left(\mathbf{z}_k \mid \mathbf{x}_k^{(s)}\right)$, and the transition prior distribution $p\left(\mathbf{x}_k^{(s)} \mid \mathbf{x}_{k-1}^{(s)}\right)$; see [1–3, 5]. For $N_\mathrm{p} \to \infty$, the approximation in (1) approaches the true posterior $p\left(\mathbf{x}_k \mid \mathbf{z}_{1:k}\right)$. By (1), (2), and (3), the sequential importance sampling (SIS) PF can be described, which is the basis of most PFs [2].
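As an illustration, the SIS recursion above can be sketched for a scalar toy model; the state transition, the noise levels, and the bootstrap choice $q = p(\mathbf{x}_k \mid \mathbf{x}_{k-1})$ are assumptions for this example, not taken from the text. With the prior as importance density, (3) reduces to multiplying the previous weight by the likelihood, and with an accurate sensor (small measurement noise) the linear-domain weights can underflow to zero:

```python
import math
import random

def sis_step_linear(particles, weights, z, sigma_v=1.0, sigma_w=0.01):
    """One bootstrap SIS step in the linear domain (importance density = prior).

    Toy scalar model (assumed for illustration): x_k = x_{k-1} + v, z_k = x_k + w,
    with process noise std sigma_v and measurement noise std sigma_w. With the
    prior as proposal, the weight update (3) reduces to w_k = w_{k-1} * p(z_k | x_k).
    """
    new_particles, new_weights = [], []
    for x, w in zip(particles, weights):
        x_new = x + random.gauss(0.0, sigma_v)  # draw from the transition prior
        # Gaussian likelihood p(z_k | x_k) of the toy measurement model
        lik = math.exp(-0.5 * ((z - x_new) / sigma_w) ** 2) / (sigma_w * math.sqrt(2.0 * math.pi))
        new_particles.append(x_new)
        new_weights.append(w * lik)
    total = sum(new_weights)  # normalization constant, cf. (2)
    if total == 0.0:          # every weight underflowed in the linear domain
        raise FloatingPointError("all particle weights underflowed to zero")
    return new_particles, [w / total for w in new_weights]
```

Running this with a very small `sigma_w` and a measurement far from the predicted particles makes every linear-domain weight underflow, which is exactly the failure mode the log-domain representation avoids.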

For numerical stability reasons, weights are often computed and stored in the log-domain, which is also computationally efficient when the distributions involved contain exponentials or products. Thus, we obtain from (3) the update equation in log-domain,

$$\tilde{w}_{\mathrm{L},k}^{(s)} = w_{\mathrm{L},k-1}^{(s)} + \log p\left(\mathbf{z}_k \mid \mathbf{x}_k^{(s)}\right) + \log p\left(\mathbf{x}_k^{(s)} \mid \mathbf{x}_{k-1}^{(s)}\right) - \log q\left(\mathbf{x}_k^{(s)} \mid \mathbf{x}_{k-1}^{(s)}, \mathbf{z}_k\right), \tag{4}$$

with

$$w_{\mathrm{L},k}^{(s)} = \log w_k^{(s)}, \tag{5}$$

where we define $w_{\mathrm{L},k}^{(s)}$ as the log-domain weight (log-weight) for particle $s$. After calculating the weights in log-domain, the weights are transferred for further processing to the linear domain (lin-domain) with $\tilde{w}_k^{(s)} = \exp\left(\tilde{w}_{\mathrm{L},k}^{(s)}\right)$ for $s = 1, \ldots, N_\mathrm{p}$, where numerical accuracy is lost due to the floating point representation. In order to obtain a more stable PF implementation, the weights can be transferred to the lin-domain by

$$\tilde{w}_k^{(s)} = \exp\left(\tilde{w}_{\mathrm{L},k}^{(s)} - \max_{j} \tilde{w}_{\mathrm{L},k}^{(j)}\right), \tag{6}$$

such that $\tilde{w}_k^{(s)} \leq 1$; see, for example, [9].
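The stabilized transfer (6) can be sketched as follows; the function name is illustrative. Subtracting the maximum log-weight before exponentiation bounds every weight by 1 and preserves the relatively largest weights even when all log-weights are far below the underflow threshold of `math.exp`:

```python
import math

def log_weights_to_linear(log_w):
    """Transfer log-weights to the linear domain as in (6): subtract the
    maximum log-weight before exponentiating, so the largest weight maps
    to exactly 1.0 and relative weight ratios are preserved."""
    m = max(log_w)
    return [math.exp(lw - m) for lw in log_w]
```

For example, the log-weights `[-1000, -1001, -1005]` would all naively exponentiate to `0.0` in double precision, while the shifted transfer keeps their ratios intact.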

In the following we investigate a different approach, where the transformation from the log-domain to the lin-domain is not necessary. Hence, we show that all steps of the PF can be computed in log-domain.

#### 3. Algorithm Derivation

To compute all steps of the PF in log-domain, we obtain for the approximation of the posterior filtered density from (1) using (5)

$$p\left(\mathbf{x}_k \mid \mathbf{z}_{1:k}\right) \approx \sum_{s=1}^{N_\mathrm{p}} \exp\left(w_{\mathrm{L},k}^{(s)}\right) \delta\left(\mathbf{x}_k - \mathbf{x}_k^{(s)}\right). \tag{7}$$

The normalization of the log-weight can be calculated directly in log-domain as a simple subtraction, with

$$w_{\mathrm{L},k}^{(s)} = \tilde{w}_{\mathrm{L},k}^{(s)} - \tilde{W}_{\mathrm{L},k}, \tag{8}$$

where $\tilde{W}_{\mathrm{L},k}$ denotes the normalization factor with

$$\tilde{W}_{\mathrm{L},k} = \log \sum_{s=1}^{N_\mathrm{p}} \exp\left(\tilde{w}_{\mathrm{L},k}^{(s)}\right). \tag{9}$$

To compute the normalization factor of (9) without transferring the log-weights to the lin-domain, the Jacobian logarithm [10, 11] can be used. The Jacobian logarithm computes the logarithm of a sum of two exponentials using the $\max$ operator and adding a correction term; that is,

$$\log\left(e^{\delta_1} + e^{\delta_2}\right) = \max\left(\delta_1, \delta_2\right) + \log\left(1 + e^{-\left|\delta_1 - \delta_2\right|}\right). \tag{10}$$

With (10) and as derived in [12], the expression $\log \sum_{s=1}^{n} e^{\delta_s}$ can be calculated iteratively as

$$\log \sum_{s=1}^{n} e^{\delta_s} = \max\left(\Delta_{n-1}, \delta_n\right) + \log\left(1 + e^{-\left|\Delta_{n-1} - \delta_n\right|}\right), \tag{11}$$

where $\Delta_{n-1} = \log \sum_{s=1}^{n-1} e^{\delta_s}$ and $\Delta_1 = \delta_1$. Hence, using the Jacobian logarithm allows computing operations such as the summation in (9) efficiently in the log-domain. For later convenience, we express (11) by an iterative algorithm shown in Algorithm 1 as pseudocode; that is,

$$\Delta_n = \operatorname{jacLog}\left(\left\{\delta_s\right\}_{s \in \mathcal{S}}\right), \tag{12}$$

where $\left\{\delta_s\right\}_{s \in \mathcal{S}}$ defines the set of elements $\delta_s$ for $s \in \mathcal{S}$ with $\mathcal{S} = \{1, \ldots, n\}$. Thus, the normalization factor of (9) can be calculated iteratively by

$$\tilde{W}_{\mathrm{L},k} = \operatorname{jacLog}\left(\left\{\tilde{w}_{\mathrm{L},k}^{(s)}\right\}_{s=1}^{N_\mathrm{p}}\right). \tag{13}$$

Hence, we obtain for the log-weight normalization of (8)

$$w_{\mathrm{L},k}^{(s)} = \tilde{w}_{\mathrm{L},k}^{(s)} - \operatorname{jacLog}\left(\left\{\tilde{w}_{\mathrm{L},k}^{(j)}\right\}_{j=1}^{N_\mathrm{p}}\right). \tag{14}$$

Please note that a complexity reduction can be obtained if the correction term $\log\left(1 + e^{-\left|\Delta_{n-1} - \delta_n\right|}\right)$ of Algorithm 1 is read from a hard-coded lookup table as a function of $\left|\Delta_{n-1} - \delta_n\right|$.
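The iterative recursion (11) and the log-weight normalization (8)/(9) can be sketched as follows; the function names are illustrative, and `math.log1p` is used for the correction term of (10):

```python
import math

def jacobian_logarithm(deltas):
    """Iteratively compute log(sum_s exp(delta_s)) via the Jacobian
    logarithm (10)/(11): log(e^a + e^b) = max(a, b) + log(1 + e^(-|a - b|))."""
    acc = deltas[0]  # Delta_1 = delta_1
    for d in deltas[1:]:
        # The correction term log1p(exp(-|acc - d|)) could alternatively be
        # read from a lookup table indexed by |acc - d|.
        acc = max(acc, d) + math.log1p(math.exp(-abs(acc - d)))
    return acc

def normalize_log_weights(log_w):
    """Log-weight normalization as in (8): subtract the normalization
    factor (9), computed entirely in the log-domain."""
    W = jacobian_logarithm(log_w)
    return [lw - W for lw in log_w]
```

Note that the normalization never leaves the log-domain: even log-weights around, say, -1000 (whose linear-domain counterparts are all zero in double precision) are normalized exactly, since only differences of log-weights enter the computation.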