Journal of Electrical and Computer Engineering


Research Article | Open Access

Volume 2018 | Article ID 5763461 | 11 pages | https://doi.org/10.1155/2018/5763461

Log-PF: Particle Filtering in Logarithm Domain

Academic Editor: Víctor Elvira
Received: 25 Aug 2017
Revised: 07 Nov 2017
Accepted: 06 Dec 2017
Published: 01 Mar 2018

Abstract

This paper presents a particle filter, called Log-PF, based on particle weights represented on a logarithmic scale. In practical systems, particle weights may approach values close to zero, which can cause numerical problems. Therefore, carrying out calculations with particle weights and probability densities in the logarithmic domain provides more accurate results. Additionally, calculations in the logarithmic domain improve the computational efficiency for distributions containing exponentials or products of functions. To provide efficient calculations, the Log-PF exploits the Jacobian logarithm, which is used to compute sums of exponentials. We introduce the weight calculation, weight normalization, resampling, and point estimation in the logarithmic domain. For point estimation, we derive the calculation of the minimum mean square error (MMSE) and maximum a posteriori (MAP) estimates. In particular, in situations where sensors are very accurate, the Log-PF achieves a substantial performance gain. We show the performance of the derived Log-PF in three simulations, where the Log-PF is more robust than its standard particle filter counterpart. In particular, we show the benefits of computing all steps in the logarithmic domain by an example based on Rao-Blackwellization.

1. Introduction

Many scientific problems involve dynamic systems, for example, in navigation applications. Dynamic systems can be described by state-space models in which the state is observable only through noisy measurements. Recursive Bayesian filters are algorithms that estimate an unknown probability density function (PDF) of the state recursively from measurements over time. Such a filter consists of two steps: prediction and update. In the prediction step, the PDF of the state is calculated based on the system model. During the update step, the current measurement is used to correct the prediction based on the measurement model. In this way, the posterior PDF of the state is estimated recursively over time. Particle filters (PFs) are implementations of recursive Bayesian filters which approximate the posterior PDF by a set of random samples, called particles, with associated weights. Several types of PFs have been developed over the last few years [1–8]. They differ in their choice of the importance sampling density and the resampling step.

A common choice is to set the importance sampling density equal to the prior, as in the bootstrap filtering algorithm [1]. However, if the width of the likelihood distribution is too small in comparison to the width of the prior distribution, or if measurements are located in the tail of the prior distribution, this choice may fail; see [4]. These situations may arise when sensors are very accurate or when measurements change rapidly over time, such that the particle states after the prediction step might be located in the tail of the likelihood. Additionally, the finite-precision representation of numbers may limit the computational accuracy through floating point errors. In these situations, a common approach is to use the likelihood particle filter (LPF) [3, 5]. The LPF uses the likelihood distribution as the importance sampling density and the prior for the weight update. The LPF is recommended when the width of the likelihood distribution is much smaller than that of the prior and, accordingly, the posterior density function is more similar to the likelihood than to the prior. However, in many situations, it is impossible to draw samples from the likelihood distribution. Furthermore, the LPF is not suitable for an underdetermined system where the number of measurements is lower than the number of states per time instant. Additionally, using the likelihood as proposal distribution might increase the variance of the simulated samples according to [5].

In this paper, we derive a PF that operates in the logarithmic domain (log-domain), called Log-PF. The Log-PF represents the weights in the log-domain, which enables a more accurate representation of low weights with a limited number of bits. Particularly when the involved distributions contain exponentials or products of functions, the log-domain representation is computationally more efficient [9]. The derived Log-PF uses the Jacobian logarithm [10–12] to describe all steps of the PF, including weight update, weight normalization, resampling, and point estimation, in the log-domain. In this paper, we derive the minimum mean square error (MMSE) and the maximum a posteriori (MAP) point estimators.

The paper is structured as follows: First, we describe standard PFs in Section 2; thereafter, we derive the proposed Log-PF in Section 3. Afterwards, we derive two point estimators in log-domain in Section 4: Section 4.1 describes the MMSE estimator and Section 4.2 the MAP estimator. We evaluate the Log-PF by simulations and compare the results to standard PF implementations and Kalman filters (KFs) in Section 5. In particular, we show by an example based on Rao-Blackwellization the benefits of computing all steps in log-domain. For distributed particle filters such as [13], similar results are expected. Finally, Section 6 concludes the paper. Throughout the paper, we will use the following notation:
(i) All vectors are interpreted as column vectors.
(ii) $\mathbf{I}$ denotes an identity matrix.
(iii) Matrices are denoted by bold capital letters and vectors by bold small letters.
(iv) $[\mathbf{v}]_l$ denotes the $l$th element of the vector $\mathbf{v}$.
(v) $\mathcal{N}(\mu, \sigma^2)$ denotes a Gaussian distributed (or multivariate) random variable with mean $\mu$ and variance $\sigma^2$.
(vi) $\mathrm{E}\{x\}$ stands for the expectation or sample mean of $x$.
(vii) $n = 1, \ldots, N$ stands for all integer numbers starting from $1$ to $N$, thus $n \in \{1, 2, \ldots, N\}$.
(viii) $\hat{x}$ denotes the estimate of $x$.
(ix) $\{a_n\}_{n=1}^{N}$ defines the set of elements $a_n$ for $n = 1, \ldots, N$.

2. Particle Filtering

PFs represent the probability density of the state vector $\mathbf{x}_k$ at time step $k$ by $N_p$ particles. According to [1–3, 5] and assuming a first-order hidden Markov model, the posterior filtered density $p(\mathbf{x}_k \mid \mathbf{z}_{1:k})$ is approximated as

$p(\mathbf{x}_k \mid \mathbf{z}_{1:k}) \approx \sum_{i=1}^{N_p} w_k^{(i)} \, \delta\big(\mathbf{x}_k - \mathbf{x}_k^{(i)}\big),$  (1)

where $\mathbf{z}_{1:k}$ defines the measurement vectors for the time steps $1, \ldots, k$, $\delta(\cdot)$ stands for the Dirac distribution, $\mathbf{x}_k^{(i)}$ denotes the $i$th particle state, and $w_k^{(i)}$ denotes the normalized weight with

$w_k^{(i)} = \tilde{w}_k^{(i)} \Big/ \sum_{j=1}^{N_p} \tilde{w}_k^{(j)}.$  (2)

The unnormalized weight is denoted by $\tilde{w}_k^{(i)}$, while the weight update is calculated as

$\tilde{w}_k^{(i)} \propto w_{k-1}^{(i)} \, \frac{p\big(\mathbf{z}_k \mid \mathbf{x}_k^{(i)}\big)\, p\big(\mathbf{x}_k^{(i)} \mid \mathbf{x}_{k-1}^{(i)}\big)}{q\big(\mathbf{x}_k^{(i)} \mid \mathbf{x}_{k-1}^{(i)}, \mathbf{z}_k\big)},$  (3)

with the importance density $q\big(\mathbf{x}_k^{(i)} \mid \mathbf{x}_{k-1}^{(i)}, \mathbf{z}_k\big)$, the likelihood distribution $p\big(\mathbf{z}_k \mid \mathbf{x}_k^{(i)}\big)$, and the transition prior distribution $p\big(\mathbf{x}_k^{(i)} \mid \mathbf{x}_{k-1}^{(i)}\big)$; see [1–3, 5]. For $N_p \to \infty$, the approximation used in (1) approaches $p(\mathbf{x}_k \mid \mathbf{z}_{1:k})$. By (1), (2), and (3), the sequential importance sampling (SIS) PF can be described, which is the basis of most PFs [2].

For numerical stability reasons, weights are often computed and stored in the log-domain, which is also computationally efficient when the distributions involved contain exponentials or products. Thus, we obtain from (3) the update equation in log-domain,

$\tilde{w}_{\log,k}^{(i)} = w_{\log,k-1}^{(i)} + \ln p\big(\mathbf{z}_k \mid \mathbf{x}_k^{(i)}\big) + \ln p\big(\mathbf{x}_k^{(i)} \mid \mathbf{x}_{k-1}^{(i)}\big) - \ln q\big(\mathbf{x}_k^{(i)} \mid \mathbf{x}_{k-1}^{(i)}, \mathbf{z}_k\big),$  (4)

where we define the unnormalized log-weight $\tilde{w}_{\log,k}^{(i)} = \ln \tilde{w}_k^{(i)}$, with the log-domain weight (log-weight)

$w_{\log,k}^{(i)} = \ln w_k^{(i)}$  (5)

for particle $i$. After calculating the weights in log-domain, the weights are transferred for further processing to the linear domain (lin-domain) with $w_k^{(i)} = \exp\big(w_{\log,k}^{(i)}\big)$ for $i = 1, \ldots, N_p$, where numerical accuracy is lost due to the floating point representation. In order to obtain a more stable PF implementation, the weights can be transferred to the lin-domain by

$\tilde{w}_k^{(i)} = \exp\Big(\tilde{w}_{\log,k}^{(i)} - \max_{j} \tilde{w}_{\log,k}^{(j)}\Big),$  (6)

such that the largest weight equals one; see, for example, [9].
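To illustrate the numerical issue and the max-shift of (6), the following minimal Python sketch (an illustration with assumed log-weight values, not the implementation used in the paper) compares a naive exponentiation of log-weights with the shifted version:

```python
import numpy as np

# Assumed example log-weights; very negative values are typical when
# likelihoods are sharply peaked (accurate sensors).
log_w = np.array([-1000.0, -1001.0, -1005.0])

# Naive transfer to the lin-domain underflows to zero for all particles,
# so the subsequent normalization would divide by zero.
w_naive = np.exp(log_w)                      # -> [0., 0., 0.]

# Max-shift as in (6): the largest weight becomes exactly one and the
# ratios between the weights are preserved.
w_shifted = np.exp(log_w - np.max(log_w))    # -> [1., 0.368, 0.0067]
w_normalized = w_shifted / np.sum(w_shifted)

print(w_naive, w_shifted, w_normalized)
```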

In the following we investigate a different approach, where the transformation from the log-domain to the lin-domain is not necessary. Hence, we show that all steps of the PF can be computed in log-domain.

3. Algorithm Derivation

To compute all steps of the PF in log-domain, we obtain for the approximation of the posterior filtered density from (1), using (5),

$p(\mathbf{x}_k \mid \mathbf{z}_{1:k}) \approx \sum_{i=1}^{N_p} \exp\big(w_{\log,k}^{(i)}\big)\, \delta\big(\mathbf{x}_k - \mathbf{x}_k^{(i)}\big).$  (7)

The normalization of the log-weight can be calculated directly in log-domain as a simple subtraction, with

$w_{\log,k}^{(i)} = \tilde{w}_{\log,k}^{(i)} - \Omega_k,$  (8)

where $\Omega_k$ denotes the normalization factor with

$\Omega_k = \ln \sum_{i=1}^{N_p} \exp\big(\tilde{w}_{\log,k}^{(i)}\big).$  (9)

To compute the normalization factor of (9) without transferring the log-weights to the lin-domain, the Jacobian logarithm [10, 11] can be used. The Jacobian logarithm computes the logarithm of a sum of two exponentials using the $\max$ operator and adding a correction term; that is,

$\ln\big(e^{\delta_1} + e^{\delta_2}\big) = \max(\delta_1, \delta_2) + \ln\big(1 + e^{-|\delta_1 - \delta_2|}\big).$  (10)

With (10) and as derived in [12], the expression $\ln \sum_{n=1}^{N} e^{\delta_n}$ can be calculated iteratively as

$\ln \sum_{n=1}^{N} e^{\delta_n} = \max(\Delta_{N-1}, \delta_N) + \ln\big(1 + e^{-|\Delta_{N-1} - \delta_N|}\big),$  (11)

where $\Delta_n = \ln \sum_{m=1}^{n} e^{\delta_m}$ and $\Delta_1 = \delta_1$. Hence, using the Jacobian logarithm allows computing operations such as the summation in (9) efficiently in the log-domain. For later convenience, we express (11) by the iterative algorithm shown as pseudocode in Algorithm 1 and denote its result by the operator

$\operatorname{max}^*\big(\delta_1, \ldots, \delta_N\big) = \Delta_N = \ln \sum_{n=1}^{N} e^{\delta_n},$  (12)

where $\{\delta_n\}_{n=1}^{N}$ defines the set of summands $\delta_n$ for $n = 1, \ldots, N$. Thus, the normalization factor of (9) can be calculated iteratively by

$\Omega_k = \operatorname{max}^*\big(\tilde{w}_{\log,k}^{(1)}, \ldots, \tilde{w}_{\log,k}^{(N_p)}\big).$  (13)

Hence, we obtain for the log-weight normalization of (8)

$w_{\log,k}^{(i)} = \tilde{w}_{\log,k}^{(i)} - \operatorname{max}^*\big(\tilde{w}_{\log,k}^{(1)}, \ldots, \tilde{w}_{\log,k}^{(N_p)}\big).$  (14)

Please note that a complexity reduction can be obtained if the correction term $\ln\big(1 + e^{-|\Delta_{n-1} - \delta_n|}\big)$ of Algorithm 1 is read from a hard-coded lookup table as a function of $|\Delta_{n-1} - \delta_n|$.

(1) Init: $\Delta \leftarrow \delta_1$;
(2) for $n = 2, \ldots, N$ do
(3) $\Delta^{\max} \leftarrow \max(\Delta, \delta_n)$;
(4) $\Delta \leftarrow \Delta^{\max} + \ln\big(1 + e^{-|\Delta - \delta_n|}\big)$;
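A minimal Python sketch of Algorithm 1 is given below; the function name jacobian_log_sum is a label chosen here for illustration, and the iteration mirrors the pseudocode above (running value plus correction term):

```python
import numpy as np

def jacobian_log_sum(deltas):
    """Compute ln(sum_n exp(delta_n)) iteratively via the Jacobian logarithm,
    without leaving the log-domain (cf. Algorithm 1)."""
    acc = deltas[0]                                   # Init: Delta = delta_1
    for d in deltas[1:]:
        m = max(acc, d)                               # max operator of (10)
        acc = m + np.log1p(np.exp(-abs(acc - d)))     # add correction term ln(1 + e^-|.|)
    return acc

# Example: normalization factor of (9) for assumed unnormalized log-weights.
log_w_tilde = np.array([-1000.0, -1001.0, -1005.0])
omega = jacobian_log_sum(log_w_tilde)
log_w = log_w_tilde - omega                           # log-weight normalization of (14)
print(omega, np.exp(log_w).sum())                     # the normalized weights sum to one
```

In practice, the same quantity can also be obtained with scipy.special.logsumexp; the explicit loop is shown here only to match the structure of Algorithm 1.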

By using (14), the SIS PF can be described in log-domain as shown by the pseudocode in Algorithm 2. Algorithm 2 is evaluated at each time step $k$, where $\big\{\mathbf{x}_k^{(i)}, w_{\log,k}^{(i)}\big\}_{i=1}^{N_p}$ denotes the set of particle states and log-weights for time step $k$.

(1) for $i = 1, \ldots, N_p$ do
(2) Draw: $\mathbf{x}_k^{(i)} \sim q\big(\mathbf{x}_k \mid \mathbf{x}_{k-1}^{(i)}, \mathbf{z}_k\big)$;
(3) Calculate: $\tilde{w}_{\log,k}^{(i)}$ according to (4);
(4) Calculate: $\Omega_k = \operatorname{max}^*\big(\tilde{w}_{\log,k}^{(1)}, \ldots, \tilde{w}_{\log,k}^{(N_p)}\big)$ using Algorithm 1;
(5) for $i = 1, \ldots, N_p$ do
(6) Normalize: $w_{\log,k}^{(i)} = \tilde{w}_{\log,k}^{(i)} - \Omega_k$;
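The following sketch illustrates one time step of the log-domain SIS recursion of Algorithm 2 for a generic one-dimensional model; the model densities (a random-walk transition prior used as importance density and a Gaussian likelihood) and all parameter values are assumptions chosen only to make the example self-contained:

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm

rng = np.random.default_rng(0)
N_p = 1000
sigma_x, sigma_z = 1.0, 0.01          # assumed process/measurement noise
x_prev = rng.normal(0.0, 1.0, N_p)    # particles from the previous time step
log_w_prev = np.full(N_p, -np.log(N_p))
z_k = 0.3                             # assumed current measurement

# Draw from the importance density (here: the transition prior, bootstrap choice).
x_k = x_prev + rng.normal(0.0, sigma_x, N_p)

# Log-domain weight update (4): with the prior as proposal, only the
# log-likelihood remains in the update term.
log_w_tilde = log_w_prev + norm.logpdf(z_k, loc=x_k, scale=sigma_z)

# Normalization (14) via the Jacobian-logarithm sum (logsumexp is equivalent).
log_w = log_w_tilde - logsumexp(log_w_tilde)
print(np.exp(logsumexp(log_w)))       # sums to one, without forming tiny linear weights
```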

One of the crucial problems of the SIS PF is degeneracy (another problem, which is not discussed in this paper, is the selection of the importance density in (4); see, e.g., [2]).

After a few time steps, all particles except one have low weights and no longer contribute to the computation of the posterior PDF; that is, the estimate of the distribution degenerates. A suitable measure of degeneracy is the effective sample size [1–3, 5]. A widely used approximation of the effective sample size is

$N_{\mathrm{eff}} \approx \frac{1}{\sum_{i=1}^{N_p} \big(w_k^{(i)}\big)^2},$  (15)

with the normalized weights $w_k^{(i)} = \exp\big(w_{\log,k}^{(i)}\big)$ and $1 \le N_{\mathrm{eff}} \le N_p$. A small value of $N_{\mathrm{eff}}$ indicates a severe degeneracy. By using the Jacobian logarithm of (12), we obtain from (15) the effective sample size in log-domain, with

$\ln N_{\mathrm{eff}} \approx -\operatorname{max}^*\big(2 w_{\log,k}^{(1)}, \ldots, 2 w_{\log,k}^{(N_p)}\big).$  (16)
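A sketch of the log-domain effective sample size computation, under the same notational assumptions as above (normalized log-weights log_w), could look as follows; the helper name is illustrative:

```python
import numpy as np
from scipy.special import logsumexp

def log_n_eff(log_w):
    """ln(N_eff) for normalized log-weights: ln N_eff = -ln sum_i exp(2*log_w_i),
    i.e. the Jacobian-logarithm sum of (15) evaluated entirely in the log-domain."""
    return -logsumexp(2.0 * np.asarray(log_w))

log_w = np.full(100, -np.log(100.0))          # uniform weights: N_eff = N_p
print(np.exp(log_n_eff(log_w)))               # -> 100.0
degenerate = np.log(np.r_[1.0, np.full(99, 1e-300)] / (1.0 + 99e-300))
print(np.exp(log_n_eff(degenerate)))          # -> close to 1.0 (severe degeneracy)
```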

Alternative effective sample size approximations, as introduced in [14], can also be represented in log-domain. Table 1 summarizes four generalized alternative effective sample size approximations in lin-domain and log-domain which depend on a parameter. Please note that $N_{\mathrm{eff}}$ as in (15) is obtained from this family for a particular choice of the parameter.

Table 1: Generalized alternative effective sample size approximations in lin-domain and log-domain.
The Generic PF extends the SIS PF by a resampling step to prevent degeneration, as shown by the pseudocode in Algorithm 3. The basic idea of resampling is to eliminate particles with low weights and reproduce particles with high weights. Whenever a significant degeneracy is observed in the Generic PF, that is, $N_{\mathrm{eff}}$ is less than a threshold $N_{\mathrm{thr}}$, the particles are resampled. Algorithm 4 shows a pseudocode of the systematic resampling algorithm [15] transferred into log-domain. In Algorithm 4, $\mathcal{U}[0, 1/N_p)$ denotes the uniform distribution on the interval $[0, 1/N_p)$ (cf. Algorithm 4, Line 5). Similarly to the descriptions before, the Jacobian logarithm is used to construct the estimated sampled cumulative distribution function in log-domain (log-CDF); see Algorithm 4, Line 3. The estimated sampled log-CDF is represented by a vector $\mathbf{c}$ of length $N_p$ with elements $[\mathbf{c}]_i$ for $i = 1, \ldots, N_p$, where $[\mathbf{c}]_i$ denotes the $i$th element of the vector $\mathbf{c}$. According to the estimated sampled log-CDF, particles with high weights are reproduced and particles with low weights are eliminated.

(1) for $i = 1, \ldots, N_p$ do
(2) Draw: $\mathbf{x}_k^{(i)} \sim q\big(\mathbf{x}_k \mid \mathbf{x}_{k-1}^{(i)}, \mathbf{z}_k\big)$;
(3) Calculate: $\tilde{w}_{\log,k}^{(i)}$ according to (4);
(4) Calculate: $\Omega_k = \operatorname{max}^*\big(\tilde{w}_{\log,k}^{(1)}, \ldots, \tilde{w}_{\log,k}^{(N_p)}\big)$ using Algorithm 1;
(5) for $i = 1, \ldots, N_p$ do
(6) Normalize: $w_{\log,k}^{(i)} = \tilde{w}_{\log,k}^{(i)} - \Omega_k$;
(7) Calculate $N_{\mathrm{eff}}$ according to Table 1;
(8) if $N_{\mathrm{eff}} < N_{\mathrm{thr}}$ then
(9) Resample with Algorithm 4: obtaining $\big\{\mathbf{x}_k^{(i)}, w_{\log,k}^{(i)} = -\ln N_p\big\}_{i=1}^{N_p}$;
(1) Initialize the log-CDF: $[\mathbf{c}]_1 = w_{\log,k}^{(1)}$;
(2) for $i = 2, \ldots, N_p$ do
(3) Construct the log-CDF using the Jacobian logarithm:
$[\mathbf{c}]_i = \max\big([\mathbf{c}]_{i-1}, w_{\log,k}^{(i)}\big) + \ln\big(1 + e^{-|[\mathbf{c}]_{i-1} - w_{\log,k}^{(i)}|}\big)$;
(4) $i = 1$;
(5) Draw starting point: $u_1 \sim \mathcal{U}[0, 1/N_p)$;
(6) for $j = 1, \ldots, N_p$ do
(7) $u_j = u_1 + (j-1)/N_p$;
(8) while $\ln u_j > [\mathbf{c}]_i$ do
(9) $i = i + 1$;
(10) Assign: $\mathbf{x}_k^{(j)} = \mathbf{x}_k^{(i)}$, $w_{\log,k}^{(j)} = -\ln N_p$;
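A Python sketch of the log-domain systematic resampling of Algorithm 4 follows; the function name and the use of numpy are assumptions for illustration. The log-CDF is built with the Jacobian logarithm, and the comparisons are carried out against the logarithm of the systematic sampling points, so the weights never leave the log-domain:

```python
import numpy as np

def systematic_resample_log(x, log_w, rng):
    """Systematic resampling driven by normalized log-weights (cf. Algorithm 4)."""
    N_p = len(log_w)
    # Construct the estimated sampled log-CDF with the Jacobian logarithm.
    log_cdf = np.empty(N_p)
    log_cdf[0] = log_w[0]
    for i in range(1, N_p):
        a, b = log_cdf[i - 1], log_w[i]
        log_cdf[i] = max(a, b) + np.log1p(np.exp(-abs(a - b)))
    # Systematic sampling points u_j = u_1 + (j-1)/N_p, compared in the log-domain.
    u1 = rng.uniform(0.0, 1.0 / N_p)
    idx = np.empty(N_p, dtype=int)
    i = 0
    for j in range(N_p):
        log_u = np.log(u1 + j / N_p)
        while i < N_p - 1 and log_u > log_cdf[i]:
            i += 1
        idx[j] = i
    # Resampled particles carry uniform log-weights -ln(N_p).
    return x[idx], np.full(N_p, -np.log(N_p))

rng = np.random.default_rng(1)
x = np.arange(5, dtype=float)
log_w = np.log(np.array([0.05, 0.05, 0.8, 0.05, 0.05]))
print(systematic_resample_log(x, log_w, rng)[0])   # particle 2 is reproduced most often
```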

In Section 5, we use the sequential importance resampling (SIR) PF (see [1]) as an example for comparing the performance of the linear-domain PF (Lin-PF) and the Log-PF. Therefore, Algorithm 5 shows a pseudocode of the SIR PF in log-domain, which is derived from the Generic PF by setting the importance density equal to the transition prior distribution, $q\big(\mathbf{x}_k \mid \mathbf{x}_{k-1}^{(i)}, \mathbf{z}_k\big) = p\big(\mathbf{x}_k \mid \mathbf{x}_{k-1}^{(i)}\big)$, and using $N_{\mathrm{thr}} = N_p$, that is, performing resampling at each time step [2].

(1) for $i = 1, \ldots, N_p$ do
(2) Draw: $\mathbf{x}_k^{(i)} \sim p\big(\mathbf{x}_k \mid \mathbf{x}_{k-1}^{(i)}\big)$;
(3) Calculate: $\tilde{w}_{\log,k}^{(i)} = w_{\log,k-1}^{(i)} + \ln p\big(\mathbf{z}_k \mid \mathbf{x}_k^{(i)}\big)$;
(4) Calculate: $\Omega_k = \operatorname{max}^*\big(\tilde{w}_{\log,k}^{(1)}, \ldots, \tilde{w}_{\log,k}^{(N_p)}\big)$ using Algorithm 1;
(5) for $i = 1, \ldots, N_p$ do
(6) Normalize: $w_{\log,k}^{(i)} = \tilde{w}_{\log,k}^{(i)} - \Omega_k$;
(7) Resample with Algorithm 4: obtaining $\big\{\mathbf{x}_k^{(i)}, w_{\log,k}^{(i)} = -\ln N_p\big\}_{i=1}^{N_p}$;
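Putting the pieces together, a compact sketch of one SIR Log-PF time step (Algorithm 5) for an assumed scalar random-walk model could read as follows. The model, the parameter values, and the function name are illustrative assumptions; for brevity, the resampling uses rng.choice on the exponentiated normalized weights (which is numerically safe after log-domain normalization), whereas Algorithm 4 would perform this step entirely in the log-domain:

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm

def sir_log_pf_step(x_prev, z_k, rng, sigma_x=1.0, sigma_z=0.01):
    """One SIR Log-PF time step: proposal = transition prior, resampling every step."""
    N_p = len(x_prev)
    x_k = x_prev + rng.normal(0.0, sigma_x, N_p)            # draw from transition prior
    log_w_tilde = norm.logpdf(z_k, loc=x_k, scale=sigma_z)  # log-likelihood as log-weight
    log_w = log_w_tilde - logsumexp(log_w_tilde)            # normalization (14)
    x_mmse = np.sum(np.exp(log_w) * x_k)                    # lin-domain MMSE, for reference
    idx = rng.choice(N_p, size=N_p, p=np.exp(log_w))        # resampling stand-in
    return x_k[idx], np.full(N_p, -np.log(N_p)), x_mmse

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, 500)
x, log_w, est = sir_log_pf_step(x, z_k=0.25, rng=rng)
print(est)
```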

Additionally, in Section 5 we compare the proposed Log-PF to a PF implementation which computes the weights in log-domain and uses (6) to obtain the weights in lin-domain, called Lin-Log-PF in the following. A pseudocode of the Generic Lin-Log-PF is shown in Algorithm 6: the weights are calculated in log-domain according to (4) and then transferred to the lin-domain and normalized according to (6), as mentioned above and in, for example, [9]. Please note that further improvements can be obtained if the weights are propagated directly in log-domain whenever resampling is not necessary.

(1) for $i = 1, \ldots, N_p$ do
(2) Draw: $\mathbf{x}_k^{(i)} \sim q\big(\mathbf{x}_k \mid \mathbf{x}_{k-1}^{(i)}, \mathbf{z}_k\big)$;
(3) Calculate: $\tilde{w}_{\log,k}^{(i)}$ according to (4);
(4) for $i = 1, \ldots, N_p$ do
(5) Transfer and Normalize:
$\tilde{w}_k^{(i)} = \exp\big(\tilde{w}_{\log,k}^{(i)} - \max_j \tilde{w}_{\log,k}^{(j)}\big)$;
(6) Calculate: $s_k = \sum_{i=1}^{N_p} \tilde{w}_k^{(i)}$;
(7) for $i = 1, \ldots, N_p$ do
(8) Normalize: $w_k^{(i)} = \tilde{w}_k^{(i)} / s_k$;
(9) Calculate $N_{\mathrm{eff}}$ according to Table 1;
(10) if $N_{\mathrm{eff}} < N_{\mathrm{thr}}$ then
(11) Resample with Algorithm 4: obtaining $\big\{\mathbf{x}_k^{(i)}, w_k^{(i)} = 1/N_p\big\}_{i=1}^{N_p}$;

4. Log-PF Point Estimators

In many applications, we are interested in a point estimate of the state instead of its a posteriori PDF. In this section we derive the MMSE and MAP point estimators based on the a posteriori density estimated by the Log-PF.

4.1. Minimum Mean Square Error Estimate

The MMSE point estimate using the approximated a posteriori density (see, e.g., [16]) is defined by

$\hat{\mathbf{x}}_k^{\mathrm{MMSE}} = \mathrm{E}\{\mathbf{x}_k \mid \mathbf{z}_{1:k}\} \approx \sum_{i=1}^{N_p} w_k^{(i)}\, \mathbf{x}_k^{(i)},$  (17)

where the $l$th element of the vector $\hat{\mathbf{x}}_k^{\mathrm{MMSE}}$ can be expressed as

$\big[\hat{\mathbf{x}}_k^{\mathrm{MMSE}}\big]_l \approx \sum_{i=1}^{N_p} \exp\big(w_{\log,k}^{(i)}\big)\, \big[\mathbf{x}_k^{(i)}\big]_l.$  (18)

In order to use the Jacobian logarithm to compute (18) in log-domain, we separate the indices of the positive and negative values of $\big[\mathbf{x}_k^{(i)}\big]_l$ with

$\mathcal{I}_l^{+} = \big\{ i : \big[\mathbf{x}_k^{(i)}\big]_l > 0 \big\}, \qquad \mathcal{I}_l^{-} = \big\{ i : \big[\mathbf{x}_k^{(i)}\big]_l < 0 \big\},$  (19)

and split the corresponding log-weights accordingly. Please note that elements with $\big[\mathbf{x}_k^{(i)}\big]_l = 0$ are not considered in (19) because they do not contribute to the sum in (18). Thus, we obtain from (18) for the MMSE estimate

$\big[\hat{\mathbf{x}}_k^{\mathrm{MMSE}}\big]_l \approx \exp\big(a_l^{+}\big) - \exp\big(a_l^{-}\big),$  (20)

where we introduced

$a_l^{\pm} = \operatorname{max}^*_{\, i \in \mathcal{I}_l^{\pm}}\Big( w_{\log,k}^{(i)} + \ln\big|\big[\mathbf{x}_k^{(i)}\big]_l\big| \Big).$  (21)
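A sketch of the log-domain MMSE computation with the sign separation of (19) is given below for a scalar state element; array names are illustrative and logsumexp stands in for the Jacobian-logarithm sum:

```python
import numpy as np
from scipy.special import logsumexp

def mmse_log_domain(x, log_w):
    """MMSE estimate from particle values x and normalized log-weights,
    separating positive and negative particle values as in (19)-(21)."""
    x = np.asarray(x, dtype=float)
    pos, neg = x > 0, x < 0                      # zero-valued elements contribute nothing
    a_pos = logsumexp(log_w[pos] + np.log(x[pos])) if pos.any() else -np.inf
    a_neg = logsumexp(log_w[neg] + np.log(-x[neg])) if neg.any() else -np.inf
    return np.exp(a_pos) - np.exp(a_neg)         # (20)

x = np.array([-2.0, -0.5, 0.0, 1.0, 3.0])
log_w = np.log(np.array([0.1, 0.2, 0.3, 0.25, 0.15]))
print(mmse_log_domain(x, log_w), np.sum(np.exp(log_w) * x))   # both give the same value
```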

4.2. Maximum A Posteriori Estimate

The MAP point estimate is defined as

$\hat{\mathbf{x}}_k^{\mathrm{MAP}} = \arg\max_{\mathbf{x}_k}\, p(\mathbf{x}_k \mid \mathbf{z}_{1:k}),$  (22)

which can be approximated (see [17]) by

$\hat{\mathbf{x}}_k^{\mathrm{MAP}} \approx \arg\max_{\mathbf{x}_k^{(i)}}\; p\big(\mathbf{z}_k \mid \mathbf{x}_k^{(i)}\big) \sum_{j=1}^{N_p} p\big(\mathbf{x}_k^{(i)} \mid \mathbf{x}_{k-1}^{(j)}\big)\, w_{k-1}^{(j)},$  (23)

for $i = 1, \ldots, N_p$. The corresponding MAP state estimator using weights in log-domain can be calculated using the Jacobian logarithm of (12) with

$\hat{\mathbf{x}}_k^{\mathrm{MAP}} \approx \arg\max_{\mathbf{x}_k^{(i)}} \Big[ \ln p\big(\mathbf{z}_k \mid \mathbf{x}_k^{(i)}\big) + \operatorname{max}^*_{\,j}\Big( \ln p\big(\mathbf{x}_k^{(i)} \mid \mathbf{x}_{k-1}^{(j)}\big) + w_{\log,k-1}^{(j)} \Big) \Big].$  (24)
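The MAP estimator of (24) can be sketched as follows, assuming Gaussian transition and likelihood densities purely for illustration; the inner sum over the previous particles is evaluated with the Jacobian-logarithm sum (here logsumexp), and the function name is a label chosen for this sketch:

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm

def map_estimate_log(x_k, x_prev, log_w_prev, z_k, sigma_x, sigma_z):
    """MAP point estimate in the log-domain, cf. (24): for each particle, add the
    log-likelihood to the Jacobian-logarithm sum of log transition densities
    weighted by the previous log-weights, then take the arg max."""
    log_lik = norm.logpdf(z_k, loc=x_k, scale=sigma_z)
    # ln sum_j p(x_k^(i) | x_{k-1}^(j)) w_{k-1}^(j), evaluated per particle i
    log_trans = norm.logpdf(x_k[:, None], loc=x_prev[None, :], scale=sigma_x)
    log_pred = logsumexp(log_trans + log_w_prev[None, :], axis=1)
    return x_k[np.argmax(log_lik + log_pred)]

rng = np.random.default_rng(3)
x_prev = rng.normal(0.0, 1.0, 200)
log_w_prev = np.full(200, -np.log(200))
x_k = x_prev + rng.normal(0.0, 1.0, 200)
print(map_estimate_log(x_k, x_prev, log_w_prev, z_k=0.5, sigma_x=1.0, sigma_z=0.1))
```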

5. Simulations

In this section, we demonstrate the performance of the Log-PF with three simulations, using 64-bit floating point numbers according to IEEE Standard 754 for double precision.

5.1. Linear Processes

First, we simulate a linear Gaussian model. The KF introduced in [18] is an optimal recursive Bayesian filter which can be used if the considered system is linear and the probabilistic model is Gaussian. Hence, we compare the Log-PF and the Lin-PF to the KF as a benchmark. The simulation considers the linear transition model $\mathbf{x}_k = \mathbf{A}\,\mathbf{x}_{k-1} + \mathbf{w}_k$, with the transition matrix $\mathbf{A}$, the state vector $\mathbf{x}_k$, and the zero-mean multivariate Gaussian distributed process noise $\mathbf{w}_k \sim \mathcal{N}(\mathbf{0}, \sigma_w^2 \mathbf{I})$ with standard deviation $\sigma_w$ and the identity matrix $\mathbf{I}$. The measurement model is defined by $z_k = \mathbf{H}\,\mathbf{x}_k + n_k$, with the measurement matrix $\mathbf{H}$ and the zero-mean Gaussian distributed measurement noise $n_k \sim \mathcal{N}(0, \sigma_n^2)$ with standard deviation $\sigma_n$. Based on the measurements $z_k$, the state sequence $\mathbf{x}_k$ for $k = 1, \ldots, K$ is estimated using a KF, the Lin-PF, and the Log-PF with $N_p$ particles and known initial state $\mathbf{x}_0$. For the Lin-PF, we use the standard PF implementation as well as the Lin-Log-PF.
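The exact transition and measurement matrices and noise parameters of the simulation are not reproduced here; the sketch below merely sets up a generic linear Gaussian state-space model of this form with assumed values, which could serve as a starting point for reproducing the comparison:

```python
import numpy as np

rng = np.random.default_rng(4)
K = 100                                   # number of time steps (assumed)
A = np.array([[1.0, 1.0], [0.0, 1.0]])    # assumed transition matrix (constant velocity)
H = np.array([[1.0, 0.0]])                # assumed measurement matrix
sigma_w, sigma_n = 0.1, 1e-3              # assumed process / (small) measurement noise std

x = np.zeros((K, 2))
z = np.zeros(K)
for k in range(1, K):
    x[k] = A @ x[k - 1] + sigma_w * rng.standard_normal(2)   # transition model
    z[k] = (H @ x[k])[0] + sigma_n * rng.standard_normal()   # measurement model
# z can now be fed to a KF, SIR Lin-PF, SIR Lin-Log-PF, and SIR Log-PF for comparison.
```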

First, we compare the KF to the SIR Lin-PF, SIR Lin-Log-PF, and SIR Log-PF. In order to assess the robustness of the SIR Log-PF, we successively decrease the measurement noise standard deviation $\sigma_n$. We simulate 1000 different realizations with known initial state for each run. Figure 1 shows the root mean square error (RMSE) averaged over all time steps and simulations versus the decreasing measurement noise standard deviation $\sigma_n$. The abbreviation SIR Log-PF MAP stands for the MAP point estimate and SIR Log-PF MMSE for the MMSE point estimate of the SIR Log-PF. Respectively, the abbreviations SIR Lin-PF MAP, SIR Lin-Log-PF MAP, SIR Lin-PF MMSE, and SIR Lin-Log-PF MMSE stand for the corresponding point estimates of the SIR Lin-PF and SIR Lin-Log-PF. We see that the KF obtains the best estimation results, followed by the SIR Log-PF and SIR Lin-Log-PF. Figure 1 additionally shows an enlarged subfigure of the region of small $\sigma_n$. For large $\sigma_n$, all SIR PFs obtain equivalent performance. As $\sigma_n$ decreases, the RMSE of the SIR PFs decreases up to a certain point. For lower measurement noise standard deviations, the RMSE of the SIR Lin-PF increases up to a limit, whereas the accuracy of the SIR Log-PF and SIR Lin-Log-PF is limited only by the number of particles. This effect is caused by the number representation of the particle weights of the SIR Lin-PF. For