Abstract

To address the increasing number of mixture components in the Gaussian mixture PHD filter, an iterative mixture component pruning algorithm is proposed. The pruning algorithm is based on maximizing the posterior probability density of the mixture weights. The entropy distribution of the mixture weights is adopted as the prior distribution of the mixture component parameters. The iterative update formulas of the mixture weights are derived by the Lagrange multiplier method and the Lambert W function. Mixture components whose weights become negative during the iterative procedure are pruned by setting the corresponding mixture weights to zero. In addition, multiple mixture components with similar parameters describing the same PHD peak can be merged into one mixture component in the algorithm. Simulation results show that the proposed iterative mixture component pruning algorithm is superior to the typical pruning algorithm based on thresholds.

1. Introduction

The objective of multitarget tracking is to estimate the target number and the target states from a sequence of noisy and cluttered measurement sets. The tracked target is generally simplified as a point [1–3]. Most of the existing point target tracking algorithms are based on data association, where the correspondence of measurements to targets has to be set up. The simplest data association algorithm is the nearest-neighbour algorithm, in which the measurement closest in statistical distance to the predicted state is used to update the target state estimate. Probabilistic data association is another typical algorithm, in which all the measurements close to the predicted state are used to update the target state estimate [4]. Joint probabilistic data association is a generalization of probabilistic data association to multiple target tracking, in which the association probabilities of all the targets and measurements are described by validation matrices [5, 6]. Multitarget tracking algorithms based on data association take an individual-target view, where the problem of multitarget tracking is converted into multiple problems of single target tracking. In multitarget tracking, both the measurements and the estimates are obtained in set form. Thus, multitarget tracking is naturally a class of set-valued estimation problems. The probability hypothesis density (PHD) filter derived by Mahler based on random finite set statistics is an elegant and tractable approximate solution to the multitarget tracking problem [7, 8]. Another interpretation of the PHD, in the bin-occupancy view, is presented in [9]. To date, there have been two implementations of the PHD filter, the Gaussian mixture implementation [10, 11] and the sequential Monte Carlo implementation [12–16], which are suitable for linear Gaussian dynamics and nonlinear non-Gaussian dynamics, respectively. The convergence of the Gaussian mixture implementation is discussed in [17] and the convergence of the sequential Monte Carlo implementation in [15, 18, 19]. The cardinalized PHD (CPHD) filter, which propagates both the PHD and the distribution of the target number, was developed to improve the performance of the PHD filter [20]. Generally, the CPHD filter is computationally less tractable than the PHD filter. There have been the Gaussian mixture implementation of the CPHD filter under multitarget linear Gaussian assumptions [21] and the sequential Monte Carlo implementation [22]. As promising and unified methodologies, the PHD and CPHD filters have been widely applied in many fields, such as maneuvering target tracking [23, 24], sonar tracking [25, 26], and visual tracking [27–29]. As the sensor resolution is greatly improved, target tracking should be formulated as extended object tracking [30]. The extended object PHD filter was also derived by Mahler in [31]. Several implementations of the extended object probability hypothesis density filter exist by now [32–36]. The convergence of the Gaussian mixture implementation of the extended object probability hypothesis density filter is discussed in [37]. When the Gaussian mixture model is applied in set-valued multitarget tracking, Gaussian mixture reduction is an important topic [10, 38]. Earlier work on Gaussian mixture reduction for target tracking was done in [39, 40]. For Gaussian mixture reduction, there are several criteria, such as maximum similarity [41], Euclidean distance [42–44], and the Kullback-Leibler divergence measure [45].
This paper concentrates on the Gaussian mixture reduction in the Gaussian mixture implementation of the PHD filter.

The Gaussian mixture implementation of the PHD filter approximates the PHD by a sum of weighted Gaussian components under the multitarget linear Gaussian assumptions [10]. In the Gaussian mixture PHD filter, the PHD is represented by a large number of weighted Gaussian components that are propagated over time. The sum of the weights of the Gaussian components is the expected target number, since the integral of the PHD over the state space is the expected target number. The output of the Gaussian mixture PHD filter is the set of weighted Gaussian components. However, the Gaussian mixture PHD filter suffers from computational problems associated with the increasing number of Gaussian components as time progresses, since the mixture component number increases at both the prediction step and the update step. In fact, the component number increases without bound. Thus, the Gaussian mixture PHD filter is infeasible without a component pruning operation. The goal of this paper is to prune the Gaussian components to make the Gaussian mixture PHD filter feasible. An iterative mixture component pruning algorithm is proposed for the Gaussian mixture PHD filter. The pruning of mixture components is done by setting mixture weights to zero during the iteration procedure.

The remaining parts of this paper are organized as follows. Section 2 describes the component increasing problem in the Gaussian mixture PHD filter. The iterative mixture component pruning algorithm is derived in Section 3. Section 4 is devoted to the simulation study. The conclusion is provided in Section 5.

2. Problem Description

The predictor and corrector of the PHD filter [7, 8] are
$$D_{k|k-1}(x) = \gamma_k(x) + \int \big[p_S f_{k|k-1}(x|\zeta) + \beta_{k|k-1}(x|\zeta)\big]\, D_{k-1}(\zeta)\, d\zeta, \tag{1}$$
$$D_{k}(x) = \bigg[1 - p_D + \sum_{z\in Z_k} \frac{p_D\, g_k(z|x)}{\kappa_k(z) + \int p_D\, g_k(z|\xi)\, D_{k|k-1}(\xi)\, d\xi}\bigg] D_{k|k-1}(x), \tag{2}$$
respectively, where $D_{k-1}$ is the PHD at time step $k-1$, $\gamma_k$ is the birth PHD at time step $k$, $\beta_{k|k-1}(\cdot|\zeta)$ is the PHD spawned from $\zeta$ at time step $k$, $\kappa_k$ is the clutter PHD, $p_S$ is the survival probability, $p_D$ is the detection probability, $f_{k|k-1}(x|\zeta)$ is the single target transition density, $g_k(z|x)$ is the single target likelihood, and $Z_k$ is the set of measurements at time step $k$.

Under the linear Gaussian assumptions, the Gaussian mixture PHD filter is derived in [10]. The main steps of the Gaussian mixture PHD filter are summarized as follows. If the PHD at time step $k-1$ is in the form of the Gaussian mixture
$$D_{k-1}(x) = \sum_{i=1}^{J_{k-1}} w_{k-1}^{(i)}\, \mathcal{N}\big(x; m_{k-1}^{(i)}, P_{k-1}^{(i)}\big), \tag{3}$$
where $w_{k-1}^{(i)}$ is the mixture weight, $\mathcal{N}(x; m, P)$ is the Gaussian distribution, $m_{k-1}^{(i)}$ is the mean, $P_{k-1}^{(i)}$ is the covariance, and $J_{k-1}$ is the component number, then the predicted PHD for time step $k$ is given by
$$D_{k|k-1}(x) = D_{\gamma,k}(x) + D_{S,k|k-1}(x) + D_{\beta,k|k-1}(x), \tag{4}$$
where
$$D_{\gamma,k}(x) = \sum_{i=1}^{J_{\gamma,k}} w_{\gamma,k}^{(i)}\, \mathcal{N}\big(x; m_{\gamma,k}^{(i)}, P_{\gamma,k}^{(i)}\big) \tag{5}$$
is the birth PHD,
$$D_{S,k|k-1}(x) = p_S \sum_{i=1}^{J_{k-1}} w_{k-1}^{(i)}\, \mathcal{N}\big(x; m_{S,k|k-1}^{(i)}, P_{S,k|k-1}^{(i)}\big) \tag{6}$$
is the survival PHD,
$$m_{S,k|k-1}^{(i)} = F_{k-1}\, m_{k-1}^{(i)} \tag{7}$$
is the predicted mean of the Gaussian component,
$$P_{S,k|k-1}^{(i)} = F_{k-1}\, P_{k-1}^{(i)}\, F_{k-1}^{T} + Q_{k-1} \tag{8}$$
is the predicted covariance of the Gaussian component,
$$D_{\beta,k|k-1}(x) = \sum_{i=1}^{J_{k-1}} \sum_{j=1}^{J_{\beta,k}} w_{k-1}^{(i)}\, w_{\beta,k}^{(j)}\, \mathcal{N}\big(x; m_{\beta,k|k-1}^{(i,j)}, P_{\beta,k|k-1}^{(i,j)}\big) \tag{9}$$
is the spawned PHD,
$$m_{\beta,k|k-1}^{(i,j)} = F_{\beta,k-1}^{(j)}\, m_{k-1}^{(i)} + d_{\beta,k-1}^{(j)} \tag{10}$$
is the spawned mean of the Gaussian component, and
$$P_{\beta,k|k-1}^{(i,j)} = F_{\beta,k-1}^{(j)}\, P_{k-1}^{(i)}\, \big(F_{\beta,k-1}^{(j)}\big)^{T} + Q_{\beta,k-1}^{(j)} \tag{11}$$
is the spawned covariance of the Gaussian component. If formula (4) is rewritten in the simple form of the Gaussian mixture
$$D_{k|k-1}(x) = \sum_{i=1}^{J_{k|k-1}} w_{k|k-1}^{(i)}\, \mathcal{N}\big(x; m_{k|k-1}^{(i)}, P_{k|k-1}^{(i)}\big), \tag{12}$$
then the posterior PHD at time step $k$ is
$$D_{k}(x) = (1 - p_D)\, D_{k|k-1}(x) + \sum_{z\in Z_k} D_{D,k}(x; z), \tag{13}$$
where
$$D_{D,k}(x; z) = \sum_{i=1}^{J_{k|k-1}} w_{k}^{(i)}(z)\, \mathcal{N}\big(x; m_{k|k}^{(i)}(z), P_{k|k}^{(i)}\big) \tag{14}$$
is the detected PHD,
$$w_{k}^{(i)}(z) = \frac{p_D\, w_{k|k-1}^{(i)}\, q_k^{(i)}(z)}{\kappa_k(z) + p_D \sum_{j=1}^{J_{k|k-1}} w_{k|k-1}^{(j)}\, q_k^{(j)}(z)}, \qquad q_k^{(i)}(z) = \mathcal{N}\big(z; H_k m_{k|k-1}^{(i)},\, H_k P_{k|k-1}^{(i)} H_k^{T} + R_k\big), \tag{15}$$
is the updated weight,
$$m_{k|k}^{(i)}(z) = m_{k|k-1}^{(i)} + K_k^{(i)}\big(z - H_k m_{k|k-1}^{(i)}\big) \tag{16}$$
is the updated mean,
$$P_{k|k}^{(i)} = \big(I - K_k^{(i)} H_k\big)\, P_{k|k-1}^{(i)} \tag{17}$$
is the updated covariance, and
$$K_k^{(i)} = P_{k|k-1}^{(i)} H_k^{T}\big(H_k P_{k|k-1}^{(i)} H_k^{T} + R_k\big)^{-1} \tag{18}$$
is the gain.
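To make the recursion concrete, the following is a minimal NumPy sketch of the prediction and update equations (4)–(18), with spawning omitted for brevity. All names (`GaussianComponent`, `predict`, `update`) are illustrative rather than taken from [10], and the model matrices $F$, $Q$, $H$, $R$ are assumed given.

```python
# Minimal sketch of the GM-PHD prediction/update equations (4)-(18).
# Names are illustrative; spawning (9)-(11) is omitted for brevity.
import numpy as np
from dataclasses import dataclass

@dataclass
class GaussianComponent:
    w: float       # mixture weight
    m: np.ndarray  # mean
    P: np.ndarray  # covariance

def predict(components, birth, F, Q, p_S):
    """Birth part (5) plus survival part (6)-(8) of formula (4)."""
    predicted = list(birth)                    # birth components, formula (5)
    for c in components:
        predicted.append(GaussianComponent(
            w=p_S * c.w,                       # survival weight in (6)
            m=F @ c.m,                         # formula (7)
            P=F @ c.P @ F.T + Q))              # formula (8)
    return predicted

def update(predicted, Z, H, R, p_D, kappa):
    """Formulas (13)-(18); kappa(z) is the clutter PHD evaluated at z."""
    updated = [GaussianComponent((1 - p_D) * c.w, c.m, c.P) for c in predicted]
    for z in Z:
        detected = []
        for c in predicted:
            S = H @ c.P @ H.T + R                        # innovation covariance
            K = c.P @ H.T @ np.linalg.inv(S)             # gain, formula (18)
            q = gaussian_pdf(z, H @ c.m, S)              # q_k^(i)(z) in (15)
            detected.append(GaussianComponent(
                w=p_D * c.w * q,
                m=c.m + K @ (z - H @ c.m),               # formula (16)
                P=(np.eye(len(c.m)) - K @ H) @ c.P))     # formula (17)
        norm = kappa(z) + sum(d.w for d in detected)     # denominator of (15)
        for d in detected:
            d.w /= norm
        updated.extend(detected)
    return updated

def gaussian_pdf(z, m, S):
    d = z - m
    return np.exp(-0.5 * d @ np.linalg.solve(S, d)) / \
        np.sqrt((2 * np.pi) ** len(z) * np.linalg.det(S))
```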

It can be seen from formula (4) that the component number increases from $J_{k-1}$ to $J_{k|k-1} = J_{k-1}(1 + J_{\beta,k}) + J_{\gamma,k}$ at the prediction step, where $J_{\gamma,k}$ and $J_{\beta,k}$ are the numbers of birth and spawning components. It is obvious in formula (13) that the component number increases from $J_{k|k-1}$ to $(1 + |Z_k|)\,J_{k|k-1}$ at the update step, where $|Z_k|$ is the number of measurements. Hence, the number of Gaussian components representing the PHD at time step $k$ in the Gaussian mixture PHD filter is
$$J_k = \big(1 + |Z_k|\big)\big[J_{k-1}\big(1 + J_{\beta,k}\big) + J_{\gamma,k}\big], \tag{19}$$
where $J_{k-1}$ is the number of components of the PHD at time step $k-1$. In formula (19), the component number increases in $\mathcal{O}(J_{k-1}|Z_k|)$. In particular, the component number mostly increases by the factor $(1 + |Z_k|)$ at the update step. Indeed, the number of Gaussian components increases without bound, so that the computation of the Gaussian mixture PHD filter is intractable after several time steps. Therefore, it is necessary to reduce the number of components to make the Gaussian mixture PHD filter feasible. The goal of this paper is to prune the Gaussian mixture components to reduce the component number in the Gaussian mixture PHD filter.
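The growth described by formula (19) compounds quickly, as the short sketch below shows; the values of $J_0$, $J_{\gamma,k}$, $J_{\beta,k}$, and $|Z_k|$ are illustrative, not the paper's simulation parameters.

```python
# Component count recursion of formula (19):
# J_k = (1 + |Z_k|) * (J_{k-1} * (1 + J_beta) + J_gamma).
J, J_gamma, J_beta, n_meas = 10, 2, 1, 20   # illustrative values only
for k in range(1, 4):
    J = (1 + n_meas) * (J * (1 + J_beta) + J_gamma)
    print(f"step {k}: J_k = {J:,}")
# step 1: J_k = 462; step 2: J_k = 19,446; step 3: J_k = 816,774
# -> unbounded growth, hence the need for component pruning.
```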

3. Iterative Pruning Algorithm

For simplicity, the time index $k$ is omitted, and let $N$ represent the component number. $W$ is the sum of the weights of the Gaussian components:
$$W = \sum_{i=1}^{N} w_i. \tag{20}$$
In the iterative pruning algorithm, the weights of the Gaussian components are first normalized by $W$ so that
$$\sum_{i=1}^{N} w_i = 1. \tag{21}$$
Let $\theta_i = \{w_i, m_i, P_i\}$ represent the parameters of the $i$th Gaussian component, where $m_i$ and $P_i$ are the mean and covariance, respectively. Then, the whole parameter set of the $N$ Gaussian components is $\Theta = \{\theta_1, \theta_2, \ldots, \theta_N\}$.

The entropy distribution of the mixture weights is adopted as the prior of $\Theta$:
$$p(\Theta) \propto e^{-H}, \qquad H = -\sum_{i=1}^{N} w_i \ln w_i, \tag{22}$$
where $H$ is the entropy measure [46, 47]. The goal of this choice of prior distribution, which depends only on the mixture weights, is to reduce the mixture components by adjusting the mixture weights during the iteration procedure. If we define the log-likelihood of the measurements $Z$ given the mixture parameters as
$$\ln p(Z|\Theta) = \sum_{z\in Z} \ln \sum_{i=1}^{N} w_i\, p_i(z|\theta_i), \tag{23}$$
where $p_i(z|\theta_i)$ is the single target likelihood in the $i$th component, then the MAP estimate of $\Theta$ is
$$\hat{\Theta} = \arg\max_{\Theta}\, \big[\ln p(Z|\Theta) + \ln p(\Theta)\big]. \tag{24}$$
For the mixture weight $w_i$, the MAP estimate can be computed by setting the derivative of the log-posterior to zero:
$$\frac{\partial}{\partial w_i}\big[\ln p(Z|\Theta) + \ln p(\Theta)\big] = 0. \tag{25}$$
The MAP estimate of $w_i$ is computed by maximizing the log-posterior under the constraint (21):
$$\hat{w} = \arg\max_{w}\, \Big[\ln p(Z|\Theta) + \ln p(\Theta) + \lambda\Big(\sum_{i=1}^{N} w_i - 1\Big)\Big], \tag{26}$$
where $\lambda$ is the Lagrange multiplier. Substituting formulas (22) and (23) into formula (26) gives
$$\frac{1}{w_i}\sum_{z\in Z} R_{iz} + \ln w_i + 1 + \lambda = 0, \tag{27}$$
where $R_{iz}$ represents the membership, that is, the posterior probability that measurement $z$ is from the $i$th mixture component:
$$R_{iz} = \frac{w_i\, p_i(z|\theta_i)}{\sum_{j=1}^{N} w_j\, p_j(z|\theta_j)}. \tag{28}$$
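As a small sketch of how the membership (28) and the log-posterior objective in (24) can be evaluated: `likelihoods` below is an illustrative name for the per-component single target likelihood functions, which are assumed given.

```python
# Membership (28) and log-posterior (23) + (22), as a sketch.
import numpy as np

def memberships(w, likelihoods, Z):
    """R[i, z] of formula (28); w: (N,) weights, likelihoods: N callables."""
    L = np.array([[p(z) for z in Z] for p in likelihoods])  # N x |Z|
    mix = w[:, None] * L
    return mix / mix.sum(axis=0, keepdims=True)

def log_posterior(w, likelihoods, Z):
    """ln p(Z|Theta) of (23) plus ln p(Theta) of (22), up to a constant."""
    L = np.array([[p(z) for z in Z] for p in likelihoods])
    loglik = np.log((w[:, None] * L).sum(axis=0)).sum()     # formula (23)
    wp = w[w > 0]                                            # pruned weights excluded
    return loglik + np.sum(wp * np.log(wp))                  # entropy prior term
```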

Formula (27) is a simultaneous transcendental equation in the weights $w_i$. We solve it for the $w_i$ using the Lambert $W$ function [48], an inverse mapping satisfying $W(y)e^{W(y)} = y$ and therefore $\ln W(y) + W(y) = \ln y$. The Lambert $W$ function of complex $z$ is defined by the equation
$$W(z)\, e^{W(z)} = z, \tag{29}$$
where $e^{(\cdot)}$ is the exponential function. The Lambert $W$ function is a multivalued function, defined in general for complex $z$ with $W(z)$ assumed complex. If $z$ is real and $z < -1/e$, then $W(z)$ is multivalued complex. If $z$ is real and $-1/e \le z < 0$, $W(z)$ has two possible real values. If $z$ is real and $z \ge 0$, $W(z)$ has one real value. Then, taking the logarithm of both sides of formula (29) gives, for the Lambert $W$ function,
$$\ln W(z) + W(z) = \ln z. \tag{30}$$

In formula (27), it is assumed that the sum of the memberships of the $i$th component is denoted by
$$s_i = \sum_{z\in Z} R_{iz}. \tag{31}$$
Consequently, formula (27) is
$$\frac{s_i}{w_i} + \ln w_i + 1 + \lambda = 0. \tag{32}$$
Comparing the Lambert identity (30) to formula (32), (30) can be reduced to (32) by setting
$$\ln w_i = W\big({-s_i e^{1+\lambda}}\big) - (1 + \lambda). \tag{33}$$
Consequently, using the identity $e^{W(z)} = z / W(z)$ with $z = -s_i e^{1+\lambda}$,
$$w_i = -\frac{s_i}{W\big({-s_i e^{1+\lambda}}\big)}. \tag{34}$$
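Formula (34) can be evaluated with SciPy's `lambertw`. In the sketch below, note that since $s_i \ge 0$ the argument $-s_i e^{1+\lambda}$ is nonpositive; the choice of the $k = -1$ real branch, and taking the real part when the argument falls below $-1/e$, are assumptions of this sketch rather than details stated in the text. It is precisely in that regime that invalid (e.g., negative) weights can appear and trigger pruning.

```python
# Weight update of formula (34) via the Lambert W function.
# Branch choice (k=-1) and the real-part fallback are assumptions here.
import numpy as np
from scipy.special import lambertw

def update_weights(s, lam):
    """s: membership sums s_i of formula (31); lam: Lagrange multiplier."""
    arg = -s * np.exp(1.0 + lam)        # nonpositive, since s_i >= 0
    W = np.real(lambertw(arg, k=-1))    # real on [-1/e, 0); complex below -1/e
    return -s / W                       # formula (34); may come out negative
```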

Formula (27) and formula (34) constitute an iterative procedure for the MAP estimates of the weights: (1) given $\lambda$, the weights $w_i$ are calculated by formula (34); (2) the $w_i$ are normalized; (3) given the normalized $w_i$, $\lambda$ is computed by formula (27) (multiplying formula (27) by $w_i$ and summing over $i$ under the constraint (21) gives $\lambda = -1 - \sum_{i=1}^{N} s_i - \sum_{i=1}^{N} w_i \ln w_i$). The iteration procedure stops when the difference rate of the log-posterior is smaller than a given threshold.

At the normalization step of the iteration procedure, if a mixture weight becomes negative, the corresponding component is removed from the mixture by setting its weight to zero. The removed mixture component is no longer considered when the log-posterior is computed in the following iterations. The mixture weights of the surviving mixture components are normalized at the end of this step.

The entropy prior on the mixture weights takes effect during the iterative procedure. The mixture weights of components that contribute negligibly to the PHD become smaller and smaller from iteration to iteration, since the parameter estimates are driven in the low-entropy direction by the entropy distribution. The low-entropy tendency also promotes competition among mixture components with similar parameters, which can then be merged into one mixture component with a larger weight.

The mean $m_i$ and covariance $P_i$ of a mixture component with nonzero weight $w_i$ are updated by
$$m_i = \frac{\sum_{z\in Z} R_{iz}\, z}{\sum_{z\in Z} R_{iz}}, \tag{35}$$
$$P_i = \frac{\sum_{z\in Z} R_{iz}\, (z - m_i)(z - m_i)^{T}}{\sum_{z\in Z} R_{iz}}. \tag{36}$$

The main steps of iterative mixture component pruning algorithm are summarized in Algorithm 1.

(1)  normalize $w_i$, $i = 1, \ldots, N$, by formula (20).
(2)    $n \leftarrow 0$; initialize $\lambda$.
(3)    $n \leftarrow n + 1$.
(4)  for  $i = 1, \ldots, N$  do
(5)         for  $z \in Z$  do
(6)         compute $R_{iz}$ by formula (28).
(7)         end for
(8)  end for
(9)  for  $i = 1, \ldots, N$  do
(10)   compute $s_i$ by formula (31).
(11) end for
(12) for  $i = 1, \ldots, N$  do
(13)   compute $w_i$ by formula (34).
(14) end for
(15) for  $i = 1, \ldots, N$  do
(16)       if  $w_i < 0$  do
(17)       for  $z \in Z$  do
(18)        $p_i(z|\theta_i) \leftarrow 0$.
(19)        $R_{iz} \leftarrow 0$.
(20)        renormalize $R_{jz}$ over the remaining components $j$.
(21)       end for
(22)        $w_i \leftarrow 0$.
(23)        $s_i \leftarrow 0$.
(24)       else  do
(25)       compute $m_i$ by formula (35).
(26)       compute $P_i$ by formula (36).
(27)       end if
(28) end for
(29) normalize $w_i$;
(30) compute $\lambda$ by formula (27);
(31) if  the difference rate of the log-posterior is not smaller than the given threshold  do
(32)   goto step  3;
(33) end if.
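For concreteness, a compact end-to-end sketch of Algorithm 1 follows. It treats the components directly in measurement space so that formulas (35) and (36) apply verbatim, initializes $\lambda$ from formula (27) summed over $i$, and uses the $k = -1$ branch of the Lambert W function; these choices, together with all function and variable names, are assumptions of the sketch rather than details fixed by the paper.

```python
# Sketch of Algorithm 1. Components live in measurement space; branch
# choice, lambda initialization, and numerical guards are assumptions.
import numpy as np
from scipy.special import lambertw
from scipy.stats import multivariate_normal

def iterative_prune(w, m, P, Z, tol=1e-4, max_iter=100):
    """w: (N,) weights, m: (N,d) means, P: (N,d,d) covariances, Z: (|Z|,d)."""
    w = w / w.sum()                                       # step (1), formula (20)
    alive = w > 0
    lam = -1.0 - len(Z) - np.sum(w[alive] * np.log(w[alive]))  # init via (27)
    prev = -np.inf
    for _ in range(max_iter):                             # steps (3)-(32)
        L = np.zeros((len(w), len(Z)))                    # single target likelihoods
        for i in np.flatnonzero(alive):
            L[i] = multivariate_normal.pdf(Z, m[i], P[i])
        mix = w[:, None] * L
        R = mix / np.maximum(mix.sum(axis=0), 1e-300)     # formula (28)
        s = R.sum(axis=1)                                 # formula (31)
        with np.errstate(all="ignore"):
            w = -s / np.real(lambertw(-s * np.exp(1.0 + lam), k=-1))  # (34)
        dead = alive & ~(w > 0)                           # negative/invalid weights
        w[dead] = 0.0
        s[dead] = 0.0
        alive &= ~dead                                    # steps (16)-(23)
        for i in np.flatnonzero(alive):                   # steps (25)-(26)
            m[i] = R[i] @ Z / s[i]                        # formula (35)
            D = Z - m[i]
            P[i] = (R[i][:, None] * D).T @ D / s[i]       # formula (36)
        w = w / w.sum()                                   # step (29)
        wa = w[alive]
        lam = -1.0 - s.sum() - np.sum(wa * np.log(wa))    # step (30), via (27)
        logpost = np.sum(np.log(np.maximum(mix.sum(axis=0), 1e-300))) \
                  + np.sum(wa * np.log(wa))               # objective in (24)
        if np.isfinite(prev) and abs(logpost - prev) < tol * abs(prev):
            break                                         # stopping rule, step (31)
        prev = logpost
    return w, m, P
```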

4. Simulation Study

A two-dimensional scenario with unknown and time-varying target number is considered to test the proposed iterative mixture component pruning algorithm. The surveillance region is $[-1000, 1000] \times [-1000, 1000]$ (in meters). The target state consists of position and velocity, while the target measurement is the position. Each target moves according to the following dynamics:
$$x_k = \begin{bmatrix} I_2 & \Delta I_2 \\ 0_2 & I_2 \end{bmatrix} x_{k-1} + \begin{bmatrix} \frac{\Delta^2}{2} I_2 \\ \Delta I_2 \end{bmatrix} v_{k-1}, \tag{37}$$
where $x_k = [\,p_{x,k}\ \ p_{y,k}\ \ \dot{p}_{x,k}\ \ \dot{p}_{y,k}\,]^T$ is the target state, $[\,p_{x,k}\ \ p_{y,k}\,]^T$ is the target position, $[\,\dot{p}_{x,k}\ \ \dot{p}_{y,k}\,]^T$ is the target velocity at time step $k$, and $\Delta$ is the sampling period. The process noise $v_{k-1}$ is a zero-mean Gaussian white noise with standard deviation $\sigma_v$ (m/s$^2$). The survival probability is $p_S$. The number of targets is unknown and variable over all scans. New targets appear spontaneously according to a Poisson point process with PHD function $\gamma_k(x) = \sum_{j} w_{\gamma}^{(j)}\, \mathcal{N}\big(x; m_{\gamma}^{(j)}, P_{\gamma}\big)$, where $\mathcal{N}(x; m_{\gamma}^{(j)}, P_{\gamma})$ is the Gaussian component with mean $m_{\gamma}^{(j)}$ and covariance $P_{\gamma}$. The spawned PHD is $\beta_{k|k-1}(x|\zeta) = w_{\beta}\, \mathcal{N}(x; \zeta, Q_{\beta})$, where $Q_{\beta}$ is the spawning covariance.
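A short simulation sketch of the dynamics (37) follows; $\Delta$, $\sigma_v$, and the initial state are illustrative stand-ins, since the scenario's numeric values are not recoverable from the text.

```python
# Constant-velocity target dynamics of formula (37); illustrative values.
import numpy as np

Delta, sigma_v = 1.0, 5.0
F = np.block([[np.eye(2), Delta * np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])
G = np.vstack([0.5 * Delta**2 * np.eye(2), Delta * np.eye(2)])

rng = np.random.default_rng(0)
x = np.array([0.0, 0.0, 10.0, -5.0])   # [px, py, vx, vy]
track = []
for _ in range(10):
    x = F @ x + G @ (sigma_v * rng.standard_normal(2))  # noise on acceleration
    track.append(x)
```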

Each target is detected with probability $p_D$. The target-originated measurement model is
$$z_k = \begin{bmatrix} I_2 & 0_2 \end{bmatrix} x_k + \varepsilon_k, \tag{38}$$
where the measurement noise $\varepsilon_k$ is a zero-mean Gaussian white noise with standard deviation $\sigma_{\varepsilon}$ (m). Clutter is modelled as a Poisson random finite set with intensity
$$\kappa_k(z) = \lambda_c\, c(z), \tag{39}$$
where $\lambda_c$ is the average number of clutter measurements per scan and $c(z)$ is the probability distribution over the surveillance region. Here $c(\cdot)$ is a uniform distribution and $\lambda_c$ is assumed to be 50.
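A sketch of measurement-set generation under (38) and (39): $p_D$ and $\sigma_\varepsilon$ are illustrative values, since only $\lambda_c = 50$ is given in the text.

```python
# Detections with probability p_D plus uniform Poisson clutter.
import numpy as np

H = np.hstack([np.eye(2), np.zeros((2, 2))])
p_D, sigma_eps, lambda_c = 0.98, 10.0, 50   # p_D, sigma_eps illustrative
lo, hi = -1000.0, 1000.0                    # surveillance region bounds

def measurement_set(states, rng):
    Z = [H @ x + sigma_eps * rng.standard_normal(2)
         for x in states if rng.random() < p_D]          # formula (38)
    Z += list(rng.uniform(lo, hi, size=(rng.poisson(lambda_c), 2)))  # (39)
    return np.array(Z)
```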

The means of the Gaussian mixture components with mixture weights greater than 0.5 are chosen as the estimates of the multitarget states after the mixture reduction.

The tracking results in one Monte Carlo trial are presented in Figures 1 and 2. It can be seen from Figures 1 and 2 that the Gaussian mixture PHD filter with the proposed iterative mixture component pruning algorithm is able to detect the spontaneous and spawned targets and estimate the multiple target states.

The mixture components with weights larger than 0.0005 at the 86th time step before pruning operation in the above Monte Carlo simulation trial are presented in Figure 3. The mixture components with weights larger than 0.01 after pruning operation are presented in Figure 4. It is obvious that the mixture components with similar parameters describing the same PHD peak can be merged into one mixture component.

The typical mixture component pruning algorithm based on thresholds in [10] is adopted as the comparison algorithm. The thresholds in the typical mixture component pruning algorithm are a weight pruning threshold, a mixture component merging threshold of 4, and a maximum allowable mixture component number of 100. We evaluate the tracking performance of the proposed algorithm against the typical algorithm by the Wasserstein distance [49]. The Wasserstein distance is defined as
$$d_p\big(\hat{X}, X\big) = \min_{C}\ \bigg(\sum_{i=1}^{|\hat{X}|} \sum_{j=1}^{|X|} C_{ij}\, \big\|\hat{x}_i - x_j\big\|^{p}\bigg)^{1/p}, \tag{40}$$
where $\hat{X} = \{\hat{x}_1, \ldots, \hat{x}_{|\hat{X}|}\}$ is the estimate of the multitarget state set and $X = \{x_1, \ldots, x_{|X|}\}$ is the true multitarget state set. The minimum is taken over the set of all transportation matrices $C$ (a transportation matrix is one whose entries satisfy $C_{ij} \ge 0$, $\sum_{j=1}^{|X|} C_{ij} = 1/|\hat{X}|$, and $\sum_{i=1}^{|\hat{X}|} C_{ij} = 1/|X|$). This distance is not defined if either $\hat{X}$ or $X$ is empty. Figure 5 shows the mean Wasserstein distances of the two algorithms over 100 simulation trials. Process noise, measurement noise, and clutter are independently generated at each trial. It can be seen from Figure 5 that the proposed iterative mixture component pruning algorithm is superior to the typical algorithm at most time steps. The proposed iterative mixture component pruning algorithm is worse than the typical algorithm when a spawned target is generated and two or more targets are close to each other. Two PHD peaks of two close targets may be regarded as one PHD peak in the proposed algorithm as a result of the low-entropy tendency of the entropy distribution. Then, some targets are not detected.
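The transportation problem in the definition above can be solved as a small linear program; the sketch below uses SciPy's `linprog` and assumes $p = 2$, since the order used in the paper is not recoverable from the text.

```python
# Wasserstein multitarget distance of formula (40) via linear programming.
import numpy as np
from scipy.optimize import linprog

def wasserstein(X_hat, X, p=2):
    n, m = len(X_hat), len(X)
    if n == 0 or m == 0:
        raise ValueError("distance undefined for empty sets")
    D = np.linalg.norm(X_hat[:, None, :] - X[None, :, :], axis=2) ** p
    # Constraints on C (row-major flattened): rows sum to 1/n, columns to 1/m.
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0      # row-sum constraints
    for j in range(m):
        A_eq[n + j, j::m] = 1.0               # column-sum constraints
    b_eq = np.concatenate([np.full(n, 1.0 / n), np.full(m, 1.0 / m)])
    res = linprog(D.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.fun ** (1.0 / p)
```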

Figure 6 shows the estimates of the target number of the two algorithms. It is obvious that the target number estimates of the proposed algorithm are closer to the ground truth than those of the typical algorithm at most time steps.

Figure 7 shows the mean component numbers of the two algorithms after the component pruning operations over 100 simulation trials. The component numbers of the proposed algorithm are smaller than those of the typical algorithm.

The case of a low signal-to-noise ratio (SNR) is also considered for further comparison of the two algorithms. The average clutter number $\lambda_c$ is assumed to be 80 in this low SNR case. The corresponding Wasserstein distances, target number estimates, and component numbers are presented in Figures 8, 9, and 10. It can be seen that the proposed iterative mixture component pruning algorithm is also superior to the typical mixture component pruning algorithm based on thresholds in the low SNR case.

5. Conclusion

An iterative mixture component pruning algorithm is proposed for the Gaussian mixture PHD filter. The entropy distribution of the mixture weights is used as the prior distribution of the mixture parameters. The update formula of the mixture weights is derived by the Lagrange multiplier method and the Lambert W function. When a mixture weight becomes negative during the iteration procedure, the corresponding mixture component is pruned by setting the weight to zero. Simulation results show that the proposed iterative mixture component pruning algorithm is superior to the typical mixture component pruning algorithm based on thresholds at most time steps.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (Grant no. 61304261), the Senior Professionals Scientific Research Foundation of Jiangsu University (Grant no. 12JDG076), and a Project Funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD).