Discrete Dynamics in Nature and Society

Volume 2016, Article ID 1082837, 10 pages

http://dx.doi.org/10.1155/2016/1082837

## An Improved Gaussian Mixture CKF Algorithm under Non-Gaussian Observation Noise

College of Automation, Harbin Engineering University, Harbin 150001, China

Received 14 March 2016; Revised 10 June 2016; Accepted 16 June 2016

Academic Editor: Juan R. Torregrosa

Copyright © 2016 Hongjian Wang and Cun Li. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

To address the problem that the weights of the Gaussian components in a Gaussian mixture filter remain constant during the time-update stage, an improved Gaussian Mixture Cubature Kalman Filter (IGMCKF) algorithm is designed by combining a Gaussian mixture density model with a CKF for target tracking. The algorithm adopts a Gaussian mixture density function to approximate the observation noise. An observation model based on the Mini RadaScan sensor for offshore target tracking is introduced, and the observation noise is modelled as glint noise. The Gaussian components are predicted and updated using the CKF. A cost function based on the integral square difference is designed to update the weights of the Gaussian components during the time-update stage. Comparison experiments with a constant-angular-velocity model and a maneuvering model against different algorithms show that the proposed algorithm offers fast tracking response and high estimation precision, while its computation time satisfies real-time target tracking requirements.

#### 1. Introduction

Because nonlinear problems are ubiquitous, the principles and methods of nonlinear filtering are widely applied to nonlinear systems. One such application is the estimation of target states.

For nonlinear systems, the practical solutions are approximate methods, such as approximating the probability density of the system state by a Gaussian density, which is called Gaussian filtering. Gaussian filtering can be grouped into three classes according to the approximation method. The first is function approximation, which approximates the nonlinear system function by a low-order expansion, such as the extended Kalman filter (EKF) [1] and its improved variants (adaptive fading EKF [2], strong tracking EKF [3], and central difference Kalman filter (CDKF) [4, 5]). The EKF is a suboptimal filter that demands not only accurate state and observation models but also Gaussian observation noise. When the system model is highly nonlinear, the Gaussian approximation is poor and the estimates can diverge. The CDKF, based on Stirling's polynomial interpolation, achieves better accuracy, efficiency, and stability than the EKF for nonlinear problems, but at a greater computational burden. The second class is deterministic sampling approximation, which approximates the system state and probability density using deterministic sample points, such as the Unscented Kalman Filter (UKF) [6, 7] and its improved variants [8, 9]. In principle, the UKF is simple and easy to implement, as it does not require the calculation of Jacobians at each time step. However, the UKF covariance update uses sigma points with negative weights, which makes it difficult to preserve positive definiteness during the iterative filtering process. The third class is quadrature approximation, which evaluates the multidimensional integrals of the Bayesian recursion using numerical techniques, such as the Gauss-Hermite Kalman Filter (GHKF) [10] and the Cubature Kalman Filter (CKF) [11–13]. The GHKF obtains the statistical characteristics after the nonlinear transformation by Gauss-Hermite numerical integration, which yields high accuracy; its shortcoming is that it may not be suitable for high-dimensional nonlinear filtering problems, since its cost grows exponentially with dimension. The CKF is derived from the spherical-radial cubature rule and solves the Bayesian filtering integrals using equally weighted cubature points. High-degree CKFs offer high accuracy and stability, but at a high computational cost.

In practical target tracking applications, the process or observation noise does not have an ideal Gaussian density, and the various Gaussian approximate filtering algorithms derived under Gaussian noise assumptions fail to perform well because of the mismatched model. Besides Gaussian filters, other solution approaches for the nonlinear estimation problem include the particle filter (PF) [14], based on Monte Carlo numerical integration and sequential importance sampling, and the Gaussian sum filter [15, 16], based on a mixture of several Gaussian components. Although the PF does not require any assumption about the probability density function, it inevitably faces enormous computational demands owing to the large number of stochastic sampling particles needed to achieve the required estimation accuracy. The PF is also unable to avoid the disadvantages of particle degeneracy and sample impoverishment; hence, much PF research focuses on mitigating these two problems [17, 18]. A Gaussian sum CKF has been proposed for bearings-only tracking problems in the literature [19], and this filter displays performance comparable to the particle filter. An improved Gaussian mixture filter algorithm has been proposed for highly nonlinear passive tracking systems in the literature [20], where a finite Gaussian mixture model approximates the posterior density of the state, process noise, and measurement noise. However, in all of these methods, the weights of the Gaussian components are kept constant while propagating the uncertainty through the nonlinear system and are updated only in the measurement-update stage. This assumption is valid if the system is only marginally nonlinear or if measurements are precise and available very frequently; it does not hold in the general nonlinear case. Terejanu et al. [21, 22] proposed a new Gaussian sum filter that adapts the weights of the Gaussian mixture model in both the time update and the measurement update, which yields a better approximation of the posterior probability density function.

In this paper, we focus on the offshore target tracking problem and design an improved Gaussian mixture CKF based on the GMCKF algorithm. First, the target tracking problem is formulated, and the sensor observation model with glint noise is introduced. Second, the derivation of the IGMCKF based on the integral square difference and the Gaussian mixture density is presented. Finally, comparison experiments with a constant-angular-velocity model and a maneuvering model against different algorithms are presented.

#### 2. Problem Formulation

Consider the offshore target tracking problem, with the motion model and observation model

$$x_k = f\left(x_{k-1}\right) + w_{k-1}, \qquad z_k = h\left(x_k\right) + v_k, \tag{1}$$

where $x_k$ denotes the target states, including the position and velocity of the target, and $w_{k-1}$ denotes the state process noise, which is assumed to be Gaussian with covariance $Q$. $v_k$ denotes the observation noise, which is assumed to be glint noise because of the complicated offshore environment and the characteristics of the sensor. Glint noise is composed of Gaussian noise and Laplacian noise and can be approximated by two Gaussian components with different covariances:

$$p\left(v_k\right) = \left(1 - \varepsilon\right)\mathcal{N}\left(v_k; 0, R_1\right) + \varepsilon\,\mathcal{N}\left(v_k; 0, R_2\right), \tag{2}$$

where $\mathcal{N}\left(v; 0, R_i\right)$ denotes Gaussian noise with zero mean and covariance $R_i$, and $\varepsilon$ is the glint frequency factor.

Consider that the position of the sensor is $\left(x_s, y_s\right)$; the sensor used in this paper is the Mini RadaScan. The observation of the target, consisting of distance and bearing, is

$$z_k = \begin{bmatrix} \sqrt{\left(x_k - x_s\right)^2 + \left(y_k - y_s\right)^2} \\[4pt] \arctan\dfrac{y_k - y_s}{x_k - x_s} \end{bmatrix} + v_k. \tag{3}$$
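The glint-noise observation model above can be sketched in a few lines of code. This is a minimal illustration, not the paper's implementation; the sensor position, the two covariances, and the glint frequency factor below are illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)

def glint_noise(eps, R1, R2):
    """Sample glint noise, a two-component Gaussian mixture:
    with probability 1-eps draw from N(0, R1), else from N(0, R2)."""
    R = R2 if rng.random() < eps else R1
    return rng.multivariate_normal(np.zeros(len(R)), R)

def observe(target_xy, sensor_xy, eps, R1, R2):
    """Range/bearing measurement of the target corrupted by glint noise."""
    dx = target_xy[0] - sensor_xy[0]
    dy = target_xy[1] - sensor_xy[1]
    z = np.array([np.hypot(dx, dy), np.arctan2(dy, dx)])
    return z + glint_noise(eps, R1, R2)

# Illustrative parameters (not the paper's): a small "thermal" covariance R1
# and a heavy-tailed glint covariance R2, with glint frequency factor eps.
R1 = np.diag([1.0, 1e-4])
R2 = np.diag([100.0, 1e-2])
z = observe(np.array([300.0, 400.0]), np.array([0.0, 0.0]), 0.1, R1, R2)
print(z)  # noisy (range, bearing) around the true (500, atan2(400, 300))
```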

#### 3. Design of Improved Gaussian Mixture Cubature Kalman Filter Algorithm

Lemma 1. *The probability density function of an $n$-dimensional random vector $x$ can be approximated by*

$$p\left(x\right) \approx \sum_{i=1}^{N} w_i\,\mathcal{N}\left(x; \mu_i, P_i\right), \qquad \sum_{i=1}^{N} w_i = 1, \quad w_i \ge 0, \tag{4}$$

*and the approximation error can be made arbitrarily small provided that the number of components $N$ is large enough. Here $w_i$ denotes the weight of the $i$th component, and $\mathcal{N}\left(x; \mu_i, P_i\right)$ denotes the Gaussian density with mean $\mu_i$ and covariance $P_i$.*
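The mixture density in Lemma 1 is straightforward to evaluate numerically. A minimal sketch, with toy weights, means, and covariances chosen purely for illustration:

```python
import numpy as np
from scipy.stats import multivariate_normal

def gmm_pdf(x, weights, means, covs):
    """Evaluate the Gaussian mixture density sum_i w_i * N(x; mu_i, P_i)."""
    return sum(w * multivariate_normal.pdf(x, mean=m, cov=P)
               for w, m, P in zip(weights, means, covs))

# Two-component 1-D mixture as a toy instance of Lemma 1.
weights = [0.7, 0.3]
means = [0.0, 4.0]
covs = [1.0, 2.0]
print(gmm_pdf(0.0, weights, means, covs))  # ≈ 0.7*N(0;0,1) + 0.3*N(0;4,2)
```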

##### 3.1. Time Update

Consider the discrete nonlinear system (1) with Gaussian-mixture noise, and assume that the prior and posterior densities can be represented by Gaussian sums. The process noise and observation noise are approximated as

$$p\left(w_k\right) \approx \sum_{g=1}^{G} \alpha_g\,\mathcal{N}\left(w_k; \mu_g^{w}, Q_g\right), \qquad p\left(v_k\right) \approx \sum_{l=1}^{L} \beta_l\,\mathcal{N}\left(v_k; \mu_l^{v}, R_l\right),$$

where $\alpha_g$ denotes the weight of the $g$th component of the process noise and $\beta_l$ is the weight of the $l$th component of the observation noise. Then assume that the posterior density at time $k-1$ is approximated by a Gaussian mixture model:

$$p\left(x_{k-1} \mid Z_{k-1}\right) \approx \sum_{i=1}^{N} w_{k-1}^{i}\,\mathcal{N}\left(x_{k-1}; \hat{x}_{k-1}^{i}, P_{k-1}^{i}\right).$$

The transition density follows from the state equation:

$$p\left(x_k \mid x_{k-1}\right) = \sum_{g=1}^{G} \alpha_g\,\mathcal{N}\left(x_k; f\left(x_{k-1}\right) + \mu_g^{w}, Q_g\right).$$

According to the Bayesian formula, the predicted state density can then be approximated by a Gaussian mixture model:

$$p\left(x_k \mid Z_{k-1}\right) \approx \sum_{i=1}^{N}\sum_{g=1}^{G} w_{k-1}^{i}\,\alpha_g\,\mathcal{N}\left(x_k; \hat{x}_{k|k-1}^{ig}, P_{k|k-1}^{ig}\right),$$

where $x_k \in \mathbb{R}^{n}$, $\mathbb{R}$ is the set of real numbers, and $n$ is the dimension of the state vector. $N$ and $G$ are the numbers of Gaussian components of the system state and process noise, respectively. $\hat{x}_{k|k-1}^{ig}$ and $P_{k|k-1}^{ig}$ are calculated by the time-update steps of the CKF:

$$P_{k-1}^{i} = S_{k-1}^{i}\left(S_{k-1}^{i}\right)^{\mathsf T}, \qquad X_{j}^{ig} = S_{k-1}^{i}\,\xi_j + \hat{x}_{k-1}^{i}, \qquad \xi_j = \sqrt{\frac{m}{2}}\,[1]_j,$$

$$X_{j}^{*,ig} = f\left(X_{j}^{ig}\right) + \mu_g^{w},$$

$$\hat{x}_{k|k-1}^{ig} = \frac{1}{m}\sum_{j=1}^{m} X_{j}^{*,ig}, \qquad P_{k|k-1}^{ig} = \frac{1}{m}\sum_{j=1}^{m} X_{j}^{*,ig}\left(X_{j}^{*,ig}\right)^{\mathsf T} - \hat{x}_{k|k-1}^{ig}\left(\hat{x}_{k|k-1}^{ig}\right)^{\mathsf T} + Q_g,$$

where $m = 2n$ is the number of cubature points and $[1]_j$ is the $j$th column of the generator set $[\,I_n, -I_n\,]$.
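The per-component CKF time-update steps above can be sketched as follows. This is a minimal illustration of the third-degree spherical-radial cubature rule for a single Gaussian component; the toy transition model at the bottom is an assumption for demonstration, not the paper's model.

```python
import numpy as np

def ckf_time_update(x, P, f, Q):
    """Third-degree spherical-radial CKF time update for one Gaussian
    component: propagate mean x and covariance P through transition f."""
    n = len(x)
    m = 2 * n                          # number of cubature points
    S = np.linalg.cholesky(P)          # square-root factor, P = S S^T
    # Cubature points x + S*xi_j with xi_j = sqrt(n)*(+/- e_j), weight 1/m.
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])
    X = x[:, None] + S @ xi
    Xs = np.apply_along_axis(f, 0, X)  # propagated points f(X_j)
    x_pred = Xs.mean(axis=1)
    P_pred = (Xs @ Xs.T) / m - np.outer(x_pred, x_pred) + Q
    return x_pred, P_pred

# Toy example: 2-state (position, velocity) constant-velocity transition.
dt = 1.0
f = lambda s: np.array([s[0] + dt * s[1], s[1]])
x_pred, P_pred = ckf_time_update(np.array([0.0, 1.0]),
                                 np.eye(2), f, 0.01 * np.eye(2))
print(x_pred)  # [1.0, 1.0] -- exact here because f is linear
```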

##### 3.2. Adaptive Weight Update

Consider the nonlinear system (1) with probability density function of the initial conditions $p\left(x_0\right)$. According to formula (4), the Gaussian mixture approximation of the predicted probability density function is

$$\hat{p}\left(x_k \mid Z_{k-1}\right) = \sum_{i=1}^{N} w_{k|k-1}^{i}\,\mathcal{N}\left(x_k; \hat{x}_{k|k-1}^{i}, P_{k|k-1}^{i}\right).$$

The true probability density function of the system state follows from the Chapman-Kolmogorov equation:

$$p\left(x_k \mid Z_{k-1}\right) = \int p\left(x_k \mid x_{k-1}\right) p\left(x_{k-1} \mid Z_{k-1}\right) dx_{k-1} \approx \sum_{j=1}^{N} w_{k-1}^{j} \int p\left(x_k \mid x_{k-1}\right) \mathcal{N}\left(x_{k-1}; \hat{x}_{k-1}^{j}, P_{k-1}^{j}\right) dx_{k-1}. \tag{16}$$

Mean-square optimal new weights can be obtained by minimizing the integral square difference between the true probability density and its approximation in the least-squares sense:

$$w_{k|k-1} = \arg\min_{w} \int \left[\, p\left(x_k \mid Z_{k-1}\right) - \hat{p}\left(x_k \mid Z_{k-1}; w\right) \,\right]^{2} dx_k, \tag{17}$$

where $w = \left[w^{1}, \ldots, w^{N}\right]^{\mathsf T}$ denotes the vector of the weights of the Gaussian components at time $k$. In order to solve formula (17), the cost function is expanded as follows [23]:

$$J\left(w\right) = J_{pp} - 2\,w^{\mathsf T} z + w^{\mathsf T} M w,$$

where the first term, $J_{pp} = \int p\left(x_k \mid Z_{k-1}\right)^{2} dx_k$, represents the self-likeness of the true probability density function of the system state; it does not depend on $w$, so it is not needed in the optimization process and is used only to provide an overall magnitude of the uncertainty propagation error [24]. The second term, with $z_i = \int p\left(x_k \mid Z_{k-1}\right)\mathcal{N}\left(x_k; \hat{x}_{k|k-1}^{i}, P_{k|k-1}^{i}\right) dx_k$, represents the cross-likeness of the true probability density and the approximation. The last term, with $M_{ij} = \int \mathcal{N}\left(x_k; \hat{x}_{k|k-1}^{i}, P_{k|k-1}^{i}\right)\mathcal{N}\left(x_k; \hat{x}_{k|k-1}^{j}, P_{k|k-1}^{j}\right) dx_k$, is the self-likeness of the approximation.

Formula (16) is based on the assumption that the Gaussian mixture approximation equals the true probability density function at time $k-1$; namely, $\hat{p}\left(x_{k-1} \mid Z_{k-1}\right) = p\left(x_{k-1} \mid Z_{k-1}\right)$.

Now the derivation of the terms of the cost function is given. Because the product of two Gaussian densities integrates to a Gaussian evaluation, the elements of the self-likeness matrix $M$ are

$$M_{ij} = \mathcal{N}\!\left(\hat{x}_{k|k-1}^{i};\ \hat{x}_{k|k-1}^{j},\ P_{k|k-1}^{i} + P_{k|k-1}^{j}\right).$$

Similarly, substituting (16) into the cross-likeness term and approximating the remaining expectation with the cubature points, the elements of the vector $z$ are

$$z_i \approx \sum_{j=1}^{N} w_{k-1}^{j}\,\frac{1}{m}\sum_{s=1}^{m} \mathcal{N}\!\left(f\left(X_{s}^{j}\right);\ \hat{x}_{k|k-1}^{i},\ P_{k|k-1}^{i} + Q\right),$$

where $X_{s}^{j}$ are the cubature points generated from $\mathcal{N}\left(x_{k-1}; \hat{x}_{k-1}^{j}, P_{k-1}^{j}\right)$.
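The Gaussian product integral used for the elements of $M$ can be checked numerically. A small sketch with arbitrary illustrative means and variances:

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn
from scipy.integrate import quad

# Verify the product integral of two 1-D Gaussians:
#   integral N(x; m1, p1) * N(x; m2, p2) dx = N(m1; m2, p1 + p2),
# which gives the closed form for the matrix elements M_ij.
m1, p1 = 0.0, 1.0
m2, p2 = 2.0, 0.5
lhs, _ = quad(lambda x: mvn.pdf(x, m1, p1) * mvn.pdf(x, m2, p2), -20, 20)
rhs = mvn.pdf(m1, m2, p1 + p2)
print(lhs, rhs)  # the two values agree
```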

According to the above relations, the final formulation of the weight update is the constrained quadratic program

$$w_{k|k-1} = \arg\min_{w}\ \frac{1}{2}\,w^{\mathsf T} M w - w^{\mathsf T} z \quad \text{subject to} \quad \sum_{i=1}^{N} w^{i} = 1, \quad w^{i} \ge 0; \tag{21}$$

the result of (21) gives the weights of the components after the time update.
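The constrained quadratic program for the weights can be solved with any off-the-shelf optimizer. A minimal sketch using SciPy's SLSQP solver; the matrix $M$ and vector $z$ below are made-up toy values, not quantities computed from the paper's models:

```python
import numpy as np
from scipy.optimize import minimize

def update_weights(M, z, w0):
    """Weight update as a constrained quadratic program:
    minimize 0.5 w'Mw - w'z  subject to  sum(w) = 1, w >= 0."""
    cons = ({"type": "eq", "fun": lambda w: np.sum(w) - 1.0},)
    bounds = [(0.0, 1.0)] * len(w0)
    res = minimize(lambda w: 0.5 * w @ M @ w - w @ z,
                   w0, jac=lambda w: M @ w - z,
                   bounds=bounds, constraints=cons, method="SLSQP")
    return res.x

# Toy 2-component example with illustrative M (Gram matrix) and z.
M = np.array([[1.0, 0.2], [0.2, 1.0]])
z = np.array([0.9, 0.3])
w = update_weights(M, z, np.array([0.5, 0.5]))
print(w)  # sums to 1; more mass on the better-matching component
```

For this toy problem the analytic minimizer on the simplex is $w = [0.875, 0.125]$, which the solver recovers.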

##### 3.3. Measurement Update

Consider the observation equation; the likelihood density can be approximated as

$$p\left(z_k \mid x_k\right) \approx \sum_{l=1}^{L} \beta_l\,\mathcal{N}\!\left(z_k;\ h\left(x_k\right) + \mu_l^{v},\ R_l\right).$$

Then the posterior density is

$$p\left(x_k \mid Z_k\right) \approx \sum_{i}\sum_{l} w_k^{il}\,\mathcal{N}\left(x_k; \hat{x}_k^{il}, P_k^{il}\right),$$

where

$$w_k^{il} = \frac{w_{k|k-1}^{i}\,\beta_l\,\mathcal{N}\left(z_k; \hat{z}_{k|k-1}^{il}, P_{zz}^{il}\right)}{\sum_{i}\sum_{l} w_{k|k-1}^{i}\,\beta_l\,\mathcal{N}\left(z_k; \hat{z}_{k|k-1}^{il}, P_{zz}^{il}\right)},$$

and the term $w_{k|k-1}^{i}$ is calculated by the adaptive weight update (21). The state estimate of each Gaussian component is

$$\hat{x}_k^{il} = \hat{x}_{k|k-1}^{i} + K_k^{il}\left(z_k - \hat{z}_{k|k-1}^{il}\right), \qquad P_k^{il} = P_{k|k-1}^{i} - K_k^{il} P_{zz}^{il}\left(K_k^{il}\right)^{\mathsf T}.$$

The Kalman gain is

$$K_k^{il} = P_{xz}^{il}\left(P_{zz}^{il}\right)^{-1}.$$

The innovation covariance and cross covariance are, respectively,

$$P_{zz}^{il} = \frac{1}{m}\sum_{j=1}^{m} Z_j^{il}\left(Z_j^{il}\right)^{\mathsf T} - \hat{z}_{k|k-1}^{il}\left(\hat{z}_{k|k-1}^{il}\right)^{\mathsf T} + R_l, \qquad P_{xz}^{il} = \frac{1}{m}\sum_{j=1}^{m} X_j^{i}\left(Z_j^{il}\right)^{\mathsf T} - \hat{x}_{k|k-1}^{i}\left(\hat{z}_{k|k-1}^{il}\right)^{\mathsf T},$$

where $Z_j^{il} = h\left(X_j^{i}\right) + \mu_l^{v}$ and $\hat{z}_{k|k-1}^{il} = \frac{1}{m}\sum_{j=1}^{m} Z_j^{il}$. Hence, the output of the filter is

$$\hat{x}_k = \sum_{i}\sum_{l} w_k^{il}\,\hat{x}_k^{il}, \qquad P_k = \sum_{i}\sum_{l} w_k^{il}\left[ P_k^{il} + \left(\hat{x}_k^{il} - \hat{x}_k\right)\left(\hat{x}_k^{il} - \hat{x}_k\right)^{\mathsf T} \right], \tag{31}$$

where the index $i$ runs over the $N \cdot G$ predicted components and $l$ over the $L$ observation-noise components.
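The per-component CKF measurement update above can be sketched as follows; the returned measurement likelihood is the factor used to re-weight the mixture components. The toy observation model at the bottom is an illustrative assumption, not the paper's Mini RadaScan model.

```python
import numpy as np

def ckf_measurement_update(x, P, z, h, R):
    """Third-degree CKF measurement update for one Gaussian component.
    Returns the updated mean/covariance and the Gaussian likelihood
    N(z; z_pred, Pzz) used to re-weight the mixture components."""
    n = len(x)
    m = 2 * n
    S = np.linalg.cholesky(P)
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])
    X = x[:, None] + S @ xi                 # cubature points
    Z = np.apply_along_axis(h, 0, X)        # propagated measurements h(X_j)
    z_pred = Z.mean(axis=1)
    Pzz = (Z @ Z.T) / m - np.outer(z_pred, z_pred) + R
    Pxz = (X @ Z.T) / m - np.outer(x, z_pred)
    K = Pxz @ np.linalg.inv(Pzz)            # Kalman gain
    x_new = x + K @ (z - z_pred)
    P_new = P - K @ Pzz @ K.T
    nu = z - z_pred                         # innovation
    d = len(z)
    lik = np.exp(-0.5 * nu @ np.linalg.solve(Pzz, nu)) / \
          np.sqrt((2 * np.pi) ** d * np.linalg.det(Pzz))
    return x_new, P_new, lik

# Toy example: directly observe the first state of a 2-state system.
h = lambda s: s[:1]
x_new, P_new, lik = ckf_measurement_update(
    np.array([0.0, 0.0]), np.eye(2), np.array([1.0]), h, np.array([[0.5]]))
print(x_new)  # pulled toward the measurement in the observed coordinate
```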

##### 3.4. Merging the Gaussian Components

As seen from (31), the number of Gaussian components of the posterior is $N \cdot G \cdot L$. If $N \cdot G \cdot L > N$, some of the Gaussian components must be merged after the measurement-update stage so that the total number of Gaussian components carried to the next time index is reduced to $N$.

There is no harm in re-indexing the Gaussian components as $1, \ldots, N_s$ (with $N_s = N \cdot G \cdot L$) according to their weight values in descending order, so that the Gaussian component with the largest weight has index $1$. The Gaussian components with indices $1, \ldots, N-1$ are selected first; the remaining components, with indices $N, \ldots, N_s$, are merged into a single component. The weight of the merged component is

$$\tilde{w} = \sum_{i=N}^{N_s} w_k^{i},$$

and its mean and covariance are, respectively,

$$\tilde{\mu} = \sum_{i=N}^{N_s} \bar{w}_i\,\hat{x}_k^{i}, \qquad \tilde{P} = \sum_{i=N}^{N_s} \bar{w}_i\left[ P_k^{i} + \left(\hat{x}_k^{i} - \tilde{\mu}\right)\left(\hat{x}_k^{i} - \tilde{\mu}\right)^{\mathsf T} \right],$$

where $\bar{w}_i = w_k^{i} / \tilde{w}$ are the normalised weights of the Gaussian components to be merged.
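The merging step above is moment matching: the merged Gaussian preserves the total weight, mean, and covariance (including the spread-of-means term) of the components it replaces. A minimal sketch with toy 1-D components:

```python
import numpy as np

def merge_components(weights, means, covs):
    """Merge several Gaussian components into one by moment matching.
    Returns the total weight, merged mean, and merged covariance
    (which includes the spread-of-means correction term)."""
    w = np.asarray(weights, dtype=float)
    w_total = w.sum()
    wn = w / w_total                                 # normalised weights
    mu = sum(wi * m for wi, m in zip(wn, means))
    P = sum(wi * (C + np.outer(m - mu, m - mu))
            for wi, m, C in zip(wn, means, covs))
    return w_total, mu, P

# Merge two 1-D components as a toy example.
w, mu, P = merge_components([0.1, 0.3],
                            [np.array([0.0]), np.array([4.0])],
                            [np.eye(1), np.eye(1)])
print(w, mu, P)  # weight 0.4; the covariance gains a spread-of-means term
```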

#### 4. Simulation

##### 4.1. Simulation 1

###### 4.1.1. Simulation Parameters

The proposed algorithm is evaluated by Monte Carlo simulation with  runs; the simulation time is , the simulation step size is , the covariance of the observation noise in (2) is , , , and the glint frequency factor is ; the other filter parameters are set as follows: The remaining parameters are given in Table 1.