Journal of Sensors

Volume 2019, Article ID 1216838, 19 pages

https://doi.org/10.1155/2019/1216838

## Randomly Weighted CKF for Multisensor Integrated Systems

^{1}National Key Laboratory of Science and Technology on Aerospace Intelligent Control, Beijing 100854, China
^{2}School of Automatics, Northwestern Polytechnical University, Xi’an 710072, China
^{3}School of Engineering, RMIT University, Bundoora, VIC 3083, Australia
^{4}Laboratory of Excellence on Design of Alloy Metals for low-mAss Structures (DAMAS), Université de Lorraine, France

Correspondence should be addressed to Zhaohui Gao; alexandergao@mail.nwpu.edu.cn

Received 3 November 2018; Revised 27 January 2019; Accepted 4 August 2019; Published 11 November 2019

Academic Editor: Jesus Corres

Copyright © 2019 Hua Zong et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The cubature Kalman filter (CKF) is an estimation method for nonlinear Gaussian systems. However, its filtering solution is affected by system error, leading to biased or divergent estimates of the system state. This paper proposes a randomly weighted CKF (RWCKF) to handle this CKF limitation. The method incorporates random weights in CKF to restrain the influence of system error on system state estimation by dynamically modifying the cubature point weights. Randomly weighted theories are established to estimate the predicted system state and system measurement as well as their covariances. Simulation and experimental results as well as comparison analyses demonstrate that the presented RWCKF overcomes the CKF problem, leading to enhanced accuracy of system state estimation.

#### 1. Introduction

Nonlinear system state estimation is an important research topic in various fields such as multisensor data fusion and integrated navigation, weapon guidance and control, intelligent transportation, system identification, and target tracking. The extended Kalman filter (EKF) is a typical estimation method for nonlinear system state estimation [1, 2]. This method approximates the nonlinear system equation using a first-order Taylor expansion and then uses the conventional Kalman filter for system state estimation. However, the linear approximation of the nonlinear system equation introduces error into the estimated mean and covariance of the system state, resulting in divergent solutions [3, 4]. Further, the calculation of the Jacobian matrix involved in EKF is an intricate and burdensome process [5].

The unscented Kalman filter (UKF) is an improvement of EKF. This method incorporates the unscented transformation in the Kalman filter to estimate the system state for nonlinear systems [6–9]. It is simple in computation and has higher accuracy than EKF [10]. However, due to the use of the same linear update structure as the Kalman filter, UKF requires accurate system models and exact noise statistics [11]. If the system models involve error, the UKF solution will degrade or diverge. The adaptive unscented Kalman filter (AUKF) restrains system error’s interference with system state estimation by introducing an adaptive factor [12]. However, as the adaptive factor is determined empirically, this method cannot adapt to the uncertain nature of system error, leading to suboptimal or biased filtering solutions. The stochastic integration filter (SIF) is a local filter based on stochastic integration [13]. This filter provides an asymptotically exact evaluation of an integral, and its convergence is also faster than that of Monte Carlo integration. However, similar to UKF, the SIF filtering solution is still affected by system error.

The particle filter (PF) conducts recursive Bayesian estimation based on Monte Carlo simulation [14, 15]. This method is suitable for systems with strong non-Gaussian and nonlinear properties, as it represents the posterior distribution with a set of particles. However, particle degradation occurs frequently in the recursive process, leading to poor filtering accuracy [16, 17]. Further, PF also requires a large number of sample points to achieve reliable estimation, leading to a large computational cost [18–20]. The unscented particle filter (UPF) reduces the PF computational load by using $2n+1$ sample points (where $n$ is the system state dimension) to improve the importance sampling density via the unscented transformation [21]. However, it still suffers from the problem of particle degeneracy. Wei et al. studied an improved UPF by combining adaptive filtering and square-root filtering into UPF to restrain the interference of process model noise in the filtering process, thus preventing particles from degenerating [22]. However, this improvement is achieved by sacrificing computational performance.

The cubature Kalman filter (CKF) is a method of state estimation for nonlinear Gaussian systems. Based on the cubature criterion, this method approximates the Gaussian integral by generating a set of sample points, named the cubature points, according to the nonlinear system functions [23–25]. It is derived from Bayesian theory and the spherical-radial criterion via rigorous mathematical analysis. Compared to EKF, CKF avoids the linearization error of EKF, leading to third-degree accuracy for system state estimation. It also has various advantages over UKF. First, the cubature points in CKF are obtained by integral operations, while the sigma points in UKF result from the unscented transformation, where weights and sample points are selected somewhat arbitrarily. In addition, CKF adopts 2*n* sample points under a common weight, while UKF selects sample points under different weights. Further, in terms of filtering stability, the common weight of the cubature points in CKF is 1/2*n*. This means that no matter how large the system dimension is, CKF guarantees that the weight of the cubature points is always positive. However, the free adjustment parameter $\kappa$ in UKF is chosen freely. With the common choice $\kappa = 3 - n$, if $n > 3$, the center-point weight $W_0 = \kappa/(n+\kappa) < 0$. This can make the covariance matrix of a high-dimensional system nonpositive definite, thus leading to deteriorated or diverged filtering solutions. For strongly nonlinear and Gaussian systems, CKF also outperforms UKF in terms of accuracy and stability [26, 27].
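The weight comparison above can be checked numerically. The following is a minimal sketch, assuming the standard UKF center-point weight $W_0 = \kappa/(n+\kappa)$ with the common heuristic $\kappa = 3-n$ (one of several UKF parameterizations); the function names are illustrative:

```python
def ckf_weight(n):
    """Common weight of each of the 2n cubature points in CKF."""
    return 1.0 / (2 * n)

def ukf_center_weight(n, kappa=None):
    """UKF center-point weight W_0 = kappa / (n + kappa),
    with the common heuristic kappa = 3 - n if not supplied."""
    kappa = 3 - n if kappa is None else kappa
    return kappa / (n + kappa)

# The CKF weight stays positive for every dimension n, while the
# UKF center weight turns negative once n > 3 (kappa = 3 - n < 0).
for n in (2, 3, 6, 9):
    print(n, ckf_weight(n), ukf_center_weight(n))
```

A negative $W_0$ can render the reconstructed covariance matrix indefinite, which is the stability issue noted above.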

However, the state estimation of CKF is disturbed by system error which may be generated from various sources such as manufacturing error, installation error, calibration error, and environmental error. In CKF, the predictions of state and measurement and error covariances are calculated by arithmetic mean with a common weight. Since the predictions and error covariances at all cubature points have the same contribution to the evaluations of prediction errors, the prediction errors acquired from all cubature points cannot reflect the actual statistical characteristics of system error, thus leading to distorted or diverged solutions. The adaptive CKF is an approach to restraining system error’s disturbance on system state estimation by introducing an adaptive factor [28]. However, since the adaptive factor is determined empirically, this method cannot adapt to the uncertain nature of system error, resulting in suboptimal or biased solutions. Further, it still suffers from the problem due to the use of arithmetic mean for state and measurement predictions.

Randomly weighted estimation is a relatively new statistical method [29]. This method is extensively used in different areas, including system identification and target tracking, navigation, guidance and control, error characteristic estimation, and estimation and analysis of empirical distribution functions [30]. It provides an unbiased estimation at a low computation cost. It also suits large-sample computation without requiring the accurate distribution of model parameters [31]. However, it is a difficult task to combine randomly weighted estimation with CKF to improve the filtering performance. Recently, our research group reported a randomly weighted method to estimate system noise statistics and further restrain their influence on system state estimation [32]. However, that method focuses on the estimation of system noise statistics, rather than system model error. Further, it is a loose integration of randomly weighted estimation with CKF, where randomly weighted estimation is applied to estimate system noise statistics which are then plugged into the CKF filtering process, rather than directly applied to modify and improve the CKF filtering process itself.

In this paper, a new randomly weighted CKF (RWCKF) is presented to enhance the estimation precision of CKF. Different from our previous work [32], this method directly modifies cubature point weights based on prediction errors, rather than estimates system noise statistics, to restrain system error’s influence on system state estimation. It establishes randomly weighted theories to estimate predicted system state and system measurement together with their error covariances. Consequent upon these theories, the proposed RWCKF dynamically modifies cubature point weights as per prediction errors to restrain system error’s interference on system state estimation to improve the filtering accuracy. Simulations, experiments, and comparisons with UKF, CKF, SIF, and AUKF were performed to comprehensively examine the efficacy of the presented RWCKF.

#### 2. CKF

In order to improve CKF, let us first briefly review and analyze the CKF procedure.

The mathematical models of a nonlinear system can be generally expressed as

$$x_k = f(x_{k-1}) + w_{k-1}, \qquad z_k = h(x_k) + v_k, \tag{1}$$

where $x_k$ is the system state vector at time point $k$, $z_k$ is the measurement vector, $w_{k-1}$ and $v_k$ are the additive process noise and measurement noise, and $f(\cdot)$ and $h(\cdot)$ are the nonlinear functions describing the process and measurement models.

Assume that process noise $w_k$ and measurement noise $v_k$ are independent, time-varying Gaussian white noises with nonzero means, satisfying

$$E[w_k] = q_k, \quad \operatorname{Cov}[w_k, w_j] = Q_k \delta_{kj}, \qquad E[v_k] = r_k, \quad \operatorname{Cov}[v_k, v_j] = R_k \delta_{kj}, \tag{2}$$

where both $Q_k$ and $R_k$ are symmetric matrices, and $\delta_{kj}$ is the Kronecker delta function.

The CKF algorithm involves the following steps:

*Step 1. *Set the initial values for the system state estimation and error covariance:

$$\hat{x}_{0|0} = E[x_0], \qquad P_{0|0} = E\big[(x_0 - \hat{x}_{0|0})(x_0 - \hat{x}_{0|0})^T\big]. \tag{3}$$

*Step 2. *Time update and computing cubature points.

Suppose the posterior density function at time point $k-1$ obeys the normal distribution, i.e., $p(x_{k-1} \mid z_{1:k-1}) = N(\hat{x}_{k-1|k-1}, P_{k-1|k-1})$. Then, the error covariance at time point $k-1$ can be expressed as

$$P_{k-1|k-1} = S_{k-1|k-1}\, S_{k-1|k-1}^T. \tag{4}$$

Accordingly, the cubature points can be obtained by

$$X_{i,k-1|k-1} = S_{k-1|k-1}\, \xi_i + \hat{x}_{k-1|k-1}, \qquad i = 1, 2, \ldots, m, \tag{5}$$

where $X_{i,k-1|k-1}$ ($i = 1, 2, \ldots, m$) represents the system state vector at time point $k-1$ calculated from the *i*th cubature point, $S_{k-1|k-1}$ denotes the matrix obtained by the Cholesky decomposition [23, 25] of $P_{k-1|k-1}$ at time point $k-1$, and $\xi_i$ denotes the *i*th cubature point, in which $m$ represents the quantity of the cubature points and is commonly chosen as the system state dimension multiplied by two, i.e., $m = 2n$. Let $[1]$ represent the set of symmetric unit points in the system state space, and let $[1]_i$ be the *i*th point of $[1]$. A set of cubature points can then be simply determined as $\xi_i = \sqrt{n}\,[1]_i$ with common weight $\omega_i = 1/m$, where $n$ is the system state dimension (see Appendix A for computation of the cubature points).
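As an illustration, the cubature point generation described above can be sketched in NumPy as follows (a minimal sketch; the function name is illustrative):

```python
import numpy as np

def cubature_points(x_hat, P):
    """Generate the 2n cubature points for mean x_hat and covariance P.

    Implements the third-degree spherical-radial rule:
    X_i = S * xi_i + x_hat, with xi_i = +/- sqrt(n) * e_i and
    S the lower-triangular Cholesky factor of P (P = S S^T).
    """
    n = x_hat.size
    S = np.linalg.cholesky(P)                             # P = S S^T
    # Columns of xi: sqrt(n)*e_1, ..., sqrt(n)*e_n, -sqrt(n)*e_1, ...
    xi = np.sqrt(n) * np.hstack((np.eye(n), -np.eye(n)))  # shape (n, 2n)
    return S @ xi + x_hat[:, None]                        # shape (n, 2n)
```

With the common weight $1/m$, the sample mean of these points reproduces $\hat{x}_{k-1|k-1}$ exactly, and their weighted scatter reproduces $P_{k-1|k-1}$.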

Then, the *i*th cubature point propagated from time point $k-1$ to $k$ is estimated by

$$X^*_{i,k|k-1} = f(X_{i,k-1|k-1}). \tag{6}$$

Accordingly, the state prediction and error covariance from time point $k-1$ to $k$ are described by

$$\hat{x}_{k|k-1} = \frac{1}{m} \sum_{i=1}^{m} X^*_{i,k|k-1}, \tag{7}$$

$$P_{k|k-1} = \frac{1}{m} \sum_{i=1}^{m} X^*_{i,k|k-1} X^{*T}_{i,k|k-1} - \hat{x}_{k|k-1}\, \hat{x}_{k|k-1}^T + Q_{k-1}. \tag{8}$$

*Step 3. *Measurement update.

By Cholesky decomposition, $P_{k|k-1}$ can be expressed by

$$P_{k|k-1} = S_{k|k-1}\, S_{k|k-1}^T. \tag{9}$$

Calculate the system state vector from time point $k-1$ to $k$ at the *i*th cubature point:

$$X_{i,k|k-1} = S_{k|k-1}\, \xi_i + \hat{x}_{k|k-1}. \tag{10}$$

Then, the measurement vector from time point $k-1$ to $k$ at the *i*th cubature point is given by

$$Z_{i,k|k-1} = h(X_{i,k|k-1}). \tag{11}$$

Accordingly, the measurement prediction together with its associated autocovariance and cross-covariance matrices from time point $k-1$ to $k$ is represented by

$$\hat{z}_{k|k-1} = \frac{1}{m} \sum_{i=1}^{m} Z_{i,k|k-1}, \tag{12}$$

$$P_{zz,k|k-1} = \frac{1}{m} \sum_{i=1}^{m} Z_{i,k|k-1} Z_{i,k|k-1}^T - \hat{z}_{k|k-1}\, \hat{z}_{k|k-1}^T + R_k, \tag{13}$$

$$P_{xz,k|k-1} = \frac{1}{m} \sum_{i=1}^{m} X_{i,k|k-1} Z_{i,k|k-1}^T - \hat{x}_{k|k-1}\, \hat{z}_{k|k-1}^T. \tag{14}$$

Calculate the filtering gain at time point $k$:

$$K_k = P_{xz,k|k-1}\, P_{zz,k|k-1}^{-1}. \tag{15}$$

Calculate the estimated state at time point $k$:

$$\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k \big(z_k - \hat{z}_{k|k-1}\big). \tag{16}$$

According to the mean-square error criterion of minimum variance estimation, we have

$$P_{k|k} = P_{k|k-1} - P_{xz,k|k-1} K_k^T - K_k P_{xz,k|k-1}^T + K_k P_{zz,k|k-1} K_k^T, \tag{17}$$

where both $P_{k|k-1}$ and $P_{zz,k|k-1}$ are symmetric matrices, i.e., $P_{k|k-1}^T = P_{k|k-1}$ and $P_{zz,k|k-1}^T = P_{zz,k|k-1}$.

The covariance matrix of the state error at time point $k$ is represented by

$$P_{k|k} = P_{k|k-1} - K_k P_{zz,k|k-1} K_k^T. \tag{18}$$

If the nonlinear system model involves error, (1) will become

$$x_k = f(x_{k-1}) + \Delta f(x_{k-1}) + w_{k-1}, \qquad z_k = h(x_k) + \Delta h(x_k) + v_k, \tag{19}$$

where $\Delta f(\cdot)$ and $\Delta h(\cdot)$ represent process model error and measurement model error, respectively.

Accordingly, the predicted state and predicted measurement will become

$$\hat{x}'_{k|k-1} = \frac{1}{m} \sum_{i=1}^{m} \big[f(X_{i,k-1|k-1}) + \Delta f(X_{i,k-1|k-1})\big] = \hat{x}_{k|k-1} + \frac{1}{m} \sum_{i=1}^{m} \Delta f(X_{i,k-1|k-1}), \tag{20}$$

$$\hat{z}'_{k|k-1} = \frac{1}{m} \sum_{i=1}^{m} \big[h(X_{i,k|k-1}) + \Delta h(X_{i,k|k-1})\big] = \hat{z}_{k|k-1} + \frac{1}{m} \sum_{i=1}^{m} \Delta h(X_{i,k|k-1}). \tag{21}$$

It is evident from (19)–(21) that if the nonlinear system model involves error, the state prediction given by (7) and the measurement prediction given by (12) become biased, which in turn biases the filtering gain, thus deteriorating the system state estimation obtained from (16). Accordingly, if there exists a system model error, the CKF estimate deteriorates.

It is also evident from the above that the predicted state and measurement as well as their covariances are computed by arithmetic mean via the same common weight $1/m$. Therefore, the predictions and their covariances at all cubature points make the same contribution to the evaluations of prediction errors, and thus the prediction errors calculated from all cubature points cannot precisely characterize the actual statistical properties of system error. Accordingly, the CKF estimate may be biased or divergent.
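To make Steps 1–3 concrete, the following is a minimal Python sketch of one CKF predict/update cycle, assuming additive zero-mean Gaussian noise for simplicity; the function and variable names are illustrative, not the authors' implementation:

```python
import numpy as np

def ckf_step(x_hat, P, z, f, h, Q, R):
    """One predict/update cycle of the standard CKF.

    All 2n cubature points share the common weight 1/(2n); this is the
    arithmetic-mean averaging whose sensitivity to model error is
    discussed in the text.
    """
    n = x_hat.size
    m = 2 * n
    xi = np.sqrt(n) * np.hstack((np.eye(n), -np.eye(n)))

    # --- Time update ---
    S = np.linalg.cholesky(P)
    X = S @ xi + x_hat[:, None]                            # cubature points
    Xp = np.column_stack([f(X[:, i]) for i in range(m)])   # propagated points
    x_pred = Xp.mean(axis=1)                               # eq. (7)
    P_pred = Xp @ Xp.T / m - np.outer(x_pred, x_pred) + Q  # eq. (8)

    # --- Measurement update ---
    Sp = np.linalg.cholesky(P_pred)
    Xu = Sp @ xi + x_pred[:, None]
    Zu = np.column_stack([h(Xu[:, i]) for i in range(m)])
    z_pred = Zu.mean(axis=1)                               # eq. (12)
    P_zz = Zu @ Zu.T / m - np.outer(z_pred, z_pred) + R    # eq. (13)
    P_xz = Xu @ Zu.T / m - np.outer(x_pred, z_pred)        # eq. (14)

    K = P_xz @ np.linalg.inv(P_zz)                         # filtering gain
    x_new = x_pred + K @ (z - z_pred)                      # state estimate (16)
    P_new = P_pred - K @ P_zz @ K.T                        # error covariance
    return x_new, P_new
```

For linear `f` and `h`, this cycle reproduces the conventional Kalman filter exactly, which is a convenient sanity check.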

#### 3. Randomly Weighted CKF

##### 3.1. Randomly Weighted Estimation

Assume that $X_1, X_2, \ldots, X_m$ are random variables which are independent of each other and identically distributed with a common distribution function $F(x)$. The associated empirical distribution function of these variables is represented as

$$F_m(x) = \frac{1}{m} \sum_{i=1}^{m} I(X_i \le x), \tag{22}$$

where $I(\cdot)$ denotes the indicator function.

Accordingly, the randomly weighted estimate of $F(x)$ is expressed by

$$H_m(x) = \sum_{i=1}^{m} p_i\, I(X_i \le x), \tag{23}$$

where $p_i$ is the randomly weighted factor, and the random vector $(p_1, p_2, \ldots, p_m)$ is subject to the Dirichlet distribution $D(1, 1, \ldots, 1)$; that is, $p_i \ge 0$, $\sum_{i=1}^{m} p_i = 1$, and the joint probability density function of $(p_1, \ldots, p_{m-1})$ is $f(p_1, \ldots, p_{m-1}) = \Gamma(m)$ on the simplex $\{p_i \ge 0,\ \sum_{i=1}^{m-1} p_i \le 1\}$, where $\Gamma(\cdot)$ denotes the Gamma function and $p_m = 1 - \sum_{i=1}^{m-1} p_i$.

Comparing (22) with (23), we can see that the randomly weighted method utilizes $m$ random weights $p_i$ for estimation, while the arithmetic mean method utilizes a single common weight $1/m$. There are various strategies to determine the randomly weighted factors according to the Dirichlet distribution [30, 31]. A straightforward one is as follows.

Generate an independent and identically distributed sample sequence $u_1, u_2, \ldots, u_{m-1}$ from the uniform distribution $U(0,1)$. Let $u_{(1)} \le u_{(2)} \le \cdots \le u_{(m-1)}$ be the order statistics of this sequence, and set $u_{(0)} = 0$ and $u_{(m)} = 1$. Then, $p_i = u_{(i)} - u_{(i-1)}$ for $i = 1, 2, \ldots, m$, and the joint density function of $(p_1, p_2, \ldots, p_m)$ is the Dirichlet distribution $D(1, 1, \ldots, 1)$.
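The spacing construction above can be sketched as follows (a minimal NumPy implementation; the function name is illustrative):

```python
import numpy as np

def random_weights(m, rng=None):
    """Draw (p_1, ..., p_m) ~ Dirichlet D(1, ..., 1) via the spacing
    construction: sort m-1 iid U(0,1) samples, prepend 0, append 1,
    and take successive differences.
    """
    rng = np.random.default_rng() if rng is None else rng
    u = np.sort(rng.uniform(size=m - 1))
    knots = np.concatenate(([0.0], u, [1.0]))
    return np.diff(knots)   # p_i = u_(i) - u_(i-1); nonnegative, sums to 1
```

Each weight has expectation $1/m$, so the randomly weighted estimate is unbiased relative to the arithmetic mean while allowing the individual contributions to vary.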

##### 3.2. Randomly Weighted CKF

The proposed RWCKF adopts the randomly weighted estimation to address the CKF limitation caused by arithmetic mean estimation.

*State Prediction*. According to the randomly weighted concept described by (22) and (23), based on (7), the randomly weighted estimate of the predicted state is obtained as

$$\hat{x}_{k|k-1} = \sum_{i=1}^{m} p_i\, X^*_{i,k|k-1}, \tag{24}$$

where $p_i$ ($i = 1, 2, \ldots, m$) represents the randomly weighted factor.

In a similar way, from (8), the randomly weighted estimate of the predicted state’s covariance matrix is expressed by

$$P_{k|k-1} = \sum_{i=1}^{m} p_i\, X^*_{i,k|k-1} X^{*T}_{i,k|k-1} - \hat{x}_{k|k-1}\, \hat{x}_{k|k-1}^T + Q_{k-1}. \tag{25}$$

*Measurement Prediction*. From (12), the randomly weighted estimate of the predicted measurement is obtained as

$$\hat{z}_{k|k-1} = \sum_{i=1}^{m} p_i\, Z_{i,k|k-1}. \tag{26}$$

From (13), the randomly weighted estimate of the predicted measurement’s autocovariance matrix is represented by

$$P_{zz,k|k-1} = \sum_{i=1}^{m} p_i\, Z_{i,k|k-1} Z_{i,k|k-1}^T - \hat{z}_{k|k-1}\, \hat{z}_{k|k-1}^T + R_k. \tag{27}$$

By (14), the randomly weighted estimate of the predicted measurement’s cross-covariance matrix is obtained as

$$P_{xz,k|k-1} = \sum_{i=1}^{m} p_i\, X_{i,k|k-1} Z_{i,k|k-1}^T - \hat{x}_{k|k-1}\, \hat{z}_{k|k-1}^T. \tag{28}$$

*Update of System State*. Calculate the filtering gain at time point $k$:

$$K_k = P_{xz,k|k-1}\, P_{zz,k|k-1}^{-1}. \tag{29}$$

Compute the state estimation

$$\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k \big(z_k - \hat{z}_{k|k-1}\big), \tag{30}$$

where $\hat{x}_{k|k-1}$ is the predicted state, which is expressed by

$$\hat{x}_{k|k-1} = \int x_k\, p(x_k \mid z_{1:k-1})\, dx_k, \tag{31}$$

where $p(x_k \mid z_{1:k-1})$ represents the posterior density function at time point $k$ conditioned on the measurements up to time point $k-1$.

Compute the estimation of the state error covariance

$$P_{k|k} = P_{k|k-1} - K_k P_{zz,k|k-1} K_k^T. \tag{32}$$

It should be noted that, in practice, there is no guarantee that the state error covariance matrix obtained from (32) is always positive definite. If this covariance matrix loses positive definiteness, the filtering process may diverge. This problem can be overcome either by selecting an appropriate initial state value [10] or by using a matrix decomposition technique such as singular value decomposition [33] or square-root decomposition [34].

From the above equations, we can see that the proposed RWCKF adopts the randomly weighted concept in CKF. Thus, appropriate random weights can be selected adaptively based on the prediction residuals to improve the estimation accuracy of the state prediction and measurement prediction together with their covariances and cross-covariance, leading to enhanced accuracy of state estimation in comparison with CKF. Figure 1 illustrates the procedure of the RWCKF algorithm.
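As a minimal, illustrative sketch (not the authors' implementation), one RWCKF cycle amounts to replacing the common weight $1/m$ of the CKF cycle with a weight vector $p$. Here the weights are passed in directly for clarity, e.g. drawn from $D(1,\ldots,1)$; in the full algorithm they are chosen adaptively from the prediction residuals. The covariance update is the state error covariance of (32):

```python
import numpy as np

def rwckf_step(x_hat, P, z, f, h, Q, R, weights):
    """One RWCKF predict/update cycle with a supplied weight vector.

    `weights` is a length-2n nonnegative vector summing to 1. With
    weights = 1/(2n) everywhere, this reduces to the standard CKF.
    Additive zero-mean Gaussian noise is assumed for simplicity.
    """
    n = x_hat.size
    p = np.asarray(weights)
    xi = np.sqrt(n) * np.hstack((np.eye(n), -np.eye(n)))

    # --- Randomly weighted time update ---
    S = np.linalg.cholesky(P)
    X = S @ xi + x_hat[:, None]
    Xp = np.column_stack([f(X[:, i]) for i in range(2 * n)])
    x_pred = Xp @ p                                          # weighted state prediction
    P_pred = (Xp * p) @ Xp.T - np.outer(x_pred, x_pred) + Q  # weighted covariance

    # --- Randomly weighted measurement update ---
    Sp = np.linalg.cholesky(P_pred)
    Xu = Sp @ xi + x_pred[:, None]
    Zu = np.column_stack([h(Xu[:, i]) for i in range(2 * n)])
    z_pred = Zu @ p                                          # weighted measurement prediction
    P_zz = (Zu * p) @ Zu.T - np.outer(z_pred, z_pred) + R    # weighted autocovariance
    P_xz = (Xu * p) @ Zu.T - np.outer(x_pred, z_pred)        # weighted cross-covariance

    K = P_xz @ np.linalg.inv(P_zz)                           # filtering gain
    x_new = x_pred + K @ (z - z_pred)                        # state estimate
    P_new = P_pred - K @ P_zz @ K.T                          # state error covariance, eq. (32)
    return x_new, P_new
```

With uniform weights the cycle coincides with CKF, so any accuracy difference observed in simulation comes entirely from the weight selection.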