Abstract

In Kalman filtering applications, the conventional dynamic model, which connects the state information of two consecutive epochs through a state transition matrix, is usually predefined and assumed to be invariant. Aiming to improve the adaptability and accuracy of the dynamic model, we propose a filtering algorithm that involves multiple historical states. An autoregressive model is used as the dynamic model and is combined with the observation model to derive the optimal window-recursive filter formulae in the sense of the minimum mean square error principle. The corresponding test statistics characteristics of the system residuals are discussed in detail. The test statistics of the regional predicted residuals are then constructed in a time-window for model bias testing under two hypotheses, that is, the null and alternative hypotheses. Based on the innovation test statistics, we develop a model bias processing procedure including bias detection, location identification, and state correction. Finally, the minimum detectable bias and the bias-to-noise ratio are both computed for evaluating the internal and external reliability of the overall system, respectively.

1. Introduction

The Kalman filter (KF) has been widely used in various fields, such as target tracking, navigation estimation, distributed networks, and multiagent consensus problems [1–5]. Under the assumption of correct system functional and stochastic models, it generates a statistically optimal estimate of the underlying system state by recursively operating on streams of noisy input data [6]. However, in real applications, the mathematical system model is not perfect, so this assumption is not always satisfied. For example, mismodeled noise, an improper system model, or unexpected sudden state changes may exist during the filtering process [7], degrading the solution precision and even leading to divergence of the filter.

Many adaptive KF methods have been developed for dealing with abnormal system noise by compensating its influence in the stochastic model, where the corresponding noise variance is inflated to reduce the contribution of the dynamic model or the observation model to the final KF solution [8, 9]. Such methods essentially reduce or even withdraw the inaccurate a priori knowledge of the noise by adjusting the weights between the dynamic model and the observation model. In fact, these methods operate on the stochastic model, that is, on the noise variances. A more straightforward strategy is to use the first-moment information, which can be referred to as functional model modification. The unmodeled error is treated as a systematic error, namely a bias, which is examined by innovation-based testing theory through the generalized likelihood ratio method or a specific chi-square or consistency test [10, 11]; the bias is then further identified and adapted by means of data snooping [12, 13]. Another representative method is the interacting multiple model based Kalman filtering. It is in principle a hybrid state estimation filter in which a set of models must be selected in advance to capture complex target dynamics [14]. Recently, a self-constructed trajectory-based dynamic model has been developed and successfully applied in navigation campaigns [2]. By using information from multiple states, it accommodates dynamic model biases that cannot be adequately described by the predefined invariant dynamic model of the conventional KF, for instance, acceleration, turning, and other maneuvers versus a constant-velocity dynamic model. In essence, it is an autoregressive (AR) model, and this provides a new way to establish a time-variant dynamic model. However, a considerable possibility of bias occurrence remains. Moreover, unexpected biases may arise during observation and thus also contaminate the observation model. It is therefore of great importance to develop a quality control strategy for the dynamic system.

Aiming to construct a generally accurate dynamic model, we derive an optimal window-recursive filter for processing the AR(p) dynamic model in the sense of the minimum mean square error (MMSE) principle. Considering the test statistics characteristics of the filter, regional innovation test statistics are constructed in a time-window for testing two hypotheses, that is, the null and alternative hypotheses. Based on the innovation test statistics, we further develop a model bias processing procedure including bias detection, location identification, and state correction. The minimum detectable bias and the bias-to-noise ratio are both computed for the internal and external reliability analyses of the overall system as well. This paper is organized as follows. Section 2 derives the recursive formulae for the filtering involving multiple historical states. The characteristics of the local model test statistics are discussed in Section 3. The regional test statistics for the system innovation are constructed for two bias scenarios in Section 4. The system model bias processing procedure is developed in three stages in Section 5. Section 6 investigates the internal and external reliability of the proposed filter model. Concluding remarks are given in Section 7.

2. Principle of Multiple Historical States Involved Filtering

The AR(p) model is widely expressed as (1), in which the subscript k is the time step, p denotes the number of involved epochs, x_k is the actual state of dimension n, Φ_i are the autoregressive coefficients, and w_k is the zero-mean normally distributed process noise with variance Q_k. The intuitive representation of (1) in state space form can be given as (2). However, as p grows, the computational burden of such state augmentation increases accordingly. For this reason, we rewrite the AR(p) model (1) into matrix form without state augmentation as (3), where Φ is the transition matrix transforming the states of the previous p epochs into the current one, X_{k−1} is the vector consisting of the stacked state vectors from epoch k−p to epoch k−1, and its corresponding covariance matrix is P_{k−1}. On the other hand, the linear or linearized observation model reads as (4), where z_k is the measurement vector of dimension m, v_k is the zero-mean normally distributed measurement noise with variance R_k, and H_k is the design matrix. It is easy to compute the predicted state and its variance by (5) and (6), where x̄_k and P̄_k denote the predicted state and its corresponding variance, respectively, and Q_k is the variance of w_k. Let us start the derivation with the linear expression of the estimated state defined in (7), where K_k is the so-called gain matrix to be determined. Denoting the a priori and a posteriori estimate errors by x̄_k − x_k and x̂_k − x_k, respectively, it is straightforward to compute the estimated state variance as in (8), where E{·} denotes the expectation operator. An alternative expression of (8) can be written as (9), in which I is the identity matrix with the same dimension as the state vector. Next we determine K_k in the MMSE sense, that is, satisfying the condition in (10), with the fact given in (11), where tr{·} denotes the trace of a matrix. Thereby the unique solution of K_k is given by (12). Furthermore, we also obtain the state estimate covariance in (13). Note that, differing from the conventional KF derivation, the correlation between the new estimate x̂_k and the historical stacked states exists and should be rigorously considered when the time-window moves forward. Rewriting (8) into the form of (14) and inserting (5) into (14) yields (15). Noting that v_k is uncorrelated with the historical states, the covariance matrix between x̂_k and X_{k−1} is then derived in terms of the error propagation law as (16). We further symbolize the variance of the stacked states in block matrix form as (17). Thus, as the window moves forward one step, the new epoch is introduced and the first epoch removed, and the filtering solution is readily derived. It should be pointed out that when the time-window length is p = 1, the multiple historical states involved filtering algorithm reduces to the KF. Its implementation procedure is summarized as follows (a code sketch of one cycle is given after the list):
(i) Initialize the historical state information (stacked states and their covariance) of the first p epochs.
(ii) Compute the predicted state vector and its variance by (5) and (6).
(iii) Sequentially compute the gain matrix, the estimated state, its variance, and the cross-covariance with (12)~(14) and (16).
(iv) Save the estimated state along with the corresponding submatrix of (17).
(v) If no observation interruption occurs, advance the time epoch and jump to (ii). Otherwise, restart the filter from (i) to reinitialize the multiple-state information.
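To make one filtering cycle concrete, the following Python sketch implements the prediction and update steps under standard MMSE (Kalman-type) forms. It is a minimal illustration only: the variable names (X_hist, P_hist, Phi, H, etc.) are editorial and not the paper's original notation, and the bookkeeping for sliding the window (appending the new estimate and dropping the oldest epoch) is omitted.

import numpy as np

def ar_window_predict_update(X_hist, P_hist, Phi, Q, H, R, z):
    """One cycle of the window-recursive filter sketched in Section 2.

    X_hist : stacked historical states x_{k-p}, ..., x_{k-1}   (n*p,)
    P_hist : covariance of the stacked historical states        (n*p, n*p)
    Phi    : AR(p) transition matrix mapping the stack to x_k   (n, n*p)
    Q, R   : process / measurement noise covariances
    H      : design matrix of the observation model             (m, n)
    z      : current measurement                                (m,)
    """
    # Prediction, cf. (5)-(6): propagate the p historical states forward.
    x_pred = Phi @ X_hist
    P_pred = Phi @ P_hist @ Phi.T + Q

    # Gain and update in the MMSE sense, analogous to (12)-(13).
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # gain matrix
    innovation = z - H @ x_pred
    x_est = x_pred + K @ innovation
    P_est = (np.eye(len(x_pred)) - K @ H) @ P_pred

    # Cross-covariance between the new estimate and the old stack, cf. (16),
    # needed when the time-window slides forward by one epoch.
    P_cross = (np.eye(len(x_pred)) - K @ H) @ Phi @ P_hist
    return x_est, P_est, P_cross, innovation, S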

The above multiple states involved filtering generates optimal state estimates only when the system noises are mutually uncorrelated and zero-mean normally distributed. However, such an assumption is not always satisfied in real applications. Misspecifications in the system model and unexpected errors inevitably generate biased estimation results. It is therefore of great importance to deal correctly with these system model faults and outliers. In the next section, we investigate the model bias processing theory to ensure the accuracy and reliability of the overall system.

3. Characteristics of the Local Filter Test Statistics

According to (3) and (4), three error sources exist in the filter solutions, that is, the system process noise w_k, the observation noise v_k, and the errors of the stacked historical states X_{k−1}. From this point of view, the system models can be reformulated as the residual equations for pseudo-observations given in (18)~(20), where the three residual vectors correspond to the process noise, the observation noise, and the historical state errors, with their pseudo-observation variance matrices given in (21). Collecting the pseudo-observations into a new observation vector, we obtain its corresponding partitioned design matrix; denoting the new residual vector accordingly, the test statistic widely used in diagnosing the local system model can be constructed as (22), in which the redundancy of the local model appears. In practice, the alternative chi-square test in (23) is more popular, where the system innovation is defined in (24) as the difference between the raw observation z_k and its prediction H_k x̄_k derived from the predicted state. Evidently, it mixes all the error sources in (18)~(20). Further inserting (5) into (24), the innovation vector arrives at (25); accordingly, its variance matrix is derived in (26). It should be pointed out that the two test statistics in (22) and (23) have been proved to be equivalent [15]. Therefore, we use the distribution of the system innovation vector to test the mathematical model of the filter.
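As an illustration of the local chi-square test on the innovation, the following sketch computes the quadratic form of the current innovation with respect to its covariance and compares it with a chi-square quantile; the names and the default significance level are illustrative assumptions, not the paper's notation.

import numpy as np
from scipy.stats import chi2

def local_innovation_test(z, x_pred, P_pred, H, R, alpha=0.01):
    """Local chi-square test on the current innovation, cf. (23)-(26).

    Returns the test statistic and a boolean flag (True = model accepted).
    """
    v = z - H @ x_pred                      # innovation, cf. (24)
    S = H @ P_pred @ H.T + R                # innovation covariance, cf. (26)
    T = float(v @ np.linalg.solve(S, v))    # quadratic form v^T S^{-1} v
    threshold = chi2.ppf(1.0 - alpha, df=len(v))
    return T, T <= threshold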

4. Regional Test Statistics for System Innovation

The aforementioned local test statistics can potentially be used to detect sudden failures or outliers in the system model. However, there are two drawbacks: (i) the local testing is too insensitive to detect slowly growing sensor errors such as gyroscope drift and accelerometer zero-bias; (ii) only the current observation information is included in the test statistic, which degrades the reliability of the testing results. Therefore, in this section, we introduce sliding-window-based test statistics which contain the information of multiple system innovations rather than a single one, thereby enhancing the system reliability in the aspect of model fault detection. The predicted residual vectors of the epochs in the time-window can be expressed as (27). The null and alternative hypotheses in (28) are considered, where the residual bias vector is unknown. We then define the relationship between the expected innovation bias and the unknown model bias vector in (29), where the projection matrix can be interpreted as the transformation of the system model bias into the innovation bias. Here two typical model error scenarios are considered.
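In illustrative notation (the window length N, the stacked innovation vector D_k, the projection matrix \Pi, and the bias vector \nabla are editorial symbols introduced here, not necessarily the paper's original ones), the hypothesis pair of (28)-(29) can be summarized as

    D_k = \begin{bmatrix} d_{k-N+1}^{T} & d_{k-N+2}^{T} & \cdots & d_{k}^{T} \end{bmatrix}^{T},
    \qquad H_{0}:\ \mathrm{E}\{D_k\} = 0,
    \qquad H_{a}:\ \mathrm{E}\{D_k\} = \Pi\,\nabla \neq 0,

with the covariance matrix of D_k assumed unchanged under both hypotheses.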

Case A. A bias error vector occurs in the dynamic model at epoch j, and the dynamic model at an arbitrary epoch within the window is then expressed as (30), where the coefficient matrix denotes how the bias vector enters the dynamics. For convenience, a shorthand notation for the transition matrix is used in the subsequent expressions.

Then (30) can accommodate unmodeled errors in the dynamic model, for example, maneuvering or acceleration of a vehicle described by a constant-velocity model. The corresponding projection matrix is constructed as (31), where ⊗ denotes the Kronecker product and the elements satisfy the Dirac function in (32).

Case B. A bias error vector occurs in the observational model at epoch j, and the observational model at an arbitrary epoch is expressed as (33), where the coefficient matrix describes how the observation bias vector enters the measurements. Thereby the corresponding projection matrix is computed by (34).

4.1. Generalized Likelihood Ratio Test Statistics

Thus far, the alternative hypothesis has been specified. Next we introduce the generalized likelihood ratio test statistic for testing the null hypothesis against the alternative as follows [13]: (35). The null hypothesis is rejected if the condition in (36) holds, which involves the covariance matrix of the stacked innovation vector. As mentioned above, the bias occurs at epoch j, and then (35) can be further rewritten as (37), where the innovation residual vectors from epoch j to the current epoch are calculated by (38). Evidently, the innovation residual vectors in (38) are strongly correlated with each other. Thus, their joint covariance matrix is not a diagonal matrix and the correlations should be rigorously considered. Let us reformulate an arbitrary innovation residual vector above as (39). With the definitions specified in (40), inserting (40) into (39) and reformulating (38) into matrix form, we obtain (41), with the coefficient matrix given in (42). Therefore, in terms of the error propagation law, the covariance matrix of the stacked innovation vector is derived by (43), which involves the covariance of the stacked noise vector. Owing to the independence of its subvectors, the latter is written in the form of a block diagonal matrix as (44).
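A Python sketch of the window GLR test is given below. It uses the standard GLR form for a bias entering the stacked innovations through a projection matrix, which may differ in detail from the paper's (35); the names (D, Q_D, Pi) follow the illustrative notation introduced above.

import numpy as np
from scipy.stats import chi2

def glr_window_test(D, Q_D, Pi, alpha=0.01):
    """Generalized likelihood ratio test over a window of stacked innovations.

    D   : stacked innovation-residual vector of the window
    Q_D : its full (non-block-diagonal) covariance matrix
    Pi  : projection matrix mapping the hypothesized model bias into D
    """
    W = np.linalg.inv(Q_D)
    A = Pi.T @ W @ Pi                    # normal matrix of the bias estimate
    b = Pi.T @ W @ D
    nabla_hat = np.linalg.solve(A, b)    # estimated bias under H_a
    T = float(b @ nabla_hat)             # D^T W Pi (Pi^T W Pi)^{-1} Pi^T W D
    df = Pi.shape[1]                     # dimension of the bias vector
    reject_H0 = T > chi2.ppf(1.0 - alpha, df=df)
    return T, nabla_hat, reject_H0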

5. The System Model Bias Processing Procedure

In this section, based on the testing principle of the previous section, we propose a system model bias processing procedure consisting of three steps, that is, bias detection, location identification, and state correction.

Step 1 (bias detection). First we implement the regional testing with the window test statistic to examine whether a bias exists in the time-window of the filter. Therefore, the overall system model testing with (35) is carried out based on the null and alternative hypotheses in (45), where the noncentrality parameter can be computed by (46). The null hypothesis will be accepted if the test statistic does not exceed the chi-square threshold in (36); that is, no bias occurs during the filtering process. Otherwise, we accept the alternative hypothesis.
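The decision rule of Step 1 and the noncentrality parameter of (46) can be sketched as follows; the routine names and the default significance level are illustrative assumptions.

import numpy as np
from scipy.stats import chi2

def detect_bias(T, df, alpha=0.01):
    """Step 1 decision rule: H_0 is accepted when the regional statistic T
    stays below the chi-square quantile; otherwise H_a is accepted."""
    return T > chi2.ppf(1.0 - alpha, df=df)   # True = bias detected

def noncentrality(nabla, Pi, Q_D):
    """Noncentrality parameter of the regional statistic under H_a, cf. (46):
    lambda = nabla^T Pi^T Q_D^{-1} Pi nabla (illustrative notation)."""
    u = Pi @ nabla
    return float(u @ np.linalg.solve(Q_D, u))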

Step 2 (time and location identification). If the null hypothesis is rejected, we need to locate the starting time and position of the model bias. Since the time at which the bias occurs is unknown, a test statistics screening strategy is proposed here to find the most likely starting time of the bias, searching from the beginning of the time-window to the current epoch. The starting time and location of the bias are identified as follows (a code sketch follows this list):
(i) Similar to the data snooping method, we construct alternative hypotheses for potential blunders by setting the elements of the coefficient matrix of the model bias (in the dynamic or observation model) to 1 in turn, with the element index ranging from 1 to the dimension of the bias vector.
(ii) Compute all the candidate test statistics with the coefficient matrices specified in (i) over the time-window.
(iii) Find the maximum of the candidate test statistics; the corresponding epoch index and element order indicate the most likely starting time and location of the bias if the alternative hypothesis is true.
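A sketch of the screening loop is given below. It assumes a hypothetical helper build_Pi(j, l) that returns the projection matrix for a unit bias entering at epoch j in element l (Case A or Case B) and reuses glr_window_test from the Section 4.1 sketch; all names are illustrative.

import numpy as np

def screen_bias_candidates(D, Q_D, build_Pi, epochs, n_elements):
    """Step 2 sketch: screen all candidate starting epochs and bias locations
    and return the candidate maximizing the window GLR statistic."""
    best = (None, None, -np.inf)                 # (epoch, element, statistic)
    for j in epochs:                             # candidate starting epochs
        for l in range(n_elements):              # candidate bias locations
            Pi = build_Pi(j, l)                  # hypothetical helper
            T, _, _ = glr_window_test(D, Q_D, Pi)
            if T > best[2]:
                best = (j, l, T)
    return best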

Step 3 (state correction). Once the starting time and location of the bias are identified, a bias correction step needs to be undertaken so that the null hypothesis can be accepted. This can be fulfilled by additional bias parameterization, which treats the bias vector as a new parameter to be estimated. Accordingly, the dynamics of the bias vector is assumed to be that of a constant vector, as in (47). With this operation, the identified alternative hypothesis turns into a new null hypothesis. Based on the two-step filtering theory of bias treatment [12], the final optimum estimate, which is free of the bias influences, can be expressed by two components as in (48), where the first component is the bias-free estimate calculated as if no bias existed in the system model and the second is the estimate correction term induced by the bias vector during the recursive filtering, given by (49). The two terms on the right-hand side of (49) can be computed by (50), where the bias-free predicted residual is given in (51). The introduced intermediate matrix is given in (52), and the gain matrices of the bias-free state and the bias state are given in (53), in which the sensitivity matrix is recursively computed by means of (54) with (55). Note that the variance appearing in (53) is that of the bias-free estimated state. The initial information of the bias vector is specified at the epoch where the bias is introduced. Up to now, the bias-corrected state estimation under the alternative hypothesis can be realized recursively.
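For illustration, the correction of Step 3 can be sketched as follows under the standard two-step (bias-separated) filtering result, in which the corrected estimate equals the bias-free estimate plus a sensitivity matrix times the estimated bias; the names and the covariance expression are assumptions in the spirit of [12], not the paper's exact formulae.

import numpy as np

def correct_state(x_free, S_k, nabla_hat):
    """Step 3 sketch, cf. (48)-(49): the bias-corrected estimate is the
    bias-free estimate plus a correction driven by the estimated bias,
    x_hat = x_free + S_k @ nabla_hat, where S_k is the recursively propagated
    sensitivity (blending) matrix.  Names are illustrative."""
    return x_free + S_k @ nabla_hat

def corrected_covariance(P_free, S_k, P_nabla):
    """Covariance of the corrected estimate, assuming the bias-estimate
    uncertainty P_nabla enters only through the sensitivity matrix S_k."""
    return P_free + S_k @ P_nabla @ S_k.T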

6. Reliability Analysis

When we implement the overall system innovation model testing, to ensure the probability of success in bias detection, we should control the probabilities of committing both a type I error and a type II error, which correspond to the probabilities of making wrong decisions during the test [16, 17]. According to the Neyman-Pearson principle, if the size of the type I error is fixed, the type II error is naturally associated with the alternative hypothesis. In this section, with the given sizes of the two error types, we gain insight into the system reliability, which comprises the internal reliability and the external reliability.

6.1. Minimal Detectable Bias

The minimal detectable bias (MDB) is considered a measure of the internal reliability; it indicates a lower bound on the bias that can be successfully detected. It is closely related to the alternative hypothesis and to the size of the type II error, that is, the probability of accepting the null hypothesis when the alternative is true. It is worth pointing out that one should maintain protection against the type II error. Its size depends on the parameters involved in the test, that is, the chosen level of significance, the noncentrality parameter, and the degrees of freedom. More straightforwardly, the type II error manifests itself in the power of the statistical test through the function in (57); another expression of (57) reads as (58). Obviously, given the protected power of the statistical test, the level of significance, and the degrees of freedom, we obtain the corresponding noncentrality parameter. Considering this fixed noncentrality parameter in (58) together with its relation to the system innovation information of the alternative hypothesis in (46), the MDB is then calculated by (59).
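The reference noncentrality parameter implied by (57)-(58) and the resulting MDB of (59) can be computed numerically as sketched below; the handling of the bias direction and the bracketing interval are illustrative assumptions for typical alpha and beta values.

import numpy as np
from scipy.stats import chi2, ncx2
from scipy.optimize import brentq

def reference_noncentrality(alpha, beta, df):
    """Solve for the noncentrality parameter lambda_0 at which a chi-square
    test of size alpha reaches power 1 - beta, cf. (57)-(58)."""
    k = chi2.ppf(1.0 - alpha, df)                        # critical value
    f = lambda lam: ncx2.sf(k, df, lam) - (1.0 - beta)   # power minus target
    return brentq(f, 1e-6, 1e3)                          # bracket assumes typical alpha, beta

def minimal_detectable_bias(alpha, beta, Pi, Q_D, direction):
    """MDB along a unit direction of the bias vector, cf. (59):
    |nabla| = sqrt(lambda_0 / (c^T Pi^T Q_D^{-1} Pi c)).  Illustrative notation."""
    c = direction / np.linalg.norm(direction)
    lam0 = reference_noncentrality(alpha, beta, df=Pi.shape[1])
    u = Pi @ c
    denom = float(u @ np.linalg.solve(Q_D, u))
    return np.sqrt(lam0 / denom)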

6.2. Bias-to-Noise Ratio

Besides the internal reliability discussed above, the influence of the model bias on the final state solutions should also be evaluated, since model biases are not always detectable. As an important external reliability criterion, the bias-to-noise ratio is an efficient tool for quantifying the influence of the model bias on the overall system. A scalar squared bias-to-noise ratio is defined as in (60) [13], where the bias-induced term is the influence of the model bias on the state in (48) under the alternative hypothesis and the weighting matrix is the covariance matrix of the bias-free state under the null hypothesis. Further inserting (13) and (49) into (60) yields (61). Evidently, a large value of the ratio indicates a significant influence of the model bias on the state estimation, while a small value exhibits good external reliability of the system. It is therefore important to design a reasonable system model structure so as to minimize the influence of model biases on the solutions.
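A one-line computation of the scalar squared bias-to-noise ratio of (60) is sketched below; delta_x denotes the bias-induced state correction and P_free the bias-free state covariance (illustrative names).

import numpy as np

def bias_to_noise_ratio(delta_x, P_free):
    """Scalar squared bias-to-noise ratio, cf. (60): the bias influence on the
    state weighted by the inverse covariance of the bias-free estimate."""
    return float(delta_x @ np.linalg.solve(P_free, delta_x))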

7. Concluding Remarks

This paper proposes a model bias processing approach for dynamic systems driven by an autoregressive (AR) dynamic model. The optimal time-window recursive filtering formulae are derived for the AR(p) dynamic model in the sense of the minimum mean square error (MMSE) principle. The test statistics of the regional system innovations, that is, the predicted residuals under the null and alternative hypotheses, are constructed for model bias testing in two bias occurrence scenarios. Based on the system innovation test statistics, we develop a model bias processing procedure including bias detection, location identification, and state correction. The minimum detectable bias and the bias-to-noise ratio are both computed and analyzed for evaluating the internal and external reliability, respectively, of the overall system. In the future, since the recursive time-window inevitably generates strong correlations between innovations and state estimates, multiple hypotheses will be constructed that account for the probability of wrong decisions on the bias location when the null hypothesis is rejected. In addition, the computations for the regional testing and processing of model bias are time-consuming; thus an efficient computation strategy is also desired in future work.

Conflict of Interests

The authors declare that they have no competing interests.

Acknowledgments

This work is supported by the State Key Laboratory of Geodesy and Earth Dynamics (SKLGED 2014-3-3-E), the NASG Key Laboratory of Land Environment and Disaster Monitoring (LEDM2014B09), the Key Laboratory of Precise Engineering and Industry Surveying of National Administration of Surveying, Mapping and Geoinformation (PF2015-11), and the National Natural Science Fund of China (41304018 and 41574017).