Abstract

This paper presents modifications to the stochastic stability lemma, which are then used to estimate the convergence rate and persistent error of the linear Kalman filter online, without using knowledge of the true state. Unlike previous uses of the stochastic stability lemma for stability proofs, this new convergence analysis technique considers time-varying parameters, which can be calculated online in real time to monitor the performance of the filter. Through simulation of an example problem, the new method was shown to be effective in determining a bound on the estimation error that closely follows the actual estimation error. Different cases of assumed process and measurement noise covariance matrices were considered in order to study their effects on the convergence and persistent error of the Kalman filter.

1. Introduction

Since its introduction in 1960, the linear Kalman filter (LKF) [1] has been used widely in industry. When the LKF is implemented in real-time applications, it is often difficult to quantify the performance of the filter without access to some reference “truth.” Offline simulations can provide some indication of the filter performance; however, accurate mathematical models are not always available. For the LKF, there are two primary sources of estimation error: initialization error and stochastic errors due to the process and measurement noise. In the early stages of the filter, the initialization error is dominant, and it takes some amount of time for the estimated state to converge to the true state from this incorrect initial state. After the initial error convergence, the errors due to the noise terms remain, resulting in “persistent” errors. Because of these types of error, there is a need to analyze the performance of the LKF online by quantifying the convergence rate and persistent error bounds of the real system. Such a tool could benefit many safety- or performance-critical systems, such as aircraft health management systems. Existing techniques for online performance analysis of the LKF include outlier detection [2], performance reliability prediction [3], and confidence bounds from the covariance matrix; for example, see [4]. Confidence bounds can also be established through use of the Chebyshev inequality [5], although these bounds tend to be too large for practical use [6]. Some other investigations of confidence bounds on the Kalman filter consider the non-Gaussian case using enhancements to the Chebyshev inequality [6] or the Kantorovich inequality [7]. The work presented herein offers a novel online method for monitoring the performance of the LKF by providing an upper bound on the estimation error.

This work was inspired by previous investigations of the stability and convergence properties of Kalman filters. Early continuous-time LKF stability work derived conditions for stability of the homogeneous (no noise) equations [8] and identified different causes of divergence [9]. For discrete-time systems, upper and lower bounds were first derived for the error covariance matrix [10]. Then, it was determined that stochastic controllability and observability of the system were sufficient conditions to prove asymptotic stability of the homogeneous equations [11, 12]. Later, after some comments in [13, 14], corrections were provided for the calculations of the error covariance bounds [15]. This work was expanded to handle singular state transition matrices [16] and to consider convergence properties of the algebraic Riccati equation [17] and parameter identification [18]. Lyapunov stability methods were later applied to the LKF equations as an alternative means to demonstrate stability of the homogeneous equations [19]. More recently, the conditions for stability of the discrete-time Kalman filter for linear time-invariant (LTI) systems were evaluated with respect to perturbations in the initial error covariance [20]. This existing work provided a necessary basis for investigating the convergence and persistent error properties of the LKF for stochastic systems.

An important and useful tool for analyzing the stochastic stability of a system is the stochastic stability lemma [21, 22]. This lemma has been used to analyze the stability of the extended Kalman filter (EKF) [23] and later a general class of nonlinear filters including the EKF and unscented Kalman filter (UKF) [24, 25]. A common problem with existing convergence analysis techniques for nonlinear state estimators is extremely loose bounds on the system and noise matrices, leading to very conservative and unrealistic requirements on the initial error and noise of the system [23]. A method for relaxing these conditions for the EKF was considered in a related work [26]. Using the stochastic stability lemma, these works [23, 26] perform an offline prediction of the stability of the state estimation. This process involves the calculation of a convergence rate and persistent error, which establish an upper bound on the estimation error.

In addition to its previous uses for nonlinear systems, the stochastic stability lemma can also be used to establish important results for the LKF. Since the LKF is an adaptive process even for linear time-invariant (LTI) systems, it becomes useful to analyze the convergence rate and persistent error as a function of time. Motivated by this idea, the stochastic stability lemma is reconsidered here, and modifications are presented within to handle the time-varying nature of the LKF. Using this modified stochastic stability lemma, the convergence properties of the LKF are evaluated, thus providing a more realistic bound on the estimation error. Determining a bound on the estimation error is useful for applications where a reference “truth” value is not available for validation. This technique provides an upper bound on the filter performance, which can be used to represent the worst case scenario for the LKF estimation results. The purpose of this paper is to present the modified stochastic stability lemma, develop means of calculating an online bound of the estimation error of the LKF, quantify the convergence rate of the LKF, and offer some insight into the effects of different assumed values of the noise covariance matrices. This work also provides a foundation for future nonlinear stochastic state estimation convergence analysis.

The rest of this paper is organized as follows. In Section 2, the LKF equations are defined. In Section 3, the derivation of the modified stochastic stability lemma is presented. Section 4 utilizes the modified stochastic stability lemma from Section 3 to analyze the convergence of the LKF. Section 5 presents the convergence analysis of an example LKF problem. Finally, the conclusions are given in Section 6.

Throughout this paper, \|\cdot\| denotes the Euclidean norm of vectors, E[x] is the expected value of x, E[x \mid y] is the expected value of x conditioned on y, I denotes an identity matrix of appropriate dimensions, \lambda_{min}(A) and \lambda_{max}(A) denote the minimum and maximum eigenvalues of a matrix A, the matrix inequality A \succ 0 implies that A is positive definite, and similarly A \succeq 0 implies that A is positive semidefinite.

2. Linear Kalman Filter Equations

Consider a discrete-time linear stochastic state space system of the following form:

x_k = F_{k-1} x_{k-1} + w_{k-1},
y_k = H_k x_k + v_k,   (1)

where x is the state vector, y is the output vector, F and H are system matrices, and w_{k-1} and v_k are the process and measurement noise vectors, which are zero-mean, white, and uncorrelated, with assumed covariance matrices Q_{k-1} and R_k, respectively. For this system, the LKF can be implemented using the following standard set of equations [27]:

\hat{x}_{k|k-1} = F_{k-1} \hat{x}_{k-1|k-1},
P_{k|k-1} = F_{k-1} P_{k-1|k-1} F_{k-1}^T + Q_{k-1},
K_k = P_{k|k-1} H_k^T (H_k P_{k|k-1} H_k^T + R_k)^{-1},
\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k (y_k - H_k \hat{x}_{k|k-1}),
P_{k|k} = (I - K_k H_k) P_{k|k-1},   (2)

where P is the error covariance matrix and K is the Kalman gain matrix. In order to analyze the convergence of the LKF, it is important to understand its error dynamics. Defining the error in the a posteriori state estimate as \tilde{x}_{k|k} = x_k - \hat{x}_{k|k} and substituting in for the estimated state from (2) gives

\tilde{x}_{k|k} = x_k - \hat{x}_{k|k-1} - K_k (y_k - H_k \hat{x}_{k|k-1}).   (3)

Inserting the output vector definition from (1) gives

\tilde{x}_{k|k} = x_k - \hat{x}_{k|k-1} - K_k (H_k x_k + v_k - H_k \hat{x}_{k|k-1}).   (4)

Collecting terms reduces the error dynamics to

\tilde{x}_{k|k} = (I - K_k H_k)(x_k - \hat{x}_{k|k-1}) - K_k v_k.   (5)

Substituting the definition of the state vector from (1) and the state prediction from (2) leads to

\tilde{x}_{k|k} = (I - K_k H_k)(F_{k-1} x_{k-1} + w_{k-1} - F_{k-1} \hat{x}_{k-1|k-1}) - K_k v_k.   (6)

Recognizing the state error at k - 1 reduces the estimation error dynamics to the following form:

\tilde{x}_{k|k} = (I - K_k H_k) F_{k-1} \tilde{x}_{k-1|k-1} + (I - K_k H_k) w_{k-1} - K_k v_k.   (7)
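As a concrete illustration of the filter equations in (2) and the error dynamics in (7), the following minimal sketch (Python with NumPy; the function and variable names are illustrative and not part of the original paper) performs one prediction/update cycle and propagates the corresponding a posteriori estimation error.

import numpy as np

def lkf_step(x_post, P_post, y, F, H, Q, R):
    # One prediction/update cycle of the linear Kalman filter, following (2).
    x_pred = F @ x_post                                  # state prediction
    P_pred = F @ P_post @ F.T + Q                        # covariance prediction
    S = H @ P_pred @ H.T + R                             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)                  # Kalman gain
    x_post_new = x_pred + K @ (y - H @ x_pred)           # state update
    P_post_new = (np.eye(len(x_post)) - K @ H) @ P_pred  # covariance update
    return x_post_new, P_post_new, K, P_pred

def error_dynamics(err_prev, w, v, F, H, K):
    # Propagate the a posteriori estimation error according to (7).
    I_KH = np.eye(F.shape[0]) - K @ H
    return I_KH @ F @ err_prev + I_KH @ w - K @ v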

Remark 1. This form of the estimation error is one possibility of representing the error dynamics. An autoregressive form of quantifying the estimation error is discussed in [28, 29]. Another interesting possibility for estimation error quantification is presented in [6].

3. Modified Stochastic Stability Lemma

The basis of this convergence analysis is the stochastic stability lemma [21, 22], which is given as follows.

Lemma 2 (stochastic stability lemma). If there exists a stochastic process V(\zeta_k) with the following properties:

v_{min} \|\zeta_k\|^2 \le V(\zeta_k) \le v_{max} \|\zeta_k\|^2,
E[V(\zeta_k) \mid \zeta_{k-1}] - V(\zeta_{k-1}) \le \mu - \alpha V(\zeta_{k-1}),   (8)

then the random variable \zeta_k is exponentially bounded in mean square with probability one, as in

E[\|\zeta_k\|^2] \le (v_{max}/v_{min}) E[\|\zeta_0\|^2] (1 - \alpha)^k + (\mu/v_{min}) \sum_{i=1}^{k-1} (1 - \alpha)^i,   (9)

where α is the convergence rate and v_{min} > 0, v_{max} > 0, and μ ≥ 0 are constants, with 0 < α ≤ 1. The proof for this lemma is provided in [22]. This lemma has been used to determine stability properties of the EKF in [23]. A modified version of this lemma is presented here which includes time-varying parameters.
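As a purely numerical illustration of the bound in Lemma 2 (a sketch with arbitrary constants, not values taken from the paper), the exponentially decaying initial term and the accumulated noise term in (9) can be evaluated directly:

def lemma2_bound(k, v_min, v_max, alpha, mu, init_err_sq):
    # Constant-parameter mean-square bound of Lemma 2 at time step k, per (9).
    transient = (v_max / v_min) * init_err_sq * (1.0 - alpha) ** k
    persistent = (mu / v_min) * sum((1.0 - alpha) ** i for i in range(1, k))
    return transient + persistent

# Example with arbitrary constants:
# lemma2_bound(50, v_min=0.5, v_max=2.0, alpha=0.1, mu=0.05, init_err_sq=4.0)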

Lemma 3 (modified stochastic stability lemma). Assume that there is a stochastic process V(\zeta_k) and parameters v_{min,k} > 0, v_{max} > 0, \mu_k \ge 0, and 0 < \alpha_k \le 1 such that the following inequalities are satisfied for all k:

V(\zeta_0) \le v_{max} \|\zeta_0\|^2,   (10)
v_{min,k} \|\zeta_k\|^2 \le V(\zeta_k),   (11)
E[V(\zeta_k) \mid \zeta_{k-1}] \le (1 - \alpha_k) V(\zeta_{k-1}) + \mu_k;   (12)

then the random variable \zeta_k is bounded in mean square with probability one by the following inequality:

E[\|\zeta_k\|^2] \le (1/v_{min,k}) [ v_{max} E[\|\zeta_0\|^2] \prod_{i=1}^{k} (1 - \alpha_i) + \sum_{i=1}^{k} \mu_i \prod_{j=i+1}^{k} (1 - \alpha_j) ].   (13)

Proof. An important property of expectations from statistics is central to this proof [30],

E[E[x \mid y]] = E[x],   (14)

which can be extended for conditional expectations to

E[E[x \mid y, z] \mid z] = E[x \mid z].   (15)

Taking the conditional expectation of (12) with respect to \zeta_{k-2} gives

E[E[V(\zeta_k) \mid \zeta_{k-1}] \mid \zeta_{k-2}] \le (1 - \alpha_k) E[V(\zeta_{k-1}) \mid \zeta_{k-2}] + \mu_k,   (16)

which can be simplified using (15) and applying (12) at time k - 1,

E[V(\zeta_k) \mid \zeta_{k-2}] \le (1 - \alpha_k)(1 - \alpha_{k-1}) V(\zeta_{k-2}) + (1 - \alpha_k) \mu_{k-1} + \mu_k.   (17)

This method can be applied recursively for \zeta_{k-3}, \zeta_{k-4}, \ldots, \zeta_0, thus giving

E[V(\zeta_k) \mid \zeta_0] \le V(\zeta_0) \prod_{i=1}^{k} (1 - \alpha_i) + \sum_{i=1}^{k} \mu_i \prod_{j=i+1}^{k} (1 - \alpha_j).   (18)

Taking the expectation of this inequality and applying (14) gives

E[V(\zeta_k)] \le E[V(\zeta_0)] \prod_{i=1}^{k} (1 - \alpha_i) + \sum_{i=1}^{k} \mu_i \prod_{j=i+1}^{k} (1 - \alpha_j).   (19)

Taking the expectation of (10) gives

E[V(\zeta_0)] \le v_{max} E[\|\zeta_0\|^2],   (20)

which can be inserted into (19), thus giving

E[V(\zeta_k)] \le v_{max} E[\|\zeta_0\|^2] \prod_{i=1}^{k} (1 - \alpha_i) + \sum_{i=1}^{k} \mu_i \prod_{j=i+1}^{k} (1 - \alpha_j).   (21)

Similarly for the lower bound, the expected value is taken for (11), and the result is used to obtain

v_{min,k} E[\|\zeta_k\|^2] \le E[V(\zeta_k)] \le v_{max} E[\|\zeta_0\|^2] \prod_{i=1}^{k} (1 - \alpha_i) + \sum_{i=1}^{k} \mu_i \prod_{j=i+1}^{k} (1 - \alpha_j),   (22)

which is rearranged to obtain the final result in (13).
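The recursion used in this proof maps directly onto a running computation of the bound (13). A minimal sketch (illustrative names; the sequences of α_k, μ_k, and v_{min,k} are assumed to be supplied, for example by the filter as in Section 4):

def lemma3_bound(alphas, mus, vmins, v_max, init_err_sq):
    # Evaluate the time-varying mean-square bound (13) for k = 1..N,
    # where alphas[k-1] = alpha_k, mus[k-1] = mu_k, vmins[k-1] = v_min,k.
    bounds = []
    transient = v_max * init_err_sq   # v_max * E[||zeta_0||^2] before any decay
    accumulated = 0.0                 # running value of sum_i mu_i * prod_{j>i} (1 - alpha_j)
    for a, m, vmin in zip(alphas, mus, vmins):
        transient *= (1.0 - a)
        accumulated = accumulated * (1.0 - a) + m
        bounds.append((transient + accumulated) / vmin)
    return bounds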

Remark 4. It is important to note the differences between Lemmas 2 and 3. In Lemma 3, the terms α_k and μ_k are time-varying quantities, whereas for Lemma 2, these terms were both considered as constants with respect to the discrete time k. Additionally, the bounds for the stochastic process are treated differently. The upper bound v_{max} of the process is considered only for the initial time step, while the lower bound v_{min,k} is considered as a time-varying quantity. The usefulness of Lemma 3 is not for stability analysis, but for the online monitoring of convergence and estimation error bounds. The consideration of time-varying parameters is the key to the following online convergence and error analysis.

4. Online Convergence and Error Analysis

This section considers a new approach to analyzing the convergence and estimation error of the LKF in real-time. Using Lemma 3, the main result of this paper can be stated.

Theorem 5 (Kalman filter convergence theorem). Consider a linear stochastic system using the LKF equations as described in Section 2. Let the following assumptions hold.

(1) The system matrix F_k is nonsingular (invertible) for all k.
(2) The assumed initial covariance is bounded by

P_{0|0}^{-1} \preceq v_{max} I.   (23)

(3) The state error covariance matrix is bounded by the following inequality for all k:

v_{min,k} I \preceq P_{k|k}^{-1}.   (24)

(4) The assumed process and measurement noise covariance matrices are conservative; that is,

Q_k \succeq E[w_k w_k^T],   (25)
R_k \succeq E[v_k v_k^T].   (26)

Then the expected value of the estimation error is bounded in mean square with probability one by

E[\|\tilde{x}_{k|k}\|^2] \le (1/v_{min,k}) [ v_{max} E[\|\tilde{x}_{0|0}\|^2] \prod_{i=1}^{k} (1 - \alpha_i) + \sum_{i=1}^{k} \mu_i \prod_{j=i+1}^{k} (1 - \alpha_j) ],   (27)

where the time-varying parameters \alpha_k, \mu_k, and v_{min,k} are given by

\alpha_k = \lambda_{min}(Q_{k-1} P_{k|k-1}^{-1}),   (28)
\mu_k = tr[(I - K_k H_k) Q_{k-1} P_{k|k-1}^{-1} + K_k H_k],   (29)
v_{min,k} = \lambda_{min}(P_{k|k}^{-1}).   (30)

Proof. The proof of this theorem is detailed in the following sections.
Remarks. The bound in (23) only matters for the assumed initial covariance matrix. Since this has a known value, the constant v_{max} should be selected as the maximum eigenvalue of the inverse of the assumed initial covariance matrix, P_{0|0}^{-1}, and this bound will be automatically satisfied.
It is worth noting in (24) that if the error covariance approaches infinity (divergence), then the term v_{min,k} will approach zero, which would lead to an infinite bound on the estimation error, thus indicating divergence of the filter as expected. For a stable system, however, the error covariance matrix has an upper bound, which can be determined from the stochastic controllability and observability properties of the system [11, 15].
The parameters α and μ are both functions of the same matrix, where α is the minimum eigenvalue and μ is the trace of the matrix. Since the eigenvalues of this matrix lie between 0 and 1 (the a priori covariance is always greater than or equal to the process noise covariance matrix) and recalling that the trace of a matrix is equal to the sum of its eigenvalues [31], the parameter μ_k will satisfy α_k ≤ μ_k ≤ n for all k, where n is the number of states in the filter. From here, it is interesting to note that increasing the parameter α_k, which corresponds to the convergence of the stochastic process, will in turn also increase the parameter μ_k, which corresponds to the persistent error bound due to noise. This introduces a tradeoff in convergence and persistent error, which can be tuned through the selection of the process and measurement noise covariance matrices.
Using Lemma 3 for analysis of the LKF convergence leads to three important time-varying parameters: α_k, v_{min,k}, and μ_k. The parameter α_k represents the convergence of the stochastic process, as defined in the following section by (31), while the parameter v_{min,k} represents the convergence of the error covariance. The parameter μ_k corresponds to the persistent error bound on the filter due to the process and measurement noise. That is, in (27) it is shown that the initial error term will vanish as k increases, thus leaving the term containing μ_i, which contains the persistent response. This makes sense because, as an LKF progresses in time, eventually the performance will converge within a region determined from the process and measurement noise, since these phenomena do not disappear with time. Together, these three parameters determine a bound on the convergence and persistent error of the filter using (27). Due to the time-varying nature of these parameters, the bound must be determined online and therefore cannot provide an offline prediction of the filter convergence as in [23, 26].
The proof of Theorem 5 is provided next.

4.1. Defining and Decomposing the Estimation Error Analysis

As recommended in other works, for example, [19, 23], a candidate Lyapunov function is selected to define the stochastic process using a quadratic form of the estimation error and inverse error covariance matrix, as in

V(\tilde{x}_{k|k}) = \tilde{x}_{k|k}^T P_{k|k}^{-1} \tilde{x}_{k|k}.   (31)

Note that this function is used in the context of Lemma 3, not with traditional Lyapunov stability theorems; therefore it is only being used as a tool for analyzing the convergence, not to prove the stability of the filter. Inserting the error dynamics from (7) into this function gives

V(\tilde{x}_{k|k}) = [(I - K_k H_k) F_{k-1} \tilde{x}_{k-1|k-1} + (I - K_k H_k) w_{k-1} - K_k v_k]^T P_{k|k}^{-1} [(I - K_k H_k) F_{k-1} \tilde{x}_{k-1|k-1} + (I - K_k H_k) w_{k-1} - K_k v_k].   (32)

Taking the conditional expectation with respect to \tilde{x}_{k-1|k-1} and using the assumption that the process and measurement noise are zero-mean and uncorrelated, so that all cross terms vanish in expectation, give

E[V(\tilde{x}_{k|k}) \mid \tilde{x}_{k-1|k-1}]
  = \tilde{x}_{k-1|k-1}^T F_{k-1}^T (I - K_k H_k)^T P_{k|k}^{-1} (I - K_k H_k) F_{k-1} \tilde{x}_{k-1|k-1}   (34)
  + E[w_{k-1}^T (I - K_k H_k)^T P_{k|k}^{-1} (I - K_k H_k) w_{k-1}]   (35)
  + E[v_k^T K_k^T P_{k|k}^{-1} K_k v_k].   (36)

Now the problem of analyzing the LKF estimation error has been divided into three parts: the homogeneous problem in (34), the process noise problem in (35), and the measurement noise problem in (36). The homogeneous problem considers the deterministic part of the filter, that is, no noise. The process and measurement noise problems consider the effects of the stochastic uncertainty in the prediction and measurement equations, respectively. Each of these three parts is considered separately in the following sections.

4.2. The Homogeneous Problem

The homogeneous part of the problem is defined by (34). This part of the problem is related to the convergence rate of the filter. For this part of the analysis, a bound is desired in the form

E[V(\tilde{x}_{k|k}) \mid \tilde{x}_{k-1|k-1}] \le (1 - \alpha_k) V(\tilde{x}_{k-1|k-1}).   (37)

This inequality is desired as it is the assumption given by (12), ignoring for now the noise terms and assuming that \mu_k = 0 for all k. Substituting in for (31) and (34) gives

\tilde{x}_{k-1|k-1}^T F_{k-1}^T (I - K_k H_k)^T P_{k|k}^{-1} (I - K_k H_k) F_{k-1} \tilde{x}_{k-1|k-1} \le (1 - \alpha_k) \tilde{x}_{k-1|k-1}^T P_{k-1|k-1}^{-1} \tilde{x}_{k-1|k-1}.   (38)

This scalar inequality is equivalent to the matrix inequality

F_{k-1}^T (I - K_k H_k)^T P_{k|k}^{-1} (I - K_k H_k) F_{k-1} \preceq (1 - \alpha_k) P_{k-1|k-1}^{-1}.   (39)

The following relationship can be derived from the LKF equations in (2):

I - K_k H_k = P_{k|k} P_{k|k-1}^{-1}.   (40)

Substituting (40) into (39) gives

F_{k-1}^T P_{k|k-1}^{-1} P_{k|k} P_{k|k-1}^{-1} F_{k-1} \preceq (1 - \alpha_k) P_{k-1|k-1}^{-1}.

Taking the inverse of this inequality gives

F_{k-1}^{-1} P_{k|k-1} P_{k|k}^{-1} P_{k|k-1} F_{k-1}^{-T} \succeq (1 - \alpha_k)^{-1} P_{k-1|k-1}.

Note that this operation requires that the system matrix, F, be nonsingular for all k (assumption (1)). The covariance matrices are invertible because they are positive definite by definition. Starting from the covariance prediction equation in (2) and rearranging give

P_{k-1|k-1} = F_{k-1}^{-1} (P_{k|k-1} - Q_{k-1}) F_{k-1}^{-T}.

Substituting this equation into the matrix inequality yields

F_{k-1}^{-1} P_{k|k-1} P_{k|k}^{-1} P_{k|k-1} F_{k-1}^{-T} \succeq (1 - \alpha_k)^{-1} F_{k-1}^{-1} (P_{k|k-1} - Q_{k-1}) F_{k-1}^{-T}.

Now, the system matrix can be removed from the inequality

P_{k|k-1} P_{k|k}^{-1} P_{k|k-1} \succeq (1 - \alpha_k)^{-1} (P_{k|k-1} - Q_{k-1}).

The covariance update equation from (2) is used to relate the a posteriori covariance and a priori covariance, as in

P_{k|k} = (I - K_k H_k) P_{k|k-1} \preceq P_{k|k-1}, so that P_{k|k-1} P_{k|k}^{-1} P_{k|k-1} \succeq P_{k|k-1}.

Rearranging this inequality results in the following simplifications:

(1 - \alpha_k) P_{k|k-1} \succeq P_{k|k-1} - Q_{k-1},
\alpha_k P_{k|k-1} \preceq Q_{k-1}.

Therefore the time-varying parameter, α_k, can be determined as the minimum eigenvalue of the matrix Q_{k-1} P_{k|k-1}^{-1}, as in (28). From the covariance prediction equation in (2), it is clear that the a priori covariance is greater than the process noise covariance matrix; therefore α_k is always between 0 and 1. Note that increasing Q will increase α_k. Alternatively, increasing R will decrease α_k. If the parameter α_k is selected as in (28), the desired inequality (37) is satisfied, thus satisfying the homogeneous part of the problem. Next, the process noise is considered.
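In code, the selection of α_k in (28) reduces to an eigenvalue computation on Q_{k-1} P_{k|k-1}^{-1}. A small sketch (NumPy, illustrative names); a Cholesky-based similarity transform is used so that a symmetric eigensolver can be applied:

import numpy as np

def alpha_k(Q, P_pred):
    # alpha_k = lambda_min(Q_{k-1} P_{k|k-1}^{-1}) as in (28).
    # L^{-1} Q L^{-T} is symmetric and similar to Q P_pred^{-1}, where P_pred = L L^T.
    L = np.linalg.cholesky(P_pred)
    L_inv = np.linalg.inv(L)
    M = L_inv @ Q @ L_inv.T
    return float(np.min(np.linalg.eigvalsh(M)))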

4.3. The Process Noise Problem

For the process noise problem, the quantity of interest is given by (35). Since this is a scalar, the trace can be taken without changing its value,

E[w_{k-1}^T (I - K_k H_k)^T P_{k|k}^{-1} (I - K_k H_k) w_{k-1}] = E[tr\{w_{k-1}^T (I - K_k H_k)^T P_{k|k}^{-1} (I - K_k H_k) w_{k-1}\}].

Using the trace property of multiplication reordering [31] and removing the deterministic terms from the expectation yield

tr\{(I - K_k H_k)^T P_{k|k}^{-1} (I - K_k H_k) E[w_{k-1} w_{k-1}^T]\}.

Using (40) simplifies the equation to

tr\{P_{k|k-1}^{-1} P_{k|k} P_{k|k-1}^{-1} E[w_{k-1} w_{k-1}^T]\}.

Inserting the covariance update equation from (2) gives

tr\{P_{k|k-1}^{-1} (I - K_k H_k) P_{k|k-1} P_{k|k-1}^{-1} E[w_{k-1} w_{k-1}^T]\},

which simplifies to

tr\{P_{k|k-1}^{-1} (I - K_k H_k) E[w_{k-1} w_{k-1}^T]\}.

Since the process noise covariance matrix can be chosen freely for the LKF, it is assumed that the assumed process noise covariance matrix is greater than the actual covariance of the process noise, as in (25). This bound is motivated by the idea that it is better to assume greater rather than less noise than there actually is in the system. This leads to the bound on the process noise term

E[w_{k-1}^T (I - K_k H_k)^T P_{k|k}^{-1} (I - K_k H_k) w_{k-1}] \le tr\{P_{k|k-1}^{-1} (I - K_k H_k) Q_{k-1}\}.

While increasing Q was shown to increase the convergence rate in the previous section, it is clear here that this increase in convergence comes at the expense of a larger bound on the process noise term. This selection of Q becomes a tradeoff between the convergence and the accuracy of the estimate; that is, assuming an unnecessarily large Q will lead to faster convergence but larger persistent errors of the filter due to process noise. Next, the measurement noise problem is considered.
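A sketch of the resulting process noise contribution, evaluated with the assumed Q under (25) (NumPy, illustrative names):

import numpy as np

def process_noise_term(Q, P_pred, K, H):
    # Upper bound tr{P_pred^{-1} (I - K H) Q} on the process noise part of E[V | .].
    I_KH = np.eye(P_pred.shape[0]) - K @ H
    return float(np.trace(np.linalg.inv(P_pred) @ I_KH @ Q))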

4.4. The Measurement Noise Problem

For the measurement noise problem, the quantity of interest is given by (36). Since this is a scalar, the trace can be taken without changing its value,

E[v_k^T K_k^T P_{k|k}^{-1} K_k v_k] = E[tr\{v_k^T K_k^T P_{k|k}^{-1} K_k v_k\}].

Using the trace property of multiplication reordering [31] and removing the deterministic terms from the expectation yield

tr\{K_k^T P_{k|k}^{-1} K_k E[v_k v_k^T]\}.

Using the second equation for the Kalman gain, K_k = P_{k|k} H_k^T R_k^{-1}, yields

tr\{R_k^{-1} H_k P_{k|k} H_k^T R_k^{-1} E[v_k v_k^T]\}.

Inserting the covariance update equation from (2) gives the relationship in terms of the a priori covariance

tr\{R_k^{-1} H_k (I - K_k H_k) P_{k|k-1} H_k^T R_k^{-1} E[v_k v_k^T]\}.

Using the matrix inversion lemma [32], this term can be rewritten as

tr\{R_k^{-1} H_k (P_{k|k-1}^{-1} + H_k^T R_k^{-1} H_k)^{-1} H_k^T R_k^{-1} E[v_k v_k^T]\}.

Similarly as for the process noise, the assumed measurement noise covariance matrix is selected as an upper bound on the actual measurement noise covariance, as in (26), which determines the bound for the measurement noise term

E[v_k^T K_k^T P_{k|k}^{-1} K_k v_k] \le tr\{R_k^{-1} H_k (P_{k|k-1}^{-1} + H_k^T R_k^{-1} H_k)^{-1} H_k^T\}.

This inequality can be simplified to the following form:

E[v_k^T K_k^T P_{k|k}^{-1} K_k v_k] \le tr\{K_k H_k\}.

From here, it is shown that increasing the assumed measurement noise covariance matrix, R, will in fact lead to a smaller bound on the estimation error due to measurement noise. Now that each part of the problem has been considered separately, the results are combined and Lemma 3 is applied.
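A sketch of the measurement noise contribution in both its simplified and pre-simplification forms (NumPy, illustrative names); the two expressions agree numerically when K, P_{k|k}, and R come from the same filter step, which provides a convenient check of the algebra above:

import numpy as np

def measurement_noise_term(K, H):
    # Simplified upper bound tr(K_k H_k) on the measurement noise part of E[V | .] under (26).
    return float(np.trace(K @ H))

def measurement_noise_term_unsimplified(P_post, H, R):
    # Equivalent pre-simplification form tr(R^{-1} H P_{k|k} H^T).
    return float(np.trace(np.linalg.inv(R) @ H @ P_post @ H.T))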

4.5. Final Result from the Modified Stochastic Stability Lemma

Combining the results from the previous sections gives the following inequality:

E[V(\tilde{x}_{k|k}) \mid \tilde{x}_{k-1|k-1}] \le (1 - \alpha_k) V(\tilde{x}_{k-1|k-1}) + tr\{P_{k|k-1}^{-1} (I - K_k H_k) Q_{k-1}\} + tr\{K_k H_k\},

which is equivalent to (12) with

\mu_k = tr\{P_{k|k-1}^{-1} (I - K_k H_k) Q_{k-1}\} + tr\{K_k H_k\}.

This term can be simplified further. First the trace property of multiplication reordering [31] is used to obtain

\mu_k = tr\{(I - K_k H_k) Q_{k-1} P_{k|k-1}^{-1}\} + tr\{K_k H_k\}.

Then, the matrix inversion lemma [32], which was used above to reduce the measurement noise term to tr\{K_k H_k\}, allows both terms to be expressed with respect to the same filter quantities. Further simplification, combining the two traces by linearity, gives (29). Thus, the inequality in (12) has been satisfied.
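Putting the two noise contributions together, μ_k can be evaluated directly from quantities already available in the filter. A sketch (NumPy, illustrative names), using the combined trace form derived above:

import numpy as np

def mu_k(Q, P_pred, K, H):
    # mu_k = tr[(I - K_k H_k) Q_{k-1} P_pred^{-1} + K_k H_k] as in (29).
    I_KH = np.eye(P_pred.shape[0]) - K @ H
    return float(np.trace(I_KH @ Q @ np.linalg.inv(P_pred) + K @ H))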

In order to apply Lemma 3, the inequalities (10) and (11) also need to be satisfied. These inequalities are guaranteed by the assumptions (23) and (24) in Theorem 5. Thus, the necessary conditions for Lemma 3 have been satisfied; therefore the estimation error of the LKF is bounded in mean square with probability one, and the bound is given by (27). This completes the proof of Theorem 5. In the following section, an LKF example is provided to illustrate the usefulness of Theorem 5 for LKF convergence analysis.

5. An Illustrative Example

To demonstrate the convergence analysis method from Section 4, a simple LKF example is presented. This example problem was adapted from Example  5.1 in [27] to include process noise. The system equations are defined in the form of (1) with system matrices defined by and the true process and measurement noise covariance matrices are given by where is the sampling time, which for this example is considered to be 0.02. The initial conditions are assumed to be while the true initial state for the system is actually Note that this considers a case of reasonably large initialization error.
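For readers who wish to reproduce this kind of study, the following sketch sets up and simulates a generic system of the form (1). The matrices below are hypothetical stand-ins chosen only for illustration; they are not the specific values of this example, which is adapted from [27].

import numpy as np

rng = np.random.default_rng(0)
T = 0.02                                  # sampling time, as in this example
F = np.array([[1.0, T], [0.0, 1.0]])      # hypothetical two-state kinematic model
H = np.array([[1.0, 0.0]])                # hypothetical position-only measurement
Q_true = np.diag([1e-6, 1e-4])            # hypothetical true process noise covariance
R_true = np.array([[1e-2]])               # hypothetical true measurement noise covariance

def simulate(N, x0):
    # Generate a truth trajectory and noisy measurements for the system above.
    xs, ys = [x0.copy()], []
    x = x0.copy()
    for _ in range(N):
        x = F @ x + rng.multivariate_normal(np.zeros(2), Q_true)
        ys.append(H @ x + rng.multivariate_normal(np.zeros(1), R_true))
        xs.append(x)
    return np.array(xs), np.array(ys)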

In order to apply Theorem 5, certain assumptions need to be satisfied. From the definition of F, it is clear that this matrix is invertible. Four different cases of assumed process and measurement noise covariance matrices were considered, as summarized in Table 1.

It is shown in Table 1 that (25) and (26) are satisfied. Note that these cases vary the assumed noise properties, not the actual noise. The true noise covariance matrices are given by (67) for all cases. The value for the initial Lyapunov function upper bound, v_{max}, is calculated from the assumed initial covariance matrix with (23). Additionally, the values for the time-varying convergence rate, α_k, noise parameter, μ_k, and Lyapunov function lower bound, v_{min,k}, are defined using (28), (29), and (24), respectively. These values are calculated online at each time step of the filter. Using these equations, the convergence properties can be calculated online with (27).
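A sketch of how these quantities might be accumulated alongside the filter at run time (NumPy, illustrative names): v_max is fixed from the assumed initial covariance as in (23), the mean-square initialization error is an assumed input, and the running product and sum implement the two terms of (27).

import numpy as np

def monitor_init(P0, init_err_sq):
    # v_max from (23) and the starting values of the two terms in (27).
    # init_err_sq is an assumed value for E[||x_tilde_0||^2] (for example, tr(P0)
    # if the initialization error is consistent with the assumed initial covariance).
    v_max = float(np.max(np.linalg.eigvalsh(np.linalg.inv(P0))))
    return {"transient": v_max * init_err_sq, "accumulated": 0.0}

def monitor_step(state, Q, P_pred, P_post, K, H):
    # Update the running estimation error bound (27) after one filter step.
    I_KH = np.eye(P_pred.shape[0]) - K @ H
    L = np.linalg.cholesky(P_pred)
    L_inv = np.linalg.inv(L)
    alpha = float(np.min(np.linalg.eigvalsh(L_inv @ Q @ L_inv.T)))      # (28)
    mu = float(np.trace(I_KH @ Q @ np.linalg.inv(P_pred) + K @ H))      # (29)
    v_min = float(np.min(np.linalg.eigvalsh(np.linalg.inv(P_post))))    # (24)/(30)
    state["transient"] *= (1.0 - alpha)
    state["accumulated"] = state["accumulated"] * (1.0 - alpha) + mu
    return (state["transient"] + state["accumulated"]) / v_min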

For the given example, the presented convergence analysis technique is applied, and the results are given as follows. Since the initial covariance is the identity matrix, v_{max} = 1. The time-varying convergence and error parameters are shown in Figure 1 for each of the considered cases of assumed process and measurement noise covariance.

The parameter α_k represents the convergence rate of the stochastic process, μ_k represents the persistent error of the stochastic process, and v_{min,k} represents the convergence of the error covariance. From these time-varying parameters, the bound on the expected value of the norm of the estimation error squared can be determined from (27). This bound is verified with respect to the actual estimation error, which was determined from simulation, as shown in Figure 2.

It is shown in Figure 2 that the estimation error does not exceed the theoretical bounds. The online bounds are relatively close to the estimation error, thus providing a reasonable guide to the convergence and steady-state error of the filter performance. This is useful because a reference truth is not available to evaluate the performance of a filter in most practical applications. This method provides a means of calculating an upper bound on the performance of the filter using only known values from the filtering process.

There are some interesting observations to make from Figures 1 and 2 regarding the different noise covariance assumptions. Case 1, which represents perfect knowledge of the simulated noise properties, offers a very good approximation to the convergence and persistent error of the example filter. Increasing the assumption on the process noise (Case 2) leads to an increase in α_k, but also an increase in μ_k, as predicted. However, this increase in assumed process noise significantly increased the parameter μ_k, thus leading to a slowly converging, loose bound on the estimation error. A similar performance bound was seen for Case 4 due to the dominant effect of the parameter v_{min,k}; however, the parameters α_k and μ_k were similar to Case 1. This makes sense because the ratio between the assumed Q and R remained the same for Cases 1 and 4. For Case 3, increasing the assumed measurement noise decreased the parameters α_k and μ_k as expected, but the parameter v_{min,k} also decreased, further decreasing the convergence of the estimation error. This led to a slower converging bound, but a tighter bound on the persistent error. This demonstrates a tradeoff in the selection of the measurement noise covariance, which could be used for filter tuning depending on the application and desired convergence properties.

The predicted estimation error bound from offline analysis [23, 26] using Lemma 2 is also provided as a reference to demonstrate the effectiveness of using this new online method. To relate the time-varying parameters to the previous offline work using Lemma 2 [23, 26], the following relations are used:

\alpha = \min_k \alpha_k, \quad \mu = \max_k \mu_k, \quad v_{min} = \min_k v_{min,k}.
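Under the relations above, an offline-style (constant-parameter) bound in the sense of Lemma 2 can be reconstructed from the recorded time-varying parameters. A sketch (illustrative names; it assumes the per-step sequences were stored during filtering):

def offline_bound(alphas, mus, vmins, v_max, init_err_sq):
    # Constant-parameter bound using worst-case values of the time-varying parameters.
    alpha = min(alphas)
    mu = max(mus)
    v_min = min(vmins)
    bounds = []
    for k in range(1, len(alphas) + 1):
        transient = (v_max / v_min) * init_err_sq * (1.0 - alpha) ** k
        persistent = (mu / v_min) * sum((1.0 - alpha) ** i for i in range(1, k))
        bounds.append(transient + persistent)
    return bounds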

The bound from Case 1 is used for this comparison, as shown in Figure 3.

While the offline estimation error bound is valid, it is extremely loose and does not provide a realistic portrayal of the convergence of the estimation error. This shows that the presented online method is useful for more closely determining the convergence and persistent error of the LKF, but is limited in that it cannot predict these bounds prior to the filtering process and cannot be used for offline stability analysis.

6. Conclusions

This paper presented a modified stochastic stability lemma and a Kalman filter convergence theorem, which are new tools that can be used to quantify the performance of Kalman filters online. Through an example, it was shown that this new convergence analysis method is effective in determining an upper bound on the performance of the LKF. Also, useful information about the convergence of the particular LKF algorithm can be calculated. This analysis is applied during the filtering process, thus providing the capability for real-time convergence and performance monitoring. Different cases of noise covariance assumptions were considered, showing that increasing the assumed process noise tends to significantly slow the convergence of the filter and increase the persistent error bound, while increasing the assumed measurement noise tends to slow the convergence but decreases the persistent error bound. Future work will involve extending this technique to nonlinear systems.

Acknowledgments

This work was supported in part by NASA Grants no. NNX10AI14G and no. NNX12AM56A and NASA West Virginia Space Grant Consortium Graduate Fellowship.