Abstract
The modified Riccati equation arises in the implementation of the Kalman filter for target tracking under measurement uncertainty, and it cannot be transformed into an equation of the form of the Riccati equation. An iterative algorithm for solving the modified Riccati equation is proposed, and a method is established to decide when the proposed algorithm is faster than the classical one. Both algorithms exhibit the same behavior: if the system is stable, then there exists a steady-state solution, while if the system is unstable, then there exists a critical value of the measurement detection probability below which both iterative algorithms diverge. It is established that this critical value increases in a logarithmic way as the system becomes more unstable.
1. Introduction
The discrete time modified Riccati equation emanating from the Kalman filter was originally formulated in [1]. It plays an important role in target tracking [1-10]. Theoretical properties of the modified Riccati equation have been derived in [2, 3]. It is well known [2] that the modified Riccati equation cannot be transformed into an equation of the form of the Riccati equation. The discrete time Riccati equation arises in linear estimation, namely, in the implementation of the discrete time Kalman filter [11]. The modified Riccati equation is solvable under certain conditions [2, 9] and has existence and uniqueness properties similar to those of the Riccati equation [2].
In Section 2, the modified Riccati equation associated with target tracking under measurement uncertainty is presented; the case without clutter but with detection probability of less than one is considered. In Section 3, an iterative solution algorithm of the modified Riccati equation is proposed and compared to the classical one. A method is established to distinguish the faster algorithm. If the system is stable, then both algorithms do converge to the steady-state solution. If the system is unstable, then there exists a critical value of the measurement detection probability, below which both algorithms diverge. In Section 4, it is established that this critical value increases in a logarithmic way as the system becomes more unstable.
2. The Modified Riccati Equation
Consider the following state space equations at time k:

x_{k+1} = F x_k + w_k,
z_k = H x_k + v_k, (1)

where x_k is the n-dimensional state vector at time k, z_k is the m-dimensional measurement vector, F is the system transition matrix, and H is the output matrix. It is assumed that {w_k} and {v_k} are zero mean, independent, white, Gaussian noise processes with constant covariance matrices Q and R, that is, Q is the plant noise covariance matrix and R is the measurement noise covariance matrix.
At the initial time k = 0, the state x_0 is independent of the processes {w_k} and {v_k} for any k and is a Gaussian random variable with mean m_0 and covariance P_0, that is,

E[x_0] = m_0, E[(x_0 - m_0)(x_0 - m_0)^T] = P_0. (2)

For k >= 0, denoting by x_k the state prediction and by P_k the prediction error covariance matrix, and using the discrete time invariant Kalman filter equations as described in [11-13], we derive the following recursion for the symmetric prediction error covariance matrix P_k, the Riccati equation:

P_{k+1} = F P_k F^T + Q - F P_k H^T (H P_k H^T + R)^{-1} H P_k F^T, (3)

with initial condition P_0.
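The Riccati recursion (3) can be sketched numerically. The following is a minimal NumPy sketch; the model matrices are illustrative placeholders, not values from the paper.

```python
import numpy as np

def riccati_step(P, F, H, Q, R):
    """One step of the Riccati recursion (3):
    P_{k+1} = F P F^T + Q - F P H^T (H P H^T + R)^{-1} H P F^T."""
    S = H @ P @ H.T + R                # innovation covariance H P H^T + R
    G = P @ H.T @ np.linalg.inv(S)     # factor P H^T S^{-1}
    return F @ (P - G @ H @ P) @ F.T + Q

# Illustrative stable 2-state / 1-measurement model (hypothetical values).
F = np.array([[0.9, 0.1],
              [0.0, 0.8]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.04]])

P = np.zeros((2, 2))                   # zero initial condition P_0 = 0
for _ in range(500):
    P = riccati_step(P, F, H, Q, R)
```

For a stable model such as this one, the iterates settle at a symmetric, positive definite steady-state covariance.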
The state space equations in (1) can be used in target tracking to describe a linear target motion and measurement model. In addition, consider that a measurement is received with detection probability p_d [2], where

0 <= p_d <= 1. (4)
Using the Kalman filter equations, we are able to derive [2, 3] a Kalman-like recursion for the symmetric prediction error covariance matrix P_k, the modified Riccati equation:

P_{k+1} = F P_k F^T + Q - p_d F P_k H^T (H P_k H^T + R)^{-1} H P_k F^T. (5)

Note that the prediction error covariance matrix is nonnegative definite (P_k >= 0).
Also, in the modified Riccati equation (5), note that the matrix H P_k H^T + R is nonsingular when R is a positive definite matrix (R > 0), which signifies that no measurement is exact; this is reasonable in physical problems.
It is remarkable that the following special cases are implied by (5).

(i) Setting p_d = 1, the classical Riccati equation (3) is derived. The difference between the modified Riccati equation and the Riccati equation is the detection probability factor p_d. It is obvious that the modified Riccati equation (5) cannot be transformed into the classical Riccati equation (3).

(ii) Setting p_d = 0 in (5), the classical Lyapunov equation is derived:

P_{k+1} = F P_k F^T + Q, (6)

which arises from the Riccati equation (3) in the infinite measurement noise case (R -> infinity).
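Both special cases can be checked numerically with a one-step sketch of recursion (5); the matrices below are illustrative, not from the paper.

```python
import numpy as np

def modified_riccati_step(P, F, H, Q, R, pd):
    """One step of the modified Riccati recursion (5):
    P_{k+1} = F P F^T + Q - pd * F P H^T (H P H^T + R)^{-1} H P F^T."""
    S = H @ P @ H.T + R
    corr = P @ H.T @ np.linalg.inv(S) @ H @ P
    return F @ (P - pd * corr) @ F.T + Q

# Illustrative model (hypothetical values).
F = np.array([[0.9, 0.1],
              [0.0, 0.8]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.04]])
P = np.eye(2)

riccati = modified_riccati_step(P, F, H, Q, R, pd=1.0)   # special case (i)
lyapunov = modified_riccati_step(P, F, H, Q, R, pd=0.0)  # special case (ii)
```

With pd = 1 the step reduces to the classical Riccati step (3); with pd = 0 it reduces to the Lyapunov step (6).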
3. Iterative Solutions of the Modified Riccati Equation
Concerning the modified Riccati equation, it is known [9] that for stable systems, which means that all eigenvalues of F lie inside the unit circle, the modified Riccati equation always converges, and the limiting value of the prediction error covariance is the steady state solution of the discrete time modified Riccati equation.
The classical implementation of the modified Riccati equation (cmRE) arises from (5) and consists of the direct implementation of the recursion

P_{k+1} = F [P_k - p_d P_k H^T (H P_k H^T + R)^{-1} H P_k] F^T + Q. (7)

It is obvious that this equation is equivalent to the modified Riccati equation (5), achieving a reduction in computational burden by factoring out F and F^T as common factors.
Notice that if R > 0, then the nonsingularity of H P_k H^T + R in (7) is guaranteed.
It is known [2] that the steady state solution of the modified Riccati equation is independent of the initial condition P_0. So, for convenience, we are able to use the zero initial condition P_0 = 0. Then, since (7) with P_0 = 0 yields P_1 = Q, we are able to use P_1 = Q as the initial condition for the classical implementation.
Note that convergence is achieved when ||P_{k+1} - P_k|| <= e, where e is a small positive number and ||A|| denotes the norm of the matrix A, which is equal to the largest singular value of A. Then the steady state solution P satisfies the steady state modified Riccati equation:

P = F P F^T + Q - p_d F P H^T (H P H^T + R)^{-1} H P F^T. (8)
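The classical iteration (7) with this stopping rule can be sketched as follows; the spectral norm is the largest singular value, and the model matrices are illustrative.

```python
import numpy as np

def solve_cmRE(F, H, Q, R, pd, eps=1e-11, max_iter=10**5):
    """Iterate the classical form (7) from P_0 = 0 until
    ||P_{k+1} - P_k||_2 <= eps (spectral norm = largest singular value).
    Returns the steady-state solution and the number of recursions."""
    P = np.zeros_like(Q)
    for s in range(1, max_iter + 1):
        S = H @ P @ H.T + R
        P_next = F @ (P - pd * P @ H.T @ np.linalg.inv(S) @ H @ P) @ F.T + Q
        if np.linalg.norm(P_next - P, 2) <= eps:
            return P_next, s
        P = P_next
    raise RuntimeError("modified Riccati iteration did not converge")

# Illustrative stable model (hypothetical values).
F = np.array([[0.9, 0.1],
              [0.0, 0.8]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.04]])
P_ss, steps = solve_cmRE(F, H, Q, R, pd=0.7)
```

The returned matrix satisfies the steady state equation (8) to within the chosen tolerance.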
The proposed implementation of the modified Riccati equation (pmRE) consists of the direct implementation of the recursion

P_{k+1} = F [(1 - p_d) P_k + p_d (P_k^{-1} + H^T R^{-1} H)^{-1}] F^T + Q. (9)

This equation is equivalent to the modified Riccati equation (5) and can be derived from (5) using the matrix inversion lemma, under the condition that, for k >= 1, the prediction error covariance matrix is positive definite (P_k > 0). This is guaranteed if Q > 0, due to the fact that P_k >= Q for k >= 1, if we use the zero initial condition P_0 = 0.
Notice that if Q > 0 and R > 0, then the existence of P_k^{-1} in (9) is guaranteed and the nonsingularity of P_k^{-1} + H^T R^{-1} H becomes obvious. Also note that H^T R^{-1} H in (9) is computed once (initialization process).
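A one-step sketch of the proposed form (9), with H^T R^{-1} H precomputed at initialization, lets the matrix-inversion-lemma equivalence with (7) be verified numerically for a positive definite P; the matrices are again illustrative.

```python
import numpy as np

def pmRE_step(P, F, H, Q, HtRinvH, pd):
    """One step of the proposed form (9), valid when P is positive definite:
    P_{k+1} = F [(1 - pd) P + pd (P^{-1} + H^T R^{-1} H)^{-1}] F^T + Q.
    HtRinvH = H^T R^{-1} H is computed once, at initialization."""
    inner = np.linalg.inv(np.linalg.inv(P) + HtRinvH)
    return F @ ((1.0 - pd) * P + pd * inner) @ F.T + Q

# Illustrative model (hypothetical values); Q > 0 keeps P_k > 0 for k >= 1.
F = np.array([[0.9, 0.1],
              [0.0, 0.8]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.04]])
HtRinvH = H.T @ np.linalg.inv(R) @ H    # offline initialization
P1 = Q.copy()                            # P_1 = Q when P_0 = 0
P2 = pmRE_step(P1, F, H, Q, HtRinvH, pd=0.6)
```

For P > 0 this step agrees with the classical step (7), since P - pd P H^T (H P H^T + R)^{-1} H P = (1 - pd) P + pd (P^{-1} + H^T R^{-1} H)^{-1}.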
Both the classical and the proposed algorithms for solving the modified Riccati equation are recursive ones. Thus, the total computational time required for the implementation of each algorithm is

t = c * s * t_op, (10)

where c is the per recursion calculation burden required for the online calculations of each algorithm, s is the number of recursions (steps) that each algorithm executes, and t_op is the time required to perform a scalar operation.
Note that the two algorithms are equivalent to each other with respect to their behaviour: theoretically, they calculate the steady-state prediction error covariance that satisfies (8). Then, it is reasonable to assume that both algorithms compute the limiting solution of the modified Riccati equation executing the same number of recursions, depending on the desired accuracy. Thus, in order to compare the algorithms with respect to their computational time, we have to compare the per recursion calculation burden required for their online calculations; the calculation burden of the offline calculations (initialization process) is not taken into account.
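The assumption that both recursions reach the same limit in the same number of steps can be checked by running them side by side to the same accuracy; the sketch below uses illustrative matrices and the step functions of (7) and (9).

```python
import numpy as np

def iterate(step, Q, eps=1e-11, max_iter=10**5):
    """Run a one-step map `step` from P_0 = 0 until ||P_{k+1}-P_k||_2 <= eps."""
    P = np.zeros_like(Q)
    for s in range(1, max_iter + 1):
        P_next = step(P)
        if np.linalg.norm(P_next - P, 2) <= eps:
            return P_next, s
        P = P_next
    raise RuntimeError("no convergence")

# Illustrative stable model (hypothetical values).
F = np.array([[0.9, 0.1], [0.0, 0.8]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.04]])
pd = 0.7
HtRinvH = H.T @ np.linalg.inv(R) @ H     # offline initialization for (9)

def cm_step(P):
    # classical form (7)
    S = H @ P @ H.T + R
    return F @ (P - pd * P @ H.T @ np.linalg.inv(S) @ H @ P) @ F.T + Q

def pm_step(P):
    # proposed form (9); P_0 = 0 is singular, so take the first step via (7)
    if not np.any(P):
        return F @ P @ F.T + Q
    inner = np.linalg.inv(np.linalg.inv(P) + HtRinvH)
    return F @ ((1 - pd) * P + pd * inner) @ F.T + Q

P_cm, s_cm = iterate(cm_step, Q)
P_pm, s_pm = iterate(pm_step, Q)
```

Both runs produce the same steady state and (up to floating-point rounding at the stopping threshold) the same recursion count, so the comparison indeed reduces to the per recursion burden.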
The computational analysis is based on the calculation burden of the matrix operations, which are summarized in Table 1 and needed for the implementation of the filtering algorithm [14].
Then, the (per recursion) computational requirements of the classical and the proposed algorithms for solving the modified Riccati equation are computed, as functions of the state dimension n and the measurement dimension m, in (11). The details of (11) are given in Tables 2 and 3.
From the above computational requirements, we derive the following conclusions.

(1) The per recursion calculation burden of the classical algorithm depends on the state dimension n and on the measurement dimension m, while the per recursion calculation burden of the proposed algorithm depends only on the state dimension n.

(2) The proposed algorithm is faster than the classical one if relation (12) between n and m holds. Figure 1 depicts the relation between the dimensions n and m that must hold in order to decide which algorithm is faster. Relation (12) is approximated by relation (13). Thus, it becomes obvious that we are able to establish the following method to distinguish the faster algorithm: "if n and m satisfy (13), then the proposed algorithm is faster than the classical algorithm; else the classical algorithm is faster than the proposed algorithm".
4. Convergence of the Modified Riccati Equation
Concerning the modified Riccati equation, it is known [9] that, for unstable systems (there is at least one eigenvalue of F that lies strictly outside the unit circle), there exists a critical value p_c of the detection probability, below which the modified Riccati equation diverges.
Simulation results were obtained for the modified Riccati equation. Both the classical and the proposed algorithms were implemented in order to solve it, for various stable as well as unstable models. The following results were confirmed.

(i) Both algorithms have the same behavior. If the system is stable (all eigenvalues of F lie inside the unit circle), then the modified Riccati equation always converges: there always exists a steady-state solution. If the system is unstable (there is at least one eigenvalue of F that lies strictly outside the unit circle), then there exists a critical value p_c of the detection probability, below which the modified Riccati equation diverges.

(ii) The critical value p_c of the detection probability increases in a logarithmic way as the maximum absolute eigenvalue of F increases, that is, as the system becomes more unstable. Figure 2 depicts the relation between the system stability and the modified Riccati equation convergence.

(iii) In the special case where the maximum absolute eigenvalue of F lies on the unit circle, the critical value of the detection probability takes its minimum value, of the order of 0.04.

(iv) The maximum absolute eigenvalue of F, above which the modified Riccati equation always diverges, is of the order of 10.
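The existence of a critical detection probability can be reproduced for a scalar model by bisecting on p_d and classifying a run as divergent when the iterates exceed a large bound. The scalar parameters below are hypothetical; for a scalar system with h = 1, a known result from the literature on Kalman filtering with intermittent observations gives the critical value 1 - 1/f^2 (0.75 for f = 2), which the sketch approximates.

```python
def diverges(f, q, r, pd, bound=1e8, n_iter=3000):
    """Iterate the scalar modified Riccati equation (5),
    P_{k+1} = f^2 P + q - pd * f^2 P^2 / (P + r),
    from P_0 = 0 and report divergence (iterates exceeding `bound`)."""
    P = 0.0
    for _ in range(n_iter):
        P = f * f * P + q - pd * f * f * P * P / (P + r)
        if P > bound:
            return True
    return False

def critical_pd(f, q=1.0, r=1.0, tol=1e-3):
    """Bisect on the detection probability to locate the critical value p_c:
    the recursion diverges below p_c and converges above it."""
    lo, hi = 0.0, 1.0                  # diverges at lo, converges at hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if diverges(f, q, r, mid):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

pc = critical_pd(2.0)                  # unstable scalar system, f = 2
```

Because convergence is very slow just above p_c, a finite-iteration divergence test blurs the boundary slightly, but the estimate lands close to the theoretical value.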