Abstract

The modified Riccati equation arises in the implementation of the Kalman filter for target tracking under measurement uncertainty, and it cannot be transformed into an equation of the form of the Riccati equation. An iterative algorithm for solving the modified Riccati equation is proposed. A method is established to decide when the proposed algorithm is faster than the classical one. Both algorithms exhibit the same behavior: if the system is stable, then a steady-state solution exists, while if the system is unstable, then there exists a critical value of the measurement detection probability below which both iterative algorithms diverge. It is established that this critical value increases logarithmically as the system becomes more unstable.

1. Introduction

The discrete-time modified Riccati equation emanating from the Kalman filter was originally formulated in [1]. It plays an important role in target tracking [1–10]. Theoretical properties of the modified Riccati equation have been derived in [2, 3]. It is well known [2] that the modified Riccati equation cannot be transformed into an equation of the form of the Riccati equation. The discrete-time Riccati equation arises in linear estimation, namely, in the implementation of the discrete-time Kalman filter [11]. The modified Riccati equation is solvable under certain conditions [2, 9] and has existence and uniqueness properties similar to those of the Riccati equation [2].

In Section 2, the modified Riccati equation associated with target tracking under measurement uncertainty is presented; the case without clutter but with a detection probability of less than one is considered. In Section 3, an iterative algorithm for solving the modified Riccati equation is proposed and compared to the classical one. A method is established to distinguish the faster algorithm. If the system is stable, then both algorithms converge to the steady-state solution. If the system is unstable, then there exists a critical value of the measurement detection probability below which both algorithms diverge. In Section 4, it is established that this critical value increases logarithmically as the system becomes more unstable.

2. The Modified Riccati Equation

Consider the following state space equations at time $k \geq 0$:
$$x_{k+1} = F x_k + w_k, \qquad z_k = H x_k + v_k, \quad (1)$$
where $x_k$ is the $n \times 1$ state vector at time $k$, $z_k$ is the $m \times 1$ measurement vector, $F$ is the $n \times n$ system transition matrix, and $H$ is the $m \times n$ output matrix. It is assumed that $\{w_k\}$ and $\{v_k\}$ are zero-mean, independent, white, Gaussian noise processes with constant covariance matrices given by $E[w_k w_k^T] = Q$ and $E[v_k v_k^T] = R$; that is, $Q$ is the $n \times n$ plant noise covariance matrix and $R$ is the $m \times m$ measurement noise covariance matrix.
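As an illustration, the model (1) can be simulated directly. The following Python/NumPy sketch is not taken from the paper; the matrices $F$, $H$, $Q$, $R$ and the two-dimensional state are illustrative assumptions chosen only to show how (1) generates states and measurements.

```python
# Minimal sketch of simulating the state space model (1); the matrices below are
# illustrative placeholders (n = 2, m = 1), not values taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

F = np.array([[1.0, 1.0],
              [0.0, 1.0]])        # n x n system transition matrix
H = np.array([[1.0, 0.0]])        # m x n output matrix
Q = 0.01 * np.eye(2)              # n x n plant noise covariance
R = np.array([[0.1]])             # m x m measurement noise covariance

def simulate(x0, steps):
    """Generate states x_k and measurements z_k according to (1)."""
    x, xs, zs = x0, [], []
    for _ in range(steps):
        v = rng.multivariate_normal(np.zeros(1), R)   # measurement noise v_k
        z = H @ x + v                                 # z_k = H x_k + v_k
        xs.append(x)
        zs.append(z)
        w = rng.multivariate_normal(np.zeros(2), Q)   # plant noise w_k
        x = F @ x + w                                 # x_{k+1} = F x_k + w_k
    return np.array(xs), np.array(zs)

states, measurements = simulate(np.zeros(2), steps=50)
```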

At the initial time $k = 0$, the state $x_0$ is independent of the processes $\{w_k\}$ and $\{v_k\}$ for any $k$, and $x_0$ is a Gaussian random variable with mean $\bar{x}_0$ and covariance $P_0$; that is,
$$E[x_0] = \bar{x}_0, \qquad E\big[(x_0 - \bar{x}_0)(x_0 - \bar{x}_0)^T\big] = P_0. \quad (2)$$
For $k \geq 0$, denote $Z_k = \{z_0, z_1, \ldots, z_k\}$, the state prediction $x_{k/k-1} = E[x_k / Z_{k-1}]$, and the prediction error covariance matrix $P_{k/k-1} = E[(x_k - x_{k/k-1})(x_k - x_{k/k-1})^T / Z_{k-1}]$. Using the discrete-time, time-invariant Kalman filter equations as described in [11–13], we derive the following recursion for the symmetric $n \times n$ prediction error covariance matrix $P_{k+1/k}$, the Riccati equation:
$$P_{k+1/k} = Q + F P_{k/k-1} F^T - F P_{k/k-1} H^T \big[H P_{k/k-1} H^T + R\big]^{-1} H P_{k/k-1} F^T \quad (3)$$
with initial condition $P_{0/-1} = P_0$.

The state space equations in (1) can be used in target tracking to describe a linear target motion and measurement model. In addition, consider that a measurement is received with detection probability $p_d$ [2], where
$$0 \leq p_d \leq 1. \quad (4)$$

Using the Kalman filter equations, we are able to derive [2, 3] a Kalman-like recursion for the symmetric $n \times n$ prediction error covariance matrix $P_{k+1/k}$, the modified Riccati equation:
$$P_{k+1/k} = Q + F P_{k/k-1} F^T - p_d F P_{k/k-1} H^T \big[H P_{k/k-1} H^T + R\big]^{-1} H P_{k/k-1} F^T. \quad (5)$$
Note that the prediction error covariance matrix is nonnegative definite ($P_{k+1/k} \geq 0$).

Also note that in the modified Riccati equation (5), the $m \times m$ matrix $H P_{k/k-1} H^T + R$ is nonsingular when $R$ is a positive definite matrix ($R > 0$), which reflects the fact that no measurement is exact; this is reasonable in physical problems.

It is remarkable that the following special cases are implied by (5).
(i) Setting $p_d = 1$, the classical Riccati equation (3) is derived. The difference between the modified Riccati equation and the Riccati equation is the detection probability term $p_d$. It is obvious that the modified Riccati equation (5) cannot be transformed into the classical Riccati equation (3).
(ii) Setting $p_d = 0$ in (5), the classical Lyapunov equation is derived:
$$P_{k+1/k} = Q + F P_{k/k-1} F^T, \quad (6)$$
which arises from the Riccati equation (3) in the infinite measurement noise case ($R \to \infty$).
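For concreteness, one recursion of (5) can be written as a short NumPy function. This is a minimal sketch (the function name and interface are our own); setting $p_d = 1$ or $p_d = 0$ reproduces the special cases (3) and (6) noted above.

```python
# One recursion P_{k/k-1} -> P_{k+1/k} of the modified Riccati equation (5).
# p_d = 1 gives the Riccati equation (3); p_d = 0 gives the Lyapunov equation (6).
import numpy as np

def modified_riccati_step(P, F, H, Q, R, p_d):
    S = H @ P @ H.T + R                       # m x m matrix, nonsingular for R > 0
    correction = F @ P @ H.T @ np.linalg.solve(S, H @ P @ F.T)
    return Q + F @ P @ F.T - p_d * correction
```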

3. Iterative Solutions of the Modified Riccati Equation

Concerning the modified Riccati equation, it is known [9] that for stable systems, which means that all eigenvalues of $F$ lie inside the unit circle, the modified Riccati equation always converges, and the limiting value $P$ of the prediction error covariance is the steady-state solution of the discrete-time modified Riccati equation.

The classical implementation of the modified Riccati equation (cmRE) arises from (5) and consists of the direct implementation of the recursion
$$P_{k+1/k} = F \Big\{ P_{k/k-1} - p_d P_{k/k-1} H^T \big[H P_{k/k-1} H^T + R\big]^{-1} H P_{k/k-1} \Big\} F^T + Q. \quad (7)$$
This equation is clearly equivalent to the modified Riccati equation (5) and achieves a reduction in computational burden by factoring out $F$.

Notice that if $R > 0$, then the nonsingularity of $H P_{k/k-1} H^T + R$ in (7) is guaranteed.

It is known [2] that the steady-state solution of the modified Riccati equation is independent of the initial condition $P_{0/-1} = P_0$. So, for convenience, we may use the zero initial condition $P_0 = 0$; then $P_{1/0} = Q$ serves as the initial condition for the classical implementation.

Note that convergence is achieved when $\|P_{k+1/k} - P_{k/k-1}\| < \varepsilon$, where $\varepsilon$ is a small positive number and $\|M\|$ denotes the norm of the matrix $M$, which is equal to the largest singular value of $M$. Then the steady-state solution $P$ satisfies the steady-state modified Riccati equation:
$$P = Q + F P F^T - p_d F P H^T \big[H P H^T + R\big]^{-1} H P F^T. \quad (8)$$
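A minimal sketch of the classical implementation (cmRE), assuming NumPy and the zero initial condition discussed above, is the following; the stopping threshold eps and the iteration cap are illustrative choices, and the largest-singular-value norm is obtained with np.linalg.norm(·, 2).

```python
# Classical implementation (cmRE): iterate the factored recursion (7) from P_{1/0} = Q
# until ||P_{k+1/k} - P_{k/k-1}|| < eps (largest singular value norm), as in (8).
import numpy as np

def cmre_solve(F, H, Q, R, p_d, eps=1e-9, max_iter=100_000):
    P = Q.copy()                                        # zero initial condition gives P_{1/0} = Q
    for _ in range(max_iter):
        S = H @ P @ H.T + R                             # nonsingular when R > 0
        inner = P - p_d * P @ H.T @ np.linalg.solve(S, H @ P)
        P_next = F @ inner @ F.T + Q                    # equation (7)
        if np.linalg.norm(P_next - P, 2) < eps:
            return P_next                               # steady-state solution of (8)
        P = P_next
    raise RuntimeError("no steady state reached; the system may be unstable with "
                       "p_d below its critical value")
```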

The proposed implementation of the modified Riccati equation (pmRE) consists of the direct implementation of the recursion
$$P_{k+1/k} = \big(1 - p_d\big)\big(Q + F P_{k/k-1} F^T\big) + p_d \Big(Q + F \big[P_{k/k-1}^{-1} + H^T R^{-1} H\big]^{-1} F^T\Big). \quad (9)$$
This equation is equivalent to the modified Riccati equation (5) and can be derived from (5) using the matrix inversion lemma, which gives $P_{k/k-1} - P_{k/k-1} H^T \big[H P_{k/k-1} H^T + R\big]^{-1} H P_{k/k-1} = \big[P_{k/k-1}^{-1} + H^T R^{-1} H\big]^{-1}$, under the condition that for $k \geq 1$ the prediction error covariance matrix $P_{k/k-1}$ is positive definite ($P_{k/k-1} > 0$). This is guaranteed if $Q > 0$, since $P_{1/0} = Q$ when we use the zero initial condition $P_{0/-1} = P_0 = 0$.

Notice that if $R > 0$, then the existence of $H^T R^{-1} H$ in (9) is guaranteed and the nonsingularity of $P_{k/k-1}^{-1} + H^T R^{-1} H$ becomes obvious. Also note that $H^T R^{-1} H$ in (9) is computed only once (initialization process).
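The proposed implementation (pmRE) can be sketched analogously, assuming $Q > 0$ so that every $P_{k/k-1}$ is invertible; note that $H^T R^{-1} H$ is formed once before the loop, mirroring the initialization remark above.

```python
# Proposed implementation (pmRE): iterate recursion (9), precomputing H^T R^{-1} H once.
# Requires P_{k/k-1} > 0, which holds when Q > 0 and the zero initial condition is used.
import numpy as np

def pmre_solve(F, H, Q, R, p_d, eps=1e-9, max_iter=100_000):
    HtRinvH = H.T @ np.linalg.solve(R, H)               # offline: H^T R^{-1} H
    P = Q.copy()                                        # P_{1/0} = Q
    for _ in range(max_iter):
        info_form = np.linalg.inv(np.linalg.inv(P) + HtRinvH)   # [P^{-1} + H^T R^{-1} H]^{-1}
        P_next = (1.0 - p_d) * (Q + F @ P @ F.T) + p_d * (Q + F @ info_form @ F.T)
        if np.linalg.norm(P_next - P, 2) < eps:
            return P_next
        P = P_next
    raise RuntimeError("no steady state reached")
```

For the same $F$, $H$, $Q$, $R$ and any $p_d$, cmre_solve and pmre_solve should return the same steady-state matrix up to numerical tolerance, since (7) and (9) are algebraically equivalent.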

Both the classical and the proposed algorithms for solving the modified Riccati equation are recursive. Thus, the total computational time required for the implementation of each algorithm is
$$t_a = B_a S_a t_o, \quad (10)$$
where $B_a$ is the per-recursion calculation burden required for the online calculations of each algorithm, $S_a$ is the number of recursions (steps) that the algorithm executes, and $t_o$ is the time required to perform a scalar operation.

Note that the two algorithms are equivalent with respect to their behaviour: both theoretically compute the steady-state prediction error covariance $P$ that satisfies (8). It is therefore reasonable to assume that both algorithms compute the limiting solution of the modified Riccati equation by executing the same number of recursions, depending on the desired accuracy. Thus, in order to compare the algorithms with respect to their computational time, we have to compare their per-recursion calculation burden required for the online calculations; the calculation burden of the offline calculations (initialization process) is not taken into account.

The computational analysis is based on the calculation burden of the matrix operations needed for the implementation of the filtering algorithm; these are summarized in Table 1 [14].

Then, the per-recursion computational requirements of the classical and the proposed algorithms for solving the modified Riccati equation are
$$B_{\mathrm{cmRE}} = \tfrac{1}{2}\big(6n^3 + n^2 + n\big) + 3n^2 m + 3n m^2 + \tfrac{1}{6}\big(16m^3 - 3m^2 - m\big),$$
$$B_{\mathrm{pmRE}} = \tfrac{1}{3}\big(34n^3 + 3n^2 + 5n\big). \quad (11)$$
The details of (11) are given in Tables 2 and 3.

From the above computational requirements we derive the following conclusions.
(1) The per-recursion calculation burden of the classical algorithm depends on the state dimension $n$ and on the measurement dimension $m$, while the per-recursion calculation burden of the proposed algorithm depends only on the state dimension $n$.
(2) The proposed algorithm is faster than the classical one if the following relation holds:
$$B_{\mathrm{cmRE}} - B_{\mathrm{pmRE}} = 3n^2 m + 3n m^2 + \tfrac{1}{6}\big(16m^3 - 3m^2 - m\big) - \tfrac{1}{6}\big(50n^3 + 3n^2 + 7n\big) > 0. \quad (12)$$
Figure 1 depicts the relation between the dimensions $n$ and $m$ used to decide which algorithm is faster. Relation (12) is then well approximated by the relation
$$m > n. \quad (13)$$
Thus it becomes obvious that we are able to establish the following method to distinguish the faster algorithm: "if $m > n$, then the proposed algorithm is faster than the classical algorithm; otherwise the classical algorithm is faster than the proposed algorithm". A numerical sketch of this decision rule is given below.
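Assuming the operation counts in (11), a small sketch of the decision rule (the helper names are our own) evaluates the exact difference (12) alongside the simpler rule (13).

```python
# Per-recursion burdens from (11) and the decision rule of (12)/(13).
def burden_cmre(n, m):
    return (6 * n**3 + n**2 + n) / 2 + 3 * n**2 * m + 3 * n * m**2 \
           + (16 * m**3 - 3 * m**2 - m) / 6

def burden_pmre(n):
    return (34 * n**3 + 3 * n**2 + 5 * n) / 3

def faster_algorithm(n, m):
    exact = "pmRE" if burden_cmre(n, m) > burden_pmre(n) else "cmRE"   # relation (12)
    approximate = "pmRE" if m > n else "cmRE"                          # relation (13)
    return exact, approximate

print(faster_algorithm(n=2, m=4))   # measurement dimension larger than state dimension
print(faster_algorithm(n=4, m=2))   # state dimension larger than measurement dimension
```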

4. Convergence of the Modified Riccati Equation

Concerning the modified Riccati equation, it is known [9] that for unstable systems (there is at least one eigenvalue of $F$ that lies strictly outside the unit circle), there exists a critical value of the detection probability $p_d$ below which the modified Riccati equation diverges.

Simulation results were obtained for the modified Riccati equation. Both the classical and the proposed algorithms were implemented in order to solve the modified Riccati equation. Various stable as well as unstable models were considered. The following results were confirmed.
(i) Both algorithms have the same behavior. If the system is stable (all eigenvalues of $F$ lie inside the unit circle), then the modified Riccati equation always converges: there always exists a steady-state solution. If the system is unstable (there is at least one eigenvalue of $F$ that lies strictly outside the unit circle), then there exists a critical value of the detection probability $p_d$ below which the modified Riccati equation diverges.
(ii) The critical value of the detection probability $p_d$ increases in a logarithmic way as the maximum absolute eigenvalue of $F$ increases, that is, as the system becomes more unstable. Figure 2 depicts the relation between the system stability and the convergence of the modified Riccati equation.
(iii) In the special case where the maximum absolute eigenvalue of $F$ lies on the unit circle, the critical value of the detection probability takes its minimum value, of the order of 0.04.
(iv) The value of the maximum absolute eigenvalue of $F$ above which the modified Riccati equation diverges for every detection probability $p_d < 1$ is of the order of 10.
A crude numerical sketch for estimating the critical detection probability is given below.
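As a rough illustration of items (i)–(iv), the critical detection probability can be estimated by bisecting $p_d$ on whether the iteration of (5) stays bounded. The scalar model below and the bound/iteration limits are illustrative assumptions, and the estimate near the boundary is only approximate.

```python
# Crude estimate of the critical detection probability for a scalar model:
# bisect p_d on whether iterating (5) stays below a large bound.  The scalar
# parameters f, h, q, r and the thresholds are illustrative assumptions.
def stays_bounded(f, h, q, r, p_d, iters=20_000, bound=1e12):
    p = q                                         # scalar P_{1/0} = Q
    for _ in range(iters):
        s = h * p * h + r                         # scalar H P H^T + R
        p = q + f * p * f - p_d * (f * p * h) ** 2 / s
        if p > bound:
            return False                          # treated as divergence
    return True

def critical_pd(f, h=1.0, q=1.0, r=1.0, tol=1e-3):
    lo, hi = 0.0, 1.0                             # iteration diverges near lo, converges near hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if stays_bounded(f, h, q, r, mid):
            hi = mid
        else:
            lo = mid
    return hi

print(critical_pd(f=1.2))   # unstable scalar system, |f| = 1.2 > 1
```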