
ISRN Applied Mathematics

Volume 2014 (2014), Article ID 417623, 10 pages

http://dx.doi.org/10.1155/2014/417623

## Iterative and Algebraic Algorithms for the Computation of the Steady State Kalman Filter Gain

^{1}Department of Electronic Engineering, Technological Educational Institute of Central Greece, 3rd km Old National Road Lamia-Athens, 35100 Lamia, Greece

^{2}Department of Computer Science and Biomedical Informatics, University of Thessaly, 2-4 Papasiopoulou Street, 35100 Lamia, Greece

Received 24 February 2014; Accepted 31 March 2014; Published 4 May 2014

Academic Editors: F. Ding, L. Guo, and H. C. So

Copyright © 2014 Nicholas Assimakis and Maria Adam. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The Kalman filter gain arises in linear estimation and is associated with linear systems. The gain is a matrix through which the estimation and the prediction of the state as well as the corresponding estimation and prediction error covariance matrices are computed. For time invariant and asymptotically stable systems, there exists a steady state value of the Kalman filter gain. The steady state Kalman filter gain is usually derived via the steady state prediction error covariance by first solving the corresponding Riccati equation. In this paper, we present iterative per-step and doubling algorithms as well as an algebraic algorithm for the steady state Kalman filter gain computation. These algorithms hold under conditions concerning the system parameters. The advantage of these algorithms is the autonomous computation of the steady state Kalman filter gain.

#### 1. Introduction

The Kalman filter gain arises in the Kalman filter equations in linear estimation and is associated with linear systems. State space models have been widely used in estimation theory to describe discrete time systems [1–5]. It is known [1] that, for time invariant systems, if the signal process model is asymptotically stable, then there exists a steady state value of the Kalman filter gain. Thus, the steady state gain is associated with time invariant systems described by the following state space equations:

$$x(k+1) = F\,x(k) + w(k), \qquad z(k) = H\,x(k) + v(k) \tag{1}$$

for $k \geq 0$, where $x(k)$ is the $n$-dimensional state vector at time $k$, $z(k)$ is the $m$-dimensional measurement vector at time $k$, $F$ is the $n \times n$ system transition matrix, $H$ is the $m \times n$ output matrix, $w(k)$ is the plant noise at time $k$, and $v(k)$ is the measurement noise at time $k$. Also, $w(k)$ and $v(k)$ are Gaussian zero-mean white random processes with covariance matrices $Q$ and $R$, respectively.

The discrete time Kalman filter [1, 6] is the most well-known algorithm that solves the filtering problem. In fact, the Kalman filter faces two problems simultaneously.

(i) *Estimation*: the aim is to recover, at time $k$, information about the state vector at time $k$ using the measurements up to time $k$.

(ii) *Prediction*: the aim is to obtain, at time $k$, information about the state vector at time $k+1$ using the measurements up to time $k$; it is clear that prediction is related to the forecasting side of information processing.

The Kalman filter uses the measurements up to time $k$ in order to produce the (one step) prediction of the state vector $x(k+1|k)$ and the corresponding prediction error covariance matrix $P(k+1|k)$, as well as the estimation of the state vector $x(k|k)$ and the corresponding estimation error covariance matrix $P(k|k)$. The Kalman filter equations needed for the computation of the prediction and estimation error covariance matrices are as follows:

$$K(k) = P(k|k-1)H^{T}\big(HP(k|k-1)H^{T} + R\big)^{-1}, \tag{2}$$

$$P(k|k) = \big(I - K(k)H\big)P(k|k-1), \tag{3}$$

$$P(k+1|k) = FP(k|k)F^{T} + Q \tag{4}$$

for $k \geq 0$, with initial condition $P(0|-1)$ for the time instant where no measurements are given. Note that $K(k)$ is the *Kalman filter gain*.
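For concreteness, one pass of the covariance recursion (2)–(4) can be sketched as follows. This is a minimal NumPy sketch assuming the standard notation $F, H, Q, R$ for the system matrices; the function name `kalman_step` is illustrative, not from the paper.

```python
import numpy as np

def kalman_step(F, H, Q, R, P_pred):
    """One pass of the covariance recursion: gain (2), estimation error
    covariance (3), and next prediction error covariance (4)."""
    S = H @ P_pred @ H.T + R                         # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)              # Kalman filter gain (2)
    P_est = (np.eye(F.shape[0]) - K @ H) @ P_pred    # estimation error covariance (3)
    P_next = F @ P_est @ F.T + Q                     # prediction error covariance (4)
    return K, P_est, P_next
```

Note that only the system matrices and the previous prediction error covariance enter the recursion; the measurements themselves are not needed to propagate the covariances, which is what makes an offline steady state computation possible.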

From (2) to (4), we are able to derive the *Riccati equation*, an iterative equation with respect to the prediction error covariance:

$$P(k+1|k) = FP(k|k-1)F^{T} + Q - FP(k|k-1)H^{T}\big(HP(k|k-1)H^{T} + R\big)^{-1}HP(k|k-1)F^{T}. \tag{5}$$

In the general case, where $Q$ and $R$ are positive definite matrices, using in (5) the matrix inversion lemma

$$P - PH^{T}\big(HPH^{T} + R\big)^{-1}HP = \big(P^{-1} + H^{T}R^{-1}H\big)^{-1}, \tag{6}$$

the Riccati equation is formulated as

$$P(k+1|k) = F\big(P(k|k-1)^{-1} + H^{T}R^{-1}H\big)^{-1}F^{T} + Q. \tag{7}$$

The Riccati equation is a nonlinear iterative equation with respect to the prediction error covariance. For time invariant systems, it is well known [1] that if the signal process model is asymptotically stable, then there exists a steady state value of the prediction error covariance matrix. In fact, the prediction error covariance tends to the steady state prediction error covariance: $P(k+1|k) \to P$ as $k \to \infty$.
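This convergence can be illustrated directly: iterate the Riccati equation from a zero initial condition until the prediction error covariance settles, then read off the steady state gain. The following NumPy sketch uses the standard notation above; the function name and tolerances are illustrative.

```python
import numpy as np

def steady_state_gain_riccati(F, H, Q, R, tol=1e-10, max_iter=10000):
    """Fixed-point iteration of the Riccati equation (5) from P = 0;
    on convergence, K = P H^T (H P H^T + R)^{-1} is the steady state gain."""
    n = F.shape[0]
    P = np.zeros((n, n))                     # zero initial condition
    for _ in range(max_iter):
        S = H @ P @ H.T + R                  # innovation covariance
        P_new = F @ P @ F.T + Q - F @ P @ H.T @ np.linalg.solve(S, H @ P @ F.T)
        if np.max(np.abs(P_new - P)) < tol:
            P = P_new
            break
        P = P_new
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    return K, P
```

For a scalar model with $F = 0.5$, $H = Q = R = 1$, the steady state covariance is the positive root of $p^2 - 0.25\,p - 1 = 0$, and the iteration converges to it from $P = 0$.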

The steady state prediction error covariance $P$ satisfies the *steady state Riccati equation*

$$P = FPF^{T} + Q - FPH^{T}\big(HPH^{T} + R\big)^{-1}HPF^{T}. \tag{8}$$

Then, from (2), it is clear that there also exists a steady state value $K$ of the Kalman filter gain [7]. The steady state gain can be calculated by

$$K = PH^{T}\big(HPH^{T} + R\big)^{-1}. \tag{9}$$

Also, from (3), it is clear that there also exists a steady state value of the estimation error covariance matrix [7], which can be calculated by

$$P_{e} = (I - KH)P. \tag{10}$$

It is obvious from (9) that the steady state Kalman filter gain can be derived via the steady state prediction error covariance. The covariance matrix in the Kalman filter plays an important role in many applications [1, 4, 6, 8–10]. The steady state prediction error covariance can be derived by solving the Riccati equation emanating from the Kalman filter. The discrete time Riccati equation has attracted recent attention. In view of the importance of the Riccati equation, there exists considerable literature on its algebraic solutions; for example, in [1, 7, 11, 12], the authors have derived an eigenvector solution, while the author of [13] has included solving scalar polynomials. Other methods are based on iterative solutions [1, 13–18] concerning per-step or doubling algorithms. The iterative algorithms that provide the steady state Kalman filter gain together with the prediction error covariance are the Chandrasekhar algorithms [1], as well as the iterative algorithm that calculates the Kalman gain only once for a period of the stationary channel, as opposed to at each data sample as in the conventional filter [19]. A geometric illustration of the Kalman filter gain is given in [20].

In this paper, we present algorithms for the steady state Kalman filter gain autonomous computation. These algorithms hold under conditions concerning the system parameters. The paper is organized as follows: two new per-step iterative algorithms, a new doubling iterative algorithm, and an algebraic algorithm for the computation of the steady state Kalman filter gain are presented in Section 2. In Section 3, two examples verify the results. Finally, Section 4 summarizes the conclusions.

#### 2. New Algorithms for the Steady State Kalman Filter Computation

##### 2.1. Assumptions

We assume the general case, where and are positive definite matrices.

The Kalman filter gain is a matrix of dimension .

We define the matrix It is clear that is a nonsymmetric matrix of dimension .

It is also clear that there exists a steady state value

Also, we define the matrix Note that is a symmetric positive semidefinite matrix and is positive definite if ; this means that is a nonsingular matrix in that case, with , [21].

##### 2.2. Indirect Steady State Kalman Filter Gain Computation

In this section, we present algorithms for computation. Then, we show how to compute the steady state Kalman filter gain through .

###### 2.2.1. Iterative Algorithms for Computation

In this section, we present two iterative per-step algorithms and an iterative doubling algorithm for computation.

*Per-Step Iterative Algorithm **1*. Using (2) and (11), it is derived that

Thus, arises

Using the Riccati equation (7), (15), the nonsingularity of , and some algebra we have

Also, from (2) and (13), we can write

Since the matrices are nonsingular, the last equation yields

Substituting in (16) the matrix given by (18), it follows that

Thus, the above equation can be written as

Combining (21) with (11), the following nonlinear iterative equation with respect to is derived: where

The algorithm uses the initial condition . It is known [1] that the prediction error covariance tends to the steady state prediction error covariance and that the convergence is independent of the initial uncertainty, that is, independent of the value of the initial condition . Thus, we are able to assume a zero initial condition and so we use the initial condition .

It is clear that tends to a steady state value and by (22) satisfies

*Per-Step Iterative Algorithm **2*. We rewrite (22) as
Thus, the following nonlinear iterative equation with respect to is derived:
where
The algorithm uses the initial condition . It is known [1] that the prediction error covariance tends to the steady state prediction error covariance and that the convergence is independent of the initial uncertainty, that is, independent of the value of the initial condition . Thus, we are able to assume a zero initial condition . In this case, in order to avoid , we use the initial condition .

It is clear that tends to a steady state value and by (26) satisfies

*Doubling Iterative Algorithm*. In (22), setting
we take
or
where
is a matrix of dimension and , , , as in (23).

We are able to use zero initial condition , so ; that is, and hence

We define with initial condition

Then, and, using the doubling principle [1], we have

Then we are able to derive, after some algebra, the following nonlinear iterative equations: with initial conditions

Then, since it is clear that tends to a steady state value .
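The doubling principle can be illustrated on the standard doubling recursion for the filtering Riccati equation [1]. The sketch below is not the paper's recursion in the new parameters, but a hedged NumPy rendering of the same principle in the standard notation $F, H, Q, R$: after $k$ passes the iterate equals the prediction error covariance after $2^k$ plain Riccati steps, so convergence to the steady state value is quadratic.

```python
import numpy as np

def steady_state_cov_doubling(F, H, Q, R, tol=1e-12, max_iter=100):
    """Doubling iteration for the steady state prediction error covariance:
    each pass squares the underlying transition, doubling the number of
    ordinary Riccati steps represented by the iterate P."""
    n = F.shape[0]
    A = F.T.copy()                           # transposed transition iterate
    G = H.T @ np.linalg.solve(R, H)          # H^T R^{-1} H
    P = Q.copy()                             # P after one Riccati step from zero
    for _ in range(max_iter):
        W = np.linalg.inv(np.eye(n) + G @ P)
        A_new = A @ W @ A                    # doubled transition
        G_new = G + A @ W @ G @ A.T
        P_new = P + A.T @ P @ W @ A          # covariance after twice the steps
        if np.max(np.abs(P_new - P)) < tol:
            return P_new
        A, G, P = A_new, G_new, P_new
    return P
```

For the scalar model $F = 0.5$, $H = Q = R = 1$, a handful of passes already reproduces the positive root of $p^2 - 0.25\,p - 1 = 0$, where the per-step iteration would need dozens of steps for the same accuracy.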

###### 2.2.2. Algebraic Algorithm for Computation

In this section, we present an algebraic algorithm for computation. As in (29), setting and using the parameters , , , by (23), we derive which is a matrix of dimension . Since , it is evident that is a nonsingular matrix whose eigenvalues occur in reciprocal pairs.

Thus, (43) can be written where is a diagonal matrix containing the eigenvalues of , with diagonal matrix with all the eigenvalues of lying outside the unit circle, and is the matrix containing the corresponding eigenvectors of , with

We are able to use zero initial condition , so ; that is, and hence

Then, from (50) and (45)–(48), we are able to write

that is,

Substituting in (42) the matrices , from (52), we have that

Furthermore, the diagonal matrix contains all the eigenvalues of lying inside the unit circle, from which it follows that . Then, tends to a steady state value with , and from (53) it arises that
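The eigenvector construction can be sketched along the lines of Vaughan's classical solution [12]: form the $2n \times 2n$ symplectic matrix of the Riccati recursion (which requires $F$ nonsingular), select the $n$ eigenvalues outside the unit circle, and build the steady state covariance from the corresponding eigenvectors. The NumPy sketch below uses the standard notation $F, H, Q, R$ rather than the paper's exact parameters.

```python
import numpy as np

def steady_state_cov_eigen(F, H, Q, R):
    """Algebraic (eigenvector) solution of the steady state Riccati equation:
    the recursion maps [S; T] -> M [S; T] with P = S T^{-1}, so the limit is
    spanned by the eigenvectors of the eigenvalues outside the unit circle,
    giving P = U V^{-1}."""
    n = F.shape[0]
    W = H.T @ np.linalg.solve(R, H)          # H^T R^{-1} H
    Fit = np.linalg.inv(F).T                 # F^{-T}; F must be nonsingular
    M = np.block([[F + Q @ Fit @ W, Q @ Fit],
                  [Fit @ W,         Fit]])   # symplectic, reciprocal eigenpairs
    vals, vecs = np.linalg.eig(M)
    sel = np.abs(vals) > 1.0                 # the n unstable eigenvalues
    U, V = vecs[:n, sel], vecs[n:, sel]
    return np.real(U @ np.linalg.inv(V))
```

Unlike the iterative schemes, this computes the steady state value in one shot, at the cost of a $2n \times 2n$ eigendecomposition.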

###### 2.2.3. Steady State Kalman Filter Gain Computation

All algorithms presented in Sections 2.2.1 and 2.2.2 compute the steady state value . Taking into account the assumptions of Section 2.1, we are able to conclude that, under the condition , the steady state gain is

##### 2.3. Direct Steady State Kalman Filter Gain Computation

In this section, we present algorithms for the direct computation of the steady state Kalman filter gain . The proposed algorithms compute the steady state Kalman filter gain directly, that is, without using . All these algorithms hold under the assumption that . Note that, since , and are nonsingular matrices.

###### 2.3.1. Iterative Algorithms for Computation

In this section, we present two iterative per-step algorithms and an iterative doubling algorithm for computation.

*Per-Step Iterative Algorithm **1*. Using (11), (22), and (13), we are able to derive the following nonlinear iterative equation with respect to the Kalman filter gain :

The nonsingularity of and (13) allow us to write the equality in (56) as where

The initial condition is . It is known [1] that the prediction error covariance tends to the steady state prediction error covariance and that the convergence is independent of the initial uncertainty, that is, independent of the value of the initial condition . Thus, we are able to assume a zero initial condition and so we use the initial condition .

It is clear that tends to a steady state value satisfying

*Per-Step Iterative Algorithm **2*. Using (57), we are able to derive the following nonlinear iterative equation with respect to the Kalman filter gain :
where are given by (58) and

The algorithm uses the initial condition . It is known [1] that the prediction error covariance tends to the steady state prediction error covariance and that the convergence is independent of the initial uncertainty, that is, independent of the value of the initial condition . Thus, we are able to assume a zero initial condition . In this case, in order to avoid , we use the initial condition .

It is clear that tends to a steady state value satisfying

*Doubling Iterative Algorithm*. In (57), setting
we take
or
where
is a matrix of dimension and , , , as in (58).

Working as in the doubling iterative algorithm of Section 2.2.1 and using zero initial condition , so , we are able to derive the following nonlinear iterative equations: with initial conditions It is clear that tends to a steady state value .

###### 2.3.2. Algebraic Algorithm for Computation

In this section, we present an algebraic algorithm for computation. Working as in the algebraic algorithm of Section 2.2.2 and using the parameters , , , by (58), we derive which is a matrix of dimension .

Then, the steady state Kalman filter gain is

##### 2.4. Advantages of the Proposed Algorithms

All algorithms for the computation of the steady state Kalman filter gain , presented in Section 2, are summarized in Table 1. It is clear that the direct computation of the Kalman filter gain is feasible only if the following restriction holds: . The advantage of the presented algorithms is the autonomous computation of the steady state Kalman filter gain. Specifically, the steady state Kalman filter gain is important when we want to compute the parameters of the steady state Kalman filter It is obvious from (71) that the parameters of the steady state Kalman filter are related to the steady state Kalman filter gain.
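Once the steady state gain is available, the steady state filter runs with fixed, precomputed matrices and no online Riccati update. A hedged sketch of the one step predictor form $x(k+1|k) = F(I - KH)\,x(k|k-1) + FK\,z(k)$, with illustrative names:

```python
import numpy as np

def steady_state_predictor(F, H, K, measurements, x0):
    """Steady state Kalman predictor: the closed loop transition F(I - K H)
    and the input matrix F K are computed once; each step is then a fixed
    linear map of the previous prediction and the current measurement."""
    n = F.shape[0]
    A = F @ (np.eye(n) - K @ H)      # closed loop transition, computed once
    B = F @ K                        # fixed measurement gain
    x = x0
    preds = []
    for z in measurements:
        x = A @ x + B @ z            # one step prediction of the state
        preds.append(x)
    return preds
```

This is why the autonomous gain computation matters in practice: the online filter never needs the covariance matrices at all.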

In particular, the steady state prediction error covariance can be computed via the steady state gain and is given by

Indeed, from (2), arises , which leads to Since , the matrix is nonsingular [21]; thus the formula of the steady state prediction error covariance in (72) follows immediately from the last equation.

Also, by (3), the steady state estimation error covariance can be computed via the steady state prediction error covariance
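For the square case with $H$ nonsingular, the relations above can be sketched as a small helper that recovers both steady state covariances from the gain alone. This is a hedged illustration in the standard notation: from $K = PH^{T}(HPH^{T}+R)^{-1}$ one gets $(I - KH)PH^{T} = KR$, which is solvable for $P$ when $I - KH$ and $H$ are invertible.

```python
import numpy as np

def covariances_from_gain(K, H, R):
    """Recover the steady state covariances from the steady state gain K,
    assuming H square and nonsingular: P = (I - K H)^{-1} K R H^{-T},
    and then the estimation error covariance P_e = (I - K H) P."""
    n = K.shape[0]
    P_pred = np.linalg.solve(np.eye(n) - K @ H, K @ R) @ np.linalg.inv(H.T)
    P_est = (np.eye(n) - K @ H) @ P_pred     # estimation error covariance (10)
    return P_pred, P_est
```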

#### 3. Examples

In this section, two examples verify the results of Section 2.

*Example 1.* A model of dimensions and is assumed with parameters:
In this example, we have with .

Using all algorithms presented in Section 2.2, we computed

Then, using (55), we computed the steady state gain

*Example 2.* A model of dimensions is assumed with parameters:
In this example, we have .

Using all algorithms presented in Section 2.2, we computed

Then, using (55), we computed the steady state gain

We also computed the same steady state gain, using all algorithms presented in Section 2.3, since .

#### 4. Conclusions

The Kalman filter gain arises in Kalman filter equations in linear estimation and is associated with linear systems. The gain is a matrix through which the estimation and the prediction of the state as well as the corresponding estimation and prediction error covariance matrices are computed. For time invariant and asymptotically stable systems, there exist steady state values of the estimation and prediction error covariance matrices. There exists also a steady state value of the Kalman filter gain.

The steady state Kalman filter gain is usually derived via the steady state prediction error covariance by first solving the corresponding Riccati equation. In view of the importance of the Riccati equation, there exists considerable literature on its algebraic or iterative solutions, including the Chandrasekhar algorithms, iterative algorithms that provide the steady state Kalman filter gain together with the prediction error covariance.

Iterative per-step and doubling algorithms as well as an algebraic algorithm for the steady state Kalman filter gain computation were presented. These algorithms hold under conditions concerning the system parameters. The advantage of these algorithms is the autonomous computation of the steady state Kalman filter gain. This is important if we want to compute only the steady state Kalman filter gain or to compute the parameters of the steady state Kalman filter, which are related to the steady state Kalman filter gain.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### References

1. B. D. O. Anderson and J. B. Moore, *Optimal Filtering*, Dover Publications, New York, NY, USA, 2005.
2. N. Assimakis and M. Adam, “Global systems for mobile position tracking using Kalman and Lainiotis filters,” *The Scientific World Journal*, vol. 2014, Article ID 130512, 8 pages, 2014.
3. F. Ding, “Combined state and least squares parameter estimation algorithms for dynamic systems,” *Applied Mathematical Modelling*, vol. 38, no. 1, pp. 403–412, 2014.
4. F. Ding and T. Chen, “Hierarchical identification of lifted state-space models for general dual-rate systems,” *IEEE Transactions on Circuits and Systems I: Regular Papers*, vol. 52, no. 6, pp. 1179–1187, 2005.
5. B. Ristic, S. Arulampalam, and N. Gordon, *Beyond the Kalman Filter*, Artech House, Boston, Mass, USA, 2004.
6. M. S. Grewal and A. P. Andrews, *Kalman Filtering: Theory and Practice Using MATLAB*, John Wiley & Sons, Hoboken, NJ, USA, 3rd edition, 2008.
7. N. Assimakis and M. Adam, “Kalman filter Riccati equation for the prediction, estimation and smoothing error covariance matrices,” *ISRN Computational Mathematics*, vol. 2013, Article ID 249594, 7 pages, 2013.
8. J. R. P. de Carvalho, E. D. Assad, and H. S. Pinto, “Kalman filter and correction of the temperatures estimated by PRECIS model,” *Atmospheric Research*, vol. 102, no. 1-2, pp. 218–226, 2011.
9. R. Furrer and T. Bengtsson, “Estimation of high-dimensional prior and posterior covariance matrices in Kalman filter variants,” *Journal of Multivariate Analysis*, vol. 98, no. 2, pp. 227–255, 2007.
10. Y. Teruyama and T. Watanabe, “Effectiveness of variable-gain Kalman filter based on angle error calculated from acceleration signals in lower limb angle measurement with inertial sensors,” *Computational and Mathematical Methods in Medicine*, vol. 2013, Article ID 398042, 12 pages, 2013.
11. S. Han, “A closed-form solution to the discrete-time Kalman filter and its applications,” *Systems & Control Letters*, vol. 59, no. 12, pp. 799–805, 2010.
12. D. R. Vaughan, “A nonrecursive algebraic solution for the discrete Riccati equation,” *IEEE Transactions on Automatic Control*, vol. 15, no. 5, pp. 597–599, 1970.
13. R. Leland, “An alternate calculation of the discrete-time Kalman filter gain and Riccati equation solution,” *IEEE Transactions on Automatic Control*, vol. 41, no. 12, pp. 1817–1819, 1996.
14. N. Assimakis, “Discrete time Riccati equation recursive multiple steps solutions,” *Contemporary Engineering Sciences*, vol. 2, no. 7, pp. 333–354, 2009.
15. N. D. Assimakis, D. G. Lainiotis, S. K. Katsikas, and F. L. Sanida, “A survey of recursive algorithms for the solution of the discrete time Riccati equation,” *Nonlinear Analysis: Theory, Methods and Applications*, vol. 30, no. 4, pp. 2409–2420, 1997.
16. N. Assimakis, S. Roulis, and D. Lainiotis, “Recursive solutions of the discrete time Riccati equation,” *Neural, Parallel and Scientific Computations*, vol. 11, no. 3, pp. 343–350, 2003.
17. D. G. Lainiotis, N. D. Assimakis, and S. K. Katsikas, “A new computationally effective algorithm for solving the discrete Riccati equation,” *Journal of Mathematical Analysis and Applications*, vol. 186, no. 3, pp. 868–895, 1994.
18. D. G. Lainiotis, N. D. Assimakis, and S. K. Katsikas, “Fast and numerically robust recursive algorithms for solving the discrete time Riccati equation: the case of nonsingular plant noise covariance matrix,” *Neural, Parallel and Scientific Computations*, vol. 3, no. 4, pp. 565–583, 1995.
19. M. Liyanage and I. Sasase, “Steady-state Kalman filtering for channel estimation in OFDM systems utilizing SNR,” in *Proceedings of the IEEE International Conference on Communications (ICC '09)*, pp. 1–6, Dresden, Germany, June 2009.
20. T. R. Kronhamn, “Geometric illustration of the Kalman filter gain and covariance update algorithms,” *IEEE Control Systems Magazine*, vol. 5, no. 2, pp. 41–43, 1985.
21. R. A. Horn and C. R. Johnson, *Matrix Analysis*, Cambridge University Press, Cambridge, UK, 2005.