Abstract

We relate a deterministic Kalman filter on a semi-infinite interval to a linear-quadratic tracking control model with an unfixed initial condition.

1. Introduction

In [1], Sontag considered the deterministic analogue of the Kalman filtering problem on a finite interval. The deterministic model admits a natural extension to a semi-infinite interval. This extension is of special interest because, for the standard linear-quadratic stochastic control problem, the passage to a semi-infinite interval leads to complications with the standard quadratic objective function (see, e.g., [2]). According to [1], the model which we are going to consider has the following form: Here we assume that the pair , where is a vector subspace of the Hilbert space (with a Hilbert space of -valued square-integrable functions), is defined as follows: Here is an by matrix; is an by matrix; is an by matrix and positive definite; is an by matrix and positive definite; is an by matrix; . Notice that in (1.1)–(1.3) is not fixed and we minimize over all triples satisfying our assumptions.

Notice also that we interpret (1.1)–(1.3) as an estimation problem of the following form: we try to estimate with the help of the observation by minimizing the perturbations , and choosing an appropriate initial condition .
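For orientation, a deterministic estimation problem of this general type is commonly written in the following shape. This is only a sketch in notation of our own choosing; the symbols $A$, $B$, $C$, $Q$, $R$, $x_0$ are illustrative and need not coincide with those of (1.1)–(1.3):

```latex
% Generic deterministic least-squares estimation model on [0, \infty);
% all symbols below are illustrative assumptions, not the paper's notation.
\begin{aligned}
\min_{w,\,v,\,x_0}\ & \int_0^{\infty} \bigl( w(t)^{T} Q^{-1} w(t)
                      + v(t)^{T} R^{-1} v(t) \bigr)\, dt \\
\text{subject to}\ & \dot{x}(t) = A x(t) + B w(t), \\
                   & y(t) = C x(t) + v(t), \qquad x(0) = x_0 \ \text{free}.
\end{aligned}
```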

2. Solution of the Deterministic Problem

Consider the algebraic Riccati equation where . Assuming that the pair is stabilizable and the pair is detectable, there exists a negative definite symmetric solution to (2.1) such that the matrix is stable (see, e.g., Theorem 12.3 in [3]). In [4], we described a complete solution of the linear-quadratic control problem on a semi-infinite interval with a linear term in the objective function. The major motivation for this extension comes from [5], where we consider applications of primal-dual interior-point algorithms to the computational analysis of multicriteria linear-quadratic control problems in mini-max form. Computing a primal-dual direction requires solving, at each iteration, linear-quadratic control problems with the same quadratic part and different linear parts. Using the results in [5], we can describe the optimal solution to (1.1)–(1.3) with fixed as follows.
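Numerically, a stabilizing solution of a filtering-type algebraic Riccati equation of this kind can be obtained with SciPy via duality with a control ARE. The sketch below uses illustrative matrices of our own choosing and a positive definite sign convention (the paper's negative definite solution differs only by a sign change):

```python
# Minimal sketch (illustrative data): stabilizing solution S of the
# filtering-type ARE  A S + S A^T - S C^T R^{-1} C S + Q = 0.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # example system matrix (Hurwitz)
C = np.array([[1.0, 0.0]])                # example observation matrix
Q = np.eye(2)                             # positive definite weights
R = np.eye(1)

# Duality: the filtering ARE for (A, C) is the control ARE for (A^T, C^T).
S = solve_continuous_are(A.T, C.T, Q, R)

K = S @ C.T @ np.linalg.inv(R)            # steady-state filter gain
print(np.linalg.eigvals(A - K @ C).real.max())  # negative: closed loop stable
```

The stabilizing property is exactly what the cited existence theorem guarantees: the closed-loop matrix built from the solution is stable.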

There exists a unique solution satisfying the differential equation Moreover, can be explicitly described as follows: The optimal solution to (1.1)–(1.3) has the form For details see [5].

Notice that does not depend on . To solve the original problem (1.1)–(1.3), we need to express the minimal value of the functional (1.1) in terms of .

Theorem 2.1. Let be an optimal solution of (1.1)–(1.3) with fixed given by (2.2)–(2.5). Then

Remark 2.2. Notice that is a strictly convex function of , and hence the minimum of as a function of is attained at Therefore, (2.2)–(2.5) give a complete solution of the original problem (1.1)–(1.3).

Proof. Let be a feasible solution to (1.1)–(1.3), where is fixed. Consider where we suppress the explicit dependence on time. Notice that, by (2.5), for any feasible solution implies that . Furthermore, let , where Now , and consequently Therefore, Using (2.1) and (2.2), we obtain Hence, taking into account that (see [5] for details), we obtain where .
Notice that and . This shows that is indeed an optimal solution to (1.1)–(1.3) (with fixed ), and proves (2.6).

Remark 2.3. By (2.14) and , we have , and equality occurs if and only if (see also (2.9)). Hence is the unique solution to problem (1.1)–(1.3). Similar reasoning works in the discrete-time case.

3. Steady-State Deterministic Kalman Filtering

In light of (2.7), it is natural to consider the process as an estimate of the optimal solution to problem (1.1)–(1.3). Let us find the differential equation for .

Proposition 3.1. One has

Remark 3.2. Notice that is a solution to the algebraic equation In other words, the differential equation (3.2) is a precise deterministic analogue of the stochastic differential equation describing the optimal (steady-state) estimate in the Kalman filtering problem. See, for example, [2].
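The steady-state filter described by this differential equation can be illustrated numerically. The following sketch (illustrative matrices in our own notation, forward-Euler integration on noise-free data) shows the estimate converging to the true state:

```python
# Minimal sketch of a steady-state (Kalman-Bucy-type) filter
#   xhat' = A xhat + K (y - C xhat),  K = S C^T R^{-1},
# run on noise-free synthetic data with a forward-Euler step.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # illustrative system matrix
C = np.array([[1.0, 0.0]])                # illustrative observation matrix
Q, R = np.eye(2), np.eye(1)
S = solve_continuous_are(A.T, C.T, Q, R)  # filtering ARE via duality
K = S @ C.T @ np.linalg.inv(R)            # steady-state gain

dt, steps = 1e-3, 20000
x = np.array([1.0, -1.0])                 # true state
xhat = np.zeros(2)                        # filter starts from a wrong guess
for _ in range(steps):
    y = C @ x                             # noise-free observation
    x = x + dt * (A @ x)
    xhat = xhat + dt * (A @ xhat + K @ (y - C @ xhat))

print(np.linalg.norm(x - xhat))           # estimation error has decayed
```

Because the closed-loop matrix is stable, the error obeys a stable linear equation and decays regardless of the (wrong) initial estimate.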

Proof. Using (2.2) and (3.1), we obtain Since is a solution to (2.1), we have Hence, Thus, we obtain (3.2).

Remark 3.3. Notice that due to (3.1) and consequently would be an optimal solution to (1.1)–(1.3) if it were feasible for this problem.

4. The Solution of the Discrete Deterministic Problem

It is natural to consider the discrete version of the problem (1.1)–(1.3). In this case, the problem can be reformulated as follows:

Here we let denote a sequence for . We say that if , where is the norm induced by the inner product in . Let .

Like in the continuous case, we assume that the pair , where is a vector subspace of the Hilbert space .

Observe now that the inner product in has the following form:

The vector subspace now takes the following form: Here is an by matrix; is an by matrix; is an by matrix and positive definite; is an by matrix and positive definite; is an by matrix; and .

As in the continuous case, we interpret (4.1)–(4.3) as an estimation problem of the following form: we try to estimate with the help of the observation by minimizing the perturbations , and choosing an appropriate initial condition .

According to [4], a general cost function for a discrete linear-quadratic control problem with a linear term in the cost function has the following form: where and . The solution to this particular class of problems can be completely described by solving several systems of recurrence relations together with the following discrete algebraic Riccati equation (DARE): We assume that this equation has a positive definite stabilizing solution . For sufficient conditions, see [6].
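A stabilizing solution of a DARE of this filtering type can likewise be computed with SciPy. The sketch below uses illustrative matrices of our own choosing:

```python
# Minimal sketch (illustrative data): stabilizing solution P of the
# filtering-type DARE
#   A P A^T - P - A P C^T (C P C^T + R)^{-1} C P A^T + Q = 0.
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[0.95, 0.1], [0.0, 0.9]])   # example Schur-stable matrix
C = np.array([[1.0, 0.0]])
Q, R = 0.1 * np.eye(2), np.eye(1)

# Duality: the filtering DARE for (A, C) is the control DARE for (A^T, C^T).
P = solve_discrete_are(A.T, C.T, Q, R)

K = A @ P @ C.T @ np.linalg.inv(C @ P @ C.T + R)  # predictor-form gain
print(max(abs(np.linalg.eigvals(A - K @ C))))     # < 1: closed loop Schur stable
```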

In our situation, we have It is easy to see that and . By [4], there is a unique solution of the following recurrence relations: For details on an explicit solution of the above recurrence relations, see [4]. For simplicity, we let , and we also let So our recurrence relation for now takes the form with the corresponding DARE The optimal solution to (4.1)–(4.3) has the following form: For details, see [4]. To solve the original problem (4.1)–(4.3), we need to express the minimal value of the functional (4.1) in terms of .

Theorem 4.1. Let be an optimal solution of (4.1)–(4.3) with fixed given by (4.15)-(4.16). Then

Proof. For simplicity of notation, we write for . Let where We assume that . Since and , we have Recalling now the definition of , we have Therefore, We then rearrange the terms and complete the square to obtain a useful expression for Δ: Notice that, since is fixed, we let and sum both sides: By the definition of , . Therefore, As a result, This completes the proof.

As in the continuous case, for the discrete case, is a strictly convex function of , and hence the minimum of as a function of is attained at , where is the unique solution to (4.13).

Since we have (4.27), it is natural to consider the process as an estimate of the optimal solution to problem (4.1)–(4.3). Let us find the recurrence relation for .

Proposition 4.2. Assuming that the closed loop matrix is invertible, one has

Proof. We can rewrite (4.13) in the form
Using the algebraic Riccati equation, we can rewrite (4.30) in the form which is equivalent to The result follows.

Remark 4.3. Notice that (4.29) is the analogue of the “limiting” discrete Kalman filter [6, Page 384, (17.6.1)].
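The limiting discrete Kalman filter mentioned in the remark can be illustrated as a fixed-gain recursion. This sketch (illustrative matrices in our own notation, noise-free data) runs the predictor-form recursion with the steady-state gain from the DARE:

```python
# Minimal sketch of the limiting (steady-state) discrete Kalman filter
#   xhat_{k+1} = A xhat_k + K (y_k - C xhat_k)
# with a fixed gain K from the DARE, on noise-free synthetic data.
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[0.95, 0.1], [0.0, 0.9]])   # illustrative Schur-stable matrix
C = np.array([[1.0, 0.0]])
Q, R = 0.1 * np.eye(2), np.eye(1)
P = solve_discrete_are(A.T, C.T, Q, R)    # filtering DARE via duality
K = A @ P @ C.T @ np.linalg.inv(C @ P @ C.T + R)

x = np.array([2.0, -1.0])                 # true state
xhat = np.zeros(2)                        # deliberately wrong initial estimate
for _ in range(400):
    y = C @ x                             # noise-free observation
    xhat = A @ xhat + K @ (y - C @ xhat)  # fixed-gain predictor update
    x = A @ x
print(np.linalg.norm(x - xhat))           # error driven to (numerical) zero
```

The estimation error satisfies the fixed linear recursion with the Schur-stable closed-loop matrix, so it contracts geometrically to zero.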

5. Concluding Remarks

In this paper, we relate a deterministic Kalman filter on a semi-infinite interval to a linear-quadratic tracking control model with an unfixed initial condition. Solutions of the deterministic problem in both the continuous and discrete cases are described. This extends the result of Sontag to the semi-infinite interval.

Acknowledgments

The research of L. Faybusovich was partially supported by the National Science Foundation, Grant DMS07-12809. The research of T. Mouktonglang was partially supported by the Centre of Excellence in Mathematics and the Commission for Higher Education (CHE), Sri Ayutthaya Road, Bangkok, Thailand.