Research Article | Open Access

# Stochastic Linear Quadratic Optimal Control with Indefinite Control Weights and Constraint for Discrete-Time Systems

**Academic Editor:** Weihai Zhang

#### Abstract

The Karush-Kuhn-Tucker (KKT) theorem is used to study stochastic linear quadratic optimal control with terminal constraint for discrete-time systems, allowing the control weighting matrices in the cost to be indefinite. A generalized difference Riccati equation is derived, which differs from the one arising in the unconstrained case. It is proved that the well-posedness and the attainability of the stochastic linear quadratic optimal control problem are equivalent. Moreover, an optimal control can be expressed in terms of the solution of the generalized difference Riccati equation.

#### 1. Introduction

The linear quadratic (LQ) optimal control problem was pioneered by Kalman [1] for deterministic systems under the assumption that the control weighting matrix in the cost is positive definite. The definite LQ control problem has been investigated extensively by many researchers [2, 3]; its optimal control is a feedback given by the solution of the Riccati equation. The extension of the deterministic LQ problem to the stochastic case has been playing an important role in engineering design and applications; see the monographs [4–7]. The stochastic LQ control problem for Itô systems was initiated by Wonham [4], while the nonlinear regulator problem was discussed in [8] and prompted a sequence of works [9–11]. Some of the works on this subject reveal that, for stochastic Itô systems, even if the state and control weighting matrices are indefinite, the corresponding stochastic LQ problem may still be well posed, a phenomenon first found in [12].

For discrete-time LQ control problems with control- and/or state-dependent noises, there have been some works in the literature [13, 14]; it is worth noting that the state weighting matrix is nonnegative and the control weighting matrix is positive definite in both papers. In [15–18], however, the control weighting matrix is not required to be positive definite and may even be negative definite. In addition, most previous researchers have mainly studied indefinite stochastic LQ problems without constraints. In fact, constraints are of considerable importance in many physical systems. The finite-time indefinite stochastic LQ control problem with a linear terminal state constraint was discussed in [19] and extended in [20–22]. It is well known that, when the system components are perturbed by additive Gaussian white noise, the LQ problem is called the linear quadratic Gaussian (LQG) problem. As noted in [15], many real systems are subject not only to Gaussian white noise but also to non-Gaussian noise.

In this paper, differently from [20–22], we discuss the stochastic optimal control of discrete-time systems subject to non-Gaussian noises. We concentrate on finite-horizon indefinite stochastic LQ control with a terminal inequality constraint; such constraints are often seen in filtering problems [23, 24]. The existence of an optimal linear state feedback control is shown by means of the KKT theorem. The remainder of this paper is organized as follows. In Section 2, we give some definitions and preliminaries. Section 3 contains our main theorems: a necessary condition for the existence of an optimal linear state feedback control is derived, and it is shown that the solvability of the GDRE, the well-posedness, and the attainability of the LQ problem are all equivalent. In Section 4, we give the structure of the optimal control. Section 5 concludes the paper.

For convenience, we adopt the following notation in this note: $A^{\top}$ is the transpose of a matrix $A$; $\operatorname{tr}(A)$ is the trace of a square matrix $A$; $A > 0$ ($A \ge 0$) means that $A$ is a positive definite (positive semidefinite) symmetric matrix; $E[x]$ represents the mathematical expectation of a random variable $x$; $\mathbb{R}^n$ is the $n$-dimensional Euclidean space with the usual $2$-norm; $\mathbb{R}^{m \times n}$ is the vector space of all $m \times n$ matrices with entries in $\mathbb{R}$; $A^{\dagger}$ is the Moore-Penrose pseudoinverse of a matrix $A$; $I$ is the identity matrix with appropriate dimension.

#### 2. Preliminaries

Consider the discrete-time stochastic system (1), where $x_0$ is the given initial state and $x(k)$ and $u(k)$ are, respectively, the system state and the control input. $A(k)$, $B(k)$, $C(k)$, and $D(k)$ are matrix-valued functions with appropriate dimensions.

The noises $\{w(k)\}$ are defined on a complete probability space. Without loss of generality, we assume that the $w(k)$ are scalar random variables. The initial state $x_0$ is assumed to be independent of the noise sequence, and the noises are mutually independent with zero mean and finite second moments.
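The displayed state equation uses standard indefinite stochastic LQ notation. For orientation, a typical state equation of this class (our assumption of the intended form, with multiplicative noise entering through both the state and the control) reads:

```latex
x(k+1) = A(k)x(k) + B(k)u(k) + \bigl[C(k)x(k) + D(k)u(k)\bigr]w(k),
\qquad x(0) = x_0, \quad k = 0, 1, \ldots, N-1.
```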

We denote by $\mathcal{F}_k$ the $\sigma$-algebra generated by the noises up to time $k$. An admissible control $u$ is a square-integrable stochastic process measurable with respect to this filtration, and the admissible control set consists of all such processes. The terminal constraint in (1) can then be written as a linear inequality on the terminal state, in which the coefficient matrix is constant and has full row rank.

We consider the following cost function associated with the system, where the weighting matrices are symmetric with appropriate dimensions and possibly indefinite. We define the optimal cost value as the infimum of the cost over the admissible control set.
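For orientation, a cost function of the standard finite-horizon form (the specific weight names $Q$, $R$, $H$ are our notational assumption) is:

```latex
J(x_0; u) = \mathbb{E}\Biggl[\sum_{k=0}^{N-1} \bigl(x(k)^{\top}Q(k)x(k) + u(k)^{\top}R(k)u(k)\bigr)
+ x(N)^{\top}Hx(N)\Biggr].
```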

In the sequel, we study the LQ problem for the system (1)–(3); that is, we seek a control minimizing the cost. First, we state some definitions and lemmas that are essential to the discussion of our main results.

*Definition 1. *If the optimal cost value is finite for any initial state, the LQ problem (1)–(3) is called well posed.

*Definition 2. *If there exists an admissible control such that
then systems (1)–(3) are said to be attainable and the corresponding control is called an optimal control.

If a linear feedback control is optimal for the LQ problem (1)–(3), then it must also be an optimal linear state feedback control of the form $u(k) = K(k)x(k)$, where $K(k)$ is a matrix-valued function.

Consider the following MP (mathematical programming) problem: minimize an objective function $f(x)$ subject to inequality constraints $g_i(x) \le 0$, $i = 1, \ldots, m$, and equality constraints $h_j(x) = 0$, $j = 1, \ldots, l$, where $x \in \mathbb{R}^n$.

*Definition 3 (regularity condition; see [25]). *Let $x^*$ be a feasible point of MP. If the gradient vectors $\nabla g_i(x^*)$, for the inequality constraints active at $x^*$, and $\nabla h_j(x^*)$, $j = 1, \ldots, l$, are linearly independent, this linear independence is called a regularity condition (or constraint qualification).

*Definition 4 (regular point; see [25]). *Let $x^*$ be a feasible point of MP. Then $x^*$ is called a regular point of the constraints if the gradient vectors $\nabla g_i(x^*)$, for the active inequality constraints, and $\nabla h_j(x^*)$, $j = 1, \ldots, l$, are linearly independent.

Lemma 5 (KKT theorem; see [25]). *In the MP problem above, suppose that the objective function $f$ and the constraint functions $g_i$, $h_j$ are continuously differentiable at a point $x^*$. If $x^*$ is a local minimum that satisfies some regularity condition, then there exist a vector $\mu \ge 0$ in $\mathbb{R}^m$ and a vector $\lambda$ in $\mathbb{R}^l$, called KKT multipliers, such that
$$\nabla_x \mathcal{L}(x^*, \mu, \lambda) = 0, \qquad \mu_i g_i(x^*) = 0, \quad i = 1, \ldots, m,$$
where the Lagrangian function is $\mathcal{L}(x, \mu, \lambda) = f(x) + \sum_{i=1}^{m} \mu_i g_i(x) + \sum_{j=1}^{l} \lambda_j h_j(x)$.*
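As a small numerical sketch of how the KKT conditions of Lemma 5 characterize a constrained minimizer, the following (our own illustrative problem, covering the equality-constrained case) solves a tiny quadratic program by forming the linear KKT system and then verifies stationarity of the Lagrangian and primal feasibility:

```python
import numpy as np

# Hypothetical small problem: minimize 0.5*x'Px + q'x subject to Ax = b.
P = np.array([[2.0, 0.0], [0.0, 4.0]])
q = np.array([-1.0, -2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

# For equality constraints, stationarity + primal feasibility form a linear system.
n, m = P.shape[0], A.shape[0]
kkt = np.block([[P, A.T], [A, np.zeros((m, m))]])
rhs = np.concatenate([-q, b])
sol = np.linalg.solve(kkt, rhs)
x_star, lam = sol[:n], sol[n:]

# Verify the KKT conditions: gradient of the Lagrangian vanishes, constraint holds.
grad_L = P @ x_star + q + A.T @ lam
print(np.allclose(grad_L, 0.0))    # stationarity
print(np.allclose(A @ x_star, b))  # feasibility
```

Here the minimizer is $x^* = (0.5, 0.5)$ with multiplier $\lambda = 0$; the same pattern underlies the matrix-valued Lagrangian used in the proof of Theorem 11.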

Lemma 6 (see [26]). *For any matrix $M$, there exists a unique matrix $M^{\dagger}$, called the Moore-Penrose pseudoinverse of $M$, such that
$$M M^{\dagger} M = M, \quad M^{\dagger} M M^{\dagger} = M^{\dagger}, \quad (M M^{\dagger})^{\top} = M M^{\dagger}, \quad (M^{\dagger} M)^{\top} = M^{\dagger} M.$$*
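The four Penrose conditions of Lemma 6 can be checked numerically for a rank-deficient matrix; this sketch uses `numpy.linalg.pinv`, which computes the Moore-Penrose pseudoinverse:

```python
import numpy as np

# Numerically check the four Penrose conditions that (per Lemma 6) uniquely
# characterize the Moore-Penrose pseudoinverse M^+ of a matrix M.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 3)) @ rng.standard_normal((3, 5))  # rank <= 3, so rank-deficient
Mp = np.linalg.pinv(M)

print(np.allclose(M @ Mp @ M, M))        # M M^+ M = M
print(np.allclose(Mp @ M @ Mp, Mp))      # M^+ M M^+ = M^+
print(np.allclose((M @ Mp).T, M @ Mp))   # (M M^+)' = M M^+
print(np.allclose((Mp @ M).T, Mp @ M))   # (M^+ M)' = M^+ M
```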

Lemma 7 (see [26]). *Let a symmetric matrix $M$ be given. Then $M^{\dagger}$ is symmetric and $M M^{\dagger} = M^{\dagger} M$.*

Lemma 8 ((extended Schur's lemma) see [27]). *Let matrices $M = M^{\top}$, $N$, and $R = R^{\top}$ be given with appropriate sizes. Then the following conditions are equivalent:*(i)*$M - N R^{\dagger} N^{\top} \ge 0$, $R \ge 0$, and $N(I - R R^{\dagger}) = 0$;*(ii)*$\begin{pmatrix} M & N \\ N^{\top} & R \end{pmatrix} \ge 0$;
*(iii)*$\begin{pmatrix} R & N^{\top} \\ N & M \end{pmatrix} \ge 0$. *
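The implication (i) ⇒ (ii) of Lemma 8 can be illustrated numerically. The construction below is our own: choosing $N = ZR$ enforces the range condition $N(I - RR^{\dagger}) = 0$, and adding a PSD surplus to $NR^{\dagger}N^{\top}$ makes the generalized Schur complement nonnegative, after which the block matrix is verified to be positive semidefinite:

```python
import numpy as np

rng = np.random.default_rng(1)
S = rng.standard_normal((3, 2))
R = S @ S.T                      # positive semidefinite and singular (rank 2)
Rp = np.linalg.pinv(R)
Z = rng.standard_normal((2, 3))
N = Z @ R                        # guarantees the range condition N(I - R R^+) = 0
W = rng.standard_normal((2, 2))
M = N @ Rp @ N.T + W @ W.T       # makes the generalized Schur complement PSD

# Condition (i) of Lemma 8:
print(np.allclose(N @ (np.eye(3) - R @ Rp), 0))
print(np.all(np.linalg.eigvalsh(M - N @ Rp @ N.T) >= -1e-9))
# Condition (ii): the block matrix [[M, N], [N', R]] is PSD.
block = np.block([[M, N], [N.T, R]])
print(np.all(np.linalg.eigvalsh(block) >= -1e-9))
```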

Lemma 9 (see [15]). *Let matrices $L$, $M$, and $N$ be given with appropriate sizes. Then the matrix equation $LXM = N$ has a solution $X$ if and only if $L L^{\dagger} N M^{\dagger} M = N$. In this case, the general solution is given by $X = L^{\dagger} N M^{\dagger} + Y - L^{\dagger} L Y M M^{\dagger}$, where $Y$ is a matrix with an appropriate dimension.*

Lemma 10 (see [15]). *Let matrices , , and be given with appropriate sizes. Consider the following quadratic form:
**
where and are random variables defined on a probability space . Then the following conditions are equivalent: *(i)* for any random variable ;*(ii)*there exists a symmetric matrix such that for any random variable ;*(iii)* and ;*(iv)* and ;*(v)*there exists a symmetric matrix such that .**Moreover, if any of the above conditions holds, then (ii) is satisfied by . In addition, for any satisfying (v). Finally, for any random variable , the random variable is optimal with the following optimal value:
*
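The pseudoinverse-based minimization underlying Lemma 10 can be sketched numerically. The matrices `R`, `L`, and the vector `x` below are our own illustrative stand-ins: `R` is PSD and singular, and `L` is built so that `L @ x` lies in the range of `R` (the range condition of the lemma); the minimizer is then $u^* = -R^{\dagger}Lx$, with optimal value $-x^{\top}L^{\top}R^{\dagger}Lx$ by completion of squares:

```python
import numpy as np

rng = np.random.default_rng(2)
S = rng.standard_normal((3, 2))
R = S @ S.T                          # PSD and singular
Rp = np.linalg.pinv(R)
L = R @ rng.standard_normal((3, 2))  # ensures L @ x lies in range(R)
x = rng.standard_normal(2)

def F(u):
    # Quadratic form u'Ru + 2u'(Lx), as in Lemma 10 (state term omitted).
    return u @ R @ u + 2 * u @ (L @ x)

u_star = -Rp @ L @ x
# No perturbation can decrease the cost: F(u*+d) - F(u*) = d'Rd >= 0.
vals = [F(u_star + 0.1 * rng.standard_normal(3)) for _ in range(100)]
print(all(v >= F(u_star) - 1e-9 for v in vals))
# Optimal value matches the completion-of-squares formula.
print(np.isclose(F(u_star), -x @ L.T @ Rp @ L @ x))
```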

#### 3. Well-Posedness and Attainability under State Feedback Control

In this section, we transform the LQ problem into an equivalent deterministic optimization problem. By means of the KKT theorem, we present a generalized difference Riccati equation (GDRE) without any positiveness constraint. Then, it is shown that the well-posedness and the attainability are equivalent to the solvability of GDRE.

Theorem 11. *If the LQ optimal control problem (1)–(3) is attainable by and the regular point is a locally optimal solution of problem (1)–(3), then the following generalized difference Riccati equation (GDRE) has solutions with :
**In addition,
*
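For comparison, the unconstrained indefinite-LQ literature (e.g., [13–15]) yields a GDRE of the following standard form; the constrained GDRE (12) above additionally carries KKT multiplier terms, so this block is only an orientation sketch, stated under the assumed normalization $\mathbb{E}[w(k)] = 0$, $\mathbb{E}[w(k)^2] = 1$:

```latex
\begin{aligned}
P(k) &= Q(k) + A(k)^{\top}P(k+1)A(k) + C(k)^{\top}P(k+1)C(k)
        - L(k)^{\top}G(k)^{\dagger}L(k), \\
G(k) &= R(k) + B(k)^{\top}P(k+1)B(k) + D(k)^{\top}P(k+1)D(k), \\
L(k) &= B(k)^{\top}P(k+1)A(k) + D(k)^{\top}P(k+1)C(k), \\
P(N) &= H, \qquad G(k) \ge 0, \qquad G(k)G(k)^{\dagger}L(k) = L(k).
\end{aligned}
```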

*Proof. *Let and for any ; it can be shown that the LQ problem (1)–(3) can be rewritten as the following optimization problem:

Obviously, problem (14) is an MP problem, indicated as
where

According to KKT theorem, the Lagrangian function is defined as
where the matrices involved are Lagrange multipliers.

Moreover, the following result,
is obvious.

By direct calculation, we conclude that the unknowns satisfy equations of the form

From Lemma 9, (19) has a solution if and only if and , where

We substitute the above gains into (21); then the corresponding equations are formed as

The only thing to note is that we can assume the solution is symmetric; otherwise, we replace it by its symmetric part.

Now we add the equality
to (2) and use (23); then we have

By completion of square, we obtain

Here, we must prove positive semidefiniteness. Suppose, to the contrary, that there exists a matrix in the sequence with a negative eigenvalue $\lambda < 0$, and let $v$ be a unit eigenvector associated with $\lambda$, so that the stated identities hold. For any scalar, suppose that a control sequence is given by
The corresponding cost is
Letting the scalar tend to infinity drives the cost to $-\infty$, which contradicts the attainability of the LQ problem (1)–(3).

From the above discussion and (21), it can be seen that the optimal value is given by
The proof is complete.

The following corollary shows that when the relevant coefficient matrix in GDRE (12) is positive definite, the associated solutions are all unique.

Corollary 12. *If the LQ optimal control problem (1)–(3) is attainable by and the regular point is a locally optimal solution of problem (1)–(3), then the following GDRE has unique solutions with :
*

In addition,

The following result is useful in the sequel, which gives an equivalent connection between the solvability of the GDRE and the well-posedness of the LQ problem.

Theorem 13. *If the LQ problem (1)–(3) is well posed, then there exist solutions to the GDRE (12). Conversely, if the GDRE (12) has solutions, then the LQ problem (1)–(3) is well posed. Moreover, the optimal cost satisfies
*

*Proof. *Necessity part: consider the following cost-to-go from an intermediate time to the terminal time:
According to the principle of optimality, if the total optimal cost is finite, then so is the optimal cost-to-go for any intermediate time and state; hence finiteness propagates backward through the horizon.

Let and . By (1) and (33), it follows that
Applying Lemma 10 to the above quadratic form, there exists a symmetric matrix such that
The above equations are exactly GDRE (12) at the corresponding time step.

Hence, assume that GDRE (12) admits a pair of solutions with

From (33), we have
By Lemma 10, it is straightforward that the finiteness of is equivalent to the following:
Moreover, .

Sufficiency part: let
Assume satisfy
for and .

As in the preceding,

By Lemma 8, we get that
In other words,
which implies that the LQ problem (1)–(3) is well posed.

We are now equipped to present the main result in this section.

Theorem 14. *The following assertions are equivalent.*(i)*The LQ problem (1)–(3) is attainable.*(ii)*The LQ problem (1)–(3) is well posed.*(iii)*The GDRE (12) is solvable.**In addition, the feedback control law is achieved by
**
where are solutions to the GDRE (12) and .*
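To illustrate the feedback construction of Theorem 14 numerically, the following sketch runs a backward difference Riccati recursion for an *unconstrained* problem of the assumed form $x(k+1) = Ax(k) + Bu(k) + (Cx(k) + Du(k))w(k)$ with $\mathbb{E}[w] = 0$, $\mathbb{E}[w^2] = 1$ (so it omits the constraint multipliers of GDRE (12); all matrices are our own test data). It then checks, by exact second-moment propagation, that the resulting feedback attains the value $x_0^{\top}P(0)x_0$ and beats a perturbed feedback:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, N = 2, 1, 5
A = rng.standard_normal((n, n)) * 0.5
B = rng.standard_normal((n, m))
C = rng.standard_normal((n, n)) * 0.3
D = rng.standard_normal((n, m)) * 0.3
Q, R, H = np.eye(n), np.eye(m), np.eye(n)

# Backward recursion: P(k) = Q + A'PA + C'PC - L'G^+L, with feedback K = -G^+L.
P = H
K = [None] * N
for k in reversed(range(N)):
    G = R + B.T @ P @ B + D.T @ P @ D
    L = B.T @ P @ A + D.T @ P @ C
    K[k] = -np.linalg.pinv(G) @ L
    P = Q + A.T @ P @ A + C.T @ P @ C - L.T @ np.linalg.pinv(G) @ L
P0 = P

def cost(gains, x0):
    # Exact expected cost of u(k) = gains[k] x(k) via second-moment propagation:
    # X(k+1) = (A+BK)X(A+BK)' + (C+DK)X(C+DK)' since E[w]=0, E[w^2]=1.
    X, J = np.outer(x0, x0), 0.0
    for k in range(N):
        Kk = gains[k]
        J += np.trace((Q + Kk.T @ R @ Kk) @ X)
        Acl, Ccl = A + B @ Kk, C + D @ Kk
        X = Acl @ X @ Acl.T + Ccl @ X @ Ccl.T
    return J + np.trace(H @ X)

x0 = np.array([1.0, -1.0])
print(np.isclose(cost(K, x0), x0 @ P0 @ x0))   # optimal value is x0' P(0) x0
K_pert = [Kk + 0.1 for Kk in K]
print(cost(K, x0) <= cost(K_pert, x0) + 1e-9)  # perturbed feedback costs more
```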

*Proof. *By Theorem 13, it follows immediately that (ii) is equivalent to (iii). Our objective is to show that (i) is equivalent to (iii). From Theorem 11, we only need to show (iii) ⇒ (i).

Suppose the GDRE (12) admits a pair of solutions. In the same way as in the proof of Theorem 11, the following can be proved:
So, the optimal value and the feedback .

#### 4. Relation between Optimal Synthesis and GDRE

In this section, we verify that any optimal control can be expressed in terms of the solution of the GDRE (12), with two degrees of freedom, and we give the optimal cost.

Theorem 15. *Assume that the GDRE (12) admits a solution. Then the optimal control satisfies the following:
**
where the free parameters are arbitrary random variables defined on the probability space, and the optimal cost value is given by
**
where solve the GDRE (12).*
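The two-degrees-of-freedom structure in Theorem 15 can be illustrated numerically: when the quadratic coefficient matrix is singular, the whole affine family obtained by adding a null-space term attains the same optimal value. The construction below is our own, with `G`, `L`, and `x` as hypothetical stand-ins for the corresponding GDRE quantities:

```python
import numpy as np

rng = np.random.default_rng(4)
S = rng.standard_normal((3, 2))
G = S @ S.T                          # PSD and singular (rank 2)
Gp = np.linalg.pinv(G)
L = G @ rng.standard_normal((3, 2))  # keeps L @ x in range(G)
x = rng.standard_normal(2)

def V(u):
    # Cost contribution u'Gu + 2u'(Lx).
    return u @ G @ u + 2 * u @ (L @ x)

# Every u = -G^+ L x + (I - G^+ G) z, for arbitrary z, attains the same value:
base = -Gp @ L @ x
vals = [V(base + (np.eye(3) - Gp @ G) @ rng.standard_normal(3)) for _ in range(20)]
print(np.allclose(vals, V(base)))
```

The invariance holds because $G(I - G^{\dagger}G) = 0$ and the range condition kills the cross term, so the null-space component never enters the cost.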

*Proof. *Suppose the GDRE (12) admits solutions. As in the preceding calculation, we have

Let and ; then

So, the cost can be rewritten as

Because of , we immediately obtain that
and the control .

Now, we are interested in an arbitrary control sequence which minimizes the cost function. So we deduce that