Special Issue: Stochastic Systems and Control: Theory and Applications
Research Article | Open Access
H2/H∞ Control for MJLS with Infinite Markov Chain
With the help of a stochastic bounded real lemma, we deal with the finite horizon mixed H2/H∞ control problem for discrete-time MJLS whose Markov chain takes values in an infinite set. In addition, a unified control design for H2, H∞, and mixed H2/H∞ control is given.
As is well known, H∞ control is one of the most important robust control designs, usually used to attenuate the effect of the disturbance. In particular, many results have been contributed to the stochastic H∞ theory; see [1–7], among many others. For Itô systems and discrete-time systems with multiplicative noise, stochastic H∞-type control problems have been considered in [2, 5], respectively. Reference  has designed a state feedback H∞ controller for nonlinear stochastic systems. References [4, 7] have dealt with H∞ control in the presence of stochastic uncertainty.
On the other hand, after Kalman posed the question "When is a linear control system optimal?", which preserves the quadratic form in the performance index,  has discussed H2 or Linear Quadratic Gaussian (LQG) control, while  has retained the original weights and sought a procedure that can also achieve a desired degree of stability, and others have contributed to stochastic LQ controller design [10–12]. Because of the popularity of the H2 performance in engineering, more and more researchers have been attracted to the mixed H2/H∞ control topic; see [13–16]. For example, [14, 15] have investigated the finite horizon and infinite horizon H2/H∞ control problems for discrete-time Markov jump systems.
It should be pointed out that most research on Markov jump systems assumes that the Markov chain takes values in a finite set, as in [6, 13–15, 17–20]. However, infinite Markov jump systems, where the Markov process has an infinite state space, can describe more plants in the real world. Recently, infinite Markov jump systems have attracted more and more attention [1, 21, 22]. Specifically, for the discrete-time case,  has explored exponential stability and input-to-state stability under strong detectability;  has established the finite horizon stochastic bounded real lemma to attain a prescribed disturbance attenuation level. Based on , this note seeks to set up a mixed H2/H∞ result for infinite Markov jump systems, unlike , which considered the finite jump case.
This paper aims to handle the mixed H2/H∞ control problem through the solvability of four coupled difference matrix-valued recursions (CDMRs) for discrete-time infinite Markov jump systems. The rest of the paper is organized as follows: Section 2 provides some useful definitions and lemmas. In Section 3, we derive a necessary and sufficient condition for the finite horizon mixed H2/H∞ control problem based on the stochastic bounded real lemma. A unified design for H2, H∞, and mixed H2/H∞ control is given in Section 4. Section 5 concludes the paper.
For convenience, we adopt the following notation: ℝ is the set of all real numbers; ℝ^n is the n-dimensional real vector space; ℝ^{m×n} is the vector space of all m×n matrices with entries in ℝ; A' is the transpose of a matrix A; A ≥ 0 (A > 0) means A is a positive semidefinite (positive definite) symmetric matrix; I is the identity matrix; ‖·‖ is the operator norm of a matrix or the Euclidean norm of a vector; ℕ is the set of all nonnegative integers; ℕ⁺ is the set of all positive integers; S^n is the set of all n×n symmetric matrices.
Consider the following discrete-time infinite Markov jump system defined on a complete probability space : where , and represent the system state, disturbance signal, and measurement output, respectively. The Markov chain takes values in , with switching governed by a stationary transition probability matrix , where . Let be a sequence of real random variables satisfying and (the Kronecker delta function). Let be the σ-algebra generated by . In the case , set . For any given , the σ-algebras and are independent of each other. denotes the set of -valued processes which are -measurable and satisfy . Clearly, is a real Hilbert space with norm .
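For intuition, a countably-infinite-state chain of the kind assumed above can be sampled numerically by truncating the transition law to a large finite support. The particular transition law below (geometric decay away from the current state) and all parameter values are illustrative assumptions, not taken from the paper:

```python
import random

def transition_probs(i, support=50):
    """Illustrative transition law on {0, 1, 2, ...}: from state i, jump to
    state j with weight 2**(-|j - i| - 1). The row is truncated to a finite
    support and renormalized so that it sums to one, which makes the
    infinite chain computable in practice."""
    weights = [2.0 ** (-abs(j - i) - 1) for j in range(support)]
    total = sum(weights)
    return [w / total for w in weights]

def simulate_chain(steps, start=0, seed=0):
    """Sample one trajectory of the truncated chain by inverse-CDF sampling."""
    rng = random.Random(seed)
    state, path = start, [start]
    for _ in range(steps):
        probs = transition_probs(state)
        u, acc = rng.random(), 0.0
        for j, p in enumerate(probs):
            acc += p
            if u <= acc:
                state = j
                break
        path.append(state)
    return path

path = simulate_chain(20)
print(path)
```

Because each row of the (truncated) transition matrix is renormalized, the sampler remains a valid Markov chain even though the true state space is infinite.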
Let be the set which satisfies . It is easy to verify that is a Banach space with norm . Define another Banach space with norm . In the case , will be written as , and similarly for . For , when and , will be written as .
For , if for all , we write . It is easy to see that . Moreover, for a given real Banach space , let denote the Banach space of all bounded linear operators mapping to . For , the induced norm is denoted by .
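Mode-indexed families of matrices like those above are typically normed by the supremum over modes of a matrix norm. The sketch below assumes that standard choice (it is an assumption for illustration, not verbatim from this paper) and truncates to finitely many modes:

```python
import numpy as np

def sup_norm(P):
    """Supremum over modes of the operator norm of each P_i: the natural
    norm on a bounded mode-indexed family of matrices, truncated here to
    finitely many modes. NOTE: this sup-norm choice is an assumption made
    for illustration; the paper's exact norm is not reproduced in the text."""
    return max(np.linalg.norm(P_i, 2) for P_i in P)

family = [np.eye(2), np.diag([3.0, 1.0])]
print(sup_norm(family))
```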
Next, we introduce the linear perturbation operator as follows.
Definition 1. Define a linear perturbation operator of system (1) as follows: where satisfies system (1) corresponding to and . When , the norm of is determined by When , which means the system is unperturbed, the problem is trivial (in this case, ).
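The norm in Definition 1 is an ℓ2-induced norm: over a finite horizon, the map from the disturbance sequence to the output sequence is a finite-dimensional linear operator, so its norm is the largest singular value of the corresponding matrix. A minimal sketch for a deterministic scalar stand-in (the coefficients a, b, c, d and the horizon are illustrative assumptions; the jump and multiplicative-noise structure of system (1) is omitted):

```python
import numpy as np

def perturbation_matrix(a, b, c, d, T):
    """Matrix of the map v -> z over horizon T for the scalar system
    x_{k+1} = a x_k + b v_k, z_k = c x_k + d v_k, with x_0 = 0.
    Entry (k, j) is the response of z_k to a unit impulse in v_j."""
    M = np.zeros((T, T))
    for k in range(T):
        M[k, k] = d                       # direct feedthrough
        for j in range(k):
            M[k, j] = c * a ** (k - 1 - j) * b  # state response
    return M

def operator_norm(M):
    """l2-induced norm of a matrix = its largest singular value."""
    return np.linalg.svd(M, compute_uv=False)[0]

M = perturbation_matrix(a=0.5, b=1.0, c=1.0, d=0.1, T=20)
print(operator_norm(M))
```

The disturbance attenuation requirement ‖L‖ < γ then amounts to this largest singular value being below γ.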
Define the functional , which is connected with the performance index.
To make the formulae briefer, we adopt the following notation. Denote by a family of symmetric matrices indexed by the time and the mode ; that is, . In the subsequent analysis, we will use the following notation, with :
Lemma 2. Given , , and being the solution of system (1), for any given , we have where for
Proof. By assumption, is independent of the Markov chain and is uncorrelated with , so and are uncorrelated with . Besides, and are -measurable, ; thus When , we have where Summing the above terms, we get that According to the definition of , we obtain The desired result then follows by a simple computation.
In this subsection, we discuss the finite horizon mixed H2/H∞ control problem. Consider the following linear system with infinite Markov jump parameters:
Given and , our objective is to find such that: (i) for all and ; (ii) when the worst case disturbance is enforced on system (15), minimizes the energy functional , .
If there exists such that (i) and (ii) hold simultaneously, we say that the mixed H2/H∞ control problem is solvable. Before proceeding, we introduce the following coupled matrix recursions, which are defined on : where
Proof. Assume that CDMRs (16)–(19) admit a pair of solutions (), . Constructing and substituting it into system (15), we get for all and . By Lemma 3, together with the completing-squares technique, it follows from (16) that where and is defined by (17). As shown above, is minimized by , , and is the worst case disturbance. By Lemma 2 and the completing-squares technique, we obtain where , and is defined by (19). Reasoning as above, it follows that is the controller that minimizes . Thus, we conclude that () is a pair of solutions to the finite horizon H2/H∞ control problem.
Suppose that system (15) admits a pair of solutions to the finite horizon H2/H∞ control problem with . Substituting into (15), we arrive at the following equations: Applying Lemma 3 to (23), we see that satisfies (16) on [0, ], and also . From the proof of sufficiency, we confirm that the worst case disturbance is with given by (17). Enforcing on system (15), we obtain According to the assumption, we deduce that is the optimal solution of the following problem: This is a standard finite horizon LQ control problem for Markov jump systems. Similar to the proof of Theorem in , it is not difficult to show that (18) is solvable with . This completes the proof.
Remark 5. Compared with the finite horizon H2/H∞ control problem considered in , where the Markov chain takes values in a finite set, the dynamical model considered in this note is more general.
Example 6. Consider the following one-dimensional discrete-time infinite Markov jump system: In (26), the transition probability is defined by . Setting , the solutions of the four coupled matrix recursions (16)–(19) are given by .
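Whatever the exact coefficient data in recursions (16)–(19), the modes are coupled only through the conditional expectation operator that forms the weighted sum of the next-step matrices over all reachable modes, truncated here to finitely many modes for computation. A minimal helper (the mode count, transition matrix, and matrices below are illustrative, not Example 6's data):

```python
import numpy as np

def mode_coupling(P_next, trans):
    """Compute E_i(P)(t+1) = sum_j p_ij * P_j(t+1) for each mode i: the
    operator through which every coupled recursion ties the modes together.
    P_next is a list of symmetric matrices (one per truncated mode), and
    trans is the truncated row-stochastic transition matrix."""
    n = len(trans)
    return [sum(trans[i][j] * P_next[j] for j in range(n)) for i in range(n)]

# Illustrative usage with two modes.
E = mode_coupling([np.eye(2), 2.0 * np.eye(2)], [[0.5, 0.5], [0.2, 0.8]])
print(E[0])
print(E[1])
```

Truncating the infinite chain to a finite mode set is what makes this operator, and hence the backward recursions, numerically computable.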
4. H2, H∞, and Mixed H2/H∞ Control
In this section, we develop a unified control design for H2, H∞, and mixed H2/H∞ control of system (15). Consider the following indices: where and are defined on .
Definition 7. We call an admissible set a Nash equilibrium if the following inequalities hold simultaneously for all admissible : For convenience, we introduce the following coupled matrix recursions on : where (I) H2 Control. Letting and in (27), it is easy to see that the performance index holds automatically, and turns into a stochastic LQ optimal control problem. Since in , taking in (15), we get By Theorem 4, we know that , where with , and is the solution of the following equations: Also, the optimal value is given by
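As a numerical illustration of the H2 case reducing to a jump-LQ problem, the following sketch runs a finite horizon coupled LQ recursion backward in time for scalar modes on a truncated chain. The recursion form is the generic scalar jump-LQ Riccati iteration and all coefficients are stand-ins, not the paper's exact equations:

```python
def lq_jump_recursion(a, b, q, r, trans, T):
    """Backward jump-LQ recursion for the scalar modes
    x_{t+1} = a_i x_t + b_i u_t with stage cost q_i x^2 + r_i u^2,
    on a truncated chain with row-stochastic matrix trans.
    Returns (gains, P0): gains[t][i] is the feedback gain at time t in
    mode i, and P0[i] gives the optimal cost P_i(0) * x0**2."""
    n = len(trans)
    P = [0.0] * n                      # terminal condition P_i(T) = 0
    gains = []
    for t in range(T - 1, -1, -1):
        # Mode coupling: Ebar_i = sum_j p_ij P_j(t+1).
        Ebar = [sum(trans[i][j] * P[j] for j in range(n)) for i in range(n)]
        K, newP = [], []
        for i in range(n):
            denom = r[i] + b[i] ** 2 * Ebar[i]
            K.append(-b[i] * Ebar[i] * a[i] / denom)
            newP.append(q[i] + a[i] ** 2 * Ebar[i]
                        - (a[i] * b[i] * Ebar[i]) ** 2 / denom)
        gains.append(K)
        P = newP
    gains.reverse()                    # gains[0] now corresponds to t = 0
    return gains, P

gains, P0 = lq_jump_recursion([1.0], [1.0], [1.0], [1.0], [[1.0]], 2)
print(gains, P0)
```

With a single mode and a, b, q, r all equal to 1 over a horizon of 2, one step of the recursion gives P(1) = 1 and then P(0) = 1 + 1 - 1/2 = 1.5 with gain -0.5, which the printed output reflects.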
Remark 8. Via a series of calculations, we can rewrite the first equality of (36), when is positive definite, as Obviously, we have ; thus .
(II) H∞ Control. Letting in (27), it follows that In this case, the Nash game (28) becomes a zero-sum game. We know from Theorem 4 that there exists a Nash equilibrium if and only if (29)–(32) is solvable. By computation, we obtain Noticing that , we conclude that the above equation has a unique solution such that . Rewriting (31) in another form, Substituting into (41), it is easy to see that Similarly to Theorem in , we can prove that , which implies that . Since implies that (42) is consistent with (29), when , the solution of (42) satisfies
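The H∞ part hinges on a feasibility condition of bounded-real-lemma type holding along a backward recursion: the attenuation level γ is achievable only if a certain gap term stays positive at every step. A scalar sketch (generic coefficients; a truncated chain stands in for the infinite mode set; this is the standard scalar bounded-real recursion, not the paper's exact recursions (29)–(32)):

```python
def hinf_recursion(a, b, c, gamma, trans, T):
    """Backward bounded-real-lemma recursion for the scalar jump system
    x_{t+1} = a_i x_t + b_i v_t, z_t = c_i x_t (mode i of a truncated
    chain with row-stochastic matrix trans). Returns the list P_i(0) if
    the level gamma is feasible over the horizon, or None if the
    feasibility gap gamma**2 - b_i**2 * Ebar_i closes at some step."""
    n = len(trans)
    P = [0.0] * n                      # terminal condition P_i(T) = 0
    for t in range(T - 1, -1, -1):
        Ebar = [sum(trans[i][j] * P[j] for j in range(n)) for i in range(n)]
        newP = []
        for i in range(n):
            gap = gamma ** 2 - b[i] ** 2 * Ebar[i]
            if gap <= 0:
                return None            # gamma not achievable on this horizon
            newP.append(c[i] ** 2 + a[i] ** 2 * Ebar[i]
                        + (a[i] * Ebar[i] * b[i]) ** 2 / gap)
        P = newP
    return P

# A generous gamma is feasible; a very small gamma fails once Ebar grows.
print(hinf_recursion([0.5, 0.8], [1.0, 1.0], [1.0, 1.0], 10.0,
                     [[0.5, 0.5], [0.5, 0.5]], 10))
print(hinf_recursion([0.9], [1.0], [1.0], 0.1, [[1.0]], 5))
```

Bisecting on γ over such a feasibility test is the usual way to estimate the smallest achievable disturbance attenuation level numerically.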