Stochastic Systems 2014
Nonlinear Stochastic H∞ Control with Markov Jumps and (x, u, v)-Dependent Noise: Finite and Infinite Horizon Cases
This paper is concerned with the H∞ control problem for nonlinear stochastic Markov jump systems with state-, control-, and external-disturbance-dependent noise. By means of inequality techniques and coupled Hamilton-Jacobi inequalities, both finite and infinite horizon H∞ control designs for such systems are developed. Two numerical examples are provided to illustrate the effectiveness of the proposed design method.
H∞ control is one of the most important robust control approaches, as it can efficiently attenuate the effect of exogenous disturbances [1, 2]. Since Hinrichsen and Pritchard introduced H∞ control to linear stochastic systems [3], the nonlinear stochastic H∞ control and filtering problems have received considerable attention in both theory and practical applications [4–9]. In , nonlinear stochastic H∞ designs were first developed by solving a second-order nonlinear Hamilton-Jacobi inequality. The H∞ filtering problems for general nonlinear continuous-time and discrete-time stochastic systems were discussed in  and , respectively. In , the quantized H∞ control problem was studied for a class of nonlinear stochastic time-delay network-based systems with probabilistic data missing.
On the other hand, Itô stochastic systems with Markov jumps have attracted increasing attention due to their powerful modeling ability in many fields [10, 11]. For linear stochastic systems with Markov jumps, many important issues have been studied, such as stability and stabilization [12, 13], observability and detectability , optimal control , and H∞ control . The H∞ control of nonlinear stochastic Markov jump systems (NSMJSs) has also been widely investigated. In , the notion of exponential dissipativity of NSMJSs was introduced and used to estimate the possible variations of the output feedback control. In , the stabilization of nonlinear Markov jump systems with partly unknown transition probabilities was studied via fuzzy control. In , the H∞ control problems of NSMJSs were studied, which extended the results of  to stochastic systems with Markovian jump parameters.
Most of the existing literature is concerned with stochastic Markov jump systems with state-dependent noise or with both state- and disturbance-dependent noise ((x, v)-dependent noise for short) [16, 19]. However, for many natural phenomena described by Itô stochastic systems, not only the state but also the control input or the external disturbance may be corrupted by noise. By introducing three coupled Hamilton-Jacobi equations (HJEs), the finite and infinite horizon H∞ control problems were solved for Itô stochastic systems with state-, control-, and disturbance-dependent noise ((x, u, v)-dependent noise for short) in  and , respectively. In , the finite/infinite horizon H∞ control of nonlinear stochastic systems with (x, u, v)-dependent noise was solved by means of a Hamilton-Jacobi inequality (HJI) instead of three coupled HJEs. However, the H∞ control problem for nonlinear stochastic systems with Markov jumps and (x, u, v)-dependent noise has never been tackled and deserves further research.
In this paper, the finite and infinite horizon H∞ control problems are studied for nonlinear stochastic Markov jump systems with (x, u, v)-dependent noise. First, a very useful elementary identity is established. Then, by the completing-the-squares technique, a sufficient condition for finite/infinite horizon H∞ control of NSMJSs is presented based on a set of coupled HJIs. By means of linear matrix inequalities (LMIs), a sufficient condition for infinite horizon H∞ control of linear stochastic Markov jump systems is derived. Finally, two numerical examples are provided to show the effectiveness of the obtained results.
For convenience, we make use of the following notation throughout this paper. R^n is the n-dimensional Euclidean space. R^{n×m} is the set of all n×m real matrices. A > 0 (A ≥ 0): A is a positive definite (positive semidefinite) symmetric matrix. A^T is the transpose of a matrix A. I is the identity matrix. |x| is the Euclidean norm of a vector x. L²_F([0, T]; R^l) (resp., L²_F(R₊; R^l)) is the space of nonanticipative stochastic processes v(t) with respect to the increasing σ-algebras F_t (t ≥ 0) satisfying E ∫_0^T |v(t)|² dt < ∞ (resp., E ∫_0^∞ |v(t)|² dt < ∞).
2. Definitions and Preliminaries
Consider the following time-varying nonlinear stochastic Markov jump system with (x, u, v)-dependent noise: where x(t), u(t), v(t), and z(t) represent the system state, control input, exogenous input, and regulated output, respectively. w(t) is a one-dimensional standard Wiener process defined on the complete probability space (Ω, F, P), with the natural filtration F_t generated by w(s) and r(s) up to time t. The jumping process r(t) is a continuous-time discrete-state Markov process taking values in a finite set S = {1, 2, ..., N}. The transition probabilities for the process r(t) are defined as P(r(t + Δ) = j | r(t) = i) = π_ij Δ + o(Δ) for i ≠ j and P(r(t + Δ) = i | r(t) = i) = 1 + π_ii Δ + o(Δ), where Δ > 0, π_ij ≥ 0 (i ≠ j) denotes the switching rate from mode i at time t to mode j at time t + Δ, and π_ii = -Σ_{j≠i} π_ij for all i ∈ S. In this paper, the processes w(t) and r(t) are assumed to be independent. For every i ∈ S, the coefficient functions of system (1) are Borel measurable functions of suitable dimensions, which guarantee that system (1) has a unique strong solution x(t). The finite horizon H∞ control for system (1) is defined as follows.
Definition 1. For a given γ > 0, the state feedback control is called a finite horizon H∞ control of system (1) if, for the zero initial state x(0) = 0 and any nonzero v ∈ L²_F([0, T]; R^{n_v}), we have with where is an operator associated with system (1) which is defined as
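The displayed performance inequality in Definition 1 is not reproduced above. In the standard stochastic H∞ setting it takes the following form (a plausible reconstruction, with γ, T, z, and v as in the surrounding text):

```latex
% Finite horizon disturbance attenuation: the perturbation operator
% mapping v to z has L^2-gain at most \gamma on [0, T].
\|z\|_{[0,T]}^2 \;=\; E \int_0^T \|z(t)\|^2 \, dt
\;\le\; \gamma^2 \, E \int_0^T \|v(t)\|^2 \, dt ,
\qquad \forall\, v \in L^2_{\mathcal F}([0,T]; \mathbb R^{n_v}),\ v \neq 0 .
```

Equivalently, the perturbation operator associated with system (1) has norm not exceeding γ on the horizon [0, T].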
Consider the time-invariant nonlinear stochastic Markov jump system with (x, u, v)-dependent noise The infinite horizon H∞ control for system (5) is defined as follows.
Definition 2. For a given γ > 0, the control is called an infinite horizon H∞ control of system (5) if the following hold.(i)For the zero initial state x(0) = 0 and any nonzero v ∈ L²_F(R₊; R^{n_v}), we have with where is an operator associated with system (5) which is defined as (ii)System (5) is internally stable; that is, the following system is globally asymptotically stable in probability.
To give our main results, we need the following lemmas.
Lemma 3 (generalized Itô formula; see ). Let and be given -valued, -adapted processes, , and . Then, for given , , we have where
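The formula itself is not reproduced above. For a jump diffusion dx(t) = f dt + g dw(t) with mode process r(t) of generator (π_ij), the generalized Itô (Dynkin) formula and the associated operator typically take the following form (a reconstruction consistent with the infinitesimal generator used later in the proofs; f, g, and V are assumed names):

```latex
% Generalized Ito formula for Markov jump diffusions,
% for V(\cdot,\cdot,i) \in C^{2,1}(\mathbb R^n \times \mathbb R_+):
E\,V(x(T), T, r(T)) - E\,V(x(0), 0, r(0))
  = E \int_0^T \mathcal{L} V(x(t), t, r(t)) \, dt ,
\quad \text{where}
\mathcal{L} V(x, t, i)
  = V_t(x, t, i) + V_x^T(x, t, i)\, f(x, t, i)
  + \tfrac{1}{2}\, g^T(x, t, i)\, V_{xx}(x, t, i)\, g(x, t, i)
  + \sum_{j=1}^{N} \pi_{ij}\, V(x, t, j) .
```

The coupling term Σ_j π_ij V(x, t, j) is what distinguishes the Markov jump case from the ordinary Itô formula and is the source of the coupled HJIs below.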
Lemma 4. If , is a symmetric matrix and exists, we have
Proof. The lemma follows readily from the completing-the-squares technique, so the proof is omitted.
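The identity in Lemma 4 is not reproduced above. A standard completion-of-squares identity of the kind used in stochastic H∞ proofs (a plausible reconstruction; a, b, and R are assumed names, with R symmetric and invertible) is:

```latex
% Completion of squares: expand the right-hand side to verify.
2 a^T b + b^T R b
  = \left(b + R^{-1} a\right)^T R \left(b + R^{-1} a\right)
  - a^T R^{-1} a ,
\qquad a, b \in \mathbb R^n,\ R = R^T,\ \det R \neq 0 .
```

Expanding the quadratic term on the right gives b^T R b + 2 a^T b + a^T R^{-1} a, which confirms the identity; the cross terms between u and v in the diffusion can be handled this way even when they are not separable.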
3. Main Results
3.1. Finite Horizon Case
The following sufficient condition is presented for the finite horizon H∞ control of system (1). For convenience, denote in this subsection.
Theorem 5. Assume that there exists a set of nonnegative functions , , and for all nonzero , . If solves the following coupled HJIs: then is a finite horizon H∞ control of system (1).
Proof. For any and the initial state , , applying Lemma 3, we have
where denotes and
Since , , we have
which means that
Considering the above inequality and (14), we have
Applying Lemma 4 to and , we have where
Substituting (20) into (18) and considering (12) yields Taking and considering the second item of (12), (22) leads to which means in Definition 1. The theorem is proved.
Remark 6. The proof of Theorem 5 is based on the elementary identity (11), which avoids using the stochastic dissipativity theory employed in [4, 5]. We believe that identity (11) will find many other applications in system analysis and synthesis.
3.2. Infinite Horizon Case
In contrast to the finite horizon case, the infinite horizon H∞ control is more involved due to the additional requirement that the closed-loop system be internally stable. The following sufficient condition is derived for the infinite horizon H∞ control of system (5). In this subsection, denote .
Theorem 7. Assume that there exists a set of nonnegative functions , , and for all nonzero , . If solves the following coupled HJIs: then is an infinite horizon H∞ control of system (5).
Proof. Similar to the proof of Theorem 5, it is easy to show that holds under condition (24). Next, we need to prove that system (8) is globally asymptotically stable in probability. Let and let be the infinitesimal generator of system (8), analogous to in Lemma 3; then
It can be checked that
The following inequality is used during the calculation of (29):
Substituting (28) and (29) into (26) and considering (24) yields which implies that (8) is globally asymptotically stable in probability from . This completes the proof.
Remark 8. The methods proposed in [19, 24] cannot be applied to the H∞ problem for NSMJSs with (x, u, v)-dependent noise, although they are suitable for NSMJSs with (x, v)-dependent noise. One reason is that u and v are no longer separable in the conditions when they enter the diffusion term simultaneously. In the proofs of Theorems 5 and 7, we resort to Lemma 4 to overcome this difficulty.
Remark 9. When there are no Markov jump parameters, system (1)/(5) reduces to a general nonlinear stochastic system with (x, u, v)-dependent noise, which was studied in [20, 21]. However, all the conditions for H∞ control design of nonlinear stochastic systems in [20, 21] were given in terms of three coupled HJEs, which are difficult to solve. In this paper, a sufficient condition for H∞ control of nonlinear stochastic systems is derived by means of a single set of coupled HJIs, which is easier to verify than the three coupled HJEs of [20, 21].
From Theorem 7, the following nonlinear stochastic bounded real lemma can be derived for Markov jump systems.
Lemma 10. For a prescribed γ > 0, system (32) is internally stable and , if there exists a set of nonnegative functions , , and for all nonzero , , satisfying the following coupled HJIs:
Next, we present a sufficient condition for the H∞ control of the following linear stochastic Markov jump system with (x, u, v)-dependent noise:
Corollary 11. System (34) is internally stable and for a given γ > 0, if there exist matrices , satisfying the following LMIs: where Moreover, the state feedback gain matrices are given by .
Proof. First, consider the following unforced linear stochastic Markov jump system:
Let ; (33) in Lemma 10 can be written as follows:
By the Schur complement, (38) is equivalent to which also implies (39). Moreover, (40) can be rewritten as which yields according to the Schur complement. Pre- and postmultiplying (42) by and denoting , we have where and are defined as in (35).
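The Schur complement equivalence invoked above is the standard one, stated here for reference (S is an assumed name for the partitioned symmetric matrix):

```latex
% Schur complement lemma for a symmetric block matrix:
S = \begin{pmatrix} S_{11} & S_{12} \\ S_{12}^T & S_{22} \end{pmatrix} < 0
\quad \Longleftrightarrow \quad
S_{22} < 0 \ \ \text{and} \ \ S_{11} - S_{12}\, S_{22}^{-1} S_{12}^T < 0 .
```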
Now consider the closed-loop system (34) with the state feedback control , . Replacing by , , respectively, and setting in (43) yields (35). Therefore, the state feedback gain matrices can be obtained from .
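To illustrate the kind of coupled feasibility condition behind Corollary 11, the following sketch checks a candidate solution of coupled Lyapunov-type inequalities for a hypothetical two-mode jump-linear system with state-dependent noise. All matrices below are invented for illustration (they are not the paper's data), and the inequality checked is the stochastic-stability counterpart of the LMIs, not the full H∞ LMI (35):

```python
import numpy as np

# Hypothetical two-mode example: check that candidate matrices
# P_1, P_2 > 0 satisfy the coupled Lyapunov-type inequalities
#   A_i^T P_i + P_i A_i + C_i^T P_i C_i + sum_j pi_ij P_j < 0,
# where C_i is the state-dependent noise gain of mode i and
# (pi_ij) is the generator of the Markov chain.
A = [np.array([[-3.0, 0.0], [0.0, -2.5]]),
     np.array([[-2.0, 0.3], [0.1, -3.0]])]
C = [0.2 * np.eye(2), 0.1 * np.eye(2)]      # diffusion gains per mode
Pi = np.array([[-1.0, 1.0], [2.0, -2.0]])   # Markov chain generator
P = [np.eye(2), np.eye(2)]                  # candidate Lyapunov matrices

def coupled_lyapunov_residual(i):
    """Largest eigenvalue of the coupled inequality for mode i (< 0 = holds)."""
    M = A[i].T @ P[i] + P[i] @ A[i] + C[i].T @ P[i] @ C[i]
    M += sum(Pi[i, j] * P[j] for j in range(2))
    return np.linalg.eigvalsh(M).max()

feasible = all(coupled_lyapunov_residual(i) < 0 for i in range(2))
print(feasible)
```

In practice one would search for the P_i (or their inverses, as in the corollary's change of variables) with an LMI solver rather than guessing candidates; the eigenvalue check above only verifies a given candidate.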
Remark 12. Although the HJIs (12) in Theorem 5 and (24) in Theorem 7 can be solved by trial and error in some simple cases (see Example 13 in Section 4), they are difficult to handle for high-dimensional systems. To avoid solving the HJIs directly, the Taylor series approach  or the fuzzy approach based on the Takagi-Sugeno model [24, 26] can be adopted to design the nonlinear stochastic H∞ controller.
4. Numerical Examples
In this section, two numerical examples are provided to illustrate the effectiveness of the developed results.
Example 13. Consider the one-dimensional two-mode time-invariant nonlinear stochastic Markov jump system with generator matrix , whose two subsystems are as follows: Set , , with and to be determined; then the HJIs (24) become For given , the above inequalities have solutions and . According to Theorem 7, the infinite horizon H∞ controllers of system (44) are and .
Example 14. Consider the two-dimensional two-mode linear stochastic Markov jump system (34) with the following parameters: With the choice of , a possible solution of the LMIs (35) in Corollary 11 can be found using the MATLAB LMI Control Toolbox: Then, the control gain matrices of system (34) are as follows:
Figure 1 shows the switching between modes during the simulation, with the initial mode set to mode 1. The initial condition is chosen as , and the exogenous input is . By means of the Euler-Maruyama method , the state responses of the unforced system (37) and the controlled system (34) are shown in Figures 2 and 3, respectively. From Figure 3, one can see that the controlled system (34) achieves stability and H∞ attenuation performance in the mean square sense under the proposed controller.
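The simulation procedure above can be sketched as follows. This is a minimal Euler-Maruyama simulation of a scalar two-mode jump diffusion with made-up coefficients (the paper's system matrices are not reproduced here, so the generator Pi and the per-mode drift/diffusion coefficients a, c below are assumptions for illustration); mode switches are sampled from the first-order approximation P(i → j in dt) ≈ pi_ij dt:

```python
import numpy as np

def simulate(T=5.0, dt=1e-3, n_steps=5000, seed=0):
    """Euler-Maruyama simulation of dx = a_r x dt + c_r x dw with a
    two-state Markov mode process r(t) (illustrative coefficients)."""
    rng = np.random.default_rng(seed)
    Pi = np.array([[-2.0, 2.0], [1.0, -1.0]])  # assumed generator matrix
    a = [-1.0, -0.5]                           # assumed drifts per mode
    c = [0.3, 0.2]                             # assumed diffusions per mode
    x = np.empty(n_steps + 1)
    x[0] = 1.0
    mode = 0
    for k in range(n_steps):
        # switch mode with probability pi_ij * dt (first-order scheme)
        other = 1 - mode
        if rng.random() < Pi[mode, other] * dt:
            mode = other
        dw = rng.normal(0.0, np.sqrt(dt))      # Wiener increment
        x[k + 1] = x[k] + a[mode] * x[k] * dt + c[mode] * x[k] * dw
    return x

x = simulate()
print(x.shape)
```

The same loop structure extends to the vector case by replacing the scalar coefficients with the mode-dependent system matrices and the closed-loop feedback gains.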
5. Conclusion
In this paper, we have studied the H∞ control problem for NSMJSs with (x, u, v)-dependent noise. A sufficient condition for finite/infinite horizon H∞ control has been derived in terms of a set of coupled HJIs. Lemma 4 plays an essential role in the proofs of Theorems 5 and 7. The validity of the obtained results has been verified by two examples.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This work is supported by the National Natural Science Foundation of China (nos. 61203053 and 61174078), the China Postdoctoral Science Foundation (no. 2013M531635), the Research Fund for the Doctoral Program of Higher Education of China (no. 20120133120014), the Special Funds for Postdoctoral Innovative Projects of Shandong Province (no. 201203096), the Fundamental Research Fund for the Central Universities (nos. 11CX04042A, 12CX02010A, and 14CX02093A), the Research Fund for the Taishan Scholar Project of Shandong Province of China, and the SDUST Research Fund (no. 2011KYTD105).
References
L. Sheng, W. Zhang, and M. Gao, "Relationship between Nash equilibrium strategies and H2/H∞ control of stochastic Markov jump systems with multiplicative noise," IEEE Transactions on Automatic Control, vol. 59, no. 10, 2014.
Z. Lin, Y. Lin, and W. Zhang, "A unified design for state and output feedback H∞ control of nonlinear stochastic Markovian jump systems with state and disturbance-dependent noise," Automatica, vol. 45, no. 12, pp. 2955–2962, 2009.
V. Dragan, T. Morozan, and A.-M. Stoica, Mathematical Methods in Robust Control of Linear Stochastic Systems, Springer, New York, NY, USA, 2006.
M. Abu-Khalaf, J. Huang, and F. L. Lewis, Nonlinear H2/H∞ Constrained Feedback Control, Springer, London, UK, 2006.