Special Issue: Stochastic Systems and Control: Theory and Applications
Research Article | Open Access
Stabilization of Stochastic Markovian Jump Systems with Partially Unknown Transition Probabilities and Multiplicative Noise
We are concerned with the problems of stability and stabilization for stochastic Markovian jump systems subject to partially unknown transition probabilities and multiplicative noise, covering both the continuous- and discrete-time cases. Sufficient conditions guaranteeing that the considered systems are asymptotically stable in the mean square are presented in the form of LMIs, and the desired state feedback controllers are designed. It is shown that, by introducing the free-weighting matrix method, the obtained results are not only less conservative than existing ones but can also be regarded as extensions of the corresponding results for Markovian jump systems without noise. Numerical examples are finally provided to illustrate the effectiveness of the proposed theoretical results.
Over the past decades, considerable attention has been devoted to the study of a class of stochastic systems governed by Itô's differential equation because of their extensive applications in practical areas such as economics, finance, biology, and fault detection [1–5]. Many results related to these systems have been presented for stability [6–8], linear quadratic optimal control [9–12], output feedback control [13, 14], and H2/H∞ control [15, 16].
On the other hand, Markovian jump linear systems (MJLS), which are referred to as stochastic systems with abrupt changes, have come to play an important role in practical applications owing to the powerful modeling capability of Markov chains [17–19]. Up to now, a great number of interesting and important results on this subject have been reported; see, for example, [20–23] and the references therein. Very recently, the authors in [24] proposed a novel sliding mode observer-based fault tolerant control scheme to investigate the stabilization of nonlinear Markovian jump systems with output disturbances and simultaneous actuator and sensor faults. Furthermore, the state and fault estimation problem for stochastic switched systems with disturbances and sensor and actuator faults was addressed in [25]. It should be pointed out that the results obtained in [24, 25] are very important and useful for studying practical switched systems with sensor and actuator failures. Most of the results on MJLS have been based on the assumption that the transition probabilities are completely accessible. However, in some cases, such as networked control systems, it is difficult or even impossible to acquire complete information on the transition probabilities of MJLS. Therefore, there is a strong incentive to further study more general MJLS with incomplete knowledge of the transition probabilities. Until now, many results on this topic have been reported, for instance, on stability and stabilization [26–28] and H∞ control [28–31]. Among the aforementioned works,  proposed the free-connection weighting matrices method, which obtained less conservative results than those in [26–28, 32]. To the authors' knowledge, little work has been done on stochastic Markovian jump systems with partially unknown transition rates and multiplicative noise . Such systems are more general and realistic, so the research on this topic is theoretically interesting and challenging.
In this paper, we investigate the problems of stability and stabilization for stochastic Markovian jump systems with partially unknown transition rates and multiplicative noise, including the continuous- and discrete-time cases. With the aid of free-weighting matrices, a new stability criterion is established, which is less conservative than that in  for the continuous-time case. Moreover, it is theoretically shown that the previous results are recovered as special cases when the free-weighting matrices are chosen in particular forms. Furthermore, the results on stability and stabilization in the discrete-time case are also obtained. What we have obtained can be regarded as generalizations of the corresponding results without noise, such as those in [30–32]. Numerical examples are finally given to illustrate the effectiveness of the proposed theoretical results.
The outline of this paper is as follows. Section 2 contains some preliminary results. In Section 3, the problems of stochastic stability and stabilization for the considered systems are addressed, including the continuous-time and discrete-time cases. In Section 4, two simulation examples are given to illustrate the effectiveness of the proposed theoretical results. Conclusions are presented in Section 5.
Notations. R^n is the space of all n-dimensional real vectors with the usual 2-norm |·|; R^{m×n} is the space of all m × n real matrices; S_n is the set of all n × n symmetric matrices; P > 0 (resp., P < 0) means that P is a real symmetric positive definite (resp., negative definite) matrix; P ≥ 0 (resp., P ≤ 0) means that P is a real symmetric positive semidefinite (resp., negative semidefinite) matrix; A^T is the transpose of the matrix A; E[·] is the expectation operator; I is the identity matrix. P_i is the simple notation of P(i).
2. Preliminaries

Consider the following continuous- and discrete-time stochastic systems subject to Markov jump parameters and multiplicative noise, respectively:

dx(t) = [A(r_t)x(t) + B(r_t)u(t)]dt + [C(r_t)x(t) + D(r_t)u(t)]dw(t), (1)

x(k+1) = [A(r_k)x(k) + B(r_k)u(k)] + [C(r_k)x(k) + D(r_k)u(k)]w(k), (2)

where x(t) ∈ R^n (resp., x(k) ∈ R^n) is the state vector and u(t) ∈ R^m (resp., u(k) ∈ R^m) is the control input. For the continuous-time stochastic system (1), w(t) is a one-dimensional standard Wiener process defined on the complete probability space (Ω, F, P) with a filtration {F_t}_{t≥0}; {r_t, t ≥ 0} is a right-continuous homogeneous Markov chain taking values in a finite state space S = {1, 2, ..., N} with transition probability matrix Π = (π_ij)_{N×N} given by

Pr{r_{t+Δ} = j | r_t = i} = π_ij Δ + o(Δ) if j ≠ i, and 1 + π_ii Δ + o(Δ) if j = i,

where Δ > 0, lim_{Δ→0} o(Δ)/Δ = 0, and π_ij ≥ 0 (j ≠ i) represents the transition rate from mode i to mode j, which satisfies π_ii = −Σ_{j≠i} π_ij. For the discrete-time stochastic system (2), {w(k)} is a wide-sense stationary, second-order process with E[w(k)] = 0 and E[w(k)w(l)] = δ_{kl}, with δ_{kl} being a Kronecker delta; the parameter r_k denotes a discrete-time Markov chain taking values in a finite set S = {1, 2, ..., N} with transition probabilities Pr{r_{k+1} = j | r_k = i} = λ_ij, and the transition probability matrix is given as Λ = (λ_ij)_{N×N}, where λ_ij ≥ 0 and it satisfies Σ_{j∈S} λ_ij = 1 for any i ∈ S. In the case of r_t = i (resp., r_k = i), the system matrices of the ith mode are given by A_i = A(i), B_i = B(i), C_i = C(i), D_i = D(i).
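To make the discrete-time model concrete, the following sketch simulates a scalar, two-mode instance of system (2) without control input. All numbers (the mode matrices `A`, `C`, the transition probability matrix `LAM`) are hypothetical values chosen only for illustration, not taken from the paper's examples.

```python
import random

# Illustrative scalar, two-mode instance of the discrete-time model (2):
# x(k+1) = A[r]x(k) + C[r]x(k)w(k), with w(k) zero-mean, unit-variance
# white noise (so E[w(k)w(l)] is the Kronecker delta) and r(k) a Markov
# chain. All numbers below are hypothetical.
A = [0.5, 0.6]          # A_i for modes i = 0, 1
C = [0.2, 0.1]          # multiplicative-noise coefficients C_i
LAM = [[0.7, 0.3],      # transition probability matrix (rows sum to 1)
       [0.4, 0.6]]

def step_mode(i, rng):
    """Draw the next mode j with probability LAM[i][j]."""
    u, acc = rng.random(), 0.0
    for j, p in enumerate(LAM[i]):
        acc += p
        if u < acc:
            return j
    return len(LAM[i]) - 1

def simulate(x0, steps, seed=0):
    """One sample path of the unforced system (2)."""
    rng = random.Random(seed)
    x, r = x0, 0
    for _ in range(steps):
        w = rng.gauss(0.0, 1.0)        # second-order white noise
        x = A[r] * x + C[r] * x * w    # system (2) with u(k) = 0
        r = step_mode(r, rng)
    return x

print(abs(simulate(1.0, 200)))
```

Since both modes are contractive here, a typical sample path is driven toward the origin despite the multiplicative noise.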
In this paper, the transition rates of the Markovian jump process are considered to be partially accessible. For example, the transition rate matrix Π for system (1) (or the transition probability matrix Λ for system (2)) with N = 4 operation modes may be described as

[ π_11   ?      π_13   ?
  ?      ?      π_23   π_24
  π_31   ?      ?      π_34
  ?      π_42   π_43   ?  ],

where each unknown transition rate (or probability) is represented by "?". For each i ∈ S, we define S^i = S^i_K ∪ S^i_UK, where S^i_K = {j : π_ij (or λ_ij) is known} and S^i_UK = {j : π_ij (or λ_ij) is unknown}. Furthermore, in the case of S^i_K ≠ ∅, it is written as S^i_K = {k^i_1, k^i_2, ..., k^i_m} with 1 ≤ m ≤ N, where k^i_s denotes the sequence number of the sth known element in the ith row of matrix Π or Λ.
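The bookkeeping behind the index sets S^i_K and S^i_UK can be sketched as follows; `None` plays the role of the "?" entries, and the 4-mode matrix below is hypothetical.

```python
# Sketch: a transition matrix with partially unknown entries (None marks
# an unknown rate) and, per row i, the known/unknown column index sets
# S_K^i and S_UK^i. The 4-mode matrix is hypothetical.
UNKNOWN = None
PI = [
    [-1.3,    0.2,     UNKNOWN, UNKNOWN],
    [UNKNOWN, -2.0,    UNKNOWN, 0.6],
    [0.5,     UNKNOWN, -1.1,    UNKNOWN],
    [UNKNOWN, 0.4,     0.3,     UNKNOWN],
]

def split_indices(row):
    """Return (S_K, S_UK) for one row of the matrix."""
    known = [j for j, v in enumerate(row) if v is not UNKNOWN]
    unknown = [j for j, v in enumerate(row) if v is UNKNOWN]
    return known, unknown

for i, row in enumerate(PI):
    k, uk = split_indices(row)
    print(i, k, uk)
```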
3. Stochastic Stability and Stabilization Analysis
In this section, the problems of stability and stabilization for stochastic Markovian jump systems with multiplicative noise and partially unknown transition rates are investigated in both the continuous- and discrete-time cases. State feedback controllers guaranteeing that the systems are stable in the mean square are designed.
Lemma 2. System (1) with u(t) ≡ 0 is asymptotically stable in the mean square if there are matrices P_i > 0 such that the following LMIs hold for each i ∈ S:

A_i^T P_i + P_i A_i + C_i^T P_i C_i + Σ_{j∈S} π_ij P_j < 0.  (6)
Proof. Select a stochastic Lyapunov function candidate

V(x(t), r_t) = x(t)^T P(r_t) x(t),  (7)

where P(r_t) = P_i > 0 for r_t = i. The infinitesimal operator L acting on V is given as follows:

LV(x(t), i) = x(t)^T [A_i^T P_i + P_i A_i + C_i^T P_i C_i + Σ_{j∈S} π_ij P_j] x(t).  (8)

For arbitrary x(t) ≠ 0, it is shown from (6) that LV(x(t), i) < 0. Therefore, by , system (1) with u(t) ≡ 0 is asymptotically stable in the mean square.
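As a sanity check of the mean-square stability notion, consider a hypothetical scalar, single-mode specialization of (1): for dx = a·x dt + c·x dw, the LMI of Lemma 2 reduces to 2a + c^2 < 0, under which E[x(t)^2] = x0^2·exp((2a + c^2)t). The sketch below verifies the decay by an Euler-Maruyama Monte Carlo; the numbers a, c and the discretization parameters are illustrative choices.

```python
import random

# Scalar, single-mode sanity check of the Lemma 2 condition: with
# 2*a + c**2 = -1.75 < 0 the second moment decays like exp(-1.75*t),
# i.e. E[x(1)^2] is approximately exp(-1.75) ~ 0.174 for x0 = 1.
a, c = -1.0, 0.5

def mean_square(t_end=1.0, steps=500, paths=1000, seed=0):
    """Monte Carlo estimate of E[x(t_end)^2] via Euler-Maruyama."""
    rng = random.Random(seed)
    dt = t_end / steps
    total = 0.0
    for _ in range(paths):
        x = 1.0
        for _ in range(steps):
            dw = rng.gauss(0.0, dt ** 0.5)   # Wiener increment
            x += a * x * dt + c * x * dw
        total += x * x
    return total / paths

print(mean_square())   # analytic value: exp(-1.75), about 0.174
```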
Lemma 3. System (2) with u(k) ≡ 0 is asymptotically stable in the mean square if there are matrices P_i > 0 for each i ∈ S such that the following LMIs hold:

A_i^T (Σ_{j∈S} λ_ij P_j) A_i + C_i^T (Σ_{j∈S} λ_ij P_j) C_i − P_i < 0.  (9)
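For intuition, Lemma 3 can be specialized to hypothetical scalar modes, where the LMI reads (a_i^2 + c_i^2)·Σ_j λ_ij p_j < p_i for some p_i > 0. When such p_i exist, they can be found by a simple fixed-point iteration; all numbers below are illustrative.

```python
# Scalar two-mode check of the Lemma 3 condition (hypothetical numbers):
# find p_i > 0 with d_i * sum_j lam_ij * p_j < p_i, where
# d_i = a_i**2 + c_i**2. The iteration p <- D*Lam*p + 1 converges to a
# valid certificate exactly when the spectral radius of D*Lam is below 1.
A = [0.5, 0.6]
C = [0.2, 0.1]
LAM = [[0.7, 0.3], [0.4, 0.6]]
d = [A[i] ** 2 + C[i] ** 2 for i in range(2)]

p = [1.0, 1.0]
for _ in range(200):
    p = [d[i] * sum(LAM[i][j] * p[j] for j in range(2)) + 1.0
         for i in range(2)]

# Verify the scalar Lemma 3 inequality mode by mode.
ok = all(d[i] * sum(LAM[i][j] * p[j] for j in range(2)) < p[i]
         for i in range(2))
print(ok)
```

At the fixed point the left side equals p_i − 1, so convergence of the iteration directly produces a strict certificate.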
3.1. Continuous-Time Case
This section aims to develop a new stability criterion for system (1) by making use of free-weighting matrices. It will be shown that the obtained criterion is less conservative than the existing ones in .
Theorem 4. Unforced system (1) with partially unknown transition rates is asymptotically stable in the mean square if there are matrices P_i > 0 and symmetric matrices W_i satisfying the following LMIs for each i ∈ S:

A_i^T P_i + P_i A_i + C_i^T P_i C_i + Σ_{j∈S^i_K} π_ij (P_j − W_i) < 0,  (10)

P_j − W_i ≤ 0,  j ∈ S^i_UK, j ≠ i,  (11)

P_i − W_i ≥ 0,  i ∈ S^i_UK.  (12)
Proof. It is noted that Σ_{j∈S} π_ij = 0 leads to Σ_{j∈S} π_ij W_i = 0 for arbitrary symmetric matrices W_i. Next, the left side of (6) can be rewritten as

A_i^T P_i + P_i A_i + C_i^T P_i C_i + Σ_{j∈S^i_K} π_ij (P_j − W_i) + Σ_{j∈S^i_UK} π_ij (P_j − W_i).  (13)

If i ∈ S^i_K, we have π_ij ≥ 0 for all j ∈ S^i_UK. In this case, inequalities (10) and (11) can result in (6).
On the other hand, if i ∈ S^i_UK, Σ_{j∈S^i_UK} π_ij (P_j − W_i) can be represented as

π_ii (P_i − W_i) + Σ_{j∈S^i_UK, j≠i} π_ij (P_j − W_i),  (14)

where π_ii ≤ 0 and π_ij ≥ 0 for j ≠ i. Obviously, (10), (11), and (12) together with (14) can deduce (6). This completes the proof.
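The key identity behind the free-weighting step — a row sum of zero makes subtracting an arbitrary W_i from every P_j cost-free — can be checked numerically with scalar stand-ins for the matrices; all values below are hypothetical.

```python
# Numeric check of the free-weighting identity: since the row of the
# transition-rate matrix sums to zero, sum_j pi_ij * P_j equals
# sum_j pi_ij * (P_j - W_i) for ANY symmetric W_i. Scalars stand in
# for the matrices; all values are hypothetical.
pi_row = [-1.5, 0.6, 0.9]     # transition rates, summing to zero
P = [2.0, 3.5, 1.2]           # scalar stand-ins for P_j
W = 42.0                      # arbitrary free-weighting "matrix" W_i

lhs = sum(pi_row[j] * P[j] for j in range(3))
rhs = sum(pi_row[j] * (P[j] - W) for j in range(3))
print(abs(lhs - rhs) < 1e-9)
```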
Below, we further discuss the stabilization problem of system (1). Employing the state feedback controller u(t) = K(r_t)x(t) in system (1), we obtain the closed-loop system

dx(t) = [A(r_t) + B(r_t)K(r_t)]x(t)dt + [C(r_t) + D(r_t)K(r_t)]x(t)dw(t),  (15)

where K(r_t) is the controller gain to be determined.
Theorem 5. The closed-loop system (15) with partially unknown transition rates is asymptotically stable in the mean square if there are matrices , , and for each satisfying the following LMIs: where with . The stabilizing state feedback controllers are presented as .
Proof. Substituting the state feedback controller into (1), we derive the following closed-loop system: If , by the Schur complement lemma, inequality (16) is equivalent to Pre- and postmultiplying (22) and (19) by , respectively, we have Let ; according to Theorem 4, LMIs (23) imply that the closed-loop system (21) is asymptotically stable in the mean square.
If , from the Schur complement lemma, (17) and (18) are, respectively, equivalent to Similar to the derivation of (23), inequalities (17), (18), and (19) imply that the closed-loop system (21) is asymptotically stable in the mean square by Theorem 4.
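The Schur complement step used above can be illustrated on hypothetical scalar blocks: for a symmetric matrix [[a, b], [b, c]] with c < 0, negative definiteness is equivalent to a − b·c^{-1}·b < 0. The sketch compares the eigenvalue test with the complement test on a small grid of sample values.

```python
import math

# Scalar illustration of the Schur complement lemma: for the symmetric
# block matrix M = [[a, b], [b, c]] with c < 0, M < 0 holds iff the
# Schur complement a - b*b/c is negative. Grid values are hypothetical.
def neg_def(a, b, c):
    """Are both eigenvalues of [[a, b], [b, c]] negative?"""
    mean = (a + c) / 2.0
    rad = math.hypot((a - c) / 2.0, b)   # half eigenvalue gap
    return mean + rad < 0.0              # largest eigenvalue < 0

def schur_neg(a, b, c):
    """Schur complement test, valid since every c in the grid is < 0."""
    return c < 0.0 and a - b * b / c < 0.0

agree = all(neg_def(a, b, c) == schur_neg(a, b, c)
            for a in (-3.0, -1.0, 0.5)
            for b in (0.0, 0.7, 1.5)
            for c in (-2.0, -0.5))
print(agree)
```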
Remark 6. It can be seen from Theorem 4 that a new stability criterion for system (1) has been established by introducing free-weighting matrices, which is less conservative than Theorem of . In the case of , Theorem 4 coincides with Theorem of . Obviously, the free-weighting matrices provide more degrees of freedom in the choice of variables. In addition, it should be mentioned that Theorem of  is incorrect, because the condition that  (which appeared in  of ) may not be true.
3.2. Discrete-Time Case
In this section, we focus our attention on the stability and stabilization problems for discrete-time stochastic Markovian jump systems subject to incomplete knowledge of the transition probabilities and multiplicative noise. Sufficient conditions for the stability and stabilization of the systems under consideration are formulated as LMIs.
Theorem 7. System (2) with u(k) ≡ 0 is asymptotically stable in the mean square if there exist matrices P_i > 0 and symmetric matrices W_i such that the following inequalities hold: where
Proof. Because Σ_{j∈S} λ_ij = 1 for each i ∈ S, the following equality holds for an arbitrary symmetric matrix W_i:

Σ_{j∈S} λ_ij W_i = W_i.  (28)

Note that the left side of (9) can be expressed as Considering λ_ij ≥ 0, (29) can be rewritten as It is easy to see that  holds if the following inequalities are satisfied: From the Schur complement lemma, it can be verified that (31) and (32) are, respectively, equivalent to (25) and (26). Therefore, it is shown from Lemma 3 that system (2) with u(k) ≡ 0 is asymptotically stable in the mean square. This completes the proof.
Next, we investigate the stabilization problem of system (2) with partially unknown transition probabilities. The state feedback controller is given in the form of u(k) = K(r_k)x(k). Applying this controller to system (2) results in the following closed-loop system:

x(k+1) = [A(r_k) + B(r_k)K(r_k)]x(k) + [C(r_k) + D(r_k)K(r_k)]x(k)w(k).  (33)
Theorem 8. The closed-loop system (33) is asymptotically stable in the mean square if there exist matrices X_i > 0 and Y_i such that the following LMIs hold: where The desired state feedback controller gains are given by K_i = Y_i X_i^{-1}.
Proof. Applying the state feedback controller to system (2), we derive the following closed-loop system: Pre- and postmultiplying the left sides of (34) and (35) by , respectively, and letting , we see that (34) and (35) are, respectively, equivalent to It then follows from Theorem 7 that the closed-loop system (33) is asymptotically stable in the mean square.
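A hypothetical scalar illustration of the stabilization mechanism: a mode with |a_i| ≥ 1 is not mean-square stable on its own, but the feedback u = k_i·x shifts the closed-loop coefficient to a_i + b_i·k_i. The gains below are picked by hand for illustration, not computed from the LMIs of Theorem 8, and the closed loop is then certified with the scalar form of the Lemma 3 test.

```python
# Scalar, two-mode stabilization sketch (all numbers hypothetical).
A = [1.2, 0.9]                 # open-loop a_i (mode 0 unstable alone)
B = [1.0, 0.5]
C = [0.2, 0.1]                 # multiplicative-noise coefficients
K = [-0.9, -0.8]               # hand-picked gains: a_i + b_i*k_i = 0.3, 0.5
LAM = [[0.7, 0.3], [0.4, 0.6]]

acl = [A[i] + B[i] * K[i] for i in range(2)]   # closed-loop coefficients
d = [acl[i] ** 2 + C[i] ** 2 for i in range(2)]

# Scalar Lemma 3 certificate: find p_i > 0 with
# d_i * sum_j lam_ij * p_j < p_i, via the fixed point of p <- D*Lam*p + 1.
p = [1.0, 1.0]
for _ in range(200):
    p = [d[i] * sum(LAM[i][j] * p[j] for j in range(2)) + 1.0
         for i in range(2)]
stable = all(d[i] * sum(LAM[i][j] * p[j] for j in range(2)) < p[i]
             for i in range(2))
print(stable)
```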
4. Numerical Examples

In this section, two numerical examples are provided to demonstrate the effectiveness of the proposed approaches, including the continuous- and discrete-time cases.
Consider the continuous-time stochastic Markov jump system in the form of (15) with the following matrices: The transition rate matrix with partially unknown elements is given as
Based on Theorem 5, the controller gains for system (15) are given as follows: Consider the discrete-time stochastic Markov jump system (33) with four operation modes and the following system matrices: The transition probability matrix with partially unknown elements is given as By Theorem 8, the controller gains for system (33) are given as
Remark 9. It was shown that random packet losses and channel delays in networked control systems are often modeled as Markov chains, and the variation of delays and packet dropouts may be random in different periods of the network . Therefore, it is difficult to obtain all elements of the transition probability matrix. The same problem may arise in the single-link robot arm of [31, 35], whose dynamic equation is presented as

θ̈(t) = −(g l m / J) sin(θ(t)) − (D / J) θ̇(t) + (1 / J) u(t),  (48)

where θ(t) is the angle position of the arm, u(t) is the control input, g is the acceleration of gravity, l is the length of the arm, D is the coefficient of viscous friction which is assumed to be time invariant, m is the mass of the payload, and J is the moment of inertia. Let x1(t) = θ(t), x2(t) = θ̇(t), and x(t) = [x1(t), x2(t)]^T. Under this condition, sin(θ(t)) is usually approximated by θ(t) when θ(t) is about 0 rad. Next, consider that system (48) can be modeled as a Markovian jump system with 4 subsystems:

ẋ(t) = A(r_t)x(t) + B(r_t)u(t),  (49)

where However, it might occur that (49) is subject to some random environmental noise effects [1, 2]. In this case, (49) becomes a stochastic Markovian jump system with partially unknown transition probabilities and multiplicative noise If the parameters are taken as , , , , , , , , , , , then the controller gains for system (15) are provided in (44).
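The small-angle step used for the robot-arm model — replacing sin(θ) by θ near θ = 0 rad — can be checked quickly; the 0.1 rad range below is an illustrative choice, not taken from the text.

```python
import math

# Relative error of the small-angle approximation sin(theta) ~ theta,
# evaluated on (0, 0.1] rad; the error behaves like theta**2 / 6, so it
# stays below 0.2% on this illustrative range.
worst = max(abs(math.sin(t) - t) / abs(t)
            for t in (x / 1000.0 for x in range(1, 101)))
print(worst < 0.002)
```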
5. Conclusion

In this paper, the stability and stabilization problems for a class of stochastic Markovian jump linear systems (MJLS) with partly unknown transition rates and multiplicative noise have been studied. LMI-based sufficient conditions ensuring that the considered systems are stable in the mean square are given for both the continuous- and discrete-time cases. Numerical examples are provided to show the validity and applicability of the developed results.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments

This work was supported by NSF of China (Grant nos. 61573227 and 61703248), the State Key Laboratory of Alternate Electrical Power System with Renewable Energy Sources (Grant no. LAPS16011), Shandong Provincial Natural Science Foundation, China (Grant no. ZR2015FM014), and the SDUST Research Fund no. 2015TDJH105.
References

- X. Mao, Stochastic Differential Equations and Their Applications, Horwood, Chichester, UK, 1997.
- B. Øksendal, Stochastic Differential Equations: An Introduction with Applications, Springer, Berlin, Germany, 6th edition, 1998.
- Y. Wang and Z. Huang, “Backward stochastic differential equations with non-Lipschitz coefficients,” Statistics & Probability Letters, vol. 79, no. 12, pp. 1438–1443, 2009.
- X. Meng, “Stability of a novel stochastic epidemic model with double epidemic hypothesis,” Applied Mathematics and Computation, vol. 217, no. 2, pp. 506–515, 2010.
- X. Meng, L. Wang, and T. Zhang, “Global dynamics analysis of a nonlinear impulsive stochastic chemostat system in a polluted environment,” Journal of Applied Analysis and Computation, vol. 6, no. 3, pp. 865–875, 2016.
- W. Zhang, H. Zhang, and B.-S. Chen, “Generalized Lyapunov equation approach to state-dependent stochastic stabilization/detectability criterion,” IEEE Transactions on Automatic Control, vol. 53, no. 7, pp. 1630–1642, 2008.
- H. Ma and Y. Jia, “Stability analysis for stochastic differential equations with infinite Markovian switchings,” Journal of Mathematical Analysis and Applications, vol. 435, no. 1, pp. 593–605, 2016.
- X. Li, X. Lin, and Y. Lin, “Lyapunov-type conditions and stochastic differential equations driven by G-Brownian motion,” Journal of Mathematical Analysis and Applications, vol. 439, no. 1, pp. 235–255, 2016.
- H. Ma and T. Hou, “A separation theorem for stochastic singular linear quadratic control problem with partial information,” Acta Mathematicae Applicatae Sinica. English Series, vol. 29, no. 2, pp. 303–314, 2013.
- G. Li and W. Zhang, “Study on indefinite stochastic linear quadratic optimal control with inequality constraint,” Journal of Applied Mathematics, vol. 2013, Article ID 805829, 9 pages, 2013.
- X. Liu, Y. Li, and W. Zhang, “Stochastic linear quadratic optimal control with constraint for discrete-time systems,” Applied Mathematics and Computation, vol. 228, pp. 264–270, 2014.
- G. Li and M. Chen, “Infinite horizon linear quadratic optimal control for stochastic difference time-delay systems,” Advances in Difference Equations, vol. 2015, article 14, 2015.
- Z. Yan, G. Zhang, J. Wang, and W. Zhang, “State and output feedback finite-time guaranteed cost control of linear Itô stochastic systems,” Journal of Systems Science & Complexity, vol. 28, no. 4, pp. 813–829, 2015.
- Y. Zhao and W. Zhang, “Observer-based controller design for singular stochastic Markov jump systems with state dependent noise,” Journal of Systems Science & Complexity, vol. 29, no. 4, pp. 946–958, 2016.
- M. Gao, L. Sheng, and W. Zhang, “Stochastic H2/H∞ control of nonlinear systems with time-delay and state-dependent noise,” Applied Mathematics and Computation, vol. 266, pp. 429–440, 2015.
- W. Zhang, L. Ma, and T. Zhang, “Discrete-time mean-field stochastic H2/H∞ control,” Journal of Systems Science & Complexity, vol. 30, no. 4, pp. 765–781, 2017.
- O. L. Costa and M. D. Fragoso, “Stability results for discrete-time linear systems with Markovian jumping parameters,” Journal of Mathematical Analysis and Applications, vol. 179, no. 1, pp. 154–178, 1993.
- E. Boukas, Stochastic Switching Systems: Analysis and Design, Birkhäuser, Boston, Mass, USA, 2005.
- O. L. V. Costa, M. D. Fragoso, and R. P. Marques, Discrete-time Markovian Jump Linear Systems, Springer, London, UK, 2005.
- J. Gao, B. Huang, and Z. Wang, “LMI-based robust H∞ control of uncertain linear jump systems with time-delays,” Automatica, vol. 37, no. 7, pp. 1141–1146, 2001.
- E. K. Boukas and Z. K. Liu, “Robust H∞ control of discrete-time Markovian jump linear systems with mode-dependent time-delays,” IEEE Transactions on Automatic Control, vol. 46, no. 12, pp. 1918–1924, 2001.
- Y. Fang and K. A. Loparo, “Stabilization of continuous-time jump linear systems,” IEEE Transactions on Automatic Control, vol. 47, no. 10, pp. 1590–1603, 2002.
- Y. Zhao and W. Zhang, “New results on stability of singular stochastic Markov jump systems with state-dependent noise,” International Journal of Robust and Nonlinear Control, vol. 26, no. 10, pp. 2169–2186, 2016.
- H. Li, P. Shi, and D. Yao, "Adaptive sliding mode control of Markov jump nonlinear systems with actuator faults," IEEE Transactions on Automatic Control, vol. 62, no. 4, pp. 1933–1939, 2016.
- S. Yin, H. Gao, J. Qiu, and O. Kaynak, “Descriptor reduced-order sliding mode observers design for switched systems with sensor and actuator faults,” Automatica, vol. 76, pp. 282–292, 2017.
- L. Zhang, E.-K. Boukas, and J. Lam, “Analysis and synthesis of Markov jump linear systems with time-varying delays and partially known transition probabilities,” IEEE Transactions on Automatic Control, vol. 53, no. 10, pp. 2458–2464, 2008.
- L. Zhang and E.-K. Boukas, “Stability and stabilization of Markovian jump linear systems with partly unknown transition probabilities,” Automatica, vol. 45, no. 2, pp. 463–468, 2009.
- L. Zhang and E.-K. Boukas, “H∞ control for discrete-time Markovian jump linear systems with partly unknown transition probabilities,” International Journal of Robust and Nonlinear Control, vol. 19, no. 8, pp. 868–883, 2009.
- Z. Wu, H. Su, and J. Chu, “H∞ model reduction for discrete singular Markovian jump systems,” Proceedings of the Institution of Mechanical Engineers. Part I: Journal of Systems and Control Engineering, vol. 223, no. 7, pp. 1017–1025, 2009.
- Y. Wang, Y. Sun, Z. Zuo, and M. Z. Chen, “Robust H∞ control of discrete-time Markovian jump systems in the presence of incomplete knowledge of transition probabilities and saturating actuator,” International Journal of Robust and Nonlinear Control, vol. 22, no. 15, pp. 1753–1764, 2012.
- Y. Zhang, Y. He, M. Wu, and J. Zhang, “H∞ control for discrete-time Markovian jump systems with partial information on transition probabilities,” Asian Journal of Control, vol. 15, no. 5, pp. 1397–1406, 2013.
- Y. Zhang, Y. He, M. Wu, and J. Zhang, “Stabilization for Markovian jump systems with partial information on transition probability based on free-connection weighting matrices,” Automatica, vol. 47, no. 1, pp. 79–84, 2011.
- L. Sheng and M. Gao, “Stabilization control of stochastic Markov jump systems with partly unknown transition probabilities,” Control and Decision, vol. 26, no. 11, pp. 1716–1720, 1725, 2011.
- P. Seiler and R. Sengupta, “An H∞ approach to networked control,” IEEE Transactions on Automatic Control, vol. 50, no. 3, pp. 356–364, 2005.
- L. Sheng and M. Gao, “Stabilization for Markovian jump nonlinear systems with partly unknown transition probabilities via fuzzy control,” Fuzzy Sets and Systems, vol. 161, no. 21, pp. 2780–2792, 2010.
Copyright © 2017 Yong Zhao and You Fu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.