Mathematical Problems in Engineering
Volume 2012 (2012), Article ID 869842, 11 pages
New Results on Stability and Stabilization of Markovian Jump Systems with Partly Known Transition Probabilities
Yafeng Guo and Fanglai Zhu
Department of Control Science and Engineering, School of Electronics and Information Engineering, Tongji University, Shanghai 201804, China
Received 28 November 2011; Revised 21 January 2012; Accepted 28 January 2012
Academic Editor: Xue-Jun Xie
Copyright © 2012 Yafeng Guo and Fanglai Zhu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
This paper investigates the stability and stabilization of Markovian jump linear systems with partial information on the transition probabilities. A new stability criterion is obtained for these systems. Compared with existing results, the proposed criterion has fewer variables yet, as is proved theoretically, does not introduce any additional conservatism. Furthermore, a sufficient condition for state-feedback controller design is derived in terms of linear matrix inequalities (LMIs). Finally, numerical examples are given to illustrate the effectiveness of the proposed method.
1. Introduction
Over the past few decades, Markov jump systems (MJSs) have attracted much attention from researchers throughout the world, owing to their important role in many practical systems. MJSs are well suited to model plants whose structures are subject to random abrupt changes, which may result from random component failures, abrupt environmental changes, disturbances, changes in the interconnections of subsystems, and so forth.
Since the transition probabilities of the jumping process determine the behavior of an MJS, most investigations of MJSs assume that the information on the transition probabilities is completely known (see, e.g., [2–5]). In most cases, however, the transition probabilities of MJSs are not exactly known, so it is necessary, both in theory and in practice, to consider more general jump systems with only partial information on the transition probabilities. Recently, [6–9] considered general MJSs with partly unknown transition probabilities. In these papers, however, fixed connection weighting matrices were introduced when the terms containing unknown transition probabilities were separated from the others, which may lead to conservatism. Excellent recent work has reduced this conservatism; its basic idea is to introduce free-connection weighting matrices in place of the fixed ones. However, that method has to increase the number of decision variables, and, as has been noted, more decision variables imply a heavier numerical burden. Therefore, developing new methods that introduce no additional variables while not increasing conservatism is valuable work, which motivates the present study.
In this paper, we are concerned with the stability and stabilization of MJSs with partly unknown transition probabilities. By fully utilizing the relationships among the transition rates of the various subsystems, we obtain a new stability criterion. The proposed criterion avoids introducing any connection weighting matrix, yet, as is proved theoretically, it does not increase conservatism compared with the existing result. More importantly, because the proposed stability criterion needs no slack matrices, the relationships among the Lyapunov matrices are highlighted; this helps in understanding the effect of the unknown transition probabilities on stability. Then, based on the proposed stability criterion, a condition for controller design is derived in terms of LMIs. Finally, numerical examples are given to illustrate the effectiveness of the proposed method.
Notation. In this paper, $\mathbb{R}^n$ and $\mathbb{R}^{m \times n}$ denote the $n$-dimensional Euclidean space and the set of all $m \times n$ real matrices, respectively. $\mathbb{Z}^+$ represents the set of positive integers. The notation $P > 0$ ($P \geq 0$) means that $P$ is a real symmetric and positive definite (positive semidefinite) matrix. For the notation $(\Omega, \mathcal{F}, \mathcal{P})$, $\Omega$ represents the sample space, $\mathcal{F}$ is the $\sigma$-algebra of subsets of the sample space, and $\mathcal{P}$ is the probability measure on $\mathcal{F}$. $\mathbb{E}\{\cdot\}$ stands for the mathematical expectation. Matrices, if their dimensions are not explicitly stated, are assumed to be compatible for algebraic operations.
2. Problem Formulation
Consider the following stochastic system with Markovian jump parameters:
$$\dot{x}(t) = A(r_t)x(t) + B(r_t)u(t), \qquad x(0) = x_0, \tag{2.1}$$
where $x(t) \in \mathbb{R}^n$ is the state vector, $x_0$ denotes the initial condition, and $\{r_t, t \geq 0\}$ is a right-continuous Markov process on the probability space $(\Omega, \mathcal{F}, \mathcal{P})$ taking values in a finite state space $\mathcal{S} = \{1, 2, \ldots, N\}$ with generator $\Pi = [\pi_{ij}]_{N \times N}$, given by
$$\Pr\{r_{t+\Delta} = j \mid r_t = i\} = \begin{cases} \pi_{ij}\Delta + o(\Delta), & j \neq i, \\ 1 + \pi_{ii}\Delta + o(\Delta), & j = i, \end{cases} \tag{2.2}$$
where $\Delta > 0$, $\lim_{\Delta \to 0} o(\Delta)/\Delta = 0$, and $\pi_{ij} \geq 0$, for $j \neq i$, is the transition rate from mode $i$ at time $t$ to mode $j$ at time $t + \Delta$, with $\pi_{ii} = -\sum_{j \neq i} \pi_{ij}$. $A(r_t)$ and $B(r_t)$ are known matrix functions of the Markov process.
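To make the model concrete, the jump dynamics can be simulated by discretizing both the Markov chain and the state equation. The following Python sketch is purely illustrative: the two-mode matrices and the generator are hypothetical (not from this paper), and it simulates the uncontrolled system $\dot{x}(t) = A(r_t)x(t)$.

```python
import numpy as np

def simulate_mjls(A_list, Pi, x0, r0, T, dt=1e-3, seed=0):
    """Forward-Euler simulation of dx/dt = A(r_t) x(t), where r_t is a
    continuous-time Markov chain with generator Pi = [pi_ij].
    Over a short step dt, the chain leaves mode i with probability
    approximately -pi_ii*dt and jumps to j != i proportionally to pi_ij."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    r = r0
    traj = [x.copy()]
    for _ in range(int(round(T / dt))):
        if rng.random() < -Pi[r, r] * dt:      # a jump occurs in this step
            w = Pi[r].copy()
            w[r] = 0.0                          # jump targets are j != i
            r = rng.choice(len(w), p=w / w.sum())
        x = x + A_list[r] @ x * dt             # Euler step in the active mode
        traj.append(x.copy())
    return np.array(traj), r

# Hypothetical two-mode example: both modes stable, so trajectories decay.
A = [np.array([[-1.0, 0.0], [0.0, -1.0]])] * 2
Pi = np.array([[-0.5, 0.5], [0.3, -0.3]])
xs, _ = simulate_mjls(A, Pi, x0=[1.0, -1.0], r0=0, T=2.0)
```

The step probability $-\pi_{ii}\,\Delta$ directly mirrors the generator definition in (2.2); for small `dt` the discretization error in the jump process is $o(\Delta)$.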
Since the transition probabilities depend on the transition rates for continuous-time MJSs, the transition rates of the jumping process are considered to be only partly accessible in this paper. For instance, the transition rate matrix $\Pi$ for system (2.1) with $N$ operation modes may be expressed as
$$\Pi = \begin{bmatrix} \pi_{11} & ? & \pi_{13} & \cdots & ? \\ ? & ? & \pi_{23} & \cdots & \pi_{2N} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \pi_{N1} & ? & ? & \cdots & \pi_{NN} \end{bmatrix}, \tag{2.3}$$
where “$?$” represents an unknown transition rate.
For notational clarity, for all $i \in \mathcal{S}$, we denote $\mathcal{S} = \mathcal{I}_k^i \cup \mathcal{I}_{uk}^i$ with
$$\mathcal{I}_k^i \triangleq \{\, j : \pi_{ij} \text{ is known for } j \in \mathcal{S} \,\}, \qquad \mathcal{I}_{uk}^i \triangleq \{\, j : \pi_{ij} \text{ is unknown for } j \in \mathcal{S} \,\}. \tag{2.4}$$
Moreover, if $\mathcal{I}_k^i \neq \emptyset$, it is further described as $\mathcal{I}_k^i = \{k_1^i, \ldots, k_m^i\}$, where $m$ is a nonnegative integer with $1 \leq m \leq N$, and $k_j^i \in \mathbb{Z}^+$ represents the $j$th known element of the set $\mathcal{I}_k^i$ in the $i$th row of the transition rate matrix $\Pi$.
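In code, a partly known transition rate matrix can be encoded with NaN in place of "?", and the index sets $\mathcal{I}_k^i$ and $\mathcal{I}_{uk}^i$ of (2.4) recovered as the known and unknown column indices of each row. The $4 \times 4$ matrix below is hypothetical, for illustration only.

```python
import numpy as np

def index_sets(Pi_partial):
    """Return (known, unknown) where known[i] lists the columns j with
    pi_ij known (the set I_k^i) and unknown[i] the columns with pi_ij
    unknown (the set I_uk^i), using NaN to mark unknown entries."""
    known, unknown = [], []
    for row in np.asarray(Pi_partial, dtype=float):
        known.append([j for j in range(len(row)) if not np.isnan(row[j])])
        unknown.append([j for j in range(len(row)) if np.isnan(row[j])])
    return known, unknown

nan = float("nan")
Pi_partial = [[-1.3, 0.2, nan, nan],
              [nan,  nan, 0.3, 0.6],
              [0.8,  nan, nan, nan],
              [0.4,  0.5, nan, nan]]
I_k, I_uk = index_sets(Pi_partial)
```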
For the underlying systems, the following definition will be adopted in the rest of this paper; more details can be found in the literature.
Definition 2.1. The system (2.1) with $u(t) \equiv 0$ is said to be stochastically stable if the following inequality holds:
$$\mathbb{E}\left\{ \int_0^\infty \|x(t)\|^2 \, \mathrm{d}t \;\Big|\; x_0, r_0 \right\} < \infty \tag{2.5}$$
for every initial condition $x_0 \in \mathbb{R}^n$ and every initial mode $r_0 \in \mathcal{S}$.
To this end, we introduce the following result on the stability analysis of system (2.1).
Remark 2.3. Since the unknown transition rates may take infinitely many admissible values, the inequalities of Lemma 2.2 cannot be used directly to test the stability of the system.
3. Stochastic Stability Analysis
In this section, a stochastic stability criterion for MJSs is given without any additional weighting matrix.
Theorem 3.1. The system (2.1) with the partly unknown transition rate matrix (2.3) and $u(t) \equiv 0$ is stochastically stable if there exist matrices $P_i > 0$, $i \in \mathcal{S}$, such that the following LMIs are feasible for every $i \in \mathcal{S}$.
If $i \in \mathcal{I}_k^i$, LMIs (3.1) and (3.2) are required; if $i \in \mathcal{I}_{uk}^i$, LMI (3.3) is required.
Proof. Based on Lemma 2.2, the system (2.1) with $u(t) \equiv 0$ is stochastically stable if (2.6) holds. We now prove that (3.1)–(3.3) guarantee that (2.6) holds, by considering the following two cases.
Case I ($\pi_{ii}$ known). In this case, (2.6) can be rewritten accordingly; noting the conditions of this case, from (3.2) we obtain the required bound. Therefore, in Case I, inequalities (3.1) and (3.2) imply that (2.6) holds.
Case II ($\pi_{ii}$ unknown). In the first subcase, (2.6) becomes an inequality that, in view of the case conditions, is equivalent to (3.3). Otherwise, in the second subcase, (2.6) can be rewritten as (3.7), and (3.3) obviously implies that (3.7) holds. Hence, in Case II, (3.3) implies that (2.6) holds.
Therefore, if LMIs (3.1)–(3.3) hold, we conclude that system (2.1) is stochastically stable according to Lemma 2.2. The proof is completed.
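When every transition rate is known, stochastic stability is classically characterized by coupled Lyapunov conditions of the form $A_i^T P_i + P_i A_i + \sum_j \pi_{ij} P_j < 0$. The corresponding coupled equations with prescribed right-hand sides $-Q_i$ are linear in the $P_i$, so they can be solved directly as one stacked linear system and the positive definiteness of the solutions checked. This provides a simple numerical sanity check in the fully known case; the two-mode data below are hypothetical, not from the paper's example.

```python
import numpy as np

def coupled_lyapunov(A_list, Pi, Q=None):
    """Solve A_i^T P_i + P_i A_i + sum_j pi_ij P_j = -Q_i for all i by
    stacking the N coupled equations into one linear system in vec(P_i).
    If every returned P_i is positive definite, the MJLS with fully known
    rates is stochastically stable."""
    N, n = len(A_list), A_list[0].shape[0]
    Q = [np.eye(n)] * N if Q is None else Q
    I = np.eye(n)
    M = np.zeros((N * n * n, N * n * n))
    rhs = np.zeros(N * n * n)
    for i in range(N):
        ri = slice(i * n * n, (i + 1) * n * n)
        # vec(A_i^T P_i + P_i A_i) = (kron(A_i^T, I) + kron(I, A_i^T)) vec(P_i)
        M[ri, ri] += np.kron(A_list[i].T, I) + np.kron(I, A_list[i].T)
        for j in range(N):
            M[ri, j * n * n:(j + 1) * n * n] += Pi[i, j] * np.eye(n * n)
        rhs[ri] = -Q[i].reshape(-1)
    vecP = np.linalg.solve(M, rhs)
    return [vecP[i * n * n:(i + 1) * n * n].reshape(n, n) for i in range(N)]

# Hypothetical example: both modes stable; here the coupling cancels and
# the exact solution is P_1 = P_2 = 0.5*I.
A = [-np.eye(2), -np.eye(2)]
Pi = np.array([[-1.0, 1.0], [1.0, -1.0]])
P = coupled_lyapunov(A, Pi)
```

Since the rows of $\Pi$ sum to zero, identical mode matrices make the coupling term vanish, which is why the solution reduces to the single-mode Lyapunov solution in this toy case.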
Theorem 3.1 proposed in this paper does not introduce any free variable: it involves only the Lyapunov matrices $P_i$, $i \in \mathcal{S}$, whereas the corresponding result based on free-connection weighting matrices involves twice as many variables; that is, the number of variables in this paper is only half. Generally, reducing the number of decision variables easily results in increased conservatism of stability criteria. However, Theorem 3.1 does not increase conservatism despite using fewer variables. To show this, we restate the existing result as follows.
Now we have the following conclusion.
Proof. In the case that $\pi_{ii}$ is known, (3.9) and (3.10) imply that (3.2) and inequality (3.11) hold; from (3.11) and (3.8), we obtain (3.1). In the remaining case, (3.9) and (3.10) guarantee that (3.12) holds; in addition, under this circumstance, (3.13) holds. Then, from (3.12), (3.13), and (3.8), we obtain that (3.3) holds. The proof is completed.
Remark 3.4. The existing stability condition and that in Theorem 3.1 were derived via different techniques. Theorem 3.3 proves that the former can be simplified to the latter without increasing any conservatism. More importantly, because Theorem 3.1 of this paper does not involve any slack matrix, the relationships among the Lyapunov matrices are highlighted; therefore, it is clearer how the unknown transition probabilities affect the stability.
4. State-Feedback Stabilization
In this section, the stabilization problem of system (2.1) with control input $u(t)$ is considered. A mode-dependent controller of the following form is designed:
$$u(t) = K(r_t)x(t), \tag{4.1}$$
where $K_i \triangleq K(r_t)$, for $r_t = i \in \mathcal{S}$, are the controller gains to be determined. In the following, we write $A_i \triangleq A(r_t)$ and $B_i \triangleq B(r_t)$ for given $r_t = i$, so the closed-loop system is
$$\dot{x}(t) = (A_i + B_i K_i)x(t), \qquad r_t = i. \tag{4.2}$$
Theorem 4.1. The closed-loop system (4.2) with the partly unknown transition rate matrix (2.3) is stochastically stable if there exist matrices $X_i > 0$ and $Y_i$, $i \in \mathcal{S}$, such that the following LMIs are feasible for every $i \in \mathcal{S}$.
If $i \in \mathcal{I}_k^i$, LMIs (4.3) and (4.4) are required; if $i \in \mathcal{I}_{uk}^i$, LMI (4.5) is required, with $\mathcal{I}_k^i$ described in (2.4).
Moreover, if (4.3)–(4.5) are feasible, the stabilizing controller gains in (4.1) are given by
$$K_i = Y_i X_i^{-1}, \qquad i \in \mathcal{S}. \tag{4.7}$$
Proof. It is clear that the system (4.2) is stochastically stable if conditions (4.8)–(4.10) are satisfied. Pre- and postmultiply the left-hand sides of (4.8)–(4.10) by $P_i^{-1}$, respectively, and introduce the new variables $X_i \triangleq P_i^{-1}$ and $Y_i \triangleq K_i X_i$. Then inequalities (4.8)–(4.10) are equivalent to matrix inequalities (4.12)–(4.14), respectively. By applying the Schur complement, inequalities (4.12)–(4.14) are equivalent to LMIs (4.3)–(4.5), respectively.
Therefore, if LMIs (4.3)–(4.5) hold, the closed-loop system (4.2) is stochastically stable according to Theorem 3.1. Then system (2.1) can be stabilized by the state-feedback controller (4.1), and the desired controller gains are given by (4.7). The proof is completed.
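Once gains $K_i$ have been computed, the design can be double-checked numerically: for a fully known generator, mean-square (stochastic) stability of an MJLS is classically equivalent to the Hurwitz property of a stacked linear operator built from the mode matrices and the generator. The sketch below uses hypothetical matrices and an assumed feedback $K_i = -2I$; it is an independent check, not the LMI design of Theorem 4.1.

```python
import numpy as np

def ms_stable(A_list, Pi):
    """Mean-square (stochastic) stability test for dx = A(r_t) x dt with a
    fully known generator Pi: the MJLS is stable iff the stacked operator
    with block (i, j) equal to
        delta_ij * (kron(A_i^T, I) + kron(I, A_i^T)) + pi_ij * I
    is Hurwitz (all eigenvalues in the open left half-plane)."""
    N, n = len(A_list), A_list[0].shape[0]
    I = np.eye(n)
    L = np.zeros((N * n * n, N * n * n))
    for i in range(N):
        ri = slice(i * n * n, (i + 1) * n * n)
        L[ri, ri] += np.kron(A_list[i].T, I) + np.kron(I, A_list[i].T)
        for j in range(N):
            L[ri, j * n * n:(j + 1) * n * n] += Pi[i, j] * np.eye(n * n)
    return float(np.max(np.linalg.eigvals(L).real)) < 0.0

# Hypothetical example: an unstable open loop stabilized by K_i = -2*I.
Pi = np.array([[-0.6, 0.6], [0.4, -0.4]])
A = [np.array([[0.5, 0.0], [0.0, 0.3]]), np.array([[0.2, 0.1], [0.0, 0.4]])]
B = [np.eye(2), np.eye(2)]
K = [-2.0 * np.eye(2), -2.0 * np.eye(2)]
A_cl = [A[i] + B[i] @ K[i] for i in range(2)]
open_loop, closed_loop = ms_stable(A, Pi), ms_stable(A_cl, Pi)
```

This spectral test complements the LMI approach: it decides stability for one fixed, fully known generator, whereas Theorem 4.1 certifies stability over all generators consistent with the partly known matrix (2.3).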
Remark 4.2. The number of variables involved in Theorem 4.1 is also smaller than in the corresponding existing result. Furthermore, no conservativeness is introduced in deriving Theorem 4.1 from Theorem 3.1. Therefore, the stabilization method presented in Theorem 4.1 is likewise no more conservative than the existing one.
5. Numerical Example
In this section, an example is provided to illustrate the effectiveness of our results.
Consider the following MJS, which is borrowed from the literature with small modifications. The partly known transition rate matrix is considered to be of the form (5.2), where a parameter in the matrix can take different values for extensive comparison purposes.
We consider the stabilization of this system for different values of the parameter using the different approaches. For precision of comparison, we let the parameter increase from 0 with a very small constant increment of 0.01. Using the LMI toolbox in MATLAB, both the LMIs of the existing stabilization theorem and those in Theorem 4.1 of this paper are feasible for all values up to 1.64 and become infeasible when the parameter increases to 1.65. It can be seen that, for this example, the stabilization method in this paper is no more conservative than the existing one.
Now, by some simulation results, we further show the effectiveness of the stabilization method of this paper. For example, for one admissible value of the parameter, our method yields the controller gains given in (5.3).
Figure 1 depicts the state response curves over 1000 random samplings under a given initial condition; in each random sampling, the transition rate matrix is randomly generated but consistent with the partly known transition rate matrix in (5.2). The figure shows that the open-loop system is unstable.
Applying the controllers in (5.3), the simulated state-response trajectories of the closed-loop system over 1000 random samplings are shown in Figure 2 under the given initial condition. Here, too, the transition rate matrix is randomly generated but consistent with the partly known transition rate matrix in (5.2). Figure 2 shows that the stabilizing controller effectively ensures the reliable operation of the system.
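The random sampling used in Figures 1 and 2 can be sketched as follows: unknown entries of the partly known transition rate matrix are encoded as NaN, and each sampled generator keeps the known entries, draws nonnegative values for the unknown off-diagonal rates, and makes every row sum to zero. The `rate_scale` cap and the example matrix are assumptions for illustration, not data from the paper.

```python
import numpy as np

def sample_generator(Pi_partial, rate_scale=2.0, seed=None):
    """Draw a random generator consistent with a partly known transition
    rate matrix (NaN = unknown "?"): known entries are preserved, unknown
    off-diagonal rates are nonnegative, and each row sums to zero."""
    rng = np.random.default_rng(seed)
    Pi = np.array(Pi_partial, dtype=float)
    N = Pi.shape[0]
    for i in range(N):
        unk = [j for j in range(N) if np.isnan(Pi[i, j])]
        if not unk:
            continue
        off_unk = [j for j in unk if j != i]
        if i in unk:
            # Diagonal unknown: draw unknown off-diagonal rates freely,
            # then fix the diagonal so that the row sums to zero.
            for j in off_unk:
                Pi[i, j] = rate_scale * rng.random()
            Pi[i, i] = -sum(Pi[i, j] for j in range(N) if j != i)
        else:
            # Diagonal known: unknown off-diagonal rates must add up to the
            # remaining budget  -pi_ii - (sum of known off-diagonal rates).
            known_off = sum(Pi[i, j] for j in range(N)
                            if j != i and not np.isnan(Pi[i, j]))
            budget = -Pi[i, i] - known_off
            Pi[i, off_unk] = budget * rng.dirichlet(np.ones(len(off_unk)))
    return Pi

nan = float("nan")
Pi_part = [[-2.0, nan, nan],
           [1.0, -3.0, 2.0],
           [nan, nan, nan]]
G = sample_generator(Pi_part, seed=1)
```

Each call produces one admissible generator; repeating the call (with different seeds) and simulating the closed loop under each sample reproduces the kind of Monte Carlo evidence shown in the figures.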
6. Conclusions
This paper has considered the stability and stabilization of a class of continuous-time MJSs with partly unknown transition rates. A new stability criterion has been proposed; its merit is that it has fewer decision variables without increasing conservatism compared with those in the literature to date. A mode-dependent state-feedback controller design method has then been proposed, which possesses the same merit as the stability criterion. Numerical examples have been given to illustrate the effectiveness of the results in this paper.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (61104115 and 61074009), the Research Fund for the Doctoral Program of Higher Education of China (20110072120018), and the Program for Young Excellent Talents in Tongji University (2009KJ035).
References
1. C. E. de Souza, A. Trofino, and K. A. Barbosa, “Mode-independent $H_\infty$ filters for Markovian jump linear systems,” IEEE Transactions on Automatic Control, vol. 51, no. 11, pp. 1837–1841, 2006.
2. E. K. Boukas, Stochastic Switching Systems: Analysis and Design, Birkhäuser, Boston, Mass, USA, 2006.
3. S. Xu, J. Lam, and X. Mao, “Delay-dependent $H_\infty$ control and filtering for uncertain Markovian jump systems with time-varying delays,” IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 54, no. 9, pp. 2070–2077, 2007.
4. M. S. Mahmoud, “Delay-dependent $H_\infty$ filtering of a class of switched discrete-time state delay systems,” Signal Processing, vol. 88, no. 11, pp. 2709–2719, 2008.
5. H. Li, B. Chen, Q. Zhou, and W. Qian, “Robust stability for uncertain delayed fuzzy Hopfield neural networks with Markovian jumping parameters,” IEEE Transactions on Systems, Man, and Cybernetics, Part B, vol. 39, no. 1, pp. 94–102, 2009.
6. L. X. Zhang, E. K. Boukas, and J. Lam, “Analysis and synthesis of Markov jump linear systems with time-varying delays and partially known transition probabilities,” IEEE Transactions on Automatic Control, vol. 53, no. 10, pp. 2458–2464, 2008.
7. L. X. Zhang and E. K. Boukas, “Stability and stabilization of Markovian jump linear systems with partly unknown transition probabilities,” Automatica, vol. 45, no. 2, pp. 463–468, 2009.
8. L. X. Zhang and E. K. Boukas, “Mode-dependent $H_\infty$ filtering for discrete-time Markovian jump linear systems with partly unknown transition probabilities,” Automatica, vol. 45, no. 6, pp. 1462–1467, 2009.
9. L. X. Zhang and E. K. Boukas, “$H_\infty$ control for discrete-time Markovian jump linear systems with partly unknown transition probabilities,” International Journal of Robust and Nonlinear Control, vol. 19, no. 8, pp. 868–883, 2009.
10. Y. Zhang, Y. He, M. Wu, and J. Zhang, “Stabilization for Markovian jump systems with partial information on transition probability based on free-connection weighting matrices,” Automatica, vol. 47, no. 1, pp. 79–84, 2011.
11. D. Peaucelle and F. Gouaisbaut, “Discussion on: ‘Parameter-dependent Lyapunov function approach to stability analysis and design for uncertain systems with time-varying delay’,” European Journal of Control, vol. 11, no. 1, pp. 69–70, 2005.