Abstract

This paper investigates the problem of stability and stabilization of Markovian jump linear systems with partial information on the transition probabilities. A new stability criterion is obtained for these systems. Compared with existing results, the proposed criterion involves fewer decision variables yet, as is proved theoretically, does not introduce any additional conservatism. Furthermore, a sufficient condition for state feedback controller design is derived in terms of linear matrix inequalities. Finally, numerical examples are given to illustrate the effectiveness of the proposed method.

1. Introduction

Over the past few decades, Markov jump systems (MJSs) have drawn much attention from researchers worldwide, owing to their important role in many practical systems. MJSs are well suited to model plants whose structures are subject to random abrupt changes, which may result from random component failures, abrupt environmental changes, disturbances, changes in the interconnections of subsystems, and so forth [1].

Since the transition probabilities of the jumping process determine the behavior of MJSs, most investigations on MJSs assume that the information on the transition probabilities is completely known (see, e.g., [2–5]). However, in many cases the transition probabilities of MJSs are not exactly known, so, both in theory and in practice, it is necessary to consider more general jump systems with only partial information on the transition probabilities. Recently, [6–9] considered general MJSs with partly unknown transition probabilities. In these papers, however, fixed connection weighting matrices were introduced when the terms containing unknown transition probabilities were separated from the others, which may lead to conservatism. More recently, [10] achieved an excellent reduction of this conservatism; the basic idea is to introduce free-connection weighting matrices in place of the fixed connection weighting matrices. This, however, means that the method of [10] has to increase the number of decision variables, and, as shown in [11], more decision variables imply a heavier numerical burden. Therefore, developing new methods that introduce no additional variables while not increasing conservatism is a valuable task, which motivates the present study.

In this paper, we are concerned with the problem of stability and stabilization of MJSs with partly unknown transition probabilities. By fully utilizing the relationship among the transition rates of the various subsystems, we obtain a new stability criterion. The proposed criterion avoids introducing any connection weighting matrix, yet it is proved theoretically to be no more conservative than that of [10]. More importantly, because the proposed stability criterion needs no slack matrices, the relationships among the Lyapunov matrices are highlighted, which helps to clarify the effect of the unknown transition probabilities on stability. Then, based on the proposed stability criterion, a condition for controller design is derived in terms of LMIs. Finally, numerical examples are given to illustrate the effectiveness of the proposed method.

Notation
In this paper, $\mathbb{R}^{n}$ and $\mathbb{R}^{m \times n}$ denote the $n$-dimensional Euclidean space and the set of all $m \times n$ real matrices, respectively. $\mathbb{N}^{+}$ represents the set of positive integers. The notation $P > 0$ ($P \geq 0$) means that $P$ is a real symmetric and positive-definite (positive-semidefinite) matrix. For the notation $(\Omega, \mathcal{F}, \mathcal{P})$, $\Omega$ represents the sample space, $\mathcal{F}$ is the $\sigma$-algebra of subsets of the sample space, and $\mathcal{P}$ is the probability measure on $\mathcal{F}$. $E\{\cdot\}$ stands for the mathematical expectation. Matrices, if their dimensions are not explicitly stated, are assumed to be compatible for algebraic operations.

2. Problem Formulation

Consider the following stochastic system with Markovian jump parameters:
$$\dot{x}(t) = A(r_t)x(t) + B(r_t)u(t), \qquad x(t_0) = x_0, \tag{2.1}$$
where $x(t) \in \mathbb{R}^{n}$ is the state vector, $x_0$ denotes the initial condition, and $\{r_t, t \geq 0\}$ is a right-continuous Markov process on the probability space $(\Omega, \mathcal{F}, \mathcal{P})$ taking values in a finite state space $S = \{1, 2, \ldots, N\}$ with generator $\Lambda = [\pi_{ij}]_{N \times N}$, given by
$$\Pr\{r_{t+\Delta} = j \mid r_t = i\} = \begin{cases} \pi_{ij}\Delta + o(\Delta), & j \neq i, \\ 1 + \pi_{ii}\Delta + o(\Delta), & j = i, \end{cases} \tag{2.2}$$
where $\Delta > 0$, $\lim_{\Delta \to 0} o(\Delta)/\Delta = 0$, and $\pi_{ij} \geq 0$, for $j \neq i$, is the transition rate from mode $i$ at time $t$ to mode $j$ at time $t + \Delta$, and $\pi_{ii} = -\sum_{j \neq i} \pi_{ij}$. $A(r_t)$ and $B(r_t)$ are known matrix functions of the Markov process; for notational simplicity, they are denoted by $A_i$ and $B_i$ when $r_t = i$.
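To make the role of the generator concrete, the following sketch (Python/NumPy; not part of the original paper) simulates the mode process $r_t$ by the standard construction implied by (2.2): exponential holding times with rate $-\pi_{ii}$ and jump probabilities $\pi_{ij}/(-\pi_{ii})$. The two-mode generator at the end is a hypothetical value chosen only for illustration.

import numpy as np

rng = np.random.default_rng(1)

def simulate_markov_modes(Q, r0, T):
    """Simulate the right-continuous Markov process r_t with generator
    Q = [pi_ij] on modes {0, ..., N-1}: the holding time in mode i is
    exponential with rate -pi_ii, and the next mode j != i is chosen
    with probability pi_ij / (-pi_ii)."""
    t, mode, jumps = 0.0, r0, [(0.0, r0)]
    N = Q.shape[0]
    while t < T:
        rate = -Q[mode, mode]
        if rate <= 0.0:                  # absorbing mode: no further jumps
            break
        t += rng.exponential(1.0 / rate)
        probs = Q[mode].copy()
        probs[mode] = 0.0
        mode = int(rng.choice(N, p=probs / rate))
        jumps.append((t, mode))
    return jumps

# Hypothetical 2-mode generator, for illustration only.
Q = np.array([[-0.8, 0.8],
              [ 0.5, -0.5]])
print(simulate_markov_modes(Q, r0=0, T=5.0))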

Since the transition probabilities depend on the transition rates for continuous-time MJSs, the transition rates of the jumping process are considered to be only partly accessible in this paper. For instance, the transition rate matrix for system (2.1) with $N$ operation modes may be expressed as
$$\Lambda = \begin{bmatrix} \pi_{11} & ? & \pi_{13} & \cdots & ? \\ ? & ? & \pi_{23} & \cdots & \pi_{2N} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \pi_{N1} & ? & ? & \cdots & ? \end{bmatrix}, \tag{2.3}$$
where "$?$" represents an unknown transition rate.

For notational clarity, for all $i \in S$, we denote
$$S = \mathcal{I}_{k}^{i} \cup \mathcal{I}_{uk}^{i}, \qquad \mathcal{I}_{k}^{i} \triangleq \{j : \pi_{ij} \text{ is known for } j \in S\}, \qquad \mathcal{I}_{uk}^{i} \triangleq \{j : \pi_{ij} \text{ is unknown for } j \in S\}. \tag{2.4}$$

Moreover, if $\mathcal{I}_{k}^{i} \neq \emptyset$, it is further described as $\mathcal{I}_{k}^{i} = \{k_{1}^{i}, k_{2}^{i}, \ldots, k_{m_i}^{i}\}$, where $m_i$ is a nonnegative integer with $1 \leq m_i \leq N$ and $k_{l}^{i} \in \mathbb{N}^{+}$ ($1 \leq k_{l}^{i} \leq N$, $l = 1, 2, \ldots, m_i$) represents the $l$th known element of the set $\mathcal{I}_{k}^{i}$ in the $i$th row of the transition rate matrix $\Lambda$.
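As an aside, this bookkeeping is straightforward to implement. The sketch below (Python/NumPy; not part of the original paper) stores a partly known transition rate matrix with NaN in place of the "?" entries of (2.3) and extracts the index sets $\mathcal{I}_{k}^{i}$ and $\mathcal{I}_{uk}^{i}$ of (2.4); the particular 4-mode matrix is an assumed example, not data from the paper.

import numpy as np

UNKNOWN = np.nan  # plays the role of "?" in (2.3)

# Assumed 4-mode partly known transition rate matrix (illustration only).
Lambda = np.array([
    [-1.3,     0.2,  UNKNOWN, UNKNOWN],
    [UNKNOWN, -1.5,      0.3, UNKNOWN],
    [ 0.6,  UNKNOWN,  UNKNOWN,    0.4],
    [ 0.4,  UNKNOWN,      0.1, UNKNOWN],
])

def split_index_sets(tr_matrix):
    """Return, for each row i, the index sets I_k^i (known rates) and
    I_uk^i (unknown rates), mirroring the notation in (2.4)."""
    known, unknown = [], []
    for row in tr_matrix:
        known.append([j for j, v in enumerate(row) if not np.isnan(v)])
        unknown.append([j for j, v in enumerate(row) if np.isnan(v)])
    return known, unknown

I_k, I_uk = split_index_sets(Lambda)
for i in range(Lambda.shape[0]):
    print(f"mode {i + 1}: I_k = {I_k[i]}, I_uk = {I_uk[i]}")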

For the underlying systems, the following definition will be adopted in the rest of this paper; more details can be found in [2].

Definition 2.1. The system (2.1) with $u(t) \equiv 0$ is said to be stochastically stable if, for every initial condition $x_0 \in \mathbb{R}^{n}$ and $r_0 \in S$,
$$E\left\{\int_{0}^{\infty} \|x(t)\|^{2}\, dt \,\Big|\, x_0, r_0\right\} < \infty.$$
We now recall the following result on the stability analysis of system (2.1).

Lemma 2.2 (see [2]). The system (2.1) with $u(t) \equiv 0$ is stochastically stable if and only if there exists a set of symmetric and positive-definite matrices $P_i$, $i \in S$, satisfying
$$A_{i}^{T} P_{i} + P_{i} A_{i} + \sum_{j \in S} \pi_{ij} P_{j} < 0, \quad i \in S. \tag{2.6}$$
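For the completely known case, the coupled LMIs of Lemma 2.2 can be checked numerically as a semidefinite feasibility problem. The following sketch (Python with CVXPY; not part of the original paper) assumes the standard form $A_i^{T}P_i + P_iA_i + \sum_{j\in S}\pi_{ij}P_j < 0$ with $P_i > 0$ for (2.6); the two-mode data at the bottom are hypothetical.

import numpy as np
import cvxpy as cp

def lemma22_feasible(A_modes, Lambda, eps=1e-6):
    """Feasibility test of the coupled LMIs of Lemma 2.2 when the whole
    transition rate matrix Lambda is known (assumed standard form)."""
    N = len(A_modes)
    n = A_modes[0].shape[0]
    P = [cp.Variable((n, n), symmetric=True) for _ in range(N)]
    constraints = []
    for i in range(N):
        coupling = sum(Lambda[i, j] * P[j] for j in range(N))
        lyap = A_modes[i].T @ P[i] + P[i] @ A_modes[i] + coupling
        constraints += [P[i] >> eps * np.eye(n), lyap << -eps * np.eye(n)]
    problem = cp.Problem(cp.Minimize(0), constraints)
    problem.solve(solver=cp.SCS)
    return problem.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)

# Hypothetical two-mode example, for illustration only.
A_modes = [np.array([[-2.0, 1.0], [0.0, -1.5]]),
           np.array([[-1.0, 0.5], [0.2, -2.5]])]
Lambda = np.array([[-0.8, 0.8], [0.5, -0.5]])
print(lemma22_feasible(A_modes, Lambda))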

Remark 2.3. Since the unknown transition rates may take infinitely many admissible values, the inequalities of Lemma 2.2 cannot be used directly to test the stability of the system.

3. Stochastic Stability Analysis

In this section, a stochastic stability criterion for MJSs is given without any additional weighting matrix.

Theorem 3.1. The system (2.1) with a partly unknown transition rate matrix (2.3) and $u(t) \equiv 0$ is stochastically stable if there exist matrices $P_i > 0$, $i \in S$, such that the following LMIs are feasible for every $i \in S$:
if $i \in \mathcal{I}_{k}^{i}$, LMIs (3.1) and (3.2);
if $i \in \mathcal{I}_{uk}^{i}$, LMI (3.3),
where the shorthand quantities appearing in (3.1)–(3.3) are formed from the known transition rates of the $i$th row, indexed as in (2.4).

Proof. Based on Lemma 2.2, we know that the system (2.1) with $u(t) \equiv 0$ is stochastically stable if (2.6) holds. We now prove that (3.1)–(3.3) guarantee that (2.6) holds by considering the following two cases.
Case I ($i \in \mathcal{I}_{k}^{i}$). In this case, $\pi_{ii}$ is known and the unknown rates $\pi_{ij}$, $j \in \mathcal{I}_{uk}^{i}$, are nonnegative, so (2.6) can be rewritten as (3.4); then from (3.2) we obtain (3.5). Therefore, if $i \in \mathcal{I}_{k}^{i}$, inequalities (3.1) and (3.2) imply that (2.6) holds.
Case II ($i \in \mathcal{I}_{uk}^{i}$). In this case, $\pi_{ii}$ is unknown, while all known rates in the $i$th row are nonnegative, so $\pi_{ii}$ is bounded above by minus the sum of the known rates. In the first subcase, (2.6) becomes an inequality that is equivalent to (3.3); in the remaining subcase, (2.6) can be rewritten as (3.7), and (3.3) obviously implies that (3.7) holds. Hence, if $i \in \mathcal{I}_{uk}^{i}$, (3.3) implies that (2.6) holds.

Therefore, if LMIs (3.1)–(3.3) hold, we conclude that system (2.1) is stochastically stable according to Lemma 2.2. The proof is completed.

Theorem 3.1 does not introduce any free variables: it involves $N$ matrix variables, whereas Theorem 3.3 in [10] involves $2N$; that is, the number of variables here is only half of that in [10]. In general, reducing the number of decision variables tends to increase the conservatism of stability criteria. However, Theorem 3.1 achieves this reduction without increasing conservatism. To show this, we first restate the criterion of [10] as follows.

Theorem 3.2 (see [10]). The system (2.1) with a partly unknown transition rate matrix (2.3) and $u(t) \equiv 0$ is stochastically stable if there exist matrices $P_i > 0$ and free-connection weighting matrices $W_i$, $i \in S$, such that LMIs (3.8)–(3.10) are feasible for every $i \in S$.

Now we have the following conclusion.

Theorem 3.3. Suppose that, for system (2.1) with a partly unknown transition rate matrix (2.3), there exist $P_i > 0$ and $W_i$, $i \in S$, such that (3.8)–(3.10) hold; then the matrices $P_i$, $i \in S$, satisfy (3.1)–(3.3).

Proof. If $i \in \mathcal{I}_{k}^{i}$, (3.9) and (3.10) imply that (3.2) and the inequality (3.11) hold; from (3.11) and (3.8), we obtain (3.1).
If $i \in \mathcal{I}_{uk}^{i}$, (3.9) and (3.10) guarantee that (3.12) holds. In addition, under this circumstance, (3.13) holds, and therefore (3.14) follows. From (3.12), (3.13), and (3.8), we obtain that (3.3) holds. The proof is completed.

Remark 3.4. The stability condition in [10] and that in Theorem 3.1 are derived via different techniques. Theorem 3.3 proves that the former can be simplified to the latter without increasing any conservatism. More importantly, because Theorem 3.1 does not involve any slack matrices, the relationships among the Lyapunov matrices are highlighted, so it is clearer how the unknown transition probabilities affect the stability.

4. State-Feedback Stabilization

In this section, the stabilization problem of system (2.1) with control input $u(t)$ is considered. A mode-dependent controller of the following form is designed:
$$u(t) = K(r_t)x(t), \tag{4.1}$$
where $K(r_t)$, for all $r_t = i \in S$, are the controller gains to be determined; in the following, $K(r_t)$ is written as $K_i$ for a given $r_t = i$.

Using (4.1), the system (2.1) can be represented as
$$\dot{x}(t) = \bigl(A(r_t) + B(r_t)K(r_t)\bigr)x(t). \tag{4.2}$$
The following theorem is proposed to design a mode-dependent stabilizing controller of the form (4.1) for system (2.1).

Theorem 4.1. The closed-loop system (4.2) with a partly unknown transition rate matrix (2.3) is stochastically stable if there exist matrices $X_i > 0$ and $Y_i$, $i \in S$, such that the following LMIs are feasible for every $i \in S$:
if $i \in \mathcal{I}_{k}^{i}$, LMIs (4.3) and (4.4);
if $i \in \mathcal{I}_{uk}^{i}$, LMI (4.5),
where the index set $\mathcal{I}_{k}^{i}$ is described in (2.4).
Moreover, if (4.3)–(4.5) are feasible, the stabilizing controller gains in (4.1) are given by (4.7).

Proof. By applying Theorem 3.1 to the closed-loop system (4.2), it is clear that (4.2) is stochastically stable if the following conditions are satisfied:
if $i \in \mathcal{I}_{k}^{i}$, (4.8) and (4.9);
if $i \in \mathcal{I}_{uk}^{i}$, (4.10).
Performing a congruence transformation with $P_i^{-1}$ on the left-hand sides of (4.8)–(4.10) and introducing the new variables defined in (4.11), inequalities (4.8)–(4.10) are equivalent, respectively, to the matrix inequalities (4.12) and (4.13) (if $i \in \mathcal{I}_{k}^{i}$) and (4.14) (if $i \in \mathcal{I}_{uk}^{i}$). By applying the Schur complement, inequalities (4.12)–(4.14) are equivalent to LMIs (4.3)–(4.5), respectively.
Therefore, if LMIs (4.3)–(4.5) hold, the closed-loop system (4.2) is stochastically stable according to Theorem 3.1. Then, system (2.1) can be stabilized with the state feedback controller (4.1), and the desired controller gains are given by (4.7). The proof is completed.
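As a practical note, once a semidefinite solver returns feasible matrices for Theorem 4.1, recovering the gains is immediate. The sketch below (Python/NumPy; not part of the original paper) assumes the standard change of variables $X_i = P_i^{-1}$ and $Y_i = K_iP_i^{-1}$ behind (4.11), so that (4.7) reads $K_i = Y_iX_i^{-1}$; this reading is our assumption, since the displayed formulas are not reproduced above.

import numpy as np

def recover_gains(X_list, Y_list):
    """Recover mode-dependent gains from the LMI variables, assuming the
    standard substitution X_i = P_i^{-1}, Y_i = K_i P_i^{-1}, i.e.
    K_i = Y_i X_i^{-1} (our reading of (4.7))."""
    return [Y @ np.linalg.inv(X) for X, Y in zip(X_list, Y_list)]

# Hypothetical solver output for a two-mode, single-input example.
X = [np.array([[2.0, 0.1], [0.1, 1.5]]),
     np.array([[1.8, -0.2], [-0.2, 2.2]])]
Y = [np.array([[-1.0, 0.3]]),
     np.array([[0.5, -0.8]])]
for i, K in enumerate(recover_gains(X, Y), start=1):
    print(f"K_{i} =", K)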

Remark 4.2. The number of variables involved in Theorem 4.1 is also smaller than that in the corresponding result of [10]. Furthermore, no conservatism is introduced when deriving Theorem 4.1 from Theorem 3.1. Therefore, the stabilization method presented in Theorem 4.1 is likewise no more conservative than that of [10].

5. Numerical Example

In this section, an example is provided to illustrate the effectiveness of our results.

Consider the following MJS, borrowed from [10] with small modifications, with system matrices given in (5.1). The partly known transition rate matrix is taken as in (5.2), where a parameter in the system matrices can take different values for extensive comparison purposes.

We consider the stabilization of this system for different values of this parameter using the different approaches. For precision of comparison, we let the parameter increase from 0 with a small constant increment of 0.01. Using the LMI toolbox in MATLAB, both the LMIs in Theorem 5 of [10] and those in Theorem 4.1 of this paper are feasible for all tested values below 1.65, and both become infeasible when the parameter increases to 1.65. It can be seen that, for this example, the stabilization method in our paper is no more conservative than that in [10].
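For reproducibility, the comparison above can be organized as a simple feasibility sweep over the parameter. In the sketch below (Python; not part of the original paper), stabilization_lmis_feasible is a hypothetical placeholder for a routine that assembles the system matrices for a given parameter value, solves the synthesis LMIs of Theorem 4.1 (or Theorem 5 of [10]), and reports feasibility; it is not implemented here.

import numpy as np

def largest_feasible_parameter(stabilization_lmis_feasible, a_max=3.0, step=0.01):
    """Sweep the parameter upward from 0 in increments of `step` and return
    the last value for which the synthesis LMIs are reported feasible.

    `stabilization_lmis_feasible(a)` is a hypothetical user-supplied routine
    returning True/False for parameter value `a`."""
    last_feasible = None
    for a in np.arange(0.0, a_max + step, step):
        if stabilization_lmis_feasible(a):
            last_feasible = a
        else:
            break   # first infeasible value (1.65 in the example above)
    return last_feasible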

We now use simulation results to further show the effectiveness of the stabilization method of this paper. For example, for one admissible value of the parameter, the controller gains obtained by our method are given in (5.3).

Figure 1 shows the state response curves of the open-loop system over 1000 random samplings with a given initial condition. In each random sampling, the transition rate matrix is randomly generated but consistent with the partly known transition rate matrix in (5.2). Figure 1 shows that the open-loop system is unstable.

Applying the controllers in (5.3), the state response trajectories of the closed-loop system over 1000 random samplings are shown in Figure 2 under the same given initial condition. In this case, the transition rate matrix is again randomly generated but consistent with the partly known transition rate matrix in (5.2). Figure 2 shows that the stabilizing controller effectively stabilizes the system.
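The Monte Carlo experiment behind Figures 1 and 2 can be outlined as follows. The sketch below (Python/NumPy; not part of the original paper) draws a transition rate matrix consistent with the known entries of a partly known matrix, then integrates one closed-loop sample path of $\dot{x}(t) = (A_i + B_iK_i)x(t)$ with a simple Euler scheme; the matrices A, B, K, the rate bound, and the step sizes are assumptions, not the paper's data.

import numpy as np

rng = np.random.default_rng(2)

def random_rate_matrix(partly_known, max_rate=2.0):
    """Draw a transition rate matrix consistent with the known entries:
    unknown off-diagonal rates are sampled nonnegative, an unknown diagonal
    rate is set to minus its row's off-diagonal sum, and, when the diagonal
    rate is known, the sampled rates are rescaled so the row sums to zero."""
    P = np.array(partly_known, dtype=float)
    Q = P.copy()
    N = Q.shape[0]
    for i in range(N):
        unknown = [j for j in range(N) if j != i and np.isnan(P[i, j])]
        Q[i, unknown] = rng.uniform(0.0, max_rate, size=len(unknown))
        if np.isnan(P[i, i]):
            Q[i, i] = -np.sum(np.delete(Q[i], i))
        elif unknown:
            known_sum = sum(P[i, j] for j in range(N)
                            if j != i and not np.isnan(P[i, j]))
            Q[i, unknown] *= (-P[i, i] - known_sum) / np.sum(Q[i, unknown])
    return Q

def closed_loop_sample_path(A, B, K, partly_known, x0, T=10.0, dt=1e-3):
    """One sample path: Euler integration of dx/dt = (A_i + B_i K_i) x,
    with the mode switching simulated from a randomly drawn generator."""
    Q = random_rate_matrix(partly_known)
    N = Q.shape[0]
    mode, x, traj = 0, np.array(x0, dtype=float), []
    for _ in range(int(T / dt)):
        traj.append(x.copy())
        probs = np.clip(Q[mode] * dt, 0.0, 1.0)      # jump probabilities over dt
        probs[mode] = 1.0 - np.sum(np.delete(probs, mode))
        mode = int(rng.choice(N, p=probs / probs.sum()))
        x = x + dt * (A[mode] + B[mode] @ K[mode]) @ x
    return np.array(traj)

# In the paper's experiment, 1000 such sample paths are generated and plotted
# for the open-loop (K_i = 0) and closed-loop systems (Figures 1 and 2).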

6. Conclusions

This paper has considered the problem of stability and stabilization of a class of continuous-time MJSs with partly unknown transition rates. A new stability criterion has been proposed. The merit of the proposed criterion is that it involves fewer decision variables without increasing conservatism compared with existing results in the literature. Then, a mode-dependent state feedback controller design method has been proposed, which possesses the same merit as the stability criterion. Numerical examples have been given to illustrate the effectiveness of the results in this paper.

Acknowledgment

This work was supported by the National Natural Science Foundation of China (61104115 and 61074009), the Research Fund for the Doctoral Program of Higher Education of China (20110072120018), and the Young Excellent Talents Program of Tongji University (2009KJ035).