Abstract

This paper investigates the stability and stabilisation problems for discrete-time piecewise homogeneous Markov jump linear systems with imperfect transition probabilities. A sufficient condition is derived to ensure that the considered system is stochastically stable. Moreover, a corresponding sufficient condition for the existence of a mode-dependent and variation-dependent state feedback controller is derived to guarantee the stochastic stability of the closed-loop system, and a new method is proposed to design a static output feedback controller by introducing additional slack matrix variables that eliminate the equality constraint on the Lyapunov matrix. Finally, numerical examples are presented to illustrate the effectiveness of the proposed methods.

1. Introduction

The control theory of Markov jump systems with incomplete transition probabilities has emerged as a hot topic. In fact, in addition to the two descriptions of uncertain transition probabilities discussed below, the polytopic uncertain transition probability is also an important way of describing such scenarios.

Recently, Markov jump linear systems have attracted more and more attention and many valuable results have been obtained [1–3]. However, the transition probabilities (TPs) in the above-mentioned literature are assumed to be completely known. Generally speaking, the TPs of some jumping processes are hard to estimate precisely in practice. Therefore, the control theory of Markov jump systems with uncertain TPs has attracted increasing attention from both theoretical research and practical application, and many results on this topic have been reported [4–15]. Among the existing descriptions of uncertain TPs, three are popular. The first is polytopic uncertain TPs, where the transition probability matrix is unknown but belongs to a given polytope; for Markov jump systems with this kind of TPs, stability analysis and controller synthesis problems have been considered in [4–7]. The second assumes that estimates of the TPs can be obtained easily; a well-known way to handle this case is bounded TPs, in which the precise values of the TPs need not be known and only their upper and lower bounds are required [12, 13]. The last divides the TPs into two cases: completely known and completely unknown. A necessary and sufficient stability condition for Markov jump linear systems with partly unknown TPs is proposed in [15]. Since each of the above works can treat only one of these uncertain-TP descriptions, several methods have recently been proposed to deal with them in combination. The paper [9] considered the stability analysis of continuous-time Markov jump systems and proposed another description, generally uncertain TPs, in which each transition rate can be completely unknown or known only through an estimate.
This description of the TPs may be less restrictive than the bounded uncertain TPs and the partly unknown TPs. The paper [16] presented a series of LMI-based conditions ensuring that the norm of the output is minimized, by which the above three kinds of uncertain TPs can be handled in a unified framework. The paper [17] studied state feedback controller design for discrete-time Markov jump linear systems, in which the elements of the uncertain rows of the transition probability matrix were modelled as belonging to a Cartesian product of simplexes, so that the above three kinds of uncertain TPs could be adequately represented.
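As a small illustration of the bounded-TPs description above, the sketch below checks whether a candidate transition-probability row is consistent with given elementwise bounds. The helper `row_admissible` and all numbers are hypothetical; completely unknown entries are modelled with the trivial bounds [0, 1].

```python
import numpy as np

def row_admissible(row, lower, upper, tol=1e-9):
    """Check that a candidate transition-probability row respects the
    elementwise bounds and sums to one (bounded-TPs description)."""
    row, lower, upper = map(np.asarray, (row, lower, upper))
    in_bounds = np.all(row >= lower - tol) and np.all(row <= upper + tol)
    return bool(in_bounds and abs(row.sum() - 1.0) < tol)

# Bounds for one row of a 3-mode chain; the completely unknown second
# entry is modelled with the trivial bounds [0, 1].
lower = [0.2, 0.0, 0.1]
upper = [0.4, 1.0, 0.3]
print(row_admissible([0.3, 0.5, 0.2], lower, upper))  # an admissible row
print(row_admissible([0.5, 0.3, 0.2], lower, upper))  # violates the first upper bound
```

Precisely known entries fit the same scheme by setting the lower and upper bound equal, which is the "special case of a bounded element" used later in the paper.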

On the other hand, the works in the above-mentioned literature assume that the TPs matrix is time-invariant. In reality, this assumption is often violated, because the failure probabilities of a component usually depend on many factors, for example, the changing external environment, its age, humidity, and the degree of usage. However, few studies have investigated Markov jump linear systems with time-varying TPs [18–20]. In [19], the authors treated the discrete-time Markov jump system with a class of finite piecewise homogeneous Markov chains, in which the variation of the TPs matrix is governed by a higher-level homogeneous Markov chain, and solved the corresponding filtering problem. However, only the case where the higher-level TPs (HTPs) are partly unknown and the TPs are completely known was considered. To the best of our knowledge, the case where both the HTPs and the TPs are imperfectly known has not been investigated, which leaves room for improvement.

In this paper, the stabilisation problem for discrete-time piecewise homogeneous Markov jump linear systems with imperfect TPs is investigated, where each element of the HTPs and TPs can be completely unknown or known only through its upper and lower bounds. Using a convex combination method, a sufficient condition is proposed to guarantee the stochastic stability of the discrete-time piecewise homogeneous Markov jump linear system with imperfect TPs. Based on this stochastic stability criterion, design approaches for state feedback and static output feedback controllers that stabilize the resulting closed-loop systems are provided.

The remainder of this paper is organized as follows. In Section 2, the imperfect TPs are formulated and some definitions and lemmas are stated. In Section 3, a sufficient condition is first established under which the unforced discrete-time piecewise homogeneous Markov jump linear system with imperfect TPs is stochastically stable; then a mode-dependent and variation-dependent state feedback controller and a mode-dependent and variation-dependent static output feedback controller are designed such that the corresponding closed-loop systems are stochastically stable. In Section 4, numerical examples are provided to illustrate the feasibility and applicability of the developed results. Section 5 concludes the paper.

The notation used in this paper is standard. The superscript "T" stands for matrix transposition. Z+ represents the set of positive integers. R^n denotes the n-dimensional Euclidean space. The notation ||·|| refers to the Euclidean vector norm. E{·} stands for the mathematical expectation. In addition, in symmetric block matrices or long matrix expressions, the star "*" is used as an ellipsis for the terms that are introduced by symmetry, and diag{·} stands for a block-diagonal matrix. Matrices, if their dimensions are not explicitly stated, are assumed to be compatible for algebraic operations.

2. Preliminaries and Problem Formulation

Consider the following discrete-time piecewise homogeneous Markov jump linear system defined on a complete probability space (Ω, F, Pr):

x(k+1) = A(r_k) x(k) + B(r_k) u(k),
y(k) = C(r_k) x(k),  (1)

where x(k) ∈ R^n is the state, u(k) ∈ R^m is the input, and y(k) ∈ R^p is the measurable output. For fixed r_k = i, A_i, B_i, and C_i are constant matrices and C_i has full column rank. The process {r_k, k ≥ 0} is described by a discrete-time Markov chain, which takes values in the finite set I = {1, 2, ..., N} with mode transition probabilities

Pr(r_{k+1} = j | r_k = i) = π_{ij}^{(σ_k)},  (2)

where π_{ij}^{(σ_k)} ≥ 0, i, j ∈ I, denotes the transition probability from mode i at time k to mode j at time k+1 and Σ_{j=1}^{N} π_{ij}^{(σ_k)} = 1. The TPs matrix of system (1) can be further defined by Π^{(σ_k)} = [π_{ij}^{(σ_k)}]. (3) Moreover, the time-dependent variable σ_k is governed by another Markov chain that is independent of r_k. The variation of σ_k takes values in the finite set S = {1, 2, ..., M} and can be regarded as a higher-level Markov chain with generator Λ = [λ_{mn}], where λ_{mn} ≥ 0, m, n ∈ S, denotes the transition probability from σ_k = m at time k to σ_{k+1} = n at time k+1 with Σ_{n=1}^{M} λ_{mn} = 1; that is,

Pr(σ_{k+1} = n | σ_k = m) = λ_{mn}.  (4)
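To make the two-level jumping mechanism concrete, the following minimal sketch simulates an unforced system of this form: the low-level mode chain jumps according to whichever TP matrix the higher-level chain currently selects. All matrices here are hypothetical illustrative data, not taken from the paper's examples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: two higher-level variations, each selecting a
# 2-mode TP matrix; Lam is the higher-level (HTPs) matrix.
Pi = np.array([[[0.7, 0.3], [0.4, 0.6]],    # TP matrix under variation 0
               [[0.1, 0.9], [0.8, 0.2]]])   # TP matrix under variation 1
Lam = np.array([[0.9, 0.1], [0.2, 0.8]])

A = [np.array([[0.5, 0.1], [0.0, 0.4]]),    # mode-0 dynamics (stable, for illustration)
     np.array([[0.3, 0.0], [0.2, 0.6]])]    # mode-1 dynamics

def simulate(x0, steps):
    """Simulate the unforced system x(k+1) = A[r_k] x(k), where r_k follows
    a Markov chain whose TP matrix is switched by the higher-level chain s_k."""
    x, r, s = np.asarray(x0, float), 0, 0
    traj = [x]
    for _ in range(steps):
        x = A[r] @ x
        r = rng.choice(2, p=Pi[s, r])    # low-level jump under current variation
        s = rng.choice(2, p=Lam[s])      # higher-level jump
        traj.append(x)
    return np.array(traj)

traj = simulate([1.0, -1.0], 50)
print(np.linalg.norm(traj[-1]))  # decays, since both mode matrices are stable
```

Since both mode matrices here are comfortably stable, every sample path decays; the interesting case treated by the paper is when stability holds only in the stochastic average over the two chains.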

For more details regarding this, refer to [19]. Meanwhile, in [19], the TPs matrix for a fixed σ_k is considered to be completely known and only completely known and completely unknown elements are contained in the HTPs matrix, which may lead to conservativeness. In this paper, we extend this hypothesis to the more general case, which may be called imperfect TPs (both HTPs and TPs are deficient); specifically, the elements of the HTPs matrix and the TPs matrices are considered to be either bounded or completely unknown. Certainly, a precisely known element can be taken as a special case of a bounded one. The HTPs matrix and TPs matrices of system (1) can then be expressed in a form where "?" represents the unknown elements and the other entries denote elements whose upper and lower bounds are known.

To proceed fluently, the following notation is helpful for the derivation of the main results.
(1) For each m ∈ S, introduce the two sets below: the set of indices n for which the upper and lower bounds of λ_{mn} are known, and the set of indices n for which λ_{mn} is completely unknown.
(2) For each i ∈ I and m ∈ S, define the following two sets: the set of indices j for which the upper and lower bounds of π_{ij}^{(m)} are known, and the set of indices j for which π_{ij}^{(m)} is completely unknown.
(3) For each i ∈ I and m ∈ S, denote the shorthand quantities associated with these index sets that are used in the sequel.
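The index sets above can be computed mechanically once the unknown entries of a row are marked. A minimal sketch, assuming the purely illustrative convention that unknown elements are encoded as NaN:

```python
import numpy as np

# Hypothetical encoding: NaN marks a completely unknown element; any
# finite entry stands for one whose upper/lower bounds are known.
Pi_row_info = np.array([[0.3, np.nan, 0.2],
                        [np.nan, np.nan, 0.5],
                        [0.1, 0.6, 0.3]])

def index_sets(info):
    """Return, per row, the index sets of bound-known and unknown elements."""
    known, unknown = [], []
    for row in info:
        mask = np.isnan(row)
        known.append(np.where(~mask)[0].tolist())
        unknown.append(np.where(mask)[0].tolist())
    return known, unknown

known, unknown = index_sets(Pi_row_info)
print(known[0], unknown[0])  # [0, 2] [1]
```

The same helper applies verbatim to the HTPs matrix, since the partition into bound-known and unknown indices is defined row by row in both cases.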

Remark 1. Observe that if the set of bound-known indices in the m-th row of the HTPs matrix is nonempty, it can be written out element by element, where each index marks the position of an element whose upper and lower bounds are known in that row. In the same way, if the set of bound-known indices in the i-th row of the TPs matrix for a fixed variation is nonempty, it can be written out element by element, with each index marking the position of an element whose upper and lower bounds are known in that row.

Now, we give the following definition and lemma which play an indispensable role in the subsequent section.

Definition 2 (see [19]). System (1) is said to be stochastically stable if, for every initial condition x_0 ∈ R^n and initial modes r_0 ∈ I, σ_0 ∈ S, the following holds:

E{ Σ_{k=0}^{∞} ||x(k)||² | x_0, r_0, σ_0 } < ∞.  (5)

Lemma 3 (see [19]). System (1) with u(k) ≡ 0 is stochastically stable when the HTPs and TPs are completely known if there exists a set of positive definite matrices P_{i,m} for each i ∈ I, m ∈ S satisfying

A_i^T P̄_{i,m} A_i − P_{i,m} < 0,  (6)

where P̄_{i,m} = Σ_{n=1}^{M} λ_{mn} Σ_{j=1}^{N} π_{ij}^{(n)} P_{j,n}.
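The coupled condition of Lemma 3 can be checked numerically for given data. The sketch below tests, for hypothetical mode matrices and (H)TP matrices, whether candidate positive definite matrices P_{i,m} satisfy A_i' P̄_{i,m} A_i − P_{i,m} < 0 with P̄_{i,m} = Σ_n λ_{mn} Σ_j π_{ij}^{(n)} P_{j,n}; all numbers are invented for illustration.

```python
import numpy as np

# Hypothetical data: N = 2 modes, M = 2 variations.
A = [np.array([[0.5, 0.1], [0.0, 0.4]]),
     np.array([[0.3, 0.0], [0.2, 0.6]])]
Pi = np.array([[[0.7, 0.3], [0.4, 0.6]],
               [[0.1, 0.9], [0.8, 0.2]]])
Lam = np.array([[0.9, 0.1], [0.2, 0.8]])

def lemma3_holds(P):
    """Check A_i' * (sum_n lam_mn * sum_j pi_ij^(n) * P[j][n]) * A_i - P[i][m] < 0
    for every mode i and variation m, given candidate matrices P[i][m]."""
    for i in range(2):
        for m in range(2):
            Pbar = sum(Lam[m, n] * sum(Pi[n, i, j] * P[j][n] for j in range(2))
                       for n in range(2))
            lhs = A[i].T @ Pbar @ A[i] - P[i][m]
            if np.max(np.linalg.eigvalsh(lhs)) >= 0:  # lhs is symmetric
                return False
    return True

# Identity matrices happen to satisfy the condition for these
# comfortably stable mode matrices.
P = [[np.eye(2), np.eye(2)], [np.eye(2), np.eye(2)]]
print(lemma3_holds(P))
```

In practice the P_{i,m} are of course obtained from an LMI solver rather than guessed; this check only verifies a given candidate.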

This paper aims at deriving a criterion guaranteeing the stochastic stability of Markov jump system (1) whose HTPs (4) and TPs (2) are both bounded or completely unknown, and then at designing a mode-dependent and variation-dependent state feedback controller and a mode-dependent and variation-dependent static output feedback controller, respectively, such that the corresponding closed-loop system is stochastically stable.

3. Main Results

In this section, a sufficient condition is first presented on the stochastic stability for the unforced systems (1) with imperfect TPs. Next a mode-dependent and variation-dependent state feedback controller and a mode-dependent and variation-dependent static output feedback controller are designed, respectively, by using the sufficient condition.

3.1. Stability Analysis

Now, we focus on the stability analysis of system (1).

Theorem 4. The unforced system (1) with imperfect TPs (2) and (4) is stochastically stable if there exists a group of positive definite matrices P_{i,m} such that LMIs (7) and (8) hold for each i ∈ I, m ∈ S.

It is worth noting that, for given i and m, (8) contains multiple LMIs. The number of LMIs depends on the numbers of elements in the index sets of bound-known and completely unknown entries introduced in Section 2.

Proof. Consider (9). Since each bounded element lies between its known lower and upper bounds, the property of convex combination implies that (7) is equivalent to (10). On the other hand, for each i ∈ I and m ∈ S, we have (11), where P̄_{i,m} depends on the unknown elements.
Since the unknown elements are nonnegative and each row of the HTPs and TPs matrices sums to one, (10) is equivalent to (12). Noticing that the bound-known and completely unknown index sets partition each row, it follows from (8) that (12) holds. This completes the proof.

Remark 5. Theorem 4 provides a sufficient condition for the stochastic stability of the unforced system (1) with imperfect TPs. When the upper and lower bounds of every bound-known element coincide, Theorem 4 reduces to the condition for stochastic stability of the unforced system (1) with partly unknown HTPs and partly unknown TPs. Further, if all elements of the HTPs and TPs are precisely known, Theorem 4 reduces to Lemma 3.

Remark 6. It should be noted that M = 1 implies that the HTPs disappear; in this case, if the TPs are partly unknown, Theorem 4 reduces to the condition for stochastic stability of the unforced system (1) with homogeneous partly known transition probabilities [15].

When the HTPs are partly unknown and the TPs are completely known, the following corollary is obtained.

Corollary 7. The unforced system (1) with partly unknown HTPs (4) and completely known TPs (2) is stochastically stable if there exists a group of positive definite matrices P_{i,m} such that LMIs (13) hold for each i ∈ I, m ∈ S.

Remark 8. Corollary 7 is less conservative than the result in [19], where the stability conditions are given by (14). A convex combination of the inequalities in (14) yields (13), so any feasible solution of (14) is also feasible for (13). Therefore, Corollary 7 is less conservative than the result in [19].

3.2. Controller Design

Next, the stabilisation of system (1) by state feedback and by static output feedback will be considered. First, let us focus on the design of the mode-dependent and variation-dependent state feedback controller, which takes the form

u(k) = K(r_k, σ_k) x(k),  (16)

where K_{i,m}, for r_k = i ∈ I and σ_k = m ∈ S, are the controller gains to be determined. Substituting (16) into (1), the closed-loop system is represented as

x(k+1) = (A(r_k) + B(r_k) K(r_k, σ_k)) x(k).  (17)
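The effect of the state feedback is simply to replace each mode matrix A_i by a closed-loop matrix A_i + B_i K_{i,m}; for a single fixed mode this can be sanity-checked with a spectral-radius test. The plant, gain, and numbers below are purely illustrative, not the paper's.

```python
import numpy as np

# Hypothetical plant and gain for one (mode, variation) pair.
A = np.array([[1.1, 0.2], [0.0, 0.9]])   # unstable open loop (spectral radius 1.1)
B = np.array([[1.0], [0.5]])
K = np.array([[-0.6, -0.1]])             # a stabilising gain, chosen by hand

Acl = A + B @ K                          # closed-loop matrix A + B K
rho = max(abs(np.linalg.eigvals(Acl)))
print(rho < 1.0)                         # the closed loop is Schur stable
```

For the full jump system, Schur stability of every individual closed-loop matrix is neither necessary nor sufficient; stochastic stability is what the coupled LMI conditions of this section certify.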

According to Theorem 4, a mode-dependent and variation-dependent state feedback controller of the form (16) will next be designed such that the closed-loop system (17) is stochastically stable.

Theorem 9. The closed-loop system (17) with imperfect TPs (2) and (4) is stochastically stable if there exist a set of positive definite matrices and additional slack matrices such that LMIs (18) hold for each i ∈ I, m ∈ S, with the shorthand notation defined in (19). Moreover, if the above LMIs are feasible, the stabilising controller gain is given by (20).
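A sketch of the gain-recovery step, assuming the standard change of variables X_{i,m} = P_{i,m}^{-1} and Y_{i,m} = K_{i,m} X_{i,m}; these variable names are our assumption for illustration, and the numbers are hypothetical LMI solutions for one (mode, variation) pair.

```python
import numpy as np

# Hypothetical LMI solutions: X = P^{-1} (positive definite) and Y = K X.
X = np.array([[2.0, 0.5], [0.5, 1.0]])
Y = np.array([[0.4, -0.2]])

K = Y @ np.linalg.inv(X)      # recover the controller gain K = Y X^{-1}
print(K)

# Sanity check: the change of variables is consistent, K X == Y.
print(np.allclose(K @ X, Y))  # True
```

In a real run, X and Y would come from an LMI solver (e.g., MATLAB's LMI toolbox, as used in Section 4), and one such pair is recovered per mode and variation.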

Proof. Considering the closed-loop system (17), it follows from Theorem 4 that (17) is stochastically stable if LMIs (21) hold.
Noticing Remark 1 and using the Schur complement lemma, (21) is equivalent to (22), with the blocks defined in (23). Introducing the indicated change of variables and multiplying (22) by the corresponding transformation matrix and its transpose on both sides, (18) is obtained readily. Therefore, if (18) holds, the underlying system is stochastically stable. Meanwhile, the desired controller gain is given by (20).

In the following, a mode-dependent and variation-dependent static output feedback controller will be designed such that the closed-loop system is stochastically stable. The controller takes the form

u(k) = F(r_k, σ_k) y(k),  (24)

where F_{i,m}, for r_k = i ∈ I and σ_k = m ∈ S, are the controller gains to be determined. Substituting (24) into (1), the closed-loop system is represented as

x(k+1) = (A(r_k) + B(r_k) F(r_k, σ_k) C(r_k)) x(k).  (25)

Theorem 10. The closed-loop system (25) with imperfect TPs (2) and (4) is stochastically stable if there exist a set of scalars, a set of positive definite matrices, and nonsingular slack matrices such that LMIs (26) and condition (27) hold for each i ∈ I, m ∈ S, where the remaining parameters share the same meaning as in Theorem 9 and the additional notation is defined in (28). Moreover, if the above conditions are feasible, the stabilising controller gain is given by (29).

Proof. Note that the closed-loop system (25) is stochastically stable if (22) holds for each i ∈ I, m ∈ S, with the state feedback gain term replaced by the output feedback one. Multiplying (22) by the corresponding transformation matrix and its transpose on both sides, using (27), and applying the indicated change of variables, (26) is obtained easily. Meanwhile, the desired controller gain is given by (29).

Remark 11. Here an additional slack matrix is introduced, which has two advantages: first, it makes the equality constraint and the controller gain independent of the Lyapunov matrix; second, the previous results based on the equality constraint can be regarded as a special case of the obtained results. In fact, if the slack matrix is chosen to coincide with the Lyapunov matrix, Theorem 10 reduces to the equality-constraint-based approach. Thus, the feasible set of (26) and (27) is much larger; as a result, Theorem 10 is less conservative than results imposing an equality constraint on the Lyapunov matrices. Besides, a numerical comparison is made in Example 4 to further verify this point.

Condition (27) may be difficult to solve using the LMI toolbox of MATLAB. To overcome this drawback, the condition can be replaced by the approximation (30), where a given sufficiently small positive scalar is introduced. By the Schur complement, (30) is equivalent to LMI (31).
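The Schur complement step used to pass from (30) to LMI (31) can be illustrated generically: for S > 0, the condition M + N S^{-1} N' < 0 holds if and only if the augmented block matrix [[M, N], [N', −S]] is negative definite. The symbols M, N, S and all numbers below are generic stand-ins, not the paper's notation.

```python
import numpy as np

def negdef(M):
    """Negative definiteness test for a symmetric matrix."""
    return bool(np.max(np.linalg.eigvalsh(M)) < 0)

# Generic symmetric data illustrating the equivalence (hypothetical values).
M = np.array([[-3.0, 0.5], [0.5, -2.0]])
N = np.eye(2)
S = 4.0 * np.eye(2)                                  # positive definite

direct = negdef(M + N @ np.linalg.inv(S) @ N.T)      # M + N S^{-1} N' < 0
augmented = negdef(np.block([[M, N], [N.T, -S]]))    # Schur-complement LMI form
print(direct, augmented)                             # the two tests agree
```

The augmented form is what an LMI solver consumes, since it is linear in the decision variables, whereas the direct form involves the inverse of S.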

4. Numerical Examples

In this section, four examples are given to show the validity of the proposed methods. First, an example is presented in which a state feedback controller is used to stabilize the considered system.

Example 1. Consider the discrete-time Markov jump system (1) with 3 TPs matrices and 3 operation modes; the detailed data are given below. Here the imperfect TPs matrices are described by the HTPs matrix (33) and the TPs matrices (34), in which several elements are completely unknown or only bounded. By solving (18), the resulting state feedback controller gains are obtained.

The purpose here is to design a mode-dependent and variation-dependent stabilizing controller of the form (16) such that the resulting closed-loop system (17) is stochastically stable with the HTPs matrix (33) and TPs matrices (34). Using the LMI toolbox in MATLAB, the controller gains shown above are calculated. A sample path of the variation of the TPs matrices and of the modes according to (33) and (34) is illustrated in Figure 1. With the controller, the system responses for the given initial condition are shown in Figure 2. It can be seen from Figure 2 that the open-loop system diverges; after applying the controller, the resulting closed-loop system is stochastically stable.

Next, an example showing that the resulting closed-loop system is stochastically stable under the static output feedback controller will be given.

Example 2. Consider the discrete-time Markov jump system (1) with 3 TPs matrices and 3 operation modes; the parameters are given below. Here the imperfect TPs matrices are the same as in Example 1. By choosing the approximation scalar and solving (26) and (31), the resulting static output feedback controller gains are obtained. For the given initial condition, it is seen from Figure 3 that the open-loop system diverges, whereas, after applying the controller, the resulting closed-loop system is stochastically stable.

Remark 3. Examples 1 and 2 show that, compared with previous results, the stability and stabilisation conditions proposed in this paper can handle the case where each element of the HTPs and TPs is either completely unknown or known only through its upper and lower bounds, whereas the result in [19] only deals with the case where the HTPs are partly unknown. Therefore, the methods in this paper are more widely applicable.

In the sequel, we show that the proposed method is less conservative than some existing results; furthermore, we also examine how the stabilisation region is affected by the amount of knowledge about the transition probabilities.

Example 4. Consider the discrete-time Markov jump system (1) with the parameters given below, in which two scalar parameters are varied. Here the imperfect TPs matrices are described by the HTPs matrix (39) and the TPs matrices (40). The stability of system (1) can be checked using Corollary 7 of this paper and the result in [19] for different values of the parameter pairs. The result, depicted in Figure 4, reveals that Corollary 7 is less conservative than the result in [19].

The HTPs matrix in (39) is extended to the form (41), in which certain entries are known only through their upper and lower bounds.

It is seen that, in the HTPs matrix (41), some elements that are exactly known in the HTPs matrix (39) become elements of which only the upper and lower bounds are known. In Figure 5, the red region indicates that, as one bounded element takes any value between 0.2 and 0.4 and the other takes any value between 0.4 and 0.6, the same state feedback controller renders the closed-loop system stochastically stable, so the controller possesses a certain robustness. When the bounds are tightened to exact values, the HTPs matrix (41) reduces to the HTPs matrix (39). Compared with the HTPs matrix (39), the stabilisation region of the system with the HTPs matrix (41) is reduced, owing to the coarser knowledge of the elements in the HTPs matrix.

The TPs matrices in (40) are extended to the form (42).

It can be seen from Figure 6 that Theorem 9 can simultaneously handle the case where the TPs are partly unknown. The fact that the first row of the TPs matrix in (40) is unknown indicates that the jump from mode 1 to modes 1 and 2 is not random but arbitrary; the other unknown rows are analogous. Compared with the TPs matrices (40), the stabilisation region of the system with the TPs matrices (42) is reduced, owing to the coarser knowledge of the elements in the TPs matrices.

Now, we consider the joint effect of the imperfect TPs matrices (41) and (42) on the stabilisation region, which combines the above two cases. Accordingly, in Figure 7, the stabilisation region of the system in the case of the imperfect TPs matrices (41) and (42) is contained in that for the imperfect TPs matrices (39) and (40).

In particular, to illustrate the role of the slack matrix variables discussed in Remark 11, the corresponding stabilisation regions with and without these slack variables are plotted in Figure 8, which shows that the use of the slack variables reduces the conservatism of the previous approach.

From Example 4, it is easily seen that the more accurate the transition probability knowledge of the system is, the bigger the stabilisation region becomes; in other words, the more easily the system can be stabilized.

At the end of this section, a practical example of a DC motor is provided.

Example 5. A DC motor device [21] driving an inertial load is considered. By the methods in [21], the DC motor is described by a model of the form (1), where the physical parameters of the DC motor device are borrowed from [21] and the remaining system parameters are given below.

Here the imperfect TPs matrices are the same as in Example 1. By solving (18), the resulting state feedback controller gains are obtained.

By choosing the approximation scalar and solving (26) and (31), the resulting static output feedback controller gains are obtained. For the given initial condition, it is seen from Figure 9 that the open-loop system diverges, whereas, after applying the controllers, the resulting closed-loop systems are stochastically stable.

5. Conclusions

This paper has considered the stochastic stability and stabilisation problems for discrete-time piecewise homogeneous Markov jump linear systems with imperfect TPs. A new, less conservative condition for stochastic stability has been proposed by virtue of the LMI technique. Based on the stability result, stabilising state feedback controllers and static output feedback controllers have been constructed through the explicit solutions of LMIs. Finally, several numerical examples have been given. Examples 1 and 2 show that the obtained results can deal with more general uncertain transition probabilities than [19]; a comparison with [19] is made in Example 4, which illustrates the reduced conservatism of the obtained result and furthermore shows that the more accurate the transition probability knowledge of the system is, the bigger the stabilisation region becomes. Example 5 shows that the proposed results can be used to deal with the stabilisation problem of a DC motor.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported by the Natural Science Foundation of China under Grant 61273008, the Natural Science Foundation of Liaoning Province under Grant 2014020019, and the Natural Science Foundation of Shenyang under Grant F14-231-1-02.