Abstract

This paper investigates the finite-time control problem for discrete-time Markov jump systems subject to saturating actuators. The transitions of the jumping parameters are governed by a finite-state Markov process. A finite-time state feedback controller is designed to guarantee that the resulting closed-loop system is locally finite-time stabilized in the mean-square sense. Based on stochastic finite-time stability analysis, sufficient conditions that ensure the stochastic control performance of discrete-time Markov jump systems are derived in the form of linear matrix inequalities. Finally, a numerical example is provided to illustrate the effectiveness of the proposed approach.

1. Introduction

Over the past several decades, the issue of finite-time control has drawn increasing attention from researchers in the control community, and various results have been reported. A considerable amount of research has been carried out on this topic; see Hong et al. [1], He and Liu [2, 3], Li et al. [4], Song et al. [5], and Lan et al. [6]. Among the proposed solutions, state feedback control is an important approach to improving finite-time control performance. For instance, finite-time control of a robot system is studied in Hong et al. [7] by using both state feedback and dynamic output feedback control. Furthermore, based on dynamic observer-based state feedback and the Lyapunov-Krasovskii functional approach, the finite-time control problem for time-delay nonlinear jump systems was addressed in He and Liu [2, 3, 8].

On the other hand, increasing attention has been paid to the study of actuator saturation owing to its practical and theoretical importance, and various approaches have been investigated to handle systems with actuator saturation. In Cao and Lin [9], the stability of discrete-time systems with actuator saturation was analyzed by means of a saturation-dependent Lyapunov function. By introducing a time-varying sliding surface, the robust stabilization problem of linear unstable plants with saturating actuators was studied in Corradini and Orlando [10]. Furthermore, a controller design method for Markov jump systems subject to actuator saturation was presented in Liu et al. [11]. Via dynamic anti-windup fuzzy design, the robust stabilization problem of state-delayed T-S fuzzy systems with input saturation was addressed in Song et al. [12]. Further results can be found in [13–17] and the references therein.

The control problem of Markov jump systems has also been extensively studied, and a large variety of control problems have been investigated, for instance, the stabilization of Markov jump systems with time delays [18–24], robust control [25], control of singular Markov systems [26], control of discrete-time stochastic Markov jump systems [27, 28], and fuzzy dissipative control for nonlinear Markovian jump systems [29]. Furthermore, robust stability of uncertain delayed neural networks with Markov jumping parameters was analyzed in Li et al. [30]. A robust filter was designed for uncertain discrete Markov jump singular systems with mode-dependent time delay in Ma and Boukas [31]. The delay-dependent robust stabilization problem for uncertain stochastic switching systems with distributed delays was studied in Shen et al. [32]. The passivity-based control problem for Markov jump systems via retarded output feedback was addressed in Shen et al. [33]. The observer-based finite-time control problem of discrete-time Markov jump systems was studied in Zhang and Liu [34]. However, to the best of our knowledge, the problem of finite-time stabilization of discrete-time stochastic systems has not been fully investigated, which is the main motivation of our study.

In this paper, attention is focused on the finite-time control problem of discrete-time Markov jump systems with actuator saturation. A state feedback controller is designed to ensure the stochastic finite-time boundedness and stochastic finite-time stabilization of the resulting closed-loop system for all admissible disturbances. The desired controller can be obtained by solving a convex optimization problem. Finally, a numerical example is employed to show the effectiveness of the proposed method.

Notation. Throughout the paper, for symmetric matrices $X$ and $Y$, the notation $X \ge Y$ (resp., $X > Y$) means that the matrix $X - Y$ is positive semidefinite (resp., positive definite). $I$ is the identity matrix with appropriate dimension. The notation $A^{\mathsf T}$ represents the transpose of the matrix $A$; $\lambda_{\max}(A)$ (resp., $\lambda_{\min}(A)$) denotes the largest (resp., smallest) eigenvalue of the matrix $A$. $(\Omega, \mathcal{F}, \mathcal{P})$ is a probability space: $\Omega$ is the sample space, $\mathcal{F}$ is the $\sigma$-algebra of subsets of the sample space, and $\mathcal{P}$ is the probability measure on $\mathcal{F}$. $\mathbb{E}\{\cdot\}$ denotes the expectation operator with respect to the probability measure $\mathcal{P}$. Matrices, if not explicitly stated, are assumed to have compatible dimensions. The symbol $*$ is used to denote a block entry that can be inferred by symmetry.

2. Preliminaries and Problem Description

2.1. Preliminaries

Throughout this paper, we will use the following definitions and lemmas.

Lemma 1 (see [12]). For the given control gain matrix and an appropriately chosen auxiliary matrix, if the state lies in the associated set defined in [12], then for any diagonal positive-definite matrix the corresponding sector-type inequality holds (a sketch of one commonly used form is given below).
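As a sketch, one commonly used form of this result (with $K$, $H$, $u_{0}$, and the dead-zone function $\psi$ introduced here for illustration; they need not match the paper's notation) reads as follows: define $\psi(Kx) = Kx - \mathrm{sat}(Kx)$ and
$$\mathcal{L}(H) = \{x \in \mathbb{R}^{n} : |H_{l}x| \le u_{0l},\ l = 1, \dots, m\};$$
then, for any diagonal matrix $T > 0$,
$$\psi(Kx)^{\mathsf T} T\left[\psi(Kx) - (K - H)x\right] \le 0 \quad \text{for all } x \in \mathcal{L}(H),$$
where $u_{0l}$ denotes the $l$th saturation bound.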

Lemma 2 (Schur complement; see [32]). For a given symmetric matrix $S = \begin{bmatrix} S_{11} & S_{12} \\ S_{12}^{\mathsf T} & S_{22} \end{bmatrix}$, where $S_{11}$ is square, the following conditions are equivalent: (1) $S < 0$; (2) $S_{11} < 0$ and $S_{22} - S_{12}^{\mathsf T} S_{11}^{-1} S_{12} < 0$; (3) $S_{22} < 0$ and $S_{11} - S_{12} S_{22}^{-1} S_{12}^{\mathsf T} < 0$.
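For instance, Lemma 2 turns quadratic matrix inequalities into LMIs: the inequality $\bar{A}^{\mathsf T} P \bar{A} - P < 0$ with $P > 0$ (generic symbols used here for illustration) is equivalent, by condition (3), to
$$\begin{bmatrix} -P & \bar{A}^{\mathsf T} P \\ P \bar{A} & -P \end{bmatrix} < 0,$$
and reformulations of this kind are used repeatedly in the proofs below.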

Definition 3 (see [34]). The resulting closed-loop system (12) is said to be stochastically finite-time bounded (SFTB) with respect to the prescribed parameters if there exists a state feedback controller such that, for every admissible initial condition and all admissible disturbances, the weighted state norm remains below the prescribed bound over the given finite horizon (a sketch of the standard condition is given below).
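As a sketch, the standard SFTB condition, assuming a parameter tuple of the usual form $(c_1, c_2, R_i, N, d)$ with $0 < c_1 < c_2$, $R_i > 0$, and $N \in \mathbb{N}$ (symbols introduced here only for illustration), is
$$\mathbb{E}\{x_0^{\mathsf T} R_{r_0} x_0\} \le c_1 \ \Longrightarrow\ \mathbb{E}\{x_k^{\mathsf T} R_{r_k} x_k\} < c_2, \qquad k \in \{1, 2, \dots, N\},$$
for all disturbances whose energy over the horizon does not exceed $d$.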

Definition 4 (see [34]). The resulting closed-loop system (12) is said to be stochastically finite-time stabilizable via state feedback with a prescribed $H_\infty$ disturbance attenuation level if the system (11)-(12) is SFTB with respect to the given parameters and, under the zero-initial condition, the controlled output satisfies the finite-horizon attenuation inequality for any nonzero disturbance satisfying (10), where $\gamma$ is a prescribed positive scalar (a sketch of the standard condition is given below). Moreover, the state feedback controller (11) is then called an $H_\infty$ finite-time controller of MJS (12).
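As a sketch, the usual finite-horizon $H_\infty$ attenuation condition, assuming $z(k)$ denotes the controlled output and $w(k)$ the disturbance, is
$$\mathbb{E}\Big\{\sum_{k=0}^{N} z^{\mathsf T}(k)\, z(k)\Big\} < \gamma^{2}\, \mathbb{E}\Big\{\sum_{k=0}^{N} w^{\mathsf T}(k)\, w(k)\Big\}.$$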

2.2. Problem Description

Consider the following discrete-time Markov jump system defined in the probability space $(\Omega, \mathcal{F}, \mathcal{P})$, where $x(k)$ is the state vector, $z(k)$ is the controlled output, $\mathrm{sat}(u(k))$ is the saturated control input, and $w(k)$ is the external disturbance. The jumping parameter $r_k$ is a discrete-time Markov process taking values in a finite set $\mathcal{S} = \{1, 2, \dots, s\}$ with given transition probabilities, which are collected in the transition probability matrix of the system. The inputs of the plant are assumed to be bounded componentwise by given saturation levels. To simplify the notation, for each mode $r_k = i \in \mathcal{S}$ the mode-dependent matrices are written with the subscript $i$, and all system matrices are known mode-dependent constant matrices with appropriate dimensions. A sketch of the assumed system form is given below.
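As a sketch only, a standard discrete-time Markov jump model with actuator saturation consistent with the description above (the notation used here, namely $x(k)$, $u(k)$, $w(k)$, $z(k)$, $r_k$, $\mathcal{S}$, and the matrices $A_i$, $B_i$, $G_i$, $C_i$, $D_i$, is assumed for illustration and need not match the paper's symbols) is
$$x(k+1) = A_{r_k}\, x(k) + B_{r_k}\,\mathrm{sat}(u(k)) + G_{r_k}\, w(k), \qquad z(k) = C_{r_k}\, x(k) + D_{r_k}\, w(k),$$
with transition probabilities $\pi_{ij} = \Pr\{r_{k+1} = j \mid r_k = i\}$ satisfying $\pi_{ij} \ge 0$ and $\sum_{j \in \mathcal{S}} \pi_{ij} = 1$, and with componentwise input bounds $|u_{l}(k)| \le \bar{u}_{l}$, $l = 1, \dots, m$.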

Assumption 5 (see [34]). The external disturbance is time-varying and satisfies an energy constraint over the finite horizon. For the system, we construct a mode-dependent state feedback controller; substituting it into the plant dynamics yields the resulting closed-loop discrete-time Markov jump system (MJS). A sketch of the assumed expressions is given below.
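As a sketch under the same assumed notation as above (the bound $d$ and the gains $K_i$ are introduced only for illustration), the disturbance constraint, the controller, and the closed-loop system typically take the form
$$\sum_{k=0}^{N} w^{\mathsf T}(k)\, w(k) \le d, \qquad u(k) = K_{r_k}\, x(k),$$
$$x(k+1) = A_{r_k}\, x(k) + B_{r_k}\,\mathrm{sat}(K_{r_k} x(k)) + G_{r_k}\, w(k), \qquad z(k) = C_{r_k}\, x(k) + D_{r_k}\, w(k).$$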

3. Main Results

In this section, we investigate the design of a state feedback controller which guarantees that the resulting closed-loop system is locally finite-time stabilized. Sufficient conditions and a method for designing the state feedback controller are given.

Theorem 6. For each mode $i$, there exists a state feedback controller such that the resulting closed-loop system (12) is SFTB with respect to the prescribed parameters if there exist scalars, three sets of mode-dependent symmetric matrices, and two sets of mode-dependent matrices such that conditions (13)–(15) hold.

Proof. Define a mode-dependent quadratic Lyapunov function for each mode. It is readily obtained that the expected one-step difference of this function along the closed-loop trajectories can be expressed in terms of the closed-loop system matrices. Pre- and postmultiplying (13) by an appropriate congruence transformation and applying the Schur complement lemma, we derive a recursive bound on the expected Lyapunov function. Iterating this bound over the finite horizon and using the bounded disturbance energy yields an explicit estimate of the expected Lyapunov function at each step. Combining this estimate with the eigenvalue bounds of the Lyapunov matrices, it can be verified that the weighted state norm remains below the prescribed level for every mode and every step of the horizon, which implies that the SFTB condition of Definition 3 is satisfied. Finally, with a suitable change of variables, the resulting matrix inequality (37) is seen to be equivalent to (14), and condition (15) follows from Lemma 1. This completes the proof.
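As a sketch of the standard first step, assuming the mode-dependent quadratic Lyapunov function $V(x_k, r_k) = x_k^{\mathsf T} P_{r_k} x_k$ and writing $\bar{A}_i$ for the closed-loop system matrix in mode $i$ (both introduced here for illustration),
$$\mathbb{E}\{V(x_{k+1}, r_{k+1}) \mid x_k,\, r_k = i\} - V(x_k, i) = x_k^{\mathsf T}\Big(\bar{A}_i^{\mathsf T}\Big(\sum_{j \in \mathcal{S}} \pi_{ij} P_j\Big)\bar{A}_i - P_i\Big) x_k + (\text{disturbance and saturation terms}),$$
and the LMI conditions of Theorem 6 are what make this difference admit the required recursive bound.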

Theorem 7. For each mode $i$, there exists a state feedback controller such that the resulting closed-loop system (12) is stochastically finite-time stabilizable via state feedback with the prescribed $H_\infty$ attenuation level if there exist three scalars, two sets of mode-dependent symmetric matrices, and two sets of mode-dependent matrices such that conditions (38)–(41) hold.

Proof. Choose the same Lyapunov function as in Theorem 6 and introduce the corresponding performance index. In the light of Theorem 6, pre- and postmultiplying (38) by an appropriate congruence transformation and applying the Schur complement lemma together with (43), we derive that the dissipation-type inequality (44) holds for all modes. According to (44), summing over the horizon under the zero-value initial condition and noting that the Lyapunov function is nonnegative, the finite-horizon attenuation inequality follows from (47). The remainder of the argument is similar to the process of Zhang and Liu [34]. Finally, pre- and postmultiplying (49) by an appropriate matrix and its transpose, respectively, we derive condition (40). This completes the proof.
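As a sketch of the step behind (44)–(47), the LMI conditions typically guarantee a dissipation inequality of the form
$$\mathbb{E}\{V(x_{k+1}, r_{k+1})\} - \mathbb{E}\{V(x_k, r_k)\} + \mathbb{E}\{z^{\mathsf T}(k)\, z(k)\} - \gamma^{2} w^{\mathsf T}(k)\, w(k) < 0,$$
which, summed from $k = 0$ to $N$ under the zero-initial condition and the nonnegativity of $V$, yields the finite-horizon $H_\infty$ inequality of Definition 4.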

Remark 8. For the given scalars, we can take $\gamma^{2}$ as the variable to be optimized in order to obtain an optimized finite-time stabilizing controller. The attenuation level $\gamma$ can be reduced to the minimum possible value such that LMIs (38)–(41) hold. The optimization problem can thus be described as minimizing $\gamma^{2}$ subject to the LMI constraints (38)–(41).
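As an illustration of how a problem of this type, minimizing a scalar subject to LMI constraints, can be solved numerically, the following CVXPY sketch poses a generic semidefinite program with the same structure. The system data, dimensions, and auxiliary objective below are hypothetical placeholders; the paper's design would instead impose the mode-dependent LMIs (38)–(41) and minimize the squared attenuation level.

```python
# Generic "minimize a scalar subject to LMI constraints" template in CVXPY.
# The single-mode data and the stabilization LMI below are placeholders; the
# paper's design uses the mode-dependent LMIs (38)-(41) with gamma^2 as the
# objective.
import numpy as np
import cvxpy as cp

A = np.array([[1.1, 0.3], [0.0, 0.8]])   # hypothetical single-mode matrix
B = np.array([[0.0], [1.0]])             # hypothetical input matrix
n, m = A.shape[0], B.shape[1]

Q = cp.Variable((n, n), symmetric=True)  # Lyapunov-like variable
Y = cp.Variable((m, n))                  # controller variable, K = Y Q^{-1}
rho = cp.Variable(nonneg=True)           # auxiliary scalar to be minimized

# Block LMI [[Q, A Q + B Y]; [(A Q + B Y)^T, Q]] > 0, built as an explicitly
# symmetric variable whose blocks are pinned by equality constraints.
L = cp.Variable((2 * n, 2 * n), symmetric=True)
constraints = [
    L[:n, :n] == Q,
    L[:n, n:] == A @ Q + B @ Y,
    L[n:, n:] == Q,
    cp.lambda_min(L) >= 1e-6,            # L > 0 (Schur form of stabilization)
    cp.lambda_min(Q) >= 1.0,             # normalization Q >= I
    cp.lambda_max(Q) <= rho,             # rho bounds the size of Q
]

prob = cp.Problem(cp.Minimize(rho), constraints)
prob.solve(solver=cp.SCS)
K = Y.value @ np.linalg.inv(Q.value)     # recovered state feedback gain
print(prob.status, float(rho.value), K)
```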

4. Illustrative Examples

In this section, a numerical example is provided to demonstrate the effectiveness of the proposed method. Consider the following system with four operating modes.

Mode 1. Consider

Mode 2. Consider

Mode 3. Consider

Mode 4. Consider

The transition probability matrix is given as follows:
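Purely as an illustration of how the mode process of such an example can be simulated, the following sketch uses a hypothetical 4-by-4 transition probability matrix; the numerical values are placeholders, not the paper's data.

```python
# Simulate the mode process r_k of a discrete-time Markov jump system.
# The 4x4 transition matrix below is a hypothetical placeholder.
import numpy as np

rng = np.random.default_rng(0)
Pi = np.array([[0.4, 0.3, 0.2, 0.1],     # rows sum to 1: Pi[i, j] = Pr{r_{k+1}=j | r_k=i}
               [0.2, 0.4, 0.3, 0.1],
               [0.1, 0.3, 0.4, 0.2],
               [0.2, 0.2, 0.2, 0.4]])

N = 50                                   # finite horizon length (placeholder)
r = np.empty(N + 1, dtype=int)
r[0] = 0                                 # initial mode
for k in range(N):
    r[k + 1] = rng.choice(4, p=Pi[r[k]])
print(r)
```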

In this case, we choose the initial values of the design parameters; applying Theorem 6 then yields the corresponding solution matrices together with the bounds of the input saturation.

Based on Theorem 7, we derive the corresponding state feedback controller gains.

Remark 9. Figure 1 shows the jump rates; Figure 2 shows the states of the open-loop Markovian jump system; Figure 3 shows the states of the closed-loop Markovian jump system. Applying the controller designed in this paper, it is clearly observed that the states of the closed-loop system converge to zero quickly. The figures thus confirm that the designed controller guarantees that the resulting closed-loop system is locally finite-time stabilized in the mean-square sense.

5. Conclusions

This paper has considered the finite-time stabilization problem for a class of discrete-time Markov jump systems with input saturation. A finite-time state feedback controller is designed to guarantee the stochastic finite-time boundedness and stochastic finite-time stabilization of the resulting closed-loop system for all admissible disturbances. Based on stochastic finite-time stability analysis, sufficient conditions are derived in the form of linear matrix inequalities. Finally, simulation results are given to illustrate the effectiveness of the proposed approach. In future work, we will study the finite-time stabilization problem for a class of Markov jump systems with constrained input and time delay.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant 61203047, the Natural Science Foundation of Anhui Province under Grant 1308085QF119, and the Key Foundation of Natural Science for Colleges and Universities in Anhui Province under Grant KJ2012A049.