
Abstract and Applied Analysis

Volume 2014 (2014), Article ID 938781, 16 pages

http://dx.doi.org/10.1155/2014/938781

## Robust Finite-Time H∞ Control for Nonlinear Markovian Jump Systems with Time Delay under Partially Known Transition Probabilities

Institute of Automation, Qufu Normal University, Qufu, Shandong 273165, China

Received 7 November 2013; Accepted 7 December 2013; Published 20 February 2014

Academic Editor: Hao Shen

Copyright © 2014 Dong Yang and Guangdeng Zong. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

This paper is concerned with the problem of robust finite-time H∞ control for a class of nonlinear Markovian jump systems with time delay under partially known transition probabilities. Firstly, for the nominal nonlinear Markovian jump systems, sufficient conditions are proposed to ensure finite-time boundedness, H∞ finite-time boundedness, and finite-time H∞ state feedback stabilization, respectively. Then, a robust finite-time H∞ state feedback controller is designed which, for all admissible uncertainties, guarantees the H∞ finite-time boundedness of the corresponding closed-loop system. All the conditions are presented in terms of strict linear matrix inequalities. Finally, a numerical example is provided to demonstrate the effectiveness of the results.

#### 1. Introduction

Markovian jump systems are a class of hybrid dynamical systems consisting of an indexed family of continuous- or discrete-time subsystems and a Markov chain that orchestrates the switching among them at stochastic time instants; they have received extensive attention over the past few decades [1, 2]. Many real-world processes, such as economic systems [3], manufacturing systems [4], electric power systems [5], and communication systems [6], may be modeled as Markovian jump systems when a malfunction of sensors or actuators causes a jump in process behavior. Recently, nonlinear Markovian jump systems have been widely applied and developed in various disciplines of science and engineering, and a great number of excellent works have appeared [7–9].

Generally speaking, the behavior of nonlinear Markovian jump systems is determined by the transition probabilities of the jumping process. Usually, it is assumed that the information on transition probabilities is completely known. However, the transition probabilities may be only partially known in some real systems. For example, networked control systems can be modeled by nonlinear Markovian jump systems with partially known transition probabilities when packet dropouts or channel delays occur [10]. In addition, there are only a few results that rely on known bounds of the transition probability rates or fixed connection weighting matrices [11, 12]. Therefore, it is reasonable to study Markovian jump systems with partially known transition probabilities, especially when it is difficult to measure the bounds of the transition probability rates. This motivates our research interest.

Uncertainties and time delays frequently occur in engineering systems; they are a common source of instability, often cause undesirable performance, and may even drive the system out of control [14, 15]. Therefore, robust control of time-delay systems has received increasing attention in the control community [16–18]. On the other hand, one may be interested not only in system stability but also in a bound on the system trajectories over a fixed short time interval [19]. For instance, in the problem of robot arm control [7], when the robot works under different environmental conditions with changing payloads, the angle position of the arm must not exceed some threshold within a prescribed time interval. Meanwhile, much importance is attached to the H∞ control problem, which is to find a stabilizing controller such that the disturbance attenuation level remains below a prescribed level. There are a great number of useful and interesting results on the H∞ control problem for linear and nonlinear Markovian jump systems in the literature [20–25]. To the best of our knowledge, however, the synthesis issue of robust finite-time H∞ control for nonlinear Markovian jump systems with time delay under partially known transition probabilities has not been fully investigated until now, which motivates the present study.
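As a toy numeric illustration of a disturbance attenuation level (not taken from the paper), the sketch below simulates the scalar system x′(t) = −2x(t) + w(t) with output z(t) = x(t), whose L2 gain equals 1/2, and checks that the attained output-to-disturbance energy ratio stays below that prescribed level under a zero initial condition.

```python
import numpy as np

# Toy example (our choice, not from the paper): for x'(t) = -2 x(t) + w(t),
# z(t) = x(t), the L2 gain (the H-infinity norm of 1/(s + 2)) equals 1/2.
# Under zero initial condition the output energy can never exceed (1/2)^2
# times the disturbance energy.
dt = 1e-3
t = np.arange(0.0, 20.0, dt)
w = np.sin(t)                      # an admissible finite-energy disturbance on [0, 20]
x = 0.0
z = np.empty_like(t)
for k in range(len(t)):
    z[k] = x                       # record the output before the step
    x += dt * (-2.0 * x + w[k])    # forward-Euler integration step

# attained attenuation level: sqrt(output energy / disturbance energy)
ratio = np.sqrt(np.sum(z**2) / np.sum(w**2))
```

For this sinusoidal disturbance the attained ratio is close to |1/(j·1 + 2)| ≈ 0.447, safely below the gain bound 0.5, which is the kind of guarantee an H∞ controller enforces for all admissible disturbances.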

In this paper, we investigate the problem of robust finite-time H∞ control for nonlinear Markovian jump systems with time delay under partially known transition probabilities. The main contributions are tractable sufficient conditions ensuring finite-time boundedness, H∞ finite-time boundedness, and finite-time H∞ state feedback stabilization. A robust finite-time H∞ state feedback controller is designed which guarantees the H∞ finite-time boundedness of the closed-loop system. For computational convenience, all the conditions are cast as linear matrix inequalities. Finally, a numerical example is provided to demonstrate the effectiveness of the main results.

*Notations.* Throughout this paper, the notation used is fairly standard. For real symmetric matrices X and Y, the notation X ≥ Y (resp., X > Y) means that the matrix X − Y is positive semidefinite (resp., positive definite). Aᵀ represents the transpose of the matrix A, and A⁻¹ represents the inverse of A. λmax(A) (resp., λmin(A)) is the maximum (resp., minimum) eigenvalue of the matrix A. diag{X, Y} represents the block diagonal matrix composed of X and Y. I is the identity matrix with appropriate dimensions, and the symmetric terms in a matrix are denoted by an asterisk ∗. ℝⁿ stands for the n-dimensional Euclidean space, ℝ^(m×n) is the set of all m × n real matrices, and ℝ⁺ denotes the set of positive numbers. ‖·‖ denotes the Euclidean norm of vectors. E{·} denotes the mathematical expectation of a stochastic process or vector. L₂[0, ∞) is the space of n-dimensional square-integrable vector functions over [0, ∞).

#### 2. Problem Formulation and Preliminaries

Given a probability space (Ω, F, P), where Ω is the sample space, F is the σ-algebra of events, and P is the probability measure defined on F, the random process {r(t), t ≥ 0} is a Markovian stochastic process taking values in a finite set S = {1, 2, …, N} with transition probability rate matrix Π = (πij), i, j ∈ S. The transition probability from mode i at time t to mode j at time t + Δ is expressed as P{r(t + Δ) = j | r(t) = i} = πij Δ + o(Δ) for j ≠ i, and 1 + πii Δ + o(Δ) for j = i, with transition probability rates πij ≥ 0 for i, j ∈ S, j ≠ i, and πii = −Σ_{j≠i} πij, where Δ > 0 and lim_{Δ→0} o(Δ)/Δ = 0.
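The jump process described above can be sampled numerically: in each visited mode the holding time is exponentially distributed with the row's total exit rate, and the next mode is drawn from the embedded chain. The sketch below (rate matrix and names are ours, purely for illustration) makes this concrete.

```python
import numpy as np

# Illustrative sketch (not from the paper): sampling a continuous-time Markov
# chain r(t) from a transition probability rate matrix Pi, where Pi[i, j]
# (i != j) is the jump rate from mode i to mode j and Pi[i, i] is minus the
# sum of the off-diagonal entries of row i.
def sample_markov_modes(Pi, r0, T, rng):
    """Return the switching times and visited modes on [0, T]."""
    t, r = 0.0, r0
    times, modes = [0.0], [r0]
    while True:
        rate = -Pi[r, r]                  # total exit rate of the current mode
        if rate <= 0.0:                   # absorbing mode: no further jumps
            break
        t += rng.exponential(1.0 / rate)  # holding time ~ Exp(rate)
        if t >= T:
            break
        probs = Pi[r].copy()
        probs[r] = 0.0
        probs /= rate                     # embedded-chain jump probabilities
        r = rng.choice(len(probs), p=probs)
        times.append(t)
        modes.append(r)
    return times, modes

rng = np.random.default_rng(0)
Pi = np.array([[-0.5, 0.5],
               [0.3, -0.3]])             # each row sums to zero
times, modes = sample_markov_modes(Pi, 0, 10.0, rng)
```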

Consider the following nonlinear Markovian jump system with time delay in the probability space (Ω, F, P): where is the state vector, is the control input, is an arbitrary external disturbance, is the control output, represents a vector-valued initial function, and is the constant delay. : is an unknown nonlinear function. , , , , , , , and are known mode-dependent constant matrices with appropriate dimensions. , , and are unknown matrices denoting the uncertainties in the system; the uncertainties are time-varying but norm-bounded, satisfying where , , , , , and are known mode-dependent matrices with appropriate dimensions and is a time-varying unknown matrix function with Lebesgue measurable elements satisfying

Consider the following state feedback controller: where and are the state feedback gains to be designed. Then the closed-loop system is as follows:

For notational simplicity, when , , , , , , , , , , , , , , , , , , , and are, respectively, denoted as , , , , , , , , , , , , , , , , , , and .
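To make the closed-loop dynamics concrete, the following sketch integrates a two-mode jump linear system with a constant state delay d under a mode-dependent feedback u(t) = K_r x(t), using forward Euler and a crude one-step approximation of the mode transitions. All matrices, gains, and rates below are invented for illustration only; they are not the paper's data.

```python
import numpy as np

# Sketch of a delayed Markovian jump system in closed loop (invented data).
def simulate(A, Ad, B, K, Pi, x0, d, T, dt, rng):
    steps = int(round(T / dt))
    m = int(round(d / dt))                       # delay expressed in steps
    # xs[0..m] holds the constant initial history x(t) = x0 on [-d, 0]
    xs = np.tile(np.asarray(x0, float), (m + steps + 1, 1))
    r = 0                                        # initial mode
    for k in range(m, m + steps):
        if rng.random() < -Pi[r, r] * dt:        # P(leave mode r) ~ -pi_rr * dt
            probs = Pi[r].copy()
            probs[r] = 0.0
            probs /= -Pi[r, r]                   # embedded-chain probabilities
            r = rng.choice(len(probs), p=probs)
        x, xd = xs[k], xs[k - m]                 # current and delayed states
        u = K[r] @ x                             # mode-dependent state feedback
        xs[k + 1] = x + dt * (A[r] @ x + Ad[r] @ xd + B[r] @ u)
    return xs[m:]

A  = [np.array([[0.0, 1.0], [-1.0, 0.2]]), np.array([[0.1, 1.0], [-2.0, 0.0]])]
Ad = [0.10 * np.eye(2), 0.05 * np.eye(2)]
B  = [np.array([[0.0], [1.0]]), np.array([[0.0], [1.0]])]
K  = [np.array([[-2.0, -3.0]]), np.array([[-1.0, -2.5]])]
Pi = np.array([[-0.4, 0.4], [0.6, -0.6]])
xs = simulate(A, Ad, B, K, Pi, [1.0, 0.0], 0.2, 5.0, 0.01, np.random.default_rng(1))
```

With these (stabilizing) gains the sampled trajectory stays bounded, which is the qualitative behavior the finite-time results later in the paper quantify.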

In addition, the transition probability rates are considered to be partially known; that is, some elements in the matrix Π are unknown. For instance, for system (2) with four subsystems, the transition probability rate matrix may take the form where “?” represents an unknown transition probability rate. , we denote , and Moreover, if , it is further described as where represents the lth known transition probability rate of the set in the ith row of the transition probability rate matrix Π.

*Remark 1. *When , , this reduces to the case where the transition probability rates of the Markovian jump process are completely known. When , , it means that the transition probability rates are completely unknown. Combining these two extremes, a general form is considered here.
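The index-set bookkeeping behind Remark 1 — splitting each row of the rate matrix into its known and unknown columns — can be sketched as follows (the "?" mask and function name are ours, in the spirit of the paper's four-mode illustration).

```python
import numpy as np

# Sketch: for each row i of a partially known rate matrix, collect the column
# indices with known rates and those marked "?" (unknown). Names are ours.
def split_known_unknown(Pi_partial):
    known, unknown = {}, {}
    for i, row in enumerate(Pi_partial):
        known[i]   = [j for j, v in enumerate(row) if v != "?"]
        unknown[i] = [j for j, v in enumerate(row) if v == "?"]
    return known, unknown

# a four-mode example with some rates unknown, in the style of the paper
Pi_partial = [[-1.3, 0.2, "?", "?"],
              ["?", "?", 0.3, 0.3],
              [0.6, "?", -1.5, "?"],
              [0.4, "?", "?", "?"]]
known, unknown = split_known_unknown(Pi_partial)
```

Remark 1's two extreme cases correspond to every row having an empty unknown set (completely known rates) or an empty known set (completely unknown rates).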

In this paper, the following assumptions, definitions, and lemmas play an important role in our later development.

*Assumption 2. *The external disturbance is varying and satisfies the constraint condition:

*Assumption 3. *, , and satisfies the following inequality
where

*Definition 4 (finite-time stability). *For a given time constant , system (2) is said to be finite-time stable with respect to (), if
where , .

*Definition 5 (finite-time boundedness). *For a given time constant , system (2) is said to be finite-time bounded with respect to (), if condition (13) holds, where , .
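The boundedness notion in Definitions 4 and 5 can be checked numerically along a sampled trajectory: the weighted quadratic form xᵀRx must stay below the bound c₂ over the whole interval [0, T]. The trajectory, weighting matrix, and bound below are invented for illustration.

```python
import numpy as np

# Hedged numeric illustration of finite-time boundedness: along a sampled
# trajectory, check that x(t)^T R x(t) <= c2 at every sample on [0, T].
def finite_time_bounded(traj, R, c2):
    """traj: array of shape (num_samples, n). Returns (bounded?, peak value)."""
    vals = np.einsum("ti,ij,tj->t", traj, R, traj)  # x^T R x at each sample
    return bool(np.all(vals <= c2)), float(vals.max())

# a decaying two-dimensional sample trajectory (our toy data)
t = np.linspace(0.0, 2.0, 201)
traj = np.stack([np.exp(-t) * np.cos(3 * t), np.exp(-t) * np.sin(3 * t)], axis=1)
R = np.eye(2)
ok, peak = finite_time_bounded(traj, R, c2=1.5)
```

Here the peak of xᵀRx is attained at t = 0 and equals 1, so the trajectory is finite-time bounded with respect to any c₂ > 1 on this interval.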

*Definition 6 (H∞ finite-time boundedness). *For a given time constant , system (2) is said to be H∞ finite-time bounded with respect to (), if there exists a positive constant , such that the following two conditions hold: (1) system (2) is finite-time bounded with respect to (); (2) under the zero initial condition (), for any external disturbance satisfying condition (10), the control output of system (2) satisfies

*Definition 7 (finite-time H∞ state feedback stabilization). *System (2) is said to be finite-time H∞ state feedback stabilizable with respect to (), if there exist a positive constant and a state feedback controller of the form (5) such that the closed-loop system (6) is H∞ finite-time bounded.

*Definition 8 (see [26]). *In the Euclidean space , introduce the stochastic Lyapunov function for system (2) as , and the weak infinitesimal operator satisfies

*Remark 9. *It easily follows from (12) that , . So and can be decomposed as

*Remark 10. *It is noticed that finite-time stability can be regarded as a particular case of finite-time boundedness by setting . That is, finite-time boundedness implies finite-time stability, but the converse is not true.

Lemma 11 (see [27]). *Let , , , and be real matrices of appropriate dimensions with ; then for a positive scalar , the following holds:*
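Lemma 11 is the standard norm-bounded-uncertainty inequality, commonly written (with symbols of our choosing, since they are elided above) as D F E + EᵀFᵀDᵀ ≤ ε D Dᵀ + ε⁻¹ EᵀE for any F with FᵀF ≤ I and any ε > 0. The sketch below spot-checks it numerically for random matrices by confirming the gap matrix is positive semidefinite.

```python
import numpy as np

# Numeric spot-check of the bounding lemma: for F with F^T F <= I and eps > 0,
#   D F E + E^T F^T D^T  <=  eps * D D^T + (1/eps) * E^T E
# in the positive semidefinite ordering. All matrices are random examples.
rng = np.random.default_rng(7)
D = rng.standard_normal((3, 2))
E = rng.standard_normal((2, 3))
F = rng.standard_normal((2, 2))
F /= 1.1 * np.linalg.norm(F, 2)    # scale so the spectral norm of F is < 1

eps = 0.7
lhs = D @ F @ E + (D @ F @ E).T
rhs = eps * (D @ D.T) + (1.0 / eps) * (E.T @ E)
gap_min = float(np.linalg.eigvalsh(rhs - lhs).min())  # should be nonnegative
```

The inequality follows from expanding (√ε D − ε^(−1/2) EᵀFᵀ)(√ε D − ε^(−1/2) EᵀFᵀ)ᵀ ≥ 0 and using FᵀF ≤ I, which is why the minimum eigenvalue of the gap is never negative.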

*The aim in this paper is to find a tractable solution to the problem of finite-time state feedback stabilization.*

#### 3. Main Results

#### 3.1. Finite-Time Boundedness Analysis

*In this subsection, we consider the problem of finite-time boundedness for the nominal system of nonlinear Markovian jump system (2) with the uncertain matrices set to zero for all ; that is,
Under the controller (5), the closed-loop system is*

*Theorem 12. Given , if there exist positive constants and , symmetric positive definite matrices , and , and symmetric matrices , such that for all
then system (18) () under partially known transition probabilities is finite-time bounded with respect to (), where*

*Proof. *For system (18) (), choose a Lyapunov function candidate
where . Then by Definition 8, we get
Based on Lemma 11, there exist scalars such that
Substituting (27) into (26) yields
It is easy to obtain that
From (28) and (29), the following holds:
Due to the fact that for arbitrary symmetric matrices , (30) can be written as
Noticing that for all and for all , if (the elements of the diagonal are known), by inequalities (20) and (21), the following inequalities hold:

If (the elements of the diagonal are unknown), according to inequalities (20)–(22), inequality (32) holds. Multiplying (32) by yields
Applying Dynkin’s formula to (33), we obtain
which shows

This together with and gives rise to

Considering that
and combining (36) and (37), it follows that
Condition (38) implies that, for , .

The proof is complete.

*Corollary 13. Given , if there exist positive constants , , and , symmetric positive definite matrices , and , and symmetric matrices , such that for all
then system (18) under partially known transition probabilities is finite-time bounded with respect to (), where*

#### 3.2. Finite-Time H∞ Performance Analysis

*In this subsection, based on Corollary 13, some sufficient conditions will be provided ensuring the H∞ finite-time boundedness of system (18) and the finite-time H∞ stabilization of system (19).*

*Theorem 14. Given and satisfying (10), system (18) under partially known transition probabilities is H∞ finite-time bounded with respect to (), if there exist positive constants , , and , symmetric positive definite matrices and , and symmetric matrices , such that for all
where*

*Proof. *From (44), the following inequality holds:
This together with (49) implies (39). Then based on (39)–(42), system (18) is finite-time bounded.

Then, let us prove that inequality (14) is satisfied for any external disturbance under zero initial condition. For system (18), choosing a Lyapunov function candidate (25), we have
for any symmetric matrices .

According to inequalities (44)–(46), we derive
Under zero initial condition, using Dynkin’s formula yields
Further, it implies that
Therefore expression (14) holds with .

The proof is complete.

*Corollary 15. Given and satisfying (10), system (19) under partially known transition probabilities is finite-time H∞ state feedback stabilizable via a state feedback controller (5) with respect to , if there exist positive constants , , and , symmetric positive definite matrices and , and symmetric matrices , such that for all
where*

*It is clear that (54) is a nonlinear matrix inequality due to the presence of the nonlinear terms , , , and . In order to solve for the desired controller , we give the following result.*

*Theorem 16. Given , system (18) under partially known transition probabilities is finite-time state feedback stabilizable via a state feedback controller with respect to , if there exist positive scalars , , , , and , symmetric positive definite matrices , symmetric matrices , and matrices and such that for all
where
with described in (9) and . Moreover, the finite-time state feedback controller gains in (5) are given by .*

*Proof. *It is clear that system (18) is finite-time state feedback stabilizable if the conditions (54)–(57) are satisfied. Notice that inequality (54) is equivalent to the following condition:
Pre- and postmultiplying inequality (66) by block diagonal matrix diag , letting , , and , we have
where
Since , , inequality (67) is discussed in the following two cases.

*Case 1*. When , the left side of (67) becomes
where
Applying the Schur complement lemma to (69), (59) easily follows.

*Case 2*. When , inequality (69) turns into
where
Similar to the proof of Case 1, we can show that (60) is true.

Pre- and postmultiplying inequalities (55) and (56) by , respectively, and letting , , and , we have
Inequality (73) is equivalent to LMI (61). Denoting and taking into consideration, we conclude that condition (57) holds. Hence, the following conditions
guarantee that
It is easily observed that condition (76) implies LMI (63) and that (75) is equivalent to (64). Therefore, if LMIs (59)–(64) hold, the closed-loop system (19) is finite-time bounded, and system (18) can be stabilized via the state feedback controller (5).

This completes the proof of Theorem 16.
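Although the controller formula is elided in this version, LMI syntheses of this type typically recover the mode-dependent gains from the decision variables as K_i = Y_i X_i⁻¹. The sketch below (with invented matrices and our placeholder names X, Y) shows this recovery step together with a closed-loop eigenvalue check for each mode.

```python
import numpy as np

# Sketch of the gain-recovery step common to LMI designs of this kind: given
# solutions X_i (symmetric positive definite) and Y_i of the synthesis LMIs,
# the state feedback gains are K_i = Y_i X_i^{-1}. All numbers are invented.
X = [np.array([[2.0, 0.1], [0.1, 1.5]]), np.array([[1.0, 0.0], [0.0, 2.0]])]
Y = [np.array([[-4.0, -5.0]]), np.array([[-4.0, -6.0]])]
A = [np.array([[0.0, 1.0], [1.0, 0.5]]), np.array([[0.2, 1.0], [2.0, -0.1]])]
B = [np.array([[0.0], [1.0]]), np.array([[0.0], [1.0]])]

K = [Yi @ np.linalg.inv(Xi) for Xi, Yi in zip(X, Y)]          # K_i = Y_i X_i^{-1}
cl_eigs = [np.linalg.eigvals(Ai + Bi @ Ki)                    # closed-loop modes
           for Ai, Bi, Ki in zip(A, B, K)]
```

For these toy data every closed-loop matrix A_i + B_i K_i is Hurwitz, which is only a sanity check; the actual finite-time guarantees come from the LMI conditions themselves.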

#### 3.3. Robust Finite-Time H∞ Control


*In this subsection, a robust finite-time state feedback controller is designed to guarantee the finite-time state feedback stabilization of system (2).*

*Theorem 17. Given , the problem of robust finite-time state feedback stabilization for system (2) under partially known transition probabilities is solvable, if there exist positive scalars , , , , , , , , and , symmetric positive definite matrices , symmetric matrices , and matrices and such that for all
where
with described in (9) and . Moreover, the finite-time state feedback controller gains in (5) are given by .*

*Proof. *In (59) and (60), replacing , , and with , , and , respectively, the following conditions are obtained:

Based on Lemma 11, there exist scalars , , , and such that