Abstract

This paper is concerned with a class of discrete-time nonhomogeneous Markov jump systems with multiplicative noises whose time-varying transition probability matrices take values in a convex polytope. Stochastic stability and finite-time stability are considered. Some stability criteria involving infinitely many matrix inequalities are obtained by means of a parameter-dependent Lyapunov function. Furthermore, these infinitely many matrix inequalities are converted into finitely many linear matrix inequalities (LMIs) via a set of slack matrices. Finally, two numerical examples are given to demonstrate the validity of the proposed theoretical methods.

1. Introduction

In practical engineering, many dynamic systems are subject to abrupt parameter variations caused by the external environment or by changes in internal structure. Markov jump system models describe this kind of system effectively; they have therefore attracted more and more attention and have become a hot topic in control theory, and many mature and systematic results have been obtained [1–5].

It should be pointed out that most existing research is carried out in the framework of homogeneous Markov processes (or Markov chains), that is, under the assumption that the transition rate (or transition probability) matrix of the system is the same at every time instant. In practice, this assumption is difficult to satisfy because of the influence of the operating environment. For example, in a Markov jump networked control system, the transition probability is time-variant because packet dropouts and network delays differ over different periods. Another typical example is found in fault-prone systems, where a Markov process is used to describe a failure rate that is influenced by age and usage rate; obviously, such a process is not homogeneous. Driven by these practical problems, attention has turned to nonhomogeneous Markov jump systems [6–21].

In order to describe the nonhomogeneous property, several assumptions have been put forward. In [6], the time-varying transition probabilities of discrete-time Markov jump systems are considered to be finite piecewise homogeneous, with two types of variation in the finite set, arbitrary variation and stochastic variation, which implies that the transition probabilities vary but remain invariant within each interval; the estimation problem is investigated there. This assumption has been generalized to continuous-time Markov jump systems [7], Markov jump neural networks [8, 9], complex networks [10], and singular Markov jump systems [11], where robust stability, stochastic stability, passivity analysis, and synchronization are studied, respectively. Another way of describing the time-varying characteristics is in a polytopic sense. This description was first proposed in [12], where a sufficient condition for stochastic stability is derived by using a parameter-dependent stochastic Lyapunov function. The main idea is that, when the exact transition probability is not known, the transition probability matrix is assumed to take values in a convex polytope with some given vertices. The polytopic model is more general and includes the piecewise homogeneous Markov jump system model with arbitrary switching as a special case. Subsequently, many new results have been obtained for this model. In [13–15], mode-dependent, variation-dependent, and observer-based controllers are designed to guarantee stochastic stability and a prescribed performance index. Other control problems, such as the control problem studied in [16] and fault detection [17], are also considered. An application to DC motors can be found in [18]. The model is used not only for discrete-time systems but also for continuous-time systems; see [19]. However, to the authors' knowledge, the model has not yet been applied to Markov jump systems with multiplicative noises. Systems with multiplicative noises arise frequently in engineering practice; for example, a practical model with control-input-dependent noise in CDMA systems can be found in [22]. This note makes an attempt to investigate the stability of such systems.

Our purpose is to address the stability analysis and stabilizing controller design for discrete-time nonhomogeneous Markov jump systems with multiplicative noises and polytopic transition probability matrices. It is well known that stability analysis is the basis for the design and synthesis of all control systems, and fruitful results have been obtained; see [23–30] and the references therein. In this article, two types of stability will be considered: stochastic stability (SS) in the mean square sense and finite-time stability (FTS).

The organization of this paper is as follows. In Section 2, we provide the model formulation and give some definitions. Section 3 is devoted to stochastic stability and stabilization. In Section 4, finite-time stability and stabilization are discussed. Some numerical examples are presented in Section 5. Finally, a brief concluding remark is given in Section 6.

Notations 1. $\mathbb{R}^{n}$ is the $n$-dimensional Euclidean space; $A^{\top}$ is the transpose of a matrix $A$; $\det(A)$ is the determinant of a matrix $A$; $A>0$ means that $A$ is a positive definite matrix; $I$ is the identity matrix; $\|x\|$ is the Euclidean norm of a vector $x$; $E[\cdot]$ is the expectation operator; $\delta_{ij}$ is the Kronecker function; $\mathrm{diag}\{\cdots\}$ is a block diagonal matrix with the indicated block diagonal elements; $\lambda_{\min}(A)$ ($\lambda_{\max}(A)$) is the minimum (maximum) eigenvalue of a matrix $A$; $\mathrm{Cond}(A)$ is the condition number of a matrix $A$; $*$ denotes the symmetric part of a symmetric matrix.

2. Problem Formulation

Consider a discrete-time stochastic Markov jump system with multiplicative noises, denoted by (1), and the corresponding controlled system, denoted by (2), where $x(k)$ and $u(k)$ are the system state and control input, respectively, $x_{0}$ is the initial state, and the coefficient matrices are real matrices of appropriate dimensions. The multiplicative noise $w(k)$ is a sequence of real random variables defined on a complete probability space, with $E[w(k)]=0$ and $E[w(k)w(j)]=\delta_{kj}$. The jump process is described by a discrete-time nonhomogeneous Markov chain $\{r_{k}\}$ which takes values in a finite set. The transition probability $\pi_{ij}(k)$ is the probability of jumping from mode $i$ at time $k$ to mode $j$ at time $k+1$ and satisfies $\pi_{ij}(k)\ge 0$ and $\sum_{j}\pi_{ij}(k)=1$. The transition probability matrix is $\Pi(k)=[\pi_{ij}(k)]$. It is assumed that $\Pi(k)$ is time-varying and takes values in a polytope with given vertex matrices; that is, $\Pi(k)$ is a convex combination of these vertices at every time $k$, as stated in (3). The noise $w(k)$ is independent of $r_{k}$.

For convenience, the coefficient matrices associated with mode $r_{k}=i$ are denoted by $A_{i}$, $B_{i}$, $C_{i}$, and $D_{i}$.
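For readability, a standard form of the open-loop model (1), the controlled model (2), and the polytopic description (3) of the transition probability matrix is written below; the symbols used here are the conventional choices for this class of systems and may differ from the original notation:
\begin{align*}
&\text{(1):}\quad x(k+1)=A_{r_{k}}x(k)+C_{r_{k}}x(k)w(k),\\
&\text{(2):}\quad x(k+1)=A_{r_{k}}x(k)+B_{r_{k}}u(k)+\bigl[C_{r_{k}}x(k)+D_{r_{k}}u(k)\bigr]w(k),\\
&\text{(3):}\quad \Pi(k)=\sum_{s=1}^{S}\alpha_{s}(k)\Pi^{(s)},\qquad \alpha_{s}(k)\ge 0,\quad \sum_{s=1}^{S}\alpha_{s}(k)=1,
\end{align*}
where $\Pi^{(1)},\dots,\Pi^{(S)}$ are the given vertex matrices and $\alpha_{s}(k)$ are the (possibly time-varying) polytope coefficients.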

In this paper, we mainly formulate conditions for SS and FTS. These are two different and independent stability concepts: SS describes the asymptotic behavior of the system over an infinite time horizon, while FTS reflects the transient performance of the system over a finite time interval. Now, let us recall the definitions.

Definition 2 (see [1]). System (1) is called stochastically stable if, for any initial state $x_{0}$ and initial mode $r_{0}$, the expected sum of the squared state norms over the infinite horizon is finite. System (2) is called stochastically stabilizable if there exists a sequence of feedback controls such that, for any initial state $x_{0}$ and initial mode $r_{0}$, the closed-loop system (5) is stochastically stable.
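In the notation above, a standard statement of the stochastic stability requirement in Definition 2 (assumed here in its usual form) is
\[
E\Bigl[\sum_{k=0}^{\infty}\|x(k)\|^{2}\ \Bigm|\ x_{0},r_{0}\Bigr]<\infty .
\]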

Definition 3 (see [23]). System (1) is called finite-time stable with respect to given parameters $(c_{1},c_{2},N,R)$ if the weighted state energy $E[x(k)^{\top}Rx(k)]$ stays below $c_{2}$ over the finite horizon whenever the weighted initial energy $x_{0}^{\top}Rx_{0}$ does not exceed $c_{1}$, where $R>0$ is a given matrix and $c_{1}$, $c_{2}$, and $N$ are given real numbers. System (2) is said to be finite-time stabilizable if there exists a sequence of feedback controls such that the closed-loop system (5) is finite-time stable.
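Likewise, a standard form of the finite-time stability requirement, with the conventional parameters $0<c_{1}<c_{2}$, horizon $N$, and weighting matrix $R>0$ assumed here, is
\[
x_{0}^{\top}Rx_{0}\le c_{1}\ \Longrightarrow\ E\bigl[x(k)^{\top}Rx(k)\bigr]<c_{2},\qquad k\in\{1,2,\dots,N\}.
\]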

3. Stochastic Stability and Stabilization

In this section, some sufficient conditions will be provided for the stochastic stability and stabilization of systems (1) and (2).

Theorem 4. If there exists a set of matrices such that the parameter-dependent matrix inequality (7) holds for every mode and for all admissible values of the polytope parameters, then system (1) is stochastically stable.

Proof. Let $\mathcal{F}_{k}$ denote the $\sigma$-algebra generated by the system history up to time $k$, and consider the parameter-dependent Lyapunov function candidate associated with the matrices in (7). Taking the conditional expectation of this function one step ahead and applying (7) shows that its expected decrease dominates a positive multiple of $\|x(k)\|^{2}$. Summing these inequalities over $k$ yields a finite bound on $E[\sum_{k}\|x(k)\|^{2}]$. Therefore, system (1) is stochastically stable from Definition 2. The proof is completed.
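For completeness, a standard version of the Lyapunov computation behind this argument, written with the conventional symbols above and with $P_{i}(\alpha)$ denoting the parameter-dependent Lyapunov matrices (an assumed notation), is as follows. With $V(k)=x(k)^{\top}P_{r_{k}}(\alpha(k))x(k)$ and $\overline{P}_{i}(k)=\sum_{j}\pi_{ij}(k)P_{j}(\alpha(k+1))$, the properties $E[w(k)]=0$ and $E[w(k)^{2}]=1$ give
\[
E\bigl[V(k+1)\mid x(k)=x,\ r_{k}=i\bigr]=x^{\top}\bigl(A_{i}^{\top}\overline{P}_{i}(k)A_{i}+C_{i}^{\top}\overline{P}_{i}(k)C_{i}\bigr)x,
\]
so a condition of the form $A_{i}^{\top}\overline{P}_{i}(k)A_{i}+C_{i}^{\top}\overline{P}_{i}(k)C_{i}-P_{i}(\alpha(k))\preceq-\beta I$ with $\beta>0$ yields
\[
\beta\,E\Bigl[\sum_{k=0}^{T}\|x(k)\|^{2}\Bigr]\le V(0)-E[V(T+1)]\le V(0),
\]
and letting $T\to\infty$ gives the bound required by Definition 2.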

Remark 5. Theorem 4 develops a sufficient condition for SS of system (1) when the transition probability matrix takes arbitrary values in a polytope with given vertices. For Markov jump switching systems with finitely many modes, stochastic stability is equivalent to asymptotic mean square stability (AMSS) [1]. So Theorem 4 is also a sufficient condition for AMSS.

Remark 6. A convex parameter space method has been used in the control design of uncertain linear systems without jumping. State feedback conditions can easily be derived by exploiting the change of matrix variables due to Geromel and coworkers [31], as sketched below.
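In its simplest, jump-free discrete-time form, that technique can be sketched as follows (the variable names $G$ and $Y$ are the customary ones): the quadratic stability condition for the closed-loop matrix $A+BK$ is replaced by an LMI that is linear in $G$ and $Y=KG$,
\[
\begin{bmatrix} P & AG+BY\\ (AG+BY)^{\top} & G+G^{\top}-P\end{bmatrix}\succ 0,
\]
whose feasibility guarantees that $A+BK$ is Schur stable, with the gain recovered as $K=YG^{-1}$.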

It is hard to verify the conditions in Theorem 4 directly because they consist of infinitely many matrix inequalities; hence they are of more theoretical than practical significance. To obtain solvable conditions, we introduce a set of slack matrices and convert the infinitely many matrix inequalities into finitely many LMIs in the following theorem.

Theorem 7. The following two statements are equivalent:
(1) There exists a set of matrices such that (7) holds for every mode and for all admissible polytope parameters.
(2) There exist a set of matrices and a set of slack matrices such that the finite family of LMIs (13) holds.
The proof is similar to that of the corresponding proposition in [12], so it is omitted.

From Theorem 7, if the open-loop coefficient matrices are replaced with their closed-loop counterparts, it is easy to obtain the following stabilization condition and controller design method.

Corollary 8. If there exist a set of matrices and a set of slack matrices such that the LMIs (14) hold, then system (2) is stochastically stabilized by state feedback, with the feedback gains recovered from the solution matrices.
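As an illustration of how vertex LMI conditions of this kind can be checked numerically, the following Python sketch (using cvxpy, with invented two-mode data) tests a condition that is more conservative than Theorem 4 and Corollary 8: a single parameter-independent Lyapunov matrix per mode is sought such that the stability inequality holds at every vertex of the transition probability polytope, which is sufficient for stochastic stability of the open-loop system. It is not the paper's LMIs (13) or (14), and all numerical data are placeholders.

import numpy as np
import cvxpy as cp

# Illustrative two-mode data (assumed, not taken from the paper's examples).
A = [np.array([[0.5, 0.1], [0.0, 0.6]]),
     np.array([[0.7, -0.2], [0.1, 0.4]])]
C = [0.2 * np.eye(2), 0.1 * np.eye(2)]            # multiplicative-noise coefficient matrices
V = [np.array([[0.7, 0.3], [0.4, 0.6]]),
     np.array([[0.5, 0.5], [0.2, 0.8]])]          # vertices of the transition probability polytope

n, modes, eps = 2, 2, 1e-6
P = [cp.Variable((n, n), symmetric=True) for _ in range(modes)]
cons = [Pi >> eps * np.eye(n) for Pi in P]
for i in range(modes):
    for Pv in V:                                   # enforce the inequality at every polytope vertex
        Pbar = sum(Pv[i, j] * P[j] for j in range(modes))  # conditionally averaged Lyapunov matrix
        cons.append(A[i].T @ Pbar @ A[i] + C[i].T @ Pbar @ C[i] - P[i] << -eps * np.eye(n))

prob = cp.Problem(cp.Minimize(0), cons)
prob.solve(solver=cp.SCS)
print("vertex LMIs feasible (stochastic stability certified):", prob.status == cp.OPTIMAL)

Because the inequality is affine in the transition probabilities, feasibility at every vertex implies feasibility for every convex combination; the slack-matrix formulations (13) and (14) use parameter-dependent Lyapunov matrices to reduce conservatism and, in the case of (14), additionally deliver the feedback gains.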

4. Finite-Time Stability and Control

In this section, we consider finite-time stability. First, a theoretical criterion is given.

Theorem 9. If there exist a scalar $\gamma$ and a set of matrices such that conditions (15) and (16) hold, then system (1) is finite-time stable with respect to $(c_{1},c_{2},N,R)$.

Proof. Applying the same procedure as in the proof of Theorem 4, condition (15) implies that the expected value of the parameter-dependent Lyapunov function grows by at most a factor of $\gamma$ at each step over the finite horizon. Combining this bound with condition (16) and the eigenvalue bounds relating the Lyapunov function to the weighted state energy shows that $E[x(k)^{\top}Rx(k)]<c_{2}$ whenever $x_{0}^{\top}Rx_{0}\le c_{1}$. Hence, from Definition 3, system (1) is finite-time stable with respect to $(c_{1},c_{2},N,R)$. The proof is completed.
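A standard version of the chain of estimates used in this type of proof, with $\widetilde{P}=R^{-1/2}PR^{-1/2}$ for the relevant Lyapunov matrix $P$ and a scalar $\gamma\ge 1$ (assumed notation; in the parameter-dependent case the extreme eigenvalues are taken over all modes and polytope vertices), reads
\begin{align*}
E[V(k)]&\le\gamma^{k}V(0)\le\gamma^{N}\lambda_{\max}(\widetilde{P})\,x_{0}^{\top}Rx_{0}\le\gamma^{N}\lambda_{\max}(\widetilde{P})\,c_{1},\\
E\bigl[x(k)^{\top}Rx(k)\bigr]&\le\frac{E[V(k)]}{\lambda_{\min}(\widetilde{P})}\le\gamma^{N}\,\mathrm{Cond}(\widetilde{P})\,c_{1},
\end{align*}
so $E[x(k)^{\top}Rx(k)]<c_{2}$ for all $k\in\{1,\dots,N\}$ whenever $\gamma^{N}\,\mathrm{Cond}(\widetilde{P})\,c_{1}<c_{2}$, which is the role played by a condition of type (16).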

Next, we convert the infinitely many matrix inequalities (15) and (16) into finitely many matrix inequalities by using slack matrices and properties of the condition number, and we design a state feedback controller that makes system (2) finite-time stabilizable.

Theorem 10. (1) If there exist a scalar $\gamma$, a set of matrices, and a set of slack matrices such that conditions (21) and (22) hold, then system (1) is finite-time stable.
(2) If there exist a scalar $\gamma$, a set of matrices, and a set of slack matrices such that (23) and (22) hold, then system (2) is finite-time stabilized by state feedback, with the feedback gains recovered from the solution matrices.

Proof. (1) The proof can be divided into two parts. First, we prove that (15) holds if (21) holds; here we adopt the method used in the corresponding proposition of [12].
From (21), the lower-right block is positive definite; since the corresponding Lyapunov matrix is positive definite, the slack matrix involved must be invertible. Rearranging (21) and exploiting this invertibility yields an equivalent inequality.

Because of the invertibility of the matrices involved, we obtain (27).

After a change of variables, (27) can be rewritten as (28). By the Schur complement, (28) is equivalent to (29).

Obviously, (15) holds after multiplying (29) by the corresponding polytope coefficients and summing.

Next, we prove that (16) holds if (22) is true. Before proving this, we recall two properties of the condition number of a positive definite matrix $P$: (a) $\mathrm{Cond}(P)=\lambda_{\max}(P)/\lambda_{\min}(P)\ge 1$; (b) $\mathrm{Cond}(P^{-1})=\mathrm{Cond}(P)$.

It follows from (a) and (b) that the eigenvalue ratios appearing in the finite-time bound can be expressed in terms of condition numbers.

Hence, when (22) holds, the resulting bound yields (16).

(2) This can be proved by the same procedure as part (1).

Remark 11. We should point out that, unlike the equivalence of (7) and (13), condition (21) is only sufficient for (15) because of the additional terms introduced in the derivation.

Remark 12. In fact, (22) can be guaranteed by the LMIs (32). So we can obtain a set of feasible feedback controls either by solving (23) together with (32), or by solving (23) alone and then checking whether the solutions satisfy (22).

5. Numerical Examples

In this section, two illustrative examples are presented to show the effectiveness of the proposed main results.

Example 1. Consider system (2) with given coefficient matrices, a given initial state, and given vertices of the polytope of transition probability matrices. By solving the LMIs (14), it can be concluded that system (2) is stochastically stabilized via the obtained feedback gains. The system modes and the state trajectories of the open-loop and closed-loop systems are shown in Figures 1–3, respectively.
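To complement the figures, a small Monte Carlo simulation of a closed-loop jump system of this type could look as follows (Python/NumPy); all matrices, gains, and polytope vertices below are invented placeholders rather than the data of this example.

import numpy as np

rng = np.random.default_rng(0)
A = [np.array([[1.1, 0.3], [0.0, 0.9]]), np.array([[0.8, -0.4], [0.2, 1.05]])]
B = [np.array([[1.0], [0.5]]), np.array([[0.6], [1.0]])]
C = [0.2 * np.eye(2), 0.1 * np.eye(2)]
D = [np.array([[0.1], [0.0]]), np.array([[0.0], [0.1]])]
K = [np.array([[-0.9, -0.3]]), np.array([[-0.5, -0.8]])]          # placeholder feedback gains
V = [np.array([[0.7, 0.3], [0.4, 0.6]]), np.array([[0.5, 0.5], [0.2, 0.8]])]  # TP-polytope vertices
modes = len(A)

x, mode = np.array([1.0, -1.0]), 0
for k in range(60):
    lam = rng.dirichlet(np.ones(len(V)))          # random point in the polytope at time k
    TP = sum(l * Vs for l, Vs in zip(lam, V))     # time-varying transition probability matrix
    w = rng.standard_normal()                      # zero-mean, unit-variance multiplicative noise
    Acl = A[mode] + B[mode] @ K[mode]              # closed-loop drift matrix of the current mode
    Ccl = C[mode] + D[mode] @ K[mode]              # closed-loop noise matrix of the current mode
    x = Acl @ x + (Ccl @ x) * w
    mode = rng.choice(modes, p=TP[mode])           # jump according to the current transition row
print("final state norm:", np.linalg.norm(x))

Averaging the squared state norm over many such runs gives an empirical check of the mean square behavior that can be compared with the trajectories in Figures 1–3.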

Example 2. Consider system (2) with given coefficient matrices, a given initial state, and given vertices of the polytope of transition probability matrices. Given the parameters $(c_{1},c_{2},N,R)$ and choosing the scalar $\gamma$, by solving the LMIs (23) and checking (22), it can be concluded that system (2) is finite-time stabilized via the obtained feedback gains. The system modes and the state trajectories of the open-loop and closed-loop systems are shown in Figures 4–6, respectively.

6. Conclusions

In this paper, we have investigated the stochastic stability and finite-time stability of a class of discrete-time nonhomogeneous Markov jump systems with multiplicative noises and polytopic transition probability matrices. Some sufficient conditions for stability are proposed by means of a parameter-dependent Lyapunov function, and stabilizing controllers are designed by using the LMI toolbox. The simulation results show the effectiveness of the developed techniques. This research motivates the study of the continuous-time version in the future.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (61473160, 61503224), the Special Funds for Postdoctoral Innovative Projects of Shandong Province (201403009), the Research Award Funds for Outstanding Young Scientists of Shandong Province (BS2014SF005), and SDUST Research Fund (2015TDJH105).