Abstract

A necessary and sufficient condition for the stochastic stability of a discrete-time linear switched system with a random switching signal is considered in this paper, under the assumption that the switching signal enforces a fixed dwell time before a Markov switch occurs. It is shown that the stochastic stability of the system is equivalent to that of an auxiliary system with state transitions at the switching times, whose switching signal is a Markov chain. The stochastic stability is then studied using a stochastic Lyapunov approach. The effectiveness of the proposed approach is demonstrated by a numerical example.

1. Introduction

The study of hybrid systems is motivated by many real-world technological processes involving the interconnection of logical and continuous dynamics. The evolution of the logical variables may be modeled either within a deterministic or within a stochastic framework. Among stochastic hybrid systems, a widely investigated class is that of random switched linear systems, which consist of a set of linear systems and a random switching signal. If the switching signal is a Markov process or a Markov chain, the system is a Markov jump linear system [1, 2]. In [1], conditions for the mean square stability of discrete-time jump linear systems with a finite-state Markov chain are presented, and stochastic stability is also considered. The analysis and synthesis problems of stochastic stability for Markov jump linear systems have been extensively addressed, for example, for Markov jump Lur'e systems in [3], systems with state and mode detection delays in [4], antilinear systems in [5], singular systems in [6], and systems in Borel spaces [7]. Owing to their probabilistic description, Markov jump systems are well suited to model changes induced by nature, for example, unexpected events and random faults.

Among the different assumptions on the switching signal, the arbitrary switching framework treats the switching signal as an exogenous perturbation [8]. The properties of stability and performance must then hold for any possible switching rule [9–11]. If the switching signal can be designed or is governed by a supervisor, deterministic models are more adequate. For example, in many hybrid systems, the switching signal may be designed in order to improve some properties of the system [8].

To the best of our knowledge, there exists a vast literature on both stochastic and deterministic hybrid systems, but fewer contributions have investigated the stability of discrete-time hybrid systems subject to both stochastic jumps and deterministic switching. For continuous-time systems, [12] considers the stability analysis of linear switched systems with a random switching signal that can be partitioned into a deterministic part and a random part. In [13], the stability of a class of Markov jump linear systems characterized by piecewise-constant transition rates and system dynamics is investigated. The switching signal proposed in this paper has wide-ranging applications; for example, in general queuing models, the interarrival time is not exponentially distributed and may contain a deterministic component [14].

In this paper, the stochastic stability of a discrete-time switched linear system with a random switching signal is considered. The dwell time of each mode consists of two parts, a fixed dwell time and a random dwell time, which means that the system almost surely stays in each mode for a prescribed number of time instants before a Markov switch occurs. Through an auxiliary Markov jump linear system with state transitions at the switching times, whose stochastic stability is equivalent to that of the original system, a necessary and sufficient condition is derived by using the stochastic Lyapunov approach. When the parameters of the random switching signal are known, the stability of the system can be checked by solving a set of coupled linear matrix inequalities. Since the fixed dwell times can be designed, one can adjust them to influence the stochastic stability of the system, as illustrated in the numerical example.

Compared to the previous work [17], a new class of random switching signals is proposed and a more general view of switching signals is given. Compared to [12], stochastic stability is considered for discrete-time systems. When the system matrices contain zero eigenvalues, proving the equivalence between the stochastic stability of the system and that of its auxiliary system is technically difficult; this difficulty is resolved by using the Jordan decompositions of the system matrices. Moreover, the results in this paper also lay a foundation for novel hybrid controller design.

The remainder of this paper is organized as follows. The mathematical model of the concerned system is formulated and some preliminaries are given in Section 2. In Section 3, a Markov jump switched linear system with state transitions at switching time is proposed, whose stochastic stability is proved to be equivalent to that of the original system. A necessary and sufficient condition is given in Section 4. A numerical example is provided in Section 5.

Notation. The notation used throughout this paper is fairly standard. The superscript $T$ stands for matrix transposition. $\mathbb{N}_{+}$ and $\mathbb{N}$ denote the set of positive integers and the set of nonnegative integers, respectively. $\mathbb{R}^{n}$, $\mathbb{R}^{m\times n}$, and $\mathbb{S}^{n\times n}_{+}$ denote the $n$-dimensional Euclidean space, the set of $m\times n$ real matrices, and the set of $n\times n$ real symmetric positive definite matrices, respectively. The notation $P>0$ means that $P$ is real symmetric and positive definite, and $P>Q$ means $P-Q>0$. For a matrix $A$, $\|A\|$ and $\lambda(A)$ denote the 2-norm and the set of eigenvalues of $A$, respectively. $(\Omega,\mathcal{F},\Pr)$ denotes a complete probability space. $\mathbb{E}\{\cdot\}$ and $\sigma\{\cdot\}$ denote the mathematical expectation and the generated $\sigma$-algebra, respectively.

2. Problem Formulation and Preliminaries

Consider the following discrete-time linear switched system defined on the complete probability space $(\Omega,\mathcal{F},\Pr)$:
$$x(k+1)=A_{r(k)}x(k),\qquad k\in\mathbb{N}, \quad (1)$$
where $x(k)\in\mathbb{R}^{n}$ is the state vector. The switching signal $r(k)$, governing the switching among the different system modes, takes values in the finite set $S=\{1,2,\ldots,N\}$. Suppose that the system switches its operation mode to $i\in S$ at time $k_{l}$, which means that $r(k_{l})=i$ and $r(k_{l}-1)\neq i$; the switching signal can then be described as follows. For $k\in[k_{l},k_{l}+\tau_{i})$, where $\tau_{i}\in\mathbb{N}$, no switching is allowed almost surely; that is,
$$\Pr\{r(k+1)=i\mid r(k)=i\}=1,\qquad k_{l}\leq k<k_{l}+\tau_{i}. \quad (2)$$
For $k\geq k_{l}+\tau_{i}$, mode switching occurs according to the mode transition probabilities given by
$$\Pr\{r(k+1)=j\mid r(k)=i\}=p_{ij}, \quad (3)$$
where $p_{ij}\geq 0$ and $\sum_{j\in S}p_{ij}=1$. The mode transition probability matrix is denoted by $\Pi=[p_{ij}]$. If the next switching occurs at time $k_{l+1}$, we can define the random dwell time $\tau_{i}^{r}=k_{l+1}-k_{l}-\tau_{i}$. The dwell time of the system in mode $i$ is defined as $T_{i}=\tau_{i}+\tau_{i}^{r}$, which indicates the total length of time the system has been in mode $i$.

Remark 1. The parameter $\tau_{i}$, which is a fixed number for every mode $i\in S$, plays the role of the "dwell time" in deterministic switched systems and is called the fixed dwell time of system (1). According to (3), $\tau_{i}^{r}$ is a random variable and is called the random dwell time of the system. The discrete-time switching signal is motivated by the continuous-time case in [12], but there are some differences between the two cases. In the continuous-time case, the next switch might occur an arbitrarily short time after a switching instant, whereas in the discrete-time case, when the system switches its operation mode at time $k_{l}$, the mode cannot change before time $k_{l}+1$, which means $T_{i}\geq 1$ even if the parameter $\tau_{i}=0$.
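To make the switching mechanism concrete, the following Python sketch draws a sample path of a switching signal of this type: the system stays in the current mode for the fixed dwell time almost surely and then attempts a Markov transition at every subsequent step. The mode set, fixed dwell times, and transition matrix below are illustrative placeholders, not values used in the paper.

```python
import numpy as np

def sample_switching_signal(tau, Pi, r0, horizon, rng=None):
    """Draw a sample path r(0), ..., r(horizon-1) of the switching signal.

    tau : list of fixed dwell times, one per mode (modes are 0-indexed here)
    Pi  : mode transition probability matrix, each row summing to 1
    r0  : initial mode
    """
    rng = np.random.default_rng() if rng is None else rng
    path, mode = [], r0
    while len(path) < horizon:
        # Fixed dwell time: the system stays in `mode` for tau[mode] steps almost surely.
        path.extend([mode] * tau[mode])
        # Random dwell time: from now on a Markov transition is attempted at every step,
        # so the remaining sojourn is geometric with parameter 1 - Pi[mode][mode].
        while len(path) < horizon:
            path.append(mode)
            nxt = rng.choice(len(Pi), p=Pi[mode])
            if nxt != mode:
                mode = nxt
                break
    return np.array(path[:horizon])

# Hypothetical parameters, for illustration only.
tau = [3, 2]                       # fixed dwell times of the two modes
Pi = np.array([[0.6, 0.4],
               [0.3, 0.7]])        # mode transition probability matrix
print(sample_switching_signal(tau, Pi, r0=0, horizon=30))
```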

Remark 2. System (1) with the switching signal described above is no longer a traditional Markov jump linear system because of the fixed dwell times $\tau_{i}$. Systems whose switching rule is a Markov chain have been studied in [1]. Here, a modified model setting technique, in which a fixed dwell time $\tau_{i}$ is required for every mode $i$, is proposed in order to relax the restrictions on the switching signal. Obviously, if $\tau_{i}=0$ for every mode $i\in S$, which means that there is no fixed dwell time for any mode, then the switching signal reduces to a Markov chain.

The following lemma is useful for studying the properties of the random dwell time $\tau_{i}^{r}$.

Lemma 3. For every $i\in S$ and every $m\in\mathbb{N}_{+}$, we have $\Pr\{\tau_{i}^{r}=m\}=(1-p_{ii})\,p_{ii}^{\,m-1}$.

Proof. Consider
$$\Pr\{\tau_{i}^{r}=m\}=\prod_{s=0}^{m-2}\Pr\{r(k_{l}+\tau_{i}+s+1)=i\mid r(k_{l}+\tau_{i}+s)=i\}\cdot\Pr\{r(k_{l}+\tau_{i}+m)\neq i\mid r(k_{l}+\tau_{i}+m-1)=i\}=p_{ii}^{\,m-1}(1-p_{ii}). \quad (4)$$
The proof is completed.

According to Lemma 3, the random dwell time $\tau_{i}^{r}$ is a geometrically distributed random variable with parameter $1-p_{ii}$.
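As a side note (a routine computation under the geometric distribution of Lemma 3, not stated explicitly in the paper), the expected dwell times follow directly:

```latex
% Expected random dwell time and expected total dwell time in mode i,
% assuming the geometric pmf of Lemma 3 and p_ii < 1.
\[
  \mathbb{E}\{\tau_i^{r}\}=\sum_{m=1}^{\infty} m\,(1-p_{ii})\,p_{ii}^{\,m-1}=\frac{1}{1-p_{ii}},
  \qquad
  \mathbb{E}\{T_i\}=\tau_i+\frac{1}{1-p_{ii}} .
\]
```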

An example is given next to illustrate the properties of the switching signal.

Example 4. Suppose that the mode set, the fixed dwell times, and the mode transition probability matrix are given. A sample path of the switching signal is shown in Figure 1, in which different symbols distinguish the modes during the fixed dwell time from those during the random dwell time. When the system switches into mode 2 at time $k_{l}$, Figure 1 shows that mode switching will not occur almost surely during the fixed dwell interval $[k_{l},k_{l}+\tau_{2})$, which contains $\tau_{2}$ time instants. After this interval, the system is allowed to switch modes and obeys the switching rule (3). Although the mode may not change at the first instant after the fixed dwell interval, which might appear to be part of that interval, this instant is counted as part of the random dwell time interval.

For the linear switched system (1), the following definition will be adopted in the rest of this paper.

Definition 5. The discrete-time linear switched system (1) is said to be stochastically stable if, for any initial condition $x(0)=x_{0}\in\mathbb{R}^{n}$ and $r(0)=r_{0}\in S$, the following inequality holds:
$$\mathbb{E}\left\{\sum_{k=0}^{\infty}\|x(k)\|^{2}\;\Big|\;x_{0},r_{0}\right\}<\infty. \quad (5)$$

The above stochastic stability notion is also a uniform one, in the sense that inequality (5) is required to hold for every switching signal defined by (2) and (3).

When the fixed dwell times satisfy $\tau_{i}=0$, $i\in S$, system (1) becomes the well-known Markov jump linear system of [1], and the above definition of stochastic stability reduces to the one in [1]. Two questions naturally arise: Is the stochastic stability of system (1) equivalent to that of the Markov jump linear system in [1]? Do the values of $\tau_{i}$ affect the stochastic stability of the system? The following example answers these two questions.

Example 6. Suppose that the mode set is $S=\{1,2\}$, with system matrices $A_{1}$ and $A_{2}$ and a given mode transition probability matrix. We consider the following two situations.

Situation 1 (zero fixed dwell times). In this case, one can easily check that the stochastic stability condition fails, so system (1) is not stochastically stable.

Situation 2 (positive fixed dwell times). In this case, for every initial condition $x_{0}$ and $r_{0}$, it is almost sure that there exists a time $k^{*}$ such that $x(k^{*})=0$. Then $x(k)=0$ for all $k\geq k^{*}$, and it follows that the expectation in (5) is finite. System (1) is stochastically stable.

Remark 7. From Example 6, we can see that the values of the fixed dwell times may affect the stochastic stability of the system.
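The effect of the fixed dwell times can also be probed numerically. The sketch below estimates the cost $\mathbb{E}\{\sum_{k}\|x(k)\|^{2}\}$ by Monte Carlo simulation over a finite horizon, reusing the sample_switching_signal sketch given earlier; the matrices and switching parameters are hypothetical placeholders rather than the data of Example 6, and a truncated horizon only gives an indication of (in)stability, not a proof.

```python
import numpy as np

def estimate_cost(A, tau, Pi, x0, r0, horizon=200, n_runs=500, rng=None):
    """Monte Carlo estimate of E[sum_k ||x(k)||^2] over a finite horizon."""
    rng = np.random.default_rng() if rng is None else rng
    total = 0.0
    for _ in range(n_runs):
        r = sample_switching_signal(tau, Pi, r0, horizon, rng)  # defined in the earlier sketch
        x = np.array(x0, dtype=float)
        cost = 0.0
        for k in range(horizon):
            cost += x @ x                 # ||x(k)||^2
            x = A[r[k]] @ x               # x(k+1) = A_{r(k)} x(k)
        total += cost
    return total / n_runs

# Hypothetical two-mode system: mode 0 is expanding, mode 1 is contracting.
A = [np.diag([1.5, 1.4]), np.diag([0.9, 0.8])]
Pi = np.array([[0.3, 0.7],
               [0.3, 0.7]])
x0, r0 = [1.0, 1.0], 0
print(estimate_cost(A, tau=[0, 0], Pi=Pi, x0=x0, r0=r0))   # no fixed dwell
print(estimate_cost(A, tau=[0, 5], Pi=Pi, x0=x0, r0=r0))   # extra dwell in the contracting mode
```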

3. Markov Jump Linear System with State Transitions

In this section, we study the stochastic stability of system (1) with the switching signal defined by (2) and (3). The difficulty is how to deal with the system states in the fixed dwell time intervals. Thus, a Markov jump linear system with state transitions is constructed, in which state transitions at the switching times are used to replace the system states in the fixed dwell time intervals. Moreover, the stochastic stability of the constructed system is shown to be equivalent to that of system (1).

For each switching time, denote the operation mode entered at that time. The system state then evolves, almost surely, from its value at the switching time to its value after the fixed dwell time has elapsed. An auxiliary system is built in which a state transition between these two values takes place at the switching time, so as to squeeze out the fixed dwell time interval. According to this idea, the auxiliary system with state transitions can be written as in (6), where the system state and the auxiliary variable of the state are defined accordingly and the switching times of the auxiliary system are defined analogously. The switching signal of (6), which governs the switching among the different system modes, is described by a discrete-time homogeneous Markov chain with the mode transition probabilities given in (7), which are the same as those in (3). Suppose that system (6) switches to mode $i$ at one of its switching times, and denote the dwell time of the system in mode $i$ accordingly. According to Lemma 3, this dwell time is a geometrically distributed random variable with parameter $1-p_{ii}$. By (7), the dwell time of the auxiliary system equals the random dwell time in system (1), and the sample state paths of system (1) and the auxiliary system (6) satisfy four properties, (i)–(iv), which relate the switching times, the operation modes, and the states of the two systems at the corresponding instants.

The fourth property is a direct consequence of the first three.
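As a reading aid, the following sketch simulates one plausible form of the construction described above: at each switching instant the fixed dwell interval of the entered mode is squeezed into a single state transition by the matrix power $A_{i}^{\tau_{i}}$, after which the mode dwells according to the Markov chain alone. The precise definition of system (6) is the one given in the paper's equations; this code only illustrates the squeezing idea under that assumption.

```python
import numpy as np

def simulate_auxiliary(A, tau, Pi, x0, r0, n_switches=10, rng=None):
    """Sketch of the auxiliary system: the fixed dwell interval of each visited mode is
    replaced by one state transition z <- A_i^{tau_i} z at the switching instant, and the
    mode then dwells according to the Markov chain (geometric random dwell)."""
    rng = np.random.default_rng() if rng is None else rng
    z, mode, traj = np.array(x0, dtype=float), r0, []
    for _ in range(n_switches):
        # State transition at the switching time squeezes out the fixed dwell interval.
        z = np.linalg.matrix_power(A[mode], tau[mode]) @ z
        # Random dwell under the Markov chain.
        while True:
            traj.append(z.copy())
            z = A[mode] @ z
            nxt = rng.choice(len(Pi), p=Pi[mode])
            if nxt != mode:
                mode = nxt
                break
    return np.array(traj)
```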

In order to establish the equivalence of stochastic stability between system (1) and its auxiliary system (6), we need the following lemma.

Lemma 8. Given a geometrically distributed random variable $X$ with parameter $1-p$, $0<p<1$, and a constant $c>0$ satisfying $cp<1$, one has
$$\mathbb{E}\{c^{X}\}=\frac{c(1-p)}{1-cp}<\infty. \quad (8)$$

Proof. Since $X$ is a geometrically distributed random variable with parameter $1-p$, its probability mass function is $\Pr\{X=m\}=(1-p)p^{m-1}$, $m\in\mathbb{N}_{+}$ (cf. Lemma 3). It follows that
$$\mathbb{E}\{c^{X}\}=\sum_{m=1}^{\infty}c^{m}(1-p)p^{m-1}=c(1-p)\sum_{m=1}^{\infty}(cp)^{m-1}. \quad (9)$$
Thus,
$$\mathbb{E}\{c^{X}\}=\frac{c(1-p)}{1-cp}<\infty, \quad (10)$$
since $cp<1$. The proof is completed.
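The expectation of $c^{X}$ for a geometric random variable is easy to check numerically; the sketch below compares the closed-form value $c(1-p)/(1-cp)$, valid when $cp<1$, with a Monte Carlo estimate. The values of $c$ and $p$ are arbitrary illustrative choices.

```python
import numpy as np

# Monte Carlo check of E[c^X] for X geometric on {1, 2, ...} with success probability 1 - p.
rng = np.random.default_rng(0)
p, c, n = 0.6, 1.2, 200_000           # c * p = 0.72 < 1, so the expectation is finite
X = rng.geometric(1 - p, size=n)      # numpy's geometric distribution is supported on {1, 2, ...}
print("Monte Carlo estimate:", np.mean(c ** X))
print("Closed form         :", c * (1 - p) / (1 - c * p))
```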

For the systems in (1) and (6), we define the corresponding filtrations, respectively. The first result of this paper, which establishes the equivalence between system (1) and the Markov switched system (6) with state transitions, is stated as follows.

Theorem 9. System (1) is stochastically stable if and only if system (6) is stochastically stable.

Proof.
Necessity. Suppose that system (1) is stochastically stable; that is, for every initial condition $x_{0}$ and $r_{0}$, the expectation in (5) is finite. Then, based on properties (i)–(iv), we have (11). It follows that system (6) is stochastically stable.
Sufficiency. To prove sufficiency, the following two cases are considered.
Case 1. None of the system matrices $A_{i}$ has a zero eigenvalue, which means that every $A_{i}$ is nonsingular, $i\in S$.
Then there exists a constant such that the stated bounds hold for every mode, and it follows that the corresponding estimates hold along the trajectories. Thus, for any initial condition and any switching index, we have (12), where the last "=" holds because of the properties of the random dwell times and Lemma 8. In one case, the required constant can be chosen directly; in the other, we can obtain (13), and the last inequality in (13) holds because of (12).
Case 2. There exists some $A_{i}$ whose eigenvalues include zero.
Suppose that system (1) switches its mode at some switching time to a mode whose system matrix has zero eigenvalues. Using the Jordan decomposition of this matrix, there exists a nonsingular matrix such that the matrix is similar to a block-diagonal form, in which one block is the Jordan matrix whose diagonal blocks are Jordan blocks with eigenvalue zero and the other block collects the Jordan blocks with the remaining nonzero eigenvalues. Under this coordinate transformation, with the state partitioned accordingly, we have (14). Similarly, one can obtain (15). Because the second block contains no zero eigenvalues, together with the conclusion of Case 1, there exists a constant such that (16) holds.
The first block is a Jordan matrix whose diagonal blocks are Jordan blocks with eigenvalue zero and is therefore nilpotent, so its powers vanish after finitely many steps. It follows that (17) holds. By (16) and (17), we obtain (18).
Using (13) and (18), we have (19). The first inequality in (19) holds because of (13) and (18), and the second holds because of property (iv).
Together with the fact stated in (20), we have (21). Thus, (22) holds, which means that system (1) is stochastically stable.
The proof is completed.

Remark 10. In the proof of sufficiency, two cases are considered. When some of the system matrices have zero eigenvalues, we need to use the state information at certain time instants to estimate the state at others, which is different from, and more technically difficult than, the continuous-time case in [12].

Remark 11. From Theorem 9, the stochastic stability of system (1) is equivalent to that of system (6). Since the switching signal of system (6) is a Markov chain, we can use a stochastic Lyapunov approach to study the stochastic stability of system (1).
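For readers unfamiliar with the stochastic Lyapunov approach, the generic argument used for Markov jump linear systems is sketched below; the symbols $V$, $P_{i}$, and $\beta$ are generic illustrations, and the paper's own Lyapunov function is the one introduced in the proof of Theorem 12.

```latex
% Generic stochastic Lyapunov argument for a Markov jump linear system with state z(k)
% and Markov switching signal theta(k); a sketch, not the paper's exact display.
\[
  V\bigl(z(k),\theta(k)\bigr)=z(k)^{T}P_{\theta(k)}z(k),\qquad P_{i}>0 .
\]
If there exists $\beta>0$ such that, along the trajectories of the system,
\[
  \mathbb{E}\bigl\{V\bigl(z(k+1),\theta(k+1)\bigr)\mid z(k),\theta(k)\bigr\}
  -V\bigl(z(k),\theta(k)\bigr)\le-\beta\,\|z(k)\|^{2},
\]
then summing over $k=0,\dots,K$ and taking total expectation gives
\[
  \beta\,\mathbb{E}\Bigl\{\sum_{k=0}^{K}\|z(k)\|^{2}\Bigr\}\le V\bigl(z(0),\theta(0)\bigr)
  \quad\text{for every }K,
\]
so the expected sum is bounded and the system is stochastically stable.
```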

4. A Necessary and Sufficient Condition of Stochastic Stability

In this section, a necessary and sufficient condition for the stochastic stability of system (1) is proposed.

Theorem 12. System (1) is stochastically stable if and only if there exist matrices $P_{i}>0$, $i\in S$, such that the coupled linear matrix inequalities in (23) hold.

Proof. It follows from (6) and (7) that (24) holds. According to Theorem 9, the stochastic stability of systems (1) and (6) is equivalent.
Necessity. If system (6) is stochastically stable, we will show that there exist $P_{i}>0$ such that (23) holds. Given any positive definite matrices, define the function in (25). Because its summands are nonnegative, the left side of (25) is nondecreasing in its upper limit, and it is also bounded as the upper limit tends to infinity because of the stochastic stability of system (6). Then the limit of the left side exists. Define a new matrix-valued function as in (26), for every mode and time instant. Then one can obtain (27). By (25), we have (28); the last "=" holds because the switching signal of (6) is a Markov chain. Therefore, (29) holds. On the other hand, we have (30). Together with (29), taking the limit on the right side of (30), for every mode we can get (31).
Sufficiency. Denote the quantities in (32). We only need to prove that system (6) is stochastically stable. Consider the Lyapunov function in (33). Then we have (34). For every mode and time instant, we have (35). Thus, (36) holds, which means that system (6) is stochastically stable.
The proof is completed.
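Once the parameters of the switching signal are known, coupled linear matrix inequalities of this kind can be checked numerically. The sketch below tests feasibility of the classical coupled Lyapunov LMIs for a discrete-time Markov jump linear system, that is, the special case $\tau_{i}=0$ treated in [1]; it does not reproduce condition (23) itself, whose exact form involves the fixed dwell times through the auxiliary system, and the matrices used are placeholders.

```python
import numpy as np
import cvxpy as cp

def mjls_coupled_lmi_feasible(A, Pi, eps=1e-6):
    """Feasibility of the classical MJLS stability LMIs (the tau_i = 0 special case):
    find P_i > 0 with  A_i^T (sum_j p_ij P_j) A_i - P_i < 0  for every mode i.
    This is NOT condition (23) of the paper, only the Markov-chain special case."""
    n, N = A[0].shape[0], len(A)
    P = [cp.Variable((n, n), symmetric=True) for _ in range(N)]
    I = np.eye(n)
    constraints = []
    for i in range(N):
        Ei = sum(Pi[i, j] * P[j] for j in range(N))   # E_i(P) = sum_j p_ij P_j
        M = A[i].T @ Ei @ A[i] - P[i]
        M = 0.5 * (M + M.T)                           # symmetrize (exact in theory)
        constraints += [P[i] >> eps * I, M << -eps * I]
    problem = cp.Problem(cp.Minimize(0), constraints)
    problem.solve()
    return problem.status == cp.OPTIMAL

# Placeholder data, for illustration only.
A = [np.array([[0.9, 0.2], [0.0, 0.7]]),
     np.array([[1.1, 0.0], [0.1, 0.5]])]
Pi = np.array([[0.6, 0.4],
               [0.3, 0.7]])
print(mjls_coupled_lmi_feasible(A, Pi))
```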

5. Numerical Example

In this section, a numerical example is given to demonstrate the validity and applicability of the developed theoretical result.

Consider a two-dimensional discrete-time linear switched system of the form (1) consisting of three operation modes with fixed dwell times $\tau_{i}$, $i\in S$. The system matrices are given as follows:

The transition probability matrix and the initial condition are also specified.

For the Markov jump linear system obtained when the fixed dwell times are set to zero, using the LMI toolbox of Matlab, we find that there do not exist positive definite matrices $P_{i}$ such that condition (23) holds, which means that the system is not stochastically stable. The switching modes and the sampled system state trajectories for this choice of fixed dwell times are shown in Figures 2 and 3, respectively.

If we set the fixed dwell times appropriately, then, using the LMI toolbox of Matlab, we find that there exist positive definite matrices $P_{i}$, $i\in S$, such that condition (23) holds. According to Theorem 12, system (1) is then stochastically stable. Figure 4 shows the switching signal, which switches between mode 1 and mode 3, with different symbols distinguishing the modes during the fixed dwell time from those during the random dwell time. Figure 5 shows that the sampled system state trajectories of both state components tend to the zero equilibrium.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by the National Natural Science Foundation of China [60874006 and 11401540], the Foundation of the Henan Department of Science and Technology [112300410194], and the Foundation of the Henan Educational Committee [12B120004].