Abstract
The problems of almost sure (a.s.) stability and a.s. stabilization are investigated for hybrid stochastic systems (HSSs) with time-varying delays, where the delays in the drift part and in the diffusion part may differ. Based on the nonnegative semimartingale convergence theorem, Hölder's inequality, Doob's martingale inequality, and Chebyshev's inequality, some sufficient conditions are proposed to guarantee that the underlying nonlinear hybrid stochastic delay systems (HSDSs) are a.s. stable. With these conditions, the a.s. stabilization problem for a class of nonlinear HSDSs is addressed by designing linear state feedback controllers, which are obtained in terms of the solutions to a set of linear matrix inequalities (LMIs). Two numerical simulation examples are given to show the usefulness of the derived results.
1. Introduction
In the past decades, the problems of stability analysis and stabilization synthesis for stochastic systems have received significant attention, and many results have been reported; see, for example, [1–7] and the references therein. These problems can be addressed not only in the moment sense [8–10] but also in the a.s. sense [11, 12]. In recent years, much interest has focused on a.s. stability problems for stochastic systems; see, for example, [8, 13] and the references therein.
It is well known that many dynamical systems have variable structures subject to abrupt changes in their parameters, which are usually caused by abrupt phenomena such as component failures or repairs, changing subsystem interconnections, and abrupt environmental disturbances. The HSSs, which in this paper are regarded as stochastic systems with Markovian switching, have been used to model such phenomena; see, for example, [14–18] and the references therein. An HSS combines a continuous state component with a discrete component that is a Markov chain taking values in a finite state space. One of the important issues in the study of HSSs is the analysis of stability. In particular, a stable HSS need not have every subsystem stable; in other words, even if all the subsystems are unstable, the HSS may still be stable as a result of the Markovian switching. This reveals that the Markovian jumps play an important role in the stability analysis of HSSs. Therefore, in the past few decades, a great deal of literature has appeared on the topic of stability analysis and stabilization synthesis of HSSs; see, for example, [2, 13, 14, 19, 20].
On the other hand, time delays are frequently encountered in a variety of dynamic systems, such as nuclear reactors, chemical engineering systems, biological systems, and population dynamics models. They are often a source of instability and poor performance. The problems of stability analysis and stabilization synthesis of HSDSs are therefore of great importance and interest. The classical efforts can be classified into two categories, namely, moment-sense criteria (see, e.g., [21–23]) and a.s.-sense criteria (see, e.g., [24, 25]). Among the existing results, in [25], based on the techniques proposed in [26], which were developed from the results of [11], a.s. stability and stabilization of HSDSs were studied. In [24], the a.s. stability analysis problem for a general class of HSDSs was solved by extending the results in [25] to HSSs with mode-dependent interval delays. However, to the authors' best knowledge, when different time-varying delays in the drift part and in the diffusion part are considered, the a.s. stability analysis and stabilization synthesis problems for nonlinear HSDSs have not been adequately addressed and remain an interesting and challenging research topic. This situation motivates the present study.
In this paper, we are concerned with the a.s. stability analysis and stabilization synthesis problems for HSDSs. For the stability problem, the purpose is to develop conditions under which the underlying systems are a.s. stable. Following the same idea, linear state feedback controllers are designed such that the resulting nonlinear or linear closed-loop systems are a.s. stable. Explicit expressions for the desired state feedback controllers are given by means of the solutions to a set of LMIs. Two numerical simulation examples are exploited to verify the effectiveness of the theoretical results. The contribution of this paper is twofold: first, different time-varying delays in the drift part and in the diffusion part are considered for nonlinear HSDSs; second, for a class of nonlinear HSDSs, the stabilization synthesis problem is investigated in the a.s. sense.
This paper is organized as follows. In Section 2, we formulate some preliminaries. In Section 3, we investigate the a.s. stability for the hybrid stochastic systems with time-varying delays. In Section 4, the results of Section 3 are then applied to establish a sufficient criterion for the stabilization. In Section 5, two examples are discussed for illustration. Finally, conclusions are drawn in Section 6.
Notation 1. The notation used here is fairly standard unless otherwise specified. $\mathbb{R}^n$ and $\mathbb{R}^{n \times m}$ denote, respectively, the $n$-dimensional Euclidean space and the set of all $n \times m$ real matrices, and let $\mathbb{R}_+ = [0, \infty)$. $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t \ge 0}, \mathbb{P})$ is a complete probability space with a natural filtration $\{\mathcal{F}_t\}_{t \ge 0}$ satisfying the usual conditions (i.e., it is right continuous, and $\mathcal{F}_0$ contains all $\mathbb{P}$-null sets). If $a$, $b$ are real numbers, then $a \vee b$ stands for the maximum of $a$ and $b$, and $a \wedge b$ the minimum of $a$ and $b$. $A^T$ represents the transpose of the matrix $A$. $\lambda_{\max}(A)$ and $\lambda_{\min}(A)$ denote the largest and smallest eigenvalues of $A$, respectively. $|\cdot|$ denotes the Euclidean norm in $\mathbb{R}^n$. $\mathbb{E}\{\cdot\}$ stands for the mathematical expectation. $\mathbb{P}\{\cdot\}$ means the probability. $C([-\tau, 0]; \mathbb{R}^n)$ denotes the family of all continuous $\mathbb{R}^n$-valued functions $\varphi$ on $[-\tau, 0]$ with the norm $\|\varphi\| = \sup_{-\tau \le s \le 0} |\varphi(s)|$. $C^b_{\mathcal{F}_0}([-\tau, 0]; \mathbb{R}^n)$ is the family of all $\mathcal{F}_0$-measurable bounded $C([-\tau, 0]; \mathbb{R}^n)$-valued random variables $\xi = \{\xi(s) : -\tau \le s \le 0\}$. $\mathcal{K}$ denotes the family of continuous functions $\kappa : \mathbb{R}_+ \to \mathbb{R}_+$ such that $\kappa(0) = 0$ and $\kappa$ is strictly increasing.
2. Problem Formulation
In this paper, let $r(t)$, $t \ge 0$, be a right-continuous Markov chain on the probability space taking values in a finite state space $S = \{1, 2, \ldots, N\}$ with generator $\Gamma = (\gamma_{ij})_{N \times N}$ given by
$$\mathbb{P}\{r(t + \Delta) = j \mid r(t) = i\} = \begin{cases} \gamma_{ij}\Delta + o(\Delta) & \text{if } i \ne j, \\ 1 + \gamma_{ii}\Delta + o(\Delta) & \text{if } i = j, \end{cases} \tag{2.1}$$
where $\Delta > 0$, and $\gamma_{ij} \ge 0$ is the transition rate from mode $i$ to mode $j$ if $i \ne j$, while $\gamma_{ii} = -\sum_{j \ne i} \gamma_{ij}$. Assume that the Markov chain $r(\cdot)$ is independent of the Brownian motion $w(\cdot)$. It is known that almost all sample paths of $r(t)$ are right-continuous step functions with a finite number of simple jumps in any finite subinterval of $\mathbb{R}_+$.
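Such a right-continuous chain can be simulated by drawing exponential holding times from the diagonal of the generator and jumping according to the embedded chain. The following is a minimal Python sketch; the two-mode generator used at the bottom is hypothetical, not taken from the paper.

```python
import numpy as np

def simulate_markov_chain(gamma, i0, t_end, rng):
    """Simulate a right-continuous continuous-time Markov chain r(t)
    with generator matrix `gamma` (off-diagonal entries are the
    transition rates).  Returns the jump times and the mode held on
    each inter-jump interval."""
    times, modes = [0.0], [i0]
    t, i = 0.0, i0
    while t < t_end:
        rate = -gamma[i, i]                  # total exit rate of mode i
        if rate <= 0:                        # absorbing mode: no more jumps
            break
        t += rng.exponential(1.0 / rate)     # exponential holding time
        if t >= t_end:
            break
        probs = gamma[i].copy()
        probs[i] = 0.0
        probs /= rate                        # embedded-chain jump probabilities
        i = rng.choice(len(probs), p=probs)
        times.append(t)
        modes.append(i)
    return np.array(times), np.array(modes)

# hypothetical two-mode generator: rate 1 from mode 0 to 1, rate 2 back
gamma = np.array([[-1.0, 1.0],
                  [2.0, -2.0]])
rng = np.random.default_rng(0)
times, modes = simulate_markov_chain(gamma, 0, 10.0, rng)
```

The returned `(times, modes)` pair represents the sample path as a right-continuous step function, matching the description above.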
Let us consider a class of stochastic systems with time-varying delays:
$$dx(t) = f(x(t), x(t - \delta_1(t)), t, r(t))\,dt + g(x(t), x(t - \delta_2(t)), t, r(t))\,dw(t), \quad t \ge 0, \tag{2.2}$$
with initial data $x_0 = \xi \in C^b_{\mathcal{F}_0}([-\tau, 0]; \mathbb{R}^n)$ and $r(0) = i_0 \in S$, where $\tau$ is a positive constant and $\delta_1(t)$ and $\delta_2(t)$ are nonnegative differentiable functions which denote the time-varying delays and satisfy
$$0 \le \delta_i(t) \le \tau, \quad i = 1, 2. \tag{2.3}$$
The nonlinear functions $f$ and $g$ satisfy the local Lipschitz condition in $(x, y)$; that is, for any $h > 0$, there is $K_h > 0$ such that
$$|f(x, y, t, i) - f(\bar{x}, \bar{y}, t, i)| \vee |g(x, y, t, i) - g(\bar{x}, \bar{y}, t, i)| \le K_h(|x - \bar{x}| + |y - \bar{y}|) \tag{2.4}$$
for all $t \ge 0$, $i \in S$, and $|x| \vee |y| \vee |\bar{x}| \vee |\bar{y}| \le h$, and, moreover, $\sup_{t \ge 0,\, i \in S}\{|f(0, 0, t, i)| \vee |g(0, 0, t, i)|\} \le K_0$ with some nonnegative number $K_0$.
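A hybrid system with different time-varying delays in the drift and diffusion parts can be simulated with an Euler–Maruyama scheme that looks up a separately delayed state for each part. The following Python sketch is scalar and uses hypothetical coefficients, delay functions, and a frozen mode function; none of them come from the paper's examples.

```python
import numpy as np

def euler_maruyama_hsds(f, g, delta1, delta2, modes_fn, x0, n_steps, dt, rng):
    """Euler-Maruyama scheme for the scalar hybrid delay SDE
        dx(t) = f(x(t), x(t - delta1(t)), t, r(t)) dt
              + g(x(t), x(t - delta2(t)), t, r(t)) dw(t),
    with a constant initial segment x(s) = x0 for s <= 0."""
    x = np.empty(n_steps + 1)
    x[0] = x0

    def delayed(k, delay):
        j = k - int(round(delay / dt))       # grid index of x(t - delay)
        return x[j] if j >= 0 else x0        # constant history before t = 0

    for k in range(n_steps):
        t = k * dt
        i = modes_fn(t)                      # current mode r(t)
        xd1 = delayed(k, delta1(t))          # drift-delayed state
        xd2 = delayed(k, delta2(t))          # diffusion-delayed state
        dw = rng.normal(0.0, np.sqrt(dt))
        x[k + 1] = x[k] + f(x[k], xd1, t, i) * dt + g(x[k], xd2, t, i) * dw
    return x

# hypothetical scalar coefficients and delays (illustration only)
f = lambda x, y, t, i: (-3.0 if i == 0 else -2.0) * x + 0.2 * y
g = lambda x, y, t, i: 0.3 * y
delta1 = lambda t: 0.05 * (1.0 + np.sin(t))   # time-varying drift delay
delta2 = lambda t: 0.05                        # constant diffusion delay
rng = np.random.default_rng(1)
path = euler_maruyama_hsds(f, g, delta1, delta2, lambda t: 0, 1.0, 5000, 1e-3, rng)
```

The `modes_fn` argument can be replaced by a simulated Markov chain sample path to obtain a genuine hybrid trajectory.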
Remark 2.1. It should be pointed out that system (2.2) can be seen as a special case of systems with multiple time-varying delays, which are of the form
$$dx(t) = f(x(t), x(t - \delta_1(t)), \ldots, x(t - \delta_k(t)), t, r(t))\,dt + g(x(t), x(t - \delta_1(t)), \ldots, x(t - \delta_k(t)), t, r(t))\,dw(t). \tag{2.5}$$
But it is easy to see that the results in this paper can be applied to the systems (2.5) by the similar assumption in (2.4).
Let $C^{2,1}(\mathbb{R}^n \times \mathbb{R}_+ \times S; \mathbb{R}_+)$ denote the family of all nonnegative functions $V(x, t, i)$ on $\mathbb{R}^n \times \mathbb{R}_+ \times S$ that are twice continuously differentiable in $x$ and once in $t$. If $V \in C^{2,1}(\mathbb{R}^n \times \mathbb{R}_+ \times S; \mathbb{R}_+)$, define an operator $LV$ associated with (2.2) from $\mathbb{R}^n \times \mathbb{R}^n \times \mathbb{R}^n \times \mathbb{R}_+ \times S$ to $\mathbb{R}$ by
$$LV(x, y, z, t, i) = V_t(x, t, i) + V_x(x, t, i)f(x, y, t, i) + \frac{1}{2}\operatorname{trace}\left[g^T(x, z, t, i)V_{xx}(x, t, i)g(x, z, t, i)\right] + \sum_{j=1}^{N}\gamma_{ij}V(x, t, j). \tag{2.6}$$
Remark 2.2. $LV$ is thought of as a single notation and is defined on $\mathbb{R}^n \times \mathbb{R}^n \times \mathbb{R}^n \times \mathbb{R}_+ \times S$, while $V$ is defined on $\mathbb{R}^n \times \mathbb{R}_+ \times S$.
Definition 2.3. The system (2.2) is said to be a.s. stable if, for all $\xi \in C^b_{\mathcal{F}_0}([-\tau, 0]; \mathbb{R}^n)$ and $i_0 \in S$,
$$\lim_{t \to \infty} x(t; \xi, i_0) = 0 \quad \text{a.s.}$$
3. Main Results
Theorem 3.1. Assume that there exist nonnegative functions $V \in C^{2,1}(\mathbb{R}^n \times \mathbb{R}_+ \times S; \mathbb{R}_+)$ and $w_1$, $w_2$ satisfying conditions (3.1) and (3.2), where $\lambda_1$ and $\lambda_2$ are positive numbers satisfying (3.3). Then system (2.2) is almost surely stable.
To prove this theorem, let us present the following lemmas.
Lemma 3.2 (see [24, 25]). If $V \in C^{2,1}(\mathbb{R}^n \times \mathbb{R}_+ \times S; \mathbb{R}_+)$, then for any $t \ge 0$, the generalized Itô formula is given as
$$V(x(t), t, r(t)) = V(x(0), 0, r(0)) + \int_0^t LV(x(s), x(s - \delta_1(s)), x(s - \delta_2(s)), s, r(s))\,ds + M(t),$$
where the local martingale $M(t)$ and the associated martingale measure are defined as, for example, in [25].
Lemma 3.3 (see [27]). Let $A(t)$ and $U(t)$ be two continuous adapted increasing processes on $t \ge 0$ with $A(0) = U(0) = 0$ a.s., let $M(t)$ be a real-valued continuous local martingale with $M(0) = 0$ a.s., and let $\zeta$ be a nonnegative $\mathcal{F}_0$-measurable random variable such that $\mathbb{E}\zeta < \infty$. Denote $X(t) = \zeta + A(t) - U(t) + M(t)$ for all $t \ge 0$. If $X(t)$ is nonnegative, then
$$\Big\{\lim_{t \to \infty} A(t) < \infty\Big\} \subset \Big\{\lim_{t \to \infty} X(t) \text{ exists and is finite}\Big\} \cap \Big\{\lim_{t \to \infty} U(t) < \infty\Big\} \quad \text{a.s.},$$
where $C \subset D$ a.s. means $\mathbb{P}(C \cap D^c) = 0$. In particular, if $\lim_{t \to \infty} A(t) < \infty$ a.s., then, with probability one,
$$\lim_{t \to \infty} X(t) \text{ exists and is finite}, \qquad \lim_{t \to \infty} U(t) < \infty.$$
That is, all three processes $X(t)$, $A(t)$, and $U(t)$ converge to finite random variables with probability one.
Lemma 3.4 (see [25]). Under the conditions of Theorem 3.1, for any initial data $\xi$ and $i_0 \in S$, (2.2) has a unique global solution.
Proof. Fix any initial data $\xi$, $i_0$, and let $h_0 > 0$ be a bound for $\|\xi\|$. For each integer $h \ge h_0$, define
$$f_h(x, y, t, i) = f\left(\frac{(|x| \wedge h)x}{|x|}, \frac{(|y| \wedge h)y}{|y|}, t, i\right),$$
where we set $(|x| \wedge h)x/|x| = 0$ when $x = 0$. Define $g_h$ similarly. By (2.4), we can observe that $f_h$ and $g_h$ satisfy the global Lipschitz condition and the linear growth condition. By the known existence-and-uniqueness theorem, there exists a unique global solution $x_h(t)$ on $t \ge -\tau$ to the equation
$$dx_h(t) = f_h(x_h(t), x_h(t - \delta_1(t)), t, r(t))\,dt + g_h(x_h(t), x_h(t - \delta_2(t)), t, r(t))\,dw(t)$$
with initial data $x_0 = \xi$ and $r(0) = i_0$.
Define the stopping time
$$\sigma_h = \inf\{t \ge 0 : |x_h(t)| \ge h\},$$
where we set $\inf \emptyset = \infty$ as usual. It is easy to show that $x_h(t) = x_{h+1}(t)$ if $0 \le t \le \sigma_h$, which implies that $\sigma_h$ is increasing in $h$. Letting $\sigma_\infty = \lim_{h \to \infty} \sigma_h$, the property above also enables us to define $x(t)$ for $t \in [-\tau, \sigma_\infty)$ as $x(t) = x_h(t)$ if $-\tau \le t \le \sigma_h$.
It is clear that $x(t)$ is a unique solution of (2.2) for $t \in [-\tau, \sigma_\infty)$. To complete the proof, we only need to show that $\sigma_\infty = \infty$ a.s. By Lemma 3.2, we have that for any $t \ge 0$,
where the operator $LV_h$ is defined similarly to the way $LV$ was defined by (2.6). By the definitions of $f_h$ and $g_h$, if $0 \le t \le \sigma_h$, we hence observe that
By the conditions of (3.1) and (3.2), we derive that
On the other hand,
This yields
Letting $h \to \infty$ and using (3.3), we obtain $\mathbb{P}\{\sigma_\infty \le t\} = 0$. Since $t$ is arbitrary, we must have $\sigma_\infty = \infty$ a.s. The proof is therefore complete.
Let us now begin to prove our main result.
Proof. Let $w(x) = w_1(x) - w_2(x)$ for all $x \in \mathbb{R}^n$. Inequality (3.2) implies $w(x) > 0$ whenever $x \ne 0$. Fix any initial value $\xi$ and any initial state $i_0$, and for simplicity write $x(t; \xi, i_0) = x(t)$.
By Lemma 3.2 and condition (3.1), we have
Since , applying Lemma 3.3 we obtain that
Define as . Then, it is easy to see from (3.17) that
On the other hand, by (3.3) we have a.s. It is easy to find an integer such that a.s. because of . Furthermore, for any integer , we can define the stopping time
where as usual. Clearly, a.s. as . Moreover, for any given , there is such that for any .
It is straightforward to see from (3.16) that a.s.; then we claim that
The rest of the proof is carried out by contradiction. That is, assuming that (3.20) is false, we have
Furthermore, there exist and such that
where is a set of natural numbers and is a sequence of stopping times defined by
By the local Lipschitz condition (2.4), for any given , there exists such that
for all and .
For any , let ; by Hölder’s inequality and Doob’s martingale inequality, we compute
where is the indicator of set .
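For reference, the three classical inequalities invoked in this part of the argument (Chebyshev's inequality is applied just below) are, in their standard forms:

```latex
% Hölder's inequality, for p, q > 1 with 1/p + 1/q = 1:
\mathbb{E}\,|XY| \le \big(\mathbb{E}\,|X|^p\big)^{1/p}\big(\mathbb{E}\,|Y|^q\big)^{1/q}.
% Doob's martingale inequality, for a martingale M and p > 1:
\mathbb{E}\Big[\sup_{0 \le s \le T}|M(s)|^p\Big]
    \le \Big(\frac{p}{p-1}\Big)^{p}\,\mathbb{E}\,|M(T)|^p.
% Chebyshev's (Markov's) inequality, for c > 0 and p > 0:
\mathbb{P}\{|X| \ge c\} \le c^{-p}\,\mathbb{E}\,|X|^p.
```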
Since is continuous in , it must be uniformly continuous in the closed ball . For any given , we can choose such that whenever and . Furthermore, let us choose
By inequality (3.25) and Chebyshev’s inequality, we have
Meanwhile, we can also choose sufficiently small such that
And then, (3.27) and (3.28) yield
where .
In the following, we can obtain from (3.16) and (3.29) that
This is a contradiction. So there is an with such that
Finally, for any fixed in the underlying a.s. event, is bounded in . By the Bolzano–Weierstrass theorem, there is an increasing sequence such that converges to some with . Since $w(x) > 0$ whenever $x \ne 0$, we must have $w(x^*) = 0$ if and only if $x^* = 0$. This implies that the solution of (2.2) is a.s. stable, and the proof is therefore complete.
Remark 3.5. The techniques proposed in Theorem 3.1 can be used to deal with the a.s. stability problem for other HSDSs, such as the ones in [25]. In a very special case when for all and , it is easy to see that , and Theorem 3.1 is exactly Theorem 2.1 in [25]. Similarly, Theorem 2.2 in [25] can be generalized to system (2.2) as a LaSalle-type theorem (see [24, 26]) for HSSs with multiple time-varying delays.
4. Almost Sure Stabilization of Nonlinear HSDSs
Consider the following nonlinear HSDSs: where the coefficient matrices are known constant matrices with appropriate dimensions, and $w(t)$ represents a scalar Brownian motion (Wiener process) on $(\Omega, \mathcal{F}, \mathbb{P})$ that is independent of the Markov chain $r(t)$. The nonlinear functions satisfy the local Lipschitz condition and the following assumptions: where, for each $i \in S$, the bounding matrices are known constant matrices with appropriate dimensions, and the weighting matrices are positive definite.
In the sequel, we denote the matrix associated with the th mode by where the matrix could be , or .
As the given HSDS (4.1) is nonlinear, we consider here stabilizing the system only by a linear state feedback controller of the form
$$u(t) = K(r(t))x(t), \tag{4.5}$$
where $K_i = K(i)$, $i \in S$, are controller parameters to be designed.
Under the control law (4.5), the closed-loop system can be given as follows: The stabilization problem is therefore to design the matrices $K_i$ such that the closed-loop system (4.6) is a.s. stable. In order to guarantee the solvability of this problem, the following theorem is given.
Theorem 4.1. If there exist sequences of scalars , positive definite matrices , and matrices such that the following LMIs hold, where then the controlled system (4.6) is a.s. stable and the state feedback controller is determined by
Proof. Let and .
The operator has the form
So
where
By assumption 1, it is easy to see that we can choose and such that for all .
Noting that and , we can pre- and postmultiply (4.7) by , and using Schur complements, we can obtain
where
This implies
Let , and .
Clearly
Moreover, by (4.24) we further obtain
The required assertion now follows from Theorem 3.1.
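Since the concrete LMIs are stated above only symbolically, the following Python sketch gives a loose numerical illustration of the mode-wise Lyapunov reasoning that underlies such a design: it checks a candidate state feedback gain against a Lyapunov inequality for each mode, ignoring the delay and diffusion terms. The matrices, the naive gain construction, and the candidate Lyapunov matrix are all hypothetical; a real design would solve the paper's LMIs (e.g., with YALMIP) instead.

```python
import numpy as np

def is_neg_def(m, tol=1e-9):
    """Check that a (symmetrized) matrix is negative definite."""
    return bool(np.all(np.linalg.eigvalsh((m + m.T) / 2.0) < -tol))

# hypothetical two-mode data (not the paper's example)
A = [np.array([[0.5, 0.0], [0.0, -1.0]]), np.array([[-1.0, 0.3], [0.0, 0.2]])]
B = [np.eye(2), np.eye(2)]

# naive mode-wise feedback u = K_i x shifting each A_i + B_i K_i to -2I;
# this is a placeholder for the gains the LMIs would actually deliver
K = [-(A[i] + 2.0 * np.eye(2)) for i in range(2)]

P = np.eye(2)   # candidate common Lyapunov matrix
for i in range(2):
    Acl = A[i] + B[i] @ K[i]
    lyap = Acl.T @ P + P @ Acl      # Lyapunov inequality for mode i
    assert is_neg_def(lyap)          # closed-loop mode i passes the check
```

With mode-dependent Lyapunov matrices $P_i$, the per-mode check would also carry the coupling term from the generator, which is exactly what the LMI formulation packages together.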
If system (4.6) reduces to the linear HSDS of the form where , and are known constant matrices with appropriate dimensions, then the following corollary follows directly from Theorem 4.1.
Corollary 4.2. If there exist sequences of scalars , positive definite matrices , and matrices such that the following LMIs hold, where then the controlled system (4.19) is a.s. stable and the state feedback controller is determined by
Proof. Let and .
The operator has the form
So
where
It is easy to see that we can choose and such that for all .
Noting that and , we can pre- and postmultiply (4.7) by , and using Schur complements, we can obtain
where
This implies
Let , and .
Clearly
Moreover, by (4.24) we further obtain
The required assertion now follows from Theorem 3.1.
5. Examples
In this section we provide two examples to illustrate our results. In the following examples we assume that $w(t)$ is a scalar Brownian motion, $r(t)$ is a right-continuous Markov chain independent of $w(t)$ and taking values in $S$, and a fixed step size is used. By using the YALMIP toolbox, simulation results are shown in Figures 1–3. Figure 1 gives a portion of the state of Example 5.1 for clear display. Figure 2 simulates the numerical results for Example 5.1. The simulation results illustrate our theoretical analysis. Following Theorem 4.1, the simulation results for Example 5.2 can be found in Figure 3, which verify the desired results.
Example 5.1. Let
Consider scalar nonlinear HSDSs:
where
To examine the stability of system (5.2), we consider a Lyapunov function candidate as for . Then we have
By the elementary inequality for all , and , we see that inequality
holds for any , where .
From inequalities (5.4)–(5.5), we have
for all and . By , it is easy to see that ; then, we choose a constant such that , and hence the conditions of Theorem 3.1 are satisfied.
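Because the concrete coefficients of Example 5.1 are elided above, the following Python sketch only illustrates the kind of verification step used here: it checks numerically, on a grid, that a Lyapunov-type bound $LV(x, y) \le -\lambda_1 x^2 + \lambda_2 y^2$ with $\lambda_1 > \lambda_2$ holds for hypothetical scalar drift and diffusion coefficients and $V(x) = x^2$ (for which the Markovian-switching sum in the generator vanishes, since $V$ is mode-independent).

```python
import numpy as np

# hypothetical scalar coefficients (illustration only, not Example 5.1)
f = lambda x, y: -4.0 * x + y          # drift
g = lambda x, y: 0.5 * y               # diffusion

def LV(x, y):
    """Generator applied to V(x) = x^2 for a scalar system; the
    mode-coupling sum is zero because V does not depend on the mode."""
    return 2.0 * x * f(x, y) + g(x, y) ** 2

lam1, lam2 = 7.0, 1.25                 # candidate rates with lam1 > lam2
xs = np.linspace(-5.0, 5.0, 201)
X, Y = np.meshgrid(xs, xs)
ok = np.all(LV(X, Y) <= -lam1 * X**2 + lam2 * Y**2 + 1e-9)
```

Here the check succeeds because $2xy \le x^2 + y^2$ gives $LV(x, y) = -8x^2 + 2xy + 0.25y^2 \le -7x^2 + 1.25y^2$, mirroring how the elementary inequality is used in the example above.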
Example 5.2. Let
Consider scalar nonlinear closed-loop HSDSs:
with
, , , , , , , , , , , .
By Theorem 4.1 we can find the feasible solution for the a.s. stability.
6. Conclusions
In this paper, we have investigated the a.s. stability analysis and stabilization synthesis problems for nonlinear HSDSs. Some sufficient conditions have been given to guarantee that the resulting systems are a.s. stable. Under these conditions, the a.s. stabilization problem for a class of nonlinear HSDSs has been solved in terms of the solutions to a set of LMIs. Finally, the results of this paper have been demonstrated by two numerical simulation examples.
Acknowledgment
This work is supported in part by the National Natural Science Foundation of P.R. China (no. 60974030).