Abstract

We discuss a stochastic functional differential equation under regime switching. We obtain a unique global solution of this system without the linear growth condition; furthermore, we prove its asymptotic ultimate boundedness. Using the ergodic property of the Markov chain, we give a sufficient condition for this system to be almost surely exponentially stable.

1. Introduction

Recently, many papers have been devoted to hybrid systems, with particular attention to how a system behaves when it is subject to environmental noise and regime switching. For a detailed treatment of this subject, [1] is a good reference.

In this paper we consider the following stochastic functional differential equation: The switching between these regimes is governed by a Markov chain  on the state space .  is defined by ; .  denotes the family of continuous functions from  to , which is a Banach space with the norm .  satisfies the local Lipschitz condition as follows.

Assumption A. For each integer , there is a positive number such that for all and those with .
Throughout this paper, unless otherwise specified, we let  be a complete probability space with a filtration  satisfying the usual conditions (i.e., it is right continuous and contains all P-null sets). Let ( ), , be the standard Brownian motion defined on this probability space. We also denote  by . Let  be a right-continuous Markov chain on the probability space taking values in a finite state space  with the generator  given by where . Here  is the transition rate from  to , and  if , while . We assume that the Markov chain  is independent of the Brownian motion ; furthermore,  and  are independent.
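Since the displayed formula is missing here, the following is a sketch of the standard generator of a right-continuous Markov chain matching the description above (cf. [1]); the symbols $\Gamma=(\gamma_{ij})_{N\times N}$, $S=\{1,2,\dots,N\}$, and $\Delta$ are our notation and may differ from the original:
\[
\mathbb{P}\{r(t+\Delta)=j \mid r(t)=i\} =
\begin{cases}
\gamma_{ij}\Delta + o(\Delta), & i \neq j,\\[2pt]
1+\gamma_{ii}\Delta + o(\Delta), & i = j,
\end{cases}
\qquad \Delta \downarrow 0,
\]
where $\gamma_{ij}\ge 0$ is the transition rate from $i$ to $j$ for $i\neq j$, and $\gamma_{ii} = -\sum_{j\neq i}\gamma_{ij}$.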
In addition, throughout this paper, let  denote the family of all positive real-valued functions  on  which are continuously twice differentiable in  and once in . If  for the following equation, there exists ; define an operator  from  to  by where . Here we should emphasize that [1, Page 305] the operator  (thought of as a single notation rather than acting on ) is defined on , although  is defined on .
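As the displayed formulas are missing here, the following is a sketch of the operator in the standard form given in [1, Page 305], under the assumption that (1.1) has the generic form $dx(t) = f(x_t,t,r(t))\,dt + g(x_t,t,r(t))\,dB(t)$; the symbols $f$, $g$, $V$, and $LV$ are our assumptions about the original notation, since the precise coefficients of (1.1) are not shown here:
\[
LV(\varphi,t,i) = V_t(\varphi(0),t,i) + V_x(\varphi(0),t,i)\,f(\varphi,t,i)
+ \tfrac12\operatorname{trace}\!\big[g^{T}(\varphi,t,i)\,V_{xx}(\varphi(0),t,i)\,g(\varphi,t,i)\big]
+ \sum_{j=1}^{N}\gamma_{ij}\,V(\varphi(0),t,j),
\]
for $\varphi \in C([-\tau,0];\mathbb{R}^n)$. This makes precise the remark that $LV$ is defined on $C([-\tau,0];\mathbb{R}^n)\times \mathbb{R}_+\times S$, although $V$ itself is defined on $\mathbb{R}^n\times\mathbb{R}_+\times S$.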

2. Global Solution

First, we are concerned with the existence of a global solution of the stochastic functional differential equation (1.1).

In order for a stochastic functional differential equation to have a global solution for any given initial data, the coefficients are usually required to satisfy the local Lipschitz condition and the linear growth condition [1, 2]. As a generalization of the linear growth condition, the one-sided linear growth condition is considered in [3, 4]. The authors of [5, 6] improved these results using a polynomial growth condition. After that, these conditions were studied for regime-switching systems [7–9].

In place of the linear growth condition or the one-sided linear growth condition, we impose the so-called polynomial growth condition on the function  in (1.1).

Assumption B. For each , there exist nonnegative constants and probability measures , on such that for any .

Theorem 2.1. Under Assumptions A and B, if , and  for , then for any given initial data  there almost surely exists a unique global solution  to (1.1) on .

Proof. Since the coefficients of (1.1) are locally Lipschitz, there is a unique maximal local solution  on , where  is the explosion time. In order to prove that this solution is global, we need to show that  a.s. Let  be sufficiently large such that . For each , we define the stopping time Clearly  is increasing as . Set ; if we can show that  a.s., then  a.s. for all . That is, to complete the proof it suffices to prove that, for any ,  as . If this conclusion is false, there is a pair of constants  and  such that So there exists an integer  such that
To prove the desired conclusion, for any , define a -function  by where  is a sequence of positive constants. Applying the generalized Itô formula, where  is computed as
Let . For any , we get
Therefore,
According to Assumption B, the first term in (2.7)
By the Young inequality, and noting that , it is obvious that where in the first inequality we have used the elementary inequality: for any  and , . Therefore we have where Using the elementary inequality: for any  and , , we obtain , and also we have Therefore, Noting that  and , by the boundedness property of polynomial functions, there exists a positive constant  such that . Taking expectations on both sides of (2.6) leads to and from (2.12) and (2.15), we have where we denote .
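For the reader's convenience, one common form of the elementary (Young-type, weighted arithmetic-geometric mean) inequality used in estimates of this kind is the following sketch; the exact exponents used in the original, which are not shown here, may differ:
\[
a^{\theta}\,b^{1-\theta} \le \theta a + (1-\theta) b, \qquad a,b\ge 0,\ \theta\in[0,1].
\]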
By the Fubini theorem and a substitution technique, we may compute that Similarly, Therefore we may rewrite (2.17) as where  is bounded and is independent of .
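The "substitution technique" referred to here is typically the following estimate for delay terms, stated as a sketch with assumed notation: $\mu$ is a probability measure on $[-\tau,0]$, $\xi$ is the initial segment, and $\kappa>0$ is a generic exponent (none of these symbols are shown in the original):
\[
\int_0^t\!\!\int_{-\tau}^{0} |x(s+\theta)|^{\kappa}\, d\mu(\theta)\, ds
= \int_{-\tau}^{0}\!\!\int_{0}^{t} |x(s+\theta)|^{\kappa}\, ds\, d\mu(\theta)
\le \int_{-\tau}^{t} |x(s)|^{\kappa}\, ds
\le \int_{-\tau}^{0} |\xi(s)|^{\kappa}\, ds + \int_{0}^{t} |x(s)|^{\kappa}\, ds .
\]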
By the definition of  or , let  for ; by (2.6), noting that for every , there is some  such that  equals either  or , hence Letting  implies that So we must have  a.s., as required. The proof is complete.

3. Asymptotic Boundedness

Theorem 2.1 shows that, under some reasonable conditions, the solution of SDE (1.1) exists globally and will not explode. In the study of stochastic systems, stochastic ultimate boundedness is a more important topic than nonexplosion of the solution: it means that the solution of the system remains within a finite bound in the future. Here we examine the 2pth moment boundedness.

Lemma 3.1. Under the conditions of Theorem 2.1, for any , there exists a constant  independent of the initial data  such that the global solution  of SDE (1.1) has the property that
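The elided property is typically a moment bound of the form sketched below; this is an assumption about the original statement, which may instead involve a bound uniform in $t$ or a different constant:
\[
\limsup_{t\to\infty} \mathbb{E}\,|x(t)|^{2p} \le C .
\]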

Proof. First, Theorem 2.1 indicates that the solution of (1.1) remains in  for all  with probability 1.
Applying the Itô formula to  and taking expectations yields Here  is defined as before in (2.7).
Now we consider the function
Similarly to the proof of Theorem 2.1, we know that (3.3) is bounded above; that is, there exists a constant  such that . Therefore, (3.2) implies that
We have the following calculus transformation: Similarly,
We therefore have from (3.4)
Clearly, . Denote ; therefore, which yields This means that the solution is bounded in the 2pth moment; stochastic ultimate boundedness follows directly.

Definition 3.2. The solutions of SDE (1.1) are said to be stochastically ultimately bounded if, for any , there is a positive constant  such that the solution of SDE (1.1) with any positive initial value has the property that
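In commonly used notation (a sketch; the symbols $H$, $\varepsilon$, and $\xi$ are assumed here, since the displayed formula is not shown), the defining property reads:
\[
\limsup_{t\to\infty} \mathbb{P}\{|x(t;\xi)| > H\} < \varepsilon .
\]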

Theorem 3.3. The solution of (1.1) is stochastically ultimately bounded under the conditions of Lemma 3.1; that is, for any , there is a positive constant  such that, for any positive initial value, the solution of (1.1) has the property that

Proof. This can easily be verified by Chebyshev's inequality and Lemma 3.1, choosing  sufficiently large, because of the following:
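The Chebyshev (Markov) inequality step referred to here can be sketched as follows, writing $C$ for the bound from Lemma 3.1 (notation assumed):
\[
\limsup_{t\to\infty}\mathbb{P}\{|x(t)| > H\}
\le \limsup_{t\to\infty}\frac{\mathbb{E}\,|x(t)|^{2p}}{H^{2p}}
\le \frac{C}{H^{2p}} < \varepsilon
\qquad\text{whenever } H > (C/\varepsilon)^{1/(2p)} .
\]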

4. Stabilization by Noise

From Sections 2 and 3, we know that, under the conditions  and , the Brownian noise can suppress the potential explosion of the solution and guarantee that this global solution is bounded in the sense of the 2pth moment. Clearly, the boundedness results depend only on the choice of  under the condition  and are independent of . This implies that the noise  plays no role in guaranteeing the existence and boundedness of the global solution to (1.1). This section is devoted to the effect of the noise ; we will show that the system (1.1) is exponentially stable if  for some sufficiently large .

For the purpose of the stability study, we impose the following general polynomial growth condition.

Assumption C. For each , there exist nonnegative constants and probability measures , on such that for any .
Clearly, Assumption C is stronger than the one-sided polynomial growth condition in Assumption B. Therefore, Theorem 2.1 and Lemma 3.1 still hold under Assumption C.
In [10, Page 165], for a given nonlinear SDE with Markovian switching, any solution starting from a nonzero state remains nonzero. But for the system (1.1) the drift coefficient is a functional, so we will prove this nonzero property under Assumption C.

Lemma 4.1. Let  be the global solution of (1.1). Under Assumption C, if  and , then for any nonzero initial data , ; that is, almost every sample path of any solution starting from a nonzero state will never reach the origin.

Proof. For any initial data  satisfying , choose a sufficiently large positive number  such that . For each integer , define the stopping time Clearly,  is increasing as , and  a.s. If we can show that  a.s., the desired result on  follows. This is equivalent to proving that, for any ,  as .
To prove this statement, define a -function where  and . Applying the Itô formula and taking expectations yields where  is defined as for any . By Assumption C and the Young inequality, the first term of (4.8) can be written as Substituting (4.9) into (4.8) gives where Noting (2.9) and , and using the boundedness property of polynomial functions, there exists a constant  such that .
Furthermore, we may estimate that It therefore follows that
We know that which implies that , as required. The proof is complete.

This lemma shows that almost every sample path of any solution of (1.1) starting from a nonzero state will never reach the origin. Because of this nice property, the Lyapunov functions we choose need not be defined globally but only in a deleted neighborhood of the origin.

In particular, the hybrid system may switch from any regime to any other regime, so it is reasonable to assume that the Markov chain  is irreducible. This condition implies that the irreducible Markov chain has a unique stationary probability distribution , which can be determined by solving the following linear equation subject to  and  for any , where  is the generator .
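The linear system referred to here is the standard one for the stationary distribution $\pi=(\pi_1,\dots,\pi_N)$ of an irreducible Markov chain with generator $\Gamma$ (notation assumed, as the displayed equations are not shown):
\[
\pi\,\Gamma = 0, \qquad \sum_{j=1}^{N}\pi_j = 1, \qquad \pi_j > 0 \ \ \text{for all } j\in S .
\]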

Theorem 4.2. Suppose the Markov chain  is irreducible. Under Assumptions A and C, if  for  and , the solution  of SDE (1.1) with any initial data  satisfying  has the property where In particular, the nonlinear hybrid system (1.1) is almost surely exponentially stable if

Proof. By Theorem 2.1 and Lemma 4.1, (1.1) almost surely admits a global solution  for all , and  almost surely. Applying the Itô formula to the function  leads to Define ; clearly  is a continuous local martingale with the quadratic variation For any , choose  such that , and for each positive integer , the exponential martingale inequality yields
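For reference, the exponential martingale inequality used here states that, for a continuous local martingale $M$ with $M(0)=0$, quadratic variation $\langle M\rangle$, and any positive constants $\alpha$, $\beta$, $T$ (see, e.g., [1]),
\[
\mathbb{P}\Big\{\sup_{0\le t\le T}\Big[M(t)-\frac{\alpha}{2}\langle M\rangle(t)\Big] > \beta\Big\} \le e^{-\alpha\beta}.
\]
The particular choices of $\alpha$, $\beta$, and $T$ made in the original estimate are not recoverable here.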
Since , by the Borel-Cantelli lemma, there exists an  with  such that, for any , there exists an integer  such that, when  and , This, together with Assumption C, where we denote  and , and noting the definition of  in (4.17), gives where
Applying the strong law of large numbers [2, Page 12] to the Brownian motion, we therefore have Moreover, letting , by the ergodic property of the Markov chain, we have Combining (4.25) and (4.26), it follows from (4.23) that Thus the assertion (4.16) follows.
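The two limit facts invoked here are, respectively, the strong law of large numbers for Brownian motion and the ergodic theorem for the irreducible Markov chain; with assumed notation $h:S\to\mathbb{R}$ for the integrand (not shown in the original), they read
\[
\lim_{t\to\infty}\frac{B(t)}{t}=0 \quad \text{a.s.}, \qquad
\lim_{t\to\infty}\frac{1}{t}\int_0^t h(r(s))\,ds = \sum_{j=1}^{N}\pi_j\,h(j) \quad \text{a.s.}
\]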
Clearly, if , system (1.1) is almost surely exponentially stable. The proof is complete.

Remark 4. This result is a generalization of Theorem 4.2 in [6]. The author considers the functional differential equation, for example, with ; hence, we choose  satisfying , and the stochastic functional system is almost surely exponentially stable. The numerical simulation can be observed in Figure 1.

Example 4.3. Let us assume that the Markov chain is on the state space with the generator where and . It is easy to see that the Markov chain has its stationary probability distribution given by
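Since the generator and the resulting distribution are not displayed here, the following sketch records the standard computation for a two-state chain; the parameter names $\gamma_{12},\gamma_{21}>0$ are assumptions and may differ from the entries used in the original example. For $\Gamma=\begin{pmatrix}-\gamma_{12} & \gamma_{12}\\ \gamma_{21} & -\gamma_{21}\end{pmatrix}$, solving $\pi\Gamma=0$ with $\pi_1+\pi_2=1$ gives
\[
\pi = (\pi_1,\pi_2) = \Big(\frac{\gamma_{21}}{\gamma_{12}+\gamma_{21}},\ \frac{\gamma_{12}}{\gamma_{12}+\gamma_{21}}\Big).
\]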
As pointed out in Section , we may regard SDE (1.1) as the result of the following two equations:
Noting that  has the form, that is, for a given state 1,  may be written as with  when . Applying condition (4.17) with  to the system (4.34) yields  satisfying , which shows that the trajectory of (4.34) does not satisfy the conditions of Theorem 4.2 in [6], although it has a global solution.
However, as the result of the Markovian switching, the overall behavior, that is, SDE (1.1), will be almost surely exponentially stable as long as namely, the transition rate from  to  is less than the transition rate from  to , which ensures that

Example 4.4. Consider another stochastic differential equation with Markovian switching, where  is a Markov chain taking values in . Here the subsystem of (1.1) is written as three different equations: where , , , , ; where , , ; where , , , , . We compute

Case 1. Let the generator of the Markov chain be By solving the linear equation subject to and for any , we obtain the unique stationary (probability) distribution Then

Case 2. Suppose the generator of the Markov chain  is
By solving the linear equation subject to and for any , we obtain the unique stationary distribution
Then Therefore, by Theorem 4.2, system (1.1) is almost surely exponentially stable in Case 1.
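As a numerical aid (not part of the original paper), the following Python sketch solves the stationary-distribution system $\pi\Gamma=0$, $\sum_j\pi_j=1$ for a given generator matrix; the helper name stationary_distribution and the example generator entries are placeholders, since the actual generators of Cases 1 and 2 are not recoverable here.

import numpy as np

def stationary_distribution(G):
    # Hypothetical helper (not from the paper): solve pi @ G = 0 with sum(pi) = 1.
    n = G.shape[0]
    # Stack the balance equations G^T pi = 0 with the normalization sum(pi) = 1.
    A = np.vstack([G.T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Placeholder 3-state generator (each row sums to zero); the paper's entries are not shown here.
G = np.array([[-2.0,  1.0,  1.0],
              [ 3.0, -4.0,  1.0],
              [ 1.0,  2.0, -3.0]])
print(stationary_distribution(G))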
We can see the impact of the Markov chain . The distribution ( ) of  plays a very important role, which, combined with , determines whether system (1.1) is almost surely exponentially stable. If  spends enough time in the “good” states (the states where  for some ), then even if there exist some “bad” states (the states where  for some ), system (1.1) will still be almost surely exponentially stable.

Acknowledgment

This work was partially supported by the National Natural Science Foundation of China (11101183, 11171056, 11171081, and 11271157) and the 985 Program of Jilin University. We also wish to thank the High Performance Computing Center of Jilin University.