Abstract and Applied Analysis

Volume 2012, Article ID 241702, 26 pages

http://dx.doi.org/10.1155/2012/241702

## Stochastic Delay Logistic Model under Regime Switching

School of Mathematical Science, Anhui University, Hefei, Anhui 230039, China

Received 2 April 2012; Accepted 14 June 2012

Academic Editor: Elena Braverman

Copyright © 2012 Zheng Wu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

This paper is concerned with a delay logistic model under regime-switching diffusion in a random environment. By using the generalized Itô formula, Gronwall's inequality, and Young's inequality, some sufficient conditions for the existence of global positive solutions and for stochastically ultimate boundedness are obtained. The relationships between stochastic permanence and extinction, as well as asymptotic estimations of solutions, are investigated by virtue of the Lyapunov function technique, the M-matrix method, and Chebyshev's inequality. Finally, an example is given to illustrate the main results.

#### 1. Introduction

The delay differential equation has been used to model the population growth of certain species and is known as the delay logistic equation. There is an extensive literature on the dynamics of this delay model; here we mention only Gopalsamy [1], Kolmanovskiĭ and Myshkis [2], and Kuang [3], among many others.

In (1.1), the state denotes the population size of the species. Naturally, we focus on positive solutions and also require the solutions not to explode in finite time. To guarantee positive solutions without explosion (i.e., the existence of global positive solutions), it is generally assumed that , , and (see [4] and the references cited therein).

On the other hand, population growth is often subject to environmental noise, which may change the dynamical behavior of solutions significantly [5, 6]. It is therefore necessary to reveal how the noise affects the dynamics of solutions of the delay population model. First of all, let us consider one type of environmental noise, namely, white noise. Indeed, many authors have recently discussed population systems subject to white noise [7–9]. Recall that the parameter in (1.1) represents the intrinsic growth rate of the population. In practice, we usually estimate it by an average value plus an error term. According to the well-known central limit theorem, the error term follows a normal distribution. In mathematical terms, we can therefore replace the rate by , where is a white noise (i.e., is a Brownian motion) and is a positive number representing the intensity of the noise. As a result, (1.1) becomes a stochastic differential equation (SDE, in short). We refer to [4] for more details.
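As a quick numeric illustration of this modeling step (the values of the rate `a` and the intensity `sigma` below are hypothetical, not from the paper), the time average of the perturbed rate over a long window returns to the average value `a`, since the accumulated Brownian noise grows much more slowly than time:

```python
import math
import random

random.seed(0)

# Illustrative values only (not from the paper): intrinsic rate a is
# perturbed by white noise of intensity sigma, sampled on a grid of step dt.
a, sigma, dt, T = 1.5, 0.4, 0.01, 100.0
n = int(T / dt)

# Brownian increments dB ~ N(0, dt); the "measured" rate on each step is
# a + sigma * dB/dt, so its time average over [0, T] is a + sigma * B(T)/T.
B_T = sum(random.gauss(0.0, math.sqrt(dt)) for _ in range(n))
avg_rate = a + sigma * B_T / T

print(round(avg_rate, 3))
```

Since the standard deviation of `B(T)/T` is only `1/sqrt(T)`, the average rate concentrates near `a` as the window grows, which is why the perturbation is modeled around the mean growth rate.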

To our knowledge, most attention to environmental noise has been paid to white noise ([10–14] and the references cited therein). However, another type of environmental noise, namely, colored noise (also called telegraph noise), has also been studied by many authors (see [15–19]). In this context, telegraph noise can be described as a random switching between two or more environmental regimes, which differ in factors such as nutrition or rainfall [20, 21]. Usually, the switching between different environments is memoryless, and the waiting time for the next switch has an exponential distribution. This indicates that we may model the random environments and other random factors in the system by a continuous-time Markov chain with a finite state space . Therefore, the stochastic delay logistic equation (1.3) in random environments can be described by the following stochastic model with regime switching: The mechanism of the ecosystem described by (1.4) can be explained as follows. Assume that initially the Markov chain is in state ; then the ecosystem (1.4) obeys the SDE until the Markov chain jumps to another state, say, . The ecosystem then satisfies the corresponding SDE for a random amount of time until the chain jumps to a new state again.
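Since the displayed equations are not reproduced above, the sketch below *assumes* a representative form of (1.4), namely dx(t) = x(t)[(a(r(t)) − b(r(t))x(t) + c(r(t))x(t − τ))dt + σ(r(t))dB(t)], with two hypothetical regimes; all parameter values are illustrative, not taken from the paper. It simulates exactly the mechanism just described: the chain waits a memoryless (exponential) time in each state, and the SDE coefficients jump with it.

```python
import math
import random

random.seed(1)

# Hypothetical two-regime parameters (NOT from the paper): in regime i the
# drift is x*(a[i] - b[i]*x + c[i]*x_delayed) and the noise term is
# sigma[i]*x*dB; Q is the generator of the switching chain r(t).
a, b, c, sigma = [3.0, 1.0], [2.0, 1.0], [0.5, 0.2], [0.3, 0.5]
Q = [[-1.0, 1.0], [2.0, -2.0]]

dt, tau, T = 0.001, 0.1, 5.0
lag = int(tau / dt)              # delay measured in time steps
n = int(T / dt)

x = [0.5] * (lag + 1)            # constant positive history on [-tau, 0]
r = 0                            # initial regime
for _ in range(n):
    # memoryless switching: leave state r with probability -Q[r][r]*dt
    if random.random() < -Q[r][r] * dt:
        r = 1 - r
    xt, xd = x[-1], x[-1 - lag]  # current value and value tau ago
    dB = random.gauss(0.0, math.sqrt(dt))
    xt = xt + xt * (a[r] - b[r] * xt + c[r] * xd) * dt + sigma[r] * xt * dB
    x.append(max(xt, 1e-12))     # Euler-Maruyama can dip below 0; clamp

print(r, round(x[-1], 4))
```

Between switches the path follows the current regime's own logistic SDE, which is precisely the mechanism described for (1.4).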

It should be pointed out that stochastic logistic systems under regime switching have received much attention lately. For instance, the stochastic permanence and extinction of a logistic model under regime switching were studied in [18]; a new single-species model disturbed by both white and colored noise in a polluted environment was developed and analyzed in [22]; and a general stochastic logistic system under regime switching was proposed and treated in [23].

Since (1.4) describes a stochastic population dynamics, it is critical to determine whether the solutions remain positive (never become negative), do not explode to infinity in finite time, are ultimately bounded, are stochastically permanent, become extinct, or have good asymptotic properties.

This paper is organized as follows. In the next section, we show that under some conditions there exists a global positive solution for any positive initial value. In Sections 3 and 4, we give sufficient conditions for stochastic permanence and extinction, which show that both are closely related to the stationary probability distribution of the Markov chain. If (1.4) is stochastically permanent, we estimate the limit of the time average of the sample path of its solution in Section 5. Finally, an example is given to illustrate our main results.

#### 2. Global Positive Solution

Throughout this paper, unless otherwise specified, let $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t \ge 0}, \mathbb{P})$ be a complete probability space with a filtration $\{\mathcal{F}_t\}_{t \ge 0}$ satisfying the usual conditions (i.e., it is right continuous and $\mathcal{F}_0$ contains all $\mathbb{P}$-null sets). Let $B(t)$, $t \ge 0$, be a scalar standard Brownian motion defined on this probability space. We also denote by $\mathbb{R}_+$ the interval $(0, \infty)$ and by $\overline{\mathbb{R}}_+$ the interval $[0, \infty)$. Moreover, let $\tau > 0$ and denote by $C([-\tau, 0]; \mathbb{R}_+)$ the family of continuous functions from $[-\tau, 0]$ to $\mathbb{R}_+$.

Let $r(t)$, $t \ge 0$, be a right-continuous Markov chain on the probability space, taking values in a finite state space $S = \{1, 2, \ldots, N\}$, with generator $\Gamma = (\gamma_{ij})_{N \times N}$ given by
$$\mathbb{P}\{r(t + \Delta) = j \mid r(t) = i\} = \begin{cases} \gamma_{ij}\Delta + o(\Delta), & i \ne j, \\ 1 + \gamma_{ii}\Delta + o(\Delta), & i = j, \end{cases}$$
where $\Delta > 0$. Here $\gamma_{ij} \ge 0$ is the transition rate from $i$ to $j$ if $i \ne j$, while $\gamma_{ii} = -\sum_{j \ne i} \gamma_{ij}$. We assume that the Markov chain $r(\cdot)$ is independent of the Brownian motion $B(\cdot)$. It is well known that almost every sample path of $r(t)$ is a right-continuous step function with a finite number of jumps in any finite subinterval of $[0, \infty)$. As a standing hypothesis, we assume in this paper that the Markov chain is irreducible. This is a very reasonable assumption, as it means that the system can switch from any regime to any other regime. Under this condition, the Markov chain has a unique stationary (probability) distribution $\pi = (\pi_1, \pi_2, \ldots, \pi_N)$, which can be determined by solving the linear equation $\pi \Gamma = 0$ subject to $\sum_{j=1}^{N} \pi_j = 1$ and $\pi_j > 0$ for all $j \in S$. We refer to [9, 24] for the fundamental theory of stochastic differential equations.
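For an irreducible finite chain, the stationary distribution described here solves the linear system "π times the generator equals zero" together with the normalization "components sum to one". A minimal sketch with a hypothetical 3-state generator (values illustrative only): transpose the generator, replace one equation by the normalization constraint, and solve the small linear system by hand-rolled Gaussian elimination.

```python
# Hypothetical 3-state generator (rows sum to zero, off-diagonals >= 0).
Gamma = [[-3.0, 2.0, 1.0],
         [1.0, -2.0, 1.0],
         [1.0, 2.0, -3.0]]
N = len(Gamma)

# Build A x = rhs: rows of A are the columns of Gamma (i.e., pi @ Gamma = 0
# written column by column), with the last equation replaced by sum(pi) = 1.
A = [[Gamma[j][i] for j in range(N)] for i in range(N)]
A[-1] = [1.0] * N
rhs = [0.0] * (N - 1) + [1.0]

# Gaussian elimination with partial pivoting (tiny N, no libraries needed).
for col in range(N):
    p = max(range(col, N), key=lambda k: abs(A[k][col]))
    A[col], A[p] = A[p], A[col]
    rhs[col], rhs[p] = rhs[p], rhs[col]
    for k in range(col + 1, N):
        f = A[k][col] / A[col][col]
        for c in range(col, N):
            A[k][c] -= f * A[col][c]
        rhs[k] -= f * rhs[col]

pi = [0.0] * N
for k in range(N - 1, -1, -1):
    s = sum(A[k][c] * pi[c] for c in range(k + 1, N))
    pi[k] = (rhs[k] - s) / A[k][k]

print([round(v, 4) for v in pi])
```

For this generator the chain spends half its time in the middle state, and a quarter in each of the others.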

For convenience and simplicity in the following discussion, define where is a constant vector.

Since in model (1.4) denotes the population size at time , it should be nonnegative. Thus, for further study, we must give conditions under which (1.4) has a unique global positive solution.

Theorem 2.1. *Assume that there are positive numbers such that
**
Then, for any given initial data , there is a unique solution to (1.4) on and the solution will remain in with probability 1, namely, for all a.s.*

*Proof. *Since the coefficients of the equation are locally Lipschitz continuous, for any given initial data , there is a unique maximal local solution on , where is the explosion time. To show that this solution is global, we need to prove a.s.

Let be sufficiently large for
For each integer , define the stopping time
where throughout this paper we set $\inf \emptyset = \infty$ (as usual $\emptyset$ denotes the empty set). Clearly, is increasing as . Set , where a.s. If we can show that a.s., then a.s. and a.s. for all . In other words, we need to show that a.s. Define a $C^2$-function by
which is nonnegative on . Let and be arbitrary. For , it is not difficult to show by the generalized Itô formula that
where is defined by
Using condition (2.5), we compute
Moreover, there is clearly a constant such that
Substituting these into (2.10) yields

Noticing that , we obtain that
where is a positive constant. Substituting these into (2.9) yields

Now, for any , we can integrate both sides of (2.15) from 0 to and then take the expectations to get
Compute
and, similarly
Substituting these into (2.16) gives
where .

By the Gronwall inequality, we obtain that
Note that for every equals either or , thus
It then follows from (2.20) that
where is the indicator function of . Letting gives and hence . Since is arbitrary, we must have , so as required.
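For reference, the integral form of Gronwall's inequality invoked in this proof is assumed here to be the standard one:

```latex
% Gronwall's inequality, integral form: if u is continuous, nonnegative, and
u(t) \le C + K \int_0^t u(s)\,\mathrm{d}s, \qquad 0 \le t \le T,
% for constants C, K \ge 0, then
u(t) \le C\, e^{Kt}, \qquad 0 \le t \le T.
```

Applied to the expectation bound (2.19), it converts the linear integral estimate into the exponential bound (2.20).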

Corollary 2.2. *Assume that there is a positive number such that
**
Then the conclusions of Theorem 2.1 hold.*

The following theorem, which will be used in the sections below, is easy to verify in applications.

Theorem 2.3. *Assume that
**
Then for any given initial data , there is a unique solution to (1.4) on and the solution will remain in with probability 1, namely, for all a.s.*

*Proof. *The proof of this theorem is the same as that of the theorem above. Let
then we have (2.9) and (2.10). By (2.24), we get
where is a positive constant. The rest of the proof is similar to that of Theorem 2.1 and omitted.

Note that condition (2.5) is used to derive (2.13) from (2.10). In fact, there are several different ways to estimate (2.10), which lead to different alternative conditions for a positive global solution. For example, we know that Therefore, if we assume that then hence from which we can show, in the same way as in the proof of Theorem 2.1, that the solution of (1.4) is positive and global. In other words, the arguments above give an alternative result, which we state as the following theorem.

Theorem 2.4. *Assume that there are positive numbers such that
**
Then for any given initial data , there is a unique solution to (1.4) on and the solution will remain in with probability 1, namely, for all a.s.*

Similarly, we can establish a corollary as follows.

Corollary 2.5. *Assume that there is a positive number such that
**
Then the conclusions of Theorem 2.4 hold. *

#### 3. Asymptotic Bounded Properties

For convenience and simplicity in the following discussion, we list the following assumptions.

(A1) For each .
(A1′) For each .
(A1′′) For each .
(A2) For some .
(A3) .
(A3′) .
(A4) For each .
(A4′) For each .

*Definition 3.1. *Equation (1.4) is said to be stochastically permanent if for any , there exist positive constants such that
where is the solution of (1.4) with any positive initial value.

*Definition 3.2. *The solutions of (1.4) are called stochastically ultimately bounded if, for any , there exists a positive constant such that the solutions of (1.4) with any positive initial value have the property that

It is obvious that if a stochastic equation is stochastically permanent, its solutions must be stochastically ultimately bounded. So we begin with the following theorem and use it to obtain the stochastically ultimate boundedness of (1.4).

Theorem 3.3. *Let hold and let be an arbitrary given positive constant. Then for any given initial data , the solution of (1.4) has the properties that
**
where both and are positive constants defined in the proof.*

*Proof. *By Theorem 2.3, the solution will remain in for all with probability 1. Let
Define the function by
By the generalized Itô formula, we have
where is defined by
By (3.5) and Young's inequality, we obtain that
where . Moreover,
By (3.10) and (3.11), one has
which yields
where

By the generalized Itô formula, Young's inequality, and (3.5) again, it follows that
where . This implies

The inequality above implies
where and the desired assertion (3.4) follows by setting .

*Remark 3.4. *From (3.3) of Theorem 3.3, there is a such that
Since is continuous, there is a such that
Taking , we have
This means that the th moment of any positive solution of (1.4) is bounded.

*Remark 3.5. *Equation (3.4) of Theorem 3.3 shows that the average in time of the th () moment of solutions of (1.4) is bounded.

Theorem 3.6. *Solutions of (1.4) are stochastically ultimately bounded under .*

*Proof. *This can be easily verified by Chebyshev's inequality and Theorem 3.3.
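The step from the moment bound of Theorem 3.3 to ultimate boundedness is Chebyshev's (Markov's) inequality: a bound sup E[x(t)^p] ≤ K gives P(x(t) > H) ≤ K/H^p, so choosing H = (K/ε)^(1/p) forces the tail probability below ε. A small empirical check, where the constants `p`, `K`, `eps` and the toy nonnegative distribution are illustrative only:

```python
import math
import random

random.seed(0)

# Illustrative constants (not from the paper): a p-th moment bound K and a
# tolerance eps determine the bound H = (K / eps)**(1/p) in the definition
# of stochastically ultimate boundedness.
p, K, eps = 0.5, 2.0, 0.01
H = (K / eps) ** (1.0 / p)          # here H = 200**2 = 40000

# Toy nonnegative "population" values: lognormal samples, whose p-th moment
# exp(p**2 / 2) ~ 1.13 sits below K.
samples = [math.exp(random.gauss(0.0, 1.0)) for _ in range(100_000)]
moment = sum(s ** p for s in samples) / len(samples)   # ~ E[x^p]
tail = sum(s > H for s in samples) / len(samples)      # ~ P(x > H)

print(round(moment, 3), tail, K / H ** p)
```

The empirical tail frequency sits below K/H^p = ε, matching the Chebyshev step used in the proof of Theorem 3.6.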

Based on the results above, we will now prove the other inequality in the definition of stochastic permanence. For convenience, define Under (A3), we have . Moreover, let be a vector or matrix. By we mean that all elements of are positive. We also adopt here the traditional notation by letting We will also need some useful results.

Lemma 3.7 (see [24]). * If , then the following statements are equivalent.*(1)* is a nonsingular -matrix (see [24] for definition of -matrix).*(2)*All of the principal minors of are positive; that is,
*(3)* is semipositive, that is, there exists in such that .*
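The criteria of Lemma 3.7 are straightforward to check numerically for a small matrix. A sketch for a hypothetical 3×3 matrix with nonpositive off-diagonal entries (values illustrative only, not from the lemma): every leading principal minor is positive, and the vector (1, 1, 1) witnesses the semipositivity condition.

```python
# Hypothetical matrix with nonpositive off-diagonal entries (a Z-matrix).
A = [[ 3.0, -1.0, -1.0],
     [-1.0,  3.0, -1.0],
     [-1.0, -1.0,  3.0]]

def det(M):
    """Determinant by cofactor expansion along the first row (tiny matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

# Criterion (2): every leading principal minor is positive.
minors = [det([row[:k] for row in A[:k]]) for k in range(1, len(A) + 1)]
print(minors)        # [3.0, 8.0, 16.0], all positive

# Criterion (3): exhibit x > 0 with A x > 0 componentwise.
x = [1.0, 1.0, 1.0]
Ax = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
print(Ax)            # [1.0, 1.0, 1.0], all positive
```

Either check certifies that this A is a nonsingular M-matrix in the sense of the lemma.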

Lemma 3.8 (see [18]). *(i) Assumptions and imply that there exists a constant such that the matrix
**
is a nonsingular -matrix, where .**(ii) Assumption (A4) implies that there exists a constant such that the matrix is a nonsingular -matrix.*

Lemma 3.9. *If there exists a constant such that is a nonsingular -matrix and , then the global positive solution of (1.4) has the property that
**
where is a fixed positive constant (defined by (3.35) in the proof).*

*Proof. *Let on . Applying the generalized Itô formula, we have

By Lemma 3.7, for the given , there is a vector such that
namely,

Define the function by . Applying the generalized Itô formula again, we have
where is defined by

Now, choose a constant sufficiently small such that
that is,
Then, by the generalized Itô formula again,
It is computed that
where
which implies
Then
Recalling the definition of , we obtain the required assertion.

Theorem 3.10. *Under (A1′′), (A2), and (A3), (1.4) is stochastically permanent.*

The proof is a simple application of the Chebyshev inequality, Lemmas 3.8 and 3.9, and Theorem 3.6. Similarly, it is easy to obtain the following result.

Theorem 3.11. *Under (A1′′) and (A4), (1.4) is stochastically permanent.*

*Remark 3.12. *It is well-known that if , and , then the solution of (1.1) is persistent, namely,
Furthermore, we consider its associated stochastic delay equation (1.4), that is,
where , for , and . Thus, applying Theorem 3.10 or Theorem 3.11, we can see that (1.4) is stochastically permanent if the noise intensities are sufficiently small in the sense that

Corollary 3.13. *Assume that for some , , and . Then the subsystem
**
is stochastically permanent.*

#### 4. Extinction

In the previous sections we have shown that, under certain conditions, the original equation (1.1) and the associated SDE (1.4) behave similarly in the sense that both have positive solutions which do not explode to infinity in finite time and, in fact, are ultimately bounded. In other words, under certain conditions the noise does not spoil these nice properties. However, we will show in this section that if the noise is sufficiently large, the solution to (1.4) will become extinct with probability 1.

Theorem 4.1. *Assume that (A1) holds. Then for any given initial data , the solution of (1.4) has the property that
*

*Proof. *By Theorem 2.3, the solution will remain in for all with probability 1. We have by the generalized Itô formula and that
where is used in the last step. Then,
where . The quadratic variation of is given by
Therefore, applying the strong law of large numbers for martingales [24], we obtain

It finally follows from (4.3), by dividing both sides by and then letting , that
which is the required assertion (4.1).
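For a standard Brownian motion, the strong law of large numbers for martingales reduces to the classical fact that B(t)/t tends to 0 almost surely (the quadratic variation of B grows only linearly). One simulated path, sampled at unit time steps, illustrates the decay:

```python
import random

random.seed(2)

# Simulate B(t) at integer times t = 1, ..., 10_000 via unit-variance
# increments, and record |B(t)|/t at a few checkpoints; since |B(t)| grows
# like sqrt(t), the ratio |B(t)|/t shrinks like 1/sqrt(t).
B, ratios = 0.0, []
for t in range(1, 10_001):
    B += random.gauss(0.0, 1.0)
    if t in (100, 1_000, 10_000):
        ratios.append(abs(B) / t)

print([round(v, 4) for v in ratios])
```

The same mechanism is what kills the martingale term in (4.3) after dividing by t and letting t go to infinity.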

Similarly, it is easy to prove the following conclusions.

Theorem 4.2. *Assume that (A1) and (A3′) hold. Then for any given initial data , the solution of (1.4) has the property that
**
That is, the population will become extinct exponentially with probability 1.*

Theorem 4.3. *Assume that (A1) and (A4′) hold. Then for any given initial data , the solution of (1.4) has the property that
**
where . That is, the population will become extinct exponentially with probability 1.*

*Remark 4.4. *If the noise intensities are sufficiently large in the sense that
then the population represented by (1.4) will become extinct exponentially with probability 1. However, the original delay equation (1.1) may be persistent without environmental noise.

*Remark 4.5. *Let (A1′′) and (A2) hold, . Then, SDE (1.4) is either stochastically permanent or extinctive. That is, it is stochastically permanent if and only if , while it is extinctive if and only if .

Corollary 4.6. *Assume that for some ,
**
Then for any given initial data , the solution of subsystem
**
tends to zero a.s.*

#### 5. Asymptotic Properties

Lemma 5.1. *Assume that (A1′) holds. Then for any given initial data , the solution of (1.4) has the property
*

*Proof. *By Theorem 2.3, the solution will remain in for all with probability 1. It is known that
From (3.3) of Theorem 3.3, we have

By the well-known Burkholder-Davis-Gundy (BDG) inequality [24] and Hölder's inequality, we obtain
Note that
Therefore,
This, together with (5.3), yields

From (5.7), there exists a positive constant such that
Let be arbitrary. Then, by Chebyshev's inequality,
Applying the well-known Borel-Cantelli lemma [24], we obtain that for almost all
for all but finitely many . Hence, there exists a , for almost all , for which (5.10) holds whenever . Consequently, for almost all , if and , then
Therefore,
Letting , we obtain the desired assertion (5.1).

Lemma 5.2. *If there exists a constant such that is a nonsingular -matrix and , then the global positive solution of SDE (1.4) has the property that
*

*Proof. *Applying the generalized Itô formula, for the fixed constant , we derive from (3.26) that
where on . By (3.37), there exists a positive constant such that

Let be sufficiently small for
Then (5.14) implies that
By directly computing, we have
By the BDG inequality, it follows that
Substituting this and (5.18) into (5.17) gives
Making use of (5.15) and (5.16), we obtain

Let be arbitrary. Then, we have by Chebyshev's inequality that
Applying the Borel-Cantelli lemma, we obtain that for almost all ,
holds for all but finitely many . Hence, there exists an integer , for almost all , for which (5.23) holds whenever . Consequently, for almost all , if and ,
Therefore,
Letting , we obtain
Recalling the definition of , this yields
which further implies
This is our required assertion (5.13).

Theorem 5.3. *Assume that (A1′′), (A2), and (A3) hold. Then for any given initial data , the solution of (1.4) obeys
*