Abstract

An optimal harvesting problem for a stochastic food-chain system with Markovian switching is investigated in this paper. First, the existence, uniqueness, and positivity of the food-chain system's solution are proved. Second, persistence in mean of the system is established. Then the optimal harvesting policy is discussed. Finally, the main results are illustrated by several examples.

1. Introduction

The optimal harvesting problem is an important and interesting topic from both the biological and the mathematical points of view. Since Clark’s works [1, 2], optimal harvesting problems have received a great deal of attention and have been studied widely. Among these studies, a large number focused on deterministic models [3–9], some on stochastic versions [10–17], but only a few on food-chain systems. Furthermore, it is well known that the theory of food chains illustrates the balance of nature and that no animal or plant can exist independently. Motivated by the arguments presented above, we are interested in the optimal harvesting problem for the following stochastic food-chain system: with the initial value , where , , is a standard Brownian motion, is the harvesting effort (control parameter), and , , represent the population densities of the three species (resource, consumer, and predator) at time , respectively. All parameters are positive constants and all parametric functions are continuous and positive. , , represent the intrinsic growth rates of species , , , respectively; measures the strength of competition among individuals of species ; is the maximum value of the per capita reduction rate of due to ; , , and have similar meanings to ; measures the extent to which the environment provides protection to species and ; measures the extent to which the environment provides protection to species and ; is a right continuous Markov chain; , , represents the intensity of the white noise. This system extends the predator-prey model with modified Leslie-Gower and Holling-type II schemes and stochastic perturbation discussed by Ji et al. [18], Song et al. [19], and Guo et al. [20], in which the factor of Markovian switching was not considered.

In most of the existing literature [1, 2], the sustainable yield function is used as the harvesting function. Here the harvesting function associated with (1) is . This type of harvesting function, defined as the time averaging yield function, is also used in other papers, such as Wang [21, Chapter 4] and Zou and Wang [22]. The optimal harvesting problem considered in this paper is then stated as follows: find a harvesting effort such that
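For orientation, the following is only a hedged sketch of the general shape of such a time averaging yield function, under the illustrative assumption that a single species $x_{1}$ is harvested with effort $E$ (the paper's exact expression is the one displayed above):
\[
Y(E)=\lim_{t\to\infty}\frac{1}{t}\,\mathbb{E}\int_{0}^{t}E\,x_{1}(s)\,\mathrm{d}s,\qquad
\text{find } E^{*}\ \text{such that}\ Y(E^{*})=\max_{E\ge 0}Y(E).
\]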

Based on the aforementioned discussion, the first and most important task is obviously to establish the existence of , and then to solve the optimal harvesting problem. Therefore, the rest of the paper is organized as follows. In Section 2, we show that system (1) has a global positive solution. In Section 3, we obtain some long time behavior of the solution, especially the property of persistence in mean, which ensures the existence of the time averaging yield function, and its explicit expression is given. In Section 4, the optimal harvesting policies are investigated. In Section 5, we illustrate our main results through several numerical examples. Finally, conclusions are drawn in Section 6.

For convenience, we give some notations and assumptions in the rest of this section.

Throughout this paper, unless otherwise specified, let be a complete probability space with a filtration satisfying the usual conditions (i.e., it is increasing and right continuous while contains all P-null sets). The standard Brownian motion , , is defined on this probability space.

The right continuous Markov chain on this probability space takes values in a finite state space , and its generator is given by , where . Here is the transition rate from to , and if , while . We assume that the Markov chain and the Brownian motion are independent of each other. As a standing hypothesis, we also assume throughout this paper that the Markov chain is irreducible. This is very reasonable, as it means that the system can switch from any regime to any other regime. It is equivalent to the condition that, for any , one can find finite numbers such that . Under this condition, the Markov chain has a unique stationary distribution , and for any .
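As a minimal computational sketch of how the unique stationary distribution can be obtained from a given generator, the following Python snippet solves $\pi\Gamma=0$ together with $\sum_{k}\pi_{k}=1$; the generator entries used below are hypothetical and serve only as an illustration.

```python
import numpy as np

def stationary_distribution(gamma):
    """Solve pi @ Gamma = 0 with sum(pi) = 1 for an irreducible generator Gamma."""
    n = gamma.shape[0]
    # Stack the normalization constraint sum(pi) = 1 under the system Gamma^T pi = 0.
    a = np.vstack([gamma.T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(a, b, rcond=None)
    return pi

# Hypothetical two-state generator, for illustration only.
gamma = np.array([[-3.0,  3.0],
                  [ 4.0, -4.0]])
print(stationary_distribution(gamma))  # approximately [4/7, 3/7]
```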

In order to obtain some properties of the system, some assumptions are given below. These assumptions are conventional; they guarantee that the ecosystem does not collapse as time elapses.

Assumption 1. Consider , , .

Assumption 2. , where , ,  , and is positive and sufficiently small.

Let , , , and let denote a generic constant in the rest of this paper, whose value may differ from place to place.

The key method used in this paper is the comparison theorem for stochastic equations. This theorem for stochastic differential equations was developed by Ikeda and Watanabe [23] and has been used by many authors [24–26].

2. Positive and Global Solutions

As the state of the system is the population density of species in the system at time , it should be nonnegative. Moreover, in order for a stochastic differential equation to have a unique global (i.e., no explosion in finite time) solution for any given initial data, the coefficients of the equation are generally required to satisfy the linear growth condition and the local Lipschitz condition [25]. However, the coefficients of each equation in system (1) satisfy neither the linear growth condition nor the local Lipschitz condition. In this section, we show the existence and uniqueness of the positive solution.

Lemma 3. For any initial value , , , system has a unique positive local solution for almost surely (a.s.), where is the explosion time.

Proof. To begin with, consider the following equations: on with initial value , , , where , , . Notice that the coefficients of these equations satisfy the local Lipschitz condition; thus there is a unique solution on . Therefore, it follows from Itô’s formula that , , is the unique positive local solution of system (1) with initial value , , .

Lemma 3 only tells us that there is a unique positive local solution to (1). Next, we show that this solution is global; that is, . For convenience, we define the following six equations:

Obviously, when , by the comparison theorem for stochastic equations [27, Theorem 3.1], it yields . Furthermore, , , , , , all exist on , and hence we have the following.

Theorem 4. There is a unique positive solution , , to (1) on a.s. for any initial value , , , and the relations are all satisfied on .

3. The Long Time Behavior

Theorem 4 shows that the solution of system (1) will remain in the positive cone . This nice property provides us with a great opportunity to discuss in detail how the solution varies in . In this section we give some long time behavior of the solution, especially the property of persistence in mean, which ensures the existence of .

Lemma 5 (see [28]). If Assumption 1 is satisfied, then one has

Lemma 6. If Assumption 1 is satisfied, then one has

Proof. Firstly, we give an auxiliary equation: . Obviously, , and using a method similar to that of Lemma 5, we have . Therefore, . Next, we need only prove
The quadratic variation of is , and by the strong law of large numbers for local martingales, we have . Therefore, for all , , we have . From this, we have . On the other hand, from Lemma 5, we have . By the same arguments as above, when , we can get . Therefore, we obtain ; that is, . By the arbitrariness of , we must have . Hence, , and the second conclusion can be proved similarly.

Theorem 7. If Assumption 1 is satisfied, then we have

Proof. The first conclusion follows from , , and ; obviously, we have . In the following, we prove the second conclusion.
Let ; applying Itô’s formula gives . Hence . Based on the first conclusion of this theorem, the strong law of large numbers for local martingales, and the ergodic property of the Markov chain, the second conclusion is proved.
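As a hedged reminder of the generic computation behind this step (the concrete drift of (1) is not reproduced here), if a positive process satisfies $\mathrm{d}x(t)=x(t)\bigl(f(t)\,\mathrm{d}t+\sigma\,\mathrm{d}B(t)\bigr)$, then Itô’s formula applied to $\ln x$ gives
\[
\mathrm{d}\ln x(t)=\Bigl(f(t)-\frac{\sigma^{2}}{2}\Bigr)\mathrm{d}t+\sigma\,\mathrm{d}B(t),
\]
so integrating and dividing by $t$ turns statements about $\ln x(t)/t$ into statements about time averages of the drift, which is where the strong law of large numbers for the martingale term and the ergodicity of the Markov chain enter.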

Theorem 8. If Assumptions 1 and 2 are satisfied, then one has

Proof. Based on and , we have . Next, we need only prove
The quadratic variation of is , and by the strong law of large numbers for local martingales, we have . Therefore, for all , , we have . From this, we have . On the other hand, from Lemma 5, we have . By the same arguments as above, when , we can get . Based on the second conclusion of Theorem 7, for all , , when , we have . Thus, . Hence, we obtain , and furthermore . In other words, . By the arbitrariness of , we must have . The first assertion is proved. The second assertion can be proved similarly.
The last two assertions can be proved similarly to the second assertion of Theorem 7.

Definition 9 (see [5]). The system is said to be persistent in mean, if
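For reference, a commonly used formulation of persistence in mean, stated here only as an illustrative assumption about the elided display (the precise condition is the one in [5]), is
\[
\liminf_{t\to\infty}\frac{1}{t}\int_{0}^{t}x_{i}(s)\,\mathrm{d}s>0\quad\text{a.s.},\qquad i=1,2,3.
\]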

Theorem 10. If Assumptions 1 and 2 are satisfied, then the system is persistent in mean.

Proof. is already proved in Theorem 8.
From the third assertion of Theorem 8, we have ; that is, .
From the second assertion of Theorem 7, we have ; that is, .
This theorem is proved.

4. The Optimal Harvesting Policies

Based on the explicit expression of the time averaging yield function obtained in the last section, here we discuss the optimal harvesting problem mentioned in Section 1.

Theorem 11. If Assumptions 1 and 2 are satisfied, then the optimal harvesting effort is , where , , , and the optimal harvesting output is

Proof. Based on Theorem 8, the optimization problem can be expressed as follows: . From the definitions of and , we get . Therefore, the above optimization problem can be simplified as follows: . Because the objective function is concave, we can obtain the unique maximum point easily as , where is obtained by letting .
Substituting this into the harvesting function, we obtain the optimal harvesting output . This theorem is proved.
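As a hedged illustration of this final step (the actual coefficients come from (1) and Theorem 8 and are not reproduced here), suppose the simplified objective were the concave quadratic $Y(E)=E(\alpha-\beta E)$ with constants $\alpha,\beta>0$; then
\[
Y'(E)=\alpha-2\beta E=0\ \Longrightarrow\ E^{*}=\frac{\alpha}{2\beta},\qquad Y(E^{*})=\frac{\alpha^{2}}{4\beta},
\]
which mirrors how the optimal effort and the optimal harvesting output are obtained by setting the derivative of the objective to zero.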

Remark 12. (i) That the feasible region of optimization problem (46) is nonempty is guaranteed by Assumptions 1 and 2.
(ii) From the explicit expression of the optimal harvesting effort, we can easily investigate how the parameters influence it; for example, is decreasing in . This coincides with the fact that if the consumer’s () consuming capacity is enhanced ( increases), then the harvesting effort must be reduced ( goes down); otherwise the resource () will become extinct and the whole ecosystem will collapse.

5. Numerical Results

We present numerical experiments in this section to show how the proposed model works in concrete examples. The results help readers understand the theoretical conclusions through practical applications.

Here, we use the Milstein method [29] to construct the discretization equation of (1); that is, where , , and , , are Gaussian random variables.
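Because the full discretized system depends on the coefficients of (1), which are not reproduced here, the following is only a minimal Python sketch of a single Milstein step for a generic scalar equation of the form $\mathrm{d}x=\mathrm{drift}(x)\,\mathrm{d}t+\sigma x\,\mathrm{d}B$, which is the building block of such a scheme; the logistic drift and all parameter values are placeholders, not the paper's.

```python
import numpy as np

def milstein_step(x, drift, sigma, dt, rng):
    """One Milstein step for dX = drift(X) dt + sigma * X dB (linear diffusion)."""
    db = rng.normal(0.0, np.sqrt(dt))
    # Milstein correction for b(x) = sigma * x: 0.5 * b(x) * b'(x) * (dB^2 - dt).
    return (x + drift(x) * dt
              + sigma * x * db
              + 0.5 * sigma**2 * x * (db**2 - dt))

# Hypothetical logistic drift and noise intensity, for illustration only.
r, b, sigma = 0.8, 0.5, 0.2
drift = lambda x: x * (r - b * x)

rng = np.random.default_rng(1)
dt, n_steps, x = 0.001, 10_000, 0.5
path = [x]
for _ in range(n_steps):
    x = max(milstein_step(x, drift, sigma, dt, rng), 1e-12)  # keep the state positive
    path.append(x)
```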

For simplicity, assume that the random environments are modeled by a two-state Markov chain with state set and generator
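On the same footing, the following is a minimal sketch of how a regime-switching path can be generated from a generator matrix on the simulation grid; the generator entries below are hypothetical, and the paper's numerical generator is the one displayed above.

```python
import numpy as np

def simulate_regime_path(gamma, n_steps, dt, r0=0, seed=0):
    """Simulate a continuous-time Markov chain with generator `gamma` on a uniform grid.

    Uses the one-step approximation P(dt) ~= I + gamma * dt, which is adequate
    when dt is much smaller than 1 / max|gamma_ii|.
    """
    rng = np.random.default_rng(seed)
    n_states = gamma.shape[0]
    p = np.eye(n_states) + gamma * dt          # approximate transition matrix over dt
    path = np.empty(n_steps + 1, dtype=int)
    path[0] = r0
    for k in range(n_steps):
        path[k + 1] = rng.choice(n_states, p=p[path[k]])
    return path

# Hypothetical two-state generator, for illustration only.
gamma = np.array([[-2.0,  2.0],
                  [ 1.0, -1.0]])
regimes = simulate_regime_path(gamma, n_steps=10_000, dt=0.001)
```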

The other parameters are defined as follows: , , , , , , , , , , , , , , , , , , . In this scenario, , , , , , ,  , and . Obviously, Assumptions 1 and 2 are satisfied.

Based on the aforementioned discussion, we obtain the following results.

Figures 1 and 2 show that the solutions of (1) are positive in the deterministic environment (without regime switching), that is, or . Figure 4 shows that the solutions of (1) are positive in the random environment (with regime switching); the random environment is described by Figure 3. These observations are all consistent with Theorem 4. Figure 5 shows , , and in the random environment described by Figure 3, which is consistent with Theorems 7 and 8.

6. Conclusions

This paper studies an optimal harvesting problem for a food-chain system with Markovian switching. It is proved that the food-chain system’s solution exists and is unique and positive, and that the system is persistent in mean, which establishes that the optimal harvesting problem is well posed. Then the optimal harvesting policy is obtained.

Nevertheless, there is room for further work on this topic, for example, considering more than one control variable in the system. The permanence and extinction of the system and the stability in distribution also need to be investigated.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors thank the editor and referees for their helpful comments that improved the presentation of the paper.