Abstract

Stochastic systems with Markovian switching have been used in a variety of application areas, including biology, epidemiology, mechanics, economics, and finance. In this paper, we study the Euler-Maruyama (EM) method for neutral stochastic functional differential equations with Markovian switching. The main aim is to show that the numerical solutions will converge to the true solutions. Moreover, we obtain the convergence order of the approximate solutions.

1. Introduction

Stochastic systems with Markovian switching have been successfully used in a variety of application areas, including biology, epidemiology, mechanics, economics, and finance [1]. As with deterministic neutral functional differential equations and stochastic functional differential equations (SFDEs), most neutral stochastic functional differential equations with Markovian switching (NSFDEsMS) cannot be solved explicitly, so numerical methods become one of the most powerful techniques. A number of papers have studied the numerical analysis of deterministic neutral functional differential equations, for example, [2, 3] and the references therein. The numerical solutions of SFDEs and stochastic systems with Markovian switching have also been studied extensively by many authors; here we mention some of them, for example, [4–16]. Moreover, many well-known theorems for SFDEs have been successfully extended to NSFDEs; for example, [17–24] discussed the stability analysis of the true solutions.

However, to the best of our knowledge, little is yet known about the numerical solutions for NSFDEsMS. In this paper we extend the method developed in [5, 16] to NSFDEsMS and study strong convergence of the Euler-Maruyama approximations under the local Lipschitz condition, the linear growth condition, and the contractive mapping condition. These three conditions are standard for the existence and uniqueness of the true solutions. Although the method of analysis borrows from [5], the presence of the neutral term and the Markovian switching essentially complicates the problem, and we develop several new techniques to cope with the difficulties arising from these two terms. Moreover, we also generalize the results in [19].

In Section 2, we describe some preliminaries, define the EM method for NSFDEsMS, and state our main result that the approximate solutions converge strongly to the true solutions. The proof of this result is rather technical, so we present several lemmas in Section 3 and then complete the proof in Section 4. In Section 5, under the global Lipschitz condition, we reveal the order of convergence of the approximate solutions. Finally, conclusions are drawn in Section 6.

2. Preliminaries and EM Scheme

Throughout this paper, let be a complete probability space with a filtration satisfying the usual conditions, that is, it is right continuous and increasing while contains all P-null sets. Let be an -dimensional Brownian motion defined on the probability space. Let be the Euclidean norm in . Let , and let . Denote by the family of continuous functions from to with the norm . Let and be the family of -measurable -valued random variables such that . If is an -valued stochastic process on , we let for .

Let , , be a right-continuous Markov chain on the probability space taking values in a finite state space with the generator given by where . Here is the transition rate from to if while . We assume that the Markov chain is independent of the Brownian motion . It is well known that almost every sample path of is a right-continuous step function with a finite number of simple jumps in any finite subinterval of .

In this paper, we consider the -dimensional NSFDEsMS with initial data and , where , and . As a standing hypothesis we assume that both and are sufficiently smooth so that (2.2) has a unique solution. We refer the reader to Mao [10, 12] for the conditions on the existence and uniqueness of the solution . The initial data and could be random, but the Markov property ensures that it is sufficient to consider only the case when both and are constants.
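For orientation, the following is a hedged sketch of the form such an equation typically takes in this literature (cf. Mao and Yuan [11]); the symbols D, f, g, B, x_t, tau, and xi are illustrative assumptions rather than the paper's fixed notation.

```latex
% Illustrative (assumed) standard form of an NSFDE with Markovian switching:
d\bigl[x(t) - D(x_t, r(t))\bigr]
    = f(x_t, t, r(t))\,dt + g(x_t, t, r(t))\,dB(t), \qquad 0 \le t \le T,
% with initial data x_0 = \xi and r(0) = r_0, where
% x_t = \{x(t+\theta) : -\tau \le \theta \le 0\}, D denotes the neutral term,
% f the drift coefficient, and g the diffusion coefficient.
```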

To analyze the Euler-Maruyama (EM) method, we need the following lemma (see [6, 7, 1012, 16]).

Lemma 2.1. Given , let for . Then is a discrete Markov chain with the one-step transition probability matrix

For completeness, we describe the simulation of the Markov chain as follows. Given a step size , we compute the one-step transition probability matrix Let and generate a random number which is uniformly distributed in . Define where we set as usual. Generate independently a new random number which is again uniformly distributed in , and then define

Repeating this procedure, a trajectory of can be generated, and this procedure can be carried out independently to obtain more trajectories.
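As a concrete illustration, here is a minimal Python sketch of this simulation; it is a hedged example rather than the authors' code, and the generator Gamma, the step size delta, and the use of the matrix exponential to form the one-step transition probability matrix of Lemma 2.1 are assumptions made for the example.

```python
import numpy as np
from scipy.linalg import expm

def simulate_markov_chain(Gamma, delta, n_steps, r0=0, rng=None):
    """Simulate the discrete Markov chain on the finite state space.

    Gamma   : generator matrix (off-diagonal entries are transition rates,
              each row sums to zero)
    delta   : step size
    n_steps : number of steps to generate
    r0      : initial state
    """
    if rng is None:
        rng = np.random.default_rng()
    P = expm(delta * Gamma)              # one-step transition probability matrix
    states = np.empty(n_steps + 1, dtype=int)
    states[0] = r0
    for k in range(n_steps):
        u = rng.uniform()                # uniform random number in [0, 1)
        # first state whose cumulative transition probability exceeds u
        states[k + 1] = np.searchsorted(np.cumsum(P[states[k]]), u)
    return states

# Example: a two-state chain with switching rates 1 and 2
Gamma = np.array([[-1.0, 1.0],
                  [ 2.0, -2.0]])
r_path = simulate_markov_chain(Gamma, delta=0.01, n_steps=1000)
```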

Now we can define the Euler-Maruyama (EM) approximate solution for (2.2) on the finite time interval . Without loss of generality, we may assume that is a rational number; otherwise we may replace by a larger number. Let the step size be a fraction of and , namely, for some integers and . The explicit discrete EM approximate solution , is defined as follows: where and is a -valued random variable defined by for , , where in order for to be well defined, we set .

That is, is the linear interpolation of , . We hence have We therefore obtain It is obvious that .

In our analysis it will be more convenient to use continuous-time approximations. We hence introduce the -valued step process and we define the continuous EM approximate solution as follows: let for , while for , , Clearly, (2.12) can also be written as In particular, this shows that , that is, the discrete and continuous EM approximate solutions coincide at the grid points. We know that is not computable because it requires knowledge of the entire Brownian path, not just its -increments. However, , so the error bound for will automatically imply the error bound for . It is then obvious that Moreover, for any , and letting be the integer part of , then These properties will be used frequently in what follows, without further explanation.
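To make the scheme concrete, the following Python sketch implements the explicit EM recursion for the point-delay special case discussed later in Remark 4.1 (neutral stochastic delay differential equations with Markovian switching); the equation form and the names u, f, g, xi are illustrative assumptions rather than the paper's fixed notation, and the state is taken to be scalar for simplicity.

```python
import numpy as np

def em_neutral_sdde_ms(u, f, g, xi, r_path, delta, n_steps, rng=None):
    """Explicit EM scheme for the point-delay special case (cf. Remark 4.1)

        d[x(t) - u(x(t - tau), r(t))] = f(x(t), x(t - tau), r(t)) dt
                                        + g(x(t), x(t - tau), r(t)) dB(t),

    where tau = m * delta.  The names u, f, g, xi are illustrative only.

    xi     : array of length m + 1 giving the initial data on the grid of [-tau, 0]
    r_path : precomputed Markov chain states r_0, ..., r_{n_steps}
    """
    if rng is None:
        rng = np.random.default_rng()
    m = len(xi) - 1                                  # tau = m * delta
    X = np.empty(n_steps + m + 1)
    X[:m + 1] = xi                                   # values on [-tau, 0]
    for k in range(n_steps):
        i = k + m                                    # array index of the time t_k
        dB = rng.normal(0.0, np.sqrt(delta))         # Brownian increment over [t_k, t_{k+1}]
        X[i + 1] = (u(X[i + 1 - m], r_path[k + 1])   # neutral term at t_{k+1}
                    + X[i] - u(X[i - m], r_path[k])  # neutral term at t_k
                    + f(X[i], X[i - m], r_path[k]) * delta
                    + g(X[i], X[i - m], r_path[k]) * dB)
    return X
```

Coupled with the Markov-chain sketch above, which supplies r_path, this yields one discrete approximate trajectory; the continuous EM approximation in (2.12) is then recovered by linear interpolation between the grid points, exactly as described above.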

For the existence and uniqueness of the solution of (2.2) and the boundedness of the solution’s moments, we impose the following hypotheses (e.g., see [11]).

Assumption 2.2. For each integer and , there exists a positive constant such that for , with .

Assumption 2.3. There is a constant such that for and .

Assumption 2.4. There exists a constant such that for all , and , for , and .
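For the reader's convenience, the following hedged sketch records the typical forms these three conditions take in this literature; the constants C_R, K, and kappa are names introduced only for this sketch and are not necessarily the paper's own symbols.

```latex
% Local Lipschitz condition (Assumption 2.2): for each R > 0 there is C_R > 0 with
|f(\varphi,t,i) - f(\psi,t,i)| \vee |g(\varphi,t,i) - g(\psi,t,i)|
    \le C_R \,\|\varphi - \psi\| \quad \text{whenever } \|\varphi\| \vee \|\psi\| \le R;
% Linear growth condition (Assumption 2.3): there is K > 0 with
|f(\varphi,t,i)|^2 \vee |g(\varphi,t,i)|^2 \le K\,(1 + \|\varphi\|^2);
% Contraction of the neutral term (Assumption 2.4): there is \kappa \in (0,1) with
|D(\varphi,i) - D(\psi,i)| \le \kappa\,\|\varphi - \psi\|.
```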

We also impose the following condition on the initial data.

Assumption 2.5. for some , and there exists a nondecreasing function such that with the property as .

From Mao and Yuan [11], we may therefore state the following theorem.

Theorem 2.6. Let . If Assumptions 2.3–2.5 are satisfied, then for any , where is a constant dependent on , , , , .

The primary aim of this paper is to establish the following strong mean square convergence theorem for the EM approximations.

Theorem 2.7. If Assumptions 2.2–2.5 hold,
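A hedged reading of this conclusion, written in the form that strong mean-square convergence statements typically take (with y denoting the continuous EM approximation and x the true solution, symbols assumed for this sketch only), is given below.

```latex
\lim_{\Delta \to 0} E\Bigl[\sup_{0 \le t \le T} \bigl|y(t) - x(t)\bigr|^2\Bigr] = 0.
```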

The proof of this theorem is very technical, so we present some lemmas in the next section and then complete the proof in the subsequent section.

3. Lemmas

Lemma 3.1. If Assumptions 2.3–2.5 hold, for any , there exists a constant such that where is independent of .

Proof. For , , set and Recall the inequality that for and any , . Then we have, from Assumption 2.4, By , noting , Consequently, choose , then Hence, which implies Since with , by the Hölder inequality, we have Hence, for any , By Assumption 2.4 and the fact , we compute that Assumption 2.3 and the Hölder inequality give Applying the Burkholder-Davis-Gundy inequality, the Hölder inequality, and Assumption 2.3 yields where is a constant dependent only on . Substituting (3.11), (3.12), and (3.13) into (3.10) gives Hence, from (3.7), we have By the Gronwall inequality we find that From the expressions of and , we know that they are positive constants dependent only on , , , , and , but independent of . The proof is complete.

Lemma 3.2. If Assumptions 2.3–2.5 hold, then for any integer , where , , and is a constant dependent on but independent of .

Proof. For , where , from (2.7), so We therefore have When , by Assumption 2.5 and , When , from (2.13), we have Recall the elementary inequality: for any , and , . Then Consequently We deal with these four terms separately. By (3.21), Noting that (where has been defined in Lemma 3.1), by Assumption 2.3 and (2.15), By the Hölder inequality, for any integer , where is a constant dependent on .
Substituting (3.25), (3.26), and (3.27) into (3.24), choosing , and noting , we have Combining (3.21) with (3.28), from (3.20), we have as required.

Lemma 3.3. If Assumptions 2.3–2.5 hold, where , are constants independent of and , is a constant dependent on but independent of .

Proof. Fix any and . Let , , and be the integers for which , , and , respectively. Clearly, From (2.7), which yields so by (3.20) and Lemma 3.2, noting from (2.15), Therefore, the key is to compute . We discuss the following four possible cases.
Case 1 (). We again divide this case into three possible subcases according to .
Subcase 1 (). From (2.13), which yields From Assumption 2.4, (3.24), (3.26), and Lemma 3.2, we have
By the Hölder inequality, Assumption 2.3, and Lemma 3.1, Setting and and applying the Hölder inequality yield The Doob martingale inequality gives By (3.27), we therefore have Substituting (3.37), (3.38), and (3.41) into (3.36) and noting give Subcase 2 (). From (2.13), so we have Since from (3.26), (3.27), and Subcase 1, noting , we have Subcase 3 (). From (2.13), we have so from Subcase 2, we have From these three subcases, we have
Case 2 ( and ). In this case, applying Assumption 2.5 and Case 1, we have
Case 3 ( and ). In this case, using Assumption 2.5,
Case 4 (). In this case, , so using Assumption 2.5, Substituting these four cases into (3.34) and noting the expression of , there exist , , and such that The proof is complete.

Remark 3.4. It should be pointed out that much simpler proofs of Lemmas 3.2 and 3.3 can be obtained by choosing if we only want to prove Theorem 2.7. However, here we choose to control the stochastic terms and by in Section 3, which will be used to show the order of the strong convergence.

Lemma 3.5. If Assumption 2.4 holds, where is a positive constant independent of .

Proof. Let , the integer part of , and let be the indicator function of the set . Then, by Assumption 2.4 and Lemma 3.1, we obtain where in the last step we use the fact that and are conditionally independent with respect to the σ-algebra generated by . But, by the Markov property, where is a positive constant independent of . Then The proof is complete.

Lemma 3.6. If Assumption 2.3 holds, there is a constant , independent of , such that

Proof. Let , the integer part of . Then with being . Let be the indicator function of the set . Moreover, in what follows, is a generic positive constant independent of , whose value may vary from line to line. With these notations we derive, using Assumption 2.3, that where in the last step we use the fact that and are conditionally independent with respect to the σ-algebra generated by . But, by the Markov property, So, by Lemma 3.1, Therefore Similarly, we can show (3.59).

4. Convergence of the EM Methods

In this section we will use the lemmas above to prove Theorem 2.7. From Lemma 3.1 and Theorem 2.6, there exists a positive constant such that Let be a sufficiently large integer. Define the stopping times where we set as usual. Let Obviously, Recall the following elementary inequality: We thus have, for any , Hence Now, Similarly, Thus We also have Moreover, Using these bounds in (4.7) yields Setting and for any , by the Hölder inequality, when , for , Assumption 2.4 yields Then, we have Hence, for any , by Lemmas 3.2–3.5, Since when , we have By Assumption 2.2, Lemma 3.3, and Lemma 3.6, we may compute By the Doob martingale inequality, Lemma 3.3, Lemma 3.6, and Assumption 2.2, we compute Therefore, (4.17) can be written as Choosing and noting , we have By the Gronwall inequality, we have By (4.13), Given any , we can now choose sufficiently small such that , then choose sufficiently large such that , and finally choose so small such that , and thus as required.

Remark 4.1. Obviously, according to Theorem 2.7, for neutral stochastic delay differential equations with Markovian switching [19], we can easily obtain that the numerical solutions converge to the true solutions in mean square under Assumptions 2.2–2.4.

5. Convergence Order of the EM Method

To reveal the convergence order of the EM method, we need the following assumptions.

Assumption 5.1. There exists a constant such that for all , , , and ,

It is easy to see from the global Lipschitz condition that, for any , In other words, the global Lipschitz condition implies the linear growth condition with growth coefficient
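A short derivation of this implication, written with illustrative symbols (K-bar for the assumed global Lipschitz constant of Assumption 5.1 and f(0, t, i) for the drift evaluated at the zero function), runs as follows.

```latex
% From the global Lipschitz bound, for any \varphi, t \in [0,T], and i \in S:
|f(\varphi,t,i)| \le |f(\varphi,t,i) - f(0,t,i)| + |f(0,t,i)|
    \le \bar{K}\,\|\varphi\| + \sup_{0 \le t \le T,\, i \in S} |f(0,t,i)|,
% so, using (a+b)^2 \le 2a^2 + 2b^2,
|f(\varphi,t,i)|^2 \le 2\bar{K}^2\,\|\varphi\|^2
    + 2\sup_{0 \le t \le T,\, i \in S} |f(0,t,i)|^2,
% and the same bound holds for g, yielding a linear growth estimate of the form
% |f(\varphi,t,i)|^2 \vee |g(\varphi,t,i)|^2 \le \tilde{K}\,(1 + \|\varphi\|^2).
```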

Assumption 5.2. for some , and there exists a positive constant such that

We can state another theorem, which reveals the order of the convergence.

Theorem 5.3. If Assumptions 5.1, 5.2, and 2.4 hold, for any positive constant ,

Proof. Since may be replaced by , from Lemmas 3.2 and 3.3, there exist constants and such that and . Here we do not need to define the stopping times and , and we may repeat the proof in Section 4 and directly compute The proof is complete.

6. Conclusion

In this paper, the EM method for neutral stochastic functional differential equations with Markovian switching has been studied. The results show that the numerical solution converges to the true solution under the local Lipschitz condition. In addition, the results also show that the order of convergence of the numerical method is close to , although the order of strong convergence in mean square for the EM scheme applied to both SDEs and SFDEs under the global Lipschitz condition is one [6, 7, 11]. Hence, the error of the numerical solution can be controlled, and this method may be used to value some path-dependent options more quickly and simply [25].

Acknowledgments

This work was supported by the Fundamental Research Funds for the Central Universities under Grant 2012089, the China Postdoctoral Science Foundation funded project under Grant 2012M511615, the Research Fund for Wuhan Polytechnic University, and the State Key Program of the National Natural Science Foundation of China (Grant no. 61134012).