Journal of Applied Mathematics

Volume 2012, Article ID 675651, 32 pages

http://dx.doi.org/10.1155/2012/675651

## Approximations of Numerical Method for Neutral Stochastic Functional Differential Equations with Markovian Switching

Hua Yang^{1} and Feng Jiang^{2}

^{1}School of Mathematics and Computer Science, Wuhan Polytechnic University, Wuhan 430023, China

^{2}School of Statistics and Mathematics, Zhongnan University of Economics and Law, Wuhan 430073, China

Received 9 April 2012; Accepted 17 September 2012

Academic Editor: Zhenyu Huang

Copyright © 2012 Hua Yang and Feng Jiang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Stochastic systems with Markovian switching have been used in a variety of application areas, including biology, epidemiology, mechanics, economics, and finance. In this paper, we study the Euler-Maruyama (EM) method for neutral stochastic functional differential equations with Markovian switching. The main aim is to show that the numerical solutions will converge to the true solutions. Moreover, we obtain the convergence order of the approximate solutions.

#### 1. Introduction

Stochastic systems with Markovian switching have been successfully used in a variety of application areas, including biology, epidemiology, mechanics, economics, and finance [1]. As with deterministic neutral functional differential equations and stochastic functional differential equations (SFDEs), most neutral stochastic functional differential equations with Markovian switching (NSFDEsMS) cannot be solved explicitly, so numerical methods become an indispensable tool. A number of papers have studied the numerical analysis of deterministic neutral functional differential equations; see, for example, [2, 3] and the references therein. The numerical solutions of SFDEs and of stochastic systems with Markovian switching have also been studied extensively; see, for example, [4–16]. Moreover, many well-known theorems for SFDEs have been successfully extended to NSFDEs; for example, [17–24] discussed the stability analysis of the true solutions.

However, to the best of our knowledge, little is yet known about the numerical solutions of NSFDEsMS. In this paper we extend the method developed in [5, 16] to NSFDEsMS and study the strong convergence of the Euler-Maruyama approximations under the local Lipschitz condition, the linear growth condition, and the contraction condition on the neutral term. These three conditions are standard for the existence and uniqueness of the true solutions. Although the method of analysis borrows from [5], the presence of the neutral term and of the Markovian switching essentially complicates the problem, and we develop several new techniques to cope with the difficulties arising from these two terms. Moreover, we also generalize the results in [19].

In Section 2, we describe some preliminaries and define the EM method for NSFDEsMS and state our main result that the approximate solutions strongly converge to the true solutions. The proof of the result is rather technical so we present several lemmas in Section 3 and then complete the proof in Section 4. In Section 5, under the global Lipschitz condition, we reveal the order of the convergence of the approximate solutions. Finally, a conclusion is made in Section 6.

#### 2. Preliminaries and EM Scheme

Throughout this paper, let $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t \ge 0}, P)$ be a complete probability space with a filtration $\{\mathcal{F}_t\}_{t \ge 0}$ satisfying the usual conditions; that is, it is right continuous and increasing while $\mathcal{F}_0$ contains all $P$-null sets. Let $B(t)$ be an $m$-dimensional Brownian motion defined on the probability space. Let $|\cdot|$ be the Euclidean norm in $\mathbb{R}^n$. Let $\tau > 0$ and $T > 0$. Denote by $C([-\tau, 0]; \mathbb{R}^n)$ the family of continuous functions $\varphi$ from $[-\tau, 0]$ to $\mathbb{R}^n$ with the norm $\|\varphi\| = \sup_{-\tau \le \theta \le 0} |\varphi(\theta)|$. Let $p > 0$ and let $L^p_{\mathcal{F}_0}([-\tau, 0]; \mathbb{R}^n)$ be the family of $\mathcal{F}_0$-measurable, $C([-\tau, 0]; \mathbb{R}^n)$-valued random variables $\xi$ such that $E\|\xi\|^p < \infty$. If $x(t)$ is an $\mathbb{R}^n$-valued stochastic process on $t \in [-\tau, \infty)$, we let $x_t = \{x(t + \theta) : -\tau \le \theta \le 0\}$ for $t \ge 0$.

Let $r(t)$, $t \ge 0$, be a right-continuous Markov chain on the probability space taking values in a finite state space $S = \{1, 2, \ldots, N\}$ with the generator $\Gamma = (\gamma_{ij})_{N \times N}$ given by

$$P\{r(t + \Delta) = j \mid r(t) = i\} = \begin{cases} \gamma_{ij}\Delta + o(\Delta) & \text{if } i \ne j, \\ 1 + \gamma_{ii}\Delta + o(\Delta) & \text{if } i = j, \end{cases}$$

where $\Delta > 0$. Here $\gamma_{ij} \ge 0$ is the transition rate from $i$ to $j$ if $i \ne j$ while $\gamma_{ii} = -\sum_{j \ne i} \gamma_{ij}$. We assume that the Markov chain $r(\cdot)$ is independent of the Brownian motion $B(\cdot)$. It is well known that almost every sample path of $r(t)$ is a right-continuous step function with a finite number of simple jumps in any finite subinterval of $[0, \infty)$.

In this paper, we consider the $n$-dimensional NSFDEsMS

$$d[x(t) - u(x_t, r(t))] = f(x_t, r(t))\,dt + g(x_t, r(t))\,dB(t), \quad t \ge 0, \tag{2.2}$$

with initial data $x_0 = \xi \in L^2_{\mathcal{F}_0}([-\tau, 0]; \mathbb{R}^n)$ and $r(0) = i_0 \in S$, where $u : C([-\tau, 0]; \mathbb{R}^n) \times S \to \mathbb{R}^n$, $f : C([-\tau, 0]; \mathbb{R}^n) \times S \to \mathbb{R}^n$, and $g : C([-\tau, 0]; \mathbb{R}^n) \times S \to \mathbb{R}^{n \times m}$. As a standing hypothesis we assume that both $f$ and $g$ are sufficiently smooth so that (2.2) has a unique solution. We refer the reader to Mao [10, 12] for the conditions on the existence and uniqueness of the solution $x(t)$. The initial data $\xi$ and $r(0)$ could be random, but the Markov property ensures that it is sufficient to consider only the case when both are constants.

To analyze the Euler-Maruyama (EM) method, we need the following lemma (see [6, 7, 10–12, 16]).

Lemma 2.1. *Given $\Delta > 0$, let $r_k^\Delta = r(k\Delta)$ for $k \ge 0$. Then $\{r_k^\Delta, k = 0, 1, 2, \ldots\}$ is a discrete Markov chain with the one-step transition probability matrix

$$P(\Delta) = (P_{ij}(\Delta))_{N \times N} = e^{\Delta \Gamma}.$$
*

For completeness, we give the simulation of the Markov chain as follows. Given a stepsize $\Delta > 0$, we compute the one-step transition probability matrix $P(\Delta) = e^{\Delta \Gamma}$. Let $r_0^\Delta = i_0$ and generate a random number $\xi_1$ which is uniformly distributed in $[0, 1]$. Define

$$r_1^\Delta = i_1 \quad \text{if } \xi_1 \in \left[\sum_{j=1}^{i_1 - 1} P_{i_0, j}(\Delta), \; \sum_{j=1}^{i_1} P_{i_0, j}(\Delta)\right),$$

where we set $\sum_{j=1}^{0} P_{i_0, j}(\Delta) = 0$ as usual. Generate independently a new random number $\xi_2$, which is again uniformly distributed in $[0, 1]$, and then define

$$r_2^\Delta = i_2 \quad \text{if } \xi_2 \in \left[\sum_{j=1}^{i_2 - 1} P_{r_1^\Delta, j}(\Delta), \; \sum_{j=1}^{i_2} P_{r_1^\Delta, j}(\Delta)\right).$$

Repeating this procedure, a trajectory of $\{r_k^\Delta, k = 0, 1, 2, \ldots\}$ can be generated. This procedure can be carried out independently to obtain more trajectories.
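As an illustration, the simulation procedure above can be sketched in Python with NumPy. The two-state generator matrix, step size, and trajectory length below are illustrative choices, not values from the paper:

```python
import numpy as np

def transition_matrix(Gamma, Delta, terms=30):
    """One-step transition matrix P(Delta) = exp(Delta * Gamma)
    (Lemma 2.1), computed here by a truncated Taylor series."""
    P = np.eye(Gamma.shape[0])
    term = np.eye(Gamma.shape[0])
    for k in range(1, terms):
        term = term @ (Delta * Gamma) / k
        P = P + term
    return P

def simulate_chain(Gamma, Delta, n_steps, i0=0, rng=None):
    """One trajectory of the discrete chain: at each step, draw a
    uniform random number and select the next state from the
    cumulative row probabilities, as described in the text."""
    rng = np.random.default_rng(rng)
    P = transition_matrix(Gamma, Delta)
    path = [i0]
    for _ in range(n_steps):
        u = rng.uniform()
        path.append(int(np.searchsorted(np.cumsum(P[path[-1]]), u)))
    return path

# Illustrative two-state generator: rows sum to zero, off-diagonals >= 0.
Gamma = np.array([[-1.0, 1.0], [2.0, -2.0]])
path = simulate_chain(Gamma, Delta=0.01, n_steps=200, i0=0, rng=0)
```

Since the rows of $\Gamma$ sum to zero, every Taylor term beyond the identity has zero row sums, so the computed matrix has rows summing to one, as a transition matrix must.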

Now we can define the Euler-Maruyama (EM) approximate solution for (2.2) on the finite time interval $[0, T]$. Without loss of generality, we may assume that $T/\tau$ is a rational number; otherwise we may replace $T$ by a larger number. Let the step size $\Delta \in (0, 1)$ be a fraction of both $\tau$ and $T$; namely, $\Delta = \tau/m = T/M$ for some integers $m$ and $M$. The explicit discrete EM approximate solution $Y_k$, $k = -m, -m+1, \ldots, M$, is defined as follows: $Y_k = \xi(k\Delta)$ for $-m \le k \le 0$, and for $0 \le k \le M - 1$,

$$Y_{k+1} = Y_k + u(\bar{Y}_{k+1}, r^\Delta_{k+1}) - u(\bar{Y}_k, r^\Delta_k) + f(\bar{Y}_k, r^\Delta_k)\Delta + g(\bar{Y}_k, r^\Delta_k)\Delta B_k, \tag{2.7}$$

where $\Delta B_k = B((k+1)\Delta) - B(k\Delta)$ and $\bar{Y}_k$ is a $C([-\tau, 0]; \mathbb{R}^n)$-valued random variable defined by linear interpolation of the points $Y_{k-m}, \ldots, Y_k$ over $[-\tau, 0]$, where, in order for $\bar{Y}_k$ to be well defined for all $k$, we set $Y_j = \xi(-\tau)$ for $j < -m$.

That is, is the linear interpolation of , . We hence have We therefore obtain It is obvious that .
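To make the discrete scheme (2.7) concrete, here is a minimal NumPy sketch for a scalar equation in which the neutral term and the coefficients depend on the segment only through the point delay $x(t - \tau)$, rather than the whole segment $x_t$. This point-delay simplification, and the coefficient functions below, are illustrative assumptions, not the paper's setting:

```python
import numpy as np

def em_neutral(u, f, g, xi, tau, T, Delta, Gamma, i0=0, rng=None):
    """EM iteration for a scalar point-delay neutral equation:
      Y_{k+1} = Y_k + u(Y_{k+1-m}, r_{k+1}) - u(Y_{k-m}, r_k)
                + f(Y_k, Y_{k-m}, r_k) * Delta
                + g(Y_k, Y_{k-m}, r_k) * dB_k,
    with Delta = tau/m, and the Markov chain simulated from the
    first-order approximation P ~ I + Delta * Gamma."""
    rng = np.random.default_rng(rng)
    m = int(round(tau / Delta))
    N = int(round(T / Delta))
    P = np.eye(Gamma.shape[0]) + Delta * Gamma
    # initial segment on [-tau, 0]: Y[j] = xi(-tau + j*Delta), Y[m] = Y_0
    Y = [xi(-tau + j * Delta) for j in range(m + 1)]
    r = [i0]
    for _ in range(N):
        unif = rng.uniform()
        r.append(int(np.searchsorted(np.cumsum(P[r[-1]]), unif)))
        dB = rng.normal(0.0, np.sqrt(Delta))
        yk, ykm, ykm1 = Y[-1], Y[-1 - m], Y[-m]
        Y.append(yk + u(ykm1, r[-1]) - u(ykm, r[-2])
                 + f(yk, ykm, r[-2]) * Delta
                 + g(yk, ykm, r[-2]) * dB)
    return np.array(Y[m:])  # Y_0, Y_1, ..., Y_N

# Illustrative coefficients; u is a contraction with kappa = 0.1 < 1.
u_fun = lambda y, i: 0.1 * y
f_fun = lambda y, yd, i: -2.0 * y + 0.5 * yd if i == 0 else -y
g_fun = lambda y, yd, i: 0.3 * y
Y = em_neutral(u_fun, f_fun, g_fun, xi=lambda t: 1.0 + t,
               tau=0.5, T=1.0, Delta=0.01,
               Gamma=np.array([[-1.0, 1.0], [2.0, -2.0]]), rng=1)
```

Note that the neutral term enters as a difference of two evaluations, so the scheme advances $Y_k - u(\cdot)$ by the usual EM increment, mirroring the differential of $x(t) - u(x_t, r(t))$ in (2.2).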

In our analysis it will be more convenient to use continuous-time approximations. We hence introduce the $C([-\tau, 0]; \mathbb{R}^n)$-valued step process and define the continuous EM approximate solution $X(t)$ as follows: let $X(t) = \xi(t)$ for $-\tau \le t \le 0$, while for $t \in [0, T]$ the process $X(t)$ is given by (2.12). Clearly, (2.12) can also be written in the form (2.13). In particular, this shows that $X(k\Delta) = Y_k$; that is, the discrete and continuous EM approximate solutions coincide at the grid points. We know that $X(t)$ is not computable, because it requires knowledge of the entire Brownian path, not just its $\Delta$-increments. However, since the two solutions coincide at the grid points, the error bound for $X(t)$ will automatically imply the error bound for $Y_k$. It is then obvious that (2.14) holds. Moreover, for any $t \ge 0$, letting $[t/\Delta]$ be the integer part of $t/\Delta$, then (2.15) holds. These properties will be used frequently in what follows, without further explanation.

For the existence and uniqueness of the solution of (2.2) and the boundedness of the solution’s moments, we impose the following hypotheses (e.g., see [11]).

*Assumption 2.2.* For each integer $R \ge 1$ and each $i \in S$, there exists a positive constant $C_R$ such that
$$|f(\varphi, i) - f(\psi, i)|^2 \vee |g(\varphi, i) - g(\psi, i)|^2 \le C_R \|\varphi - \psi\|^2$$
for $\varphi, \psi \in C([-\tau, 0]; \mathbb{R}^n)$ with $\|\varphi\| \vee \|\psi\| \le R$.

*Assumption 2.3.* There is a constant $K > 0$ such that
$$|f(\varphi, i)|^2 \vee |g(\varphi, i)|^2 \le K(1 + \|\varphi\|^2)$$
for $\varphi \in C([-\tau, 0]; \mathbb{R}^n)$ and $i \in S$.

*Assumption 2.4.* There exists a constant $\kappa \in (0, 1)$ such that, for all $\varphi, \psi \in C([-\tau, 0]; \mathbb{R}^n)$ and $i \in S$,
$$|u(\varphi, i) - u(\psi, i)| \le \kappa \|\varphi - \psi\|,$$
with $u(0, i) = 0$ for $i \in S$.

We also impose the following condition on the initial data.

*Assumption 2.5.* $\xi \in L^p_{\mathcal{F}_0}([-\tau, 0]; \mathbb{R}^n)$ for some $p > 2$, and there exists a nondecreasing function $\alpha(\cdot)$ such that
$$E|\xi(t) - \xi(s)|^2 \le \alpha(t - s) \quad \text{for } -\tau \le s \le t \le 0,$$
with the property that $\alpha(s) \to 0$ as $s \to 0$.

From Mao and Yuan [11], we may therefore state the following theorem.

Theorem 2.6. *Let $p \ge 2$. If Assumptions 2.3–2.5 are satisfied, then
$$E\left[\sup_{-\tau \le t \le T} |x(t)|^p\right] \le C$$
for any $T > 0$, where $C$ is a constant dependent on $\xi$, $K$, $\kappa$, $p$, and $T$.*

The primary aim of this paper is to establish the following strong mean square convergence theorem for the EM approximations.

Theorem 2.7. *If Assumptions 2.2–2.5 hold, then
$$\lim_{\Delta \to 0} E\left[\sup_{0 \le t \le T} |X(t) - x(t)|^2\right] = 0,$$
where $X(t)$ is the continuous EM approximate solution.*

The proof of this theorem is very technical, so we present some lemmas in the next section and then complete the proof in the subsequent section.

#### 3. Lemmas

Lemma 3.1. *If Assumptions 2.3–2.5 hold, then for any $\Delta \in (0, 1)$,
$$E\left[\sup_{-\tau \le t \le T} |X(t)|^2\right] \le C_1,$$
where $C_1$ is a constant independent of $\Delta$.*

*Proof. * For , , set and
Recall the inequality that, for $a, b \ge 0$ and any $\varepsilon > 0$, $(a + b)^2 \le (1 + \varepsilon)a^2 + (1 + \varepsilon^{-1})b^2$. Then we have, from Assumption 2.4,
By , noting ,
Consequently, choose , then
Hence,
which implies
Since
with , by the Hölder inequality, we have
Hence, for any ,
By Assumption 2.4 and the fact , we compute that
Assumption 2.3 and the Hölder inequality give
Applying the Burkholder-Davis-Gundy inequality, the Hölder inequality and Assumption 2.3 yield
where is a constant dependent only on . Substituting (3.11), (3.12), and (3.13) into (3.10) gives
Hence from (3.7), we have
By the Gronwall inequality we find that
From the expressions of these constants, we know that they are positive constants dependent only on the initial data and on the constants in Assumptions 2.3–2.5, but independent of $\Delta$. The proof is complete.

Lemma 3.2. *If Assumptions 2.3–2.5 hold, then for any integer ,
**
where , , and is a constant dependent on but independent of .*

*Proof. *For , where , from (2.7),
so
We therefore have
When , by Assumption 2.5 and ,
When , from (2.13), we have
Recall the elementary inequality: for any $a_i \ge 0$, $i = 1, \ldots, n$, and $p \ge 1$, $(a_1 + \cdots + a_n)^p \le n^{p-1}(a_1^p + \cdots + a_n^p)$. Then
Consequently
We deal with these four terms separately. By (3.21),
Noting that (where has been defined in Lemma 3.1), by Assumption 2.3 and (2.15),
By the Hölder inequality, for any integer ,
where is a constant dependent on .

Substituting (3.25), (3.26), and (3.27) into (3.24), choosing , and noting , we have
Combining (3.21) with (3.28), from (3.20), we have
as required.

Lemma 3.3. *If Assumptions 2.3–2.5 hold,
**
where , are constants independent of and , is a constant dependent on but independent of .*

*Proof. *Fix any and . Let , , and be the integers for which , , and , respectively. Clearly,
From (2.7),
which yields
so by (3.20) and Lemma 3.2, noting from (2.15),
Therefore, the key is to compute this expectation. We discuss the following four possible cases. *Case 1*. We again divide this case into three possible subcases. *Subcase 1*. From (2.13),
which yields
From Assumption 2.4, (3.24), (3.26), and Lemma 3.2, we have

By the Hölder inequality, Assumption 2.3, and Lemma 3.1,
Setting and and applying the Hölder inequality yield
The Doob martingale inequality gives
By (3.27), we therefore have
Substituting (3.37), (3.38), and (3.41) into (3.36) and noting give
*Subcase 2*. From (2.13),
so we have
Since
from (3.26), (3.27), and Subcase 1, we have
*Subcase 3*. From (2.13), we have
so from Subcase 2, we have