Abstract and Applied Analysis
Volume 2012 (2012), Article ID 127397, 24 pages
http://dx.doi.org/10.1155/2012/127397
Research Article

Numerical Solutions of Stochastic Differential Delay Equations with Poisson Random Measure under the Generalized Khasminskii-Type Conditions

Department of Mathematics, Harbin Institute of Technology, Harbin 150001, China

Received 3 April 2012; Accepted 22 May 2012

Academic Editor: Zhenya Yan

Copyright © 2012 Minghui Song and Hui Yu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The Euler method is introduced for stochastic differential delay equations (SDDEs) with Poisson random measure under generalized Khasminskii-type conditions, which cover more classes of such equations than before. The main aims of this paper are to prove the existence of global solutions to such equations and then to investigate the convergence in probability of the Euler method under the generalized Khasminskii-type conditions. A numerical example is given to illustrate our results.

1. Introduction

To take into account stochastic effects such as corporate defaults, operational failures, market crashes, or central bank announcements in financial markets, research on stochastic differential equations (SDEs) with Poisson random measure (see [1, 2]) has been important since Merton initiated a model of this type in 1976 (see [3]). Because the rate of change of a financial dynamical system may depend on its past history, SDDEs with Poisson random measure (see [4, 5]), the case we propose and consider in this work, are meaningful.

Since there is, in general, no explicit solution for an SDDE with Poisson random measure, one needs numerical methods, which can be classified into strong approximations and weak approximations (see [6–8]).

We here give an overview of the results on strong approximations of differential equations driven by a Wiener process and a Poisson random measure. Platen [9] presented a convergence theorem with order $\gamma\in\{0.5,1,1.5,\dots\}$ and originally introduced the jump-adapted methods, which are based on all the jump times. Moreover, Bruti-Liberati and Platen (see [10]) obtained the jump-adapted order 1.5 scheme, and they also constructed derivative-free and implicit jump-adapted schemes with the desired order of strong convergence. In [11], for a class of pure jump systems, the order of Taylor schemes is given under weaker conditions than in the earlier literature. In [7, 10], Bruti-Liberati and Platen present drift-implicit schemes of order $\gamma\in\{0.5,1\}$. Recently, [8] developed adaptive time-stepping algorithms, based on a jump-augmented Monte Carlo Euler-Maruyama method, which achieve a prescribed precision. Mao [4] presents the convergence of numerical solutions for variable delay differential equations with Poisson random measure. In [12], improved Runge-Kutta methods are presented to improve the accuracy behaviour for SDEs driven by Poisson random measure with small noise. Clearly, all the results above require that the SDEs with Poisson random measure satisfy the global Lipschitz condition and the linear growth condition. In [5], the Euler scheme is proved to converge to the analytic solution of SDDEs with Wiener process and Poisson random measure under conditions weaker than the global Lipschitz condition and the linear growth condition.

However, there are many SDDEs with Poisson random measure, especially highly nonlinear equations, which satisfy neither the above-mentioned conditions nor the classical Khasminskii-type conditions (see [13–15]); in Section 5, we give such a highly nonlinear equation. Our work is motivated by [16], in which generalized Khasminskii-type conditions are applied to SDDEs with Wiener process. The main contribution of this paper is to present the Euler method for SDDEs with Poisson random measure under generalized Khasminskii-type conditions, which cover more classes of these equations than all the classical conditions mentioned above.

Our work is organized as follows. In Section 2, the properties of SDDEs with Poisson random measure are established under the generalized Khasminskii-type conditions. In Section 3, the Euler method is analyzed under these conditions. In Section 4, we present the convergence in probability of the Euler method. In Section 5, an example is given.

2. The Generalized Khasminskii-Type Conditions for SDDEs with Poisson Random Measure

2.1. Problem's Setting

Throughout this paper, unless otherwise specified, we use the following notation. Let $|\cdot|$ be the Euclidean norm in $\mathbf{R}^d$, $d\in\mathbf{N}$. Let $u_1\vee u_2=\max\{u_1,u_2\}$ and $u_1\wedge u_2=\min\{u_1,u_2\}$. If $A$ is a vector or matrix, its transpose is denoted by $A^T$; if $A$ is a matrix, its trace norm is denoted by $|A|=\sqrt{\mathrm{trace}(A^TA)}$. Let $\tau>0$ and $\mathbf{R}_+=[0,\infty)$. Let $C([-\tau,0];\mathbf{R}^d)$ denote the family of continuous functions $\varphi$ from $[-\tau,0]$ to $\mathbf{R}^d$ with the norm $\|\varphi\|=\sup_{-\tau\le\theta\le0}|\varphi(\theta)|$. Denote by $C(\mathbf{R}^d;\mathbf{R}_+)$ the family of continuous functions from $\mathbf{R}^d$ to $\mathbf{R}_+$, and by $C^2(\mathbf{R}^d;\mathbf{R}_+)$ the family of twice continuously differentiable functions from $\mathbf{R}^d$ to $\mathbf{R}_+$. $[z]$ denotes the largest integer less than or equal to $z\in\mathbf{R}$, and $I_{\mathcal{A}}$ denotes the indicator function of a set $\mathcal{A}$.

The following $d$-dimensional SDDE with Poisson random measure is considered in our paper:
$$dx(t)=a\bigl(x(t^-),x((t-\tau)^-)\bigr)dt+b\bigl(x(t^-),x((t-\tau)^-)\bigr)dW(t)+\int_{\varepsilon}c\bigl(x(t^-),x((t-\tau)^-),v\bigr)\tilde p_{\phi}(dv\times dt),\tag{2.1}$$
for $t>0$, where $\tilde p_{\phi}(dv\times dt)=p_{\phi}(dv\times dt)-\phi(dv)\,dt$ and $x(t^-)$ denotes $\lim_{s\to t^-}x(s)$. The initial data of (2.1) is given by
$$\{x(t):-\tau\le t\le0\}=\xi(t)\in C\bigl([-\tau,0];\mathbf{R}^d\bigr),\tag{2.2}$$
where $x((-\tau)^-)=x(-\tau)$.

The drift coefficient $a:\mathbf{R}^d\times\mathbf{R}^d\to\mathbf{R}^d$, the diffusion coefficient $b:\mathbf{R}^d\times\mathbf{R}^d\to\mathbf{R}^{d\times m_0}$, and the jump coefficient $c:\mathbf{R}^d\times\mathbf{R}^d\times\varepsilon\to\mathbf{R}^d$ are assumed to be Borel measurable and sufficiently smooth.

The randomness in (2.1) is generated as follows (see [8]). An $m_0$-dimensional Wiener process $W=\{W(t)=(W^1(t),\dots,W^{m_0}(t))^T\}$ with independent scalar components is defined on a filtered probability space $(\Omega_W,\mathcal{F}^W,(\mathcal{F}^W_t)_{t\ge0},\mathbf{P}_W)$. A Poisson random measure $p_{\phi}(\omega,dv\times dt)$ is defined on $\Omega_J\times\varepsilon\times[0,\infty)$, where $\varepsilon\subseteq\mathbf{R}^r\setminus\{0\}$ with $r\in\mathbf{N}$, and its deterministic compensator measure is $\phi(dv)\,dt=\lambda f(v)\,dv\,dt$, where $f(v)$ is a probability density and we require finite intensity $\lambda=\phi(\varepsilon)<\infty$. The Poisson random measure is defined on a filtered probability space $(\Omega_J,\mathcal{F}^J,(\mathcal{F}^J_t)_{t\ge0},\mathbf{P}_J)$. The process $x(t)$ is thus defined on the product space $(\Omega,\mathcal{F},(\mathcal{F}_t)_{t\ge0},\mathbf{P})$, where $\Omega=\Omega_W\times\Omega_J$, $\mathcal{F}=\mathcal{F}^W\times\mathcal{F}^J$, $(\mathcal{F}_t)_{t\ge0}=(\mathcal{F}^W_t)_{t\ge0}\times(\mathcal{F}^J_t)_{t\ge0}$, $\mathbf{P}=\mathbf{P}_W\times\mathbf{P}_J$, and $\mathcal{F}_0$ contains all $\mathbf{P}$-null sets. The Wiener process and the Poisson random measure are mutually independent.
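The product structure above is straightforward to simulate. As an illustration (our sketch, not code from the paper), the following samples the two independent noise sources on $[0,T]$: Wiener increments on a uniform grid, and the jump times and i.i.d. marks of a Poisson random measure with finite intensity $\lambda$; the standard normal mark density is an assumed stand-in for an unspecified $f(v)$.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_noise(T, dt, m0=1, lam=2.0, mark_sampler=None):
    """Sample the driving noise: Wiener increments on a uniform grid plus
    the jump times and marks of a Poisson random measure with finite
    intensity lam (equivalently, a compound Poisson process on [0, T])."""
    n = int(round(T / dt))
    # independent scalar Wiener components, one row per time step
    dW = rng.normal(0.0, np.sqrt(dt), size=(n, m0))
    # jump times of a standard Poisson process with intensity lam on [0, T]
    n_jumps = rng.poisson(lam * T)
    jump_times = np.sort(rng.uniform(0.0, T, size=n_jumps))
    # i.i.d. marks distributed according to phi(dv)/phi(eps) = f(v) dv
    if mark_sampler is None:
        mark_sampler = lambda k: rng.normal(0.0, 1.0, size=k)  # assumed f(v)
    marks = mark_sampler(n_jumps)
    return dW, jump_times, marks

dW, jt, marks = sample_noise(T=1.0, dt=0.01)
```

The two sources are drawn independently, mirroring the independence of $W$ and $p_\phi$ on the product space.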

To state the generalized Khasminskii-type conditions, we define the operator $LV:\mathbf{R}^d\times\mathbf{R}^d\to\mathbf{R}$ by
$$LV(x,y)=V_x(x)a(x,y)+\frac12\,\mathrm{trace}\bigl(b^T(x,y)V_{xx}(x)b(x,y)\bigr)+\int_{\varepsilon}\bigl(V(x+c(x,y,v))-V(x)-V_x(x)c(x,y,v)\bigr)\phi(dv),\tag{2.3}$$
where $V\in C^2(\mathbf{R}^d;\mathbf{R}_+)$ and
$$V_x=\Bigl(\frac{\partial V(x)}{\partial x_1},\dots,\frac{\partial V(x)}{\partial x_d}\Bigr),\qquad V_{xx}=\Bigl(\frac{\partial^2V(x)}{\partial x_i\,\partial x_j}\Bigr)_{d\times d}.\tag{2.4}$$
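As a quick illustration of (2.3) (our example, not one from the paper), take $d=m_0=1$ and $V(x)=x^2$, so that $V_x=2x$ and $V_{xx}=2$. Since $(x+c)^2-x^2-2xc=c^2$, the operator reduces to

```latex
LV(x,y) = 2x\,a(x,y) + b^2(x,y) + \int_{\varepsilon} c^2(x,y,v)\,\phi(dv).
```

With this choice of $V$, condition (2.8) below becomes a one-sided growth bound on $2x\,a(x,y)$ plus the squared diffusion and jump intensities.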

Now the generalized Khasminskii-type conditions are given by the following assumptions.

Assumption 2.1. For each integer $k\ge1$, there exists a positive constant $C_k$, dependent on $k$, such that
$$\bigl|a(x,y)-a(\bar x,\bar y)\bigr|^2\vee\bigl|b(x,y)-b(\bar x,\bar y)\bigr|^2\le C_k\bigl(|x-\bar x|^2+|y-\bar y|^2\bigr),\tag{2.5}$$
for $x,y,\bar x,\bar y\in\mathbf{R}^d$ with $|x|\vee|y|\vee|\bar x|\vee|\bar y|\le k$. And there exists a positive constant $\bar C$ such that
$$\int_{\varepsilon}\bigl|c(x,y,v)-c(\bar x,\bar y,v)\bigr|^2\phi(dv)\le\bar C\bigl(|x-\bar x|^2+|y-\bar y|^2\bigr),\tag{2.6}$$
for $x,y,\bar x,\bar y\in\mathbf{R}^d$.
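To see how much further the local condition (2.5) reaches than a global Lipschitz condition, consider (our illustration) $d=1$ and the cubic drift $a(x,y)=-x^3+y$. On the ball $|x|\vee|y|\vee|\bar x|\vee|\bar y|\le k$ we have $|x^3-\bar x^3|\le 3k^2|x-\bar x|$, so

```latex
|a(x,y)-a(\bar x,\bar y)|^2
\le \bigl(3k^{2}|x-\bar x|+|y-\bar y|\bigr)^{2}
\le \underbrace{(18k^{4}\vee 2)}_{C_k}\bigl(|x-\bar x|^{2}+|y-\bar y|^{2}\bigr).
```

No constant independent of $k$ works here, so the classical global Lipschitz condition fails while Assumption 2.1 holds.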

Assumption 2.2. There are two functions $V\in C^2(\mathbf{R}^d;\mathbf{R}_+)$ and $U\in C(\mathbf{R}^d;\mathbf{R}_+)$ as well as two positive constants $\mu_1$ and $\mu_2$ such that
$$\lim_{|x|\to\infty}V(x)=\infty,\tag{2.7}$$
$$LV(x,y)\le\mu_1\bigl(1+V(x)+V(y)+U(y)\bigr)-\mu_2U(x),\tag{2.8}$$
for all $(x,y)\in\mathbf{R}^d\times\mathbf{R}^d$.

Assumption 2.3. There exists a positive constant $C$ such that
$$\int_{\varepsilon}\bigl|c(0,0,v)\bigr|^2\phi(dv)\le C.\tag{2.9}$$

Assumption 2.4. There exists a positive constant $L$ such that the initial data (2.2) satisfies
$$\bigl|\xi(t)-\xi(s)\bigr|\le L|t-s|^{1/2},\qquad -\tau\le s,t\le0.\tag{2.10}$$

2.2. The Existence of Global Solutions

In this section, we analyze the existence and properties of the global solution to (2.1) under Assumptions 2.1, 2.2, and 2.4.

In order to demonstrate the existence of the global solution to (2.1), we recall the following concepts, mainly according to [17, 18].

Definition 2.5. Let $\{x(t)\}_{t\ge-\tau}$ be an $\mathbf{R}^d$-valued stochastic process. The process is said to be càdlàg if it is right continuous and, for almost all $\omega\in\Omega$, the left limit $\lim_{s\to t^-}x(s)$ exists and is finite for all $t>-\tau$.

Definition 2.6. Let $\sigma_{\infty}$ be a stopping time such that $0\le\sigma_{\infty}\le T$ a.s. An $\mathbf{R}^d$-valued, $\mathcal{F}_t$-adapted, càdlàg process $\{x(t):-\tau\le t<\sigma_{\infty}\}$ is called a local solution of (2.1) if $x(t)=\xi(t)$ on $t\in[-\tau,0]$ and, moreover, there is a nondecreasing sequence $\{\sigma_k\}_{k\ge1}$ of stopping times with $0\le\sigma_k\uparrow\sigma_{\infty}$ a.s. such that
$$x(t\wedge\sigma_k)=x(0)+\int_0^{t\wedge\sigma_k}a\bigl(x(s^-),x((s-\tau)^-)\bigr)ds+\int_0^{t\wedge\sigma_k}b\bigl(x(s^-),x((s-\tau)^-)\bigr)dW(s)+\int_0^{t\wedge\sigma_k}\int_{\varepsilon}c\bigl(x(s^-),x((s-\tau)^-),v\bigr)\tilde p_{\phi}(dv\times ds)\tag{2.11}$$
holds for any $t\in[0,T]$ and $k\ge1$ with probability 1. If, furthermore,
$$\limsup_{t\to\sigma_{\infty}}|x(t)|=\infty\quad\text{whenever}\ \sigma_{\infty}<T,\tag{2.12}$$
then it is called a maximal local solution of (2.1) and $\sigma_{\infty}$ is called the explosion time. A local solution $\{x(t):-\tau\le t<\sigma_{\infty}\}$ of (2.1) is called a global solution if $\sigma_{\infty}=\infty$.

Lemma 2.7. Under Assumptions 2.1 and 2.4, for any given initial data (2.2), there is a unique maximal local solution to (2.1).

Proof. From Assumption 2.4, the initial data (2.2) satisfies
$$\max_{-\tau\le t\le0}|\xi(t)|\le\max_{-\tau\le t\le0}\bigl(|\xi(t)-\xi(0)|+|\xi(0)|\bigr)\le L\sqrt{\tau}+|\xi(0)|.\tag{2.13}$$
For each integer $k\ge[L\sqrt{\tau}+|\xi(0)|]+1$, we define
$$z^{[k]}=\frac{|z|\wedge k}{|z|}\,z,\qquad 0^{[k]}=0,\tag{2.14}$$
for $z\in\mathbf{R}^d$, and then define the truncation functions
$$a_k(x,y)=a\bigl(x^{[k]},y^{[k]}\bigr),\qquad b_k(x,y)=b\bigl(x^{[k]},y^{[k]}\bigr),\qquad c_k(x,y,v)=c(x,y,v),\tag{2.15}$$
for $x,y\in\mathbf{R}^d$. Moreover, we consider the equation
$$dx_k(t)=a_k\bigl(x_k(t^-),x_k((t-\tau)^-)\bigr)dt+b_k\bigl(x_k(t^-),x_k((t-\tau)^-)\bigr)dW(t)+\int_{\varepsilon}c_k\bigl(x_k(t^-),x_k((t-\tau)^-),v\bigr)\tilde p_{\phi}(dv\times dt),\tag{2.16}$$
on $t\in[0,T]$ with initial data $x_k(t)=\xi(t)$ on $t\in[-\tau,0]$. Obviously, this equation satisfies the global Lipschitz condition and the linear growth condition. Therefore, according to [4], there is a unique global solution $x_k(t)$ to (2.16), and it is a càdlàg process (see [17]). We define the stopping times
$$\sigma_k=T\wedge\inf\bigl\{t\in[0,T]:|x_k(t)|\ge k\bigr\},\tag{2.17}$$
for $k\ge[L\sqrt{\tau}+|\xi(0)|]+1$, and
$$\sigma_1=\cdots=\sigma_{[L\sqrt{\tau}+|\xi(0)|]}=\sigma_{[L\sqrt{\tau}+|\xi(0)|]+1},\tag{2.18}$$
where we set $\inf\emptyset=\infty$ (as usual, $\emptyset$ denotes the empty set) throughout our paper. We can easily get
$$x_k(t)=x_{k+1}(t),\qquad -\tau\le t\le\sigma_k,\tag{2.19}$$
which means that $\{\sigma_k\}_{k\ge1}$ is a nondecreasing sequence; let $\sigma_{\infty}=\lim_{k\to\infty}\sigma_k$ a.s. Now, we define $\{x(t):-\tau\le t<\sigma_{\infty}\}$ by $x(t)=\xi(t)$ on $t\in[-\tau,0]$ and
$$x(t)=x_k(t),\qquad t\in[\sigma_{k-1},\sigma_k),\ k\ge1,\tag{2.20}$$
where $\sigma_0=0$. From (2.16) and (2.19), we can also obtain
$$x(t\wedge\sigma_k)=x(0)+\int_0^{t\wedge\sigma_k}a\bigl(x(s^-),x((s-\tau)^-)\bigr)ds+\int_0^{t\wedge\sigma_k}b\bigl(x(s^-),x((s-\tau)^-)\bigr)dW(s)+\int_0^{t\wedge\sigma_k}\int_{\varepsilon}c\bigl(x(s^-),x((s-\tau)^-),v\bigr)\tilde p_{\phi}(dv\times ds),\tag{2.21}$$
for any $t\in[0,T]$ and $k\ge1$ with probability 1. Moreover, if $\sigma_{\infty}<T$, then
$$\limsup_{t\to\sigma_{\infty}}|x(t)|\ge\limsup_{k\to\infty}|x_k(\sigma_k)|=\infty.\tag{2.22}$$
Hence $\{x(t):-\tau\le t<\sigma_{\infty}\}$ is a maximal local solution to (2.1).
To show uniqueness, let $\{\bar x(t):-\tau\le t<\bar\sigma_{\infty}\}$ be another maximal local solution. By the same proof as that of Theorem 2.8 in [17], we infer that
$$\mathbf{P}\bigl(x(t,\omega)=\bar x(t,\omega)\ \text{for}\ (t,\omega)\in[-\tau,\sigma_k\wedge\bar\sigma_k]\times\Omega\bigr)=1,\qquad k\ge1.\tag{2.23}$$
Letting $k\to\infty$, we get
$$\mathbf{P}\bigl(x(t,\omega)=\bar x(t,\omega)\ \text{for}\ (t,\omega)\in[-\tau,\sigma_{\infty}\wedge\bar\sigma_{\infty})\times\Omega\bigr)=1.\tag{2.24}$$
Therefore $x(t)$ is the unique maximal local solution to (2.1). This completes the proof.

Now, the existence of the global solution to (2.1) is shown in the following theorem.

Theorem 2.8. Under Assumptions 2.1, 2.2, and 2.4, for any given initial data (2.2), there is a unique global solution $x(t)$ to (2.1) on $t\in[-\tau,\infty)$.

Proof. According to Lemma 2.7, there exists a unique maximal local solution to (2.1) on $[-\tau,\sigma_{\infty})$. Hence, in order to show that this local solution is a global one, we only need to demonstrate that $\sigma_{\infty}=\infty$ a.s. Applying Itô's formula (see [1]) to $V(x(t))$, we have
$$\begin{aligned}dV(x(t))={}&\Bigl[V_x\bigl(x(t^-)\bigr)a\bigl(x(t^-),x((t-\tau)^-)\bigr)+\tfrac12\,\mathrm{trace}\bigl(b^T\bigl(x(t^-),x((t-\tau)^-)\bigr)V_{xx}\bigl(x(t^-)\bigr)b\bigl(x(t^-),x((t-\tau)^-)\bigr)\bigr)\Bigr]dt\\&+\int_{\varepsilon}\Bigl(V\bigl(x(t^-)+c(x(t^-),x((t-\tau)^-),v)\bigr)-V\bigl(x(t^-)\bigr)-V_x\bigl(x(t^-)\bigr)c\bigl(x(t^-),x((t-\tau)^-),v\bigr)\Bigr)\phi(dv)\,dt\\&+V_x\bigl(x(t^-)\bigr)b\bigl(x(t^-),x((t-\tau)^-)\bigr)dW(t)+\int_{\varepsilon}\Bigl(V\bigl(x(t^-)+c(x(t^-),x((t-\tau)^-),v)\bigr)-V\bigl(x(t^-)\bigr)\Bigr)\tilde p_{\phi}(dv\times dt)\\={}&LV\bigl(x(t^-),x((t-\tau)^-)\bigr)dt+V_x\bigl(x(t^-)\bigr)b\bigl(x(t^-),x((t-\tau)^-)\bigr)dW(t)\\&+\int_{\varepsilon}\Bigl(V\bigl(x(t^-)+c(x(t^-),x((t-\tau)^-),v)\bigr)-V\bigl(x(t^-)\bigr)\Bigr)\tilde p_{\phi}(dv\times dt),\end{aligned}\tag{2.25}$$
for $t\in[0,\sigma_{\infty})$.
Our proof is divided into the following steps.
Step 1. For any integer $k\ge[L\sqrt{\tau}+|\xi(0)|]+1$ and $0\le t\le\tau$, integrating (2.25), taking expectations, and using Assumption 2.2, we get
$$\mathbf{E}V\bigl(x(t\wedge\sigma_k)\bigr)\le V(x(0))+\mathbf{E}\int_0^{t\wedge\sigma_k}\Bigl[\mu_1\bigl(1+V(x(s))+V(x(s-\tau))+U(x(s-\tau))\bigr)-\mu_2U(x(s))\Bigr]ds,\tag{2.26}$$
which means
$$\mathbf{E}V\bigl(x(t\wedge\sigma_k)\bigr)\le C_1+\mu_1\mathbf{E}\int_0^{t\wedge\sigma_k}V(x(s))\,ds-\mu_2\mathbf{E}\int_0^{t\wedge\sigma_k}U(x(s))\,ds,\tag{2.27}$$
where
$$C_1=V(x(0))+\mu_1\tau+\mu_1\int_{-\tau}^0V(\xi(s))\,ds+\mu_1\int_{-\tau}^0U(\xi(s))\,ds<\infty.\tag{2.28}$$
From (2.27), we obtain
$$\mathbf{E}V\bigl(x(t\wedge\sigma_k)\bigr)\le C_1+\mu_1\mathbf{E}\int_0^{t\wedge\sigma_k}V(x(s))\,ds\le C_1+\mu_1\int_0^t\mathbf{E}V\bigl(x(s\wedge\sigma_k)\bigr)ds,\tag{2.29}$$
so the Gronwall inequality (see [18]) leads to
$$\mathbf{E}V\bigl(x(t\wedge\sigma_k)\bigr)\le C_1e^{\mu_1\tau},\tag{2.30}$$
for $0\le t\le\tau$ and $k\ge[L\sqrt{\tau}+|\xi(0)|]+1$. Let
$$\varsigma_k=\inf_{|x|\ge k}V(x),\qquad k\ge[L\sqrt{\tau}+|\xi(0)|]+1.\tag{2.31}$$
Therefore, from (2.30), we have
$$\varsigma_k\,\mathbf{P}\bigl(\sigma_k\le\tau\bigr)\le\mathbf{E}\Bigl[V\bigl(x(\sigma_k)\bigr)I_{\{\sigma_k\le\tau\}}\Bigr]\le\mathbf{E}V\bigl(x(\tau\wedge\sigma_k)\bigr)\le C_1e^{\mu_1\tau};\tag{2.32}$$
taking $k\to\infty$ (note that $\varsigma_k\to\infty$ by (2.7)) gives
$$\mathbf{P}\bigl(\sigma_{\infty}\le\tau\bigr)=0.\tag{2.33}$$
Hence we get
$$\mathbf{P}\bigl(\sigma_{\infty}>\tau\bigr)=1.\tag{2.34}$$
It thus follows from (2.30) and (2.34), by taking $k\to\infty$, that
$$\mathbf{E}V(x(t))\le C_1e^{\mu_1\tau},\qquad 0\le t\le\tau.\tag{2.35}$$
Moreover, from (2.27), we get
$$\mathbf{E}\int_0^{\tau\wedge\sigma_k}U(x(s))\,ds\le\mu_2^{-1}\Bigl(C_1+\mu_1\mathbf{E}\int_0^{\tau\wedge\sigma_k}V(x(s))\,ds\Bigr)\le\mu_2^{-1}\Bigl(C_1+\mu_1\int_0^{\tau}\mathbf{E}V\bigl(x(s\wedge\sigma_k)\bigr)ds\Bigr),\tag{2.36}$$
and taking $k\to\infty$ gives
$$\mathbf{E}\int_0^{\tau}U(x(s))\,ds\le\mu_2^{-1}\bigl(C_1+\tau\mu_1C_1e^{\mu_1\tau}\bigr)<\infty,\tag{2.37}$$
where (2.34) and (2.35) are used.
Step 2. For any integer $k\ge[L\sqrt{\tau}+|\xi(0)|]+1$ and $0\le t\le2\tau$, a similar analysis gives
$$\mathbf{E}V\bigl(x(t\wedge\sigma_k)\bigr)\le C_2+\mu_1\mathbf{E}\int_0^{t\wedge\sigma_k}V(x(s))\,ds-\mu_2\mathbf{E}\int_0^{t\wedge\sigma_k}U(x(s))\,ds,\tag{2.38}$$
where
$$C_2=V(x(0))+2\mu_1\tau+\mu_1\int_{-\tau}^0V(\xi(s))\,ds+\mu_1\int_{-\tau}^0U(\xi(s))\,ds+\mu_1\mathbf{E}\int_0^{\tau}V(x(s))\,ds+\mu_1\mathbf{E}\int_0^{\tau}U(x(s))\,ds<\infty,\tag{2.39}$$
by (2.35) and (2.37).
Thus,
$$\mathbf{E}V\bigl(x(t\wedge\sigma_k)\bigr)\le C_2+\mu_1\int_0^t\mathbf{E}V\bigl(x(s\wedge\sigma_k)\bigr)ds,\tag{2.40}$$
which gives
$$\mathbf{E}V\bigl(x(t\wedge\sigma_k)\bigr)\le C_2e^{2\mu_1\tau},\tag{2.41}$$
for $0\le t\le2\tau$ and $k\ge[L\sqrt{\tau}+|\xi(0)|]+1$. Hence we get
$$\varsigma_k\,\mathbf{P}\bigl(\sigma_k\le2\tau\bigr)\le\mathbf{E}V\bigl(x(2\tau\wedge\sigma_k)\bigr)\le C_2e^{2\mu_1\tau};\tag{2.42}$$
taking $k\to\infty$ implies
$$\mathbf{P}\bigl(\sigma_{\infty}\le2\tau\bigr)=0,\tag{2.43}$$
that is,
$$\mathbf{P}\bigl(\sigma_{\infty}>2\tau\bigr)=1.\tag{2.44}$$
Moreover, by taking $k\to\infty$ in (2.41), we then get
$$\mathbf{E}V(x(t))\le C_2e^{2\mu_1\tau},\qquad 0\le t\le2\tau.\tag{2.45}$$
Therefore, from (2.38), (2.44), and (2.45), we have
$$\mathbf{E}\int_0^{2\tau}U(x(s))\,ds\le\mu_2^{-1}\bigl(C_2+2\tau\mu_1C_2e^{2\mu_1\tau}\bigr)<\infty.\tag{2.46}$$
Step 3. For any $i\in\mathbf{N}$, repeating the analysis above yields
$$\mathbf{P}\bigl(\sigma_{\infty}>i\tau\bigr)=1,\qquad\mathbf{E}V(x(t))\le C_ie^{i\mu_1\tau},\quad 0\le t\le i\tau,\qquad\mathbf{E}\int_0^{i\tau}U(x(s))\,ds\le\mu_2^{-1}\bigl(C_i+i\tau\mu_1C_ie^{i\mu_1\tau}\bigr)<\infty,\tag{2.47}$$
where
$$C_i=V(x(0))+\mu_1\mathbf{E}\int_{-\tau}^{(i-1)\tau}\bigl(1+V(x(s))+U(x(s))\bigr)ds<\infty.\tag{2.48}$$
So we get $\mathbf{P}(\sigma_{\infty}=\infty)=1$, and the required result follows.

In the following lemma, we show that the solution of (2.1) remains in a compact set with a large probability.

Lemma 2.9. Under Assumptions 2.1, 2.2, and 2.4, for any pair of $\epsilon\in(0,1)$ and $T>0$, there exists a sufficiently large integer $k^*$, dependent on $\epsilon$ and $T$, such that
$$\mathbf{P}\bigl(\sigma_k\le T\bigr)\le\epsilon,\qquad k\ge k^*,\tag{2.49}$$
where $\sigma_k$ is defined in Lemma 2.7.

Proof. According to the proof of Theorem 2.8, we can get
$$\mathbf{E}V\bigl(x(T\wedge\sigma_k)\bigr)\le C_ie^{i\mu_1\tau},\tag{2.50}$$
for $i$ large enough that $i\tau\ge T$ and for $k\ge[L\sqrt{\tau}+|\xi(0)|]+1$. Therefore, we have
$$\varsigma_k\,\mathbf{P}\bigl(\sigma_k\le T\bigr)\le\mathbf{E}\Bigl[V\bigl(x(\sigma_k)\bigr)I_{\{\sigma_k\le T\}}\Bigr]\le\mathbf{E}V\bigl(x(T\wedge\sigma_k)\bigr)\le C_ie^{i\mu_1\tau},\tag{2.51}$$
where
$$\varsigma_k=\inf_{|x|\ge k}V(x),\qquad k\ge[L\sqrt{\tau}+|\xi(0)|]+1.\tag{2.52}$$
By (2.7) in Assumption 2.2, $\varsigma_k\to\infty$ as $k\to\infty$, so there exists a sufficiently large integer $k^*$ such that
$$\mathbf{P}\bigl(\sigma_k\le T\bigr)\le\frac{C_ie^{i\mu_1\tau}}{\varsigma_k}\le\epsilon,\qquad k\ge k^*.\tag{2.53}$$
So we complete the proof.

3. The Euler Method

In this section, we introduce the Euler method for (2.1) under Assumptions 2.1, 2.2, 2.3, and 2.4.

Given a step size $\Delta t=\tau/m\in(0,1)$, $m\in\mathbf{N}$, the Euler method applied to (2.1) computes approximations $X_n\approx x(t_n)$, where $t_n=n\Delta t$ for $n=-m,-(m-1),\dots,-1,0,1,\dots$, by setting
$$X_n=\xi(n\Delta t),\qquad n=-m,-(m-1),\dots,-1,0,\tag{3.1}$$
and forming
$$X_{n+1}=X_n+a\bigl(X_n,X_{n-m}\bigr)\Delta t+b\bigl(X_n,X_{n-m}\bigr)\Delta W_n+\int_{t_n}^{t_{n+1}}\int_{\varepsilon}c\bigl(X_n,X_{n-m},v\bigr)\tilde p_{\phi}(dv\times dt),\tag{3.2}$$
for $n=0,1,\dots$, where $\Delta W_n=W(t_{n+1})-W(t_n)$.

The continuous-time Euler method $\bar X(t)$ on $t\in[-\tau,\infty)$ is then defined by
$$\bar X(t)=\xi(t),\qquad t\in[-\tau,0],\tag{3.3}$$
$$\bar X(t)=X_0+\int_0^ta\bigl(Z(s),Z(s-\tau)\bigr)ds+\int_0^tb\bigl(Z(s),Z(s-\tau)\bigr)dW(s)+\int_0^t\int_{\varepsilon}c\bigl(Z(s),Z(s-\tau),v\bigr)\tilde p_{\phi}(dv\times ds),\tag{3.4}$$
for $t\ge0$, where
$$Z(t)=\sum_{n=-m}^{\infty}X_nI_{[n\Delta t,(n+1)\Delta t)}(t),\qquad t\in[-\tau,\infty).\tag{3.5}$$

Actually, as shown in [11], $p_{\phi}=\{p_{\phi}(t)=p_{\phi}(\varepsilon\times[0,t])\}$ is a process that counts the number of jumps up to a given time. Since $\lambda<\infty$, the Poisson random measure $p_{\phi}(dv\times dt)$ generates a sequence of pairs $\{(\iota_i,\xi_i),\ i\in\{1,2,\dots,p_{\phi}(T)\}\}$ for a given finite positive constant $T$. Here $\{\iota_i:\Omega\to\mathbf{R}_+,\ i\in\{1,2,\dots,p_{\phi}(T)\}\}$ is an increasing sequence of nonnegative random variables representing the jump times of a standard Poisson process with intensity $\lambda$, and $\{\xi_i:\Omega\to\varepsilon,\ i\in\{1,2,\dots,p_{\phi}(T)\}\}$ is a sequence of independent identically distributed random variables, where $\xi_i$ is distributed according to $\phi(dv)/\phi(\varepsilon)$. Then (3.2) can equivalently be written in the form
$$X_{n+1}=X_n+\Bigl(a\bigl(X_n,X_{n-m}\bigr)-\int_{\varepsilon}c\bigl(X_n,X_{n-m},v\bigr)\phi(dv)\Bigr)\Delta t+b\bigl(X_n,X_{n-m}\bigr)\Delta W_n+\sum_{i=p_{\phi}(t_n)+1}^{p_{\phi}(t_{n+1})}c\bigl(X_n,X_{n-m},\xi_i\bigr).\tag{3.6}$$
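The jump-time form (3.6) is directly implementable. Below is a minimal scalar ($d=m_0=1$) sketch of the scheme (3.1)–(3.2) in this form; the coefficients `a`, `b`, `c`, the initial segment, the delay, and the standard normal mark density are illustrative assumptions, not the paper's Section 5 example.

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_sddes_jumps(a, b, c, c_compensator, xi, tau, T, m, lam, sample_mark):
    """Euler scheme (3.6) for a scalar SDDE with Poisson random measure.
    a(x, y), b(x, y), c(x, y, v): drift, diffusion, jump coefficients;
    c_compensator(x, y) stands for  int_E c(x, y, v) phi(dv) = lam * E[c(x, y, V)];
    xi(t): initial segment on [-tau, 0]; m: steps per delay; lam: jump intensity."""
    dt = tau / m
    n_steps = int(round(T / dt))
    X = np.empty(m + n_steps + 1)
    # (3.1): X_n = xi(n * dt) for n = -m, ..., 0 (array index shifted by +m)
    for n in range(-m, 1):
        X[n + m] = xi(n * dt)
    # jump times and i.i.d. marks of the compound Poisson process on [0, T]
    n_jumps = rng.poisson(lam * T)
    jump_times = np.sort(rng.uniform(0.0, T, size=n_jumps))
    marks = sample_mark(n_jumps)
    for n in range(n_steps):
        xn, xdel = X[n + m], X[n]          # X_n and the delayed value X_{n-m}
        dW = rng.normal(0.0, np.sqrt(dt))
        # jumps with t_n < iota_i <= t_{n+1}
        in_step = (jump_times > n * dt) & (jump_times <= (n + 1) * dt)
        jump_sum = sum(c(xn, xdel, v) for v in marks[in_step])
        X[n + 1 + m] = (xn
                        + (a(xn, xdel) - c_compensator(xn, xdel)) * dt
                        + b(xn, xdel) * dW
                        + jump_sum)
    return X

# illustrative linear coefficients and standard normal marks (assumed)
X = euler_sddes_jumps(
    a=lambda x, y: -x + 0.5 * y,
    b=lambda x, y: 0.2 * x,
    c=lambda x, y, v: 0.1 * x * v,
    c_compensator=lambda x, y: 0.0,   # lam * E[0.1 * x * V] = 0 for zero-mean marks
    xi=lambda t: 1.0, tau=1.0, T=2.0, m=20, lam=2.0,
    sample_mark=lambda k: rng.normal(size=k))
```

The compensator integral is supplied by the caller because it depends on the mark density; for zero-mean marks and a jump coefficient linear in $v$ it vanishes, as in the usage above.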

In order to analyze the Euler method, we will give two lemmas.

The first lemma shows the close relation between the continuous-time Euler solution (3.4) and its step function 𝑍(𝑡).

Lemma 3.1. Suppose Assumptions 2.1 and 2.3 hold. Then for any $T>0$, there exists a positive constant $K_1(k)$, dependent on the integer $k$ and independent of $\Delta t$, such that for all $\Delta t\in(0,1)$ the continuous-time Euler method (3.4) satisfies
$$\mathbf{E}\bigl|\bar X(t)-Z(t)\bigr|^2\le K_1(k)\Delta t,\tag{3.7}$$
for $0\le t\le T\wedge\sigma_k\wedge\rho_k$ and $k\ge[L\sqrt{\tau}+|\xi(0)|]+1$, where $\sigma_k$ is defined in Lemma 2.7 and $\rho_k=\inf\{t\ge0:|\bar X(t)|\ge k\}$.

Proof. For $0\le t\le T\wedge\sigma_k\wedge\rho_k$ and $k\ge[L\sqrt{\tau}+|\xi(0)|]+1$, there is an integer $n$ such that $t\in[t_n,t_{n+1})$. Thus it follows from (3.2) and (3.4) that
$$\bar X(t)-Z(t)=\int_{t_n}^ta\bigl(Z(s),Z(s-\tau)\bigr)ds+\int_{t_n}^tb\bigl(Z(s),Z(s-\tau)\bigr)dW(s)+\int_{t_n}^t\int_{\varepsilon}c\bigl(Z(s),Z(s-\tau),v\bigr)\tilde p_{\phi}(dv\times ds).\tag{3.8}$$
Therefore, by taking expectations, using the Cauchy-Schwarz inequality, and using the martingale properties of $dW(t)$ and $\tilde p_{\phi}(dv\times dt)$, we get
$$\begin{aligned}\mathbf{E}\bigl|\bar X(t)-Z(t)\bigr|^2\le{}&3\,\mathbf{E}\Bigl|\int_{t_n}^ta\bigl(Z(s),Z(s-\tau)\bigr)ds\Bigr|^2+3\,\mathbf{E}\Bigl|\int_{t_n}^tb\bigl(Z(s),Z(s-\tau)\bigr)dW(s)\Bigr|^2+3\,\mathbf{E}\Bigl|\int_{t_n}^t\int_{\varepsilon}c\bigl(Z(s),Z(s-\tau),v\bigr)\tilde p_{\phi}(dv\times ds)\Bigr|^2\\\le{}&3\Delta t\,\mathbf{E}\int_{t_n}^t\bigl|a\bigl(Z(s),Z(s-\tau)\bigr)\bigr|^2ds+3\,\mathbf{E}\int_{t_n}^t\bigl|b\bigl(Z(s),Z(s-\tau)\bigr)\bigr|^2ds+3\,\mathbf{E}\int_{t_n}^t\int_{\varepsilon}\bigl|c\bigl(Z(s),Z(s-\tau),v\bigr)\bigr|^2\phi(dv)\,ds,\end{aligned}\tag{3.9}$$
where the inequality $|u_1+u_2+u_3|^2\le3|u_1|^2+3|u_2|^2+3|u_3|^2$ for $u_1,u_2,u_3\in\mathbf{R}^d$ is used. Therefore, by applying Assumption 2.1, we get
$$\begin{aligned}\mathbf{E}\int_{t_n}^t\bigl|a\bigl(Z(s),Z(s-\tau)\bigr)\bigr|^2ds&\le2\,\mathbf{E}\int_{t_n}^t\bigl|a\bigl(Z(s),Z(s-\tau)\bigr)-a(0,0)\bigr|^2ds+2\,\mathbf{E}\int_{t_n}^t\bigl|a(0,0)\bigr|^2ds\\&\le2C_k\,\mathbf{E}\int_{t_n}^t\bigl(|Z(s)|^2+|Z(s-\tau)|^2\bigr)ds+2|a(0,0)|^2\Delta t\le4k^2C_k\Delta t+2|a(0,0)|^2\Delta t,\end{aligned}$$
and, similarly,
$$\mathbf{E}\int_{t_n}^t\bigl|b\bigl(Z(s),Z(s-\tau)\bigr)\bigr|^2ds\le4k^2C_k\Delta t+2|b(0,0)|^2\Delta t,\qquad\mathbf{E}\int_{t_n}^t\int_{\varepsilon}\bigl|c\bigl(Z(s),Z(s-\tau),v\bigr)\bigr|^2\phi(dv)\,ds\le4k^2\bar C\Delta t+2\Delta t\int_{\varepsilon}|c(0,0,v)|^2\phi(dv).\tag{3.10}$$
Hence, by substituting (3.10) into (3.9) and using $\Delta t<1$, we get
$$\mathbf{E}\bigl|\bar X(t)-Z(t)\bigr|^2\le\Delta t\Bigl(24k^2C_k+12k^2\bar C+6|a(0,0)|^2+6|b(0,0)|^2+6\int_{\varepsilon}|c(0,0,v)|^2\phi(dv)\Bigr),\tag{3.11}$$
for $0\le t\le T\wedge\sigma_k\wedge\rho_k$ and $k\ge[L\sqrt{\tau}+|\xi(0)|]+1$.
So, from Assumption 2.3, we can get the result (3.7) by choosing
$$K_1(k)=24k^2C_k+12k^2\bar C+6|a(0,0)|^2+6|b(0,0)|^2+6\int_{\varepsilon}|c(0,0,v)|^2\phi(dv).\tag{3.12}$$

In the following lemma, we demonstrate that the solution of the continuous-time Euler method (3.4) remains in a compact set with a large probability.

Lemma 3.2. Under Assumptions 2.1, 2.2, 2.3, and 2.4, for any pair of $\epsilon\in(0,1)$ and $T>0$, there exist a sufficiently large integer $k^*$ and a sufficiently small $\Delta t_1$ such that
$$\mathbf{P}\bigl(\rho_{k^*}\le T\bigr)\le\epsilon,\qquad\Delta t\le\Delta t_1,\tag{3.13}$$
where $\rho_{k^*}$ is defined in Lemma 3.1.

Proof. The proof proceeds in the following steps.
Step 1. Applying Itô's formula (see [1]) to $V(\bar X(t))$, for $t\ge0$, we have
$$\begin{aligned}dV(\bar X(t))={}&\Bigl[V_x\bigl(\bar X(t^-)\bigr)a\bigl(Z(t),Z(t-\tau)\bigr)+\tfrac12\,\mathrm{trace}\bigl(b^T\bigl(Z(t),Z(t-\tau)\bigr)V_{xx}\bigl(\bar X(t^-)\bigr)b\bigl(Z(t),Z(t-\tau)\bigr)\bigr)\Bigr]dt\\&+\int_{\varepsilon}\Bigl(V\bigl(\bar X(t^-)+c(Z(t),Z(t-\tau),v)\bigr)-V\bigl(\bar X(t^-)\bigr)-V_x\bigl(\bar X(t^-)\bigr)c\bigl(Z(t),Z(t-\tau),v\bigr)\Bigr)\phi(dv)\,dt\\&+V_x\bigl(\bar X(t^-)\bigr)b\bigl(Z(t),Z(t-\tau)\bigr)dW(t)+\int_{\varepsilon}\Bigl(V\bigl(\bar X(t^-)+c(Z(t),Z(t-\tau),v)\bigr)-V\bigl(\bar X(t^-)\bigr)\Bigr)\tilde p_{\phi}(dv\times dt)\\={}&LV\bigl(\bar X(t^-),\bar X((t-\tau)^-)\bigr)dt+f\bigl(\bar X(t^-),\bar X((t-\tau)^-),Z(t),Z(t-\tau)\bigr)dt\\&+V_x\bigl(\bar X(t^-)\bigr)b\bigl(Z(t),Z(t-\tau)\bigr)dW(t)+\int_{\varepsilon}\Bigl(V\bigl(\bar X(t^-)+c(Z(t),Z(t-\tau),v)\bigr)-V\bigl(\bar X(t^-)\bigr)\Bigr)\tilde p_{\phi}(dv\times dt),\end{aligned}\tag{3.14}$$
where $f:\mathbf{R}^d\times\mathbf{R}^d\times\mathbf{R}^d\times\mathbf{R}^d\to\mathbf{R}$ is defined by
$$\begin{aligned}f\bigl(x,y,Z_1,Z_2\bigr)={}&V_x(x)\bigl(a(Z_1,Z_2)-a(x,y)\bigr)+\tfrac12\,\mathrm{trace}\bigl(b^T(Z_1,Z_2)V_{xx}(x)b(Z_1,Z_2)\bigr)-\tfrac12\,\mathrm{trace}\bigl(b^T(x,y)V_{xx}(x)b(x,y)\bigr)\\&+\int_{\varepsilon}V_x(x)\bigl(c(x,y,v)-c(Z_1,Z_2,v)\bigr)\phi(dv)+\int_{\varepsilon}\Bigl(V\bigl(x+c(Z_1,Z_2,v)\bigr)-V\bigl(x+c(x,y,v)\bigr)\Bigr)\phi(dv).\end{aligned}\tag{3.15}$$
Moreover, for $(x,y,Z_1,Z_2)\in\mathbf{R}^d\times\mathbf{R}^d\times\mathbf{R}^d\times\mathbf{R}^d$ with $|x|\vee|y|\vee|Z_1|\vee|Z_2|\le k$, we have
$$f\bigl(x,y,Z_1,Z_2\bigr)\le L_k\bigl(|x-Z_1|+|y-Z_2|\bigr),\tag{3.16}$$
where Assumptions 2.1 and 2.2 are used and $L_k$ is a positive constant dependent on the integer $k$ and the intensity $\lambda$ and independent of $\Delta t$. Therefore, from (3.16), Assumption 2.4, and (3.7) in Lemma 3.1, we obtain
$$\begin{aligned}\mathbf{E}\int_0^{t\wedge\rho_k}f\bigl(\bar X(s),\bar X(s-\tau),Z(s),Z(s-\tau)\bigr)ds&\le2L_k\int_0^t\Bigl(\mathbf{E}\bigl|\bar X(s\wedge\rho_k)-Z(s\wedge\rho_k)\bigr|^2\Bigr)^{1/2}ds+L_k\sum_{n=-m}^{-1}\int_{t_n}^{t_{n+1}}\bigl|\xi(s)-\xi(t_n)\bigr|\,ds\\&\le2L_kT\sqrt{K_1(k)\Delta t}+L_kL\tau\sqrt{\Delta t},\end{aligned}\tag{3.17}$$
for $0\le t\le T$ and $k\ge[L\sqrt{\tau}+|\xi(0)|]+1$. Hence, by integrating (3.14), taking expectations, applying the martingale properties of $dW(t)$ and $\tilde p_{\phi}(dv\times dt)$, and then using (3.17) and Assumption 2.2, we obtain
$$\mathbf{E}V\bigl(\bar X(t\wedge\rho_k)\bigr)\le V(X_0)+2L_kT\sqrt{K_1(k)\Delta t}+\tau L_kL\sqrt{\Delta t}+\mathbf{E}\int_0^{t\wedge\rho_k}\Bigl[\mu_1\bigl(1+V(\bar X(s))+V(\bar X(s-\tau))+U(\bar X(s-\tau))\bigr)-\mu_2U(\bar X(s))\Bigr]ds,\tag{3.18}$$
for $0\le t\le T$ and $k\ge[L\sqrt{\tau}+|\xi(0)|]+1$.
Step 2. For $0\le t\le\tau$ and $k\ge[L\sqrt{\tau}+|\xi(0)|]+1$, it follows from (3.18) that
$$\mathbf{E}V\bigl(\bar X(t\wedge\rho_k)\bigr)\le V(X_0)+2L_kT\sqrt{K_1(k)\Delta t}+\tau L_kL\sqrt{\Delta t}+\alpha_1+\mu_1\mathbf{E}\int_0^{t\wedge\rho_k}V(\bar X(s))\,ds-\mu_2\mathbf{E}\int_0^{t\wedge\rho_k}U(\bar X(s))\,ds,\tag{3.19}$$
where $\alpha_1=\mu_1\int_{-\tau}^0\bigl(1+V(\xi(s))+U(\xi(s))\bigr)ds$. Thus, from (3.19), we get
$$\mathbf{E}V\bigl(\bar X(t\wedge\rho_k)\bigr)\le V(X_0)+2L_kT\sqrt{K_1(k)\Delta t}+\tau L_kL\sqrt{\Delta t}+\alpha_1+\mu_1\int_0^t\mathbf{E}V\bigl(\bar X(s\wedge\rho_k)\bigr)ds,\tag{3.20}$$
so the Gronwall inequality (see [18]) gives
$$\mathbf{E}V\bigl(\bar X(t\wedge\rho_k)\bigr)\le\Bigl(V(X_0)+2L_kT\sqrt{K_1(k)\Delta t}+\tau L_kL\sqrt{\Delta t}+\alpha_1\Bigr)e^{\mu_1t},\tag{3.21}$$
for $0\le t\le\tau$ and $k\ge[L\sqrt{\tau}+|\xi(0)|]+1$. Moreover, from (3.19) and (3.21), we have
$$\mathbf{E}\int_0^{\tau\wedge\rho_k}U(\bar X(s))\,ds\le\mu_2^{-1}\bigl(1+\mu_1\tau e^{\mu_1\tau}\bigr)\Bigl(V(X_0)+2L_kT\sqrt{K_1(k)\Delta t}+\tau L_kL\sqrt{\Delta t}+\alpha_1\Bigr),\tag{3.22}$$
for $k\ge[L\sqrt{\tau}+|\xi(0)|]+1$.
Step 3. For $0\le t\le2\tau$ and $k\ge[L\sqrt{\tau}+|\xi(0)|]+1$, it follows from (3.18) that
$$\mathbf{E}V\bigl(\bar X(t\wedge\rho_k)\bigr)\le V(X_0)+2L_kT\sqrt{K_1(k)\Delta t}+\tau L_kL\sqrt{\Delta t}+\alpha_2+\mu_1\mathbf{E}\int_0^{t\wedge\rho_k}V(\bar X(s))\,ds-\mu_2\mathbf{E}\int_0^{t\wedge\rho_k}U(\bar X(s))\,ds,\tag{3.23}$$
where $\alpha_2=\mu_1\mathbf{E}\int_0^{2\tau\wedge\rho_k}\bigl(1+V(\bar X(s-\tau))+U(\bar X(s-\tau))\bigr)ds$. In the same way as in Step 2, we obtain
$$\mathbf{E}V\bigl(\bar X(t\wedge\rho_k)\bigr)\le\Bigl(V(X_0)+2L_kT\sqrt{K_1(k)\Delta t}+\tau L_kL\sqrt{\Delta t}+\alpha_2\Bigr)e^{\mu_1t},\qquad 0\le t\le2\tau,\tag{3.24}$$
$$\mathbf{E}\int_0^{2\tau\wedge\rho_k}U(\bar X(s))\,ds\le\mu_2^{-1}\bigl(1+2\mu_1\tau e^{2\mu_1\tau}\bigr)\Bigl(V(X_0)+2L_kT\sqrt{K_1(k)\Delta t}+\tau L_kL\sqrt{\Delta t}+\alpha_2\Bigr).\tag{3.25}$$
Shifting the integration variable in $\alpha_2$ and using (3.21) and (3.22), we can bound
$$\alpha_2\le\alpha_1+\mu_1\tau+\mu_1\bigl(\tau+\mu_2^{-1}\bigr)\bigl(1+\mu_1\tau e^{\mu_1\tau}\bigr)\Bigl(V(X_0)+2L_kT\sqrt{K_1(k)\Delta t}+\tau L_kL\sqrt{\Delta t}+\alpha_1\Bigr).\tag{3.26}$$
So (3.24) becomes
$$\mathbf{E}V\bigl(\bar X(t\wedge\rho_k)\bigr)\le\Bigl(V(X_0)+2L_kT\sqrt{K_1(k)\Delta t}+\tau L_kL\sqrt{\Delta t}+\alpha_1\Bigr)\beta_{1,0}+\beta_{2,0},\tag{3.27}$$
for $0\le t\le2\tau$ and $k\ge[L\sqrt{\tau}+|\xi(0)|]+1$, where
$$\beta_{1,0}=\Bigl(1+\mu_1\bigl(\tau+\mu_2^{-1}\bigr)\bigl(1+\mu_1\tau e^{\mu_1\tau}\bigr)\Bigr)e^{2\mu_1\tau},\qquad\beta_{2,0}=\mu_1\tau e^{2\mu_1\tau}.\tag{3.28}$$
Step 4. By repeating Steps 2 and 3 on successive intervals, we get
$$\mathbf{E}V\bigl(\bar X(T\wedge\rho_k)\bigr)\le\Bigl(V(X_0)+2L_kT\sqrt{K_1(k)\Delta t}+\tau L_kL\sqrt{\Delta t}+\alpha_1\Bigr)\beta_1+\beta_2,\tag{3.29}$$
for $k\ge[L\sqrt{\tau}+|\xi(0)|]+1$, where $\beta_1$ and $\beta_2$ are two constants dependent on $\mu_1,\mu_2,\tau,T$ and independent of $k$ and $\Delta t$. Therefore, we have
$$\mathbf{P}\bigl(\rho_k\le T\bigr)\le\frac{V(X_0)\beta_1+\alpha_1\beta_1+\beta_2+2\beta_1L_kT\sqrt{K_1(k)\Delta t}+\beta_1\tau L_kL\sqrt{\Delta t}}{\vartheta_k},\tag{3.30}$$
where
$$\vartheta_k=\inf_{|x|\ge k}V(x),\qquad k\ge[L\sqrt{\tau}+|\xi(0)|]+1.\tag{3.31}$$
Now, for any $\epsilon\in(0,1)$, we can choose a sufficiently large integer $k^*$ such that
$$\frac{V(X_0)\beta_1+\alpha_1\beta_1+\beta_2}{\vartheta_{k^*}}\le\frac{\epsilon}{2}\tag{3.32}$$
and then a sufficiently small $\Delta t_1$ such that
$$\frac{2\beta_1L_{k^*}T\sqrt{K_1(k^*)\Delta t_1}+\beta_1\tau L_{k^*}L\sqrt{\Delta t_1}}{\vartheta_{k^*}}\le\frac{\epsilon}{2}.\tag{3.33}$$
So, from (3.30), we can obtain
$$\mathbf{P}\bigl(\rho_{k^*}\le T\bigr)\le\epsilon,\qquad\Delta t\le\Delta t_1.\tag{3.34}$$

4. Convergence in Probability

In this section, we show the convergence in probability of the Euler method for (2.1) over a finite time interval $[0,T]$, which is based on the following lemma.

Lemma 4.1. Under Assumptions 2.1, 2.3, and 2.4, for any $T>0$, there exists a positive constant $K_2(k)$, dependent on $k$ and independent of $\Delta t$, such that for all $\Delta t\in(0,1)$ the solution of (2.1) and the continuous-time Euler method (3.4) satisfy
$$\mathbf{E}\Bigl[\sup_{0\le t\le T}\bigl|x(t\wedge\sigma_k\wedge\rho_k)-\bar X(t\wedge\sigma_k\wedge\rho_k)\bigr|^2\Bigr]\le K_2(k)\Delta t,\tag{4.1}$$
where $\sigma_k$ and $\rho_k$ are defined in Lemmas 2.7 and 3.1, respectively, and $k\ge[L\sqrt{\tau}+|\xi(0)|]+1$.

Proof. From (2.1) and (3.4), for any $0\le t'\le T$ and $k\ge[L\sqrt{\tau}+|\xi(0)|]+1$, we have
$$\begin{aligned}\mathbf{E}\Bigl[\sup_{0\le t\le t'}\bigl|x(t\wedge\sigma_k\wedge\rho_k)-\bar X(t\wedge\sigma_k\wedge\rho_k)\bigr|^2\Bigr]\le{}&3\,\mathbf{E}\Bigl[\sup_{0\le t\le t'}\Bigl|\int_0^{t\wedge\sigma_k\wedge\rho_k}\bigl(a\bigl(x(s^-),x((s-\tau)^-)\bigr)-a\bigl(Z(s),Z(s-\tau)\bigr)\bigr)ds\Bigr|^2\Bigr]\\&+3\,\mathbf{E}\Bigl[\sup_{0\le t\le t'}\Bigl|\int_0^{t\wedge\sigma_k\wedge\rho_k}\bigl(b\bigl(x(s^-),x((s-\tau)^-)\bigr)-b\bigl(Z(s),Z(s-\tau)\bigr)\bigr)dW(s)\Bigr|^2\Bigr]\\&+3\,\mathbf{E}\Bigl[\sup_{0\le t\le t'}\Bigl|\int_0^{t\wedge\sigma_k\wedge\rho_k}\int_{\varepsilon}\bigl(c\bigl(x(s^-),x((s-\tau)^-),v\bigr)-c\bigl(Z(s),Z(s-\tau),v\bigr)\bigr)\tilde p_{\phi}(dv\times ds)\Bigr|^2\Bigr],\end{aligned}\tag{4.2}$$
where the inequality $|u_1+u_2+u_3|^2\le3|u_1|^2+3|u_2|^2+3|u_3|^2$ for $u_1,u_2,u_3\in\mathbf{R}^d$ is used; write $\Delta a(s)$, $\Delta b(s)$, $\Delta c(s,v)$ for the bracketed differences in (4.2). By the Cauchy-Schwarz inequality, Assumptions 2.1 and 2.4, Fubini's theorem, and Lemma 3.1, the drift term satisfies
$$\mathbf{E}\Bigl[\sup_{0\le t\le t'}\Bigl|\int_0^{t\wedge\sigma_k\wedge\rho_k}\Delta a(s)\,ds\Bigr|^2\Bigr]\le\bigl(4T^2C_kK_1(k)+TC_kL^2\tau\bigr)\Delta t+4TC_k\int_0^{t'}\mathbf{E}\Bigl[\sup_{0\le u\le s}\bigl|x(u\wedge\sigma_k\wedge\rho_k)-\bar X(u\wedge\sigma_k\wedge\rho_k)\bigr|^2\Bigr]ds.\tag{4.3}$$
Moreover, by the Doob martingale inequality together with the martingale properties of $dW(t)$ and $\tilde p_{\phi}(dv\times dt)$, and again Assumptions 2.1 and 2.4, Fubini's theorem, and Lemma 3.1, the diffusion and jump terms satisfy
$$\mathbf{E}\Bigl[\sup_{0\le t\le t'}\Bigl|\int_0^{t\wedge\sigma_k\wedge\rho_k}\Delta b(s)\,dW(s)\Bigr|^2\Bigr]\le\bigl(16TC_kK_1(k)+4C_kL^2\tau\bigr)\Delta t+16C_k\int_0^{t'}\mathbf{E}\Bigl[\sup_{0\le u\le s}\bigl|x(u\wedge\sigma_k\wedge\rho_k)-\bar X(u\wedge\sigma_k\wedge\rho_k)\bigr|^2\Bigr]ds,$$
$$\mathbf{E}\Bigl[\sup_{0\le t\le t'}\Bigl|\int_0^{t\wedge\sigma_k\wedge\rho_k}\int_{\varepsilon}\Delta c(s,v)\,\tilde p_{\phi}(dv\times ds)\Bigr|^2\Bigr]\le\bigl(16T\bar CK_1(k)+4\bar CL^2\tau\bigr)\Delta t+16\bar C\int_0^{t'}\mathbf{E}\Bigl[\sup_{0\le u\le s}\bigl|x(u\wedge\sigma_k\wedge\rho_k)-\bar X(u\wedge\sigma_k\wedge\rho_k)\bigr|^2\Bigr]ds.\tag{4.4}$$
Therefore, by substituting (4.3) and (4.4) into (4.2), we get
$$\begin{aligned}\mathbf{E}\Bigl[\sup_{0\le t\le t'}\bigl|x(t\wedge\sigma_k\wedge\rho_k)-\bar X(t\wedge\sigma_k\wedge\rho_k)\bigr|^2\Bigr]\le{}&\Delta t\bigl(12T^2C_kK_1(k)+3TC_kL^2\tau+48TC_kK_1(k)+48T\bar CK_1(k)+12C_kL^2\tau+12\bar CL^2\tau\bigr)\\&+\bigl(12TC_k+48C_k+48\bar C\bigr)\int_0^{t'}\mathbf{E}\Bigl[\sup_{0\le u\le s}\bigl|x(u\wedge\sigma_k\wedge\rho_k)-\bar X(u\wedge\sigma_k\wedge\rho_k)\bigr|^2\Bigr]ds.\end{aligned}\tag{4.5}$$
So, by using the Gronwall inequality (see [18]), we have the result (4.1) by choosing
$$K_2(k)=\bigl(12T^2C_kK_1(k)+3TC_kL^2\tau+48TC_kK_1(k)+48T\bar CK_1(k)+12C_kL^2\tau+12\bar CL^2\tau\bigr)\exp\bigl(12T^2C_k+48TC_k+48T\bar C\bigr).\tag{4.6}$$

Now, we state our main theorem which shows the convergence in probability of the continuous-time Euler method (3.4).

Theorem 4.2. Under Assumptions 2.1, 2.2, 2.3, and 2.4, for sufficiently small $\epsilon,\varsigma\in(0,1)$, there is a $\Delta t^*$ such that, for all $\Delta t\le\Delta t^*$,
$$\mathbf{P}\Bigl(\sup_{0\le t\le T}\bigl|x(t)-\bar X(t)\bigr|^2\ge\varsigma\Bigr)\le\epsilon,\tag{4.7}$$
for any $T>0$.

Proof. For sufficiently small $\epsilon,\varsigma\in(0,1)$, we define
$$\bar\Omega=\Bigl\{\omega:\sup_{0\le t\le T}\bigl|x(t)-\bar X(t)\bigr|^2\ge\varsigma\Bigr\}.\tag{4.8}$$
By Lemmas 2.9 and 3.2, there exists a pair of $k^*$ and $\Delta t_1$ such that
$$\mathbf{P}\bigl(\sigma_{k^*}\le T\bigr)\le\frac{\epsilon}{3},\qquad\mathbf{P}\bigl(\rho_{k^*}\le T\bigr)\le\frac{\epsilon}{3},\qquad\Delta t\le\Delta t_1.\tag{4.9}$$
We then have
$$\mathbf{P}\bigl(\bar\Omega\bigr)\le\mathbf{P}\bigl(\bar\Omega\cap\{\sigma_{k^*}\wedge\rho_{k^*}>T\}\bigr)+\mathbf{P}\bigl(\sigma_{k^*}\le T\bigr)+\mathbf{P}\bigl(\rho_{k^*}\le T\bigr)\le\mathbf{P}\bigl(\bar\Omega\cap\{\sigma_{k^*}\wedge\rho_{k^*}>T\}\bigr)+\frac{2\epsilon}{3},\tag{4.10}$$
for $\Delta t\le\Delta t_1$. Moreover, from Lemma 4.1, we have
$$\varsigma\,\mathbf{P}\bigl(\bar\Omega\cap\{\sigma_{k^*}\wedge\rho_{k^*}>T\}\bigr)\le\mathbf{E}\Bigl[I_{\{\sigma_{k^*}\wedge\rho_{k^*}>T\}}\sup_{0\le t\le T}\bigl|x(t)-\bar X(t)\bigr|^2\Bigr]\le\mathbf{E}\Bigl[\sup_{0\le t\le T}\bigl|x(t\wedge\sigma_{k^*}\wedge\rho_{k^*})-\bar X(t\wedge\sigma_{k^*}\wedge\rho_{k^*})\bigr|^2\Bigr]\le K_2(k^*)\Delta t.\tag{4.11}$$
Hence, choosing $\Delta t^*\le\Delta t_1$ so small that $K_2(k^*)\Delta t^*/\varsigma\le\epsilon/3$, we obtain $\mathbf{P}(\bar\Omega)\le\epsilon$ for all $\Delta t\le\Delta t^*$, which completes the proof.
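Theorem 4.2 can be probed numerically (this experiment is ours, not the paper's Section 5 example) by coupling Euler paths at step sizes $\Delta t$ and $\Delta t/2$ through shared Wiener increments and shared jump data, and checking that the empirical proportion of paths whose squared sup-difference exceeds a tolerance $\varsigma$ shrinks as $\Delta t$ decreases. A minimal sketch for an assumed scalar linear test equation:

```python
import numpy as np

rng = np.random.default_rng(1)

# illustrative scalar coefficients (assumed, not the paper's example)
a = lambda x, y: -2.0 * x + y
b = lambda x, y: 0.3 * y
c = lambda x, y, v: 0.2 * x * v
lam, tau, T = 1.0, 0.5, 1.0
xi = lambda t: np.cos(t)            # initial segment on [-tau, 0]

def euler_path(m, dW_fine, jump_times, marks, refine):
    """Euler scheme (3.6) with m steps per delay; dW_fine holds increments
    on the finest grid, aggregated in blocks of size `refine` so that the
    coarse and fine paths share one Brownian motion and one jump stream."""
    dt = tau / m
    n = int(round(T / dt))
    X = np.empty(m + n + 1)
    for i in range(m + 1):
        X[i] = xi(-tau + i * dt)
    for k in range(n):
        dW = dW_fine[k * refine:(k + 1) * refine].sum()
        sel = (jump_times > k * dt) & (jump_times <= (k + 1) * dt)
        jumps = sum(c(X[k + m], X[k], v) for v in marks[sel])
        comp = 0.0  # compensator lam * E[c] = 0 for zero-mean marks
        X[k + m + 1] = X[k + m] + (a(X[k + m], X[k]) - comp) * dt \
                       + b(X[k + m], X[k]) * dW + jumps
    return X[m:]

def sup_diff(m):
    """sup_t |X^{dt} - X^{dt/2}| on the coarse grid for one coupled path."""
    m_f = 2 * m
    n_f = int(round(T / (tau / m_f)))
    dW_fine = rng.normal(0.0, np.sqrt(tau / m_f), size=n_f)
    n_jumps = rng.poisson(lam * T)
    jt = np.sort(rng.uniform(0.0, T, n_jumps))
    marks = rng.normal(size=n_jumps)
    coarse = euler_path(m, dW_fine, jt, marks, refine=2)
    fine = euler_path(m_f, dW_fine, jt, marks, refine=1)
    return np.max(np.abs(coarse - fine[::2]))

for m in (4, 8, 16):
    diffs = np.array([sup_diff(m) for _ in range(200)])
    print(m, np.mean(diffs**2 >= 0.01))  # empirical P(sup-difference^2 >= varsigma)
```

Comparing successive refinements stands in for comparing with the (unavailable) exact solution; under Theorem 4.2 the printed proportions should decrease toward zero as $m$ grows.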