Abstract and Applied Analysis
Volume 2012, Article ID 127397, 24 pages
http://dx.doi.org/10.1155/2012/127397
Research Article

Numerical Solutions of Stochastic Differential Delay Equations with Poisson Random Measure under the Generalized Khasminskii-Type Conditions

Department of Mathematics, Harbin Institute of Technology, Harbin 150001, China

Received 3 April 2012; Accepted 22 May 2012

Academic Editor: Zhenya Yan

Copyright © 2012 Minghui Song and Hui Yu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The Euler method is introduced for stochastic differential delay equations (SDDEs) with Poisson random measure under generalized Khasminskii-type conditions, which cover more classes of such equations than before. The main aims of this paper are to prove the existence of global solutions to such equations and to investigate the convergence in probability of the Euler method under these generalized conditions. A numerical example is given to illustrate our results.

1. Introduction

To take into consideration stochastic effects in financial markets, such as corporate defaults, operational failures, market crashes, or central bank announcements, research on stochastic differential equations (SDEs) with Poisson random measure (see [1, 2]) is important; Merton initiated the modeling of such effects by equations of this type (see [3]). Because the rate of change of a financial dynamical system depends on its past history, SDDEs with Poisson random measure (see [4, 5]), the case we propose and consider in this work, are meaningful.

Since there is no explicit solution for an SDDE with Poisson random measure, one needs, in general, numerical methods, which can be classified into strong approximations and weak approximations (see [6–8]).

We here give an overview of the results on strong approximations of differential equations driven by a Wiener process and a Poisson random measure. Platen [9] presented a strong convergence theorem and originally introduced the jump-adapted methods, which are based on all the jump times. Moreover, Bruti-Liberati and Platen (see [10]) obtained jump-adapted schemes, and they also constructed derivative-free and implicit jump-adapted schemes with the desired order of strong convergence. In [11], for a class of pure jump systems, the convergence order of Taylor schemes is established under weaker conditions than in the existing literature. In [7, 10], Bruti-Liberati and Platen present drift-implicit schemes. Recently, [8] developed adaptive time-stepping algorithms based on a jump-augmented Monte Carlo Euler-Maruyama method, which achieve a prescribed precision. Mao [4] presented the convergence of numerical solutions for variable delay differential equations with Poisson random measure. In [12], improved Runge-Kutta methods are presented to improve the accuracy of solutions of problems with small noise for SDEs driven by Poisson random measure. All the results above require that the SDEs with Poisson random measure satisfy the global Lipschitz condition and the linear growth condition. In [5], the Euler scheme is proved to converge to the analytic solution of SDDEs with Wiener process and Poisson random measure under conditions weaker than the global Lipschitz condition and the linear growth condition.

However, there are many SDDEs with Poisson random measure, especially highly nonlinear equations, which satisfy neither the above-mentioned conditions nor the classical Khasminskii-type conditions (see [13–15]); in Section 5, we give such a highly nonlinear equation. Our work is motivated by [16], in which generalized Khasminskii-type conditions are applied to SDDEs with Wiener process. The main contribution of our paper is to present the Euler method for SDDEs with Poisson random measure under generalized Khasminskii-type conditions, which cover more classes of these equations than all the classical conditions mentioned above.

Our work is organized as follows. In Section 2, the properties of SDDEs with Poisson random measure are given under the generalized Khasminskii-type conditions. In Section 3, the Euler method is analyzed under such conditions. In Section 4, we present the convergence in probability of the Euler method. In Section 5, an example is given.

2. The Generalized Khasminskii-Type Conditions for SDDEs with Poisson Random Measure

2.1. Problem Setting

Throughout this paper, unless otherwise specified, we use the following notation. Let |·| be the Euclidean norm in R^n. If A is a vector or matrix, its transpose is denoted by A^T; if A is a matrix, its trace norm is denoted by |A| = √(trace(A^T A)). Let τ > 0 and let C([−τ, 0]; R^n) denote the family of continuous functions φ from [−τ, 0] to R^n with the norm ‖φ‖ = sup_{−τ ≤ θ ≤ 0} |φ(θ)|. Let C^{2,1}(R^n × R_+; R_+) denote the family of functions V(x, t) on R^n × R_+ that are twice continuously differentiable in x and once in t. [x] denotes the largest integer less than or equal to x, and 1_A denotes the indicator function of a set A.

The following n-dimensional SDDE with Poisson random measure is considered in our paper:

dx(t) = a(x(t), x(t − τ)) dt + b(x(t), x(t − τ)) dW(t) + ∫_E c(x(t−), x((t − τ)−), v) p_φ(dv × dt)   (2.1)

for t ≥ 0, where x(t−) denotes lim_{s↑t} x(s). The initial data of (2.1) is given by

x(t) = ξ(t), t ∈ [−τ, 0],   (2.2)

where ξ ∈ C([−τ, 0]; R^n).

The drift coefficient a, the diffusion coefficient b, and the jump coefficient c are assumed to be Borel measurable and sufficiently smooth.

The randomness in (2.1) is generated by the following (see [8]). An m-dimensional Wiener process W = (W^1, …, W^m) with independent scalar components is defined on a filtered probability space (Ω^W, F^W, (F_t^W)_{t≥0}, P^W). A Poisson random measure p_φ(dv × dt) is defined on Ω^p × E × [0, ∞), where E ⊆ R^r \ {0} with r ∈ N; its deterministic compensated measure is φ(dv) dt = λ f(v) dv dt, where f(v) is a probability density, and we require the finite intensity λ = φ(E) < ∞. The Poisson random measure is defined on a filtered probability space (Ω^p, F^p, (F_t^p)_{t≥0}, P^p). The process x is thus defined on the product space (Ω, F, (F_t)_{t≥0}, P), where Ω = Ω^W × Ω^p, F = F^W × F^p, F_t = F_t^W × F_t^p, P = P^W × P^p, and F_0 contains all P-null sets. The Wiener process and the Poisson random measure are mutually independent.
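As an illustration of this setup, the following sketch samples one realization of the driving noise: Wiener increments on a uniform grid, together with the jump times and marks of a Poisson random measure of finite intensity. The concrete choices (intensity lam = 2 and a standard normal mark density) are ours for illustration, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_driving_noise(T=1.0, n=1000, lam=2.0):
    """Sample one path of the driving noise on [0, T]:
    Wiener increments on a uniform grid of n steps, plus the jump
    times and marks of a Poisson random measure with intensity lam."""
    dt = T / n
    dW = rng.normal(0.0, np.sqrt(dt), size=n)      # Wiener increments
    n_jumps = rng.poisson(lam * T)                  # total jump count on [0, T]
    jump_times = np.sort(rng.uniform(0.0, T, size=n_jumps))
    marks = rng.normal(size=n_jumps)                # marks v_i (assumed density)
    return dW, jump_times, marks

dW, jt, marks = simulate_driving_noise()
```

Conditioned on the jump count, the jump times of a Poisson process are distributed as sorted uniforms, which is why the count-then-uniform sampling above is valid.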

To state the generalized Khasminskii-type conditions, for V ∈ C^{2,1}(R^n × R_+; R_+) we define the operator LV : R^n × R^n × R_+ → R by

LV(x, y, t) = V_t(x, t) + V_x(x, t) a(x, y) + (1/2) trace[b^T(x, y) V_xx(x, t) b(x, y)] + ∫_E [V(x + c(x, y, v), t) − V(x, t)] φ(dv),

where V_t(x, t) = ∂V(x, t)/∂t, V_x(x, t) = (∂V(x, t)/∂x_1, …, ∂V(x, t)/∂x_n), and V_xx(x, t) = (∂²V(x, t)/∂x_i ∂x_j)_{n×n}.

Now the generalized Khasminskii-type conditions are given by the following assumptions.

Assumption 2.1. For each integer k ≥ 1, there exists a positive constant C_k, dependent on k, such that

|a(x, y) − a(x̄, ȳ)|² ∨ |b(x, y) − b(x̄, ȳ)|² ≤ C_k (|x − x̄|² + |y − ȳ|²)

for x, y, x̄, ȳ ∈ R^n with |x| ∨ |y| ∨ |x̄| ∨ |ȳ| ≤ k. And there exists a positive constant C such that

∫_E |c(x, y, v) − c(x̄, ȳ, v)|² φ(dv) ≤ C (|x − x̄|² + |y − ȳ|²)

for x, y, x̄, ȳ ∈ R^n.

Assumption 2.2. There are two functions and as well as two positive constants and such that for all .

Assumption 2.3. There exists a positive constant such that

Assumption 2.4. There exists a positive constant such that the initial data (2.2) satisfies

2.2. The Existence of Global Solutions

In this section, we analyze the existence and the property of the global solution to (2.1) under Assumptions 2.1, 2.2, and 2.4.

In order to demonstrate the existence of the global solution to (2.1), we recall the following concepts, mainly following [17, 18].

Definition 2.5. Let x = {x(t)} be an R^n-valued stochastic process. The process x is said to be càdlàg if it is right continuous and, for almost all ω ∈ Ω, the left limit lim_{s↑t} x(s) exists and is finite for all t > 0.

Definition 2.6. Let be a stopping time such that a.s. An -valued, -adapted, and càdlàg process is called a local solution of (2.1) if on and, moreover, there is a nondecreasing sequence of stopping times such that a.s. and holds for any and with probability . If, furthermore, then it is called a maximal local solution of (2.1) and is called the explosion time. A local solution to (2.1) is called a global solution if .

Lemma 2.7. Under Assumptions 2.1 and 2.4, for any given initial data (2.2), there is a unique maximal local solution to (2.1).

Proof. From Assumption 2.4, for the initial data (2.2), we have For each integer , we define for . We then define the truncation functions for and each . Moreover, we define the following equation: on with initial data on . Obviously, this equation satisfies the global Lipschitz condition and the linear growth condition. Therefore, according to [4], there is a unique global solution to (2.16), and its solution is a càdlàg process (see [17]). We define the stopping time for , and where we set inf ∅ = ∞ (as usual, ∅ denotes the empty set) throughout our paper. We can easily see that is a nondecreasing sequence; then let a.s. Now, we define with on and where . And from (2.16) and (2.19), we can also obtain for any and with probability . Moreover, if , then Hence is a maximal local solution to (2.1).
To show the uniqueness of the solution to (2.1), let be another maximal local solution. By the same proof as in [17], we infer that Hence by , we get
Therefore is a unique local solution, and hence a unique maximal local solution, to (2.1).
This completes the proof.

Now, the existence of the global solution to (2.1) is shown in the following theorem.

Theorem 2.8. Under Assumptions 2.1, 2.2, and 2.4, for any given initial data (2.2), there is a unique global solution to (2.1) on .

Proof. According to Lemma 2.7, there exists a unique maximal local solution to (2.1) on . Hence, in order to show that this local solution is a global one, we only need to demonstrate that a.s. Applying Itô's formula (see [1]) to , we have for .
Our proof is divided into the following steps.
Step . For any integer and , by integrating, taking expectations, and applying Assumption 2.2 to (2.25), we get which means where From (2.27), we obtain by the Gronwall inequality (see [18]), which leads to for and . Let Therefore, from (2.30), we have by taking , which gives Hence we get It thus follows from (2.30) and (2.34) that by taking .
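The Gronwall inequality invoked here is the standard integral form; for a nonnegative, bounded, Borel-measurable function u on [0, T] and constants α, β ≥ 0 it reads (notation ours):

```latex
u(t) \le \alpha + \beta \int_0^t u(s)\,\mathrm{d}s \quad (0 \le t \le T)
\quad\Longrightarrow\quad
u(t) \le \alpha\, e^{\beta t} \quad (0 \le t \le T).
```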
Moreover, from (2.27), we get by taking , which gives where (2.34) and (2.35) are used.
Step . For any integer and , a similar analysis to the above gives where from (2.35) and (2.37).
Thus, which gives for and . Hence we get by taking , which implies that is, Moreover, by taking to (2.41), we then get Therefore, from (2.38), (2.44), and (2.45), we have
Step . So for any , we repeat the analysis above and obtain where
So we can get and the required result follows.

In the following lemma, we show that the solution of (2.1) remains in a compact set with a large probability.

Lemma 2.9. Under Assumptions 2.1, 2.2, and 2.4, for any pair of and , there exists a sufficiently large integer , dependent on and , such that where is defined in Lemma 2.7.

Proof. According to Theorem 2.8, we can get for sufficiently large and . Therefore, we have where Under (2.7) in Assumption 2.2, there exists a sufficiently large integer such that This completes the proof.

3. The Euler Method

In this section, we introduce the Euler method to (2.1) under Assumptions 2.1, 2.2, 2.3, and 2.4.

Given a step size Δ ∈ (0, 1) with Δ = τ/m for some positive integer m, the Euler method applied to (2.1) computes approximations Y_n ≈ x(t_n), where t_n = nΔ for n ≥ −m, by setting Y_n = ξ(nΔ) for n = −m, …, 0 and forming

Y_{n+1} = Y_n + a(Y_n, Y_{n−m}) Δ + b(Y_n, Y_{n−m}) ΔW_n + ∫_{t_n}^{t_{n+1}} ∫_E c(Y_n, Y_{n−m}, v) p_φ(dv × dt)   (3.2)

for n ≥ 0, where ΔW_n = W(t_{n+1}) − W(t_n).
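This scheme can be sketched as follows for a scalar test equation. The coefficients a, b, c, the delay, the intensity, and the mark density below are illustrative placeholders, not the paper's; the jump contribution in each step is sampled from a Poisson count with mean lam * dt.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative scalar coefficients (hypothetical, not from the paper).
def a(x, y): return -x + 0.5 * y        # drift
def b(x, y): return 0.2 * x             # diffusion
def c(x, y, v): return 0.1 * y * v      # jump coefficient

def euler_sdde_jumps(xi, tau=1.0, T=2.0, m=100, lam=1.0):
    """Euler scheme for a scalar SDDE with jumps and constant delay tau.
    xi: initial function on [-tau, 0]; step size is dt = tau / m."""
    dt = tau / m
    N = int(round(T / dt))
    Y = np.empty(N + m + 1)             # subscript n is stored at index n + m
    for n in range(-m, 1):              # initial segment Y_n = xi(n * dt)
        Y[n + m] = xi(n * dt)
    for n in range(N):
        yn, ynd = Y[n + m], Y[n]        # current value Y_n and delayed Y_{n-m}
        dW = rng.normal(0.0, np.sqrt(dt))
        jump = 0.0
        for _ in range(rng.poisson(lam * dt)):   # jumps landing in (t_n, t_{n+1}]
            jump += c(yn, ynd, rng.normal())     # marks ~ assumed normal density
        Y[n + m + 1] = yn + a(yn, ynd) * dt + b(yn, ynd) * dW + jump
    return Y

path = euler_sdde_jumps(xi=lambda t: 1.0 + 0.0 * t)
```

Holding the coefficient arguments fixed over each step and evaluating the jump integral as a sum over the jumps in that step is exactly the Euler treatment of the stochastic integrals.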

The continuous-time Euler method on t ≥ 0 is then defined by

Y(t) = Y_0 + ∫_0^t a(z_1(s), z_2(s)) ds + ∫_0^t b(z_1(s), z_2(s)) dW(s) + ∫_0^t ∫_E c(z_1(s), z_2(s), v) p_φ(dv × ds)   (3.4)

for t ≥ 0, where

z_1(t) = Σ_{n≥0} Y_n 1_{[t_n, t_{n+1})}(t),  z_2(t) = Σ_{n≥0} Y_{n−m} 1_{[t_n, t_{n+1})}(t).

Actually, we can see in [11] that p_φ(E × [0, t]) is a process that counts the number of jumps up to time t. The Poisson random measure generates a sequence of pairs {(τ_i, v_i), i = 1, …, p_φ(E × [0, T])} for a given finite positive constant T if λ < ∞. Here {τ_i} is a sequence of increasing nonnegative random variables representing the jump times of a standard Poisson process with intensity λ, and {v_i} is a sequence of independent identically distributed random variables distributed according to φ(dv)/λ. Then (3.2) can equivalently be written in the following form:
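This jump-adapted representation can be generated directly: the sketch below draws the pairs (tau_i, v_i) by summing i.i.d. exponential waiting times with mean 1/lam, attaching an i.i.d. mark to each jump time. The standard normal mark density is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def poisson_measure_pairs(T, lam):
    """Generate the pairs (tau_i, v_i): jump times of a Poisson process
    with intensity lam on [0, T], each carrying an i.i.d. mark v_i
    (assumed here to follow a standard normal density)."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam)   # i.i.d. exponential waiting times
        if t > T:
            break
        times.append(t)
    marks = rng.normal(size=len(times))
    return np.array(times), marks

taus, vs = poisson_measure_pairs(T=5.0, lam=3.0)
# The jump part of one Euler step over (t_n, t_{n+1}] is then the sum of
# the jump coefficient evaluated at the marks v_i whose tau_i fall in
# that interval (coefficient hypothetical).
```

Unlike the count-then-uniform construction, this produces the jump times in increasing order as they would be visited by a jump-adapted scheme.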

In order to analyze the Euler method, we will give two lemmas.

The first lemma shows the closeness of the continuous-time Euler solution (3.4) to its step function.

Lemma 3.1. Suppose Assumptions 2.1 and 2.3 hold. Then for any , there exists a positive constant , dependent on integer and independent of , such that for all the continuous-time Euler method (3.4) satisfies for and , where is defined in Lemma 2.7 and .

Proof. For and , there is an integer such that . Thus it follows from (3.2) that
Therefore, by taking expectations, applying the Cauchy-Schwarz inequality, and using the martingale properties of and , we get where the inequality for is used. Then, by applying Assumption 2.1, we get Hence, by substituting (3.10) into (3.9), we get for and .
So from Assumption 2.3, we can get the result (3.7) by choosing

In the following lemma, we demonstrate that the solution of continuous-time Euler method (3.4) remains in a compact set with a large probability.

Lemma 3.2. Under Assumptions 2.1, 2.2, 2.3, and 2.4, for any pair of and , there exist a sufficiently large integer and a sufficiently small such that where is defined in Lemma 3.1.

Proof. Our proof is completed by the following steps.
Step . Applying Itô's formula (see [1]) to , for , we have where is defined by Moreover, for with , we have where Assumptions 2.1 and 2.2 are used and is a positive constant dependent on the integer and the intensity, and independent of . Therefore, from (3.16), Assumption 2.4, and (3.7) in Lemma 3.1, we obtain for and . Hence, by integrating and taking expectations in (3.14), applying the martingale properties of and , and then using (3.17) and Assumption 2.2, we obtain for and .
Step . For and , it follows from (3.18) that where . Thus from (3.19), we get by the Gronwall inequality (see [18]), which gives for and . Moreover, from (3.19) and (3.21), we have for .
Step . For and , it follows from (3.18) that In the same way as in Step , we can obtain where from (3.21) and (3.22). So (3.24) becomes for and , where
Step . By repeating the arguments of Steps and , we get for , where and are two constants dependent on and independent of and . Therefore, we have where Now, for any , we can choose a sufficiently large integer such that , and a sufficiently small such that So from (3.30), we can obtain

4. Convergence in Probability

In this section, we show the convergence in probability of the Euler method to (2.1) over a finite time interval , which is based on the following lemma.
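A simple way to observe this kind of convergence numerically is to drive two Euler approximations with the same Brownian path and the same jump list and compare their endpoints as the step is refined. The sketch below suppresses the delay for brevity and uses purely illustrative coefficients; it is a diagnostic under these assumptions, not the paper's experiment.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative coefficients (hypothetical, not from the paper).
def f(x): return -x          # drift
def g(x): return 0.3 * x     # diffusion
def h(x, v): return 0.1 * v  # jump coefficient

def euler_endpoint(n_steps, dW_fine, jumps, T=1.0):
    """Euler endpoint on [0, T] with n_steps steps, driven by a shared
    fine Brownian path (length divisible by n_steps) and a shared
    list of (jump_time, mark) pairs."""
    dt = T / n_steps
    per = len(dW_fine) // n_steps
    x = 1.0
    for n in range(n_steps):
        dW = dW_fine[n * per:(n + 1) * per].sum()  # aggregate fine increments
        J = sum(h(x, v) for (s, v) in jumps if n * dt < s <= (n + 1) * dt)
        x = x + f(x) * dt + g(x) * dW + J
    return x

# One shared noise path: fine Brownian increments and Poisson jumps.
n_fine, T, lam = 2048, 1.0, 2.0
dW_fine = rng.normal(0.0, np.sqrt(T / n_fine), size=n_fine)
n_j = rng.poisson(lam * T)
jumps = sorted(zip(rng.uniform(0, T, n_j), rng.normal(size=n_j)))

coarse = euler_endpoint(64, dW_fine, jumps)
fine = euler_endpoint(1024, dW_fine, jumps)
gap = abs(coarse - fine)
```

Repeating this over many sampled noise paths and estimating the fraction of paths with gap exceeding a tolerance, for a sequence of step sizes, gives a Monte Carlo picture of convergence in probability.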

Lemma 4.1. Under Assumptions 2.1, 2.3, and 2.4, for any , there exists a positive constant , dependent on and independent of , such that for all the solution of (2.1) and the continuous-time Euler method (3.4) satisfy where and are defined in Lemmas 2.7 and 3.1, respectively, and .

Proof. From (2.1) and (3.4), for any and , we have where the inequality for is used. Therefore, by using the Cauchy-Schwarz inequality, Assumptions 2.1 and 2.4, Fubini's Theorem, and Lemma 3.1, we obtain Moreover, by using the martingale properties of and , Assumptions 2.1 and 2.4, Fubini's Theorem, and Lemma 3.1, we have