Abstract

A class of drift-implicit one-step schemes is proposed for neutral stochastic delay differential equations (NSDDEs) driven by Poisson processes. A general framework for the mean-square convergence of the methods is provided. It is shown that, under certain conditions, global error estimates for a method can be inferred from estimates on its local error. The applicability of the mean-square convergence theory is illustrated by the stochastic θ-methods and the balanced implicit methods. It is derived from Theorem 3.5 that the order of mean-square convergence of both methods for NSDDEs with jumps is 1/2. Numerical experiments illustrate the theoretical results. It is worth noting that the mean-square convergence results for the stochastic θ-methods and the balanced implicit methods are themselves new.

1. Introduction

In stochastic numerical analysis, the order of convergence plays a crucial role in the design of numerical algorithms. Unlike the deterministic setting, the stochastic environment admits several different types of convergence. Both in the literature and in practice, most attention is focused on two major types, namely, strong convergence and weak convergence. There is a rich literature on this subject; here we mention only [14] and the references therein.

For stochastic differential equations (SDEs), Milstein [1] presented a fundamental convergence theorem which established the order of mean-square convergence of explicit one-step methods. The conditions of this theorem involve properties of both the mean and the mean-square deviation of the one-step approximation. The theorem showed that under certain conditions global error estimates for a method can be inferred from estimates on its local error. Buckwar [4] extended the convergence theory in [1] to stochastic functional differential equations. Recently, Zhang and Gan [5] extended the theory to neutral stochastic differential delay equations (NSDDEs). Thus, the convergence theory in [1] and its generalizations have received considerable attention in the case of nonjump SDEs. However, in the jump-SDE context, which is becoming increasingly important in mathematical finance [6–8], to the best of our knowledge, no corresponding convergence theory of numerical methods for NSDDEs with jumps has been presented in the literature. Motivated by the work of Zhang and Gan [5], our aim is to establish a relationship between the consistency order and the convergence order of the methods for NSDDEs with jumps.

In this paper, a class of drift-implicit one-step schemes is proposed for NSDDEs driven by Poisson processes. A general framework for the mean-square convergence of the methods is provided. It is shown that, under certain conditions, global error estimates for a method can be inferred from estimates on its local error. The applicability of the mean-square convergence theory is illustrated by the stochastic θ-methods and the balanced implicit methods. It is derived from Theorem 3.5 that the order of mean-square convergence of both methods for NSDDEs with jumps is 1/2. It is worth noting that the mean-square convergence results for the stochastic θ-methods and the balanced implicit methods are themselves new.

2. Neutral Stochastic Delay Differential Equations with Jumps

Let $(\Omega, \mathcal{F}, P)$ be a complete probability space with a filtration $\{\mathcal{F}_t\}_{t \ge 0}$ satisfying the usual conditions (i.e., it is right continuous and $\mathcal{F}_0$ contains all the $P$-null sets). Let $W(t)$ be an $m$-dimensional Wiener process, and let $N(t)$ be a scalar Poisson process with intensity $\lambda$ ($\lambda > 0$), both defined on the space $(\Omega, \mathcal{F}, P)$. The symbol $|\cdot|$ is used to denote both the Euclidean norm in $\mathbb{R}^n$ and the trace norm (F-norm) in $\mathbb{R}^{n \times m}$. Also, $C([-\tau, 0]; \mathbb{R}^n)$ is used to represent the family of continuous mappings from $[-\tau, 0]$ to $\mathbb{R}^n$. Finally, $L^2_{\mathcal{F}_0}([-\tau, 0]; \mathbb{R}^n)$ is used to denote the family of $\mathcal{F}_0$-measurable, $C([-\tau, 0]; \mathbb{R}^n)$-valued random variables $\xi$ such that $E\|\xi\|^2 < \infty$, where $\|\xi\| = \sup_{-\tau \le t \le 0} |\xi(t)|$. $E$ denotes the mathematical expectation with respect to $P$.

Consider the neutral stochastic delay differential equation (NSDDE) with Poisson-driven jumps
$$d[x(t) - u(x(t-\tau))] = f(x(t), x(t-\tau))\,dt + g(x(t), x(t-\tau))\,dW(t) + h(x(t^-), x((t-\tau)^-))\,dN(t), \quad t \in [0, T], \tag{2.1}$$
with initial data
$$x(t) = \xi(t), \quad t \in [-\tau, 0], \tag{2.2}$$
where $\xi \in L^2_{\mathcal{F}_0}([-\tau, 0]; \mathbb{R}^n)$. Here, $\tau > 0$ is a constant, $x(t^-)$ denotes $\lim_{s \to t^-} x(s)$, $u : \mathbb{R}^n \to \mathbb{R}^n$, $f : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}^n$, $g : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}^{n \times m}$, and $h : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}^n$.

By the definition of Itô-interpreted stochastic differential equations, the integral version of (2.1) is expressed as follows:
$$x(t) = \xi(0) + u(x(t-\tau)) - u(\xi(-\tau)) + \int_0^t f(x(s), x(s-\tau))\,ds + \int_0^t g(x(s), x(s-\tau))\,dW(s) + \int_0^t h(x(s^-), x((s-\tau)^-))\,dN(s). \tag{2.3}$$
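The driving noise of (2.1) enters any discretization only through its increments over the mesh intervals. As a minimal sketch (Python/NumPy; all names and parameter values are illustrative, not taken from the original), the Wiener increments $\Delta W_n \sim N(0, h)$ and the Poisson increments $\Delta N_n \sim \mathrm{Poisson}(\lambda h)$ on a uniform mesh can be sampled as follows:

```python
import numpy as np

def driving_increments(T=1.0, h=2**-8, lam=1.0, seed=0):
    """Sample the Wiener increments dW_n ~ N(0, h) and the Poisson
    increments dN_n ~ Poisson(lam * h) on the uniform mesh t_n = n*h."""
    rng = np.random.default_rng(seed)
    n_steps = int(round(T / h))
    dW = rng.normal(0.0, np.sqrt(h), size=n_steps)  # Gaussian increments
    dN = rng.poisson(lam * h, size=n_steps)         # jump counts per interval
    return dW, dN
```

Independence of the increments from the current state is exactly what the one-step framework of Section 3 assumes.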

Definition 2.1 (see [9]). An $\mathbb{R}^n$-valued stochastic process $x = \{x(t)\}_{-\tau \le t \le T}$ is called a solution to (2.1) with initial data (2.2) if it has the following properties: (i) $x$ is continuous and $\{\mathcal{F}_t\}$-adapted; (ii) $\{f(x(t), x(t-\tau))\} \in \mathcal{L}^1([0, T]; \mathbb{R}^n)$, $\{g(x(t), x(t-\tau))\} \in \mathcal{L}^2([0, T]; \mathbb{R}^{n \times m})$, $\{h(x(t), x(t-\tau))\} \in \mathcal{L}^2([0, T]; \mathbb{R}^n)$; (iii) $x(t) = \xi(t)$ for $t \in [-\tau, 0]$, and (2.3) holds for every $t \in [0, T]$ with probability 1, where $\mathcal{L}^p([0, T]; \mathbb{R}^n)$ denotes the family of Borel measurable functions $\varphi : [0, T] \to \mathbb{R}^n$ such that $\int_0^T |\varphi(t)|^p\,dt < \infty$ a.s.
A solution $x$ is said to be unique if any other solution $\bar{x}$ is indistinguishable from $x$, that is,
$$P\{x(t) = \bar{x}(t) \ \text{for all} \ -\tau \le t \le T\} = 1. \tag{2.4}$$

In order to guarantee the existence and uniqueness of the solution, we impose the following hypothesis.

Assumption 1 (global Lipschitz condition). There exists a positive constant $K_1$ such that for all $x, y, \bar{x}, \bar{y} \in \mathbb{R}^n$,
$$|f(x, y) - f(\bar{x}, \bar{y})|^2 \vee |g(x, y) - g(\bar{x}, \bar{y})|^2 \vee |h(x, y) - h(\bar{x}, \bar{y})|^2 \le K_1 \big(|x - \bar{x}|^2 + |y - \bar{y}|^2\big). \tag{2.5}$$

Assumption 2 (linear growth condition). There exists a positive constant $K_2$ such that for all $x, y \in \mathbb{R}^n$,
$$|f(x, y)|^2 \vee |g(x, y)|^2 \vee |h(x, y)|^2 \le K_2 \big(1 + |x|^2 + |y|^2\big). \tag{2.6}$$

Assumption 3. There is a constant $\kappa \in (0, 1)$ such that for all $x, y \in \mathbb{R}^n$,
$$|u(x) - u(y)| \le \kappa |x - y|. \tag{2.7}$$

Remark 2.2. In this paper, we always assume that $u \not\equiv 0$. Otherwise, the system (2.1) reduces to a stochastic delay differential equation with jumps.

We use $C$ to denote generic positive constants which are independent of the stepsize $h$; their values may change from line to line.

Lemma 2.3. If Assumptions 1–3 hold, then (2.1) has a unique strong solution $x(t)$ on $[-\tau, T]$, and the solution of (2.1) satisfies
$$E\Big(\sup_{-\tau \le t \le T} |x(t)|^2\Big) \le C_1, \tag{2.8}$$
where $C_1$ is a positive constant which depends on $T$, the constants in Assumptions 1–3, the intensity $\lambda$, and the initial function $\xi$.

Lemma 2.3 can be proved in a way similar to the proofs of Theorems 6.2.2 and 6.4.5 in [9].

Lemma 2.4. Let Assumptions 2 and 3 hold. Assume that the initial function $\xi$ is uniformly Lipschitz continuous, that is, there is a positive constant $K_3$ such that
$$|\xi(t) - \xi(s)| \le K_3 |t - s|, \quad -\tau \le s < t \le 0; \tag{2.9}$$
then for all $s, t$ with $0 \le s < t \le T$ and $t - s < 1$,
$$E|x(t) - x(s)|^{2p} \le C_2 (t - s)^p, \tag{2.10}$$
where the constant $C_2$ depends on $T$, the constants in Assumptions 2 and 3, the initial function $\xi$, and the positive integer $p$.

Lemma 2.4 is a modified version of [10, Lemma 2.1]; the estimate (2.10) can be obtained in a similar way.

In this paper, we will use the following elementary inequality: for any $a, b \ge 0$ and $\varepsilon > 0$, we have
$$(a + b)^2 \le (1 + \varepsilon) a^2 + \big(1 + \varepsilon^{-1}\big) b^2. \tag{2.11}$$
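As a quick illustration, the elementary bound $(a+b)^2 \le (1+\varepsilon)a^2 + (1+\varepsilon^{-1})b^2$, which follows from Young's inequality $2ab \le \varepsilon a^2 + \varepsilon^{-1} b^2$ and is representative of the estimates used repeatedly below, can be spot-checked numerically (an illustrative sketch only):

```python
import numpy as np

# Spot-check (a+b)^2 <= (1+eps)*a^2 + (1+1/eps)*b^2 on random inputs;
# the bound follows from Young's inequality 2ab <= eps*a^2 + b^2/eps.
rng = np.random.default_rng(1)
for _ in range(1000):
    a, b = rng.normal(size=2)
    eps = rng.uniform(0.01, 10.0)
    assert (a + b) ** 2 <= (1 + eps) * a ** 2 + (1 + 1 / eps) * b ** 2 + 1e-9
```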

3. Implicit One-Step Schemes

Define a mesh on $[0, T]$ with uniform step $h$ which satisfies $\tau = mh$ for a positive integer $m$ (for convenience, we assume that $T = Nh$ for a positive integer $N$). Let $t_n = nh$, $n = -m, \dots, 0, 1, \dots, N$. The drift-implicit one-step methods for the simulation of the solution of (2.1) are defined as follows:
$$X_{n+1} = u(X_{n+1-m}) + X_n - u(X_{n-m}) + h\,\varphi(h; X_{n+1}, X_n, X_{n-m}, X_{n+1-m}) + \psi(h; X_n, X_{n-m})\,\Delta W_n + \chi(h; X_n, X_{n-m})\,\Delta N_n, \tag{3.1}$$
where $X_n$ is an approximation of the exact solution $x(t_n)$, and $\varphi$, $\psi$, and $\chi$ are increment functions. $\Delta W_n = W(t_{n+1}) - W(t_n)$, $\Delta N_n = N(t_{n+1}) - N(t_n)$, and $(\Delta W_n, \Delta N_n)$ is independent of $X_0, X_1, \dots, X_n$. $X_n = \xi(t_n)$ when $n \le 0$.

Remark 3.1. Now we discuss the solvability of (3.1). If the drift increment function does not depend on the new approximation $X_{n+1}$, it is not difficult to see that the approximations can be computed iteratively. If it does depend on $X_{n+1}$, then, in order to guarantee the existence and uniqueness of a solution, the general approach is to assume Lipschitz continuity of the increment function with respect to $X_{n+1}$, with Lipschitz constant less than 1, and then to apply Banach's contraction mapping principle [4].
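In code, the contraction mapping principle mentioned above translates directly into Picard (fixed-point) iteration. A sketch under the assumption that the map $y \mapsto \mathrm{rhs} + \varphi(y)$ is a contraction (all names are illustrative):

```python
def solve_implicit_step(phi, rhs, y0, tol=1e-12, max_iter=100):
    """Solve y = rhs + phi(y) by Banach fixed-point (Picard) iteration.
    Converges when phi is Lipschitz with constant strictly less than 1."""
    y = y0
    for _ in range(max_iter):
        y_new = rhs + phi(y)
        if abs(y_new - y) < tol:
            return y_new
        y = y_new
    raise RuntimeError("fixed-point iteration did not converge")

# Example: a drift-implicit Euler-type step y = rhs + h*theta*f(y), f(y) = -2y
h, theta = 0.1, 0.5
y = solve_implicit_step(lambda y: -2.0 * h * theta * y, rhs=1.0, y0=1.0)
# y solves y = 1 - 0.1*y, i.e. y = 1/1.1
```

The contraction constant here is $h \theta L = 0.1 < 1$, so the iteration converges geometrically.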

We denote by $\tilde{X}_{n+1}$ the value that is obtained when the exact solution values are inserted into the right-hand side of (3.1), that is,
$$\tilde{X}_{n+1} = u(x(t_{n+1-m})) + x(t_n) - u(x(t_{n-m})) + h\,\varphi(h; \tilde{X}_{n+1}, x(t_n), x(t_{n-m}), x(t_{n+1-m})) + \psi(h; x(t_n), x(t_{n-m}))\,\Delta W_n + \chi(h; x(t_n), x(t_{n-m}))\,\Delta N_n. \tag{3.2}$$

We introduce the following definitions, which can be found in the literature; see, for example, [1, 3].

Definition 3.2. The local error of method (3.1) is the sequence of random variables
$$\delta_{n+1} = \tilde{X}_{n+1} - x(t_{n+1}), \quad n = 0, 1, \dots, N-1. \tag{3.3}$$
The global error of method (3.1) is the sequence of random variables
$$\varepsilon_n = X_n - x(t_n), \quad n = 0, 1, \dots, N. \tag{3.4}$$

Definition 3.3. The numerical method (3.1) is said to be consistent with order $p_1$ in the mean and with order $p_2$ in the mean square sense if the following estimates hold with $p_2 \ge 1/2$ and $p_1 \ge p_2 + 1/2$:
$$\big|E(\delta_{n+1})\big| \le C_1 h^{p_1}, \qquad \big(E|\delta_{n+1}|^2\big)^{1/2} \le C_2 h^{p_2}, \quad n = 0, 1, \dots, N-1, \tag{3.5}$$
where the constants $C_1$ and $C_2$ do not depend on $h$ but may depend on $T$ and the initial data. Here, $0 < h < 1$.

Definition 3.4. The numerical method (3.1) is said to be convergent with order $p$ in the mean square sense, on the mesh points, if the following estimate holds:
$$\max_{0 \le n \le N} \big(E|\varepsilon_n|^2\big)^{1/2} \le C h^{p}, \tag{3.6}$$
where the constant $C$ does not depend on $h$ but may depend on $T$ and the initial data.

In order to obtain the main result, the following properties of the increment functions are required: there exist positive constants such that the global Lipschitz-type conditions (3.7)–(3.9) hold for the increment functions in all of their state arguments, and there exist positive constants such that the conditions (3.10) and (3.11) hold for all $\mathcal{F}_{t_n}$-measurable random variables.

Now, we state our result on the convergence of the one-step method (3.1).

Theorem 3.5. Suppose that Assumption 3 and the conditions (3.7)–(3.11) hold. Assume that the one-step method (3.1) is consistent with order $p_1$ in the mean and order $p_2$ in the mean square sense; then the method (3.1) is convergent with order $p = p_2 - 1/2$ in the mean square sense.

Proof. By (3.4), we have where is defined as follows: Squaring and taking expectation on both sides of (3.12), using Assumption 3 and (2.11), we have It follows from (3.1), (3.2), and (3.13) that where Squaring and taking expectation on both sides of (3.15) yields We will now estimate the separate terms in (3.17) individually. Without loss of generality, we can assume that . We notice that the method (3.1) is consistent with order in the mean square sense; thus, there exists a constant such that By (3.7)–(3.9), we obtain which, by the inequality (3.14), yields where .
Since method (3.1) is consistent with order in the mean square sense, there exists a constant such that Noticing that is -measurable and by (3.21), we arrive at where . Applying the inequality , (3.7), (3.10), and (3.11) yields Here, we have used the fact that , , , , , are -measurable. Using (3.14) and (3.23), we arrive at where . Inserting (3.18), (3.20), (3.22), and (3.24) into (3.17) yields The remainder of the proof is analogous to that of Theorem 3.1 in [5], from which the convergence result follows. The proof is completed.

Remark 3.6. Notice that if in (2.1), then (2.1) reduces to the NSDDEs without jumps, our Theorem 3.5 coincides with Theorem 3.1 in [5], that is to say, Theorem 3.5 generalizes Theorem 3.1 in [5] to the case of NSDDEs with jumps.

4. The Examples

Theorem 3.5 presents the convergence result for the general implicit one-step methods for NSDDEs with jumps. In this section, we discuss the applicability of the theory presented in the previous section. We will give the convergence orders of the stochastic θ-methods and the balanced implicit methods.

Example 4.1. Consider the stochastic θ-methods for system (2.1), where $\theta \in [0, 1]$ is a fixed parameter, $n = 0, 1, \dots, N-1$.
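The coefficient symbols in (4.1) did not all survive typesetting here, so the following sketch fixes an assumed scalar model $d[x(t) - u(x(t-\tau))] = f(x(t), x(t-\tau))\,dt + g(x(t), x(t-\tau))\,dW(t) + c(x(t), x(t-\tau))\,dN(t)$ and implements one plausible form of the θ-scheme, with the implicit drift resolved by fixed-point iteration; all names are illustrative:

```python
import numpy as np

def theta_method_nsdde(u, f, g, c, xi, tau, T, m, theta=0.5, lam=1.0, seed=0):
    """Stochastic theta-method (sketch) for a scalar NSDDE with jumps:
    X_{n+1} - u(X_{n+1-m}) = X_n - u(X_{n-m})
        + h*[theta*f(X_{n+1}, X_{n+1-m}) + (1-theta)*f(X_n, X_{n-m})]
        + g(X_n, X_{n-m})*dW_n + c(X_n, X_{n-m})*dN_n,
    with stepsize h = tau/m and history X_n = xi(t_n) for n <= 0."""
    rng = np.random.default_rng(seed)
    h = tau / m
    N = int(round(T / h))
    X = np.empty(N + m + 1)            # X[k] stores X_{k-m}, k = 0, ..., N+m
    for k in range(m + 1):
        X[k] = xi(-tau + k * h)        # history values X_{-m}, ..., X_0
    for n in range(N):
        i = n + m                      # array index of X_n
        dW = rng.normal(0.0, np.sqrt(h))
        dN = rng.poisson(lam * h)
        rhs = (X[i] - u(X[i - m]) + u(X[i - m + 1])
               + h * (1 - theta) * f(X[i], X[i - m])
               + g(X[i], X[i - m]) * dW + c(X[i], X[i - m]) * dN)
        y = X[i]                       # fixed-point solve for X_{n+1}
        for _ in range(50):
            y = rhs + h * theta * f(y, X[i - m + 1])
        X[i + 1] = y
    return X[m:]                       # approximations at t_0, ..., t_N
```

For θ = 0 the drift becomes explicit (an Euler-Maruyama-type scheme); for θ = 1 it is fully implicit.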

Lemma 4.2. Let Assumption 1 hold; then the stochastic θ-methods (4.1) are consistent with order 3/2 in the mean and order 1 in the mean square sense.

Proof. Combining (2.1) and (3.2) with (4.1) yields Noticing the compensated Poisson process $\tilde{N}(t) = N(t) - \lambda t$, which satisfies Using Assumption 1, (4.3), Hölder's inequality, Lemma 2.4, the properties of conditional expectation, and Jensen's inequality, we compute that where the compensated Poisson process satisfies Hence, the stochastic θ-methods (4.1) are consistent with order 3/2 in the mean and order 1 in the mean square sense. The proof is completed.
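The compensated Poisson process used in the proof, $\tilde{N}(t) = N(t) - \lambda t$, has mean-zero increments with variance $\lambda h$; this is the property that drives the consistency estimates. A Monte Carlo sanity check of these two moments (an illustrative sketch, with arbitrary parameter values):

```python
import numpy as np

# Increments of the compensated Poisson process: dN - lam*h has
# mean 0 and variance lam*h (the martingale property used in Lemma 4.2).
rng = np.random.default_rng(42)
lam, h, n_samples = 2.0, 0.01, 200_000
comp = rng.poisson(lam * h, size=n_samples) - lam * h
assert abs(comp.mean()) < 5e-3              # E[dN - lam*h] = 0
assert abs(comp.var() - lam * h) < 5e-3     # Var[dN - lam*h] = lam*h
```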

Theorem 4.3. Let Assumption 1 hold; then the stochastic θ-methods (4.1) are convergent with order 1/2 in the mean square sense.

Proof. Lemma 4.2 shows that the stochastic θ-methods (4.1) are consistent with order 3/2 in the mean and order 1 in the mean square sense. In order to prove that the stochastic θ-methods (4.1) are convergent with order 1/2 in the mean square sense, by Theorem 3.5, we only need to verify the conditions (3.7)–(3.11).
From (4.1), we find that For the random variables (), by Assumption 1, we have Noticing that , , and the random variables are -measurable, we derive that From (4.8)–(4.10), we see that the increment functions of the stochastic θ-methods (4.1) satisfy the inequalities (3.7)–(3.9) with , , and . Noting that are independent of and , and are -measurable, then using , we find that Here, we have used the elementary moment properties of the increments $\Delta W_n$ and $\Delta N_n$. From the above, we find that the increment functions of the stochastic θ-methods (4.1) satisfy the estimates (3.10) and (3.11) with . Therefore, the conditions (3.7)–(3.11) are satisfied. By Lemma 4.2 and Theorem 3.5, it is not difficult to see that the stochastic θ-methods (4.1) are convergent with order 1/2 in the mean square sense. The proof is completed.

Remark 4.4. For the case of $u \equiv 0$, (2.1) reduces to the stochastic delay differential equation with jumps (4.12).
Theorem 4.3 implies that the stochastic θ-methods for (4.12) are convergent with order 1/2, which coincides with Theorem 3.2 in [11].

Example 4.5. Consider the balanced implicit methods for system (2.1). Here, the balance matrix is given by where the $c^j$, $j = 0, 1, 2$, are in general matrix-valued functions, called control functions, which are often chosen as constants.

Furthermore, the control functions must satisfy some conditions.

Assumption 4. The $c^j$ ($j = 0, 1, 2$) represent bounded matrix-valued functions. For any real numbers $\alpha_0 \in [0, \bar{\alpha}]$ and $\alpha_1, \alpha_2 \ge 0$, where $\bar{\alpha} \ge h$ for all step sizes considered, the matrix $M(x) = I + \alpha_0 c^0(x) + \alpha_1 c^1(x) + \alpha_2 c^2(x)$ has an inverse and satisfies the condition $|M(x)^{-1}| \le K < \infty$. Here, $I$ is the unit matrix, and $K$ is a positive constant. Notice that if the $c^j$ satisfy Assumption 4, the methods (4.13) are well defined and can be rewritten as

Lemma 4.6 (see [12]). If is independent of , , , and is -measurable, then .

For simplicity, from now on we suppose that the control functions in (4.14) are constants, that is to say, $c^j(x) \equiv c^j$, $j = 0, 1, 2$.
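In the scalar case with constant controls, the well-definedness guaranteed by Assumption 4 is transparent: the balance factor is a scalar bounded below by 1, so the rewritten form (4.16) divides by it safely. A sketch of one balanced step for an assumed scalar jump-diffusion $dx = f\,dt + g\,dW + c\,dN$ with nonnegative constant controls (all names are illustrative):

```python
def balanced_step(x, f, g, c, h, dW, dN, c0=1.0, c1=1.0, c2=1.0):
    """One balanced implicit step (sketch) for dx = f dt + g dW + c dN:
    X_{n+1} = X_n + D_n^{-1} * (f(X_n)*h + g(X_n)*dW + c(X_n)*dN),
    where D_n = 1 + c0*h + c1*|dW| + c2*dN >= 1 is the scalar balance."""
    D = 1.0 + c0 * h + c1 * abs(dW) + c2 * dN
    return x + (f(x) * h + g(x) * dW + c(x) * dN) / D
```

Larger control constants damp the increment more strongly, which is the mechanism behind the improved stability of balanced methods.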

Lemma 4.7. Let Assumptions 1, 2, and 4 hold; then the balanced implicit methods (4.13) are consistent with order 3/2 in the mean and order 1 in the mean square sense.

Proof. Without loss of generality, we can assume that . From Lemma 4.2, we know that the Euler-Maruyama method is consistent with order 3/2 in the mean; thus, using (4.4), we have where and are defined as follows: where . By (4.18), we have Noticing that are -measurable, is independent of , and using Lemma 4.6, we find that We notice that and are constants; thus, there exists a positive constant , such that (). Since are independent of , by Assumption 4, (4.19), (4.20), , , and , we obtain It follows from Assumption 2, Lemma 2.3, and (4.21) that Inserting (4.22) into (4.17) yields which implies that the balanced implicit methods (4.13) are consistent with order 3/2 in the mean. In the following, we will show that the balanced implicit methods (4.13) are consistent with order 1 in the mean square sense. Using Assumptions 2 and 4, Lemma 2.3, (), , , , and (4.19), we compute that Theorem 4.3 implies that the Euler-Maruyama method is convergent with order 1/2. Thus, by (4.5) and (4.24), we have The proof is completed.

Theorem 4.8. Let Assumptions 1–4 hold; then the balanced implicit methods (4.16) are convergent with order 1/2 in the mean square sense.

Proof. By (4.16), the increment functions , , and of the balanced implicit methods (4.16) are given as follows: For (), by Assumptions 1–4 and (4.26), we arrive at Noticing that , , the random variables , , , and are -measurable and using Assumptions 1–4 and (4.27), we obtain Since are independent of and the random variables , , , and are -measurable, by the inequality , , and Lemma 4.6, we compute that From (4.28)–(4.32), we see that the increment functions of the balanced implicit methods (4.13) satisfy the conditions (3.7)–(3.11) with , , , and . A combination of Lemma 4.7 and Theorem 3.5 leads to the conclusion that the balanced implicit methods (4.13) are convergent with order 1/2 in the mean square sense. The proof is completed.

Remark 4.9. For the case of $u \equiv 0$ and $\tau = 0$, (2.1) reduces to the stochastic differential equation with jumps (4.33). From Theorem 4.8, it is not difficult to see that the balanced implicit methods for (4.33) are convergent with order 1/2 in the mean square sense, which coincides with Theorem 2.1 in [13].

5. Numerical Experiments

In this section, several numerical examples are given to illustrate our theoretical results from the previous sections. Consider the nonlinear equation with initial data

To show the convergence of the θ-methods (4.1) and the balanced implicit methods (4.16), we choose , , and . In all the numerical experiments, we take the numerical solution computed with a very small stepsize as a proxy for the exact solution and compare it with the numerical approximations obtained using larger stepsizes, over 2000 different discretized Brownian paths. The mean-square errors , all measured at the terminal time , are estimated by trajectory averaging, that is, . We plot our approximation to the mean-square error against the stepsize on a log-log scale in Figure 1. For reference, a dashed line of slope one-half is added in both graphs. In Figure 1, the left picture shows the convergence of the θ-methods (4.1) and the right picture shows the convergence of the balanced implicit methods (4.16).
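The slope read off the log-log plot can also be computed directly: a least-squares fit of log(error) against log(h) gives the empirical order. A sketch with synthetic errors obeying the exact power law $e(h) = 0.7\sqrt{h}$ (standing in for measured mean-square errors; all names are illustrative):

```python
import numpy as np

def empirical_order(hs, errors):
    """Least-squares slope of log(error) versus log(h)."""
    slope, _intercept = np.polyfit(np.log(hs), np.log(errors), 1)
    return slope

hs = np.array([2.0 ** -k for k in range(4, 9)])
errors = 0.7 * np.sqrt(hs)            # synthetic data with exact order 1/2
order = empirical_order(hs, errors)   # equals 0.5 up to rounding
```

An empirical slope near 0.5 is what Theorems 4.3 and 4.8 predict for the two methods tested.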

We see that the slopes of the two curves match the reference line well in both pictures of Figure 1, which is consistent with the strong order of one-half established in Theorems 4.3 and 4.8.

6. Conclusion

In this paper, we consider a family of drift-implicit one-step methods for NSDDEs with jumps. A relationship between the consistency order and the convergence order is established, and a general framework for the mean-square convergence of the methods is provided. The applicability of the mean-square convergence theory is illustrated by the stochastic θ-methods and the balanced implicit methods, generalizing several existing results. The examples presented in Section 4 show that the main result of this paper can be applied not only to semi-implicit methods but also to fully implicit methods.

Acknowledgments

This research is supported by the National Natural Science Foundation of China (nos. 10871207 and 11171352) and by the project sponsored by the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry.