Abstract

The convergence of the split-step backward Euler (SSBE) method applied to a stochastic differential equation with variable delay is proven in the $L^p$ sense. Almost sure convergence is then derived from the $L^p$ convergence by means of Chebyshev's inequality and the Borel-Cantelli lemma, and the corresponding convergence rate is obtained.

1. Introduction

In probability theory, there are several types of convergence for sequences of random variables, such as convergence in $p$th mean ($L^p$ sense), almost sure convergence, convergence in probability, and convergence in distribution. As is well known, almost sure (a.s.) convergence and convergence in the $L^p$ sense each imply convergence in probability, and convergence in probability implies convergence in distribution. Among them, almost sure convergence, also known as convergence with probability one, is the convergence concept most closely related to that of nonrandom sequences. The mean-square convergence analysis of numerical schemes for stochastic delay differential equations (SDDEs) has gained considerable research attention; we refer here to the papers of Baker and Buckwar [1], Buckwar [2], Hu et al. [3], Liu et al. [4], and Mao and Sabanis [5], to mention just a few. In particular, a type of split-step method for stochastic differential equations (SDEs) was first introduced by Higham et al. [6]; subsequently, the method was extended to a linear SDDE with constant delay (see [7]) and to an SDDE with variable delay (see [8]). However, the almost sure and $L^p$ convergence of numerical methods for SDDEs have rarely been investigated in the literature. Only recently, Gyöngy and Sabanis [9] proved the almost sure convergence of Euler approximations for a class of SDDEs under local Lipschitz and monotonicity conditions.

In this paper we study the following nonlinear SDDE:
\[
dx(t)=f\bigl(x(t),x(t-\tau(t))\bigr)\,dt+g\bigl(x(t),x(t-\tau(t))\bigr)\,dW(t),\qquad t\in[0,T],\tag{1}
\]
with initial data $x(t)=\psi(t)$ for $t\in[-\tau,0]$. Here the time delay $\tau(t)$ is a real-valued function satisfying $\tau(t)\ge 0$ and $-\tau\le t-\tau(t)$ for all $t\in[0,T]$. Unlike the delay in [9], however, the delay function will not be limited to being an increasing function of $t$. The drift function $f$ and the diffusion function $g$ are both continuous. The SDDE (1) is driven by a scalar Wiener process $W(t)$.

We can write (1) in the following integral form:
\[
x(t)=x(0)+\int_{0}^{t}f\bigl(x(s),x(s-\tau(s))\bigr)\,ds+\int_{0}^{t}g\bigl(x(s),x(s-\tau(s))\bigr)\,dW(s),\qquad t\in[0,T].\tag{2}
\]

Here we focus mainly on the SSBE method proposed by Wang and Gan [8]. Although the convergence rate in the mean-square ($L^2$) sense has already been obtained there, the convergence analysis in the $L^p$ sense is more difficult; for example, Jensen's inequality cannot be applied directly. The aim of this paper is to obtain the almost sure convergence rate, together with the $L^p$ convergence rate, of the SSBE method for SDDE (1). In the next section we recall the SSBE method and state the assumptions. The main convergence results are given in Section 3.

2. The SSBE Method

First, let us state the numerical method. In the following, we consider a uniform mesh on $[0,T]$ with grid points $t_k=kh$ for $k=0,1,\ldots,N$, where the positive integer $N$ is given and the time step size is $h=T/N$. The numerical approximation of $x(t_k)$ is denoted by $y_k$. The SSBE method of [8] for SDDE (1), given as scheme (3), advances from $y_k$ to $y_{k+1}$ through an intermediate stage in which the drift is treated implicitly, followed by an explicit diffusion stage driven by the Wiener increment $\Delta W_k=W(t_{k+1})-W(t_k)$; whenever a delayed argument $t_k-\tau(t_k)$ is nonpositive, the corresponding value is taken from the initial data $\psi$.
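For orientation, a generic split-step backward Euler step for an equation of the form (1) consists of an implicit drift stage followed by an explicit diffusion stage, schematically
\[
y_k^{\star}=y_k+h\,f\bigl(y_k^{\star},\,\tilde y(t_k-\tau(t_k))\bigr),\qquad
y_{k+1}=y_k^{\star}+g\bigl(y_k^{\star},\,\tilde y(t_k-\tau(t_k))\bigr)\,\Delta W_k,
\]
where $\tilde y$ denotes an extension of the computed values that coincides with $\psi$ on $[-\tau,0]$. This is only a sketch of the split-step idea; the precise form of scheme (3), in particular the treatment of the delayed argument, is the one specified in [8]. A minimal scalar implementation of this sketch (the function and variable names below are illustrative and are not taken from [8]) could read as follows.

import numpy as np
from scipy.optimize import fsolve

def ssbe_sdde(f, g, tau, psi, T, N, rng=None):
    # Sketch of a split-step backward Euler scheme for
    #   dX(t) = f(X(t), X(t - tau(t))) dt + g(X(t), X(t - tau(t))) dW(t),
    #   X(t) = psi(t) for t <= 0,
    # using a piecewise-constant lookup for the delayed argument.
    if rng is None:
        rng = np.random.default_rng()
    h = T / N
    t = np.linspace(0.0, T, N + 1)
    y = np.empty(N + 1)
    y[0] = psi(0.0)

    def delayed(s):
        # initial data for nonpositive times, most recent grid value otherwise
        return psi(s) if s <= 0.0 else y[int(s // h)]

    for k in range(N):
        z = delayed(t[k] - tau(t[k]))
        # implicit (backward Euler) drift stage: solve u = y_k + h f(u, z)
        y_star = fsolve(lambda u: u - y[k] - h * f(u[0], z), y[k])[0]
        # explicit diffusion stage driven by the Wiener increment
        dW = rng.normal(0.0, np.sqrt(h))
        y[k + 1] = y_star + g(y_star, z) * dW
    return t, y

For instance, ssbe_sdde(lambda x, z: -x + 0.5 * z, lambda x, z: 0.3 * z, lambda t: 1.0, lambda t: 1.0, T=5.0, N=500) produces one sample path for a linear test equation with constant unit delay and constant initial data.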

To prove the almost sure convergence of the SSBE method (3), we need to define its continuous extension. To this end, we introduce the two step processes (4) and (5), where $\mathbf 1_{A}(t)$ is the indicator function of the set $A$.

The continuous SSBE approximate solution, denoted by $\bar y(t)$ in what follows, is then defined by (6). It is not difficult to see that $\bar y(t_k)=y_k$ for every $k=0,1,\ldots,N$.
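Schematically, the two step processes take the constant values $y_k$ and $y_k^{\star}$, respectively, on each subinterval, that is,
\[
\bar y_1(t)=\sum_{k=0}^{N-1} y_k\,\mathbf 1_{[t_k,t_{k+1})}(t),\qquad
\bar y_2(t)=\sum_{k=0}^{N-1} y_k^{\star}\,\mathbf 1_{[t_k,t_{k+1})}(t),
\]
and the continuous extension (6) is then obtained by substituting such step processes into the integral form (2). The symbols $\bar y_1$, $\bar y_2$, and $y_k^{\star}$ are used here only to display the structure of the construction; the precise definitions are those of displays (4)-(6).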

At various points in this paper, we will assume subsets of the following set of conditions [8].

(A1) The SDDE (1) has a unique solution on $[-\tau,T]$. The functions $f$ and $g$ are both locally Lipschitz continuous in both of their arguments; that is, for every $R>0$ there exists a constant $L_R>0$ such that the local Lipschitz inequality (7) holds for all arguments whose absolute values do not exceed $R$. Here, $\vee$ denotes the maximum operator, $a\vee b=\max(a,b)$.

(A2) The function values $f(0,0)$ and $g(0,0)$ are bounded. The exact solution and its continuous-time approximation have $p$th moment bounds; that is, there exist a constant $C>0$ and an integer $p\ge 1$ such that the corresponding moment bounds hold.

(A3) (Hölder continuity of the initial data.) There exist positive constants such that the initial data $\psi$ satisfies a Hölder-type moment estimate on $[-\tau,0]$ for every positive integer $p$, and the delay $\tau$ is a continuous function satisfying a corresponding regularity condition.
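As an illustration of (A1) (stated here only to fix ideas; the precise inequality is display (7)), a typical local Lipschitz condition in this setting reads: for every $R>0$ there exists a constant $L_R>0$ such that
\[
|f(x_1,y_1)-f(x_2,y_2)|\vee|g(x_1,y_1)-g(x_2,y_2)|\le L_R\bigl(|x_1-x_2|+|y_1-y_2|\bigr)
\]
for all $x_1,x_2,y_1,y_2$ with $|x_1|\vee|x_2|\vee|y_1|\vee|y_2|\le R$.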

3. The Convergence Analysis

In this section, we first give some lemmas needed to derive the main theorem. Following [8], we define three stopping times, the third of which is the minimum of the first two; here $\wedge$ denotes the minimum operator. Further, the infimum of the empty set is set to $\infty$.
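Stopping times of this kind are typically first-exit times from a ball of radius $R$, one for the exact solution and one for the continuous approximation, together with their minimum; for example,
\[
\rho_R=\inf\{t\ge 0:\,|x(t)|\ge R\},\qquad \sigma_R=\inf\{t\ge 0:\,|\bar y(t)|\ge R\},\qquad \theta_R=\rho_R\wedge\sigma_R,
\]
where the names $\rho_R$, $\sigma_R$, and $\theta_R$ are chosen here only for illustration.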

In what follows, $C$ denotes a generic positive constant which may depend on $p$, the initial data $\psi$, the interval of integration $[0,T]$, and the constants appearing in assumptions (A1)–(A3), but which is independent of the discretization parameters $h$ and $N$.

Lemma 1. Let $\bar y(t)$ be the solution of (6). Under assumptions (A1) and (A2), the estimate (13) holds for all $t\in[0,T]$ and for the integer $p$ given in (A2).

Proof. We note that, for $t\in[t_k,t_{k+1})$, the relation (14) holds.
Combining (3), (5), (6), and (14), we obtain a first auxiliary moment estimate under assumptions (A1) and (A2).
Now we prove (13). From (6), we have a corresponding representation of the relevant difference for $t\in[t_k,t_{k+1})$. To estimate it, we will first apply the elementary inequality (17). Then, for $t\in[t_k,t_{k+1})$ and the integer $p$ from (A2), we obtain (18).
Applying Hölder's inequality to the first integral in (18), as well as the Burkholder-Davis-Gundy inequality to the Itô integral in (18), we obtain (19).
We then have (20) under assumption (A2).
Similarly, repeating the previous procedure for the remaining term, we obtain the corresponding estimate. Therefore, by substituting this result and (20) into (19), the proof is complete.
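For the reader's convenience, we recall standard forms of the two types of estimates invoked above: the elementary power-of-sums inequality
\[
\Bigl|\sum_{i=1}^{n}a_i\Bigr|^{q}\le n^{q-1}\sum_{i=1}^{n}|a_i|^{q},\qquad q\ge 1,
\]
and the Burkholder-Davis-Gundy inequality
\[
\mathbb E\Bigl[\sup_{0\le s\le t}\Bigl|\int_{0}^{s}\varphi(u)\,dW(u)\Bigr|^{q}\Bigr]\le C_q\,\mathbb E\Bigl[\Bigl(\int_{0}^{t}|\varphi(u)|^{2}\,du\Bigr)^{q/2}\Bigr],\qquad q>0,
\]
valid for suitably integrable integrands $\varphi$.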

Lemma 2. Let $\bar y(t)$ be the solution of (6). Under assumptions (A1), (A2), and (A3), the estimate (21) holds for all $t\in[0,T]$ and for the integer $p$ given in (A2).

Proof. To show the desired result, we distinguish the four possible cases considered in [8, Lemma 2.5]. In case (1), without loss of generality, we may fix a representative configuration of the relevant time points. Thus, from (4), (5), and (6), we have (22).
We note that the auxiliary bound (23) holds. Therefore, under assumptions (A1) and (A2), applying (17) and (23), we can derive (21) from (22). Under assumptions (A1)–(A3), the proofs of the three other cases follow in a manner similar to that of Lemma 2.5 in [8]. Combining these results gives the required estimate.

Lemma 3. Let $x(t)$ be the solution of SDDE (1) and let $\bar y(t)$ be the solution of (6). Under assumptions (A1), (A2), and (A3), one obtains, for the integer $p$ given in (A2), a moment bound for the difference between $x(t)$ and $\bar y(t)$ up to the stopping time.

Proof. Suppose that $t\in[0,T]$ and that $t$ does not exceed the stopping time. From (2) and (6), we obtain an expression for the difference between the exact solution and the continuous SSBE approximation. Applying Itô's formula (see [10]) to a suitable power of this difference and then taking expectations, we get (26). Denote the left-hand side of (26) by $J(t)$.
As we know, if $\theta$ is a stopping time and $M(t)$ is a martingale, then the stopped process $M(t\wedge\theta)$ is a martingale too. Therefore, the expectation of the stopped Itô integral vanishes by the martingale property, and the inequality (26) implies (28) for all $t\in[0,T]$ up to the stopping time.
By applying the local Lipschitz condition and the elementary Young inequality, we obtain an estimate which, by Lemmas 1 and 2, leads to the required bound for this term.
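Recall that the elementary Young inequality states that, for all $a,b\ge 0$ and every $\varepsilon>0$,
\[
ab\le\frac{\varepsilon}{2}\,a^{2}+\frac{1}{2\varepsilon}\,b^{2},
\]
with the more general form $ab\le\varepsilon a^{q}/q+\varepsilon^{-q'/q}b^{q'}/q'$ for conjugate exponents $q,q'>1$.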
A similar result can be derived for the remaining term, so that (28) yields (31), where the constants involved are generic and independent of $h$.
Now we proceed by using an induction argument over consecutive intervals of a fixed length determined by the delay, up to the end of the interval $[0,T]$.
Step 1. Given $t$ in the first interval, it is easy to bound the delayed term appearing in (31); inserting this bound into (31) and applying the Gronwall inequality [11] gives (33).
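One standard integral form of the Gronwall inequality reads: if $u$ is a nonnegative continuous function satisfying
\[
u(t)\le a+b\int_{0}^{t}u(s)\,ds,\qquad t\in[0,T],
\]
for constants $a,b\ge 0$, then $u(t)\le a\,e^{bt}$ for all $t\in[0,T]$.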
Further, from (33), we obtain (34). Combining (33) and (34) gives the desired estimate on the first interval.
Step 2. Suppose now that $t$ lies in the next interval, which implies that the delayed argument falls within the range already covered; then the corresponding bound follows from the previous recursive step. Hence, the inequality (31) again gives the analogous estimate.
Finally, we can obtain the desired result by using an approach similar to that in Step 1.

The next lemma shows that the sequence of approximate solutions tends to the exact solution, as $h\to 0$, in the sense of the $L^p$-norm under the local Lipschitz condition.

Lemma 4. We assume that the conditions of Lemma 3 are fulfilled. Then, for the integer $p$ given in (A2), one has (37).

Proof. Using (2), (6), (17), the Burkholder-Davis-Gundy inequality, and Hölder's inequality, we have (38).
Combining Lemmas 1, 2, and 3, and first using Minkowski's inequality, we get (39).
Applying the local Lipschitz condition (7) to (38) and then using (39), we obtain the desired relation (37).
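Recall also that Minkowski's inequality states that
\[
\bigl(\mathbb E|X+Y|^{q}\bigr)^{1/q}\le\bigl(\mathbb E|X|^{q}\bigr)^{1/q}+\bigl(\mathbb E|Y|^{q}\bigr)^{1/q},\qquad q\ge 1,
\]
for random variables $X$ and $Y$ with finite $q$th moments.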

The following two theorems are the main results of this paper. They give, respectively, the $L^p$ error bound and the almost sure error bound of the SSBE method (3).

Theorem 5. One assumes that the conditions of Lemma 3 are fulfilled. Then, for the integer $p$ given in (A2) and for sufficiently large $N$, the approximation (3) for SDDE (1) is convergent in the $L^p$ sense and the error bound (40) holds.

The result can be proved by an approach similar to those in [5, Theorem 2.1] and [6, Theorem 2.2]; see the appendix.

Theorem 6. One assumes that the conditions of Lemma 3 are fulfilled. Then, for every positive number $\varepsilon$ and for sufficiently large $N$, the approximation (3) converges almost surely to the exact solution of SDDE (1), with a convergence rate determined by the $L^p$ rate of Theorem 5.

Proof. From Theorem 5, Chebyshev's inequality [11] gives (42) for any positive $\varepsilon$ and any $N$.
The resulting series is convergent when summed over $N$; that is, its sum is finite. Further, by applying the Borel-Cantelli lemma [11], the inequality (42) implies that the exceptional events occur only finitely often, almost surely. Thus, under assumptions (A1)–(A3), the approximate solution converges (a.s.) to the exact solution uniformly on $[0,T]$ as $h\to 0$.
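Schematically, the passage from the $L^p$ bound to almost sure convergence runs as follows (the exponents $q$ and $\gamma$ below are placeholders standing for the moment order and the $L^p$ rate delivered by Theorem 5). Suppose that
\[
\mathbb E\Bigl[\sup_{0\le t\le T}\bigl|x(t)-\bar y(t)\bigr|^{q}\Bigr]\le C\,h^{q\gamma},\qquad h=\frac{T}{N}.
\]
Then, for any $0<\eta<\gamma$, Chebyshev's inequality gives
\[
\mathbb P\Bigl(\sup_{0\le t\le T}\bigl|x(t)-\bar y(t)\bigr|>N^{-\eta}\Bigr)\le N^{q\eta}\,\mathbb E\Bigl[\sup_{0\le t\le T}\bigl|x(t)-\bar y(t)\bigr|^{q}\Bigr]\le C\,T^{q\gamma}\,N^{-q(\gamma-\eta)} .
\]
If $q(\gamma-\eta)>1$, the right-hand side is summable over $N$, and the Borel-Cantelli lemma yields that, almost surely, $\sup_{0\le t\le T}|x(t)-\bar y(t)|\le N^{-\eta}$ for all sufficiently large $N$; in particular, the approximation converges to the exact solution almost surely, with rate $\eta$.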

Appendix

Proof of Theorem 5. Obviously, we have the decomposition (A.1). By the improved Young inequality (see, e.g., [5, 6]), applied with a suitably chosen parameter, we obtain an estimate for one of the terms in (A.1). Furthermore, analogous estimates hold for the remaining terms by similar arguments together with the moment bounds of assumption (A2). On the other hand, Lemma 4 implies a bound for the main error term controlled up to the stopping time. Combining (A.1) with the above inequalities, and then choosing the free parameters appropriately, we obtain (40) for sufficiently large $N$.
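For reference, one standard form of the improved Young inequality of the type cited above is: for all $a,b\ge 0$, every $\delta>0$, and conjugate exponents $q,q'>1$ with $1/q+1/q'=1$,
\[
ab\le\frac{\delta}{q}\,a^{q}+\frac{\delta^{-q'/q}}{q'}\,b^{q'},
\]
which for $q=q'=2$ reduces to $ab\le\frac{\delta}{2}a^{2}+\frac{1}{2\delta}b^{2}$.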

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to thank the anonymous reviewers and editor for their helpful and constructive comments that greatly contributed to improving the paper. They would also like to thank Dr. Xiang Lv for his valuable discussion. This work was partially supported by E-Institutes of Shanghai Municipal Education Commission (no. E03004), the National Natural Science Foundation of China (no. 10901106), and the Innovation Program of Shanghai Municipal Education Commission (no. 14YZ078).