International Journal of Stochastic Analysis
Volume 2017 (2017), Article ID 9693153, 9 pages
https://doi.org/10.1155/2017/9693153
Research Article

Malliavin Differentiability of Solutions of SPDEs with Lévy White Noise

Department of Mathematics and Statistics, University of Ottawa, 585 King Edward Avenue, Ottawa, ON, Canada K1N 6N5

Correspondence should be addressed to Raluca M. Balan

Received 2 January 2017; Accepted 21 February 2017; Published 12 March 2017

Academic Editor: Bohdan Maslowski

Copyright © 2017 Raluca M. Balan and Cheikh B. Ndongo. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We consider a stochastic partial differential equation (SPDE) driven by a Lévy white noise, with a Lipschitz multiplicative term. We prove that, under some conditions, this equation has a unique random field solution. These conditions are verified by the stochastic heat and wave equations. We introduce the basic elements of Malliavin calculus with respect to the compensated Poisson random measure associated with the Lévy white noise. If the multiplicative term is affine, we prove that the solution is Malliavin differentiable and its Malliavin derivative satisfies a stochastic integral equation.

1. Introduction

In this article, we consider the stochastic partial differential equation (1), with some deterministic initial conditions, in which the left-hand side involves a second-order differential operator, the noise term is the formal derivative of the Lévy white noise (defined below), and the multiplicative function is Lipschitz continuous.

A process is called a (mild) solution of (1) if it is predictable and satisfies an integral equation whose first term is the solution of the deterministic equation with the same initial conditions as (1) and whose second term is a stochastic integral against the noise, weighted by the Green function of the differential operator.
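In the notation customary in this literature (a sketch on our part, since the displayed formula was not preserved: we write u for the solution, w for the homogeneous solution, G for the Green function, σ for the multiplicative term, and L for the noise), the mild formulation reads:

```latex
u(t,x) = w(t,x) + \int_0^t \int_{\mathbb{R}^d} G(t-s, x-y)\, \sigma(u(s,y))\, L(\mathrm{d}s, \mathrm{d}y).
```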

The study of SPDEs with Gaussian noise is a well-developed area of stochastic analysis, and the behaviour of random field solutions of such equations is well understood. We refer the reader to [1] for the original lecture notes which led to the development of this area and to [2, 3] for some recent advances. In particular, the probability laws of these solutions can be analyzed using techniques from Malliavin calculus, as described in [4, 5].

On the other hand, there is a large literature dedicated to the study of stochastic differential equations (SDEs) with Lévy noise, the monograph [6] containing a comprehensive account of this topic. One can also develop a Malliavin calculus for Lévy processes with finite variance, using an analogue of the Wiener chaos representation with respect to the underlying Poisson random measure of the Lévy process. This method was developed in [7] with the same purpose of analyzing the probability law of the solution of an SDE driven by a finite variance Lévy noise. More recently, Malliavin calculus for Lévy processes with finite variance has been used in financial mathematics, the monograph [8] being a very readable introduction to this topic.

There are two approaches to SPDEs in the literature. One is the random field approach which originates in John Walsh’s lecture notes [1]. When using this approach, the solution is viewed as a real-valued process which is indexed by time and space. The other approach is the infinite-dimensional approach, due to Da Prato and Zabczyk [9], according to which the solution is a process indexed by time only, which takes values in an infinite-dimensional Hilbert space. It is not always possible to compare the solutions obtained using the two approaches (see [10] for several results in this direction). SPDEs with Lévy noise were studied in the monograph [11], using the infinite-dimensional approach. In the present article, we use the random field approach for examining an SPDE driven by the finite variance Lévy noise introduced in [12], with the goal of studying the Malliavin differentiability of the solution. As mentioned above, this study can be useful for analyzing the probability law of the solution. We postpone this problem for future work.

We begin by recalling from [12] the construction of the Lévy white noise driving (1). We consider a Poisson random measure (PRM) N of intensity dt dx ν(dz), defined on a complete probability space, where ν is a Lévy measure on ℝ; that is, ν({0}) = 0 and ν integrates min(1, z²). In addition, we assume that ν satisfies the finite-variance condition: the integral of z² against ν is finite.

We denote by N̂ the compensated PRM, defined by N̂(A) = N(A) − μ(A) for any Borel set A with μ(A) < ∞, where μ denotes the intensity measure of N. We denote by the σ-field generated by the values of N on sets included in the time interval [0, t]. We also denote the class of bounded Borel sets in ℝ^d and the class of Borel sets in ℝ which are bounded away from 0.

A Lévy white noise with intensity measure μ is a collection of zero-mean square-integrable random variables, indexed by time and by bounded Borel sets, defined by integrating the jump size z against the compensated measure N̂. These variables have the following properties:
(i) At time 0, the variables vanish a.s. for all sets.
(ii) Increments over disjoint time intervals are independent, for any disjoint sets.
(iii) For any time interval and any set, the corresponding increment is independent of the past and has an infinitely divisible characteristic function determined by ν.
We denote by the σ-field generated by these variables up to time . For any deterministic square-integrable function, we define its stochastic integral with respect to the noise.
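As an illustration of the finite-variance setting, the following sketch simulates the compensated noise on a bounded set B and checks empirically that it has mean zero and variance m₂·|B|, where m₂ is the integral of z² against ν. The choice of ν (a finite measure, i.e., the compound-Poisson case, with jump sizes uniform on [−1, 1]) and all names are assumptions for the sketch, not the paper's notation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative finite Levy measure: nu = lam * (law of Z), Z ~ Uniform[-1, 1].
lam = 2.0                  # total mass nu(R_0), assumed finite (compound-Poisson case)
vol = 3.0                  # Lebesgue measure of the bounded set B in [0,T] x R^d
mean_z = 0.0               # int z nu(dz) / lam, zero for Uniform[-1, 1]
m2 = lam * (1.0 / 3.0)     # int z^2 nu(dz) = lam * E[Z^2] = lam / 3

def compensated_noise(rng):
    """One sample of the compensated noise on B: the sum of the jump sizes of
    the PRM atoms falling in B x R_0, minus the compensator vol * int z nu(dz)."""
    n = rng.poisson(lam * vol)             # number of PRM atoms in B x R_0
    z = rng.uniform(-1.0, 1.0, size=n)     # jump sizes drawn from nu / nu(R_0)
    return z.sum() - vol * lam * mean_z

samples = np.array([compensated_noise(rng) for _ in range(200_000)])
print(abs(samples.mean()) < 0.02)                    # zero mean
print(abs(samples.var() / (m2 * vol) - 1.0) < 0.05)  # Var = m2 * |B|
```

The variance identity Var = m₂·|B| is the isometry that underlies the extension of the integral to predictable integrands below.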

Using the same method as in Itô's classical theory, this integral can be extended to random integrands, that is, to the class of predictable processes which are square-integrable with respect to the intensity measure. The integral has an isometry property analogous to Itô's isometry.

Recall that a process is predictable if it is measurable with respect to the predictable σ-field on , that is, the σ-field generated by elementary processes of the form where , is a bounded and -measurable random variable, and .

This article is organized as follows. In Section 2, we introduce the basic elements of Malliavin calculus with respect to the compensated Poisson random measure . In Section 3, we prove that, under a certain hypothesis, (1) has a unique solution. This hypothesis is verified in the case of the wave and heat equations. In Section 4, we examine the Malliavin differentiability of the solution, in the case when the function is affine. Finally, in the Appendix, we include a version of Gronwall’s lemma which is needed in the sequel.

2. Malliavin Calculus on the Poisson Space

In this section, we introduce the basic ingredients of Malliavin calculus with respect to N̂, following very closely the approach presented in Chapters 10–12 of [8]. The difference compared to [8] is that our parameter space has three variables (time, space, and jump size) instead of two (time and jump size). For the sake of brevity, we do not include the proofs of the results presented in this section. These proofs can be found in Chapter 6 of the doctoral thesis [13] of the second author.

We set and . We denote by the set of all symmetric functions . We denote by the analogous spaces of -valued functions.

Let . For any measurable function with we define the -fold iterated integral of with respect to by where . Then, for all and .

For any , we define the multiple integral of with respect to by . It follows that for all and . If with , we define .

Let be the set of -valued square-integrable random variables defined on . By Theorem of [14], any -measurable random variable admits the chaos expansion where for all and .

The chaos expansion plays a crucial role in developing the Malliavin calculus with respect to . In particular, the Skorohod integrals with respect to and are defined as follows.

Definition 1. (a) Let be a square-integrable process such that is -measurable for any . For each , let be the chaos expansion of , with . One denotes by the symmetrization of with respect to all variables. One says that is Skorohod integrable with respect to (and one writes ) if In this case, one defines the Skorohod integral of with respect to by

(b) Let be a square-integrable process such that is -measurable for any and . One says that is Skorohod integrable with respect to (and one writes ) if the process is Skorohod integrable with respect to . In this case, one defines the Skorohod integral of with respect to by

The following result shows that the Skorohod integral can be viewed as an extension of the Itô integral.

Theorem 2. (a) If is a predictable process such that , then is Skorohod integrable with respect to and (b) If is a predictable process such that , then is Skorohod integrable with respect to and

We now introduce the definition of the Malliavin derivative.

Definition 3. Let be an -measurable random variable with the chaos expansion with . One says that is Malliavin differentiable with respect to if In this case, one defines the Malliavin derivative of with respect to by One denotes by the space of Malliavin differentiable random variables with respect to .
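For orientation, the definition can be sketched in standard notation (an assumption on our part, based on the analogous definition in [8]): if F has the chaos expansion F = Σₙ Iₙ(fₙ), Malliavin differentiability requires square-summability of the weighted kernels, and the derivative is obtained by removing one integral:

```latex
\sum_{n \ge 1} n \, n! \, \| f_n \|_{L^2}^2 < \infty,
\qquad
D_{t,x,z} F = \sum_{n \ge 1} n \, I_{n-1}\big( f_n(\cdot, (t,x,z)) \big).
```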

Note that .

Theorem 4 (closability of Malliavin derivative). Let and such that in and converges in . Then, and in .

Typical examples of Malliavin differentiable random variables are exponentials of stochastic integrals: for any , the corresponding exponential random variable is Malliavin differentiable. Moreover, the set of linear combinations of random variables of this exponential form is dense in .

The following result shows that the Malliavin derivative is a difference operator, not a differential operator.

Theorem 5 (chain rule). For any and any continuous function such that and , and
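In the Poisson setting, the chain rule takes an add-one-jump form rather than a differential form. A sketch, based on the corresponding result in [8] (the notation is ours):

```latex
D_{t,x,z}\, f(F) = f\big( F + D_{t,x,z} F \big) - f(F).
```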

Similar to the Gaussian case, we have the following results.

Theorem 6 (duality formula). If and , then
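A hedged sketch of the statement, in the notation used above (this is the form the duality takes in [8] for two parameters, adapted to our three-variable parameter space): the duality formula couples the Malliavin derivative D and the Skorohod integral δ via

```latex
\mathbb{E}\!\left[ F \, \delta(u) \right]
= \mathbb{E}\!\left[ \int_0^T \!\! \int_{\mathbb{R}^d} \!\! \int_{\mathbb{R}_0}
  u(t,x,z)\, D_{t,x,z} F \;\nu(\mathrm{d}z)\, \mathrm{d}x\, \mathrm{d}t \right].
```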

Theorem 7 (fundamental theorem of calculus). Let be a process which satisfies the following conditions:
(i) for any .
(ii) .
(iii) for any .
(iv) .
Then, , and ; that is,

As an immediate consequence of the previous theorem, we obtain the following result.

Theorem 8. Let be a process which satisfies the following conditions:
(i) for all and .
(ii) .
(iii) for any .
(iv) .
Then, , and the following relation holds in :

3. Existence of Solution

In this section, we show that (1) has a unique solution.

We recall that is the solution of the homogeneous equation with the same initial conditions as (1), and is the Green function of the operator on . We assume that, for any , and we denote by its Fourier transform: We suppose that the following hypotheses hold.

Hypothesis H1. is continuous and uniformly bounded on .

Hypothesis H2. (a) ; (b) the function is continuous on , for any ; (c) there exist and a nonnegative function such that for any and , and .

Since the multiplicative function is Lipschitz continuous, there exists a constant bounding the increments of the function by the increments of its argument. In particular, the function satisfies a linear growth condition.
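Explicitly (a sketch; writing σ for the multiplicative function and C_σ for its Lipschitz constant is our assumed notation), the two bounds used repeatedly below are

```latex
|\sigma(x) - \sigma(y)| \le C_\sigma |x - y|,
\qquad
\sigma(x)^2 \le D_\sigma (1 + x^2)
\quad \text{with } D_\sigma = 2 \max\big( C_\sigma^2, \sigma(0)^2 \big),
```

the second following from the triangle inequality and the elementary bound (a + b)² ≤ 2a² + 2b².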

The following theorem is an extension of Theorem .(a) of [15] to an arbitrary operator .

Theorem 9. Equation (1) has a unique solution which is -continuous and satisfies

Proof.
Existence. We use the same argument as in the proof of Theorem of [16]. We denote by the sequence of Picard iterations defined by and

By induction on , it can be proved that the following properties hold:
(P) (i) is well defined for any .
(ii) .
(iii) is -continuous on .
(iv) is -measurable for any and .

Hypotheses (H1) and (H2) are needed for the proof of property (iii). From properties (iii) and (iv), it follows that has a predictable modification, denoted also by . This modification is used in definition (31) of . Using the isometry property (8) of the stochastic integral and (28), we have

where . For any , we denote

Taking the supremum over in the previous inequality, we obtain that for any and . By applying Lemma A.1 (the Appendix) with and , we infer that

This shows that the sequence converges in to a random variable , uniformly in ; that is,

To see that is a solution of (1), we take the limit in as in (31). In particular, this argument shows that

Uniqueness. Let , where and are two solutions of (1). A similar argument as above shows that for any . By Gronwall's lemma, for all .
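Purely as an illustration of the Picard scheme used in the proof (the scalar deterministic equation x(t) = 1 + ∫₀ᵗ σ(x(s)) ds, the grid, and all names below are our assumptions, not the SPDE itself), the iterates contract in the sup norm at the factorial rate that Lemma A.1 formalizes:

```python
import numpy as np

def picard(sigma, t_grid, n_iter):
    """Picard iterations x_{n+1}(t) = 1 + int_0^t sigma(x_n(s)) ds on a grid,
    with the time integral approximated by a left Riemann sum."""
    dt = t_grid[1] - t_grid[0]
    x = np.ones((n_iter + 1, len(t_grid)))   # x_0 = 1 plays the homogeneous part
    for n in range(n_iter):
        integrand = sigma(x[n])
        x[n + 1] = 1.0 + np.concatenate(([0.0], np.cumsum(integrand[:-1]) * dt))
    return x

t = np.linspace(0.0, 1.0, 2001)
sigma = lambda u: np.sin(u)        # a Lipschitz multiplicative term (illustrative)
x = picard(sigma, t, 12)

# Successive sup-distances between iterates shrink roughly like C^n / n!:
gaps = [np.max(np.abs(x[n + 1] - x[n])) for n in range(12)]
print(all(gaps[n + 1] <= gaps[n] for n in range(5, 11)))
```

The sup-distance between successive iterates decays factorially, which is the mechanism by which the Picard sequence in the proof converges uniformly in L².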

Example 10 (wave equation). If for and , then . Hypothesis (H2) holds since

Example 11 (heat equation). If for and , then . Hypothesis (H2) holds since
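For concreteness, a sketch of the heat-kernel computation under assumed normalizations (heat operator ∂ₜ − Δ in dimension d = 1 and Fourier kernel e^{−iξx}; these choices are ours, not necessarily the article's):

```latex
G(t,x) = (4\pi t)^{-1/2} e^{-x^2/(4t)},
\qquad
\mathcal{F} G(t,\cdot)(\xi) = e^{-t \xi^2},
\qquad
\int_{\mathbb{R}} |\mathcal{F} G(t,\xi)|^2 \, \mathrm{d}\xi = \sqrt{\frac{\pi}{2t}},
```

and the right-hand side is integrable in t near 0, which is the kind of bound Hypothesis (H2) requires.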

4. Malliavin Differentiability of the Solution

In this section, we show that the solution of (1) is Malliavin differentiable and its Malliavin derivative satisfies a certain integral equation. For this, we assume that the function is affine.

Our first result shows that the sequence of Picard iterations is Malliavin differentiable with respect to and the corresponding sequence of Malliavin derivatives is uniformly bounded in .

Lemma 12. Assume that is an arbitrary Lipschitz function. Let be the sequence of Picard iterations defined by (31). Then, for any and , and

Proof.
Step 1. We prove that the following property holds for any :

For this, we use an induction argument on . The property is clear for . We assume that it holds for and we prove that it holds for .
By the definition of and the fact that the Itô integral coincides with the Skorohod integral if the integrand is predictable, it follows that, for any ,

We fix . We apply the fundamental theorem of calculus for the Skorohod integral with respect to (Theorem 8) to the process:

We need to check that satisfies the hypotheses of this theorem. To check that satisfies (i), we apply the chain rule (Theorem 5) to and . Note that, for any ,

by the induction hypothesis. We conclude that and

We note that satisfies hypothesis (ii) since, by (44),

To check that satisfies hypothesis (iii), that is, that the process is Skorohod integrable with respect to for any , it suffices to show that this process is Itô integrable with respect to . Note that if and it is -measurable if . Hence, the process is predictable. By (46) and (28),

and, hence,

This proves that for almost all . By Theorem 2(b), is Skorohod integrable with respect to and

Finally, satisfies hypothesis (iv) since, by (50) and the isometry properties (8) and (49), we have

By Theorem 8, we infer that , , and

Since , this means that . Using (50) and (46), we can rewrite relation (52) as follows:

It remains to prove that

Using (53), the isometry property (8), relation (44), and the fact that is Lipschitz, we see that

We perform integration with respect to on . We denote

We obtain

Relation (54) follows by taking the supremum over .
Step 2. We prove that . By (57), we have

where and is given by (33). This shows that

where is given by (37). By Lemma of [16], .

We are now ready to state the main result of the present article.

Theorem 13. Assume that is an affine function; that is, for some . If is the solution of (1), then, for any and , and the following relation holds in (and hence, almost surely, for -almost all ):

Proof.
Step 1. For any and , let Note that, by Lemma 12, for any and .
Fix . We write relation (53) for and . We take the difference between these two equations. We obtainAt this point, we use the assumption that is the affine function . (An explanation of why this argument does not work in the general case is given in Remark 14.) In this case, relation (63) has the following simplified expression: Using Itô’s isometry and the inequality , we obtain where . Note that both sides of the previous inequality are zero if . Taking the integral with respect to on , we obtain where is given by (56). Recalling the definition of , we infer that where and the function is given by (33). By relation (35), we know that , which means that . By Lemma A.1 (the Appendix), we conclude that Hence, the sequence converges in to a variable , uniformly in ; that is,Step 2. We fix . By (36), in . By Step , converges in . We apply Theorem 4 to the variables and . We infer that and in . Combining this with (69), we obtain Relation (61) follows by taking the limit in as in (53).

Remark 14. Unfortunately, we were not able to extend Theorem 13 to an arbitrary Lipschitz function. To see where the difficulty comes from, recall that we need to prove that converges in , and the difference is given by (63). For an arbitrary Lipschitz function , by relation (28), we have

Using (63), the isometry property (8), the inequality , and the previous inequality, we have

The problem is that the second term on the right-hand side of the inequality above does not depend on , and hence its integral with respect to time does not vanish, so the Gronwall-type argument of Lemma A.1 no longer applies.

Appendix

A Variant of Gronwall’s Lemma

The following result is a variant of Lemma of [16], which is used in the proof of Theorem 13.

Lemma A.1. Let be a sequence of nonnegative functions defined on such that and, for any and ,

where is a nonnegative function on with and is a sequence of nonnegative constants. Then, there exists a sequence of nonnegative constants which satisfy for any , such that, for any and ,

In particular, if for some , then

Proof. Let , be a sequence of i.i.d. random variables on with density function , and . Following exactly the same argument as in the proof of Lemma of [16], we have Relation (A.2) follows with for . The fact that for all was shown in the proof of Lemma of [16].
To prove the last statement, we let and . Then, and, hence, . We conclude that
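A numerical sketch of the lemma's mechanism (illustrative names and choices throughout: we discretize the convolution recursion f_{n+1}(t) = ∫₀ᵗ g(t − s) f_n(s) ds, i.e., the case where the constants vanish): iterated convolution with an integrable kernel forces the sup norms of the iterates down at factorial speed.

```python
import numpy as np

def iterate_bound(f0, g, t_grid, n_iter):
    """Iterate f_{n+1}(t) = int_0^t g(t - s) f_n(s) ds (the vanishing-constants
    case of the lemma's recursion), discretized by a Riemann sum on the grid."""
    dt = t_grid[1] - t_grid[0]
    f = f0.copy()
    sups = []
    for _ in range(n_iter):
        # Discrete convolution: sum_{j < i} g(t_j) * f(t_{i-1-j}) * dt
        f = np.array([np.sum(g[:i] * f[:i][::-1]) * dt for i in range(len(t_grid))])
        sups.append(f.max())
    return sups

t = np.linspace(0.0, 1.0, 501)
g = np.exp(-t)            # an integrable kernel (illustrative choice)
f0 = np.ones_like(t)      # initial bound f_0 = 1
sups = iterate_bound(f0, g, t, 8)
print(all(sups[n + 1] <= sups[n] for n in range(7)))
```

Since the kernel is bounded by 1 on [0, 1], the n-th sup is bounded by 1/n!, matching the factorial decay invoked in the proofs of Theorems 9 and 13.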