International Journal of Stochastic Analysis

Volume 2017 (2017), Article ID 2876961, 12 pages

https://doi.org/10.1155/2017/2876961

## Semigroup Solution of Path-Dependent Second-Order Parabolic Partial Differential Equations

Claremont Graduate University, Claremont, CA, USA

Correspondence should be addressed to Henry Schellhorn; henry.schellhorn@cgu.edu

Received 16 December 2016; Accepted 1 February 2017; Published 27 February 2017

Academic Editor: Lukasz Stettner

Copyright © 2017 Sixian Jin and Henry Schellhorn. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We apply a new series representation of martingales, based on Malliavin calculus, to characterize the solution of second-order path-dependent partial differential equations (PDEs) of parabolic type. For instance, we show that the generator of the semigroup characterizing the solution of the path-dependent heat equation is equal to one-half times the second-order Malliavin derivative evaluated along the frozen path.

#### 1. Introduction

In this paper we consider semilinear second-order path-dependent PDEs (PPDEs) of parabolic type. These equations were first introduced by Dupire [1] and Cont and Fournié [2] and will be defined properly in the next section.

To motivate our result, we first consider the heat equation expressed in terms of a backward time variable. For $(t,x) \in [0,T] \times \mathbb{R}$ we look for a function $u(t,x)$ that solves
$$\frac{\partial u}{\partial t}(t,x) + \frac{1}{2}\frac{\partial^2 u}{\partial x^2}(t,x) = 0, \qquad u(T,x) = g(x).$$

It is well known (see, e.g., [3], chapter 9.2 or [4]) that the solution is given by the flow of the semigroup $\{P_s\}_{s \ge 0}$; that is, $u(t,\cdot) = P_{T-t}\,g$, where
$$P_s = \exp(s\mathcal{G}), \qquad \mathcal{G} = \frac{1}{2}\frac{\partial^2}{\partial x^2}.$$
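The semigroup flow can be checked symbolically: for a polynomial terminal condition the exponential series $e^{(T-t)\mathcal{G}}g$ terminates, so the backward heat equation can be verified exactly. The following is a minimal sketch (the terminal condition $g(x) = x^4$ is an arbitrary choice, not from the paper):

```python
# Minimal sketch: u(t,x) = e^{(T-t)G} g(x) with G = (1/2) d^2/dx^2 solves the
# backward heat equation u_t + (1/2) u_xx = 0 with u(T, .) = g.
import sympy as sp

t, x, T = sp.symbols("t x T")
g = x**4  # example terminal condition (an arbitrary choice)

def generator(f):
    # G f = (1/2) f'': the infinitesimal generator of the heat semigroup
    return sp.diff(f, x, 2) / 2

u, term, n = sp.S(0), g, 0
while term != 0:  # the series terminates because g is a polynomial
    u += (T - t)**n / sp.factorial(n) * term
    term = generator(term)
    n += 1

assert sp.simplify(sp.diff(u, t) + sp.diff(u, x, 2) / 2) == 0  # backward heat eq.
assert sp.simplify(u.subs(t, T) - g) == 0                      # terminal condition
```

For this choice of $g$ the series stops after three terms, giving $u(t,x) = x^4 + 6x^2(T-t) + 3(T-t)^2$.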

The differential operator $\mathcal{G} = \frac{1}{2}\,\partial^{2}/\partial x^{2}$ is said to be the *(infinitesimal) generator* of the semigroup. Consider now the path-dependent version of the heat equation, where $\omega$ is a continuous path on the interval $[0,T]$ and the derivatives are Dupire's path derivatives. Our goal is to find the generator of the semigroup (flowing the solution) of PPDEs, which we will refer to as the *semigroup of the PPDE*. It turns out that one-half times the second-order vertical derivative is not the appropriate infinitesimal generator, because of path dependence. Indeed, the vertical derivative is the rate of change of the functional for a change of the path at time $t$ only. The correct infinitesimal generator is one-half times the second-order Malliavin derivative composed with the stopping path operator. An important difference is that the solution is now viewed as a random variable, and the (first-order) Malliavin derivative is a stochastic process on the canonical probability space for Brownian motion. The *stopping path* operator was introduced in [5]. Informally, the action of the stopping path operator (which we define rigorously later) is to freeze the path after time $t$, producing the stopped path $\omega^t$. The stopped Malliavin derivative is thus an extension of both:

(i) the Dupire derivative: while the Dupire derivative corresponds to changes of the path at only one time, the iterated Malliavin derivatives are taken with respect to changes of the canonical path at many different times;

(ii) the Malliavin derivative: while the Dupire derivative can be taken pathwise, as far as we know, the construction of the Malliavin derivative necessitates the introduction of a probability space.

The proof of the representation result is straightforward. Let us consider the path-independent case (1). Let $W$ be Brownian motion. By Itô's lemma, it is obvious that $u(t, W_t)$ is a martingale, say $M$, and that the value of this martingale at time $t$ is the conditional expectation of the terminal value. Consider now a general path-dependent terminal condition $F$. In [5], Jin et al. gave a new representation of Brownian martingales (with $M_T = F$) as an exponential of a time-dependent generator, applied to the terminal value:
$$M_t = \omega^t\left[\exp\left(\frac{1}{2}\int_t^T D_s^2\,\mathrm{d}s\right) F\right].$$

By the functional Feynman-Kac formula introduced in [1, 6], it is immediate that this operator is the generator of the semigroup of the PPDE.

The main advantage of the semigroup method is that the solution of the PPDE can be constructed semianalytically: the method is similar to the Cauchy–Kowalevski method, in that one calculates iteratively all the Malliavin derivatives of the terminal condition; indeed, (6) can be rewritten as
$$M_t = \sum_{n=0}^{\infty} \frac{1}{2^n\,n!}\,\omega^t \int_t^T \cdots \int_t^T D_{s_1}^2 \cdots D_{s_n}^2\, F\,\mathrm{d}s_1 \cdots \mathrm{d}s_n.$$

The main disadvantage can be seen immediately from (7): the terminal condition must be infinitely Malliavin differentiable. In contradistinction, the viscosity solution given in [7] requires the terminal condition only to be bounded and continuous. On the other hand, compared to the result shown in [6], the terminal condition needs to be defined only on continuous paths.

This paper is composed of two parts. In the first part, we give a rigorous proof of result (7). In doing so, we complete the proof of Theorem 2.3 in our article [5]; although the statement in that paper was correct, one step of the proof was incomplete. In the second part we characterize the generator of the semilinear PPDE.

#### 2. Martingale Representation

We first introduce some basic notation of Malliavin calculus. For a detailed introduction, we refer to [8] and our paper [5]. Let $(\Omega, \mathcal{F}, \{\mathcal{F}_s\}_{s \in [0,T]}, P)$ be the complete filtered probability space, where the filtration $\{\mathcal{F}_s\}$ is the usual augmentation of the filtration generated by Brownian motion $W$ on $[0,T]$. The canonical Brownian motion can also be denoted by $W(\omega)$, emphasizing its sample path $\omega$. We denote by $L^2(\Omega)$ the space of square integrable random variables. For simplicity, we denote $E_t[\cdot] = E[\cdot \mid \mathcal{F}_t]$.

We denote the Malliavin derivative of order $n$ at times $(t_1, \ldots, t_n)$ by $D_{t_1} \cdots D_{t_n}$. We consider the set of random variables which are infinitely Malliavin differentiable and $\mathcal{F}_T$-measurable; that is, for any integer $n$ and times $t_1, \ldots, t_n \in [0,T]$:

*Definition 1.* For any deterministic function $h \in L^2([0,T])$, we define the "stopping path" operator $\omega^t$, for $t \in [0,T]$, as
$$\omega^t \int_0^T h(s)\,\mathrm{d}W_s = \int_0^t h(s)\,\mathrm{d}W_s.$$
In particular, $\omega^t W_s = W_{s \wedge t}$; that is, $\omega^t$ "freezes" Brownian motion after time $t$.
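On a discretized sample path, the freezing operation of Definition 1 can be illustrated directly; the following sketch (an assumed discretization, not the paper's construction) replaces every value of the path after time $t$ by its value at $t$:

```python
# Illustrative sketch: the stopping-path operator sends omega to omega^t with
# (omega^t)(s) = omega(min(s, t)), i.e. the path is frozen after time t.
import numpy as np

def freeze_path(times, path, t):
    """Return the stopped path s -> path(min(s, t)) on the same time grid."""
    w_t = np.interp(t, times, path)          # value of the path at time t
    return np.where(times <= t, path, w_t)   # freeze everything after t

rng = np.random.default_rng(0)
times = np.linspace(0.0, 1.0, 1001)
dW = rng.normal(0.0, np.sqrt(np.diff(times)))
path = np.concatenate([[0.0], np.cumsum(dW)])  # a Brownian sample path

stopped = freeze_path(times, path, t=0.5)
assert np.allclose(stopped[times <= 0.5], path[times <= 0.5])  # unchanged before t
assert np.all(stopped[times > 0.5] == stopped[-1])             # constant after t
```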

From the definition, it is not hard to obtain that, for any $n$-variable smooth function $\phi$,
$$\omega^t\, \phi\!\left(\int_0^T h_1\,\mathrm{d}W, \ldots, \int_0^T h_n\,\mathrm{d}W\right) = \phi\!\left(\int_0^t h_1\,\mathrm{d}W, \ldots, \int_0^t h_n\,\mathrm{d}W\right).$$
For a general random variable $F$, $\omega^t F$ refers to the value of $F$ along the stopping scenario of Brownian motion. According to the Wiener chaos decomposition, for any $F \in L^2(\Omega)$ there exists a sequence of deterministic functions $(f_n)_{n \ge 0}$ such that $F = \sum_{n=0}^{\infty} I_n(f_n)$, with convergence in $L^2(\Omega)$. Therefore, in order to obtain an explicit representation of $\omega^t$ acting on a general random variable, we first show the following proposition.

Proposition 2. *Let $f_n$ be an $n$-variable square integrable deterministic function; then (10) holds. Therefore (11) follows, as well as isometry (12).*

Theorem 3. *Let $F \in L^2(\Omega)$. Then, for any fixed time $t$ and $\varepsilon > 0$, there exists a sequence $(F_N)$ that satisfies the following:*

(i) *$F_N \to F$ in $L^2(\Omega)$;*

(ii) *for any ;*

(iii) *there exist and a constant which does not depend on such that*

We introduce the derivative in $t$ as, for any process,

Then we can set up an operator differential equation. The following theorem is a generalization of Theorem 2.2 in [5] to functionals that are not discrete.

Theorem 4. *For , assuming that , one has*

Our main theorem is then the integral version of this operator differential equation. We first introduce a convergence condition.

*Condition 1.* For any $t \in [0,T]$, $F$ satisfies

According to isometry (12), this condition implies convergence in $L^2(\Omega)$.

*Remark 5.* We claim that other conditions exist which are easier to check than Condition 1. One of them is the convergence of the terms of series (23):

To this "local" condition, that is, a condition based on the calculation along the frozen path only, one needs to add a "global" condition involving all the paths in order to make it sufficient; that is, for any and , with a constant .

Moreover, for different structures of $F$, we have different alternative conditions which are easier to check in practical calculations. Here we list two examples.

(1) If $F = \phi\left(\int_0^T h(s)\,\mathrm{d}W_s\right)$, with $\phi$ a smooth deterministic function and $h$ a square integrable deterministic function, it is not hard to obtain

Therefore, if there exists a constant such that, for all , then, with the help of Stirling's approximation, Condition 1 is satisfied.

(2) If $F$ has chaos decomposition $F = \sum_{n \ge 0} I_n(f_n)$, we have

Then, according to (12), Condition 1 can be replaced by

with some constant, or by some much stronger but easier conditions like the following: for

We then have the following main result.

Theorem 6. *Suppose that $F$ satisfies Condition 1 and is $\mathcal{F}_T$-measurable. Then, for $t \in [0,T]$, in $L^2(\Omega)$,*
$$E[F \mid \mathcal{F}_t] = \omega^t \exp\left(\frac{1}{2}\int_t^T D_s^2\,\mathrm{d}s\right) F.$$

The importance of exponential formula (23) stems from the Dyson series representation, which we rewrite hereafter in a more convenient way:
$$E[F \mid \mathcal{F}_t] = \sum_{n=0}^{\infty} \frac{1}{2^n\,n!}\,\omega^t \int_t^T \cdots \int_t^T D_{s_1}^2 \cdots D_{s_n}^2\, F\,\mathrm{d}s_1 \cdots \mathrm{d}s_n.$$
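The Dyson series can be checked symbolically in the simplest case. The following sketch assumes $F = f(W_T)$ with $f$ a polynomial (an illustration, not the paper's proof): then every iterated Malliavin derivative $D_s^2 F$ equals $f''(W_T)$, freezing the path at $t$ replaces $W_T$ by $W_t$, and the series reduces to $\sum_n \frac{\tau^n}{2^n n!} f^{(2n)}(W_t)$ with $\tau = T - t$, which should coincide with the Gaussian conditional expectation:

```python
# Check, for F = f(W_T) with f polynomial (assumed example), that
#   sum_n (tau^n / (2^n n!)) f^(2n)(w)  equals  E[f(w + X)],  X ~ N(0, tau),
# where w stands for the frozen value W_t and tau = T - t.
import sympy as sp
from sympy.stats import Normal, E

w, z = sp.symbols("w z")
tau = sp.symbols("tau", positive=True)   # tau = T - t, the remaining time
f = z**6                                 # example terminal functional (assumed)

# Dyson series along the frozen path (terminates: derivatives of order > 6 vanish)
series = sum(
    tau**n / (2**n * sp.factorial(n)) * sp.diff(f, z, 2 * n).subs(z, w)
    for n in range(4)
)

# Direct computation of the conditional expectation
X = Normal("X", 0, sp.sqrt(tau))
cond_exp = sp.expand(E(f.subs(z, w + X)))

assert sp.simplify(series - cond_exp) == 0
```

Both sides equal $w^6 + 15\tau w^4 + 45\tau^2 w^2 + 15\tau^3$, which is the heat semigroup applied to $f$, consistent with the path-independent case of the introduction.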

#### 3. Representation of Solutions of Path-Dependent Partial Differential Equations

#### 3.1. Functional Itô Calculus

We now introduce some key concepts of the functional Itô calculus introduced by Dupire [1]. For more information, the reader is referred to [6], which we copy hereafter almost verbatim. Let $T > 0$ be fixed. For each $t \in [0,T]$ we denote by $\Lambda_t$ the set of càdlàg (right continuous with left limits) $\mathbb{R}$-valued functions on $[0,t]$. For each $\omega \in \Lambda_t$, the value of $\omega$ at $s \in [0,t]$ is denoted by $\omega(s)$. Denote $\Lambda = \bigcup_{t \in [0,T]} \Lambda_t$. For each $\omega \in \Lambda_t$, $x \in \mathbb{R}$, and $h \ge 0$ with $t + h \le T$, we define the vertically bumped path $\omega_t^x \in \Lambda_t$ and the horizontally extended path $\omega_{t,h} \in \Lambda_{t+h}$ by
$$\omega_t^x(s) = \omega(s) \ \text{for } s < t, \qquad \omega_t^x(t) = \omega(t) + x,$$
$$\omega_{t,h}(s) = \omega(s) \ \text{for } s \le t, \qquad \omega_{t,h}(s) = \omega(t) \ \text{for } t < s \le t + h.$$

*Definition 7.* Given a function $u : \Lambda \to \mathbb{R}$ and $\omega \in \Lambda_t$, suppose there exists $p \in \mathbb{R}$ such that
$$u(\omega_t^x) = u(\omega) + p\,x + o(|x|) \quad \text{as } x \to 0.$$

Then we say that $u$ is vertically differentiable at $\omega$ and define $D_x u(\omega) = p$. The function $u$ is said to be vertically differentiable if $D_x u(\omega)$ exists for each $\omega \in \Lambda$. The second-order derivative $D_{xx} u$ is defined similarly.

*Definition 8.* For a given $\omega \in \Lambda_t$, if the limit
$$D_t u(\omega) = \lim_{h \to 0^+} \frac{u(\omega_{t,h}) - u(\omega)}{h}$$
exists, then we say that $u$ is horizontally differentiable at $\omega$. The function $u$ is said to be horizontally differentiable if $D_t u(\omega)$ exists for each $\omega \in \Lambda$.
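The two path derivatives can be illustrated numerically with finite differences. The sketch below uses an assumed discretization (not the definitions verbatim): the vertical derivative bumps the path at its current endpoint, while the horizontal derivative extends it flatly in time. For the running integral $F(t,\omega) = \int_0^t \omega(s)\,\mathrm{d}s$, the vertical derivative is $0$ and the horizontal derivative is $\omega(t)$:

```python
# Finite-difference illustration of Dupire's vertical and horizontal derivatives
# for the functional F(t, omega) = int_0^t omega(s) ds.
import numpy as np

def F(times, path):
    # trapezoid rule for the running integral of the path
    dt = np.diff(times)
    return float(np.sum(0.5 * (path[1:] + path[:-1]) * dt))

times = np.linspace(0.0, 1.0, 100_001)
path = np.sin(3.0 * times)   # a smooth test path (an arbitrary choice)
h = 1e-6

bumped = path.copy()
bumped[-1] += h              # vertical bump: change only the endpoint value
vertical = (F(times, bumped) - F(times, path)) / h

ext_times = np.append(times, times[-1] + h)   # horizontal: extend flatly by h
ext_path = np.append(path, path[-1])
horizontal = (F(ext_times, ext_path) - F(times, path)) / h

assert abs(vertical) < 1e-3                # vertical derivative of F is 0
assert abs(horizontal - path[-1]) < 1e-3   # horizontal derivative is omega(t)
```

This also illustrates the remark in the introduction: the vertical derivative only sees a change of the path at the single time $t$, unlike the iterated Malliavin derivatives.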

*Definition 9.* The function $u$ is said to be in if $D_t u$, $D_x u$, and $D_{xx} u$ exist and we have

where , and are some constants depending only on $u$, and is the distance on . The classes and are defined analogously.

For each $t \in [0,T]$, we denote by $\mathrm{C}_t$ the set of continuous $\mathbb{R}$-valued functions on $[0,t]$, and we denote $\mathrm{C} = \bigcup_{t \in [0,T]} \mathrm{C}_t$. Clearly $\mathrm{C} \subset \Lambda$. Given $u$ defined on $\Lambda$ and $\bar{u}$ defined on $\mathrm{C}$, we say that $u$ is consistent with $\bar{u}$ on $\mathrm{C}$ if (since we already use the symbol $\omega$ to denote the argument of our freezing path operator (see Definition 1), we here use a different symbol to denote a sample path), for each continuous path,

*Definition 10.* The function is said to be in if there exists a function such that (30) holds and for we denote

*Note*. In the introduction, we used the notation for a family of nonanticipative functionals, where . In order to highlight the symmetry between PDEs and PPDEs, the notation in PPDEs shows that the path is the counterpart of the argument in PDEs and is used instead of . This is in spirit closer to the original notation of [1, 2]. The reader will have no problem identifying the two notations.

#### 3.2. Non-Markovian BSDEs

As in [6], we use to denote the completion of the $\sigma$-algebra generated by with . We then introduce , the space of all -adapted -valued processes with , and , the space of all -adapted -valued continuous processes with . Denote now .

We will make the following assumptions:

**(H1)** The terminal condition is a -valued function defined on . Moreover,

**(H2)** The drift is a given -valued continuous function defined on (see [6] for a definition of continuity). For any and , the function is differentiable and its derivative satisfies

where and are constants depending only on .

We now assume that (H1) and (H2) hold. We consider a non-Markovian BSDE, which is a particular case of (3.2) in [6]. By Theorem 2.8 in [6], for any , there exists a unique solution of the following BSDE:

where

In particular, the solution defines a deterministic mapping from to .
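The backward structure of such a BSDE can be sketched on a binomial approximation of Brownian motion. The scheme below is an assumed discretization for illustration (not the scheme of [6]): the walk takes steps $\pm\sqrt{\Delta t}$, and the backward induction is $Y_i = E_i[Y_{i+1}] + f(E_i[Y_{i+1}])\,\Delta t$, started from the terminal condition. With zero driver the scheme reduces to plain conditional expectation, which is exact on the symmetric tree for $g(x) = x^2$:

```python
# Minimal backward-induction sketch of a BSDE on a binomial tree for W.
import math

def bsde_binomial(g, f, T, n):
    dt = T / n
    dx = math.sqrt(dt)
    # terminal layer: after n steps the walk takes values (2k - n) * dx
    y = [g((2 * k - n) * dx) for k in range(n + 1)]
    for _ in range(n):  # backward induction through the tree
        y = [0.5 * (y[k] + y[k + 1]) for k in range(len(y) - 1)]
        y = [v + f(v) * dt for v in y]  # explicit driver step
    return y[0]

# Sanity check: zero driver gives Y_0 = E[g(W_T)]; for g(x) = x^2 this is
# E[W_T^2] = T, exact on the symmetric tree.
T = 1.0
y0 = bsde_binomial(g=lambda x: x * x, f=lambda y: 0.0, T=T, n=200)
assert abs(y0 - T) < 1e-9
```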

#### 3.3. Path-Dependent PDEs

The drift and terminal condition are required to be extended to the space of càdlàg paths because of the definition of the Dupire derivatives. We require the following (see [6] again):

**(B1)** The function is a -valued function defined on . Moreover, there is a function such that on .

**(B2)** The drift is a given -valued continuous function defined on (see [6] for a definition of continuity). Moreover, there exists a function satisfying **(H2)** such that on .

We can now define the following quasilinear parabolic path-dependent PDE:

Theorem 4.2 in [6] states the following: let be a solution of the above equation. Then we have for each , where is the unique solution of BSDE (33).

Theorem 11. *Suppose that, for each , the random variable

satisfies Condition 1. Then the solution of (35) is*

*Proof.* According to (2.20) in [9], page 351, the solution of (33) is, for ,

The result now follows from Theorem 6 and the fact that

We note that, in the case of no drift, we recover result (6).

#### 3.4. Proof of Proposition 2

The proof consists of several inductions, which we separate into several steps.

*Step 1.* We first apply Itô's lemma and the integration by parts formula for the Skorohod integral of Brownian motion to provide an explicit expansion. The goal of this step is to transform Skorohod integrals into time integrals. For example, since the kernel is symmetric:

By the integration by parts formula (see (1.49) in [8]),

Based on this idea, for and , we define

and . For , . We then prove (42) based on the following recurrence formula (43): for any

To prove (43), we apply the integration by parts formula. For simplicity, we only keep the variables and . The notation means that the variable is not an argument of the function. We also emphasize again the symmetry of the function :

Observing the properties of the binomial coefficients,

we see that, under the summation over , (47) and (49) cancel each other, (45) and (46) combine into , and (48) remains as the integral of . This proves (43) rigorously.

To prove (42), we use induction. Supposing that the formula holds in case , we consider case : by (43),

*Step 2.* We now consider the action of the freezing path operator. We first prove that, for all ,

We only present the proof for ; the general case is identical. By definition, we know that . Therefore

We now recall a basic integration rule for a smooth function :

We apply (54) to (53) and obtain

Since the number of variables is , which does not depend on , this suggests changing the order of summation. We want to sum over first. Observing that , we obtain

Using the property of the binomial coefficients again,

we see that (56) is nonzero only when . Thus we have

*Step 3.* We can now prove recurrence formula (10).

By (52) and (42), we have

We now calculate the right-hand side of (10):

Let and continue the above formula:

We now apply another basic rule of integration, for an -variable symmetric function :

Applying (62) to (61), we finally obtain

*Step 4.* We now use induction to prove (11), based on (10). For simplicity, we introduce for . Then (10) implies

We calculate the right-hand side of (11) with (65): let

The proposition is proved.

#### 3.5. Proof of Theorem 3

The proof is constructive. For any fixed , if has chaos decomposition , then for fixed (depending on ) we will study , where

In other words, the kernel is constant when its arguments lie between and . We then have the following lemma.

Lemma 12. *One has

and in particular

where is a constant which does not depend on and .*

*Proof.* For any fixed , we define a sequence of sets as

Observe that the kernels and coincide on . According to (67), we obtain

To bound (70), we apply Proposition 2 to obtain

We now apply (71) to (70); by the Cauchy–Schwarz inequality, we have

Since