Mathematical Problems in Engineering
Volume 2013 (2013), Article ID 958920, 14 pages
The Dynamic Programming Method of Stochastic Differential Game for Functional Forward-Backward Stochastic System
1Institute for Financial Studies and Institute of Mathematics, Shandong University, Jinan 250100, China
2Institute of Mathematics, Shandong University, Jinan 250100, China
Received 23 October 2012; Accepted 18 December 2012
Academic Editor: Guangchen Wang
Copyright © 2013 Shaolin Ji et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
This paper is devoted to a stochastic differential game (SDG) for a decoupled functional forward-backward stochastic differential equation (FBSDE). For our SDG, the associated upper and lower value functions are defined through the solutions of controlled functional backward stochastic differential equations (BSDEs). Applying the Girsanov transformation method introduced by Buckdahn and Li (2008), the upper and lower value functions are shown to be deterministic. We also generalize the Hamilton-Jacobi-Bellman-Isaacs (HJBI) equations to path-dependent ones. By establishing the dynamic programming principle (DPP), we show that the upper and lower value functions are viscosity solutions of the corresponding upper and lower path-dependent HJBI equations, respectively.
The theory of backward stochastic differential equations (BSDEs) has been studied widely since Pardoux and Peng first introduced nonlinear BSDEs in 1990. BSDEs have found applications in many fields, such as stochastic control (see Peng), stochastic differential games (SDGs) (see Hamadene and Lepeltier, Hamadene et al.), mathematical finance (see El Karoui et al.), and the theory of partial differential equations (PDEs) (see Peng [6, 7]).
In mathematical finance, BSDE theory provides a simple formulation of the stochastic differential utilities introduced by Duffie and Epstein. When the generator of a BSDE does not depend on the second unknown variable, the solution is exactly the recursive utility presented there. From the viewpoint of BSDEs, by studying important properties of BSDEs (such as the comparison theorem), El Karoui et al. gave a more general class of recursive utilities together with their properties. Subsequently, recursive optimal control problems whose cost functionals are described by the solution of a BSDE were studied widely. Peng obtained Bellman’s dynamic programming principle (DPP) for this kind of problem and proved that the value function is a viscosity solution of a quasi-linear second-order PDE, namely the Hamilton-Jacobi-Bellman (HJB) equation. Later, for the recursive optimal control problem driven by a BSDE in the Markovian framework, Peng introduced the notion of the backward semigroup of a BSDE, derived Bellman’s DPP, and proved that the value function is a viscosity solution of a generalized HJB equation.
By now, the DPP together with the related HJB equation has become a powerful approach to optimal control and game problems (see [10–13]). In , Buckdahn and Li studied a recursive SDG problem and characterized the relationship between the controlled system and the Hamilton-Jacobi-Bellman-Isaacs (HJBI) equation. One point is worth mentioning: in order to derive the DPP, they introduced a Girsanov transformation method to prove that the value functions are deterministic, which differs from the method developed in Peng .
Some systems can only be modeled by stochastic systems whose evolution depends on the past history of the states. Motivated by this, Ji and Yang  investigated a controlled system governed by a functional forward-backward stochastic differential equation (FBSDE) and proved that the value function is the viscosity solution of the related path-dependent HJB equation.
In this paper, inspired by [10, 14], we investigate SDG problems for functional FBSDEs. Precisely, the dynamics of our SDG is described by the following functional SDE: And the cost functional is defined as the solution of the following functional BSDE: where is a path on . The driver and can be interpreted as the running cost and the terminal cost, respectively; both depend on the past history of the dynamics. Equations (1) and (2) form a decoupled functional FBSDE. The precise conditions on are given in a later section.
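To make the notion of a functional (path-dependent) SDE concrete, the following sketch simulates one with an Euler-Maruyama scheme. The coefficients `drift` and `vol` below are hypothetical illustrative choices (a drift pulling the state toward the running average of its own past), not the coefficients of equation (1):

```python
import numpy as np

def simulate_functional_sde(b, sigma, x0, T=1.0, n=1000, rng=None):
    """Euler-Maruyama scheme for a path-dependent SDE whose coefficients
    b(t, history) and sigma(t, history) may depend on the whole past path."""
    rng = rng if rng is not None else np.random.default_rng(0)
    dt = T / n
    t = np.linspace(0.0, T, n + 1)
    x = np.empty(n + 1)
    x[0] = x0
    dW = rng.normal(0.0, np.sqrt(dt), size=n)
    for k in range(n):
        hist = x[:k + 1]  # the path of X up to time t_k
        x[k + 1] = x[k] + b(t[k], hist) * dt + sigma(t[k], hist) * dW[k]
    return t, x

# hypothetical coefficients: mean reversion toward the running average of the path
drift = lambda t, hist: np.mean(hist) - hist[-1]
vol = lambda t, hist: 0.2

t, x = simulate_functional_sde(drift, vol, x0=1.0)
```

Unlike the Markovian case, each increment here must be fed the entire simulated history, which is exactly the feature that forces the path-dependent analysis developed below.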
In what follows, we adopt the strategy-against-control formulation. The cost functional is interpreted as a payoff for Player I and as a cost for Player II. The aim of this paper is to show that the following lower and upper value functions: are viscosity solutions of the following path-dependent HJBI equations, respectively, where ( denotes the set of symmetric matrices).
To solve the above SDG, we need the functional Itô’s calculus and path-dependent PDEs recently introduced by Dupire  (for a recent account of this theory, the reader may consult [16–18]). Within this framework, Peng and Wang  derived a nonlinear Feynman-Kac formula for classical solutions of path-dependent PDEs associated with non-Markovian BSDEs. For further developments, the reader may refer to [20, 21].
In this paper, we apply the Girsanov transformation method of Buckdahn and Li  to prove that the value functions are deterministic, which differs from the method introduced by Peng [7, 9]. Using this method together with the functional Itô’s calculus (introduced by Dupire  and developed by Cont and Fournié [16–18]), we study the zero-sum two-player SDG in the non-Markovian case and show that the lower and upper value functions of our SDG are viscosity solutions of the corresponding path-dependent HJBI equations, respectively.
In contrast to the HJBI equations developed for stochastic delay systems, we establish the DPP and derive the HJBI equations within the new framework of functional Itô’s calculus.
This paper is organized as follows. Section 2 recalls the functional Itô’s calculus and the well-known results on BSDEs that we will use later. In Section 3, we formulate our SDGs and obtain the corresponding DPP. Based on this DPP, in Section 4 we derive the main result of the paper: the lower and upper value functions are viscosity solutions of the associated path-dependent HJBI equations, respectively. The proof of the DPP is given in the appendix.
2.1. Functional Itô’s Calculus
Let be fixed. For each , we denote the set of càdlàg functions from to .
For , denote by the value of at time . Thus is a càdlàg process on , and its value at time is . is the path of up to time . We denote . For each and , denotes the value of at and , which is also an element of .
We now introduce a distance on . Let and denote the inner product and norm in . For each and , we set It is obvious that is a Banach space with respect to . Since is not a linear space, is not a norm.
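A common concrete choice for such a distance in the functional Itô calculus literature compares two stopped paths by freezing each at its own time and adding the time gap. The sketch below assumes both paths are sampled on a shared time grid; the name `d_infty` and the discretization are ours:

```python
import numpy as np

def d_infty(gamma, t, eta, s, grid):
    """Distance between the path gamma stopped at time t and the path eta
    stopped at time s: the sup-norm of the stopped paths on [0, max(t, s)]
    plus the time gap |t - s|."""
    # freeze each path at its own stopping time
    g_stop = np.where(grid <= t, gamma, np.interp(t, grid, gamma))
    e_stop = np.where(grid <= s, eta, np.interp(s, grid, eta))
    mask = grid <= max(t, s)
    return float(np.max(np.abs(g_stop[mask] - e_stop[mask])) + abs(t - s))
```

Note that the time-gap term is what makes this a genuine distance between paths of *different* lengths, even though the sup-norm alone is not.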
Definition 1. A functional is -continuous at if, for any , there exists such that, for each with , one has .
is said to be -continuous if it is -continuous at each .
Definition 2. Let and be given. If there exists such that then we say that is (vertically) differentiable at and denote its gradient by . is said to be vertically differentiable in if exists for each . The Hessian is defined similarly; it is an -valued function defined on , where is the space of all symmetric matrices.
For each , we denote . It is clear that .
Definition 3. For a given , if one has then we say that is (horizontally) differentiable in at and write . is said to be horizontally differentiable in if exists for each .
Definition 4. Define as the set of functions defined on that are times horizontally and times vertically differentiable in and whose derivatives are all -continuous.
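The two derivatives act on a path functional in different directions: the vertical derivative bumps only the *current value* of the path, while the horizontal derivative extends the path *flatly in time*. A finite-difference sketch (our own illustration; `F` is a hypothetical functional, endpoint squared plus a running integral):

```python
import numpy as np

def vertical_derivative(f, grid, path, h=1e-6):
    """Approximate the vertical derivative: perturb only the current value."""
    bumped = path.copy()
    bumped[-1] += h
    return (f(grid, bumped) - f(grid, path)) / h

def horizontal_derivative(f, grid, path, h=1e-6):
    """Approximate the horizontal derivative: extend the path flatly by h."""
    grid_ext = np.append(grid, grid[-1] + h)
    path_ext = np.append(path, path[-1])
    return (f(grid_ext, path_ext) - f(grid, path)) / h

def F(grid, path):
    """Hypothetical example functional: gamma(t)^2 + integral_0^t gamma(u) du."""
    integral = np.sum(0.5 * (path[1:] + path[:-1]) * np.diff(grid))
    return path[-1] ** 2 + integral
```

For `F` above, the exact vertical derivative is `2 * gamma(t)` (the integral is insensitive to a bump at a single time point), while the exact horizontal derivative is `gamma(t)` (the integral grows at that rate under a flat extension).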
Theorem 5 (functional Itô’s formula). Let be a probability space. If is a continuous semimartingale and belongs to , then for any ,
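For reference, the functional Itô formula can be stated in the following standard form (Dupire 2009; Cont and Fournié). The symbols $D_t F$, $D_x F$, and $D_{xx} F$ below denote the horizontal derivative, the vertical gradient, and the vertical Hessian of Definitions 2 and 3; the notation is ours and may differ from the paper's displayed formula:

```latex
F(X_t) - F(X_0)
  = \int_0^t D_t F(X_s)\,\mathrm{d}s
  + \int_0^t D_x F(X_s)\,\mathrm{d}X(s)
  + \frac{1}{2}\int_0^t D_{xx} F(X_s)\,\mathrm{d}\langle X \rangle(s),
  \quad \mathbb{P}\text{-a.s.}
```

When $F$ depends on the path only through its current value, the horizontal derivative reduces to $\partial_t$ and the formula collapses to the classical Itô formula.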
In this section, we collect some important results that will be used in our SDG problems.
Let be the Wiener space, where is the set of continuous functions from to starting from (), is an arbitrarily fixed real time horizon, is the completed Borel -algebra over , and is the Wiener measure. Let be the canonical process: . We denote by the natural filtration generated by and augmented by all null sets, that is, , where is the set of all null subsets. First we present two spaces of processes as follows:
Consider , for every in , is progressively measurable and satisfies the following conditions:(A1) there exists a constant such that, ., for all , (A2).
In the following, we suppose the driver of a BSDE satisfies and .
Lemma 6. Under the assumptions and , for any random variable , the BSDE has a unique adapted solution
The reader may refer to Pardoux and Peng  for the above well-known existence and uniqueness results on BSDEs.
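As a concrete, if degenerate, illustration of Lemma 6: when the driver is $f(t,y,z) = a\,y$ and the terminal value $\xi$ is deterministic, one has $Z \equiv 0$ and $Y_t = \xi\, e^{a(T-t)}$, so the backward equation reduces to an ODE that an explicit backward Euler scheme recovers. The function name and parameters below are our own:

```python
import math

def bsde_backward_euler(a, xi, T=1.0, n=1000):
    """Backward Euler for the BSDE dY_t = -a*Y_t dt + Z_t dW_t, Y_T = xi,
    with xi deterministic, so that Z = 0 and Y_t = xi * exp(a * (T - t))."""
    dt = T / n
    y = xi  # start from the terminal condition and step backward in time
    for _ in range(n):
        y = y + a * y * dt  # Y_{t_k} = Y_{t_{k+1}} + f(Y_{t_{k+1}}) * dt
    return y

y0 = bsde_backward_euler(a=0.5, xi=2.0)  # approximates 2 * exp(0.5)
```

For a genuinely random terminal value, the backward step would additionally require conditional expectations (for instance via regression on simulated forward paths), which is beyond this sketch.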
Lemma 7 (comparison theorem). Given two coefficients and satisfying and and two terminal values and , denote by and the solutions of the BSDE with data and , respectively. Then the following hold. (i) If and , P-a.s., then , P-a.s., for all . (ii) (Strict monotonicity) If, in addition to (i), one assumes that , then , , and, in particular, .
With the notations in Lemma 7, we assume that, for some satisfying and , the drivers have the following form: where . Then, the following results hold true for all terminal values .
Lemma 8. The difference of the solutions and of BSDE (12) with the data and , respectively, satisfies the following estimate:
3. A DPP for Stochastic Differential Games of Functional FBSDEs
In this section, we consider the SDGs of functional FBSDEs.
First we introduce the setting of our SDGs. Suppose that the control state spaces are compact metric spaces. (resp., ) is the set of all (resp., )-valued -progressively measurable processes for the first (resp., second) player. If (resp., ), we call (resp., ) an admissible control.
Let us give the following mappings:
For given admissible controls , , and , we consider the following functional forward-backward stochastic system: (H) (i)For all , , , , , , , and , are -measurable.(ii) There exists a constant , such that, for all , , , for any , (iii) There exists a constant , such that for all , , for any , ,
Theorem 9. Under the assumption , there exists a unique solution solving (17).
We now recall the subspaces of admissible controls and the definitions of admissible strategies, which are similar to those in .
Definition 10. An admissible control process (resp., ) for Player I (resp., II) on is an -progressively measurable, (resp., )-valued process. The set of all admissible controls for Player I (resp., II) on is denoted by (resp., ). If , we identify the processes and in ; similarly we interpret on in .
Definition 11. A nonanticipative strategy for Player I on is a mapping such that, for any -stopping time and any , with on , it holds that on . Nonanticipative strategies for Player II on , , are defined similarly. The set of all nonanticipative strategies for Player I on is denoted by . The set of all nonanticipative strategies for Player II on is denoted by . (Recall that .)
For given processes , initial data , the cost functional is defined as follows: where the process is defined by functional FBSDE (17).
For , the lower and the upper value functions of our SDGs are defined as
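In the strategy-against-control convention of Buckdahn and Li, lower and upper value functions of this type are typically written as follows. The symbols $W$, $U$, $J$ and the control/strategy sets $\mathcal{U}_{t,T}$, $\mathcal{V}_{t,T}$, $\mathcal{A}_{t,T}$, $\mathcal{B}_{t,T}$ are our notation for the objects defined above, not necessarily the authors':

```latex
W(t,\gamma_t) = \operatorname*{ess\,inf}_{\beta \in \mathcal{B}_{t,T}}\;
                \operatorname*{ess\,sup}_{u \in \mathcal{U}_{t,T}}
                J\bigl(t,\gamma_t;\, u, \beta(u)\bigr),
\qquad
U(t,\gamma_t) = \operatorname*{ess\,sup}_{\alpha \in \mathcal{A}_{t,T}}\;
                \operatorname*{ess\,inf}_{v \in \mathcal{V}_{t,T}}
                J\bigl(t,\gamma_t;\, \alpha(v), v\bigr).
```

Here each player optimizes over nonanticipative strategies against the other player's controls, which is why the essential extrema are a priori only random variables.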
As is well known, the essential infimum and essential supremum of a family of random variables are, in general, still random variables. However, by applying the method introduced by Buckdahn and Li , we can show that and are deterministic.
Proposition 12. For any is a deterministic function in the sense that .
Proof. Let denote the Cameron-Martin space of all absolutely continuous elements whose derivative belongs to .
For any , we define the mapping . It is easy to check that is a bijection and that its law is given by . For any fixed , set . The proof is divided into the following four steps.
Step 1. For all .
First, we apply the transformation to the functional SDE: then, by the uniqueness of the solution of the functional SDE, we get Similarly, applying the transformation to the BSDE in (17) and comparing the resulting equation with the BSDE obtained from (17) by substituting the transformed control process for , the uniqueness of the solution of the functional BSDE gives Hence
Step 2. For , let . Then .
Obviously, , and it is nonanticipating. Indeed, for any -stopping time and any with on , we have on . Thus,
Step 3. For any , we have In fact, for convenience, set ; then . Then for all . Therefore,
By the definition of the essential supremum, for any random variable satisfying , we have for all . So -a.s., that is, Thus, Therefore, From the above we get
Step 4. Under the Girsanov transformation , is invariant; that is, In fact, we can prove -a.s. for all , similarly to the previous step. Combining the above steps, for all , we get Note that and have been used in the above equalities. Hence for any . Since is -measurable, this relation holds for all .
To complete the proof, we also need the following auxiliary lemma.
Lemma 13. Let be a random variable defined over the classical Wiener space , such that , for any . Then
Proof. From Lemma 13 in Buckdahn and Li , we know that, for any , For any , where for and is a finite partition of , (36) gives Therefore, for any nonnegative integer , So, for any polynomial function , we have Furthermore, (39) still holds for any . By the arbitrariness of , we obtain that is independent of for every partition of . Therefore, is independent of , which implies that is independent of itself; that is, .
Ji and Yang  proved the following estimates.
Lemma 14. Under the assumption , there exists some constant such that, for any ,
From the definition of and Lemma 14, we have the following property.
Lemma 15. There exists some constant such that, for all ,
We now adopt Peng’s notion of the stochastic backward semigroup (first introduced by Peng  to prove the DPP for stochastic control problems) to discuss a generalized DPP for our SDG (17), (22). First we define the family of backward semigroups associated with FBSDE (17).
For given , a number , and admissible control processes , we set where solves the following functional FBSDE on : Moreover, we have
Theorem 16. Suppose that holds. Then the lower value function satisfies the following DPP: for any ,
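In terms of the backward semigroup introduced above, a DPP of this kind typically takes the following form (a sketch in our own notation, not necessarily the authors'; $X^{t,\gamma_t;u,\beta(u)}$ denotes the forward solution of (17)):

```latex
W(t,\gamma_t) = \operatorname*{ess\,inf}_{\beta \in \mathcal{B}_{t,t+\delta}}\;
                \operatorname*{ess\,sup}_{u \in \mathcal{U}_{t,t+\delta}}
G^{\,t,\gamma_t;\,u,\beta(u)}_{\,t,\,t+\delta}
\Bigl[ W\bigl(t+\delta,\; X^{t,\gamma_t;\,u,\beta(u)}_{t+\delta}\bigr) \Bigr],
\qquad 0 \le \delta \le T - t.
```

That is, optimizing over the whole horizon is equivalent to optimizing over a small interval $[t, t+\delta]$ with the value function itself as terminal data, carried backward through the semigroup.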
The proof is given in the appendix.
4. Viscosity Solutions of Path-Dependent HJBI Equation
Now we study the following path-dependent PDEs: where ( denotes the set of symmetric matrices).
We will show that the value function (resp., ) defined in (22) (resp., (23)) is a viscosity solution of the corresponding equation (46) (resp., (47)). First we give the definition of a viscosity solution for this kind of PDE. For more information on viscosity solutions, the reader is referred to Crandall et al. .
Definition 17. A real-valued -continuous function is called(i)a viscosity subsolution of (46) if for any , satisfying on and , one has (ii)a viscosity supersolution of (46) if for any satisfying on and , one has (iii)a viscosity solution of (46) if it is both a viscosity sub- and supersolution of (46).
First we prove some auxiliary lemmas. For a fixed , denote where .
Consider the following BSDE:
Lemma 19. For every , one has the following:
Proof. Note that is defined through the following BSDE: Applying Itô’s formula to , we obtain Combined with , this gives the desired result.
Now consider the following BSDE:
Then, we have the following lemma.
Lemma 20. For every , one has where is independent of the control processes .
Proof. From Lemma 14, we know that there exists a constant such that Combined with , we have From (52) and (54), using Lemma 8, set Denote . Since and are Lipschitz and of linear growth, , , we have So,
Lemma 21. Denote by the solution of the following ordinary differential equation: where . Then, ,
Proof. First we define a function as follows:
Consider the following equation:
According to Lemma 6, for every , there exists a unique solving (68). Moreover,
In fact, according to the definition of and Lemma 7, we have
On the other hand, there exists a measurable function such that Setting , we know . Therefore, by the uniqueness of the solution of (68), In particular, for every . Hence, for every .
Similarly, from , we also derive
Lemma 22. For every , one has where the constant is independent of the control processes .
Proof. Since is of linear growth in , uniformly in , Lemma 8 yields a constant , independent of and of the controls , such that