Abstract and Applied Analysis
Volume 2014 (2014), Article ID 432718, 16 pages
An Optimal Control Problem of Forward-Backward Stochastic Volterra Integral Equations with State Constraints
1School of Mathematics and Statistics, Northeast Normal University, Changchun 130024, China
2School of Mathematics, Shandong University, Jinan 250100, China
Received 26 November 2013; Accepted 2 January 2014; Published 27 February 2014
Academic Editor: Litan Yan
Copyright © 2014 Qingmeng Wei and Xinling Xiao. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
This paper is devoted to stochastic optimal control problems for systems governed by forward-backward stochastic Volterra integral equations (FBSVIEs, for short) with state constraints. Using Ekeland's variational principle, we obtain a variational inequality. Then, by a duality method, we derive a stochastic maximum principle that gives necessary conditions for the optimal controls.
In addition to their applications in biology, physics, and other fields, Volterra integral equations often appear in mathematical economics, for example, in modeling the relationship between capital and investment when memory effects are present (the present stock of capital depends on the history of investment strategies over a period of time). The simplest way to describe such memory effects is through Volterra integral operators. Motivated by the importance of Volterra integral equations, we study a stochastic optimal control problem for a class of nonlinear stochastic equations: forward-backward stochastic Volterra integral equations (FBSVIEs, for short). We first review the background of the two constituent classes: forward stochastic Volterra integral equations (FSVIEs, for short) and backward stochastic Volterra integral equations (BSVIEs, for short).
Let $W(\cdot)$ be a standard $d$-dimensional Brownian motion defined on a complete filtered probability space $(\Omega, \mathcal{F}, \mathbb{F}, P)$, where $\mathbb{F} = \{\mathcal{F}_t\}_{t \ge 0}$ is the natural filtration generated by $W(\cdot)$ and augmented by all the $P$-null sets in $\mathcal{F}$. Consider the following FSVIE. The reader may refer to [2–13] and the references cited therein for general results on FSVIEs. When studying stochastic optimal control problems for FSVIEs, one needs a suitable adjoint equation in order to derive a stochastic maximum principle. This adjoint equation is in fact a linear BSVIE, which motivates the investigation of the theory and applications of BSVIEs.
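In the literature (e.g., Yong [14]), an FSVIE of this type is typically written in the following general form; the coefficients $b$, $\sigma$ and free term $\varphi$ below are a hedged sketch of the general shape rather than the paper's precise equation (1):

```latex
X(t) = \varphi(t) + \int_0^t b\bigl(t,s,X(s)\bigr)\,ds
              + \int_0^t \sigma\bigl(t,s,X(s)\bigr)\,dW(s),
\qquad t \in [0,T].
```

When $b$ and $\sigma$ do not depend on their first argument $t$, this reduces to an ordinary (forward) stochastic differential equation written in integral form.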
The following BSVIE was first introduced by Yong, where the generator and the free term are given maps, and for each $t$ the free term is only required to be $\mathcal{F}_T$-measurable (Lin studied (2) in a special case). A BSVIE is a natural generalization of a backward stochastic differential equation (BSDE, for short). Compared with BSDEs, BSVIEs have their own features, as listed in Yong [14, 16]. One advantage is the ability to study time-inconsistent phenomena. As shown by Laibson and Strotz, time-inconsistent preferences commonly arise in the real world. Here one needs BSVIEs to generalize the so-called stochastic differential utility and dynamic risk measures (see [20–23]). Other applications are in nonexponential discounting problems (see Ekeland and Lazrak and Ekeland and Pirvu) and time-inconsistent optimal control problems (see Yong [26, 27]). In [26, 27], Yong solved a time-inconsistent optimal control problem by introducing a family of multi-person noncooperative differential games and obtained an equilibrium control represented via a forward ordinary differential equation coupled with a backward Riccati-Volterra integral equation.
As stated in Yong, the free term in BSVIE (2) could represent the total (nominal) wealth of a portfolio, which might combine certain contingent claims (e.g., a European-style claim maturing at time $T$ is usually only $\mathcal{F}_T$-measurable) with current cash flows, positions in stocks, mutual funds, bonds, and so on. So, in general, the position process is not necessarily $\mathbb{F}$-adapted but merely measurable; Yong gave an example to make this point clearer. Focusing on this kind of position process, a class of convex/coherent dynamic risk measures was introduced by Yong to measure the risk dynamically. Hence, a natural control problem appears: how to minimize the risk, or how to maximize the utility. Wang and Shi obtained a maximum principle for FBSVIEs without state constraints. In this paper, we study a class of optimal control problems in which the state equations are governed by the following FBSVIEs. By choosing admissible controls, we will maximize the following objective functional.
Our formulation has the following new features.
(i) Earlier work imposed the strong assumption that the free term in (3) is adapted. By applying the duality principle introduced in Yong, we overcome this restriction and assume only the natural condition that it is $\mathcal{F}_T$-measurable.
(ii) The terminal value in (3) is the terminal state of the BSVIE. In our formulation it is also regarded as a control, so the control is a pair. In mathematical finance, such controls often appear as a "consumption-investment plan". For recent progress on this kind of control, we refer the reader to [31–34]. We also impose constraints on the state processes.
(iii) We treat the double integral in the cost functional (4) in theory; further studies on its applications are still under consideration.
In order to solve this optimal control problem, we adopt the terminal perturbation method introduced in [31–33, 35–41]. Recently, the dual approach has also been applied to utility optimization problems with volatility ambiguity (see [42, 43]). The basic idea is to perturb the terminal states directly. Applying Ekeland's variational principle to handle the state constraints, we derive a stochastic maximum principle that characterizes the optimal control. It is worth pointing out that, in place of Itô's formula, we need the two duality principles established by Yong in [16, 28] to obtain these results.
This paper is organized as follows. In Section 2, we recall some elements of the theory of BSVIEs. In Section 3, we formulate the stochastic optimization problem and prove a stochastic maximum principle. In Section 4, we give two examples: the first is associated with the model we study, and the second concerns the "terminal" control.
Let $W(\cdot)$ be a $d$-dimensional Brownian motion defined on a complete filtered probability space $(\Omega, \mathcal{F}, \mathbb{F}, P)$, where $\mathbb{F} = \{\mathcal{F}_t\}_{t \ge 0}$ is the natural filtration generated by $W(\cdot)$ and augmented by all the $P$-null sets in $\mathcal{F}$; that is, $\mathcal{F}_t = \sigma\{W(s) : 0 \le s \le t\} \vee \mathcal{N}$, where $\mathcal{N}$ is the set of all $P$-null sets.
We keep the definitions and notation for the spaces introduced in Yong.
For the Euclidean spaces involved we use the standard inner product and norm. For $0 \le S < T$, we introduce the following spaces:
(i) the space of $\mathcal{F}_T$-measurable random variables with finite second moment;
(ii) the space of $\mathbb{F}$-adapted processes that are square-integrable on $[S,T]$;
(iii) the space of two-parameter processes $Z(\cdot,\cdot)$ such that, for almost all $t$, $Z(t,\cdot)$ is $\mathbb{F}$-adapted and square-integrable;
together with the corresponding product spaces used for adapted M-solutions.
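Following Yong's notation for adapted M-solutions, these spaces are commonly written as follows; the dimensions and ranges below are a hedged reconstruction based on [16] rather than a verbatim copy of the paper's definitions:

```latex
L^2_{\mathcal{F}_T}(\Omega;\mathbb{R}^m)
  = \bigl\{\xi : \xi \text{ is } \mathcal{F}_T\text{-measurable},\ \mathbb{E}|\xi|^2 < \infty\bigr\},\\
L^2_{\mathbb{F}}(S,T;\mathbb{R}^m)
  = \Bigl\{X(\cdot) : X \text{ is } \mathbb{F}\text{-adapted},\ \mathbb{E}\!\int_S^T |X(s)|^2\,ds < \infty\Bigr\},\\
L^2\bigl(S,T; L^2_{\mathbb{F}}(S,T;\mathbb{R}^{m\times d})\bigr)
  = \Bigl\{Z(\cdot,\cdot) : Z(t,\cdot)\in L^2_{\mathbb{F}}(S,T;\mathbb{R}^{m\times d}) \text{ for a.e. } t,\
      \int_S^T \mathbb{E}\!\int_S^T |Z(t,s)|^2\,ds\,dt < \infty\Bigr\},\\
\mathcal{H}^2[S,T] = L^2_{\mathbb{F}}(S,T;\mathbb{R}^m)
      \times L^2\bigl(S,T; L^2_{\mathbb{F}}(S,T;\mathbb{R}^{m\times d})\bigr).
```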
2.2. Backward Stochastic Volterra Integral Equations
For the reader's convenience, we present some results on BSVIEs that will be used later.
Consider the following integral equation (8), where the generator and the free term are given.
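In Yong's formulation [16], a BSVIE of this type takes the following general form; the arguments of the generator $g$ below are a hedged sketch of the shape of equation (8), not a verbatim transcription:

```latex
Y(t) = \psi(t) + \int_t^T g\bigl(t,s,Y(s),Z(t,s),Z(s,t)\bigr)\,ds
             - \int_t^T Z(t,s)\,dW(s), \qquad t \in [0,T].
```

The distinguishing feature is the appearance of both $Z(t,s)$ and $Z(s,t)$ in the generator; the values $Z(s,t)$ for $s > t$ are pinned down by the M-solution condition $Y(t) = \mathbb{E}[Y(t)] + \int_0^t Z(t,s)\,dW(s)$.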
We assume the following. (H) The generator is measurable, is $\mathbb{F}$-progressively measurable in its second time argument for each fixed first argument, and is Lipschitz continuous in the unknown variables, with a deterministic Lipschitz-rate function satisfying a suitable integrability condition.
The notion of adapted M-solution of BSVIEs used below was introduced by Yong.
For the proof of the following well-posedness results, the reader is referred to Yong.
Lemma 2. Let (H) hold. Then, for any square-integrable free term, BSVIE (8) admits a unique adapted M-solution on the given interval, and a standard a priori estimate holds. Moreover, if another generator and free term also satisfy (H) and one considers the adapted M-solution of (8) with these data, then a continuous-dependence (stability) estimate holds for the difference of the two M-solutions.
Lemma 3. Let suitable square-integrable coefficients be given. Let $X(\cdot)$ be the solution of the following FSVIE, and let $(Y(\cdot), Z(\cdot,\cdot))$ be the adapted M-solution of the following BSVIE. Then the following duality relation holds.
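The duality principle of Yong [16] underlying Lemma 3 can be sketched as follows; the linear coefficients $A_0$, $A_1$ and the pairing below are a hedged reconstruction, not the paper's exact statement. If

```latex
X(t) = \varphi(t) + \int_0^t A_0(t,s)\,X(s)\,ds + \int_0^t A_1(t,s)\,X(s)\,dW(s),\\
Y(t) = \psi(t) + \int_t^T \bigl[A_0(s,t)^{\top} Y(s) + A_1(s,t)^{\top} Z(s,t)\bigr]\,ds
             - \int_t^T Z(t,s)\,dW(s),\\
\text{with } (Y,Z) \text{ the adapted M-solution, then}\quad
\mathbb{E}\int_0^T \langle \psi(t), X(t)\rangle\,dt
  = \mathbb{E}\int_0^T \langle \varphi(t), Y(t)\rangle\,dt .
```

Note how the arguments $(t,s)$ of the forward coefficients are swapped to $(s,t)$ in the adjoint equation; this swap is what replaces the role of Itô's formula in the non-Markovian Volterra setting.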
Lemma 4. Let suitable coefficients be given. Suppose that $(Y(\cdot), Z(\cdot,\cdot))$ is the solution of the following linear BSVIE and that $X(\cdot)$ is the solution of the following FSVIE. Then the following duality relation holds.
3. Stochastic Optimization Problem
3.1. One Kind of Stochastic Optimization Problems
Let the control domains be nonempty convex subsets of the appropriate Euclidean spaces; set
For any given control pair, we consider the following controlled integral equation (22), where the coefficients are given measurable maps.
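As a hedged sketch of the shape of system (22) (the precise arguments of $b$, $\sigma$, $g$ and the free terms follow the paper; the names below are illustrative), the controlled FBSVIE couples a forward and a backward Volterra equation through the control pair $(u(\cdot), \xi)$:

```latex
X(t) = \varphi(t) + \int_0^t b\bigl(t,s,X(s),u(s)\bigr)\,ds
              + \int_0^t \sigma\bigl(t,s,X(s),u(s)\bigr)\,dW(s),\\
Y(t) = \psi(t,\xi) + \int_t^T g\bigl(t,s,X(s),Y(s),Z(t,s),Z(s,t),u(s)\bigr)\,ds
              - \int_t^T Z(t,s)\,dW(s).
```

The forward equation is solved first for $X$; its solution then enters the generator of the backward equation, whose adapted M-solution is $(Y,Z)$.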
For each admissible control pair, define the following objective functional, where the running-cost and terminal-cost coefficients are given maps.
We assume the following: the coefficients of the state equation and of the cost functional are continuous in their arguments and continuously differentiable in the state and control variables; the derivatives of the state-equation coefficients in these variables are bounded; the derivatives of the cost coefficients are bounded by a term of linear growth in the state and control variables; and the free terms satisfy the required measurability and integrability conditions.
Under these assumptions, for any given admissible control, the FSVIE in (22) has a unique solution. Given this forward solution, the BSVIE has a unique adapted M-solution. Hence, there exists a unique triple satisfying (22).
Now we formulate the optimization problem (24), where the state-constraint function is continuous and satisfies the stated conditions.
3.2. Variational Equation
We define a metric on the set of admissible control pairs as follows; it is straightforward to check that the resulting space is a complete metric space.
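In terminal-perturbation arguments, such a metric on control pairs $(u,\xi)$ is typically of the following form; this is a hedged sketch, with the precise exponents and norms as in the paper:

```latex
d\bigl((u_1,\xi_1),(u_2,\xi_2)\bigr)
  = \Bigl( \mathbb{E}\!\int_0^T |u_1(t)-u_2(t)|^2\,dt
         + \mathbb{E}\,|\xi_1-\xi_2|^2 \Bigr)^{1/2}.
```

Completeness of the admissible set under this metric is what allows Ekeland's variational principle to be applied in Section 3.3.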
Let an optimal control pair for problem (24) be given, and let the corresponding state processes solve (22). For any admissible pair and any $\rho \in [0,1]$, the convexity of the control domains implies that the convex combination of the two control pairs is again admissible. The solution of FBSVIE (22) corresponding to this perturbed control is called the perturbed state process.
Consider the following FBSVIE (27), whose coefficients are the derivatives of the original coefficients evaluated along the optimal trajectory. This equation is called the variational equation.
Lemma 5. Assume that the above assumptions hold. Then the perturbed state processes converge to the optimal ones, with first-order expansions given by the solutions of the variational equation; that is, one has the following.
Proof. (1) We prove the first equality. By the FSVIEs in (22) and (27), we have
Therefore, we have
By choosing a suitable constant, we have
Applying Lebesgue’s dominated convergence theorem, we have
(2) By the BSVIEs in (22) and (27), we obtain an equation for the differences of the solutions. Applying the stability estimate of Lemma 2 with the appropriate data, followed by Lebesgue's dominated convergence theorem, and using the first result already obtained, we get the desired conclusions.
3.3. Variational Inequality
In this subsection, using Ekeland's variational principle (Lemma 6 below), we derive the variational inequality.
Lemma 6 (Ekeland's variational principle). Let $(V,d)$ be a complete metric space and let $F : V \to \mathbb{R} \cup \{+\infty\}$ be a proper lower semicontinuous function bounded from below. Suppose that, for some $\varepsilon > 0$, there exists $u \in V$ satisfying $F(u) \le \inf_{v \in V} F(v) + \varepsilon$. Then there exists $u_\varepsilon \in V$ such that (i) $F(u_\varepsilon) \le F(u)$; (ii) $d(u_\varepsilon, u) \le \sqrt{\varepsilon}$; (iii) $F(v) \ge F(u_\varepsilon) - \sqrt{\varepsilon}\, d(v, u_\varepsilon)$ for all $v \in V$.
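On a finite metric space the conclusion of Lemma 6 can be checked by brute force. The sketch below (with illustrative names; the grid, the objective, and the choice $\lambda = \sqrt{\varepsilon}$ are assumptions made for the demonstration, not part of the paper) searches for a point satisfying the three conditions, given an $\varepsilon$-minimizer:

```python
import math

def ekeland_point(points, d, F, u, eps, lam):
    """Search a finite metric space (points, d) for a point v satisfying
    the three Ekeland conditions relative to the eps-minimizer u:
      (i)   F(v) <= F(u)
      (ii)  d(v, u) <= lam
      (iii) F(w) >= F(v) - (eps / lam) * d(w, v)  for all w."""
    for v in points:
        if F(v) <= F(u) and d(v, u) <= lam:
            if all(F(w) >= F(v) - (eps / lam) * d(w, v) for w in points):
                return v
    return None

# Illustrative example: minimize F(x) = (x - 0.3)^2 on a grid over [0, 1].
grid = [i / 100 for i in range(101)]
dist = lambda a, b: abs(a - b)
F = lambda x: (x - 0.3) ** 2

u = 0.5          # an eps-minimizer: F(u) = 0.04 <= inf F + eps for eps = 0.05
eps = 0.05
v_star = ekeland_point(grid, dist, F, u, eps, lam=math.sqrt(eps))
print(v_star)    # a grid point near the true minimizer 0.3
```

The returned point improves on the starting value, stays within $\sqrt{\varepsilon}$ of it, and is a strict minimizer of the penalized functional $F(\cdot) + \sqrt{\varepsilon}\, d(\cdot, v)$, exactly the structure exploited in the proof of Theorem 8.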
Given the optimal control pair, introduce a penalized functional as follows, where the penalty parameter is an arbitrary positive constant and the auxiliary functions satisfy the conditions stated above.
Theorem 8. Let the given control pair be optimal. Under the above assumptions, there exist a deterministic function and associated dual variables such that the following variational inequality holds, where the coefficients denote the derivatives of the cost and constraint functions with respect to the corresponding state and control variables.
Proof. It is easy to check that the penalized functional is continuous on the space of admissible pairs, bounded from below, and close to its infimum at the optimal pair. Then, from Lemma 6 (Ekeland's variational principle), we can find a near-optimal pair satisfying the three conditions of the lemma. For each perturbation parameter, we define
the convex perturbation of this near-optimal pair, which remains admissible by the convexity of the control domains.
Let the corresponding state processes be the solutions of BSVIE (22) under the near-optimal and perturbed controls, respectively. From Ekeland's variational principle, it follows that
We consider the following variational equation:
where the coefficients denote the derivatives of the original coefficients evaluated along the near-optimal trajectory, respectively.
Similarly to Lemma 5, we obtain first-order expansions of the perturbed state processes, which lead to corresponding expansions of the penalized functional. From the assumptions, further expansions of the constraint terms hold. For the given constraints, we consider the following cases.
Case 1. There exists such that, for any , Then