Abstract and Applied Analysis
Volume 2014 (2014), Article ID 432718, 16 pages
Research Article

An Optimal Control Problem of Forward-Backward Stochastic Volterra Integral Equations with State Constraints

Qingmeng Wei1 and Xinling Xiao2

1School of Mathematics and Statistics, Northeast Normal University, Changchun 130024, China
2School of Mathematics, Shandong University, Jinan 250100, China

Received 26 November 2013; Accepted 2 January 2014; Published 27 February 2014

Academic Editor: Litan Yan

Copyright © 2014 Qingmeng Wei and Xinling Xiao. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


This paper is devoted to stochastic optimal control problems for systems governed by forward-backward stochastic Volterra integral equations (FBSVIEs, for short) with state constraints. Using Ekeland's variational principle, we obtain a variational inequality. Then, by a dual method, we derive a stochastic maximum principle that gives necessary conditions for optimal controls.

1. Introduction

Besides their applications in biology, physics, and other fields, Volterra integral equations often appear in mathematical economics, for example, in modeling the relationship between capital and investment when memory effects are present (in [1], the present stock of capital depends on the history of investment strategies over a period of time). The simplest way to describe such memory effects is through Volterra integral operators. Motivated by this, we study a stochastic optimal control problem for a class of nonlinear stochastic equations: forward-backward stochastic Volterra integral equations (FBSVIEs, for short). We first review the background of the two constituent classes: forward stochastic Volterra integral equations (FSVIEs, for short) and backward stochastic Volterra integral equations (BSVIEs, for short).

Let $W(\cdot)$ be a standard $d$-dimensional Brownian motion defined on a complete filtered probability space $(\Omega, \mathcal{F}, \mathbb{F}, P)$, where $\mathbb{F} = \{\mathcal{F}_t\}_{t \ge 0}$ is the natural filtration generated by $W(\cdot)$ and augmented by all the $P$-null sets in $\mathcal{F}$. Consider the FSVIE (1). The reader may refer to [2–13] and the references cited therein for general results on FSVIEs. When studying stochastic optimal control problems for FSVIEs, one needs a suitable adjoint equation in order to derive a stochastic maximum principle. This adjoint equation is in fact a linear BSVIE, which motivates the investigation of the theory and applications of BSVIEs.
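The displayed equation was lost in conversion. Following the standard formulation in Yong's work (an assumption about the authors' exact notation), the FSVIE (1) presumably reads as follows, and reduces to an ordinary SDE when the coefficients do not depend on the outer time variable:

```latex
% Standard FSVIE form (reconstruction; the authors' coefficients may differ):
X(t) = \varphi(t) + \int_0^t b\bigl(t,s,X(s)\bigr)\,ds
                  + \int_0^t \sigma\bigl(t,s,X(s)\bigr)\,dW(s),
       \qquad t \in [0,T].
% Special case: if b and \sigma are independent of the first argument t
% and \varphi(t) \equiv x_0, this is a classical SDE in integral form:
X(t) = x_0 + \int_0^t b\bigl(s,X(s)\bigr)\,ds
           + \int_0^t \sigma\bigl(s,X(s)\bigr)\,dW(s).
```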

The BSVIE (2) was first introduced by Yong [14], where the free term $\psi(\cdot)$ and the generator $g$ are given maps. For each $t \in [0,T]$, $\psi(t)$ is $\mathcal{F}_T$-measurable (Lin [15] studied a special case of (2)). A BSVIE is a natural generalization of a backward stochastic differential equation (BSDE, for short). Compared with BSDEs, BSVIEs have their own features, as listed in Yong [14, 16]. One advantage is the ability to study time-inconsistent phenomena. As shown by Laibson [17] and Strotz [18], time-inconsistent preferences usually exist in the real world. At this point, one needs BSVIEs to generalize the so-called stochastic differential utility in [19] and dynamic risk measures (see [20–23]). Other applications arise in nonexponential discounting problems (see Ekeland and Lazrak [24] and Ekeland and Pirvu [25]) and time-inconsistent optimal control problems (see Yong [26, 27]). In [26, 27], Yong solved a time-inconsistent optimal control problem by introducing a family of multiperson noncooperative differential games and obtained an equilibrium control, represented via a forward ordinary differential equation coupled with a backward Riccati-Volterra integral equation.
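The display for equation (2) did not survive extraction. Yong's BSVIE [14, 16] takes the following form (a reconstruction under the assumption that the authors follow that notation), and the classical BSDE is recovered as a special case:

```latex
% BSVIE introduced by Yong: the generator may depend on both Z(t,s) and Z(s,t).
Y(t) = \psi(t) + \int_t^T g\bigl(t,s,Y(s),Z(t,s),Z(s,t)\bigr)\,ds
              - \int_t^T Z(t,s)\,dW(s), \qquad t \in [0,T].
% Special case: if g(t,s,y,z,\zeta) = f(s,y,z) does not depend on the
% outer time t (nor on \zeta) and \psi(t) \equiv \xi, the solution loses
% its dependence on the first time index and (2) collapses to a BSDE:
Y(t) = \xi + \int_t^T f\bigl(s,Y(s),Z(s)\bigr)\,ds - \int_t^T Z(s)\,dW(s).
```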

As stated in Yong [28], $\psi(t)$ in BSVIE (2) could represent the total (nominal) wealth at time $t$ of a portfolio which might be a combination of contingent claims (e.g., of European style, maturing at time $T$ and therefore usually only $\mathcal{F}_T$-measurable) and some current cash flows, positions in stocks, mutual funds, bonds, and so on. So, in general, the position process $\psi(\cdot)$ is not necessarily $\mathbb{F}$-adapted but is merely a stochastic process with $\psi(t)$ being $\mathcal{F}_T$-measurable; Yong gave an example in [28] that makes this point clear. For this kind of position process, a class of convex/coherent dynamic risk measures was introduced by Yong in [28] to measure the risk dynamically. Hence, a natural class of control problems appears: how to minimize the risk, or how to maximize the utility. Wang and Shi [29] obtained a maximum principle for FBSVIEs without state constraints. In this paper, we study optimal control problems in which the state equations are governed by the FBSVIEs (3). By choosing admissible controls, we maximize the objective functional (4).
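The displayed system (3) and functional (4) were lost in extraction. A plausible shape for a controlled FBSVIE with control pair $(u(\cdot), \xi)$, offered purely as an illustration of the setup and not as the authors' exact system, is:

```latex
% Hypothetical controlled FBSVIE: forward state X, backward pair (Y,Z),
% control process u, and "terminal" control \xi. All coefficient names
% here are illustrative assumptions.
X(t) = \varphi(t) + \int_0^t b\bigl(t,s,X(s),u(s)\bigr)\,ds
                  + \int_0^t \sigma\bigl(t,s,X(s),u(s)\bigr)\,dW(s),
Y(t) = \psi(t,\xi) + \int_t^T g\bigl(t,s,X(s),Y(s),Z(t,s),Z(s,t),u(s)\bigr)\,ds
                   - \int_t^T Z(t,s)\,dW(s), \qquad t \in [0,T],
% with a cost functional featuring a double time integral, e.g.
J\bigl(u(\cdot),\xi\bigr)
  = E\Bigl[\int_0^T\!\!\int_t^T l\bigl(t,s,X(s),Y(s),u(s)\bigr)\,ds\,dt
         + \int_0^T h\bigl(t,Y(t)\bigr)\,dt\Bigr].
```

The double integral is consistent with feature (iii) below, which highlights a double integral in the cost functional (4).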

Our formulation has the following new features.
(i) In [29], the strong assumption is imposed that $\psi(t)$ in (3) is $\mathcal{F}_t$-measurable. By applying the duality principle introduced in Yong [28], we remove this restriction and assume only the natural condition that $\psi(t)$ is $\mathcal{F}_T$-measurable.
(ii) $\xi$ in (3) is the terminal state of the BSVIE. In our formulation, $\xi$ is also regarded as a control, so our control is a pair $(u(\cdot), \xi)$. In mathematical finance, such controls often appear as a "consumption-investment plan" (see [30]). For recent progress on this kind of control, we refer the reader to [31–34]. We also impose constraints on the state processes.
(iii) We consider a double integral in the cost functional (4). Further studies of its applications are still under consideration.

In order to solve this optimal control problem, we adopt the terminal perturbation method introduced in [31–33, 35–41]. Recently, the dual approach has also been applied to utility optimization problems with volatility ambiguity (see [42, 43]). The basic idea is to perturb the terminal state $\xi$ and the control $u(\cdot)$ directly. By applying Ekeland's variational principle to tackle the state constraints, we derive a stochastic maximum principle which characterizes the optimal control. It is worth pointing out that, in place of Itô's formula, we need the two duality principles established by Yong in [16, 28] to obtain these results.

This paper is organized as follows. In Section 2 we recall some elements of the theory of BSVIEs. In Section 3 we formulate the stochastic optimization problem and prove a stochastic maximum principle. In Section 4 we give two examples: the first is associated with the model we study; the second concerns the "terminal" control $\xi$.

2. Preliminaries

Let $W(\cdot)$ be a $d$-dimensional Brownian motion defined on a complete filtered probability space $(\Omega, \mathcal{F}, \mathbb{F}, P)$, where $\mathbb{F} = \{\mathcal{F}_t\}_{t \ge 0}$ is the natural filtration generated by $W(\cdot)$ and augmented by all the $P$-null sets in $\mathcal{F}$; that is, $\mathcal{F}_t = \sigma\{W(s) : 0 \le s \le t\} \vee \mathcal{N}$, where $\mathcal{N}$ is the set of all $P$-null sets.

2.1. Notations

We keep the definitions and notation for the spaces introduced in Yong [16].

For any $t \in [0,T]$, we work with the usual $L^2$-spaces of random variables and processes, equipped with their natural inner products. In particular, we use: (i) the space of $\mathcal{F}_T$-measurable, square-integrable random variables; (ii) the space of $\mathbb{F}$-adapted, square-integrable processes; (iii) the space of processes $z(\cdot,\cdot)$ of two time variables such that, for almost all $t$, $z(t,\cdot)$ is $\mathbb{F}$-adapted and square integrable; together with the product and solution spaces built from these as in [16].
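In the notation of Yong [16], which we presume is the one intended here, the precise definitions of the basic spaces are:

```latex
% Basic spaces for BSVIEs, in the notation of Yong [16]; this is a
% reconstruction, and the authors' list may contain further variants.
\begin{align*}
L^2_{\mathcal{F}_T}(\Omega;\mathbb{R}^m)
  &= \bigl\{\xi \;\big|\; \xi \text{ is } \mathcal{F}_T\text{-measurable},\
     E|\xi|^2 < \infty\bigr\},\\
L^2_{\mathbb{F}}(0,T;\mathbb{R}^m)
  &= \Bigl\{y(\cdot) \;\Big|\; y(\cdot) \text{ is } \mathbb{F}\text{-adapted},\
     E\!\int_0^T |y(t)|^2\,dt < \infty\Bigr\},\\
L^2\bigl(0,T;L^2_{\mathbb{F}}(0,T;\mathbb{R}^{m\times d})\bigr)
  &= \Bigl\{z(\cdot,\cdot) \;\Big|\; z(t,\cdot) \in L^2_{\mathbb{F}}(0,T)
     \text{ for a.e. } t,\
     E\!\int_0^T\!\!\int_0^T |z(t,s)|^2\,ds\,dt < \infty\Bigr\},\\
\mathcal{H}^2[0,T]
  &= L^2_{\mathbb{F}}(0,T;\mathbb{R}^m)
     \times L^2\bigl(0,T;L^2_{\mathbb{F}}(0,T;\mathbb{R}^{m\times d})\bigr).
\end{align*}
```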

2.2. Backward Stochastic Volterra Integral Equations

For the reader's convenience, we present some results on BSVIEs which will be used later.

Consider the following integral equation (BSVIE (8)):
$$Y(t) = \psi(t) + \int_t^T g\bigl(t,s,Y(s),Z(t,s),Z(s,t)\bigr)\,ds - \int_t^T Z(t,s)\,dW(s), \quad t \in [0,T],$$
where $\psi(\cdot)$ is a given free term with $\psi(t)$ $\mathcal{F}_T$-measurable for each $t$.

We assume the following. (H) The generator $g$ is measurable, $s \mapsto g(t,s,\cdot)$ is $\mathbb{F}$-progressively measurable for all $t$, and $g$ satisfies a Lipschitz condition in its last three arguments with a deterministic rate function $L(t,s)$ subject to a suitable integrability condition.
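The precise inequalities of (H) were lost in extraction; their presumable form, after the standing assumptions in Yong [16] (a reconstruction, not the authors' exact display), is:

```latex
% Presumed Lipschitz condition of assumption (H), after Yong [16]:
|g(t,s,y_1,z_1,\zeta_1) - g(t,s,y_2,z_2,\zeta_2)|
   \le L(t,s)\bigl(|y_1-y_2| + |z_1-z_2| + |\zeta_1-\zeta_2|\bigr),
% with square integrability of the drift at the origin,
E\int_0^T \Bigl(\int_t^T |g(t,s,0,0,0)|\,ds\Bigr)^{\!2} dt < \infty,
% and L(t,s) a deterministic function such that, for some \varepsilon > 0,
\sup_{t\in[0,T]} \int_t^T L(t,s)^{2+\varepsilon}\,ds < \infty.
```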

The following notion of adapted M-solution of BSVIEs was introduced by Yong [16].

Definition 1. A pair $(Y(\cdot), Z(\cdot,\cdot)) \in \mathcal{H}^2[0,T]$ is called an adapted M-solution of BSVIE (8) on $[0,T]$ if (8) holds in the usual Itô sense for almost all $t \in [0,T]$ and, in addition, the following holds:
$$Y(t) = E\bigl[Y(t)\bigr] + \int_0^t Z(t,s)\,dW(s), \quad \text{a.e. } t \in [0,T].$$
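On $[t,T]$ the values $Z(t,s)$ are determined by equation (8) itself, while for $s \le t$ they carry no information from (8); the extra condition in Definition 1 pins them down through the martingale representation theorem. A sketch of why such a representation always exists:

```latex
% Y(t) is \mathcal{F}_t-measurable and square integrable, so by the
% martingale representation theorem there is a unique adapted process
% s \mapsto Z(t,s) on [0,t] with
Y(t) = E\bigl[Y(t)\bigr] + \int_0^t Z(t,s)\,dW(s),
       \qquad \text{a.e. } t \in [0,T].
% Imposing this for a.e. t determines Z on the whole square [0,T]^2;
% the "M" in M-solution refers to this martingale representation.
```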

For the proof of the following well-posedness results, the readers are referred to Yong [16].

Lemma 2. Let (H) hold. Then, for any square-integrable free term $\psi(\cdot)$, BSVIE (8) admits a unique adapted M-solution $(Y(\cdot), Z(\cdot,\cdot))$ on $[0,T]$, and a corresponding a priori estimate holds. Moreover, let $(\bar\psi, \bar g)$ also satisfy (H) and let $(\bar Y(\cdot), \bar Z(\cdot,\cdot))$ be the adapted M-solution of (8) with $\psi$ and $g$ replaced by $\bar\psi$ and $\bar g$, respectively; then a stability estimate comparing $(Y, Z)$ and $(\bar Y, \bar Z)$ holds.
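The estimates of Lemma 2 did not survive extraction; their presumable form, after the well-posedness results of Yong [16] (a reconstruction, not the authors' exact display), is:

```latex
% Presumed a priori estimate (C is a generic constant):
\|(Y,Z)\|^2_{\mathcal{H}^2[0,T]}
  \le C\, E\!\int_0^T\!\Bigl[|\psi(t)|^2
      + \Bigl(\int_t^T |g(t,s,0,0,0)|\,ds\Bigr)^{\!2}\Bigr] dt,
% and presumed stability estimate for two data sets (\psi,g), (\bar\psi,\bar g):
\|(Y-\bar Y,\, Z-\bar Z)\|^2_{\mathcal{H}^2[0,T]}
  \le C\, E\!\int_0^T\!\Bigl[|\psi(t)-\bar\psi(t)|^2
      + \Bigl(\int_t^T \bigl|g\bigl(t,s,Y(s),Z(t,s),Z(s,t)\bigr)
      - \bar g\bigl(t,s,Y(s),Z(t,s),Z(s,t)\bigr)\bigr|\,ds\Bigr)^{\!2}\Bigr] dt.
```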

Yong proved the following two duality principles, for linear FSVIEs and linear BSVIEs, in [16, 28], respectively. They play a key role in deriving the maximum principle.

Lemma 3. Let $X(\cdot)$ be the solution of a linear FSVIE with free term $\varphi(\cdot)$, and let $(Y(\cdot), Z(\cdot,\cdot))$ be the adapted M-solution of the associated adjoint linear BSVIE with free term $\psi(\cdot)$. Then a duality relation holds between $X(\cdot)$ and $(Y(\cdot), Z(\cdot,\cdot))$.
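The displays of Lemma 3 were lost. For linear coefficient processes $A$ and $B$, the duality presumably takes the following shape (a reconstruction after Theorem 5.1 of [16]; the exact integrability spaces are omitted):

```latex
% Linear FSVIE:
X(t) = \varphi(t) + \int_0^t A(t,s)X(s)\,ds + \int_0^t B(t,s)X(s)\,dW(s),
% adjoint linear BSVIE (adapted M-solution), with transposed kernels:
Y(t) = \psi(t) + \int_t^T \bigl[A(s,t)^{\top}Y(s) + B(s,t)^{\top}Z(s,t)\bigr]\,ds
              - \int_t^T Z(t,s)\,dW(s),
% presumed duality relation:
E\int_0^T \langle \psi(t),\, X(t)\rangle\,dt
  \;=\; E\int_0^T \langle Y(t),\, \varphi(t)\rangle\,dt.
```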

Lemma 4. Let $(Y(\cdot), Z(\cdot,\cdot))$ be the solution of a linear BSVIE and let $X(\cdot)$ be the solution of the associated linear FSVIE. Then the corresponding duality relation holds.

For the proofs of Lemmas 3 and 4, the readers are referred to Theorem 5.1 in [16] and Theorem 3.1 in [28], respectively.

3. Stochastic Optimization Problem

3.1. One Kind of Stochastic Optimization Problems

Let the two control domains be nonempty convex subsets of Euclidean space; the set of admissible control pairs $(u(\cdot), \xi)$ consists of adapted processes valued in the first domain together with random variables valued in the second.

For any given control pair $(u(\cdot), \xi)$, we consider the controlled integral equation (22), where the coefficients are given maps satisfying the assumptions below.

For each admissible pair, define the objective functional $J(u(\cdot), \xi)$, built from given running and terminal cost functions.

We assume the following.
(H1) The coefficients of (22) and of the objective functional are continuous in their arguments and continuously differentiable in the state and control variables.
(H2) The derivatives of the forward coefficients in the state and control variables are bounded.
(H3) The derivatives of the backward coefficients and of the cost functions in the state and control variables are bounded, with appropriate growth conditions.
(H4) $\psi$ is measurable with the required progressive measurability and square integrability.

Under the above assumptions, for any given admissible pair $(u(\cdot), \xi)$, the FSVIE in (22) has a unique solution $X(\cdot)$, and the BSVIE in (22) has a unique adapted M-solution $(Y(\cdot), Z(\cdot,\cdot))$ associated with $X(\cdot)$. Hence, there exists a unique triple $(X(\cdot), Y(\cdot), Z(\cdot,\cdot))$ satisfying (22).

Now we formulate the optimization problem (24): maximize the objective functional over admissible pairs subject to the state constraints, where the constraint function is continuous and satisfies the stated conditions.

3.2. Variational Equation

We define a metric $d$ on the set of admissible pairs; it is straightforward to check that the resulting space is a complete metric space.
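The metric itself was lost in extraction. A typical choice in the terminal-perturbation literature, which we assume here purely for illustration, is:

```latex
% Hypothetical metric on admissible pairs (u, \xi):
d\bigl((u_1,\xi_1),(u_2,\xi_2)\bigr)
  = \Bigl(E\!\int_0^T |u_1(t)-u_2(t)|^2\,dt\Bigr)^{1/2}
  + \bigl(E\,|\xi_1-\xi_2|^2\bigr)^{1/2}.
% Completeness follows because both summands are L^2-type norms of
% differences, and the admissible set is closed under L^2-limits
% when the control domains are closed and convex.
```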

Let $(\bar u(\cdot), \bar\xi)$ be an optimal pair for problem (24) and let the corresponding state processes of (22) be the optimal trajectories. For any admissible $(u(\cdot), \xi)$ and $\rho \in (0,1)$, by the convexity of the control domains, the perturbed pair $(\bar u(\cdot) + \rho(u(\cdot) - \bar u(\cdot)),\ \bar\xi + \rho(\xi - \bar\xi))$ is again admissible. We denote by $(X^\rho(\cdot), Y^\rho(\cdot), Z^\rho(\cdot,\cdot))$ the solution of the corresponding FBSVIE (22) under the perturbed pair.

Consider the FBSVIE (27), whose coefficients are the derivatives of the coefficients of (22), evaluated along the optimal pair and acting on the perturbation directions. This equation is called the variational equation.

From Lemma 2 and the assumptions, it is easy to check that the variational equation (27) has a unique solution.

Now we define the difference quotients of the perturbed states. To simplify the proof, we use shorthand notation for the coefficients and their derivatives evaluated along the optimal pair. Similar to the arguments in [29, 32], we have the following lemma.

Lemma 5. Assume that the above assumptions hold. Then the solutions of the variational equation (27) are the first-order expansions of the perturbed states; that is, the difference quotients of the perturbed states converge to the solutions of (27) as $\rho \to 0$.

Proof. (1) We prove the first convergence. By the FSVIEs in (22) and (27), we write the difference between the difference quotient of the perturbed forward state and the solution of the variational equation in integral form and estimate it. Choosing a constant so that the resulting contraction-type estimate holds, and applying Lebesgue's dominated convergence theorem, we obtain the first convergence.
(2) By the BSVIEs in (22) and (27), we write the analogous difference for the backward components and apply the stability estimate of Lemma 2 with the appropriate data. Applying Lebesgue's dominated convergence theorem and using the first result, we obtain the desired conclusions.

3.3. Variational Inequality

In this subsection, using Ekeland’s variational principle (see [44]), we get the variational inequality.

Lemma 6 (Ekeland's variational principle). Let $(V, d)$ be a complete metric space and let $F : V \to \mathbb{R} \cup \{+\infty\}$ be a proper lower semicontinuous function bounded from below. Suppose that, for some $\varepsilon > 0$, there exists $v_\varepsilon \in V$ satisfying $F(v_\varepsilon) \le \inf_{v \in V} F(v) + \varepsilon$. Then there exists $\bar v_\varepsilon \in V$ such that (i) $F(\bar v_\varepsilon) \le F(v_\varepsilon)$, (ii) $d(\bar v_\varepsilon, v_\varepsilon) \le \sqrt{\varepsilon}$, (iii) $F(v) \ge F(\bar v_\varepsilon) - \sqrt{\varepsilon}\, d(v, \bar v_\varepsilon)$, for all $v \in V$.

Given the optimal pair $(\bar u(\cdot), \bar\xi)$, introduce a penalized functional $F_\varepsilon$ on the admissible set, built from the distance of the cost to its optimal value and from the state constraints, where $\varepsilon$ is an arbitrary positive constant and the data satisfy the standing assumptions.

Remark 7. Under the standing assumptions, from the well-posedness of BSVIEs (Lemma 2) and the proof of Lemma 5, we know that $F_\varepsilon$ is a continuous function on the admissible set.

Theorem 8. Let $(\bar u(\cdot), \bar\xi)$ be the optimal pair. Under the stated assumptions, there exist multipliers, not all zero, such that a variational inequality holds, in which the coefficients are the derivatives of the data with respect to the state and control variables evaluated along the optimal pair.

Proof. It is easy to check that $F_\varepsilon$ satisfies the hypotheses of Lemma 6 (Ekeland's variational principle) on the complete metric space of admissible pairs. Hence, we can find an almost-optimal pair satisfying the three conclusions of Lemma 6. For each small parameter, we define a convex perturbation of this pair, which remains admissible, and let the perturbed states be the solutions of (22) under the perturbed pair. From Ekeland's variational principle, a lower bound on the increment of $F_\varepsilon$ in terms of the distance of the perturbation follows. We then consider the corresponding variational equation, with coefficients evaluated along the almost-optimal pair.
Similarly to Lemma 5, we obtain first-order expansions of the perturbed states, which lead to expansions of $F_\varepsilon$. From the assumptions, further expansions of the constraint terms follow. For the given parameters, we consider the following cases.
Case 1. There exists such that, for any ,