
International Journal of Stochastic Analysis

Volume 2010 (2010), Article ID 329185, 25 pages

http://dx.doi.org/10.1155/2010/329185

## Optimal Control with Partial Information for Stochastic Volterra Equations

Bernt Øksendal and Tusheng Zhang

^{1}CMA and Department of Mathematics, University of Oslo, P.O. Box 1053 Blindern, 0316 Oslo, Norway
^{2}Norwegian School of Economics and Business Administration (NHH), Helleveien 30, 5045 Bergen, Norway
^{3}School of Mathematics, University of Manchester, Oxford Road, Manchester M13 9PL, UK

Received 26 October 2009; Revised 26 February 2010; Accepted 9 March 2010

Academic Editor: Agnès Sulem

Copyright © 2010 Bernt Øksendal and Tusheng Zhang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

In the first part of the paper, we obtain existence and characterizations of an optimal control for a linear quadratic control problem associated with linear stochastic Volterra equations. In the second part, using the Malliavin calculus approach, we deduce a general maximum principle for optimal control of general stochastic Volterra equations. The result is applied to solve a stochastic control problem for certain stochastic delay equations.

#### 1. Introduction

Let $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t\ge 0}, P)$ be a filtered probability space and $B(t)$ an $\{\mathcal{F}_t\}$-adapted real-valued Brownian motion. Let $\mathbb{R}_0 = \mathbb{R}\setminus\{0\}$ and let $\nu$ be a $\sigma$-finite measure on $\mathbb{R}_0$. Let $N(dt,dz)$ denote a stationary Poisson random measure on $[0,\infty)\times\mathbb{R}_0$ with intensity measure $\nu(dz)\,dt$. Denote by $\tilde N(dt,dz) = N(dt,dz) - \nu(dz)\,dt$ the compensated Poisson measure. Suppose that we have a cash flow in which the amount at time $t$ is modelled by a stochastic delay equation of the form (1.1), where $\delta > 0$ is a fixed delay and the coefficients are given bounded deterministic functions.

Suppose that we consume from this wealth at some rate at time $t$, and that this consumption rate influences the growth rate of the wealth both through its value at time $t$ and through its former value at time $t - \delta$, because of some delay mechanisms in the system determining the dynamics of the cash flow.

With such a consumption rate, the dynamics of the corresponding cash flow are given by (1.2), where the coefficients of the current and delayed consumption rates are deterministic bounded functions.
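Since the displayed dynamics (1.2) are not reproduced above, the following sketch simulates a hypothetical linear stochastic delay equation with a consumption term by an Euler-Maruyama scheme. The coefficients `mu`, `sigma`, `delta` and the consumption rate `c` are illustrative placeholders, not the paper's.

```python
import numpy as np

# Euler-Maruyama scheme for an illustrative linear stochastic delay equation
#   dX(t) = [mu * X(t - delta) - c(t)] dt + sigma * X(t - delta) dB(t),
# a stand-in for the (unreproduced) controlled cash-flow dynamics (1.2).

def simulate_delay_sde(T=1.0, delta=0.1, n=1000, mu=0.05, sigma=0.2,
                       x0=1.0, c=lambda t: 0.03, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n
    lag = int(round(delta / dt))          # delay measured in grid steps
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        # X(t - delta): use the initial value while t < delta
        x_lag = x[i - lag] if i >= lag else x0
        dB = rng.normal(0.0, np.sqrt(dt))
        x[i + 1] = x[i] + (mu * x_lag - c(i * dt)) * dt + sigma * x_lag * dB
    return x

path = simulate_delay_sde()
```

Note how the drift and diffusion at step `i` read the state `lag` steps back, which is exactly the delay mechanism described above.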

Suppose that the consumer wants to maximize the combined expected utility of the consumption up to the terminal time and of the terminal wealth. The problem is then to find a consumption rate maximizing this combined utility, where the utility functions are given and possibly stochastic. See Section 4.

This is an example of a stochastic control problem with delay. Such problems have been studied by many authors. See, for example, [1–5] and the references therein. The methods used in these papers, however, do not apply to the cases studied here. Moreover, these papers do not consider partial information control (see below).

It was shown in [6] that the system (1.2) is equivalent to a controlled stochastic Volterra equation, (1.4), whose coefficients are expressed in terms of the transition function of the underlying delay equation.

So the control of the system (1.2) reduces to the control of the system (1.4). For more information about stochastic control of delay equations we refer to [6] and the references therein.

Stochastic Volterra equations are interesting in their own right, and also for applications, for example, to economics or population dynamics. See, for example, the examples in [7] and the references therein.

In the first part of this paper, we study a linear quadratic control problem for the controlled stochastic Volterra equation (1.7), where $u$ is our control process, the free term is a given square-integrable predictable process, and the kernels are bounded deterministic functions. In reality, one often does not have complete information when controlling a system. This means that the control process is required to be predictable with respect to a given subfiltration of $\{\mathcal{F}_t\}_{t\ge 0}$. The space of controls is then a Hilbert space of such predictable processes, equipped with the natural inner product.

The corresponding norm will be denoted accordingly. The space of admissible controls is a closed, convex subset of this Hilbert space. Consider the linear quadratic cost functional (1.10) and the value function (1.11). In Section 2, we prove the existence of an optimal control and provide some characterizations of it.

In the second part of the paper (from Section 3), we consider a general controlled stochastic Volterra equation of the form (3.1), where the free term is a given square-integrable predictable process. The performance functional is of the form (3.2), where the running profit rates are predictable, the terminal payoff is measurable, and all are integrable for every admissible control. The problem is to find an admissible control maximizing the performance functional.
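As an illustration of the kind of equation considered, the following sketch discretizes a controlled stochastic Volterra equation of the generic form $X(t) = X_0 + \int_0^t b(t,s,X(s),u(s))\,ds + \int_0^t \sigma(t,s,X(s),u(s))\,dB(s)$ by a left-point Euler scheme. The kernels `b`, `sigma` and the control `u` below are hypothetical examples, not taken from the paper.

```python
import numpy as np

# Left-point Euler scheme for a generic controlled stochastic Volterra
# equation. Because the kernels depend on the *current* time t, the whole
# history re-enters at every step: the solution is not Markovian.

def euler_volterra(b, sigma, u, x0=1.0, T=1.0, n=400, seed=1):
    rng = np.random.default_rng(seed)
    dt = T / n
    t = np.linspace(0.0, T, n + 1)
    dB = rng.normal(0.0, np.sqrt(dt), size=n)
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(1, n + 1):
        drift = sum(b(t[i], t[j], x[j], u(t[j])) for j in range(i)) * dt
        noise = sum(sigma(t[i], t[j], x[j], u(t[j])) * dB[j] for j in range(i))
        x[i] = x0 + drift + noise
    return t, x

t, x = euler_volterra(
    b=lambda t, s, x, u: np.exp(-(t - s)) * (0.1 * x + u),
    sigma=lambda t, s, x, u: 0.2 * np.exp(-(t - s)) * x,
    u=lambda s: 0.05,
)
```

The inner sums over the full history are what distinguish this scheme from an ordinary SDE discretization, and they are the computational face of the non-Markovian character noted in Remark 1.1.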

Using the Malliavin calculus, inspired by the method in [8], we will deduce a general maximum principle for the above control problem.

*Remark 1.1. *Note that we are outside the Markovian setting, because the solution of the Volterra equation is not Markovian. Therefore, the classical method of dynamic programming and the Hamilton-Jacobi-Bellman equation cannot be used here.

*Remark 1.2. *We emphasize that partial information is different from partial observation, where the control is based on noisy observations of the (current) state. For example, our discussion includes the case of delayed information flow, in which the subfiltration at time $t$ is the full filtration at time $t - \delta$ for a constant delay $\delta > 0$. This case is not covered by partial observation models. For a comprehensive presentation of the linear quadratic control problem in the classical case with partial observation, see [9]; with partial information, see [10].

#### 2. Linear Quadratic Control

Consider the controlled stochastic Volterra equation (1.7) and the control problem (1.10), (1.11). We have the following theorem.

Theorem 2.1. *Suppose that the coefficients of (1.7) are bounded and that the weights in the cost functional (1.10) are nonnegative, with the weight on the control bounded below by a positive constant. Then there exists a unique admissible control attaining the infimum in (1.11).*

*Proof. *For simplicity, we assume in this proof that the jump terms vanish, because they can be estimated in the same way as the corresponding terms for the Brownian motion. By (1.7) we have
Applying Gronwall's inequality, there exists a constant such that
Similar arguments also lead to
for some constant. Now choose a minimizing sequence for the value function, that is, one along which the cost functional converges to its infimum. From the estimate (2.3) we see that there exists a constant such that
Thus, by virtue of the assumption on , we have, for some constant ,
This implies that the minimizing sequence is bounded in the Hilbert space of controls, hence weakly compact. Choose a subsequence converging weakly to some element. Since the set of admissible controls is closed and convex, the Banach–Saks theorem implies that the limit is admissible. From (2.4) we see that weak convergence of the controls implies weak convergence of the corresponding states at every time, and likewise for the stochastic integral terms. Since the state is linear in the control, we conclude that the control-to-state map is continuous when both spaces carry their weak topologies. Thus, the quantities appearing in the cost are continuous with respect to the weak topologies. Since the functionals involved in the definition of the cost in (1.10) are lower semicontinuous with respect to the weak topology, it follows that the cost at the limit control is at most the infimum, which implies that it is an optimal control.

The uniqueness is a consequence of the strict convexity of the cost functional in the control, which follows because the state is affine in the control and the cost is a strictly convex function. The proof is complete.

To characterize the optimal control, we drop the jump terms; that is, we consider the controlled system (2.9), and introduce the associated kernels as in (2.10). For a predictable control process, repeated substitution then yields a series representation of the solution.

Introduce

Lemma 2.2. *Under our assumptions, the above series converges, at least in $L^2$. Thus the kernels introduced above are well defined.*

*Proof. *We first note that
for , where
is a bounded deterministic function. Because the two cases are similar, we prove only that the first quantity is well defined. Repeatedly using (2.13), we have
for some constant . This implies that
Thus, we have
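The convergence mechanism behind Lemma 2.2 can be illustrated numerically: for a bounded Volterra kernel $K(t,s)$, the iterated kernels $K^{(m+1)}(t,s) = \int_s^t K(t,r)K^{(m)}(r,s)\,dr$ decay factorially in sup norm, so the associated Neumann (resolvent) series converges. The kernel below is a hypothetical example, not one from the paper.

```python
import numpy as np

# Sup-norms of the iterated kernels of a bounded Volterra kernel K(t,s) on
# 0 <= s <= t <= T. They shrink roughly like M^m T^(m-1) / (m-1)!, which is
# the factorial decay driving the convergence asserted in Lemma 2.2.

def iterated_kernel_norms(K, T=1.0, n=200, terms=8):
    t = np.linspace(0.0, T, n + 1)
    dt = T / n
    # Discretize K as a lower-triangular matrix (Volterra structure)
    Kmat = np.array([[K(t[i], t[j]) if j <= i else 0.0
                      for j in range(n + 1)] for i in range(n + 1)])
    norms, current = [], Kmat
    for _ in range(terms):
        norms.append(np.abs(current).max())
        # K^(m+1)(t,s) = int_s^t K(t,r) K^(m)(r,s) dr  ~  Kmat @ current * dt
        current = Kmat @ current * dt
    return norms

norms = iterated_kernel_norms(lambda t, s: 1.0 + 0.5 * (t - s))
```

After a possible initial hump, the norms fall off super-exponentially, so summing the iterated kernels poses no convergence difficulty.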

The following theorem is a characterization of the optimal control.

Theorem 2.3. *Assume the standing boundedness and integrability conditions above. Let the optimal control be the unique one given by Theorem 2.1. Then it is determined by equation (2.18), almost everywhere with respect to the product of Lebesgue measure and the probability measure.*

*Proof. *For any , since is the optimal control, we have
This leads to
for all . By virtue of (2.9), it is easy to see that
satisfies the following equation:
Note that it is independent of the control. Next we will find an explicit expression for the remaining term. Let the kernel be defined as in (2.10). Repeatedly using (2.9), we have
Similarly, we have the following expansion for :
Interchanging the order of integration,
Now substituting into (2.20) we obtain that
for all . Interchanging the order of integration and conditioning on we see that (2.26) is equivalent to
Since this holds for all , we conclude that
almost everywhere. Note that this quantity can be written as
Substituting into (2.28), we get (2.18), completing the proof.

*Example 2.4. *Consider the controlled system
and the performance functional
Suppose the subfiltration is trivial, meaning that the control is deterministic. In this case, we can find the unique optimal control explicitly. Noting that the conditional expectation reduces to the expectation, equation (2.18) for the optimal control becomes
where we have used the fact that , in this special case. Put
Then (2.33) yields
where
Substitute the expression of into (2.34) to get
Consequently,
Together with (2.35) we arrive at
*ds*-a.e.
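Since the displayed formulas of Example 2.4 are not reproduced above, here is a sketch of the same idea in a discretized setting: with deterministic controls, the state is an affine function of the control, so the quadratic cost is minimized by solving a linear (normal-equations) system. The kernel and data below are hypothetical placeholders.

```python
import numpy as np

# Discretized deterministic LQ problem: the state is affine in the control,
#   x = m + K u   (K a lower-triangular, discretized Volterra kernel),
# so minimizing dt * (|x|^2 + |u|^2) is a least-squares problem with the
# explicit solution of the normal equations below.

n = 100
dt = 1.0 / n
t = np.linspace(dt, 1.0, n)
# Lower-triangular kernel matrix: x_i = m_i + sum_{j <= i} K_ij u_j
K = np.tril(np.exp(-np.abs(t[:, None] - t[None, :]))) * dt
m = np.ones(n)                       # discretized uncontrolled mean state
# Gradient of |m + K u|^2 + |u|^2 is zero iff (K^T K + I) u = -K^T m
u_opt = np.linalg.solve(K.T @ K + np.eye(n), -K.T @ m)
cost = dt * (np.sum((m + K @ u_opt) ** 2) + np.sum(u_opt ** 2))
uncontrolled = dt * np.sum(m ** 2)   # cost of the zero control
```

Since the zero control is feasible and the gradient there is nonzero, the optimal cost is strictly below the uncontrolled one, mirroring the strict convexity used in Theorem 2.1.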

#### 3. A General Maximum Principle

In this section, we consider the general controlled stochastic Volterra equation (3.1), where the control process takes values in a given set and the free term is as in (1.7). More precisely, the control belongs to a family of controls that are predictable with respect to a given subfiltration, and the coefficients are given measurable, predictable functions. Consider a performance functional of the form (3.2), where the running profit rate is predictable and the terminal payoff is measurable, both integrable for every admissible control. The purpose of this section is to give a characterization of the critical points of the performance functional. First, in the following two subsections, we briefly recall some basic properties of the Malliavin calculus for the Brownian motion and the Poisson random measure which will be used in the sequel. For more information we refer to [11] and [12].

##### 3.1. Integration by Parts Formula for the Brownian Motion

In this subsection, we consider the Malliavin calculus with respect to the Brownian motion alone. Recall that the Wiener–Itô chaos expansion theorem states that any square-integrable functional admits the representation (3.4) as a series of iterated Itô integrals of a unique sequence of symmetric deterministic functions, and that the corresponding isometry holds between the $L^2$ norm of the functional and a weighted sum of the $L^2$ norms of these kernels. On the space of all functionals whose chaos expansion (3.4) satisfies the appropriate weighted summability condition, the Malliavin derivative $D_t$ is defined by differentiating the expansion term by term, where each term is the iterated integral with respect to the first variables of the kernel, keeping the last variable as a parameter. We need the following result.

Theorem A (Integration by parts formula (duality formula) for $B(\cdot)$). *Suppose that $u(t)$ is $\mathcal{F}_t$-adapted with $E\big[\int_0^T u^2(t)\,dt\big] < \infty$ and let $F \in \mathbb{D}_{1,2}$. Then $E\big[F\int_0^T u(t)\,dB(t)\big] = E\big[\int_0^T u(t)\,D_tF\,dt\big]$.*
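The duality formula of Theorem A can be sanity-checked by Monte Carlo in a case where both sides are explicit. Taking $u(t) \equiv 1$ and $F = \exp(B(T) - T/2)$, one has $D_tF = F$, so the right-hand side is $T\,E[F] = T$, while the left-hand side $E[F\,B(T)]$ also equals $T$ by a Gaussian computation. This check is an illustration, not part of the paper.

```python
import numpy as np

# Monte Carlo check of E[F * int_0^T u dB] = E[int_0^T u * D_t F dt]
# with u = 1 and F = exp(B(T) - T/2), for which D_t F = F and both
# sides equal T.

rng = np.random.default_rng(42)
T, n_samples = 1.0, 400_000
BT = rng.normal(0.0, np.sqrt(T), size=n_samples)   # B(T) ~ N(0, T)
F = np.exp(BT - T / 2)
lhs = np.mean(F * BT)      # E[F * int_0^T dB] = E[F * B(T)]
rhs = T * np.mean(F)       # E[int_0^T D_t F dt] = T * E[F]
```

With 400,000 samples both estimates agree with $T = 1$ to within a few standard errors.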

##### 3.2. Integration by Parts Formula for the Compensated Poisson Random Measure

In this section, we work with the compensated Poisson random measure. Recall that the Wiener–Itô chaos expansion theorem states that any square-integrable functional admits a representation as a series of iterated integrals with respect to the compensated Poisson measure, for a unique sequence of kernels that are square integrable against the product of Lebesgue measure and the intensity measure and symmetric with respect to the pairs of variables. Moreover, the corresponding isometry holds. On the space of all functionals whose chaos expansion (3.18) satisfies the appropriate weighted summability condition, the Malliavin derivative $D_{t,z}$ is defined by differentiating the expansion term by term, where each term is the iterated integral with respect to the first pairs of variables of the kernel, keeping the last pair as a parameter. We need the following result.

Theorem B (Integration by parts formula (duality formula) for $\tilde N$). *Suppose that $\psi(t,z)$ is $\mathcal{F}_t$-predictable with $E\big[\int_0^T\int_{\mathbb{R}_0} \psi^2(t,z)\,\nu(dz)\,dt\big] < \infty$ and let $F \in \mathbb{D}_{1,2}$. Then $E\big[F\int_0^T\int_{\mathbb{R}_0} \psi(t,z)\,\tilde N(dt,dz)\big] = E\big[\int_0^T\int_{\mathbb{R}_0} \psi(t,z)\,D_{t,z}F\,\nu(dz)\,dt\big]$.*
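The Poisson duality formula of Theorem B admits an analogous Monte Carlo sanity check. Let the intensity measure charge a single mark with rate $\lambda$, take $\psi \equiv 1$ and $F = N(T) - \lambda T$; then $D_{t,z}F = 1$, so the left-hand side is $E[F^2] = \operatorname{Var} N(T) = \lambda T$ and the right-hand side is $\lambda T$ as well. This is an illustrative check, not taken from the paper.

```python
import numpy as np

# Monte Carlo check of Theorem B with psi = 1 and F = N(T) - lam*T
# (compensated Poisson variable): both sides equal lam * T.

rng = np.random.default_rng(7)
lam, T, n_samples = 2.0, 1.0, 400_000
N_T = rng.poisson(lam * T, size=n_samples)
F = N_T - lam * T          # F = int_0^T int psi dNtilde, with D_{t,z}F = 1
lhs = np.mean(F * F)       # E[F * int psi dNtilde] = E[F^2] = Var N(T)
rhs = lam * T              # E[int int D_{t,z}F nu(dz) dt] = lam * T
```

The empirical variance of the compensated Poisson count matches $\lambda T = 2$ closely, confirming the formula in this special case.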

##### 3.3. Maximum Principles

Consider (3.1). We will make the following assumptions throughout this subsection.

(H.1) The coefficient functions of (3.1) and the profit functions in (3.2) are continuously differentiable with respect to the state and the control variables.

(H.2) For all times and all bounded measurable random variables of the indicated type, the associated perturbed control belongs to the set of admissible controls.

(H.3) For all bounded admissible perturbations, the derivative of the performance functional in the direction of the perturbation exists.

(H.4) For all bounded admissible perturbations, the derivative process of the state exists and satisfies the corresponding linearized equation.

(H.5) For all admissible controls, the Malliavin derivatives of the state with respect to the Brownian motion and the Poisson random measure exist.

In the sequel, we omit the random parameter for simplicity, and use the notation of (3.2).

(H.6) The partial derivatives of the coefficient and profit functions with respect to the state and the control are bounded.

Theorem 3.1 (Maximum principle I for optimal control of stochastic Volterra equations). *(1) Suppose that a control is a critical point for the performance functional, in the sense that the derivative of the performance in every bounded admissible direction vanishes. Then (3.19) holds, where the adjoint quantity is defined in (3.29) below.*

*(2) Conversely, suppose that an admissible control satisfies (3.19). Then it is a critical point for the performance functional.*

*Proof. *(1) Suppose that the given control is a critical point for the performance functional, and let a bounded admissible perturbation be given. Writing out the directional derivative, we have
where
By the duality formulae (3.9) and (3.15), we have