International Journal of Stochastic Analysis

Volume 2012, Article ID 258674, 50 pages

http://dx.doi.org/10.1155/2012/258674

## Necessary Conditions for Optimal Control of Forward-Backward Stochastic Systems with Random Jumps

School of Mathematics, Shandong University, Jinan 250100, China

Received 28 September 2011; Revised 28 December 2011; Accepted 3 January 2012

Academic Editor: Jiongmin Yong

Copyright © 2012 Jingtao Shi. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

This paper deals with the general optimal control problem for fully coupled forward-backward stochastic differential equations with random jumps (FBSDEJs). The control domain is not assumed to be convex, and the control variable appears in both the diffusion and jump coefficients of the forward equation. Necessary conditions of Pontryagin's type for the optimal controls are derived by means of the spike variation technique and the Ekeland variational principle. A linear quadratic stochastic optimal control problem is discussed as an illustrative example.

#### 1. Introduction

##### 1.1. Basic Notations

Throughout this paper, we denote by $\mathbb{R}^n$ the $n$-dimensional Euclidean space, by $\mathbb{R}^{n \times m}$ the space of $n \times m$ matrices, and by $\mathcal{S}^n$ the space of $n \times n$ symmetric matrices. $\langle \cdot, \cdot \rangle$ and $|\cdot|$ denote the scalar product and norm in the Euclidean space, respectively. The superscript $\top$ denotes the transpose of a matrix.

Let $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t \ge 0}, \mathbb{P})$ be a complete filtered probability space satisfying the usual conditions, where the filtration $\{\mathcal{F}_t\}_{t \ge 0}$ is generated by the following two mutually independent processes:
(i) a one-dimensional standard Brownian motion $\{W(t)\}_{t \ge 0}$;
(ii) a Poisson random measure $N$ on $\mathbb{R}_+ \times \mathbf{E}$, where $\mathbf{E}$ is a nonempty open set equipped with its Borel field $\mathcal{B}(\mathbf{E})$, with compensator $\hat N(de\,dt) = \pi(de)\,dt$, such that $\tilde N(A \times [0,t]) := (N - \hat N)(A \times [0,t])$ is a martingale for all $A \in \mathcal{B}(\mathbf{E})$ satisfying $\pi(A) < \infty$. Here $\pi$ is assumed to be a $\sigma$-finite measure on $(\mathbf{E}, \mathcal{B}(\mathbf{E}))$ and is called the characteristic measure.
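For intuition, the two driving noise sources above can be simulated together in a scalar jump-diffusion. The following is a minimal Euler-type sketch, not from the paper: all coefficients and parameter values are illustrative, and the jump increment is compensated as in the definition of $\tilde N$ above.

```python
import numpy as np

# Illustrative (hypothetical) parameters: drift 0.05, volatility 0.2,
# Poisson intensity lam, relative jump size gamma.
rng = np.random.default_rng(0)
T, n_steps = 1.0, 1000
dt = T / n_steps
lam, gamma = 5.0, 0.1

x = np.empty(n_steps + 1)
x[0] = 1.0
for i in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt))   # Brownian increment
    dN = rng.poisson(lam * dt)          # Poisson increment on [t, t+dt]
    # drift + diffusion + compensated jump term gamma * x * (dN - lam*dt)
    x[i + 1] = (x[i] + 0.05 * x[i] * dt + 0.2 * x[i] * dW
                + gamma * x[i] * (dN - lam * dt))
```

Subtracting the compensator $\lambda\,dt$ makes the jump contribution a (discrete-time) martingale increment, mirroring the role of $\tilde N(de\,dt)$ in the continuous-time model.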

Let $T > 0$ be fixed and let $U$ be a nonempty subset of $\mathbb{R}^k$. Any generic point in $U$ is denoted by $u$. Let $\mathcal{U}_{ad}$ be the set of all $U$-valued, $\mathcal{F}_t$-predictable processes $u(\cdot)$ such that $\mathbb{E} \int_0^T |u(t)|^2\,dt < \infty$. Any $u(\cdot) \in \mathcal{U}_{ad}$ is called an admissible control process. We denote by $L^2$ (or $L^2_{\mathcal{F}}$) the usual spaces of square-integrable random variables (or adapted processes), with the natural norms. The product space of adapted solution processes $(x(\cdot), y(\cdot), z(\cdot), r(\cdot,\cdot))$, equipped with its natural norm, is clearly a Banach space.

##### 1.2. Formulation of the Optimal Control Problem and Basic Assumptions

For any $u(\cdot) \in \mathcal{U}_{ad}$ and $x_0 \in \mathbb{R}^n$, we consider the following fully coupled forward-backward stochastic control system:

$$
\begin{cases}
dx(t) = b\big(t, x(t), y(t), z(t), r(t,\cdot), u(t)\big)\,dt + \sigma\big(t, x(t), y(t), z(t), r(t,\cdot), u(t)\big)\,dW(t)\\
\qquad\qquad + \displaystyle\int_{\mathbf{E}} f\big(t, x(t-), y(t-), z(t), r(t,\cdot), e, u(t)\big)\,\tilde N(de\,dt),\\
-dy(t) = g\big(t, x(t), y(t), z(t), r(t,\cdot), u(t)\big)\,dt - z(t)\,dW(t) - \displaystyle\int_{\mathbf{E}} r(t, e)\,\tilde N(de\,dt),\\
x(0) = x_0, \qquad y(T) = \phi\big(x(T)\big),
\end{cases}
\tag{1.4}
$$

with the cost functional given by

$$
J(u(\cdot)) = \mathbb{E}\left[\int_0^T l\big(t, x(t), y(t), z(t), r(t,\cdot), u(t)\big)\,dt + \Phi\big(x(T)\big) + h\big(y(0)\big)\right].
\tag{1.5}
$$

Here, $x(\cdot)$ takes values in $\mathbb{R}^n$, while $y(\cdot)$, $z(\cdot)$, and $r(\cdot,\cdot)$ take values in $\mathbb{R}^m$; the coefficients $b$, $\sigma$, $f$, $g$, $\phi$, $l$, $\Phi$, and $h$ are given maps of appropriate dimensions.

For any $u(\cdot) \in \mathcal{U}_{ad}$ and $x_0 \in \mathbb{R}^n$, we refer to $(x(\cdot), y(\cdot), z(\cdot), r(\cdot,\cdot))$ as the state process corresponding to the admissible control $u(\cdot)$ if FBSDEJ (1.4) admits a unique adapted solution. For the controlled FBSDEJ (1.4) with cost functional (1.5), we consider the following problem.

*Problem C.* Find $\bar u(\cdot) \in \mathcal{U}_{ad}$ such that

$$
J(\bar u(\cdot)) = \inf_{u(\cdot) \in \mathcal{U}_{ad}} J(u(\cdot)).
\tag{1.7}
$$

Any $\bar u(\cdot)$ satisfying (1.7) is called an optimal control process of Problem C, and the corresponding state process, denoted by $(\bar x(\cdot), \bar y(\cdot), \bar z(\cdot), \bar r(\cdot,\cdot))$, is called the optimal state process. We also refer to $(\bar x(\cdot), \bar y(\cdot), \bar z(\cdot), \bar r(\cdot,\cdot), \bar u(\cdot))$ as an optimal 5-tuple of Problem C.

Our main goal in this paper is to derive necessary conditions for the optimal control of Problem C; such conditions constitute a stochastic maximum principle of Pontryagin's type. To this end, we first introduce the following basic assumption, in force throughout this paper.

(H0) *For any $u(\cdot) \in \mathcal{U}_{ad}$ and $x_0 \in \mathbb{R}^n$, FBSDEJ (1.4) admits a unique adapted solution $(x(\cdot), y(\cdot), z(\cdot), r(\cdot,\cdot))$. Moreover, the following estimate holds:*

$$
\mathbb{E}\Big[\sup_{0 \le t \le T} |x(t)|^2 + \sup_{0 \le t \le T} |y(t)|^2 + \int_0^T |z(t)|^2\,dt + \int_0^T\!\!\int_{\mathbf{E}} |r(t,e)|^2\,\pi(de)\,dt\Big] \le C\Big(1 + |x_0|^2 + \mathbb{E} \int_0^T |u(t)|^2\,dt\Big).
$$

*Further, if $(x'(\cdot), y'(\cdot), z'(\cdot), r'(\cdot,\cdot))$ is the unique adapted solution of (1.4) with $x_0$ and $u(\cdot)$ replaced by $x_0'$ and $u'(\cdot)$, respectively, then the following stability estimate holds:*

$$
\mathbb{E}\Big[\sup_{0 \le t \le T} |x(t) - x'(t)|^2 + \sup_{0 \le t \le T} |y(t) - y'(t)|^2 + \int_0^T |z(t) - z'(t)|^2\,dt + \int_0^T\!\!\int_{\mathbf{E}} |r(t,e) - r'(t,e)|^2\,\pi(de)\,dt\Big] \le C\Big(|x_0 - x_0'|^2 + \mathbb{E} \int_0^T |u(t) - u'(t)|^2\,dt\Big).
$$
By adopting the ideas of Wu [1], we know that under certain monotonicity conditions on the coefficients, the existence and uniqueness of solutions to FBSDEJ (1.4) are guaranteed, which yields hypothesis (H0). Since our main goal in this paper is to derive necessary conditions for the optimal control of Problem C, we impose the well-posedness of the state equation (1.4) as an assumption, to avoid technicalities not closely related to our main results.

##### 1.3. Developments of Stochastic Optimal Control and Contributions of the Paper

It is well known that the optimal control problem is one of the central themes of modern control science. Necessary conditions for the optimal control of (forward) continuous stochastic control systems, that is, the so-called stochastic maximum principle of Pontryagin's type, have been extensively studied since the early 1960s. When Brownian motion is the only noise source, Peng [2] (see also Yong and Zhou [3]) obtained the maximum principle for the general case, in which the control variable enters the diffusion coefficient and the control domain is not necessarily convex.

Forward-backward stochastic control systems, in which the controlled dynamics are described by *forward-backward stochastic differential equations* (FBSDEs), are widely used in mathematical economics and mathematical finance; they include the usual forward SDEs as a special case. They are encountered in stochastic recursive utility optimization problems (see [4–8]) and principal-agent problems (see [9, 10]). Peng [11] first obtained necessary conditions for optimal control in the partially coupled case when the control domain is convex. Xu [12] then studied the nonconvex control domain case and obtained the corresponding necessary conditions, but he needed to assume that the diffusion coefficient of the forward control system does not contain the control variable. Ji and Zhou [8] applied the Ekeland variational principle to establish a maximum principle for a partially coupled forward-backward stochastic control system in which the forward state is constrained to a convex set at the terminal time. Wu [13] recently established a general maximum principle for optimal control problems driven by forward-backward stochastic systems, where the control domain is nonconvex and the forward diffusion coefficient explicitly depends on the control variable. Moreover, some financial optimization problems for large investors (see [14–16]) and some asset pricing problems with forward-backward differential utility (see [7]) directly lead to fully coupled FBSDEs. Wu [17] first obtained (see also Meng [18]) necessary conditions for optimal control of fully coupled forward-backward stochastic control systems when the control domain is convex. Shi and Wu [19] then studied the nonconvex control domain case and obtained the corresponding necessary conditions under certain monotonicity assumptions, but, like Xu [12], they needed to assume that the control variable does not appear in the diffusion coefficient of the forward equation.
Very recently, Yong [20] completely solved the problem of finding necessary conditions for optimal control of fully coupled FBSDEs. He considered an optimal control problem for general coupled FBSDEs with mixed initial-terminal conditions and derived the necessary conditions for the optimal controls when the control domain is not assumed to be convex and the control variable appears in the diffusion coefficient of the forward equation.

However, more and more research attention has recently been drawn to optimal control problems for *discontinuous* stochastic systems, that is, stochastic systems with random jumps. The reason is clear from the applications. For example, there is compelling evidence that the dynamics of prices of financial instruments exhibit jumps that cannot be adequately captured by diffusion processes alone (i.e., processes satisfying Itô-type *stochastic differential equations* (SDEs for short)); see Merton [21] and Cont and Tankov [22]. Several empirical studies demonstrate the existence of jumps in stock markets, the foreign exchange market, and bond markets. Jumps also constitute a key feature in the description of credit-risk-sensitive instruments. Therefore, models that incorporate jumps have become increasingly popular in finance and in several areas of science and engineering, which has led to growing attention to *stochastic differential equations with jumps* (SDEJs for short). As a consequence, optimal control problems involving systems of SDEJs have been widely studied. Situ [23] first obtained the maximum principle for a (forward) stochastic control system with jumps, under the restriction that the jump coefficient does not contain the control variable. Tang and Li [24] discussed a more general case, in which the control is allowed to enter both the diffusion and jump coefficients and the control domain is not necessarily convex; some general state constraints are imposed as well. Maximum principles for forward-backward stochastic control systems with random jumps were studied in Øksendal and Sulem [25] and Shi and Wu [26], where the FBSDEJs are partially coupled and the control domains are convex. Recently, Shi [27] obtained a necessary condition for optimal control, as well as a sufficient condition of optimality, under the assumption that the diffusion and jump coefficients do not contain the control variable while the control domain need not be convex.
Necessary conditions for fully coupled forward-backward stochastic control systems with random jumps were studied in Shi and Wu [28] (see also Meng and Sun [29]), where the control domains are convex.

In this paper, we consider the general optimal control problem for the fully coupled FBSDEJ (1.4). Here, by the word "general" we mean that the control variable is allowed to enter both the diffusion and jump coefficients of the forward equation and that the control domain is not assumed to be convex. It is worth mentioning that the second-order spike variation technique, developed by Peng [2], plays an important role in deriving the necessary conditions for general stochastic optimal control of jump-diffusion processes in Tang and Li [24]. Following the standard route for deriving necessary conditions for optimal control processes, since the control domain is not assumed to be convex, one uses a spike variation of the control process, obtains a Taylor-type expansion of the state process and the cost functional (1.5) with respect to this spike variation, and then applies suitable duality relations to get a maximum principle of Pontryagin type. However, the derivation of the Taylor expansion of the state process with respect to the spike variation of the control process is technically difficult. The main reasons are that both $z(\cdot)$ and $r(\cdot,\cdot)$ appear in the diffusion and jump coefficients of the forward equation of (1.4), and that the regularity/integrability of the continuous martingale part $z(\cdot)$ and the discontinuous martingale part $r(\cdot,\cdot)$ (as components of the state process) seems insufficient when a second-order expansion is necessary. Note that in [25–29], due to the special structure of the problems, the second-order expansion is not necessary when deriving the necessary conditions for optimal controls.
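Concretely, the spike (needle) variation mentioned above has the standard form: given the optimal control $\bar u(\cdot)$, an arbitrary point $v \in U$, and a measurable set $E_{\varepsilon} \subset [0,T]$ of Lebesgue measure $\varepsilon$, one perturbs

```latex
u^{\varepsilon}(t) :=
\begin{cases}
  v,          & t \in E_{\varepsilon}, \\
  \bar{u}(t), & t \in [0,T] \setminus E_{\varepsilon},
\end{cases}
```

and expands the state and cost in powers of $\varepsilon$. Since the perturbation of the diffusion (and jump) terms is only of order $\sqrt{\varepsilon}$, a first-order expansion is not sufficient when the control enters these coefficients, which is why a second-order expansion and a second-order adjoint equation are needed.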

We overcome the above difficulty by the *reduction method* recently developed, independently, by Wu [13] and Yong [20] in the continuous case. In fact, some of these ideas were proposed earlier in Kohlmann and Zhou [30] and Ma and Yong [31]. Let us make this more precise. We first introduce a controlled initial value problem for a system of SDEJs, in which $(x(\cdot), y(\cdot))$ is regarded as the state process and $(z(\cdot), r(\cdot,\cdot), u(\cdot))$ is regarded as the control process. Next, we regard the original terminal condition as a terminal state constraint, and we translate Problem C into a higher-dimensional reduced optimal control problem described by a standard SDEJ with a state constraint (see the reduced problem in Section 3). The advantage of this reduced problem is that not much regularity/integrability of $z(\cdot)$ and $r(\cdot,\cdot)$ is needed, since they are treated as control processes. We apply the Ekeland variational principle to deal with this higher-dimensional reduced optimal control problem with state constraint. Finally, necessary conditions for the optimal control of Problem C are derived via the equivalence of Problem C and its reduced problem.
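For the reader's convenience, the Ekeland variational principle invoked here can be stated in its standard form (the notation below is generic, not the paper's):

```latex
% Ekeland variational principle (standard statement)
\textbf{Lemma (Ekeland).}\;
Let $(V, d)$ be a complete metric space and let
$F : V \to \mathbb{R}$ be lower semicontinuous and bounded from below.
If $u^{\varepsilon} \in V$ satisfies
$F(u^{\varepsilon}) \le \inf_{v \in V} F(v) + \varepsilon$
for some $\varepsilon > 0$, then for any $\lambda > 0$ there exists
$u^{\lambda} \in V$ such that
\[
  d(u^{\lambda}, u^{\varepsilon}) \le \lambda, \qquad
  F(u^{\lambda}) \le F(u^{\varepsilon}), \qquad
  F(v) \ge F(u^{\lambda}) - \frac{\varepsilon}{\lambda}\, d(v, u^{\lambda})
  \quad \text{for all } v \in V.
\]
```

In constrained control problems, one typically applies this lemma with $V$ a space of admissible controls under a suitable metric and $F$ a penalized cost, turning an approximate minimizer into an exact minimizer of a slightly perturbed functional.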

The rest of this paper is organized as follows. In Section 2, under suitable assumptions, we state the main result of this paper, together with some discussion of special cases. In Section 3, after reducing our optimal control problem for FBSDEJs (Problem C), we present a proof of the main result. Section 4 is devoted to a linear quadratic stochastic optimal control problem as an illustrative example. Finally, in Section 5 we give concluding remarks and compare our theorem with some existing results.

#### 2. The Main Result and Some Special Cases

In this section, under some suitable assumptions, we will state the necessary conditions for the optimal control of our Problem C. Also, some interesting special cases will be discussed.

Let us introduce the following further assumptions beyond (H0).

(H1) *For any fixed control value $u \in U$, the coefficient maps of the state equation (1.4) and the cost functional (1.5) are measurable, and there exists a constant $C > 0$ giving a uniform growth bound.*

(H2) *The coefficient maps of (1.4) are twice continuously differentiable, with the (partial) derivatives up to the second order uniformly bounded, Lipschitz continuous in the state and control variables, and continuous in $t$.*

(H3) *The terminal map of (1.4) is twice differentiable with the derivatives up to the second order uniformly bounded and uniformly Lipschitz continuous; the terminal cost maps are twice differentiable with the derivatives up to the second order Lipschitz continuous; the running cost map is twice differentiable with the derivatives up to the second order Lipschitz continuous in the state and control variables, and continuous in $t$.*

Next, to simplify the presentation, we introduce some abbreviated notation. First, we make the following convention: for any differentiable map $\psi : \mathbb{R}^n \to \mathbb{R}^m$, $\psi_x$ denotes its Jacobian matrix; in particular, for $m = 1$, $\psi_x$ is an $n$-dimensional row vector. Also, for any twice differentiable function $\psi : \mathbb{R}^n \to \mathbb{R}$, the Hessian matrix $\psi_{xx} \in \mathcal{S}^n$ is given by $(\psi_{xx})_{ij} := \partial^2 \psi / \partial x_i \partial x_j$; hereafter, $\mathcal{S}^n$ stands for the set of all $n \times n$ real symmetric matrices. Analogous notation is used for the second-order (partial) derivatives of twice differentiable functions of several arguments.

Now, let $(\bar x(\cdot), \bar y(\cdot), \bar z(\cdot), \bar r(\cdot,\cdot), \bar u(\cdot))$ be an optimal 5-tuple of Problem C. Along this optimal 5-tuple, we introduce the usual abbreviations for the coefficients of (1.4)–(1.5), for their first- and second-order partial derivatives, and for their increments under a spike variation of the control.

Our main result in this paper is the following theorem for Problem C.

Theorem 2.1. *Suppose that (H0)–(H3) hold, and let $(\bar x(\cdot), \bar y(\cdot), \bar z(\cdot), \bar r(\cdot,\cdot), \bar u(\cdot))$ be an optimal 5-tuple of Problem C. Then there exists a unique adapted solution to the first-order adjoint FBSDEJ (2.10). Let the matrix-valued processes be the unique adapted solution to the second-order adjoint BSDEJ (2.11), where the Hamiltonian function is defined by (2.12). Then the maximum condition (2.13) holds along the optimal 5-tuple.*

*Remark 2.2.* In fact, the second-order adjoint equation (2.11) can be split into the three BSDEJs (2.14), (2.15), and (2.16) under an additional condition.

Note that this kind of three second-order adjoint equations appears in Wu [13] but not in Yong [20].

Let us now look at some special cases; it can be seen that our theorem recovers several known results.

*A Classical Stochastic Optimal Control Problem with Random Jumps*

Consider a controlled SDEJ:

$$
dx(t) = b\big(t, x(t), u(t)\big)\,dt + \sigma\big(t, x(t), u(t)\big)\,dW(t) + \int_{\mathbf{E}} f\big(t, x(t-), e, u(t)\big)\,\tilde N(de\,dt), \qquad x(0) = x_0,
$$

with the cost functional

$$
J(u(\cdot)) = \mathbb{E}\left[\int_0^T l\big(t, x(t), u(t)\big)\,dt + \Phi\big(x(T)\big)\right].
$$

In this case, the backward components of (1.4) are trivial. Hence, by some direct computation/observation, the adjoint equations (2.10) and (2.11) (or equivalently, (2.14)) become the classical first- and second-order adjoint equations with jumps, with the Hamiltonian function defined correspondingly, and the maximum condition (2.13) reduces to the classical pointwise maximum condition.
These are the necessary conditions for the stochastic optimal control problem with random jumps (see (2.37) of Tang and Li [24]). When the jump coefficient is independent of the control variable, our result reduces to that of Situ [23]. When there are no random jumps, our result recovers the well-known maximum principle for the classical stochastic optimal control problem of Peng [2] and Yong and Zhou [3]. And when the control domain $U$ is convex, the classical result of Bensoussan [32] is recovered.
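As a toy numerical illustration, not from the paper, of such a pointwise maximum condition: for a scalar LQ-type Hamiltonian $H(u) = p(ax + bu) - \tfrac{1}{2}(Qx^2 + Ru^2)$ with fixed state $x$ and adjoint $p$, maximizing over a grid of control values recovers the analytic maximizer $u^* = bp/R$. All symbols and numerical values below are hypothetical.

```python
import numpy as np

# Hypothetical scalar LQ data: H(u) = p*(a*x + b*u) - 0.5*(Q*x**2 + R*u**2)
a, b, Q, R = 1.0, 0.5, 1.0, 2.0
x, p = 1.0, 3.0

def H(u):
    # Hamiltonian evaluated at the fixed state x and adjoint p
    return p * (a * x + b * u) - 0.5 * (Q * x**2 + R * u**2)

# Pointwise maximum condition: u* maximizes H over the control set
grid = np.linspace(-5.0, 5.0, 100001)
u_star = grid[np.argmax(H(grid))]
# Analytic maximizer from dH/du = p*b - R*u = 0 gives u* = b*p/R
```

Since $H$ is strictly concave in $u$ here, the first-order condition and the pointwise maximization coincide; with a nonconvex control domain, only the grid-search (maximization) formulation remains valid.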

*A Stochastic Optimal Control Problem for BSDEJs with Random Jumps*

Consider a controlled BSDEJ:

$$
-dy(t) = g\big(t, y(t), z(t), r(t,\cdot), u(t)\big)\,dt - z(t)\,dW(t) - \int_{\mathbf{E}} r(t, e)\,\tilde N(de\,dt), \qquad y(T) = \xi,
$$

with the cost functional

$$
J(u(\cdot)) = \mathbb{E}\left[\int_0^T l\big(t, y(t), z(t), r(t,\cdot), u(t)\big)\,dt + h\big(y(0)\big)\right].
$$

In this case, the forward components of (1.4) are trivial. Hence, by direct computation, the adjoint equations (2.10) and (2.11) (or equivalently, (2.16)) reduce accordingly, and the maximum condition (2.13) reduces to the corresponding pointwise maximum condition.
To our knowledge, this result is new and has not been published elsewhere. When there are no random jumps, our result partially recovers that of Dokuchaev and Zhou [33].

*A Stochastic Optimal Control Problem for FBSDEJs with Random Jumps*

Consider a controlled (partially coupled) FBSDEJ:

$$
\begin{cases}
dx(t) = b\big(t, x(t), u(t)\big)\,dt + \sigma\big(t, x(t), u(t)\big)\,dW(t) + \displaystyle\int_{\mathbf{E}} f\big(t, x(t-), e, u(t)\big)\,\tilde N(de\,dt),\\
-dy(t) = g\big(t, x(t), y(t), z(t), r(t,\cdot), u(t)\big)\,dt - z(t)\,dW(t) - \displaystyle\int_{\mathbf{E}} r(t, e)\,\tilde N(de\,dt),\\
x(0) = x_0, \qquad y(T) = \phi\big(x(T)\big),
\end{cases}
$$

with the cost functional

$$
J(u(\cdot)) = \mathbb{E}\left[\int_0^T l\big(t, x(t), y(t), z(t), r(t,\cdot), u(t)\big)\,dt + \Phi\big(x(T)\big) + h\big(y(0)\big)\right].
$$

In this case, by direct computation, (2.10) and the second-order adjoint equations (2.14), (2.15), (2.16) reduce accordingly, with the Hamiltonian function defined correspondingly.
The maximum condition (2.13) remains the same. When $U$ is convex, we essentially recover the results in Shi and Wu [26] and Øksendal and Sulem [25] (partial information case), and when the diffusion and jump coefficients are independent of the control variable and $U$ is not assumed to be convex, our result reduces to that of Shi [27]. Note that in both of these cases, our second-order adjoint equations are new.

When there are no random jumps, our result partially recovers that of Wu [13]. Note that in (3.27) of [13], some additional parameters have to be introduced and determined. And when $U$ is convex, our result reduces to that of Peng [11] and Wang and Wu [34] (partial observation case). The result of Xu [12] is recovered when the diffusion coefficient is independent of the control variable and $U$ is not assumed to be convex.

*A Stochastic Optimal Control Problem for Fully Coupled FBSDEJs with Random Jumps*

Consider the controlled fully coupled FBSDEJ (1.4) with the cost functional (1.5). When $U$ is convex, we essentially recover the results in Shi and Wu [28] and Meng and Sun [29] (partial information case). When there are no random jumps, our result becomes a special case of Yong [20], because in [20] the author considered mixed initial-terminal conditions and derived some additional necessary conditions for the optimal control. When the diffusion coefficient is independent of the control variable, our result recovers those of Shi and Wu [19, 35] (partial observation case). And when $U$ is convex, our result reduces to that of Wu [17].

#### 3. Problem Reduction and the Proof of the Main Theorem

This section is devoted to the proof of our main theorem. The proof is lengthy and technical. Therefore, we divide it into several steps to make the idea clear.

*Step 1 (problem reduction).* Consider the following initial value problem for a control system of SDEJs, where $(x(\cdot), y(\cdot))$ is regarded as the state process and