Abstract

We study a class of optimal control problems in which the state equation is a backward doubly stochastic differential equation, the set of strict (classical) controls need not be convex, and both the diffusion coefficient and the generator depend on the control variable. The main results are necessary conditions, as well as a sufficient condition, for optimality in the form of a relaxed maximum principle.

1. Introduction

In 1994, Pardoux and Peng [1] introduced a new kind of BSDE, namely the class of backward doubly stochastic differential equations (BDSDEs in short), which involve two different directions of stochastic integration: a standard (forward) stochastic Itô integral and a backward stochastic Itô integral. More precisely, they dealt with the following BDSDE:

$$ y_t = \xi + \int_t^T f(s, y_s, z_s)\,ds + \int_t^T g(s, y_s, z_s)\,dB_s - \int_t^T z_s\,dW_s, \quad 0 \le t \le T. \tag{1} $$

They proved that if $f$ and $g$ are uniformly Lipschitz, then (1) has, for any square integrable terminal value $\xi$, a unique solution on the interval $[0, T]$. They also showed that BDSDEs provide a probabilistic representation for solutions of certain quasi-linear stochastic partial differential equations. Since this first existence and uniqueness result, many papers have been devoted to existence and/or uniqueness results under weaker assumptions. Among these papers, we can distinguish two different classes: scalar BDSDEs and multidimensional BDSDEs. In the first case, one can take advantage of the comparison theorem: we refer to Shi et al. [2], who weakened the uniform Lipschitz assumptions to linear growth and continuity conditions by virtue of a comparison theorem they introduced. They obtained the existence of solutions to BDSDEs, but without uniqueness. In this spirit, let us mention the contribution of N'zi and Owo [3], which dealt with discontinuous coefficients. For multidimensional BDSDEs, there is no comparison theorem and, to overcome this difficulty, a monotonicity assumption on the generator in the variable $y$ is used. This appears in the work of Peng and Shi [4], who introduced a class of forward-backward doubly stochastic differential equations under the Lipschitz condition and a monotonicity assumption. Unfortunately, the uniform Lipschitz condition cannot be satisfied in many applications. More recently, N'zi and Owo [5] established an existence and uniqueness result under non-Lipschitz assumptions and applied their theory to solve a financial model of cash flow valuation.

In this paper, we study a stochastic control problem where the system is governed by a nonlinear backward doubly stochastic differential equation (BDSDE) of the type

$$ y_t^v = \xi + \int_t^T f(s, y_s^v, z_s^v, v_s)\,ds + \int_t^T g(s, y_s^v, z_s^v, v_s)\,dB_s - \int_t^T z_s^v\,dW_s, \quad 0 \le t \le T. $$

The control variable $v = \{v_t\}$, called a strict control, is an $\mathcal{F}_t$-adapted process with values in some set $U$ of $\mathbb{R}^k$. We denote by $\mathcal{U}$ the class of all strict controls.

The criterion to be minimized over the set $\mathcal{U}$ has the following form:

$$ J(v) = \mathbb{E}\Big[ l(y_0^v) + \int_0^T h(t, y_t^v, z_t^v, v_t)\,dt \Big], $$

where $l$ and $h$ are given maps and $(y_t^v, z_t^v)$ is the trajectory of the system controlled by $v$.

A control $u$ is called optimal if it satisfies

$$ J(u) = \inf_{v \in \mathcal{U}} J(v). $$

Our objective in this paper is to establish necessary as well as sufficient optimality conditions, of the Pontryagin maximum principle type, for the relaxed model.

In this paper, we solve this problem by using an approach developed by Bahlali [6] and further developed by Chala [7, 8]. We introduce a new, larger class of processes by replacing the $U$-valued process $\{v_t\}$ by a $\mathcal{P}(U)$-valued process $\{q_t\}$, where $\mathcal{P}(U)$ is the space of probability measures on $U$ equipped with the topology of stable convergence. This new class of processes is called the class of relaxed controls; it has a richer convexity structure, for which the control problem becomes solvable.
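For instance, given two strict controls $u^1, u^2 \in \mathcal{U}$, the process

$$ q_t(da) = \tfrac{1}{2}\,\delta_{u_t^1}(da) + \tfrac{1}{2}\,\delta_{u_t^2}(da), $$

where $\delta_{u}$ denotes the Dirac measure at $u$, is an admissible relaxed control, whereas the pointwise average $\tfrac{1}{2}(u_t^1 + u_t^2)$ need not take its values in $U$ when $U$ is not convex; this is precisely the convexity that the strict class may lack.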

The main idea is to use the convexity of the set of relaxed controls and to treat the problem by the method of convex perturbation on relaxed controls (instead of the spike variation on strict ones). We then establish necessary and sufficient optimality conditions for relaxed controls.
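Schematically, if $\mu$ is an optimal relaxed control and $q$ an arbitrary one, the convex perturbation and the first-order condition it produces read

$$ \mu^{\theta} = \mu + \theta\,(q - \mu), \qquad 0 \le \lim_{\theta \to 0^+} \frac{J(\mu^{\theta}) - J(\mu)}{\theta}, $$

which makes sense because $\mu^{\theta}$ remains in the convex set of relaxed controls for every $\theta \in [0, 1]$; this is exactly the scheme carried out in Section 3.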

In the relaxed model, the system is governed by a BDSDE of the form

$$ y_t^q = \xi + \int_t^T \int_U f(s, y_s^q, z_s^q, a)\,q_s(da)\,ds + \int_t^T \int_U g(s, y_s^q, z_s^q, a)\,q_s(da)\,dB_s - \int_t^T z_s^q\,dW_s. $$

The expected cost to be minimized in the relaxed model is defined from $\mathcal{R}$ into $\mathbb{R}$ by

$$ J(q) = \mathbb{E}\Big[ l(y_0^q) + \int_0^T \int_U h(t, y_t^q, z_t^q, a)\,q_t(da)\,dt \Big]. $$

A relaxed control $\mu$ is called optimal if it solves

$$ J(\mu) = \inf_{q \in \mathcal{R}} J(q). $$

The existence of an optimal solution for this problem has been established. To achieve the objective of this paper and establish necessary and sufficient optimality conditions for the relaxed model, we proceed as follows.

Firstly, we give the optimality conditions for relaxed controls. The idea is to use the fact that the set $\mathcal{R}$ of relaxed controls is convex. We then establish necessary optimality conditions by means of the classical convex perturbation method. More precisely, if we denote by $\mu$ an optimal relaxed control and by $q$ an arbitrary element of $\mathcal{R}$, then, with $\theta > 0$ sufficiently small and for each $t \in [0, T]$, we can define a perturbed control as follows:

$$ \mu_t^{\theta} = \mu_t + \theta\,(q_t - \mu_t). $$

By using the fact that the coefficients $f$, $g$, and $h$ are linear with respect to the relaxed control variable, necessary optimality conditions are obtained directly in the global form.
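This linearity is simply that of integration against a measure: for fixed $(t, y, z)$, the map

$$ q \longmapsto \int_U f(t, y, z, a)\,q(da) $$

is affine in $q$, so its directional derivative at $\mu$ in the direction $q - \mu$ is exactly $\int_U f(t, y, z, a)\,(q - \mu)(da)$; the same remark applies to $g$ and $h$.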

We note that necessary optimality conditions for relaxed controls, where the system is governed by a stochastic differential equation, were studied by Mezerdi and Bahlali [9] and Bahlali [6]. Necessary optimality conditions for stochastic controls, where the system is governed by a forward-backward doubly stochastic differential equation, were studied by Bahlali and Gherbal [10] and Han, Peng, and Wu [11].

The paper is organized as follows. In Section 2, we give the precise formulation of the problem, introduce the relaxed model, and state the various assumptions used throughout the paper. In Section 3, we give our first main result, the necessary optimality conditions for the relaxed control problem under additional hypotheses. In Section 4, we derive our second main result, the sufficient optimality conditions for relaxed controls.

Throughout this paper, $C$ denotes a positive constant whose value may change from line to line. For simplicity, we use the following matrix notation. We denote by $\mathbb{R}^{n \times d}$ the space of real $n \times d$ matrices and by $\mathbb{R}^{n}$ the linear space of $n$-dimensional vectors. For any $M, N \in \mathbb{R}^{n \times d}$, $\langle \cdot, \cdot \rangle$ denotes the scalar product in $\mathbb{R}^{n}$ and $MN = \sum_{i=1}^{d} \langle M^{i}, N^{i} \rangle$, where $M^{i}$ and $N^{i}$ denote the $i$th columns of $M$ and $N$.

We denote by $M^{*}$ the transpose of the matrix $M$.

2. Formulation of the Problem

Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space, on which a $d$-dimensional Brownian motion $\{W_t : 0 \le t \le T\}$ and an $l$-dimensional Brownian motion $\{B_t : 0 \le t \le T\}$, mutually independent, are defined. For each $t \in [0, T]$, we define

$$ \mathcal{F}_t = \mathcal{F}_{0,t}^{W} \vee \mathcal{F}_{t,T}^{B}, $$

where, for any process $\{\eta_t\}$, $\mathcal{F}_{s,t}^{\eta} = \sigma\{\eta_r - \eta_s : s \le r \le t\} \vee \mathcal{N}$; here $\mathcal{N}$ denotes the collection of $\mathbb{P}$-null sets and $\mathcal{F}_{s,t}^{\eta}$ the $\sigma$-field generated by the increments of $\eta$. Note that the collection $\{\mathcal{F}_t : t \in [0, T]\}$ is neither increasing nor decreasing, so it does not constitute a classical filtration.

Let $T$ be a strictly positive real number and $U$ a nonempty subset of $\mathbb{R}^k$.

2.1. The Strict Control Problem

For any $n \in \mathbb{N}$, let $\mathcal{M}^2(0, T; \mathbb{R}^n)$ denote the set of $n$-dimensional jointly measurable random processes $\{\varphi_t : t \in [0, T]\}$ which satisfy (i) $\mathbb{E} \int_0^T |\varphi_t|^2\,dt < \infty$ and (ii) $\varphi_t$ is $\mathcal{F}_t$-measurable for any $t \in [0, T]$.

We denote similarly by $\mathcal{S}^2([0, T]; \mathbb{R}^n)$ the set of continuous $n$-dimensional random processes $\{\varphi_t : t \in [0, T]\}$ which satisfy (i) $\mathbb{E}\big(\sup_{0 \le t \le T} |\varphi_t|^2\big) < \infty$ and (ii) $\varphi_t$ is $\mathcal{F}_t$-measurable for any $t \in [0, T]$.


Definition 1. A pair $(y, z)$ is said to be a solution of (1) if and only if $(y, z) \in \mathcal{S}^2([0, T]; \mathbb{R}^n) \times \mathcal{M}^2(0, T; \mathbb{R}^{n \times d})$ and it satisfies (1).

Definition 2. An admissible strict control is an $\mathcal{F}_t$-adapted process $v = \{v_t\}$ with values in $U$ such that $\mathbb{E}\big(\sup_{0 \le t \le T} |v_t|^2\big) < \infty$.
We denote by $\mathcal{U}$ the set of all admissible strict controls.

For any $v \in \mathcal{U}$, we consider the following BDSDE:

$$ y_t^v = \xi + \int_t^T f(s, y_s^v, z_s^v, v_s)\,ds + \int_t^T g(s, y_s^v, z_s^v, v_s)\,dB_s - \int_t^T z_s^v\,dW_s, \quad 0 \le t \le T, \tag{10} $$

where $f : [0, T] \times \mathbb{R}^n \times \mathbb{R}^{n \times d} \times U \to \mathbb{R}^n$ and $g : [0, T] \times \mathbb{R}^n \times \mathbb{R}^{n \times d} \times U \to \mathbb{R}^{n \times l}$, and where $\xi$ is an $n$-dimensional $\mathcal{F}_T$-measurable random variable such that $\mathbb{E}|\xi|^2 < \infty$.

The expected cost is defined from $\mathcal{U}$ into $\mathbb{R}$ by

$$ J(v) = \mathbb{E}\Big[ l(y_0^v) + \int_0^T h(t, y_t^v, z_t^v, v_t)\,dt \Big], \tag{11} $$

where $l : \mathbb{R}^n \to \mathbb{R}$ and $h : [0, T] \times \mathbb{R}^n \times \mathbb{R}^{n \times d} \times U \to \mathbb{R}$.

The control problem is to minimize the functional $J$ over $\mathcal{U}$; $u$ is an optimal solution if

$$ J(u) = \inf_{v \in \mathcal{U}} J(v). \tag{12} $$

A control that solves this problem is called optimal. Our goal is to establish necessary conditions of optimality for controls in the form of a stochastic maximum principle.

The following assumptions will be in force throughout this paper. The maps $f$, $g$, $l$, and $h$ are continuously differentiable with respect to $(y, z)$; they and all their derivatives with respect to $(y, z)$ are continuous in $(y, z, v)$ and uniformly bounded by $C$; $f$, $g$, and $h$ are bounded by $C(1 + |y| + |z| + |v|)$.

We assume moreover that there exist constants $c > 0$ and $0 < \alpha < 1$ such that, for any $(y_1, z_1), (y_2, z_2) \in \mathbb{R}^n \times \mathbb{R}^{n \times d}$, $t \in [0, T]$, and $v \in U$,

$$ |g(t, y_1, z_1, v) - g(t, y_2, z_2, v)|^2 \le c\,|y_1 - y_2|^2 + \alpha\,|z_1 - z_2|^2. \tag{13} $$

Under the above assumptions, for every $v \in \mathcal{U}$, (10) has a unique strong solution, and the cost functional $J$ is well defined from $\mathcal{U}$ into $\mathbb{R}$.

2.2. The Relaxed Model

The idea of the relaxed model is to embed the set of strict controls defined above into a wider class which gives a more suitable topological structure. In the relaxed model, the $U$-valued process $\{v_t\}$ is replaced by a $\mathcal{P}(U)$-valued process $\{q_t\}$, where $\mathcal{P}(U)$ denotes the space of probability measures on $U$ equipped with the topology of stable convergence.

Definition 3. A relaxed control is a $\mathcal{P}(U)$-valued process $q = \{q_t\}$, progressively measurable with respect to $\{\mathcal{F}_t\}$ and such that, for each $t$, $\mathbb{1}_{(0, t]} \cdot q$ is $\mathcal{F}_t$-measurable.
We denote by $\mathcal{R}$ the set of all admissible relaxed controls.

Remark 4. Every relaxed control $q$ may be disintegrated as $q(dt, da) = dt\,q_t(da)$, where $\{q_t\}$ is a progressively measurable process with values in the set of probability measures $\mathcal{P}(U)$. The set $\mathcal{U}$ of strict controls is embedded into the set $\mathcal{R}$ of relaxed controls by the mapping $v \mapsto \delta_{v_t}(da)\,dt$, where $\delta_{v}$ is the atomic measure concentrated at the single point $v$.
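Under this embedding, the relaxed dynamics and cost reduce to the strict ones: since

$$ \int_U \varphi(a)\,\delta_{v_t}(da) = \varphi(v_t) $$

for any measurable function $\varphi$, the solution of (14) below associated with $q = \delta_{v}$ coincides with the solution of (10) associated with $v$, and $J(\delta_{v}) = J(v)$. The relaxed problem is thus a genuine extension of the strict one.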

For any $q \in \mathcal{R}$, we consider the following relaxed BDSDE:

$$ y_t^q = \xi + \int_t^T \int_U f(s, y_s^q, z_s^q, a)\,q_s(da)\,ds + \int_t^T \int_U g(s, y_s^q, z_s^q, a)\,q_s(da)\,dB_s - \int_t^T z_s^q\,dW_s. \tag{14} $$

The expected cost to be minimized in the relaxed model is defined from $\mathcal{R}$ into $\mathbb{R}$ by

$$ J(q) = \mathbb{E}\Big[ l(y_0^q) + \int_0^T \int_U h(t, y_t^q, z_t^q, a)\,q_t(da)\,dt \Big]. \tag{15} $$

A relaxed control $\mu$ is called optimal if it solves

$$ J(\mu) = \inf_{q \in \mathcal{R}} J(q). \tag{16} $$

The existence of an optimal solution for the problem {(14), (15), (16)} has been established.

3. Optimality Conditions for Relaxed Controls

In this section, we study the problem {(14), (15), (16)} and establish necessary conditions of optimality for relaxed controls.

3.1. Preliminary Results

Since the set $\mathcal{R}$ of relaxed controls is convex, the classical way of treating such a problem is to use the convex perturbation method. More precisely, let $\mu$ be an optimal relaxed control and $(y_t, z_t)$ the solution of (14) controlled by $\mu$. Then, we can define a perturbed relaxed control as follows:

$$ \mu_t^{\theta} = \mu_t + \theta\,(q_t - \mu_t), \tag{17} $$

where $\theta > 0$ is sufficiently small and $q$ is an arbitrary element of $\mathcal{R}$.

Denote by $(y_t^{\theta}, z_t^{\theta})$ the solution of (14) associated with $\mu^{\theta}$.

From the optimality of $\mu$, the variational inequality will be derived from the fact that

$$ 0 \le J(\mu^{\theta}) - J(\mu). \tag{18} $$

To this end, we need the following classical lemmas.
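In particular, we will repeatedly use Young's inequality in the following elementary form: for every $\varepsilon > 0$ and any vectors $a, b$ of the same dimension,

$$ 2\,\langle a, b \rangle \le \varepsilon\,|a|^2 + \frac{1}{\varepsilon}\,|b|^2. $$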

Lemma 5. Under the above assumptions, in particular (13), we have

$$ \lim_{\theta \to 0} \mathbb{E}\Big[ \sup_{0 \le t \le T} |y_t^{\theta} - y_t|^2 \Big] = 0, \tag{19} $$

$$ \lim_{\theta \to 0} \mathbb{E} \int_0^T |z_t^{\theta} - z_t|^2\,dt = 0. \tag{20} $$

Proof. Let us prove (19) and (20).
Applying Itô's formula to $|y_t^{\theta} - y_t|^2$, and since the stochastic integral with respect to $W$ is a martingale, taking expectations we have, for any $t \in [0, T]$:
From Young's inequality, for every $\varepsilon > 0$, and by using the definition of $\mu^{\theta}$, we have
Since $f$ and $g$ are uniformly Lipschitz with respect to $y$ and $z$, and using assumption (13) with $0 < \alpha < 1$, we have
Choosing $\varepsilon$ appropriately, we then have
From the above inequality, we derive the following two inequalities:
By using (24), Gronwall's lemma, and the Burkholder-Davis-Gundy inequality in (27), we obtain (19). Finally, (20) is derived from (27), (28), and (24).
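For the reader's convenience, we recall Gronwall's lemma in the backward form adapted to our equations: if $u : [0, T] \to [0, \infty)$ is continuous and satisfies

$$ u(t) \le a + b \int_t^T u(s)\,ds, \quad t \in [0, T], $$

with constants $a, b \ge 0$, then $u(t) \le a\,e^{b(T - t)}$ for all $t \in [0, T]$.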

Lemma 6. Let $(Y_t, Z_t)$ be the solution of the following linear equation (called the variational equation):

$$ Y_t = \int_t^T \Big[ f_y(s)\,Y_s + f_z(s)\,Z_s + \int_U f(s, y_s, z_s, a)\,(q - \mu)_s(da) \Big] ds + \int_t^T \Big[ g_y(s)\,Y_s + g_z(s)\,Z_s + \int_U g(s, y_s, z_s, a)\,(q - \mu)_s(da) \Big] dB_s - \int_t^T Z_s\,dW_s, \tag{29} $$

where, for $\varphi = f, g$, we write $\varphi_y(s) = \int_U \varphi_y(s, y_s, z_s, a)\,\mu_s(da)$ and $\varphi_z(s) = \int_U \varphi_z(s, y_s, z_s, a)\,\mu_s(da)$. Then, we have

$$ \lim_{\theta \to 0} \mathbb{E}\Big[ \sup_{0 \le t \le T} \Big| \frac{y_t^{\theta} - y_t}{\theta} - Y_t \Big|^2 \Big] = 0, \tag{30} $$

$$ \lim_{\theta \to 0} \mathbb{E} \int_0^T \Big| \frac{z_t^{\theta} - z_t}{\theta} - Z_t \Big|^2 dt = 0. \tag{31} $$

Proof. For simplicity, we put $\tilde{y}_t^{\theta} = \theta^{-1}(y_t^{\theta} - y_t) - Y_t$ and $\tilde{z}_t^{\theta} = \theta^{-1}(z_t^{\theta} - z_t) - Z_t$.
(i) Proof of (30): applying Itô's formula to $|\tilde{y}_t^{\theta}|^2$, and using Young's inequality, for every $\varepsilon > 0$ we have
For simplicity of notation, we rewrite this as the following inequality, where the remainder terms are given by
Since $f_y$, $f_z$, $g_y$, and $g_z$ are bounded, then, choosing $\varepsilon$ sufficiently small, we have
From the above inequality, we deduce the following two inequalities:
Since the derivatives of $f$ and $g$ are continuous and bounded, then from (19) and (20), we have
From (40), we deduce that
By using (33), (41), Gronwall's lemma, and the Burkholder-Davis-Gundy inequality, we obtain (30). Finally, (31) is derived from (39), (41), and (30).

Lemma 7. Let $\mu$ be an optimal relaxed control minimizing the cost $J$ over $\mathcal{R}$ and $(y_t, z_t)$ the associated optimal trajectory. Then, for any $q \in \mathcal{R}$, we have

$$ 0 \le \mathbb{E}\Big[ l_y(y_0)\,Y_0 + \int_0^T \Big( h_y(t)\,Y_t + h_z(t)\,Z_t + \int_U h(t, y_t, z_t, a)\,(q - \mu)_t(da) \Big) dt \Big], \tag{42} $$

where $h_y(t) = \int_U h_y(t, y_t, z_t, a)\,\mu_t(da)$ and $h_z(t) = \int_U h_z(t, y_t, z_t, a)\,\mu_t(da)$.

Proof. Let $\mu$ be an optimal relaxed control minimizing the cost $J$ over $\mathcal{R}$; then, from (18) and by using the definition of $J$, we have
Then, where the remainder term is given by
Since the derivatives $l_y$, $h_y$, and $h_z$ are continuous and bounded, then by using (30), (31), (19), and (20) and the Cauchy-Schwarz inequality, the remainder term tends to $0$ as $\theta \to 0$. By letting $\theta$ go to $0$ in (44), the proof is completed.

3.2. Necessary Optimality Conditions for Relaxed Controls

Starting from the variational inequality (42), we can now state necessary optimality conditions for the relaxed control problem {(14), (15), (16)} in the global form.

The Hamiltonian $H$ is defined from $[0, T] \times \mathbb{R}^n \times \mathbb{R}^{n \times d} \times U \times \mathbb{R}^n \times \mathbb{R}^{n \times l}$ into $\mathbb{R}$ by

$$ H(t, y, z, a, Q, K) = \langle Q, f(t, y, z, a) \rangle + K\,g(t, y, z, a) + h(t, y, z, a), \tag{46} $$

where $(Q, K)$ is the pair of adjoint processes introduced below.
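With this definition, the Hamiltonian extends linearly to relaxed controls: for $q \in \mathcal{P}(U)$, we write

$$ H(t, y, z, q, Q, K) = \int_U H(t, y, z, a, Q, K)\,q(da), $$

and it is in this form that condition (48) below should be read.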

Theorem 8 (necessary optimality conditions for relaxed controls). Let $\mu$ be an optimal relaxed control minimizing the functional $J$ over $\mathcal{R}$ and $(y_t, z_t)$ the solution of (14) associated with $\mu$. Then, there exists a unique pair of adapted processes $(Q_t, K_t) \in \mathcal{S}^2([0, T]; \mathbb{R}^n) \times \mathcal{M}^2(0, T; \mathbb{R}^{n \times l})$, solution of the following forward doubly stochastic differential equation (called the adjoint equation):

$$ \begin{cases} dQ_t = \big[ f_y^{*}(t)\,Q_t + g_y^{*}(t)\,K_t + h_y(t) \big]\,dt + \big[ f_z^{*}(t)\,Q_t + g_z^{*}(t)\,K_t + h_z(t) \big]\,dW_t - K_t\,dB_t, \\ Q_0 = l_y(y_0), \end{cases} \tag{47} $$

such that, for every $q \in \mathcal{P}(U)$,

$$ H(t, y_t, z_t, \mu_t, Q_t, K_t) \le H(t, y_t, z_t, q, Q_t, K_t), \quad dt\text{-a.e., } \mathbb{P}\text{-a.s.} \tag{48} $$

Proof. By applying Itô's formula to $\langle Q_t, Y_t \rangle$ between $0$ and $T$, taking expectations, and using $Y_T = 0$ together with $Q_0 = l_y(y_0)$, we get

$$ \mathbb{E}\big[ l_y(y_0)\,Y_0 \big] = \mathbb{E} \int_0^T \Big( - h_y(t)\,Y_t - h_z(t)\,Z_t + \int_U \big[ \langle Q_t, f(t, y_t, z_t, a) \rangle + K_t\,g(t, y_t, z_t, a) \big]\,(q - \mu)_t(da) \Big)\,dt. $$

Then, for every $q \in \mathcal{R}$, (42) becomes

$$ 0 \le \mathbb{E} \int_0^T \int_U H(t, y_t, z_t, a, Q_t, K_t)\,(q - \mu)_t(da)\,dt. \tag{49} $$
Now, let $t \in [0, T)$ and let $A$ be an arbitrary element of the $\sigma$-algebra $\mathcal{F}_t$; for $q \in \mathcal{P}(U)$ and $\varepsilon > 0$, set

$$ \hat{q}_s = q\,\mathbb{1}_A\,\mathbb{1}_{[t, t+\varepsilon]}(s) + \mu_s\,\big(1 - \mathbb{1}_A\,\mathbb{1}_{[t, t+\varepsilon]}(s)\big). $$

It is obvious that $\hat{q}$ is an admissible relaxed control.
Applying inequality (49) with $\hat{q}$, dividing by $\varepsilon$, and letting $\varepsilon \to 0$, we get

$$ 0 \le \mathbb{E}\Big[ \mathbb{1}_A \int_U H(t, y_t, z_t, a, Q_t, K_t)\,(q - \mu_t)(da) \Big], $$

which implies that

$$ 0 \le \mathbb{E}\Big[ \int_U H(t, y_t, z_t, a, Q_t, K_t)\,(q - \mu_t)(da) \,\Big|\, \mathcal{F}_t \Big]. $$
The quantity inside the conditional expectation is $\mathcal{F}_t$-measurable, and thus the result follows immediately.
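The last step relies on the following elementary fact: if $X$ is an integrable $\mathcal{F}_t$-measurable random variable such that $\mathbb{E}[\mathbb{1}_A X] \ge 0$ for every $A \in \mathcal{F}_t$, then

$$ X \ge 0 \quad \mathbb{P}\text{-a.s.}, $$

as is seen by taking $A = \{X < 0\}$.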

4. Sufficient Optimality Conditions for Relaxed Controls

In this section, we study when the necessary optimality conditions (48) become sufficient. For any $q \in \mathcal{R}$, we denote by $(y_t^q, z_t^q)$ the solution of (14) controlled by $q$.

Theorem 9 (sufficient optimality conditions for relaxed controls). Assume that the function $l$ and the Hamiltonian $H(t, \cdot, \cdot, \cdot, Q_t, K_t)$ are convex and that $\xi$ is an $n$-dimensional $\mathcal{F}_T$-measurable random variable such that $\mathbb{E}|\xi|^2 < \infty$. Then, $\mu$ is an optimal solution of the relaxed control problem {(14), (15), (16)} if it satisfies (48).

Proof. Let $\mu$ be an element of $\mathcal{R}$ satisfying (48) (the candidate to be optimal). For any $q \in \mathcal{R}$, we have

$$ J(q) - J(\mu) = \mathbb{E}\big[ l(y_0^q) - l(y_0) \big] + \mathbb{E} \int_0^T \int_U h(t, y_t^q, z_t^q, a)\,q_t(da)\,dt - \mathbb{E} \int_0^T \int_U h(t, y_t, z_t, a)\,\mu_t(da)\,dt. $$
Since $l$ is convex, then

$$ l(y_0^q) - l(y_0) \ge l_y(y_0)\,\big( y_0^q - y_0 \big). $$
Thus,
We remark from (47) that $dQ_t = H_y(t)\,dt + H_z(t)\,dW_t - K_t\,dB_t$, with $Q_0 = l_y(y_0)$. Then, we have
Thus,
By applying Itô's formula to $\langle Q_t,\, y_t^q - y_t \rangle$, we obtain
Then,
Since $H$ is convex in $(y, z)$ and linear in the measure variable, by using the Clarke generalized gradient of $H$ evaluated at $(y_t, z_t)$ and the necessary optimality condition (48), we obtain

$$ H(t, y_t^q, z_t^q, q_t, Q_t, K_t) - H(t, y_t, z_t, \mu_t, Q_t, K_t) \ge H_y(t)\,\big( y_t^q - y_t \big) + H_z(t)\,\big( z_t^q - z_t \big). $$
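Recall that, for a convex function, the Clarke generalized gradient coincides with the subdifferential of convex analysis, so the step above is essentially the subgradient inequality

$$ \phi(x) - \phi(\bar{x}) \ge \langle p, x - \bar{x} \rangle, \quad p \in \partial \phi(\bar{x}), $$

applied to $\phi = H(t, \cdot, \cdot, \mu_t, Q_t, K_t)$ at the optimal pair $(y_t, z_t)$, combined with the minimality condition (48).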
Then, from (60), we get $J(q) - J(\mu) \ge 0$.
The theorem is proved.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This work is partially supported by The Algerian PNR Project no. 8/u07/857.