Abstract

The main objective of this paper is to explore the relationship between the stochastic maximum principle (SMP for short) and the dynamic programming principle (DPP for short) for singular control problems of jump diffusions. First, we establish necessary as well as sufficient conditions for optimality by using the stochastic calculus of jump diffusions and some properties of singular controls. Then we give, under smoothness conditions, a useful verification theorem, and we show that the solution of the adjoint equation coincides with the spatial gradient of the value function, evaluated along the optimal trajectory of the state equation. Finally, using these theoretical results, we explicitly solve an example on an optimal harvesting strategy for a geometric Brownian motion with jumps.

1. Introduction

In this paper, we consider a mixed classical-singular control problem, in which the state evolves according to a stochastic differential equation, driven by a Poisson random measure and an independent multidimensional Brownian motion, of the following form:
$$dx(t) = b(t, x(t), u(t))\,dt + \sigma(t, x(t), u(t))\,dW(t) + \int_E \gamma(t, x(t^-), u(t), e)\,\tilde N(dt, de) + G(t)\,d\xi(t),$$
where $b$, $\sigma$, $\gamma$, and $G$ are given deterministic functions and $x(0) = x_0$ is the initial state. The control variable is a suitable process $(u, \xi)$, where $u$ is the usual classical absolutely continuous control and $\xi$ is the singular control, which is an increasing process, continuous on the right with limits on the left, with $\xi(0^-) = 0$. The performance functional has the form
$$J(u, \xi) = E\left[\int_0^T f(t, x(t), u(t))\,dt + g(x(T)) + \int_{[0, T]} k(t)\,d\xi(t)\right].$$

The objective of the controller is to choose a couple of adapted processes, in order to maximize the performance functional.
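To make the setup concrete, here is a minimal numerical sketch (not part of the paper's analysis): it simulates a one-dimensional instance of such controlled dynamics with an Euler scheme and estimates the performance functional by Monte Carlo. All coefficients, the feedback control, and the barrier-type singular control are hypothetical choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical coefficients for dx = b dt + sigma dW + gamma dN~ + G dxi (all assumptions):
b   = lambda t, x, u: 0.05 * x + u         # drift
sig = lambda t, x, u: 0.2 * x              # diffusion
gam = lambda t, x, u, e: 0.1 * x * e       # jump amplitude, linear in the mark e
G   = -1.0                                 # the singular control removes mass (harvesting-like)
f   = lambda t, x, u: -0.5 * u ** 2        # running reward
g   = lambda x: x                          # terminal reward
k   = lambda t: 1.0                        # reward per unit of singular control

T, n_steps, n_paths = 1.0, 200, 2000
dt  = T / n_steps
lam = 1.0                                  # Poisson intensity; marks e ~ N(0, 1)

def estimate_J(x0=1.0, u_fb=lambda t, x: 0.0, barrier=2.0):
    """Monte Carlo estimate of J(u, xi): xi pushes x back to `barrier` whenever x exceeds it."""
    total = 0.0
    for _ in range(n_paths):
        x, reward = x0, 0.0
        for i in range(n_steps):
            t = i * dt
            u = u_fb(t, x)
            dW = rng.normal(0.0, np.sqrt(dt))
            marks = rng.normal(0.0, 1.0, rng.poisson(lam * dt))
            # compensated jump increment: since gam is linear in e, summing the marks is
            # enough, and the compensator lam * E[e] * dt vanishes because E[e] = 0
            reward += f(t, x, u) * dt
            x += b(t, x, u) * dt + sig(t, x, u) * dW + gam(t, x, u, marks.sum())
            dxi = max(x - barrier, 0.0)    # minimal singular action keeping x <= barrier
            x += G * dxi
            reward += k(t) * dxi
        total += reward + g(x)
    return total / n_paths

print(estimate_J())
```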

In the first part of our present work, we investigate the question of necessary as well as sufficient optimality conditions, in the form of a Pontryagin stochastic maximum principle. In the second part, we give, under regularity assumptions, a useful verification theorem. Then, we show that the adjoint process coincides with the spatial gradient of the value function, evaluated along the optimal trajectory of the state equation. Finally, using these theoretical results, we solve explicitly an example on an optimal harvesting strategy for a geometric Brownian motion with jumps. Note that our results extend those in [1, 2] to the jump diffusion setting. Moreover, we generalize results in [3, 4] by allowing both classical and singular controls, at least in the complete information setting. Note that in our control problem there are two types of jumps for the state process: the inaccessible ones, which come from the Poisson martingale part, and the predictable ones, which come from the singular control part. The inclusion of these jump terms introduces a major difference with respect to the case without singular control.

Stochastic control problems of singular type have received considerable attention, due to their wide applicability in a number of different areas; see [4–8]. In most cases, the optimal singular control problem was studied through the dynamic programming principle; see [9], where it was shown in particular that the value function is continuous and is the unique viscosity solution of the HJB variational inequality.

The one-dimensional problems of singular type, without the classical control, have been studied by many authors. It was shown that the value function satisfies a variational inequality, which gives rise to a free boundary problem, and that the optimal state process is a diffusion reflected at the free boundary. Bather and Chernoff [10] were the first to formulate such a problem. Beneš et al. [11] explicitly solved a one-dimensional example by observing that the value function in their example is twice continuously differentiable. This regularity property is called the principle of smooth fit. The optimal control can be constructed by using the reflected Brownian motion; see Lions and Sznitman [12] for more details. Applications to irreversible investment, industry equilibrium, and portfolio optimization under transaction costs can be found in [13]. A problem of optimal harvesting from a population in a stochastic crowded environment is proposed in [14], where the size of the population at time $t$ is represented as the solution of a stochastic logistic differential equation. The two-dimensional problem that arises in portfolio selection models, under proportional transaction costs, is of singular type and has been considered by Davis and Norman [15]. The case of diffusions with jumps is studied by Øksendal and Sulem [8]. For further contributions on singular control problems and their relationship with optimal stopping problems, the reader is referred to [4, 5, 7, 16, 17].

The stochastic maximum principle is another powerful tool for solving stochastic control problems. The first result that covers singular control problems was obtained by Cadenillas and Haussmann [18], in which they consider linear dynamics, a convex cost criterion, and convex state constraints. A first-order weak stochastic maximum principle was developed via the convex perturbations method for both absolutely continuous and singular components by Bahlali and Chala [1]. The second-order stochastic maximum principle for nonlinear SDEs with a controlled diffusion matrix was obtained by Bahlali and Mezerdi [19], extending the Peng maximum principle [20] to singular control problems. A similar approach has been used by Bahlali et al. in [21] to study the stochastic maximum principle in relaxed-singular optimal control in the case of uncontrolled diffusion. Bahlali et al. in [22] discuss the stochastic maximum principle in singular optimal control in the case where the coefficients are Lipschitz continuous in the state variable, provided that the classical derivatives are replaced by the generalized ones. See also the recent paper by Øksendal and Sulem [4], where Malliavin calculus techniques have been used to define the adjoint process.

Stochastic control problems in which the system is governed by a stochastic differential equation with jumps, without the singular part, have also been studied, both by the dynamic programming approach and by the Pontryagin maximum principle. The HJB equation associated with these problems is a nonlinear second-order parabolic integro-differential equation. Pham [23] studied a mixed optimal stopping and stochastic control problem for jump diffusion processes by using the viscosity solutions approach. Some verification theorems for various types of problems for systems governed by this kind of SDE are discussed by Øksendal and Sulem [8]. Some results that cover the stochastic maximum principle for controlled jump diffusion processes are discussed in [3, 24, 25]. In [3] the sufficient maximum principle and the link with the dynamic programming principle are given by assuming the smoothness of the value function. Let us mention that in [24] the verification theorem is established in the framework of viscosity solutions and the relationship between the adjoint processes and some generalized gradients of the value function is obtained. Note that Shi and Wu [24] extend the results of [26] to jump diffusions. See also [27] for a systematic study of the continuous case. The second-order stochastic maximum principle for optimal controls of nonlinear dynamics, with jumps and convex state constraints, was developed via the spike variation method by Tang and Li [25]. These conditions are described in terms of two adjoint processes, which solve linear backward SDEs. Such equations have important applications in hedging problems [28]. Existence and uniqueness for solutions to BSDEs with jumps and nonlinear coefficients have been treated by Tang and Li [25] and Barles et al. [29]. The link with integral-partial differential equations is studied in [29].

The plan of the paper is as follows. In Section 2, we give some preliminary results and notations. The purpose of Section 3 is to derive necessary as well as sufficient optimality conditions. In Section 4, we give, under regularity assumptions, a verification theorem for the value function; then we prove that the adjoint process is equal to the derivative of the value function evaluated at the optimal trajectory, extending in particular [2, 3]. Finally, an example is solved explicitly by using these theoretical results.

2. Assumptions and Problem Formulation

The purpose of this section is to introduce some notations, which will be needed in the subsequent sections. In all what follows, we are given a probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \le T}, P)$, such that $\mathcal{F}_0$ contains the $P$-null sets, for an arbitrarily fixed time horizon $T > 0$, and $(\mathcal{F}_t)_{t \le T}$ satisfies the usual conditions. We assume that $(\mathcal{F}_t)_{t \le T}$ is generated by a $d$-dimensional standard Brownian motion $W$ and an independent jump measure $N$ of a Lévy process $\eta$, on $[0, T] \times E$, where $E = \mathbb{R}^\ell \setminus \{0\}$ for some $\ell \ge 1$. We denote by $(\mathcal{F}_t^W)_{t \le T}$ (resp., $(\mathcal{F}_t^N)_{t \le T}$) the $P$-augmentation of the natural filtration of $W$ (resp., $N$). We assume that the compensator of $N$ has the form $\mu(dt, de) = \nu(de)\,dt$, for some $\sigma$-finite Lévy measure $\nu$ on $E$, endowed with its Borel $\sigma$-field $\mathcal{B}(E)$. We suppose that $\int_E (1 \wedge |e|^2)\,\nu(de) < \infty$ and set $\tilde N(dt, de) := N(dt, de) - \nu(de)\,dt$, for the compensated jump martingale random measure of $N$.

Obviously, we have
$$\mathcal{F}_t = \mathcal{F}_t^W \vee \mathcal{F}_t^N \vee \mathcal{N},$$
where $\mathcal{N}$ denotes the totality of $P$-null sets and $\sigma_1 \vee \sigma_2$ denotes the $\sigma$-field generated by $\sigma_1 \cup \sigma_2$.

Notation. Any element $x \in \mathbb{R}^n$ will be identified with a column vector with $n$ components, and its norm is denoted by $|x|$. The scalar product of any two vectors $x$ and $y$ on $\mathbb{R}^n$ is denoted by $\langle x, y \rangle$ or simply $xy$. For a function $h$, we denote by $h_x$ (resp., $h_{xx}$) the gradient or Jacobian (resp., the Hessian) of $h$ with respect to the variable $x$.

Given $s < t$, let us introduce the following spaces.
(i) $L^2(E, \mathcal{B}(E), \nu; \mathbb{R}^n)$, or simply $L^2_\nu$, is the set of square integrable functions $r : E \to \mathbb{R}^n$ such that $\int_E |r(e)|^2\,\nu(de) < \infty$.
(ii) $\mathcal{S}^2_{[s,t]}(\mathbb{R}^n)$ is the set of $\mathbb{R}^n$-valued adapted càdlàg processes $P$ such that $E[\sup_{s \le r \le t} |P(r)|^2] < \infty$.
(iii) $\mathcal{M}^2_{[s,t]}(\mathbb{R}^n)$ is the set of progressively measurable $\mathbb{R}^n$-valued processes $Q$ such that $E[\int_s^t |Q(r)|^2\,dr] < \infty$.
(iv) $\mathcal{L}^2_{\nu,[s,t]}(\mathbb{R}^n)$ is the set of measurable maps $R : [s, t] \times E \to \mathbb{R}^n$ such that $E[\int_s^t \int_E |R(r, e)|^2\,\nu(de)\,dr] < \infty$.

To avoid heavy notations, we omit the subscript $[s, t]$ in these notations when $s = 0$ and $t = T$.

Let $T$ be a fixed strictly positive real number; $A_1$ is a closed convex subset of $\mathbb{R}^k$ and $A_2 := ([0, \infty))^m$. Let us define the class $\mathcal{A}$ of admissible control processes $(u, \xi)$.

Definition 1. An admissible control is a pair of measurable, adapted processes $u : [0, T] \times \Omega \to A_1$ and $\xi : [0, T] \times \Omega \to A_2$, such that(1) $u$ is a predictable process, $\xi$ is of bounded variation, nondecreasing, right continuous with left-hand limits, and $\xi(0^-) = 0$,(2) $E\big[\sup_{t \le T} |u(t)|^2 + |\xi(T)|^2\big] < \infty$.
We denote by $\mathcal{A} = \mathcal{A}_1 \times \mathcal{A}_2$ the set of all admissible controls. Here $\mathcal{A}_1$ (resp., $\mathcal{A}_2$) represents the set of the admissible controls $u$ (resp., $\xi$).

Assume that, for $(u, \xi) \in \mathcal{A}$ and $t \in [0, T]$, the state $x(t)$ of our system is given by
$$dx(t) = b(t, x(t), u(t))\,dt + \sigma(t, x(t), u(t))\,dW(t) + \int_E \gamma(t, x(t^-), u(t), e)\,\tilde N(dt, de) + G(t)\,d\xi(t),$$
where $x(0) = x_0 \in \mathbb{R}^n$ is given, representing the initial state.

Let
$$b : [0, T] \times \mathbb{R}^n \times A_1 \to \mathbb{R}^n, \quad \sigma : [0, T] \times \mathbb{R}^n \times A_1 \to \mathbb{R}^{n \times d}, \quad \gamma : [0, T] \times \mathbb{R}^n \times A_1 \times E \to \mathbb{R}^n, \quad G : [0, T] \to \mathbb{R}^{n \times m}$$
be measurable functions.

Notice that the jump of a singular control $\xi$ at any jumping time $\tau$ is defined by $\Delta\xi(\tau) := \xi(\tau) - \xi(\tau^-)$, and we let $\xi^c(t) := \xi(t) - \sum_{0 \le \tau \le t} \Delta\xi(\tau)$ be the continuous part of $\xi$.

We distinguish between the jumps of $x$ at time $\tau$ caused by the jump of $N$, defined by
$$\Delta_N x(\tau) := \int_E \gamma(\tau, x(\tau^-), u(\tau), e)\,N(\{\tau\}, de),$$
and the jump of $x$ at $\tau$ caused by the singular control $\xi$, denoted by $\Delta_\xi x(\tau) := G(\tau)\,\Delta\xi(\tau)$. In the above, $N(\{\tau\}, de)$ represents the jump in the Poisson random measure, occurring at time $\tau$. In particular, the general jump of the state process at $\tau$ is given by $\Delta x(\tau) = x(\tau) - x(\tau^-) = \Delta_N x(\tau) + \Delta_\xi x(\tau)$.

If $\phi$ is a continuous real function, we let
$$\Delta_\xi \phi(x(\tau)) := \phi\big(x(\tau^-) + \Delta_\xi x(\tau)\big) - \phi\big(x(\tau^-)\big).$$

The expression (12) defines the jump in the value of $\phi(x)$ caused by the jump of $x$ at $\tau$ coming from the singular control. We emphasize that the possible jumps in $\phi(x)$ coming from the Poisson measure are not included in $\Delta_\xi \phi(x(\tau))$.

Suppose that the performance functional has the form
$$J(u, \xi) = E\left[\int_0^T f(t, x(t), u(t))\,dt + g(x(T)) + \int_{[0, T]} k(t)\,d\xi(t)\right],$$
where $f : [0, T] \times \mathbb{R}^n \times A_1 \to \mathbb{R}$, $g : \mathbb{R}^n \to \mathbb{R}$, and $k : [0, T] \to ([0, \infty))^m$ are measurable, with $\int_{[0, T]} k(t)\,d\xi(t) := \sum_{i=1}^m \int_{[0, T]} k_i(t)\,d\xi_i(t)$.

An admissible control $(\hat u, \hat\xi)$ is optimal if
$$J(\hat u, \hat\xi) = \sup_{(u, \xi) \in \mathcal{A}} J(u, \xi).$$

Let us assume the following.(H1) The maps $b$, $\sigma$, $\gamma$, and $f$ are continuously differentiable with respect to $(x, u)$, and $g$ is continuously differentiable in $x$.(H2) The derivatives $b_x$, $b_u$, $\sigma_x$, $\sigma_u$, $\gamma_x$, $\gamma_u$, $f_x$, $f_u$, and $g_x$ are continuous in $(x, u)$ and uniformly bounded.(H3) $b$, $\sigma$, $\gamma$, and $f$ are bounded by $K(1 + |x| + |u|)$, and $g$ is bounded by $K(1 + |x|)$, for some $K > 0$.(H4) For all $e \in E$, the map $x \mapsto x + \gamma(t, x, u, e)$ satisfies, uniformly in $(t, u)$, the nondegeneracy condition that the matrix $I_n + \gamma_x(t, x, u, e)$ is invertible with a bounded inverse.(H5) $G$ and $k$ are continuous and bounded.

3. The Stochastic Maximum Principle

Let us first define the usual Hamiltonian associated with the control problem by
$$H(t, x, u, p, q, r(\cdot)) := f(t, x, u) + \langle b(t, x, u), p \rangle + \sum_{j=1}^d \langle \sigma^j(t, x, u), q^j \rangle + \int_E \langle \gamma(t, x, u, e), r(e) \rangle\,\nu(de),$$
where $(t, x, u, p, q, r(\cdot)) \in [0, T] \times \mathbb{R}^n \times A_1 \times \mathbb{R}^n \times \mathbb{R}^{n \times d} \times L^2_\nu$, and $\sigma^j$ and $q^j$, for $j = 1, \ldots, d$, denote the $j$th column of the matrices $\sigma$ and $q$, respectively.
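Read as a formula, the Hamiltonian is a pairing of the coefficients with the adjoint variables. The sketch below (an illustration, not part of the paper) evaluates it numerically under assumed scalar dynamics, with the Lévy measure replaced by a finite quadrature grid; the coefficient functions are the same hypothetical ones as in the earlier simulation sketch.

```python
import numpy as np

def hamiltonian(t, x, u, p, q, r, b, sig, gam, f, nu_nodes, nu_weights):
    """H = f + b*p + sigma*q + integral of gamma(e)*r(e) nu(de), scalar case.

    nu_nodes / nu_weights: an assumed quadrature discretization of the Levy measure nu.
    r: callable e -> r(e), the jump component of the adjoint variable.
    """
    jump = sum(w * gam(t, x, u, e) * r(e) for e, w in zip(nu_nodes, nu_weights))
    return f(t, x, u) + b(t, x, u) * p + sig(t, x, u) * q + jump

# Illustrative evaluation with hypothetical inputs:
H_val = hamiltonian(
    t=0.0, x=1.0, u=0.3, p=0.8, q=0.1, r=lambda e: 0.05 * e,
    b=lambda t, x, u: 0.05 * x + u,
    sig=lambda t, x, u: 0.2 * x,
    gam=lambda t, x, u, e: 0.1 * x * e,
    f=lambda t, x, u: -0.5 * u ** 2,
    nu_nodes=np.linspace(0.1, 2.0, 20),
    nu_weights=np.full(20, 0.1),
)
print(H_val)
```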

Let $(\hat u, \hat\xi)$ be an optimal control and let $\hat x$ be the corresponding optimal trajectory. Then, we consider a triple $(p, q, r(\cdot))$ of square integrable adapted processes associated with $(\hat u, \hat\xi)$, with values in $\mathbb{R}^n \times \mathbb{R}^{n \times d} \times \mathbb{R}^n$, such that
$$dp(t) = -H_x(t, \hat x(t), \hat u(t), p(t), q(t), r(t, \cdot))\,dt + q(t)\,dW(t) + \int_E r(t, e)\,\tilde N(dt, de), \qquad p(T) = g_x(\hat x(T)).$$

3.1. Necessary Conditions of Optimality

The purpose of this section is to derive necessary optimality conditions, satisfied by an optimal control, assuming that the solution exists. The proof is based on convex perturbations for both absolutely continuous and singular components of the optimal control and on some estimates of the state processes. Note that our results generalize those of [1, 2, 21] to systems with jumps.

Theorem 2 (necessary conditions of optimality). Let $(\hat u, \hat\xi)$ be an optimal control maximizing the functional $J$ over $\mathcal{A}$, and let $\hat x$ be the corresponding optimal trajectory. Then there exists an adapted process $(p, q, r(\cdot))$, which is the unique solution of the BSDE (18), such that the following conditions hold.(i) For all $v \in A_1$,
$$H_u(t, \hat x(t), \hat u(t), p(t), q(t), r(t, \cdot))\,(v - \hat u(t)) \le 0, \quad dt \otimes dP\text{-a.e.}$$
(ii) For all $t \in [0, T]$, with probability 1, for $i = 1, \ldots, m$,
$$k_i(t) + G_i^\top(t)\,p(t) \le 0, \qquad \{k_i(t) + G_i^\top(t)\,p(t)\}\,d\hat\xi_i^c(t) = 0,$$
$$k_i(t) + G_i^\top(t)\,(p(t^-) + \Delta_N p(t)) \le 0, \qquad \{k_i(t) + G_i^\top(t)\,(p(t^-) + \Delta_N p(t))\}\,\Delta\hat\xi_i(t) = 0,$$
where $G_i$ denotes the $i$th column of the matrix $G$ and $\Delta_N p(t)$ is the jump of $p$ at $t$ caused by the Poisson measure.

In order to prove Theorem 2, we present some auxiliary results.

3.1.1. Variational Equation

Let $(v, \zeta)$ be such that $(\hat u + v, \hat\xi + \zeta) \in \mathcal{A}$. The convexity condition of the control domain ensures that, for $\varepsilon \in (0, 1)$, the control $(u^\varepsilon, \xi^\varepsilon) := (\hat u + \varepsilon v, \hat\xi + \varepsilon \zeta)$ is also in $\mathcal{A}$. We denote by $x^\varepsilon$ the solution of the SDE (8) corresponding to the control $(u^\varepsilon, \xi^\varepsilon)$. Then by standard arguments from stochastic calculus, it is easy to check the following estimate.

Lemma 3. Under assumptions (H1)–(H5), one has
$$\lim_{\varepsilon \to 0} E\Big[\sup_{t \le T} |x^\varepsilon(t) - \hat x(t)|^2\Big] = 0.$$

Proof. From assumptions (H1)–(H5), we get, by using the Burkholder-Davis-Gundy inequality,
$$E\Big[\sup_{s \le t} |x^\varepsilon(s) - \hat x(s)|^2\Big] \le C \int_0^t E\Big[\sup_{r \le s} |x^\varepsilon(r) - \hat x(r)|^2\Big]\,ds + C\,\varepsilon^2\,E\Big[\int_0^T |v(t)|^2\,dt + |\zeta(T)|^2\Big].$$
From Definition 1 and Gronwall’s lemma, the result follows immediately by letting $\varepsilon$ go to zero.

We define the process $z$ by the variational equation
$$dz(t) = \{b_x(t)\,z(t) + b_u(t)\,v(t)\}\,dt + \sum_{j=1}^d \{\sigma_x^j(t)\,z(t) + \sigma_u^j(t)\,v(t)\}\,dW^j(t) + \int_E \{\gamma_x(t, e)\,z(t^-) + \gamma_u(t, e)\,v(t)\}\,\tilde N(dt, de) + G(t)\,d\zeta(t),$$
with $z(0) = 0$, where, for $h = b, \sigma^j, \gamma$, we write $h_x(t) := h_x(t, \hat x(t), \hat u(t))$ and $h_u(t) := h_u(t, \hat x(t), \hat u(t))$, with the convention $\gamma_x(t, e) := \gamma_x(t, \hat x(t), \hat u(t), e)$.

From (H2) and Definition 1, one can find a unique solution $z$ which solves the variational equation (26), and the following estimate holds.

Lemma 4. Under assumptions (H1)–(H5), it holds that
$$\lim_{\varepsilon \to 0} E\left[\sup_{t \le T} \left|\frac{x^\varepsilon(t) - \hat x(t)}{\varepsilon} - z(t)\right|^2\right] = 0.$$

Proof. Let
$$y^\varepsilon(t) := \frac{x^\varepsilon(t) - \hat x(t)}{\varepsilon} - z(t).$$
We denote $b_x(t) := b_x(t, \hat x(t), \hat u(t))$ and similarly for the other derivatives, for notational convenience. Then we have immediately that $y^\varepsilon(0) = 0$, and $y^\varepsilon$ satisfies a linear SDE, driven by $W$ and $\tilde N$, whose coefficients are the derivatives of $b$, $\sigma$, and $\gamma$ evaluated at intermediate points, plus a remainder process $\rho^\varepsilon$ collecting the first-order Taylor expansion errors.
Since the derivatives of the coefficients are bounded, and from Definition 1, it is easy to verify by Gronwall’s inequality that
$$E\Big[\sup_{s \le T} |y^\varepsilon(s)|^2\Big] \le C\,E\int_0^T |\rho^\varepsilon(t)|^2\,dt,$$
where $C$ is a generic constant depending on the bounds of $b_x$, $\sigma_x$, and $\gamma_x$.
Since $b_x$, $\sigma_x$, and $\gamma_x$ are bounded and continuous, the integrand $|\rho^\varepsilon(t)|^2$ is dominated by a square integrable process. We conclude from Lemma 3 and the dominated convergence theorem that $E\int_0^T |\rho^\varepsilon(t)|^2\,dt \to 0$. Hence (27) follows from Gronwall’s lemma and by letting $\varepsilon$ go to $0$. This completes the proof.

3.1.2. Variational Inequality

Let $\Phi$ be the $n \times n$ matrix-valued solution of the linear matrix equation: for $t \in [0, T]$,
$$d\Phi(t) = b_x(t)\,\Phi(t)\,dt + \sum_{j=1}^d \sigma_x^j(t)\,\Phi(t)\,dW^j(t) + \int_E \gamma_x(t, e)\,\Phi(t^-)\,\tilde N(dt, de), \qquad \Phi(0) = I_n,$$
where $I_n$ is the identity matrix. This equation is linear, with bounded coefficients; hence it admits a unique strong solution. Moreover, the condition (H4) ensures that the tangent process $\Phi(t)$ is invertible, with an inverse $\Psi(t)$ satisfying suitable integrability conditions.

From Itô’s formula, we can easily check that $d(\Phi(t)\,\Psi(t)) = 0$ and $\Phi(0)\,\Psi(0) = I_n$, where $\Psi$ is the solution of a linear SDE with bounded coefficients, so that $\Psi(t) = \Phi(t)^{-1}$ for all $t$. By the integration by parts formula ([8, Lemma 3.6]), we can see that the solution of (26) is given by $z(t) = \Phi(t)\,\eta(t)$, where $\eta$ is the solution of a stochastic differential equation driven by $v$ and $\zeta$, with $\eta(0) = 0$.

Let us introduce the following convex perturbation of the optimal control $(\hat u, \hat\xi)$ defined by
$$(u^\varepsilon(t), \xi^\varepsilon(t)) := (\hat u(t) + \varepsilon v(t), \hat\xi(t) + \varepsilon \zeta(t)),$$
for some $(v, \zeta)$ as above and $\varepsilon \in (0, 1)$. Since $(\hat u, \hat\xi)$ is an optimal control, then $J(u^\varepsilon, \xi^\varepsilon) - J(\hat u, \hat\xi) \le 0$. Thus a necessary condition for optimality is that
$$\lim_{\varepsilon \to 0} \frac{J(u^\varepsilon, \xi^\varepsilon) - J(\hat u, \hat\xi)}{\varepsilon} \le 0.$$

The rest of this subsection is devoted to the computation of the above limit. We will see that the expression (37) leads to a precise description of the optimal control in terms of the adjoint process. First, it is easy to prove the following lemma.

Lemma 5. Under assumptions (H1)–(H5), one has
$$\lim_{\varepsilon \to 0} \frac{J(u^\varepsilon, \xi^\varepsilon) - J(\hat u, \hat\xi)}{\varepsilon} = E\left[\int_0^T \{f_x(t)\,z(t) + f_u(t)\,v(t)\}\,dt + g_x(\hat x(T))\,z(T) + \int_{[0, T]} k(t)\,d\zeta(t)\right].$$

Proof. We use the same notations as in the proof of Lemma 4. First, we have
$$\frac{J(u^\varepsilon, \xi^\varepsilon) - J(\hat u, \hat\xi)}{\varepsilon} = E\left[\int_0^T \{f_x(t)\,z(t) + f_u(t)\,v(t)\}\,dt + g_x(\hat x(T))\,z(T) + \int_{[0, T]} k(t)\,d\zeta(t)\right] + \rho_\varepsilon,$$
where $\rho_\varepsilon$ collects the Taylor remainders of $f$ and $g$.
By using Lemma 4, and since the derivatives $f_x$, $f_u$, and $g_x$ are bounded, we have $\lim_{\varepsilon \to 0} \rho_\varepsilon = 0$. Then, the result follows by letting $\varepsilon$ go to $0$ in the above equality.

Substituting $z(t)$ by $\Phi(t)\,\eta(t)$ in (38) leads to
$$\lim_{\varepsilon \to 0} \frac{J(u^\varepsilon, \xi^\varepsilon) - J(\hat u, \hat\xi)}{\varepsilon} = E\left[\int_0^T f_x(t)\,\Phi(t)\,\eta(t)\,dt + g_x(\hat x(T))\,\Phi(T)\,\eta(T) + \int_{[0, T]} k(t)\,d\zeta(t)\right].$$

Consider the right continuous version of the square integrable martingale
$$M(t) := E\left[\int_0^T f_x(s)\,\Phi(s)\,ds + g_x(\hat x(T))\,\Phi(T) \,\Big|\, \mathcal{F}_t\right].$$

By the Itô representation theorem [30], there exist two processes $Q = (Q^1, \ldots, Q^d)$, where $Q^j \in \mathcal{M}^2$, for $j = 1, \ldots, d$, and $R \in \mathcal{L}^2_\nu$, satisfying
$$M(t) = M(0) + \int_0^t Q(s)\,dW(s) + \int_0^t \int_E R(s, e)\,\tilde N(ds, de).$$

Let us denote $Y(t) := M(t) - \int_0^t f_x(s)\,\Phi(s)\,ds$. The adjoint variable is the process $(p(t), q(t), r(t, \cdot))$ defined by
$$p(t) := \Psi^\top(t)\,Y^\top(t),$$
together with the corresponding expressions of $q^j(t)$, for $j = 1, \ldots, d$, and $r(t, \cdot)$, obtained from $Q$, $R$, and the derivatives of the coefficients; this is relation (44).

Theorem 6. Under assumptions (H1)–(H5), one has
$$\lim_{\varepsilon \to 0} \frac{J(u^\varepsilon, \xi^\varepsilon) - J(\hat u, \hat\xi)}{\varepsilon} = E\left[\int_0^T H_u(t, \hat x(t), \hat u(t), p(t), q(t), r(t, \cdot))\,v(t)\,dt + \int_{[0, T]} \{k(t) + G^\top(t)\,p(t)\}\,d\zeta(t)\right].$$

Proof. From the integration by parts formula ([8, Lemma 3.5]), and by using the definition of $p(t)$, $q^j(t)$, for $j = 1, \ldots, d$, and $r(t, \cdot)$, we can easily check (46).
Also we have (47); substituting (46) in (47), the result follows.

3.1.3. Adjoint Equation and Maximum Principle

Since (37) is true for all $(v, \zeta)$ as above, we can easily deduce the following result.

Theorem 7. Let $(\hat u, \hat\xi)$ be the optimal control of the problem (14) and denote by $\hat x$ the corresponding optimal trajectory; then, for all $(v, \zeta) \in \mathcal{A}$, the following inequality holds:
$$E\left[\int_0^T H_u(t, \hat x(t), \hat u(t), p(t), q(t), r(t, \cdot))\,(v(t) - \hat u(t))\,dt + \int_{[0, T]} \{k(t) + G^\top(t)\,p(t)\}\,d(\zeta - \hat\xi)(t)\right] \le 0,$$
where the Hamiltonian $H$ is defined by (17), and the adjoint variable $(p(t), q(t), r(t, \cdot))$, for $t \in [0, T]$, is given by (44).

Now, we are ready to give the proof of Theorem 2.

Proof of Theorem 2. (i) Let us assume that $(\hat u, \hat\xi)$ is an optimal control for the problem (14), so that inequality (48) is valid for every $(v, \zeta) \in \mathcal{A}$. If we choose $\zeta = \hat\xi$ in inequality (48), we see that for every measurable, $\mathcal{F}_t$-adapted process $v : [0, T] \times \Omega \to A_1$,
$$E\left[\int_0^T H_u(t, \hat x(t), \hat u(t), p(t), q(t), r(t, \cdot))\,(v(t) - \hat u(t))\,dt\right] \le 0.$$
For $v \in A_1$ define
$$B := \big\{(t, \omega) \in [0, T] \times \Omega : H_u(t, \hat x(t), \hat u(t), p(t), q(t), r(t, \cdot))\,(v - \hat u(t)) > 0\big\}.$$
Obviously $B$ is measurable, for each $v$. Let us define $v^B$ by
$$v^B(t, \omega) := v\,\mathbb{1}_B(t, \omega) + \hat u(t, \omega)\,\mathbb{1}_{B^c}(t, \omega).$$
If $(\lambda \otimes P)(B) > 0$, where $\lambda$ denotes the Lebesgue measure, then
$$E\left[\int_0^T H_u(t, \hat x(t), \hat u(t), p(t), q(t), r(t, \cdot))\,(v^B(t) - \hat u(t))\,dt\right] > 0,$$
which contradicts (49), unless $(\lambda \otimes P)(B) = 0$. Hence the conclusion follows.
(ii) If instead we choose $v = \hat u$ in inequality (48), we obtain that for every measurable, $\mathcal{F}_t$-adapted process $\zeta \in \mathcal{A}_2$, the following inequality holds:
$$E\left[\int_{[0, T]} \{k(t) + G^\top(t)\,p(t)\}\,d(\zeta - \hat\xi)(t)\right] \le 0.$$
In particular, for each $i = 1, \ldots, m$, we put $\zeta_i(t) := \hat\xi_i(t) + \int_0^t \mathbb{1}_{\{k_i(s) + G_i^\top(s)\,p(s) > 0\}}\,ds$. Since the Lebesgue measure is regular, the purely discontinuous part of $\zeta_i - \hat\xi_i$ vanishes. Obviously, the relation (53) can be written as
$$E\left[\sum_{i=1}^m \int_0^T \{k_i(t) + G_i^\top(t)\,p(t)\}\,\mathbb{1}_{\{k_i(t) + G_i^\top(t)\,p(t) > 0\}}\,dt\right] \le 0.$$
This contradicts (53) unless, for every $i = 1, \ldots, m$, $k_i(t) + G_i^\top(t)\,p(t) \le 0$, $dt \otimes dP$-a.e. This proves (20).
Let us prove (21). Define $\zeta(t) = 0$ and $\zeta(t) = 2\,\hat\xi(t)$, for $t \in [0, T]$; then we have $\zeta \in \mathcal{A}_2$ in both cases, and $d(\zeta - \hat\xi)(t) = \pm\,d\hat\xi(t)$. Hence, we can rewrite (53) as follows:
$$E\left[\int_{[0, T]} \{k(t) + G^\top(t)\,p(t)\}\,d\hat\xi(t)\right] = 0.$$
By comparing with (53) we get that the integrand vanishes $d\hat\xi$-a.e.; splitting $\hat\xi$ into its continuous and discontinuous parts and using (20), then we conclude that
$$\{k_i(t) + G_i^\top(t)\,p(t)\}\,d\hat\xi_i^c(t) = 0, \quad \text{for } i = 1, \ldots, m.$$
Expressions (22) and (23) are proved by using the same techniques. First, for each $t_0 \in [0, T]$ and $i$ fixed, we define $\zeta_i(t) := \hat\xi_i(t) + \mathbb{1}_{\{k_i(t_0) + G_i^\top(t_0)(p(t_0^-) + \Delta_N p(t_0)) > 0\}}\,\delta_{t_0}([0, t])$, where $\delta_{t_0}$ denotes the Dirac unit mass at $t_0$; $\delta_{t_0}$ is a discrete measure; then $\zeta \in \mathcal{A}_2$ and $d(\zeta_i - \hat\xi_i)(t) = \mathbb{1}_{\{k_i(t_0) + G_i^\top(t_0)(p(t_0^-) + \Delta_N p(t_0)) > 0\}}\,\delta_{t_0}(dt)$. Hence
$$E\left[\{k_i(t_0) + G_i^\top(t_0)\,(p(t_0^-) + \Delta_N p(t_0))\}\,\mathbb{1}_{\{k_i(t_0) + G_i^\top(t_0)(p(t_0^-) + \Delta_N p(t_0)) > 0\}}\right] \le 0,$$
which contradicts (53), unless for every $t \in [0, T]$ and $i = 1, \ldots, m$, we have
$$k_i(t) + G_i^\top(t)\,(p(t^-) + \Delta_N p(t)) \le 0.$$
Next, let $\zeta$ be defined by
$$\zeta_i(t) := \hat\xi_i^c(t) + \sum_{0 \le s \le t} \mathbb{1}_{\{k_i(s) + G_i^\top(s)(p(s^-) + \Delta_N p(s)) = 0\}}\,\Delta\hat\xi_i(s).$$
Then, the relation (53) can be written as
$$E\left[\sum_{0 \le t \le T} \sum_{i=1}^m \{k_i(t) + G_i^\top(t)\,(p(t^-) + \Delta_N p(t))\}\,\mathbb{1}_{\{k_i(t) + G_i^\top(t)(p(t^-) + \Delta_N p(t)) < 0\}}\,\Delta\hat\xi_i(t)\right] \ge 0,$$
which implies that each summand vanishes. By the fact that $\Delta\hat\xi_i(t) \ge 0$, and $k_i(t) + G_i^\top(t)\,(p(t^-) + \Delta_N p(t)) \le 0$, we get
$$\{k_i(t) + G_i^\top(t)\,(p(t^-) + \Delta_N p(t))\}\,\Delta\hat\xi_i(t) = 0.$$
Thus (23) holds. The proof is complete.

Now, by applying Itô’s formula to the process defined by relation (44), it is easy to check that the processes $(p, q, r(\cdot))$ satisfy the BSDE (18), called the adjoint equation.

3.2. Sufficient Conditions of Optimality

It is well known that in the classical cases (without the singular part of the control), the sufficient condition of optimality is of significant importance in the stochastic maximum principle, in the sense that it allows one to compute optimal controls. This result states that, under some concavity conditions, maximizing the Hamiltonian leads to an optimal control.
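Computationally, this suggests the usual recipe: given the adjoint values at $(t, \hat x(t))$, pick the control that maximizes $u \mapsto H(t, \hat x(t), u, p(t), q(t), r(t, \cdot))$ over $A_1$. Here is a grid-search sketch, reusing the hypothetical `hamiltonian` helper from the earlier sketch (a stand-in for solving the first-order condition, not the paper's algorithm):

```python
import numpy as np

def argmax_hamiltonian(H, t, x, p, q, r, A1_grid):
    """Pointwise maximizer of u -> H(t, x, u, p, q, r) over a grid of the convex set A1."""
    values = np.array([H(t, x, u, p, q, r) for u in A1_grid])
    return A1_grid[values.argmax()]

# Usage sketch: u_hat = argmax_hamiltonian(H, t, x, p, q, r, np.linspace(-1.0, 1.0, 201)),
# where H wraps the `hamiltonian` helper with the model coefficients fixed.
```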

In this section, we focus on proving the sufficient maximum principle for mixed classical-singular stochastic control problems, where the state of the system is governed by a stochastic differential equation with jumps, allowing both classical control and singular control.

Theorem 8 (sufficient condition of optimality in integral form). Let $(\hat u, \hat\xi)$ be an admissible control and denote by $\hat x$ the associated controlled state process. Let $(p, q, r(\cdot))$ be the unique solution of (18). Let one assume that $(x, u) \mapsto H(t, x, u, p(t), q(t), r(t, \cdot))$ and $x \mapsto g(x)$ are concave functions. Moreover suppose that, for all $(v, \zeta) \in \mathcal{A}$,
$$E\left[\int_0^T H_u(t, \hat x(t), \hat u(t), p(t), q(t), r(t, \cdot))\,(v(t) - \hat u(t))\,dt + \int_{[0, T]} \{k(t) + G^\top(t)\,p(t)\}\,d(\zeta - \hat\xi)(t)\right] \le 0.$$
Then $(\hat u, \hat\xi)$ is an optimal control.

Proof. For convenience, we will use the following notations throughout the proof:
$$\hat H(t) := H(t, \hat x(t), \hat u(t), p(t), q(t), r(t, \cdot)), \qquad H(t) := H(t, x(t), u(t), p(t), q(t), r(t, \cdot)),$$
$$\hat f(t) := f(t, \hat x(t), \hat u(t)), \qquad f(t) := f(t, x(t), u(t)),$$
and similarly for the derivatives of $H$.
Let $(u, \xi) \in \mathcal{A}$ be an arbitrary admissible pair, and consider the difference
$$J(\hat u, \hat\xi) - J(u, \xi) = E\left[\int_0^T \{\hat f(t) - f(t)\}\,dt + g(\hat x(T)) - g(x(T))\right] + E\left[\int_{[0, T]} k(t)\,d(\hat\xi - \xi)(t)\right].$$
We first note that, by concavity of $g$, we conclude that
$$E\big[g(\hat x(T)) - g(x(T))\big] \ge E\big[g_x(\hat x(T))\,(\hat x(T) - x(T))\big] = E\big[p(T)\,(\hat x(T) - x(T))\big],$$
which implies, by the integration by parts formula applied to $p(t)\,(\hat x(t) - x(t))$, that
$$E\big[g(\hat x(T)) - g(x(T))\big] \ge E\left[\int_0^T p(t^-)\,d(\hat x - x)(t) + \int_0^T (\hat x(t^-) - x(t^-))\,dp(t) + [p, \hat x - x](T)\right].$$
By the fact that $(p, q, r) \in \mathcal{S}^2 \times \mathcal{M}^2 \times \mathcal{L}^2_\nu$, we deduce that the stochastic integrals with respect to the local martingales have zero expectation. Due to the concavity of the Hamiltonian $H$ in $(x, u)$, the following holds:
$$E\left[\int_0^T \{\hat H(t) - H(t)\}\,dt\right] \ge E\left[\int_0^T \hat H_x(t)\,(\hat x(t) - x(t))\,dt + \int_0^T \hat H_u(t)\,(\hat u(t) - u(t))\,dt\right].$$
The definition of the Hamiltonian, the adjoint equation (18), and (64) lead to $J(\hat u, \hat\xi) - J(u, \xi) \ge 0$, which means that $(\hat u, \hat\xi)$ is an optimal control for the problem (14).

The expression (64) is a sufficient condition of optimality in integral form. We want to rewrite this inequality in a suitable form for applications. This is the objective of the following theorem, which could be seen as a natural extension of the corresponding result in [2] to the jump setting and of [3, Theorem 2.1] to mixed regular-singular control problems.

Theorem 9 (sufficient conditions of optimality). Let $(\hat u, \hat\xi)$ be an admissible control and $\hat x$ the associated controlled state process. Let $(p, q, r(\cdot))$ be the unique solution of (18). Let one assume that $(x, u) \mapsto H(t, x, u, p(t), q(t), r(t, \cdot))$ and $x \mapsto g(x)$ are concave functions. If in addition one assumes that(i) for all $t \in [0, T]$ and $v \in A_1$,
$$H_u(t, \hat x(t), \hat u(t), p(t), q(t), r(t, \cdot))\,(v - \hat u(t)) \le 0;$$
(ii) for all $t \in [0, T]$, with probability 1, for $i = 1, \ldots, m$,
$$k_i(t) + G_i^\top(t)\,p(t) \le 0, \qquad \{k_i(t) + G_i^\top(t)\,p(t)\}\,d\hat\xi_i^c(t) = 0,$$
$$k_i(t) + G_i^\top(t)\,(p(t^-) + \Delta_N p(t)) \le 0, \qquad \{k_i(t) + G_i^\top(t)\,(p(t^-) + \Delta_N p(t))\}\,\Delta\hat\xi_i(t) = 0.$$
Then $(\hat u, \hat\xi)$ is an optimal control.

Proof. Using (71) and (72) yields
$$E\left[\int_{[0, T]} \{k(t) + G^\top(t)\,p(t)\}\,d(\xi^c - \hat\xi^c)(t)\right] \le 0.$$
The same computations applied to (73) and (74) imply
$$E\left[\sum_{0 \le t \le T} \{k(t) + G^\top(t)\,(p(t^-) + \Delta_N p(t))\}\,(\Delta\xi(t) - \Delta\hat\xi(t))\right] \le 0.$$
Hence, from Definition 1, we have the following inequality:
$$E\left[\int_{[0, T]} \{k(t) + G^\top(t)\,p(t)\}\,d(\xi - \hat\xi)(t)\right] \le 0.$$
The desired result follows from Theorem 8.

4. Relation to Dynamic Programming

In this section, we come back to the control problem studied in the previous section. We recall a verification theorem, which is useful to compute optimal controls. Then we show that the adjoint process defined in Section 3, as the unique solution to the BSDE (18), can be expressed as the gradient of the value function, which solves the HJB variational inequality.

4.1. A Verification Theorem

Let $x^{t,x}(s)$, $s \in [t, T]$, be the solution of the controlled SDE (8) with initial value $x^{t,x}(t) = x$. To put the problem in a Markovian framework, so that we can apply dynamic programming, we define the performance criterion
$$J^{u,\xi}(t, x) := E\left[\int_t^T f(s, x^{t,x}(s), u(s))\,ds + g(x^{t,x}(T)) + \int_{[t, T]} k(s)\,d\xi(s)\right].$$

Since our objective is to maximize this functional, the value function of the singular control problem becomes
$$V(t, x) := \sup_{(u, \xi) \in \mathcal{A}} J^{u,\xi}(t, x).$$
If we do not apply any singular control, then the infinitesimal generator $\mathcal{L}^u$, associated with (8), acting on functions $\phi \in C^{1,2}([0, T] \times \mathbb{R}^n)$, coincides with the parabolic integro-differential operator given by
$$\mathcal{L}^u \phi(t, x) = \sum_{i=1}^n b_i(t, x, u)\,\frac{\partial \phi}{\partial x_i}(t, x) + \frac{1}{2} \sum_{i,j=1}^n a_{ij}(t, x, u)\,\frac{\partial^2 \phi}{\partial x_i \partial x_j}(t, x) + \int_E \Big\{\phi(t, x + \gamma(t, x, u, e)) - \phi(t, x) - \sum_{i=1}^n \gamma_i(t, x, u, e)\,\frac{\partial \phi}{\partial x_i}(t, x)\Big\}\,\nu(de),$$
where $a_{ij}$ denotes the generic term of the symmetric matrix $\sigma \sigma^\top$. The variational inequality associated with the singular control problem is
$$\max\Big\{\sup_{u \in A_1}\Big[\frac{\partial V}{\partial t}(t, x) + \mathcal{L}^u V(t, x) + f(t, x, u)\Big],\ \max_{1 \le i \le m}\big\{k_i(t) + G_i^\top(t)\,V_x(t, x)\big\}\Big\} = 0,$$
for $(t, x) \in [0, T] \times \mathbb{R}^n$, with the terminal condition
$$V(T, x) = g(x), \qquad x \in \mathbb{R}^n,$$
where $G_i$, for $i = 1, \ldots, m$, denotes the $i$th column of the matrix $G$.
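For intuition, the sketch below evaluates the integro-differential generator and the left-hand side of the variational inequality on a smooth test function in one dimension, using central finite differences for the derivatives and an assumed quadrature rule for the Lévy integral; this is only a numerical reading of the formulas above, not part of the paper.

```python
import numpy as np

def generator(phi, t, x, u, b, sig, gam, nu_nodes, nu_weights, h=1e-4):
    """L^u phi = b*phi_x + (1/2)*sigma^2*phi_xx
               + integral { phi(x + gamma(e)) - phi(x) - gamma(e)*phi_x } nu(de), 1-d case."""
    phi_x  = (phi(t, x + h) - phi(t, x - h)) / (2 * h)
    phi_xx = (phi(t, x + h) - 2 * phi(t, x) + phi(t, x - h)) / h ** 2
    local  = b(t, x, u) * phi_x + 0.5 * sig(t, x, u) ** 2 * phi_xx
    nonloc = sum(w * (phi(t, x + gam(t, x, u, e)) - phi(t, x) - gam(t, x, u, e) * phi_x)
                 for e, w in zip(nu_nodes, nu_weights))
    return local + nonloc

def vi_lhs(V, V_t, t, x, u_grid, f, k, G, b, sig, gam, nu_nodes, nu_weights, h=1e-4):
    """Left-hand side of the variational inequality: it should vanish at the value function."""
    V_x = (V(t, x + h) - V(t, x - h)) / (2 * h)
    pde = max(V_t(t, x) + generator(V, t, x, u, b, sig, gam, nu_nodes, nu_weights, h)
              + f(t, x, u) for u in u_grid)
    return max(pde, k(t) + G * V_x)
```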

We start with the definition of classical solutions of the variational inequality (81).

Definition 10. Let one consider a function $\phi \in C^{1,2}([0, T] \times \mathbb{R}^n)$, and define the nonintervention region by
$$D := \Big\{(t, x) \in [0, T] \times \mathbb{R}^n : \max_{1 \le i \le m}\big\{k_i(t) + G_i^\top(t)\,\phi_x(t, x)\big\} < 0\Big\}.$$
We say that $\phi$ is a classical solution of (81) if
$$\frac{\partial \phi}{\partial t}(t, x) + \sup_{u \in A_1}\big\{\mathcal{L}^u \phi(t, x) + f(t, x, u)\big\} = 0, \quad \text{for all } (t, x) \in D,$$
and
$$\frac{\partial \phi}{\partial t}(t, x) + \sup_{u \in A_1}\big\{\mathcal{L}^u \phi(t, x) + f(t, x, u)\big\} \le 0, \qquad k_i(t) + G_i^\top(t)\,\phi_x(t, x) \le 0, \quad i = 1, \ldots, m,$$
for all $(t, x) \in [0, T] \times \mathbb{R}^n$.

The following verification theorem is very useful to compute explicitly the value function and the optimal control, at least in the case where the value function is sufficiently smooth.

Theorem 11. Let $\phi$ be a classical solution of (81) with the terminal condition (82), such that $|\phi(t, x)| \le C(1 + |x|^2)$ for some constant $C > 0$. Then, for all $(t, x) \in [0, T] \times \mathbb{R}^n$ and $(u, \xi) \in \mathcal{A}$,
$$\phi(t, x) \ge J^{u,\xi}(t, x), \quad \text{and hence } \phi(t, x) \ge V(t, x).$$
Furthermore, if there exists $(\hat u, \hat\xi) \in \mathcal{A}$ such that, with probability 1, $(s, \hat x(s)) \in \bar D$ for all $s \in [t, T]$, $\hat u(s)$ attains the supremum in (81) at $(s, \hat x(s))$, $\{k(s) + G^\top(s)\,\phi_x(s, \hat x(s))\}\,d\hat\xi^c(s) = 0$, and
$$\Delta_{\hat\xi}\phi(s, \hat x(s)) + k(s)\,\Delta\hat\xi(s) = 0$$
for all jumping times $s$ of $\hat\xi$, then it follows that $\phi(t, x) = V(t, x)$ and $(\hat u, \hat\xi)$ is an optimal control.

Proof. See the corresponding verification theorem in [8].

In the following, we present an example on optimal harvesting from a geometric Brownian motion with jumps; see, for example, [5, 8].

Example 12. Consider a population having a size $x(t)$ which evolves according to the geometric Lévy process; that is,
$$dx(t) = x(t^-)\Big[\mu\,dt + \sigma\,dW(t) + \theta \int_{\mathbb{R}_+} z\,\tilde N(dt, dz)\Big] - d\xi(t), \qquad x(0^-) = x > 0.$$
Here $\xi(t)$ is the total number of individuals harvested up to time $t$, and we define the price per unit harvested at time $t$ by $e^{-\rho t}$. Then the objective is to maximize the expected total time-discounted value of the harvested individuals starting with a population of size $x$; that is,
$$V(0, x) := \sup_{\xi \in \Pi(x)} J^\xi(0, x), \qquad J^\xi(0, x) := E\left[\int_{[0, T_0]} e^{-\rho t}\,d\xi(t)\right],$$
where $T_0 := \inf\{t > 0 : x(t) \le 0\}$ is the time of complete depletion, and $\mu$, $\sigma$, $\theta$, and $\rho$ are positive constants with $\rho > \mu$. The harvesting admissible strategy $\xi$ is assumed to be nonnegative, nondecreasing, continuous on the right with left-hand limits, satisfying $\xi(0^-) = 0$, and such that $x(t) \ge 0$ for all $t \le T_0$. We denote by $\Pi(x)$ the class of such strategies. For any $(s, x)$ define $J^\xi(s, x)$ and $V(s, x)$ analogously; note that the definition of $J^\xi(s, x)$ is similar to $J^\xi(0, x)$, except that the starting time is $s$, and the state at $s$ is $x$.
If we guess the nonintervention region has the form $D = \{(t, x) : 0 < x < b^*\}$ for some barrier point $b^* > 0$, then (84) gets the form, for $0 < x < b^*$,
$$\frac{\partial V}{\partial t}(t, x) + \mu x\,\frac{\partial V}{\partial x}(t, x) + \frac{1}{2}\sigma^2 x^2\,\frac{\partial^2 V}{\partial x^2}(t, x) + \int_{\mathbb{R}_+}\Big\{V(t, x + \theta x z) - V(t, x) - \theta x z\,\frac{\partial V}{\partial x}(t, x)\Big\}\,\nu(dz) = 0.$$
We try a solution of the form $V(t, x) = e^{-\rho t}\,\psi(x)$; hence
$$-\rho\,\psi(x) + \mu x\,\psi'(x) + \frac{1}{2}\sigma^2 x^2\,\psi''(x) + \int_{\mathbb{R}_+}\big\{\psi(x + \theta x z) - \psi(x) - \theta x z\,\psi'(x)\big\}\,\nu(dz) = 0,$$
where $\psi$ is the fundamental solution of this ordinary integro-differential equation. We notice that $\psi(x) = A\,x^\gamma$ solves it, for some arbitrary constant $A > 0$; we get $h(\gamma) = 0$, where
$$h(\gamma) := \mu\gamma + \frac{1}{2}\sigma^2\gamma(\gamma - 1) - \rho + \int_{\mathbb{R}_+}\big\{(1 + \theta z)^\gamma - 1 - \gamma\theta z\big\}\,\nu(dz).$$
Note that $h(1) = \mu - \rho < 0$ and $\lim_{\gamma \to +\infty} h(\gamma) = +\infty$; then there exists $\gamma^* > 1$ such that $h(\gamma^*) = 0$. The constant $\gamma^*$ is given by the unique root of $h$ in $(1, \infty)$.
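Since $h$ is convex with $h(1) = \mu - \rho < 0$ and $h(\gamma) \to +\infty$, the root $\gamma^*$ is easy to locate by bisection. Here is a small sketch, with hypothetical parameter values and a hypothetical discretization of the Lévy measure:

```python
import numpy as np

mu, sigma, rho, theta = 0.05, 0.3, 0.1, 0.2
nu_nodes   = np.linspace(0.1, 2.0, 50)   # assumed discretized Levy measure on (0, infinity)
nu_weights = np.full(50, 0.02)

def h(g):
    """h(gamma) = mu*g + (1/2)*sigma^2*g*(g-1) - rho + int{(1+theta*z)^g - 1 - g*theta*z} nu(dz)."""
    jump = np.sum(nu_weights * ((1 + theta * nu_nodes) ** g - 1 - g * theta * nu_nodes))
    return mu * g + 0.5 * sigma ** 2 * g * (g - 1) - rho + jump

lo, hi = 1.0, 2.0            # h(1) = mu - rho < 0
while h(hi) < 0:             # expand until the sign changes
    hi *= 2
for _ in range(80):          # plain bisection
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if h(mid) < 0 else (lo, mid)
gamma_star = 0.5 * (lo + hi)
print(gamma_star)            # the root gamma* > 1
```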
Outside $D$ we require that $\psi'(x) = 1$; that is, $\psi(x) = x + C$ for $x \ge b^*$, where $C$ is a constant to be determined. This suggests that the value must be of the form
$$\psi(x) = \begin{cases} A\,x^{\gamma^*}, & 0 < x < b^*, \\ x + C, & x \ge b^*. \end{cases}$$
Assuming the smooth fit principle at the point $b^*$, that is, continuity of $\psi$ and $\psi'$ at $b^*$, the reflection threshold $b^*$ is determined, together with the constants, by
$$A\,\gamma^*\,(b^*)^{\gamma^* - 1} = 1, \qquad C = A\,(b^*)^{\gamma^*} - b^*,$$
where $A$ and $C$ are the constants appearing in the expression of $\psi$.
Since $\gamma^* > 1$ and $A\,\gamma^*\,(b^*)^{\gamma^* - 1} = 1$, we deduce that $C = b^*\,(1 - \gamma^*)/\gamma^* < 0$.
To construct the optimal control $\hat\xi$, we consider the stochastic differential equation
$$d\hat x(t) = \hat x(t^-)\Big[\mu\,dt + \sigma\,dW(t) + \theta \int_{\mathbb{R}_+} z\,\tilde N(dt, dz)\Big] - d\hat\xi(t),$$
together with the conditions $0 < \hat x(t) \le b^*$ for all $t$, $\hat\xi$ nondecreasing, and $\hat\xi^c$ increasing only on the set $\{\hat x(t) = b^*\}$; and if this is the case, then $\hat x$ is reflected downwards at the barrier $b^*$. Arguing as in [7], we can adapt Theorem 15 in [16] to obtain an identification of the optimal harvesting strategy as a local time of a reflected jump diffusion process. Then, the system (106)–(109) defines the so-called Skorokhod problem, whose solution is a pair $(\hat x, \hat\xi)$, where $\hat x$ is a jump diffusion process reflected at $b^*$.
The conditions (89)–(92) ensure the existence of an increasing process $\hat\xi(t)$ such that $\hat x(t)$ stays in $(0, b^*]$ for all times $t$. If the initial size $x \le b^*$, then $\hat\xi$ is nondecreasing and its continuous part increases only when $\hat x(t) = b^*$, so as to ensure that $\hat x(t) \le b^*$.
On the other hand, we only have $\Delta\hat\xi(t) \ne 0$ either if the initial size $x > b^*$, in which case $\hat\xi(0) = x - b^*$, or if $\hat x$ jumps out of the nonintervention region by the random measure $N$; that is, $\hat x(t^-) + \Delta_N \hat x(t) > b^*$. In these cases we harvest immediately, to bring $\hat x(t)$ back to $b^*$.
It is easy to verify that, if $(\hat x, \hat\xi)$ is a solution of the Skorokhod problem (106)–(109), then $\hat\xi$ is an optimal solution of the problem (93) and (94).
By the construction of $\hat x$ and $\hat\xi$, all the conditions of the verification Theorem 11 are satisfied. More precisely, the value function along the optimal state reads as
$$V(t, \hat x(t)) = e^{-\rho t}\,\psi(\hat x(t)) = e^{-\rho t}\,A\,\hat x(t)^{\gamma^*}, \quad \text{since } \hat x(t) \le b^*.$$
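A simulation sketch of the barrier strategy closes the example: the population follows the geometric Lévy dynamics, every overshoot above $b^*$ (initial, diffusive, or by a jump) is harvested immediately, and the discounted harvest is accumulated. The barrier and all parameter values below are hypothetical stand-ins, not computed from the smooth fit conditions.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, rho, theta = 0.05, 0.3, 0.1, 0.2
lam, mark_mean = 1.0, 1.05          # jump intensity; marks z ~ Uniform(0.1, 2.0), E[z] = 1.05
b_star = 1.5                        # assumed reflection threshold
T, n = 50.0, 20000
dt = T / n

def discounted_harvest(x0=1.0):
    """Value of the reflection strategy: harvest dxi = (x - b*)^+ at every step."""
    x = x0
    dxi = max(x - b_star, 0.0)      # initial harvest if we start above the barrier
    value, x = dxi, x - dxi
    for i in range(n):
        t = i * dt
        dW = rng.normal(0.0, np.sqrt(dt))
        marks = rng.uniform(0.1, 2.0, rng.poisson(lam * dt))
        # compensated jump part: theta * (sum of marks - lam * E[z] * dt)
        x += x * (mu * dt + sigma * dW + theta * (marks.sum() - lam * mark_mean * dt))
        if x <= 0.0:                # complete depletion: the problem stops
            break
        dxi = max(x - b_star, 0.0)  # reflection at b*, including jump overshoots
        value += np.exp(-rho * t) * dxi
        x -= dxi
    return value

print(np.mean([discounted_harvest() for _ in range(200)]))
```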

4.2. Link between the SMP and DPP

Compared with the stochastic maximum principle, one would expect the solution of BSDE (18) to correspond to the derivatives of the classical solution of the variational inequalities (81)-(82). This is given by the following theorem, which extends [3, Theorem 3.1] to control problems with a singular component and [2, Theorem 3.3] to diffusions with jumps.

Theorem 13. Let $\phi$ be a classical solution of (81), with the terminal condition (82). Assume that $\phi \in C^{1,3}([0, T] \times \mathbb{R}^n)$, with $\phi_x$ of at most polynomial growth, and that there exists $(\hat u, \hat\xi) \in \mathcal{A}$ such that the conditions (89)–(92) are satisfied. Then the solution of the BSDE (18) is given by
$$p(t) = \phi_x(t, \hat x(t)), \qquad q(t) = \phi_{xx}(t, \hat x(t))\,\sigma(t, \hat x(t), \hat u(t)),$$
$$r(t, e) = \phi_x\big(t, \hat x(t^-) + \gamma(t, \hat x(t^-), \hat u(t), e)\big) - \phi_x\big(t, \hat x(t^-)\big).$$

Proof. Throughout the proof, we will use the following abbreviations: for $t \in [0, T]$, and for $h = b, \sigma, \gamma, f$ and their derivatives,
$$\hat h(t) := h(t, \hat x(t), \hat u(t)), \qquad \phi_x(t) := \phi_x(t, \hat x(t)), \qquad \phi_{xx}(t) := \phi_{xx}(t, \hat x(t)).$$
From Itô’s rule applied to the semimartingale $\phi_x(t, \hat x(t))$, one has the decomposition (114), where $\mathcal{L}^u$ is defined as in Theorem 11, and the sum is taken over all jumping times $\tau$ of $\hat\xi$. Note that
$$\Delta\hat x(\tau) = \Delta_N \hat x(\tau) + \Delta_{\hat\xi}\hat x(\tau),$$
where $\Delta_{\hat\xi}\hat x(\tau) = G(\tau)\,\Delta\hat\xi(\tau)$ is a pure jump process. Then, we can rewrite (114) as (116). Let $\hat\xi^c$ denote the continuous part of $\hat\xi$; that is, $\hat\xi^c(t) = \hat\xi(t) - \sum_{\tau \le t} \Delta\hat\xi(\tau)$. Then, we can easily show that the $d\hat\xi$ terms in (116) split into a $d\hat\xi^c$ integral and a sum over the jumping times of $\hat\xi$.
For every $(t, x) \in D$, using (88) we have
$$\frac{\partial \phi}{\partial t}(t, x) + \mathcal{L}^{\hat u(t)}\phi(t, x) + f(t, x, \hat u(t)) = 0.$$
This proves (119). Furthermore, for every $t$ and $i = 1, \ldots, m$, the measure $d\hat\xi_i^c$ is carried by the set where $k_i(t) + G_i^\top(t)\,\phi_x(t, \hat x(t)) = 0$.
But (91) implies that $\{k(t) + G^\top(t)\,\phi_x(t, \hat x(t))\}\,d\hat\xi^c(t) = 0$; thus the $d\hat\xi^c$ terms in (116) reduce to (120).
The mean value theorem yields
$$\phi_x(\tau, \hat x(\tau)) - \phi_x\big(\tau, \hat x(\tau^-) + \Delta_N \hat x(\tau)\big) = \phi_{xx}(\tau, \bar x(\tau))\,\Delta_{\hat\xi}\hat x(\tau),$$
where $\bar x(\tau)$ is some point on the straight line between $\hat x(\tau^-) + \Delta_N \hat x(\tau)$ and $\hat x(\tau)$, and $\phi_{xx}$ represents the gradient matrix of $\phi_x$. To prove that the right-hand side of the above equality vanishes, it is enough to check that if $\Delta\hat\xi_i(\tau) \ne 0$ then $\phi_{xx}(\tau, \bar x(\tau))\,G_i(\tau) = 0$, for $i = 1, \ldots, m$. It is clear by (92) that, whenever $\Delta\hat\xi_i(\tau) \ne 0$, the state jumps along the direction $G_i(\tau)$ inside the region where $k_i(\tau) + G_i^\top(\tau)\,\phi_x(\tau, \cdot) = 0$.
Since $k_i(\tau) + G_i^\top(\tau)\,\phi_x(\tau, \cdot)$ vanishes identically along this segment, then its directional derivative in the direction $G_i(\tau)$ vanishes as well, for $i = 1, \ldots, m$. According to (88), we obtain $G_i^\top(\tau)\,\phi_{xx}(\tau, \bar x(\tau)) = 0$. This shows that the sum over the jumping times of $\hat\xi$ in (116) reduces to (124).
On the other hand, define
$$F(t, x, u) := \frac{\partial \phi}{\partial t}(t, x) + \mathcal{L}^u \phi(t, x) + f(t, x, u),$$
which vanishes at $(t, \hat x(t), \hat u(t))$ on the nonintervention region.
If we differentiate $F$ with respect to $x$ and evaluate the result at $(t, \hat x(t), \hat u(t))$, we deduce easily from (84), (89), and (90) that (126) holds.
Finally, substituting (119), (120), (124), and (126) into (116) yields
$$d\phi_x(t, \hat x(t)) = -H_x\big(t, \hat x(t), \hat u(t), \phi_x(t), \phi_{xx}(t)\,\hat\sigma(t), \phi_x(t, \hat x(t^-) + \hat\gamma(t, \cdot)) - \phi_x(t, \hat x(t^-))\big)\,dt + \phi_{xx}(t)\,\hat\sigma(t)\,dW(t) + \int_E \big\{\phi_x(t, \hat x(t^-) + \hat\gamma(t, e)) - \phi_x(t, \hat x(t^-))\big\}\,\tilde N(dt, de).$$
The continuity of $\phi_{xx}$ leads to (128).
Clearly, the triple $\big(\phi_x(t, \hat x(t)),\ \phi_{xx}(t)\,\hat\sigma(t),\ \phi_x(t, \hat x(t^-) + \hat\gamma(t, \cdot)) - \phi_x(t, \hat x(t^-))\big)$ solves a BSDE of the same form as (18), with the terminal condition $\phi_x(T, \hat x(T)) = g_x(\hat x(T))$.
Now, from (17) we have
$$H_x(t, \hat x(t), \hat u(t), p(t), q(t), r(t, \cdot)) = f_x(t) + b_x(t)^\top p(t) + \sum_{j=1}^d \sigma_x^j(t)^\top q^j(t) + \int_E \gamma_x(t, e)^\top r(t, e)\,\nu(de).$$
The $i$th coordinate $p_i(t)$ of the adjoint process satisfies (131), with $p_i(T) = \frac{\partial g}{\partial x_i}(\hat x(T))$. Hence, the uniqueness of the solution of (131) and relation (128) allow us to get
$$p_i(t) = \frac{\partial \phi}{\partial x_i}(t, \hat x(t)), \qquad q_{ij}(t) = \sum_{l=1}^n \frac{\partial^2 \phi}{\partial x_i \partial x_l}(t, \hat x(t))\,\sigma_{lj}(t, \hat x(t), \hat u(t)),$$
where $q_{ij}$ is the generic element of the matrix $q$ and $\hat x$ is the optimal solution of the controlled SDE (8).
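The identification of Theorem 13 is easy to sanity-check numerically in one dimension: given a smooth candidate value function, the three adjoint components are read off as finite-difference derivatives along the optimal path. The candidate function and coefficients below are hypothetical.

```python
import numpy as np

def adjoint_from_value(V, t, x_hat, sigma_hat, gamma_hat, nu_nodes, h=1e-5):
    """Theorem 13 in 1-d: p = V_x, q = V_xx * sigma, r(e) = V_x(x + gamma(e)) - V_x(x)."""
    V_x  = lambda y: (V(t, y + h) - V(t, y - h)) / (2 * h)
    V_xx = (V_x(x_hat + h) - V_x(x_hat - h)) / (2 * h)
    p = V_x(x_hat)
    q = V_xx * sigma_hat
    r = np.array([V_x(x_hat + gamma_hat(e)) - V_x(x_hat) for e in nu_nodes])
    return p, q, r

# Usage sketch with a harvesting-type candidate (hypothetical constants A, gamma*):
# V = lambda t, x: np.exp(-0.1 * t) * 0.7 * x ** 1.8
# p, q, r = adjoint_from_value(V, 0.0, 1.0, sigma_hat=0.3, gamma_hat=lambda e: 0.2 * e,
#                              nu_nodes=np.linspace(0.1, 2.0, 50))
```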

Example 14. We return to the same example in the previous section.
Now, we illustrate a verification result for the maximum principle. We suppose that $T$ is a fixed finite time. In this case the Hamiltonian (there is no classical control in this example) takes the form
$$H(t, x, p, q, r(\cdot)) = \mu x p + \sigma x q + \theta x \int_{\mathbb{R}_+} z\,r(z)\,\nu(dz).$$
Let $\hat\xi$ be a candidate for an optimal control, let $\hat x$ be the corresponding state process, and let $(p, q, r(\cdot))$ be the corresponding solution of the following adjoint equation: for all $t \in [0, T]$,
$$dp(t) = -\Big\{\mu\,p(t) + \sigma\,q(t) + \theta \int_{\mathbb{R}_+} z\,r(t, z)\,\nu(dz)\Big\}\,dt + q(t)\,dW(t) + \int_{\mathbb{R}_+} r(t, z)\,\tilde N(dt, dz).$$
Since the original problem (93) has an infinite horizon, we assume the transversality condition
$$\liminf_{T \to \infty} E\big[p(T)\,(x(T) - \hat x(T))\big] \ge 0, \quad \text{for every admissible state process } x.$$
We remark that $p(t) \ge 0$ and $x(t) \ge 0$; then $E[p(T)\,x(T)] \ge 0$ for every admissible state process $x$, and the condition (138) reduces to
$$\lim_{T \to \infty} E\big[p(T)\,\hat x(T)\big] = 0.$$
We use the relation between the value function and the solution of the adjoint equation along the optimal state. We prove that the solution of the adjoint equation is represented as
$$p(t) = e^{-\rho t}\,\psi'(\hat x(t)),$$
for all $t \ge 0$, where $\psi$ is the function constructed in Example 12.
To see this, we differentiate the process $e^{-\rho t}\,\psi'(\hat x(t))$ using Itô’s rule for semimartingales and by using the same procedure as in the proof of Theorem 13. Then, the conclusion follows readily from the verification of (135), (136), and (139). First, an explicit formula for $p(t)$ is given in [4] by a conditional expectation involving the process $\Gamma(t, \cdot)$, where $\Gamma(t, s)$, $s \ge t$, is a geometric Lévy process defined by
$$\frac{d\Gamma(t, s)}{\Gamma(t, s^-)} = \mu\,ds + \sigma\,dW(s) + \theta \int_{\mathbb{R}_+} z\,\tilde N(ds, dz), \qquad \Gamma(t, t) = 1.$$
From the representation (142) and by the fact that $0 < \hat x(t) \le b^*$, we get that $p(t)\,\hat x(t)$ is bounded by a multiple of $e^{-\rho t}$; hence
$$E\big[p(T)\,\hat x(T)\big] \le C\,e^{-\rho T}.$$
By the dominated convergence theorem, we obtain (139) by sending $T$ to infinity in (145).
A simple computation shows that the conditions (135)–(138) are consequences of (107)–(109). This shows in particular that the pair $(\hat x, \hat\xi)$ satisfies the sufficient optimality conditions, and hence it is optimal. This completes the proof of the following result.

Theorem 15. One supposes that $\rho > \mu$ and $0 < x \le b^*$. If the strategy $\hat\xi$ is chosen such that the corresponding solution of the adjoint process is given by (141), then this choice is optimal.

Remark 16. In this example, it is shown in particular that the relationship between the stochastic maximum principle and dynamic programming could be very useful to solve explicitly constrained backward stochastic differential equations with transversality condition.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to thank the referees and the associate editor for valuable suggestions that led to a substantial improvement of the paper. This work has been partially supported by the Direction Générale de la Recherche Scientifique et du Développement Technologique (Algeria), under Contract no. 051/PNR/UMKB/2011, and by the French Algerian Cooperation Program, Tassili 13 MDU 887.