Abstract and Applied Analysis
Volume 2013 (2013), Article ID 469390, 9 pages
Necessary Conditions for Optimality for Stochastic Evolution Equations
Department of Mathematics, College of Science, Qassim University, P.O. Box 6644, Buraydah 51452, Saudi Arabia
Received 2 May 2013; Accepted 14 August 2013
Academic Editor: Fuding Xie
Copyright © 2013 AbdulRahman Al-Hussein. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
This paper is concerned with providing the maximum principle for a control problem governed by a stochastic evolution system on a separable Hilbert space. In particular, necessary conditions for optimality for this stochastic optimal control problem are derived by using the adjoint backward stochastic evolution equation. Moreover, all coefficients appearing in this system are allowed to depend on the control variable. We achieve our results through the semigroup approach.
Consider a stochastic control problem governed by a stochastic evolution equation (SEE) of the form (1) below. We will be interested in minimizing the cost functional, given by (5), over a set of admissible controls.
This system is driven by a possibly unbounded linear operator on a separable Hilbert space and by a cylindrical Wiener process, and it contains a control process that steers the state.
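In generic notation (the symbols below are illustrative, not the paper's own, since the original displays are not reproduced here), a controlled SEE of this type can be sketched as:

```latex
dX(t) = \bigl( A\,X(t) + b\bigl(X(t), u(t)\bigr) \bigr)\, dt
      + \sigma\bigl(X(t), u(t)\bigr)\, dW(t),
\qquad X(0) = x_0,
```

where $A$ is the (possibly unbounded) generator, $W$ is a cylindrical Wiener process, and $u$ is the control process.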
We will derive the maximum principle for this control problem. More precisely, we will concentrate on providing necessary conditions for optimality, which characterize this minimization. For this purpose we will apply the theory of backward stochastic evolution equations (BSEEs), namely (10) in Section 3. These equations, together with backward stochastic differential equations (BSDEs), have become of great importance in a number of fields. For example, in [1–7], one can find applications of BSDEs to stochastic optimal control problems. Some of these references have also studied the maximum principle to obtain either necessary or sufficient conditions for optimality for stochastic differential equations (SDEs) or stochastic partial differential equations (SPDEs). Necessary conditions for optimality, for the case when the noise term does not depend on the control, can be found in .
In our work here, we allow the noise coefficient to depend on the control variable and study a stochastic control problem associated with the former SEE. This control problem is explained in detail in Section 2; the main theorem is stated in Section 3 and is proved, together with all necessary estimates, in Section 4. Sufficient conditions for optimality for this optimal control problem can be found in . We refer the reader also to .
On the other hand, we recall that control problems governed by SPDEs driven by martingales are studied in . In fact, in , we derived the maximum principle (necessary conditions) for optimality of stochastic systems governed by SPDEs. The technique used there relies heavily on the variational approach. The reason behind this is that the only way known so far to find solutions to the resulting adjoint BSPDEs is through that same variational approach, established in detail in . Thus, the semigroup approach to obtaining mild solutions (as done here in Theorem 2 and in Section 3) cannot be used to study the adjoint BSPDEs considered in . Moreover, it is not obvious how one can allow the control variable to enter the noise term, and in particular the diffusion mapping in equation (1.1) of , and obtain a result like Theorem 3. This problem is still open and is also pointed out in [8, Remark 6.4].
In the present work, we show how to handle this open problem with success; as stated earlier, we can and will allow all coefficients in (1), and especially the diffusion term, to depend on the control variable. We emphasize that our approach requires neither the technique of Hamilton-Jacobi-Bellman equations nor that of viscosity solutions. We refer the reader to  for those approaches and to , and some of the related references therein, for the semigroup technique. Thus, our results here are new. In this respect, we thank the anonymous referee for pointing out the recent and relevant work of Fuhrman et al. in .
2. Statement of the Problem
Let a complete probability space be given, and consider the collection of its null sets. Let a cylindrical Wiener process on a separable Hilbert space be given, together with its completed natural filtration; see  for more details.
For a separable Hilbert space, denote the space of all progressively measurable, square-integrable processes with values in that space. This space is a Hilbert space with respect to the natural mean-square norm. Moreover, for integrands taking values in the space of Hilbert-Schmidt operators, the stochastic integral with respect to the Wiener process can be defined and is a continuous martingale. The norms and inner products on these spaces will be denoted in the standard fashion.
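For orientation, the process space in question typically has the following form (the notation below is illustrative, not the paper's own):

```latex
L^2_{\mathcal F}(0,T;H) \;=\; \Bigl\{\, \phi : [0,T]\times\Omega \to H
  \ \text{progressively measurable} \ : \
  \|\phi\|^2 := \mathbb E \int_0^T \|\phi(t)\|_H^2 \, dt \;<\; \infty \,\Bigr\}.
```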
Let us assume that the control state space is a separable Hilbert space equipped with an inner product, and fix a convex subset of it. We say that a control process is admissible if it is square integrable and takes its values in this convex subset a.e., a.s. The set of admissible controls will be denoted accordingly.
Suppose that the drift and diffusion coefficients are two continuous mappings, and consider the controlled SEE (4) with a given initial datum. A solution (in the sense of the following theorem) of (4) will be denoted so as to indicate the presence of the control process.
Let the running and terminal cost functions be two measurable mappings for which the cost functional (5) is well defined.
The optimal control problem for system (4) is to find the value function and an optimal control realizing the infimum of the cost functional (5) over the set of admissible controls.
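As a purely illustrative, finite-dimensional toy version of this minimization (the dynamics, the cost terms, and the constant-control grid below are all invented for illustration; the paper's actual problem is set on a separable Hilbert space), one can estimate the cost functional by Monte Carlo and scan a grid of constant controls. Note that, echoing the paper's main point, the control also enters the diffusion coefficient:

```python
# Toy version of the control problem: all dynamics and costs are invented.
import numpy as np

def simulate_cost(u, n_paths=5000, n_steps=100, T=1.0, seed=0):
    """Monte Carlo estimate of J(u) = E[ int_0^T (X_t^2 + u^2) dt + X_T^2 ]
    for dX = (-X + u) dt + (1 + 0.5 u) dW, X_0 = 1, with a constant control u."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    X = np.ones(n_paths)
    running = np.zeros(n_paths)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        running += (X**2 + u**2) * dt                  # left-point Riemann sum of the running cost
        X = X + (-X + u) * dt + (1.0 + 0.5 * u) * dW   # the control enters the noise term too
    return float(np.mean(running + X**2))

# Crude approximation of the value function: scan a grid of constant controls.
controls = np.linspace(-1.0, 1.0, 21)
costs = [simulate_cost(u) for u in controls]
best = float(controls[int(np.argmin(costs))])
print(f"approximate value {min(costs):.3f} at constant control u = {best:+.2f}")
```

The grid search over constant controls is of course a very coarse stand-in for minimizing over all admissible (adapted, set-valued) controls; it only illustrates how the cost functional responds to the control through both the drift and the noise.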
We close this section with the following theorem.
Theorem 1. Assume that the state operator is an unbounded linear operator generating a C₀-semigroup, that the coefficients are continuously Fréchet differentiable with respect to the state variable, and that their derivatives are uniformly bounded. Then, for every admissible control there exists a unique mild solution to (4); that is, a progressively measurable stochastic process satisfying, for all times, the variation-of-constants identity.
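In illustrative notation (with $\{S(t)\}_{t\ge 0}$ the $C_0$-semigroup generated by $A$, and $b$, $\sigma$ the drift and diffusion coefficients; these symbols are ours, not the paper's), the mild-solution identity asserted here is:

```latex
X(t) \;=\; S(t)\,x_0
  \;+\; \int_0^t S(t-s)\, b\bigl(X(s),u(s)\bigr)\, ds
  \;+\; \int_0^t S(t-s)\, \sigma\bigl(X(s),u(s)\bigr)\, dW(s).
```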
From here on, we will assume that the state operator is the infinitesimal generator of a C₀-semigroup. Its adjoint is then the infinitesimal generator of the corresponding adjoint semigroup.
3. Stochastic Maximum Principle
It is known from the literature that BSDEs play a fundamental role in deriving the maximum principle for SDEs. In this section, we will establish such a role for SEEs like (4). To this end, let us first define the Hamiltonian.
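A standard form of such a Hamiltonian (sign and scaling conventions vary between authors; the symbols here are illustrative) is:

```latex
\mathcal H(x,u,y,z) \;=\; \ell(x,u)
  \;+\; \bigl\langle\, b(x,u),\; y \,\bigr\rangle
  \;+\; \bigl\langle\, \sigma(x,u),\; z \,\bigr\rangle,
```

where $\ell$ is the running cost and $(y,z)$ are the adjoint variables.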
Then, we consider the BSEE (10), in which the gradient of the Hamiltonian with respect to the state variable is defined through the directional derivative of the Hamiltonian at a point in a given direction. This equation is the adjoint equation of (4).
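In illustrative notation, with $\mathcal H$ a Hamiltonian of the standard form, $h$ the terminal cost, and $A^*$ the adjoint of the generator, such an adjoint BSEE typically reads:

```latex
\begin{cases}
 -\,dY(t) \;=\; \bigl( A^* Y(t)
   + \nabla_x \mathcal H\bigl(X(t),u(t),Y(t),Z(t)\bigr) \bigr)\, dt
   \;-\; Z(t)\, dW(t), \\[4pt]
 \phantom{-\,d}Y(T) \;=\; \nabla h\bigl(X(T)\bigr).
\end{cases}
```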
As in the previous section, a mild solution (or simply a solution) of (10) is a pair of processes satisfying the corresponding mild (integral) form of (10) almost surely for all times.
Theorem 2. Assume that the coefficients are continuously Fréchet differentiable with respect to the state variable, that their derivatives are uniformly bounded, and that the associated integrability bound holds for some constant.
Then, there exists a unique (mild) solution of BSEE (10).
Our main result is the following.
Theorem 3. Suppose that the following two conditions hold:
(i) the coefficients and the running cost are continuously Fréchet differentiable with respect to the state and control variables, the terminal cost is continuously Fréchet differentiable with respect to the state, all of these derivatives are uniformly bounded, and the associated integrability bound holds for some constant;
(ii) the indicated derivative is Lipschitz with respect to its argument, uniformly in the remaining variables.
If a pair of state and control is optimal for the control problem (4)–(7), then there exists a unique solution to the corresponding BSEE (10) such that the following inequality holds:
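In illustrative notation, with $\mathcal H$ a Hamiltonian of the standard form, $(X^*,u^*)$ the optimal pair, $(Y,Z)$ the adjoint pair solving (10), and $\mathcal U$ the convex control set, such a necessary condition for a convex control domain typically reads:

```latex
\bigl\langle\, \nabla_u \mathcal H\bigl(X^*(t),\, u^*(t),\, Y(t),\, Z(t)\bigr),\;
  v - u^*(t) \,\bigr\rangle \;\ge\; 0
\qquad \forall\, v \in \mathcal U,\ \text{a.e. } t \in [0,T],\ \mathbb P\text{-a.s.}
```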
The proof of this theorem will be given in Section 4. Now, to illustrate this theorem, let us present an example.
Example 4. Let two separable Hilbert spaces be given as considered earlier. We will study in this example a special case of the control problem (4)–(7). In particular, with data as in Theorem 3, we would like to minimize a cost functional in which the operators appearing are bounded linear operators between the indicated spaces.
The Hamiltonian is then given by an explicit formula, and the corresponding adjoint BSEE takes the form (18).
From the construction of the solution of (18), as, for example, in [16, Lemma 3.1], this BSEE admits an explicit solution, expressed in terms of the unique element satisfying the associated identity.
On the other hand, for a fixed adjoint state, we note that the Hamiltonian, as a function of the control, attains its minimum at a point expressed through the adjoint operators of the bounded operators above. So, we select this minimizer as a candidate optimal control.
It is easy to see that with these choices all the requirements of Theorem 3 are verified. Hence, this candidate given in (21) is an optimal control for the problem (15)-(16), and its corresponding optimal solution is the solution of the following SEE:
Finally, the value function attains the formula
Remark 5. A concrete example in the setting of Example 4 can be constructed by taking , , (half-Laplacian), , , and , for some fixed elements , of and a positive definite nuclear operator on .
The computations of the relevant quantities in this case follow directly from the corresponding equations in Example 4.
4. Proof of Theorem 3
Let an optimal control be given, and let the corresponding solution of (4) be fixed. Choose an admissible direction in the control set and, for a given small parameter, consider the variational control:
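In illustrative notation, with $u^*$ the optimal control and $v$ an admissible direction, this convex perturbation typically reads:

```latex
u_\theta(t) \;=\; u^*(t) \;+\; \theta \,\bigl( v(t) - u^*(t) \bigr),
\qquad \theta \in [0,1].
```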
We note that the convexity of the control set implies that this variational control is admissible. Considering this control, we will let the solution of the SEE (4) corresponding to it be the perturbed state, denoted briefly.
Let be the solution of the following linear equation:
The following three lemmas contain estimates that will play a vital role in deriving the desired variational equation and the maximum principle for our control problem.
Lemma 6. Assume condition (i) of Theorem 3. Then,
Proof. The solution of (25) is given by the formula
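In illustrative notation (with $\xi$ the first-order variation process, $b_x, b_u, \sigma_x, \sigma_u$ the Fréchet derivatives of the coefficients evaluated along the optimal pair, and $v - u^*$ the perturbation direction), such a variation-of-constants formula typically reads:

```latex
\xi(t) \;=\; \int_0^t S(t-s)\,\bigl[\, b_x\,\xi(s) + b_u\,\bigl(v(s)-u^*(s)\bigr) \,\bigr]\, ds
  \;+\; \int_0^t S(t-s)\,\bigl[\, \sigma_x\,\xi(s) + \sigma_u\,\bigl(v(s)-u^*(s)\bigr) \,\bigr]\, dW(s).
```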
By using Minkowski's inequality (the triangle inequality), Hölder's inequality, Burkholder's inequality for stochastic convolutions together with assumption (i), and Gronwall's inequality, we easily obtain the stated estimate for some constant.
Lemma 7. Assuming condition (i) of Theorem 3, one has
Proof. Observe first the consequence of (8); hence, the corresponding estimate follows, where the constant is as indicated.
Secondly, from condition (i), we get a further bound, in which the constants involved are positive, the last one arising, thanks to (i), from the following inequality:
Similarly, an analogous bound holds for some positive constants.
Finally, by applying (32) and (35) in (30) and then using Gronwall's inequality, we find the desired estimate for some constant that depends, in particular, on the quantities above. Hence, the proof is complete.
Keeping the notations and used in the preceding proof, let us state the following lemma.
Lemma 8. Let . Then, under condition (i) of Theorem 3,
Proof. From the corresponding equations (4) and (25), we deduce that
Consequently, from (i) and as in the proof of Lemma 7, it follows that
for all , where
But (i), (28), and the dominated convergence theorem give the required convergence as the perturbation parameter tends to 0. Similarly, the analogous term converges as the parameter tends to 0. On the other hand, as done for (34), the remaining terms converge by using (i) and the dominated convergence theorem; similarly for the symmetric case.
Finally applying (43)–(45) in (41) shows that
Hence, from (40) and Gronwall's inequality, we obtain the stated convergence as the perturbation parameter tends to 0.
The following theorem contains our main variational equation, which is one of the main tools needed for deriving the maximum principle stated in Theorem 3.
Theorem 9. Suppose that (i) and (ii) in Theorem 3 hold. For each , we have
Proof. We first expand the cost difference. Note that, with the help of our assumptions, and by making use of Lemmas 6, 7, and 8 together with the dominated convergence theorem, we deduce the convergence of the first group of terms, and similarly of the second. On the other hand, applying Lemmas 6, 7, and 8 and using the continuity and boundedness assumptions in (i) and (ii) together with the dominated convergence theorem yields the limit of the remaining terms. As a result, the theorem follows from (49), (52), and (55).
Let us next introduce an important variational inequality.
Lemma 11. Under hypothesis (i) in Theorem 3, we have
Proof. The proof is done by using Yosida approximation of the operator and Itô's formula for the resulting SDEs and can be gleaned directly from the proof of Theorem 2.1 in .
We are now ready to complete the proof of Theorem 3.
Proof of Theorem 3. Recall the BSEE (10):
From Theorem 2, there exists a unique solution to it. Thereby, it remains to prove (15).
Applying (56) and (58) gives the basic identity. But, as done for (44), by using the continuity and boundedness assumptions in (i) and the dominated convergence theorem, one finds that the remainder terms vanish as the perturbation parameter tends to 0. This yields the required limit, and similarly for the remaining term. Now, by applying (62) and (63) in (60), we deduce (64). Therefore, dividing (64) by the perturbation parameter and letting it tend to 0, the following inequality holds:
Finally, (14) follows by arguing, if necessary, as in [19, page 280], for instance.
The author would like to thank the associate editor and the anonymous referee(s) for their remarks, and also for pointing out the recent work of Fuhrman et al. . This work is supported by the Science College Research Center at Qassim University, Project no. SR-D-012-1610.
- A. Al-Hussein, “Sufficient conditions of optimality for backward stochastic evolution equations,” Communications on Stochastic Analysis, vol. 4, no. 3, pp. 433–442, 2010.
- A. Al-Hussein, “Sufficient conditions for optimality for stochastic evolution equations,” Statistics & Probability Letters, vol. 83, no. 9, pp. 2103–2107, 2013.
- Y. Hu and S. Peng, “Maximum principle for optimal control of stochastic system of functional type,” Stochastic Analysis and Applications, vol. 14, no. 3, pp. 283–301, 1996.
- B. Øksendal, “Optimal control of stochastic partial differential equations,” Stochastic Analysis and Applications, vol. 23, no. 1, pp. 165–179, 2005.
- B. Øksendal, F. Proske, and T. Zhang, “Backward stochastic partial differential equations with jumps and application to optimal control of random jump fields,” Stochastics, vol. 77, no. 5, pp. 381–399, 2005.
- S. Peng, “Backward stochastic differential equations and applications to optimal control,” Applied Mathematics and Optimization, vol. 27, no. 2, pp. 125–144, 1993.
- J. Yong and X. Y. Zhou, Stochastic Controls. Hamiltonian Systems and HJB Equations, vol. 43 of Applications of Mathematics, Springer, New York, NY, USA, 1999.
- A. Al-Hussein, “Necessary conditions for optimal control of stochastic evolution equations in Hilbert spaces,” Applied Mathematics and Optimization, vol. 63, no. 3, pp. 385–400, 2011.
- A. Al-Hussein, “Backward stochastic partial differential equations driven by infinite-dimensional martingales and applications,” Stochastics, vol. 81, no. 6, pp. 601–626, 2009.
- A. Debussche, Y. Hu, and G. Tessitore, “Ergodic BSDEs under weak dissipative assumptions,” Stochastic Processes and their Applications, vol. 121, no. 3, pp. 407–426, 2011.
- S. Cerrai, Second Order PDE's in Finite and Infinite Dimension. A Probabilistic Approach, vol. 1762 of Lecture Notes in Mathematics, Springer, Berlin, Germany, 2001.
- M. Fuhrman, Y. Hu, and G. Tessitore, “Stochastic maximum principle for optimal control of SPDEs,” Comptes Rendus Mathématique. Académie des Sciences. Paris, vol. 350, no. 13-14, pp. 683–688, 2012.
- A. Al-Hussein, “Martingale representation theorem in infinite dimensions,” Arab Journal of Mathematical Sciences, vol. 10, no. 1, pp. 1–18, 2004.
- G. Da Prato and J. Zabczyk, Second Order Partial Differential Equations in Hilbert Spaces, vol. 293 of London Mathematical Society Lecture Note Series, Cambridge University Press, Cambridge, UK, 2002.
- A. Ichikawa, “Stability of semilinear stochastic evolution equations,” Journal of Mathematical Analysis and Applications, vol. 90, no. 1, pp. 12–44, 1982.
- A. Al-Hussein, “Time-dependent backward stochastic evolution equations,” Bulletin of the Malaysian Mathematical Sciences Society, vol. 30, no. 2, pp. 159–183, 2007.
- Y. Hu and S. G. Peng, “Adapted solution of a backward semilinear stochastic evolution equation,” Stochastic Analysis and Applications, vol. 9, no. 4, pp. 445–459, 1991.
- G. Tessitore, “Existence, uniqueness and space regularity of the adapted solutions of a backward SPDE,” Stochastic Analysis and Applications, vol. 14, no. 4, pp. 461–486, 1996.
- A. Bensoussan, Stochastic Control of Partially Observable Systems, Cambridge University Press, Cambridge, UK, 1992.