Mathematical Problems in Engineering


Research Article | Open Access

Volume 2014 |Article ID 534604 | 6 pages | https://doi.org/10.1155/2014/534604

Existence of the Optimal Control for Stochastic Boundary Control Problems Governed by Semilinear Parabolic Equations

Academic Editor: Quanxin Zhu
Received: 02 Jul 2014
Accepted: 21 Aug 2014
Published: 28 Sep 2014

Abstract

We study an optimal control problem governed by a semilinear parabolic equation, whose control variable is contained only in the boundary condition. An existence theorem for the optimal control is obtained.

1. Introduction

Control theory is a mathematical description of how to act optimally to gain future rewards. Since the necessary conditions for optimal control problems were established for deterministic control systems by Pontryagin’s group [1] in the 1950s and 1960s, much work has been done not only on the deterministic case but also on the stochastic case. For the deterministic case of optimal control problems governed by partial differential equations, see the classical book by Lions [2] from 1971; for the stochastic case, see [3–8] and the references therein. Boundary control problems for stochastic partial differential equations have developed rapidly and been very active in recent years, including boundary exact controllability [9, 10] and the maximum principle for boundary control [11].

The kind of boundary control problem considered in this paper has been extensively studied for deterministic control systems; see, for example, Chai [12]. In the stochastic case, however, as far as we know, there are only a few results for this problem. One of the difficulties is that the properties of the solution of the state equation are less well understood than in the deterministic case.

In many papers, the authors simply assume that optimal controls exist for their problems without proving existence. In practice, one often wants to use only the necessary conditions to find an optimal control, but existence for the control problem is equally important. Some useful results in this direction have been obtained, for instance, in [13–16]; for control problems governed by semilinear parabolic equations, see [17, 18]. The necessary conditions for our problem were worked out by a member of our team in [19]. Therefore, in this paper, we only consider the existence of the optimal control, as a complement to [19].

The rest of this paper is organized as follows. Section 2 gives a general formulation of our stochastic optimal control problem. In Section 3, we prove two important lemmas. The existence result for the optimal control is given in Section 4.

2. Preliminaries

Let be a complete probability space, equipped with a right-continuous filtration , on which a -dimensional mutually independent standard Brownian motion is defined. Moreover, we assume that is the augmentation of by all the -null sets of . For a fixed , we denote the -algebra of predictable sets on by . Let be an open bounded subset of with a smooth boundary . Let and , and let stand for the -algebra of all Borel subsets of , where is a topological space. denotes the corresponding space of square-integrable functions, and denotes the Sobolev space . The positive constants and may differ from place to place throughout this paper.
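The symbols of this standard setting did not survive extraction; a plausible symbolic form, with illustrative notation that is not necessarily the authors’ own, is the following:

```latex
% Illustrative reconstruction of the standard setting (all symbols assumed):
\[
  \bigl(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge 0},P\bigr), \qquad
  W(\cdot)=\bigl(W^1(\cdot),\dots,W^d(\cdot)\bigr)
  \ \text{a $d$-dimensional standard Brownian motion},
\]
\[
  G\subset\mathbb{R}^n \ \text{open and bounded with smooth boundary }\partial G,
  \qquad Q=(0,T)\times G, \quad \Sigma=(0,T)\times\partial G .
\]
```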

Let be the set of satisfying the following:
(1) is -measurable and -adapted;
(2) , where denotes the usual -dimensional Lebesgue measure on .

The set of admissible controls is a convex, closed subset of .

Now we consider the following stochastic distributed control system with -valued control processes in : where (-a.s.) is -adapted, and the differential operators , are defined as follows: where denotes the outward unit normal vector to at the point . See [12] for more details on the Neumann boundary condition in the trace sense. , are -valued functions and is a -valued function.
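The displayed state equation was also lost in extraction. A typical semilinear parabolic equation with Neumann boundary control of the kind described here takes the following form; the symbols are illustrative assumptions, not the authors’ exact equation (1):

```latex
% A plausible form of the controlled state equation (symbols illustrative):
\[
\begin{aligned}
  dy(t,x) &= \bigl[\mathcal{A}y(t,x)+f\bigl(t,x,y(t,x)\bigr)\bigr]\,dt
            + g\bigl(t,x,y(t,x)\bigr)\,dW(t), && (t,x)\in(0,T]\times G,\\
  \frac{\partial y}{\partial\nu_{\mathcal A}}(t,x)
          &= b\bigl(t,x,u(t,x)\bigr), && (t,x)\in(0,T]\times\partial G,\\
  y(0,x)  &= y_0(x), && x\in G,
\end{aligned}
\]
% with a second-order divergence-form operator and its conormal derivative:
\[
  \mathcal{A}\varphi=\sum_{i,j=1}^{n}\partial_{x_i}\!\bigl(a_{ij}(t,x)\,
  \partial_{x_j}\varphi\bigr), \qquad
  \frac{\partial\varphi}{\partial\nu_{\mathcal A}}
  =\sum_{i,j=1}^{n}a_{ij}(t,x)\,\partial_{x_j}\varphi\;\nu_i(x).
\]
```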

The cost functional is defined as follows: where denotes the adapted solution of the state equation (1) corresponding to , and , , are -valued functions.
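A representative cost functional of the type described (a running cost in the domain, a boundary cost depending on the control, and a terminal cost) would read as follows, with illustrative symbols:

```latex
% Representative cost functional (symbols illustrative, not the authors'):
\[
  J(u) \;=\; E\biggl[\int_0^T\!\!\int_G \ell\bigl(t,x,y(t,x)\bigr)\,dx\,dt
      \;+\; \int_0^T\!\!\int_{\partial G} h\bigl(t,x,y(t,x),u(t,x)\bigr)\,d\sigma(x)\,dt
      \;+\; \int_G \gamma\bigl(x,y(T,x)\bigr)\,dx\biggr].
\]
```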

From the above content, our optimal control problem can be stated as follows.

Problem (P). Find a such that Any satisfying the above identity is called an optimal control, the corresponding state is called an optimal trajectory, and is called an optimal pair.
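In symbols, Problem (P) amounts to the following minimization over the admissible set (notation assumed for illustration):

```latex
% The optimal control problem in symbolic form (notation assumed):
\[
  J(\bar u)\;=\;\inf_{u\in\,\mathcal{U}_{ad}} J(u),
  \qquad \bar u\in\mathcal{U}_{ad}.
\]
```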

We assume that the following conditions hold.

() The coefficients , , , , , and are measurable in , with values in the set of real symmetric matrices, , , , , and , respectively. The real function is -measurable and , where is a positive constant. Moreover, the functions , , , , , and are all bounded by in absolute value. Furthermore, the matrix is uniformly positive definite; that is, for a constant .

() Consider , , and to be functions which satisfy the following properties:
(i) is -measurable, is -measurable, and is -measurable;
(ii) and ;
(iii) there exists a positive constant such that, for , we have uniformly in , and uniformly in and for almost every .

() Consider , , and to be maps such that the following conditions are satisfied:
(i) is -measurable, is -measurable, and is -measurable;
(ii) and for every and ;
(iii) , are continuous and continuously differentiable with respect to the state , and is continuous and continuously differentiable with respect to the state and the control variable ; moreover, there exists a constant such that

3. Some Basic Lemmas

First, we give the definition of a solution of the state equation (1).

Definition 1. We say that a function is a weak solution of the state equation (1) if it satisfies for every and almost every .
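The weak formulation referred to here can be sketched as follows for a divergence-form operator; the symbols are illustrative assumptions consistent with a Neumann boundary control problem, not the authors’ exact identity:

```latex
% Sketch of a weak formulation against a test function (symbols assumed):
\[
\begin{aligned}
  \int_G y(t)\,\varphi\,dx - \int_G y_0\,\varphi\,dx
  \;=\;& -\int_0^t\!\!\int_G a\,\nabla y\cdot\nabla\varphi\,dx\,ds
         + \int_0^t\!\!\int_G f\bigl(s,x,y\bigr)\,\varphi\,dx\,ds \\
       & + \int_0^t\!\!\int_{\partial G} b\bigl(s,x,u\bigr)\,\varphi\,d\sigma(x)\,ds
         + \int_0^t\!\!\int_G g\bigl(s,x,y\bigr)\,\varphi\,dx\,dW(s)
\end{aligned}
\]
for every \(\varphi\in H^1(G)\) and almost every \(t\in[0,T]\), \(P\)-a.s.
```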

Lemma 2 (Yu and Liu [19]). Let the conditions () and () hold; then the state equation (1) has a unique (weak) solution for every control . Moreover, there exists a constant such that
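A typical well-posedness estimate of the kind asserted in Lemma 2 has the following shape; the norms and symbols are illustrative, not the authors’ exact bound:

```latex
% Illustrative a priori estimate for the weak solution (symbols assumed):
\[
  E\Bigl[\sup_{0\le t\le T}\|y(t)\|_{L^2(G)}^2\Bigr]
  \;+\; E\!\int_0^T \|y(t)\|_{H^1(G)}^2\,dt
  \;\le\; C\Bigl(1+\|y_0\|_{L^2(G)}^2
        + E\!\int_0^T\!\!\int_{\partial G}|u(t,x)|^2\,d\sigma(x)\,dt\Bigr).
\]
```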

Remark 3. We can see that, by Lemma 2, the cost functional can be denoted by for simplicity.

Lemma 4. Let one designate by the solution of the state equation (1) corresponding to and suppose that weakly in . Then, for any fixed , one has

Proof. From the state equation (1), we have By the definition of the operators , , we apply Itô’s formula to , and then we get By , it follows that For any fixed , Thanks to Lemma 2 and because the embedding is compact, we obtain From conditions –, using Young’s inequality, it follows that where is a constant.

Hence, our conclusion follows.

4. The Existence of the Optimal Control

In this section, we give our main result based on the above lemmas.

Theorem 5 (existence of the optimal control). One assumes that hold, together with the following additional conditions:
(i) the function is convex for each ;
(ii) there exists a such that the set is bounded in .
Then there exists at least one optimal control for Problem (P).

Proof. Thanks to the conditions and Lemma 2, it is easy to verify that is finite for each .
Now we fix a and choose a minimizing sequence for . Let be the corresponding states, that is, the solutions of (1) corresponding to . By condition (ii), there exists a constant such that Hence, we may assume that for some (because is a closed, convex set).
Combining the conclusions of Lemmas 2 and 4, we can deduce that
On the other hand, by Mazur's theorem (see, for instance, [20]), we can find a sequence of convex combinations ; that is, such that
Now, using the convexity of with respect to , by the dominated convergence theorem and condition , we obtain That means is an optimal control for Problem (P).
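The Mazur step used in the proof above can be written out as follows; the indices and symbols are illustrative:

```latex
% Mazur's theorem: convex combinations of a weakly convergent sequence
% converge strongly to the weak limit (indices illustrative):
\[
  v_n=\sum_{k=n}^{N(n)}\alpha_k^{(n)}u_k, \qquad
  \alpha_k^{(n)}\ge 0, \quad \sum_{k=n}^{N(n)}\alpha_k^{(n)}=1, \qquad
  v_n\longrightarrow \bar u \ \text{strongly in } L^2 .
\]
```

Since the admissible set is convex, each convex combination remains admissible, which is what makes this step compatible with the constraint.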

Remark 6. In this paper, we have given a new method to study the existence of optimal controls for stochastic control problems. We also see that, compared to the necessary conditions, additional assumptions are needed to obtain the existence result, so existence problems are generally harder to handle.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was partially supported by the Special Fund of Basic Scientific Research of Central Colleges (Grant no. CZQ14021) and by the Teaching Research Fund of South-Central University for Nationalities (JYX13023).

References

1. L. S. Pontryagin, V. G. Boltyanskii, R. V. Gamkrelidze, and E. F. Mishchenko, The Mathematical Theory of Optimal Control Processes, Interscience, John Wiley & Sons, New York, NY, USA, 1962.
2. J. L. Lions, Optimal Control of Systems Governed by Partial Differential Equations, Springer, 1971.
3. A. Bensoussan, “Stochastic maximum principle for distributed parameter systems,” Journal of the Franklin Institute, vol. 315, no. 5-6, pp. 387–406, 1983.
4. D. P. Bertsekas, Dynamic Programming and Optimal Control, Athena Scientific, Belmont, Mass, USA, 2nd edition, 2000.
5. U. G. Haussmann, “General necessary conditions for optimal control of stochastic systems,” Mathematical Programming Study, no. 6, pp. 30–48, 1976.
6. H. J. Kappen, “An introduction to stochastic control theory, path integrals and reinforcement learning,” in Proceedings of the 9th Granada Seminar on Computational Physics, pp. 149–181, American Institute of Physics, 2007.
7. J. Yong and X. Y. Zhou, Stochastic Controls: Hamiltonian Systems and HJB Equations, vol. 43 of Applications of Mathematics, Springer, New York, NY, USA, 1999.
8. X. Y. Zhou, “On the necessary conditions of optimal controls for stochastic partial differential equations,” SIAM Journal on Control and Optimization, vol. 31, no. 6, pp. 1462–1478, 1993.
9. I. Lasiecka and R. Triggiani, “Exact controllability of the wave equation with Neumann boundary control,” Applied Mathematics and Optimization, vol. 19, no. 3, pp. 243–290, 1989.
10. T. T. Li, “Exact controllability for quasilinear hyperbolic systems and its application to unsteady flows in a network of open canals,” Mathematical Methods in the Applied Sciences, vol. 27, no. 9, pp. 1089–1114, 2004.
11. E. Casas, “Pontryagin's principle for state-constrained boundary control problems of semilinear parabolic equations,” SIAM Journal on Control and Optimization, vol. 35, no. 4, pp. 1297–1327, 1997.
12. S. Chai, “Stabilization of thermoelastic plates with variable coefficients and dynamical boundary control,” Indian Journal of Pure and Applied Mathematics, vol. 36, no. 5, pp. 227–249, 2005.
13. R. Buckdahn, B. Labed, C. Rainer, and L. Tamer, “Existence of an optimal control for stochastic control systems with nonlinear cost functional,” Stochastics, vol. 82, no. 3, pp. 241–256, 2010.
14. M. H. A. Davis, “On the existence of optimal policies in stochastic control,” SIAM Journal on Control and Optimization, vol. 11, pp. 587–594, 1973.
15. N. El Karoui, D. Huu Nguyen, and M. Jeanblanc-Picqué, “Compactification methods in the control of degenerate diffusions: existence of an optimal control,” Stochastics, vol. 20, no. 3, pp. 169–219, 1987.
16. C. B. Wan and M. H. A. Davis, “Existence of optimal controls for stochastic jump processes,” SIAM Journal on Control and Optimization, vol. 17, no. 4, pp. 511–524, 1979.
17. E. Casas, L. A. Fernández, and J. M. Yong, “Optimal control of quasilinear parabolic equations,” Proceedings of the Royal Society of Edinburgh A: Mathematics, vol. 125, no. 3, pp. 545–565, 1995.
18. T. I. Seidman and H. Zhou, “Existence and uniqueness of optimal controls for a quasilinear parabolic equation,” SIAM Journal on Control and Optimization, vol. 20, no. 6, pp. 747–762, 1982.
19. H. Yu and B. Liu, “Optimality conditions for stochastic boundary control problems governed by semilinear parabolic equations,” Journal of Mathematical Analysis and Applications, vol. 395, no. 2, pp. 654–672, 2012.
20. I. Ekeland and R. Temam, Analyse Convexe et Problèmes Variationnels, Dunod, Paris, France, 1974.

Copyright © 2014 Weifeng Wang and Baobin Wang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
