Abstract and Applied Analysis
Volume 2014 (2014), Article ID 623129, 14 pages
Optimal Control Problem with Necessary Optimality Conditions for the Viscous Dullin-Gottwald-Holm Equation
Department of Mathematics Education, Daegu University, Jillyang, Gyeongsan, Gyeongbuk 712-714, Republic of Korea
Received 20 September 2013; Revised 5 January 2014; Accepted 7 January 2014; Published 24 February 2014
Academic Editor: Ziemowit Popowicz
Copyright © 2014 Jinsoo Hwang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
We study quadratic cost optimal control problems for the viscous Dullin-Gottwald-Holm equation. The main novelty of this paper is to derive the necessary optimality conditions of optimal controls corresponding to physically meaningful distributive observations. For this we prove the Gâteaux differentiability of the nonlinear solution mapping with respect to the control variables. Moreover, by making use of the second-order Gâteaux differentiability of the solution mapping, we prove the local uniqueness of the optimal control. This is another novelty of the paper.
Recently, in the study of shallow water waves, Dullin et al.  derived a new integrable shallow water wave equation with linear and nonlinear dispersion as follows:
\[
u_t - \alpha^2 u_{xxt} + c_0 u_x + 3 u u_x + \gamma u_{xxx} = \alpha^2 \left( 2 u_x u_{xx} + u u_{xxx} \right), \tag{1}
\]
where \(u\) is the fluid velocity, \(\alpha^2\) and \(\gamma/c_0\) are squares of length scales, and \(c_0\) is the linear wave speed for undisturbed water at rest at spatial infinity, where \(u\) and its spatial derivatives are taken to vanish. Letting \(\alpha^2 = 0\), (1) reduces to the well-known Korteweg-de Vries (KdV) equation (linear dispersion case), and letting \(\gamma = 0\), (1) reduces to the Camassa-Holm equation of  (nonlinear dispersion case). Usually we can rewrite (1) as
\[
m_t + c_0 u_x + u m_x + 2 u_x m + \gamma u_{xxx} = 0, \tag{2}
\]
where \(m = u - \alpha^2 u_{xx}\) is a momentum variable. Physically, the third and fourth terms on the left side of (2) represent the convection and stretching effects, respectively, of unidirectional propagation of shallow water waves over a flat bottom (see [2, 3]).
Recently, Shen et al.  studied the optimal control problem of the viscous DGH equation (3) (cf. ), where \(\varepsilon > 0\) stands for the viscosity constant of the shallow water wave. As explained in Holm and Staley , a small viscosity makes sense in order to take into account the balance, or relaxation, between the convection and stretching dynamics of shallow water waves.
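Since the displayed form of (3) was lost in extraction, the following numerical sketch makes the setting concrete. It integrates a viscous DGH equation in momentum form on a periodic domain, assuming the viscosity enters through an \(\varepsilon m_{xx}\) term (one convention in the viscous DGH literature, not necessarily the exact form used in this paper); the domain, scheme, and all parameter values are our illustrative choices:

```python
import numpy as np

def solve_viscous_dgh(u0, T=1.0, L=2*np.pi, n=256, dt=1e-3,
                      alpha=1.0, c0=0.1, gamma=0.1, eps=0.01):
    """Integrate a viscous DGH equation on a periodic domain.

    Momentum form (assumed convention):
        m_t + c0*u_x + u*m_x + 2*u_x*m + gamma*u_xxx = eps*m_xx,
        m = u - alpha^2 * u_xx.
    Spectral derivatives in x, explicit RK4 in t.
    """
    x = np.linspace(0, L, n, endpoint=False)
    k = np.fft.fftfreq(n, d=L/n) * 2*np.pi      # angular wavenumbers
    ik = 1j * k
    helm = 1.0 + (alpha*k)**2                    # symbol of (1 - alpha^2 d_xx)

    def rhs(m):
        mh = np.fft.fft(m)
        uh = mh / helm                           # recover u from m
        u = np.real(np.fft.ifft(uh))
        ux = np.real(np.fft.ifft(ik*uh))
        mx = np.real(np.fft.ifft(ik*mh))
        uxxx = np.real(np.fft.ifft(ik**3 * uh))
        mxx = np.real(np.fft.ifft(ik**2 * mh))
        return -(c0*ux + u*mx + 2*ux*m + gamma*uxxx) + eps*mxx

    m = np.real(np.fft.ifft(helm * np.fft.fft(u0(x))))   # m0 = u0 - alpha^2 u0''
    for _ in range(int(T/dt)):                           # classical RK4
        k1 = rhs(m); k2 = rhs(m + 0.5*dt*k1)
        k3 = rhs(m + 0.5*dt*k2); k4 = rhs(m + dt*k3)
        m = m + dt/6*(k1 + 2*k2 + 2*k3 + k4)
    u = np.real(np.fft.ifft(np.fft.fft(m)/helm))
    return x, u
```

With a small, smooth initial profile the explicit step size above is stable; for sharper data an implicit treatment of the dissipative and dispersive terms would be preferable.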
In , Shen et al. studied the distributive optimal control problems of (3) (cf. ). For this purpose, they modified (3) to a Dirichlet boundary value problem on a short interval and proved the existence and uniqueness of in (3) by a weak formulation. However, the well-posedness of (3) with respect to is unclear, and the proof contained in  relies on the size of the viscosity constant, which is an extra condition. Further, in  they employed the quadratic cost objective functional, to be minimized within an admissible control set, with the distributive observation of in (3), and discussed only the existence of optimal controls which minimize the quadratic cost. The necessary optimality conditions of optimal controls were not studied there.
As for the necessary optimality conditions of optimal controls, we can find the recent paper of Sun . By employing the Dubovitskii and Milyutin functional analytic approach, Sun established in [9, Theorem 3] the Pontryagin maximum principle of the optimal control for the viscous DGH equation with a quite general cost which depends on and not on . In this paper, by contrast, we propose the quadratic cost functional for , which is actually more reasonable than that for , and we establish the necessary optimality conditions of optimal controls due to Lions  in Theorems 5 and 7 for the physically meaningful observations and , respectively. To this end, we successfully characterize the Gâteaux derivative of in the direction , where is a Hilbert space of control variables and is an optimal control for the quadratic cost.
Actually, the extension of optimal control theory to quasilinear equations is not easy. Some research has been devoted to the study of optimal control or identification problems for specific quasilinear equations; for instance, we can refer to Hwang and Nakagiri [11, 12] and Hwang [13, 14].
The aim of this paper can be summarized as follows. Firstly, we clarify the well-posedness of (3) with respect to , in the Hadamard sense, with an appropriate initial value condition on a short interval, as posed in . Secondly, based on the well-posedness result, we extend the optimal control theory due to Lions , with emphasis on deriving necessary optimality conditions of optimal controls, in the following distributive control system: where , is a forcing term, is a controller, is a control, and denotes the state for a given .
In order to apply the variational approach due to Lions  to our problem, we propose the quadratic cost functional, as studied in Lions , which is to be minimized within ; here is an admissible set of control variables in . We show the existence of an optimal control which minimizes the quadratic cost functional . Then, we establish the necessary conditions of optimality of the optimal control for some physically meaningful observation cases by employing the associated adjoint systems. For this we successfully prove the Gâteaux differentiability of the nonlinear solution mapping , which is used to define the associated adjoint systems.
Moreover, in this paper we discuss the local uniqueness of the optimal control. As is widely known, it is difficult to verify the uniqueness of optimal controls in nonlinear control problems. By employing the idea given in Ryu , we show the strict convexity of the quadratic cost on a local time interval by utilizing the second-order Gâteaux differentiability of the nonlinear solution mapping . Having proved the strict convexity of the quadratic cost with respect to the control variable, we obtain the local uniqueness of the optimal control. This is another novelty of the paper.
For fixed , we set and . The scalar products and norms on and , , are denoted by , and , , , respectively. Then, by virtue of the Poincaré inequality, we can replace these scalar products and norms by and , , respectively. Let us denote the topological dual spaces of , , by , . We denote the duality pairing between and by , .
We consider the following Dirichlet boundary value problem for the viscous Dullin-Gottwald-Holm (DGH) equation: where , is a forcing function, and is an initial value.
In order to define weak solutions of (5), we introduce some Hilbert spaces. First, is defined by endowed with the norm where denotes the first-order distributional derivatives of . Further, is defined by endowed with the norm where denotes the first-order distributional derivatives of . We note here that and are continuously imbedded in and , respectively (cf. Dautray and Lions [16, page 555]).
From now on, we will omit writing the integration variables in definite integrals when no confusion can arise.
Lemma 1. Let satisfy the boundary conditions of (5) and . Then one has where is a constant.
Proof. According to the boundary conditions of , we have
Since is an isomorphism, we can deduce that where are constants. This proves the lemma.
The following variational formulation is used to define the weak solution of (5).
Definition 2. A function is said to be a weak solution of (5) if and satisfies
Theorem 3. Assume that , , and . Then problem (5) has a unique solution in , and the solution mapping of into is locally Lipschitz continuous; that is, for each and , one has the inequality where is a constant depending on and .
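The symbols of estimate (14) were lost in extraction. In the standard notation for such results, writing \(u(f, u_0)\) for the weak solution with forcing \(f\) and initial value \(u_0\), the local Lipschitz estimate takes a form like the following (the norms shown are indicative of the spaces in Definition 2, not a verbatim restoration):

```latex
\left\| u(f_1, u_0^1) - u(f_2, u_0^2) \right\|_{S(0,T)}
\le C \left( \left\| f_1 - f_2 \right\|_{L^2(0,T;L^2)}
           + \left\| u_0^1 - u_0^2 \right\|_{H_0^1} \right),
```

where \(S(0,T)\) denotes the solution space of Definition 2 and \(C\) depends on the data.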
Remark 4. In , the well-posedness of (5) is partially verified, namely, in the case that the viscosity constant is large enough. However, as we will see in the Appendix, such an extra assumption can be removed.
Proof of Theorem 3. By utilizing the result of , combined with Lemma 1, we see that (5) possesses a unique solution under the data condition .
Based on the above result, we prove inequality (14). For that purpose, we denote by and by . Then we can observe from (5) that where and , . Multiplying both sides of (15) by , we have
Integrating (16) over , we have . By the Sobolev embedding theorem, , , and , the right members of (17) can be estimated as follows: where are constants. We replace the right-hand side of (17) by the right members of (18). Then we have where is a constant depending on and . Applying the Gronwall inequality to (19), we obtain where is a constant depending on and . Next we estimate in (15) as follows: By the Sobolev embedding theorem, we have the following inequality: where , are constants depending on and . By (21) and (22) we obtain where , are constants depending on and . Then (20) and (23) imply where is a constant depending on and . Finally, from Lemma 1 and (24), we have where , are constants depending on and . This completes the proof.
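The key tool in the estimates above is the Gronwall inequality. The following sketch, a scalar toy illustration unrelated to the specific estimates (17)–(25), integrates the extremal case \(y' = ay + b\) with constant \(a, b \ge 0\) and compares it against the closed-form Gronwall bound \(y(t) \le (y_0 + bt)e^{at}\) (all names and parameter values are ours):

```python
import numpy as np

def gronwall_check(a=1.5, b=0.7, y0=0.2, T=2.0, n=20000):
    """Illustrate the Gronwall inequality with constant coefficients:
    if y' <= a*y + b with a, b >= 0, then y(t) <= (y0 + b*t)*exp(a*t).
    We integrate the extremal case y' = a*y + b by RK4 and compare."""
    dt = T/n
    t = np.linspace(0.0, T, n+1)
    y = np.empty(n+1); y[0] = y0
    f = lambda z: a*z + b
    for i in range(n):                      # classical RK4 step
        k1 = f(y[i]); k2 = f(y[i] + 0.5*dt*k1)
        k3 = f(y[i] + 0.5*dt*k2); k4 = f(y[i] + dt*k3)
        y[i+1] = y[i] + dt/6*(k1 + 2*k2 + 2*k3 + k4)
    bound = (y0 + b*t) * np.exp(a*t)        # Gronwall upper bound
    return t, y, bound
```

The exact solution \((y_0 + b/a)e^{at} - b/a\) indeed stays below the bound for all \(t \ge 0\), with equality at \(t = 0\).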
3. Quadratic Cost Optimal Control Problems
In this section we study the quadratic cost optimal control problems for the viscous DGH equation following the theory of Lions . Let be a Hilbert space of control variables, and let be an operator, called a controller. We consider the following nonlinear control system: where , , , and is a control. By virtue of Theorem 3 and (26), we can uniquely define the solution map of into . We call the solution of (27) the state of the control system (27). The observation of the state is assumed to be given by where is an operator, called the observer, and is a Hilbert space of observation variables. The quadratic cost function associated with the control system (27) is given by where is a desired value of and is symmetric and positive; that is, for some . Let be a closed convex subset of , called the admissible set. An element which attains the minimum of over is called an optimal control for the cost (29).
In this section we will characterize the optimal controls by giving necessary conditions for optimality. For this it is necessary to write down the necessary optimality condition (31) and to analyze it in view of the proper adjoint state system, where denotes the Gâteaux derivative of at . We also study the local uniqueness of the optimal control.
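The displays accompanying (29)–(31) were stripped in extraction. In Lions' standard notation (our symbol choices are indicative: \(C\) the observer, \(N\) the symmetric positive operator, \(\mathcal{U}_{ad}\) the admissible set), the quadratic cost and the necessary optimality condition read:

```latex
J(v) = \left\| C u(v) - z_d \right\|_M^2 + (N v, v)_{\mathcal{U}},
\qquad (N v, v)_{\mathcal{U}} \ge d \, \| v \|_{\mathcal{U}}^2 \quad (d > 0);
\qquad D J(v^*)(v - v^*) \ge 0 \quad \text{for all } v \in \mathcal{U}_{ad},
```

where \(DJ(v^*)\) denotes the Gâteaux derivative of \(J\) at the optimal control \(v^*\).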
As indicated in Section 1, we show the existence of an optimal control and give its characterizations.
3.1. Existence of the Optimal Control
Now we show the existence of an optimal control for the cost (29).
Proof. Set . Since is nonempty, there is a sequence in such that
Obviously is bounded in . Then by (30) there exists a constant such that
This shows that is bounded in . Since is closed and convex, we can choose a subsequence (denoted again by ) of and find a such that
as . From now on, each state corresponding to is the solution of
where . By (33) the term is estimated as
Hence we can deduce from Theorem 3 that
for some . And also from Lemma 1 we can know that
for some . Therefore, by Rellich's extraction theorem, we can find a subsequence of , say again , and find a such that
By using the fact that is compact and by virtue of (39), we can apply the Aubin-Lions-Temam compact imbedding theorem (cf. Temam [17, page 271]) to verify that is precompact in . Hence there exists a subsequence such that
Since , we know that . And from (40) we can choose a subsequence of , denoted again by such that
We use (39)–(41) and apply the Lebesgue dominated convergence theorem to obtain as . We replace and by and , respectively, and take in (35). Then, by the standard argument in Dautray and Lions [16, pages 561–565], we conclude that the limit satisfies in the weak sense, where . Moreover, the uniqueness of weak solutions of (43), via Theorem 3, enables us to conclude that in , which implies weakly in . Since is continuous on and is lower semicontinuous, it follows that . It is also clear from that . Hence . But since by definition, we conclude that . This completes the proof.
3.2. Gâteaux Differentiability of Solution Mapping
In order to characterize the optimal control which satisfies the necessary optimality condition (31), we need to prove the Gâteaux differentiability of the solution mapping of .
Definition 6. The solution map of into is said to be Gâteaux differentiable at if for any there exists a such that The operator denotes the Gâteaux derivative of at and the function is called the Gâteaux derivative in the direction , which plays an important role in the nonlinear optimal control problem.
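The definition can be checked numerically on a toy model. The sketch below (the model equation and all names are our illustrative choices, not the DGH system) takes the solution map \(v \mapsto u\) of the scalar ODE \(u' = -u^3 + v(t)\), solves the linearized equation for the Gâteaux derivative \(z\) in a direction \(w\), and compares \(z\) with the difference quotient \((u(v + \lambda w) - u(v))/\lambda\):

```python
import numpy as np

def solve_state(v, T=1.0, n=2000):
    """Toy nonlinear solution map v -> u for u' = -u^3 + v(t), u(0) = 0
    (forward Euler; a stand-in for the DGH solution map)."""
    dt = T/n
    t = np.linspace(0.0, T, n+1)
    u = np.zeros(n+1)
    for i in range(n):
        u[i+1] = u[i] + dt*(-u[i]**3 + v(t[i]))
    return t, u

def solve_linearized(u, w, t):
    """Gateaux derivative z in direction w: z' = -3*u^2*z + w, z(0) = 0."""
    z = np.zeros_like(u)
    dt = t[1] - t[0]
    for i in range(len(t)-1):
        z[i+1] = z[i] + dt*(-3*u[i]**2*z[i] + w(t[i]))
    return z

v = lambda s: np.sin(2*np.pi*s)        # control
w = lambda s: np.cos(np.pi*s)          # direction
t, u = solve_state(v)
z = solve_linearized(u, w, t)
lam = 1e-4
_, u_pert = solve_state(lambda s: v(s) + lam*w(s))
quotient = (u_pert - u)/lam            # (u(v + lam*w) - u(v))/lam
err = np.max(np.abs(quotient - z))
```

As \(\lambda \to 0\) the maximum discrepancy `err` shrinks linearly with \(\lambda\), consistent with Definition 6.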
Theorem 7. The map of into is Gâteaux differentiable at , and the Gâteaux derivative of at in the direction , say , is the unique solution of the following problem: where and .
Proof. Let , . We set and
where and .
By the continuity estimate (14), we have where is a constant depending on and . Hence we have . Therefore, we can infer that there exist a and a sequence tending to such that as . Since the imbedding is compact ([17, page 271]), it follows from (52) that for some tending to as . Whence, by (50)–(53) and the Lebesgue dominated convergence theorem, we can easily show that as , where and . We can also deduce from (49), (52), and (56) that as .
Hence we can see from (52)–(57) that weakly in as , in which is a solution of (47). This convergence can be improved by showing the strong convergence of , also in the topology of .
Subtracting (47) from (49) and denoting by , we obtain that where and .
From (54) and (55) we know that . In order to estimate , we multiply both sides of (58) by and integrate over to have . The integral parts of the right member of (60) can be estimated as follows: where , , and are constants. We replace the right-hand side of (62) by the right members of (61)–(65) and apply the Gronwall inequality to the resulting inequality; then we arrive at where is a constant. By virtue of (59) and (66) we deduce that . As in (21)–(23), we have from (58), (66), and (67) that . Therefore (67) and (68) imply . Whence, from Lemma 1, . This completes the proof.
3.3. Necessary Condition of Optimal Control
In this section we will characterize the optimal controls by giving the necessary condition (71) for optimality for the following physically meaningful observations:
(1) we take and and observe ;
(2) we take and and observe .
Since by Theorem 3, the above observations are meaningful.
Following Lions , we construct the necessary condition of optimal control via an appropriate adjoint equation. To follow this idea we need to introduce and analyze the following adjoint equation for distributive observations: where or , , and . In order to show the well-posedness of (74), we introduce the solution Hilbert space defined by , equipped with the norm . We remark that is continuously embedded in (cf. Dautray and Lions [16, page 555]).
In the following proposition we show the well-posedness of (74).
Proposition 8. Assume that ; then by reversing the direction of time , (74) admits a unique solution satisfying where and .
The proof of Proposition 8 is given in the Appendix.
Remark 9. As we will see, there are some merits in taking as the solution space for adjoint equations. For the observation (72), even though we can pose the adjoint system in the space with additional boundary conditions, we can derive the same necessary optimality condition of optimal controls through the less regular solution . Therefore, is the preferred solution space of the adjoint equation for the observation (72). For the observation (73), we need to solve the adjoint equation in because the data condition is less regular than that of the observation (72).
3.3.1. Case of the Observation (72)
In this subsection we consider the cost functional expressed by where is a desired value. Let be the optimal control subject to (27) and (78). Then the optimality condition (71) is represented by where is the solution of (47).
Now we will formulate the following adjoint system to describe the optimality condition: where and .
Now we proceed with the calculations. We multiply both sides of the weak form of (80) by and integrate over . Then we have where . By (47) for , we can verify by integration by parts that the left-hand side of (81) yields where . Therefore, by (81) and (82), we can deduce that the optimality condition (79) is equivalent to . Hence we obtain the following theorem.
Theorem 11. The optimal control for (78) is characterized by the following system of equations and inequality: where and .
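The structure of Theorem 11 — state equation, adjoint equation, and an optimality relation built from the adjoint state — can be illustrated on a toy finite-dimensional linear-quadratic analogue. The sketch below (the scalar dynamics, discretization, and all names are our illustrative choices, not the DGH adjoint system (80)) computes the gradient of a discrete quadratic cost via the exact discrete adjoint; since the cost is quadratic in the control, a central difference quotient reproduces it to roundoff:

```python
import numpy as np

a, nu, T, n = 1.0, 1e-2, 1.0, 200
dt = T/n
t = np.linspace(0.0, T, n+1)
z = np.sin(np.pi*t)                          # desired state (illustrative)

def forward(v):
    """State equation: u_{i+1} = (1 - a*dt)*u_i + dt*v_i, u_0 = 0."""
    u = np.zeros(n+1)
    for i in range(n):
        u[i+1] = (1 - a*dt)*u[i] + dt*v[i]
    return u

def cost(v):
    """Quadratic cost: tracking term plus control penalty."""
    u = forward(v)
    return dt*np.sum((u[1:] - z[1:])**2) + nu*dt*np.sum(v**2)

def gradient(v):
    """Gradient of cost via the exact discrete adjoint:
       p_n = 2*dt*(u_n - z_n),
       p_j = (1 - a*dt)*p_{j+1} + 2*dt*(u_j - z_j),
       grad_i = dt*(2*nu*v_i + p_{i+1})."""
    u = forward(v)
    p = np.zeros(n+1)
    p[n] = 2*dt*(u[n] - z[n])
    for j in range(n-1, 0, -1):
        p[j] = (1 - a*dt)*p[j+1] + 2*dt*(u[j] - z[j])
    return dt*(2*nu*v + p[1:])
```

The variational inequality of Theorem 11 then says that, at an optimal control in a closed convex admissible set, this gradient makes a nonnegative inner product with every admissible direction.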
3.3.2. Case of the Observation (73)
We consider the following distributive cost functional for the momentum: where and . Let be the optimal control subject to (27) and (87). Then the optimality condition (71) is rewritten as where and is the solution of (47). As before, we formulate the following adjoint system to describe the optimality condition: where and .