Special Issue: Dynamics, Control, and Optimization with Applications
Research Article | Open Access
Eun-Young Ju, Jin-Mun Jeong, "Optimal Control Problems for Nonlinear Variational Evolution Inequalities", Abstract and Applied Analysis, vol. 2013, Article ID 724190, 10 pages, 2013. https://doi.org/10.1155/2013/724190
Optimal Control Problems for Nonlinear Variational Evolution Inequalities
We deal with optimal control problems governed by semilinear parabolic equations, in particular those described by variational inequalities. We also characterize the optimal controls by giving necessary conditions for optimality, proving the Gâteaux differentiability of the solution mapping with respect to the control variables.
In this paper, we deal with optimal control problems governed by the following variational inequality in a Hilbert space H:

(x′(t) + Ax(t) − f(t, x(t)) − Bu(t), x(t) − z) + φ(x(t)) − φ(z) ≤ 0 for all z ∈ V, a.e. 0 < t ≤ T,
x(0) = x₀. (1)

Here, A is a continuous linear operator from V into V* which is assumed to satisfy Gårding's inequality, where V is a dense subspace of H. Let φ : V → (−∞, +∞] be a lower semicontinuous, proper convex function. Let Y be a Hilbert space of control variables, and let B be a bounded linear operator from Y into L²(0, T; H). Let U_ad be a closed convex subset of Y, which is called the admissible set. Let J be a given quadratic cost function (see (61) or (103)). Then we will find an element u ∈ U_ad which attains the minimum of J(v) over U_ad subject to (1).
Recently, initial and boundary value problems for permanent magnet technologies have been introduced via variational inequalities in [1, 2] and via nonlinear variational inequalities of semilinear parabolic type in [3, 4]. Few papers treat variational inequalities with nonlinear perturbations. First, in Section 2, we deal with the existence and a variation-of-constants formula for solutions of the nonlinear functional differential equation (1) governed by the variational inequality in Hilbert spaces.
Based on the regularity results for solutions of (1), we establish the optimal control problem for the cost functionals in Section 3. For optimal control problems of systems governed by variational inequalities, see [1, 5]. We refer to [6, 7] for applications of nonlinear variational inequalities. Necessary conditions for state-constrained optimal control problems governed by semilinear elliptic problems have been obtained by Bonnans and Tiba using methods of convex analysis.
Let x(u) stand for the solution of (1) associated with the control u. When the nonlinear mapping f is Lipschitz continuous from [0, T] × V into H, we obtain the regularity of solutions of (1) and a norm estimate for a solution of the above nonlinear equation in the desired solution space. Consequently, in view of the monotonicity of ∂φ, we show that the mapping u ↦ x(u) is continuous in order to establish the necessary conditions of optimality of optimal controls for various observation cases.
In Section 4, we characterize the optimal controls by giving necessary conditions for optimality. For this, it is necessary to write down the necessary optimality condition following the theory of Lions. The most important objective of such a treatment is to derive necessary optimality conditions that give complete information on the optimal control.
Since the optimal control problems governed by nonlinear equations are nonsmooth and nonconvex, the standard methods of deriving necessary conditions of optimality are inapplicable here. So we approximate the given problem by a family of smooth optimization problems and then pass to the limit in the corresponding optimal control problems. An attractive feature of this approach is that it allows the treatment of optimal control problems governed by a large class of nonlinear systems with general cost criteria.
2. Regularity for Solutions
If H is identified with its dual space, we may write V ⊂ H ⊂ V* densely, and the corresponding injections are continuous. The norms on V, H, and V* will be denoted by ‖·‖, |·|, and ‖·‖∗, respectively. The duality pairing between an element v* of V* and an element v of V is denoted by (v*, v), which reduces to the ordinary inner product in H if v*, v ∈ H.
For l ∈ V* we denote by (l, v) the value of l at v ∈ V. The norm of l as an element of V* is given by

‖l‖∗ = sup_{v ∈ V} |(l, v)| / ‖v‖.

Therefore, we assume that V has a stronger topology than H and, for brevity, we may regard that

‖u‖∗ ≤ |u| ≤ ‖u‖ for all u ∈ V.
Let a(·, ·) be a bounded sesquilinear form defined on V × V and satisfying Gårding's inequality

Re a(u, u) ≥ ω₁‖u‖² − ω₂|u|²,

where ω₁ > 0 and ω₂ is a real number. Let A be the operator associated with this sesquilinear form:

(Au, v) = a(u, v), u, v ∈ V.

Then A is a bounded linear operator from V to V* by the Lax–Milgram theorem. The realization of A in H, which is the restriction of A to D(A) = {u ∈ V : Au ∈ H}, is also denoted by A. From the inequalities

ω₁‖u‖² ≤ Re a(u, u) + ω₂|u|² ≤ C|Au||u| + ω₂|u|² ≤ max{C, ω₂} ‖u‖_{D(A)} |u|,

where ‖u‖_{D(A)} is the graph norm of D(A), it follows that there exists a constant C₀ > 0 such that

‖u‖ ≤ C₀ ‖u‖_{D(A)}^{1/2} |u|^{1/2}.

Thus we have the following sequence

D(A) ⊂ V ⊂ H ⊂ V* ⊂ D(A)*,

where each space is dense in the next one with continuous injection.
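As a finite-dimensional illustration of this coercivity (a sketch under illustrative assumptions: the stiffness matrix of the 1D Dirichlet Laplacian stands in for A, and the discrete ℓ² inner product stands in for the inner product of H), Gårding's inequality with ω₂ = 0 can be checked numerically:

```python
import numpy as np

# Stiffness matrix of the 1D Dirichlet Laplacian on n interior grid points.
n = 50
h = 1.0 / (n + 1)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

# Garding's inequality with omega2 = 0: (Au, u) >= omega1 * |u|^2, where the
# best constant omega1 is the smallest eigenvalue of A (close to pi^2 here).
omega1 = np.linalg.eigvalsh(A).min()

rng = np.random.default_rng(0)
for _ in range(100):
    u = rng.standard_normal(n)
    assert u @ A @ u >= omega1 * (u @ u) - 1e-8
```

The smallest eigenvalue of the discrete operator approximates π², the optimal coercivity constant of the continuous Dirichlet Laplacian on (0, 1).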
It is also well known that −A generates an analytic semigroup in both H and V*. For the sake of simplicity we assume that ω₂ = 0, and hence the closed half-plane {λ : Re λ ≥ 0} is contained in the resolvent set of −A.
If X is a Banach space, L²(0, T; X) is the collection of all strongly measurable square integrable functions from (0, T) into X, and W^{1,2}(0, T; X) is the set of all absolutely continuous functions on [0, T] such that their derivative belongs to L²(0, T; X). C([0, T]; X) will denote the set of all continuous functions from [0, T] into X with the supremum norm. If X and Y are two Banach spaces, L(X, Y) is the collection of all bounded linear operators from X into Y, and L(X, X) is simply written as L(X). Here, we note that by using interpolation theory we have

L²(0, T; V) ∩ W^{1,2}(0, T; V*) ⊂ C([0, T]; H).
First of all, consider the following linear system:

x′(t) + Ax(t) = k(t), 0 < t ≤ T,
x(0) = x₀. (13)
Lemma 2. Suppose that the assumptions for the principal operator A stated above are satisfied. Then the following properties hold.

(1) For x₀ ∈ V (see Lemma 1) and k ∈ L²(0, T; H), T > 0, there exists a unique solution x of (13) belonging to

L²(0, T; D(A)) ∩ W^{1,2}(0, T; H) ⊂ C([0, T]; V)

and satisfying

‖x‖_{L²(0,T;D(A)) ∩ W^{1,2}(0,T;H)} ≤ C₁(‖x₀‖ + ‖k‖_{L²(0,T;H)}),

where C₁ is a constant depending on T.

(2) Let x₀ ∈ H and k ∈ L²(0, T; V*), T > 0. Then there exists a unique solution x of (13) belonging to

L²(0, T; V) ∩ W^{1,2}(0, T; V*) ⊂ C([0, T]; H)

and satisfying

‖x‖_{L²(0,T;V) ∩ W^{1,2}(0,T;V*)} ≤ C₁(|x₀| + ‖k‖_{L²(0,T;V*)}),

where C₁ is a constant depending on T.
Let f be a nonlinear single-valued mapping from [0, T] × V into H.

(F) We assume that

|f(t, x₁) − f(t, x₂)| ≤ L‖x₁ − x₂‖

for every x₁, x₂ ∈ V, where L > 0 is a constant.
Let Y be another Hilbert space of control variables, and take B ∈ L(Y, L²(0, T; H)) as stated in the Introduction. Choose a bounded subset U of Y and call it a control set. Let us define the admissible set U_ad as a closed convex subset of Y.
Noting that the subdifferential operator ∂φ is defined by

∂φ(x) = {x* ∈ V* : φ(x) ≤ φ(y) + (x*, x − y) for all y ∈ V},

the problem (1) is represented by the following nonlinear functional differential problem on H:

x′(t) + Ax(t) + ∂φ(x(t)) ∋ f(t, x(t)) + Bu(t), 0 < t ≤ T,
x(0) = x₀.
Proposition 3. (1) Let the assumption (F) be satisfied. Assume that x₀ ∈ D(φ)‾ and u ∈ L²(0, T; Y), where D(φ)‾ is the closure in H of the set D(φ) = {x ∈ V : φ(x) < +∞}. Then (1) has a unique solution

x ∈ L²(0, T; V) ∩ C([0, T]; H),

which satisfies

x′(t) = (f(t, x(t)) + Bu(t) − Ax(t) − ∂φ(x(t)))⁰,

where ( · )⁰ is the minimum element of the set, and there exists a constant C₂ depending on T such that

‖x‖_{L²(0,T;V) ∩ C([0,T];H)} ≤ C₂(1 + |x₀| + ‖u‖_{L²(0,T;Y)}),

where C₂ is some positive constant.

Furthermore, if x₀ ∈ D(φ), then the solution x belongs to L²(0, T; D(A)) ∩ W^{1,2}(0, T; H) and satisfies

‖x‖_{L²(0,T;D(A)) ∩ W^{1,2}(0,T;H)} ≤ C₂(1 + ‖x₀‖ + ‖u‖_{L²(0,T;Y)}).
(2) We assume the following.

(A) A is symmetric, and there exists h ∈ H such that for every ε > 0 and any y ∈ D(φ),

φ((I + εA)⁻¹(y + εh)) ≤ φ(y) + Cε(1 + |y|²),

where C > 0 is a constant.
Then for x₀ ∈ V ∩ D(φ) and u ∈ L²(0, T; Y), (1) has a unique solution

x ∈ L²(0, T; D(A)) ∩ W^{1,2}(0, T; H) ∩ C([0, T]; V),

which satisfies

‖x‖_{L²(0,T;D(A)) ∩ W^{1,2}(0,T;H)} ≤ C₂(1 + ‖x₀‖ + ‖u‖_{L²(0,T;Y)}).
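Although the paper works in an abstract Hilbert-space setting, evolution inclusions of this type can be simulated in finite dimensions by a backward Euler scheme that treats ∂φ through its resolvent (proximal map). The sketch below is purely illustrative: the matrix A, the Lipschitz nonlinearity f, the fixed control term Bu, and the choice of φ as the indicator function of the nonnegative cone are all assumptions made for the example, not data from the paper.

```python
import numpy as np

def prox_nonneg(z):
    """Resolvent (I + eps*dphi)^(-1) when phi is the indicator of {x >= 0}:
    independent of the step size, it is the projection onto the cone."""
    return np.maximum(z, 0.0)

def solve_inclusion(A, f, Bu, x0, T=1.0, steps=200):
    """Backward Euler with proximal splitting for
    x'(t) + A x(t) + dphi(x(t)) contains f(t, x(t)) + Bu(t), x(0) = x0."""
    dt = T / steps
    x = x0.copy()
    I = np.eye(len(x0))
    for k in range(steps):
        t = k * dt
        # implicit in the linear part A, explicit in f, proximal in dphi
        z = np.linalg.solve(I + dt * A, x + dt * (f(t, x) + Bu(t)))
        x = prox_nonneg(z)
    return x

A = np.array([[2.0, -1.0], [-1.0, 2.0]])   # symmetric, coercive
f = lambda t, x: -np.sin(x)                # Lipschitz nonlinearity (cf. (F))
Bu = lambda t: np.array([-1.0, 0.5])       # a fixed illustrative control term
x_T = solve_inclusion(A, f, Bu, np.array([1.0, 1.0]))
```

By construction every iterate is projected back into the constraint set, mirroring the fact that the solution of the inclusion stays in D(φ).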
The following lemma is from Brézis [14, Lemma A.5].
Lemma 5. Let m ∈ L¹(0, T; ℝ) satisfy m(t) ≥ 0 for all t ∈ (0, T) and let a ≥ 0 be a constant. Let b be a continuous function on [0, T] satisfying the following inequality:

(1/2) b²(t) ≤ (1/2) a² + ∫₀ᵗ m(s) b(s) ds, t ∈ [0, T].

Then,

|b(t)| ≤ a + ∫₀ᵗ m(s) ds, t ∈ [0, T].
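Lemma 5 can be sanity-checked numerically in its equality case: if b(t) = a + ∫₀ᵗ m(s) ds, then differentiating (1/2)b² gives b′b = mb, so the hypothesis holds with equality and the conclusion is attained. A small sketch (the specific choices m(s) = s and a = 1 are illustrative):

```python
import numpy as np

a = 1.0
m = lambda s: s                    # m >= 0 on [0, T]
T, N = 2.0, 2001
t = np.linspace(0.0, T, N)

# Equality case of the lemma: b(t) = a + integral_0^t m(s) ds = 1 + t^2/2
b = a + t**2 / 2

# Hypothesis: (1/2) b(t)^2 <= (1/2) a^2 + integral_0^t m(s) b(s) ds
lhs = 0.5 * b**2
integrand = m(t) * b
rhs = 0.5 * a**2 + np.concatenate(
    ([0.0], np.cumsum((integrand[1:] + integrand[:-1]) / 2 * np.diff(t))))

# Conclusion: |b(t)| <= a + integral_0^t m(s) ds (attained with equality here)
bound = a + t**2 / 2
```

Here the trapezoidal rule slightly overestimates the integral of the convex integrand, so the discrete hypothesis holds pointwise, and the conclusion holds with equality by construction.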
For each u ∈ L²(0, T; Y), we can define the continuous solution mapping u ↦ x(u). Now, we can state the following theorem.
Theorem 6. (1) Let the assumption (F) be satisfied, x₀ ∈ D(φ)‾, and u ∈ L²(0, T; Y). Then the solution x of (1) belongs to L²(0, T; V) ∩ C([0, T]; H) and the mapping

H × L²(0, T; Y) ∋ (x₀, u) ↦ x ∈ L²(0, T; V) ∩ C([0, T]; H)

is Lipschitz continuous; that is, if (x₀ᵢ, uᵢ) ∈ H × L²(0, T; Y) and xᵢ is the solution of (1) with (x₀ᵢ, uᵢ) in place of (x₀, u) for i = 1, 2, then

‖x₁ − x₂‖_{L²(0,T;V) ∩ C([0,T];H)} ≤ C₃(|x₀₁ − x₀₂| + ‖u₁ − u₂‖_{L²(0,T;Y)}), (33)

where C₃ is a constant.
(2) Let the assumptions (A) and (F) be satisfied, and let x₀ ∈ V ∩ D(φ) and u ∈ L²(0, T; Y). Then x ∈ L²(0, T; D(A)) ∩ W^{1,2}(0, T; H), and the mapping (x₀, u) ↦ x is continuous.
Proof. (1) By Proposition 3, (1) possesses a unique solution x ∈ L²(0, T; V) ∩ C([0, T]; H) under the data condition (x₀, u) ∈ H × L²(0, T; Y). Now we prove the inequality (33). For that purpose, set ψ = x₁ − x₂ and subtract the equations satisfied by x₁ and x₂. Multiplying the resulting equation by ψ(t) and integrating over [0, t], we obtain an integral inequality; combining it with (38) and applying Lemma 5 yields the bounds (42) and (44). From (42) and (44) it follows that ψ is controlled by the data: the third term on the right-hand side of (45) is estimated by means of assumption (F), as in (47), and the second term is estimated by Lemma 2, as in (48). Thus, from (47) and (48), we apply Gronwall's inequality to (15) and arrive at (49), which is the estimate (33), where C₃ is a constant. Suppose (x₀ₙ, uₙ) → (x₀, u) in H × L²(0, T; Y), and let xₙ and x be the solutions of (1) with (x₀ₙ, uₙ) and (x₀, u), respectively. Then, by virtue of (49), we see that xₙ → x in L²(0, T; V) ∩ C([0, T]; H).
(2) It is easy to show that if x₀ ∈ V ∩ D(φ) and u ∈ L²(0, T; Y), then x belongs to L²(0, T; D(A)) ∩ W^{1,2}(0, T; H). Let (x₀ᵢ, uᵢ) ∈ (V ∩ D(φ)) × L²(0, T; Y), and let xᵢ be the solution of (1) with (x₀ᵢ, uᵢ) in place of (x₀, u) for i = 1, 2. Then, in view of Lemma 2 and assumption (F), we obtain the estimate (50). Since the difference of the nonlinear terms is controlled by assumption (F) and part (1), we get, arguing as in (9), the estimate (53). Combining (50) and (53) we obtain (54). Suppose that (x₀ₙ, uₙ) → (x₀, u) in (V ∩ D(φ)) × L²(0, T; Y), and let xₙ and x be the solutions of (1) with (x₀ₙ, uₙ) and (x₀, u), respectively. Let 0 < T₁ ≤ T be chosen so that the constant in (54) is less than one on [0, T₁]. Then, by virtue of (54) with T replaced by T₁, we see that xₙ → x in L²(0, T₁; D(A)) ∩ W^{1,2}(0, T₁; H). This implies that xₙ(T₁) → x(T₁) in V. Hence the same argument shows that xₙ → x on [T₁, min{2T₁, T}]. Repeating this process, we conclude that xₙ → x in L²(0, T; D(A)) ∩ W^{1,2}(0, T; H).
3. Optimal Control Problems
In this section we study the optimal control problems for the quadratic cost function in the framework of Lions. In what follows we assume that the embedding V ⊂ H is compact.
Let Y be another Hilbert space of control variables, and let B be a bounded linear operator from Y into L²(0, T; H), that is, B ∈ L(Y, L²(0, T; H)), which is called a controller. By virtue of Theorem 6, we can define uniquely the solution map u ↦ x(u) of Y into L²(0, T; V) ∩ C([0, T]; H). We will call the solution x(u) the state of the control system (1).
Let M be a Hilbert space of observation variables. The observation of the state is assumed to be given by

z(u) = Cx(u),

where C is an operator called the observer. The quadratic cost function associated with the control system (1) is given by

J(v) = ‖Cx(v) − z_d‖²_M + (Rv, v)_Y for v ∈ U_ad, (61)

where z_d ∈ M is a desired value of x(v) and R ∈ L(Y) is symmetric and positive; that is,

(Rv, v)_Y = (v, Rv)_Y ≥ d‖v‖²_Y

for some d > 0. Let U_ad be a closed convex subset of Y, which is called the admissible set. An element u ∈ U_ad which attains the minimum of J(v) over U_ad is called an optimal control for the cost function (61).
Remark 7. The solution space W(0, T) of strong solutions of (1) is defined by

W(0, T) = L²(0, T; D(A)) ∩ W^{1,2}(0, T; H),

endowed with the norm

‖x‖_{W(0,T)} = (‖x‖²_{L²(0,T;D(A))} + ‖x‖²_{W^{1,2}(0,T;H)})^{1/2}.
Let Ω be an open, bounded, and connected set of ℝⁿ with smooth boundary. We consider the observation of distributive and terminal values (see [15, 16]).
(1) We take M = L²(0, T; H) and C ∈ L(L²(0, T; V), M), and observe the distributive value

z(v) = x(v) ∈ M.

(2) We take M = H and C ∈ L(V, M), and observe the terminal value

z(v) = x(v; T) ∈ M.

The above observations are meaningful in view of the regularity of (1) established in Proposition 3.
Theorem 8. (1) Let the assumption (F) be satisfied. Assume that x₀ ∈ H and u ∈ L²(0, T; Y). Let x(u) be the solution of (1) corresponding to u. Then the mapping u ↦ x(u) is compact from L²(0, T; Y) to L²(0, T; H).

(2) Let the assumptions (A) and (F) be satisfied. If x₀ ∈ V ∩ D(φ) and u ∈ L²(0, T; Y), then the mapping u ↦ x(u) is compact from L²(0, T; Y) to L²(0, T; V).
Proof. (1) We define the solution mapping S from L²(0, T; Y) to L²(0, T; H) by

Su = x(u), u ∈ L²(0, T; Y).

By virtue of Lemma 2, Su is bounded in L²(0, T; V) ∩ W^{1,2}(0, T; V*) in terms of |x₀| and ‖u‖_{L²(0,T;Y)}. Hence if u is bounded in L²(0, T; Y), then so is x(u) in L²(0, T; V) ∩ W^{1,2}(0, T; V*). Since V is compactly embedded in H by assumption, the embedding

L²(0, T; V) ∩ W^{1,2}(0, T; V*) ⊂ L²(0, T; H)

is also compact in view of Theorem 2 of Aubin. Hence, the mapping u ↦ Su = x(u) is compact from L²(0, T; Y) to L²(0, T; H).

(2) If D(A) is compactly embedded in V by assumption, the embedding

L²(0, T; D(A)) ∩ W^{1,2}(0, T; H) ⊂ L²(0, T; V)

is compact. Hence, the proof of (2) is complete.
As indicated in the Introduction, we need to show the existence of an optimal control and to give its characterization. The existence of an optimal control for the cost function (61) is stated in the following theorem.
Theorem 9. Let the assumptions (A) and (F) be satisfied and x₀ ∈ V ∩ D(φ). Then there exists at least one optimal control u for the control problem (1) associated with the cost function (61); that is, there exists u ∈ U_ad such that

J(u) = inf_{v ∈ U_ad} J(v). (70)
Proof. Since U_ad is nonempty, there is a minimizing sequence {uₙ} ⊂ U_ad for the problem (70) satisfying

inf_{v ∈ U_ad} J(v) = lim_{n→∞} J(uₙ).

Obviously, {J(uₙ)} is bounded. Hence, by (62) there is a positive constant K₀ such that

d‖uₙ‖²_Y ≤ (Ruₙ, uₙ)_Y ≤ J(uₙ) ≤ K₀.

This shows that {uₙ} is bounded in Y. So we can extract a subsequence (denoted again by {uₙ}) of {uₙ} and find u ∈ Y such that uₙ → u weakly in Y; since U_ad is closed and convex, it is weakly closed, so u ∈ U_ad. Let xₙ be the solution of (1) corresponding to uₙ. By (15) and (17) we know that {xₙ} and {x′ₙ} are bounded in L²(0, T; D(A)) and L²(0, T; H), respectively. Therefore, by Rellich's extraction theorem, we can find a subsequence of {xₙ}, denoted again by {xₙ}, and find x such that xₙ → x weakly in these spaces. However, by Theorem 8, we know that xₙ → x strongly in L²(0, T; V). From (F) it follows that f(·, xₙ) → f(·, x) strongly in L²(0, T; H). By the boundedness of B we have Buₙ → Bu weakly in L²(0, T; H). Since the remaining terms are uniformly bounded, from (73)–(77), and noting that ∂φ is demiclosed, we have that x satisfies a.e. on (0, T) the following equation:

x′(t) + Ax(t) + ∂φ(x(t)) ∋ f(t, x(t)) + Bu(t),

that is, x = x(u). Since C is continuous and the norm is lower semicontinuous, it holds that

‖Cx(u) − z_d‖²_M ≤ liminf_{n→∞} ‖Cx(uₙ) − z_d‖²_M.

It is also clear from the weak convergence uₙ → u that

(Ru, u)_Y ≤ liminf_{n→∞} (Ruₙ, uₙ)_Y.

Thus,

J(u) ≤ liminf_{n→∞} J(uₙ) = inf_{v ∈ U_ad} J(v).

But since J(u) ≥ inf_{v ∈ U_ad} J(v) by definition, we conclude that u ∈ U_ad is a desired optimal control.
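The minimizing-sequence argument above is nonconstructive; numerically, one typically minimizes a discretized analogue of the cost (61) over the admissible set by projected gradient iterations. The sketch below is an illustrative finite-dimensional stand-in (the scalar state equation, the box constraints playing the role of U_ad, and all parameter values are assumptions made for the example, not the paper's data):

```python
import numpy as np

# Discretized stand-in: state x' = -x + u on [0, 1] with x(0) = 0,
# cost J(u) = dt*sum (x_k - z_d)^2 + r*dt*sum u_k^2, admissible set 0 <= u <= 1.
N, dt, r, z_d = 50, 0.02, 1e-2, 0.5

def solve_state(u):
    x = np.zeros(N + 1)
    for k in range(N):
        x[k + 1] = x[k] + dt * (-x[k] + u[k])
    return x

def cost(u):
    x = solve_state(u)
    return dt * np.sum((x[1:] - z_d) ** 2) + r * dt * np.sum(u ** 2)

def grad(u, h=1e-6):
    """Forward-difference gradient; an adjoint computation would be cheaper."""
    g = np.empty(N)
    J0 = cost(u)
    for i in range(N):
        up = u.copy()
        up[i] += h
        g[i] = (cost(up) - J0) / h
    return g

u0 = np.full(N, 0.5)
u = u0.copy()
for _ in range(100):
    # gradient step followed by projection onto the admissible set
    u = np.clip(u - 1.0 * grad(u), 0.0, 1.0)

J_opt, J_init = cost(u), cost(u0)
```

Because the discrete cost is a convex quadratic in u and the step size is well below 2/L for its gradient's Lipschitz constant L, the iteration produces a (numerical) minimizing sequence that stays in the admissible set.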
4. Necessary Conditions for Optimality
In this section we characterize the optimal controls by giving necessary conditions for optimality. For this it is necessary to write down the necessary optimality condition

DJ(u)(v − u) ≥ 0 for all v ∈ U_ad (84)

and to analyze (84) in view of the proper adjoint state system, where DJ(u) denotes the Gâteaux derivative of J(v) at v = u. Therefore, we have to prove that the solution mapping v ↦ x(v) is Gâteaux differentiable at v = u. Here we note that from Theorem 6 it follows immediately that

x(v) → x(u) in L²(0, T; V) ∩ C([0, T]; H) as v → u in Y.

The solution map v ↦ x(v) of Y into L²(0, T; V) ∩ C([0, T]; H) is said to be Gâteaux differentiable at v = u if for any w ∈ Y there exists Dx(u) ∈ L(Y, L²(0, T; V) ∩ C([0, T]; H)) such that

‖λ⁻¹(x(u + λw) − x(u)) − Dx(u)w‖ → 0 as λ → 0.

The operator Dx(u) denotes the Gâteaux derivative of x(u) at v = u, and the function Dx(u)w is called the Gâteaux derivative in the direction w ∈ Y, which plays an important part in the nonlinear optimal control problems.
First, following Corollary 2.2 of Chapter II of the cited monograph, let us introduce the regularization of φ as follows.
Lemma 10. For every ε > 0, define

φ_ε(x) = inf { |x − y|² / (2ε) + φ(y) : y ∈ H },

where |·| is the norm of H. Then the function φ_ε is Fréchet differentiable on H, and its Fréchet differential ∂φ_ε is Lipschitz continuous on H with Lipschitz constant ε⁻¹, where ∂φ_ε = ε⁻¹(I − (I + ε∂φ)⁻¹). In addition,

φ_ε(x) ≤ φ(x), and ∂φ_ε(x) → (∂φ)⁰(x) as ε → 0,

where (∂φ)⁰(x) is the element of minimum norm in the set ∂φ(x).
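For a concrete instance of Lemma 10, take H = ℝ and φ(x) = |x| (an illustrative choice, not the paper's φ): the resolvent (I + ε∂φ)⁻¹ is soft-thresholding, φ_ε is the Huber function, and ∂φ_ε(x) = clip(x/ε, −1, 1) converges to the minimum-norm section of ∂|·| (sign(x) for x ≠ 0, and 0 at x = 0):

```python
import numpy as np

def prox_abs(x, eps):
    """Resolvent (I + eps*dphi)^(-1) for phi = |.|: soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - eps, 0.0)

def phi_eps(x, eps):
    """Moreau-Yosida regularization: min_y (x - y)^2/(2 eps) + |y|."""
    y = prox_abs(x, eps)
    return (x - y) ** 2 / (2 * eps) + np.abs(y)

def dphi_eps(x, eps):
    """Yosida approximation (x - prox(x))/eps; Lipschitz constant 1/eps."""
    return (x - prox_abs(x, eps)) / eps

x = np.linspace(-2.0, 2.0, 401)
ok = all(np.all(phi_eps(x, e) <= np.abs(x) + 1e-12) for e in (1.0, 0.1, 0.01))
```

Taking y = x in the infimum shows φ_ε ≤ φ directly, which the check above confirms numerically; at x = 0 the Yosida approximation returns 0, the minimum-norm element of ∂|·|(0) = [−1, 1].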
Now, we introduce the smoothing system corresponding to (1) as follows:

x′_ε(t) + Ax_ε(t) + ∂φ_ε(x_ε(t)) = f(t, x_ε(t)) + Bu(t), 0 < t ≤ T,
x_ε(0) = x₀.
Lemma 11. Let the assumption (F) be satisfied. Then the solution map u ↦ x_ε(u) of L²(0, T; Y) into L²(0, T; V) ∩ C([0, T]; H) is Lipschitz continuous.

Moreover, let us assume the condition (A) in Proposition 3. Then the map u ↦ x_ε(u) of L²(0, T; Y) into C([0, T]; V) is also Lipschitz continuous.
Proof. We set y_ε = x_ε(u) − x_ε(v). From Theorem 6, it follows immediately that y_ε is controlled by ‖u − v‖_{L²(0,T;Y)}, so the solution map u ↦ x_ε(u) of L²(0, T; Y) into L²(0, T; V) ∩ C([0, T]; H) is Lipschitz continuous. Moreover, by the assumption (A) and part (2) of Theorem 6, together with the relation (12), we know that the map of L²(0, T; Y) into C([0, T]; V) is also Lipschitz continuous.
In order to obtain the optimality conditions, we require the following assumptions.

(F1) The Gâteaux derivative f_x(t, x) of f(t, x) in the second argument is measurable in t ∈ [0, T] for each x ∈ V and continuous in x ∈ V for a.e. t ∈ [0, T], and there exists a function θ₁ ∈ L²(0, T; ℝ) such that

‖f_x(t, x)‖_{L(V,H)} ≤ θ₁(t) for all (t, x) ∈ [0, T] × V.

(F2) The map x ↦ φ(x) is Gâteaux differentiable, and the value Dφ(x) is the Gâteaux derivative of φ at x; moreover, there exists a function θ₂ ∈ L²(0, T; ℝ) such that

|Dφ(x)| ≤ θ₂(t) for all x ∈ V and a.e. t ∈ [0, T].
Theorem 12. Let the assumptions (A), (F1), and (F2) be satisfied. Let u be an optimal control for the cost function J in (61). Then the following inequality holds:

(B*p + Ru, v − u)_Y ≥ 0 for all v ∈ U_ad,

where p is a unique solution of the following adjoint equation:

−p′(t) + A*p(t) + f_x(t, x(u; t))* p(t) = C*Λ_M (Cx(u) − z_d)(t), 0 ≤ t < T,
p(T) = 0,

and Λ_M is the canonical isomorphism of M onto M*.
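In a discretized analogue of this result, the gradient of the cost is assembled by a backward (adjoint-state) recursion, and the necessary condition reads (J′(u*), v − u*) ≥ 0 for every admissible v. The following sketch uses an illustrative scalar discretization (state equation, box constraints, and all parameters are assumptions made for the example, not the paper's data):

```python
import numpy as np

# Scalar discretization: x_{k+1} = (1 - dt) x_k + dt u_k, x_0 = 0.
N, dt, r, z_d = 50, 0.02, 0.1, 0.5

def solve_state(u):
    x = np.zeros(N + 1)
    for k in range(N):
        x[k + 1] = (1.0 - dt) * x[k] + dt * u[k]
    return x

def cost(u):
    x = solve_state(u)
    return dt * np.sum((x[1:] - z_d) ** 2) + r * dt * np.sum(u ** 2)

def adjoint_grad(u):
    """Gradient of the discrete cost via the backward adjoint recursion
    p_{i-1} = (x_i - z_d) + (1 - dt) p_i, then dJ/du = 2 dt^2 p + 2 r dt u."""
    x = solve_state(u)
    a = 1.0 - dt
    p = np.zeros(N)
    p[N - 1] = x[N] - z_d
    for i in range(N - 1, 0, -1):
        p[i - 1] = (x[i] - z_d) + a * p[i]
    return 2.0 * dt**2 * p + 2.0 * r * dt * u

# Projected gradient to (near) optimality on the admissible set {0 <= u <= 1}.
u = np.full(N, 0.5)
for _ in range(1000):
    u = np.clip(u - 5.0 * adjoint_grad(u), 0.0, 1.0)

# Necessary optimality condition: (J'(u), v - u) >= 0 for admissible v.
g = adjoint_grad(u)
rng = np.random.default_rng(1)
violations = [g @ (rng.uniform(0.0, 1.0, N) - u) for _ in range(100)]
```

At the fixed point of the projected-gradient map, the variational inequality holds exactly; numerically the sampled directional terms are nonnegative up to the iteration residual.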