Research Article  Open Access
Yeşim Saraç, Murat Subaşı, "On the Regularized Solutions of Optimal Control Problem in a Hyperbolic System", Abstract and Applied Analysis, vol. 2012, Article ID 156541, 12 pages, 2012. https://doi.org/10.1155/2012/156541
On the Regularized Solutions of Optimal Control Problem in a Hyperbolic System
Abstract
We take the initial condition on the state variable of a hyperbolic problem as the control function and formulate a control problem whose solution minimizes, at the final time, the distance measured in a suitable norm between the solution of the problem and given targets. We prove the existence and uniqueness of the optimal solution and establish the optimality condition. An iterative algorithm is constructed to compute the required optimal control as the limit of a suitable subsequence of controls. An iterative procedure is implemented and used to solve some test problems numerically.
1. Introduction and Statement of the Problem
Optimal control problems for hyperbolic equations were investigated by Lions in his famous book [1]. Lions examined in detail the problems in which the control function appears on the right-hand side and in the boundary condition of the hyperbolic problem. Furthermore, there have been control studies for different types of cost functionals when the control is in the boundaries [2–4], in the coefficient [5, 6], and on the right-hand side of the equation [7, 8]. As for the control of initial conditions, Lions discussed the control of the initial velocity of the system in detail but treated the control of the initial state of the system only briefly, solving the system in .
In this study, we consider the following problem of minimizing the cost functional: under the following condition: Since the problem is usually ill posed for , we use the parameter as the regularization parameter, which is also the strong convexity constant; this guarantees the uniqueness and stability of the regularized solution. The functional is called the cost functional and the term is called the penalization term; its role is, on the one hand, to avoid using “too large” controls in the minimization of and, on the other hand, to ensure coercivity for .
Lions in [1] mentioned the observation of in and in for the control . Apart from that study, there has been no investigation in the literature of the control of the initial state of the hyperbolic system up to now. In this study, we investigate different targets. With the choice of the functional in (1.1), we use and , which correspond to the final velocity and the force, respectively, for the control . Since the Fréchet differential of the cost functional cannot be obtained with the usual norm in , we obtain differentiability only by using the Poincaré norm.
The space is a Hilbert subspace of , and the Poincaré inner product and the Poincaré norm are defined, respectively, as Let We search for We organize this paper as follows. In Section 2, we establish the existence and uniqueness of the optimal solution. In Section 3, we derive the necessary optimality condition. In Section 4, we construct an algorithm for the numerical approximation of the optimal solution based on the steepest descent method. In Section 5, we give symbolic representations of the optimal solution by applying this algorithm to some examples.
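For orientation, on an interval with homogeneous Dirichlet data the Poincaré inner product and norm mentioned above usually take the following standard form (the interval (0, l) and the symbols here are illustrative placeholders, not taken from the original):

```latex
(u, v)_{1} = \int_{0}^{l} u_x(x)\, v_x(x)\, dx,
\qquad
\|u\|_{1} = \left( \int_{0}^{l} u_x^{2}(x)\, dx \right)^{1/2},
```

which, by the Poincaré inequality \(\|u\|_{L^2} \le c \, \|u\|_{1}\), is equivalent to the full \(H^1\) norm on the subspace of functions vanishing on the boundary.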
2. Existence and Uniqueness of the Optimal Solution
First we state the generalized solution of the hyperbolic problem (1.2).
Definition 2.1. The generalized solution of (1.2) is defined as the function that satisfies the following integral identity: for all with . For this solution to exist, the following is needed:
Theorem 2.2. Suppose that (2.2) holds; then (1.2) has a unique generalized solution, and the following estimate is valid for the solution: The proof of this theorem can easily be obtained by the Galerkin method used in [9].
The strategy for proving the existence and uniqueness of this optimal control is to use the relationship between the minimization of quadratic functionals and variational problems corresponding to symmetric bilinear forms. The key point is to write in the following way: Here is bilinear (since the mapping is linear) and symmetric.
Also, the difference function is the solution of the following problem: and for the solution of this problem the following estimates are valid: Hence, we write the following: and this implies the coercivity of . Since , applying the Cauchy-Schwarz inequality, we get for and .
So, using (2.7), we obtain and write for . Then is continuous.
The functional in (2.4) is defined as Using (2.7), we can easily write that Hence we see that the functional is continuous.
The number in (2.4) is defined as Therefore we have established the conditions of the following existence and uniqueness theorem for the problem.
Theorem 2.3. Let be a continuous, symmetric, coercive bilinear form and a continuous linear form on . Then there exists a unique element such that The proof of this theorem can easily be obtained by showing the weak lower semicontinuity of , as in [1].
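Theorem 2.3 is the classical result on symmetric coercive bilinear forms from [1]; in generic notation (the symbols \(\pi\), \(L\), \(H\), and \(u\) below are placeholders, not taken from the original) a standard formulation reads:

```latex
\exists!\, u \in H:\quad
\pi(u, v) = L(v) \quad \forall\, v \in H,
\qquad\text{equivalently}\qquad
J(u) = \min_{v \in H} \big( \pi(v, v) - 2 L(v) \big),
```

so that minimizing the quadratic functional and solving the variational equation are equivalent characterizations of the same unique element.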
3. Lagrange Multipliers and Optimality Condition
To derive the optimality condition, let us introduce the Lagrangian , given by Notice that is linear in ; therefore corresponds to the state equation (1.2). Moreover, generates the following adjoint problem: while constitutes the following Euler equation: So, we can state the following theorem in view of [10].
Theorem 3.1. The control and the state are optimal if there exists a multiplier such that and satisfy the following optimality conditions: for .
4. An Iterative Algorithm and Its Convergence
Now we can apply the standard steepest descent iteration. The gradient of at any is given by It turns out that plays the role of the steepest descent direction for . This suggests an iterative procedure for computing a sequence of controls convergent to the optimal one.
Select an initial control . If is known, then is computed according to the following scheme:
(1) Solve the state problem (1.2) in the sense of (2.1) and get the corresponding .
(2) Knowing , solve the adjoint problem (3.4).
(3) Using , compute the gradient .
(4) Set and select the relaxation parameter so as to ensure that for sufficiently small .
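The scheme above can be sketched as a generic steepest-descent driver. In the paper each gradient evaluation requires solving the state problem and the adjoint problem; here those steps are abstracted into callables, and the demo uses a simple regularized quadratic functional as a stand-in. The backtracking rule is one common way to select the relaxation parameter in step (4); this is a minimal illustrative sketch, not the authors' implementation.

```python
# Generic steepest-descent loop mirroring steps (1)-(4) of the scheme above.
# J and grad_J stand in for the functional evaluation and the state/adjoint
# solves that produce the gradient in the paper.

def steepest_descent(J, grad_J, v0, beta0=1.0, tol=1e-10, max_iter=10_000):
    v = list(v0)
    for _ in range(max_iter):
        g = grad_J(v)
        gnorm2 = sum(gi * gi for gi in g)
        if gnorm2 < tol * tol:          # stopping criterion: small gradient
            break
        beta = beta0
        Jv = J(v)
        # Backtracking: halve the relaxation parameter until the functional
        # decreases, which is the condition required in step (4).
        while True:
            trial = [vi - beta * gi for vi, gi in zip(v, g)]
            if J(trial) < Jv or beta < 1e-16:
                break
            beta *= 0.5
        v = trial
    return v

# Demo on a stand-in functional J(v) = ||v - t||^2 + alpha * ||v||^2,
# whose exact minimizer is v* = t / (1 + alpha).
alpha = 0.1
t = [1.0, -2.0, 0.5]

def J(v):
    return (sum((vi - ti) ** 2 for vi, ti in zip(v, t))
            + alpha * sum(vi * vi for vi in v))

def grad_J(v):
    return [2.0 * (vi - ti) + 2.0 * alpha * vi for vi, ti in zip(v, t)]

v_opt = steepest_descent(J, grad_J, [0.0, 0.0, 0.0])
```

In the PDE setting the callables would wrap the state solve (1.2), the adjoint solve (3.4), and the gradient formula of step (3); the driver itself is unchanged.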
Concerning the choice of the relaxation parameter, there are several possibilities; these can be found in any optimization book.
One of the following can be taken as a stopping criterion for the iteration process:
Lemma 4.1. The cost functional (1.1) is strongly convex with the strong convexity constant .
From the following definition of a strongly convex functional: we can see that the cost functional (1.1) is strongly convex with the constant .
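The definition invoked here is the usual one; in generic notation (with \(\chi\) denoting the strong convexity constant, all symbols placeholders) a functional \(J\) is strongly convex on a convex set when:

```latex
J\big(\beta u + (1-\beta)\, v\big)
\le \beta\, J(u) + (1-\beta)\, J(v)
- \chi\, \beta (1-\beta)\, \| u - v \|^{2},
\qquad \forall\, u, v,\ \ \beta \in [0, 1].
```

The quadratic penalization term in (1.1) is what supplies the extra \(\chi\)-term beyond ordinary convexity.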
So, we can give the following theorem, which states the convergence of the minimizing sequence to the optimal solution.
Theorem 4.2. Let be the optimal solution of problem (1.1)–(1.5). Then the minimizer given in (4.2) satisfies the following inequality:
Proof. If we take in the definition of the strongly convex functional, we can write Since , we find
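The step used here is the standard consequence of strong convexity at a minimizer: applying the strong convexity inequality with one argument equal to the minimizer \(v_*\) and letting the convex weight tend to zero yields, in generic placeholder notation,

```latex
\chi \, \| v_N - v_* \|^{2} \;\le\; J(v_N) - J(v_*),
```

so that the convergence of the functional values \(J(v_N) \to J(v_*)\) along the minimizing sequence forces the convergence of the controls \(v_N \to v_*\) in norm.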
5. Numerical Examples
Example 5.1. Let us consider the following problem of minimizing the cost functional: under the following condition: Rewrite the functional as where Choosing and starting from the initial element , we obtain the minimizing sequence. Here the relaxation parameter ensures the inequality . In this example, if we use the stopping criterion , we get the following minimizing element after 250 iterations: and for this optimal control the values of and are as follows: For different , the values of , , and the optimal controls are given in Table 1.
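Step (1) of the algorithm requires repeatedly solving a one-dimensional hyperbolic state problem. A minimal explicit (leapfrog) finite-difference solver of the kind that could be used for that step might look as follows; the equation \(u_{tt} = a^2 u_{xx}\) on (0, 1) with homogeneous Dirichlet boundary conditions, zero initial velocity, and the grid sizes are illustrative assumptions, not the authors' data or code.

```python
# Explicit (leapfrog) finite-difference solver for u_tt = a^2 u_xx on (0, 1)
# with u(0, t) = u(1, t) = 0, u(x, 0) = phi(x), u_t(x, 0) = 0.
# The scheme is stable when the Courant number r = a*dt/dx satisfies r <= 1.

import math

def solve_wave(phi, a=1.0, T=1.0, nx=100, nt=200):
    dx = 1.0 / nx
    dt = T / nt
    r2 = (a * dt / dx) ** 2
    assert r2 <= 1.0, "CFL condition violated"

    x = [i * dx for i in range(nx + 1)]
    u_prev = [phi(xi) for xi in x]               # u at t = 0
    # First time step via Taylor expansion, using u_t(x, 0) = 0.
    u = [0.0] * (nx + 1)
    for i in range(1, nx):
        u[i] = u_prev[i] + 0.5 * r2 * (u_prev[i-1] - 2*u_prev[i] + u_prev[i+1])

    for _ in range(nt - 1):
        u_next = [0.0] * (nx + 1)
        for i in range(1, nx):
            u_next[i] = (2*u[i] - u_prev[i]
                         + r2 * (u[i-1] - 2*u[i] + u[i+1]))
        u_prev, u = u, u_next
    return x, u                                  # u approximates u(x, T)

# Sanity check: with phi(x) = sin(pi x) the exact solution is
# sin(pi x) cos(pi t), which equals -sin(pi x) at T = 1.
x, uT = solve_wave(lambda s: math.sin(math.pi * s), a=1.0, T=1.0)
```

In an optimal-control loop the initial profile `phi` would be replaced by the current control iterate, and the returned values at the final time would feed the cost functional and the adjoint problem.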
