Abstract and Applied Analysis
Volume 2012 (2012), Article ID 156541, 12 pages
On the Regularized Solutions of Optimal Control Problem in a Hyperbolic System
Department of Mathematics, Faculty of Science, Atatürk University, 25240 Erzurum, Turkey
Received 7 March 2012; Accepted 12 June 2012
Academic Editor: Valery Covachev
Copyright © 2012 Yeşim Saraç and Murat Subaşı. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
We use the initial condition on the state variable of a hyperbolic problem as the control function and formulate a control problem whose solution implies the minimization, at the final time, of the distance measured in a suitable norm between the solution of the problem and given targets. We prove the existence and the uniqueness of the optimal solution and establish the optimality condition. An iterative algorithm is constructed to compute the required optimal control as the limit of a suitable subsequence of controls. An iterative procedure is implemented and used to numerically solve some test problems.
1. Introduction and Statement of the Problem
Optimal control problems for hyperbolic equations were investigated by Lions in his famous book [1], where the problems are examined in detail when the control function appears on the right-hand side of the equation or in the boundary condition. Furthermore, several control problems with different types of cost functionals have been studied when the control acts on the boundary [2–4], in a coefficient [5, 6], or on the right-hand side of the equation [7, 8]. As for the control of initial conditions, Lions treated the control of the initial velocity of the system in detail but discussed the control of the initial state of the system only briefly [1].
In this study, we consider the problem of minimizing a cost functional subject to a hyperbolic initial-boundary value problem whose initial state serves as the control. Since the problem is usually ill posed without regularization, we introduce a regularization parameter, which is also the strong convexity constant; this guarantees the uniqueness and stability of the regularized solution. The functional to be minimized is called the cost functional, and the regularization term is called the penalization term; its role is, on the one hand, to avoid using “too large” controls in the minimization and, on the other hand, to assure coercivity.
Lions in [1] mentioned the observation of the state and of its final value for the initial-velocity control. Apart from this study, there has been no investigation in the literature up to now of the control of the initial state of a hyperbolic system. In the present study, we investigate different targets: with the choice of the functional in (1.1), the two targets correspond to the final velocity and the force, respectively. Since the Fréchet differential of the cost functional cannot be obtained with the usual norm of the control space, we obtain the differentiability only by using the Poincaré norm.
The control space is a Hilbert subspace equipped with the Poincaré inner product and the Poincaré norm, and we search for the control minimizing the cost functional over this space. We organize this paper as follows. In Section 2, we establish the existence and the uniqueness of the optimal solution. In Section 3, we derive the necessary optimality condition. In Section 4, we construct an algorithm for the numerical approximation of the optimal solution based on the steepest descent method. In Section 5, we give symbolic representations of the optimal solutions obtained by applying this algorithm to some examples.
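For orientation, a standard form of the Poincaré inner product and norm on functions vanishing at the endpoints of an interval $(0,l)$ is the following (the interval notation and the symbols $u$, $v$ are assumptions here, not recovered from the original):

```latex
\langle u, v \rangle_{P} = \int_{0}^{l} u_x(x)\, v_x(x)\, dx,
\qquad
\| u \|_{P} = \left( \int_{0}^{l} u_x^{2}(x)\, dx \right)^{1/2}.
```

By the Poincaré inequality, $\|u\|_{L_2(0,l)} \le c \, \|u\|_{P}$ on this subspace, so $\|\cdot\|_P$ is indeed a norm, equivalent to the usual $W_2^1$-norm there.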
2. Existence and Uniqueness of the Optimal Solution
First, we define the generalized solution of the hyperbolic problem (1.2).
Definition 2.1. The generalized solution of (1.2) is the function that satisfies the corresponding integral identity for all admissible test functions vanishing at the final time. For this solution to exist, condition (2.2) on the data is needed.
Theorem 2.2. Suppose that (2.2) holds; then (1.2) has a unique generalized solution, and an a priori estimate in terms of the data is valid for the solution. The proof of this theorem can be obtained by the Galerkin method used in [9].
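As a sketch of what such an a priori estimate typically looks like for a second-order hyperbolic problem (the symbols $\varphi$, $\psi$, $f$ for the initial displacement, initial velocity, and source term, and the particular norms, are assumed notation, not recovered from the original):

```latex
\| u \|_{W_2^1(\Omega)}
\;\le\;
c_0 \left( \| \varphi \|_{W_2^1(0,l)} + \| \psi \|_{L_2(0,l)} + \| f \|_{L_2(\Omega)} \right),
```

with a constant $c_0$ independent of the data; estimates of this type are exactly what the Galerkin method yields.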
The strategy to prove the existence and uniqueness of the optimal control is to use the relationship between the minimization of quadratic functionals and variational problems corresponding to symmetric bilinear forms. The key point is to write the cost functional as the sum of a bilinear part, a linear part, and a constant. The bilinear part is symmetric and bilinear, since the control-to-state mapping is linear.
Moreover, the difference of the states corresponding to two controls is the solution of the homogeneous problem with the difference of the controls as initial data, and estimates analogous to those of Theorem 2.2 are valid for this solution. Hence the bilinear form dominates the penalization term, and this implies its coercivity. Applying the Cauchy–Schwarz inequality, we obtain an upper bound for the bilinear form, so it is bounded.
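In generic form, the quadratic decomposition and the coercivity argument have the following shape (here $S$, a linear map from the control to the observed quantity, $\alpha$, and the symbols $\pi$, $L$, $c$ are placeholder notation assumed for illustration):

```latex
J_\alpha(v) = \pi(v,v) - 2\,L(v) + c,
\qquad
\pi(v,w) = \big( S v,\, S w \big) + \alpha \,\langle v, w \rangle_P,
```

so that $\pi$ is bilinear and symmetric, and

```latex
\pi(v,v) = \| S v \|^2 + \alpha \,\| v \|_P^2 \;\ge\; \alpha \,\| v \|_P^2,
```

which is the coercivity used in the text.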
Using (2.7), we also obtain an upper bound for the linear form; hence it is continuous.
The constant in (2.4) depends only on the given data. Therefore, we have verified the hypotheses of the following existence and uniqueness theorem for the problem.
Theorem 2.3. Let a continuous, symmetric, coercive bilinear form and a continuous linear form be given on the control space. Then there exists a unique element minimizing the associated quadratic functional. The proof of this theorem can be obtained by showing the weak lower semicontinuity of the functional, as in [10].
3. Lagrange Multipliers and Optimality Condition
To derive the optimality condition, let us introduce the Lagrangian of the problem. Notice that the Lagrangian is linear in the multiplier; therefore, stationarity with respect to the multiplier corresponds to the state equation (1.2). Moreover, stationarity with respect to the state generates the adjoint problem, while stationarity with respect to the control constitutes the Euler equation. So, we can state the following theorem.
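In generic form, for a state $u$, control $v$, and multiplier $\eta$, the Lagrangian of a PDE-constrained problem and its stationarity conditions read as follows (a model wave operator $u_{tt} - u_{xx} - f$ and the symbols here are assumptions for illustration, not recovered from the original):

```latex
\mathcal{L}(u, v, \eta) \;=\; J_\alpha(v) \;+\; \big\langle \eta,\; u_{tt} - u_{xx} - f \big\rangle,
```

```latex
\frac{\partial \mathcal{L}}{\partial \eta} = 0 \;\Rightarrow\; \text{state problem},
\qquad
\frac{\partial \mathcal{L}}{\partial u} = 0 \;\Rightarrow\; \text{adjoint problem},
\qquad
\frac{\partial \mathcal{L}}{\partial v} = 0 \;\Rightarrow\; \text{Euler equation}.
```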
Theorem 3.1. The control and the state are optimal if there exists a multiplier such that the control, the state, and the multiplier together satisfy the optimality system formed by the state problem, the adjoint problem, and the Euler equation.
4. An Iterative Algorithm and Its Convergence
Now, we can apply the standard steepest descent iteration. The gradient of the cost functional at any control is computed from the solution of the adjoint problem, and its negative plays the role of the steepest descent direction. This suggests an iterative procedure to compute a sequence of controls converging to the optimal one.
Select an initial control. If the control at step n is known, the next control is computed according to the following scheme:
(1) Solve the state problem (1.2) in the sense of (2.1) and get the corresponding state.
(2) Knowing the state, solve the adjoint problem (3.4).
(3) Using the adjoint solution, compute the gradient of the cost functional.
(4) Update the control in the negative gradient direction, selecting the relaxation parameter so that the cost functional decreases for a sufficiently small step.
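The four steps above can be sketched in code. This is a minimal stand-in, not the paper's implementation: the linear state map (control at time zero to observation at the final time) is replaced by a small generic matrix `A`, so that "solving the state problem" is applying `A` and "solving the adjoint problem" is applying its transpose; the paper instead solves the hyperbolic state and adjoint problems at each step. All names and the test data are assumptions for illustration.

```python
def matvec(A, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def cost(A, v, y, alpha):
    """Discrete stand-in for the regularized cost: ||A v - y||^2 + alpha ||v||^2."""
    r = [s - t for s, t in zip(matvec(A, v), y)]
    return sum(x * x for x in r) + alpha * sum(x * x for x in v)

def gradient(A, v, y, alpha):
    u = matvec(A, v)                                   # step 1: "solve the state problem"
    eta = matvec(transpose(A),
                 [a - b for a, b in zip(u, y)])        # step 2: "solve the adjoint problem"
    return [2 * e + 2 * alpha * x
            for e, x in zip(eta, v)]                   # step 3: gradient 2 A^T(Av - y) + 2 alpha v

def steepest_descent(A, y, alpha, beta=0.05, tol=1e-8, max_iter=1000):
    v = [0.0] * len(A[0])                              # initial control v_0
    for _ in range(max_iter):
        g = gradient(A, v, y, alpha)
        if sum(x * x for x in g) < tol:                # stopping criterion: small gradient
            break
        v = [x - beta * gx for x, gx in zip(v, g)]     # step 4: v_{n+1} = v_n - beta * J'(v_n)
    return v

A = [[1.0, 0.3], [0.2, 0.8]]
y = [1.0, 0.5]
v_opt = steepest_descent(A, y, alpha=0.1)
```

With a fixed relaxation parameter `beta` small enough relative to the largest eigenvalue of the Hessian, the cost decreases monotonically, and `v_opt` approximately solves the normal equations of this toy problem, (AᵀA + αI)v = Aᵀy.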
Concerning the choice of the relaxation parameter, there are several possibilities; these can be found in any optimization textbook.
A standard stopping criterion can be adopted for the iteration process, for example, requiring that successive values of the cost functional, successive controls, or the norm of the gradient become sufficiently small.
Lemma 4.1. The cost functional (1.1) is strongly convex, with the strong convexity constant determined by the regularization parameter.
From the definition of a strongly convex functional, we can see that the cost functional (1.1) is strongly convex with this constant.
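The standard definition referred to here is the following: a functional $J$ is strongly convex with constant $\chi > 0$ if, for all controls $v, w$ and all $\lambda \in [0,1]$,

```latex
J\big( \lambda v + (1-\lambda) w \big)
\;\le\;
\lambda\, J(v) + (1-\lambda)\, J(w)
\;-\; \lambda (1-\lambda)\, \frac{\chi}{2}\, \| v - w \|^2 .
```

For a functional containing the penalization term $\alpha \|v\|^2$, this holds with $\chi = 2\alpha$: the quadratic penalization term satisfies the inequality with equality at that constant, and the remaining part of (1.1) is convex.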
So, we can give the following theorem, which states the convergence of the minimizing sequence to the optimal solution.
Proof. Taking the optimal control as one argument in the definition of the strongly convex functional, we obtain a lower bound for the cost in terms of the distance to the optimal control. Since the cost values of the minimizing sequence converge to the minimum value, we find that the minimizing sequence converges to the optimal control in norm.
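The step used in this argument is the standard consequence of strong convexity (with $v_*$ the minimizer, $J_* = J_\alpha(v_*)$, and $\chi$ the strong convexity constant; this notation is assumed, not recovered from the original):

```latex
J_\alpha(w) \;\ge\; J_* + \frac{\chi}{2}\, \| w - v_* \|^2
\quad\Longrightarrow\quad
\| v_n - v_* \|^2 \;\le\; \frac{2}{\chi}\, \big( J_\alpha(v_n) - J_* \big) \;\longrightarrow\; 0 .
```

The left inequality follows from the strong convexity definition by taking the convex combination of $v_*$ and $w$ and letting the combination parameter tend to one.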
5. Numerical Examples
Example 5.1. Let us consider a first problem of minimizing a cost functional of the form (1.1) subject to a state problem of the form (1.2). We rewrite the functional in the quadratic form of Section 2 and, starting from a chosen initial element, generate the minimizing sequence; here the relaxation parameter is chosen to assure the descent inequality of Section 4. Using a stopping criterion on successive cost values, we get the minimizing element after 250 iterations, together with the corresponding values of the two terms of the cost functional. For different values of the regularization parameter, the cost values and the optimal controls are given in Table 1.
Example 5.2. We consider a second problem of minimizing a cost functional of the form (1.1) subject to a state problem of the form (1.2). We can again rewrite the cost functional in quadratic form. Taking a chosen regularization parameter and initial element, we obtain a minimizing sequence; the relaxation parameter and the stopping criterion are chosen as in Example 5.1.
The optimal control function is obtained after 37 iterations, together with the corresponding values of the two terms of the cost functional. For different values of the regularization parameter, the cost values and the optimal controls are given in Table 2.
6. Conclusions
In a hyperbolic problem, the initial condition can be controlled toward given targets using the Poincaré norm. The Lagrange multiplier is obtained from the solution of the adjoint problem, and the symbolic optimal control function is easily obtained in the numerical examples.
References
1. J.-L. Lions, Optimal Control of Systems Governed by Partial Differential Equations, vol. 170 of Die Grundlehren der mathematischen Wissenschaften, translated from the French by S. K. Mitter, Springer, New York, NY, USA, 1971.
2. M. Negreanu and E. Zuazua, “Uniform boundary controllability of a discrete 1-D wave equation,” Systems & Control Letters, vol. 48, no. 3-4, pp. 261–279, 2003.
3. L. I. Bloshanskaya and I. N. Smirnov, “Optimal boundary control by an elastic force at one end and a displacement at the other end for an arbitrary sufficiently large time interval in the string vibration problem,” Differential Equations, vol. 45, no. 6, pp. 878–888, 2009.
4. A. Smyshlyaev and M. Krstic, “Boundary control of an anti-stable wave equation with anti-damping on the uncontrolled boundary,” Systems & Control Letters, vol. 58, no. 8, pp. 617–623, 2009.
5. X. Feng, S. Lenhart, V. Protopopescu, L. Rachele, and B. Sutton, “Identification problem for the wave equation with Neumann data input and Dirichlet data observations,” Nonlinear Analysis, vol. 52, no. 7, pp. 1777–1795, 2003.
6. A. Münch, P. Pedregal, and F. Periago, “Optimal design of the damping set for the stabilization of the wave equation,” Journal of Differential Equations, vol. 231, no. 1, pp. 331–358, 2006.
7. J. D. Benamou, “Domain decomposition, optimal control of systems governed by partial differential equations, and synthesis of feedback laws,” Journal of Optimization Theory and Applications, vol. 102, no. 1, pp. 15–36, 1999.
8. F. Periago, “Optimal shape and position of the support for the internal exact control of a string,” Systems & Control Letters, vol. 58, no. 2, pp. 136–140, 2009.
9. O. A. Ladyzhenskaya, Boundary Value Problems of Mathematical Physics, vol. 49 of Applied Mathematical Sciences, Springer, New York, NY, USA, 1985.
10. E. Zeidler, Nonlinear Functional Analysis and Its Applications. III: Variational Methods and Optimization, Springer, New York, NY, USA, 1985.