Abstract

The present paper deals with a necessary optimality condition for a class of distributed parameter systems modeled in one space dimension by a hyperbolic partial differential equation with damping and mixed constraints on the state and the controls. The Pontryagin maximum principle is derived as a necessary condition for the controls of such systems to be optimal. With the aid of some convexity assumptions on the constraint functions, it is shown that the maximum principle is also a sufficient condition for optimality.

1. Introduction

It is well known that many processes in science and engineering are modeled by partial differential equations. Problems concerning the control of vibrating systems are generally governed by hyperbolic partial differential equations, which are obtained from conservation laws describing the distributed parameter system. In order to control such systems, the derivation of necessary conditions in the form of a maximum principle has been studied since the 1960s.

The maximum principle was introduced for the first time by Pontryagin and his students as a necessary condition for the optimality of a mechanical system defined by ordinary differential or differential-difference equations [1]. Pontryagin’s maximum principle is given in the form of a Hamiltonian that is defined in terms of an adjoint variable and the control function. The first application of the maximum principle was the maximization of the terminal velocity of a rocket. Following this initial work, it was shown that the maximum principle, together with appropriate transversality conditions, is also a sufficient condition if the value of the Hamiltonian maximized over the controls is concave in the state variables [2]. Necessary conditions of optimality for distributed parameter systems described by boundary-value problems for hyperbolic and parabolic equations are studied in [3], where a completeness assumption on the class of admissible controls is imposed. In [4], a necessary condition for optimality is derived in a form similar to Pontryagin’s maximum principle without requiring the set of admissible control functions to be bounded or closed; these necessary conditions are expressed in terms of certain “generalized Jacobians.” For a general control problem formulated in terms of a differential inclusion, a maximum principle is obtained under a weak pseudo-Lipschitz behavior postulated on the underlying multifunction [5]. A necessary condition in the form of the Pontryagin maximum principle is derived in [6] by adopting the Dubovitskiy-Milyutin functional-analytic approach. In [7], Barnes presents a maximum principle as a necessary condition for the optimal control of a vibrating system modeled by a second-order linear hyperbolic PDE, where the completeness assumption is dropped and the notion of a regular point is used. Furthermore, Barnes shows that, under certain convexity assumptions on the constraint functions, the maximum principle is also a sufficient condition for the optimality of distributed parameter systems. Other related theoretical studies of the maximum principle are available in the literature, such as [8–25]. In [17], the necessary and sufficient optimality conditions are derived for a single partial differential equation without a damping term subject to homogeneous boundary conditions in one space dimension. The systems that involve only one control function are studied in [1–25], while the studies in [26–30] examine systems with multiple control functions, introduced because of the size of the structures or to increase the control efficiency.

In the present paper, inspired by [7], a necessary optimality condition is given for a hyperbolic partial differential equation in one space dimension. The system involves damping and several control functions and is subject to mixed integral constraints on the state and control functions. The partial differential equation under consideration involves spatial derivatives of at most order four. The main goal of the control problem considered here is to minimize the performance index of the system over a given period of time, with the control functions and the state variable subject to constraints in the form of integral equalities and/or inequalities. Under proper convexity assumptions on the constraint functions, the maximum principle is shown to be also a sufficient condition for a general class of hyperbolic partial differential equations in one space dimension.

2. Mathematical Formulation of the Problem

Let us consider the following partial differential equation [31]: where is the transversal displacement at , is the space variable, is the time variable, is the external excitation, is the mass per unit length of the beam, is the predetermined terminal time, and , are the control functions, in which and are continuous functions on , and are positive-definite operators, and is the damping operator. Equation (1) is subject to the following boundary conditions: where , , and are continuous functions on , and to the initial conditions in which The following assumptions are made: , , , is the closure of ; , , , ; the set of admissible control functions is given by in which denotes the Hilbert space of real-valued square-integrable functions on the domain in the Lebesgue sense, with the usual inner product and norm defined by respectively. Under these assumptions, the system (1)–(4) has a solution [32].
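Since the displayed formulas are not reproduced above, a representative example may clarify the setting. Under the illustrative assumption of an Euler-Bernoulli-type beam of length $\ell$ with viscous damping and $m$ distributed actuators (all symbols below are hypothetical and are not necessarily the authors' notation), a governing equation of the kind described reads
\[
\mu(x)\,\frac{\partial^{2} w}{\partial t^{2}} + \gamma\,\frac{\partial w}{\partial t}
+ \frac{\partial^{2}}{\partial x^{2}}\!\left( EI(x)\,\frac{\partial^{2} w}{\partial x^{2}} \right)
= f(x,t) + \sum_{i=1}^{m} u_{i}(t)\, g_{i}(x),
\qquad 0 < x < \ell,\; 0 < t < t_{f},
\]
where $w(x,t)$ is the transversal displacement, $\mu(x)>0$ is the mass per unit length, $\gamma\,\partial_{t}$ plays the role of the damping operator, $EI(x)>0$ models the positive-definite stiffness, $f$ is the external excitation, and $u_{1},\dots,u_{m}$ are the control functions. The usual $L^{2}$ inner product and norm referred to in the assumptions are
\[
\langle \phi,\psi\rangle = \int_{\Omega} \phi\,\psi \, d\Omega,
\qquad \|\phi\| = \langle \phi,\phi\rangle^{1/2}.
\]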

3. Formulation of the Control Problem

The optimal control problem aims to determine optimal control functions that minimize the performance index at the terminal time : The first two terms on the right-hand side of (8) denote the modified energy of the system, and the last term represents the control effort spent over the control duration. The th admissible control function is subject to (1)–(4) and the following constraints: in which , , , , , and , for , , are continuous functions of their arguments. Also, , , , and , for , , have continuous derivatives with respect to , and , have continuous derivatives with respect to .
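As an illustration only (the notation below is hypothetical, following the sketch given after the formulation in Section 2 rather than the authors' own symbols), a performance index of the type described, consisting of the modified terminal energy plus the control effort, can be written as
\[
J(u_{1},\dots,u_{m})
= \frac{1}{2}\int_{0}^{\ell}\!\left[ \mu(x)\, w_{t}^{2}(x,t_{f}) + EI(x)\, w_{xx}^{2}(x,t_{f}) \right] dx
+ \frac{1}{2}\sum_{i=1}^{m}\int_{0}^{t_{f}} \kappa_{i}\, u_{i}^{2}(t)\, dt,
\]
with mixed integral constraints on the state and the controls of the generic form
\[
\int_{0}^{t_{f}}\!\!\int_{0}^{\ell} h_{j}\!\left(x,t,w,w_{t}\right) dx\, dt
+ \sum_{i=1}^{m}\int_{0}^{t_{f}} k_{ji}\!\left(t,u_{i}(t)\right) dt \;\le\; c_{j}
\quad\text{or}\quad = c_{j}, \qquad j = 1,\dots,q,
\]
where the weights $\kappa_{i}>0$ and the constraint data $h_{j}$, $k_{ji}$, $c_{j}$ are all illustrative placeholders.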

4. Necessary and Sufficient Conditions for Optimality

In this section, a necessary condition of optimality is derived in the form of the maximum principle. Also, under proper convexity assumptions on the constraint functions, it is shown that the maximum principle is also a sufficient condition of optimality. For convenience, let us assume that where is the term related to the th control function in . In order to obtain the maximum principle, an adjoint variable along with the adjoint operator is introduced. The adjoint variable satisfies the following equation: where we introduce the Lagrange multiplier and Equation (11) is subject to the following homogeneous boundary conditions: and the terminal conditions
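Although the adjoint equation (11) is not reproduced above, its typical structure in problems of this kind can be sketched in the illustrative notation already introduced (this is an assumed form, not the authors' equation). Supposing, for simplicity of the sketch, that the constraint integrands $h_{j}$ depend on the state $w$ only, the adjoint variable $v(x,t)$ satisfies the formally adjoint equation with the sign of the damping term reversed,
\[
\mu(x)\, v_{tt} - \gamma\, v_{t} + \left( EI(x)\, v_{xx} \right)_{xx}
= \sum_{j=1}^{q} \lambda_{j}\, \frac{\partial h_{j}}{\partial w}(x,t,w),
\]
run backward in time from terminal conditions at $t = t_{f}$ that link $v$ and $v_{t}$ to the gradient of the terminal cost, together with homogeneous versions of the boundary conditions; here $\lambda_{0},\lambda_{1},\dots,\lambda_{q}$ denote the Lagrange multipliers.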

Let us introduce a special perturbation of the optimal control problem and three lemmas in order to derive the maximum principle. Suppose that , are optimal control functions corresponding to the optimal displacement . Let be arbitrary points in the open region and let , , be arbitrary subfunctions for every admissible control function , . Also, let us assume that . Choose such that , if , , and , for each . Let be real parameters satisfying . Let and be for . Hence, the intervals and the rectangles have no intersection for , respectively. denotes the vector ; is the -dimensional Euclidean space, and the norm of is given by . The admissible controls are defined by for .
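The perturbation just described is the classical needle (spike) variation. In hypothetical notation, with $\varepsilon = (\varepsilon_{1},\dots,\varepsilon_{m})$ and pairwise disjoint intervals $[t_{i},\, t_{i}+\varepsilon_{i}) \subset (0, t_{f})$, the perturbed controls take the form
\[
u_{i}^{\varepsilon}(t) =
\begin{cases}
v_{i}, & t \in [t_{i},\, t_{i}+\varepsilon_{i}),\\
u_{i}^{*}(t), & \text{otherwise},
\end{cases}
\qquad i = 1,\dots,m,
\]
so that each optimal control $u_{i}^{*}$ is replaced by an admissible value $v_{i}$ only on a short interval whose length is governed by the parameter $\varepsilon_{i}$.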

Lemma 1. Let satisfy the system given by (1)–(4) corresponding to the controls . Consider the following difference functions, in which Note that satisfies the following equation: and the following homogeneous boundary conditions: and also the zero initial conditions: Then, is a quantity such that

Proof. We start the proof by examining the following energy integral: which, following [33], can be rewritten as Integrating by parts and using the homogeneous boundary conditions given by (19a) and (19b), (24) becomes Note that the foregoing inequality is also satisfied if is a non-self-adjoint operator. By applying the Cauchy-Schwarz inequality to the space integral, we obtain Taking the supremum of both sides of (26) leads to where is a quantity such that By means of (27), the following inequality is observed for each : Because [34], the following equality is obtained: Since the coefficients of (1) are bounded away from zero, the conclusion of Lemma 1 is obtained from (30). It follows from Lemma 1 that Namely, (1)–(4) have a unique solution.
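The chain of estimates in this proof is the standard energy argument; in the illustrative notation used earlier it can be summarized as follows. For the difference $z = w^{\varepsilon} - w^{*}$ between the perturbed and optimal responses, one works with the energy
\[
E(t) = \frac{1}{2}\int_{0}^{\ell}\!\left[ \mu(x)\, z_{t}^{2}(x,t) + EI(x)\, z_{xx}^{2}(x,t) \right] dx,
\]
differentiates it in time, integrates by parts using the homogeneous boundary conditions, drops the nonpositive damping contribution, and applies the Cauchy-Schwarz inequality to obtain a bound of the form
\[
\sup_{0 \le t \le t_{f}} E(t) \;\le\; C\, |\varepsilon|^{2},
\]
so that $z$ and $z_{t}$ are of order $|\varepsilon|$ in the $L^{2}$ norm, which is, in substance, what Lemma 1 asserts.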

Let us define the differential operator and its adjoint operator as follows:

Lemma 2. Let and be two functions which are defined in . Also, let us assume that and satisfy conditions (13a) and (13b) and (19a) and (19b)-(20), respectively. Then,

Proof. The reader is referred to [31].
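The identity asserted by Lemma 2 is a Green-type (bilinear) identity obtained by repeated integration by parts in space and time. In the illustrative notation, with $L$ the governing operator and $L^{*}$ its formal adjoint, it takes the form
\[
\int_{0}^{t_{f}}\!\!\int_{0}^{\ell} \left( v\, L w - w\, L^{*} v \right) dx\, dt
= \left[\text{boundary terms in } x\right] + \left[\text{terms at } t = 0 \text{ and } t = t_{f}\right],
\]
and the boundary and terminal contributions on the right-hand side vanish when the two functions satisfy the conditions (13a), (13b) and (19a), (19b)-(20) referred to in the statement of the lemma.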

Definition 3. For arbitrary constants , , is the solution of the system of equations (11)–(14a) and (14b). Let be the response function corresponding to the optimal control functions , , and let be any of the following functions: A point is called a regular point, for each , if it satisfies the following equality for any sufficiently small : It can be concluded from [35] that all points of are regular for each .

Let and denote the vector-valued functional , for , and the set

Definition 4. If there exists a surface of the following form: in for sufficiently small , where are any finite collection of vectors from , then the set is called a derived set of the set at [36].

Lemma 5. Assume that the points are regular points in , for , and is introduced as follows, for any , :
If , for , then there exist constants (not all zero) such that in which the ’s and ’s are the functions defined in (15).

Proof. Define the functionals on the class of admissible controls by
In order to use the Lagrange multiplier rule, we must construct a derived set for the set at [36]. To this end, functions are introduced, for , , satisfying and, for and , satisfying For each point, is defined as follows: To show that is a derived set for at , let be an arbitrary finite collection of vectors from . We must show that there exist points , subject to the vector parameter for all sufficiently small positive values of , such that Since , , there exist regularity points of and subfunctions such that To show that can be written as where is the admissible control given by (15), for , we observe that In order to obtain (47), we use the fact that is regular at each of the points and we use the conclusion of Lemma 1. If the following equality is substituted into (33), it is observed that By means of (43) and (47), we can write where denotes the th component of . For , we have in which . Considering (33) and (41a), (41b), (41c), (41d), (41e), and (41f), it is observed that If (52) is substituted into (51), (50) is obtained for , . For , (50) can be obtained by using (33)–(41a), (41b), (41c), (41d), (41e), and (41f). By the definition , one obtains where is taken as . This completes the proof that is a derived set for at . Then, there exist nonpositive Lagrange multipliers [36], not all zero simultaneously, satisfying for any vector in . Let us take in the foregoing discussion to obtain the conclusion of Lemma 5 and put By (50), it follows that for any in . Then, the conclusion of Lemma 5 is obtained as follows:
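In hypothetical notation, the multiplier rule invoked here from [36] states that, once $D$ is a derived set for the vector-valued functional $G = (G_{0}, G_{1}, \dots, G_{q})$ at the optimal point, there exist Lagrange multipliers $\lambda_{0}, \lambda_{1}, \dots, \lambda_{q} \le 0$, not all zero, such that
\[
\sum_{j=0}^{q} \lambda_{j}\, c_{j} \;\le\; 0
\qquad \text{for every vector } c = (c_{0}, c_{1}, \dots, c_{q}) \in D,
\]
and it is this inequality, evaluated on the vectors built from the needle variations, that yields the conclusion of Lemma 5.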

Theorem 6 (maximum principle). For the optimal control functions , the corresponding optimal state and adjoint variable satisfy (1)–(4) and (11), with boundary conditions (13a) and (13b) and terminal conditions (14a) and (14b), respectively. The maximum principle states that if where the Hamiltonian is given by then the performance index (8) is minimized; that is,

Proof. Let be a regular point for the admissible optimal control functions . By Lemma 5, for and some , there exist Lagrange multipliers independent of , with , such that for any function . Note that the term reaches its maximum value at , for . Let us consider the first term in (62), which can be rewritten in the form subject to If we define , we obtain the conclusion of the maximum principle. This completes the proof of Theorem 6.
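For orientation, in the illustrative notation of the earlier sketches, a pointwise maximum condition of the kind furnished by Theorem 6 would read
\[
H\!\left(t, u_{1}^{*}(t), \dots, u_{m}^{*}(t), v\right)
\;\ge\;
H\!\left(t, u_{1}, \dots, u_{m}, v\right)
\qquad \text{for almost every } t \in [0, t_{f}]
\]
and for all admissible control values $(u_{1},\dots,u_{m})$, with a Hamiltonian of the generic form
\[
H = \sum_{i=1}^{m} u_{i} \int_{0}^{\ell} v(x,t)\, g_{i}(x)\, dx
+ \lambda_{0}\, \frac{1}{2}\sum_{i=1}^{m} \kappa_{i}\, u_{i}^{2}
+ \sum_{j=1}^{q} \lambda_{j}\, k_{j}\!\left(t, u_{1},\dots,u_{m}\right);
\]
this is an assumed form consistent with the sketches above, not the authors' own expression for the Hamiltonian.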

Theorem 7. Consider the control system given by (1)–(4), (8), and (9a), (9b), (9c), and (9d). Let the functions be of the form and let , satisfying (13a) and (13b) and (14a) and (14b), be the nonzero solution of Assume that there exist admissible control functions and constants , , , that satisfy the maximum principle (58). Let us assume that the following assumptions are satisfied:
(a) for , are convex functions of and , are convex functions of ;
(b) , , for , ;
(c) the constraints (9a), (9b), (9c), and (9d) are satisfied by ;
(d) if the strict inequality holds in (9a), (9b), (9c), and (9d), then the corresponding Lagrange multiplier ;
(e) , are convex functions of and is a convex function of , for .
Under these assumptions, the maximum principle given by (58) is also a sufficient condition for the admissible control functions to be optimal. Condition (d) is proved in [36]. If the , , and are linear functions of and , respectively, then condition (e) is satisfied, for , .

Proof. If the ’s and satisfy (9a), (9b), (9c), and (9d), then, by condition (d), for , . Then, we can write the following inequality: By using the convexity assumption (e), and finally, by applying Lemma 2 and the conditions (13a) and (13b) and (14a) and (14b), we obtain Note that the right-hand side of (72) is nonnegative owing to condition (b). Then, we obtain It follows that the maximum principle is also a sufficient condition for a global minimum of the performance index (8). This completes the proof of Theorem 7.
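The key step in the sufficiency argument is the elementary gradient inequality for convex differentiable functions. In hypothetical notation, if $\phi$ is convex and differentiable, then
\[
\phi(u) - \phi(u^{*}) \;\ge\; \phi'(u^{*})\left( u - u^{*} \right),
\]
which is applied here to the cost and constraint integrands covered by assumption (e); combining these inequalities with the sign conditions on the multipliers, the maximum condition (58), and the Green-type identity of Lemma 2 yields $J(u_{1},\dots,u_{m}) \ge J(u_{1}^{*},\dots,u_{m}^{*})$ for every admissible control, that is, a global minimum at the controls satisfying the maximum principle.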

5. Conclusion

A necessary optimality condition for a general class of damped hyperbolic partial differential equations in one space dimension is derived in the form of a maximum principle by using the concepts of a derived set and a regular point. Under proper convexity assumptions on the state variable, it is proved that the maximum principle is also a sufficient condition for the controls to be optimal. In [17], the systems under consideration include only one control function; moreover, the necessary and sufficient conditions for optimality are given there for a single hyperbolic differential equation without a damping term, subject to homogeneous boundary conditions in one space dimension. In the present paper, as an original contribution to the literature, the necessary and sufficient optimality conditions obtained in [17] are generalized to a general class of damped hyperbolic equations involving several control functions and subject to nonhomogeneous boundary conditions in one space dimension.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.