Abstract and Applied Analysis
Volume 2013 (2013), Article ID 946910, 9 pages
Maximum Principle in the Optimal Control Problems for Systems with Integral Boundary Conditions and Its Extension
1Meshkin Shahr Branch, Islamic Azad University, Meshkin Shahr 5661668691, Iran
2Baku State University, Z. Khalilov Street 23, 1148 Baku, Azerbaijan
3Institute of Cybernetics, Azerbaijan National Academy of Sciences, B. Vahab-Zade Street 9, 1141 Baku, Azerbaijan
Received 19 April 2013; Revised 14 June 2013; Accepted 1 July 2013
Academic Editor: Nazim Idrisoglu Mahmudov
Copyright © 2013 A. R. Safari et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
An optimal control problem with an integral boundary condition is considered. A sufficient condition is established for the existence and uniqueness of the solution of a class of integral boundary value problems for fixed admissible controls. A first-order necessary optimality condition is obtained in the traditional form of the maximum principle. The second-order variations of the functional are calculated, and, using variations of the controls, various second-order optimality conditions are obtained.
Boundary value problems with integral conditions constitute a very interesting and important class of boundary value problems. They include two-, three-, and multipoint and nonlocal boundary value problems as special cases (see [1–3]). The theory of boundary value problems with integral boundary conditions for ordinary differential equations arises in different areas of applied mathematics and physics. For example, problems of heat conduction, thermoelasticity, chemical engineering, plasma physics, and underground water flow can be reduced to nonlocal problems with integral boundary conditions. For boundary value problems with nonlocal boundary conditions and comments on their importance, we refer the reader to the papers [4, 5] and the references therein.
The Pontryagin maximum principle plays a critical role in any study of optimal processes with control constraints. The simplicity of its formulation, together with its directness and practical usefulness, has made it extraordinarily attractive and one of the major sources of new directions in the mathematical sciences. By nature, the maximum principle is a necessary first-order optimality condition, since it arose as an extension of the Euler-Lagrange and Weierstrass necessary conditions of the calculus of variations.
At present, there exists a great number of works devoted to the derivation of first- and second-order necessary optimality conditions for systems with local conditions (see [6–12] and the references therein).
Since systems with nonlocal conditions describe many real processes, it is natural to study optimal control problems with nonlocal boundary conditions.
Optimal control problems with nonlocal boundary conditions have been investigated in [13–25]. Note that optimal control problems with an integral boundary condition are considered, and first-order necessary conditions are obtained, in [23–25]. In certain cases, the first-order optimality conditions degenerate and are fulfilled trivially on a set of admissible controls. In such cases, it is necessary to obtain second-order optimality conditions.
In the present paper, we investigate an optimal control problem in which the state of the system is described by differential equations with integral boundary conditions. Note that this problem is a natural generalization of the Cauchy problem. The existence and uniqueness of solutions of the boundary value problem are investigated, and the first- and second-order increment formulas for the functional are derived. Using variations of the controls, various first- and second-order optimality conditions are obtained.
The organization of the present paper is as follows. First, we give the statement of the problem. Second, theorems on the existence and uniqueness of a solution of problem (1)–(3) are established under sufficient conditions on the nonlinear terms. Third, the first-order increment formula for the functional is presented, and Pontryagin's maximum principle is proved. Fourth, the first- and second-order variations of the functional are given. Fifth, the Legendre-Clebsch condition is obtained. Finally, a conclusion is given.
Consider the following system of differential equations with integral boundary condition:
$$\dot{x}(t)=f(t,x,u),\quad t\in[0,T],\tag{1}$$
$$x(0)+\int_{0}^{T}m(t)\,x(t)\,dt=C,\tag{2}$$
$$u(t)\in U,\quad t\in[0,T],\tag{3}$$
where $f(t,x,u)$ is an $n$-dimensional continuous function that has second-order derivatives with respect to $(x,u)$; $C$ is the given constant vector; $m(t)$ is an $n\times n$ matrix function; $u$ is a control parameter; and $U\subset\mathbb{R}^{r}$ is a bounded set. The quality of an admissible process is estimated by the functional
$$J(u)=\varphi\bigl(x(0),x(T)\bigr)+\int_{0}^{T}F(t,x,u)\,dt\to\min.\tag{4}$$
Here, it is assumed that the scalar functions $\varphi(x(0),x(T))$ and $F(t,x,u)$ are continuous in their own arguments and have continuous and bounded partial derivatives with respect to $x(0)$, $x(T)$, $x$, and $u$ up to second order, inclusive. By a solution of boundary value problem (1)–(3) corresponding to a fixed control parameter $u$, we understand a function $x(t)$, absolutely continuous on $[0,T]$, that satisfies equation (1) almost everywhere and condition (2). Denote the space of such functions by $AC([0,T];\mathbb{R}^{n})$. By $C([0,T];\mathbb{R}^{n})$, we denote the space of continuous functions on $[0,T]$ with values in $\mathbb{R}^{n}$. It is obvious that this is a Banach space with the norm $\|x\|_{C}=\max_{t\in[0,T]}\|x(t)\|$, where $\|\cdot\|$ is the norm in the space $\mathbb{R}^{n}$.
Admissible controls are taken from the class of bounded measurable functions with values in the set $U$. An admissible control $u(t)$ together with the corresponding solution $x(t)$ of (1), (2) is called an admissible process.
Introduce the following conditions:
(1) The function $f(t,x,u)$ satisfies the Lipschitz condition $\|f(t,x_{1},u)-f(t,x_{2},u)\|\le L\|x_{1}-x_{2}\|$, where $L>0$ is a constant.
(2) $m(t)$ is a continuous matrix function, and there exists the constant $M=\max_{t\in[0,T]}\|m(t)\|$.
(3) $\bigl\|\int_{0}^{T}m(t)\,dt\bigr\|<1$, where the matrix $A=E+\int_{0}^{T}m(t)\,dt$ and $E$ is the unit matrix.
Theorem 1. Let conditions (1)–(3) be fulfilled. Then, for each fixed admissible control, the function $x(t)$ is a solution of boundary value problem (1)–(3) if and only if it satisfies the integral equation
$$x(t)=A^{-1}C+\int_{0}^{T}K(t,s)\,f(s,x(s),u(s))\,ds.\tag{8}$$
Proof. Note that under condition (3), the matrix $A=E+\int_{0}^{T}m(t)\,dt$ is invertible and the estimate $\|A^{-1}\|\le\bigl(1-\|\int_{0}^{T}m(t)\,dt\|\bigr)^{-1}$ holds [26, page 78]. If $x(t)$ is a solution of differential equation (1), then for $t\in[0,T]$,
$$x(t)=x(0)+\int_{0}^{t}f(s,x(s),u(s))\,ds,\tag{9}$$
where $x(0)$ is still an arbitrary constant. For determining $x(0)$, we require that the function $x(t)$ defined by equality (9) satisfies condition (2):
$$x(0)+\int_{0}^{T}m(t)\Bigl[x(0)+\int_{0}^{t}f(s,x(s),u(s))\,ds\Bigr]dt=C.\tag{10}$$
Since $A=E+\int_{0}^{T}m(t)\,dt$, then
$$A\,x(0)=C-\int_{0}^{T}m(t)\int_{0}^{t}f(s,x(s),u(s))\,ds\,dt.\tag{11}$$
Equality (11) may be written in the following equivalent form:
$$x(0)=A^{-1}C-A^{-1}\int_{0}^{T}m(t)\int_{0}^{t}f(s,x(s),u(s))\,ds\,dt.\tag{12}$$
Now, considering the value of $x(0)$ defined by (12) in (9), we get
$$x(t)=A^{-1}C-A^{-1}\int_{0}^{T}m(\tau)\int_{0}^{\tau}f(s,x(s),u(s))\,ds\,d\tau+\int_{0}^{t}f(s,x(s),u(s))\,ds.\tag{13}$$
Changing the order of integration in the double integral, one can write the last equality as
$$x(t)=A^{-1}C-A^{-1}\int_{0}^{T}\Bigl(\int_{s}^{T}m(\tau)\,d\tau\Bigr)f(s,x(s),u(s))\,ds+\int_{0}^{t}f(s,x(s),u(s))\,ds.\tag{14}$$
Introduce the matrix function
$$K(t,s)=\begin{cases}E-A^{-1}\displaystyle\int_{s}^{T}m(\tau)\,d\tau,& 0\le s\le t,\\[4pt] -A^{-1}\displaystyle\int_{s}^{T}m(\tau)\,d\tau,& t<s\le T.\end{cases}\tag{15}$$
Then, (14) turns into (8).
Thus, we have shown that every solution of boundary value problem (1)–(3) may be written in the form of integral equation (8). By direct verification, one can show that every solution of integral equation (8) also satisfies boundary value problem (1)–(3). Theorem 1 is proved.
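The equivalence just established can be sanity-checked numerically in a scalar special case. The sketch below is an illustration only: it assumes the notation $\dot{x}=f$, $x(0)+\int_0^T m(t)x(t)\,dt=C$, $A=E+\int_0^T m\,dt$, and the piecewise kernel $K(t,s)$ built from $\int_s^T m(\tau)\,d\tau$, and it takes a right-hand side $g(t)$ independent of $x$ so that both sides of the representation can be computed directly:

```python
import numpy as np

# Scalar sanity check of the kernel representation (8)/(15), assuming the
# notation of this section and a right-hand side f = g(t) independent of x:
#   x'(t) = g(t),   x(0) + ∫_0^T m(t) x(t) dt = C.

T, N = 2.0, 2000
t = np.linspace(0.0, T, N + 1)
dt = T / N

m = np.full(N + 1, 0.2)       # ∫_0^T m dt = 0.4 < 1, so A = 1.4 is invertible
g = np.cos(t)                 # right-hand side, independent of x
C = 3.0

def cum(y):                   # trapezoidal antiderivative ∫_0^t y ds
    return np.concatenate(([0.0], np.cumsum((y[1:] + y[:-1]) / 2 * dt)))

A = 1.0 + cum(m)[-1]          # scalar analogue of A = E + ∫_0^T m dt

# Direct solution: x(t) = x(0) + ∫_0^t g, x(0) fixed by the integral condition.
G = cum(g)
x0 = (C - cum(m * G)[-1]) / A
x_direct = x0 + G

# Kernel representation: x(t) = A^{-1} C + ∫_0^T K(t,s) g(s) ds.
tail = cum(m)[-1] - cum(m)    # ∫_s^T m(τ) dτ as a function of s
x_kernel = np.empty_like(t)
for i in range(N + 1):
    K = np.where(t <= t[i], 1.0 - tail / A, -tail / A)  # K(t_i, s) on the s-grid
    x_kernel[i] = C / A + cum(K * g)[-1]

err = np.max(np.abs(x_kernel - x_direct))
```

The two constructions agree up to quadrature error, which illustrates the change of the order of integration used in passing from (13) to (14).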
For every fixed admissible control, define the operator $P:C([0,T];\mathbb{R}^{n})\to C([0,T];\mathbb{R}^{n})$ by the rule
$$(Px)(t)=A^{-1}C+\int_{0}^{T}K(t,s)\,f(s,x(s),u(s))\,ds.\tag{18}$$
Theorem 2. Let conditions (1)–(3) be fulfilled. Then, for any $C\in\mathbb{R}^{n}$ and for each fixed admissible control, boundary value problem (1)–(3) has a unique solution, and this solution satisfies the integral equation
$$x(t)=A^{-1}C+\int_{0}^{T}K(t,s)\,f(s,x(s),u(s))\,ds.\tag{19}$$
Proof. Let $x,y\in C([0,T];\mathbb{R}^{n})$, and let the admissible control $u$ be fixed. Consider the mapping $P$ defined by equality (18). Clearly, the fixed points of the operator $P$ are solutions of problem (1)-(2). We will use the Banach contraction principle to prove that $P$ defined by (18) has a fixed point. For any $x,y\in C([0,T];\mathbb{R}^{n})$, we have
$$\|Px-Py\|_{C}\le L\Bigl(\max_{0\le t\le T}\int_{0}^{T}\|K(t,s)\|\,ds\Bigr)\|x-y\|_{C}.\tag{21}$$
Estimate (21) shows that, under conditions (1)–(3), the operator $P$ is a contraction in the space $C([0,T];\mathbb{R}^{n})$. Therefore, according to the principle of contraction operators, the operator defined by equality (18) has a unique fixed point in $C([0,T];\mathbb{R}^{n})$. So, integral equation (19), and hence boundary value problem (1)–(3), has a unique solution. Theorem 2 is proved.
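The contraction argument of Theorem 2 can be illustrated numerically. The following sketch is not the paper's computation: it assumes a scalar instance of (1)-(2) with $f=-0.3x+u(t)$ and $m\equiv0.5$ on $[0,1]$, chosen so that the iteration map is a contraction, and solves the problem by Picard iteration on the equivalent integral operator:

```python
import numpy as np

# Illustrative scalar instance (an assumption for this sketch) of (1)-(2):
#   x'(t) = -0.3 x + u(t),   x(0) + ∫_0^T m(t) x(t) dt = C,
# solved by Picard iteration. With T = 1, m ≡ 0.5, Lipschitz constant 0.3,
# the iteration map is a contraction, so Banach's principle applies.

T, N = 1.0, 400
t = np.linspace(0.0, T, N + 1)
dt = T / N

m = np.full(N + 1, 0.5)              # ∫_0^T m dt = 0.5 < 1 (condition (3))
u = np.sin(2 * np.pi * t)            # a fixed admissible control
C = 1.0
f = lambda x: -0.3 * x + u           # right-hand side, Lipschitz in x

def cum(y):                          # trapezoidal antiderivative ∫_0^t y ds
    return np.concatenate(([0.0], np.cumsum((y[1:] + y[:-1]) / 2 * dt)))

A = 1.0 + cum(m)[-1]                 # scalar analogue of A = E + ∫_0^T m dt

x = np.zeros_like(t)
for k in range(100):                 # contraction => geometric convergence
    prim = cum(f(x))                 # ∫_0^t f(s, x(s), u(s)) ds
    x0 = (C - cum(m * prim)[-1]) / A # x(0) enforcing the integral condition
    x_new = x0 + prim
    if np.max(np.abs(x_new - x)) < 1e-12:
        x = x_new
        break
    x = x_new

bc_residual = x[0] + cum(m * x)[-1] - C
ode_residual = np.max(np.abs(np.gradient(x, t) - f(x)))
```

The iterates converge geometrically (here in a few dozen steps), and the limit satisfies both the differential equation and the integral boundary condition up to discretization error.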
3. First-Order Optimality Condition
In this section, we assume that $U$ is a closed set in $\mathbb{R}^{r}$. In order to obtain the necessary conditions for optimality, we will use the standard procedure: we analyze the change of the objective functional caused by a control impulse. In other words, we must derive the increment formula that originates from a Taylor series expansion. A suitable definition of the conjugate system will facilitate the extraction of the dominant term that is destined to determine the necessary condition for optimality. For the sake of simplicity, it is reasonable to construct a linearized model of nonlinear system (8), (9) in some small vicinity of the process under investigation.
3.1. Increment Formula
Let $\{u(t),x(t)\}$ and $\{\bar{u}(t)=u(t)+\Delta u(t),\,\bar{x}(t)=x(t)+\Delta x(t)\}$ be two admissible processes. The total increment $\Delta x(t)$ of the function $x(t)$ satisfies the boundary value problem
$$\Delta\dot{x}(t)=f(t,\bar{x},\bar{u})-f(t,x,u),\qquad \Delta x(0)+\int_{0}^{T}m(t)\,\Delta x(t)\,dt=0.\tag{22}$$
Then, we can represent the increment of the functional in the form
$$\Delta J(u)=\varphi\bigl(\bar{x}(0),\bar{x}(T)\bigr)-\varphi\bigl(x(0),x(T)\bigr)+\int_{0}^{T}\bigl[F(t,\bar{x},\bar{u})-F(t,x,u)\bigr]\,dt.\tag{23}$$
Let us introduce a nontrivial vector function $\psi(t)$ and a numerical vector $\lambda$, and define the Hamiltonian $H(\psi,x,u,t)=\psi^{\top}f(t,x,u)-F(t,x,u)$. Then, the increment of functional (4) may be represented as
$$\Delta J(u)=\varphi\bigl(\bar{x}(0),\bar{x}(T)\bigr)-\varphi\bigl(x(0),x(T)\bigr)-\int_{0}^{T}\bigl[H(\psi,\bar{x},\bar{u},t)-H(\psi,x,u,t)\bigr]\,dt+\int_{0}^{T}\psi^{\top}\Delta\dot{x}\,dt+\lambda^{\top}\Bigl(\Delta x(0)+\int_{0}^{T}m(t)\,\Delta x(t)\,dt\Bigr).\tag{24}$$
After the operations usually used in deriving first-order optimality conditions (expansion by the Taylor formula and integration by parts), for the increment of the functional we get
$$\Delta J(u)=\Bigl[\frac{\partial\varphi}{\partial x(0)}-\psi(0)+\lambda\Bigr]^{\top}\Delta x(0)+\Bigl[\frac{\partial\varphi}{\partial x(T)}+\psi(T)\Bigr]^{\top}\Delta x(T)-\int_{0}^{T}\Bigl[\dot{\psi}+\frac{\partial H}{\partial x}-m^{\top}(t)\lambda\Bigr]^{\top}\Delta x\,dt-\int_{0}^{T}\Delta_{\bar{u}}H(\psi,x,u,t)\,dt+\eta,\tag{25}$$
where $\Delta_{\bar{u}}H(\psi,x,u,t)=H(\psi,x,\bar{u},t)-H(\psi,x,u,t)$ and the remainder $\eta$ is of higher order with respect to $\|\Delta x\|$.
Suppose that the vector function $\psi(t)$ and the vector $\lambda$ form a solution of the following conjugate problem (the stationarity condition of the Lagrange function with respect to the state):
$$\dot{\psi}(t)=-\frac{\partial H}{\partial x}(\psi,x,u,t)+m^{\top}(t)\lambda,\qquad \psi(T)=-\frac{\partial\varphi}{\partial x(T)},\qquad \psi(0)=\frac{\partial\varphi}{\partial x(0)}+\lambda.\tag{27}$$
Then, increment formula (25) takes the form
$$\Delta J(u)=-\int_{0}^{T}\Delta_{\bar{u}}H(\psi,x,u,t)\,dt+\eta.\tag{28}$$
3.2. The Maximum Principle
Let us consider the formula for the increment of the functional on a needle-shaped variation of the admissible control. As parameters, we take a point $\theta\in[0,T)$, a number $\varepsilon>0$, and a vector $v\in U$. The variation interval $[\theta,\theta+\varepsilon)$ belongs to $[0,T)$. The needle-shaped variation of the control is given as follows:
$$u_{\varepsilon}(t)=\begin{cases}v,& t\in[\theta,\theta+\varepsilon),\\ u(t),& t\in[0,T]\setminus[\theta,\theta+\varepsilon).\end{cases}\tag{29}$$
A traditional form of the necessary optimality condition will follow from increment formula (28) if we show that on the needle-shaped variation the state increment has the order $O(\varepsilon)$.
Indeed, it follows from conditions (1)–(3) and equalities (19) and (22) that
$$\|\Delta x(t)\|\le K_{1}\varepsilon+K_{2}\int_{0}^{T}\|\Delta x(s)\|\,ds,\quad t\in[0,T],$$
for some constants $K_{1},K_{2}>0$. From this, we obtain
$$\|\Delta x(t)\|\le K\varepsilon,\quad t\in[0,T],$$
which proves our hypothesis on the response of the state increment caused by needle-shaped variation (29). This also implies that $\eta=o(\varepsilon)$. Therefore, according to (28), the change of the objective functional caused by needle-shaped variation (29) can be represented as
$$\Delta J(u)=-\varepsilon\bigl[H(\psi(\theta),x(\theta),v,\theta)-H(\psi(\theta),x(\theta),u(\theta),\theta)\bigr]+o(\varepsilon).$$
It should be noted that in the last expression we used the mean value theorem.
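The $O(\varepsilon)$ response of the state and the first-order change of the functional under a needle-shaped variation can be observed numerically. The sketch below is an illustration, not the paper's example: it takes the Cauchy special case $\dot{x}=-x+u$, $x(0)=1$ (i.e., $m\equiv0$ in the boundary condition), cost $J(u)=\int_0^1(x^2+u^2)\,dt$, baseline control $u\equiv0$, and a needle of height $v=1$ at $\theta=0.3$:

```python
import numpy as np

# Toy Cauchy special case (an assumption for this sketch):
#   x'(t) = -x + u,  x(0) = 1,  J(u) = ∫_0^1 (x^2 + u^2) dt,
# baseline u ≡ 0, needle-shaped variation u_eps = v on [θ, θ+ε).

T, N = 1.0, 20000
t = np.linspace(0.0, T, N + 1)
dt = T / N
theta, v = 0.3, 1.0

def simulate(u):
    """Explicit Euler for x' = -x + u, x(0) = 1."""
    x = np.empty_like(t)
    x[0] = 1.0
    for i in range(N):
        x[i + 1] = x[i] + dt * (-x[i] + u[i])
    return x

def cost(u):
    x = simulate(u)
    y = x * x + u * u
    return float(np.sum((y[1:] + y[:-1]) / 2) * dt)  # trapezoidal rule

u0 = np.zeros_like(t)
J0 = cost(u0)
x_base = simulate(u0)

results = []  # (eps, max |Δx|, ΔJ / eps)
for eps in (0.08, 0.04, 0.02):
    u = np.where((t >= theta) & (t < theta + eps), v, 0.0)
    dx_max = float(np.max(np.abs(simulate(u) - x_base)))
    results.append((eps, dx_max, (cost(u) - J0) / eps))
```

As $\varepsilon$ is halved, $\max_{t}|\Delta x(t)|$ scales like $\varepsilon$, and $\Delta J/\varepsilon$ approaches a limit, which is exactly the behavior used in passing from (28) to the maximum principle.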
Theorem 3 (maximum principle). Suppose that the admissible process $\{u(t),x(t)\}$ is optimal for problem (1)–(4) and $\{\psi(t),\lambda\}$ is the solution of conjugate boundary value problem (27) calculated on the optimal process. Then, for all $v\in U$ and for almost every $\theta\in[0,T)$, the following inequality holds:
$$H\bigl(\psi(\theta),x(\theta),v,\theta\bigr)\le H\bigl(\psi(\theta),x(\theta),u(\theta),\theta\bigr).\tag{36}$$
Remark 4. If the function $f$ is linear with respect to $(x,u)$ and the functions $\varphi$ and $F$ are convex with respect to their arguments, then maximum principle (36) is both a necessary and a sufficient optimality condition. This fact follows from the increment formula, whose higher-order terms are then nonnegative.
4. Variations of the Functional and Derivation of the Legendre-Clebsch Conditions
Let the set $U$ be open. Since the functions $f$, $\varphi$, and $F$ are continuous in their own arguments and have continuous and bounded partial derivatives with respect to $x$ and $u$ up to second order, inclusive, increment formula (28) takes an expanded form, (38), in which the second-order terms of the Taylor expansion are retained. Let now $\bar{u}(t)=u(t)+\varepsilon\,\delta u(t)$, where $\varepsilon>0$ is a rather small number and $\delta u(t)$ is some piecewise continuous function. Then, the increment of the functional for the fixed functions $u$, $\delta u$ is a function of the parameter $\varepsilon$. If the representation
$$\Delta J(u)=\varepsilon\,\delta J(u)+\frac{\varepsilon^{2}}{2}\,\delta^{2}J(u)+o(\varepsilon^{2})\tag{40}$$
is valid, then $\delta J(u)$ is called the first, and $\delta^{2}J(u)$ the second, variation of the functional. Further, we obtain explicit expressions for the first and second variations. To this end, we have to select in $\Delta J(u)$ the principal terms with respect to $\varepsilon$.
Assume that
$$\Delta x(t)=\varepsilon\,\delta x(t)+o(\varepsilon),\tag{41}$$
where $\delta x(t)$ is the variation of the trajectory. Such a representation exists, and for the function $\delta x(t)$, one can obtain an equation in variations. Indeed, by the definition of $\Delta x(t)$, we have the integral relation following from (19). Applying the Taylor formula to the integrand expression and noting that the resulting equality is true for any $\varepsilon$, we get
$$\delta x(t)=\int_{0}^{T}K(t,s)\Bigl[\frac{\partial f}{\partial x}\,\delta x(s)+\frac{\partial f}{\partial u}\,\delta u(s)\Bigr]ds.\tag{44}$$
Equation (44) is said to be an equation in variations. Obviously, integral equation (44) is equivalent to the following nonlocal boundary value problem:
$$\dot{\delta x}(t)=\frac{\partial f}{\partial x}\,\delta x+\frac{\partial f}{\partial u}\,\delta u,\tag{45}$$
$$\delta x(0)+\int_{0}^{T}m(t)\,\delta x(t)\,dt=0.\tag{46}$$
Assume that the solution of differential equation (45), determined by the Cauchy formula (47), satisfies boundary condition (46). Then, for the solutions of problem (45), (46), we get the following explicit formula:
$$\delta x(t)=\int_{0}^{T}R(t,s)\,\frac{\partial f}{\partial u}\bigl(s,x(s),u(s)\bigr)\,\delta u(s)\,ds,\tag{49}$$
where $R(t,s)$ is the resolvent kernel constructed from the fundamental matrix of (45) and the kernel $K(t,s)$.
Now, substituting (41) into (38), one may obtain the expansion of the increment in powers of $\varepsilon$; considering definition (40), we finally obtain explicit expressions for $\delta J(u)$ and $\delta^{2}J(u)$. It follows from (40) that the conditions
$$\delta J(u)=0,\qquad \delta^{2}J(u)\ge0\tag{54}$$
are fulfilled for the optimal control $u(t)$. From the first condition in (54), it follows that
$$\int_{0}^{T}\Bigl(\frac{\partial H}{\partial u}\Bigr)^{\top}\delta u(t)\,dt=0\tag{55}$$
for every piecewise continuous $\delta u(t)$.
Hence, one can prove that the following equality is fulfilled along the optimal control (see [11, p. 54]):
$$\frac{\partial H}{\partial u}\bigl(\psi(t),x(t),u(t),t\bigr)=0,\quad t\in[0,T],\tag{56}$$
and it is called the Euler equation. From the second condition in (54), it follows that along the optimal control the inequality $\delta^{2}J(u)\ge0$ holds for every piecewise continuous $\delta u(t)$; written out in terms of the second-order terms of the increment formula, this is inequality (57).
Inequality (57) is an implicit necessary optimality condition of second order. However, the practical value of such conditions is small, since their verification requires very complicated calculations.
For obtaining effectively verifiable optimality conditions of second order, following [12, p. 16], we take into account (49) in (57) and introduce the matrix function $M(t,s)$ (58), which collects the coefficients of the resulting quadratic form in $\delta u$. Then, for the second variation of the functional, we get the final formula
$$\delta^{2}J(u)=-\int_{0}^{T}\delta u^{\top}(t)\,\frac{\partial^{2}H}{\partial u^{2}}\,\delta u(t)\,dt+\int_{0}^{T}\int_{0}^{T}\delta u^{\top}(t)\,M(t,s)\,\delta u(s)\,ds\,dt.\tag{59}$$
Theorem 5. If the admissible control $u(t)$ satisfies condition (56), then for its optimality in problem (1)–(4), the inequality
$$-\int_{0}^{T}\delta u^{\top}(t)\,\frac{\partial^{2}H}{\partial u^{2}}\,\delta u(t)\,dt+\int_{0}^{T}\int_{0}^{T}\delta u^{\top}(t)\,M(t,s)\,\delta u(s)\,ds\,dt\ge0\tag{60}$$
should be fulfilled for all piecewise continuous $\delta u(t)$.
The analogue of the Legendre-Clebsch condition for the considered problem follows from condition (60).
Theorem 6. Along the optimal process $\{u(t),x(t),\psi(t)\}$, for all $v\in\mathbb{R}^{r}$ and almost every $\theta\in[0,T)$,
$$v^{\top}\,\frac{\partial^{2}H}{\partial u^{2}}\bigl(\psi(\theta),x(\theta),u(\theta),\theta\bigr)\,v\le0.\tag{61}$$
To prove (61), one constructs the variation of the control
$$\delta u(t)=\begin{cases}v,& t\in[\theta,\theta+\varepsilon),\\ 0,& t\in[0,T]\setminus[\theta,\theta+\varepsilon),\end{cases}\tag{62}$$
where $\varepsilon>0$ and $v$ is some $r$-dimensional vector.
By virtue of (62), the corresponding variation of the trajectory is indeed
$$\delta x(t)=\varepsilon\,z(t)+o(\varepsilon),$$
where $z(t)$ is a continuous bounded function.
Substituting variation (62) into (60) and selecting the principal term with respect to $\varepsilon$, one may obtain
$$-\varepsilon\,v^{\top}\,\frac{\partial^{2}H}{\partial u^{2}}\bigl(\psi(\theta),x(\theta),u(\theta),\theta\bigr)\,v+O(\varepsilon^{2})\ge0.$$
Thus, considering the second condition of (54), one obtains the Legendre-Clebsch criterion (61).
Condition (61) is a second-order optimality condition. It is obvious that when the right-hand side of system (1) is linear with respect to the control parameters, condition (61) also degenerates, that is, it is fulfilled trivially. Following [11, p. 27] and [12, p. 40], if
$$\frac{\partial^{2}H}{\partial u^{2}}\bigl(\psi(t),x(t),u(t),t\bigr)\equiv0\tag{65}$$
for all $t\in[0,T]$, then the admissible control $u(t)$ is said to be a singular control in the classic sense.
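A classic textbook illustration of a control that is singular in the classic sense (this example is not from the paper; it assumes a Hamiltonian of the form $H=\psi^{\top}f-F$, as in the increment derivation):

```latex
\dot{x}(t) = u(t), \qquad
J(u) = \int_{0}^{T} x^{2}(t)\,dt \to \min, \qquad
H(\psi, x, u, t) = \psi u - x^{2}.
```

Here $\partial^{2}H/\partial u^{2}\equiv0$, so condition (61) is fulfilled trivially by every admissible control, and only conditions of the integral type given below in this section can distinguish candidates for optimality.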
Theorem 7. For the optimality of the control $u(t)$, singular in the classic sense, the inequality
$$\int_{0}^{T}\int_{0}^{T}v^{\top}M(t,s)\,v\,ds\,dt\ge0\tag{66}$$
should be fulfilled for all $v\in\mathbb{R}^{r}$.
Condition (66) is an integral necessary optimality condition for singular controls in the classic sense. Selecting the special variation in different ways in formula (60), one can obtain various other necessary optimality conditions.
In this work, the optimal control problem is considered when the state of the system is described by differential equations with integral boundary conditions. Applying the Banach contraction principle, the existence and uniqueness of the solution of the corresponding boundary value problem are proved for each fixed admissible control. The first- and second-order increment formulas for the functional are calculated. Various first- and second-order necessary optimality conditions are obtained with the help of variations of the controls. Analogous existence and uniqueness results and necessary optimality conditions hold, under the same sufficient conditions on the nonlinear terms, for the system of nonlinear differential equations (1) subject to the multipoint nonlocal and integral boundary conditions
$$\sum_{i=0}^{k}A_{i}\,x(t_{i})+\int_{0}^{T}n(s)\,x(s)\,ds=C,$$
where $A_{i}$ are given matrices and
$$\det\Bigl(\sum_{i=0}^{k}A_{i}+\int_{0}^{T}n(s)\,ds\Bigr)\neq0.$$
Here, $0=t_{0}<t_{1}<\cdots<t_{k}=T$. Moreover, the method given in [27, 28] and the method presented in this paper may allow one to investigate optimal control problems for infinite-dimensional systems with integral boundary conditions.
- M. Benchohra, J. J. Nieto, and A. Ouahab, “Second-order boundary value problem with integral boundary conditions,” Boundary Value Problems, vol. 2011, Article ID 260309, 9 pages, 2011.
- B. Ahmad and J. J. Nieto, “Existence results for nonlinear boundary value problems of fractional integrodifferential equations with integral boundary conditions,” Boundary Value Problems, vol. 2009, Article ID 708576, 11 pages, 2009.
- A. Boucherif, “Second-order boundary value problems with integral boundary conditions,” Nonlinear Analysis. Theory, Methods & Applications, vol. 70, no. 1, pp. 368–379, 2009.
- R. A. Khan, “Existence and approximation of solutions of nonlinear problems with integral boundary conditions,” Dynamic Systems and Applications, vol. 14, no. 2, pp. 281–296, 2005.
- A. Belarbi, M. Benchohra, and A. Ouahab, “Multiple positive solutions for non-linear boundary value problems with integral boundary conditions,” Archivum Mathematicum, vol. 44, no. 1, pp. 1–7, 2008.
- F. P. Vasilev, Optimization Methods, Factorial Press, Moscow, Russia, 2002.
- O. V. Vasiliev, Optimization Methods, vol. 5 of Advanced Series in Mathematical Science and Engineering, World Federation Publishers Company, Atlanta, Ga, USA, 1996.
- A. J. Krener, “The high order maximal principle and its application to singular extremals,” SIAM Journal on Control and Optimization, vol. 15, no. 2, pp. 256–293, 1977.
- H. J. Kelley, R. E. Kopp, and H. G. Moyer, “Singular extremals,” in Topics in Optimization, G. Leitmann, Ed., pp. 63–101, Academic Press, New York, NY, USA, 1967.
- G. Fraser-Andrews, “Finding candidate singular optimal controls: a state of the art survey,” Journal of Optimization Theory and Applications, vol. 60, no. 2, pp. 173–190, 1989.
- R. Gabasov and F. M. Kirillova, Osobye Optimalnye Upravleniya [Singular Optimal Controls], Nauka, Moscow, Russia, 1973, in Russian.
- K. B. Mansimov, Singular Controls in Systems With Delay, Elm, Baku, Azerbaijan, 1999, in Russian.
- Y. A. Sharifov, “Optimal control of impulsive systems with nonlocal boundary conditions,” Russian Mathematics, no. 2, pp. 75–84, 2013.
- Y. A. Sharifov, “Necessary optimality conditions of first and second order for systems with boundary conditions,” Transactions of National Academy of Sciences of Azerbaijan, vol. 28, no. 1, pp. 189–198, 2008.
- A. Y. Sharifov, “Optimality conditions in control problems for systems of impulsive differential equations with nonlocal boundary conditions,” Ukrainian Mathematical Journal, vol. 64, no. 6, pp. 836–847, 2012.
- A. Y. Sharifov and N. B. Mammadova, “On second-order necessary optimality conditions in the classical sense for systems with nonlocal conditions,” Differential Equations, vol. 48, no. 4, pp. 605–608, 2012.
- M. F. Mekhtiyev, I. Sh. Djabrailov, and Y. A. Sharifov, “Necessary optimality conditions of second order in classical sense in optimal control problems of three-point conditions,” Journal of Automation and Information Sciences, vol. 42, no. 3, pp. 47–57, 2010.
- O. O. Vasilieva and K. Mizukami, “Optimal control of a boundary value problem,” Izvestiya Vysshikh Uchebnykh Zavedeniĭ. Matematika, no. 12, pp. 33–41, 1994.
- O. O. Vasilieva and K. Mizukami, “Dynamical processes described by a boundary value problem: necessary optimality conditions and solution methods,” Rossiĭskaya Akademiya Nauk. Izvestiya Akademii Nauk. Teoriya i Sistemy Upravleniya, no. 1, pp. 95–100, 2000.
- O. Vasilieva and K. Mizukami, “Optimality criterion for singular controllers: linear boundary conditions,” Journal of Mathematical Analysis and Applications, vol. 213, no. 2, pp. 620–641, 1997.
- O. Vasilieva, “Maximum principle and its extension for bounded control problems with boundary conditions,” International Journal of Mathematics and Mathematical Sciences, vol. 2004, no. 35, pp. 1855–1879, 2004.
- Y. A. Sharifov, “Classical necessary optimality conditions in discrete optimal control problems with nonlocal conditions,” Automatic Control and Computer Sciences, vol. 45, no. 4, pp. 192–200, 2011.
- M. F. Mekhtiyev, H. H. Mollai, and Y. A. Sharifov, “On an optimal control problem for nonlinear systems with integral conditions,” Transactions of NAS of Azerbaijan, vol. 25, no. 4, pp. 191–198, 2005.
- A. Ashyralyev and Y. A. Sharifov, “Optimal control problem for impulsive systems with integral boundary conditions,” in AIP Conference Proceedings, vol. 1470, pp. 12–15, 2012.
- A. Ashyralyev and Y. A. Sharifov, “Optimal control problem for impulsive systems with integral boundary conditions,” Electronic Journal of Differential Equations, vol. 2013, no. 80, pp. 1–11, 2013.
- A. A. Samarskii and A. V. Gulin, Numerical Methods, Nauka, Moscow, Russia, 1989, in Russian.
- H. O. Fattorini, Infinite-Dimensional Optimization and Control Theory, vol. 62, Cambridge University Press, Cambridge, UK, 1999.
- H. O. Fattorini, Infinite Dimensional Linear Control Systems, vol. 201, Elsevier, Amsterdam, The Netherlands, 2005.