International Journal of Differential Equations
Volume 2012 (2012), Article ID 173634, 18 pages
Direct Method for Resolution of Optimal Control Problem with Free Initial Condition
1Department of Mathematics, Faculty of Sciences, University Mouloud Mammeri of Tizi-Ouzou, Tizi-Ouzou, Algeria
2Laboratoire de Conception et Conduite de Systèmes de Production (L2CSP), UMMTO Tizi-Ouzou, Algeria
Received 22 September 2011; Accepted 3 November 2011
Academic Editor: Sabri Arik
Copyright © 2012 Louadj Kahina and Aidene Mohamed. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
The theory of control analyzes the properties of controlled systems. Problems of optimal control (OC) have been intensively investigated in the world literature for over forty years. During this period, a series of fundamental results have been obtained, among which should be noted the maximum principle (Pontryagin et al., 1962) and dynamic programming (Bellman, 1963). For many problems of optimal control theory (OCT), adequate solution methods have been found (Bryson and Ho, 1969; Lee and Markus, 1967; Gabasov and Kirillova, 1977, 1978, 1980). Results of the theory have been taken up in various fields of science, engineering, and economics. The present paper aims at extending the constructive methods of Balashevich et al. (2000) to problems of optimal control in which the bounded initial state is not fixed.
Problems of optimal control (OC) have been intensively investigated in the world literature for over forty years. During this period, a series of fundamental results have been obtained, most of which are based on the maximum principle and dynamic programming [2–4]. Currently there exist two types of resolution methods: direct methods and indirect methods. The indirect methods are based on the maximum principle and on shooting methods. The direct methods are based on a discretization of the initial problem and yield an approximate solution.
The aim of this paper is to apply an adaptive method of linear programming [6–13] to an optimal control problem with a free initial condition. A final procedure, based on the resolution of a linear system by the Newton method, is used to obtain an optimal solution; it works with a finite set of switching points of the control [11, 14–16]. The same problem is solved in the article , where the initial problem is transformed into a linear programming problem by changes of variables in three procedures: change of control, change of support, and final procedure. In the present paper, we discretize the initial problem to find an optimal support by using the change of control and the change of support, and the final procedure uses this solution as an initial approximation for solving the problem in the class of piecewise continuous functions.
We explain below that the realizations of the adaptive method described in this paper possess the following advantages.
(1) The size of the support (the main tool of the method), which mainly determines the complexity of an iteration, does not depend on all the general constraints but only on the number of endpoint constraints.
(2) The described realizations operate only with the parameters of the initial control problem. This decreases the requirements on operative memory and increases the accuracy of calculations.
(3) The main operations are conducted with the initial (primal) and adjoint systems, without the auxiliary objects that arise after reduction of the initial optimal control problem to the equivalent LP problem.
(4) Because only a small volume of additional information is stored and parallel calculations can be used, the time for integrating the primal and adjoint systems in the dual part of an iteration decreases substantially. This speeds up the solution of the open-loop optimization problem as well as the formation of current supports and realizations of optimal feedbacks when positional solutions are constructed.
(5) The effectiveness of the method is practically independent of the quantization period.
The paper has the following structure: in Section 2, the canonical optimal control problem is formulated and the definition of a support is introduced; primal and dual ways of its dynamical identification are given. In Section 3, optimality and suboptimality criteria are given. In Section 4, optimality and -optimality criteria are exposed. In Section 5, the numerical algorithm for solving the problem is discussed. An iteration consists of three procedures: change of control, change of support, and, at the end, the final procedure. In Section 6, the results are illustrated with a numerical example.
2. Statement of the Problem
On the time interval , we have the following linear problem of optimal control: Here is a state of control system (2.2); , , is a piecewise continuous function; ; ; , ; are scalars; , are -vectors; , , , , are sets of indices.
By using the Cauchy formula, we obtain the solution of system (2.2): where , , is the solution of the system: By using formula (2.5) for , problem (2.1)–(2.4) becomes the following equivalent problem: where , , , and .
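The symbols of the Cauchy formula are lost in this copy. As an illustration only, the sketch below assumes a time-invariant system x' = Ax + bu, for which the fundamental matrix is F(t) = e^(At), and evaluates the formula numerically; the names `A`, `b`, `u` are placeholders, not the paper's notation.

```python
import numpy as np
from scipy.linalg import expm

def cauchy_solution(A, b, x0, u, T, n_steps=1000):
    """Evaluate x(T) = e^(A T) x0 + int_0^T e^(A (T-s)) b u(s) ds,
    the Cauchy-formula representation of the state, with the
    integral approximated by the composite trapezoidal rule."""
    ts = np.linspace(0.0, T, n_steps + 1)
    # Integrand e^(A(T-s)) b u(s) sampled on the time grid.
    vals = np.array([expm(A * (T - s)) @ b * u(s) for s in ts])
    h = T / n_steps
    integral = h * (0.5 * vals[0] + vals[1:-1].sum(axis=0) + 0.5 * vals[-1])
    return expm(A * T) @ x0 + integral

# Sanity check on the scalar system x' = -x + u with u = 1 and x(0) = 0,
# whose exact solution is x(T) = 1 - e^(-T).
A = np.array([[-1.0]])
b = np.array([1.0])
xT = cauchy_solution(A, b, np.array([0.0]), lambda s: 1.0, 1.0)
```

With 1000 trapezoid steps the numerical value agrees with the exact solution 1 - e^(-1) to about six decimal places.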
3. Fundamental Definitions
Definition 3.1. A pair formed of an -vector and a piecewise continuous function is called a generalized control.
By using this notation, a functional becomes
Definition 3.3. An admissible control is said to be an optimal open-loop control if a control criterion reaches its maximal value
Definition 3.4. For a given , a control is said to be -optimal (approximate solution) if
4. Support Control
In the interval , let us choose a subset formed of isolated moments, where , is an integer. A function , , is called a discrete control if By using this discretization, problem (2.7)–(2.10) becomes where are defined by the following expression and equal Here , are a solution of the dual equation with the initial condition , and , , is a matrix function, solution of the following equation with the initial condition
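The discretization formulas above are elided in this copy. The following sketch, again for an assumed time-invariant system x' = Ax + bu, shows the mechanism they describe: with a control that is piecewise constant on the grid t_k = kh, the terminal state becomes a linear function of the control values, which is what reduces the problem to an LP. All names are placeholders.

```python
import numpy as np
from scipy.linalg import expm

def influence_matrix(A, b, T, N, n_sub=64):
    """For a control constant on each interval [t_k, t_{k+1}),
    t_k = k h with h = T / N, the Cauchy formula gives
        x(T) = e^(A T) x(0) + sum_k D[:, k] * u_k,
    where column k is the integral of e^(A(T-s)) b over [t_k, t_{k+1}].
    Each column is approximated by a trapezoidal rule on n_sub panels."""
    h = T / N
    n = A.shape[0]
    D = np.zeros((n, N))
    for k in range(N):
        ss = np.linspace(k * h, (k + 1) * h, n_sub + 1)
        vals = np.array([expm(A * (T - s)) @ b for s in ss])
        dh = h / n_sub
        D[:, k] = dh * (0.5 * vals[0] + vals[1:-1].sum(axis=0) + 0.5 * vals[-1])
    return D

# For the integrator x' = u (A = 0, b = 1, T = 1, N = 4), every
# column of D equals the quantization period h = 0.25.
D = influence_matrix(np.zeros((1, 1)), np.array([1.0]), 1.0, 4)
```

The terminal constraints then become linear equalities and inequalities in the variables (x(0), u_0, ..., u_{N-1}), i.e., an LP whose support the adaptive method maintains.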
A pair formed of an admissible control and a support is said to be a support control. A support control is said to be nondegenerate if , , , .
Let us consider another admissible control , where , , , and let us calculate the increment of the cost functional: As is admissible, we have and consequently the increment of the functional is equal to where , , , is a function of Lagrange multipliers called potentials, calculated as the solution of the equation , where , . Let us introduce the -vector of estimates , and the cocontrol function .
By using this vector, the increment of the cost functional takes the form A support control is dually nondegenerate if , , , , where , .
5. Calculation of the Value of Suboptimality
The new control is admissible if it satisfies the constraints: The maximum of functional (4.16) under constraints (5.1) is reached for and is equal to where The number is called the value of suboptimality of the support control ; thus . From this inequality, we deduce the following results.
6. Criteria of Optimality and Suboptimality
Theorem 6.1. The following relations: are sufficient, and in the cases of nondegeneracy, they are necessary for the optimality of support control .
Theorem 6.2. For any , the admissible control is -optimal if and only if there exists a support such that .
7. Numerical Algorithm for Solving the Problem
Let be a given number. Suppose that the optimality and -optimality criteria are not satisfied for an initial support control . We then pass to an iteration of the algorithm, constructing a “new” support control so that . The iteration consists of three procedures:
(1) change of the admissible control ,
(2) change of the support ,
(3) final procedure.
7.1. Change of Control
Consider an initial support control , and let be a new admissible control constructed by the formulas: where is an admissible direction of change of the control , and is the maximal step along this direction.
Construct the Admissible Direction
Let us introduce a pseudocontrol .
First, we compute the nonsupport values of the pseudocontrol
Second, the support values of the pseudocontrol are computed from the equations: Using the pseudocontrol, we compute the admissible direction
Construct the Maximal Step
Since is to be admissible, we have that is, Then the maximal step is chosen as . Here : : Let us calculate the value of suboptimality of the new support control , with computed according to (7.1): .
Consequently:
- if , then is an optimal control;
- if , then is an -optimal control;
- if , then we perform a change of support.
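The step formulas above are elided in this copy. Assuming the usual box constraints u_min <= u(t) <= u_max on the control values, the maximal step along a direction is the largest step keeping the new control feasible, which can be sketched componentwise as follows (all names are placeholders):

```python
import numpy as np

def maximal_step(u, du, u_min, u_max):
    """Largest theta >= 0 such that u + theta * du stays in
    [u_min, u_max] componentwise; theta is infinite when no
    component of the direction pushes toward a bound."""
    theta = np.inf
    for ui, di in zip(u, du):
        if di > 0.0:                 # moving toward the upper bound
            theta = min(theta, (u_max - ui) / di)
        elif di < 0.0:               # moving toward the lower bound
            theta = min(theta, (u_min - ui) / di)
    return theta

# With bounds [-1, 1], u = (0.5, -0.2) and direction (1, -1),
# the admissible steps are 0.5 and 0.8, so theta = 0.5.
theta = maximal_step([0.5, -0.2], [1.0, -1.0], -1.0, 1.0)
```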
7.2. Change of Support
The change of support is constructed so as to satisfy .
Here, we have .
We will distinguish between two cases which can occur after the first procedure:
(a) , ;
(b) , .
Each case is investigated separately.
This change is based on the variation of the potentials, estimates, and cocontrol: where is an admissible direction of change , the maximal step along this direction, and the increment of the potentials.
Construct an Admissible Direction
First, construct the support values of the admissible direction for each case.
Case a (). Let us put
Case b (). Let us put
By using the values , we compute the variation of potentials as .
Finally, we get the variation of nonsupport components of the estimates and the cocontrol:
Construct a Maximal Step
A maximal step is equal to , where where
Construct a New Support
For constructing a new support, we consider the following cases.
A new support , where
A new support , where
A new support , where
A new support , where
A value of suboptimality for the support control is equal to where
(1) If , then the control is optimal for problem (2.1)–(2.4).
(2) If , then the control is -optimal for problem (2.1)–(2.4).
(3) If , then we pass to a new iteration with the support control or to the final procedure.
7.3. Final Procedure
By using a support , we construct a quasicontrol : If , then is an optimal control; if , then denote , .
Here, are the zeroes of the optimal cocontrol , ; , . Suppose that From system (7.23), we deduce and construct the following function: where The final procedure consists in finding the solution of the system of nonlinear equations We solve this system by the Newton method using the initial approximation: The approximation at step is equal to where As , one can easily show that For all instants , there exists a small such that for all , , the matrices are nondegenerate, and the matrix is nondegenerate. If the elements , , , do not leave the -vicinity of , , the vector is taken as a solution of (7.28) provided that for a given . So we put . The suboptimal control for problem (2.1)–(2.4) is computed as If the Newton method does not converge, we decrease the parameter and perform the iterative process again.
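The system (7.28) itself is elided in this copy. As a generic stand-in, the sketch below runs the kind of Newton iteration the final procedure applies to the switching instants, starting from the support approximation; the Jacobian is taken by forward differences, and the test function and starting point are purely illustrative.

```python
import numpy as np

def newton_solve(f, t0, tol=1e-10, max_iter=50, fd_step=1e-7):
    """Newton iteration t <- t - J(t)^(-1) f(t) for a system f(t) = 0,
    with the Jacobian J approximated by forward differences.
    Returns the last iterate once ||f(t)|| < tol or max_iter is hit."""
    t = np.asarray(t0, dtype=float)
    for _ in range(max_iter):
        ft = f(t)
        if np.linalg.norm(ft) < tol:
            break
        J = np.empty((t.size, t.size))
        for j in range(t.size):
            e = np.zeros_like(t)
            e[j] = fd_step
            J[:, j] = (f(t + e) - ft) / fd_step
        t = t - np.linalg.solve(J, ft)
    return t

# One-dimensional check: the root of cos(t) near t = 1.5 is pi / 2.
root = newton_solve(lambda t: np.cos(t), [1.5])
```

As in the final procedure, a good initial approximation matters: Newton converges quadratically only near the solution, which is why the support solution is used as the starting point and the quantization period is decreased when the iteration fails to converge.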
8. Example
We illustrate the results obtained in this paper using the following example: Let the matrices and arrays be as follows: Let us consider the initial condition as
Problem (8.1) is reduced to the canonical form (2.1)–(2.4) by introducing the new variable , . Then the control criterion takes the form . In the class of discrete controls with quantization period , problem (8.1) is equivalent to an LP problem of dimension .
We now construct the optimal open-loop control of problem (8.1).
As an initial support, the set was selected. This support corresponds to the set of nonsupport zeroes of the cocontrol . The problem was solved in 18 iterations; that is, to construct the optimal open-loop control, the support -matrix was changed 18 times. The optimal value of the control criterion was found to be 6.602499, and the solution time was only 2.30.
Movements of the cocontrol in the course of iterations are pictured in Figure 1.
The given data illustrate the effectiveness of the method used. In our opinion, the time it takes today to construct optimal open-loop controls is not of significant importance. It is only important that the method is able to construct a reliable solution in a reasonable time. Let us give some calculations.
At first, a characteristic of the methods for comparison is chosen. A comparison of the number of iterations in various methods is not always reasonable, as iterations of various methods often differ a great deal from one another. It is more natural to define the effectiveness of a method by the number of integrations of the primal or adjoint system, with an insignificant volume of required operative memory. In this connection, the time of integration of a primal or an adjoint system over the whole control interval is taken as the unit of complexity. If a method admits parallel operations, then the complexity is defined by the time needed for a set of microprocessors to solve the problem.
The proposed characteristic is not absolute (exact), but it allows methods to be evaluated to a “first approximation.” Table 1 contains some information on the solution of problem (8.1) for other quantization periods.
Of course, one can solve problem (8.1) by LP methods, transforming it into problem (4.2)–(4.5). In doing so, one integration of the system is sufficient to form the matrix of the LP problem. However, such a “static” approach requires a large volume of operative memory, and it is fundamentally different from the traditional “dynamical” approaches based on the dynamical models (2.1)–(2.4). It is with the latter approach that problem (2.1)–(2.4) was solved here.
In Figure 2, the realization , , is given. In Figure 3, projections of the transients of system (8.1), closed by the optimal open-loop control, on the planes are presented. In Figure 4, projections of the transients of system (8.1), closed by the optimal open-loop control, on the planes are presented.
The optimal initial state is
9. Conclusion
An optimal control problem with a free initial condition has been considered. The model becomes a problem in which we search for the best initial condition and a control which brings the system from this initial condition to a final state verifying the constraint . To conclude, it appears that the study and applications of adaptive methods have at least one important advantage: control law computations can be executed very quickly in real time, in particular by using parallel computers.
References
- L. S. Pontryagin, V. G. Boltyanskii, R. V. Gamkrelidze, and E. F. Mishchenko, The Mathematical Theory of Optimal Processes, Interscience, New York, NY, USA, 1962.
- R. E. Bellman, Dynamic Programming, Princeton University Press, Princeton, NJ, USA, 1963.
- R. E. Bellman, I. Glicksberg, and O. A. Gross, “Some aspects of the mathematical theory of control processes,” Tech. Rep. R-313, Rand Corporation, Santa Monica, Calif, USA, 1958.
- R. Gabasov, F. M. Kirillova, and N. S. Pavlenok, “Optimization of dynamical systems by using dynamical controllers,” Avtomatika i Telemekhanika, vol. 5, pp. 8–28, 2004.
- E. Trélat, Contrôle optimal : Théorie et applications, Mathématiques Concrètes, Vuibert, Paris, France, 2005.
- E. B. Lee and L. Markus, Foundations of Optimal Control Theory, John Wiley & Sons, New York, NY, USA, 1967.
- N. V. Balashevich, R. Gabasov, and F. M. Kirillova, “Numerical methods for program and positional optimization of linear control systems,” Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki, vol. 40, no. 6, pp. 838–859, 2000.
- R. Gabasov and F. M. Kirillova, Linear Programming Methods, vol. 1, Byelorussian State University Publishing House, Minsk, Russia, 1977.
- R. Gabasov and F. M. Kirillova, Linear Programming Methods, vol. 2, Byelorussian State University Publishing House, Minsk, Russia, 1978.
- R. Gabasov and F. M. Kirillova, Linear Programming Methods, vol. 3, Byelorussian State University Publishing House, Minsk, Russia, 1980.
- M. Aidene, I. L. Vorob'ev, and B. Oukacha, “An algorithm for solving a linear optimal control problem with a minimax performance index,” Computational Mathematics and Mathematical Physics, vol. 45, no. 10, pp. 1691–1700, 2005.
- R. Gabasov, N. V. Balashevich, and F. M. Kirillova, “On the synthesis problem for optimal control systems,” SIAM Journal on Control and Optimization, vol. 39, no. 4, pp. 1008–1042, 2000.
- R. Gabasov and F. M. Kirillova, Constructive Methods of Optimization, part 2, University Press, Minsk, Russia, 1984.
- A. E. Bryson and Y.-C. Ho, Applied Optimal Control, Blaisdell, Toronto, Canada, 1969.
- H. Maurer, C. Büskens, J. H. R. Kim, and C. Y. Kaya, “Optimization methods for the verification of second order sufficient conditions for bang-bang controls,” Optimal Control Applications and Methods, vol. 26, no. 3, pp. 129–156, 2005.
- L. Poggiolini, “On local state optimality of bang-bang extremals in a free horizon Bolza problem,” Rendiconti del Seminario Matematico dell’Università e del Politecnico di Torino, vol. 63, no. 4, 2005.
- K. Louadj and M. Aidene, “Optimization of a problem of optimal control with free initial state,” Applied Mathematical Sciences, vol. 4, no. 5, pp. 201–216, 2010.
- R. Gabasov and F. M. Kirillova, “Adaptive method of solving linear programming problems,” Preprint Series, University of Karlsruhe, Institute for Statistics and Mathematics, 1994.
- R. Gabasov, F. M. Kirillova, and N. S. Pavlenok, “Optimal discrete-impulse control of linear systems,” Automation and Remote Control, vol. 69, no. 3, pp. 443–462, 2008.