Mathematical Problems in Engineering
Volume 2012 (2012), Article ID 209329, 15 pages
Adaptive Method for Solving Optimal Control Problem with State and Control Variables
Louadj Kahina and Aidene Mohamed
1Department of Mathematics, Faculty of Sciences, Mouloud Mammeri University, Tizi-Ouzou, Algeria
2Laboratoire de Conception et Conduite de Systèmes de Production (L2CSP), Tizi-Ouzou, Algeria
Received 29 November 2011; Revised 19 April 2012; Accepted 20 April 2012
Academic Editor: Jianming Shi
Copyright © 2012 Louadj Kahina and Aidene Mohamed. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
The problem of optimal control with state and control variables is studied. The variables are the state vector and the control ; these variables are coupled, that is, the right-hand side of the ordinary differential equation contains both state and control variables in mixed form. To solve this problem, we use the adaptive method and the technology of linear programming.
1. Introduction
Problems of optimal control have been intensively investigated in the world literature for over forty years. During this period, a series of fundamental results have been obtained, among which should be noted the maximum principle [1] and dynamic programming [2, 3]. Results of the theory have been taken up in various fields of science, engineering, and economics.
The optimal control problem with mixed variables and free terminal time is considered. This problem is among the most difficult problems in the mathematical theory of control processes [4–7]. An algorithm based on the concept of the simplex method [4, 5, 8, 9], built around a so-called support control, is proposed to solve this problem.
The aim of the paper is to realize the adaptive method of linear programming [8]. In our opinion, a numerical solution is impossible without the use of computers and of discrete controls, defined on quantized axes, as accessible controls. This makes it possible to eliminate some analytical difficulties and to reduce the optimal control problem to a linear programming problem. The results obtained show that adequate consideration of the dynamic structure of the problem makes it possible to construct very fast solution algorithms.
The work has the following structure. In Section 2, the terminal optimal control problem with mixed variables is formulated. In Section 3, we give some definitions needed in this paper. In Section 4, the definition of a support is introduced, and primal and dual ways of its dynamical identification are given. In Section 5, we calculate the value of suboptimality. In Section 6, optimality and ε-optimality criteria are defined. In Section 7, a numerical algorithm for solving the problem is presented; each iteration consists of two procedures, a change of control and a change of support, used to find a solution of the discrete problem; at the end, a final procedure is used to find a solution in the class of piecewise continuous functions. In Section 8, the results are illustrated with a numerical example.
2. Problem Statement
We consider a linear optimal control problem with control and state constraints: subject to where , and are constant or time-dependent matrices of appropriate dimensions, is the state of control system (2.1)–(2.2), and , , is a piecewise continuous function. Among the problems in which both the state and the control are variables, we consider the following problem: subject to where is the state of control system (2.3)–(2.6); , , is a piecewise continuous function, ; ; is an -vector; , , is a continuous scalar function; , , is an -vector function; are scalars; , are -vectors; , are sets of indices.
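The displayed formulas of this section did not survive extraction. As a hedged illustration only, a representative terminal linear problem of this class can be written as follows; the symbols are chosen here for exposition in the spirit of [4, 5] and are not recovered from the paper:

```latex
\max_{u}\; c^{\top}x(t^{*})
\quad\text{subject to}\quad
\dot{x}(t) = A(t)\,x(t) + B(t)\,u(t), \qquad x(0) = x_{0},
\qquad
Hx(t^{*}) = g, \qquad f_{*} \le u(t) \le f^{*}, \qquad t \in T = [0, t^{*}].
```

Here $H$ would be a terminal-constraint matrix and $f_{*}, f^{*}$ the control bounds; problem (2.3)–(2.6) additionally mixes state and control variables in the constraints.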
3. Essential Definitions
Definition 3.1. A pair formed of an -vector and a piecewise continuous function is called a generalized control.
Definition 3.2. The constraint (2.4) is assumed to be controllable; that is, for any -vector , there exists a pair for which equality (2.4) is fulfilled.
A generalized control is said to be an admissible control if it satisfies constraints (2.4)–(2.6).
Definition 3.3. An admissible control is said to be an optimal open-loop control if the control criterion reaches its maximal value.
Definition 3.4. For a given , an ε-optimal control is defined by the inequality
4. Support and the Accompanying Elements
Let us introduce a discretized time set , where and is an integer. A function , , is called a discrete control if . First, we describe a method of computing the solution of problem (2.3)–(2.6) in the class of discrete controls, and then we present the final procedure, which uses this solution as an initial approximation for solving problem (2.3)–(2.6) in the class of piecewise continuous functions.
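As a hedged illustration of the quantization above (the scalar system, the Euler integrator, and all names below are our own choices, not the paper's): a discrete control is piecewise constant on the grid t_k = kh, and the state is propagated by integrating the dynamics over each quantization period.

```python
# Sketch: a piecewise-constant ("discrete") control on a quantized time axis.
# The scalar system xdot = a*x + b*u and all identifiers are illustrative
# choices, not the paper's notation.

def make_grid(t_star, N):
    """Quantized axis t_k = k*h, k = 0..N, with quantization period h = t_star/N."""
    h = t_star / N
    return h, [k * h for k in range(N + 1)]

def simulate(a, b, x0, u_values, h, substeps=100):
    """Euler-integrate xdot = a*x + b*u with u held constant on each interval."""
    x = x0
    for u in u_values:              # u is constant over one quantization period
        dt = h / substeps
        for _ in range(substeps):
            x += dt * (a * x + b * u)
    return x

h, grid = make_grid(t_star=1.0, N=10)
x_final = simulate(a=0.0, b=1.0, x0=0.0, u_values=[1.0] * 10, h=h)
# With a = 0, b = 1, u ≡ 1 the exact answer is x(1) = 1; Euler matches it here
# up to floating-point rounding.
```

Refining the grid (increasing N) is what lets the discrete solution approximate the piecewise continuous one used by the final procedure.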
Definitions of admissible, optimal, and ε-optimal controls for discrete functions are given in the standard form.
Choose an arbitrary subset of elements and an arbitrary subset of elements.
Form the matrix, where , .
A pair of an admissible control and a support is said to be a support control.
A support control is said to be primally nonsingular if .
Let us consider another admissible control , where , and let us calculate the increment of the cost functional. Since , the increment of the functional equals , where the following quantities are called potentials: , , , , .
Introduce an -vector of estimates and a function of cocontrol . With the use of these notions, the value of the cost functional increment takes the form
A support control is dually nonsingular if , where .
5. Calculation of the Value of Suboptimality
The number is called the value of suboptimality of the support control . Hence, . From this last inequality, the following result is deduced.
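The estimate that leads to this result was lost in extraction; in the adaptive-method framework of [8] it takes the following form, where the notation $\beta(u, T_{\text{sup}})$ for the value of suboptimality and $u^{0}$ for an optimal control is ours:

```latex
J(u^{0}) - J(u) \;\le\; \beta\!\left(u, T_{\text{sup}}\right),
\qquad\text{hence}\qquad
\beta\!\left(u, T_{\text{sup}}\right) \le \varepsilon
\;\Longrightarrow\; u \text{ is } \varepsilon\text{-optimal}.
```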
6. Optimality and -Optimality Criterion
Theorem 6.1 (see ). The following relations: are sufficient and, in the case of nondegeneracy, necessary for the optimality of the control .
Theorem 6.2. For any , the admissible control is ε-optimal if and only if there exists a support such that .
7. Primal Method for Constructing the Optimal Controls
A support is used not only to identify optimal and ε-optimal controls; it is also the main tool of the method. The suggested method is iterative, and its aim is to construct an ε-solution of problem (2.3)–(2.6) for a given . Since the support changes during the iterations together with an admissible control, it is natural to consider them as a pair.
In what follows, to simplify the calculations, we assume that only primally and dually nonsingular support controls are used in the iterations.
An iteration of the method is a change of an “old” control for a “new” one so that . The iteration consists of two procedures:
(1) change of an admissible control ,
(2) change of support .
Construction of the initial support control belongs to the first phase of the method and can be performed with the algorithm described below.
At the beginning of each iteration, the following information is stored:
(1) an admissible control ,
(2) a support ,
(3) a value of suboptimality .
Before the beginning of the iteration, we make sure that the support control does not satisfy the criterion of ε-optimality.
7.1. Change of an Admissible Control
The new admissible control is constructed according to the formulas: where is an admissible direction of changing the control , and is the maximal step along this direction.
7.1.1. Construction of the Admissible Direction
Let us introduce a pseudocontrol .
First, we compute the nonsupport values of the pseudocontrol . The support values of the pseudocontrol are computed from the equation
With the use of a pseudocontrol, we compute the admissible direction : , ; , .
7.1.2. Construction of the Maximal Step
Since must be admissible, the following inequalities have to be satisfied: that is, . Thus, the maximal step is chosen as .
Here, : and : . Let us calculate the value of suboptimality of the support control , with computed according to (7.1): . Consequently,
(1) if , then is an optimal control,
(2) if , then is an ε-optimal control,
(3) if , then we perform a change of support.
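The step formulas above were lost in extraction, but the standard construction in this family of methods is a ratio test: the step along the direction is capped so that no component of the new control leaves its bounds. The sketch below assumes box bounds [-1, 1] and a cap of 1 on the step; both are illustrative assumptions, not recovered from the paper.

```python
# Sketch: maximal admissible step theta along a direction l, keeping every
# component of the new control u + theta*l inside assumed box bounds [-1, 1].
# The bounds, the cap theta <= 1, and all names are illustrative assumptions.

def max_step(u, l, lo=-1.0, hi=1.0):
    """Largest theta in [0, 1] with lo <= u_k + theta*l_k <= hi for all k."""
    theta = 1.0
    for uk, lk in zip(u, l):
        if lk > 0:                      # moving toward the upper bound
            theta = min(theta, (hi - uk) / lk)
        elif lk < 0:                    # moving toward the lower bound
            theta = min(theta, (lo - uk) / lk)
    return max(theta, 0.0)

theta = max_step(u=[0.5, -0.2, 0.9], l=[1.0, -1.0, 0.5])
# Component-wise limits: (1-0.5)/1 = 0.5, (-1+0.2)/(-1) = 0.8, (1-0.9)/0.5 = 0.2,
# so theta = min(1, 0.5, 0.8, 0.2) = 0.2 (up to rounding).
```

A zero step signals that the current control cannot be improved along this direction, which is exactly the case that triggers the change of support.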
7.2. Change of Support
For a given , we assume that and . We distinguish between two cases, (a) and (b), which can occur after the first procedure. Each case is investigated separately.
We perform a change of support that decreases the value of suboptimality . The change of support is based on the variation of the potentials, estimates, and cocontrol: where is an admissible direction of change and is the maximal step along this direction.
7.2.1. Construction of an Admissible Direction
First, we construct the support values of the admissible direction
(a) . Let us put
(b) . Let us put . Using the values , we compute the variation of the potentials as . Finally, we get the variation of the nonsupport components of the estimates and the cocontrol:
7.2.2. Construction of the Maximal Step
A maximal step equals with , where
7.2.3. Construction of a New Support
For constructing a new support, we consider the following four cases:
(1) : the new support has the two following components: 
(2) : the new support has the two following components: 
(3) : the new support has the two following components: 
(4) : the new support has the two following components: 
The value of suboptimality for the support control takes the form , where .
(1) If , then we perform the next iteration starting from the support control .
(2) If , then the control is optimal for problem (2.3)–(2.6) in the class of discrete controls.
(3) If , then the control is ε-optimal for problem (2.3)–(2.6) in the class of discrete controls.
If we want to obtain the solution of problem (2.3)–(2.6) in the class of piecewise continuous controls, we pass to the final procedure when case 2 or 3 takes place.
7.3. Final Procedure
Let us assume that for the new control , we have . With the use of the support , we construct a quasicontrol . If , then is optimal; if , then denote , where are the zeros of the optimal cocontrol, that is, , with . Suppose that .
Let us construct the following function: where . The final procedure consists of finding the solution of the system of nonlinear equations . We solve this system by the Newton method, using as an initial approximation the vector . The approximation at step is computed as . Let us compute the Jacobi matrix for (7.26): . As det , we can easily show that
For instants , there exists a small such that, for any , the matrix is nonsingular, and the matrix is also nonsingular if the elements do not leave the -vicinity of , .
The vector is taken as a solution of (4.6) if , for a given . So we put .
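The final procedure above refines the switching instants by Newton iteration on a system of nonlinear equations; the system itself (7.26) was not preserved in the text. As a hedged one-dimensional sketch only: a single switching instant is a zero of a switching function, and the iteration t_{s+1} = t_s - f(t_s)/f'(t_s) is applied from the discrete approximation. The function used below is an illustrative stand-in.

```python
# Sketch: Newton iteration for a switching instant, i.e. a zero of a
# cocontrol/switching function. delta(t) = t**2 - 2 is an illustrative
# stand-in; the paper's system (7.26) was not preserved.

def newton(f, df, t0, tol=1e-12, max_iter=50):
    """Newton's method t_{s+1} = t_s - f(t_s)/df(t_s), stopping on small steps."""
    t = t0
    for _ in range(max_iter):
        step = f(t) / df(t)
        t -= step
        if abs(step) < tol:
            break
    return t

root = newton(lambda t: t * t - 2.0, lambda t: 2.0 * t, t0=1.0)
# Converges quadratically to sqrt(2) ≈ 1.41421356.
```

The quadratic convergence of Newton's method is what makes the discrete solution a good initial approximation: a coarse quantization already puts each switching time within the basin of attraction of its exact value.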
8. A Numerical Example
We illustrate the results obtained in this paper using the following example:
Let the matrix be
We introduce the adjoint system which is defined as
Problem (8.1) is reduced to the canonical form (2.3)–(2.6) by introducing the new variable . Then, the control criterion takes the form max . In the class of discrete controls with quantization period , problem (8.1) is equivalent to an LP problem of dimension .
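The equivalence with an LP comes from the fact that, for a linear system with piecewise-constant control, the terminal state is linear in the control values u_k, so the terminal constraints become LP constraint rows. The scalar sketch below (system xdot = a*x + b*u, names and values ours) builds one such row by integrating the transition factor over each quantization interval.

```python
import math

# Sketch: LP data for a scalar discretized problem. For xdot = a*x + b*u with
# u piecewise constant on intervals of length h (t* = N*h),
#   x(t*) = exp(a*t*) * x0 + sum_k phi_k * u_k,
#   phi_k = b * exp(a*(t* - t_k - h)) * (exp(a*h) - 1) / a   for a != 0,
#   phi_k = b * h                                            for a == 0.
# All symbols are illustrative assumptions; the paper's LP keeps the same
# structure (terminal-constraint rows linear in the u_k).

def terminal_row(a, b, h, N):
    """Coefficients phi_k of u_k in the terminal state x(t*)."""
    t_star = N * h
    row = []
    for k in range(N):
        t_k = k * h
        if a != 0.0:
            phi = b * math.exp(a * (t_star - t_k - h)) * (math.exp(a * h) - 1.0) / a
        else:
            phi = b * h
        row.append(phi)
    return row

row = terminal_row(a=0.0, b=1.0, h=0.1, N=10)
# With a = 0 each coefficient is b*h = 0.1, and sum(row) = t* = 1.0 as expected.
```

Stacking one such row per terminal constraint, together with the bounds on the u_k, yields exactly the "static" LP mentioned in the discussion below; the adaptive method instead exploits the dynamic structure and never stores the full matrix.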
To construct the optimal open-loop control of problem (8.1), the set was selected as the initial support. This support corresponds to the set of nonsupport zeros of the cocontrol . The problem was solved in 26 iterations; that is, to construct the optimal open-loop control, the support matrix was changed 26 times. The optimal value of the control criterion was found to be equal to , in time .
Of course, one can solve problem (8.1) by LP methods, transforming the problem (4.6)–(7.8). In doing so, one integration of the system is sufficient to form the matrix of the LP problem. However, such a “static” approach requires a large amount of memory, and it is fundamentally different from the traditional “dynamical” approaches based on the dynamical models (2.3)–(2.6). Then, problem (2.3)–(2.6) was solved.
In Figure 1, the control and the switching function of the minimum principle are shown. In Figure 2, the phase portrait of system (8.1) is shown. In Figures 3 and 4, the state variables of system (8.1) are shown.
References
- L. S. Pontryagin, V. G. Boltyanski, R. Gamkrelidze, and E. F. Mischenko, The Mathematical Theory of Optimal Processes, Interscience Publishers, New York, NY, USA, 1962.
- R. E. Bellman, Dynamic Programming, Princeton University Press, Princeton, NJ, USA, 1963.
- R. E. Bellman, I. Glicksberg, and O. A. Gross, Some Aspects of the Mathematical Theory of Control Processes, Rand Corporation, Santa Monica, Calif, USA, 1958, Report R-313.
- N. V. Balashevich, R. Gabasov, and F. M. Kirillova, “Calculating an optimal program and an optimal control in a linear problem with a state constraint,” Computational Mathematics and Mathematical Physics, vol. 45, no. 12, pp. 2030–2048, 2005.
- R. Gabasov, F. M. Kirillova, and N. V. Balashevich, “On the synthesis problem for optimal control systems,” SIAM Journal on Control and Optimization, vol. 39, no. 4, pp. 1008–1042, 2000.
- R. Gabasov, N. M. Dmitruk, and F. M. Kirillova, “Decentralized optimal control of a group of dynamical objects,” Computational Mathematics and Mathematical Physics, vol. 48, no. 4, pp. 561–576, 2008.
- N. V. Balashevich, R. Gabasov, and F. M. Kirillova, “Numerical methods of program and positional optimization of the linear control systems,” Zh Vychisl Mat Mat Fiz, vol. 40, no. 6, pp. 838–859, 2000.
- R. Gabasov, Adaptive Method of Linear Programming, University of Karlsruhe, Institute of Statistics and Mathematics, Karlsruhe, Germany, 1993.
- L. Kahina and A. Mohamed, “Optimization of a problem of optimal control with free initial state,” Applied Mathematical Sciences, vol. 4, no. 5–8, pp. 201–216, 2010.