Mathematical Problems in Engineering
Volume 2012 (2012), Article ID 209329, 15 pages
http://dx.doi.org/10.1155/2012/209329
Research Article

Adaptive Method for Solving Optimal Control Problem with State and Control Variables

1Department of Mathematics, Faculty of Sciences, Mouloud Mammeri University, Tizi-Ouzou, Algeria
2Laboratoire de Conception et Conduite de Systèmes de Production (L2CSP), Tizi-Ouzou, Algeria

Received 29 November 2011; Revised 19 April 2012; Accepted 20 April 2012

Academic Editor: Jianming Shi

Copyright © 2012 Louadj Kahina and Aidene Mohamed. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The problem of optimal control with state and control variables is studied. The variables are a vector $x$ and the control $u(t)$; these variables are coupled, that is, the right-hand side of the ordinary differential equation contains both state and control variables in mixed form. To solve this problem, we use the adaptive method and linear programming techniques.

1. Introduction

Problems of optimal control have been intensively investigated in the world literature for over forty years. During this period, a series of fundamental results have been obtained, among which should be noted the maximum principle [1] and dynamic programming [2, 3]. Results of the theory were taken up in various fields of science, engineering, and economics.

The optimal control problem with mixed variables and free terminal time is considered. This problem is among the most difficult in the mathematical theory of control processes [4–7]. An algorithm based on the concept of the simplex method [4, 5, 8, 9], the so-called support control method, is proposed to solve this problem.

The aim of the paper is to apply the adaptive method of linear programming [8]. In our opinion, a numerical solution is impossible without the use of computers, so discrete controls defined on a quantized time axis are taken as accessible controls. This makes it possible to avoid some analytical difficulties and to reduce the optimal control problem to a linear programming problem. The results obtained show that adequate consideration of the dynamic structure of the problem in question makes it possible to construct very fast solution algorithms.

The work has the following structure. In Section 2, the terminal optimal control problem with mixed variables is formulated. In Section 3, we give some definitions needed in this paper. In Section 4, the definition of support is introduced; primal and dual ways of its dynamical identification are given. In Section 5, we calculate the value of suboptimality. In Section 6, optimality and ε-optimality criteria are defined. In Section 7, a numerical algorithm for solving the problem is given; each iteration consists of two procedures, a change of control and a change of support, used to find a solution of the discrete problem; finally, a final procedure is used to find a solution in the class of piecewise continuous functions. In Section 8, the results are illustrated with a numerical example.

2. Problem Statement

We consider a linear optimal control problem with control and state constraints:
$$J(x,u)=g\bigl(x(t_f)\bigr)+\int_0^{t_f}\bigl(Cx(t)+Du(t)\bigr)\,dt\longrightarrow\max_{x,u},\tag{2.1}$$
subject to
$$\dot{x}=f(x(t),u(t))=Ax(t)+Bu(t),\quad 0\le t\le t_f,\qquad x(0)=x_0,\quad x(t_f)=x_f,$$
$$x_{\min}\le x(t)\le x_{\max},\quad u_{\min}\le u(t)\le u_{\max},\quad t\in T=[0,t_f],\tag{2.2}$$
where $A$, $B$, $C$, and $D$ are constant or time-dependent matrices of appropriate dimensions, $x\in R^n$ is the state of control system (2.1)–(2.2), and $u(\cdot)=(u(t),\,t\in T)$, $T=[0,t_f]$, is a piecewise continuous function. Among these problems in which state and control are variables, we consider the following problem:
$$J(x,u)=c'x+\int_0^{t_f}c(t)u(t)\,dt\longrightarrow\max_{x,u},\tag{2.3}$$
subject to
$$Ax+\int_0^{t_f}h(t)u(t)\,dt=g,\quad 0\le t\le t_f,\tag{2.4}$$
$$x(0)=x_0,\tag{2.5}$$
$$x_{\min}\le x(t)\le x_{\max},\quad u_{\min}\le u(t)\le u_{\max},\quad t\in T=[0,t_f],\tag{2.6}$$
where $x\in R^n$ is the state of control system (2.3)–(2.6); $u(\cdot)=(u(t),\,t\in T)$, $T=[0,t_f]$, is a piecewise continuous function; $A\in R^{m\times n}$; $c=c(J)=(c_j,\,j\in J)$ is an $n$-vector; $g=g(I)=(g_i,\,i\in I)$ is an $m$-vector; $c(t)$, $t\in T$, is a continuous scalar function; $h(t)$, $t\in T$, is an $m$-vector function; $u_{\min}$, $u_{\max}$ are scalars; $x_{\min}=x_{\min}(J)=(x_{\min j},\,j\in J)$ and $x_{\max}=x_{\max}(J)=(x_{\max j},\,j\in J)$ are $n$-vectors; $I=\{1,\dots,m\}$ and $J=\{1,\dots,n\}$ are sets of indices.

3. Essential Definitions

Definition 3.1. A pair 𝑣=(π‘₯,𝑒(β‹…)) formed of an 𝑛-vector π‘₯ and a piecewise continuous function 𝑒(β‹…) is called a generalized control.

Definition 3.2. The constraint (2.4) is assumed to be controllable, that is, for any $m$-vector $g$, there exists a pair $v$ for which equality (2.4) is fulfilled.
A generalized control 𝑣=(π‘₯,𝑒(β‹…)) is said to be an admissible control if it satisfies constraints (2.4)–(2.6).

Definition 3.3. An admissible control $v^0=(x^0,u^0(\cdot))$ is said to be an optimal open-loop control if the control criterion reaches its maximal value:
$$J(v^0)=\max_v J(v).\tag{3.1}$$

Definition 3.4. For a given $\varepsilon\ge0$, an $\varepsilon$-optimal control $v^\varepsilon=(x^\varepsilon,u^\varepsilon(\cdot))$ is defined by the inequality
$$J(v^0)-J(v^\varepsilon)\le\varepsilon.\tag{3.2}$$

4. Support and the Accompanying Elements

Let us introduce a discretized time set $T_h=\{0,h,\dots,t_f-h\}$, where $h=t_f/N$ and $N$ is an integer. A function $u(t)$, $t\in T$, is called a discrete control if
$$u(t)=u(\tau),\quad t\in[\tau,\tau+h),\quad \tau\in T_h.\tag{4.1}$$
First, we describe a method of computing the solution of problem (2.3)–(2.6) in the class of discrete controls, and then we present the final procedure, which uses this solution as an initial approximation for solving problem (2.3)–(2.6) in the class of piecewise continuous functions.
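For illustration, the quantization of (4.1) can be sketched as follows; the horizon, the number of intervals, and the test control values below are made-up placeholders, not data from the paper:

```python
import numpy as np

# Quantized time axis T_h = {0, h, ..., t_f - h}; t_f and N are illustrative.
t_f, N = 25.0, 1000
h = t_f / N
T_h = np.arange(N) * h

def discrete_control(u_tau, t):
    """Piecewise-constant control: u(t) = u(tau) for t in [tau, tau + h)."""
    k = min(int(t // h), N - 1)   # index of the quantization interval
    return u_tau[k]

u_tau = np.zeros(N)
u_tau[200:400] = 1.0              # an arbitrary bang segment for illustration
```

Any piecewise continuous control is thus represented by its $N$ values on the grid, which is what reduces the problem to a finite-dimensional LP.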

Definitions of admissible, optimal, and $\varepsilon$-optimal controls for discrete functions are given in the standard form.

Choose an arbitrary subset $T_B\subset T_h$ of $l\le m$ elements and an arbitrary subset $J_B\subset J$ of $m-l$ elements.

Form the matrix
$$P_B=\bigl(a_j=A(I,j),\ j\in J_B;\ d(t),\ t\in T_B\bigr),\tag{4.2}$$
where $d(t)=\int_t^{t+h}h(s)\,ds$, $t\in T_h$.

A set 𝑆𝐡={𝑇𝐡,𝐽𝐡} is said to be a support of problem (2.3)–(2.6) if det𝑃𝐡≠0.
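A minimal sketch of forming $P_B$ and testing $\det P_B\ne0$; the matrix $A$, the toy integrand standing in for $h(s)$, and the index sets below are all illustrative placeholders:

```python
import numpy as np

m = 2                                  # number of constraints (rows of A)
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, -1.0]])
h_step = 0.5                           # quantization period h

def d(t):
    # d(t) = integral of the m-vector function over [t, t+h];
    # here the toy choice (1, s)' is integrated in closed form.
    return np.array([h_step, ((t + h_step) ** 2 - t ** 2) / 2.0])

J_B = [0]                              # m - l = 1 state index
T_B = [0.0]                            # l = 1 support instant
P_B = np.column_stack([A[:, j] for j in J_B] + [d(t) for t in T_B])
is_support = abs(np.linalg.det(P_B)) > 1e-12
```

The set $\{T_B, J_B\}$ qualifies as a support exactly when the assembled square matrix is nonsingular.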

A pair $\{v,S_B\}$ of an admissible control $v=(x,u(t),\,t\in T)$ and a support $S_B$ is said to be a support control.

A support control $\{v,S_B\}$ is said to be primally nonsingular if $x_{\min j}<x_j<x_{\max j}$, $j\in J_B$, and $u_{\min}<u(t)<u_{\max}$, $t\in T_B$.

Let us consider another admissible control $\bar v=(\bar x,\bar u(\cdot))=v+\Delta v$, where $\bar x=x+\Delta x$, $\bar u(t)=u(t)+\Delta u(t)$, $t\in T$, and let us calculate the increment of the cost functional
$$\Delta J(v)=J(\bar v)-J(v)=c'\Delta x+\int_0^{t_f}c(t)\Delta u(t)\,dt.\tag{4.3}$$
Since
$$A\Delta x+\int_0^{t_f}h(t)\Delta u(t)\,dt=0,\tag{4.4}$$
the increment of the functional equals
$$\Delta J(v)=\bigl(c'-\nu'A\bigr)\Delta x+\int_0^{t_f}\bigl(c(t)-\nu'h(t)\bigr)\Delta u(t)\,dt,\tag{4.5}$$
where $\nu\in R^m$ is called the vector of potentials: $\nu'=q'_BQ$, $q_B=(c_j,\,j\in J_B;\ q(t),\,t\in T_B)$, $Q=P_B^{-1}$, $q(t)=\int_t^{t+h}c(s)\,ds$, $t\in T_h$.

Introduce an $n$-vector of estimates $\Delta'=\nu'A-c'$ and a cocontrol function $\Delta(\cdot)=(\Delta(t)=\nu'd(t)-q(t),\,t\in T_h)$. With the use of these notions, the increment of the cost functional takes the form
$$\Delta J(v)=-\Delta'\Delta x-\sum_{t\in T_h}\Delta(t)\Delta u(t).\tag{4.6}$$
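The dual quantities can be sketched numerically; all data below (the matrices, the cost vector, the support column of $q_B$) is an illustrative toy, chosen only so that the defining identities can be checked:

```python
import numpy as np

A   = np.array([[1.0, 0.0, 2.0],
                [0.0, 1.0, -1.0]])
c   = np.array([0.5, -1.0, 0.2])    # cost vector on x
P_B = np.array([[1.0, 0.5],
                [0.0, 0.125]])      # columns: A(:, j), j in J_B; d(t), t in T_B
q_B = np.array([c[0], 0.3])         # (c_j, j in J_B; q(t), t in T_B)

Q  = np.linalg.inv(P_B)
nu = q_B @ Q                        # potentials: nu' = q_B' P_B^{-1}

Delta = nu @ A - c                  # estimates: Delta' = nu' A - c'

def cocontrol(d_t, q_t):
    """Cocontrol value Delta(t) = nu' d(t) - q(t)."""
    return nu @ d_t - q_t
```

By construction the estimates vanish on the support indices and the cocontrol vanishes at the support instants, which is what makes them usable as optimality indicators on the nonsupport part.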

A support control $\{v,S_B\}$ is dually nonsingular if $\Delta(t)\ne0$, $t\in T_H$, and $\Delta_j\ne0$, $j\in J_H$, where $T_H=T_h\setminus T_B$, $J_H=J\setminus J_B$.

5. Calculation of the Value of Suboptimality

The new control $\bar v$ is admissible if it satisfies the constraints
$$x_{\min}-x\le\Delta x\le x_{\max}-x,\qquad u_{\min}-u(t)\le\Delta u(t)\le u_{\max}-u(t),\quad t\in T.\tag{5.1}$$
The maximum of functional (4.6) under constraints (5.1) is reached for
$$\Delta x_j=x_{\min j}-x_j\ \text{if}\ \Delta_j>0,\qquad \Delta x_j=x_{\max j}-x_j\ \text{if}\ \Delta_j<0,$$
$$x_{\min j}-x_j\le\Delta x_j\le x_{\max j}-x_j\ \text{if}\ \Delta_j=0,\quad j\in J,$$
$$\Delta u(t)=u_{\min}-u(t)\ \text{if}\ \Delta(t)>0,\qquad \Delta u(t)=u_{\max}-u(t)\ \text{if}\ \Delta(t)<0,$$
$$u_{\min}-u(t)\le\Delta u(t)\le u_{\max}-u(t)\ \text{if}\ \Delta(t)=0,\quad t\in T_h,\tag{5.2}$$
and is equal to
$$\beta=\beta(v,S_B)=\sum_{j\in J_H^+}\Delta_j\bigl(x_j-x_{\min j}\bigr)+\sum_{j\in J_H^-}\Delta_j\bigl(x_j-x_{\max j}\bigr)+\sum_{t\in T^+}\Delta(t)\bigl(u(t)-u_{\min}\bigr)+\sum_{t\in T^-}\Delta(t)\bigl(u(t)-u_{\max}\bigr),\tag{5.3}$$
where
$$T^+=\{t\in T_H,\ \Delta(t)>0\},\quad T^-=\{t\in T_H,\ \Delta(t)<0\},\quad J_H^+=\{j\in J_H,\ \Delta_j>0\},\quad J_H^-=\{j\in J_H,\ \Delta_j<0\}.\tag{5.4}$$

The number $\beta(v,S_B)$ is called the value of suboptimality of the support control $\{v,S_B\}$. Hence $J(v^0)-J(v)\le\beta(v,S_B)$. From this inequality, the following results are deduced.
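The value (5.3) can be evaluated directly from the nonsupport estimates and cocontrol; the arrays below are illustrative placeholders only:

```python
import numpy as np

# Toy nonsupport data for evaluating beta(v, S_B) of (5.3).
x      = np.array([0.0, 1.0])
x_min  = np.array([-1.0, -1.0]); x_max = np.array([2.0, 2.0])
Delta  = np.array([0.5, -0.25])          # estimates on J_H
u      = np.array([0.0, 1.0, 0.5])
u_min, u_max = 0.0, 1.0
Delta_t = np.array([1.0, -2.0, 0.0])     # cocontrol on T_H

beta  = np.sum(Delta[Delta > 0] * (x - x_min)[Delta > 0])
beta += np.sum(Delta[Delta < 0] * (x - x_max)[Delta < 0])
beta += np.sum(Delta_t[Delta_t > 0] * (u - u_min)[Delta_t > 0])
beta += np.sum(Delta_t[Delta_t < 0] * (u - u_max)[Delta_t < 0])
```

Each summand is nonnegative by construction, so $\beta\ge0$, consistent with its role as an upper bound on the optimality gap.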

6. Optimality and πœ€-Optimality Criterion

Theorem 6.1 (see [8]). The relations
$$u(t)=u_{\min}\ \text{if}\ \Delta(t)>0,\quad u(t)=u_{\max}\ \text{if}\ \Delta(t)<0,\quad u_{\min}\le u(t)\le u_{\max}\ \text{if}\ \Delta(t)=0,\quad t\in T_h,$$
$$x_j=x_{\min j}\ \text{if}\ \Delta_j>0,\quad x_j=x_{\max j}\ \text{if}\ \Delta_j<0,\quad x_{\min j}\le x_j\le x_{\max j}\ \text{if}\ \Delta_j=0,\quad j\in J,\tag{6.1}$$
are sufficient and, in the case of nondegeneracy, necessary for the optimality of control $v$.

Theorem 6.2. For any πœ€β‰₯0, the admissible control 𝑣 is πœ€-optimal if and only if there exists a support 𝑆𝐡 such that 𝛽(𝑣,𝑆𝐡)β‰€πœ€.

7. Primal Method for Constructing the Optimal Controls

A support is used not only to identify the optimal and $\varepsilon$-optimal controls, but it is also the main tool of the method. The method suggested is iterative, and its aim is to construct an $\varepsilon$-solution of problem (2.3)–(2.6) for a given $\varepsilon\ge0$. As the support changes during the iterations together with an admissible control, it is natural to consider them as a pair.

Below, to simplify the calculations, we assume that only primally and dually nonsingular support controls are used in the iterations.

An iteration of the method is a change of an "old" support control $\{v,S_B\}$ for a "new" one $\{\bar v,\bar S_B\}$ such that $\beta\{\bar v,\bar S_B\}\le\beta\{v,S_B\}$. The iteration consists of two procedures: (1) change of an admissible control $v\to\bar v$; (2) change of support $S_B\to\bar S_B$. Construction of the initial support control belongs to the first phase of the method and can be performed with the use of the algorithm described below.

At the beginning of each iteration, the following information is stored: (1) an admissible control $v$; (2) a support $S_B=\{T_B,J_B\}$; (3) the value of suboptimality $\beta=\beta(v,S_B)$. Before beginning the iteration, we make sure that the support control $\{v,S_B\}$ does not satisfy the criterion of $\varepsilon$-optimality.
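The overall flow of one iteration can be sketched as a skeleton; the two helper functions stand in for the procedures of Sections 7.1 and 7.2 and are placeholders, not the paper's implementation:

```python
def iterate(v, S_B, eps, change_control, change_support):
    """One iteration: change the control, then (if needed) the support.

    change_control(v, S_B) -> (v_new, beta_new)    # procedure 7.1
    change_support(v, S_B) -> (S_B_new, beta_new)  # procedure 7.2
    """
    v_new, beta_new = change_control(v, S_B)
    if beta_new <= eps:
        return v_new, S_B, beta_new        # eps-optimality reached: stop
    S_B_new, beta_new = change_support(v_new, S_B)
    return v_new, S_B_new, beta_new
```

The outer loop simply repeats this step until the returned suboptimality value drops below $\varepsilon$.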

7.1. Change of an Admissible Control

The new admissible control is constructed according to the formulas
$$\bar x_j=x_j+\theta^0 l_j,\quad j\in J,\qquad \bar u(t)=u(t)+\theta^0 l(t),\quad t\in T_h,\tag{7.1}$$
where $l=(l_j,\,j\in J,\ l(t),\,t\in T_h)$ is an admissible direction of change of the control $v$ and $\theta^0$ is the maximal step along this direction.

7.1.1. Construction of the Admissible Direction

Let us introduce a pseudocontrol ̃𝑣=(Μƒπ‘₯,̃𝑒(𝑑),π‘‘βˆˆπ‘‡).

First, we compute the nonsupport values of the pseudocontrol:
$$\tilde x_j=\begin{cases}x_{\min j}&\text{if }\Delta_j\ge0,\\ x_{\max j}&\text{if }\Delta_j\le0,\end{cases}\quad j\in J_H,\qquad \tilde u(t)=\begin{cases}u_{\max}&\text{if }\Delta(t)\le0,\\ u_{\min}&\text{if }\Delta(t)\ge0,\end{cases}\quad t\in T_H.\tag{7.2}$$
The support values of the pseudocontrol $\{\tilde x_j,\,j\in J_B;\ \tilde u(t),\,t\in T_B\}$ are computed from the equation
$$\sum_{j\in J_B}A(I,j)\tilde x_j+\sum_{t\in T_B}d(t)\tilde u(t)=g-\sum_{j\in J_H}A(I,j)\tilde x_j-\sum_{t\in T_H}d(t)\tilde u(t).\tag{7.3}$$

With the use of a pseudocontrol, we compute the admissible direction 𝑙: 𝑙𝑗=Μƒπ‘₯π‘—βˆ’π‘₯𝑗, π‘—βˆˆπ½; 𝑙(𝑑)=̃𝑒(𝑑)βˆ’π‘’(𝑑), π‘‘βˆˆπ‘‡β„Ž.
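This construction of the nonsupport pseudocontrol and the direction $l$ can be sketched as follows, on toy data (the arrays are illustrative; the support components would come from solving (7.3)):

```python
import numpy as np

# Nonsupport data (illustrative placeholders).
x     = np.array([0.0, 1.0])
x_min = np.array([-1.0, -1.0]); x_max = np.array([2.0, 2.0])
Delta = np.array([0.5, -0.25])          # estimates on J_H
u     = np.array([0.2, 0.8]); u_min, u_max = 0.0, 1.0
Delta_t = np.array([-1.0, 2.0])         # cocontrol on T_H

# Pseudocontrol values (7.2), driven by the signs of the estimates/cocontrol.
x_tilde = np.where(Delta >= 0, x_min, x_max)
u_tilde = np.where(Delta_t <= 0, u_max, u_min)

# Admissible direction: l_j = x~_j - x_j, l(t) = u~(t) - u(t).
l_x = x_tilde - x
l_u = u_tilde - u
```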

7.1.2. Construction of the Maximal Step

Since $\bar v$ must be admissible, the following inequalities have to be satisfied:
$$x_{\min}\le\bar x\le x_{\max};\qquad u_{\min}\le\bar u(t)\le u_{\max},\quad t\in T_h,\tag{7.4}$$
that is,
$$x_{\min j}\le x_j+\theta^0 l_j\le x_{\max j},\quad j\in J;\qquad u_{\min}\le u(t)+\theta^0 l(t)\le u_{\max},\quad t\in T_h.\tag{7.5}$$
Thus, the maximal step is chosen as $\theta^0=\min\{1;\ \theta(t^0);\ \theta_{j^0}\}$.

Here $\theta_{j^0}=\min_{j\in J_B}\theta_j$, with
$$\theta_j=\begin{cases}\dfrac{x_{\max j}-x_j}{l_j}&\text{if }l_j>0,\\[1ex]\dfrac{x_{\min j}-x_j}{l_j}&\text{if }l_j<0,\\[1ex]+\infty&\text{if }l_j=0,\end{cases}\quad j\in J_B,\tag{7.6}$$
and $\theta(t^0)=\min_{t\in T_B}\theta(t)$, with
$$\theta(t)=\begin{cases}\dfrac{u_{\max}-u(t)}{l(t)}&\text{if }l(t)>0,\\[1ex]\dfrac{u_{\min}-u(t)}{l(t)}&\text{if }l(t)<0,\\[1ex]+\infty&\text{if }l(t)=0,\end{cases}\quad t\in T_B.\tag{7.7}$$
Let us calculate the value of suboptimality of the support control $\{\bar v,S_B\}$ with $\bar v$ computed according to (7.1): $\beta(\bar v,S_B)=(1-\theta^0)\beta(v,S_B)$. Consequently: (1) if $\theta^0=1$, then $\bar v$ is an optimal control; (2) if $\beta(\bar v,S_B)\le\varepsilon$, then $\bar v$ is an $\varepsilon$-optimal control; (3) if $\beta(\bar v,S_B)>\varepsilon$, then we perform a change of support.
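The ratio test (7.6)–(7.7) can be sketched compactly; the support data below is a made-up toy:

```python
import numpy as np

# Support components and direction (illustrative placeholders).
x   = np.array([0.5]);  x_min = np.array([-1.0]); x_max = np.array([2.0])
l_x = np.array([3.0])                 # direction on J_B
u   = np.array([0.2]);  u_min, u_max = 0.0, 1.0
l_u = np.array([-0.5])                # direction on T_B

def step(val, lo, hi, d):
    """Largest step keeping lo <= val + theta*d <= hi (ratio test)."""
    if d > 0:
        return (hi - val) / d
    if d < 0:
        return (lo - val) / d
    return np.inf

theta_j = min(step(x[j], x_min[j], x_max[j], l_x[j]) for j in range(len(x)))
theta_t = min(step(u[k], u_min, u_max, l_u[k]) for k in range(len(u)))
theta0  = min(1.0, theta_t, theta_j)  # theta0 = min{1, theta(t0), theta_j0}
```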

7.2. Change of Support

For a given $\varepsilon>0$, we assume that $\beta(\bar v,S_B)>\varepsilon$ and $\theta^0=\min(\theta(t^0),\,t^0\in T_B;\ \theta_{j^0},\,j^0\in J_B)$. We distinguish between two cases which can occur after the first procedure: (a) $\theta^0=\theta_{j^0}$, $j^0\in J_B$; (b) $\theta^0=\theta(t^0)$, $t^0\in T_B$. Each case is investigated separately.

We perform a change of support $S_B\to\bar S_B$ that decreases the value of suboptimality $\beta(\bar v,\bar S_B)$. The change of support is based on the variation of the potentials, estimates, and cocontrol:
$$\bar\nu'=\nu'+\sigma^0\Delta\nu',\qquad \bar\Delta_j=\Delta_j+\sigma^0\delta_j,\quad j\in J,\qquad \bar\Delta(t)=\Delta(t)+\sigma^0\delta(t),\quad t\in T_h,\tag{7.8}$$
where $(\delta_j,\,j\in J,\ \delta(t),\,t\in T_h)$ is an admissible direction of change of $(\Delta,\Delta(\cdot))$ and $\sigma^0$ is the maximal step along this direction.

7.2.1. Construction of an Admissible Direction $(\delta_j,\,j\in J,\ \delta(t),\,t\in T_h)$

First, we construct the support values $\delta_B=(\delta_j,\,j\in J_B,\ \delta(t),\,t\in T_B)$ of the admissible direction.

(a) $\theta^0=\theta_{j^0}$. Let us put
$$\delta(t)=0\ \text{if}\ t\in T_B,\qquad \delta_j=0\ \text{if}\ j\ne j^0,\ j\in J_B,$$
$$\delta_{j^0}=1\ \text{if}\ x_{j^0}=x_{\min j^0},\qquad \delta_{j^0}=-1\ \text{if}\ x_{j^0}=x_{\max j^0}.\tag{7.9}$$

(b) $\theta^0=\theta(t^0)$. Let us put
$$\delta_j=0\ \text{if}\ j\in J_B,\qquad \delta(t)=0\ \text{if}\ t\in T_B\setminus\{t^0\},$$
$$\delta(t^0)=1\ \text{if}\ u(t^0)=u_{\min},\qquad \delta(t^0)=-1\ \text{if}\ u(t^0)=u_{\max}.\tag{7.10}$$
Using the values $\delta_B=(\delta_j,\,j\in J_B,\ \delta(t),\,t\in T_B)$, we compute the variation $\Delta\nu$ of the potentials as $\Delta\nu'=\delta'_BQ$. Finally, we get the variation of the nonsupport components of the estimates and the cocontrol:
$$\delta_j=\Delta\nu'A(I,j),\quad j\in J_H,\qquad \delta(t)=\Delta\nu'd(t),\quad t\in T_H.\tag{7.11}$$

7.2.2. Construction of the Maximal Step $\sigma^0$

The maximal step equals $\sigma^0=\min(\sigma_{j^1},\sigma(t^1))$, where $\sigma_{j^1}=\min_{j\in J_H}\sigma_j$ and $\sigma(t^1)=\min_{t\in T_H}\sigma(t)$, with
$$\sigma_j=\begin{cases}-\dfrac{\Delta_j}{\delta_j}&\text{if }\Delta_j\delta_j<0,\\[1ex]+\infty&\text{if }\Delta_j\delta_j\ge0,\end{cases}\quad j\in J_H,\qquad \sigma(t)=\begin{cases}-\dfrac{\Delta(t)}{\delta(t)}&\text{if }\Delta(t)\delta(t)<0,\\[1ex]+\infty&\text{if }\Delta(t)\delta(t)\ge0,\end{cases}\quad t\in T_H.\tag{7.12}$$
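The dual ratio test (7.12) admits the same compact sketch; the estimates and direction values below are illustrative:

```python
import numpy as np

# Nonsupport estimates/cocontrol and their direction of change (toy data).
Delta   = np.array([0.5, -0.3])
delta_j = np.array([-0.25, -0.1])
Delta_t = np.array([1.2])
delta_t = np.array([-0.4])

def ratios(D, d):
    """-D/d where D and d have opposite signs, +inf elsewhere."""
    mask = D * d < 0
    return np.where(mask, -D / np.where(d == 0, 1.0, d), np.inf)

sigma0 = min(ratios(Delta, delta_j).min(), ratios(Delta_t, delta_t).min())
```

The step stops exactly where the first nonsupport estimate or cocontrol value would change sign, which is the component that enters the new support.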

7.2.3. Construction of a New Support

For constructing a new support, we consider the four following cases.
(1) $\theta^0=\theta(t^0)$, $\sigma^0=\sigma(t^1)$: the new support $\bar S_B=\{\bar T_B,\bar J_B\}$ has the components
$$\bar T_B=\bigl(T_B\setminus\{t^0\}\bigr)\cup\{t^1\},\qquad \bar J_B=J_B.\tag{7.13}$$
(2) $\theta^0=\theta(t^0)$, $\sigma^0=\sigma_{j^1}$: the new support has the components
$$\bar T_B=T_B\setminus\{t^0\},\qquad \bar J_B=J_B\cup\{j^1\}.\tag{7.14}$$
(3) $\theta^0=\theta_{j^0}$, $\sigma^0=\sigma_{j^1}$: the new support has the components
$$\bar T_B=T_B,\qquad \bar J_B=\bigl(J_B\setminus\{j^0\}\bigr)\cup\{j^1\}.\tag{7.15}$$
(4) $\theta^0=\theta_{j^0}$, $\sigma^0=\sigma(t^1)$: the new support has the components
$$\bar T_B=T_B\cup\{t^1\},\qquad \bar J_B=J_B\setminus\{j^0\}.\tag{7.16}$$
The value of suboptimality for the support control $\{\bar v,\bar S_B\}$ takes the form
$$\beta(\bar v,\bar S_B)=\bigl(1-\theta^0\bigr)\beta(v,S_B)-\alpha\sigma^0,\tag{7.17}$$
where
$$\alpha=\begin{cases}\bigl|\tilde u(t^0)-u(t^0)\bigr|&\text{if }\theta^0=\theta(t^0),\\ \bigl|\tilde x_{j^0}-x_{j^0}\bigr|&\text{if }\theta^0=\theta_{j^0}.\end{cases}\tag{7.18}$$
(1) If $\beta(\bar v,\bar S_B)>\varepsilon$, then we perform the next iteration starting from the support control $\{\bar v,\bar S_B\}$. (2) If $\beta(\bar v,\bar S_B)=0$, then the control $\bar v$ is optimal for problem (2.3)–(2.6) in the class of discrete controls. (3) If $\beta(\bar v,\bar S_B)<\varepsilon$, then the control $\bar v$ is $\varepsilon$-optimal for problem (2.3)–(2.6) in the class of discrete controls. If we want the solution of problem (2.3)–(2.6) in the class of piecewise continuous controls, we pass to the final procedure when case (2) or (3) takes place.

7.3. Final Procedure

Let us assume that, for the new control $\bar v$, we have $\beta(\bar v,\bar S_B)\le\varepsilon$. With the use of the support $\bar S_B$, we construct a quasicontrol $\hat v=(\hat x,\hat u(t),\,t\in T)$:
$$\hat x_j=\begin{cases}x_{\min j}&\text{if }\Delta_j>0,\\ x_{\max j}&\text{if }\Delta_j<0,\\ \in[x_{\min j},x_{\max j}]&\text{if }\Delta_j=0,\end{cases}\quad j\in J,\qquad \hat u(t)=\begin{cases}u_{\min}&\text{if }\Delta(t)>0,\\ u_{\max}&\text{if }\Delta(t)<0,\\ \in[u_{\min},u_{\max}]&\text{if }\Delta(t)=0,\end{cases}\quad t\in T_h.\tag{7.19}$$
If
$$A(I,J)\hat x+\int_0^{t_f}h(t)\hat u(t)\,dt=g,\tag{7.20}$$
then $\hat v$ is optimal, and if
$$A(I,J)\hat x+\int_0^{t_f}h(t)\hat u(t)\,dt\ne g,\tag{7.21}$$
then denote $T^0=\{t_i\in T,\ \Delta(t_i)=0\}$, where the $t_i$ are the zeros of the optimal cocontrol, that is, $\Delta(t_i)=0$, $i=\overline{1,s}$, with $s\le m$. Suppose that
$$\dot\Delta(t_i)\ne0,\quad i=\overline{1,s}.\tag{7.22}$$
Let us construct the following function:
$$f(\Theta)=A(I,J_B)x_{J_B}+A(I,J_H)x_{J_H}+\sum_{i=0}^{s}\left(\frac{u_{\max}+u_{\min}}{2}-\frac{u_{\max}-u_{\min}}{2}\,\mathrm{sign}\,\dot\Delta(t_i)\right)\int_{t_i}^{t_{i+1}}h(t)\,dt-g,\tag{7.23}$$
where
$$x_j=\frac{x_{\min j}+x_{\max j}}{2}-\frac{x_{\max j}-x_{\min j}}{2}\,\mathrm{sign}\,\Delta_j,\quad j\in J_H,\qquad t_0=0,\quad t_{s+1}=t_f,\qquad \Theta=\bigl(t_i,\ i=\overline{1,s};\ x_j,\ j\in J_B\bigr).\tag{7.24}$$
The final procedure consists in finding the solution
$$\Theta^0=\bigl(t_i^0,\ i=\overline{1,s};\ x_j^0,\ j\in J_B\bigr)\tag{7.25}$$
of the system of $m$ nonlinear equations
$$f(\Theta)=0.\tag{7.26}$$
We solve this system by the Newton method, using as an initial approximation the vector
$$\Theta^{(0)}=\bigl(t_i,\ i=\overline{1,s};\ x_j,\ j\in J_B\bigr).\tag{7.27}$$
The $(k+1)$th approximation $\Theta^{(k+1)}$, at step $k+1\ge1$, is computed as
$$\Theta^{(k+1)}=\Theta^{(k)}+\Delta\Theta^{(k)},\qquad \Delta\Theta^{(k)}=-\left(\frac{\partial f(\Theta^{(k)})}{\partial\Theta^{(k)}}\right)^{-1}f\bigl(\Theta^{(k)}\bigr).\tag{7.28}$$
Let us compute the Jacobi matrix for (7.26):
$$\frac{\partial f(\Theta^{(k)})}{\partial\Theta^{(k)}}=\Bigl(A(I,J_B);\ (u_{\min}-u_{\max})\,\mathrm{sign}\,\dot\Delta\bigl(t_i^{(k)}\bigr)\,h\bigl(t_i^{(k)}\bigr),\ i=\overline{1,s}\Bigr).\tag{7.29}$$
Since $\det P_B\ne0$, we can easily show that
$$\det\frac{\partial f(\Theta^{(0)})}{\partial\Theta^{(0)}}\ne0.\tag{7.30}$$
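The Newton iteration (7.28) with the stopping rule (7.31) can be sketched generically; the small two-dimensional system below is a made-up stand-in for (7.26), used only to show the update formula:

```python
import numpy as np

def newton(f, jac, theta0, eta=1e-10, max_iter=50):
    """Newton iteration Theta <- Theta - J(Theta)^{-1} f(Theta),
    stopped when ||f(Theta)|| <= eta (cf. (7.28) and (7.31))."""
    theta = np.array(theta0, dtype=float)
    for _ in range(max_iter):
        F = f(theta)
        if np.linalg.norm(F) <= eta:
            return theta
        theta = theta - np.linalg.solve(jac(theta), F)
    return theta

# Toy instance: f(a, b) = (a^2 - b, b - 2), with root (sqrt(2), 2).
f   = lambda th: np.array([th[0] ** 2 - th[1], th[1] - 2.0])
jac = lambda th: np.array([[2 * th[0], -1.0], [0.0, 1.0]])
root = newton(f, jac, [1.0, 1.0])
```

As in the paper, a nonsingular Jacobian at the starting point is what guarantees the step is well defined.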

For instants $t\in T_B$, there exists a small $\mu>0$ such that, for any $\tilde t_i\in[t_i-\mu,t_i+\mu]$, $i=\overline{1,s}$, the matrix $\bigl(h(\tilde t_i),\ i=\overline{1,s}\bigr)$ is nonsingular, and the matrix $\partial f(\Theta^{(k)})/\partial\Theta^{(k)}$ is also nonsingular if the elements $t_i^{(k)}$, $i=\overline{1,s}$, $k=1,2,\dots$, do not leave the $\mu$-vicinity of the $t_i$, $i=\overline{1,s}$.

The vector $\Theta^{(k^*)}$ is taken as a solution of (7.26) if
$$\bigl\|f\bigl(\Theta^{(k^*)}\bigr)\bigr\|\le\eta,\tag{7.31}$$
for a given $\eta>0$. So we put $\Theta^0=\Theta^{(k^*)}$.

The suboptimal control for problem (2.3)–(2.6) is computed as
$$x_j^0=\begin{cases}x_j^0,&j\in J_B,\\ \hat x_j,&j\in J_H,\end{cases}\qquad u^0(t)=\frac{u_{\max}+u_{\min}}{2}-\frac{u_{\max}-u_{\min}}{2}\,\mathrm{sign}\,\dot\Delta\bigl(t_i^0\bigr),\quad t\in\bigl[t_i^0,t_{i+1}^0\bigr),\ i=\overline{1,s}.\tag{7.32}$$
If the Newton method does not converge, we decrease the parameter $h>0$ and perform the iterative process again.

8. Example

We illustrate the results obtained in this paper with the following example:
$$\int_0^{25}u(t)\,dt\longrightarrow\min,$$
$$\dot x_1=x_3,\quad \dot x_2=x_4,\quad \dot x_3=-x_1+x_2+u,\quad \dot x_4=0.1x_1-1.01x_2,$$
$$x_1(0)=0.1,\quad x_2(0)=0.25,\quad x_3(0)=2,\quad x_4(0)=1,$$
$$x_1(25)=x_2(25)=x_3(25)=x_4(25)=0,$$
$$x_{\min}\le x\le x_{\max},\qquad 0\le u(t)\le1,\quad t\in[0,25].\tag{8.1}$$

Let the matrices be
$$A=\begin{pmatrix}0&0&1&0\\0&0&0&1\\-1&1&0&0\\0.1&-1.01&0&0\end{pmatrix},\qquad h(t)=\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{pmatrix},\qquad g=\begin{pmatrix}0\\0\\0\\0\end{pmatrix},\qquad x_{\min}=\begin{pmatrix}-4\\-4\\-4\\-4\end{pmatrix},\qquad x_{\max}=\begin{pmatrix}4\\4\\4\\4\end{pmatrix}.\tag{8.2}$$

We introduce the adjoint system, which is defined as
$$\dot\psi_1=-\psi_3+0.1\psi_4,\quad \dot\psi_2=\psi_3-1.01\psi_4,\quad \dot\psi_3=\psi_1,\quad \dot\psi_4=\psi_2,$$
$$\psi_1(t_f)=0,\quad \psi_2(t_f)=0,\quad \psi_3(t_f)=0,\quad \psi_4(t_f)=0.\tag{8.3}$$

Problem (8.1) is reduced to the canonical form (2.3)–(2.6) by introducing the new variable $\dot x_5=u$, $x_5(0)=0$. Then the control criterion takes the form $-x_5(t_f)\to\max$. In the class of discrete controls with quantization period $h=25/1000=0.025$, problem (8.1) is equivalent to an LP problem of dimension $4\times1000$.
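The reduction can be checked numerically with a simple forward integration of the augmented system; the integrator (plain Euler on the quantized grid) and the test control below are illustrative choices, not the method of the paper:

```python
import numpy as np

# System matrices of example (8.1); b picks out the control channel x3' = ... + u.
A = np.array([[0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [-1.0, 1.0, 0.0, 0.0],
              [0.1, -1.01, 0.0, 0.0]])
b = np.array([0.0, 0.0, 1.0, 0.0])

def simulate(u_grid, h=0.025, x0=(0.1, 0.25, 2.0, 1.0)):
    """Forward Euler on the quantized grid; x5 accumulates integral(u dt)."""
    x, x5 = np.array(x0, dtype=float), 0.0
    for u in u_grid:
        x  = x + h * (A @ x + b * u)
        x5 = x5 + h * u
    return x, x5

x_T, cost = simulate(np.ones(1000))   # constant control u(t) = 1 on [0, 25]
```

For the constant control $u\equiv1$, the accumulated cost $x_5(25)$ equals the horizon length, as expected from the criterion $\int_0^{25}u\,dt$.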

To construct the optimal open-loop control of problem (8.1), the set $T_B=\{5,10,15,20\}$ was selected as the initial support. This support corresponds to the set of nonsupport zeros of the cocontrol $T_{n0}=\{2.956, 5.4863, 9.55148, 12.205, 17.6190, 19.0372\}$. The problem was solved in 26 iterations; that is, to construct the optimal open-loop control, a $4\times4$ support matrix was changed 26 times. The optimal value of the control criterion was found to be equal to 6.602054, obtained in 2.92 s.

Table 1 contains some information on the solution of problem (8.1) for other quantization periods.

Table 1

Of course, one can solve problem (8.1) by LP methods, transforming it into a static LP problem. In doing so, one integration of the system is sufficient to form the matrix of the LP problem. However, such a "static" approach requires a large amount of operative memory, and it is fundamentally different from the traditional "dynamical" approaches based on the dynamical models (2.3)–(2.6).

Figure 1 shows the optimal control $u(t)$ and the switching function of the minimum principle. Figure 2 shows the phase portrait $(x_1,x_3)$ of system (8.1). Figure 3 shows the state variables $x_1(t)$ and $x_2(t)$, and Figure 4 shows the state variables $x_3(t)$ and $x_4(t)$.

Figure 1: Optimal control 𝑒(𝑑) and switching function.
Figure 2: Phase portrait $(x_1(t),x_3(t))$.
Figure 3: Optimal state variables π‘₯1(𝑑),π‘₯2(𝑑).
Figure 4: Optimal state variables π‘₯3(𝑑),π‘₯4(𝑑).

References

  1. L. S. Pontryagin, V. G. Boltyanski, R. Gamkrelidze, and E. F. Mischenko, The Mathematical Theory of Optimal Processes, Interscience Publishers, New York, NY, USA, 1962.
  2. R. E. Bellman, Dynamic Programming, Princeton University Press, Princeton, NJ, USA, 1963.
  3. R. E. Bellman, I. Glicksberg, and O. A. Gross, Some Aspects of the Mathematical Theory of Control Processes, Rand Corporation, Santa Monica, Calif, USA, 1958, Report R-313.
  4. N. V. Balashevich, R. Gabasov, and F. M. Kirillova, "Calculating an optimal program and an optimal control in a linear problem with a state constraint," Computational Mathematics and Mathematical Physics, vol. 45, no. 12, pp. 2030–2048, 2005.
  5. R. Gabasov, F. M. Kirillova, and N. V. Balashevich, "On the synthesis problem for optimal control systems," SIAM Journal on Control and Optimization, vol. 39, no. 4, pp. 1008–1042, 2000.
  6. R. Gabasov, N. M. Dmitruk, and F. M. Kirillova, "Decentralized optimal control of a group of dynamical objects," Computational Mathematics and Mathematical Physics, vol. 48, no. 4, pp. 561–576, 2008.
  7. N. V. Balashevich, R. Gabasov, and F. M. Kirillova, β€œNumerical methods of program and positional optimization of the linear control systems,” Zh Vychisl Mat Mat Fiz, vol. 40, no. 6, pp. 838–859, 2000.
  8. R. Gabasov, Adaptive Method of Linear Programming, University of Karlsruhe, Institute of Statistics and Mathematics, Karlsruhe, Germany, 1993.
  9. L. Kahina and A. Mohamed, "Optimization of a problem of optimal control with free initial state," Applied Mathematical Sciences, vol. 4, no. 5–8, pp. 201–216, 2010.