#### Abstract

We introduce a formulation for time-optimal control problems of systems displaying fractional dynamics in the sense of the Riemann-Liouville fractional derivative operator. To propose a solution to the general time-optimal problem, a rational approximation based on the Hankel data matrix of the impulse response is considered to emulate the behavior of the fractional differentiation operator. The original problem is then reformulated according to the new model, which can be solved by traditional optimal control problem solvers. The time-optimal problem is extensively investigated for a double fractional integrator, and its solution is obtained using either numerical optimization or time-domain analysis.

#### 1. Introduction

In the world surrounding us, the physical laws of dynamics are not always obeyed by all systems. When the considered systems are complex or can only be studied on a macroscopic scale, they sometimes deviate from the traditional integer-order dynamic laws. In some cases, their dynamics follow fractional-order laws, meaning that their behavior is governed by fractional-order differential equations [1]. As an illustration, it can be found in the literature [2] that materials with memory and hereditary effects, and dynamical processes such as gas diffusion and heat conduction in fractal porous media, can be modeled more precisely using fractional-order models than integer-order models. Another vein of research has identified the dynamics of a frog's muscle to display fractional-order behavior [3].

Optimal Control Problems (OCPs), or Integer-Order Optimal Controls (IOOCs), arise in a wide variety of fields, such as engineering and science, but also economics. The field of IOOCs has been investigated for a long time, and a large collection of numerical techniques has been developed to solve this category of problems [4].

The main objective of an OCP is to obtain control input signals that will make a given system or process satisfy a given set of physical constraints (either on the system's states or control inputs) while extremizing a performance criterion or cost function. Fractional Optimal Control Problems (FOCPs) are OCPs in which the criterion and/or the differential equations governing the dynamics of the system display at least one fractional derivative operator. The first record of a formulation of the FOCP was given in [5]. This formulation was general but included constraints on the system's states and control inputs. Later, a general definition of the FOCP, similar to the general definition of OCPs, was formulated in [6]. The number of publications linked to FOCPs remains limited, since the problem has only recently been considered.

Over the last decade, the framework of FOCPs has taken shape and grown. In [5], Agrawal gives a general formulation of FOCPs in the Riemann-Liouville (RL) sense and proposes a numerical method to solve FOCPs based on variational virtual work coupled with the Lagrange multiplier technique. In [7], the fractional derivatives (FDs) of the system are approximated using the Grünwald-Letnikov definition, providing a set of algebraic equations that can be solved using numerical techniques. The problem is defined in terms of the Caputo fractional derivatives in [8], and an iterative numerical scheme is introduced to solve the problem numerically. Distributed systems are considered in [9], and an eigenfunction decomposition is used to solve the problem. Özdemir et al. [10] also use an eigenfunction expansion approach to formulate an FOCP of a two-dimensional distributed system. Cylindrical coordinates for the distributed system are considered in [11]. A modified Grünwald-Letnikov approach is introduced in [12], which leads to a central difference scheme. Frederico and Torres [13–15], using similar definitions of FOCPs, formulated a Noether-type theorem in the general context of fractional optimal control in the sense of Caputo and studied fractional conservation laws in FOCPs. In [6], a rational approximation of the fractional derivative operator is used to link FOCPs and traditional IOOCs. A new solution scheme is proposed in [16], based on a different expansion formula for fractional derivatives.

In this article, we introduce a formulation of a special class of FOCP: the Fractional Time-Optimal Control Problem (FTOCP). Time-optimal control problems are also referred to in the literature as minimum-time control problems, free-final-time optimal control problems, or bang-bang control problems. These different denominations define the same kind of optimal control problem, in which the purpose is to transfer a system from a given initial state to a specified final state in minimum time. So far, this special class of FOCPs has been disregarded in the literature. In [6], such a problem was solved as an example to demonstrate the capability and generality of the proposed method, but no thorough study was conducted.

The article is organized as follows. In Section 2, we give the definitions of fractional derivatives in the RL sense and FOCP and introduce the formulation of FTOCP. In Section 3, we consider the problem of the time-optimal control of a fractional double integrator and propose different schemes to solve the problem. In Section 4, the solution to the problem is obtained for each approach for a given system. Finally, we give our conclusions in Section 5.

#### 2. Formulation of the Fractional Time-Optimal Control Problem

##### 2.1. The Fractional Derivative Operator

There exist several definitions of the fractional derivative operator: Riemann-Liouville, Caputo, Grünwald-Letnikov, Weyl, as well as Marchaud and Riesz [17–20]. Here, we are interested in the Riemann-Liouville definition of the fractional derivatives for the formulation of the FOCP.

The Left Riemann-Liouville Fractional Derivative (LRLFD) of a function $f$ is defined as

$$
{}_aD_t^\alpha f(t) = \frac{1}{\Gamma(n-\alpha)}\,\frac{d^n}{dt^n}\int_a^t (t-\tau)^{n-\alpha-1} f(\tau)\,d\tau, \tag{2.1}
$$

where $\Gamma(\cdot)$ is the Gamma function, defined for any complex number $z$ with $\Re(z) > 0$ as

$$
\Gamma(z) = \int_0^\infty t^{z-1} e^{-t}\,dt, \tag{2.2}
$$

and where the order $\alpha$ of the derivative satisfies $n-1 \le \alpha < n$. The Right Riemann-Liouville Fractional Derivative (RRLFD) is defined as

$$
{}_tD_b^\alpha f(t) = \frac{1}{\Gamma(n-\alpha)}\left(-\frac{d}{dt}\right)^{n}\int_t^b (\tau-t)^{n-\alpha-1} f(\tau)\,d\tau. \tag{2.3}
$$
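As a numerical aside (not part of the original development), the Riemann-Liouville derivative coincides, for well-behaved functions, with the Grünwald-Letnikov limit of fractional-order finite differences, which gives a simple way to evaluate it. The sketch below, with an assumed step count, checks the known closed form ${}_0D_t^{1/2}\, t = t^{1/2}/\Gamma(3/2)$:

```python
import math

def gl_fractional_derivative(f, t, alpha, n=4000):
    """Grunwald-Letnikov approximation of the left RL derivative
    0D_t^alpha f evaluated at time t, using n steps on [0, t]."""
    h = t / n
    c = 1.0          # GL weight c_0 = 1
    acc = f(t)       # k = 0 term
    for k in range(1, n + 1):
        c *= (k - 1 - alpha) / k   # recursion for the binomial weights
        acc += c * f(t - k * h)
    return acc / h**alpha

# D^{1/2} of f(t) = t at t = 1 is 1 / Gamma(3/2) = 2 / sqrt(pi)
approx = gl_fractional_derivative(lambda t: t, 1.0, 0.5)
exact = 1.0 / math.gamma(1.5)
print(approx, exact)
```

The weights follow the recursion $c_k = c_{k-1}(k-1-\alpha)/k$, so the whole memory of the interval $[0,t]$ enters the sum, which is the computational signature of fractional dynamics.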

##### 2.2. Fractional Optimal Control Problem Formulation

With the RL definition of the fractional derivative operator given in (2.1) and (2.3), we can specify a general FOCP: find the optimal control $u(t)$ for a fractional dynamical system that minimizes the performance criterion

$$
J(u) = g_o(x(b)) + \int_a^b f_o(x(t), u(t), t)\,dt \tag{2.4}
$$

subject to the system dynamics

$$
{}_aD_t^\alpha x(t) = f(x(t), u(t), t), \tag{2.5}
$$

with initial condition

$$
x(a) = x_a, \tag{2.6}
$$

and with the following constraints:

$$
f_{ti}(x(t), u(t), t) \le 0, \qquad g_{ei}(x(b)) \le 0, \qquad g_{ee}(x(b)) = 0, \tag{2.7}
$$

where $x$ is the state variable, $t \in [a, b]$ stands for the time, and $f$, $f_o$, $f_{ti}$, $g_o$, $g_{ei}$, and $g_{ee}$ are arbitrary given nonlinear functions. The subscripts $o$, $ti$, $ei$, and $ee$ on the functions $f$ and $g$ stand for, respectively, objective function, trajectory constraint, endpoint inequality constraint, and endpoint equality constraint.

##### 2.3. Fractional Time-Optimal Control Problem Formulation

An FTOCP is defined by a performance index of the form

$$
J(u) = t_f - t_0 = \int_{t_0}^{t_f} dt, \tag{2.8}
$$

which appears when we desire to minimize the time required to reach a given target $x_{target}$ given some initial conditions $x(t_0) = x_0$.

Under such conditions, the problem is to transfer the system, whose dynamics are given by

$$
{}_0D_t^\alpha x(t) = f(x(t), u(t), t), \tag{2.9}
$$

from a given initial state $x(t_0) = x_0$ to the desired state $x(t_f) = x_{target}$. The minimum time required to reach the target is defined as $t_f^* = \min\{t_f : x(t_f) = x_{target}\}$.

To ensure that the problem has a solution, the control variables are required to be constrained in the following way:

$$
u_{min} \le u(t) \le u_{max}. \tag{2.10}
$$

In the following, we will make use of the minimum principle to determine the optimal control law $u^*(t)$ for the previously defined problem. We define the state trajectory associated with the optimal control law as $x^*(t)$.

We define the Hamiltonian for the problem described by the dynamic system (2.9) and the criterion (2.8) as

$$
H(x(t), u(t), \lambda(t), t) = 1 + \lambda^T(t)\, f(x(t), u(t), t),
$$

where $\lambda(t)$ stands for the costate variable. The optimal costate variable is defined as $\lambda^*(t)$.

In the case of constrained systems, the results given in [5] do not apply directly, since the control function is constrained and does not admit arbitrary variations. Indeed, if the control lies on the boundary defined by (2.10), then the variations $\delta u$ are not arbitrary. Instead, we need to use Pontryagin's minimum principle [4]. Following the proof of the theorem in [21], we consider variations $\delta u(t)$ in the control signal, writing $u(t) = u^*(t) + \delta u(t)$. We define the increment $\Delta J$ and the (first) variation $\delta J$ of the performance index by

$$
\Delta J(u^*, \delta u) = J(u^* + \delta u) - J(u^*) = \delta J(u^*, \delta u) + (\text{higher-order terms}),
$$

where the first variation $\delta J(u^*, \delta u)$ is the part of the increment that is linear in $\delta u$. With the constraints (2.10), and making the assumption that all admissible variations of the control are small enough to ensure that the sign of the increment $\Delta J$ is determined by the sign of the variation $\delta J$, the necessary condition on $u^*$ to minimize $J$ becomes

$$
\delta J(u^*, \delta u) \ge 0
$$

for all admissible variations $\delta u$. According to [21, Chapter 2], the first variation can be expressed as an integral involving the partial derivatives of the Hamiltonian with respect to the state, costate, and control, weighted by the corresponding variations $\delta x$, $\delta\lambda$, and $\delta u$.

In the previous expression:

(1) if the optimal state equations are satisfied, then we obtain the state relation

$$
{}_0D_t^\alpha x^*(t) = \frac{\partial H}{\partial \lambda}\bigg|_* = f(x^*(t), u^*(t), t);
$$

(2) if the costate is chosen so that the coefficient of the dependent variation $\delta x$ in the integrand is identically zero, then we obtain the costate equation

$$
{}_tD_{t_f}^\alpha \lambda^*(t) = \frac{\partial H}{\partial x}\bigg|_*;
$$

(3) the boundary condition is chosen so that it results in the auxiliary boundary condition.

When all of the previously enumerated items are satisfied, the first variation can be reformulated as

$$
\delta J(u^*, \delta u) = \int_{t_0}^{t_f} \left(\frac{\partial H}{\partial u}\right)^{*T} \delta u(t)\, dt.
$$

The integrand in the previous relation is the first-order approximation to the change in the Hamiltonian due to a change in $u$ alone. This means that, by definition,

$$
\left(\frac{\partial H}{\partial u}\right)^{*T} \delta u(t) \approx H(x^*(t), u^*(t) + \delta u(t), \lambda^*(t), t) - H(x^*(t), u^*(t), \lambda^*(t), t);
$$

combining the previous two equations leads us to

$$
\delta J(u^*, \delta u) = \int_{t_0}^{t_f} \left[H(x^*(t), u^*(t) + \delta u(t), \lambda^*(t), t) - H(x^*(t), u^*(t), \lambda^*(t), t)\right] dt.
$$

Now, using the above, the necessary condition becomes

$$
\int_{t_0}^{t_f} \left[H(x^*(t), u^*(t) + \delta u(t), \lambda^*(t), t) - H(x^*(t), u^*(t), \lambda^*(t), t)\right] dt \ge 0
$$

for all admissible $\delta u(t)$ with norm less than a small value. The relation becomes

$$
H(x^*(t), u^*(t) + \delta u(t), \lambda^*(t), t) \ge H(x^*(t), u^*(t), \lambda^*(t), t).
$$

Replacing $u^*(t) + \delta u(t)$ by $u(t)$, the necessary condition becomes

$$
H(x^*(t), u(t), \lambda^*(t), t) \ge H(x^*(t), u^*(t), \lambda^*(t), t) \quad \text{for all admissible } u(t).
$$

When applied to our problem, we obtain

$$
1 + \lambda^{*T}(t)\, f(x^*(t), u(t), t) \ge 1 + \lambda^{*T}(t)\, f(x^*(t), u^*(t), t),
$$

which can be simplified to

$$
\lambda^{*T}(t)\, f(x^*(t), u(t), t) \ge \lambda^{*T}(t)\, f(x^*(t), u^*(t), t).
$$

The state and costate equations can be retrieved from [5] and give

$$
{}_0D_t^\alpha x^*(t) = f(x^*(t), u^*(t), t), \qquad {}_tD_{t_f}^\alpha \lambda^*(t) = \left(\frac{\partial f}{\partial x}\right)^{T}\bigg|_*\, \lambda^*(t),
$$

where we again note that the final time $t_f$ is free. We can notice that $u^*(t)$ is the control signal that causes the Hamiltonian to take its minimum value.

Let us consider the simple case of a fractional linear system with the following dynamics:

$$
{}_0D_t^\alpha x(t) = A x(t) + B u(t).
$$

The state and costate equations become

$$
{}_0D_t^\alpha x^*(t) = A x^*(t) + B u^*(t), \qquad {}_tD_{t_f}^\alpha \lambda^*(t) = A^T \lambda^*(t),
$$

with $t_f$ free. Using Pontryagin's minimum principle, we have

$$
\lambda^{*T}(t)\, B u(t) \ge \lambda^{*T}(t)\, B u^*(t).
$$

Defining $q(t) = B^T \lambda^*(t)$ gives us

$$
q^T(t)\, u(t) \ge q^T(t)\, u^*(t). \tag{2.31}
$$

We can now derive the optimal control sequence $u^*(t)$. Given (2.31),

(1) if $q(t)$ is positive, then the optimal control must be the smallest admissible control value, so that

$$
u^*(t) = u_{min}; \tag{2.32}
$$

(2) and if $q(t)$ is negative, then the optimal control must be the largest admissible control value, so that

$$
u^*(t) = u_{max}. \tag{2.33}
$$

Combining (2.32) and (2.33) gives us the following control law:

$$
u^*(t) = \begin{cases} u_{min}, & q(t) > 0, \\ u_{max}, & q(t) < 0. \end{cases}
$$

In Section 3, we provide several numerical methods to obtain the control $u^*(t)$, state $x^*(t)$, and costate $\lambda^*(t)$ for a specific problem from the literature.

#### 3. Solution of the Time-Optimal Control of a Fractional Double Integrator

In this section, we consider the following FTOCP: find the control $u(t)$, with $|u(t)| \le u_{max}$, that minimizes

$$
J(u) = t_f \tag{3.1}
$$

subject to

$$
{}_0D_t^\alpha x_1(t) = x_2(t), \qquad {}_0D_t^\alpha x_2(t) = u(t), \tag{3.2}
$$

with initial conditions $x_1(0) = x_{1,0}$, $x_2(0) = x_{2,0}$ and final state constraints $x_1(t_f) = x_{1,target}$, $x_2(t_f) = 0$.
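Before solving the optimal control problem, it helps to be able to simulate (3.2) forward in time. The sketch below is an illustration only (the Grünwald-Letnikov discretization, step size, and constant test input are assumptions, not the paper's solver); it checks the simulated response against the closed-form response of a fractional integrator to a constant input, $I^\alpha u_{const} = u_{const}\, t^\alpha/\Gamma(\alpha+1)$:

```python
import math

def frac_integral_series(u, alpha, h):
    """Order-alpha fractional integral of the sampled signal u,
    using Grunwald-Letnikov weights; returns values on the whole grid."""
    n = len(u)
    w = [1.0] * n
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 + alpha) / k   # coefficients of (1-z)^(-alpha)
    return [h**alpha * sum(w[k] * u[i - k] for k in range(i + 1))
            for i in range(n)]

# Fractional double integrator 0D^a x1 = x2, 0D^a x2 = u, under constant u = 1
alpha, h, n = 0.5, 0.01, 101          # grid covers t in [0, 1]
u = [1.0] * n
x2 = frac_integral_series(u, alpha, h)
x1 = frac_integral_series(x2, alpha, h)
print(x2[-1], 1.0 / math.gamma(alpha + 1))      # analytic: t^a / Gamma(a+1) at t = 1
print(x1[-1], 1.0 / math.gamma(2 * alpha + 1))  # analytic: t^{2a} / Gamma(2a+1)
```

Composing the order-$\alpha$ integral twice reproduces the order-$2\alpha$ integral, as the check on `x1` confirms.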

##### 3.1. Solution Using Rational Approximation of the Fractional Operator

Time-optimal control problems can be reformulated as traditional optimal control problems by augmenting the system dynamics with additional states (one additional state for autonomous problems). For that purpose, the first step is to specify a nominal time interval for the problem and to define a scaling factor, adjustable by the optimization procedure, that scales the system dynamics and hence, in effect, the duration of the time interval. This scale factor and the scaled time are represented by the extra states.

The problem defined by (3.1)-(3.2) can accordingly be reformulated as follows: find the control $u(t)$ (satisfying $|u(t)| \le u_{max}$) over the normalized time interval $[0, 1]$, which minimizes the performance index

$$
J(u) = x_3(1),
$$

subject to the following dynamics:

$$
{}_0D_t^\alpha x_1(t) = x_3^\alpha(t)\, x_2(t), \qquad {}_0D_t^\alpha x_2(t) = x_3^\alpha(t)\, u(t), \qquad \dot{x}_3(t) = 0,
$$

where the initial conditions are

$$
x_1(0) = x_{1,0}, \qquad x_2(0) = x_{2,0}, \qquad x_3(0) = T_0,
$$

where $T_0$ is the initial value of the time-scale factor chosen by the user. The final state constraints are

$$
x_1(1) = x_{1,target}, \qquad x_2(1) = 0.
$$

According to [22], we can approximate the fractional-order operator with a finite-dimensional state-space model whose impulse response matches that of the fractional operator over the horizon of interest. Such an approximation is called a rational approximation of the fractional operator. To ensure the applicability of our method, we need to define a new augmented state vector gathering the original states together with the states of the rational approximations for the fractional-order system described by (3.2).
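The reference above builds its approximation from the Hankel matrix of the impulse-response data. As an illustrative stand-in (not the method of [22]), the widely used Oustaloup recursive filter also produces a rational, state-space-realizable approximation of $s^\alpha$ over a chosen frequency band; the band edges and filter order below are assumed values:

```python
import numpy as np

def oustaloup_zpk(alpha, wb=1e-2, wh=1e2, N=5):
    """Zeros, poles, and gain of a rational approximation of s^alpha
    on the frequency band [wb, wh] (Oustaloup recursive filter)."""
    k = np.arange(1, N + 1)
    zeros = wb * (wh / wb) ** ((2 * k - 1 - alpha) / (2 * N))
    poles = wb * (wh / wb) ** ((2 * k - 1 + alpha) / (2 * N))
    gain = wh ** alpha
    return zeros, poles, gain

def freq_resp(zeros, poles, gain, w):
    """Frequency response of the zpk model at angular frequency w."""
    s = 1j * w
    return gain * np.prod((s + zeros) / (s + poles))

z, p, k = oustaloup_zpk(0.5)
H = freq_resp(z, p, k, 1.0)           # evaluate at w = 1 rad/s (band center)
print(abs(H), np.angle(H, deg=True))  # ideal operator: |H| = 1, phase = 45 deg
```

At the band's geometric center the ideal operator $(j\omega)^{1/2}$ has unit gain and a 45-degree phase lead, which the filter reproduces closely; converting the zero-pole-gain form to state space then yields the kind of model used in the reformulation above.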

Using the methodology proposed in [6], we reformulate the problem defined by (3.1)-(3.2) once more: find the control $u(t)$ (satisfying $|u(t)| \le u_{max}$) which minimizes the performance index subject to the dynamics of the augmented integer-order system, its initial condition, and the final state constraints, all expressed in terms of the augmented state vector.

Such a definition allows the problem to be solved by any traditional optimal control problem solver.

##### 3.2. Solution Using Bang-Bang Control Theory

The problem defined by (3.1)-(3.2) can also be solved using bang-bang control theory. In the integer case ($\alpha = 1$), the time-domain solution is well documented and, for a rest-to-rest transfer to a target ahead of the initial position, is given by

$$
u(t) = \begin{cases} u_{max}, & 0 \le t \le t_s, \\ -u_{max}, & t_s < t \le t_f, \end{cases}
$$

where $t_s$ is the switching time.
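As a sanity check of the integer-order case (with assumed data $u_{max} = 1$ and a rest-to-rest transfer from the origin to $x_{1,target} = 1$), a forward-Euler simulation of the bang-bang input with $t_s = t_f/2$ and $t_f = 2\sqrt{x_{1,target}/u_{max}}$ lands on the target:

```python
import math

# Hypothetical problem data: x1'' = u, |u| <= U, from (0, 0) to (A, 0)
U, A = 1.0, 1.0
tf = 2.0 * math.sqrt(A / U)   # classical minimum time
ts = tf / 2.0                 # switch halfway through

# Forward-Euler simulation of the bang-bang control
n = 20000
h = tf / n
x1 = x2 = 0.0
for i in range(n):
    t = i * h
    u = U if t < ts else -U   # accelerate, then decelerate
    x1 += h * x2
    x2 += h * u
print(x1, x2)                 # should end near (A, 0)
```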

Such a solution is obtained by integrating the system's dynamic equations in the time domain, assuming zero initial conditions. For $0 \le t \le t_s$ (acceleration),

$$
x_2(t) = u_{max}\, t, \qquad x_1(t) = \frac{u_{max}}{2}\, t^2,
$$

and for $t_s < t \le t_f$ (deceleration),

$$
x_2(t) = u_{max}\, t - 2 u_{max}(t - t_s), \qquad x_1(t) = \frac{u_{max}}{2}\, t^2 - u_{max}(t - t_s)^2.
$$

Finding the switching time $t_s$ and the final time $t_f$ can be done by solving the system of final-time conditions $x_2(t_f) = 0$ and $x_1(t_f) = x_{1,target}$. We can apply a similar technique to solve the fractional problem. For $0 \le t \le t_s$ (acceleration),

$$
x_2(t) = \frac{u_{max}}{\Gamma(\alpha+1)}\, t^\alpha, \qquad x_1(t) = \frac{u_{max}}{\Gamma(2\alpha+1)}\, t^{2\alpha},
$$

and for $t_s < t \le t_f$ (deceleration),

$$
x_2(t) = \frac{u_{max}}{\Gamma(\alpha+1)}\left[t^\alpha - 2(t - t_s)^\alpha\right], \qquad x_1(t) = \frac{u_{max}}{\Gamma(2\alpha+1)}\left[t^{2\alpha} - 2(t - t_s)^{2\alpha}\right].
$$

Finding the switching time and the final time can again be achieved by solving the system of final-time conditions, which, when expanded, becomes

$$
t_f^\alpha - 2(t_f - t_s)^\alpha = 0, \qquad \frac{u_{max}}{\Gamma(2\alpha+1)}\left[t_f^{2\alpha} - 2(t_f - t_s)^{2\alpha}\right] = x_{1,target}.
$$

From the first condition, $(t_f - t_s)^\alpha = t_f^\alpha/2$, so that $(t_f - t_s)^{2\alpha} = t_f^{2\alpha}/4$ and $t_s = \left(1 - 2^{-1/\alpha}\right) t_f$. Substituting into the second condition yields the solution

$$
t_f = \left[\frac{2\,\Gamma(2\alpha+1)\, x_{1,target}}{u_{max}}\right]^{1/(2\alpha)}, \qquad t_s = \left(1 - 2^{-1/\alpha}\right) t_f. \tag{3.22}
$$

For $\alpha = 1$, this reduces to the classical result $t_f = 2\sqrt{x_{1,target}/u_{max}}$ with $t_s = t_f/2$.
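The final-time conditions for the fractional case can be cross-checked numerically. Assuming zero initial state and illustrative values $u_{max} = 1$, $x_{1,target} = 1$ (not taken from the paper), the closed form $t_f = \left[2\Gamma(2\alpha+1)x_{1,target}/u_{max}\right]^{1/(2\alpha)}$ with $t_s = (1 - 2^{-1/\alpha})t_f$ satisfies both conditions, and $\alpha = 1$ recovers the classical double-integrator answer:

```python
import math

def switching_times(alpha, A=1.0, U=1.0):
    """Minimum time tf and switch time ts for the fractional double
    integrator 0D^a x1 = x2, 0D^a x2 = u, |u| <= U, zero initial
    state, target (A, 0). Problem data are assumed for illustration."""
    tf = (2.0 * math.gamma(2 * alpha + 1) * A / U) ** (1.0 / (2 * alpha))
    ts = (1.0 - 2.0 ** (-1.0 / alpha)) * tf
    return ts, tf

# Verify the final-time conditions x2(tf) = 0 and x1(tf) = A for alpha = 0.8
alpha, A, U = 0.8, 1.0, 1.0
ts, tf = switching_times(alpha, A, U)
x2_tf = U / math.gamma(alpha + 1) * (tf**alpha - 2 * (tf - ts)**alpha)
x1_tf = U / math.gamma(2 * alpha + 1) * (tf**(2 * alpha) - 2 * (tf - ts)**(2 * alpha))
print(ts, tf, x1_tf, x2_tf)

# alpha = 1 recovers the classical answer ts = 1, tf = 2 for A = U = 1
print(switching_times(1.0))   # -> (1.0, 2.0)
```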

#### 4. Results

In this section, we find the solution to the problem defined in Section 3 for the following parameter values:

(i) … (ii) … (iii) …

The analytical solution of this system for $\alpha = 1$ (the traditional double integrator) is given in [23]. To solve the problem, we use the RIOTS_95 Matlab toolbox [24]. The acronym RIOTS stands for "Recursive Integration Optimal Trajectory Solver"; it is a Matlab toolbox developed to solve a wide variety of optimal control problems. For more information about the toolbox, please refer to [25].

Figure 1 shows the states as functions of time for $\alpha = 1$. Figures 2, 3, 4, 5, and 6 show the states as functions of time for different values of $\alpha$ (0.9, 0.8, 0.7, 0.6, and 0.5, respectively). We can observe that as the order $\alpha$ approaches 1, the optimal duration nears its value for the double integrator case.

Since it is possible to obtain the analytical solution of the problem from (3.22), we give in Figure 7 the plot of the duration of the control versus the order of the fractional derivative. As we can see, for $\alpha = 1$, the solution matches the results obtained for a double integrator.

#### 5. Conclusions

We developed a formulation for fractional time-optimal control problems. Such problems occur when the dynamics of the system can be modeled using the fractional derivative operator. The formulation made use of the Lagrange multiplier technique, Pontryagin's minimum principle, and the state and costate equations. Considering a specific set of dynamical equations, we were able to demonstrate the bang-bang nature of the solution of fractional time-optimal control problems, just as in the integer-order case. We were able to solve a special case using both optimal control theory and bang-bang control theory. The optimal control solution can be obtained using a rational approximation of the fractional derivative operator, whereas the bang-bang control solution is derived from the time-domain solution of the final time constraints. Both methods showed similar results, and in both cases, as the order approaches the integer value $\alpha = 1$, the numerical solutions for both the state and the control variables approach the analytical solutions for $\alpha = 1$.