Research Article | Open Access
A High-Precision Single Shooting Method for Solving Hypersensitive Optimal Control Problems
Abstract
Solving hypersensitive optimal control problems has been a longstanding challenge in optimization engineering, mainly due to the possible nonexistence of an optimal solution meeting the required error tolerance under double-precision arithmetic and the hypersensitivity of the optimal solution with respect to the initial conditions. In this paper, a new high-precision single shooting method is presented to address these two difficulties. Multiple-precision arithmetic and the Taylor series method are introduced to provide the accurate optimal solution with arbitrarily many significant digits and arbitrarily high integration accuracy, respectively. Besides, a new modified bidirectional single shooting method is developed, which fully exploits the three-segment structure of hypersensitive optimal control problems and provides an appropriate initial guess close to the optimal solution. Numerical demonstrations on a typical hypersensitive optimal control problem illustrate the effectiveness of the new method and indicate that the accurate optimal solution of this challenging problem can be obtained easily by this simple single shooting method within several iterations.
1. Introduction
An optimal control problem and its associated Hamiltonian boundary value problem (HBVP) are called hypersensitive if the time interval of interest is long relative to the rates of expansion and contraction of the linearized Hamiltonian dynamics in certain directions in a neighborhood of the optimal solution. Solving hypersensitive optimal control problems (HOCPs) is a challenging task that has been studied extensively for decades [1–6].
Typically, there are two types of methods, usually categorized as direct and indirect methods, for solving HOCPs. The direct method converts the optimal control problem into a nonlinear programming problem (NLP) by appropriate discretization [7]. Various direct methods have been developed during the past few decades, such as the direct multiple shooting method [8], gradient-based optimization techniques [9, 10], the differential inclusion method [11], the interior point method [7, 12], the collocation method [13, 14], the pseudospectral method [4, 15–17], and the control parameterization method [18–20]. Direct methods are straightforward and robust in accommodating complex conditions, such as path constraints and intermediate constraints. When applying direct methods to solve HOCPs, the key is to use a higher density of nodes in the first and third segments, according to the three-segment structure characteristic of HOCPs, which is described qualitatively as “takeoff,” “cruise,” and “landing” by analogy to an airport-to-airport trajectory for an aircraft [1]. Reference [3] introduced a simple method for mesh point distribution for solving HOCPs based on density functions, so that the problem of mesh refinement is converted to the problem of finding an appropriate density function. Reference [4] presented an hp-adaptive pseudospectral method to solve HOCPs, which iteratively determines the number of segments, the width of each segment, and the polynomial degree required in each segment. Reference [5] proposed a symplectic algorithm with nonuniform grids combined with density functions to solve HOCPs. Later, [21] applied GPOPS-II, a commercial software program that employs a Legendre-Gauss-Radau quadrature orthogonal collocation method, to solve HOCPs. In their method, an adaptive mesh refinement method is implemented to determine the number of mesh intervals and the degree of the approximating polynomial within each mesh interval to achieve a specified accuracy.
However, direct methods have several disadvantages for solving HOCPs. The first is that the optimal solution obtained by direct methods is only an approximation of the exact solution. The error between the two solutions is usually small and acceptable for most optimal control problems; however, this error may be extremely large and unacceptable for HOCPs, as demonstrated later in this paper. Besides, direct methods cannot provide insight regarding the optimal solutions [6].
Unlike direct methods, indirect methods transform the original problems into HBVPs by the first-order optimality conditions [22], which helps to provide the dynamical information of the systems. Besides, the obtained solution is guaranteed to be at least an extremal. However, it is extremely difficult to solve HOCPs by indirect methods, mainly due to the high sensitivity to the initial unknowns. Based on the three-segment structure, [1, 2, 23, 24] developed a dichotomic basis method to solve HOCPs. The core idea is the decomposition of the contracting and expanding behaviors of the system by coordinate transformation, and an iterative process is required due to the approximation of the dichotomic basis. Recently, [6] presented a finite-time Lyapunov analysis based method to construct an approximate dichotomic basis and developed a corresponding manifold-following solution approximation method. However, the optimal solutions obtained by the above methods are only near-optimal, and, as pointed out in [6], the sensitivity of the optimal solution to errors in the unknown boundary conditions may cause ill-conditioning, which potentially exceeds the available precision. Thus, to the best of the authors’ knowledge, the above difficulties have not been well resolved in the existing literature, and obtaining the accurate optimal solutions of HOCPs by indirect methods is still extremely daunting.
In this paper, a new high-precision single shooting method is presented to address the above two main difficulties in solving HOCPs by indirect methods: (i) the possible nonexistence of an optimal solution meeting the required error tolerance under double-precision arithmetic and (ii) the hypersensitivity of the optimal solution with respect to the initial conditions. Multiple-precision arithmetic and the Taylor series method, which provide more significant digits and higher integral accuracy, respectively, are introduced to address the first difficulty. A modified bidirectional single shooting method (MBISSM) is developed to address the second one; it introduces an intermediate point along the equilibrium segment as the starting point and shoots to both boundary points simultaneously. Finally, numerical simulations are provided to demonstrate the effectiveness and efficiency of the proposed method, which reveal that these challenging HOCPs can be easily solved within several iterations.
2. Hypersensitive Optimal Control Problem
Consider the optimal control problem with the dynamical equations

dx/dt = f(x(t), u(t)),    (1)

where x(t) and u(t) are the state and control vectors at time t, respectively, and f is a smooth function. The performance index is

J = ∫_{t0}^{tf} L(x(t), u(t)) dt,    (2)

where t0 and tf are the prescribed initial and terminal times, respectively. The boundary conditions are defined as

x(t0) = x0,    (3)
x(tf) = xf,    (4)

where x0 and xf are known.
According to optimal control theory [22], the corresponding Hamiltonian function, which is only implicitly dependent on time, is expressed as

H(x, λ, u) = L(x, u) + λ^T f(x, u),    (5)

where λ is the costate vector. The Hamiltonian differential equations are

dx/dt = ∂H/∂λ,    dλ/dt = -∂H/∂x,    (6)

and the optimal control vector satisfies

∂H/∂u = 0.    (7)
Denote z = [x^T, λ^T]^T and dz/dt = F(z) as an alternate expression for the Hamiltonian system in (6). For a sufficiently large value of tf - t0, that is, when tf - t0 is long relative to the rates of both expansion and contraction in certain directions in the neighborhood of the optimal solution, this problem becomes completely hypersensitive; it is then a degenerate class of two-timescale HBVPs composed of fast and slow subsystems. Otherwise, the problem is partially hypersensitive [1, 2]. In this paper, the completely hypersensitive case is considered, and a saddle-type equilibrium point z_e is supposed to exist. As illustrated in [1, 2], the optimal trajectory geometrically exhibits a three-segment structure, described by

z*(t) ≈ z_a*(t) for t ∈ [t0, t0 + Δt_a],
z*(t) ≈ z_e for t ∈ (t0 + Δt_a, tf - Δt_c),
z*(t) ≈ z_c*(t) for t ∈ [tf - Δt_c, tf],

where Δt_a and Δt_c denote the durations of the initial and terminal segments, respectively. z_a*(t) is the optimal solution of the initial segment with the initial conditions (3), z_c*(t) is the optimal solution of the terminal segment with the terminal conditions (4), and z_e is the solution on the equilibrium segment, with F(z_e) = 0. Please notice that Δt_a and Δt_c should be large enough to allow the boundary-layer segments to reach z_e with sufficient accuracy [2].
Based on the observation that z*(t) ≈ z_e during the equilibrium segment, [1, 23] presented a dichotomic basis method for HOCPs. In their method, the original optimal control problem is split into three subproblems.
(a) Subproblem A. Solve the Hamiltonian system with the optimal control determined by (7), the initial conditions defined in (3), and the terminal conditions given as z(t0 + Δt_a) = z_e, where Δt_a is the guessed time duration of the initial segment.
(b) Subproblem B. Solve the Hamiltonian system without control effort, that is, dz/dt = F(z) with the control fixed at its equilibrium value, together with the initial conditions z(t0 + Δt_a) = z_e and the terminal time tf - Δt_c, where Δt_c is the time duration of the terminal segment. Since the control variables are totally determined, this subproblem is actually an initial value problem solved by a numerical integration method.
(c) Subproblem C. Solve the Hamiltonian system with the optimal control determined by (7), the initial conditions defined in (4), and the terminal conditions given as z(tf - Δt_c) = z_e.
Please notice that subproblem C has the same structure as subproblem A, except that the time direction is opposite; thus it requires backward integration to complete the numerical computation.
The dichotomic basis method is elegant and efficient; however, the optimal solution obtained by this method is only near-optimal. Thus, obtaining the accurate optimal solution of HOCPs by indirect methods remains an open challenge.
3. High-Precision Single Shooting Method
In this section, a new high-precision single shooting method is developed to obtain the accurate optimal solution of HOCPs, which combines three important components: multiple-precision arithmetic, the Taylor series method, and a MBISSM. In the proposed method, multiple-precision arithmetic and the Taylor series method provide the ability of high-precision computation with more significant digits and higher integral accuracy, respectively, and the MBISSM provides an appropriate initial guess in a close neighborhood of the optimal solution.
3.1. Multiple-Precision Arithmetic
Currently, most optimization applications and algorithms utilize double-precision (64-bit) arithmetic, which allows numerical computations accurate to ~15 significant digits and is more than sufficient for most optimal control problems. However, for a rapidly growing body of applications, such as the HOCPs considered in this paper, double-precision arithmetic may no longer be sufficient. Therefore, a higher level of numeric precision is required, and multiple-precision arithmetic is an effective way to obtain it.
Multiple-precision arithmetic is typically provided in software, as hardware support for higher precision is severely lacking, and it is unlikely that a commodity processor supporting higher floating-point precision will be developed in the near future [25]. Quadruple (128-bit, ~30 significant digits) or octuple (256-bit, ~60 significant digits) precision arithmetic is a typical choice. For some specific problems, hundreds or even more significant digits are required to obtain numerically meaningful results.
Nowadays there are several commercial or freely available multiple-precision software packages, which usually provide high-level language interfaces, custom data types, and operator overloading. In this paper, the Multiprecision Computing Toolbox developed by Advanpix [26] is applied for high-precision computation. This software is a MATLAB extension for computing with arbitrary precision; that is, the toolbox equips MATLAB with a new multiple-precision numeric type and an extensive set of mathematical functions capable of computing with arbitrary precision. Thus many existing MATLAB programs can be converted directly to achieve arbitrary precision with little effort.
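As a small stand-in for the toolbox (which is MATLAB-based), the sketch below uses Python's standard decimal module to show the basic effect of raising the working precision; the chosen precision and test quantity are illustrative assumptions only.

```python
from decimal import Decimal, getcontext

getcontext().prec = 40  # carry 40 significant digits instead of ~15

# A quantity of size ~1e-26 vanishes next to 1.0 in double precision,
# but survives when 40 significant digits are carried.
tiny = Decimal(-60).exp()                      # e^(-60), about 8.76e-27
recovered = (Decimal(1) + tiny) - Decimal(1)   # nonzero at 40 digits
lost = (1.0 + float(tiny)) - 1.0               # cancels to exactly 0.0 in double precision
print(recovered, lost)
```

This is precisely the failure mode described above: intermediate results that differ by less than ~1e-16 relative are indistinguishable in double precision, so tight shooting tolerances simply cannot be met without carrying more digits.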
3.2. Taylor Series Method
Numerical methods for integrating ordinary differential equations have been studied since the end of the last century, and a large number of integration formulas have been developed. Typically, numerical integration methods approximate the exact solution of the ordinary differential equations, which gives rise to truncation errors. Generally, the truncation errors remain sufficiently small for most optimal control problems with common numerical integration methods, such as the well-known Runge-Kutta methods. However, for the HOCPs considered in this paper, the truncation errors would be amplified enormously with these methods, even when multiple-precision arithmetic is applied.
The truncation errors can be reduced in two ways: by reducing the step size or by using a higher-order integration formula. If the step size between two adjacent values becomes smaller, the truncation errors of the numerical integration decay. The disadvantage is that the step size may become so small that the integration takes an extremely long time, while the roundoff error increases [27]. Thus, a higher-order integration formula, the Taylor series method [28], is preferred to provide an accurate numerical integration solution. The Taylor series method is one of the oldest methods, tracing back to Newton and Euler. Its advantage is that the formula at an arbitrarily high order can be easily expressed in a unified form with large step sizes, so that little computing time is required [29]. Therefore, from the viewpoint of numerical simulation, it is rather easy and attractive to use this method at a very high order so as to reduce the truncation errors to a required level.
Consider the initial value problem

dy/dt = g(y(t), t),    y(t0) = y0,

where y(t) is the state vector, t0 is the specified initial time, y0 is the prescribed initial condition, and g is a smooth function. According to the rules of the Taylor series method, the value of the solution at t_{n+1} = t_n + h is approximated by the Nth degree Taylor series of y developed at t_n and evaluated at t_{n+1}, that is,

y(t_{n+1}) ≈ y(t_n) + Σ_{k=1}^{N} (h^k / k!) y^{(k)}(t_n),

where h is the step size and y^{(k)}(t_n) is the kth time derivative of y at t_n, obtained by successively differentiating the right-hand side g.
In this paper, a variable step size and variable order (VSVO) scheme, which controls the integration error automatically by adjusting the step size and the order of the formula according to criteria related to the prescribed tolerance, is implemented during the integration by the Taylor series method. The details of the VSVO scheme can be found in [30].
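As an illustration of the scheme (fixed step and fixed order here, without the VSVO machinery), the sketch below applies an order-20 Taylor step to the scalar test equation dy/dt = -y, whose normalized coefficients obey the simple recurrence c_{k+1} = -c_k/(k+1); the test equation and parameters are assumptions for demonstration only.

```python
import math

def taylor_step(y, h, order):
    """Advance dy/dt = -y by one step of size h using an `order`-degree Taylor series."""
    c = y            # c_0 = y(t_n); normalized coefficients c_k = y^(k)(t_n)/k!
    total = c
    for k in range(order):
        c = -c / (k + 1)          # recurrence obtained by differentiating dy/dt = -y
        total += c * h ** (k + 1)
    return total

def integrate(y0, t_end, steps, order):
    h = t_end / steps
    y = y0
    for _ in range(steps):
        y = taylor_step(y, h, order)
    return y

y1 = integrate(1.0, 1.0, steps=10, order=20)
print(abs(y1 - math.exp(-1.0)))   # truncation error driven down to roundoff level
```

At order 20 the per-step truncation error is far below double-precision roundoff even with the large step h = 0.1, which is exactly the property that makes the method attractive at very high order.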
3.3. Modified Bidirectional Single Shooting Method
Single shooting methods are among the most commonly used numerical methods for solving HBVPs; they attempt to identify appropriate initial conditions for a related initial value problem that provide the solution of the original boundary value problem. If the boundary conditions are not satisfied to the desired accuracy, the selection of initial conditions is repeated with a new set until the desired accuracy is achieved. Single shooting methods are typically classified as the forward single shooting method (FSSM), the backward single shooting method (BSSM), and the bidirectional single shooting method (BISSM), according to their integration directions. As illustrated in Figure 1, the FSSM and BSSM, which start from the initial and terminal boundary points, respectively, are unidirectional; their procedures are identical except for the opposite direction of integration. Unlike these unidirectional methods, the BISSM starts from both boundary points and shoots to an arbitrary intermediate point [31]. However, these methods are not appropriate for solving HOCPs, mainly due to the extreme difficulty of selecting a proper initial guess, even if multiple-precision arithmetic and the Taylor series method are incorporated.
Figure 1: (a) FSSM; (b) BSSM; (c) BISSM.
In this paper, a new MBISSM is presented to address this difficulty. As illustrated in Figure 2, the MBISSM starts from an intermediate point inside the time interval and shoots to both boundary points simultaneously. Denote the intermediate point z_s at time t_s along the equilibrium segment as the starting point, which is the unknown to be determined numerically by the shooting method. Since the optimal solution stays near the equilibrium value z_e during the equilibrium segment, as suggested by the three-segment structure, it is reasonable to set z_e as the initial guess, that is, z_s^(0) = z_e.
In the MBISSM, the boundary conditions to be satisfied are (3) and (4), enforced by the backward and forward integrations from the starting point, respectively. Denote z(t0; z_s) and z(tf; z_s) as the state and costate vectors at t0 and tf obtained by integrating backward and forward from z_s, respectively. The optimization problem is then to find z_s such that the state components reach the boundary values:

x(t0; z_s) = x0,    x(tf; z_s) = xf.    (18)
Once the optimal solution at the starting point is obtained by the MBISSM, denoted as z_s*, the states at the initial and terminal times are completely determined by integrating from z_s*, denoted as x*(t0) and x*(tf), respectively. The initial state errors, which are the differences between x*(t0) and the given initial value x0 defined in (3), can be expressed as

e0 = x*(t0) - x0.    (19)
Denote x̂(tf) as the state at tf obtained by integrating forward from the given initial state x0 together with the computed initial costate λ*(t0). The terminal state errors are denoted as

ef = x̂(tf) - xf.    (20)
Due to the existence of the initial state errors defined in (19), the Euclidean norm of the terminal state errors may be extremely large for HOCPs and may violate the error tolerance. In this case, a tighter error tolerance for solving (18) is required, with the help of multiple-precision arithmetic, until the required accuracy is satisfied.
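To make the procedure concrete, the sketch below applies the MBISSM pattern to a linear analogue of a hypersensitive HBVP: minimizing (1/2)∫(x^2 + u^2) dt subject to dx/dt = u gives the saddle-type system dx/dt = -λ, dλ/dt = -x. The horizon tf = 20, the boundary values x(0) = 1.5 and x(tf) = 1, the RK4 integrator, and the tolerances are all illustrative assumptions, not values from the paper; double precision suffices here only because the horizon is moderate and the system is linear.

```python
import math

def rhs(x, lam):
    # Linear saddle-type Hamiltonian system: dx/dt = -lam, dlam/dt = -x
    return -lam, -x

def rk4(x, lam, t0, t1, steps):
    # Classical Runge-Kutta 4; a reversed span (t1 < t0) integrates backward.
    h = (t1 - t0) / steps
    for _ in range(steps):
        k1x, k1l = rhs(x, lam)
        k2x, k2l = rhs(x + 0.5 * h * k1x, lam + 0.5 * h * k1l)
        k3x, k3l = rhs(x + 0.5 * h * k2x, lam + 0.5 * h * k2l)
        k4x, k4l = rhs(x + h * k3x, lam + h * k3l)
        x += h / 6.0 * (k1x + 2 * k2x + 2 * k3x + k4x)
        lam += h / 6.0 * (k1l + 2 * k2l + 2 * k3l + k4l)
    return x, lam

TF, X0, XF = 20.0, 1.5, 1.0   # illustrative horizon and boundary values
TS = TF / 2.0                 # intermediate starting time on the equilibrium segment

def residual(z):
    # Shoot backward to t = 0 and forward to t = TF from the midpoint unknowns.
    xb, _ = rk4(z[0], z[1], TS, 0.0, 2000)
    xf, _ = rk4(z[0], z[1], TS, TF, 2000)
    return [xb - X0, xf - XF]

z = [0.0, 0.0]                # equilibrium value as the initial guess
for _ in range(8):            # Newton iteration with a forward-difference Jacobian
    r = residual(z)
    if max(abs(r[0]), abs(r[1])) < 1e-9:
        break
    eps = 1e-7
    cols = []
    for j in range(2):
        zp = list(z)
        zp[j] += eps
        rp = residual(zp)
        cols.append([(rp[0] - r[0]) / eps, (rp[1] - r[1]) / eps])
    # Solve J * dz = -r, where the 2x2 Jacobian J has columns cols[0], cols[1].
    det = cols[0][0] * cols[1][1] - cols[1][0] * cols[0][1]
    z[0] += (-r[0] * cols[1][1] + r[1] * cols[1][0]) / det
    z[1] += (-r[1] * cols[0][0] + r[0] * cols[0][1]) / det

print(z, residual(z))  # midpoint state is tiny: the trajectory hugs the equilibrium
```

Because the problem is linear, the Newton iteration converges essentially in one step; for the nonlinear systems treated in this paper the same structure applies, but the residual map's sensitivity grows like e^(tf/2) as the horizon stretches, which is exactly where multiple-precision arithmetic becomes indispensable.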
Although multiple shooting methods are widely demonstrated to be superior to single shooting methods in terms of the convergence domain [32], they do not provide any information regarding the flow structure in the phase space, which aids the development of a simpler approximate solution method for solving HOCPs [6]. Thus, multiple shooting methods are not considered in this paper.
4. Numerical Demonstration
Consider the following hypersensitive optimal control problem, adopted from [1]: determine the control u(t) on the time interval t ∈ [0, tf] that minimizes the performance index

J = (1/2) ∫_0^tf (x^2 + u^2) dt.    (21)

The dynamic constraint is expressed as

dx/dt = -x^3 + u,    (22)

with the boundary conditions defined as

x(0) = x0,    (23)
x(tf) = xf,    (24)

where tf is fixed. It is known that, for a sufficiently large value of tf, this optimal control problem becomes hypersensitive [1]. The Hamiltonian function is formulated as

H = (1/2)(x^2 + u^2) + λ(-x^3 + u),

where λ is the costate associated with x, the governing costate differential equation of which is given as

dλ/dt = -∂H/∂x = -x + 3x^2 λ.    (28)
According to optimal control theory [22], the optimal control is determined by

∂H/∂u = u + λ = 0,

that is,

u = -λ.    (30)
Substituting (30) into the state equation (22) yields

dx/dt = -x^3 - λ,    (31)

which, together with the costate equation (28), constitutes the Hamiltonian differential equations; the corresponding equilibrium value is (x_e, λ_e) = (0, 0).
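The saddle character of this equilibrium, which underlies the three-segment structure, can be checked directly: linearizing (28) and (31) at the origin gives the Jacobian [[0, -1], [-1, 0]], whose eigenvalues are the equal-and-opposite contraction and expansion rates. A quick illustrative computation (closed-form 2x2 eigenvalues, not part of the paper's code):

```python
import math

# Jacobian of (dx/dt = -x^3 - lam, dlam/dt = -x + 3*x^2*lam) at (x, lam) = (0, 0):
# d(-x^3 - lam)/dx = -3x^2 -> 0,        d(-x^3 - lam)/dlam = -1
# d(-x + 3x^2 lam)/dx = -1 + 6x*lam -> -1,  d(-x + 3x^2 lam)/dlam = 3x^2 -> 0
a, b, c, d = 0.0, -1.0, -1.0, 0.0

tr, det = a + d, a * d - b * c                 # trace 0, determinant -1: a saddle
disc = math.sqrt(tr * tr - 4.0 * det)
eigs = ((tr + disc) / 2.0, (tr - disc) / 2.0)
print(eigs)  # (1.0, -1.0): one expanding and one contracting direction
```

The unit rates ±1 mean that, over a horizon tf, perturbations off the stable manifold grow like e^(tf); this is the quantitative source of the hypersensitivity for large tf.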
In this paper, all computations are executed on a desktop computer with a 2.20 GHz CPU. All of the code is implemented in MATLAB, and a fixed required accuracy is prescribed for all cases.
4.1. Optimal Solutions with Double-Precision Arithmetic
In this section, several numerical tests are implemented to demonstrate the performance of the proposed MBISSM with double-precision arithmetic. The middle point t_s = tf/2 is chosen as the starting point for the MBISSM. The FSSM and the commercial software GPOPS-II [21], which implements a Legendre-Gauss-Radau quadrature orthogonal collocation based direct method, are used for comparison. The simulation results of the BSSM and the BISSM are not provided in this section, as they are quite close to the FSSM's. Although GPOPS-II adopts a direct method to solve the original optimal control problem described in (21)–(26), it provides highly accurate costate estimates, which permits comparing its optimal solution directly with those of the indirect methods. The MATLAB function ode45, which is based on an explicit Runge-Kutta (4,5) formula with adaptive step control, is used as the numerical integration algorithm, and the same error tolerance is used for all three methods.
Eight test cases with varying terminal times are implemented, and the numerical solutions are provided in Table 1, which compares the MBISSM, FSSM, and GPOPS-II in terms of the Euclidean norm of the terminal state errors and the convergent CPU time. The character “X” denotes the divergent cases. The initial guesses for these methods are all the same, namely, the equilibrium value (x_e, λ_e) = (0, 0). As illustrated in Table 1, all the test cases by the FSSM are divergent, whereas 6 cases converge successfully with the MBISSM, which reveals that the MBISSM has a much better convergence property than the FSSM. GPOPS-II has the best convergence property, without a single failure; however, the corresponding Euclidean norm of the terminal state errors is much larger than the MBISSM's. As described in the previous section, the terminal state errors of the MBISSM are mainly caused by the tiny initial state errors defined in (19). Unlike the MBISSM, the terminal state errors of GPOPS-II are totally determined by the number of mesh intervals and the degree of the approximating polynomial within each mesh interval.

Taking one convergent case as an example, the optimal solution obtained at the middle point is approximately zero, which indicates that the optimal solution at the middle point is indeed in a close neighborhood of the equilibrium value and guarantees the success of the proposed MBISSM with the equilibrium value as the initial guess. Because the initial guess is extremely close to the accurate optimal solution, this HOCP is solved in only 6 iterations by the MBISSM. With the optimal solution at the middle point as the starter, the optimal solution at t0 is calculated by backward integration, and the initial state errors defined in (19) follow from the initial conditions defined in (23)-(24). The trajectory integrated with the initial conditions (23)-(24) and the computed initial costate as the starter is shown in Figure 3, which illustrates that the terminal state errors defined in (20) reach two orders of magnitude and the maximum state errors reach three orders of magnitude over the whole trajectory. Fortunately, with the help of multiple-precision arithmetic, these errors can be reduced significantly to meet the required error tolerance, as discussed later.
For comparison, the corresponding initial optimal costate by GPOPS-II, provided via its highly accurate costate estimation method, is used as well. The trajectory integrated with the initial conditions (23)-(24) and this costate as the starter is shown in Figure 4, which reveals that the terminal state errors reach as high as four orders of magnitude and the maximum state errors reach five orders of magnitude over the whole trajectory. Because the large terminal state errors of GPOPS-II are directly determined by the discrete character of direct methods, they cannot be reduced effectively even with multiple-precision arithmetic.
4.2. Optimal Solutions with Multiple-Precision Arithmetic
As demonstrated previously, although the proposed MBISSM has a much better convergence property than the FSSM, it still fails to converge for larger terminal times, and the terminal state errors of the convergent cases may also be extremely large. Fortunately, both of these shortcomings can be fully addressed with multiple-precision arithmetic and the Taylor series method.
Still taking the same case as an example, the problem is solved again by the MBISSM with multiple-precision arithmetic and the Taylor series method as the integration algorithm. The corresponding iterative formulation of the Taylor coefficients is

X_{k+1} = [-(x^3)_k - Λ_k] / (k + 1),    Λ_{k+1} = [-X_k + 3 (x^2 λ)_k] / (k + 1),    k = 0, 1, ..., N - 1,

where X_k and Λ_k are the normalized Taylor coefficients of x and λ, (x^3)_k and (x^2 λ)_k are the corresponding Cauchy-product coefficients, and N is the order of the Taylor series. Multiple-precision arithmetic with 30 significant digits is adopted for high-precision optimization, and the integral accuracy of the Taylor series method and the error tolerance of the MBISSM are tightened correspondingly. This HOCP is solved in only 10 iterations by the MBISSM, and the convergent CPU time is about 980 s. With the obtained optimal solution at the middle point as the starter, the optimal solution at the initial time is calculated by backward integration, and the initial state errors defined in (19) are now far smaller. The trajectory integrated with the initial conditions (23)-(24) and the computed initial costate as the starter is shown in Figure 5; since the error tolerance is set tighter than the required accuracy, the Euclidean norm of the terminal state errors defined in (20) satisfies the required accuracy. If the obtained accuracy cannot meet the required accuracy, more significant digits and a higher integral accuracy are needed.
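The recurrence can be implemented directly with Cauchy products of the coefficient arrays. The sketch below mirrors that formulation in plain double precision (the paper uses multiple precision); the initial values, step size, and order are illustrative assumptions:

```python
def taylor_step(x, lam, h, order):
    """One Taylor step for dx/dt = -x^3 - lam, dlam/dt = -x + 3*x^2*lam.

    X[k] and L[k] hold the normalized Taylor coefficients x^(k)/k! and lam^(k)/k!.
    """
    X, L = [x], [lam]
    for k in range(order):
        # Cauchy-product coefficients: x^2 up to degree k, then x^3 and x^2*lam at degree k
        x2 = [sum(X[i] * X[j - i] for i in range(j + 1)) for j in range(k + 1)]
        x3_k = sum(x2[i] * X[k - i] for i in range(k + 1))
        x2l_k = sum(x2[i] * L[k - i] for i in range(k + 1))
        X.append((-x3_k - L[k]) / (k + 1))
        L.append((-X[k] + 3.0 * x2l_k) / (k + 1))
    return (sum(c * h ** i for i, c in enumerate(X)),
            sum(c * h ** i for i, c in enumerate(L)))

# Consistency check: one order-25 step of size 0.05 agrees with 50 steps of size 0.001.
big = taylor_step(1.0, 0.5, 0.05, 25)
small = (1.0, 0.5)
for _ in range(50):
    small = taylor_step(small[0], small[1], 0.001, 25)
print(big, small)
```

Swapping the float arithmetic for a multiple-precision numeric type (as the toolbox above does in MATLAB) leaves this recurrence unchanged, which is why the Taylor series method pairs so naturally with multiple-precision computation.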
Another case with a larger terminal time, which fails with double-precision arithmetic as illustrated in Table 1, is also implemented. For this case, 90 significant digits are used for the high-precision computation, and the integral accuracy of the Taylor series method and the error tolerance of the single shooting method are tightened correspondingly. This problem is solved easily by the proposed high-precision single shooting method within 10 iterations, and the convergent CPU time is about 35 hours. With the obtained optimal solution at the middle point, the corresponding initial state errors defined in (19) are computed; the trajectory integrated with the initial conditions (23)-(24) and the computed initial costate as the starter is shown in Figure 6, and the Euclidean norm of the terminal state errors defined in (20) meets the required accuracy.
Fewer significant digits are then used, that is, only the first 55 significant digits of the obtained solution. With (23)-(24) and (45) as the initial starter of the trajectory integration, the terminal state errors defined in (20) are provided in Table 2; the integral accuracy of the Taylor series method remains unchanged. As illustrated in Table 2, the Euclidean norm of the terminal state errors increases rapidly as the number of significant digits decreases, becoming very large for the 55-significant-digit case.

If lower integral accuracies of the Taylor series method are used with 90 significant digits, taking (23)-(24) and (42)-(43) as the initial starter of the trajectory integration, the terminal state errors defined in (20) are provided in Table 3. As illustrated in Table 3, the terminal state errors increase rapidly as the integral accuracy is relaxed; remarkably, the terminal state errors grow to four orders of magnitude for the loosest integral accuracy, which obviously cannot meet the required error tolerance. Based on the above discussions, it can be concluded that no optimal solution may exist to meet the required error tolerance with fewer significant digits and lower integral accuracy, which explains why the MBISSM fails to solve the same problem with double-precision arithmetic in Section 4.1.
