Abstract

Optimal control problems are common in aerospace engineering. A Python software program called PySCP is described for solving multiple-phase optimal control problems using sequential convex programming methods. By constructing a series of approximated second-order cone programming subproblems, PySCP approaches the solution of the original optimal control problem in an iterative way. The key components of the software are described in detail, including convexification, discretization, and the adaptive trust region method. The convexification of the first-order differential dynamic equation is implemented using successive linearization. Six discretization methods, including zero-order hold, first-order hold, Runge-Kutta, and three hp pseudospectral collocation methods, are implemented so that different types of optimal control problems can be tackled efficiently. An adaptive trust region method is employed to achieve robust convergence. Both free-final-time and fixed-final-time problems can be solved by the software. The application of the software is demonstrated on three optimal control problems of varying complexity. PySCP provides researchers with a useful toolkit to solve a wide variety of optimal control problems using sequential convex programming.

1. Introduction

Many aerospace engineering problems require solving an optimal control problem (OCP) [1], such as the ascent trajectory of launch vehicles [2, 3], hypersonic vehicle reentry [4], spacecraft rendezvous [5, 6], and soft landing on extraterrestrial bodies [7]. Traditionally, two classes of methods are used to solve OCPs: indirect methods and direct methods [8]. The former derive the optimality conditions based on the Pontryagin maximum principle and the classical calculus of variations, resulting in a two-point boundary value problem (TPBVP). The costate variables in the TPBVP have no explicit physical meaning and are extremely sensitive to initial guesses, making the TPBVP very difficult to solve. In contrast, the direct methods discretize the original continuous-time OCP into a nonlinear programming (NLP) problem, which can then be solved by an NLP solver. The direct methods are more often used in practice [9] because the analytical optimality conditions are generally very complicated.

In most aerospace engineering problems, the NLP problems obtained by direct methods are large-scale, sparse, and nondeterministic polynomial-time hard, so the computation time required to reach a prescribed accuracy is neither bounded nor guaranteed; the solver may run so long that no usable result is obtained. Although the dramatic increase in computing power over the past decades has made it possible for direct methods to be widely used in aerospace engineering, rapid computation and guaranteed convergence are still being pursued, especially in multidisciplinary design optimization [10] studies and in online guidance and control [11], where computational efficiency is essential.

Convex optimization problems are computationally tractable: they can be solved in polynomial time with guaranteed convergence to the global optimum [12, 13]. These outstanding characteristics have attracted many researchers to solve OCPs by convex approaches [14]. There are two ways to do this. The first is called lossless convexification. The word “lossless” indicates that the convexification process loses none of the problem’s characteristics, and the convexified problem is equivalent to the original OCP. However, most optimal control problems cannot be handled by lossless convexification because of their intrinsic nonconvexity. In this case, the second way is applied, which is called sequential convex programming (SCP), also known as successive convex programming. Since an equivalent convex problem cannot be constructed, an approximate convex problem is constructed in the local neighbourhood of a reference solution (i.e., the reference trajectory). By solving the approximate convex problem, a new reference trajectory is obtained, and a new approximate convex problem can be generated. The solution to the optimal control problem is thus approached iteratively.

In recent years, significant effort has been devoted to this topic, and many aerospace engineering problems have been solved by convex approaches. Açıkmeşe et al. [15–18] proposed lossless convexification to solve the single-phase OCP of Mars pinpoint landing, which was then extended to onboard guidance for vertical-landing launch vehicles and asteroid landing [19, 20]. Many advanced techniques have been developed and combined with SCP, for instance, problem reformulation by equivalent transformation [21, 22] and pseudospectral methods [23]. Both aerodynamic control and thrust control for powered landing have also been tackled using SCP [24, 25]. Sequential convex approaches have achieved significant progress in this area, and even six-degree-of-freedom free-final-time powered landing can be solved in real time [26].

Hypersonic reentry is another difficult problem that has been addressed with these methods. Subject to significant aerodynamic forces and strict path constraints, trajectory optimization of hypersonic reentry is a highly nonlinear problem. Liu et al. [27] first introduced new control variables and obtained linear dynamics plus additional nonconvex control constraints, which were then relaxed into convex constraints. By successive linearization, the original nonconvex problem is converted into a sequence of second-order cone programming (SOCP) problems. Line search and trust region methods [28], adaptive mesh refinement [29], pseudospectral discretization methods [30], and equivalent transformation [31] have also been combined with SCP to cope with this highly nonlinear OCP. SCP has also been extensively applied to many other problems, such as unmanned aerial vehicle formation flight [32] and spacecraft rendezvous guidance [33, 34].

Over the past decades, many scientific computing packages that implement direct methods have been developed and widely used in aerospace engineering. The first well-known direct collocation software was Optimal Trajectories by Implicit Simulation (OTIS) [35], a FORTRAN program with general-purpose capabilities for problems in aeronautics and astronautics. The commercial general-purpose optimal control software GPOPS II [36, 37] is a Matlab toolkit that uses hp-adaptive pseudospectral methods to discretize OCPs and large-scale NLP solvers (SNOPT [38] and IPOPT [39]) to solve the transcribed NLP problems. DIDO [40, 41] uses pseudospectral methods and sequential quadratic programming (SQP) to solve OCPs and is widely used in orbit maneuvering studies. The Program to Optimize Simulated Trajectories II (POST II) [42], developed by NASA, uses direct shooting methods with gradient descent/SQP and is applicable to various ascent and descent trajectory optimization problems. DLR developed the first European tool based on flipped Radau pseudospectral methods, referred to as SPARTAN [43, 44]. IClocs2 [45, 46] is another comprehensive software suite for solving OCPs, implemented in Matlab and Simulink and aimed at providing a user-friendly interface. However, no publicly available software implements SCP to support research on solving optimal control problems via convex optimization.

This paper presents a Python software program for solving multiple-phase optimal control problems using sequential convex programming methods. PySCP first maps the time horizon of each phase onto a normalized time horizon so that both fixed-final-time and free-final-time cases can be handled. The nonconvex functions in the optimal control problem need to be convexified. PySCP provides a template to convexify the dynamic equation using successive linearization. After convexification, the continuous-time problem is discretized to form a discrete SOCP problem. To do this, PySCP implements six discretization methods, including zero-order hold (ZOH), first-order hold (FOH), Runge-Kutta (RK), and three Legendre pseudospectral methods. The trust region size has a decisive influence on the convergence of the iteration process, so an adaptive trust region method is employed to adjust it. To demonstrate the utility of PySCP, three examples of different complexity are presented. We hope this software provides researchers with a platform for dealing with optimal control problems with less effort and promotes the application of SCP in aerospace engineering.

2. Multiple-Phase Optimal Control Problem

The general multiple-phase optimal control problems that can be solved by PySCP are given as follows.

Problem P0. Minimize the objective function subject to the dynamic constraints the path constraints the boundary constraints and the linkage constraints

The above functions are defined by the following mappings:

is the state variable vector in the -th phase, and is the control variable vector. , , , and are the state variable dimension, control variable dimension, number of path constraints, and number of boundary constraints in the -th phase. is the number of linkage couples, and is the linkage constraints in the -th linkage couple. An example of how phases can be linked is given in Figure 1. There are four phases and three linkage couples in total.

Phases can be either free-final-time or fixed-final-time. The time domain of each phase is mapped onto a normalized time domain based on the following expression:
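A commonly used affine mapping of this kind is shown below for illustration (the superscript (p) denotes the phase index, and the normalized domain is taken here as τ ∈ [−1, 1], consistent with the Legendre collocation points of Section 3; the exact form used by PySCP may differ in the choice of normalized interval):

\[
t \;=\; \frac{t_f^{(p)} - t_0^{(p)}}{2}\,\tau \;+\; \frac{t_f^{(p)} + t_0^{(p)}}{2}, \qquad \tau \in [-1, 1],
\]

so that a free final time enters the transformed dynamics only through the multiplicative factor (t_f^{(p)} − t_0^{(p)})/2 on the right-hand side.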

Denote . The multiple-phase optimal control problem can be rewritten as follows.

Problem P1. Minimize the objective function subject to the dynamic constraints the path constraints the boundary constraints and the linkage constraints

The dynamic equations form a set of first-order differential equations, and all the other functions are algebraic. For simplicity, the phase index is omitted in the remainder of this paper, except for the functions in the objective and linkage constraints.

2.1. Workflow of PySCP

The workflow of PySCP is illustrated in Figure 2. The basic idea is to construct an approximated SOCP and iterate to approach the solution of the original problem.

Step 1. Convexification. If all functions and constraints of the optimal control problem are convex, the problem can be reformulated as a convex optimization problem without any approximation, and the solution of the convex problem is also the optimal solution of the original problem. Otherwise, all nonconvex functions and constraints must be transformed into convex ones. There are many different convexification methods, and most of them require a reference trajectory for the approximation.

Step 2. Discretization. The convexified problem is still a continuous-time problem, which cannot be handled directly by computers. Discretization methods approximate the functions in the continuous-time problem using the state and control variables at discrete points, after which a SOCP problem is constructed. The discretization method determines the number of unknown variables and the computational efficiency of the transcribed convex optimization problem. In PySCP, six discretization methods are implemented, including ZOH, FOH, Runge-Kutta, and three Legendre pseudospectral methods.

Step 3. Solve the convex optimization problem. The SOCP problem can be solved efficiently in bounded computation time [12, 47]. Many mature solvers for SOCP problems are available, such as MOSEK [48], CPLEX, SDPT3 [49], SeDuMi [50], and ECOS [51]. There is also a variety of software that provides interfaces for formulating SOCP problems, such as CVX [52], CVXPY [53], CVXOPT [54], and CVXGEN [55], to name a few. PySCP uses CVXPY to formulate the SOCP problem, which by default is solved with the primal-dual interior point method implemented in ECOS.
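As a minimal, self-contained illustration of this step (the problem data below are toy values, not a PySCP subproblem), a SOCP can be posed through CVXPY and handed to ECOS as follows:

```python
import cvxpy as cp
import numpy as np

# Toy SOCP: minimize c^T x subject to the ball constraint ||x - x0||_2 <= r.
n = 4
c = np.ones(n)
x0 = np.arange(n, dtype=float)
r = 2.0

x = cp.Variable(n)
constraints = [cp.norm(x - x0, 2) <= r]
prob = cp.Problem(cp.Minimize(c @ x), constraints)

# ECOS implements a primal-dual interior point method and is the default
# conic solver used by PySCP; it is requested explicitly here.
prob.solve(solver=cp.ECOS)

print(prob.status, prob.value)  # optimal value: c^T x0 - r*||c||_2 = 2.0
```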

In each iteration, the solution to the SOCP problem is checked against the convergence criterion. If the criterion is satisfied, the algorithm stops. Otherwise, go to Step 4.

Step 4. Trust region adaptation. In mathematical optimization, methods that iterate towards the solution by constructing approximated problems are usually called local descent methods [56], including the trust region method and the line search method. The former first determines the step size and then finds the search direction, whereas the latter determines the search direction first and then the step size. The basic idea behind SCP is the same as that of local descent methods, and both the trust region method and the line search method can be applied to manage the iterations [57]. PySCP adopts an adaptive trust region method. The solution to the SOCP serves as the new reference trajectory for constructing the next SOCP problem.

In the next three sections, the convexification, discretization, and adaptive trust region method will be discussed one by one.

2.2. Convexification Methods

Convexification and discretization are aimed at constructing a discrete SOCP problem in the local neighbourhood of a reference trajectory. A SOCP is a class of convex optimization problem defined as follows, where ‖·‖ is the 2-norm operator. In a SOCP problem, the equality constraints must be linear, and the inequality constraints must be convex in the sense that the feasible set is a second-order cone. All functions in Problem P1 must be convexified into the form of Equation (13). There are many convexification methods, including equivalent transformation, change of variables, successive linearization, successive approximation, and relaxation. We refer the readers to [14] for more details.

The convexification method depends on the specific problem. In this paper, we focus on the convexification of the dynamic equation. Successive linearization is employed, i.e., the nonconvex function is repeatedly linearized around a reference trajectory via Taylor expansion. Denote the reference trajectory of the -th iteration as . For a first-order differential equation of the following form

the right-hand side can be linearized by Taylor expansion. If the final time is fixed, is known and a constant value; then, the linearized differential equation can be expressed as

When the final time is free, is an unknown variable. The dynamic equation can be approximated as

In Equations (15) and (16), is the Jacobian matrix of with respect to , and is the Jacobian matrix of with respect to . Denote

Equation (15) can be rewritten as and Equation (16) is rewritten as

In this way, the right-hand side of the first-order differential equation is linearized. The only difference between Equation (20) and Equation (21) lies in the third term on the right-hand side. To keep the presentation simple, only Equation (21) will be discussed; the case of fixed-final-time phases can be obtained in a similar way.
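To make the successive linearization step concrete, the following sketch builds the matrices of a first-order Taylor expansion around a reference point using finite differences. The function names and the finite-difference Jacobian are illustrative stand-ins (PySCP's own interface for supplying dynamics and Jacobians may differ); analytical Jacobians can be substituted directly.

```python
import numpy as np

def jacobian_fd(f, x, u, eps=1e-6):
    """Finite-difference Jacobians of f(x, u) with respect to x and u."""
    f0 = np.asarray(f(x, u))
    A = np.zeros((f0.size, x.size))
    B = np.zeros((f0.size, u.size))
    for i in range(x.size):
        dx = np.zeros_like(x); dx[i] = eps
        A[:, i] = (np.asarray(f(x + dx, u)) - f0) / eps
    for j in range(u.size):
        du = np.zeros_like(u); du[j] = eps
        B[:, j] = (np.asarray(f(x, u + du)) - f0) / eps
    return A, B

def linearize(f, x_ref, u_ref):
    """First-order Taylor expansion of dx/dt = f(x, u) about (x_ref, u_ref):
    f(x, u) ~ A (x - x_ref) + B (u - u_ref) + f(x_ref, u_ref),
    returned as f(x, u) ~ A x + B u + c with a constant term c."""
    A, B = jacobian_fd(f, x_ref, u_ref)
    c = np.asarray(f(x_ref, u_ref)) - A @ x_ref - B @ u_ref
    return A, B, c

# Example: double integrator with quadratic drag, x = [position, velocity].
f = lambda x, u: np.array([x[1], u[0] - 0.1 * x[1] * abs(x[1])])
A, B, c = linearize(f, np.array([0.0, 5.0]), np.array([1.0]))
```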

3. Discretization Methods

The objective function, path constraints, boundary constraints, and linkage constraints are all algebraic functions, so their discrete forms are the same as the original functions. The dynamic constraints, however, are first-order differential equations and must be approximated and transformed into algebraic constraints.

To do this, PySCP implements six discretization methods, as shown in Figure 3. ZOH, FOH, and RK belong to low-order discretization methods.

Global Legendre pseudospectral methods benefit from two outstanding characteristics [58, 59]: (1) when the solution is smooth, they exhibit quasi-exponential convergence as the number of discrete points is increased; (2) the Runge phenomenon is avoided. However, if the solution is not smooth, the quasi-exponential convergence is lost. To overcome this limitation, hp Legendre pseudospectral methods are adopted. Low-order methods only have quasi-linear convergence [60] as the number of discrete points is increased, so if high solution accuracy is required, the number of discrete points becomes very large. These two kinds of discretization methods have their own advantages and disadvantages, and users can select the appropriate method for their problems.

3.1. Legendre Pseudospectral Methods

Global Legendre pseudospectral methods use the roots of orthogonal Legendre polynomials as the collocation points and construct a set of Lagrange polynomials to approximate the functions. Depending on the choice of collocation points, global Legendre pseudospectral methods can be divided into the Legendre-Gauss (LG), Legendre-Gauss-Radau (LGR), and Legendre-Gauss-Lobatto (LGL) pseudospectral methods.

The discretization points, also known as nodes, are the points at which the variables are approximated. An illustration of the pseudospectral collocation points and nodes is shown in Figure 4 (the circles represent the collocation points, and the nodes include both the circles and the crosses). Denote the -th-order Legendre polynomial as . The collocation points of the LG method are the roots of , which contain neither nor . The LGR points are the roots of , which contain the left bound of the time interval but do not contain . The LGL points are the roots of together with and . The LG and LGL points are symmetric about the origin, whereas the LGR points are not. There is another version of LGR, called flipped LGR (fLGR). The fLGR points are obtained by flipping the LGR points about the y axis.

The global Legendre pseudospectral methods approximate the functions on the whole time interval using global polynomials, and they lose quasi-exponential convergence when the optimal control is not smooth. To overcome this problem, hp pseudospectral methods are implemented in PySCP. hp methods first divide the time interval into several subintervals and then approximate the functions on each subinterval with local Lagrange polynomials. “h” represents the number of subintervals, and “p” represents the polynomial degree within each subinterval. The global pseudospectral methods can be considered special cases of hp pseudospectral methods in which the number of subintervals is one.

Take the hp fLGR method as an example. The time interval is divided into subintervals. Denote the -th subinterval as and the left and right bounds of this subinterval as . Assume that there are collocation points in this subinterval. A set of Lagrange polynomials is constructed using the variables at the nodes
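For reference, the standard Lagrange interpolation basis on the nodes of a subinterval takes the textbook form below (writing N_k for the number of nodes in the k-th subinterval and τ_0, …, τ_{N_k} for the nodes; the paper's own indexing may differ):

\[
\ell_j(\tau) \;=\; \prod_{\substack{i=0 \\ i \neq j}}^{N_k} \frac{\tau - \tau_i}{\tau_j - \tau_i}, \qquad j = 0, \dots, N_k ,
\]

so that the state is interpolated as x(τ) ≈ Σ_j X_j ℓ_j(τ) and, by construction, ℓ_j(τ_i) = δ_ij, which is the Kronecker delta property invoked below.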

The functions in the subinterval can be approximated as where and are the state variables and control variables at the nodes in and have the following property

is the Kronecker delta function. Differentiating Equation (23), we have where is the pseudospectral differential matrix and . Thus, the dynamic equation can be discretized in the following form:

Denote the state variables in all subintervals as follows:

The dynamic equation can be approximated by

This is called the pseudospectral differential form. Corresponding to this form, the dynamic equation can also be approximated in another form called the pseudospectral integral form. In the -th subinterval, the state variable is approximated by where is matrix and . Combining all subintervals, the integral form can be expressed as

is a vector containing the state variables at the left bound of all subintervals. The differential and integral forms of the hp LG and hp LGR pseudospectral methods are similar. It should be noted that the differential form and the integral form are equivalent [61]. However, this is not the case for hp LGL; therefore, hp LGL is not implemented in PySCP.

Apart from the dynamic equation, the integral term in the objective is approximated by

is the integral weights of the pseudospectral methods. We refer the readers to [62] for the computation of the differential matrix , the integral matrix , and the integral weights .
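The integral approximation is the usual Gauss-type quadrature on each subinterval; schematically (writing τ_k^l and τ_k^r for the left and right bounds of the k-th subinterval and w_i for the weights mentioned above, up to the affine scaling from the subinterval onto the collocation domain),

\[
\int_{\tau_k^{\,l}}^{\tau_k^{\,r}} g(\tau)\,\mathrm{d}\tau \;\approx\; \frac{\tau_k^{\,r} - \tau_k^{\,l}}{2} \sum_{i} w_i\, g(\tau_i).
\]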

3.2. Low-Order Discretization Methods

The convexified dynamic equation

can be considered as a continuous-time linear control system. In control theory, this control system can be discretized using ZOH, FOH, and RK methods. Assume that the normalized time horizon is discretized into subintervals by discretization points, which can also be called nodes. The subintervals can be either uniformly distributed or nonuniformly distributed.

According to control theory [63], the state transition of Equation (32) over a subinterval can be expressed as where is the state transition matrix, which has the following properties: (1) it is first-order continuous; (2) it is nonsingular and invertible: and , where is the unit matrix; (3) for all , ; (4) it is the unique solution to the linear matrix ordinary differential equation

ZOH assumes that the control variables remain constant over each subinterval, equal to the control variable at the left bound of the subinterval, i.e.,

FOH assumes that the control variables are linearly interpolated between the control variables at the left and right bounds of each subinterval, i.e.,

The illustration of ZOH and FOH is shown in Figure 5.

In the ZOH method, the state variables at can be approximated by where the matrices are given by

Thus, the state variables at adjacent points are related by an algebraic equation. In the FOH method, the state variables at can be associated with by the following equation: where the matrices are given by

In the RK method, the transcribed dynamic equation is given as follows: where

The subscript indicates that the value is evaluated at the midpoint of and . Substituting Equation (42) into Equation (41), the latter can be rewritten in the same form as Equation (39).
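To make the ZOH construction concrete, the sketch below computes the discrete matrices with the standard augmented matrix-exponential trick, assuming the linearized dynamics dx/dt = A x + B u + c with A, B, and c held constant over a subinterval of length h. This is an illustrative implementation, not necessarily the one used inside PySCP; for the homogeneous case, scipy.signal.cont2discrete also offers 'zoh' and 'foh' options.

```python
import numpy as np
from scipy.linalg import expm

def zoh_discretize(A, B, c, h):
    """Discretize dx/dt = A x + B u + c over a step of length h with u held
    constant (ZOH), so that x_{k+1} = Ad x_k + Bd u_k + cd.

    Uses exp of the augmented matrix [[A, B, c], [0, 0, 0]]: its top blocks
    are exactly Ad = e^{A h}, Bd = (int_0^h e^{A s} ds) B, and the constant
    term cd = (int_0^h e^{A s} ds) c.
    """
    n, m = B.shape
    M = np.zeros((n + m + 1, n + m + 1))
    M[:n, :n] = A
    M[:n, n:n + m] = B
    M[:n, n + m] = c
    E = expm(M * h)
    return E[:n, :n], E[:n, n:n + m], E[:n, n + m]   # Ad, Bd, cd

# Example: a double integrator, dx/dt = [[0, 1], [0, 0]] x + [[0], [1]] u.
Ad, Bd, cd = zoh_discretize(np.array([[0., 1.], [0., 0.]]),
                            np.array([[0.], [1.]]),
                            np.zeros(2), h=0.1)
```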

4. Adaptive Trust Region Method

After convexification and discretization, the multiple-phase optimal control problem is transcribed to a SOCP problem around a reference trajectory , which is summarized as follows:

Problem P2. Minimize the objective subject to the transcribed dynamic constraints (Equation (28), Equation (30), Equation (37), or Equation (39) depending on the discretization method), path constraints the boundary constraints the linkage constraints and the trust region constraints

The dynamic constraints are equality constraints, and the corresponding feasible set is very small, or even empty, in most situations, especially during the first several iterations. This phenomenon is usually called artificial infeasibility. To avoid the situation in which no solution exists, a slack variable is introduced. In pseudospectral methods, is the number of collocation points. For low-order discretization methods, is the number of subintervals. Taking Equation (37) as an example, the dynamic constraints are converted to an inequality constraint

The slack variable enlarges the feasible space of the dynamic constraints. The norm of represents how much the dynamic constraints are violated; therefore, should be as small as possible to make sure that the solution to the SOCP problem is also the solution to the original OCP. In addition to the dynamic constraints, a slack variable for the path constraints is also introduced. The path constraints are transformed to the following inequality function:

To ensure that the two slack variables approach zero at convergence, they are penalized in the objective function. An augmented objective function of the original optimal control problem is constructed, defined as follows:

The penalty on the slack variables is also added to the objective function of Problem P2, as follows: where and are the penalty factors on the slack variables.
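A typical form of such a penalized objective (written schematically with generic symbols, since the weights are configuration parameters in PySCP) is

\[
\tilde{J} \;=\; J \;+\; \lambda_{\nu} \sum_{i} \lVert \nu_i \rVert_1 \;+\; \lambda_{s} \sum_{j} \lVert s_j \rVert_1 ,
\]

where ν and s denote the slack variables on the dynamic and path constraints and λ_ν, λ_s are the corresponding penalty factors; with sufficiently large factors, the penalty drives both slack variables to zero at convergence.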

Define a ratio parameter to measure the accuracy of approximation. The numerator is the improvement in the performance index of the original optimal control problem, and the denominator is the improvement in the performance index of the SOCP problem. If is close to 1, the approximation is very accurate, and the trust region can be enlarged. If , the approximation is poor, and the trust region must be scaled down.
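Written out, a standard form of this ratio (consistent with the description above, though the paper's own symbols are not reproduced here) is

\[
\rho_k \;=\; \frac{\tilde{J}\!\left(\bar{z}_k\right) - \tilde{J}\!\left(z_{k+1}\right)}{\tilde{J}\!\left(\bar{z}_k\right) - \tilde{L}\!\left(z_{k+1}\right)} ,
\]

where \(\tilde{J}\) is the augmented objective of the original problem evaluated along a trajectory, \(\tilde{L}\) is the penalized objective of the SOCP subproblem, \(\bar{z}_k\) is the current reference trajectory, and \(z_{k+1}\) is the new SOCP solution.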

Based on the above analysis, define the following parameters: and . If , then adjust the trust region by . If , then the trust region remains unchanged, . Otherwise, if , enlarge the trust region by . The PySCP algorithm is terminated when the nonlinear cost reduction falls below a tolerance .
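A minimal sketch of this update rule follows; the threshold values and scaling factors are illustrative placeholders rather than the values used in the paper (see Table 1).

```python
def update_trust_region(rho, radius, rho1=0.25, rho2=0.7,
                        shrink=2.0, grow=2.0, r_min=1e-6, r_max=1e3):
    """Adaptive trust-region rule: shrink the region when the convex model
    predicts the nonlinear cost poorly, keep it when the prediction is
    acceptable, and enlarge it when the prediction is very accurate."""
    if rho < rho1:
        return max(radius / shrink, r_min)   # poor approximation: shrink
    if rho < rho2:
        return radius                        # acceptable: keep unchanged
    return min(radius * grow, r_max)         # very accurate: enlarge
```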

5. Examples

In this section, PySCP is demonstrated on three examples. The first example is the Breakwell problem, which demonstrates the ability of PySCP to solve single-phase smooth OCPs with fixed final time. The second example is the lunar landing problem, which shows its ability to cope with nonsmooth free-final-time problems. The third example is the ascent trajectory optimization of a two-stage-to-orbit launch vehicle, a nonsmooth multiple-phase optimal control problem.

The simulation parameters for all of the following examples are listed in Table 1, including the penalty factors and the parameters of the adaptive trust region method. Suitable values of these parameters are selected by trial and error. The virtual control weights are typically 1–3 orders of magnitude larger to ensure that the corresponding terms on the right-hand side of Equation (50) go to zero.

5.1. Breakwell Problem

The Breakwell problem [60] is a single-phase fixed-final-time optimal control problem whose optimal control is smooth. The objective is given by, subject to a convex dynamic equation, fixed boundary constraints, and state constraints

In this problem, all functions and constraints are convex. PySCP reaches the solution in the first iteration because no approximation is made when transcribing the problem into the SOCP problem. The analytical optimal objective is .

The optimal state variables are shown in Figure 6, and the optimal control variable is shown in Figure 7. The discretization method is FOH, and the number of subintervals is 40. All other discretization methods lead to the same optimal solution. The straight lines are the initial guess, which is far from the final solution.

To assess the precision of the PySCP solution, the state variables are integrated by open-loop propagation using the computed control signals. The integrated state vector is denoted as . The open-loop propagation is implemented in PySCP and is automatically invoked after the iteration stops. The relative error between the integrated state and the computed state is defined as

The max relative error of each state variable is
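One plausible way to carry out such a check is sketched below; the control interpolation and the error normalization are illustrative choices, not necessarily the definitions used inside PySCP.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import interp1d

def open_loop_check(f, t_nodes, X, U):
    """Propagate dx/dt = f(t, x, u) with the computed control and compare the
    result against the discrete solution X (shape: nodes x states)."""
    u_of_t = interp1d(t_nodes, U, axis=0, fill_value="extrapolate")
    sol = solve_ivp(lambda t, x: f(t, x, u_of_t(t)),
                    (t_nodes[0], t_nodes[-1]), X[0],
                    t_eval=t_nodes, rtol=1e-10, atol=1e-10)
    X_int = sol.y.T
    # Relative error per node and state, normalized by the state magnitude.
    rel_err = np.abs(X_int - X) / (1.0 + np.max(np.abs(X), axis=0))
    return rel_err.max(axis=0)   # maximum relative error of each state
```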

All of the discretization methods implemented in PySCP are employed, and the CPU times and maximum relative errors for the different methods are listed in Table 2.

The CPU times of the low-order methods grow as the number of nodes grows. However, this is not the case for the pseudospectral methods; for instance, hp LG with 42 nodes is faster than that with 56 nodes. This is mainly because of the structure of the differential matrix and integral matrix , as illustrated in Figure 8. The nonzero elements exist only in the diagonal blocks, and the number of nonzero elements in each block is proportional to the square of the polynomial degree in the corresponding interval. A higher polynomial degree leads to a longer solution time.

The optimal objective can be obtained by all methods, but the pseudospectral methods outperform the low-order methods. ZOH has a much worse accuracy and objective value than the other methods. FOH and RK have a very small relative error compared with the other methods, which is probably due to the way FOH and RK discretize the dynamics: they approximate the dynamics by integrating from one node to the next, similar to open-loop propagation. However, they exhibit a larger objective error. The pseudospectral methods are better at recovering the optimal objective; even a low polynomial degree yields an objective close to the analytical value.

The state relative error and objective error are plotted in Figure 9. The hp pseudospectral methods are implemented using both an h-scheme and a p-scheme. The character “h” indicates that a fixed low-degree polynomial (in this case, ) is used in each interval and the number of subintervals is varied, while “p” indicates that the number of subintervals is fixed to one and the polynomial degree is varied.

In this example, the state and control variables are all smooth, and the pseudospectral methods outperform the low-order methods by 1–2 orders of magnitude. The h-scheme and the p-scheme behave similarly: both exhibit the quasi-exponential convergence noted earlier when solving smooth problems.

5.2. Lunar Landing Problem

The lunar landing problem [64] is a single-phase free-final-time bang-bang control problem. The objective is

The state variables are the flight altitude and velocity , subject to convex dynamic equations where is the gravitational acceleration at the Moon surface. The boundary condition is given by

The control variable is bounded by . After normalization, the objective function is given by which is not convex. Therefore, successive linearization is employed to approximate the objective function.
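For context, the commonly cited form of this benchmark is a double integrator under lunar gravity with bounded thrust acceleration (the boundary values and the control bound used in the paper are those of [64] and are not repeated here):

\[
\min_{u,\,t_f} \; J = \int_{0}^{t_f} u \,\mathrm{d}t
\qquad \text{s.t.} \qquad \dot{h} = v, \qquad \dot{v} = -g_m + u, \qquad 0 \le u \le u_{\max}.
\]

After a mapping of the form t = t_f τ (taking τ ∈ [0, 1] for illustration), the cost becomes t_f ∫₀¹ u dτ, which is bilinear in the free final time and the control and therefore nonconvex; this is why the objective must be approximated by successive linearization.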

The optimal states are shown in Figure 10, and the optimal control is shown in Figure 11. The analytical optimal objective is 8.7831. The plotted curve by PySCP is calculated using the hp fLGR method with three subintervals, each with ten collocation points.

The state relative error and objective error for this nonsmooth problem are plotted in Figure 12. Unlike in the previous smooth problem, the pseudospectral methods behave similarly to FOH and RK as the number of nodes increases. This is mainly due to the discontinuity in the control variable: the switching point in the control makes the pseudospectral methods lose their convergence superiority over the low-order methods. Adaptive mesh refinement might mitigate this phenomenon and restore faster convergence for the pseudospectral methods.

5.3. Ascent Trajectory Optimization of a Two-Stage-to-Orbit Launch Vehicle

This problem is a multiple-phase optimal control problem. The state can be described by four variables: the altitude , downrange , velocity , and horizontal flight path angle , as depicted in Figure 13.

The objective is to maximize the payload mass, which is equivalent to minimizing the negative of the final mass, subject to the dynamic equation

The control variables are , , and thrust magnitude . They satisfy the following equation: which is nonconvex. This constraint is convexified by simply changing the equality into inequality
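Schematically (using generic symbols for the two control components and the thrust magnitude, since the exact notation is defined in the paper's nomenclature), the relaxation replaces the nonconvex equality with a second-order cone constraint:

\[
u_1^2 + u_2^2 = T^2 \quad \longrightarrow \quad u_1^2 + u_2^2 \le T^2 ,
\]

i.e., the control components are allowed to lie inside the cone of radius T rather than exactly on its boundary.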

In the optimal solution, the equality always holds. The aerodynamic drag acceleration is calculated by where is the reference area and is calculated by the exponential atmospheric model
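For reference, the drag acceleration and the exponential density model referred to here take the standard form (with generic symbols: ρ₀ the sea-level density, h_s the reference altitude, S the reference area, C_D the drag coefficient, and m the mass):

\[
D \;=\; \frac{1}{2}\,\rho(h)\,v^{2}\,\frac{S\,C_D}{m}, \qquad \rho(h) \;=\; \rho_0\, e^{-h/h_s}.
\]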

is the sea-level atmospheric density, and is the reference altitude. By successive linearization, the following approximation is obtained: with

The dynamic pressure path constraint must be satisfied, i.e., . This is a nonconvex constraint, which can be approximated by

The linkage condition between the two flight phases is given by the following equality constraints: where is the structural mass of the first stage. The trajectory optimization is based on publicly available data for an existing launch vehicle, the Falcon 9, as listed in Table 3.

The boundary conditions of the flight mission are listed in Table 4. The final downrange and the initial flight path angle are not fixed values; instead, they are allowed to vary within reasonable ranges and are determined by the optimization.

The optimized states and control, including the altitude, downrange, velocity, flight path angle, mass, and thrust magnitude, are plotted in Figure 14, together with the results from GPOPS II. The results of the SCP and NLP-based methods coincide, with only a slight difference in the thrust magnitude: GPOPS II obtains a perfect bang-bang control profile, while the profile from PySCP has a slight slope. This is because GPOPS II implements hp-adaptive mesh refinement, which captures the discontinuity in the control, whereas PySCP does not yet implement mesh refinement.

Twelve iterations are performed before the algorithm converges, and all constraints are satisfied. The trajectories obtained by the two methods are almost identical. PySCP requires only about one-sixth of the CPU time of GPOPS II, which demonstrates its high computational efficiency. As optimization theory points out, NLP-based methods offer no theoretical guarantee on the maximum number of iterations or the convergence rate, whereas each SOCP subproblem can be solved within a bounded number of iterations.

6. Conclusions

A Python software program called PySCP has been described for solving multiple-phase optimal control problems using sequential convex programming. The software uses successive linearization to convexify nonconvex dynamic constraints. It employs zero-order hold, first-order hold, Runge-Kutta, hp Legendre-Gauss, hp Legendre-Gauss-Radau, and hp flipped Legendre-Gauss-Radau discretization to convert the continuous-time optimal control problem into a discrete second-order cone programming problem, which is then solved by a primal-dual interior point method. A soft penalty method and an adaptive trust region method are employed to manage the iteration process. The utility of the software is demonstrated on three optimal control problems. The software described in this article provides a useful toolkit to solve a wide variety of optimal control problems.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

Acknowledgments

This research is supported by the Huzhou Institute of Zhejiang University under the Huzhou Distinguished Scholar Program (ZJIHI-KY0016).