Abstract

Multistage launch vehicles are employed to place spacecraft and satellites in their operational orbits. Trajectory optimization of their ascending path is aimed at determining the maximum payload mass at orbit injection, for specified structural, propulsive, and aerodynamic data. This work describes and applies a method for optimizing the ascending path of the upper stage of a specified launch vehicle through satisfaction of the necessary conditions for optimality. The method at hand utilizes a recently introduced heuristic technique, that is, the particle swarm algorithm, to find the optimal ascent trajectory. This methodology is very intuitive and relatively easy to program. The second-order conditions, that is, the Legendre-Clebsch inequality and the conjugate point condition, are proven to hold, and their fulfillment guarantees optimality of the solution. Availability of a solution that is optimal to second order is an essential premise for the possible development of an efficient neighboring optimal guidance.

1. Introduction

Multistage rockets are employed to place spacecraft and satellites in their operational orbits. The optimization of their ascending trajectory leads to determining the maximum payload mass that can be inserted in the desired orbit. This goal is achieved by finding the optimal control time history and the optimal thrust and coast durations.

The numerical solution of aerospace trajectory optimization problems has been pursued with different approaches in the past. Indirect methods, such as the gradient-restoration algorithm [1, 2] and the shooting method [3], and direct techniques, such as direct collocation [4, 5], direct transcription [6, 7], and differential inclusion [8, 9], are examples of such approaches. However, only a relatively small number of publications are concerned with trajectory optimization of multistage launch vehicles [1, 2, 10, 11]. A recently published paper [12] describes a simple method for performance evaluation through generation of a near optimal trajectory for a multistage launch vehicle.

This research considers the optimal exoatmospheric trajectory of the upper stage of the latter rocket, whose characteristics are specified. The trajectory arc that precedes orbital injection is composed of two phases: (1) a coast (Keplerian) arc and (2) a thrust phase. More specifically, for the upper stage the existence and duration of a coast arc (with no propulsion) and the optimal thrust direction are investigated through the first-order necessary conditions for optimality, that is, the Euler-Lagrange equations and the Pontryagin minimum principle, in conjunction with a heuristic technique, that is, the particle swarm algorithm. This technique is intended to numerically determine the state, control, and adjoint variables (conjugate to the dynamics equations). The availability of the latter quantities allows the direct evaluation of the second-order conditions for optimality, that is, the Legendre-Clebsch condition and the nonexistence of conjugate points. The numerical process for verifying these conditions is based on the sweep method and is illustrated in full detail in this work.

The present paper is organized as follows. Section 2 deals with fundamental concepts related to the multistage rocket characteristics and dynamics. Section 3 is concerned with optimal thrust programming and illustrates the first- and second-order conditions for optimality. Section 4 contains an overview of the numerical technique used to generate the optimal ascending path and presents the numerical solution to the trajectory optimization problem, together with the verification of the second-order conditions. Section 5 concludes the paper, summarizing its main results and accomplishments.

2. Characteristics and Dynamics of the Multistage Launch Vehicle

This work addresses the problem of optimizing the ascending path of the upper stage of a multistage rocket with specified structural, propulsive, and aerodynamic characteristics, through the determination of the optimal control time history, leading to injection of the payload into the desired operational orbit. This is a circular orbit of specified radius and spatial orientation. The launch vehicle is modeled as a point mass, in the context of a three-degree-of-freedom problem.

2.1. Rocket Characteristics

The three-stage rocket that is being considered is the MultiRole Air Launch Missile (Muralm) [13]. It has specified structural, propulsive, and aerodynamic characteristics, which are the same as those described in [12] and are (partially) reported in the following.

For the sake of simplicity, the mass distribution of the launch vehicle can be described in terms of masses of subrockets: subrocket 1 is the entire rocket, including all three stages; subrocket 2 is the launch vehicle after the first stage separation; and subrocket 3 is the launch vehicle after separation of the first two stages and therefore is represented by the third stage only. The initial mass of each subrocket is composed of a structural mass, a propellant mass, and a payload mass. For the first two subrockets, the payload mass coincides with the initial mass of the subsequent subrocket. With regard to the third subrocket, the payload mass is the (actual) payload, which is to be maximized. The mass distribution for the Muralm three-stage rocket is the one reported in [12]. Since the initial mass of the third stage is specified, minimizing the third stage propellant expenditure is equivalent to maximizing the payload mass that can be injected into the desired orbit.
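For clarity, the mass bookkeeping just described can be summarized as follows, where the symbols (initial mass $m_i$, structural mass $m_{s,i}$, propellant mass $m_{p,i}$, and payload mass $m_{u,i}$ of subrocket $i$) are introduced merely for illustration and may differ from the original notation:

$$ m_i = m_{s,i} + m_{p,i} + m_{u,i} \quad (i = 1, 2, 3), \qquad m_{u,i} = m_{i+1} \quad (i = 1, 2). $$

In particular, because $m_3$ is specified, the payload mass $m_{u,3} = m_3 - m_{s,3} - m_{p,3}$ increases as the third stage propellant expenditure decreases, which is the equivalence invoked above.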

The propulsive characteristics of the launch vehicle can be described in terms of thrust magnitude and specific impulse. Both these quantities are assumed to be time-independent for all of the stages; for the upper stage, the propulsive data (thrust and specific impulse of stage 3) are those reported in [12]. For each subrocket $i$, it is then straightforward to derive the thrust acceleration $a_T^{(i)}$ as a function of the time $t$:

$$ a_T^{(i)}(t) = \frac{a_{T,0}^{(i)}}{1 - \dfrac{a_{T,0}^{(i)}}{g_0 I_{sp}^{(i)}}\left(t - t_{ig}^{(i)}\right)}, $$

where $a_{T,0}^{(i)}$ is the thrust acceleration at ignition of stage $i$ (which occurs at the time $t_{ig}^{(i)}$), $I_{sp}^{(i)}$ is the related specific impulse, and $g_0$ is the gravitational acceleration at sea level.
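As a minimal numerical sketch of this relation (with purely illustrative values of thrust, specific impulse, and ignition mass, not the Muralm data; the function name is hypothetical):

```python
import numpy as np

G0 = 9.80665          # sea-level gravitational acceleration, m/s^2

def thrust_acceleration(t, t_ig, thrust, isp, m_ig):
    """Thrust acceleration of a stage with constant thrust and specific impulse.

    The propellant mass-flow rate is thrust / (G0 * isp), so the vehicle mass
    decreases linearly and the thrust acceleration grows hyperbolically.
    """
    m = m_ig - thrust / (G0 * isp) * (t - t_ig)   # instantaneous mass, kg
    return thrust / m                              # m/s^2

# Illustrative (hypothetical) stage data
t = np.linspace(0.0, 60.0, 7)                      # seconds after ignition
a_T = thrust_acceleration(t, 0.0, thrust=30e3, isp=300.0, m_ig=2000.0)
print(np.round(a_T, 2))                            # monotonically increasing, m/s^2
```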

The aerodynamics of the Muralm rocket has been modeled through the Missile DATCOM database for the first two stages, whereas aerodynamics is irrelevant for the upper stage, whose flight is approximated as entirely exoatmospheric, owing to the altitude reached at second stage separation.

2.2. Equations of Motion

Orbital motion is usually described in an Earth-centered inertial frame (ECI), with origin coincident with the Earth center; the first axis corresponds to the vernal direction, whereas the third axis is aligned with the Earth rotation axis and directed toward North (cf. Figure 1). The orbital plane contains the unit vector associated with the ascending node crossing (corresponding to the point at which the space vehicle crosses the equatorial plane from South to North), the unit vector aligned with the direction of perigee, and the unit vector directed toward the instantaneous position of the spacecraft. Two angles suffice to define the orbit plane orientation in the ECI frame, that is, the right ascension of the ascending node (RAAN) and the orbit inclination. The argument of perigee defines the position of perigee on the orbit plane, whereas the instantaneous true anomaly identifies the position of the space vehicle.
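As an illustration of how these angles orient the orbit plane and locate the vehicle in the ECI frame, the following sketch composes the standard rotation sequence RAAN, inclination, and argument of perigee plus true anomaly; the function names are illustrative:

```python
import numpy as np

def R1(a):  # rotation about the first (x) axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def R3(a):  # rotation about the third (z) axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def radial_unit_vector_eci(raan, incl, argp, true_anom):
    """Unit vector toward the instantaneous spacecraft position, in ECI axes."""
    # Rotate the node direction by RAAN, tilt the plane by the inclination,
    # then locate the spacecraft at the angle (argument of perigee + true anomaly).
    return R3(raan) @ R1(incl) @ R3(argp + true_anom) @ np.array([1.0, 0.0, 0.0])

# Example: zero inclination keeps the position vector in the equatorial plane
print(radial_unit_vector_eci(0.3, 0.0, 0.2, 0.5))
```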

The trajectory of the upper stage entirely belongs to the desired orbital plane, which has been selected during the ascending trajectory of the first two stages (cf. [12]). This circumstance implies that the orbit inclination and RAAN do not vary during the last stage propulsion. In contrast, the remaining orbit elements, that is, the orbit semimajor axis, eccentricity, and argument of perigee, change until reaching the desired final values; that is, the final semimajor axis equals the radius $r_d$ of the target circular orbit and the final eccentricity is zero (the argument of perigee is not defined for circular orbits).

At separation of the second stage, the previously mentioned orbit elements are known and denoted with $a_s$, $e_s$, and $\omega_s$. The third stage trajectory is assumed to be composed of two phases: a coast arc and a thrust phase. It is relatively straightforward to prove that during the coast arc the true anomaly variation suffices to describe the rocket dynamics. In fact, if $t_s$ and $t_{ig}$ denote, respectively, the time of the second stage separation and the ignition time of the upper stage, the orbit elements $a_s$, $e_s$, and $\omega_s$ do not vary in the interval $[t_s, t_{ig}]$ (Keplerian arc), and only the true anomaly changes. As a result, if $\nu_{ig}$ denotes the true anomaly at ignition, the radius $r$, inertial velocity $v$, and inertial flight path angle $\gamma$ at $t_{ig}$ are simply given by

$$ r(t_{ig}) = \frac{a_s\left(1 - e_s^2\right)}{1 + e_s \cos\nu_{ig}}, \qquad (4) $$
$$ v(t_{ig}) = \sqrt{\mu\left(\frac{2}{r(t_{ig})} - \frac{1}{a_s}\right)}, \qquad (5) $$
$$ \tan\gamma(t_{ig}) = \frac{e_s \sin\nu_{ig}}{1 + e_s \cos\nu_{ig}}, \qquad (6) $$

where $\mu$ denotes the Earth gravitational parameter. In the powered phase, the third stage motion can be conveniently described through the equations of motion for $r$, $v$, and $\gamma$ (the subscript "3" is omitted henceforth):

$$ \dot{r} = v \sin\gamma, \qquad \dot{v} = a_T \cos\alpha - \frac{\mu}{r^2}\sin\gamma, \qquad \dot{\gamma} = \left(\frac{v}{r} - \frac{\mu}{r^2 v}\right)\cos\gamma + \frac{a_T}{v}\sin\alpha, \qquad (7) $$

which can be written in compact form as the state equation

$$ \dot{\mathbf{x}} = \mathbf{f}\left(\mathbf{x}, \alpha, t\right), \qquad \mathbf{x} := \left[\, r \;\; v \;\; \gamma \,\right]^{T}. \qquad (8) $$

The thrust direction is specified through the thrust angle $\alpha$, which is the only time-varying control variable and is taken clockwise from the direction of the inertial velocity vector. The operational orbit is assumed to be a circular orbit of radius $r_d$. This means that the final conditions at orbit injection (which occurs at the final time $t_f$) are

$$ r(t_f) = r_d, \qquad v(t_f) = \sqrt{\frac{\mu}{r_d}}, \qquad \gamma(t_f) = 0. \qquad (9) $$

The boundary conditions (4) through (6) and (9) are collected in the vector $\boldsymbol{\psi}$.
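A minimal numerical sketch of the coast-arc relations (4)-(6) and of the planar equations of motion (7) is reported below; the sign convention adopted for the thrust angle is an assumption, and the numerical values are purely illustrative:

```python
import numpy as np

MU = 398600.4418e9   # Earth gravitational parameter, m^3/s^2

def coast_state(a, e, nu):
    """Radius, inertial velocity, and flight path angle on a Keplerian orbit at
    true anomaly nu (a and e do not change along the coast arc)."""
    p = a * (1.0 - e**2)
    r = p / (1.0 + e * np.cos(nu))
    v = np.sqrt(MU * (2.0 / r - 1.0 / a))
    gamma = np.arctan2(e * np.sin(nu), 1.0 + e * np.cos(nu))
    return r, v, gamma

def powered_rhs(state, a_T, alpha):
    """Right-hand side of the planar equations of motion during the thrust phase."""
    r, v, gamma = state
    r_dot = v * np.sin(gamma)
    v_dot = a_T * np.cos(alpha) - MU / r**2 * np.sin(gamma)
    gamma_dot = (v / r - MU / (r**2 * v)) * np.cos(gamma) + a_T / v * np.sin(alpha)
    return np.array([r_dot, v_dot, gamma_dot])

# Example: state at ignition, 30 deg of true anomaly past separation (illustrative elements)
r0, v0, g0 = coast_state(a=6.9e6, e=0.05, nu=np.radians(30.0))
print(r0, v0, np.degrees(g0))
```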

3. Optimal Thrust Programming

The trajectory of the upper stage is obtained by minimizing fuel consumption, which is equivalent to minimizing the thrust duration. Hence, the objective function for the third stage trajectory optimization is the thrust duration, $J = t_f - t_{ig}$. The optimization problem for the upper stage is then the following: determine the optimal control law $\alpha(t)$ and the optimal true anomaly at ignition $\nu_{ig}$ such that $J$ is minimized. The ignition time $t_{ig}$ is then calculated from $\nu_{ig}$ through Kepler's law, by means of the eccentric anomaly associated with $\nu_{ig}$.
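For completeness, a small sketch of the Kepler's-law step mentioned above, which maps the coast true-anomaly interval into the corresponding time of flight (illustrative orbital elements; an elliptic coast arc is assumed):

```python
import numpy as np

MU = 398600.4418e9   # Earth gravitational parameter, m^3/s^2

def eccentric_anomaly(nu, e):
    """Eccentric anomaly associated with true anomaly nu (elliptic orbits)."""
    return 2.0 * np.arctan2(np.sqrt(1.0 - e) * np.sin(nu / 2.0),
                            np.sqrt(1.0 + e) * np.cos(nu / 2.0))

def coast_duration(a, e, nu_sep, nu_ig):
    """Time elapsed between second stage separation and third stage ignition,
    obtained from Kepler's law t = sqrt(a^3/mu) * (E - e*sin(E)) + const."""
    E0, E1 = eccentric_anomaly(nu_sep, e), eccentric_anomaly(nu_ig, e)
    n = np.sqrt(MU / a**3)                       # mean motion
    dM = (E1 - e * np.sin(E1)) - (E0 - e * np.sin(E0))
    return dM / n

print(coast_duration(a=6.9e6, e=0.05, nu_sep=np.radians(10.0), nu_ig=np.radians(40.0)))
```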

To obtain the necessary conditions for an optimal solution, a Hamiltonian $H$ and a function $\Phi$ of the boundary conditions are introduced; $H$ involves the adjoint variables conjugate to the dynamics equations (7), whereas $\Phi$ involves the multipliers conjugate to the boundary conditions (4)-(6) and (9). With these definitions, the objective function assumes its extended form, augmented with the dynamics and boundary constraints.
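In standard optimal control notation, these quantities take the following form (this is a reconstruction under the usual formulation with multipliers adjoined to the dynamics and to the boundary conditions; the symbols may differ from the original ones):

$$ H = \boldsymbol{\lambda}^{T}\,\mathbf{f}\left(\mathbf{x}, \alpha, t\right), \qquad \Phi = \boldsymbol{\upsilon}^{T}\boldsymbol{\psi}, $$
$$ \bar{J} = \left(t_f - t_{ig}\right) + \boldsymbol{\upsilon}^{T}\boldsymbol{\psi} + \int_{t_{ig}}^{t_f}\left[\,H - \boldsymbol{\lambda}^{T}\dot{\mathbf{x}}\,\right]dt, $$

where $\boldsymbol{\lambda}$ and $\boldsymbol{\upsilon}$ collect the multipliers conjugate to the dynamics and to the boundary conditions, respectively.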

3.1. First-Order Conditions

The first-order necessary conditions for (local) optimality [14] include the adjoint equations (14), which yield the scalar relations (15)-(17) for the costate components, in conjunction with the respective boundary conditions (18)-(19), where the two subscripts refer to quantities evaluated at the ignition time and at the final time, respectively. In the presence of initial conditions depending on a parameter, a pair of additional necessary conditions must hold [15]; the first of them yields relation (21), which expresses the true anomaly at ignition as a function of the state and costate variables at ignition. In addition, the optimal control can be expressed as a function of the costate through the Pontryagin minimum principle (22), which implies the fulfillment of the stationarity condition (23) of the Hamiltonian with respect to the thrust angle. Lastly, as the final time is unspecified, the transversality condition (24) must hold. The necessary conditions for optimality allow translating the optimal control problem into a two-point boundary-value problem involving (15)-(24), with unknowns represented by the initial values of the costate variables.
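Under the planar-dynamics assumption used in the earlier sketch, the stationarity condition yields the thrust angle in closed form from the costate components; the following minimal illustration (whose notation and sign conventions are assumptions, not necessarily those of the original formulation) also verifies numerically that this angle minimizes the Hamiltonian:

```python
import numpy as np

def optimal_thrust_angle(lam_v, lam_gamma, v):
    """Thrust angle minimizing the alpha-dependent part of the Hamiltonian,
    a_T*(lam_v*cos(alpha) + (lam_gamma/v)*sin(alpha)), for the planar model."""
    return np.arctan2(-lam_gamma / v, -lam_v)

# Numerical check of the minimum principle on a grid of candidate angles
lam_v, lam_gamma, v = -1.0, 0.05, 7500.0
alphas = np.linspace(-np.pi, np.pi, 100001)
h_alpha = lam_v * np.cos(alphas) + (lam_gamma / v) * np.sin(alphas)
alpha_star = optimal_thrust_angle(lam_v, lam_gamma, v)
print(np.degrees(alpha_star), np.degrees(alphas[np.argmin(h_alpha)]))  # nearly equal
```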

However, the transversality condition can be proven to be ignorable. First, homogeneity of the costate equations (15)-(17) can be easily recognized. This circumstance implies that, if an optimization algorithm is capable of finding some initial value of the costate that is proportional to the optimal one (i.e., equal to the optimal initial costate multiplied by a positive constant factor), then the same proportionality holds between the costate and the optimal costate at any subsequent time, due to homogeneity of (15)-(17). Second, if the control is written as a function of the costate through (22), then one can recognize that it coincides with the optimal control, because (22) depends only on the ratios of the costate components. This circumstance implies that, if the initial costate is proportional to the optimal one, then the final conditions are fulfilled at the minimum final time even when the control is expressed as a function of the nonoptimal costate. In contrast, the transversality condition is violated, because the value of the final Hamiltonian, due to (24), turns out to be scaled by the same proportionality factor. Hence, if the proportionality condition holds, then the optimal control can be determined without considering the transversality condition, which in fact is ignorable in this context.
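A quick numerical illustration of the scaling argument above, using the closed-form thrust angle from the earlier planar-dynamics sketch (assumed convention): multiplying all costate components by a positive constant leaves the control unchanged.

```python
import numpy as np

def thrust_angle(lam_v, lam_gamma, v):
    # Minimizing thrust angle from the planar-ascent sketch (assumed convention)
    return np.arctan2(-lam_gamma / v, -lam_v)

lam_v, lam_gamma, v, K = -1.0, 0.05, 7500.0, 3.7   # K: arbitrary positive scale factor
print(np.isclose(thrust_angle(lam_v, lam_gamma, v),
                 thrust_angle(K * lam_v, K * lam_gamma, v)))   # True
```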

3.2. Second-Order Conditions

The derivation of the second-order optimality conditions, which, like the first-order conditions, have local character, involves the definition of an admissible comparison path, located in the neighborhood of the (local) optimal path, which is associated with the optimal state, costate, and control. The nonexistence of alternative neighboring optimal solutions is to be proven in order to guarantee optimality of the solution of interest.

3.2.1. Weierstrass and Legendre-Clebsch Conditions

An admissible comparison path can be defined by introducing a comparison control that differs from the optimal control by a finite variation over a small time interval. Under the assumption that the boundary conditions depend only on the state, a suitable condition on the state variation at the final time ensures fulfillment of the boundary conditions to first order. The comparison control is not located in the neighborhood of the optimal solution, and for this reason the control variation belongs to the class of strong variations. In contrast, the resulting state variation is reasonably small, and the comparison state is sufficiently close to the optimal state. Let the local minimum value of the objective (associated with the optimal control) be compared with the value corresponding to the comparison control. After expanding the latter in a Taylor series, one obtains (cf. [15]) an expression whose leading term is the difference between the Hamiltonian evaluated with the comparison control and the Hamiltonian evaluated with the optimal control. This quantity must be positive for any choice of the comparison control. Since the control variation is arbitrary and the time interval is small, the condition

$$ H\left(\mathbf{x}^*, \boldsymbol{\lambda}^*, \alpha, t\right) - H\left(\mathbf{x}^*, \boldsymbol{\lambda}^*, \alpha^*, t\right) \ge 0 \qquad (29) $$

must hold at any point along the minimizing path. Equation (29) represents the Weierstrass condition for a strong local minimum. For weak variations, that is, when the comparison control is sufficiently close to the optimal control, (29) can be expanded in a Taylor series about the optimal solution. Along the optimal path, this property and the arbitrariness of the control variation imply that the Hessian of the Hamiltonian with respect to the control must be positive definite; that is,

$$ H_{\alpha\alpha} > 0. \qquad (31) $$

Equation (31) is the Legendre-Clebsch sufficient condition for a minimum; in the necessary (weak) form, the sign "$\ge$" replaces the strict inequality (i.e., the Hessian must be positive semidefinite).
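Continuing the planar-dynamics sketch introduced earlier, the Legendre-Clebsch check reduces to verifying the positivity of the (scalar) second derivative of the Hamiltonian with respect to the thrust angle along the candidate path; a minimal illustrative check, with assumed notation and illustrative arrays:

```python
import numpy as np

def legendre_clebsch_ok(lam_v, lam_gamma, v, a_T, tol=1e-12):
    """True where H_alpha_alpha = a_T*sqrt(lam_v^2 + (lam_gamma/v)^2) is strictly positive.

    A single scalar control makes the Hessian a 1x1 matrix, so positive definiteness
    is simply positivity of this quantity along the whole minimizing path.
    """
    h_aa = a_T * np.hypot(lam_v, lam_gamma / v)
    return h_aa > tol

# Illustrative costate and velocity histories (arrays sampled along the trajectory)
lam_v = np.array([-1.2, -1.0, -0.8])
lam_gamma = np.array([0.10, 0.05, 0.02])
v = np.array([6800.0, 7200.0, 7600.0])
print(legendre_clebsch_ok(lam_v, lam_gamma, v, a_T=15.0))
```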

3.2.2. Conjugate Point Condition

In general, a comparison path fulfills the state equations and boundary conditions to first order. This means that the state, costate, control, parameter, and final time displacements satisfy a set of linear equations (deriving from (8), (14), and (23)), accompanied by linearized boundary conditions (deriving from the boundary conditions (4)-(6), (9), (18), (19), and (24)). After solving (34) for the control displacement and inserting the resulting expression in (32)-(33), one obtains a linear system whose coefficient matrices are defined in terms of the quantities appearing in (32)-(34). The final conditions (35) and (37) justify the definition of the sweep variables, introduced through relations (43)-(45), which express the costate, parameter, and final time displacements in terms of the state displacement. After inserting (43) through (45) into (32)-(33), one obtains the sweep equations (46), with final conditions that must be consistent with (35) through (39). Equations (44) and (45) can be solved simultaneously at the final time, and the resulting expressions allow propagating the sweep matrices backward in time; the first of (46) can be shown to hold also for a suitably transformed sweep matrix, with the corresponding boundary condition at the final time. It is worth remarking that all the matrices introduced in this section are evaluated along the optimal path. From (51), it is then straightforward to conclude that the state displacement vanishes, unless the sweep matrix tends to infinity at an internal time, which is referred to as a conjugate point. If the state displacement vanishes, then the costate, parameter, and final time displacements vanish as well, due to (40)-(42) and (48). This means that no neighboring optimal path exists, unless the sweep matrix becomes unbounded at some internal time. In other words, the nonexistence of conjugate points corresponds to the sweep matrix remaining finite over the entire time interval. It is worth remarking that the numerical backward integration can start by integrating (46) up to a time sufficiently close to the final time; then one can switch to the transformed sweep matrix for the remaining time interval.
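The sweep equations involve matrices whose detailed expressions depend on the specific problem and are not reproduced here; as a generic illustration of the numerical procedure (backward integration of a Riccati-type sweep matrix from the final time and detection of a conjugate point through blow-up of its entries), the following sketch uses the textbook form of the accessory-problem sweep equation with hypothetical system matrices:

```python
import numpy as np
from scipy.integrate import solve_ivp

def sweep_rhs(t, s_flat, A, B, Q, n):
    """Riccati-type sweep equation dS/dt = -S A - A^T S + S B S - Q (textbook form)."""
    S = s_flat.reshape(n, n)
    dS = -S @ A(t) - A(t).T @ S + S @ B(t) @ S - Q(t)
    return dS.ravel()

def has_conjugate_point(A, B, Q, S_f, t0, t_f, n, blowup=1e8):
    """Integrate the sweep matrix backward from t_f to t0; a conjugate point is
    flagged if some entry of S becomes unbounded before reaching t0."""
    sol = solve_ivp(sweep_rhs, (t_f, t0), S_f.ravel(), args=(A, B, Q, n),
                    max_step=(t_f - t0) / 200.0)
    return bool(np.any(np.abs(sol.y) > blowup)) or not sol.success

# Hypothetical time-invariant matrices for a 2-state accessory problem
n = 2
A = lambda t: np.array([[0.0, 1.0], [-1.0, 0.0]])
B = lambda t: np.array([[0.0, 0.0], [0.0, 0.5]])
Q = lambda t: np.eye(2)
print(has_conjugate_point(A, B, Q, S_f=np.zeros((2, 2)), t0=0.0, t_f=5.0, n=n))
```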

4. Optimal Ascending Trajectory

This section is focused on the numerical solution to the trajectory optimization problem defined in the previous sections and regarding the upper stage of the Muralm launch vehicle. To this end, the first-order conditions for optimality are used, in conjunction with a simple implementation of the particle swarm algorithm.

4.1. Outline of the Particle Swarm Optimization Method

The particle swarm optimization (PSO) method is a heuristic technique aimed at finding the optimal values of a set of unknown parameters, for a generic dynamical system. In this research, the optimization problem involves a continuous time-dependent control variable and is translated into a parameter optimization problem through the necessary conditions for optimality, which allow expressing the control variable as a function of the adjoint variables conjugate to the dynamics equations. This means that the unknown parameters for the problem at hand are the initial values of the adjoint variables.

In general, unconstrained parameter optimization problems can be stated as follows: determine the optimal values of the unknown parameters such that the objective function is minimized. The PSO technique is a population-based method, where the population is represented by a swarm of particles. Each particle is associated with a position vector and with a velocity vector. The position vector includes the values of the unknown parameters of the problem, whereas the velocity vector determines the position update. Each particle represents a possible solution to the problem and corresponds to a specific value of the objective function. The initial population is randomly generated by introducing particles whose positions and velocities are (stochastically) uniformly distributed in the respective search spaces. The expressions for position and velocity update determine the swarm evolution toward the location of the globally optimal position, which corresponds to the globally optimal solution to the problem of interest. The algorithm terminates when the maximum number of iterations is reached or an alternative convergence criterion is met. The position vector of the best particle is expected to contain the optimal values of the unknown parameters, which correspond to the global minimum of the objective function.

The central idea underlying the method is contained in the formula for velocity updating. This formula includes three terms with stochastic weights: the first term is the so-called inertial component and for each particle is proportional to its velocity in the preceding iteration; the second component is termed the cognitive component, directed toward the personal best position, that is, the best position experienced by the particle; and finally the third term is the social component, directed toward the global best position, that is, the best position yet located by any particle in the swarm. Further details are reported in [16, 17].
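A minimal, generic sketch of the position and velocity update rules described in this and the previous paragraph (inertial, cognitive, and social terms with stochastic weights); the parameter values and function name are illustrative and are not the settings adopted in this study:

```python
import numpy as np

def pso_minimize(objective, lower, upper, n_particles=50, n_iter=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Basic particle swarm optimization over a box-bounded search space."""
    rng = np.random.default_rng(seed)
    dim = len(lower)
    x = rng.uniform(lower, upper, size=(n_particles, dim))            # positions
    v = rng.uniform(-(upper - lower), upper - lower, size=(n_particles, dim))
    p_best, p_val = x.copy(), np.array([objective(xi) for xi in x])
    g_best = p_best[np.argmin(p_val)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)   # velocity update
        x = np.clip(x + v, lower, upper)                              # position update
        val = np.array([objective(xi) for xi in x])
        improved = val < p_val
        p_best[improved], p_val[improved] = x[improved], val[improved]
        g_best = p_best[np.argmin(p_val)].copy()
    return g_best, p_val.min()

# Usage example on a simple quadratic objective
lower, upper = np.array([-5.0, -5.0, -5.0]), np.array([5.0, 5.0, 5.0])
best, best_val = pso_minimize(lambda z: np.sum((z - 1.0) ** 2), lower, upper)
print(best, best_val)
```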

Constrained optimization problems involve equalities and/or inequalities, regarding (directly or indirectly) the unknown parameters. Equality constraints narrow considerably the search space where feasible solutions can be located. This is due to the fact that (nonredundant) equality constraints reduce the degrees of freedom of the problem according to their number: in the presence of $n$ unknown parameters, at most $n$ equality constraints are admissible. The most popular approach for dealing with these constraints consists in penalizing them, by adjoining additional weighted terms to the objective function, leading to the augmented objective (54). This approach is employed also in this research, with the equality constraints given by (9) and (21). As the necessary conditions for optimality are used to express the control variable, the PSO assumes expression (54) as the objective. Satisfaction of the first-order conditions is indicative of optimality of the solution.
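As a sketch of the penalty approach just described, the following illustrative function adjoins weighted absolute violations of the final conditions to the thrust-duration objective (the weights and the residuals shown are placeholders, not the values used in this study):

```python
import numpy as np

def penalized_objective(thrust_duration, constraint_violations, weights):
    """Thrust-duration objective augmented with weighted absolute constraint violations.

    Each violation (e.g., the errors on final radius, velocity, and flight path angle)
    is driven toward zero by the penalty terms as the swarm converges.
    """
    return thrust_duration + np.dot(weights, np.abs(constraint_violations))

# Illustrative call: small residual errors on the three final conditions
print(penalized_objective(310.0, constraint_violations=[1e-4, -2e-5, 3e-6],
                          weights=[100.0, 100.0, 100.0]))
```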

4.2. Numerical Results

The ascending trajectories are determined by employing canonical units: the distance unit (DU) is the Earth radius, whereas the time unit (TU) is such that the Earth gravitational parameter equals 1 DU^3/TU^2. The desired final radius $r_d$ and the true anomaly at second stage separation are specified.
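For reference, the canonical units follow from setting the distance unit equal to the Earth radius and choosing the time unit so that the gravitational parameter equals 1 DU^3/TU^2; a quick computation with standard Earth constants gives:

```python
import numpy as np

MU_EARTH = 398600.4418          # km^3/s^2
R_EARTH = 6378.136              # km

DU = R_EARTH                    # distance unit, km
TU = np.sqrt(DU**3 / MU_EARTH)  # time unit, s, so that mu = 1 DU^3/TU^2

print(DU, TU)                   # TU is roughly 806.8 s
```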

The swarming optimizer is employed with a specified number of particles and a specified maximum number of iterations. The algorithm terminates its execution prematurely if the objective (cf. (54)) decreases below a prescribed tolerance. The optimal values of the unknown parameters are sought in prescribed ranges. It is worth noticing that constraint reduction allows arbitrarily defining the search space for the initial values of the Lagrange multipliers, which can thus be sought in a normalized interval by the PSO algorithm.

Only 126 PSO iterations were needed to generate the optimal solution to the specified accuracy, together with the corresponding in-plane orbit elements and mass at burnout. Figures 2, 3, and 4 portray the state components, that is, the altitude (directly related to the radius), the inertial velocity, and the flight path angle. Figure 5 illustrates the costate variables, whereas Figure 6 depicts the optimal control time history. It is apparent that the thrust direction is nearly aligned with the velocity for the entire time of powered flight of the upper stage.

Two second-order conditions are described in Section 3.2, that is, the Legendre-Clebsch condition on the Hessian and the nonexistence of conjugate points (52). These conditions are checked for the optimal ascending path. First, due to (22), the second derivative of the Hamiltonian with respect to the thrust angle assumes a form that is nonnegative along the extremal. From inspecting Figure 7, it is apparent that this quantity is positive for the entire time of flight. Moreover, backward integration of (46) yields the time history of the sweep matrix. Figure 8 reports the absolute value of its determinant (in logarithmic scale) and proves that it is finite at any time in the interval of interest. This suffices to demonstrate the two previously mentioned second-order conditions, which guarantee optimality to second order.

5. Concluding Remarks

Trajectory optimization of multistage launch vehicles is a challenging problem, treated with different approaches in the past. In this research, the final objective of minimizing fuel consumption is achieved by finding the optimal control time history and the optimal thrust and coast durations. Specifically, for the upper stage the existence and duration of a coast arc (with no propulsion) and the optimal thrust direction are determined through the first-order necessary conditions for optimality, that is, the Euler-Lagrange equations and the Pontryagin minimum principle, in conjunction with a heuristic method, that is, the particle swarm algorithm. This technique is very intuitive and easy to implement and leads to numerically determining the state, control, and adjoint variables (conjugate to the dynamics equations). A further advantage is represented by the fact that the swarming approach does not need any starting guess to yield an accurate solution. In addition, the availability of the adjoint variables allows the direct evaluation of the second-order conditions for optimality, that is, the Legendre-Clebsch condition and the nonexistence of conjugate points. The numerical process for verifying these conditions is based on the sweep method and is illustrated in full detail in this work. The satisfaction of the second-order conditions guarantees optimality of the ascending path found in this study and represents an essential premise for the possible development of an efficient neighboring optimal guidance.