#### Abstract

We present a new method of designing a controller for Mars capsule atmospheric entry using deep neural networks and flight-proven Apollo entry data. The controller is trained to modulate the bank angle using data from Apollo entry simulations and reproduces the classical Apollo results over a range of entry state initial conditions. With the Apollo controller as a baseline, the present approach achieves the same level of accuracy for both linear and nonlinear entry dynamics. The Apollo-trained controller is then applied to Mars entry missions. As in the Earth environment, the controller achieves the desired level of accuracy for Mars missions with both linear and nonlinear entry dynamics under higher uncertainties in the entry states and the atmospheric density. Although the deep neural network is trained only with data from Apollo reentry simulations in an Earth model, it works in both Earth and Mars environments and achieves the desired landing accuracy for a Mars capsule. The method works with both linear and nonlinear integration and can generate the bank angle commands in real time without a prestored trajectory.

#### 1. Introduction

Atmospheric entry requires a careful balance of landing accuracy, maximum acceleration, and heating requirements; however, landing accuracy is often sacrificed in order to meet other mission requirements [1]. While Earth reentry missions have become routine, Mars missions often fall short of the desired landing accuracy. Additionally, Mars landings present unique challenges. Not counting the most recent InSight mission, there have been 20 missions that intended to land on Mars, of which only 7 were successful [2]. A summary of these missions is given in Table 1.

Most notably, the landing accuracy for the majority of these missions is on the order of hundreds of kilometers. To enable the next generation of Mars landers, future manned Mars missions will require a landing accuracy of tens of kilometers or even meters [3, 4]. Achieving that requires overcoming several technical challenges, one of which is the capability to land accurately while maintaining the acceptable acceleration and heating requirements set by the vehicle and cargo.

So far, there have been three generations of entry guidance algorithms. The first generation was designed for low-lifting capsule vehicles like Apollo. The Apollo entry guidance algorithm is broken up into different phases, e.g., preentry attitude hold, down control, up control, Kepler, and final entry [6, 7]. In each phase, the Apollo vehicle flies at a low angle of attack caused by the center of gravity offset with respect to its axis of symmetry, providing the lift for trajectory control [6]. This algorithm heavily depends on the given reference trajectory, and the command for the bank angle is generated to reduce the downrange error. To reduce the computational burden, this algorithm relies mostly on analytical, approximate, and empirical relationships, which affects the precision and applicability [8–13]. It is noteworthy that the Mars Science Laboratory (MSL) mission, which has the most accurate landing so far, used the Apollo reentry guidance algorithm to guide the vehicle to the parachute deployment velocity [14].

The second generation algorithm is the space shuttle entry guidance algorithm. The shuttle has a markedly higher lift-to-drag ratio than Apollo, as well as a longer flight time and downrange, and it lands horizontally. In this acceleration-based entry guidance algorithm, a reference longitudinal trajectory is defined by a drag acceleration vs. Earth-relative velocity profile at high Mach numbers and a drag-vs.-energy profile at lower Mach numbers [8, 15, 16].

The third generation entry guidance algorithms in recent years originated from previous methods but rely much more on onboard computation for a real-time trajectory design and guidance solution. Predictor-corrector algorithms have shown great potential. These methods can adapt to large trajectory dispersions with no reliance on a given reference trajectory. Considering all their advantages and disadvantages, Lu developed a unified method based on the same algorithmic principles but applicable to multiple vehicle configurations [8]. The algorithm is relatively simple and highly robust using the bank angle to control the vertical component of the aerodynamic lift. Numerical predictor-corrector algorithms have also been adopted for various entry, descent, and landing scenarios, e.g., to address constraints [17] and aerocapture [18].

The Apollo reentry method is detailed in Morth's *Reentry Guidance for Apollo* [6]. It has been flight proven and is often the starting point of modern entry methods, including MSL [7, 19, 20]. On the other hand, even though several papers about MSL have been published, access to real Mars mission data is still limited. The large uncertainties in the Mars atmosphere model, which significantly influence the results, must also be considered. Yu et al.'s review paper summarized current navigation and guidance techniques for Mars pinpoint landing [2]. It indicated that the landing process is a critical and dangerous phase of the entire Mars landing mission, leading to higher accuracy and safety requirements. The challenges across the atmospheric entry, parachute descent, powered descent, and landing phases include limited navigation information, nonlinearity and uncertainty of the dynamic model, and weak control capability in deep space.

Over the past few years, research has applied machine learning to spacecraft guidance and control. Recently, Biggs and Fournier used a recurrent neural network for state propagation and cost function evaluation and then trained a multilayer perceptron (MLP) offline with optimal control data to achieve more fuel-efficient attitude control by thrusters [21]. Reinforcement learning for closed-loop control was applied to satellite rendezvous missions with current three-degree-of-freedom (3-DoF) dynamics and a planned extension to six degrees of freedom (6-DoF) [22]. Deep reinforcement learning (DRL) techniques were also used in onboard autonomous attitude guidance operations dealing with high-dimensional, continuous observation and action spaces [23]. In addition, machine learning algorithms were used in Space Situational Awareness sensor tasking of object observation and compared to traditional methods [24].

For atmospheric entry, especially Mars entry, most of the methods are combinations of machine learning and other classical control techniques and are set in idealized situations [25, 26]. For example, Jiang et al. combined reinforcement learning and Gauss pseudospectral methods for Mars powered descent [25], and Li and Jiang discussed second-order sliding mode guidance with a radial basis function (RBF) neural network for Mars entry [26]. Gaudet et al. used deep reinforcement learning in a Mars powered descent and pinpoint landing problem, performing well with a heuristic method for value function learning [27]. However, reinforcement learning must train both a policy network and a value network, which increases the complexity of the problem and the computational requirements compared to a deep neural network alone.

Considering the constraints and the challenges discussed above, this work presents a simple and efficient method to control the vehicle during entry while dealing with uncertainties in the entry state and the atmosphere. A key advantage of the proposed method is that it needs only the position relative to the target as input to the controller to derive the bank angle command. A deep neural network is trained with Apollo simulation data, the most realistic data accessible, to learn the bank angle control commands. Then, the trained controller is used to replace the original control block of the Apollo simulation. After demonstrating success in Apollo guidance [28], it is transferred to the Mars environment with uncertainties in the entry state and the atmosphere. Even though the Apollo guidance algorithm was designed more than 50 years ago, it is still used as a reference for recent Mars missions [7, 19, 20], which motivates our use of it as the baseline for training the bank angle controller to guide the spacecraft in the Mars atmosphere. The Apollo capsule is a typical low-lifting vehicle with an L/D ratio of about 0.3 [6]; this neural network controller is designed for the same type of vehicle.

The paper is organized as follows. In Section 2, the general entry dynamics is introduced. Section 3 introduces the Apollo reentry case as the main reference of this work and then discusses the neural network setup and training process. After the discussion of the application of the trained controller and its performance in the Earth model in Section 3, Section 4 discusses the application and performance of the Mars model. The discussion of results, errors, and future improvements is presented in Section 5. Finally, concluding remarks are discussed in Section 6.

#### 2. Entry Dynamics

The equations governing the entry of a capsule are given by [1, 29]

$$
\begin{aligned}
\dot{r} &= V \sin\gamma, \\
\dot{\theta} &= \frac{V \cos\gamma \sin\psi}{r \cos\phi}, \\
\dot{\phi} &= \frac{V \cos\gamma \cos\psi}{r}, \\
\dot{V} &= -\frac{D}{m} - g \sin\gamma, \\
\dot{\gamma} &= \frac{L \cos\sigma}{m V} + \cos\gamma \left( \frac{V}{r} - \frac{g}{V} \right), \\
\dot{\psi} &= \frac{L \sin\sigma}{m V \cos\gamma} + \frac{V}{r} \cos\gamma \sin\psi \tan\phi,
\end{aligned}
\tag{1}
$$

where $r$ is the distance from the center of the planet, $V$ is the velocity of the capsule, $\gamma$ is the flight path angle, $\psi$ is the heading angle defined as the angle between the local parallel of latitude and the projection of velocity on the local horizontal plane [1, 29], $\phi$ is the latitude, $\theta$ is the longitude, $m$ is the mass of the capsule, and $\sigma$ is the roll/bank angle.

In this study, no constraints are imposed on $\sigma$. The drag $D$, lift $L$, and gravity $g$ are given by

$$
D = \frac{1}{2}\rho V^2 S C_D, \qquad L = \frac{1}{2}\rho V^2 S C_L, \qquad g = \frac{\mu}{r^2},
\tag{2}
$$

where $\mu$ is the standard gravitational parameter; $\rho$ is the atmospheric density; $C_L$ and $C_D$ are the coefficients of lift and drag, respectively; and $S$ is the reference area used in calculating the aerodynamic forces. Both the nonspherical and rotation effects are ignored, as their effect on the dynamics is considered marginal [30]. During entry, there is often only one control parameter, the bank angle; thus, the spacecraft flies in an "S" or zigzag pattern so that it never deviates too far from its target laterally.
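As a minimal sketch of Equations (1) and (2), the state derivative can be assembled in one function (written here in Python/NumPy for illustration; the function name, argument order, and the numeric values in the usage note are this sketch's own choices, not taken from the paper):

```python
import numpy as np

# Sketch of the 3-DoF entry dynamics for a nonrotating, spherical planet,
# following Equations (1)-(2): state = [r, theta, phi, V, gamma, psi],
# sigma is the bank angle, and vehicle/planet constants are passed in so
# the same function serves Earth or Mars.
def entry_dynamics(state, sigma, m, S, CL, CD, rho, mu):
    r, theta, phi, V, gamma, psi = state
    g = mu / r**2                       # gravity from standard gravitational parameter
    q = 0.5 * rho * V**2 * S            # dynamic pressure times reference area
    D, L = q * CD, q * CL               # drag opposes velocity; lift perpendicular to it
    return np.array([
        V * np.sin(gamma),                                    # r-dot
        V * np.cos(gamma) * np.sin(psi) / (r * np.cos(phi)),  # longitude rate
        V * np.cos(gamma) * np.cos(psi) / r,                  # latitude rate
        -D / m - g * np.sin(gamma),                           # V-dot
        L * np.cos(sigma) / (m * V) + np.cos(gamma) * (V / r - g / V),  # gamma-dot
        L * np.sin(sigma) / (m * V * np.cos(gamma))
            + (V / r) * np.cos(gamma) * np.sin(psi) * np.tan(phi),      # psi-dot
    ])
```

With a descending flight path angle (negative $\gamma$), the altitude rate $\dot{r}$ is negative, as expected during entry.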

Classical guidance methods generally rely on a predetermined trajectory that the bank angle commands are generated to track. However, next generation EDL missions of low-lifting vehicles require that the control command be derived from onboard computation [31]. The objective of the present controller is to provide the bank angle commands to accurately land a Mars capsule based on flight-proven Apollo training sets. The trained neural network controller enables the guidance system to guide the vehicle without prestored reference trajectory, which is important for low-lifting vehicles with limited maneuverability. This algorithm is simple and efficient and shows robustness with respect to uncertainties in the initial states and atmospheric conditions in Mars atmospheric entry where the higher uncertainty presents a unique challenge for traditional controller design.

#### 3. Apollo: Training and Verification

For Earth entry, the controller used for Apollo [6] is implemented to validate the model and provide training data for the neural network. The details of the Apollo controller implementation in the Earth model were previously presented [28], and simulations showed flight characteristics similar to those in [6]. Consistent with Apollo guidance [6], the following assumptions are made: point mass, constant mass, nonrotating planet, no thrust, drag acting in the direction opposite to velocity, lift perpendicular to velocity, and gravity directed along a vector from the point mass to the center of the planet. The Earth's atmosphere is represented with the standard atmospheric model used in Apollo [32], which uses three separate curves for the troposphere, lower stratosphere, and upper stratosphere. Following Apollo control, linear time integration is used to produce the training data, while both linear and nonlinear integration methods are used in validation and application in order to test the robustness of the present controller.

##### 3.1. Training the Deep Neural Network Based on Apollo

Artificial neural networks (ANN) are computing systems with interconnected groups of nodes inspired by the biological neural networks in animal brains [33]. Deep learning is the application of multineuron, multilayer neural networks, which can perform regression, classification, clustering, and other tasks [34]. Since Hinton et al. published the famous deep belief network paper in 2006, deep learning technologies have shown their inherent capability of overcoming the drawback of traditional algorithms dependent on hand-designed features across disciplines [35, 36]. With machine learning techniques, computers can construct algorithms that learn from data and make data-driven decisions or predictions without being programmed explicitly [36]. Although recurrent neural networks (RNN) are commonly used in time series problems, our approach adopts a feedforward neural network, as it is simpler and more computationally tractable than an RNN for full simulations of the dynamics. In this work, the neural networks are used to perform regression.

Neural networks work by using a number of nodes and node connections, where each connection has a weight. The simplest form of neural network is a feedforward neural network, where for a given set of inputs there is an output without any loops or cycles. These neural networks are arranged in layers, where each layer contains a number of nodes. In neural networks, a sigmoid neuron, the most basic neuron, can be considered a single logistic node. Each one is connected to the input ahead of it, and a loss function is used to update the weights of the neuron as well as optimize the logistic fit to the incoming data [34]. In our training, we use backpropagation feedforward neural networks. The output of a given node, *y*, is obtained from

$$
y = f(a), \qquad a = \sum_{i=1}^{n} w_i x_i + b,
\tag{3}
$$

where $a$ is the net input, $x_i$ represents the $i$th input to the node, $w_i$ is the weight applied to that input, $b$ is the bias term for the node, $n$ is the number of inputs to the node, and $f$ is the hyperbolic tangent sigmoid activation function, used to fit the ranges of input and output [37, 38].
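The single-node computation of Equation (3) can be sketched in a few lines (the function name is this sketch's own; the hyperbolic tangent plays the role of the sigmoid activation named above):

```python
import numpy as np

# Minimal sketch of Equation (3): the net input a is the weighted sum of
# the inputs plus a bias, and the output y is the hyperbolic tangent
# sigmoid of a.
def node_output(x, w, b):
    a = np.dot(w, x) + b     # net input: a = sum_i w_i x_i + b
    return np.tanh(a)        # hyperbolic tangent sigmoid activation
```

The output is bounded in (-1, 1), which is why the inputs and targets are normalized to a matching range during training.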

The backpropagation learning law used to update the weights is also known as the "generalized delta rule," which is the gradient descent method. In Equation (4),

$$
\Delta w_{ji} = \eta \, \delta_j \, x_i,
\tag{4}
$$

$\Delta w_{ji}$ is used to update the weight between output node $j$ and input node $i$, $\eta$ is the learning rate, $\delta_j$ is the error of output node $j$, and $x_i$ is the $i$th input to node $j$. The commonly used loss function is the mean squared error function [37–39].
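As a minimal sketch of one delta-rule update for a single tanh output node under the mean squared error loss (the function name and the learning rate default are this sketch's own assumptions):

```python
import numpy as np

# One gradient-descent step of the generalized delta rule, Equation (4):
# delta is the node error (target minus output, scaled by the activation
# derivative) and eta is the learning rate, so Delta_w_i = eta * delta * x_i.
def delta_rule_step(x, w, b, target, eta=0.01):
    a = np.dot(w, x) + b
    y = np.tanh(a)
    delta = (target - y) * (1.0 - y**2)   # tanh'(a) = 1 - tanh(a)^2
    w_new = w + eta * delta * x           # weight update
    b_new = b + eta * delta               # bias update
    return w_new, b_new, y
```

A single step moves the node's output closer to the target, which repeated over the dataset is exactly the batch gradient descent used in training here.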

To start training the neural network, initial conditions for the nominal trajectory and physical characteristics obtained from the original Apollo report [6] are used, as shown in Table 2. Aerodynamic coefficients, including $C_L$ and $C_D$, are regarded as functions of Mach number, as recorded in real Apollo flight [1]. To obtain the training data, the Apollo simulation was conducted for 216 runs with a 500-meter variation in the $x$, $y$, and $z$ components of the initial position. The latitudes and longitudes of the landing locations are shown in Figure 1(a), and the missed distances from the target are shown in Figure 1(b). The filled red point is the intended target. From the plots, it is seen that all these points are close to the target and the missed distances are within the required range of the Apollo mission guidance system, about 27 km [6].

**(a) Latitude and longitude distribution**

**(b) Missed distance from the target**

Since our Apollo simulation with the traditional controller using linear integration showed behavior similar to the real Apollo mission [28], the training dataset collected from the simulation with varying initial positions can be regarded as representative of the Apollo controller's behavior under varying initial conditions. This dataset of 216 runs, comprising 1,669,934 points from the Apollo simulation, is then used to train the neural network to improve the robustness of the new neural network controller.

In order to select the most critical parameters for training, mutual information is utilized. In probability and information theory, mutual information is a measure of the mutual dependence between two random variables. The maximal information coefficient (MIC) is a measure that applies mutual information to continuous random variables using binning [40]. Utilizing the software package minepy [41], MIC is calculated for the training dataset. The results show the strength of the linear or nonlinear association between variables including time, acceleration, position, velocity, distance between the current position and the target, target location, and the bank angle. Five sets of 20000 randomly selected data points from the actual 216-run training trajectories are tested. The results are shown in Table 3.

Depending on the choice of dataset, the MIC values differ for the same combination of variables. However, time, distance, and velocity are more related to the bank angle, since they generally have higher MIC, while the MIC between acceleration and bank angle is always low. The target location by itself carries little information, but the distance to the target, which incorporates the target, can be used instead of position since the two have almost the same MIC values. As mentioned in Lu's paper [31], time is usually not considered a critical parameter in entry flight. Using mutual information as a reference, as well as training experience, we chose only the distances between the current position and the target in the $x$, $y$, and $z$ directions as inputs to the neural network in training.
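The paper computes MIC with minepy; as a self-contained illustration of the underlying idea, the following estimates mutual information between two continuous variables by fixed-grid binning (a simplification of the grid search MIC performs; the function name and bin count are this sketch's own):

```python
import numpy as np

# Binned mutual information estimate (in nats) between two continuous
# variables: discretize with a fixed 2-D histogram, then sum
# p(x,y) * log(p(x,y) / (p(x) p(y))) over occupied cells.
def binned_mutual_information(x, y, bins=16):
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                   # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

A deterministically related pair (e.g., a variable and a scaled copy of it) scores far higher than two independent samples, which is the kind of ranking used above to pick the distance components over acceleration.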

Based on the results obtained from MIC, the three inputs of the neural network represent the three components of the distance from the current position to the target. The output neuron represents the one control variable, the bank angle. In training, the deep neural network is set with 10 layers, including 8 hidden layers: the first layer includes the 3 inputs; the 8 hidden layers contain 512, 256, 128, 64, 32, 16, 8, and 4 nodes, respectively; and the final output layer has the single output, the bank angle $\sigma$. Figure 2 depicts the structure used in training the neural network, with 3 input nodes for the distances from the target and 1 output node for the bank angle. The structure was originally sized for up to 11 inputs and is constructed with a decreasing number of neurons on each layer, as is common in deep neural networks. The layer structure is kept the same in all trials in order to remain flexible with respect to a varied number of inputs.

The transfer functions connecting all layers except the last are hyperbolic tangent sigmoids; the final/output layer is connected to the previous layer via a linear transfer function. Gradient descent is used as the training algorithm, and the full dataset is fed into training as one batch. The weights and biases are updated in the direction of the negative gradient of the selected performance function, which is the mean squared error in this case. The goal is set to 0.01, the learning rate to 0.01, and 1000 epochs/iterations are conducted. Different results can be derived from different training sets with various numbers of runs and time steps. A 5-layer and a 7-layer neural network with fewer hidden layers and neurons were trained first. Numerical experiments showed that more neurons and more layers generally lead to better landing accuracy in application, at the expense of longer training time, with the number of neurons being the more important factor. Although increasing the number of layers and neurons can improve training accuracy, it may also lead to overfitting, which reduces the generalization capability in application. As a result, we finally adopted the 10-layer network with 512 neurons on the first hidden layer.
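The architecture described above can be sketched as a plain NumPy forward pass (randomly initialized weights stand in for the trained values here; the function names and the initialization scale are this sketch's own assumptions):

```python
import numpy as np

# The 10-layer structure described in the text: 3 inputs -> hidden layers
# of 512, 256, 128, 64, 32, 16, 8, 4 nodes with tanh activations -> 1
# linear output (the bank angle).  In the paper the weights are fit by
# batch gradient descent on the MSE loss (learning rate 0.01, 1000 epochs).
LAYER_SIZES = [3, 512, 256, 128, 64, 32, 16, 8, 4, 1]

def init_network(seed=0):
    rng = np.random.default_rng(seed)
    return [(rng.normal(scale=0.1, size=(n_out, n_in)), np.zeros(n_out))
            for n_in, n_out in zip(LAYER_SIZES[:-1], LAYER_SIZES[1:])]

def forward(params, x):
    h = np.asarray(x, dtype=float)
    for i, (W, b) in enumerate(params):
        h = W @ h + b
        if i < len(params) - 1:      # tanh on all but the output layer
            h = np.tanh(h)
    return float(h[0])               # linear output: the bank angle command
```

Counting the input and output layers, this is the 10-layer network with 9 weight matrices; the decreasing layer widths mirror Figure 2.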

The use of neural networks produces a controller that can learn to control a vehicle with a wide variety of vehicle characteristics in a variety of atmospheric conditions. For the simulations, it is assumed that the inputs and outputs in application are within the same range as inputs and outputs in training. As a result, the same normalization method is applied with the same maximum and minimum values. The neural network is applied as an alternative Apollo controller to test this new method and then adapted for Mars entry with more uncertainties.
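The normalization step mentioned above, reusing the training-set extrema at application time, can be sketched as a min-max map into the tanh range (function names are this sketch's own):

```python
import numpy as np

# Min-max normalization into [-1, 1], matching the tanh activation range.
# The same x_min/x_max recorded from the training data are reused at
# application time, as described in the text.
def minmax_scale(x, x_min, x_max):
    return 2.0 * (np.asarray(x, float) - x_min) / (x_max - x_min) - 1.0

def minmax_unscale(y, y_min, y_max):
    return (np.asarray(y, float) + 1.0) * (y_max - y_min) / 2.0 + y_min
```

The two functions are exact inverses, so a network output scaled back with the training extrema recovers a physical bank angle.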

##### 3.2. Validation of Apollo Reentry

Using the same initial conditions of the Apollo case in Table 2, the neural network control is applied to the Earth entry problem. Linear integration is implemented first, as in the Apollo mission, and then nonlinear integration is conducted. The trained deep neural network is used as a replacement for the original control block in this application. Figure 3 shows a simplified representation of how the neural network trained controller is called in the feedback loop to determine the bank angle input to the dynamics in Equation (1). The error is defined as the difference between the current state and the target state in Cartesian coordinates, $\mathbf{e} = \mathbf{r}_{target} - \mathbf{r}$, where $\mathbf{r}$ is the current position of the entry capsule. All other environmental parameters are kept the same; only the original control block is removed.
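The feedback loop of Figure 3 can be sketched as a simple stepping loop, with the dynamics and trained controller passed in as callables (the function name, the stand-in callables, and the assumption that the first three state components are the Cartesian position are this sketch's own):

```python
# Sketch of the closed loop: at each step the position error relative to
# the target is fed to the trained controller, which returns the bank
# angle used to advance the dynamics with a first-order (Euler) step.
def propagate_linear(state, r_target, dynamics, controller, dt=2.0, steps=100):
    history = [state]
    for _ in range(steps):
        error = [rt - rc for rt, rc in zip(r_target, state[:3])]  # Cartesian position error
        sigma = controller(error)                                 # bank angle command
        deriv = dynamics(state, sigma)
        state = [s + dt * d for s, d in zip(state, deriv)]        # first-order Taylor step
        history.append(state)
    return history
```

With trivial stand-in dynamics, the loop simply accumulates `dt * deriv` per step, which is the first-order propagation used in the linear application.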

First-order Taylor series approximation is used to propagate the dynamics, as was done in [6]. The details of the results with the 5-layer neural network and linear integration were discussed in [28]. The run of the 10-layer neural network controller with the same target gives the final state shown in Table 4. The 10-layer neural network leads to a landing as close as 9 km to the target in the Earth model. The simulation trials all include the special case of lateral switches as designed in the Apollo mission: in the lateral logic, when the roll angle is within 15 degrees, a constant 15-degree roll is commanded in response, as in Apollo reentry [6].

The flight characteristics and roll angle behavior of the 10-layer neural network controller are shown in Figure 4. Figure 4(a) closely matches the Apollo case with the original control block in [6]; however, the curves are smooth as a result of the neural network training. During the application with linear integration, different time step sizes are tried to find the best performance. The step size of 2 seconds leads to the best result since the computer logic of Apollo reentry is designed to update every 2 seconds [6].

**(a) Flight characteristics**

**(b) Roll angle**

Although the Apollo mission used linear integration, high-order numerical integration is adopted for the nonlinear entry dynamics in Equation (1) to achieve higher control accuracy and broader applicability. In controller applications with linear integration, the time step size is adjusted to meet the requirements following the Apollo program; however, skipping the dynamic details may lead to inaccurate results. To obtain more accurate performance and a more realistic simulation, the nonlinear dynamic model with higher order numerical integration (MATLAB ode45) is adopted.
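MATLAB's ode45 is an adaptive Runge-Kutta method; as a dependency-free stand-in showing how the nonlinear dynamics are advanced with high-order integration instead of the first-order Taylor step, here is a classical fixed-step RK4 stage (the function name is this sketch's own):

```python
import numpy as np

# Classical fourth-order Runge-Kutta step, a fixed-step stand-in for the
# adaptive RK45 used by MATLAB's ode45: four derivative evaluations are
# combined into one high-order update of y over dt.
def rk4_step(f, t, y, dt):
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt / 2 * k1)
    k3 = f(t + dt / 2, y + dt / 2 * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```

For a test problem with a known solution, such as exponential decay, one RK4 step matches the exact answer far more closely than a first-order step of the same size, which is the point of moving to higher-order integration here.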

The 10-layer neural network trained with 216-run dataset is used here since this one has the best performance with linear integration. Table 5 shows that the neural network trained controller successfully drives the spacecraft with nonlinear integration in the Earth model with a missed distance of only 7.7 km from the target.

Although only distances between current positions and the target are used as neural network inputs, velocity is another critical parameter in entry flight control. From the results, both linear and nonlinear applications have a final velocity of about 136 m/s, close to the final velocity of the entry phase in Apollo mission. This further validates the neural network method.

Monte Carlo simulations are conducted to validate the method for the nonlinear dynamics with the 10-layer neural network in the Earth model. 1000 runs are generated with a 3*σ* of 200 meters in the $x$, $y$, and $z$ directions. The results are shown in Figure 5. The statistics of performance and the confidence intervals are summarized in Tables 6 and 7, respectively.
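The Monte Carlo setup just described can be sketched as follows (the function name and seed handling are this sketch's own; the 3*σ* of 200 m corresponds to a standard deviation of 200/3 m passed to the normal sampler):

```python
import numpy as np

# Sketch of the Earth-model Monte Carlo: 1000 initial positions perturbed
# in x, y, and z with a 3-sigma dispersion of 200 m, i.e., a standard
# deviation of 200/3 m per axis.
def sample_initial_positions(nominal_xyz, three_sigma=200.0, n_runs=1000, seed=0):
    rng = np.random.default_rng(seed)
    return np.asarray(nominal_xyz) + rng.normal(0.0, three_sigma / 3.0,
                                                size=(n_runs, 3))
```

Each row is one run's initial position; propagating all 1000 through the closed loop produces the landing dispersion of Figure 5.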

**(a) Latitude and longitude distribution**

**(b) Missed distance**

The corresponding projections of the error ellipsoid are plotted in Figure 6 with the modified Johnson function [42].

**(a) $x$-$y$ projection**

**(b) $x$-$z$ projection**

**(c) $y$-$z$ projection**

The results show that the range error of the final landing point with nonlinear integration is within the 27 km range that is the design requirement for the Apollo capsule. The statistical error analysis shows that approximately 99% of the points are within the 3*σ* range. These results provide confidence in the neural network trained controller, which is then applied to the Mars entry dynamics.

#### 4. Mars Entry Neural Network-Based Control

The uncertainties in the Mars atmosphere model are much larger than those in the Earth model. Although several Mars atmospheric models are available, it is hard to find one that is sufficiently accurate. Considering reasonable entry conditions, such as the speed of sound, and a realistic Mars environment, an exponential atmospheric density model, Equation (5), is adopted, and the MSL mission environment is considered in this application [26, 43]. The density is calculated from the density on the surface of Mars, the radius of Mars, the constant scale height, and the current position:

$$
\rho = \rho_0 \exp\!\left(\frac{r_0 - r}{h_s}\right),
\tag{5}
$$

where $\rho$ is the density, $\rho_0$ (in kg/m^{3}) is the density on the surface of Mars, $r$ is the current position, $r_0$ (in km) is the radius of Mars, and $h_s$ (in m) is the constant scale height.
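The exponential density model can be sketched as a one-line function; the surface density, Mars radius, and scale height are passed in rather than hard-coded, and the numeric values in the test below are illustrative assumptions, not the paper's:

```python
import numpy as np

# Exponential Mars atmospheric density model, Equation (5):
# rho = rho0 * exp((r0 - r) / hs), where rho0 is the surface density,
# r the current radial position, r0 the Mars radius, hs the scale height.
def mars_density(r, rho0, r0, hs):
    return rho0 * np.exp((r0 - r) / hs)
```

At $r = r_0$ the model returns the surface density, and the density falls by a factor of $e$ for every scale height of altitude.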

Because of the complexity and uncertainty of the current Mars atmosphere model, a constant temperature of 170 K is assumed during the entry process in this application. In the trials, the effect of temperature variation is small enough to be neglected. MSL flight data analysis showed that the temperature varied between about 150 K and 200 K, which makes this assumption reasonable [43].

The speed of sound is calculated under the ideal gas assumption. The MSL mission parameters and the MSL-based simulation are used as references when the trained neural network is applied to the Mars model, since the MSL capsule is also a low-lifting vehicle with an L/D ratio close to Apollo's [26, 43, 44]. Table 8 shows the initial conditions and physical characteristics of the Mars capsule from the simulation of Li and Jiang and the MSL report [26, 43]. Aerodynamic coefficients are set to the same functions as in the Apollo simulation. Velocity is used as the trigger to switch from the entry phase to the descent phase at a reasonable height, as it is related to dynamic pressure. The trigger value comes from research on future Mars human missions considering the parachute design and other factors [45].
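Under the ideal gas assumption, the speed of sound follows $a = \sqrt{\gamma R T}$ with the constant 170 K temperature assumed above; the ratio of specific heats and specific gas constant used below are this sketch's own assumptions for a CO2-dominated Mars atmosphere, not values from the paper:

```python
import numpy as np

# Ideal-gas speed of sound, a = sqrt(gamma * R * T).  The defaults
# (gamma ~ 1.29, R ~ 189 J/(kg K)) are illustrative assumptions for a
# CO2-dominated atmosphere; T = 170 K is the constant entry temperature
# assumed in the text.
def speed_of_sound(T=170.0, gamma=1.29, R=189.0):
    return np.sqrt(gamma * R * T)
```

The resulting value, roughly 200 m/s, is what converts the entry velocity into the Mach number used to look up the aerodynamic coefficients.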

The same 10-layer neural network controller is applied in the Mars model with linear integration first. The reachable target is chosen based on the input and output ranges of the training set. The final state is shown in Table 9.

The flight characteristics and roll angle behavior of the 10-layer neural network controller with linear integration are shown in Figure 7.

**(a) Flight characteristics**

**(b) Roll angle**

It is seen that, with the trained neural network controller, the capsule can land within the 27 km track-and-range requirement of the Apollo guidance system [6]. The 10-layer neural network controller achieves a successful landing in the Mars model with an error of 20 km, the best performance obtained with linear integration so far. Unlike in the Earth model, where the Apollo reference fixes the update period, the best time step size of the computer logic in the Mars model remains to be investigated.

The 10-layer neural network controller is then applied to Mars nonlinear entry dynamics (Equation (1)). The performance of the 10-layer neural network controller in the Mars model is shown in Table 10.

The final landing location is still close to the target considering the scale of the problem, with a missed distance in a reasonable range of about 20 km. The final positions with the same target are almost the same for linear and nonlinear integration.

Monte Carlo simulations are conducted to validate the nonlinear method with the 10-layer neural network in the Mars model. 1000 runs are generated with a 3*σ* of 500 meters in the entry state $x$, $y$, and $z$ directions, and another 1000 runs vary the density with a 3*σ* of 10%. The results for varying initial positions are shown in Figure 8, and the results for varying density are plotted in Figure 9. The statistics of the neural network performance and the confidence intervals are summarized in Tables 11 and 12, respectively. The corresponding projections of the error ellipsoids are shown in Figures 10 and 11, respectively, utilizing the modified Johnson function [42].
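The two Mars Monte Carlo campaigns can be sketched together (the function name and seed handling are this sketch's own; the 3*σ* values map to standard deviations of 500/3 m per axis and 0.10/3 on the density multiplier):

```python
import numpy as np

# Sketch of the Mars Monte Carlo dispersions: 1000 entry states with a
# 3-sigma of 500 m in x, y, z, plus 1000 density multipliers with a
# 3-sigma of 10% about the nominal exponential density model.
def sample_mars_dispersions(nominal_xyz, n_runs=1000, seed=0):
    rng = np.random.default_rng(seed)
    positions = np.asarray(nominal_xyz) + rng.normal(0.0, 500.0 / 3.0,
                                                     size=(n_runs, 3))
    density_factors = rng.normal(1.0, 0.10 / 3.0, size=n_runs)  # multiplies rho
    return positions, density_factors
```

Each perturbed position seeds one entry-state run, and each density factor scales the nominal density profile for one atmospheric-uncertainty run.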

**(a) Latitude and longitude distribution**

**(b) Missed distance**

**(a) Latitude and longitude distribution**

**(b) Missed distance**

**(a) $x$-$y$ projection**

**(b) $x$-$z$ projection**

**(c) $y$-$z$ projection**

**(a) $x$-$y$ projection**

**(b) $x$-$z$ projection**

**(c) $y$-$z$ projection**

The red dot shows the actual landing location of the nominal trajectory. From the plots, the missed distances mostly fall around 27 km, which is acceptable in the Mars environment, and the final points are well distributed.

From the statistics, both Monte Carlo simulations have good performance. The standard deviations are relatively small and the means as well as the medians are close to the missed distance of the nominal trajectory. The results show that about 98% of the points fall into the 3*σ* range of error.

Compared to the original uniform distribution of the Apollo dataset with the same variation in initial position (Figures 1(a) and 1(b)), the Monte Carlo results show similar distributions. Although all environmental parameters are changed to the Mars atmosphere and the uncertainty is increased, these results show that the neural network controller can still work on Mars and land the capsule on target within the desired range. Considering that the controller is trained by Earth reentry data only, this flexibility is notable, as it provides a new approach to control block design for new missions.

#### 5. Discussion

The results presented show that the new method works in both Earth and Mars models, with missed distances within the design requirement of the Apollo capsule. This is a more accurate landing than most previous Mars missions achieved. The method is also much simpler, since the neural network can be trained on Earth before the mission and needs only positions relative to the target as inputs. Because the deep neural network is trained with data from an Apollo simulation using the algorithm flown on real Apollo missions, the simulations are more practical than idealized setups. Additionally, the presented results show the applicability of the trained controller to the full nonlinear dynamics, achieving accurate results for both Earth and Mars applications.

In order to improve training performance, the training set differs from the reference and the application case in certain aspects. To obtain more data points and help the neural network learn how to react in all possible situations, the time step size used in the training set is as small as 0.1 seconds. In addition, the special case of lateral switches in the final phase is removed from the training set, since the roll angle changes dramatically at switch points and is hard for the neural network to capture; it is added back later in the Earth application. For Mars, the special case was not used, and the controller output was applied directly as computed by the neural network.

Several sources of error affect the results presented. The most important is the lack of real data. Although the Apollo results are based on the real Apollo reentry algorithm, which makes this research more practical, not all required information is available; sensors and the average-g method are not applicable in this simulation. Additionally, the maximum and minimum values of the inputs and outputs used in normalization come from the Earth training data because no real Mars data are available in this case. Ideally, the reachable targets on Mars should be chosen based on Mars data.

Unlike first-generation controllers that rely on prestored trajectories, third-generation controllers have an advantage in real-time implementation. Owing to the nature of neural networks, stability is difficult to prove formally. However, because the neural network controller is trained on Apollo flight data, it implicitly inherits the stability properties of the Apollo controller, which accommodates biases in the input signals to the steering equations and allows the vehicle to alternate between 0- and 180-degree roll angles [6].

More work is needed to determine the feasibility of neural networks for future missions. A map of initial positions covering the whole Earth can be developed to obtain new training datasets, and grid training based on this map would allow any point to serve as the initial position with a corresponding trained neural network. More accurate atmospheric models, such as Mars-GRAM [46], can be used to produce more representative results for the Mars environment. The terminal state, including final velocity and altitude, can be optimized for specific mission requirements and design limits in coordination with other subsystems. Beyond adding complexity, transfer learning [47] is strongly considered, since it may yield better results in the Mars model with less time and investment. We conjecture that additional data from Mars missions, albeit limited, can significantly improve the landing accuracy and overall flight performance of the spacecraft. Transfer learning can also be applied to missions with different vehicle types: the corresponding neural network would be trained from the base network of the low-lifting vehicle with a limited amount of flight data from the new vehicle.

#### 6. Conclusion

Although the Mars entry problem is not a new area of research, applying a deep neural network as the main controller, rather than using it to approximate parameters, is, to the best of our knowledge, novel. In this research, a dataset of 216 runs comprising 1,669,934 points from the Apollo simulation is selected to train the neural network. Mutual information is calculated to select the input variables, which are the distances from the current position to the target in Cartesian coordinates. A neural network with 10 layers and 512 neurons in the first layer achieves the best performance in application. It replaces the original control block and lands the capsule on Earth within about 9 km of the target with linearized dynamics and within 8 km for the full nonlinear 3-DoF dynamics. Because Apollo is a low-lifting vehicle, the trained neural network is intended for application to similar vehicles. The network is then adapted to the Mars model and lands the low-lifting capsule within about 20 km of the target for both linearized and nonlinear dynamics. The statistical error analysis shows the robustness of this method to uncertainties in the initial states and atmospheric conditions, with about 99% and 98% of points within acceptable range errors for the Earth and Mars applications, respectively. The results show that a deep neural network trained with Apollo data can replace the traditional control block to land the capsule on target during atmospheric entry. The successful performance under high-order numerical integration shows the potential of the present algorithm to handle modeling uncertainties as well as parameter and state uncertainties, as demonstrated by the density and initial-state variation results.
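The controller described above (three Cartesian distances to the target in, one bank angle command out, through a 10-layer network with 512 neurons in the first layer) can be sketched as a plain forward pass. This is a stand-in with randomly initialized weights, not the trained controller: the tanh activation and the tapering widths after the first layer are assumptions, since the paper specifies only the input choice, the depth, and the first-layer width.

```python
import numpy as np

def build_mlp(rng, widths):
    """Random-initialized weights for a fully connected network; a stand-in
    for the trained controller (training code not shown)."""
    return [(rng.standard_normal((m, n)) * np.sqrt(2.0 / m), np.zeros(n))
            for m, n in zip(widths[:-1], widths[1:])]

def bank_angle_command(params, dx):
    """Forward pass: three Cartesian distances to the target -> one bank
    angle command. Hidden activation (tanh) is an assumption here."""
    h = np.asarray(dx, dtype=float)
    for W, b in params[:-1]:
        h = np.tanh(h @ W + b)
    W, b = params[-1]
    return float(h @ W + b)  # scalar bank angle, before any command limiting

# 10 layers: 3 inputs -> 512 -> ... -> 1 (intermediate widths assumed)
widths = [3, 512, 256, 256, 128, 128, 64, 64, 32, 16, 1]
```

Because the forward pass is a fixed sequence of small matrix products, it runs well within a guidance cycle, which is what enables the online, trajectory-free operation discussed below.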

The deep neural network can serve as a powerful controller to land the capsule on target during atmospheric entry. Its flexibility is evident in the fact that it is trained only on Earth reentry data yet works in both Earth and Mars environments with a similar vehicle. Furthermore, only the position with respect to the target is required as input, so the network can be trained and implemented efficiently. This new method enables the entire bank angle search to be completed online without any reference trajectory, even in environments with large uncertainties. The present method is beneficial for future mission development with limited time and budget and can serve as a good baseline for future entry algorithms. It shows strong promise for future missions with large uncertainties, as it is simple, efficient, and flexible and meets online computation requirements.

#### Data Availability

Data is available on request.

#### Conflicts of Interest

The authors declare that they have no conflicts of interest.

#### Acknowledgments

Article processing charges were provided in part by the UCF College of Graduate Studies Open Access Publishing Fund.