Abstract

An integrated estimation/guidance law is designed for exoatmospheric interceptors equipped with divert thrusters and optical seekers to intercept maneuvering targets. This paper considers an angles-only guidance problem for exoatmospheric maneuvering targets. A bounded differential game-based guidance law is derived against maneuvering targets using the zero-effort-miss (ZEM). Estimators based on the extended Kalman filter (EKF) and the unscented Kalman filter (UKF) are designed to estimate LOS rates that are contaminated by noise and target maneuver. Furthermore, to improve the observability of the range, an observability enhancement differential game guidance law is derived. The guidance law and the estimator are integrated in the guidance loop. The proposed integrated estimation/guidance law has been tested in several three-dimensional nonlinear interception scenarios. Monte-Carlo simulations demonstrate the validity and superiority of the proposed guidance law in hit-to-kill interception.

1. Introduction

The problem of exoatmospheric interception has been studied for decades. Clearly, the interceptor must hit and kill the target at the end of the terminal phase. Both the midcourse and terminal guidance phases take place when the interceptor is outside the atmosphere [1, 2]. The major difference between these two phases is that the guidance law in the midcourse phase is executed using a ground-based long-range radar (LRR) station and the interceptor has not locked on to the target, while in the terminal phase the interceptor uses an onboard seeker to guide itself after locking on to the target. That is, the way target information is obtained differs between the phases, so the guidance laws differ as well.

Is there a single guidance law that can be used during both phases? In [3], the authors discussed this problem. In the terminal phase, the relative velocity is large while the available acceleration is relatively small, which makes the interception sensitive to the LOS rates. Consequently, the midcourse law directed from the ground station must reduce the LOS rates as much as possible before a required range is reached.

An exoatmospheric interceptor is usually equipped with a strapdown optical seeker that can measure only LOS angles [4]; information relating to relative range and velocity is not measured. Moreover, the optical seeker's measurements of LOS angles are nearly always corrupted by noise, resulting in inaccurate LOS rate information. Using data from the ground-based station during the midcourse phase, the interceptor can obtain the position, velocity, and even the acceleration of the target. The more information a guidance law requires, the more it must rely on such externally supplied data. For the reasons mentioned above, the designs of the guidance laws during the two phases are different. In [3], the interceptor uses a thrust vector control (TVC) system to adjust its flight path during the midcourse phase. The TVC system can provide axial acceleration for the interceptor. Because of the burning time limit, however, the TVC system can work only during a certain period of time, after which axial acceleration is no longer available, and the interceptor can use only a Divert Control System (DCS) [5] to steer itself in order to hit the target. Unlike TVC, DCS does not provide axial acceleration. The difference between TVC and DCS results in corresponding differences in the guidance laws applied. The "terminal" part of the terminal phase is the so-called "endgame" phase [6], and in this phase the interceptor must rely on the information from the onboard seeker for guidance to hit the target.

Differential game theory is a natural setup for pursuit-evasion problems [7]. The most common pursuit-and-evasion game, called a "zero-sum differential game," deals with two adversaries and a terminal cost function. In generating guidance laws, it is common practice to linearize with respect to a collision course, which implies linearized kinematics. There are two versions of the game [8]: the first is the "linear quadratic differential game" (LQDG), and the second is the "norm differential game" (NDG). In LQDG, the controls are unbounded and the cost function is the weighted sum of three quadratic terms: the square of the miss distance and two penalty terms that represent the integrals of the respective control energy of the players. The optimal solution of this formulation is linear. In NDG, on the other hand, the controls are hard bounded and the cost is purely terminal, reflecting the requirement imposed on the miss distance. Contrary to LQDG, the optimal strategies in NDG are nonlinear; at a certain time before termination, the guidance law becomes pure bang-bang. In exoatmospheric interception, the divert thrusters are "on-off" devices, so the guidance commands from NDG are suitable for this kind of actuator. The guaranteed miss distance [9] for an interceptor must be very small for hit-to-kill, especially against evasive maneuvers. NDG uses the miss distance at the terminal time as its cost function, which raises the related problem of estimating the time-to-go; this has been studied in [3, 10, 11]. Those works present an exoatmospheric guidance law based on a bounded differential game with a fourth-order time-to-go estimate; when both the interceptor and the target play the optimal strategies in the game, the guidance commands do not chatter and the directions of the optimal accelerations are constant. However, this kind of guidance is not suitable for an interceptor equipped with lateral thrust, and a first-pass phenomenon will occur if the initial condition is not perfect.

Another problem that must be considered when using an onboard optical seeker in the terminal phase is angles-only guidance [4, 5, 12-14]. The implementation of advanced laws such as augmented proportional navigation (APN) and the optimal guidance law (OGL) requires that the guidance system be provided with the time-to-go [15]. In the case of angles-only measurements, however, the range and range rate cannot be measured directly; thus, the time-to-go needs to be estimated from noisy data, which decreases the performance of the advanced laws and makes bearings-only or angles-only estimation another problem in exoatmospheric interception [4, 6, 13]. A hybrid Kalman filter presented in [4, 6] used both Cartesian and spherical states to minimize estimation errors: the relative position and velocity estimates were propagated in Cartesian coordinates, while the measurement updates used spherical coordinates. In [13], a differential game guidance law with bearings-only measurements was derived for two-dimensional interception, an estimator based on an EKF was designed in the guidance loop, and both a deterministic and a stochastic option were presented. An optimal guidance-to-collision law for an accelerating exoatmospheric interceptor was studied in [14], where the guidance law was based on LQDG and an EKF was used as the estimator in the guidance loop.

An integrated estimation/guidance (IEG) algorithm has been presented for maneuvering target interception [16, 17]. These guidance laws are designed by taking into consideration the estimation delay of the maneuvering target. A ZEM-based integrated estimation and guidance law for an endoatmospheric interceptor has been presented [18, 19], in which the ZEM components and the time-to-go are estimated in the loop, and the estimation and guidance work in unison. This IEG approach has been found to be very effective in engaging both conventional maneuvering aircraft and incoming high-speed ballistic missiles. In [20, 21], an IEG strategy is proposed that combines an interactive multiple model (IMM) estimator with a differential game guidance law (DGGL) for a realistically modeled seeker-less interceptor. The interceptor is not equipped with a seeker, so target information comes from the ground-based station and guidance commands are transmitted from the station to the interceptor using three-point guidance.

In this paper, we study a maneuvering target interception problem for an exoatmospheric interceptor equipped with an angles-only measurement seeker and lateral thrusters. An angles-only measurement guidance problem is considered for an exoatmospheric maneuvering target that performs a bang-bang evasion maneuver. To solve this problem, an integrated estimation/guidance law that relies on an angles-only nonlinear filter is derived; furthermore, the traditional differential game guidance law is modified to enhance the observability of the range. With passive measurements, nonlinear filters based on the EKF and the UKF are used to estimate LOS angles, LOS rates, and target maneuver acceleration for both nonmaneuvering and maneuvering targets. The integrated estimation/guidance law combines the estimator and the guidance law and is effective for exoatmospheric maneuvering target interception. The paper is organized as follows. Section 2 presents the engagement formulation used for guidance law derivation and simulations. A guidance law based on a bounded differential game is discussed in Section 3. In Section 4, an angles-only estimation problem is studied: the system and measurement model in spherical coordinates is derived, and two nonlinear filters, EKF and UKF, are used to design the estimator. The integrated estimation/guidance law is presented in Section 5. Section 6 presents the nonlinear simulation results and is followed by conclusions in Section 7.

2. Problem Formulation

2.1. Vector in Inertial Coordinates

Consider two players, an interceptor M and a target T, moving in exoatmospheric space. Let $\mathbf{r}_M$, $\mathbf{r}_T$ and $\mathbf{v}_M$, $\mathbf{v}_T$ be the position and velocity vectors of the objects in inertial coordinates, respectively. Let $\mathbf{a}_M$ and $\mathbf{a}_T$ be the interceptor and target acceleration vectors, respectively, and $\mathbf{g}_M$ and $\mathbf{g}_T$ the gravitation vectors of the two players. Then,
$$\dot{\mathbf{r}}_i = \mathbf{v}_i, \qquad \dot{\mathbf{v}}_i = \mathbf{a}_i + \mathbf{g}_i, \qquad i \in \{M, T\}.$$

Define the relative position and velocity vectors $\mathbf{r} = \mathbf{r}_T - \mathbf{r}_M$ and $\mathbf{v} = \mathbf{v}_T - \mathbf{v}_M$. Then, assuming that the difference between the gravitation vectors is negligible ($\mathbf{g}_T \approx \mathbf{g}_M$), the relative kinematic equations can be written as
$$\dot{\mathbf{r}} = \mathbf{v}, \qquad \dot{\mathbf{v}} = \mathbf{a}_T - \mathbf{a}_M.$$

In exoatmospheric interception, the acceleration is usually produced by divert motors, a process that has its own dynamics. We assume that these dynamics can be represented by linear equations of arbitrary order,

where the state vector describes the internal dynamics of the acceleration in each axis of the inertial frame, and the acceleration command is assumed to be limited in magnitude,

The acceleration vector can then be represented as an output of this internal state.

Define the state vector and the command vectors; the engagement can then be written in a compact linear state-space form with appropriately defined system and input matrices.

With the dynamics and constraints above, we associate a terminal cost with the final time $t_f$. In particular, J is a terminal cost to be minimized by the interceptor's command and maximized by the target's command.

2.2. Spherical Coordinates

It is convenient to use spherical coordinates to describe the motion of the two players when the seeker is used in the endgame. Consider the LOS spherical coordinates consisting of the relative distance, the elevation and azimuth line-of-sight angles, and their rates. In the inertial frame, the relative range is the norm of the relative position vector; then

Using the vector derivation method, the relative equations of motion in LOS coordinates can be obtained, in which the line-of-sight rates and the actual accelerations of the interceptor and the target are expressed in the LOS frame. The engagement geometry and the relationships are shown in Figure 1.
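For reference, a standard form of these LOS-frame relative equations is sketched below, written with assumed symbols ($r$ for range, $\varepsilon$ for elevation LOS angle, $\beta$ for azimuth LOS angle, and $a_{M\cdot}$, $a_{T\cdot}$ for the interceptor and target acceleration components in the LOS frame); the paper's own equation (13) may use different notation.

```latex
\begin{aligned}
\ddot{r} &= r\dot{\varepsilon}^{2} + r\dot{\beta}^{2}\cos^{2}\varepsilon + a_{Tr} - a_{Mr},\\
\ddot{\varepsilon} &= -\frac{2\dot{r}\dot{\varepsilon}}{r} - \dot{\beta}^{2}\sin\varepsilon\cos\varepsilon + \frac{a_{T\varepsilon} - a_{M\varepsilon}}{r},\\
\ddot{\beta} &= -\frac{2\dot{r}\dot{\beta}}{r} + 2\dot{\varepsilon}\dot{\beta}\tan\varepsilon + \frac{a_{T\beta} - a_{M\beta}}{r\cos\varepsilon}.
\end{aligned}
```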

2.3. Estimation Model

In a real-world implementation of guidance algorithms, an estimator is required to reconstruct the parameters that cannot be measured directly. In this particular study, the interceptor measures only the LOS angles, and the range cannot be measured directly. The estimated state is defined as

Interceptor-related parameters are not estimated, because they are measured directly and are thus assumed to be known.

It is assumed that the target performs a constant maximum maneuver above the atmosphere using its divert motors. For this kind of maneuver, Gaussian white noise is used to represent the target maneuver; its power spectral density is determined by the maneuver level and the flight time during the terminal phase.

The complete model for estimation is therefore

2.4. Integrated Estimation/Guidance Problem

Modern advanced guidance laws such as augmented proportional navigation (APN) and differential game-based laws require an accurate estimate of the time-to-go. In this study, the time-to-go is defined as

In bearings-only tracking problems, the relative range and velocity cannot be measured directly; thus, the time-to-go must be estimated by the guidance filters using the estimation model (15). In fact, there is a conflict between the optimal guidance law and the range estimation: the guidance law nulls the LOS rates, but small LOS rates lead to poor range estimation. To resolve this conflict, the guidance strategy must be designed to enhance the range observability during the engagement; this guidance law is derived in Section 5.
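As a minimal illustration, the time-to-go could be computed from the estimated range and range rate as in the sketch below, assuming the common definition in which the closing velocity is the negative range rate; the names and the definition itself are assumptions for illustration, not necessarily the paper's equation.

```python
import numpy as np

def time_to_go(r_est: float, r_dot_est: float, eps: float = 1e-3) -> float:
    """Estimate the time-to-go from the estimated range and range rate.

    Assumes the common definition t_go = r / Vc, with closing velocity
    Vc = -r_dot (positive when the two players approach each other).
    """
    v_c = -r_dot_est                  # closing velocity
    if v_c <= eps:                    # diverging or nearly parallel motion
        return np.inf                 # no meaningful time-to-go
    return r_est / v_c

# Example: 10 km range, closing at 5 km/s -> t_go = 2 s
print(time_to_go(10_000.0, -5_000.0))
```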

3. Guidance Laws Based on Differential Game

In this section, a zero-sum pursuit-evasion differential game with bounded controls is solved. The norm differential game formulation was described in (8). The zero-effort-miss (ZEM) is defined first, the general solution of the differential game is then analyzed, the game space for the interception is examined, and finally the optimal guidance strategy is derived.

3.1. Zero-Effort-Miss

Define the ZEM vector as follows:

Then, the derivative with respect to time of the new state vector Z(t) is

where the transition matrix corresponds to the matrix A in (9).

With this new ZEM variable, the cost function from (8) can also be expressed using only the new state vector Z(t) as

Note that, besides reducing the order of the problem, the ZEM variable has an important physical meaning: it is the miss distance that would result if both the interceptor and the target nulled their controls from the current time onward. For arbitrary-order players, the ZEM vector and its derivative can be expressed using the inverse Laplace transform operator. For high-order players, the resulting expression is complex; thus, the dynamics of the players are assumed to be of zero or first order. For ideal players, the ZEM is

For first-order players, the ZEM also involves the time constants of the interceptor and the target, respectively.
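For reference, the widely used closed-form ZEM expressions for these two cases are sketched below with assumed symbols (relative position $\mathbf{r}$, relative velocity $\mathbf{v}$, time-to-go $t_{go}$, lateral accelerations $\mathbf{a}_M$, $\mathbf{a}_T$, and time constants $\tau_M$, $\tau_T$); these are standard results, not copied from the paper's equations.

```latex
% Ideal (zero-order) dynamics:
\mathbf{Z}(t) = \mathbf{r} + \mathbf{v}\,t_{go}

% First-order dynamics, with \psi(\theta) = e^{-\theta} + \theta - 1:
\mathbf{Z}(t) = \mathbf{r} + \mathbf{v}\,t_{go}
  + \mathbf{a}_{T}\,\tau_{T}^{2}\,\psi\!\left(\tfrac{t_{go}}{\tau_{T}}\right)
  - \mathbf{a}_{M}\,\tau_{M}^{2}\,\psi\!\left(\tfrac{t_{go}}{\tau_{M}}\right)
```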

The ZEM vector above is based on the position and velocity vectors in the inertial frame. However, in angles-only guidance, the onboard seeker works in the LOS frame, so the target information is no longer given as positions and velocities in the inertial frame. Thus, it is convenient to use spherical coordinates to describe the ZEM vector. The components of the ZEM vector in LOS spherical coordinates for ideal dynamics can be expressed as

3.2. General Solution of the Differential Game

Assuming ideal players, the norm of the ZEM will be differentiated with respect to t. The norm is

Differentiating with respect to t, we obtain

Therefore, the optimal controllers are

It is interesting that the optimal control directions for both players are the same. It is also obvious that when the interceptor uses the optimal strategy to pursue the target, the target will maneuver in the same direction to counteract the decrease in the miss distance.
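A minimal sketch of these saddle-point strategies is given below, under the standard result that both players apply their maximum acceleration along the current ZEM direction; the function and variable names are illustrative, not the paper's code.

```python
import numpy as np

def optimal_commands(zem: np.ndarray, u_max: float, v_max: float):
    """Saddle-point strategies of the bounded differential game.

    Both players accelerate along the current ZEM direction: the
    interceptor to drive the ZEM toward zero, the target to keep it
    as large as possible, so the two command directions coincide.
    """
    norm = np.linalg.norm(zem)
    if norm < 1e-9:                  # ZEM already (numerically) null
        return np.zeros(3), np.zeros(3)
    direction = zem / norm
    u_star = u_max * direction       # interceptor command
    v_star = v_max * direction       # target command
    return u_star, v_star
```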

Substituting the optimal strategies into the derivative gives

Integration yields

3.3. Game Space Decomposition

For ideal players in the game, the saddle point depends only on the maneuverability difference between the two players. In a real case, however, the dynamics sometimes cannot be ignored, especially in the endgame. To see the effect of the dynamics, it is assumed that the dynamics reduce to first order with a given time constant.

Thus, the ZEM vector for the first-order dynamics can be rewritten as

And its derivative is

Thus

Depending on the assumptions about the adversaries' dynamics and maneuverability, a number of game space structures can be described, but many of them assume endoatmospheric conditions with aerodynamic control. In exoatmospheric interception, the lags of the thrusters are much smaller than those of aerodynamic surfaces; thus the time constant is small.

The game space can be divided into two parts: the capture zone (below the optimal trajectory) and the avoidance zone (above the optimal trajectory). In the capture zone, the optimal strategies are arbitrary, while the avoidance zone is the region in which the optimal strategies are those calculated in (27). The value of the game is a function of the initial conditions, assuming both the interceptor and the target null their control commands from the current time to the end of the engagement.

A game space example is given in Figure 2. The capture zone is a singular region of the space where the optimal strategies are arbitrary. The avoidance zone is the region where the optimal strategies are given by (27). When the acceleration bounds of the two players are given, the optimal strategies of each player are determined.

3.4. Differential Game Guidance

The optimal strategies of each player in the game are shown in (27), where the optimal command is based on the estimated ZEM vector. Under imperfect observations, the ZEM estimation error reduces the performance of the guidance law, and the optimal guidance law cannot be achieved. Still, relying on the certainty equivalence principle, it is common practice to use this kind of law in a stochastic setting. Consider the ideal case in which both players have perfect information and ideal dynamics. The optimal command is then pure bang-bang, and its direction depends on the sign of the LOS rates. Taking the elevation channel as an example, the ZEM of this channel can be obtained from (25), and the optimal guidance strategy can be represented as

During the interception, the sign of the optimal strategy depends only on the sign of the LOS rate; thus,

Consider the situation in which the target performs the optimal evasion; its acceleration is in the same direction as the interceptor's, and the optimal pursuit-evasion dynamics can be obtained as follows. In exoatmospheric conditions, the closing velocity can be assumed to be constant, so that (36) can be written as

Considering that, in exoatmospheric conditions, this term is assumed to be constant, the ODE obtained above has the following solution:

Substituting (38) into (37), we obtain an expression in terms of the flight time and the initial LOS rate. By using the two-sided optimal strategies, the function can be expressed as

In the first case, one has

In the other case, one obtains the corresponding expression, so that the unified form is

Multiplying both sides of (42) by the relative range gives

Equation (43) gives the lower bound of the relative acceleration that can guarantee a desired miss distance. Comparing (43) with (29) indicates that the differential game guidance law has the same form in Cartesian and spherical coordinates.
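A minimal sketch of the resulting angles-only bang-bang command in the two lateral channels is given below, assuming that each channel's command sign follows the sign of its LOS rate, as derived above for the elevation channel; the names are illustrative.

```python
import numpy as np

def dg_guidance_command(eps_rate: float, beta_rate: float, a_max: float):
    """Bang-bang differential game command from LOS rates only.

    Each lateral channel applies the maximum divert acceleration in
    the direction indicated by the sign of its LOS rate, consistent
    with the elevation-channel strategy discussed above.
    """
    a_eps = a_max * np.sign(eps_rate)    # elevation-channel command
    a_beta = a_max * np.sign(beta_rate)  # azimuth-channel command
    return a_eps, a_beta
```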

4. Angles-Only Estimation in Guidance Loop

The differential equations that fully describe the 3D engagement in the LOS frame are given in (13). The dynamics of both players are ideal. Real-world implementation of guidance laws requires an estimator to reconstruct parameters that cannot be measured directly. In angles-only guidance, the range and the relative velocity are unavailable. At the beginning of the terminal phase, the ground-based station can send position and velocity information about the target to the interceptor, so the initial values of the relative distance and velocity are given. Thereafter, the range and the velocity are estimated by an estimator. At the same time, because the measured angles are disturbed by noise, the angular rates cannot be obtained by direct differencing; they must be estimated by a filter. In this particular study, two nonlinear Kalman filters are employed: the EKF and the UKF. The complete model for estimation is represented in (15).

4.1. Measurement Model

The interceptor is assumed to be equipped with an electro-optical seeker. The noisy LOS angles are measured at a fixed sample period, and the measurement is assumed to be contaminated by zero-mean white Gaussian noise; the measurement equation is written in terms of the estimated state described in (14) and the appropriate measurement matrix.

4.2. EKF in Guidance Loop

The filter model is nonlinear; thus, a nonlinear filter should be designed in the guidance loop. The EKF is a practical estimator in homing guidance. It requires the transition matrix computed from (15):

where the Jacobian matrix can be calculated as

Note that if we ignore the coupling between the azimuth and elevation channels under the small-LOS-angle assumption, the transition matrix simplifies to (47).

The error covariance matrix of the EKF at the kth iteration is calculated in two steps:

The covariance prediction evolves according to the transition matrix. The correction step updates the predicted covariance using the Kalman gain and the measurement matrix. From (46), the only target measurements available to the interceptor are the two bearing angles (elevation and azimuth).

The residual covariance matrix is

The Kalman gain can be computed as

The updated state estimate of the filter is
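For concreteness, the sketch below shows one discrete-time EKF predict/update cycle of the kind described in this subsection, assuming a generic discrete process model and an angles-only (elevation, azimuth) measurement; the function and variable names are illustrative placeholders, not the paper's implementation.

```python
import numpy as np

def ekf_step(x, P, f, F_jac, h, H_jac, Q, R, z):
    """One EKF predict/update cycle for angles-only homing.

    x, P     : prior state estimate and covariance
    f, F_jac : discrete process model and its Jacobian
    h, H_jac : measurement model (elevation, azimuth) and its Jacobian
    Q, R     : process and measurement noise covariances
    z        : measured LOS angles [elevation, azimuth]
    """
    # Prediction
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q

    # Update with the angles-only measurement
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R              # residual covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```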

4.3. UKF in Guidance Loop

The UKF, like the EKF, is an approximate filtering algorithm, but instead of using a linearized approximation, it uses the unscented transform (UT) to approximate the moments. This approach has two advantages over linearization: it avoids the need to calculate the Jacobian matrix, and it provides a more accurate approximation.

In the UKF, the Gaussian distribution is represented by a set of deterministically chosen weighted sample points, the sigma points, as follows, where each sigma point is built from the corresponding row or column of the matrix square root of the covariance and has an associated weight; the scaling parameter can be any number provided that the resulting weights remain valid.

Given the set of samples generated by UT, the prediction procedure is as follows:

The predicted mean is computed as

The predicted covariance is computed as

The measurement prediction is computed as

The Kalman gain can be computed as

The updated state estimate of the filter is

The updated covariance matrix is computed as
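A compact sketch of the UT-based prediction and update steps listed above is given below, using one common sigma-point scheme (2n+1 points with a scaling parameter kappa); the weighting convention and all names are assumptions for illustration and may differ from the paper's.

```python
import numpy as np

def sigma_points(x, P, kappa=0.0):
    """Generate 2n+1 sigma points and weights for the unscented transform."""
    n = len(x)
    S = np.linalg.cholesky((n + kappa) * P)     # lower-triangular square root
    pts = [x] + [x + S[:, i] for i in range(n)] + [x - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w

def ukf_step(x, P, f, h, Q, R, z, kappa=0.0):
    """One UKF predict/update cycle with an angles-only measurement z."""
    pts, w = sigma_points(x, P, kappa)
    # Predicted mean and covariance
    X = np.array([f(p) for p in pts])
    x_pred = w @ X
    P_pred = sum(wi * np.outer(xi - x_pred, xi - x_pred)
                 for wi, xi in zip(w, X)) + Q
    # Measurement prediction
    Zs = np.array([h(xi) for xi in X])
    z_pred = w @ Zs
    Pzz = sum(wi * np.outer(zi - z_pred, zi - z_pred)
              for wi, zi in zip(w, Zs)) + R
    Pxz = sum(wi * np.outer(xi - x_pred, zi - z_pred)
              for wi, xi, zi in zip(w, X, Zs))
    # Update
    K = Pxz @ np.linalg.inv(Pzz)                # Kalman gain
    x_new = x_pred + K @ (z - z_pred)
    P_new = P_pred - K @ Pzz @ K.T
    return x_new, P_new
```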

5. Integrated Estimation/Guidance Law

The integrated estimation/guidance strategy presented in this paper consists of

a nonlinear estimator based on EKF or UKF,

a differential game-based guidance law with angles-only measurement.

The differential game guidance law is derived based on uncorrupted information. In exoatmospheric interception, the thrust lags for both players are much smaller than those in endoatmospheric aerodynamics, so hit-to-kill performance can be achieved as long as the maneuverability of the interceptor is greater than that of the target. However, in a realistic interception, the onboard seeker operates in a noisy measurement environment and lacks the relative range and velocity information about the target, which corrupts the information structure. The state variables required by the differential game-based guidance law, or the ZEM, have to be estimated from the available corrupted measurements. In exoatmospheric interception, the maneuvering target can perform only a bang-bang maneuver; however, this type of maneuver is also the most cost-effective for evasion, so the nonlinear filter can use a single model to estimate the target's maneuver.
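As an illustration of how the two components interact in the homing loop, the following sketch closes the loop between a filter step and the guidance law at each guidance cycle; all names (seeker, filter_step, guidance_law) are illustrative placeholders, not the paper's implementation.

```python
def integrated_estimation_guidance(seeker, filter_step, guidance_law,
                                   x0, P0, dt, t_final):
    """Run the integrated estimation/guidance (IEG) loop.

    seeker       : callable returning the noisy LOS-angle measurement at time t
    filter_step  : one filter iteration (e.g., a wrapper around the EKF/UKF
                   sketches above), mapping (x, P, z) -> (x, P)
    guidance_law : maps the state estimate and time to a divert command
    """
    x, P, t = x0, P0, 0.0
    commands = []
    while t < t_final:
        z = seeker(t)                    # noisy [elevation, azimuth]
        x, P = filter_step(x, P, z)      # estimation
        u = guidance_law(x, t)           # differential game command
        commands.append(u)               # command sent to the divert thrusters
        t += dt
    return commands
```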

5.1. Observability Enhancement Differential Game Guidance Law

The optimal guidance strategy of (35) nulls the LOS rates during the interception. As mentioned above, the smaller the LOS rates, the poorer the range observability. To resolve this issue, a modified guidance strategy is designed for observability enhancement.

Consider the game space described in Figure 2; the interceptor can perform any maneuver in the singular (capture) zone, whose bounds are the optimal trajectories. Using the optimal strategy (35), the LOS rates are near zero and the LOS does not rotate; this is the main reason for the poor observability. In the capture region, the optimal strategies for the interceptor are arbitrary; in such a zone, the interceptor can maneuver away from the collision triangle and make the LOS rotate.

When the interceptor is inside this region, the acceleration command is restricted to a finite set of two values, and the command is chosen as follows:

where the bounds have been reduced by a margin. When strategy (64) is first used, the command makes the interceptor maneuver away from the collision triangle. Once one bound is reached, the command is switched and drives the interceptor toward the other bound. Using this strategy, the interceptor is no longer kept near the collision triangle, and the rotation of the LOS enhances the observability of the range. This modified guidance law is denoted the observability enhancement differential game guidance law (OEDGL).

Furthermore, when the time-to-go is small, the interceptor must try to reduce its ZEM rather than move away from the collision triangle to enhance the observability. A switch time is therefore introduced as a tuning parameter in the strategy; after the switch time, the strategy is switched to a pure differential game guidance law (PDGL) that reduces the ZEM until the end of the interception.
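A minimal sketch of the OEDGL switching logic described in this subsection is given below, under the stated assumptions (reduced command bounds inside the capture zone, a reversal when a zone bound is reached, and a switch to PDGL when the time-to-go falls below a tuned switch time); the class name, the bound test, and the numerical defaults are illustrative.

```python
import numpy as np

class OEDGL:
    """Observability enhancement DGL for one lateral channel (sketch).

    While t_go > t_switch, the command dithers between the reduced
    bounds +/- margin*a_max: it moves the trajectory away from the
    collision triangle until the capture-zone bound is reached, then
    reverses toward the other bound, keeping the LOS rotating for
    range observability.  After the switch time, the pure DGL (PDGL)
    bang-bang command on the ZEM is used instead.
    """

    def __init__(self, a_max, zone_bound, margin=0.8, t_switch=2.0):
        self.a_max = a_max
        self.zone_bound = zone_bound   # bound of the capture (singular) zone
        self.margin = margin           # bound reduction factor
        self.t_switch = t_switch       # tuning parameter: switch to PDGL
        self.sign = 1.0                # current dither direction

    def command(self, zem_channel, t_go):
        if t_go > self.t_switch:
            if abs(zem_channel) >= self.zone_bound:
                # Reached one bound: reverse toward the other bound
                self.sign = -np.sign(zem_channel)
            return self.sign * self.margin * self.a_max
        # PDGL: maximum command toward nulling the ZEM
        return self.a_max * np.sign(zem_channel)
```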

5.2. Modified Guidance Law for Chattering Phenomenon

In PDGL, the command is computed according to the direction of the ZEM vector. If the norm of the ZEM vector is near zero, the commands switch at a very fast rate, which results in chattering. For this reason, we introduce a ZEM threshold to solve this problem:

The LOS rate threshold can be computed as follows.

Consider (13); the motion of LOS rate can be expressed as

Without the relative acceleration, the zero-effort LOS rate can be computed as

With the desired LOS rate at a given range, the zero-effort LOS rate is

By tuning these parameters, the LOS rate threshold is determined, and the ZEM threshold can be computed using (65).

At the beginning of PDGL, the interceptor reduces the ZEM vector as fast as it can. Then, when the norm of the ZEM is near zero, the interceptor switches the strategy to reduce the chattering. The modified PDGL can be written as
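One simple realization of the ZEM-threshold modification is sketched below: inside the threshold the command is held at zero (a dead zone) so the divert thrusters do not switch at every guidance cycle; the paper's exact modified form may differ, and the names are illustrative.

```python
import numpy as np

def modified_pdgl_command(zem_channel, a_max, zem_threshold):
    """PDGL command with a ZEM threshold to avoid chattering (sketch).

    Above the threshold, apply the full bang-bang command toward
    nulling the ZEM; below it, hold the command at zero so the
    divert thrusters do not switch sign at every guidance cycle.
    """
    if abs(zem_channel) > zem_threshold:
        return a_max * np.sign(zem_channel)
    return 0.0
```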

6. Simulation Results and Discussion

In this section, the performance of the integrated estimation and guidance laws is evaluated using nonlinear simulations. First, a large initial range error is considered to test the validity of the proposed observability enhancement differential game guidance law against a nonmaneuvering target. Then, using the proposed guidance law, two engagement scenarios are considered: one with a nonmaneuvering target and the other with a maneuvering one. Two kinds of nonlinear filters are tested in the guidance loop.

6.1. Engagement Scenarios

The engagement starts with the interceptor and the target on a collision course. In the first scenario, the target does not perform evasion maneuvers (nonmaneuvering target, NMT), while in the second, the target performs evasion maneuvers at its maximum capability and at a uniformly distributed random time (a bang-bang form of maneuver). The initial parameters and values of the simulation are given in Table 1.

6.2. Performance Comparisons of OEDGL and PDGL

In this subsection, the proposed OEDGL is compared with PDGL against a nonmaneuvering target. In this case, the target does not maneuver; when PDGL is used, the LOS rates are nulled at the beginning of the game, so the observability of the range is poor, and with a large initial range error the range estimation error will not be reduced during the interception.

A 200-run Monte-Carlo (MC) simulation was used to compare the range-estimation performance of OEDGL and PDGL. The root mean squared error (RMSE), the actual standard deviation, and the mean error over the 200 runs are used to evaluate the performance.
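For clarity, these metrics can be computed over the Monte-Carlo runs as in the following generic sketch; the array layout and names are assumptions for illustration.

```python
import numpy as np

def mc_metrics(errors: np.ndarray):
    """Monte-Carlo metrics over range-estimation errors.

    errors : array of shape (n_runs, n_time_steps), the range
             estimation error of each run at each time step.
    Returns the RMSE, the actual standard deviation, and the mean
    error at each time step, computed across the runs.
    """
    mean_err = errors.mean(axis=0)                  # mean error
    rmse = np.sqrt((errors ** 2).mean(axis=0))      # root mean squared error
    std = errors.std(axis=0, ddof=1)                # actual standard deviation
    return rmse, std, mean_err
```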

Figure 3 shows the range-estimation performance comparison between OEDGL and PDGL. From the results, it is evident that OEDGL improves the observability of the range, whereas the range observability under PDGL is much poorer.

Figure 4 shows that, using OEDGL, the interceptor maneuvers within the capture region to rotate the LOS, while PDGL drives the ZEM to zero and the commands chatter during the interception. The proposed OEDGL thus improves the observability of the range, and the commanded acceleration does not chatter.

Figure 5 presents the miss distance CDF comparison of OEDGL and PDGL; the 200-run Monte-Carlo results show that OEDGL performs better than PDGL.

6.3. Results: Nonmaneuvering Target

For the NMT case, the integrated estimation/guidance law with the nonlinear filter and OEDGL is tested in a 200-run Monte-Carlo simulation. Figure 6 presents the RMSE, the expected standard deviations, and the mean errors of the range, relative velocity, LOS rates, and target acceleration, respectively. It can be seen that, with the white Gaussian process noise, the model is uncertain, but the estimation error remains within the theoretical bounds approximately 68% of the time. The rotation of the LOS enables the states to be estimated from angles-only measurements and improves the estimation of the range and the relative velocity, especially for the NMT. The estimation errors of the nonmaneuvering target's acceleration depend on the covariance matrices. In the estimation model of the target's acceleration, it is assumed that the maneuver starting time is uniformly distributed; thus, the estimation error is affected by the target model. However, with the fast convergence of the estimated states, these effects are reduced, as the presented results indicate.

The other nonlinear filter, the UKF, was also tested. As Figure 7 shows, the results of the UKF are similar to those of the EKF for the NMT. The main difference lies in the expected standard deviations of the range and the relative velocity; those of the EKF are much larger than those of the UKF. The results show that both EKF and UKF perform well in estimating the LOS angles and LOS rates in the homing loop of the interceptor.

Figures 8 and 9 show the ZEM and acceleration commands of UKF/OEDGL and EKF/OEDGL, respectively, for the NMT; the ZEM histories and the commands are very similar.

The mean miss distance and standard deviation (STD) of the runs for EKF-NMT were 0.058 m and 0.028 m, respectively, while for UKF-NMT they were 0.057 m and 0.021 m. Figure 10 presents the miss distance CDF. It is evident that both EKF and UKF exhibit good estimation performance, and the guidance law is sufficient for the nonmaneuvering target. The required hit-to-kill range to ensure a 95% kill probability is 0.09 m for the UKF and 0.10 m for the EKF. From the CDF, the UKF is superior to the EKF, but the differences are small.

6.4. Results: Maneuvering Target

The target performs three step maneuvers to evade the interceptor. At the first switching time, it executes a -3g step maneuver in the y-axis of the LOS coordinates, followed by a 3g step maneuver in the z-axis. At the second switching time, it executes a 3g step maneuver in the y-axis and a -3g step maneuver in the z-axis. The third step occurs at the last switching time, with maneuvers of 3g and -3g, respectively.

The estimation results of EKF-MT are presented in Figures 11 and 12. The range estimation worsens at the third step maneuver, and the estimation error of the target acceleration is larger as well. As in the NMT case, the expected standard deviations of the range and relative velocity using the EKF are much larger than those of the UKF. The larger range estimation error leads to worse target acceleration estimation.

Figures 13 and 14 present the performance of UKF/OEDGL against the maneuvering target; this combined estimation and guidance law performs much better than EKF/OEDGL in range and relative velocity estimation. From Figure 14, it can easily be seen that the target acceleration estimation is better than that of the EKF; for a 3g step maneuver, the filter takes less than 1 s to assess the target maneuver.

Figure 15 shows the miss distance CDF for the MT. The mean miss distance and STD of the runs for EKF-MT were 0.413 m and 1.25 m, respectively; the worst miss distance was 6.98 m, meaning the interceptor would miss the target. For UKF-MT they were 0.131 m and 0.055 m, respectively. The required hit-to-kill range to ensure a 95% kill probability is 0.23 m for the UKF and 1.02 m for the EKF. Therefore, it is evident that the UKF is superior to the EKF for maneuvering target estimation.

7. Conclusion

An integrated estimation/guidance law for exoatmospheric maneuvering targets has been derived in this paper. An angles-only estimator was designed using the EKF and the UKF as nonlinear filters in spherical coordinates. Within the pursuit-evasion game, a guidance law based on a bounded differential game was derived using the ZEM vector. With the onboard angles-only measurement seeker, the range and the relative velocity cannot be measured directly, and the traditional differential game-based guidance law leads to poor observability; to resolve this issue, an observability enhancement differential game guidance law (OEDGL) was derived to enhance the observability for the estimator. A ZEM threshold was introduced to reduce guidance command chattering. Nonmaneuvering and maneuvering target interceptions were then considered in a three-dimensional nonlinear simulation. The results show that both EKF and UKF perform well in nonmaneuvering target interception, the proposed OEDGL performs well in range estimation, and the UKF is superior to the EKF against step maneuvers with uniformly distributed starting times; the differential game-based guidance law performed well in the endgame, and the commands are suitable for the divert thrusters. Thus, the proposed integrated estimation/guidance law can ensure hit-to-kill reliability for maneuvering targets in exoatmospheric interception using angles-only measurements.

Abbreviations

LOS:Line-of-sight
ZEM:Zero-effort-miss
EKF:Extended Kalman filter
UKF:Unscented Kalman filter
LRR:Long-range radar
TVC:Thrust vector control
DCS:Divert control system
LQDG:Linear quadratic differential game
NDG:Norm differential game
APN:Augmented proportional navigation
OGL:Optimal guidance law
IEG:Integrated estimation/guidance
IMM:Interactive multiple model
PDGL:Pure differential game guidance law
OEDGL:Observability enhancement DGL.

Data Availability

The parameters and data of the simulation environments used to support the findings of this study are included within Simulation Results and Discussion.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (Grants nos. 61873319, 61573161, and 61473124).