Abstract

Terminal guidance against maneuvering targets has long been a focal point of research. Most of the literature focuses on estimating the target acceleration and the time to go within the guidance law, both of which are difficult to acquire. This paper presents a terminal guidance law based on a receding horizon control strategy. The proposed guidance law adopts the basic framework of receding horizon control, and the guidance process is divided into several finite time horizons. Optimal control theory and a target motion prediction model are then used to derive the guidance law for a minimum-time index function, with the initial conditions continuously renewed at the start of each horizon. Finally, the guidance law iterates repeatedly until the target is intercepted. The resulting guidance law is suboptimal, requires less guidance information, and does not need to estimate the target acceleration or the time to go. Numerical simulation verifies that the proposed guidance law is more effective than traditional methods against targets with constant and sinusoidal bounded accelerations.

1. Introduction

Generally, there are two design methods for the guidance control system of an unmanned aerial vehicle (UAV). The first method, based on the principle of timescale separation, divides the flight system into a high-frequency attitude control inner loop and a low-frequency guidance outer loop, which are designed separately and independently [1]. The other method introduces overload, aspect angle, and other information into an integrated guidance and control design at the inner-loop level [2]. This paper mainly studies the design of the outer-loop guidance law for the first method. Target maneuver refers to continual changes in the speed, angle, and acceleration of the target's motion. Guidance technology for attacking maneuvering targets has always been an emphasis of guidance law research. With the development of technology, target maneuvering capability has been strengthening consistently, and target maneuvers have become even more difficult to predict, which is a major issue restricting the improvement of guidance precision. Many scholars have carried out extensive studies on new guidance laws for maneuvering targets. Recent studies on guidance laws against maneuvering targets are mainly classified into two types [3–6]: (1) optimal guidance laws and (2) nonlinear guidance laws.

The optimal guidance law based on linearization utilizes optimal control theory to design a controller with terminal constraints. The optimal guidance law (OGL) is derived from linearized guidance motion equations by minimizing a quadratic performance index under terminal constraint conditions. Under the circumstance of known target information, in [7] it is assumed that the guidance command is perpendicular to the missile velocity vector, and the optimal guidance law for intercepting a maneuvering target is obtained through integration, yielding a nonlinear algebraic equation. Du et al. [8] studied a 3D guidance law with constraints; in the case of external disturbance of the target acceleration, the guidance law design was transformed into a dynamic programming problem. Hexner and Shima [9] proposed a stochastic optimal control guidance law with terminal constraints; their results indicate that, in the case of a bounded target maneuver, the interception performance is superior to that of the classical optimal guidance law. A time-to-go tracker with a variable-motion model was proposed as a recursive algorithm, which improved the estimation of the remaining time and effectively improved the performance of the guidance law [10]. In [11], a new impact angle control optimal guidance law was developed for missiles with arbitrary velocity profiles against maneuvering targets.

The guidance law based on nonlinear control methods is derived by utilizing nonlinear control theory. The nonlinear control methods utilized extensively include variable structure control, Lyapunov optimizing feedback control, and H∞ control [12–17]. In [12], the presented guidance law is based on a nonsingular terminal sliding mode, a smooth second-order sliding mode, and a finite-time convergence disturbance observer, which is used to estimate and compensate the lumped uncertainty in the missile guidance system; no prior knowledge of the target maneuver is required. Shang et al. [13] considered the target maneuver as an external disturbance; under the circumstance of a small miss distance, an impact-time-controllable guidance law was proposed based on finite-time convergence control theory, so that the system state can be driven onto a specified sliding mode within a limited time. Zhou et al. [14] put forward a guidance law based on integral sliding mode control, which relaxed the assumption of conventional impact-time control guidance laws that the speed is constant. Vincent and Morgan [15] utilized the Lyapunov optimal feedback control method to derive a nonlinear guidance law whose advantage is that the LOS angular rate and the target acceleration do not need to be measured. Yang and Chen [16] studied a guidance law that treats the target maneuver as a disturbance input; the guidance problem of the missile was changed into a nonlinear disturbance attenuation control problem, and three kinds of H∞ guidance laws were obtained by deriving the related Hamilton-Jacobi partial differential inequality.

Although guidance laws based on different theories have been proposed for maneuvering targets, there is no uniform method for estimating the uncertainty of target motion information under arbitrary maneuvering flight. Optimal guidance laws need estimates of the target acceleration and the time to go; if the estimation precision is not high, the interception performance of the guidance law is significantly weakened [17, 18]. The estimation accuracy of the remaining time greatly affects the performance of linear guidance laws; in addition, the linear optimal guidance law is effective only in the case of a small aspect angle. In most cases, unknown disturbances and unmodeled dynamics exist in the actual system, which affect the performance of the guidance law. Guidance laws based on nonlinear methods require a large amount of guidance information and take relatively complicated forms, which causes difficulties in actual engineering applications, and nonlinear methods cannot ensure optimality [19, 20]. Consequently, more effective guidance laws should be designed for maneuvering targets.

In order to address the issues arising when attacking maneuvering targets, this paper proposes a UAV terminal guidance law based on a receding horizon control strategy. Receding horizon control (RHC) is a control technology that repeatedly solves an optimal control problem online according to the currently measured system state. It has been applied in aircraft control and guidance [17–23]. This paper uses the receding horizon control method to solve the guidance law design problem. First, the guidance control strategy within a receding horizon is constructed; then the optimal control law is derived based on the minimum time within one receding horizon; the guidance law and its iterative algorithm are designed on this basis. The algorithm uses the control commands generated by the optimal control law, which form a suboptimal guidance law over the whole guidance process. Compared with conventional terminal guidance laws, the proposed guidance law requires less guidance information: it does not need to estimate the time to go, and it can intercept a maneuvering target with bounded acceleration without knowledge of its future acceleration.

The remainder of the paper is organized as follows: Section 2 describes the relative motion model between the UAV and the target; Section 3 presents the terminal guidance law based on the receding horizon control strategy; Section 4 gives the comparative simulation results and analysis of the proposed guidance law and several other guidance laws; Section 5 concludes the paper.

2. Problem Formulation

This section specifies the guidance mathematical model for intercepting the target. In order to highlight the major issues of the research, the following are assumed: (1) the UAV and target speeds are constant; (2) the body axis of the UAV and its velocity direction are consistent, so the error angle is negligible; (3) at the initial moment of guidance, the target is within the field of view of the UAV; (4) the response time delay of the aircraft is negligible.

According to the above assumptions, the UAV and target can be abstracted as controllable mass points. Since three-dimensional motion can be divided into two mutually perpendicular planar motions, the guidance law is studied for the interception motion of the UAV and target in the same plane, as shown in Figure 1. Subscripts M and T represent the physical quantities of the UAV and target, respectively. The relevant quantities are the relative distance between the UAV and target, the LOS (line-of-sight) angle, the course angle of the UAV, the course angle of the target, the kinematic velocity of the UAV, and the kinematic velocity of the target. The relative distance, LOS angle, and the two velocities are known quantities, while the course angle of the target is unknown. The guidance process is that the UAV impacts the target according to the established guidance law.

The guidance equations are as follows:
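As a minimal sketch, assuming the standard planar engagement model with relative distance $r$, LOS angle $q$, course angles $\theta_M$, $\theta_T$, speeds $V_M$, $V_T$, and lateral accelerations $a_M$, $a_T$ (the notation and the numbering (1)–(4) are assumptions introduced here for readability), the kinematics read

\[ \dot r = V_T\cos(\theta_T - q) - V_M\cos(\theta_M - q), \tag{1} \]
\[ r\dot q = V_T\sin(\theta_T - q) - V_M\sin(\theta_M - q), \tag{2} \]
\[ \dot\theta_M = a_M / V_M, \tag{3} \]
\[ \dot\theta_T = a_T / V_T. \tag{4} \]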

Equations (1)–(4) constitute the guidance kinematic model of the UAV and target on the two-dimensional plane. The basic conditions required for a successful guidance attack of the UAV against the target are as follows:

Formula (5) involves the minimum turning radii of the UAV and the target, respectively; it mainly ensures that the speed of the UAV is greater than the target velocity and, in addition, that the maneuvering capability of the UAV is stronger than that of the target. Formula (6) involves the relative distance at the final moment of the guidance attack process (namely, the miss distance) and the detection range of the UAV.
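As a minimal sketch, assuming the symbols $R_M$, $R_T$ for the minimum turning radii, $r(t_f)$ for the terminal relative distance, and $\Delta$ for the allowed miss distance (all assumed notation), conditions (5) and (6) would take the form

\[ V_M > V_T, \qquad R_M < R_T, \tag{5} \]
\[ r(t_f) \le \Delta. \tag{6} \]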

The guidance law designed in this paper not only has to ensure that the relative distance between the UAV and the maneuvering target at the terminal time is within the detection range of the UAV, but also has to ensure that the UAV attacks the target within the shortest time once the target is detected. In other words, it has to satisfy the constraint condition of inequality (6) while minimizing the time index function shown in formula (7).
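Assuming the standard minimum-time form (again, assumed notation), the index function (7) is simply the elapsed guidance time:

\[ J = \int_{0}^{t_f} \mathrm{d}t = t_f. \tag{7} \]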

Provided that the target trajectory and the remaining time could be estimated accurately in advance, formula (7) could be minimized by utilizing optimal control theory, yielding the minimum required guidance time. In actual applications, however, the target's upcoming motion information (trajectory and acceleration) can hardly be predicted accurately. Consequently, this paper adopts a receding horizon control strategy to realize minimum-time optimal guidance in each receding horizon, so that the succession of receding horizons constitutes overall state feedback control during the whole guidance process and yields a suboptimal solution for the minimum interception time. The suboptimality is relative to the optimal control obtained with a fully known target motion state. It can be seen from the aforementioned four assumptions that the motion trajectory of the UAV is determined by its course angle (the speed being constant in the plane). Therefore, the minimum-time problem of formula (7) can be transformed into designing the optimal course angle so that index (7) is minimized while all constraint conditions of formulas (5) and (6) are satisfied simultaneously. The paper adopts the course angle as the controlled variable to design the guidance law.

3. Guidance Method

In this section, a terminal guidance law based on the receding horizon control strategy is derived. By adopting the receding horizon control strategy, the guidance process is divided into several finite time horizons. Minimum-time optimal control is conducted with continuous renewal of the initial conditions within each time horizon, and the iteration is repeated until the target is intercepted. The guidance law is suboptimal and requires less guidance information; it does not need to estimate the time to go and can intercept a maneuvering target with bounded acceleration.

3.1. The Receding Horizon Control Strategy

The receding horizon control strategy is shown in Figure 2, which depicts the unit time of online computation, the online computation time of the guidance algorithm, the updating cycle of the guidance command, the time at which the guidance instruction is updated, the length of the receding horizon, and the guidance command.

The receding horizon control strategy solves an optimization problem over a receding horizon by taking the current state measurements as the initial conditions and calculating the optimal control solution online. The control is executed over the guidance instruction execution cycle until the system obtains new measurement values and takes them as the new initial conditions; the optimal control solution of the next finite horizon is then calculated in the same way. This process is repeated continuously until the requirements are satisfied, yielding a feedback control law. Receding horizon control only requires the optimal control of the state on the current trajectory of the system, avoiding the global and difficult-to-compute Hamilton-Jacobi approach. In addition, the closed-loop stability of receding horizon control has been verified.
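A minimal sketch of this loop, assuming hypothetical helper routines `measure_state`, `solve_finite_horizon`, `apply_command`, and `is_done` (none of which come from the paper), is:

```python
def receding_horizon_control(horizon_length, command_period, is_done,
                             measure_state, solve_finite_horizon, apply_command):
    """Generic receding horizon loop: measure the state, solve a finite-horizon
    optimal control problem online, execute the command, and repeat."""
    while True:
        x0 = measure_state()                          # current state = new initial condition
        if is_done(x0):                               # e.g., interception condition (6) met
            break
        u = solve_finite_horizon(x0, horizon_length)  # online optimal control for one horizon
        apply_command(u, command_period)              # execute until the next update instant
```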

3.2. Derivation of the Guidance Law in One Receding Horizon

According to the receding horizon control strategy specified in Section 3.1, an optimal guidance control law in one receding horizon is derived for the nonlinear optimal control problem composed of (1), (2), and (7). Receding horizon control performs optimal tracking of the target over successive receding horizons. Because the future trajectory or maneuver of the target is unknown, it is assumed that in a given receding horizon the target escapes without maneuvering, moving from the state value measured at the initial time of that horizon. The tracking objective is to ensure that the UAV intercepts the target in the minimum interception time. Consequently, the problem to be solved is to obtain the control variable that, over the given time horizon, satisfies the following:

The minimum-time interception problem admits an analytical solution within a single time horizon through optimal control theory and the minimum principle. First, form the Hamiltonian function, in which the costate variables appear; the corresponding costate equations follow.
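As a sketch only, assuming the planar kinematics written out after Section 2 and costates $\lambda_r$, $\lambda_q$ (assumed notation), the minimum-time Hamiltonian and costate equations would take the form

\[ H = 1 + \lambda_r\big[V_T\cos(\theta_T - q) - V_M\cos(\theta_M - q)\big] + \lambda_q\,\frac{V_T\sin(\theta_T - q) - V_M\sin(\theta_M - q)}{r}, \]
\[ \dot\lambda_r = -\frac{\partial H}{\partial r}, \qquad \dot\lambda_q = -\frac{\partial H}{\partial q}. \]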

Solving the costate differential equations (10) and (11) yields the costate expressions (12) and (13), in which two integration constants appear.

Since the terminal time of the horizon is free, this is a free-terminal-time problem. From the transversality conditions, it can be seen that

Substituting formulas (14) and (15) into (12) and (13) gives

The corresponding value can be obtained from (1):

From the control equation, it follows that

Substituting formulas (16) and (17) into (19) yields the control law (20):

The control law expressed in formula (20) means that, in order to satisfy the requirements of index function (8), the course angle of the UAV in one receding horizon must be controlled to the aspect angle of that horizon; namely, the attack direction of the UAV should point toward the predicted target position at the terminal time of the horizon. The guidance law based on the receding horizon control strategy is the synthesis of the optimal tracking control solutions over the series of finite horizons. At the beginning of a receding horizon, the UAV takes the currently measured target state values as the initial values, takes expression (8) as the optimization objective to solve the optimal tracking control command for that horizon, adjusts its flight direction to the optimal tracking direction, and flies toward the target along the collision line until the new target state value is obtained and the next receding horizon begins. The optimal tracking control solution and control command execution process are then repeated in the new receding horizon with the updated initial state. Formula (6) is checked in each receding horizon: if the condition of formula (6) is satisfied, the target has been intercepted; otherwise, the optimal control of the next receding horizon is performed under the new initial conditions. Obviously, the control is open loop within each horizon, but the guidance law over the whole tracking process constitutes closed-loop control.
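A minimal sketch of this per-horizon command, assuming Cartesian positions of the UAV and target, a non-maneuvering (straight-line) target prediction within the horizon, and hypothetical argument names (none taken from the paper), is:

```python
import math

def commanded_course_angle(x_m, y_m, x_t, y_t, v_t, theta_t, horizon):
    """Course-angle command for one receding horizon: point the UAV toward the
    position a non-maneuvering target would reach at the end of the horizon."""
    # Virtual (straight-line) target position at the horizon's terminal time.
    x_t_end = x_t + v_t * math.cos(theta_t) * horizon
    y_t_end = y_t + v_t * math.sin(theta_t) * horizon
    # Commanded course angle: LOS angle from the UAV to that terminal position.
    return math.atan2(y_t_end - y_m, x_t_end - x_m)
```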

When the UAV uses the receding horizon control strategy and intercepts the target in the final receding horizon, the total target interception time is computed as follows:

Obviously, the guidance time obtained using the receding horizon control strategy is no less than the guidance time under the optimal guidance law with fully predicted target motion information. Therefore, the guidance law presented in this paper is a suboptimal guidance law. The time loss relative to the optimal guidance law is
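Assuming the symbols $T_{\mathrm{RHC}}$ for the interception time under the proposed law and $T^{*}$ for the minimum interception time attainable with fully known target motion (assumed notation), a time-loss expression consistent with the percentages quoted in Section 4 is the relative difference

\[ \eta_t = \frac{T_{\mathrm{RHC}} - T^{*}}{T^{*}} \times 100\%. \]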

3.3. The Iterative Computation Algorithm of the Guidance Law

It can be seen from formula (3) that the lateral overload controlled by the UAV in the two-dimensional plane is determined by the rate of change of its track angle; accordingly, the aircraft track angle is adopted as the controlled variable in this paper. The condition for optimal tracking of the track angle is given in formula (20). Although the UAV is unaware of the target's prospective maneuvering, the target track can be predicted within a single receding horizon. According to the requirement of formula (20), the track angle of the aircraft in each receding horizon needs to coincide with the aspect angle of the virtual (non-maneuvering) target. The guidance law can then be produced accordingly; its terms comprise a proportionality coefficient, the relative velocity between the UAV and the non-maneuvering target, the estimated remaining time assuming the target does not maneuver, and the difference between the UAV track angle and the angle prescribed for the receding horizon. Since the target is assumed not to maneuver within one horizon, the virtual motion trajectory within that horizon can be predicted accurately; therefore, the relative velocity and the estimated remaining time can be calculated easily, requiring no specific measurement or estimation. The LOS rate between the UAV and the real target can be measured by the sensors on the UAV. The track angle control term ensures that the track angle requirement of formula (20) is satisfied within the receding horizon, the exponential term is a smoothing term, and the compensation term accounts for the gap between the virtual target position and the actual maneuvering target position in the normal direction of the UAV.

Since it is difficult to calculate the attack direction of the UAV in each receding horizon accurately from the nonlinear equations (1) and (2), it can be evaluated approximately from the current position of the aircraft and the terminal position of the target in the receding horizon, as shown in formula (25).

During the actual iterative calculation, the value of the receding horizon length is quite important: it balances the computational load and the system stability performance. The paper adopts a fixed receding horizon length following the method stipulated in formula (26), where the remaining-time estimate is calculated as in [10].

The iterative algorithm of the guidance law based on the receding horizon control strategy is as follows.

Step 1. Initialize the parameters.

Step 2. Calculate the current relative motion quantities.

Step 3. Select the length of the receding horizon according to formula (26).

Step 4. Calculate the LOS angle at the terminal time according to formula (25).

Step 5. Perform minimum-time optimal control within the receding horizon according to formula (24).

Step 6. Determine whether formula (6) is satisfied. If yes, the iteration and the guidance process are complete; if not, update the initial state and return to Step 3 for the next iteration.
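A rough sketch of how Steps 1–6 might be organized in code, assuming hypothetical `uav` and `target` objects with position, speed, course, and `step` methods, a placeholder `select_horizon` rule in place of formula (26), and reusing the `commanded_course_angle` sketch given earlier (all assumptions, not the authors' implementation), is:

```python
import math

def rhc_guidance_loop(uav, target, dt, miss_threshold, select_horizon):
    """Iterative guidance per Steps 1-6: choose a horizon, predict the
    non-maneuvering target, steer toward its terminal position, and repeat."""
    while True:
        # Step 2: current relative distance (other relative quantities analogous).
        rel_dist = math.hypot(target.x - uav.x, target.y - uav.y)
        # Step 6: check the interception condition of formula (6).
        if rel_dist <= miss_threshold:
            return True
        # Step 3: receding horizon length (placeholder for formula (26)).
        horizon = select_horizon(rel_dist, uav.speed, target.speed)
        # Steps 4-5: terminal LOS toward the predicted straight-line target
        # position, used as the commanded course angle (cf. formula (20)).
        theta_cmd = commanded_course_angle(uav.x, uav.y, target.x, target.y,
                                           target.speed, target.course, horizon)
        # Execute the command over this horizon; the real target may maneuver.
        t = 0.0
        while t < horizon and rel_dist > miss_threshold:
            uav.step(theta_cmd, dt)
            target.step(dt)
            rel_dist = math.hypot(target.x - uav.x, target.y - uav.y)
            t += dt
```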

4. Simulations and Discussion

In this section, simulations of the proposed guidance law based on the receding horizon control strategy are presented for a variety of scenarios. To validate the performance of our method, the new guidance law is compared with other guidance laws, namely, augmented proportional navigation (APN) and the optimal guidance law (OGL) [12, 24]. The compared guidance laws can be written as in formula (27). The initial parameters are set in Table 1. The initial LOS angle and the initial relative distance relate to the performance of the detectors on the UAV. The UAV and target speeds are chosen according to real-world vehicle velocities. The initial flight course angle is chosen so that the target is within the field of view of the detector. The acceleration limits are decided by rules of thumb.
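For orientation only, assuming the textbook forms of these laws (the paper's exact expressions in formula (27) may differ), the compared commands would read

\[ a_{\mathrm{APN}} = N V_c \dot q + \frac{N}{2} a_T, \qquad a_{\mathrm{OGL}} = \frac{N'}{t_{\mathrm{go}}^{2}}\Big(y + \dot y\, t_{\mathrm{go}} + \frac{1}{2} a_T t_{\mathrm{go}}^{2}\Big), \]

where $N$, $N'$ are navigation gains, $V_c$ is the closing velocity, $\dot q$ is the LOS rate, $a_T$ is the target acceleration, $t_{\mathrm{go}}$ is the time to go, and $y$, $\dot y$ are the relative displacement and velocity normal to the initial LOS (all assumed notation).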

The parameter settings used in formulas (23), (26), and (27) are as follows. In order to ensure consistency of the comparative simulation results, the parameters in formula (27) are chosen around the intercept point; as the remaining time goes to zero, the equivalent gain of the proportional navigation term is 5. According to formula (6), interception in the simulations is considered successful when the terminal relative distance falls within the specified miss distance. In order to compare the applicability of each guidance law to seeker-equipped guided weapons with FOV limits, the look angle is defined in formula (28) under the assumption that the body axis and velocity of the UAV coincide.
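Assuming the body axis is aligned with the velocity vector, the look angle is presumably the angle between the UAV course and the line of sight, i.e. (an assumed form of formula (28), with assumed notation)

\[ \eta_L = \theta_M - q. \]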

4.1. Case 1 (Impact a Target Moving in a Straight Line with Constant Velocity)

In this scenario, the target moves along a straight line with no lateral acceleration, so the target acceleration term in the compared guidance laws is zero. The initial flight path angle of the target is set to 90 deg. Since the target has no lateral acceleration and the target speed is uniform, the target trajectory can be predicted by APN, OGL, and the guidance law proposed in this paper. Therefore, this case mainly investigates and compares the performance of the three guidance laws under different target motion directions.

4.1.1.  m/s2,

Simulation results are shown in Figure 3. It can be seen from Figure 3 that APN, OGL, and the guidance law proposed in this paper can all intercept the target. The trajectories of OGL and the proposed law are relatively close, and the trajectories of APN, OGL, and the proposed law are all relatively straight. It can be seen from Figure 3(b) that the energy consumption of APN is low, while that of the other two guidance laws is comparatively high. After about 12 s, the acceleration of APN is around 0; the acceleration of OGL increases to −2.6 m/s² at the guidance terminal; the change pattern of the proposed law within each horizon is clearly visible, and its terminal acceleration is 0. It can be seen from Figure 3(c) that the LOS rates of APN and the proposed law decrease to around 0°/s, while the LOS rate of OGL increases to −3.4°/s at the terminal. It can be seen from Figure 3(d) that the look angle of APN increases and stabilizes at 13.5°, the look angle of OGL decreases to 5.9°, and the look angle of the proposed law decreases and stabilizes at 7.7°. Through the simulation, the final miss distances of APN, OGL, and the proposed law are 0.9714 m, 0.9137 m, and 0.8216 m, respectively; the impact time of OGL is 18.72 s and that of the proposed law is 18.73 s, so the time loss of the proposed law is 0.05% according to formula (22).

4.1.2.  m/s2,

Simulation results are shown in Figure 4. It can be seen from Figure 4(a) that APN, OGL, and the guidance law proposed in this paper can all intercept the target. The trajectories of the three guidance laws are relatively close in the first half of the guidance process, and the trajectories of APN, OGL, and the proposed law are relatively straight in the remaining half. It can be seen from Figure 4(b) that the energy consumption of APN is low, while that of the other two guidance laws is comparatively high. After about 12 s, the acceleration of APN is around 0; the acceleration of OGL increases to −5.5 m/s² at the guidance terminal; the change pattern of the proposed law within each horizon is clearly visible, and its terminal acceleration is 0. It can be seen from Figure 4(c) that the LOS rates of APN and the proposed law decrease to around 0°/s, while the LOS rate of OGL increases to 6.9°/s at the terminal. It can be seen from Figure 4(d) that the look angle of APN increases and stabilizes at 21.5°, the look angle of OGL reverses and decreases to −12°, and the look angle of the proposed law decreases and stabilizes at 14°. Through the simulation, the final miss distances of APN, OGL, and the proposed law are 0.9901 m, 0.9584 m, and 0.9742 m, respectively; the impact time of OGL is 15.89 s and that of the proposed law is 15.96 s, so the time loss of the proposed law is 0.4% according to formula (22).

4.1.3.  m/s2,

Simulation results are shown in Figure 5. It can be seen from Figure 5(a) that APN, OGL, and the guidance law proposed in this paper can all intercept the target. The trajectories of the three guidance laws are relatively close in the first half of the guidance process, and those of OGL and the proposed law are relatively straight in the last half. It can be seen from Figure 5(b) that the energy consumption of APN is low, while that of the other two guidance laws is comparatively high. The acceleration of APN settles around 0; the acceleration of OGL increases to 30 m/s² at the guidance terminal; the change pattern of the proposed law within each horizon is clearly visible, with the terminal acceleration converging to 0 m/s². It can be seen from Figure 5(c) that the LOS rates of APN and the proposed law decrease to around 0°/s, while the LOS rate of OGL increases to 22.5°/s at the terminal. It can be seen from Figure 5(d) that the look angle of APN reverses and increases to −16°, the look angle of OGL reverses and decreases to −22.5°, and the look angle of the proposed law reverses and decreases to −20°. Through the simulation, the final miss distances of APN, OGL, and the proposed law are 0.5577 m, 0.9890 m, and 0.7703 m, respectively; the impact time of OGL is 7.22 s and that of the proposed law is 7.24 s, so the time loss of the proposed law is 0.3% according to formula (22).

From the guidance simulation results for different initial target motion directions, it can be seen that the trajectory of the proposed law is close in character to that of OGL. When the target performs no acceleration maneuver, the energy consumption of APN is the lowest, followed by the proposed law; the energy consumption of OGL is the highest, especially at the terminal of the guidance process, where the acceleration command of OGL increases sharply. The acceleration of the proposed law can be effectively reduced in the last horizon. The LOS rate under APN converges smoothly, whereas the LOS rate under OGL diverges at the terminal; in particular, when the initial track angle of the target is 180°, the LOS rate at the guidance terminal is 22.5°/s, under which circumstance a miss becomes likely. Thanks to the receding horizon control effect, the initial conditions of the proposed law are updated at the beginning of each horizon, and the LOS rate can be controlled to around 0°/s at the terminal. In most cases, the look angle of the proposed law is small while that of APN is large. It can be seen from Figure 6 that, for initial target track angles of 0°~140°, the maximum look angle of the guidance law proposed in this paper is the smallest among the three guidance laws. Figure 7 shows the results for initial target track angles of 0°~180°: the final miss distances of the three guidance laws are distributed within 0.5~1 m. Since the target motion direction remains unchanged, the time loss of the proposed law is not large.

4.2. Case 2 (Impact a Maneuvering Target with Constant Lateral Acceleration)

In this scenario, the target moves with a constant lateral acceleration. The initial flight path angle of the target is set to 90 deg. For APN and OGL, the changing law of the target acceleration is unknown, and for the proposed law the changing law of the target maneuver is likewise unknown. According to the simulation results, APN works for target maneuvers within −2.3~4.9 m/s²; beyond this range APN diverges and cannot satisfy guidance condition (6). Therefore, the performance of the three guidance laws is investigated within the target maneuvering range of −2.3~2.3 m/s².

4.2.1.  m/s2

Simulation results are shown in Figure 8. It can be seen from Figure 8(a) that the trajectories of OGL and the proposed law are relatively close, and both are relatively straight compared with that of APN. It can be seen from Figure 8(b) that the energy consumption of APN is high, while that of the proposed law is comparable to that of OGL. The acceleration of APN is around 10 m/s²; the acceleration of OGL increases to −7 m/s² at the guidance terminal; the acceleration of the proposed law clearly converges to −4 m/s² in the last horizon. It can be seen from Figure 8(c) that the LOS rate of APN varies around 3~7°/s, that of OGL reverses and increases to −6°/s, and that of the proposed law finally converges to −2°/s. It can be seen from Figure 8(d) that the look angle of APN increases to 60°, while the look angles of OGL and the proposed law remain within 25°, with the proposed law giving the smaller look angle. Through the simulation, the final miss distances of APN, OGL, and the proposed law are 0.7446 m, 0.9508 m, and 0.9336 m, respectively; the impact time of OGL is 17.87 s and that of the proposed law is 17.99 s, so the time loss of the proposed law is 0.7% according to formula (22).

4.2.2.  m/s2

Simulation results are shown in Figure 9. It can be seen from Figure 9(a) that the trajectories of the three guidance laws are relatively straight. It can be seen from Figure 9(b) that the energy consumption of OGL is high, while that of the proposed law is comparable to that of APN. The acceleration of APN is around 0 m/s²; the acceleration of OGL increases to −1 m/s² at the guidance terminal; and the acceleration of the proposed law at the terminal is around −2 m/s². It can be seen from Figure 9(c) that the LOS rate of APN decreases and stabilizes at 1.5°/s; after decreasing to 0, the LOS rate of OGL reverses and increases to −0.2°/s; and that of the proposed law finally converges to −1°/s. It can be seen from Figure 9(d) that the look angle of OGL reaches the maximum value of 16° and the look angle of the proposed law reaches the minimum value of 10°; in addition, the look angles of the proposed law and OGL converge to around 0° at the final time. Through the simulation, the final miss distances of APN, OGL, and the proposed law are 0.9259 m, 0.8748 m, and 0.8021 m, respectively; the impact time of OGL is 17.85 s and that of the proposed law is 17.93 s, so the time loss of the proposed law is 0.5% according to formula (22).

4.2.3.  m/s2

Simulation results are shown in Figure 10. It can be seen from Figure 10(a) that the trajectory of APN is comparatively curved, that of OGL is straight, and that of the proposed law lies between the two. It can be seen from Figure 10(b) that the energy consumption of APN is high, while that of the proposed law is comparable to that of OGL. The APN acceleration at the terminal is about −10 m/s²; the OGL acceleration reverses and increases to 13 m/s²; and the acceleration of the proposed law at the terminal is around −1 m/s². It can be seen from Figure 10(c) that the LOS rate of APN reverses and increases to −3°/s, that of OGL reverses and increases to 7°/s, and that of the proposed law finally converges to around 0. It can be seen from Figure 10(d) that the look angle of APN reaches the maximum value of 45°, while the look angle of the proposed law reverses and increases to around 28°. Through the simulation, the final miss distances of APN, OGL, and the proposed law are 0.7807 m, 0.9531 m, and 0.9716 m, respectively; the impact time of OGL is 10.91 s and that of the proposed law is 11.21 s, so the time loss of the proposed law is 3% according to formula (22).

From the guidance simulation results for different constant target accelerations, it can be seen that the trajectories of the three guidance laws differ markedly when the target maneuvers with acceleration; at high acceleration, the trajectory of APN bends strongly. When the target maneuvers with acceleration, the energy consumption of APN is high, while those of the proposed law and OGL are comparable. At the terminal of the guidance process, the acceleration command of OGL increases sharply, whereas the acceleration of the proposed law is effectively reduced in the last receding horizons. The LOS rate of APN grows as the target acceleration increases, under which circumstance a miss becomes likely; owing to the receding horizon structure of the proposed law, the initial conditions are updated at the start of each horizon, and the LOS rate can be kept to a low value at the terminal. In most cases, the look angle of the proposed law is smaller than that of APN. It can be seen from Figure 11 that the look angles of the proposed law and of OGL depend on the target acceleration: within the acceleration range of −10~−8 m/s², the look angle of the proposed law is smaller than the look angle ranges of the other guidance laws, and within the −7~−5 m/s² range it is also small. Figure 12 shows that, for target accelerations within −10~10 m/s², the final miss distances of the three guidance laws are distributed within 0.4~1 m. With increasing target acceleration, the time loss of the proposed law shows an increasing tendency.

4.3. Case 3 (Impact a Maneuvering Target with Sinusoidal Lateral Acceleration)

In this scenario, the target moves with a sinusoidal lateral acceleration whose change frequency is a parameter. The initial flight path angle of the target is set to 90 deg. For APN and OGL, the changing law of the target acceleration is known, while for the proposed law the changing law of the target maneuver is unknown. According to the simulation calculation, APN diverges when the frequency exceeds 0.02 rad/s and no longer satisfies guidance condition (6). Therefore, the three guidance laws are compared by simulation at a low frequency, and at higher frequencies only the guidance performances of OGL and the proposed law are compared.

4.3.1.

Simulation results are shown in Figure 13. It can be seen from Figure 13(a) that the trajectories of APN and the proposed law are relatively close, and the trajectories of all three guidance laws are relatively straight. It can be seen from Figure 13(b) that the energy consumption of OGL is high, while that of the proposed law is comparable to that of APN. The acceleration of APN at the terminal is around 0 m/s²; the acceleration of OGL increases to −1.5 m/s² at the guidance terminal; and the acceleration of the proposed law at the terminal is around −3 m/s². It can be seen from Figure 13(c) that the LOS rate of APN varies around 1~3.5°/s, that of OGL reverses and increases to 0.5°/s, and that of the proposed law finally converges to −2°/s. It can be seen from Figure 13(d) that the look angle of OGL increases to the maximum value of 18°, while the look angle of the proposed law decreases to the minimum value of 11.5°. Through the simulation, the final miss distances of APN, OGL, and the proposed law are 0.8905 m, 0.9586 m, and 0.8033 m, respectively; the impact time of OGL is 16.81 s and that of the proposed law is 17.00 s, so the time loss of the proposed law is 1.1% according to formula (22).

4.3.2.

Simulation results are shown in Figure 14. The differences between the trajectories of the three guidance laws are obvious in Figure 14(a); the trajectory of APN is obviously bent at the guidance terminal, which is related to the target acceleration approaching −2.3 m/s² (the relevant interpretation is given in the simulations of Section 4.2). It can be seen from Figure 14(b) that the energy consumption of APN is high, while that of the proposed law is comparable to that of OGL. The acceleration of APN is around 16 m/s²; the acceleration of OGL increases to −8 m/s² at the guidance terminal; and the acceleration of the proposed law at the terminal is around −6.5 m/s². It can be seen from Figure 14(c) that the LOS rate of APN varies around 3.5~8°/s, that of OGL reverses and increases to −6°/s, and that of the proposed law finally converges to −4.5°/s. It can be seen from Figure 14(d) that the look angle of OGL increases to the maximum value of 42°, while the look angle of the proposed law decreases to the minimum value of 21.5°. Through the simulation, the final miss distances of APN, OGL, and the proposed law are 0.8194 m, 0.9032 m, and 0.9752 m, respectively; the impact time of OGL is 17.23 s and that of the proposed law is 17.29 s, so the time loss of the proposed law is 0.3% according to formula (22).

4.3.3.

Simulation results are shown in Figure 15. It can be seen from Figure 15(a) that the trajectories of OGL and the proposed law are relatively straight. It can be seen from Figure 15(b) that the energy consumption of OGL is low while that of the proposed law is high. The OGL acceleration at the terminal is around 6.5 m/s², while the acceleration of the proposed law at the terminal reduces to around 1 m/s²; in general, the acceleration of the proposed law can follow the changes in target acceleration. It can be seen from Figure 15(c) that the LOS rate of OGL lies within −2~6°/s and is 6°/s at the terminal, while that of the proposed law eventually decreases to 0.5°/s. It can be seen from Figure 15(d) that the look angle of OGL reaches 24°, greater than that of the proposed law, whose look angle range is 22.5°. Through the simulation, the final miss distances of OGL and the proposed law are 0.7781 m and 0.776 m, respectively; the impact time of OGL is 15.51 s and that of the proposed law is 15.59 s, so the time loss of the proposed law is 0.5% according to formula (22).

From the guidance simulation results under different frequencies of target acceleration change, it can be seen that APN is only applicable to slowly varying target accelerations. When the target acceleration changes, the energy consumption of APN reaches the maximum among the three laws, while the energy consumptions of the proposed law and OGL are comparable. As the frequency of the target acceleration change increases, the energy consumption of the proposed law increases, mainly because, unlike OGL, the proposed law has no target maneuver information and therefore needs more energy to track the changes in target acceleration. The LOS rate of APN grows continuously under sinusoidal maneuvering, under which circumstance a miss becomes likely; owing to the receding horizon structure of the proposed law, the initial conditions are updated at the start of each horizon, and the LOS rate can be kept to a low value at the terminal. In most cases, the look angle of the proposed law is smaller than those of APN and OGL. It can be seen from Figure 16 that, over most of the frequency range, the look angle of the proposed law is comparable to that of OGL. Figure 17 shows that, for sinusoidal maneuver frequencies within 0.01~1 rad/s, the final miss distances of OGL and the proposed law are distributed within 0.4~1 m. With increasing frequency, the time loss of the proposed law shows no obvious increasing tendency and remains at a low level.

5. Conclusion

The paper proposes a suboptimal terminal guidance law based on a receding horizon control strategy, which can be used by seeker-equipped guided weapons to attack maneuvering targets. By adopting the receding horizon control strategy, the guidance process is divided into several finite time horizons; minimum-time optimal control is conducted with continuous renewal of the initial conditions within each horizon, and the iteration is repeated until the target is intercepted. The simulation results verify that the proposed law is an effective suboptimal guidance law. In terms of energy consumption, when the target moves uniformly in a straight line, the energy consumption of APN is the lowest, followed by the proposed law, and the energy consumption of OGL is the highest; when the target maneuvers, the energy consumption of APN becomes the highest, while those of OGL and the proposed law are low. In terms of guidance duration, whether or not the target maneuvers, the time loss of the proposed law compared with OGL is small. In addition, relying on the receding horizon control strategy, the proposed law reduces the terminal acceleration and LOS rate, thereby reducing the possibility of missing the target. In most cases, the look angle range of the proposed law is smaller, which is favorable for seeker-equipped guided weapons with field-of-view limitations. Although the guidance time and energy consumption are not optimal, the guidance information required by the proposed law is minimal, and the law shows strong adaptability to maneuvering targets (constant and sinusoidal maneuvers). The target can be intercepted without estimating the target acceleration or the remaining time.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.