Abstract

Air target threat assessment is a key issue in air defense operations. Aiming at the shortcomings of traditional threat assessment methods, such as one-sidedness, subjectivity, and low accuracy, a new air target threat assessment method based on a gray neural network model (GNNM) optimized by an improved moth flame optimization (IMFO) algorithm is proposed. The model fully combines the excellent optimization performance of IMFO with the powerful learning performance of GNNM. Finally, the model is trained and evaluated using data from a target threat database. The simulation results show that, compared with the GNNM model and the MFO-GNNM model, the proposed model achieves a mean square error of only 0.0012 in threat assessment, giving higher accuracy, and evaluates 25 groups of targets within 10 milliseconds, which meets real-time requirements. Therefore, the model can be effectively used for air target threat assessment.

1. Introduction

Air target threat assessment refers to comprehensively considering the various factors that affect the target threat value, establishing a reasonable indicator system and quantifying it, and then building a threat assessment model to evaluate the target threat value. As the battlefield environment becomes increasingly complex and incoming targets are characterized by high speed, high maneuverability, stealth, antijamming capability, and long-range precision guidance, the pressure on air defense systems keeps increasing. Therefore, establishing a reasonable threat assessment model and conducting rapid and accurate threat assessment of incoming targets is a prerequisite for target allocation in air defense operations and an important support for efficient command and control.

At present, a great deal of research on air target threat assessment has been carried out at home and abroad. Two kinds of methods are mainly used: reasoning methods and machine learning methods.

Reasoning methods analyze the relationship between index values and threat values and combine expert experience and prior probabilities to infer the threat values of different targets. They mainly include Bayesian networks [1–3] and intuitionistic fuzzy logic [4–6]. The Bayesian network method can intuitively express the relationship between indicators and threats, but the determination of prior probabilities depends too heavily on expert experience, so the method is highly subjective. The intuitionistic fuzzy logic method can better handle qualitative indicators, but its reasoning rules are complicated and the coupling relationships between indicators are not sufficiently considered.

Machine learning methods establish a threat assessment index system, quantify each index, and then learn the nonlinear relationship between index values and threat values through machine learning. They mainly include neural networks [7, 8] and support vector machines [9]. Neural networks have strong nonlinear fitting ability but require large training samples. Support vector machines have advantages in small-sample prediction but suffer from unreasonable index selection and low evaluation accuracy.

Therefore, how to choose a reasonable index system and scientific quantification methods, while overcoming the strong subjectivity of traditional methods and improving assessment accuracy, has always been the focus of threat assessment research.

GNNM is a predictive model that combines the small-sample prediction advantages of gray system theory with the self-learning advantages of neural networks. Gray system theory [10] is a systems science theory proposed by Prof. Deng for predicting the characteristic values of uncertain system behavior. Gray system modeling requires few samples and places few demands on the distribution of the data. A neural network is a mathematical model mainly used to learn the nonlinear functional relationship between input and output data, with the advantages of parallel computation, self-learning, self-organization, and robustness. Combining gray theory with a neural network to build a gray neural network model makes full use of the advantages of both and improves prediction accuracy. The performance of GNNM depends mainly on the choice of initial weights, so choosing better initial weight parameters is particularly important.

The moth flame optimization (MFO) algorithm [11] is a new intelligent optimization algorithm [12–14] proposed by Prof. Mirjalili in 2015. It has the advantages of few adjustment parameters, fast convergence, and simple implementation, and hence it is often used in engineering optimization problems [15, 16]. However, MFO also has some shortcomings. For example, the way the current optimal solutions are selected in each iteration makes it easy to fall into local optima and converge prematurely, and its global optimization performance suffers from low population diversity. Therefore, this paper first improves the optimization performance of MFO, and the improved algorithm (IMFO) is then used to obtain optimal initial weight parameters and thereby improve the learning performance of GNNM.

Accordingly, this paper combines the actual situation of air target threat assessment, selects a reasonable indicator system, establishes a threat assessment framework, and scientifically quantifies each indicator. It then improves the global optimization performance of the original MFO by introducing Tent chaos, Lévy flight, and the Metropolis criterion. By combining IMFO with GNNM, a threat assessment model based on IMFO-GNNM is established. Finally, the proposed model is evaluated in terms of accuracy and real-time performance. The results show that the proposed model achieves higher accuracy and better real-time performance in threat assessment, so this study has certain advantages and practical significance.

2. Threat Assessment Problem Modeling

Air target threat assessment needs to analyze the air target and our asset to obtain the threat value of the air target, thus providing a basis for our command and control. Because of the particularity of combat problems, the accuracy of threat assessment must be considered on the one hand and its real-time performance on the other. It is therefore crucial to choose a reasonable and accurate threat assessment indicator system and an efficient threat assessment method. Air target threat assessment is related not only to the threat capability of the target but also to the spatial location of the target and the value of our asset. This paper therefore considers threat level, threat capability, and threat extent to establish a target threat assessment framework and quantifies each threat attribute scientifically and reasonably. The assessment framework is shown in Figure 1.

2.1. Threat Level

Since the value of assets on the battlefield differs, the threat value of an air target is related to the value of our asset: the higher the value of our asset, the higher the target threat level and the higher the threat value. The value of our asset is generally related to its political, economic, and military value. Therefore, referring to the asset importance evaluation standard [17], a threat level evaluation function is established, as shown in the following equation:

In the formula, each factor score represents the value of the jth evaluation factor of our asset i within its prescribed value range, and s is the number of evaluation factors. The value of the asset is usually given by the superior command.

2.2. Threat Capability

The target threat capability includes the target's ability to penetrate to our asset and the degree of damage caused by a successful penetration. Among them, the penetration ability is related to the target type and interference ability, and the degree of damage is determined by the target damage ability.

2.2.1. Target Type

There are many types of air targets. Different types of targets differ in size, radar cross-section (RCS), and threat value. Air targets are mainly classified into aircraft and guided weapons according to their size. Aircraft mainly include bombers (BA), fighter bombers (FB), and armed helicopters (AH). Guided weapons mainly include precision guided bombs (PGB), air-to-ground missiles (AGM), cruise missiles (CM), tactical ballistic missiles (TBM), and antiradiation missiles (ARM). Therefore, the domain expert group quantifies the different types of targets according to their RCS sizes, and the results are shown in Table 1.

2.2.2. Damage Ability

The target damage ability is determined by the target damage probability and our survival probability. The damage probability is obtained from intelligence data, and the survival probability is obtained by analyzing factors such as the invulnerability and protection of our asset.
(a) When the air target is an aircraft type, the probability of damage is related to the type, quantity, and single-shot damage probability of the weapons carried by the aircraft. Therefore, its damage probability is the joint damage probability of the different types of weapons, and its damage ability is quantified according to the following equation. In the formula, the quantities involved are the damage ability of target j to asset i, the survival probability of asset i, the damage probability of the PGB against asset i, and the number of PGBs mounted on the BA.
(b) When the air target is a guided weapon, its damage ability is quantified analogously from its own damage probability and the survival probability of the asset.
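To make the joint damage combination concrete, the following MATLAB lines give a minimal sketch of one common way of combining single-shot damage probabilities with the asset survival probability; the variable names, example values, and the exact combination rule are illustrative assumptions and are not taken from the equations of this paper.

% Illustrative sketch only; the combination rule is an assumption, not the paper's equation
Ps = 0.6;                        % survival probability of asset i (example value)
p  = [0.5 0.4];                  % single-shot damage probabilities of the carried weapon types
m  = [4 2];                      % number of weapons carried of each type
Pjoint = 1 - prod((1 - p).^m);   % joint damage probability over all carried weapons
U_air  = (1 - Ps) * Pjoint;      % assumed damage ability of an aircraft-type target
U_gw   = (1 - Ps) * 0.7;         % guided weapon: uses its single damage probability (e.g., 0.7)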

2.2.3. Interference Ability

According to the electronic interference pod and interference means carried by the target, the target interference capability is classified into five levels: super strong, strong, medium, weak, and none. According to Miller's quantification theory [18], these levels are quantified as 0.9, 0.7, 0.5, 0.3, and 0.1, respectively.

2.3. Threat Extent

The target threat extent mainly measures the threat of the target to our asset from the spatial situation. As shown in Figure 2, T represents the incoming target, K represents our asset, D represents the target distance, and θ represents the airway angle, that is, the angle between the projection of the target velocity onto the horizontal plane and the line connecting the target and the asset.

It can be seen from Figure 2 that the closer the target distance and the higher the target speed, the higher the target threat value. In addition, considering the kill boundary of the target, when the distance is fixed, the smaller the airway angle, the shorter the airway shortcut, the more likely our asset is to fall into the target's kill zone, and the higher the target threat value. Therefore, the target threat extent is measured along the three dimensions of airway angle, target speed, and target distance.

2.3.1. Airway Angle Threat

The airway angle usually varies in the range [0°, 180°], but it is generally considered that when the airway angle is within [0°, 90°], the target is approaching the asset. In this case, the larger the airway angle, the larger the airway shortcut and the lower the threat extent. Hence, a negative exponential function is used to quantify the airway angle threat. The formula is as follows:

2.3.2. Speed Threat

The higher the target speed, the shorter the response time and the higher the threat value of the target. Therefore, the speed threat is quantified by

2.3.3. Distance Threat

The target distance is the distance between the target and the asset and directly affects the threat value of the target: the closer the target is to our asset, the higher the threat, and the farther the distance, the lower the threat value. Therefore, the distance threat is quantified by the following equation:
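Purely for illustration, the following MATLAB sketch quantifies the three threat-extent indicators with typical monotone functions; the functional forms and coefficients are assumptions chosen to show the intended monotonic behavior and are not the equations used in this paper.

% Illustrative quantification only; forms and coefficients are assumed, not the paper's equations
theta = 30;                   % airway angle in degrees (example value)
v     = 900;                  % target speed in m/s (example value)
d     = 120;                  % target distance in km (example value)
T_angle = exp(-theta/90);     % assumed: smaller airway angle -> larger threat
T_speed = 1 - exp(-v/1000);   % assumed: higher speed -> larger threat
T_dist  = exp(-d/200);        % assumed: closer target -> larger threat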

3. Construction of IMFO-GNNM

3.1. GNNM

Suppose there is an original data sequence X0 = (x0(1), x0(2), …, x0(m)).

We can get a new sequence X1 by one accumulated generating operation (1-AGO), where each element is x1(k) = x0(1) + x0(2) + ⋯ + x0(k), k = 1, 2, …, m.

It is easy to see that X1 approximately follows an exponential growth law, so, according to the relationship between derivative and difference, we can construct a differential equation to fit X1.
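A minimal MATLAB sketch of the accumulation step (1-AGO) described above; x0 is a hypothetical raw sequence used only for illustration.

x0 = [2.8 3.1 3.3 3.9 4.4];   % hypothetical original sequence X0
x1 = cumsum(x0);              % accumulated sequence X1: x1(k) = sum of x0(1..k)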

For the sake of convenience, after accumulation the output sequence is denoted y1, the input sequences are denoted y2, y3, …, yn, and the prediction result is denoted z(t). The differential equation expression of the gray neural network with n parameters is then

dy1/dt + a·y1 = b1·y2 + b2·y3 + ⋯ + b(n−1)·yn.

In the formula, y1 is the output variable, y2, y3, …, yn are the input variables, and a, b1, b2, …, b(n−1) are the model parameters.

Solving the differential equation, we can get

z(t) = [y1(0) − (b1/a)·y2(t) − ⋯ − (b(n−1)/a)·yn(t)]·e^(−at) + (b1/a)·y2(t) + ⋯ + (b(n−1)/a)·yn(t).

If we let d = (b1/a)·y2(t) + ⋯ + (b(n−1)/a)·yn(t), we will derive

z(t) = (y1(0) − d)·e^(−at) + d.

Then we map it to the neural network to get a gray neural network. Its structure is shown in Figure 3.

Network output is

The weights of the input layer, the weights between the input layer and the hidden layer, the weights between the hidden layer and the output layer, and the deviation of the output layer are all expressed in terms of the gray parameters a and b1, …, b(n−1), so training the network amounts to identifying these gray parameters. Note that when the number of input variables is n − 1, the number of hidden layer nodes is n.
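As a rough sketch under the notation reconstructed above, once the gray parameters a and b1, …, b(n−1) have been obtained (here they are simply assumed example values), the prediction z(t) can be evaluated directly from the solution of the differential equation:

% Assumed gray parameters and inputs, for illustration only
a    = 0.3;                          % development coefficient
b    = [0.20 0.10 0.40];             % b1..b(n-1) for three input indicators
y1_0 = 0.8;                          % initial value of the output sequence
y_in = [0.6 0.9 0.5];                % accumulated input indicator values at time t
t    = 3;
d    = sum(b .* y_in) / a;           % auxiliary term d
z    = (y1_0 - d)*exp(-a*t) + d;     % prediction from the solution derived above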

3.2. IMFO
3.2.1. MFO

The MFO algorithm was inspired by the transverse orientation of moths during night flight [11]. Moths keep a fixed angle with the moon when flying; because the moon is very far away, the lines connecting the moth and the moon at different times are approximately parallel, so the moth flies in a straight line. In reality, however, a flame is very close to the moth, and when the moth keeps the same fixed angle with the flame, it flies along an equiangular spiral toward the flame. Hence the phenomenon of the moth flying into the flame arises, as shown in Figure 4. Prof. Mirjalili was inspired by this spiral flight path to propose the MFO algorithm.

In the original MFO algorithm, M is the moth position matrix, that is, the set of agents that search for solutions, ZM is the moth fitness value matrix, F is the flame position matrix, that is, the set of current optimal solutions, and ZF is the flame fitness value matrix, as shown in formulas (13) and (14). At the initial moment, the flame matrix and the moth matrix have the same dimensions: both are n × d matrices, where n is the number of moths and d is the number of dimensions.

The flame positions are sorted according to their fitness values from small to large, and the moths fly around the sorted flames along the isometric spiral of equation (15), updating their positions by changing the parameter t:

S(Mi, Fj) = Di · e^(bt) · cos(2πt) + Fj,

where S(Mi, Fj) is the new position of the ith moth, Di = |Fj − Mi| is the distance between the ith moth and the jth flame, b is the isometric spiral parameter, which determines the shape of the spiral, and t is a random number in [−1, 1] controlling the distance between the moth and the flame, as shown in Figure 5. The smaller t is, the closer the moth is to the flame. By changing t, the moth can reach positions all around the flame, thus enhancing the local search performance of the algorithm.
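A minimal MATLAB sketch of the spiral position update in equation (15); Mi and Fj are a hypothetical moth position and flame position.

Mi = [0.2 0.8 0.5];                 % current position of the ith moth (hypothetical)
Fj = [0.4 0.6 0.9];                 % position of the corresponding flame
b  = 1;                             % isometric spiral shape parameter
t  = -1 + 2*rand(1, numel(Mi));     % random parameter t in [-1, 1] for each dimension
D  = abs(Fj - Mi);                  % distance between the moth and the flame
Mi_new = D .* exp(b*t) .* cos(2*pi*t) + Fj;   % spiral update of equation (15)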

In order to improve the search efficiency of the algorithm, MFO adopts an adaptive flame reduction mechanism, which speeds up later convergence by not continuing to search around inferior solutions. The number of flames is adaptively reduced by

NF = round(Nmax − I · (Nmax − 1)/Imax),

where NF is the current number of flames, Nmax is the maximum number of flames, I is the current iteration number, and Imax is the maximum number of iterations. The number of flames varies with the number of iterations as shown in Figure 6.

When the number of flames becomes less than the number of moths, the excess moths fly around the last (worst-fitted) flame.
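The adaptive flame-number reduction of equation (16) can be sketched in a few MATLAB lines; Nmax and Imax are example values.

Nmax = 30;  Imax = 1000;                    % assumed population size and iteration limit
I  = 1:Imax;                                % iteration counter
NF = round(Nmax - I.*(Nmax - 1)./Imax);     % equation (16): flame count shrinks from Nmax to 1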

3.2.2. Tent Chaos

Chaotic sequences [19–22] have ergodicity and randomness, and using a chaotic sequence can make the initial solution distribution more uniform, which is beneficial for finding the global optimum. At present, two chaotic maps are mainly used, namely, the logistic map and the tent map. Shan et al. [23] proved that the tent map has better ergodicity than the logistic map. Therefore, the tent map is used to generate the initial population in this paper. The tent map and logistic map distributions are shown in Figure 7: Figure 7(a) shows the result of iterating the tent map 400 times from the initial value 0.01 (with the 0.5 breakpoint), and Figure 7(b) shows the result of iterating the logistic map 200 times with its control parameter ranging from 3 to 4.

The tent mapping formula is as follows:

x(k+1) = 2·x(k), for 0 ≤ x(k) ≤ 0.5,
x(k+1) = 2·(1 − x(k)), for 0.5 < x(k) ≤ 1.
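A minimal MATLAB sketch of initializing a moth population with the tent map; the bounds lb and ub and the number of map iterations are assumptions of the illustration.

n = 30;  d = 7;                   % number of moths and problem dimension
lb = 0;  ub = 1;                  % assumed search bounds
x = rand(n, d);                   % random seeds in (0, 1)
for k = 1:20                      % iterate the tent map to spread the sequence
    idx = x < 0.5;
    x(idx)  = 2*x(idx);
    x(~idx) = 2*(1 - x(~idx));
end
M = lb + (ub - lb).*x;            % chaotic initial moth population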

3.2.3. Lévy Flights

Since the original MFO algorithm selects the current optimum in each iteration and is prone to falling into local optima, Lévy flight is introduced. Lévy flight, different from Brownian motion, is a random walk proposed by the French mathematician Paul Pierre Lévy [24–26]; it is characterized by a large number of small steps mixed with an occasional large step, as shown in Figure 8, which plots the result of 3000 Lévy-flight steps. It can be seen from this figure that Lévy flight combines large steps and small steps.

The step size and direction of each move obey the Lévy distribution [27]; the step can be generated as shown in the following equation:

s = u / |v|^(1/β), u ~ N(0, σu²), v ~ N(0, 1),
σu = {Γ(1 + β)·sin(πβ/2) / [Γ((1 + β)/2)·β·2^((β−1)/2)]}^(1/β).

Among them, Γ(·) is the standard gamma function, u and v obey normal distributions, and β is a constant.

The current optimal solution is updated as follows:

S2 = S1 + α · Lévy(β),

where α is a scale factor that controls the step size of each move. In this way, occasional large steps are mixed with many small ones, which balances exploration and exploitation of the search space and helps the algorithm jump out of local optima.
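A minimal MATLAB sketch of generating Lévy-distributed steps with Mantegna's method and perturbing the current optimal solution S1 as described above; the example values of alpha, beta, and S1 are assumptions.

beta  = 1.5;  alpha = 0.01;                 % Levy exponent and scale factor (example values)
S1    = [0.4 0.6 0.9];                      % current optimal solution (hypothetical)
sigma = (gamma(1+beta)*sin(pi*beta/2) / ...
        (gamma((1+beta)/2)*beta*2^((beta-1)/2)))^(1/beta);
u     = randn(size(S1))*sigma;              % u ~ N(0, sigma^2)
v     = randn(size(S1));                    % v ~ N(0, 1)
step  = u ./ abs(v).^(1/beta);              % Levy-distributed step
S2    = S1 + alpha .* step;                 % perturbed candidate solution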

3.2.4. Metropolis Criterion

The Metropolis criterion is the core of the simulated annealing algorithm. It accepts the current inferior solution with a certain probability, so that the algorithm has the ability to jump out of the local optimum and converge to the global optimal solution.

The Metropolis criterion uses the following process (taking a minimization problem as an example):
(1) Add a Lévy flight to the current optimal solution S1 to generate a new solution S2.
(2) Weight the new solution and the old solution according to equation (20) to obtain the final new solution. The new solution obtained in this way helps retain the advantages of the old solution and reduces the influence of the disturbance error.
(3) Calculate the increment ΔE = E(S2) − E(S1), where E(·) is the cost function.
(4) Accept or reject the new solution according to the Metropolis criterion, expressed as follows.

If ΔE < 0, we accept the new solution; otherwise, we accept the new solution with probability exp(−ΔE/T), where T is the current annealing temperature.
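A minimal MATLAB sketch of the acceptance step; the cost function, candidate solutions, and temperature are illustrative assumptions.

costFun = @(s) sum(s.^2);                      % illustrative cost function
S1 = [0.4 0.6 0.9];  S2 = S1 + 0.05*randn(1,3);% old best and perturbed candidate (example)
dE = costFun(S2) - costFun(S1);                % increment between new and old solutions
T  = 1;                                        % assumed current annealing temperature
if dE < 0 || rand < exp(-dE/T)                 % Metropolis criterion
    S1 = S2;                                   % accept the new solution
end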

3.2.5. IMFO

The IMFO algorithm is given in Algorithm 1.

(1)Initialize solution population using tent chaos map
(2)iteration = 1
(3)while (iteration ≤ Max_iteration)
(4) OM = FitnessFunction(M)
(5) if iteration == 1
(6)  F = sort(M)
(7)  OF = sort(OM)
(8) else
(9)  F = sort(Mt−1, Mt)
(10)  OF = sort(OMt−1, OMt)
(11) end if
(12) for i = 1 : n
(13)  for j = 1 : d
(14)   update t
(15)   calculate D with respect to the corresponding flame
(16)   update M(i, j) using equation (15) with respect to the corresponding flame
(17)  end for
(18) end for
(19) update the position of the current optimal agent using Lévy-flight
(20) F_lévy = Lévy(F)
(21) OF_lévy = FitnessFunction(F_lévy)
(22) using the Metropolis criterion for OF and OF_lévy
(23) update the position of the best flame obtained so far
(24) update flame number using equation (16)
(25) iteration = iteration + 1
(26)end while
3.3. IMFO-GNNM

Because a neural network learns the functional relationship between input and output by adjusting its connection weights, its learning effect depends on the weight parameters. In order to enhance the learning effect of GNNM, IMFO is used to optimize its weight parameters, and the GNNM initialized with the optimized weights is then used for prediction. The resulting IMFO-GNNM algorithm flow is shown in Figure 9.

4. Results and Discussion

4.1. Verification Tests by Benchmark Functions

This paper selects six classical benchmark functions in CEC2010 to test the performance of the algorithm, as shown in Table 2. Among them, the Sphere function is a smooth monotonic function with only one global minimum, which is used to test the convergence speed of the algorithm. The Rosenbrock function is a nonconvex unimodal function with multiple local extrema, which is mainly used to test the convergence speed and optimization accuracy of the algorithm. The Schwefel 2.26, Rastrigin, Ackley, and Griewank functions are complex multimodal functions with multiple local minima; they are mainly used to test the global optimization performance of the algorithm and its ability to jump out of local extrema. Together, these functions examine the optimization performance of the algorithm comprehensively.

In order to evaluate the effectiveness of the IMFO algorithm, this paper uses the above benchmark functions to test it against the MFO algorithm. The simulation was carried out with MATLAB R2014a. The parameters were set as follows: the number of moths is 30, the maximum number of iterations is 1000, the spiral parameter is b = 1, the Lévy flight parameter is β = 1.5, and the weighting coefficient between the new solution and the old solution is 0.5. In order to reduce error, the mean fitness obtained over 30 consecutive runs is taken as the test result.

The fitness convergence curves of the 10-dimensional benchmark function test are shown in Figures 10–15. The fitness mean and standard deviation of the 10-dimensional and 50-dimensional benchmark function tests are shown in Table 3.

It can be seen from Figures 10–15 that, for the unimodal functions, the IMFO algorithm converges faster than the MFO algorithm and reaches higher optimization precision. For the multimodal functions, the MFO algorithm largely falls into local optima and converges slowly, whereas the IMFO algorithm can escape local optima and keep searching for the optimal solution, showing good global optimization performance and a faster convergence speed.

It can be seen from Table 3 that the means and standard deviations achieved by the IMFO algorithm are smaller than those of the MFO algorithm, which shows that the IMFO algorithm has better performance and good robustness. In particular, for the Schwefel and Rastrigin functions, the MFO algorithm converges prematurely and stagnates at local optima, while the IMFO algorithm almost reaches the global minimum.

In summary, because the IMFO algorithm introduces Tent chaotic sequence initialization, Lévy flight, and the Metropolis criterion, its initial population distribution is more uniform; at the same time, it has a better ability to jump out of local optima during function optimization and better global optimization performance.

4.2. Threat Assessment Tests
4.2.1. Test Settings

According to the threat assessment framework, the input dimension of the neural network is determined to be 7, that is, threat level, target type, damage ability, interference ability, airway angle, target speed, and target distance, and the output dimension is 1, that is, the threat value. Therefore, the gray neural network structure is 1-1-8-1.

The IMFO algorithm parameters are set as above, the dimension of the optimization problem is 7, and the maximum number of GNNM training iterations is 200. One hundred groups of experimental data were randomly selected from the target threat database, of which 75 groups form the training set and 25 groups form the test set. Some of the data are shown in Table 4.

Since the ranges of the indicators are not uniform after quantification, the quantified data are normalized, and the data of each indicator are compressed into the interval [0, 1], which is beneficial for speeding up model training and improving the learning outcome. The formula is as follows:

x′ = (x − xmin)/(xmax − xmin),

where x represents the actual value of a variable, xmax represents its maximum value, xmin represents its minimum value, and x′ represents the normalized value.
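A minimal MATLAB sketch of the min-max normalization above, applied column-wise to a hypothetical indicator matrix X (rows are targets, columns are indicators):

X    = [0.9 0.5 340 60;              % hypothetical quantified indicator matrix
        0.3 0.7 1200 15;
        0.6 0.1 800 95];
Xmin = min(X, [], 1);                % per-indicator minimum
Xmax = max(X, [], 1);                % per-indicator maximum
Xn   = bsxfun(@rdivide, bsxfun(@minus, X, Xmin), Xmax - Xmin);   % normalized into [0, 1]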

In order to reduce error, each experiment is run 30 times and the average of the results is reported. The simulation results of the GNNM model, the MFO-optimized GNNM model (MFO-GNNM), and the IMFO-optimized GNNM model (IMFO-GNNM) are compared.

4.2.2. Accuracy Analysis

Firstly, the parameters of GNNM are optimized on the training set to obtain the optimal initialization parameters. The error evolution curves are shown in Figure 16.

As can be seen from Figure 16, in the weight optimization process, IMFO has a faster optimization speed, a smaller mean square error, and better weight parameters.

After the three models are trained on the training set, the test set is input to test their accuracy. The GNNM model uses randomly initialized weights, the MFO-GNNM model uses MFO-optimized weight parameters, and the IMFO-GNNM model uses IMFO-optimized weight parameters. The test results are shown in Figure 17.

The expected value in the figure represents the actual threat value. It can be seen that when the unoptimized GNNM predicts the test set, the predicted values at some points deviate greatly from the expected values and the accuracy is not high enough. The prediction of the MFO-GNNM model is improved. The IMFO-GNNM model, with its optimized initial weights, shows the smallest deviation between predicted and expected values and the best prediction performance.

The relative error and mean square error of the three models on the test set are compared below, as shown in Figure 18 and Table 5.

It can be seen from Figure 18 and Table 5 that the relative error of the GNNM model reaches about 0.8 at some points, its overall relative error is large, and its mean square error is 0.013. The relative error of the MFO-GNNM model decreases, and its mean square error is reduced to 0.004. The IMFO-GNNM model has a small overall prediction error and a mean square error of only 0.0012. This indicates that optimizing the weights with the MFO algorithm improves the learning effect of the model, and that the IMFO algorithm obtains better initial weight parameters, so its relative prediction error is the smallest and its accuracy is the highest, which verifies the effectiveness of the proposed method.

4.2.3. Real-Time Performance Analysis

The training set is used to evaluate the running time of the three algorithms. The results are shown in Figure 19.

As can be seen from the figure, the running times of the three models satisfy GNNM < IMFO-GNNM < MFO-GNNM. This is because the GNNM model directly trains on the training set after random initialization, while the MFO-GNNM and IMFO-GNNM models must additionally search for optimized weight parameters on the training set, so they run longer. At the same time, because the IMFO algorithm has been improved and searches faster, the IMFO-GNNM model runs slightly faster than the MFO-GNNM model.

Considering that the neural network is trained offline and predicts online, the evaluation times of the three models are compared on 5, 10, 15, 20, and 25 sets of data. The results are shown in Figure 20.

With reference to Figure 20, the evaluation times of the three models are approximately equal. This is because the three models use the same network structure with different weights. Moreover, it takes less than 10 ms to evaluate 25 sets of data, which meets the real-time requirements.

In summary, the IMFO-GNNM model trades additional training time for higher accuracy, but this does not affect real-time assessment. Therefore, the IMFO-GNNM model can improve the accuracy of air target threat assessment while maintaining good real-time performance.

5. Conclusions

(1) By analyzing the actual situation of air defense operations and considering threat level, threat capability, and threat extent, a reasonable air target threat assessment framework is established, and each indicator is scientifically quantified.
(2) The IMFO algorithm is proposed. By introducing Tent chaos, Lévy flight, and the Metropolis criterion, the global optimization performance of the algorithm is improved.
(3) By combining the powerful learning ability of GNNM with the excellent optimization performance of IMFO, a new air target threat assessment model based on IMFO-GNNM is established, which achieves high accuracy and good real-time performance for threat assessment of incoming targets.

We will continue to study and propose better threat assessment models in the future. At the same time, we will study how to incorporate the proposed threat assessment model into the air defense system.

Data Availability

The data used to support the findings of this paper are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of the paper.

Authors’ Contributions

Longfei Yue planned the work, completed the simulation experiments, and drafted the main part of the paper. Rennong Yang and Jialiang Zuo performed error analysis. Hao Luo and Qiuliang Li contributed to typesetting.

Acknowledgments

The work described in this paper is partially supported by the National Natural Science Foundation of China under Grant no. 61503409.

Supplementary Materials

See the threat_data.xlsx file in the Supplementary Material for data. See the tent.m file and logistic.m file in the Supplementary Material for Figure 7. See the levy.m file in the Supplementary Material for Figure 8. See the main.m file in the Supplementary Material for Figures 10–15. (Supplementary Materials)