Abstract

Multiobjective weapon-target assignment is an NP-complete problem, and a reasonable assignment of weapons benefits both attack and defense. To simulate a real battlefield environment, we introduce a new objective, the value of fighter combat, on the basis of the original two-objective model. The new three-objective model maximizes the expected damage to the enemy, minimizes the cost of missiles, and maximizes the value of fighter combat. To solve this problem with its complex constraints, an improved nondominated sorting genetic algorithm III (NSGA-III) is proposed in this paper. In the proposed algorithm, a series of reference points with good performance in convergence and distribution is continuously generated according to the current population to guide the evolution, while useless reference points are eliminated. Moreover, an online operator selection mechanism is incorporated into the NSGA-III framework to autonomously select the most suitable operator during the search. Finally, the proposed algorithm is applied to a typical instance and compared with other algorithms to verify its feasibility and effectiveness. Simulation results show that the proposed algorithm is successfully applied to the multiobjective weapon-target assignment problem, effectively improves the performance of the traditional NSGA-III, and produces better solutions than the two multiobjective optimization algorithms NSGA-II and MPACO.

1. Introduction

With the rapid development of military air combat, the weapon-target assignment (WTA) problem has attracted worldwide attention [1]. The WTA problem is a classic scheduling problem that aims to assign weapons to targets so as to maximize military effectiveness while meeting all constraints. It is therefore important to find a proper assignment of weapons to targets.

The study of the WTA problem can be traced back to the 1950s and 1960s, when Manne [2] and Day [3] built the first models of the WTA problem. From the perspective of the number of objective functions, Hosein and Athans [4] classify the WTA problem into two classes: the single-objective weapon-target assignment problem and the multiple-objective weapon-target assignment (MWTA) problem. When taking the time factor into account, Galati and Simaan [5] divide the WTA problem into two categories: the dynamic weapon-target assignment problem and the static weapon-target assignment problem. The current research status of the various WTA problems is summarized in Table 1.

In contrast to the single-objective weapon-target assignment problem, MWTA can take different criteria into consideration and is therefore more in line with real combat decision making. In this paper, we mainly focus on the static multiobjective weapon-target assignment (SMWTA) problem, which aims at finding proper static assignments.

The combination of simulation and optimization algorithms to solve the SMWTA problem is not new, and a number of studies address this problem. In Liu et al. [16], an improved multiobjective particle swarm optimization (MOPSO) algorithm was used to solve the SMWTA problem with two objective functions: maximizing the enemy damage probability and minimizing the total number of firepower units. The specific example they used contains only 7 platforms and 10 targets.

Zhang et al. [17] proposed a decomposition-based evolutionary multiobjective optimization method based on the MOEA/D algorithm. Considering the constraints of attack resources and damage probability, a mathematical model of weapon-target assignment was formulated. Both the proposed repair method and appropriate decomposition approaches can effectively improve the performance of the algorithm. However, the algorithm has not been tested on large-scale WTA problems, and its convergence speed is low.

In the work of Li et al. [19], a new optimization approach for the MWTA problem was developed based on the combination of two types of multiobjective optimizers, NSGA-II (domination-based) and MOEA/D (decomposition-based), enhanced with an adaptive mechanism. A comparison study among the proposed algorithms, NSGA-II, and MOEA/D on instances of the MWTA problem at three scales was then performed, and four performance metrics were used to evaluate each algorithm. They applied the proposed adaptive mechanism only to the MWTA problem and did not verify its behavior on standard benchmark problems. In addition, they considered solving the MWTA problem with an improved version of NSGA-II (called NSGA-III [24]) as a next step.

In our previous work [23], we proposed a modified Pareto ant colony optimization (MPACO) algorithm to solve the bi-objective weapon-target assignment (BOWTA) problem and introduced the pilot operation factor into the WTA mathematical model. The proposed algorithm and two multiobjective optimization algorithms, NSGA-II and SPEA-II, were applied to instances of different scales. Simulation results show that the MPACO algorithm is successfully applied in the field of WTA, effectively improves the performance of the traditional Pareto ant colony optimization (P-ACO) algorithm, and produces better solutions than the other two algorithms.

Although the above methods have remarkable effects on solving the SMWTA problem, all of them considered two objectives, maximizing the expected damage to the enemy and minimizing the cost of missiles, without considering attack power. Because fighters cannot destroy targets in a single strike, we put forward the value of fighter combat to evaluate sustained operational capability. On the basis of the original two-objective model, we propose a three-objective model that is closer to real air combat: it maximizes the expected damage to the enemy, minimizes the cost of missiles, and maximizes the value of fighter combat. As the number of objectives increases from two to three, the performance of evolutionary multiobjective algorithms (EMOAs) may deteriorate for several reasons: (i) a large fraction of the population is nondominated; (ii) the evaluation of diversity is computationally complex; and (iii) the recombination operation may be inefficient [25]. Recently, EMOAs such as the nondominated sorting genetic algorithm III (NSGA-III) [26] have been proposed to deal with these difficulties and scale with the number of objectives.

In 2014, Deb and Jain [24, 26] proposed a reference point-based many-objective evolutionary algorithm following the NSGA-II framework, namely, NSGA-III. The basic framework of NSGA-III remains similar to that of NSGA-II, but NSGA-III improves the ability to solve multiobjective optimization problems (MOPs) by changing the selection mechanism of its predecessor: the crowding-distance measure is replaced by a selection based on well-distributed and adaptive reference points. These reference points help maintain the diversity of population members and allow NSGA-III to perform well on MOPs with differently scaled objective values. This is an advantage of NSGA-III and another reason why we choose it to solve the SMWTA problem.

NSGA-III has been successfully applied to real-world engineering problems [27, 28] and has several proposed variants, such as combining different variation operators [29], solving monoobjective problems [30], and integrating alternative domination schemes [31]. As far as we know, no previous work has studied the MWTA problem with three objective functions or applied the NSGA-III algorithm to the MWTA problem.

In this paper, we propose an improved NSGA-III (I-NSGA-III) for solving the SMWTA problem. The proposed algorithm seeks better Pareto-optimal trade-offs among maximizing the expected damage, minimizing the cost, and maximizing the value of fighter combat. Based on the framework of the original NSGA-III, the proposed algorithm is devised with several attractive features to enhance optimization performance, including an improvement strategy for reference points and an online operator selection mechanism.

Improvement Strategy for Reference Points. As studies [24, 26] show, the reference points of the original NSGA-III are uniformly distributed on a hyperplane to guide solutions toward convergence. The locations of these reference points are predefined, but the true Pareto front of the SMWTA problem is unknown beforehand, so mismatches between the reference points and the true Pareto front may degrade the search ability of the algorithm. If appropriate reference points can be continuously generated during the evolution from information provided by the current population, a solution set with good performance can be achieved. Therefore, we add an improvement strategy for reference points to the original NSGA-III algorithm, that is, continuously generating good reference points and eliminating useless ones.

Online Operator Selection Mechanism. Crossover and mutation operators used in the evolutionary process of NSGA-III generate offspring solutions to update the population and strongly affect search capability. Choosing the right operators depends on experience and knowledge of the problem. The online operator selection mechanism proposed in this paper automatically selects operators from a pool containing simulated binary crossover, DE/rand/1, and nonlinear differential evolution crossover; different crossover operators can be selected online according to information from previous generations. Another benefit of this mechanism is that the operator choice can adapt to the search landscape and improve the quality of the Pareto-optimal solutions.

The rest of this paper is organized as follows. Section 2 reviews the related work. In Section 3, a new mathematical model of the SMWTA problem and its assumptions are presented. Section 4 introduces the NSGA-III algorithm and presents the proposed I-NSGA-III for solving the WTA problem; the detailed improvements of the proposed algorithm are also introduced there. Section 5 is divided into two subsections: (i) to verify the proposed algorithm, five state-of-the-art algorithms are considered in comparison studies; (ii) the proposed algorithm and others, such as NSGA-III [24], MPACO [23], and NSGA-II, are tested on the SMWTA problem. Section 6 concludes the paper and presents directions for future work.

2. Related Work

Many realistic problems contain several (two or more) conflicting objectives that are to be minimized or maximized simultaneously [32]. Most single-objective optimization problems yield a single solution, but multiobjective optimization problems (MOPs) yield a set of Pareto solutions that trade off all the objective functions under the given constraints. Generally, a multiobjective optimization problem can be stated as follows [33]:

minimize F(x) = (f_1(x), f_2(x), ..., f_m(x)), subject to x in Omega,

where Omega denotes the space of decision variables and R^m denotes the space of objective functions, with F: Omega -> R^m.

There have been various studies on multiobjective evolutionary optimization besides NSGA-III, such as a set-based genetic algorithm for interval many-objective optimization problems, set-based many-objective optimization guided by a preferred region, a many-objective evolutionary algorithm using a one-by-one selection strategy, and many-objective evolutionary optimization based on reference points. These lines of work are reviewed below.

2.1. Set-Based Evolutionary Optimization

The goal of a multiobjective evolutionary algorithm (MOEA) is to seek a Pareto solution set that is well converged, evenly distributed, and well extended. If a set of solutions and its performance indicators are taken as the decision variable and the objectives of a new optimization problem, respectively, a Pareto-optimal set that satisfies the performance indicators is more likely to be obtained. Based on this idea, a many-objective optimization problem (MaOP) can be transformed into an MOP with two or three objectives, and a series of set-based evolutionary operators is then employed to solve the transformed MOP [34]. Compared with traditional MOEAs, set-based MOEAs have two advantages: (i) the new objectives measure a solution set as a whole, and (ii) each individual of the set-based evolutionary optimization is a solution set consisting of several solutions of the original problem.

Researchers have carried out studies on set-based MOEAs, including the frameworks, the methods of transforming objectives, the approaches for comparing set-based individuals, and so on. The first set-based MOEA was proposed by Bader et al. [35]. In their work, solutions in a population are first divided into a number of solution sets of the same size, and then the hypervolume indicator is adopted to assess the performance of those sets. In the method proposed by Zitzler et al. [36], not only is the preference relation between a pair of set-based individuals defined, but the representation of preferences, the design of the algorithm, and the performance evaluation are also incorporated into a framework. Bader et al. [35] presented a set-based Pareto dominance relation and designed a fitness function reflecting the decision maker's preference to effectively solve MaOPs; a comparison with traditional MOEAs shows that the proposed method is effective. In addition, Gong et al. [37] presented a set-based genetic algorithm for interval MaOPs based on hypervolume and imprecision.

2.2. Local Search-Based Evolutionary Optimization

Ishibuchi and Murata [38] first proposed combining an MOEA with a local search method (IM-MOGLS for short). In their work, a weighted-sum approach combines all objectives into one. After offspring are generated by genetic operators, a local search is conducted from each new individual to optimize the combined objective. Building on IM-MOGLS, Ishibuchi et al. [39] added more selectivity to its starting points. Ishibuchi and Narukawa [40] proposed a hybrid of NSGA-II and local search. Knowles and Corne [41] combined local search with PAES. The use of gradients appeared in the Pareto descent method (PDM) proposed by Harada et al. [42] and in the work of Bosman [43]. One of the few applications of achievement scalarization functions (ASFs) in the MOEA area was done by Sindhya et al. [44]. An MOEA was combined with a rough set-based local search in the work of Santana-Quintero et al. [45]. A genetic local search algorithm for multiobjective combinatorial optimization (MOCO) was proposed by Jaszkiewicz [46]: first, Pareto ranking and a utility function are applied to obtain the best solutions; second, pairs of solutions are selected randomly to undergo recombination; finally, local search is applied to the offspring pairs. In their study, the algorithm successfully solves the traveling salesperson problem (TSP).

The above studies combine an MOEA with classical local search methods; however, none of them applies the local search method to theoretically identify poor solutions in a population. Abouhawwash et al. [47] proposed a new hybrid MOEA procedure that first identifies poorly converged nondominated solutions and then improves them using an ASF-based local search operator. They encouraged researchers to pay more attention to the Karush-Kuhn-Tucker proximity metric (KKTPM) and other theoretical optimality properties of solutions in arriving at better multiobjective optimization algorithms.

2.3. Reference Point-Based Evolutionary Optimization

Existing reference point-based approaches usually adopt only one reference point to represent the decision maker's ideal solution. Wierzbicki [48] first proposed a reference point approach in which the goal is to reach the Pareto solution closest to a supplied reference point (aspiration level) by solving an achievement scalarization problem. Deb and Sundar [32] introduced the decision maker's preference to find a preferred set of solutions near the reference point. Decomposition strategies have also been incorporated into reference point approaches to find preferred regions, as in the method proposed by Mohammadi et al. [49].

To date, only a few studies have pursued the whole Pareto-optimal solution set by employing multiple reference points. Figueira et al. [50] proposed a parallel multiple reference point approach for MOPs. In their work, the reference points are generated by estimating the bounds of the Pareto front, and solutions near each reference point can be obtained in parallel. This a priori method is very convenient; however, the later evolution process increases the computational complexity. Wang et al. [51] proposed a preference-inspired coevolutionary approach. Although solutions and reference points are optimized simultaneously during the evolution process, the fitness value of an individual is calculated by traditional Pareto dominance. In the work of Deb and Jain [24], a hyperplane covering the whole objective space is obtained according to the current population, and a set of well-distributed reference points is then generated on the hyperplane. However, the Pareto fronts of most practical problems are not uniformly distributed in the whole objective space, so it is necessary to adopt reference points that adapt to each problem. Liu et al. [52] proposed a reference point-based evolutionary algorithm (RPEA). However, a key parameter that seriously affects the performance of the algorithm is held constant during evolution. In addition, only the Tchebycheff approach is adopted in that study, and it may not be appropriate for all kinds of problems.

2.4. Indicator-Based Evolutionary Optimization

Compared with the above approaches, indicator-based evolutionary algorithms (IBEAs) [53] adopt a single indicator that accounts for both the convergence and distribution performance of a solution set. Because solutions can be selected one by one based on the performance indicator, this is also called one-by-one selection evolutionary optimization. The hypervolume is usually adopted as the indicator in IBEAs. However, the computational complexity of calculating the hypervolume increases exponentially with the number of objectives, so it is hard to apply to MaOPs. To address this issue, Bader and Zitzler [54] proposed an improved hypervolume-based algorithm, HypE, in which Monte Carlo simulation is applied to estimate the hypervolume. This method saves computational resources while ensuring the accuracy of the estimate. In recent years, further algorithms [55, 56] have been proposed to enhance the computational efficiency of IBEAs for solving MaOPs.

Motivated by simultaneously measuring the distance of solutions to the Pareto-optimal front and maintaining a sufficient distance between them, Liu et al. [57] proposed a many-objective evolutionary algorithm using a one-by-one selection strategy (1by1EA for short). However, there are two issues with this algorithm. (i) In 1by1EA, the contour lines formed by the convergence indicator are assumed to have a shape similar to that of the Pareto-optimal front of the optimization problem, yet the shape of the Pareto-optimal front of a practical problem is frequently unknown beforehand. (ii) The algorithm includes no mechanism that adaptively chooses an appropriate convergence indicator or uses an ensemble of multiple convergence indicators during the evolution. These two issues motivate our improvements to the original NSGA-III.

3. Problem Formulation

The WTA problem can be described as finding a proper assignment of weapon units to target units, as illustrated in Figure 1. The formulation of the problem, including the assumptions and the new three-objective mathematical model, is introduced in this section.

3.1. Assumption Description

In this research, to establish a reasonable WTA mathematical model, the following assumptions are made:

Assumption 1. We assume that the scenario is composed of fighters, missiles, and targets and that the opposing groups are not necessarily equal in quantity. (Each fighter is equivalent to one platform, which possesses different kinds and quantities of missiles.)

Assumption 2. Each fighter can use different missiles to attack one target, but each missile can attack only one target.

Assumption 3. The total number of distributed units of each type of missile cannot exceed the number of assigned missile unit resources in a military air operation.

Assumption 4. We assume that the kill probability, labeled p_ij, between a missile (the ith unit of the missile set) and the unit being attacked (the jth unit of the target set) is provided.

Assumption 5. If the target is within the work area of a missile, the missile can be assigned effectively; otherwise, it cannot.

3.2. Mathematical Model

Multiobjective WTA optimization seeks a balance among the maximum expected damage, minimum missile consumption, and maximum combat value. The definitions and constraints of the optimization model are given as follows.

Definition 1. We introduce a new objective, the value of fighter combat, on the basis of the bi-objective WTA model established in our previous work [23]. The multiobjective WTA model maximizes the total effectiveness of the attack, minimizes the cost of missiles, and maximizes the value of fighter combat.
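The model's equations do not survive in this copy. As a hedged sketch only, a common form for the first two objectives, written with the kill probability p_ij, missile cost c_i, assignment variable x_ij from the definitions below, and an assumed target-value weight v_j, is:

```latex
% Sketch of a standard MWTA formulation; not necessarily the authors' exact model.
% W missiles, T targets; v_j is an assumed target-value weight.
\max F_1 = \sum_{j=1}^{T} v_j \Bigl[\, 1 - \prod_{i=1}^{W} (1 - p_{ij})^{x_{ij}} \Bigr]
  \quad \text{(expected damage to the enemy)}
\min F_2 = \sum_{i=1}^{W} \sum_{j=1}^{T} c_i \, x_{ij}
  \quad \text{(total missile cost)}
```

The third objective, the value of fighter combat, is specific to this paper's model and is not reconstructed here.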

Definition 2. n_i represents the number of missiles carried by the ith fighter, and m_i represents the number of missiles that have already been launched by the ith fighter.

Definition 3. p_ij represents the kill probability of missile i attacking target j.

Definition 4. The decision table is described in Table 2, where x_ij is a Boolean value that represents whether missile i is assigned to target j. The relationship between the Boolean value and missile allocation is shown in Table 3.

Definition 5. The constant c_i represents the cost of the ith missile in this paper.

Definition 6. Based on the real battlefield, we assume that the number of missiles per fighter is not more than 4.

Definition 7. The pilot operation factor, proposed in our previous article [23], is also included in the model.

Constraints. The three constraints that the above variables must satisfy are shown in Table 4.

4. Nondominated Sorting Genetic Algorithm III

4.1. Introduction of NSGA-III

The basic framework of the NSGA-III algorithm remains similar to that of NSGA-II, with significant changes in its selection mechanism [24]. Unlike in NSGA-II, a series of reference points is introduced to improve the diversity and convergence of NSGA-III. The proposed improved NSGA-III is based on the structure of NSGA-III; hence, we describe NSGA-III here.

The algorithm starts with an initial population of N feasible solutions and a series of widely distributed M-dimensional reference points, where the number of divisions p along each objective is given by the user. Das and Dennis's systematic approach [58] is used to place the reference points on the normalized hyperplane having an intercept of one on each axis. The total number of reference points H in an M-objective problem is given by H = C(M + p - 1, p), the number of ways of choosing p divisions among M objectives.
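As an illustration, Das and Dennis's construction can be sketched in Python (a hedged sketch; the function and variable names are ours, not from the paper):

```python
from itertools import combinations

def das_dennis(M, p):
    """Generate Das and Dennis's structured reference points on the
    normalized hyperplane z_1 + ... + z_M = 1 for M objectives, p divisions."""
    # Each point has coordinates k_i / p with k_1 + ... + k_M = p.
    # Stars-and-bars enumeration of the compositions of p into M parts.
    points = []
    for bars in combinations(range(p + M - 1), M - 1):
        prev, parts = -1, []
        for b in bars:
            parts.append(b - prev - 1)  # gap before this bar = one part
            prev = b
        parts.append(p + M - 2 - prev)  # remaining stars after the last bar
        points.append([k / p for k in parts])
    return points

# For M = 3 objectives and p = 12 divisions, H = C(14, 2) = 91 points.
pts = das_dennis(3, 12)
print(len(pts))  # 91
```

Every generated point sums to one, so each lies on the normalized hyperplane, and the count matches the closed-form H above.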

NSGA-III uses a set of reference directions to maintain diversity among solutions. A reference direction is a ray starting at the origin and passing through a reference point, as illustrated in Figure 2. The population size N is chosen to be the smallest multiple of four greater than H, with the idea that one population member is expected to be found for every reference direction [30].

Let the parent population at generation t be P_t (of size N). The offspring population Q_t, having N members, is obtained by recombination and mutation of P_t. R_t = P_t + Q_t is the combination of the parent and offspring populations (of size 2N). To preserve elite members, R_t is sorted into nondomination levels (F_1, F_2, ...). Thereafter, the individuals of each nondomination level are selected to construct a new population S_t, starting from the first nondomination level F_1, until the size of S_t first reaches or exceeds N. Suppose that the last level included is the lth level, F_l. In most situations, only part of F_l can be accepted, so its individuals must be ranked by a diversity-maintenance operator. NSGA-II achieves this by computing the crowding distance for the lth level, but NSGA-III replaces it with reference direction-based niching. Before this operation, objective values are normalized by formulas (4), (5), and (6):

f'_i(x) = f_i(x) - z_i^min, (4)
ASF(x, w) = max_{i=1..M} f'_i(x) / w_i, (5)
f^n_i(x) = f'_i(x) / a_i, (6)

where the ideal point of the population is determined by identifying the minimum value z_i^min of each objective function and constructing the ideal point z = (z_1^min, ..., z_M^min); f'_i denotes the translated objective functions; the extreme point on each objective axis is found by minimizing the achievement scalarizing function ASF with the axis weight vector w for that objective (any w_i = 0 is replaced by a small number 10^-6); f^n_i denotes the normalized objective functions; and a_i is the intercept of the ith objective axis of the hyperplane constructed from the extreme points.
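The normalization step can be sketched in Python as follows. This is a simplified reading: the full NSGA-III procedure handles duplicate extreme points and degenerate hyperplanes more carefully, and the names below are ours.

```python
import numpy as np

def normalize(F, eps=1e-6):
    """Simplified NSGA-III-style normalization of an (N, M) objective matrix:
    translate by the ideal point, find extreme points with the achievement
    scalarizing function (ASF), and divide by the hyperplane intercepts."""
    z_min = F.min(axis=0)                 # ideal point: per-objective minima
    Fp = F - z_min                        # translated objectives f'_i
    M = F.shape[1]
    extremes = np.empty((M, M))
    for i in range(M):
        w = np.full(M, eps)               # axis-aligned weight vector
        w[i] = 1.0
        asf = (Fp / w).max(axis=1)        # ASF(x, w) = max_j f'_j(x) / w_j
        extremes[i] = Fp[asf.argmin()]    # extreme point for objective i
    try:
        # Hyperplane through the extreme points: extremes @ (1/a) = 1.
        b = np.linalg.solve(extremes, np.ones(M))
        a = 1.0 / b                       # intercepts a_i on each axis
        if np.any(a <= 0):
            raise np.linalg.LinAlgError("negative intercept")
    except np.linalg.LinAlgError:
        a = Fp.max(axis=0)                # degenerate case: fall back to maxima
    a = np.where(a < eps, eps, a)
    return Fp / a                         # normalized objectives f^n_i
```

After normalization all objectives are dimensionless and comparable, which is what lets the structured reference points on the unit hyperplane guide differently scaled problems.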

After the normalizing operation, the structured reference points lie on this normalized hyperplane. In order to associate each population member with a reference point, the shortest perpendicular distance between each individual of the combined population and each reference line is calculated.

Finally, a niche-preservation strategy is employed to select the individuals of the lth level F_l that are associated with the reference points. Let the niche count rho_j denote the number of already-selected population members (those above the lth level) associated with the jth reference point. The niche-preservation strategy is as follows:

(1) The reference point set J_min with the minimum niche count is identified. When J_min contains more than one point, one reference point j is selected from it at random.
(2) According to the value of rho_j, two cases are distinguished:
(i) If rho_j = 0:
(a) If one or more members of the front F_l are associated with the jth reference point, the one having the shortest perpendicular distance from the jth reference line is added to the next population P_{t+1}, and the count rho_j is incremented by one.
(b) If no member of the front F_l is associated with the jth reference point, the jth reference point is ignored for the current generation.
(ii) If rho_j >= 1, a randomly chosen member of the front F_l that is associated with the jth reference point is added to P_{t+1}, and the count rho_j is incremented by one.

After the counts are updated, the above procedure is repeated until P_{t+1} is filled with N individuals.
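A compact Python sketch of the association and niche-preservation steps follows. It is our simplified reading of the procedure (names are ours), operating on already-normalized objective vectors.

```python
import numpy as np

def perp_dist(f, z):
    """Perpendicular distance from point f to the reference line through z."""
    z = np.asarray(z, dtype=float)
    z = z / np.linalg.norm(z)
    return np.linalg.norm(f - np.dot(f, z) * z)

def niching(F_sel, F_last, refs, K, rng=None):
    """Pick K members of the last front via NSGA-III-style niche preservation.
    F_sel: normalized objectives of already-selected members (levels 1..l-1).
    F_last: normalized objectives of the last, partially admitted front."""
    rng = rng or np.random.default_rng(0)
    refs = np.asarray(refs, dtype=float)
    # Associate every member with its closest reference direction.
    assoc_sel = [min(range(len(refs)), key=lambda j: perp_dist(f, refs[j]))
                 for f in F_sel]
    assoc_last = [min(range(len(refs)), key=lambda j: perp_dist(f, refs[j]))
                  for f in F_last]
    dist_last = [perp_dist(f, refs[assoc_last[i]]) for i, f in enumerate(F_last)]
    rho = np.bincount(np.array(assoc_sel, dtype=int), minlength=len(refs))
    chosen, remaining = [], set(range(len(F_last)))
    active = set(range(len(refs)))
    while len(chosen) < K and active:
        # Least-crowded reference point, ties broken at random.
        j = min(active, key=lambda r: (rho[r], rng.random()))
        cands = [i for i in remaining if assoc_last[i] == j]
        if not cands:
            active.discard(j)             # no last-front member: skip this ref
            continue
        if rho[j] == 0:                   # empty niche: take the closest member
            pick = min(cands, key=lambda i: dist_last[i])
        else:                             # occupied niche: take a random member
            pick = cands[int(rng.integers(len(cands)))]
        chosen.append(pick)
        remaining.discard(pick)
        rho[j] += 1
    return chosen
```

Members of the last front attached to empty niches are admitted first, which is what pulls the population toward under-represented regions of the front.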

The flow chart of the NSGA-III algorithm is shown in Figure 3.

4.2. Improvement Strategy of Reference Points

In this section, an improvement strategy of reference points is proposed. The strategy mainly includes two parts: (i) generation of new reference points and (ii) elimination of useless reference points.

NSGA-III requires a set of reference points to be supplied before the algorithm can be applied [26]. If the user does not put forward specific requirements for the Pareto-optimal front, a structured set of points created by Das and Dennis's approach [58] is placed on a normalized hyperplane. NSGA-III was originally designed to find Pareto-optimal points close to each of these reference points, whose positions are predefined at the beginning. However, the true Pareto front of a practical optimization problem (like the MWTA problem) is usually unknown beforehand, so the preset reference points may not reflect the development trend of the true Pareto-optimal front. In this study, a set of reference points with good performance in convergence and distribution is created by making full use of information provided by the current population. As new reference points are added, the total number of reference points grows, and the computational complexity of the algorithm increases with it. In order to preserve the convergence speed of the algorithm, we also propose an elimination mechanism for reference points.

4.2.1. Generation of New Reference Points

We can see from the NSGA-III algorithm above that after the niche operation, the population P_{t+1} is created and the niche count of each supplied reference point is updated. All reference points are expected to be useful in finding nondominated fronts (rho_j >= 1 for every reference point j). If the jth reference point is not associated with any population member, its niche count rho_j is zero. If NSGA-III will never find an associated population member for the jth reference point, that reference point is useless, and it would be better to replace it with a new reference point that correctly reflects the direction of the Pareto-optimal front. However, we do not know in advance whether the jth reference point will eventually be useful. Under this circumstance, we simply add a set of reference points by adopting formulas (7) and (8); the number of new reference points equals the number of existing reference points:

z_i = f_i^min + lambda * (f_i^max - f_i^min), (7)
lambda in [0, 1], (8)

where lambda is a random number and z_i denotes the ith component of a new reference point for objective i; f_i^max and f_i^min are the maximal and minimal values, respectively, of the ith objective in the current population.
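The generation step can be sketched in Python under our reading of formulas (7) and (8), namely that each new point component is sampled uniformly between the per-objective minima and maxima of the current population (names are ours):

```python
import random

def generate_reference_points(pop_objs, existing, n_new=None, seed=0):
    """Sample candidate reference points between the per-objective minima and
    maxima of the current population; keep only points in the first quadrant
    that are not already in the reference set."""
    rnd = random.Random(seed)
    M = len(pop_objs[0])
    f_min = [min(f[i] for f in pop_objs) for i in range(M)]
    f_max = [max(f[i] for f in pop_objs) for i in range(M)]
    n_new = n_new if n_new is not None else len(existing)
    seen = {tuple(z) for z in existing}
    new_points = []
    while len(new_points) < n_new:
        z = tuple(f_min[i] + rnd.random() * (f_max[i] - f_min[i])
                  for i in range(M))
        if all(c >= 0 for c in z) and z not in seen:  # first quadrant, no dup
            seen.add(z)
            new_points.append(z)
    return new_points
```

Because the bounds come from the current population, the sampled points track the region the search has actually reached rather than the whole unit simplex.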

The pseudocode can be demonstrated as follows:

Input: population P_{t+1}, reference point set Z, niche counts rho
Output: new reference point set Z'
1: After the niche operation, P_{t+1} is created and rho is updated;
2: for each selected individual in P_{t+1} do
3:   for each objective i do
4:     generate a reference point z according to formulas (7) and (8);
5:     check the position of z;
6:     if z lies in the first quadrant then
7:       check whether z already exists in Z;
8:       if z does not exist in Z then
9:         store z in Z';
10:      end if
11:    end if
12:  end for
13: end for
14: Output Z'.

In many cases, the total number of reference points is greatly increased by the above operation, and many of the new reference points eventually become useless. With a large increase in reference points, the algorithm also slows down. Thus, we preserve the convergence speed of the algorithm by eliminating useless reference points, as described in the following subsection.

4.2.2. Elimination of Useless Reference Points

After new reference points are generated by the above operation, the niche count rho_j of every reference point is recalculated and recorded. Note that the total niche count equals the population size N. Ideally, each reference point is associated with exactly one solution from the population; that is, the solutions are well distributed among the reference points. All reference points with rho_j = 0 are then removed from the total reference point set. However, in order to maintain a uniform distribution of reference points, the points obtained by Das and Dennis's systematic approach [58] are never deleted. Thus, the remaining reference points consist of two parts: (i) the original structured reference points (even if their niche count is zero) and (ii) the newly generated reference points whose niche count is greater than zero.
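The elimination rule just described reduces to a short filter; the sketch below uses our own naming and represents points as tuples:

```python
def eliminate_useless(refs, original, rho):
    """Remove reference points whose niche count is zero, but always keep the
    structured Das-Dennis points so the set stays evenly spread.
    refs: list of reference points (tuples); original: set of structured
    points; rho: dict mapping each point to its recalculated niche count."""
    return [z for z in refs if z in original or rho.get(z, 0) > 0]
```

Only generated points can be dropped, so the evenly spaced backbone of the reference set survives every generation.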

Based on the niche count of the respective reference points and information provided by the current population, the reference point set is adaptively redefined by generation and elimination operations. The improvement strategy for the reference points is intended to maintain diversity and guide the solutions closer to the Pareto-optimal front.

4.3. Online Operator Selection Mechanism

In multiobjective evolutionary algorithms (MOEAs), crossover and mutation operators generate offspring solutions to update the population, and they strongly affect search capability. In the original NSGA-III algorithm, only the simulated binary crossover (SBX) operator is adopted, and different crossover operators cannot be selected online according to information from previous generations. Therefore, we propose a strategy that adaptively selects operators from a pool based on the performance of previous generations, that is, an online operator selection (OOS) mechanism. According to a study in the literature [59], adaptive operator selection can choose a strategy appropriate to the search landscape and produce better results.

Based on the information collected by the credit assignment method, the OOS mechanism selects operators for generating new solutions. In this paper, we use a probability method with a roulette wheel-like process for selecting an operator, which balances exploration and exploitation.
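The roulette wheel-like selection can be sketched as below; this is an illustrative Python sketch under our own naming, not the paper's code. It draws a uniform random number and walks the cumulative probabilities until it passes the draw.

```python
import random

def roulette_select(probs, rng=random):
    """Roulette-wheel choice: index i is returned with probability probs[i].

    probs is assumed to sum to 1; the final return guards against
    floating-point round-off in the cumulative sum.
    """
    u = rng.random()          # uniform draw in [0, 1)
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if u < acc:
            return i
    return len(probs) - 1
```

Because every operator keeps a nonzero selection probability (see the probability matching rule below in the text), even currently poor operators are occasionally tried, which is the exploration half of the trade-off.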

Probability matching (PM) is one of the best-known probabilistic operator selection methods. The probability of operator i being selected at the next generation is calculated as follows [60]:

p_(i,t+1) = p_min + (1 − K · p_min) · q_(i,t+1) / Σ_(j=1..K) q_(j,t+1),  (9)

where K is the number of operators and p_min is the minimal selection probability guaranteed to any operator; q_(i,t+1) is the quality associated with operator i. Clearly, the probabilities of all operators sum to 1. If one operator receives rewards over many generations while the others' rewards remain close to 0, its selection probability approaches the maximum value p_max = 1 − (K − 1) · p_min.
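The PM rule of formula (9) can be sketched directly; this is an illustrative Python sketch with our own function name, and we additionally assume a uniform fallback when no operator has earned any quality yet (the formula is undefined for a zero quality sum).

```python
def pm_probabilities(quality, p_min):
    """Probability matching: operator i gets
        p_i = p_min + (1 - K * p_min) * q_i / sum_j q_j,
    so every operator keeps at least probability p_min (exploration)
    while high-quality operators are favoured (exploitation)."""
    K = len(quality)
    total = sum(quality)
    if total == 0:                     # no rewards yet: choose uniformly
        return [1.0 / K] * K
    return [p_min + (1.0 - K * p_min) * q / total for q in quality]
```

With K = 3, p_min = 0.1, and one operator taking all the reward, that operator's probability reaches p_max = 1 − (K − 1)·p_min = 0.8, as stated above.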

The pool in the proposed algorithm is composed of three well-known strategies: simulated binary crossover (SBX), DE/rand/1, and nonlinear differential evolution crossover (NDE). In the original NSGA-III algorithm, the parent individuals are randomly selected from the population. The small difference in our algorithm is that the first parent individual of every strategy is set to the current solution. The three operators are described below.

4.3.1. Simulated Binary Crossover

The simulated binary crossover (SBX) operator was proposed by Deb and Agrawal and is found to be particularly useful in problems where the upper and lower bounds of the multiobjective global optimum are not known a priori [61]. Two offspring solutions, c1 and c2, are created from two parent solutions, p1 and p2, by formulas (10) and (11):

c1 = 0.5 · [(1 + β) · p1 + (1 − β) · p2],  (10)
c2 = 0.5 · [(1 − β) · p1 + (1 + β) · p2],  (11)

where the spread factor β is defined as the ratio of the absolute difference in offspring values to that of the parents; β is computed from a random number u by formula (12):

β = (2u)^(1/(η+1)) if u ≤ 0.5, and β = (1 / (2(1 − u)))^(1/(η+1)) otherwise,  (12)

where u is a random number in [0, 1) and η is the crossover distribution index given by the user.
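Formulas (10)–(12) can be sketched for a single real-valued variable as follows; this is an illustrative Python sketch (names are ours), without the boundary handling a full implementation would add.

```python
import random

def sbx_pair(p1, p2, eta=30.0):
    """Simulated binary crossover on one real variable.

    Draw u ~ U[0,1) and compute the spread factor beta (formula (12)):
        beta = (2u)^(1/(eta+1))              if u <= 0.5
        beta = (1 / (2(1-u)))^(1/(eta+1))    otherwise
    Offspring are symmetric about the parents' midpoint
    (formulas (10) and (11))."""
    u = random.random()
    if u <= 0.5:
        beta = (2.0 * u) ** (1.0 / (eta + 1.0))
    else:
        beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0))
    c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
    c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
    return c1, c2
```

A useful sanity check on formulas (10) and (11) is that c1 + c2 = p1 + p2 for any β, i.e. SBX preserves the parents' midpoint; larger η concentrates offspring nearer the parents.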

4.3.2. DE/rand/1

The DE/rand/1 operator is one of the most commonly used DE variants [62], in which all difference vectors are built from solutions chosen at random from the population. Hence, this strategy does not generate biased or special search directions; a new direction is selected at random each time. The DE/rand/1 strategy is defined by formula (13):

v = x_r1 + F · (x_r2 − x_r3),  (13)

where x_r1, x_r2, and x_r3 are mutually distinct solutions chosen at random from the population and F is the mutation scaling factor.
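A minimal sketch of formula (13) is given below (illustrative Python, names are ours). Following the variant described above, the base vector x_r1 is replaced by the current solution x_i; the classic operator would instead draw a third random index for the base.

```python
import random

def de_rand_1(pop, i, F=0.5):
    """DE/rand/1-style mutant for the i-th individual.

    Classic form: v = x_r1 + F * (x_r2 - x_r3) with distinct random
    indices; here the base vector is the current solution pop[i],
    as in the operator pool described in the text."""
    candidates = [j for j in range(len(pop)) if j != i]
    r2, r3 = random.sample(candidates, 2)   # two distinct random donors
    return [xi + F * (a - b)
            for xi, a, b in zip(pop[i], pop[r2], pop[r3])]
```

Because the difference vector (x_r2 − x_r3) shrinks as the population converges, the mutation step adapts its scale automatically, which is a standard argument for DE-style operators.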

4.3.3. Nonlinear Differential Evolution Crossover

The nonlinear differential evolution crossover (NDE) strategy was presented in the literature [63] for the MOEA/D framework and is a hybrid crossover operator based on polynomials. The advantage of this strategy is that it does not require the crossover rate or the mutation scaling factor to be specified. The offspring is generated by formula (14), where the interpolation coefficient is drawn according to an interpolation probability [63] and is defined by formula (15).

The remaining parameters are given by formulas (16), (17), and (18), respectively.

4.4. The Proposed NSGA-III Algorithm

In this paper, we add the improvement strategy of reference points and the online operator selection mechanism to the NSGA-III framework. The pseudocode of the proposed NSGA-III is shown in Algorithm 2, where the differences from the original NSGA-III are set in italic and bold-italic.

Input: population size N, reference point set Z, maximum iteration t_max, operator pool
Output: Pareto solutions
1:   Initialize uniformly distributed reference points Z;
2:   Initialize population P_0; t ← 0;
3:   while termination conditions are not satisfied (t < t_max) do
4:      op ← Select operator(pool);
5:      Q_t ← Recombination & Mutation(P_t, op);
6:      (F_1, F_2, …) = Non-dominated-sort(P_t ∪ Q_t);
7:      S_t ← ∅, i ← 1;
8:      repeat
9:           S_t ← S_t ∪ F_i;
10:         i ← i + 1;
11:       until |S_t| ≥ N;
12:       if |S_t| = N then
13:         P_{t+1} ← S_t; break;
14:       else
15:         Normalize the objectives;
16:         Delete the useless reference points;
17:         Associate each solution in S_t with a reference point;
18:         Compute niche count of reference points;
19:         Fill P_{t+1} with solutions from S_t using niching information;
20:         Generate new reference points;
21:       end if
22:       Calculate operator rewards (r_{op,t}, op);
23:       Update operator information (q_{op,t+1});
24:       t ← t + 1;
25: end while
26: return Pareto-optimal front.

Compared with the original NSGA-III, the proposed algorithm differs in two main parts: the bold-italic marks denote the improvement strategy of the reference points, and the italic marks denote the online operator selection mechanism.

(1) In the bold-italic parts, the reference points whose niche count is zero (Section 4.2.2) are first removed. Second, new reference points are generated by the process of Algorithm 1. Based on the niche count of each reference point and the information provided by the current population, the reference point set is thus adaptively redefined by generation and elimination operations.

(2) There are two differences between our proposed algorithm and the original NSGA-III in the italic parts.

(i) The first difference is the selection of the operator to be used (Step 4), which is made according to the probabilities associated with each operator. As the success probability of an operator increases, that operator is selected more often, and the quality of the solutions should improve accordingly. Because the same operator is applied to all recombinations (Step 5) within one generation, a deterministic selection rule would introduce an undesirable bias toward whichever operator performs best in the initial generations. Therefore, a stochastic selection mechanism is applied in this paper.

(ii) The other differences are the calculation of the rewards (Step 22) and the update of the information of each operator (Step 23).

(a) In this paper, each operator in the pool has an associated probability of being selected at generation t, given by formula (9). The adaptation process is based on the operator's quality, which is updated according to a reward. The reward adopted in this paper is a Boolean value computed in Step 22: if a solution is generated by the operator, does not belong to P_t, but belongs to P_{t+1}, the reward is 1; otherwise, the reward is 0.

(b) After calculating the rewards, the quality of each operator in the pool is updated by formula (19):

q_(i,t+1) = (1 − α) · q_(i,t) + α · r_(i,t),  (19)

where α ∈ (0, 1] is the adaption rate.
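The credit update of formula (19) is a one-line exponential recency-weighted average; the sketch below is illustrative Python under our own naming, assuming the Boolean reward described above.

```python
def update_quality(q, r, alpha=0.3):
    """Exponential recency-weighted credit update (formula (19)):
        q_{t+1} = (1 - alpha) * q_t + alpha * r_t
    alpha in (0, 1] is the adaption rate; r_t is the Boolean reward
    (1 if an offspring produced by the operator survived into P_{t+1})."""
    return (1.0 - alpha) * q + alpha * r
```

Larger values of alpha make the quality track recent rewards more quickly, while small values average over a longer history; the updated qualities feed back into formula (9) to set the next selection probabilities.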

Finally, the operator selection probabilities are updated by formula (9).

5. Experiment Results and Analysis

5.1. Test Problem

In order to verify the proposed algorithm, five state-of-the-art algorithms are considered for comparison: NSGA-III-OSD [64], NSGA-III [24], NSGA-II [65], MPACO [23], and MOEA/D [66]. NSGA-III and NSGA-II are the traditional NSGA algorithms. NSGA-III-OSD is an improved version of NSGA-III based on objective space decomposition. In the MOEA/D algorithm, a predefined set of weight vectors is applied to maintain solution diversity. MPACO is an algorithm we proposed previously [23]. All algorithms are tested on four benchmark many-objective problems (MaOPs), DTLZ1 to DTLZ4 [67].

For DTLZ, the four instances (DTLZ1–4) with 3, 5, 8, 10, and 15 objectives are used. Based on the work of Deb et al. [67], the number of decision variables is set as n = M + k − 1, where M is the number of objectives, with k = 5 for DTLZ1 and k = 10 for DTLZ2–4. According to the work in [68], the number of decision variables can also be written as n = k + l, where the position-related parameter k and the distance-related parameter l are set to 2 × (M − 1) and 20, respectively. The main characteristics of all benchmark problems are shown in Table 5.

In this subsection, the inverted generational distance (IGD) indicator [69] is used to evaluate the quality of a set of obtained nondominated solutions. This indicator measures the convergence and diversity of solutions simultaneously; smaller IGD values indicate that the final nondominated population has better convergence to, and coverage of, the Pareto front. Each algorithm is run 30 times independently on each test problem. For fairness, in each run all comparison algorithms perform the same maximum number of iterations, shown in Table 6. The population sizes used for different numbers of objectives are shown in Table 7.
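The IGD indicator can be sketched as follows; this is an illustrative Python sketch (names are ours), computing the mean distance from each point of the reference Pareto front to its nearest obtained solution.

```python
import numpy as np

def igd(ref_front, approx):
    """Inverted generational distance.

    Mean Euclidean distance from each reference Pareto-front point to
    its nearest solution in the obtained set. Lower is better: a small
    value requires both convergence (solutions near the front) and
    coverage (every region of the front has a close neighbour)."""
    ref = np.asarray(ref_front, dtype=float)
    app = np.asarray(approx, dtype=float)
    # Pairwise distance matrix of shape (|ref|, |approx|).
    d = np.linalg.norm(ref[:, None, :] - app[None, :, :], axis=2)
    return d.min(axis=1).mean()
```

Because the average runs over the reference front rather than over the obtained set, a solution set that clusters in one region of the front is penalized even if every solution in it is perfectly converged.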

The six algorithms considered in this study need to set some parameters, and five of them are shown in Table 8. The parameters of the MPACO algorithm can be found in the literature [23].

Comparison results of I-NSGA-III with the five other MOEAs in terms of IGD values on different objective dimensions of the DTLZ1–4 test problems are presented in Tables 9–12. These tables show the median and standard deviation of the IGD values over 30 independent runs for the six compared MOEAs, where the best median and standard deviation are highlighted in italic.

Based on the statistical results of DTLZ1, we can see that I-NSGA-III shows better performance than the other five MOEAs on three-, eight- and ten-objective test problems. For five- and fifteen-objective problems, it achieves the second smallest IGD value. Furthermore, NSGA-III can obtain the best IGD value on the fifteen-objective test problem and MPACO can obtain the best IGD value on the five-objective test problem. NSGA-II can deal with three-objective instances but works worse on more than three objectives.

Based on the statistical results of DTLZ2, we can see that the performance of I-NSGA-III, NSGA-III-OSD, and MOEA/D is comparable on this problem. NSGA-III is worse than the two improved algorithms (I-NSGA-III and NSGA-III-OSD) and MOEA/D; however, it is better than MPACO. Based on the IGD values obtained by NSGA-II on the three- and five-objective problems, we find that this algorithm can perform well there; but as the number of objectives increases, NSGA-II again works worst among the six algorithms.

Based on the statistical results of DTLZ3, we find that I-NSGA-III performs best among the six algorithms on all objective dimensions of the DTLZ3 problem. NSGA-III-OSD achieves IGD values similar to those of NSGA-III. Although MOEA/D achieves nearly the smallest IGD values on the three- and five-objective test problems, its performance deteriorates significantly on problems with more than five objectives. MPACO defeats MOEA/D on the eight-, ten-, and fifteen-objective test problems. In addition, NSGA-II remains ineffective on problems with more than three objectives.

Based on the statistical results of DTLZ4, we find that I-NSGA-III performs significantly better than the other five MOEAs on almost all DTLZ4 test problems, except the eight-objective instance. Furthermore, the standard deviation of IGD obtained by I-NSGA-III shows that the proposed algorithm is rather robust. NSGA-III-OSD shows the closest overall performance to I-NSGA-III. MOEA/D is significantly worse than MPACO. NSGA-II still has no advantage on the DTLZ4 test problem.

The above results show that the proposed I-NSGA-III could perform well on almost all the instances in DTLZ1–4, and the obtained solution sets have good convergence and diversity. In the next subsection, the proposed algorithm will be applied to solve an SMWTA problem.

5.2. SMWTA Problem
5.2.1. Parameter Setting

For the MWTA problem, the population size of the I-NSGA-III algorithm is 150, and the maximum number of iterations is 200. According to the approach in the literature [58], the total number of reference points is set to 120. The DE scaling factor, the polynomial mutation probability, and the PM minimum probability take values frequently used in the literature [68]. Further parameters of I-NSGA-III are shown in Table 13.

Table 14 shows the number of reference points , population size , and maximum iteration for different algorithms. Other parameters in the NSGA-III algorithm are the same as those in the literature [24]. The parameters of the MPACO algorithm and the NSGA-II algorithm can be found in the literature [23].

5.2.2. Simulation Environment

As shown in Table 15, the proposed algorithm was implemented in C++ and run on an Intel(R) Core(TM) i5-4460T CPU at 1.90 GHz with 8 GB of RAM, under 64-bit Windows 7.

5.2.3. Numerical Experiments and Analysis

We use the same specific instance as in our previous work [23] to verify the performance of the algorithm. The instance includes 4 fighters that carry different numbers of missiles (12 missiles in total) and 10 targets. Appendix A lists the missile damage probabilities, pilot operation factors, and missile costs.

First, an enumeration approach [19] is employed to obtain a set of evenly distributed true optimal solutions and thus evenly distributed true Pareto solutions (PSs) for the specific instance. Second, in order to verify the applicability and feasibility of the proposed algorithm, we apply I-NSGA-III, NSGA-III, MPACO, and NSGA-II to find PSs for the instance. The statistical results are shown in Figures 1–4.

Figures 1–4 show the distribution of the true PSs solved by the enumeration method and the final PSs obtained by I-NSGA-III, NSGA-III, MPACO, and NSGA-II. We can see that the optimization results of the I-NSGA-III algorithm are clearly better than those of the other algorithms, because I-NSGA-III finds close and evenly distributed PSs in the objective space. It is evident that the algorithm can guarantee the quality of the solutions, so the feasibility of I-NSGA-III for optimizing the SMWTA problem is well verified.

In Figure 1, 150 evenly distributed solutions on the true Pareto front are found, while only 120 reference points are preset. Since reference points and solutions are in one-to-one correspondence, the effectiveness of the reference point improvement strategy (generating new reference points and eliminating useless ones) is demonstrated: this strategy increases the number of reference points from 120 to 150, improves the efficiency of the algorithm, and finds more Pareto-optimal solutions that meet the requirements. Meanwhile, owing to the online operator selection mechanism in the I-NSGA-III algorithm, the search adapts better to the landscape and the quality of the solutions is improved. In contrast, as shown in Figure 2, the original NSGA-III algorithm finds only 95 solutions from the same initial 120 reference points, and only a small number of individuals are located on the Pareto front. Therefore, the original NSGA-III is less efficient than the proposed algorithm for solving the SMWTA problem. As we can see from Figures 3 and 4, although MPACO is superior to NSGA-II to some extent, both algorithms are clearly inferior to the I-NSGA-III algorithm.

We analyze results from Appendix B and Appendix C as follows:

When funds for national defense are sufficient and detailed enemy information is available, we can choose Scheme 1, which costs the most money but achieves the greatest expected damage to the enemy, completing a fatal attack. Since we would then hold a suppressive advantage in military power, the task can be accomplished with only one attack under Scheme 1. However, this situation is rare in real combat. When we have only limited information about the enemy, or it is difficult to launch a large-scale attack, we choose one scheme among Schemes 147, 149, and 150, which achieve the maximum fighter combat value, to launch a probing attack. Among these three schemes, taking into account the lowest monetary cost and the greatest expected damage value, Scheme 150 should be chosen. If funds are insufficient, we can only choose Scheme 148. If all targets must be covered and the cost of missiles minimized, we can only choose Scheme 4.

Solving the SMWTA problem is the foundation of the dynamic multiobjective weapon-target assignment (DMWTA) problem. The goal of the DMWTA is to provide a set of near-optimal or acceptable real-time decisions in real air combat, so the time performance of the algorithms is also an important index. Finally, we test the four algorithms on the specific instance over 30 runs and record the iteration time of each algorithm. The statistical results of the time performances are shown in Figure 5.

In real air combat situations, pilots often make life-or-death decisions within seconds or even milliseconds. We can see from Figure 5 that I-NSGA-III has a time advantage over the other algorithms in solving the specific instance, and the time performance of NSGA-II is the worst among the four algorithms. Although the improvement strategy of reference points affects the iteration rate, an appropriate operator can be selected by the online operator selection mechanism to improve mutation efficiency and solution quality. Compared with the original NSGA-III, the improvement strategy of reference points plays a more important role than the online operator selection mechanism in terms of time performance.

The work in this section is summarized as follows. First, we use four classic test problems (DTLZ1–4) to evaluate the proposed algorithm against five other state-of-the-art algorithms. Second, we test the four algorithms on a specific instance, verify the applicability and feasibility of the proposed algorithm, give a comparison study among the four algorithms, and show the corresponding distribution results in Appendix B and Appendix C. Third, we show the time performance of the four algorithms over 30 runs. To summarize, I-NSGA-III has proved to be an effective technique for the SMWTA optimization problem and is clearly the best among the four algorithms.

6. Conclusion

We apply NSGA-III to the WTA problem and propose I-NSGA-III to solve the SMWTA problem in this paper. The main contributions of this paper are summarized as follows. On the one hand, the expected damage to the enemy and the cost of missiles are considered from a practical viewpoint; on the other hand, a third objective, the value of fighter combat, is introduced to bring the model in line with real air combat. Moreover, an improvement strategy of reference points and an online operator selection mechanism are proposed and embedded into the original NSGA-III framework to improve the performance of the I-NSGA-III algorithm. The experiments show that I-NSGA-III can find better Pareto solutions than the other three algorithms for the SMWTA problem. More importantly, I-NSGA-III is also more suitable for solving the problem from the viewpoint of time performance.

However, we have mainly studied the SMWTA problem; few studies have focused on dynamic problems, which are more instructive to real air combat. In recent years, more and more studies have begun to pay attention to DWTA problems. A further study on this topic is one of our future tasks.

Appendix

Appendix A contains three data tables, taken from our previous work [23], which give the data of the instance used to verify the performance of the proposed algorithm. In Table 16, each entry represents the damage probability of a given missile attacking a given target. In Table 17, the pilot operation factor reflects the talent, training time, and operational stability of the pilot, which may affect attack performance. In Table 18, each entry represents the cost of a missile; the larger the value, the more the missile costs.

The data in Appendix B represent the Pareto solutions and the corresponding WTA distributions obtained by I-NSGA-III. In each scheme, the first three columns give the values of the three objective functions, and the remaining twelve columns give the corresponding WTA distribution. (For example, in Scheme 33, Missile 2 is assigned to Target 3, Missiles 3 and 4 to Target 7, Missiles 6 and 10 to Target 9, Missiles 8 and 12 to Target 5, and Missile 9 to Target 1.)

In order to compare the statistical results of all algorithms, the statistical results obtained by NSGA-III, MPACO, and NSGA-II are given in Appendix C and shown in Figures 6–8.

A. The Value of the Specific Example Used in This Paper

B. The Value of PSs and Results of WTA Distribution

C. The Statistical Results of NSGA-III, MPACO, and NSGA-II

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.