International Journal of Aerospace Engineering
Volume 2018, Article ID 8302324, 23 pages
https://doi.org/10.1155/2018/8302324
Research Article

An Improved Nondominated Sorting Genetic Algorithm III Method for Solving Multiobjective Weapon-Target Assignment Part I: The Value of Fighter Combat

1Aeronautics and Astronautics Engineering College, Air Force Engineering University, Xi’an, Shaanxi 710038, China
2College of Electronic Communication, Northwestern Polytechnical University, Xi’an 710072, China

Correspondence should be addressed to Zhanwu Li; afeulzw@189.cn

Received 12 December 2017; Revised 11 March 2018; Accepted 22 March 2018; Published 19 June 2018

Academic Editor: Mahmut Reyhanoglu

Copyright © 2018 You Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Multiobjective weapon-target assignment is a type of NP-complete problem, and a reasonable assignment of weapons benefits both attack and defense. In order to simulate a real battlefield environment, we introduce a new objective, the value of fighter combat, on the basis of the original two-objective model. The new three-objective model includes maximizing the expected damage to the enemy, minimizing the cost of missiles, and maximizing the value of fighter combat. To solve the problem with its complex constraints, an improved nondominated sorting genetic algorithm III is proposed in this paper. In the proposed algorithm, a series of reference points with good convergence and distribution is continuously generated according to the current population to guide the evolution, while useless reference points are eliminated. Moreover, an online operator selection mechanism is incorporated into the NSGA-III framework to autonomously select the most suitable operator while solving the problem. Finally, the proposed algorithm is applied to a typical instance and compared with other algorithms to verify its feasibility and effectiveness. Simulation results show that the proposed algorithm is successfully applied to the multiobjective weapon-target assignment problem, effectively improves the performance of the traditional NSGA-III, and produces better solutions than the two multiobjective optimization algorithms NSGA-II and MPACO.

1. Introduction

With the rapid development of military air combat, the weapon-target assignment (WTA) problem has attracted worldwide attention [1]. The WTA problem is a classic scheduling problem that aims to assign weapons to maximize military effectiveness and meet all constraints. So, it is important to find a proper assignment of weapons to targets.

The study of the WTA problem can be traced back to the 1950s and 1960s, when Manne [2] and Day [3] built the model of the WTA problem. From the perspective of the number of objective functions, Hosein and Athans [4] classify the WTA problem into two classes: the single-objective weapon-target assignment problem and the multiple-objective weapon-target assignment (MWTA) problem. When taking the time factor into account, Galati and Simaan [5] divide the WTA problem into two categories: the dynamic weapon-target assignment problem and the static weapon-target assignment problem. The current research status of various WTA problems is summarized in Table 1.

Table 1: Summary of variant metaheuristic algorithms and implementation of various WTA [6].

In contrast to the single-objective weapon-target assignment problem, MWTA can take multiple criteria into consideration, which is more in line with real combat decision making. In this paper, we mainly focus on the static multiobjective weapon-target assignment (SMWTA) problem, which aims at finding proper static assignments.

The combination of simulation and optimization algorithms to solve the SMWTA problem is not new. At present, a number of studies address this problem. In Liu et al. [16], an improved multiobjective particle swarm optimization (MOPSO) was used to solve the SMWTA problem with two objective functions: maximum enemy damage probability and minimum total firepower unit. The specific example they used contains only 7 platforms and 10 targets.

Zhang et al. [17] proposed a decomposition-based evolutionary multiobjective optimization method based on the MOEA/D algorithm. Considering the constraints of attack resources and damage probability, a mathematical model of weapon-target assignment was formulated. Both the proposed repair method and appropriate decomposition approaches can effectively improve the performance of the algorithm. However, the algorithm was not tested on large-scale WTA problems and has a low convergence speed.

In the work of Li et al. [19], a new optimization approach for the MWTA problem was developed based on the combination of two types of multiobjective optimizers: NSGA-II (domination-based) and MOEA/D (decomposition-based) enhanced with an adaptive mechanism. Then, a comparison study among the proposed algorithms, NSGA-II and MOEA/D, on solving instances of a three-scale MWTA problem was performed, and four performance metrics were used to evaluate each algorithm. They only applied the proposed adaptive mechanism to the MWTA problem, but they did not verify the behavior of the proposed adaptive mechanism on standard problems. In addition, they also considered the next step to solve the MWTA problem with an improved version of NSGA-II (called NSGA-III [24]).

In our previous work [23], we proposed a modified Pareto ant colony optimization (MPACO) algorithm to solve the bi-objective weapon-target assignment (BOWTA) problem and introduced the pilot operation factor into the WTA mathematical model. The proposed algorithm and two multiobjective optimization algorithms, NSGA-II and SPEA-II, were applied to solve instances of different scales. Simulation results show that the MPACO algorithm is successfully applied in the field of WTA, effectively improves the performance of the traditional Pareto ant colony optimization (P-ACO) algorithm, and produces better solutions than the other two algorithms.

Although the above methods have remarkable effects on solving the SMWTA problem, all of them considered two objectives, maximizing the expected damage to the enemy and minimizing the cost of missiles, without considering attack power. Because fighters cannot destroy all targets at once, we put forward the value of fighter combat to evaluate a fighter's sustained operational capability. On the basis of the original two-objective model, we propose a three-objective model, which is closer to real air combat. The new three-objective model includes maximizing the expected damage to the enemy, minimizing the cost of missiles, and maximizing the value of fighter combat. As the number of objectives increases from two to three, the performance of evolutionary multiobjective algorithms (EMOAs) may deteriorate. They face the following difficulties: (i) a large fraction of the population is nondominated; (ii) the evaluation of diversity is computationally complex; (iii) the recombination operation may be inefficient [25]. Recently, EMOAs like the nondominated sorting genetic algorithm III (NSGA-III) [26] have been proposed to deal with these difficulties and scale with the number of objectives.

In 2014, Deb and Jain [24, 26] proposed a reference point-based many-objective evolutionary algorithm following the NSGA-II framework (namely, NSGA-III). The basic framework of NSGA-III remains similar to the NSGA-II algorithm, but NSGA-III improves the ability to solve the multiobjective optimization problem (MOP) by changing the selection mechanism of its predecessor. Namely, the main difference is the replacement of the crowding distance with a selection based on well-distributed, adaptive reference points. These reference points help maintain the diversity of population members and also allow NSGA-III to perform well on MOPs with differently scaled objective values. This is an advantage of NSGA-III and another reason why we choose the NSGA-III algorithm to solve the SMWTA problem.

The NSGA-III has been successfully applied to real-world engineering problems [27, 28] and has several proposed variants, such as combining different variation operators [29], solving monoobjective problems [30], and integrating alternative domination schemes [31]. As far as we know, none of the previous related work has studied the MWTA problem with three objective functions or applied the NSGA-III algorithm to solve the MWTA problem.

In this paper, we have proposed an improved NSGA-III (I-NSGA-III) for solving the SMWTA problem. The proposed algorithm is used to seek better Pareto-optimal solutions between maximizing the expected damage, minimizing the cost, and maximizing the value of fighter combat. Based on the framework of the original NSGA-III, the proposed algorithm is devised with several attractive features to enhance the optimization performance, including an improvement strategy of reference points and an online operator selection mechanism.

Improvement Strategy of Reference Points. We can see from studies [24, 26] that the reference points of the original NSGA-III are uniformly distributed on a hyperplane to guide solutions to converge. The locations of these reference points are predefined, but the true Pareto front of the SMWTA problem is unknown beforehand, so mismatches between the reference points and the true Pareto front may degrade the search ability of the algorithm. If appropriate reference points can be continuously generated during the evolution according to information provided by the current population, it becomes possible to achieve a solution set with good convergence and distribution. Therefore, we add an improvement strategy for reference points to the original NSGA-III algorithm, that is, continuously generating good reference points and eliminating useless ones.

Online Operator Selection Mechanism. Crossover and mutation operators used in the evolutionary process of optimization with NSGA-III can generate offspring solutions to update the population and seriously affect search capability. The task of choosing the right operators depends on experience and knowledge about the problem. The online operator selection mechanism proposed in this paper aims to automatically select operators from a pool with simulated binary crossover, DE/rand/1, and nonlinear differential evolution crossover. Different crossover operators can be selected online according to the information of generations. Another benefit of this mechanism is that the operator choice can adapt to the search landscape and improve the quality of Pareto-optimal solutions.

The rest of this paper is organized as follows. Section 2 reviews the related work. In Section 3, a new mathematical model of the SMWTA problem and assumption descriptions are presented. Section 4 provides the introduction of the NSGA-III algorithm and presents the proposed I-NSGA-III for solving the WTA problem. Detailed improvements of the proposed algorithm are also introduced in Section 4. Section 5 is divided into two subsections as follows: (i) In order to verify the proposed algorithm, five state-of-the-art algorithms are considered for comparison studies. (ii) The proposed algorithm and others, like NSGA-III [9], MPACO [7], and NSGA-II [16], are tested on the SMWTA problem. Section 6 concludes the paper and presents a direction for future work.

2. Related Work

Many realistic problems contain several (two or more) conflicting objectives that are to be minimized or maximized simultaneously [32]. A single-objective optimization problem typically yields one optimal solution, whereas a multiobjective optimization problem (MOP) yields a set of Pareto-optimal solutions that trade off all the objective functions under the given constraints. Generally, multiobjective optimization can be presented as follows [33]:

min F(x) = (f_1(x), f_2(x), ..., f_m(x))^T, subject to x ∈ Ω,

where Ω denotes the space of decision variables and R^m denotes the space of objective functions.
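As a concrete illustration of the Pareto relation underlying an MOP, the following minimal sketch checks dominance under the minimization convention (the function name is ours, not from the paper):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
```

A solution belongs to the Pareto set exactly when no other feasible solution dominates it; incomparable vectors (each better in some objective) are what make the solution a set rather than a single point.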

There have been various studies on multiobjective evolutionary optimization besides NSGA-III, such as a set-based genetic algorithm for interval many-objective optimization problems, set-based many-objective optimization guided by a preferred region, a many-objective evolutionary algorithm using a one-by-one selection strategy, and many-objective evolutionary optimization based on reference points. These lines of work are reviewed below.

2.1. Set-Based Evolutionary Optimization

The goal of a multiobjective evolutionary algorithm (MOEA) is to seek a Pareto solution set that is well converged, evenly distributed, and well extended. If a set of solutions and its performance indicators are taken as the decision variable and the objectives of a new optimization problem, respectively, it is more likely that a Pareto-optimal set satisfying the performance indicators will be obtained. Based on this idea, a many-objective optimization problem (MaOP) can be transformed into an MOP with two or three objectives, and a series of set-based evolutionary operators can then be employed to solve the transformed MOP [34]. Compared with traditional MOEAs, set-based MOEAs have two advantages: (i) the new objectives are used to measure a solution set, and (ii) each individual of the set-based evolutionary optimization is a solution set consisting of several solutions of the original problem.

Researchers have carried out studies on set-based MOEAs, including the frameworks, the methods of transforming objectives, the approaches for comparing set-based individuals, and so on. The first set-based MOEA was proposed by Bader et al. [35]. In their work, solutions in a population are first divided into a number of solution sets of the same size, and then the hypervolume indicator is adopted to assess the performance of those sets. In the method proposed by Zitzler et al. [36], not only is the preference relation between a pair of set-based individuals defined, but the representation of preferences, the design of the algorithm, and the performance evaluation are also incorporated into one framework. Bader et al. [35] presented a set-based Pareto dominance relation and designed a fitness function reflecting the decision maker's preference to effectively solve MaOPs. A comparison with traditional MOEAs shows that the proposed method is effective. In addition, Gong et al. [37] presented a set-based genetic algorithm for interval MaOPs based on hypervolume and imprecision.

2.2. Local Search-Based Evolutionary Optimization

Ishibuchi and Murata [38] first proposed combining an MOEA with a local search method, IM-MOGLS for short. In their work, a weighted-sum approach is used to combine all objectives into one. After generating offspring by genetic operators, a local search is conducted starting from each new individual, optimizing the combined objective. Based on IM-MOGLS, Ishibuchi et al. [39] added more selectivity to its starting points. Ishibuchi and Narukawa [40] proposed a hybrid of NSGA-II and local search. Knowles and Corne [41] combined local search with PAES. The use of gradients appeared in the Pareto descent method (PDM) proposed by Harada et al. [42] and in the work of Bosman [43]. One of the few applications of achievement scalarization functions (ASFs) in the MOEA area was made by Sindhya et al. [44]. An MOEA was combined with a rough set-based local search in the work of Santana-Quintero et al. [45]. A genetic local search algorithm for multiobjective combinatorial optimization (MOCO) was proposed by Jaszkiewicz [46]. First, Pareto ranking and a utility function are applied to obtain the best solutions. Second, pairs of solutions are selected randomly to undergo recombination. Finally, local search is applied to the offspring pairs. In their study, the approach is successfully applied to the traveling salesperson problem (TSP).

The above studies combine an MOEA with classical local search methods; however, none of them applies the local search method to theoretically identify poor solutions in a population. Abouhawwash et al. [47] proposed a new hybrid MOEA procedure that first identifies poorly converged nondominated solutions and then improves them by using an ASF-based local search operator. They encouraged researchers to pay more attention to the Karush-Kuhn-Tucker proximity metric (KKTPM) and other theoretical optimality properties of solutions in arriving at better multiobjective optimization algorithms.

2.3. Reference Point-Based Evolutionary Optimization

Existing reference point-based approaches usually adopt only one reference point to represent the decision maker's ideal solution. Wierzbicki [48] first proposed a reference point approach in which the goal is to achieve the Pareto solution closest to a supplied reference point of aspiration levels by solving an achievement scalarization problem. Deb and Sundar [32] introduced the decision maker's preference to find a preferred set of solutions near the reference point. Decomposition strategies have also been incorporated into reference point approaches to find preferred regions, as in the method proposed by Mohammadi et al. [49].

To date, there has been little research on obtaining the whole Pareto-optimal solution set by employing multiple reference points. Figueira et al. [50] proposed a parallel multiple reference point approach for MOPs. In their work, the reference points are generated by estimating the bounds of the Pareto front, and solutions near each reference point can be obtained in parallel. This a priori method is very convenient; however, the later evolution process increases the computational complexity. Wang et al. [51] proposed a preference-inspired coevolutionary approach. Although solutions and reference points are optimized simultaneously during the evolution process, the fitness value of an individual is calculated by traditional Pareto dominance. In the work of Deb and Jain [24], a hyperplane covering the whole objective space is obtained according to the current population, and then a set of well-distributed reference points is generated on the hyperplane. However, the Pareto fronts of most practical problems are not uniformly distributed in the whole objective space, and it is necessary to adopt reference points that adapt to various problems. Liu et al. [52] proposed a reference point-based evolutionary algorithm (RPEA). However, a key parameter, whose value seriously affects the performance of the algorithm, is held constant during evolution. In addition, only the Tchebycheff approach is adopted in that study, and it may not be appropriate for all kinds of problems.

2.4. Indicator-Based Evolutionary Optimization

Compared with the above approaches, indicator-based evolutionary algorithms (IBEAs) [53] adopt a single indicator that accounts for both the convergence and distribution performance of a solution set. Because solutions can be selected one by one based on the performance indicator, this approach is also called one-by-one selection evolutionary optimization. The hypervolume is usually adopted as the indicator in IBEAs. However, the computational complexity of calculating the hypervolume increases exponentially with the number of objectives, so it is difficult to apply to MaOPs. To address this issue, Bader and Zitzler [54] proposed an improved hypervolume-based algorithm, HypE. In their work, Monte Carlo simulation is applied to estimate the hypervolume. This method saves computational resources while ensuring the accuracy of the estimated hypervolume. In recent years, some algorithms [55, 56] have also been proposed to enhance the computational efficiency of IBEAs for solving MaOPs.
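The Monte Carlo idea behind such hypervolume estimation can be sketched in a few lines. This is only the core sampling scheme, not HypE itself, and the function name is ours:

```python
import random

def hypervolume_mc(front, ref, n_samples=50_000, seed=0):
    """Monte Carlo estimate of the hypervolume (minimization) dominated by
    `front` inside the box [0, ref]: sample points uniformly in the box and
    count those weakly dominated by some front member."""
    rng = random.Random(seed)
    m = len(ref)
    hits = 0
    for _ in range(n_samples):
        s = [rng.uniform(0.0, r) for r in ref]
        if any(all(f[i] <= s[i] for i in range(m)) for f in front):
            hits += 1
    box = 1.0
    for r in ref:
        box *= r                      # volume of the sampling box
    return box * hits / n_samples

# Single point (0.5, 0.5) with reference (1, 1): the exact hypervolume is 0.25,
# and the estimate converges to it as n_samples grows.
est = hypervolume_mc([(0.5, 0.5)], (1.0, 1.0))
```

The estimator's cost grows linearly in the number of samples and objectives, rather than exponentially in the number of objectives as exact hypervolume computation does.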

Motivated by the need to simultaneously measure the distance of solutions to the Pareto-optimal front and maintain sufficient distance between them, Liu et al. [57] proposed a many-objective evolutionary algorithm using a one-by-one selection strategy, 1by1EA for short. However, there are two issues in this algorithm: (i) In 1by1EA, the contour lines formed by the convergence indicator have a shape similar to that of the Pareto-optimal front of the optimization problem, yet the shape of the Pareto-optimal front of a practical optimization problem is frequently unknown beforehand. (ii) The algorithm does not include a mechanism that can adaptively choose an appropriate convergence indicator or use an ensemble of multiple convergence indicators during the evolution. These two issues motivated our improvements to the original NSGA-III.

3. Problem Formulation

The WTA formulation can be described as finding a proper assignment of weapon units to target units, as illustrated in Figure 1. The formulation of the problem, including the assumptions and the new three-objective mathematical model, is introduced in this section.

Figure 1: Illustration of the WTA problem.
3.1. Assumption Description

In this research, to establish a reasonable WTA mathematical model, the following assumptions can be defined:

Assumption 1. We assume that the mathematical model is composed of fighters, missiles, and targets, and that the opposing groups are not necessarily equal in quantity. (Each fighter is equivalent to one platform, which possesses different kinds and quantities of missiles.)

Assumption 2. Each fighter can use different missiles to attack one target. (Each missile can only attack one target).

Assumption 3. The distributed unit total of each type of missile cannot exceed the number of assigned missile unit resources in a military air operation.

Assumption 4. We assume that the kill probability of each missile unit against each target unit being attacked is provided.

Assumption 5. If the target is within the work area, a missile can be assigned effectively; if not, it cannot be assigned.

3.2. Mathematical Model

Multiobjective WTA optimization is used to seek a balance among the maximum expected damage, minimum missile consumption, and maximum combat value. Thus, definitions and constraints related to the optimization model are shown as follows.

Definition 1. We introduce a new objective, the value of fighter combat, on the basis of the bi-objective WTA model established in our previous work [23]. The multiobjective WTA model maximizes the total effectiveness of attack, minimizes the cost of missiles, and maximizes the value of fighter combat. The mathematical functions of the model are shown as follows.

Definition 2. Two quantities are defined for each fighter: the number of missiles it carries and the number of missiles it has already launched.

Definition 3. represents the kill probability of each missile attacking different targets.

Definition 4. The decision table can be described as Table 2, where each entry is a Boolean value representing whether a given missile is assigned to a given target. The relationship between the Boolean value and missile allocation is shown in Table 3.

Definition 5. A constant represents the cost of each missile in this paper.

Definition 6. Based on the real battlefield, we assume that the number of missiles per fighter is not more than 4.

Definition 7. The pilot operation factor, proposed in our previous article [23], is included in the model.

Constraint. Three constraints that the above function variables must satisfy are shown in Table 4.

Table 2: The decision table of WTA.
Table 3: The relationship between Boolean value and missile allocation.
Table 4: Detailed information about the constraints.
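To make the three objectives concrete, the sketch below evaluates a Boolean decision table using the standard WTA expected-damage and missile-cost forms. The stripped equations of the model are not reproduced in this excerpt, so the combat-value term here is only a stand-in ratio, not the paper's pilot-operation-based definition; all names are ours:

```python
def evaluate_assignment(x, p, cost, value):
    """Evaluate one candidate assignment on the three objectives.
    x[i][j] = 1 if missile i is assigned to target j (Boolean decision table),
    p[i][j] = kill probability of missile i against target j,
    cost[i] = cost of missile i, value[j] = value of target j."""
    n_missiles, n_targets = len(x), len(x[0])
    damage = 0.0
    for j in range(n_targets):
        survive = 1.0
        for i in range(n_missiles):
            survive *= (1.0 - p[i][j]) ** x[i][j]   # P(target j survives shot i)
        damage += value[j] * (1.0 - survive)        # expected damage: maximize
    spent = sum(cost[i] * sum(x[i]) for i in range(n_missiles))  # cost: minimize
    combat_value = damage / spent if spent else 0.0  # stand-in for the paper's term
    return damage, spent, combat_value
```

For example, two missiles with kill probability 0.5 assigned to one target of value 1 yield an expected damage of 1 - 0.5 * 0.5 = 0.75.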

4. Nondominated Sorting Genetic Algorithm III

4.1. Introduction of NSGA-III

The basic framework of the NSGA-III algorithm remains similar to the NSGA-II algorithm with significant changes in its mechanism [24]. But unlike in NSGA-II, a series of reference points are introduced to improve the diversity and convergence of NSGA-III. The proposed improved NSGA-III is based on the structure of NSGA-III; hence, we give a description of NSGA-III here.

The algorithm starts with an initial population (feasible solutions) of size N and a series of widely distributed M-dimensional reference points. The number of divisions along each objective, p, is given by the user. Das and Dennis's systematic approach [58] is used to place the reference points on a normalized hyperplane having an intercept of one on each axis. The total number of reference points H in an M-objective problem is given by

H = C(M + p - 1, p).
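Das and Dennis's construction and the count C(M + p - 1, p) can be sketched as a stars-and-bars enumeration of all simplex points whose coordinates are multiples of 1/p (function names are ours):

```python
from itertools import combinations
from math import comb

def das_dennis(M, p):
    """All points on the unit simplex whose coordinates are multiples of 1/p,
    enumerated via stars and bars; the count is C(M + p - 1, p)."""
    points = []
    for bars in combinations(range(p + M - 1), M - 1):
        prev, coords = -1, []
        for b in bars:
            coords.append((b - prev - 1) / p)   # gap size between bars = count of "stars"
            prev = b
        coords.append((p + M - 2 - prev) / p)   # remaining stars after the last bar
        points.append(tuple(coords))
    return points
```

For M = 3 objectives and p = 4 divisions this yields C(6, 4) = 15 points, matching the formula above, each summing to one.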

The NSGA-III uses a set of reference directions to maintain diversity among solutions. A reference direction is a ray starting at the origin and passing through a reference point, as illustrated in Figure 2. The population size is chosen to be the smallest multiple of four greater than the total number of reference points, with the idea that one population member is expected to be found for every reference direction [30].

Figure 2: Three reference points are shown on a normalized reference line for a two-objective problem.

Let us suppose the parent population at generation t is P_t (of size N). The offspring population Q_t, having N members, is obtained by recombination and mutation of P_t. R_t = P_t ∪ Q_t is the combination of the parent and offspring populations (of size 2N). To preserve elite members, R_t is sorted into different nondomination levels (F_1, F_2, ...). Thereafter, individuals of each nondomination level are selected to construct a new population S_t, starting from the first nondomination level F_1, until the size of S_t is for the first time larger than or equal to N. Suppose that the last level included is the lth level F_l. In most situations, only part of F_l can be accepted, so its individuals must be sorted by a diversity maintenance operator. This is achieved by computing the crowding distance for the lth level in NSGA-II, but NSGA-III replaces it with reference direction-based niching. Before the above operation, objective values are normalized by formulas (4), (5), and (6):

f_i'(x) = f_i(x) - z_i^min,   (4)
ASF(x, w) = max_{i=1,...,M} f_i'(x) / w_i,   (5)
f_i^n(x) = f_i'(x) / a_i,   (6)

where the ideal point of the population is determined by identifying the minimum value z_i^min of each objective function and constructing the ideal point z* = (z_1^min, ..., z_M^min). f_i'(x) denotes the translated objective functions f_i(x) - z_i^min. The extreme point of each objective axis is found by minimizing the achievement scalarizing function ASF(x, w), where w is the weight vector of that objective axis (when w_i = 0, it is replaced by a small number 10^-6). f_i^n(x) denotes the normalized objective functions, and a_i represents the intercept of the ith objective axis.
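A minimal sketch of the normalization step follows. For brevity it assumes the intercept on each axis is simply the maximum translated value on that axis, a simplification of the full ASF-based extreme-point construction:

```python
def normalize(F, eps=1e-6):
    """Translate objective vectors by the ideal point and scale by per-axis
    intercepts. Intercept a_i is approximated here by the maximum translated
    value on axis i (simplified; NSGA-III derives it from extreme points)."""
    M = len(F[0])
    z_min = [min(f[i] for f in F) for i in range(M)]              # ideal point
    Fp = [[f[i] - z_min[i] for i in range(M)] for f in F]          # translated objectives
    a = [max(max(fp[i] for fp in Fp), eps) for i in range(M)]      # intercept estimate
    return [[fp[i] / a[i] for i in range(M)] for fp in Fp]         # normalized objectives
```

After this step every objective lies in [0, 1] for the current population, so differently scaled objectives become comparable.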

After the normalizing operation, the supplied reference points lie on this normalized hyperplane. In order to associate each population member with a reference point, the shortest perpendicular distance between each population member under consideration and each reference line is calculated.
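The association step can be sketched as follows (function names are ours):

```python
from math import sqrt

def perpendicular_distance(f, r):
    """Distance from normalized objective vector f to the reference line
    through the origin and reference point r."""
    t = sum(fi * ri for fi, ri in zip(f, r)) / sum(ri * ri for ri in r)
    return sqrt(sum((fi - t * ri) ** 2 for fi, ri in zip(f, r)))

def associate(F, refs):
    """Index of the closest reference line for each population member."""
    return [min(range(len(refs)), key=lambda j: perpendicular_distance(f, refs[j]))
            for f in F]
```

Each member is thus assigned to exactly one reference direction, and counting members per direction gives the niche counts used below.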

Finally, a niche-preservation strategy is employed to select individuals of the last accepted level F_l that are associated with each reference point. Let ρ_j represent the number of population members already selected (excluding F_l) that are associated with the jth reference point. The specific niche-preservation strategy is as follows:

(1) The set of reference points having the minimum niche count ρ_j is identified. When this set contains more than one point, one of them is selected at random.

(2) According to the niche count ρ_j of the selected reference point, two cases are distinguished:

(i) If ρ_j = 0: (a) if one or more members of the front F_l are associated with the jth reference point, the one having the shortest perpendicular distance from the jth reference line is added to the new population, and the count ρ_j is incremented by one; (b) if no member of the front F_l is associated with the jth reference point, the jth reference point is ignored for the current generation.

(ii) If ρ_j ≥ 1, a randomly chosen member of the front F_l that is associated with the jth reference point is added to the new population, and the count ρ_j is then incremented by one.

After the niche counts are updated, the above procedure is repeated until all remaining slots of the new population are filled.
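The niche-preservation loop can be sketched as follows; this is a simplified rendering of the procedure above, with an assumed data layout (names are ours):

```python
import random

def niching(K, rho, assoc_last, dist_last, seed=0):
    """Select K members of the last front. rho maps each reference point to
    its niche count among already-selected members; assoc_last[c] is the
    reference point associated with candidate c of the last front, and
    dist_last[c] its perpendicular distance. rho is updated in place."""
    rng = random.Random(seed)
    chosen = []
    candidates = set(range(len(assoc_last)))
    active = set(rho)                      # reference points still under consideration
    while len(chosen) < K and active:
        # pick a least-crowded reference point, breaking ties at random
        j = min(active, key=lambda r: (rho[r], rng.random()))
        cluster = [c for c in candidates if assoc_last[c] == j]
        if not cluster:                    # no candidate left: ignore this point
            active.discard(j)
            continue
        if rho[j] == 0:                    # empty niche: take the closest candidate
            pick = min(cluster, key=lambda c: dist_last[c])
        else:                              # occupied niche: take one at random
            pick = rng.choice(cluster)
        chosen.append(pick)
        candidates.discard(pick)
        rho[j] += 1
    return chosen
```

Favoring empty niches first is what pulls the population toward under-represented regions of the front.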

The flow chart of the NSGA-III algorithm is shown in Figure 3.

Figure 3: A flow chart of the NSGA-III algorithm.
4.2. Improvement Strategy of Reference Points

In this section, an improvement strategy of reference points is proposed. The strategy mainly includes two parts: (i) generation of new reference points and (ii) elimination of useless reference points.

NSGA-III requires a set of reference points to be supplied before the algorithm can be applied [26]. If the user does not put forward specific requirements for the Pareto-optimal front, a structured set of points created by Das and Dennis's approach [58] is located on a normalized hyperplane. NSGA-III was originally designed to find Pareto-optimal points that are close to each of these reference points, and the positions of the structured reference points are predefined at the beginning. However, the true Pareto front of a practical optimization problem (like the MWTA problem) is usually unknown beforehand, so the preset reference points may not reflect the development trend of the true Pareto-optimal front. In this study, a set of reference points with good convergence and distribution is created by making full use of information provided by the current population. As new reference points are added, the total number of reference points grows and the computational complexity of the algorithm increases accordingly. To maintain the convergence speed of the algorithm, we propose an elimination mechanism for reference points.

4.2.1. Generation of New Reference Points

We can learn from the NSGA-III algorithm above that after the niche operation, the population is created and the niche count of each supplied reference point, denoted ρ_j for the jth point, is updated. All reference points are expected to be useful in finding nondominated fronts (ρ_j ≥ 1 for every reference point). If the jth reference point is not associated with any population member, its niche count ρ_j is zero. If NSGA-III will never find an associated population member for the jth reference point, that reference point is considered useless. It is then better to replace it with a new reference point that correctly reflects the direction of the Pareto-optimal front. However, we cannot know in advance whether the jth reference point will eventually be useful. Under this circumstance, we simply add a set of reference points by adopting formulas (7) and (8), which sample each coordinate of a new point between the per-objective bounds of the current population:

r_i^new = f_i^min + λ (f_i^max - f_i^min),

where λ is a random number in [0, 1], r_i^new denotes the ith coordinate of a new reference point, and f_i^max and f_i^min are the maximal and minimal values, respectively, of the ith objective. The number of new reference points equals the number of existing reference points.

The pseudo code can be demonstrated as follows:

Algorithm 1: Generation of New Reference Points.
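Since the body of Algorithm 1 is not reproduced in this excerpt, the following sketch assumes each new point is sampled between the per-objective bounds of the current population, as described above (names are ours):

```python
import random

def generate_new_points(F, n_new, seed=0):
    """Sample n_new reference points; coordinate i is drawn uniformly between
    the minimum and maximum of objective i over the current population F
    (the assumed form r_i = f_i_min + lam * (f_i_max - f_i_min))."""
    rng = random.Random(seed)
    M = len(F[0])
    f_min = [min(f[i] for f in F) for i in range(M)]
    f_max = [max(f[i] for f in F) for i in range(M)]
    return [tuple(f_min[i] + rng.random() * (f_max[i] - f_min[i]) for i in range(M))
            for _ in range(n_new)]
```

Sampling inside the population's bounding box keeps the new points near the region the search is actually exploring, rather than on the fixed structured hyperplane.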

In many cases, the total number of reference points will be greatly increased by the above operations, and many of the new reference points eventually become useless. With the large increase of reference points, the algorithm will also be slowed down. Thus, we consider keeping the convergence speed of the algorithm by eliminating useless reference points, as described in the following subsection.

4.2.2. Elimination of Useless Reference Points

After new reference points are generated by the above operation, the niche count of all reference points is recalculated and recorded. Note that the total niche count equals the population size. Ideally, each reference point is associated with exactly one solution from the population; that is, solutions are well distributed among the reference points. Then, all reference points whose niche count is zero are removed from the total reference point set. However, in order to maintain a uniform distribution of the reference points, the points obtained by Das and Dennis's systematic approach [58] are never deleted. Thus, the remaining reference points consist of two parts: (i) the original structured reference points (even if their niche count is zero) and (ii) the generated reference points whose niche count is nonzero.
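A minimal sketch of this elimination rule (names are ours):

```python
def eliminate(refs, rho, n_original):
    """Drop reference points whose niche count is zero, except the first
    n_original structured (Das-Dennis) points, which are always kept."""
    return [r for k, r in enumerate(refs) if k < n_original or rho[k] > 0]

# Two structured points followed by two generated points; the generated point
# with niche count zero is dropped, the structured ones survive regardless.
refs = [(1, 0), (0, 1), (0.3, 0.7), (0.8, 0.2)]
rho = [0, 2, 0, 1]
kept = eliminate(refs, rho, n_original=2)
```

Keeping the structured points guarantees a uniform fallback distribution even when the population temporarily abandons part of the objective space.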

Based on the niche count of the respective reference points and information provided by the current population, the reference point set is adaptively redefined by generation and elimination operations. The improvement strategy for the reference points is intended to maintain diversity and guide the solutions closer to the Pareto-optimal front.
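A minimal sketch of the elimination step, assuming the original Das-Dennis points are stored at the front of the list (names are illustrative):

```python
def eliminate_useless_reference_points(ref_points, niche_counts, num_original):
    """Sketch of Section 4.2.2: drop generated reference points whose niche
    count is zero, but always keep the first `num_original` points from
    Das and Dennis's systematic approach to preserve uniform coverage."""
    kept = []
    for idx, point in enumerate(ref_points):
        if idx < num_original or niche_counts[idx] > 0:
            kept.append(point)
    return kept
```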

4.3. Online Operator Selection Mechanism

In multiobjective evolutionary algorithms (MOEAs), crossover and mutation operators are used to generate offspring solutions that update the population, and the choice of operator strongly affects search capability. The original NSGA-III algorithm adopts only the simulated binary crossover (SBX) operator and cannot select different crossover operators online according to the information gathered over generations. Therefore, we propose a strategy that selects operators adaptively from a pool based on the performance of previous generations, that is, an online operator selection (OOS) mechanism. According to a study in the literature [59], adaptive operator selection can choose a strategy appropriate to the search landscape and produce better results.

Based on the information collected by the credit assignment method, the OOS selects the operator used to generate new solutions. In this paper, we use a roulette wheel-like probabilistic process to select an operator, which balances exploration and exploitation.

Probability matching (PM) is one of the best-known probabilistic operator selection methods. The probability of operator i being selected at the next generation is computed as [60]:

p_i(t + 1) = p_min + (1 − K p_min) q_i(t) / Σ_{j=1}^{K} q_j(t), (9)

where K is the number of operators, p_min is the minimal selection probability of any operator, and q_i(t) is the quality associated with operator i. Clearly, the probabilities of all operators sum to 1. If one operator receives rewards over many generations while the rewards of the others are almost 0, its selection probability approaches the maximum value 1 − (K − 1) p_min.
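A small sketch of probability matching together with the roulette wheel-like selection described above (function names are ours):

```python
import random

def pm_probabilities(qualities, p_min):
    """Probability matching: p_i = p_min + (1 - K*p_min) * q_i / sum(q)."""
    K = len(qualities)
    total = sum(qualities)
    if total == 0:
        return [1.0 / K] * K  # no reward information yet: choose uniformly
    return [p_min + (1.0 - K * p_min) * q / total for q in qualities]

def roulette_select(probs, rng=random.random):
    """Roulette-wheel selection of an operator index from the probabilities."""
    r, acc = rng(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1  # guard against floating-point round-off
```

With qualities [1, 0, 0] and p_min = 0.1, for example, the probabilities become [0.8, 0.1, 0.1]; the first entry equals the maximum value 1 − (K − 1) p_min.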

The pool in the proposed algorithm is composed of three well-known strategies: simulated binary crossover (SBX), DE/rand/1, and the nonlinear differential evolution crossover (NDE). In the original NSGA-III algorithm, the parent individuals x_r1, x_r2, and x_r3 are randomly selected from the population; the small difference in our algorithm is that the first parent individual of every strategy is the current solution itself. The three operators are described below.

4.3.1. Simulated Binary Crossover

The simulated binary crossover (SBX) operator was proposed by Deb and Agrawal and is found to be particularly useful in problems where the upper and lower bounds of the multiobjective global optimum are not known a priori [61]. Two offspring solutions, c_1 and c_2, are created from two parent solutions, p_1 and p_2, by formulas (10) and (11):

c_1 = 0.5 [(1 + β) p_1 + (1 − β) p_2], (10)

c_2 = 0.5 [(1 − β) p_1 + (1 + β) p_2], (11)

where the spread factor β is defined as the ratio of the absolute difference in offspring values to that of the parents; β is calculated from a random number u by formula (12):

β = (2u)^{1/(η_c + 1)} if u ≤ 0.5, and β = [1 / (2(1 − u))]^{1/(η_c + 1)} otherwise, (12)

where u is a random number in [0, 1] and η_c is the crossover distribution index given by the user.
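The SBX operator can be sketched as follows (a minimal real-coded version; the names and the default η_c value are our assumptions):

```python
import random

def sbx_crossover(p1, p2, eta_c=30.0, rng=random.random):
    """Simulated binary crossover on real-valued vectors.

    For each variable, a spread factor beta is drawn from the SBX
    distribution controlled by eta_c, and two symmetric children are
    produced around the parents."""
    c1, c2 = [], []
    for x1, x2 in zip(p1, p2):
        u = rng()
        if u <= 0.5:
            beta = (2.0 * u) ** (1.0 / (eta_c + 1.0))
        else:
            beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta_c + 1.0))
        c1.append(0.5 * ((1.0 + beta) * x1 + (1.0 - beta) * x2))
        c2.append(0.5 * ((1.0 - beta) * x1 + (1.0 + beta) * x2))
    return c1, c2
```

Note that with u = 0.5 the spread factor is 1 and the children coincide with the parents, which is the distribution's intended center.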

4.3.2. DE/rand/1

The DE/rand/1 operator is one of the most commonly used DE variants [62], in which all difference solutions are randomly chosen from the population. This strategy therefore does not generate biased or special search directions; a new direction is selected at random each time. The DE/rand/1 strategy is defined by formula (13):

v = x_r1 + F (x_r2 − x_r3), (13)

where F represents the mutation scaling factor.
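A sketch of DE/rand/1, with the base vector made explicit so that either a random individual or, as in this paper's variant, the current solution can be supplied:

```python
def de_rand_1(x_base, x_r1, x_r2, F=0.5):
    """DE/rand/1 mutation: v = x_base + F * (x_r1 - x_r2).

    In the paper's variant the base vector is the current solution rather
    than a third random individual; F = 0.5 here is an illustrative value."""
    return [b + F * (a - c) for b, a, c in zip(x_base, x_r1, x_r2)]
```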

4.3.3. Nonlinear Differential Evolution Crossover

The nonlinear differential evolution crossover (NDE) strategy was presented in the literature [63] for the MOEA/D framework as a hybrid crossover operator based on polynomials. Its advantage is that it does not depend on the values of the crossover rate or the mutation scaling factor. The offspring is generated by formula (14), in which the intermediate term is generated according to an interpolation probability [63] and is defined by formula (15).

The remaining parameters are given by formulas (16), (17), and (18) accordingly.

4.4. The Proposed NSGA-III Algorithm

In this paper, we add the improvement strategy of reference points and online operator selection mechanism to the NSGA-III framework. The pseudo code of the proposed NSGA-III is shown in Algorithm 2, where differences with the original NSGA-III are set in italic and bold-italic.

Algorithm 2: The Proposed NSGA-III Algorithm.

Compared with the original NSGA-III, the proposed algorithm differs in two main parts: bold-italic marks the improvement strategy for the reference points, and italic marks the online operator selection mechanism.

(1) In the bold-italic parts, the reference points that satisfy the elimination condition (Section 4.2.2) are first removed according to their niche counts. Second, new reference points are generated by the process of Algorithm 1. Based on the niche count of each reference point and the information provided by the current population, the reference point set is adaptively redefined by the generation and elimination operations.

(2) There are two differences between our proposed algorithm and the original NSGA-III in the italic parts. (i) The first is the selection of the operator to be used (Step 4), based on the probabilities associated with each operator. As the success probability of an operator increases, the most successful operator is selected more often, and the quality of the solutions should improve accordingly. If the same operator were selected by a deterministic approach for all recombinations (Step 5) of a generation, an undesirable bias toward the operator performing best in the initial generations would be introduced; therefore, a stochastic selection mechanism is applied in this paper. (ii) The other differences are the calculation of the rewards (Step 22) and the update of the information associated with each operator (Step 23). (a) Each operator in the pool has a probability of being selected at generation t, given by formula (9). The adaptation is based on its quality q_i(t), which is updated according to a reward r_i(t). The reward adopted in this paper is a Boolean value (Step 22): if a child is generated by operator i, does not belong to the parent population, but survives into the next population, then r_i = 1; otherwise, r_i = 0. (b) After the rewards are calculated, the quality of each operator in the pool is updated by formula (19):

q_i(t + 1) = (1 − α) q_i(t) + α r_i(t), (19)

where α ∈ (0, 1] is the adaptation rate.

Finally, the operator selection probabilities are updated by formula (9).
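The credit assignment step can be sketched as follows; the Boolean reward and the exponential quality update of formula (19) are shown, with the adaptation rate value being an illustrative assumption:

```python
def boolean_reward(child_survives):
    """Step 22: reward 1 if the offspring produced by an operator is new
    (not in the parent population) and survives into the next population."""
    return 1.0 if child_survives else 0.0

def update_quality(q, reward, alpha=0.3):
    """Formula (19): exponential recency-weighted quality update,
    q(t+1) = (1 - alpha) * q(t) + alpha * r(t)."""
    return (1.0 - alpha) * q + alpha * reward
```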

5. Experiment Results and Analysis

5.1. Test Problem

In order to verify the proposed algorithm, five state-of-the-art algorithms are considered for comparison: NSGA-III-OSD [64], NSGA-III [24], NSGA-II [65], MPACO [23], and MOEA/D [66]. NSGA-III and NSGA-II are the traditional NSGA algorithms, and NSGA-III-OSD is an improved version of NSGA-III based on objective space decomposition. In the MOEA/D algorithm, a predefined set of weight vectors is applied to maintain solution diversity. MPACO is the algorithm we proposed previously [23]. All algorithms are tested on four benchmark many-objective optimization problems, DTLZ1 to DTLZ4 [67].

For DTLZ, the four instances (DTLZ1–4) are used with 3, 5, 8, 10, and 15 objectives. Following Deb et al. [67], the number of decision variables is set to n = M + k − 1, where M is the number of objectives, k = 5 for DTLZ1, and k = 10 for DTLZ2–4. According to the work in [68], the position-related parameter and the distance-related parameter are set accordingly, the latter to 20. The main characteristics of all benchmark problems are shown in Table 5.

Table 5: Characteristics of test problems.

In this subsection, the inverted generational distance (IGD) indicator [69] is used to evaluate the quality of the obtained set of nondominated solutions. This indicator measures the convergence and diversity of solutions simultaneously: smaller IGD values indicate that the final nondominated population has better convergence to and coverage of the Pareto front. Each algorithm is run 30 times independently on each test problem. To be as fair as possible, in each run all the compared algorithms use the same maximum number of iterations, as shown in Table 6. The population sizes used for different numbers of objectives are shown in Table 7.
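The IGD indicator can be computed in a few lines; the following is a plain-Python version without external dependencies:

```python
import math

def igd(pareto_front, obtained_set):
    """Inverted generational distance: the mean Euclidean distance from
    each true Pareto-front point to its nearest obtained solution.
    Lower is better; 0 means the front is covered exactly."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sum(min(dist(p, s) for s in obtained_set)
               for p in pareto_front) / len(pareto_front)
```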

Table 6: Maximum iteration for each test problem.
Table 7: Number of population size.

The six algorithms considered in this study need to set some parameters, and five of them are shown in Table 8. The parameters of the MPACO algorithm can be found in the literature [23].

Table 8: Parameter setting of each algorithm.

Comparison results of I-NSGA-III with the five other MOEAs, in terms of IGD values for different numbers of objectives on the DTLZ1–4 test problems, are presented in Tables 9–12. The tables report the median and standard deviation of the IGD values over 30 independent runs for the six compared MOEAs, with the best median and standard deviation highlighted in italic.

Table 9: Median and standard deviation of the IGD values achieved by each algorithm on DTLZ1 (the best medians are in italic font).
Table 10: Median and standard deviation of the IGD values achieved by each algorithm on DTLZ2 (the best medians are in italic font).
Table 11: Median and standard deviation of the IGD values achieved by each algorithm on DTLZ3 (the best medians are in italic font).
Table 12: Median and standard deviation of the IGD values achieved by each algorithm on DTLZ4 (the best medians are in italic font).

Based on the statistical results of DTLZ1, I-NSGA-III shows better performance than the other five MOEAs on the three-, eight-, and ten-objective test problems. For the five- and fifteen-objective problems, it achieves the second smallest IGD value; NSGA-III obtains the best IGD value on the fifteen-objective problem and MPACO on the five-objective problem. NSGA-II can deal with the three-objective instance but performs worse with more than three objectives.

Based on the statistical results of DTLZ2, the performance of I-NSGA-III, NSGA-III-OSD, and MOEA/D is comparable on this problem. NSGA-III is worse than the two improved algorithms (I-NSGA-III and NSGA-III-OSD); MOEA/D, however, is better than MPACO. The IGD values obtained by NSGA-II on the three- and five-objective problems show that it can perform well there, but as the number of objectives increases, NSGA-II again performs worst among the six algorithms.

Based on the statistical results of DTLZ3, I-NSGA-III performs best among the six algorithms for every number of objectives on the DTLZ3 problem. NSGA-III-OSD achieves IGD values similar to those of NSGA-III. Although MOEA/D achieves nearly the smallest IGD value on the three- and five-objective test problems, it degrades significantly on problems with more than five objectives. MPACO beats MOEA/D on the eight-, ten-, and fifteen-objective test problems. In addition, NSGA-II remains ineffective on problems with more than three objectives.

Based on the statistical results of DTLZ4, I-NSGA-III performs significantly better than the other five MOEAs on almost all DTLZ4 test problems, except the eight-objective instance. Furthermore, the standard deviation of the IGD obtained by I-NSGA-III shows that the proposed algorithm is rather robust. NSGA-III-OSD shows the overall performance closest to I-NSGA-III. MOEA/D is significantly worse than MPACO, and NSGA-II again has no advantage on the DTLZ4 test problem.

The above results show that the proposed I-NSGA-III could perform well on almost all the instances in DTLZ1–4, and the obtained solution sets have good convergence and diversity. In the next subsection, the proposed algorithm will be applied to solve an SMWTA problem.

5.2. SMWTA Problem
5.2.1. Parameter Setting

For the MWTA problem, the population size of the I-NSGA-III algorithm is 150, and the maximum number of iterations is 200. According to the approach in the literature [58], the total number of reference points is set to 120. The DE scaling factor, the polynomial mutation probability, and the PM minimum probability take values frequently used in the literature [68]. Further parameters of I-NSGA-III are shown in Table 13.

Table 13: Parameters of I-NSGA-III algorithm (Part).

Table 14 shows the number of reference points , population size , and maximum iteration for different algorithms. Other parameters in the NSGA-III algorithm are the same as those in the literature [24]. The parameters of the MPACO algorithm and the NSGA-II algorithm can be found in the literature [23].

Table 14: Number of reference points, population size, and maximum number of iterations for all algorithms.
5.2.2. Simulation Environment

As Table 15 shows, the proposed algorithm was implemented in C++ on an Intel(R) Core(TM) i5-4460T CPU at 1.90 GHz with 8 GB of RAM, running 64-bit Windows 7.

Table 15: Parameters of simulation environment.
5.2.3. Numerical Experiments and Analysis

We use the same specific instance as in our previous work [23] to verify the performance of the algorithm. The instance includes 4 fighters carrying different numbers of missiles (12 missiles in total) and 10 targets. Appendix A lists the missile damage probabilities, pilot operation factors, and missile costs.

First, an enumeration approach [19] is employed to obtain a set of evenly distributed true Pareto solutions (PSs) for the specific instance. Second, in order to verify the applicability and feasibility of the proposed algorithm, we apply I-NSGA-III, NSGA-III, MPACO, and NSGA-II to find PSs for the instance. The statistical results are shown in Figures 4–7.

Figure 4: Plots of the true PSs and the final PSs found by I-NSGA-III on the specific instance.
Figure 5: Plots of the true PSs and the final PSs found by NSGA-III on the specific instance.
Figure 6: Plots of the true PSs and the final PSs found by NSGA-II on the specific instance.
Figure 7: Plots of the true PSs and the final PSs found by MPACO on the specific instance.

Figures 4–7 show the distribution of the true PSs obtained by the enumeration method and the final PSs obtained by I-NSGA-III, NSGA-III, MPACO, and NSGA-II. The optimization results of the I-NSGA-III algorithm are clearly better than those of the other algorithms, because I-NSGA-III finds close and evenly spread PSs in the objective space. It is evident that the algorithm can guarantee the quality of solutions, so the feasibility of I-NSGA-III for optimizing the SMWTA problem is well verified.

In Figure 4, 150 evenly distributed solutions on the true Pareto front are found, while only 120 reference points are preset. Since reference points and solutions are in one-to-one correspondence, this demonstrates the effectiveness of the reference point improvement strategy, that is, the operations of generating new reference points and eliminating useless ones. The strategy increases the number of reference points from 120 to 150, improves the efficiency of the algorithm, and finds more Pareto-optimal solutions that meet the requirements. Meanwhile, thanks to the online operator selection mechanism in the I-NSGA-III algorithm, the operator best suited to the current search landscape is chosen and the quality of the solutions is improved. In Figure 5, however, the original NSGA-III algorithm finds only 95 solutions from the same initial 120 reference points, and only a small number of individuals lie on the Pareto front. Therefore, the original NSGA-III is less efficient than the algorithm proposed in this paper for solving the SMWTA problem. As Figures 6 and 7 show, although MPACO is superior to NSGA-II to some extent, both algorithms are clearly inferior to the I-NSGA-III algorithm.

We analyze results from Appendix B and Appendix C as follows:

When funds for national defense are sufficient and detailed enemy information is available, we can choose Scheme 1, which costs the most money but yields the greatest expected damage to the enemy, to deliver a fatal attack. Holding suppressive military superiority, we can accomplish the task with a single attack under Scheme 1. However, this situation is rare in real combat. When we have only a small amount of information about the enemy, or it is difficult to launch a large-scale attack, we choose one of Schemes 147, 149, and 150, which achieve the maximum fighter combat value, to launch a probing attack; among these three, taking into account the lowest monetary cost and the greatest expected damage value, we should choose Scheme 150. If funds are insufficient, we can only choose Scheme 148. If all targets must be allocated and the cost of missiles minimized, we can only choose Scheme 4.

Solving the SMWTA problem is the foundation of the dynamic multiobjective weapon-target assignment (DMWTA) problem. The goal of the DMWTA is to provide a set of near-optimal or acceptable decisions in real time during air combat, so the time performance of the algorithms is also an important index. Finally, we run the four algorithms on the specific instance 30 times and record the iteration time of each algorithm. The statistical results of the time performances are shown in Figure 8.

Figure 8: Time performance of four different algorithms (isolated markers represent extreme outliers).

In real air combat situations, pilots often make life-or-death decisions within seconds or even milliseconds. As Figure 8 shows, I-NSGA-III has a time advantage over the other algorithms in solving the specific instance, and the time performance of NSGA-II is the worst among the four. Although the reference point improvement strategy adds per-iteration overhead, an appropriate operator can be selected by the online operator selection mechanism to improve mutation efficiency and solution quality. Compared with the original NSGA-III algorithm, the reference point improvement strategy contributes more than the online operator selection mechanism to the difference in time performance.

The work in this section is as follows: First, we use four classic test problems (DTLZ1–4) to evaluate the proposed algorithm against five state-of-the-art algorithms. Second, we test four algorithms on a specific instance, verify the applicability and feasibility of the proposed algorithm, give a comparison among the four algorithms, and present the corresponding distribution results in Appendix B and Appendix C. Third, we report the time performance of the four algorithms over 30 runs. In summary, I-NSGA-III proves to be an effective technique for the SMWTA optimization problem and is clearly the best among the four algorithms.

6. Conclusion

We apply NSGA-III to the WTA problem and propose I-NSGA-III to solve the SMWTA problem in this paper. The main contributions of this paper are as follows. On the modeling side, the expected damage to the enemy and the cost of missiles are taken into account from a practical viewpoint, and a third objective, the value of fighter combat, is introduced to bring the model in line with real air combat. On the algorithmic side, an improvement strategy for reference points and an online operator selection mechanism are proposed and embedded into the original NSGA-III framework to improve the performance of the I-NSGA-III algorithm. The experiments show that I-NSGA-III finds better Pareto solutions than the other three algorithms for the SMWTA problem. More importantly, I-NSGA-III is also preferable from the time performance viewpoint.

However, we have mainly studied the SMWTA problem; few studies have focused on dynamic problems, which are more instructive to real air combat. In recent years, more and more studies have begun to pay attention to DWTA problems. A further study on this topic is one of our future tasks.

Appendix

Appendix A contains three data tables taken from our previous work [23]; they provide the data of the instance used to verify the performance of the proposed algorithm. In Table 16, each entry represents the damage probability of a given missile attacking a given target. In Table 17, the pilot operation factor reflects the talent, training time, and operational stability of the pilot, which may affect attack performance. In Table 18, each entry represents the cost of the corresponding missile; the larger the value, the more the missile costs.

The data in Appendix B represent the results and the corresponding distributions obtained by I-NSGA-III. In each scheme, the first three columns give the values of the three objective functions, and the remaining twelve columns give the corresponding WTA distribution. (For example, in Scheme 33, Missile 2 is assigned to Target 3, Missiles 3 and 4 to Target 7, Missiles 6 and 10 to Target 9, Missiles 8 and 12 to Target 5, and Missile 9 to Target 1.)
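For illustration, one scheme row can be decoded as follows. The assumed layout (three objective values followed by twelve per-missile target indices, with 0 meaning the missile is not fired) is our reading of Table 19 and should be checked against the actual table:

```python
def decode_scheme(row):
    """Decode one scheme row: the first three entries are objective values,
    the remaining twelve map missile m (1..12) to a target index.
    A target index of 0 is taken to mean 'missile not fired' (assumption)."""
    objectives = row[:3]
    assignment = {m + 1: t for m, t in enumerate(row[3:]) if t != 0}
    return objectives, assignment
```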

In order to compare the statistical results of all algorithms, the results obtained by NSGA-III, MPACO, and NSGA-II are given in Appendix C and plotted in Figures 5–7.

A. The Value of the Specific Example Used in This Paper

Table 16: Missile damage probability.

Table 17: Pilot operation factor table.

Table 18: The cost of each missile.

B. The Value of PSs and Results of WTA Distribution

Table 19

C. The Statistical Results of NSGA-III, MPACO, and NSGA-II

Table 20

Table 21

Table 22

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

References

  1. R. N. Rai and N. Bolia, “Optimal decision support for air power potential,” IEEE Transactions on Engineering Management, vol. 61, no. 2, pp. 310–322, 2014.
  2. A. S. Manne, “A target-assignment problem,” Operations Research, vol. 6, no. 3, pp. 346–351, 1958.
  3. R. H. Day, “Allocating weapons to target complexes by means of nonlinear programming,” Operations Research, vol. 14, no. 6, pp. 992–1013, 1966.
  4. P. A. Hosein and M. Athans, Preferential Defense Strategies. Part I: The Static Case, MIT Laboratory for Information and Decision Systems, Cambridge, MA, USA, 1990.
  5. D. Galati and M. Simaan, “Effectiveness of the Nash strategies in competitive multi-team target assignment problems,” IEEE Transactions on Aerospace and Electronic Systems, vol. 43, no. 1, pp. 126–134, 2007.
  6. Z. J. Lee, C. Y. Lee, and S. F. Su, “An immunity-based ant colony optimization algorithm for solving weapon–target assignment problem,” Applied Soft Computing, vol. 2, no. 1, pp. 39–47, 2002.
  7. Z. J. Lee, S. F. Su, and C. Y. Lee, “A genetic algorithm with domain knowledge for weapon-target assignment problems,” Journal of the Chinese Institute of Engineers, vol. 25, no. 3, pp. 287–295, 2002.
  8. Z.-J. Lee and W.-L. Lee, “A hybrid search algorithm of ant colony optimization and genetic algorithm applied to weapon-target assignment problems,” in Intelligent Data Engineering and Automated Learning, pp. 278–285, Springer, Berlin, Heidelberg, 2003.
  9. M.-Z. Lee, “Constrained weapon–target assignment: enhanced very large scale neighborhood search algorithm,” IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, vol. 40, no. 1, pp. 198–204, 2010.
  10. B. Xin, J. Chen, J. Zhang, L. Dou, and Z. Peng, “Efficient decision makings for dynamic weapon-target assignment by virtual permutation and tabu search heuristics,” IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), vol. 40, no. 6, pp. 649–662, 2010.
  11. Y. Li and Y. Dong, “Weapon-target assignment based on simulated annealing and discrete particle swarm optimization in cooperative air combat,” Acta Aeronautica Et Astronautica Sinica, vol. 31, no. 3, pp. 626–631, 2010.
  12. P. Chen, B.-j. Shen, L.-s. Zhou, and Y.-w. Chen, “Optimized simulated annealing algorithm for thinning and weighting large planar arrays,” Journal of Zhejiang University SCIENCE C, vol. 11, no. 4, pp. 261–269, 2010.
  13. B. Xin, J. Chen, Z. Peng, L. Dou, and J. Zhang, “An efficient rule-based constructive heuristic to solve dynamic weapon-target assignment problem,” IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, vol. 41, no. 3, pp. 598–606, 2011.
  14. A. G. Fei, L. Y. Zhang, and Q. J. Ding, “Multi-aircraft cooperative fire assignment based on auction algorithm,” Systems Engineering and Electronics, vol. 34, no. 9, pp. 1829–1833, 2012.
  15. Z. R. Bogdanowicz, A. Tolano, K. Patel, and N. P. Coleman, “Optimization of weapon–target pairings based on kill probabilities,” IEEE Transactions on Cybernetics, vol. 43, no. 6, pp. 1835–1844, 2013.
  16. X. Liu, Z. Liu, W. S. Hou, and J. H. Xu, “Improved MOPSO algorithm for multi-objective programming model of weapon-target assignment,” Systems Engineering and Electronics, vol. 35, no. 2, pp. 326–330, 2013.
  17. Y. Zhang, R. N. Yang, J. L. Zuo, and X. Jing, “Weapon-target assignment based on decomposition-based evolutionary multi-objective optimization algorithms,” Systems Engineering and Electronics, vol. 36, no. 12, pp. 2435–2441, 2014.
  18. D. K. Ahner and C. R. Parson, “Optimal multi-stage allocation of weapons to targets using adaptive dynamic programming,” Optimization Letters, vol. 9, no. 8, pp. 1689–1701, 2015.
  19. J. Li, J. Chen, B. Xin, and L. H. Dou, “Solving multi-objective multi-stage weapon target assignment problem via adaptive NSGA-II and adaptive MOEA/D: A comparison study,” in 2015 IEEE Congress on Evolutionary Computation (CEC), pp. 3132–3139, Sendai, Japan, May 2015.
  20. N. Dirik, S. N. Hall, and J. T. Moore, “Maximizing strike aircraft planning efficiency for a given class of ground targets,” Optimization Letters, vol. 9, no. 8, pp. 1729–1748, 2015.
  21. L. Hongtao and K. Fengju, “Adaptive chaos parallel clonal selection algorithm for objective optimization in WTA application,” Optik - International Journal for Light and Electron Optics, vol. 127, no. 6, pp. 3459–3465, 2016.
  22. N. Li, W. Huai, and S. Wang, “The solution of target assignment problem in command and control decision-making behaviour simulation,” Enterprise Information Systems, vol. 11, pp. 1–19, 2017.
  23. Y. Li, Y. Kou, Z. Li, A. Xu, and Y. Chang, “A modified Pareto ant colony optimization approach to solve biobjective weapon-target assignment problem,” International Journal of Aerospace Engineering, vol. 2017, Article ID 1746124, 14 pages, 2017.
  24. K. Deb and H. Jain, “An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: solving problems with box constraints,” IEEE Transactions on Evolutionary Computation, vol. 18, no. 4, pp. 577–601, 2014.
  25. H. Ishibuchi, N. Tsukamoto, and Y. Nojima, “Behavior of evolutionary many-objective optimization,” in Tenth International Conference on Computer Modeling and Simulation (uksim 2008), pp. 266–271, Cambridge, UK, April 2008.
  26. H. Jain and K. Deb, “An evolutionary many-objective optimization algorithm using reference-point based nondominated sorting approach, part II: handling constraints and extending to an adaptive approach,” IEEE Transactions on Evolutionary Computation, vol. 18, no. 4, pp. 602–622, 2014.
  27. W. Mkaouer, M. Kessentini, A. Shaout et al., “Many-objective software remodularization using NSGA-III,” ACM Transactions on Software Engineering and Methodology, vol. 24, no. 3, pp. 1–45, 2015.
  28. F. Chen, J. Zhou, C. Wang, C. Li, and P. Lu, “A modified gravitational search algorithm based on a non-dominated sorting genetic approach for hydro-thermal-wind economic emission dispatching,” Energy, vol. 121, pp. 276–291, 2017.
  29. Y. Zhu, J. Liang, J. Chen, and Z. Ming, “An improved NSGA-III algorithm for feature selection used in intrusion detection,” Knowledge-Based Systems, vol. 116, pp. 74–85, 2017.
  30. H. Seada and K. Deb, “U-NSGA-III: a unified evolutionary optimization procedure for single, multiple, and many objectives: proof-of-principle results,” in International Conference on Evolutionary Multi-Criterion Optimization, pp. 34–49, Guimarães, Portugal, 2015.
  31. Y. Yuan, H. Xu, and B. Wang, “An improved NSGA-III procedure for evolutionary many-objective optimization,” in Proceedings of the 2014 conference on Genetic and evolutionary computation - GECCO '14, pp. 661–668, New York, NY, USA, July 2014.
  32. K. Deb and J. Sundar, “Reference point based multi-objective optimization using evolutionary algorithms,” in Proceedings of the 8th annual conference on Genetic and evolutionary computation - GECCO '06, pp. 635–642, Seattle, WA, USA, July 2006.
  33. R. M. F. Alves and C. R. Lopes, “Using genetic algorithms to minimize the distance and balance the routes for the multiple traveling salesman problem,” in 2015 IEEE Congress on Evolutionary Computation (CEC), pp. 3171–3178, Sendai, Japan, May 2015.
  34. D. W. Gong, X. F. Ji, and X. Y. Sun, “Solving many-objective optimization problems using set-based evolutionary algorithms,” Acta Electronica Sinica, vol. 42, no. 1, pp. 77–83, 2014.
  35. J. Bader, D. Brockhoff, S. Welten, and E. Zitzler, “On using populations of sets in multiobjective optimization,” in International Conference on Evolutionary Multi-Criterion Optimization, pp. 140–154, Nantes, France, April 2009.
  36. E. Zitzler, L. Thiele, and J. Bader, “On set-based multiobjective optimization,” IEEE Transactions on Evolutionary Computation, vol. 14, no. 1, pp. 58–79, 2010.
  37. D. Gong, J. Sun, and Z. Miao, “A set-based genetic algorithm for interval many-objective optimization problems,” IEEE Transactions on Evolutionary Computation, vol. 22, no. 1, pp. 47–60, 2018.
  38. H. Ishibuchi and T. Murata, “A multi-objective genetic local search algorithm and its application to flowshop scheduling,” IEEE Transactions on Systems, Man and Cybernetics, Part C (Applications and Reviews), vol. 28, no. 3, pp. 392–403, 1998.
  39. H. Ishibuchi, T. Yoshida, and T. Murata, “Balance between genetic search and local search in memetic algorithms for multiobjective permutation flowshop scheduling,” IEEE Transactions on Evolutionary Computation, vol. 7, no. 2, pp. 204–223, 2003.
  40. H. Ishibuchi and K. Narukawa, “Performance evaluation of simple multiobjective genetic local search algorithms on multiobjective 0/1 knapsack problems,” in Proceedings of the 2004 Congress on Evolutionary Computation (IEEE Cat. No.04TH8753), pp. 441–448, Portland, OR, USA, June 2004.
  41. J. D. Knowles and D. W. Corne, “M-PAES: a memetic algorithm for multiobjective optimization,” in Proceedings of the 2000 Congress on Evolutionary Computation. CEC00 (Cat. No.00TH8512), pp. 325–332, La Jolla, CA, USA, July 2000.
  42. K. Harada, J. Sakuma, K. Ikeda, I. Ono, and S. Kobayashi, “Local search for multiobjective function optimization: Pareto descent method,” Transactions of the Japanese Society for Artificial Intelligence, vol. 21, no. 4, pp. 350–360, 2006.
  43. P. A. N. Bosman, “On gradients and hybrid evolutionary algorithms for real-valued multiobjective optimization,” IEEE Transactions on Evolutionary Computation, vol. 16, no. 1, pp. 51–69, 2012.
  44. K. Sindhya, K. Deb, and K. Miettinen, “A local search based evolutionary multi-objective optimization approach for fast and accurate convergence,” in International Conference on Parallel Problem Solving from Nature, pp. 815–824, Dortmund, Germany, September 2008.
  45. L. V. Santana-Quintero, A. G. Hernández-Díaz, J. Molina, C. A. Coello Coello, and R. Caballero, “DEMORS: a hybrid multi-objective optimization algorithm using differential evolution and rough set theory for constrained problems,” Computers & Operations Research, vol. 37, no. 3, pp. 470–480, 2010.
  46. A. Jaszkiewicz, “Genetic local search for multi-objective combinatorial optimization,” European Journal of Operational Research, vol. 137, no. 1, pp. 50–71, 2002. View at Publisher · View at Google Scholar · View at Scopus
  47. M. Abouhawwash, H. Seada, and K. Deb, “Towards faster convergence of evolutionary multi-criterion optimization algorithms using Karush Kuhn Tucker optimality based local search,” Computers & Operations Research, vol. 79, pp. 331–346, 2017. View at Publisher · View at Google Scholar · View at Scopus
  48. A. P. Wierzbicki, “The use of reference objectives in multiobjective optimization,” in Multiple criteria decision making theory and application, pp. 468–486, Springer, Berlin Heidelberg, 1980. View at Publisher · View at Google Scholar
  49. A. Mohammadi, M. N. Omidvar, and X. Li, “Reference point based multi-objective optimization through decomposition,” in 2012 IEEE Congress on Evolutionary Computation, pp. 1–8, Brisbane, QLD, Australia, June 2012. View at Publisher · View at Google Scholar · View at Scopus
  50. J. R. Figueira, A. Liefooghe, E. G. Talbi, and A. P. Wierzbicki, “A parallel multiple reference point approach for multi-objective optimization,” European Journal of Operational Research, vol. 205, no. 2, pp. 390–400, 2010. View at Publisher · View at Google Scholar · View at Scopus
  51. R. Wang, R. C. Purshouse, and P. J. Fleming, “Preference-inspired coevolutionary algorithms for many-objective optimization,” IEEE Transactions on Evolutionary Computation, vol. 17, no. 4, pp. 474–494, 2013. View at Publisher · View at Google Scholar · View at Scopus
  52. Y. Liu, D. Gong, X. Sun, and Y. Zhang, “Many-objective evolutionary optimization based on reference points,” Applied Soft Computing, vol. 50, pp. 344–355, 2017. View at Publisher · View at Google Scholar · View at Scopus
  53. E. Zitzler and S. Künzli, “Indicator-based selection in multiobjective search,” in International Conference on Parallel Problem Solving from Nature, pp. 832–842, Birmingham, UK, September 2004. View at Publisher · View at Google Scholar
  54. J. Bader and E. Zitzler, “HypE: an algorithm for fast hypervolume-based many-objective optimization,” Evolutionary Computation, vol. 19, no. 1, pp. 45–76, 2011. View at Publisher · View at Google Scholar · View at Scopus
  55. M. Wagner and F. Neumann, “A fast approximation-guided evolutionary multi-objective algorithm,” in Proceeding of the fifteenth annual conference on Genetic and evolutionary computation conference - GECCO '13, pp. 687–694, Amsterdam, Netherlands, July 2013. View at Publisher · View at Google Scholar · View at Scopus
  56. R. H. Gómez and C. A. Coello Coello, “Improved metaheuristic based on the R2 indicator for many-objective optimization,” in Proceedings of the 2015 on Genetic and Evolutionary Computation Conference - GECCO '15, pp. 679–686, Madrid, Spain, July 2015. View at Publisher · View at Google Scholar · View at Scopus
  57. Y. Liu, D. Gong, J. Sun, and Y. Jin, “A many-objective evolutionary algorithm using a one-by-one selection strategy,” IEEE Transactions on Cybernetics, vol. 47, no. 9, pp. 2689–2702, 2017. View at Publisher · View at Google Scholar · View at Scopus
  58. I. Das and J. E. Dennis, “Normal-boundary intersection: a new method for generating the Pareto surface in nonlinear multicriteria optimization problems,” SIAM Journal on Optimization, vol. 8, no. 3, pp. 631–657, 1998. View at Publisher · View at Google Scholar · View at Scopus
  59. K. Li, Á. Fialho, S. Kwong, and Q. Zhang, “Adaptive operator selection with bandits for a multiobjective evolutionary algorithm based on decomposition,” IEEE Transactions on Evolutionary Computation, vol. 18, no. 1, pp. 114–130, 2014. View at Publisher · View at Google Scholar · View at Scopus
  60. W. Gong, Á. Fialho, Z. Cai, and H. Li, “Adaptive strategy selection in differential evolution for numerical optimization: an empirical study,” Information Sciences, vol. 181, no. 24, pp. 5364–5386, 2011. View at Publisher · View at Google Scholar · View at Scopus
  61. K. Deb and R. B. Agrawal, “Simulated binary crossover for continuous search space,” Complex Systems, vol. 9, no. 3, pp. 115–148, 1995. View at Google Scholar
  62. S. Biswas, S. Kundu, and S. Das, “Inducing niching behavior in differential evolution through local information sharing,” IEEE Transactions on Evolutionary Computation, vol. 19, no. 2, pp. 246–263, 2015. View at Publisher · View at Google Scholar · View at Scopus
  63. K. Sindhya, S. Ruuska, T. Haanpää, and K. Miettinen, “A new hybrid mutation operator for multiobjective optimization with differential evolution,” Soft Computing, vol. 15, no. 10, pp. 2041–2055, 2011. View at Publisher · View at Google Scholar · View at Scopus
  64. X. Bi and C. Wang, “An improved NSGA-III algorithm based on objective space decomposition for many-objective optimization,” Soft Computing, vol. 21, no. 15, pp. 4269–4296, 2017. View at Publisher · View at Google Scholar · View at Scopus
  65. K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm: NSGA-II,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002. View at Publisher · View at Google Scholar · View at Scopus
  66. Q. Zhang and H. Li, “MOEA/D: a multiobjective evolutionary algorithm based on decomposition,” IEEE Transactions on Evolutionary Computation, vol. 11, no. 6, pp. 712–731, 2007. View at Publisher · View at Google Scholar · View at Scopus
  67. K. Deb, L. Thiele, M. Laumanns, and E. Zitzler, “Scalable test problems for evolutionary multiobjective optimization,” in Evolutionary Multiobjective Optimization, A. Abraham, L. Jain, and R. Goldberg, Eds., Advanced Information and Knowledge Processing, pp. 105–145, Springer, London, 2005. View at Publisher · View at Google Scholar
  68. Y. Yuan, H. Xu, and B. Wang, “An experimental investigation of variation operators in reference-point based many-objective optimization,” in Proceedings of the 2015 on Genetic and Evolutionary Computation Conference - GECCO '15, pp. 775–782, New York, NY, USA, July 2015. View at Publisher · View at Google Scholar · View at Scopus
  69. E. Zitzler, L. Thiele, M. Laumanns, C. M. Fonseca, and V. G. da Fonseca, “Performance assessment of multiobjective optimizers: an analysis and review,” IEEE Transactions on Evolutionary Computation, vol. 7, no. 2, pp. 117–132, 2003. View at Publisher · View at Google Scholar · View at Scopus