Research Article  Open Access
Yanjiao Wang, Xiaonan Sun, "A Many-Objective Optimization Algorithm Based on Weight Vector Adjustment", Computational Intelligence and Neuroscience, vol. 2018, Article ID 4527968, 21 pages, 2018. https://doi.org/10.1155/2018/4527968
A Many-Objective Optimization Algorithm Based on Weight Vector Adjustment
Abstract
In order to improve the convergence and distribution of many-objective evolutionary algorithms, this paper proposes an improved NSGA-III algorithm based on weight vector adjustment (called NSGA-III-WA). First, an adaptive weight vector adjustment strategy is proposed that decomposes the objective space into several subspaces. According to the density of each subspace, the weight vectors are sparsified or densified to keep their distribution uniform on the Pareto front. Second, an evolutionary model that combines a new differential evolution strategy with a genetic evolution strategy is proposed to generate new individuals and enhance the exploration ability of the weight vectors in each subspace. The proposed algorithm is tested on 3- to 15-objective instances of the DTLZ standard test set and the WFG test instances, and it is compared with five well-performing algorithms. The Wilcoxon rank-sum test is used to assess the statistical significance of the results. The experimental results show that NSGA-III-WA performs well in terms of both convergence and distribution.
1. Introduction
Many-objective optimization problems (MAOPs) [1] are optimization problems with more than three objectives that must be optimized simultaneously. As the number of objectives increases, the number of nondominated solutions grows exponentially [2]: most solutions become nondominated, it becomes harder to distinguish better solutions from worse ones, and many-objective evolutionary algorithms (MOEAs) consequently show poor convergence. In addition, the sensitivity of the Pareto front surface [3] increases with the spatial dimension, which makes it difficult to maintain the distribution among individuals.
In order to ensure the convergence and distribution of MOEAs [4], scholars have proposed the following three approaches: (1) Change the dominance relationship [5] to increase the selection pressure and thereby improve convergence. In 2005, Laumanns et al. proposed ε-domination [6], which scales each individual's fitness by a factor of (1 − ε) before the Pareto dominance comparison. In 2014, the ϵ-dominance mechanism proposed by Hernández-Díaz et al. [7] added an acceptable threshold to the comparison of individual fitness. In 2014, Yuan et al. proposed θ-domination [8] to balance convergence and diversity in evolutionary many-objective optimization (EMO). In 2015, Jingfeng et al. proposed a simple Pareto-adaptive ε-domination differential evolution algorithm for multiobjective optimization [9]. In 2016, Yue et al. proposed a grid-based evolutionary algorithm [10] that modifies the dominance criterion to increase the convergence speed of EMO. In 2017, Lin et al. proposed an evolutionary many-objective optimization method based on alpha dominance, which enforces strict Pareto stratification [11] to remove most dominance-resistant solutions. These methods can enhance the selection pressure, but such relaxed dominance strategies only cope well with a small number of objectives, and their parameters must be retuned for each optimization problem, which is difficult. (2) Decomposition-based methods [12], which decompose the objective space into several subspaces without changing the objective dimension, thus transforming an MAOP into single-objective or many-objective subproblems. In 2007, Zhang and Li first proposed the multiobjective evolutionary algorithm based on decomposition (MOEA/D) [13], whose convergence is significantly better than that of MOGLS and NSGA-II. In 2016, Yuan et al.
proposed MOEA/DD [14], which uses a distance-based update strategy to maintain diversity during evolution by exploiting the perpendicular distance between solutions and weight vectors. In 2017, Segura and Miranda proposed the decomposition-based evolutionary algorithm MOEA/D-EVSD [15], which uses a steady-state form and a reference direction to guide the search. In 2017, Xiang et al. proposed the VAEA framework [16] based on angle decomposition; the algorithm requires no reference points and balances convergence and diversity well in many-objective spaces. However, the self-adjusting nature of the algorithms above makes them fall into local optima more easily even though the convergence speed improves, and the distribution remains unsatisfactory. (3) Reference point methods, which decompose an MAOP into a group of many-objective optimization subproblems with simple front surfaces [17]. Unlike pure decomposition, however, each subproblem is solved with a many-objective optimization method. In 2013, Wang et al. [18] proposed PICEA-w, in which the set of weight vectors co-evolves with the population and each weight vector adapts according to its own optimal solution. In 2014, Qi et al. adopted an enhanced weight vector adjustment method in the MOEA/D-AWA algorithm [19]. In 2014, Deb and Jain proposed the reference-point-based nondominated sorting many-objective optimization algorithm NSGA-III [20], whose reference points are uniformly distributed throughout the objective space. In the same year, Liu et al. proposed the MOEA/D-M2M method, which divides the objective space into multiple subspaces so that the whole PF is split into multiple segments that are solved separately; each segment corresponds to a many-objective optimization subproblem [21], which improves the distribution of the solution set.
In 2016, Bi and Wang proposed NSGA-III-OSD [22], an improved NSGA-III many-objective optimization algorithm based on objective space decomposition. The uniformly distributed weight vectors are partitioned into several subspaces through clustering, so that each weight vector specifies a unique subregion. A smaller objective space helps overcome the ineffectiveness of the Pareto dominance relationship in many objectives [23]; however, because the subspaces are fixed, solutions become sparse at the subspace edges and the distribution and uniformity over the solution surface deteriorate. In 2016, Cheng et al. proposed the reference-vector-guided evolutionary algorithm RVEA [24] for solving MAOPs; its adaptive strategy dynamically adjusts the weight vectors according to the form of the objective functions. The weight vectors generated by the methods above are uniformly distributed, but the reference points on the solution surface cannot be guaranteed to be uniform, and convergence may also be lost.
In order to further improve the convergence and distribution of many-objective algorithms, a many-objective optimization algorithm based on weight vector adjustment (NSGA-III-WA), built on NSGA-III, is proposed. First, to enhance the exploration of the solutions toward the weight vectors, an evolutionary model that integrates a novel differential evolution strategy with a genetic evolution strategy is used to generate new individuals. Then, to ensure a uniform distribution of the weight vectors on the solution surface, the objective space is divided into several subspaces by clustering the objective vectors, and the weights of each subspace are adjusted to improve the distribution over the objective space. Simulation experiments are carried out on the DTLZ [25] and WFG [26] standard test sets, and the proposed algorithm is compared with five currently well-performing algorithms on problems with 3 to 15 objectives, using GD, IGD, and HV as performance indicators. The experimental results show that NSGA-III-WA achieves good convergence and distribution.
The rest of the paper is organized as follows. Section 2 introduces the original algorithm. Section 3 describes the proposed manyobjective evolutionary algorithm. Section 4 compares the similarities and differences between this algorithm and similar algorithms. Section 5 gives the experimental parameters of each algorithm and comprehensive experiments and analysis. Finally, Section 6 summarizes the full text and points out the issues to be studied next.
2. NSGA-III
The NSGA-III algorithm resembles the NSGA-II algorithm in that it selects individuals based on nondominated sorting; the two differ in how individuals are chosen after sorting. The NSGA-III algorithm proceeds as follows:
First, a parent population P_{t} of size N is created and operated on by genetic operators (selection, recombination, and mutation) to obtain a population Q_{t} of the same size; P_{t} and Q_{t} are then merged into a population R_{t} of size 2N.
The population R_{t} is subjected to nondominated sorting, yielding layers of individuals by nondominated level (F1, F2, and so on). These layers are added in order to the next-generation set S_{t} until its size first reaches or exceeds N; the level at which this happens is denoted F_{l}. K individuals are then picked from F_{l} so that K plus the sizes of all previous levels equals N. Before this selection, the objective values are normalized using the ideal point and the extreme points; after normalization, the ideal point of S_{t} is the zero vector, and the supplied reference points lie exactly on the normalized hyperplane. The perpendicular distance between each individual in S_{t} and each weight vector (the line connecting the origin to a reference point) is calculated, and each individual in S_{t} is associated with the reference point at minimum perpendicular distance.
Finally, a niching operation selects members from F_{l}. A reference point may be associated with one or more objective vectors, or with none at all. The purpose of the niching operation is to select, from the F_{l} layer, the K members closest to under-represented reference points for the next generation. First, for each reference point, count the number of individuals in S_{t} \ F_{l} associated with it (its niche count). The specific operation is as follows:
When the niche count of a reference point is zero, the next step depends on whether any individual in F_{l} is associated with that reference point. If one or more are, the one at the smallest perpendicular distance is extracted, added to the next generation, and the niche count is incremented. If no individual in F_{l} is associated with the reference point, the reference point vector is discarded for the current generation. If the niche count is greater than zero, an individual in F_{l} associated with that reference point is selected. This continues until the population size reaches N.
3. The Proposed Algorithm
3.1. The Proposed Algorithm Framework
In order to further improve the convergence speed and distribution of the NSGA-III algorithm, a many-objective optimization algorithm based on weight vector adjustment (NSGA-III-WA) is proposed. Algorithm 1 gives the framework of the NSGA-III-WA algorithm. The algorithm improves on NSGA-III in two main respects: the evolution strategy and the weight vectors. This paper also adds a condition for enabling weight vector adjustment, which speeds up the algorithm without affecting its performance. First, we initialize the population P_{t} of size N and the weight vectors W_unit. Second, we enter the iteration loop: the population Q_{t} is generated by applying the differential operator to P_{t}, and the combined population R_{t} of size 2N is obtained from P_{t} and Q_{t}. R_{t} is then filtered through the environmental selection strategy to obtain the next generation P_{t+1}. Lastly, the weight vectors are adjusted and the termination condition is checked; if satisfied, the current result is output and the algorithm terminates, otherwise iteration continues.

3.2. Initialization
The initial population is randomly generated with size equal to the number of weight vectors in the space. This article uses Das and Dennis's systematic method [27] to set the weight vectors. With M objective functions and H divisions along each objective, the total number of weight vectors equals the binomial coefficient C(H + M − 1, M − 1). The initialized weight vectors (reference points) are uniformly distributed in the objective space: each component of a weight vector is taken from {0/H, 1/H, ..., H/H}, subject to the components summing to 1.
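The systematic construction above can be sketched in Python as follows. This is a minimal illustration of Das and Dennis's method, not the authors' code: it enumerates every vector whose components are multiples of 1/H and sum to 1.

```python
from itertools import combinations

def das_dennis(M, H):
    """Generate uniformly spaced weight vectors on the unit simplex.

    Each component is a multiple of 1/H and the components sum to 1, so
    the number of vectors equals C(H + M - 1, M - 1).
    """
    vectors = []
    # Choose M-1 "dividers" among H + M - 1 slots; the gap sizes between
    # consecutive dividers give the integer parts of one composition of H.
    for dividers in combinations(range(H + M - 1), M - 1):
        parts, prev = [], -1
        for d in dividers:
            parts.append(d - prev - 1)
            prev = d
        parts.append(H + M - 2 - prev)
        vectors.append(tuple(p / H for p in parts))
    return vectors

weights = das_dennis(M=3, H=4)
print(len(weights))  # C(6, 2) = 15 vectors
```

For example, with M = 3 objectives and H = 4 divisions this yields the 15 reference points used throughout the NSGA-III literature for small objective counts.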
3.3. Evolutionary Strategy
The evolutionary strategy is essential to the convergence speed and solution accuracy, because it directly determines the quality of the new solutions to the subproblems during evolution. In order to improve the convergence speed, this paper proposes a new differential evolution strategy to replace the original one; its pseudocode is shown in Algorithm 2. Every individual performs the same operations, as follows.

3.3.1. Variation
It is mainly divided into two parts: (a) Select three individuals randomly from the population, and obtain a new individual using (1) for parental vector mutation to maintain population diversity. At the beginning of the run, the mutation rate should be relatively large so that individuals differ from one another; this improves the search ability of the algorithm and prevents individuals from falling into local optima. As the number of iterations increases and the solutions approach the Pareto-optimal front surfaces (PFs) [28], the mutation rate should decrease to accelerate convergence to the optimum; this both improves the convergence speed and reduces the complexity of the algorithm. Based on this analysis of the algorithm's needs, this paper proposes an adaptive mutation factor: as the number of iterations increases, the mutation rate shrinks. Early on, it strengthens an individual's ability to jump out of local optima and find superior individuals; as the mutation rate becomes smaller, the algorithm stabilizes. To maintain population diversity, individuals are selected to generate a new individual (line 3 in Algorithm 2) through the simulated binary recombination operator, (2) and (3). In (4), one parameter is a random number in [0, 1] and the other is a constant fixed at 20. (b) Select one individual by probability from those generated in step (a) to execute the crossover operation (lines 4 to 12 in Algorithm 2). Specifically, generate a random number in [0, 1] and compare it with a threshold, here set to 0.5. If the number is smaller than the threshold, execute lines 5–9 of the pseudocode in Algorithm 2 (generate another random number in [0, 1] and compare it with 0.5: if it is smaller, select the first candidate to enter the crossover operation, otherwise select the second); otherwise, the remaining candidate enters the crossover operation.
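A sketch of the adaptive mutation step is given below. The paper's exact factor schedule (its equation (4)) is not reproduced in this excerpt, so the linear decay from `f0` to `f_min` is an assumption that captures only the stated intent: a large mutation rate early in the run and a small one late.

```python
import random

def de_mutation(pop, i, t, t_max, f0=0.9, f_min=0.1):
    """DE/rand/1 mutation with an adaptive scale factor.

    The scale factor F decays linearly from f0 to f_min over the run
    (an assumed schedule standing in for the paper's equation (4)):
    strong perturbation early to escape local optima, weak perturbation
    late to stabilize convergence.
    """
    F = f_min + (f0 - f_min) * (1 - t / t_max)  # assumed decay schedule
    # Pick three distinct individuals other than the current one.
    r1, r2, r3 = random.sample([j for j in range(len(pop)) if j != i], 3)
    x1, x2, x3 = pop[r1], pop[r2], pop[r3]
    # Mutant vector: base vector plus scaled difference vector.
    return [a + F * (b - c) for a, b, c in zip(x1, x2, x3)]
```

The mutant produced here would then be passed to the probabilistic selection and crossover steps described above.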
3.3.2. Crossover
This article adopts the single-point crossover method, located in lines 13–20 of the pseudocode in Algorithm 2; in this way, individual selectivity is enhanced. Generate a random number in [0, 1]. If it is less than or equal to the crossover operator CR, randomly select one dimension of the individual and execute the crossover at the selected point. Extensive experiments confirmed that CR = 0.4 gives better results. The generated individuals and their fitness values then replace the original ones.
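The single-point crossover just described can be sketched as follows. This is an illustrative reimplementation, not the authors' code: CR = 0.4 follows the paper's reported setting, and the tail-swap reading of "execute the crossover operations at the selected point" is an assumption.

```python
import random

def single_point_crossover(parent, mutant, cr=0.4):
    """Single-point crossover between a parent and a mutant vector.

    With probability <= CR, pick one random dimension and replace the
    parent's segment from that point onward with the mutant's segment
    (the assumed meaning of crossing "at the selected point").
    """
    child = list(parent)
    if random.random() <= cr:
        point = random.randrange(len(parent))
        child[point:] = mutant[point:]
    return child
```

With CR = 0, the parent passes through unchanged; with CR = 1, a crossover point is always chosen.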
3.4. Environmental Selection
The purpose of environmental selection is to choose the next generation of individuals. The framework of Algorithm 3 comprises the following steps: (1) Nondominated sorting is performed on the population R_{t}, and individuals of rank 1, 2, 3, ... are added in order to the offspring set S_{t}. (2) When the size of S_{t} first reaches or exceeds N, record the current nondominated level F_{l}; if the size equals N exactly, the operation terminates, and otherwise the next step is entered. (3) Individuals are selected from F_{l} and added to P_{t+1} until its size is N. The specific operations are discussed below.
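Step (1) above, the nondominated sorting of R_{t}, can be sketched as follows. This is a plain O(N²·M) illustration for minimization problems, not the bookkeeping-optimized routine from the NSGA papers.

```python
def nondominated_sort(objs):
    """Return fronts F1, F2, ... as lists of indices (minimization)."""
    n = len(objs)
    dominates = lambda a, b: (all(x <= y for x, y in zip(a, b))
                              and any(x < y for x, y in zip(a, b)))
    dom_count = [0] * n                  # how many solutions dominate i
    dominated = [[] for _ in range(n)]   # solutions that i dominates
    for i in range(n):
        for j in range(n):
            if i != j and dominates(objs[i], objs[j]):
                dominated[i].append(j)
            elif i != j and dominates(objs[j], objs[i]):
                dom_count[i] += 1
    # Peel off fronts: rank-1 members dominate-count 0, then iterate.
    fronts, current = [], [i for i in range(n) if dom_count[i] == 0]
    while current:
        fronts.append(current)
        nxt = []
        for i in current:
            for j in dominated[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        current = nxt
    return fronts
```

The fronts returned here correspond to the levels F1, F2, ... that are appended to S_{t} in order.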

3.4.1. Normalize Objective
Since the objective values differ in magnitude, they must be normalized for the sake of fairness. First, calculate the minimum value of each objective function over the population; these minima constitute the ideal point. All individuals are then normalized according to (5), where the denominator for each dimension is the intercept on that objective axis, calculated via the achievement scalarizing function (ASF) shown in (6).
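The normalization can be sketched as follows. Equations (5) and (6) are not reproduced in this excerpt, so the intercepts here are approximated by the per-axis maxima of the translated objectives, a common fallback; the full procedure instead finds M extreme points via the ASF and intersects their hyperplane with each axis.

```python
def normalize(objs, eps=1e-6):
    """Normalize objective vectors by the ideal point and axis intercepts.

    Ideal point: per-objective minima over the population. Intercepts:
    approximated by per-axis maxima of the shifted objectives (a simple
    stand-in for the ASF-based extreme-point construction).
    """
    M = len(objs[0])
    ideal = [min(f[m] for f in objs) for m in range(M)]
    shifted = [[f[m] - ideal[m] for m in range(M)] for f in objs]
    # Guard against a zero intercept on a degenerate axis.
    intercepts = [max(s[m] for s in shifted) or eps for m in range(M)]
    return [[s[m] / intercepts[m] for m in range(M)] for s in shifted]
```

After this step, the ideal point of S_{t} maps to the zero vector and each axis is scaled to roughly unit range, as the association step assumes.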
3.4.2. Associate Each Member of S_{t} with a Reference Point
In order to associate each individual in S_{t} with a reference point after normalization, a reference line is defined for each reference point on the hyperplane. In the normalized objective space, each population member is associated with the reference point whose reference line lies at the shortest perpendicular distance.
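The association step can be sketched as follows: each normalized objective vector is attached to the reference line (origin through reference point) at minimum perpendicular distance. This is an illustrative sketch, not the authors' code.

```python
import math

def perpendicular_distance(point, ref):
    """Distance from a normalized objective vector to the line through
    the origin and a reference point."""
    norm = math.sqrt(sum(r * r for r in ref))
    unit = [r / norm for r in ref]
    proj = sum(p * u for p, u in zip(point, unit))  # scalar projection
    return math.sqrt(sum((p - proj * u) ** 2 for p, u in zip(point, unit)))

def associate(points, refs):
    """Return, for each point, the index of its closest reference line."""
    return [min(range(len(refs)),
                key=lambda k: perpendicular_distance(p, refs[k]))
            for p in points]
```

The resulting assignment is what the niche counts in the next two subsections are computed from.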
3.4.3. Compute the Niche Count of the Reference Point
Traversing every individual in the population, calculate the distances between it and all reference points, and record for each reference point its niche count, i.e., the number of individuals associated with it.
3.4.4. Niche Preservation Operation
If the niche count of a reference point is zero but some individual in F_{l} is associated with it, find the associated point at the smallest distance, extract it from F_{l}, and add it to the selected next-generation population, incrementing the niche count by one. If no individual in F_{l} is associated with the reference point, the reference point vector is deleted. If the niche count is not zero, an individual in F_{l} associated with the reference point is selected. This is repeated until the population size reaches N.
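The preservation rule above can be sketched as follows. The function and parameter names (`rho`, `assoc_fl`, `dist_fl`) are hypothetical, and the random pick for already-occupied reference points follows the standard NSGA-III convention rather than anything stated explicitly in this excerpt.

```python
import random

def niche_select(K, rho, assoc_fl, dist_fl):
    """Fill K slots from the last front F_l by niche counts.

    rho:      niche count per reference point over S_t \ F_l
    assoc_fl: reference index -> list of F_l member indices
    dist_fl:  (member, ref) -> perpendicular distance
    Reference points with no F_l members are discarded for the loop.
    """
    chosen, rho = [], dict(rho)
    while len(chosen) < K:
        j = min(rho, key=rho.get)          # least-crowded reference point
        members = assoc_fl.get(j, [])
        if not members:
            del rho[j]                     # nothing left in F_l to add
            continue
        if rho[j] == 0:
            # Empty niche: take the closest F_l member deterministically.
            pick = min(members, key=lambda i: dist_fl[(i, j)])
        else:
            pick = random.choice(members)
        chosen.append(pick)
        members.remove(pick)
        rho[j] += 1
    return chosen
```

Each selection increments the niche count, so subsequent iterations naturally move on to the next least-crowded reference point.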
3.5. Weight Adjustment
Although the weight vectors are distributed uniformly in space, a uniform solution surface cannot be achieved once the algorithm reaches a certain stable state, because of the complexity caused by the irregular shapes of the PFs of the objective functions. The distribution of weight vectors is particularly important when all individuals lie on the first dominance level and are indistinguishable from one another. Therefore, in order to improve the distribution of many-objective algorithms, a weight vector adjustment strategy is proposed, whose framework is shown in Algorithm 4. The distribution of weight vectors is adjusted appropriately according to the shape of the nondominated front. To prevent the weight vector adjustment in high-dimensional space from concentrating on a single objective, the K-means clustering method is used to divide the weight vectors into different subspaces. The specific operations are described below.

First, each weight vector is associated with a population member (line 1 in Algorithm 4). Second, the solution space is decomposed into several subspaces using the K-means clustering method (lines 2–5 in Algorithm 4), as shown in Figure 1. To prevent errors caused by excessive differences within the solution set, a subspace should be neither too large nor too small; the clusters are sized by a parameter C according to the population size. Extensive experiments confirmed that C = 13 usually gives better results, so the solution set is decomposed into [N/C] cluster subspaces. The weight vectors are then adjusted by comparing the density of the entire objective space with that of each subspace (lines 6–17 in Algorithm 4); as shown in Figure 2, a weight vector is moved away from a crowded neighbor and toward a sparse region. Finally, the number of weight vectors is adjusted to match the original number: if it exceeds N, a weight vector is deleted at the densest position in the objective space; if it is less than N, a weight vector is added at a sparse position (lines 18–20 in Algorithm 4).
Spatial density is defined by averaging the distances between similar individuals in the population. The minimum admissible spatial density is set through a coefficient h1 and the maximum through a coefficient h2; under normal circumstances, h1 = 0.2 and h2 = 1.3 give relatively good results. Denote by one quantity the density of the overall objective space and by another the density of a subspace. The adjustment process splits into the two situations described below: (1) When the subspace density is less than the objective-space density, determine whether it is too small. If the subspace density is below the minimum spatial density, the subspace is considered too crowded and the weight vectors should be evacuated: the two nearest neighboring weight vectors are taken, their sum vector is added to the set of weight vectors, and the parent vectors are deleted. Otherwise, the weight vectors are fine-tuned to achieve uniformity across the objective plane: according to the density difference, the two nearest weight vectors in the subspace are adjusted by (7) and (8), in which the two adjusted vectors are the closest pair, the minimum distance and the density value enter the update, and the remaining vectors are the neighbor weights of the respective weights. (2) When the subspace density is greater than the objective-space density, determine whether it is too large. If the subspace density exceeds the maximum spatial density, the subspace is too sparse and the weight vectors should be aggregated: the two furthest neighboring weight vectors are taken and their sum vector is added to the set of weight vectors. Otherwise, according to the density difference, the weight vectors are adjusted by (9) and (10), in which the two adjusted vectors are the furthest pair and the maximum distance and density value enter the update.
It is worth emphasizing that the edge vectors are immovable; otherwise, the search range of the algorithm would be affected. In this paper, half of the maximum number of iterations is used as the enabling condition for the weight vector adjustment strategy, and the adjustment is applied every four generations. By this time, the objective vectors have approached the PFs, so the guidance of the weight vectors is relatively accurate and the population update is relatively stable (i.e., close to the PF).
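The density comparison driving the adjustment can be sketched as follows. The exact density formulas and the update equations (7)–(10) are not reproduced in this excerpt, so the nearest-neighbour averaging and the threshold rule below are assumptions matching only the verbal description (with h1 = 0.2 and h2 = 1.3 as reported).

```python
def mean_nn_distance(vecs):
    """The paper defines spatial 'density' by averaging the distances
    between similar individuals, so a SMALLER value means a MORE crowded
    region. Averaging each vector's nearest-neighbour distance is an
    assumed concrete reading of that definition."""
    d = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return sum(min(d(v, u) for u in vecs if u is not v)
               for v in vecs) / len(vecs)

def classify_subspace(sub_density, global_density, h1=0.2, h2=1.3):
    """Map a subspace to the action described in the text: 'evacuate'
    (too crowded), 'aggregate' (too sparse), or 'fine-tune' otherwise."""
    if sub_density < global_density and sub_density < h1 * global_density:
        return "evacuate"
    if sub_density > global_density and sub_density > h2 * global_density:
        return "aggregate"
    return "fine-tune"
```

Under this reading, the fine-tuning branch covers the moderate cases between the h1 and h2 thresholds, exactly as in the two-situation rule above.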
4. Discussion
The previous section described the NSGA-III-WA algorithm in detail. In this section, we compare the similarities and differences among NSGA-III-WA, NSGA-III, and VAEA.
4.1. The Similarities and Differences between NSGA-III-WA and NSGA-III
(a) Both algorithms use Pareto dominance to select individuals. (b) The evolution strategies differ: NSGA-III adopts the original genetic evolution strategy, while NSGA-III-WA adopts a new differential evolution strategy to optimize individuals, which yields better convergence speed and solution accuracy. (c) The strategies for dividing the objective space differ: NSGA-III uses the original weight vector generation method to divide the objective space evenly, whereas NSGA-III-WA divides the objective space into several subspaces and adjusts the weight vectors according to the individual density of each. This better ensures the uniformity of the weight vectors on the objective surface and thus the uniformity of the solution set.
4.2. The Similarities and Differences between NSGA-III-WA and VAEA
(a) Both algorithms use Pareto dominance to select individuals. (b) Both algorithms normalize the population. The difference is that VAEA normalizes according to the ideal and nadir points of the population, while NSGA-III-WA obtains the intercept on each objective axis by calculating the ASF and then normalizes; the latter is more universal and more reasonable. (c) Both algorithms have association operations. VAEA does not associate individuals with reference points but with one another, and thus cannot guarantee the distribution of individuals. NSGA-III-WA associates individuals with weight vectors to improve the distribution of the algorithm.
5. Simulation Results
In order to verify the performance of the proposed algorithm on MAOPs, this paper selects the widely used DTLZ and WFG test function suites in the field of many-objective optimization for simulation experiments. The proposed algorithm is compared with five representative algorithms, MOEA/D, NSGA-III, VAEA, RVEA, and MOEA/D-M2M, with 3, 5, 8, 10, and 15 objectives on the DTLZ1–DTLZ6 test functions and WFG1–WFG4 test instances. Performance indicators GD [29], IGD [30], and HV [31, 32] are used for the comparative analysis: the corresponding parameter settings for each algorithm are briefly introduced first, and the experimental results are then presented, compared, and analyzed.
5.1. General Experimental Settings
The number of decision variables for the DTLZ test functions is V = M + k − 1, where M is the number of objective functions, with k = 5 for DTLZ1 and k = 10 for DTLZ2–DTLZ6. The number of decision variables for the WFG test functions is the sum of the number of position-related variables and the number of distance-related variables, both set in terms of the objective dimension M. The population sizes of NSGA-III and RVEA are related to the uniformly distributed weight scale and determined by the combination of M and the number of divisions p on each objective; the double-layer distribution method of [13] is adopted to handle this. The specific parameter settings are given in Table 1. For fair comparison, the population sizes of the other algorithms are set the same. Each algorithm runs independently 30 times on each test function, using the maximum number of function evaluations (MFE) as the termination condition for each run.

Because the objective dimension varies across problems, the MFE varies as well; following [16], the specific settings are shown in Table 2. The maximum number of iterations is computed from the MFE and the population size. The parameter settings of NSGA-III and MOEA/D are shown in Table 3. In addition, the MOEA/D neighborhood size T is 10, the RVEA penalty factor change rate is 2, and the VAEA angle threshold follows its original setting.


5.2. Results and Analysis
In order to verify the statistical significance of the proposed NSGA-III-WA algorithm's performance on many-objective optimization problems, the general performance indicators GD, IGD, and HV were used. It is compared with five representative algorithms, MOEA/D, NSGA-III, VAEA, RVEA, and MOEA/D-M2M, with 3, 5, 8, 10, and 15 objectives on the DTLZ1–DTLZ6 test functions and WFG1–WFG4 test instances.
5.2.1. Testing and Analysis of DTLZ Series Functions
This section presents the GD, IGD, and HV results for the DTLZ1–DTLZ6 test functions. The experimental results are shown in Tables 4–6 as the mean values and standard deviations over 30 independent runs. The best results are shown in bold, the values in parentheses indicate the standard deviation, and the number in square brackets is the algorithm's performance rank, which is based on the Wilcoxon rank-sum test [33]. To investigate whether NSGA-III-WA is statistically superior to the other algorithms, the Wilcoxon rank-sum test is performed at the 0.05 significance level between NSGA-III-WA and each competing algorithm on each test case. The test results are given at the end of each cell by the symbols "+," "=," and "−," indicating that NSGA-III-WA performs better than, equal to, or worse than the algorithm in the corresponding column. The last row of Tables 4–6 summarizes the number of test instances on which NSGA-III-WA is significantly better than, equal to, or worse than its competitors. Tables 7–9 show the comparison of the NSGA-III-WA algorithm with the other five algorithms under different objective numbers.



 
Note: "+," "=," and "−" represent wins, ties, and losses. 
From the experimental results in Table 4, the GD values of NSGA-III-WA on DTLZ1–DTLZ4 are superior to those of the other five algorithms; on DTLZ5 it is superior only for the 8-, 10-, and 15-objective cases, and NSGA-III obtains the best results on DTLZ6. This shows that, in solving many-objective problems, NSGA-III-WA converges more effectively than the NSGA-III algorithm and better than the other algorithms. Table 7 summarizes the statistical test results of Table 4, counting the numbers of wins (+), ties (=), and losses (−) of NSGA-III-WA against the five other, more advanced many-objective algorithms. As can be seen from the table, NSGA-III-WA is clearly superior to the five selected algorithms.
From Table 5, NSGA-III-WA obtains the best results for objectives 5, 10, and 15 on DTLZ1; 8, 10, and 15 on DTLZ2 and DTLZ3; 3, 5, and 8 on DTLZ4; and 5, 8, and 15 on DTLZ5. Moreover, it achieves the second-best results for objectives 3 and 8 on DTLZ1, objectives 3 and 5 on DTLZ2, objective 3 on DTLZ3, and objective 10 on DTLZ5. Nevertheless, the results of NSGA-III-WA on DTLZ5 and DTLZ6 are not significant, because these functions test the ability to converge to a curve: since NSGA-III-WA needs to build a hyperplane, the M extreme points cannot be found in the later stage of the algorithm to construct it, and the algorithm cannot converge to a curve well. On the other test functions, NSGA-III-WA has noticeable effects and is a stable and relatively comprehensive algorithm. Table 8 summarizes the statistical test results of Table 5; it shows that NSGA-III-WA outperforms the other five algorithms.
From Table 6, it can be seen that NSGA-III-WA handles most test problems effectively. It obtains the best results for objectives 3, 8, 10, and 15 on DTLZ1; 5 and 10 on DTLZ2; 3, 5, 10, and 15 on DTLZ3; 10 and 15 on DTLZ4; 5, 8, and 10 on DTLZ5; and 3, 5, and 8 on DTLZ6. It achieves the second-best results for objective 5 on DTLZ1, objective 15 on DTLZ2, objectives 3 and 5 on DTLZ4, objectives 3 and 15 on DTLZ5, and objectives 10 and 15 on DTLZ6. However, its performance is poor for objective 3 on DTLZ2 and objective 8 on DTLZ3. Although NSGA-III, VAEA, RVEA, and MOEA/D can obtain optimal values for particular dimensions of particular functions, NSGA-III-WA has the best overall performance across all objective dimensions. Table 9 summarizes the statistical test results of Table 6; as can be seen from the table, the hypervolume performance of NSGA-III-WA is better than that of the other five algorithms.
To present the effect of the algorithms more intuitively, their performance is shown as box plots. Due to space limitations, only the box plots under five objectives and fifteen objectives are given here: Figures 3–8 show the box plots under five objectives, and Figures 9–14 show those under fifteen objectives. Each box plot is computed from 30 independent runs and reflects the median, maximum, minimum, upper quartile, lower quartile, and outliers of the algorithms on the indicators GD, IGD, and HV.
From Figures 3–8, NSGA-III-WA achieves better results on most test problems; its convergence and spread are significantly better than those of the other five algorithms. In overall performance it achieves the best results on DTLZ1 and DTLZ4 and the second-best results on DTLZ2 and DTLZ3. Although it attains the minimum value on DTLZ5 and DTLZ6, outliers exist, indicating that the algorithm is relatively unstable there. This is because DTLZ5 and DTLZ6 test the ability to converge to a degenerate curve, whereas NSGA-III-WA needs to build a hypersurface and therefore cannot converge to the curve well. Considering all test function results, however, the overall robustness of NSGA-III-WA is relatively good.
From Figures 9–14, NSGA-III-WA can handle most problems with 15 objectives. Its convergence on DTLZ1–DTLZ5 is significantly better than that of the other five algorithms, and its overall performance is best on DTLZ1–DTLZ3, DTLZ5, and DTLZ6. With 15 objectives it attains the minimum on DTLZ5 and DTLZ6 but is relatively unstable there. In spread it achieves the best results on DTLZ1, DTLZ3, and DTLZ4 and the second-best results on DTLZ2, DTLZ5, and DTLZ6, although outliers exist. It is evident that with 15 objectives the number of outliers of every algorithm increases: as the spatial extent grows, the stability of the algorithms in the high-dimensional space declines. Judged over all test functions, NSGA-III-WA has the better stability.
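The GD and IGD values behind these box plots measure, respectively, how close the obtained set is to the reference front and how well it covers that front. The following is a minimal two-objective sketch under simplifying assumptions (plain Euclidean distance, unweighted means); the helper name `gd_igd` and the sample data are hypothetical, not the exact formulation of the cited metric papers.

```python
import math

def gd_igd(solutions, reference):
    """Sketch of the GD and IGD indicators (smaller is better):
    GD  = average distance from each obtained solution to its nearest
          reference (true Pareto front) point (convergence only);
    IGD = average distance from each reference point to its nearest
          obtained solution (convergence and spread together)."""
    def dist(p, q):
        return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))
    def avg_min(src, dst):
        return sum(min(dist(p, q) for q in dst) for p in src) / len(src)
    return avg_min(solutions, reference), avg_min(reference, solutions)

# Hypothetical 2-objective illustration: linear reference front f1 + f2 = 1.
ref = [(i / 10, 1 - i / 10) for i in range(11)]
sols = [(0.1, 0.95), (0.5, 0.55), (0.9, 0.15)]
gd, igd = gd_igd(sols, ref)
print(gd, igd)
```

Because `sols` is close to the front but sparse, its GD is small while its IGD is larger, which is exactly the asymmetry that makes IGD sensitive to distribution as well as convergence.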
In order to visually reflect the distribution of the solution set in the high-dimensional objective space, parallel coordinates are used to visualize the high-dimensional data, as shown in Figure 15.
Figure 15: parallel-coordinate plots of the final solution sets obtained by the six compared algorithms, one algorithm per panel (a)–(f).
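A parallel-coordinate plot such as Figure 15 draws every M-objective solution as a polyline across M vertical axes, usually after min-max normalizing each objective so that all axes share one scale. A minimal sketch of that normalization step follows; the helper name `normalize_for_parallel_coords` and the sample population are hypothetical.

```python
def normalize_for_parallel_coords(points):
    """Min-max normalize each objective to [0, 1] so that all axes of a
    parallel-coordinate plot share a common vertical scale."""
    m = len(points[0])
    lo = [min(p[i] for p in points) for i in range(m)]
    hi = [max(p[i] for p in points) for i in range(m)]
    span = [h - l or 1.0 for l, h in zip(lo, hi)]  # guard degenerate axes
    return [[(p[i] - lo[i]) / span[i] for i in range(m)] for p in points]

# Three hypothetical 4-objective solutions.
pop = [[0.2, 1.0, 3.0, 0.5],
       [0.8, 2.0, 1.0, 0.5],
       [0.5, 1.5, 2.0, 0.5]]
for line in normalize_for_parallel_coords(pop):
    print(line)  # each row becomes one polyline across the f1..f4 axes
```

In such a plot, evenly spread polylines indicate a good distribution, while bundles of nearly identical polylines (as the text notes for MOEA/D) indicate concentrated solutions and missing extremes.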
From Figure 15, it can be seen that the final solution sets found by NSGA-III-WA and RVEA on this problem are similar in convergence and distribution. In contrast, MOEA/D-M2M and NSGA-III are slightly worse in distribution, and the distribution of the solutions found by VAEA is poor. MOEA/D produces concentrated solutions and loses the extreme solutions on several objectives, so its distribution is seriously deficient.
5.2.2. Testing and Analysis of the WFG Series Functions
The performance of the WFG test functions is evaluated mainly with the IGD and HV indicators. Therefore, this section tests the WFG1–WFG9 instances and analyzes the results, as shown in Tables 10–13.

 
Note: “+,” “=,” and “−” denote a win, a tie, and a loss, respectively.

 
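The “+/=/−” entries summarized in the statistical tables come from the Wilcoxon rank-sum test mentioned earlier. The following is a rough, self-contained sketch of how two sets of 30 indicator values can be compared: it uses the normal approximation without tie correction, assumes α = 0.05 and a minimized indicator (smaller IGD or GD is better), and the helper names `ranksum_label` and `_ranks` are hypothetical; it is not the exact procedure used in the paper.

```python
import math

def _ranks(values):
    """Average ranks (1-based), with tied values sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def ranksum_label(a, b, alpha=0.05):
    """'+' if sample a is significantly smaller than b (better for a
    minimized indicator), '-' if significantly larger, '=' otherwise."""
    n1, n2 = len(a), len(b)
    ranks = _ranks(list(a) + list(b))
    w = sum(ranks[:n1])                               # rank sum of sample a
    mu = n1 * (n1 + n2 + 1) / 2                       # its mean under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)   # its std dev under H0
    z = (w - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided
    if p >= alpha:
        return "="
    return "+" if w < mu else "-"

# Hypothetical IGD samples over 30 runs for two algorithms.
print(ranksum_label([0.01] * 30, [0.05] * 30))
print(ranksum_label(list(range(30)), list(range(30))))
```

Counting these labels over all problems and objective numbers yields exactly the win/tie/loss summaries reported in Tables 8, 9, 11, and 13.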
From the results in Table 10, NSGA-III-WA handles most of the considered instances well. In particular, it achieves the best overall performance on the 3-, 5-, and 10-objective WFG2 and the 5-, 8-, and 15-objective WFG3, and it also performs best on the 15-objective WFG8 and the 3- and 8-objective WFG9. VAEA performs well on the 8-objective WFG1 and WFG2 and also achieves good results on the objectives 3 and 10 on WFG3 and the objective 4 on WFG3. RVEA obtains the best IGD value on the 3-objective WFG1 and the 5- and 10-objective WFG4; it is worth noting that RVEA performs poorly on the WFG2 and WFG3 instances, although still relatively well compared with NSGA-III and MOEA/D. NSGA-III and MOEA/D-M2M typically show moderate performance on most WFG problems and achieve good results only on specific instances. MOEA/D does not produce satisfactory results on any WFG instance, and its results gradually deteriorate as the number of objectives increases. Table 11 summarizes the statistical test results of Table 10; it shows that the performance of NSGA-III-WA is significantly better than that of the other five algorithms.
From the results in Table 12, NSGA-III-WA obtains the best performance on most of the many-objective problems. NSGA-III works well on the WFG1 and WFG2 instances, VAEA also obtains good results on the 10- and 15-objective WFG3, and RVEA obtains the best HV value on the 3-, 5-, and 10-objective WFG4. MOEA/D and MOEA/D-M2M are not very effective on these instances. Table 13 summarizes the statistical test results of Table 12. The performance of NSGA-III-WA with three objectives is not very prominent, and with eight objectives it matches that of RVEA, but NSGA-III-WA achieves better performance on the higher-dimensional objective problems. In general, NSGA-III-WA outperforms the other five algorithms on this indicator.
In summary, comparing the GD, IGD, and HV test results shows that the NSGA-III-WA algorithm is superior.
6. Conclusion
This paper proposes a many-objective optimization algorithm based on weight vector adjustment, which increases the individuals' ability to evolve through new differential evolution strategies and, at the same time, dynamically adjusts the weight vectors by means of K-means clustering so that they are distributed as evenly as possible on the objective surface. The resulting NSGA-III-WA algorithm has good convergence ability and a good distribution. To demonstrate its effectiveness, NSGA-III-WA is compared experimentally with five other state-of-the-art algorithms on the DTLZ test set and the WFG test instances. The experimental results show that the proposed NSGA-III-WA performs well on the instances studied, and the obtained solution set has good convergence and distribution. However, the proposed algorithm has high complexity, and it only alleviates, rather than eliminates, the sensitivity of the Pareto front. These problems will be investigated in further research.
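As a rough illustration of the clustering step mentioned above, the following sketch runs one plain Lloyd's K-means pass over hypothetical, already-normalized objective vectors. The paper's actual rule for densifying or sparsifying weight vectors in crowded or empty subspaces is not reproduced here; the function name `kmeans` and the sample population are assumptions for the example only.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's K-means in objective space. In a weight-adjustment
    scheme like the one described above, the resulting cluster sizes would
    indicate subspace density and trigger adding or removing weight vectors."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest center (squared Euclidean)
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # recompute each center as its cluster mean
                centers[i] = tuple(sum(d) / len(cl) for d in zip(*cl))
    return centers, clusters

# Hypothetical 2-objective population with two dense regions.
pop = [(0.1 + 0.01 * i, 0.9 - 0.01 * i) for i in range(5)] + \
      [(0.8 + 0.01 * i, 0.2 - 0.01 * i) for i in range(5)]
centers, clusters = kmeans(pop, 2)
print(sorted(len(c) for c in clusters))  # cluster sizes reveal subspace density
```

The two recovered clusters match the two dense regions, which is the signal a density-driven weight adjustment would act on.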
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
This work was supported in part by the National Natural Science Foundation of China under Grant nos. 61501107 and 61501106, the Education Department of Jilin Province Science and Technology Research Project of the “13th Five-Year” under Grant no. 95, and the Project of Scientific and Technological Innovation Development of Jilin under Grant nos. 201750219 and 201750227.
References
H. Son and C. Kim, “Evolutionary many-objective optimization for retrofit planning in public buildings: a comparative study,” Journal of Cleaner Production, vol. 190, pp. 403–410, 2018.
M. Pal, S. Saha, and S. Bandyopadhyay, “DECOR: differential evolution using clustering based objective reduction for many-objective optimization,” Information Sciences, vol. 423, pp. 200–218, 2017.
Q. Zhang, W. Zhu, B. Liao, X. Chen, and L. Cai, “A modified PBI approach for multi-objective optimization with complex Pareto fronts,” Swarm and Evolutionary Computation, vol. 40, pp. 216–237, 2018.
A. Pan, H. Tian, L. Wang, and Q. Wu, “A decomposition-based unified evolutionary algorithm for many-objective problems using particle swarm optimization,” Mathematical Problems in Engineering, vol. 2016, Article ID 6761545, 15 pages, 2016.
W. Gong, Z. Cai, and L. Zhu, “An efficient multi-objective differential evolution algorithm for engineering design,” Structural and Multidisciplinary Optimization, vol. 38, no. 2, pp. 137–157, 2012.
M. Laumanns, L. Thiele, K. Deb, and E. Zitzler, “Combining convergence and diversity in evolutionary multiobjective optimization,” Evolutionary Computation, vol. 10, no. 3, pp. 263–282, 2005.
A. G. Hernández-Díaz, L. V. Santana-Quintero, C. A. Coello Coello, and J. Molina, “Pareto-adaptive ϵ-dominance,” Evolutionary Computation, vol. 15, no. 4, pp. 493–517, 2014.
Y. Yuan, H. Xu, and B. Wang, “An improved NSGA-III procedure for evolutionary many-objective optimization,” in Proceedings of the Conference on Genetic and Evolutionary Computation, pp. 661–668, ACM, Nanchang, China, October 2014.
Y. Jingfeng, L. Meilian, X. Zhijie, and X. Jin, “A simple Pareto adaptive ε-domination differential evolution algorithm for multi-objective optimization,” Open Automation and Control Systems Journal, vol. 7, no. 1, pp. 338–345, 2015.
X. Yue, Z. Guo, Y. Yin, and X. Liu, “Many-objective E-dominance dynamical evolutionary algorithm based on adaptive grid,” Soft Computing, vol. 22, no. 1, pp. 137–146, 2016.
M. M. Lin, H. Zhou, and L. P. Wang, “Research of many-objective evolutionary algorithm based on alpha dominance,” Computer Science, vol. 44, no. 1, pp. 264–270, 2017.
C. Dai and Y. Wang, “A new multiobjective evolutionary algorithm based on decomposition of the objective space for multiobjective optimization,” Journal of Applied Mathematics, vol. 2014, Article ID 906147, 9 pages, 2014.
Q. Zhang and H. Li, “MOEA/D: a multiobjective evolutionary algorithm based on decomposition,” IEEE Transactions on Evolutionary Computation, vol. 11, no. 6, pp. 712–731, 2007.
Y. Yuan, H. Xu, B. Wang et al., “A new dominance relation-based evolutionary algorithm for many-objective optimization,” IEEE Transactions on Evolutionary Computation, vol. 20, no. 1, pp. 16–37, 2016.
C. Segura and G. Miranda, “A multi-objective decomposition-based evolutionary algorithm with enhanced variable space diversity control,” in Proceedings of the Genetic and Evolutionary Computation Conference Companion, pp. 1565–1571, ACM, Berlin, Germany, July 2017.
Y. Xiang, Y. Zhou, M. Li, and Z. Chen, “A vector angle-based evolutionary algorithm for unconstrained many-objective optimization,” IEEE Transactions on Evolutionary Computation, vol. 21, no. 1, pp. 131–152, 2017.
X. Guo, Y. Wang, and X. Wang, “Using objective clustering for solving many-objective optimization problems,” Mathematical Problems in Engineering, vol. 2013, Article ID 584909, 12 pages, 2013.
R. Wang, R. C. Purshouse, and P. J. Fleming, “Preference-inspired co-evolutionary algorithm using adaptively generated goal vectors,” in Proceedings of the IEEE Congress on Evolutionary Computation, pp. 916–923, Cancun, Mexico, June 2013.
Y. Qi, X. Ma, F. Liu, L. Jiao, J. Sun, and J. Wu, “MOEA/D with adaptive weight adjustment,” Evolutionary Computation, vol. 22, no. 2, pp. 231–264, 2014.
K. Deb and H. Jain, “An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, Part I: solving problems with box constraints,” IEEE Transactions on Evolutionary Computation, vol. 18, no. 4, pp. 577–601, 2014.
H. L. Liu, F. Gu, and Q. Zhang, “Decomposition of a multiobjective optimization problem into a number of simple multiobjective subproblems,” IEEE Transactions on Evolutionary Computation, vol. 18, no. 3, pp. 450–455, 2014.
X. Bi and C. Wang, “An improved NSGA-III algorithm based on objective space decomposition for many-objective optimization,” Soft Computing, vol. 21, no. 15, p. 4269, 2016.
M. Zhang and H. Li, “A reference direction and entropy based evolutionary algorithm for many-objective optimization,” Applied Soft Computing, vol. 70, pp. 108–130, 2018.
R. Cheng, Y. Jin, M. Olhofer, and B. Sendhoff, “A reference vector guided evolutionary algorithm for many-objective optimization,” IEEE Transactions on Evolutionary Computation, vol. 20, no. 5, pp. 773–791, 2016.
Y. Li, Y. Kou, and Z. Li, “An improved nondominated sorting genetic algorithm III method for solving multiobjective weapon-target assignment, Part I: the value of fighter combat,” International Journal of Aerospace Engineering, vol. 2018, Article ID 8302324, 23 pages, 2018.
Y. Sun, G. G. Yen, and Z. Yi, “IGD indicator-based evolutionary algorithm for many-objective optimization problems,” IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 99–114, 2018.
J. Xiao, J. J. Li, X. X. Hong et al., “An improved MOEA/D based on reference distance for software project portfolio optimization,” Mathematical Problems in Engineering, vol. 2018, Article ID 3051854, 16 pages, 2018.
J. Shi, M. Gong, W. Ma, and L. Jiao, “A multipopulation coevolutionary strategy for multiobjective immune algorithm,” The Scientific World Journal, vol. 2014, Article ID 539128, 23 pages, 2014.
K. Deb and S. Jain, “Running performance metrics for evolutionary multi-objective optimization,” Technical Report No. 2002004, Indian Institute of Technology Kanpur, Kanpur, India, 2002.
S. Chand and M. Wagner, “Evolutionary many-objective optimization: a quick-start guide,” Surveys in Operations Research and Management Science, vol. 20, no. 2, pp. 35–42, 2015.
H. R. Maier, Z. Kapelan, J. Kasprzyk et al., “Evolutionary algorithms and other metaheuristics in water resources: current status, research challenges and future directions,” Environmental Modelling & Software, vol. 62, pp. 271–299, 2014.
A. Díaz-Manríquez, G. Toscano, J. H. Barron-Zambrano, and E. Tello-Leal, “R2-based multi/many-objective particle swarm optimization,” Computational Intelligence and Neuroscience, vol. 2016, Article ID 1898527, 10 pages, 2016.
K. Li, S. Kwong, and K. Deb, “A dual-population paradigm for evolutionary multiobjective optimization,” Information Sciences, vol. 309, pp. 50–72, 2015.
Copyright
Copyright © 2018 Yanjiao Wang and Xiaonan Sun. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.