Computational Intelligence and Neuroscience
Volume 2019, Article ID 5126239, 27 pages
https://doi.org/10.1155/2019/5126239
Research Article

An Opposition-Based Evolutionary Algorithm for Many-Objective Optimization with Adaptive Clustering Mechanism

Zhejiang University of Technology, Hangzhou, Zhejiang 310023, China

Correspondence should be addressed to Wan Liang Wang; zjutwwl@zjut.edu.cn

Received 7 January 2019; Revised 14 February 2019; Accepted 9 April 2019; Published 2 May 2019

Academic Editor: Michele Migliore

Copyright © 2019 Wan Liang Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Balancing convergence and diversity has become a key issue in many-objective optimization, where the large number of objectives poses many challenges to evolutionary algorithms. In this paper, an opposition-based evolutionary algorithm with an adaptive clustering mechanism is proposed for solving such complex optimization problems. In particular, opposition-based learning is integrated into the proposed algorithm to initialize the solutions, and a nondominated sorting scheme with a new adaptive clustering mechanism is adopted in the environmental selection phase to ensure both convergence and diversity. The proposed method is compared with nine other evolutionary algorithms on a number of test problems with up to fifteen objectives, and the results verify the superior performance of the proposed algorithm. The algorithm is also applied to a variety of multiobjective engineering optimization problems. The experimental results demonstrate the competitiveness and effectiveness of the proposed algorithm in solving challenging real-world problems.

1. Introduction

Over the last two decades, evolutionary algorithms (EAs) have proven prevalent and efficient in solving real-world optimization problems [1]. Some of these well-known methodologies include genetic algorithms (GAs) [2], evolution strategies (ES) [3], and ant colony optimization (ACO) [4]. However, real-world optimization problems often involve multiple objectives, which means there is no single best solution when multiple objectives form the goal of the optimization process [5]. In this case, the solutions for a multiobjective optimization problem (MOP), which are the main focus of such algorithms, represent trade-offs between the objectives due to the nature of these problems [6].

For the evolutionary approach to address multiobjective optimization, which is called multiobjective evolutionary algorithm (MOEA), different varieties of the algorithms have been proposed in recent years. Among them, the algorithms inspired by nature have drawn the attention like improving strength-Pareto evolutionary algorithm (SPEA2) [7], nondominated sorting genetic algorithm version 2 (NSGA-II) [8], multiobjective particle swarm optimization (MOPSO) [9], multiobjective moth-flame algorithm [10], and multiobjective ant lion optimizer [11].

There is no doubt that MOEAs have proven prevalent and efficient in solving optimization problems with no more than three objectives [1]. However, with the growing scale of data and the increasing diversity of target requirements, many real-world optimization problems containing more than three objectives, termed "many-objective problems" (MaOPs) [12, 13], are widely appearing in engineering [14], traffic [15], and water [16]. Unfortunately, the effectiveness of previous MOEAs tends to deteriorate dramatically as the number of objectives increases, as verified in [17, 18]. This can be attributed to the fact that almost all solutions in a population become nondominated with one another, and the conflict between convergence and diversity is aggravated as the number of objectives grows [19, 20]. Moreover, the computational complexity of calculating some performance metrics and the representation and visualization of the trade-off surface are further difficulties of MaOPs. To overcome these drawbacks, a series of many-objective evolutionary algorithms (MaOEAs) have been proposed to address optimization problems with more than three objectives. In summary, the proposed algorithms can be roughly classified into the following four types.

1.1. New Domination Relation-Based Approach

As the selection criterion based on the standard dominance relationship fails to distinguish solutions in MaOPs, a number of new domination principles have been proposed to adaptively discretize the Pareto-optimal front, for instance, ϵ-dominance [21, 22], CDAS-dominance [23, 24], α-dominance [25], fuzzy Pareto dominance [26], L-dominance [27], cone-domination [28, 29], and the grid-based evolutionary algorithm (GrEA) [30]. Furthermore, the recently proposed generalization of Pareto optimality (GPO) [31] expands the dominance area of solutions to enhance the scalability of existing Pareto-based algorithms, the shift-based density estimation strategy (SDE) [32] is employed within the dominance-based criterion, and θ-dominance [33, 34] introduces a new dominance relation to rank solutions; all of these have proven more effective than the original Pareto dominance relation. In short, these algorithms propose variants of Pareto dominance to enhance the selection pressure toward the PF. However, the drive toward a more aggressive selection pressure can make diversity maintenance more difficult in these new dominance-based MaOEAs.

1.2. Indicator-Based Approach

Aiming to obtain a desired ordering among the representative PF approximations, indicator-based MOEAs have been widely studied, for example, the indicator-based evolutionary algorithm (IBEA) [35], SMS-EMOA [36], the fast hypervolume-based evolutionary algorithm (HypE) [37], DNMOEA/HI [38], R2 indicator-based algorithms [39, 40], the stochastic ranking algorithm based on multiple indicators (SRA) [41], and the IGD indicator-based evolutionary algorithm (MaOEA/IGD) [42]. However, the computational cost of some indicator calculations can become prohibitively expensive as the number of objectives increases.

1.3. Decomposition-Based Approach

These algorithms decompose a MaOP into several single-objective optimization problems and use aggregation functions to differentiate many-objective solutions. Among the methods that use a set of weight vectors to generate multiple aggregation functions, the multiobjective evolutionary algorithm based on decomposition (MOEA/D) [43] is the most representative; it aggregates the objectives of an MOP into an aggregation function with a unique weight vector. Around MOEA/D, several variants have been proposed to strike a better balance between convergence and diversity, such as I-DBEA [44], MOEA/DD [45], MOEA/D-DU [46], and MOEA/D-LWS [47]. However, the number of scalarizing functions is usually very limited, which can cause difficulties for diversity maintenance, especially in an exponentially growing objective space.

1.4. Reference Set-Based Approach

The algorithms of this category use a set of reference solutions to measure the quality of candidate solutions; thus, the search process is guided by the solutions in the reference set. Praditwong and Yao [48] and Wang [49] proposed a novel two-archive algorithm (TAA) and its improved version (Two_Arch2), respectively, in which the convergence archive (CA) can be seen as an online-updated real reference set. The recently proposed vector angle-based evolutionary algorithm (VaEA) [50] uses the population itself as the reference set to dynamically guide the evolutionary process. Among algorithms using a virtual reference set, NSGA-III [51, 52] is predominant; it employs a set of predefined reference points to manage the diversity of the candidate solutions. Hereafter, a number of algorithms have been proposed based on reference points or vectors, such as the reference vector-guided evolutionary algorithm for many-objective optimization (RVEA) [53], the strength Pareto evolutionary algorithm based on reference direction (SPEAR) [54], and many-objective evolutionary optimization based on reference points (RPEA) [55].

Algorithms from the third and fourth categories have become particularly prevalent for many-objective optimization, which can be attributed to their low cost in achieving a representative subset of the entire PF, especially with a limited population size. However, most algorithms belonging to these categories use the angle or the distance alone to measure the quality of population members against the reference set, which may lose some good solutions due to this simplistic selection mechanism. Furthermore, it has been proved by the No Free Lunch (NFL) theorem [56] that none of these algorithms can solve all optimization problems, which motivates researchers to propose new methods or improve current algorithms [33, 57]. Therefore, this paper proposes an opposition-based multiobjective evolutionary algorithm with an adaptive clustering mechanism, named OBEA for short, to strengthen the selection mechanism through comprehensive consideration of both angles and distances. The main properties of OBEA can be summarized as follows:
(i) A new initialization approach is designed with the assistance of opposition-based learning (OBL) to generate the population. Because random initialization lowers the chance of sampling better regions, we use OBL to initialize the population instead of the usual random method. Moreover, opposition-based learning is also adopted in the evolutionary process with the aim of enhancing the probability of obtaining better solutions.
(ii) An adaptive clustering strategy is integrated into the algorithm. In this strategy, the acute angle and the perpendicular distance between the candidate solutions and the reference vectors are combined through an adaptive mechanism to cluster the candidates. Moreover, a novel selection approach is designed to dynamically select individuals with comprehensive consideration of the balance between convergence and diversity.

Furthermore, an extensive comparison between the proposed OBEA and nine algorithms is carried out on 60 instances of 14 test problems taken from two well-known test suites. The results indicate that OBEA is a very promising alternative for many-objective optimization. The rest of this paper is organized as follows. Section 2 introduces the background knowledge, and details of the proposed OBEA are described in Section 3. Section 4 presents the numerical results of OBEA on the benchmarks and a detailed analysis of the proposed algorithm together with nine MaOEAs. Finally, conclusions and future work are given in Section 5.

2. Background

In this section, the main components of a multiobjective optimization problem (MOP) are given first, covering the basic notions of optimization and Pareto dominance. Next, a brief description of the reference vector is given, which is used as the underlying mechanism for solving many-objective optimization problems.

2.1. Multiobjective Problem

Generally, a multiobjective optimization problem can be stated as follows:

\[ \min \; F(x) = \left(f_1(x), f_2(x), \ldots, f_m(x)\right)^{T}, \quad \text{subject to } x \in \Omega, \]

where \(x = (x_1, x_2, \ldots, x_n)\) is the decision vector that satisfies \(x \in \Omega\), and \(\Omega \subseteq \mathbb{R}^n\) stands for the decision space. The objective function vector \(F: \Omega \rightarrow \mathbb{R}^m\) consists of m (\(m \geq 2\)) objectives, and \(\mathbb{R}^m\) refers to the objective space.

Definition 1. Given two vectors \(x, y \in \Omega\), x Pareto dominates y (denoted as \(x \prec y\)) if and only if \(f_i(x) \leq f_i(y)\) for each \(i \in \{1, \ldots, m\}\) and \(f_j(x) < f_j(y)\) for at least one \(j \in \{1, \ldots, m\}\).

Definition 2. A decision vector \(x^{*} \in \Omega\) is said to be Pareto optimal if and only if there is no \(x \in \Omega\) such that \(x \prec x^{*}\).

Definition 3. The set of Pareto-optimal solutions (PS) is defined as \(PS = \{x^{*} \in \Omega \mid x^{*} \text{ is Pareto optimal}\}\).

Definition 4. The Pareto-optimal front (PF) is defined as \(PF = \{F(x^{*}) \mid x^{*} \in PS\}\).

2.2. Reference Vector

As an underlying mechanism throughout the algorithm, OBEA uses Das and Dennis's systematic approach [58]; an adaptation of this approach generates the reference points and thus forms the reference vectors. The original Das and Dennis method places points on a normalized hyperplane, where the number of reference points depends on the dimension of the objective space m and a positive integer H. Each reference point \(w = (w_1, \ldots, w_m)\) satisfies

\[ \sum_{i=1}^{m} w_i = 1, \quad w_i \in \left\{ \frac{0}{H}, \frac{1}{H}, \ldots, \frac{H}{H} \right\}, \]

and the total number of points is \(N = \binom{H + m - 1}{m - 1}\).

However, the number of reference points increases rapidly when m is relatively large. To address this computational burden, a number of new approaches have been proposed [45, 59]. OBEA utilizes the two-layered reference point generation approach, as suggested in [33]. The hyperplane is divided into two parts, the boundary layer and the inner layer, as shown in Figure 1.

Figure 1: An illustration of the approach to generate the reference points and the reference vectors in a three-dimensional objective space. As the figure shows, given H1 = 2 for the boundary layer and H2 = 1 for the inner layer, the final number of reference points is 6 + 3 = 9.

The reference vector is the fundamental mechanism of the proposed algorithm. On the one hand, the acute angle and the perpendicular distance between the candidate solutions and the reference vectors are adopted in OBEA; on the other hand, the proposed algorithm tends to find near-Pareto-optimal solutions corresponding to the reference vectors. Furthermore, Das and Dennis's systematic approach is utilized for 3-objective and 5-objective problems, while the two-layered reference point generation approach is used in the other cases (m ≥ 8).
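Both generation schemes above can be sketched briefly in Python. Note that the inner layer's shrink factor of 0.5 toward the centre 1/m is an assumption borrowed from common two-layer implementations; the paper does not state the exact value.

```python
from itertools import combinations

def das_dennis(m, H):
    """Das and Dennis's systematic approach: all weight vectors with
    components i/H summing to 1, i.e. C(H + m - 1, m - 1) points."""
    points = []
    # stars-and-bars: choose positions of m-1 dividers among H+m-1 slots
    for dividers in combinations(range(H + m - 1), m - 1):
        prev, w = -1, []
        for d in dividers:
            w.append((d - prev - 1) / H)
            prev = d
        w.append((H + m - 1 - prev - 1) / H)
        points.append(w)
    return points

def two_layer(m, H1, H2):
    """Boundary layer (parameter H1) plus an inner layer (H2) shrunk
    toward the centre 1/m, in the style of two-layer generation."""
    boundary = das_dennis(m, H1)
    inner = [[0.5 * wi + 0.5 / m for wi in w] for w in das_dennis(m, H2)]
    return boundary + inner
```

For m = 3 with H1 = 2 and H2 = 1 this yields 6 + 3 = 9 points, matching the count in Figure 1.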

3. Proposed Algorithm: OBEA

The pseudocode of the proposed OBEA is presented in Algorithm 1. The algorithm shares the common framework of many evolutionary algorithms and consists of four main phases. First, N solutions and the reference vectors are initialized with the OBL. Then, potential solutions are selected into the mating pool. In what follows, a set of offspring solutions is obtained by applying crossover and mutation operations. Finally, the next population P is chosen by the environmental selection procedure. These steps repeat until the termination criterion is met. In the following sections, each component of OBEA is explained step by step (Algorithm 1).

Algorithm 1: General framework of OBEA.
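The four phases of Algorithm 1 can be expressed as a generic loop; the callables `init`, `mating`, `variation`, and `env_select` are placeholders for the components described in the following subsections, not the paper's exact interfaces.

```python
def obea_framework(init, mating, variation, env_select, max_gen):
    """Generic loop of Algorithm 1: initialize (with OBL), build a
    mating pool, create offspring by variation, then environmentally
    select the survivors, until the generation budget is spent."""
    population = init()
    for _ in range(max_gen):
        parents = mating(population)          # mating selection
        offspring = variation(parents)        # crossover + mutation
        population = env_select(population + offspring)
    return population
```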
3.1. Initialization with the Opposition-Based Learning

Since opposition-based learning (OBL) was first proposed by Tizhoosh [60], it has been adopted in various algorithms owing to its promising potential to improve performance [61]. Because OBL can obtain fitter starting candidates even without a priori knowledge and enhances the probability of detecting better regions, we use it here to initialize the algorithm. First, the definitions of the opposite number and the opposite point are given below.

Definition 5. Let \(x \in [a, b]\) be a real number. Its opposite number \(\breve{x}\) is defined as follows [62]:

\[ \breve{x} = a + b - x. \]

Definition 6. Let \(x = (x_1, x_2, \ldots, x_D)\) be a point in D-dimensional space, where \(x_i \in [a_i, b_i]\), \(i = 1, 2, \ldots, D\). Its opposite point \(\breve{x} = (\breve{x}_1, \ldots, \breve{x}_D)\) is defined as follows [62]:

\[ \breve{x}_i = a_i + b_i - x_i, \quad i = 1, 2, \ldots, D. \]

The initialization phase of OBEA is shown in Figure 2. Here we first divide the population into two parts: the first half is generated from a random distribution, and the remaining half is then initialized via OBL by applying the opposite-point formula above to the random half. Finally, the union of the two halves is restructured as the initial population.
Note that the initialization of OBEA uses the OBL strategy to calculate opposite points, which shares the same idea as our previous work [63]. There are some similarities between them: both algorithms divide the population into two parts, generating one part randomly and the other with the OBL strategy, and both apply the OBL strategy during the optimization process. However, there are also differences. In our previous work, the OBL-based initialization was designed only for multiobjective optimization problems with two or three objectives, whereas the initialization in OBEA handles far more objectives, since OBEA is designed to solve many-objective optimization problems. Moreover, the OBL strategy in OBEA is also used in individual selection and in constructing the last front, with the aim of improving the probability of detecting better regions, as shown in Section 3.4; in our previous work, the OBL strategy was adopted only to calculate opposite individuals.

Figure 2: An example showing the initialization phase in OBEA.
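A minimal sketch of this initialization, assuming box bounds `lb`/`ub` per decision variable and an even population size (the names are illustrative, not the paper's notation):

```python
import random

def obl_initialize(n, lb, ub):
    """Generate half the population uniformly at random, then the
    other half as opposite points x_opp = lb + ub - x (Defs. 5-6)."""
    d = len(lb)
    half = [[random.uniform(lb[j], ub[j]) for j in range(d)]
            for _ in range(n // 2)]
    opposite = [[lb[j] + ub[j] - x[j] for j in range(d)] for x in half]
    return half + opposite
```

Because each opposite coordinate mirrors its source around the midpoint of the box, every generated individual stays within the bounds.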
3.2. Offspring Creation

In the proposed OBEA, offspring creation involves two steps: (1) mating selection, which chooses parents for offspring generation, and (2) variation, which generates new candidate solutions. Since many variation operators perform poorly when generating offspring in high-dimensional objective spaces, different methods have been proposed; here, polynomial mutation and simulated binary crossover (SBX) [64] are employed, as in many other algorithms [59, 65].
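The two standard operators can be sketched per decision variable as follows; the distribution-index defaults match the values given later in Section 4.3.

```python
import random

def sbx(p1, p2, eta=30.0):
    """Simulated binary crossover on one variable pair; the two
    children are symmetric around the parents' mean."""
    u = random.random()
    if u <= 0.5:
        beta = (2.0 * u) ** (1.0 / (eta + 1.0))
    else:
        beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0))
    c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
    c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
    return c1, c2

def polynomial_mutation(x, lb, ub, eta=20.0):
    """Polynomial mutation of one variable, clamped to [lb, ub]."""
    u = random.random()
    if u < 0.5:
        delta = (2.0 * u) ** (1.0 / (eta + 1.0)) - 1.0
    else:
        delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta + 1.0))
    return min(max(x + delta * (ub - lb), lb), ub)
```

A useful sanity check for SBX is that the children preserve the parents' sum, i.e. c1 + c2 = p1 + p2.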

3.3. Environmental Selection

Environmental selection aims to select the N individuals that survive from the union of the current population and its offspring, using nondominated sorting combined with a hybrid selection. Specifically, the environmental selection consists of four steps: nondominated sorting, normalization, adaptive clustering, and opposition-based selection. The procedure is shown in Algorithm 2.

Algorithm 2: Environmental selection.
3.3.1. Normalization

The normalization procedure incorporated into OBEA is similar to that of other algorithms. In normalization, each objective \(f_i(x)\), \(i = 1, \ldots, m\), is replaced as follows:

\[ f_i'(x) = \frac{f_i(x) - z_i^{*}}{z_i^{\mathrm{nad}} - z_i^{*}}, \]

where \(z^{*} = (z_1^{*}, \ldots, z_m^{*})\) is the ideal point, whose components are the minimum values of each objective over all solutions in S. Similarly, \(z^{\mathrm{nad}}\) is the nadir point. Moreover, the norm (in the normalized objective space) of each solution in S is also calculated; it is used to compute the acute angle and the perpendicular distance between the candidate solutions and the reference vectors in the normalized objective space. Given \(F'(x) = (f_1'(x), \ldots, f_m'(x))\), its norm is defined as follows:

\[ \mathrm{norm}(F'(x)) = \sqrt{\sum_{i=1}^{m} f_i'(x)^{2}}. \]
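A sketch of this step; here the nadir point is approximated by the per-objective maximum over S, which a full implementation may estimate more carefully from the nondominated front.

```python
def normalize(objs):
    """Shift each objective vector by the ideal point, scale by the
    nadir-ideal range, and return the normalized vectors with their
    Euclidean norms."""
    m = len(objs[0])
    ideal = [min(f[i] for f in objs) for i in range(m)]
    nadir = [max(f[i] for f in objs) for i in range(m)]
    normed = [[(f[i] - ideal[i]) / max(nadir[i] - ideal[i], 1e-12)
               for i in range(m)] for f in objs]
    norms = [sum(v * v for v in f) ** 0.5 for f in normed]
    return normed, norms
```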

3.3.2. Adaptive Clustering Operation

In OBEA, clustering is applied to the population S at each generation. After normalization, population S is partitioned into N (N is the number of reference vectors) subpopulations by associating each individual with its closest reference vector (refer to Figure 3). In the normalized objective space, given the normalized objective vector \(F'(x)\) and the reference vector \(w_j\), the acute angle between them can be calculated as

\[ \theta(x, w_j) = \arccos \frac{F'(x) \cdot w_j}{\mathrm{norm}(F'(x)) \, \|w_j\|}. \]

Figure 3: An illustration of the acute angle θ, the distance d1, and the distance d2.

Furthermore, u is the projection of \(F'(x)\) on \(w_j\), and the distance between the origin and u, denoted as \(d_1\), is calculated as

\[ d_1(x, w_j) = \mathrm{norm}(F'(x)) \cos \theta(x, w_j). \]

The perpendicular distance between \(F'(x)\) and \(w_j\), denoted as \(d_2\), is calculated as

\[ d_2(x, w_j) = \mathrm{norm}(F'(x)) \sin \theta(x, w_j). \]

For the clustering operator, whereas most algorithms consider only \(d_2\) or θ, here the two are combined through an adaptive mechanism as

\[ AD(x, w_j) = s \cdot \theta(x, w_j) + (1 - s) \cdot d_2(x, w_j), \]

where s is a sigmoid function which can be described as follows:

\[ s(t) = \frac{1}{1 + e^{-\mu \left( t - t_{\max}/2 \right)}}, \]

where t is the current generation number, \(t_{\max}\) is the maximal generation number, and μ is the control parameter.

As Algorithm 3 shows, the core concept of the adaptive clustering operation is to allocate individuals to subpopulations according to their AD values. To be more specific, for one individual, we first calculate the AD between the individual and each reference vector. Thereafter, the minimum AD is obtained, and the related subpopulation is the cluster to which the individual belongs. In this way, an individual is allocated to a subpopulation if and only if its AD to the corresponding reference vector is minimal.

Algorithm 3: Adaptive clustering operation.
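The clustering step can be sketched as follows. The exact forms of AD and s(t) used here are assumptions consistent with the description above (a weighted sum of θ and d2 with a sigmoid weight), not the paper's verbatim formulas.

```python
import math

def angle(f, w):
    """Acute angle between a normalized objective vector and a
    reference vector."""
    dot = sum(a * b for a, b in zip(f, w))
    nf = math.sqrt(sum(a * a for a in f)) or 1e-12
    nw = math.sqrt(sum(b * b for b in w))
    return math.acos(max(-1.0, min(1.0, dot / (nf * nw))))

def d2(f, w):
    """Perpendicular distance of f from the line spanned by w."""
    nf = math.sqrt(sum(a * a for a in f))
    return nf * math.sin(angle(f, w))

def adaptive_cluster(pop, vectors, t, t_max, mu=0.35):
    """Assign each normalized objective vector to the reference vector
    minimizing AD = s*theta + (1 - s)*d2, with a sigmoid weight s(t)."""
    s = 1.0 / (1.0 + math.exp(-mu * (t - t_max / 2.0)))
    clusters = [[] for _ in vectors]
    for f in pop:
        ad = [s * angle(f, w) + (1.0 - s) * d2(f, w) for w in vectors]
        clusters[ad.index(min(ad))].append(f)
    return clusters
```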
3.4. Opposition-Based Selection

To solve MaOPs with good convergence and diversity, opposition-based selection is conducted. The main idea is to take advantage of opposition-based learning and combine it with the clustering operation into an efficient selection method. The following paragraphs describe the selection mechanism in detail, and the general framework is shown in Algorithm 4.

Algorithm 4: Opposition-based selection.

In the opposition-based selection, we first construct the last front through the clusters C. To be more specific, the motivation is to find, for each reference vector, the solution closest to the ideal point under a criterion that accounts for both convergence and diversity. Hence, a hybrid distance (HD) is proposed, in which the acute angle between a candidate solution and its reference vector adaptively scales its distance along that vector, so as to select candidates in the clusters C effectively:

\[ HD(x, w_j) = \left( 1 + \frac{k}{K} \cdot \frac{\theta(x, w_j)}{\gamma_{w_j}} \right) d_1(x, w_j), \]

where k is the current generation number, K is the maximal generation number, and \(\gamma_{w_j}\) is the smallest angle between the reference vector \(w_j\) and the other reference vectors in the current generation.

After constructing the last front, we calculate the opposite values of the individuals in it according to the OBL described in Section 3.1. Thereafter, the opposite values are added to the last front. Finally, the required number of individuals is randomly selected from this enlarged set to complete the next population.
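One plausible reading of the hybrid distance above, as a sketch; since the exact combination is not fully specified, this angle-penalized form is an assumption rather than the paper's definitive formula.

```python
import math

def hybrid_distance(f, w, k, K, gamma_w):
    """HD = (1 + (k/K) * theta/gamma_w) * d1: the distance along the
    reference vector, penalized by the angle to it, with the penalty
    growing over generations k = 0..K."""
    dot = sum(a * b for a, b in zip(f, w))
    nw = math.sqrt(sum(b * b for b in w))
    d1 = dot / nw                       # projection length on w
    nf = math.sqrt(sum(a * a for a in f)) or 1e-12
    theta = math.acos(max(-1.0, min(1.0, dot / (nf * nw))))
    return (1.0 + (k / K) * theta / gamma_w) * d1
```

For a solution aligned with its reference vector (θ = 0) the hybrid distance reduces to d1 alone, so early generations and well-aligned solutions are judged almost purely on convergence.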

3.5. Computational Complexity of OBEA

For a population of size N with m objectives, the normalization and norm calculations in OBEA require O(mN) additions. The time complexity of clustering is O(mN^2), since each of the N individuals is compared against N reference vectors. In addition, the selection holds a computational complexity of O(mN^2). To sum up, the overall worst-case complexity of one generation of OBEA is approximately O(mN^2).

4. Experiment Description

In this section, experiments with twelve algorithms, namely, MOEA/D, dMOPSO [66], MOMBI2 [67], ϵ-MOEA [68], NSGA-III, RVEA, MaOEARD [69], MyODEMR [70], MOEA/DD, MOEA/DVA [71], Two_Arch2, and SPEAR, are conducted in order to evaluate the performance of the proposed OBEA algorithm on benchmark test problems taken from the two widely used DTLZ [72] and WFG [73] test suites. For each test problem, objective numbers varying from 3 to 15, i.e., m ∈ {3, 5, 8, 10, 15}, are considered.

In the following subsections, the test problems and the quality indicators used in our comparative experiments are first presented. Then, the experimental settings adopted in this study are provided. Moreover, thirty independent runs are executed for each test problem to mitigate randomness, and the Wilcoxon rank sum test is adopted to compare the results obtained by OBEA and the compared algorithms at a significance level of 0.05.

4.1. Test Problems

Aiming to evaluate performance effectively, two well-known test suites, Deb-Thiele-Laumanns-Zitzler (DTLZ) and Walking-Fish-Group (WFG), are involved in the experiments, as shown in Table 1. Since the nature of the PFs of DTLZ5 and DTLZ6 is unclear beyond three objectives [73], here we only consider the DTLZ1-4 and DTLZ7 problems from the DTLZ test suite. The main features of these problems are summarized in Table 2. For DTLZ1-DTLZ4 and DTLZ7, the total number of decision variables is given by n = m + k - 1, where m is the number of objectives and k is set to 5 for DTLZ1, 10 for DTLZ2-DTLZ4, and 20 for DTLZ7. As for the WFG problems, the number of decision variables is set to 24, and the position-related parameter is set according to [33, 73].

Table 1: The statistical results (mean and standard deviation) of the HV values obtained by each algorithm on DTLZ1 to DTLZ4 and DTLZ7. The best results are italicized.
Table 2: The features of the test problems.
4.2. Performance Metrics

In our experimental study, two widely used metrics are chosen to evaluate the performance of each algorithm, namely the Inverted Generational Distance (IGD) [74] and the Hypervolume (HV) [75]. Both IGD and HV can effectively measure the convergence and the diversity of the obtained solutions.

4.2.1. Inverted Generational Distance (IGD)

Let \(P^{*}\) denote a set of uniformly distributed solutions in the objective space along the Pareto front, and let P be the approximation of \(P^{*}\) obtained by the algorithm. The IGD is described as

\[ \mathrm{IGD}(P, P^{*}) = \frac{\sum_{v \in P^{*}} d(v, P)}{|P^{*}|}, \]

where \(d(v, P)\) is the Euclidean distance between a point \(v \in P^{*}\) and its nearest neighbor in P, and \(|P^{*}|\) is the cardinality of \(P^{*}\). It can be seen from the definition that, for a large \(|P^{*}|\) covering approximately the entire Pareto front, IGD also reflects diversity.
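The metric is straightforward to compute; a sketch:

```python
import math

def igd(reference_set, approx):
    """Mean Euclidean distance from each reference-front point to its
    nearest neighbor in the obtained approximation."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return (sum(min(dist(r, p) for p in approx) for r in reference_set)
            / len(reference_set))
```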

4.2.2. Hypervolume (HV)

Consider the set of final nondominated points S and a reference point r in the objective space that is dominated by every point in S. Then, the hypervolume of S with regard to r can be described as follows:

\[ \mathrm{HV}(S, r) = \mathrm{Leb}\left( \bigcup_{x \in S} \left[ f_1(x), r_1 \right] \times \cdots \times \left[ f_m(x), r_m \right] \right), \]

where Leb denotes the Lebesgue measure.

It should be noted that choosing r slightly larger than the nadir point is suitable [76, 77]. Here, we set r to 1.1 in each normalized objective, the same as in [33]. In addition, for problems with no more than 10 objectives, the recently proposed fast hypervolume calculation method is adopted to calculate the exact hypervolume [78]. For problems with 15 objectives, the Monte Carlo method [37] with 1,000,000 sampling points is adopted, and all hypervolume values presented in this work are normalized to [0, 1].
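A Monte Carlo estimator of HV can be sketched as below, assuming minimization with objectives normalized so the ideal point is at the origin (the sampling box [0, r] is an assumption of this sketch, not the cited method's exact setup).

```python
import random

def hv_monte_carlo(points, ref, n_samples=100000):
    """Estimate the hypervolume dominated by `points` with respect to
    reference point `ref` by uniform sampling in the box [0, ref]."""
    m = len(ref)
    hit = 0
    for _ in range(n_samples):
        z = [random.uniform(0.0, ref[i]) for i in range(m)]
        # a sample counts if some solution dominates it (minimization)
        if any(all(p[i] <= z[i] for i in range(m)) for p in points):
            hit += 1
    box = 1.0
    for r in ref:
        box *= r
    return box * hit / n_samples
```

The estimate converges at the usual O(1/sqrt(n_samples)) Monte Carlo rate, which is why a large sample count (1,000,000 above) is used for the 15-objective instances.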

4.3. Parameter Settings

As for the parameter settings, the general settings are as follows:
(1) Population Size. The population size N for NSGA-III, MOEA/D, MOEA/DD, MOEA/DVA, and OBEA is controlled by the parameter H. Since MOMBI2 involves a binary tournament selection, we use the same population size as in NSGA-III and OBEA. For the other algorithms, the population size is likewise the same, to ensure a fair comparison. The population sizes N used for different numbers of objectives are listed in Table 3.
(2) Parameters for Operators. Since the PBI function is involved in MOEA/D, dMOPSO, Two_Arch2, and MOEA/DD, the penalty parameter θ is set to 5 as suggested in [43]. Furthermore, as simulated binary crossover (SBX) and polynomial mutation are employed to generate offspring solutions, the crossover probability and the mutation probability are set to 1.0 and 1/n (where n is the number of decision variables), respectively. The distribution index of the SBX operator is ηc = 30, and that of the mutation operator is ηm = 20 [45].
(3) Parameter Settings for Algorithms. Besides the parameters mentioned above, the algorithms also have their own specific parameters, which are set according to the suggestions of their developers to ensure the impartiality and objectivity of the experiment. The details are as follows:
(a) MOEA/D: the neighborhood size T is set to 20 [43].
(b) MOEA/DD: the neighborhood size T is set to 20, and the probability of selecting within the neighborhood is 0.9 [45].
(c) MOEA/DVA: the neighborhood size T is set to 20, the number of sampling solutions in control variable analysis is 20, and the maximum number of tries for judging interaction is 6 [71].
(d) dMOPSO: the age threshold is set to 2 [66].
(e) MOMBI2: following the practice in [67], the two parameters ε and α are set as suggested there, with α = 0.5.
(f) ϵ-MOEA: the parameter ϵ used in grid location calculation is set as recommended in [68].
(g) RVEA: α = 2 and fr = 0.1 are used for all test instances [53].
(h) Two_Arch2: the CA size is equal to the population size, and the fractional distance is p = 1/m (where m is the number of objectives) [49].
(i) OBEA: μ = 0.35 is used for the adaptive clustering mechanism.

Table 3: The setting of the population size.
4.4. Experimental Results and Analysis

In this section, OBEA is first compared with four earlier evolutionary algorithms in Section 4.4.1. Then, six state-of-the-art evolutionary algorithms are included as comparators to investigate the ability of OBEA to solve many-objective optimization problems in Section 4.4.2. The analysis of the adaptive strategy design is included in Section 4.4.3, the comparisons between HD, TCH, and PBI are given in Section 4.4.4, and the effectiveness analysis of OBL and the adaptive strategy is presented in Section 4.4.5. Furthermore, an investigation of the evolutionary behavior of OBEA on selected test problems is included in Section 4.4.6.

4.4.1. Comparison with the Previous Algorithms

Statistical results of the IGD values obtained by OBEA and the four algorithms MOEA/D, dMOPSO, MOMBI2, and ϵ-MOEA are summarized in Tables 4 and 5, where the best results are italicized. The significance of the difference between OBEA and each peer algorithm is determined using the Wilcoxon rank sum test, where "+," "−," and "=" indicate that the competitor is better than, worse than, or similar to the proposed OBEA, respectively. The results are summarized as "w/l/t," which denotes that the corresponding competitor wins on w functions, loses on l functions, and ties on t functions compared with the proposed OBEA.

Table 4: The statistical results (mean and standard deviation) of the IGD values obtained by MOEA/D, dMOPSO, MOMBI2, ϵ-MOEA, and OBEA on DTLZ1 to DTLZ4 and DTLZ7. The best results are italicized.
Table 5: The statistical results (mean and standard deviation) of the IGD values obtained by MOEA/D, dMOPSO, MOMBI2, ϵ-MOEA, and OBEA on the WFG test suite. The best results are italicized.

It can be observed that OBEA attains the majority of the best values among the compared algorithms on the five DTLZ test problems, especially on DTLZ1, DTLZ3, and DTLZ7. For the DTLZ2 test problem, the best values on the 5-objective, 8-objective, and 10-objective instances are obtained by ϵ-MOEA and MOEA/D, while the proposed OBEA performs best on the 3-objective and 15-objective instances. For the DTLZ4 test problem, the performance of OBEA is slightly worse than that of MOMBI2 and ϵ-MOEA on the 3-objective, 10-objective, and 15-objective instances, but the proposed algorithm still outperforms the others on the 5-objective instance and is similar to ϵ-MOEA on the 8-objective instance according to the Wilcoxon rank sum test.

The statistical results in Table 5 also indicate that OBEA achieves the best performance among the five algorithms on all WFG1 and WFG6-9 test instances. For the WFG2 and WFG3 test problems, ϵ-MOEA and MOMBI2 show the best performance on most instances, although the proposed OBEA performs well on part of them and still outperforms MOEA/D and dMOPSO on most instances according to the Wilcoxon rank sum test. As for the WFG4 and WFG5 test instances, ϵ-MOEA and OBEA together cover all the best values among the five algorithms, which further verifies the strong performance of the proposed algorithm.

As evidenced by the statistical results of the HV values on the DTLZ test suite summarized in Table 1, OBEA covers the best values on the DTLZ1 and DTLZ3 test problems. For DTLZ2, the best values on the 8-, 10-, and 15-objective instances are obtained by OBEA, while MOMBI2 and ϵ-MOEA achieve the best performance on the remaining instances. Moreover, the best value on the 15-objective DTLZ4 instance is obtained by OBEA, which indicates the strength of the proposed algorithm on high-dimensional problems. By contrast, the performance of OBEA on the DTLZ7 test functions is not as good as on DTLZ1 and DTLZ3; nevertheless, the proposed algorithm is superior to MOEA/D and ϵ-MOEA on most DTLZ7 instances.

Similar observations can be made for the WFG test problems, where OBEA shows the most competitive performance on most test instances in Table 6, while MOMBI2 and ϵ-MOEA also achieve the best performance on some instances. Overall, it can be concluded that OBEA balances diversity and convergence better than these four algorithms on most instances.

Table 6: The statistical results (mean and standard deviation) of the HV values obtained by each algorithm on the WFG test suite. The best results are italicized.
4.4.2. Comparison with the State-of-the-Art Algorithms

In this section, the proposed algorithm is compared with several state-of-the-art algorithms on the DTLZ and WFG test problems. The experimental results of each compared algorithm on the DTLZ and WFG test instances are shown in Tables 7 and 8 in terms of the IGD metric. The HV results on these instances are shown in Tables 9 and 10. Moreover, thirty independent runs are executed for these algorithms on each test problem, with the Wilcoxon rank sum test applied at a significance level of 0.05.
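The testing protocol above can be sketched as follows. The IGD samples below are synthetic placeholders (not data from the paper), and the pure-Python rank sum test uses the normal approximation, which is reasonable for 30 runs per algorithm:

```python
import math
import random

def ranksum_p(a, b):
    """Two-sided Wilcoxon rank sum test via the normal approximation.
    Pools both samples, assigns average ranks for ties, and converts
    the rank sum of the first sample into a z-score and p-value."""
    n1, n2 = len(a), len(b)
    pooled = sorted((v, idx) for idx, v in enumerate(a + b))
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < n1 + n2:
        j = i
        while j + 1 < n1 + n2 and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        avg_rank = (i + j) / 2.0 + 1.0  # average 1-based rank for a tie group
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = avg_rank
        i = j + 1
    w = sum(ranks[:n1])                            # rank sum of sample a
    mean = n1 * (n1 + n2 + 1) / 2.0
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mean) / sd
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

# Thirty synthetic IGD values per algorithm; clearly separated means.
random.seed(0)
igd_obea  = [0.010 + random.gauss(0, 0.001) for _ in range(30)]
igd_other = [0.020 + random.gauss(0, 0.001) for _ in range(30)]
p = ranksum_p(igd_obea, igd_other)
significant = p < 0.05  # reject H0 at the paper's 0.05 level
```

In practice one would use a library routine such as `scipy.stats.ranksums`; the hand-rolled version is shown only to make the protocol explicit.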

Table 7: The statistical results (mean and standard deviation) of the IGD values obtained by RVEA, NSGA-III, SPEAR, MOEA/DD, Two_Arch2, MOEA/DVA, and OBEA on the DTLZ test suite. The best results are italicized.
Table 8: The statistical results (mean and standard deviation) of the IGD values obtained by RVEA, NSGA-III, SPEAR, MOEA/DD, Two_Arch2, MOEA/DVA, and OBEA on the WFG test suite. The best results are italicized.
Table 9: The statistical results (mean and standard deviation) of the HV values obtained by RVEA, NSGA-III, SPEAR, MOEA/DD, Two_Arch2, MOEA/DVA, and OBEA on the DTLZ test suite. The best results are italicized.
Table 10: The statistical results (mean and standard deviation) of the HV values obtained by RVEA, NSGA-III, SPEAR, MOEA/DD, Two_Arch2, MOEA/DVA, and OBEA on the WFG test suite. The best results are italicized.

It can be observed that OBEA obtains most of the best values among the compared algorithms on the five original DTLZ test instances. For DTLZ1, OBEA and MOEA/DVA show better performance than the other algorithms according to the Wilcoxon rank sum test. For the DTLZ2 and DTLZ7 test problems, although Two_Arch2 and MOEA/DD, together with MOEA/DVA, cover some of the best values, OBEA still outperforms most algorithms on these instances; on DTLZ7 in particular, the proposed OBEA shows competitive results on the remaining high-dimensional instances. As for DTLZ3, although NSGA-III and RVEA share with OBEA the idea of using reference lines, OBEA is still superior to both, which may be due to the effectiveness of the selection strategy in our algorithm.

As for the IGD results on the WFG test problems shown in Table 8, OBEA still performs well on most test instances. To be more specific, OBEA covers all the best values on the WFG1 and WFG7 test problems, which demonstrates the strength of the proposed algorithm. For the WFG2 test problem, which has a convex, disconnected, and nonseparable PF, the performance of OBEA is a little worse than that of NSGA-III and Two_Arch2; however, the proposed OBEA still tends to outperform the other algorithms on some WFG2 instances. Although the best values on the 3-objective, 8-objective, and 10-objective instances of WFG3 are obtained by MOEA/DVA and Two_Arch2, respectively, OBEA covers all the other instances of the WFG3 test problem. For the WFG4 test problem, OBEA and MOEA/DVA, together with Two_Arch2, obtain the best values. As for the WFG5 and WFG6 test problems, although RVEA covers the best value on the 10-objective instance, OBEA still tends to outperform most of the algorithms on the remaining instances. Although the best values on some WFG8 instances are covered by MOEA/DD and Two_Arch2, OBEA still obtains the best performance on the 5-objective and 8-objective test instances. As for WFG9, NSGA-III shows better performance than it does on the DTLZ test problems; however, OBEA still outperforms most of the algorithms, especially on the 3-objective, 5-objective, and 15-objective WFG9 instances.

The statistical results of the HV values obtained by the compared algorithms are summarized in Tables 9 and 10. The HV values presented in the tables are all normalized to [0, 1], and a higher value is better. OBEA obtains the majority of the best median values over the DTLZ instances, which indicates a clear advantage of the proposed algorithm. For the DTLZ1 test problem, although MOEA/DVA obtains the best values on the 8-objective and 10-objective instances, OBEA shows good performance on the 3-objective and 15-objective instances. Since the DTLZ2 test problem involves a concave PF, Two_Arch2 and MOEA/DVA, together with RVEA, obtain the best performance on the 3-objective, 8-objective, and 15-objective instances, respectively, while OBEA covers the best values on the 5-objective and 10-objective instances and tends to be clearly superior to the other algorithms. Furthermore, although DTLZ3 is a concave and multimodal test problem, OBEA obtains most of the best HV values on it. As for DTLZ4, the performance of OBEA is a little worse than that of Two_Arch2 and MOEA/DVA; however, OBEA still outperforms most of the algorithms, especially on 15-objective DTLZ4. In addition, because the DTLZ7 test problem employs disconnected and multimodal PFs, the performance of OBEA is not as good on the 5-objective and 8-objective instances, but OBEA still outperforms the other algorithms on the remaining instances.

Statistical results of the HV values on the WFG test sets are shown in Table 10. OBEA shows the most competitive performance on WFG1, WFG6, and WFG8. By contrast, the performance of RVEA on the WFG test functions is not as good as on the DTLZ test functions, while SPEAR performs well on the WFG test problems, especially on WFG5. Moreover, although the performance of OBEA is a little worse than that of Two_Arch2 on some WFG2 instances, it still tends to be superior on most test instances, such as the 15-objective WFG5 instance shown in Figure 4. On the WFG2 test problem, the performance of NSGA-III is much better than on the other test instances. Finally, for the WFG7 and WFG9 test problems, although Two_Arch2 and MOEA/DVA cover some of the best values, OBEA still shows the best performance among most algorithms on these test instances.

Figure 4: Parallel coordinates of the nondominated front obtained by each algorithm on 15-objective WFG5 in the run associated with the median HV value. (a) OBEA on WFG5. (b) RVEA on WFG5. (c) NSGA-III on WFG5. (d) MOEA/DD on WFG5. (e) SPEAR on WFG5. (f) MOEA/DVA on WFG5. (g) Two_Arch2 on WFG5.

In summary, OBEA obtains the best overall performance on these test problems, from multiobjective to many-objective cases. This may be because the exploration ability and convergence of OBEA are reinforced by the OBL and adaptive clustering mechanisms, which leads to the highly efficient results. Note that the performance of OBEA may degrade slightly on test problems with irregular PFs. This is because OBEA uses a set of uniformly distributed reference vectors, which implicitly assumes that the PF has a regular geometrical structure; this assumption can affect the performance of OBEA on problems with irregular PFs. To sum up, the proposed OBEA achieves the best performance on most test problems whose PFs have a regular geometrical structure. On problems with quite irregular PFs, its performance is not as strong, although it still tends to provide competitive results.
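Uniformly distributed reference vectors of the kind discussed above are commonly generated with the Das–Dennis simplex-lattice design; this paper does not spell out its generator here, so the following is an illustrative sketch of that standard construction:

```python
from itertools import combinations

def simplex_lattice(m, h):
    """Das-Dennis simplex-lattice design: all m-dimensional weight
    vectors whose components are nonnegative multiples of 1/h and
    sum to 1. Generated via a stars-and-bars enumeration."""
    vectors = []
    # Choose m-1 "bar" positions among h+m-1 slots; star counts
    # between consecutive bars give the vector components.
    for bars in combinations(range(h + m - 1), m - 1):
        prev = -1
        vec = []
        for b in bars:
            vec.append((b - prev - 1) / h)
            prev = b
        vec.append((h + m - 2 - prev) / h)
        vectors.append(vec)
    return vectors

# For m = 3 objectives with h = 4 divisions: C(6, 2) = 15 vectors,
# including the extreme points of the unit simplex.
vs = simplex_lattice(3, 4)
```

Because every vector lies on the unit simplex, the design covers a regular (e.g., linear or concave) PF evenly, which is exactly the regularity assumption noted above.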

To quantify how well each algorithm performs overall in terms of HV, the performance score [37] is introduced to rank all the algorithms. Given a specific problem instance, suppose there are k algorithms A_1, …, A_k involved in the comparison. δ_{i,j} is set to 1 if A_j is significantly better than A_i in terms of HV, and 0 otherwise. Thereafter, for each algorithm A_i, the performance score P(A_i) is determined as follows:

P(A_i) = Σ_{j = 1, j ≠ i}^{k} δ_{i,j}.

This value represents the number of algorithms that perform significantly better than the corresponding algorithm on the given instances; a score of zero means that no other algorithm is significantly better in terms of the HV indicator. Clearly, the smaller the score, the better the algorithm. Figure 5 shows the average performance score over all 70 test instances for the compared algorithms in terms of HV, together with the rank of each algorithm according to this score.
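The scoring scheme above can be computed directly from the matrix of pairwise significance outcomes. A minimal sketch (the significance matrix here is a toy example, not data from the paper):

```python
def performance_scores(better):
    """Performance score P(A_i) = sum over j != i of delta_{i,j},
    where better[j][i] is True iff algorithm j is significantly
    better than algorithm i in terms of HV. Smaller is better."""
    k = len(better)
    return [sum(1 for j in range(k) if j != i and better[j][i])
            for i in range(k)]

# Toy 3-algorithm example: algorithm 0 significantly beats both
# others, algorithm 1 beats only algorithm 2, algorithm 2 beats none.
better = [
    [False, True,  True],   # row j = 0: algorithm 0 beats 1 and 2
    [False, False, True],   # row j = 1: algorithm 1 beats 2
    [False, False, False],  # row j = 2: beats nobody
]
scores = performance_scores(better)  # [0, 1, 2]
```

Averaging these per-instance scores over all 70 instances gives the ranking plotted in Figure 5.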

Figure 5: Ranking and average performance score obtained by each compared algorithm in terms of HV.

As the figure shows, OBEA ranks first among the eleven algorithms, which indicates the superior performance of the proposed algorithm on these test problems. Note that although MOEA/D is ranked last, it still shows good performance on some test problems. Moreover, to better visualize the performance, the average performance scores summarized by number of objectives and by test problem in terms of HV are presented in Figure 6.

Figure 6: (a) Average performance score obtained by the compared algorithms over all test problems for different numbers of objectives in terms of the HV and (b) average performance score obtained by the compared algorithms on each test problem in terms of the HV, Dx for DTLZ and Wx for WFG. The values of the proposed OBEA are connected by a solid red line.

Figure 6(a) shows the average performance score over all test problems for different numbers of objectives. The proposed OBEA works well on nearly all the considered numbers of objectives except the 3-objective problems, where it nevertheless still outperforms most of the algorithms. Furthermore, the performance score for each individual test problem, aggregated over all numbers of objectives, is shown in Figure 6(b). For DTLZ1-DTLZ3, OBEA shows good performance and outperforms nearly all the other algorithms. For DTLZ4 and DTLZ7, the performance of OBEA is not as good as on DTLZ1-DTLZ3 but is still generally superior among the compared algorithms. Finally, on the WFG test problems, OBEA covers nearly all the best values. In short, OBEA performs strongly among the compared algorithms in terms of HV.

4.4.3. Analysis of Adaptive Clustering Mechanism

In the proposed OBEA, adaptive clustering is adopted to assist the algorithm in selecting individuals from each subpopulation. Various approaches have been proposed to cluster individuals: for example, RVEA and NSGA-III use the angle to partition the population, and θ-DEA uses the perpendicular distance. Although these approaches, which rely on the acute angle or the distance alone, can perform well in some situations, using a single criterion may increase the crowding of individuals within a cluster and discard promising solutions, thus affecting the algorithm's ability to balance diversity and convergence.

In view of this, our proposed OBEA considers the acute angle and the distance together through an adaptive mechanism so as to assign individuals efficiently. For example, in Figure 7(a), given an individual x and its normalization x′, let v_1 and v_2 be two different reference vectors, let θ_1 and θ_2 denote the acute angles between x′ and v_1 and v_2, respectively, and let d_1 and d_2 denote the distances from the origin to the projections of x′ on v_1 and v_2, respectively. Under the distance-based approach, the reference vector v_2 would be preferred because the distance d_2 is smaller than d_1; under the acute-angle-based clustering, however, the individual x would be associated with v_1 because the angle θ_1 is clearly smaller than θ_2. In our adaptive clustering strategy, individuals are first associated with reference vectors according to the distance between the origin and the projection point, which increases the diversity of the individuals in each cluster; then, as the number of iterations grows, individuals are instead assigned according to the angle through our adaptive function.
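The two assignment criteria, and the shift between them, can be sketched as follows. The linear blend controlled by the weight s is an illustrative assumption for exposition, not the paper's exact adaptive rule:

```python
import math

def assign(x, vectors, s):
    """Assign a (normalized) individual x to one reference vector.
    s in [0, 1] is the adaptive weight: s = 0 selects purely by the
    origin-to-projection distance (diversity-oriented, used early),
    s = 1 purely by the acute angle (convergence-oriented, used late).
    Smaller blended key is preferred."""
    norm_x = math.sqrt(sum(c * c for c in x))
    best, best_key = None, None
    for i, v in enumerate(vectors):
        norm_v = math.sqrt(sum(c * c for c in v))
        dot = sum(a * b for a, b in zip(x, v))
        cos_t = max(-1.0, min(1.0, dot / (norm_x * norm_v)))
        angle = math.acos(cos_t)            # acute angle between x and v
        proj = dot / norm_v                 # origin-to-projection distance
        key = s * angle + (1.0 - s) * proj  # illustrative blend
        if best_key is None or key < best_key:
            best, best_key = i, key
    return best
```

For x = (1, 0.1) with reference vectors (1, 0) and (0, 1), the angle criterion (s = 1) picks the first vector while the projection-distance criterion (s = 0) picks the second, reproducing the conflict illustrated in Figure 7(a).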

Figure 7: (a) An illustration of the special situation when solely using the acute angle or perpendicular distance to select individual. (b) An illustration of the three functions versus the iterative generations.

As the function s is the key component of the adaptive scheme, we consider the following three different functions for further analysis:

Figure 7(b) shows the values of s for the three functions, where t is the current generation number, t_max is the maximal generation number, and ⌈·⌉ denotes the ceiling function. Moreover, μ is the phase control parameter of the sigmoid function. As Figure 8 shows, the IGD results on different test problems for various μ values indicate that μ = 0.3 or μ = 0.4 is the best setting for the algorithm; with these values the sigmoid function covers the standard range [0, 1], so this setting is adopted in the following experiments. To validate the effectiveness of the proposed scheme, we compare OBEA (here denoted OBEA-sig) with the following two variants:
(1) Variant I: the adaptive scheme in OBEA is implemented with the linear function (OBEA-lin).
(2) Variant II: the adaptive scheme is constructed with the exponential function (OBEA-exp).
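The three adaptive functions themselves are not reproduced in this text, so the sketch below assumes common forms of a linear, an exponential, and a logistic sigmoid schedule mapping the generation counter to s ∈ [0, 1]; the exact expressions, the normalizing constants, and the slope 10 in the sigmoid are illustrative assumptions, not the paper's definitions:

```python
import math

T_MAX = 500  # maximal generation number t_max (illustrative value)

def s_lin(t):
    # Linear schedule (OBEA-lin): assumed form t / t_max.
    return t / T_MAX

def s_exp(t):
    # Exponential schedule (OBEA-exp): assumed form, normalized so
    # that s rises slowly at first and reaches 1 at t = t_max.
    return (math.exp(t / T_MAX) - 1.0) / (math.e - 1.0)

def s_sig(t, mu=0.4):
    # Sigmoid schedule (OBEA-sig): assumed logistic form; the phase
    # control parameter mu shifts the transition point, so s crosses
    # 0.5 at t = mu * t_max.
    return 1.0 / (1.0 + math.exp(-10.0 * (t / T_MAX - mu)))
```

All three rise from near 0 to near 1 over [0, t_max]; they differ in when the switch from distance-based to angle-based clustering effectively happens, which is what the OBEA-lin/OBEA-exp/OBEA-sig comparison probes.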

Figure 8: (a) An illustration of the IGD results on 15-objective DTLZ2 when using different μ. (b) An illustration of the IGD results on 10-objective WFG7 when using different μ. (c) An illustration of the IGD results on 8-objective DTLZ4 when using different μ. (d) An illustration of the IGD results on 5-objective WFG4 when using different μ.

In this study, OBEA and its two variants are tested on the DTLZ and WFG test suites with three to fifteen objectives for 30 runs, and HV is applied to quantify the performance. The parameter settings of the two variants are the same as those of OBEA. The mean and standard deviation of the HV values of the two variants as well as OBEA are listed in Table 11, where the best mean value for each test instance is italicized. The Wilcoxon rank sum test is also performed to investigate whether there is a significant performance difference between OBEA and its two variants. As Table 11 shows, OBEA-sig obtains most of the best mean values on the DTLZ and WFG test instances. Although OBEA-lin and OBEA-exp obtain some of the best values, OBEA-sig still performs well on most test instances according to the Wilcoxon rank sum test. In short, OBEA-sig is more stable and efficient than the other variants; therefore, the sigmoid function is adopted for the adaptive strategy in this paper.

Table 11: Performance comparison of OBEA and its two variants on DTLZ and WFG test suites in terms of HV.
4.4.4. Analysis of Opposition-Based Selection

The key component of our opposition-based selection is the proposed hybrid distance (HD). Here, a contrast experiment is used to further explore the effectiveness of this strategy. Note that the formulation of HD shares some similarity with the angle-penalized distance (APD) [53] and the penalty-based boundary intersection (PBI) [43], which are adopted in RVEA and in decomposition-based MOEAs, respectively. In this section, comparative studies are carried out on the original OBEA, the proposed algorithm with the APD approach, and the proposed algorithm with the PBI approach; in particular, we replace HD with APD or PBI, and the opposition point is then calculated according to Section 3.4. The experiments are again conducted with the Wilcoxon rank sum test. For simplicity, the proposed algorithm with the APD and PBI approaches is denoted as OBEA-APD and OBEA-PBI, respectively.

The performance of OBEA, OBEA-APD, and OBEA-PBI is verified on four benchmark test problems selected from different test suites, namely DTLZ1, DTLZ2, WFG2, and WFG4. As shown by the statistical results summarized in Table 12, OBEA performs best on most problems, especially on DTLZ2, where all the best values are obtained by the proposed algorithm with HD. OBEA-APD and OBEA-PBI obtain the best values on some instances; however, the performance of OBEA on those instances is statistically equivalent according to the Wilcoxon rank sum test. To sum up, the results indicate that the proposed HD is more capable and effective than APD and PBI, which further verifies the benefit of the proposed strategy for handling many-objective problems.
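For reference, the PBI baseline used in OBEA-PBI follows the standard decomposition-based formulation of Zhang and Li [43]. A minimal sketch (θ = 5 is a conventional default for PBI, not a value taken from this paper, and f is assumed already translated by the ideal point):

```python
import math

def pbi(f, w, theta=5.0):
    """Penalty-based boundary intersection: d1 is the distance from
    the ideal point along the reference vector w, d2 the perpendicular
    distance from f to the line spanned by w; smaller d1 + theta*d2
    is better (convergence plus diversity penalty)."""
    norm_w = math.sqrt(sum(c * c for c in w))
    d1 = sum(a * b for a, b in zip(f, w)) / norm_w
    d2 = math.sqrt(sum((a - d1 * b / norm_w) ** 2
                       for a, b in zip(f, w)))
    return d1 + theta * d2
```

A point lying exactly on its reference line is scored by d1 alone, while deviation from the line is penalized θ times more heavily, which is the property HD and APD also modulate, each in its own way.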

Table 12: The statistical results of the HV obtained by OBEA, OBEA-APD, and OBEA-PBI. The best results are italicized.
4.4.5. Effectiveness Analysis of OBL and Adaptive Clustering in OBEA

Since two main strategies are designed in our OBEA, three variant algorithms are generated to investigate the effectiveness of each strategy: OBEA1 (OBEA without the OBL strategy), OBEA2 (OBEA without the adaptive clustering strategy), and OBEA3 (OBEA without both the OBL strategy and the adaptive clustering strategy). The performance of these algorithms is shown in Table 13, where the better values are italicized.

Table 13: The statistical results of the HV obtained by various OBEAs. The best results are italicized.

The results show that both strategies improve the performance of the algorithm. In detail, although OBEA1 and OBEA2 both retain good search capability in most cases, the performance of OBEA2 is slightly worse than that of OBEA1 according to the Wilcoxon rank sum test, which indicates that the adaptive clustering strategy contributes more to the algorithm than the OBL strategy. This can be attributed to the structure of the adaptive clustering mechanism and the effectiveness of the reference lines. Moreover, although the performance of OBEA3 is somewhat worse than that of the others, its HV values are still reasonable, which indicates the underlying strength of the nondominated sorting in OBEA. To sum up, owing to the good performance of these strategies, the proposed OBEA shows strong potential for solving different kinds of problems.

4.4.6. Investigation of the Evolutionary Behavior

To observe the evolutionary behaviors of the algorithms, further studies with the ten algorithms are carried out to exhibit their evolutionary trajectories. Figure 9 plots the trajectories of IGD versus the number of function evaluations for the ten algorithms on the 15-objective DTLZ1, DTLZ4, DTLZ7, WFG4, WFG7, and WFG9 test instances. According to the figures, OBEA shows an obvious advantage over most of its competitors. Although the performance of OBEA is a little worse on DTLZ4, the figures show that the proposed OBEA still tends to be superior to the other algorithms on the remaining instances. By simultaneously considering the IGD, HV, and evolutionary behaviors, we can conclude that the overall performance of OBEA is better than, or at least comparable to, that of the nine competitors.
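The IGD metric plotted in these trajectories measures how well the obtained set covers a sampled reference (true) front. A minimal sketch (the reference-front sampling below is an illustrative toy, not the paper's sampling):

```python
import math

def igd(reference_front, obtained_set):
    """Inverted generational distance: the average, over all sampled
    reference points, of the Euclidean distance to the nearest
    obtained solution. Smaller is better; 0 means perfect coverage."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    return sum(min(dist(r, s) for s in obtained_set)
               for r in reference_front) / len(reference_front)

# Toy 2-objective reference front: a perfect approximation scores 0,
# and dropping points away from parts of the front increases IGD.
ref = [[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]]
```

Because each reference point seeks its nearest obtained solution, IGD penalizes both poor convergence and gaps in coverage, which is why it is used alongside HV throughout the comparisons.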

Figure 9: Trajectory of the mean IGD value on ten algorithms with fifteen objectives. (a) DTLZ1. (b) DTLZ4. (c) DTLZ7. (d) WFG4. (e) WFG7. (f) WFG9.

5. Conclusion

Due to the loss of selection pressure and ineffective designs for balancing diversity and convergence, general MOEAs often encounter challenges in solving MaOPs. Although a variety of algorithms, especially reference-based algorithms, have been designed to address this issue, most of them use the angle or the distance alone to measure the quality of population members against the reference set, which may discard good solutions because of this simplistic selection mechanism. Therefore, we have presented a new many-objective evolutionary algorithm, called OBEA, in which two strategies, OBL and adaptive clustering, are adopted in the environmental selection to improve the balance between convergence and diversity.

To establish its competitiveness, we have carried out an extensive experimental comparison of OBEA with ten algorithms on well-known benchmark problems from the DTLZ and WFG test suites. The statistical results reveal that the proposed OBEA performs well on almost all the problem instances and clearly outperforms most state-of-the-art many-objective optimizers. However, the results also indicate that none of the algorithms outperforms all the others on every instance, which underlines the importance of choosing an algorithm carefully when solving MaOPs.

In the future, we will extend OBEA to solve constrained many-objective problems by incorporating constraint handling techniques in order to further verify its effectiveness.

Data Availability

All data included in this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant 61873240.

References

  1. K. Deb, K. Sindhya, and J. Hakanen, “Multi-objective optimization,” in Decision Sciences: Theory and Practice, pp. 145–184, CRC Press, Boca Raton, FL, USA, 2016. View at Google Scholar
  2. J. H. Holland, Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence, MIT Press, Cambridge, MA, USA, 1992.
  3. H.-G. Beyer and H.-P. Schwefel, “Evolution strategies–a comprehensive introduction,” Natural computing, vol. 1, no. 1, pp. 3–52, 2002. View at Publisher · View at Google Scholar
  4. M. Dorigo and M. Birattari, “Ant colony optimization,” in Encyclopedia of Machine Learning, pp. 36–39, Springer, Berlin, Germany, 2011. View at Google Scholar
  5. K. Deb, “Multi-objective optimization,” in Search Methodologies, pp. 403–449, Springer, Berlin, Germany, 2014. View at Google Scholar
  6. C. A. C. Coello, G. B. Lamont, D. A. Van Veldhuizen et al., Evolutionary Algorithms for Solving Multi-Objective Problems, vol. 5, Springer, Berlin, Germany, 2007.
  7. E. Zitzler, M. Laumanns, and L. Thiele, “Spea2: improving the strength pareto evolutionary algorithm,” TIK-Report, vol. 103, 2001. View at Google Scholar
  8. K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm: Nsga-ii,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002. View at Publisher · View at Google Scholar · View at Scopus
  9. C. A. C. Coello, G. T. Pulido, and M. S. Lechuga, “Handling multiple objectives with particle swarm optimization,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 256–279, 2004. View at Publisher · View at Google Scholar · View at Scopus
  10. W. K. Li, W. L. Wang, and L. Li, “Optimization of water resources utilization by multi-objective moth-flame algorithm,” Water Resources Management, vol. 47, no. 10, pp. 3303–3316, 2018. View at Publisher · View at Google Scholar · View at Scopus
  11. S. Mirjalili, P. Jangir, and S. Saremi, “Multi-objective ant lion optimizer: a multi-objective optimization algorithm for solving engineering problems,” Applied Intelligence, vol. 46, no. 1, pp. 79–95, 2017. View at Publisher · View at Google Scholar · View at Scopus
  12. B. Li, J. Li, K. Tang, and X. Yao, “Many-objective evolutionary algorithms: a survey,” ACM Computing Surveys (CSUR), vol. 48, no. 1, p. 13, 2015. View at Publisher · View at Google Scholar · View at Scopus
  13. M. Farina and P. Amato, “On the optimal solution definition for many-criteria optimization problems,” in Proceedings of the Annual Meeting of the North American Fuzzy Information Processing Society, NAFIPS 2002, pp. 233–238, IEEE, New Orleans LA, USA, June 2002.
  14. P. J. Fleming, R. C. Purshouse, and R. J. Lygoe, “Many-objective optimization: an engineering design perspective,” in Proceedings of the International Conference on Evolutionary Multi-Criterion Optimization, pp. 14–32, Springer, Guanajuato, Mexico, March 2005.
  15. J. G. Herrero, A. Berlanga, and J. M. M. Lopez, “Effective evolutionary algorithms for many-specifications attainment: application to air traffic control tracking filters,” IEEE Transactions on Evolutionary Computation, vol. 13, no. 1, pp. 151–168, 2009. View at Publisher · View at Google Scholar · View at Scopus
  16. G. Fu, Z. Kapelan, J. R. Kasprzyk, and P. Reed, “Optimal design of water distribution systems using many-objective visual analytics,” Journal of Water Resources Planning and Management, vol. 139, no. 6, pp. 624–633, 2012. View at Google Scholar
  17. J. Knowles and D. Corne, “Quantifying the effects of objective space dimension in evolutionary multiobjective optimization,” in Proceedings of the Evolutionary Multi-Criterion Optimization, pp. 757–771, Springer, Matsushima, Japan, March 2007.
  18. P. Kata and X. Yao, “How well do multi-objective evolutionary algorithms scale to large problems,” in Proceedings of the IEEE Congress on Evolutionary Computation, CEC 2007, pp. 3959–3966, IEEE, Singapore, Septembe 2007.
  19. H. Ishibuchi, N. Tsukamoto, and Y. Nojima, “Evolutionary many-objective optimization: a short review,” in Proceedings of the IEEE Congress on Evolutionary Computation, CEC 2008 (IEEE World Congress on Computational Intelligence), pp. 2419–2426, IEEE, Hong Kong, China, June 2008.
  20. R. C. Purshouse and P. J. Fleming, “On the evolutionary optimization of many conflicting objectives,” IEEE Transactions on Evolutionary Computation, vol. 11, no. 6, pp. 770–784, 2007. View at Publisher · View at Google Scholar · View at Scopus
  21. M. Laumanns, L. Thiele, K. Deb, and E. Zitzler, “Combining convergence and diversity in evolutionary multiobjective optimization,” Evolutionary computation, vol. 10, no. 3, pp. 263–282, 2002. View at Publisher · View at Google Scholar · View at Scopus
  22. H. Aguirre and K. Tanaka, “Space partitioning with adaptive ε-ranking and substitute distance assignments: a comparative study on many-objective mnk-landscapes,” in Proceedings of the 11th Annual Conference on Genetic and Evolutionary Computation, pp. 547–554, ACM, Montréal, Canada, July 2009.
  23. H. Sato, H. Aguirre, and K. Tanaka, “Controlling dominance area of solutions and its impact on the performance of moeas,” in Proceedings of the Evolutionary Multi-Criterion Optimization, pp. 5–20, Springer, Matsushima, Japan, March 2007.
  24. H. Sato, H. Aguirre, and K. Tanaka, “Improved s-cdas using crossover controlling the number of crossed genes for many-objective optimization,” in Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation, pp. 753–760, ACM, Dublin, Ireland, July 2011.
  25. K. Ikeda, H. Kita, and S. Kobayashi, “Failure of pareto-based moeas: does non-dominated really mean near to optimal?” in Proceedings of the 2001 Congress on Evolutionary Computation, vol. 2, pp. 957–962, IEEE, Seoul, Korea, May 2001.
  26. Z. He, G. G. Yen, and J. Zhang, “Fuzzy-based pareto optimality for many-objective evolutionary algorithms,” IEEE Transactions on Evolutionary Computation, vol. 18, no. 2, pp. 269–285, 2014. View at Publisher · View at Google Scholar · View at Scopus
  27. X. Zou, Y. Chen, M. Liu, and L. Kang, “A new evolutionary algorithm for solving many-objective optimization problems,” IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 38, no. 5, pp. 1402–1412, 2008. View at Publisher · View at Google Scholar · View at Scopus
  28. J. Branke, T. Kaußler, and H. Schmeck, “Guidance in evolutionary multi-objective optimization,” Advances in Engineering Software, vol. 32, no. 6, pp. 499–507, 2001. View at Publisher · View at Google Scholar · View at Scopus
  29. G. Eichfelder, “Optimal elements in vector optimization with a variable ordering structure,” Journal of Optimization Theory and Applications, vol. 151, no. 2, pp. 217–240, 2011. View at Publisher · View at Google Scholar · View at Scopus
  30. S. Yang, M. Li, X. Liu, and J. Zheng, “A grid-based evolutionary algorithm for many-objective optimization,” IEEE Transactions on Evolutionary Computation, vol. 17, no. 5, pp. 721–736, 2013. View at Publisher · View at Google Scholar · View at Scopus
  31. C. Zhu, L. Xu, and E. D. Goodman, “Generalization of pareto-optimality for many-objective evolutionary optimization,” IEEE Transactions on Evolutionary Computation, vol. 20, no. 2, pp. 299–315, 2016. View at Publisher · View at Google Scholar · View at Scopus
  32. M. Li, S. Yang, and X. Liu, “Shift-based density estimation for pareto-based algorithms in many-objective optimization,” IEEE Transactions on Evolutionary Computation, vol. 18, no. 3, pp. 348–365, 2014. View at Publisher · View at Google Scholar · View at Scopus
  33. Y. Yuan, H. Xu, B. Wang, and X. Yao, “A new dominance relation-based evolutionary algorithm for many-objective optimization,” IEEE Transactions on Evolutionary Computation, vol. 20, no. 1, pp. 16–37, 2016. View at Publisher · View at Google Scholar · View at Scopus
  34. C. Zhou, G. Dai, and M. Wang, “Enhanced dominance and density selection based evolutionary algorithm for many-objective optimization problems,” Applied Intelligence, vol. 1, pp. 1–21, 2017. View at Google Scholar
  35. E. Zitzler and K. . Simon, “Indicator-based selection in multiobjective search,” in Proceedings of the International Conference on Parallel Problem Solving from Nature, pp. 832–842, Springer, Birmingham, UK, September 2004.
36. N. Beume, B. Naujoks, and M. Emmerich, “SMS-EMOA: multiobjective selection based on dominated hypervolume,” European Journal of Operational Research, vol. 181, no. 3, pp. 1653–1669, 2007.
37. J. Bader and E. Zitzler, “HypE: an algorithm for fast hypervolume-based many-objective optimization,” Evolutionary Computation, vol. 19, no. 1, pp. 45–76, 2011.
38. K. Li, S. Kwong, J. Cao, M. Li, J. Zheng, and R. Shen, “Achieving balance between proximity and diversity in multi-objective evolutionary algorithm,” Information Sciences, vol. 182, no. 1, pp. 220–242, 2012.
39. R. H. Gómez and C. A. C. Coello, “MOMBI: a new metaheuristic for many-objective optimization based on the R2 indicator,” in Proceedings of the 2013 IEEE Congress on Evolutionary Computation (CEC), pp. 2488–2495, IEEE, Cancun, Mexico, June 2013.
40. O. Schütze, X. Esquivel, A. Lara, and C. A. C. Coello, “Using the averaged Hausdorff distance as a performance measure in evolutionary multiobjective optimization,” IEEE Transactions on Evolutionary Computation, vol. 16, no. 4, pp. 504–522, 2012.
41. B. Li, K. Tang, J. Li, and X. Yao, “Stochastic ranking algorithm for many-objective optimization based on multiple indicators,” IEEE Transactions on Evolutionary Computation, vol. 20, no. 6, pp. 924–938, 2016.
42. Y. Sun, G. G. Yen, and Y. Zhang, “IGD indicator-based evolutionary algorithm for many-objective optimization problems,” IEEE Transactions on Evolutionary Computation, vol. 23, no. 2, pp. 173–187, 2018.
43. Q. Zhang and H. Li, “MOEA/D: a multiobjective evolutionary algorithm based on decomposition,” IEEE Transactions on Evolutionary Computation, vol. 11, no. 6, pp. 712–731, 2007.
44. M. Asafuddoula, T. Ray, and R. Sarker, “A decomposition-based evolutionary algorithm for many objective optimization,” IEEE Transactions on Evolutionary Computation, vol. 19, no. 3, pp. 445–460, 2015.
45. K. Li, K. Deb, Q. Zhang, and S. Kwong, “An evolutionary many-objective optimization algorithm based on dominance and decomposition,” IEEE Transactions on Evolutionary Computation, vol. 19, no. 5, pp. 694–716, 2015.
46. Y. Yuan, H. Xu, B. Wang, B. Zhang, and X. Yao, “Balancing convergence and diversity in decomposition-based many-objective optimizers,” IEEE Transactions on Evolutionary Computation, vol. 20, no. 2, pp. 180–198, 2016.
47. R. Wang, Z. Zhou, H. Ishibuchi, T. Liao, and T. Zhang, “Localized weighted sum method for many-objective optimization,” IEEE Transactions on Evolutionary Computation, vol. 22, no. 1, pp. 3–18, 2018.
48. K. Praditwong and X. Yao, “A new multi-objective evolutionary optimisation algorithm: the two-archive algorithm,” in Proceedings of the 2006 International Conference on Computational Intelligence and Security, vol. 1, pp. 286–291, IEEE, Guangzhou, China, November 2006.
49. H. Wang, L. Jiao, and X. Yao, “Two_Arch2: an improved two-archive algorithm for many-objective optimization,” IEEE Transactions on Evolutionary Computation, vol. 19, no. 4, pp. 524–541, 2015.
50. Y. Xiang, Y. Zhou, M. Li, and Z. Chen, “A vector angle-based evolutionary algorithm for unconstrained many-objective optimization,” IEEE Transactions on Evolutionary Computation, vol. 21, no. 1, pp. 131–152, 2017.
51. K. Deb and H. Jain, “An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, Part I: solving problems with box constraints,” IEEE Transactions on Evolutionary Computation, vol. 18, no. 4, pp. 577–601, 2014.
52. X. Bi and C. Wang, “A niche-elimination operation based NSGA-III algorithm for many-objective optimization,” Applied Intelligence, vol. 48, no. 1, pp. 118–141, 2018.
53. R. Cheng, Y. Jin, M. Olhofer, and B. Sendhoff, “A reference vector guided evolutionary algorithm for many-objective optimization,” IEEE Transactions on Evolutionary Computation, vol. 20, no. 5, pp. 773–791, 2016.
54. S. Jiang and S. Yang, “A strength Pareto evolutionary algorithm based on reference direction for multiobjective and many-objective optimization,” IEEE Transactions on Evolutionary Computation, vol. 21, no. 3, pp. 329–346, 2017.
55. Y. Liu, D. Gong, X. Sun, and Y. Zhang, “Many-objective evolutionary optimization based on reference points,” Applied Soft Computing, vol. 50, pp. 344–355, 2017.
56. D. H. Wolpert and W. G. Macready, “No free lunch theorems for optimization,” IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 67–82, 1997.
57. M. Li, S. Yang, X. Liu, and R. Shen, “A comparative study on evolutionary algorithms for many-objective optimization,” in Proceedings of the International Conference on Evolutionary Multi-Criterion Optimization, pp. 261–275, Springer, Sheffield, UK, March 2013.
58. I. Das and J. E. Dennis, “Normal-boundary intersection: a new method for generating the Pareto surface in nonlinear multicriteria optimization problems,” SIAM Journal on Optimization, vol. 8, no. 3, pp. 631–657, 1998.
59. H.-L. Liu, F. Gu, and Q. Zhang, “Decomposition of a multiobjective optimization problem into a number of simple multiobjective subproblems,” IEEE Transactions on Evolutionary Computation, vol. 18, no. 3, pp. 450–455, 2014.
60. H. R. Tizhoosh, “Opposition-based learning: a new scheme for machine intelligence,” in Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce (CIMCA-IAWTIC’06), vol. 1, pp. 695–701, IEEE, Vienna, Austria, November 2005.
61. S. Mahdavi, S. Rahnamayan, and K. Deb, “Opposition based learning: a literature review,” Swarm and Evolutionary Computation, vol. 39, pp. 1–23, 2018.
62. S. Rahnamayan, H. R. Tizhoosh, and M. M. A. Salama, “Opposition-based differential evolution,” in Proceedings of the IEEE Symposium on Foundations of Computational Intelligence, FOCI 2007, pp. 81–88, Honolulu, HI, USA, April 2007.
63. W. L. Wang, W. K. Li, Z. Wang, and L. Li, “Opposition-based multi-objective whale optimization algorithm with global grid ranking,” Neurocomputing, vol. 341, pp. 41–59, 2019.
64. K. Deb and R. B. Agrawal, “Simulated binary crossover for continuous search space,” Complex Systems, vol. 9, no. 3, pp. 115–148, 1994.
65. W. Wang, S. Ying, L. Li, Z. Wang, and W. Li, “An improved decomposition-based multiobjective evolutionary algorithm with a better balance of convergence and diversity,” Applied Soft Computing, vol. 57, pp. 627–641, 2017.
66. S. Zapotecas Martínez and C. A. Coello Coello, “A multi-objective particle swarm optimizer based on decomposition,” in Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation, pp. 69–76, ACM, Dublin, Ireland, July 2011.
67. R. Hernández Gómez and C. A. Coello Coello, “Improved metaheuristic based on the R2 indicator for many-objective optimization,” in Proceedings of the 2015 Annual Conference on Genetic and Evolutionary Computation, pp. 679–686, ACM, Madrid, Spain, July 2015.
68. K. Deb, M. Mohan, and S. Mishra, “Towards a quick computation of well-spread Pareto-optimal solutions,” in Proceedings of the International Conference on Evolutionary Multi-Criterion Optimization, pp. 222–236, Faro, Portugal, April 2003.
69. Z. He and G. G. Yen, “Many-objective evolutionary algorithm: objective space reduction and diversity improvement,” IEEE Transactions on Evolutionary Computation, vol. 20, no. 1, pp. 145–160, 2016.
70. R. Denysiuk, L. Costa, and I. E. Santo, “Many-objective optimization using differential evolution with variable-wise mutation restriction,” in Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation, pp. 591–598, ACM, Amsterdam, Netherlands, July 2013.
71. X. Ma, F. Liu, Y. Qi et al., “A multiobjective evolutionary algorithm based on decision variable analyses for multiobjective optimization problems with large-scale variables,” IEEE Transactions on Evolutionary Computation, vol. 20, no. 2, pp. 275–298, 2016.
72. K. Deb, L. Thiele, M. Laumanns, and E. Zitzler, “Scalable multi-objective optimization test problems,” in Proceedings of the 2002 Congress on Evolutionary Computation, CEC’02, pp. 825–830, Honolulu, HI, USA, May 2002.
73. S. Huband, P. Hingston, L. Barone, and L. While, “A review of multiobjective test problems and a scalable test problem toolkit,” IEEE Transactions on Evolutionary Computation, vol. 10, no. 5, pp. 477–506, 2006.
74. E. Zitzler, L. Thiele, M. Laumanns, C. M. Fonseca, and V. G. Da Fonseca, “Performance assessment of multiobjective optimizers: an analysis and review,” IEEE Transactions on Evolutionary Computation, vol. 7, no. 2, pp. 117–132, 2003.
75. L. While, P. Hingston, L. Barone, and S. Huband, “A faster algorithm for calculating hypervolume,” IEEE Transactions on Evolutionary Computation, vol. 10, no. 1, pp. 29–38, 2006.
76. A. Auger, J. Bader, D. Brockhoff, and E. Zitzler, “Theory of the hypervolume indicator: optimal μ-distributions and the choice of the reference point,” in Proceedings of the Tenth ACM SIGEVO Workshop on Foundations of Genetic Algorithms, pp. 87–102, ACM, Orlando, Florida, USA, January 2009.
77. H. Ishibuchi, Y. Hitotsuyanagi, N. Tsukamoto, and Y. Nojima, “Many-objective test problems to visually examine the behavior of multiobjective evolution in a decision space,” in Proceedings of the International Conference on Parallel Problem Solving From Nature, pp. 91–100, Kraków, Poland, September 2010.
78. L. While, L. Bradstreet, and L. Barone, “A fast way of calculating exact hypervolumes,” IEEE Transactions on Evolutionary Computation, vol. 16, no. 1, pp. 86–95, 2012.