Abstract

Balancing convergence and diversity has become a key issue in many-objective optimization, where the large number of objectives poses many challenges to evolutionary algorithms. In this paper, an opposition-based evolutionary algorithm with an adaptive clustering mechanism is proposed for solving such complex optimization problems. In particular, opposition-based learning is integrated into the proposed algorithm to initialize the population, and a nondominated sorting scheme with a new adaptive clustering mechanism is adopted in the environmental selection phase to ensure both convergence and diversity. The proposed method is compared with nine other evolutionary algorithms on a number of test problems with up to fifteen objectives, and the results verify the superior performance of the proposed algorithm. The algorithm is also applied to a variety of multiobjective engineering optimization problems. The experimental results show the competitiveness and effectiveness of our proposed algorithm in solving challenging real-world problems.

1. Introduction

Over the last two decades, evolutionary algorithms (EAs) have proven to be prevalent and efficient in solving real-world optimization problems [1]. Some well-known methodologies include genetic algorithms (GAs) [2], evolution strategies (ES) [3], and ant colony optimization (ACO) [4]. However, real-world optimization problems often involve multiple objectives, which means there is no single best solution when several objectives are considered simultaneously as the goal of the optimization process [5]. In this case, the solutions to a multiobjective problem (MOP), which are the main focus of such algorithms, represent the trade-offs between the objectives that arise from the nature of these problems [6].

For the evolutionary approach to multiobjective optimization, known as the multiobjective evolutionary algorithm (MOEA), many variants have been proposed in recent years. Among them, nature-inspired algorithms have drawn particular attention, such as the improved strength Pareto evolutionary algorithm (SPEA2) [7], the nondominated sorting genetic algorithm version 2 (NSGA-II) [8], multiobjective particle swarm optimization (MOPSO) [9], the multiobjective moth-flame algorithm [10], and the multiobjective ant lion optimizer [11].

There is no doubt that MOEAs have proven to be prevalent and efficient in solving optimization problems with fewer than three objectives [1]. However, with the steadily rising scale of data and the plurality of target requirements, many real-world optimization problems containing more than three objectives, known as “many-objective problems” (MaOPs) [12, 13], are widely appearing in engineering [14], traffic [15], and water management [16]. Unfortunately, the effectiveness of previous MOEAs tends to deteriorate dramatically as the number of objectives increases, which has been verified in [17, 18]. This can be attributed to the fact that almost all solutions in a population become nondominated with respect to one another, and the conflict between convergence and diversity becomes aggravated with the increasing number of objectives in MaOPs [19, 20]. Moreover, the computational complexity of calculating some performance metrics and the representation and visualization of the trade-off surface are additional difficulties of MaOPs. To overcome these drawbacks, a series of many-objective evolutionary algorithms (MaOEAs) have been proposed to address optimization problems with more than three objectives. In summary, the proposed algorithms can be roughly classified into the following four types.

1.1. New Domination Relation-Based Approach

As the selection criterion based on the standard dominance relationship fails to distinguish solutions in MaOPs, a number of new domination principles have been proposed to adaptively discretize the Pareto-optimal front, for instance, ϵ-dominance [21, 22], CDAS-dominance [23, 24], α-dominance [25], fuzzy Pareto dominance [26], L-dominance [27], cone-domination [28, 29], and the grid-based evolutionary algorithm (GrEA) [30]. Furthermore, the recently proposed generalization of Pareto optimality (GPO) [31] expands the dominance area of solutions to enhance the scalability of existing Pareto-based algorithms, the shift-based density estimation strategy (SDE) [32] incorporates density information into the dominance-based criterion, and θ-dominance [33, 34] introduces a new dominance relation to rank solutions; all of these have been proven more effective than the original Pareto dominance relation. In short, these algorithms propose variants of Pareto dominance to enhance the selection pressure toward the PF. However, the drive toward a more aggressive selection pressure can make diversity maintenance more difficult in these new dominance-based MaOEAs.

1.2. Indicator-Based Approach

Aiming to obtain a desired ordering among representative PF approximations, indicator-based MOEAs have been widely studied, for example, the indicator-based evolutionary algorithm (IBEA) [35], SMS-EMOA [36], the fast hypervolume-based evolutionary algorithm (HypE) [37], DNMOEA/HI [38], R2 indicator-based algorithms [39, 40], the stochastic ranking algorithm based on multiple indicators (SRA) [41], and the IGD indicator-based evolutionary algorithm (MaOEA/IGD) [42]. However, the computational cost of some indicator calculations can become prohibitively expensive as the number of objectives increases.

1.3. Decomposition-Based Approach

These algorithms typically decompose a MaOP into several single-objective optimization problems and use aggregation functions to differentiate many-objective solutions. Among the methods that use a set of weight vectors to generate multiple aggregation functions, the multiobjective evolutionary algorithm based on decomposition (MOEA/D) [43] is the most representative, as it aggregates the objectives of an MOP into an aggregation function with a unique weight vector. Around MOEA/D, several variants have been proposed to strike a better balance between convergence and diversity, such as I-DBEA [44], MOEA/DD [45], MOEA/D-DU [46], and MOEA/D-LWS [47]. However, the number of scalarizing functions is usually very limited, which may cause difficulties for diversity maintenance, especially in comparison with the exponentially growing objective space.

1.4. Reference Set-Based Approach

The algorithms of this category use a set of reference solutions to measure the quality of candidate solutions; thus, the search process is guided by the solutions in the reference set. Praditwong and Yao [48] and Wang [49] proposed a novel two-archive algorithm (TAA) and its improved version (Two_Arch2), in which the convergence archive (CA) can be seen as an online-updated real reference set. The recently proposed vector angle-based evolutionary algorithm (VaEA) [50] uses the population itself as the reference set to dynamically guide the evolutionary process. Among the algorithms using a virtual reference set, NSGA-III [51, 52] is predominant; it employs a set of predefined reference points to manage the diversity of the candidate solutions. Subsequently, a number of algorithms based on reference points or vectors have been proposed, such as the reference vector-guided evolutionary algorithm for many-objective optimization (RVEA) [53], the strength Pareto evolutionary algorithm based on reference direction (SPEAR) [54], and many-objective evolutionary optimization based on reference points (RPEA) [55].

Algorithms from the third and fourth categories have become particularly prevalent for many-objective optimization, which can be attributed to their low cost in achieving a representative subset of the entire PF, especially with a limited population size. However, most algorithms in these categories use either the angle or the distance alone to measure the quality of population members against the reference set, which may lose some good solutions due to this simplistic selection mechanism. Furthermore, the No Free Lunch (NFL) theorem [56] logically proves that none of these algorithms can solve all optimization problems, which motivates researchers to propose new methods or improve current algorithms [33, 57]. Therefore, this paper proposes an opposition-based multiobjective evolutionary algorithm with an adaptive clustering mechanism, named OBEA for short, which strengthens the selection mechanism through a comprehensive consideration of both angles and distances. The main properties of OBEA can be summarized as follows:
(i) A new initialization approach is designed with the assistance of opposition-based learning (OBL) to generate the population. Because random initialization lowers the chance of sampling better regions, we use OBL to initialize the population instead of the usual random method. Moreover, opposition-based learning is also applied during the evolutionary process with the aim of increasing the probability of obtaining better solutions.
(ii) An adaptive clustering strategy is integrated into this algorithm. In this strategy, the acute angle and the perpendicular distance between the candidate solutions and the reference vectors are combined through an adaptive mechanism to cluster the candidates. Moreover, a novel selection approach is designed to dynamically select individuals with comprehensive consideration of the balance between convergence and diversity.

Furthermore, an extensive comparison between the proposed OBEA and nine algorithms is carried out on 60 instances of 14 test problems taken from two well-known test suites. The results indicate that OBEA is a very promising alternative for many-objective optimization. The rest of this paper is organized as follows. Section 2 introduces the background knowledge, and details of the proposed OBEA are described in Section 3. Section 4 presents the numerical results of OBEA on the benchmarks and a detailed analysis of the proposed algorithm against nine MaOEAs. Finally, conclusions and future work are given in Section 5.

2. Background

In this section, the main components of the multiobjective optimization problem (MOP) are given first, covering the basics of optimization and Pareto dominance. Next, a brief description of the reference vector is given, which serves as the underlying mechanism for solving many-objective optimization problems.

2.1. Multiobjective Problem

Generally, a multiobjective optimization problem can be stated as follows:

minimize F(x) = (f_1(x), f_2(x), …, f_m(x)), subject to x ∈ Ω,

where x = (x_1, x_2, …, x_n) is the decision vector that satisfies x ∈ Ω, and Ω ⊆ R^n stands for the decision space. The objective function vector F: Ω → R^m consists of m (m ≥ 2) objectives, and R^m refers to the objective space.

Definition 1. Given two decision vectors x, y ∈ Ω, x Pareto dominates y (denoted as x ≺ y) if and only if f_i(x) ≤ f_i(y) for each i ∈ {1, 2, …, m} and f_j(x) < f_j(y) for at least one j ∈ {1, 2, …, m}.

Definition 2. A decision vector x* ∈ Ω is said to be Pareto optimal if and only if there is no x ∈ Ω such that x ≺ x*.

Definition 3. The set of Pareto-optimal solutions (PS) is defined as PS = {x ∈ Ω | x is Pareto optimal}.

Definition 4. The Pareto-optimal front (PF) is defined as PF = {F(x) | x ∈ PS}.
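Definitions 1-4 translate directly into a simple dominance check. A minimal sketch for minimization problems (the function name `dominates` is illustrative, not from the paper):

```python
def dominates(fx, fy):
    """True if objective vector fx Pareto-dominates fy (minimization):
    fx is no worse in every objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(fx, fy)) and \
           any(a < b for a, b in zip(fx, fy))
```

Note that two vectors can be mutually nondominated, which is precisely the situation that becomes ubiquitous as the number of objectives grows.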

2.2. Reference Vector

As the underlying mechanism throughout the algorithm, OBEA uses Das and Dennis's systematic approach [58]; an adaptation of this approach generates the reference points and thus forms the reference vectors. The original Das and Dennis method places points on a normalized hyperplane, where the number of reference points depends on the dimension of the objective space m and a positive integer H (the number of divisions along each objective). Each reference point has coordinates drawn from {0/H, 1/H, …, H/H} that sum to one, and the total number of points is

N_r = C(H + m − 1, m − 1).

However, the number of reference points rapidly increases when m is relatively large. To address this computational burden, a number of new approaches have been proposed [45, 59]. OBEA utilizes the two-layered reference point generation approach, as suggested in [33]. The hyperplane is divided into two parts, the boundary layer and the inner layer, as shown in Figure 1, with a smaller number of divisions used on each layer; the total number of reference points is then the sum of the points generated on the two layers.

The reference vector is the fundamental mechanism of the proposed algorithm. On the one hand, the acute angle and perpendicular distance between the candidate solutions and the reference vectors are adopted in OBEA; on the other hand, the proposed algorithm tends to find near-Pareto-optimal solutions corresponding to the reference vectors. Furthermore, Das and Dennis's systematic approach is utilized for 3-objective and 5-objective problems, while the two-layered reference point generation approach is used in the other situations (m ≥ 8).
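Das and Dennis's single-layer construction can be sketched via a stars-and-bars enumeration. This is a generic illustration, not the paper's implementation; `das_dennis` is a hypothetical name:

```python
from itertools import combinations

def das_dennis(m, H):
    """Das and Dennis's systematic approach: points on the unit simplex
    whose coordinates are multiples of 1/H and sum to 1.  The number of
    points generated is C(H + m - 1, m - 1)."""
    points = []
    # Stars-and-bars: choose m-1 divider positions among H+m-1 slots;
    # the gap sizes between consecutive dividers give the scaled coordinates.
    for dividers in combinations(range(H + m - 1), m - 1):
        prev, coords = -1, []
        for d in dividers:
            coords.append((d - prev - 1) / H)
            prev = d
        coords.append((H + m - 1 - prev - 1) / H)
        points.append(coords)
    return points
```

For example, m = 3 and H = 12 yields C(14, 2) = 91 points, a common 3-objective setting; the two-layered variant simply takes the union of two such sets built with smaller H values.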

3. Proposed Algorithm: OBEA

The pseudocode of the proposed OBEA is presented in Algorithm 1. The algorithm shares a common framework with many evolutionary algorithms and consists of four main phases. First, N solutions and the reference vectors are initialized with the OBL. Then, potential solutions are selected into the mating pool. Next, a set of offspring solutions is obtained by applying crossover and mutation operations. Finally, the population P for the next generation is chosen through the environmental selection procedure. These steps repeat until the termination criteria are met. In the following sections, the details of each component of OBEA are explained step by step.

Output: final population P
Initialization: create the initial population P with the OBL and the reference vector set V
while the termination criterion is not fulfilled do
  P′ ← Mating-selection(P)
  Q ← Variation(P′)
  R ← P ∪ Q
  P ← Environmental-selection(R, V)
return P
3.1. Initialization with the Opposition-Based Learning

Since opposition-based learning (OBL) was first proposed by Tizhoosh [60], it has been adopted in various algorithms owing to its promising potential to improve their performance [61]. Because OBL can obtain fitter starting candidates even when there is no a priori knowledge and can enhance the probability of detecting better regions, we use it here to initialize the algorithm. First, the definitions of the opposite number and the opposite point are given below.

Definition 5. Let x ∈ [a, b] be a real number. Its opposite number x̄ is defined as follows [62]: x̄ = a + b − x.

Definition 6. Let x = (x_1, x_2, …, x_D) be a point in D-dimensional space, where x_j ∈ [a_j, b_j], j = 1, 2, …, D. Its opposite point x̄ = (x̄_1, x̄_2, …, x̄_D) is defined as follows [62]: x̄_j = a_j + b_j − x_j.

The initialization phase of OBEA is shown in Figure 2. We first divide the population into two parts: the first half is generated by a random distribution, and the remaining half is then initialized in terms of OBL by taking the opposite point of each randomly generated individual. Finally, the union of the two halves forms the initial population.
Note that the initialization of OBEA uses the OBL strategy to calculate opposite points, sharing the same idea as our previous work [63]. There are some similarities between them: both algorithms divide the population into two parts, generating one half randomly and the other with the OBL strategy, and both apply the OBL strategy during the optimization process. However, there are also differences. The OBL-based initialization in our previous work targets only multiobjective optimization problems with two or three objectives, whereas OBEA handles far more objectives, since it is designed for many-objective optimization. Moreover, the OBL strategy in OBEA is also used in individual selection and in constructing the last front, with the aim of improving the probability of detecting better regions, as shown in Section 3.4; in our previous work, the OBL strategy was only used to calculate opposite individuals.
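The OBL-based initialization described above can be sketched as follows. This is a minimal illustration under the assumption of box-constrained real variables; the function names are hypothetical:

```python
import random

def opposite(x, lower, upper):
    """Opposite point per Definitions 5 and 6: x_bar_j = a_j + b_j - x_j."""
    return [a + b - xj for xj, a, b in zip(x, lower, upper)]

def obl_initialize(n, lower, upper, seed=0):
    """Sketch of the OBL initialization: half of the population is sampled
    uniformly at random, the other half consists of the opposite points
    of the first half (assumes n is even)."""
    rng = random.Random(seed)
    half = [[rng.uniform(a, b) for a, b in zip(lower, upper)]
            for _ in range(n // 2)]
    return half + [opposite(x, lower, upper) for x in half]
```

In practice one would evaluate both halves and keep the fitter candidates, which is what gives OBL its advantage over purely random sampling.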

3.2. Offspring Creation

In the proposed OBEA, offspring creation involves two steps: (1) mating selection, which chooses parents for offspring generation, and (2) a variation operation, which generates new candidate solutions. Because many variation operators perform poorly at generating offspring in high-dimensional objective spaces, different methods have been proposed. Here, polynomial mutation and simulated binary crossover (SBX) [64] are employed, as in many other algorithms [59, 65].
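A common formulation of SBX for one pair of real-valued parents can be sketched as follows; bound handling and the per-variable crossover probability are omitted for brevity, and `eta` is the distribution index:

```python
import random

def sbx_pair(p1, p2, eta=30, seed=1):
    """Simulated binary crossover: for each variable, draw a spread
    factor beta from a polynomial distribution, then mix the parents."""
    rng = random.Random(seed)
    c1, c2 = [], []
    for x1, x2 in zip(p1, p2):
        u = rng.random()
        if u <= 0.5:
            beta = (2.0 * u) ** (1.0 / (eta + 1.0))
        else:
            beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0))
        c1.append(0.5 * ((1.0 + beta) * x1 + (1.0 - beta) * x2))
        c2.append(0.5 * ((1.0 - beta) * x1 + (1.0 + beta) * x2))
    return c1, c2
```

A useful sanity check is that SBX preserves the parents' centroid in every variable, since the two children are symmetric mixes of the parents.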

3.3. Environmental Selection

Environmental selection aims at selecting N surviving individuals from the current population and its offspring using a hybrid, selection-based nondominated sort. To be specific, environmental selection consists of four steps: nondominated sorting, normalization, adaptive clustering, and opposition-based selection. The procedure is shown in Algorithm 2.

Input: R, unit reference vector set V
Output: population P of the next generation
S ← ∅, i ← 1
repeat
  S ← S ∪ F_i and i ← i + 1
until |S| ≥ N
Last front to be included: F_l ← F_{i−1}
if |S| = N then
  P ← S, break
else
  P ← F_1 ∪ F_2 ∪ … ∪ F_{l−1}
  Normalize the solutions in S
  C ← Adaptive-clustering(S, V)
  P ← P ∪ Opposition-based-selection(C), choosing the remaining K = N − |P| individuals
3.3.1. Normalization

The normalization procedure incorporated into OBEA is similar to that of other algorithms. In normalization, each objective f_i(x), i = 1, 2, …, m, is replaced as follows:

f_i′(x) = (f_i(x) − z_i*) / (z_i^nad − z_i*),

where z* = (z_1*, …, z_m*) is the ideal point, whose components are the minimum values of each objective over all solutions in S. Similarly, z^nad = (z_1^nad, …, z_m^nad) is the nadir point. Moreover, the norm (in the normalized objective space) of each solution in S is also calculated, which is used to compute the acute angle and the perpendicular distance between the candidate solutions and the reference vectors in the normalized objective space. Given F′(x) = (f_1′(x), …, f_m′(x)), its norm (denoted as norm(F′(x))) is defined as follows:

norm(F′(x)) = sqrt(Σ_{i=1}^{m} f_i′(x)²).
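The normalization step can be sketched as follows. As a simplifying assumption in this sketch, the nadir point is approximated by the per-objective maxima over S (the paper's exact nadir estimation may differ):

```python
def normalize(objs):
    """Shift by the ideal point (per-objective minima over the set) and
    scale by the nadir-ideal range; the nadir point is approximated here
    by the per-objective maxima, a common simplification."""
    m = len(objs[0])
    ideal = [min(f[i] for f in objs) for i in range(m)]
    nadir = [max(f[i] for f in objs) for i in range(m)]
    return [[(f[i] - ideal[i]) / max(nadir[i] - ideal[i], 1e-12)
             for i in range(m)] for f in objs]

def norm(f):
    """Euclidean norm of a (normalized) objective vector."""
    return sum(v * v for v in f) ** 0.5
```

The `1e-12` guard avoids division by zero when all solutions share the same value on some objective.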

3.3.2. Adaptive Clustering Operation

In OBEA, clustering is applied to the population S at each generation. After normalization, the population S is partitioned into N (N is the number of reference vectors) subpopulations by associating each individual with its closest reference vector (refer to Figure 3). In the normalized objective space, given the normalized objective vector F′(x) and the unit reference vector v_j, the acute angle θ between them can be calculated as

θ = arccos( F′(x) · v_j / norm(F′(x)) ).

Furthermore, u is the projection of F′(x) on v_j, and the distance between the origin and u, denoted as d_1, is calculated as

d_1 = norm(F′(x)) · cos θ.

The perpendicular distance between F′(x) and v_j, denoted as d_2, is calculated as

d_2 = norm(F′(x)) · sin θ = sqrt(norm(F′(x))² − d_1²).

For the clustering operator, whereas most algorithms consider only d_2 or θ, here d_2 and θ are combined through an adaptive mechanism into an AD value, in which the relative weight of the two terms is controlled by a sigmoid function s of the current generation number t, the maximal generation number t_max, and the control parameter μ.

As Algorithm 3 shows, the core concept of the adaptive clustering operation is to allocate individuals to the subpopulations according to their AD values. To be more specific, for one individual, we first calculate the AD value between the individual and each reference vector. Thereafter, the minimum AD is obtained, and the related subpopulation is the cluster to which the individual belongs. In this way, an individual is allocated to a subpopulation if and only if its AD value with respect to the corresponding reference vector is minimal.

Input: S, unit reference vector set V
Output: C
for i ← 1 to |S| do
  for j ← 1 to |V| do
    Calculate the θ_{i,j}, d_{2,i,j}, and AD_{i,j}
for i ← 1 to |S| do
  k ← argmin_j AD_{i,j}
  Add individual s_i to cluster C_k
return C
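The angle and distance computations, together with the nearest-vector association, can be sketched as follows. As an assumption for brevity, this sketch associates by perpendicular distance alone, standing in for the paper's adaptive AD value that also blends in the angle:

```python
from math import acos, sqrt

def angle_d1_d2(f, v):
    """Acute angle theta between normalized objective vector f and unit
    reference vector v, projection length d1 = norm(f)*cos(theta), and
    perpendicular distance d2 = norm(f)*sin(theta)."""
    nf = sqrt(sum(x * x for x in f))
    cos_t = sum(x * y for x, y in zip(f, v)) / max(nf, 1e-12)
    cos_t = max(-1.0, min(1.0, cos_t))  # guard against rounding error
    theta = acos(cos_t)
    d1 = nf * cos_t
    d2 = sqrt(max(nf * nf - d1 * d1, 0.0))
    return theta, d1, d2

def cluster(objs, vectors):
    """Assign each solution index to the reference vector it is closest to."""
    clusters = [[] for _ in vectors]
    for i, f in enumerate(objs):
        k = min(range(len(vectors)), key=lambda j: angle_d1_d2(f, vectors[j])[2])
        clusters[k].append(i)
    return clusters
```

Replacing the key function with the adaptive AD value recovers the behavior of Algorithm 3.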
3.4. Opposition-Based Selection

To solve MaOPs with good convergence and diversity, opposition-based selection is conducted. The main idea of the proposed opposition-based selection is to take advantage of opposition-based learning and combine it with the clustering operation into one efficient method. The following paragraphs describe the selection mechanism in detail, and the general framework is shown in Algorithm 4.

Input: C, N_c = number of clusters in C
Output: K selected individuals
for i ← 1 to N_c do
  for each individual in C_i do
    Calculate the HD value (equation (14))
  Add the individual of C_i with the minimal HD value to the last front F_l
Calculate the opposite individuals of F_l with the OBL (equations (4)–(6))
Add the opposite individuals into F_l
Randomly choose K individuals from F_l
return the K individuals

In the opposition-based selection, we first construct the last front F_l through the clusters C. To be more specific, our motivation is to find, for each reference vector, the solution closest to the ideal point under a combined convergence and diversity criterion. Hence, a hybrid distance (HD) is proposed, in which the acute angle and perpendicular distance between the candidate solutions and the reference vectors are combined through an adaptive mechanism to select candidates from the clusters C effectively. In the HD, k is the current generation number and K is the maximal generation number; γ is an adaptive weight based on the smallest angle between the reference vector and the other reference vectors in the current generation.

After constructing the last front F_l, we calculate the opposite values of the individuals in F_l according to the OBL in Section 3.1. Afterwards, the opposite values are added into F_l. Finally, K individuals are randomly selected from F_l to complete the next population.

3.5. Computational Complexity of OBEA

The normalization and the calculation of norms in OBEA require O(mN) additions. The time complexity of the clustering is O(mN²), since each solution is compared against N reference vectors in m dimensions. In addition, the selection holds a computational complexity of O(mN²). To sum up, the overall worst-case complexity of one generation of OBEA is approximately O(mN²).

4. Experiment Description

In this section, experiments with twelve algorithms, namely, MOEA/D, dMOPSO [66], MOMBI2 [67], ϵ-MOEA [68], NSGA-III, RVEA, MaOEARD [69], MyODEMR [70], MOEA/DD, MOEA/DVA [71], Two_Arch2, and SPEAR, are conducted to evaluate the performance of the proposed OBEA on 14 benchmark test problems taken from the widely used DTLZ [72] and WFG [73] test suites. For each test problem, objective numbers varying from 3 to 15, i.e., m ∈ {3, 5, 8, 10, 15}, are considered.

In the following subsections, the test problems and the quality indicators used in our comparative experiments are first presented. Then, the experimental settings adopted in this study are provided. Moreover, thirty independent runs are executed for each test problem to mitigate randomness, and the Wilcoxon rank sum test is adopted to compare the results obtained by OBEA and the compared algorithms at a significance level of 0.05.

4.1. Test Problems

Aiming to evaluate performance effectively, two well-known test suites, Deb-Thiele-Laumanns-Zitzler (DTLZ) and Walking-Fish-Group (WFG), are involved in the experiments, as shown in Table 1. Since the nature of the PFs of DTLZ5 and DTLZ6 is unclear beyond three objectives [73], we only consider the DTLZ1-4 and DTLZ7 problems from the DTLZ test suite. The main features of these problems are summarized in Table 2. For DTLZ1-DTLZ4 and DTLZ7, the total number of decision variables is given by n = m + k − 1, where m is the number of objectives and k is set to 5 for DTLZ1, 10 for DTLZ2-DTLZ4, and 20 for DTLZ7. For all WFG problems, the number of decision variables is set to 24, and the position-related parameter is set according to [33, 73].

4.2. Performance Metrics

In our experimental study, two widely used metrics are chosen to evaluate the performance of each algorithm, namely, the inverted generational distance (IGD) [74] and the hypervolume (HV) [75]. Both IGD and HV measure the convergence and the diversity of the obtained solutions effectively.

4.2.1. Inverted Generational Distance (IGD)

Let P* denote a set of uniformly distributed solutions along the Pareto front in the objective space, and let P be the approximation of the PF obtained by the algorithm. The IGD is defined as

IGD(P*, P) = ( Σ_{v ∈ P*} d(v, P) ) / |P*|,

where d(v, P) is the Euclidean distance between a point v ∈ P* and its nearest neighbor in P, and |P*| is the cardinality of P*. It can be seen from this definition that, for a large |P*|, the reference set covers approximately the entire Pareto front, so the metric also captures diversity.
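The IGD definition above can be computed directly; a minimal sketch:

```python
from math import dist  # Euclidean distance between two points (Python 3.8+)

def igd(ref_front, approx):
    """Inverted generational distance: the average distance from each
    point of the reference set P* to its nearest neighbor in the
    approximation P.  Smaller values indicate a better approximation."""
    return sum(min(dist(r, p) for p in approx) for r in ref_front) / len(ref_front)
```

Because the averaging runs over the reference set rather than the approximation, a low IGD requires the approximation to reach every region of the front, which is why the metric rewards both convergence and diversity.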

4.2.2. Hypervolume (HV)

Consider the set of final nondominated points S and a reference point r in the objective space that is dominated by every point in the set S. Then, the hypervolume of S with regard to r can be described as follows:

HV(S, r) = volume( ∪_{x ∈ S} [f_1(x), r_1] × [f_2(x), r_2] × … × [f_m(x), r_m] ).

It should be noted that choosing r slightly larger than the nadir point is suitable [76, 77]. Here, we set each component of r to 1.1, the same as in [33]. In addition, for problems with no more than 10 objectives, the recently proposed fast hypervolume calculation method is adopted to compute the exact hypervolume [78]. For problems with 15 objectives, the Monte Carlo method [37] with 1,000,000 sampling points is adopted, and all hypervolume values presented in this work are normalized to [0, 1].
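The Monte Carlo estimation used for the 15-objective case can be sketched as follows (a generic illustration, not the exact sampler of [37]; objectives are assumed normalized to be nonnegative):

```python
import random

def hv_monte_carlo(points, ref, n_samples=20000, seed=0):
    """Monte Carlo hypervolume estimate for minimization: sample the box
    [0, ref] uniformly and return the dominated fraction of samples
    times the box volume."""
    rng = random.Random(seed)
    m = len(ref)
    hits = 0
    for _ in range(n_samples):
        s = [rng.uniform(0.0, ref[i]) for i in range(m)]
        # a sample counts if at least one solution dominates it
        if any(all(p[i] <= s[i] for i in range(m)) for p in points):
            hits += 1
    volume = 1.0
    for r in ref:
        volume *= r
    return volume * hits / n_samples
```

The estimate's standard error shrinks as 1/sqrt(n_samples), which is why a large sample count such as the 1,000,000 points used above is needed for stable high-dimensional comparisons.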

4.3. Parameter Settings

As for the parameter settings, several general settings for the algorithms are given as follows:
(1) Population Size. The population size N for NSGA-III, MOEA/D, MOEA/DD, MOEA/DVA, and OBEA is controlled by the parameter H. Since MOMBI2 involves a binary tournament selection, we use the same population size as in NSGA-III and OBEA. For the other algorithms, the population size is likewise kept the same to ensure a fair comparison. The population sizes N used in this study for different numbers of objectives are listed in Table 3.
(2) Parameters for Operators. Since the PBI function is involved in MOEA/D, dMOPSO, Two_Arch2, and MOEA/DD, the penalty parameter θ is set to 5 as suggested in [43]. Furthermore, as simulated binary crossover (SBX) and polynomial mutation are employed to generate offspring solutions, the crossover probability and mutation probability are set to 1.0 and 1/n (with n the number of decision variables), respectively. For the SBX operator, the distribution index is 30, and the distribution index of the mutation operator is 20 [45].
(3) Parameter Settings for Algorithms. Besides the parameters mentioned above, the algorithms also have their own specific parameters. These are set mainly according to the suggestions given by their developers, with the purpose of ensuring the impartiality and objectivity of the experiment. The details are as follows.
(a) MOEA/D: the neighborhood size T is set to 20 [43].
(b) MOEA/DD: the neighborhood size T is set to 20, and the probability of selecting parents from the neighborhood is set as suggested in [45].
(c) MOEA/DVA: the neighborhood size T is set to 20, the number of sampling solutions in control variable analysis is 20, and the maximum number of tries required to judge the interaction is 6 [71].
(d) dMOPSO: the age threshold is set to 2 [66].
(e) MOMBI2: following the practice in [67], the threshold ε and the parameter α = 0.5 are set as suggested there.
(f) ϵ-MOEA: the parameter in the grid location calculation is set as suggested in [68].
(g) RVEA: α = 2 and fr = 0.1 are used for all test instances [53].
(h) Two_Arch2: the CA size is equal to the population size, and the fractional distance p is equal to 1/m (where m is the number of objectives) [49].
(i) OBEA: μ = 0.35 is used for the adaptive clustering mechanism.

4.4. Experimental Results and Analysis

In this section, OBEA is first compared with four efficient evolutionary algorithms in Section 4.4.1. Then, six state-of-the-art evolutionary algorithms are included as comparators to investigate the ability of OBEA to solve many-objective optimization problems in Section 4.4.2. The analysis of the adaptive strategy design is included in Section 4.4.3, the comparisons between HD, TCH, and PBI are given in Section 4.4.4, and the effectiveness analysis of OBL and the adaptive strategy is presented in Section 4.4.5. Furthermore, an investigation of the evolutionary behavior of OBEA on parts of the test problems is included in Section 4.4.6.

4.4.1. Comparison with the Previous Algorithms

Statistical results of the IGD values obtained by OBEA and the four algorithms MOEA/D, dMOPSO, MOMBI2, and ϵ-MOEA are summarized in Tables 4 and 5, where the best results are italicized. The significance of the differences between OBEA and the peer algorithms is determined using the Wilcoxon rank sum test, where “+,” “−,” and “≈” indicate that the competitor is better than, worse than, or similar to the proposed OBEA, respectively. The results are summarized as “w/l/t,” which denotes that the corresponding competitor wins on w functions, loses on l functions, and ties on t functions compared with the proposed OBEA.

It can be observed that OBEA achieves the majority of the best values among the compared algorithms on the five original DTLZ test problems, especially on DTLZ1, DTLZ3, and DTLZ7. For the DTLZ2 test problem, the best values on the 5-objective, 8-objective, and 10-objective instances are obtained by ϵ-MOEA and MOEA/D, respectively, but the proposed OBEA performs well on the 3-objective and 15-objective instances. For the DTLZ4 test problem, while the performance of OBEA is slightly worse than MOMBI2 and ϵ-MOEA on the 3-objective, 10-objective, and 15-objective instances, the proposed algorithm still outperforms the others on the 5-objective instance and is similar to ϵ-MOEA on the 8-objective instance according to the Wilcoxon rank sum test.

The statistical results in Table 5 also indicate that OBEA achieves the best performance among the five algorithms on all WFG1 and WFG6-9 test instances. For the WFG2 and WFG3 test problems, ϵ-MOEA and MOMBI2 show the best performance on most test instances, while the proposed OBEA performs well on a subset of the instances and still outperforms algorithms such as MOEA/D and dMOPSO on most instances according to the Wilcoxon rank sum test. For the WFG4 and WFG5 test instances, ϵ-MOEA and OBEA together cover all the best values among the five algorithms, which again confirms the strong performance of the proposed algorithm.

As evidenced by the statistical results of the HV values on the DTLZ test suite summarized in Table 1, OBEA covers the best values on the DTLZ1 and DTLZ3 test problems. For DTLZ2, the best values on the 8-, 10-, and 15-objective instances are obtained by OBEA, while MOMBI2 and ϵ-MOEA achieve the best performance on different instances, respectively. Moreover, the best value on the 15-objective DTLZ4 instance is obtained by OBEA, which indicates the strength of the proposed algorithm on high-dimensional problems. By contrast, the performance of OBEA on the DTLZ7 test functions is not as good as that on DTLZ1 and DTLZ3; however, the proposed algorithm remains superior to MOEA/D and ϵ-MOEA on most DTLZ7 test instances.

Similar observations can be made about the results on the WFG test problems, where OBEA shows the most competitive performance on most test instances in Table 6, while MOMBI2 and ϵ-MOEA also achieve the best performance on some instances. Above all, it can be concluded that OBEA balances diversity and convergence better than these four algorithms on most instances.

4.4.2. Comparison with the State-of-the-Art Algorithms

In this section, the proposed algorithm is compared with several state-of-the-art algorithms on the DTLZ and WFG test problems. The experimental results of each compared algorithm on the DTLZ and WFG test instances are shown in Tables 7 and 8 in terms of the IGD metric. The experimental results of HV on these instances are demonstrated in Tables 9 and 10. Moreover, thirty independent runs are executed for these algorithms on the test problems with the Wilcoxon rank sum test at a significance level of 0.05.

It can be observed that OBEA obtains most of the best values among the compared algorithms on the five original DTLZ test problems. For DTLZ1, OBEA and MOEA/DVA show better performance than the other algorithms according to the Wilcoxon rank sum test. For the DTLZ2 and DTLZ7 test problems, although Two_Arch2 and MOEA/DD, together with MOEA/DVA, cover parts of the best values, OBEA still tends to outperform most algorithms on these instances. As for DTLZ3, although NSGA-III and RVEA share the same idea as OBEA in using reference lines, OBEA is still superior to NSGA-III and RVEA, which may be due to the effectiveness of the selection strategy in our algorithm. As for DTLZ7, MOEA/DD, Two_Arch2, and MOEA/DVA show good performance on parts of the instances; however, the proposed OBEA shows competitive results on the other high-dimensional instances.

As for the IGD results on the WFG test problems shown in Table 8, OBEA still performs well on most test instances. To be more specific, OBEA covers all the best values on the WFG1 and WFG7 test problems, which shows the strong performance of the proposed algorithm. As for the WFG2 test problem, which has a convex, disconnected, and nonseparable PF, the performance of OBEA is slightly worse than that of NSGA-III and Two_Arch2; however, the proposed OBEA still tends to outperform the other algorithms on parts of the WFG2 instances. Although the best values on the 3-, 8-, and 10-objective instances of WFG3 are obtained by MOEA/DVA and Two_Arch2, OBEA covers all the other instances on the WFG3 test problem. For the WFG4 test problem, OBEA and MOEA/DVA, together with Two_Arch2, obtain the best values. As for the WFG5 and WFG6 test problems, although RVEA covers the best value on the 10-objective instance, OBEA still tends to outperform most of the algorithms on the remaining instances. Although the best values on some WFG8 instances are covered by MOEA/DD and Two_Arch2, OBEA still obtains the best performance on the 5- and 8-objective test instances. As for WFG9, NSGA-III shows better performance than on the DTLZ test problems; however, OBEA still outperforms most of the algorithms, especially on the 3-, 5-, and 15-objective instances of WFG9.

The statistical results of the HV values obtained by the six algorithms are summarized in Tables 9 and 10. The HV values presented in the tables are all normalized to [0, 1]; clearly, a higher HV value indicates better performance. OBEA obtains the majority of the best median values over the DTLZ instances, which indicates the clear advantages of the proposed algorithm. For the DTLZ1 test problem, although MOEA/DVA obtains the best values on the 8- and 10-objective instances, OBEA shows good performance on the 3-, 8-, and 15-objective instances. Since the DTLZ2 test problems involve concave PFs, Two_Arch2 and MOEA/DVA, together with RVEA, obtain the best performance on the 3-, 8-, and 15-objective instances, respectively, while OBEA covers the best values on the 5- and 10-objective instances and remains clearly superior to the other algorithms. Furthermore, although DTLZ3 is a concave and multimodal test problem, OBEA obtains most of the best HV values on this test problem. As for DTLZ4, the performance of OBEA is slightly worse than that of Two_Arch2 and MOEA/DVA; however, OBEA still outperforms most of the algorithms, especially on 15-objective DTLZ4. In addition, because disconnected and multimodal PFs are employed in the DTLZ7 test problem, the performance of OBEA is not as good on the 5- and 8-objective instances; however, OBEA still outperforms the other algorithms, especially on the remaining instances.

Statistical results of the HV values on the WFG test sets are shown in Table 10. OBEA shows the most competitive performance on WFG1, WFG6, and WFG8. By contrast, the performance of RVEA on the WFG test functions is not as good as that on the DTLZ test functions, whereas SPEAR shows good performance on the WFG test problems, especially on WFG5. Moreover, although the performance of OBEA is slightly worse than that of Two_Arch2 on some WFG2 instances, it still tends to be superior on most test instances, such as the 15-objective WFG5 instance shown in Figure 4. As for the WFG2 test problem, NSGA-III performs much better than on the other test instances. Finally, for the WFG7 and WFG9 test problems, although Two_Arch2 and MOEA/DVA cover some of the best values, OBEA still shows the best performance among most algorithms on these test instances.

In a word, OBEA obtains the best performance on these test problems from multiobjective to many-objective. This may be attributed to the fact that the exploration ability and convergence of OBEA are strengthened by the employed OBL and adaptive clustering mechanisms, which leads to highly efficient results. Note that the performance of OBEA may degrade slightly when handling test problems with irregular PFs. This is because OBEA uses a set of uniformly distributed reference vectors to handle the problem, and the use of uniformly distributed reference vectors is based on the assumption that the PF has a regular geometrical structure; this assumption can influence the performance of OBEA on test problems with irregular PFs. To sum up, the proposed OBEA obtains the best performance on most test problems whose PFs have a regular geometrical structure. On problems with quite irregular PFs, the performance of OBEA is not as good as on the previous problems, but the proposed OBEA still tends to provide competitive results.

To quantify how well each algorithm performs overall with respect to HV, the performance score [37] is introduced to rank all the algorithms. Given a specific problem instance, suppose there are k algorithms Alg_1, ..., Alg_k involved in the comparison. The indicator δ_{i,j} is set to 1 if Alg_j is significantly better than Alg_i in terms of HV, and 0 otherwise. Thereafter, for each algorithm Alg_i, the performance score is determined as follows:

P(Alg_i) = Σ_{j = 1, j ≠ i}^{k} δ_{i,j}

This value represents the number of algorithms that perform significantly better than the corresponding algorithm on the tested instance, and zero means that no other algorithm is significantly better in terms of the HV indicator. Clearly, the smaller the score, the better the algorithm. Figure 5 shows the average performance score over all 70 test instances among the ten algorithms in terms of HV, and the rank of each algorithm according to the score is also given.
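As a small illustration of this counting scheme, the score can be computed directly from a matrix of pairwise significance outcomes; the matrix layout below is an assumption for illustration, not the paper's data structure:

```python
def performance_scores(better):
    """Performance score P(Alg_i) for each of k algorithms on one instance.

    better[j][i] is True when algorithm j is significantly better than
    algorithm i in terms of HV. P(Alg_i) counts how many algorithms
    significantly outperform Alg_i, so a smaller score is better.
    """
    k = len(better)
    return [sum(1 for j in range(k) if j != i and better[j][i])
            for i in range(k)]
```

Averaging these per-instance scores over all instances then yields the overall ranking shown in Figure 5.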

As the figure shows, OBEA ranks first among the ten algorithms, which indicates the better performance of the proposed algorithm on these test problems. Note that although MOEA/D is ranked in the last place, it still shows good performance on some test problems. Moreover, in order to better visualize the performance, the average performance scores summarized for different numbers of objectives and for different test problems in terms of HV are presented in Figure 6.

Figure 6(a) shows the average performance over all test problems for different numbers of objectives. The proposed OBEA works well on nearly all the considered numbers of objectives except on 3-objective problems; even there, OBEA still outperforms most of the algorithms. Furthermore, the performance score for the individual test problems, aggregated over all numbers of objectives, is shown in Figure 6(b). For DTLZ1-DTLZ3, OBEA shows good performance and outperforms nearly all other algorithms. As for DTLZ4 and DTLZ7, the performance of OBEA is not as good as that on DTLZ1-DTLZ3 but is still generally superior among the compared algorithms. Finally, for the WFG test problems, OBEA covers nearly all the best values. In a word, OBEA performs well among the ten algorithms in terms of HV.

4.4.3. Analysis of Adaptive Clustering Mechanism

In the proposed OBEA, adaptive clustering is adopted to assist the algorithm in selecting individuals from each subpopulation. Various approaches have been proposed to cluster the individuals; for example, RVEA and NSGA-III use the angle to partition the population, and θ-DEA uses the perpendicular distance to cluster the individuals. Most of these approaches rely on the acute angle or the distance alone to cluster the individuals, and they may perform well in some situations. However, using the angle or the distance alone can increase the crowding of individuals within a cluster and discard some promising solutions, thus impairing the algorithm's ability to balance diversity and convergence.

In view of this, in our proposed OBEA, the acute angle and the distance are considered jointly through an adaptive mechanism that assists the algorithm in assigning individuals efficiently. For example, in Figure 7(a), given an individual x and its normalization x', let v1 and v2 be two different reference vectors, let θ1 and θ2 denote the acute angles between x' and v1 and v2, respectively, and let d1 and d2 denote the distances between the origin and the projections of x' on v1 and v2, respectively. Under the distance-based approach, the individual x is assigned to the reference vector whose projection distance is smaller, whereas under angle-based clustering, x is associated with the reference vector forming the smaller acute angle; in Figure 7(a), these two criteria select different reference vectors. In our adaptive clustering strategy, the individuals are first associated with reference vectors according to the distance between the origin and the projection point, which increases the diversity of the individuals in each cluster. Thereafter, as the number of iterations increases, the individuals are assigned according to the angle through our adaptive function.
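The two clustering criteria discussed above can be sketched as follows; the vector and function names are illustrative, not the paper's notation:

```python
import math

def angle_and_projection(x, v):
    """Acute angle between solution x and reference vector v, and the
    distance from the origin to the projection of x onto v."""
    dot = sum(a * b for a, b in zip(x, v))
    nx = math.sqrt(sum(a * a for a in x))
    nv = math.sqrt(sum(b * b for b in v))
    cos = max(-1.0, min(1.0, dot / (nx * nv)))
    theta = math.acos(cos)   # acute for nonnegative objective vectors
    proj = dot / nv          # ||x|| * cos(theta)
    return theta, proj

def assign(x, vectors, use_angle):
    """Associate x with one reference vector: by smallest acute angle in
    the late phase, by smallest projection distance in the early phase."""
    key = 0 if use_angle else 1
    return min(range(len(vectors)),
               key=lambda i: angle_and_projection(x, vectors[i])[key])
```

For x = (2, 1) and reference vectors (1, 0) and (0, 1), the angle criterion picks the first vector while the projection-distance criterion picks the second, reproducing the kind of conflict the figure illustrates.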

As the function s is the key point of the adaptive scheme, three different functions, linear, exponential, and sigmoid, are considered here for further analysis.

Figure 7(b) shows the values of s for the three functions, where t is the current generation number, t_max is the maximal generation number, and ⌈·⌉ denotes the ceiling function. Moreover, μ is the phase-control parameter of the sigmoid function. As Figure 8 shows, the IGD results on different test problems under various μ values indicate that μ = 0.3 or 0.4 is the best setting for the algorithm. Since the function attains the standard phase in [0, 1] with μ equal to 0.3 or 0.4, as shown in Figure 8, this setting is adopted in the following experiments. In order to validate the effectiveness of the proposed scheme, we compare OBEA (here named OBEA-sig) with the following two variants.
(1) Variant I: the adaptive scheme in OBEA is implemented based on adaptive clustering with the linear function (OBEA-lin).
(2) Variant II: the adaptive scheme is constructed through the exponential function (OBEA-exp).
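Since the exact formulas of the three schedules are not reproduced here, the following sketch uses common textbook forms (the sigmoid steepness constant is an assumption) to illustrate how s ramps from 0 (distance-based clustering) to 1 (angle-based clustering) over the generations:

```python
import math

def s_linear(t, t_max):
    """Linear schedule: grows uniformly from 0 to 1."""
    return t / t_max

def s_exp(t, t_max):
    """Exponential schedule: stays small early, rises sharply late."""
    return (math.exp(t / t_max) - 1.0) / (math.e - 1.0)

def s_sigmoid(t, t_max, mu=0.4):
    """Sigmoid schedule with phase-control parameter mu: the switch from
    distance-based to angle-based clustering happens around t = mu * t_max.
    The steepness constant 10 is an illustrative assumption."""
    return 1.0 / (1.0 + math.exp(-10.0 * (t / t_max - mu)))
```

With μ = 0.4, the sigmoid stays near 0 for the early generations, crosses 0.5 at t = 0.4 t_max, and saturates near 1 afterwards, which matches the intended early-diversity, late-convergence behavior.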

In this study, OBEA and the two variants are tested on the DTLZ and WFG test suites with three to fifteen objectives for 30 runs, and HV is applied to quantify the performance. The parameter settings of the two variants are the same as in OBEA. The mean and standard deviation of the HV values of the two variants as well as OBEA are listed in Table 11, where the best mean value for each test instance is italicized. Also, the Wilcoxon rank sum test is performed to investigate whether there is a significant performance difference between OBEA and its two variants. As Table 11 shows, OBEA-sig obtains most of the best mean values among the DTLZ and WFG test instances. Although OBEA-lin and OBEA-exp obtain parts of the best values, OBEA-sig still performs well on most test instances according to the Wilcoxon rank sum test. In a word, OBEA-sig is more stable and efficient than the others; therefore, the sigmoid function is adopted in the adaptive strategy in this paper.

4.4.4. Analysis of Opposition-Based Selection

The key point of our opposition-based selection is the proposed hybrid distance (HD). Here, we use a comparative experiment to further explore the effectiveness of the strategy. Note that the formulation of HD shares some similarity with the angle-penalized distance (APD) [53] and the penalty-based boundary intersection (PBI) [43], which are adopted in RVEA and in decomposition-based MOEAs, respectively. In this section, comparative studies on the original OBEA, the proposed algorithm with the APD approach, and the proposed algorithm with the PBI approach are carried out; in particular, we replace HD with APD or PBI, and the opposition point is then calculated according to Section 3.4. Moreover, the experiment is also conducted with the Wilcoxon rank sum test. For simplicity, the proposed algorithm with the APD and PBI approaches is denoted as OBEA-APD and OBEA-PBI, respectively.
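For reference, the standard PBI aggregation that OBEA-PBI substitutes for HD can be sketched as follows (assuming the objective vector has already been translated by the ideal point; θ = 5 is the commonly used default penalty):

```python
import math

def pbi(f, w, theta=5.0):
    """Penalty-based boundary intersection value of objective vector f
    with respect to weight vector w, as used in decomposition-based
    MOEAs: d1 measures convergence along w, d2 penalizes deviation
    from the direction of w."""
    norm_w = math.sqrt(sum(c * c for c in w))
    # d1: length of the projection of f onto the direction of w.
    d1 = abs(sum(a * b for a, b in zip(f, w))) / norm_w
    # d2: perpendicular distance from f to the line spanned by w.
    d2 = math.sqrt(sum((a - d1 * b / norm_w) ** 2 for a, b in zip(f, w)))
    return d1 + theta * d2
```

A solution lying exactly on its weight vector incurs no penalty (d2 = 0), while solutions far from the vector's direction are penalized heavily, which is the diversity-preserving trade-off HD is compared against.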

The performance of OBEA, OBEA-APD, and OBEA-PBI is verified on four benchmark test problems selected from different test suites, namely, DTLZ1, DTLZ2, WFG2, and WFG4. As shown by the statistical results summarized in Table 12, OBEA shows the best performance on most problems, especially on DTLZ2, where all the best values are obtained by the proposed algorithm with HD. As for OBEA-APD and OBEA-PBI, these two algorithms obtain the best values on some instances; however, the performance of OBEA on these instances is statistically equivalent to that of the two algorithms according to the Wilcoxon rank sum test. To sum up, the results indicate that the proposed HD has better capability and effectiveness than APD and PBI, which further verifies the effect of the proposed strategy in handling many-objective problems.

4.4.5. Effectiveness Analysis of OBL and Adaptive Clustering in OBEA

As two main strategies are designed in our OBEA, three variants are generated here to investigate the effectiveness of each strategy. In particular, they are OBEA1 (OBEA without the OBL strategy), OBEA2 (OBEA without the adaptive clustering strategy), and OBEA3 (OBEA without both the OBL strategy and the adaptive clustering strategy). The performance of these algorithms is shown in Table 13, where the better values are italicized.
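The OBL strategy removed in OBEA1 and OBEA3 builds on the standard opposition point; a minimal sketch of the textbook definition follows (the paper's exact OBL variant may differ):

```python
def opposite_point(x, lower, upper):
    """Opposition-based learning: the opposite of candidate x within the
    decision-space box [lower, upper], component-wise x~_i = a_i + b_i - x_i.
    Standard OBL definition, shown only as an illustrative sketch."""
    return [lo + hi - xi for xi, lo, hi in zip(x, lower, upper)]
```

Evaluating both a candidate and its opposite, then keeping the better of the two, is the usual way OBL widens exploration at low cost.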

The results show that both strategies improve the performance of the algorithm. In detail, although OBEA1 and OBEA2 both show good capability of searching for the best function values in most cases, the performance of OBEA2 is slightly worse than that of OBEA1 according to the Wilcoxon rank sum test, which indicates that the adaptive clustering strategy contributes more to the algorithm than the OBL strategy. This can be attributed to the well-designed structure of the adaptive clustering mechanism and the effectiveness of the reference lines. Moreover, although the performance of OBEA3 is slightly worse than that of the others, its HV values are still reasonably good, which indicates the solid contribution of nondominated sorting in OBEA. To sum up, owing to the good performance of these strategies, our proposed OBEA shows strong potential in solving different kinds of problems.

4.4.6. Investigation of the Evolutionary Behavior

To observe the evolutionary behaviors of the algorithms, further studies with the ten algorithms are carried out to exhibit the evolutionary trajectories. Figure 9 plots the IGD trajectories versus the number of function evaluations for the ten algorithms on the 15-objective DTLZ1, DTLZ4, DTLZ7, WFG4, WFG7, and WFG9 test instances. According to the figures, OBEA shows an obvious advantage over most of its competitors. Although the performance of OBEA is slightly worse on DTLZ4, the figures clearly show that the proposed OBEA still tends to be superior to the other algorithms on the remaining instances. By simultaneously considering the IGD, HV, and evolutionary behaviors, we can conclude that the overall performance of OBEA is better than, or at least equal to, that of the nine competitors.

5. Conclusion

Owing to the loss of selection pressure and the difficulty of balancing diversity and convergence, general MOEAs often encounter challenges in solving MaOPs. Although a variety of algorithms, especially reference-based algorithms, have been designed to address this issue, most algorithms in these categories use the angle or the distance alone to measure the quality of population members against the reference set, which may lose some good solutions owing to this simplistic selection mechanism. Therefore, we have presented a new many-objective evolutionary algorithm, called OBEA. Two strategies, OBL and adaptive clustering, are adopted in the environmental selection with the purpose of improving the algorithm's ability to balance convergence and diversity.

To establish its competitiveness, we have carried out an extensive experimental comparison of OBEA with nine other algorithms. A number of well-known benchmark problems from the DTLZ and WFG test suites are chosen to explore the abilities of the algorithms. The statistical results reveal that the proposed OBEA performs well on almost all the problem instances and clearly outperforms most state-of-the-art many-objective optimizers. However, the results also indicate that none of the algorithms outperforms all the others on all instances, which reflects the importance of choosing an algorithm carefully when solving MaOPs.

In the future, we will extend OBEA to solve constrained many-objective problems by incorporating constraint handling techniques in order to further verify its effectiveness.

Data Availability

All data included in this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant 61873240.