Abstract

Multiobjective evolutionary algorithms (MOEAs) have prospered in solving many-objective optimization problems (MaOPs) over the past three decades. Unfortunately, no single MOEA equipped with given parameter settings, a mating-variation operator, and an environmental selection mechanism is suitable for obtaining a set of solutions with excellent convergence and diversity on all types of MaOPs. The reality is that different MOEAs show great differences in handling certain types of MaOPs. Aiming at these characteristics, this paper proposes a flexible ensemble framework, namely, ASES, which is highly scalable for embedding any number of MOEAs to promote their advantages. To alleviate the undesirable phenomenon that some promising solutions are discarded during the evolution process, a big archive, whose number of contained solutions is far larger than the population size, is integrated into this ensemble framework to record a large number of nondominated solutions, and an efficient maintenance strategy is developed to update the archive. Furthermore, the knowledge gained from updating the archive is exploited to guide the evolutionary process of the different MOEAs, allocating the limited computational resources to the more efficient algorithms. Extensive numerical experiments demonstrate the superior performance of the proposed ASES. Among the 52 test instances, ASES performs better than all six baseline algorithms on at least half of the instances with respect to both the hypervolume and inverted generational distance metrics.

1. Introduction

In real-world applications, various kinds of optimization problems [1–3] have multiple conflicting objectives, such as the design of hybrid renewable energy systems [4], resource management for intelligent traffic [5, 6], optimization of wireless networks [7, 8], resource allocation for radar systems [9, 10], and workflow scheduling in distributed environments [11]. These problems are often called multiobjective optimization problems (MOPs) and are mathematically expressed as

$$\min_{\mathbf{x} \in \Omega} \ \mathbf{F}(\mathbf{x}) = \left(f_1(\mathbf{x}), f_2(\mathbf{x}), \ldots, f_m(\mathbf{x})\right)^{T}, \tag{1}$$

where $\Omega \subseteq \mathbb{R}^{n}$ denotes the decision space.

As shown in (1), a decision vector and its corresponding objective vector are denoted as $\mathbf{x}$ and $\mathbf{F}(\mathbf{x})$, respectively. The decision vector can be detailed as $\mathbf{x} = (x_1, x_2, \ldots, x_n)$, where $x_i$ corresponds to the $i$-th decision variable and $n$ represents the count of decision variables. For the objective function, $f_j$ is the $j$-th optimization objective, which maps each decision vector from the $n$-dimensional decision space to a real number, i.e., $f_j: \Omega \rightarrow \mathbb{R}$. Generally speaking, MOPs with at least four optimization objectives (i.e., $m \geq 4$) are referred to as many-objective optimization problems (MaOPs) [12–14].

On account of the inherent conflicts among different optimization objectives, there is no single optimal solution that minimizes all the objective functions simultaneously; instead, the optimum is a set of compromise solutions [15–17]. For two feasible solutions $\mathbf{x}_1, \mathbf{x}_2 \in \Omega$, solution $\mathbf{x}_2$ is said to be dominated by solution $\mathbf{x}_1$ (denoted as $\mathbf{x}_1 \prec \mathbf{x}_2$) iff $\mathbf{x}_1$ has a lower or equal value than $\mathbf{x}_2$ in every objective dimension and a strictly lower value in at least one objective. Any feasible solution that is not dominated by any other solution is called a Pareto-optimal solution. For an MOP, the number of Pareto-optimal solutions can be very large, even infinite. The set composed of all Pareto-optimal solutions is referred to as the Pareto set ($PS$) in the decision space, i.e., $PS = \{\mathbf{x} \in \Omega \mid \nexists\, \mathbf{x}' \in \Omega: \mathbf{x}' \prec \mathbf{x}\}$. The projection of the $PS$ onto the objective space is referred to as the Pareto-optimal front ($PF$), i.e., $PF = \{\mathbf{F}(\mathbf{x}) \mid \mathbf{x} \in PS\}$.
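For concreteness, the Pareto dominance relation used throughout this paper can be checked as in the minimal Python sketch below (minimization is assumed; the function name `dominates` is illustrative, not part of the original text):

```python
import numpy as np

def dominates(f_a, f_b):
    """Return True if objective vector f_a Pareto-dominates f_b (minimization).

    f_a dominates f_b iff f_a is no worse in every objective and strictly
    better in at least one objective.
    """
    f_a, f_b = np.asarray(f_a), np.asarray(f_b)
    return bool(np.all(f_a <= f_b) and np.any(f_a < f_b))

# Example: (1, 2) dominates (2, 2), while (1, 3) and (2, 2) are mutually nondominated.
assert dominates([1, 2], [2, 2])
assert not dominates([1, 3], [2, 2]) and not dominates([2, 2], [1, 3])
```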

As population-based evolutionary algorithms can search for and maintain a set of approximate solutions during each run, their advantages in dealing with MOPs have been demonstrated, and they have been used in various applications. Until now, a plethora of MOEAs have been developed and improved. On the basis of their environmental selection mechanisms, these existing works generally fall into the following three categories: decomposition-based, indicator-based, and dominance relation-based MOEAs [18–20]. Decomposition-based MOEAs leverage a set of reference vectors to transform the original MOP into a series of subproblems, which are solved concurrently in a collaborative way [21–23]. Indicator-based MOEAs, instead of comparing solutions using objective vectors directly, attempt to measure solutions by a single metric, such as hypervolume or other quality indicators [24, 25]. MOEAs based on dominance relations, represented by NSGA-II [26], have been widely used, and a large number of relevant works exist [18]. The environmental selection mechanisms of dominance relation-based MOEAs consist of two main steps: (1) cluster the solutions into groups using their dominance relations, such as Pareto dominance and fuzzy dominance [26, 27], and (2) employ a secondary metric, e.g., crowding distance or shift-based density estimation [28], to sort the solutions in the last accepted group.

Unfortunately, no one single MOEA equipped with given parameter settings, mating and variation operators, and environmental selection mechanism is suitable for obtaining a set of solutions with excellent convergence and diversity for various types of MaOPs. The reality is that different MOEAs show great differences in handling certain types of MaOPs. The analyses in [29, 30] demonstrate that Pareto dominance-based selection approaches have competitive performance in maintaining good diversity, while decomposition-based and indicator-based selection approaches are able to provide strong convergence pressure.

To achieve better overall performance on a diverse range of MaOPs, some works have been devoted to combining the strengths of different MOEAs via ensembles of multiple mating operators, variation operators, or environmental selection strategies. For instance, the AdaBoost framework is employed to ensemble multiple variation operators so as to promote search ability in both exploitation and exploration [30]. The work [31] makes use of two external archives for an ensemble of dominance-based and indicator-based environmental selection approaches. The ensemble of mating selection strategies has also been studied [32]. Most existing works focus on ensembles of multiple approaches for a single module, while few works consider integrating multiple complete MOEAs. However, each module of an MOEA has a significant impact on its overall performance, and even different approaches within one module are competitive in solving specific types of MaOPs. Therefore, one of the emphases of this paper is to integrate and coordinate multiple complete MOEAs with quite different characteristics for solving MaOPs with various characteristics, such as nonlinearity, multimodality, and irregular PF shapes.

Solution selection for evolutionary multiobjective optimization is a tricky task. For two nondominated solutions $\mathbf{x}_1$ and $\mathbf{x}_2$, the following contradiction often appears across different evolutionary stages: the fitness of $\mathbf{x}_1$ is better than that of $\mathbf{x}_2$ in some stages, while the fitness of $\mathbf{x}_2$ is better in other stages. The work [33] has provided a visual example to illustrate this contradiction. Such contradictions often cause some promising solutions to be discarded during the evolution process. To avoid good solutions being discarded, Ishibuchi et al. suggested using unlimited external archives to store all the generated solutions [33, 34]. Nevertheless, maintaining an unlimited external archive in each generation inevitably results in unacceptable computational overhead. Therefore, another focus of this paper is how to effectively alleviate the negative phenomenon that some promising solutions are discarded during the evolution process.

Aiming at the above two challenges, this paper mainly makes the following technical contributions:
(i) A flexible ensemble framework is designed to integrate any number of MOEAs and to promote their advantages for better solving a wide range of MOPs. In this framework, a big archive is embedded to store a large number of nondominated solutions, alleviating the undesirable phenomenon that some promising solutions are discarded during the evolution process.
(ii) We develop an efficient environmental selection strategy to maintain the big archive. The time complexity of this selection strategy is linear in the number of optimization objectives and the size of the big archive.
(iii) An adaptive strategy is proposed to reward competitive mating and variation operators online so that they generate more offspring solutions, thus combining their advantages for solving various MOPs.
(iv) Three representative algorithms (i.e., RVEA [15], VaEA [35], and SPEA2+SDE [28]) are selected to realize a prototype of the proposed framework. The effectiveness of the proposal is verified by comparing it with six representative algorithms on 52 test instances, and the comparison results demonstrate its superiority.

The remainder of this paper is structured as follows. Section 2 reviews related studies on MOEAs, especially ensemble approaches. Then, the proposed approach is detailed in Section 3, followed by experimental verification in Section 4. Finally, Section 5 concludes the paper and outlines several research directions.

2. Related Work

An MOEA mainly contains the following three operators: a mating operator, a variation operator, and an environmental selection operator. During each generation, the first two operators are combined to generate a new offspring population, while the environmental selection operator is employed to select a population from the combination of the offspring and parent populations. Over the past three decades, the multiobjective evolutionary optimization community has flourished, and a plethora of relevant approaches have been developed to solve MOPs and MaOPs.

2.1. Ensemble of Variation Operators

So far, many works have focused on the variation operator, and powerful search operators from various evolutionary algorithms have been widely employed. To produce an offspring population, the simulated binary crossover [36] and the polynomial mutation [37] have been widely used in multiobjective evolutionary optimization. Additionally, other evolutionary algorithms, such as differential evolution [38, 39], artificial bee colony [40], covariance matrix adaptation [41, 42], and particle swarm optimization [43], have been explored to reinforce the search ability of multiobjective evolutionary optimization.

As is well known, different evolutionary algorithms possess quite different capabilities in terms of exploitation and exploration, and they have their own advantages in solving different types of MaOPs. To date, many efforts have been devoted to integrating multiple evolutionary algorithms. For instance, Venske et al. [44] suggested two strategy-adaptation mechanisms to integrate three versions of differential evolution, i.e., DE/rand/1/bin, DE/rand/2/bin, and DE/nonlinear, into decomposition-based evolutionary multiobjective optimization. Wang et al. [45] suggested a competitive and cooperative ensemble approach in which simulated binary crossover and differential evolution are simultaneously used to generate new solutions in each generation, and the number of offspring solutions assigned to each operator is adapted according to its average fitness improvement per generation. Santiago et al. [46] employed a fuzzy logic controller to dynamically choose variation operators during different evolutionary phases on the basis of the operators' contributions in past generations. Li et al. [47] made use of a sliding window to count the fitness contributions of multiple variation operators and suggested a bandit-based adaptive operator selection mechanism to pick variation operators for decomposition-based MOEAs within one generation. Wang et al. [30] followed the AdaBoost ensemble framework from the machine learning area and designed a multiple-subpopulation-assisted operator ensemble approach to adjust the evolution times of different operators.

2.2. Ensemble of Environmental Selection Operators

From the perspective of environmental selection, the existing MOEAs can be roughly divided into three categories: decomposition-based [22, 48], dominance-based [26, 49], and indicator-based [25]. Many studies have revealed that no single environmental selection approach performs well across a wide range of scenarios. For instance, decomposition-based and indicator-based MOEAs provide stronger convergence pressure, while dominance-based MOEAs are capable of maintaining diversity. However, their respective shortcomings are also obvious. For decomposition-based MOEAs, performance is heavily dependent on the PF shapes of MOPs [29]. The high complexity of indicator-based MOEAs often limits their scalability in the objective dimension [25]. Besides, dominance resistance [50] seriously hinders Pareto dominance-based MOEAs from solving MOPs with many objectives.

Driven by these facts, many efforts have been dedicated to integrating multiple environmental selection operators, aiming to combine their advantages. For instance, Li et al. [51] mixed decomposition-based and dominance-based environmental selection operators to make a good trade-off between convergence and diversity. The algorithms MOEA/D-M2M [52] and OPE-MOEA [53] combined decomposition-based and Pareto dominance-based strategies: a set of reference vectors is employed to partition the objective space, and the Pareto dominance-based strategy is used to maintain the solutions in each subspace. Zhang et al. [32] regarded multiple environmental selection approaches with different characteristics as voters; the solutions elected by more voters are then selected to construct the mating pool for generating the new offspring population. Liang et al. [54] integrated indicator-based and dominance-based strategies to balance the convergence, diversity, and coverage for solving MaOPs.

3. Algorithm Design

In this section, we first design the flexible ensemble framework, followed by a prototype using three representative MOEAs. The low-time-complexity maintenance strategy for the big archive is also detailed.

3.1. Flexible Ensemble Framework

Most existing evolutionary multiobjective ensemble frameworks are dedicated to integrating multiple operators for a single module, such as mating selection, offspring generation, or environmental selection. Unlike them, this paper strives to design a flexible ensemble framework that is scalable for embedding all modules of multiple MOEAs, because each module of an MOEA has a significant impact on its overall performance. Also, a big archive is added to alleviate the undesirable phenomenon that some promising solutions are discarded during the evolution process. The flow diagram of the proposed ensemble framework is given in Figure 1.

In the proposed ensemble framework, the mating and variation operators of an existing MOEA are taken as a whole, denoted as $MV$. When $K$ existing MOEAs are implemented, their mating and variation operators form a set, expressed as $\{MV_1, MV_2, \ldots, MV_K\}$. Also, $K$ populations, expressed as $\{P_1, P_2, \ldots, P_K\}$, are integrated into the framework in parallel, and each population $P_i$ is maintained by the environmental selection (ES) operator of the corresponding MOEA.

As illustrated in Figure 1, the $K$ populations are first initialized with a randomly generated population, followed by initializing the selection probabilities of the $K$ mating and variation operators. Then, the following six steps are repeated until the stop condition is reached:
Step 1: according to the selection probabilities, the roulette-wheel method [55] is employed to select one mating and variation operator.
Step 2: a new offspring population $O$ is generated by applying the selected mating and variation operator to the corresponding population. In other words, if the mating and variation operator $MV_i$ of the $i$-th MOEA is selected, the corresponding population refers to the $i$-th population $P_i$.
Step 3: the $K$ environmental selection strategies run in parallel to update the combined populations, i.e., $P_1 \cup O, P_2 \cup O, \ldots, P_K \cup O$. In this ensemble framework, the newly generated population $O$ is combined with the $K$ populations, respectively.
Step 4: the newly retained solutions selected from $O$ by each environmental selection strategy are collected, and the resulting solution sets are denoted as $S_1, S_2, \ldots, S_K$, respectively.
Step 5: add the solution sets $S_1, S_2, \ldots, S_K$ to the big archive, and update the big archive using a low-complexity maintenance mechanism, which will be detailed in Section 3.2.2. At the same time, record the number of solutions retained in the big archive for each environmental selection operator.
Step 6: for each MOEA, the selection probability of its mating and variation operator is updated based on the number of its selected solutions retained in the big archive.

Once the stopping condition is reached, an environmental selection operator will be applied to select an output population from the big archive.

Suppose the parameters $n_1, n_2, \ldots, n_K$, respectively, represent the numbers of solutions retained in the big archive from the solution sets $S_1, S_2, \ldots, S_K$; then the selection probability of the mating and variation operator from the $i$-th MOEA is updated as $p_i = n_i / \sum_{j=1}^{K} n_j$.
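The following minimal Python sketch illustrates Steps 1 and 6 under the above update rule. The function names and the uniform fallback when no solution is retained are assumptions for illustration only, not part of the original description.

```python
import random

def select_operator(probs):
    """Step 1: roulette-wheel selection of one mating-and-variation operator."""
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1  # guard against floating-point round-off

def update_probabilities(retained_counts):
    """Step 6: reward operators whose offspring survive in the big archive.

    retained_counts[i] is the number of solutions contributed by the i-th
    MOEA's operator that were retained in the big archive. Falling back to a
    uniform distribution when nothing was retained is an assumption.
    """
    total = sum(retained_counts)
    if total == 0:
        return [1.0 / len(retained_counts)] * len(retained_counts)
    return [c / total for c in retained_counts]

# Example with K = 3 embedded MOEAs (e.g., RVEA, VaEA, SPEA2+SDE):
probs = [1 / 3] * 3
probs = update_probabilities([12, 30, 18])   # -> [0.2, 0.5, 0.3]
chosen = select_operator(probs)              # index of the operator applied next
```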

3.2. Prototype of the Proposed Framework

In this section, we implement three representative MOEAs and design a maintenance strategy to realize a prototype for the proposed ensemble framework.

3.2.1. Selected Algorithms

Three representative MOEAs, i.e., RVEA [15], VaEA [35], and SPEA2 + SDE [28], are selected and implemented. The brief descriptions of the above three algorithms and the reasons for selecting them are given as follows.

(1) RVEA. Its environmental selection operator employs a set of reference vectors to partition the objective space of an MOP into a series of nonoverlapping subspaces. For the solutions associated with the same subspace, the one with the smallest angle-penalized distance is selected.

The mating operator of RVEA randomly pairs solutions in the current population to construct the mating pool, and then the simulated binary crossover [36] and polynomial mutation [37] are employed to generate the offspring population.

Similar to other decomposition-based MOEAs [22], RVEA performs competitively on MOPs whose PFs are similar to the unit simplex, while its diversity deteriorates severely when facing MOPs with irregular PFs. Thus, we select RVEA as a representative of the branch of decomposition-based MOEAs.

(2) VaEA. The environmental selection operator of VaEA is a new version of the Pareto dominance-based approach. It first makes use of the nondominated sorting method to group the solutions into different levels. Then, for the solutions in the last accepted level, a maximum-minimum-angle strategy is designed to improve diversity, while a worse-elimination strategy is employed to strengthen convergence.

The mating and variation operators of VaEA are the same as those of RVEA.

VaEA belongs to the branch of Pareto dominance-based MOEAs and has a lower time complexity, which makes it well suited to an ensemble framework.

(3) SPEA2 + SDE. A shift-based density estimation (SDE) approach is designed to promote diversity maintenance in the environmental selection process. To measure the density of a solution $\mathbf{p}$, the density estimation approach shifts the objective values of the other solutions: when the $j$-th objective value of a solution $\mathbf{q}$ is lower than that of solution $\mathbf{p}$, i.e., $f_j(\mathbf{q}) < f_j(\mathbf{p})$, the value $f_j(\mathbf{q})$ is shifted to $f_j(\mathbf{p})$.
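A minimal Python sketch of this shifting idea is given below. SPEA2+SDE plugs the shifted distances into SPEA2's own density estimator; the k-th-nearest-neighbour density used here is a simplified stand-in, and all function names are illustrative.

```python
import numpy as np

def sde_distances(F, idx):
    """Shift-based distances from solution idx to all other solutions.

    F is an (N, M) array of objective values (minimization). When another
    solution q is better than p in objective j (f_j(q) < f_j(p)), q's value in
    that objective is shifted up to f_j(p) before the distance is measured.
    """
    p = F[idx]
    shifted = np.maximum(F, p)          # element-wise shift described above
    d = np.linalg.norm(shifted - p, axis=1)
    return np.delete(d, idx)            # exclude the solution itself

def sde_density(F, idx, k=1):
    """Simplified density estimate: the k-th smallest shifted distance."""
    return np.sort(sde_distances(F, idx))[k - 1]
```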

On the basis of the solutions’ fitness, the mating operator of SPEA2 + SDE employs tournament selection to construct the mating pool.

As demonstrated in the work [56], SPEA2 + SDE performs well on various MOPs compared with 21 MOEAs. Thus, we also choose it as a representative MOEA and implement it in the proposed ensemble framework.

Note that the simulated binary crossover and polynomial mutation are employed in original RVEA, VaEA, and SPEA2 + SDE as variation operators to generate offspring populations. To make the prototype have multiple variation operators, the variation operator of algorithm VaEA is replaced by the differential evolution [57].

As VaEA is parameter-less and has a lower time complexity than the other two algorithms, its environmental selection operator is applied to select the output population from the big archive when the stopping condition is reached.

3.2.2. Maintenance Mechanism for Big Archive

Using a big archive to store a large number of solutions (e.g., ten times the population size) is a good way to alleviate the negative phenomenon of discarding promising solutions during the evolution process. However, the time complexity of the environmental selection operators in most existing algorithms is quadratic or even cubic in the number of candidate solutions. For instance, the time complexity of the algorithms VaEA [35], NSGA-III [58], Two_Arch2 [31], and RVEA [15] is quadratic in the number of solutions, while that of the algorithms GrEA [59] and SPEA2 + SDE [28] is cubic. Such high complexity limits the scalability of these environmental selection operators for maintaining a big archive. Thus, a low-complexity maintenance mechanism is needed to maintain the big archive.

In this section, we design an efficient maintenance mechanism for the big archive. The mechanism first deletes all solutions that cannot dominate the nadir point. Then, traversing each objective dimension, the interval from the minimum to the maximum objective value is divided into a series of subintervals, and the solution with the smallest fitness is selected from each subinterval. The pseudocode of the proposed maintenance mechanism is shown in Algorithm 1.

Input: A big archive $A$; nadir point $\mathbf{z}^{nad}$; ideal point $\mathbf{z}^{*}$;
Output: An updated big archive $A$;
(1) $R \leftarrow \emptyset$
(2) for each $\mathbf{x} \in A$ do
(3)  if the objective vector $\mathbf{F}(\mathbf{x})$ cannot dominate $\mathbf{z}^{nad}$ then
(4)   $R \leftarrow R \cup \{\mathbf{x}\}$
(5) $A \leftarrow A \setminus R$
(6) $S \leftarrow \emptyset$
(7) for $j = 1$ to $M$ do
(8)  $\Delta_j \leftarrow (z_j^{nad} - z_j^{*}) / L$, where $L$ is the number of subintervals
(9)  $Q_k \leftarrow \emptyset$ for $k = 1, \ldots, L$
(10)  for each $\mathbf{x} \in A$ do
(11)   $k \leftarrow \min\left(\max\left(\lceil (f_j(\mathbf{x}) - z_j^{*}) / \Delta_j \rceil, 1\right), L\right)$
(12)   $Q_k \leftarrow Q_k \cup \{\mathbf{x}\}$
(13)  end for
(14)  for $k = 1$ to $L$ do
(15)   if $Q_k$ is not empty then
(16)    Select one solution $\mathbf{x}^{*}$ with the smallest fitness from $Q_k$
(17)    $S \leftarrow S \cup \{\mathbf{x}^{*}\}$
(18)  end for
(19) $A \leftarrow S$

As described in Algorithm 1, the inputs of the proposed maintenance mechanism are the big archive and the nadir and ideal points, while its output is the updated big archive. The set $R$ is used to record the solutions whose objective vectors cannot dominate the nadir point and is initialized as empty (line 1). At first, all the solutions in the big archive are checked (line 2), and the solutions that cannot dominate the nadir point are recorded in $R$ (lines 3-4). After the solutions in $R$ are removed (line 5), the remaining solutions are selected from each objective dimension (line 7), and the set $S$ is employed to store the selected solutions. For each objective dimension, the interval from the minimum to the maximum objective value is divided into $L$ subintervals, and the solutions located in the $k$-th subinterval are recorded in the set $Q_k$ (lines 8–12). After that, the solution with the smallest fitness in each nonempty subinterval is selected (lines 14–17). For simplicity, the fitness of a solution is defined as the sum of all its objective values.

The time complexity of identifying the solutions that cannot dominate the nadir point is $O(M N_A)$ (lines 2–4), where $M$ and $N_A$ denote the number of objectives and the number of solutions in the big archive, respectively. For one objective dimension, the proposed maintenance mechanism needs to check all the solutions to assign them to the corresponding subintervals, and thus this process takes $O(N_A)$. The time complexity of selecting solutions from the subintervals is also $O(N_A)$. Then, the time complexity of lines 7 to 17 is $O(M N_A)$. In summary, the time complexity of the maintenance mechanism is $O(M N_A)$.

Based on the above analysis, the time complexity of the proposed maintenance mechanism is linear in both the number of objectives and the size of the big archive. This means that the maintenance mechanism scales well in both the objective dimension and the solution scale and is therefore suitable for optimization problems with many objectives and for maintaining a big archive.
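The following Python sketch mirrors Algorithm 1 under stated assumptions (the subintervals span the range between the ideal and nadir points, the number of subintervals is a parameter, and all names are illustrative); it is not the reference implementation.

```python
import numpy as np

def maintain_big_archive(F, z_nad, z_ideal, n_intervals):
    """Sketch of Algorithm 1. F is an (N, M) array of objective values
    (minimization); z_nad and z_ideal are the nadir and ideal points.
    Returns the indices of the solutions kept in the updated archive."""
    F = np.asarray(F, dtype=float)
    # Lines 2-5: discard solutions whose objective vectors cannot dominate the nadir point.
    can_dominate = np.all(F <= z_nad, axis=1) & np.any(F < z_nad, axis=1)
    candidates = np.flatnonzero(can_dominate)
    if candidates.size == 0:
        return candidates

    fitness = F[candidates].sum(axis=1)   # fitness = sum of all objective values
    keep = set()
    for j in range(F.shape[1]):           # lines 7-17: traverse each objective dimension
        width = (z_nad[j] - z_ideal[j]) / n_intervals
        if width <= 0:
            continue
        # Lines 8-12: assign each candidate to a subinterval of [z_ideal[j], z_nad[j]].
        bins = np.clip(((F[candidates, j] - z_ideal[j]) / width).astype(int),
                       0, n_intervals - 1)
        # Lines 14-17: keep the smallest-fitness member of every nonempty subinterval.
        for k in range(n_intervals):
            members = np.flatnonzero(bins == k)
            if members.size:
                keep.add(candidates[members[np.argmin(fitness[members])]])
    return np.array(sorted(keep))
```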

4. Experimental Studies

This section carries out a large number of comparison experiments to verify the effectiveness of the proposal.

4.1. Experiment Design
4.1.1. Comparison Algorithms

To investigate the improvement of the proposed ensemble framework on the performance of existing MaOEAs, the three algorithms in its prototype, i.e., RVEA [15], VaEA [35], and SPEA2 + SDE [28], are first selected as comparison ones. In addition, three competitive algorithms, i.e., NSGA-III [58], MOEA/DD [51], and MOEA/D-AM2M [60], are chosen as comparison MaOEAs.

The brief descriptions of RVEA, VaEA, and SPEA2 + SDE are provided in Section 3.2. Here, we just give the brief descriptions for NSGA-III, MOEA/DD, and MOEA/D-AM2M as follows.

NSGA-III first employs the Pareto dominance relationships among solutions to classify them into many groups, and then, the solutions in the last accepted group are selected by a reference vector-based approach.

MOEA/DD improves the algorithm MOEA/D with a Pareto dominance strategy to strengthen the convergence for each subproblem.

MOEA/D-AM2M is a representative decomposition-based MOEA in which an adaptive mechanism adjusts the reference vectors according to the distribution of the obtained solutions. Also, a mechanism is employed to assign more search effort to promising subproblems.

4.1.2. Benchmarks

The test functions MaF1–MaF13 [61] are new versions of the DTLZ and WFG test suites for assessing evolutionary many-objective optimization algorithms. These test functions contain a variety of complicated features found in real-world applications, such as degenerate, disconnected, biased, deceptive, and mixed PF shapes, as well as complex Pareto sets. Based on these features, the challenging test functions MaF1–MaF13 are chosen to compare the proposed ASES with the other six existing MaOEAs.

These test functions are scalable in the number of objectives, and a test function with a specific number of objectives is called a test instance. For example, MaF1 with 5 objectives is a test instance. To assess the scalability of ASES in the objective dimension, the number of objectives is set to 3, 6, 9, and 12, respectively. Thus, with 13 test functions and 4 settings for the number of objectives, there are 52 test instances on which the seven algorithms are compared.

To ensure fair comparisons, all MaOEAs use the same population size on each instance, and the population sizes are set according to the number of objectives of the test instances. For the test instances with 3, 6, 9, and 12 objectives, the population sizes of the six comparison algorithms are set to 136, 182, 210, and 156, respectively.

For the proposed ASES, the size of the output population is set to the population size of the comparison algorithms, and the size of the big archive is set to ten times the size of the output population.

4.1.3. Comparison Metrics

The inverted generational distance (IGD) [62] mainly measures how well the output population fits the PF. When calculating the IGD, a set $P^{*}$ of uniformly distributed sample points on the PF is required. In the experiments, the size of $P^{*}$ is set to about 10000 for each test instance. With the sample points $P^{*}$, the IGD value of an output population $P$ is calculated as follows:

$$\mathrm{IGD}(P, P^{*}) = \frac{1}{|P^{*}|} \sum_{\mathbf{v} \in P^{*}} \min_{\mathbf{x} \in P} \, d\left(\mathbf{v}, \mathbf{F}(\mathbf{x})\right), \tag{2}$$

where $d(\mathbf{v}, \mathbf{F}(\mathbf{x}))$ denotes the distance between the sample point $\mathbf{v}$ and the objective vector of solution $\mathbf{x}$. According to formula (2), a smaller IGD value of an output population indicates better performance of the corresponding MaOEA.
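A minimal Python sketch of this computation (assuming Euclidean distance and array inputs; the function name is illustrative):

```python
import numpy as np

def igd(ref_points, pop_objs):
    """Inverted generational distance of a population's objective vectors.

    ref_points : (R, M) uniformly distributed sample points on the true PF.
    pop_objs   : (N, M) objective vectors of the output population.
    For every reference point, the distance to its nearest population member
    is taken, and the mean of these distances is returned (smaller is better).
    """
    ref_points = np.asarray(ref_points, dtype=float)
    pop_objs = np.asarray(pop_objs, dtype=float)
    # Pairwise Euclidean distances, shape (R, N)
    d = np.linalg.norm(ref_points[:, None, :] - pop_objs[None, :, :], axis=2)
    return d.min(axis=1).mean()
```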

The hypervolume (HV) [63] refers to the volume of the objective-space region enclosed by a reference point and the nondominated solutions in a population. This metric simultaneously measures the diversity and convergence of the output population, and a larger HV value indicates better performance of the corresponding algorithm. For each test instance, the reference point is set to 1.1 times its nadir point in the objective space. With a reference point $\mathbf{r} = (r_1, \ldots, r_m)$, the HV value of a population $P$ can be calculated as follows:

$$\mathrm{HV}(P) = \lambda\left(\bigcup_{\mathbf{x} \in P} \left[f_1(\mathbf{x}), r_1\right] \times \cdots \times \left[f_m(\mathbf{x}), r_m\right]\right), \tag{3}$$

where $\lambda(\cdot)$ refers to the Lebesgue measure.
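Exact HV computation is expensive in many dimensions; the Monte Carlo sketch below only illustrates the definition above and is not the exact procedure used to produce the reported HV values (all names and the sample budget are assumptions).

```python
import numpy as np

def hv_monte_carlo(pop_objs, ref_point, n_samples=10_000, seed=0):
    """Monte Carlo estimate of the hypervolume dominated by pop_objs w.r.t. ref_point.

    A point sampled uniformly in the box [component-wise minimum, ref_point] is
    counted if it is dominated by at least one population member; the HV
    estimate is the counted fraction times the box volume.
    """
    F = np.asarray(pop_objs, dtype=float)
    ref = np.asarray(ref_point, dtype=float)
    low = F.min(axis=0)
    rng = np.random.default_rng(seed)
    samples = rng.uniform(low, ref, size=(n_samples, F.shape[1]))
    # A sample is dominated if some solution is <= it in every objective.
    dominated = (F[None, :, :] <= samples[:, None, :]).all(axis=2).any(axis=1)
    return dominated.mean() * np.prod(ref - low)
```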

The metrics IGD and HV have been widely employed to assess the effectiveness of MaOEAs, since they consider diversity and convergence simultaneously. The experiments therefore use both of them to compare the proposed ASES with the other six existing MaOEAs.

4.1.4. Stop Condition

Similar to the works [15, 28, 35, 51, 58, 60], the stop condition is set as the maximum number of function evaluations for all seven MaOEAs. For all the test instances, the maximum number of function evaluations is set to 100,000.

Regarding the variation operators simulated binary crossover (SBX), differential evolution (DE), and polynomial mutation, their parameters use the default values provided by the platform PlatEMO (https://github.com/BIMK/PlatEMO). To be specific, the distribution index for SBX is set to 20; the crossover constant and weighting factor for DE are set to 1.0 and 0.5, respectively; and the mutation probability and distribution index of the polynomial mutation are likewise set to their PlatEMO defaults.
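For readers unfamiliar with these operators, the sketch below shows simplified textbook forms of SBX and polynomial mutation (not the exact boundary-aware PlatEMO implementations); the 1/n mutation probability is a customary default assumed here, and the distribution indices match the values stated above.

```python
import numpy as np

rng = np.random.default_rng()

def sbx(parent1, parent2, eta_c=20.0):
    """Simulated binary crossover (basic form), distribution index eta_c = 20."""
    u = rng.random(parent1.shape)
    beta = np.where(u <= 0.5,
                    (2 * u) ** (1 / (eta_c + 1)),
                    (1 / (2 * (1 - u))) ** (1 / (eta_c + 1)))
    child1 = 0.5 * ((1 + beta) * parent1 + (1 - beta) * parent2)
    child2 = 0.5 * ((1 - beta) * parent1 + (1 + beta) * parent2)
    return child1, child2

def polynomial_mutation(x, lower, upper, eta_m=20.0, prob=None):
    """Polynomial mutation (basic form); prob defaults to 1/n as is customary."""
    n = x.size
    prob = 1.0 / n if prob is None else prob
    u = rng.random(n)
    delta = np.where(u < 0.5,
                     (2 * u) ** (1 / (eta_m + 1)) - 1,
                     1 - (2 * (1 - u)) ** (1 / (eta_m + 1)))
    mask = rng.random(n) < prob
    y = np.where(mask, x + delta * (upper - lower), x)
    return np.clip(y, lower, upper)
```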

4.2. Comparison Results and Analysis

In the experiments, the seven MaOEAs, i.e., RVEA, VaEA, SPEA2 + SDE, NSGA-III, MOEA/DD, MOEA/D-AM2M, and ASES, are run 31 times on all the test instances, and the mean and variance of the IGD and HV values are reported in Tables 1 and 2. Besides, the Wilcoxon rank-sum test is employed to carry out a significance analysis between the proposed ASES and each comparison MaOEA, with the confidence level set to 0.05. The symbols "+", "−", and "≈" in Tables 1 and 2 denote that the comparison MaOEA is better than, worse than, and similar to our proposal, respectively. For each test instance, the best metric value among the seven algorithms is highlighted with a gray background.

From Table 1, we can observe that, among the 52 test instances, the proposed ASES has the smallest IGD value among the seven MaOEAs on 27 test instances. These results demonstrate that ASES achieves better overall performance than the other six comparison MaOEAs with respect to the IGD metric. In detail, our proposal significantly outperforms RVEA, NSGA-III, VaEA, MOEA/DD, SPEA2 + SDE, and MOEA/D-AM2M on 46, 43, 41, 48, 39, and 49 out of the 52 test instances, respectively. By comparing the proposed ASES with the three algorithms in the prototype (i.e., RVEA, VaEA, and SPEA2 + SDE), we can conclude that the proposed ensemble framework is able to further improve the performance of existing MaOEAs.

When facing test functions with irregular PF shapes, such as the inverted MaF1, the badly scaled MaF4, the degenerate MaF6, the disconnected MaF7, and the complicated mixed MaF10, a part of the reference vectors in decomposition-based MaOEAs are either not associated with any solution (e.g., in RVEA) or associated with crowded solutions (e.g., in NSGA-III, MOEA/DD, and MOEA/D-AM2M), resulting in poor diversity. Since the proposal integrates Pareto dominance-based algorithms to make up for this deficiency, the proposed ASES performs better than these decomposition-based MaOEAs as a whole.

With respect to the HV metric, the proposed ASES also shows competitive advantages. As shown in Table 2, the proposal stands out as the strongest MaOEA, achieving the best HV value among the seven algorithms on 27 out of the 52 test instances. These results again demonstrate that ASES achieves better overall performance than the other six comparison MaOEAs in terms of HV. In detail, our proposal generates significantly larger HV values than RVEA, NSGA-III, VaEA, MOEA/DD, SPEA2 + SDE, and MOEA/D-AM2M on 47, 42, 47, 48, 36, and 44 out of the 52 test instances, respectively.

The parallel coordinate plot [64] is a technique for displaying high-dimensional vectors in a two-dimensional graph: each dimension is plotted on a vertical axis, and a high-dimensional vector is represented by a polyline connecting its values on the axes. It has been frequently employed to visualize the distributions of output populations in the evolutionary many-objective optimization community.

To graphically illustrate both the convergence and diversity of the seven algorithms, parallel coordinates are employed to plot the distributions of their output populations corresponding to the smallest IGD values among 30 runs on the 9-objective MaF4, MaF6, and MaF10, as illustrated in Figures 2–4.

The PF shape of test function MaF4 is inverted and badly scaled, and the function contains a large number of local Pareto-optimal fronts. Owing to the badly scaled PF of MaF4, as shown in Figure 2(a), the ninth objective ranges from 0 to 512. From Figures 2(b) and 2(g), we can see that some solutions in the output populations of RVEA and MOEA/D-AM2M are far from convergent. Compared with these two MaOEAs, the convergence of NSGA-III and VaEA is much better, but a portion of their solutions still fails to converge enough to dominate the nadir point of the PF. For algorithm MOEA/DD, as shown in Figure 2(e), although it has good convergence, its diversity is poor. For instance, the values of its output population in the ninth objective only cover the interval [50, 260], while the interval covered by the PF in this objective is [0, 512]. Comparing Figures 2(h) and 2(a), we can find that the output population of ASES fits the true PF very well. These visual results explain why the proposal performs better than the five comparison algorithms RVEA, NSGA-III, VaEA, MOEA/DD, and MOEA/D-AM2M with respect to both IGD and HV, as shown in Tables 1 and 2. Intuitively, SPEA2 + SDE shows good convergence and diversity, as shown in Figure 2(f); however, according to the IGD and HV values of SPEA2 + SDE and ASES on the 9-objective MaF4, we can conclude that the proposed ASES has better convergence and diversity than SPEA2 + SDE.

The test function MaF6 is a representative MaOP with a degenerate PF. To visually compare the capability of the seven MaOEAs in dealing with degenerate PFs, their output populations corresponding to the smallest IGD values among 30 runs on the 9-objective MaF6 are plotted in parallel coordinates, as illustrated in Figure 3. For MaF6 with 9 objectives, the ninth objective on the PF ranges from 0 to 1.0, while the values of the eighth objective on the PF lie between 0 and 0.7, as shown in the true PF in Figure 3(a). From Figures 3(b) and 3(d)–3(g), it is obvious that most of the solutions in the output populations of the comparison algorithms RVEA, VaEA, MOEA/DD, SPEA2 + SDE, and MOEA/D-AM2M do not converge to the PF. Taking the eighth objective for instance, the values of most output solutions from the above five algorithms are far greater than 0.7. The algorithm NSGA-III has better convergence and diversity than the other five comparison algorithms. Comparing Figures 3(h) and 3(c), we can observe that the convergence of NSGA-III and ASES is similar, but the diversity of ASES is much better. The above comparison results demonstrate the competitive capability of the proposal in solving MaOPs with degenerate PFs.

The PF of test function MaF10 has complicated mixed geometries with scaled convex and concave segments. As shown in Figure 4, although the output population of ASES cannot fully approximate the true PF, combining the distributions of the output populations with the HV and IGD values in Tables 1 and 2, we can conclude that the proposed ASES is more powerful than the six comparison algorithms.

Besides, the convergence trends of the six MaOEAs in terms of IGD on MaF4, MaF6, and MaF10 with nine objectives are illustrated in Figure 5. From Figure 5(a), we can observe that, in the initial stage, the IGD values of the proposed ASES decline more slowly than those of RVEA. As the optimization search progresses, the proposed ASES eventually obtains lower IGD values than all five comparison algorithms. For the 9-objective MaF6, as shown in Figure 5(b), the IGD of the proposed ASES declines much faster than that of all five comparison algorithms. As shown in Figure 5(c), in the early stage, the convergence pace of SPEA2 + SDE is much faster than that of ASES, but its IGD value remains almost unchanged after 300 generations; the proposed ASES then obtains lower IGD values than SPEA2 + SDE. The competitive performance of ASES can be attributed to the fact that the proposed flexible framework is able to integrate the advantages of multiple MaOEAs.

The above experimental results have demonstrated that the proposed ensemble framework is highly beneficial to existing MaOEAs and that its prototype has competitive performance. To further investigate the behavior of the proposed ASES, the fluctuations of the selection probabilities of the three mating and variation operators during the entire evolutionary process on the 9-objective MaF1–MaF4 are plotted in Figure 6.

Figure 6 illustrates that no single mating and variation operator dominates over the entire evolutionary process on any instance. As the search progresses, the selection probabilities of the different mating and variation operators fluctuate significantly. For instance, when solving the 9-objective MaF1, the selection probability of the operator from VaEA fluctuates between 0.35 and 0.43 at the beginning, while after about 400 generations its fluctuation range becomes 0.28 to 0.4. Looking at Figure 6(c), we find that the fluctuation characteristics of the three operators on the 9-objective MaF3 are completely different from those on the other three test instances, because test function MaF3 has a large number of local optimal solutions. These observations imply that the proposal can employ efficient operators according to the problem characteristics and the different search stages.

5. Conclusions and Future Work

This paper focuses on the issue that no single MOEA is suitable for solving all types of MaOPs. We design a flexible ensemble framework that integrates any number of MOEAs to promote their advantages in solving a wide range of MOPs. Besides, a big archive is embedded in this ensemble framework to alleviate the undesirable phenomenon that some promising solutions are discarded during the evolution process. Finally, extensive experiments are performed on 52 challenging test instances to verify the effectiveness of the proposal by comparing it with six state-of-the-art MaOEAs.

MaOPs with complicated PSs challenge existing evolutionary multiobjective optimization algorithms, and solving them is one of our next research directions. Also, applying evolutionary multiobjective optimization to problems from real-world applications, such as the Internet of Things [65], resource investment project scheduling [66], supply chain management [67], and collaborative robots [68], is another interesting direction, especially designing problem-specific evolution rules and environmental selection mechanisms. Besides, large-scale multiobjective optimization [69–72] is another challenging task in the multiobjective optimization community and frequently occurs in real-world applications. Thus, designing efficient evolutionary algorithms for large-scale multiobjective optimization deserves future research.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant no. 71871185.