Mathematical Problems in Engineering
Volume 2018, Article ID 1435463, 18 pages
https://doi.org/10.1155/2018/1435463
Research Article

An Indicator and Decomposition Based Steady-State Evolutionary Algorithm for Many-Objective Optimization

1Department of Information Science and Engineering, Northeastern University, Shenyang 110819, China
2School of Information Science and Engineering, Central South University (CSU), Changsha 410083, China
3School of Mechanical Engineering, Shenyang Jianzhu University, Shenyang 110168, China

Correspondence should be addressed to Fei Li; lanceleeneu@126.com

Received 6 August 2017; Revised 11 December 2017; Accepted 22 January 2018; Published 11 March 2018

Academic Editor: Giuseppe Vairo

Copyright © 2018 Fei Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

An indicator based selection method is a major ingredient in the formulation of indicator based evolutionary multiobjective optimization algorithms. Existing classical indicator based selection methodologies have demonstrated excellent performance on low-dimensional optimization problems. However, indicator based evolutionary multiobjective optimization algorithms encounter enormous challenges in high-dimensional objective spaces. Our main purpose is to explore how to extend the R2 indicator to handle many-objective optimization problems. After analyzing the R2 indicator, the objective space partition strategy, and the decomposition method, we propose a steady-state evolutionary algorithm based on the R2 indicator and the decomposition method, named R2-MOEA/D, to obtain a well-converged and well-distributed Pareto front. The main contribution of this paper is twofold. (1) The convergence and diversity behavior of R2 indicator based selection is analyzed, and two improper selection situations are properly resolved by applying the decomposition method. (2) According to the position of a new individual in the steady-state evolutionary algorithm, two different objective space partition strategies and the corresponding selection methods are proposed. Extensive experiments are conducted on a variety of benchmark test problems, and the experimental results demonstrate that the proposed algorithm is competitive with several algorithms tailored for many-objective optimization.

1. Introduction

A large number of evolutionary multiobjective optimization algorithms (EMOAs) have been introduced to solve multiobjective optimization problems (MOPs) [1-5]. Most of these algorithms have demonstrated excellent performance on MOPs involving two or three objectives. However, many practical optimization problems, such as the water resource management problem [6], the general aviation aircraft design problem [7], and the hybrid electric vehicle controller design problem [8], involve more than three objectives and are known as many-objective optimization problems (MaOPs) [9]. Traditional EMOAs face substantial difficulties when addressing MaOPs, and a large body of research discusses methodologies for solving them; a detailed survey of MaOPs is given in [10].

Convergence means finding a set of solutions as close as possible to the Pareto front, while diversity means obtaining well-distributed solutions. Balancing convergence and diversity is a key issue in solving MaOPs. In order to obtain better performance, several EMOAs have been presented in the literature; they can be divided into four categories.

The idea of the first category is based on the Pareto dominance relation. In dominance based methods, Pareto dominance can be defined as the preference of the decision maker. Popular EMOAs, such as the nondominated sorting genetic algorithm II (NSGA-II) [11] and multiobjective particle swarm optimization (MOPSO) [12], are based on Pareto dominance and can deal with MOPs. However, the selection pressure provided by Pareto dominance is severely weakened as the dimension of the objective space increases. Therefore, a large number of studies modify the dominance relation or adopt a secondary metric; fuzzy dominance [13, 14] and preference inspired methods [15] have been deployed for handling MaOPs. However, the finally obtained Pareto solutions may exhibit excellent convergence in the objective space but poor diversity.

The second category comprises decomposition based selection methods, which decompose an MOP into many single-objective optimization subproblems through a scalarizing function and optimize these subproblems in a collaborative manner by evolving the population. C-MOGA [16] and MOEA/D [17] are the most representative algorithms of this sort. During the past few years, as a major framework for designing EMOAs, the decomposition based selection method has spawned a large number of research works, for example, incorporating self-adaptation mechanisms in reproduction [18] and hybridizing with swarm intelligence [19, 20].

The third avenue is known as the indicator based approach, which merges convergence and diversity into a single indicator. Indicator based evolutionary algorithms employ one performance indicator to provide a desired ordering among individuals that represent Pareto front approximations. The hypervolume [21] is probably the most popular indicator ever adopted due to its satisfactory theoretical properties. Some hypervolume based EMOAs have been established, such as the indicator-based evolutionary algorithm (IBEA) [22] and the multiobjective covariance matrix adaptation evolution strategy (MO-CMA-ES) [23]. However, the computational cost of the hypervolume hinders these algorithms on MaOPs [24].

Recently, several new performance indicators, namely, GD [25, 26], IGD [27], and Δp [28-31], have been proposed. More specifically, Δp is composed of slight modifications of the GD and IGD performance indicators via the averaged Hausdorff distance. Its computational cost is lower than that of the hypervolume, and it can handle outliers as well, which makes it attractive for assessing the performance of EMOAs [32]. Meanwhile, it has been shown in [29, 33] that Δp prefers evenly spread solutions along the Pareto front for biobjective and three-objective optimization problems, respectively. However, the main limitation of the Δp indicator is how to produce well-converged and well-distributed candidate solutions in high-dimensional objective spaces. More recently, a new performance indicator called the R2 indicator was proposed in [34] to compare approximation sets based on a set of utility functions [35]. The R2 indicator is weakly monotonic and incurs a lower computational overhead than the hypervolume. Due to these characteristics, the R2 indicator is recommended for dealing with MaOPs [32]; MOMBI [32] and MOMBI-II [36] have been proposed in this context.

It is worth noting that, unlike the three above-mentioned categories, the objective space partition strategy can be regarded as a fourth category for handling MaOPs. Representative EMOAs are MOEA/D-M2M [37] and MOEA/DD [38]. The motivation of this category is to balance convergence and diversity in each subspace. In comparison to the traditional decomposition based methods [17], these EMOAs can achieve better performance by introducing sophisticated selection methods.

In addition, the generational and steady-state schemes are the two commonly used reproduction schemes for producing offspring. Generational EMOAs have been studied far more deeply than steady-state EMOAs, especially among R2-EMOAs, so there is still considerable room for improving steady-state evolutionary algorithms on MaOPs. Meanwhile, MOMBI-II [36] and MOEA/DD [38] are suitable for different problems. This phenomenon motivates us to exploit the merits of both selection strategies to balance convergence and diversity. This paper focuses on an R2 indicator based steady-state evolutionary algorithm that adopts the modified Tchebycheff and penalty-based boundary intersection (PBI) decomposition methods to balance convergence and diversity for MaOPs. The main contributions of this paper are as follows.

(1) To the best of our knowledge, this is the first time the R2 indicator and the decomposition method have been combined in a steady-state evolutionary algorithm to address MaOPs. Specifically, we find that R2 indicator based selection is not always appropriate for a steady-state evolutionary algorithm: purely applying the R2 indicator to handle MaOPs can be harmful to population diversity. Therefore, we employ the objective space partition strategy and the PBI decomposition method to achieve a balance between convergence and diversity.

(2) The selection procedure of the steady-state evolutionary algorithm deletes one candidate solution from the combined population, which contains the parent population and one offspring individual. Two types of selection approaches are introduced, depending on the ideal point. On the one hand, if the position of the ideal point changes, we first repartition the objective space into a number of subspaces and then adopt the R2 indicator and the decomposition method to prune the combined candidate solutions. On the other hand, if the ideal point remains unchanged, we cluster the offspring to the nearest reference vector, and the decomposition method is adopted to discard the solution with the worst performance in the most crowded subspace.

The rest of this paper is organized as follows. Section 2 provides the background and motivation of this paper. Section 3 is devoted to the description of the proposed R2-MOEA/D for MaOPs. The comprehensive experimental design and the experimental results are provided in Sections 4 and 5. Finally, Section 6 concludes this paper.

2. Background and Motivation

In this section, some basic definitions and related works concerning R2-MOEA/D are first introduced. Then, the motivation of R2-MOEA/D is illustrated in detail.

2.1. Basic Definitions

Without loss of generality, a minimization multiobjective optimization problem can be formulated as follows:

  minimize F(x) = (f_1(x), f_2(x), ..., f_m(x))^T,  subject to x ∈ Ω ⊆ R^n,

where x = (x_1, ..., x_n)^T is the vector of decision variables, n is the dimension of the solution space, f_i is the i-th objective, m is the number of objectives, and F(x) is the m-dimensional objective vector. Given two vectors u and v in the objective space, we say that u dominates v if u_i ≤ v_i for all i ∈ {1, ..., m} and u ≠ v. The Pareto set (PS) consists of all nondominated solutions in the decision space, and the Pareto front (PF) is its image in the objective space.
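The dominance relation above is mechanical to check. As a minimal illustration (not code from the paper; the function name is ours), a dominance test for minimization might look like:

```python
import numpy as np

def dominates(u, v):
    """Return True if objective vector u Pareto-dominates v (minimization):
    u is no worse in every objective and strictly better in at least one."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return bool(np.all(u <= v) and np.any(u < v))
```

For example, (1, 2) dominates (2, 2), but neither of (1, 3) and (2, 2) dominates the other, and a vector never dominates itself.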

In [35], the R2 indicator was first proposed to assess the convergence and diversity of a set of candidate solutions A. According to [34], the definition of the R2 indicator can be given as follows.

Definition 1. For a set of reference vectors V, an ideal point z*, and the modified Tchebycheff utility function u(a | v, z*) = max_{1 ≤ j ≤ m} |f_j(a) − z*_j| / v_j, the R2 indicator of a solution set A can be defined as follows:

  R2(A, V) = (1/|V|) Σ_{v ∈ V} min_{a ∈ A} u(a | v, z*).

Definition 2. The contribution of a solution a to the R2 indicator, which is of interest for assessing the performance of each individual, is defined as follows:

  C_{R2}(a) = R2(A \ {a}, V) − R2(A, V),

where V is the set of reference vectors, z* is the ideal point, and a is an individual of the population A.
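To make Definitions 1 and 2 concrete, the following sketch (helper names are ours, not the authors' code) computes the R2 value of a set of objective vectors under the modified Tchebycheff utility, and the contribution of one member, taken here as the increase of R2 when that member is removed:

```python
import numpy as np

def tchebycheff_mod(f, v, z_star):
    """Modified Tchebycheff utility of objective vector f for reference
    vector v and ideal point z_star (smaller is better)."""
    return np.max(np.abs(f - z_star) / np.maximum(v, 1e-12))

def r2(A, V, z_star):
    """R2 indicator of a solution set A (rows = objective vectors)
    under a set of reference vectors V: mean over V of the best
    (minimal) utility achieved by any member of A."""
    return np.mean([min(tchebycheff_mod(a, v, z_star) for a in A) for v in V])

def r2_contribution(A, V, z_star, i):
    """Contribution of solution i: how much R2 worsens (increases)
    when solution i is removed from A; nonnegative since R2 is
    weakly monotonic."""
    A_minus = np.delete(A, i, axis=0)
    return r2(A_minus, V, z_star) - r2(A, V, z_star)
```

For two symmetric solutions (1, 0) and (0, 1) with reference vectors (0.75, 0.25) and (0.25, 0.75) and ideal point at the origin, each reference vector is best served by the solution on its side, giving R2 = 4/3, and removing either solution strictly increases R2.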

In this paper, we adopt the penalty-based boundary intersection (PBI) method, due to its promising performance for many-objective optimization reported in MOEA/DD [38]. Without loss of generality, the PBI decomposition method can be given as follows:

  minimize g^pbi(x | w, z*) = d_1 + θ d_2,
  d_1 = ||(F(x) − z*)^T w|| / ||w||,
  d_2 = ||F(x) − (z* + d_1 w / ||w||)||,

where d_1 is the convergence distance metric, d_2 denotes the diversity distance metric, and θ is an important penalty parameter to balance convergence and diversity.
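The two PBI distances follow directly from the formula above: d_1 is the length of the projection of F(x) − z* onto the weight vector, and d_2 is the perpendicular distance to that vector. A minimal sketch (function name and default θ = 5 are illustrative choices, not taken from the paper):

```python
import numpy as np

def pbi(f, w, z_star, theta=5.0):
    """Penalty-based boundary intersection value of objective vector f
    for weight vector w and ideal point z_star.
    d1: convergence distance (projection of f - z_star onto w),
    d2: diversity distance (perpendicular distance from f - z_star to w).
    Returns (g, d1, d2) with g = d1 + theta * d2."""
    f, w, z = (np.asarray(a, dtype=float) for a in (f, w, z_star))
    d1 = np.dot(f - z, w) / np.linalg.norm(w)
    d2 = np.linalg.norm(f - z - d1 * w / np.linalg.norm(w))
    return d1 + theta * d2, d1, d2
```

A point lying exactly on the weight vector, such as (1, 1) with w = (1, 1), has d_2 = 0 and g = d_1 = √2, whereas a point off the vector, such as (2, 0), pays the penalty θ·d_2 on top of its projection length.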

2.2. Related Works

In this subsection, we illustrate representative R2 indicator and decomposition based selection methods that rely on a set of reference vectors.

(i) MOMBI [32] and MOMBI-II [36] are two representative R2-EMOAs which abandon the Pareto dominance concept for MaOPs. The quality of a solution is fully determined by a set of reference vectors, and the achievement scalarizing function was first introduced into R2-EMOAs by these algorithms. MOMBI-II adopts statistical information of previous generations to normalize the candidate solutions. However, the fast ranking strategy scarcely considers the relationship between adjacent ranks: convergence is good, but diversity may be destroyed.

(ii) DBEA [7] is an excellent and practical steady-state EMOA for MaOPs which not only considers benchmark test problems but also takes into account three constrained engineering design optimization problems. The innovation of DBEA contains two parts: one is a simple selection procedure without introducing any penalty parameter, in which the diversity distance metric takes precedence over the convergence distance metric; the other is the normalization method, which is based on the corner sort method.

(iii) MOEA/D-M2M [37] was the first to introduce a set of weight vectors to divide the objective space into a large number of subspaces. Different weight vectors specify different subpopulations, each approximating a small segment of the whole Pareto front. The algorithm performs well on biobjective and three-objective optimization problems.

(iv) RVEA [39] combines generational offspring reproduction with a selection strategy based on a set of reference vectors. The adaptive angle-penalized distance (APD) metric and a reference vector update strategy were first introduced to balance convergence and diversity and to deal with scaled MaOPs. Due to its excellent adaptive reference vector update strategy, we have embedded this strategy into the proposed R2-MOEA/D algorithm.

(v) MOEA/DD [38] builds on the merits of the Pareto dominance and decomposition methods to balance convergence and diversity. Its main contribution contains three parts. First, the mating restriction strategy considers neighboring subspaces and the whole population. Second, an efficient nondominated level update approach is proposed for steady-state EMOAs. Finally, the density of the population is calculated by the local niche count of a subspace. MOEA/DD [38] is currently an excellent steady-state EA for MaOPs.

2.3. Motivation

Achieving a balance between convergence and diversity is the cornerstone of indicator and decomposition based evolutionary algorithms. The motivation of this paper is to exploit the merits of the R2 indicator and the decomposition method to balance convergence and diversity in a steady-state evolutionary algorithm for MaOPs. Two main aspects should be taken into account.

(1) As suggested in [36], the R2 indicator naturally balances convergence and diversity during the evolutionary procedure. However, for MaOPs the selection pressure can hardly balance convergence and diversity quantitatively by depending on the R2 indicator alone. Meanwhile, the R2 indicator gives excessive priority to the convergence requirement, whereas the objective space partition and decomposition methods pay more attention to diversity than to convergence. Therefore, how to reasonably balance convergence and diversity by combining the benefits of the R2 indicator and the decomposition method is vital for solving MaOPs.

(2) In a steady-state EA, only one individual is created in each generation, and one candidate solution must be deleted from the combined population. The key issue of the steady-state EA is to determine which candidate solution will be replaced by the offspring. Considering the position of the offspring relative to the ideal point, the objective space partition strategy should be specified in detail.

First and foremost, the R2 indicator [34], which can be seen as a mutual preference between a set of reference vectors and the candidate solutions, is a vital ingredient of EMOAs for MaOPs. Most researchers [32, 36, 40-42] believe that the R2 indicator achieves a desirable combination of convergence and diversity. Consider the example shown in Figure 1(a), where a candidate solution needs to be deleted from the combined population. The R2 indicator is adopted to prune the combined population, and the solution with the lowest contribution is deleted; this is a proper situation for R2 indicator based selection. However, we find that purely adopting the R2 indicator is not always reasonable, as shown in Figure 1(b). In this case, although the R2 indicator based selection mechanism intends to achieve a balance between convergence and diversity, the selected candidate solutions fail to keep the population diverse, especially in high-dimensional objective spaces. The main reason is that the R2 indicator based selection method discards the solution located in the isolated subspace. To relieve this effect, the solution located in the most crowded subspace should be discarded instead, while the isolated solution survives. Based on the above description and discussion, combining the R2 indicator with the decomposition method is a promising strategy for balancing convergence and diversity on MaOPs.

Figure 1: Illustrative examples of our motivation. If we only use the R2 indicator to delete an individual, two situations should be taken into consideration. (a) The solution with the lowest contribution shares its subspace with a better solution; it is reasonable to eliminate the worst solution associated with the most crowded subspace. (b) The solution with the lowest contribution is associated with an isolated subspace; we instead eliminate the worst solution associated with the most crowded subspace and preserve the isolated solution to enhance diversity at the expense of convergence.

The following paragraph illustrates the objective space partition strategy, which avoids redundant clustering operations, as given in Figure 2. When an offspring is produced by commonly used genetic operators, the objective space partition strategy covers two situations: one is clustering only the offspring to the set of reference vectors; the other is clustering the whole combined population to the set of reference vectors. To explain the two situations clearly, we give two examples for biobjective optimization problems, as illustrated in Figures 2(a) and 2(b). If the ideal point remains unchanged, as shown in Figure 2(a), the offspring is located in its subspace without clustering all of the candidate solutions. However, if the ideal point is updated because the offspring lies in the second, third, or fourth quadrant relative to it, as shown in Figure 2(b), the population distribution differs from the original, and every candidate solution of the whole combined population must be reclustered according to the set of reference vectors.

Figure 2: Illustration of the objective space partition strategy, which covers two situations. (a) A new individual is generated without updating the ideal point. The distribution of the population over the corresponding subspaces remains unchanged; we only locate the subspace to which the new individual belongs, and clustering the combined population is unnecessary. (b) A new individual changes the position of the ideal point. The objective space must be repartitioned into subspaces and the distribution of the combined population recomputed.

3. R2-MOEA/D

3.1. Framework of R2-MOEA/D

The framework of the proposed R2-MOEA/D is listed in Algorithm 1. First, a set of reference vectors and the initial population are generated. Until the termination condition is met, a single offspring is generated in each iteration by a traditional reproduction operator. The offspring is then combined with the current parent population into a combined population, and the R2 indicator and decomposition based selection strategy is employed to prune it. In the following subsections, the implementation details of each component of R2-MOEA/D are illustrated step by step.

Algorithm 1: R2-MOEA/D.
3.2. Initialization Strategy

To ensure diversity in the obtained PF, a set of reference vectors is generated using the two-layer reference vector generation method [38], which enhances the diversity of the population in high-dimensional objective spaces. Many EMOAs, such as DBEA [7] and MOEA/DD [38], adopt the same method to solve MaOPs. The initial population consists of individuals randomly generated within the variable bounds.
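The two-layer scheme builds on the classical Das-Dennis simplex-lattice design: an outer lattice on the unit simplex plus an inner lattice shrunk toward the center. The sketch below is illustrative only; the layer resolutions and the shrinkage factor `tau` are assumptions, not the paper's settings:

```python
import numpy as np
from itertools import combinations

def simplex_lattice(m, h):
    """Das-Dennis simplex-lattice weight vectors: all m-dimensional
    vectors with components in {0, 1/h, ..., h/h} summing to 1,
    enumerated via stars-and-bars (divider positions)."""
    vectors = []
    for c in combinations(range(h + m - 1), m - 1):
        prev, v = -1, []
        for pos in c:
            v.append(pos - prev - 1)   # gap before this divider
            prev = pos
        v.append(h + m - 2 - prev)     # remaining mass after last divider
        vectors.append(np.array(v) / h)
    return np.array(vectors)

def two_layer(m, h_outer, h_inner, tau=0.5):
    """Two-layer reference vectors: outer lattice plus an inner lattice
    shrunk toward the simplex center (a common NSGA-III-style choice)."""
    outer = simplex_lattice(m, h_outer)
    inner = tau * simplex_lattice(m, h_inner) + (1 - tau) / m
    return np.vstack([outer, inner])
```

For m objectives and resolution h, the lattice contains C(h + m − 1, m − 1) vectors; for example, m = 3 and h = 2 gives 6 outer vectors, and every generated vector still sums to 1 after the inner-layer shrinkage.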

The set of reference vectors partitions the objective space into a set of subspaces. The initialized population is allocated to these subspaces by associating each individual with its closest reference vector: an individual is allocated to a subspace if and only if the angle between the individual and the corresponding reference vector is minimal.
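The angle-based association above can be sketched as follows (a minimal illustration with names of our own choosing): each translated objective vector is assigned to the reference vector with the largest cosine, that is, the smallest angle.

```python
import numpy as np

def associate(F, V, z_star):
    """Assign each individual (row of objective matrix F) to the
    reference vector (row of V) with the smallest angle to its
    translated objective vector F[i] - z_star. Returns one subspace
    index per individual."""
    Fp = np.asarray(F, dtype=float) - np.asarray(z_star, dtype=float)
    norm_F = np.linalg.norm(Fp, axis=1, keepdims=True)
    norm_V = np.linalg.norm(V, axis=1, keepdims=True)
    # cosine between every translated vector and every reference vector
    cos = (Fp / np.maximum(norm_F, 1e-12)) @ (np.asarray(V) / np.maximum(norm_V, 1e-12)).T
    return np.argmax(cos, axis=1)  # max cosine == min angle
```

With reference vectors along the axes, a solution near the f1 axis is assigned to the f1 reference vector and vice versa.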

3.3. R2-MOEA/D Selection Strategy

In the steady-state evolutionary algorithm, a new offspring is generated using the simulated binary crossover operator (SBX) [43] and the polynomial mutation operator (PM) [44]. After a new individual is generated in steady-state form, we use it to update the parent population. The pseudocode of the R2-MOEA/D selection procedure is presented in Algorithm 2. Visualization in evolutionary multiobjective optimization is essential in many respects [45]; to better understand the R2-MOEA/D selection strategy, biobjective illustrations are given in Figures 3 and 4 (when the number of objectives is greater than three, such a simple and intuitive visualization of approximation sets is much harder to achieve). The main concern is developing a steady-state selection mechanism based on the R2 indicator and the decomposition method that balances convergence and diversity. The R2-MOEA/D selection strategy is repeated in every iteration.

Algorithm 2: R2-MOEA/D selection.
Algorithm 3: Objective space partition (OSP).
Algorithm 4: R2 contribution calculation.
Figure 3: The lowest contribution level contains only one individual; the number on top of each individual represents its contribution. There are two situations. (a) The individual is eliminated because it belongs to the last contribution level and there is another, better solution associated with its subspace; the R2 indicator is proper for this situation. (b) Although the individual belongs to the worst contribution level, it is associated with an isolated subspace, which indicates that it is important for population diversity and should be preserved. Only adopting the R2 indicator is improper here; instead, we eliminate the solution associated with the most crowded subspace via the PBI decomposition method. Combining the R2 indicator and the decomposition method has the potential to enhance performance.
Figure 4: The lowest contribution level contains more than one individual. Two situations appear. (a) The worst individual is eliminated because it belongs to the last contribution level and there is another, better solution associated with its subspace; the selection is proper in this situation. (b) Although several individuals belong to the lowest contribution level, each is associated with its own subspace, so simply adopting the R2 indicator is improper. Thus, we eliminate the worst solution of the most crowded subspace via the PBI decomposition method. The R2 indicator and the decomposition method together address the balance between convergence and diversity.

First of all, after introducing a new offspring, we detect whether the ideal point has changed. If the ideal point is updated by the new individual, as in Figure 2(b), we need to repartition the objective space and cluster the combined candidate solutions into the corresponding subspaces. The instantiation of the objective space partition strategy for the combined population is given in Algorithm 3, invoked from Algorithm 2; there, the angle between each objective vector and each reference vector is calculated. If the ideal point remains unchanged, as illustrated in Figure 2(a), the traditional objective space partition strategy would recluster the whole combined population to obtain the subspace of each candidate solution, which wastes a lot of precious time in a steady-state EA. After analyzing the position of the new individual, we only need to cluster the offspring to its nearest reference vector in Algorithm 2, which avoids the wasted computation.
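The change-detection step is a component-wise minimum update of the ideal point. A minimal sketch (the function name is ours):

```python
import numpy as np

def update_ideal(z_star, f_new):
    """Component-wise update of the ideal point with a new objective
    vector. Returns the updated point and a flag telling whether any
    component improved (i.e., whether the ideal point changed)."""
    z_star = np.asarray(z_star, dtype=float)
    z_new = np.minimum(z_star, np.asarray(f_new, dtype=float))
    return z_new, bool(np.any(z_new < z_star))
```

The flag decides which branch of the selection procedure runs: repartition the whole combined population when it is True, or cluster only the offspring when it is False.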

Compared with the Pareto dominance strategy, the R2 indicator can distinguish weakly dominated solutions, so there is no need to introduce any sophisticated ranking strategy or diversity maintenance strategy to discriminate the population in R2-EMOAs. The R2 contribution is calculated by Algorithm 4: we first obtain the utility value corresponding to each reference vector, preserve the best individual and its value for each reference vector, and then, for each individual, sum its values over the set of reference vectors to obtain its contribution.

To clearly explain the R2-MOEA/D selection procedure, which is based on the R2 indicator and the decomposition method, Algorithm 2 distinguishes two cases when deleting a solution from the combined population. If the ideal point is changed by the new individual, the R2 indicator and the decomposition method are combined to delete one solution. If the ideal point remains unchanged, the selection procedure only considers the most crowded subspace among all subspaces. After applying the R2-MOEA/D selection strategy, the worst solution is discarded from the combined population, and the occupancy count of its subspace is decreased by one. The selection procedure is explained in detail below.

(1) If the ideal point is updated by the offspring, the combined candidate solutions must be repartitioned because the distribution of the original population has already changed (Algorithm 2). When the R2 indicator is introduced to prune the combined population, almost no researchers [32, 36, 40, 41] take the distribution of the candidate solutions into account in R2-EMOAs, so it is doubtful whether deleting the solution with the lowest contribution is always reasonable. As further observed from Figures 3 and 4, adopting only the R2 indicator to prune the combined population is not always suitable. Regarding the size of the lowest contribution level, there are two situations.

(1) The lowest contribution level contains a single individual. Most R2-EMOAs, such as MOMBI-II [36] and R2-MOPSO [41], would discard this individual; however, after comprehensively balancing convergence and diversity, this is not always proper for most test problems. The proper and improper situations are analyzed in detail below.

(a) If there are other solutions in the same subspace, the lowest contribution individual does not contribute to the algorithm performance by the nature of the R2 indicator, and the solution is deleted (Algorithm 2). Figure 3(a) presents an example of this selection procedure.

(b) If the subspace contains only this candidate solution, that is, the candidate solution is located in an isolated subspace, the isolated lowest contribution individual should be preserved to maintain diversity at the expense of convergence. As shown in Figure 3(b), although the individual belongs to the lowest contribution level, it is associated with an isolated subspace, which indicates that it is important for population diversity and should be preserved. Instead, we find the most crowded subspace and delete the solution with the maximum PBI value (Step (11) of Algorithm 2); in Figure 3(b), the worst solution of the most crowded subspace is deleted.

(2) The lowest contribution level contains more than one individual. The distribution of the population covers two situations.

(a) As shown in Figure 4(a), the subspace of a lowest contribution individual possesses more than one individual. We then find the most crowded subspace and delete the solution with the maximum PBI value (from Step (13) to Step (15) in Algorithm 2). In this situation, it is proper to introduce the R2 indicator based selection method without considering the population distribution.

(b) As shown in Figure 4(b), each subspace of the lowest contribution individuals contains only one individual; these candidate solutions are vital for population diversity. Since the most crowded subspace is determined over the whole combined population, there is at least one subspace containing more than one individual, and we eliminate the worst individual, the one with the maximum PBI value, in the most crowded subspace (Algorithm 2). It is improper to delete an individual in an isolated subspace even if it has the lowest contribution, whereas discarding a first-rank individual from a crowded subspace ensures better diversity.

(2) If the ideal point remains unchanged after introducing the new individual, only the acute angle between the new candidate solution and the set of reference vectors has to be calculated (Algorithm 2). The PBI decomposition method is then introduced to evaluate the combined candidate solutions, and the solution with the maximum PBI value in the most crowded subspace is deleted. The size of the next parent population thus remains unchanged.
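This branch of the selection can be sketched as follows (a simplified illustration of the idea with names of our own choosing, not the authors' implementation; ties between equally crowded subspaces are broken arbitrarily here):

```python
import numpy as np

def prune_most_crowded(F, labels, V, z_star, theta=5.0):
    """Delete the worst (largest PBI value) individual from the most
    crowded subspace. F holds objective vectors (one row per
    individual), labels the subspace index of each individual, V the
    reference vectors. Returns the index of the individual to delete."""
    counts = np.bincount(labels, minlength=len(V))
    crowded = int(np.argmax(counts))              # most crowded subspace
    members = np.flatnonzero(labels == crowded)
    w = V[crowded]

    def pbi_val(f):
        # PBI value of f for the crowded subspace's reference vector
        d1 = np.dot(f - z_star, w) / np.linalg.norm(w)
        d2 = np.linalg.norm(f - z_star - d1 * w / np.linalg.norm(w))
        return d1 + theta * d2

    return members[int(np.argmax([pbi_val(F[i]) for i in members]))]
```

For example, with two solutions sharing the subspace of w = (1, 0), the one farther from the ideal point along that direction has the larger PBI value and is the one removed.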

After deleting an individual from the combined population, the occupancy count of the subspace of the discarded solution is decreased by one, and the remaining solutions form the selected population of the R2-MOEA/D selection strategy. Finally, the adaptive reference vector update strategy from RVEA [39] is embedded into our R2-MOEA/D selection strategy; interested readers can refer to RVEA [39] for further details.

3.4. Discussion

After describing the main procedure of R2-MOEA/D in detail, this subsection discusses the similarities and differences of R2-MOEA/D, MOMBI-II [36], MOEA/D-PBI [17], DBEA [7], and MOEA/DD [38].

(1) Similarities between R2-MOEA/D and MOMBI-II [36]. (i) Both of them adopt the R2 indicator to select the candidate solutions.

(ii) Both of them introduce a set of reference vectors to guide the potential solutions.

(iii) Both of them use a utility function to evaluate the population.

(2) Similarities between R2-MOEA/D and MOEA/D-PBI [17]. (i) Both of them adopt a set of reference vectors to guide the selection procedure.

(ii) Both of them apply the PBI decomposition method to select the candidate solutions.

(3) Similarities between R2-MOEA/D and DBEA [7]. (i) Both of them use a set of reference vectors to maintain the diversity of the population.

(ii) Both of them are steady-state evolutionary algorithms.

(iii) Both of them make use of nadir points.

(4) Similarities between R2-MOEA/D and MOEA/DD [38]. (i) Both of them use a set of reference vectors (weight vectors) to partition the objective space into a number of subspaces.

(ii) Both of them use the convergence distance and the diversity distance from the PBI decomposition method [17] to enhance the performance.

(iii) Both of them are steady-state evolutionary algorithms.

(5) Differences among the above EMOAs. The selection procedure plays a more important role for MaOPs than for single-objective optimization problems. Generational EMOAs and steady-state EMOAs are two ways to solve MaOPs. The proposed algorithm differs from MOMBI-II [36], RVEA [39], DBEA [7], and MOEA/DD [38] in several respects.

(i) MOMBI-II [36] abandons any Pareto dominance method; the quality of the candidate solutions is fully determined by a set of reference vectors. The achievement scalarizing function was first embedded into this algorithm, and MOMBI-II adopts statistical information of previous generations to update the nadir point. More specifically, if we only employ the R2 indicator, the first-rank individuals are preserved as the selected candidate solutions, as shown in Figure 5(a), while the solutions in the lower ranks, together with their subspaces, are deleted; diversity is thus seriously damaged.

Figure 5: The illustration of MOMBI-II and MOEA/D-PBI. (a) Selection strategy based on MOMBI-II. The combined solutions are ranked by the R2 indicator against a set of reference vectors; the star points are the selected solutions, taken from the first ranks, and the remaining subspaces are ignored. Convergence is good while diversity is damaged. (b) Selection strategy based on MOEA/D-PBI. The star points are the finally selected solutions; the isolated solution in its subspace is ignored, so diversity is damaged.

(ii) MOEA/D [17] has attracted many researchers since it was proposed in 2007, and it is an excellent algorithm whether the dimension of the objective space is low or high. Figure 5(b) illustrates the decomposition based selection strategy for a set of potential solutions: the solutions obtained via the PBI decomposition method converge well, and diversity can be maintained by a set of weight vectors. However, the obtained solutions discard one subspace in the example.

(iii) DBEA [7] is mainly based on the decomposition method. The algorithm adopts the corner-sort ranking method to calculate the intercept point. It first considers Pareto dominance to preserve convergence; then the diversity distance metric is used to maintain diversity, and if Pareto dominance and the diversity distance cannot distinguish the potential solutions, the convergence distance metric is introduced. The contributions of DBEA [7] are its normalization strategy and a PBI decomposition method that needs no penalty parameter θ. As shown in Figure 6(a), one subspace and its solution are eliminated from the population by the ultimately selected solutions: the algorithm emphasizes convergence, so diversity can be damaged for MaOPs.

Figure 6: The illustration of DBEA and MOEA/DD. (a) Selection strategy based on DBEA. When a new individual is generated, it is deleted from the combined population in this example, and one subspace is discarded. Convergence is good while diversity is damaged. (b) Selection strategy based on MOEA/DD. When a new individual is generated, a sophisticated steady-state Pareto ranking strategy (ENLU) updates the nondomination levels, and the PBI decomposition method selects the proper individuals; the star points survive. Combining Pareto dominance and the decomposition method balances convergence and diversity well.

(iv) MOEA/DD [38] builds on the merits of Pareto dominance and the decomposition method to address MaOPs, and it is an excellent algorithm for high-dimensional optimization problems. As shown in Figure 6(b), the star individuals survive. The update procedure is sophisticated, and the efficient nondomination level update approach (ENLU) is designed for steady-state EMOAs. The algorithm can comprehensively balance convergence and diversity.

(v) R2-MOEA/D combines the R2 indicator with the decomposition based method. R2 indicator based selection deals well with biobjective and three-objective optimization problems [34], but diversity is damaged when only the R2 indicator is adopted for MaOPs. The PBI decomposition method is therefore introduced as a diversity maintaining strategy to enhance the performance; the finally selected solutions coincide with those in Figure 6(b). However, the combined selection strategy does not introduce any Pareto dominance approach. When a new offspring is generated, its position may update the ideal point. If the ideal point is updated, the combined population must be repartitioned; in this situation, the R2 indicator and the decomposition method are integrated to select the candidate solutions. If the ideal point remains unchanged, we only cluster the new individual rather than the whole combined population; in this situation, we delete the individual with the maximum PBI value in the most crowded subspace.
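
The ideal-point-unchanged branch just described can be sketched in a few lines. The association of solutions to subspaces by largest cosine to a reference vector and the θ = 5 default are our assumptions; this is an illustrative sketch, not the authors' exact procedure:

```python
import numpy as np

def prune_case_unchanged(combined_F, z_ideal, W, theta=5.0):
    """When the ideal point is unchanged, delete the member with the
    largest PBI value in the most crowded subspace (a subspace is the
    set of solutions closest in angle to one reference vector)."""
    diff = combined_F - z_ideal
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
    # Associate each solution with the reference vector of largest cosine.
    cos = (diff @ Wn.T) / (np.linalg.norm(diff, axis=1, keepdims=True) + 1e-12)
    assoc = np.argmax(cos, axis=1)
    counts = np.bincount(assoc, minlength=len(W))
    crowded = int(np.argmax(counts))             # most crowded subspace
    members = np.where(assoc == crowded)[0]
    d1 = diff[members] @ Wn[crowded]             # convergence distances
    d2 = np.linalg.norm(diff[members] - d1[:, None] * Wn[crowded], axis=1)
    worst = members[np.argmax(d1 + theta * d2)]  # largest PBI value
    return np.delete(combined_F, worst, axis=0)
```

With two reference vectors and three solutions, the solution farthest along the crowded vector (largest d1 + θ·d2) is removed, which is exactly the steady-state replacement the text describes.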

4. Experimental Design

This section is devoted to the experimental setup for investigating the performance of R2-MOEA/D. At first, the benchmark test problems used in our experimental research are given. Then, we introduce the performance indicator used to assess the convergence and diversity of the EMOAs. Finally, the experimental settings adopted in this study are provided.

4.1. Benchmark Test Problems

Empirical experiments are conducted on two well-known test suites for MaOPs, namely, DTLZ [46] and WFG [47]. These test suites can be scaled to any number of objectives. For each test instance, the number of objectives m is varied from three to fifteen, that is, m ∈ {3, 5, 8, 10, 15}. For the DTLZ test problems, the total number of decision variables is given by n = m + k − 1, where k is set to 5 for DTLZ1 and 10 for DTLZ2 to DTLZ4. As suggested in [47], the number of decision variables for the WFG test instances is set as n = k + l, where k is the number of position-related variables and l is the number of distance-related variables.
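
These sizing rules can be collected in a small helper. The DTLZ formula is standard; for WFG we assume the common split k = 2(m − 1) and l = 20, since the paper's exact values were not recoverable from the text:

```python
def n_variables(problem, m):
    """Number of decision variables for an m-objective instance.

    DTLZ: n = m + k - 1, with k = 5 for DTLZ1 and k = 10 for DTLZ2-DTLZ4.
    WFG:  n = k + l; k = 2 * (m - 1) position-related and l = 20
    distance-related variables are assumed common defaults here."""
    if problem == "DTLZ1":
        return m + 5 - 1
    if problem in ("DTLZ2", "DTLZ3", "DTLZ4"):
        return m + 10 - 1
    if problem.startswith("WFG"):
        return 2 * (m - 1) + 20
    raise ValueError("unknown problem: " + problem)
```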

4.2. Performance Measure

We select the averaged Hausdorff distance Δp as the performance assessment measure. This performance indicator simultaneously evaluates proximity to the Pareto optimal front and the spread of solutions along it. Given an approximation set A and a set PF* of points sampled from the true Pareto front of an MOP, the indicator is defined as Δp(A, PF*) = max(GDp(A, PF*), IGDp(A, PF*)), with GDp(A, PF*) = ((1/|A|) Σ_{a∈A} d(a, PF*)^p)^{1/p} and IGDp(A, PF*) = ((1/|PF*|) Σ_{v∈PF*} d(v, A)^p)^{1/p}, where d(v, A) is the Euclidean distance from v to its nearest member in A and d(a, PF*) is the Euclidean distance from a to its nearest member in PF*. Small values of Δp are preferred. We obtain the sampled true Pareto fronts from PlatEMO [48]. According to [28–31], Δp is selected. Finally, we adopt the Δp performance indicator to evaluate the algorithm performance.
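
The definition above can be implemented in a few lines; a sketch (the function name is ours, and p = 2 is used as the default exponent):

```python
import numpy as np

def delta_p(A, PF, p=2):
    """Averaged Hausdorff distance Delta_p = max(GD_p, IGD_p) between an
    approximation set A and a sampled Pareto front PF (rows = objective
    vectors)."""
    # Pairwise Euclidean distances, shape |A| x |PF|.
    D = np.linalg.norm(A[:, None, :] - PF[None, :, :], axis=2)
    gd_p = np.mean(np.min(D, axis=1) ** p) ** (1.0 / p)   # A  -> PF
    igd_p = np.mean(np.min(D, axis=0) ** p) ** (1.0 / p)  # PF -> A
    return max(gd_p, igd_p)
```

Taking the maximum of GDp and IGDp is what lets a single number penalize both poor convergence (A far from PF) and poor spread (parts of PF far from A).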

4.3. EMOAs for Comparisons

The population size, termination condition, and parameter settings in each algorithm will be given as follows.

(1) The population size N, the number of reference vectors, and the reference points for different numbers of objectives are summarized in Table 1. They are determined by the simplex-lattice design factor H together with the number of objectives m; H1 and H2 are used to generate uniformly distributed reference vectors on the outer boundary and the inside layer, respectively.

Table 1: Number of reference vector and population size.
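
For reference, the simplex-lattice construction behind Table 1 can be sketched as follows. It enumerates all C(H + m − 1, m − 1) weight vectors with components that are multiples of 1/H and sum to 1; the two-layer variant, with inner-layer vectors shrunk toward the centre by τ = 0.5, is an assumption borrowed from common NSGA-III-style implementations:

```python
from itertools import combinations
import numpy as np

def simplex_lattice(m, H):
    """All m-dimensional weight vectors with components in {0, 1/H, ..., 1}
    summing to 1 (stars-and-bars enumeration)."""
    vecs = []
    for bars in combinations(range(H + m - 1), m - 1):
        prev, parts = -1, []
        for b in bars:
            parts.append(b - prev - 1)   # count of stars before this bar
            prev = b
        parts.append(H + m - 2 - prev)   # stars after the last bar
        vecs.append([c / H for c in parts])
    return np.array(vecs)

def two_layer(m, H1, H2, tau=0.5):
    """Outer boundary layer (factor H1) plus an inner layer (factor H2)
    shrunk toward the centre point (1/m, ..., 1/m)."""
    inner = tau * simplex_lattice(m, H2) + (1 - tau) / m
    return np.vstack([simplex_lattice(m, H1), inner])
```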

(2) The termination condition of each run is the maximal number of generations, which is summarized in Tables 2 and 3.

Table 2: Statistical results (mean values and standard deviations) of Δp values obtained by each algorithm on the DTLZ1, DTLZ2, DTLZ3, and DTLZ4 test instances. Best performance is shown in bold. +, ≈, and − denote that R2-MOEA/D performs significantly better than, equivalent to, and worse than the compared algorithm, respectively.
Table 3: Comparisons of median Δp values obtained by each algorithm on the WFG test suite. Best performance is shown in bold. +, ≈, and − denote that R2-MOEA/D performs significantly better than, equivalent to, and worse than the compared algorithm, respectively.

(3) For the parameter settings, the distribution index and probability of the simulated binary crossover, as well as the distribution index and probability of the polynomial mutation, are set to the same values in R2-MOEA/D, MOMBI-II [36], MOEA/D-PBI [17], DBEA [7], and MOEA/DD [38]. In MOEA/D-PBI [17] and MOEA/DD [38], the neighborhood size and the penalty parameter θ in PBI are set to the same values as in R2-MOEA/D. The reference vectors are adaptively updated in R2-MOEA/D. Each algorithm is independently run 21 times. The mean and standard deviation of Δp for the DTLZ problems are given in the corresponding tables, and the median value of Δp for the WFG problems is given as well. To draw statistically sound conclusions, we introduce the Wilcoxon rank sum test at a significance level of 5% to evaluate whether the proposed R2-MOEA/D algorithm is significantly better or worse than the compared algorithms.
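
The significance test used in Tables 2 and 3 can be reproduced with a plain rank-sum computation. The following self-contained sketch uses the normal approximation with mid-ranks for ties; it omits the tie correction in the variance, so treat it as illustrative rather than a drop-in replacement for a statistics library:

```python
import math

def ranksum_symbol(a, b, alpha=0.05):
    """Two-sided Wilcoxon rank-sum test comparing two samples of Delta_p
    values; returns '+' if a is significantly smaller (better) than b,
    '-' if significantly larger, and '≈' otherwise."""
    pooled = sorted((v, i) for i, v in enumerate(list(a) + list(b)))
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):                     # assign mid-ranks to ties
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = (i + j) / 2 + 1
        i = j + 1
    n1, n2 = len(a), len(b)
    W = sum(ranks[:n1])                        # rank sum of sample a
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    p = math.erfc(abs(W - mu) / (sigma * math.sqrt(2)))  # two-sided p-value
    if p >= alpha:
        return '≈'
    return '+' if W < mu else '-'
```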

5. Experimental Results and Discussion

Our experiments consist of three parts. First, we compare R2-MOEA/D with MOMBI-II [36], DBEA [7], MOEA/D-PBI [17], and MOEA/DD [38] on the DTLZ test problems; the main purpose is to validate the ability of R2-MOEA/D to balance convergence and diversity. Second, we compare these algorithms on the WFG test instances to examine their ability to cope with different PF features. Finally, since R2-MOEA/D relies on a hybrid selection strategy based on the R2 indicator and the decomposition method, we investigate all possible cases that might happen when a new individual is introduced into the population.

5.1. Performance Comparisons on the DTLZ Test Suite

In this subsection, the Δp performance indicator is used to evaluate the algorithms. Comparison results of R2-MOEA/D with the other four EMOAs are presented in Table 2; the best values are written in bold. From the experimental results, it is clear that R2-MOEA/D and MOEA/DD [38] show the best performance on the four original DTLZ problems. MOMBI-II [36] and DBEA [7] have the worst Δp values compared with the proposed algorithm, mainly due to their selection strategies and normalization methods. MOEA/D-PBI [17] handles the DTLZ2 problems well, as shown in Table 2. MOEA/DD [38] achieves better performance on most of the DTLZ test problems, and the proposed algorithm is the second best EMOA according to the statistical results in Table 2. In addition, R2-MOEA/D is the best algorithm when compared with its parent algorithms MOMBI-II [36] and MOEA/D-PBI [17], so it achieves a satisfactory performance based on the R2 indicator and the decomposition method. For example, the approximate PFs with the median Δp value in Figure 7 illustrate an excellent comprehensive performance on the 15-objective DTLZ4 instance compared with the four related algorithms.

Figure 7: Pareto fronts obtained by the five algorithms on the fifteen-objective DTLZ4 instance with the median Δp value.
5.2. Performance Comparisons on the WFG Test Suite

We next investigate the WFG problems, which involve nonseparable variables and different PF geometries, for example, disconnected, convex, concave, degenerate, and linear PFs. The WFG test suite poses a significant challenge for obtaining a well-converged and well-distributed solution set. Table 3 presents the comprehensive results of R2-MOEA/D and the other EMOAs in terms of Δp values; the best metric values are written in bold. The representative distributions of the indicator values are illustrated by the box plots in Figure 8. It is obvious that R2-MOEA/D shows the best performance in most cases according to the statistical experimental results. Some discussion of the experimental results is given in the following paragraphs.

Figure 8: The box plots obtained by five algorithms on the respective fifteen-objective WFG test suite.

As for the WFG1, WFG2, and WFG3 problems, the R2 indicator based algorithm MOMBI-II achieves the best Δp values among the compared algorithms. It is worth pointing out that the proposed R2-MOEA/D, based on the R2 indicator and the decomposition method, is the second best algorithm and obtains satisfactory performance. MOEA/DD, which obtained the best performance on the DTLZ test suite, cannot achieve comparable performance on these problems. DBEA achieves competitive performance on the 8-objective and 10-objective WFG1 problems and shows better Δp values on the 3-objective and 5-objective WFG2 problems. MOEA/D-PBI can hardly solve most of these problems owing to the lack of a proper normalization strategy. In addition, Figure 8(a) gives the box plot for the 15-objective WFG2 test instance: MOMBI-II and MOEA/D-PBI obtain better Δp values than DBEA, MOEA/DD, and the proposed R2-MOEA/D.

For the remaining problems, WFG4 to WFG9 share the same concave PF in the objective space but pose different difficulties in the decision space. More specifically, WFG4 is multimodal with large “hill sizes,” so EMOAs are easily trapped in local optima. R2-MOEA/D achieves the best Δp values for the 3-objective and 5-objective WFG4 problems in Table 3, and MOMBI-II cannot match the proposed algorithm in terms of Δp. DBEA is the best algorithm for the 8-objective and 10-objective WFG4 test instances. MOEA/DD obtains the best performance for the 15-objective WFG4 test instance, as shown in Figure 8(b), with R2-MOEA/D second best; the robustness of the proposed algorithm is better than that of MOEA/DD. WFG5 is a deceptive problem; R2-MOEA/D is the best algorithm in terms of Δp when the number of objectives is three and fifteen in Table 3, while DBEA performs better on the 8-objective and 10-objective WFG5 test instances. Similar to the observations on WFG5, for WFG6, a nonseparable and reduced problem, R2-MOEA/D and DBEA obtain the better performance. Figure 8(c) gives the box plot of the 15-objective WFG6 test instance: R2-MOEA/D and MOEA/DD have the best Δp values, and the other three compared algorithms cannot achieve satisfactory performance. WFG7 is both separable and unimodal; R2-MOEA/D and MOEA/DD obtain the best performance in terms of Δp. R2-MOEA/D, which combines the R2 indicator and the decomposition method, is suitable for the low-dimensional WFG7 problems, whereas MOEA/DD, which combines Pareto dominance and the decomposition method, is suitable for the high-dimensional WFG7 instances. WFG8 and WFG9 have biased and nonseparable features, and WFG8 is harder than WFG9 due to parameter dependency. R2-MOEA/D obtains the best performance among the compared algorithms on the WFG8 test instances.
As further observed from Figure 8(d), R2-MOEA/D outperforms MOMBI-II and MOEA/DD on the 15-objective WFG8 instance. For WFG9 in Table 3, R2-MOEA/D achieves the best performance on the 3-objective and 15-objective instances, and DBEA on the 5-objective, 8-objective, and 10-objective instances.

From the overall statistical analysis in Table 3, it is evident that the solutions achieved by R2-MOEA/D are well converged and widely distributed over the true PFs of MaOPs; R2-MOEA/D is a competitive algorithm among the compared EMOAs. DBEA ranks second on the WFG test problems. MOMBI-II is the third best algorithm, thanks to the achievement scalarizing function, on the first three problems. MOEA/D-PBI and MOEA/DD leave room for improvement on the WFG problems, in contrast to their excellent performance on the DTLZ problems reported in Section 5.1.

5.3. Investigation of Different Cases in the R2-MOEA/D Selection Procedure

The proposed R2-MOEA/D selection strategy prunes the combined population based on the R2 indicator and the decomposition method, implemented in a hierarchical manner. We consider all possible cases that can happen when a new individual is combined with the parent population. Without loss of generality, we investigate the different cases on the WFG1 benchmark problem in this subsection; the usage of each selection case on each test instance is illustrated in Figure 9. From the detailed analysis, five selection cases are used in the R2-MOEA/D selection strategy to solve MaOPs; the selection method scarcely introduces any Pareto dominance, in contrast to MOEA/DD. Whether the ideal point is updated is the first condition to be considered. If the ideal point is changed by the new individual, the R2 indicator and the decomposition method are used to delete an individual from the combined population, and four cases can possibly happen for high-dimensional objective optimization problems.

Figure 9: Plots of usages for different selection cases on three-objective WFG1 test instance.

Case 1. and .

Case 2. and .

Case 3. and .

Case 4. and .

If the ideal point remains unchanged after introducing the new individual, the PBI decomposition method and the objective space partition strategy are used to prune the population. Only the decomposition method is used in Case 5, as follows.

Case 5. The ideal point remains unchanged.

Without loss of generality, the WFG1 problem, which has a mixed convex and concave PF, is used to investigate the usage of the different cases from three to fifteen objectives. The representative usage of each case on the 3-objective WFG1 test instance is given in Figure 9. From the experimental results, we have the following observations.

(1) Case 5 is the most frequently used among the five cases, as shown in Figure 9(e), and its usage grows as the iterations increase. The percentage of Case 5 usage is 99.76%, 99.88%, 99.86%, 99.92%, and 99.60% from three to fifteen objectives, respectively. This can be explained by the fact that the ideal point is not updated frequently. The PBI decomposition method plays an important role regardless of whether the number of objectives is high or low.

(2) The second most frequently used case is Case 3, in which the ideal point is updated by a new individual. The new individual offers better convergence during the iteration, so it survives the evolution process; however, an individual must be deleted from the remaining candidate solutions in a steady-state evolutionary algorithm, and the R2 indicator and the decomposition method are introduced to prune the candidate solutions. In this case, there is more than one lowest-contribution individual, and the most crowded subspaces with the lowest contribution contain more than one solution; the removal procedure only considers the solutions in the most crowded subspaces. The execution counts of Case 3 are (80, 171, 300, 396, 1614) for the WFG1 problems from three to fifteen objectives, as shown in Figure 9(c).

(3) The third most commonly used case is Case 4, as shown in Figure 9(d). For the three-objective instance it appears only once during the iterations, and for more than three objectives the execution counts are only (16, 7, 0, 0) from five to fifteen objectives. Concretely, potential solutions in isolated subspaces do occur in some situations; in order to maintain better diversity, all of the isolated solutions survive, and an individual with a relatively better contribution is discarded from the combined population instead.

(4) Both Cases 1 and 2 are scarcely utilized during the evolutionary procedure, as shown in Figures 9(a) and 9(b). However, some benchmark test problems that have not been designed yet may need Cases 1 and 2. From this analysis, the selection cases have the potential to handle real-world optimization problems.

6. Conclusion

This paper provides a new algorithm, R2-MOEA/D, to address many-objective optimization problems. The algorithm can be seen as an integrated selection strategy that combines the R2 indicator and the decomposition method. In the proposed algorithm, the selection procedure contains two aspects: if the ideal point is updated by a new individual, the R2 indicator and decomposition based selection method prunes the combined population of the steady-state evolutionary algorithm; if the ideal point remains unchanged when a new individual is introduced, the PBI decomposition method discards the worst solution via the objective space partition strategy. R2-MOEA/D thoroughly exploits the useful information in the combined candidate solutions, and a set of reference vectors serves as the bridge between the R2 indicator and the decomposition method. According to the empirical results, R2-MOEA/D delivers highly competitive performance on the DTLZ and WFG unconstrained benchmark problems with up to fifteen objectives, compared with four related EMOAs.

Furthermore, the relationship between Pareto dominance, which is based on partial orders, and the R2 indicator, which is based on utility functions, should be carefully analyzed. Properly designing the R2 indicator, by adopting a set of reference vectors and a proper utility function to evaluate EMOAs, would be helpful for MaOPs. Finding the relationship between the R2 indicator and other performance metrics, such as Δp [31] and IGD+ [27], is our future research.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this article.

Acknowledgments

This work was partially supported by the National Natural Science Foundation of China, under Grants 61773106, 61374137, and 51705341, and the State Key Laboratory of Integrated Automation of Process Industry Technology and Research Center of National Metallurgical Automation Fundamental Research Funds, under 2013ZCX02-03. Thanks are due to Oliver Schütze, Carlos A. Coello Coello, Dimo Brockhoff, Ke Li, Md. Asafuddoula, and Qingfu Zhang for providing their source code and the authors acknowledge the software platform: PlatEMO.

Supplementary Materials

The approximation Pareto fronts of the DTLZ and WFG test suites are given in the supplementary materials. The approximation Pareto front obtained by each compared algorithm shows performance similar to the statistical experimental results given in Tables 2 and 3. Figures S1 to S5: to validate the experimental results, the supplementary material gives the obtained 15-objective approximation Pareto fronts; their distribution depends on the distribution of the reference vectors and the geometry of the Pareto front. Figure S1: R2-MOEA/D for the 15-objective DTLZ and WFG test instances. Figure S2: MOMBI-II for the 15-objective DTLZ and WFG test instances. Figure S3: DBEA for the 15-objective DTLZ and WFG test instances. Figure S4: MOEA/D-PBI for the 15-objective DTLZ and WFG test instances. Figure S5: MOEA/DD for the 15-objective DTLZ and WFG test instances. (Supplementary Materials)

References

  1. X. Guo, Y. Wang, and X. Wang, “Using objective clustering for solving many-objective optimization problems,” Mathematical Problems in Engineering, vol. 2013, Article ID 584909, 12 pages, 2013. View at Publisher · View at Google Scholar
  2. E. Cuevas and M. Díaz, “A method for estimating view transformations from image correspondences based on the harmony search algorithm,” Computational Intelligence and Neuroscience, vol. 2015, Article ID 434263, 15 pages, 2015. View at Publisher · View at Google Scholar
  3. Z. Guo, H. Huang, C. Deng, X. Yue, and Z. Wu, “An enhanced differential evolution with elite chaotic local search,” Computational Intelligence and Neuroscience, vol. 2015, no. 6, Article ID 583759, 11 pages, 2015. View at Publisher · View at Google Scholar
  4. P. Tiwari, S. Ghosh, R. K. Sinha et al., “Classification of two class motor imagery tasks using hybrid ga-pso based k-means clustering,” Computational Intelligence and Neuroscience, vol. 2015, Article ID 945729, 11 pages, 2015. View at Publisher · View at Google Scholar
  5. T. Sun and M. h. Xu, “A swarm optimization genetic algorithm based on quantum-behaved particle swarm optimization,” Computational Intelligence and Neuroscience, vol. 2017, Article ID 2782679, 15 pages, 2017. View at Publisher · View at Google Scholar
  6. T. Ray, K. Tai, and C. Seow, “An evolutionary algorithm for multiobjective optimization,” Engineering Optimization, vol. 33, no. 3, pp. 399–424, 2001. View at Google Scholar
  7. M. Asafuddoula, T. Ray, and R. Sarker, “A decomposition-based evolutionary algorithm for many objective optimization,” IEEE Transactions on Evolutionary Computation, vol. 19, no. 3, pp. 445–460, 2015. View at Google Scholar
  8. R. Cheng, Nature inspired optimization of large problems [PhD. thesis], University of Surrey, 2016.
  9. X. He, C. Dai, and Z. Chen, “Many-objective optimization using adaptive differential evolution with a new ranking method,” Mathematical Problems in Engineering, vol. 2014, Article ID 259473, 8 pages, 2014. View at Publisher · View at Google Scholar
  10. B. Li, J. Li, K. Tang, and X. Yao, “Many-objective evolutionary algorithms: A survey,” ACM Computing Surveys (CSUR), vol. 48, no. 1, p. 13, 2015. View at Google Scholar
  11. K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm: NSGA-II,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002. View at Google Scholar
  12. C. A. C. Coello and M. S. Lechuga, “MOPSO: A proposal for multiple objective particle swarm optimization,” in Proceedings of the Evolutionary Computation, 2002. CEC '02. Proceedings of the 2002 Congress on, vol. 2, pp. 1051–1056, IEEE, Honolulu, HI, USA, 2002.
  13. M. Farina and P. Amato, “A fuzzy definition of optimality for many-criteria optimization problems,” IEEE Transactions on Systems, man and cybernetics, Part A: systems and humans, vol. 34, no. 3, pp. 315–326, 2004. View at Google Scholar
  14. M. Köppen, R. Vicente-Garcia, and B. Nickolay, “Fuzzy-pareto-dominance and its application in evolutionary multi-objective optimization,” in Evolutionary Multi-Criterion Optimization, pp. 399–412, Springer, 2005. View at Google Scholar
  15. R. Wang, R. C. Purshouse, and P. J. Fleming, “Preference-inspired coevolutionary algorithms for many-objective optimization,” IEEE Transactions on Evolutionary Computation, vol. 17, no. 4, pp. 474–494, 2013. View at Google Scholar
  16. T. Murata and M. Gen, “Cellular genetic algorithm for multi-objective optimization,” in Proceedings of the 4th Asian Fuzzy System Symposium, pp. 538–542, Citeseer, 2002.
  17. Q. Zhang and H. Li, “MOEA/D: A multiobjective evolutionary algorithm based on decomposition,” IEEE Transactions on Evolutionary Computation, vol. 11, no. 6, pp. 712–731, 2007. View at Google Scholar
  18. K. Li, A. Fialho, S. Kwong, and Q. Zhang, “Adaptive operator selection with bandits for a multiobjective evolutionary algorithm based on decomposition,” IEEE Transactions on Evolutionary Computation, vol. 18, no. 1, pp. 114–130, 2014. View at Google Scholar
  19. N. Al Moubayed, A. Petrovski, and J. McCall, “D2MOPSO: multi-objective particle swarm optimizer based on decomposition and dominance,” in Proceedings of the European Conference on Evolutionary Computation in Combinatorial Optimization, pp. 75–86, Springer, 2012.
  20. A. Pan, H. Tian, L. Wang, and Q. Wu, “A decomposition-based unified evolutionary algorithm for many-objective problems using particle swarm optimization,” Mathematical Problems in Engineering, vol. 2016, Article ID 6761545, 15 pages, 2016. View at Publisher · View at Google Scholar
  21. E. Zitzler and L. Thiele, “Multiobjective evolutionary algorithms: a comparative case study and the strength pareto approach,” IEEE Transactions on Evolutionary Computation, vol. 3, no. 4, pp. 257–271, 1999. View at Google Scholar
  22. E. Zitzler and S. Künzli, “Indicator-based selection in multiobjective search,” in Parallel Problem Solving from Nature-PPSN VIII, vol. 3242, pp. 832–842, Springer, Berlin, Germany, 2004. View at Publisher · View at Google Scholar
  23. C. Igel, N. Hansen, and S. Roth, “Covariance matrix adaptation for multi-objective optimization,” Evolutionary Computation, vol. 15, no. 1, pp. 1–28, 2007. View at Google Scholar
  24. K. Bringmann and T. Friedrich, “Don’t be greedy when calculating hypervolume contributions,” in Proceedings of the tenth ACM SIGEVO workshop on Foundations of genetic algorithms, pp. 103–112, 2009.
  25. A. Menchaca-Mendez and C. A. Coello Coello, “GD-MOEA: A new multi-objective evolutionary algorithm based on the generational distance indicator,” in Lecture Notes in Computer Science, vol. 1, pp. 156–170, EMO, 2015. View at Google Scholar
  26. A. Menchaca-Mendez and C. A. Coello Coello, “GDE-MOEA: a new moea based on the generational distance indicator and ε-dominance,” in Proceedings of the Evolutionary Computation (CEC), 2015 IEEE Congress on, pp. 947–955, IEEE, 2015.
  27. E. M. Lopez and C. A. C. Coello, “Improving the integration of the IGD+ indicator into the selection mechanism of a Multi-objective Evolutionary Algorithm,” in Proceedings of the Evolutionary Computation (CEC), 2017 IEEE Congress on, pp. 2683–2690, IEEE, San Sebastian, Spain, 2017. View at Publisher · View at Google Scholar · View at Scopus
  28. C. A. Rodríguez Villalobos and C. A. Coello Coello, “A new multi-objective evolutionary algorithm based on a performance assessment indicator,” in Proceedings of the 14th annual conference on Genetic and evolutionary computation, pp. 505–512, ACM, 2012.
  29. G. Rudolph, O. Schütze, C. Grimme, C. Domínguez-Medina, and H. Trautmann, “Optimal averaged hausdorff archives for bi-objective problems: theoretical and numerical results,” Computational Optimization and Applications, vol. 64, no. 2, pp. 589–618, 2016. View at Google Scholar
  30. O. Schütze, C. Domínguez-Medina, N. Cruz-Cortés et al., “A scalar optimization approach for averaged hausdorff approximations of the pareto front,” Engineering Optimization, vol. 48, no. 9, pp. 1593–1617, 2016. View at Google Scholar
  31. O. Schütze, X. Esquivel, A. Lara, and C. A. Coello Coello, “Using the averaged hausdorff distance as a performance measure in evolutionary multiobjective optimization,” IEEE Transactions on Evolutionary Computation, vol. 16, no. 4, pp. 504–522, 2012. View at Google Scholar
  32. R. Hernández Gómez and C. A. Coello Coello, “MOMBI: A new metaheuristic for many-objective optimization based on the R2 indicator,” in Proceedings of the Evolutionary Computation (CEC), 2013 IEEE Congress on, pp. 2488–2495, IEEE, Cancun, Mexico, 2013.
  33. H. Trautmann, G. Rudolph, C. Dominguez-Medina, and O. Schütze, “Finding evenly spaced Pareto fronts for three-objective optimization problems,” in EVOLVE-A Bridge between Probability, Set Oriented Numerics, and Evolutionary Computation II, pp. 89–105, Springer, Berlin, Germany, 2013. View at Google Scholar
  34. D. Brockhoff, T. Wagner, and H. Trautmann, “R2 indicator-based multiobjective search,” Evolutionary Computation, vol. 23, no. 3, pp. 369–395, 2015. View at Google Scholar
  35. M. P. Hansen and A. Jaszkiewicz, Evaluating the Quality of Approximations to The Non-Dominated Set, IMM, Department of Mathematical Modelling, Technical University of Denmark, 1998.
  36. R. Hernández Gómez and C. A. Coello Coello, “Improved metaheuristic based on the R2 indicator for many-objective optimization,” in Proceedings of the 2015 on Genetic and Evolutionary Computation Conference, pp. 679–686, ACM, New York, NY, USA, 2015.
  37. H. L. Liu, F. Gu, and Q. Zhang, “Decomposition of a multiobjective optimization problem into a number of simple multiobjective subproblems,” IEEE Transactions on Evolutionary Computation, vol. 18, no. 3, pp. 450–455, 2014. View at Google Scholar
  38. K. Li, K. Deb, Q. Zhang, and S. Kwong, “An evolutionary many-objective optimization algorithm based on dominance and decomposition,” IEEE Transactions on Evolutionary Computation, vol. 19, no. 5, pp. 694–716, 2015. View at Google Scholar
  39. R. Cheng, Y. Jin, M. Olhofer, and B. Sendhoff, “A reference vector guided evolutionary algorithm for many-objective optimization,” IEEE Transactions on Evolutionary Computation, vol. 20, no. 5, pp. 773–791, 2016. View at Google Scholar
  40. A. Díaz-Manríquez, G. Toscano-Pulido, C. A. C. Coello, and R. Landa-Becerra, “A ranking method based on the R2 indicator for many-objective optimization,” in Proceedings of the Evolutionary Computation (CEC), 2013 IEEE Congress on, pp. 1523–1530, IEEE, Cancun, Mexico, 2013.
  41. F. Li, J. Liu, S. Tan, and X. Yu, “R2-MOPSO: A multi-objective particle swarm optimizer based on R2-indicator and decomposition,” in Proceedings of the Evolutionary Computation (CEC), 2015 IEEE Congress on, pp. 3148–3155, IEEE, 2015.
  42. H. Trautmann, T. Wagner, and D. Brockhoff, “R2-EMOA: Focused multiobjective search using R2-indicator-based selection,” in Learning and Intelligent Optimization, pp. 70–74, Springer, Berlin, Germany, 2013. View at Google Scholar
  43. K. Deb and R. B. Agrawal, “Simulated binary crossover for continuous search space,” Complex Systems, vol. 9, no. 3, pp. 1–15, 1994. View at Google Scholar
  44. K. Deb and M. Goyal, “A combined genetic adaptive search (GeneAS) for engineering design,” Computer Science and Informatics, vol. 26, no. 4, pp. 30–45, 1996. View at Google Scholar
  45. T. Tušar and B. Filipič, “Visualization of Pareto front approximations in evolutionary multiobjective optimization: A critical review and the prosection method,” IEEE Transactions on Evolutionary Computation, vol. 19, no. 2, pp. 225–245, 2015. View at Google Scholar
  46. K. Deb, L. Thiele, M. Laumanns, and E. Zitzler, “Scalable multi-objective optimization test problems,” in Proceedings of the Evolutionary Computation, 2002. CEC '02. Proceedings of the 2002 Congress on, vol. 1, pp. 825–830, IEEE, Honolulu, HI, USA, 2002.
  47. S. Huband, P. Hingston, L. Barone, and L. While, “A review of multiobjective test problems and a scalable test problem toolkit,” IEEE Transactions on Evolutionary Computation, vol. 10, no. 5, pp. 477–506, 2006. View at Google Scholar
  48. Y. Tian, R. Cheng, X. Zhang, and Y. Jin, “PlatEMO: A MATLAB platform for evolutionary multi-objective optimization,” IEEE Computational Intelligence Magazine, vol. 12, no. 4, pp. 73–87, 2017. View at Google Scholar