A Simplified Hypervolume-Based Evolutionary Algorithm for Many-Objective Optimization
Evolutionary algorithms based on hypervolume have demonstrated good performance in solving many-objective optimization problems. However, computing the hypervolume requires prohibitively expensive computational effort. This paper proposes a simplified hypervolume calculation method that can be used to roughly evaluate the convergence and diversity of solutions. The main idea is to use the nearest neighbors of a particular solution to calculate a volume that serves as the solution's hypervolume value. Moreover, this paper improves the selection operator and the update strategy of the external population according to the simplified hypervolume. The proposed algorithm (SHEA) is then compared with several state-of-the-art algorithms on the fifteen test functions of the CEC2018 MaOP competition, and the experimental results demonstrate the feasibility of the proposed algorithm.
Multiobjective optimization problems (MOPs) arise in numerous real-world applications. A minimization MOP, which often has two or three objectives, can be defined as follows: minimize F(x) = (f_1(x), f_2(x), ..., f_m(x)), subject to x ∈ Ω, where Ω is an n-dimensional decision space, x is an n-dimensional decision variable, and F(x) contains m conflicting objective functions.
In the last few decades, multiobjective evolutionary algorithms (MOEAs) [2–5] have been proposed to solve MOPs. However, when solving MOPs with more than three objectives, also known as many-objective optimization problems (MaOPs), these MOEAs encounter challenges. First of all, the proportion of nondominated solutions among the candidate solutions rises steeply as the number of objectives increases, which severely weakens the selection pressure toward the Pareto front (PF). Secondly, the population size cannot be made arbitrarily large for reasons of computational efficiency, so the limited number of solutions may lie far from each other in the high-dimensional objective space, causing offspring to stray far from their parents. Lastly, the computational complexity of performance metrics grows exponentially with the number of objectives.
To solve these problems, three main categories of many-objective evolutionary algorithms (MaOEAs) have emerged. The first category modifies the dominance relationship [7–9] to enhance the selection pressure toward the PF. This idea has been widely employed and has shown considerable improvement. However, these approaches need additional effort in designing diversity maintenance mechanisms.
The second category uses decomposition-based methods to solve MaOPs. The main idea is to decompose a many-objective optimization problem into a set of subproblems and optimize them collaboratively. The most representative algorithms are MOEA/D and its variants [11–13]; other decomposition-based methods include MOEA/D-M2M and DBEA [16–18]. These approaches are adept at maintaining diversity and avoiding local optima but are ineffective on highly irregular PFs.
The third category comprises indicator-based evolutionary algorithms. Indicators such as hypervolume weigh both the convergence and the diversity of solutions to enhance the selection pressure and guide the search toward the PF. IBEA, SMS-EMOA, and HypE are three classical indicator-based evolutionary algorithms. Unfortunately, their computational cost becomes excessively expensive because of the high complexity of computing the indicator.
So the key question is how to reduce the computational complexity while keeping the advantages of the hypervolume indicator. The major contributions of this paper can be summarized as follows:
(1) A simplified hypervolume calculation method is proposed to roughly evaluate the convergence and diversity of solutions.
(2) To enhance the quality of offspring, a new selection operator based on the simplified hypervolume is proposed to choose excellent parents.
(3) The simplified hypervolume, together with nondomination, is used in a new update strategy of the external population to store solutions with good convergence and diversity.
In the remainder of this paper, Section 2 describes the proposed algorithm, Section 3 presents the experimental results and related analysis, and Section 4 concludes.
2. The Proposed Algorithm
A simplified hypervolume-based evolutionary algorithm for many-objective optimization (SHEA) is proposed to solve MaOPs. The core of this paper is a new hypervolume calculation method that roughly evaluates the convergence and diversity of solutions. This simplified hypervolume is then used to improve the selection operator and the update strategy.
2.1. Simplified Hypervolume
To get the hypervolume value, the normalized population is first sorted by each objective function value. For each solution in each sorted sequence, the reference point is the component-wise maximum of the two solutions on either side of that solution.
Thereafter, the volume between the particular solution and its reference point is calculated. For boundary solutions, only the volumes between the boundary solution and its single adjacent solution are calculated, and the terms whose objective function values are boundary values are removed. The calculation formula is given in (2). Figure 1 illustrates the calculation; to make it easier to understand, the MOP in Figure 1 has only two objectives.
Then, the minimum of the particular solution's volumes over all objectives is kept as its hypervolume value, as in (3), where a small positive number is used to avoid zero terms and the remaining quantities are the reference point and the volume of the ith solution for the jth objective.
When the region around a solution is sparse, i.e., its adjacent solutions are far away, its volume is large. As for convergence, when each objective function value of a solution is small, i.e., the solution is far from its reference point, its volume is also large. So, the larger a solution's simplified hypervolume, the better its convergence and diversity.
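The procedure above can be sketched in code. This is a minimal NumPy sketch, assuming the reference point is the component-wise maximum of a solution's two neighbors in each sorted sequence and that non-positive difference terms are floored at a small positive ε; `simplified_hv` and its argument names are illustrative, not the paper's implementation:

```python
import numpy as np

def simplified_hv(F, eps=1e-6):
    """Rough per-solution simplified hypervolume (sketch).
    F: (N, m) matrix of normalized objective values (minimization)."""
    N, m = F.shape
    vols = np.empty((N, m))
    for j in range(m):
        order = np.argsort(F[:, j])          # sort population by objective j
        for pos in range(N):
            i = order[pos]
            # neighbors on either side in the j-sorted sequence
            # (boundary solutions have only one adjacent solution)
            nbrs = []
            if pos > 0:
                nbrs.append(F[order[pos - 1]])
            if pos < N - 1:
                nbrs.append(F[order[pos + 1]])
            ref = np.max(nbrs, axis=0)       # component-wise max as reference point
            diff = np.maximum(ref - F[i], eps)  # floor non-positive terms at eps
            vols[i, j] = np.prod(diff)       # volume between solution and reference
    return vols.min(axis=1)                  # keep the minimum over all objectives
```

On a simple two-objective front such as [[0, 1], [0.5, 0.5], [1, 0]], the interior solution receives the largest value, consistent with the sparsity argument above.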
2.2. Selection Operator
The new selection operator aims to choose parents with good convergence and diversity so as to generate high-quality offspring via differential evolution, in which the trial vector is built dimension by dimension in the decision space, CR is the crossover rate, and the difference parents are selected from the T nearest neighbors of the solution x.
The new selection operator calculates the hypervolume of the nondominated neighbors and retains the top H solutions. To accelerate convergence, the base parent is chosen as the solution among the H retained ones that minimizes the Tchebycheff function, where a given weight vector and a reference point built from the minimum of each objective are used.
Then, the remaining parents are randomly chosen among the neighbors, excluding the base parent.
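The variation step described above can be sketched as standard differential evolution, i.e., mutation with the scaled difference of two neighbor-selected parents followed by binomial crossover. This is a generic DE sketch, not the paper's exact operator; `de_trial` and its parameter names are illustrative:

```python
import numpy as np

def de_trial(x, p1, p2, F=0.5, CR=0.5, rng=None):
    """Standard DE variation (sketch): differential mutation plus
    binomial crossover with rate CR.
    x: base parent; p1, p2: parents drawn from x's T nearest neighbors."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(x)
    v = x + F * (p1 - p2)            # differential mutation
    mask = rng.random(n) < CR        # binomial crossover mask
    mask[rng.integers(n)] = True     # guarantee at least one mutated dimension
    return np.where(mask, v, x)      # trial vector mixes v and x per dimension
```

With CR = 0.5 and F = 0.5 (the settings reported in Section 3.1), roughly half of the dimensions are taken from the mutant vector on average.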
2.3. Update Strategy of External Population
The external population is adopted to store solutions with good convergence and diversity. Among the nondominated solutions, those with smaller hypervolume values are deleted to maintain the size of the external population. To retain diversity, m solutions in the external population are selected from the boundary solutions; the others come from intermediate solutions.
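A minimal sketch of this update is given below, assuming a simple pairwise dominance check (minimization) and truncation by precomputed simplified-hypervolume values `hv`; the function name and signature are illustrative:

```python
import numpy as np

def update_external(F, hv, capacity):
    """Sketch of the external-population update: keep nondominated
    solutions and, if still over capacity, delete those with the
    smallest simplified-hypervolume values. Returns kept indices.
    F: (N, m) objective matrix; hv: (N,) simplified hypervolume values."""
    N = len(F)
    nd = np.ones(N, dtype=bool)
    for i in range(N):
        others = np.delete(np.arange(N), i)
        # i is dominated if some other solution is no worse in every
        # objective and strictly better in at least one
        dom = np.all(F[others] <= F[i], axis=1) & np.any(F[others] < F[i], axis=1)
        nd[i] = not dom.any()
    keep = np.flatnonzero(nd)
    if len(keep) > capacity:
        # delete the solutions with the smallest hypervolume values first
        keep = keep[np.argsort(hv[keep])[::-1][:capacity]]
    return keep
```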
2.4. Steps of the Proposed Algorithm
SHEA works as follows (Algorithm 1):
To update the population, the solutions in the combined set are sorted by nondomination and kept, rank by rank, from the first nondomination level to the last until the total number kept exceeds N. Then, the cosine between each kept solution and each weight vector is calculated, and each solution is assigned to the weight vector with the maximum cosine. For each weight vector, if its assigned set is not empty, the solution minimizing a modified Tchebycheff function is saved; otherwise, the solution with the minimum Tchebycheff value over the whole kept set is saved.
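The assignment-by-cosine and scalarization steps can be sketched as follows. Note that `tchebycheff` implements the standard Tchebycheff function, not the paper's modified version, and both function names are illustrative:

```python
import numpy as np

def assign_by_cosine(F, W):
    """Assign each solution (row of F) to the weight vector (row of W)
    with which it has the maximum cosine."""
    Fn = F / np.linalg.norm(F, axis=1, keepdims=True)
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
    return np.argmax(Fn @ Wn.T, axis=1)   # index of best-matching weight vector

def tchebycheff(f, lam, z):
    """Standard Tchebycheff scalarization: max_j lam_j * |f_j - z_j|,
    with weight vector lam and reference point z."""
    return np.max(lam * np.abs(f - z))
```

For each weight vector, the surviving solution would then be the assigned one with the smallest scalarized value.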
In the proposed algorithm, the solutions (population, offspring, and external population) and the weight vectors need to be stored, so the space complexity is linear in the number of stored solutions and weight vectors (in this paper, the external population holds about 2N solutions). The major computation of the proposed algorithm consists of the selection operator and the update strategy of the external population. Both of them use the simplified hypervolume, which needs only basic operations (i.e., addition, subtraction, multiplication, division, and comparison): the selection operator uses them to calculate the simplified hypervolume and to choose parents, the update of the external population needs at most a comparable number of basic operations, and the update strategy of the population needs no more than that either. Altogether, the computational cost of SHEA is dominated by these basic operations.
3. Experimental Study
3.1. Experimental Settings
In this section, the proposed algorithm is compared with four state-of-the-art algorithms, namely, NSGAIII, MOEA/DD, KnEA, and RVEA, on fifteen many-objective benchmark functions (MaF) from the CEC2018 MaOP competition. Each problem is tested with 5, 10, and 15 objectives. NSGAIII supplies and updates well-spread reference points adaptively to maintain diversity among population members. MOEA/DD exploits the merits of both dominance- and decomposition-based approaches to balance the convergence and diversity of the evolutionary process. KnEA is a knee-point-driven EA for solving MaOPs. RVEA adopts a scalarization approach named angle-penalized distance to balance convergence and diversity.
All fifteen test problems are run for each algorithm on PlatEMO, and average results over 20 independent runs are reported. In the proposed algorithm, the size EN of the external population is about 2N; the crossover probability of the SBX operator is 1; T is 0.1N; J is 0.9; CR is 0.5; and F is 0.5. Other settings follow the standard of the CEC2018 MaOP competition. The algorithms are run in MATLAB on a PC with an Intel Core i5-3210M CPU (2.50 GHz per core) and the Windows 7 operating system.
3.2. Performance Metrics
The comparison experiment employs the inverted generational distance (IGD) to judge the performance of these algorithms: IGD(P*, P) = Σ_{v ∈ P*} d(v, P) / |P*|, where P* is a set of points uniformly sampled over the true PF, P is the population obtained by an MOEA, and d(v, P) is the Euclidean distance between v and its nearest neighbor in P.
IGD comprehensively measures the convergence and diversity of the population: the smaller the IGD value, the closer the population is to the Pareto front. For each problem, around 10000 points on the Pareto front are uniformly sampled to calculate IGD. Besides, to compare the algorithms' mean IGD statistically, the Wilcoxon rank-sum test with a significance level of 0.05 is used.
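The IGD definition above can be computed directly; a minimal NumPy sketch (function name illustrative):

```python
import numpy as np

def igd(ref, pop):
    """IGD: average Euclidean distance from each sampled PF point
    (row of ref) to its nearest solution in the population (rows of pop)."""
    # pairwise distances via broadcasting: shape (|ref|, |pop|)
    d = np.linalg.norm(ref[:, None, :] - pop[None, :, :], axis=2)
    return d.min(axis=1).mean()
```

A population that exactly covers the sampled PF points yields IGD = 0; missing a region of the PF increases the metric, which is why IGD captures diversity as well as convergence.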
3.3. Comparative Studies
Table 1 shows the mean and standard deviation of the IGD values given by the five MaOEAs on the test problems with 5, 10, and 15 objectives over 20 independent runs. The best performance for each test problem is marked in bold, while "+," "=," and "-" mean the proposed algorithm is better than, the same as, and worse than the compared algorithm, respectively.
On the forty-five problem instances in Table 1, SHEA statistically outperforms all the compared algorithms on 21 instances, which reveals the good performance of the proposed algorithm in terms of IGD. NSGAIII, MOEA/DD, KnEA, and RVEA behave better than SHEA on four, four, nine, and five instances, respectively, while SHEA does better than NSGAIII, MOEA/DD, KnEA, and RVEA on thirty-two, twenty-nine, twenty-five, and thirty instances, respectively.
Among the fifteen MaFs, eight problems (F1, F2, F4, F5, F7, F8, F9, and F15) have partial PFs, whose projections do not fully cover the unit hyperplane. On their instances, the mean IGDs of SHEA are smaller than those of NSGAIII, MOEA/DD, KnEA, and RVEA on twenty-two, nineteen, sixteen, and twenty instances, respectively. For the six problems (F3, F10, F11, F12, F13, and F14) whose PF projections fully cover the unit hyperplane, the mean IGDs of SHEA are smaller than those of NSGAIII, MOEA/DD, KnEA, and RVEA on eleven, eleven, twelve, and twelve instances, respectively. As for problem F6, whose PF is degenerate, SHEA is superior to NSGAIII, MOEA/DD, KnEA, and RVEA on three instances in terms of IGD. All of these comparison results indicate the best overall performance of SHEA on most problems and demonstrate the effectiveness of the simplified hypervolume in estimating convergence and diversity.
4. Conclusion
To simplify the calculation of hypervolume, a new simplified hypervolume is proposed to roughly estimate the convergence and diversity of solutions; the new method is then used in the selection operator and the update strategy of the external population. Comparative experiments with four state-of-the-art algorithms show that the proposed algorithm performs well.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (nos. 61806120, 61502290, 61401263, 61672334, and 61673251), China Postdoctoral Science Foundation (no. 2015M582606), Industrial Research Project of Science and Technology in Shaanxi Province (nos. 2015GY016 and 2017JQ6063), Fundamental Research Fund for the Central Universities (no. GK202003071), and Natural Science Basic Research Plan in Shaanxi Province of China (no. 2016JQ6045).
C. A. Coello Coello, G. B. Lamont, and D. A. Van Veldhuizen, Evolutionary Algorithms for Solving Multi-Objective Problems, Springer US, Boston, MA, USA, 2007.
E. Zitzler and S. Künzli, "Indicator-based selection in multiobjective search," in Proceedings of the International Conference on Parallel Problem Solving from Nature, pp. 832–842, Springer, Berlin, Germany, September 2004.
E. J. Hughes, "Evolutionary many-objective optimisation: many once or one many?" in Proceedings of the 2005 IEEE Congress on Evolutionary Computation, pp. 222–227, Edinburgh, UK, September 2005.
X. F. Zou, Y. Chen, M. Z. Liu, and L. S. Kang, "A new evolutionary algorithm for solving many-objective optimization problems," IEEE Transactions on Systems, Man, and Cybernetics, Part B, vol. 38, no. 5, pp. 1402–1412, 2008.
Q. F. Zhang and H. Li, "MOEA/D: a multiobjective evolutionary algorithm based on decomposition," IEEE Transactions on Evolutionary Computation, vol. 11, no. 6, pp. 712–731, 2007.
K. Deb and H. Jain, "An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, Part I: solving problems with box constraints," IEEE Transactions on Evolutionary Computation, vol. 18, no. 4, pp. 577–601, 2014.
M. Asafuddoula, H. K. Singh, and T. Ray, "An enhanced decomposition-based evolutionary algorithm with adaptive reference vectors," IEEE Transactions on Cybernetics, vol. 48, no. 8, pp. 2321–2334, 2017.
H. Lin, L. Chen, Q. Zhang, and K. Deb, "Adaptively allocating search effort in challenging many-objective optimization problems," IEEE Transactions on Evolutionary Computation, vol. 22, no. 3, pp. 433–448, 2018.
A. Auger, J. Bader, D. Brockhoff, and E. Zitzler, "Theory of the hypervolume indicator: optimal μ-distributions and the choice of the reference point," in Proceedings of the Foundations of Genetic Algorithms X, pp. 87–102, Orlando, FL, USA, 2009.
K. V. Price, R. M. Storn, and J. A. Lampinen, Differential Evolution: A Practical Approach to Global Optimization, Springer, Berlin, Germany, 2005.
R. Cheng, M. Li, Y. Tian et al., "Benchmark functions for CEC'2018 competition on many-objective optimization," Tech. Rep. B15.2TT, CERCIA, School of Computer Science, University of Birmingham, Edgbaston, Birmingham, UK, 2017.
R. G. D. Steel, J. H. Torrie, and D. A. Dickey, Principles and Procedures of Statistics: A Biometrical Approach, McGraw-Hill, New York, NY, USA, 1997.