Research Article  Open Access
Naili Luo, Wu Lin, Peizhi Huang, Jianyong Chen, "An Evolutionary Algorithm with Clustering-Based Assisted Selection Strategy for Multimodal Multiobjective Optimization", Complexity, vol. 2021, Article ID 4393818, 13 pages, 2021. https://doi.org/10.1155/2021/4393818
An Evolutionary Algorithm with Clustering-Based Assisted Selection Strategy for Multimodal Multiobjective Optimization
Abstract
In multimodal multiobjective optimization problems (MMOPs), multiple Pareto optimal sets, and even some good local Pareto optimal sets, should be preserved, as they provide more choices for decision-makers. To solve MMOPs, this paper proposes an evolutionary algorithm with a clustering-based assisted selection strategy for multimodal multiobjective optimization, in which an addition operator and a deletion operator are proposed to comprehensively consider the diversity in both decision and objective spaces. Specifically, in decision space, the union population is partitioned into multiple clusters by a density-based clustering method, aiming to assist the addition operator in strengthening the population diversity. Then, a number of weight vectors are adopted to divide the population into N subregions in objective space (N is the population size). Moreover, in the deletion operator, the solutions in the most crowded subregion are first collected into the previous clusters, and then the worst solution in the most crowded cluster is deleted until there are N solutions left. Our algorithm is compared with other multimodal multiobjective evolutionary algorithms on well-known benchmark MMOPs. Numerical experiments demonstrate the effectiveness and advantages of the proposed algorithm.
1. Introduction
Many real-world optimization problems involve multiple (often conflicting) objectives [1–3], which are usually modeled as follows: minimize F(x) = (f_1(x), f_2(x), …, f_m(x)), where x is an n-dimensional decision vector in the decision space and f_1(x), …, f_m(x) are, respectively, the objective values of x under specific mapping functions. The Pareto optimal set (PS) of an MOP is the set of its nondominated solutions, and the Pareto optimal front (PF) is the set of their objective vectors [4–6]. To obtain the optimal solutions of an MOP, many multiobjective evolutionary algorithms (MOEAs) have been proposed to evolve a population of solutions, aiming to approximate the true PF [7–9]. Due to the advantage of optimizing multiple solutions simultaneously, MOEAs have become a widely used approach for various kinds of optimization problems; they include MOEAs based on the Pareto dominance relation [10–13], MOEAs based on performance indicators [14–16], and MOEAs based on decomposition [17–25].
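The notion of Pareto dominance underlying the PS/PF definitions can be sketched in a few lines of Python (a minimal illustration assuming minimization; the function names are ours, not from the paper):

```python
def dominates(a, b):
    """Return True if objective vector a Pareto-dominates b:
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Filter a list of objective vectors down to the nondominated ones."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```

For example, among the vectors (1, 2), (2, 1), (2, 2), and (3, 3), only the first two are nondominated; their set plays the role of the PF approximation in an MOEA's population.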
Recently, MOPs with multiple distinct PSs or some acceptable local PSs [26–28] have drawn much attention; they are called multimodal multiobjective optimization problems (MMOPs). In recent years, several studies considering the diversity in decision space have been proposed to solve MMOPs. In Omni-optimizer [29], the crowding distance in decision space is regarded as a diversity indicator and embedded into NSGA-II, which strengthens the diversity of the population. DN-NSGA-II [30] aims to maintain solutions that are distinct in decision space while their objective values are equal. Moreover, MOEA/D-AD [31] incorporates addition and deletion operators into MOEA/D, so that it can obtain multiple nondominated solutions that differ in decision space, which strengthens the diversity of the population in decision space. Furthermore, other heuristic algorithms have been receiving more attention for MMOPs, e.g., particle swarm optimization (PSO) algorithms [32–34]. For example, the PSO algorithm proposed in [32] is an effective niching algorithm that does not need any niching parameter. In MO_Ring_PSO_SCD [33], a ring topology and a special crowding distance are incorporated into the PSO algorithm, helping to ensure greater diversity. Similarly, in [34], the neighborhood in decision space is constructed by a self-organizing map network, aiming to approximate multiple PSs for solving MMOPs. Recently, in TriMOEA-TA&R [35], two archives are used cooperatively to balance the convergence in objective space and the diversity in both decision and objective spaces. Due to the complexity and unpredictability of decision-making, more solutions with good performance, i.e., multiple distinct global PSs and even some local PSs of good quality, should be provided for decision-making [35].
However, when tackling MMOPs with at least one local PS, once the global PSs are found during the evolutionary process of the above multimodal MOEAs (MMOEAs) [29–35], the local PSs will not survive in the next-generation population, as local PSs are often dominated by the global PSs and are usually replaced by solutions with better convergence. To solve MMOPs, we develop a novel evolutionary algorithm with a clustering-based assisted selection strategy for multimodal multiobjective optimization (MMOEA-CAS). In decision space, the union population is first partitioned into multiple clusters by a density-based clustering method, which assists the addition operator in strengthening the diversity of the population. Then, a number of weight vectors divide the population into N subregions in objective space (N is the population size), which helps to extend the diversity of the population. Moreover, in the deletion operator, the solutions in the most crowded subregion are collected into the previous clusters, and then the worst solution in the most crowded cluster is deleted until there are N solutions left. Therefore, our approach is able to prevent the loss of local optimal solutions and helps to realize a good balance in the diversity of the population. The main contributions of our approach are stated as follows:

(1) Different from traditional environmental selection mechanisms, in MMOEA-CAS, a density-based clustering method is applied on the population to obtain multiple clusters according to the neighboring relationship of solutions in decision space. Based on these clustering results, the solutions with better performance in each cluster have priority to be selected in the addition operator. In this way, the diversity in decision space can be ensured.

(2) The deletion operator proposed in our method considers the solutions' density information in both decision and objective spaces. First, a set of weight vectors is used to divide the candidate solutions into different subregions based on their locations in objective space; then the solutions in the most crowded subregion are classified into the previous clusters, and the most crowded solution in the fullest cluster is deleted. In this way, the more promising solutions are reserved for the next generation.
The remainder of this paper is organized as follows. Some definitions about MMOPs and an introduction to the adopted clustering method are given in Section 2. Then, the framework of our algorithm with the addition and deletion operators is described in Section 3. The experimental setup and results are presented and analyzed in Sections 4 and 5, respectively. Finally, conclusions and future directions are discussed in Section 6.
2. Preliminaries
2.1. MMOPs
In this subsection, we give a case of MMOPs, which is shown in Figure 1. If no solution in a set PS_1 is dominated by any other solution in the feasible region, PS_1 is regarded as a global PS, and the global PF is the set of its objective vectors; the global PS/PF is marked in red in Figure 1. If no solution in a set PS_2 is dominated by any other solution in its neighborhood (the local population), PS_2 is said to be a local PS, and the local PF is the set of its objective vectors; the local PS/PF is marked in blue in Figure 1. Note that the local population is defined as the set of neighboring solutions in decision space. Thus, MOPs with multiple distinct PSs or local PSs [26–28] are called multimodal multiobjective optimization problems (MMOPs). For MMOPs, multiple different PSs, including the global PSs and some good local PSs, should be provided for the decision-maker, which helps to find more suitable solutions when making decisions. Therefore, the diversity of solutions in both objective and decision spaces should be considered equally important in the selection mechanisms of MOEAs for solving MMOPs.
2.2. DensityBased Clustering Method
It is well known that DBSCAN [36], with the neighborhood parameters ε and MinPts (MinPts is a positive integer), is a density-based clustering method that finds clusters defined as maximal density-connected sets of samples derived from the density-reachability relation. For each sample x in a data set D, the related definitions are given as follows:

ε-neighborhood: the ε-neighborhood of x is defined as N_ε(x) = {y ∈ D | dist(x, y) ≤ ε}, where dist(x, y) is the Euclidean distance between x and y.

Core object: a sample x is said to be a core object if its ε-neighborhood contains at least MinPts samples, i.e., |N_ε(x)| ≥ MinPts.

Directly density-reachable: a sample y is directly density-reachable from x if y ∈ N_ε(x) and x is a core object.

Density-reachable: a sample y is density-reachable from x if there is a chain of samples p_1, p_2, …, p_n with p_1 = x and p_n = y such that p_{i+1} is directly density-reachable from p_i.

Density-connected: samples x and y are density-connected if there is a sample p from which both x and y are density-reachable.
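The definitions above translate almost directly into code. The following is a minimal, illustrative DBSCAN sketch in Python (standard library only); unlike the full algorithm, it does not label noise points explicitly but simply leaves points reachable from no core object outside every cluster:

```python
import math

def dbscan(data, eps, min_pts):
    """Minimal DBSCAN sketch: returns a list of clusters (lists of indices).
    Each cluster is grown as a density-connected set from a core object."""
    def neighbors(i):
        # eps-neighborhood of point i (includes i itself)
        return [j for j in range(len(data))
                if math.dist(data[i], data[j]) <= eps]

    visited, clusters = set(), []
    for i in range(len(data)):
        if i in visited or len(neighbors(i)) < min_pts:
            continue  # skip visited points and non-core objects
        cluster, frontier = [], [i]
        while frontier:
            j = frontier.pop()
            if j in visited:
                continue
            visited.add(j)
            cluster.append(j)
            nbrs = neighbors(j)
            if len(nbrs) >= min_pts:   # j is a core object, so its
                frontier.extend(nbrs)  # neighbors are density-reachable
        clusters.append(cluster)
    return clusters
```

For two well-separated groups of points, this returns one cluster per group, which is exactly the behavior MMOEA-CAS relies on to separate neighboring solutions in decision space.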
3. The Proposed Algorithm
3.1. General Framework
In this section, we introduce the general framework of MMOEA-CAS (Algorithm 1). At first, N solutions are randomly generated in the decision space, forming a population P in line 1. In line 2, multiple weight vectors are initialized [37]. Then, the recombination operators are applied on the parent population to generate an offspring population P′. Next, a union population U is formed by merging P′ and P in line 5. In line 6, according to the Pareto dominance relation [10], the solutions in U are ranked into different Pareto fronts F_1, …, F_t. As shown in lines 7-8, after normalizing U, DBSCAN is used to divide the normalized population into multiple clusters C_1, …, C_K in decision space. Then, the addition operator selects the promising solutions into the next-generation population P with more than N solutions in line 9. Next, in lines 10-11, after normalizing P, the association procedure with a set of weight vectors divides P into N subregions in objective space. Finally, in line 12, the deletion operator is run on P to delete the most crowded solution each time until there are N solutions left in P. If the maximal number of evaluations is reached, P is regarded as the final solution set, as shown in line 14; otherwise, the procedure in lines 4–12 is repeated until the stopping criterion is met.
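Line 2 of Algorithm 1 initializes a set of uniformly spread weight vectors [37]. One common construction is the simplex-lattice design sketched below; whether [37] uses exactly this scheme is our assumption, and the function name is illustrative:

```python
from itertools import combinations

def uniform_weights(m, h):
    """Simplex-lattice weight vectors for m objectives: all vectors
    (w_1, ..., w_m) with each w_i in {0, 1/h, ..., h/h} and sum(w) = 1.
    Produces C(h + m - 1, m - 1) vectors via a stars-and-bars enumeration."""
    vectors = []
    for dividers in combinations(range(h + m - 1), m - 1):
        prev, w = -1, []
        for d in dividers:
            w.append((d - prev - 1) / h)  # gap size between dividers
            prev = d
        w.append((h + m - 2 - prev) / h)  # remaining mass in last objective
        vectors.append(tuple(w))
    return vectors
```

With m = 2 objectives and h = 4 divisions, this yields the five vectors (0, 1), (0.25, 0.75), (0.5, 0.5), (0.75, 0.25), and (1, 0), each of which anchors one subregion of the objective space.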

3.2. Addition Operator
In this section, the addition operator (Algorithm 2) is introduced to select promising solutions so as to maintain the diversity of the population. As described in lines 1-2, the nondominated solutions in F_1 are first selected and added into an empty set P. To further strengthen the diversity, in lines 3–6, the nondominated solutions in each cluster C_k (k = 1, …, K) are also added to P. In addition, solutions with better convergence in objective space are further selected with higher priority until the size of P is not less than N: as shown in lines 8–11, solutions from F_2 to F_t are used to fill P according to their respective priorities. Once there are no fewer than N solutions, the addition operator ends and returns the temporary population P.
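The selection logic of Algorithm 2 can be sketched as follows. This is an illustrative simplification, not the paper's exact procedure: solutions are handled as indices into precomputed `fronts` (from nondominated sorting) and `clusters` (from DBSCAN) lists, and the cluster-local nondominated solutions are approximated here by the best nondomination rank within each cluster:

```python
def addition_operator(fronts, clusters, n):
    """Sketch of the addition operator: take the global first front F1,
    add each cluster's locally best-ranked solutions, then fill from the
    remaining fronts F2..Ft until at least n solutions are selected."""
    selected = list(dict.fromkeys(fronts[0]))  # F1 first (dedup, keep order)
    rank = {i: r for r, front in enumerate(fronts) for i in front}
    for cluster in clusters:
        best = min(rank[i] for i in cluster)
        for i in cluster:                       # cluster-local nondominated
            if rank[i] == best and i not in selected:  # set, approximated
                selected.append(i)                     # by best rank
    for front in fronts[1:]:                    # fill by front priority
        if len(selected) >= n:
            break
        selected.extend(i for i in front if i not in selected)
    return selected
```

The key property mirrored from the paper is that a solution dominated globally can still enter P when it is among the best within its own decision-space cluster, which is how local PSs survive.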

3.3. Association Procedure
After the addition operator is run, the association procedure (Algorithm 3) is applied on the normalized P to divide the population into N subregions in objective space by a set of weight vectors. At first, N empty subregions are initialized in line 1. Next, for each solution x in P, its distances to all weight vectors are computed, and then x is associated with its closest subregion, as shown in lines 3–7. When all solutions have been associated with their closest subregions, the association procedure ends and the N subregions are returned in line 9.
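A minimal sketch of the association procedure might look as follows; plain Euclidean distance between normalized objective vectors and weight vectors is assumed here, since the section does not spell out the exact distance measure:

```python
import math

def associate(objs, weights):
    """Sketch of the association procedure: each (normalized) objective
    vector is assigned to the subregion of its closest weight vector.
    Returns one list of solution indices per weight vector."""
    subregions = [[] for _ in weights]  # possibly empty subregions
    for i, f in enumerate(objs):
        k = min(range(len(weights)), key=lambda j: math.dist(f, weights[j]))
        subregions[k].append(i)
    return subregions
```

Note that a subregion may end up empty or hold several solutions; the deletion operator then targets the subregion holding the most.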

3.4. Deletion Operator
After the N subregions are obtained by the association procedure, the deletion operator (Algorithm 4) is used to prune P down to N final solutions. The harmonic average distance (HAD) method [38] is used to reflect the crowding degree of each solution x in decision space. Thus, as shown in lines 2-3, after normalizing P, the HAD value of each solution is computed by equation (5), i.e., HAD(x) = (|P| − 1) / Σ_{y∈P, y≠x} 1/dist(x, y), where dist is the Euclidean distance between two normalized solutions; a smaller HAD value indicates a more crowded solution.

In line 4, the most crowded subregion, i.e., the subregion containing the largest number of solutions, is found by equation (6). In line 5, the solutions in this subregion are classified into S_1, …, S_K by equation (7), according to the clusters C_1, …, C_K they belong to in decision space. Then, in line 6, the most crowded cluster S_k, i.e., the cluster containing the most of these solutions, is found by equation (8). In lines 7-8, the solution x with the smallest HAD value in S_k is deleted from its subregion and from P. Finally, the deletion operator ends when there are N solutions left, and P is returned in line 10 as the final population.
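Putting the pieces together, one pruning step of the deletion operator can be sketched as below. The harmonic-average-distance form and the "fullest subregion/fullest cluster" crowding criteria are our reading of equations (5)–(8), so treat this as an assumption-laden illustration rather than the paper's exact implementation:

```python
import math

def had(i, pop):
    """Harmonic average distance of solution i to the other members of pop;
    a smaller value means a more crowded solution (assumed form of eq. (5))."""
    s = sum(1.0 / math.dist(pop[i], pop[j]) for j in range(len(pop)) if j != i)
    return (len(pop) - 1) / s

def deletion_step(pop, subregions, cluster_of):
    """One pruning step of the deletion operator sketch: locate the most
    crowded subregion, group its members by decision-space cluster, and
    return the index of the smallest-HAD solution in the fullest cluster."""
    crowded = max(subregions, key=len)          # eq. (6): fullest subregion
    groups = {}
    for i in crowded:                           # eq. (7): regroup by cluster
        groups.setdefault(cluster_of[i], []).append(i)
    worst_cluster = max(groups.values(), key=len)  # eq. (8): fullest cluster
    return min(worst_cluster, key=lambda i: had(i, pop))
```

Repeating `deletion_step` and removing the returned index until |P| = N reproduces the loop described in Algorithm 4.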
4. Experimental Settings
4.1. Test Instances and Parameter Settings
In the experiments, a series of MMOPs from CEC 2019 [28] are used to validate the performance of all compared algorithms, i.e., some MMOPs with multiple distinct PSs, some MMOPs with irregularly shaped PSs (i.e., MMF1_e, MMF1_z, and MMF5-7), and some MMOPs with coexisting global and local PSs (MMF10-13, MMF15, and MMF15_a).
Our proposed algorithm MMOEA-CAS is compared with four MMOEAs (i.e., Omni-optimizer [29], DN-NSGA-II [30], MO_Ring_PSO_SCD [33], and TriMOEA-TA&R [35]) on all the adopted MMOPs. The parameter settings of the compared algorithms follow the suggestions in their original references and are given in Table 1.

Note that simulated binary crossover (SBX) and polynomial-based mutation [10] are used as the recombination operators for producing offspring in MMOEA-CAS. For DBSCAN, the neighborhood parameter MinPts is set to 3, and the radius ε is set by an adaptive strategy based on the minimum and maximum values of the population in each dimension of decision space.
In addition, the population size is set to 100 × n for test problems with n-dimensional decision variables, and the maximum number of evaluations is 5000 × n. To reduce the influence of randomness, 21 independent runs are performed for each algorithm on each test problem.
4.2. Performance Indicator
In our experiments, we select the inverted generational distance (IGD [39, 40]) as performance indicator. To evaluate the quality of obtained solutions in decision space or objective space, IGD is, respectively, applied on two different spaces. When computing IGD, sampling reference points on the true PS or PF is necessary.
If P^∗ is composed of sampling reference points on the PS and P is the final set of solutions in decision space, the distance between P^∗ and P is named IGDX. If P^∗ is composed of sampling reference points on the PF and P is the final set of objective vectors in objective space, the distance between them is named IGDF. In general, the IGD value is computed as IGD(P^∗, P) = (Σ_{i=1}^{|P^∗|} min_j d(p^i, x^j)) / |P^∗|, where d(p^i, x^j) is the Euclidean distance between p^i and x^j (p^i belongs to P^∗ and x^j belongs to P). Lower IGDX or IGDF values indicate better performance of P.
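The IGD computation is straightforward to implement; the sketch below works for both IGDX (PS samples vs. final decision vectors) and IGDF (PF samples vs. final objective vectors):

```python
import math

def igd(reference, approx):
    """Inverted generational distance: mean, over the reference points, of
    the distance to the nearest member of the obtained set. Lower is better.
    Used with PS samples it yields IGDX; with PF samples it yields IGDF."""
    return sum(min(math.dist(p, x) for x in approx)
               for p in reference) / len(reference)
```

Because each reference point contributes its distance to the *closest* obtained solution, a low value requires the obtained set to both converge to and spread across the whole reference set.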
5. Experimental Studies
5.1. Results and Discussion
Table 2 summarizes the comparison results between the other MMOEAs (i.e., Omni-optimizer, DN-NSGA-II, MO_Ring_PSO_SCD, and TriMOEA-TA&R) and MMOEA-CAS in terms of the IGDX and IGDF metrics. In order to get more intuitive information from the statistical data, we use "+/−/∼" to record the comparison results between the four competitors and MMOEA-CAS, where Wilcoxon's rank-sum test with α = 0.05 is used. According to the overall statistical results in the last row of Table 2, MMOEA-CAS performed better than the four compared algorithms in 31, 32, 25, and 32 out of 44 cases, respectively, and was similar to them in 4, 3, 5, and 2 cases. Conversely, the results obtained by MMOEA-CAS were significantly worse than those of the other MMOEAs in 9, 9, 14, and 10 out of 44 cases. These comparison results demonstrate that MMOEA-CAS can obtain more uniformly and widely distributed solutions in decision space without deteriorating the distribution in objective space, which confirms its superiority and effectiveness on most of the MMOPs adopted.
 
Wilcoxon's rank-sum test (α = 0.05) is used for experimental comparison, where +, −, and ∼ indicate that the results obtained by a competitor are significantly better than, worse than, or similar to those of MMOEA-CAS.
The detailed IGDX and IGDF values of MMOEA-CAS and the four MMOEAs are reported in Tables 3 and 4, respectively. For clarity, the best value for each test problem is marked in bold. In Table 3, MMOEA-CAS achieved the best IGDX values in 10 cases, i.e., Omni-test, MMF2, MMF4, MMF7, MMF10-13, MMF15, and MMF15_a; MO_Ring_PSO_SCD got the best IGDX values in 8 cases, i.e., SYM-PART rotated, MMF5-6, and MMF8; TriMOEA-TA&R obtained the best IGDX values in only 4 cases, i.e., SYM-PART simple and MMF9. The remaining compared algorithms could not obtain the best IGDX value in any case. Similarly, Table 4 shows that MMOEA-CAS obtained the best IGDF values in 10 cases, i.e., MMF4, MMF7, MMF10-15, MMF14_a, and MMF15_a; Omni-optimizer got the smallest IGDF values in 9 cases, i.e., SYM-PART simple, MMF1_z, MMF2, MMF5-6, and MMF8-9; MO_Ring_PSO_SCD obtained the smallest IGDF values on MMF1 and MMF3, while TriMOEA-TA&R obtained the smallest IGDF value only on MMF1_e; DN-NSGA-II could not obtain the best IGDF value in any case. Thus, MMOEA-CAS achieved both the best IGDX and the best IGDF values on the MMOPs with coexisting global and local PSs, i.e., MMF10-13, MMF15, and MMF15_a. By collecting the best values on all MMOPs in terms of IGDX and IGDF, we can conclude that the overall performance of MMOEA-CAS is better than that of the other compared algorithms on these test problems. In addition, these comparison results demonstrate that the environmental selection mechanism of MMOEA-CAS can reserve more promising solutions, while the classical environmental selection criteria cannot solve these MMOPs well.
Based on the experimental results in terms of IGDX, we find that Omni-optimizer and DN-NSGA-II could not outperform MMOEA-CAS in any case, showing the drawbacks of their selection mechanisms on these test problems. It can be concluded that the fast nondominated sorting approach used in Omni-optimizer and DN-NSGA-II fails to maintain the diversity in decision space, since it always tends to choose the solutions with better convergence and diversity in objective space. Thus, regarding the performance of the final population in decision space, these two algorithms perform poorly on all test problems because they neglect the importance of diversity in decision space. To address this issue, MMOEA-CAS uses the density-based clustering method to form clusters in which similar solutions are likely to be grouped together. At the same time, in the addition operator, a solution with good performance within its own cluster can still be regarded as a promising candidate even if it performs worse in the whole population. In this way, MMOEA-CAS can guarantee the diversity of the population in decision space. This also explains why Omni-optimizer and DN-NSGA-II showed advantages on MMOPs with multiple PSs (i.e., SYM-PART simple, SYM-PART rotated, Omni-test, MMF1, MMF5-6, and MMF8) when comparing the quality of the population in objective space, while they performed poorly in terms of IGDX on the MMOPs with both local and global PSs (i.e., MMF10-13, MMF15, and MMF15_a). Due to the comprehensive consideration of the diversity in both decision and objective spaces, MMOEA-CAS achieves better performance on most MMOPs, at the expense of some diversity in objective space on problems with multiple PSs.
In order to improve the diversity in decision space, MO_Ring_PSO_SCD uses a special crowding distance, but it still follows the convergence-first principle in objective space, so the local Pareto optimal solutions are always lost during its evolutionary process. Similarly, in TriMOEA-TA&R, nondominated solutions still replace dominated solutions in objective space, so it also fails to strike a balance between the diversity in decision space and that in objective space. To avoid the local Pareto optimal solutions being replaced by solutions with better convergence during the evolutionary process, the proposed addition and deletion operators in MMOEA-CAS collaboratively select the more promising solutions into the next generation according to the clustering results obtained by the density-based clustering method in decision space and the subregions divided by the weight vectors in objective space. Therefore, only the solutions that perform poorly in both spaces are abandoned, and the solutions having good performance in either space survive. Although MO_Ring_PSO_SCD and TriMOEA-TA&R performed better than MMOEA-CAS on some problems with multiple PSs, they still performed poorly compared with MMOEA-CAS on the MMOPs having both global and local PSs. As shown in Tables 3 and 4, MO_Ring_PSO_SCD and TriMOEA-TA&R achieved worse results than MMOEA-CAS on most MMOPs, especially on those having both global and local PSs. These results validate the effectiveness of MMOEA-CAS: it is able to obtain solutions with better overall performance in decision and objective spaces than the other MMOEAs on most MMOPs.
5.2. Further Discussion
To clearly demonstrate the performance of the different MMOEAs, the final populations obtained in decision and objective spaces by the different algorithms are plotted in Figures 2 and 3, which qualitatively show the effectiveness and advantages of our proposed algorithm on these MMOPs. An MMOP with multiple PSs, i.e., SYM-PART rotated, is shown in Figure 2, and an MMOP with a local PS, i.e., MMF10, is shown in Figure 3. From Figure 2, it can be found that the final population obtained by MMOEA-CAS is distributed more uniformly in decision space. Although Omni-optimizer and DN-NSGA-II found better solutions in objective space, they failed to maintain the other PSs in decision space. Similarly, the population obtained by TriMOEA-TA&R also missed some PSs in decision space. MO_Ring_PSO_SCD found all PSs, but its final solutions could not uniformly cover all the true PSs. Thus, we can conclude that MMOEA-CAS obtains better results in terms of the comprehensive performance in both decision and objective spaces on SYM-PART rotated.
Furthermore, for MMF10, the final populations obtained are shown in Figure 3. It is obvious that the final population obtained by MMOEA-CAS is significantly better than those of the other MMOEAs. The other compared algorithms mainly select solutions with faster convergence in objective space, which causes some local optimal solutions to be eliminated by the global optimal solutions. Thus, only MMOEA-CAS maintained some promising local optimal solutions. To summarize, MMOEA-CAS shows clear superiority on these two cases, especially on MMF10.
6. Conclusion
We have developed a novel evolutionary algorithm with a clustering-based assisted selection strategy for multimodal multiobjective optimization, in which an addition operator and a deletion operator are proposed to comprehensively consider the diversity in both decision and objective spaces. Specifically, a density-based clustering method is used to divide the population into multiple clusters in decision space, aiming to assist the addition operator in strengthening the diversity of the population. Then, weight vectors are used to divide the population into N subregions in objective space (N is the population size). Moreover, in the deletion operator, the solutions in the most crowded subregion are collected into the previous clusters, and then the worst solution in the most crowded cluster is deleted until there are N solutions left. The numerical results on both IGDX and IGDF have confirmed the effectiveness of MMOEA-CAS. Most existing MMOEAs cannot preserve local Pareto optimal solutions because they are often dominated and replaced by global optimal solutions with good convergence, while our proposed algorithm is capable of preserving more uniformly and widely distributed solutions in decision space without deteriorating their distribution in objective space, which confirms its superiority on most MMOPs.
Finally, more effective frameworks, enhanced search operators, and other techniques will be further studied to fill the gaps in the field of multimodal multiobjective optimization. In addition, we will extend the proposed algorithm to solve other, more complicated multimodal multiobjective optimization problems.
Data Availability
All experimental results can be obtained upon request through email to the corresponding author.
Conflicts of Interest
The authors declare that there are no conflicts of interest.
Acknowledgments
This work was supported by the National Natural Science Foundation of China under Grants 61876110, 61836005, and 61672358, the Joint Funds of the National Natural Science Foundation of China under Key Program Grant U1713212, and Shenzhen Technology Plan under Grant JCYJ20190808164211203. Also, this work was supported by the National Engineering Laboratory for Big Data System Computing Technology and the Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ), Shenzhen University.
References
 Y. Zhang, D. Gong, and J. Cheng, “Multiobjective particle swarm optimization approach for costbased feature selection in classification,” IEEEACM Transactions on Computational Biology and Bioinformatics, vol. 14, no. 1, pp. 64–75, 2015. View at: Google Scholar
 Z.H. Zhan, X.F. Liu, Y.J. Gong, J. Zhang, H. S.H. Chung, and Y. Li, “Cloud computing resource scheduling and a survey of its evolutionary approaches,” ACM Computing Surveys, vol. 47, no. 4, pp. 1–33, 2015. View at: Publisher Site  Google Scholar
 Q. Zhang, J. Ding, W. Shen, J. Ma, and G. Li, “Multiobjective particle swarm optimization for microgrids pareto optimization dispatch,” Complexity, vol. 2020, Article ID 5695917, 13 pages, 2020. View at: Publisher Site  Google Scholar
 K. Deb, MultiObjective Optimization Using Evolutionary Algorithms, Wiley, Chichester, UK, 2001.
 B. Xin, L. Chen, J. Chen, H. Ishibuchi, K. Hirota, and B. Liu, “Interactive multiobjective optimization: a review of the stateoftheart,” IEEE Access, vol. 6, pp. 41256–41279, 2018. View at: Publisher Site  Google Scholar
 C. Coello, G. Lamont, and D. V. Veldhuizen, Evolutionary Algorithms for Solving MultiObjective Problems, Springer, Berlin, Germany, 2007.
 Z. Zhang, Q. Han, Y. Li, Y. Wang, and Y. Shi, “An evolutionary multiagent framework for multiobjective optimization,” Complexity, vol. 2020, Article ID 9147649, 18 pages, 2020. View at: Publisher Site  Google Scholar
 X. Wu, S. Zhang, Z. Gong, J. Ji, Q. Lin, and J. Chen, “Decompositionbased multiobjective evolutionary optimization with adaptive multiple Gaussian process models,” Complexity, vol. 2020, Article ID 9643273, 22 pages, 2020. View at: Publisher Site  Google Scholar
 W. Lin, Q. Lin, Z. Zhu, J. Li, J. Chen, and Z. Ming, “Evolutionary search with multiple utopian reference points in decompositionbased multiobjective optimization,” Complexity, vol. 2019, Article ID 7436712, 22 pages, 2019. View at: Publisher Site  Google Scholar
 K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm: NSGAII,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002. View at: Publisher Site  Google Scholar
 E. Zitzler, M. Laumanns, and L. Thiele, “SPEA2: Improving the strength pareto evolutionary algorithm for multiobjective optimization,” in Proceeding of Evolutionary Methods for Design, Optimization and Control with Application to Industrial Problems, pp. 95–100, Athens, Greece, 2002. View at: Google Scholar
 X. Zou, Y. Chen, M. Liu, and L. Kang, “A new evolutionary algorithm for solving manyobjective optimization problems,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 18, no. 4, pp. 577–601, 2014. View at: Google Scholar
 Y. Hua, Y. Jin, and K. Hao, “A clusteringbased adaptive evolutionary algorithm for multiobjective optimization with irregular pareto fronts,” IEEE Transactions on Cybernetics, vol. 49, no. 7, pp. 2758–2770, 2019. View at: Publisher Site  Google Scholar
 E. Zitzler and S. Künzli, “Indicatorbased selection in multiobjective search,” in Proceeding of the International Conference on Parallel Problem Solving from NaturePPSN VIII, pp. 832–842, Springer, Berlin, Germany, 2004. View at: Google Scholar
 N. Beume, B. Naujoks, and M. Emmerich, “SMSEMOA: multiobjective selection based on dominated hypervolume,” European Journal of Operational Research, vol. 181, no. 3, pp. 1653–1669, 2007. View at: Publisher Site  Google Scholar
 K. Li, J. Zheng, M. Li, C. Zhou, and H. Lv, “A novel algorithm for nondominated hypervolumebased multiobjective optimization,” in Proceeding of the IEEE International Conference Systems, Man, and Cybernetics, pp. 5220–5226, San Antonio, TX, USA, October 2009. View at: Publisher Site  Google Scholar
 Q. Zhang and H. Li, “MOEA/D: a multiobjective evolutionary algorithm based on decomposition,” IEEE Transactions on Evolutionary Computation, vol. 11, no. 6, pp. 712–731, 2007. View at: Google Scholar
 H. Zhang, X. Zhang, X.Z. Gao, and S. Song, “Selforganizing multiobjective optimization based on decomposition with neighborhood ensemble,” Neurocomputing, vol. 173, pp. 1868–1884, 2016. View at: Publisher Site  Google Scholar
 H. Li and Q. Zhang, “Multiobjective optimization problems with complicated pareto sets, MOEA/D and NSGAII,” IEEE Transactions on Evolutionary Computation, vol. 13, no. 2, pp. 284–302, 2009. View at: Publisher Site  Google Scholar
 S. X. Zhang, L. M. Zheng, L. Liu, S. Y. Zheng, and Y. M. Pan, “Decompositionbased multiobjective evolutionary algorithm with mating neighborhood sizes and reproduction operators adaptation,” Soft Computing, vol. 21, no. 21, pp. 6381–6392, 2017. View at: Publisher Site  Google Scholar
 K. Li, Q. Zhang, S. Kwong, M. Li, and R. Wang, “Stable matchingbased selection in evolutionary multiobjective optimization,” IEEE Transactions on Evolutionary Computation, vol. 18, no. 6, pp. 909–923, 2014. View at: Google Scholar
 C. Dai and X. Lei, “An improvement decompositionbased multiobjective evolutionary algorithm with uniform design,” KnowledgeBased Systems, vol. 125, no. 1, pp. 108–115, 2017. View at: Publisher Site  Google Scholar
 H. Xu, W. Zeng, D. Zhang, and X. Zeng, “MOEA/HD: a multiobjective evolutionary algorithm based on hierarchical decomposition,” IEEE Transactions on Cybernetics, vol. 49, no. 2, pp. 517–526, 2019. View at: Publisher Site  Google Scholar
 W. Li, L. Wang, Q. Jiang, X. Hei, and B. Wang, “Multiobjective cloud particle optimization algorithm based on decomposition,” Algorithms, vol. 8, no. 2, pp. 157–176, 2015. View at: Publisher Site  Google Scholar
 Y. Zhang, D.W. Gong, J.Y. Sun, and B.Y. Qu, “A decompositionbased archiving approach for multiobjective evolutionary optimization,” Information Sciences, vol. 430431, pp. 397–413, 2018. View at: Publisher Site  Google Scholar
C. Yue, B. Qu, K. Yu, J. Liang, and X. Li, "A novel scalable test problem suite for multimodal multiobjective optimization," Swarm and Evolutionary Computation, vol. 48, pp. 62–71, 2019.
H. Ishibuchi, Y. Peng, and K. Shang, "A scalable multimodal multiobjective test problem," in Proceedings of the 2019 IEEE Congress on Evolutionary Computation (CEC), pp. 310–317, Wellington, New Zealand, June 2019.
J. J. Liang, B. Y. Qu, D. W. Gong, and C. T. Yue, "Problem definitions and evaluation criteria for the CEC 2019 special session on multimodal multiobjective optimization," Tech. Rep., Computational Intelligence Laboratory, Zhengzhou University, Zhengzhou, China, 2019.
K. Deb and S. Tiwari, "Omni-optimizer: a procedure for single and multiobjective optimization," in Lecture Notes in Computer Science, Evolutionary Multi-Criterion Optimization, pp. 47–61, Springer, Berlin, Germany, 2005.
J. J. Liang, C. T. Yue, and B. Y. Qu, "Multimodal multiobjective optimization: a preliminary study," in Proceedings of the 2016 IEEE Congress on Evolutionary Computation (CEC), pp. 2454–2461, Vancouver, Canada, July 2016.
R. Tanabe and H. Ishibuchi, "A decomposition-based evolutionary algorithm for multimodal multiobjective optimization," in Parallel Problem Solving from Nature—PPSN XV, pp. 249–261, Springer, Berlin, Germany, 2018.
X. Li, "Niching without niching parameters: particle swarm optimization using a ring topology," IEEE Transactions on Evolutionary Computation, vol. 14, no. 1, pp. 150–169, 2010.
C. Yue, B. Qu, and J. Liang, "A multiobjective particle swarm optimizer using ring topology for solving multimodal multiobjective problems," IEEE Transactions on Evolutionary Computation, vol. 22, no. 6, pp. 805–817, 2018.
Y. Hu, J. Wang, J. Liang et al., "A self-organizing multimodal multiobjective pigeon-inspired optimization algorithm," Science China Information Sciences, vol. 62, no. 7, pp. 1–17, 2019.
Y. Liu, G. G. Yen, and D. Gong, "A multimodal multiobjective evolutionary algorithm using two-archive and recombination strategies," IEEE Transactions on Evolutionary Computation, vol. 23, no. 4, pp. 660–674, 2019.
M. Ester, H.-P. Kriegel, J. Sander, and X. Xu, "A density-based algorithm for discovering clusters in large spatial databases with noise," in Proceedings of the Second International Conference on Knowledge Discovery and Data Mining, pp. 226–231, Menlo Park, CA, USA, August 1996.
I. Das and J. E. Dennis, "Normal-boundary intersection: a new method for generating the Pareto surface in nonlinear multicriteria optimization problems," SIAM Journal on Optimization, vol. 8, no. 3, pp. 631–657, 1998.
V. L. Huang, P. N. Suganthan, K. Qin, and S. Baskar, "Multiobjective differential evolution with external archive and harmonic distance-based diversity measure," Tech. Rep., Nanyang Technological University, Singapore, 2005, https://www.researchgate.net/publication/228967624.
P. A. N. Bosman and D. Thierens, "The balance between proximity and diversity in multiobjective evolutionary algorithms," IEEE Transactions on Evolutionary Computation, vol. 7, no. 2, pp. 174–188, 2003.
M. Li and J. Zheng, "Spread assessment for evolutionary multiobjective optimization," in Proceedings of the 5th International Conference on Evolutionary Multi-Criterion Optimization, pp. 216–230, Nantes, France, April 2009.
Copyright
Copyright © 2021 Naili Luo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.