Abstract

In multimodal multiobjective optimization problems (MMOPs), multiple Pareto optimal sets, and even some good local Pareto optimal sets, should be preserved, as they provide more choices for decision-makers. To solve MMOPs, this paper proposes an evolutionary algorithm with a clustering-based assisted selection strategy for multimodal multiobjective optimization, in which an addition operator and a deletion operator are proposed to comprehensively consider diversity in both the decision and objective spaces. Specifically, in the decision space, the union population is partitioned into multiple clusters by a density-based clustering method, which assists the addition operator in strengthening population diversity. Then, a set of weight vectors is adopted to divide the population into N subregions in the objective space (N is the population size). Moreover, in the deletion operator, the solutions in the most crowded subregion are first mapped back to the previously obtained clusters, and then the worst solution in the most crowded cluster is deleted until N solutions remain. Our algorithm is compared with other multimodal multiobjective evolutionary algorithms on well-known benchmark MMOPs. Numerical experiments confirm the effectiveness and advantages of the proposed algorithm.

1. Introduction

Many real-world optimization problems involve multiple (often conflicting) objectives [1–3], which are usually modeled as follows:

minimize F(x) = (f1(x), f2(x), …, fm(x)), subject to x ∈ Ω ⊆ R^n,

where x is an n-dimensional decision vector in the feasible region Ω and f1(x), …, fm(x) are, respectively, the objective values of x under the corresponding objective functions. The Pareto optimal set (PS) is the set of nondominated solutions of an MOP, and the Pareto optimal front (PF) is the image of the PS in the objective space [4–6]. To obtain the optimal solutions of an MOP, many multiobjective evolutionary algorithms (MOEAs) have been proposed that evolve a population of solutions to approximate the true PF [7–9]. Owing to their ability to optimize multiple solutions simultaneously, MOEAs have become a widely used approach for various kinds of optimization problems; representative categories include MOEAs based on the Pareto dominance relation [10–13], MOEAs based on performance indicators [14–16], and MOEAs based on decomposition [17–25].

Recently, MOPs with multiple distinct PSs or with acceptable local PSs [26–28] have drawn much attention; they are called multimodal multiobjective optimization problems (MMOPs). In recent years, several studies that consider diversity in the decision space have been proposed to solve MMOPs. In Omni-optimizer [29], the crowding distance is used as a decision-space diversity indicator and embedded into NSGA-II, which strengthens population diversity. DN-NSGA-II [30] aims to maintain distinct solutions in the decision space whose objective values are equal. Moreover, MOEA/D-AD [31] incorporates addition and deletion operators into MOEA/D to obtain multiple nondominated solutions that differ in the decision space, which strengthens decision-space diversity. Furthermore, other heuristic algorithms, such as particle swarm optimization (PSO) algorithms [32–34], have received increasing attention for MMOPs. For example, a PSO algorithm proposed in [32] is an effective niching algorithm that does not need any niching parameter. In MO_Ring_PSO_SCD [33], a ring topology and a special crowding distance are incorporated into the PSO algorithm, helping to ensure greater diversity. Similarly, in [34], the neighborhood in the decision space is constructed by a self-organizing map network, aiming to approximate multiple PSs for solving MMOPs. Recently, in TriMOEA-TA&R [35], two archives are used cooperatively to balance convergence in the objective space and diversity in both the decision and objective spaces. Due to the complexity and unpredictability of decision-making, more well-performing solutions, that is, multiple distinct global PSs and even some high-quality local PSs, should be provided for decision-making [35].

However, when tackling MMOPs with at least one local PS, once the global PSs are found during the evolutionary process of the above multimodal MOEAs (MMOEAs) [29–35], the local PSs will not survive into the next-generation population, as local PSs are often dominated by the global PSs and are usually replaced by solutions with better convergence. To solve MMOPs, we develop a novel evolutionary algorithm with a clustering-based assisted selection strategy for multimodal multiobjective optimization (MMOEA-CAS). In the decision space, the union population is first partitioned into multiple clusters by a density-based clustering method, which assists the addition operator in strengthening population diversity. Then, a set of weight vectors divides the population into N subregions in the objective space (N is the population size), which helps to extend population diversity. Moreover, in the deletion operator, the solutions in the most crowded subregion are mapped back to the previously obtained clusters, and then the worst solution in the most crowded cluster is deleted until N solutions remain. Therefore, our approach can prevent the loss of local optimal solutions and helps to realize a good balance in population diversity. The main contributions of our approach are stated as follows:

(1) Different from traditional environmental selection mechanisms, in MMOEA-CAS, a density-based clustering method is applied to the population to obtain multiple clusters according to the neighboring relationship of solutions in the decision space. Based on these clustering results, the solutions with better performance in each cluster have priority to be selected in the addition operator. In this way, diversity in the decision space is ensured.

(2) The deletion operator proposed in our method considers the solutions' density information in both the decision and objective spaces. First, a set of weight vectors is used to divide candidate solutions into different subregions based on their locations in the objective space; then, the solutions in the most crowded subregion are classified into the previously obtained clusters, and the most crowded solution in the most crowded cluster is deleted. In this way, the more promising solutions are reserved for the next generation.

The remainder of this paper is organized as follows. Some definitions of MMOPs and an introduction to the clustering method used are given in Section 2. The framework of our algorithm with the addition and deletion operators is described in Section 3. The experimental setup and results are presented and analyzed in Sections 4 and 5, respectively. Finally, conclusions and future directions are discussed in Section 6.

2. Preliminaries

2.1. MMOPs

In this subsection, we give an example of an MMOP, as shown in Figure 1. If no solution in a set PS1 is dominated by any other solution in the feasible region, PS1 is regarded as a global PS, and the global PF is the image of its objective vectors; the global PS/PF is marked in red in Figure 1. If no solution in a set PS2 is dominated by any other solution in its local population, PS2 is said to be a local PS, and the local PF is the image of its objective vectors; the local PS/PF is marked in blue in Figure 1. Here, the local population is defined as the set of neighboring solutions in the decision space. Thus, MOPs with multiple distinct PSs or with local PSs [26–28] are called multimodal multiobjective optimization problems (MMOPs). For MMOPs, multiple distinct PSs, including the global PSs and some good local PSs, should be provided to the decision-maker, helping to find more suitable solutions when making decisions. Therefore, diversity of solutions in the objective and decision spaces should be considered equally important in the selection mechanisms of MOEAs for solving MMOPs.
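As a concrete illustration of the dominance relation underlying these definitions (and not part of the original algorithm), the following minimal Python sketch checks Pareto dominance for minimization problems and filters a solution set down to its nondominated members:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(ai <= bi for ai, bi in zip(a, b)) and \
           any(ai < bi for ai, bi in zip(a, b))

def nondominated(front):
    """Keep only the members of `front` that no other member dominates."""
    return [a for a in front if not any(dominates(b, a) for b in front if b is not a)]
```

A set such as a global PS is nondominated over the whole feasible region, while a local PS is nondominated only within its local population.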

2.2. Density-Based Clustering Method

It is well known that DBSCAN [36], with neighborhood parameters ε and MinPts (a positive integer), is a density-based clustering method that finds clusters defined as maximal density-connected sets of samples induced by the density-reachability relation. For each sample x in a data set D, the related definitions are given as follows:

ε-neighborhood: the ε-neighborhood of x is defined as N_ε(x) = {y ∈ D | dist(x, y) ≤ ε}, where dist(x, y) is the Euclidean distance between x and y.

Core object: a sample x is said to be a core object if |N_ε(x)| ≥ MinPts.

Directly density-reachable: a sample y is directly density-reachable from x if y ∈ N_ε(x) and x is a core object.

Density-reachable: a sample y is density-reachable from x if there is a sequence of samples p1, p2, …, pn with p1 = x and pn = y such that p_{i+1} is directly density-reachable from p_i.

Density-connected: samples x and y are density-connected if there is a sample p from which both x and y are density-reachable.
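These definitions can be turned into a compact reference implementation. The following Python sketch of DBSCAN is illustrative only (the experiments would use a tuned implementation); it labels each sample with a cluster id, or -1 for noise:

```python
import math

def dbscan(data, eps, min_pts):
    """Minimal DBSCAN sketch: returns a cluster label per point (-1 = noise)."""
    n = len(data)

    def neighbors(i):
        # eps-neighborhood of point i (the point itself is included)
        return [j for j in range(n) if math.dist(data[i], data[j]) <= eps]

    labels = [None] * n  # None = unvisited, -1 = noise, k >= 0 = cluster id
    cid = -1
    for i in range(n):
        if labels[i] is not None:
            continue
        neigh = neighbors(i)
        if len(neigh) < min_pts:          # i is not a core object
            labels[i] = -1
            continue
        cid += 1                          # grow a new density-connected cluster
        labels[i] = cid
        seeds = [j for j in neigh if j != i]
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:           # border point: reachable but not core
                labels[j] = cid
            if labels[j] is not None:
                continue
            labels[j] = cid
            neigh_j = neighbors(j)
            if len(neigh_j) >= min_pts:   # j is a core object: extend the frontier
                seeds.extend(neigh_j)
    return labels
```

With MinPts = 3 and a suitable ε, nearby solutions in the decision space end up in the same cluster, which is exactly the grouping the addition and deletion operators rely on.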

3. The Proposed Algorithm

3.1. General Framework

In this section, we introduce the general framework of MMOEA-CAS (Algorithm 1). At first, N solutions are randomly generated in the feasible region, forming a population P in line 1. In line 2, multiple weight vectors are initialized [37]. Then, the recombination operators are applied to the parent population to generate the offspring population P′. Next, a union population U is formed by merging P′ and P in line 5. In line 6, according to the Pareto dominance relation [10], the solutions in U are ranked into different Pareto fronts F1, …, Ft. As shown in lines 7-8, after normalizing U, DBSCAN is used to divide the normalized population into multiple clusters C1, …, CK in the decision space. Then, in line 9, the addition operator selects the promising solutions into the next-generation population P, which contains more than N solutions. Next, in lines 10-11, after normalizing P, the association procedure with a set of weight vectors is used to divide P into N subregions in the objective space. Finally, in line 12, the deletion operator is run on P to delete the most crowded solution each time until N solutions remain in P. If the maximal number of evaluations is reached, P is returned as the final solution set, as shown in line 14; otherwise, the procedure in lines 4–12 is repeated until the stopping criterion is met.

Input: N (the number of solutions in population)
Output: P (final population)
(1) initialize a population P
(2) initialize weight vectors
(3)while the stopping criterion is not met
(4)  P' = Crossover + Mutation (P)
(5)  U = P ∪ P′
(6)  (F1, …, Ft) = Non-dominated Sorting (U)
(7) normalize U
(8)  (C1, …, CK) = DBSCAN (U, ε, MinPts)
(9)  P = Addition Operator//Algorithm 2
(10)  normalize P
(11)  (Φ1, …, ΦN) = Association Process (P, W)//Algorithm 3
(12)  P = Deletion Operator//Algorithm 4
(13)end while
(14)return P
3.2. Addition Operator

In this section, the addition operator (Algorithm 2) is introduced to select promising solutions to maintain population diversity. As described in lines 1-2, the nondominated solutions in F1 are first added to an empty set P. To further strengthen diversity, in lines 3–6, the nondominated solutions in each cluster Ck (k = 1, …, K) are also added to P. In addition, solutions with better convergence in the objective space are selected with higher priority until P contains at least N solutions: as shown in lines 8–11, solutions from F2 to Ft fill P according to their front ranks. When there are no fewer than N solutions, the addition operator ends and returns the temporary population P.

Input: (F1, …, Ft) (Pareto fronts of U), (C1, …, CK) (clusters), N
Output: P (updated population)
(1)set P to be an empty set
(2)P = P ∪ F1
(3)for k = 1 to K
(4) (f1, …, ft) = Non-dominated Sorting (Ck)
(5)P = P  ∪ f1
(6)end for
(7)i = 1
(8)while |P| < N
(9)i = i + 1
(10)P = P ∪ Fi
(11)end while
(12)return P
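As an illustration of Algorithm 2, the following Python sketch mirrors the three stages of the addition operator. It is a simplification under stated assumptions: solutions are represented by their objective vectors, and the dominance test is supplied by the caller:

```python
def addition_operator(fronts, clusters, n, dominates):
    """Sketch of Algorithm 2: seed P with the global nondominated front F1,
    add each cluster's local nondominated front, then fill from F2, F3, ...
    until |P| >= n. `fronts` is [F1, ..., Ft]; `clusters` is [C1, ..., CK]."""
    def local_front(cluster):
        # nondominated solutions within a single cluster (lines 3-6)
        return [a for a in cluster
                if not any(dominates(b, a) for b in cluster if b is not a)]

    pop = list(fronts[0])                      # line 2: P = F1
    for cluster in clusters:
        for x in local_front(cluster):
            if x not in pop:
                pop.append(x)
    i = 1
    while len(pop) < n and i < len(fronts):    # lines 8-11: fill by front rank
        pop.extend(x for x in fronts[i] if x not in pop)
        i += 1
    return pop
```

The key point is the second stage: a solution that is only nondominated inside its own cluster still enters P, which is how locally optimal solutions survive.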
3.3. Association Procedure

After the addition operator is run, the association procedure (Algorithm 3) is applied to the normalized P to divide the population into N subregions in the objective space using a set of weight vectors. At first, N empty subregions are initialized in line 1. Next, for each solution x ∈ P, its distances to all weight vectors are computed, and x is associated with its closest subregion, as shown in lines 3–7. When all solutions have been associated with their closest subregions, the association procedure ends and the subregions Φ1, …, ΦN are returned in line 9.

Input: P (population), W = (w1, …, wN) (weight vectors)
Output: (Φ1, …, ΦN) (subregions)
(1) initialize N empty subregions Φ1, …, ΦN
(2)for each x ∈ P
(3)  for each wj ∈ W
(4)   compute the distance d(x, wj)
(5)  end for
(6)  find j* = argmin j∈{1, …, N} d(x, wj)
(7)  Φj* = Φj* ∪ {x}
(8)end for
(9)return (Φ1, …, ΦN)
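A minimal Python sketch of the association procedure follows. The distance measure is an assumption on our part: the perpendicular distance of the (normalized) objective vector to the weight direction, a common choice in decomposition-based methods; any other distance would slot in the same way:

```python
import math

def associate(pop_objs, weights):
    """Sketch of Algorithm 3: assign each normalized objective vector to the
    subregion of its closest weight vector; returns lists of solution indices."""
    def perp_dist(f, w):
        # distance from f to the ray spanned by w (assumed distance measure)
        wnorm = math.sqrt(sum(wi * wi for wi in w))
        proj = sum(fi * wi for fi, wi in zip(f, w)) / wnorm
        return math.sqrt(max(sum(fi * fi for fi in f) - proj * proj, 0.0))

    subregions = [[] for _ in weights]        # line 1: N empty subregions
    for idx, f in enumerate(pop_objs):
        j = min(range(len(weights)), key=lambda k: perp_dist(f, weights[k]))
        subregions[j].append(idx)             # lines 6-7: attach to closest subregion
    return subregions
```

The resulting subregion sizes directly measure objective-space crowding, which the deletion operator exploits next.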
3.4. Deletion Operator

After the subregions Φ1, …, ΦN are obtained by the association procedure, the deletion operator (Algorithm 4) is used to prune P to N final solutions. The harmonic average distance (HAD) method [38] is used to reflect the crowdedness of each solution x in the decision space. Thus, as shown in lines 2-3, after normalizing P, the HAD value of each solution is computed by equation (5) as follows:

HAD(x) = k / (1/dist(x, y1) + … + 1/dist(x, yk)), (5)

where dist is the Euclidean distance between two normalized solutions and y1, …, yk are the k nearest neighbors of x in P. In line 4, the most crowded subregion Φj* is found by equation (6) as follows:

j* = argmax j∈{1, …, N} |Φj|. (6)

Input: P (population), (Φ1, …, ΦN) (subregions), (C1, …, CK) (clusters), N
Output: P (updated population)
(1)while | P | > N
(2) normalize P
(3) compute HAD values of solutions in P by equation (5)
(4) find the most crowded subregion by equation (6)
(5) divide all solutions in Φj* into S1, …, SK by equation (7)
(6) find the most crowded cluster Sk by equation (8)
(7) find x with the smallest HAD value in Sk
(8) delete x from Φj* and P
(9)end while
(10)return P

In line 5, the solutions in Φj* are classified into S1, …, SK by equation (7) as follows:

Sk = Φj* ∩ Ck, k = 1, …, K, (7)

where C1, …, CK are the clusters obtained by DBSCAN in the decision space. Then, in line 6, the most crowded cluster Sk is found by equation (8) as follows:

k = argmax j∈{1, …, K} |Sj|. (8)

In lines 7-8, the solution x with the smallest HAD value in Sk is deleted from Φj* and P. Finally, the deletion operator ends when N solutions remain, and P is returned in line 10 as the final population.
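One iteration of the deletion step can be sketched in Python as follows. This is an illustrative simplification: `had` computes the harmonic average distance over the k nearest neighbors in normalized decision space, and `deletion_step` returns the index of the solution to remove; the value of k and the cluster bookkeeping are assumptions, not the paper's exact settings:

```python
import math

def had(pop_dec, i, k=2):
    """Harmonic average distance of solution i to its k nearest neighbors in
    (normalized) decision space; a small HAD value marks a crowded solution."""
    d = sorted(math.dist(pop_dec[i], pop_dec[j])
               for j in range(len(pop_dec)) if j != i)[:k]
    return k / sum(1.0 / dj for dj in d) if all(dj > 0 for dj in d) else 0.0

def deletion_step(pop_dec, subregions, cluster_of, k=2):
    """One iteration of Algorithm 4: pick the most crowded subregion (eq. 6),
    group its members by cluster label (eq. 7), take the most crowded cluster
    (eq. 8), and return the index of its most crowded solution."""
    crowded = max(subregions, key=len)            # eq. (6): largest subregion
    groups = {}
    for i in crowded:                             # eq. (7): split by cluster label
        groups.setdefault(cluster_of[i], []).append(i)
    worst_cluster = max(groups.values(), key=len) # eq. (8): most crowded cluster
    return min(worst_cluster, key=lambda i: had(pop_dec, i, k))
```

Repeating this step until |P| = N removes only solutions that are crowded in both spaces, which is what lets good local optima survive.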

4. Experimental Settings

4.1. Test Instances and Parameter Settings

In the experiments, a series of MMOPs from CEC 2019 [28] are used to validate the performance of all compared algorithms, including MMOPs with many distinct PSs, MMOPs with irregularly shaped PSs (i.e., MMF1_e, MMF1_z, and MMF5-7), and MMOPs with coexisting global and local PSs (MMF10-13, MMF15, and MMF15_a).

Our proposed algorithm MMOEA-CAS is compared with four MMOEAs (i.e., Omni-optimizer [29], DN-NSGA-II [30], MO_Ring_PSO_SCD [33], and TriMOEA-TA&R [35]) on all the adopted MMOPs. The parameter settings of the compared algorithms follow their original references and are given in Table 1.

Note that simulated binary crossover (SBX) and polynomial-based mutation [10] are used as the recombination operators for producing offspring in MMOEA-CAS. For DBSCAN, the neighborhood parameter MinPts is set to 3, and ε is set by an adaptive strategy based on the minimum and maximum values of each dimension of the decision space.

In addition, the population size is set to 100 × n for test problems with n-dimensional decision variables, and the maximum number of evaluations is set to 5000 × n. To reduce the influence of randomness, each algorithm is run 21 times on each test problem.

4.2. Performance Indicator

In our experiments, we select the inverted generational distance (IGD [39, 40]) as performance indicator. To evaluate the quality of obtained solutions in decision space or objective space, IGD is, respectively, applied on two different spaces. When computing IGD, sampling reference points on the true PS or PF is necessary.

If P* is composed of reference points sampled on the true PS and P is the final set of solutions in the decision space, the distance from P* to P is named IGDX. If P* is composed of reference points sampled on the true PF and P is the final set of objective vectors, the distance between them is named IGDF. In general, the IGD value is computed according to the following equation:

IGD(P*, P) = (∑ p∈P* min x∈P d(p, x)) / |P*|,

where d(p, x) is the Euclidean distance between p and x (p belongs to P* and x belongs to P). Lower IGDX or IGDF values indicate better performance of P.
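The IGD computation described above can be sketched in a few lines of Python; applied to decision vectors with PS reference points it yields IGDX, and to objective vectors with PF reference points it yields IGDF:

```python
import math

def igd(ref, approx):
    """IGD(P*, P): average distance from each reference point in P* to its
    nearest member of the obtained set P; lower values are better."""
    return sum(min(math.dist(p, x) for x in approx) for p in ref) / len(ref)
```

Because the average runs over the reference set, IGD penalizes an obtained set that leaves any part of the true PS/PF uncovered, so it captures both convergence and spread.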

5. Experimental Studies

5.1. Results and Discussion

Table 2 summarizes the comparison of the other MMOEAs (i.e., Omni-optimizer, DN-NSGA-II, MO_Ring_PSO_SCD, and TriMOEA-TA&R) with MMOEA-CAS in terms of the IGDX and IGDF metrics. To extract more intuitive information from the statistical data, we use “+/–/∼” to record the comparison between the four competitors and MMOEA-CAS, where Wilcoxon's rank-sum test with a significance level α = 0.05 is used. According to the overall statistics in the last row of Table 2, MMOEA-CAS performed better than the four compared algorithms in 31, 32, 25, and 32 out of 44 cases, respectively, and was similar to them in 4, 3, 5, and 2 cases. Conversely, the results obtained by MMOEA-CAS were significantly worse than those of the other MMOEAs in 9, 9, 14, and 10 out of 44 cases. These comparison results demonstrate that MMOEA-CAS can obtain more uniformly and widely distributed solutions in the decision space without deteriorating the distribution in the objective space, confirming its superiority and effectiveness on most of the adopted MMOPs.

The detailed IGDX and IGDF values of MMOEA-CAS and the four MMOEAs are reported in Tables 3 and 4, respectively, with the best value for each test problem marked in bold. In Table 3, MMOEA-CAS achieved the best IGDX values in 10 cases, that is, Omni-test, MMF2, MMF4, MMF7, MMF10-13, MMF15, and MMF15_a; MO_Ring_PSO_SCD got the best IGDX values in 8 cases, that is, SYM-PART rotated, MMF5-6, and MMF8; TriMOEA-TA&R obtained the best IGDX values in only 4 cases, that is, SYM-PART simple and MMF9. The remaining compared algorithms could not obtain the best IGDX value in any case. Similarly, from Table 4, MMOEA-CAS obtained the best IGDF values in 10 cases, that is, MMF4, MMF7, MMF10-15, MMF14_a, and MMF15_a; Omni-optimizer got the smallest IGDF values in 9 cases, that is, SYM-PART simple, MMF1_z, MMF2, MMF5-6, and MMF8-9; MO_Ring_PSO_SCD obtained the smallest IGDF values on MMF1 and MMF3, while TriMOEA-TA&R obtained the smallest IGDF value only on MMF1_e; DN-NSGA-II could not obtain the best IGDF value in any case. Thus, MMOEA-CAS achieved both the best IGDX and the best IGDF values on the MMOPs with coexisting global and local PSs, that is, MMF10-13, MMF15, and MMF15_a. Counting the best values over all MMOPs in terms of IGDX and IGDF, we can conclude that the overall performance of MMOEA-CAS is better than that of the other compared algorithms on these test problems. In addition, these comparison results demonstrate that the environmental selection mechanism of MMOEA-CAS reserves more promising solutions, while the classical environmental selection criteria cannot solve these MMOPs well.

Based on the experimental results in terms of IGDX, we find that Omni-optimizer and DN-NSGA-II could not outperform MMOEA-CAS in any case, revealing the drawbacks of their selection mechanisms on these test problems. The fast nondominated sorting approach used in Omni-optimizer and DN-NSGA-II fails to maintain diversity in the decision space, since it always favors solutions with better convergence and diversity in the objective space. Thus, regarding the final population in the decision space, these two algorithms obtain poor performance on all test problems because they neglect the importance of decision-space diversity. To address this issue, MMOEA-CAS uses the density-based clustering method to form clusters in which similar solutions are likely to be grouped together. At the same time, in the addition operator, a solution with good performance within its cluster, even if it performs worse in the whole population, can still be treated as a promising candidate. In this way, MMOEA-CAS can guarantee population diversity in the decision space. This also explains why Omni-optimizer and DN-NSGA-II showed advantages on MMOPs with multiple PSs (i.e., SYM-PART simple, SYM-PART rotated, Omni-test, MMF1, MMF5-6, and MMF8) when comparing population quality in the objective space, while they performed poorly on MMOPs with both local and global PSs (i.e., MMF10-13, MMF15, and MMF15_a) in terms of IGDX. Owing to the comprehensive consideration of diversity in both the decision and objective spaces, MMOEA-CAS achieves better performance on most MMOPs, at the expense of some objective-space diversity on problems with multiple PSs.

To improve diversity in the decision space, MO_Ring_PSO_SCD uses a special crowding distance, but it still follows the convergence-first principle in the objective space, so local Pareto optimal solutions are often lost during its evolutionary process. Similarly, in TriMOEA-TA&R, nondominated solutions still replace dominated solutions in the objective space, so it also fails to strike a balance between diversity in the decision and objective spaces. To prevent local Pareto optimal solutions from being replaced by solutions with better convergence, the proposed addition and deletion operators in MMOEA-CAS collaboratively select the more promising solutions into the next generation, according to the clusters produced by the density-based clustering method in the decision space and the subregions produced by the weight vectors in the objective space. Therefore, only the solutions that perform poorly in both spaces are abandoned, while solutions performing well in either space survive. Although MO_Ring_PSO_SCD and TriMOEA-TA&R performed better than MMOEA-CAS on some problems with multiple PSs, they still performed poorly relative to MMOEA-CAS on MMOPs with both global and local PSs. As shown in Tables 3 and 4, MO_Ring_PSO_SCD and TriMOEA-TA&R achieved worse results than MMOEA-CAS on most MMOPs, especially those with both global and local PSs. These comparisons validate the effectiveness of MMOEA-CAS and its ability to obtain solutions with better overall performance in both the decision and objective spaces on most MMOPs.

5.2. Further Discussion

To demonstrate the performance of the different MMOEAs more clearly, the final populations obtained in the decision and objective spaces are plotted in Figures 2 and 3, which qualitatively show the effectiveness and advantages of the proposed algorithm on these MMOPs. An MMOP with multiple PSs (SYM-PART rotated) is shown in Figure 2, and an MMOP with a local PS (MMF10) is shown in Figure 3. From Figure 2, the final population obtained by MMOEA-CAS is distributed more uniformly in the decision space. Although Omni-optimizer and DN-NSGA-II found better solutions in the objective space, they failed to maintain all PSs in the decision space. Similarly, the population obtained by TriMOEA-TA&R also missed some PSs in the decision space. MO_Ring_PSO_SCD found all PSs, but its final solutions could not uniformly cover the true PSs. Thus, we can conclude that MMOEA-CAS achieves the best comprehensive performance in both the decision and objective spaces on SYM-PART rotated.

Furthermore, for MMF10, the final populations obtained are shown in Figure 3. The final population obtained by MMOEA-CAS is clearly better than those of the other MMOEAs, which mainly pick solutions with faster convergence in the objective space, causing some local optimal solutions to be eliminated by the global optimal solutions. Thus, only MMOEA-CAS maintained some promising local optimal solutions. To summarize, MMOEA-CAS shows clear superiority on these two cases, especially on MMF10.

6. Conclusion

We have developed a novel evolutionary algorithm with a clustering-based assisted selection strategy for multimodal multiobjective optimization, in which an addition operator and a deletion operator are proposed to comprehensively consider diversity in both the decision and objective spaces. Specifically, a density-based clustering method is used to divide the population into multiple clusters in the decision space, which assists the addition operator in strengthening population diversity. Then, weight vectors are used to divide the population into N subregions in the objective space (N is the population size). Moreover, in the deletion operator, the solutions in the most crowded subregion are mapped back to the previously obtained clusters, and then the worst solution in the most crowded cluster is deleted until N solutions remain. The numerical results on both IGDX and IGDF demonstrate the effectiveness of MMOEA-CAS. Most existing MMOEAs cannot retain local Pareto optimal solutions because these are often dominated and replaced by global optimal solutions with good convergence, while our proposed algorithm preserves more uniformly and widely distributed solutions in the decision space without deteriorating their distribution in the objective space, confirming its superiority on most MMOPs.

Finally, some more effective frameworks, enhanced search operators, and other techniques will be further studied and developed to fill in the gaps in the field of multimodal multiobjective optimization. In addition, we will further extend our proposed algorithm to solve other more complicated multimodal multiobjective optimization problems.

Data Availability

All experimental results can be obtained upon request through email to the corresponding author.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grants 61876110, 61836005, and 61672358, the Joint Funds of the National Natural Science Foundation of China under Key Program Grant U1713212, and Shenzhen Technology Plan under Grant JCYJ20190808164211203. Also, this work was supported by the National Engineering Laboratory for Big Data System Computing Technology and the Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ), Shenzhen University.