Abstract
Multimodal multiobjective optimization problems (MMOPs) are a special kind of multiobjective optimization problem (MOP) with multimodal characteristics, where multiple different Pareto optimal sets (PSs) map to the same Pareto optimal front (PF). To handle MMOPs, a decomposition-based harmony search algorithm (called MOEA/D-HSA) is devised. In MOEA/D-HSA, multiple individuals assigned to the same weight vector form a subpopulation for finding multiple different PSs. Then, an environmental selection method based on greedy selection is designed to dynamically adjust the subpopulation scale to keep the population diversity. Finally, a modified harmony search algorithm and an elite learning strategy are utilized to balance the diversity and convergence of the population. Experimental results on the CEC 2019 test suite reveal that MOEA/D-HSA performs better than a few state-of-the-art algorithms.
1. Introduction
MOPs, in which multiple objectives must be optimized simultaneously, occur frequently in real-world applications [1, 2]. In general, an MOP is defined as follows [3]:

$$\min\; F(x) = \left(f_1(x), f_2(x), \ldots, f_m(x)\right)^{T}, \quad \text{s.t. } x \in \Omega,$$

where $\Omega$ refers to the search space and $x = (x_1, x_2, \ldots, x_n)$ denotes an $n$-dimensional decision variable; $f_j$ indicates the $j$th ($j = 1, 2, \ldots, m$) objective, and $m$ refers to the number of objectives; $U_j$ and $L_j$ are the upper and lower bounds of $x_j$, respectively. In an MOP, a solution $x_1$ is said to dominate another solution $x_2$ if and only if $f_j(x_1) \le f_j(x_2)$ for all $j \in \{1, \ldots, m\}$ and $f_j(x_1) < f_j(x_2)$ for at least one $j$. If there exists no solution that dominates $x^{*}$, then $x^{*}$ is called a Pareto optimal solution [4, 5]. The set of all Pareto optimal solutions is called the PS [6]. The image of the PS in the objective space is termed the PF [7].
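The dominance relation defined above can be illustrated with a short check; the following sketch (the helper name `dominates` is ours, not from the paper) assumes minimization of all objectives:

```python
def dominates(f1, f2):
    """Return True if objective vector f1 Pareto-dominates f2 (minimization):
    f1 is no worse in every objective and strictly better in at least one."""
    return (all(a <= b for a, b in zip(f1, f2))
            and any(a < b for a, b in zip(f1, f2)))

# A solution that is better in every objective dominates:
print(dominates([1.0, 2.0], [1.5, 2.5]))  # True
print(dominates([1.0, 3.0], [1.5, 2.5]))  # False (worse in objective 2)
print(dominates([1.0, 2.0], [1.0, 2.0]))  # False (no strict improvement)
```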
A large number of MOPs with multimodal properties exist in practical applications [8], such as the scheduling problem [9], the epsilon-efficient solutions problem [10], the rocket engine problem [11], and the 0/1 knapsack problem [12]. This category of problems is termed MMOPs in [13]. In MMOPs, multiple different PSs map to the same PF. As depicted in Figure 1, an MMOP with two different Pareto regions (i.e., Region1 and Region2) in the decision space is presented. In Figure 1, all Pareto regions map to the same Pareto front in the objective space. For example, $A_1 \in$ Region1 and $A_2 \in$ Region2 correspond to the same point $A$ on the PF, i.e., $A_1$ and $A_2$ are equivalent Pareto optimal solutions. All PSs should be discovered when handling MMOPs.
In the past few years, numerous multimodal multiobjective evolutionary algorithms (MMOEAs) have been developed to tackle MMOPs, which can be grouped into the following types:

(i) Pareto-based MMOEAs: DN-NSGA-II [13], Niching-CMA [14], and Omni-optimizer [15] are three typical algorithms in this class. DN-NSGA-II proposed a niching method to deal with MMOPs, in which the crowding distance is adopted to find multiple PSs. Unlike DN-NSGA-II, Omni-optimizer calculates the crowding distance in both the decision and objective spaces to locate the equivalent PSs. In Niching-CMA, a niche function is constructed to replace the crowding distance of Omni-optimizer by summing the distances of the individuals in the two spaces. Experimental results indicated that Niching-CMA can improve the diversity in the decision space. Subsequently, a method similar to Omni-optimizer (named MO_Ring_PSO_SCD) was proposed to deal with MMOPs [16], where a ring topology and a special crowding distance are utilized to keep the population diversity. In addition, Liu et al. [17] proposed a penalty density strategy (called CPDEA) for solving MMOPs. In CPDEA, the individual density is calculated from the transformed Euclidean distance and used as the selection criterion to search for the equivalent PSs. They then proposed an MMOEA based on archives and a recombination strategy to find multiple PSs (called TriMOEA-TA&R) [18]. Fan et al. [19] devised a zoning search approach (ZS) to tackle MMOPs, where the whole space is divided into multiple subspaces and existing MMOEAs are executed in each subspace. The results revealed that ZS helps to improve the performance of MMOEAs in solving MMOPs. Inspired by this, Fan et al. proposed a zoning search approach with adaptive resource allocation [20] for balanced and imbalanced MMOPs. Lin et al. [21] presented a clustering-based MMOEA that is able to efficiently locate local PSs. Zhang et al. [22] devised a knee-based approach to deal with MMOPs, where a multicriteria decision strategy is introduced to search for equivalent PS regions. Afterwards, the repulsive force between isotropic magnetic individuals was introduced into MMOPs to locate different PSs by Zhang et al. [23, 24]. In addition, inspired by online learning [25, 26], Li et al. [27] devised a differential evolution based on reinforcement learning to handle MMOPs. However, these algorithms generally adopt Pareto-based methods to search for equivalent PSs, so they cannot deal with large-scale MMOPs.

(ii) Decomposition-based MMOEAs: as in MOEA/D [28], an MMOP is decomposed into a few simple subproblems through a group of evenly distributed weight vectors in decomposition-based MMOEAs. In [29], a diversity maintenance strategy combining the penalty boundary intersection technique [28] and two distance metrics is introduced into the decision space for locating equivalent PSs. Later, a similar concept was suggested to handle large-scale optimization problems in [30]. Tanabe et al. [31] designed a modified MOEA/D for MMOPs, which employs addition and deletion operators to search for equivalent PSs. In [32], this concept was expanded into a framework to increase the diversity of MMOEAs in the decision space. In [33], an evolutionary algorithm based on the graph Laplacian was presented to tackle MMOPs, which employs the decomposition technique in the two spaces to obtain equivalent Pareto solutions. Although the above algorithms can obtain multiple different PSs, they are unable to balance the population diversity in the decision and objective spaces well.

(iii) Indicator-based MMOEAs: the indicator-based approaches utilize chosen indicators to guide the population to converge to the PF. Li et al. [34] proposed a weighted indicator-based MMOEA (termed MMEA-WI), which devised an indicator to keep the diversity in the decision space and introduced a convergence archive for approximating the true PF. In [35], a niching indicator is designed to solve MMOPs, where individuals' fitness values are calculated to keep the diversity of the PSs. In summary, the indicator-based MMOEAs usually adopt a fitness indicator to keep the diversity in the decision space, but the calculation of the indicator consumes a large amount of computing resources, especially for large-scale MMOPs.
To sum up, although decomposition-based MMOEAs can find multiple equivalent PSs when dealing with MMOPs, they are unable to balance the diversity in the decision and objective spaces well. In other words, they increase the diversity in the decision space at the cost of performance in the objective space. To deal with this issue, a decomposition-based harmony search algorithm (named MOEA/D-HSA) is presented. In MOEA/D-HSA, multiple individuals assigned to the same weight vector form a subpopulation after an MMOP is decomposed. Then, an environmental selection method is used to guide the individuals in each subpopulation to find the equivalent PSs. Finally, a modified harmony search algorithm is utilized to generate the offspring, which helps to balance diversity and convergence. At the same time, an elite learning strategy is employed to prevent premature convergence. The test results on the CEC 2019 suite validate the performance of MOEA/D-HSA.
2. Materials and Methods
2.1. Motivation
The diversity in the decision and objective spaces is crucial in solving MMOPs. In the following, we take SYM-PART rotated, which has nine distinct PSs, as an example to explain the importance of the diversity in these two spaces. As depicted in Figures 2(a) and 2(b), all PSs are located in the decision space, but only three points are found on the PF. The reason is that the algorithm pays more attention to the diversity in the decision space. For Figures 2(c) and 2(d), although the obtained PF has good diversity and convergence, only one PS is located. When an algorithm is only concerned with the diversity in the objective space, the situation in Figures 2(c) and 2(d) may appear. Therefore, the following issues need to be considered to prevent the situations in Figure 2.

(1) How to preserve the diversity in the objective space? This problem has been widely studied in multiobjective evolutionary algorithms (MOEAs). Classic methods include the weighted Tchebycheff approach [28], the nondominated sorting method [36], and fitness indicator technology [37, 38]. The weighted Tchebycheff approach is selected to deal with this problem, because it divides the whole objective space into multiple subspaces by a group of evenly distributed weight vectors and can thus keep the diversity in this space. Besides, to balance the exploration and exploitation of the algorithm, a modified harmony search algorithm is adopted to generate the offspring, where the algorithm parameters such as the harmony memory size (HMS), harmony memory consideration rate (HMCR), and pitch adjusting rate (PAR) are dynamically adjusted.

(2) How to preserve the diversity in the decision space? To handle this problem, a subpopulation framework is adopted to search for different PSs, i.e., each weight vector corresponds to multiple individuals instead of one.
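The weighted Tchebycheff aggregation from [28] that is adopted in point (1) can be sketched as follows (function and variable names are ours; `ideal` holds the best objective value found so far in each dimension):

```python
def tchebycheff(f, weight, ideal):
    """Weighted Tchebycheff aggregation of an objective vector `f` with
    respect to a weight vector and the ideal point: max_j w_j * |f_j - z_j|.
    The subproblem minimizes this scalar value."""
    return max(w * abs(fj - zj) for w, fj, zj in zip(weight, f, ideal))

# Two solutions evaluated under the weight vector (0.5, 0.5), ideal point (0, 0):
print(tchebycheff([1.0, 3.0], [0.5, 0.5], [0.0, 0.0]))  # 1.5
print(tchebycheff([2.0, 2.0], [0.5, 0.5], [0.0, 0.0]))  # 1.0 -> preferred
```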
As shown in Figure 3, an MMOP is decomposed into a few simple subproblems by a group of evenly distributed weight vectors, and each weight vector is assigned to three distinct individuals to form a subpopulation (i.e., the red, yellow, and blue dots indicate distinct individuals assigned to the same weight vector). In this way, the individuals corresponding to the same weight vector search toward the equivalent PSs. Furthermore, preserving the diversity of each subpopulation is vital in MMOEAs. Therefore, an environmental selection method is designed to separate the individuals in a subpopulation that belong to the same PS.
2.2. The Proposed Algorithm
A decomposition-based harmony search algorithm for MMOPs, termed MOEA/D-HSA, is presented. In MOEA/D-HSA, first, the population P, the weight vectors W, and the subpopulations are initialized (lines 1–6). Then, the multiple individuals assigned to the same weight vector form a subpopulation (lines 7–9). Next, an offspring is generated by the modified harmony search algorithm and the elite learning strategy (line 13). Lastly, each subpopulation is dynamically adjusted by the environmental selection method, which helps to preserve the diversity in the decision space (line 14). Algorithm 1 depicts the framework of MOEA/D-HSA.

2.3. Reproduction
To balance the diversity and convergence of the population, a modified harmony search algorithm [39], shown in Algorithm 2, is used to generate the offspring. In Algorithm 2, first, the algorithm parameters (i.e., HMS, HMCR, PAR, and BW) are initialized. Then, HMCR, PAR, and BW are updated as follows [40, 41]:

$$\mathrm{HMCR} = \mathrm{HMCR}_{\min} + \left(\mathrm{HMCR}_{\max} - \mathrm{HMCR}_{\min}\right)\cdot\frac{FES}{MaxFES},$$

$$\mathrm{PAR} = \mathrm{PAR}_{\max} - \left(\mathrm{PAR}_{\max} - \mathrm{PAR}_{\min}\right)\cdot\frac{FES}{MaxFES},$$

where MaxFES is the maximum number of function evaluations, FES is the current number of function evaluations, $\mathrm{HMCR}\in[\mathrm{HMCR}_{\min},\mathrm{HMCR}_{\max}]$, and $\mathrm{PAR}\in[\mathrm{PAR}_{\min},\mathrm{PAR}_{\max}]$. With the help of these adaptive parameters, the proposed algorithm has a more extensive exploration ability in the early phase of evolution and a good exploitation ability in the late phase, meaning that MOEA/D-HSA can balance diversity and convergence well. Next, the harmony memory containing HMS individuals is constructed. Finally, the offspring is generated.
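A minimal sketch of one reproduction step is given below. The linear HMCR/PAR schedules, the (min, max) ranges, and the fixed bandwidth are illustrative assumptions standing in for the exact rules of [40, 41], not the paper's implementation:

```python
import random

def hs_offspring(memory, bounds, fes, max_fes,
                 hmcr=(0.70, 0.99), par=(0.05, 0.50)):
    """Generate one offspring with evaluation-dependent HMCR and PAR.

    `memory` is the harmony memory (a list of solutions), `bounds` a list
    of (L_j, U_j) pairs. Under the assumed linear schedules, HMCR grows
    (more exploitation) and PAR shrinks (smaller perturbations) as the
    evaluation budget is consumed.
    """
    t = fes / max_fes
    cur_hmcr = hmcr[0] + (hmcr[1] - hmcr[0]) * t
    cur_par = par[1] - (par[1] - par[0]) * t
    child = []
    for j, (lo, hi) in enumerate(bounds):
        if random.random() < cur_hmcr:        # memory consideration
            x = random.choice(memory)[j]
            if random.random() < cur_par:     # pitch adjustment
                x += random.uniform(-1.0, 1.0) * 0.01 * (hi - lo)
        else:                                 # random re-initialization
            x = random.uniform(lo, hi)
        child.append(min(max(x, lo), hi))     # repair to the box bounds
    return child
```

A fixed bandwidth of 1% of the variable range is used in the pitch adjustment for brevity; the paper adjusts this parameter adaptively as well.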

2.4. The Elite Learning Strategy
To avoid premature convergence, the elite learning strategy is employed to generate the offspring $v$, which increases the global search ability and helps the algorithm jump out of locally optimal locations:

$$v_j = b_j + \left(U_j - L_j\right)\cdot N\!\left(0, \sigma^2\right),$$

where $N(0, \sigma^2)$ is a Gaussian distribution with mean 0 and standard deviation $\sigma$, $b_j$ is the $j$th dimension variable of the best individual in the harmony memory, and $U_j$ and $L_j$ are the upper and lower bounds of $x_j$, $j = 1, \ldots, n$. In the early phase of the algorithm, the larger value of $\sigma$ gives the offspring a greater mutation range, which is helpful for global search. In the late phase of the algorithm, a small mutation range is helpful for local search. Therefore, $\sigma$ is updated as follows:

$$\sigma = \sigma_{\max} - \left(\sigma_{\max} - \sigma_{\min}\right)\cdot\frac{FES}{MaxFES},$$

where MaxFES is the maximum number of function evaluations, FES is the current number of function evaluations, $\sigma_{\max}$ and $\sigma_{\min}$ are the upper and lower bounds of $\sigma$, and $\sigma$ decreases as the iterations increase.
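The strategy can be sketched as follows; the (sigma_min, sigma_max) range and the helper names are assumptions for illustration:

```python
import random

def elite_learning(best, bounds, fes, max_fes, sigma=(0.1, 1.0)):
    """Gaussian elite learning around the best individual `best`.

    The standard deviation decays linearly from sigma_max to sigma_min
    as evaluations are consumed, mirroring the schedule in the text;
    the (0.1, 1.0) range is an assumed setting.
    """
    t = fes / max_fes
    s = sigma[1] - (sigma[1] - sigma[0]) * t      # large early, small late
    child = []
    for b, (lo, hi) in zip(best, bounds):
        x = b + (hi - lo) * random.gauss(0.0, s)  # scaled Gaussian step
        child.append(min(max(x, lo), hi))         # clip to the bounds
    return child
```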
2.5. Environmental Selection
For MOEA/D-HSA, it is vital to keep the diversity of the subpopulations in the decision space. To reach this goal, the environmental selection method presented in Algorithm 3 is employed to separate the individuals in a subpopulation that belong to the same PS. In Algorithm 3, when the subpopulation scale exceeds the preset size, the individual in the subpopulation that is nearest to the offspring $v$ is compared with $v$ by the weighted Tchebycheff approach, and the better individual is saved. Otherwise, $v$ directly enters the next generation. In the following, we take one subpopulation as an example to explain the working mechanism of the environmental selection. As depicted in Figure 4, an MMOP has three equivalent PSs, and all individuals within the triangles are Pareto optimal solutions. In Figure 4, the subpopulation consists of four different individuals (three existing individuals and the offspring $v$). When the subpopulation scale is larger than the preset size, the individual nearest to the offspring is compared with $v$, and the preferable one is preserved. Otherwise, $v$ is directly added into the subpopulation. In this way, MOEA/D-HSA can find multiple different PSs.
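A compact sketch of this greedy update is given below (names and the scalar `fitness` interface are ours):

```python
import math

def environmental_select(subpop, child, max_size, fitness):
    """Greedy subpopulation update for one offspring.

    If the subpopulation is not yet full, the offspring simply joins it.
    Otherwise, it competes only against its nearest neighbor in the
    decision space, so individuals sitting on other equivalent PSs are
    left untouched. `fitness` is a scalarizing function (e.g., the
    weighted Tchebycheff value); smaller is better.
    """
    if len(subpop) < max_size:
        return subpop + [child]
    nearest = min(range(len(subpop)),
                  key=lambda i: math.dist(subpop[i], child))
    if fitness(child) < fitness(subpop[nearest]):
        return subpop[:nearest] + [child] + subpop[nearest + 1:]
    return subpop
```

Because replacement is restricted to the nearest neighbor, a good offspring in one Pareto region cannot displace an individual that maintains a different, equally optimal region.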

2.6. Complexity Analysis
MOEA/D-HSA consists of the following components: subpopulation initialization, the reproduction operation, and the environmental selection. Let $N$ denote the population size, $m$ the number of objectives, and $n$ the decision variable dimension. The complexity of the subpopulation initialization and the environmental selection is $O(mN)$, and the complexity of the reproduction operation is $O(nN)$. In summary, considering $n > m$, the complexity of MOEA/D-HSA is $O(nN)$.
3. Results and Discussion
3.1. Performance Metric and Benchmark Test Functions
In our experiment, 22 MMOPs from the CEC 2019 competition [42] are utilized to examine the performance of the MMOEAs. In addition, the inverted generational distance in the objective space (IGDF) [43, 44] and the Pareto set proximity (PSP) [16] are utilized to assess the performance of the algorithms. The larger the PSP, the better the algorithm performs; for IGDF, the opposite holds. They are defined as follows:

(1) $$\mathrm{IGDF} = \frac{1}{\left|P^{*}\right|}\sum_{v \in P^{*}} \min_{u \in P} d(v, u),$$ where $d(v, u)$ denotes the Euclidean distance between $v$ and $u$, and $P^{*}$ and $P$ refer to the true PF and the obtained PF, respectively. IGDF can assess the diversity and convergence of the obtained solutions in the objective space.

(2) $$\mathrm{PSP} = \frac{\mathrm{CR}}{\mathrm{IGDX}}, \quad \mathrm{IGDX} = \frac{1}{\left|S^{*}\right|}\sum_{v \in S^{*}} \min_{u \in S} d(v, u),$$ where IGDX is the inverted generational distance in the decision space, $S^{*}$ and $S$ represent the true PS and the obtained PS, respectively, CR denotes the cover ratio between $S^{*}$ and $S$ (the details refer to [16]), and $d(v, u)$ is the Euclidean distance between $v$ and $u$. PSP can reflect the convergence and diversity of the population in the decision space.
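The shared IGD computation underlying both IGDF and IGDX can be sketched as follows (names are ours); PSP then follows as CR divided by IGDX, per [16]:

```python
import math

def igd(reference, obtained):
    """Inverted generational distance: the mean Euclidean distance from
    each reference point to its nearest obtained point. Computed over
    objective vectors this gives IGDF; over decision vectors, IGDX."""
    return sum(min(math.dist(v, u) for u in obtained)
               for v in reference) / len(reference)

# Tiny illustration: the obtained set hits one reference point exactly
# and is 1.0 away from the other, so IGD = (0 + 1) / 2 = 0.5.
ref = [(0.0, 0.0), (1.0, 1.0)]
got = [(0.0, 0.0), (1.0, 2.0)]
print(igd(ref, got))  # 0.5
```

A smaller IGD means the obtained set covers the reference set more closely, which is why smaller IGDF (and, through PSP, smaller IGDX) is better.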
3.2. Competitive Algorithms and Parameter Setup
To test the efficiency of the algorithm, MOEA/D-HSA is compared with five state-of-the-art MMOEAs: DN-NSGA-II [13], TriMOEA-TA&R [18], LORD [33], MOEA/D [28], and MO_Ring_PSO_SCD [16]. For a fair comparison, the population size is set to 800, all algorithms are independently run 25 times, and the maximum number of function evaluations is set to 80000 [16]. In addition, the parameter settings of all competitors are consistent with their original references. For MOEA/D-HSA, the algorithm parameters are set according to [40, 41, 45].
Furthermore, Wilcoxon's rank-sum test [46] and Friedman's test [47] are utilized to draw statistically reliable conclusions. The symbols "+," "−," and "=" denote that MOEA/D-HSA is distinctly better than, worse than, and not significantly different from the corresponding competitor, respectively.
3.3. Comparison with Other Algorithms
The experimental results on PSP are shown in Table 1. As indicated in Table 1, MOEA/D-HSA achieves the best PSP on 13 test functions and defeats DN-NSGA-II and MOEA/D on all test functions. MO_Ring_PSO_SCD gets the maximum PSP on six of the 22 test functions. LORD acquires the best performance on two test functions. TriMOEA-TA&R gets the maximum PSP on one test function. Although MOEA/D-HSA is not the best on some test functions, the differences are small and acceptable. Therefore, the experimental results suggest that MOEA/D-HSA can find multiple PSs with good distribution and convergence.
The experimental results on IGDF are listed in Table 2. In Table 2, MOEA/D-HSA gets the smallest IGDF on six test functions and defeats TriMOEA-TA&R on most test functions. Moreover, MO_Ring_PSO_SCD is the best on six of the 22 test functions. DN-NSGA-II acquires the best IGDF on four test functions. MOEA/D and LORD get the minimum IGDF on three test functions each. In conclusion, MOEA/D-HSA not only outperforms the competitors on most test functions in terms of IGDF but also approximates the true PF well.
3.4. Visual Comparison
The final population distributions of all algorithms are shown in Figure 5 for MMF4 and Figure 6 for SYM-PART rotated. As illustrated in Figure 5, each algorithm except MOEA/D is able to locate all PS regions, but TriMOEA-TA&R has poor convergence in the decision space, and DN-NSGA-II achieves the fewest Pareto optimal solutions in each PS region. In addition, compared with the other algorithms, MOEA/D-HSA not only locates more Pareto optimal solutions on each PS but also obtains PSs with good diversity and convergence.
Besides, for SYM-PART rotated with nine equivalent PSs, as depicted in Figure 6, all algorithms have good convergence in the decision space, but MOEA/D and DN-NSGA-II cannot locate all PSs. Furthermore, MO_Ring_PSO_SCD and MOEA/D-HSA perform similarly on SYM-PART rotated. Compared with the remaining competitors, MOEA/D-HSA not only searches out more Pareto optimal solutions on each PS but also shows good convergence. In summary, based on the visual comparison, MOEA/D-HSA can find multiple PSs with good distribution and convergence.
To further verify the effectiveness of each component, two MOEA/D-HSA variants are tested. SS-variant1 denotes MOEA/D-HSA without the elite learning strategy. SS-variant2 denotes MOEA/D-HSA in which the individual with the largest Tchebycheff value in the subpopulation is deleted instead of applying the environmental selection operation. Table 3 records the average rankings of PSP obtained by MOEA/D-HSA and its variants on all test functions. As depicted in Table 3, MOEA/D-HSA owns the best ranking. Therefore, it can be concluded that each component of the proposed algorithm collaborates with the others to enhance its overall performance.
3.5. Parameter Analysis
All parameters except the subpopulation size are adaptively adjusted, so only the subpopulation size is discussed in this section. MOEA/D-HSA with different subpopulation sizes is tested on all problems, and Table 4 records the rankings of the Friedman test. From Table 4, it can be seen that the performance of MOEA/D-HSA is best when the subpopulation size is set to 5. Therefore, the subpopulation size is set to 5 in the proposed algorithm.
3.6. Analysis of Running Time
To compare running times, all algorithms were run 25 times on each test function, and the normalized wall-clock time of all algorithms is given in Table 5. As revealed in Table 5, TriMOEA-TA&R and MOEA/D have the shortest running time on 16 and 6 test functions, respectively, and LORD takes the longest running time on all test functions. In addition, compared with the remaining algorithms, MOEA/D-HSA achieves the shortest running time on most test functions. It can be concluded that although the running time of MOEA/D-HSA is not the shortest overall, the difference is acceptable.
4. Conclusions
A simple but effective method, MOEA/D-HSA, is developed to tackle MMOPs. In MOEA/D-HSA, first, multiple individuals are assigned to the same weight vector to form a subpopulation for finding equivalent PSs. Then, an environmental selection method based on greedy selection is used to dynamically adjust the subpopulation size. Finally, a modified harmony search algorithm and an elite learning strategy are utilized to balance the diversity and convergence of the population. The experimental results show that MOEA/D-HSA is superior to the selected popular algorithms with respect to the IGDF and PSP values.
Besides, the dimension of the test problems in this article is low. Extending MOEA/D-HSA to more complicated problems or real-world applications [48, 49], such as neural architecture search problems, is a meaningful direction for future study.
Data Availability
Data and code can be made available upon request to [email protected].
Conflicts of Interest
The authors declare that there are no conflicts of interest.
Acknowledgments
This work was supported in part by the National Natural Science Foundation of China under Grants 61772391 and 62106186, in part by the Natural Science Basic Research Plan in Shaanxi Province of China under Grant 2022JQ670, and in part by the Fundamental Research Funds for the Central Universities under Grant QTZX22047.