Abstract

This paper proposes a constrained solution update strategy for decomposition-based multiobjective evolutionary algorithms, in which each agent aims to optimize one decomposed subproblem. Different from the existing approaches that assign exactly one solution to each agent, our approach allocates to each agent only the solutions closest to its subproblem, so the number of solutions held by an agent may be zero, one, or more than one. An agent with no solution is assigned one with priority as soon as an offspring closest to its subproblem is generated. To keep the population size unchanged, the agent with the largest number of solutions removes one solution showing the worst convergence. This improves the diversity of that agent without lowering the convergence of the others. For an agent that already holds at least one solution, the offspring assigned to it are only allowed to update its own solutions. Thus, the convergence of this agent is enhanced, while the diversity of the other agents is not affected. After a period of evolution, our approach may gradually reach a stable status of solution assignment, i.e., each agent is assigned exactly one solution. Experiments on two sets of test problems validate the advantages of our approach over six competitive multiobjective evolutionary algorithms with different population selection or update strategies.

1. Introduction

In real-world applications, one often needs to handle multiobjective optimization problems (MOPs) [1], such as recommendation systems [2, 3], privacy computing [4], and resource assignment [5-7]. Due to the conflicts among different objectives, an MOP has a set of Pareto-optimal solutions (PS), whose mapping in the objective space is called the Pareto front (PF) [8-10]. These MOPs may be characterized by complicated features [11-13], which cannot be well solved by traditional mathematical methods. Instead, multiobjective evolutionary algorithms (MOEAs) can effectively obtain a set of solutions in a single run; they have shown very promising performance in tackling different kinds of MOPs [14-16] and have become very popular in recent decades.

In the design of MOEAs, evolution and selection are the two important mechanisms [17-19]. The first one modifies individuals in order to approach the true PF, while the second one selects the most promising individuals to constitute the new population for the next generation. Based on the selection mechanisms, most MOEAs can be classified into three types, i.e., Pareto-based MOEAs [20-23], indicator-based MOEAs [24-29], and decomposition-based MOEAs [30-36]. Compared to the selection operators used in Pareto-based and indicator-based MOEAs, decomposition-based MOEAs are able to provide more flexibility to balance convergence and diversity [37] and have been found to provide better performance when tackling some complicated MOPs, as reported in [38]. In this class of MOEAs, the target MOP is decomposed into a set of subproblems, which are solved simultaneously by a set of cooperative agents; in MOEA/D [39], each agent aims to optimize one subproblem. Due to the simplicity and effectiveness of MOEA/D, this framework has triggered a considerable amount of research aiming to improve different components of MOEA/D, such as the adjustment and generation of weight vectors [40-44], dynamic resource allocation [45-47], enhanced evolutionary operators [48-50], and improved population selection or update mechanisms [51-57].

In particular, regarding the population selection or update mechanisms for decomposition-based MOEAs, the offspring in MOEA/D [39] are allowed to update any solution in the population. However, this method may significantly lower diversity, since one very good solution may replace most of the others within a few generations. In MOEA/D-DE [58], the solution update is controlled by two preset parameters δ and nr, which yields a better balance of convergence and diversity: an offspring is only allowed to update parent solutions from the neighborhood with probability δ and from the entire population with probability (1 - δ), and it can replace at most nr parent solutions. This strategy has been widely adopted in subsequent designs of decomposition-based MOEAs [49, 54]. Different from the decomposition approach of MOEA/D-DE, MOEA/D-M2M [38] separates the search space into multiple search subspaces, which simplifies the solving of MOPs in each subspace, and the solution update is constrained by keeping an equal number of solutions in each subspace. Thus, MOEA/D-M2M was shown to be very effective for complicated MOPs that strongly emphasize diversity (i.e., the MOP problems [38]). To further find a better match between solutions and subproblems, a stable matching model was proposed in MOEA/D-STM [59], which associates solutions with subproblems according to their respective preferences; in this way, MOEA/D-STM can maintain a good convergence speed and population diversity. Similarly, an improved interrelationship model was designed in MOEA/D-IR [37] to associate solutions with subproblems based on their mutual preferences, which is essentially a diversity-first and convergence-second strategy. Moreover, two improved versions of MOEA/D-STM [53] were proposed to embed the concept of incomplete preference lists into the stable matching model, which further strengthens diversity. In [51], an adaptive replacement neighborhood size was proposed to assign an offspring to its most appropriate subproblems, obtaining a better balance of convergence and diversity. In MOEA/D-ACD [54], an adaptive constrained decomposition approach was presented, in which the update regions of the decomposition approach are constrained to maintain diversity. Moreover, to further enhance the performance on MOPs with more than three objectives, the decomposition approach and Pareto domination were used simultaneously in MOEA/DD [44], decomposition-based-sorting and angle-based-selection approaches were proposed in MOEA/D-SAS [57], and diversity was preferred in the solution update of [60] by selecting certain closest subproblems for an offspring.

On the other hand, another line of work on population selection or update in MOEA/D aims to improve the decomposition functions used. In MOEA/D [39], three traditional decomposition functions were employed, i.e., the weighted sum (WS) approach, the Tchebycheff (TCH) approach, and the penalty-based boundary intersection (PBI) approach. In [61, 62], a local PBI and a local WS were, respectively, designed to constrain the update regions of the decomposition approaches, which avoids the loss of diversity. In [63, 64], adaptive Pareto front scalarizing (PaS) and penalty-based boundary intersection (PaP) decomposition approaches were, respectively, introduced to match true PFs with various shapes. Two decomposition approaches were presented in MOEA/AD [65] and DECAL [66] to deal with complicated PFs: in MOEA/AD, two coevolved populations are, respectively, updated by two decomposition functions to fit different PF shapes, while in DECAL two novel decomposition functions are, respectively, used to accelerate the convergence speed and to enhance the population diversity. Recently, MOEA/D-LTD [67] was proposed to trace the PF shape, in which a learning module predicts the PF shape and the decomposition function is adaptively adjusted to fit it.

Most of the above MOEAs abide by one basic principle: each agent should be assigned one solution in order to find the optimal value of its subproblem. However, this kind of solution assignment may not be effective or efficient in decomposition-based MOEAs, as the solution assigned to an agent may be far away from its subproblem. In this case, it cannot truly reflect the diversity of each agent and cannot provide the correct neighboring information during evolution, which may slow down the convergence, since decomposition-based MOEAs are designed as an essentially collaborative evolutionary framework. Therefore, a constrained solution update (CSU) strategy is designed in this paper for decomposition-based MOEAs to alleviate the above problem. Each solution is assigned only to the agent that handles its closest subproblem. In this way, the correct neighboring information can be provided to guide the evolution, and the diversity of each agent is shown in a straightforward manner. As a consequence, the number of solutions in each agent may be zero, one, or more than one. To maintain the diversity of each agent, the offspring assigned to an agent are only allowed to renew its original solutions. When an agent has no solution, it is assigned one with priority as soon as an offspring closest to its subproblem is generated. To keep the same population size, the agent with the largest number of solutions then removes one solution showing the worst convergence. Thus, the diversity of one agent is enhanced, while the convergence of the other agents is not affected. After a period of evolution, a stable status of solution assignment is anticipated, in which each agent holds exactly one solution. When compared to the existing population selection or update strategies for decomposition-based MOEAs, our experiments validate the superiority of the proposed approach in tackling two sets of complicated test MOPs.

The main contributions of this paper are clarified below.
(1) Each agent may be assigned no solution, one solution, or more than one solution, which is different from the existing approaches that assign exactly one solution to each agent. This assignment truly reflects the diversity of the agents and provides the correct neighboring information during evolution.
(2) A CSU strategy is designed for each agent in order to maintain diversity across all the agents without affecting their convergence. The agent with no solution is served first, while the agent with the largest number of solutions removes one solution showing the worst convergence. In this way, a stable status of solution assignment may be reached, in which each agent holds exactly one solution, ensuring diversity in decomposition-based MOEAs.
(3) When the solution assignment is in an unstable status, i.e., at least one agent has not been assigned any solution, the mating parents are randomly selected from the best solutions of all the agents, since a neighboring agent may have no solution. This random selection of mating parents helps to enhance the exploration ability of our algorithm.

The rest of this paper is organized as follows. Section 2 provides the related background, i.e., MOPs and the decomposition function used in this paper. Section 3 introduces the details of the proposed algorithm MOEA/D-CSU. The experimental results and discussions are provided in Section 4, while the conclusions and some future research directions are given in Section 5.

2.1. Multiobjective Optimization Problems

Multiobjective optimization problems often need to optimize several conflicting objectives, which can be modeled by

minimize F(x) = (f_1(x), f_2(x), ..., f_m(x))^T, subject to x ∈ Ω,   (1)

where x = (x_1, x_2, ..., x_n) is an n-dimensional decision vector in the decision space Ω and m is the number of objectives. The target of the MOP in (1) is to minimize all the objectives simultaneously.
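
To make the formulation concrete, the following hypothetical bi-objective example (our own toy function, not one of the benchmark problems used later) shows two objectives that conflict through the first decision variable, so no single x minimizes both at once.

```python
import numpy as np

def toy_mop(x):
    """A hypothetical bi-objective MOP used only for illustration.

    Both objectives are to be minimized over x in [0, 1]^n; improving f1
    (by decreasing x[0]) worsens f2, so the two objectives conflict and the
    problem has a set of Pareto-optimal trade-off solutions rather than a
    single optimum.
    """
    f1 = x[0]
    f2 = 1.0 - x[0] + float(np.sum(x[1:] ** 2))
    return np.array([f1, f2])

# Two incomparable solutions: neither dominates the other.
print(toy_mop(np.array([0.2, 0.0, 0.0])))  # [0.2, 0.8]
print(toy_mop(np.array([0.7, 0.0, 0.0])))  # [0.7, 0.3]
```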

2.2. The Decomposition Function

In this paper, the modified Tchebycheff method [55] is used for decomposing the MOP in (1), which is defined by

g^tch(x | λ, z*) = max_{1 ≤ j ≤ m} { |f_j(x) − z*_j| / λ_j },   (2)

where λ = (λ_1, ..., λ_m) is a preset weight vector with λ_j ≥ 0 for each j ∈ {1, ..., m} and Σ_{j=1}^{m} λ_j = 1, while z* = (z*_1, ..., z*_m) is the ideal point obtained by setting z*_j = min{f_j(x) | x ∈ Ω} for each j ∈ {1, ..., m}. When using N uniformly distributed weight vectors in (2), the MOP in (1) is decomposed into a set of N subproblems, which can be solved by a set of N collaborative agents. The population selection or update strategies designed in decomposition-based MOEAs are responsible for reasonably allocating the solutions to the agents [39]. Different from the existing approaches [39, 58] that assign one solution to each agent, an agent in our approach is allocated only the solutions that are closest to its subproblem, so the number of solutions in each agent may be zero, one, or more than one. To show (2) more visually, a case of solution update is depicted in Figure 1, where s1 is a solution in the current population while s2 and s3 are two offspring. In this case, s3 can update the subproblem but s2 cannot, because the yellow region is the improvement domain of s1 determined by the weight vector and (2), and only a solution such as s3 that falls into this region can update the subproblem. In fact, (2) decides the profile of this region [54].
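
For illustration, a minimal Python sketch of this aggregation is given below; it assumes the form of (2) above, and the numerical values are hypothetical.

```python
import numpy as np

def modified_tchebycheff(f, weight, z_star, eps=1e-10):
    """Aggregated value of objective vector f for one subproblem, as in (2).

    f, weight, z_star are 1-D arrays of length m (number of objectives);
    a small eps guards against division by a zero weight component.
    """
    return np.max(np.abs(f - z_star) / (weight + eps))

# A hypothetical bi-objective case in the spirit of Figure 1.
weight = np.array([0.5, 0.5])
z_star = np.array([0.0, 0.0])
parent = np.array([0.60, 0.50])
offspring = np.array([0.45, 0.40])
# The offspring replaces the parent on this subproblem if its value is smaller.
print(modified_tchebycheff(offspring, weight, z_star) <
      modified_tchebycheff(parent, weight, z_star))
```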

3. Our Algorithm: MOEA/D-CSU

Let λ^1, λ^2, ..., λ^N be N weight vectors and A_i denote the i-th agent, which aims to optimize the subproblem in (2) associated with the weight vector λ^i (i = 1, 2, ..., N). In this paper, we classify the status of solution assignment into two kinds, i.e., a stable status (each A_i is assigned exactly one solution) and an unstable status (at least one A_i is not assigned any solution). Generally, an initial population often starts from the unstable status, and the purpose of our CSU strategy is to reach the stable status, which properly maintains the diversity of each agent.

3.1. Our CSU Strategy

Let P and O, respectively, denote the parent population and the offspring population. At each generation, the solution set from P assigned to agent A_i is denoted by P_i (i = 1, 2, ..., N), while the solution set from O assigned to agent A_i is denoted by O_i (i = 1, 2, ..., N). In this paper, P_i and O_i are obtained using the closest vector angles to the weight vector λ^i of agent A_i, as follows:

P_i = {x ∈ P | angle(F(x) − z*, λ^i) ≤ angle(F(x) − z*, λ^j) for all j ∈ {1, ..., N}},   (3)

O_i = {x ∈ O | angle(F(x) − z*, λ^i) ≤ angle(F(x) − z*, λ^j) for all j ∈ {1, ..., N}},   (4)

where z* = (z*_1, ..., z*_m) (m is the number of objectives) is the utopian objective vector approximated by the minimal objective values from the current parent and offspring populations, i.e., z*_j = min{f_j(x) | x ∈ P ∪ O} for each j ∈ {1, ..., m}, and angle(u, v) indicates the acute angle between two vectors u and v, as defined by

angle(u, v) = arccos(u·v / (||u|| ||v||)).   (5)

The design principle of our method is simple and effective. When O_i is empty, P_i will not be updated. Otherwise, the offspring assigned to each agent are only allowed to renew its original solutions; i.e., the solutions in O_i can only renew P_i, which speeds up the convergence of A_i, while the diversity of the other agents is not affected, since the solutions in O_i are not allowed to update the solutions of the other agents. In more detail, two cases for P_i are considered when O_i is not empty, i.e., |P_i| = 0 and |P_i| ≥ 1, where |P_i| indicates the size of P_i.
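
As a concrete illustration of (3)-(5), the following Python sketch (our own naming, not the paper's code) groups a set of objective vectors by the weight vector forming the smallest acute angle with F(x) − z*, so that an agent may end up with zero, one, or several solutions.

```python
import numpy as np

def acute_angle(u, v):
    """Acute angle (in radians) between two vectors, as in (5)."""
    cos_val = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
    return np.arccos(np.clip(cos_val, -1.0, 1.0))

def assign_to_agents(objs, weights, z_utopian):
    """Group solution indices by the closest weight vector in angle, as in (3)-(4).

    objs: (num_solutions, m) objective values; weights: (N, m) weight vectors;
    z_utopian: the utopian point z* estimated from the current populations.
    Returns a list of N index lists, one per agent, which may be empty or hold
    several indices, mirroring the assignment used for P_i and O_i.
    """
    groups = [[] for _ in range(len(weights))]
    for idx, f in enumerate(objs):
        angles = [acute_angle(f - z_utopian, w) for w in weights]
        groups[int(np.argmin(angles))].append(idx)
    return groups
```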

In the case with |P_i| = 0, since agent A_i has not been assigned any solution before, the solution in O_i with the best aggregated value according to (2) is assigned to A_i. To keep the same population size, the agent A_k (k ≠ i) with the largest number of solutions is found, and then the solution in P_k with the worst aggregated value in (2) is removed. Please note that if more than one agent has the same largest number of solutions, one of them is randomly selected to remove one worst solution. In this way, agent A_i is assigned one solution to optimize its subproblem, which enhances its diversity, while the convergence of the other agents (e.g., A_k) is not affected.

In the other case with |P_i| ≥ 1, the solutions in P_i and O_i are combined into U and sorted in ascending order of their aggregated values in (2). The first |P_i| solutions are then selected from U to compose a new P_i, which keeps the number of solutions of agent A_i unchanged. In this way, the convergence of agent A_i is enhanced, while the diversity of the other agents is not affected, since the offspring in O_i are not allowed to update them.

With the above operations, the number of solutions assigned to each agent will gradually be reduced to exactly one, once a solution is generated around its subproblem, so the approach may finally reach the stable status. Note that the stable status may be unreachable when solving MOPs with complicated PFs, e.g., disconnected and degenerated PFs. To further clarify the CSU strategy, its pseudocode is given in Algorithm 1. Please note that Algorithm 1 returns the updated population P and the status of solution assignment.

(1) get P_i and O_i (i = 1, ..., N) respectively from P and O with Eqs. (3)-(4)
(2) for i = 1 to N
(3) if |O_i| != 0
(4) if |P_i| == 0
(5) find the solution x with the minimal value of Eq. (2) in O_i
(6) add x into P_i
(7) find one agent A_k with the largest number of solutions
(8) remove the solution with the worst value of Eq. (2) from P_k
(9) else
(10) let s = |P_i|, U = P_i ∪ O_i, and set P_i as an empty set
(11) sort the solutions in U ascendingly by their aggregated values of Eq. (2)
(12) select the first s solutions from U to compose a new P_i
(13) end if
(14) end if
(15) end for
(16) collect all the P_i to compose a new P
(17) if each P_i is not empty
(18) status = True // solution assignment is under the stable status
(19) end if
(20) return [P, status]
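
For readers who prefer code to pseudocode, a compact Python sketch of the same update is given below; it is a simplified illustration with our own identifiers (solutions are generic Python objects and agg_value(x, i) stands for the aggregated value of (2) on subproblem i), not the authors' implementation.

```python
import random

def csu_update(P_groups, O_groups, agg_value):
    """A sketch of Algorithm 1. P_groups / O_groups are lists of N solution
    lists already assigned to the agents by (3)-(4); agg_value(x, i) returns
    the aggregated value of solution x on subproblem i using (2). Returns the
    updated groups and the stability flag of the solution assignment."""
    N = len(P_groups)
    for i in range(N):
        if not O_groups[i]:                          # line (3): no offspring closest to agent i
            continue
        if not P_groups[i]:                          # lines (4)-(8): agent i held no solution
            best = min(O_groups[i], key=lambda x: agg_value(x, i))
            P_groups[i].append(best)
            # keep the population size unchanged: drop the worst solution of a
            # most crowded agent (ties broken at random, as in line (7))
            largest = max(len(g) for g in P_groups)
            k = random.choice([j for j in range(N) if len(P_groups[j]) == largest])
            worst = max(range(len(P_groups[k])),
                        key=lambda idx: agg_value(P_groups[k][idx], k))
            P_groups[k].pop(worst)
        else:                                        # lines (9)-(12): offspring may only renew P_i
            size = len(P_groups[i])
            union = P_groups[i] + O_groups[i]
            union.sort(key=lambda x: agg_value(x, i))
            P_groups[i] = union[:size]
    status = all(len(g) > 0 for g in P_groups)       # lines (17)-(19)
    return P_groups, status
```

Note that when an empty agent receives a solution, at least one other agent must hold two or more solutions, so removing the worst solution of the most crowded agent never empties another agent.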
3.2. The Used Recombination Operator

In this paper, the evolutionary operator [68] of MOEA/D-M2M is used, whose crossover and mutation are, respectively, defined in (6) and (7), where x and z are the decision variables from two parents and y is that of an offspring, while u and l are, respectively, the upper and lower bounds of that decision variable. In the crossover operator (6), r1 and r2 are two random real numbers generated respectively from (-1, 1) and (0, 1), and the index used in (6) is set to -(1 - g/g_max)^0.7, where g and g_max are, respectively, the current generation and the maximum number of generations. In the mutation operator (7), r3 is a random real number produced from (-0.25, 0.25). When y exceeds the parameter boundary, a repair operation defined in (8) and (9) is executed, in which a random real number generated from (-0.5, 0.5) is used.

3.3. MOEA/D-CSU

In this section, our CSU strategy is embedded into a general framework of decomposition-based MOEAs, named MOEA/D-CSU. Its pseudocode is provided in Algorithm 2. In line (1), N weight vectors are generated, the generation counter g is initialized to 1, the value of status is initialized to False (indicating the unstable status of solution assignment), the offspring population O is initialized as an empty set, and an initial population P is generated randomly. In line (2), P_i for each agent is obtained from P by (3). While g is smaller than the preset maximum number of generations g_max, the evolution and selection procedures in lines (4)-(15) are run. For each subproblem i in line (4), it is checked in line (5) whether the status of solution assignment is stable. If it is not, the best solution of each nonempty P_i is collected to form the mating pool in line (6). Otherwise, the neighbors of subproblem i, determined by the Euclidean distances between the weight vectors, are set as the mating pool in line (8). Here, the neighborhood size T in line (8) is dynamically adjusted according to the number of generations using (10). After that, an offspring o_i is generated in line (10) using the recombination operators defined in (6)-(9) based on p_i and the mating parents, and is evaluated in line (11) to obtain its objective values, which are used to update the approximated ideal point in (2). In line (12), this offspring o_i is added into the offspring population O. After all the offspring are collected into O, the CSU strategy (Algorithm 1) is run in line (14) with the inputs P, O, and N to obtain a new population P. In line (15), the value of g is increased by 1 and the offspring population O is reset to an empty set. The above evolutionary process terminates when g reaches g_max, and the final population P is reported.

(1) generate N weight vectors λ^1, ..., λ^N, initialize g = 1, status = False, and an empty offspring population O, and generate an initial population P randomly
(2) get P_i (i = 1, ..., N) from P with (3)
(3) while g < g_max
(4) for i = 1 to N
(5) if status == False
(6) collect the best solution of each nonempty P_i to form the mating pool
(7) else
(8) set the neighbors of subproblem i as the mating pool
(9) end if
(10) generate o_i with p_i and the parents from the mating pool
(11) evaluate the objectives of o_i and update z* in (2)
(12) add o_i into the offspring population O
(13) end for
(14) [P, status] = CSU(P, O, N)
(15) set g = g + 1 and initialize O as an empty set
(16) end while
(17) return P
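
To complement the pseudocode, the following Python sketch (our own illustration, not the authors' code) shows the mating-pool construction of lines (5)-(9): under the unstable status the pool gathers the best solution of every non-empty agent, whereas under the stable status it uses the solutions of the T agents with the closest weight vectors, with T given by (10).

```python
import numpy as np

def build_mating_pool(i, P_groups, weights, status, T, agg_value):
    """Mating-pool construction of Algorithm 2, lines (5)-(9) (a sketch with
    our own identifiers). agg_value(x, j) is the value of (2) on subproblem j
    and weights is an (N, m) array of weight vectors."""
    if not status:
        # Unstable assignment: take the best solution of every non-empty agent,
        # since a neighboring agent may currently hold no solution at all.
        return [min(group, key=lambda x: agg_value(x, j))
                for j, group in enumerate(P_groups) if group]
    # Stable assignment: each agent holds exactly one solution, so take the
    # solutions of the T agents whose weight vectors are closest (in Euclidean
    # distance) to the weight vector of subproblem i.
    neighbors = np.argsort(np.linalg.norm(weights - weights[i], axis=1))[:T]
    return [P_groups[j][0] for j in neighbors]
```

The mating parents used by the recombination operators in (6)-(9) are then drawn at random from the returned pool.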

4. Experimental Results

4.1. Benchmark Problems and Parameters Settings

In this study, two complicated test suites (MOP [38] and IMB [69]) were used to assess the performance of MOEA/D-CSU, including MOP1-MOP7 and IMB1-IMB10. They have complicated mathematical features in their PS shapes. Please note that MOP1-MOP5, IMB1-IMB3, and IMB7-IMB9 have two optimization objectives, while MOP6-MOP7, IMB4-IMB6, and IMB10 include three optimization objectives. The number of decision variables is set to 10 for all the test problems. For the biobjective and three-objective test problems, the population sizes were, respectively, set to 100 and 300 as suggested in [38], while the maximum numbers of function evaluations were, respectively, set to 3×10^5 and 9×10^5. The performance of MOEA/D-CSU is compared to six competitive MOEA/Ds with different population selection or update strategies, i.e., MOEA/D-M2M [38], MOEA/D-STM [59], MOEA/D-AGR [51], MOEA/D-IR [37], MOEA/D-DE [58], and MOEA/D-ACD [54]. Please note that MOEA/D-M2M, MOEA/D-AGR, and MOEA/D-CSU are run in Matlab, while the remaining algorithms are implemented in jMetal [70]. The parameters in all the compared algorithms were set as recommended in their original references. The crossover and mutation probabilities in our algorithm were set to 1.0 and 1/n for (6) and (7), respectively, as suggested in [38].

4.2. Performance Measures

In this paper, in order to provide a comprehensive assessment of the performance of all the competitors, two widely used performance indicators, i.e., the inverted generational distance (IGD) [71] and the hypervolume (HV) [71], were adopted to measure the convergence and the diversity of the final solution set. A lower IGD value and a larger HV value indicate better performance in approaching the true PF and in spreading the solutions uniformly along it. When computing the IGD indicator, no less than 500 sampling points from the true PF were used. For the HV calculation, the reference point was set to 1.1 times the upper bound of the true PF for both the biobjective and three-objective problems, as suggested in [71].
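
As a reference for how the first indicator is computed, a minimal sketch of the standard IGD definition is given below; it is only an illustration, not the implementation used in the experiments.

```python
import numpy as np

def igd(reference_front, obtained_set):
    """Inverted generational distance in its usual form: the average Euclidean
    distance from each sampled point of the true PF to its nearest obtained
    solution (smaller is better). Inputs are (num_points, m) arrays."""
    dists = np.linalg.norm(
        reference_front[:, None, :] - obtained_set[None, :, :], axis=2)
    return float(dists.min(axis=1).mean())
```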

All the algorithms were run 30 times, and the mean results and standard deviations were collected for comparison. In order to draw statistically sound conclusions, Wilcoxon's rank sum test with a 5% significance level was conducted to assess the statistical significance of the differences between the results obtained by MOEA/D-CSU and those of the other competitors.

4.3. Performance Comparisons with Six Competitive MOEA/Ds

Table 1 gives all the mean IGD results and standard deviations on MOP and IMB test problems, where the best mean result for each problem is highlighted in boldface. The last row “Better/Worse/Similar” in Table 1 summarizes the numbers of test problems in which MOEA/D-CSU, respectively, performed better than, worse than, and similarly to its competitors.

From Table 1, it is observed that MOEA/D-CSU performed best on most of the MOP and IMB test problems. As these problems were designed with complicated mathematical features that require more diversity in the population, MOEA/Ds that only emphasize convergence easily get trapped in local PFs. That is the reason why MOEA/D-STM and MOEA/D-DE performed poorly, obtaining IGD results mostly at an accuracy level of only 10^-1. Other competitors, e.g., MOEA/D-M2M, MOEA/D-AGR, MOEA/D-ACD, and MOEA/D-IR, were designed to put more emphasis on diversity, and they performed much better, obtaining IGD results mostly at an accuracy of 10^-2, which is still not very close to the true PFs. Since the proposed CSU strategy strongly emphasizes diversity while hardly impairing convergence, MOEA/D-CSU properly converged to the true PFs, obtaining IGD results at an accuracy of 10^-3 for half of the adopted test problems. On MOP1 to MOP7, MOEA/D-CSU obtains all the best results; in particular, some results reach an accuracy of 10^-3, while the competitors cannot converge to the PF well. On the IMB test problems, MOEA/D-CSU is superior except on IMB4 and IMB10. On IMB4, MOEA/D-CSU is worse than MOEA/D-AGR and MOEA/D-IR, similar to MOEA/D-ACD, and better than the remaining algorithms. On IMB10, MOEA/D-STM obtains the best result and MOEA/D-DE also performs quite well, which indicates that convergence is important on IMB10. To summarize the experimental results in Table 1, MOEA/D-CSU is superior to the competitors on most of the test problems. As shown in the last row "Better/Worse/Similar", when compared to each of the six competitive MOEA/D variants, MOEA/D-CSU performs better in at least 15 cases and worse in at most 2 cases, which indicates its outstanding ability to balance convergence and diversity on the adopted test problems. Moreover, the HV results provided in Table 2 also confirm the advantages of MOEA/D-CSU, as it performs best in most of the cases.

To visually show the performance, the best nondominated solution sets obtained by MOEA/D-CSU over 30 runs are plotted in Figure 2, where the circles indicate the obtained solutions, while the lines and grids represent the true PFs of the biobjective and three-objective test problems, respectively. On the test problems with continuous PFs (i.e., MOP1-MOP3, MOP5-MOP7, and IMB1-IMB10), MOEA/D-CSU can reach the stable status and find all the optimal values for the agents. Even on MOP4, which has a disconnected PF, MOEA/D-CSU could properly approach all the segments of the true PF. From these plots, it is reasonable to conclude that the proposed CSU strategy is very effective in tackling complicated test problems, such as MOP and IMB.

5. Conclusions and Future Work

In this paper, an enhanced decomposition-based MOEA with a CSU strategy was presented. Each agent in our approach aims to optimize one subproblem and is allocated only the solutions that are closest to that subproblem. Thus, the number of solutions in each agent may be zero, one, or more than one, which helps to reflect the true diversity among the agents and to provide the correct neighboring information during evolution. To ensure diversity, the offspring assigned to an agent are only allowed to update its original solutions. When an agent has no solution, one solution is assigned to it with priority once an offspring closest to its subproblem is generated, and the agent with the largest number of solutions removes one solution showing the worst convergence to keep the population size unchanged. Therefore, for each agent, this approach may enhance its diversity or convergence, but will not deteriorate either of them. After assessing its performance on two complicated test suites (MOP and IMB), the experimental results confirmed the superiority of MOEA/D-CSU over six competitive MOEA/Ds with other population selection or update strategies.

In our future work, the CSU strategy will be further studied to improve the way in which it reaches the stable status. One possible path is to embed an adaptive weight-vector adjustment strategy into MOEA/D-CSU, which can cooperate with the CSU strategy to attain true diversity when dealing with disconnected or incomplete PFs. The application of MOEA/D-CSU to some real-world problems will also be a future research direction.

Data Availability

The source code and source data can be obtained by contacting the corresponding author.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by Shenzhen Technology Plan under Grant JCYJ20170817102218122, the National Natural Science Foundation of China under Grants 61876110, 61836005, and 61402291, the Joint Funds of the National Natural Science Foundation of China under Key Program Grant U1713212, and the Natural Science Foundation of Guangdong Province under Grant 2017A030313338. Also, this work was supported by the National Engineering Laboratory for Big Data System Computing Technology.