Mathematical Problems in Engineering


Research Article | Open Access

Volume 2017 |Article ID 2462891 | 14 pages | https://doi.org/10.1155/2017/2462891

An Improved Hybrid Algorithm Based on Biogeography/Complex and Metropolis for Many-Objective Optimization

Academic Editor: Thomas Hanne
Received: 08 Dec 2016
Revised: 17 Jan 2017
Accepted: 29 Jan 2017
Published: 30 Mar 2017

Abstract

It is extremely important to maintain a balance between convergence and diversity in many-objective evolutionary algorithms. The original BBO algorithm can usually guarantee convergence to the optimal solution given enough generations, and the Biogeography/Complex (BBO/Complex) algorithm uses within-subsystem migration and cross-subsystem migration to preserve the convergence and diversity of the population. However, as the number of objectives increases, its performance decreases significantly. In this paper, a novel method for many-objective optimization, called Hmp/BBO (Hybrid Metropolis Biogeography/Complex Based Optimization), is proposed. A new decomposition method is adopted, and the PBI aggregation function is put in place to improve the quality of the solutions. During within-subsystem migration, inferior migrated islands are not accepted unless they pass the Metropolis criterion. With this restriction, a uniformly distributed Pareto set can be obtained, and the running time of the algorithm is kept under control. Experimental results on benchmark functions demonstrate the superiority of the proposed algorithm over five state-of-the-art designs in terms of both convergence and diversity.

1. Introduction

In scientific research and engineering practice, multiple objectives usually need to be optimized simultaneously. Because the objectives in multiobjective optimization conflict, improving the performance of one subobjective may degrade the performance of another; only through compromise can all objectives come as close as possible to their optima. The set of all Pareto-optimal solutions is known as the Pareto set (PS), and the image of the PS in the objective space is the Pareto Front (PF) [1]. The purpose of most multiobjective optimization evolutionary algorithms (MOEAs) is to determine a good approximation of the PF and PS. Although many MOEAs have effectively solved multiobjective optimization problems (MOPs) with only two or three objectives [2], MOPs with more than three objectives are too difficult for most existing MOEAs [3]. In the recent literature, MOPs with more than three objectives are often described as many-objective optimization problems (MaOPs) [4, 5].

Because minimization and maximization problems can be transformed into each other, this article, without loss of generality, considers minimization. An MaOP can be defined as follows:

minimize F(x) = (f_1(x), f_2(x), ..., f_m(x)), subject to x ∈ Ω,

where x = (x_1, ..., x_n) is the n-dimensional decision variable vector, Ω ⊆ R^n is the feasible search region, and F : Ω → R^m consists of m objective functions f_1, ..., f_m.

It is generally agreed that evolutionary algorithms (EAs) are well suited for MOPs because of their population-based strategy for approximating the PF. EAs obtain a Pareto approximation set by pursuing the entire PF while maximizing the diversity of solutions; how an EA balances convergence and diversity fundamentally depends on its selection operator [6]. The popular MOEAs, including the Hypervolume Estimation Algorithm for multiobjective optimization (HYPE) [7], the nondominated sorting genetic algorithm II (NSGAII) [8], and the grid-based evolutionary algorithm (GREA) [9], can effectively deal with two- or three-objective optimization problems. However, these MOEAs all face great difficulties in many-objective optimization. As the objective space grows in size, the first problem is that almost all solutions in the population become nondominated with respect to one another, a phenomenon known as dominance resistance [10]. The selection pressure toward the PF then deteriorates severely, considerably slowing the evolutionary process; this occurs primarily because most EMO algorithms use Pareto dominance as the selection criterion. The second problem is that the conflict between diversity and convergence worsens, mainly because most current diversity operators (such as crowding distance) prefer dominance-resistant solutions. The third problem is computational complexity and efficiency: as the number of objectives increases, the complexity of an EMO algorithm grows significantly, and its efficiency is substantially reduced.
To tackle the above problems, many methods have been proposed [11, 12], which can be roughly divided into four categories:
(1) Dimensionality-reduction-based methods: by analyzing the relationships between objectives or using feature selection techniques, these methods reduce the number of objectives. To decrease the difficulty of the original problem, they attempt to discard unimportant objectives. However, they assume that the MaOP has redundant objectives, which may restrict their applicability [13].
(2) Relaxed-Pareto-dominance-based methods: these methods enhance the selection pressure toward the Pareto Front, for example, α-dominance [14] and ε-dominance [15]. Their main drawback is that they involve one or more parameters that are difficult to choose.
(3) Indicator-based approaches: these are not subject to the selection-pressure problem, since they do not rely on Pareto dominance to push the population toward the PF; however, they suffer from the curse of dimensionality [16]. Among the available indicators, Hypervolume is probably the most popular one employed for multiobjective search, but its computational cost grows exponentially with the number of objectives [17].
(4) Decomposition-based methods: these methods develop a decomposition strategy and a neighborhood concept. One of the most popular, the multiobjective evolutionary algorithm based on decomposition (MOEA/D), was proposed by Zhang and Li (2007) [18]. An aggregation function is used to compare solutions, and uniformly distributed weight vectors preserve solution convergence and diversity.

MOEA/D achieves good convergence and diversity with low computational complexity, making it an effective method. With the decomposition option of the BBO/Complex algorithm, each subsystem has multiple objectives and multiple constraints [19], giving it more flexible decomposition options than traditional decomposition-based methods. BBO/Complex is explained in detail in Section 2. Despite the advances in adapting MOEAs for MaOPs, very little has been reported on improving BBO/Complex for solving MaOPs while balancing convergence and diversity simultaneously. We present a new algorithm, the hybrid Metropolis BBO/Complex (Hmp/BBO), for many-objective optimization. Under the basic framework of the BBO/Complex algorithm, Hmp/BBO improves convergence and diversity in many-objective optimization by introducing the decomposition strategy and the PBI aggregation function of MOEA/D.

Furthermore, selection in BBO/Complex plays a major role in the information exchange among subsystems. We want useful information to be forwarded to the appropriate subsystems to improve them, without misleading unsuitable subsystems. In the original version of BBO/Complex, roulette wheel selection based on the emigration rates chooses the emigrating islands. During within-subsystem migration, each SIV in an immigrating island has a chance to be replaced by an SIV from an emigrating island; however, it is not clear whether the new emigrating islands are suitable for these subsystems. In this phase, some high-quality solutions found at early search stages are easily replaced by more recent solutions, so various subsystems become trapped in local convergence. The simulated annealing (SA) algorithm was proposed by Kirkpatrick et al. [20] and Černý [21]. SA is an intelligent algorithm based on probabilistic stochastic search. It has the capability of occasionally jumping to, and accepting, noninferior and even inferior solutions; it therefore effectively avoids falling into local minima and maintains solution diversity. We are inspired by the Metropolis criterion of the SA algorithm to solve the problem posed above. Details about SA are introduced in the next section.

On the other hand, when information is shared within a subsystem, the numerous objectives and constraints call for a new method to reduce the computation time on the central processor.

From the discussion above, this paper mainly focuses on the Hmp/BBO algorithm, which improves convergence and diversity in many-objective optimization. Our contributions are summarized as follows:
(1) We design a new framework, the Hmp/BBO algorithm, for many-objective optimization.
(2) We introduce the PBI aggregation function to improve the convergence and diversity of Hmp/BBO.
(3) We adopt the Metropolis criterion to improve the balance of exploration and exploitation.
(4) We introduce a checking unit that ensures new islands are generated only within Hmp/BBO, saving a considerable amount of CPU time.

The remainder of this paper is organized as follows: the principles of BBO/Complex and SA are, respectively, covered in Section 2. The hybrid Metropolis BBO/Complex algorithm (Hmp/BBO) is presented in Section 3. In Section 4, the experimental studies are given to demonstrate the efficiency of the proposed method as well as some discussions on this paper. Finally, Section 5 concludes the paper and future research directions are proposed.

2. BBO/Complex and Simulated Annealing

2.1. BBO/Complex

The Biogeography/Complex (BBO/Complex) algorithm is an innovative extension of biogeography-based optimization, introduced in 2013; according to [19] it provides optimization performance competitive with NSGAII [8], differential evolution (DE) [22], ant colony optimization (ACO) [23], and many other algorithms. BBO/Complex extends the BBO algorithm to a system of multiple subsystems, each containing multiple objectives and multiple constraints. The BBO/Complex framework comprises L archipelagos, where L equals the number of subsystems; each archipelago contains a number of islands, which represent candidate solutions to the problem. The BBO/Complex framework is distinct from other MOEA frameworks: it comprises both a framework and an optimization algorithm, as shown in Figure 1. It provides an efficient model for communication between subsystems and a new way of migrating to share information both within and across subsystems.

The classical BBO/Complex algorithm can be described as follows:
(1) Initialize the control parameters: population size, stop condition, and mutation probability. The population is initialized with randomly generated individuals.
(2) Compute the objective and constraint value similarity levels between all pairs of subsystems.
(3) Obtain the rank of the islands in each subsystem.
(4) Perform within-subsystem migration.
(5) Perform cross-subsystem migration.
(6) Mutate each island.
(7) Replace the worst islands in the population with the generation's good islands.
(8) If the termination condition is not met, go to step (2); otherwise, terminate.

2.2. Simulated Annealing Algorithm

Simulated annealing is a metaheuristic modeled on the thermodynamics of material annealing [24–26]. At the beginning of the process the temperature is high, and it is gradually cooled down to a minimum; the objective is to minimize the cost function, which is expected to reach its lowest value at the freezing temperature. The algorithm is grounded in statistical mechanics: in 1953, Metropolis et al. [27] adopted the concept of Boltzmann's probability distribution, which states that if a system is in thermal equilibrium at temperature T, the probability distribution of its energy E satisfies [27]

P(E) ∝ exp(−E / (k_B T)),

where k_B is Boltzmann's constant. The difference in energy corresponds to the difference in cost function between the past and current iterations:

ΔE = cost(current) − cost(past).

For minimization problems, ΔE ≤ 0 means the new point is at least as good, so it is accepted directly. Otherwise, the Metropolis criterion decides whether to accept or reject it: for ΔE > 0, acceptance is treated probabilistically according to the relation P = exp(−ΔE / T). The acceptance probability is therefore governed by the current temperature: at high temperature even a much worse state may be accepted, which prevents the search from falling into local optima, while as the temperature decreases the algorithm accepts only states that reduce the cost. Thus the way the temperature is reduced over the iterations is a key parameter, known as the cooling schedule. Once a cycle of iterations is complete, the next cycle begins with a smaller temperature, so the number of cycles N_c and the number of iterations per cycle N_i are crucial settings.
Large N_c and N_i lead to better solutions but longer run times, and vice versa for small N_c and N_i; a compromise between solution quality and processing speed must therefore be chosen. The Metropolis criterion helps the algorithm balance exploration and exploitation.
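As a concrete illustration, the acceptance rule described above can be sketched in a few lines of Python (the function name and the injectable `rng` hook are ours, added for testability; Boltzmann's constant is absorbed into the temperature, as is common in SA implementations):

```python
import math
import random

def metropolis_accept(old_cost, new_cost, temperature, rng=random.random):
    """Metropolis criterion for minimization: always accept an improvement;
    accept a worse state with probability exp(-dE / T), which shrinks as
    the temperature T is cooled."""
    d_e = new_cost - old_cost
    if d_e <= 0:  # improvement or tie: accept directly
        return True
    return rng() < math.exp(-d_e / temperature)
```

At high temperature nearly everything passes; as T approaches zero the rule degenerates to strict greedy acceptance.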

3. The Hybrid Metropolis BBO/Complex Algorithm

3.1. Framework of Proposed Algorithm

Hmp/BBO uses the original BBO/Complex framework but extends it to a multiple-subsystem environment to accommodate many-objective optimization problems. First, the many-objective problem is decomposed into multiple subsystems. Generally speaking, there are two decomposition methods: one based on the system requirements and the other based on the physical system; the number of subsystems is set by the user according to the decomposition strategy. The original BBO/Complex decomposes the problem based on system requirements, but because the objective space here is of higher dimension, we need a new decomposition method. As shown in Figure 2, a decomposition method based on the PBI aggregation function can enhance the convergence and diversity of the algorithm when solving many-objective problems. After this step, Hmp/BBO migration is divided into two categories: within-subsystem and cross-subsystem. During the within-subsystem migration phase, we improve solution quality and employ the Metropolis criterion to let the algorithm escape local optima. During the cross-subsystem migration stage, we adopt the roulette wheel method to obtain the best solutions. We then perform mutation and clear duplicate solutions.

Our algorithm does not consider constrained problems; constraint handling is left for future work. Algorithm 1 presents the general framework of Hmp/BBO.

Output: population
(1) Initialize all the parameters
(2) Generate the parent population, weight vectors, and neighborhood index sets
(3) Apply the decomposition strategy
(4) while the termination condition is not met do
(5)   Calculate the nondominated ranking system in each subsystem
(6)   Perform within-subsystem migration
(7)   Perform cross-subsystem migration
(8)   Perform mutation
(9)   Clear the duplicate solutions
(10)  Replace the worst solutions with good islands
(11) end while
(12) return the population
3.2. Generating Weighting Vectors

We set the weight vectors such that the optimal solutions of their subsystems are uniformly distributed along the PF. Many approaches for generating weight vectors for the decomposition strategies in MOEA/D have been suggested [28]; we use Das's method in [29], with a predefined integer H that controls the number of divisions along each objective axis. The total number of such vectors is N_w = C(H + m − 1, m − 1) uniformly distributed reference points. Taking the three-objective problem as an example, with H = 12 there are C(14, 2) = 91 reference points.
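A minimal sketch of this simplex-lattice construction (our own stars-and-bars implementation of the scheme in [29]; the function name is illustrative): every vector has components that are multiples of 1/H and sum to one.

```python
from itertools import combinations
from math import comb

def das_dennis_weights(m, h):
    """Simplex-lattice design: all m-dimensional weight vectors whose
    components are multiples of 1/h and sum to 1. There are
    C(h + m - 1, m - 1) such vectors."""
    weights = []
    # choose m-1 "dividers" among h+m-1 slots (stars and bars);
    # gap sizes between dividers give the integer parts of each weight
    for dividers in combinations(range(h + m - 1), m - 1):
        prev = -1
        w = []
        for d in dividers:
            w.append((d - prev - 1) / h)
            prev = d
        w.append((h + m - 1 - prev - 1) / h)
        weights.append(tuple(w))
    assert len(weights) == comb(h + m - 1, m - 1)
    return weights
```

For m = 3 and H = 12 this yields the 91 reference points mentioned above, matching the three-objective population size used in Section 4.3.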

3.3. Decomposition Strategy

The Hmp/BBO initialization generates the weight vectors; the PBI aggregation function [30] is then employed to divide the weight vectors into L subsystems. Because the weight vectors are uniformly distributed in the objective space, the number of weight vectors in one subsystem is approximately N_w/L. Therefore, the whole objective space is divided into L subsystems.

3.4. The Nondominated Ranking System (NDRS)

NDRS was introduced in [31] as the ranking system of the multiobjective genetic algorithm (MOGA). It uses nonconsecutive integers as ranks to reflect the relative performance of each individual in a population. Assume a subsystem in which r_i is the rank of the i-th island, where a lower rank is better. For each pair of islands i and j and each objective f_k, if f_k of island i is better than f_k of island j, then r_j is increased by one; if f_k of island j is better than f_k of island i, then r_i is increased by one.
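Under the per-objective pairwise reading above (an interpretation on our part, since the original notation was lost in extraction), the ranking can be sketched as follows; minimization is assumed and all names are illustrative:

```python
def ndrs_ranks(objectives):
    """Pairwise per-objective scoring: whenever island j beats island i
    on some objective, island i's rank grows by 1. Lower rank is better;
    the resulting ranks are in general nonconsecutive integers."""
    n = len(objectives)
    ranks = [0] * n
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            for fi, fj in zip(objectives[i], objectives[j]):
                if fj < fi:  # island j is better on this objective
                    ranks[i] += 1
    return ranks
```

For example, an island dominated on every objective by one other island accumulates m rank points from that comparison alone.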

3.5. PBI Aggregate Function

In MOEA/D, several methods have been proposed for decomposing an MOP into single-objective optimization subproblems, such as the weighted sum method, the Tchebycheff method, and the Penalty-based Boundary Intersection (PBI) method; it was shown in [32] that MOEA/D-PBI is the best suited to many-objective optimization. This paper uses the PBI approach. The scalar optimization function is defined as

g^pbi(x | w, z*) = d1 + θ d2,
d1 = ||(F(x) − z*)^T w|| / ||w||,
d2 = ||F(x) − (z* + d1 w/||w||)||.

In Figure 2, z* is the ideal point, d1 is the Euclidean distance between z* and the foot of the perpendicular dropped from the solution onto the reference direction, and d2 is the perpendicular distance of the solution from the reference direction. The effect of the penalty parameter θ on the performance of Hmp/BBO is presented in Section 4.3.
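A direct transcription of the PBI value into Python (function and argument names are ours; the default θ = 5 is the value commonly used in the MOEA/D literature and is an assumption here):

```python
import math

def pbi(f, weight, z_star, theta=5.0):
    """PBI aggregation g(x | w, z*) = d1 + theta * d2, where d1 is the
    distance from z* along the reference direction w and d2 is the
    perpendicular distance of F(x) from that direction."""
    diff = [fi - zi for fi, zi in zip(f, z_star)]
    norm_w = math.sqrt(sum(wi * wi for wi in weight))
    d1 = abs(sum(di * wi for di, wi in zip(diff, weight))) / norm_w
    proj = [d1 * wi / norm_w for wi in weight]  # foot point of the perpendicular
    d2 = math.sqrt(sum((di - pi) ** 2 for di, pi in zip(diff, proj)))
    return d1 + theta * d2
```

A point lying exactly on the reference direction has d2 = 0, so its PBI value reduces to its distance d1 from the ideal point.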

3.6. Within-Subsystem Migration

Within-subsystem migration mainly serves information sharing, so we need a fast new method for selecting immigrating and emigrating islands. First, immigrating islands are selected based on the NDRS. Afterward, emigrating islands are selected by their emigration rates, in order to further improve the algorithm's convergence speed while avoiding local optima. Features (SIVs) of the islands are not directly substituted with the new values coming from the probabilistically selected islands mentioned above. Instead, the SIVs of the islands are kept in two temporary matrices, each row of which represents one individual. The old variables are reused if and only if the modified individual exhibits lower solution quality and fails the Metropolis criterion; in other words, an inferior migrated island is selected only if it passes the Metropolis criterion of SA. With this restriction on within-subsystem migration, the Hmp/BBO algorithm avoids falling into local optima. The procedure is described in Algorithm 2.

(1) Probabilistically choose the immigrating islands based on the NDRS.
(2) Use the Metropolis criterion based on emigration rates to select the emigrating islands.
(3) Each SIV in an immigrating island has a chance to be replaced by an SIV from an emigrating island.
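One within-subsystem step, combining the SIV exchange with the Metropolis-gated rollback described above, might look like this sketch (the migration probability `p_mig`, the `cost` callback, and all names are illustrative assumptions, not the paper's exact operators):

```python
import math
import random

def within_subsystem_step(island, emigrant, cost, temperature, p_mig=0.5):
    """One within-subsystem migration for a single island (minimization):
    each SIV may be replaced by the emigrant's SIV; the modified island is
    kept only if it improves the cost or passes the Metropolis test,
    otherwise the saved SIVs are restored."""
    trial = [e if random.random() < p_mig else s
             for s, e in zip(island, emigrant)]
    d_e = cost(trial) - cost(island)
    if d_e <= 0 or random.random() < math.exp(-d_e / temperature):
        return trial   # accept the migrated island
    return island      # roll back to the saved SIVs
```

The two temporary matrices of Section 3.6 correspond to stacking the `island` and `trial` rows for the whole population.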
3.7. Cross-Subsystem Migration

During the cross-subsystem migration stage, it is desirable that migrants be chosen from neighboring subsystems as much as possible. In Hmp/BBO, each island is uniquely specified by a weight vector, and each weight vector is assigned a neighborhood based on Euclidean distance, so we can select islands from neighboring subsystems via the neighborhood index set. First, immigrating islands are selected between subsystems based on the neighborhood index set. Second, emigrating islands are selected based on the PBI distance. Furthermore, we need to eliminate poor candidate islands to improve diversity between subsystems: an inferior migrated island is not selected unless it passes roulette wheel selection. With this restriction on cross-subsystem migration, the diversity performance of Hmp/BBO is enhanced. The procedure is described in Algorithm 3.

(1) Randomly select indices from the neighborhood set (choosing similar subsystems) and select immigrating islands based on immigration rates from the population.
(2) Calculate the Euclidean distance between the islands of neighboring subsystems.
(3) Use roulette wheel selection based on emigration rates to select the emigrating islands.
(4) Each SIV in an immigrating island has a chance to be replaced by an SIV from an emigrating island.
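The roulette wheel selection over emigration rates can be sketched as follows (an independent implementation; the injectable `rng` hook is ours, for testability):

```python
import random

def roulette_select(emigration_rates, rng=random.random):
    """Fitness-proportionate (roulette wheel) selection: island i is drawn
    with probability mu_i / sum(mu), so islands with higher emigration
    rates are chosen more often."""
    total = sum(emigration_rates)
    pick = rng() * total
    acc = 0.0
    for i, mu in enumerate(emigration_rates):
        acc += mu
        if pick <= acc:
            return i
    return len(emigration_rates) - 1  # guard against floating-point round-off
```
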
3.8. Mutation Algorithm

In Hmp/BBO, these events are modeled as SIV mutation. The mutation rate of an island is determined from its species-count probability according to

m_i = m_max (1 − P_i / P_max),

where m_max is a user-predefined maximum mutation rate, P_i is the species-count probability of the i-th island, and P_max is the largest such probability. The island suitability index (ISI) is a function of the island's SIVs. The mutation process is described in Algorithm 4.

(1) for i = 1 to N do
(2)   Calculate the species-count probability P_i based on the immigration and emigration rates
(3)   Calculate the mutation rate m_i
(4)   if rand < m_i then
(5)     Replace the SIV vector of the i-th island with a randomly generated SIV vector
(6)   end if
(7) end for
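The species-count mutation rate of standard BBO, which we assume Hmp/BBO inherits (the formula m_i = m_max(1 − P_i/P_max) is the usual one from the BBO literature), can be written as:

```python
def mutation_rate(p_i, p_max, m_max=0.05):
    """Standard BBO species-count mutation rate: islands whose
    species-count probability P_i is low (i.e., very good or very poor
    solutions) mutate the most; m_max is the user-defined ceiling."""
    return m_max * (1.0 - p_i / p_max)
```

An island at the most probable species count (P_i = P_max) is never mutated, while improbable islands mutate at nearly the full rate m_max.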
3.9. Clear the Duplicate Algorithm

Clearing duplicates increases the diversity of solutions and avoids features being duplicated across islands. The procedure is described in Algorithm 5.

Require: check all SIVs on all islands ISI_i
(1) while there is a duplicated SIV do
(2)   for i = 1 to N do
(3)     if any duplicated SIV is detected then
(4)       Replace the duplicated SIVs in ISI_i with randomly generated SIVs
(5)     end if
(6)   end for
(7) end while
In conclusion, Hmp/BBO improves convergence and diversity by introducing a new framework for many-objective optimization. In addition, whereas the SA algorithm works on only one individual, Hmp/BBO is a population-based algorithm: instead of running SA with a single island, it is executed with all the islands in the subsystems. Thus, the internal search loops of SA within each cycle can be omitted in Hmp/BBO without affecting solution quality, which saves a significant amount of CPU time. Through these methods, the exploration and exploitation of the Hmp/BBO algorithm are much improved. The Hmp/BBO framework is described in Algorithm 6.

(1) Initialize all the parameters: the weight vectors, the centers of the weight-vector clusters, the number of subsystems L, and the neighborhood sets of the weight vectors.
(2) Apply the decomposition strategy
(3) for g = 1 to G do (G is the maximum number of generations)
(4)   Calculate the rank of the islands in each subsystem (NDRS)
(5)   Probabilistically select the immigrating islands based on the island ranks; probabilistically select the emigrating islands based on the emigration rates.
(6)   if g = 1 then
(7)     Find the initial temperature T_1
(8)   else
(9)     Update the temperature T_g for the g-th generation
(10)  end if
(11) Save the SIV vectors of all the populations (before migration) in matrix mm1 with size N × n and their cost functions in vector vv1 with length N.
(12) Do within-subsystem migration.
(13) Save the SIV vectors of all the populations (after migration) in matrix mm2 with size N × n and their cost functions in vector vv2 with length N.
(14) for I = 1 to N do (N is the number of islands)
(15)   Calculate ΔE = vv2(I) − vv1(I)
(16)   if ΔE > 0 then
(17)     P = exp(−ΔE / T_g)
(18)     if P > rand then
(19)       Accept the mm2(I, 1→n) vector of matrix mm2 as the updated population for ISI_I
(20)     else
(21)       Re-select the past mm1(I, 1→n) vector of matrix mm1 as the updated population for ISI_I
(22)     end if
(23)   end if
(24) Randomly select indices from the neighborhood set and select immigrating islands based on immigration rates.
(25) if the elitism condition is met then
(26)   Save the best islands for emigrating
(27) else
(28)   Save the SIV vectors of all the populations (before mutation) in matrix mm3 with size N × n and their cost functions in vector vv3 with length N
(29)   Do cross-subsystem migration
(30)   Use the roulette wheel method to select the population
(31) end if
(32) Do mutation
(33) Do clear duplicated SIVs
(34) if the replacement condition is met then
(35)   Replace the worst ISI with the good ISI saved in the elitism stage
(36) end if
(37) end for
(38) Display the best islands.
3.10. Computational Complexity of One Generation of Hmp/BBO

In this section, we discuss the computational complexity of one generation of Hmp/BBO. For a population of size N and a problem with m objectives, the major computational costs lie in steps (5), (6), and (7) of Algorithm 1. Step (5), the nondominated ranking system, requires O(mN^2) pairwise comparisons. The within-subsystem migration (step (6)) needs O(cN) computations, where c is the number of Metropolis cycles applied to the populations generated from the parents. The cross-subsystem migration (step (7)) needs O(LN) computations, where L is the number of subsystems. Combining the above, the overall complexity of one generation of Hmp/BBO is O(mN^2 + cN + LN).

4. Simulation Results

In this section, to demonstrate the validity of Hmp/BBO, we compare it with five state-of-the-art algorithms: BBO/Complex, NSGAIII, MOEA/D-PBI, HYPE, and GREA. NSGAIII is based on the Pareto dominance relationship, with population diversity maintained by a set of uniformly distributed reference points. MOEA/D-PBI is a representative of the decomposition-based approach and keeps the diversity of solutions using a series of predefined weight vectors. HYPE is a Hypervolume-based evolutionary algorithm for many-objective optimization. GREA uses the NSGA-II framework and introduces two concepts (grid dominance and grid difference), three grid-based criteria, and a fitness adjustment strategy. We describe the test problems in Section 4.1 and the quality indicators in Section 4.2. We then present the five comparison algorithms and the corresponding parameter settings in Section 4.3. Finally, the results are discussed in Section 4.4.

4.1. Test Problems

To verify the proposed algorithm, the well-known DTLZ1–DTLZ4 [33] functions and the full WFG test suite [34] are used, as listed in Table 1. We consider only DTLZ1–4 from the DTLZ test suite, because the nature of the PFs of DTLZ5 and DTLZ6 is unclear beyond three objectives [35]. DTLZ1 is a linear, multimodal function; DTLZ2 is concave; DTLZ3 is concave and multimodal; DTLZ4 is concave and biased, as summarized in Table 1.


Name of function | Characteristics

DTLZ1 | Linear, multimodal
DTLZ2 | Concave
DTLZ3 | Concave, multimodal
DTLZ4 | Concave, biased
WFG1 | Mixed, biased, scaled
WFG2 | Convex, disconnected
WFG3 | Linear, degenerate
WFG4 | Concave, multimodal
WFG5 | Concave, deceptive
WFG6 | Concave, scaled
WFG7 | Biased, scaled
WFG8 | Biased, nonseparable
WFG9 | Concave, scaled

4.2. Quality Indicators

In our empirical study, the following three widely used quality indicators are examined. The first reflects only the convergence of an algorithm; the second and the third capture the convergence and the diversity of the solutions simultaneously.

(1) Generational Distance (GD). Let A be the final set of nondominated points obtained in the objective space and P* be a set of points evenly distributed over the true PF. GD reflects only the convergence performance of an algorithm; a smaller value means better quality. GD is defined as

GD(A, P*) = (1/|A|) Σ_{a ∈ A} min_{p ∈ P*} ||a − p||.
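A reference sketch of GD under the average-of-minimum-Euclidean-distances convention (function names are ours):

```python
import math

def gd(approx, reference):
    """Generational Distance: mean Euclidean distance from each obtained
    point to its nearest point on the sampled true front. Smaller is
    better; zero means every obtained point lies on the sampled front."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sum(min(dist(a, p) for p in reference) for a in approx) / len(approx)
```

Note that GD says nothing about coverage: a single point sitting exactly on the front scores a perfect zero.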

(2) Inverted Generational Distance (IGD). Let P* be a set of points uniformly sampled over the true efficient front (EF), and let A be the set of solutions obtained by an algorithm. IGD measures convergence and diversity simultaneously; a smaller value means better quality. IGD is defined as

IGD(A, P*) = (1/|P*|) Σ_{p ∈ P*} min_{a ∈ A} ||p − a||.
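A reference sketch of IGD, averaging the distance from each sampled true-front point to its nearest obtained point (function names are ours):

```python
import math

def igd(approx, reference):
    """Inverted Generational Distance: mean distance from each sampled
    true-front point to the nearest obtained point, so gaps in coverage
    (poor diversity) are penalized as well as poor convergence."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sum(min(dist(p, a) for a in approx) for p in reference) / len(reference)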

(3) Hypervolume (HV) Indicator. The HV indicator measures both the convergence and the diversity of a solution set. HV is used for the WFG test suite, and the larger the HV value, the better the solution set approximates the PF. HV is defined as

HV(A) = Leb(∪_{a ∈ A} [a, r]),

where Leb denotes the Lebesgue measure of the region dominated by A and bounded by the reference point, m is the number of objectives, A is the set of nondominated points obtained in the objective space, and r is a reference point in the objective space that is dominated by all Pareto-optimal points.
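Since the experiments estimate HV by Monte Carlo sampling (Section 4.3), a minimal estimator can be sketched as follows; it assumes minimization with the objectives normalized so that the ideal point is the origin (an assumption of this sketch, and all names are illustrative):

```python
import random

def hv_monte_carlo(points, ref, n_samples=100_000, seed=1):
    """Monte Carlo Hypervolume estimate (minimization): the fraction of
    uniform samples in the box [0, ref] dominated by at least one point,
    scaled by the volume of that box."""
    rng = random.Random(seed)
    m = len(ref)
    hits = 0
    for _ in range(n_samples):
        s = [rng.uniform(0.0, r) for r in ref]
        if any(all(p[k] <= s[k] for k in range(m)) for p in points):
            hits += 1
    box = 1.0
    for r in ref:
        box *= r
    return box * hits / n_samples
```

The standard error shrinks as 1/sqrt(n_samples), which is why the study uses 1,000,000 samples; exact HV computation, by contrast, grows exponentially in m.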

4.3. Parameters Setting

The parameters for the six MOEAs considered in this study are listed below.
(1) Population size: the population size used for Hmp/BBO is 91, 206, and 210 for three-, four-, and five-objective problems, respectively. The NSGA-III population sizes were mildly adjusted as in the original NSGAIII study, namely, 92, 212, and 276 for three-, four-, and five-objective problems.
(2) Stop condition: each algorithm is run 20 times independently on each test problem. All algorithms are executed on a desktop computer with a 2.6 GHz CPU, 8 GB RAM, and Windows 10. The stop condition is the maximum number of fitness evaluations, as outlined in Table 2.
(3) Parameter setting in Hmp/BBO: the immigration rate, emigration rate, cooling-schedule parameters, neighborhood size, and the penalty parameter θ in PBI (the value of θ also suggested in [18]) are fixed in advance.
(4) Parameter setting in BBO/Complex: the immigration rate, emigration rate, mutation probability, and generation count are fixed in advance.
(5) Reproduction operator setting in NSGAIII: the crossover probability and its distribution index, and the mutation probability and its distribution index.
(6) Parameter setting in MOEA/D-PBI: the neighborhood size, the probability used for selection within the neighborhood, and the penalty parameter.
(7) Grid division (div) in GREA: div is set according to [9], as shown in Table 3.
(8) Number of points in Monte Carlo sampling: set to 1,000,000 to ensure accuracy.


Function | m = 3 | m = 4 | m = 5

DTLZ1 | 36500 | 26400 | 12600
DTLZ2 | 22840 | 42500 | 73600
DTLZ3 | 92000 | 14600 | 220000
DTLZ4 | 54700 | 13200 | 220000
WFG1–9 | 36500 | 74300 | 157500


Function | m = 3 | m = 4 | m = 5

DTLZ1 | 10 | 10 | 10
DTLZ2 | 10 | 10 | 9
DTLZ3 | 11 | 10 | 11
DTLZ4 | 10 | 10 | 7
WFG1 | 7 | 9 | 14
WFG2 | 18 | 17 | 16
WFG3 | 14 | 14 | 14
WFG4–9 | 10 | 10 | 11

4.4. Result and Discussion

In this section, the GD metric is used to compare the convergence capacity of the proposed Hmp/BBO with that of the other five algorithms. Table 4 shows the best, median, and worst GD values obtained by the six algorithms on DTLZ1 to DTLZ4 with different numbers of objectives; the value that is significantly better than the others is marked in boldface. Table 4 shows that Hmp/BBO performed well on all the test functions, especially on the DTLZ1 and DTLZ4 problems. Table 5 compares Hmp/BBO with the other five MOEAs in terms of IGD values on the DTLZ test suite, reporting the mean IGD values over 20 independent runs for the six compared MOEAs, where the best mean and standard deviation values are marked in bold. Furthermore, Table 6 presents the average and standard deviation of the Hypervolume over 20 independent runs on the WFG problems, where the best performance is highlighted in bold; the quality of the solution sets obtained by the six algorithms on all WFG test problems was compared in terms of Hypervolume. From the experimental results on the DTLZ and WFG test problems, we find that Hmp/BBO shows better performance than the other five algorithms.


Functions | m | Values | Hmp/BBO | BBO/Complex | NSGAIII | MOEA/D-PBI | HYPE | GREA

[Table 4: numeric GD entries not recoverable from the extraction.]

DTLZ1 | 3 | Best / Median / Worst
      | 4 | Best / Median / Worst
      | 5 | Best / Median / Worst

DTLZ2 | 3 | Best / Median / Worst
      | 4 | Best / Median / Worst
      | 5 | Best / Median / Worst

DTLZ3 | 3 | Best / Median / Worst
      | 4 | Best / Median / Worst
      | 5 | Best