Research Article  Open Access
An Improved Hybrid Algorithm Based on Biogeography/Complex and Metropolis for Many-Objective Optimization
Abstract
It is extremely important for many-objective evolutionary algorithms to maintain a balance between convergence and diversity. The original BBO algorithm can usually guarantee convergence to the optimal solution given enough generations, and the Biogeography/Complex (BBO/Complex) algorithm uses within-subsystem migration and cross-subsystem migration to preserve the convergence and diversity of the population. However, as the number of objectives increases, the performance of the algorithm decreases significantly. In this paper, a novel method for many-objective optimization, called Hmp/BBO (Hybrid Metropolis Biogeography/Complex Based Optimization), is proposed. A new decomposition method is adopted, and the PBI aggregation function is put in place to improve solution quality. During within-subsystem migration, inferior migrated islands are not chosen unless they pass the Metropolis criterion; with this restriction, a uniformly distributed Pareto set can be obtained. In addition, the above methods keep the algorithm's running time under control. Experimental results on benchmark functions demonstrate the superiority of the proposed algorithm over five state-of-the-art designs in terms of both convergence and diversity.
1. Introduction
In scientific research and engineering practice, multiple objectives often need to be optimized simultaneously. Because the objectives of a multiobjective optimization problem conflict with one another, improving one subobjective may degrade another; only through compromise can all objectives approach their optima as closely as possible. The set of all Pareto optimal solutions is known as the Pareto set (PS), and the image of the PS in the objective space is the Pareto front (PF) [1]. The purpose of most multiobjective evolutionary algorithms (MOEAs) is to find a good approximation of the PF and PS. Although many MOEAs have effectively solved multiobjective optimization problems (MOPs) with only two or three objectives [2], MOPs with more than three objectives are too difficult for most existing MOEAs [3]. In the recent literature, MOPs with more than three objectives are often referred to as many-objective optimization problems (MaOPs) [4, 5].
Since a maximization problem can be transformed into an equivalent minimization problem, without loss of generality this article considers the minimization form. A MaOP can be defined as follows:

minimize F(x) = (f_1(x), f_2(x), ..., f_m(x))^T, subject to x ∈ Ω,

where x = (x_1, x_2, ..., x_n)^T is the n-dimensional decision variable vector, Ω ⊆ R^n is the feasible search region, and F: Ω → R^m consists of m objective functions f_1, ..., f_m (with m > 3 for MaOPs).
It is generally agreed that evolutionary algorithms (EAs) are well suited for MOPs because their population-based strategy yields an approximation of the PF. EAs obtain a Pareto approximation set by pursuing the entire PF while maximizing the diversity of solutions; the balance between convergence and diversity fundamentally depends on the selection operator [6]. Popular MOEAs, including the Hypervolume Estimation Algorithm for multiobjective optimization (HYPE) [7], the nondominated sorting genetic algorithm II (NSGA-II) [8], and the grid-based evolutionary algorithm (GREA) [9], deal effectively with two- or three-objective optimization problems. However, these MOEAs all face great difficulties in many-objective optimization. As the objective space grows, the first problem is that almost all solutions in the population become nondominated with respect to one another, which is mainly caused by the phenomenon called dominance resistance [10]. The selection pressure toward the PF then deteriorates severely, considerably slowing down the evolutionary process; this occurs primarily because most EMO algorithms use Pareto dominance as the selection criterion. The second problem is that the conflict between diversity and convergence worsens, mainly because most current diversity operators (such as crowding distance) tend to select the dominance-resistant solutions. The third problem is computational complexity and efficiency: as the number of objectives increases, the complexity of an EMO algorithm grows significantly and its efficiency drops substantially.
To tackle the above problems, many methods have been proposed [11, 12], which can be roughly divided into four categories:
(1) Dimensionality-reduction-based methods: by analyzing the relationships between objectives or using feature selection techniques, these methods reduce the number of objectives. To decrease the difficulty of the original problem, they attempt to remove unimportant objectives. However, they assume that the MaOP has redundant objectives, an assumption that may restrict their application [13].
(2) Relaxed-Pareto-dominance-based methods: these methods enhance the selection pressure toward the Pareto front, for example via α-dominance [14] and ε-dominance [15]. Their main drawback is that they involve one or more parameters that are difficult to choose.
(3) Indicator-based approaches: indicator-based methods do not suffer from the selection-pressure problem, since they do not rely on Pareto dominance to push the population toward the PF. However, they suffer from the curse of dimensionality [16]. Among the available metrics, Hypervolume is probably the most popular for multiobjective search, but its computation cost grows exponentially with the number of objectives [17].
(4) Decomposition-based methods: these methods use a decomposition strategy and a neighborhood concept. A popular representative, the multiobjective evolutionary algorithm based on decomposition (MOEA/D), was proposed by Zhang and Li (2007) [18]. An aggregation function is used to compare solutions, and uniformly distributed weight vectors preserve convergence and diversity.
MOEA/D achieves good convergence and diversity with low computational complexity, making it an effective method. With the BBO/Complex decomposition option, each subsystem has multiple objectives and multiple constraints [19]; compared with traditional decomposition-based methods, it offers more flexible decomposition options. BBO/Complex is explained in detail in Section 2. Despite the advances in adapting MOEAs to MaOPs, very little has been reported on improving BBO/Complex for solving MaOPs so as to balance convergence and diversity simultaneously. We present a new algorithm, the hybrid Metropolis BBO/Complex (Hmp/BBO), for many-objective optimization. Under the basic framework of the BBO/Complex algorithm, Hmp/BBO improves convergence and diversity in many-objective optimization by introducing the decomposition strategy and PBI aggregation function of MOEA/D.
Furthermore, selection in BBO/Complex plays a major role in the information exchange among subsystems. We want useful information to be forwarded to the appropriate subsystems to improve them, without misleading unsuitable subsystems. In the original version of BBO/Complex, roulette wheel selection based on the emigration rates chooses the emigrating islands. During within-subsystem migration, each SIV of an immigrating island has a chance to be replaced by an SIV from an emigrating island. However, it is not clear whether the new emigrating islands are suitable for these subsystems. During the within-subsystem migration phase, some high-quality solutions found easily at early search stages can just as easily be replaced by more recent solutions; consequently, subsystems can become trapped in local convergence. The simulated annealing (SA) algorithm was proposed by Kirkpatrick et al. [20] and Černý [21]. SA is an intelligent algorithm based on probabilistic stochastic search. It has the capability of jumping out of local regions: it can accept both noninferior and inferior solutions, so it effectively avoids falling into local minima and maintains solution diversity. We draw on the Metropolis criterion of the SA algorithm to solve the problem posed above. Details about SA are given in the next section.
On the other hand, when information is shared within a subsystem, the numerous objectives and constraints call for a new method that reduces the computation time of the central processor.
Following the discussion above, this paper focuses on the Hmp/BBO algorithm, which improves convergence and diversity in many-objective optimization. Our contributions are summarized as follows:
(1) We design a new framework, the Hmp/BBO algorithm, for many-objective optimization.
(2) We introduce the PBI aggregation function to improve the convergence and diversity of Hmp/BBO.
(3) We adopt the Metropolis criterion to improve the balance between exploration and exploitation.
(4) We introduce a checking unit, which ensures that new islands are generated only within Hmp/BBO, saving a significant amount of CPU time.
The remainder of this paper is organized as follows. The principles of BBO/Complex and SA are covered in Section 2. The hybrid Metropolis BBO/Complex algorithm (Hmp/BBO) is presented in Section 3. In Section 4, experimental studies demonstrate the efficiency of the proposed method, along with some discussion. Finally, Section 5 concludes the paper and proposes future research directions.
2. BBO/Complex and Simulated Annealing
2.1. BBO/Complex
The BBO/Complex algorithm is an inventive algorithm, introduced for the first time in 2013; according to [19] it provides optimization performance competitive with NSGA-II [8], differential evolution (DE) [22], ant colony optimization (ACO) [23], and many other algorithms. BBO/Complex extends the biogeography-based optimization (BBO) algorithm to a system of multiple subsystems, each containing multiple objectives and multiple constraints. The BBO/Complex framework comprises archipelagos, one per subsystem, and each archipelago contains a number of islands; these islands represent candidate solutions to the problem. The BBO/Complex framework is distinct from other MOEA algorithms: it comprises both a framework and an optimization algorithm, as shown in Figure 1. It provides an efficient model for communication between subsystems and a new way of migrating to share information both within subsystems and across subsystems.
The classical BBO/Complex algorithm can be described as follows:
(1) Initialize the control parameters: population size, stop condition, and mutation probability. The population is initialized with randomly generated individuals.
(2) Compute the objective and constraint similarity levels between all pairs of subsystems.
(3) Obtain the rank of the islands in each subsystem.
(4) Perform within-subsystem migration.
(5) Perform cross-subsystem migration.
(6) Mutate each island.
(7) Replace the worst islands in the population with the generation's good islands.
(8) If the termination condition is not met, return to step (2); otherwise, terminate.
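As a control-flow sketch only, the steps above can be written as the following loop; the operator names are illustrative placeholders of ours, not part of the original algorithm's API:

```python
def bbo_complex(init_pop, rank, within_migrate, cross_migrate,
                mutate, replace_worst, max_gen):
    """Control-flow sketch of the classical BBO/Complex loop (steps 1-8).

    The operators are passed in as callables; their names are illustrative
    placeholders, not part of the original algorithm's interface.
    """
    population = init_pop()                             # step (1)
    for _ in range(max_gen):                            # step (8): stop condition
        ranks = rank(population)                        # steps (2)-(3)
        population = within_migrate(population, ranks)  # step (4)
        population = cross_migrate(population, ranks)   # step (5)
        population = mutate(population)                 # step (6)
        population = replace_worst(population)          # step (7)
    return population
```

Any concrete Hmp/BBO variant then only swaps in its own migration and mutation operators without changing this outer structure.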
2.2. Simulated Annealing Algorithm
The algorithm is a metaheuristic built on the thermodynamics of material annealing [24–26]. At the beginning of the process the temperature is high, and it is gradually cooled down to a minimum; the objective is to minimize the cost function, with the lowest cost expected at the freezing temperature. As the process runs, the temperature decreases and new states are created. Simulated annealing is grounded in statistical mechanics: in 1953, [27] adopted Boltzmann's probability distribution, which states that if a system maintains thermal equilibrium at temperature T, the probability distribution of its energy E can be calculated as follows [27]:

P(E) ∝ exp(−E / (k_B T)),

where k_B is Boltzmann's constant. The energy difference corresponds to the difference in cost function between the previous and current iterations, determined as follows:

ΔE = cost(new) − cost(current).

For minimization problems, ΔE ≤ 0 means the cost has not increased, so the new point is accepted directly. Otherwise, the Metropolis criterion decides whether to accept or reject the new point: for ΔE > 0, acceptance is treated probabilistically according to P = exp(−ΔE / (k_B T)). The acceptance probability is therefore governed by the temperature of the process; at high temperatures even much worse states can be accepted, which prevents the search from falling into local optima. As the temperature decreases, the algorithm accepts only states that reduce the cost. Thus the way the temperature is reduced during the iterations, known as the cooling schedule, is one of the key parameters. Moreover, once an iteration cycle is complete, the next cycle begins with a smaller temperature, so the number of temperature cycles and the number of iterations per cycle are crucial settings.
Large values of these two settings yield better performance but a longer run time, and vice versa for small values, so a compromise must be chosen between solution quality and processing speed. The Metropolis criterion helps the algorithm balance exploration and exploitation.
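The acceptance rule above fits in a few lines; this is a minimal illustration (the function name and the injectable `rng` hook are ours):

```python
import math
import random

def metropolis_accept(delta_e, temperature, rng=random.random):
    """Metropolis criterion: always accept an improvement (delta_e <= 0);
    accept a worse move with probability exp(-delta_e / temperature)."""
    if delta_e <= 0:
        return True
    return rng() < math.exp(-delta_e / temperature)
```

At a high temperature exp(−ΔE/T) is close to 1 and almost any move is accepted; as T approaches 0 the rule degenerates to pure downhill search.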
3. The Hybrid Metropolis BBO/Complex Algorithm
3.1. Framework of Proposed Algorithm
Hmp/BBO uses the original BBO/Complex framework but extends it to a multiple-subsystem environment to accommodate many-objective optimization problems. First, the many-objective problem is decomposed into multiple subsystems. Generally speaking, there are two decomposition methods: one is built on the system requirements and the other on the physical system, and the number of subsystems is set by the user according to the decomposition strategy. The original BBO/Complex decomposes the problem based on system requirements; because the objective space here has a higher dimension, we need a new decomposition method. As illustrated in Figure 2, a PBI aggregation function decomposition method can enhance the convergence and diversity of the algorithm on many-objective problems. After this step, Hmp/BBO migration is divided into two categories: within-subsystem and cross-subsystem. During the within-subsystem migration phase, we improve solution quality and employ the Metropolis criterion so the algorithm can jump out of local optima. During the cross-subsystem migration stage, we adopt the roulette wheel method to select good solutions. Then we perform mutation and duplicate clearing.
Our algorithm does not take constraint problems into account; constraint handling is left for future work. Algorithm 1 presents the general framework of Hmp/BBO.

3.2. Generating Weighting Vectors
We set the weight vectors such that the optimal solutions of their subsystems are uniformly distributed along the PF. Several approaches for generating weight vectors for different decomposition strategies in MOEA/D have been suggested [28]; we use Das's method [29], with a predefined integer H that controls the number of divisions along each objective axis. The total number of such vectors for an m-objective problem is C(H + m − 1, m − 1), and the resulting reference points are uniformly distributed. Taking a three-objective problem as an example, there are C(H + 2, 2) reference points.
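A small sketch of this simplex-lattice construction (function name ours; the count of generated vectors matches C(H + m − 1, m − 1)):

```python
def das_dennis_weights(m, h):
    """Simplex-lattice weight vectors: every vector whose components are
    multiples of 1/h and sum to 1, for an m-objective problem."""
    def lattice(remaining, dims):
        if dims == 1:
            return [(remaining,)]
        return [(i,) + rest
                for i in range(remaining + 1)
                for rest in lattice(remaining - i, dims - 1)]
    return [tuple(c / h for c in point) for point in lattice(h, m)]
```

For m = 3 and H = 4 this yields C(6, 2) = 15 weight vectors, each summing to 1.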
3.3. Decomposition Strategy
Hmp/BBO initialization generates N_w weight vectors, and the PBI aggregation function [30] is employed to divide the weight vectors into a set of L subsystems. Because the weight vectors are uniformly distributed in the objective space, the number of weight vectors in each subsystem is approximately N_w/L. The whole objective space is thereby divided into L subsystems.
3.4. The Nondominated Ranking System (NDRS)
NDRS was introduced in [31] as the ranking system in multiobjective genetic algorithms (MOGA). It uses nonconsecutive integers as ranks to reflect the relative performance of each individual in a population. Assume that we have a subsystem in which r_i is the rank of the i-th island, where a lower rank is better; ranks are assigned through pairwise comparisons, so that an island's rank grows with the number of islands that outperform it.
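One common realization of such a ranking, in the spirit of MOGA-style ranking (an interpretation on our part: an island's rank equals the number of islands that Pareto-dominate it):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def ndrs_ranks(objectives):
    """Rank each individual by how many others dominate it; lower is better.
    Ranks are generally nonconsecutive, as in MOGA-style ranking."""
    return [sum(dominates(other, mine) for other in objectives)
            for mine in objectives]
```

All nondominated islands receive rank 0, and heavily dominated islands receive large ranks, leaving gaps in between.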
3.5. PBI Aggregate Function
In MOEA/D, several methods have been proposed for decomposing an MOP into single-objective optimization subproblems, such as the weighted sum method, the Tchebycheff method, and the Penalty-based Boundary Intersection (PBI) method; it was shown in [32] that MOEA/D-PBI performs best for many-objective optimization problems. This paper therefore uses the PBI approach. The scalar optimization function is defined as

g^pbi(x | w, z*) = d1 + θ·d2,

where, as shown in Figure 2, z* is the ideal point, d1 is the Euclidean distance between the ideal point and the foot of the perpendicular dropped from the solution onto the reference direction, and d2 is the perpendicular distance of the solution from the reference direction. The influence of the penalty parameter θ on the performance of Hmp/BBO is discussed in Section 4.3.
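A compact sketch of this scalarization (function name ours; the default θ = 5 is the value commonly used with PBI, not necessarily the paper's setting):

```python
import math

def pbi(f, weight, z_star, theta=5.0):
    """Penalty-based Boundary Intersection value g = d1 + theta * d2.

    d1: distance from the ideal point z* along the reference direction `weight`
    d2: perpendicular distance of the objective vector from that direction
    """
    diff = [fi - zi for fi, zi in zip(f, z_star)]
    norm_w = math.sqrt(sum(w * w for w in weight))
    d1 = abs(sum(d * w for d, w in zip(diff, weight))) / norm_w
    d2 = math.sqrt(sum((d - d1 * w / norm_w) ** 2
                       for d, w in zip(diff, weight)))
    return d1 + theta * d2
```

A point lying exactly on the reference direction has d2 = 0, so g reduces to its distance from z*; the penalty θ·d2 pulls solutions toward their assigned direction, which is what preserves diversity.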
3.6. WithinSubsystem Migration
Within-subsystem migration is mainly for information sharing, so we need a new, fast method for selecting immigrating and emigrating islands. First, immigrating islands are selected based on the NDRS. Then emigrating islands are selected by emigration rate, in order to further improve the algorithm's convergence speed while avoiding local optima. The features (SIVs) of the islands are not directly overwritten with the new values coming from the probabilistically selected islands mentioned above. Instead, the SIVs are maintained in two temporary matrices, each row representing one individual. The old variables are restored if and only if the modified individual exhibits lower solution quality and does not satisfy the Metropolis criterion; an inferior migrated island is kept only if it passes the Metropolis criterion of SA. With this restriction on within-subsystem migration, the Hmp/BBO algorithm can avoid falling into local optima. The procedure is described in Algorithm 2.
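The accept-or-revert step can be sketched as follows; this is a simplified whole-island version (the paper operates SIV by SIV, and all names here are ours):

```python
import math
import random

def migrate_with_metropolis(island, emigrant, cost, temperature,
                            rng=random.random):
    """Try replacing an island's variables with the emigrant's; keep the
    change only if it is no worse or passes the Metropolis test."""
    candidate = list(emigrant)              # modified individual (temp buffer)
    delta = cost(candidate) - cost(island)  # positive means quality dropped
    if delta <= 0 or rng() < math.exp(-delta / temperature):
        return candidate                    # accept improvement or lucky jump
    return list(island)                     # revert to the old variables
```

Keeping the old variables in a temporary buffer makes the revert free: no copy of the population is needed beyond the candidate row.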

3.7. CrossSubsystem Migration
During the cross-subsystem migration stage, migrants should be chosen from neighboring subsystems as much as possible. In Hmp/BBO each island is uniquely specified by a weight vector, and neighborhoods are assigned based on the Euclidean distances between weight vectors, so islands can be selected from neighboring subsystems via the neighboring index set. First, islands are selected between subsystems based on the neighboring index set. Second, emigrating islands are selected based on the PBI distance. Furthermore, poor candidate islands must be eliminated to improve diversity between subsystems: an inferior migrated island is not selected unless it passes the roulette wheel selection. With this restriction on cross-subsystem migration, the diversity performance of Hmp/BBO is enhanced. The procedure is described in Algorithm 3.
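Fitness-proportionate (roulette-wheel) selection, used here as the gate for migrants, can be sketched as follows (function name and `rng` hook are ours):

```python
import random

def roulette_select(weights, rng=random.random):
    """Return an index drawn with probability proportional to its weight."""
    total = sum(weights)
    pick = rng() * total
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if pick <= acc:
            return i
    return len(weights) - 1  # guard against floating-point round-off
```

An island with zero weight is never chosen, which is exactly how inferior migrants are filtered out.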

3.8. Mutation Algorithm
In Hmp/BBO, these events are modeled as SIV mutation. The mutation rate is determined from the species-count probability P_s according to

m(s) = m_max · (1 − P_s / P_max),

where m_max is a user-predefined maximum mutation rate and P_max is the largest species-count probability. The ISI (island suitability index) is a function of the SIVs. The mutation process is described in Algorithm 4.
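A sketch under the standard BBO mutation formula, an assumption on our part since the paper's exact equation is not reproduced here; `sample_siv` stands in for whatever replacement-value generator the framework uses:

```python
import random

def mutate_island(island, p_s, p_max, m_max, sample_siv,
                  rng=random.Random(0)):
    """Per-SIV mutation with rate m(s) = m_max * (1 - p_s / p_max), the
    textbook BBO form; `sample_siv` draws a replacement SIV value."""
    rate = m_max * (1.0 - p_s / p_max)
    return [sample_siv(rng) if rng.random() < rate else siv for siv in island]
```

Islands with a high species-count probability (likely solutions) mutate rarely, while improbable islands are perturbed aggressively.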

3.9. Clear the Duplicate Algorithm
Clearing duplicates increases the diversity of solutions and prevents features from being duplicated across islands. The procedure is described in Algorithm 5.
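One way to realize this (a sketch; `resample` stands in for whatever fresh-island generator the framework provides):

```python
import random

def clear_duplicates(population, resample, rng=random.Random(0)):
    """Replace exact duplicate islands with freshly generated ones so that
    every island in the population is unique."""
    seen = set()
    unique = []
    for island in population:
        key = tuple(island)
        while key in seen:                 # duplicate: draw a replacement
            island = resample(rng)
            key = tuple(island)
        seen.add(key)
        unique.append(island)
    return unique
```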

In conclusion, Hmp/BBO improves the performance of convergence and diversity by introducing a new framework for many-objective optimization. In addition, while the SA algorithm works with only one individual, Hmp/BBO is a population-based algorithm: instead of running SA with a single island, it can be executed with all islands in the subsystems. Thus, the internal search loops of SA within each temperature cycle can be omitted in Hmp/BBO without affecting solution quality, saving a significant amount of CPU time. With these methods, the exploration and exploitation of the Hmp/BBO algorithm are much improved. The Hmp/BBO framework is described in Algorithm 6.

3.10. Computational Complexity of One Generation of Hmp/BBO
In this section, we discuss the computational complexity of Hmp/BBO. For a population of size N and a problem with M objectives, the major computational costs lie in the ranking, within-subsystem migration, and cross-subsystem migration steps of Algorithm 1. The NDRS step requires O(MN^2) computations. Within-subsystem migration needs O(CN) computations, where C is the number of Metropolis cycles applied to the parent population. Cross-subsystem migration needs O(LN) computations, where L equals the number of subsystems. Reviewing all of the above for M objectives, the overall complexity of one generation of Hmp/BBO is O(MN^2 + CN + LN).
4. Simulation Results
In this section, to validate Hmp/BBO, we compare it with five state-of-the-art algorithms: BBO/Complex, NSGA-III, MOEA/D-PBI, HYPE, and GREA. NSGA-III is based on the Pareto dominance relationship, with population diversity maintained through a set of uniformly distributed reference points. MOEA/D-PBI is a representative decomposition-based approach and keeps solution diversity using a series of predefined weight vectors. HYPE is a Hypervolume-based evolutionary algorithm for many-objective optimization. GREA uses the NSGA-II framework and introduces two concepts (grid dominance and grid difference), three grid-based criteria, and a fitness adjustment strategy. We describe the test problems in Section 4.1 and the quality indicators in Section 4.2. We then give the parameter settings for the compared algorithms in Section 4.3. Finally, results are discussed in Section 4.4.
4.1. Test Problems
To verify the proposed algorithm, the well-known DTLZ1–DTLZ4 [33] test functions and the full WFG test suite [34] are used; their definitions are listed in Table 1. We consider only DTLZ1–4 from the DTLZ test suite, because the nature of the PFs of DTLZ5 and DTLZ6 is unclear beyond three objectives [35]. DTLZ1 is a linear and multimodal function; DTLZ2 is a concave function; DTLZ3 is concave and multimodal; DTLZ4 is concave and biased, as summarized in Table 1.

4.2. Quality Indicators
In our empirical study, the following three widely used quality indicators are examined. The first reflects only the convergence of an algorithm; the second and third capture the convergence and diversity of the solutions simultaneously.
(1) Generational Distance (GD). Let P be the final set of nondominated points obtained in the objective space and P* be a set of points evenly distributed over the true PF. GD reflects only the convergence performance of an algorithm; a smaller value means better quality. GD is defined as

GD(P, P*) = (1/|P|) Σ_{x∈P} dist(x, P*),

where dist(x, P*) is the Euclidean distance from x to its nearest neighbor in P*.
(2) Inverted Generational Distance (IGD). Let P* be a set of points uniformly sampled over the true efficient front (EF), and let P be the set of solutions obtained by an algorithm. IGD measures convergence and diversity in a certain sense; a smaller value means better quality. IGD is defined as

IGD(P*, P) = (1/|P*|) Σ_{y∈P*} dist(y, P),

where dist(y, P) is the Euclidean distance from y to its nearest neighbor in P.
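Both indicators reduce to nearest-neighbor distance averages and can be sketched as follows (averaged forms matching the definitions above; function names ours):

```python
import math

def _nearest(point, points):
    """Euclidean distance from `point` to its nearest neighbor in `points`."""
    return min(math.dist(point, q) for q in points)

def gd(approx, true_front):
    """Generational Distance: mean distance from each obtained point
    to the nearest true-front point (convergence only)."""
    return sum(_nearest(p, true_front) for p in approx) / len(approx)

def igd(true_front, approx):
    """Inverted GD: mean distance from each true-front sample to the
    nearest obtained point (convergence and diversity)."""
    return sum(_nearest(q, approx) for q in true_front) / len(true_front)
```

Note the asymmetry: a single well-converged point can achieve GD = 0, but IGD penalizes it for failing to cover the rest of the front.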
(3) Hypervolume (HV) Indicator. The HV indicator measures both the convergence and diversity of a solution set; it is used for the WFG test suite, and a larger HV value means a better approximation of the PF. Let m be the number of objectives, A the set of nondominated points obtained in the objective space, and r a reference point in the objective space that is dominated by all Pareto points. HV is the Lebesgue measure of the region dominated by A and bounded by r:

HV(A) = λ_m( ⋃_{a∈A} [a_1, r_1] × ... × [a_m, r_m] ).
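In high dimensions HV is typically estimated by Monte Carlo sampling, as in the experiments below; a minimal sketch of the estimator (names ours, minimization assumed):

```python
import random

def hypervolume_mc(points, reference, n_samples=100_000,
                   rng=random.Random(1)):
    """Monte Carlo estimate of the hypervolume dominated by `points` and
    bounded by `reference` (a point dominated by every member of `points`)."""
    m = len(reference)
    lows = [min(p[i] for p in points) for i in range(m)]
    box = 1.0
    for lo, hi in zip(lows, reference):        # volume of the sampling box
        box *= (hi - lo)
    hits = 0
    for _ in range(n_samples):
        s = [rng.uniform(lo, hi) for lo, hi in zip(lows, reference)]
        if any(all(p[i] <= s[i] for i in range(m)) for p in points):
            hits += 1                          # sample is dominated
    return box * hits / n_samples
```

The estimate improves as 1/sqrt(n_samples), which is why a large sample budget (such as the 1,000,000 points used in Section 4.3) is needed for accuracy.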
4.3. Parameters Setting
The parameters for the six MOEAs considered in this study are listed below.
(1) Population size: the population size used for Hmp/BBO is 91, 206, and 210 for three-, four-, and five-objective problems, respectively. The NSGA-III population size is mildly adjusted as in the original NSGA-III study, that is, 92, 212, and 276 for three-, four-, and five-objective problems.
(2) Stop condition: each algorithm runs 20 times independently on each test problem. All algorithms are run on a desktop computer with a 2.6 GHz CPU, 8 GB RAM, and Windows 10. The stop condition of each algorithm is the maximum number of fitness evaluations, as outlined in Table 2.
(3) Parameter settings in Hmp/BBO: the immigration rate, emigration rate, Metropolis parameters, and neighborhood size are fixed, and the penalty parameter θ in the PBI follows the suggestion in [18].
(4) Parameter settings in BBO/Complex: the immigration rate, emigration rate, mutation probability, and generation count are fixed.
(5) Reproduction operator settings: the crossover probability and its distribution index in NSGA-III, and the mutation probability and its distribution index, are fixed.
(6) Parameter settings in MOEA/D-PBI: the neighborhood size, the probability used for selection within the neighborhood, and the penalty parameter θ are fixed.
(7) Grid division (div) in GREA: div is set according to [9], as shown in Table 3.
(8) Number of points in Monte Carlo sampling: set to 1,000,000 to ensure accuracy.


4.4. Result and Discussion
In this section, the GD metric is first used to compare the convergence capability of the proposed Hmp/BBO with that of the other five algorithms. Table 4 shows the best, median, and worst GD values obtained by the six algorithms on DTLZ1 to DTLZ4 with different numbers of objectives; values significantly better than the others are marked in boldface. Table 4 shows that Hmp/BBO performs well on all the test functions, especially on the DTLZ1 and DTLZ4 problems. Table 5 compares Hmp/BBO with the other five MOEAs in terms of IGD values on the DTLZ test suite; it reports the mean and standard deviation of the IGD values over 20 independent runs for the six compared MOEAs, with the best values marked in bold. Furthermore, Table 6 presents the average and standard deviation of the Hypervolume over 20 independent runs on the WFG problems, where the best performance is highlighted in bold; the quality of the solution sets obtained by the six algorithms was compared on all WFG test problems in terms of Hypervolume. From the experimental results on the DTLZ and WFG test problems, we find that Hmp/BBO shows better performance than the other five algorithms.
