Mathematical Problems in Engineering
Volume 2017 (2017), Article ID 2314927, 23 pages
https://doi.org/10.1155/2017/2314927
Research Article

Enhancing the Performance of Biogeography-Based Optimization Using Multitopology and Quantitative Orthogonal Learning

State Key Laboratory of Advanced Electromagnetic Engineering and Technology, School of Electrical and Electronic Engineering, Huazhong University of Science and Technology, Wuhan 430074, China

Correspondence should be addressed to Jinfu Chen

Received 18 April 2017; Revised 16 July 2017; Accepted 7 August 2017; Published 13 September 2017

Academic Editor: Thomas Hanne

Copyright © 2017 Siao Wen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Two defects of biogeography-based optimization (BBO) are identified by analyzing the characteristics of its dominant migration operator. One is that, due to the global topology and the direct-copying migration strategy, information in a few good-quality habitats tends to be copied rapidly to all habitats, which can lead to premature convergence. The other is that the solutions generated by the migration process are distributed only in certain specific regions, so many other areas where competitive solutions may exist cannot be investigated. To remedy the former, a new migration operator, developed by modifying the topology and the copy mode, is introduced into BBO, and a diversity mechanism is proposed. To remedy the latter, a quantitative orthogonal learning process based on space quantization and orthogonal design is proposed; it aims to investigate the feasible region thoroughly so that more competitive solutions can be obtained. The effectiveness of the proposed approaches is verified on a set of benchmark functions with diverse characteristics. The experimental results show that the proposed method has merits in solution quality, convergence performance, and other respects compared with basic BBO, five BBO variant algorithms, seven orthogonal learning-based algorithms, and several non-OL-based evolutionary algorithms. The effects of each improved component are also analyzed.

1. Introduction

Optimization problems exist widely in all areas of life (economics, engineering, medicine, business, urban planning, etc.) and play a vital role in both academic research and industrial production. They tend to become more complicated with the unceasing progress of science and technology. To handle such challenging optimization problems, much effort has been devoted by researchers, and many techniques have been proposed during the last several decades. These methods fall into two main groups: derivative-based algorithms and artificial intelligence methods. Derivative-based algorithms require objective functions to be smooth and differentiable, which restricts their application to many complex optimization problems. In contrast, artificial intelligence methods do not require such properties. Among them, evolutionary algorithms (EAs) are popular and have been successfully applied to real-world problems. EAs mimic various social behaviors existing in nature to solve optimization problems. Popular EAs include the genetic algorithm (GA) [1, 2], particle swarm optimization (PSO) [3, 4], the artificial immune system (AIS) [5], differential evolution (DE) [6, 7], ant colony optimization (ACO) [8], the artificial bee colony (ABC) algorithm [9, 10], and the simulated annealing (SA) algorithm [11].

As an effective EA, biogeography-based optimization (BBO) [12] has shown excellent performance on various benchmark functions [13–15]. Its merits include a simple structure, fewer parameters than many other EAs, and strong robustness against variations of its algorithm-dependent parameters [12]. Due to this performance, it has been successfully applied to a variety of real-world problems [16–20]. However, BBO has also been shown to have certain weaknesses: although it has excellent exploitation ability, it tends to fall into local optima; that is, it lacks exploration ability. Exploration and exploitation are two important concepts in any optimization algorithm. Exploration is the ability to search areas far from the current individuals in the search space, while exploitation is the capacity to investigate the vicinity of the current solutions. They are two largely conflicting objectives, so developing an algorithm good at both is a challenging task. To remedy this, much effort has been made and several BBO variants have been proposed. One of the most popular research directions focuses on modifying the migration process. Li et al. [21] proposed a perturbation migration operator that creates new individuals to update the target individual. Ma and Simon [22] proposed a blended migration operator in which two parental habitats contribute constant weighted features to a new feature of an offspring. Xiong et al. [23] proposed a polyphyletic migration operator that uses up to four habitats' features to construct a new solution vector and can thus generate new features from more promising areas of the search space. Li and Yin [24] introduced a multiparent migration operator that adopts three consecutive individuals to generate three offspring according to their fitness values.
From the experimental results of [21–24], it can be seen that a well-designed migration operator indeed contributes to improving the performance of BBO. Besides migration operator modification, other researchers focus on hybridizing BBO with operators from other EAs to make up for BBO's defects. BBO is hybridized with harmony search (HS) to obtain HSBBO in [25], which aims to bring the exploration of HS into BBO. Study [26] combines DE with BBO to develop the DBBO algorithm. Study [27] hybridizes opposition-based learning with BBO to obtain oppositional BBO (OBBO). All these algorithms have achieved excellent performance.

In this paper, both migration process modification and a hybridization strategy are pursued. Two defects of BBO are identified by analyzing the characteristics of the migration operator, the main operator of BBO. One is that, due to the global topology and the direct-copying migration strategy, information in a few good-quality habitats tends to be copied rapidly to all habitats, which can lead to premature convergence. The other is that the solutions generated by the migration operator are distributed only in certain specific regions, so other areas where competitive solutions may exist cannot be investigated. To remedy the former, a new migration operator, developed by focusing on topology modification and copy-mode improvement, is proposed, along with a diversity mechanism. To remedy the latter, a quantitative orthogonal learning (QOL) process based on space quantization and orthogonal design is adopted; it aims to investigate the solution space thoroughly so that more competitive solutions can be obtained.

The remainder of this paper is organized as follows. Section 2 briefly introduces the basic principle of BBO. The new proposed algorithm is described in Section 3. The experimental results conducted on various benchmark functions are shown in Section 4 to verify the effectiveness of the algorithm. In Section 5, the effects of different improved components are analyzed. Section 6 is devoted to conclusions.

2. Biogeography-Based Optimization

BBO is inspired by the equilibrium theory of island biogeography. Each individual in the population is called a "habitat." The goodness of a habitat (i.e., candidate solution) is measured by the habitat suitability index (HSI) [12]. Habitats with higher HSI tend to have more species than those with lower HSI. In BBO, the immigration rate and emigration rate of each habitat are functions of the number of species in the habitat; in the linear model they are calculated as follows:

λ_i = I (1 − k_i / n),   μ_i = E (k_i / n),  (1)

where I and E are the maximum immigration and emigration rates, respectively, N is the number of habitats (i = 1, 2, ..., N), k_i is the number of species in habitat i, and n is the largest possible number of species that a habitat can support. From (1), it can be seen that a good habitat tends to have a large value of μ and a small value of λ, which enables poor habitats to get information from good habitats to improve their quality.
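The linear migration model can be sketched in a few lines; the values of I, E, and the species counts below are illustrative, not taken from the paper.

```python
# Linear migration model of basic BBO: a habitat with k species (out of a
# maximum of n) gets immigration rate lambda and emigration rate mu.
def migration_rates(k, n, I=1.0, E=1.0):
    """Return (immigration, emigration) rates for a habitat with k species."""
    lam = I * (1.0 - k / n)   # good habitats (large k) immigrate little
    mu = E * (k / n)          # good habitats emigrate a lot
    return lam, mu

# A habitat holding half the maximum species count sits at the crossover point.
print(migration_rates(5, 10))  # -> (0.5, 0.5)
```

As the comments note, the two rates move in opposite directions, which is what lets information flow from good habitats to poor ones.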

There are two main operators in BBO: (1) the migration operator and (2) the mutation operator. In the migration process, based on the immigration and emigration rates, information is probabilistically emigrated from good habitats to poor ones. The quality of the poor habitats is enhanced by accepting new information from high-quality habitats; hence, good and bad habitats collaborate, move together toward more promising areas of the search space, and finally converge to an optimal solution. In the mutation process, some variables in each habitat may be replaced by new variables randomly generated in the entire solution space, which aims to maintain population diversity.

3. The Proposed Methodology

To address the defects of the migration operator in basic BBO, a new BBO variant based on multitopology migration and QOL, called MTQLBBO, is proposed. The key points of our methods are described in detail in this section.

3.1. Population Initialization

Assuming that each habitat H_i in population P (i = 1, 2, ..., N) contains D decision variables, the initialization of H_i is expressed as follows:

H_i(j) = lb_j + r_j(i) (ub_j − lb_j),  (2)

where H_i(j) is the jth decision variable in H_i, j = 1, 2, ..., D, lb_j and ub_j are the lower and upper bounds of the jth variable, r_j is an N-dimensional vector that contains a Latin Hypercube Sampling of N values from the interval [0, 1], and r_j(i) is the ith element in vector r_j.
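A minimal Latin Hypercube initialization consistent with this description can be sketched as follows; the function and variable names are illustrative.

```python
import random

def lhs_init(pop_size, dim, lower, upper, rng=random.Random(0)):
    """Latin Hypercube Sampling initialization (a sketch): each of the
    pop_size equal-width strata of every dimension receives exactly one
    sample point, so the initial population covers the whole range."""
    pop = [[0.0] * dim for _ in range(pop_size)]
    for j in range(dim):
        # one random point inside each of pop_size equal-width strata
        strata = [(i + rng.random()) / pop_size for i in range(pop_size)]
        rng.shuffle(strata)  # decouple the strata order across dimensions
        for i in range(pop_size):
            pop[i][j] = lower[j] + strata[i] * (upper[j] - lower[j])
    return pop

pop = lhs_init(10, 3, [-5.0] * 3, [5.0] * 3)
```

Compared with plain uniform sampling, this guarantees that no stratum of any variable's range is left empty in the initial population.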

3.2. Multitopology Migration Operator

Migration is the main operator of the BBO algorithm because it determines how the new population is generated from the previous one, and it thus shapes BBO's search trajectory from the initial population onward. To enhance the performance of BBO, it is necessary to analyze the characteristics of the migration operator.

During the migration process, the immigration rate λ is first used to decide the immigrated habitat H_im; habitats with greater values of λ are more likely to be selected as H_im. Then the emigration rate μ is used to select the emigrated habitat H_em from all habitats except H_im; habitats with greater values of μ are more likely to be selected as H_em. As a result, H_im tends to get information from a few habitats with large μ and directly copies it from them. Moreover, each habitat can receive information from any other habitat and emit information to any other habitat, which means the migration process is based on a global topology, as shown in Figure 1(a). In a global topology, habitats are strongly connected with each other and information spreads very quickly, much like virus propagation according to social network theory [28]. Due to the global topology and the direct-copying migration mechanism, information in a few good-quality habitats is soon copied to all habitats. Although this can rapidly improve the quality of bad habitats, all candidate solutions (i.e., habitats) soon become homogeneous, resulting in poor exploration ability. To address this defect, the multitopology migration operator is developed by modifying the topology and the copy mode.

Figure 1: A schematic diagram of multitopology.

Multitopology includes two kinds of topology: global topology and ring topology, as shown in Figure 1. Figure 1(a) represents the global topology, and Figure 1(b) represents the ring topology, where every habitat is connected only with its two adjacent habitats. The population is divided into two groups, Group A and Group B. Habitats in Group A migrate based on the global topology, and habitats in Group B migrate based on the ring topology. To characterize the proportion of the ring topology in the multitopology, the Mixed Degree of Ring Topology (MDR) is defined as

MDR = N_B / N,  (3)

where N_B is the number of habitats in Group B and N is the population size.

The motivation for introducing the parameter MDR is to balance exploitation and exploration. If MDR = 0, all habitats migrate based on the global topology, as in basic BBO; this speeds up convergence but easily makes the algorithm get stuck in local optima. On the other hand, if MDR = 1, population diversity increases but convergence slows down. Therefore, the parameter can be tuned to achieve a proper balance between exploitation and exploration so as to adapt the algorithm to optimization problems with different characteristics.

As for the copy-mode modification, an indirect-copying migration operator is introduced. This operator is expressed as

H_im(j) = H_em(j) + r (H_r(j) − H_em(j)),  (4)

where H_r is a habitat randomly selected from the habitats except H_im and H_em, and r is a uniformly distributed real random number within the range [0, 1].

The core idea of constructing this operator is based on two considerations. One is that, instead of directly copying information from H_em, the operator uses the relatively good habitat H_em as the base and uses H_r as a perturbation on it, thus bringing in more information from the explored feasible space. The other is that the added perturbation avoids the homogenization of habitats and thus helps to remedy the mentioned defect. The pseudocode of the migration operator is given in Algorithm 1.

Algorithm 1: Migration operator.
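One pass of the multitopology migration can be sketched as below, under the description above; the roulette-wheel emigrant selection and the exact perturbation form are assumptions, not taken verbatim from the paper.

```python
import random

def migrate(pop, lam, mu, group_b, rng=random.Random(1)):
    """One generation of multitopology migration (a sketch).  pop is a list
    of habitats (lists of floats), lam/mu are per-habitat immigration and
    emigration rates, and group_b[i] is True for ring-topology habitats."""
    n = len(pop)
    new_pop = [h[:] for h in pop]
    for i in range(n):
        for j in range(len(pop[i])):
            if rng.random() >= lam[i]:
                continue  # this variable does not immigrate
            if group_b[i]:
                # ring topology: emigrant comes from one of the two neighbours
                cand = [(i - 1) % n, (i + 1) % n]
            else:
                # global topology: emigrant drawn from all other habitats
                cand = [k for k in range(n) if k != i]
            # roulette-wheel selection on emigration rates
            total = sum(mu[k] for k in cand)
            pick = rng.random() * total
            em = cand[-1]
            for k in cand:
                pick -= mu[k]
                if pick <= 0:
                    em = k
                    break
            # indirect copy: emigrant value perturbed toward a random habitat
            r = rng.choice([k for k in range(n) if k not in (i, em)])
            alpha = rng.random()
            new_pop[i][j] = pop[em][j] + alpha * (pop[r][j] - pop[em][j])
    return new_pop
```

Because each new value is a convex combination of two existing values, offspring stay inside the range already spanned by the population, while the perturbation keeps habitats from collapsing onto identical copies.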

There are three points that should be noted. First, the two groups are required to have roughly the same average HSI level. The grouping method in this paper is as follows:
(1) Sort all habitats by HSI from high to low and number them from 1 to N.
(2) Select N_B integers from the interval [1, N] by using the Uniform Sampling Technique.
(3) Put the habitats whose numbers are the selected integers into Group B; the rest of the habitats are put into Group A.
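The three grouping steps can be sketched as follows; interpreting the Uniform Sampling Technique as evenly spaced positions along the fitness ranking is an assumption.

```python
def regroup(hsi, mdr):
    """Split habitats into Group A (global topology) and Group B (ring),
    keeping the groups at roughly the same average HSI by spreading
    Group B uniformly over the fitness ranking (a sketch of steps 1-3)."""
    n = len(hsi)
    n_b = round(mdr * n)                      # MDR = n_b / n
    order = sorted(range(n), key=lambda i: hsi[i], reverse=True)
    # evenly spaced positions along the sorted ranking
    positions = [round(k * (n - 1) / max(n_b - 1, 1)) for k in range(n_b)]
    group_b = {order[p] for p in positions}
    group_a = [i for i in range(n) if i not in group_b]
    return group_a, sorted(group_b)
```

Because Group B's members are drawn from the top, middle, and bottom of the ranking alike, neither group is systematically fitter than the other.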

Secondly, habitat H_r, which provides the perturbation in operator (4), is selected from all habitats instead of only from Group A or Group B. Thirdly, the two groups are mixed at the end of each migration process, and the population is regrouped at the beginning of the next one. The second and third points show that, although the habitats in the two groups follow entirely different topologies during migration, the two groups still adequately share their information and collaborate to move in a more promising direction.

To verify the effectiveness and efficiency of the multitopology migration process, an enhanced BBO with the proposed migration operator (EBBO) is constructed and compared with basic BBO. Note that EBBO differs from BBO only in that the basic migration operator is replaced by the multitopology migration operator. Both EBBO and basic BBO are run on the 30-dimensional Alpine function described in Table 1.

Table 1: Benchmark functions in experimental tests.

For a fair comparison, both BBO and EBBO start their iterative process from the same initial population. The values of two decision variables in each habitat, observed at different iterative stages, are plotted in Figure 2. As analyzed at the beginning of this subsection, BBO's habitats move much closer to each other before the 100th generation than the habitats of EBBO, and BBO falls into a local minimum. Oppositely, EBBO eventually pulls its population together toward the optimal region and converges to the global optimum. The comparative results show that the proposed migration operator can indeed enhance the performance of the basic BBO algorithm.

Figure 2: Two decision variables observed at different iterative stages of BBO and EBBO.
3.3. Quantitative Orthogonal Learning Operator

Assume solutions are three-dimensional, and denote H1 as (x1, x2, x3) and H2 as (y1, y2, y3). The solutions that the migration process can possibly generate from them are (x1, x2, y3), (x1, y2, x3), (y1, x2, x3), (x1, y2, y3), (y1, x2, y3), and (y1, y2, x3). As shown in Figure 3, all of them are located on the vertexes of the cuboid determined by H1 and H2.

Figure 3: Spatial location of H1, H2, and their possibly generated solutions in the migration process.

It can be seen from Figure 3 that the inner area of the cuboid cannot be investigated. However, more competitive solutions are very likely to be located in that inner area, and ignoring it harms the solution quality of the algorithm. The purpose of introducing the QOL process is to enable the algorithm to search the solution space thoroughly so that more competitive solutions can be found.
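The vertex restriction can be checked directly: every offspring that direct-copy migration can produce from two three-dimensional parents is a coordinate-wise mix of them, i.e., a vertex of their cuboid. The parent values below are illustrative.

```python
from itertools import product

# Direct-copying migration can only mix coordinates of the two parents, so
# every possible offspring of H1 and H2 sits on a vertex of the box they span.
H1, H2 = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0)
vertices = {tuple(pair[b] for pair, b in zip(zip(H1, H2), bits))
            for bits in product((0, 1), repeat=3)}
# 2^3 = 8 vertices: the parents themselves plus the 6 mixed combinations
print(len(vertices))   # -> 8
```

No point strictly inside the cuboid, such as the midpoint of H1 and H2, appears in this set, which is exactly the gap the QOL process is designed to fill.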

The process is described as follows. Suppose H1 and H2 are the two parental habitats of the QOL process, denoted as

H1 = (x1, x2, ..., xD),   H2 = (y1, y2, ..., yD).  (5)

Each of them has D decision variables. Denote l and u as the lower and upper bounds of the space determined by H1 and H2, expressed as follows:

l_j = min(x_j, y_j),   u_j = max(x_j, y_j),   j = 1, 2, ..., D,  (6)

where x_j and y_j are the jth elements in H1 and H2, respectively. The space determined by H1 and H2 is quantized into Q levels such that the difference between any two successive levels is the same. The quantized space is represented by a matrix M, where the element in the ith row and jth column is expressed as follows:

M(i, j) = l_j + (i − 1)(u_j − l_j)/(Q − 1),   i = 1, 2, ..., Q.  (7)

M has D columns; each column corresponds to a decision variable, and the rows correspond to the quantized values of that variable. As a result, the solution space is quantized into Q^D points, which is a large number. If these points were evaluated one by one to determine the optimal solution, much computational time would be wasted. To improve computational efficiency, firstly, merge the D decision variables into F factors; level (i.e., value) combinations of variables within a factor are not considered. In this paper, the number of variables in each factor is roughly the same. By merging variables, the number of quantized points (i.e., combinations) is decreased to Q^F. Secondly, apply orthogonal design to select some representative samples of the quantized points and then use them to determine the optimal solutions (i.e., level combinations). As for how to use representative samples selected by orthogonal design to determine the optimal level combinations for a multifactor and multilevel optimization problem, please refer to [29]. Thirdly, instead of being applied to every habitat, the QOL process is implemented only once at each iteration. When the optimal solutions are determined, they are used to replace the two parental habitats.
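The whole QOL step can be sketched with small illustrative sizes: Q = 3 levels, D = 8 variables merged into F = 4 factors, and the standard L9(3^4) orthogonal array. For brevity this sketch simply keeps the best sampled combination rather than performing the full factor analysis of [29].

```python
# Standard L9(3^4) orthogonal array: 9 rows, 4 three-level factors; every
# pair of columns contains each level pair exactly once.
L9 = [(0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
      (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
      (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0)]

def qol(h1, h2, fitness, q=3, factors=4):
    """Quantize the box spanned by parents h1, h2 into q levels, merge the
    variables into `factors` groups, and evaluate only the L9 rows.
    Assumes q = 3 and factors = 4 so the L9 array applies."""
    d = len(h1)
    lo = [min(a, b) for a, b in zip(h1, h2)]
    hi = [max(a, b) for a, b in zip(h1, h2)]
    # q evenly spaced levels per variable: M[i][j] = lo_j + i*(hi_j - lo_j)/(q-1)
    M = [[lo[j] + i * (hi[j] - lo[j]) / (q - 1) for j in range(d)]
         for i in range(q)]
    # merge consecutive variables into factors of roughly equal size
    size = -(-d // factors)        # ceiling division
    groups = [range(f * size, min((f + 1) * size, d)) for f in range(factors)]
    # evaluate only the 9 representative level combinations
    best, best_fit = None, float("inf")
    for row in L9:
        cand = [0.0] * d
        for f, level in enumerate(row):
            for j in groups[f]:
                cand[j] = M[level][j]
        fit = fitness(cand)
        if fit < best_fit:
            best, best_fit = cand, fit
    return best, best_fit

sphere = lambda x: sum(v * v for v in x)
best, fit = qol([1.0] * 8, [-1.0] * 8, sphere)
```

With parents at opposite corners of the box, the middle level of each variable is 0, so the orthogonal samples reach interior points that no vertex-restricted migration offspring could produce.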

3.4. Diversity Mechanism

To escape from local optima, a diversity mechanism is introduced. When the best fitness of the population does not change over a certain number of generations in the evolutionary process, that is, when

|f_best(t) − f_best(t − T)| ≤ ε,  (8)

where f_best(t) and f_best(t − T) are the best fitness values at the tth and (t − T)th generations, respectively, and T and ε are tolerance factors, some habitats are replaced as follows:

H_i(j) = lb_j + rand (ub_j − lb_j)   if rand < p_r,  (9)

where i = 1, 2, ..., N, j = 1, 2, ..., D, H_i(j) denotes the jth variable in the ith habitat, and rand is a randomly generated value from the interval [0, 1]. p_r is a user-defined parameter representing the randomization rate; here, we set it to 0.2.
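A sketch of the stagnation test and the random-replacement rule above; the window length, tolerance, and variable names are assumptions for illustration.

```python
import random

def stagnated(history, window=20, eps=1e-8):
    """True when the best fitness has not improved by more than eps over
    the last `window` generations (window and eps play the role of the
    tolerance factors T and epsilon)."""
    if len(history) <= window:
        return False
    return abs(history[-1] - history[-1 - window]) <= eps

def apply_diversity(pop, lower, upper, p_rand=0.2, rng=random.Random(3)):
    """Stagnation response: each variable of each habitat is reset to a
    uniform random point with probability p_rand (the randomization rate,
    0.2 in the paper)."""
    for h in pop:
        for j in range(len(h)):
            if rng.random() < p_rand:
                h[j] = lower[j] + rng.random() * (upper[j] - lower[j])
    return pop
```

The test fires only after a full stagnation window, so the random resets do not disturb a population that is still making progress.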

3.5. Algorithm Structure and Main Procedures

By incorporating the multitopology migration operator, the QOL strategy, and the diversity mechanism into BBO, the proposed MTQLBBO is developed. Its main procedure is as follows:
(1) Set the parameters of MTQLBBO, mainly including the population size N, MDR, Q, F, the maximum number of fitness evaluations (FEs_Max), T, and ε.
(2) Initialize the population based on Latin Hypercube Sampling.
(3) Sort the population from best to worst and obtain each habitat's species number according to its ranking in the population.
(4) Calculate λ and μ by formula (1) and implement the migration and mutation processes based on λ and μ.
(5) Implement the QOL operator.
(6) Similar to BBO, perform the elite reservation strategy.
(7) If required, apply the diversity mechanism.
(8) Check the stopping criteria. If met, stop and output the result; otherwise, go back to step (3) and continue.

4. Case Studies

4.1. Experimental Settings and Experiment Outline

To verify the performance of MTQLBBO, a test suite consisting of 24 scalable benchmark functions [30, 31] with different characteristics, summarized in Table 1, is adopted in this section. Some of the functions are unimodal and the rest are multimodal. The parameter settings are stated in Table 2. As for the stopping criteria, the maximum number of fitness evaluations for each function is given in Table 3. These parameter settings are employed in all experiments unless a change is mentioned.

Table 2: Parameter settings of the MTQLBBO algorithm used in this paper.
Table 3: Maximum evaluations for each function.

For a fair comparison between different algorithms and to reduce the influence of randomness, all experimental results are obtained from 40 independent runs. The mean and the standard deviation of the best-of-run errors over the 40 runs are reported to measure performance. In addition, to test the statistical significance of the differences between the results of two algorithms, the Mann-Whitney test at the 0.05 level is adopted between all compared algorithms throughout the paper. For clarity, the results of the best algorithm are marked in boldface.
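The Mann-Whitney comparison can be sketched via the pairwise definition of the U statistic; for n = 40 runs one would compare U against a normal approximation of its null distribution, which is omitted here for brevity.

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for sample `a` against sample `b`,
    computed by direct pairwise comparison (ties count 1/2).  O(n*m),
    which is fine for 40-run error samples."""
    return sum(1.0 if x < y else 0.5 if x == y else 0.0
               for x in a for y in b)

# Larger U means sample `a` tends to be smaller (better errors).
print(mann_whitney_u([1, 2, 3], [4, 5, 6]))  # -> 9.0
```

A useful sanity check is the identity U(a, b) + U(b, a) = len(a) * len(b), which holds whatever the samples are.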

To fully evaluate the performance of the proposed algorithm, several group experiments are carried out. The experiment structure is described as Figure 4.

Figure 4: The experiment structure diagram.

The experiment can be divided into two parts: (1) evaluating the performance of MTQLBBO and (2) investigating the benefit of each modified component.

4.1.1. Evaluate the Performance of MTQLBBO

(a) Results Comparison between BBO and MTQLBBO. This experiment investigates whether the proposed improvement strategies can effectively enhance the performance of the basic BBO algorithm. BBO and MTQLBBO are run on all 24 benchmark functions.

(b) Scalability of MTQLBBO. A newly proposed algorithm is expected to perform well on optimization problems of different dimensions. It is therefore necessary to apply MTQLBBO to benchmark functions of different dimensions and check whether the algorithm maintains good performance. MTQLBBO is run on the benchmark functions with D = 10, 40, 80, and 160.

(c) Results Comparison between MTQLBBO and Other BBO Variant Algorithms. Since the basic BBO algorithm was proposed, several BBO variants have been developed and shown to outperform basic BBO. It is therefore necessary to compare the results of MTQLBBO with those of other BBO variants. The experiment is conducted on the benchmark suite in Table 1.

(d) Results Comparison between MTQLBBO and Other Orthogonal Learning (OL) Based Algorithms. Due to the advantages of the OL strategy, it has been employed in many EAs. To demonstrate the superiority of the OL-blended strategy in our work, the results of MTQLBBO are compared with those of other OL-based algorithms. The experiment is conducted on the benchmark suite in Table 1.

(e) Results Comparison between MTQLBBO and Other Non-OL-Based EAs. Besides OL-based EAs, other types of EAs have also shown excellent performance. To further validate the significance of our work, MTQLBBO is compared with non-OL-based EAs. The experiment is conducted on the benchmark suite in Table 1.

4.1.2. Investigating the Benefit of Improvement Components

It is known from Section 3 that there are mainly two improved components in MTQLBBO. An experiment is conducted to identify the benefit of each component. In addition, the effects of three additional parameters brought by these two improvement components are also analyzed.

4.2. Comparison with Basic BBO

To validate the effectiveness of MTQLBBO, each benchmark function is solved 40 times independently using BBO and MTQLBBO. The number of dimensions equals 30 in this subsection. The statistical results are shown in Table 4. The comparison results according to the Mann-Whitney (Wilcoxon rank-sum) test are marked as "+," "=," and "−," meaning that the performance of MTQLBBO is better than, equal to, or worse than that of its competitor, respectively. The summary counts, denoted w/t/l, are shown in the last row of the table: MTQLBBO wins on w, ties on t, and loses on l benchmark functions compared with its competitor. It can be seen from Table 4 that MTQLBBO performs significantly better than basic BBO on 22 of the 24 functions. For both unimodal and multimodal functions, MTQLBBO outperforms BBO. This indicates that the modified components not only enhance the local search ability and search accuracy but also give the algorithm a better ability to escape from local minima and reach a better near-global optimum. Namely, both excellent exploitation and exploration ability are achieved.

Table 4: Comparison of BBO and MTQLBBO for all benchmark functions (D = 30).

Besides solution quality, convergence property is also an important evaluation indicator for judging the performance of an algorithm. To evaluate the convergence performance, convergence curves of some representative benchmark functions are plotted and shown in Figure 5. It is evident that MTQLBBO converges faster than BBO.

Figure 5: Convergence graphs of BBO and MTQLBBO for six representative test functions.
4.3. Scalability Study

To explore the effect of the problem dimension on the performance of MTQLBBO, a scalability study with D = 10, 40, 80, and 160 is carried out, and the experimental results after the allotted fitness evaluations are summarized in Table 5. Note that, in this subsection, all parameters remain unchanged except D.

Table 5: Mean and standard deviation of the optimal results of BBO and MTQLBBO on different dimensions.

From the results, it can be seen that MTQLBBO performs better than BBO at every dimension. For D = 10, MTQLBBO obtains better results on 21 functions. For D = 40, MTQLBBO achieves better results than BBO on 22 functions, while BBO gives better results on only two. For D = 80, MTQLBBO performs better than BBO on 23 functions, while BBO achieves a better result on only one. For D = 160, BBO gets a better result on only one function.

From the above comparison, we can conclude that MTQLBBO has better performance than BBO for solving both low dimensional and high dimensional optimization problems on the selected instances. In addition, it is noticed that, for MTQLBBO, although it is able to provide better results on some functions when the dimension increases from 10 to 160, its performance deteriorates on more functions with the growth of problem dimension. The main reason is that the search space increases dramatically, thus raising the difficulty of solving the problem.

4.4. Comparison with Other BBO Variant Algorithms

The performance of MTQLBBO is compared with five other BBO variant algorithms at D = 30 in this subsection. These variants are MOBBO [24], PBBO-G [22], RCBBO-G [32], DBBO [26], and POLBBO [23]. All other parameters of these variant algorithms are set as in their original papers. Each benchmark function is optimized 40 times independently, and the results are summarized in Table 6.

Table 6: Comparison with other BBO variant algorithms (D = 30).

It can be seen from Table 6 that MTQLBBO surpasses all BBO variant algorithms on most benchmark functions. MTQLBBO obtains better results than MOBBO, PBBO-G, DBBO, RCBBO-G, and POLBBO on 18, 21, 20, 23, and 13 functions, respectively. MTQLBBO is surpassed by MOBBO, PBBO-G, DBBO, RCBBO-G, and POLBBO only on 4, 2, 2, 0, and 6 functions, respectively. As for convergence property, convergence curves from some benchmark functions are plotted in Figure 6. It shows that MTQLBBO converges faster than its competitors.

Figure 6: Convergence graphs of MTQLBBO and other five BBO variants for six representative test functions.
4.5. Comparison with Other OL-Based Algorithms

Due to the advantages of orthogonal learning (OL) strategy, it has been employed in many EAs to improve algorithms’ performance. In this subsection, the results achieved by MTQLBBO are compared with those achieved by other OL-based evolutionary algorithms, which include OXDE [33], ODE/2 [34], OPSO [35], OLPSO-G [36], OLPSO-L [36], OXBBO [37], and POLBBO [23]. OXDE and ODE/2 are proposed by embedding OL strategy into DE algorithm while OPSO, OLPSO-G, and OLPSO-L are developed by embedding OL strategy into PSO algorithm. Similar to MTQLBBO, OXBBO and POLBBO are developed by embedding OL strategy into BBO algorithm. The comparative results are summarized in Table 7. The results of these seven compared algorithms are all taken from their corresponding references except that the results of OPSO are obtained from [36] and the results of POLBBO are from the conducted experiment in the previous subsection.

Table 7: Comparison with other OL-based algorithms.

MTQLBBO outperforms OXDE, ODE/2, OPSO, OLPSO-G, OLPSO-L, OXBBO, and POLBBO on 6, 7, 10, 10, 7, 7, and 8 functions, respectively. The algorithm is surpassed by OXDE, ODE/2, OLPSO-L, OXBBO, and POLBBO on 2, 4, 2, 3, and 3 functions, respectively. For OPSO and OLPSO-G, the algorithm wins both of them on all functions. Overall, MTQLBBO provides relatively competitive solutions compared to other OL-based algorithms.

4.6. Comparison with Other Non-OL-Based EAs

The proposed algorithm MTQLBBO is further compared with some other non-OL-based EAs, which include CLPSO [38], CMA-ES [39], GL-25 [40], SLPSO [41], MGBGE [42], iMEABC [43], MSODPSO-G [44], and MSODPSO-L [44]. The comparative results are summarized in Table 8. CLPSO does not outperform MTQLBBO on any function. GL-25 and SLPSO each perform better than MTQLBBO on only two functions. CMA-ES, MGBGE, iMEABC, MSODPSO-G, and MSODPSO-L each perform better than MTQLBBO on only three functions. In addition, compared with all listed non-OL-based EAs, MTQLBBO performs best on 9 out of 13 functions, showing that MTQLBBO has superior performance in terms of solution quality.

Table 8: Comparison with other non-OL-based EAs.

5. Analysis of Modified Components

In this section, we analyze the effects of the modified components, that is, the multitopology-based migration strategy and the QOL strategy, on the performance of the algorithm. The influence of the three additional parameters introduced by these components is also investigated.

5.1. Effect on Convergence Property

The convergence property is one of the most important characteristics of an optimization algorithm. To analyze the effect each component has on it, two variants of MTQLBBO, namely MTBBO and QOLBBO, are constructed. They differ from MTQLBBO only in that MTBBO does not contain the QOL process, while QOLBBO uses BBO's original migration operator instead of the multitopology migration operator. Both are used to solve all benchmark functions, and their performance is compared with that of BBO and MTQLBBO. A representative convergence graph is plotted in Figure 7.

Figure 7: Convergence graphs of BBO, MTBBO, QOLBBO, and MTQLBBO on a representative function.

Comparing the convergence curves of BBO and MTBBO, we can see that MTBBO converges to a better solution than BBO, but its convergence speed in the early stages is slower. This shows that the multitopology migration strategy indeed improves exploration ability and helps the algorithm converge to a better solution; the price paid is a reduced convergence speed in the early iterative stages. QOLBBO converges faster and reaches a better solution than BBO by investigating the solution space more systematically; here the price paid is the extra computational overhead of the QOL process.

Moreover, it is worth noting that, unlike the slower early-stage convergence caused by adding the multitopology migration process to BBO, adding the QOL process to MTBBO accelerates convergence instead. The reason might be as follows. Good habitats guide the search directions of the population. In BBO, information in good habitats spreads to all habitats very rapidly, so the whole population can move quickly in a more promising direction. The multitopology reduces the speed of information spread and thus decelerates convergence. However, MTBBO maintains good population diversity. When the QOL process is applied to habitats with better diversity, the space that can potentially be investigated becomes much wider. Hence, the solution space is explored more thoroughly, and a better-quality solution is more likely to be obtained at each iteration, which accelerates convergence. This also shows that the mutually beneficial cooperation between the multitopology migration operator and the QOL process significantly helps MTQLBBO steer the search toward promising directions.

5.2. Effects on Solution Quality

To analyze the effects the modified components have on solution quality, both MTBBO and QOLBBO are used to optimize all test functions, and the results are summarized in Table 9. It can be seen from Table 9 that both MTBBO and QOLBBO perform significantly better than BBO. Specifically, MTBBO obtains better solutions than BBO on 21 functions except , , and , and QOLBBO obtains better solutions on 21 functions except , , and . These results show that both the multitopology migration strategy and the QOL strategy indeed contribute to enhancing solution quality. On the other hand, MTQLBBO achieves better solution quality than both MTBBO and QOLBBO. Compared with MTBBO, MTQLBBO wins on 23 functions and loses only on . Compared with QOLBBO, MTQLBBO wins on 22 functions except for and . This shows that the multitopology migration operator and the QOL strategy cooperate well to improve solution quality.

Table 9: Optimal results of MTBBO, QOLBBO, and MTQLBBO on all test functions (, ).
5.3. Influence of Three Additional Parameters

Although the multitopology migration operator can improve performance, its effect depends on the setting of the parameter . In this subsection, an experiment is conducted to investigate the effect of on the performance of MTQLBBO and to find an appropriate value for the parameter. is set to 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.8, and 1.0, respectively, and the experimental results are summarized in Table 10. The last row shows, for each column, the number of best solutions achieved when the parameter takes the corresponding value. The best solutions are marked in bold.

Table 10: Influence of the parameter (, ).

The value of has a strong influence on the experimental results, and there is a large gap between the best and worst solutions. All the worst solutions are obtained when the parameter equals 1.0, and the best solutions are obtained when it equals 0.1, 0.2, 0.3, or 0.4. The reason behind this is as follows. When equals 1, the migration process is implemented purely on the ring topology. Although population diversity is good, immigrating habitats can hardly obtain information from good habitats to improve their quality, and good habitats cannot guide the whole population toward a more promising direction. This leads to nearly pure random search and degrades solution quality. When is decreased to 0.8, more information from good habitats flows to the immigrating habitats, which benefits solution quality. For the same reason, better solutions are obtained when is decreased to 0.5, 0.4, or even smaller values. However, as the parameter continues to decrease, the mentioned defects of basic BBO gradually reappear, and the solution quality becomes worse again.

Most of the best solutions for unimodal functions, that is, 8 out of 12, are achieved when the parameter equals 0.2, while most of the best ones for multimodal functions, 9 out of 12, are obtained when it equals 0.3. The main reason might be as follows. A larger value of the parameter improves population diversity, which benefits multimodal functions, while a smaller value gives all habitats a more unified search direction toward the promising space, which helps exploit the optimal region more effectively. Considering the overall performance on both unimodal and multimodal functions, is recommended. Note that the experiment is conducted only in the case of and , so   = 0.3 is not necessarily the best setting when the population size or the number of dimensions takes other values. More experiments should be carried out to study these influences further; however, this is beyond the scope of this paper and is left for future research.
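The parameter sweep described above can be reproduced with a simple harness that tallies, per candidate value, how many benchmark functions it wins. Everything here (the `optimize` callback, the candidate values, the win-counting convention) is an illustrative assumption rather than the paper's experimental code:

```python
def sweep_parameter(optimize, values, functions, runs=30):
    """For each candidate parameter value, record the mean best fitness
    per benchmark function over independent runs, then count how often
    each value achieves the best mean (ties credit every tied value).

    optimize(func, value) is assumed to run one independent trial of the
    algorithm and return the best fitness found (lower is better).
    """
    means = {v: [sum(optimize(f, v) for _ in range(runs)) / runs
                 for f in functions]
             for v in values}
    wins = {v: 0 for v in values}
    for i in range(len(functions)):
        best = min(means[v][i] for v in values)
        for v in values:
            if means[v][i] == best:
                wins[v] += 1
    return means, wins
```

The `wins` tally corresponds to the last row of Table 10: the value with the most wins is the recommended default for that benchmark suite.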

As for the other two parameters, that is, the number of levels () and the number of factors () introduced by the QOL operator, higher values of and clearly give the algorithm better searching ability, but at the expense of extra overhead and computational time. Therefore, there are no universally best settings for and ; their choice depends on how the precision requirement is balanced against the acceptable computational time.
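For reference, the orthogonal arrays underlying orthogonal-design-based learning can be built with the standard construction of an L_N(Q^F) array with N = Q^J rows and F = (Q^J - 1)/(Q - 1) columns, for a prime number of levels Q. The sketch below is this generic textbook construction, not the paper's code; the quantization step that maps array entries to candidate points in the search space is omitted:

```python
def orthogonal_array(Q, J):
    """Construct the orthogonal array L_N(Q^F) with N = Q**J rows and
    F = (Q**J - 1)//(Q - 1) columns, entries in {0, ..., Q-1}.

    Every level appears equally often in each column, and every pair of
    columns contains each of the Q*Q level combinations equally often.
    """
    N = Q ** J
    F = (Q ** J - 1) // (Q - 1)
    a = [[0] * F for _ in range(N)]
    # Basic columns: digits of the row index in base Q.
    for k in range(1, J + 1):
        j = (Q ** (k - 1) - 1) // (Q - 1)          # 0-based column index
        for i in range(N):
            a[i][j] = (i // Q ** (J - k)) % Q
    # Non-basic columns: linear combinations of earlier columns mod Q.
    for k in range(2, J + 1):
        j = (Q ** (k - 1) - 1) // (Q - 1)
        for s in range(1, j + 1):
            for t in range(1, Q):
                col = j + (s - 1) * (Q - 1) + t    # 0-based column index
                for i in range(N):
                    a[i][col] = (a[i][s - 1] * t + a[i][j]) % Q
    return a
```

For example, `orthogonal_array(3, 2)` yields the familiar L9(3^4) array: 9 trial rows cover 4 three-level factors, instead of the 3^4 = 81 rows a full factorial design would require. This is what makes the levels/factors trade-off mentioned above a cost question: larger and mean a larger array and proportionally more fitness evaluations per QOL step.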

6. Conclusion and Future Work

A new variant of BBO, referred to as MTQLBBO, is proposed to solve global numerical optimization problems. On the one hand, the proposed multitopology migration operator avoids the homogenization of habitats and enhances exploration ability. On the other hand, the QOL operator investigates the solution space thoroughly and systematically, so that more competitive solutions can be obtained.

To verify the effectiveness of the proposed algorithm, benchmark functions with various characteristics are employed. The experiments include a basic comparison with BBO, comparisons with five BBO variants, seven OL-based algorithms, and six other evolutionary algorithms, as well as studies of the effect of problem dimension, the effects of the modified components, and the influence of the three additional parameters. The experimental results demonstrate that the improvement strategies indeed enhance the performance of BBO in terms of solution quality, convergence rate, and so on. Moreover, the comparisons show that MTQLBBO outperforms several other evolutionary algorithms on the majority of the selected test functions.

For future work, this paper can be extended in several directions. This work considers only unconstrained optimization problems; it can be extended to multiobjective or constrained optimization problems. Another line of work is applying the algorithm to practical engineering problems. Additionally, the influence of the parameter has been investigated only preliminarily; it can be studied further by varying the population size and the problem dimension. Besides, the best parameter setting is currently obtained by manual tuning, which is tedious, computationally expensive, and not necessarily accurate or efficient. Therefore, whether a new strategy such as adaptive parameter tuning can be applied is another future direction.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work is supported by the National Key Research and Development Program of China (2016YFB0900100).
