Abstract

Biogeography-based optimization (BBO) is an effective population-based optimization algorithm inspired by the theory of biogeography, but its exploration capability is inherently insufficient. To address this limitation, we propose a modified BBO with a local search mechanism (denoted MLBBO). In MLBBO, a modified migration operator, which can adopt more information from other habitats, is integrated into BBO to enhance the exploration ability. A local search mechanism is then used to complement the modified migration operator. Extensive experiments are conducted on 27 benchmark functions to show the effectiveness of the proposed algorithm. The simulation results are compared with the original BBO, DE, improved BBO algorithms, and other evolutionary algorithms. Finally, the performance of the modified migration operator and the local search mechanism is also discussed.

1. Introduction

In practical applications, many problems can be formulated as optimization problems, and several effective techniques have been developed for solving them. Traditional techniques [1, 2] are effective, but they require knowledge of properties of the problem, such as continuity or differentiability. In the past few decades, various evolutionary algorithms have emerged for solving complex optimization problems, for example, the genetic algorithm (GA) [3], evolutionary programming (EP) [4], particle swarm optimization (PSO) [5], ant colony optimization (ACO) [6], differential evolution (DE) [7], and biogeography-based optimization (BBO) [8]. Compared with traditional techniques, evolutionary algorithms can solve optimization problems without using such information as differentiability.

Biogeography-based optimization (BBO), proposed by Simon [8], is a new entrant in the domain of global optimization based on the theory of biogeography. BBO is developed by simulating the emigration and immigration of species between habitats in a multidimensional solution space, where each habitat represents a candidate solution. Just as species in biogeography migrate back and forth between habitats, features of candidate solutions are shared between solutions through the migration operator. Good solutions tend to share their features with poor solutions. However, the features of an originally good solution may then exist in several solutions, both good and poor, which may weaken the exploration ability.

Several scholars have worked on enhancing the exploration ability. To mention just a few examples, Gong et al. [9] presented a real-coded BBO (RCBBO) for continuous optimization problems by using a real-coded representation of the candidate solutions and integrating a mutation operator to improve the population diversity. Cai et al. [10] proposed a hybrid BBO (BBO-EP) by combining evolutionary programming with BBO. Boussaïd et al. [11] used BBO and DE to update the population alternately and gave a two-stage algorithm (DBBO). Ergezer et al. [12] employed opposition-based learning alongside BBO and developed oppositional BBO (OBBO). Ma et al. [13] incorporated a resampling technique into BBO for solving optimization problems in noisy environments. Lohokare et al. [14] proposed a new variant of BBO, called aBBOmDE, in which a modified mutation operator and a clear-duplicate operator are used to accelerate convergence, and a modified DE (mDE) is embedded as a neighborhood search operator to improve the fitness. Gong et al. [15] effectively combined the exploration of DE with the exploitation of BBO and presented a hybrid DE with BBO, namely DE/BBO, for global numerical optimization problems. Li et al. [16] designed a perturbing migration operator based on the sinusoidal migration model and advanced perturbing biogeography-based optimization (PBBO). Inspired by multiparent crossover in evolutionary algorithms [17], Li and Yin [18] generalized the migration operation based on multiparent crossover and presented multioperator biogeography-based optimization (MOBBO). These algorithms have achieved excellent performance.

It is well known that exploration and exploitation are the two cornerstones of a population-based optimization algorithm. However, exploration and exploitation are contradictory to each other [19]. How to enhance the exploration ability is therefore a key problem in improving the performance of BBO. To this end, inspired by DE [7], in this paper we modify the migration operator by applying a modified mutation strategy to enhance the exploration ability. Meanwhile, a local search mechanism is introduced into the biogeography-based algorithm to complement the modified migration operator. We name this algorithm modified biogeography-based optimization with local search mechanism (denoted MLBBO). Analysis and experimental results show that MLBBO with appropriate parameters achieves excellent performance.

The rest of this paper is organized as follows. In Section 2, we review the basic BBO algorithm. The modified BBO algorithm, called MLBBO, is presented and analyzed in Section 3, and experimental results are presented and discussed in Section 4. Finally, conclusions are drawn and future work is outlined in Section 5.

2. Review of BBO

Biogeography-based optimization (BBO) [8] is an emerging population-based algorithm proposed by mimicking species migration in natural biogeography. In the BBO algorithm, each possible solution is a habitat, and the features that characterize its suitability are called suitability index variables (SIVs) [10]. The goodness of each solution is called its habitat suitability index (HSI). In BBO, a habitat H_i is a vector of D values (SIVs) that is driven toward the global optimum through a migration step and a mutation step. In the original migration, information is shared among habitats based on the immigration rate and emigration rate, which are linear functions of the number of species in the habitat. The linear model can be calculated as follows:

$$\lambda_i = I\left(1 - \frac{k_i}{n}\right), \qquad \mu_i = E\,\frac{k_i}{n},$$

where E is the maximum possible emigration rate, I is the maximum possible immigration rate, n is the maximum number of species, and k_i is the number of species of the ith solution.
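As a concrete illustration (not from the original paper), the following Python sketch computes the linear rates above, assuming the habitats are ranked best to worst so that the species count k_i decreases from n to 1; all names are illustrative.

```python
import numpy as np

def linear_migration_rates(num_habitats, I=1.0, E=1.0):
    """Linear BBO migration model: lambda_i = I*(1 - k_i/n), mu_i = E*k_i/n."""
    n = num_habitats
    k = np.arange(n, 0, -1)          # species counts, best habitat first
    lam = I * (1.0 - k / n)          # immigration rates (low for good habitats)
    mu = E * (k / n)                 # emigration rates (high for good habitats)
    return lam, mu
```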

The pseudocode of the migration operator [8] is loosely described in Algorithm 1, where NP is the population size.

(1) Begin
(2)  For i = 1 to NP
(3)   Select H_i in the light of immigration rate λ_i
(4)   If H_i is selected
(5)    For j = 1 to NP
(6)     Select H_j according to emigration rate μ_j
(7)     If rand(0,1) < μ_j
(8)      H_i(SIV) ← H_j(SIV)
(9)     EndIf
(10)   EndFor
(11)  EndIf
(12) EndFor
(13) End
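The following Python sketch transcribes Algorithm 1 under one common interpretation; since the pseudocode does not say which SIV is copied, copying one randomly chosen SIV per accepted emigration is an assumption, and the function and variable names are ours.

```python
import numpy as np

def bbo_migration(pop, lam, mu, rng=None):
    """Original BBO migration (Algorithm 1 sketch); pop is an (NP, D) array."""
    rng = np.random.default_rng() if rng is None else rng
    NP, D = pop.shape
    new_pop = pop.copy()
    for i in range(NP):
        if rng.random() < lam[i]:            # H_i is selected for immigration
            for j in range(NP):
                if rng.random() < mu[j]:     # emigration from H_j is accepted
                    d = rng.integers(D)      # copy one randomly chosen SIV
                    new_pop[i, d] = pop[j, d]
    return new_pop
```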

Because of ravenous predators or other natural catastrophes [20], the ecological equilibrium may be destroyed, which can drastically change the species count and HSI of habitats. This phenomenon can be modeled as SIV mutation, and species count probabilities are used to determine mutation rates. The mutation probability m(S_i) is expressed as follows:

$$m(S_i) = m_{\max}\left(1 - \frac{P_{S_i}}{P_{\max}}\right),$$

where m_max is a user-defined parameter, P_{S_i} is the species count probability of the ith habitat, and P_max = max_i P_{S_i}.
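A minimal sketch of this mutation-rate computation, assuming the species-count probabilities P_{S_i} have already been obtained (e.g., from the species-count model in [8]); the function name is ours.

```python
import numpy as np

def mutation_rates(species_probs, m_max=0.001):
    """m(S_i) = m_max * (1 - P_{S_i} / P_max)."""
    p = np.asarray(species_probs, dtype=float)
    return m_max * (1.0 - p / p.max())
```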

In the original mutation operator, each SIV of a solution is probabilistically replaced with a new feature randomly generated in the whole solution space, which tends to increase the diversity of the population. Gong et al. [9] modified this mutation operator and suggested three mutation operators for real space: the Gaussian, Cauchy, and Lévy mutation operators.

3. Modified Biogeography-Based Optimization with Local Search Mechanism

3.1. Modified Migration Operator

The migration operator is a key operator of the original BBO and can improve its performance. However, a habitat is modified by the migration operator simply by replacing one of its SIVs with that of a similar habitat, which means that the new habitat absorbs little information from the other habitats. Therefore, the migration operator lacks exploration ability.

In order to absorb more information from other habitats, inspired by DE [7], the migration operator is modified by using a DE mutation strategy. The DE/best/2 mutation formula is used in this paper:

$$v_i = x_{best} + F\,(x_{r_1} - x_{r_2}) + F\,(x_{r_3} - x_{r_4}), \tag{3}$$

where x_best is the best solution, F is the scaling factor, v_i is the ith trial vector, and x_{r_1}, x_{r_2}, x_{r_3}, x_{r_4} are four mutually different solutions.

BBO has acquired excellent performance because the original migration operator preserves good features in the population, preventing them from being lost or destroyed during the evolution process. In order to keep the exploitation ability of BBO, we modify formula (3) as follows:

$$H_i(SIV) = H_{best}(SIV) + F\,\big(H_{r_1}(SIV) - H_{r_2}(SIV)\big) + F\,\big(H_{r_3}(SIV) - H_{r_4}(SIV)\big), \tag{4}$$

where H_best is the best habitat, H_i is the immigration habitat, and H_{r_1}, H_{r_2}, H_{r_3}, H_{r_4} are four mutually different habitats. We can see from formula (4) that two differences are added to the new habitat and more information from other habitats is adopted, which can enhance the exploration ability. Meanwhile, it should be noted that the parameter F plays an important role in balancing the exploration ability and exploitation ability. When F is 0, formula (4) absorbs information only from the best habitat, so its exploitation ability is enhanced. As F increases, formula (4) absorbs more information from the other habitats and its exploration ability increases, but its exploitation ability decreases to a certain degree. Of course, this silver lining has its cloud. Therefore, F is set to 0.5 in this paper.
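A sketch of the habitat-level update of formula (4) as reconstructed above; how the four mutually different indices are drawn (here: uniformly, excluding i) is an assumption.

```python
import numpy as np

def bbo_best_2(pop, best, i, F=0.5, rng=None):
    """Formula (4): H_i = H_best + F*(H_r1 - H_r2) + F*(H_r3 - H_r4)."""
    rng = np.random.default_rng() if rng is None else rng
    NP = pop.shape[0]
    candidates = [k for k in range(NP) if k != i]          # exclude the target habitat
    r1, r2, r3, r4 = rng.choice(candidates, size=4, replace=False)
    return best + F * (pop[r1] - pop[r2]) + F * (pop[r3] - pop[r4])
```

With F = 0 this update collapses to copying the best habitat, matching the exploitation-only case discussed above.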

In natural biogeography, the migration model may be nonlinear. Hence, Ma [21] introduced six migration models. Comparison results show that the sinusoidal migration model is the best one. The sinusoidal migration model can be calculated as follows:

$$\lambda_k = \frac{I}{2}\left(\cos\frac{k\pi}{n} + 1\right), \qquad \mu_k = \frac{E}{2}\left(-\cos\frac{k\pi}{n} + 1\right),$$

where k is the species count of the habitat and I, E, and n are defined as above.
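A sketch of the sinusoidal rates as reconstructed above, using the same best-to-worst ranking convention as the linear-model sketch in Section 2; names are ours.

```python
import numpy as np

def sinusoidal_migration_rates(num_habitats, I=1.0, E=1.0):
    """Sinusoidal model: lambda_k = I/2*(cos(k*pi/n)+1), mu_k = E/2*(1-cos(k*pi/n))."""
    n = num_habitats
    k = np.arange(n, 0, -1)                      # species counts, best habitat first
    lam = (I / 2.0) * (np.cos(k * np.pi / n) + 1.0)
    mu = (E / 2.0) * (-np.cos(k * np.pi / n) + 1.0)
    return lam, mu
```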

In order to enhance the exploration ability, we modify the migration operator described in Algorithm 1 by combining the original migration formula with formula (4). The modified migration operator is described in Algorithm 2.

(1)  Begin
(2)  For i = 1 to NP
(3)   Select H_i according to immigration rate λ_i using the sinusoidal migration model
(4)   If H_i is selected
(5)    For j = 1 to NP
(6)     Generate four mutually different integers r1, r2, r3, r4 in {1, …, NP}
(7)     Select H_j according to emigration rate μ_j
(8)     If rand(0,1) < μ_j
(9)      H_i(SIV) ← H_j(SIV)
(10)    Else
(11)     H_i(SIV) ← H_best(SIV) + F(H_{r1}(SIV) − H_{r2}(SIV)) + F(H_{r3}(SIV) − H_{r4}(SIV))
(12)    EndIf
(13)   EndFor
(14)  EndIf
(15) EndFor
(16) End
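The sketch below transcribes Algorithm 2 as reconstructed above: with probability μ_j the SIV is copied from H_j as in the original operator, and otherwise the habitat is rewritten by formula (4). The branch assignment, the per-SIV copy, and the assumption that the population is sorted best-first are ours.

```python
import numpy as np

def modified_migration(pop, lam, mu, F=0.5, rng=None):
    """Modified migration operator (Algorithm 2 sketch); pop is sorted best-first."""
    rng = np.random.default_rng() if rng is None else rng
    NP, D = pop.shape
    best = pop[0].copy()
    new_pop = pop.copy()
    for i in range(NP):
        if rng.random() < lam[i]:                          # H_i selected for immigration
            for j in range(NP):
                cand = [k for k in range(NP) if k != i]
                r1, r2, r3, r4 = rng.choice(cand, size=4, replace=False)
                if rng.random() < mu[j]:                   # original migration from H_j
                    d = rng.integers(D)
                    new_pop[i, d] = pop[j, d]
                else:                                      # BBO/best/2, formula (4)
                    new_pop[i] = (best + F * (pop[r1] - pop[r2])
                                       + F * (pop[r3] - pop[r4]))
    return new_pop
```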

We can see from Algorithm 2 that immigration habitats accept information not only from the best habitat but also from BBO/best/2. On the one hand, information from high-HSI habitats allows the operator to maintain a powerful exploitation ability. On the other hand, information from BBO/best/2 can enhance the exploration ability. From this, we know that the modified migration operator can preserve the exploitation ability and enhance the exploration ability simultaneously.

3.2. Mutation with Cauchy Mutation Operator

Cataclysmic events can drastically change the HSI of a natural habitat and cause the species count to differ from its equilibrium value. This phenomenon can be modeled in BBO as SIV mutation, and species count probabilities are used to determine the mutation rates.

In order to enhance the exploration ability of BBO, a modified mutation operator, the Cauchy mutation operator, is integrated into BBO. The probability density function of the Cauchy distribution [9] is

$$f(x) = \frac{1}{\pi}\,\frac{t}{t^2 + x^2}, \qquad -\infty < x < \infty,$$

where t > 0 is a scale parameter.

The Cauchy mutation (t = 1) in this paper can be described as

$$H_i(SIV) = H_i(SIV) + C(1),$$

where C(1) is a random Cauchy value with parameter t equal to 1. In this paper, we use the Cauchy distribution to update the solution, as described in Algorithm 3.

(1) Begin
(2) For i = 1 to NP
(3)  Compute the mutation probability p_i
(4)  Select variable H_i(SIV) with probability p_i
(5)  If H_i(SIV) is selected
(6)   H_i(SIV) ← H_i(SIV) + C(1)
(7)  EndIf
(8) EndFor
(9) End
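A sketch of the Cauchy mutation stage (Algorithm 3), assuming the additive update reconstructed above and that one randomly chosen SIV of a selected habitat is perturbed; NumPy's standard_cauchy draws from the Cauchy distribution with t = 1.

```python
import numpy as np

def cauchy_mutation(pop, mut_probs, rng=None):
    """Algorithm 3 sketch: perturb selected habitats with a standard Cauchy variate."""
    rng = np.random.default_rng() if rng is None else rng
    NP, D = pop.shape
    new_pop = pop.copy()
    for i in range(NP):
        if rng.random() < mut_probs[i]:          # habitat selected with probability p_i
            d = rng.integers(D)                  # assumption: mutate one random SIV
            new_pop[i, d] += rng.standard_cauchy()
    return new_pop
```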

3.3. Local Search Mechanism

As Simon has pointed out, the convergence speed is an obstacle for real applications, so finding mechanisms to speed up BBO is an important area for further research. Therefore, a local search mechanism is proposed to accelerate the convergence speed of BBO. In order to remedy the insufficiency of the modified migration operator, which mainly updates the worse habitats selected by the immigration rate, the local search mechanism is designed mainly for the better habitats in the first half of the population.

For each habitat H_i in the first half of the population, with probability p_ls a trial habitat is generated as

$$H_i(SIV) = H_i(SIV) + \alpha\,\big(H_r(SIV) - H_i(SIV)\big),$$

where H_r denotes a habitat randomly selected from the population, p_ls denotes the local search probability, and α is the strength of the local search.
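Since the exact local search update cannot be fully recovered from the text, the following sketch only mirrors the description above: with probability p_ls, a habitat in the first half of the (best-first sorted) population is moved toward a randomly selected habitat H_r with strength α. The update form is an assumption.

```python
import numpy as np

def local_search(pop, p_ls=0.2, alpha=0.8, rng=None):
    """Local search stage (sketch) applied to the first half of the population."""
    rng = np.random.default_rng() if rng is None else rng
    NP = pop.shape[0]
    new_pop = pop.copy()
    for i in range(NP // 2):
        if rng.random() < p_ls:                              # local search probability
            r = rng.integers(NP)                             # randomly selected habitat H_r
            new_pop[i] = pop[i] + alpha * (pop[r] - pop[i])  # assumed update form
    return new_pop
```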

3.4. Selection Operator

The selection operator is used to select the better habitats to be preserved in the next generation, which drives the turnover of the population and retains good habitats. In this paper, a greedy selection operator is used. In other words, the original habitat H_i is replaced by its corresponding trial habitat U_i if the HSI of the trial habitat is better than that of the original habitat:

$$H_i = \begin{cases} U_i, & \text{if } HSI(U_i) \text{ is better than } HSI(H_i), \\ H_i, & \text{otherwise.} \end{cases}$$
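A small sketch of the greedy selection between the current and trial populations, assuming minimization so that a smaller HSI (objective value) is better.

```python
import numpy as np

def greedy_selection(pop, fit, trial, trial_fit):
    """Keep each trial habitat only where its objective value is better."""
    better = trial_fit < fit                        # minimization: smaller is better
    new_pop = np.where(better[:, None], trial, pop)
    new_fit = np.where(better, trial_fit, fit)
    return new_pop, new_fit
```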

3.5. Main Procedure of MLBBO

Insufficient exploration ability is the main factor that degrades the performance of the BBO algorithm. In order to improve the performance of BBO, a modified BBO algorithm, called MLBBO, is proposed for global optimization problems in the continuous domain. In MLBBO, the original migration operator and mutation operator are modified to enhance the exploration ability. Moreover, a local search mechanism and a selection operator are integrated into MLBBO to complement the migration operator. The pseudocode of MLBBO is described in Algorithm 4, where NP is the population size.

(1) Begin
(2)  Initialize the population Pop with NP habitats randomly
(3)  Evaluate the fitness (HSI) for each habitat in Pop
(4)  While the halting criteria are not satisfied do
(5)   Sort all the habitats from best to worst in line with their fitness values
(6)   Map the fitness to the species count S for each habitat
(7)   Calculate the immigration rate λ and emigration rate μ according to the sinusoidal migration model
(8)   For i = 1 to NP    /*migration stage*/
(9)    Select H_i according to immigration rate λ_i using the sinusoidal migration model
(10)   If H_i is selected
(11)    For j = 1 to NP
(12)     Generate four mutually different integers r1, r2, r3, r4 in {1, …, NP}
(13)     Select H_j according to emigration rate μ_j
(14)     If rand(0,1) < μ_j
(15)      H_i(SIV) ← H_j(SIV)
(16)     Else
(17)      H_i(SIV) ← H_best(SIV) + F(H_{r1}(SIV) − H_{r2}(SIV)) + F(H_{r3}(SIV) − H_{r4}(SIV))
(18)     EndIf
(19)    EndFor
(20)   EndIf
(21)  EndFor                        /*end of migration stage*/
(22)  For i = 1 to NP   /*mutation stage*/
(23)   Compute the mutation probability p_i
(24)   Select variable H_i(SIV) with probability p_i
(25)   If H_i(SIV) is selected
(26)    H_i(SIV) ← H_i(SIV) + C(1)
(27)   EndIf
(28)  EndFor                        /*end of mutation stage*/
(29)  For i = 1 to NP/2  /*local search stage*/
(30)   If rand(0,1) < p_ls
(31)    Select a habitat H_r randomly
(32)    H_i(SIV) ← H_i(SIV) + α·(H_r(SIV) − H_i(SIV))
(33)   EndIf
(34)  EndFor                        /*end of local search stage*/
(35)  Make sure each habitat is legal based on the boundary constraints
(36)  Use a newly generated habitat to replace any duplicate
(37)  Evaluate the fitness (HSI) for each trial habitat in the new population Pop
(38)  For i = 1 to NP    /*selection stage*/
(39)   If HSI(U_i) is better than HSI(H_i)
(40)    H_i ← U_i
(41)   EndIf
(42)  EndFor                        /*end of selection stage*/
(43) EndWhile
(44) End
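Finally, a condensed sketch of the overall MLBBO loop (Algorithm 4) that wires together the helper sketches given earlier (sinusoidal rates, modified migration, Cauchy mutation, local search, greedy selection). The boundary handling, the omission of the duplicate-replacement step, and the rank-based proxy used for the species-count probabilities are simplifications and assumptions on our part.

```python
import numpy as np

def mlbbo(obj, bounds, NP=100, F=0.5, p_ls=0.2, alpha=0.8,
          m_max=0.001, max_nffes=150_000, seed=0):
    """Sketch of MLBBO (Algorithm 4) minimizing obj over the box `bounds`."""
    rng = np.random.default_rng(seed)
    low, high = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    D = low.size
    pop = rng.uniform(low, high, size=(NP, D))
    fit = np.array([obj(x) for x in pop])
    nffes = NP

    while nffes < max_nffes:
        order = np.argsort(fit)                            # sort best to worst
        pop, fit = pop[order], fit[order]
        lam, mu = sinusoidal_migration_rates(NP)           # migration rates
        # crude rank-based proxy for species-count probabilities (assumption):
        # mid-ranked habitats most probable, extreme ranks least probable
        probs = 1.0 - np.abs(np.linspace(-1.0, 1.0, NP))
        m_rates = mutation_rates(probs, m_max)

        trial = modified_migration(pop, lam, mu, F, rng)   # migration stage
        trial = cauchy_mutation(trial, m_rates, rng)       # mutation stage
        trial = local_search(trial, p_ls, alpha, rng)      # local search stage
        trial = np.clip(trial, low, high)                  # boundary constraints

        trial_fit = np.array([obj(x) for x in trial])
        nffes += NP
        pop, fit = greedy_selection(pop, fit, trial, trial_fit)  # selection stage

    best = int(np.argmin(fit))
    return pop[best], fit[best]
```

For instance, mlbbo(lambda x: float(np.sum(x**2)), (np.full(30, -100.0), np.full(30, 100.0)), max_nffes=30_000) would run this sketch on a 30-dimensional Sphere function.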

4. Experimental Study

4.1. Experimental Setting

In this section, several experiments are carried out on 27 benchmark functions to validate the performance of the proposed algorithm. These functions consist of 13 standard benchmark functions [22] (f1–f13) and 14 complex functions (F1–F14) from the set of test problems of IEEE CEC 2005 [23]. All the test functions used in our experiments are high-dimensional functions with adjustable dimension; they are briefly summarized in Table 1, and a more detailed description of each function is given in Table 8 (see also [22, 23]).

For a fair comparison, we have chosen a reasonable set of parameter values. For all experimental simulations in this paper, we use the following parameter settings unless a change is mentioned:
(1) population size: NP = 100;
(2) maximum immigration rate: I = 1;
(3) maximum emigration rate: E = 1;
(4) mutation probability: 0.001;
(5) value to reach (VTR): 10^−6, except for f7 and F6–F14, for which VTR = 10^−2;
(6) maximum number of fitness function evaluations (MaxNFFEs): 150,000 for f1, f6, f10, f12, and f13; 200,000 for f2 and f11; 300,000 for f7, f8, f9, and F1–F14; and 500,000 for f3, f4, and f5.

In our experiments, each function is optimized over 30 independent runs. All experiments are carried out on a computer with a 1.86 GHz Intel Core processor and 1 GB of RAM, running Windows XP and Matlab 7.6.

4.2. Effect of the Strength α and Rate p_ls on the Performance of MLBBO

In this section, we investigate the impact of the local search strength α and the search probability p_ls on the proposed algorithm. In order to analyze the performance more systematically, eight functions are chosen from Table 1: Sphere, Schwefel 1.2, Schwefel, Penalized2, F1, F3, F10, and F13, which cover standard unimodal, standard multimodal, shifted, shifted rotated, and expanded functions. The search strength α is set to 0.3, 0.5, 0.8, and 1.1, and the search rate p_ls to 0, 0.2, 0.4, 0.6, 0.8, and 1. Note that p_ls equal to 0 means that the local search mechanism is not conducted. Hence, in our experiments, α and p_ls are tested in 24 combinations. The MLBBO algorithm executes 30 independent runs on each function, and convergence curves of the average fitness values (which are function error values) are plotted in Figure 1.

From Figure 1, we can observe that different search rates yield different results. When p_ls is 0, the results are worse than the others, which means that the local search mechanism can improve the performance of the proposed algorithm. When p_ls is 0.4, we obtain a faster convergence speed and better results on the Sphere function. For the other seven test functions, better results are obtained when the search rate is 0.2 or 0.4. Moreover, the performance of MLBBO is most sensitive to the search rate on function F8.

From Figure 1, we can also see that the search strength α has little effect on the results as a whole. On closer inspection, the search strength α affects the results when the search rate is large, especially when p_ls equals 0.8 or 1.

Balancing both considerations, the search strength α and the search rate p_ls are set to 0.8 and 0.2, respectively.

4.3. Comparisons with BBO and DE/Best/2/Bin

For evaluating the performance of the algorithms, we first compare MLBBO with BBO [8], which is consistent with the original BBO except for the solution representation (real code), and with DE/best/2/bin [7] (abbreviated as DE). For a fair comparison, the three algorithms are run on the 27 functions with the same parameter settings as above. The average and standard deviation of the fitness values (which are function error values) after 30 Monte Carlo simulations are summarized in Table 2. The number of successful runs and the average number of fitness function evaluations (MeanFEs) needed in each run in which the VTR is reached within the MaxNFFEs are also recorded in Table 2. Furthermore, the results of MLBBO versus BBO and of MLBBO versus DE are compared with a statistical paired t-test [24, 25], which is used to determine whether two paired solution sets differ from each other in a significant way. Convergence curves of the mean fitness values and boxplots over 30 independent runs are shown in Figures 2 and 3, respectively.

From Table 2, it is seen that MLBBO is significantly better than BBO on 24 out of the 27 test functions, and MLBBO is outperformed by BBO on the remaining 3 functions. When compared with DE, MLBBO gives significantly better performance on 16 functions. On 9 functions, there are no significant differences between MLBBO and DE based on the two-tailed t-test. On the remaining 2 functions, DE is better than MLBBO. Moreover, MLBBO reaches the VTR in all 30 runs within the MaxNFFEs for 16 out of the 27 functions. In particular, for the first 13 standard benchmark functions, MLBBO obtains 30 successful runs on 12 functions, the exception being the Griewank function, on which MLBBO obtains 22 successful runs. DE and BBO obtain 30 successful runs on 9 and 1 of the 27 functions, respectively. From Table 2, we can also find that MLBBO needs fewer or similar NFFEs than DE and BBO in most of the successful runs, which means that MLBBO has a faster convergence speed than DE and BBO. The convergence speed and the robustness are also shown in Figures 2 and 3.

From the above results, we know that the overall performance of MLBBO is significantly better than that of DE and BBO, which indicates that MLBBO does not merely combine the exploration ability of DE with the exploitation ability of BBO but also balances both search abilities very well.

4.4. Comparisons with Improved BBO Algorithms

In this section, the performance of MLBBO is compared with that of improved BBO algorithms: (a) oppositional BBO (OBBO) [12], which employs opposition-based learning (OBL) alongside BBO's migration rates; (b) multioperator BBO (MOBBO) [18], a modified BBO with a multioperator migration operator; (c) perturbing BBO (PBBO) [16], a modified BBO with a perturbing migration operator; and (d) real-coded BBO (RCBBO) [9]. For a fair comparison, the five BBO algorithms are run on the 27 functions with the same parameter settings as above. The mean and standard deviation of the fitness values are summarized in Table 3. Further, the results of MLBBO versus OBBO, PBBO, MOBBO, and RCBBO are compared using a statistical paired t-test. Convergence curves of the mean fitness values over 30 independent runs are plotted in Figure 4.

From Table 3, we can see that MLBBO outperforms OBBO on 21 out of the 27 test functions, whereas OBBO surpasses MLBBO on 3 functions. MLBBO obtains fitness values similar to those of OBBO on 3 functions based on the two-tailed t-test. Similar results are obtained when comparing MLBBO and RCBBO. When compared with PBBO and MOBBO, MLBBO is significantly better than both PBBO and MOBBO on 13 functions. MLBBO obtains results similar to PBBO and MOBBO on 10 and 11 functions, respectively. PBBO surpasses MLBBO on only 4 functions, namely the Quartic function, the shifted rotated Ackley function, the expanded extended Griewank plus Rosenbrock function, and the shifted rotated expanded Schaffer F6 function. MOBBO outperforms MLBBO on only 3 functions, namely the Quartic function, the shifted rotated Ackley function, and the expanded extended Griewank plus Rosenbrock function. Furthermore, the overall performance of MLBBO is better than that of PBBO and MOBBO and much better than that of OBBO and RCBBO.

Furthermore, Figure 4 shows results consistent with Table 3. We can see that the convergence speeds of PBBO and MOBBO are faster than the others in the early stages owing to their powerful exploitation ability, but their convergence slows in later iterations. However, MLBBO converges rapidly to satisfactory solutions. From this comparison, we know that the operators used in MLBBO can enhance the exploration ability.

4.5. Comparisons with Hybrid Algorithms

In order to further validate the performance of the proposed algorithm (MLBBO), we compare it with two hybrid algorithms: OXDE [26] and DE/BBO [15]. The mean and standard deviation of the fitness values on the 27 test functions are listed in Table 4, together with the average NFFEs and the number of successful runs. Convergence curves of the mean fitness values over 30 independent runs are plotted in Figure 5.

From Table 4, we can see that the results of MLBBO are significantly better than those of OXDE on 22 out of the 27 high-dimensional functions, while OXDE outperforms MLBBO on only 3 functions. When compared with DE/BBO, MLBBO outperforms DE/BBO on 14 of the 27 test functions, whereas DE/BBO surpasses MLBBO on 5 functions. For the remaining 8 functions, there is no significant difference based on the two-tailed t-test. MLBBO, DE/BBO, and OXDE reach the VTR in all 30 runs within the MaxNFFEs for 16, 12, and 3 test functions, respectively. From this, we conclude that MLBBO is more robust than DE/BBO and OXDE, which is also confirmed by Figure 5.

Furthermore, the average NFFEs that MLBBO needs for a successful run is lower than that of DE/BBO on most functions, which is also indicated in Figure 5.

4.6. Effect of Dimensionality

In order to investigate the influence of scalability on the performance of MLBBO, we carry out a scalability study with DE, the original BBO, DE/BBO [15], and aBBOmDE [14] on the scalable functions. In this study, the first 13 standard benchmark functions are tested at Dim = 10, 50, 100, and 200, and the other test functions at Dim = 10, 50, and 100. For Dim = 10, the population size is set to 50; for the other dimensions, it is set to 100. MaxNFFEs is set to Dim × 10,000. All the functions are optimized over 30 independent runs. The average and standard deviation of the fitness values of the compared algorithms are summarized in Table 5 for the standard test functions f1–f13 and in Table 6 for the CEC 2005 test functions F1–F14. The convergence curves of the mean fitness values over 30 independent runs are shown in Figure 6. Furthermore, the results of the statistical paired t-test between MLBBO and the others are also listed in Tables 5 and 6.

Note that the results of DE/BBO [15] for the standard functions f1–f13 are taken from the corresponding paper and were obtained over 50 independent runs. The results of aBBOmDE [14] for all the functions are taken directly from that paper; they were obtained with a different population size (for Dim = 10, the population size equals 30, and for the other dimensions it is set to Dim) over 50 independent runs.

For the standard functions f1–f13, we can see from Table 5 that the overall results of the five compared algorithms deteriorate as the problem dimension increases, since higher dimensionality sometimes makes the algorithms unable to find the global optimum within the limited MaxNFFEs. Similar to the case of Dim = 30, MLBBO outperforms DE on the majority of functions and overcomes BBO on almost all the functions at every dimension; MLBBO is better than or similar to DE/BBO on most of the functions.

When compared with aBBOmDE, we can see from Table 5 that MLBBO is better than or similar to aBBOmDE on most of the functions as well. Looking at the results carefully, we can recognize that MLBBO is worse than aBBOmDE in precision on some functions, but the robustness of MLBBO is clearly better. For example, both MLBBO and aBBOmDE find good solutions for functions f8 and f9 when Dim = 10 and 50; aBBOmDE fails to solve f8 and f9 when Dim increases to 100 and 200, but MLBBO can still solve them. This may be because the modified DE mutation is used in aBBOmDE as a local search to accelerate the convergence speed, whereas in MLBBO it is used to complement the migration operator and enhance the exploration ability.

For the CEC 2005 test functions F1–F14, we can see from Table 6 that MLBBO is better than or similar to DE, BBO, and DE/BBO on most of the functions. Moreover, MLBBO outperforms aBBOmDE on most of the functions for both Dim = 10 and Dim = 50.

From this comparison, we conclude that the overall performance of MLBBO is better than that of the others, especially on the CEC 2005 test functions.

Figure 7 shows the NFFEs required to reach the VTR at different dimensions for the 13 standard benchmark functions (f1–f13) and 6 CEC 2005 test functions (F1, F2, F4, F6, F7, and F9). From Figure 7, we can see that as the dimension increases, the required NFFEs increases exponentially, which may be due to an exponential increase in the number of local minima with dimension. Meanwhile, comparing Figures 7(a) and 7(b), we find that the CEC 2005 test functions need more NFFEs to reach the VTR than the standard benchmark functions do, which may be because the CEC 2005 functions are more complicated. For example, for functions f3 (Schwefel 1.2), F2 (shifted Schwefel 1.2), and F4 (shifted Schwefel 1.2 with noise in fitness), f3, F2, and F4 need 140,486.7, 157,596.7, and 289,960 MeanFEs, respectively, to reach the VTR (10^−6). Obviously, F4 is more complicated than f3 and F2, so F4 needs 289,960 MeanFEs to reach the VTR, which is larger than 140,486.7 and 157,596.7.

4.7. Analysis of the Performance of Operators in MLBBO

As the analysis above shows, the proposed variant of the BBO algorithm achieves excellent performance. In this section, we analyze the contribution of the modified migration operator and the local search mechanism in MLBBO. In order to analyze the performance fairly and systematically, we compare four cases: MLBBO with both the modified migration operator and the local search mechanism, MLBBO with the modified migration operator but without the local search mechanism, MLBBO without the modified migration operator but with the local search mechanism, and MLBBO without both (which is, in fact, BBO). The four algorithms are denoted MLBBO, MLBBO2, MLBBO3, and MLBBO4, respectively. The experiments are conducted with 30 independent runs, with the parameter settings of Section 4.1. The mean and standard deviation of the fitness values are summarized in Table 7, together with the number of successful runs. The results of MLBBO versus MLBBO2, MLBBO3, and MLBBO4 are compared using the two-tailed t-test. Convergence curves of the mean fitness values over 30 independent runs are shown in Figure 8.

From Table 7, we can find that MLBBO is significantly better than MLBBO2 on 15 out of the 27 test functions, whereas MLBBO2 outperforms MLBBO on 5 functions. For the remaining 7 functions, there are no significant differences based on the two-tailed t-test. From this, we know that the local search mechanism plays an important role in improving the search ability, especially the solution precision, which can be seen by looking carefully at the fitness values. For example, on functions f1–f10, both MLBBO and MLBBO2 reach the optimum, but the fitness values obtained by MLBBO are better than those obtained by MLBBO2 by several orders of magnitude. When compared with MLBBO3, MLBBO outperforms MLBBO3 on 19 out of the 27 test functions, while MLBBO is outperformed by MLBBO3 on only 2 functions. From this, we know that the modified migration operator plays a key role in the optimization process and is better than the original migration operator. Similar conclusions can be drawn by comparing MLBBO2 with MLBBO4 and MLBBO3 with MLBBO4.

Furthermore, if we compare the four algorithms together, the results of MLBBO4 are worse than those of MLBBO, especially on multimodal functions, which means that MLBBO has a more powerful exploration ability than MLBBO4.

5. Conclusions and Future Work

In this paper, we have proposed a modified BBO algorithm, called MLBBO, for solving global optimization problems. For the original BBO algorithm, limited exploration ability is a barrier to performance. We propose a modified migration operator that combines the original migration formula with a new one, which can absorb more information from other habitats and thus enhance the exploration ability. Moreover, a local search mechanism is designed and used in the modified BBO to complement the modified migration operator. The proposed algorithm is compared with other improved BBO algorithms and evolutionary algorithms, and the results show that it is superior to or at least highly competitive with the others.

The analysis shows that MLBBO possesses better performance. In future work, we will investigate the performance of MLBBO on large-scale functions and will further investigate the intrinsic relationship between population diversity and exploration ability.

6. Appendix

See Table 8.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to thank the area editor and the anonymous reviewers for their valuable comments and suggestions. This work is supported in part by the National Natural Science Foundation of China (Nos. 11101101, 11226219, 61373174, and 11361019), the Natural Science Foundation of Guangxi (Nos. 2013GXNSFBA019008 and 2013GXNSFAA019003), the Science and Technology Plan Project of the Science and Technology Department of Hunan (No. 2013FJ3083), and the Program to Sponsor Teams for Innovation in the Construction of Talent Highlands in Guangxi Institutions of Higher Learning ([2011] 47).