Mathematical Problems in Engineering
Volume 2013 (2013), Article ID 353969, 20 pages
Research Article

Biogeography-Based Optimization with Orthogonal Crossover

1Department of Applied Mathematics, Xidian University, Xi’an 710071, China
2School of Science, Guilin University of Technology, Guilin 541004, China
3School of Science, Xi’an University of Posts and Telecommunications, Xi’an 710121, China

Received 6 January 2013; Revised 15 April 2013; Accepted 15 April 2013

Academic Editor: Alexander P. Seyranian

Copyright © 2013 Quanxi Feng et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Biogeography-based optimization (BBO) is a new biogeography-inspired, population-based algorithm, which mainly uses a migration operator to share information among solutions. Similar to the crossover operator in genetic algorithms, the migration operator is a probabilistic operator and only generates a vertex of the hyperrectangle defined by the emigration and immigration vectors. Therefore, the exploration ability of BBO may be limited. The orthogonal crossover operator with quantization technique (QOX) is based on orthogonal design and can generate representative solutions in the solution space. In this paper, a BBO variant is presented by embedding the QOX operator in the BBO algorithm. Additionally, a modified migration equation is used to improve the population diversity. Several experiments are conducted on 23 benchmark functions. Experimental results show that the proposed algorithm is capable of locating the optimal or close-to-optimal solution. Comparisons with other BBO variants and state-of-the-art orthogonal-based evolutionary algorithms demonstrate that the proposed algorithm possesses a faster global convergence rate, higher-precision solutions, and stronger robustness. Finally, an analysis of the performance of QOX indicates that QOX plays a key role in the proposed algorithm.

1. Introduction

Many problems in both industrial applications and scientific research can be regarded as optimization problems. During the past decades, several kinds of classical methods [1, 2] have been proposed to handle optimization problems and have made enormous progress, but these methods require knowledge of properties of the optimization problem, such as continuity or differentiability. In recent decades, many metaheuristic algorithms have sprung up, for example, the genetic algorithm (GA) [3], particle swarm optimization (PSO) [4], the simulated annealing algorithm (SA) [5], differential evolution (DE) [6], and biogeography-based optimization (BBO) [7]. These algorithms can solve optimization problems without relying on such properties.

Biogeography-based optimization (BBO) [7], developed by Simon, is an emerging population-based evolutionary algorithm motivated by mimicking the migration of species in natural biogeography. Just as species migrate back and forth between islands in biogeography, the migration operator in BBO shares information between habitats in the population. That is to say, a good solution can share its features with poor ones through the migration operator, and a poor solution can improve its quality by accepting new features from good ones. Thus, BBO possesses powerful exploitation ability. However, the migration operator does not produce new suitability index variables (SIVs), which may lead to weak exploration ability and poor population diversity.

In evolutionary algorithms, the step of generating new solutions can be considered an "experiment." For example, the operation of using a crossover operator to sample genes from parents and produce new offspring can be considered a sampling experiment. Orthogonal experimental design is a method for designing multifactor experiments uniformly, which has been developed to sample a small but representative set of combinations for experimentation. Zhang and Leung [8] introduced orthogonal experimental design into the genetic algorithm and proposed orthogonal crossover (OX). Leung and Wang [9] used a quantization technique in orthogonal experimental design and proposed the quantization orthogonal crossover operator (QOX). Experimental studies show that QOX is an effective and efficient operator for numerical optimization.

BBO simultaneously exhibits powerful exploitation ability, weak exploration ability, poor population diversity, and a slow convergence rate, whereas the QOX operator is a global search operator with systematic and rational search ability. In order to enhance the exploration ability and improve the population diversity, an improved BBO variant, namely, biogeography-based optimization with orthogonal crossover (denoted OXBBO), is presented in this paper for solving global optimization problems. In the proposed algorithm, the QOX operator is embedded into BBO to enhance its exploration ability and accelerate its convergence rate, and a modified migration operator is used to improve the population diversity.

The rest of the paper is organized as follows. The basic BBO is introduced in Section 2, while Section 3 briefly reviews orthogonal experimental design. The proposed algorithm is described in detail in Section 4. In Section 5, extensive experiments are carried out to test OXBBO. Finally, conclusions and future work are summarized in Section 6.

2. Biogeography-Based Optimization

BBO is a new population-based, biogeography-inspired global optimization algorithm, which has some features in common with other biology-based algorithms. The migration operator is the main operator of BBO, sharing information among solutions. PSO and BBO solutions survive forever, while GA solutions "die" at the end of each generation. PSO solutions are more likely to clump together in similar groups, while BBO and GA solutions do not have any built-in tendency to cluster.

BBO is motivated by the geographical distribution of biological organisms. In BBO, each individual is considered a "habitat" with a habitat suitability index (HSI). Habitats with a high HSI tend to host a large number of species, while those with a low HSI host a small number. Habitats with a high HSI emigrate features to nearby habitats, while those with a low HSI accept features from neighboring habitats with a high HSI.

The migration strategy in BBO is similar to the global recombination approach in the breeder GA and evolution strategies. The migration operator is a probabilistic operator that adjusts a habitat based on the immigration rate and the emigration rate. The probability that a habitat H_i is modified is proportional to its immigration rate λ_i, and the probability that the modification source is habitat H_j is proportional to its emigration rate μ_j. The migration operator [7] is described in Algorithm 1.

Algorithm 1: Procedure of migration operator.
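For illustration, the migration procedure of Algorithm 1 can be sketched in Python as follows (a minimal sketch; the function name `migrate`, the list-based habitat representation, and the roulette-wheel implementation are ours, not part of the original pseudocode):

```python
import random

def migrate(population, lambdas, mus):
    """One sweep of the BBO migration operator (a sketch of Algorithm 1).
    `population` is a list of habitats (lists of real-valued SIVs);
    `lambdas` and `mus` are the per-habitat immigration and emigration rates."""
    new_pop = [h[:] for h in population]        # work on copies
    total_mu = sum(mus)
    for i, habitat in enumerate(new_pop):
        for d in range(len(habitat)):
            if random.random() < lambdas[i]:    # immigrate into habitat i?
                # roulette-wheel choice of the emigrating habitat,
                # proportional to the emigration rates mu
                r, acc = random.uniform(0, total_mu), 0.0
                for j, mu in enumerate(mus):
                    acc += mu
                    if acc >= r:
                        habitat[d] = population[j][d]   # copy SIV from H_j
                        break
    return new_pop
```

Note that a habitat with immigration rate 0 is never modified, while one with rate 1 immigrates an SIV in every dimension.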

A habitat's HSI can change suddenly because of random events, which are modeled as a mutation operator in BBO. The mutation operator randomly modifies SIVs based on the habitat's a priori probability. The mutation rate m_i is expressed as m_i = m_max (1 − P_i / P_max), where m_max is a user-defined parameter, P_i is the a priori existence probability of solution i, and P_max is the largest such probability in the population. Without this modification, highly probable solutions would tend to dominate the population; this mutation scheme thus tends to increase diversity in the population. The mutation operator [7] can be loosely described as in Algorithm 2.

Algorithm 2: Procedure of mutation operator.
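A minimal sketch of the mutation step of Algorithm 2 (naming is ours; `m_prob` stands for the habitat's mutation rate m_i computed from the formula above):

```python
import random

def mutate(habitat, m_prob, lower, upper):
    """BBO mutation sketch: each SIV is replaced by a uniformly random
    value inside the search bounds [lower, upper] with probability m_prob."""
    return [random.uniform(lower, upper) if random.random() < m_prob else v
            for v in habitat]
```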

As some scholars have pointed out, BBO may have some deficiencies; for example, it is good at exploiting the solution space but weak at exploring it. Recently, many researchers have been working on improving BBO, and many variants of BBO have been presented.

Improving the operators of BBO is one direction for improvement. Gong et al. [11] used a real coding to represent solutions, extended BBO to continuous-domain optimization, and presented a real-coded BBO (RCBBO). Ma and Simon [16] modified the migration operator with a new migration formula and proposed blended BBO (B-BBO). Li et al. [10] presented perturbing biogeography-based optimization (PBBO) through a design of perturbing migration based on the sinusoidal migration curve. Li and Yin [12] proposed another migration operation based on multiparent crossover, called the multiparent migration model, and presented multioperator biogeography-based optimization (MOBBO). Ma [17] generalized the linear migration model to six different migration models and used experimental study and theoretical analysis to investigate their performance.

Hybridization with other EAs is another direction for improvement. Cai et al. [18] combined evolutionary programming with BBO to enhance the exploration ability and proposed a hybrid BBO (BBO-EP). Wang and Xu [19] combined differential evolution and simplex search with BBO and proposed SSBBODE for solving parameter estimation problems. Lohokare et al. [20] presented intelligent biogeography-based optimization (IBBO) through hybridization of BBO with the bacterial foraging algorithm. Zhu [21] used graphics hardware acceleration to present a massively parallel biogeography-based optimization-pattern search algorithm (BBO-PS). Tan and Guo [22] proposed quantum and biogeography-based optimization (QBBO) by evolving multiple quantum probability models via evolution strategies inspired by the mathematics of biogeography. Gong et al. [15] combined the exploration of DE with the exploitation of BBO effectively and presented a hybrid DE with BBO, namely, DE/BBO. Boussaid et al. [13] proposed a two-stage algorithm (DBBO) through hybridization of BBO with DE, which updates the population by using BBO and DE alternately.

Moreover, theoretical analysis and applications of BBO have also been developing. Simon [23] introduced several simplified versions of BBO and used probability theory to perform an approximate analysis. Simon et al. [24] showed that BBO is a generalization of a genetic algorithm with global uniform recombination (GA/GUR) and compared the two algorithms using analytical Markov models. Simon [25] also introduced dynamic system models for BBO and GA/GUR.

Ma et al. [26] incorporated resampling in BBO to solve optimization problems in noisy environments. Ergezer et al. [27] employed opposition-based learning alongside migration rate of BBO and created oppositional BBO (OBBO).

It is necessary to emphasize that our aim is to add a new global search operator to the BBO algorithm while avoiding the introduction of too many additional control parameters.

3. Orthogonal Crossover

In practical applications, it is impossible to consider all combinations in a large sample space, so we use a small set of representative samples to represent it. The orthogonal design method [9], with both an orthogonal array (OA) and factor analysis (FA), is used to execute multifactor experiments uniformly and to sample well-distributed points in the solution space. An OA is a fractional factorial array of numbers arranged in rows and columns which assures a balanced comparison of the levels of any factor. All columns in an orthogonal array can be evaluated independently of one another, and a number of such arrays can be found in the literature.

For example, consider an experiment with four factors, each having three levels; its orthogonal array is L9(3^4). There are 3^4 = 81 combinations in total, but if we apply the orthogonal array, we need to consider only nine of them.

For convenience, we denote an orthogonal array for N factors with Q levels and M combinations as L_M(Q^N) = [a_{i,j}]_{M×N}, where a_{i,j} represents the level of the jth factor in the ith combination. Generally, M = Q^J and N = (Q^J − 1)/(Q − 1) for some positive integer J.

The orthogonal crossover operator was first developed by Zhang and Leung [8]; it originated from the integration of orthogonal experimental design with the crossover operator to generate several new solutions in line with an orthogonal array. As a matter of fact, each operation of generating a new offspring can be regarded as an experiment: a crossover is a procedure for sampling several points from a defined region, while the orthogonal crossover operator (OX) uses an orthogonal array to make the crossover operator more statistically sound. OX in [8] was used for combinatorial problems. Leung and Wang [9] combined OX with a quantization technique and proposed a new OX for numerical optimization over continuous variables: they first quantized the continuous domain into several levels and then used an orthogonal array to generate combinations.

Consider two parent habitats H_1 and H_2. For each variable, the two parents define a search interval [l_j, u_j], where l_j = min(H_{1,j}, H_{2,j}) and u_j = max(H_{1,j}, H_{2,j}); the Cartesian product of these intervals is the solution space. QOX quantizes each interval of the solution space into Q levels l_j, l_j + (u_j − l_j)/(Q − 1), …, u_j.

In practical applications, the dimension D is often much larger than the number of factors N of the orthogonal array, so the array cannot be used directly. Therefore, we first randomly generate N − 1 distinct integers between 1 and D and use them to divide the D-dimensional solution vector into N factors.

Finally, M combinations are generated according to the orthogonal array L_M(Q^N); the habitat suitability index of each is evaluated, and the two best habitats are selected from them to replace their parent habitats.
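Putting the pieces together, QOX can be sketched as follows (our own naming; `oa` is an orthogonal array with levels coded 0..Q−1, such as L9(3^4), and the dimension D is assumed to be at least the number of factors):

```python
import random

def qox(p1, p2, oa, Q):
    """QOX sketch: quantize the hyperrectangle spanned by parents p1, p2
    into Q levels per dimension, cut the D dimensions into F contiguous
    factors (F = number of OA columns), and build one offspring per OA row."""
    D, F = len(p1), len(oa[0])
    # Q quantized levels per dimension, spanning [min, max] of the parents
    levels = [[min(a, b) + l * (max(a, b) - min(a, b)) / (Q - 1)
               for l in range(Q)] for a, b in zip(p1, p2)]
    # randomly cut the D dimensions into F contiguous factors
    cuts = sorted(random.sample(range(1, D), F - 1))
    bounds = [0] + cuts + [D]
    offspring = []
    for row in oa:
        child = [0.0] * D
        for f in range(F):                       # factor f uses level row[f]
            for d in range(bounds[f], bounds[f + 1]):
                child[d] = levels[d][row[f]]
        offspring.append(child)
    return offspring
```

Each offspring therefore lies on the quantized grid inside the hyperrectangle defined by the two parents; the OA row of all zeros, for instance, reproduces the "lower corner" of that hyperrectangle.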

QOX has been incorporated into particle swarm optimization [4], differential evolution [28], and genetic algorithm [9] to enhance the exploration ability.

4. The Proposed Algorithm: OXBBO

4.1. Orthogonal Crossover Operator

The migration operator plays a key role in BBO. Simon et al. [24] have pointed out that the BBO migration strategy is conceptually similar to the GA global uniform recombination operator and can be considered a special GA crossover operator [24]. Like the GA crossover operator, the migration operator only generates and evaluates a habitat located at a vertex of the hyperrectangle defined by the emigration and immigration habitats. The interior points and the other vertices of the hyperrectangle, which might contain promising regions of the solution space, are not considered in this process. This may cause the search to become trapped in a local optimum, especially in the initial optimization stage. That is to say, the migration operator cannot perform a systematic search in the hyperrectangle. Therefore, the exploration ability of BBO could be limited to some extent.

The orthogonal crossover operator with quantization technique (QOX) samples a small but representative set of combinations for experimentation. In the search process, QOX searches not only the vertices of the hyperrectangle defined by the parents but also its interior. QOX can therefore execute a systematic search in the hyperrectangle, locate the global optimal solution more easily, and overcome the limitation of the migration operator.

As shown in Figure 1, the two parents (supposing that one is the emigration habitat and the other the immigration habitat in a two-dimensional solution space) define a rectangle. We can see in Figure 1 that the GA crossover operator generates two offspring randomly. The migration operator absorbs better SIVs, determined by the immigration and emigration rates, and deliberately generates one offspring at a vertex. The orthogonal crossover operator with L9(3^4) generates nine potential offspring which lie uniformly at both the vertices and the interior of the hyperrectangle. Obviously, the orthogonal crossover operator searches the solution space more thoroughly than the GA crossover operator and the migration operator do.

Figure 1: Illustrations of using crossover to generate new offspring in a two-dimensional space; two points are the parents and the others are offspring. (a) GA crossover operator, (b) migration operator, and (c) orthogonal crossover operator.

In order to enhance the exploration ability of BBO, we embed the QOX operator with L9(3^4) into BBO. For the sake of randomness, two randomly selected habitats are used as the parents of QOX and are replaced by the two best habitats chosen from the offspring after QOX is performed. Note that the QOX operator is more costly than the migration operator and the GA crossover operator, since it needs to evaluate nine offspring when L9(3^4) is used. Considering the limit on the number of fitness function evaluations, we apply QOX with L9(3^4) only once at each generation to save computation cost and to simplify the implementation. A brief description of QOX is shown in Algorithm 3.

Algorithm 3: Procedure of QOX.

4.2. Modified Migration Operator

As analyzed before, in the migration operator the SIVs of a new habitat migrate from other habitats. This procedure only uses the SIVs of better habitats to replace those of worse habitats but does not produce new SIVs, which results in poor population diversity, especially in the later stage. To address this, a modified migration is used to improve the population diversity. In Algorithm 1, formula (*) is replaced by the new formula H_i(SIV) = H_j(SIV) + (H_{r1}(SIV) − H_{r2}(SIV)), where r1 and r2 are two different integers generated randomly between 1 and NP (NP is the population size). In formula (6), the difference between two randomly selected habitats is added to the emigrated SIV, which generates a random perturbation on the emigrated SIV and thus improves the diversity of the new population.
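A per-SIV sketch of the modified migration of formula (6) (the function name and list-based representation are ours; habitat j is the emigration source chosen by the migration operator):

```python
import random

def modified_migrate_siv(population, j, d):
    """Formula (6) sketch: the SIV emigrating from habitat j in dimension d
    is perturbed by the difference of the same SIV in two distinct,
    randomly chosen habitats r1 and r2."""
    r1, r2 = random.sample(range(len(population)), 2)   # guarantees r1 != r2
    return population[j][d] + (population[r1][d] - population[r2][d])
```

Unlike the original migration step, the returned SIV is generally a value that does not yet exist anywhere in the population.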

4.3. Cauchy Mutation Operator

In order to enhance the exploration ability of BBO, a modified mutation operator based on the Cauchy distribution is integrated into BBO to replace the uniformly random generation of SIVs.

The probability density function of the Cauchy distribution [11] is f(x; x_0, γ) = (1/π) · γ / (γ² + (x − x_0)²), where x_0 is the location parameter and γ > 0 is a scale parameter. A real-valued random variable X that is Cauchy distributed is written X ~ C(x_0, γ).

The Cauchy mutation in OXBBO perturbs the selected SIV with a random number drawn from the Cauchy distribution.
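A Cauchy deviate can be drawn by inverse-transform sampling, since the Cauchy quantile function is x_0 + γ·tan(π(u − 1/2)) for u uniform on (0, 1). The sketch below (our naming; the choice γ = 1 is an assumption for illustration) applies such a perturbation as the mutation step:

```python
import math
import random

def cauchy(x0=0.0, gamma=1.0):
    """Draw a Cauchy(x0, gamma) deviate by inverse-transform sampling."""
    return x0 + gamma * math.tan(math.pi * (random.random() - 0.5))

def cauchy_mutate(habitat, m_prob):
    """Cauchy mutation sketch: with probability m_prob, each SIV is
    perturbed by a standard Cauchy deviate (bound violations are handled
    separately, as in Section 4.4)."""
    return [v + cauchy() if random.random() < m_prob else v for v in habitat]
```

The heavy tails of the Cauchy distribution occasionally produce very large jumps, which is precisely what gives this mutation stronger exploration than a uniform or Gaussian perturbation.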

4.4. Boundary Constraints

The OXBBO algorithm assumes that all the habitats in the population are limited to an isolated and finite solution space. Some habitats may move out of the solution space during the optimization process, which should be prevented. In order to keep the solutions of bound-constrained problems feasible, any SIV that violates its boundary constraints is replaced by a newly generated one: H_i(SIV) = l + rand(0, 1) · (u − l), where l and u are the lower and upper bounds of the solution space, respectively.
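The boundary repair can be sketched as follows (per-dimension bounds; the uniform regeneration of each violating SIV is our reading of "replaced by a newly generated" value):

```python
import random

def repair(habitat, lower, upper):
    """Replace every SIV that violates its bounds with a uniformly random
    value inside [lower[d], upper[d]]; in-bounds SIVs are left untouched."""
    return [v if lo <= v <= hi else random.uniform(lo, hi)
            for v, lo, hi in zip(habitat, lower, upper)]
```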

4.5. Selection Operator

The selection operator drives the metabolism of the population and retains good habitats. A greedy selection operator is used in this paper: a population habitat H_i is replaced by its corresponding trial habitat U_i if the HSI of U_i is better than that of H_i, where H_i and U_i are the population habitat and the trial habitat, respectively.
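The greedy selection reduces to a one-line comparison (a sketch; we assume here that a larger HSI value is better, so for benchmark minimization the `hsi` callback would be the negated objective):

```python
def select(habitat, trial, hsi):
    """Greedy selection sketch: keep the trial habitat only if its HSI is
    strictly better; otherwise retain the current population habitat."""
    return trial if hsi(trial) > hsi(habitat) else habitat
```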

4.6. Main Procedure of OXBBO

As analyzed above, the migration operator is good at exploiting the solution space but weak at exploring it, while QOX is good at exploring the solution space and can offset this deficiency of BBO. The modified migration operator can increase the differences within the population and improve the population diversity. Therefore, in this section we introduce biogeography-based optimization with orthogonal crossover (viz., OXBBO) by incorporating the above-mentioned QOX and the modified migration operator into BBO. The pseudocode of the OXBBO approach is described in Algorithm 4.

Algorithm 4: Main Procedure of OXBBO.
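The overall flow of Algorithm 4 can be sketched end to end in a compact, self-contained form (minimization assumed; all names are ours, several operator details are simplified for brevity, the emigrant choice is a simple stochastic approximation of roulette selection, and the dimension must be at least 4 so the L9(3^4) factor cut is possible):

```python
import random

L9 = [[0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],    # L9(3^4) orthogonal array
      [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
      [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0]]

def oxbbo(f, dim, lower, upper, NP=20, gens=100, m_prob=0.005, seed=None):
    """A compact sketch of the OXBBO main loop (Algorithm 4): modified
    migration + mutation with greedy selection, then one QOX application
    per generation. Minimization of f over [lower, upper]^dim, dim >= 4."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lower, upper) for _ in range(dim)] for _ in range(NP)]
    fit = [f(h) for h in pop]
    for _ in range(gens):
        order = sorted(range(NP), key=lambda i: fit[i])     # best first
        rank = {i: r for r, i in enumerate(order)}
        lam = [rank[i] / (NP - 1) for i in range(NP)]       # immigration rates
        mu = [1.0 - l for l in lam]                         # emigration rates
        # --- modified migration + mutation, with greedy selection ---
        for i in range(NP):
            trial = pop[i][:]
            for d in range(dim):
                if rng.random() < lam[i]:
                    # emigrant chosen stochastically, biased toward high mu
                    j = max(range(NP), key=lambda k: mu[k] * rng.random())
                    r1, r2 = rng.sample(range(NP), 2)       # formula (6)
                    trial[d] = pop[j][d] + (pop[r1][d] - pop[r2][d])
                elif rng.random() < m_prob:                 # mutation
                    trial[d] = rng.uniform(lower, upper)
                if not lower <= trial[d] <= upper:          # boundary repair
                    trial[d] = rng.uniform(lower, upper)
            tf = f(trial)
            if tf < fit[i]:                                 # greedy selection
                pop[i], fit[i] = trial, tf
        # --- one QOX application per generation ---
        a, b = rng.sample(range(NP), 2)
        levels = [[min(x, y) + l * (max(x, y) - min(x, y)) / 2   # Q = 3
                   for l in range(3)] for x, y in zip(pop[a], pop[b])]
        bounds = [0] + sorted(rng.sample(range(1, dim), 3)) + [dim]
        kids = []
        for row in L9:
            child = [levels[d][row[fac]] for fac in range(4)
                     for d in range(bounds[fac], bounds[fac + 1])]
            kids.append((f(child), child))
        kids.sort(key=lambda t: t[0])
        for idx, (kf, kid) in zip((a, b), kids[:2]):        # keep two best
            if kf < fit[idx]:
                pop[idx], fit[idx] = kid, kf
    best = min(range(NP), key=lambda i: fit[i])
    return pop[best], fit[best]
```

Because every replacement is guarded by a greedy comparison, the best objective value in the population is monotonically non-increasing over generations.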

5. Experimental Study

In order to verify the performance of the proposed algorithm, several experiments are implemented on 23 benchmark functions. These benchmark functions are chosen from [29] and briefly summarized in Table 1; detailed descriptions of these functions can be found in [29].

Table 1: Benchmark functions used in our experimental tests.

The first group of functions is unimodal. Among them, one function has a very sharp narrow ridge running around a parabola, one is a discontinuous step function with one minimum, and one is a low-dimensional function with a noisy perturbation. The second group consists of multimodal functions in which the number of local minima increases exponentially with the dimension. The remaining functions are low-dimensional functions with only a few local minima.

5.1. Experimental Setting

For OXBBO, we have chosen a reasonable set of parameter values. For all experimental tests, we use the following settings unless a change is mentioned: (1) population size; (2) maximum immigration rate; (3) maximum emigration rate; (4) mutation probability = 0.005; (5) value to reach (VTR) = 10^−8, except for one function with VTR = 10^−2; (6) maximum number of fitness function evaluations (Max_NFFEs): 150,000, 200,000, 300,000, or 500,000 for the high-dimensional functions and 10,000, 20,000, or 40,000 for the low-dimensional functions, depending on the function.

All the algorithms are evaluated on the performance criteria of [14, 15] or similar ones. The success rate (SR) is the ratio of the number of successful runs to the total number of runs. NFFEs is the number of fitness function evaluations needed to reach the VTR. The acceleration rate (AR) is defined as AR = NFFEs_other / NFFEs_OXBBO and is used to compare the convergence speed of OXBBO with that of other algorithms.
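The two derived criteria are simple ratios; a trivial worked sketch (names are ours):

```python
def success_rate(successes, total_runs):
    """SR: fraction of runs that reached the VTR."""
    return successes / total_runs

def acceleration_rate(nffes_other, nffes_oxbbo):
    """AR = NFFEs_other / NFFEs_OXBBO; AR > 1 means OXBBO converged faster."""
    return nffes_other / nffes_oxbbo
```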

Note. For convenience of comparison, the best function values and their standard deviations are compared in Section 5.2, and the best function error values and their corresponding standard deviations are compared in the other sections. Moreover, high-dimensional functions with dimensions 10, 50, 100, and 200 are optimized in Section 5.5; functions with 30 dimensions are optimized in the other sections.

All experimental tests in this paper are implemented on a computer with a 1.86 GHz Intel Core processor, 1 GB of RAM, and the Windows XP operating system, using Matlab 7.6.

5.2. Comparison with Improved BBO Algorithms

In order to validate the performance of OXBBO, we first compare OXBBO with BBO [11] and with improved BBO algorithms: perturbing BBO with Gaussian mutation (PBBO-G) [10], real-coded BBO with Gaussian mutation (RCBBO-G) [11], and multioperator BBO (MOBBO) [12]. OXBBO executed 50 independent runs on the 23 benchmark functions; the mean fitness values and their standard deviations are recorded. The results of the other algorithms are taken directly from the corresponding papers. The comparison results are summarized in Table 2.

Table 2: Comparison of improved BBO algorithms and OXBBO.

We first compare OXBBO with the original BBO. In Table 2, the results of OXBBO are better than those of BBO on 22 of the 23 benchmark functions, with a single exception. On both the higher-dimensional and the lower-dimensional functions, OXBBO outperforms BBO. This means that integrating QOX and the modified migration operator into BBO can improve the performance greatly.

Compared with PBBO-G, from Table 2 we can see that OXBBO is better on 21 out of 23 functions. For the unimodal functions, OXBBO performs better than or similarly to PBBO-G on 5 out of 6 functions, and PBBO-G is better than OXBBO on only one function. For the higher-dimensional multimodal functions, OXBBO performs better than PBBO-G on all of them. For the lower-dimensional functions, OXBBO outperforms PBBO-G with one exception, and both algorithms obtain similar results on one function. Moreover, the results of OXBBO are better than those of PBBO-G by several orders of magnitude on 13 of the functions. This analysis shows that combining QOX and the modified migration operator with BBO is much better than combining the perturbing migration and the sinusoidal migration curve in BBO.

Moreover, similar results are obtained when we compare OXBBO with RCBBO-G: OXBBO outperforms RCBBO-G on 22 test functions, and the two algorithms obtain the same results on one test function. When compared with MOBBO, OXBBO is better than or similar to MOBBO on 12 out of 23 functions. According to the parameter settings, MOBBO and OXBBO are tested with population sizes 195 and 100, respectively, and both algorithms use the same maximum number of generations; that is to say, the maximum number of fitness function evaluations of MOBBO is almost twice that of OXBBO. Meanwhile, on some functions where MOBBO attains better precision, OXBBO also obtains excellent results. The above analysis indicates that OXBBO has not only better exploitation ability but also better exploration ability.

5.3. Comparison with Hybrid BBO Algorithms

In order to further verify the performance of OXBBO, comparisons are made with hybrid BBO algorithms, such as BBO-EP [11], DBBO [13], eBBO [14], and DE/BBO [15]. For these algorithms, we directly cite the numerical results available in the corresponding papers. OXBBO is executed for 50 independent runs on 13 high-dimensional functions. The numerical results on the 13 high-dimensional benchmark functions are listed in Table 3.

Table 3: Comparison of hybrid BBO algorithms and OXBBO for high-dimension of functions.

From Table 3, OXBBO outperforms BBO-EP, DBBO, eBBO, and DE/BBO on 9, 9, 8, and 6 of the 13 test functions, respectively, while each of the four algorithms surpasses OXBBO on 3 of the 13 functions. Thus, we can conclude that, overall, OXBBO is better than BBO-EP, DBBO, and eBBO and is very competitive with DE/BBO. This discussion shows that OXBBO is an effective global function optimizer.

5.4. Convergence Speed When Compared with BBO Algorithms

From Section 5.3, we can see that, although OXBBO outperforms the four algorithms, the results of eBBO and DE/BBO are also very competitive. According to [30], more than three quarters of the computational cost in evolutionary search is consumed by fitness function evaluation. Hence, in solving real-world problems, the NFFEs overwhelm the algorithmic overhead. For these reasons, we compare the convergence speed of OXBBO with that of eBBO and DE/BBO using the acceleration rate (AR). From the definition of AR, we know that AR > 1 means that OXBBO is faster than the compared algorithm, and vice versa.

Table 4 summarizes the mean NFFEs and SR of eBBO, DE/BBO, and OXBBO for solving the 13 test functions, as well as the ARs between OXBBO and eBBO and between OXBBO and DE/BBO.

Table 4: NFFEs required to obtain accuracy levels less than VTR.

OXBBO requires fewer NFFEs than DE/BBO to reach the VTR on 10 of the 11 functions with successful runs, which indicates that our algorithm is faster than DE/BBO. Similar results are obtained when compared with eBBO. For example, on one function the average numbers of NFFEs needed by eBBO, DE/BBO, and OXBBO to reach the VTR are 56110, 59926, and 44398, respectively; OXBBO needs only 44398 NFFEs, which means that its CPU time is the shortest among the three algorithms. Moreover, it can be seen from Table 4 that the average ARs between OXBBO and eBBO and between OXBBO and DE/BBO over the successful runs are 1.26 and 1.37 (the total AR divided by 11); that is to say, the overall convergence rate of OXBBO is faster than that of DE/BBO and eBBO.

From Table 4, we can also find that the average SRs of eBBO, DE/BBO, and OXBBO are 76.8%, 84.6%, and 84.6%, respectively. OXBBO and DE/BBO reach the VTR with a success ratio of 100% on 11 functions (all except two), while eBBO does so on only 9 functions. Hence, OXBBO and DE/BBO are more robust than eBBO.

5.5. Effect of Dimensionality on the Performance

From the analysis above, OXBBO is a BBO algorithm with robust and effective performance. In order to investigate the influence of scalability on the performance of OXBBO, we carry out a scalability study comparing it with DE/BBO on the scalable functions with Dim = 10, 50, 100, and 200. The results of 50 independent runs after Dim × 10,000 NFFEs are recorded in Table 5.

Table 5: Comparison of DE/BBO and OXBBO with differential dimensions.

We can find from Table 5 that the overall SR decreases for both DE/BBO and OXBBO as the dimension increases. For OXBBO, the average SR at dimensions 10, 50, 100, and 200 is 92%, 77%, 69%, and 67%, respectively, whereas the SR of DE/BBO at the same dimensions is 92%, 77%, 72%, and 57%, respectively. Obviously, the average SR of DE/BBO decreases more than that of OXBBO as the dimension increases, which can also be confirmed by carefully comparing the corresponding mean fitness values. By carefully examining the results, we can recognize that for one unimodal function, DE/BBO is better than OXBBO at dimensions 10 and 50, but DE/BBO is outperformed by OXBBO at the higher dimensions (100 and 200). Moreover, the precision obtained by DE/BBO on this function deteriorates considerably as the dimension increases, while that obtained by OXBBO improves. As another example, for one multimodal function whose global optimum position is very difficult to locate, OXBBO can find the global solution at all the tested dimensions in Table 5, but DE/BBO cannot. The experimental tests show that OXBBO can also solve higher-dimensional functions effectively. Therefore, we can conclude that the operators used in OXBBO enhance the exploration ability without sacrificing the exploitation ability.

5.6. Comparison with Other OX-Based Algorithms

In this section, we compare the performance of OXBBO with two excellent OX-based algorithms: OXDE [28], which embeds QOX into DE, and OLPSO [4], which applies an orthogonal learning strategy to PSO to overcome the "oscillation" phenomenon of traditional PSO.

Now, we apply the OX-based algorithms to solve the 13 test functions adopted in this paper over 25 independent runs. In order to compare the performance fairly, all control parameters are kept the same as in the corresponding papers except for the population size, which is 50 in this section. The results of the OX-based algorithms over 25 independent runs are summarized in Table 6. The convergence curves and boxplot figures are shown in Figures 2 and 3, respectively.

Table 6: Comparison of OX algorithms.
Figure 2: (a) Convergence curves of mean fitness error values of OXBBO, OXDE, and OLPSO for functions , , , , , , , , , and . (b) Convergence curves of mean fitness error values of OXBBO, OXDE, and OLPSO for functions , , , , , , , , , and .
Figure 3: (a) ANOVA tests of fitness error values of functions , , , , , , , , , and for OXBBO, OXDE, and OLPSO. (b) ANOVA tests of fitness error values of functions , , , , , , , , , and for OXBBO, OXDE, and OLPSO.

From Table 6, we find that the results of OXBBO are significantly better than those of OXDE and OLPSO on 12 and 10 out of the 23 functions, respectively, whereas OXDE and OLPSO are each significantly better than OXBBO on only 3 of the 23 functions. In terms of the success rate, the average SRs of OXDE, OLPSO, and OXBBO are 74%, 71%, and 82%, respectively. By carefully examining the results in Table 6, one can note that OXDE, OLPSO, and OXBBO all achieve a 100% success rate on 13 functions. Additionally, on one function OXBBO surpasses both OXDE and OLPSO; on another, OXBBO and OLPSO achieve the optimum, but the success rate of OXBBO is higher than that of OLPSO; and on one function OXDE and OLPSO reach the optimum while OXBBO fails.

From Figures 2 and 3, we can see that OXBBO not only converges faster than both OXDE and OLPSO but also converges more robustly. From this analysis, we can conclude that the overall performance of OXBBO is better than that of OXDE and OLPSO.

5.7. Analysis of the Performance of QOX

From the analysis above, we know that the proposed OXBBO algorithm performs very well in comparison with improved BBO algorithms, hybrid BBO algorithms, and some state-of-the-art orthogonal-based algorithms. But does QOX really play a key role in OXBBO? To answer this question, in this section a detailed analysis of the performance of the QOX operator in BBO is made. We consider two variants: OXBBO with the QOX operator and OXBBO without the QOX operator, denoted OXBBO and OXBBO1, respectively. The experimental test is performed on the 23 test functions with 25 independent runs, using the parameter settings suggested in Section 5.1. The mean and standard deviation of the function error values and the t-test values over the 25 independent runs of each algorithm are listed in Table 7. The convergence curves and boxplot figures are shown in Figures 4 and 5.

Table 7: Comparison of OX operator.
Figure 4: Convergence curves of mean fitness error values of OXBBO and OXBBO1 on ten functions.
Figure 5: ANOVA tests of fitness error values of ten functions for OXBBO and OXBBO1.

According to the t-test, the results of OXBBO are significantly better than those of OXBBO1 on 11 out of 23 test functions and similar to those of OXBBO1 on another 11 functions; OXBBO1 surpasses OXBBO on only one function. From this, we know that QOX plays a very important role in enhancing the performance of OXBBO: it not only improves solution precision but also enhances reliability. This is also demonstrated in Figures 4 and 5.
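For readers unfamiliar with the comparison procedure, the per-function t-test can be illustrated with a minimal sketch (Welch's two-sample t statistic on hypothetical error samples; the degrees-of-freedom computation and p-value lookup against the significance threshold are omitted for brevity):

```python
import math
from statistics import mean, stdev

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples of run errors.

    A large negative value indicates that sample_a has significantly
    lower mean error than sample_b (for minimization problems).
    """
    na, nb = len(sample_a), len(sample_b)
    va, vb = stdev(sample_a) ** 2, stdev(sample_b) ** 2
    se = math.sqrt(va / na + vb / nb)  # standard error of the mean difference
    return (mean(sample_a) - mean(sample_b)) / se

# Hypothetical error samples from 5 runs of two algorithms on one function.
errs_a = [1e-8, 2e-8, 1.5e-8, 1e-8, 2.5e-8]
errs_b = [1e-4, 3e-4, 2e-4, 1.5e-4, 2.5e-4]
t = welch_t(errs_a, errs_b)
```

In the paper's tables, such a statistic (computed over 25 runs per algorithm) is compared against a critical value to decide whether one algorithm is significantly better, significantly worse, or similar on each function.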

6. Conclusions and Future Work

In this paper, a new improved BBO algorithm (OXBBO) is presented by incorporating the QOX operator into the BBO algorithm. It is designed to overcome the shortcoming that the migration operator only visits one vertex of the hyperrectangle defined by the immigration and emigration vectors. In OXBBO, QOX is able to perform a systematic and rational search in this region and thereby enhance the exploration ability. Moreover, a modified migration operator is used in OXBBO to improve population diversity. Extensive experiments have been carried out to compare the performance of OXBBO with state-of-the-art BBO algorithms and orthogonal crossover-based evolutionary algorithms. We have also experimentally studied the effect of QOX on the performance of OXBBO.
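The shortcoming addressed here can be seen from a minimal sketch of the standard per-component migration step (the function name and list-based habitat representation are ours): because each component is copied wholesale from either the immigrating or the emigrating habitat, the offspring always lies at a vertex of the hyperrectangle spanned by the two vectors, never in its interior.

```python
import random

def bbo_migration(habitat, emigrant, immigration_rate):
    """Standard BBO migration on one habitat (solution vector).

    Each component either keeps the immigrating habitat's value or copies
    the emigrant's value unchanged, so every offspring coordinate equals
    one of the two parent coordinates, i.e., a hyperrectangle vertex.
    """
    return [e if random.random() < immigration_rate else h
            for h, e in zip(habitat, emigrant)]

child = bbo_migration([0.0, 0.0, 0.0], [1.0, 1.0, 1.0], 0.5)
```

QOX complements this by placing quantized trial points throughout the spanned region rather than only at its corners.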

We have observed that the results obtained by OXBBO vary considerably with different population sizes, which may be due to the search frequency of QOX; similar results were reported in [28]. In the future, we will study the effect of the search frequency of QOX on performance within the BBO framework on large-scale test functions. Furthermore, we would like to point out that the experimental tests are based on non-rotated functions; studying the performance on rotated functions is also part of our future work.


Acknowledgments

The authors would like to thank Dr. Zhan for providing the OLPSO code, and the area editor and the anonymous reviewers for their valuable comments and suggestions on this paper. This work is supported in part by the National Natural Science Foundation of China (nos. 60974082, 11101101, 11226219, and 51174263), the Natural Science Foundation of Guangxi (nos. 2013GXNSFBA019008 and 2013GXNSFAA019003), and the Program to Sponsor Teams for Innovation in the Construction of Talent Highlands in Guangxi Institutions of Higher Learning.


References

  1. E. L. Lawler and D. E. Wood, “Branch-and-bound methods: a survey,” Operations Research, vol. 14, pp. 699–719, 1966.
  2. J. Vlček and L. Lukšan, “A conjugate directions approach to improve the limited-memory BFGS method,” Applied Mathematics and Computation, vol. 219, no. 3, pp. 800–809, 2012.
  3. J. Horn, N. Nafpliotis, and D. E. Goldberg, “A niched Pareto genetic algorithm for multiobjective optimization,” Evolutionary Computation, vol. 1, pp. 82–87, 1994.
  4. Z.-H. Zhan, J. Zhang, Y. Li, and Y.-H. Shi, “Orthogonal learning particle swarm optimization,” IEEE Transactions on Evolutionary Computation, vol. 15, no. 6, pp. 832–847, 2011.
  5. M. Dorigo, V. Maniezzo, and A. Colorni, “The ant system: optimization by a colony of cooperating agents,” IEEE Transactions on Systems, Man, and Cybernetics B, vol. 26, no. 1, pp. 29–41, 1996.
  6. R. Storn and K. Price, “Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces,” Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.
  7. D. Simon, “Biogeography-based optimization,” IEEE Transactions on Evolutionary Computation, vol. 12, no. 6, pp. 702–713, 2008.
  8. Q. Zhang and Y. W. Leung, “An orthogonal genetic algorithm for multimedia multicast routing,” IEEE Transactions on Evolutionary Computation, vol. 3, no. 1, pp. 53–62, 1999.
  9. Y. W. Leung and Y. Wang, “An orthogonal genetic algorithm with quantization for global numerical optimization,” IEEE Transactions on Evolutionary Computation, vol. 5, no. 1, pp. 41–53, 2001.
  10. X. Li, J. Wang, J. Zhou, and M. Yin, “A perturb biogeography based optimization with mutation for global numerical optimization,” Applied Mathematics and Computation, vol. 218, no. 2, pp. 598–609, 2011.
  11. W. Gong, Z. Cai, C. X. Ling, and H. Li, “A real-coded biogeography-based optimization with mutation,” Applied Mathematics and Computation, vol. 216, no. 9, pp. 2749–2758, 2010.
  12. X. Li and M. Yin, “Multi-operator based biogeography based optimization with mutation for global numerical optimization,” Computers & Mathematics with Applications, vol. 64, no. 9, pp. 2833–2844, 2012.
  13. I. Boussaïd, A. Chatterjee, P. Siarry, and M. Ahmed-Nacer, “Two-stage update biogeography-based optimization using differential evolution algorithm (DBBO),” Computers and Operations Research, vol. 38, no. 8, pp. 1188–1198, 2011.
  14. M. R. Lohokare, S. S. Pattnaik, S. Devi, and S. Das, “Extrapolated biogeography-based optimization for global numerical optimization and microstrip patch antenna design,” International Journal of Applied Evolutionary Computation, vol. 1, no. 3, pp. 1–26, 2010.
  15. W. Gong, Z. Cai, and C. X. Ling, “DE/BBO: a hybrid differential evolution with biogeography-based optimization for global numerical optimization,” Soft Computing, vol. 15, no. 4, pp. 645–665, 2011.
  16. H. Ma and D. Simon, “Blended biogeography-based optimization for constrained optimization,” Engineering Applications of Artificial Intelligence, vol. 24, no. 3, pp. 517–525, 2011.
  17. H. Ma, “An analysis of the equilibrium of migration models for biogeography-based optimization,” Information Sciences, vol. 180, no. 18, pp. 3444–3464, 2010.
  18. Z. H. Cai, W. Y. Gong, and C. X. Ling, “Research on a novel biogeography-based optimization algorithm based on evolutionary programming,” System Engineering Theory and Practice, vol. 30, no. 6, pp. 1106–1112, 2010.
  19. L. Wang and Y. Xu, “An effective hybrid biogeography-based optimization algorithm for parameter estimation of chaotic systems,” Expert Systems with Applications, vol. 38, no. 12, pp. 15103–15109, 2011.
  20. M. R. Lohokare, S. S. Pattnaik, S. Devi, B. K. Panigrahi, S. Das, and K. M. Bakwad, “Intelligent biogeography-based optimization for discrete variables,” in Proceedings of the World Congress on Nature and Biologically Inspired Computing (NABIC '09), pp. 1088–1093, Coimbatore, India, December 2009.
  21. W. Zhu, “Parallel biogeography-based optimization with GPU acceleration for nonlinear optimization,” in Proceedings of the ASME International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, pp. 1–9, Montreal, Quebec, Canada, August 2010.
  22. L.-X. Tan and L. Guo, “Quantum and biogeography based optimization for a class of combinatorial optimization,” in Proceedings of the 1st ACM/SIGEVO Summit on Genetic and Evolutionary Computation, pp. 969–972, New York, NY, USA, 2009.
  23. D. Simon, “A probabilistic analysis of a simplified biogeography-based optimization algorithm,” Evolutionary Computation, vol. 19, no. 2, pp. 167–188, 2011.
  24. D. Simon, R. Rarick, and M. Ergezer, “Analytical and numerical comparisons of biogeography-based optimization and genetic algorithms,” Information Sciences, vol. 181, no. 7, pp. 1224–1248, 2011.
  25. D. Simon, “A dynamic system model of biogeography-based optimization,” Applied Soft Computing, vol. 11, no. 8, pp. 5652–5661, 2011.
  26. H. Ma, M. Fei, D. Simon, and M. Yu, “Biogeography-based optimization in noisy environments,” Tech. Rep., 2012.
  27. M. Ergezer, D. Simon, and D. Du, “Oppositional biogeography-based optimization,” in Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, pp. 1035–1040, San Antonio, Tex, USA, 2009.
  28. Y. Wang, Z. Cai, and Q. Zhang, “Enhancing the search ability of differential evolution through orthogonal crossover,” Information Sciences, vol. 185, no. 1, pp. 153–177, 2012.
  29. X. Yao, Y. Liu, and G. Lin, “Evolutionary programming made faster,” IEEE Transactions on Evolutionary Computation, vol. 3, no. 2, pp. 82–102, 1999.
  30. E. Mezura-Montes and C. A. Coello Coello, “A simple multimembered evolution strategy to solve constrained optimization problems,” IEEE Transactions on Evolutionary Computation, vol. 9, no. 1, pp. 1–17, 2005.