Swarm Intelligence in Engineering 2014
A Modified Artificial Bee Colony Algorithm Based on Search Space Division and Disruptive Selection Strategy
Artificial bee colony (ABC) algorithm has attracted much attention and has been applied to many scientific and engineering applications in recent years. However, there are still some insufficiencies in the ABC algorithm, such as poor quality of initial solutions, slow convergence, premature convergence, and low precision, which hamper its further development and application. In order to further improve the performance of ABC, we first proposed a novel initialization method called search space division (SSD), which provides high-quality initial solutions. Then, a disruptive selection strategy was used to improve population diversity. Moreover, in order to accelerate the convergence rate, we changed the definition of the scout bee phase. In addition, we designed two types of experiments to test the resulting algorithm, SDABC. On the one hand, we conducted experiments to determine how much each modification contributes to improving the performance of ABC. On the other hand, comprehensive experiments were performed to demonstrate the superiority of our proposed algorithm. The experimental results indicate that SDABC significantly outperforms other ABC variants, achieving higher solution accuracy, faster convergence speed, and stronger algorithm stability.
Optimization problems exist extensively in information science, control engineering, industrial design, operational research, and other fields. Therefore, optimization algorithms, which are of great importance to engineering and scientific research, have attracted more and more attention from researchers, and many achievements have been made. However, conventional optimization algorithms fail to handle problems that are nonconvex, nondifferentiable, discontinuous, or high dimensional; swarm intelligence algorithms, which have advantages such as scalability, fault tolerance, adaptation, speed, modularity, autonomy, and parallelism, provide a promising way to solve the complex optimization problems mentioned above. Some excellent algorithms, such as the genetic algorithm (GA), particle swarm optimization (PSO), differential evolution (DE), ant colony optimization (ACO), simulated annealing (SA), and the artificial bee colony (ABC) algorithm, have been proposed and successfully applied in many fields. Among them, ABC, developed by Karaboga by simulating the foraging behavior of a honey bee swarm, is competitive with other swarm intelligence algorithms. As a result, it has been widely used in multiobjective optimization, circuit design, SAR image segmentation, flow shop scheduling, and related optimization problems [9–11]. Nevertheless, as research on ABC is still at an early stage, problems such as poor quality of initial solutions, slow convergence, premature convergence, and low precision still hamper its development and application.
To overcome the above-mentioned shortcomings, researchers have presented many ABC variants. Zhu and Kwong proposed a gbest-guided ABC (GABC) algorithm that incorporates the information of the global best (gbest) solution into the solution search equation to improve exploitation. Kang et al. proposed a memetic algorithm which combines Hooke-Jeeves pattern search with the artificial bee colony algorithm. Alatas introduced chaotic maps into the initialization and chaotic search into the scout bee phase to improve the convergence characteristics and to prevent ABC from getting stuck in local optima. Gao and Liu proposed a modified ABC with a modified solution search equation and a new initialization method. Akay and Karaboga proposed a modified ABC algorithm by controlling the frequency of perturbation and introducing a ratio-of-variance operator into the search equation. Mohammed proposed a generalized opposition-based ABC by introducing the concept of generalized opposition-based learning into the initialization step and generation jumping. Luo et al. used a segmental-search strategy to ensure fast convergence and to avoid trapping into local optima. A more extensive review of ABC can be found in [18, 19].
According to the literature mentioned above, the ABC variants have achieved better performance, but the shortcomings of slow convergence, premature convergence, and low precision have not yet been fully solved. We note that, to improve the performance of ABC, researchers have mainly focused on the search strategy, while other important factors affecting performance, such as the initialization method, the selection strategy, and the fitness function, have received insufficient attention.
In order to further improve the performance of ABC, we proposed a modified artificial bee colony algorithm based on search space division and a disruptive selection strategy (SDABC). In SDABC, first of all, a new initialization method called search space division, which steadily provides high-quality initial solutions, was presented; convergence speed, final solution accuracy, and stability were all improved by using it. It is well known that the greedy strategy used in ABC is useful because it is simple and often gives good approximations to the optimum. However, the greedy strategy mostly (but not always) fails to find the globally optimal solution, because it usually does not operate exhaustively on all the data: it can commit to certain choices too early, which prevents it from finding the best overall solution later. Thus, in this paper, we introduced the disruptive selection strategy to increase the diversity of the population. Moreover, in order to accelerate convergence, a new search equation was presented for the scout bee phase. In addition, our proposed algorithm was evaluated on fourteen standard benchmark functions which have been widely applied to verify optimization methods on continuous optimization problems. The simulation results show that SDABC significantly outperforms other ABC variants, achieving higher solution accuracy, faster convergence speed, and stronger algorithm stability.
The rest of this paper is structured as follows. In Section 2, we make a brief review of standard ABC. Section 3 provides detailed descriptions of our proposed algorithm. Subsequently, some experiments are conducted to verify the performance of SDABC in Section 4. This is followed by conclusion and future research directions in Section 5.
2. Standard Artificial Bee Colony Algorithm
The ABC algorithm, originally introduced by Karaboga, is an optimization algorithm inspired by the foraging behavior of a honey bee swarm. In the minimal model, a colony of artificial bees consists of three kinds of bees [7, 8]: employed bees, onlooker bees, and scout bees. The numbers of employed bees and onlooker bees are each equal to half of the colony size. There is a one-to-one correspondence between employed bees and food sources. Employed bees are responsible for searching food sources, gathering information, and sharing food information with onlooker bees around the hive. An employed bee whose food source has been exhausted becomes a scout. A scout bee, just as its name implies, has the fast discovery of new feasible solutions as its task. In the ABC algorithm, the position of a food source represents a possible solution to the optimization problem, and the nectar amount of a food source corresponds to the quality (fitness) of the associated solution. The ABC algorithm can be summarized in four phases: the initialization phase, employed bee phase, onlooker bee phase, and scout bee phase, as described below.
(1) Initialization Phase. To begin with, a population of SN individuals is generated randomly. Each initial food source x_i, a D-dimensional vector, is produced by (1):

x_ij = x_j^min + rand(0, 1) · (x_j^max − x_j^min), (1)

where x_i denotes the ith food source (solution), i = 1, 2, …, SN, j = 1, 2, …, D, and D is the number of optimization parameters; x_j^max and x_j^min are the upper and lower bounds for dimension j, respectively. Then, each solution is evaluated by (2):

fit_i = 1 / (1 + f_i) if f_i ≥ 0, and fit_i = 1 + |f_i| otherwise, (2)

where fit_i denotes the fitness value of solution x_i and f_i represents the cost function value of solution x_i.
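The initialization and fitness mapping above can be sketched in a few lines (a minimal sketch in Python/NumPy; the function names are ours, not from the paper):

```python
import numpy as np

def init_population(sn, dim, lb, ub, rng=None):
    # Random initialization (equation (1)): each of the sn food sources is
    # drawn uniformly from [lb, ub) in every dimension.
    rng = rng or np.random.default_rng()
    return lb + rng.random((sn, dim)) * (ub - lb)

def fitness(cost):
    # Standard ABC fitness mapping (equation (2)) for minimization:
    # fit = 1 / (1 + f) when f >= 0, else 1 + |f|.
    return 1.0 / (1.0 + cost) if cost >= 0 else 1.0 + abs(cost)
```

Higher fitness always corresponds to a better (smaller) objective value, which is what the proportional selection in the onlooker phase relies on.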
After the initialization, solutions repeat processes of employed bee phase, onlooker bee phase, and scout bee phase until stop conditions are satisfied.
(2) Employed Bee Phase. At this phase, each employed bee produces a new food source (solution) v_i in the neighborhood of its currently associated solution x_i. The position of the new food source is calculated by (3):

v_ij = x_ij + φ_ij · (x_ij − x_kj), (3)

where k ∈ {1, 2, …, SN} and j ∈ {1, 2, …, D} are randomly chosen indexes, k has to be different from i, v_i is the new solution, and φ_ij is a random number between [−1, 1].
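The neighborhood search in (3) mutates a single randomly chosen dimension of the parent toward or away from a random partner. A sketch (the helper name and row-per-solution array layout are our assumptions):

```python
import numpy as np

def employed_search(foods, i, rng=None):
    # Equation (3): v_ij = x_ij + phi * (x_ij - x_kj), with a random
    # dimension j, a random partner k != i, and phi ~ U(-1, 1).
    rng = rng or np.random.default_rng()
    sn, dim = foods.shape
    j = rng.integers(dim)                              # random dimension
    k = rng.choice([n for n in range(sn) if n != i])   # random partner, k != i
    phi = rng.uniform(-1.0, 1.0)
    v = foods[i].copy()
    v[j] = foods[i, j] + phi * (foods[i, j] - foods[k, j])
    return v
```

Only one coordinate of the candidate differs from the parent; the greedy comparison between v_i and x_i then happens outside this helper.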
After that, the fitness value of the new food source v_i is evaluated and compared with that of the old food source x_i. The old food source is replaced by the new one if the new one has a better fitness value; otherwise, the old one is retained.
(3) Onlooker Bee Phase. When all employed bees have finished their searching process, they share the positions and fitness information of the food sources with the onlooker bees, each of which selects a food source depending on the probability p_i calculated by (4):

p_i = fit_i / (fit_1 + fit_2 + ⋯ + fit_SN). (4)

After an onlooker bee chooses a food source with this probability, a new food source is generated and its fitness value is calculated. Subsequently, a greedy selection is applied between the new food source and the old one.
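Proportional (roulette-wheel) selection as in (4) can be sketched as follows (function names are illustrative):

```python
import numpy as np

def onlooker_probabilities(fits):
    # Equation (4): p_i = fit_i divided by the sum of all fitness values.
    fits = np.asarray(fits, dtype=float)
    return fits / fits.sum()

def roulette_pick(fits, rng=None):
    # An onlooker chooses food source i with probability p_i.
    rng = rng or np.random.default_rng()
    return rng.choice(len(fits), p=onlooker_probabilities(fits))
```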
(4) Scout Bee Phase. If the fitness value of a food source does not improve for a certain number of cycles, the food source will be abandoned and the corresponding employed bee becomes a scout bee. Then, the scout bee produces a new position randomly as in (1) to replace the position of abandoned food source.
On the basis of the above analysis, the pseudocode of standard artificial bee colony algorithm is simply described in Algorithm 1.
3. Our Proposed Algorithm: SDABC
Attracted by the prospect and potential of the ABC algorithm, researchers have focused on its variants. However, there are still some problems which need to be solved. In this section, to further improve its performance, we introduce the modified artificial bee colony from three aspects, namely, population initialization based on search space division, disruptive selection strategy, and the improved scout bee phase. The detailed descriptions are as follows.
3.1. Population Initialization Based on Search Space Division
An initialization method capable of providing a better exploration of the search space and presenting only high-quality solutions should improve the performance of a metaheuristic algorithm. Specifically, population initialization influences convergence speed, final solution accuracy, and stability. Generally speaking, in the absence of prior knowledge about the solution, random initialization is the most commonly used way to produce the initial population. However, because of its randomness, random initialization cannot steadily produce high-quality initial solutions. That is to say, once in a while an initial solution produced by random initialization lies close to the optimal solution and results in fast convergence; otherwise, the search takes considerably more time. Logically, we should be looking in all directions simultaneously; in other words, initial solutions should be distributed evenly across the whole search space.
Accordingly, we proposed a novel initialization algorithm called search space division (SSD). In order to make it easy to understand, we take one-dimensional space as an example, as shown in Figure 1. The basic idea of SSD is as follows: to begin with, SSD divides the search space into SN equal segments, one per initial solution. There is one and only one initial solution in every segment (search subspace), which ensures that the initial solutions are distributed relatively evenly across the whole search space. After that, SSD uses the midpoint of each search subspace as a base point, and initial solutions are randomly offset from the base points by up to half the segment length in either direction. Here, random numbers generated within a limited scope are used to produce different initial solutions at each run, which gives the algorithm a chance to succeed in the next run even when it failed in the previous one. It is necessary to emphasize that SSD easily extends to high dimensions. Based on the description above, the main steps of SSD are summarized in Figure 1.
Let x_ij be a real number denoting the jth dimension of the ith solution, where i = 1, 2, …, SN (SN denotes the number of initial solutions) and j = 1, 2, …, D (D represents the number of parameters); x_j^max and x_j^min are the upper and lower bounds for the jth dimension, respectively. Then, base points are generated by (5), and initial solutions are produced by (6):

b_ij = x_j^min + (i − 1/2) · (x_j^max − x_j^min) / SN, (5)

x_ij = b_ij + (rand(0, 1) − 1/2) · (x_j^max − x_j^min) / SN, (6)

where b_ij is the base point corresponding to the jth dimension of the ith solution.
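Since SSD is specified here mainly in prose, the sketch below reconstructs it from that description: segment midpoints serve as base points, plus a bounded random offset, so each of the SN segments holds exactly one initial solution. The function name and details are our assumptions:

```python
import numpy as np

def ssd_init(sn, dim, lb, ub, rng=None):
    # Search space division (reconstructed from the description):
    # split [lb, ub] into sn equal segments per dimension, take each
    # segment midpoint as a base point (as in (5)), then offset by at
    # most half a segment length (as in (6)), so segment i holds
    # exactly one initial solution.
    rng = rng or np.random.default_rng()
    seg = (ub - lb) / sn                              # segment length
    i = np.arange(sn).reshape(-1, 1)                  # segment index per solution
    base = lb + (i + 0.5) * seg                       # midpoints (base points)
    offset = (rng.random((sn, dim)) - 0.5) * seg      # within half a segment
    return base + offset
```

Unlike pure random initialization, every run is guaranteed one solution per segment, while the bounded offset still varies the population from run to run.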
Through the above analysis, we use SSD instead of pure random initialization to produce initial solutions, and its pseudocode is given in Algorithm 2.
3.2. Disruptive Selection Strategy
The fitness value is of crucial importance to the ABC algorithm because it is the sole criterion of nectar amount. In the ABC algorithm, the fitness value is generated by (2). As described in the literature, there is a vital problem that needs to be solved. For example, when searching for a minimum, (2) is used to map the function value to a fitness value. However, when the function value is very close to zero, the fitness value calculated by (2) is rounded up to 1 (the tiny function value is lost to floating-point precision). Subsequently, the fitness values of all solutions become equal to 1 in later iterations. That is to say, a new solution that actually improves on the old solution will be ignored, and the search will stagnate at the old solution. In order to solve this problem, previous work directly used the objective function value for comparison and selection of the better solution. To some extent, the issue has been solved by this amendment, but it raises a new problem: onlooker bees lose search direction, which results in falling into local optima. As the number of iterations increases, the differences between objective function values become smaller and smaller, which requires the fitness value to respond sensitively to slight changes; otherwise, onlooker bees cannot select appropriate food sources. Through the above analysis, we proposed a new equation (7) to calculate the fitness value, where f_i is the objective function value and fit_i is the fitness value.
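The saturation problem described above is easy to reproduce in double precision; the paper's corrected mapping (7) is not reproduced here, but the failure of the standard mapping (2) can be demonstrated directly:

```python
def standard_fitness(f):
    # Equation (2) for f >= 0: fit = 1 / (1 + f).
    return 1.0 / (1.0 + f)

# Near the optimum the mapping saturates: in IEEE double precision,
# 1.0 + 1e-17 rounds to exactly 1.0, so two solutions with different
# objective values receive identical fitness and selection stalls.
indistinguishable = standard_fitness(1e-17) == standard_fitness(0.0)
```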
After that, a greedy selection strategy is applied to select the food source. However, as is well known, the greedy selection strategy often converges to a locally optimal solution, which results from a lack of population diversity. In order to solve this issue, we introduced the disruptive selection strategy to maintain population diversity. The disruptive selection strategy gives both higher- and lower-fitness individuals more chances to be selected by changing the definition of the fitness function as in (8) and (9):

fit′_i = |fit_i − fit_avg|, (8)

p_i = fit′_i / (fit′_1 + fit′_2 + ⋯ + fit′_SN), (9)

where fit_avg is the average fitness value of the individuals in the population. Based on the above amendments, the main steps of the disruptive selection strategy are shown in Algorithm 3.
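Assuming (8) and (9) follow the classic disruptive selection scheme (selection pressure proportional to distance from the mean fitness), the strategy can be sketched as:

```python
import numpy as np

def disruptive_probabilities(fits):
    # Disruptive selection (after (8) and (9)): individuals far from the
    # mean fitness -- whether much better OR much worse -- receive larger
    # selection probability, preserving population diversity.
    fits = np.asarray(fits, dtype=float)
    f_prime = np.abs(fits - fits.mean())       # equation (8)
    total = f_prime.sum()
    if total == 0.0:                           # all individuals identical
        return np.full(len(fits), 1.0 / len(fits))
    return f_prime / total                     # equation (9)
```

Note how a middling individual gets near-zero probability while the extremes share the selection mass, the opposite of the usual proportional selection.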
3.3. Improved Scout Bee Phase
When an employed bee becomes a scout bee, (1) is used to randomly generate a new position. Because of this randomness, the new position produced by (1) is likely to be far away from the global optimum or to lie in a search area that has already been explored. Guidance by the best solution can rapidly accelerate convergence, as has been shown in the literature; the standard scout bee fails to take advantage of the information in the best solution achieved so far. Moreover, it is hard to determine the limit value, because we face a dilemma: a limit that is too large results in poor population diversity, and a limit that is too small renders slow convergence. As a matter of fact, the sole purpose of a scout bee is to maintain population diversity, which can be replaced by other strategies. As a result, in this paper, we adopted the disruptive selection strategy to keep population diversity. In addition, for the purpose of accelerating convergence, we assigned a new task to the scout bee: to further exploit the promising position. Therefore, a new equation (10) generating the new position for the scout bee is introduced, where x_best is the best solution achieved so far and x_i is the solution abandoned by the employed bee. Through the above operations, the main steps of SDABC are presented in Algorithm 4.
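The exact form of (10) is described here only in prose; a plausible best-guided reading (our assumption, in the spirit of GABC's gbest term, not the paper's verbatim equation) is sketched below:

```python
import numpy as np

def scout_search(x_best, x_abandoned, rng=None):
    # A best-guided scout move (our reading of the description): instead of
    # restarting at random, the scout exploits the neighborhood of the best
    # solution found so far, perturbed along its difference from the
    # abandoned solution.
    rng = rng or np.random.default_rng()
    phi = rng.uniform(-1.0, 1.0, size=x_best.shape)
    return x_best + phi * (x_best - x_abandoned)
```

With this reading, the diversity role the random restart used to play is handed over to disruptive selection, while the scout concentrates on exploitation near x_best.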
4. Simulation Experiments
In this section, a series of experiments were conducted to verify the effectiveness of our proposed algorithm SDABC on standard benchmark functions. Well-defined benchmark functions based on mathematical functions facilitate the testing of our proposed algorithm; we selected fourteen of them for these experiments. The formulations and properties of these benchmark functions are summarized in Table 1; they comprise both unimodal and multimodal functions, including a discontinuous step function, a bound-constrained function, and a nonconvex function with multiple minima in the high-dimensional case.
It is necessary to emphasize that all the experiments were implemented in the same hardware and software environment. Specifically, the hardware configuration is as follows: an Intel Core 2 Duo processor running at 2.20 GHz, 512 MB of RAM, and an 80 GB hard drive. The operating system is Microsoft Windows (SP3). Our code was executed with the default settings of Matlab R2010a.
4.1. Effects of Each Modification on the Performance of SDABC
In this section, we conducted two types of experiments to analyze each modification separately. On the one hand, in order to verify the superiority of our proposed initialization algorithm, we compared SSD with other initialization algorithms, including opposition-based learning (OBL for short) and random initialization (RI for short). For clarity, we call the original ABC with our proposed population initialization (i.e., replacing RI in the original ABC with SSD) ABC-ssd and the original ABC with OBL ABC-obl; note that ABC itself uses RI to generate the initial population. In the experiments, the population size was set to 20, the maximum number of cycles was set to 600, and the value of limit was set to 150. We then compared the convergence performance and final solution accuracy of the different initialization methods on a set of six benchmark functions, including three unimodal and three multimodal functions, at several dimension settings (60 and 100 among them). For all functions, each algorithm was run 100 times independently. The results are shown in Table 2 in terms of the mean and standard deviation (Std for short) of the solutions obtained by each algorithm. Moreover, the best and the second best solutions are marked in bold, and a two-tailed t-test at a 0.05 level of significance was used; the value "+" denotes that our proposed algorithm is significantly better than the other algorithms. It can be observed that ABC-ssd has higher solution accuracy on all functions.
In addition, the convergence process of the best solution values over 100 runs on the six functions is presented in Figure 2, where the best initial solutions generated by SSD, OBL, and RI are marked, respectively. Moreover, some partially enlarged drawings are also provided in Figure 2 to show how much each population initialization method contributes to improving the quality of initial solutions. It can be seen clearly that SSD provides the highest-quality initial solutions. For the six benchmark functions at the tested dimension settings, SSD is significantly superior to OBL and RI during the whole process of finding the global minimum. There is a tendency that the larger the number of optimization parameters, the more obvious the superiority of SSD. OBL performs second best at finding the global minimum: for unimodal functions, OBL is somewhat superior to RI in terms of initial solutions and convergence performance; for multimodal functions, however, these advantages may disappear.
On the other hand, because of their mutual dependence, we combined the disruptive selection strategy and the improved scout bee to investigate their joint effect on SDABC in terms of population diversity and convergence performance. To begin with, population diversity is defined as follows:

diversity = (1/SN) · Σ_i ‖x_i − x̄‖,

where the sum runs over all SN solutions, ‖·‖ is the Euclidean norm over the D optimization parameters, and x̄ is the average point, defined as

x̄ = (1/SN) · Σ_i x_i.
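The diversity measure above (mean Euclidean distance of the individuals from the population's average point) can be sketched as:

```python
import numpy as np

def population_diversity(foods):
    # Mean Euclidean distance of each individual from the population's
    # average point -- the swarm-diversity measure described in the text.
    mean_point = foods.mean(axis=0)            # the average point
    return float(np.mean(np.linalg.norm(foods - mean_point, axis=1)))
```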
A comparison was conducted between ABC and the standard ABC algorithm augmented with both the disruptive selection strategy and the improved scout bee (DSS for short) on the above six benchmark functions. The population size was set to 20 and the maximum number of cycles to 1000. Note that the value of limit was set as recommended in the literature for ABC and to 20 for DSS; because the definition of the scout bee was changed, we obtained the limit value for DSS through repeated testing. For all functions, each algorithm was run 100 times independently. The convergence performance of the different ABCs and their corresponding population diversity are presented in Figure 3. It can be observed that the population diversities of ABC and DSS are almost the same in the early stage, because the same initialization method is applied to obtain the initial solutions; that is, they start with almost the same diversity towards the global optimum. However, as the iterations increase, DSS shows higher population diversity and better convergence performance than ABC. For the Rosenbrock function, we note that ABC converges fast initially towards a known local optimum; as the procedure proceeds, ABC stays trapped in the local minimum, while DSS gets closer and closer to the global minimum. In a word, the better results are obtained by the combined effects of the disruptive selection strategy and the improved scout bee.
4.2. Comparison between ABC and SDABC
In this section, a set of experiments were conducted to verify the efficiency of our proposed algorithm on the 14 benchmark functions presented in Table 1, with dimensions including 100, and the experimental results were compared with those of ABC. To allow a fair comparison, all the experiments were performed with the following parameter settings: the population size was set to 20 and the maximum number of cycles to 3000. It is necessary to emphasize that the value of limit was set to 200 for ABC and 20 for SDABC; the reason has been given in the previous section. For all functions, each algorithm carried out 50 independent runs. For clarity, the best solutions are marked in bold. The results are shown in Table 3 in terms of the best, worst, mean, and standard deviation of the solutions obtained by each algorithm over the 50 independent runs. In Table 3, "Std" and "AT" denote the standard deviation of solutions and the average elapsed time, respectively. It can be observed that SDABC has higher accuracy than ABC on all the functions; in particular, SDABC can find the optimal solution on five of the functions. Moreover, the average elapsed time of SDABC is less than that of ABC on almost all functions, with only a few exceptions, and on those exceptions the advantage of ABC over SDABC is not significant in terms of computational time. In other words, SDABC has a higher convergence rate and higher accuracy.
In addition, the statistical comparison of SDABC with ABC uses a two-tailed t-test at a 0.05 level of significance. The value "+" in the 9th column of Table 3 indicates that our proposed algorithm performs significantly better than the ABC algorithm, which again shows that SDABC outperforms ABC.
4.3. Comparison with Other ABCs
In this section, in order to further demonstrate the superiority of our proposed algorithm, we compared SDABC with other state-of-the-art ABC variants, namely, OABC, GABC, CABC, and OOABC, on the fourteen functions presented in Table 1. Given space limitations, this paper cannot fully explore all functions at all dimensions. The population size was set to 20 and the maximum number of cycles to 3000. The value of limit was set to 20 for SDABC and 200 for the other ABCs; other parameter settings can be found in [12, 14, 16, 26]. In addition, we used a two-tailed t-test at a 0.05 level of significance to compare the results obtained by the best and the second best algorithms. For clarity, the best and the second best solutions are marked in bold. The value "NA" (not applicable) is used when the best and the second best algorithms achieve the same solution accuracy. The results are shown in Table 4. It can be seen that the performance of SDABC is significantly superior to that of the other ABCs on almost all functions; on the one exception, both OOABC and SDABC find the optimal solution, which means that they achieve the same solution accuracy.
5. Conclusions and Future Work
To further improve the performance of ABC, we first proposed a novel initialization method called search space division, which provides high-quality initial solutions. Subsequently, a disruptive selection strategy was used to improve population diversity. Moreover, in order to accelerate the convergence rate, we changed the definition of the scout bee phase. In addition, we designed two types of experiments to test our proposed algorithm. On the one hand, we conducted single-factor experiments to determine how much each modification contributes to improving the performance of ABC. On the other hand, comprehensive experiments were performed to demonstrate the superiority of our proposed algorithm. The experimental results indicate that SDABC significantly outperforms other ABC variants, achieving higher solution accuracy, faster convergence speed, and stronger algorithm stability.
Our future work will focus on two issues. On the one hand, we will apply SDABC to real-world problems such as data mining, industrial design, and clustering. On the other hand, we will extend SDABC to handle combinatorial optimization problems.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, December 1995.
M. Dorigo and T. Stützle, Ant Colony Optimization, MIT Press, Cambridge, Mass, USA, 2004.
D. Karaboga, “An idea based on honey bee swarm for numerical optimization,” Tech. Rep. TR06, Erciyes University, Kayseri, Turkey, 2005.
E. A. Mohammed, “Generalized opposition-based artificial bee colony algorithm,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '12), pp. 1–4, Brisbane, Australia, 2012.
J. Luo, X.-H. Xiao, L. Fu, and Q. Wang, “Modified artificial bee colony algorithm based on segmental-search strategy,” Control and Decision, vol. 27, no. 9, pp. 1402–1410, 2012.