Research Article  Open Access
A Modified Artificial Bee Colony Algorithm with Firefly Algorithm Strategy for Continuous Optimization Problems
Abstract
Artificial Bee Colony (ABC) algorithm is one of the efficient nature-inspired optimization algorithms for solving continuous problems. It has no sensitive control parameters and has been shown to be competitive with other well-known algorithms. However, slow convergence, premature convergence, and entrapment in local solutions may occur during the search. In this paper, we propose a new Modified Artificial Bee Colony (MABC) algorithm to overcome these problems. All phases of ABC are modified to improve the exploration and exploitation processes. We use a new search equation in the employed bee phase, increase the probabilities for onlooker bees to find better positions, and replace some worst positions by new ones in the onlooker bee phase. Moreover, we use the Firefly algorithm strategy to generate a new position replacing an unupdated position in the scout bee phase. The performance of MABC is tested on selected benchmark functions. Experimental results show that MABC is more effective than ABC and some other modifications of ABC.
1. Introduction
Most continuous optimization problems in application areas such as science, engineering, economics, and management are nonlinear, and their optimal solutions are difficult to find. Several nature-inspired optimization algorithms, including the particle swarm optimization algorithm [1], the Bees algorithm [2], the Firefly algorithm [3], the Bat algorithm [4], and the Artificial Bee Colony (ABC) algorithm [5], have been developed and applied to solve such problems. The ABC algorithm, proposed by Karaboga in 2005, has been shown to be competitive with several other algorithms [6–9]. It is widely used in many applications, for example, image template matching [10], virus evolution [11], inverse analysis problems [12], and clustering problems [13]. According to the experimental results in [5, 9, 14–16], ABC still has some deficiencies in dealing with functions having narrow curving valleys and with multimodal functions. It also suffers from slow convergence [9, 11, 15, 17–21], premature convergence, and getting trapped in local optima [9, 15, 17, 22].
In order to address these deficiencies, several improved versions of ABC have been proposed. In 2010, Zhu and Kwong presented the Gbest-guided ABC (GABC) algorithm [14]. They improved the search equation to increase exploitation in the employed and onlooker bee phases by using the strategy of particle swarm optimization. Their experimental results showed that GABC outperformed ABC for most test problems. In 2012, Gao et al. proposed a modified ABC algorithm (ABC/best) [9], inspired by the differential evolution algorithm, which improves exploitation by searching only around the best solution of the previous iteration. Their results showed better performance compared with ABC. In 2016, Anuar et al. developed ABC with the rate-of-change technique (ABC-ROC), which uses the changing slope of the performance graph to replace the parameter limit in the scout bee phase of ABC [23]; ABC-ROC also produces promising results.
In this paper, we propose the Modified Artificial Bee Colony (MABC) algorithm, which improves all phases of the ABC algorithm. We generate the initial population by using search space division as in [24] to provide high-quality initial solutions. We use a new search equation for employed and onlooker bees, increase the opportunity for onlooker bees to search for better food source positions, replace some worst positions by new ones, and use the strategy of the Firefly algorithm to improve unupdated positions for scout bees. The performance of MABC is tested on selected benchmark functions and compared with those of ABC, GABC, ABC-ROC, and ABC/best.
2. Algorithm Description
2.1. Artificial Bee Colony Algorithm
Artificial bee colony algorithm, proposed by Karaboga in 2005 [5], is a relatively new nature-inspired optimization algorithm which is inspired by the behaviour of honeybee swarms. There are three kinds of bees in the population (employed bees, onlooker bees, and scout bees) working together to search for food source positions. Each employed bee searches for a new food source by communicating with another bee. If a better position is found, the employed bee memorizes this position instead of the old one. The onlooker bees then decide which food sources to explore by using the information from the employed bees. Again, if a better position is found, the onlooker bee also memorizes it. An employed bee whose food source has not improved for a period of time becomes a scout bee and generates a new position for the next search cycle. The ABC algorithm can be described as follows.

(1) Initialization: The initial food source x_i = (x_{i1}, ..., x_{iD}) associated with the i-th bee is generated by

x_{ij} = lb_j + r_{ij} (ub_j − lb_j),    (1)

for i = 1, ..., SN and j = 1, ..., D, where SN is the number of bees, D is the number of variables (the dimension), r_{ij} is a random number in [0, 1], and lb_j and ub_j are the lower and upper bounds for dimension j.

(2) Employed bee phase: The i-th bee shares information with the k-th bee and generates a new food source v_i by the equation

v_{ij} = x_{ij} + φ_{ij} (x_{ij} − x_{kj}),    (2)

where k is randomly chosen from 1 to SN such that k ≠ i, j is randomly chosen from 1 to D, and φ_{ij} is a random number in [−1, 1]. Note that v_i differs from x_i only in the j-th component. The new position v_i is evaluated and compared with the old x_i. If f(v_i) < f(x_i), then x_i is replaced by v_i; otherwise, x_i is kept and trial_i = trial_i + 1, where trial_i is the counter of unimproved trials.

(3) Onlooker bee phase: Onlooker bees select food sources depending on their quality using the probability values

p_i = fit_i / Σ_{n=1}^{SN} fit_n,    (3)

where

fit_i = 1 / (1 + f(x_i)) if f(x_i) ≥ 0, and fit_i = 1 + |f(x_i)| otherwise.    (4)

If rand < p_i, then a new v_i is generated by the same equation (2). If f(v_i) < f(x_i), then x_i is replaced by v_i; otherwise, x_i is retained and trial_i = trial_i + 1.

(4) Scout bee phase: If a food source x_i cannot be improved within the limiting number of trials (trial_i > limit), then a new position for x_i is generated by (1).

(5) Find the best position x_best and the best value f(x_best).

(6) Repeat steps (2)–(5) until the stopping criterion is reached.
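The steps above can be sketched in Python. This is a minimal illustration of plain ABC under our own naming and parameter choices (the paper prescribes no specific implementation); it minimizes the Sphere function as an example:

```python
import random

def abc_minimize(f, lb, ub, n_bees=20, limit=50, max_iter=200, seed=1):
    """Minimal ABC following steps (1)-(6): employed, onlooker, scout phases."""
    rng = random.Random(seed)
    D = len(lb)
    # (1) Initialization: x_ij = lb_j + r_ij * (ub_j - lb_j)
    X = [[lb[j] + rng.random() * (ub[j] - lb[j]) for j in range(D)]
         for _ in range(n_bees)]
    fx = [f(x) for x in X]
    trial = [0] * n_bees
    best_x, best_f = None, float("inf")

    def neighbour(i):
        # (2) v_ij = x_ij + phi * (x_ij - x_kj), only component j changes
        k = rng.choice([m for m in range(n_bees) if m != i])
        j = rng.randrange(D)
        v = X[i][:]
        v[j] = X[i][j] + rng.uniform(-1.0, 1.0) * (X[i][j] - X[k][j])
        v[j] = min(max(v[j], lb[j]), ub[j])  # clip to the search bounds
        return v

    def greedy(i, v):
        # keep the better of x_i and v; count an unimproved trial otherwise
        fv = f(v)
        if fv < fx[i]:
            X[i], fx[i], trial[i] = v, fv, 0
        else:
            trial[i] += 1

    for _ in range(max_iter):
        for i in range(n_bees):                      # employed bee phase
            greedy(i, neighbour(i))
        # (3)-(4) fitness values and selection probabilities for onlookers
        fit = [1.0 / (1.0 + v) if v >= 0 else 1.0 + abs(v) for v in fx]
        total = sum(fit)
        for _ in range(n_bees):                      # onlooker bee phase
            r, acc, i = rng.random() * total, 0.0, 0
            for i, ft in enumerate(fit):
                acc += ft
                if acc >= r:
                    break
            greedy(i, neighbour(i))
        for i in range(n_bees):                      # scout bee phase
            if trial[i] > limit:
                X[i] = [lb[j] + rng.random() * (ub[j] - lb[j])
                        for j in range(D)]
                fx[i], trial[i] = f(X[i]), 0
        for i in range(n_bees):                      # (5) memorize the best
            if fx[i] < best_f:
                best_x, best_f = X[i][:], fx[i]
    return best_x, best_f

best_x, best_f = abc_minimize(lambda x: sum(v * v for v in x),
                              [-5.0] * 5, [5.0] * 5)
print(best_f)  # very close to 0 for the Sphere function
```

Roulette-wheel selection realizes the probabilities p_i: bees with better fitness are chosen more often in the onlooker phase.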
2.2. Firefly Algorithm
The Firefly algorithm strategy is used in the scout bee phase of the MABC algorithm, so its concept, introduced by Yang in 2008 [3], is reviewed here. The Firefly algorithm (FA) is inspired by the flashing behaviour of fireflies, whose movements depend on the light intensity (brightness) and the attractiveness. The attractiveness is proportional to the brightness of a firefly; i.e., for any two fireflies, the less bright one is attracted by the brighter one. On the other hand, the attractiveness decreases with the distance between two fireflies, where the distance between fireflies i and j is given by the Euclidean norm

r_{ij} = ‖x_i − x_j‖.    (5)

The attractiveness is computed by

β(r) = β_0 e^{−γ r²},    (6)

where β_0 is the original light attractiveness at r = 0 and γ is the light absorption coefficient. For simplicity, β_0 and γ are usually set to 1. From the movement of firefly i toward another, more attractive firefly j, a new position is given by

x_i^{new} = x_i + β_0 e^{−γ r_{ij}²} (x_j − x_i) + α (ε − 1/2),    (7)

where α and ε are random numbers between 0 and 1.
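The FA movement rule can be written as a short function. This is a minimal sketch with β_0 = γ = 1 as suggested above; the function name and the default step size α = 0.2 are illustrative choices, not values from the paper:

```python
import math
import random

def firefly_move(xi, xj, beta0=1.0, gamma=1.0, alpha=0.2, rng=random):
    """One FA move of firefly i toward a brighter firefly j:
    x_i <- x_i + beta0*exp(-gamma*r_ij^2)*(x_j - x_i) + alpha*(eps - 1/2)."""
    r2 = sum((a - b) ** 2 for a, b in zip(xi, xj))  # squared Euclidean distance
    beta = beta0 * math.exp(-gamma * r2)            # attractiveness, eq. (6)
    return [a + beta * (b - a) + alpha * (rng.random() - 0.5)
            for a, b in zip(xi, xj)]

# With the random term switched off, the move is a pure attraction step:
print(firefly_move([0.0], [1.0], alpha=0.0))  # roughly [0.3679], i.e. exp(-1)
```

Note how the attraction weakens exponentially with distance: a far-away bright firefly pulls only weakly, which keeps the swarm from collapsing onto one point too early.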
3. Modified Artificial Bee Colony Algorithm
To construct the MABC algorithm, all phases of ABC are modified. For the initialization, the population is generated based on the search space division (SSD) proposed by He et al. [24]. For the employed bee phase, we improve the search equation of ABC by gradually using the information of the best solution to accelerate the search. For the onlooker bee phase, a prescribed percentage of employed bees are selected with the same probability for additional search moves; then a prescribed percentage of the worst positions are replaced by new positions constructed from the information of the current best solution, with the number of best solutions as a scaling factor providing long-distance moves when many best solutions are found for multimodal functions. For the scout bee phase, the Firefly algorithm strategy is applied to construct a new position by moving an unupdated position toward a better solution based on their distance. The MABC algorithm is proposed as follows.

(1) Initialization: To provide high-quality initial solutions, generate the food sources x_i for i = 1, ..., SN and j = 1, ..., D by using the search space division, where r_{ij} is a random number in [0, 1].

(2) Employed bee phase: The best position x_best is used to generate a new food source v_i by the new search equation (9), in which k is randomly chosen from 1 to SN such that k ≠ i, j is randomly chosen from 1 to D, and φ_{ij} is a random number. If f(v_i) < f(x_i), then x_i is replaced by v_i; otherwise, x_i is kept and trial_i = trial_i + 1, where trial_i is the counter of unimproved trials.

(3) Onlooker bee phase: The probability values used for the onlooker bees' decisions are set to a constant p; if rand < p, then (9) is used to generate a new position v_i. If f(v_i) < f(x_i), then x_i is replaced by v_i; otherwise, x_i is retained and trial_i = trial_i + 1. In addition, the worst positions are replaced by new ones using equation (10), in which the indexes of the prescribed percentage of worst positions are selected, two further indexes are randomly chosen from 1 to SN and kept distinct for each position, the coefficients are random numbers in [0, 1], and the scaling factor is the number of best positions obtained in the previous generation.

(4) Scout bee phase: Generate a new position for an unupdated position x_i of a scout bee by the Firefly algorithm strategy (11), where the bee moves toward the position with the first index j such that f(x_j) < f(x_i).
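The scout-phase idea can be sketched as follows. The paper's exact equation (11) is not reproduced here, so this only illustrates the stated strategy (move an abandoned position toward the first position with a better objective value, weighted by the firefly attractiveness); all names and defaults are our own assumptions:

```python
import math
import random

def scout_update(X, fx, i, beta0=1.0, gamma=1.0, alpha=0.2, rng=random):
    """Replace an abandoned position X[i] by moving it toward the first
    position whose objective value fx[j] is better, instead of restarting
    it at random as plain ABC does."""
    better = [j for j in range(len(X)) if fx[j] < fx[i]]
    if not better:
        return X[i][:]                    # x_i is already the best position
    j = better[0]                         # first index with a better value
    r2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
    beta = beta0 * math.exp(-gamma * r2)  # firefly attractiveness
    return [a + beta * (b - a) + alpha * (rng.random() - 0.5)
            for a, b in zip(X[i], X[j])]

X = [[0.0, 0.0], [1.0, 1.0]]
fx = [2.0, 0.5]
print(scout_update(X, fx, 0, alpha=0.0))  # pulled toward [1, 1]
```

Compared with the random restart of plain ABC, this keeps the scout near promising regions while the random term α(ε − 1/2) preserves some exploration.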
4. Experimental Results and Discussion
To compare the performance of the MABC algorithm with those of other methods, three experiments are conducted using different settings and performance measurements. We select 8 benchmark functions consisting of 2 functions from each of 4 different types: unimodal and separable (US), unimodal and nonseparable (UN), multimodal and separable (MS), and multimodal and nonseparable (MN). Their descriptions and 2D surface plots are shown in Table 1 and Figure 1, respectively.

(a) f1: Sphere function
(b) f2: SumSquares function
(c) f3: Rosenbrock function
(d) f4: Schwefel 2.22 function
(e) f5: Rastrigin function
(f) f6: Schwefel function
(g) f7: Ackley function
(h) f8: Griewank function
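For reference, four of the eight benchmark functions (one per type) in their standard textbook forms; the exact variants and bounds used in Table 1 may differ:

```python
import math

def sphere(x):        # US: sum of squares, minimum 0 at the origin
    return sum(v * v for v in x)

def rosenbrock(x):    # UN: narrow curving valley, minimum 0 at (1, ..., 1)
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))

def rastrigin(x):     # MS: highly multimodal, minimum 0 at the origin
    return sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) + 10.0 for v in x)

def griewank(x):      # MN: minimum 0 at the origin
    s = sum(v * v for v in x) / 4000.0
    p = math.prod(math.cos(v / math.sqrt(i + 1)) for i, v in enumerate(x))
    return s - p + 1.0
```

Rosenbrock's valley is what makes the third experiment below hard: the global minimum sits inside a long, flat, curved trough that coordinate-wise moves traverse slowly.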
For the first experiment, we compare the performances of MABC, ABC [5], GABC [14], ABC/best/1, and ABC/best/2 [9]. The population size, the limit, and the maximum number of generations are set as in [5, 9, 14], and the MABC algorithm is run independently the same number of times. Table 2 shows the mean and SD of the values obtained by our MABC compared with those of ABC, GABC, ABC/best/1, and ABC/best/2 as reported in [5, 9, 14]. The best values are indicated in bold, and values below a small threshold are reported as zero. In addition, a two-tailed t-test at a 0.05 level of significance is used to compare the performance of MABC with those of ABC, GABC, ABC/best/1, and ABC/best/2, in this order. The symbols “+”, “0”, and “–” denote that MABC performs significantly better than, similarly to, or worse than the compared method, respectively. The results show that MABC clearly outperforms ABC and GABC for almost all 10 cases. Compared to ABC/best/1 and ABC/best/2, MABC performs better for 3 cases and similarly for 6 cases; there is only one case in which it performs slightly worse than ABC/best/1.
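As an illustration of the statistical comparison, a two-sample t statistic over independent runs can be computed as below. The paper does not state which t-test variant was used, so the unequal-variance (Welch) form here is an assumption:

```python
import math
import statistics

def welch_t(a, b):
    """Welch two-sample t statistic and degrees of freedom for comparing
    the mean results of two algorithms over independent runs."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    se2 = va / na + vb / nb                    # squared standard error
    t = (statistics.mean(a) - statistics.mean(b)) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df
```

At the 0.05 level, |t| is compared with the two-tailed critical value for df degrees of freedom (about 2.0 for df around 30 or more); exceeding it corresponds to a “+” or “–” entry, otherwise “0”.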

For the second experiment, we compare the performances of MABC, ABC [5], and ABC-ROC [23]. The parameters are set the same as in [23], and the MABC algorithm is run independently the same number of times. Table 3 presents the mean and SD of the values obtained by MABC compared with those of ABC and ABC-ROC as reported in [23]. The best value for each function is highlighted in boldface. We use a two-tailed t-test at a significance level of 0.05 to compare the performance of MABC with those of ABC and ABC-ROC, respectively. The results show that MABC gives the best values for all 5 test functions; it significantly outperforms both ABC and ABC-ROC for 2 cases and performs similarly for 3 cases.

The first two experiments use a relatively low maximum number of generations, as can be seen from the best values obtained for the Rosenbrock function in high dimensions. To be able to solve the Rosenbrock function and to better compare the convergence performances of MABC and ABC, we conduct a third experiment with a more demanding value to reach (VTR) and a maximum number of function evaluations related to the dimension D. The dimensions are also varied, while the other parameters are set as in the second experiment. Both algorithms are run repeatedly, and the number of successful runs (NS), the mean number of function evaluations (NF), and the percentage standard deviation of the function evaluations (%SD) are reported in Table 4. The last column of the table shows the performance of MABC compared with ABC at a 0.05 level of significance. The symbols “+” and “0” denote that MABC performs significantly better than or similarly to ABC, and “++” denotes that our MABC’s solutions reach the VTR while those of ABC cannot. The results show that MABC succeeds in all runs for all test functions, whereas ABC succeeds for almost all functions except the Rosenbrock function, for which it fails in all runs. Moreover, the NF values obtained by MABC are smaller than those obtained by ABC for all cases; our MABC significantly outperforms ABC for 33 out of 35 cases and performs similarly for 2 cases. For the 5 cases of the Rosenbrock function, MABC’s solutions reach the VTR while those of ABC cannot. This indicates that the MABC algorithm is more effective.
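The success-based measurements of the third experiment can be sketched generically; `run_once` and all names here are hypothetical placeholders for one run of an optimizer against a VTR:

```python
import statistics

def success_stats(run_once, n_runs=30, seed0=0):
    """NS, mean NF, and %SD over repeated runs. run_once(seed) must return
    (reached_vtr, n_function_evaluations) for one run of an optimizer."""
    results = [run_once(s) for s in range(seed0, seed0 + n_runs)]
    nf = [n for ok, n in results if ok]    # evaluations of successful runs
    ns = len(nf)                           # number of successful runs (NS)
    mean_nf = statistics.mean(nf) if nf else float("nan")
    pct_sd = 100.0 * statistics.stdev(nf) / mean_nf if ns > 1 else 0.0
    return ns, mean_nf, pct_sd
```

A run counts as successful when it reaches the VTR within the evaluation budget; NF and %SD are then computed over the successful runs only.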

We also show the convergence graphs of MABC and ABC in Figure 2, where the NF values are plotted against their corresponding VTR values. The graphs show that MABC converges faster than ABC for all test functions.
5. Conclusions
We have presented an efficient modification of the ABC algorithm, called MABC, which improves all phases of ABC: the initial population is generated by using search space division; new search equations are used in the employed bee and onlooker bee phases; the opportunity for onlooker bees to find better solutions is increased, and some worst positions are replaced by new ones depending on the best position; and unupdated positions are improved in the scout bee phase. Extensive experiments show that MABC converges faster than ABC and performs better than ABC and several previously proposed modifications of ABC.
Data Availability
The data used to support the findings of this current study are included within the article.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
Amnat Panniem acknowledges the Development and Promotion of Science and Technology Talents Project (DPST) for the financial support.
References
[1] J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks (ICNN ’95), vol. 4, pp. 1942–1948, Perth, Western Australia, November–December 1995.
[2] D. T. Pham, A. Ghanbarzadeh, E. Koç, and S. Otri, The Bees Algorithm, Technical Note, Manufacturing Engineering Centre, Cardiff University, UK, 2005.
[3] X.-S. Yang, “Firefly algorithms for multimodal optimization,” in Stochastic Algorithms: Foundations and Applications, vol. 5792 of Lecture Notes in Computer Science, pp. 169–178, Springer, Berlin, Germany, 2009.
[4] X.-S. Yang, “A new metaheuristic bat-inspired algorithm,” in Nature Inspired Cooperative Strategies for Optimization (NICSO 2010), J. R. Gonzalez, D. A. Pelta, C. Cruz, G. Terrazas, and N. Krasnogor, Eds., vol. 284 of Studies in Computational Intelligence, pp. 65–74, Springer, Berlin, Germany, 2010.
[5] D. Karaboga, “An idea based on honey bee swarm for numerical optimization,” Technical Report TR06, Erciyes University, Kayseri, Turkey, 2005.
[6] D. Karaboga and B. Basturk, “A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm,” Journal of Global Optimization, vol. 39, no. 3, pp. 459–471, 2007.
[7] D. Karaboga and B. Basturk, “On the performance of artificial bee colony (ABC) algorithm,” Applied Soft Computing, vol. 8, no. 1, pp. 687–697, 2008.
[8] D. Karaboga and B. Akay, “A comparative study of artificial bee colony algorithm,” Applied Mathematics and Computation, vol. 214, no. 1, pp. 108–132, 2009.
[9] W. Gao, S. Liu, and L. Huang, “A global best artificial bee colony algorithm for global optimization,” Journal of Computational and Applied Mathematics, vol. 236, no. 11, pp. 2741–2753, 2012.
[10] B. Li, L. G. Gong, and Y. Li, “A novel artificial bee colony algorithm based on internal-feedback strategy for image template matching,” The Scientific World Journal, vol. 2014, Article ID 906861, 14 pages, 2014.
[11] Y. Cheng, “Modified ABC algorithm in virus evolution,” in Proceedings of the 2016 IEEE International Symposium on Computer, Consumer and Control (IS3C 2016), pp. 805–808, China, July 2016.
[12] F. Kang, J. Li, and Q. Xu, “Structural inverse analysis by hybrid simplex artificial bee colony algorithms,” Computers & Structures, vol. 87, no. 13-14, pp. 861–870, 2009.
[13] W. Zou, Y. Zhu, H. Chen, and X. Sui, “A clustering approach using cooperative artificial bee colony algorithm,” Discrete Dynamics in Nature and Society, vol. 2010, Article ID 459796, 16 pages, 2010.
[14] G. Zhu and S. Kwong, “Gbest-guided artificial bee colony algorithm for numerical function optimization,” Applied Mathematics and Computation, vol. 217, no. 7, pp. 3166–3173, 2010.
[15] G. Li, P. Niu, and X. Xiao, “Development and investigation of efficient artificial bee colony algorithm for numerical function optimization,” Applied Soft Computing, vol. 12, no. 1, pp. 320–332, 2012.
[16] F. Kang, J. J. Li, and H. J. Li, “Artificial bee colony algorithm and pattern search hybridized for global optimization,” Applied Soft Computing, vol. 13, no. 4, pp. 1781–1791, 2013.
[17] X. Kong, S. Liu, and Z. Wang, “An improved artificial bee colony algorithm and its application,” International Journal of Signal Processing, Image Processing and Pattern Recognition, vol. 6, no. 6, pp. 259–274, 2013.
[18] M. Alam, M. Islam, and K. Murase, “Artificial bee colony algorithm with improved exploration for numerical function optimization,” in Proceedings of the 13th International Conference on Intelligent Data Engineering and Automated Learning, pp. 1–8, Natal, Brazil, 2011.
[19] N. Sulaiman, J. Mohamad-Saleh, and A. G. Abro, “A modified artificial bee colony (JA-ABC) optimization algorithm,” in Proceedings of the 2013 International Conference on Applied Mathematics and Computational Methods in Engineering, pp. 74–79, Rhodes Island, Greece, 2013.
[20] W. Liu, “A multi-strategy optimization improved artificial bee colony algorithm,” The Scientific World Journal, vol. 2014, Article ID 129483, 10 pages, 2014.
[21] Y. Gaowei and L. Chuangqin, “An effective refinement artificial bee colony optimization algorithm based on chaotic search and application for PID control tuning,” Journal of Computational Information Systems, vol. 7, no. 9, pp. 3309–3316, 2011.
[22] S. Shapla, H. Haque, and M. Alam, “Explorative artificial bee colony algorithm: a novel swarm intelligence based algorithm for continuous function optimization,” International Journal of Science and Research, vol. 4, no. 7, pp. 1339–1344, 2015.
[23] S. Anuar, A. Selamat, and R. Sallehuddin, “A modified scout bee for artificial bee colony algorithm and its performance on optimization problems,” Journal of King Saud University – Computer and Information Sciences, vol. 28, no. 4, pp. 395–406, 2016.
[24] Z. He, C. Ma, X. Wang et al., “A modified artificial bee colony algorithm based on search space division and disruptive selection strategy,” Mathematical Problems in Engineering, vol. 2014, Article ID 432654, 14 pages, 2014.
Copyright
Copyright © 2018 Amnat Panniem and Pikul Puphasuk. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.