Computational Intelligence and Neuroscience
Volume 2016, Article ID 8085953, 13 pages
http://dx.doi.org/10.1155/2016/8085953
Research Article

An Enhanced Artificial Bee Colony Algorithm with Solution Acceptance Rule and Probabilistic Multisearch

Department of Industrial Engineering, Uludag University, Görükle Campus, 16059 Bursa, Turkey

Received 3 July 2015; Accepted 9 September 2015

Academic Editor: Ezequiel López-Rubio

Copyright © 2016 Alkın Yurtkuran and Erdal Emel. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The artificial bee colony (ABC) algorithm is a popular swarm-based technique inspired by the intelligent foraging behavior of honeybee swarms. This paper proposes a new variant of the ABC algorithm, namely, the enhanced ABC with solution acceptance rule and probabilistic multisearch (ABC-SA), to address global optimization problems. A new solution acceptance rule is proposed in which, instead of a greedy selection between the old solution and the new candidate solution, worse candidate solutions have a probability of being accepted. Additionally, the acceptance probability of worse candidates is nonlinearly and adaptively decreased throughout the search process. Moreover, to improve the performance of the ABC and balance intensification and diversification, a probabilistic multisearch strategy is presented: three search equations with distinctive characteristics are employed using predetermined search probabilities. By implementing the new solution acceptance rule and the probabilistic multisearch approach, the intensification and diversification performance of the ABC algorithm is improved. The proposed algorithm has been tested on well-known benchmark functions of varying dimensions against novel ABC variants as well as several recent state-of-the-art algorithms. Computational results show that the proposed ABC-SA outperforms other ABC variants and is superior to the state-of-the-art algorithms proposed in the literature.

1. Introduction

Optimization techniques play an important role in the fields of science and engineering. Over the last five decades, numerous algorithms have been developed to solve complex optimization problems. Since more and more present-day problems turn out to be nonlinear, multimodal, discontinuous, or dynamic in nature, derivative-free, nonexact solution methods attract ever-increasing attention. Evolutionary biology and swarm behaviors inspired most of these methods. Several classes of algorithms have been proposed in this evolutionary or swarm intelligence framework, including genetic algorithms [1, 2], memetic algorithms [3], differential evolution (DE) [4], ant colony optimization (ACO) [5], particle swarm optimization (PSO) [6], the artificial bee colony algorithm (ABC) [7], cuckoo search [8], and the firefly algorithm [9].

The ABC is a biologically inspired population-based metaheuristic algorithm that mimics the foraging behavior of honeybee swarms [7]. Due to its simplicity and ease of application, the ABC has been widely used to solve both continuous and discrete optimization problems since its introduction [10]. It has been shown that the ABC tends to suffer from poor intensification performance on complex problems [11–13]. To improve the intensification performance of the ABC, many researchers have focused on the search rules, as they control the tradeoff between diversification and intensification. Diversification means the ability of an algorithm to search for unvisited points in the search region, whereas intensification is the process of refining those points within the neighborhood of previously visited locations to improve solution quality. Various new search strategies, mostly inspired by PSO and DE, have been proposed in the literature. Zhu and Kwong [14] proposed a global best guided ABC, which utilizes the global best individual's information within the search equation, similar to PSO. Gao et al. [15] introduced another variant of global best ABC. Inspired by DE, Gao and Liu [13] introduced a modified version of the ABC in which ABC/Best/1 and ABC/Rand/1 were employed as local search equations. Kang et al. [16] described the Rosenbrock ABC, which combines Rosenbrock's rotational method with the original ABC. To improve diversification, Alatas [11] employed chaotic maps for initialization and chaotic searches within a search strategy. Akay and Karaboga [17] introduced a modified version of the ABC in which the frequency of perturbation is controlled adaptively and a ratio of variance operator was introduced. Liao et al. [18] presented a detailed experimental analysis and comparison of ABC variants with different search equations. Gao et al. [19] introduced two new search equations for the onlooker and employed bee phases and a new robust comparison technique for candidate solutions.
Qiu et al. [20] were inspired by the DE/current-to-best/1 strategy of the DE algorithm and proposed a modified ABC. Banitalebi et al. [21] proposed an enhanced compact ABC, which did not store the actual population of candidate solutions; instead, their approach employed a probabilistic representation. Wang et al. [22] presented a multistrategy ABC, in which a pool of different search strategies is constructed and various search strategies are used during the search process. Gao et al. [23] introduced a bare bones ABC with parameter adaptation and fitness-based neighborhood to improve the intensification performance of the standard ABC. Ma et al. [24] reduced redundant search moves and maintained the diversity of the swarm by introducing a hybrid ABC with life cycle and social learning. Furthermore, the ABC has been successfully applied to solve various types of optimization problems, such as production scheduling [25, 26], vehicle routing [27], the location-allocation problem [28], image segmentation [29], wireless sensor network routing [30], the leaf-constrained minimum spanning tree problem [31], the clustering problem [32], fuel management optimization [33], and many others [34–36]. Readers can refer to Karaboga et al. [10] for an extensive literature review of the ABC and its applications.

This study presents an enhanced ABC with solution acceptance rule and probabilistic multisearch (ABC-SA) in order to solve global optimization problems efficiently. In ABC-SA, three search mechanisms with different diversification and intensification characteristics are employed. Moreover, search mechanism selection probabilities p1, p2, and p3 are introduced to control the balance between diversification and intensification. In our proposed approach, a search mechanism is chosen according to these selection probabilities to generate a new neighbor solution from the current one. Additionally, a solution acceptance rule is implemented, in which not only better solutions but also worse solutions may be accepted by using a probability function. A nonlinearly decreasing acceptance probability function is employed, thus allowing worse solutions to be accepted more readily in the early phases of the search. Therefore, the ABC-SA algorithm explores the search space more widely, especially in the early phases of the search process. By using the solution acceptance rule and implementing different search mechanisms of contrasting nature, ABC-SA balances the trade-off between diversification and intensification efficiently. The proposed approach is tested on benchmark functions with varying dimensions and compared to novel ABC, PSO, and DE variants. Computational results reveal that ABC-SA outperforms the competitor algorithms in terms of solution quality.

The main contributions of the proposed study are as follows:
(i) Three different search mechanisms with varying diversification and intensification abilities are employed. Probabilistic multisearch with predetermined probability values is used to determine the search mechanism that generates each candidate solution. Therefore, ABC-SA explores and exploits the search space efficiently.
(ii) Instead of a greedy selection, a new candidate solution acceptance rule is integrated, where a worse solution may have a chance to be accepted as the new solution. With the help of this new acceptance rule, ABC-SA achieves better diversification performance, specifically in the early phases of the search.

The remainder of this paper is structured as follows: Section 2 presents the traditional ABC; Section 3 introduces the proposed framework; the instances, parameter settings, and computational results are presented in Section 4; finally, Section 5 concludes the paper.

2. Artificial Bee Colony Algorithm

The ABC is inspired by the organizational nature and foraging behavior of honeybee swarms. In the ABC algorithm, the bee colony comprises three kinds of bees: employed bees, onlooker bees, and scout bees. Each bee has a specialized task in the colony to maximize the nectar amount that is stored in the hive. In ABC, each food source is a point in the D-dimensional search space and represents a potential solution to the optimization problem. The amount of nectar in a food source is assumed to be its fitness value. Generally, the number of employed bees and the number of onlooker bees are the same and equal to the number of food sources.

Each employed bee belongs to a food source and is responsible for mining the corresponding food source. Then, employed bees pass the nectar information to onlooker bees in the "dance area." Onlooker bees wait in the hive and select a food source to mine based on the information coming from the employed bees. Here, more beneficial food sources have higher probabilities of being selected by onlooker bees. In ABC, trial counters and a predetermined limit parameter are used to decide whether a food source is abandoned. If a solution represented by a food source does not improve during a number of trials (limit), the food source is abandoned. When a food source is abandoned, the corresponding employed bee becomes a scout bee and randomly generates a new food source to replace the abandoned one.

The ABC algorithm consists of four main steps: initialization, employed bee phase, onlooker bee phase, and scout bee phase. After the initialization step, the other three main steps of the algorithm are carried out repeatedly in a loop until the termination condition is met. The main steps of the ABC algorithm are as follows.

Step 1 (initialization). In the initialization step, the ABC generates a randomly distributed population of SN solutions (food sources), where SN also denotes the number of employed or onlooker bees. Let x_i = (x_i1, x_i2, ..., x_iD) represent the ith food source, where D is the problem size. Each food source is generated within the limited range of the jth index by

x_ij = x_j^min + r_ij (x_j^max − x_j^min),  (1)

where i = 1, ..., SN, j = 1, ..., D, r_ij is a uniformly distributed random real number in [0, 1], and x_j^min and x_j^max are the lower and upper bounds for dimension j, respectively. Moreover, a trial counter for each food source is initialized.
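The initialization step can be sketched as follows (a minimal Python illustration of (1); the function and variable names are ours, not the authors'):

```python
import random

def initialize(sn, d, lower, upper):
    """Create SN food sources uniformly at random within [lower[j], upper[j]]
    for each dimension j (eq. (1)), plus a zeroed trial counter per source."""
    population = [
        [lower[j] + random.random() * (upper[j] - lower[j]) for j in range(d)]
        for _ in range(sn)
    ]
    trials = [0] * sn  # one trial counter per food source
    return population, trials
```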

Step 2 (employed bee phase). In the employed bee phase, each employed bee visits a food source and generates a neighboring food source in the vicinity of the selected one. Employed bees search for a new solution, v_i, by performing a local search around each food source as follows:

v_ij = x_ij + φ_ij (x_ij − x_kj),  (2)

where j is a randomly selected index and x_k is a randomly chosen food source that is not equal to x_i, that is, k ≠ i. φ_ij is a random number within the range [−1, 1] generated specifically for each i and j combination. A greedy selection is applied between x_i and v_i by selecting the better one.
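A sketch of this neighborhood move (eq. (2)); the helper name `employed_move` is illustrative:

```python
import random

def employed_move(population, i):
    """Generate candidate v_i from x_i by perturbing one randomly chosen
    dimension j toward/away from a random partner x_k (k != i), as in (2)."""
    x_i = population[i]
    d = len(x_i)
    j = random.randrange(d)                                   # random dimension
    k = random.choice([s for s in range(len(population)) if s != i])
    phi = random.uniform(-1.0, 1.0)                           # phi_ij in [-1, 1]
    v = list(x_i)
    v[j] = x_i[j] + phi * (x_i[j] - population[k][j])
    return v
```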

Step 3 (onlooker bee phase). Unlike the employed bees, onlooker bees select a food source depending on the probability value p_i, which is determined by the nectar amount associated with that food source. The value of p_i is calculated for the ith food source as follows:

p_i = fit_i / Σ_{n=1}^{SN} fit_n,  (3)

where fit_i is the fitness value of solution i, calculated as in (4) for minimization problems:

fit_i = 1 / (1 + f_i), if f_i ≥ 0;  fit_i = 1 + |f_i|, otherwise,  (4)

where f_i is the objective value of solution i. Different fitness functions are employed for maximization problems. By using this type of roulette wheel based probabilistic selection, better food sources are more likely to be visited by onlooker bees. Therefore, onlooker bees try to find new candidate food sources around good solutions. Once the onlooker bee chooses a food source, it generates a new solution using (2). Similar to the employed bee phase, a greedy selection is carried out between v_i and x_i.
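The fitness transform and roulette-wheel probabilities can be sketched as below, assuming the standard ABC minimization fitness of (4); names are ours:

```python
def abc_fitness(f):
    """Standard ABC fitness transform for minimization (eq. (4))."""
    return 1.0 / (1.0 + f) if f >= 0 else 1.0 + abs(f)

def onlooker_probs(objectives):
    """Selection probabilities p_i = fit_i / sum(fit_n) (eq. (3))."""
    fits = [abc_fitness(f) for f in objectives]
    total = sum(fits)
    return [ft / total for ft in fits]
```

Lower objective values yield higher fitness and therefore higher selection probabilities, which is exactly the bias toward good food sources described above.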

Step 4 (scout bee phase). A trial counter is associated with each food source, which records the number of tries in which the food source could not be improved. If a food source cannot be improved for a predetermined number of tries (limit) during the onlooker and employed bee phases, then the employed bee associated with that food source becomes a scout bee. The scout bee then finds a new food source using (1). By implementing the scout bee phase, the ABC algorithm easily escapes from local minima and improves its diversification performance.
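The scout replacement logic can be sketched as follows (illustrative names; the regeneration follows (1)):

```python
import random

def scout_phase(population, trials, limit, lower, upper):
    """Replace any food source whose trial counter exceeds `limit`
    with a fresh random source, resetting its counter."""
    d = len(lower)
    for i in range(len(population)):
        if trials[i] > limit:
            population[i] = [lower[j] + random.random() * (upper[j] - lower[j])
                             for j in range(d)]
            trials[i] = 0
```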

It should be noted that, in the employed bee phase, a local search is applied to every food source, whereas in the onlooker bee phase better food sources are more likely to be updated. Therefore, in the ABC algorithm, the employed bee phase is responsible for diversification, whereas the onlooker bee phase is responsible for intensification. The flow chart of the ABC is given in Figure 1.

Figure 1: Flowchart of ABC.

3. Proposed Framework

In this section, the proposed algorithm is described in detail. First, a solution acceptance rule is presented. Second, a novel probabilistic multisearch mechanism is proposed. Finally, the complete ABC-SA mechanism is given.

3.1. Solution Acceptance Rule

In order to strengthen the diversification ability of the ABC-SA mechanism, a solution acceptance rule is proposed. Instead of the greedy selection in both the employed and onlooker bee phases, an acceptance probability is given to worse solutions. The main idea behind this acceptance probability is not to restrict the search moves to only better solutions. By accepting a worse solution, the procedure may escape from a local optimum and explore the search space effectively. In the ABC-SA algorithm, if a worse solution is generated, it is accepted if the following condition holds:

r < p_t,  (5)

where r is a random real number within [0, 1), p_t is the acceptance probability given by (6), p_0 denotes the initial probability, and t and T represent the current iteration number and the maximum iteration number, respectively. According to (6), the acceptance probability is nonlinearly decreased from p_0 to zero during the search process: p_t equals p_0 at the start of the search and approaches zero as t approaches T. A typical p_t graph is given in Figure 2 and Algorithm 1 presents the implementation of the solution acceptance rule. At this point, it is important to note that the trial counter is incremented whether a worse candidate solution is accepted or not.
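A sketch of the acceptance rule for a minimization problem is given below. The exact nonlinear schedule of equation (6) is not reproduced here; the quadratic decay used in the code is only an assumed stand-in with the same qualitative behavior (starts at p_0, decreases nonlinearly to zero):

```python
import random

def accept(candidate_obj, current_obj, t, max_iter, p0):
    """Accept better candidates always; accept worse ones with probability p_t.
    NOTE: p_t = p0 * (1 - t/max_iter)**2 is an assumed illustrative schedule,
    not the authors' exact equation (6)."""
    if candidate_obj <= current_obj:       # minimization: better or equal
        return True
    p_t = p0 * (1.0 - t / max_iter) ** 2   # decreases nonlinearly from p0 to 0
    return random.random() < p_t
```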

Algorithm 1: Solution acceptance rule.
Figure 2: Acceptance probability curve.
3.2. Probabilistic Multisearch Strategy

In standard ABC, a candidate solution is generated using the information of the parent food source with the guidance of the stochastic term φ_ij (x_ij − x_kj) in (2). However, there is no guarantee that a better individual influences the candidate solution; therefore, poor convergence speed and intensification performance are possible. In fact, studying search equations is a trending topic for improving the ABC's performance, and numerous search equations have recently been proposed [13–16, 19, 20, 37, 38]. It is well known that the balance between diversification and intensification is the most critical aspect of any metaheuristic algorithm.

In the ABC-SA approach, instead of employing a single search mechanism throughout the search process, a probabilistic multisearch mechanism with three different search rules is used. A probabilistic selection with predefined probability parameters chooses the search rule within both the employed and onlooker bee phases. The three search rules, which were proposed in [7], [14], and [13], respectively, are as follows:

v_ij = x_ij + φ_ij (x_ij − x_kj),  (7)

v_ij = x_ij + φ_ij (x_ij − x_kj) + ψ_ij (y_j − x_ij),  (8)

v_ij = x_best,j + φ_ij (x_ij − x_kj),  (9)

where x_i is a food source and j is a randomly selected index, with i = 1, ..., SN and j = 1, ..., D, respectively. x_k is a randomly chosen food source where k ≠ i. y stands for the global best solution, whereas x_best is the best solution in the current population. φ_ij represents a real random number within the range [−1, 1] [7]. Finally, ψ_ij is a real random number within the range [0, C], where C is a predetermined number [14].

Equation (7) is the original search rule, which was discussed in the previous section. Equation (8) is presented to improve the intensification capability of ABC; it uses the information provided by the global best solution, similar to PSO. In (9), x_best guides the search with the random effect of the term φ_ij (x_ij − x_kj). Equation (7) has an explorative character, whereas (9) favors intensification. On the other hand, (8) explores the search space using its second term and exploits effectively through its third term; therefore, (8) balances diversification and intensification. In summary, the proposed ABC-SA uses three different search rules to achieve a trade-off between diversification and intensification. In ABC-SA, search probabilities p1, p2, and p3 are introduced, such that p1 + p2 + p3 = 1, to select the search rule to be used in the employed and onlooker bee phases. A roulette wheel method is employed with three cumulative ranges [0, p1], (p1, p1 + p2], and (p1 + p2, 1] assigned to (7), (8), and (9), respectively, where a uniform random number r in [0, 1) determines the selected rule. Algorithm 2 shows the mechanism of probabilistic multisearch.
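The roulette-wheel choice among the three rules can be sketched as below (a minimal illustration; the function name is ours):

```python
import random

def pick_search_rule(p1, p2, p3):
    """Roulette-wheel choice among search rules (7), (8), (9) using the
    cumulative ranges [0, p1], (p1, p1+p2], (p1+p2, 1]."""
    assert abs(p1 + p2 + p3 - 1.0) < 1e-9  # probabilities must sum to 1
    r = random.random()
    if r <= p1:
        return 7
    if r <= p1 + p2:
        return 8
    return 9
```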

Algorithm 2: Probabilistic multisearch.
3.3. Proposed Approach

Algorithm 3 summarizes the ABC-SA framework. The novel parts of the ABC-SA mechanism are the probabilistic multisearch (Lines 9 and 19) and the solution acceptance rule (Lines 10 and 20) sections.

Algorithm 3: ABC-SA framework.

4. Computational Results

4.1. Test Instances

In the literature, many test functions with different characteristics have been used to evaluate algorithms [11, 13, 15, 17, 19–21, 34, 37–40]. Unimodal functions have one local minimum, which is the global optimum; these functions are generally used to test the intensification ability of algorithms. Multimodal functions have multiple local optima, only one of which is the global optimum; therefore, the diversification behavior of algorithms is analyzed on multimodal instances. Separable functions can be written as a sum of functions of one variable, whereas nonseparable functions cannot be reformulated as such subfunctions. In this study, to analyze the performance of the proposed ABC-SA algorithm, 13 scalable benchmark functions with dimensions D = 50, D = 100, and D = 200 are used and listed in Table 1. They are Rosenbrock, Ackley, Rastrigin, Weierstrass, Schwefel 2.26, Shifted Sphere, Shifted Schwefel 1.2, Shifted Rosenbrock, Shifted Rastrigin, Step, Penalized 2, and Alpine. In Table 1, the function label, name, formulation, type (UN: unimodal and nonseparable, MS: multimodal and separable, and MN: multimodal and nonseparable), range, and optimal value are given.

Table 1: Test functions used in experiments.
4.2. Parameters Settings

Parameter settings may have a great influence on the computational results. The ABC-SA mechanism has seven control parameters: the maximum iteration number (maxIter), the initial acceptance probability p0, the population size (SN), limit, p1, p2, and p3. The maximum iteration number is the termination condition. First, maxIter is set to 4,000 and limit is set based on the dimension D of the problem [21]; SN is taken to be 40 for the 50- and 100-dimensional problems and 50 for the 200-dimensional problems [40]; and ψ_ij is set to be a random real number within (0, 1.5) [14]. Then, preliminary experiments were conducted with appropriate combinations of the following parameter values to determine the best settings: p0 = 0.25, 0.20, 0.15, 0.10, and 0.05; p1 = 0.2, 0.4, and 0.6; p2 = 0.2, 0.4, and 0.6; p3 = 0.2, 0.4, and 0.6.

From the results of these pilot studies, the best-performing settings of p0, p1, p2, and p3 were identified; these parameter settings are used for all further experiments.

4.3. Comparison with ABC Variants

In this section, the proposed ABC-SA is implemented and evaluated by benchmarking against other well-known ABC variants, including the original ABC [7], GABC [14], and IABC [13], on problems F1–F13.

The parameters of the test algorithms are set to the original values given in their corresponding papers, except for the maximum number of function evaluations, population size, and limit, which are set to the same values for all ABC variants. ABC-SA, ABC, and GABC implement random initialization mechanisms, whereas IABC employs a chaotic initialization as described in [13]. All algorithms have been simulated in the MATLAB environment and executed on the same computer with an Intel Xeon CPU (2.67 GHz) and 16 GB of memory.

The computational results are presented in Table 2 for the 50-dimensional problems, Table 3 for the 100-dimensional problems, and Table 4 for the 200-dimensional problems. In Tables 2–4, results are given in terms of the mean and standard deviation of the objective values of the global best solutions over repeated runs. All algorithms were run 30 times with random seeds, with the stopping criterion set to 4,000 iterations, which corresponds to approximately 320,000 function evaluations for the 50- and 100-dimensional problems and 400,000 function evaluations for the 200-dimensional problems. For a precise and pairwise comparison, the statistical significance of the difference between the means of two algorithms is analyzed using t-tests with the significance level set to 0.05. In Tables 2–4, "+" in the column next to a competing algorithm shows that ABC-SA outperforms that algorithm, "=" indicates that the difference between ABC-SA and the compared algorithm is not statistically significant, and "−" denotes that the competitor algorithm is better than ABC-SA at the 0.05 significance level.

Table 2: Comparisons of ABC-SA and ABC variants on 50-dimensional problems.
Table 3: Comparisons of ABC-SA and ABC variants on 100-dimensional problems.
Table 4: Comparisons of ABC-SA and ABC variants on 200-dimensional problems.

Tables 2–4 show that, according to the pairwise tests, ABC-SA obtains statistically better results in 25, 28, and 28 out of 39 comparisons for the 50-, 100-, and 200-dimensional problems, respectively. Specifically, ABC-SA is inferior to IABC on F6 and F9 with D = 200 and to GABC on F12 with D = 100. There is no significant difference between the results obtained by ABC and ABC-SA on F1 (D = 50 and 100), F4 (D = 100 and 200), F7 (all dimensions), F8 (D = 200), F10 (D = 50), and F11 (D = 50). Moreover, on F1 (D = 50), F6 (D = 50), F4 (D = 100 and 200), F7 (D = 50 and 100), F8 (D = 200), F10 (D = 50 and 100), F11 (D = 50), and F13 (D = 100), GABC and ABC-SA perform statistically similarly. Further, ABC-SA and IABC perform equally well on F1 (D = 50), F6 (D = 50 and 100), F7 (all dimensions), F8 (D = 200), F10 (all dimensions), F11 (D = 50), and F13 (D = 100). The standard deviations of the results also indicate that ABC-SA has a stable performance. According to the results of Tables 2–4, one can safely conclude that ABC-SA significantly surpasses ABC, GABC, and IABC on the 50-, 100-, and 200-dimensional problems.

To further illustrate the effectiveness of the ABC-SA framework, the convergence curves of some benchmark problems are given in Figure 3. According to the figure, ABC-SA shows better convergence behavior on the majority of the test cases when compared to ABC, GABC, and IABC.

Figure 3: Convergence curves for ABC-SA and ABC variants.

Furthermore, mean acceptance rate curves for the solution acceptance rule in the ABC-SA framework are given in Figure 4, where the acceptance rate is the ratio of accepted worse candidate solutions to all worse candidates generated in an iteration, averaged over the independent runs. The curves in Figure 4 clearly coincide with the acceptance probability curve given in Figure 2; Figure 4 also shows the nonlinear decrease of the acceptance rate throughout the search process.

Figure 4: Mean acceptance rates.
4.4. Comparison with PSO and DE Variants

The performance of ABC-SA is also tested against novel and powerful variants of DE and PSO. The competitor algorithms are self-adapting DE (jDE) [41], adaptive DE with optional external archive (JADE) [42], self-adaptive DE (SaDE) [43], comprehensive learning PSO (CLPSO) [44], self-organizing hierarchical PSO with time-varying acceleration coefficients (HPSO-TVAC) [45], and the fully informed particle swarm (FIPS) [46]. The results of these algorithms are taken directly from the corresponding studies. The experimental results for the 30-dimensional problems are shown in Tables 5 and 6 for the DE and PSO variants, respectively. Some of the benchmark problems are not included in the comparisons, since results on these problems were not reported in the competitor studies. The previous parameter setting for ABC-SA is used, but this time the maximum number of function evaluations (Max.FE) is employed as the stopping criterion. Since the results of the competitor algorithms are taken directly from the corresponding studies, statistical significance tests could not be applied; therefore, in this part of the analysis, the means and standard deviations of the results are compared directly. In Tables 5 and 6, the winning algorithms are indicated in bold according to the mean results of 30 independent runs. As can be seen from Tables 5 and 6, ABC-SA outperforms the other algorithms on all cases except F2, F4, and F12: SaDE performs better than ABC-SA on F4, and HPSO-TVAC outperforms ABC-SA on only F2 and F12. ABC-SA also achieves better results on the majority of the instances in terms of robustness, according to the standard deviations of the results. These results again indicate the effectiveness of ABC-SA when compared to other novel swarm-based and evolutionary algorithms.

Table 5: Comparisons of ABC-SA and DE variants on 30-dimensional problems.
Table 6: Comparisons of ABC-SA and PSO variants on 30-dimensional problems.

5. Conclusion and Future Work

This paper presents a modified ABC algorithm, namely, ABC-SA, enhanced with a solution acceptance rule and a probabilistic multisearch strategy. In ABC-SA, instead of a greedy selection, a new acceptance rule is presented in which a worse candidate solution has a probability of being accepted. Furthermore, to balance the diversification and intensification tendencies of the algorithm, a probabilistic multisearch mechanism is employed, in which a search rule is selected among three alternatives according to predetermined probabilities. Several experimental studies were conducted, and the results show that ABC-SA outperforms other novel ABC variants and state-of-the-art algorithms on the majority of the test cases. Future research will be along the lines of applying ABC-SA to complex engineering problems.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

  1. D. E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, 1989.
  2. J. H. Holland, Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence, MIT Press, 1992.
  3. P. Moscato, “On evolution, search, optimization, genetic algorithms and martial arts: towards memetic algorithms,” C3P Report 826, Caltech Concurrent Computation Program, 1989.
  4. R. Storn and K. Price, “Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces,” Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.
  5. A. Colorni, M. Dorigo, and V. Maniezzo, “Distributed optimization by ant colonies,” in Proceedings of the 1st European Conference on Artificial Life, vol. 142, pp. 134–142, Paris, France, 1991.
  6. R. C. Eberhart and J. Kennedy, “A new optimizer using particle swarm theory,” in Proceedings of the 6th International Symposium on Micro Machine and Human Science, vol. 1, pp. 39–43, New York, NY, USA, October 1995.
  7. D. Karaboga and B. Basturk, “A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm,” Journal of Global Optimization, vol. 39, no. 3, pp. 459–471, 2007.
  8. X.-S. Yang and S. Deb, “Engineering optimisation by cuckoo search,” International Journal of Mathematical Modelling and Numerical Optimisation, vol. 1, no. 4, pp. 330–343, 2010.
  9. I. Fister Jr., X.-S. Yang, and J. Brest, “A comprehensive review of firefly algorithms,” Swarm and Evolutionary Computation, vol. 13, pp. 34–46, 2013.
  10. D. Karaboga, B. Gorkemli, C. Ozturk, and N. Karaboga, “A comprehensive survey: artificial bee colony (ABC) algorithm and applications,” Artificial Intelligence Review, vol. 42, no. 1, pp. 21–57, 2014.
  11. B. Alatas, “Chaotic bee colony algorithms for global numerical optimization,” Expert Systems with Applications, vol. 37, no. 8, pp. 5682–5687, 2010.
  12. A. Banharnsakun, B. Sirinaovakul, and T. Achalakul, “Job shop scheduling with the best-so-far ABC,” Engineering Applications of Artificial Intelligence, vol. 25, no. 3, pp. 583–593, 2012.
  13. W. Gao and S. Liu, “Improved artificial bee colony algorithm for global optimization,” Information Processing Letters, vol. 111, no. 17, pp. 871–882, 2011.
  14. G. Zhu and S. Kwong, “Gbest-guided artificial bee colony algorithm for numerical function optimization,” Applied Mathematics and Computation, vol. 217, no. 7, pp. 3166–3173, 2010.
  15. W. Gao, S. Liu, and L. Huang, “A global best artificial bee colony algorithm for global optimization,” Journal of Computational and Applied Mathematics, vol. 236, no. 11, pp. 2741–2753, 2012.
  16. F. Kang, J. Li, and Z. Ma, “Rosenbrock artificial bee colony algorithm for accurate global optimization of numerical functions,” Information Sciences, vol. 181, no. 16, pp. 3508–3531, 2011.
  17. B. Akay and D. Karaboga, “A modified artificial bee colony algorithm for real-parameter optimization,” Information Sciences, vol. 192, no. 1, pp. 120–142, 2012.
  18. T. Liao, D. Aydin, and T. Stützle, “Artificial bee colonies for continuous optimization: experimental analysis and improvements,” Swarm Intelligence, vol. 7, no. 4, pp. 327–356, 2013.
  19. W.-F. Gao, S.-Y. Liu, and L.-L. Huang, “Enhancing artificial bee colony algorithm using more information-based search equations,” Information Sciences, vol. 270, no. 1, pp. 112–133, 2014.
  20. J. Qiu, J. Wang, D. Yang, and J. Xie, “An artificial bee colony algorithm with modified search strategies for global numerical optimization,” Journal of Theoretical & Applied Information Technology, vol. 48, no. 1, pp. 293–302, 2013.
  21. A. Banitalebi, M. I. A. Aziz, A. Bahar, and Z. A. Aziz, “Enhanced compact artificial bee colony,” Information Sciences, vol. 298, pp. 491–511, 2015.
  22. H. Wang, Z. Wu, S. Rahnamayan, H. Sun, Y. Liu, and J.-S. Pan, “Multi-strategy ensemble artificial bee colony algorithm,” Information Sciences, vol. 279, pp. 587–603, 2014.
  23. W. Gao, F. T. Chan, L. Huang, and S. Liu, “Bare bones artificial bee colony algorithm with parameter adaptation and fitness-based neighborhood,” Information Sciences, vol. 316, pp. 180–200, 2015.
  24. L. Ma, K. Hu, Y. Zhu, and H. Chen, “A hybrid artificial bee colony optimizer by combining with life-cycle, Powell's search and crossover,” Applied Mathematics and Computation, vol. 252, pp. 133–154, 2015.
  25. Q.-K. Pan, M. F. Tasgetiren, P. N. Suganthan, and T. J. Chua, “A discrete artificial bee colony algorithm for the lot-streaming flow shop scheduling problem,” Information Sciences, vol. 181, no. 12, pp. 2455–2468, 2011.
  26. J.-Q. Li, Q.-K. Pan, and K.-Z. Gao, “Pareto-based discrete artificial bee colony algorithm for multi-objective flexible job shop scheduling problems,” The International Journal of Advanced Manufacturing Technology, vol. 55, no. 9–12, pp. 1159–1169, 2011.
  27. W. Y. Szeto, Y. Wu, and S. C. Ho, “An artificial bee colony algorithm for the capacitated vehicle routing problem,” European Journal of Operational Research, vol. 215, no. 1, pp. 126–135, 2011.
  28. A. Yurtkuran and E. Emel, “A modified artificial bee colony algorithm for p-center problems,” The Scientific World Journal, vol. 2014, Article ID 824196, 9 pages, 2014.
  29. M. Ma, J. Liang, M. Guo, Y. Fan, and Y. Yin, “SAR image segmentation based on artificial bee colony algorithm,” Applied Soft Computing, vol. 11, no. 8, pp. 5205–5214, 2011.
  30. D. Karaboga, S. Okdem, and C. Ozturk, “Cluster based wireless sensor network routing using artificial bee colony algorithm,” Wireless Networks, vol. 18, no. 7, pp. 847–860, 2012.
  31. A. Singh, “An artificial bee colony algorithm for the leaf-constrained minimum spanning tree problem,” Applied Soft Computing, vol. 9, no. 2, pp. 625–631, 2009.
  32. D. Karaboga and C. Ozturk, “A novel clustering approach: Artificial Bee Colony (ABC) algorithm,” Applied Soft Computing Journal, vol. 11, no. 1, pp. 652–657, 2011. View at Publisher · View at Google Scholar · View at Scopus
  33. I. M. S. De Oliveira and R. Schirru, “Swarm intelligence of artificial bees applied to in-core fuel management optimization,” Annals of Nuclear Energy, vol. 38, no. 5, pp. 1039–1045, 2011. View at Publisher · View at Google Scholar · View at Scopus
  34. W.-F. Gao, S.-Y. Liu, and F. Jiang, “An improved artificial bee colony algorithm for directing orbits of chaotic systems,” Applied Mathematics and Computation, vol. 218, no. 7, pp. 3868–3879, 2011. View at Publisher · View at Google Scholar · View at MathSciNet · View at Scopus
  35. W.-C. Hong, “Electric load forecasting by seasonal recurrent SVR (support vector regression) with chaotic artificial bee colony algorithm,” Energy, vol. 36, no. 9, pp. 5568–5578, 2011. View at Publisher · View at Google Scholar · View at Scopus
  36. S. K. Kumar, M. K. Tiwari, and R. F. Babiceanu, “Minimisation of supply chain cost with embedded risk using computational intelligence approaches,” International Journal of Production Research, vol. 48, no. 13, pp. 3717–3739, 2010. View at Publisher · View at Google Scholar · View at Scopus
  37. W.-F. Gao and S.-Y. Liu, “A modified artificial bee colony algorithm,” Computers & Operations Research, vol. 39, no. 3, pp. 687–697, 2012. View at Publisher · View at Google Scholar · View at Scopus
  38. W.-F. Gao, S.-Y. Liu, and L.-L. Huang, “A novel artificial bee colony algorithm with Powell's method,” Applied Soft Computing Journal, vol. 13, no. 9, pp. 3763–3775, 2013. View at Publisher · View at Google Scholar · View at Scopus
  39. D. Karaboga and B. Akay, “A comparative study of artificial bee colony algorithm,” Applied Mathematics and Computation, vol. 214, no. 1, pp. 108–132, 2009. View at Publisher · View at Google Scholar · View at MathSciNet · View at Scopus
  40. D. Karaboga and B. Basturk, “On the performance of artificial bee colony (ABC) algorithm,” Applied Soft Computing, vol. 8, no. 1, pp. 687–697, 2008. View at Publisher · View at Google Scholar · View at Scopus
  41. J. Brest, S. Greiner, B. Bošković, M. Mernik, and V. Zumer, “Self-adapting control parameters in differential evolution: a comparative study on numerical benchmark problems,” IEEE Transactions on Evolutionary Computation, vol. 10, no. 6, pp. 646–657, 2006. View at Publisher · View at Google Scholar · View at Scopus
  42. J. Zhang and A. C. Sanderson, “JADE: adaptive differential evolution with optional external archive,” IEEE Transactions on Evolutionary Computation, vol. 13, no. 5, pp. 945–958, 2009. View at Publisher · View at Google Scholar
  43. A. K. Qin, V. L. Huang, and P. N. Suganthan, “Differential evolution algorithm with strategy adaptation for global numerical optimization,” IEEE Transactions on Evolutionary Computation, vol. 13, no. 2, pp. 398–417, 2009. View at Publisher · View at Google Scholar · View at Scopus
  44. J. J. Liang, A. K. Qin, P. N. Suganthan, and S. Baskar, “Comprehensive learning particle swarm optimizer for global optimization of multimodal functions,” IEEE Transactions on Evolutionary Computation, vol. 10, no. 3, pp. 281–295, 2006. View at Publisher · View at Google Scholar · View at Scopus
  45. A. Ratnaweera, S. K. Halgamuge, and H. C. Watson, “Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 240–255, 2004. View at Publisher · View at Google Scholar · View at Scopus
  46. R. Mendes, J. Kennedy, and J. Neves, “The fully informed particle swarm: simpler, maybe better,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 204–210, 2004. View at Publisher · View at Google Scholar · View at Scopus