Journal of Control Science and Engineering
Volume 2017, Article ID 4851493, 13 pages
https://doi.org/10.1155/2017/4851493
Research Article

Self-Adaptive Artificial Bee Colony for Function Optimization

1School of Energy and Power Engineering, Changsha University of Science & Engineering, Changsha 410114, China
2Hubei Key Laboratory of Power System Design and Test for Electrical Vehicle, Xiangyang 441053, China
3Guizhou Key Laboratory of Economics System Simulation, Guizhou University of Finance & Economics, Guiyang 550004, China
4Department of Chemical Engineering, University of Waterloo, ON, Canada N2L 3G1

Correspondence should be addressed to Wen Long; nc.ude.efug.liam@722wl

Received 16 February 2017; Revised 19 April 2017; Accepted 28 May 2017; Published 14 August 2017

Academic Editor: Shen Yin

Copyright © 2017 Mingzhu Tang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Artificial bee colony (ABC) is a population-based optimization method with few control parameters, easy implementation, and strong global search ability. However, the position-update equation of ABC is biased toward exploration: it is good at global search but poor at local search. To coordinate global and local search, we propose a self-adaptive ABC algorithm (SABC) in which an improved position-update equation guides the generation of new candidate individuals. In addition, a good-point-set approach is introduced to produce the initial population and scout bees. The proposed SABC is tested on 12 well-known benchmark problems. The simulation results demonstrate that SABC has better search ability than several other ABC variants.

1. Introduction

Population-based optimization algorithms, such as whale optimization algorithm (WOA) [1], flower pollination algorithm (FPA) [2], bacterial foraging optimizer (BFO) [3], cuckoo search algorithm (CSA) [4], fruit fly optimization (FFO) [5], gravitational search optimizer (GSO) [6], and chemical reaction optimization (CRO) [7], have many advantages over classical optimization methods and have been successfully and broadly applied to solve global continuous optimization problems in the last few decades [6].

In this paper, we focus on the artificial bee colony (ABC) algorithm, presented by Karaboga [8], which mimics the foraging behavior of a honey bee swarm. Simulation results show that ABC is superior to many other population-based optimization methods, such as the genetic algorithm (GA), evolution strategies (ES), and particle swarm optimization (PSO) [9, 10]. ABC has attracted wide attention and many applications since its invention in 2005, owing to its simplicity and few control parameters. However, ABC faces some challenging problems, namely, low precision and slow convergence, and many improved versions of ABC have been proposed to overcome these shortcomings. Zhu and Kwong [11] presented an improved Gbest-guided ABC (GABC) that incorporates the global best (Gbest) individual into the position-update equation to enhance local search. Luo et al. [12] presented an improved position-update equation that uses global best information to generate offspring individuals. Inspired by differential evolution (DE), Gao et al. [13] developed an ABC variant in which the bees search only near the global best solution to enhance exploitation. Xiang and An [14] developed a modified position-update equation to accelerate convergence and introduced a chaotic optimization mechanism to avoid being trapped in local minima. Li et al. [15] presented an improved ABC variant based on inertia weight and accelerating factors to coordinate global and local search. Gao and Liu [16] developed two modified position-update equations, namely, “ABC/best/1” and “ABC/rand/1.” Kang et al. [17] developed a hybrid method that combines ABC with a pattern search method to speed up convergence. Gao et al. [18] presented an improved version of ABC (CABC) based on a modified position-update equation together with orthogonal experimental design.

As we know, for an optimization algorithm, both global and local search are necessary, and they should be well coordinated to obtain good overall performance [11]. Unlike previous work, we modify the position-update equation to self-adaptively combine a randomly selected solution and the global best solution when generating new candidate offspring, thereby coordinating global and local search. Moreover, the good-point-set approach is used to generate the initial population. The proposed SABC is tested on 12 benchmark global optimization problems, and the experimental results show that SABC is superior to basic ABC and other ABC variants.

The remainder of this study is organized as follows. The standard ABC is described in Section 2. The improved ABC, called SABC, is proposed and analyzed in Section 3. In Section 4, 12 benchmark global optimization problems are used to test the proposed SABC algorithm. Finally, Section 5 summarizes the conclusions.

2. Artificial Bee Colony

ABC is a metaheuristic optimization method inspired by the foraging behavior of a honey bee swarm. At the initialization step, SN solutions are generated randomly to construct an initial population:

x_{i,j} = x_j^min + rand(0,1) · (x_j^max − x_j^min), (1)

where i = 1, 2, …, SN and j = 1, 2, …, D; rand(0,1) is a uniformly distributed random number in [0, 1]; and x_j^min and x_j^max are the lower and upper bounds of dimension j, respectively.
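The initialization step can be sketched in Python; this is a minimal illustration, with the population size, dimension, and bounds passed in as plain lists:

```python
import random

def init_population(sn, dim, lb, ub):
    """Generate sn solutions uniformly at random within [lb[j], ub[j]]
    for each dimension j, as in the ABC initialization equation."""
    return [[lb[j] + random.random() * (ub[j] - lb[j]) for j in range(dim)]
            for _ in range(sn)]
```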

In the onlooker bee stage, a food source x_i is chosen with probability

p_i = fit_i / Σ_{n=1}^{SN} fit_n, (2)

where fit_i denotes the fitness value of x_i and is defined as

fit_i = 1 / (1 + f(x_i)) if f(x_i) ≥ 0, and fit_i = 1 + |f(x_i)| otherwise, (3)

where f(x_i) denotes the objective function value of the decision vector x_i.
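The fitness transformation and the roulette probabilities translate directly to code; a brief sketch:

```python
def fitness(f):
    """fit = 1/(1+f) for nonnegative objective values, 1+|f| otherwise."""
    return 1.0 / (1.0 + f) if f >= 0 else 1.0 + abs(f)

def selection_probs(obj_values):
    """Roulette-wheel probabilities p_i = fit_i / sum(fit)."""
    fits = [fitness(f) for f in obj_values]
    total = sum(fits)
    return [ft / total for ft in fits]
```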

A candidate food source position v_i can be generated from the old one as

v_{i,j} = x_{i,j} + φ_{i,j} (x_{i,j} − x_{k,j}), (4)

where the index k is randomly selected from {1, 2, …, SN} with k ≠ i, the dimension j is randomly selected from {1, 2, …, D}, and φ_{i,j} is a uniformly distributed random number in [−1, 1].
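The candidate-generation step can be sketched as follows; the bound clamping at the end is an assumption (common in ABC implementations but not stated above):

```python
import random

def candidate(pop, i, lb, ub):
    """Generate a candidate v_i from x_i by perturbing one random
    dimension towards/away from a randomly chosen neighbor x_k (k != i)."""
    dim = len(pop[i])
    j = random.randrange(dim)
    k = random.choice([s for s in range(len(pop)) if s != i])
    phi = random.uniform(-1.0, 1.0)
    v = pop[i][:]
    v[j] = pop[i][j] + phi * (pop[i][j] - pop[k][j])
    v[j] = min(max(v[j], lb[j]), ub[j])  # clamp to the search bounds (assumed)
    return v
```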

Pseudocode of basic ABC is given in Algorithm 1.

Algorithm 1: Pseudo-code of basic ABC algorithm.

3. Self-Adaptive Artificial Bee Colony (SABC)

3.1. Population Initialization

Note that, for the ABC algorithm, the initial population should cover the whole search space in order to locate the region of the optimal solution faster. The good-point-set method is widely applied to generate uniformly distributed candidate individuals [19]. Therefore, we apply the good-point-set technique to produce the initial population of ABC and maintain population diversity. The pseudocode of the good-point-set technique is given in Algorithm 2.

Algorithm 2: Good point set method.
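As a rough illustration of the idea, the sketch below implements one common good-point-set construction (using gamma_j = 2 cos(2πj/p) with p the smallest prime ≥ 2D + 3); the exact construction used in [19] may differ:

```python
import math

def good_point_set(n, dim):
    """Point k (k = 1..n) is the vector of fractional parts
    ({gamma_1 * k}, ..., {gamma_dim * k})."""
    def is_prime(m):
        return m > 1 and all(m % d for d in range(2, int(m ** 0.5) + 1))
    p = 2 * dim + 3
    while not is_prime(p):
        p += 1
    gamma = [2.0 * math.cos(2.0 * math.pi * (j + 1) / p) for j in range(dim)]
    return [[(gamma[j] * k) % 1.0 for j in range(dim)] for k in range(1, n + 1)]
```

For example, good_point_set(80, 2) yields the kind of low-discrepancy point set on the unit square that Figure 1 illustrates.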

The uniformity of the random method and the good-point-set method is compared in Figure 1, which displays 80 points on the unit square generated by a random number generator and by the good-point-set approach.

Figure 1: 80 points in the unit squares generated through random and good-point-set approaches.

As Figure 1 shows, the candidate individuals produced by the good-point-set technique are distributed more uniformly than those produced by the random method. Thus, the good-point-set method is the preferred technique for generating the initial population.

3.2. Modified Search Equation

According to the position-update equation (4), an offspring individual is produced by moving the previous individual towards (or away from) another individual chosen randomly from the population. However, this randomly chosen individual is equally likely to be a good individual or a bad one, so the offspring cannot be guaranteed to be better than its parent. In addition, the coefficient φ_{i,j} in (4) is a random number in [−1, 1] and x_k is a randomly selected solution from the population. Thus, the position-update equation (4) is good at global search but poor at local search [11].

For the sake of enhancing the performance of ABC, one active research trend is to investigate its position-update equation. As mentioned above, the characteristics of the search equations of ABC have been extensively investigated, and ABC researchers have suggested many empirical guidelines for modifying the search equation during the last decade. It is clear that some search equations can speed up convergence [11–13], while others are suitable for global search [20]. These experiences are extremely helpful for improving the performance of ABC. “ABC/rand/1” and “ABC/best/1” are employed in many ABC variants and their characteristics have been thoroughly investigated. The equations of “ABC/rand/1” and “ABC/best/1” are stated as [16]

v_{i,j} = x_{r1,j} + φ_{i,j} (x_{r1,j} − x_{r2,j}), (5)

v_{i,j} = x_{best,j} + φ_{i,j} (x_{best,j} − x_{r1,j}), (6)

where r1 and r2 are randomly selected from {1, 2, …, SN} with r1 ≠ r2 ≠ i; x_best is the global best solution; j denotes a randomly chosen dimension; and φ_{i,j} is a uniformly distributed random number in [−1, 1]. In the “ABC/rand/1” equation, all positions are randomly chosen from the population, so the equation has no bias towards particular positions; it therefore shows good global search ability but slow convergence. “ABC/best/1” converges faster because it exploits the global best position, but it is more likely to fall into local optima.
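A sketch of the “ABC/rand/1” and “ABC/best/1” search equations, assuming the standard forms from [16] (one randomly chosen dimension j is updated per call):

```python
import random

def abc_rand_1(pop, i, j):
    """ABC/rand/1: perturb a random solution r1 towards/away from r2."""
    r1, r2 = random.sample([s for s in range(len(pop)) if s != i], 2)
    phi = random.uniform(-1.0, 1.0)
    return pop[r1][j] + phi * (pop[r1][j] - pop[r2][j])

def abc_best_1(pop, best, i, j):
    """ABC/best/1: perturb the global best towards/away from a random r1."""
    r1 = random.choice([s for s in range(len(pop)) if s != i])
    phi = random.uniform(-1.0, 1.0)
    return best[j] + phi * (best[j] - pop[r1][j])
```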

In the ABC algorithm, an appropriate coordination of global and local search is very important for finding the optimum effectively. To bring about a balance between the global and local search abilities of ABC, we first introduce a parameter λ, which coordinates the contributions of the random and best-guided components, by modifying (4) to

v_{i,j} = λ [x_{r1,j} + φ_{i,j} (x_{r1,j} − x_{r2,j})] + (1 − λ) [x_{best,j} + φ_{i,j} (x_{best,j} − x_{r1,j})], (7)

where λ ∈ [0, 1] controls the balance between the global and local search abilities of individuals; the indices r1 and r2 are randomly selected from {1, 2, …, SN} with r1 ≠ r2 ≠ i; x_best is the global best solution; j denotes a randomly chosen dimension; and φ_{i,j} is a uniformly distributed random number in [−1, 1].

Note that the parameter λ is crucial in balancing the abilities of global and local search. When λ equals 1, (7) becomes (5); as λ decreases from 1 to 0, the global search ability of (7) decreases correspondingly; and when λ reaches 0, (7) becomes (6). The search behavior of the algorithm therefore adjusts adaptively as λ changes. With a large value of λ in the early stage, individuals are allowed to move in all directions of the solution space instead of moving only towards the best individual; a small value of λ allows the population to converge to the global optimum in the later stage. Therefore, a well-tuned λ is very important. In this paper, λ is computed as a function of t/T that decreases monotonically from 1 to 0 over the run, where t denotes the current iteration number and T denotes a predefined maximum number of iterations. Thus, the improved position-update equation (7) is able to balance the global and local search abilities of SABC.
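The self-adaptive update can be sketched as a λ-weighted blend of a rand/1-style move and a best/1-style move; the linear schedule λ(t) = 1 − t/T used here is an assumption for illustration, as is the exact blending form:

```python
import random

def lam(t, T):
    """Assumed schedule: decreases linearly from 1 (global search)
    to 0 (local search) over the run."""
    return 1.0 - t / T

def sabc_update(pop, best, i, j, t, T):
    """Blend a rand/1-style move and a best/1-style move by lam(t, T)."""
    r1, r2 = random.sample([s for s in range(len(pop)) if s != i], 2)
    phi = random.uniform(-1.0, 1.0)
    w = lam(t, T)
    rand_part = pop[r1][j] + phi * (pop[r1][j] - pop[r2][j])
    best_part = best[j] + phi * (best[j] - pop[r1][j])
    return w * rand_part + (1.0 - w) * best_part
```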

3.3. Rank Selection

As is well known, classical ABC employs the roulette wheel selection mechanism, which gives individuals with higher fitness a greater chance of being selected [14]. To better preserve population diversity, a rank-based selection mechanism is introduced in this paper: the selection probability p_i is defined as in [21] as a function of the individual's rank, the population size N, the current iteration t, and the maximum iteration T.
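Since the exact rank formula from [21] is not reproduced here, the sketch below uses standard linear ranking as a stand-in (the selection pressure parameter s and the absence of iteration dependence are assumptions):

```python
def rank_probs(obj_values, s=1.5):
    """Linear ranking (assumed stand-in): for minimization, the best
    individual gets the largest probability; the p_i sum to 1."""
    n = len(obj_values)
    # worst objective gets rank 0, best gets rank n-1
    order = sorted(range(n), key=lambda i: obj_values[i], reverse=True)
    ranks = [0] * n
    for r, idx in enumerate(order):
        ranks[idx] = r
    return [(2.0 - s) / n + 2.0 * ranks[i] * (s - 1.0) / (n * (n - 1.0))
            for i in range(n)]
```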

3.4. The Proposed SABC Algorithm

The flow chart of the proposed SABC algorithm is presented in Figure 2.

Figure 2: Flow chart of proposed SABC algorithm.

4. Simulation Results and Comparisons

4.1. Benchmark Test Functions

To evaluate the performance of SABC, 12 benchmark test problems from [14, 15, 22] are used and are shown in Table 1.

Table 1: Benchmark test functions.
4.2. Comparison between SABC and Basic ABC Algorithm

In SABC, the population size N is 40, the limit is 100, and the maximum number of iterations is 1000. Each experiment is repeated for 30 independent runs. The experimental results of SABC and basic ABC are given in Table 2 in terms of the best, the mean, the worst, the standard deviation (St.dev), and the convergence iteration (CI).

Table 2: Experimental results of SABC and ABC.

As seen from Table 2, SABC is able to obtain the global optimum for three functions (f6, f8, and f10). Moreover, on seven functions (f1–f4, f9, f11, and f12), the results obtained by SABC are very near the global optimal solution. Compared with ABC, SABC obtains much better results on all 12 functions (f1–f12). The convergence curves of SABC and basic ABC on the 12 functions are drawn in Figure 3 to show the performance of SABC more clearly.

Figure 3: Convergence curves of SABC and basic ABC for the twelve functions.
4.3. Comparison between SABC and ABC Variants

SABC is compared against four high-performance ABC algorithms under three performance evaluation criteria: Mean, St.dev, and CI. The selected ABC variants are the Gbest-guided ABC (GABC) algorithm [11], the efficient and robust ABC (ERABC) algorithm [14], the improved ABC (IABC) algorithm [15], and the prediction and selection ABC (PSABC) algorithm [15]. The parameter settings of SABC are the same as those of ERABC [14] and PSABC [15]. The comparative results are shown in Tables 3 and 4; the results of the other algorithms were taken directly from their original references.

Table 3: Performance comparison of GABC, ERABC, IABC, PSABC, and SABC on test functions (Dim = 30).
Table 4: Performance comparison of GABC, ERABC, IABC, PSABC, and SABC on test functions f1–f12 (Dim = 50).

From Tables 3 and 4, SABC achieves much better “mean” and “standard deviation” results than GABC on 11 problems; on the remaining function f7 the two algorithms obtain similar results. In addition, the convergence iteration of SABC is smaller than that of GABC, except on function f7.

As can be seen from Tables 3 and 4, with respect to ERABC, SABC obtained better “mean” and “standard deviation” values on two functions (f3 and f4) and similar results on four test functions (f6, f8, f9, and f10). However, ERABC algorithm provided better results on six problems (f1-f2, f5, f7, and f11-f12) than SABC.

From Tables 3 and 4, compared to IABC, SABC obtained better results on five functions (f3, f5, f6, f11, and f12) and similar solutions on four functions (f7, f8, f9, and f10). However, IABC provided better solutions on three functions (f1, f2, and f4) than SABC algorithm.

In comparison with PSABC, in Tables 3 and 4, SABC provided better solutions on four test functions (f3, f5, f6, and f11) and similar results on five functions (f7, f8, f9, f10, and f12). However, the PSABC algorithm found better solutions on three functions (f1, f2, and f4).

4.4. Effects of Limit on the Performance of SABC

To investigate the impact of limit, four test functions are used: Sphere (f1), Rastrigin (f8), Ackley (f9), and Griewank (f10). Five different values of limit (50, 100, 150, 200, and 250) are used to optimize these four high-dimensional test functions. The mean values (Mean) and standard deviation values (St.dev) are shown in Table 5. Box plots for the different limit values on the four functions are presented in Figure 4.

Table 5: The experimental results of SABC with different limit values.
Figure 4: Box plots of different limit values for function f1, f8, f9, and f10 over 30 independent runs.

From Table 5, the parameter limit influences the performance of SABC. When limit equals 100, SABC achieves better performance on three functions (f1, f8, and f10); for the Ackley function (f9), the effect of limit on the performance of SABC is very small. From Figure 4 and Table 5, limit = 100 is a suitable choice for the SABC algorithm.

5. Conclusion

An improved version of ABC, called SABC, is developed: good-point-set initialization enhances the population distribution, a rank-based selection strategy enhances the global search ability, and a self-adaptive position-update equation balances exploration and exploitation. The proposed SABC is tested on 12 well-known global optimization problems. The simulation results show that our algorithm is superior to conventional ABC and several other ABC variants. Future work includes extending SABC to constrained and engineering design problems.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (nos. 61403046, 61463009), the Natural Science Foundation of Hunan Province, China (no. 2015JJ3005), Science and Technology Foundation of Guizhou Province (Grant no. 1022), China Scholarship Council, Key Laboratory of Renewable Energy Electric-Technology of Hunan Province, Key Laboratory of Efficient & Clean Energy Utilization of Hunan Province, Hubei Key Laboratory of Power System Design and Test for Electrical Vehicle, and Hunan Province 2011 Collaborative Innovation Center of Clean Energy and Smart Grid.

References

  1. S. Mirjalili and A. Lewis, “The whale optimization algorithm,” Advances in Engineering Software, vol. 95, pp. 51–67, 2016.
  2. E. Nabil, “A modified flower pollination algorithm for global optimization,” Expert Systems with Applications, vol. 57, pp. 192–203, 2016.
  3. W. Zhao and L. Wang, “An effective bacterial foraging optimizer for global optimization,” Information Sciences, vol. 329, pp. 719–735, 2016.
  4. L. Huang, S. Ding, S. Yu, J. Wang, and K. Lu, “Chaos-enhanced cuckoo search optimization algorithms for global optimization,” Applied Mathematical Modelling, vol. 40, no. 5-6, pp. 3860–3875, 2016.
  5. Y. Zhang, G. Cui, J. Wu, W.-T. Pan, and Q. He, “A novel multi-scale cooperative mutation fruit fly optimization algorithm,” Knowledge-Based Systems, vol. 114, pp. 24–35, 2016.
  6. A. Yadav, K. Deep, J. H. Kim, and A. K. Nagar, “Gravitational swarm optimizer for global optimization,” Swarm and Evolutionary Computation, vol. 31, pp. 64–89, 2016.
  7. Z. Y. Li, Z. Li, T. T. Nguyen, and S. M. Chen, “Orthogonal chemical reaction optimization algorithm for global numerical optimization problems,” Expert Systems with Applications, vol. 42, no. 6, pp. 3242–3252, 2015.
  8. D. Karaboga, “An idea based on honey bee swarm for numerical optimization,” Technical Report TR06, Erciyes University, Kayseri, Turkey, 2005.
  9. A. Yurtkuran and E. Emel, “An adaptive artificial bee colony algorithm for global optimization,” Applied Mathematics and Computation, vol. 271, pp. 1004–1023, 2015.
  10. J. Liu, H. Zhu, Q. Ma, L. Zhang, and H. Xu, “An artificial bee colony algorithm with guide of global & local optima and asynchronous scaling factors for numerical optimization,” Applied Soft Computing, vol. 37, pp. 608–618, 2015.
  11. G. Zhu and S. Kwong, “Gbest-guided artificial bee colony algorithm for numerical function optimization,” Applied Mathematics and Computation, vol. 217, no. 7, pp. 3166–3173, 2010.
  12. J. Luo, Q. Wang, and X. Xiao, “A modified artificial bee colony algorithm based on converge-onlookers approach for global optimization,” Applied Mathematics and Computation, vol. 219, no. 20, pp. 10253–10262, 2013.
  13. W. Gao, S. Liu, and L. Huang, “A global best artificial bee colony algorithm for global optimization,” Journal of Computational and Applied Mathematics, vol. 236, no. 11, pp. 2741–2753, 2012.
  14. W.-L. Xiang and M.-Q. An, “An efficient and robust artificial bee colony algorithm for numerical optimization,” Computers & Operations Research, vol. 40, no. 5, pp. 1256–1265, 2013.
  15. G. Li, P. Niu, and X. Xiao, “Development and investigation of efficient artificial bee colony algorithm for numerical function optimization,” Applied Soft Computing, vol. 12, no. 1, pp. 320–332, 2012.
  16. W. Gao and S. Liu, “Improved artificial bee colony algorithm for global optimization,” Information Processing Letters, vol. 111, no. 17, pp. 871–882, 2011.
  17. F. Kang, J. Li, and H. Li, “Artificial bee colony algorithm and pattern search hybridized for global optimization,” Applied Soft Computing, vol. 13, no. 4, pp. 1781–1791, 2013.
  18. W.-F. Gao, S.-Y. Liu, and L.-L. Huang, “A novel artificial bee colony algorithm based on modified search equation and orthogonal learning,” IEEE Transactions on Cybernetics, vol. 43, no. 3, pp. 1011–1024, 2013.
  19. R. Wang, Y. Ru, and Q. Long, “Improved adaptive and multi-group parallel genetic algorithm based on good-point set,” Journal of Software, vol. 4, pp. 348–356, 2009.
  20. W.-F. Gao, S.-Y. Liu, and L.-L. Huang, “Enhancing artificial bee colony algorithm using more information-based search equations,” Information Sciences, vol. 270, pp. 112–133, 2014.
  21. L. Bao and J. Zeng, “Comparison and analysis of the selection mechanism in the artificial bee colony algorithm,” in Proceedings of the Ninth International Conference on Hybrid Intelligent Systems, pp. 411–416, Shenyang, China, 2009.
  22. E. Rashedi, H. Nezamabadi-pour, and S. Saryazdi, “GSA: a gravitational search algorithm,” Information Sciences, vol. 213, pp. 267–289, 2010.