Abstract

Artificial bee colony (ABC) is a population-based optimization method that has few control parameters, is easy to implement, and has strong global optimization ability. However, the ABC algorithm has a shortcoming in its position-updated equation, which is good at global search but poor at local search. In order to coordinate global and local search, we propose a self-adaptive ABC algorithm (denoted as SABC) in which an improved position-updated equation is used to guide the search for new candidate individuals. In addition, the good-point-set approach is introduced to produce the initial population and the scout bees. The proposed SABC is tested on 12 well-known problems. The simulation results demonstrate that the proposed SABC algorithm has better search ability than several other ABC variants.

1. Introduction

Population-based optimization algorithms, such as whale optimization algorithm (WOA) [1], flower pollination algorithm (FPA) [2], bacterial foraging optimizer (BFO) [3], cuckoo search algorithm (CSA) [4], fruit fly optimization (FFO) [5], gravitational search optimizer (GSO) [6], and chemical reaction optimization (CRO) [7], have many advantages over classical optimization methods and have been successfully and broadly applied to solve global continuous optimization problems in the last few decades [6].

In this paper, we focus on the ABC algorithm, presented by Karaboga [8], which mimics the foraging behavior of a honey bee swarm. Simulation results show that ABC is superior to many other population-based optimization methods, namely, genetic algorithm (GA), evolution strategies (ES), and particle swarm optimization (PSO) [9, 10]. ABC has attracted wide attention and many applications since its invention in 2005, owing to its simplicity and few control parameters. However, ABC faces some challenging problems, namely, low precision and slow convergence. As a result, many improved versions of ABC have been proposed to overcome these shortcomings. Zhu and Kwong [11] presented an improved Gbest-guided ABC (denoted as GABC) that incorporates the global best (Gbest) individual into the position-updated equation to enhance local search. Luo et al. [12] presented an improved position-updated equation that uses global best individual information to generate offspring. Inspired by differential evolution (DE), Gao et al. [13] developed an improved ABC variant in which the bees search only around the global best individual to enhance exploitation. Xiang and An [14] developed a modified position-updated equation to accelerate convergence; moreover, a chaotic optimization mechanism is introduced to avoid being trapped in local minima. Li et al. [15] presented an improved ABC variant based on an inertia weight and accelerating factors to coordinate global and local search. Gao and Liu [16] developed two modified position-updated equations, namely, "ABC/best/1" and "ABC/rand/1." Kang et al. [17] developed a hybrid method that combines the ABC algorithm and the pattern search method to speed up convergence. Gao et al. [18] presented an improved version of ABC (denoted as CABC) based on a modified position-updated equation; in addition, orthogonal experimental design is introduced.

As is well known, both global and local search are necessary for an optimization algorithm, and they should be well coordinated to obtain good overall performance [11]. Different from previous work, the position-updated equation is modified to self-adaptively combine information from the population and from the global best solution when generating new candidate offspring, so as to coordinate global and local search. Moreover, the good-point-set approach is used to generate the initial population. The proposed SABC is tested on 12 benchmark global optimization problems. The experimental results show that the SABC algorithm is superior to the basic ABC and other ABC variants.

The remainder of this study is organized as follows. The standard ABC is described in Section 2. The improved ABC, called the SABC algorithm, is proposed and analyzed in Section 3. In Section 4, 12 benchmark global optimization problems are used to test the proposed SABC algorithm. Finally, Section 5 summarizes the conclusions.

2. Artificial Bee Colony

ABC is a metaheuristic optimization method inspired by the foraging behavior of a honey bee swarm. At the initialization step, $N$ solutions are generated randomly to construct an initial population:
$$x_{i,j} = x_j^{\min} + \mathrm{rand}(0,1)\left(x_j^{\max} - x_j^{\min}\right), \tag{1}$$
where $i = 1, 2, \ldots, N$ and $j = 1, 2, \ldots, D$; $\mathrm{rand}(0,1)$ is a uniformly distributed random number; and $x_j^{\min}$ and $x_j^{\max}$ are the lower and upper bounds for the dimension $j$, respectively.

In the onlooker bee stage, a food source $x_i$ is chosen with probability
$$p_i = \frac{fit_i}{\sum_{n=1}^{N} fit_n}, \tag{2}$$
where $fit_i$ denotes the fitness value of $x_i$ and is defined as
$$fit_i = \begin{cases} \dfrac{1}{1 + f_i}, & f_i \ge 0, \\ 1 + |f_i|, & f_i < 0, \end{cases} \tag{3}$$
where $f_i$ denotes the objective function value of the decision vector $x_i$.
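
For illustration, the fitness transformation in (3) and the selection probabilities in (2) can be computed with a short Python sketch such as the following (the function names are illustrative and not part of the original ABC description):

import numpy as np

def fitness(obj_values):
    # Eq. (3): fit_i = 1/(1 + f_i) if f_i >= 0, otherwise 1 + |f_i|
    f = np.asarray(obj_values, dtype=float)
    return np.where(f >= 0, 1.0 / (1.0 + f), 1.0 + np.abs(f))

def selection_probabilities(obj_values):
    # Eq. (2): p_i = fit_i / sum_n fit_n (roulette wheel probabilities)
    fit = fitness(obj_values)
    return fit / fit.sum()

# Example: probabilities for a small population of objective values
print(selection_probabilities([0.5, 2.0, -1.5]))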

A candidate food source position $v_i$ can be generated from the old one as
$$v_{i,j} = x_{i,j} + \phi_{i,j}\left(x_{i,j} - x_{k,j}\right), \tag{4}$$
where $x_k$ is a randomly selected individual and $j$ is a randomly selected dimension index; $k$ is different from $i$; and $\phi_{i,j}$ is a uniformly distributed random number in $[-1, 1]$.

Pseudocode of basic ABC is given in Algorithm 1.

(1) Initialize the parameters.
(2) Initialize N solutions to construct an initial population.
(3) Evaluate the fitness value of each solution.
(4) cycle = 1.
(5) Repeat
(6)  Generate an offspring individual by Eq. (4) and evaluate its quality.
(7)  Compare the offspring with its parent and select the better one.
(8)  Calculate the selection probabilities through Eq. (2).
(9)  Produce a random number in [0, 1].
(10)  Generate an offspring individual by Eq. (4) and evaluate its quality.
(11)  Compare the offspring with its parent and select the better one.
(12)  Memorize the best solution achieved so far.
(13)  cycle = cycle + 1.
(14) Until cycle = Maximum cycle number
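
As a minimal sketch of steps (6)-(7) and (10)-(11) of Algorithm 1, the following Python code generates a candidate with the position-updated equation (4) and applies greedy selection; the sphere objective and all variable names are illustrative assumptions, not taken from the paper.

import numpy as np

def generate_candidate(X, i, lb, ub, rng):
    # Eq. (4): v_{i,j} = x_{i,j} + phi * (x_{i,j} - x_{k,j}) on one random dimension j
    N, D = X.shape
    k = rng.choice([m for m in range(N) if m != i])  # random partner, k != i
    j = rng.integers(D)                              # random dimension index
    phi = rng.uniform(-1.0, 1.0)
    v = X[i].copy()
    v[j] = X[i, j] + phi * (X[i, j] - X[k, j])
    return np.clip(v, lb, ub)                        # keep the candidate inside the bounds

def greedy_select(x, fx, v, f):
    # Keep the candidate only if it improves the objective (minimization)
    fv = f(v)
    return (v, fv) if fv < fx else (x, fx)

rng = np.random.default_rng(0)
lb, ub, N, D = -5.0, 5.0, 20, 10
sphere = lambda x: float(np.sum(x ** 2))             # illustrative objective
X = rng.uniform(lb, ub, size=(N, D))
v = generate_candidate(X, 0, lb, ub, rng)
X[0], _ = greedy_select(X[0], sphere(X[0]), v, sphere)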

3. Self-Adaptive Artificial Bee Colony (SABC)

3.1. Population Initialization

Note that, for the ABC algorithm, the initial population should cover the whole search space so that the region containing the optimal solutions can be located more quickly. The good-point-set method is widely applied to generate uniformly distributed candidate individuals [19]. Therefore, we apply the good-point-set technique to produce the initial population of ABC and maintain population diversity. The pseudocode of the good-point-set technique is given in Algorithm 2.

(1) Set the population size N; the decision variable dimension D; the lower bounds lb_j and upper bounds ub_j of the search space; p = 2D + 3.
(2) While p is not prime do
(3)  p = p + 1;
(4) End while
(5) For i = 1 to N do
(6)  For j = 1 to D do
(7)   r_j = 2cos(2πj/p);
(8)   x_{i,j} = lb_j + {i·r_j}(ub_j − lb_j), where {·} denotes the fractional part;
(9)  End for
(10) End for

Figure 1 compares the uniformity of the random method and the good-point-set method; it displays 80 points on the unit square generated by a random number generator and by the good-point-set approach, respectively.

As shown in Figure 1, the candidate individuals produced by the good-point-set technique are distributed more uniformly than those produced by the random method. Thus, the good-point-set method is the preferred technique for generating the initial population.
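
The construction in Algorithm 2 can be sketched in Python as follows. The sketch assumes one common form of the good point set, with p the smallest prime not less than 2D + 3 and r_j = 2cos(2πj/p); the fractional parts {i · r_j} are then mapped to the search bounds. Other constructions (for example, based on square roots of primes) also appear in the literature, so the paper's own construction may differ in details.

import numpy as np

def smallest_prime_at_least(n):
    # Smallest prime p with p >= n, by simple trial division
    def is_prime(m):
        return m >= 2 and all(m % k for k in range(2, int(m ** 0.5) + 1))
    while not is_prime(n):
        n += 1
    return n

def good_point_set_init(N, D, lb, ub):
    # x_{i,j} = lb_j + {i * r_j} * (ub_j - lb_j), with r_j = 2*cos(2*pi*j/p)
    p = smallest_prime_at_least(2 * D + 3)
    r = 2.0 * np.cos(2.0 * np.pi * np.arange(1, D + 1) / p)
    i = np.arange(1, N + 1).reshape(-1, 1)
    frac = np.mod(i * r, 1.0)            # fractional part of i * r_j
    return lb + frac * (ub - lb)

# Example: 80 points on the unit square, as in Figure 1
points = good_point_set_init(80, 2, np.zeros(2), np.ones(2))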

3.2. Modified Search Equation

According to the position-updated equation (4), the offspring individual is produced by moving the previous individual towards (or away from) another individual chosen randomly from the population. However, the randomly chosen individual may be either a good individual or a bad one with equal probability, so the offspring cannot be guaranteed to be better than its parent. In addition, the coefficient $\phi_{i,j}$ in (4) is a random number in the range $[-1, 1]$ and $x_k$ is a randomly selected solution from the population. Thus, the position-updated equation (4) is good at global search but poor at local search [11].

For the sake of enhancing the performance of ABC, one active research trend is to investigate its position-updated equation. As mentioned above, the characteristics of the search equations of ABC have been extensively investigated, and ABC researchers have suggested many empirical guidelines for modifying the search equation during the last decade. It has become clear that some search equations can speed up convergence [11–13], while others are better suited to global search [20]. These experiences are extremely helpful for improving the performance of ABC. "ABC/rand/1" and "ABC/best/1" are employed in many ABC variants and their characteristics have been thoroughly investigated. The equations of "ABC/rand/1" and "ABC/best/1" are stated as [16]
$$v_{i,j} = x_{r_1,j} + \phi_{i,j}\left(x_{r_1,j} - x_{r_2,j}\right), \tag{5}$$
$$v_{i,j} = x_{best,j} + \phi_{i,j}\left(x_{r_1,j} - x_{r_2,j}\right), \tag{6}$$
where $r_1$ and $r_2$ are randomly selected from $\{1, 2, \ldots, N\}$, and $r_1 \neq r_2 \neq i$. $x_{best}$ is the global best solution position vector and $j$ denotes the dimension index. $\phi_{i,j}$ is a uniformly distributed random number in $[-1, 1]$. The "ABC/rand/1" search equation is one of the most often used in the literature. In "ABC/rand/1," all positions are chosen randomly from the population; consequently, it has no bias towards particular positions and explores new positions at random. Therefore, it often shows better global search ability but slow convergence. "ABC/best/1" converges faster because it is guided by the global best position; however, it is more likely to fall into local optima.

In the ABC algorithm, an appropriate coordination of global search and local search is very important for finding the optimum efficiently. In order to strike a balance between the global and local search abilities of ABC, we first introduce a parameter $P$, which coordinates the influence of a randomly selected previous position and the global best position on the new candidate, by modifying (4) to
$$v_{i,j} = P\,x_{r_1,j} + (1 - P)\,x_{best,j} + \phi_{i,j}\left(x_{r_1,j} - x_{r_2,j}\right), \tag{7}$$
where $P$ is utilized to balance the global and local search abilities of individuals, the indices $r_1$ and $r_2$ are randomly selected from $\{1, 2, \ldots, N\}$, and $r_1 \neq r_2 \neq i$. $x_{best}$ is the global best solution position vector and $j$ denotes the $j$th dimension. $\phi_{i,j}$ is a uniformly distributed random number in $[-1, 1]$.

Note that the parameter $P$ is crucial in balancing the abilities of global and local search. When $P$ is equal to 1, (7) becomes (5); when $P$ decreases from 1 to 0, the global search ability of (7) decreases correspondingly; and when $P$ equals 0, (7) reduces to (6). The search behavior of the algorithm is therefore adjusted adaptively by changing the parameter $P$. From (7), a large value of $P$ in the early stage allows individuals to move in all directions of the solution space, instead of moving only towards the best individual, while a small value of $P$ allows the population to converge to the global optimal solution in the later stage. Therefore, a well-tuned $P$ is very important. The parameter $P$ in this paper is calculated according to the decreasing function given in (8), where $t$ denotes the current iteration number and $T$ denotes a predefined maximum number of iterations. Thus, the improved position-updated equation (7) is able to balance the global and local search abilities of SABC.
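
A Python sketch of the self-adaptive update in (7) is given below. Since the schedule (8) is not reproduced here, the sketch assumes a simple linear decrease of P from 1 to 0 purely for illustration; with P = 1 the update reduces to "ABC/rand/1" (5) and with P = 0 to "ABC/best/1" (6).

import numpy as np

def adaptive_P(t, T):
    # Illustrative schedule only: P decreases linearly from 1 to 0 over the run.
    # The paper's own schedule (8) may use a different functional form.
    return 1.0 - t / T

def sabc_candidate(X, x_best, i, P, rng):
    # Eq. (7): v_{i,j} = P*x_{r1,j} + (1 - P)*x_best_j + phi*(x_{r1,j} - x_{r2,j})
    N, D = X.shape
    r1, r2 = rng.choice([m for m in range(N) if m != i], size=2, replace=False)
    j = rng.integers(D)
    phi = rng.uniform(-1.0, 1.0)
    v = X[i].copy()
    v[j] = P * X[r1, j] + (1.0 - P) * x_best[j] + phi * (X[r1, j] - X[r2, j])
    return v

In the early iterations the large P keeps the update close to (5), and as P shrinks the update is increasingly pulled towards the global best, as in (6).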

3.3. Rank Selection

As is well known, the roulette wheel selection mechanism is employed in the classical ABC. This selection strategy gives individuals with higher fitness a greater chance of being selected [14]. To better preserve population diversity, a rank-based selection mechanism is introduced in this paper. The selection probability $p_i$ is defined following [21], as given in (9), in terms of the rank of individual $i$, the population size $N$, a control parameter, the current iteration $t$, and the maximum number of iterations $T$.
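
Since the exact expression of [21] is not reproduced above, the following Python sketch uses a generic linear ranking scheme only to illustrate the idea that selection pressure depends on rank rather than raw fitness; it is not the paper's exact formula.

import numpy as np

def rank_selection_probabilities(obj_values, s=1.5):
    # Generic linear ranking (illustrative only): ranks run from 1 (worst)
    # to N (best), and p_i = ((2 - s) + 2*(s - 1)*(rank_i - 1)/(N - 1)) / N.
    f = np.asarray(obj_values, dtype=float)
    N = f.size
    ranks = np.empty(N)
    ranks[np.argsort(-f)] = np.arange(1, N + 1)   # best (smallest f) gets rank N
    return ((2.0 - s) + 2.0 * (s - 1.0) * (ranks - 1.0) / (N - 1.0)) / N

# Example: the smallest objective value receives the largest probability
print(rank_selection_probabilities([3.0, 1.0, 2.0, 5.0]))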

3.4. The Proposed SABC Algorithm

The flow chart of the proposed SABC algorithm is presented in Figure 2.
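
To make the flow concrete, a compact, self-contained Python sketch of the overall SABC loop is given below. It is an interpretation of Figure 2 under stated assumptions (a linear schedule for P, linear rank selection, and randomly regenerated scouts instead of good-point-set scouts), not the authors' reference implementation.

import numpy as np

def sabc_minimal(f, lb, ub, N=40, T=1000, limit=100, seed=0):
    # Illustrative SABC loop: employed, onlooker, and scout phases with the
    # self-adaptive update of (7). All schedules and constants are assumptions.
    rng = np.random.default_rng(seed)
    D = lb.size
    X = rng.uniform(lb, ub, size=(N, D))       # initial population (random here)
    F = np.array([f(x) for x in X])
    trials = np.zeros(N, dtype=int)            # failure counters for the limit test
    best = X[np.argmin(F)].copy()

    def try_improve(i, P):
        # Eq. (7)-style update on one random dimension, then greedy selection
        r1, r2 = rng.choice([m for m in range(N) if m != i], size=2, replace=False)
        j = rng.integers(D)
        phi = rng.uniform(-1.0, 1.0)
        v = X[i].copy()
        v[j] = P * X[r1, j] + (1 - P) * best[j] + phi * (X[r1, j] - X[r2, j])
        v = np.clip(v, lb, ub)
        fv = f(v)
        if fv < F[i]:
            X[i], F[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1

    for t in range(T):
        P = 1.0 - t / T                        # assumed decreasing schedule for P
        for i in range(N):                     # employed bee phase
            try_improve(i, P)
        ranks = np.empty(N)                    # linear rank selection (s = 1.5)
        ranks[np.argsort(-F)] = np.arange(1, N + 1)
        prob = (0.5 + (ranks - 1) / (N - 1)) / N
        for i in rng.choice(N, size=N, p=prob):  # onlooker bee phase
            try_improve(i, P)
        for i in np.where(trials > limit)[0]:  # scout bee phase
            X[i] = rng.uniform(lb, ub)
            F[i] = f(X[i])
            trials[i] = 0
        best = X[np.argmin(F)].copy()
    return best, float(F.min())

# Example usage on a 5-dimensional sphere function
x_best, f_best = sabc_minimal(lambda x: float(np.sum(x ** 2)),
                              lb=-5.0 * np.ones(5), ub=5.0 * np.ones(5), T=200)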

4. Simulation Results and Comparisons

4.1. Benchmark Test Functions

To evaluate the performance of SABC, 12 benchmark test problems from [14, 15, 22] are used and are shown in Table 1.

4.2. Comparison between SABC and Basic ABC Algorithm

In SABC, the population size N is 40, limit is 100, and the maximum number of iterations is 1000. Each experiment is repeated for 30 independent runs. The experimental results of SABC and the basic ABC are given in Table 2 in terms of the best, the mean, the worst, the standard deviation (St.dev), and the convergence iteration (CI).

As seen from Table 2, SABC is able to obtain the global optimum for three functions (f6, f8, and f10). Moreover, for seven functions (f1–f4, f9, f11, and f12), the results obtained by SABC are very close to the global optimal solution. Compared with ABC, SABC obtains much better results on all 12 functions, that is, f1–f12. The convergence curves of SABC and the basic ABC on the 12 functions are drawn in Figure 3 to show the performance of SABC more clearly.

4.3. Comparison between SABC and ABC Variants

SABC is compared against four high-performance ABC algorithms under three performance evaluation criteria: Mean, St.dev, and CI. The selected ABC variants are the Gbest-guided ABC (GABC) algorithm [11], the efficient and robust ABC (ERABC) algorithm [14], the improved ABC (IABC) algorithm [15], and the prediction and selection ABC (PSABC) algorithm [15]. The parameter settings of SABC are the same as those of ERABC [14] and PSABC [15]. The comparative results are shown in Tables 3 and 4. The results of the other algorithms were taken directly from their original references.

As shown in Tables 3 and 4, SABC achieves much better "mean" and "standard deviation" results than GABC on 11 problems, the exception being function f7, for which the two algorithms obtain similar results. In addition, the convergence iteration of SABC is smaller than that of GABC, except for function f7.

As can be seen from Tables 3 and 4, with respect to ERABC, SABC obtains better "mean" and "standard deviation" values on two functions (f3 and f4) and similar results on four test functions (f6, f8, f9, and f10). However, ERABC provides better results than SABC on six problems (f1, f2, f5, f7, f11, and f12).

From Tables 3 and 4, compared to IABC, SABC obtained better results on five functions (f3, f5, f6, f11, and f12) and similar solutions on four functions (f7, f8, f9, and f10). However, IABC provided better solutions on three functions (f1, f2, and f4) than SABC algorithm.

In comparison with PSABC, in Tables 3 and 4, SABC provided better solutions on four test functions (f3, f5, f6, and f11) and similar results on five functions (f7, f8, f9, f10, and f12). However, the PSABC algorithm found better solutions on three functions (f1, f2, and f4).

4.4. Effects of Limit on the Performance of SABC

To investigate the impact of the parameter limit, four test functions are used: Sphere (f1), Rastrigin (f8), Ackley (f9), and Griewank (f10). Five different values of limit (i.e., 50, 100, 150, 200, and 250) are used to optimize these four high-dimensional test functions. The mean values (Mean) and the standard deviation values (St.dev) are shown in Table 5. The box plots for the different limit values on the four functions are presented in Figure 4.

As shown in Table 5, the parameter limit influences the performance of SABC. When limit is equal to 100, SABC achieves better performance on three functions (f1, f8, and f10). For the Ackley function (f9), the effect of limit on the performance of SABC is very small. From Figure 4 and Table 5, limit = 100 is a suitable choice for the SABC algorithm.

5. Conclusion

An improved version of ABC, called SABC, is developed by combining good-point-set initialization to improve the population distribution, a rank-based selection strategy to enhance the global search ability, and a self-adaptive position-updated equation to balance exploration and exploitation. The proposed SABC is tested on 12 well-known global optimization problems. The simulation results show that the proposed algorithm is superior to the conventional ABC and several other ABC variants. Future work includes extending SABC to constrained optimization and engineering design problems.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (nos. 61403046, 61463009), the Natural Science Foundation of Hunan Province, China (no. 2015JJ3005), Science and Technology Foundation of Guizhou Province (Grant no. 1022), China Scholarship Council, Key Laboratory of Renewable Energy Electric-Technology of Hunan Province, Key Laboratory of Efficient & Clean Energy Utilization of Hunan Province, Hubei Key Laboratory of Power System Design and Test for Electrical Vehicle, and Hunan Province 2011 Collaborative Innovation Center of Clean Energy and Smart Grid.