Abstract

The artificial bee colony (ABC) algorithm is a popular swarm intelligence technique inspired by the intelligent foraging behavior of honey bees. However, ABC is good at exploration but poor at exploitation, and its convergence speed is also an issue in some cases. To improve the performance of ABC, a novel ABC combined with the grenade explosion method (GEM) and Cauchy operator, namely, ABCGC, is proposed. GEM is embedded in the onlooker bees’ phase to enhance the exploitation ability and accelerate the convergence of ABCGC; meanwhile, Cauchy operator is introduced into the scout bees’ phase to help ABCGC escape from local optima and further enhance its exploration ability. Two sets of well-known benchmark functions are used to validate the performance of ABCGC. The experiments confirm that ABCGC is significantly superior to ABC and the other competitors; in particular, it converges to the global optimum faster in most cases. These results suggest that ABCGC usually achieves a good balance between exploitation and exploration and can effectively serve as an alternative for global optimization.

1. Introduction

Algorithms for solving optimization problems can be classified into different groups according to their characteristics, such as population-based, stochastic, deterministic, and iterative. An algorithm that works with a group of solutions and tries to improve them is called population-based [1]. Two important classes of population-based optimization algorithms are evolutionary algorithms and swarm intelligence-based algorithms [2]. Swarm intelligence is an innovative artificial intelligence technique based on the collective behavior of self-organized systems [3]. One of the most recent population-based methods is the artificial bee colony (ABC) algorithm [4], which simulates the intelligent behavior of a honey bee swarm seeking high-quality food sources in nature. Karaboga [4] first proposed ABC in 2005 for numerical optimization, and afterwards Karaboga and his collaborators [5–9] applied it to train feed-forward neural networks and to solve integer programming problems, data clustering, constrained optimization problems, and real-parameter optimization problems. Their results [2, 4–6, 8–11] demonstrate that ABC is simple in concept, uses few control parameters, is easy to implement, and is more effective than some other population-based algorithms such as the genetic algorithm (GA), particle swarm optimization (PSO), ant colony optimization (ACO), and the differential evolution (DE) algorithm. Therefore, ABC has aroused much interest and has been successfully used in different fields [1–20].

Exploration and exploitation are extremely important mechanisms in a robust search process. Exploration is the ability to search the space for promising new solutions, while exploitation is the ability to find the optimum in the neighborhood of a good solution [9]. Unfortunately, ABC is good at exploration but poor at exploitation, and its convergence speed is also an issue in some cases [12–18], so many variants [1, 3, 7–9, 12–20] have been proposed in recent years to further improve its performance. For example, [13] introduced a gbest-guided ABC (GABC) that incorporates the information of the global best (gbest) solution into the solution search equation to improve exploitation. [14] proposed a novel ABC with Powell’s method (PABC), in which the search equation in the onlooker bees’ phase is modified and Powell’s method is further used as a local search tool to improve the search ability. [15] designed a modified ABC combined with the best solution, chaotic systems, and an opposition-based learning method (ABCbest), in which the initial population and the scout bees are generated by combining chaotic systems with opposition-based learning, and each bee searches only around the best solution of the previous iteration to improve exploitation. [16] presented a new ABC based on a modified search equation and orthogonal learning (CABC), where the modified search equation generates candidate solutions to improve the search ability of ABC, and orthogonal experimental design is used to form an orthogonal learning strategy that discovers more useful information from the search experiences. [17] developed a bare bone ABC (BABC) that uses a Gaussian search equation to produce new candidate individuals in the onlooker bees’ phase and integrates a parameter adaptation strategy and a fitness-based neighborhood mechanism into the employed bees’ phase for better performance; moreover, the proposed framework was applied to GABC [13], ABCbest [15], and CABC [16], and the corresponding advanced ABCs were termed BGABC, BABCbest, and BCABC, respectively. The experiments suggest that ABC variants perform competitively and effectively. Note that most papers on improving ABC try to improve its local search capability (i.e., exploitation ability) [21]. However, no single algorithm achieves the best solution for all optimization problems; some algorithms only give better solutions for particular problems than others. Hence it is necessary to search for well-improved or new optimization methods.

To address ABC’s poor balance between exploitation and exploration and its convergence speed, we note that the grenade explosion method (GEM) [22–25], which mimics the mechanism of a grenade explosion, usually converges to the global minimum faster than some other methods such as GA and ABC. Thereby, two modified versions of ABC inspired by GEM, namely, GABC1 and GABC2, were first proposed to enhance the basic ABC’s exploitation ability in [18]. GEM is embedded in the employed bees’ phase of GABC1, whereas it is embedded in the onlooker bees’ phase of GABC2. The experiments show that GABC1 has similar or better performance than GABC2 in most cases, but GABC2 performs more robustly and effectively than GABC1; both significantly outperform the competitors. In addition, a mutation operator based on Cauchy random numbers [26], namely, Cauchy operator, is appropriate for global search because of its higher probability of making longer jumps. Thus a hybrid algorithm, ABCGC, which combines ABC with GEM and Cauchy operator, is proposed for global optimization. GEM is embedded in the onlooker bees’ phase, which greatly enhances the exploitation ability and accelerates the convergence of ABCGC; meanwhile, Cauchy operator is introduced into the scout bees’ phase to help ABCGC jump out of local optima and further enhance its exploration ability. To evaluate the performance of the proposed algorithm, a set of 22 well-known benchmark functions is tested on ABCGC, ABC, and 8 ABC variants. Furthermore, a set of 6 classical functions is used to compare ABCGC with ABC and 5 other state-of-the-art algorithms. Besides, the effects of each modification on the performance of ABCGC are analyzed on the same 6 functions.

The rest of this paper is organized as follows. The basic ABC is introduced in Section 2. In Section 3, the proposed algorithm is described in detail. The performance of ABCGC, ABC, and several other algorithms is compared and discussed in Section 4. Finally, conclusions and future work are presented in Section 5.

2. Artificial Bee Colony Algorithm

A honey bee colony manages to discover the highest-quality food sources in nature. Accordingly, ABC [4] takes concepts from the intelligent foraging behavior of honey bees to discover good solutions to an optimization problem. In ABC, the colony of artificial bees contains three groups: employed, onlooker, and scout bees. The first half of the colony consists of employed bees and the second half of onlooker bees. Employed bees search food sources and share information about them to recruit onlooker bees. Onlooker bees select among the food sources found by all employed bees with probability proportional to the quality of each source and further exploit the selected sources. A few employed bees become scout bees when their food sources show no improvement over a predetermined number of cycles, called limit; the scouts abandon those sources and search for new ones. The position of a food source represents a possible solution to the optimization problem, and the profitability of a food source corresponds to the quality (fitness) of the associated solution. Each food source is exploited by only one employed or onlooker bee; in other words, the number of food sources is equal to the number of employed bees.

At the beginning of an optimization, an initial population containing SN solutions is generated randomly, where SN is the number of food sources. Each solution (food source) x_i (i = 1, 2, ..., SN) is a D-dimensional vector, where D denotes the number of optimization parameters (the dimension). The population of positions is then subjected to repeated cycles C = 1, 2, ..., MCN of the search processes of the employed, onlooker, and scout bees, where MCN is the maximum cycle number.

In ABC, the fitness function is defined as

fit_i = 1/(1 + f_i) if f_i >= 0, and fit_i = 1 + |f_i| otherwise, (1)

where f_i is the objective function value of solution x_i and fit_i is the fitness value of x_i.

The probability p_i of a food source being selected by an onlooker bee can be presented by

p_i = fit_i / (fit_1 + fit_2 + ... + fit_SN). (2)

After an employed bee discovers or an onlooker bee selects the food source x_i, the bee exploits a neighboring food source v_i. v_i is determined by changing only one parameter of x_i, namely, x_ij, while the rest of the parameters of v_i keep the same values as x_i. v_ij is generated by

v_ij = x_ij + φ_ij (x_ij - x_kj), (3)

where k in {1, 2, ..., SN} is a randomly chosen index and must be different from i, j in {1, 2, ..., D} is a randomly chosen dimension, and φ_ij is a random number in the range from -1 to 1. Note that after an employed or onlooker bee determines a new candidate food source v_i in the neighborhood of its currently associated food source x_i using (3), a greedy selection mechanism [8–11] is applied between the new food source and the old one; that is to say, if the new food source is better than the old one, it replaces the old one. Otherwise, the old one is retained in the memory.

If the abandoned food source is x_i, a scout bee produces a new food source according to

x_ij = x_min_j + rand(0, 1)(x_max_j - x_min_j), j = 1, 2, ..., D, (4)

where x_min_j and x_max_j are the lower and upper bounds of the variable in dimension j, respectively.
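For concreteness, the four update rules just described can be sketched compactly in Python (a hedged illustration under our own naming; the paper itself provides no code):

```python
import random

def fitness(f):
    # Eq. (1): map an objective value f(x_i) to a fitness value
    return 1.0 / (1.0 + f) if f >= 0 else 1.0 + abs(f)

def selection_probabilities(fits):
    # Eq. (2): selection probability of each food source, proportional to fitness
    total = sum(fits)
    return [fit / total for fit in fits]

def neighbor(x, population, i, lower, upper):
    # Eq. (3): perturb one randomly chosen dimension j of x_i toward or away
    # from a randomly chosen neighbor x_k (k != i)
    k = random.choice([n for n in range(len(population)) if n != i])
    j = random.randrange(len(x))
    phi = random.uniform(-1.0, 1.0)
    v = list(x)
    v[j] = x[j] + phi * (x[j] - population[k][j])
    v[j] = min(max(v[j], lower[j]), upper[j])  # keep within bounds
    return v

def scout(lower, upper):
    # Eq. (4): a scout bee re-initializes an abandoned food source uniformly
    return [lo + random.random() * (hi - lo) for lo, hi in zip(lower, upper)]
```

A greedy selection between `neighbor(...)` and the old source (keep whichever has the lower objective value) completes one bee's move.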

The main steps and some related pseudocodes of ABC are outlined in Algorithm 1.

Step  1. Preset the values of control parameters: D, SN, limit, MCN
Step  2. Initialize the population of food sources x_i (i = 1, 2, ..., SN) using (4)
Step  3. Evaluate the population using (1)
Step  4. cycle = 1
Step  5. repeat
Step  6. Produce new food sources for the employed bees and evaluate them,
   then apply the greedy selection process (employed bees’ phase):
    for i = 1 to SN
     Produce a new food source v_i from x_i (based on k, j) using (3)
     Calculate the fitness of the food source v_i using (1)
     Apply the greedy selection between the new food source v_i and the old one x_i
    end for
Step  7. Calculate the probability values p_i for food sources using (2)
Step  8. Produce new food sources for the onlooker bees from the food sources x_i selected depending on p_i
   and evaluate them, then apply the greedy selection process (onlooker bees’ phase):
    for i = 1 to SN
     if rand(0, 1) < p_i
      Produce a new food source v_i from x_i (based on k, j) using (3)
      Calculate the fitness of the food source v_i using (1)
      Apply the greedy selection between the new food source v_i and the old one x_i
     end if
    end for
Step  9. Determine the abandoned food source for the scout bee, if it exists,
   and replace it with a new randomly produced one using (4) (scout bees’ phase)
Step  10. Memorize the best food source achieved so far
Step  11. cycle = cycle + 1
Step  12. until cycle = MCN
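As a hedged illustration, the cycle of Algorithm 1 can be condensed into a short runnable Python sketch (our own simplified implementation, not the authors' code; `sn`, `limit`, and `mcn` mirror SN, limit, and MCN):

```python
import random

def abc_minimize(f, lower, upper, sn=20, limit=50, mcn=200, seed=1):
    """Condensed sketch of the basic ABC cycle (Algorithm 1); names are ours."""
    rng = random.Random(seed)
    d = len(lower)
    rand_source = lambda: [lo + rng.random() * (hi - lo)
                           for lo, hi in zip(lower, upper)]              # Eq. (4)
    fitness = lambda fx: 1.0 / (1.0 + fx) if fx >= 0 else 1.0 + abs(fx)  # Eq. (1)

    xs = [rand_source() for _ in range(sn)]                              # Step 2
    fs = [f(x) for x in xs]
    trials = [0] * sn
    best_x, best_f = min(zip(xs, fs), key=lambda p: p[1])

    def try_improve(i):
        # Eq. (3): perturb one random dimension toward a random neighbor,
        # then apply greedy selection against the old source
        k = rng.choice([n for n in range(sn) if n != i])
        j = rng.randrange(d)
        v = list(xs[i])
        v[j] += rng.uniform(-1.0, 1.0) * (xs[i][j] - xs[k][j])
        v[j] = min(max(v[j], lower[j]), upper[j])
        fv = f(v)
        if fv < fs[i]:
            xs[i], fs[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1

    for _ in range(mcn):
        for i in range(sn):                     # Step 6: employed bees
            try_improve(i)
        fits = [fitness(fx) for fx in fs]       # Step 7: Eq. (2)
        total = sum(fits)
        for i in range(sn):                     # Step 8: onlooker bees
            if rng.random() < fits[i] / total:
                try_improve(i)
        worst = max(range(sn), key=lambda n: trials[n])   # Step 9: scout bee
        if trials[worst] > limit:
            xs[worst] = rand_source()
            fs[worst], trials[worst] = f(xs[worst]), 0
        i = min(range(sn), key=lambda n: fs[n])           # Step 10
        if fs[i] < best_f:
            best_x, best_f = list(xs[i]), fs[i]
    return best_x, best_f

# Usage: minimize a 5-dimensional Sphere function
sphere = lambda x: sum(v * v for v in x)
best_x, best_f = abc_minimize(sphere, [-5.0] * 5, [5.0] * 5)
```

Here the onlooker phase uses the simplified acceptance test `rand < p_i` from the pseudocode above; implementations that place exactly SN onlookers via roulette-wheel selection behave similarly.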

3. The Proposed Algorithm ABCGC

In this section, ABCGC, a modified version of the basic ABC that uses GEM and Cauchy operator to improve its performance, is proposed. The proposed algorithm follows the general procedure of ABC. The essential difference between ABCGC and ABC lies in the exploitation and exploration strategies adopted by their onlooker bees and scout bees, respectively. That is to say, the main steps of ABCGC remain the same as ABC except for Steps 8 and 9. Figure 1 visualizes the framework of ABCGC.

3.1. New Exploitation Strategy Adopted by Onlooker Bees

In ABC, after an onlooker bee selects the food source x_i, it further exploits a neighboring food source v_i. v_i is determined by changing only one parameter of x_i, namely, x_ij, and v_ij is generated by (3). In (3), v_ij is modified from x_ij based on a comparison with the randomly selected position x_kj from the neighboring solution x_k. As can be seen from (3), the difference between v_ij and x_ij is a difference of position in the randomly chosen dimension j. The dimension j is a crucial parameter since it directly influences the position of the new food source. However, a randomly chosen dimension may not always guide ABC toward higher-fitness positions of food sources and can lead to slow convergence or even make the search easily trapped in a local optimum. Then which dimension among all D dimensions is the best choice for an onlooker bee to update the new candidate food source?

GEM, first presented by Ahrari et al. [22] in 2009, is inspired by the mechanism of a grenade explosion, in which objects are hit by pieces of shrapnel. The damage caused by each piece of shrapnel hitting an object is calculated. A high value of damage per piece in an area indicates that there are valuable objects in that area. To intensify the damage, the next grenade is thrown where the greatest damage occurs; this process results in finding the best place for throwing the grenade. Ahrari et al. [22–25] used a set of classical benchmark functions and some randomly generated multimodal functions to test the performance of GEM; the results show that this simple and robust method can often spot high-fitness regions quite fast and converge to the exact location of the global minimum.

Therefore, GEM is introduced into the onlooker bees’ phase of ABC to select the optimal search dimension instead of a randomly chosen one for each onlooker bee, in the hope that the bees collectively move towards the optimal position. Here, the overall damage caused by the hit is considered the “fitness” of a solution. Note that the number of pieces of shrapnel per grenade should be large enough so that far regions can be explored for new high-fitness regions and the algorithm is not trapped in a local optimum. To eliminate the need for setting the parameters of GEM, there is only one grenade, and it throws D pieces of shrapnel (one per dimension) in each cycle.

In each cycle of ABCGC, D pieces of shrapnel are thrown in all the dimensions (i.e., each dimension is exploited by exactly one piece of shrapnel) to gather information around the current position of the grenade (the old food source); meanwhile, each onlooker bee computes each candidate food source along which a piece of shrapnel is thrown, evaluates the corresponding damage-per-shrapnel value (fitness), and then selects the new candidate food source with the greatest damage (the highest fitness), so that the selected optimal search dimension biases the search towards the global or near-global optimal position more quickly. Consequently, in ABCGC, a new candidate solution based on the optimal search dimension for an onlooker bee is produced by

v_iJ = x_iJ + φ_iJ (x_iJ - x_kJ), (5)

where k in {1, 2, ..., SN} is a randomly chosen index and k ≠ i; J in {1, 2, ..., D} represents the optimal search dimension; φ_iJ is a random number in the range from -1 to 1. The new candidate food source is generated by just changing the value of the old food source x_i in dimension J, namely, x_iJ, while the rest of its parameters keep the same values as x_i; J indicates the dimension in which the candidate obtains the maximum fitness among all D dimensions.

Similarly, after an onlooker bee determines a new candidate food source in the neighborhood of its currently associated food source using (5) and (1), a greedy selection mechanism is applied between the new food source and the old one.

From the above explanation, the pseudocodes of Step 8 of ABCGC are presented in Algorithm 2 (the main differences from ABC are highlighted in bold).

Step  8. Produce and evaluate new food sources for each onlooker bee in all D dimensions of each associated food
   source according to its probability p_i and determine the optimal search dimension J and the best new
   candidate food source v_i using (5) and (1), then apply the greedy selection process (onlooker bees’ phase):
     for i = 1 to SN
        if rand(0, 1) < p_i
          for j = 1 to D
            Produce a new food source v_i_j from x_i (based on k, j) using (3)
            Calculate the fitness fit(v_i_j) of the food source v_i_j using (1)
            if j = 1
              J = j
              v_i = v_i_j
              maxFit = fit(v_i_j)
            else
              if fit(v_i_j) > maxFit
                J = j
                v_i = v_i_j
                maxFit = fit(v_i_j)
              end if
            end if
          end for
          Apply the greedy selection between the new food source v_i and the old one x_i
        end if
     end for
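For a single onlooker bee, the GEM-inspired dimension scan above can be sketched as follows (a hedged Python illustration with our own names; one candidate per dimension is generated with (3), and the fittest one competes greedily with the old source):

```python
import random

def gem_onlooker_move(f, xs, i, lower, upper, rng=random):
    """Hedged sketch of the GEM-inspired onlooker step; names are ours.

    Every dimension j is tried ("one piece of shrapnel per dimension");
    the candidate with the highest fitness (greatest damage) then competes
    greedily with the old food source x_i.
    """
    fitness = lambda fx: 1.0 / (1.0 + fx) if fx >= 0 else 1.0 + abs(fx)  # Eq. (1)
    x = xs[i]
    k = rng.choice([n for n in range(len(xs)) if n != i])  # random neighbor
    best_v, best_fit = None, float("-inf")
    for j in range(len(x)):                  # throw D pieces of shrapnel
        v = list(x)
        v[j] += rng.uniform(-1.0, 1.0) * (x[j] - xs[k][j])
        v[j] = min(max(v[j], lower[j]), upper[j])
        fit = fitness(f(v))
        if fit > best_fit:                   # remember the greatest "damage"
            best_v, best_fit = v, fit
    # greedy selection between the best candidate and the old source
    return best_v if best_fit > fitness(f(x)) else list(x)
```

Because of the greedy step, the returned source is never worse than the old one, which is what makes the scan an exploitation-strengthening move.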

3.2. New Exploration Strategy Adopted by Scout Bees

Although a search can easily fall into a local optimum, Cauchy operator ensures that the search is executed in the global region and does not get trapped in a local optimum prematurely [30]. For example, [26] combined evolutionary programming (EP) with Cauchy operator, and the new EP significantly outperforms the classical EP with Gaussian mutation; [31] applied Bayesian techniques to enhance PSO’s searching ability in the exploitation of past particle positions and used Cauchy operator for exploring better solutions; [32] introduced chaos and Cauchy operator into an improved biogeography-based optimization algorithm to search for the optimal solution of a core backbone network. Their results confirm that Cauchy operator is appropriate for global search due to its higher probability of making longer jumps. Figure 2 shows the probability density functions of the standard Cauchy and Gaussian distributions. Near the origin, the Gaussian function has a large probability, but its probability is almost 0 in the tails far from the origin. The Cauchy distribution has a similar shape to the Gaussian distribution, but it has more probability in its long tail area, so the possibility of the Cauchy distribution generating a random number away from the origin is higher than that of the Gaussian distribution. In other words, Cauchy distribution-based random numbers explore a relatively wider search space than Gaussian distribution-based random numbers. Thereby, Cauchy operator is introduced into the scout bees’ phase of ABC to generate a wider solution instead of a randomly produced one for each scout bee. This helps ABC escape from local optima and further enhances its ability of global exploration.

In ABCGC, if the abandoned food source is x_i, a scout bee produces a new food source according to

x_ij = x_min_j + C(0, 1)(x_max_j - x_min_j), j = 1, 2, ..., D, (6)

where C(0, 1) is the standard Cauchy distribution, which denotes a random value from a Cauchy distribution centered at 0 with a scale parameter equal to 1. Concretely, the density function of C(0, 1) is defined as

f(x) = 1/(π(1 + x^2)), -∞ < x < +∞. (7)

Based on the above considerations, Step 9 of ABCGC is listed in Algorithm 3 (the main differences from ABC are highlighted in bold).

Step  9. Determine the abandoned food source for the scout bee, if it exists, and replace it with a new one produced
   based on Cauchy operator using (6) and (7)    (scout bees’ phase)
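A standard Cauchy variate as in (7) can be drawn by inverse-transform sampling. The sketch below also shows a hedged version of the scout-bee rule: since the exact form of (6) is reconstructed here by replacing the uniform factor of (4) with a Cauchy draw and clipping to the bounds (an assumption on our part), plus a quick check of the heavy tail:

```python
import math
import random

def cauchy_standard(rng=random):
    # Inverse-transform sample of C(0, 1); density f(x) = 1 / (pi * (1 + x^2))
    return math.tan(math.pi * (rng.random() - 0.5))

def cauchy_scout(lower, upper, rng=random):
    # Hedged sketch of Eq. (6) (an assumption on our part): replace the
    # uniform factor of Eq. (4) with a Cauchy draw, then clip to the bounds,
    # so scouts occasionally take much longer jumps than uniform sampling
    x = []
    for lo, hi in zip(lower, upper):
        v = lo + cauchy_standard(rng) * (hi - lo)
        x.append(min(max(v, lo), hi))
    return x

# Heavy tail check: |C(0, 1)| exceeds 3 about 20% of the time,
# whereas a standard Gaussian does so with probability ~0.0027
rng = random.Random(0)
big = sum(abs(cauchy_standard(rng)) > 3 for _ in range(10000)) / 10000
```

The tail-frequency check illustrates why Cauchy-based scouts make long jumps far more often than Gaussian- or uniform-based ones.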

4. Experiments and Discussion

It is a common practice to compare different algorithms using different benchmark problems in the field of optimization [33]. In order to verify the performance of ABCGC, a set of 22 well-known benchmark functions is tested among the proposed algorithm, ABC, and 8 improved ABC algorithms. Besides, a set of 6 classical functions is used to compare ABCGC with ABC and other 5 state-of-the-art algorithms; moreover, the effects of each modification on the performance of ABCGC are analyzed on the same 6 functions. Table 1 shows the list of various algorithms used in the paper.

ABCGC is developed in MATLAB version 7.0.4.365 (R14), and the MATLAB codes of ABC are downloaded from Karaboga’s website (http://mf.erciyes.edu.tr/abc/software.htm). In order to make fair comparisons, the parameter settings for the different methods are chosen to be the same in all our experiments. In ABC, ABCG, ABCC, and ABCGC, the employed bees and the onlooker bees each constitute 50% of the colony, and the number of scout bees is selected as one. All experiments are repeated 50 times and run on a portable computer with an Intel Core i5-3317U 1.70 GHz CPU and 4 GB RAM under Windows 7. The mean and standard deviation of the function values found by the different algorithms under the same conditions are recorded. Since all the functions are minimization problems, a smaller value represents a better one. Results in boldface indicate the best values obtained among the algorithms for each function.

4.1. Comparison among ABC Algorithms

ABCGC, ABC, and 8 ABC variants are used to minimize a set of 22 well-known benchmark functions given in [14, 16, 17]. Tables 2 and 3 show the details of these functions. Different functions have different characteristics. A function is unimodal if it has only one optimum, while a multimodal function has more than one local optimum [2]. Multimodal functions are used to test the ability of algorithms to escape local minima. A function is separable if it can be rewritten as a sum of functions of just one variable [10], while a nonseparable function cannot be rewritten in this form due to the complex interrelation among its variables. Therefore, a nonseparable function is more difficult than a separable one, and the difficulty increases if the function is multimodal. In Tables 2 and 3, Cha denotes the characteristics of a function; U, M, S, and N in the Cha column represent that a function is unimodal, multimodal, separable, and nonseparable, respectively. Among the unimodal functions, most are continuous, one is discontinuous, and one is a noisy function that is unimodal in low dimensions but may have multiple minima in high-dimension cases. The multimodal functions have numbers of local minima that increase exponentially with the problem dimension. ABCGC follows the same parameter settings as the compared algorithms; that is, SN, limit, and the maximum number of function evaluations are fixed at 100, 200, and 200000, respectively, and the dimensions of the tested functions are fixed at 60 and 200. The results of the compared algorithms are all derived directly from [14, 17]. Table 4 shows the comparison results.

As seen from Table 4, all the algorithms easily find the minimum of the function whose optimum is a region rather than a point. Both PABC and ABCGC reach the global optimum of one function. BABC, BGABC, PABC, ABCbest, BABCbest, CABC, BCABC, and ABCGC obtain the optimum of two functions. BABC, BGABC, PABC, BABCbest, BCABC, and ABCGC find the minimum of another. BABC, BGABC, BABCbest, BCABC, and ABCGC reach the global optimum of one more. On one function, BABC, BGABC, BABCbest, and BCABC have the same mean value and outperform the other algorithms, and ABCGC ranks second. On another, BABC, BGABC, BABCbest, CABC, and BCABC have the same mean value and perform better than the rest of the algorithms, and ABCGC ranks fifth. On a further function, BABC, BGABC, PABC, BABCbest, CABC, and BCABC have the same mean value and outperform the compared algorithms, and ABCGC ranks fourth. BCABC performs best on one function, on which ABCGC ranks fifth. On another, BABCbest, CABC, and BCABC have the same mean value and outperform the other algorithms, and ABCGC ranks seventh. On one function, BABC, BGABC, PABC, ABCbest, BABCbest, CABC, BCABC, and ABCGC have the same mean value and perform better than ABC and GABC, but ABCGC has the smallest standard deviation. However, only ABCGC can obtain the optimum of four of the functions. Note that on one especially difficult function the minimum value found by ABCGC is 0 and its mean value greatly surpasses the values obtained by the rest of the methods; moreover, ABCGC significantly outperforms the competitors on five functions. To sum up, ABCGC has similar or better performance than ABC, BABC, GABC, BGABC, PABC, ABCbest, BABCbest, CABC, and BCABC on 22, 17, 19, 17, 19, 19, 18, 18, and 17 out of the 22 functions, respectively. These results clearly indicate that the proposed algorithm is greatly superior to the nine compared algorithms on almost all the functions.

The above results are reasonable. In ABCGC, onlooker bees always select the optimal search dimension instead of a random chosen one in each cycle, which is very useful for local search and fine tuning; on the other hand, ABCGC has global search ability which prevents the search from premature convergence due to the exploration of a wider solution space carried out by scout bees and neighbor solution production mechanism performed by employed and onlooker bees. Generally, there is a good balance between exploitation and exploration; hence ABCGC has better performance in terms of global optimization.

4.2. Comparison with Other State-of-the-Art Algorithms

To further evaluate the performance of the proposed algorithm, a set of 6 classical benchmark functions from [29] is used to compare ABCGC with ABC and 5 state-of-the-art algorithms: BSO-RPTVW, IMDE (1st process), IMDE (2nd process), DFO, and DFO (QA). The details of these functions are given in Table 5. Based on their characteristics, the functions may be divided into three groups: functions with no local minima, with many local minima, and with a few local minima. The Sphere and Rosenbrock functions are high-dimensional unimodal functions. The Rastrigin, Ackley, and Griewank functions are high-dimensional multimodal functions with many local minima and are highly nonlinear in nature; moreover, the number of local minima of these functions increases exponentially with the dimension. The Schaffer function is a low-dimensional function with a smaller number of local minima. Therefore, these functions are widely used to test the performance of global optimization algorithms [2, 13, 18, 27–29]. Following [29], the population size is set to 100, and the maximum cycle number is 5000, 7500, and 10000 for the functions with 10, 20, and 30 dimensions, respectively; for the two-dimensional Schaffer function, it is equal to 2000 in this experiment. The results of ABC, BSO-RPTVW, IMDE (1st process), IMDE (2nd process), DFO, and DFO (QA) are quoted from [29]. Table 6 shows the comparison results.

From Table 6, DFO and ABCGC obtain the global optimum of the Sphere function and perform better than the rest of the methods. On the Rosenbrock function, IMDE (1st process) and IMDE (2nd process) reach the minimum and outperform all other algorithms, and ABCGC ranks fourth. IMDE (1st process), IMDE (2nd process), DFO, and ABCGC can find the global optimum of the Rastrigin and Griewank functions, while the others cannot. On the Ackley function, BSO-RPTVW performs best; ABCGC is better than the rest of the methods in two of the three dimension settings and ranks third in the remaining one. ABC, BSO-RPTVW, IMDE (1st process), and IMDE (2nd process) are not able to reach the minimum of the Schaffer function, whereas DFO, DFO (QA), and ABCGC can. These results clearly show that ABCGC has similar or better performance than ABC, BSO-RPTVW, IMDE (1st process), IMDE (2nd process), DFO, and DFO (QA) in 16, 10, 12, 12, 16, and 13 out of 16 cases, respectively. In other words, ABCGC performs significantly better than the compared algorithms in most cases. These results also demonstrate that ABCGC is suitable for local and global optimization due to a good balance between the local search process carried out by employed and onlooker bees and the global search process managed by scout bees.

4.3. Effects of Each Modification on the Performance of ABCGC

In order to further study the effects of each modification, the basic ABC combined with GEM is named ABCG (i.e., GABC2), and the basic ABC with Cauchy operator is named ABCC. The convergence performance of ABC, ABCG, ABCC, and ABCGC is compared on the set of 6 functions given in Table 5 to see how much each new search strategy contributes to improving the performance of ABCGC. Following the parameter settings commonly used in the literature [1, 2, 4–6, 8–18, 21–31, 33], the four algorithms adopt identical settings for the colony size, limit, maximum cycle number, and dimension in this experiment. The convergence characteristics of the algorithms are shown in Figures 3, 4, and 5.

It can be observed from Figures 3, 4, and 5 that ABCC has similar performance to ABC on the Sphere, Rosenbrock, Rastrigin, Ackley, and Griewank functions in the lowest-dimension setting; it shows a slightly bigger fluctuation than ABC, owing to the wider solutions generated by ABCC, but performs slightly better than ABC. In addition, the performances of ABCG and ABCGC are very close on the same 5 functions over all three dimension settings; they exhibit much better convergence than ABC and ABCC, and ABCGC slightly outperforms ABCG. More excitingly, both ABCG and ABCGC can obtain the global optimum of the Rastrigin function after 260 cycles. The Schaffer function is one of the most difficult standard test functions, and its global minimum is very close to its local minima [2]. More interestingly, on the Schaffer function in the lowest-dimension setting, ABCC shows a bigger fluctuation than the other algorithms and performs much better than ABC; moreover, ABCC performs worse than ABCG before 270 cycles but surpasses ABCG after 270 cycles. Note that ABCGC significantly outperforms the rest of the algorithms in this case; for example, the mean values of ABC, ABCG, and ABCC are about 1000, 100, and 300 times that of ABCGC after 300 cycles, respectively. On the Schaffer function in the higher-dimension settings, the performance of ABC, ABCG, ABCC, and ABCGC decreases as the dimension increases; ABCGC has similar or better performance than ABCG, and ABCC likewise has similar or better performance than ABC. However, ABCG and ABCGC greatly outperform ABC and ABCC. Taken as a whole, ABCGC converges the fastest and performs more robustly and effectively than ABC, ABCG, and ABCC on all 6 functions. These results demonstrate that ABC is good at exploration but poor at exploitation, which results in slow convergence.
On the other hand, the new exploitation strategy adopted by onlooker bees in ABCG and ABCGC biases the solution towards the global or near-global optimum more quickly, and the new exploration strategy adopted by scout bees in ABCC and ABCGC further helps to maintain large population diversity and avoids premature convergence; that is to say, the new exploitation and exploration strategies work well together to improve the performance of ABCGC rather than contradict each other.

5. Conclusions

To address the basic ABC’s poor balance between exploitation and exploration and its slow convergence, a new hybrid ABC for global optimization, namely, ABCGC, is proposed. In the proposed algorithm, the new exploitation strategy inspired by GEM makes onlooker bees always select the optimal search dimension instead of a randomly chosen one in each cycle, which greatly enhances the exploitation ability and accelerates the convergence of ABCGC; meanwhile, the new exploration strategy based on Cauchy operator generates a wider solution instead of a randomly produced one for each scout bee to help ABCGC escape from local optima and further enhance its exploration ability. Comprehensive experiments are conducted on two sets of well-known benchmark functions to verify the performance of ABCGC. The results show that ABCGC achieves overall better performance than the compared algorithms in terms of solution quality and convergence characteristics; in particular, it finds the global optimum faster in most cases. These results confirm that ABCGC usually achieves a good balance between exploitation and exploration; thus it can effectively serve as an alternative for global optimization.

In future work, there is a need for an adaptive scheme instead of a fixed one, which will dynamically adjust the number of pieces of shrapnel per grenade in each cycle of ABCGC. In addition, it is desired to further apply the proposed variants to real-world applications.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by National Science Foundation of China under Grant nos. 61463007 and 21466008, by National Science Foundation of Shanghai under Grant no. 15ZR1401600, and by Key Project of Guangxi University for Nationalities under Grant no. 2012MDZD035. The authors would like to thank the editors and the anonymous reviewers for their kind assistance, constructive comments, and recommendations, which have significantly improved the presentation of this paper. They would like to express their appreciation to Karaboga for sharing his basic ABC codes.