Mathematical Problems in Engineering

Mathematical Problems in Engineering / 2016 / Article

Research Article | Open Access

Volume 2016 |Article ID 9791060 | 10 pages |

Particle Swarm and Bacterial Foraging Inspired Hybrid Artificial Bee Colony Algorithm for Numerical Function Optimization

Academic Editor: Matjaz Perc
Received04 Nov 2015
Revised04 Jan 2016
Accepted04 Jan 2016
Published28 Jan 2016


Artificial bee colony (ABC) algorithm has good performance in discovering the optimal solutions to difficult optimization problems, but it has weak local search ability and easily plunges into local optimum. In this paper, we introduce the chemotactic behavior of Bacterial Foraging Optimization into employed bees and adopt the principle of moving the particles toward the best solutions in the particle swarm optimization to improve the global search ability of onlooker bees and gain a hybrid artificial bee colony (HABC) algorithm. To obtain a global optimal solution efficiently, we make HABC algorithm converge rapidly in the early stages of the search process, and the search range contracts dynamically during the late stages. Our experimental results on 16 benchmark functions of CEC 2014 show that HABC achieves significant improvement at accuracy and convergence rate, compared with the standard ABC, best-so-far ABC, directed ABC, Gaussian ABC, improved ABC, and memetic ABC algorithms.

1. Introduction

In recent years, facing optimization problems, people have put forward a series of traditional solving methods, such as linear programming and dynamic planning. Limited by the huge time complexity, these methods are not applied to solve these large-scale problems. With the development of biotechnology, people found that when individuals work together for a complex job the ability of individuals is not a simple addition of every individual but a very complex behavioral feature. So, many swarm intelligence based optimization methods have been proposed. Inspired by the law of survival of the fittest, Tang et al. [1] proposed the genetic algorithm (GA). Inspired by the foraging behavior of ant colonies, Dorigo et al. [2] proposed the ant colony optimization (ACO). Inspired by the social behavior of bird flocking, Kennedy and Eberhart [3] proposed the particle swarm optimization (PSO). Li et al. [4] proposed artificial fish swarm algorithm (AFSA). Yang [5] proposed firefly algorithm (FA) and Fister et al. [6] summarized some improvement of chaos-based firefly algorithms. Motivated by the behaviors of honeybee swarms, artificial bee colony (ABC) algorithm was first proposed by Karaboga in 2005 [7]. ABC algorithm has been widely used in many fields, for example, determining the optimal size and selecting optimum locations of shunt capacitors by El-Fergany and Abdelaziz [8], solving the economic lot scheduling problem by Bulut and Tasgetiren [9], segmenting SAR image by Ma et al. [10], enhancing image contrast by Draa and Bouaziz [11], and solving the Leaf-Constrained Minimum Spanning Tree (LCMST) problem by Singh [12].

Despite the success of the ABC algorithm in many applications, it still has numerous drawbacks. As we all know, strong capabilities in both exploration and exploitation are important for population-based optimization algorithms. Although the standard ABC algorithm works fine with exploration, it performs poorly in exploitation. To improve the performance, researchers have been working on modifying ABC algorithm and integrating ABC algorithm with other evolutionary computation based optimization methods. For example, Bansal et al. proposed a self-adaptive ABC algorithm, which updates step size and parameters for searching solutions adaptively according to the current fitness values; therefore it gives the solutions more chances to update themselves [13]. Wang et al. presented multistrategy ensemble ABC algorithm by utilizing different characteristics of solution search to construct a strategy pool. During the search process, the strategy for each food source is dynamically changed in order to achieve better performance [14]. Kang et al. presented Rosenbrock ABC algorithm [15] by combing standard ABC and RM-based local search techniques. To improve global searching capability by escaping the local solutions, Alatas [16] adopted a method to adjust parameters for ABC algorithm using random numbers generated from different chaotic systems. Xiang et al. proposed a particle swarm inspired multielitist ABC algorithm [17] which updates the parameters of the solutions using global best solution and an elitist randomly selected from an elitist archive. To efficiently solve the constraint optimization problems, Li and Yin [18] presented a self-adaptive constrained ABC algorithm by introducing feasible rule and multiobjective optimization methods.

In this paper, we propose a hybrid ABC algorithm to improve the performance of standard ABC algorithm. To enhance the ability of local searching and exploitation, we applied chemotactic behavior in Bacterial Foraging Optimization algorithm [19] into employed bees and adopted the global best solution search equation in PSO [20, 21] algorithm into onlooker bees. Moreover, we also use inertia weight [22, 23] like contraction-expansion coefficient in PSO algorithm to balance exploration and exploitation dynamically. Finally, our algorithms are evaluated by the average values and standard deviation (SD) of 16 CEC 2014 benchmark functions.

This paper is organized as follows. Section 2 describes the standard ABC algorithm. Section 3 introduces the HABC algorithm. The experimental results are shown and discussed in Section 4. Section 5 presents the conclusions.

2. Standard ABC Algorithm

When solving optimization problems, ABC algorithm abstracts the food sources as feasible solutions. The process of artificial bee colony seeking for quality food sources is used to simulate the process of finding the global optimal solution. The honeybee swarms are categorized into three groups: employed, onlooker, and scout bees, where the number of employed bees equals the number of high quality food sources. The job of employed bees is to find quality food sources and then share food location information with onlooker bees. Once onlooker bees obtain the food source information, they search closer toward the selected food sources according to the probability distribution functions. Higher quality food sources have a higher possibility to be selected. When food sources found by employed bees are identified as low quality ones, the corresponding employed bees turn into scout bees which search for new food sources randomly. The algorithm can be divided into four steps: initialization, behavior of employed bees, behavior of onlooker bees, and behavior of scout bees.

2.1. Initialization

In the initialization step, food sources are initialized with random positions given by the following equation:

Every solution is a -dimensional vector, is the number of food sources which is equal to the number of employed bees or onlooker bees, and denotes the number of optimization parameters. are the lower and upper bounds of food source positions at dimension , respectively.

2.2. Behavior of Employed Bees

Starting from the initial locations, employed bees search around the surrounding areas for better food sources. The algorithm assumes that employed bees can record all the food source locations that the colony has reached; thus employed bees can move a random distance toward another food source. The updated location is calculated as follows: where is the new candidate food source location, ( is the population of swarm), ( represents the dimension); is the previous food source location; is a random number between []; is another food source location, ; and .

The fitness of a solution can be calculated from the objective function value by using the following equation:

2.3. Behavior of Onlooker Bees

When bee colony foraging for food, onlooker bees stay around the hive and search locally for high quality food sources by observing the information of food sources carried by employed bees. The information interaction is an important reflection of the intelligent behavior of the bee colonies in searching for food. The employed bees provide the information of food sources to onlooker bees after returning to the hive. The onlooker bees decide whether to update the food source using the greedy algorithm when searching for food and the probability of updating the food source iswhere is the fitness function of th onlooker bees. If the new value of fitness function is better, the onlooker bee will update its position using (2); thus, the bee colony can move closer to better-quality food sources gradually.

2.4. Behavior of Scout Bees

If food sources cannot be improved after a predefined number of iterations, they will be abandoned. The corresponding employed bees will change to scout bees which search for a new feasible food source randomly across a wide range using same (1) for initialization.

3. Hybrid Artificial Bee Colony Algorithm

3.1. For Employed Bees

For employed bees, we adopt the chemotactic behavior in Bacterial Foraging Optimization (BFO) to help employed bees to escape local optimum trap and enter a relatively large searching space gradually. Chemotactic behavior is that bacteria gather together at a more favorable environment instead of a noxious one, which includes two operators: tumble and swim. A unit walk with random direction represents tumble operator. After finishing a tumble operator, if the fitness value is not improved, then bacteria move to another random direction with a unit walk; if the fitness value improved, then bacteria move to the same direction with a few unit walks until the fitness value reaches the maximum swim steps, where the process represents swim operator. In iterative procedure, standard ABC algorithm implement global search by employed bees. As shown in (2), is a random dimension of an individual, and standard ABC algorithm updates only one randomly selected dimension of the solutions in employed bees’ process, which causes some redundancy in searching for solutions. Compared with the standard ABC algorithm, in our proposed algorithm the frequency and range of the neighborhood search are increased when processing the employed bees. Once the employed bees forage, the fitness value of the solutions will be calculated. If the fitness value improves, the old position is substituted for the new position. Otherwise, the employed bees remain in the old position. When the number of swim steps (called ) reaches the limit (called ) or the fitness value is compounded, employed bees stop foraging at current dimension and tumble to another dimension. These operations can ameliorate the convergence speed of employed bees and increase potential solutions. After all the employed bees have tumbled in all dimensions, the process finishes. Benefitting from these, the bee colony gets into a larger search space and avoids plunging into local optimum. In HABC algorithm, the positions of employed bees can be updated as follows:where , , , and are the same as (2) and is the number of advances, like swim steps in Bacterial Foraging Optimization algorithm. The improvement is shown in Figures 1 and 2.

The pseudocode of the behavior of employed bees in HABC algorithm is as given in Pseudocode 1.

(1) Set the source position , produce new solution .
(2) for (as a counter) from 1 to colony size
(3)   = rand() (as a random dimension of source position)
(4)   = rand(1, colonysize) &   (as another random dimension of source position)
(5)  set temp = rand
(6)  for dim (as a counter from 1 to max dimension)
(7)   while
(8)    new solution
(9)    if new fitness value is better than fitness value
(10)    then
(11)    else
(12)    set
(13)   end if
(14)  end while
(15) end for
(16) end for
3.2. For Onlooker Bees

In the standard ABC algorithm, employed bees and onlooker bees update their position by searching for a candidate solution in a randomly selected dimension of current position toward a random position. The purpose of employed bees is for global searching across a relatively large space. While the purpose of onlooker bees is for local searching in a neighboring area, which means that employed bees have the fastest convergence rate and onlooker bees have the best exploration ability. Eventually, we can locate the global optimization solution. In standard ABC algorithm, step size and dimension are randomly chosen for onlooker bees’ process, so the probabilities of choosing good and bad quality food sources are the same, which lead to its low robustness.

As we all know, the distance of global optimal solution and suboptimal solution is short, and then the current best solution can drive the new candidate solution step towards the best direction, so we adopt the idea of tracing the current best particle from PSO algorithm to enhance the global search capability of the standard ABC algorithm. At the same time, to strike a balance between exploration and exploitation, we introduce the inertia weight into our algorithm. In PSO algorithm, inertia weight represents the ability that particles inherit from their previous velocity; it was first introduced into PSO algorithm by Shi and Eberhart in 1998 [22]. Analysis states that a relatively larger inertia weight is good for global searching and a relatively smaller inertia weight is good for local searching. In HABC algorithm, we adopt linear decreasing inertia weight (called LDIW), which contains smaller step size and difficult trap into local optimum. At the initial stages, in order to avoid trapping into a local optimal solution, a relatively large inertia weight is used to make onlooker bees spread into a larger search space. At the end stages, a relatively small inertia weight is used to protect onlooker bees from disturbing the current best solution. This approach enhances the overall convergence speed and makes the algorithm more efficient for obtaining the global optimization solution. The equation of LDIW is given as follows:

Among all parameters, is the initial inertia weight, is the inertia weight when iteration reach the maximum; respect the current iteration, and means the maximum number of iterations. In our algorithm, according to the experience, we set as 0.9 and as 0.4.

In HABC algorithm, the positions of onlooker bees can be updated as follows:where , , , and are the same as (2), is dimension of current best solution, and is the LDIW.

The pseudocode of onlooker bees is as given in Pseudocode 2.

(1) Set the source position , produce new solution , maximum convergence iterations , current iteration .
(2) for (as a counter) from 1 to colony size
(3) calculate selective probability
(4)   = rand() (as a random dimension of source position)
(5)   = rand(1, colonysize) &   (as another random dimension of source position)
(6)  set
(7)  if selective probability p > rand()
(8)   for dim (as a counter) from 1 to max dimension
(9)    new solution
(10)  end for
(11)   if new fitness value is better than fitness value
(12)   then
(13)  end if
(14) end if
(15) end for

4. Experimental Comparison and Analysis

4.1. Benchmark Functions Used

In this section, algorithms are used to find the global optimum values of 16 benchmark functions from the CEC 2014 competition [24]. The details of the functions are as follows.

Unimodal functions include the following.(1)Rotated High Conditioned Elliptic Function.(2)Rotated Bent Cigar Function.(3)Rotated Discus Function.

Multimodal functions include the following.(4)Shifted and Rotated Rosenbrock’s Function.(5)Shifted and Rotated Ackley’s Function.(6)Shifted and Rotated Weierstrass Function.(7)Shifted and Rotated Griewank’s Function.(8)Shifted Rastrigin’s Function.(9)Shifted and Rotated Rastrigin’s Function.(10)Shifted Schwefel’s Function.(11)Shifted and Rotated Schwefel’s Function.(12)Shifted and Rotated Katsuura Function.(13)Shifted and Rotated HappyCat Function.(14)Shifted and Rotated HGBat Function.(15)Shifted and Rotated Expanded Griewank’s plus Rosenbrock’s Functions.(16)Shifted and Rotated Expanded Scaffer’s F6 Function.

4.2. Parameter Settings

To test the performance of HABC, the experimental results are compared with those of the standard ABC, best-so-far ABC (BSABC)[25], directed ABC (dABC) [26], Gaussian ABC (GABC) [27], improved ABC (IABC) [28], and memetic ABC [29] (MABC). In HABC algorithm, we set 3 as the value of . For all algorithms, the colony size is set to be 50, and the stopping criterion was to run for up to 10000D FEs. Moreover, search space for all functions are . Each experiment is repeated 51 times independently with random seeds, and then we record the mean and standard deviations of benchmark functions.

4.3. Comparisons between HABC and ABC Variants

All algorithms are coded in MATLAB 7.9.0, and all experiments were running on a Windows XP operation system with an Intel Pentium Dual-Core CPU E5300 2.6 GHZ and 2 GB RAM. Experimental results are shown in Tables 13, which also include the results by Wilcoxon signed-rank test.


2.26E + 057.65E + 04+3.74E + 042.02E + 04+1.76E + 056.41E + 04+4.94E + 042.21E + 04+8.50E + 046.33E + 04+1.09E + 058.81E + 04+
2.18E + 017.93E + 00+1.03E − 021.32E − 03+3.14E − 021.86E − 02+1.55E − 031.03E − 031.38E − 021.03E − 02+2.38E − 032.33E − 03
1.71E − 011.17E − 01+3.54E − 023.49E − 02+6.42E − 024.06E − 02+1.74E − 021.33E − 02+7.33E − 027.45E − 02+3.63E − 026.44E − 02+
7.20E − 013.78E − 01+1.91E − 021.37E − 02+4.70E − 022.33E − 02+2.01E − 021.03E − 02+9.22E − 028.13E − 02+6.03E − 031.15E − 02+
1.75E + 014.55E + 00+8.96E + 007.64E + 00+1.39E + 017.85E + 00+5.66E + 003.42E + 00+9.35E + 007.39E + 00+1.51E + 018.34E + 00+
2.83E − 024.93E − 03+1.16E − 023.07E − 03+1.59E − 024.08E − 03+1.14E − 022.31E − 03+1.37E − 023.32E − 03+7.46E − 031.53E − 03
3.24E − 027.30E − 03+1.09E − 041.49E − 04+3.56E − 033.35E − 03+7.63E − 051.54E − 05+4.48E − 046.33E − 04+8.46E − 061.42E − 05
7.86E + 001.55E + 00+4.53E + 001.01E + 00+4.93E + 006.34E − 01+5.61E + 001.02E + 00+3.27E + 006.98E − 01=3.42E + 007.80E − 01+
00NA8.93E − 032.36E − 02+3.93E − 023.01E − 02+6.25E − 023.61E − 02+00NA1.25E − 015.10E − 02+
2.89E + 024.01E + 01+7.33E + 014.65E + 01+1.31E + 027.26E + 01+1.08E + 027.16E + 01+8.26E + 013.32E + 01+4.91E + 015.83E + 01+
3.71E − 035.64E − 04+1.75E − 032.71E − 04+2.53E − 035.01E − 04+1.48E − 033.03E − 04+2.51E − 032.83E − 04+7.62E − 041.50E − 04
1.56E − 032.89E − 04+9.41E − 041.70E − 04+1.05E − 031.35E − 04+9.70E − 041.27E − 04+9.11E − 042.17E − 04+7.11E − 042.40E − 04+
1.09E − 032.37E − 04+1.07E − 031.21E − 04+9.55E − 041.01E − 04+1.14E − 032.12E − 04+1.04E − 031.81E − 04+8.18E − 042.42E − 04+
1.53E + 002.32E − 01+6.17E − 011.63E − 01+8.71E − 011.05E − 01+5.04E − 011.50E − 01+6.48E − 011.95E − 01+4.54E − 011.71E − 01+
2.40E + 001.39E − 01+1.80E + 003.13E − 01+2.12E + 002.23E − 01+1.94E + 001.62E − 01+1.84E + 001.92E − 01+1.39E + 004.06E − 01=


3.45E + 078.47E + 06+8.07E + 062.07E + 06+2.41E + 077.01E + 06+6.00E + 061.10E + 06+1.42E + 073.26E + 06+6.33E + 061.67E + 06+
2.25E + 038.56E + 02+3.98E − 022.06E − 02+4.99E − 024.99E − 02+1.72E − 022.38E − 02+3.79E − 022.30E − 02+8.18E − 037.98E − 03+
1.81E + 001.60E + 00+3.03E − 012.13E − 01+1.08E + 006.66E − 01+1.94E − 011.23E − 01+2.40E − 012.30E − 01+3.27E − 012.99E − 01+
9.22E + 019.86E + 00+3.43E + 003.75E + 00+1.35E + 011.11E + 01+2.79E + 001.29E + 00+1.28E + 019.58E + 00+1.64E + 012.75E + 00+
2.04E + 011.27E − 02+2.03E + 012.66E − 02+2.04E + 014.01E − 02+2.03E + 012.53E − 02+2.04E + 014.05E − 02+2.01E + 011.41E − 02
1.66E − 011.45E − 02+1.37E − 011.17E − 02+1.48E − 015.65E − 03+1.45E − 011.05E − 02+1.36E − 016.54E − 03+1.13E − 011.50E − 02+
2.25E − 032.27E − 03+1.76E − 072.79E − 07+1.13E − 049.70E − 05+6.03E − 088.19E − 08+1.41E − 051.19E − 05+00NA00
9.24E + 011.17E + 01+7.08E + 011.34E + 01+7.92E + 019.03E + 00+8.14E + 017.53E + 00+5.35E + 014.92E + 00+6.28E + 018.47E + 00+
9.81E − 031.16E − 024.24E − 014.11E − 01+1.47E + 001.22E − 01+8.32E − 018.50E − 01+002.77E − 012.61E − 02+1.87E − 012.66E − 02
3.19E + 031.93E + 02+2.00E + 031.62E + 02+2.43E + 031.70E + 02+1.95E + 032.04E + 02+2.13E + 037.28E + 01+1.85E + 032.99E + 02+
6.25E − 039.56E − 04+3.44E − 035.09E − 04+5.12E − 037.70E − 04+3.04E − 034.60E − 04+4.61E − 033.40E − 04+1.63E − 032.74E − 04+
3.46E − 032.44E − 04+2.29E − 032.74E − 04+2.78E − 031.65E − 04+2.39E − 031.71E − 04+2.62E − 032.52E − 04+1.78E − 032.95E − 04+
2.39E − 032.12E − 04+1.83E − 031.27E − 04+2.04E − 031.93E − 04+1.79E − 039.21E − 05+2.22E − 031.41E − 04+1.50E − 031.17E − 04
1.65E + 011.05E + 00+8.24E + 001.93E + 00+1.28E + 011.58E + 00+9.61E + 001.11E + 00+9.14E + 001.06E + 00+7.78E + 001.70E + 00+
1.08E + 011.56E − 01+1.01E + 012.22E − 01+1.07E + 012.94E − 01+1.04E + 013.34E − 01+1.03E + 012.05E − 01+9.78E + 004.65E − 01+


4.64E + 073.68E + 06+1.60E + 073.17E + 06+2.89E + 073.65E + 06+1.50E + 073.49E + 06+2.00E + 072.92E + 06+1.60E + 073.49E + 06+
1.15E + 047.19E + 03+1.67E + 008.86E − 01+1.90E + 007.53E − 01+7.48E − 013.63E − 01+3.56E + 003.36E + 00+3.36E − 011.82E − 01
6.54E + 002.87E + 00+6.43E + 001.88E + 00+6.69E + 002.05E + 00+5.89E + 001.12E + 00+4.96E + 002.94E + 00=7.10E + 002.68E + 00+
1.30E + 026.80E + 00+4.21E + 012.34E + 01+6.65E + 012.57E + 01+4.00E + 012.65E + 01+4.74E + 012.10E + 01+6.25E + 012.67E + 01+
2.06E + 012.77E − 02+2.05E + 012.37E − 02+2.06E + 012.91E − 02+2.05E + 013.22E − 02+2.05E + 013.33E − 02+2.01E + 011.61E − 02
3.80E − 011.40E − 02+3.23E − 012.10E − 02+3.40E − 011.04E − 02+3.29E − 011.74E − 02+3.10E − 012.35E − 02+2.70E − 011.48E − 02+
2.70E − 029.14E − 03+3.21E − 032.46E − 03+8.32E − 036.07E − 03+1.45E − 031.17E − 03+6.92E − 033.28E − 03+6.33E − 044.97E − 04+
00NA00NA00NA4.05E − 037.58E − 03+00NA00NA00
2.25E + 021.35E + 01+1.87E + 021.52E + 01+1.96E + 022.63E + 01+2.05E + 021.87E + 01+1.49E + 021.27E + 01+1.67E + 025.97E + 00+
1.51E − 011.48E − 013.60E + 007.64E − 01+6.29E + 002.87E + 00+4.90E + 001.26E + 00+2.52E + 003.37E − 01+8.08E − 015.23E − 01
7.28E + 032.56E + 02+4.75E + 031.37E + 02+5.60E + 033.45E + 02+4.83E + 033.74E + 02+5.26E + 031.64E + 02+3.78E + 034.24E + 02
8.85E − 032.76E − 04+4.54E − 034.97E − 04+6.31E − 035.06E − 04+4.04E − 036.80E − 04+5.72E − 037.76E − 04+1.96E − 031.04E − 04+
4.20E − 033.27E − 04+3.20E − 033.16E − 04+3.47E − 032.24E − 04+3.25E − 032.67E − 04+3.40E − 032.53E − 04+2.84E − 032.10E − 04+
2.91E − 031.78E − 04+2.33E − 031.67E − 04+2.41E − 031.51E − 04+2.30E − 031.91E − 04+2.68E − 032.41E − 04+2.18E − 031.54E − 04+
3.82E + 012.29E + 00+2.53E + 011.44E + 00+3.21E + 012.45E + 00+2.61E + 013.30E + 00+2.43E + 011.38E + 00+1.82E + 013.01E + 00+
1.98E + 012.31E − 01+1.91E + 014.44E − 01+1.95E + 013.61E − 01+1.93E + 011.94E − 01+1.90E + 013.48E − 01+1.82E + 016.13E − 01+

It can be seen that all algorithms cannot get the global optimum value of F1 function and have equal performance for F8 function. From Table 1, our proposed HABC is better than the other algorithms for F3, F4, F5, F6, F7, F9, F10, and F11 functions, and the MABC algorithm is slightly better than the other algorithms for F2, F6, F7, and F12 functions. According to Table 2, our proposed HABC algorithm shows the best performance on most of the functions, except for functions F10 and F5. From Table 3, our proposed HABC also has better performance than the other algorithms on most of the benchmark functions, except for F2, F5, and F10 functions. The MABC algorithm is slightly better than the other algorithms in F2 and F5 functions, and the IABC algorithm is slightly better than the other algorithms in F10 function.

In the last column of Tables 13, we give the Wilcoxon signed-rank test results. A value of “+” in the table indicates that HABC algorithm is statically superior to the compared algorithm, whereas a “−” indicates that the HABC algorithm is statistically worse than the compared algorithm. A value of “=” indicates that the HABC algorithm is statistically equivalent with the compared algorithm and a value of “NA” indicates that two algorithms are statistically indistinguishable. We can conclude that the HABC algorithm is statistically better than the other algorithms on most of the benchmark functions.

4.4. Time Complexity Analysis

The improvement of computation precision in HABC algorithm usually needs the sacrifice of time complexity. Because the limit of neighbor searching in employed bees’ stage is not constant, in order to analyze time complexity properly, we calculated the computational complex of each algorithm by recording the average time to reach the given precision. All algorithms ran 51 times independently with random seeds. The mean time is measured in seconds. Results are listed in Tables 46, and the best is shown in bold font.


12E + 533.8342.4088.7573.1386.61518.8461.885
23E − 240.7224.3219.5344.37113.0971.7884.271
37E − 237.69110.42618.70712.0059.2088.2605.809
49E − 231.8582.69310.6263.1029.7202.5001.666
51E + 132.92710.11024.42521.7478.14924.2917.486
61E − 255.43635.49246.76721.74751.06421.47121.491
71E − 332.4206.81223.47413.2488.2437.3109.418
81E − 101.0131.1391.4781.3700.8612.1670.497
95E + 031.99413.73423.09822.4426.41811.4573.159
101E − 12.4885.10810.63711.3480.71511.89311.053
111E + 233.57912.92728.19726.39016.64122.12110.235
122E − 335.91516.45926.65116.74123.2181.5941.944
131E − 328.18713.51323.82124.46529.59112.79210.736
141E − 329.06226.74119.53823.25730.84825.70011.597
158E − 130.24914.98618.24314.00712.91910.7998.261
162E + 030.79514.64323.07116.13421.00512.92610.204


12E + 733.5072.65029.9065.0039.13310.3971.036
25E − 241.29215.95724.06120.07612.15713.90711.375
31E + 030.5862.36610.83910.02617.0486.8663.285
43E + 133.47418.45615.02119.59816.97018.87411.977
52E + 134.48828.73029.09034.08131.5627.0149.018
62E − 17.8035.0793.8753.7784.0096.5883.740
71E − 433.4316.45222.6668.93010.02711.6553.463
81E − 1012.9264.2338.7066.0342.7438.0191.677
98E + 133.74712.54523.28628.58615.18012.8623.247
102E + 06.9438.41720.82723.53314.99215.1132.073
112E + 334.93420.29933.10416.40927.45117.46914.429
125E − 342.9404.81724.4984.83819.2271.6411.002
133E − 328.5611.90017.2272.21610.7581.1501.583
142E − 332.53111.27811.8269.85327.5085.9093.640
151E + 136.00020.17527.87625.80913.9638.8527.548
161E + 134.89028.97829.00033.35428.7549.08412.534


13E + 738.11414.07028.4626.0694.5537.6932.140
24E + 042.52728.69522.2119.94320.54013.99010.660
37E + 033.4288.44023.17714.61411.95921.8127.322
47E + 135.89120.54530.86118.12624.23729.54114.176
52E + 136.31430.80532.35836.15731.12627.85620.181
64E − 119.8999.50511.1949.2187.05113.0545.060
78E − 331.09516.59925.86815.36219.24220.77912.879
81E − 1028.8899.69425.13631.7079.86922.9364.583
92E + 231.60713.14924.49626.83115.38710.3703.228
106E + 03.10311.70230.87933.9133.52919.14620.840
115E + 335.72815.49835.81736.22934.4589.3187.696
127E − 347.8062.85013.2922.93412.2572.2872.822
134E − 327.9321.79811.4012.6944.3201.8721.411
143E − 314.6692.8014.3502.6338.0513.1421.924
153E + 135.29411.28635.5959.8015.3988.1495.073
162E + 114.7525.4428.8056.8903.9844.5032.757

It can be seen from Tables 46 that HABC can find the global optimal solution with less time on most of the benchmark functions, because of its faster converge ability.

5. Conclusion

In this paper, hybrid ABC (HABC) is proposed by introducing the chemotactic behavior of Bacterial Foraging Optimization into employed bees and adopting the principle of moving the particles toward the best solutions in PSO to improve the global search ability of onlooker bees. Experiment result shows that HABC algorithm has better solution accuracy and higher evolution speed and reaches the global optimal solution with fewer time, compared with the current reported standard ABC, best-so-far ABC, directed ABC, Gaussian ABC, improved ABC, and memetic ABC.

In our future work, we will find a better way for searching for the best to improve the performance of HABC.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


This research is supported in part by the Program for New Century Excellent Talents in University (NCET-12-0881), National Science and Technology Support Program of China (no. 2015 BAD17B02), China Agriculture Research System (CARS-49), and the Fundamental Research Funds for the Central Universities (JUSRP51410B).


  1. K. S. Tang, K. F. Man, S. Kwong, and Q. He, “Genetic algorithms and their applications,” IEEE Signal Processing Magazine, vol. 13, no. 6, pp. 22–37, 1996. View at: Publisher Site | Google Scholar
  2. M. Dorigo, V. Maniezzo, and A. Colorni, “Ant system: optimization by a colony of cooperating agents,” IEEE Transactions on Systems, Man, and Cybernetics Part B: Cybernetics, vol. 26, no. 1, pp. 29–41, 1996. View at: Publisher Site | Google Scholar
  3. J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, vol. 4, pp. 1942–1948, IEEE, Perth, Australia, December 1995. View at: Google Scholar
  4. X.-L. Li, Z.-J. Shao, and J.-X. Qian, “Optimizing method based on autonomous animats: fish-swarm Algorithm,” System Engineering Theory and Practice, vol. 22, no. 11, pp. 32–38, 2002. View at: Google Scholar
  5. X.-S. Yang, “Firefly algorithms for multimodal optimization,” in Stochastic Algorithms: Foundations and Applications, vol. 5792, pp. 169–178, Springer, Berlin, Germany, 2009. View at: Publisher Site | Google Scholar | MathSciNet
  6. I. Fister, M. Perc, S. M. Kamal, and I. Fister, “A review of chaos-based firefly algorithms: perspectives and research challenges,” Applied Mathematics and Computation, vol. 252, pp. 155–165, 2015. View at: Publisher Site | Google Scholar
  7. D. Karaboga, “An idea based on honey bee swarm for numerical optimization (vol. 200),” Tech. Rep. tr06, Computer Engineering Department, Engineering Faculty, Erciyes University, Kayseri, Turkey, 2005. View at: Google Scholar
  8. A. A. El-Fergany and A. Y. Abdelaziz, “Capacitor placement for net saving maximization and system stability enhancement in distribution networks using artificial bee colony-based approach,” International Journal of Electrical Power & Energy Systems, vol. 54, pp. 235–243, 2014. View at: Publisher Site | Google Scholar
  9. O. Bulut and M. F. Tasgetiren, “An artificial bee colony algorithm for the economic lot scheduling problem,” International Journal of Production Research, vol. 52, no. 4, pp. 1150–1170, 2014. View at: Publisher Site | Google Scholar
  10. M. Ma, J. Liang, M. Guo, Y. Fan, and Y. Yin, “SAR image segmentation based on artificial bee colony algorithm,” Applied Soft Computing Journal, vol. 11, no. 8, pp. 5205–5214, 2011. View at: Publisher Site | Google Scholar
  11. A. Draa and A. Bouaziz, “An artificial bee colony algorithm for image contrast enhancement,” Swarm and Evolutionary Computation, vol. 16, pp. 69–84, 2014. View at: Publisher Site | Google Scholar
  12. A. Singh, “An artificial bee colony algorithm for the leaf-constrained minimum spanning tree problem,” Applied Soft Computing Journal, vol. 9, no. 2, pp. 625–631, 2009. View at: Publisher Site | Google Scholar
  13. J. C. Bansal, H. Sharma, K. V. Arya, K. Deep, and M. Pant, “Self-adaptive artificial bee colony,” Optimization, vol. 63, no. 10, pp. 1513–1532, 2014. View at: Publisher Site | Google Scholar | Zentralblatt MATH
  14. H. Wang, Z. Wu, S. Rahnamayan, H. Sun, Y. Liu, and J.-S. Pan, “Multi-strategy ensemble artificial bee colony algorithm,” Information Sciences, vol. 279, pp. 587–603, 2014. View at: Publisher Site | Google Scholar
  15. F. Kang, J. Li, and Z. Ma, “Rosenbrock artificial bee colony algorithm for accurate global optimization of numerical functions,” Information Sciences, vol. 181, no. 16, pp. 3508–3531, 2011. View at: Publisher Site | Google Scholar | Zentralblatt MATH
  16. B. Alatas, “Chaotic bee colony algorithms for global numerical optimization,” Expert Systems with Applications, vol. 37, no. 8, pp. 5682–5687, 2010. View at: Publisher Site | Google Scholar
  17. Y. Xiang, Y. Peng, Y. Zhong, Z. Chen, X. Lu, and X. Zhong, “A particle swarm inspired multi-elitist artificial bee colony algorithm for real-parameter optimization,” Computational Optimization and Applications, vol. 57, no. 2, pp. 493–516, 2014. View at: Publisher Site | Google Scholar
  18. X. Li and M. Yin, “Self-adaptive constrained artificial bee colony for constrained numerical optimization,” Neural Computing and Applications, vol. 24, no. 3-4, pp. 723–734, 2014. View at: Publisher Site | Google Scholar
  19. K. M. Passino, “Biomimicry of bacterial foraging for distributed optimization and control,” IEEE Control Systems Magazine, vol. 22, no. 3, pp. 52–67, 2002. View at: Publisher Site | Google Scholar
  20. J. Kennedy, “Particle swarm optimization,” in Encyclopedia of Machine Learning, pp. 760–766, Springer, New York, NY, USA, 2010. View at: Google Scholar
  21. R. C. Eberhart and J. Kennedy, “A new optimizer using particle swarm theory,” in Proceedings of the 6th International Symposium on Micro Machine and Human Science, vol. 1, pp. 39–43, IEEE, Nagoya, Japan, October 1995. View at: Google Scholar
  22. Y. Shi and R. Eberhart, “A modified particle swarm optimizer,” in Proceedings of the IEEE International Conference on Evolutionary Computation Proceedings and the IEEE World Congress on Computational Intelligence, pp. 69–73, IEEE, Anchorage, Alaska, USA, May 1998. View at: Publisher Site | Google Scholar
  23. Y. Shi and R. C. Eberhart, “Empirical study of particle swarm optimization,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '99), vol. 3, Washington, DC, USA, July 1999. View at: Publisher Site | Google Scholar
  24. J. J. Liang, B. Y. Qu, and P. N. Suganthan, “Problem definitions and evaluation criteria for the CEC 2014 special session and competition on single objective real-parameter numerical optimization,” Tech. Rep., Computational Intelligence Laboratory, Zhengzhou University, Zhengzhou, China; Nanyang Technological University, Singapore, 2013. View at: Google Scholar
  25. A. Banharnsakun, T. Achalakul, and B. Sirinaovakul, “The best-so-far selection in artificial bee colony algorithm,” Applied Soft Computing Journal, vol. 11, no. 2, pp. 2888–2901, 2011. View at: Publisher Site | Google Scholar
  26. M. S. Kiran and O. Findik, “A directed artificial bee colony algorithm,” Applied Soft Computing Journal, vol. 26, pp. 454–462, 2015. View at: Publisher Site | Google Scholar
  27. L. Dos Santos Coelho and P. Alotto, “Gaussian artificial bee colony algorithm approach applied to Loney's solenoid benchmark problem,” IEEE Transactions on Magnetics, vol. 47, no. 5, pp. 1326–1329, 2011. View at: Publisher Site | Google Scholar
  28. W. Gao and S. Liu, “Improved artificial bee colony algorithm for global optimization,” Information Processing Letters, vol. 111, no. 17, pp. 871–882, 2011. View at: Publisher Site | Google Scholar
  29. I. Fister, I. Fister Jr., J. Brest, and V. Žumer, “Memetic artificial bee colony algorithm for large-scale global optimization,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '12), pp. 1–8, Brisbane, Australia, June 2012. View at: Publisher Site | Google Scholar

Copyright © 2016 Li Mao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

More related articles

969 Views | 603 Downloads | 7 Citations
 PDF  Download Citation  Citation
 Download other formatsMore
 Order printed copiesOrder

Related articles

We are committed to sharing findings related to COVID-19 as quickly and safely as possible. Any author submitting a COVID-19 paper should notify us at to ensure their research is fast-tracked and made available on a preprint server as soon as possible. We will be providing unlimited waivers of publication charges for accepted articles related to COVID-19. Sign up here as a reviewer to help fast-track new submissions.