Computational Intelligence and Neuroscience


Research Article | Open Access

Volume 2016 |Article ID 8085953 | 13 pages | https://doi.org/10.1155/2016/8085953

An Enhanced Artificial Bee Colony Algorithm with Solution Acceptance Rule and Probabilistic Multisearch

Academic Editor: Ezequiel López-Rubio
Received: 03 Jul 2015
Accepted: 09 Sep 2015
Published: 24 Dec 2015

Abstract

The artificial bee colony (ABC) algorithm is a popular swarm-based technique, which is inspired by the intelligent foraging behavior of honeybee swarms. This paper proposes a new variant of the ABC algorithm, namely, enhanced ABC with solution acceptance rule and probabilistic multisearch (ABC-SA), to address global optimization problems. A new solution acceptance rule is proposed where, instead of greedy selection between the old solution and the new candidate solution, worse candidate solutions have a probability to be accepted. Additionally, the acceptance probability of worse candidates is nonlinearly decreased throughout the search process adaptively. Moreover, in order to improve the performance of the ABC and balance intensification and diversification, a probabilistic multisearch strategy is presented. Three different search equations with distinctive characters are employed using predetermined search probabilities. By implementing the new solution acceptance rule and the probabilistic multisearch approach, the intensification and diversification performance of the ABC algorithm is improved. The proposed algorithm has been tested on well-known benchmark functions of varying dimensions by comparing against novel ABC variants, as well as several recent state-of-the-art algorithms. Computational results show that the proposed ABC-SA outperforms other ABC variants and is superior to state-of-the-art algorithms proposed in the literature.

1. Introduction

Optimization techniques play an important role in the field of science and engineering. Over the last five decades, numerous algorithms have been developed to solve complex optimization problems. Since more and more present-day problems turn out to be nonlinear, multimodal, discontinuous, or dynamic in nature, derivative-free, nonexact solution methods attract ever-increasing attention. Most of these methods are inspired by evolutionary biology or swarm behavior. Several classes of algorithms have been proposed in this evolutionary or swarm intelligence framework, including genetic algorithms [1, 2], memetic algorithms [3], differential evolution (DE) [4], ant colony optimization (ACO) [5], particle swarm optimization (PSO) [6], artificial bee colony algorithm (ABC) [7], cuckoo search [8], and firefly algorithm [9].

The ABC is a biologically inspired population-based metaheuristic algorithm that mimics the foraging behavior of honeybee swarms [7]. Due to its simplicity and ease of application, the ABC has been widely used to solve both continuous and discrete optimization problems since its introduction [10]. It has been shown that ABC tends to suffer poor intensification performance on complex problems [11–13]. To improve the intensification performance of ABC, many researchers have focused on the search rules, as they control the tradeoff between diversification and intensification. Diversification means the ability of an algorithm to search for unvisited points in the search region, whereas intensification is the process of refining those points within the neighborhood of previously visited locations to improve solution quality. Various new search strategies, mostly inspired by PSO and DE, have been proposed in the literature. Zhu and Kwong [14] proposed a global best guided ABC, which utilizes the global best individual’s information within the search equation, similar to PSO. Gao et al. [15] introduced another variant of global best ABC. Inspired by DE, Gao and Liu [13] introduced a modified version of the ABC in which ABC/Best/1 and ABC/Rand/1 were employed as local search equations. Kang et al. [16] described the Rosenbrock ABC, which combines Rosenbrock’s rotational method with the original ABC. To improve diversification, Alatas [11] employed chaotic maps for initialization and chaotic searches within a search strategy. Akay and Karaboga [17] introduced a modified version of the ABC in which the frequency of perturbation is controlled adaptively and a ratio of variance operator was introduced. Liao et al. [18] proposed a detailed experimental analysis and comparison of an ABC variant with different search equations. Gao et al. [19] introduced two new search equations for the onlooker and employed bee phases and a new robust comparison technique for candidate solutions.
Qiu et al. [20] were inspired by the DE/current-to-best/1 strategy in the DE algorithm and proposed a modified ABC. Banitalebi et al. [21] proposed an enhanced compact ABC, which did not store the actual population of candidate solutions; instead their approach employed a probabilistic representation. Wang et al. [22] presented multistrategy ABC, in which a pool of different search strategies was constructed and various search strategies were used during the search process. Gao et al. [23] introduced a bare bones ABC with parameter adaptation and fitness-based neighborhood to improve the intensification performance of standard ABC. Ma et al. [24] reduced the redundant search moves and maintained the diversity of the swarm by introducing hybrid ABC with life cycle and social learning. Furthermore, ABC has been successfully applied to solve various types of optimization problems, such as production scheduling [25, 26], vehicle routing [27], location-allocation problem [28], image segmentation [29], wireless sensor network routing [30], leaf-constrained minimum spanning tree problem [31], clustering problem [32], fuel management optimization [33], and many others [34–36]. Readers can refer to Karaboga et al. [10] for an extensive literature review of the ABC and its applications.

This study presents an enhanced ABC with solution acceptance rule and probabilistic multisearch (ABC-SA) in order to solve global optimization problems efficiently. In ABC-SA, three search mechanisms with different diversification and intensification characteristics are employed. Moreover, search mechanism selection probabilities p_1, p_2, and p_3 are introduced to control the balance between diversification and intensification. In our proposed approach, a search mechanism is selected using these probabilities to generate a new neighbor solution from the current one. Additionally, a solution acceptance rule is implemented, in which not only better solutions but also worse solutions may be accepted by using a probability function. A nonlinearly decreasing acceptance probability function is employed, thus allowing worse solutions to be more likely accepted in the early phases of the search. Therefore, the ABC-SA algorithm explores the search space more widely, especially in the early phases of the search process. By using the solution acceptance rule and implementing different search mechanisms of contrasting nature, ABC-SA balances the trade-off between diversification and intensification efficiently. The proposed approach is tested on 13 benchmark functions with varying dimensions and compared to novel ABC, PSO, and DE variants. Computational results reveal that ABC-SA outperforms competitor algorithms in terms of solution quality.

The main contributions of the proposed study are as follows:
(i) Three different search mechanisms with varying diversification and intensification abilities are employed. Probabilistic multisearch with predetermined probability values is used to determine which search mechanism generates each candidate solution. Therefore, ABC-SA explores and exploits the search space efficiently.
(ii) Instead of a greedy selection, a new candidate solution acceptance rule is integrated, where a worse solution may have a chance to be accepted as the new solution. With the help of this new acceptance rule, ABC-SA achieves better diversification performance, specifically in the early phases of the search.

The remainder of this paper is structured as follows: Section 2 presents the traditional ABC; Section 3 introduces the proposed framework; the instances, parameter settings, and computational results are presented in Section 4 and finally Section 5 concludes the paper.

2. Artificial Bee Colony Algorithm

The ABC is inspired by the organizational nature and foraging behavior of honeybee swarms. In the ABC algorithm, the bee colony comprises three kinds of bees: employed bees, onlooker bees, and scout bees. Each bee has a specialized task in the colony to maximize the nectar amount that is stored in the hive. In ABC, each food source is placed in the D-dimensional search space and represents a potential solution to the optimization problem. The amount of nectar in a food source is assumed to be the fitness value of that food source. Generally, the number of employed and onlooker bees is the same and equal to the number of food sources.

Each employed bee belongs to a food source and is responsible for mining the corresponding food source. Then, employed bees pass the nectar information to onlooker bees in the “dance area.” Onlooker bees wait in the hive and select a food source to mine based on the information coming from the employed bees. Here, more beneficial food sources have a higher probability of being selected by onlooker bees. In ABC, trial counters and a predetermined limit parameter are used to decide whether a food source should be abandoned. If a solution represented by a food source does not improve during a number of trials (limit), the food source is abandoned. When a food source is abandoned, the corresponding employed bee becomes a scout bee, randomly generates a new food source, and replaces the abandoned one with it.

The ABC algorithm consists of four main steps: initialization, employed bee phase, onlooker bee phase, and scout bee phase. After the initialization step, the other three main steps of the algorithm are carried out repeatedly in a loop until the termination condition is met. The main steps of the ABC algorithm are as follows.

Step 1 (initialization). In the initialization step, the ABC generates a randomly distributed population of SN solutions (food sources), where SN also denotes the number of employed or onlooker bees. Let x_i = (x_i1, x_i2, ..., x_iD) represent the ith food source, where D is the problem size. Each food source is generated within the limited range of the jth index by

x_ij = lb_j + r_ij (ub_j − lb_j), (1)

where i = 1, ..., SN, j = 1, ..., D, r_ij is a uniformly distributed random real number in [0, 1], and lb_j and ub_j are the lower and upper bounds for the dimension j, respectively. Moreover, a trial counter for each food source is initialized.
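As a concrete illustration, the initialization step can be sketched in Python as follows; the function and variable names are ours, not from the paper.

```python
import random

def initialize_population(sn, dim, lb, ub):
    """Step 1 sketch: generate SN food sources uniformly at random,
    x_ij = lb_j + r_ij * (ub_j - lb_j), and reset all trial counters."""
    population = [
        [lb[j] + random.random() * (ub[j] - lb[j]) for j in range(dim)]
        for _ in range(sn)
    ]
    trial = [0] * sn  # one trial counter per food source
    return population, trial
```

Each food source is a D-dimensional point drawn independently within the box constraints, matching the uniform sampling of (1).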

Step 2 (employed bee phase). In the employed bee phase, each employed bee visits a food source and generates a neighboring food source in the vicinity of the selected food source. Employed bees search for a new solution, v_i, by performing a local search around each food source as follows:

v_ij = x_ij + φ_ij (x_ij − x_kj), (2)

where j is a randomly selected index and x_k is a randomly chosen food source that is not equal to x_i; that is, k ≠ i. φ_ij is a random number within the range [−1, 1] generated specifically for each i and j combination. A greedy selection is applied between x_i and v_i by selecting the better one.
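A minimal sketch of this neighborhood move, assuming the standard ABC formulation (names are ours):

```python
import random

def neighbor_solution(population, i):
    """Eq. (2) sketch: v_ij = x_ij + phi_ij * (x_ij - x_kj), where only one
    randomly chosen dimension j is perturbed and k is a random partner, k != i."""
    dim = len(population[i])
    j = random.randrange(dim)  # randomly selected index j
    k = random.choice([s for s in range(len(population)) if s != i])  # k != i
    phi = random.uniform(-1.0, 1.0)  # phi_ij in [-1, 1]
    v = list(population[i])  # copy the parent food source
    v[j] = population[i][j] + phi * (population[i][j] - population[k][j])
    return v
```

Only a single dimension of the parent is perturbed, which is what keeps the move a local search around the current food source.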

Step 3 (onlooker bee phase). Unlike the employed bees, onlooker bees select a food source depending on the probability value p_i, which is determined by the nectar amount associated with that food source. The value of p_i is calculated for the ith food source as follows:

p_i = fit_i / Σ_{n=1}^{SN} fit_n, (3)

where fit_i is the fitness value of solution i and is calculated as in (4) for minimization problems:

fit_i = 1 / (1 + f_i) if f_i ≥ 0, and fit_i = 1 + |f_i| otherwise. (4)

Different fitness functions are employed for maximization problems. By using this type of roulette wheel based probabilistic selection, better food sources are more likely to be visited by onlooker bees. Therefore, onlooker bees try to find new candidate food sources around good solutions. Once the onlooker bee chooses a food source, it generates a new solution using (2). Similar to the employed bee phase, a greedy selection is carried out between x_i and v_i.
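The fitness transformation and the roulette-wheel probabilities for minimization might look as follows (a sketch; function names are ours):

```python
def fitness(f):
    """Eq. (4)-style fitness for minimization: smaller objective -> larger fitness."""
    return 1.0 / (1.0 + f) if f >= 0 else 1.0 + abs(f)

def selection_probabilities(objective_values):
    """Eq. (3): p_i = fit_i / sum_n fit_n, so better sources are picked more often."""
    fits = [fitness(f) for f in objective_values]
    total = sum(fits)
    return [ft / total for ft in fits]
```

For example, objective values 0.0, 1.0, and 3.0 map to strictly decreasing selection probabilities, which is exactly the bias toward richer food sources described above.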

Step 4 (scout bee phase). A trial counter is associated with each food source, which records the number of times the food source could not be improved. If a food source cannot be improved for a predetermined number of tries (limit) during the onlooker and employed bee phases, then the employed bee associated with that food source becomes a scout bee. Then, the scout bee finds a new food source using (1). By implementing the scout bee phase, the ABC algorithm easily escapes from local minima and improves its diversification performance.
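The abandonment logic of the scout bee phase can be sketched as follows (a simplified per-source version; names are ours):

```python
import random

def scout_phase(population, trial, limit, lb, ub):
    """Step 4 sketch: abandon any food source whose trial counter reached
    `limit` and replace it with a new random solution, as in Eq. (1)."""
    for i in range(len(population)):
        if trial[i] >= limit:
            population[i] = [
                lb[j] + random.random() * (ub[j] - lb[j])
                for j in range(len(population[i]))
            ]
            trial[i] = 0  # the fresh source starts with a clean counter
```

The random restart is what lets the swarm leave a stagnant region of the search space.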

It should be noted that, in the employed bee phase, a local search is applied to each food source, whereas in the onlooker bee phase better food sources are more likely to be updated. Therefore, in the ABC algorithm, the employed bee phase is responsible for diversification, whereas the onlooker bee phase is responsible for intensification. The flow chart of the ABC is given in Figure 1.

3. Proposed Framework

In this section, the proposed algorithm is described in detail. First, a solution acceptance rule is presented. Second, a novel probabilistic multisearch mechanism is proposed. Finally, the complete ABC-SA mechanism is given.

3.1. Solution Acceptance Rule

In order to strengthen the diversification ability of the ABC-SA mechanism, a solution acceptance rule is proposed. Instead of greedy selection in both employed and onlooker bee phases, an acceptance probability is given to worse solutions. The main idea behind this acceptance probability is not to restrict the search moves to only better solutions. By accepting a worse solution, the procedure may escape from a local optimum and explore the search space effectively. In the ABC-SA algorithm, if a worse solution is generated, it is accepted if the following condition holds:

r < P_accept, (5)

where r is a random real number within [0, 1] and P_accept is the acceptance probability calculated by (6), in which P_0 denotes the initial probability, and iter and Max.iter represent the current iteration number and the maximum iteration number, respectively. According to (6), the acceptance probability is nonlinearly decreased from P_0 to zero during the search process. As can be seen from (6), P_accept = 0 when iter = Max.iter, and the range of P_accept is [0, P_0]. A typical P_accept graph is given in Figure 2 and Algorithm 1 presents the implementation of the solution acceptance rule. At this point, it is important to note that the trial counter is incremented whether a worse candidate solution is accepted or not.

Input: x_i, v_i, P_0, trial_i
Output: x_i
(1) Evaluate f(x_i) and f(v_i). //Set f_old = f(x_i) and f_new = f(v_i)
(2) if f_new < f_old then
(3)   set x_i = v_i
(4)   trial_i = 0
(5) else
(6)   Calculate P_accept. //Use (6).
(7)   Produce a random number r within the range [0, 1]
(8)   if r < P_accept then
(9)     set x_i = v_i
(10)    trial_i = trial_i + 1
(11)  else
(12)    trial_i = trial_i + 1
(13)  end if
(14) end if
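In Python, the acceptance rule of Algorithm 1 could be sketched as below. The exact nonlinear decay of (6) is not reproduced in this extraction, so a quadratic decay from P_0 to zero is assumed here purely for illustration.

```python
import random

def accept_candidate(f_old, f_new, p0, it, max_iter):
    """Algorithm 1 sketch (minimization): better candidates are always taken;
    worse ones are accepted with a probability that shrinks over the run."""
    if f_new < f_old:
        return True
    # Assumed decay: p_accept falls nonlinearly from p0 (it = 0) to 0 (it = max_iter).
    p_accept = p0 * (1.0 - it / max_iter) ** 2
    return random.random() < p_accept
```

Early in the run the rule behaves much like simulated-annealing-style acceptance; by the final iteration it reduces to pure greedy selection.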
3.2. Probabilistic Multisearch Strategy

In standard ABC, a candidate solution is generated using the information of the parent food source with the guidance of the term φ_ij(x_ij − x_kj) in (2). However, there is no guarantee that a better individual influences the candidate solution; therefore, poor convergence speed and intensification performance are possible. In fact, studying search equations is a trending topic in improving the ABC’s performance. Recently, numerous search equations have been proposed [13–16, 19, 20, 37, 38]. It is well known that the balance between diversification and intensification is the most critical aspect of any metaheuristic algorithm.

In the ABC-SA approach, instead of employing a single search mechanism throughout the search process, a probabilistic multisearch mechanism with three different search rules is used. A probabilistic selection is employed using predefined probability parameters to select the search rule within both employed and onlooker bee phases. The three search rules, which were proposed by [7, 13, 14], are presented as follows:

v_ij = x_ij + φ_ij (x_ij − x_kj), (7)

v_ij = x_ij + φ_ij (x_ij − x_kj) + ψ_ij (gbest_j − x_ij), (8)

v_ij = lbest_j + φ_ij (x_ij − x_kj), (9)

where x_i is a food source and j is a randomly selected index. x_k is a randomly chosen food source where k ≠ i. gbest stands for the global best solution, whereas lbest is the best solution in the current population. φ_ij represents a real random number within the range of [−1, 1] [7]. Finally, ψ_ij is a real random number within the range of [0, C], where C is a predetermined number [13].

Equation (7) is the original search rule, which was discussed in previous sections. Equation (8) is presented to improve the intensification capability of ABC; it uses the information provided by the global best solution, which is similar to PSO. In (9), lbest guides the search with the random effect of the term φ_ij(x_ij − x_kj). Equation (7) has an explorative character, whereas (9) favors intensification. On the other hand, (8) explores the search space using the second term and exploits effectively by the third term. Therefore, (8) balances diversification and intensification performance. In summary, the proposed ABC-SA uses three different search rules to achieve a trade-off between diversification and intensification. In ABC-SA, search probabilities p_1, p_2, and p_3 are introduced, such that p_1 + p_2 + p_3 = 1, to select a search rule to be used in the employed and the onlooker bee phases. A roulette wheel method is employed with three cumulative ranges [0, cp_1], (cp_1, cp_2], and (cp_2, 1] assigned to (7), (8), and (9), respectively, where cp_1 = p_1 and cp_2 = p_1 + p_2. Algorithm 2 shows the mechanism of probabilistic multisearch.
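A sketch of three such search rules, original, gbest-guided, and lbest-guided, in Python; the symbol names, the helper signature, and the exact rule forms are our reconstruction, not the paper's notation, and `c` plays the role of the predetermined upper bound on ψ:

```python
import random

def candidate(population, i, lbest, gbest, rule, c=1.5):
    """Sketch of three ABC search rules; one random dimension j is updated.
    rule 7: v_ij = x_ij + phi * (x_ij - x_kj)
    rule 8: rule 7 plus a gbest-guided term psi * (gbest_j - x_ij), psi in [0, c]
    rule 9: v_ij = lbest_j + phi * (x_ij - x_kj)"""
    dim = len(population[i])
    j = random.randrange(dim)
    k = random.choice([s for s in range(len(population)) if s != i])
    phi = random.uniform(-1.0, 1.0)
    diff = population[i][j] - population[k][j]
    v = list(population[i])
    if rule == 7:
        v[j] = population[i][j] + phi * diff
    elif rule == 8:
        psi = random.uniform(0.0, c)
        v[j] = population[i][j] + phi * diff + psi * (gbest[j] - population[i][j])
    else:  # rule 9
        v[j] = lbest[j] + phi * diff
    return v
```

The three branches make the diversification/intensification trade-off explicit: rule 7 ignores elite solutions, rule 8 blends a pull toward the global best into the random move, and rule 9 searches directly around the population's best.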

Input: x_i, cp_1, cp_2, lbest, gbest
Output: v_i
(1) Produce a random number r within the range [0, 1]
(2) if r ≤ cp_1 then //cp_1 and cp_2 are cumulative probabilities
(3)   Produce a new neighbor solution v_i by (7)
(4) else if r ≤ cp_2 then
(5)   Produce a new neighbor solution v_i by (8)
(6) else //r is in the range of (cp_2, 1]
(7)   Produce a new neighbor solution v_i by (9)
(8) end if
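The roulette-wheel choice among the three search equations can be sketched as follows (names are ours; cp1 and cp2 denote the cumulative probabilities):

```python
import random

def choose_search_rule(cp1, cp2):
    """Algorithm 2 sketch: cp1 = p1 and cp2 = p1 + p2 are cumulative
    probabilities; returns which search equation, (7), (8), or (9), to apply."""
    r = random.random()
    if r <= cp1:
        return 7  # original ABC rule, Eq. (7)
    elif r <= cp2:
        return 8  # gbest-guided rule, Eq. (8)
    else:  # r in (cp2, 1]
        return 9  # lbest-guided rule, Eq. (9)
```

Over many calls the three rules are drawn in proportion p1 : p2 : (1 − p1 − p2), which is how the predetermined probabilities steer the diversification/intensification mix.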
3.3. Proposed Approach

Algorithm 3 summarizes the ABC-SA framework. The novel parts of the ABC-SA mechanism are the probabilistic multisearch (Lines 9 and 19) and the solution acceptance rule (Lines 10 and 20) sections.

(1) set control parameters: SN, Max.iter, limit, P_0, p_1, p_2, p_3.
(2) Generate initial population //Use (1).
(3) Evaluate initial population //Calculate fit_i, record local and global best.
(4) set iter = 1
(5) for each food source x_i do, set trial_i = 0 end for
(6) do while iter ≤ Max.iter
(7) //EMPLOYED BEE PHASE
(8)   for each food source x_i do
(9)     Generate a neighbor solution v_i from x_i by Algorithm 2.
(10)    Make a selection between x_i and v_i by Algorithm 1.
(11)  end for
(12) //ONLOOKER BEE PHASE
(13)  Calculate cumulative probability values //Use (3).
(14)  set t = 0, i = 1
(15)  while t < PopSize do
(16)    Produce a random number r within the range [0, 1]
(17)    if r < p_i then
(18)      set t = t + 1
(19)      Generate a neighbor solution v_i from x_i by Algorithm 2.
(20)      Make a selection between x_i and v_i by Algorithm 1.
(21)    end if
(22)    set i = i + 1
(23)    if i > PopSize then set i = 1 end if
(24)  end while
(25) //SCOUT BEE PHASE
(26)  set s = index of maximum(trial) //Find the index that has the maximum trial value.
(27)  if limit ≤ trial_s then
(28)    Replace x_s with a new randomly generated solution //Use (1)
(29)    set trial_s = 0
(30)  end if
(31)  Save necessary information //Record local and global best.
(32)  set iter = iter + 1
(33) end while

4. Computational Results

4.1. Test Instances

In the literature, many test functions with different characters have been used to test algorithms [11, 13, 15, 17, 19–21, 34, 37–40]. Unimodal functions have one local minimum as the global optimum. These functions are generally used to test the intensification ability of algorithms. Multimodal functions have one or more local optima in addition to the global optimum. Therefore, the diversification behavior of algorithms is analyzed on multimodal instances. Separable functions can be written as a sum of functions of one variable, whereas nonseparable functions cannot be reformulated as subfunctions. In this study, to analyze the performance of the proposed ABC-SA algorithm, 13 scalable benchmark functions with dimensions D = 50, 100, and 200 are used and listed in Table 1. They are Rosenbrock, Ackley, Rastrigin, Griewank, Weierstrass, Schwefel 2.26, Shifted Sphere, Shifted Schwefel 1.2, Shifted Rosenbrock, Shifted Rastrigin, Step, Penalized 2, and Alpine. In Table 1, function label, name, type (UN: unimodal and nonseparable, US: unimodal and separable, MS: multimodal and separable, and MN: multimodal and nonseparable), range, and optimal values (f*) are given.


Label | Name | Type | Range | f*
F1 | Rosenbrock | UN | [−2.048, 2.048] | 0
F2 | Ackley | MS | [−32.768, 32.768] | 0
F3 | Rastrigin | MS | [−5.12, 5.12] | 0
F4 | Griewank | MN | [−600, 600] | 0
F5 | Weierstrass | MS | [−0.5, 0.5] | 0
F6 | Schwefel 2.26 | MS | [−500, 500] |
F7 | Shifted Sphere | US | [−100, 100] |
F8 | Shifted Schwefel 1.2 | UN | [−100, 100] |
F9 | Shifted Rosenbrock | MN | [−100, 100] |
F10 | Shifted Rastrigin | MS | [−5, 5] |
F11 | Step | US | [−100, 100] | 0
F12 | Penalized 2 | MN | [−50, 50] | 0
F13 | Alpine | MS | [−10, 10] | 0

4.2. Parameter Settings

Parameter settings may have a great influence on the computational results. The ABC-SA mechanism has seven control parameters: maximum iteration number (Max.iter), initial acceptance probability (P_0), population size (SN), limit, p_1, p_2, and p_3. Maximum iteration number is the termination condition, and P_0 is the initial acceptance probability. First, Max.iter is set to 4,000; limit is set as a function of SN and the problem dimension D [21]; SN is taken to be 40 for the D = 50 and D = 100 problems and 50 for the D = 200 problems [40]; and ψ_ij is set to be a random real number within (0, 1.5) [14]. Then, preliminary experiments were conducted with appropriate combinations of the following parameter values to determine the best settings: P_0 = 0.25, 0.20, 0.15, 0.10, and 0.05; p_1 = 0.2, 0.4, and 0.6; p_2 = 0.2, 0.4, and 0.6; p_3 = 0.2, 0.4, and 0.6.

From the results of the pilot studies, the best-performing settings of P_0, p_1, p_2, and p_3 were identified. Therefore, these parameter settings are used for further experiments.

4.3. Comparison with ABC Variants

In this section, the proposed ABC-SA is implemented and evaluated by benchmarking against other well-known ABC variants, including the original ABC [7], GABC [14], and IABC [13], on problems F1–F13.

The parameters of the test algorithms are set to their original values given in their corresponding papers, except for the maximum number of function evaluations, population size, and limit, which are set to the same values for all ABC variants. ABC-SA, ABC, and GABC implement random initialization mechanisms, whereas IABC employs a chaotic initialization as described in [13]. All algorithms have been simulated in the MATLAB environment and executed on the same computer with an Intel Xeon CPU (2.67 GHz) and 16 GB of memory.

The computational results are presented in Table 2 for the D = 50 problems, Table 3 for the D = 100 problems, and Table 4 for the D = 200 problems. In Tables 2–4, results are given in terms of the mean and standard deviation of the objective values of the global best solutions over the repeated runs. All algorithms were run 30 times with random seeds and the stopping criterion was set to 4,000 iterations, which corresponds to approximately 320,000 function evaluations for the D = 50 and D = 100 problems and 400,000 function evaluations for the D = 200 problems. For a precise and pairwise comparison, the statistical significance of the differences between the means of two algorithms is analyzed using t-tests with the significance level set to 0.05. In Tables 2–4, “+” in the column next to a competing algorithm shows that ABC-SA outperforms the competitor algorithm, “=” indicates that the difference between ABC-SA and the compared algorithm is not statistically significant, and “−” depicts that the competitor algorithm is better than ABC-SA at the 0.05 significance level.
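As an illustration of this pairwise comparison, the significance test could be carried out as follows; the run data here are made up, and a hand-rolled Welch t statistic is used rather than any particular statistics toolbox.

```python
from math import sqrt
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

# Hypothetical best-objective values over 5 repeated runs (lower is better).
runs_abc_sa = [0.11, 0.09, 0.10, 0.12, 0.08]
runs_abc = [0.20, 0.22, 0.19, 0.21, 0.23]
t = welch_t(runs_abc_sa, runs_abc)
# A large |t| (compared with the critical value at alpha = 0.05) would mark
# the difference as significant, i.e., a "+" entry in favor of ABC-SA.
```

In the paper's setting each sample would contain the 30 repeated-run results of one algorithm on one benchmark function.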


Table 2: Mean and standard deviation of the objective values obtained by ABC-SA, ABC, GABC, and IABC on F1–F13 (D = 50), with a significance sign (+/=/−) for each pairwise comparison against ABC-SA.