Mathematical Problems in Engineering

Volume 2018, Article ID 3102628, 16 pages

https://doi.org/10.1155/2018/3102628

## An Improved Artificial Bee Colony Algorithm Based on Factor Library and Dynamic Search Balance

^{1}School of Mechatronics Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan 611731, China

^{2}Institute of Electronic and Information Engineering of UESTC in Guangdong, Guangdong, China

Correspondence should be addressed to Wenjie Yu; wenjie.y@outlook.com

Received 29 July 2017; Revised 12 December 2017; Accepted 20 December 2017; Published 28 January 2018

Academic Editor: Jose J. Muñoz

Copyright © 2018 Wenjie Yu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The artificial bee colony (ABC) algorithm is a relatively new optimization technique that simulates the foraging behavior of honey bee swarms. Due to its simplicity and effectiveness, it has attracted much attention in recent years. However, the ABC search equation is good at global search but poor at local search. Different search equations have been developed to tackle this problem, yet no single algorithm attains the best solution for all optimization problems. Therefore, we propose an improved ABC with a new search equation, which incorporates a global search factor based on the dimension of the optimization problem and a local search factor based on a factor library (FL). Furthermore, to prevent the algorithm from falling into local optima, a dynamic search balance strategy is proposed and applied to replace the scout bee procedure of ABC. The result is a hybrid, fast, and enhanced algorithm, HFEABC. To verify its effectiveness, comprehensive tests among HFEABC, ABC, and ABC variants are conducted on 21 basic benchmark functions and 20 complicated functions from CEC 2017. The experimental results show that HFEABC offers better compatibility with different problems than ABC and some of its variants, and its performance is very competitive.

#### 1. Introduction

In the fields of science and engineering, a wide variety of practical problems can be converted to optimization problems and then settled by optimization techniques. Unfortunately, many of these problems are nonconvex, discontinuous, or nondifferentiable, which makes it extremely hard to find optimal solutions. Over the last two decades, numerous algorithms have been developed to tackle such complex problems. Among them, many are inspired by swarm behaviors, such as ant colony optimization (ACO) [1, 2], particle swarm optimization (PSO) [3], the artificial bee colony algorithm (ABC) [4], cuckoo search [5], and the firefly algorithm [6]. These algorithms, belonging to Swarm Intelligence (SI) [7], have been widely studied and are successfully employed to solve various intricate computational problems, for instance, gait pattern tuning for humanoid robots [2], objective function optimization [8], and relay node deployment [9].

The ABC algorithm, inspired by the foraging behavior of honeybee swarms, is a population-based metaheuristic optimization algorithm [10]. Owing to its effectiveness and simplicity, ABC has been widely employed to solve both continuous and discrete optimization problems since its introduction [11]; we therefore focus on the ABC algorithm in this work. However, it has been pointed out in [12–14] that ABC is prone to poor intensification performance on complicated problems; in other words, it easily suffers from poor convergence. A possible reason is that the search equation employed to produce new candidate solutions has good global search ability but poor local search ability [15], which leads to slow convergence. The global search procedure is associated with the ability to independently explore for the global optimum, while the local search procedure is associated with the ability to exploit existing information to hunt for better solutions. Both mechanisms are extremely important in ABC; therefore, how to further balance and accelerate the two processes is a challenging research topic.

In recent years, several modified or improved algorithms [16] based on the foraging behavior of honey bee swarms have been proposed, for instance, the gbest-guided ABC (GABC) [15], the best-so-far ABC (BSFABC) [17], Bee Swarm Optimization (BSO) [18], qABC [19], and GBABC [20]. These ABC variants show better features than the original ABC in some respects. However, no single algorithm attains the best solution for all optimization problems; some algorithms only perform better than others on particular problems. Accordingly, it is necessary to design a well-improved algorithm.

To address the problems mentioned above, a hybrid, fast, and enhanced ABC method, called HFEABC, is proposed to further improve the performance of ABC. It differs from ABC as follows. First, a global adjustable factor based on the dimension of the optimization problem and the best local search factor chosen from the factor library (FL) are introduced to modify the search process. This strategy is intended to give ABC higher convergence speed, higher robustness, and better compatibility with different problems; however, improving ABC with the FL-based strategy alone may cause the algorithm to fall into a local optimum to a degree. Therefore, a dynamic search balance strategy is proposed to strengthen the global search ability and further balance the exploration and exploitation of the algorithm. This strategy replaces the scout bee phase of the original ABC, so that one key parameter of ABC, *limit*, is discarded. The experimental results verify that our method shows competitive performance.

The rest of this paper is organized as follows. Section 2 gives the main related work on ABC improvements. Section 3 describes the original ABC algorithm. In Section 4, the proposed approach is described in detail. Section 5 presents and discusses the experimental results. Finally, Section 6 provides a summary of this paper.

#### 2. Related Works

In the last decade, many researchers have contributed improvements to ABC due to its simplicity and efficiency, resulting in a large body of work on ABC variants. After a comparative analysis of the literature on improved and modified ABCs, we divide it into two categories. The first focuses on improvements to the solution search equation, while the second studies the effect of hybridizing ABC with other search methods [21].

In the first category, some representative works focus mainly on improvements to the solution search equation. In [15], Zhu and Kwong, inspired by the PSO algorithm, proposed the gbest-guided ABC (GABC) algorithm, which improves local search capability by incorporating knowledge of the global best solution into the solution search equation. The experimental results show that GABC outperforms the basic ABC on some benchmark functions. In lieu of the original solution search equation, a Lévy mutation is utilized in [22] to produce new candidate solutions in the neighborhood of the global best solution of the current population. Akay and Karaboga [16] probed the effects of two elements, the frequency and the magnitude of the perturbation, on the performance of ABC; consequently, a modified ABC algorithm introducing two new parameters controlling both factors was proposed. Banharnsakun et al. [17] proposed an improved ABC variant based on the best-so-far solutions, in which the best-so-far solution is utilized by onlooker bees to guide the search direction. In [23], Gao et al. compared the performance of two ABC variants based on the ABC/best/1 and ABC/best/2 search equations, respectively; the experimental outcomes show that ABC/best/1 appreciably outperforms ABC/best/2. Further, in [24], Gao and Liu put forward a modified ABC (MABC) algorithm adopting the same ABC/best/1 search method, but MABC omits the probabilistic selection method and the scout bee procedure, which differs from the original ABC. Das et al. [25] put forward an ABC variant based on fitness learning and proximity stimuli (FlABCps), which utilizes Rechenberg's 1/5th mutation rule and the information of the top q% food sources to generate a new solution with more than one dimension updated. In [26], Bansal et al. utilized a self-adaptive step size method to adjust the parameters used in the solution update strategy; this improved ABC is called self-adaptive ABC (SAABC), and its *limit* parameter is set adaptively. To enhance the local search ability of ABC, Karaboga and Gorkemli [19] put forward a quick ABC (qABC) algorithm, in which the behavior of the onlookers is changed so that they search around the best food source in the neighborhood. Li and Yin [27] proposed a self-adaptive modification rate to enhance the convergence rate of ABC. Gao et al. [8] developed two new search equations, for the employed bee phase and the onlooker phase, respectively, to balance exploration and exploitation in ABC. Further, Zhou et al. [20], to balance the global and local search abilities and remedy the "oscillation" phenomenon, produced the candidate solution using a Gaussian distribution based on the global best solution. Sharma et al. [28] proposed Lévy Flight ABC (LFABC), where the candidate solution is generated around the best solution by tuning the Lévy flight parameters, and thereby the step sizes, to enhance the local search capability.

The second category concerns hybridization. Kang et al. [29] proposed an improved ABC algorithm based on the Rosenbrock method, developed for multimodal optimization problems. In [30], the Lévy flight random walk was introduced into ABC to perform an additional local search. To extract more useful information from the search experiences of ABC, Gao et al. [21] employed the orthogonal experimental method to compose an orthogonal learning strategy. Wang et al. [31] utilized a pool of distinct solution search strategies that coexist throughout the search process and compete to produce offspring. Aydin [32] conducted a systematic experimental study of the proposed modifications of a few ABC variants to evaluate their impact on algorithm performance and, based on these analyses, developed two new ABC variants using the best schemes tested in the experiments. To efficiently solve optimization problems with different characteristics, Kiran et al. [33] proposed integrating multiple solution update rules into ABC, while Yurtkuran and Emel [34] used a random selection strategy to pick one search strategy from a variety of candidates to balance the global and local search abilities. Ma et al. [35] introduced a modified ABC algorithm that utilizes a life-cycle scheme to generate a dynamically varying population and ensure a proper balance between global and local search.

#### 3. The ABC Algorithm

The ABC algorithm is a population-based metaheuristic that simulates the foraging behavior of honey bee swarms. It is effective and very easy to implement. There are three groups of foraging bees in ABC: employed bees, onlooker bees, and scout bees. The number of employed bees equals the number of onlooker bees, each being half of the colony. Employed bees exploit the food sources, and onlooker bees then decide whether or not to exploit a food source according to the information shared by the employed bees. Scout bees try to find new food sources through random searching. A food source in ABC stands for a possible solution to the optimization problem, and the amount of nectar at the food source represents the quality of that solution. The number of food sources equals the number of employed bees. When the quality of a food source remains unchanged for a predetermined number of trials, the employed bee exploiting it becomes a scout bee; once the scout bee finds a new food source, it becomes an employed bee again.

The main procedure of ABC is as follows:

(1) Initialization.
(2) Evaluate the population.
(3) Repeat:
(4) Employed bee phase.
(5) Onlooker bee phase.
(6) Scout bee phase.
(7) Memorize the best food source found so far.
(8) Until the termination criterion is satisfied.

In the initialization procedure, ABC produces a randomly distributed population of SN solutions (food sources), where SN is half of the colony size and also the number of employed (or onlooker) bees. Let $X_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$ represent the *i*th food source, where *D* is the problem dimension. Each food source is produced within the limited range of the *j*th index by

$$x_{ij} = x_j^{\min} + \mathrm{rand}(0,1)\left(x_j^{\max} - x_j^{\min}\right), \tag{1}$$

where $i \in \{1, 2, \ldots, \mathrm{SN}\}$ and $j \in \{1, 2, \ldots, D\}$, $x_j^{\min}$ and $x_j^{\max}$ are the lower and upper bounds for index *j*, respectively, and $\mathrm{rand}(0,1)$ is a random real number within the range $[0, 1]$.
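As a small illustration of (1), the initialization step can be sketched in Python; `init_population` is an illustrative helper name, not taken from the paper:

```python
import random

def init_population(sn, dim, lb, ub):
    """Eq. (1): x_ij = lb_j + rand(0, 1) * (ub_j - lb_j)."""
    return [[lb[j] + random.random() * (ub[j] - lb[j]) for j in range(dim)]
            for _ in range(sn)]
```

Every component of every food source lands uniformly inside its bound pair, so the initial population covers the search space without bias.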

In the employed bee procedure, a candidate solution $V_i$ is produced by performing a local search around a neighboring food source:

$$v_{ij} = x_{ij} + \varphi_{ij}\left(x_{ij} - x_{kj}\right), \tag{2}$$

where *j* is a randomly selected dimension such that $j \in \{1, \ldots, D\}$ and *k* is a randomly chosen food source such that $k \in \{1, \ldots, \mathrm{SN}\}$ and $k \neq i$. $\varphi_{ij}$ is produced randomly in the range $[-1, 1]$. Then the fitness of $V_i$ and $X_i$ is compared, the one with greater fitness is kept, and the employed bees return to the hive to share the information on new food sources with the onlookers. The fitness is calculated as follows:

$$\mathrm{fit}_i = \begin{cases} \dfrac{1}{1 + f_i}, & f_i \geq 0, \\ 1 + \left|f_i\right|, & f_i < 0, \end{cases} \tag{3}$$

where $\mathrm{fit}_i$ represents the fitness of solution *i* and $f_i$ is the value of the objective function for that solution.
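A minimal sketch of the neighbor search (2) and the fitness transform (3); the function names are illustrative, not from the paper:

```python
import random

def neighbor_solution(pop, i):
    """Eq. (2): v_ij = x_ij + phi * (x_ij - x_kj), phi ~ U(-1, 1);
    only one randomly chosen dimension j is perturbed."""
    dim = len(pop[i])
    j = random.randrange(dim)
    k = random.choice([s for s in range(len(pop)) if s != i])
    v = list(pop[i])
    v[j] = pop[i][j] + random.uniform(-1.0, 1.0) * (pop[i][j] - pop[k][j])
    return v

def fitness(f):
    """Eq. (3): maps an objective value (minimization) to a fitness."""
    return 1.0 / (1.0 + f) if f >= 0 else 1.0 + abs(f)
```

Note that (3) turns a minimization objective into a maximization-style fitness: smaller nonnegative objective values map to fitness closer to 1, and negative values map above 1.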

In the onlooker bee procedure, a food source is chosen according to the probability value $p_i$, which is calculated as follows:

$$p_i = \frac{\mathrm{fit}_i}{\sum_{n=1}^{\mathrm{SN}} \mathrm{fit}_n}. \tag{4}$$

Using this scheme, food sources with greater fitness are more likely to be chosen by onlookers for update. Once an onlooker selects a food source, it produces a candidate solution $V_i$ using (2). Then, the selection procedure of the employed bee phase is conducted on $V_i$ and $X_i$, and the one with greater fitness is kept.
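Putting the phases together, a compact didactic implementation of the basic ABC cycle might look as follows. This is a sketch of the standard algorithm only (not the paper's HFEABC); the function name, defaults, and the test objective are our own choices, and the onlooker probabilities are computed once per cycle as a common simplification:

```python
import random

def abc_minimize(f, sn=10, dim=2, lb=-5.0, ub=5.0, limit=20, max_cycles=100):
    """Basic ABC loop: employed, onlooker, and scout phases each cycle."""
    fit = lambda y: 1.0 / (1.0 + y) if y >= 0 else 1.0 + abs(y)  # eq. (3)
    pop = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(sn)]
    vals = [f(x) for x in pop]
    trials = [0] * sn
    best = min(vals)

    def try_update(i):
        nonlocal best
        j = random.randrange(dim)
        k = random.choice([s for s in range(sn) if s != i])
        v = list(pop[i])
        v[j] += random.uniform(-1.0, 1.0) * (pop[i][j] - pop[k][j])  # eq. (2)
        v[j] = min(max(v[j], lb), ub)            # keep within bounds
        fv = f(v)
        if fit(fv) > fit(vals[i]):               # greedy selection
            pop[i], vals[i], trials[i] = v, fv, 0
            best = min(best, fv)
        else:
            trials[i] += 1

    for _ in range(max_cycles):
        for i in range(sn):                      # employed bee phase
            try_update(i)
        fits = [fit(y) for y in vals]
        total = sum(fits)
        for _ in range(sn):                      # onlooker bee phase, eq. (4)
            r, acc, chosen = random.random(), 0.0, sn - 1
            for idx, p in enumerate(fits):
                acc += p / total
                if r <= acc:
                    chosen = idx
                    break
            try_update(chosen)
        worst = max(range(sn), key=lambda n: trials[n])   # scout bee phase
        if trials[worst] > limit:
            pop[worst] = [random.uniform(lb, ub) for _ in range(dim)]
            vals[worst] = f(pop[worst])
            trials[worst] = 0
            best = min(best, vals[worst])
    return best
```

On a simple 2-D sphere function this skeleton converges quickly, which is consistent with the good global search behavior of the basic ABC described above.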

#### 4. Proposed Algorithm: HFEABC

In this section, the proposed algorithm is introduced in detail. First, a new search equation is presented that combines a dynamically adaptive global search part, based on the dimension of the problem, with a local search part based on a factor library. Second, to prevent the algorithm from falling into local optima, a dynamic search balance strategy is proposed to replace the scout bee phase of the original ABC.

##### 4.1. A New Search Equation Based on Factor Library

As is well known, for any metaheuristic algorithm, the balance between global search and local search is one of the most critical mechanisms. Global search refers to the capability of searching for the global optimal solution in the entire solution space, while local search refers to the capability of employing the information of previous solutions to find better ones. These two opposing procedures must be well adjusted to achieve the desired optimization results. In ABC, because a new solution is produced using the knowledge of the previous food source under the guidance of the term $\varphi_{ij}(x_{ij} - x_{kj})$ in (2), and $\varphi_{ij}$ is a random number within $[-1, 1]$, there is no guarantee that a better individual influences the candidate solution. This search equation has good global search ability but neglects local search ability, which may lead to poor convergence speed and intensification performance. To overcome such issues, Zhu and Kwong [15] proposed GABC with a new search strategy:

$$v_{ij} = x_{ij} + \varphi_{ij}\left(x_{ij} - x_{kj}\right) + \psi_{ij}\left(y_j - x_{ij}\right), \tag{5}$$

where *j*, $x_{kj}$, and $\varphi_{ij}$ are yielded in the same manner as in (2), $\psi_{ij}$ is a uniform random number within $[0, C]$, where *C* is a nonnegative constant, and $y_j$ is the *j*th element of the global best solution. This new search equation improves the local search ability somewhat without harming the global search ability, and the experimental results show that it is superior to the original one on some test functions. However, the scheme of employing $\psi_{ij}$ still brings some inefficiency to the search capability of the algorithm and decelerates convergence [21]. To analyze this in detail, we rewrite (5) as

$$v_{ij} = x_{ij} + \underbrace{\varphi_{ij}\left(x_{ij} - x_{kj}\right)}_{\text{exploration}} + \underbrace{\psi_{ij}\left(y_j - x_{ij}\right)}_{\text{exploitation}}. \tag{6}$$
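The GABC update (5) can be sketched as follows; the function name is illustrative, and `C=1.5` is used here as a commonly cited setting from [15] rather than a prescription:

```python
import random

def gabc_candidate(pop, gbest, i, C=1.5):
    """Eq. (5): v_ij = x_ij + phi*(x_ij - x_kj) + psi*(y_j - x_ij),
    with phi ~ U(-1, 1) and psi ~ U(0, C); gbest is the global best."""
    dim = len(pop[i])
    j = random.randrange(dim)
    k = random.choice([s for s in range(len(pop)) if s != i])
    v = list(pop[i])
    phi = random.uniform(-1.0, 1.0)
    psi = random.uniform(0.0, C)
    v[j] = (pop[i][j] + phi * (pop[i][j] - pop[k][j])
            + psi * (gbest[j] - pop[i][j]))
    return v
```

The extra $\psi_{ij}(y_j - x_{ij})$ term pulls the candidate toward the global best, which is exactly the exploitation component identified in (6).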

In (6), it is clear that $\varphi_{ij}(x_{ij} - x_{kj})$ stands for exploration and $\psi_{ij}(y_j - x_{ij})$ stands for exploitation. It is important to note that this algorithm mixes the two abilities randomly throughout the optimization process. However, in a typical optimization procedure, global search matters more in the early stage, in order to avoid being trapped in a local optimum; in the late stage, local search becomes more important, because better local search ability means faster convergence and a more accurate result. In addition, although several different search equations have been proposed in [17, 36–39], each only provides a better solution than the others for some specific problems. Therefore, it is necessary to seek a well-improved method that is compatible with different problems. To this end, we redesign this search equation to obtain a good balance between exploration and exploitation with a higher convergence speed, while remaining suited to different problems.

First, we consider reducing the global search ability to accelerate convergence to a degree; we therefore scale the exploration term $\varphi_{ij}(x_{ij} - x_{kj})$ in (6) by a parameter $\alpha$ that controls the global search ability. $\alpha$ is defined in terms of *D*, the dimension of the problem, and *l*, a linear parameter defined in terms of iter, the current number of iterations, and Maxcycle, the maximum number of iterations.

It can be noted that the global search ability decreases as *l* increases, while the higher the dimension of the problem, the stronger the global search ability of the algorithm.

Second, with the objective of strengthening robustness and further improving the convergence speed of the algorithm, the exploitation coefficient $\psi_{ij}$ is redefined in terms of *C*, a positive constant as described in [15], and the value of the best factor chosen from the FL. It is noticed that the most appropriate factor differs from problem to problem; therefore, we propose the concept of the FL, a library of candidate local search factors. The factors contained in the FL are derived by experiment and are detailed in Section 5.3. Finally, the new search equation (11) is obtained by combining the $\alpha$-scaled exploration term with the FL-based exploitation term.
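As a hedged sketch of the combined update (11): the exact forms of $\alpha$ and of the FL-scaled coefficient are given by the paper's equations (7)–(10), so here both are taken as plain inputs, and drawing $\psi$ uniformly from $[0, C \cdot F]$ (with *F* the chosen FL factor) is an illustrative assumption of ours, not the paper's definition:

```python
import random

def hfeabc_candidate(x, x_k, gbest, alpha, C, F):
    """Sketch of the HFEABC-style update: exploration scaled by alpha,
    gbest term scaled via psi ~ U(0, C * F).
    NOTE: alpha's formula and the U(0, C*F) form are assumptions here."""
    dim = len(x)
    j = random.randrange(dim)
    v = list(x)
    phi = random.uniform(-1.0, 1.0)
    psi = random.uniform(0.0, C * F)
    v[j] = x[j] + alpha * phi * (x[j] - x_k[j]) + psi * (gbest[j] - x[j])
    return v
```

Setting `alpha` small late in the run damps exploration, while a well-chosen factor *F* strengthens the pull toward the global best, which is the balance the section describes.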

##### 4.2. Dynamic Search Balance Strategy

Real-world problems are complex, and it cannot be known in advance when an algorithm will find the best solutions. To speed up convergence and obtain a more accurate result, (11) reduces the global search ability somewhat while strengthening the local search ability. However, this may lead to premature convergence. To remedy this, the dynamic search balance strategy, which provides good global search ability, is proposed as a replacement for the scout bee procedure of ABC. In this strategy, the position of the global best solution is monitored. If *GlobalMin* is updated, *GlobalSearch* is set to false and (11) is used to generate the candidate solution. If *GlobalMin* is not updated, the swarm of bees (the onlooker bees and the employed bees) switches to the search equation (2) proposed in the original ABC, which has strong global search ability [15]. Then, all the solutions are sorted; one solution is selected with a certain probability and processed with the GOBL strategy [20]. The main idea behind GOBL is that when a candidate solution *S* to a given problem is evaluated, simultaneously computing its opposite solution $S^*$ provides a higher chance for $S^*$ to be closer to the global optimum than a randomly generated solution. This is beneficial for preserving the search experiences and improving the efficiency of the algorithm. One more important point of our approach is that the key parameter *limit* of ABC is eliminated. The process of choosing the search equation is given in Algorithm 1.
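The opposite-solution computation at the core of GOBL can be sketched as follows. Following the generalized opposition-based learning idea used in [20], each component $x_j$ is mapped to $k(a_j + b_j) - x_j$ for a random $k \in [0, 1]$, where $[a_j, b_j]$ is the current search interval in dimension *j*; the function name is ours:

```python
import random

def gobl_opposite(x, a, b, k=None):
    """Generalized opposition: x*_j = k * (a_j + b_j) - x_j, k ~ U(0, 1).
    a and b are the per-dimension lower and upper interval bounds."""
    if k is None:
        k = random.random()
    return [k * (a[j] + b[j]) - x[j] for j in range(len(x))]
```

With `k = 1` this reduces to the classic opposite point $a_j + b_j - x_j$; randomizing *k* spreads the opposite candidates over the interval, which is what gives the evaluated pair $(S, S^*)$ a better chance of landing near the optimum than a single random sample.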