Abstract

The artificial bee colony (ABC) algorithm is a relatively new optimization technique that simulates the foraging behavior of honey bee swarms. Due to its simplicity and effectiveness, it has attracted much attention in recent years. However, the ABC search equation is good at global search but poor at local search. Several alternative search equations have been developed to tackle this problem, but no single algorithm attains the best solution for all optimization problems. Therefore, we propose an improved ABC with a new search equation, which incorporates a global search factor based on the dimension of the optimization problem and a local search factor based on a factor library (FL). Furthermore, to prevent the algorithm from falling into local optima, a dynamic search balance strategy is proposed and applied in place of the scout bee procedure of ABC. The result is a hybrid, fast, and enhanced algorithm, HFEABC. To verify its effectiveness, comprehensive tests comparing HFEABC with ABC and its variants are conducted on 21 basic benchmark functions and 20 complicated functions from CEC 2017. The experimental results show that HFEABC offers better compatibility with different problems than ABC and several of its variants, and that its performance is very competitive.

1. Introduction

In science and engineering, a wide variety of real problems can be converted into optimization problems and then solved by optimization techniques. Unfortunately, many of these problems are nonconvex, discontinuous, or nondifferentiable, which makes finding optimal solutions extremely hard. Over the last two decades, numerous algorithms have been developed to tackle such complex problems. Many of them are inspired by swarm behaviors, such as ant colony optimization (ACO) [1, 2], particle swarm optimization (PSO) [3], the artificial bee colony algorithm (ABC) [4], cuckoo search [5], and the firefly algorithm [6]. These algorithms belong to Swarm Intelligence (SI) [7] and have been widely employed to solve various intricate computational problems, for instance, gait pattern tuning for humanoid robots [2], objective function optimization [8], and relay node deployment [9].

The ABC algorithm, inspired by the foraging behavior of honeybee swarms, is a population-based metaheuristic optimization algorithm [10]. Owing to its effectiveness and simplicity, ABC has been widely employed to solve both continuous and discrete optimization problems since its introduction [11]. We therefore focus on the ABC algorithm in this work. However, as pointed out in [12–14], ABC tends to suffer poor intensification performance on complicated problems; in other words, it easily encounters poor convergence. The likely reason is that the search equation used to produce new candidate solutions has good global search ability but poor local search ability [15], which leads to slow convergence. The global search procedure is associated with the ability to explore independently for the global optimum, while the local search procedure is associated with the ability to apply existing information to hunt for better solutions. Both mechanisms are extremely important in ABC, so how to further balance and accelerate the two processes is a challenging research topic.

In recent years, a number of modified or improved algorithms [16] based on the foraging behavior of honey bee swarms have been proposed, for instance, the gbest-guided ABC (GABC) [15], the best-so-far ABC (BSFABC) [17], Bee Swarm Optimization (BSO) [18], qABC [19], and GBABC [20]. These ABC variants show better features than the original ABC in some respects. However, no single algorithm attains the best solution for all optimization problems; some algorithms only perform better than others on particular problems. It is therefore necessary to design a well-improved algorithm.

To address the problems mentioned above, a hybrid, fast, and enhanced ABC method, called HFEABC, is proposed to further improve the performance of ABC. It differs from ABC as follows. First, a global adjustable factor based on the dimension of the optimization problem and the best local search factor chosen from the factor library (FL) are introduced to modify the search process. This strategy improves ABC in terms of convergence speed and robustness and makes it more compatible with different problems; however, improving ABC using only the FL-based strategy may cause the algorithm to fall into a local optimum to some degree. Therefore, a dynamic search balance strategy is proposed to strengthen the global search ability and further balance the exploration and exploitation of the algorithm. This strategy replaces the scout bee phase of the original ABC, so that the key parameter limit of ABC is discarded. The experimental results verify that our method delivers competitive performance.

The rest of this paper is organized as follows. Section 2 gives the main related work on ABC improvements. Section 3 describes the original ABC algorithm. In Section 4, the proposed approach is described in detail. Section 5 presents and discusses the experimental results. Finally, Section 6 provides a summary of this paper.

2. Related Work

In the last decade, many researchers have contributed improvements to ABC due to its simplicity and efficiency, resulting in a large body of work on ABC variants. After a comparative analysis of the literature on improved and modified ABCs, we divide it into two categories. The first concerns improvements of the solution search equation, while the other studies the effect of hybridizing ABC with other search methods [21].

For the first category, we may cite some representative works focusing mainly on improvements of the solution search equation. In [15], Zhu and Kwong, inspired by the PSO algorithm, proposed the gbest-guided ABC (GABC) algorithm, which improves the local search capability by incorporating the knowledge of the global best solution into the solution search equation. The experimental results show that GABC outperforms the basic ABC on some benchmark functions. In lieu of the original solution search equation, a Lévy mutation is utilized in [22] to produce new candidate solutions in the neighborhood of the global best solution of the current population. Akay and Karaboga [16] probed the effects of two elements, the frequency and the magnitude of the perturbation, on the performance of ABC; consequently, they proposed a modified ABC algorithm that introduces two new parameters controlling both factors. Banharnsakun et al. [17] proposed an improved ABC variant based on the best-so-far solutions, in which a best-so-far-solution-based method is utilized by the onlooker bees to determine the search direction. In [23], Gao et al. compared the performance of two ABC variants based on the ABC/best/1 and ABC/best/2 search equations, respectively; the experimental outcomes show that ABC/best/1 appreciably outperforms ABC/best/2. Further, in [24], Gao and Liu put forward a modified ABC (MABC) algorithm adopting the same ABC/best/1 search method, but MABC omits the probabilistic selection method and the scout bee procedure, which differs from the original ABC. Das et al. [25] put forward an ABC variant based on fitness learning and proximity stimuli (FlABCps), which utilizes Rechenberg's 1/5th mutation rule and the information of the top q% food sources to generate a new solution with more than one dimension updated. In [26], Bansal et al.
considered utilizing a self-adaptive step size method to adjust the parameters used in the solution update strategy. This improved ABC is called self-adaptive ABC (SAABC), and its limit parameter is made adaptive. In order to enhance the local search ability of ABC, Karaboga and Gorkemli [19] put forward a quick ABC (qABC) algorithm, in which the behavior of the onlookers is changed so that they search the neighborhood of the best food source. Li and Yin [27] proposed a self-adaptive modification rate to enhance the convergence rate of ABC. Gao et al. [8] developed two new search equations, for the employed bee phase and the onlooker phase, respectively, to balance exploration and exploitation in ABC. Further, in Zhou et al. [20], to balance the global and local search abilities and remedy the "oscillation" phenomenon, the candidate solution is produced utilizing a Gaussian distribution based on the global best solution. In Sharma et al. [28], Lévy Flight ABC (LFABC) is proposed, where the candidate solution is generated around the best solution by tuning the Lévy flight parameters, and thereby the step sizes, to enhance the local search capability.

The second category concerns hybridization. Kang et al. [29] proposed an improved ABC algorithm based on the Rosenbrock method, developed for multimodal optimization problems. In [30], the Lévy flight random walk was introduced into ABC to perform an additional local search. In order to extract more useful information from the search experiences of ABC, Gao et al. [21] employed the orthogonal experimental method to compose an orthogonal learning strategy. Wang et al. [31] utilized a pool of distinct solution search strategies coexisting throughout the search process and competing to produce offspring. Aydin [32] conducted a systematic experimental study of the proposed modifications of a few ABC variants to evaluate their impact on algorithm performance and, based on these analyses, developed two new ABC variants using the best schemes tested in the experiments. To efficiently solve optimization problems with different characteristics, Kiran et al. [33] proposed integrating multiple solution update rules with ABC, while Yurtkuran and Emel [34] used a random selection strategy to pick one solution search strategy from a set of strategies to balance the global and local search abilities. Ma et al. [35] introduced a modified ABC algorithm that utilizes a life cycle scheme to generate a dynamically varying population and ensure a proper balance between the global search and the local search.

3. The ABC Algorithm

The ABC algorithm is a population-based metaheuristic algorithm that simulates the foraging behavior of honey bee swarms. The technique is effective and very easy to implement. There are three groups of foraging bees in ABC: employed bees, onlooker bees, and scout bees. The number of employed bees equals the number of onlooker bees, each group being half of the colony. The responsibility of the employed bees is to exploit the food sources; the onlooker bees then decide whether or not to exploit a food source according to the information shared by the employed bees. Scout bees try to find new food sources through random searching. A food source in ABC stands for a possible solution to the optimization problem, and the amount of nectar on a food source represents the quality of that solution. The number of food sources equals the number of employed bees. When the quality of a food source remains unchanged for a predetermined number of cycles, the employed bee exploiting it becomes a scout bee, and once the scout bee finds a new food source, it becomes an employed bee again.

The main procedure of the ABC is as follows:
(1) Initialization.
(2) Evaluate the population.
(3) Loop:
(4) Employed bee phase.
(5) Onlooker bee phase.
(6) Scout bee phase.
(7) Memorize the best food source.
(8) Until the termination criterion is satisfied.

In the initialization procedure, the ABC produces a randomly distributed population of SN solutions (food sources), where SN is half of the colony size and also represents the number of employed (or onlooker) bees. Let X_i = (x_i1, x_i2, ..., x_iD) represent the ith food source, where D is the problem dimension. Each food source is produced within the limited range of the jth index by

x_ij = lb_j + rand(0, 1) · (ub_j − lb_j),  (1)

where i ∈ {1, 2, ..., SN} and j ∈ {1, 2, ..., D}, lb_j and ub_j are the lower and upper bounds for index j, respectively, and rand(0, 1) is a random real number within the range [0, 1].
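As a minimal illustration of the initialization rule (1), the following Python sketch (the function and parameter names are ours, not the paper's) draws SN food sources uniformly at random within the per-dimension bounds:

```python
import random

def init_population(sn, dim, lb, ub, rng=None):
    """Eq. (1): x_ij = lb_j + rand(0, 1) * (ub_j - lb_j)."""
    rng = rng or random.Random(42)
    return [[lb[j] + rng.random() * (ub[j] - lb[j]) for j in range(dim)]
            for _ in range(sn)]

# Example: 20 food sources for a 5-dimensional problem on [-100, 100]^5.
pop = init_population(sn=20, dim=5, lb=[-100.0] * 5, ub=[100.0] * 5)
```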

In the employed bee procedure, a candidate solution v_i is produced by performing a local search around a neighboring food source:

v_ij = x_ij + φ_ij · (x_ij − x_kj),  (2)

where j ∈ {1, 2, ..., D} is a randomly selected dimension and k ∈ {1, 2, ..., SN} is a randomly chosen food source with k ≠ i. φ_ij is produced randomly in the range [−1, 1]. The fitness values of v_i and x_i are then compared, the one with greater fitness is kept, and the employed bees return to the hive to share the information about new food sources with the onlookers. The fitness is calculated as

fit_i = 1 / (1 + f_i) if f_i ≥ 0, and fit_i = 1 + |f_i| otherwise,  (3)

where fit_i represents the fitness of solution i, f_i is the objective function value, and i ∈ {1, 2, ..., SN}.
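Equations (2) and (3) can be sketched as follows; the helper names are hypothetical, and the greedy comparison between the candidate and the original solution is left to the caller:

```python
import random

def neighborhood_search(pop, i, rng):
    """Eq. (2): perturb one random dimension j of solution i toward/away
    from a random neighbor k != i: v_ij = x_ij + phi * (x_ij - x_kj)."""
    dim = len(pop[i])
    j = rng.randrange(dim)
    k = rng.choice([m for m in range(len(pop)) if m != i])
    phi = rng.uniform(-1.0, 1.0)
    v = list(pop[i])
    v[j] = pop[i][j] + phi * (pop[i][j] - pop[k][j])
    return v

def fitness(f_val):
    """Eq. (3): map an objective value (minimization) to a fitness."""
    return 1.0 / (1.0 + f_val) if f_val >= 0 else 1.0 + abs(f_val)
```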

In the onlooker bee procedure, a food source is chosen depending on the probability value p_i, which is computed as

p_i = fit_i / Σ_{n=1..SN} fit_n.  (4)

Under this scheme, food sources with greater fitness values are more likely to be chosen by the onlookers for update. Once an onlooker selects a food source x_i, it produces a candidate solution v_i using (2). Then the selection procedure of the employed bee phase is applied to v_i and x_i, and the one with greater fitness is kept.
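The fitness-proportional selection of Eq. (4) amounts to a roulette wheel; a small sketch (function names are ours) picks the first index whose cumulative probability exceeds a uniform random draw:

```python
def select_index(probs, r):
    """Roulette wheel: return the first index whose cumulative
    probability exceeds r (r drawn uniformly from [0, 1))."""
    c = 0.0
    for i, p in enumerate(probs):
        c += p
        if r < c:
            return i
    return len(probs) - 1  # guard against floating-point round-off

# Eq. (4): probabilities proportional to fitness.
fits = [1.0, 3.0, 6.0]
probs = [f / sum(fits) for f in fits]  # [0.1, 0.3, 0.6]
```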

4. Proposed Algorithm: HFEABC

In this section, the proposed algorithm is introduced in detail. First, a new search equation is presented that combines a dynamic, adaptive global search part based on the dimension of the problem with a local search part based on a factor library. Second, in order to prevent the algorithm from falling into local optima, the dynamic search balance strategy is proposed as a replacement for the scout bee phase of the original ABC.

4.1. A New Search Equation Based on Factor Library

For any metaheuristic algorithm, the balance between global search and local search is one of the most critical mechanisms. Global search refers to the capability of searching for the global optimal solution in the entire solution space, and local search refers to the capability of employing the information of previous solutions to find better solutions. The two procedures work against each other and must be well adjusted to achieve the desired optimization results. In ABC, a new solution is produced using the knowledge of the previous food source under the guidance of the term φ_ij · (x_ij − x_kj) in (2); because φ_ij is a random number within [−1, 1], there is no guarantee that a better individual influences the candidate solution. This search equation has good global search ability but neglects local search ability, which may lead to poor convergence speed and intensification performance. To overcome these issues, Zhu and Kwong [15] proposed GABC with a new search strategy:

v_ij = x_ij + φ_ij · (x_ij − x_kj) + ψ_ij · (y_j − x_ij),  (5)

where x_ij, x_kj, and φ_ij are generated in the same manner as in (2), ψ_ij is a uniform random number within [0, C] for a nonnegative constant C, and y_j is the jth element of the global best solution. This new search equation improves the local search ability without harming the global search ability, and the experimental results show that it is superior to the original one on some test functions. However, the scheme still brings some inefficiency to the search capability of the algorithm and decelerates convergence [21]. To analyze this in detail, we rewrite (5) as

v_ij = x_ij + [φ_ij · (x_ij − x_kj)] + [ψ_ij · (y_j − x_ij)].  (6)
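A hedged sketch of the GABC update (5), with φ drawn from [−1, 1] and ψ from [0, C] as in [15] (function and variable names are ours):

```python
import random

def gabc_update(x_i, x_k, gbest, j, rng, C=1.5):
    """GABC, Eq. (5): v_ij = x_ij + phi*(x_ij - x_kj) + psi*(y_j - x_ij),
    with phi ~ U(-1, 1) and psi ~ U(0, C); only dimension j changes."""
    phi = rng.uniform(-1.0, 1.0)
    psi = rng.uniform(0.0, C)
    v = list(x_i)
    v[j] = x_i[j] + phi * (x_i[j] - x_k[j]) + psi * (gbest[j] - x_i[j])
    return v

# Example: perturb dimension 1 of x_i toward the global best.
cand = gabc_update([1.0, 2.0], [0.0, 0.0], [5.0, 5.0], 1, random.Random(0))
```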

In (6), the term φ_ij · (x_ij − x_kj) stands for exploration and the term ψ_ij · (y_j − x_ij) stands for exploitation. It is important to note that this formulation assumes a random balance between the two throughout the optimization process. However, in a typical optimization procedure, global search matters more than local search in the early stage, in order to avoid becoming trapped in a local optimum, while in the late stage local search matters more, because better local search ability means a higher convergence speed and a more accurate result. In addition, although several different search equations have been proposed in [17, 36–39], each of them supplies better solutions than the others only on some specific problems. It is therefore necessary to seek a well-designed method that is compatible with different problems. To this end, we redesign this search equation to obtain a good balance between exploration and exploitation with a higher convergence speed, while remaining suited to different problems.

First, we reduce the global search ability in order to accelerate the convergence speed to a degree; the exploration term in (6) is therefore modified to

ω · φ_ij · (x_ij − x_kj),  (7)

where ω is the parameter that controls the global search ability. It is defined, in Eq. (8), in terms of the problem dimension D and a linear parameter l, with l given by

l = iter / Maxcycle,  (9)

where iter is the current number of iterations and Maxcycle is the maximum number of iterations.

It can be noted that the global search ability will decrease as l increases, but the higher the dimension of the problem, the stronger the global search ability of the algorithm.
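The paper's exact formula for the global-search control parameter is not reproduced here; the sketch below uses an assumed schedule that merely satisfies the two stated properties, decreasing as l grows and increasing with the problem dimension D, purely for illustration:

```python
import math

def omega(iteration, maxcycle, dim):
    """Assumed (illustrative) schedule for the global-search weight:
    l = iteration / maxcycle rises linearly from 0 to 1, the weight
    falls with l and grows with the problem dimension dim. This is a
    stand-in, not the paper's Eq. (8)."""
    l = iteration / maxcycle              # linear parameter, Eq. (9)
    return (1.0 - l) * (1.0 + math.log10(dim))
```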

Second, with the objective of strengthening robustness and further improving the convergence speed of the algorithm, the exploitation term is developed, in Eq. (10), from two quantities: C, a positive constant as described in [15], and F, the value of the best factor chosen from the FL. Notice that the most appropriate factor differs from problem to problem; this is why we propose the concept of the FL. The FL contains seven factors built from elementary functions; these factors are derived experimentally and detailed in Section 5.3. Finally, the full search equation, Eq. (11), combines the modified exploration term (7) with the exploitation term (10).

4.2. Dynamic Search Balance Strategy

Real-world problems are complex, and it cannot be known in advance when an algorithm will find the best solutions. Aimed at speeding up convergence and obtaining a more accurate result, in (11) the global search ability is somewhat reduced while the local search ability is strengthened. However, this may lead to premature convergence. To improve this situation, the dynamic search balance strategy, which provides good global search ability, is proposed. This strategy replaces the scout bee procedure of ABC. In this strategy, the position of the global best solution is monitored. If GlobalMin is updated, GlobalSearch is set to false and (11) is used to generate the candidate solution. If GlobalMin is not updated, the swarm of bees (the onlooker bees and the employed bees) instead uses the search equation of the original ABC, which has a strong global search ability [15]. Then all the solutions are sorted; one solution is selected with a certain probability and processed with the GOBL strategy [20]. The main idea behind GOBL is that when a candidate solution S to a given problem is evaluated, simultaneously computing its opposite solution S* provides a higher chance for S* to be closer to the global optimum than a randomly generated solution. It is also beneficial to preserve the search experiences for the efficiency of the algorithm. One more important point of our approach is that the key parameter limit of ABC is eliminated. The selection between the search equations is given in Algorithm 1.

Input: a new solution set P.
(1) if GlobalMin is updated then
(2)    GlobalSearch ← false
(3) else
(4)    GlobalSearch ← true
(5)    Sort all the solutions according to their fitness values in ascending order; each solution X_i gets its rank order r_i.
(6)    Calculate a selection range for each solution X_i.
(7)    Generate a random real number R in (0, 1).
(8)    if R is in the selection range of some solution X_i then
(9)       Do GOBL on solution X_i.
(10)   end if
(11) end if

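The opposite-solution computation at the heart of GOBL [20] can be sketched as follows; here the per-dimension interval is taken from the current population and the generalized coefficient k is uniform in [0, 1] (the paper's exact variant may differ, and the names are ours):

```python
import random

def gobl_opposite(x, pop, rng):
    """Generalized opposition-based learning sketch: reflect x inside
    the population's current per-dimension interval [a_j, b_j] via
    x*_j = k * (a_j + b_j) - x_j, with k ~ U(0, 1) per dimension."""
    opp = []
    for j in range(len(x)):
        a = min(s[j] for s in pop)   # dynamic lower bound of dim j
        b = max(s[j] for s in pop)   # dynamic upper bound of dim j
        k = rng.random()
        opp.append(k * (a + b) - x[j])
    return opp
```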
4.3. Pseudocode of HFEABC

Compared with the original ABC, HFEABC makes three modifications. First, in the employed and onlooker bee procedures, the original search equation is replaced by the newly designed one. Second, the dynamic search balance strategy is employed to enhance the global search ability of the algorithm. Third, the scout bee procedure of the original ABC is replaced by this dynamic search balance strategy. The pseudocode of HFEABC is depicted in Algorithm 2.

(1) Parameter initialization (PS = 2SN, Maxcycle = maximum number of iterations, C = 1.5).
(2) Produce the initial population by Eq. (1).
(3) Assess the initial population. // Calculate and memorize the global best.
(4) Set iter = 1.
(5) for each food source X_i do set … end for
(6) do while iter ≤ Maxcycle
(7) /* EMPLOYED BEE PROCEDURE */
(8)    for each X_i do
(9)       if GlobalSearch = false then
(10)         Produce v_i by Eq. (11).
(11)      else
(12)         Produce v_i by Eq. (2).
(13)      end if
(14)      Evaluate v_i; if it is better, replace the original one.
(15)   end for
(16) /* ONLOOKER BEE PROCEDURE */
(17)   Calculate the probability values p_i for all X_i by Eq. (4).
(18)   Set t = 0, i = 1.
(19)   while t < SN do
(20)      Produce a random number r in (0, 1).
(21)      if r < p_i then
(22)         t = t + 1
(23)         if GlobalSearch = false then
(24)            Produce v_i by Eq. (11).
(25)         else
(26)            Produce v_i by Eq. (2).
(27)         end if
(28)         Evaluate v_i; if it is better, replace the original one.
(29)      end if
(30)      Set i = i + 1.
(31)      if i > SN then set i = 1 end if
(32)   end while
(33)   Do dynamic search balance as depicted in Algorithm 1.
(34)   Memorize the best GlobalMin.
(35)   Set iter = iter + 1.
(36) end while

5. Experiments and Discussion

In this section, the performance of HFEABC (H) is compared with that of other well-known ABC variants: the standard ABC (A1) [4], AABCLS (A2) [38], BSFABC (A3) [17], GABC (A4) [15], GABCS (A5) [40], GBABC (A6) [20], HGABC (A7) [41], IABC (A8) [36], and LABCSS (A9) [37]. The functions used to test these algorithms are described in Section 5.1. The parameter configuration is given in Section 5.2. Section 5.3 analyzes the best factors chosen in this approach. The numerical comparison results are discussed in Sections 5.4 and 5.5.

5.1. Benchmark Functions

In this subsection, the performance of the proposed scheme is verified by minimizing 21 basic benchmark functions as well as the hybrid and composition functions from CEC 2017 [42].

The basic benchmark functions comprise a set of 19 scalable benchmark functions of dimension D = 30 or 60 and a set of 2 functions of dimension D = 100 or 200, as shown in Table 1. In this table, the first column gives the function names, the second column their mathematical expressions over the vector x = (x_1, x_2, ..., x_D), and the third column the search range of x, where each variable has the same search range. These benchmark functions are widely employed to test the performance of global optimization algorithms [15, 43–45]. Eight of them are taken from CEC 2017 [46]. For simplicity, based on dimension, the functions are divided into low- and high-dimensional sets: the low-dimensional set contains the 19 scalable functions at 30 dimensions and the 2 remaining functions at 100 dimensions, while the high-dimensional set contains the same functions at 60 and 200 dimensions, respectively. In addition, the functions in Table 1 are classified into unimodal and multimodal functions. Unimodal functions have a single local minimum, which is the global optimum; they are generally used to test the intensification ability of algorithms. Multimodal functions have one or more local optima, which may or may not be the global optimum. The set includes continuous unimodal functions, a discontinuous step function, and a noisy quartic function; the multimodal functions have numbers of local minima that increase exponentially with the problem dimension, and one function is bound constrained.
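Two of the classic benchmark types mentioned above, a continuous unimodal function and a multimodal function whose count of local minima grows exponentially with dimension, can be written down directly (these are the standard textbook definitions, not the paper's Table 1):

```python
import math

def sphere(x):
    """Continuous unimodal: f(x) = sum(x_j^2); global minimum 0 at the origin."""
    return sum(v * v for v in x)

def rastrigin(x):
    """Multimodal: f(x) = sum(x_j^2 - 10*cos(2*pi*x_j) + 10);
    local minima grow exponentially with dimension, global minimum 0 at 0."""
    return sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) + 10.0 for v in x)
```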

In addition, the hybrid and composition functions from CEC 2017 are included in this experiment to further test the proposed algorithm. These functions are also tested at low and high dimensions, respectively.

5.2. Parameter Settings

For an equitable comparison among the ABCs, they are tested using the same parameter configuration; that is, the number of food sources SN is set to 20. The other parameters of the tested algorithms are set to the original values given in their corresponding papers. The parameter C is set to 1.5, as proposed for GABC. All experiments were run for 2,500 and 5,000 iterations in the low- and high-dimensional cases, respectively, or until the function error dropped below a fixed threshold (values smaller than the threshold were reported as 0). Each experiment was repeated 30 times independently.

5.3. The Analysis of the Best Factors Selected for Search Equation

Inspired by [36], we consider a variety of factors to give ABC better performance on different problems. To this end, we assembled different elementary functions into candidate factors; twenty-five such factors are tested in this experiment, each identified by the name given in parentheses in Table 2. In each iteration, the HFEABC algorithm uses exactly one factor. In order to select the best factors for HFEABC, the Friedman test is conducted to obtain average rankings, following the suggestions in [47, 48]. Table 2 shows the average rankings (AR) of HFEABC with each single factor over all iterations on the 21 basic benchmark functions of Section 5.1. During factor selection, it is found that including more factors yields better performance but higher computational complexity and longer calculation time, while including too few factors means lower computational complexity and shorter calculation time but poor performance. Thus, as a balance, the factors with average rankings smaller than 10 are selected into the Factor Library of HFEABC.
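The Friedman-style average ranking used to order the candidate factors can be sketched as follows (pure Python, with names of ours; lower error yields a better rank, and ties receive the average of the tied ranks):

```python
def average_ranks(scores):
    """scores[f][a] = error of algorithm/factor a on function f
    (lower is better). Return the mean 1-based rank of each column
    across all rows; tied values share the average of their ranks."""
    n_alg = len(scores[0])
    totals = [0.0] * n_alg
    for row in scores:
        order = sorted(range(n_alg), key=lambda a: row[a])
        ranks = [0.0] * n_alg
        i = 0
        while i < n_alg:
            j = i
            while j + 1 < n_alg and row[order[j + 1]] == row[order[i]]:
                j += 1                      # extend the tie group
            avg = (i + j) / 2.0 + 1.0       # average 1-based rank of the group
            for t in range(i, j + 1):
                ranks[order[t]] = avg
            i = j + 1
        for a in range(n_alg):
            totals[a] += ranks[a]
    return [t / len(scores) for t in totals]
```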

After incorporating the selected factors into HFEABC, the algorithm is tested on the basic benchmark functions to observe the effect of the chosen best factor. The results are depicted in Figure 1. It can be seen from Figure 1 that the choice of factors differs across problems, while the combinations of factors selected in the high-dimensional and low-dimensional tests show similarities.

5.4. Numerical Results and Comparisons on Basic Benchmark Functions

(1) The Comparison of HFEABC and ABC Variants. This subsection presents a comparative study of HFEABC with ABC and its variants at both D = 30 and D = 60. The values shown in Tables 3 and 4 are the medians obtained by the proposed algorithm and the other algorithms; the best values are emphasized in boldface. For the low-dimensional results shown in Table 3, HFEABC clearly performs significantly better than the other algorithms on the majority of test functions. To be specific, only the results of A3 (BSFABC) on three functions are slightly better than those of HFEABC, and on those functions the difference between A3 (BSFABC) and H (HFEABC) is not significant according to the Wilcoxon test. For the high-dimensional results listed in Table 4, the HFEABC algorithm outperforms the other algorithms on all test functions except A3 (BSFABC) on two functions, A5 (GABCS) on one function, and A6 (GBABC) on one function. According to the Friedman test, HFEABC ranks first for both the low- and high-dimensional functions, as can be seen from the rank order at the bottom of Tables 3 and 4.

Moreover, to compare the significance of the differences between HFEABC and the other ABC variants, the paired Wilcoxon signed-rank test is applied to the median results. The Wilcoxon signed-rank test is a nonparametric statistical hypothesis test that can be used as an alternative to the paired t-test when the results cannot be assumed to be normally distributed [47, 48]. The comparison results are shown in Table 5. In this table, "+", "−", and "=" indicate that our approach is, respectively, better than, worse than, and similar to its competitor according to the Wilcoxon signed-rank test at the 0.05 significance level. The competition results are summarized as "Win/Loss/Draw," meaning that our approach wins on Win functions, loses on Loss functions, and ties on Draw functions against that competitor. Table 5 makes clear that HFEABC performs significantly better than the ABC variants on both the low- and high-dimensional functions. Therefore, in terms of the median results, HFEABC is the best-performing algorithm compared with ABC and its variants.

(2) The Comparison of Robustness Utilizing the Analysis of Variance Test. In order to further investigate the efficacy and robustness of the proposed HFEABC, the analysis of variance (ANOVA) test is also employed to determine the statistical characteristics of each tested algorithm. The box plots depicted in Figure 2 show the statistical performance of all algorithms on some of the basic benchmark functions. From these box plots, the general features of the data distributions can be observed, and it is clearly visible that HFEABC achieved a good variance distribution of solutions on the majority of the benchmark functions. Note that the A7 (HGABC) algorithm also displayed robustness on a few particular functions, but its accuracy is poor.

In addition to solution accuracy and robustness, convergence graphs are important tools for comparing the approximation abilities of different algorithms. To compare the convergence speed of HFEABC with the other ABC variants, the convergence histories of the means of 30 runs for several representative functions are depicted in Figure 3, where the horizontal axis is the number of iterations and the vertical axis is the mean objective function value over the 30 runs. It can be seen that HFEABC converges faster than the other algorithms on the majority of benchmark functions. The only exception is GABCS, whose convergence on Schwefel 2.21 (at both tested dimensions) is faster than that of HFEABC. This is because GABCS sacrifices accuracy for faster convergence by using an enhanced local search strategy in its employed bee, onlooker bee, and scout bee procedures, while the dynamic search balance strategy used in HFEABC reduces its convergence speed to a degree. However, the accuracy and robustness of GABCS are poorer than those of HFEABC.

5.5. Numerical Results and Comparisons on Some Functions from CEC 2017

In addition to the tests on the basic benchmark functions, more complicated functions from CEC 2017 are used to further test the proposed algorithm. The selected functions comprise hybrid functions 1~10 (abbreviated hf1~hf10) and composition functions 1~10 (abbreviated cf1~cf10). The comparison results of HFEABC with ABC and its variants at low and high dimensions are given in Tables 6 and 7. The values in these tables are the medians obtained by the algorithms, with the best values emphasized in boldface.

In Table 6 (low dimension), HFEABC (H) shows better performance than the other algorithms on most hybrid functions, while on the composition functions the results obtained by GBABC (A6) are slightly better than those of HFEABC (H), and both outperform the remaining algorithms. In Table 7 (high dimension), HFEABC (H) is still better than the other algorithms on the hybrid functions, while its performance on the composition functions improves to be nearly as good as that of GBABC (A6), and both again outperform the remaining algorithms. It can be concluded that the proposed algorithm's overall performance is very competitive on most complicated functions, especially at high dimension. This verifies that the proposed algorithm can provide competitive optimization performance on different problems.

6. Conclusion

This paper proposed HFEABC to improve the performance of ABC. In HFEABC, the search equation is enhanced with a global search factor based on the dimension of the problem and a local search factor based on the factor library. The choice of factors differs across problems, while the combinations of factors selected in the low-dimensional and high-dimensional tests show similarities. Furthermore, to prevent premature convergence, a dynamic search balance strategy is employed that replaces the scout bee procedure of ABC. The proposed algorithm is very effective compared with ABC and other recent ABC variants. In the basic benchmark function tests, HFEABC outperforms ABC and its variants in terms of compatibility with different problems, robustness, and convergence speed. In the tests on CEC 2017 functions, HFEABC shows better overall performance. All in all, the performance of the proposed algorithm is very competitive. In addition, introducing the dynamic search balance strategy eliminates the parameter limit required by the original ABC. Future research will focus on applying HFEABC to other complex engineering problems; more effective factors will be tested, and the factor selection mechanism will be studied in depth.

Conflicts of Interest

The authors declare that they have no conflicts of interest.