#### Abstract

To accelerate the convergence speed of the Artificial Bee Colony (ABC) algorithm, this paper proposes a Dynamic Reduction (DR) strategy for dimension perturbation. In the standard ABC, a new solution (food source) is obtained by modifying one dimension of its parent solution. With such one-dimensional perturbation, new solutions are highly similar to their parent solutions, which easily causes slow convergence. In our DR strategy, the number of dimension perturbations is assigned a large value at the initial search stage. More dimension perturbations result in larger differences between offspring and their parent solutions. As the iterations grow, the number of dimension perturbations dynamically decreases; fewer dimension perturbations reduce the dissimilarities between offspring and their parent solutions. By dynamically changing the number of dimension perturbations, DR achieves a balance between exploration and exploitation. To validate the proposed DR strategy, we embed it into the standard ABC and three well-known ABC variants. An experimental study shows that the proposed DR strategy can efficiently accelerate the convergence and improve the accuracy of solutions.

#### 1. Introduction

Artificial Bee Colony (ABC) is an efficient optimization tool, which mimics the foraging behavior of honey bees [1]. It has some superior characteristics, such as simple concept, few control parameters, and strong search ability. In the last decade, ABC has been widely applied to many optimization problems [2].

However, some studies pointed out that ABC suffers from slow convergence and weak exploitation ability in solving complex problems [3]. In ABC, there are many bees and food sources (called solutions). The process of finding food sources is abstracted as searching for potential solutions. For the bees, Karaboga defined a simple model that searches for new solutions (food sources) by changing one dimension of the current solutions (parent solutions) [4]. This may result in very small differences between new solutions and their parent solutions. Thus, the convergence speed becomes very slow.

To overcome the above issue, a Dynamic Reduction (DR) strategy for dimension perturbation in ABC is proposed. In DR, the number of dimension perturbations is initialized to a predefined value. As the iterations increase, the number of dimension perturbations dynamically decreases. By changing the number of dimension perturbations, the convergence speed can be improved. To validate the proposed DR strategy, we embed it into ABC and three other improved versions. Simulation results show that the DR strategy can efficiently accelerate the convergence and improve the accuracy of solutions.

The rest of the paper is organized as follows. Section 2 introduces the standard ABC. A short literature review of ABC is presented in Section 3. Our strategy is described in Section 4. Experimental results and analysis are given in Section 5. Finally, this work is concluded in Section 6.

#### 2. Standard ABC

Intelligent optimization algorithms (IOAs) have attracted much attention in the past several years. Recently, different IOAs were proposed to solve various optimization problems [5–13]. ABC is a popular optimization algorithm based on swarm intelligence [1]. It is motivated by the foraging behavior of bees, their labor division, and their information sharing. It is usually used to solve continuous or discrete optimization problems. According to the labor division mode of bees, ABC employs different search strategies to complete the optimization task, with few control parameters, strong stability, and simple implementation.

In ABC, there are three types of bees: employed bees, onlooker bees, and scouts. These bees are related to three search processes: the employed bee phase, the onlooker bee phase, and the scout phase. The number of employed bees is equal to that of the onlooker bees.

Let us consider an initial population with solutions *X*_{i} = (*x*_{i,1}, *x*_{i,2}, …, *x*_{i,D}), where *i*=1,2,…,*SN*, *SN* is the population size, and *D* is the dimension size. In the employed bee phase, each employed bee is responsible for searching the neighborhood of a solution. For the *i*-th solution *X*_{i}, the corresponding employed bee finds a new solution *V*_{i} according to the search strategy [1]:

$$v_{i,j} = x_{i,j} + \phi_{i,j} \cdot \left(x_{i,j} - x_{k,j}\right), \tag{1}$$

where *φ*_{i,j} is a random value within [−1, 1], *X*_{k} is a different solution randomly chosen from the swarm and *k*≠*i*, and *j* is an integer randomly chosen from the set {1, 2, …, *D*}. The standard ABC employs an elite selection method to determine which solution is chosen from *X*_{i} and *V*_{i}. If *V*_{i} is better than *X*_{i}, *V*_{i} will be selected into the next iteration and the current *X*_{i} is replaced.
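As a concrete illustration, the employed-bee step can be sketched as below (a minimal sketch; function and variable names are illustrative, and solutions are plain Python lists of floats):

```python
import random

def employed_bee_search(X, i):
    """One employed-bee step of the standard ABC: perturb a single randomly
    chosen dimension j of parent X[i] using search equation (1),
    v_{i,j} = x_{i,j} + phi * (x_{i,j} - x_{k,j})."""
    D = len(X[i])
    # choose a partner solution k != i at random
    k = random.choice([m for m in range(len(X)) if m != i])
    # choose one dimension j to perturb
    j = random.randrange(D)
    phi = random.uniform(-1.0, 1.0)
    V = X[i][:]  # offspring starts as an exact copy of the parent
    V[j] = X[i][j] + phi * (X[i][j] - X[k][j])
    return V
```

Note that the offspring differs from its parent in at most one dimension, which is exactly the similarity issue the DR strategy targets.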

From (1), the only difference between *V*_{i} and *X*_{i} lies in the *j*-th dimension. For the remaining *D*−1 dimensions, *V*_{i} and *X*_{i} are identical. Thus, *V*_{i} and *X*_{i} are very similar. Even if *X*_{i} is replaced by *V*_{i}, the new solution is still near *X*_{i}. This means that the step size of the current search is very small, because the search only jumps in one dimension. As a result, the search will be slow.

In the onlooker bee phase, the onlooker bees focus on deep search. Unlike the employed bee phase, the onlooker bees do not search the neighborhoods of all solutions in the swarm. ABC first calculates the selection probability of each solution. Based on this probability, a solution is chosen and a new solution is generated in its neighborhood. The probability *p*_{i} for the *i*-th solution *X*_{i} is defined by [1]

$$p_i = \frac{fit_i}{\sum_{j=1}^{SN} fit_j}, \tag{2}$$

where *fit*_{i} is the fitness of *X*_{i} and it is computed by [1]

$$fit_i = \begin{cases} \dfrac{1}{1 + f(X_i)}, & \text{if } f(X_i) \ge 0, \\ 1 + \lvert f(X_i) \rvert, & \text{otherwise}, \end{cases} \tag{3}$$

where *f*(*X*_{i}) is the objective function value of *X*_{i}.
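A minimal sketch of the fitness transform (3) and the selection probabilities (2), assuming minimization (function names are illustrative):

```python
def fitness(fx):
    """Fitness transform (3): maps an objective value to a quantity
    where larger is better (for minimization problems)."""
    return 1.0 / (1.0 + fx) if fx >= 0 else 1.0 + abs(fx)

def selection_probabilities(values):
    """Selection probabilities (2): each solution's fitness divided by
    the total fitness of the swarm."""
    fits = [fitness(v) for v in values]
    total = sum(fits)
    return [f / total for f in fits]
```

An onlooker bee then samples a solution index according to these probabilities, so better solutions receive more neighborhood searches.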

Similar to the employed bees, the onlooker bees also use (1) to obtain a new solution *V*_{i}. Then, the function value of *V*_{i} is compared with that of *X*_{i}. If *V*_{i} is better than *X*_{i}, *V*_{i} will be selected into the next iteration and the current *X*_{i} is replaced.

For the elite selection method, when *V*_{i} is worse than *X*_{i}, the improvement of *X*_{i} fails; otherwise the improvement is successful. For each solution in the population, a counter *trial*_{i} is used to record the number of failures. If *trial*_{i} is very large, it means that *X*_{i} may have fallen into a local minimum and cannot jump out. Under such a circumstance, *X*_{i} is reinitialized as follows [1]:

$$x_{i,j} = Low_j + rand(0,1) \cdot \left(Up_j - Low_j\right), \tag{4}$$

where *rand*(0,1) is a random number within [0, 1] and [*Low*_{j}, *Up*_{j}] is the definition domain of the *j*-th dimension.
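The scout behavior can be sketched as follows (illustrative names; `trials[i]` counts consecutive failed improvements of solution `i`, and `limit` is the abandonment threshold):

```python
import random

def scout_phase(X, trials, i, limit, Low, Up):
    """Scout phase: if solution i has failed to improve more than `limit`
    times, abandon it and reinitialize it uniformly in [Low, Up] via (4)."""
    if trials[i] > limit:
        X[i] = [Low[j] + random.random() * (Up[j] - Low[j])
                for j in range(len(Low))]
        trials[i] = 0  # reset the failure counter for the fresh solution
```

This mechanism is what lets ABC escape stagnated solutions at the cost of restarting their search from scratch.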

#### 3. A Brief Review of Recent Work on ABC

In the past decade, research on ABC has attracted wide attention. Different ABCs and their applications were proposed. In this section, a brief review of this work is presented.

The standard ABC obtained good performance on many optimization problems, but it showed some difficulties on complex problems [3]. To enhance the performance of ABC, various ABC algorithms were proposed. In [3], the global best solution (*Gbest*) was utilized to modify the search equation, and experiments proved its effectiveness. Alatas [14] used different chaotic maps to adjust the parameters of ABC. In [15], a Rosenbrock ABC was proposed, in which the rotational direction of the Rosenbrock method was employed to enhance the exploitation capacity. Experiments showed that Rosenbrock ABC could improve the convergence and accuracy. Motivated by the mutation equation of Differential Evolution (DE), a modified ABC (MABC) with a new search equation was designed [16]. As mentioned before, only one dimension is perturbed in the standard ABC. In [17], a parameter *MR* was defined to determine the probability of dimension perturbations. A larger *MR* means more dimension perturbations. Experimental results showed that a suitable *MR* can accelerate the convergence of ABC. In [18], an external archive was used in ABC to guide the search. Li et al. [19] employed multiple strategies including the best solution, inertia weight, and Lévy mutation to enhance the search.

In [20], an ensemble of multiple strategies for ABC was proposed, in which multiple search strategies were employed instead of a single one. Similar to [20], Kiran et al. [21] used five search strategies in ABC and obtained promising performance; a roulette wheel selection mechanism was used to determine which search strategy is chosen. In [22], a bare bones ABC based on Gaussian sampling was proposed. Moreover, a new method was used to calculate the selection probability in the onlooker bee phase. In [23], an adaptive method was used to change the swarm size of ABC. The search equation was also modified based on the DE/rand/1 mutation. In [24], depth-first search and elite guided search were used in ABC. In [25], a recombination operation was introduced into ABC to enhance exploitation. Xiang et al. [26] introduced a decomposition technique into ABC and extended ABC to solve many-objective optimization problems. Experiments on 13 problems with up to 50 objectives showed that the decomposition technique can effectively help ABC achieve good results.

Dokeroglu et al. [27] proposed an island parallel ABC to solve the Quadratic Assignment Problem (QAP), in which Tabu search was used to balance exploitation and exploration. Ni et al. [28] used an improved version of ABC for optimizing cumulative oil steam ratio. The recombination and random perturbation were employed to maintain diversity and improve the search. In [19, 29], ABC was applied to image contrast enhancement, where the objective function is designed based on a new image contrast measure. For a generalized covering traveling salesman problem, Pandiri and Singh [30] proposed a new ABC with variable degree of perturbation. Pavement resurfacing aims to extend the service life of pavement. Panda and Swamy [31] used a new ABC to find the optimal scenarios in pavement resurfacing optimization problem. Sharma et al. [32] designed a new ABC based on beer froth decay phenomenon to find the optimal job sequence in job shop scheduling problem.

#### 4. Dynamic Reduction of Dimension Perturbation in ABC

The standard ABC uses (1) as the search strategy for both employed and onlooker bees. Based on (1), only one dimension differs between the parent solution *X*_{i} and the offspring *V*_{i}. Such one-dimensional perturbation results in large similarities between *X*_{i} and *V*_{i}. Consequently, the convergence may easily be slowed down during the search. This is a possible reason why many studies reported that ABC is not good at exploitation.

To tackle the above issue, a Dynamic Reduction (DR) strategy for dimension perturbation is proposed for ABC. Initially, the number of dimension perturbations is set to a large value (less than the dimension size *D*). More dimension perturbations result in larger differences between offspring and their parent solutions, which helps to accelerate the search and find better solutions quickly. As the iterations grow, the number of dimension perturbations dynamically decreases. Fewer dimension perturbations reduce the dissimilarities between offspring and their parent solutions, which is beneficial for finding more accurate solutions.

Assume that *DP*(*t*) is the number of dimension perturbations at iteration *t*. Based on the DR strategy, *DP*(*t*) is dynamically updated by

$$DP(t) = D_0 \cdot \left(1 - \frac{t}{T_{max}}\right), \tag{5}$$

where *T*_{max} is the maximum number of iterations and *D*_{0} is the initial value for dimension perturbation. In this paper, *D*_{0} is set to *λ*·*D*, where *λ* ∈ (0, 1]. At the beginning, *DP*(*t*) is equal to *D*_{0}. As the iteration increases, *DP*(*t*) gradually decreases from *D*_{0} to zero. When *DP*(*t*) < 1, the number of dimension perturbations is less than one. Obviously, this case is not acceptable. To avoid it, a simple method is employed as follows:

$$DP(t) = \max\!\left(D_0 \cdot \left(1 - \frac{t}{T_{max}}\right),\ 1\right). \tag{6}$$
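The schedule can be sketched as a small helper (a minimal sketch assuming the linear decay from *D*_{0} down to a floor of one perturbation described above; the function name is illustrative):

```python
def dp(t, T_max, D, lam):
    """Number of dimension perturbations at iteration t under the DR
    strategy: starts at D0 = lam * D and decays linearly, floored at 1."""
    D0 = lam * D
    return max(D0 * (1.0 - t / T_max), 1.0)
```

For example, with D = 30, lam = 1/5, and T_max = 1500 (the settings used later in Section 5.2), the schedule starts at 6 perturbed dimensions and reaches the floor of 1 by the final iterations.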

To clearly illustrate the DR strategy, Figure 1 shows the dynamic changes of *DP*(*t*) during the iterations. In this case, *D* and *T*_{max} are set to 100 and 1500, respectively. As seen, the number of dimension perturbations gradually decreases as the iterations grow. The initial value of *DP*(*t*) is related to *λ*. By setting different values of *λ*, we can clearly observe the characteristics of our DR strategy. A larger *λ* means more dimension perturbations; a smaller *λ* means fewer dimension perturbations. So, it is not an easy task to choose the best parameter *λ*. In Section 5.2, the effects of the parameter *λ* are investigated.

From (6), the number of dimension perturbations varies from *D*_{0} to 1. However, it is not convenient to implement (6) in ABC directly. To overcome this problem, a probability *P*(*t*) is used in place of the number of dimension perturbations *DP*(*t*). The probability for dimension perturbation at the *t*-th iteration is defined by

$$P(t) = \frac{DP(t)}{D}. \tag{7}$$

Let us consider an extreme case for (7). When *DP*(*t*)=1 and *D*=1000, *P*(*t*) is equal to 0.001. It is possible that no dimension is perturbed under such a small *P*(*t*). To prevent this case, a simple method is used to ensure that the number of dimension perturbations is not less than 1. For the *i*-th solution *X*_{i}, a random value is generated for each dimension of *X*_{i}. If the random value satisfies the probability *P*(*t*), the corresponding dimension of *X*_{i} is chosen for perturbation according to the search equation (i.e., (1) for the standard ABC). If no dimension perturbation occurs, a dimension index is randomly chosen and the perturbation is executed on it. The main procedure of the dimension perturbation is described in Table 1, where *rand*(0,1) is a random value between 0 and 1.0.
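The per-dimension procedure can be sketched as follows (a minimal sketch under the same assumptions as above: each dimension is perturbed independently with probability P(t), and one random dimension is forced if none was selected; names are illustrative):

```python
import random

def dr_perturb(X, i, t, T_max, D0):
    """DR dimension perturbation: apply search equation (1) to each
    dimension of X[i] independently with probability P(t) = DP(t)/D,
    guaranteeing at least one perturbed dimension."""
    D = len(X[i])
    p = max(D0 * (1.0 - t / T_max), 1.0) / D  # perturbation probability P(t)
    k = random.choice([m for m in range(len(X)) if m != i])
    V = X[i][:]
    perturbed = False
    for j in range(D):
        if random.random() < p:
            V[j] = X[i][j] + random.uniform(-1.0, 1.0) * (X[i][j] - X[k][j])
            perturbed = True
    if not perturbed:
        # no dimension satisfied the probability: force one random dimension
        j = random.randrange(D)
        V[j] = X[i][j] + random.uniform(-1.0, 1.0) * (X[i][j] - X[k][j])
    return V
```

Replacing the loop body with a different search equation (e.g., the *Gbest*-guided one) yields the DR variants of other ABC algorithms discussed in Section 5.4.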

#### 5. Experimental Results

##### 5.1. Benchmark Functions

To validate the performance of the proposed Dynamic Reduction strategy for dimension perturbation, twelve classical benchmark functions are employed. These functions were used in many optimization papers [21, 33–35]. Table 2 lists the function names, search range, dimension size, and global optimum. For detailed definitions of these functions, please refer to [35].

##### 5.2. Investigations of the Parameter *λ*

In the proposed DR strategy, the number of dimension perturbations is proportional to *D*_{0}, and *D*_{0} = *λ*·*D* is used in this paper. The parameter *λ* plays a significant role in controlling the number of dimension perturbations: a larger *λ* means more dimension perturbations, and a smaller *λ* means fewer. In this section, the standard ABC is used as an example. The DR strategy is embedded into ABC and a new ABC variant, namely DR-ABC, is constructed. We focus on investigating the effects of different *λ* values on the performance of ABC, which is helpful for choosing a reasonable parameter *λ*.

In the experiments, the parameter *λ* is set to 1.0, 1/2, 1/3, 1/5, 1/10, and 1/20, respectively. *SN*=50 and *limit*=100 are used. When the number of function evaluations (*FEs*) reaches *MAXFES*, DR-ABC is terminated. For *D*=30, *MAXFES* is equal to 5000·*D*=1.5E+05. Then, *T*_{max} is equal to 1500. For each function, DR-ABC is run 50 times.

Table 3 shows the results of DR-ABC under different *λ*, where "Mean" represents the mean best function value. From the results, *λ*=1/2 achieves the best performance on *f*_{1} and *f*_{2}. For function *f*_{3}, *λ*=1/5 and *λ*=1/10 outperform the other *λ* values and ABC. A large *λ* is better for function *f*_{4}, while a small *λ* is better for functions *f*_{5} and *f*_{8}. For *f*_{5}, ABC is better than *λ*=1.0, 1/2, 1/3, and 1/5. All *λ* values can help ABC find better solutions on *f*_{7}, *f*_{10}, and *f*_{11}. For the remaining function *f*_{12}, *λ*=1.0 and 1/2 obtain worse results than ABC. As *λ* decreases, the performance of DR-ABC approaches that of the standard ABC.

Figure 2 presents some convergence graphs of ABC and DR-ABC under different *λ*. For *f*_{1}, the standard ABC is faster than DR-ABC at the beginning search stage. As the iteration increases, the Dynamic Reduction strategy can accelerate the convergence, and *λ*=1/3 obtains the fastest convergence speed. For *f*_{3}, *λ*=1.0 and 1/2 do not improve the convergence speed at the middle and last search stages. This demonstrates that small *λ* values are helpful for improving the convergence. For *f*_{4}, *λ*=1/20 shows the worst convergence among all *λ* values. Similar to *f*_{3}, large *λ* values do not show advantages at the beginning and middle search stages.


Based on the above results, it is not an easy task to choose the best *λ* value. Too large or too small *λ* values may slow down the convergence at the beginning search stage. To choose a reasonable *λ* value, the mean rank values of ABC and DR-ABC with each *λ* are calculated according to the Friedman test [35]. Table 4 presents the mean rank values of the seven ABC algorithms. From the results, DR-ABC (*λ*=1/5) obtains the best rank value 2.38. It means that *λ*=1/5 is the relatively best parameter setting. The standard ABC has the worst rank 5.58. This demonstrates that the proposed Dynamic Reduction with any of the tested *λ* values can help ABC find more accurate solutions.

##### 5.3. Different Methods for Dimension Perturbation

In this paper, we propose a Dynamic Reduction method for dimension perturbation. When generating offspring, the number of dimension perturbations dynamically decreases as the iteration increases. In [17], Akay and Karaboga designed a parameter *MR* to control the probability of dimension perturbations. In the search process, the parameter *MR* is fixed; then, the number of dimension perturbations is fixed to *MR*·*D*. This method is called fixed number of dimension perturbations ABC (FNDP-ABC). Moreover, the number of dimension perturbations may be randomly determined. In this section, different methods for dimension perturbation are investigated, and the involved methods are listed as follows:
(i) ABC (with only one-dimensional perturbation);
(ii) fixed number of dimension perturbations in ABC (FNDP-ABC);
(iii) random number of dimension perturbations in ABC (RNDP-ABC);
(iv) the proposed Dynamic Reduction of dimension perturbation in ABC (DR-ABC).
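The four strategies differ only in how many dimensions they perturb per offspring, which can be summarized in a small helper (an illustrative sketch with hypothetical names, using D0 = λ·D as in the paper's DR setting):

```python
import random

def num_perturb(method, t, T_max, D, lam=0.2):
    """Number of dimension perturbations per offspring under each
    strategy compared in Section 5.3 (lam = 1/5 by default)."""
    D0 = lam * D
    if method == "ABC":    # standard ABC: always exactly one dimension
        return 1
    if method == "FNDP":   # fixed number: the average of the DR range
        return round(D0 / 2)
    if method == "RNDP":   # random number between 1 and D0
        return random.randint(1, int(D0))
    if method == "DR":     # dynamic reduction from D0 down to 1
        return max(round(D0 * (1.0 - t / T_max)), 1)
    raise ValueError("unknown method: " + method)
```

With D = 30 and lam = 1/5, this reproduces the settings below: DR decreases from 6 to 1, FNDP is fixed at 3, and RNDP draws uniformly from 1 to 6.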

The parameter settings of DR-ABC, FNDP-ABC, RNDP-ABC, and ABC are the same as in Section 5.2. For DR-ABC, the parameter *λ* is equal to 1/5. Then, the number of dimension perturbations decreases from 6 to 1, because *D*_{0} = *λ*·*D* = 6. For FNDP-ABC, we use the average value 3 as the fixed number of dimension perturbations (the probability *MR* is equal to 3/30 = 0.1). For RNDP-ABC, the number of dimension perturbations is randomly chosen between 1 and 6.

Table 5 lists the comparison results of different methods for dimension perturbation. From the results, DR-ABC, FNDP-ABC, and RNDP-ABC outperform ABC on all problems except for *f*_{5}, *f*_{6}, and *f*_{9}. On *f*_{5} and *f*_{9}, none of the dimension perturbation strategies is effective. All algorithms can find the global minimum of *f*_{6}. For the other nine functions, all three strategies can improve the quality of solutions. Among FNDP-ABC, RNDP-ABC, and DR-ABC, DR-ABC performs better than RNDP-ABC and FNDP-ABC, and RNDP-ABC is better than FNDP-ABC. The results demonstrate that the Dynamic Reduction strategy is better than the fixed and random strategies.

Figure 3 gives the convergence graphs of ABC with different dimension perturbation strategies. For functions *f*_{1}, *f*_{2}, and *f*_{4}, DR-ABC is the fastest algorithm. For the first two functions, the convergence characteristics of FNDP-ABC and RNDP-ABC are similar, and both are faster than ABC (with only one-dimensional perturbation). For *f*_{3}, RNDP-ABC is faster than FNDP-ABC, DR-ABC, and ABC at the middle search stage. FNDP-ABC and RNDP-ABC converge much faster than ABC on *f*_{4}. From these convergence figures, all dimension perturbation strategies can accelerate the search, and our proposed Dynamic Reduction is better than the other two methods.


##### 5.4. Extending the Dynamic Reduction Strategy into Other Well-Known ABC Algorithms

The above experiments proved that our proposed Dynamic Reduction strategy for dimension perturbation is effective in enhancing the optimization performance of ABC. In this section, the Dynamic Reduction strategy is extended to three other well-known ABC algorithms: GABC [3], MABC [16], and ABCVSS [21]. The involved ABCs are listed as follows:
(i) *Gbest*-guided ABC (GABC) [3];
(ii) GABC with the Dynamic Reduction strategy for dimension perturbation (DR-GABC);
(iii) modified ABC (MABC) [16];
(iv) MABC with the Dynamic Reduction strategy for dimension perturbation (DR-MABC);
(v) ABC with variable search strategy (ABCVSS) [21];
(vi) ABCVSS with the Dynamic Reduction strategy for dimension perturbation (DR-ABCVSS).

In the experiments, we investigate whether the Dynamic Reduction strategy is still effective in other ABC algorithms. The population size *SN*, *limit*, *λ*, and *MAXFES* are set to 50, 100, 1/5, and 1.5E+05, respectively. For the other parameters in MABC, GABC, and ABCVSS, we use the original settings in their corresponding references [3, 16, 21]. For each function, each algorithm is run 50 times.

Table 6 lists the results of different ABC algorithms with the Dynamic Reduction strategy. Comparing DR-GABC and GABC, DR-GABC is better than GABC on eight functions, while GABC performs better than DR-GABC on only one function (*f*_{5}). Especially for *f*_{1}, *f*_{2}, *f*_{3}, *f*_{4}, and *f*_{11}, the Dynamic Reduction strategy significantly improves the performance of GABC. Similar to GABC, DR-MABC is worse than MABC on *f*_{5}. The Dynamic Reduction strategy helps MABC find more accurate solutions on six functions; for the remaining five functions, both algorithms obtain the same results. DR-ABCVSS is significantly better than ABCVSS on the five functions *f*_{1}, *f*_{2}, *f*_{3}, *f*_{4}, and *f*_{11}. For *f*_{5}, *f*_{7}, and *f*_{10}, DR-ABCVSS also outperforms ABCVSS.

Figure 4 shows the convergence graphs of different ABC algorithms with the Dynamic Reduction strategy. For functions *f*_{1} and *f*_{2}, DR-ABCVSS is the fastest algorithm, and DR-MABC and DR-GABC obtain the second and third places, respectively. DR-GABC is faster than DR-MABC on *f*_{3}. For *f*_{4}, DR-MABC converges faster than DR-GABC at the last search stage, while DR-GABC is faster at the middle search stage. For all functions, all Dynamic Reduction based ABCs are better than GABC, MABC, and ABCVSS. By comparing each pair of an ABC and its Dynamic Reduction based version, we can find that the Dynamic Reduction strategy effectively accelerates the convergence.


Table 7 presents the mean ranks of DR-GABC, GABC, DR-MABC, MABC, DR-ABCVSS, and ABCVSS. From the rank values of each pair of an ABC and its Dynamic Reduction based version, the Dynamic Reduction strategy helps its parent ABC algorithm obtain a better rank. For example, GABC achieves a rank value of 4.54, and DR-GABC obtains a better rank of 3.25. ABCVSS appears to be the worst algorithm on this benchmark set; by embedding the Dynamic Reduction strategy into it, DR-ABCVSS achieves the best rank and becomes the best algorithm among the six ABCs. These results confirm the effectiveness of our Dynamic Reduction strategy.

#### 6. Conclusions

In this paper, a Dynamic Reduction (DR) strategy for dimension perturbation is proposed to accelerate the search of ABC. In the standard ABC, a new solution (offspring) is generated by perturbing one dimension of its parent solution. With such one-dimensional perturbation, new solutions are highly similar to their parent solutions, which easily causes slow convergence. In our DR strategy, the number of dimension perturbations is assigned a large value at the initial search stage. More dimension perturbations result in larger differences between offspring and their parent solutions, which helps to accelerate the search. As the iterations grow, the number of dimension perturbations dynamically decreases; fewer dimension perturbations reduce the dissimilarities between offspring and their parent solutions. By dynamically changing the number of dimension perturbations, DR achieves a balance between exploration and exploitation. Experiments are carried out on twelve benchmark functions to validate the effectiveness of the DR strategy.

The parameter *λ* affects the performance of the DR strategy. Experimental results show DR-ABC with different *λ* results in different performance. Too large or too small *λ* values may slow down the convergence speed at the beginning search stage. By calculating the mean ranks of multiple *λ* values, *λ*=1/5 is considered as the relatively best choice. For all different *λ* values, DR-ABC outperforms the standard ABC. Results demonstrate the DR strategy is effective for improving the performance of ABC.

For dimension perturbations, there are three different kinds of methods: fixed number of dimension perturbations (FNDP), random number of dimension perturbations (RNDP), and the proposed DR. Results show the DR strategy is better than FNDP and RNDP.

By extending the DR strategy into ABCVSS, GABC, and MABC, we get three new ABC variants, DR-ABCVSS, DR-GABC, and DR-MABC, respectively. Results show that DR-ABCVSS, DR-GABC, or DR-MABC is better than its corresponding parent algorithm (ABCVSS, GABC, or MABC) and the DR strategy can effectively accelerate their convergence speed.

From the results of DR-ABC with different *λ* values, we can find that a fixed *λ* is not effective during the whole search process. For different search stages, different *λ* values may be needed. In our future work, an adaptive *λ* strategy will be investigated.

#### Data Availability

The data used to support the findings of this study are included within the article.

#### Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

#### Acknowledgments

This work was supported by the Quality Engineering Project of Anhui Province (no. 2018sxzx38), the Project of Industry-University-Research Innovation Fund (no. 2018A01010), Anhui Provincial Education Department's Excellent Youth Talent Support Program (no. gxyq2017159), the Anhui Provincial Natural Science Foundation (no. 1808085MF202), the Science and Technology Plan Project of Jiangxi Provincial Education Department (no. GJJ170994), and the Anhui Provincial Key Project of Science Research of University (no. KJ2019A0950).