Abstract

In the past ten years, the artificial bee colony (ABC) algorithm has attracted more and more attention, and many state-of-the-art ABC variants (ABCs) have been developed by introducing different kinds of biased information into the search equations. However, the same biased information is employed in both the employed bee and onlooker bee phases, which can cause overexploitation and lead to premature convergence. To overcome this limitation, an effective framework with tristage adaptive biased learning is proposed for existing ABCs (TABL + ABCs). In TABL + ABCs, the search direction in the employed bee stage is guided by learning the ranking-based biased information of the parent food sources, while in the onlooker bee stage, the search direction is determined by extracting the biased information of the population distribution. Moreover, a deletion-restart learning strategy is designed in the scout bee stage to prevent the potential risk of population stagnation. Systematic experimental results on the CEC2014 competition benchmark suite show that the proposed TABL + ABCs perform better than the recently published AEL + ABCs and ACoS + ABCs.

1. Introduction

Optimization has long been a basic research topic, attracting ever-increasing interest from scientific research to engineering practice due to its obvious application potential in almost all real-world systems. Without loss of generality, a box-constrained optimization problem can be modelled as follows:

$$\min\ f(\mathbf{x}), \quad \mathbf{x} = (x_{1}, x_{2}, \ldots, x_{D}) \in S, \quad S = \prod_{j=1}^{D}\left[L_{j}, U_{j}\right], \tag{1}$$

where $f(\mathbf{x})$ is a real-valued objective function and $\mathbf{x}$ is a candidate solution in the search space $S$, where $U_{j}$ and $L_{j}$ are the upper and lower boundaries for the $j$th dimension.

To better achieve the goal of global optimization, various optimization technologies have been developed, mainly including mathematical programming and evolutionary computing (EC). Compared with mathematical programming, EC makes weaker assumptions about the mathematical properties of the optimized problem and has a higher probability of finding a globally optimal solution. In the past thirty years, a variety of EC algorithms have been proposed, such as the genetic algorithm (GA) [1], differential evolution (DE) [2], particle swarm optimization (PSO) [3], and the artificial bee colony (ABC) [4]. As a relatively new evolutionary optimization algorithm, ABC has become popular in the EC community due to its simple concept, easy implementation, and effectiveness.

ABC, developed by Karaboga, is a population-based stochastic algorithm that simulates the foraging behavior of honey bees. Recent studies have shown that ABC performs well on many box-constrained continuous optimization problems [5–9], but its slow convergence rate has been widely criticized. The main reason is that the solution search strategy of ABC is good at exploration but poor at exploitation. Therefore, how to properly balance exploration and exploitation during the search process is the core concern of works improving ABC. For this purpose, many improvements have been developed in the last ten years. In the following, a brief survey of ABC is presented from four aspects.

1.1. Search Equations with Biased Information

In the original ABC, one parent of the solution search equation is the current target vector, and the other is randomly chosen from the population. Therefore, one of the reasons for the slow convergence of ABC is that the search equation contains no biased information. To overcome this limitation, many researchers have proposed various solution search equations with biased information [10–17]. For example, Cui et al. proposed an adaptive ABC (ARABC) [18], in which all parent food sources were chosen based on their rankings. Xiang et al. put forward a gravity model-based ABC (ABCG) [19]. In ABCG, an attractive force model was designed to select a better neighbor of the current target vector. Kumar and Mishra proposed a covariance-guided ABC (CABC) [20], in which covariance information was embedded to accelerate the convergence. Aslan et al. designed an improved qABC (iqABC) [21] to balance the search ability. Ji et al. proposed a scale-free ABC (SFABC) [22], in which the topology information of a scale-free network was introduced into ABC. Awadallah et al. introduced natural selection methods for ABC (NSM + ABCs) [23]. Bajer and Zoric proposed an improved ABC based on diversity refining (DRABC) [24]. In DRABC, the solution search equation was modified by introducing the information of the top population members to increase the exploitation ability.

1.2. Ensemble of Multiple Search Strategies

In the original ABC, only one solution search equation is used as the update rule during the search process, which makes it difficult to adapt to optimization problems with different characteristics. To enhance the robustness of ABC, many researchers have made efforts to realize an ensemble of multiple search strategies [25–34]. For instance, Song et al. designed a two-strategy adaptive ABC (TSaABC) [35]. In TSaABC, the two search strategies were dynamically adjusted by the success rate to balance the exploration and exploitation abilities. Chen et al. developed a self-adaptive differential ABC (sdABC) [36]. In sdABC, three differential search strategies were combined into the framework of ABC, and the selection probability of each strategy was adjusted adaptively. Zhou et al. presented a multicolony ABC (IDABC) [37], in which the whole colony was divided into superior, mid, and inferior subcolonies, and three search strategies with different biases were assigned to the corresponding subcolonies for different roles. Gao et al. introduced a Parzen window-based ABC (ABCPW) [38], in which three search strategies with different characteristics were estimated by the Parzen window and the best one was used to generate offspring. Yavuz and Aydin designed a self-adaptive search equation-based ABC (SSEABC) [39], which integrated three local search strategies into ABC, with each strategy selected by competitive learning. Very recently, Song et al. put forward a multistrategy fusion ABC (MFABC) [40], in which the search strategy with high exploration was inherited and the other two search strategies were selected adaptively according to the evolution ratio.

1.3. Hybridization of ABCs and Other Metaheuristic Algorithms

Combining the advantages of different algorithms is an effective way to design high-performance algorithms. Following this idea, many researchers have proposed various hybrid algorithms based on ABCs. For example, Gao et al. proposed an enhanced ABC through DE (DGABC) [41], which combined the good features of DE and GABC to accelerate the convergence. Liang et al. employed an adaptive differential operator to remedy the slow convergence speed of ABC (ABCADE) [42]. Ghanem and Jantan mixed ABC with monarch butterfly optimization (HAM) [43], in which the modified butterfly adjusting operator was used as the search equation of the employed bees. Wang and Yi developed a hybridization of krill herd (KH) and ABC (KHABC) [44], in which the optimal solutions found by KHABC were used as neighbor food sources for the onlooker bees. Chen et al. proposed a hybrid ABC (TLABC) [45] by combining teaching-learning-based optimization with ABC. Very recently, Chen et al. further presented a mixed ABC based on fireworks explosion (FW-ABC) [46]. In FW-ABC, the fireworks explosion search was implemented to find better solutions after the three bee search stages.

1.4. Combination of ABCs and Local Search Techniques

The success of the memetic framework has confirmed the effectiveness of combining metaheuristic algorithms with local search techniques, which has driven many researchers to design various enhanced ABCs with local search strategies. For example, Kang et al. presented a memetic ABC based on the Rosenbrock approach (RABC) [47]. In RABC, the Rosenbrock rotational direction method was used to improve the quality of the optimal solution. Kang et al. further proposed another memetic ABC (HABC) [48], in which the Hooke–Jeeves method was carried out to improve the local search capability. Gao et al. introduced two other memetic ABCs, denoted as PABC [49] and OL-ABC [50], respectively. In PABC and OL-ABC, Powell's method and the orthogonal crossover operator were employed to enhance the exploitation ability.

Based on previously reported experimental results, we note that most ABCs are very efficient for separable functions but still suffer from a low convergence rate on nonseparable functions. Therefore, there is much room for improving existing ABCs on nonseparable functions. Nevertheless, not much effort has been devoted to nonseparable functions in ABCs so far. In our recently published work [51, 52], an adaptive encoding learning was introduced to improve the performance of existing ABCs (AEL + ABCs) on complex nonseparable functions. Different from most existing ABCs, which use individual biased information to guide the search, the AEL method guides the population to converge to the promising region by extracting the population distribution information. It is therefore natural to combine both types of biased information in existing ABCs to further improve their performance on complex nonseparable functions. To achieve this goal, we propose a tristage adaptive biased learning framework for ABCs (TABL + ABCs), whose main new contributions are summarized as follows:
(i) A tristage adaptive biased learning is designed for existing ABCs, in which the search direction in the employed bee stage is guided by the ranking-based biased information of the parent food sources, while in the onlooker bee stage, the search direction is determined by extracting the biased information of the population distribution. Both types of biased information are adaptively learned from the feedback information of the population evolution.
(ii) A deletion-restart learning strategy is designed in the scout bee stage to prevent the potential risk of population stagnation.
(iii) The proposed TABL framework is applied to several existing ABCs. Systematic experimental results on the CEC2014 competition benchmark suite show that the proposed TABL + ABCs have better optimization performance than the corresponding ABCs and the recently published AEL + ABCs and ACoS + ABCs.

In the rest of the paper, we first describe the original ABC in Section 2. Section 3 develops the tristage adaptive biased learning for ABCs. Section 4 designs a set of experiments to verify the advantages of TABL + ABCs, and some discussions are presented in Section 5. The proposed algorithm is applied to big optimization problems in Section 6. Finally, Section 7 concludes the entire work and identifies some future studies.

 //Initialization
 Randomly create SN food sources x_i (i = 1, 2, ..., SN) by Equation (2);
 Evaluate the quality information f(x_i) of each food source x_i, and set trial_i = 0;
 while Termination condition is not met do
   //Employed bee phase
   for i = 1 to SN do
     Create a new food source v_i near x_i by Equation (3);
     Evaluate the quality information f(v_i);
     if f(v_i) < f(x_i) then
       x_i = v_i;
       trial_i = 0;
     else
       trial_i = trial_i + 1;
     end
   end
   //Onlooker bee phase
   Calculate the selection probability p_i for each onlooker bee by Equation (4);
   t = 0; i = 1;
   while t < SN do
     if rand(0, 1) < p_i then
       Create a new food source v_i near x_i by Equation (3);
       Evaluate the quality information f(v_i);
       if f(v_i) < f(x_i) then
         x_i = v_i;
         trial_i = 0;
       else
         trial_i = trial_i + 1;
       end
       t = t + 1;
     end
     i = i + 1;
     if i > SN then
       i = 1;
     end
   end
   //Scout bee phase
   if max_i(trial_i) > limit then
     Reinitialize the corresponding food source x_i by Equation (2);
     trial_i = 0;
   end
 end

2. Artificial Bee Colony

The ABC is a population-based intelligent optimization technique. It is inspired by the foraging behavior of bees, and their cooperative foraging process is transformed into a search mechanism for the optimal solution. In ABC, the location of a food source represents a candidate solution, and its fitness is measured by the amount of nectar. During the cooperative foraging, employed bees explore the food sources and pass on their quality information to onlooker bees through waggle dancing. Based on the obtained quality information, onlooker bees select some better food sources for further exploitation. If the quality of some food source is not improved over a predefined cycle (limit), the food source is considered exhausted and its associated employed bee is transformed into a scout bee that explores a new food source in a random way. Algorithm 1 is the main framework of ABC, and its key steps are introduced as follows:

(i) Initialization. ABC randomly creates SN food sources by the following equation:

$$x_{i,j} = L_{j} + \text{rand}(0,1)\cdot\left(U_{j} - L_{j}\right), \tag{2}$$

where $i = 1, 2, \ldots, SN$ and $j = 1, 2, \ldots, D$, and $\text{rand}(0,1)$ is a random number between 0 and 1. The quality information of each food source $\mathbf{x}_{i}$ is measured by the objective function value $f(\mathbf{x}_{i})$. The smaller $f(\mathbf{x}_{i})$ is, the better the solution quality is.

(ii) Employed Bee Phase. After initialization, each employed bee begins to visit its associated food source $\mathbf{x}_{i}$ in an attempt to explore a better one $\mathbf{v}_{i}$. The process is performed by equation (3), which updates a single component of $\mathbf{x}_{i}$ as the linear combination

$$v_{i,j} = x_{i,j} + \varphi_{i,j}\cdot\left(x_{i,j} - x_{k,j}\right), \tag{3}$$

where $k \in \{1, 2, \ldots, SN\}$, $k \neq i$, $j$ is a randomly chosen dimension, and $\varphi_{i,j}$ is a random number in $[-1, 1]$. The better one between $\mathbf{x}_{i}$ and $\mathbf{v}_{i}$ in terms of quality survives to the next generation.

(iii) Onlooker Bee Phase. Based on the quality information of the food sources, each onlooker bee flies to a better food source for further exploitation in terms of the selection probability $p_{i}$, which is calculated by the following equations:

$$p_{i} = \frac{\mathrm{fit}_{i}}{\sum_{n=1}^{SN}\mathrm{fit}_{n}}, \tag{4}$$

$$\mathrm{fit}_{i} = \begin{cases} \dfrac{1}{1 + f(\mathbf{x}_{i})}, & f(\mathbf{x}_{i}) \ge 0,\\[2mm] 1 + \left|f(\mathbf{x}_{i})\right|, & \text{otherwise}, \end{cases} \tag{5}$$

where $\mathrm{fit}_{i}$ is the fitness value of the $i$th solution. Obviously, solutions with larger fitness values have more opportunity to be selected.

(iv) Scout Bee Phase. During the search process, each update of a particular food source in the previous two phases is tracked by a counter $\mathrm{trial}_{i}$. When the number of unsuccessful update attempts exceeds the threshold limit, the food source exceeding the threshold by the largest amount, denoted as in equation (6), is replaced by equation (2):

$$s = \arg\max_{i \in \{1, \ldots, SN\}} \mathrm{trial}_{i}, \tag{6}$$

where $\mathrm{trial}_{i}$ is the number of consecutive unsuccessful updates associated with the food source $\mathbf{x}_{i}$.
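To make the above steps concrete, the following minimal Python sketch implements one possible reading of equations (2)-(5) as a single optimization loop; it is an illustrative reimplementation rather than the authors' code, and the objective function (a sphere), population size, limit value, and bounds are arbitrary choices for demonstration.

import numpy as np

def sphere(x):                       # illustrative objective; any f(x) works
    return np.sum(x ** 2)

def abc_optimize(f, D=10, SN=20, limit=100, max_evals=20000,
                 lower=-100.0, upper=100.0, seed=0):
    """Minimal sketch of the original ABC loop (equations (2)-(5))."""
    rng = np.random.default_rng(seed)
    X = lower + rng.random((SN, D)) * (upper - lower)   # initialization, equation (2)
    fx = np.array([f(x) for x in X])
    trial = np.zeros(SN, dtype=int)
    evals = SN

    def fitness(fv):                 # fitness mapping, equation (5)
        return 1.0 / (1.0 + fv) if fv >= 0 else 1.0 + abs(fv)

    def try_update(i):               # search equation (3) plus greedy selection
        nonlocal evals
        k = rng.choice([n for n in range(SN) if n != i])
        j = rng.integers(D)
        v = X[i].copy()
        v[j] = X[i, j] + rng.uniform(-1.0, 1.0) * (X[i, j] - X[k, j])
        v[j] = np.clip(v[j], lower, upper)
        fv = f(v); evals += 1
        if fv < fx[i]:
            X[i], fx[i], trial[i] = v, fv, 0
        else:
            trial[i] += 1

    while evals < max_evals:
        for i in range(SN):          # employed bee phase
            try_update(i)
        fit = np.array([fitness(v) for v in fx])
        p = fit / fit.sum()          # selection probability, equation (4)
        t, i = 0, 0
        while t < SN:                # onlooker bee phase
            if rng.random() < p[i]:
                try_update(i)
                t += 1
            i = (i + 1) % SN
        s = int(np.argmax(trial))    # scout bee phase
        if trial[s] > limit:
            X[s] = lower + rng.random(D) * (upper - lower)
            fx[s] = f(X[s]); evals += 1
            trial[s] = 0
    return X[np.argmin(fx)], fx.min()

best_x, best_f = abc_optimize(sphere)
print(best_f)

The greedy replacement inside try_update and the counter trial_i together realize the exploitation/abandonment behavior described in items (ii)-(iv) above.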

3. Proposed TABL + ABCs

3.1. Motivation

The AEL framework introduced in [51] is a very promising method. However, we also notice that the AEL framework is not always effective, and can even be detrimental, for ABCs on some multimodal problems. Figure 1 shows that the AEL strategy can effectively accelerate the convergence speed but significantly reduces the convergence accuracy. The reason is that the population distribution information is used in two search stages, which may increase the risk of falling into a local optimum for some complex multimodal problems. In addition, the experimental results of [52] have clearly shown that most existing ABCs are only efficient for separable functions, while the convergence rate of these algorithms is still very poor when working on nonseparable functions.

Motivated by the above two aspects, an effective framework with tristage adaptive biased learning is proposed for existing ABCs (TABL + ABCs) to improve their performance on nonseparable problems, and its main framework is described in Algorithm 2. In the following, we will elaborate on the TABL + ABCs framework.

3.2. Rank-Based Biased Learning

As discussed in the introduction, the imbalance between exploration and exploitation is the focus of the argument about ABC. One reason for this phenomenon is the lack of biased information in the solution search equation. To remedy this, many improved ABCs have been proposed by introducing various kinds of biased information into the solution search equations. Most of them are inspired by the natural phenomenon that good individuals always contain good biased information and thus have more chances to generate good offspring. Different from guidance mechanisms based on the biased information of single individuals, the AEL method extracts the biased information of the population distribution to guide the search. As discussed in Section 3.1, this method may also increase the risk of falling into a local optimum for some complex multimodal problems. The reason is that the same biased information is employed in both search phases, which can cause overexploitation and lead to premature convergence.

 //Initialization
 Randomly create SN food sources x_i by Equation (2);
 Evaluate the quality information f(x_i) of each food source x_i;
 Record the best objective value f_best and set the stagnation threshold θ;
 while Termination condition is not met do
   f_old = f_best;
   //Employed bee phase
   Assign a linear ranking R_i to each food source x_i;
   Calculate the selection probability q_i for each food source x_i by Equation (10);
   for i = 1 to SN do
     Select the parent food source index r in Equation (9) by q_r;
     Create a new food source v_i near x_i by Equation (9);
     Evaluate the quality information f(v_i);
     if f(v_i) < f(x_i) then
       x_i = v_i;
     end
   end
   //Onlooker bee phase
   Compute the covariance matrix C of the top food sources;
   Get the new coordinate system B by the Eigen decomposition of C;
   Calculate the selection probability p_i for each onlooker bee by Equation (4);
   t = 0; i = 1;
   while t < SN do
     if rand(0, 1) < p_i then
       Randomly select a food source x_k different from x_i;
       Create a new food source v_i near x_i under the coordinate system B;
       Evaluate the quality information f(v_i);
       if f(v_i) < f(x_i) then
         x_i = v_i;
       end
       t = t + 1;
     end
     i = i + 1;
     if i > SN then
       i = 1;
     end
   end
   //Scout bee phase
   Update the best objective value f_best;
   if f_old - f_best <= θ then
     Replace the worst-ranking food source by the new food source generated by "DE/rand/1";
   else
     Calculate ΔSN according to Equation (11);
     Delete the ΔSN worst-ranking food sources from the population;
   end
   SN = SN - ΔSN;
 end
 Randomly select an index r ∈ {1, 2, ..., SN};
 while rand(0, 1) > q_r or r = i do
   Randomly select an index r ∈ {1, 2, ..., SN};
 end
 Return r

To reduce the overexploitation caused by the population distribution information, rank-based biased information is introduced instead into the solution search equation to guide the search of the employed bees. The main process is described in lines 8–18 of Algorithm 2 and in Algorithm 3. The modified solution search equation with rank-based biased learning is designed as follows:

$$v_{i,j} = x_{i,j} + \varphi_{i,j}\cdot\left(x_{i,j} - x_{r,j}\right), \tag{7}$$

where the parent food source $\mathbf{x}_{r}$ is selected according to its ranking-based probability and $\varphi_{i,j}$ is a random number in $[-1, 1]$.

It is obvious that the only difference between equations (3) and (7) is that the parent food source $\mathbf{x}_{k}$ in equation (3) is selected with the same probability from the population, while the parent food source $\mathbf{x}_{r}$ in equation (7) is selected based on its ranking among the population. According to the linear ranking model [18], the selection probability of the food source $\mathbf{x}_{r}$ is calculated as

$$q_{r} = \frac{R_{r}}{SN},$$

where $R_{r}$ is the linear ranking of food source $\mathbf{x}_{r}$ among the population by fitness value (the best solution receives rank $SN$ and the worst rank 1). Apparently, the higher the ranking is, the greater the selection probability is. Compared with equation (3), equation (7) can make each target individual converge to the global optimal position faster by learning the biased information from higher-ranking solutions. Figure 2 shows this process. In an ideal state, the solution can reach the global optimal point of the quadratic function by only a two-step search and achieve the global best of the rotated quadratic function by only a three-step search.
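As a quick numeric check of this linear ranking model (assuming the $q_{r} = R_{r}/SN$ form given above, which is our reconstruction of the selection rule), consider $SN = 5$ food sources: the best solution has rank $R = 5$ and probability $q = 1$, so it is always accepted once drawn in Algorithm 3, whereas the worst solution has rank $R = 1$ and probability $q = 0.2$, so it is accepted only one time in five. The best food source is therefore roughly five times more likely to serve as the parent $\mathbf{x}_{r}$ than the worst one.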

In addition, it is noteworthy that the target vector $\mathbf{x}_{i}$ in equation (7) is still selected from the population in turn, while in [18], it is selected based on the selection probability. The reason is that the target vectors have already been proportionally selected by equation (4) in the onlooker bee phase, and adopting similar biased information in the employed bee phase may cause overexploitation and premature convergence. To further confirm this, Figure 3 shows a situation based on a typical multimodal problem, where the higher-ranking solutions are distributed near a local optimal solution and the lower-ranking solutions are distributed near the global optimal solution. In this case, ABC is not sufficient to yield any progress and may even be driven further into local convergence, since these higher-ranking solutions are frequently selected for the search.
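The following standalone Python fragment sketches how the rank-based parent selection of Algorithm 3 and the search equation (7) could be realized; the proportional ranking probability follows the assumed $R_{r}/SN$ form above, and the population, bounds, and objective are made up for illustration.

import numpy as np

rng = np.random.default_rng(1)

def ranking_probabilities(fx):
    # Linear ranking (assumed form): best solution -> rank SN -> probability 1,
    # worst solution -> rank 1 -> probability 1/SN (minimization).
    fx = np.asarray(fx)
    SN = len(fx)
    order = np.argsort(-fx)                 # indices from worst to best
    ranks = np.empty(SN)
    ranks[order] = np.arange(1, SN + 1)     # worst -> 1, ..., best -> SN
    return ranks / SN

def select_parent(q, i):
    # Algorithm 3: draw a random index r and accept it with probability q_r, r != i.
    SN = len(q)
    r = int(rng.integers(SN))
    while r == i or rng.random() > q[r]:
        r = int(rng.integers(SN))
    return r

def employed_bee_move(X, i, q, lower, upper):
    # Rank-biased search equation (7): perturb one dimension of x_i using a
    # rank-selected parent x_r instead of a uniformly random neighbor.
    SN, D = X.shape
    r = select_parent(q, i)
    j = int(rng.integers(D))
    v = X[i].copy()
    v[j] = X[i, j] + rng.uniform(-1.0, 1.0) * (X[i, j] - X[r, j])
    v[j] = np.clip(v[j], lower, upper)
    return v

# usage on a small random population (sphere objective for illustration)
X = rng.uniform(-100, 100, size=(10, 5))
fx = (X ** 2).sum(axis=1)
q = ranking_probabilities(fx)
v = employed_bee_move(X, 0, q, -100, 100)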

3.3. Population Distribution Information-Based Biased Learning

For a separable function, the original coordinate system is the best choice for ABC. This is because the original coordinate system maximizes the improvement interval, as shown in Figure 4(a). When the fitness landscape of Figure 4(a) is rotated, however, the improvement interval under the original coordinate system, as shown in Figure 4(b), shrinks rapidly, which severely limits the search ability of ABC. Therefore, increasing the improvement interval of variables by turning the coordinate system is an effective way to enhance the optimization performance of ABC on nonseparable problems. Accordingly, an adaptive Eigen coordinate system (i.e., B) is built by learning the population distribution information. A more detailed introduction can be found in [51].

It is widely known that the covariance matrix of the population can reveal the correlations among variables to a certain extent. Therefore, the obtained Eigen coordinate system can effectively increase the improvement interval of variables and thus improve the search ability. Figure 5 shows a simple example to illustrate how the Eigen coordinate system increases the improvement interval of variables. In addition, the search direction guided by the Eigen coordinate system is based on the biased information of the population distribution instead of individual rank-based biased information. Therefore, the Eigen coordinate system serves as the guiding mechanism in the onlooker bee phase, which is conducive to realizing the complementary advantages of the two kinds of biased learning.
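A compact sketch of this idea is given below: the covariance matrix of the top-ranked food sources is eigendecomposed to obtain the coordinate system B, the target solution is expressed in that rotated system, one rotated coordinate is perturbed in the style of equation (3), and the result is mapped back. The fraction of top solutions used (here half the population) and the toy objective are illustrative choices, not values prescribed by the paper.

import numpy as np

rng = np.random.default_rng(2)

def eigen_coordinate_system(X, fx, top_fraction=0.5):
    # Eigendecompose the covariance matrix of the top-ranked food sources.
    # Returns B, whose columns are the eigenvectors (the new coordinate axes).
    m = max(2, int(len(fx) * top_fraction))
    top = X[np.argsort(fx)[:m]]                  # best m solutions (minimization)
    C = np.cov(top, rowvar=False)                # D x D covariance matrix
    _, B = np.linalg.eigh(C)                     # symmetric -> orthonormal eigenvectors
    return B

def onlooker_move(X, i, k, B, lower, upper):
    # Perturb one coordinate of x_i in the Eigen coordinate system B.
    D = X.shape[1]
    y_i, y_k = B.T @ X[i], B.T @ X[k]            # express both solutions in B
    j = int(rng.integers(D))
    y_i[j] += rng.uniform(-1.0, 1.0) * (y_i[j] - y_k[j])
    v = B @ y_i                                  # map back to the original space
    return np.clip(v, lower, upper)

# usage: rotate the onlooker search for a small random population
X = rng.uniform(-100, 100, size=(20, 10))
fx = (X ** 2).sum(axis=1)
B = eigen_coordinate_system(X, fx)
v = onlooker_move(X, 0, 5, B, -100, 100)

Because B is orthonormal, transforming into and out of the Eigen coordinate system does not change distances; it only re-aligns the single-dimension perturbation with the principal directions of the current population.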

3.4. Deletion-Restart Learning Strategy

In addition to slow convergence on nonseparable problems, ABC is also prone to falling into local optima on complex multimodal problems. Figure 6 shows the situation for a typical multimodal problem, where the local optimum positions are not distributed on a grid aligned with the coordinate system. It is assumed that the population has converged to the vicinity of the lower-left local optimum. In this case, ABC is not sufficient to yield any progress. In other words, the global optimum cannot be reached by modifying one decision variable per food source position.

To detect the evolution state in real time, the improvement of the optimal value between two adjacent generations is tracked. If the improvement value is larger than a predefined threshold (e.g., 0.1), the population should pay more attention to exploitation to accelerate the convergence. In this case, the worst-ranking food sources need to be deleted based on the linear decreasing strategy of the population size defined in equation (9); otherwise, the population should pay more attention to exploration to prevent the potential risk of stagnation shown in Figure 6. In this situation, the intervention of a multivariable perturbation strategy is very useful for jumping out of the local optimum. Based on this consideration, the "DE/rand/1" mutation strategy is adopted to create a new food source position to replace the worst one. The reason for choosing the "DE/rand/1" mutation strategy is its rotation-invariant property, while the solution search equation in ABC, as well as the random initialization operator, is rotation-variant. Therefore, the combination of two operators with different properties can achieve complementary advantages:

$$\mathbf{v} = \mathbf{x}_{r_{1}} + F\cdot\left(\mathbf{x}_{r_{2}} - \mathbf{x}_{r_{3}}\right),$$

where $r_{1}$, $r_{2}$, and $r_{3}$ are mutually different indices randomly chosen from $\{1, 2, \ldots, SN\}$ and $F$ is a scaling factor. Whenever the improvement exceeds the threshold, the $\Delta SN$ worst-ranking food sources are deleted from the population.
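The sketch below illustrates this deletion-restart logic under stated assumptions: the stagnation threshold, the scaling factor F = 0.5, and the linear population-size schedule (shrinking from an initial size toward a minimum size as evaluations are consumed, in the spirit of the "linear decreasing strategy" mentioned above) are illustrative placeholders rather than the paper's exact settings.

import numpy as np

rng = np.random.default_rng(3)

def de_rand_1(X, F=0.5):
    # Rotation-invariant "DE/rand/1" mutation used to restart the worst food source.
    SN = X.shape[0]
    r1, r2, r3 = rng.choice(SN, size=3, replace=False)
    return X[r1] + F * (X[r2] - X[r3])

def scout_phase(X, fx, f_old, evals, max_evals,
                sn_init=50, sn_min=10, threshold=0.1):
    # Deletion-restart learning: delete worst food sources while the best value
    # still improves; otherwise restart the worst one by "DE/rand/1".
    improvement = f_old - fx.min()
    if improvement > threshold:
        # illustrative linear population-size schedule
        target_sn = round(sn_init - (sn_init - sn_min) * evals / max_evals)
        n_delete = max(0, len(fx) - max(sn_min, target_sn))
        keep = np.sort(np.argsort(fx)[: len(fx) - n_delete])   # drop the worst n_delete
        X, fx = X[keep], fx[keep]
    else:
        worst = int(np.argmax(fx))            # stagnation: perturb the worst food source
        X[worst] = de_rand_1(X)
        fx[worst] = np.inf                    # mark for re-evaluation by the caller
    return X, fx

# usage on a toy population
X = rng.uniform(-100, 100, size=(50, 10))
fx = (X ** 2).sum(axis=1)
X, fx = scout_phase(X, fx, f_old=fx.min() + 1.0, evals=20000, max_evals=100000)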

4. Experimental Studies

For experimental comparison, 30 CEC2014 competition benchmarks (denoted as F1–F30) with 30 dimensions (30D) are employed for performance test. These competition benchmarks are nonseparable and can be further subdivided into unimodal functions (F1–F3), simple multimodal functions (F4–F16), hybrid functions (F17–F22), and composition functions (F23–F30). A more detailed description about them can be found in [53].

In addition, we performed 51 independent runs for each algorithm on each test problem on a PC with an Intel Core i7-4790 CPU @ 3.60 GHz, 8 GB RAM, and 64-bit Windows 7 OS. In each run, the absolute error $f(\mathbf{x}_{\mathrm{best}}) - f(\mathbf{x}^{*})$ was recorded when the maximum number of function evaluations ($10000 \cdot D$) was reached, where $\mathbf{x}_{\mathrm{best}}$ and $\mathbf{x}^{*}$ represent the best solution found by an algorithm and the optimal solution of a test problem, respectively. The mean error values and their standard deviations are considered for performance assessment. Furthermore, the Wilcoxon rank-sum test was used to test the statistical significance of the experimental results between pairs of algorithms, and the Friedman test was used to compute the average rankings of all compared algorithms [54].
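As an aside, both statistical tests mentioned above are available in standard scientific Python; the sketch below, with made-up error arrays purely for illustration, shows how a pairwise Wilcoxon rank-sum test, a Friedman test, and Friedman-style average rankings could be computed.

import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# made-up mean-error samples over 30 benchmark functions for three algorithms
errors_tabl = rng.lognormal(mean=0.0, sigma=1.0, size=30)
errors_ael = rng.lognormal(mean=0.3, sigma=1.0, size=30)
errors_acos = rng.lognormal(mean=0.2, sigma=1.0, size=30)

# pairwise Wilcoxon rank-sum test between two algorithms
stat, p_value = stats.ranksums(errors_tabl, errors_ael)
print(f"Wilcoxon rank-sum: statistic={stat:.3f}, p={p_value:.4f}")

# Friedman test over all compared algorithms (per-function measurements)
chi2, p_friedman = stats.friedmanchisquare(errors_tabl, errors_ael, errors_acos)
print(f"Friedman test: chi2={chi2:.3f}, p={p_friedman:.4f}")

# average rankings across functions (1 = best), as reported in Friedman-style tables
all_errors = np.vstack([errors_tabl, errors_ael, errors_acos]).T
avg_ranks = stats.rankdata(all_errors, axis=1).mean(axis=0)
print("average ranks:", avg_ranks)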

4.1. Comparison between TABL and AEL Framework

The aim of this section is to compare the TABL framework with the AEL framework by applying them to the eight ABCs mentioned above. The resultant methods are denoted as TABL + ABCs and AEL + ABCs, respectively. For a fair comparison, TABL + ABCs and AEL + ABCs adopt the same parameter settings shown in Table 1, while the AEL + ABCs have an additional learning period, which is set to 50 as suggested in [51]. Tables 2 and 3 summarize the experimental results, and some important observations can be made as follows:
(i) TABL + ABCs perform better than AEL + ABCs on 20, 22, 23, 20, 22, 27, 25, and 27 out of the 30 test functions with 30D, respectively. However, AEL + ABCs cannot surpass TABL + ABCs on more than seven test functions.
(ii) Compared with AEL + ABCs, TABL + ABCs achieve significant performance improvements on three simple multimodal functions (F6, F7, and F11), three hybrid functions (F15 and F19–F20), and two composition functions (F28 and F30).

Combining the above experimental results, it is concluded that the proposed TABL framework is more effective than the AEL framework, and its main advantage is the performance improvement on complex multimodal problems. This further shows that the proposed TABL approach effectively fuses the advantages of the ranking-based biased information and the biased information of the population distribution, and thus improves the optimization accuracy on multimodal problems by reducing the risk of falling into local optima. Moreover, as can be seen from the statistical results of the multiproblem Wilcoxon test in Table 4, TABL + ABCs obtain higher R+ values than R− values in all cases, which further confirms that TABL + ABCs perform significantly better than the AEL + ABCs. Finally, the convergence curves of TABL + ABCs and AEL + ABCs on the F1, F7, and F17 functions with 30D are shown in Figure 7.

4.2. Comparison between TABL and ACoS Framework

The main purpose of this subsection is to compare the TABL framework with the ACoS framework [55] by applying them to the eight ABCs mentioned above. The resultant algorithms are denoted as TABL + ABCs and ACoS + ABCs, respectively. Different from the AEL framework, the ACoS framework adaptively tunes the coordinate systems using the cumulative population distribution information. Therefore, the ACoS framework can be considered an improved version of the AEL framework. For a fair comparison, TABL + ABCs and ACoS + ABCs adopt the same parameter settings shown in Table 1. The experimental results are summarized in Tables 5 and 6, and some important observations can be made as follows:
(i) TABL + ABCs beat the ACoS + ABCs on 22, 20, 19, 19, 20, 17, 22, and 27 out of the 30 test functions, respectively. However, ACoS + ABCs cannot surpass the TABL + ABCs on more than six test functions.
(ii) Compared with ACoS + ABCs, TABL + ABCs achieve significant performance improvements on three simple multimodal functions (F6, F7, and F11), two hybrid functions (F15–F16), and one composition function (F28).

Based on the above experimental results, it can be seen that the proposed TABL framework is more effective than the ACoS framework. The main advantage of the TABL framework is that it can reduce the risk of falling into local optima for most complex multimodal problems, which shows that the proposed TABL framework achieves a better balance between exploration and exploitation through tristage adaptive biased learning. Moreover, as can be seen from the statistical results of the multiproblem Wilcoxon test in Table 7, TABL + ABCs obtain higher R+ values than R− values in all cases, which further confirms that TABL + ABCs perform significantly better than the ACoS + ABCs in most cases. Finally, the convergence curves of TABL + ABCs and ACoS + ABCs on the F1, F7, and F17 functions with 30D are shown in Figure 8.

4.3. The Average Ranking of All Algorithms

The previous comparison of experimental results is based on the Wilcoxon test, which is only applicable to the comparison of a pair of algorithms. To compare the performance of the algorithms more comprehensively, Table 8 gives the average rankings of all compared algorithms based on the Friedman test, and some important observations can be made as follows:
(i) From Table 8, we can see that the overall rankings of TABL + ABCs are obviously better than those of AEL + ABCs and ACoS + ABCs, which further verifies the effectiveness of the proposed TABL framework.
(ii) Under the TABL framework, the performance of the different base algorithms varies noticeably, which implies that how to design operators matching this framework is one of the key factors affecting the performance of the algorithm.
(iii) TABL + qABC ranks first among all twenty-four compared algorithms according to the sum of the average rankings for 30D.

4.4. Comparison of Computation Complexity

To compare the computational complexity of AEL + ABCs, ACoS + ABCs, and TABL + ABCs, we take AEL + ABC, ACoS + ABC, and TABL + ABC as a comparative case. The computational cost of ABC mainly includes the following three parts: (1) the initialization phase ($O(SN \cdot D)$), (2) the employed bee phase ($O(SN \cdot D)$), and (3) the onlooker bee phase ($O(SN \cdot D)$). Therefore, the total worst-case computational complexity of ABC in one generation is $O(SN \cdot D)$.

The computational cost of TABL + ABC mainly includes the following three parts: (1) the basic ABC operations ($O(SN \cdot D)$), (2) sorting the fitness values ($O(SN \log SN)$), and (3) the Eigen decomposition of the covariance matrix ($O(D^{3})$). Therefore, the total worst-case computational complexity of TABL + ABC in one generation is $O(SN \cdot D + SN \log SN + D^{3})$.

According to the above discussion, the main difference between TABL + ABC, AEL + ABC, and ACoS + ABC includes two aspects: (1) AEL + ABC employs the covariance matrix, while ACoS + ABC introduces the cumulative covariance matrix; (2) TABL + ABC introduces the multivariable perturbation strategy in the scout bee phase. As the Eigen decomposition of the cumulative covariance matrix in ACoS + ABC is the same as that in AEL + ABC, its computational complexity is $O(D^{3})$. In addition, the computational complexity of the multivariable perturbation strategy is $O(D)$. Therefore, the total worst-case computational complexity of TABL + ABC in one generation remains $O(SN \cdot D + SN \log SN + D^{3})$.
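To put these orders of magnitude into perspective, assume for illustration a population of $SN = 50$ food sources and $D = 30$ dimensions (these values are examples, not necessarily the settings of Table 1): one generation then costs roughly $SN \cdot D = 1500$ basic search operations, about $SN \log_{2} SN \approx 282$ comparisons for sorting, and $D^{3} = 27000$ operations for the Eigen decomposition. The decomposition therefore dominates, but it is performed only once per generation and is independent of the number of function evaluations spent on the objective.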

Based on the above discussion, the worst-case computational complexity of TABL + ABC is of the same order as that of AEL + ABC and ACoS + ABC.

5. Discussion

In the previous experimental discussion, we verified the effectiveness of the proposed TABL framework for ABCs. In this section, we further compare the proposed algorithm, represented by TABL + qABC, with other state-of-the-art EAs, namely CMM_rcBBOg [56], NCS [57], TLBO [58], TSaABC [35], CSO [59], SLPSO [60], and CMA-ES [61]. The experimental platform for all compared algorithms is the same as that of TABL + qABC, and the parameter settings follow the suggestions in the corresponding original literature. Table 9 summarizes the experimental results.

From Table 9, it can be seen that TABL + qABC shows better performance than the other seven compared algorithms. To be specific, TABL + qABC outperforms CMM_rcBBOg, NCS, TLBO, TSaABC, CSO, SLPSO, and CMA-ES on 24, 26, 27, 25, 21, 24, and 24 functions, respectively. In contrast, CMM_rcBBOg, NCS, TLBO, TSaABC, CSO, SLPSO, and CMA-ES beat TABL + qABC on only 3, 2, 2, 2, 8, 5, and 4 functions, respectively.

In addition, we note that the TABL + ABCs are not as good as CMA-ES in solving unimodal functions. The reason is that the covariance matrix of TABL + ABCs only utilizes the distribution information of the current population, while CMA-ES employs cumulative learning of the population distribution information from previous generations to the current one. Generally, CMA-ES has a stronger guidance capability, which accelerates convergence.

6. TABL + qABC for Big Optimization Problems

In this section, the proposed TABL + qABC algorithm is applied to big optimization problems. The results are compared with those of the other learning algorithms PSO, DE, and ABC.

6.1. Big Optimization Problems

In the Optimization of Big Data 2015 competition [62], the big optimization problem was introduced. Next, we briefly introduce this problem. Assume the dimension of matrix $\mathbf{Y}$ is $N \times M$, where $N$ and $M$ are the number and length of the interdependent time series, respectively. The same is true for matrix $\mathbf{X}$. A linear transformation matrix $\mathbf{A}$ of size $N \times N$ is given. Then, we have

$$\mathbf{Y} = \mathbf{A}\mathbf{X}.$$

The main task of the big optimization problem is how to decompose the matrix $\mathbf{X}$ into two matrices $\mathbf{X}_{1}$ and $\mathbf{X}_{2}$ as follows:

$$\mathbf{X} = \mathbf{X}_{1} + \mathbf{X}_{2}.$$

The Pearson correlation coefficient $C$ between two time series $\mathbf{u}$ and $\mathbf{v}$ is given as follows:

$$C(\mathbf{u}, \mathbf{v}) = \frac{\operatorname{cov}(\mathbf{u}, \mathbf{v})}{\sigma_{\mathbf{u}}\,\sigma_{\mathbf{v}}},$$

where $\operatorname{cov}(\mathbf{u}, \mathbf{v})$ is the covariance and $\sigma$ is the standard deviation.

The optimization objective is then defined in terms of these Pearson correlation coefficients between the decomposed components and the transformed time series; the precise formulation can be found in [62].
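Since the objective is built from Pearson correlations between time-series matrices, the short sketch below shows one way such row-wise correlations could be computed with NumPy. The matrix sizes, the random mixing matrix A, and the candidate decomposition are made up for illustration; this is not the competition's official evaluation code.

import numpy as np

rng = np.random.default_rng(5)

def rowwise_pearson(U, V):
    # Pearson correlation coefficient between corresponding rows of U and V.
    U = U - U.mean(axis=1, keepdims=True)
    V = V - V.mean(axis=1, keepdims=True)
    cov = (U * V).mean(axis=1)                         # per-row covariance
    return cov / (U.std(axis=1) * V.std(axis=1))       # divide by the standard deviations

# made-up instance: N interdependent time series of length M and a random mixing matrix A
N, M = 4, 256
X = rng.standard_normal((N, M))
A = rng.standard_normal((N, N))
Y = A @ X                                              # Y = A X

# a candidate decomposition X = X1 + X2, as encoded by the optimizer
X1 = 0.5 * X + 0.01 * rng.standard_normal((N, M))
X2 = X - X1
print(rowwise_pearson(A @ X1, Y))                      # correlations fed into the objective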

6.2. Experimental Results

To verify the effectiveness of the proposed TABL + qABC algorithm, six datasets are introduced as test problems. They are shown in Table 10, and the experimental results are given in Table 11.

Table 11 shows that TABL + qABC achieves the best results on the D4, D12, and D19 problems, which suggests that TABL + qABC is more suitable for solving high-dimensional optimization problems.

7. Conclusions

In this paper, we have discussed the search behavior of ABC and confirmed that ABC always works well for separable problems but suffers a drastic performance loss on nonseparable problems. Based on this analysis, a tristage adaptive biased learning framework, termed TABL + ABCs, is proposed to enhance the performance of ABCs on nonseparable problems. In TABL + ABCs, the ranking-based biased information of the parent food sources serves as the search direction to accelerate the convergence speed in the employed bee stage, while in the onlooker bee stage, the search direction is guided by turning the coordinate system to increase the improvement interval of variables. Moreover, a deletion-restart learning strategy is designed in the scout bee stage to prevent the potential risk of population stagnation. Finally, we compared TABL + ABCs with AEL + ABCs and ACoS + ABCs by applying them to the 30 CEC2014 test problems with 30D. The experimental results show that TABL + ABCs perform better than the compared algorithms in most cases.

Although TABL + ABCs can effectively solve these benchmark problems with 30D, like many evolutionary algorithms, they still suffer from the curse of dimensionality. Therefore, the TABL strategy needs to be studied in more depth to meet the requirements of large-scale optimization in the future. In addition, establishing a bridge between the benchmark problems and real-world application problems is of great significance for the generalization of TABL + ABCs.

Data Availability

The data used to support the findings of this study are available from the first author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

The authors gratefully acknowledge the partial support of the National Natural Science Foundation of China under grants 61803301 and 61773314, the Key Project of the Shaanxi Key Research and Development Program under grant 2020ZDLGR07-06, and the Doctoral Foundation of Xi'an University of Technology (112-256081812).