Computational Intelligence and Neuroscience
Volume 2019, Article ID 2564754, 19 pages
https://doi.org/10.1155/2019/2564754
Research Article

A Multistrategy Artificial Bee Colony Algorithm Enlightened by Variable Neighborhood Search

1School of Traffic & Transportation, Lanzhou Jiaotong University, Lanzhou, Gansu 730070, China
2Institute of Modern Logistics, Lanzhou Jiaotong University, Lanzhou, Gansu 730070, China

Correspondence should be addressed to Wan-li Xiang; xiangwl@tju.edu.cn

Received 26 April 2019; Revised 13 August 2019; Accepted 11 September 2019; Published 3 November 2019

Academic Editor: Juan Carlos Fernández

Copyright © 2019 Wan-li Xiang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Artificial bee colony (ABC) has a strong exploration ability but a comparatively weak exploitation ability. To enhance its overall performance, we propose a multistrategy artificial bee colony (ABCVNS for short) based on the variable neighborhood search method. First, a search strategy candidate pool composed of two search strategies, i.e., ABC/best/1 and ABC/rand/1, is proposed and employed in the employed bee and onlooker bee phases. Second, we present another search strategy candidate pool, consisting of the original random search strategy and the opposition-based learning method, which is used in the scout bee phase to further balance the exploration and exploitation abilities. Last but not least, motivated by the neighborhood-change scheme of variable neighborhood search, a simple yet efficient mechanism for choosing among search strategies is presented. Subsequently, the effectiveness of ABCVNS is evaluated on two test suites comprising fifty-eight problems, and comparisons between ABCVNS and several well-known methods are carried out. The experimental results clearly demonstrate the effectiveness and superiority of ABCVNS.

1. Introduction

A vast number of optimization problems arise in industrial production and everyday life. To achieve better solutions to these problems, a great number of algorithms have been developed. Owing to the limitations of deterministic approaches (e.g., the requirement that the objective function be continuous), various nature-inspired algorithms have been developed to better deal with extremely difficult optimization problems. Generally speaking, during the past four decades, researchers have developed different nature-inspired approaches such as genetic algorithms [1, 2], particle swarm optimization [3], differential evolution (DE) [4], and artificial bee colony [5, 6].

Among them, artificial bee colony (ABC) is a distinguished representative of population-based global optimization methods; it was first proposed by Karaboga in 2005 [5]. After that, comparative studies were carried out by Karaboga et al. [7, 8] among ABC, PSO, DE, GA, and so on. The comparative outcomes show the superiority of ABC over those competitors; in addition, it requires fewer parameters. Since then, a great number of researchers have shown a lively interest in ABC, and many enhanced variants have been presented to solve various problems such as function optimization [9–22], the vehicle routing problem [23], multiobjective optimization problems [24], and others [25–27].

Besides the practical applications of ABC listed above, many researchers have concentrated on improving the performance of the traditional ABC on function optimization problems with different characteristics such as nonconvexity, noncontinuity, and separability. For example, Alatas [9] proposed a chaotic ABC, in which both a chaotic initialization technique and a chaotic search method are introduced. Inspired by the search process of particle swarm optimization, Zhu and Kwong [10] designed a novel search technique for improving ABC; in the proposed GABC, the search can effectively utilize the information of the global best individual. Motivated by the search strategy of DE [4], Gao and Liu [11] developed a modified artificial bee colony (named MABC), in which a new search equation named ABC/best/1 is proposed. To enhance the level of information sharing among individuals, Akay and Karaboga [12] introduced a new control parameter called the modification rate, used to randomly change parameters, which effectively enhances the exploitation ability. ABC has also frequently been hybridized with local search approaches [13] to enhance its search ability. More recently, many researchers have utilized hybridizations of multiple search strategies to balance the exploration and exploitation abilities of ABC [18, 21]. For instance, Gao et al. [18] first constructed a strategy candidate pool composed of three search strategies with different search abilities and then proposed an adaptive selection mechanism to select a search strategy for each individual based on previous search experience. The proposed algorithm, MuABC, achieved better performance than the standard ABC and other state-of-the-art algorithms. Kiran et al. [21] first chose five search strategies to form a strategy candidate pool and then designed a probabilistic selection scheme for choosing a search strategy during the evolutionary process. The proposed approach, called ABCVSS for short, outperformed the basic ABC, several ABC variants, and other kinds of methods in terms of solution quality in most cases.

To further enhance the basic ABC's performance, a multistrategy ABC inspired by the variable neighborhood search technique [28–30] is proposed. For convenience, it is named ABCVNS. In ABCVNS, ABC/best/1 and ABC/rand/1 together form the first search strategy candidate pool, which is utilized in both the employed bee phase and the onlooker bee phase. The second search strategy candidate pool, employed in the scout bee phase, consists of the original random search strategy and an opposition-based learning method. A novel mechanism for choosing among search strategies, inspired by the variable neighborhood search method, is then proposed. In addition, an opposition-based learning method is employed to generate an initial population with better diversity. To comprehensively show the advantage of ABCVNS, experiments on a large number of benchmark problems are conducted, together with comparisons against many other well-known methods. The comparative results demonstrate that the proposed ABCVNS is a competitive method.

The remainder of the work is organized as follows. Section 2 briefly describes the basic ABC. Next, a novel multistrategy ABC is proposed and described in detail in Section 3. In Section 4, a few comparative experiments are carried out and the comparative results are provided and discussed in detail. Finally, Section 5 concludes the work and puts forward a few future research directions.

2. Classical ABC

Inspired by the collective intelligence behaviors of a bee swarm [5], Karaboga developed ABC in 2005. In ABC, the bee swarm is divided into three groups: employed bees, onlooker bees, and scout bees. Employed bees make up half of the swarm, and onlooker bees form the other half. As far as the division of labor is concerned, the task of exploring nectar sources is undertaken by the employed bees. Afterwards, they pass information about nectar amounts on to the onlooker bees. On the basis of the shared information, each onlooker bee in turn selects and exploits a food source with a certain probability. If an employed bee or an onlooker bee exhausts a food source, the corresponding bee takes on the role of a scout bee, which performs a random search to escape the local trap.

Generally, by imitating the foraging behavior of a honey bee colony, ABC is made up of four sequential phases: the initialization, employed bee, onlooker bee, and scout bee phases.

Before the foraging begins, a population of artificial individuals is randomly generated according to the following equation:

x_{ij} = lb_j + ξ · (ub_j − lb_j),  (1)

where i = 1, 2, …, SN and j = 1, 2, …, D; ub_j and lb_j represent the upper and lower bounds of component j, respectively; and ξ is a random number in the range [0, 1). Each randomly generated D-dimensional vector x_i denotes an artificial agent. Meanwhile, a suitable stopping criterion and the parameter limit, which controls the appearance of scout bees, should be predefined.
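As a concrete illustration, equation (1) can be sketched in a few lines of Python (the function name and interface are ours, not the authors'):

```python
import random

def init_population(sn, d, lb, ub, rng=None):
    """Randomly place SN food sources in the box [lb, ub]^D, one component
    at a time: x_ij = lb_j + xi * (ub_j - lb_j) with xi ~ U[0, 1)."""
    rng = rng or random.Random()
    return [[lb[j] + rng.random() * (ub[j] - lb[j]) for j in range(d)]
            for _ in range(sn)]
```

Each row of the returned list is one artificial agent x_i.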

Following the initialization phase, employed bees begin to explore food sources according to the following equation:

v_{ij} = x_{ij} + φ_{ij} · (x_{ij} − x_{kj}),  (2)

where j ∈ {1, 2, …, D} and k ∈ {1, 2, …, SN} are randomly produced from a uniform distribution; k has to be different from i; and φ_{ij} is a randomly generated number in [−1, 1].
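The employed bee move in equation (2) perturbs exactly one randomly chosen component of the current solution; a minimal sketch (helper name is illustrative):

```python
import random

def employed_search(pop, i, rng):
    """Perturb one random component of x_i relative to a random neighbor x_k:
    v_ij = x_ij + phi * (x_ij - x_kj), with phi ~ U[-1, 1] and k != i."""
    sn, d = len(pop), len(pop[0])
    j = rng.randrange(d)                               # random component j
    k = rng.choice([s for s in range(sn) if s != i])   # neighbor k != i
    phi = rng.uniform(-1.0, 1.0)
    v = pop[i][:]                                      # copy, change one entry
    v[j] = pop[i][j] + phi * (pop[i][j] - pop[k][j])
    return v
```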

Next, the fitness values of the artificial individuals are calculated as follows:

fit_i = 1 / (1 + f_i), if f_i ≥ 0;  fit_i = 1 + |f_i|, otherwise,  (3)

where f_i and fit_i indicate the cost value and the fitness value of the i-th artificial individual, respectively.
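The cost-to-fitness mapping just described is a one-liner:

```python
def fitness(f_val):
    """Map an objective (cost) value to a fitness value: smaller cost gives
    larger fitness; negative costs map to 1 + |f|."""
    return 1.0 / (1.0 + f_val) if f_val >= 0 else 1.0 + abs(f_val)
```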

At the beginning of the second stage, probability values are calculated according to the following formula:

p_i = fit_i / Σ_{s=1}^{SN} fit_s,  (4)

where p_i denotes the probability of the i-th artificial food source being chosen by the onlooker bees. It depends on the nectar amount of the corresponding food source: the higher fit_i is, the higher the chance of choosing the i-th food source. In this way, employed bees pass information on to onlooker bees.
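The probability formula above is a simple fitness-proportional normalization:

```python
def selection_probabilities(fits):
    """Probability of each food source being chosen by an onlooker bee:
    p_i = fit_i / sum(fit)."""
    total = sum(fits)
    return [fit / total for fit in fits]
```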

Based on the probabilities given by equation (4), each onlooker bee chooses a food source in turn and then exploits around the corresponding food source according to equation (2).

Next, the predetermined parameter limit is employed to decide whether a scout bee appears. Concretely speaking, if a bee performs limit consecutive searches around the same food source without achieving a better one, the bee becomes a scout bee. That is, it randomly searches for a new food source, to jump out of the local trap, according to the following equation:

x_{ij} = lb_j + ξ · (ub_j − lb_j), j = 1, 2, …, D,  (5)

where the parameters have the same settings as those of equation (1).
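The limit-triggered scout behavior can be sketched as follows (function and interface are illustrative):

```python
import random

def scout_if_exhausted(x, trial, limit, lb, ub, rng):
    """If a food source has failed `limit` consecutive improvement attempts,
    replace it with a fresh random point and reset its counter; otherwise
    return it unchanged."""
    if trial >= limit:
        fresh = [lb[j] + rng.random() * (ub[j] - lb[j]) for j in range(len(x))]
        return fresh, 0
    return x, trial
```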

During the foraging process, the bees may cross some borders; that is, the artificial individuals/solutions may violate the boundary constraints. To make such solutions feasible, the following equation is employed to repair them:

x_{ij} = lb_j, if x_{ij} < lb_j;  x_{ij} = ub_j, if x_{ij} > ub_j.  (6)
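Assuming the common clamp-to-bounds rule just described, the repair step can be sketched as:

```python
def repair(x, lb, ub):
    """Clamp each out-of-range component back to its nearest bound."""
    return [min(max(x[j], lb[j]), ub[j]) for j in range(len(x))]
```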

To summarize, after the population is initialized, the remaining stages of ABC are executed repeatedly until a halting condition is met.

3. A Multistrategy Artificial Bee Colony Algorithm

3.1. Initialize a Population in View of Opposition-Based Learning

First of all, a population of artificial individuals is randomly produced according to equation (1). Based on these initial individuals/solutions, opposite solutions are generated to improve the population diversity using an opposition-based learning (OBL) method [31]. The OBL method has been widely applied in many population-based algorithms such as DE [32, 33] and ABC [11]. More concretely, equation (7) is employed here to produce oppositional vectors:

ox_{ij} = lb_j + ub_j − x_{ij},  (7)

where i = 1, 2, …, SN and j = 1, 2, …, D. The rest of the parameters have the same settings as those of equation (1).
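Equation (7) reflects a point through the center of the search box; in code:

```python
def opposite(x, lb, ub):
    """Opposition-based learning: ox_j = lb_j + ub_j - x_j for each component."""
    return [lb[j] + ub[j] - x[j] for j in range(len(x))]
```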

By integrating the OBL and random initialization approaches, the resulting integrated initialization procedure is listed in Algorithm 1.

Algorithm 1: The integrated initialization method.
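Algorithm 1 itself is not reproduced here; a common way to integrate the two approaches (as in OBL-based initialization schemes such as [11]) is to generate SN random individuals, form their opposites, and keep the SN fittest of the 2·SN candidates. A sketch under that assumption, with illustrative names:

```python
import random

def integrated_init(f, sn, d, lb, ub, rng):
    """Random init + OBL: keep the SN best of SN random points and their
    opposites. Details may differ from the paper's Algorithm 1."""
    randoms = [[lb[j] + rng.random() * (ub[j] - lb[j]) for j in range(d)]
               for _ in range(sn)]
    opposites = [[lb[j] + ub[j] - x[j] for j in range(d)] for x in randoms]
    pool = randoms + opposites
    pool.sort(key=f)          # smaller objective value is better
    return pool[:sn]
```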
3.2. Search Strategy Candidate Pool

To simultaneously achieve more accurate solutions and faster convergence, Gao et al. [18] and Kiran et al. [21] proposed multiple search strategies to coordinate the exploitation and exploration abilities of ABC from various perspectives. In addition, different neighborhood search operators are employed in other methods [34].

In this work, a new search strategy candidate pool is constructed from two search strategies, and it is used in both the employed bee phase and the onlooker bee phase.

Inspired by DE/best/1, Gao and Liu [11, 35, 36] first designed an effective search equation named ABC/best/1, with which several ABC variants such as MABC [11] achieved better performance. The reason is that the global best individual can guide the i-th individual toward promising regions faster than a randomly chosen individual can. Namely, ABC/best/1 is better than the traditional search strategy in terms of exploitation ability. Therefore, ABC/best/1 is integrated into the first candidate pool in this work. Its formula can be described as follows:

v_{ij} = x_{best,j} + ξ · (x_{r1,j} − x_{r2,j}),  (8)

where r1, r2 ∈ {1, 2, …, SN} are mutually different random integers, both also different from i; best is the index of the global best individual in the population; and ξ is a function producing a uniformly distributed random number in [0, 1).

Inspired by DE/rand/1, Gao and Liu [36] also designed ABC/rand/1. Its exploitation ability is worse than that of ABC/best/1, but its exploration ability is better. To better coordinate the two abilities, ABC/rand/1 is also added to the candidate pool. Its formula is described as follows:

v_{ij} = x_{r1,j} + ξ · (x_{r2,j} − x_{r3,j}),  (9)

where r1, r2, r3 ∈ {1, 2, …, SN} are randomly generated, mutually different integers, all different from i. The remaining parameters have the same settings as those of equation (8).
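The two strategies in the first candidate pool differ only in the base vector of the move: the global best individual for ABC/best/1 versus a random individual for ABC/rand/1. A sketch of both, with the helper name and interface being ours:

```python
import random

def pool_one_candidate(pop, cost, i, strategy, rng):
    """Generate a candidate for bee i with ABC/best/1 (strategy 1) or
    ABC/rand/1 (strategy 2), modifying a single random component j."""
    sn, d = len(pop), len(pop[0])
    j = rng.randrange(d)
    xi = rng.random()                         # xi ~ U[0, 1) as in the text
    others = [s for s in range(sn) if s != i]
    v = pop[i][:]
    if strategy == 1:                         # exploit around the global best
        best = min(range(sn), key=lambda s: cost[s])
        r1, r2 = rng.sample(others, 2)
        v[j] = pop[best][j] + xi * (pop[r1][j] - pop[r2][j])
    else:                                     # explore around a random base
        r1, r2, r3 = rng.sample(others, 3)
        v[j] = pop[r1][j] + xi * (pop[r2][j] - pop[r3][j])
    return v
```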

To further coordinate the exploration and exploitation abilities of ABC, another search strategy candidate pool, also composed of two search strategies, is constructed for the last stage.

Equation (5) is chosen as the first search strategy in the second strategy candidate pool. The second search strategy is described by equation (7). That is, the original random search strategy and the OBL search strategy are integrated to make up the second candidate pool of search strategies.

To improve the comprehensive performance of ABC, two candidate pools with different strategies are presented in this work.

3.3. The Choice of Search Strategies

When multiple search strategies are employed, the choice of search strategy also plays a key role in improving the performance of ABC. Inspired by the variable neighborhood search (VNS) method [28–30], a simple yet efficient mechanism for choosing a search strategy for each bee is proposed.

In VNS, the neighborhood-change process has three steps: (i) set an iteration variable denoting the current neighborhood to k = 1; (ii) if k does not exceed the predetermined maximum, perform a local search in the k-th neighborhood and compare the objective value of the newly generated solution x′ with that of the incumbent x; and (iii) if an improvement is obtained, reset k to its initial value and update the incumbent, i.e., replace x with x′; otherwise, move to the next neighborhood, i.e., k = k + 1, and go to (ii). More details can be found in the literature [37].

In this work, each search strategy in a candidate pool is regarded as a neighborhood. Thus, the main idea of choosing a search strategy (i.e., changing the neighborhood in VNS terms) is as follows: (i) set k = 1; (ii) a bee performs a search according to strategy k of the corresponding candidate pool; and (iii) if an improvement is obtained, the next bee continues to search with the same strategy; otherwise, the next strategy is chosen, i.e., k = k + 1, meaning that the next bee performs its search with the next strategy. If k exceeds its predefined maximum value, it is reset to 1, so that the first strategy is used by the next bee.
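The switching rule above fits in a few lines; names are illustrative:

```python
def next_strategy(k, improved, k_max):
    """VNS-inspired switching: keep strategy k after an improvement,
    otherwise advance to the next strategy, wrapping back to 1."""
    if improved:
        return k
    return 1 if k >= k_max else k + 1
```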

3.4. The Proposed Method

In the light of the aforementioned analysis, the major steps of ABCVNS are summarized in Algorithm 2.

Algorithm 2: The framework of ABCVNS.
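Algorithm 2 itself is not reproduced above; the following compact sketch combines the pieces described in Sections 3.1–3.3 into a runnable loop. It simplifies several details (e.g., the onlooker phase reuses the employed-bee move without roulette-wheel selection, and the scout phase uses only the random strategy), so it should be read as an illustration rather than the authors' exact procedure:

```python
import random

def abcvns_sketch(f, lb, ub, sn=20, limit=100, max_fes=20000, seed=0):
    """Illustrative ABCVNS-style loop: two-strategy pool (ABC/best/1,
    ABC/rand/1) with VNS-style switching, plus a random scout phase."""
    rng = random.Random(seed)
    d = len(lb)
    rand_point = lambda: [lb[j] + rng.random() * (ub[j] - lb[j]) for j in range(d)]
    pop = [rand_point() for _ in range(sn)]
    cost = [f(x) for x in pop]
    trials = [0] * sn
    fes = sn
    k = 1  # current strategy in the first candidate pool (1 or 2)

    while fes < max_fes:
        for i in range(sn):                       # employed/onlooker moves (simplified)
            j = rng.randrange(d)
            xi = rng.random()
            others = [s for s in range(sn) if s != i]
            v = pop[i][:]
            if k == 1:                            # ABC/best/1
                best = min(range(sn), key=lambda s: cost[s])
                r1, r2 = rng.sample(others, 2)
                v[j] = pop[best][j] + xi * (pop[r1][j] - pop[r2][j])
            else:                                 # ABC/rand/1
                r1, r2, r3 = rng.sample(others, 3)
                v[j] = pop[r1][j] + xi * (pop[r2][j] - pop[r3][j])
            v[j] = min(max(v[j], lb[j]), ub[j])   # boundary repair
            fv = f(v); fes += 1
            if fv < cost[i]:                      # greedy selection: keep strategy
                pop[i], cost[i], trials[i] = v, fv, 0
            else:                                 # failure: switch strategy (VNS idea)
                trials[i] += 1
                k = 2 if k == 1 else 1
        for i in range(sn):                       # scout phase
            if trials[i] >= limit and fes < max_fes:
                pop[i] = rand_point()
                cost[i] = f(pop[i]); fes += 1
                trials[i] = 0
    b = min(range(sn), key=lambda s: cost[s])
    return pop[b], cost[b]
```

Running the sketch on a small sphere function with a fixed seed converges steadily toward the origin, which is enough to check that the loop is wired correctly.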

4. Experimental Study and Discussion

4.1. Benchmark Problems and Experimental Settings

To test the effects of the modifications in ABCVNS, the twenty-eight benchmark problems listed in Table 1 are employed in the following comparisons. These benchmark problems cover many different kinds of optimization problems; their detailed descriptions can be found in [21].

Table 1: Benchmark test problems.

In the following experiments, all benchmark problems except two functions are partitioned into two groups: one group consists of 30-dimensional functions, and the other consists of 60-dimensional functions. The two exceptions are tested with D = 100 in the first group and with D = 200 in the second group. Accordingly, the maximum number of function evaluations (parameter maxFEs) is set to 15e4 and 30e4, respectively. To demonstrate the advantage of ABCVNS, it is first compared with the basic ABC. For these two contenders, the remaining parameters are given below:
(i) ABC: the population size is 40, namely, SN = 20 [18], and the control parameter limit is set as suggested in [8, 21].
(ii) ABCVNS: the population size is 40, namely, SN = 20 [18], and the control parameter limit is also set as suggested in [8, 21].

Unless a change is explicitly mentioned, the aforementioned parameter values are used in the following experiments. Moreover, each algorithm is independently run 30 times on each benchmark problem.

4.2. Comparison of ABC vs. ABCVNS

To demonstrate the effectiveness of ABCVNS, comprehensive comparisons between ABCVNS and ABC are carried out with problem sizes D = 30 and D = 60. The corresponding results are provided in Tables 2 and 3, including statistical results such as the standard deviation values (Std., listed in the ninth column of Tables 2 and 3) obtained by ABC and ABCVNS over 30 independent runs. Furthermore, Wilcoxon signed rank tests between ABC and ABCVNS are performed at the 5% significance level, and the related significance statuses (Sig. for short) are listed in the tenth column of Tables 2 and 3. The symbols “+/=/−” indicate that ABCVNS is better than, equal to, or inferior to ABC, respectively. Moreover, some representative convergence curves of ABC and ABCVNS are shown in Figures 1 and 2 to display the convergence rate of ABCVNS more clearly.

Table 2: Objective values found by ABC and ABCVNS on the 30-dimensional (100-dimensional for two functions) test problems.
Table 3: Objective values found by ABC and ABCVNS on the 60-dimensional (200-dimensional for two functions) test problems.
Figure 1: Convergence curves of ABCVNS and ABC on the 12 test functions at D = 30. (a) f01, (b) f03, (c) f04, (d) f05, (e) f08, (f) f11, (g) f13, (h) f14, (i) f16, (j) f24, (k) f25, and (l) f27.
Figure 2: Convergence curves of ABCVNS and ABC on the 12 test functions at D = 60. (a) f01, (b) f02, (c) f04, (d) f05, (e) f08, (f) f12, (g) f14, (h) f16, (i) f18, (j) f24, (k) f26, and (l) f28.

According to the tenth column of Table 2, ABCVNS is superior or equal to ABC on almost all the test problems. In terms of mean values, the solution accuracies of ABCVNS are obviously improved with respect to those of ABC on nine benchmark functions. As a note, both algorithms are coded in MATLAB R2014a, which means that experimental results are reported as zero once they fall below 1e−308. Furthermore, it is worth noting that ABCVNS achieves the global optima of seven test problems, whose characteristics include unimodal-separable (US), multimodal-separable (MS), and multimodal-nonseparable functions. From these experimental results, it is clear that ABCVNS is obviously better than ABC.

As seen from Figure 1, ABCVNS is superior to ABC in terms of solution accuracy or convergence speed on most of the representative problems. In particular, ABCVNS converges faster than ABC even in cases where the two reach the same solution accuracy; the superiority of ABCVNS on a few such problems is shown in Figures 1(f) and 1(j).

According to the last column of Table 3, ABCVNS is superior or equal to ABC on almost all benchmark problems even though the problem size rises from 30 to 60, which verifies that ABCVNS is robust to the problem size. Furthermore, ABCVNS achieves the global optima of seven benchmark functions. As with the test problems at D = 30, the superiority of ABCVNS in terms of solution accuracy and convergence speed is also maintained at D = 60, as shown in Figure 2. Generally speaking, ABCVNS achieves better performance than ABC on most of the test problems; that is, our modifications to ABC are effective.

4.3. Comparisons between ABCVNS and Other Well-Known ABCs

To further verify the superiority of ABCVNS, comparisons with several well-known or recently published algorithms are conducted, including GABC [10], ABCBest1 [35], MABC [11], and ABCVSS [21]. For a fair comparison, the termination condition for all approaches is the maximum number of function evaluations, set to 15e4 for the 30-dimensional test problems (D = 100 for two problems) and 30e4 for the 60-dimensional test problems (D = 200 for two problems). The remaining parameters are the same as those employed in [21]. The statistical results found by each algorithm are provided in Tables 4 and 6. For brevity, the results of the competitors of ABCVNS are directly adopted from the findings of Kiran et al. [21].

Table 4: Comparison of ABCVNS and other ABCs over 30 independent runs on the 30-dimensional (100-dimensional for two functions) problems.
Table 5: Comparison of the ranks of the algorithms on the 30-dimensional (100-dimensional for two functions) problems.

Next, rank-based comparisons among ABCVNS and its competitors are carried out; the related results are listed in Tables 5 and 7, respectively. For these comparisons, the mean value is used first to compare ABCVNS with each contender. If the mean values are identical, the standard deviation values decide which method is better; if both are identical, the methods share the same rank.

Table 6: Comparison of ABCVNS and other ABCs over 30 independent runs on the 60-dimensional (200-dimensional for two functions) problems.
Table 7: Comparison of the ranks of the algorithms on the 60-dimensional (200-dimensional for two functions) problems.

As seen from Tables 4 and 6, ABCVNS is superior or equal to its competitors GABC, ABCBest1, MABC, and ABCVSS in most cases. As seen from Table 5, ABCVNS ranks first among the five algorithms in terms of the average rank. In sum, ABCVNS is a competitive algorithm.

From Tables 5 and 7, it can be observed that the best results are mainly achieved by ABCVNS or ABCVSS. In particular, ABCVNS keeps its competitive advantage over its contenders even though the problem size increases from 30 to 60. As before, ABCVNS ranks first among the five approaches, as shown in the last row of Table 7.

4.4. Comparison on CEC2014 Test Problems

To further demonstrate the superiority of ABCVNS, a comparison among ABCVNS, dABC [20], qABC [15], ABCVSS [21], and DFSABC_elite [38] is carried out on the thirty CEC2014 benchmark problems [39] with D = 10.

For a fair comparison, the maximum number of function evaluations, set to 10e4 as in other research works [38, 39], is employed as the halting condition. In addition, each algorithm is independently executed for 25 runs in the following experiments. The statistical results are reported in Table 8. Note that the average and the standard deviation of the function error values f(x_best) − f(x*) are provided, where x_best denotes the best solution obtained by the corresponding algorithm in each run and x* represents the true global optimum. Furthermore, except for the results found by ABCVNS, all other reported results are directly taken from [38].

Table 8: Comparison of ABCVNS and four recent ABC variants on CEC2014 test functions with D = 10.

As shown in Table 8, both DFSABC_elite and ABCVNS are better than dABC, qABC, and ABCVSS. More concretely, DFSABC_elite ranks first according to the average rank value reported in the last row of Table 8. Although ABCVNS is slightly inferior to DFSABC_elite, the orders of magnitude of the mean values obtained by ABCVNS are very close to those of DFSABC_elite on ten test functions, i.e., F7, F10, F11, F12, F14, F16, F21, F22, F26, and F28. In particular, for the benchmark functions F26 and F28, the mean values found by ABCVNS equal those found by DFSABC_elite; DFSABC_elite beats ABCVNS on these two functions merely because its standard deviation values are slightly better. In sum, the proposed ABCVNS is also suitable for solving such difficult problems.

To investigate ABCVNS comprehensively, another comparison among three recent ABC variants and ABCVNS is carried out on twenty-two classical complex functions with D = 30, which are also employed in [38]. The experimental results are reported in Table 9; except for the results found by ABCVNS, all others are directly adopted from [38].

Table 9: Comparison among dABC, qABC, DFSABC_elite, and ABCVNS on some test problems with D = 30.

As seen from Table 9, ABCVNS ranks first according to the average rank value; it is worth noting that ABCVNS ranks 1st on thirteen of the twenty-two functions. DFSABC_elite takes second place, and qABC is better than dABC. In particular, ABCVNS finds the global optima of five test functions at D = 30. Furthermore, the mean values of the solutions found by ABCVNS are obviously better than those obtained by DFSABC_elite on six test functions, whereas the mean values found by DFSABC_elite are slightly superior or equal to those of ABCVNS on three test functions. To sum up, ABCVNS can be considered a competitive method.

5. Conclusion

In this work, two search strategy candidate pools are proposed to resolve the tension between fast convergence speed and high solution accuracy. The first candidate pool is composed of two search strategies, i.e., ABC/best/1 and ABC/rand/1, and is employed in the employed bee and onlooker bee phases. The second candidate pool consists of the original random search strategy and the OBL method and is employed in the scout bee phase to achieve a better compromise between the exploration and exploitation abilities. In addition, a simple yet efficient mechanism for choosing among search strategies is presented, inspired by the variable neighborhood search algorithm. The resulting new ABC variant is called ABCVNS for short. To validate the convergence performance of ABCVNS, experiments on twenty-eight benchmark functions are performed. The comparison between ABC and ABCVNS demonstrates that the modifications take effect; that is, ABCVNS obtains better performance than ABC. To further validate its effectiveness, ABCVNS is compared with four other well-known algorithms including ABCVSS, and the related experimental results show that ABCVNS ranks first according to the average rank. Subsequently, ABCVNS is tested on a very difficult test suite, the CEC2014 benchmark functions, and the related results also demonstrate its superiority.

In a word, the proposed ABCVNS can be considered a promising method. In the future, smarter mechanisms for choosing among different strategies are worth developing to take full advantage of the various strategies.

Data Availability

The related benchmark problems used to support the findings of this study can be found in this article or the web site https://www.ntu.edu.sg/home/epnsugan/.

Conflicts of Interest

All authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (grant nos. 61563028, 71861022, and 71661021), the Foundation of A Hundred Youth Talents Training Program of Lanzhou Jiaotong University (grant no. 1520220203), and the Innovative Foundation of Lanzhou Jiaotong University-Tianjin University (grant no. 2018065).

References

  1. H. M. Pandey, A. Chaudhary, and D. Mehrotra, “A comparative review of approaches to prevent premature convergence in GA,” Applied Soft Computing, vol. 24, pp. 1047–1077, 2014.
  2. H. Niu and X. Zhou, “Optimizing urban rail timetable under time-dependent demand and oversaturated conditions,” Transportation Research Part C: Emerging Technologies, vol. 36, pp. 212–230, 2013.
  3. Y. Chen, L. Li, H. Peng, J. Xiao, Y. Yang, and Y. Shi, “Particle swarm optimizer with two differential mutation,” Applied Soft Computing, vol. 61, pp. 314–330, 2017.
  4. S. Das and P. N. Suganthan, “Differential evolution: a survey of the state-of-the-art,” IEEE Transactions on Evolutionary Computation, vol. 15, no. 1, pp. 4–31, 2011.
  5. D. Karaboga, An Idea Based on Honey Bee Swarm for Numerical Optimization, Technical Report TR06, Erciyes University, Kayseri, Turkey, 2005.
  6. D. Karaboga and B. Basturk, “A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm,” Journal of Global Optimization, vol. 39, no. 3, pp. 459–471, 2007.
  7. D. Karaboga and B. Basturk, “On the performance of artificial bee colony (ABC) algorithm,” Applied Soft Computing, vol. 8, no. 1, pp. 687–697, 2008.
  8. D. Karaboga and B. Akay, “A comparative study of artificial bee colony algorithm,” Applied Mathematics and Computation, vol. 214, no. 1, pp. 108–132, 2009.
  9. B. Alatas, “Chaotic bee colony algorithms for global numerical optimization,” Expert Systems with Applications, vol. 37, no. 8, pp. 5682–5687, 2010.
  10. G. Zhu and S. Kwong, “Gbest-guided artificial bee colony algorithm for numerical function optimization,” Applied Mathematics and Computation, vol. 217, no. 7, pp. 3166–3173, 2010.
  11. W.-F. Gao and S.-Y. Liu, “A modified artificial bee colony algorithm,” Computers & Operations Research, vol. 39, no. 3, pp. 687–697, 2012.
  12. B. Akay and D. Karaboga, “A modified artificial bee colony algorithm for real-parameter optimization,” Information Sciences, vol. 192, pp. 120–142, 2012.
  13. W.-F. Gao, S.-Y. Liu, and L.-L. Huang, “A novel artificial bee colony algorithm with Powell’s method,” Applied Soft Computing, vol. 13, no. 9, pp. 3763–3775, 2013.
  14. W. Gao, S. Liu, and L. Huang, “A novel artificial bee colony algorithm based on modified search equation and orthogonal learning,” IEEE Transactions on Cybernetics, vol. 43, no. 3, pp. 1011–1024, 2013.
  15. D. Karaboga and B. Gorkemli, “A quick artificial bee colony (qABC) algorithm and its performance on optimization problems,” Applied Soft Computing, vol. 23, pp. 227–238, 2014.
  16. H. Wang, Z. Wu, S. Rahnamayan, H. Sun, Y. Liu, and J.-S. Pan, “Multi-strategy ensemble artificial bee colony algorithm,” Information Sciences, vol. 279, pp. 587–603, 2014.
  17. W.-F. Gao, S.-Y. Liu, and L.-L. Huang, “Enhancing artificial bee colony algorithm using more information-based search equations,” Information Sciences, vol. 270, pp. 112–133, 2014.
  18. W.-F. Gao, L.-L. Huang, S.-Y. Liu, F. T. S. Chan, C. Dai, and X. Shan, “Artificial bee colony algorithm with multiple search strategies,” Applied Mathematics and Computation, vol. 271, pp. 269–287, 2015.
  19. W. Gao, F. T. S. Chan, L. Huang, and S. Liu, “Bare bones artificial bee colony algorithm with parameter adaptation and fitness-based neighborhood,” Information Sciences, vol. 316, pp. 180–200, 2015.
  20. M. S. Kıran and O. Fíndík, “A directed artificial bee colony algorithm,” Applied Soft Computing, vol. 26, pp. 454–462, 2015.
  21. M. S. Kiran, H. Hakli, M. Gunduz, and H. Uguz, “Artificial bee colony algorithm with variable search strategy for continuous optimization,” Information Sciences, vol. 300, pp. 140–157, 2015.
  22. W.-L. Xiang, Y.-Z. Li, X.-L. Meng, C.-M. Zhang, and M.-Q. An, “A grey artificial bee colony algorithm,” Applied Soft Computing, vol. 60, pp. 1–17, 2017.
  23. P. Y. Yin and Y. L. Chuang, “Adaptive memory artificial bee colony algorithm for green vehicle routing with cross-docking,” Applied Mathematical Modelling, vol. 40, no. 21-22, pp. 9302–9315, 2016. View at Publisher · View at Google Scholar · View at Scopus
  24. Y. Xiang, Y. Zhou, and H. Liu, “An elitism based multi-objective artificial bee colony algorithm,” European Journal of Operational Research, vol. 245, no. 1, pp. 168–193, 2015. View at Publisher · View at Google Scholar · View at Scopus
  25. Z. H. Ding, M. Huang, and Z. R. Lu, “Structural damage detection using artificial bee colony algorithm with hybrid search strategy,” Swarm and Evolutionary Computation, vol. 28, pp. 1–13, 2016. View at Publisher · View at Google Scholar · View at Scopus
  26. K. Z. Gao, P. N. Suganthan, Q. K. Pan, M. F. Tasgetiren, and A. Sadollah, “Artificial bee colony algorithm for scheduling and rescheduling fuzzy flexible job shop problem with new job insertion,” Knowledge-Based Systems, vol. 109, pp. 1–16, 2016. View at Publisher · View at Google Scholar · View at Scopus
  27. C. Ma, W. Hao, A. Wang, and H. Zhao, “Developing a coordinated signal control system for urban ring road under the vehicle-infrastructure connected environment,” IEEE Access, vol. 6, pp. 52471–52478, 2018. View at Publisher · View at Google Scholar · View at Scopus
  28. P. Hansen and N. Mladenović, “Variable neighborhood search: principles and applications,” European Journal of Operational Research, vol. 130, no. 3, pp. 449–467, 2001. View at Publisher · View at Google Scholar · View at Scopus
  29. N. Mladenović and P. Hansen, “Variable neighborhood search,” Computers & Operations Research., vol. 24, no. 11, pp. 1097–1100, 1997. View at Google Scholar
  30. Y. Xiao, R. Zhang, Q. Zhao, I. Kaku, and Y. Xu, “A variable neighborhood search with an effective local search for uncapacitated multilevel lot-sizing problems,” European Journal of Operational Research, vol. 235, no. 1, pp. 102–114, 2014. View at Publisher · View at Google Scholar · View at Scopus
  31. H. R. Tizhoosh, “Opposition-based learning: a new scheme for machine intelligence,” in Proceedings of the 2005 International Conference on Computational Intelligence for Modelling, Control and Automation, pp. 695–701, Vienna, Austria, November 2005. View at Publisher · View at Google Scholar
  32. S. Rahnamayan, H. R. Tizhoosh, and M. M. A. Salama, “Opposition-based differential evolution,” IEEE Transactions on Evolutionary Computation, vol. 12, no. 1, pp. 64–79, 2008. View at Publisher · View at Google Scholar · View at Scopus
  33. H. Wang, Z. Wu, and S. Rahnamayan, “Enhanced opposition-based differential evolution for solving high-dimensional continuous optimization problems,” Soft Computing, vol. 15, no. 11, pp. 2127–2140, 2011. View at Publisher · View at Google Scholar · View at Scopus
  34. X. Zhou, H. Wang, M. Wang, and J. Wan, “Enhancing the modified artificial bee colony algorithm with neighborhood search,” Soft Computing, vol. 21, no. 10, pp. 2733–2743, 2017. View at Publisher · View at Google Scholar · View at Scopus
  35. W. Gao, S. Liu, and L. Huang, “A global best artificial bee colony algorithm for global optimization,” Journal of Computational and Applied Mathematics, vol. 236, no. 11, pp. 2741–2753, 2012. View at Publisher · View at Google Scholar · View at Scopus
  36. W. Gao and S. Liu, “Improved artificial bee colony algorithm for global optimization,” Information Processing Letters, vol. 111, no. 17, pp. 871–882, 2011. View at Publisher · View at Google Scholar · View at Scopus
  37. https://en.wikipedia.org/wiki/Variable_neighborhood_search.
  38. L. Cui, G. Li, Q. Lin et al., “A novel artificial bee colony algorithm with depth-first search framework and elite-guided search equation,” Information Sciences, vol. 367-368, pp. 1012–1044, 2016. View at Publisher · View at Google Scholar · View at Scopus
  39. J. J. Liang, B. Y. Qu, and P. N. Suganthan, Problem Definitions and Evaluation Criteria for the CEC 2014 Special Session and Competition on Single Objective Real-Parameter Numerical Optimization, Nanyang Technological University, Singapore, 2014, Report No.: Technical Report 201311.