Computational Intelligence and Neuroscience
Volume 2015 (2015), Article ID 285730, 15 pages
http://dx.doi.org/10.1155/2015/285730
Research Article

An Enhanced Differential Evolution Algorithm Based on Multiple Mutation Strategies

School of Traffic & Transportation, Lanzhou Jiaotong University, Lanzhou, Gansu 730070, China

Received 12 May 2015; Accepted 5 July 2015

Academic Editor: Yufeng Zheng

Copyright © 2015 Wan-li Xiang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The differential evolution (DE) algorithm is a simple yet efficient metaheuristic for global optimization over continuous spaces. However, standard DE suffers from premature convergence, especially the DE/best/1/bin variant. In order to exploit the directional guidance of the best individual in DE/best/1/bin while avoiding local traps, an enhanced differential evolution algorithm based on multiple mutation strategies, named EDE, is proposed in this paper. The EDE algorithm integrates an opposition-based learning initialization technique to improve the quality of the initial population; a combined mutation strategy, composed of DE/current/1/bin and DE/pbest/1/bin, to accelerate standard DE while preventing the population from clustering around the global best individual; and a perturbation scheme to further avoid premature convergence. In addition, two linear time-varying probability functions are introduced to decide which solution search equation is chosen at the mutation and perturbation phases, respectively. Experimental results on twenty-five benchmark functions show that EDE is far better than standard DE. In further comparisons with five other state-of-the-art approaches, EDE remains superior or at least equal to these methods on most benchmark functions.

1. Introduction

Optimization problems are ubiquitous in various areas, including production, daily life, and the scientific community. These optimization problems are usually nonlinear and nondifferentiable. In particular, the number of their local optima may increase exponentially with the problem size. Thus, evolutionary algorithms (EAs), which require only the values of the objective function, have clear advantages and have drawn increasing attention from researchers worldwide. As a result, a great number of evolutionary algorithms have been developed, such as genetic algorithms (GAs), particle swarm optimization (PSO), ant colony optimization (ACO), and the differential evolution (DE) algorithm. Among them, differential evolution is one of the most powerful stochastic real-parameter optimization algorithms [1]. It was originally developed by Storn and Price [2, 3] in 1995.

Due to its simple implementation, few control parameters, and fast convergence, DE has been widely and successfully applied in function optimization problems [2–26], constrained optimization problems [27–29], multiobjective optimization problems [30], scheduling [31–33], and others [34–39].

According to the aforementioned statements, it can be seen that DE has been very successful in solving various optimization problems. As far as the type of optimization problems is concerned, most research focuses on continuous function optimization. However, the convergence precision and convergence speed on function optimization still need improvement. That is, the exploration ability and exploitation ability of DE cannot be well balanced. To overcome this imbalance, more and more researchers have developed a large number of DE variants. For example, Noman and Iba [11] proposed an accelerated differential evolution by incorporating an adaptive local search technique. Rahnamayan et al. [13] proposed an opposition-based differential evolution (ODE for short), in which a novel opposition-based learning (OBL) technique and a generation-jumping scheme are employed. Qin et al. [14] proposed a self-adaptive differential evolution algorithm, called SaDE, in which both the trial vector generation strategies and their associated parameter values are dynamically self-adapted during the process of producing promising solutions. Zhang and Sanderson [15] proposed a novel differential evolution referred to as JADE, in which a novel self-adaptive parameter scheme and a new mutation strategy with optional archive are proposed; these improvements give JADE a very fast convergence speed and high-quality solutions. Subsequently, Gong et al. [22, 23] proposed several enhanced DE versions based on JADE by introducing adaptive strategy selection schemes or control parameter adaptation mechanisms. In summary, all these state-of-the-art DE variants achieve better convergence performance than the traditional DE.

Unfortunately, no single DE variant achieves the best solution for all optimization problems, because exploration and exploitation often contradict each other in practice. Hence, searching for better approaches remains necessary. In order to solve continuous optimization problems more efficiently, an enhanced differential evolution algorithm based on multiple mutation strategies, called EDE for short, is presented in this paper.

The structure of the paper is organized as follows. The standard differential evolution algorithm is described briefly in Section 2. In Section 3, an enhanced differential evolution algorithm is presented and described in detail. Subsequently, Section 4 employs a set of benchmark functions to comprehensively investigate the performance of the proposed algorithm through experimental results of these functions and comparisons with other well-known evolutionary algorithms. Finally, conclusions and further study directions are given in Section 5.

2. Differential Evolution Algorithm

Differential evolution algorithm was first proposed by Storn and Price [2, 3]. Like other evolutionary algorithms, an initialization phase is its first task. In addition, it consists of three major operations: mutation, crossover, and selection. Meanwhile, several mutation strategies were proposed in the work [3]. In order to distinguish the different DE versions with various mutation strategies or different crossover schemes, the famous notation DE/x/y/z was introduced in the literature [3], where x represents the vector to be mutated, y is the number of difference vectors used, and z denotes the crossover scheme employed. DE/rand/1/bin is applied most commonly and is usually considered the canonical DE version. To be specific, the canonical DE version can be described as follows.

2.1. Initialization

At the first step, a population of NP individuals is generated randomly in the following form:

x_{i,j} = x_j^min + rand(0, 1) * (x_j^max - x_j^min),

where i = 1, 2, ..., NP; j = 1, 2, ..., D; and x_j^min and x_j^max are the lower and upper bounds of the jth parameter, respectively. Then, the cost function value of each solution is evaluated.
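The initialization step can be sketched as follows; this is an illustrative NumPy implementation of the standard formula, not the authors' code (function and variable names are ours):

```python
import numpy as np

def initialize(NP, D, lower, upper, rng=None):
    """Randomly generate NP individuals of dimension D within [lower, upper]."""
    rng = rng or np.random.default_rng(0)
    lower = np.broadcast_to(np.asarray(lower, dtype=float), (D,))
    upper = np.broadcast_to(np.asarray(upper, dtype=float), (D,))
    # x_{i,j} = x_j^min + rand(0, 1) * (x_j^max - x_j^min)
    return lower + rng.random((NP, D)) * (upper - lower)

pop = initialize(NP=100, D=30, lower=-100.0, upper=100.0)
print(pop.shape)  # prints (100, 30)
```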

2.2. Mutation

Mutation strategy is very important in DE. At the tth generation, a mutant vector v_i is generated for each D-dimensional target vector x_i by the following formula:

v_i = x_{r1} + F * (x_{r2} - x_{r3}),

where r1, r2, and r3 are mutually different random integers chosen from {1, 2, ..., NP}, and they are also different from the current index i. The mutation scale factor F is a real, constant factor which controls the amplification of the differential variation [3].
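In code, the DE/rand/1 mutation can be sketched like this; an illustrative NumPy sketch in which the index-selection helper is our own:

```python
import numpy as np

def mutate_rand1(pop, i, F=0.5, rng=None):
    """DE/rand/1 mutant vector: v_i = x_r1 + F * (x_r2 - x_r3),
    with r1, r2, r3 mutually different and all different from i."""
    rng = rng or np.random.default_rng()
    candidates = [k for k in range(len(pop)) if k != i]  # exclude current index
    r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])
```

With F = 0.5 (the setting used later in the paper), each mutant is one random individual perturbed by a scaled difference of two others.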

2.3. Crossover

In order to exchange information between a mutant vector v_i and the current target vector x_i, a crossover operation is introduced. At this time, a trial vector u_i is produced in the following form:

u_{i,j} = v_{i,j} if rand_j <= Cr or j = j_rand, and u_{i,j} = x_{i,j} otherwise,

where j = 1, 2, ..., D; rand_j is a uniform random number in [0, 1]; and j_rand is a randomly chosen index from {1, 2, ..., D}, which ensures that the trial vector u_i obtains at least one parameter from the mutant vector v_i. The crossover rate Cr is a predefined constant within the range [0, 1], and it controls the fraction of parameter values copied from the mutant vector.
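Binomial crossover can be sketched as below (an illustrative implementation of the standard scheme, with names of our choosing):

```python
import numpy as np

def crossover_bin(target, mutant, Cr=0.9, rng=None):
    """Binomial crossover: each component comes from the mutant with
    probability Cr; component j_rand always comes from the mutant."""
    rng = rng or np.random.default_rng()
    D = len(target)
    mask = rng.random(D) <= Cr
    mask[rng.integers(D)] = True  # j_rand guarantees one mutant component
    return np.where(mask, mutant, target)
```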

2.4. Selection

After the crossover operation, the trial vector u_i is compared to the target vector x_i through a greedy selection mechanism. The winner is retained and becomes a member of the next generation. For a minimization problem, the selection process can be described by the following equation:

x_i^{t+1} = u_i^t if f(u_i^t) <= f(x_i^t), and x_i^{t+1} = x_i^t otherwise,

where f(.) denotes the objective value of a solution and u_i^t is the offspring corresponding to the target vector x_i^t.
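The greedy selection step amounts to a one-line comparison, sketched here for a minimization objective:

```python
def select(target, trial, f):
    """Greedy one-to-one selection for minimization:
    the trial vector survives only if it is no worse than the target."""
    return trial if f(trial) <= f(target) else target

# the trial with the lower objective value replaces the target
winner = select([3.0, 4.0], [1.0, 1.0], f=lambda x: sum(v * v for v in x))
print(winner)  # prints [1.0, 1.0]
```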

In summary, except for the initialization phase, the aforementioned steps are repeated in turn until a stopping criterion is met.

3. An Enhanced Differential Evolution Algorithm

3.1. Initialization Based on Opposition-Based Learning

Recently, Rahnamayan et al. [12, 13] proposed a new scheme for generating random numbers, called opposition-based learning (OBL), which can effectively make use of random numbers and their opposites. Moreover, the ability of OBL to accelerate the optimization, search, or learning process in many soft computing techniques has been reported in the literature [12, 13]. First, a state-of-the-art algorithm, named ODE, was proposed by applying the OBL scheme to accelerate DE [13]. After that, the OBL scheme has been successfully used in other evolutionary algorithms such as the artificial bee colony algorithm [40], the harmony search algorithm [41], particle swarm optimization [42, 43], and the teaching-learning-based algorithm [44]. A comprehensive survey of the OBL scheme can be found in [45].

In order to improve the solution quality of the initial population, the OBL scheme is employed to initialize the population individuals of EDE in this work. The initialization process is described in Algorithm 1.

Algorithm 1: Initialization based on opposition-based learning.

In Algorithm 1, two sets, a random set P and its opposite set OP, are generated, where p_{i,j} = x_j^min + rand(0, 1) * (x_j^max - x_j^min) and op_{i,j} = x_j^min + x_j^max - p_{i,j}. The initial population consists of the top NP individuals chosen from the union of P and OP according to their fitness values.
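A minimal sketch of this opposition-based initialization, assuming the standard OBL opposite-point formula op = min + max - p from [12, 13] (function names and the minimization convention are our assumptions):

```python
import numpy as np

def obl_initialize(NP, D, lower, upper, f, rng=None):
    """Generate a random set P and its opposite set OP, then keep the
    NP fittest points of their union (minimization assumed)."""
    rng = rng or np.random.default_rng(0)
    lower = np.broadcast_to(np.asarray(lower, dtype=float), (D,))
    upper = np.broadcast_to(np.asarray(upper, dtype=float), (D,))
    P = lower + rng.random((NP, D)) * (upper - lower)
    OP = lower + upper - P                      # opposite points
    union = np.vstack([P, OP])
    order = np.argsort(np.apply_along_axis(f, 1, union))
    return union[order[:NP]]                    # top NP by objective value

pop = obl_initialize(20, 30, -100.0, 100.0, f=lambda x: np.sum(x ** 2))
print(pop.shape)  # prints (20, 30)
```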

3.2. Multiple Mutation Strategies

A mutation strategy DE/current/1/bin is employed. Namely, the target vector x_i is employed as the base vector in this DE version. That is, a mutant vector will be generated by the following equation:

v_i = x_i + F * (x_{r1} - x_{r2}),

where i represents the index of the current individual, and r1 and r2 are mutually different random integers chosen from {1, 2, ..., NP}, both different from i.

In order to better take advantage of the guiding information of the best individuals, a new version of DE/best/1/bin, namely DE/pbest/1/bin proposed by Zhang and Sanderson [15], is further employed in this work to increase the convergence speed of the proposed approach EDE. That is, a mutant vector is produced as follows:

v_i = x_{pbest} + F * (x_{r1} - x_{r2}),

where x_{pbest} is randomly chosen from the top 100p% individuals of the current population according to their fitness values. It should be noted that p in DE/pbest/1/bin, as in JADE [15], is a proportion within the range (0, 1].

More specifically, according to the first mutation strategy, it can be seen that newly generated mutant vectors are scattered around their respective target vectors, which can not only keep good population diversity but also avoid the overrandomness of the classic mutation strategy DE/rand/1/bin. According to the second mutation strategy, DE/pbest/1/bin, owing to the guidance of one of several better individuals rather than the single best individual, the population is driven towards better individuals so as to enhance the convergence speed. In addition, it can also prevent EDE from congregating in the vicinity of the global best individual to some extent.
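The two mutation strategies can be sketched as below; the exact index-exclusion rules and the rounding of the top-p fraction are our assumptions:

```python
import numpy as np

def mutate_current1(pop, i, F, rng):
    """DE/current/1: v_i = x_i + F * (x_r1 - x_r2), with r1 != r2, both != i."""
    idx = [k for k in range(len(pop)) if k != i]
    r1, r2 = rng.choice(idx, size=2, replace=False)
    return pop[i] + F * (pop[r1] - pop[r2])

def mutate_pbest1(pop, fitness, i, F, p, rng):
    """DE/pbest/1 (JADE-style): the base vector is drawn at random from
    the top 100p% individuals instead of being the single best one."""
    NP = len(pop)
    top = np.argsort(fitness)[:max(1, int(round(p * NP)))]
    pbest = rng.choice(top)
    idx = [k for k in range(NP) if k != i and k != pbest]
    r1, r2 = rng.choice(idx, size=2, replace=False)
    return pop[pbest] + F * (pop[r1] - pop[r2])
```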

In the meantime, a time-varying probabilistic parameter p1 is designed to control which of the two mutation strategies is executed at the mutation step. The parameter can be described as follows:

p1 = p1max - (p1max - p1min) * FEs / MaxFEs,

where p1max and p1min denote the maximum probability value and the minimum probability value, respectively, FEs is the current number of fitness function evaluations, and MaxFEs represents the maximum number of fitness function evaluations.
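A linear schedule of this kind is a one-liner; the decreasing direction (exploratory strategy favoured early, exploitative late) is our assumption, consistent with the stated goal of diversity first and fast convergence later:

```python
def linear_prob(fes, max_fes, p_max, p_min):
    """Linearly interpolate the selection probability from p_max
    (at fes = 0) down to p_min (at fes = max_fes)."""
    return p_max - (p_max - p_min) * fes / max_fes

print(linear_prob(0, 100000, 1.0, 0.1))  # prints 1.0
```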

As a matter of fact, the probability parameter plays an important role in balancing the exploration ability and the exploitation ability. That is, it is hoped that good population diversity is kept at the beginning of evolution and fast convergence speed is achieved at the end of search.

3.3. Perturbation

After all the operations (mutation, crossover, and selection) of differential evolution are repeated, a perturbation scheme is conducted on the best individual in order to further balance the search ability of the aforementioned solution search equations. During this process, two perturbation equations, (8) and (9), are introduced, and the best individual is perturbed dimension by dimension according to them. In both equations, tb denotes a temporary copy of the best individual, best represents the index of the best individual in the current population, and each perturbation draws uniform random numbers as well as a randomly chosen dimension index.

From (9), it can be observed that the perturbation operates on the current component of the best individual and that a differential variation between two randomly selected individuals acts as the perturbation scale. Notice that the dimension used in the differential variation may differ from the dimension being perturbed, which helps to enrich the perturbation scales to some extent; that is, it may increase the probability of escaping a local minimum trap.

Moreover, the first term of (9) differs from the first term on the right-hand side of (8). The reason for introducing (8) is that information between different dimensions of the best individual can be shared. Thus, the EDE algorithm can escape a local optimum trap with a larger probability.

Like the aforementioned tradeoff scheme, a probability parameter p2 is employed. The parameter varies linearly during the evolution process as follows:

p2 = p2max - (p2max - p2min) * FEs / MaxFEs,

where p2max and p2min denote the maximum probability value and the minimum probability value, respectively. The rest of the parameters are the same as those in (7).

Concretely speaking, (8) is executed with this probability value, while (9) is executed with the complementary probability.
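Since the exact forms of (8) and (9) are not reproduced here, the following sketch only illustrates the described behaviour: the best individual is perturbed dimension by dimension, a differential variation taken at a possibly different dimension k acts as the perturbation scale, and with some probability the base term is another dimension of the best individual (as in (8)), otherwise its current dimension (as in (9)). All update rules below are our assumptions consistent with that description, not the paper's equations:

```python
import numpy as np

def perturb_best(pop, best, lower, upper, p2, rng):
    """Dimension-by-dimension perturbation of the best individual
    (illustrative sketch; the update rules are assumed, see text)."""
    tb = pop[best].copy()                 # temporary copy of the best
    D = len(tb)
    for j in range(D):
        r1, r2 = rng.choice(len(pop), size=2, replace=False)
        k = rng.integers(D)               # dimension k may differ from j
        diff = rng.random() * (pop[r1][k] - pop[r2][k])
        if rng.random() < p2:
            tb[j] = tb[k] + diff          # assumed form of (8): cross-dimension base
        else:
            tb[j] = tb[j] + diff          # assumed form of (9): same-dimension base
    return np.clip(tb, lower, upper)
```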

3.4. Boundary Constraints Handling Technique

In order to keep solutions subject to the boundary constraints, any components of a solution violating the predefined boundary constraints should be repaired. That is, if a parameter value produced by the solution search equations exceeds its predefined boundaries, the parameter is reset to an acceptable value within the corresponding range. The repair rule used in the literature [17] is employed in this work.
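A common repair rule of this kind resets each violating component to a random value inside its range; whether the literature [17] uses exactly this rule is an assumption on our part:

```python
import numpy as np

def repair(x, lower, upper, rng):
    """Reset every out-of-bounds component to a fresh random value
    within [lower_j, upper_j] (a common repair rule; [17] may differ)."""
    x = np.asarray(x, dtype=float).copy()
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    bad = (x < lower) | (x > upper)
    x[bad] = lower[bad] + rng.random(int(bad.sum())) * (upper - lower)[bad]
    return x
```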

3.5. The Proposed Approach

In order to effectively make use of the guidance information of the best individuals, the mutation strategy DE/best/1/bin is first considered. In order to prevent a large number of individuals from clustering around the global best individual, inspired by JADE [15], the mutation strategy DE/pbest/1/bin is actually used. In addition, another mutation strategy, DE/current/1/bin, is employed to further temper the exploitation ability of DE/pbest/1/bin. At the same time, a selection probability with a linear time-varying nature is introduced to decide which mutation strategy works at the mutation phase of DE. Subsequently, a perturbation scheme for the best individual is incorporated into the modified DE version. In short, based on the above explanation, the pseudocode of EDE is given in Algorithm 2.

Algorithm 2: The EDE algorithm.

4. Experimental Study and Discussion

4.1. Benchmark Functions and Parameter Settings

To verify the optimization effectiveness of EDE, twenty-five benchmark functions with different characteristics taken from Yao et al. [46], Gong et al. [23], and Gao and Liu [40] are employed here.

These benchmark functions are listed briefly in Table 1, in which D designates the dimensionality of the test functions. All the functions are scalable, high-dimensional problems. Some of the functions are unimodal; the step function has one minimum and is discontinuous; another is a quartic function with noise; the remaining functions are difficult multimodal functions in which the number of local minima increases exponentially with the dimension of the test function. In addition, six shifted functions are chosen to evaluate the performance of EDE; for each shifted function, the shifted vector is generated randomly within the corresponding search range.

Table 1: Benchmark functions used in experiments.

In our experimental study, all benchmark functions are tested in 30 dimensions and 100 dimensions. The corresponding maximum number of fitness function evaluations () is and , respectively. Moreover, the other specific parameters of DE and EDE are set as follows.

DE Settings. In canonical DE/rand/1/bin, the scale factor is set to 0.5, the parameter of crossover rate Cr is set to 0.9, and the population size SN is 100. It should be noted that the values of three parameters are the same as those of the state-of-the-art algorithm ODE [13].

EDE Settings. In our proposed algorithm, the scale factor is set to 0.5. The parameter of crossover rate Cr is set to 0.9. And the population size SN is 20. A few other parameters are set as follows: , , , , and .

For this set of experiments on the 25 benchmark functions, we use the aforementioned parameter settings unless a change is mentioned. Furthermore, each test case is optimized over thirty independent runs. Experimental results for these well-known problems, as well as comparisons with other famous methods, are reported as follows.

4.2. Comparison between DE and EDE

For the purpose of validating the enhancement effectiveness of EDE, EDE is first compared with canonical DE in terms of the best, worst, median, mean, and standard deviation (Std.) values of the solutions achieved by each algorithm over 30 independent runs. The corresponding results are listed in Table 2. Furthermore, the Wilcoxon rank sum test is conducted to assess the significance of the differences between DE and EDE at the chosen significance level. The related test results are also reported in Table 2. Some representative convergence curves of DE and EDE are then shown in Figure 1 in order to illustrate the convergence speed of EDE more clearly.
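The rank sum test itself is easy to reproduce. The sketch below applies the normal approximation to hypothetical run results (the numbers are synthetic, not the paper's data; scipy.stats.ranksums performs the same test):

```python
import numpy as np
from math import erf, sqrt

def ranksum_test(a, b):
    """Two-sided Wilcoxon rank-sum test via the normal approximation
    (illustrative; assumes no ties in the combined sample)."""
    n1, n2 = len(a), len(b)
    combined = np.concatenate([a, b])
    ranks = np.argsort(np.argsort(combined)) + 1.0  # 1-based ranks
    W = ranks[:n1].sum()                            # rank sum of sample a
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (W - mu) / sigma
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))
    return z, p

rng = np.random.default_rng(0)
# hypothetical final errors of DE and EDE over 30 independent runs
de = rng.normal(1e-3, 2e-4, size=30)
ede = rng.normal(1e-6, 2e-7, size=30)
z, p = ranksum_test(ede, de)
print(p < 0.05)  # prints True: the difference is significant at the 5% level
```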

Table 2: Best, worst, median, mean, and standard deviation values achieved by DE and EDE through 30 independent runs.
Figure 1: Convergence performance of DE and EDE on the twelve test functions at .

From Table 2, it can be seen that EDE is significantly superior to DE in most cases. To be specific, EDE is significantly better than DE on 20 functions, that is, , , , , , , , , , , , , , , , , , , , and , in terms of related Wilcoxon rank sum test results. In addition, for function with , EDE is still better than DE. For function with , EDE is equal to DE; actually, the mean result achieved by EDE is slightly better than that of DE. For function with , EDE is similar to DE. For the function with , DE outperforms EDE. Nevertheless, the results obtained by EDE are very close to those found by DE. For the functions and , DE is better than EDE. And yet, the results obtained by EDE are very close to those found by DE on the functions at and at .

From Figure 1, it can also be observed that EDE is far better than DE in terms of solutions accuracy and convergence speed on the representative cases.

According to the aforementioned analyses, it can be concluded that EDE is better than or approximately equal to DE on almost all the functions. In other words, multiple mutation strategies and perturbation schemes are beneficial to the performance of EDE.

4.3. Comparison between EDE and Other Three DE Variants

In this subsection, EDE is further compared with some representatives of state-of-the-art DE variants, such as SaDE [14], JADE [15], and SaJADE [23]. Here sixteen test functions are used for the comparison. The related comparison results are listed in Table 3. For a fair comparison, except for the proposed algorithm EDE, the rest of the results reported in Table 3 are directly taken from Gong et al. [23].

Table 3: Performance comparison between EDE and other three DEs over 30 independent runs for the 16 test functions at , where “” means that EDE wins in functions, ties in functions, and loses in functions, compared with its competitors.

From Table 3, it can be seen that EDE is obviously better than JADE on twelve functions, that is, , , , , , , , , , , , and . JADE works better than EDE on four functions. Notice that EDE is just slightly inferior to JADE on the three functions , and . When compared with SaDE, EDE performs better than it does on thirteen functions. And the results found by EDE are very close to those found by SaDE on other two functions and . When compared with SaJADE, SaJADE is better than EDE on four functions, but the superiority of SaJADE is not obvious on the three functions , and except for function . Yet EDE is better than or equal to SaJADE on other twelve functions.

It should be pointed out that the results are summarized as in the last line of Table 3, which means that EDE wins in function cases, ties in cases, and loses in cases when compared with its competitor. For JADE, SaDE, and SaJADE, they are , , and , respectively. The results show that EDE is superior to or similar to other three approaches on the majority of benchmark functions.

4.4. Comparison among EDE and Two Artificial Bee Colony Algorithms

Artificial bee colony algorithm introduced by Karaboga and Basturk is a relatively new swarm-based optimization algorithm [47]. And it has become a promising technique [48]. Particularly, a modified artificial bee colony algorithm, named MABC, proposed by Gao and Liu [40], is an outstanding representative of many enhanced ABC versions. In order to further demonstrate the superiority of EDE, EDE is compared with standard ABC and MABC on twenty-one functions again. In the experimental study, the maximum number of fitness function evaluations () is set to for all compared algorithms as recommended by Gao and Liu [40].

The further comparison results are given in Table 4. For convenience, besides the data achieved by the EDE algorithm, the rest of the results in Table 4 are taken directly from Gao and Liu [40].

Table 4: Comparison between EDE and other two ABCs over 30 independent runs on the 21 test functions with in terms of mean and standard deviation.

From Table 4, it is clear that EDE is better than or at least even with ABC on nineteen functions, but ABC only works better than EDE on two functions. EDE is better than or equal to MABC on eighteen functions. MABC also only surpasses EDE on three functions. In addition, the accuracy of solution obtained by EDE is far better than that obtained by ABC on many benchmark functions such as , and . Meanwhile, the accuracy of solution obtained by EDE is far better than that obtained by MABC on some test functions including , and . In summary, EDE is superior to both ABC and MABC.

5. Conclusion

In order to achieve a better compromise between the exploration ability and the exploitation ability of DE, an enhanced differential evolution algorithm, called EDE, is presented in this work. In EDE, first, an opposition-based learning initialization technique is employed. Next, inspired by JADE [15], the mutation strategy DE/pbest/1/bin is introduced. At the same time, the mutation strategy DE/current/1/bin is also introduced. That is, EDE combines these two mutation strategies to better balance the exploration and the exploitation of DE. When the EDE algorithm runs, one of the two mutation strategies is chosen randomly according to a linear time-varying scheme. Last, a perturbation scheme for the best individual, itself composed of two solution search equations, is presented in order to escape local minima; specifically, the best individual is perturbed dimension by dimension in two modes. All these modifications make up the proposed algorithm EDE.

To verify the convergence performance of EDE, twenty-five benchmark functions with different characteristics are taken from the literature. The first set of experimental results demonstrates that EDE significantly enhances the performance of standard DE in terms of the best, worst, median, mean, and standard deviation (Std.) values of the final solutions in most cases. Moreover, two further comparisons show that EDE performs significantly better than, or is at least highly competitive with, five other well-known algorithms, namely JADE, SaDE, SaJADE, ABC, and MABC, on the majority of the corresponding benchmark functions. Therefore, it can be concluded that EDE is an efficient method and may be a good alternative for solving complex numerical optimization problems.

Last but not least, it is desirable to further apply the EDE algorithm to deal with other optimization problems such as the training of neural networks, system parameter identification, and data clustering.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (Grant nos. 61064012, 61164003, 61263027, 61364026, and 61563028), the Natural Science Foundation of Gansu Province (Grant no. 148RJZA030), New Teacher Project of Research Fund for the Doctoral Program of Higher Education of China (Grant no. 20126204120002), and the Science and Technology Foundation of Lanzhou Jiaotong University (Grant no. ZC2014010).

References

  1. S. Das and P. N. Suganthan, “Differential evolution: a survey of the state-of-the-art,” IEEE Transactions on Evolutionary Computation, vol. 15, no. 1, pp. 4–31, 2011.
  2. R. Storn and K. Price, “Differential evolution—a simple and efficient adaptive scheme for global optimization over continuous spaces,” Report TR-95-012, 1995.
  3. R. Storn and K. Price, “Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces,” Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.
  4. M. M. Ali and A. Törn, “Population set-based global optimization algorithms: some modifications and numerical studies,” Computers & Operations Research, vol. 31, no. 10, pp. 1703–1725, 2004.
  5. J. Liu and J. Lampinen, “A fuzzy adaptive differential evolution algorithm,” Soft Computing, vol. 9, no. 6, pp. 448–462, 2005.
  6. J. Sun, Q. Zhang, and E. P. Tsang, “DE/EDA: a new evolutionary algorithm for global optimization,” Information Sciences, vol. 169, no. 3-4, pp. 249–262, 2005.
  7. J. Brest, S. Greiner, B. Bošković, M. Mernik, and V. Zumer, “Self-adapting control parameters in differential evolution: a comparative study on numerical benchmark problems,” IEEE Transactions on Evolutionary Computation, vol. 10, no. 6, pp. 646–657, 2006.
  8. P. Kaelo and M. M. Ali, “A numerical study of some modified differential evolution algorithms,” European Journal of Operational Research, vol. 169, no. 3, pp. 1176–1184, 2006.
  9. M. M. Ali, “Differential evolution with preferential crossover,” European Journal of Operational Research, vol. 181, no. 3, pp. 1137–1147, 2007.
  10. Y.-J. Wang and J.-S. Zhang, “Global optimization by an improved differential evolutionary algorithm,” Applied Mathematics and Computation, vol. 188, no. 1, pp. 669–680, 2007.
  11. N. Noman and H. Iba, “Accelerating differential evolution using an adaptive local search,” IEEE Transactions on Evolutionary Computation, vol. 12, no. 1, pp. 107–125, 2008.
  12. S. Rahnamayan, H. R. Tizhoosh, and M. M. A. Salama, “Opposition versus randomness in soft computing techniques,” Applied Soft Computing, vol. 8, no. 2, pp. 906–918, 2008.
  13. S. Rahnamayan, H. R. Tizhoosh, and M. M. A. Salama, “Opposition-based differential evolution,” IEEE Transactions on Evolutionary Computation, vol. 12, no. 1, pp. 64–79, 2008.
  14. A. K. Qin, V. L. Huang, and P. N. Suganthan, “Differential evolution algorithm with strategy adaptation for global numerical optimization,” IEEE Transactions on Evolutionary Computation, vol. 13, no. 2, pp. 398–417, 2009.
  15. J. Zhang and A. C. Sanderson, “JADE: adaptive differential evolution with optional external archive,” IEEE Transactions on Evolutionary Computation, vol. 13, no. 5, pp. 945–958, 2009.
  16. Z. Yang, K. Tang, and X. Yao, “Scalability of generalized adaptive differential evolution for large-scale continuous optimization,” Soft Computing, vol. 15, no. 11, pp. 2141–2155, 2011.
  17. W. Gong, Z. Cai, and C. X. Ling, “DE/BBO: a hybrid differential evolution with biogeography-based optimization for global numerical optimization,” Soft Computing, vol. 15, no. 4, pp. 645–665, 2011.
  18. M. Weber, F. Neri, and V. Tirronen, “A study on scale factor in distributed differential evolution,” Information Sciences, vol. 181, no. 12, pp. 2488–2511, 2011.
  19. Y. Wang, Z. X. Cai, and Q. F. Zhang, “Differential evolution with composite trial vector generation strategies and control parameters,” IEEE Transactions on Evolutionary Computation, vol. 15, no. 1, pp. 55–66, 2011.
  20. R. Mallipeddi, P. N. Suganthan, Q. K. Pan, and M. F. Tasgetiren, “Differential evolution algorithm with ensemble of parameters and mutation strategies,” Applied Soft Computing, vol. 11, no. 2, pp. 1679–1696, 2011.
  21. A. Ghosh, S. Das, A. Chowdhury, and R. Giri, “An improved differential evolution algorithm with fitness-based adaptation of the control parameters,” Information Sciences, vol. 181, no. 18, pp. 3749–3765, 2011.
  22. W. Gong, Á. Fialho, Z. Cai, and H. Li, “Adaptive strategy selection in differential evolution for numerical optimization: an empirical study,” Information Sciences, vol. 181, no. 24, pp. 5364–5386, 2011.
  23. W. Y. Gong, Z. H. Cai, C. X. Ling, and H. Li, “Enhanced differential evolution with adaptive strategies for numerical optimization,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 41, no. 2, pp. 397–413, 2011.
  24. M. Weber, F. Neri, and V. Tirronen, “Shuffle or update parallel differential evolution for large-scale optimization,” Soft Computing, vol. 15, no. 11, pp. 2089–2107, 2011.
  25. A. P. Piotrowski, J. J. Napiorkowski, and A. Kiczko, “Differential evolution algorithm with separated groups for multi-dimensional optimization problems,” European Journal of Operational Research, vol. 216, no. 1, pp. 33–46, 2012.
  26. S. Ghosh, S. Das, A. V. Vasilakos, and K. Suresh, “On convergence of differential evolution over a class of continuous functions with unique global optimum,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 1, pp. 107–124, 2012.
  27. E. Mezura-Montes, M. E. Miranda-Varela, and R. del Carmen Gómez-Ramón, “Differential evolution in constrained numerical optimization: an empirical study,” Information Sciences, vol. 180, no. 22, pp. 4223–4262, 2010.
  28. H. Liu, Z. Cai, and Y. Wang, “Hybridizing particle swarm optimization with differential evolution for constrained numerical and engineering optimization,” Applied Soft Computing, vol. 10, no. 2, pp. 629–640, 2010.
  29. D. Zou, H. Liu, L. Gao, and S. Li, “A novel modified differential evolution algorithm for constrained optimization problems,” Computers and Mathematics with Applications, vol. 61, no. 6, pp. 1608–1623, 2011.
  30. W. Gong and Z. Cai, “An improved multiobjective differential evolution based on Pareto-adaptive ϵ-dominance and orthogonal design,” European Journal of Operational Research, vol. 198, no. 2, pp. 576–601, 2009.
  31. X. Li and M. Yin, “An opposition-based differential evolution algorithm for permutation flow shop scheduling based on diversity measure,” Advances in Engineering Software, vol. 55, pp. 10–31, 2013.
  32. M. Fatih Tasgetiren, Q.-K. Pan, P. N. Suganthan, and O. Buyukdagli, “A variable iterated greedy algorithm with differential evolution for the no-idle permutation flowshop scheduling problem,” Computers and Operations Research, vol. 40, no. 7, pp. 1729–1743, 2013.
  33. R. Zhang, S. Song, and C. Wu, “A hybrid differential evolution algorithm for job shop scheduling problems with expected total tardiness criterion,” Applied Soft Computing Journal, vol. 13, no. 3, pp. 1448–1458, 2013. View at Publisher · View at Google Scholar · View at Scopus
  34. A. Nobakhti and H. Wang, “A simple self-adaptive differential Evolution algorithm with application on the ALSTOM gasifier,” Applied Soft Computing Journal, vol. 8, no. 1, pp. 350–370, 2008. View at Publisher · View at Google Scholar · View at Scopus
  35. Y. Liu and F. Sun, “A fast differential evolution algorithm using k-Nearest Neighbour predictor,” Expert Systems with Applications, vol. 38, no. 4, pp. 4254–4258, 2011. View at Publisher · View at Google Scholar · View at Scopus
  36. L. Wang, X. Fu, Y. Mao, M. Ilyas Menhas, and M. Fei, “A novel modified binary differential evolution algorithm and its applications,” Neurocomputing, vol. 98, pp. 55–75, 2012. View at Publisher · View at Google Scholar · View at Scopus
  37. Y. Tang, X. Zhang, C. Hua, L. Li, and Y. Yang, “Parameter identification of commensurate fractional-order chaotic system via differential evolution,” Physics Letters A, vol. 376, no. 4, pp. 457–464, 2012. View at Publisher · View at Google Scholar · View at Scopus
  38. H.-C. Lu, M.-H. Chang, and C.-H. Tsai, “Parameter estimation of fuzzy neural network controller based on a modified differential evolution,” Neurocomputing, vol. 89, pp. 178–192, 2012. View at Publisher · View at Google Scholar · View at Scopus
  39. F. Fabris and R. A. Krohling, “A co-evolutionary differential evolution algorithm for solving min-max optimization problems implemented on GPU using C-CUDA,” Expert Systems with Applications, vol. 39, no. 12, pp. 10324–10333, 2012. View at Publisher · View at Google Scholar · View at Scopus
  40. W.-F. Gao and S.-Y. Liu, “A modified artificial bee colony algorithm,” Computers & Operations Research, vol. 39, no. 3, pp. 687–697, 2012. View at Publisher · View at Google Scholar · View at Scopus
  41. A. K. Qin and F. Forbes, “Dynamic regional harmony search with opposition and local learning,” in Proceedings of the 13th Annual Genetic and Evolutionary Computation Conference (GECCO '11), pp. 53–54, Dublin, Ireland, July 2011. View at Publisher · View at Google Scholar · View at Scopus
  42. W.-f. Gao, S.-Y. Liu, and L.-l. Huang, “Particle swarm optimization with chaotic opposition-based population initialization and stochastic search technique,” Communications in Nonlinear Science and Numerical Simulation, vol. 17, no. 11, pp. 4316–4327, 2012. View at Publisher · View at Google Scholar · View at MathSciNet · View at Scopus
  43. N. Dong, C.-H. Wu, W.-H. Ip, Z.-Q. Chen, C.-Y. Chan, and K.-L. Yung, “An opposition-based chaotic GA/PSO hybrid algorithm and its application in circle detection,” Computers and Mathematics with Applications, vol. 64, no. 6, pp. 1886–1902, 2012. View at Publisher · View at Google Scholar · View at Scopus
  44. P. K. Roy, C. Paul, and S. Sultana, “Oppositional teaching learning based optimization approach for combined heat and power dispatch,” International Journal of Electrical Power & Energy Systems, vol. 57, pp. 392–403, 2014. View at Publisher · View at Google Scholar · View at Scopus
  45. Q. Xu, L. Wang, N. Wang, X. Hei, and L. Zhao, “A review of opposition-based learning from 2005 to 2012,” Engineering Applications of Artificial Intelligence, vol. 29, pp. 1–12, 2014. View at Publisher · View at Google Scholar · View at Scopus
  46. X. Yao, Y. Liu, and G. Lin, “Evolutionary programming made faster,” IEEE Transactions on Evolutionary Computation, vol. 3, no. 2, pp. 82–102, 1999. View at Publisher · View at Google Scholar · View at Scopus
  47. D. Karaboga and B. Basturk, “A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm,” Journal of Global Optimization, vol. 39, no. 3, pp. 459–471, 2007. View at Publisher · View at Google Scholar · View at MathSciNet · View at Scopus
  48. D. Karaboga and B. Akay, “A comparative study of Artificial Bee Colony algorithm,” Applied Mathematics and Computation, vol. 214, no. 1, pp. 108–132, 2009. View at Publisher · View at Google Scholar · View at MathSciNet · View at Scopus