Computational Intelligence and Neuroscience
Volume 2016, Article ID 8341275, 12 pages
http://dx.doi.org/10.1155/2016/8341275
Research Article

Chaotic Teaching-Learning-Based Optimization with Lévy Flight for Global Numerical Optimization

1State Key Laboratory of Digital Manufacturing Equipment and Technology, Huazhong University of Science and Technology, Wuhan 430074, China
2College of Electronics and Information Engineering, South-Central University for Nationalities, Wuhan 430074, China

Received 6 August 2015; Revised 26 December 2015; Accepted 30 December 2015

Academic Editor: Leonardo Franco

Copyright © 2016 Xiangzhu He et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Recently, teaching-learning-based optimization (TLBO), as one of the emerging nature-inspired heuristic algorithms, has attracted increasing attention. In order to enhance its convergence rate and prevent it from getting stuck in local optima, a novel metaheuristic has been developed in this paper, where particular characteristics of the chaos mechanism and Lévy flight are introduced to the basic framework of TLBO. The new algorithm is tested on several large-scale nonlinear benchmark functions with different characteristics and compared with other methods. Experimental results show that the proposed algorithm outperforms other algorithms and achieves a satisfactory improvement over TLBO.

1. Introduction

Optimization problems are often associated with difficult characteristics such as multimodality, high dimensionality, and nondifferentiability [1]. Traditional methods like linear programming and dynamic programming generally fail on such problems, especially when the objective functions are nonlinear, as most traditional techniques require gradient information and easily converge to local optima. Moreover, these classical search approaches depend heavily on the particular variables and functions, which prevents them from yielding a generalized and flexible solution scheme, especially for large-scale nonlinear optimization [2]. Under these circumstances, swarm intelligence, which deals with the collective behavior of swarms arising from the complex interaction of individuals without supervision, has become a hot research area [3]. The inherent strengths of swarm optimization techniques, including fault tolerance, adaptation, speed, autonomy, and parallelism [4], allow them to be applied more effectively and widely than previous algorithms [5].

Several well-known swarm algorithms have been proposed in recent years. For example, ant colony optimization (ACO) is based on the metaphor of ants seeking food [6]. Particle swarm optimization (PSO) models the foraging behavior of a biological social system such as a flock of birds [7]. The artificial bee colony (ABC) algorithm simulates the intelligent foraging behavior of honeybees [8]. These algorithms have been applied to many engineering optimization problems and have proved effective in solving specific kinds of problems.

The teaching-learning-based optimization (TLBO) algorithm is a teaching-learning-inspired algorithm proposed by Rao et al., based on the influence of a teacher on the output of learners in a class [9, 10]. TLBO is free from algorithm-specific parameters and has been compared with other well-known optimization algorithms such as PSO [11]; the results show better performance of TLBO over the other methods. Applications of this algorithm have also been widely tested in different optimization fields. For example, Toĝan [12] employed TLBO in the discrete optimization of planar steel frames and found that TLBO is a more powerful optimization method than algorithms such as the Genetic Algorithm (GA), ACO, and Harmony Search (HS). Amiri [13] similarly applied TLBO to clustering problems and verified the robustness and flexibility of the method. However, simulation results from Huang et al. showed that TLBO could not obtain satisfactory results on several difficult benchmark problems with complex landscapes and was prone to becoming trapped in locally optimal solutions [14]. To overcome this barrier, Rao and Patel modified many aspects of the basic TLBO, such as incorporating an elitism strategy and using adaptive teaching factors and multiteacher approaches, to improve its performance [15]. Based on some insight into the structure of TLBO, we also found that it lacks diversification, because it only uses the mean value of the population and searches between two randomly chosen individual solutions during the search iterations.

Chaos is a universal phenomenon of nonlinear dynamic systems, which has been extensively studied since Lorenz [16] discovered the well-known chaotic attractor in 1963. Chaos is a bounded unstable dynamic behavior that exhibits sensitive dependence on initial conditions and includes infinitely many unstable periodic motions. Although it appears to be stochastic, it occurs in a deterministic nonlinear system under deterministic conditions [17]. Due to these properties, chaos has been applied to many areas of optimization [18, 19]. Zuo and Fan [20] proposed a chaos search immune algorithm and applied it to neurofuzzy controller design. Alatas et al. used chaotic search to improve the performance of PSO algorithms [21] and proposed chaotic bee colony algorithms [22]. Chuang et al. [23] proposed a chaotic catfish PSO.

Lévy flight is another technique for speeding up the convergence rate of the algorithm and escaping from local optima [24]. As a typical flight behavior of many animals and insects, Lévy flight was originally researched by Lévy and Borel in 1954 [25] and has been subsequently used for nonlocal searches in many optimization problems due to its promising capability [26, 27]. Since the step length of the random walk produced by Lévy flight is drawn from a power-law distribution with a heavy tail, namely, Lévy distribution, part of the new population is generated near the current best solution, and therefore this technique can speed up the local search. Further, most of the new solutions are produced far from the current best solution, which prevents the algorithm from becoming trapped in local optima.

An efficient optimization algorithm has both strong exploration ability and a fast exploitation rate; moreover, it can be adapted to tackle a broad range of problems [28]. In order to reinforce the performance of TLBO and broaden the diversification of the algorithm, a chaotic system and a Lévy flight mechanism are introduced into TLBO. The basic idea of the proposed algorithm is as follows. First, the population of TLBO is divided into two parts according to the fitness of the solutions. A Lévy flight is then performed on the worse part, while the original teaching-learning search mechanism is used for the better part. Second, a chaotic search is implemented on a randomly chosen part of the population for the sake of diversity. Numerical experiments demonstrate the effectiveness of the proposed algorithm.

This paper is organized as follows. In Section 2, the basic TLBO is introduced. Then the proposed chaotic TLBO with Lévy flight is presented in Section 3. In Section 4, some experiments are performed and the numerical results are shown. Finally, the conclusion of the paper is presented in Section 5.

2. Teaching-Learning-Based Optimization

TLBO is a recently published population-based method which mimics the classic teaching-learning phenomenon in a classroom. In this algorithm a group of learners is considered as the population, the different design variables are considered as the different subjects offered to the learners, and a learner's result is analogous to the fitness value of the optimization problem. The best solution in the entire population is considered as the teacher. The main procedure of TLBO consists of two phases, the teacher phase and the learner phase, which are explained in the following parts.

2.1. Teacher Phase

This is the first stage of the algorithm, where learners learn from the teacher. During this phase a teacher tries to raise the mean result of the whole class towards his or her own level (the new mean). The difference between the existing mean and the new mean is given as

Difference_Mean_i = r_i (M_new - T_F · M_i),  (1)

where M_i is the mean of each design variable and M_new is the new mean for the ith iteration. Two randomly generated parameters appear in the equation: r_i, which ranges between 0 and 1, and the teaching factor T_F, which can be either 1 or 2 and thus influences how strongly the mean is changed. In the algorithm, T_F plays the role of an adjusting factor, controlling the moving direction and scale when updating solutions. The value of T_F is decided randomly with equal probability as

T_F = round[1 + rand(0, 1){2 - 1}].  (2)

Based on this Difference_Mean, the existing solution is updated according to the following expression:

X_new,i = X_old,i + Difference_Mean_i.  (3)
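For concreteness, the teacher phase of (1)-(3), together with the greedy acceptance used by TLBO, can be sketched as follows. This is a minimal illustrative sketch with our own variable names, not the authors' implementation:

```python
import numpy as np

def teacher_phase(pop, fitness, objective):
    """One teacher-phase step of TLBO; a minimal sketch, not the authors' code."""
    n, d = pop.shape
    mean = pop.mean(axis=0)                    # M: columnwise mean of each design variable
    teacher = pop[np.argmin(fitness)].copy()   # best solution acts as the teacher (M_new)
    new_pop = pop.copy()
    new_fit = fitness.copy()
    for i in range(n):
        r = np.random.rand(d)                  # r_i in [0, 1]
        tf = np.random.randint(1, 3)           # teaching factor T_F is 1 or 2, equal probability
        diff_mean = r * (teacher - tf * mean)  # Difference_Mean, Eq. (1)
        candidate = pop[i] + diff_mean         # update, Eq. (3)
        f = objective(candidate)
        if f < new_fit[i]:                     # greedy acceptance of improvements only
            new_pop[i], new_fit[i] = candidate, f
    return new_pop, new_fit
```

Note that the greedy acceptance means the fitness of each learner can only improve or stay the same in this phase.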

2.2. Learner Phase

This is the second part of the algorithm, where learners increase their knowledge through interaction among themselves. A learner interacts randomly with another learner to enhance his or her knowledge, and learns new things if the other learner has more knowledge. Mathematically, the learning phenomenon of this phase is expressed below.

At any iteration i, consider two different learners (solutions) X_i and X_j, where i ≠ j:

X_new,i = X_old,i + r_i (X_i - X_j)  if f(X_i) < f(X_j),  (4)
X_new,i = X_old,i + r_i (X_j - X_i)  if f(X_j) < f(X_i).  (5)

X_new,i is accepted into the population if it gives a better function value.
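The learner phase of (4) and (5), again with greedy acceptance, can be sketched in the same style (illustrative only, with our own variable names):

```python
import numpy as np

def learner_phase(pop, fitness, objective):
    """One learner-phase step of TLBO; a minimal sketch, not the authors' code."""
    n, d = pop.shape
    new_pop = pop.copy()
    new_fit = fitness.copy()
    for i in range(n):
        j = np.random.choice([k for k in range(n) if k != i])  # random partner, j != i
        r = np.random.rand(d)
        if fitness[i] < fitness[j]:            # X_i knows more: move away from X_j, Eq. (4)
            candidate = pop[i] + r * (pop[i] - pop[j])
        else:                                  # learn from the better learner X_j, Eq. (5)
            candidate = pop[i] + r * (pop[j] - pop[i])
        f = objective(candidate)
        if f < new_fit[i]:                     # accept only if it improves
            new_pop[i], new_fit[i] = candidate, f
    return new_pop, new_fit
```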

The steps for implementing TLBO are as follows.

Step 1 (define the optimization problem and initialize algorithm parameters). Initialize the population size (Pn), the number of design variables (Dn), and the number of generations (Gn). Define the optimization problem as follows: minimize f(X), where f(X) is the objective function and X is a vector of design variables. Construct the initial solutions according to Pn and Dn.

Step 2 (calculate M and M_new). Calculate the mean of the population columnwise, which gives the mean M_i of each design variable. Identify the best solution (teacher) X_teacher according to f(X); since the teacher tries to move the class mean towards its own level, let M_new = X_teacher.

Step 3. Calculate the Difference_Mean according to (1) by utilizing the teaching factor T_F.

Step 4. Modify the solutions in the teacher phase based on (3) and accept the new solution if it is better than the existing one.

Step 5. Update the solution in the learner phase according to (4) and (5) and accept the better one into the population.

Step 6. Repeat Steps 2 to 5 until the termination criterion is met.
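Putting Steps 1-6 together, the whole TLBO loop on a box-constrained problem might look as follows. The parameter names, the clipping to the variable bounds, and the partner-selection details are our assumptions for a self-contained sketch, not the paper's exact procedure:

```python
import numpy as np

def tlbo(objective, lb, ub, pop_size=20, generations=100, seed=None):
    """Compact sketch of basic TLBO (Steps 1-6); illustrative, not the authors' code."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    d = len(lb)
    pop = rng.uniform(lb, ub, (pop_size, d))        # Step 1: random initial learners
    fit = np.array([objective(x) for x in pop])
    for _ in range(generations):                    # Step 6: iterate until budget is used
        mean = pop.mean(axis=0)                     # Step 2: columnwise mean and teacher
        teacher = pop[np.argmin(fit)].copy()
        for i in range(pop_size):
            # Steps 3-4: teacher phase with greedy acceptance
            tf = rng.integers(1, 3)                 # T_F is 1 or 2
            cand = np.clip(pop[i] + rng.random(d) * (teacher - tf * mean), lb, ub)
            f = objective(cand)
            if f < fit[i]:
                pop[i], fit[i] = cand, f
            # Step 5: learner phase with a random partner j != i
            j = rng.integers(pop_size - 1)
            j += (j >= i)
            step = pop[i] - pop[j] if fit[i] < fit[j] else pop[j] - pop[i]
            cand = np.clip(pop[i] + rng.random(d) * step, lb, ub)
            f = objective(cand)
            if f < fit[i]:
                pop[i], fit[i] = cand, f
    best = np.argmin(fit)
    return pop[best], fit[best]
```

On a simple unimodal function such as the Sphere function, this loop converges quickly towards the origin, matching the behavior reported for TLBO in the experiments below.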

3. Chaotic Teaching-Learning-Based Optimization with Lévy Flight

An effective optimization algorithm must have a strong global searching ability along with a fast convergence rate. TLBO is free from specific algorithm parameters and outperforms PSO, HS, and so on due to its simplicity and efficiency. However, several hard benchmarks with complicated landscapes pose challenges to TLBO in finding a satisfactory result and escaping from local optima.

In order to enhance the performance of TLBO and take advantage of the properties of the chaotic system and Lévy flight, we integrate the chaotic search mechanism and Lévy flight into TLBO to improve its search efficiency. Hence, a chaotic TLBO with Lévy flight (CTLBO) is proposed in this paper. In the algorithm, the population is divided into two parts: the part with better fitness is evolved by the teaching-learning process of TLBO, while the other part is evolved by a Lévy flight. Then a chaotic perturbation is applied to a randomly selected part of the population to improve its diversity. The main steps of CTLBO are elaborated in the next sections.

3.1. Lévy Flight

Lévy flights, also called Lévy motion, represent a kind of non-Gaussian stochastic process whose step sizes are distributed based on a Lévy stable distribution [25].

When generating a new solution X^(t+1) for solution X^(t), a Lévy flight is performed:

X^(t+1) = X^(t) + α ⊕ Lévy(λ),  (6)

where α > 0 is the step size, which is relevant to the scales of the problem. In most conditions we let α = 1. The product ⊕ means entrywise multiplication [24]. Lévy flights essentially provide a random walk whose random steps are drawn from a Lévy distribution for large steps:

Lévy ∼ u = t^(-λ),  1 < λ ≤ 3,  (7)

which has an infinite variance with an infinite mean. Here the consecutive steps of a solution essentially form a random walk process which obeys a power-law step-length distribution with a heavy tail.

There are a few ways to implement Lévy flights; the method chosen in this paper is one of the simplest and most efficient, based on Mantegna's algorithm; the equations are detailed in [29].
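As a reference point, Mantegna's algorithm draws a step s = u / |v|^(1/β) with u ~ N(0, σ_u²) and v ~ N(0, 1), where σ_u is a function of β. The sketch below uses the common setting β = 1.5, which is our assumption rather than a value stated in this paper:

```python
import math
import numpy as np

def levy_step(d, beta=1.5, rng=None):
    """Draw one d-dimensional Levy-distributed step via Mantegna's algorithm.
    beta = 1.5 is an assumed common default, not the paper's stated value."""
    if rng is None:
        rng = np.random.default_rng()
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
               (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, d)       # numerator ~ N(0, sigma_u^2)
    v = rng.normal(0.0, 1.0, d)           # denominator ~ N(0, 1)
    return u / np.abs(v) ** (1 / beta)    # heavy-tailed step lengths

# A new solution is then obtained from a solution x with step size alpha = 1:
#   x_new = x + levy_step(len(x))
```

Most steps drawn this way are small, but occasional very large steps occur, which is exactly the heavy-tailed behavior described above.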

3.2. Chaotic Search

Chaos is a deterministic, quasi-random process that is sensitive to initial conditions [30]. The behavior of chaos appears random and unpredictable. Mathematically, however, chaos is the randomness of a simple deterministic dynamical system, and chaotic systems can be considered as sources of randomness.

A chaotic map is a discrete-time dynamical system running in a chaotic state [22]:

cx_(k+1) = f(cx_k),  0 < cx_k < 1,  k = 0, 1, 2, ...,  (8)

where cx_k is the chaotic sequence, which can be utilized as a spread-spectrum sequence or as a random number sequence.

Chaotic sequences have proved simple and fast to produce and store; it is unnecessary to save long sequences [31]. Only a few functions (chaotic maps) and parameters (initial conditions) are needed even for very long sequences [22].

In this paper, chaotic variables are generated by the following logistic mapping:

cx_(k+1) = μ cx_k (1 - cx_k),  μ = 4,  (9)

where k is the serial number of the chaotic variable and cx_k ∈ (0, 1), with the initial value cx_0 ∉ {0, 0.25, 0.5, 0.75, 1}. Given different initial values cx_0, the values of the chaotic variables cx_k are produced by the logistic equation and then mapped into the ranges of the corresponding decision variables; the other solutions are produced by the same method.
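A direct implementation of this logistic mapping is only a few lines (an illustrative sketch):

```python
def logistic_sequence(cx0, n, mu=4.0):
    """Generate n chaotic variables from the logistic map cx_{k+1} = mu*cx_k*(1-cx_k).
    cx0 must lie in (0, 1) and avoid {0.25, 0.5, 0.75}, which collapse the orbit
    onto fixed points or zero."""
    seq = []
    cx = cx0
    for _ in range(n):
        cx = mu * cx * (1.0 - cx)
        seq.append(cx)
    return seq
```

With mu = 4 the map is fully chaotic: the sequence stays in (0, 1], never settles, and small changes in cx0 produce completely different orbits, which is what makes it useful as a cheap pseudo-random perturbation source.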

3.3. Proposed Methods

By introducing the Lévy flight and the chaotic search into the TLBO, a new algorithm is proposed in this paper. The pseudocode of the proposed CTLBO is shown in Pseudocode 1.

Pseudocode 1: Pseudocode of CTLBO.
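As described above, CTLBO combines the fitness-based population split, the Lévy flight on the worse part, and the chaotic perturbation. A rough end-to-end sketch of that loop is given below; the half-and-half split, the size of the chaotically perturbed subset, and the way chaotic variables are mapped into the search range are our assumptions for illustration, not the authors' exact procedure:

```python
import math
import numpy as np

def ctlbo(objective, lb, ub, pop_size=20, generations=100, beta=1.5, seed=None):
    """Rough sketch of the CTLBO loop of Section 3: teaching-learning on the fitter
    half, Levy flight on the worse half, then chaotic perturbation of a random
    subset. Our reading of the text, not the authors' exact pseudocode."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    d = len(lb)
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
               (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    pop = rng.uniform(lb, ub, (pop_size, d))
    fit = np.array([objective(x) for x in pop])

    def accept(i, cand):
        cand = np.clip(cand, lb, ub)
        f = objective(cand)
        if f < fit[i]:
            pop[i], fit[i] = cand, f

    for _ in range(generations):
        order = np.argsort(fit)
        better, worse = order[:pop_size // 2], order[pop_size // 2:]
        mean = pop.mean(axis=0)
        teacher = pop[order[0]].copy()
        for i in better:                        # teaching-learning on the fitter half
            tf = rng.integers(1, 3)
            accept(i, pop[i] + rng.random(d) * (teacher - tf * mean))
            j = rng.choice([k for k in range(pop_size) if k != i])
            step = pop[i] - pop[j] if fit[i] < fit[j] else pop[j] - pop[i]
            accept(i, pop[i] + rng.random(d) * step)
        for i in worse:                         # Levy flight on the worse half, alpha = 1
            u = rng.normal(0.0, sigma_u, d)
            v = rng.normal(0.0, 1.0, d)
            accept(i, pop[i] + u / np.abs(v) ** (1 / beta))
        cx = rng.uniform(0.01, 0.99)            # chaotic perturbation of a random subset
        for i in rng.choice(pop_size, size=max(1, pop_size // 5), replace=False):
            cx = 4.0 * cx * (1.0 - cx)          # logistic map, mu = 4
            accept(i, lb + cx * (ub - lb))      # map chaotic variable into the bounds
    best = np.argmin(fit)
    return pop[best], fit[best]
```

Because every candidate passes through the same greedy acceptance, the three mechanisms can only improve or preserve each learner's fitness in every generation.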

4. Experimental Analysis and Numerical Results

In order to verify the performance of the proposed CTLBO and to analyze its properties, two sets of optimization problems are selected for the test experiments. In each set of problems, several well-known functions are used as benchmark problems to study the search behavior of the proposed CTLBO and to compare its performance with those of other algorithms.

4.1. Experiment 1

Firstly, to demonstrate the performance of the proposed algorithm, eight benchmark optimization problems [32] are selected as test functions. These eight benchmark functions were tested earlier with TLBO and improved TLBO by Rao and Patel [15]. The details of the benchmark functions are given in Table 1.

Table 1: Benchmark functions considered in Experiment 1.

In [15], Rao and Patel tested all functions with 30000 maximum function evaluations. To maintain the consistency in the comparison, the CTLBO algorithm is also tested with the same maximum function evaluations. Each benchmark function undergoes 30 independent tests with CTLBO. The comparative results are in the form of the mean value and standard deviation of the objective function obtained after 30 independent runs, which are shown in Table 2.

Table 2: Comparative results of different algorithms over 30 independent runs.

It can be seen from Table 2 that the CTLBO achieved the global optimum value for the Sphere, Griewank, Weierstrass, Rastrigin, and NCRastrigin functions, and the CTLBO and I-TLBO algorithms perform equally well for these functions. For the Rosenbrock function, CTLBO performs better than the rest of the algorithms. For the Ackley function, the modified ABC algorithm performs better than the rest of the considered algorithms.

It can also be seen from Table 2 that the result of CTLBO for the Schwefel function is not as good as those for the other functions. This is mainly because, in the teacher phase of TLBO, a mean value is used to update the solutions, and this mechanism may lead the solutions towards the centre of the search region; when the global minimum is not located at the centre of the feasible region, TLBO usually fails to find the global solution of the tested function. This is in fact one of the limitations of TLBO, and in CTLBO part of the population inherits this mechanism. Hence this is a potential weakness of the proposed algorithm.

In order to observe the performance of CTLBO visually, the convergence curves of six functions are drawn as shown in Figure 1. To compare CTLBO with other algorithms, we select the convergence curves of PSO- and TLBO to observe their convergence properties.

Figure 1: Convergence curve of six functions.

The Rosenbrock function is often used to test the performance of optimization algorithms. Its global optimum lies inside a long, narrow, parabolic-shaped flat valley and is very difficult to find. It can be seen from Figure 1(a) that the PSO- and TLBO converge with low optimization accuracy, while the CTLBO has good searching ability for the Rosenbrock function.

The Ackley function is a continuous, rotated, nonseparable multimodal function. The exterior region of the function is nearly flat while there is a large hole at the centre, and it has many widespread local optima between the flat region and the centre. From Figure 1(b) we can see that both TLBO and CTLBO perform better than the PSO- algorithm. In addition, the convergence rate and accuracy of CTLBO are better than those of TLBO.

It can be observed from Figures 1(c)-1(e) that, for the Griewank, Weierstrass, and Rastrigin functions, TLBO and CTLBO achieve the global optimum in very few iterations, while the PSO- reaches only a local optimum. Also, the convergence rate of CTLBO is faster than that of TLBO.

From Figure 1(f), we can see that Schwefel’s function is difficult for all three algorithms, which converge with low optimization accuracy.

From the results and analysis, we can see that TLBO has good searching ability for most functions and that CTLBO improves on its performance: both the convergence rate and the accuracy of CTLBO are better than those of TLBO. In order to test the proposed algorithm more comprehensively, further test functions are introduced in the next section.

4.2. Experiment 2

In this experiment, the performance of the proposed CTLBO algorithm is compared with those of the recently developed PS-ABC [33], TLBO, and I-TLBO. In this part of the work, CTLBO is tested on 13 unconstrained benchmark functions. These functions have no fixed number of dimensions. In other words, the dimension of the problems can be set at will [34]. In this case, we can test the performance of algorithms for high-dimensional problems. The characteristics of these functions are described in Table 3.

Table 3: Benchmark functions considered in Experiment 2.

This experiment is conducted from small-scale to large-scale by considering 20, 30, and 50 dimensions for all the benchmark functions. The number of function evaluations is set as 120000 for all tested algorithms. Each benchmark function is tested 30 times and the results are obtained in the form of the mean solution and the standard deviation of the objective function after 30 independent runs of the algorithms.

Table 4 shows the comparative results of PS-ABC, TLBO, I-TLBO, and CTLBO algorithms for the 13 functions with 120000 maximum function evaluations.

Table 4: Comparative results of different algorithms over 30 independent runs.

It can be observed from Table 4 that I-TLBO outperforms the basic TLBO and PS-ABC algorithms for the Quartic, Penalized, and Penalized 2 functions (for all the dimensions) and the Rosenbrock function (for 20 dimensions). PS-ABC outperforms TLBO and I-TLBO for the Rosenbrock (for 30 and 50 dimensions) and Schwefel functions. For the Schwefel 1.2 function, the performances of TLBO and I-TLBO are identical and better than that of the PS-ABC algorithm. The performances of PS-ABC and I-TLBO are identical for the Rastrigin function, while the performances of all three algorithms are identical for the Sphere, Schwefel 2.22, and Griewank functions. For the Ackley function, the performances of PS-ABC and CTLBO are more or less similar.

In order to observe the performance of CTLBO visually, the convergence curves of algorithms for several functions in 20 dimensions are drawn as shown in Figure 2.

Figure 2: Convergence curve of four functions.

From Figure 2(a) we can see that, for the Rosenbrock function with 20 dimensions, PSO- and TLBO become trapped in local optima and the search accuracy is very low, while the capability of CTLBO is good and its convergence accuracy is high.

The Quartic function is unimodal with random noise. Noisy functions are widespread in real-world problems: every evaluation of the function is disturbed by noise, so the algorithm's information is inherited and diffused noisily, which makes the problem hard to optimize. It can be observed from Figure 2(b) that CTLBO performs better than the other two algorithms.

From Figures 2(c) and 2(d), we can observe that CTLBO has a better searching ability for the Penalized and Penalized 2 functions. CTLBO has better results for these two functions, while TLBO and PSO- are trapped in local optima.

From the above analysis, we can see that TLBO has good searching ability for most of the functions, and CTLBO improves on its performance. The convergence rate and accuracy of CTLBO are better than those of TLBO, which shows that the proposed chaotic mechanism is effective and provides a genuine improvement on TLBO.

4.3. Discussion

This paper has formulated a novel TLBO algorithm by combining it with chaotic search and Lévy flight. At first glance, CTLBO resembles other population-based approaches such as GA and PSO in many respects: they are all population-based algorithms whose initial populations are randomly produced, and, similarly to other evolution strategies, CTLBO has special mutation operators, namely the teacher phase and the learner phase. However, despite these similarities, there are significant differences that help CTLBO outperform other techniques on a number of problems.

It can be seen from the framework of the CTLBO that the population is first divided into two parts in each iteration, then these two subparts evolve with the Lévy flight and teaching-learning mechanisms, respectively, and then the population is perturbed by using chaotic searching. This can be viewed as a kind of coevolution to some extent; that is, two independent subpopulations evolve interactively and, due to this process, not only are the decision solutions diversely exploited but also the convergence rate of the algorithm is accelerated.

Taking a closer look at CTLBO, we conclude that it essentially consists of three components: exploitation by the mutation operators, global exploration by Lévy flight, and diversification by chaos mapping. The mutation operators, namely the teacher phase and the learner phase, ensure exploitation around the best solution obtained so far. The Lévy flight moves the search away from the worst region with large steps and, at the same time, samples the search space effectively so that the new solutions are thoroughly diversified. Chaos mapping disturbs the solutions so as to maintain population diversity and avoid falling into local optima. In general, a good integration of these three components leads to an efficient algorithm such as CTLBO.

Furthermore, from simulation studies in which the controlling parameters of CTLBO were varied, we observed that the convergence rate is insensitive to the algorithm parameters. This feature is mainly inherited from the chaotic search: the randomness with nonzero probability ensures that some of the solutions are discarded and replaced by new ones, similar in spirit to the probabilistic acceptance of worse solutions in the annealing process. From this perspective, there is no need to fine-tune the parameters of the proposed CTLBO for a specific problem.

Moreover, it can be seen that the proposed method may not find the global minima of a few specific functions. This has its root in the mechanism of the teacher phase in TLBO, where a mean value is used to update the solutions, which may lead solutions directly to the centre of a search region. For those functions whose global minima are not located in the centre of the feasible solution region, it is usually challenging to find the global optima of the tested functions. However, in the CTLBO, the divided population weakens the effect of the “mean mechanism.”

Finally, in this paper only the logistic map has been embedded to diversify the population of the CTLBO algorithm; other chaotic maps will be analyzed in future work.

5. Conclusion

This paper proposes chaotic teaching-learning-based optimization with Lévy flight (CTLBO). The algorithm is improved via a Lévy walk and perturbed by chaotic searching, which can enhance the diversification of the algorithm. The experimental results demonstrate that the designed algorithm has better performance than other methods. In addition, the properties of the proposed algorithm are analyzed and the characteristics and features are discussed in the paper.

Future work will apply this novel method to a wider spectrum of problems, such as constrained optimization problems and real-world engineering applications. Moreover, the parallel implementation of CTLBO and its application to multiobjective optimization as well as combinatorial optimization problems will also be studied.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This research is supported by the National Basic Research Program of China (973 Program) under Grant nos. 2011CB706804 and 2014CB046705 and by the National Natural Science Foundation of China under Grant no. 51121002.

References

  1. J. S. Arora, O. A. Elwakeil, A. I. Chahande, and C. C. Hsieh, “Global optimization methods for engineering applications: a review,” Structural Optimization, vol. 9, no. 3-4, pp. 137–159, 1995.
  2. B. Alatas, “Chaotic harmony search algorithms,” Applied Mathematics and Computation, vol. 216, no. 9, pp. 2687–2699, 2010.
  3. E. Bonabeau, M. Dorigo, and G. Theraulaz, Swarm Intelligence: From Natural to Artificial Systems, Oxford University Press, Oxford, UK, 1999.
  4. I. Kassabalidis, M. A. El-Sharkawi, R. J. Marks II, P. Arabshahi, and A. A. Gray, “Swarm intelligence for routing in communication networks,” in Proceedings of the IEEE Global Telecommunications Conference (GLOBECOM '01), vol. 6, pp. 3613–3617, San Antonio, Tex, USA, November 2001.
  5. A. Osyczka, Evolutionary Algorithms for Single and Multicriteria Design Optimization, Springer, Berlin, Germany, 2002.
  6. M. Dorigo, Optimization, learning and natural algorithms [Ph.D. thesis], Politecnico di Milano, Milan, Italy, 1992.
  7. R. Eberhart and J. Kennedy, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, Perth, Australia, 1995.
  8. D. Karaboga, “An idea based on honey bee swarm for numerical optimization,” Tech. Rep. TR06, Erciyes University, Engineering Faculty, Computer Engineering Department, 2005.
  9. R. V. Rao, V. J. Savsani, and D. P. Vakharia, “Teaching-learning-based optimization: a novel method for constrained mechanical design optimization problems,” Computer-Aided Design, vol. 43, no. 3, pp. 303–315, 2011.
  10. R. V. Rao, V. J. Savsani, and D. P. Vakharia, “Teaching-learning-based optimization: an optimization method for continuous non-linear large scale problems,” Information Sciences, vol. 183, pp. 1–15, 2012.
  11. R. V. Rao and V. Patel, “An elitist teaching-learning-based optimization algorithm for solving complex constrained optimization problems,” International Journal of Industrial Engineering Computations, vol. 3, no. 4, pp. 535–560, 2012.
  12. V. Toĝan, “Design of planar steel frames using teaching-learning based optimization,” Engineering Structures, vol. 34, pp. 225–232, 2012.
  13. B. Amiri, “Application of teaching-learning-based optimization algorithm on cluster analysis,” Journal of Basic and Applied Scientific Research, vol. 2, no. 11, pp. 11795–11802, 2012.
  14. J. Huang, X. Li, and L. Gao, “A new hybrid algorithm for unconstrained optimisation problems,” International Journal of Computer Applications in Technology, vol. 46, no. 3, pp. 187–194, 2013.
  15. R. V. Rao and V. Patel, “An improved teaching-learning-based optimization algorithm for solving unconstrained optimization problems,” Scientia Iranica, vol. 20, no. 3, pp. 710–720, 2013.
  16. E. N. Lorenz, “Deterministic nonperiodic flow,” Journal of the Atmospheric Sciences, vol. 20, no. 2, pp. 130–141, 1963.
  17. G. Chen and X. Dong, From Chaos to Order: Methodologies, Perspectives and Applications, World Scientific, Singapore, 1998.
  18. C.-S. Zhou and T.-L. Chen, “Chaotic annealing for optimization,” Physical Review E, vol. 55, no. 3, pp. 2580–2587, 1997.
  19. M. Javidi and R. HosseinpourFard, “Chaos genetic algorithm instead genetic algorithm,” International Arab Journal of Information Technology, vol. 12, no. 2, pp. 163–168, 2015.
  20. X. Q. Zuo and Y. S. Fan, “A chaos search immune algorithm with its application to neuro-fuzzy controller design,” Chaos, Solitons and Fractals, vol. 30, no. 1, pp. 94–109, 2006.
  21. B. Alatas, E. Akin, and A. B. Ozer, “Chaos embedded particle swarm optimization algorithms,” Chaos, Solitons & Fractals, vol. 40, no. 4, pp. 1715–1734, 2009.
  22. B. Alatas, “Chaotic bee colony algorithms for global numerical optimization,” Expert Systems with Applications, vol. 37, no. 8, pp. 5682–5687, 2010.
  23. L.-Y. Chuang, S.-W. Tsai, and C.-H. Yang, “Chaotic catfish particle swarm optimization for solving global numerical optimization problems,” Applied Mathematics and Computation, vol. 217, no. 16, pp. 6900–6916, 2011.
  24. X.-S. Yang and S. Deb, “Cuckoo search via Lévy flights,” in Proceedings of the World Congress on Nature and Biologically Inspired Computing (NaBIC '09), pp. 210–214, Coimbatore, India, December 2009.
  25. P. Lévy and M. É. Borel, Théorie de l'Addition des Variables Aléatoires, vol. 1, Gauthier-Villars, Paris, France, 1954.
  26. M. F. Shlesinger, G. M. Zaslavsky, and U. Frisch, Lévy Flights and Related Topics in Physics, Springer, Berlin, Germany, 1995.
  27. I. Pavlyukevich, “Lévy flights, non-local search and simulated annealing,” Journal of Computational Physics, vol. 226, no. 2, pp. 1830–1844, 2007.
  28. L. Gao, J. Huang, and X. Li, “An effective cellular particle swarm optimization for parameters optimization of a multi-pass milling process,” Applied Soft Computing, vol. 12, no. 11, pp. 3490–3499, 2012.
  29. X.-S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, 2010.
  30. H. G. Schuster and W. Just, Deterministic Chaos: An Introduction, John Wiley & Sons, New York, NY, USA, 2006.
  31. G. Heidari-Bateni and C. D. McGillem, “A chaotic direct-sequence spread-spectrum communication system,” IEEE Transactions on Communications, vol. 42, no. 2, pp. 1524–1527, 1994.
  32. B. Akay and D. Karaboga, “A modified artificial bee colony algorithm for real-parameter optimization,” Information Sciences, vol. 192, pp. 120–142, 2012.
  33. G. Li, P. Niu, and X. Xiao, “Development and investigation of efficient artificial bee colony algorithm for numerical function optimization,” Applied Soft Computing, vol. 12, no. 1, pp. 320–332, 2012.
  34. Y. Shi, H. Liu, L. Gao, and G. Zhang, “Cellular particle swarm optimization,” Information Sciences, vol. 181, no. 20, pp. 4460–4493, 2011.