Computational Intelligence and Neuroscience

Research Article | Open Access

Volume 2016 | Article ID 8341275 | 12 pages | https://doi.org/10.1155/2016/8341275

Chaotic Teaching-Learning-Based Optimization with Lévy Flight for Global Numerical Optimization

Academic Editor: Leonardo Franco
Received: 06 Aug 2015
Revised: 26 Dec 2015
Accepted: 30 Dec 2015
Published: 31 Jan 2016

Abstract

Recently, teaching-learning-based optimization (TLBO), as one of the emerging nature-inspired heuristic algorithms, has attracted increasing attention. In order to enhance its convergence rate and prevent it from getting stuck in local optima, a novel metaheuristic has been developed in this paper, where particular characteristics of the chaos mechanism and Lévy flight are introduced to the basic framework of TLBO. The new algorithm is tested on several large-scale nonlinear benchmark functions with different characteristics and compared with other methods. Experimental results show that the proposed algorithm outperforms other algorithms and achieves a satisfactory improvement over TLBO.

1. Introduction

Optimization problems are always associated with many kinds of difficult characteristics involving multimodality, dimensionality, and differentiability [1]. Traditional methods like linear programming and dynamic programming generally fail to optimize such problems especially when these problems have nonlinear objective functions, as most of these traditional techniques require gradient information and easily converge to local optima. Moreover, those classical search approaches depend heavily on variables and functions, which prevents them from yielding a generalized and flexible solution scheme, especially for large-scale and nonlinear optimization [2]. Under this circumstance, swarm intelligence, which deals with the collective behavior of swarms through complex interaction of individuals without supervision, has become a hot research area [3]. The inherent strengths of the swarm optimization techniques, including fault tolerance, adaptation, speed, autonomy, and parallelism [4], allow them to be applied more effectively and widely compared with the previous algorithms [5].

Several well-known swarm algorithms have been proposed in recent years. For example, ant colony optimization (ACO) is based on the metaphor of ants seeking food [6]. Particle swarm optimization (PSO) mimics the foraging behavior of a biological social system such as a flock of birds [7]. Artificial bee colony (ABC) simulates the intelligent foraging behavior of honeybees [8]. These algorithms have been applied to many engineering optimization problems and have proved effective in solving specific kinds of problems.

Teaching-learning-based optimization (TLBO) is a teaching-learning inspired algorithm proposed by Rao et al., which is based on the influence of a teacher on the output of learners in a class [9, 10]. TLBO is free from algorithm-specific parameters and has been compared with other well-known optimization algorithms such as PSO [11]; the results show better performance of TLBO over the other methods. Applications of this algorithm have also been widely tested in different optimization fields. For example, Toğan [12] employed the TLBO algorithm in the discrete optimization of planar steel frames and found that TLBO is a more powerful optimization method than algorithms such as the Genetic Algorithm (GA), ACO, and Harmony Search (HS). Amiri [13] similarly applied TLBO to clustering problems and verified the robustness and flexibility of the method. However, simulation results from Huang et al. showed that TLBO could not obtain satisfactory results for several difficult benchmark problems with complex landscapes and was prone to becoming trapped in locally optimal solutions [14]. To overcome this barrier, Rao and Patel modified several aspects of the basic TLBO, such as incorporating an elitism strategy and using adaptive teaching factor and multiteacher approaches to improve its performance [15]. Based on some insight into the structure of TLBO, we also found that it lacks diversification, because it only calculates the mean value of the population and searches between two randomly chosen individual solutions during the search iterations.

Chaos is a universal phenomenon of nonlinear dynamic systems, which has been extensively studied since Lorenz [16] discovered the well-known chaotic attractor in 1963. Chaos is a bounded unstable dynamic behavior that exhibits sensitive dependence on initial conditions and includes infinitely many unstable periodic motions. Although it appears to be stochastic, it occurs in a deterministic nonlinear system under deterministic conditions [17]. Due to these properties, chaos has been applied to many areas of optimization computation [18, 19]. Zuo and Fan [20] proposed the chaos search immune algorithm and applied it to neurofuzzy controller design. Alatas et al. used chaotic search to improve the performance of PSO algorithms [21] and proposed chaotic bee colony algorithms [22]. Chuang et al. [23] proposed chaotic catfish PSO.

Lévy flight is another technique for speeding up the convergence rate of an algorithm and escaping from local optima [24]. As a typical flight behavior of many animals and insects, Lévy flight was originally researched by Lévy and Borel in 1954 [25] and has subsequently been used for nonlocal searches in many optimization problems due to its promising capability [26, 27]. Since the step lengths of the random walk produced by Lévy flight are drawn from a power-law distribution with a heavy tail, namely, the Lévy distribution, most steps are short, so part of the new population is generated near the current best solution, which speeds up the local search. Meanwhile, the occasional long jumps produce new solutions far from the current best solution, which prevents the algorithm from becoming trapped in local optima.

An efficient optimization algorithm combines a strong exploration ability with a fast exploitation rate; moreover, it can be adapted to tackle a broad range of problems [28]. In order to reinforce the performance of TLBO and broaden the diversification of the algorithm, a chaotic system and a Lévy flight mechanism are introduced into TLBO. The basic idea of the proposed algorithm is as follows. First, the population in TLBO is divided into two parts according to the fitness of the solutions; a Lévy flight is then performed on the worse part, while the original teaching-learning search mechanism is used for the better part. Second, a chaotic search is implemented on a randomly chosen part of the population for the sake of diversity. The numerical experiments demonstrate the effectiveness of the proposed algorithm.

This paper is organized as follows. In Section 2, the basic TLBO is introduced. Then the proposed chaotic TLBO with Lévy flight is presented in Section 3. In Section 4, some experiments are performed and the numerical results are shown. Finally, the conclusion of the paper is presented in Section 5.

2. Teaching-Learning-Based Optimization

TLBO is a recently published population-based method which mimics the classic teaching-learning phenomenon within a classroom environment. In this optimization algorithm, a group of learners is considered as the population, the different design variables are considered as the different subjects offered to the learners, and a learner's result is analogous to the fitness value of the optimization problem. The best solution in the entire population is considered as the teacher. The main procedure of TLBO consists of two phases, the teacher phase and the learner phase, which are explained in the following parts.

2.1. Teacher Phase

This is the first stage of the algorithm, where learners learn from the teacher. During this phase a teacher tries to raise the mean result of the whole class toward his or her own level (the new mean). The difference between the existing mean and the new mean is given as

Difference_Mean_i = r_i (M_new − T_F · M_i),  (1)

where M_i is the mean of each design variable and M_new is the new mean for the i-th iteration. Two randomly generated parameters are applied in (1): r_i ranges between 0 and 1, and T_F is a teaching factor which can be either 1 or 2, thus influencing how much the mean is changed. In the algorithm, T_F plays the role of an adjusting factor, which controls the moving direction and scale when updating solutions. The value of T_F is decided randomly with equal probability as

T_F = round[1 + rand(0, 1)].  (2)

Based on this Difference_Mean, the existing solution is updated according to the following expression:

X_new,i = X_old,i + Difference_Mean_i.  (3)
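As a concrete illustration, the teacher-phase update above can be sketched in a few lines of NumPy; the function name and array layout below are our own choices, not part of the original algorithm description.

```python
import numpy as np

def teacher_phase(population, fitness):
    """One teacher-phase update for a minimization problem.

    population: (NP, D) array of learners; fitness: (NP,) objective values.
    """
    mean = population.mean(axis=0)            # M: columnwise mean of each design variable
    teacher = population[np.argmin(fitness)]  # best learner acts as the teacher (M_new)
    T_F = np.random.randint(1, 3)             # teaching factor, 1 or 2 with equal probability
    r = np.random.rand(*population.shape)     # random coefficients in [0, 1]
    diff_mean = r * (teacher - T_F * mean)    # Difference_Mean
    return population + diff_mean             # candidate solutions
```

In the full algorithm, each candidate row would then replace the corresponding learner only if it yields a better objective value.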

2.2. Learner Phase

It is the second part of the algorithm where learners increase their knowledge by interaction between themselves. A learner interacts randomly with another learner for enhancing his or her knowledge. A learner learns new things if the other one has more knowledge than him or her. Mathematically the learning phenomenon of this phase is expressed below.

At any iteration i, considering two different learners (solutions) X_i and X_j, where i ≠ j,

X_new,i = X_old,i + r_i (X_i − X_j)  if f(X_i) < f(X_j),  (4)
X_new,i = X_old,i + r_i (X_j − X_i)  otherwise.  (5)

X_new,i is accepted into the population if it gives a better function value.
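The two learner-phase rules can likewise be sketched as follows (a minimal NumPy sketch for minimization; the function name is illustrative):

```python
import numpy as np

def learner_phase(population, fitness):
    """One learner-phase pass: each learner interacts with a random partner."""
    NP, D = population.shape
    new_pop = population.copy()
    for i in range(NP):
        j = np.random.choice([k for k in range(NP) if k != i])  # a different learner
        r = np.random.rand(D)
        if fitness[i] < fitness[j]:
            new_pop[i] = population[i] + r * (population[i] - population[j])  # move away from j
        else:
            new_pop[i] = population[i] + r * (population[j] - population[i])  # move toward j
    return new_pop  # candidates; each is accepted only if it improves the objective
```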

The steps for implementing TLBO are as follows.

Step 1 (define the optimization problem and initialize algorithm parameters). Initialize the population size (N_P), the number of design variables (D), and the number of generations (G_max). Define the optimization problem as follows: minimize f(X), where f(X) is the objective function and X = (x_1, x_2, …, x_D) is a vector of design variables. Construct initial solutions according to N_P and D.

Step 2 (calculate M and M_new). Calculate the mean of the population columnwise, which will give the mean of each design variable as M = (m_1, m_2, …, m_D). Identify the best solution (teacher) X_teacher according to f(X); the teacher will try to move the mean M toward its own level, so let M_new = X_teacher.

Step 3. Calculate the Difference_Mean according to (1) by utilizing the teaching factor T_F.

Step 4. Modify the solutions in the teacher phase based on (3) and accept the new solution if it is better than the existing one.

Step 5. Update the solution in the learner phase according to (4) and (5) and accept the better one into the population.

Step 6. Repeat Steps 2 to 5 until the termination criterion is met.
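Putting Steps 1–6 together, a compact end-to-end sketch might look like the following (our own simplified implementation, with box constraints handled by clipping; all names and defaults are illustrative):

```python
import numpy as np

def tlbo(f, bounds, NP=20, max_gen=100, seed=0):
    """Minimal TLBO sketch: minimize f over box bounds (a list of (low, high) pairs)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    D = len(bounds)
    pop = lo + rng.random((NP, D)) * (hi - lo)       # Step 1: initial solutions
    fit = np.apply_along_axis(f, 1, pop)
    for _ in range(max_gen):
        # Teacher phase (Steps 2-4): move the class toward the teacher
        teacher = pop[np.argmin(fit)]
        mean = pop.mean(axis=0)
        T_F = rng.integers(1, 3)                     # teaching factor, 1 or 2
        cand = np.clip(pop + rng.random((NP, D)) * (teacher - T_F * mean), lo, hi)
        cand_fit = np.apply_along_axis(f, 1, cand)
        improved = cand_fit < fit                    # greedy acceptance
        pop[improved], fit[improved] = cand[improved], cand_fit[improved]
        # Learner phase (Step 5): learn from a random partner
        for i in range(NP):
            j = rng.integers(NP - 1)
            j += j >= i                              # uniform partner j != i
            r = rng.random(D)
            step = pop[i] - pop[j] if fit[i] < fit[j] else pop[j] - pop[i]
            cand_i = np.clip(pop[i] + r * step, lo, hi)
            cf = f(cand_i)
            if cf < fit[i]:                          # greedy acceptance
                pop[i], fit[i] = cand_i, cf
    b = int(np.argmin(fit))
    return pop[b], fit[b]
```

For example, `tlbo(lambda x: float((x ** 2).sum()), [(-100, 100)] * 5)` minimizes the 5-dimensional Sphere function.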

3. Chaotic Teaching-Learning-Based Optimization with Lévy Flight

An effective optimization algorithm must have a strong global searching ability along with a fast convergence rate. TLBO is free from specific algorithm parameters and outperforms PSO, HS, and so on due to its simplicity and efficiency. However, several hard benchmarks with complicated landscapes pose challenges to TLBO in finding a satisfactory result and escaping from local optima.

In order to enhance the performance of TLBO and take advantage of the properties of the chaotic system and Lévy flight, we integrate a chaotic search mechanism and Lévy flight into TLBO to improve its search efficiency. Hence, a chaotic TLBO with Lévy flight (CTLBO) is proposed in this paper. In the algorithm, the population is divided into two parts: the part with better fitness is evolved by the teaching-learning process of TLBO, while the other part is updated by a Lévy flight. A chaotic perturbation is then applied to a randomly selected part of the population to preserve its diversity. The main steps of CTLBO are elaborated in the next sections.

3.1. Lévy Flight

Lévy flights, also called Lévy motion, represent a kind of non-Gaussian stochastic process whose step sizes are distributed based on a Lévy stable distribution [25].

When generating a new solution x_i^(t+1) for solution x_i, a Lévy flight is performed:

x_i^(t+1) = x_i^(t) + α ⊕ Lévy(λ),  (6)

where α > 0 is the step size, which is related to the scale of the problem. In most conditions, we let α = 1. The product ⊕ means entrywise multiplication [24]. Lévy flights essentially provide a random walk whose random steps are drawn from a Lévy distribution for large steps:

Lévy ∼ u = t^(−λ),  1 < λ ≤ 3,  (7)

which has an infinite variance with an infinite mean. Here the consecutive steps of a solution essentially form a random walk process which obeys a power-law step-length distribution with a heavy tail.

There are a few ways to implement Lévy flights; the method chosen in this paper is one of the most efficient and simplest, based on the Mantegna algorithm; all the relevant equations are detailed in [29].
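A common way to realize the Mantegna algorithm in practice is the following sketch; the helper name and the default stability index `beta` are our own choices:

```python
import math
import numpy as np

def levy_step(beta=1.5, size=1, rng=None):
    """Draw step lengths from a Lévy-stable distribution via Mantegna's algorithm.

    beta is the stability index (1 < beta <= 2); each step is u / |v|^(1/beta)
    with u ~ N(0, sigma_u^2) and v ~ N(0, 1).
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)

# A new solution is then formed as x_new = x + alpha * levy_step(size=len(x)),
# with alpha = 1 as in the text.
```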

3.2. Chaotic Search

Chaos is a deterministic, quasi-random process that is sensitive to its initial conditions [30]. The behavior of chaos is apparently random and unpredictable. Mathematically, chaos is the randomness of a simple deterministic dynamical system, and chaotic systems can be considered sources of randomness.

A chaotic map is a discrete-time dynamical system running in a chaotic state [22]:

x_(k+1) = f(x_k),  0 < x_k < 1,  k = 0, 1, 2, …,  (8)

where {x_k} is the chaotic sequence, which can be utilized as a spread-spectrum sequence or as a random number sequence.

Chaotic sequences have proved to be simple and fast to generate and store; it is unnecessary to save long sequences [31]. Only a few functions (chaotic maps) and parameters (initial conditions) are required even for very long sequences [22].

In this paper, chaotic variables are generated by the following logistic mapping:

cx_(k+1) = 4 cx_k (1 − cx_k),  k = 1, 2, …,  (9)

where k is the serial number of the chaotic variables and cx_k ∈ (0, 1). Given different initial values cx_1 (with cx_1 ∉ {0.25, 0.5, 0.75}), different sequences of chaotic variables cx_2, cx_3, … are produced by the logistic equation. Each chaotic variable is then mapped into the search range to yield a candidate solution, and the other solutions are produced by the same method.
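The logistic mapping above is straightforward to generate; a small sketch (the function name is ours):

```python
def logistic_sequence(x0, n):
    """Generate n chaotic variables with the logistic map cx_{k+1} = 4*cx_k*(1 - cx_k).

    x0 must lie in (0, 1) and avoid the degenerate initial values 0.25, 0.5, and 0.75.
    """
    seq, x = [], x0
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        seq.append(x)
    return seq
```

Starting from x0 = 0.7, the first iterate is 4 · 0.7 · 0.3 = 0.84, and the sequence stays inside (0, 1).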

3.3. Proposed Methods

By introducing the Lévy flight and the chaotic search into the TLBO, a new algorithm is proposed in this paper. The pseudocode of the proposed CTLBO is shown in Pseudocode 1.

(1) Randomly initialize the population
(2) Evaluate the population
(3) while (the maximum number of iterations or the stopping criterion is not met) do
(4)   Divide the population into two parts according to the fitness
(5)   for solutions x_i in the worse part do
(6)     Perform Lévy flight for x_i to generate a new solution x_i′
(7)     Evaluate x_i′
(8)     Accept x_i′ into the population if it is better than x_i
(9)   end for
(10)  for solutions x_i in the better part do
(11)    Perform TLBO for x_i to generate a new solution x_i′
(12)    Evaluate x_i′
(13)    Accept x_i′ into the population if it is better than x_i
(14)  end for
(15)  Randomly choose a section S of the whole population
(16)  for solutions x_i in S do
(17)    Perform chaos search for x_i to generate a new solution x_i′
(18)    Evaluate x_i′
(19)    Accept x_i′ into the population if it is better than x_i
(20)  end for
(21) end while
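For concreteness, the CTLBO loop can be condensed into the following simplified sketch; only the teacher-phase update stands in for the full teaching-learning step, and all names and defaults (e.g. `chaos_frac`) are our own illustrative choices:

```python
import math
import numpy as np

def ctlbo(f, bounds, NP=20, max_gen=150, beta=1.5, chaos_frac=0.2, seed=1):
    """Simplified CTLBO loop for minimization over box bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    D = len(bounds)
    pop = lo + rng.random((NP, D)) * (hi - lo)
    fit = np.apply_along_axis(f, 1, pop)

    def levy(size):  # Mantegna's algorithm for Levy-distributed steps
        s = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
        return rng.normal(0, s, size) / np.abs(rng.normal(0, 1, size)) ** (1 / beta)

    def try_accept(i, cand):  # greedy replacement
        cand = np.clip(cand, lo, hi)
        cf = f(cand)
        if cf < fit[i]:
            pop[i], fit[i] = cand, cf

    for _ in range(max_gen):
        order = np.argsort(fit)
        better, worse = order[:NP // 2], order[NP // 2:]
        for i in worse:                       # Levy flight on the worse part
            try_accept(i, pop[i] + levy(D))
        teacher = pop[np.argmin(fit)].copy()  # teaching step on the better part
        mean = pop[better].mean(axis=0)
        T_F = rng.integers(1, 3)
        for i in better:
            try_accept(i, pop[i] + rng.random(D) * (teacher - T_F * mean))
        cx = rng.uniform(0.1, 0.9)            # chaotic perturbation on a random subset
        for i in rng.choice(NP, max(1, int(chaos_frac * NP)), replace=False):
            cx = 4.0 * cx * (1.0 - cx)        # logistic map
            try_accept(i, lo + cx * (hi - lo))
    b = int(np.argmin(fit))
    return pop[b], fit[b]
```

The greedy `try_accept` step mirrors the "accept if better" lines of the pseudocode; the fitness-based split is recomputed each iteration.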

4. Experimental Analysis and Numerical Results

In order to verify the performance of the proposed CTLBO and to analyze its properties, two sets of optimization problems are selected for the test experiments. In each set of problems, several well-known functions are used as benchmark problems to study the search behavior of the proposed CTLBO and to compare its performance with those of other algorithms.

4.1. Experiment 1

Firstly, to demonstrate the performance of the proposed algorithm, eight benchmark optimization problems [32] are selected as test functions. These eight benchmark functions were tested earlier with TLBO and improved TLBO by Rao and Patel [15]. The details of the benchmark functions are given in Table 1.


Table 1: Details of the benchmark functions (standard formulations).

Number | Function    | Formulation                                                                 | Dim. | Search range
1      | Sphere      | f(x) = Σ_{i=1..D} x_i^2                                                     | 10   | [−100, 100]
2      | Rosenbrock  | f(x) = Σ_{i=1..D−1} [100(x_{i+1} − x_i^2)^2 + (x_i − 1)^2]                  | 10   | [−2.048, 2.048]
3      | Ackley      | f(x) = −20 exp(−0.2 √(Σ x_i^2 / D)) − exp(Σ cos(2πx_i) / D) + 20 + e        | 10   | [−32.768, 32.768]
4      | Griewank    | f(x) = Σ x_i^2 / 4000 − Π cos(x_i / √i) + 1                                 | 10   | [−600, 600]
5      | Weierstrass | f(x) = Σ_{i=1..D} Σ_{k=0..20} [0.5^k cos(2π·3^k (x_i + 0.5))] − D Σ_{k=0..20} 0.5^k cos(π·3^k) | 10 | [−0.5, 0.5]
6      | Rastrigin   | f(x) = Σ [x_i^2 − 10 cos(2πx_i) + 10]                                       | 10   | [−5.12, 5.12]
7      | NCRastrigin | Rastrigin evaluated on y_i, with y_i = x_i if |x_i| < 0.5 and y_i = round(2x_i)/2 otherwise | 10 | [−5.12, 5.12]
8      | Schwefel    | f(x) = 418.9829·D − Σ x_i sin(√|x_i|)                                       | 10   | [−500, 500]

In [15], Rao and Patel tested all functions with 30000 maximum function evaluations. To maintain the consistency in the comparison, the CTLBO algorithm is also tested with the same maximum function evaluations. Each benchmark function undergoes 30 independent tests with CTLBO. The comparative results are in the form of the mean value and standard deviation of the objective function obtained after 30 independent runs, which are shown in Table 2.


Table 2: Comparative results (mean and standard deviation of the objective value over 30 independent runs) of CTLBO against several PSO variants (including constriction-factor and local-topology versions, UPSO, FDR, FIPS, CPSO-H, and CLPSO), ABC, modified ABC, TLBO, and I-TLBO (NT = 4) on the eight benchmark functions, with TLBO, I-TLBO, and CTLBO reaching values of 0.00 on some of the functions.