
Bare-Bones Teaching-Learning-Based Optimization

Feng Zou, Lei Wang, Xinhong Hei, Debao Chen, Qiaoyong Jiang, and Hongye Li

The Scientific World Journal, vol. 2014, Article ID 136920, 17 pages, 2014. https://doi.org/10.1155/2014/136920

Academic Editor: Y. Zhang
Received: 20 Feb 2014; Accepted: 07 Apr 2014; Published: 10 Jun 2014

Abstract

Teaching-learning-based optimization (TLBO), which simulates the teaching-learning process of a classroom, is one of the recently proposed swarm intelligence (SI) algorithms. In this paper, a new TLBO variant called bare-bones teaching-learning-based optimization (BBTLBO) is presented for solving global optimization problems. In this method, each learner in the teacher phase employs an interactive learning strategy, a hybridization of the teacher-phase learning strategy of the standard TLBO and Gaussian sampling learning based on neighborhood search, and each learner in the learner phase employs either the learner-phase strategy of the standard TLBO or a new neighborhood search strategy. To verify the performance of our approach, 20 benchmark functions and two real-world problems are utilized. The conducted experiments show that BBTLBO performs significantly better than, or at least comparably to, TLBO and some existing bare-bones algorithms. The results indicate that the proposed algorithm is competitive with some other optimization algorithms.

1. Introduction

Many real-life optimization problems are becoming more complex and difficult with the development of science and technology, so solving these complex problems accurately within a reasonable time is very important. Traditional optimization algorithms have difficulty solving such complex nonlinear problems. In recent years, nature-inspired optimization algorithms, which simulate natural phenomena and have different design philosophies and characteristics, such as evolutionary algorithms [1–3] and swarm intelligence algorithms [4–7], have been applied to a wide range of problems. In these algorithms, the convergence rate is given prime importance for solving real-world optimization problems: the ability to reach the global optimum is one aspect, and faster convergence is the other.

As a stochastic search scheme, TLBO [8, 9] is a recently proposed population-based algorithm inspired by swarm intelligence; it features simple computation and rapid convergence and has been extended to function optimization, engineering optimization, multiobjective optimization, clustering, and so forth [9–17]. TLBO is a parameter-free evolutionary technique and is gaining popularity due to its ability to achieve better results with comparatively faster convergence than genetic algorithms (GA) [1], particle swarm optimization (PSO) [5], and the artificial bee colony algorithm (ABC) [6]. However, in evolutionary computation research there have always been attempts to improve any given findings further. This work is an attempt to improve the convergence characteristics of TLBO without sacrificing the accuracies obtained by TLBO and, on some occasions, even to improve them. The aims of this paper are threefold. First, we propose an improved version of TLBO, namely, BBTLBO. Next, the proposed technique is validated on unimodal and multimodal functions using different performance indicators; the results of BBTLBO are compared with those of other algorithms, and the comparisons are supported by a statistical paired t-test. Thirdly, BBTLBO is applied to solve real-world optimization problems.

The remainder of this paper is organized as follows. The TLBO algorithm is introduced in Section 2. Section 3 presents a brief overview of some recently proposed bare-bones algorithms. Section 4 describes the improved teaching-learning-based optimization algorithm using neighborhood search (BBTLBO). Section 5 presents experiments on several benchmark functions along with statistical tests. The application to training artificial neural networks is presented in Section 6. Conclusions are given in Section 7.

2. Teaching-Learning-Based Optimization

Rao et al. [8, 9] first proposed teaching-learning-based optimization (TLBO), inspired by the philosophy of teaching and learning. The TLBO algorithm is based on the influence of a teacher on the output of learners in a class, which is considered in terms of results or grades. The working process of TLBO is divided into two parts: the first part is the "teacher phase" and the second part is the "learner phase." The "teacher phase" means learning from the teacher, and the "learner phase" means learning through interaction between learners.

A good teacher is one who brings his or her learners up to his or her own level of knowledge. In practice this is not possible, and a teacher can only move the mean of a class up to some extent depending on the capability of the class; this follows a random process depending on many factors. Let M_i be the mean of the class and T_i the teacher at any iteration i. T_i will try to move the mean M_i toward its own level, so the new mean is designated M_new. The solution is updated according to the difference between the existing and the new mean according to the following expression:

X_new,i = X_old,i + r_i · (M_new − TF · M_i),    (1)

where TF is a teaching factor that decides the value of the mean to be changed and r_i is a random vector in which each element is a random number in the range [0, 1]. The value of TF can be either 1 or 2, which is again a heuristic step decided randomly with equal probability as

TF = round[1 + rand(0, 1)].    (2)

Learners increase their knowledge by two different means: one through input from the teacher and the other through interaction among themselves. A learner interacts randomly with other learners through group discussions, presentations, formal communications, and so forth. A learner learns something new if the other learner has more knowledge than him or her. For a learner X_i and a randomly selected learner X_k (i ≠ k), learner modification is expressed (for a minimization problem f) as

X_new,i = X_old,i + r_i · (X_i − X_k)   if f(X_i) < f(X_k),
X_new,i = X_old,i + r_i · (X_k − X_i)   otherwise.    (3)

As explained above, the pseudocode for the implementation of TLBO is summarized in Algorithm 1.

(1) Begin
(2)    Initialize NP (number of learners) and D (number of dimensions)
(3)    Initialize learners X and evaluate all learners X
(4)    Denote the best learner as Teacher and the mean of all learners as Mean
(5)    while (stopping condition not met)
(6)       for each learner X_i of the class % Teaching phase
(7)          TF = round(1 + rand(0, 1))
(8)          for j = 1 : D
(9)             X_new,i(j) = X_i(j) + rand(0, 1) * (Teacher(j) − TF * Mean(j))
(10)         endfor
(11)         Accept X_new,i if X_new,i is better than X_i
(12)     endfor
(13)     for each learner X_i of the class % Learning phase
(14)       Randomly select one learner X_k, such that i ≠ k
(15)        if X_i better than X_k
(16)           for j = 1 : D
(17)              X_new,i(j) = X_i(j) + rand(0, 1) * (X_i(j) − X_k(j))
(18)           endfor
(19)        else
(20)           for j = 1 : D
(21)              X_new,i(j) = X_i(j) + rand(0, 1) * (X_k(j) − X_i(j))
(22)           endfor
(23)        endif
(24)        Accept X_new,i if X_new,i is better than X_i
(25)     endfor
(26)     Update the Teacher and the Mean
(27)  endwhile
(28) end
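To make Algorithm 1 concrete, the following Python sketch implements one TLBO generation for a minimization problem. The NumPy-based layout, the helper name tlbo_generation, and the Sphere usage example at the end are illustrative assumptions of this sketch, not the authors' original implementation.

```python
import numpy as np

def tlbo_generation(X, fitness, func):
    """One TLBO generation (teacher phase + learner phase) for minimization.

    X       : (NP, D) array of learners
    fitness : (NP,) array of objective values for X
    func    : objective function mapping a D-vector to a scalar
    """
    NP, D = X.shape

    # ----- Teacher phase -----
    teacher = X[np.argmin(fitness)]          # best learner acts as the teacher
    mean = X.mean(axis=0)                    # mean of the class
    for i in range(NP):
        TF = np.random.randint(1, 3)         # teaching factor, 1 or 2 with equal probability
        X_new = X[i] + np.random.rand(D) * (teacher - TF * mean)
        f_new = func(X_new)
        if f_new < fitness[i]:               # greedy selection
            X[i], fitness[i] = X_new, f_new

    # ----- Learner phase -----
    for i in range(NP):
        k = np.random.choice([j for j in range(NP) if j != i])
        if fitness[i] < fitness[k]:          # X_i is better, move away from X_k
            X_new = X[i] + np.random.rand(D) * (X[i] - X[k])
        else:                                # X_k is better, move toward X_k
            X_new = X[i] + np.random.rand(D) * (X[k] - X[i])
        f_new = func(X_new)
        if f_new < fitness[i]:
            X[i], fitness[i] = X_new, f_new

    return X, fitness

# Illustrative usage on the Sphere function
sphere = lambda x: np.sum(x ** 2)
X = np.random.uniform(-100, 100, size=(20, 30))
fitness = np.array([sphere(x) for x in X])
for _ in range(100):
    X, fitness = tlbo_generation(X, fitness, sphere)
print(fitness.min())
```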

3. Bare-Bones Algorithm

In this section, we present a brief overview of some recently proposed bare-bones algorithms.

3.1. BBPSO and BBExp

PSO is a swarm intelligence-based algorithm inspired by the flocking behavior of birds [5]. In PSO, each particle is attracted by its personal best position pbest_i and the global best position gbest found so far. Theoretical studies [18, 19] proved that each particle converges to the weighted average of pbest_i and gbest:

lim_{t→∞} x_i(t) = (c_1 · pbest_i + c_2 · gbest) / (c_1 + c_2),    (4)

where c_1 and c_2 are the two learning factors in PSO.

Based on this convergence characteristic of PSO, Kennedy [20] proposed a new PSO variant called bare-bones PSO (BBPSO). Bare-bones PSO retains the standard PSO social communication but replaces the dynamical particle update with sampling from a probability distribution based on pbest_i and gbest as follows:

x_ij(t+1) = N((pbest_ij(t) + gbest_j(t)) / 2, |pbest_ij(t) − gbest_j(t)|),    (5)

where x_ij is the jth dimension of the ith particle in the population and N(μ, σ) represents a Gaussian distribution with mean (pbest_ij(t) + gbest_j(t)) / 2 and standard deviation |pbest_ij(t) − gbest_j(t)|.

Kennedy [20] also proposed an alternative version of the BBPSO, denoted BBExp, where (5) is replaced by

x_ij(t+1) = N((pbest_ij(t) + gbest_j(t)) / 2, |pbest_ij(t) − gbest_j(t)|)   if rand(0, 1) < 0.5,
x_ij(t+1) = pbest_ij(t)   otherwise,    (6)

where rand(0, 1) is a random value within [0, 1] for the jth dimension. With this alternative mechanism, there is a 50% chance that the search process focuses on the previous best positions.
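A minimal Python sketch of the BBPSO and BBExp position updates in (5) and (6); the function names and the vectorized NumPy form are assumptions of this sketch.

```python
import numpy as np

def bbpso_update(pbest_i, gbest):
    """BBPSO: each dimension is sampled from a Gaussian centered at the
    midpoint of the personal and global best, with their distance as std."""
    mu = (pbest_i + gbest) / 2.0
    sigma = np.abs(pbest_i - gbest)
    return np.random.normal(mu, sigma)

def bbexp_update(pbest_i, gbest):
    """BBExp: with 50% probability per dimension, keep the personal best
    component instead of sampling from the Gaussian."""
    sampled = bbpso_update(pbest_i, gbest)
    keep_pbest = np.random.rand(pbest_i.size) < 0.5
    return np.where(keep_pbest, pbest_i, sampled)
```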

3.2. BBDE, GBDE, and MGBDE

Inspired by BBPSO and DE, Omran et al. [21] proposed a new and efficient DE variant called bare-bones differential evolution (BBDE). The BBDE is an almost parameter-free optimization algorithm that hybridizes the bare-bones particle swarm optimizer and differential evolution. Differential evolution is used to mutate, for each particle, the attractor associated with that particle, defined as a weighted average of its personal and neighborhood best positions. For the BBDE, individual x_i is updated as follows:

x_ij(t+1) = p_ij(t) + rand(0, 1) · (x_{i1,j}(t) − x_{i2,j}(t))   if rand(0, 1) > CR,
x_ij(t+1) = pbest_{i3,j}(t)   otherwise,    (7)

where i1, i2, and i3 are three indices chosen from the set {1, 2, …, NP} with i1 ≠ i2 ≠ i3 ≠ i, rand(0, 1) is a random value within [0, 1] for the jth dimension, CR is the crossover probability, and p_ij is defined by

p_ij(t) = rand_1j(0, 1) · pbest_ij(t) + (1 − rand_1j(0, 1)) · gbest_j(t),    (8)

where pbest_i and gbest are the personal best position and the global best position and rand_1j(0, 1) is a random value within [0, 1] for the jth dimension.
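The BBDE update of (7)-(8) can be sketched as follows. Since the equation is reconstructed from the surrounding description, the branch condition on CR and the attractor weighting here should be read as a hedged illustration rather than a verbatim transcription of Omran et al.'s formulation.

```python
import numpy as np

def bbde_update(i, X, pbest, gbest, CR=0.5):
    """Bare-bones DE update for individual i (dimension-wise).

    X     : (NP, D) current population
    pbest : (NP, D) personal best positions
    gbest : (D,)    global best position
    """
    NP, D = X.shape
    i1, i2, i3 = np.random.choice([j for j in range(NP) if j != i], 3, replace=False)
    r1 = np.random.rand(D)
    # attractor: per-dimension random weighting of personal and global best
    p_i = r1 * pbest[i] + (1.0 - r1) * gbest
    mutated = p_i + np.random.rand(D) * (X[i1] - X[i2])
    # with probability CR per dimension, take a randomly chosen personal best component
    use_pbest = np.random.rand(D) < CR
    return np.where(use_pbest, pbest[i3], mutated)
```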

Based on the idea that Gaussian sampling is a fine-tuning procedure which starts during exploration and is continued into exploitation, Wang et al. [22] proposed a new parameter-free DE algorithm called GBDE. In GBDE, the mutation strategy uses a Gaussian sampling method defined by

v_ij(t) = N((gbest_j(t) + x_ij(t)) / 2, |gbest_j(t) − x_ij(t)|),    (9)

where N(μ, σ) represents a Gaussian distribution with mean (gbest_j(t) + x_ij(t)) / 2 and standard deviation |gbest_j(t) − x_ij(t)| and CR is the probability of crossover applied to the mutant vector.

To balance global search ability and convergence rate, Wang et al. [22] proposed a modified GBDE (called MGBDE). Its mutation strategy is a hybridization of GBDE and DE/best/1 as follows:

v_i(t) = gbest(t) + F · (x_{r1}(t) − x_{r2}(t))   if rand(0, 1) < 0.5,
v_i(t) = N((gbest(t) + x_i(t)) / 2, |gbest(t) − x_i(t)|)   otherwise,    (10)

where r1 and r2 are two randomly selected indices and F is the scale factor of the DE/best/1 mutation.
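A short Python sketch of the GBDE and MGBDE mutation strategies in (9) and (10); the 50% switching probability and the scale factor F = 0.5 used for DE/best/1 are assumptions made here for illustration.

```python
import numpy as np

def gbde_mutation(x_i, gbest):
    """GBDE: Gaussian sampling centered between the current individual
    and the global best, with their distance as the standard deviation."""
    mu = (gbest + x_i) / 2.0
    sigma = np.abs(gbest - x_i)
    return np.random.normal(mu, sigma)

def mgbde_mutation(x_i, gbest, X, F=0.5):
    """MGBDE: randomly switch between the GBDE Gaussian mutation and DE/best/1."""
    if np.random.rand() < 0.5:
        r1, r2 = np.random.choice(len(X), 2, replace=False)
        return gbest + F * (X[r1] - X[r2])      # DE/best/1 mutation
    return gbde_mutation(x_i, gbest)            # GBDE Gaussian mutation
```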

4. Proposed Algorithm: BBTLBO

The bare-bones PSO utilizes this information by sampling candidate solutions normally distributed around the formally derived attractor point. That is, the new position is generated by a Gaussian distribution that samples the search space based on pbest and gbest at the current iteration. As a result, the new position will be centered around the weighted average of pbest and gbest. Generally speaking, at the initial evolutionary stages the search process focuses on exploration due to the large deviation; with an increasing number of generations the deviation becomes smaller, and the search process focuses on exploitation. From the search behavior of BBPSO, the Gaussian sampling is a fine-tuning procedure which starts during exploration and is continued into exploitation. This can be beneficial to the search of many evolutionary optimization algorithms. Additionally, the bare-bones PSO has no parameters to be tuned.

Based on the previous explanation, a new bare-bones TLBO (BBTLBO) with neighborhood search is proposed in this paper. In fact, in TLBO, if a new learner has a better function value than the old learner, it replaces the old one in the memory; otherwise, the old one is retained. In other words, a greedy selection mechanism is employed as the selection operation between the old solution and the candidate one. Hence, the new teacher and the new learner are the global best and the learner's personal best found so far, respectively. The complete flowchart of the BBTLBO algorithm is shown in Figure 1.

4.1. Neighborhood Search

It is known that birds of a feather flock together and like-minded people fall into the same group. Just like evolutionary algorithms themselves, the notion of neighborhood is inspired by nature. The neighborhood technique is an efficient method to maintain the diversity of solutions. It plays an important role in evolutionary algorithms and is often introduced by researchers to maintain a population of diverse individuals and improve the exploration capability of population-based heuristic algorithms [23–26]. In fact, learners with similar interests form different learning groups, and because of this preference a learner may learn from the excellent individuals in his or her learning group.

For the implementation of grouping, various types of connected distances may be used. Here we use a ring topology [27] based on the indexes of learners for the sake of simplicity. In a ring topology, the first individual is the neighbor of the last individual and vice versa. Based on the ring topology, a k-neighborhood radius is defined, where k is a predefined integer. For each individual X_i, its k-neighborhood consists of 2k + 1 individuals (including itself), namely X_{i−k}, …, X_{i−1}, X_i, X_{i+1}, …, X_{i+k}. That is, the neighborhood size is 2k + 1 for a k-neighborhood. For simplicity, k is set to 1 (Figure 2) in our algorithm. This means that there are 3 individuals in each learning group. Once the groups are constructed, we can utilize them for updating the learners of the corresponding group.
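A small Python helper showing how such a k-neighborhood can be formed on a ring of learner indices; the modulo-based indexing is an implementation detail assumed here.

```python
def ring_neighborhood(i, NP, k=1):
    """Indices of the k-neighborhood of learner i on a ring of NP learners,
    including learner i itself (2*k + 1 indices in total)."""
    return [(i + offset) % NP for offset in range(-k, k + 1)]

# Example: with NP = 20 and k = 1, learner 0 is grouped with learners 19 and 1.
print(ring_neighborhood(0, 20))   # [19, 0, 1]
```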

4.2. Teacher Phase

To balance global and local search ability, a modified interactive learning strategy is proposed in the teacher phase. In this learning phase, each learner employs an interactive learning strategy (the hybridization of the teacher-phase learning strategy of the standard TLBO and Gaussian sampling learning) based on neighborhood search.

In BBTLBO, the updating formula for a learner X_i in the teacher phase is proposed as the hybridization of the teacher-phase learning strategy and Gaussian sampling learning as follows:

X_new,i(j) = X_old,i(j) + rand(0, 1) · (Nteacher_i(j) − TF · Nmean_i(j))   if rand(0, 1) < u,
X_new,i(j) = N((Nteacher_i(j) + Nmean_i(j)) / 2, |Nteacher_i(j) − Nmean_i(j)|)   otherwise,    (11)

where u, called the hybridization factor, lies in the range [0, 1], rand(0, 1) is a random number in the range [0, 1] for the jth dimension, Nteacher_i and Nmean_i are the existing neighborhood best solution and the neighborhood mean solution of learner X_i, and TF is a teaching factor which can be either 1 or 2 randomly.

In BBTLBO, there is therefore a probability of u that the jth dimension of the ith learner in the population follows the behavior of the teacher-phase learning strategy, while the remaining dimensions follow the search behavior of the Gaussian sampling in the teacher phase. This helps balance the advantages of a fast convergence rate (the attraction of the teacher-phase learning strategy) and exploration (the Gaussian sampling) in BBTLBO.
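The dimension-wise hybrid update of (11) can be sketched in Python as follows; this follows the reconstruction given above (with u, Nteacher, and Nmean as described) and should not be read as the authors' reference implementation.

```python
import numpy as np

def bbtlbo_teacher_update(x_i, n_teacher, n_mean, u):
    """Teacher-phase update of BBTLBO for one learner (dimension-wise).

    With probability u a dimension follows the TLBO teacher-phase rule;
    otherwise it is sampled from a Gaussian centered between the
    neighborhood teacher and the neighborhood mean.
    """
    D = x_i.size
    TF = np.random.randint(1, 3)                       # teaching factor, 1 or 2
    tlbo_part = x_i + np.random.rand(D) * (n_teacher - TF * n_mean)
    gauss_part = np.random.normal((n_teacher + n_mean) / 2.0,
                                  np.abs(n_teacher - n_mean))
    mask = np.random.rand(D) < u                        # per-dimension choice
    return np.where(mask, tlbo_part, gauss_part)
```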

4.3. Learner Phase

At the same time, in the learner phase, a learner interacts randomly with other learners to enhance his or her knowledge in the class. This learning method can be treated as the global search strategy (shown in (3)).

In this paper, we introduce a new learning strategy in which, in the learner phase, each learner learns from the neighborhood teacher and from another learner selected randomly from his or her corresponding neighborhood. This learning method can be treated as the neighborhood search strategy. Let X_new,i represent the interactive learning result of learner X_i. This neighborhood search strategy can be expressed as follows:

X_new,i = X_old,i + rand_1 · (Nteacher_i − X_old,i) + rand_2 · (X_old,i − X_k),    (12)

where rand_1 and rand_2 are random vectors in which each element is a random number in the range [0, 1], Nteacher_i is the teacher of learner X_i's corresponding neighborhood, and the learner X_k is selected randomly from that neighborhood.

In BBTLBO, each learner learns probabilistically by means of either the global search strategy or the neighborhood search strategy in the learner phase. That is, about 50% of the learners in the population execute the learner-phase strategy of the standard TLBO (shown in (3)), while the remaining 50% execute the neighborhood search strategy (shown in (12)). This helps balance global search and local search in the learner phase.
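A Python sketch of the learner-phase choice between the global search strategy of (3) and the neighborhood search strategy of (12); the 50% switching probability comes from the text above, while the sign convention in the neighborhood branch follows the reconstruction of (12) given earlier.

```python
import numpy as np

def bbtlbo_learner_update(x_i, f_i, x_k, f_k, n_teacher, x_nk):
    """Learner-phase update of BBTLBO for one learner.

    x_k, f_k  : a learner (and its fitness) picked at random from the class
    n_teacher : teacher of x_i's neighborhood
    x_nk      : a learner picked at random from x_i's neighborhood
    """
    D = x_i.size
    if np.random.rand() < 0.5:
        # global search: standard TLBO learner phase, see (3)
        if f_i < f_k:
            return x_i + np.random.rand(D) * (x_i - x_k)
        return x_i + np.random.rand(D) * (x_k - x_i)
    # neighborhood search strategy, see (12)
    return (x_i + np.random.rand(D) * (n_teacher - x_i)
                + np.random.rand(D) * (x_i - x_nk))
```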

Moreover, compared to the original TLBO, BBTLBO only modifies the learning strategies. Therefore, both the original TLBO and BBTLBO have the same time complexity O(NP · D · G_max), where NP is the size of the population, D is the number of dimensions, and G_max is the maximum number of generations.

As explained above, the pseudocode for the implementation of BBTLBO is summarized in Algorithm 2.

(1) Begin
(2)   Initialize NP (number of learners), D (number of dimensions) and the hybridization factor u
(3)   Initialize learners X and evaluate all learners X
(4)   while (stopping condition not met)
(5)      for each learner X_i of the class % Teaching phase
(6)        TF = round(1 + rand(0, 1))
(7)        Denote the Nteacher_i and the Nmean_i in its neighborhood for each learner X_i
(8)        Update each learner X_new,i according to (11)
(9)        Accept X_new,i if X_new,i is better than X_i
(10)     endfor
(11)     for each learner X_i of the class % Learning phase
(12)       Randomly select one learner X_k, such that i ≠ k
(13)       if rand(0, 1) < 0.5
(14)         Update each learner X_new,i according to (3)
(15)       else
(16)         Denote the Nteacher_i in its neighborhood for each learner X_i
(17)         Update each learner X_new,i according to (12)
(18)       endif
(19)       Accept X_new,i if X_new,i is better than X_i
(20)     endfor
(21)   endwhile
(22) end

5. Functions Optimization

In this section, to illustrate the effectiveness of the proposed method, 20 benchmark functions are used to test the efficiency of BBTLBO. To compare the search performance of BBTLBO with that of other methods, several other algorithms are also simulated in this paper.

5.1. Benchmark Functions

The details of the 20 benchmark functions are shown in Table 1. Among the 20 benchmark functions, the first group consists of unimodal functions and the remaining ones are multimodal functions. The search ranges and theoretical optima for all functions are also shown in Table 1.


Table 1: Benchmark functions (dimension D, search range, and theoretical optimum).

Function        D    Range             Optimum
Sphere          30   [−100, 100]       0
Sum Square      30   [−100, 100]       0
Quadric         30   [−1.28, 1.28]     0
Step            30   [−100, 100]       0
Schwefel 1.2    30   [−100, 100]       0
Schwefel 2.21   30   [−100, 100]       0
Schwefel 2.22   30   [−10, 10]         0
Zakharov        30   [−100, 100]       0
Rosenbrock      30   [−2.048, 2.048]   0
Ackley          30   [−32, 32]         0
Rastrigin       30   [−5.12, 5.12]     0
Weierstrass     30   [−0.5, 0.5]       0
Griewank        30   [−600, 600]       0
Schwefel        30   [−500, 500]       0
Bohachevsky1    2    [−100, 100]       0
Bohachevsky2    2    [−100, 100]       0
Bohachevsky3    2    [−100, 100]       0
Shekel5         4    [0, 10]           −10.1532
Shekel7         4    [0, 10]           −10.4029
Shekel10        4    [0, 10]           −10.5364

5.2. Parameter Settings

All the experiments are carried out on the same machine with a Celeron 2.26 GHz CPU, 2 GB memory, and the Windows XP operating system, using Matlab 7.9. For the purpose of reducing statistical errors, each algorithm is independently run 50 times. For all algorithms, the population size was set to 20. All population-based stochastic algorithms use the same stopping criterion, that is, reaching a certain number of function evaluations (FEs).

5.3. Effect of Variation in Parameter u

Comparative tests have been performed using different settings of the hybridization factor u. In our experiments, the maximal number of FEs is used as the stopping condition of the algorithm, namely, 40,000 for all test functions. Table 2 shows the mean optimum solutions and the standard deviations of the solutions obtained using different hybridization factors in the 50 independent runs. The best results among the settings are shown in bold. Figure 3 presents representative convergence graphs of different benchmark functions in terms of the mean fitness values achieved using different hybridization factors. Owing to space limitations, only some sample graphs are illustrated.


Table 2: Mean optimum solutions and standard deviations obtained by BBTLBO with seven different settings of the hybridization factor u on each test function (Fun).