Abstract

An improved teaching-learning-based optimization algorithm combining the social character of PSO (TLBO-PSO), which considers the teacher's influence on the students as well as the mean grade of the class, is proposed in this paper to find global solutions of function optimization problems. In this method, the teacher phase of TLBO is modified; the new position of an individual is determined by its old position, the mean position, and the best position of the current generation. The method overcomes the disadvantage that the evolution of the original TLBO might stop when the mean position of the students equals the position of the teacher. To decrease the computational cost of the algorithm, the duplicate-removal process of the original TLBO is not adopted in the improved algorithm. Moreover, the probability of local convergence of the improved method is decreased by a mutation operator. The effectiveness of the proposed method is tested on some benchmark functions, and the results are competitive with respect to some other methods.

1. Introduction

Global optimization is an active research area in science and engineering. Many real-world optimization applications can be formulated as global optimization problems. To solve global optimization problems efficiently, efficient and robust optimization algorithms are needed. Traditional methods often fail to solve complex global optimization problems. A detailed overview of the research progress in deterministic global optimization can be found in [1]. To overcome the difficulties of traditional methods, some well-known metaheuristics have been developed for solving global optimization problems during the last four decades. Among existing metaheuristics, the particle swarm optimization (PSO) algorithm [2] plays a very important role in solving global optimization problems. PSO, inspired by the social behavior of birds, has been successfully utilized to optimize continuous nonlinear functions [3], but the standard PSO algorithm (SPSO) might get trapped in local optima when solving complex multimodal problems. To improve the global performance of PSO, some variants have been developed. Linear decreasing inertia weight particle swarm optimization (LDWPSO) [4] was introduced by Shi and Eberhart to overcome the lack of velocity control in standard PSO; the inertia weight of the algorithm decreases from a large value to a small one as evolution proceeds. To improve the convergence accuracy of PSO, some modified operators have been adopted to help the swarm escape from local optima, especially when the best fitness of the swarm has not changed over several consecutive iterations [5–8]. To improve the convergence speed of PSO, some modifications of the updating rule of particles were presented in [9], and the improved algorithm was tested on some benchmark functions. An adaptive particle swarm optimization algorithm was introduced by Zhan et al. to improve the performance of PSO; many operators were proposed to help the swarm jump out of local optima, and the algorithms were evaluated on 12 benchmark functions [10, 11]. Detailed surveys of developments of PSO are available for interested readers in [12–14].

Though swarm intelligence optimization algorithms have been successfully used to solve global optimization problems, the main limitation of the previously mentioned algorithms is that many parameters (often more than two) must be determined in the updating process of individuals, and the efficiency of the algorithms is usually affected by these parameters. For example, there are three parameters ($w$, $c_1$, and $c_2$) that must be determined in the updating equations of PSO. Moreover, the optimal parameters of the algorithms are often difficult to determine. To decrease the effect of parameters on the algorithms, the teaching-learning-based optimization (TLBO) algorithm [15] was proposed recently, and it has been used in some real applications [16–19]. Some variants of the TLBO algorithm have been presented to improve the performance of the original TLBO [20, 21].

Under the framework of population-based optimization, many variants of evolutionary optimization algorithms have been designed. Each of these algorithms performs well in certain cases, and none of them dominates the others. The key reason for employing hybridization is that a hybrid algorithm can take advantage of the strengths of each individual technique while simultaneously overcoming its main limitations. Based on this idea, many hybrid algorithms have been presented [22–27]. In this paper, to improve the performance of the TLBO algorithm for solving global optimization problems, an improved TLBO algorithm combining the social character of PSO, named TLBO-PSO, is proposed. In the improved TLBO algorithm, the teacher improves not only the mean grade of the whole class but also the performance of every student. The proposed algorithm has been evaluated on some benchmark functions, and the results are compared with those of some other algorithms.

The paper is organized as follows. Section 2 provides a brief description of the standard PSO algorithm. The original teaching-learning-based optimization (TLBO) algorithm is introduced in Section 3. Section 4 provides the detailed procedure of the improved teaching-learning-based optimization algorithm combining the social character of PSO (TLBO-PSO). Some experiments are given in Section 5. Section 6 concludes the paper.

2. Particle Swarm Optimizers

In the standard PSO (SPSO) algorithm, a swarm of particles are represented as potential solutions; each particle searches for an optimal solution to the objective function in the search space. The $i$th particle is associated with two vectors, that is, the velocity vector $V_i = (v_{i1}, v_{i2}, \ldots, v_{iD})$ and the position vector $X_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$, where $D$ is the dimension of the variable. Each particle dynamically updates its position in a limited searching domain according to the best position of the current iteration and the best position it has achieved so far. The velocity and position of the $i$th particle are updated as follows:

$$v_{id}(t+1) = w\,v_{id}(t) + c_1 r_1 \big(p_{id} - x_{id}(t)\big) + c_2 r_2 \big(p_{gd} - x_{id}(t)\big), \quad (1)$$

$$x_{id}(t+1) = x_{id}(t) + v_{id}(t+1), \quad (2)$$

where $c_1$ and $c_2$ are the acceleration coefficients, both commonly set to 2, and $r_1$ and $r_2$ are random numbers between 0 and 1. $w$ is the inertia weight, which influences the convergence behavior of the swarm: a large inertia weight benefits global searching, while a small one facilitates local searching. $x_{id}$ is the position of the $i$th particle on the $d$th dimension, $p_{id}$ is the best position the $i$th particle has achieved so far, and $p_{gd}$ is the best position of the swarm in the current iteration. In general, the velocity of all particles is limited by the maximum velocity ($V_{\max}$), and their positions are limited by the maximum position ($X_{\max}$) and the minimum position ($X_{\min}$). To improve the performance of PSO, the method LDWPSO, in which the inertia weight decreases linearly from a relatively large value to a small one, was proposed [4]; the weight changes with the evolution iteration as follows:

$$w = w_{\max} - (w_{\max} - w_{\min}) \cdot \frac{\mathrm{gen}}{\mathrm{maxgen}}, \quad (3)$$

where $w_{\max}$ and $w_{\min}$ are the maximum and minimum values of the inertia weight, respectively, gen is the current generation, and maxgen is the maximum evolutionary iteration. The adaptive weight gives the swarm good global searching ability at the beginning of the run and good local searching ability near the end. Algorithm 1 presents the detailed steps of LDWPSO.

Initialize:
  Initialize w_max, w_min, c1, c2, V_max, X_max, X_min, population size (Ps), dimension size (D) and the initial swarm
Optimize:
  for gen = 1 : maxgen
    Calculate the inertia weight w according to (3)
    for i = 1 : Ps
      Calculate fitness of all particles;
      Calculate the best position p_g of current generation;
      Calculate the best position p_i of the i-th particle which it has achieved so far;
      for d = 1 : D
        Update the velocity of the i-th particle according to (1)
        If v_id > V_max then v_id = V_max
        If v_id < −V_max then v_id = −V_max
        Update the position of the i-th particle according to (2)
        If x_id > X_max then x_id = X_max
        If x_id < X_min then x_id = X_min
      end
    end
  end
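For concreteness, the following minimal Python/NumPy sketch implements LDWPSO as described above. The function name ldwpso, the velocity-limit scaling, and the default bounds are our own illustrative choices, not from the original paper; the swarm best is tracked here as the best position found so far, a common convention in global-best PSO.

import numpy as np

def ldwpso(f, dim, ps=30, max_gen=1000, x_min=-100.0, x_max=100.0,
           w_max=0.9, w_min=0.4, c1=2.0, c2=2.0):
    # Minimal LDWPSO sketch (minimization); parameter defaults are illustrative.
    v_max = 0.2 * (x_max - x_min)                    # velocity limit (assumed scaling)
    x = np.random.uniform(x_min, x_max, (ps, dim))   # initial positions
    v = np.random.uniform(-v_max, v_max, (ps, dim))  # initial velocities
    pbest = x.copy()                                 # personal best positions
    pbest_fit = np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_fit.argmin()].copy()         # best position found so far
    for gen in range(max_gen):
        w = w_max - (w_max - w_min) * gen / max_gen  # eq. (3)
        r1 = np.random.rand(ps, dim)
        r2 = np.random.rand(ps, dim)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # eq. (1)
        v = np.clip(v, -v_max, v_max)                # velocity limits
        x = np.clip(x + v, x_min, x_max)             # eq. (2) with position limits
        fit = np.apply_along_axis(f, 1, x)
        better = fit < pbest_fit                     # update personal bests
        pbest[better] = x[better]
        pbest_fit[better] = fit[better]
        gbest = pbest[pbest_fit.argmin()].copy()
    return gbest, pbest_fit.min()

For example, ldwpso(lambda x: np.sum(x**2), dim=10) minimizes the 10-dimensional sphere function.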

3. Teaching-Learning-Based Optimization (TLBO) Algorithm

TLBO is one of the recently proposed population-based algorithms; it simulates the teaching-learning process of a class [15]. The main idea of the algorithm is based on the influence of a teacher on the output of learners in a class and the interaction among the learners. The TLBO algorithm does not require any algorithm-specific parameters; it requires only common controlling parameters like population size and number of generations for its working. The role of teaching in the algorithm is to improve the average grade of the class. The algorithm contains two phases: the teacher phase and the learner phase. A detailed description of the algorithm can be found in [15]. In this paper, only the two main phases of TLBO are introduced, as follows.

3.1. Teacher Phase

In the teacher phase of the TLBO algorithm, the task of the teacher is to increase the mean grade of the class. Suppose that the objective function is $f(X)$ with $D$-dimensional variables; the $i$th student can be represented as $X_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$. At any iteration $g$, assume that the population size of the class is $N$; the mean result of the students in the current iteration is $\mathrm{Mean} = \frac{1}{N}\sum_{i=1}^{N} X_i$. The student with the best fitness is chosen as the teacher of the current iteration and is represented as $X_{\mathrm{teacher}}$. All the students update their positions as follows:

$$X_{\mathrm{new},i} = X_{\mathrm{old},i} + r_i \big(X_{\mathrm{teacher}} - T_F \cdot \mathrm{Mean}\big), \quad (4)$$

where $X_{\mathrm{new},i}$ and $X_{\mathrm{old},i}$ are the new and the old positions of the $i$th student and $r_i$ is a random number in the range $[0, 1]$. If the new solution is better than the old one, the old position of the individual is replaced by the new position. The value of the teaching factor $T_F$ is randomly decided by the algorithm according to

$$T_F = \mathrm{round}\big[1 + \mathrm{rand}(0, 1)\big]. \quad (5)$$
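As a sketch, the teacher phase of (4)-(5) can be written as follows in Python/NumPy (minimization assumed; pop is an N×D array, fit holds the corresponding fitness values, and numpy is imported as np as in the earlier LDWPSO sketch):

def teacher_phase(pop, fit, f):
    # TLBO teacher phase, eqs. (4)-(5), with greedy acceptance
    teacher = pop[fit.argmin()]                   # best student acts as teacher
    mean = pop.mean(axis=0)                       # mean solution of the class
    for i in range(len(pop)):
        tf = np.random.randint(1, 3)              # teaching factor T_F in {1, 2}, eq. (5)
        r = np.random.rand(pop.shape[1])
        new = pop[i] + r * (teacher - tf * mean)  # eq. (4)
        new_fit = f(new)
        if new_fit < fit[i]:                      # keep the better of old and new
            pop[i], fit[i] = new, new_fit
    return pop, fit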

3.2. Learner Phase

In the learner phase of the TLBO algorithm, learners can increase their knowledge from others. A learner interacts randomly with other learners to enhance his or her knowledge. The learning in this phase can be expressed as follows. For the $i$th individual in the $g$th generation, randomly select the $j$th individual ($j \neq i$); the updated formula of $X_i$ is defined in (6) and (7).

If $X_i$ is better than $X_j$ according to their fitness, then

$$X_{\mathrm{new},i} = X_{\mathrm{old},i} + r_i \big(X_i - X_j\big). \quad (6)$$

Else

$$X_{\mathrm{new},i} = X_{\mathrm{old},i} + r_i \big(X_j - X_i\big), \quad (7)$$

where $r_i$ is a random number in the range $[0, 1]$.

If the new position $X_{\mathrm{new},i}$ is better than the old one $X_{\mathrm{old},i}$, the old position is replaced by the new one; otherwise, the position of the $i$th individual is not changed. The detailed algorithm is shown in Algorithm 2.

Initialize N (number of learners) and D (number of dimensions)
Initialize learners X and evaluate all learners
Denote the best learner as X_teacher and the mean of all learners as Mean
while (stopping condition not met)
for each learner X_i of the class % Teaching phase
    T_F = round(1 + rand(0, 1))
    X_new = X_i + rand(0, 1) * (X_teacher − T_F * Mean)
    Accept X_new if it is better than X_i
endfor
for each learner X_i of the class % Learning phase
   Randomly select one learner X_j, such that j ≠ i
   if X_i better than X_j
      X_new = X_i + rand(0, 1) * (X_i − X_j)
   else
      X_new = X_i + rand(0, 1) * (X_j − X_i)
   endif
   Accept X_new if it is better than X_i
endfor
Update X_teacher and Mean
endwhile
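The learner phase of Algorithm 2 admits a similarly compact Python sketch (same conventions and import as the teacher-phase snippet above):

def learner_phase(pop, fit, f):
    # TLBO learner phase, eqs. (6)-(7)
    n, dim = pop.shape
    for i in range(n):
        j = np.random.choice([k for k in range(n) if k != i])  # partner, j != i
        r = np.random.rand(dim)
        if fit[i] < fit[j]:
            new = pop[i] + r * (pop[i] - pop[j])  # eq. (6): X_i is better
        else:
            new = pop[i] + r * (pop[j] - pop[i])  # eq. (7): X_j is better
        new_fit = f(new)
        if new_fit < fit[i]:                      # accept only if improved
            pop[i], fit[i] = new, new_fit
    return pop, fit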

4. Teaching-Learning-Based Optimization Algorithm with PSO (TLBO-PSO)

4.1. The Main Idea of TLBO-PSO

As previously reviewed, the main idea of TLBO is to imitate the teaching-learning process in a class. The teacher tries to disseminate knowledge among the learners to increase the knowledge level of the whole class, and each learner also studies the knowledge of others to improve his or her grade. Algorithm 2 shows that, in the teacher phase, all individuals update their positions based on the distance between one or two times the mean solution and the teacher. An individual also renews its position based on the distance between itself and a randomly selected individual from the class. Equation (4) indicates that the teacher improves the grades of the students only through the mean grade of the class; the distance between the teacher and each student is not considered in the teacher phase. In PSO, the difference between the current best individual and an individual helps that individual improve its performance. Based on this idea, this PSO mechanism is introduced into TLBO to improve the learning efficiency of the TLBO algorithm. The main change to TLBO is in the updating equation: (4) is modified as

$$X_{\mathrm{new},i} = X_{\mathrm{old},i} + r_1 \big(X_{\mathrm{teacher}} - T_F \cdot \mathrm{Mean}\big) + r_2 \big(X_{\mathrm{teacher}} - X_{\mathrm{old},i}\big), \quad (8)$$

where $r_1$ and $r_2$ are random numbers in the range $[0, 1]$. With this modification, the performance of the TLBO algorithm might be improved.
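In code, the modification changes only the update line of the teacher-phase sketch given earlier; the following hedged version illustrates (8):

def teacher_phase_pso(pop, fit, f):
    # Modified teacher phase of TLBO-PSO, eq. (8)
    teacher = pop[fit.argmin()]
    mean = pop.mean(axis=0)
    for i in range(len(pop)):
        tf = np.random.randint(1, 3)              # T_F in {1, 2}
        r1 = np.random.rand(pop.shape[1])
        r2 = np.random.rand(pop.shape[1])
        # The extra r2 term pulls each student toward the teacher, so the
        # update stays nonzero even when Mean coincides with the teacher.
        new = pop[i] + r1 * (teacher - tf * mean) + r2 * (teacher - pop[i])
        new_fit = f(new)
        if new_fit < fit[i]:
            pop[i], fit[i] = new, new_fit
    return pop, fit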

In the original TLBO algorithm, all genes of each individual are compared with those of all other individuals to remove duplicate individuals. This operator incurs a heavy computational cost, and the number of function evaluations it requires is not clearly known. In our opinion, duplication testing is not needed in every generation, especially at the beginning of evolution. Assume that two individuals have the same genes in some generation; their new positions might still differ because $T_F$ is a random number (1 or 2). At the beginning of evolution, all individuals generate better positions easily, and the random $T_F$ may help maintain the diversity of the class. In the anaphase of evolution, however, the positions of individuals might be close to each other. When the mean solution equals the best solution, an individual either does not change ($T_F = 1$) or its genes change so much that the individual is destroyed to a large degree ($T_F = 2$), so that a better individual is difficult to generate. To decrease the computational effort of comparing all the individuals, the duplicate-removal process of the original TLBO is deleted in the improved algorithm, and a mutation operator based on the best fitness of successive generations is introduced instead. If the best fitness remains unchanged or changes only slightly over several consecutive generations, individuals are randomly selected for mutation according to a mutation probability. To maintain the global performance of the algorithm, the best individual is never mutated. The subroutine for the mutation is described in Algorithm 3.

pop = sort(pop);
if abs(bestfit(gen) − bestfit(gen − 1)) < ε then num = num + 1; else num = 0;
  if (num == K)
     num = 0;
     for i = 2 : popsize
       if rand(1) < pc
        d = ceil(rand(1) * dimsize);
        pop(i, d) = pop(i, d) + pm * rands(1);
        if pop(i, d) > X_max then pop(i, d) = X_max;
        if pop(i, d) < X_min then pop(i, d) = X_min;
       end
     end
  end
end

In Algorithm 3, $K$ is the preset number of stagnant generations, $p_c$ is the mutation probability, and $\varepsilon$ is a small number given by the designer. $p_m$ is a mutation parameter between 0.01 and 0.1.
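A Python rendering of Algorithm 3 might look as follows. The stagnation threshold K = 20 and the bound defaults are illustrative assumptions (the paper does not state K), while pc, pm, and ε follow the values reported in Section 5.1:

def mutate_on_stagnation(pop, fit, best_hist, counter, K=20, eps=0.05,
                         pc=0.6, pm=0.01, x_min=-100.0, x_max=100.0):
    # Stagnation-triggered mutation (Algorithm 3); K and bounds are assumed values
    if len(best_hist) >= 2 and abs(best_hist[-1] - best_hist[-2]) < eps:
        counter += 1                               # best fitness barely changed
    else:
        counter = 0
    if counter == K:
        counter = 0
        best = fit.argmin()                        # the best individual is never mutated
        for i in range(len(pop)):
            if i != best and np.random.rand() < pc:
                d = np.random.randint(pop.shape[1])         # one random dimension
                pop[i, d] += pm * np.random.uniform(-1, 1)  # small random perturbation
                pop[i, d] = np.clip(pop[i, d], x_min, x_max)
    return pop, counter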

4.2. The Steps of TLBO-PSO

Step 1. Set the maximum and minimum of position $X_{\max}$ and $X_{\min}$, the maximal evolution generation maxgen, the mutation probability $p_c$ and mutation parameter $p_m$, the population size popsize, and the dimension size $D$ of the task. Initialize the population pop as follows:

$$\mathrm{pop}_{i,j} = X_{\min} + \mathrm{rand}(0, 1)\big(X_{\max} - X_{\min}\big), \quad (9)$$

where $\mathrm{rand}(0, 1)$ is a random number in the range $[0, 1]$.

Step 2. Evaluate the individuals, select the best individual as the teacher, and calculate the mean solution of the population.

Step 3. For each individual, update its position according to (8). If $X_{\mathrm{new}}$ is better than $X_{\mathrm{old}}$, then $X_{\mathrm{old}} = X_{\mathrm{new}}$.

Step 4. For each individual, randomly select another individual, update its position according to (6) and (7), and choose the better solution of $X_{\mathrm{new}}$ and $X_{\mathrm{old}}$ as the new position of the individual.

Step 5. Execute mutation operator for the population according to Algorithm 3.

Step 6. If the termination condition of TLBO-PSO is not satisfied, go back to Step 2; otherwise, terminate the algorithm.
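Putting Steps 1-6 together, a compact driver might look as follows, reusing the sketches given earlier (teacher_phase_pso, learner_phase, and mutate_on_stagnation); max_fes = 50000 and popsize = 30 match the experimental setup in Section 5.1, while the bounds are illustrative:

def tlbo_pso(f, dim, popsize=30, max_fes=50000, x_min=-100.0, x_max=100.0):
    # Driver sketch for TLBO-PSO (Steps 1-6); bound defaults are illustrative
    pop = x_min + np.random.rand(popsize, dim) * (x_max - x_min)  # Step 1, eq. (9)
    fit = np.apply_along_axis(f, 1, pop)                          # Step 2
    fes, best_hist, counter = popsize, [], 0
    while fes < max_fes:                                          # Step 6
        pop, fit = teacher_phase_pso(pop, fit, f)                 # Step 3, eq. (8)
        pop, fit = learner_phase(pop, fit, f)                     # Step 4, eqs. (6)-(7)
        fes += 2 * popsize
        best_hist.append(fit.min())
        pop, counter = mutate_on_stagnation(pop, fit, best_hist, counter,
                                            x_min=x_min, x_max=x_max)  # Step 5
        fit = np.apply_along_axis(f, 1, pop)   # re-evaluate after possible mutation
        fes += popsize
    return pop[fit.argmin()], fit.min()

For example, tlbo_pso(lambda x: np.sum(x**2), dim=30) returns the best solution found for the 30-dimensional sphere function within the evaluation budget.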

5. Simulation Experiments

To test the performance of the improved TLBO-PSO algorithm, nine benchmark functions are simulated in this section. These nine benchmark functions are listed in Table 1. In particular, in order to compare the performance of the proposed TLBO-PSO with variants of TLBO and PSO, CLPSO [22], TLBO [15], and ETLBO [21] are selected and simulated.

5.1. Parameters Setting

To reduce statistical error, each function in the paper is independently simulated over 30 runs, and the mean results are used in the comparison. The function value $f(X)$ is defined as the fitness function. All the experiments are carried out on the same machine with a Celeron 2.26 GHz CPU, 512 MB memory, and the Windows XP operating system, using Matlab 7.0. All functions are simulated in 10 and 30 dimensions. The nine functions are summarized in Table 1. "Range" is the lower and upper bounds of the variables. "$X^*$" is the theoretical global minimum solution. "Acceptance" is the acceptable solution of each function. For the CLPSO, TLBO, and ETLBO algorithms, the training parameters are the same as those used in the corresponding references, except that the maximal number of FEs is 50000 and the population size is 30. The elitism size is 2 in the ETLBO algorithm. When the change of the best fitness is smaller than $\varepsilon$, mutation will possibly be triggered. In our experiments, the mutation probability $p_c$ is 0.6, $p_m$ is 0.01, and $\varepsilon$ is 0.05.

5.2. Comparisons on the Solution Accuracy

The performance of different algorithms for 10- and 30-dimensional functions in terms of the best solution, the mean (Mean), and standard deviation (STD) of the solutions obtained in the 30 independent runs is listed in Tables 2 and 3. Boldface in the tables indicates the best solution among those obtained by all four contenders.

The results for 10-dimensional functions in Table 2 display that the improved TLBO (TLBO-PSO) outperforms all the other algorithms in terms of the mean best solutions and standard deviations (STD) for functions , , , and . For function , ETLBO has the best performance. For function , the mean best solution of TLBO-PSO is the best, and the STD of TLBO is the smallest. For functions and , CLPSO has the best performance. For function , three TLBOs have the same solutions in terms of the mean best solutions and standard deviations (STD).

The results for 30-dimensional functions in Table 3 indicate that TLBO-PSO also has the best performance in terms of the mean best solutions and standard deviations (STD) for functions , , , and with 30 dimensions. TLBO has the best performance for function . For function , TLBO and ETLBO have the same performance in terms of the mean best solutions and standard deviations (STD). For function , the mean best solution of TLBO is the smallest, and the standard deviation of TLBO-PSO is the smallest. Three TLBOs have the same solutions for functions and .

5.3. Comparisons on the Convergence Speed and Successful Ratios

The mean number of function evaluations (FEs) is often used to measure the convergence speed of algorithms. In this paper, the mean value of FEs is used to measure the speed of all algorithms. The average FEs (when the algorithm is globally convergent) of all algorithms for the nine functions with 10 and 30 dimensions are shown in Tables 4 and 5. Boldface in the tables indicates the best result among those obtained by all algorithms. When an algorithm is not globally convergent in all 30 runs, its mFEs is represented as "NaN." The successful ratios of the different algorithms for the nine functions are also shown in the tables.

Tables 4 and 5 display that the mFEs of TLBO-PSO are the smallest for most of the functions except for function . In terms of mean FEs for the 10-dimensional function , ETLBO is the best, and none of the four algorithms converge to the acceptable solution for the 30-dimensional function . For function , all algorithms except CLPSO converge to the global optima with 100% successful ratios on 30 dimensions. For function , CLPSO converges to the optima with a 100% successful ratio in the 10-dimensional case; none of the algorithms converge in the 30-dimensional case except TLBO-PSO, whose successful ratio is 10%. The convergence curves of the average best fitness of the four algorithms are shown in Figure 1. The figures indicate that TLBO-PSO has the best performance for most of the functions. According to the "no free lunch" theorem [28], one algorithm cannot offer better performance than all the others on every aspect or on every kind of problem. This is also observed in our experimental results. For example, the merit of TLBO-PSO is worse than those of TLBO and ETLBO for the 10-dimensional function , but it is better than most of the algorithms for the other functions.

6. Conclusions

An improved TLBO-PSO algorithm, which considers the difference between the solution of the best individual and that of the individual to be renewed, is designed in this paper. A mutation operator is introduced to improve the global convergence performance of the algorithm. The performance of TLBO-PSO is improved for most functions, especially the 30-dimensional ones, in terms of convergence accuracy and mFEs. The local convergence of TLBO-PSO is mainly caused by the loss of diversity in the later stage of the evolution.

Further work includes research into the adaptive selection of parameters to make the algorithm more efficient. Moreover, a better way to improve the TLBO algorithm is needed for functions whose optimal solutions are displaced from all zeroes. Furthermore, the algorithm may be applied to constrained and dynamic optimization domains. It is expected that TLBO-PSO will be used in real-world optimization problems.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is partially supported by the National Natural Science Foundation of China under Grants 61304082 and 61572224, by the National Science Fund for Distinguished Young Scholars under Grant 61425009, and by the Major Project of Natural Science Research in Anhui Province under Grant KJ2015ZD36.