Computational Intelligence and Neuroscience


Research Article | Open Access

Volume 2015 | Article ID 292576 | 15 pages | https://doi.org/10.1155/2015/292576

Nonlinear Inertia Weighted Teaching-Learning-Based Optimization for Solving Global Optimization Problem

Academic Editor: Michael Schmuker
Received: 28 Jun 2015
Revised: 11 Aug 2015
Accepted: 17 Aug 2015
Published: 02 Sep 2015

Abstract

The teaching-learning-based optimization (TLBO) algorithm, proposed in recent years, simulates the teaching-learning phenomenon of a classroom to effectively solve global optimization of multidimensional, linear, and nonlinear problems over continuous spaces. In this paper, an improved teaching-learning-based optimization algorithm is presented, called the nonlinear inertia weighted teaching-learning-based optimization (NIWTLBO) algorithm. This algorithm introduces a nonlinear inertia weighted factor into the basic TLBO to control the memory rate of learners and uses a dynamic inertia weighted factor to replace the original random number in the teacher phase and the learner phase. The proposed algorithm is tested on a number of benchmark functions, and its performance is compared against the basic TLBO and some other well-known optimization algorithms. The experimental results show that the proposed algorithm has a faster convergence rate and better performance than the basic TLBO and the other algorithms as well.

1. Introduction

Most swarm intelligence optimization studies and applications have focused on nature-inspired algorithms. Numerous population-based and nature-inspired optimization algorithms have been presented, such as Ant Colony Optimization (ACO), the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), the Artificial Bee Colony (ABC), and Differential Evolution (DE). These optimization algorithms are based on different natural phenomena. ACO works based on the behavior of an ant colony searching for food from the source to a destination [1, 2]. GA applies Darwin's theory of the survival of the fittest to optimization problems [3, 4]. PSO emulates the collaborative behavior of birds flocking and fish schooling in searching for food [5–7]. ABC uses the foraging behavior of honey bees [8–10]. DE is derived from the Genetic Algorithm and is an efficient global optimizer in the continuous search domain [11, 12]. These algorithms have been applied to many engineering optimization problems and proven effective in solving specific types of problems. However, each algorithm has its own advantages and disadvantages in solving diverse problems. Generally, a good optimization algorithm should possess three essential properties. First, the algorithm should have the ability to obtain the true global optimum value. Second, the convergence speed of the algorithm should be fast. Third, the program should have a minimum of control parameters so that it is easy to use. An optimization algorithm that meets all three conditions at the same time would be a great algorithm. Some optimization techniques often achieve globally optimal results, but at the cost of convergence speed; such algorithms tend to focus on the quality of computational results rather than the convergence speed. However, higher calculation accuracy and faster convergence speed are the ultimate aims in practical applications.

Recently, Rao et al. [13, 14] proposed the teaching-learning-based optimization (TLBO) algorithm, inspired by the phenomenon of teaching and learning in a class. TLBO requires only the common control parameters, such as population size and number of generations, and does not require any algorithm-specific control parameters; that is, it is a parameter-less algorithm [15]. Thus, there is no burden of tuning control parameters in the TLBO algorithm. Hence, the TLBO algorithm is simpler, more effective, and involves relatively less computational cost. More importantly, the TLBO algorithm has the ability to achieve better results at a comparatively faster convergence speed than the algorithms mentioned above. Therefore, the TLBO algorithm has been successfully applied in diverse optimization fields such as mechanical engineering, task scheduling, production planning and control, and vehicle-routing problems in transportation [16–20]. Similar to other swarm intelligence optimization algorithms, the basic TLBO can be improved further. In order to improve the performance of TLBO, several variants of the TLBO have been proposed. Rao and Patel presented an elitist TLBO (ETLBO) algorithm [15] to solve complex constrained optimization problems and used a modified version of the TLBO algorithm [17] to solve the multiobjective optimization problem of heat exchangers. Sultana and Roy [19] proposed a quasi-oppositional teaching-learning-based optimization (QOTLBO) methodology to find the optimal location of the distributed generator to simultaneously optimize power loss, voltage stability index, and voltage deviation of a radial distribution network. Ghasemi et al. [20] used a Lévy mutation strategy based on TLBO for optimal settings of the control variables of the optimal power flow problem. Furthermore, some improved TLBO algorithms have been proposed to solve the global function optimization problem [21–24] and the multiobjective optimization problem [17, 25, 26].

In this paper, we propose a novel improved TLBO, called nonlinear inertia weighted TLBO (NIWTLBO). A nonlinear inertia weighted factor is introduced into the basic TLBO to control the memory rate of learners, and another dynamic inertia weighted factor is used to replace the original random number in the teacher phase and the learner phase. As a result, NIWTLBO has a faster convergence speed and higher calculation accuracy for most optimization problems than the basic TLBO. The performance of NIWTLBO in solving global function optimization problems is compared with the basic TLBO and other optimization algorithms. The analysis results show that the proposed algorithm outperforms most of the other algorithms investigated in this paper.

The rest of this paper is organized as follows. Section 2 describes the basic TLBO algorithm in detail. In Section 3, the proposed NIWTLBO algorithm is introduced. Section 4 provides numerical experiments and results demonstrating the performance of NIWTLBO in comparison with other optimization algorithms. Finally, our conclusions are presented in Section 5.

2. Teaching-Learning-Based Optimization

The basic TLBO algorithm mainly consists of two parts, namely, the teacher phase and the learner phase. In the teacher phase, the students learn from the teacher to bring their knowledge level closer to the teacher's. In the learner phase, the students learn from interaction with other individuals to increase their knowledge. In the TLBO algorithm, a group of learners is considered as a population. Each learner is analogous to an individual of the evolutionary algorithm. The different subjects offered to the learners are considered as the design variables of the optimization problem. A learner's result is analogous to the fitness value of the objective function for optimization problems. The best learner (i.e., the best solution in the entire population) is considered as the teacher. The best solution is the best value of the objective function of the given optimization problem. The design variables are the input parameters of the objective function.

The process of the basic TLBO algorithm is described below.

2.1. Initialization

The notations used in TLBO are described as follows: NP is the number of learners in a class (i.e., the population size); D is the number of subjects offered to the learners (i.e., the number of dimensions of the design variables); MAXITER is the maximum number of allowable iterations; X_i^g denotes a learner in the class (i.e., an individual in the population) at any iteration g; x_i,j^g denotes the result of the jth subject offered to the ith learner at iteration g; X_teacher^g represents the teacher, that is, the best learner in the class at iteration g.

The population is randomly initialized within the bounded search space, giving an NP × D matrix. The values of x_i,j^0 are assigned randomly using the equation

x_i,j^0 = x_j^min + rand (x_j^max − x_j^min),    (1)

where i = 1, 2, ..., NP and j = 1, 2, ..., D. rand represents a uniformly distributed random variable within the range [0, 1]; x_j^min represents the lower bound and x_j^max the upper bound of the jth design variable.
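For illustration, a minimal Python sketch of this initialization step is given below. The function name, the NumPy-based implementation, and the array layout (one row per learner, one column per subject) are our own choices for illustration and are not taken from the paper.

```python
import numpy as np

def initialize_population(pop_size, dim, lower, upper, rng=None):
    """Randomly initialize pop_size learners within the given bounds.

    Each entry x[i, j] is drawn uniformly from [lower[j], upper[j]],
    mirroring x_ij = x_j_min + rand * (x_j_max - x_j_min) in (1).
    """
    rng = np.random.default_rng() if rng is None else rng
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    return lower + rng.random((pop_size, dim)) * (upper - lower)

# Example: 40 learners, 30 subjects, Sphere-style bounds [-100, 100]
pop = initialize_population(40, 30, [-100.0] * 30, [100.0] * 30)
```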

2.2. Teacher Phase

In this phase, the algorithm simulates the students learning from the teacher. A good teacher can bring his or her learners up to his or her level in terms of knowledge. Hence, the mean result of a class may increase from a low level towards the teacher's level. But, in fact, it is impossible for the mean result of a class to reach the teacher's level: because of individual differences and the forgetfulness of memory, the learners cannot gain all the knowledge of the teacher. A teacher can increase the mean result of a class only to a certain value, which depends on the capability of the whole class.

Let M_j^g be the mean result of the learners in a particular subject j (j = 1, 2, ..., D) and let X_teacher^g be the teacher at any iteration g. X_teacher^g will try to move the mean M_j^g towards its own level, which becomes the new mean. Difference_Mean_j^g is the difference between the existing mean result of each subject and the corresponding result of the teacher for each subject at iteration g:

Difference_Mean_j^g = r_j (x_teacher,j^g − T_F M_j^g),    (2)

where x_teacher,j^g is the result of the teacher in subject j at iteration g, r_j is a random number in the range [0, 1], and T_F is the teaching factor, which decides the value of the mean to be changed. T_F can be either 1 or 2:

T_F = round[1 + rand(0, 1)].    (3)

The values of r_j and T_F are generated randomly in the algorithm, and neither of these parameters is supplied as input to the algorithm. The solution is updated according to the difference between the existing and the new means:

x_new,i,j^g = x_old,i,j^g + Difference_Mean_j^g.    (4)

In every iteration, X_new,i^g is the updated value of X_old,i^g. Because the optimization problem is a minimization problem, the goal is to find the minimum of the objective function f. If the new value gives a better function value, then the old value is replaced by the new value:

X_i^(g+1) = X_new,i^g if f(X_new,i^g) < f(X_old,i^g); otherwise X_i^(g+1) = X_old,i^g,    (5)

where f(X_new,i^g) and f(X_old,i^g) represent the new and old total results of the ith learner at iteration g, respectively. All the accepted new values at the end of the teacher phase become the input to the learner phase.
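A compact Python sketch of this teacher phase is shown below. The update and the greedy acceptance follow the standard TLBO formulation of (2)–(5); the helper names and the NumPy vectorization are illustrative assumptions rather than code from the paper.

```python
import numpy as np

def teacher_phase(pop, fitness, objective, rng):
    """One TLBO teacher phase: pull every learner towards the teacher.

    Implements X_new = X_old + r * (X_teacher - T_F * Mean) and keeps
    X_new only when it improves the objective (minimization).
    """
    teacher = pop[np.argmin(fitness)]      # best learner acts as teacher
    mean = pop.mean(axis=0)                # mean result of each subject
    for i in range(pop.shape[0]):
        t_f = rng.integers(1, 3)           # teaching factor, 1 or 2
        r = rng.random(pop.shape[1])       # random numbers in [0, 1)
        candidate = pop[i] + r * (teacher - t_f * mean)
        f_new = objective(candidate)
        if f_new < fitness[i]:             # greedy selection, as in (5)
            pop[i], fitness[i] = candidate, f_new
    return pop, fitness
```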

2.3. Learner Phase

In the learner phase, the algorithm simulates the learning of the learners through interaction among themselves. A learner interacts randomly with other learners to increase his or her knowledge. If a learner has more knowledge than the others, the other learners can quickly achieve new knowledge by learning from him or her. In this learning process, two learners are randomly selected: one is X_i^g and the other is X_j^g, with i ≠ j. The update formula is given as

X_new,i^g = X_old,i^g + r_i (X_i^g − X_j^g) if f(X_i^g) < f(X_j^g); otherwise X_new,i^g = X_old,i^g + r_i (X_j^g − X_i^g),    (6)

where r_i is a random number in the range [0, 1] and f(X_i^g) and f(X_j^g) represent the total results of the ith and jth learners at iteration g, respectively. Accept the new value if it improves the value of the objective function. Similarly, use (5) to update the learner.
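The learner phase can be sketched in the same style; the function and variable names are ours, and details beyond selecting a random partner with i ≠ j follow the common TLBO implementation rather than code published with this paper.

```python
import numpy as np

def learner_phase(pop, fitness, objective, rng):
    """One TLBO learner phase: each learner interacts with a random partner.

    Moves towards the partner if the partner is better, away otherwise,
    and keeps the new position only when it improves the objective.
    """
    n, dim = pop.shape
    for i in range(n):
        j = rng.integers(n - 1)
        j = j + 1 if j >= i else j         # random partner with j != i
        r = rng.random(dim)
        if fitness[i] < fitness[j]:
            candidate = pop[i] + r * (pop[i] - pop[j])
        else:
            candidate = pop[i] + r * (pop[j] - pop[i])
        f_new = objective(candidate)
        if f_new < fitness[i]:             # greedy selection, as in (5)
            pop[i], fitness[i] = candidate, f_new
    return pop, fitness
```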

In each iteration of TLBO, it is necessary to detect repeated solutions in the entire population. If there is a repeated solution, it is removed and a new individual is generated randomly. This expands the diversity of the population and avoids premature convergence of the algorithm. After a number of generations, the knowledge level of the entire class smoothly approaches a point that is considered the teacher, and the algorithm converges to a solution.
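One possible implementation of this duplicate-elimination step is sketched below; it assumes exact row-wise comparison of learners, which the paper does not specify, so treat it as just one reasonable choice.

```python
import numpy as np

def replace_duplicates(pop, lower, upper, rng):
    """Replace every duplicated learner with a fresh random individual."""
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    # Indices of the first occurrence of each distinct row.
    _, first_idx = np.unique(pop, axis=0, return_index=True)
    dup_mask = np.ones(pop.shape[0], dtype=bool)
    dup_mask[first_idx] = False            # True only for repeated rows
    n_dup = int(dup_mask.sum())
    if n_dup:
        pop[dup_mask] = lower + rng.random((n_dup, pop.shape[1])) * (upper - lower)
    return pop
```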

2.4. Algorithm Termination

The algorithm is terminated after MAXITER iterations. The details of the TLBO algorithm can be found in the literature [13, 14].

3. Nonlinear Inertia Weighted Teaching-Learning-Based Optimization

The basic TLBO algorithm is based on the teaching-learning phenomenon of a classroom. In the teacher phase, the teacher tries to shift the mean of the learners towards himself or herself by teaching. In the learner phase, learners improve their knowledge through interaction among themselves. In the process of teaching and learning, learners improve their level by accumulating knowledge; in other words, they learn new knowledge based on existing knowledge. In the real world, the teacher wishes that his or her students would reach a knowledge level equal to his or her own as fast as possible. But this is impossible for a student because of his or her forgetting characteristics. In fact, a student usually forgets a part of the existing knowledge due to the physiological phenomena of the brain. As the number of learning iterations increases, more and more of the existing knowledge is retained. The learning curve presented by Ebbinghaus describes how fast knowledge is acquired in the learning process: the sharpest increase occurs after the first try and then gradually evens out, meaning that less and less new knowledge is retained after each repetition. Like the forgetting curve, the learning curve is exponential. So it is necessary to add a memory weight to the existing knowledge of the student to simulate this learning scenario. According to this phenomenon, a nonlinear inertia weighted factor is introduced into (4) and (6) of the basic TLBO, and this factor is considered a memory weighted factor which controls the memory rate of learners. This nonlinear inertia weighted factor scales the existing knowledge of the learner when computing the new value. In contrast to the basic TLBO, in our algorithm the part contributed by the previous value of the learner is decided by a weighted factor while computing the new learner value.

Accordingly, to make the characteristic of memory conform to the learning curve, the nonlinear inertia weighted factor w (i.e., the memory rate) is nonlinearly increased from w_min to 1.0 over time; its value is given by (7), where iter is the current iteration number, MAXITER is the maximum number of allowable iterations, and w_min is the minimum value of the nonlinear inertia weighted factor w. The value of w_min should be above 0.5 (here it is selected as 0.6); otherwise the individuals become worse because they remember too little existing knowledge at first. Hence, if the value of w_min is too small, the algorithm cannot converge to the true global optimal solution. The w curve (i.e., the memory rate curve) is shown in Figure 1. The nonlinear inertia weighted factor is applied to the new equations shown as (10) and (11). In this modified TLBO, the individuals try to sample diverse zones of the search space during the early stages of the search. During the later stages, the individuals adjust the movements of trial solutions finely so that they can explore the interior of a relatively small space.
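Because the exact expression for w in (7) is not reproduced here, the following sketch uses an assumed concave square-root schedule purely to illustrate a weight that rises quickly from w_min and then levels off towards 1.0; it is a stand-in that matches the described shape, not the paper's formula.

```python
import numpy as np

def memory_rate(iteration, max_iter, w_min=0.6):
    """Illustrative nonlinear inertia (memory) weight rising from w_min to 1.0.

    Assumed stand-in for (7): a concave schedule with a sharp early increase
    that gradually evens out, in the spirit of the Ebbinghaus learning curve.
    """
    return w_min + (1.0 - w_min) * np.sqrt(iteration / max_iter)
```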

In the teacher phase, in order to obtain a new set of better learners, the difference between the existing mean result and the corresponding result of the teacher is added to the existing population of learners. Similarly, to obtain a new set of better learners in the learner phase, two learners are selected randomly, and the difference between their results in each corresponding subject is added to the existing learner. As shown in (2) and (6), the difference value added to the existing learner is formed from the difference of results and the random number r. Therefore, in the teacher and learner phases, the difference value is decided by the random number r to a large extent. In our proposed method, we modify the random number as shown in (8), where rand is a uniformly distributed random number within the range [0, 1] and the base weight w0 should be neither too big nor too small. Here, w0 is selected to be 0.5, which conforms to the dynamic inertia weight proposed by Eberhart and Shi [28]. So (8) is modified to give (9).

Equation (9) generates a random number in the range [0.5, 1], which is similar to the method proposed by Satapathy and Naik [23]. We call this value the dynamic inertia weighted factor. The mean value of the random number is therefore raised from 0.5 to 0.75. This increases the probability of stochastic variations and enlarges the difference value added to the existing learners, so as to improve population diversity, avoid prematurity in the search process, and increase the ability of the basic TLBO to escape from local optima. On a multimodal function surface, the original random weighted factor leads to most of the population clustering near a local optimum point. However, the population with the new dynamic inertia weight has more chances to jump out of local optima and continuously move towards the global optimum point until the true global optimum is reached.
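Assuming that (9) takes the Eberhart-Shi form 0.5 + rand/2, which matches the stated range [0.5, 1] and mean of 0.75, the dynamic inertia weighted factor can be sketched in one line; the function name is ours.

```python
import numpy as np

def dynamic_inertia(rng, size):
    """Dynamic inertia weighted factor drawn uniformly from [0.5, 1.0).

    Replaces the plain U(0, 1) random number of basic TLBO, raising its
    mean from 0.5 to 0.75 (assumed form of (9), cf. Eberhart and Shi).
    """
    return 0.5 + rng.random(size) / 2.0
```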

With the nonlinear inertia weighted factor and the dynamic inertia weighted factor, the new set of improved learners in the teacher phase can be expressed using (10), and the new set of improved learners in the learner phase can be expressed using (11), where the nonlinear inertia weighted factor w is given by (7) and the dynamic inertia weighted factor is given by (9).
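The sketch below shows one plausible reading of (10) and (11), in which the nonlinear factor w scales the learner's previous value and the dynamic factor replaces the plain random number; since the exact equations are not reproduced here, this should be read as an assumption-laden illustration rather than the authors' implementation.

```python
import numpy as np

def niwtlbo_teacher_phase(pop, fitness, objective, w, rng):
    """Teacher phase with the weighting described above (assumed form of (10)).

    Candidate update: X_new = w * X_old + r_dyn * (X_teacher - T_F * Mean),
    with r_dyn drawn uniformly from [0.5, 1) and greedy acceptance.
    """
    teacher = pop[np.argmin(fitness)]
    mean = pop.mean(axis=0)
    for i in range(pop.shape[0]):
        t_f = rng.integers(1, 3)
        r_dyn = 0.5 + rng.random(pop.shape[1]) / 2.0
        candidate = w * pop[i] + r_dyn * (teacher - t_f * mean)
        f_new = objective(candidate)
        if f_new < fitness[i]:
            pop[i], fitness[i] = candidate, f_new
    return pop, fitness

def niwtlbo_learner_phase(pop, fitness, objective, w, rng):
    """Learner phase with the same weighting applied (assumed form of (11))."""
    n, dim = pop.shape
    for i in range(n):
        j = rng.integers(n - 1)
        j = j + 1 if j >= i else j
        r_dyn = 0.5 + rng.random(dim) / 2.0
        step = pop[i] - pop[j] if fitness[i] < fitness[j] else pop[j] - pop[i]
        candidate = w * pop[i] + r_dyn * step
        f_new = objective(candidate)
        if f_new < fitness[i]:
            pop[i], fitness[i] = candidate, f_new
    return pop, fitness
```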

4. Experiments on Benchmark Functions

In this section, NIWTLBO is applied to several benchmark functions to evaluate its performance with different dimensions and search spaces, in comparison with the basic TLBO algorithm and other optimization algorithms available in the literature. All tests are evaluated on a laptop with an Intel Core i5 2.67 GHz processor and 2 GB RAM. The algorithm is coded in the MATLAB programming language and run in the MATLAB 2012a environment. This section provides the results obtained by the NIWTLBO algorithm compared to the basic TLBO and other intelligent optimization algorithms. The details of the 24 benchmark functions with different characteristics, such as unimodality/multimodality and separability/nonseparability, are shown in Table 1. "C" denotes the characteristic of the function; "D" is the dimension of the function; the "range" of each function gives the lower and upper bounds of the variables; "f_min" is the theoretical global minimum.


Table 1: Details of the benchmark functions.

Number | Function | C | D | Range
1 | Sphere | US | 30 | [−100, 100]
2 | SumSquares | US | 30 | [−100, 100]
3 | Tablet | US | 30 | [−100, 100]
4 | Quartic | US | 30 | [−1.28, 1.28]
5 | Schwefel 1.2 | UN | 30 | [−100, 100]
6 | Schwefel 2.22 | UN | 30 | [−10, 10]
7 | Schwefel 2.21 | UN | 30 | [−100, 100]
8 | Zakharov | UN | 30 | [−5, 10]
9 | Rosenbrock | US | 30 | [−4, 4]
10 | Schaffer | MN | 2 | [−10, 10]
11 | Dropwave | MN | 2 | [−2, 2]
12 | Bohachevsky1 | MN | 2 | [−100, 100]
13 | Bohachevsky2 | MN | 2 | [−100, 100]
14 | Bohachevsky3 | MN | 2 | [−100, 100]
15 | Six-Hump Camel Back | MN | 2 | [−5, 5]
16 | Branin | MS | 2 | [−5, 15]
17 | Goldstein-Price | MN | 2 | [−2, 2]
18 | Ackley | MN | 30 | [−32, 32]
19 | Rastrigin | MN | 30 | [−5.12, 5.12]
20 | Griewank | MN | 30 | [−600, 600]
21 | Schwefel 2.26 | MN | 30 | [−500, 500]
22 | Multimod | MN | 30 | [−10, 10]
23 | Noncontinuous Rastrigin | MS | 30 | [−5.12, 5.12]
24 | Weierstrass | MS | 30 | [−0.5, 0.5]

C: characteristic; D: dimension; U: unimodal; M: multimodal; S: separable; N: nonseparable.

Table 2: Mean and standard deviation of the results obtained by PSO, ABC, DE, TLBO, and NIWTLBO over 30 independent runs.

Function (f_min) | Stat | PSO | ABC | DE | TLBO | NIWTLBO
Sphere (0) | Mean | 8.99E−12 | 9.91E−16 | 7.15E−27 | 1.85E−286 | 0
Sphere (0) | Std. | 5.92E−12 | 5.36E−16 | 1.06E−26 | 0 | 0
SumSquares (0) | Mean | 1.11E−10 | 7.81E−16 | 9.06E−26 | 1.57E−286 | 0
SumSquares (0) | Std. | 2.49E−10 | 1.32E−16 | 3.07E−26 | 0 | 0
Tablet (0) | Mean | 3.68E−08 | 9.54E−16 | 2.40E−26 | 7.66E−285 | 0
Tablet (0) | Std. | 1.64E−08 | 1.78E−16 | 1.87E−26 | 0 | 0
Quartic (0) | Mean | 5.84E−02 | 1.52E−01 | 4.03E−01 | 2.07E−02 | 2.03E−02
Quartic (0) | Std. | 3.83E−02 | 4.18E−02 | 1.29E−01 | 5.26E−02 | 3.52E−02
Schwefel 1.2 (0) | Mean | 2.47E+05 | 8.82E+03 | 2.21E+04 | 1.52E−84 | 0
Schwefel 1.2 (0) | Std. | 1.48E+05 | 1.28E+03 | 5.21E+03 | 2.97E−84 | 0
Schwefel 2.22 (0) | Mean | 5.16E−03 | 2.01E−14 | 4.31E−16 | 1.79E−143 | 4.45E−323
Schwefel 2.22 (0) | Std. | 6.94E−03 | 1.08E−14 | 1.04E−16 | 1.21E−143 | 0
Schwefel 2.21 (0) | Mean | 1.21E+00 | 5.49E+01 | 1.21E−02 | 8.31E−120 | 2.40E−315
Schwefel 2.21 (0) | Std. | 6.02E−01 | 1.38E+01 | 2.81E−03 | 4.05E−120 | 0
Zakharov (0) | Mean | 1.62E+02 | 2.59E+02 | 5.84E+01 | 5.95E−51 | 1.06E−319
Zakharov (0) | Std. | 6.33E+01 | 2.84E+01 | 7.01E+00 | 5.22E−51 | 0
Rosenbrock (0) | Mean | 3.01E+01 | 1.04E+01 | 2.43E+01 | 1.29E+01 | 1.83E+01
Rosenbrock (0) | Std. | 2.57E+01 | 2.57E+00 | 4.61E+00 | 5.28E+00 | 6.91E+00
Schaffer (−1) | Mean | −1 | −1 | −1 | −1 | −1
Schaffer (−1) | Std. | 0 | 0 | 0 | 0 | 0
Dropwave (−1) | Mean | −1 | −1 | −1 | −1 | −1
Dropwave (−1) | Std. | 0 | 0 | 0 | 0 | 0
Bohachevsky1 (0) | Mean | 0 | 0 | 0 | 0 | 0
Bohachevsky1 (0) | Std. | 0 | 0 | 0 | 0 | 0
Bohachevsky2 (0) | Mean | 0 | 0 | 0 | 0 | 0
Bohachevsky2 (0) | Std. | 0 | 0 | 0 | 0 | 0
Bohachevsky3 (0) | Mean | 0 | 8.46E−16 | 0 | 0 | 0
Bohachevsky3 (0) | Std. | 0 | 2.95E−16 | 0 | 0 | 0
Six-Hump Camel Back (−1.03163) | Mean | −1.03163 | −1.03163 | −1.03163 | −1.03163 | −1.03163
Six-Hump Camel Back (−1.03163) | Std. | 0 | 0 | 0 | 0 | 0
Branin (0.398) | Mean | 0.3979 | 0.3979 | 0.3979 | 0.3979 | 0.3979
Branin (0.398) | Std. | 0 | 0 | 0 | 0 | 0
Goldstein-Price (3) | Mean | 3 | 3 | 3 | 3 | 3
Goldstein-Price (3) | Std. | 8.11E−15 | 4.32E−15 | 1.36E−15 | 6.78E−16 | 6.56E−16
Ackley (0) | Mean | 1.18E+00 | 2.82E−13 | 2.49E−14 | 4.44E−15 | 8.66E−16
Ackley (0) | Std. | 3.85E−01 | 3.06E−14 | 6.07E−15 | 0 | 0
Rastrigin (0) | Mean | 1.08E+02 | 1.29E−13 | 9.33E+01 | 6.93E+00 | 0
Rastrigin (0) | Std. | 2.80E+01 | 2.57E−13 | 9.43E+00 | 5.92E+00 | 0
Griewank (0) | Mean | 6.77E−03 | 7.10E−03 | 0 | 0 | 0
Griewank (0) | Std. | 9.29E−03 | 9.56E−03 | 0 | 0 | 0
Schwefel 2.26 (−837.9658) | Mean | −8789.43 | −12561.79 | −11312.51 | −9178.59 | −8324.302
Schwefel 2.26 (−837.9658) | Std. | 4.63E+02 | 1.96E+02 | 1.58E+03 | 7.97E+02 | 1.71E+02
Multimod (0) | Mean | 8.69E−67 | 8.52E−19 | 4.66E−311 | 0 | 0
Multimod (0) | Std. | 1.74E−66 | 8.34E−19 | 0 | 0 | 0
Noncontinuous Rastrigin (0) | Mean | 1.83E+02 | 1.99E−14 | 6.94E+01 | 1.55E+01 | 0
Noncontinuous Rastrigin (0) | Std. | 3.15E+01 | 1.83E−14 | 9.13E+00 | 2.65E+00 | 0
Weierstrass (0) | Mean | 6.27E+01 | 1.12E−02 | 1.38E+01 | 0 | 0
Weierstrass (0) | Std. | 2.03E+01 | 7.73E−03 | 6.07E−01 | 0 | 0


Table 3: Mean and standard deviation of the number of fitness function evaluations required by PSO, ABC, DE, TLBO, and NIWTLBO.

Function | Stat | PSO | ABC | DE | TLBO | NIWTLBO
Sphere | Mean | 80,000 | 80,000 | 80,000 | 80,000 | 29,514
Sphere | Std. | 0 | 0 | 0 | 0 | 1.02E+02
SumSquares | Mean | 80,000 | 80,000 | 80,000 | 80,000 | 29,628
SumSquares | Std. | 0 | 0 | 0 | 0 | 1.23E+02
Tablet | Mean | 80,000 | 80,000 | 80,000 | 80,000 | 29,562
Tablet | Std. | 0 | 0 | 0 | 0 | 1.52E+02
Quartic | Mean | 80,000 | 80,000 | 80,000 | 80,000 | 80,000
Quartic | Std. | 0 | 0 | 0 | 0 | 0
Schwefel 1.2 | Mean | 80,000 | 80,000 | 80,000 | 80,000 | 39,416
Schwefel 1.2 | Std. | 0 | 0 | 0 | 0 | 1.09E+02
Schwefel 2.22 | Mean | 80,000 | 80,000 | 80,000 | 80,000 | 80,000
Schwefel 2.22 | Std. | 0 | 0 | 0 | 0 | 0
Schwefel 2.21 | Mean | 80,000 | 80,000 | 80,000 | 80,000 | 80,000
Schwefel 2.21 | Std. | 0 | 0 | 0 | 0 | 0
Zakharov | Mean | 80,000 | 80,000 | 80,000 | 80,000 | 80,000
Zakharov | Std. | 0 | 0 | 0 | 0 | 0
Rosenbrock | Mean | 80,000 | 80,000 | 80,000 | 80,000 | 80,000
Rosenbrock | Std. | 0 | 0 | 0 | 0 | 0
Schaffer | Mean | 12,432 | 43,636 | 8,686 | 9,688 | 3,029
Schaffer | Std. | 3.38E+02 | 3.03E+02 | 2.06E+02 | 2.29E+02 | 3.03E+02
Dropwave | Mean | 11,394 | 13,824 | 5,490 | 3,021 | 812
Dropwave | Std. | 3.26E+01 | 1.09E+02 | 1.53E+02 | 1.22E+02 | 3.32E+01
Bohachevsky1 | Mean | 9,532 | 3,263 | 3,992 | 2,266 | 842
Bohachevsky1 | Std. | 2.21E+02 | 7.52E+01 | 8.74E+01 | 3.23E+01 | 2.01E+01
Bohachevsky2 | Mean | 9,578 | 4,717 | 4,245 | 2,568 | 952
Bohachevsky2 | Std. | 1.33E+02 | 9.27E+01 | 1.17E+02 | 2.05E+01 | 2.56E+01
Bohachevsky3 | Mean | 9,792 | 80,000 | 5,376 | 2,875 | 965
Bohachevsky3 | Std. | 2.52E+02 | 0 | 1.26E+02 | 1.03E+02 | 3.12E+01
Six-Hump Camel Back | Mean | 1,997 | 1,372 | 1,781 | 712 | 2,560
Six-Hump Camel Back | Std. | 1.38E+02 | 1.17E+02 | 1.36E+02 | 5.93E+01 | 9.07E+01
Branin | Mean | 1,851 | 1,813 | 1,891 | 1,086 | 2,172
Branin | Std. | 1.17E+02 | 1.23E+02 | 1.04E+02 | 1.06E+02 | 1.23E+02
Goldstein-Price | Mean | 2,018 | 1,857 | 1,765 | 1,228 | 2,865
Goldstein-Price | Std. | 1.25E+02 | 1.48E+02 | 2.08E+02 | 6.85E+01 | 1.42E+02
Ackley | Mean | 80,000 | 80,000 | 80,000 | 80,000 | 80,000
Ackley | Std. | 0 | 0 | 0 | 0 | 0
Rastrigin | Mean | 80,000 | 80,000 | 80,000 | 80,000 | 1,436
Rastrigin | Std. | 0 | 0 | 0 | 0 | 3.02E+01
Griewank | Mean | 80,000 | 80,000 | 53,032 | 12,064 | 1,284
Griewank | Std. | 0 | 0 | 6.16E+02 | 9.37E+01 | 2.54E+01
Schwefel 2.26 | Mean | 80,000 | 80,000 | 80,000 | 80,000 | 80,000
Schwefel 2.26 | Std. | 0 | 0 | 0 | 0 | 0
Multimod | Mean | 80,000 | 80,000 | 80,000 | 28,304 | 1,427
Multimod | Std. | 0 | 0 | 0 | 1.05E+02 | 5.16E+01
Noncontinuous Rastrigin | Mean | 80,000 | 80,000 | 80,000 | 80,000 | 1,324
Noncontinuous Rastrigin | Std. | 0 | 0 | 0 | 0 | 1.22E+02
Weierstrass | Mean | 80,000 | 80,000 | 80,000 | 12,712 | 2,044
Weierstrass | Std. | 0 | 0 | 0 | 1.19E+02 | 1.21E+02


Table 4: Statistical significance of the difference of means between NIWTLBO and PSO, ABC, DE, and TLBO (two-tailed t-test at the 0.05 level).

Function | PSO | ABC | DE | TLBO
Sphere | + | + | + | +
SumSquares | + | + | + | +
Tablet | + | + | + | +
Quartic | + | + | + | ⋅
Schwefel 1.2 | + | + | + | +
Schwefel 2.22 | + | + | + | +
Schwefel 2.21 | + | + | + | +
Zakharov | + | + | + | +
Rosenbrock | ⋅ | ⋅ | ⋅ | ⋅
Schaffer | NA | NA | NA | NA
Dropwave | NA | NA | NA | NA
Bohachevsky1 | NA | NA | NA | NA
Bohachevsky2 | NA | NA | NA | NA
Bohachevsky3 | NA | + | NA | NA
Six-Hump Camel Back | NA | NA | NA | NA
Branin | NA | NA | NA | NA
Goldstein-Price | NA | NA | NA | NA
Ackley | + | + | + | +
Rastrigin | + | + | + | +
Griewank | + | + | NA | NA
Schwefel 2.26 | + | + | + | +
Multimod | + | + | NA | NA
Noncontinuous Rastrigin | + | + | + | +
Weierstrass | + | + | + | NA

“+” indicates that value is significant, “⋅” indicates that value is not statistically significant, and “NA” stands for not applicable.
4.1. Experiment 1: NIWTLBO versus PSO, ABC, DE, and TLBO

This experiment is aimed at evaluating the performance of the NIWTLBO algorithm in achieving the global optimum value compared with PSO, ABC, DE, and the basic TLBO. To be fair, each algorithm uses the same values of the common control parameters, such as population size and maximum evaluation number. The population size is 40 and the maximum fitness function evaluation number is 80,000 for all benchmark functions in Table 1. The other specific parameters of the algorithms are given below.

PSO Setting. Cognitive attraction c1 = 2, social attraction c2 = 2, and an inertia weight w chosen within the range recommended in [29]. As mentioned in [5], a recommended choice for the constants c1 and c2 is the integer 2, since it on average makes the weights for the "social" and "cognition" parts be 1. When w is in this recommended range, the PSO has the best chance to find the global optimum and takes a moderate number of iterations [29].

ABC Setting. For ABC there are no other specific parameters to set.

DE Setting. In DE, F is a real constant which affects the differential variation between two solutions, and CR is the crossover rate. The configuration parameters for DE were decided based on the results of experiments using different parameter values, and we chose the parameter values which gave the DE algorithm the best results.

TLBO Settings. For TLBO there are no other specific parameters to set.

NIWTLBO Settings. For NIWTLBO, there are no other specific parameters to set either.

In this section, each benchmark function is independently run 30 times with PSO, ABC, DE, TLBO, and NIWTLBO. Each algorithm was terminated after running for 80,000 FEs or when it reached the global minimum value before completing 80,000 FEs. The mean and standard deviation of the fitness values obtained over the 30 runs on each benchmark function are recorded in Table 2. Meanwhile, the mean value and standard deviation of the number of function fitness evaluations produced by the experiments are reported in Table 3. In order to analyse whether there is a significant difference between the results of NIWTLBO and the other algorithms, we carried out a t-test on pairs of algorithms, which is very popular in evolutionary computing [12]. The statistical significance levels of the difference of the means of the PSO and NIWTLBO algorithms, the ABC and NIWTLBO algorithms, the DE and NIWTLBO algorithms, and the TLBO and NIWTLBO algorithms are reported in Table 4. Here, the "+" symbol indicates that the p value is significant at the 0.05 level of significance by a two-tailed test, the "⋅" symbol marks a p value that is not statistically significant, and "NA" means not applicable because the results of the pair of algorithms have the same accuracy.
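As an illustration of how such pairwise comparisons can be produced, the sketch below runs a two-tailed t-test on two sets of 30 run results using SciPy. The paper does not state which t-test variant was used, so the unequal-variance (Welch) form is an assumption, and the helper name is ours.

```python
import numpy as np
from scipy import stats

def compare_runs(results_a, results_b, alpha=0.05):
    """Two-tailed t-test on two sets of independent run results.

    Returns (p_value, mark), where the mark mimics Table 4: "+" for a
    significant difference, "." for not significant, and "NA" when both
    sets of results are identical (e.g., all runs reach the same optimum).
    """
    a, b = np.asarray(results_a, float), np.asarray(results_b, float)
    if np.allclose(a, b):
        return None, "NA"
    t_stat, p_value = stats.ttest_ind(a, b, equal_var=False)
    return p_value, "+" if p_value < alpha else "."
```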

The comparative results of each benchmark function for PSO, ABC, DE, and TLBO are presented in Table 2 in the form of the average solution and standard deviation obtained in 30 independent runs on each benchmark function. The significance of NIWTLBO compared with PSO, ABC, DE, and TLBO is shown in Table 4. It is observed from Tables 2 and 4 that NIWTLBO outperforms PSO, ABC, DE, and TLBO on most of the unimodal benchmark functions and on multimodal functions such as Ackley, Rastrigin, and Noncontinuous Rastrigin. Furthermore, TLBO performs better than PSO, ABC, and DE on several of these functions. For the two-dimensional functions, the performance of NIWTLBO, PSO, ABC, DE, and TLBO is alike, and almost all the algorithms obtain the global optimum value, except for ABC on Bohachevsky3. For Rosenbrock, the performance of the different algorithms is similar to each other. For Griewank and Multimod, the performance of NIWTLBO, DE, and TLBO is alike and better than that of PSO and ABC. For Weierstrass, the performance of NIWTLBO and TLBO is alike and outperforms PSO, ABC, and DE.

It is observed from the results in Table 3 that the smaller the number of fitness evaluations, the more quickly the algorithm obtains the global optimum value; that is, the convergence rate of the algorithm is faster. Obviously, the NIWTLBO algorithm requires fewer function evaluations than the basic TLBO algorithm and the other algorithms mentioned to achieve the global optimum value for most of the benchmark functions. Hence, the convergence rate of the NIWTLBO algorithm is faster than that of the other algorithms mentioned for most of the benchmark functions, except Six-Hump Camel Back, Branin, and Goldstein-Price.

4.2. Experiment 2: NIWTLBO versus PSO-w, PSO-cf, CPSO-H, and CLPSO

In this section, the experiment is aimed at analysing the ability of the NIWTLBO algorithm to obtain the global optimum value in comparison with other PSO variants, such as PSO-w [29], PSO-cf [30], CPSO-H [31], and CLPSO [32]. In this experiment, 8 different unimodal and multimodal benchmark functions are tested using the NIWTLBO algorithm. The details of the benchmark functions are shown in Table 1. In order to maintain consistency in the comparison, the NIWTLBO algorithm is run with the same maximum number of function evaluations and the same dimensions. Each benchmark function is independently run 30 times for NIWTLBO. The comparative results are reported in Table 5 in the form of the average solution and standard deviation obtained in 30 independent runs on each benchmark function. In Table 5, the results of the algorithms except NIWTLBO are taken from the literature [24, 27], where the algorithms were run for 30,000 FEs with a population size of 10 on 10-dimensional functions.


Table 5: Comparative results of PSO-w, PSO-cf, CPSO-H, CLPSO, TLBO, and NIWTLBO (mean and standard deviation over 30 independent runs).

Function | Stat | PSO-w | PSO-cf | CPSO-H | CLPSO | TLBO | NIWTLBO
Sphere | Mean | 7.96E−51 | 9.84E−105 | 4.98E−45 | 5.15E−29 | 0 | 0
Sphere | Std. | 3.56E−50 | 4.21E−104 | 1.00E−44 | 2.16E−28 | 0 | 0
Rosenbrock | Mean | 3.08E+00 | 6.98E−01 | 1.53E+00 | 2.46E+00 | 1.72E+00 | 1.69E+00
Rosenbrock | Std. | 7.69E−01 | 1.46E+00 | 1.70E+00 | 1.70E+00 | 6.62E−01 | 7.18E−01
Ackley | Mean | 1.58E−14 | 9.18E−01 | 1.49E−14 | 4.32E−10 | 3.55E−15 | 8.58E−16
Ackley | Std. | 1.60E−14 | 1.01E+00 | 6.97E−15 | 2.55E−14 | 8.32E−31 | 6.37E−32
Rastrigin | Mean | 5.82E+00 | 1.25E+01 | 2.12E+00 | 0 | 6.77E−08 | 0
Rastrigin | Std. | 2.96E+00 | 5.17E+00 | 1.33E+00 | 0 | 3.68E−07 | 0
Griewank | Mean | 9.69E−02 | 1.19E−01 | 4.07E−02 | 4.56E−03 | 0 | 0
Griewank | Std. | 5.01E−02 | 7.11E−02 | 2.80E−02 | 4.81E−03 | 0 | 0
Schwefel 2.26 | Mean | 3.20E+02 | 9.87E+02 | 2.13E+02 | 0 | 2.94E+02 | 2.67E+02
Schwefel 2.26 | Std. | 1.85E+02 | 2.76E+02 | 1.41E+02 | 0 | 2.68E+02 | 1.92E+02
Noncontinuous Rastrigin | Mean | 4.05E+00 | 1.20E+01 | 2.00E−01 | 0 | 2.65E−08 | 0
Noncontinuous Rastrigin | Std. | 2.58E+00 | 4.99E+00 | 4.10E−01 | 0 | 1.23E−07 | 0
Weierstrass | Mean | 2.28E−03 | 6.69E−01 | 1.07E−15 | 0 | 2.42E−05 | 0
Weierstrass | Std. | 7.04E−03 | 7.17E−01 | 1.67E−15 | 0 | 1.38E−20 | 0

“†” mark indicates that NIWTLBO is statistically better than the corresponding algorithm.
“‡” mark indicates that NIWTLBO is statistically worse than the corresponding algorithm.

It is observed from the results in Table 5 that the performance of the NIWTLBO and TLBO algorithms is better than that of the PSO-w, PSO-cf, CPSO-H, and CLPSO algorithms for Sphere, Ackley, and Griewank. The performance of NIWTLBO and CLPSO is alike for Rastrigin, Noncontinuous Rastrigin, and Weierstrass. For Rosenbrock and Schwefel 2.26, the NIWTLBO algorithm does not perform well compared with the other algorithms.

4.3. Experiment 3: NIWTLBO versus CABC, GABC, RABC, and IABC

In this section, the experiment is conducted to evaluate the performance of the NIWTLBO algorithm in achieving the global optimum value versus CABC [33], GABC [34], RABC [8], and IABC [35] on 7 benchmark functions shown in Table 1. The comparative results are reported in Table 6. To maintain consistency in the comparison, the parameters of the algorithms are similar to those in the literature [8], where the population size is set to 20 and the dimension is set to 30. The results of CABC, GABC, RABC, and IABC are taken directly from the literature [23]. The results of NIWTLBO and TLBO, in the form of average solution and standard deviation, are obtained from 30 independent runs on each benchmark function. In this experiment, TLBO and NIWTLBO are tested with the same numbers of function evaluations listed in Table 6 to compare their performance with the CABC, GABC, RABC, and IABC algorithms.


Table 6: Comparative results of CABC, GABC, RABC, IABC, TLBO, and NIWTLBO.

Function | Stat | CABC | GABC | RABC | IABC | TLBO | NIWTLBO
Sphere (FEs: 1.5 × 10^5) | Mean | 2.3E−40 | 3.6E−63 | 9.1E−61 | 5.34E−178 | 0 | 0
Sphere (FEs: 1.5 × 10^5) | Std. | 1.7E−40 | 5.7E−63 | 2.1E−60 | 0 | 0 | 0
Schwefel 1.2 | Mean | 8.4E+02 | 4.3E+02 | 2.9E−24 | 1.78E−65