Journal of Applied Mathematics
Volume 2018, Article ID 1806947, 19 pages
https://doi.org/10.1155/2018/1806947
Research Article

Teaching-Learning-Based Optimization with Learning Enthusiasm Mechanism and Its Application in Chemical Engineering

1School of Electrical and Information Engineering, Jiangsu University, Zhenjiang, Jiangsu 212013, China
2Key Laboratory of Advanced Control and Optimization for Chemical Processes, Ministry of Education, East China University of Science and Technology, Shanghai 200237, China
3School of Mechanical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China
4School of Electrical Engineering, Zhengzhou University, Zhengzhou 450001, China

Correspondence should be addressed to Xu Chen; xuchen@ujs.edu.cn

Received 16 January 2018; Accepted 8 April 2018; Published 21 May 2018

Academic Editor: Xiaohui Yuan

Copyright © 2018 Xu Chen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Teaching-learning-based optimization (TLBO) is a population-based metaheuristic search algorithm inspired by the teaching and learning process in a classroom. It has been successfully applied to many scientific and engineering applications in the past few years. In the basic TLBO and most of its variants, all the learners have the same probability of getting knowledge from others. However, in the real world, learners are different, and each learner's learning enthusiasm is not the same, resulting in different probabilities of acquiring knowledge. Motivated by this phenomenon, this study introduces a learning enthusiasm mechanism into the basic TLBO and proposes a learning enthusiasm based TLBO (LebTLBO). In LebTLBO, learners with good grades have high learning enthusiasm, and they have large probabilities of acquiring knowledge from others; by contrast, learners with bad grades have low learning enthusiasm, and they have relatively small probabilities of acquiring knowledge from others. In addition, a poor student tutoring phase is introduced to improve the quality of the poor learners. The proposed method is evaluated on the CEC2014 benchmark functions, and the computational results demonstrate that it offers promising results compared with other efficient TLBO and non-TLBO algorithms. Finally, LebTLBO is applied to solve three optimal control problems in chemical engineering, and the competitive results show its potential for real-world problems.

1. Introduction

In recent years, many real-world problems have become extremely complex and are difficult to solve using classic analytical optimization algorithms. Metaheuristic search (MS) algorithms have shown more favorable performance on nonconvex and nondifferentiable problems, resulting in the development of various MS algorithms for difficult real-world problems. Most of these MS algorithms are nature-inspired, and several of the prominent algorithms include genetic algorithms (GA) [1], evolution strategies (ES) [2], differential evolution (DE) [3], particle swarm optimization (PSO) [4, 5], harmony search (HS) [6], and biogeography-based optimization (BBO) [7, 8]. However, the “No Free Lunch” theorem suggests that no single algorithm is suitable for all problems [9]; therefore, more research is required to develop novel algorithms for different optimization problems with high efficiency [10].

Teaching-learning-based optimization (TLBO) is a relatively new MS algorithm proposed by Rao et al. [11], inspired by the behaviors of the teaching and learning process in a classroom. TLBO utilizes two productive operators, namely, the teacher phase and the learner phase, to search for good solutions [12]. Due to its attractive characteristics, such as its simple concept, absence of algorithm-specific parameters, easy implementation, and rapid convergence, TLBO has captured great attention and has been extended to handle constrained [13], multiobjective [14], large-scale [15], and dynamic optimization problems [16]. Furthermore, TLBO has also been successfully applied to many scientific and engineering fields, such as neural network training [17], power system dispatch [18], and production scheduling [19].

In the basic TLBO and most of its variants, all the learners have the same chance to acquire knowledge from the teacher in the teacher phase or from the other learners in the learner phase. In the real world, however, learners are different and have different levels of learning enthusiasm. Learners with high learning enthusiasm are more concentrated when studying and therefore have a greater chance of acquiring knowledge from others. By contrast, learners with low learning enthusiasm often neglect the information of others and have a comparatively small chance of acquiring knowledge from them. Motivated by this phenomenon, this study introduces a learning enthusiasm mechanism into TLBO and proposes a new approach called learning enthusiasm based teaching-learning-based optimization (LebTLBO).

The main contributions of this study are as follows:

(1) A learning enthusiasm mechanism is introduced into TLBO, and learning enthusiasm based teacher and learner phases are presented to enhance the search efficiency.

(2) A poor student tutoring phase is also introduced to improve the quality of the poor learners.

(3) LebTLBO is evaluated on 30 benchmark functions from CEC2014 and compared with other efficient TLBO and non-TLBO algorithms.

(4) LebTLBO is applied to solve three optimal control problems in chemical engineering.

The rest of this paper is organized as follows. Section 2 reviews the basic TLBO and related work. Section 3 presents the proposed LebTLBO in detail. Section 4 reports the simulation results on benchmark functions. In Section 5, LebTLBO is applied to solve three optimal control problems in chemical engineering. Finally, Section 6 concludes this paper.

2. TLBO and Related Work

2.1. Basic TLBO

TLBO is a population-based MS algorithm which mimics the teaching and learning process of a typical class [11]. In the TLBO, a group of learners is considered as the population of solutions, and the fitness of the solutions is considered as results or grades. The algorithm adaptively updates the grade of each learner in the class by learning from the teacher and learning through the interaction between learners. The TLBO process is carried out through two basic operations: teacher phase and learner phase.

In the teacher phase, the best solution in the entire population is considered as the teacher, and the teacher shares his or her knowledge with the learners to increase the mean result of the class. Assume X_i is the position of the ith learner, the learner with the best fitness is identified as the teacher X_teacher, and the mean position of a class with NP learners is X_mean = (1/NP) ∑_{i=1}^{NP} X_i. The position of each learner is updated by the following equation:

X_i^new = X_i^old + rand · (X_teacher − TF · X_mean),  (1)

where X_i^new and X_i^old are the ith learner's new and old positions, respectively, rand is a random vector whose components are uniformly distributed within [0, 1], and TF is a teaching factor whose value is heuristically set to either 1 or 2. If X_i^new is better than X_i^old, X_i^new is accepted; otherwise X_i^old is unchanged.

In the learner phase, a learner randomly interacts with other learners to further improve his or her performance. Learner X_i randomly selects another learner X_j (j ≠ i), and the learning process can be expressed by the following equation:

X_i^new = X_i^old + rand · (X_i − X_j)  if f(X_i) < f(X_j),
X_i^new = X_i^old + rand · (X_j − X_i)  otherwise,  (2)

where f(·) is the objective function with D-dimensional variables and X_j is the old position of the jth learner. If X_i^new is better than X_i^old, X_i^new replaces X_i^old. The pseudocode for TLBO is shown in Algorithm 1.

Algorithm 1: Basic TLBO.
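Since the pseudocode of Algorithm 1 is not reproduced here, the two phases described above can be sketched in Python as follows. The population size, bound clipping, random seed, and the sphere test function are illustrative choices, not part of the original algorithm description.

```python
import random

def tlbo(f, dim, bounds, pop_size=20, max_gen=100, seed=1):
    """Minimal basic TLBO sketch for minimizing f over a box [lo, hi]^dim."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(max_gen):
        # Teacher phase: move each learner toward the teacher, away from the mean.
        teacher = pop[min(range(pop_size), key=lambda i: fit[i])]
        mean = [sum(x[d] for x in pop) / pop_size for d in range(dim)]
        for i in range(pop_size):
            tf = rng.choice([1, 2])  # teaching factor, heuristically 1 or 2
            new = [pop[i][d] + rng.random() * (teacher[d] - tf * mean[d])
                   for d in range(dim)]
            new = [min(max(v, lo), hi) for v in new]
            fn = f(new)
            if fn < fit[i]:  # greedy acceptance
                pop[i], fit[i] = new, fn
        # Learner phase: interact with a randomly chosen peer.
        for i in range(pop_size):
            j = rng.choice([k for k in range(pop_size) if k != i])
            sign = 1.0 if fit[i] < fit[j] else -1.0
            new = [pop[i][d] + sign * rng.random() * (pop[i][d] - pop[j][d])
                   for d in range(dim)]
            new = [min(max(v, lo), hi) for v in new]
            fn = f(new)
            if fn < fit[i]:
                pop[i], fit[i] = new, fn
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

sphere = lambda x: sum(v * v for v in x)
best_x, best_f = tlbo(sphere, dim=5, bounds=(-10, 10))
```

The greedy acceptance in both phases guarantees that a learner's fitness never worsens, which is why the basic method converges quickly on simple unimodal functions.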
2.2. Improvements on TLBO

The basic TLBO algorithm often falls into local optima when solving complex optimization problems. In order to enhance the search performance of the TLBO, some variants of TLBO have been proposed, which can be roughly divided into the following three categories.

The first category of TLBO variants introduces modified learning strategies in the teacher phase or the learner phase to improve performance. Rao and Patel [20] introduced more teachers and an adaptive teaching factor into TLBO and proposed a modified TLBO for multiobjective optimization of heat exchangers. Satapathy et al. [21] introduced a parameter called the weight factor into the update equations of the teacher phase and learner phase and presented the weighted TLBO for global optimization. Wu et al. [22] proposed a nonlinearly increasing inertia weight factor for TLBO, inspired by the learning curve presented by Ebbinghaus, and developed the nonlinear inertia weighted TLBO (NIWTLBO). Satapathy and Naik [23] put forward a modified TLBO (mTLBO), in which an extra term is added in the learner phase based on the concept of a tutorial class; it improves the learning output through close interaction between learners and the teacher as well as collaborative discussion among learners. Zou et al. [24] proposed an improved TLBO with learning experience of other learners (LETLBO), in which learners can adaptively acquire knowledge from the experience information of other learners in both the learner phase and the teacher phase. Patel and Savsani [25] incorporated two additional search mechanisms, namely, tutorial training and self-learning, into the basic TLBO and presented the tutorial training and self-learning inspired TLBO (TS-TLBO) for the multiobjective optimization of a Stirling heat engine. Ghasemi et al. [26] proposed Gaussian barebones TLBO (GBTLBO) using Gaussian sampling technology, and GBTLBO was applied to the optimal reactive power dispatch problem.

The second category comprises hybrid TLBO methods that combine TLBO with other search strategies. Ouyang et al. [27] incorporated a global crossover (GC) operator to balance local and global searching and presented a new version of TLBO called TLBO-GC. Tuo et al. [28] presented a harmony search based on a teaching-learning strategy (HSTL) for high-dimensional complex optimization problems. Lim and Isa [29] adapted the TLBO framework into particle swarm optimization and proposed teaching and peer-learning particle swarm optimization. Huang et al. [30] proposed an effective teaching-learning-based cuckoo search (TLCS) algorithm for parameter optimization problems in structure designing and machining processes. Zou et al. [31] proposed an improved teaching-learning-based optimization with differential learning (DLTLBO) for IIR system identification problems. Wang et al. [32] combined TLBO with differential evolution to propose TLBO-DE for chaotic time series prediction. Zhang et al. [33] combined TLBO with the bird mating optimizer (BMO) and proposed a hybrid algorithm called TLBMO for global numerical optimization. Güçyetmez and Çam [34] developed a genetic-teaching-learning optimization (G-TLBO) technique for optimizing power flow in wind-thermal power systems. Chen et al. [35] hybridized TLBO with the artificial bee colony and proposed a teaching-learning-based artificial bee colony (TLABC) for solar photovoltaic parameter estimation.

In the third category of TLBO variants, topology methods or efficient population utilization technologies are designed to balance the global and local search abilities. Zou et al. [36] employed a dynamic group strategy (DGS), a random learning strategy, and quantum-behaved learning to maintain the diversity of the population and discourage premature convergence, and an improved TLBO with dynamic group strategy (DGSTLBO) was presented. Chen et al. [37] proposed a variant of the TLBO algorithm with multiclass cooperation and a simulated annealing operator (SAMCCTLBO). In SAMCCTLBO, the population is divided into several subclasses: the learners in different subclasses only learn new knowledge from others in their own subclasses, and the simulated annealing operator is used to improve the diversity of the whole class. Wang et al. [38] presented a TLBO with neighborhood search (NSTLBO) by introducing a ring neighborhood topology to maintain the exploration ability of the population. Zou et al. [39] proposed a two-level hierarchical multiswarm cooperative TLBO (HMCTLBO) inspired by the hierarchical cooperation mechanism in social organizations. Savsani et al. proposed a modified subpopulation TLBO (MS-TLBO) by introducing four improvements, namely, the number of teachers, an adaptive teaching factor, learning through tutorials, and self-motivated learning, and MS-TLBO was used to solve structural optimization problems. Chen et al. [17] proposed an improved TLBO with a variable population size that varies in a triangle form (VTTLBO), and the algorithm was applied to optimize the parameters of an artificial neural network.

3. Proposed Approach: LebTLBO

In the basic TLBO and most of its variants, learners have the same chance of acquiring knowledge from others in both the teacher phase and the learner phase. However, in the real world, learners are different, and each learner's learning enthusiasm is not the same. Usually, learners with higher learning enthusiasm are more interested in study and more concentrated when studying; therefore, they can acquire more knowledge from others. By contrast, learners with lower learning enthusiasm are less interested in study and often neglect the information of others, so they have a small chance of acquiring knowledge from others. Motivated by this consideration, this study introduces a learning enthusiasm mechanism into TLBO and proposes a learning enthusiasm based TLBO (LebTLBO).

The LebTLBO algorithm has three main phases: learning enthusiasm based teacher phase, learning enthusiasm based learner phase, and poor student tutoring phase. The details of these three phases are described as follows.

3.1. Learning Enthusiasm Based Teacher Phase

The proposed LebTLBO uses a learning enthusiasm based teaching strategy built on the following assumption: learners with good grades have high learning enthusiasm and a high probability of acquiring knowledge from the teacher, while learners with bad grades have low learning enthusiasm and a small probability of acquiring knowledge from the teacher.

In the learning enthusiasm based teacher phase, all learners are sorted from best to worst based on their grades. For a minimization problem, assume that f(X_1) ≤ f(X_2) ≤ ⋯ ≤ f(X_NP). Then, the learning enthusiasm value of the ith learner is defined as

LE_i = LE_max − (LE_max − LE_min) · (i − 1)/(NP − 1),  i = 1, 2, …, NP,  (4)

where LE_max and LE_min are the maximum and minimum learning enthusiasm, respectively; the choice of these two parameters is discussed in Section 4.4.2. The learning enthusiasm curve is plotted in Figure 1, which shows that the best learner has the highest learning enthusiasm, while the worst learner has the lowest learning enthusiasm.

Figure 1: Learning enthusiasm curve.

After defining the learning enthusiasm, each learner chooses whether to learn from the teacher based on its learning enthusiasm value LE_i. For learner X_i, a random number rand ∈ [0, 1] is generated; if rand ≤ LE_i, learner X_i learns from the teacher; otherwise, learner X_i neglects the knowledge of the teacher. If learner X_i acquires knowledge from the teacher, it updates its position using a diversity enhanced teaching strategy, as shown in Figure 2 and the following equation:

X_i^new = X_i^old + rand1 · (X_teacher − TF · X_mean) + F · rand2 · (X_r1 − X_r2),  (5)

where r1 and r2 are distinct integers randomly selected from {1, 2, …, NP} with r1 ≠ r2 ≠ i; rand1 and rand2 are two random numbers uniformly distributed within [0, 1]; and F is a scale factor in [0, 1]. Equation (5) can be viewed as a hybrid of the basic teaching strategy of TLBO (i.e., (1)) and the mutation operator of DE [40].

Figure 2: Diversity enhanced teaching strategy.

In the basic TLBO, all learners use the same differential vector to update the positions, so the diversity of the basic teaching strategy is poor. By contrast, the learning enthusiasm based teacher phase utilizes a hybridization of basic teaching strategy and DE mutation to update learners’ positions, which improves the diversity of search directions greatly.

The pseudocode of learning enthusiasm based teacher phase in LebTLBO is shown in Algorithm 2.

Algorithm 2: Learning enthusiasm based teacher phase.
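A minimal Python sketch of this phase follows. The linear rank-based enthusiasm model, the DE-style perturbation term, and the default values (le_max = 1.0, le_min = 0.3, scale = 0.5) are assumed forms for illustration, not the paper's exact settings.

```python
import random

def enthusiasm(rank, pop_size, le_max=1.0, le_min=0.3):
    """Assumed linear enthusiasm model: the best learner (rank 0) gets
    le_max, the worst learner (rank pop_size-1) gets le_min."""
    return le_max - (le_max - le_min) * rank / (pop_size - 1)

def le_teacher_phase(pop, fit, f, bounds, rng, scale=0.5):
    """Learning enthusiasm based teacher phase (assumed update form)."""
    pop_size, dim = len(pop), len(pop[0])
    lo, hi = bounds
    order = sorted(range(pop_size), key=lambda i: fit[i])  # best to worst
    rank_of = {idx: r for r, idx in enumerate(order)}
    teacher = pop[order[0]]
    mean = [sum(x[d] for x in pop) / pop_size for d in range(dim)]
    for i in range(pop_size):
        # Low-enthusiasm learners skip the teacher with higher probability.
        if rng.random() > enthusiasm(rank_of[i], pop_size):
            continue
        r1, r2 = rng.sample([k for k in range(pop_size) if k != i], 2)
        tf = rng.choice([1, 2])  # teaching factor
        new = [pop[i][d]
               + rng.random() * (teacher[d] - tf * mean[d])
               + scale * rng.random() * (pop[r1][d] - pop[r2][d])
               for d in range(dim)]
        new = [min(max(v, lo), hi) for v in new]
        fn = f(new)
        if fn < fit[i]:  # greedy selection
            pop[i], fit[i] = new, fn
    return pop, fit
```

The extra difference vector between two random peers is what distinguishes this phase from the basic teacher phase: each learner follows its own perturbed search direction rather than the single shared teacher-minus-mean vector.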
3.2. Learning Enthusiasm Based Learner Phase

LebTLBO employs a learning enthusiasm based learning strategy. Similar to the learning enthusiasm based teaching strategy, learners with good grades have high learning enthusiasm and a high probability of acquiring knowledge from the other learners, while learners with bad grades have low learning enthusiasm and a relatively small probability of acquiring knowledge from the other learners.

In the learning enthusiasm based learner phase, all learners are sorted from best to worst based on their grades, and the learning enthusiasm values of all learners are defined as in (4). For learner X_i, a random number rand ∈ [0, 1] is generated; if rand ≤ LE_i, learner X_i acquires knowledge from the other learners; otherwise, learner X_i neglects the knowledge of the other learners. The pseudocode of the learning enthusiasm based learner phase is shown in Algorithm 3.

Algorithm 3: Learning enthusiasm based learner phase.
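As Algorithm 3's pseudocode is not reproduced here, the enthusiasm-gated learner phase can be sketched as follows, under the same assumed linear enthusiasm model (le_max = 1.0, le_min = 0.3 are illustrative defaults):

```python
import random

def le_learner_phase(pop, fit, f, bounds, rng, le_max=1.0, le_min=0.3):
    """Learner phase gated by rank-based learning enthusiasm (assumed form)."""
    pop_size, dim = len(pop), len(pop[0])
    lo, hi = bounds
    order = sorted(range(pop_size), key=lambda i: fit[i])  # best to worst
    rank_of = {idx: r for r, idx in enumerate(order)}
    for i in range(pop_size):
        le = le_max - (le_max - le_min) * rank_of[i] / (pop_size - 1)
        if rng.random() > le:
            continue  # low-enthusiasm learner skips this interaction
        j = rng.choice([k for k in range(pop_size) if k != i])
        # Move toward the better of the pair, away from the worse one.
        sign = 1.0 if fit[i] < fit[j] else -1.0
        new = [min(max(pop[i][d] + sign * rng.random() * (pop[i][d] - pop[j][d]),
                       lo), hi)
               for d in range(dim)]
        fn = f(new)
        if fn < fit[i]:  # greedy selection
            pop[i], fit[i] = new, fn
    return pop, fit
```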
3.3. Poor Student Tutoring Phase

Besides the learning enthusiasm based teacher phase and learner phase, a poor student tutoring phase is employed to improve the grades of the poor students in each generation of LebTLBO. In this phase, all the learners are sorted from best to worst based on their grades, and the learners ranked in the bottom 10% are considered the poor students. Each poor student X_p randomly selects a student X_q ranked in the top 50% and learns from that student using the following equation:

X_p^new = X_p^old + rand · (X_q − X_p^old).  (6)

If X_p^new is better than X_p^old, X_p^new is accepted; otherwise X_p^old is unchanged. The pseudocode of the poor student tutoring phase in LebTLBO is shown in Algorithm 4.

Algorithm 4: Poor student tutoring phase.

The introduction of the poor student tutoring phase is based on two considerations. Firstly, in the learning enthusiasm based teacher phase and learner phase, the learners with bad grades have relatively small probabilities of updating their positions compared with the learners with good grades; the poor student tutoring phase therefore helps improve the poor learners. Secondly, this strategy can be viewed as a simulation of real-world tutoring, where poor students need, and benefit from, tutoring more than any other group.
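The tutoring phase described above can be sketched as follows; the update form (moving the poor student toward the tutor by a uniform random step) is an assumption, as is the bound clipping:

```python
import math
import random

def poor_student_tutoring(pop, fit, f, bounds, rng, poor_frac=0.1, top_frac=0.5):
    """Bottom-10% learners each learn from a randomly chosen top-50% learner."""
    pop_size, dim = len(pop), len(pop[0])
    lo, hi = bounds
    order = sorted(range(pop_size), key=lambda i: fit[i])  # best to worst
    n_poor = max(1, int(math.ceil(poor_frac * pop_size)))
    n_top = max(1, int(top_frac * pop_size))
    for i in order[-n_poor:]:                   # the poor students
        tutor = pop[rng.choice(order[:n_top])]  # a random good student
        new = [min(max(pop[i][d] + rng.random() * (tutor[d] - pop[i][d]), lo), hi)
               for d in range(dim)]
        fn = f(new)
        if fn < fit[i]:  # keep the move only if it improves the grade
            pop[i], fit[i] = new, fn
    return pop, fit
```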

3.4. Framework of LebTLBO

The overall framework of LebTLBO can be summarized in Algorithm 5.

Algorithm 5: The LebTLBO algorithm.

LebTLBO differs from previous TLBO variants in two aspects:

(1) LebTLBO uses learning enthusiasm based teaching and learning models, in which different learners have different probabilities of getting knowledge from others based on their learning enthusiasm. However, in the previous TLBO, all learners have the same probability of getting knowledge from others.

(2) LebTLBO introduces a poor student tutoring phase, which is not included in previous TLBO.

4. Experimental Results on Benchmark Functions

4.1. Experimental Settings

To test the performance of LebTLBO, a suite of 30 benchmark functions proposed in the CEC 2014 special session on real-parameter single objective optimization is utilized [41]. These functions can be categorized into four groups: (1) G1: unimodal functions (F01–F03); (2) G2: simple multimodal functions (F04–F16); (3) G3: hybrid functions (F17–F22); and (4) G4: composition functions (F23–F30).

The following performance criteria are used for the performance comparisons:

(i) Solution Error. The function error value for a solution X is defined as f(X) − f(X*), where X* is the global optimum of the corresponding function [42]. The best error value of each run is recorded when the maximum number of function evaluations (MaxFEs) is reached. The mean and standard deviation of the best error values are calculated for comparison.

(ii) Convergence. The convergence graphs for some typical functions are plotted to illustrate the mean error performance of the best solution over the total runs in the respective experiments.

(iii) Statistics by Wilcoxon and Friedman Tests. The Wilcoxon rank sum test at the 5% significance level is conducted to show the significant differences between two algorithms on the same problem. The symbols "+", "−", and "=" mean that one algorithm performs significantly better than, worse than, or similarly to its competitor, respectively. To identify differences between a pair of algorithms on all problems, the multiproblem Wilcoxon signed-rank test is carried out. The Friedman test is also used to obtain the rankings of multiple algorithms on all problems. The statistical tests are conducted with the KEEL software [43].
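As an illustration of the rank-sum criterion, the following is a self-contained normal-approximation implementation (average ranks for ties, no tie-variance correction); it is a sketch of the test's mechanics, not a reproduction of KEEL's exact procedure:

```python
import math

def rank_sum_test(a, b, alpha=0.05):
    """Wilcoxon rank-sum z statistic and two-sided p-value via the normal
    approximation; adequate for samples of ~30 independent run errors."""
    n1, n2 = len(a), len(b)
    pooled = sorted((v, src) for src, xs in ((0, a), (1, b)) for v in xs)
    rank_sum_a = 0.0
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j][0] == pooled[i][0]:
            j += 1  # run of tied values occupies ranks i+1 .. j
        avg_rank = (i + 1 + j) / 2.0
        for k in range(i, j):
            if pooled[k][1] == 0:  # value came from sample a
                rank_sum_a += avg_rank
        i = j
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (rank_sum_a - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value
    return z, p, p < alpha
```

A negative z with p < 0.05 means sample a (e.g., one algorithm's 30 error values) is significantly smaller, which corresponds to the "+" symbol in the tables.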

To provide a fair comparison, each algorithm is run 30 times independently on each benchmark function, and each run terminates when the maximum number of function evaluations (MaxFEs) is reached.

4.2. Comparison with Other TLBO Algorithms

In order to study the performance of LebTLBO against TLBO variants, we compare LebTLBO with six well-established TLBO variants:
(i) Basic TLBO [11]
(ii) Modified TLBO (mTLBO) [23]
(iii) Differential learning TLBO (DLTLBO) [31]
(iv) Nonlinear inertia weighted TLBO (NIWTLBO) [22]
(v) TLBO with learning experience of other learners (LETLBO) [24]
(vi) Generalized oppositional TLBO (GOTLBO) [44]

Table 1 lists the parameter settings for LebTLBO and the other TLBO algorithms.

Table 1: Parameter settings for the compared TLBO algorithms.

Table 2 shows the mean error and standard deviation (in brackets) of the function error values obtained by all the TLBO algorithms. In Table 2, the best and second best results are highlighted in boldface and italic, respectively. The results of Wilcoxon's rank sum test between LebTLBO and the other TLBO algorithms are presented in the last five rows of Table 2.

Table 2: Comparison between LebTLBO and other TLBO algorithms.

Firstly, for the unimodal functions (G1), LebTLBO shows the best performance on F01 and F02, while DLTLBO performs best on F03 with LebTLBO second best. Based on the results of Wilcoxon's rank sum test, LebTLBO is significantly better than all the other TLBO algorithms on most of these functions.

Secondly, for the simple multimodal functions (G2), LebTLBO shows the best performance on 7 functions (F04, F06~F09, F13, and F15) and the second best performance on 3 functions (F05, F10, and F16). DLTLBO achieves the best performance on 3 functions (F10, F11, and F16), while mTLBO, NIWTLBO, and GOTLBO each show the best performance on 1 function. The results of Wilcoxon's rank sum test show that LebTLBO outperforms all the other TLBO variants on most of these functions.

Thirdly, for the hybrid functions (G3), DLTLBO performs best, achieving the best results on 4 functions (F17, F18, F20, and F21), while LebTLBO achieves the best result on F19. The results of Wilcoxon's rank sum test show that LebTLBO is worse than DLTLBO but better than TLBO, mTLBO, NIWTLBO, LETLBO, and GOTLBO.

Finally, with regard to the composition functions (G4), GOTLBO achieves the best results on 5 functions (F23~F25, F27, and F28). Both LebTLBO and NIWTLBO achieve the best results on 3 functions, followed by TLBO, DLTLBO, and mTLBO. The results of Wilcoxon's rank sum test show that LebTLBO performs similarly to NIWTLBO and better than the other TLBO algorithms.

In summary, for all 30 functions, LebTLBO shows the best results on 13 functions, followed by DLTLBO with 10 functions and GOTLBO with 6 functions. Based on the results of Wilcoxon’s rank sum test, LebTLBO is significantly better than TLBO, mTLBO, DLTLBO, NIWTLBO, LETLBO, and GOTLBO on 22, 22, 12, 18, 19, and 19 functions, respectively. It is significantly worse than TLBO, mTLBO, DLTLBO, NIWTLBO, LETLBO, and GOTLBO on 1, 1, 8, 7, 1, and 6 functions and similar to them on 7, 7, 10, 5, 10, and 5 functions, respectively.

The multiple-problem Wilcoxon signed-rank test is presented in Table 3. According to this test, LebTLBO attains higher positive ranks (R+) than TLBO, mTLBO, NIWTLBO, LETLBO, and GOTLBO, and the differences are significant at the considered significance levels. There are no significant differences between LebTLBO and DLTLBO.

Table 3: Multiple-problem Wilcoxon signed-rank between LebTLBO and other TLBO algorithms.

In addition, the rankings of all the TLBO algorithms according to the Friedman test are shown in Figure 3. As shown in Figure 3, LebTLBO ranks first and DLTLBO second, followed by GOTLBO, TLBO, LETLBO, NIWTLBO, and mTLBO.

Figure 3: Friedman rankings of the compared TLBO algorithms.
4.3. Comparison with Other Non-TLBO Algorithms

We further compare LebTLBO with five other MS algorithms:
(i) Real code biogeography-based optimization with Gaussian mutation (RCBBOG) [7]
(ii) Improved harmony search (IHS) [6]
(iii) Covariance matrix adaptation evolution strategy (CMAES) [2]
(iv) Comprehensive learning particle swarm optimization (CLPSO) [4]
(v) Adaptive differential evolution with optional external archive (JADE) [3]

Table 4 lists the parameter settings for the involved MS algorithms.

Table 4: Parameter settings for the other MS algorithms.

Table 5 shows the mean error and standard deviation of the function error values obtained by LebTLBO and the other MS algorithms. As shown in Table 5, JADE achieves the best results on 17 functions, followed by CMAES with 8 functions and LebTLBO with 5 functions. The results of Wilcoxon's rank sum tests indicate that LebTLBO is superior to RCBBOG, IHS, CMAES, CLPSO, and JADE on 19, 23, 14, 13, and 5 functions, respectively. It is worse than them on 8, 6, 14, 10, and 22 functions and similar to them on 3, 1, 2, 7, and 3 functions, respectively. The reason why LebTLBO performs worse than JADE is that JADE uses more efficient mechanisms, including the "DE/current-to-pbest/1" mutation strategy with an external archive and parameter adaptation.

Table 5: Comparison between LebTLBO and other MS algorithms.

The multiple-problem Wilcoxon signed-rank test between LebTLBO and the other MS algorithms is presented in Table 6. LebTLBO attains higher positive ranks (R+) than RCBBOG, IHS, CMAES, and CLPSO and lower positive ranks than JADE. The differences among LebTLBO, RCBBOG, and IHS are significant at the considered significance levels. Figure 4 shows the Friedman rankings of LebTLBO and the other non-TLBO algorithms. It can be seen from Figure 4 that JADE ranks first, LebTLBO and CLPSO rank second, followed by CMAES, IHS, and RCBBOG.

Table 6: Multiple-problem Wilcoxon signed-rank between LebTLBO and other MS algorithms.
Figure 4: Friedman rankings of LebTLBO and other non-TLBO algorithms.
4.4. More Discussions
4.4.1. Effectiveness of Diversity Enhanced Teaching Strategy

In LebTLBO, a hybrid teaching strategy (i.e., (5)) is used to improve the original teaching strategy (i.e., (1)). To verify the effectiveness of this hybrid teaching strategy, we perform experiments for LebTLBO with the original teaching strategy (denoted as LebTLBO-1), and the comparison between LebTLBO and LebTLBO-1 is shown in Table 7. From the table, it can be seen that, based on Wilcoxon’s rank sum test, LebTLBO significantly outperforms LebTLBO-1 on all kinds of functions (G1–G4). Specifically, LebTLBO is better than, worse than, and similar to LebTLBO-1 on 20, 8, and 2 functions, respectively.

Table 7: Comparison between LebTLBO and LebTLBO-1.
4.4.2. Discussion of Parameter LE_min

In LebTLBO, the maximum learning enthusiasm LE_max can be trivially set to 1, while the minimum learning enthusiasm LE_min needs to be tuned. We use five candidate values for LE_min (i.e., 0.1, 0.2, 0.3, 0.4, and 0.5), and the box plots of LebTLBO's results with different LE_min values on some typical functions are shown in Figure 5. From Figure 5, it can be found that LebTLBO acquires the best results on these functions when LE_min = 0.3; therefore, it is reasonable to set LE_min = 0.3 for LebTLBO.

Figure 5: Box plots of LebTLBO’s results with different : (a) F4, (b) F7, (c) F20, and (d) F29.

5. Application to Optimal Control Problems

In this section, to test the efficiency of LebTLBO on real-world optimization problems, LebTLBO together with the other eleven MS algorithms is applied to solve three optimal control problems of chemical engineering processes. Control vector parameterization (CVP) [45, 46] is employed to discretize the optimal control problems. In the CVP, the time interval is divided into 20 stages of equal length, and piecewise-linear functions are used to approximate the control variables. The parameter settings for all the algorithms are the same as those in Section 4. The maximum number of function evaluations is set to 10000 on these three problems.
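The piecewise-linear CVP discretization can be sketched as follows. Encoding the decision variables as control values at the stage boundaries (one node per boundary) is one common convention and is an assumption about the exact parameterization used:

```python
def cvp_control(nodes, t, t_final):
    """Piecewise-linear control u(t) from stage-boundary node values.
    With n_stages equal-length stages there are n_stages + 1 nodes."""
    n_stages = len(nodes) - 1
    if t >= t_final:
        return nodes[-1]
    dt = t_final / n_stages          # equal stage length
    k = int(t / dt)                  # index of the stage containing t
    frac = (t - k * dt) / dt         # position within the stage, in [0, 1)
    return nodes[k] + frac * (nodes[k + 1] - nodes[k])
```

Under this encoding, a 20-stage discretization turns each control variable into a 21-dimensional vector of node values, which is what the MS algorithms optimize.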

5.1. Multimodal CSTR Problem

This problem describes the optimal control of a first-order irreversible chemical reaction carried out in a continuous stirred tank reactor (CSTR) [46]. It has two local optima, and gradient based algorithms often get trapped in the poor one. The mathematical formulation of the problem is given in [46].

Table 8 shows the simulation results of LebTLBO and the other eleven MS algorithms on this problem. Figure 6 shows the optimal control obtained by LebTLBO for this problem. In Table 8, "+", "−", and "=" mean that LebTLBO is better than, worse than, or similar to its competitor, respectively, according to the Wilcoxon rank sum test at the 5% significance level. The best and second best results are highlighted in boldface and italic, respectively. From the table, JADE obtains the best mean value (0.13310), and the proposed LebTLBO obtains the second best mean value (0.13312). Based on the Wilcoxon rank sum test, LebTLBO performs significantly better than all the other algorithms except JADE. In terms of mean performance, JADE ranks first and LebTLBO second, followed by DLTLBO, GOTLBO, IHS, RCBBOG, LETLBO, TLBO, mTLBO, CMAES, CLPSO, and NIWTLBO.

Table 8: Simulation results on the multimodal CSTR problem.
Figure 6: Optimal control for the multimodal CSTR problem.
5.2. Oil Shale Pyrolysis Problem

The oil shale pyrolysis problem is a very challenging optimal control problem [47]. Five chemical reactions take place among four chemical species, yielding a system of differential equations in which x_i denotes the concentration of the ith chemical component. The rate constants follow the Arrhenius form k_i = a_i · exp(−E_i/(R·T)), where R represents the gas constant, with the value 1.9872 × 10^−3 kcal·(g·mol·K)^−1. The values of the preexponential factors a_i and the activation energies E_i are given in [47]. The objective is to determine the optimal temperature profile T(t) by which the final amount of the desired product is maximized.
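The rate-constant evaluation can be sketched as follows, assuming the standard Arrhenius form k_i = a_i · exp(−E_i/(R·T)), which is consistent with the preexponential factors and activation energies referenced above; the numeric a_i and E_i values from [47] are not reproduced here, so the arguments below are placeholders:

```python
import math

R = 1.9872e-3  # gas constant, kcal/(g.mol.K), as given in the text

def rate_constant(a_i, e_i, temp_k):
    """Arrhenius rate constant k_i = a_i * exp(-E_i / (R * T)).
    a_i: preexponential factor; e_i: activation energy in kcal/mol;
    temp_k: absolute temperature in K (the control variable T(t))."""
    return a_i * math.exp(-e_i / (R * temp_k))
```

Because each k_i increases monotonically with temperature, the temperature profile trades off the competing reactions, which is what makes the optimal control nontrivial.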

Table 9 shows the simulation results of LebTLBO and the other eleven MS algorithms. Figure 7 shows the optimal control obtained by LebTLBO for this problem. For this problem, LebTLBO attains the best mean value (0.35366). Based on the Wilcoxon rank sum test, LebTLBO is significantly better than six algorithms (i.e., RCBBOG, IHS, CMAES, CLPSO, JADE, and DLTLBO) and similar to the other TLBO algorithms. In terms of mean performance, LebTLBO and mTLBO rank first, followed by JADE, LETLBO, GOTLBO, TLBO, DLTLBO, IHS, NIWTLBO, RCBBOG, CMAES, and CLPSO.

Table 9: Simulation results on the oil shale pyrolysis problem.
Figure 7: Optimal control for the oil shale pyrolysis problem.
5.3. Six-Plate Gas Absorption Tower

This problem consists of determining two optimal control variables in a nonlinear six-plate gas absorption tower [48], and it has seldom been studied in the literature. The mathematical formulation of the problem is given in [48].

Table 10 shows the simulation results of LebTLBO and the other eleven MS algorithms on this problem. Figure 8 shows the optimal control obtained by LebTLBO for this problem. As shown in Table 10, JADE and mTLBO obtain the best mean value (0.11243), and the proposed LebTLBO obtains a very close value (0.11244). Based on the Wilcoxon rank sum test, LebTLBO performs significantly better than eight algorithms (i.e., RCBBOG, IHS, CMAES, CLPSO, TLBO, DLTLBO, NIWTLBO, and GOTLBO), similar to LETLBO, and worse than the other two algorithms (i.e., JADE and mTLBO). In terms of mean performance, JADE and mTLBO rank first and LebTLBO and LETLBO third, followed by IHS, RCBBOG, GOTLBO, TLBO, DLTLBO, NIWTLBO, CLPSO, and CMAES.

Table 10: Simulation results on the six-plate gas absorption tower problem.
Figure 8: Optimal control for the six-plate gas absorption tower problem.

6. Discussion and Conclusions

TLBO is a metaheuristic search algorithm that simulates the teaching and learning process in a classroom. In the basic TLBO, all the learners have the same probability of getting knowledge from others in both the teacher phase and the learner phase. However, in the real world, learners are different, and each learner's learning enthusiasm is not the same. Usually, learners with good grades have high learning enthusiasm, while learners with bad grades have low learning enthusiasm. Motivated by this consideration, this study has introduced a learning enthusiasm mechanism into the basic TLBO and proposed a learning enthusiasm based TLBO (LebTLBO).

In the proposed LebTLBO, a learning enthusiasm value is assigned to each learner according to the learner's grade, and each learner decides whether to learn from the others with a probability given by this value. In the learning enthusiasm based teaching and learning strategies, good learners have larger probabilities of getting knowledge from others, since they have high learning enthusiasm values, while poor learners have relatively low probabilities of getting knowledge from others. This learning enthusiasm based strategy may improve the search efficiency of TLBO. In addition, a poor student tutoring phase is introduced to improve the quality of the poor learners.
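The rank-based assignment described above can be sketched as follows; the linear mapping from fitness rank to learning enthusiasm and the bounds `le_min` and `le_max` are illustrative assumptions, not necessarily the paper's exact model:

```python
import random

def learning_enthusiasm(fitness, le_min=0.3, le_max=1.0):
    """Assign a learning-enthusiasm value to each learner by fitness rank
    (minimization: smaller fitness is better).  The best learner receives
    le_max, the worst le_min, with a linear decay in between; this linear
    rank model is an illustrative assumption."""
    n = len(fitness)
    order = sorted(range(n), key=lambda i: fitness[i])
    le = [0.0] * n
    for rank, i in enumerate(order):
        le[i] = le_max - (le_max - le_min) * rank / (n - 1)
    return le

def wants_to_learn(le_i, rng=random):
    # A learner performs the teacher/learner-phase update only with
    # probability equal to its learning enthusiasm.
    return rng.random() < le_i
```

In each generation, learners skipped by `wants_to_learn` keep their current position, so good learners (high enthusiasm) are updated more often than poor ones, who are instead handled by the poor student tutoring phase.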

LebTLBO has been evaluated on the CEC2014 benchmark functions and compared with six other TLBO algorithms, including TLBO, mTLBO, DLTLBO, NIWTLBO, LETLBO, and GOTLBO. The computational results and statistical tests show that LebTLBO achieves the best performance among all the TLBO algorithms, which can be attributed to its learning enthusiasm mechanism. LebTLBO has also been compared with five other well-established MS algorithms. The results show that LebTLBO performs better than RCBBOG, IHS, and CLPSO, similar to CMAES, and worse than JADE. It should be noted that the aim of this work is to introduce the learning enthusiasm mechanism and propose a new TLBO variant, rather than to propose a universally best algorithm. Indeed, the No Free Lunch theorem indicates that, for any algorithm, any elevated performance over one class of problems is exactly paid for in performance over another class [9]. Therefore, it is impossible to design an algorithm that is best for all problems.

We also applied LebTLBO to solve three optimal control problems of chemical engineering processes. The promising results show that LebTLBO is a strong alternative for dealing with optimal control problems and may have potential for other real-world optimization problems.

In our future work, we plan to apply LebTLBO to solve other real-world optimization problems such as economic load dispatch and job-shop scheduling. In addition, the learning enthusiasm model can be modified to further improve the performance of LebTLBO. Finally, the learning enthusiasm strategy may be used to enhance the performance of multiobjective TLBO.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported in part by the Natural Science Foundation of Jiangsu Province (Grant no. BK20160540), the China Postdoctoral Science Foundation (Grant no. 2016M591783), the National Natural Science Foundation of China (Grant no. 61703268), the Research Talents Startup Foundation of Jiangsu University (Grant no. 15JDG139), and the Fundamental Research Funds for the Central Universities (Grant no. 222201717006).

References

  1. C. García-Martínez, M. Lozano, F. Herrera, D. Molina, and A. M. Sánchez, “Global and local real-coded genetic algorithms based on parent-centric crossover operators,” European Journal of Operational Research, vol. 185, no. 3, pp. 1088–1113, 2008.
  2. N. Hansen and A. Ostermeier, “Completely derandomized self-adaptation in evolution strategies,” Evolutionary Computation, vol. 9, no. 2, pp. 159–195, 2001.
  3. J. Q. Zhang and A. C. Sanderson, “JADE: adaptive differential evolution with optional external archive,” IEEE Transactions on Evolutionary Computation, vol. 13, no. 5, pp. 945–958, 2009.
  4. J. J. Liang, A. K. Qin, P. N. Suganthan, and S. Baskar, “Comprehensive learning particle swarm optimizer for global optimization of multimodal functions,” IEEE Transactions on Evolutionary Computation, vol. 10, no. 3, pp. 281–295, 2006.
  5. X. Chen, H. Tianfield, C. Mei, W. Du, and G. Liu, “Biogeography-based learning particle swarm optimization,” Soft Computing, vol. 21, no. 24, pp. 7519–7541, 2017.
  6. M. Mahdavi, M. Fesanghary, and E. Damangir, “An improved harmony search algorithm for solving optimization problems,” Applied Mathematics and Computation, vol. 188, no. 2, pp. 1567–1579, 2007.
  7. W. Gong, Z. Cai, C. X. Ling, and H. Li, “A real-coded biogeography-based optimization with mutation,” Applied Mathematics and Computation, vol. 216, no. 9, pp. 2749–2758, 2010.
  8. X. Chen, H. Tianfield, W. Du, and G. Liu, “Biogeography-based optimization with covariance matrix based migration,” Applied Soft Computing, vol. 45, pp. 71–85, 2016.
  9. D. H. Wolpert and W. G. Macready, “No free lunch theorems for optimization,” IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 67–82, 1997.
  10. X. Xia, C. Xie, B. Wei, Z. Hu, B. Wang, and C. Jin, “Particle swarm optimization using multi-level adaptation and purposeful detection operators,” Information Sciences, vol. 385-386, pp. 174–195, 2017.
  11. R. V. Rao, V. J. Savsani, and D. P. Vakharia, “Teaching-learning-based optimization: an optimization method for continuous non-linear large scale problems,” Information Sciences, vol. 183, no. 1, pp. 1–15, 2012.
  12. R. V. Rao, Teaching Learning Based Optimization and Its Engineering Applications, Springer, Basel, Switzerland, 2016.
  13. K. Yu, X. Wang, and Z. Wang, “Constrained optimization based on improved teaching-learning-based optimization algorithm,” Information Sciences, vol. 352-353, pp. 61–78, 2016.
  14. V. K. Patel and V. J. Savsani, “A multi-objective improved teaching-learning based optimization algorithm (MO-ITLBO),” Information Sciences, vol. 357, pp. 182–200, 2016.
  15. S. Biswas, S. Kundu, D. Bose, and S. Das, “Cooperative co-evolutionary teaching-learning based algorithm with a modified exploration strategy for large scale global optimization,” Lecture Notes in Computer Science, vol. 7677, pp. 467–475, 2012.
  16. X. Chen, C. Mei, B. Xu, K. Yu, and X. Huang, “Quadratic interpolation based teaching-learning-based optimization for chemical dynamic system optimization,” Knowledge-Based Systems, vol. 145, pp. 250–263, 2018.
  17. D. Chen, R. Lu, F. Zou, and S. Li, “Teaching-learning-based optimization with variable-population scheme and its application for ANN and global optimization,” Neurocomputing, vol. 173, pp. 1096–1111, 2016.
  18. T. Niknam, F. Golestaneh, and M. S. Sadeghi, “θ-Multiobjective teaching-learning-based optimization for dynamic economic emission dispatch,” IEEE Systems Journal, vol. 6, no. 2, pp. 341–352, 2012.
  19. H. S. Keesari and R. V. Rao, “Optimization of job shop scheduling problems using teaching-learning-based optimization algorithm,” OPSEARCH, vol. 51, no. 4, pp. 545–561, 2013.
  20. R. V. Rao and V. Patel, “Multi-objective optimization of heat exchangers using a modified teaching-learning-based optimization algorithm,” Applied Mathematical Modelling, vol. 37, no. 3, pp. 1147–1162, 2013.
  21. S. C. Satapathy, A. Naik, and K. Parvathi, “Weighted teaching-learning-based optimization for global function optimization,” Applied Mathematics, vol. 4, no. 3, pp. 429–439, 2013.
  22. Z.-S. Wu, W.-P. Fu, and R. Xue, “Nonlinear inertia weighted teaching-learning-based optimization for solving global optimization problem,” Computational Intelligence and Neuroscience, vol. 2015, Article ID 292576, 2015.
  23. S. C. Satapathy and A. Naik, “Modified teaching-learning-based optimization algorithm for global numerical optimization—a comparative study,” Swarm and Evolutionary Computation, vol. 16, pp. 28–37, 2014.
  24. F. Zou, L. Wang, X. Hei, and D. Chen, “Teaching-learning-based optimization with learning experience of other learners and its application,” Applied Soft Computing, vol. 37, pp. 725–736, 2015.
  25. V. Patel and V. Savsani, “Multi-objective optimization of a Stirling heat engine using TS-TLBO (tutorial training and self learning inspired teaching-learning based optimization) algorithm,” Energy, vol. 95, pp. 528–541, 2016.
  26. M. Ghasemi, M. Taghizadeh, S. Ghavidel, J. Aghaei, and A. Abbasian, “Solving optimal reactive power dispatch problem using a novel teaching-learning-based optimization algorithm,” Engineering Applications of Artificial Intelligence, vol. 39, pp. 100–108, 2015.
  27. H.-B. Ouyang, L.-Q. Gao, X.-Y. Kong, D.-X. Zou, and S. Li, “Teaching-learning based optimization with global crossover for global optimization problems,” Applied Mathematics and Computation, vol. 265, pp. 533–556, 2015.
  28. S. Tuo, L. Yong, and T. Zhou, “An improved harmony search based on teaching-learning strategy for unconstrained optimization problems,” Mathematical Problems in Engineering, vol. 2013, Article ID 413565, 29 pages, 2013.
  29. W. H. Lim and N. A. M. Isa, “Teaching and peer-learning particle swarm optimization,” Applied Soft Computing, vol. 18, pp. 39–58, 2014.
  30. J. D. Huang, L. Gao, and X. Y. Li, “An effective teaching-learning-based cuckoo search algorithm for parameter optimization problems in structure designing and machining processes,” Applied Soft Computing, vol. 36, pp. 349–356, 2015.
  31. F. Zou, L. Wang, D. Chen, and X. Hei, “An improved teaching-learning-based optimization with differential learning and its application,” Mathematical Problems in Engineering, vol. 2015, Article ID 754562, 19 pages, 2015.
  32. L. Wang, F. Zou, X. Hei et al., “A hybridization of teaching–learning-based optimization and differential evolution for chaotic time series prediction,” Neural Computing and Applications, vol. 25, no. 6, pp. 1407–1422, 2014.
  33. Q. Zhang, G. Yu, and H. Song, “A hybrid bird mating optimizer algorithm with teaching-learning-based optimization for global numerical optimization,” Statistics, Optimization and Information Computing, vol. 3, no. 1, pp. 54–65, 2015.
  34. M. Güçyetmez and E. Çam, “A new hybrid algorithm with genetic-teaching learning optimization (G-TLBO) technique for optimizing of power flow in wind-thermal power systems,” Electrical Engineering, vol. 98, no. 2, pp. 145–157, 2016.
  35. X. Chen, B. Xu, C. Mei, Y. Ding, and K. Li, “Teaching–learning–based artificial bee colony for solar photovoltaic parameter estimation,” Applied Energy, vol. 212, pp. 1578–1588, 2018.
  36. F. Zou, L. Wang, X. Hei, D. Chen, and D. Yang, “Teaching-learning-based optimization with dynamic group strategy for global optimization,” Information Sciences, vol. 273, pp. 112–131, 2014.
  37. D. Chen, F. Zou, J. Wang, and W. Yuan, “SAMCCTLBO: a multi-class cooperative teaching–learning-based optimization algorithm with simulated annealing,” Soft Computing, pp. 1–23, 2015.
  38. L. Wang, F. Zou, X. Hei, D. Yang, D. Chen, and Q. Jiang, “An improved teaching-learning-based optimization with neighborhood search for applications of ANN,” Neurocomputing, vol. 143, pp. 231–247, 2014.
  39. F. Zou, D. Chen, R. Lu, and P. Wang, “Hierarchical multi-swarm cooperative teaching-learning-based optimization for global optimization,” Soft Computing, pp. 1–22, 2016.
  40. R. Storn and K. Price, “Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces,” Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.
  41. J. Liang, B. Qu, and P. Suganthan, “Problem definitions and evaluation criteria for the CEC 2014 special session and competition on single objective real-parameter numerical optimization,” Technical Report, Computational Intelligence Laboratory, Zhengzhou University, Zhengzhou, China, and Nanyang Technological University, Singapore, 2013.
  42. P. N. Suganthan, N. Hansen, J. J. Liang et al., “Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization,” KanGAL report, 2005.
  43. J. Alcalá-Fdez, L. Sánchez, S. García et al., “KEEL: a software tool to assess evolutionary algorithms for data mining problems,” Soft Computing, vol. 13, no. 3, pp. 307–318, 2009.
  44. X. Chen, K. Yu, W. Du, W. Zhao, and G. Liu, “Parameters identification of solar cell models using generalized oppositional teaching learning based optimization,” Energy, vol. 99, pp. 170–180, 2016.
  45. E. Balsa Canto, J. R. Banga, A. A. Alonso, and V. S. Vassiliadis, “Restricted second order information for the solution of optimal control problems using control vector parameterization,” Journal of Process Control, vol. 12, no. 2, pp. 243–255, 2002.
  46. X. Chen, W. Du, H. Tianfield, R. Qi, W. He, and F. Qian, “Dynamic optimization of industrial processes with nonuniform discretization-based control vector parameterization,” IEEE Transactions on Automation Science and Engineering, vol. 11, no. 4, pp. 1289–1299, 2014.
  47. D.-Y. Sun, P.-M. Lin, and S.-P. Lin, “Integrating controlled random search into the line-up competition algorithm to solve unsteady operation problems,” Industrial & Engineering Chemistry Research, vol. 47, no. 22, pp. 8869–8887, 2008.
  48. F.-S. Wang and J.-P. Chiou, “Nonlinear optimal control and optimal parameter selection by a modified reduced gradient method,” Engineering Optimization, vol. 28, no. 4, pp. 273–298, 1997.