Abstract

The teaching-learning-based optimization (TLBO) algorithm is a population-based optimization algorithm inspired by the influence of a teacher on the output of learners in a class. In this paper, a variant of TLBO with differential learning (DLTLBO) is proposed. DLTLBO uses a neighborhood-search version of the teacher phase of the standard TLBO to generate one mutation vector and differential learning to generate another. It then applies a crossover operation to the two vectors to generate new solutions and thereby increase the diversity of the population. By integrating local search and global search, DLTLBO achieves a tradeoff between exploration and exploitation. To demonstrate the effectiveness of the approach, 24 benchmark functions are used for simulation and testing. Moreover, DLTLBO is applied to parameter estimation of digital IIR filters, and experimental results show that DLTLBO is superior or comparable to the other compared algorithms on the employed examples.

1. Introduction

TLBO is a recently proposed population-based algorithm that simulates the teaching-learning process in a classroom. Rao et al. proposed TLBO for the optimization of mechanical design problems [1] and then applied TLBO to find global solutions to large-scale nonlinear optimization problems [2]. Toğan [3] presented a design procedure that employed TLBO for the discrete optimization of planar steel frames. In [4], an elitist concept is introduced into the TLBO algorithm and its effect on the performance of the algorithm is investigated, together with the effects of common control parameters such as the population size and the number of generations. Degertekin and Hayalioglu [5] applied TLBO to the optimization of four truss structures. Rao and Patel [6] proposed an improved TLBO by introducing the concepts of the number of teachers, an adaptive teaching factor, tutorial training, and self-motivated learning. Niknam et al. [7] presented a θ-multiobjective TLBO algorithm to solve the dynamic economic emission dispatch problem, where the optimization procedure considers phase angles attributed to the real values of the design parameters instead of the design parameters themselves. Zou et al. [8] proposed an improved teaching-learning-based optimization algorithm with a dynamic group strategy for global optimization problems. In [9], a ring neighborhood topology is introduced into the original TLBO algorithm to maintain the exploration ability of the population. Details of the conceptual basis of TLBO were given by Waghmare [10]. TLBO has emerged as one of the simplest and most efficient techniques, as it has been empirically shown to perform well on many optimization problems. Nevertheless, in evolutionary computation research there have always been attempts to improve any given findings further.

The problem of IIR system identification can also be viewed as a problem of parameter estimation in adaptive digital filtering. In adaptive digital IIR filter design, the objective of the adaptation is to adjust the coefficients of a digital filter so as to estimate the actual parameter values of an unknown system from its inputs and outputs. However, there are some potential difficulties in the design of adaptive digital IIR filters [11–13]. A major concern is the mean square error between the desired response and the estimated filter output of the digital IIR filter. This error surface is generally nonquadratic and multimodal with respect to the filter coefficients, and hence optimization algorithms are required to find a better global solution. To avoid the local optima encountered by gradient descent methods in IIR system identification, the problem has recently been treated as an optimization problem, and evolutionary and swarm intelligence algorithms, such as Genetic Algorithms (GA) [14–16], Simulated Annealing (SA) [17], Tabu Search (TS) [18], Differential Evolution (DE) [19–21], Ant Colony Optimization (ACO) [22], the Artificial Bee Colony (ABC) algorithm [23], Particle Swarm Optimization (PSO) [24, 25], the Gravitational Search Algorithm (GSA) [26, 27], and Cuckoo Search optimization [28], have been used to study alternative structures and algorithms for adaptive digital IIR filters.

In order to find suitable filter coefficients for the digital IIR filter efficiently, this paper develops a teaching-learning-based optimization (TLBO) approach. First, an improved TLBO with differential learning (DLTLBO) is proposed. In this method, DLTLBO combines the learning strategy based on neighborhood search in the teacher phase of the standard TLBO with differential learning to generate two new mutation vectors, and it employs a crossover operation to generate new solutions so as to increase the diversity of the population. Then, to evaluate the optimization performance of the proposed algorithm, a comparison study on 24 benchmark optimization problems is given. Moreover, DLTLBO is applied to the design of digital IIR filters, and its performance is compared with that of other algorithms. The simulation results show that DLTLBO has better or comparable performance compared with the other algorithms on the employed examples.

The remainder of this paper is organized as follows. Section 2 describes the teaching-learning-based optimization. Section 3 presents the improved teaching-learning-based optimization with differential learning (DLTLBO). In Section 4, 24 benchmark functions are tested and the experiments are conducted along with statistical tests. DLTLBO is applied to the design of the digital IIR filter in Section 5. Conclusions are given in Section 6.

2. Teaching-Learning-Based Optimization

Inspired by the philosophy of the classical school teaching and learning process, Rao et al. [1, 2] first proposed teaching-learning-based optimization (TLBO). In this method, the population consists of the learners in a class, and the design variables are the courses offered. A brief description of the algorithm is given as follows.

2.1. Teacher Phase

During the teacher phase, each learner moves toward the position of the best learner X_teacher (the teacher), taking into account the current mean value X_mean of the learners, which represents the average quality of all learners in the population. The update of a learner X_i in the teacher phase is given by

X_new,i = X_old,i + r · (X_teacher − TF · X_mean),  (1)

where TF is a teaching factor that decides the amount of change and r is a random vector in which each element is a random number in the range [0, 1]. The value of TF can be either 1 or 2, which is again a heuristic step and is decided randomly with equal probability as

TF = round(1 + rand(0, 1)).  (2)

Learner modification is expressed as follows:

for each learner X_i
    X_new,i = X_old,i + r · (X_teacher − TF · X_mean)
    Accept X_new,i if it gives a better function value
endfor
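The teacher phase can be sketched in a few lines of Python (a minimal sketch assuming minimization; the array shapes and function names are illustrative, not from the paper):

```python
import numpy as np

def teacher_phase(pop, fitness, f):
    """One TLBO teacher phase over a population.

    pop: (NP, D) array of learners; fitness: (NP,) array of objective
    values f(pop[i]); f: objective function to minimize.
    Implements X_new = X_old + r * (X_teacher - TF * X_mean).
    """
    teacher = pop[np.argmin(fitness)]        # best learner acts as teacher
    mean = pop.mean(axis=0)                  # mean learner of the class
    for i in range(len(pop)):
        TF = np.round(1 + np.random.rand())  # teaching factor, 1 or 2
        r = np.random.rand(pop.shape[1])     # random vector in [0, 1]
        candidate = pop[i] + r * (teacher - TF * mean)
        cand_fit = f(candidate)
        if cand_fit < fitness[i]:            # greedy acceptance
            pop[i], fitness[i] = candidate, cand_fit
    return pop, fitness
```

Because of the greedy acceptance, the best fitness in the population can never get worse from one phase to the next.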

2.2. Learner Phase

During the learner phase, learners increase their knowledge by interacting with each other. A learner learns something new if the other learner has more knowledge than him or her:

X_new,i = X_old,i + r · (X_i − X_j), if f(X_i) is better than f(X_j),
X_new,i = X_old,i + r · (X_j − X_i), otherwise,  (3)

where r is a random vector in which each element is a random number in the range [0, 1], f(X_i) is the fitness value of the learner X_i, and f(X_j) is the fitness value of the learner X_j.

Learner modification is expressed as follows:

for each learner X_i of the class
    Randomly select one learner X_j, such that i ≠ j
    if f(X_i) is better than f(X_j): X_new,i = X_old,i + r · (X_i − X_j)
    else: X_new,i = X_old,i + r · (X_j − X_i)
    Accept X_new,i if it gives a better function value
end for
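A matching sketch of the learner phase (again assuming minimization; names and shapes are illustrative):

```python
import numpy as np

def learner_phase(pop, fitness, f):
    """One TLBO learner phase: each learner interacts with a random peer.

    Moves X_i away from a worse peer X_j, or toward a better one, per
    X_new = X_old + r * (X_i - X_j) or X_new = X_old + r * (X_j - X_i).
    """
    NP, D = pop.shape
    for i in range(NP):
        j = np.random.choice([k for k in range(NP) if k != i])
        r = np.random.rand(D)
        if fitness[i] < fitness[j]:          # X_i already knows more
            candidate = pop[i] + r * (pop[i] - pop[j])
        else:                                # learn from the better X_j
            candidate = pop[i] + r * (pop[j] - pop[i])
        cand_fit = f(candidate)
        if cand_fit < fitness[i]:            # greedy acceptance
            pop[i], fitness[i] = candidate, cand_fit
    return pop, fitness
```
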

3. Proposed Algorithm: DLTLBO

The search in TLBO is mainly based on global information, while DE is mainly based on distance and direction information, which is a kind of local information [29]. To balance the global and local search abilities, a modified interactive learning strategy is proposed for the teacher phase. In this method, DLTLBO uses the learning strategy based on neighborhood search in the teacher phase of the standard TLBO to generate one new mutation vector, while using differential learning to generate another new mutation vector. Different from our previous method [30], DLTLBO employs the DE crossover operation to generate new solutions so as to increase the diversity of the population. The complete flowchart of the DLTLBO algorithm is shown in Figure 1.

In the proposed DLTLBO algorithm, for each learner X_i, an associated mutant vector V1_i based on the TLBO-style neighborhood search can be generated as follows:

V1_i = X_i + r · (T_i − TF · M_i),  (4)

where T_i and M_i are the teacher and the mean of the learner's corresponding neighborhood, respectively. In this paper, a ring neighborhood topology with a fixed neighborhood radius is used to determine T_i and M_i in the neighborhood area of each learner.

For each learner X_i, the other associated mutant vector V2_i can be generated by differential mutation as follows:

V2_i = X_r1 + F · (X_r2 − X_r3),  (5)

where X_r1, X_r2, and X_r3 are three learners randomly selected from the current class with r1 ≠ r2 ≠ r3 ≠ i, and F is the scaling factor.

After the mutation phase, a crossover operation is applied to each pair of mutant vectors in order to enhance the potential diversity of the population. That is, the update of a learner in the teacher phase is given as follows:

U_ij = V1_ij if rand_j < CR, otherwise U_ij = V2_ij,  (6)

where CR is a parameter called the crossover probability, V1_i is the vector calculated according to (4), and V2_i is the vector calculated according to (5).

Finally, one-to-one greedy selection is employed by comparing each parent X_i with its corresponding offspring U_i.
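The neighborhood-based mutation, differential mutation, crossover, and greedy selection described above can be sketched as follows (a sketch, not the authors' code; the ring-neighborhood indexing and the default parameter values are assumptions for illustration):

```python
import numpy as np

def dltlbo_teacher_phase(pop, fitness, f, F=0.9, CR=0.1, radius=1):
    """DLTLBO teacher phase: per learner, build a TLBO-style vector V1
    from its ring neighborhood and a DE-style vector V2, recombine them
    dimension-wise with probability CR, then apply greedy selection."""
    NP, D = pop.shape
    for i in range(NP):
        # ring neighborhood of learner i (wraps around the population)
        idx = [(i + k) % NP for k in range(-radius, radius + 1)]
        neigh, nfit = pop[idx], fitness[idx]
        t = neigh[np.argmin(nfit)]              # neighborhood teacher
        m = neigh.mean(axis=0)                  # neighborhood mean
        TF = np.round(1 + np.random.rand())     # teaching factor, 1 or 2
        v1 = pop[i] + np.random.rand(D) * (t - TF * m)
        r1, r2, r3 = np.random.choice(
            [k for k in range(NP) if k != i], 3, replace=False)
        v2 = pop[r1] + F * (pop[r2] - pop[r3])  # differential learning
        u = np.where(np.random.rand(D) < CR, v1, v2)
        u_fit = f(u)
        if u_fit < fitness[i]:                  # one-to-one greedy selection
            pop[i], fitness[i] = u, u_fit
    return pop, fitness
```

With radius = 1, each neighborhood contains three learners, matching the neighborhood size used in the experiments.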

In the learner phase, the interactive learning of the original TLBO algorithm is still used in our DLTLBO algorithm. In this method, each learner randomly learns from another learner in the population, which can be treated as a global search strategy. The pseudocode for the implementation of DLTLBO is summarized in Algorithm 1.

Begin
    Initialize NP (number of learners), D (number of dimensions), F and CR
    Initialize the learners X and evaluate all learners
    Denote the teacher T_i and mean M_i of each learner's corresponding neighborhood
    while (stopping condition not met)
        for each learner X_i of the class    % Teacher phase
            TF = round(1 + rand(0, 1))
            for j = 1 to D
                V1_ij = X_ij + rand · (T_ij − TF · M_ij)
            endfor
            Select three learners X_r1, X_r2, X_r3 randomly from the current class, where r1 ≠ r2 ≠ r3 ≠ i
            V2_i = X_r1 + F · (X_r2 − X_r3)
            for j = 1 to D
                if rand < CR, U_ij = V1_ij; else, U_ij = V2_ij; endif
            endfor
            Accept U_i if f(U_i) is better than f(X_i)
        endfor
        for each learner X_i of the class    % Learner phase
            Randomly select one learner X_j, such that i ≠ j
            if f(X_i) better than f(X_j)
                for k = 1 to D
                    U_ik = X_ik + rand · (X_ik − X_jk)
                endfor
            else
                for k = 1 to D
                    U_ik = X_ik + rand · (X_jk − X_ik)
                endfor
            endif
            Accept U_i if f(U_i) is better than f(X_i)
        endfor
    endwhile
end

As shown in Algorithm 1, the probability CR determines which learning strategy each learner adopts in the teacher phase of the DLTLBO algorithm. That is, a random number between 0 and 1 is generated for each dimension of each learner; if the random number is less than CR, the learning strategy of the original TLBO is adopted; otherwise, the differential learning strategy is adopted. This helps balance the advantages of a fast convergence rate (the attraction of the TLBO learning strategy) and exploration of local areas (the neighborhood search strategy and differential learning) in DLTLBO. By this means, DLTLBO achieves a tradeoff between exploration and exploitation, improves the global search capability, and accelerates convergence.

4. Functions Optimization

In this section, the performance of DLTLBO is compared with that of six other algorithms. The details of the benchmark functions, the simulation settings, and the experimental results are given as follows.

4.1. Benchmark Functions

The details of the 24 benchmark functions are shown in Table 1. Among them, the first group are unimodal functions, the second group are multimodal functions, and the remaining functions are rotated versions of some of the preceding functions; several of the functions are taken from the CEC 2008 test suite. The search ranges and theoretical optima for all functions are also shown in Table 1.

4.2. Parameter Settings

In this paper, all the experiments are carried out on the same machine with a Celeron 2.26 GHz CPU, 2 GB of memory, and the Windows XP operating system, using MATLAB 7.9. To reduce statistical errors, each function is independently run 50 times, and the mean results are used in the comparison. For all approaches, the population size is set to 50, and the maximal number of cost function evaluations (FEs) is used as the termination condition, namely, 100,000 for 10D problems. The parameter settings for all algorithms in the comparison are taken from their corresponding literature and are described as follows:
(i) jDE [31];
(ii) SaDE [32]: LP = 50;
(iii) PSO-wFIPS [33]: w = 0.7298;
(iv) PSO-FDR [34]: wmin = 0.4, wmax = 0.9, ψ1 = 1, ψ2 = 1, ψ3 = 2;
(v) ETLBO [4]: elite size = 2;
(vi) proposed DLTLBO: neighborhood size = 3.

4.3. Comparisons on the Solution Accuracy

The results for the 10D problems on the 24 test functions are shown in Table 2 in terms of the average best solution and the standard deviation of the solutions obtained in the 50 independent runs by each algorithm, where “w/t/l” in the last row of the table summarizes the competition results between DLTLBO and the other algorithms, meaning that DLTLBO wins on w functions, ties on t functions, and loses on l functions. The best results among the algorithms are shown in bold. Figure 2 presents the convergence graphs of different benchmark functions in terms of the median fitness values achieved by the 7 algorithms over the 50 runs.

The comparisons in Table 2 show that DLTLBO outperforms jDE on 10 functions, while jDE achieves better results on 10 functions. SaDE obtains better performance on 11 functions, while DLTLBO outperforms SaDE on 9 functions. PSO-wFIPS achieves better results than DLTLBO on 5 functions, while DLTLBO performs better on the remaining 19 functions. FDR-PSO obtains better performance than DLTLBO on only three functions, while DLTLBO outperforms FDR-PSO on 19 functions and ties on the rest. DLTLBO outperforms TLBO on 16 functions and outperforms ETLBO on 17 functions, while TLBO, ETLBO, and DLTLBO all perform well on two of the functions. It can be concluded that the proposed DLTLBO significantly improves the algorithm's performance, although DLTLBO is not always better than the other algorithms.

5. IIR System Identification Using DLTLBO

The goal of filter modeling is to adjust the coefficients of a digital filter so as to match an unknown system's transfer function. Evolutionary algorithms are heuristic search methods characterized by stochastic, adaptive, and learning features, which makes them suitable for producing intelligent optimization schemes. In this section, DLTLBO is used to optimize the coefficient vector of the digital IIR filter.

5.1. Digital IIR Filter Identification

Application of IIR filters in system identification has been widely studied, since many problems encountered in signal processing can be characterized as system identification problems. Therefore, in the simulation study, digital IIR filters are designed for the system identification purpose. In this case, the parameters of the filters are successively adjusted by evolutionary optimization algorithms until the error between the output of the filter and that of the unknown system is minimized. In general, we expect the number of zero points to be more than the number of pole points. A schematic of the filter modeling problem using evolutionary algorithms is shown in Figure 3.

The general expression of a digital IIR filter is given by the following difference equation [35]:

y(n) = Σ_{k=0}^{L} b_k x(n − k) − Σ_{k=1}^{M} a_k y(n − k),  (7)

where x(n) is the input of the digital IIR filter at time n, y(n) is the output of the digital IIR filter at time n, M and L are the denominator and numerator orders, M is the filter's order, a_k and b_k are the adjustable filter coefficients of the model, and a_0 = 1. The number of digital IIR filter coefficients is L + M + 1.

For adaptive digital IIR filter design, the objective of the adaptation is to adjust the coefficients of a digital filter so as to estimate the actual parameter values of an unknown system from its inputs and outputs. The design of this filter can be considered as a minimization problem over a cost function. The most commonly used approach to digital IIR filter design is to formulate the problem as an optimization problem with the mean square error (MSE) as the cost function to be minimized:

J(θ) = (1/N_s) Σ_{n=1}^{N_s} (d(n) − y(n))²,  (8)

where θ denotes the digital IIR filter coefficient vector, N_s is the number of samples used for the computation of the time-averaged cost function, and d(n) and y(n) are the filter's ideal and actual responses, respectively.
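The difference equation and the MSE cost above can be combined into a sketch of the fitness evaluation used in system identification (a sketch only; the packing of θ into numerator and denominator parts is an illustrative assumption, not the paper's exact encoding):

```python
import numpy as np

def iir_filter(b, a, x):
    """Direct-form IIR: y(n) = sum_k b_k x(n-k) - sum_k a_k y(n-k),
    with b = [b_0..b_L] and a = [a_1..a_M] (a_0 = 1 implied)."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = sum(b[k] * x[n - k] for k in range(len(b)) if n >= k)
        y[n] -= sum(a[k - 1] * y[n - k]
                    for k in range(1, len(a) + 1) if n >= k)
    return y

def mse_cost(theta, L, x, d):
    """Time-averaged MSE J(theta) = (1/Ns) * sum (d(n) - y(n))^2 for a
    candidate coefficient vector theta = [b_0..b_L, a_1..a_M]."""
    b, a = theta[:L + 1], theta[L + 1:]
    y = iir_filter(b, a, x)
    return float(np.mean((np.asarray(d) - y) ** 2))
```

When θ equals the unknown plant's true coefficients, the cost is exactly zero, which is the global minimum sought by the optimizer.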

To achieve higher stop band attenuation and better control over the transition width, a frequency-domain cost function taken from [27], denoted (9), is minimized. It accumulates the deviations of the filter's actual frequency response H(ω) from the ideal response H_I(ω), where δ_p and δ_s are the allowed ripples in the pass band and stop band, ω_p and ω_s are the pass band and stop band normalized cutoff frequencies, respectively, and N_p and N_s are the numbers of samples used in the pass band and stop band. In this case, the best fitness indicates that the magnitude response of the solution is within the ripple δ_p in the pass band and below δ_s in the stop band.
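Evaluating such a frequency-domain cost requires sampling the magnitude response |H(e^{jω})| of a candidate filter. One way to do this for direct-form coefficients, under the same convention as above (a_0 = 1; a sketch, not the routine used in [27]):

```python
import numpy as np

def magnitude_response(b, a, n_points=128):
    """Sample |H(e^{jw})| = |B(e^{jw}) / A(e^{jw})| on [0, pi];
    b = [b_0..b_L], a = [a_1..a_M] with a_0 = 1 implied."""
    w = np.linspace(0.0, np.pi, n_points)
    # evaluate numerator and denominator polynomials at z = e^{jw}
    num = np.exp(-1j * np.outer(w, np.arange(len(b)))) @ np.asarray(b)
    den = 1.0 + np.exp(-1j * np.outer(w, np.arange(1, len(a) + 1))) @ np.asarray(a)
    return w, np.abs(num / den)
```

The pass band and stop band deviations in (9) can then be computed by comparing these samples with the ideal response over the corresponding frequency ranges.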

5.2. Description of the DLTLBO for the Digital IIR Filter

The steps of the DLTLBO for the digital IIR filter are given as follows.

Step 1. Initialize every real-coded learner of the population. Each learner consists of the numerator filter coefficients and the denominator filter coefficients; the total number of filter coefficients is L + M + 1 for the filter to be designed. To maintain the stability of the IIR filter, the efficient approach of [17] is adopted: the coefficients are constrained so that all the lattice-reflection coefficients corresponding to the direct-form denominator coefficients have magnitude less than 1.
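The stability constraint can be checked with the standard step-down (Schur-Cohn) recursion, which converts direct-form denominator coefficients into lattice reflection coefficients; the filter is stable if and only if every reflection coefficient has magnitude below 1 (a generic sketch, not the specific routine of [17]):

```python
def reflection_coeffs(a):
    """Step-down recursion: direct-form denominator [a_1..a_M] (with
    a_0 = 1 implied) -> lattice reflection coefficients [k_1..k_M]."""
    a = list(a)
    ks = []
    while a:
        k = a[-1]                    # k_m is the trailing coefficient
        ks.append(k)
        if abs(k) >= 1.0:
            break                    # unstable; stop the recursion
        m = len(a) - 1
        a = [(a[i] - k * a[m - 1 - i]) / (1.0 - k * k) for i in range(m)]
    return ks[::-1]

def is_stable(a):
    """True iff all poles of 1/A(z) lie strictly inside the unit circle."""
    return all(abs(k) < 1.0 for k in reflection_coeffs(a))
```

Candidate learners that violate the constraint can be rejected or re-sampled during initialization and after each update.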

Step 2. Compute the error fitness value for each learner according to the cost function.

Step 3 (teacher phase). Each learner learns from the teacher according to (4)–(6). Compute the error fitness value for each learner and update each learner.

Step 4 (learner phase). Each learner randomly learns from the other learner according to (3). Compute the error fitness value for each learner and update each learner.

Step 5 (termination criterion). Stop if the termination criterion is met; the dimensional values of the global teacher are the coefficients of the designed digital IIR filter; otherwise, repeat from Step 3.

5.3. Experimental Simulation

This section presents an experimental evaluation of DLTLBO for designing digital IIR filters. Several benchmark problems are selected to compare the performance of the GA [36], PSO-wFIPS [33], SaDE [32], TLBO [1, 2], and DLTLBO algorithms. In all cases, the maximal number of FEs is used as the termination condition, namely, 25,000 for GA, PSO-wFIPS, SaDE, TLBO, and DLTLBO; the population size is set to 50, and the initial population is selected randomly. For GA, the crossover rate is set to 0.9 and the mutation rate to 0.1. For PSO-wFIPS, w = 0.7298. For SaDE, CR ~ N(CRm, 0.1) and LP = 50. For our DLTLBO algorithm, the scaling factor F is set to 0.9 and the crossover probability CR is set to 0.1.

Example 1. In the first example, the transfer function of the plant is unknown and only the performance specifications are given in Table 3, including low-pass (LP), high-pass (HP), band-pass (BP), and band-stop (BS) filters.

In this experiment, the filter orders are set to 8 for all IIR LP, HP, BP, and BS filters and the number of samples is set to 128. The minimum and maximum values of the coefficients are −2 and 2. The cost function is given as shown in (9). All the results are averaged over 20 random runs with randomly chosen initial positions. Table 4 shows the maximum values and the mean values of stop band attenuation, the means and standard deviations of MSE, and the means and standard deviations of CPU times. Figure 4 shows the comparison of convergence curves, magnitude responses, and gain responses of five algorithms. In addition, a statistical t-test [21] is given in Table 4.

As seen from Table 4, the proposed 8th-order IIR filter design using DLTLBO spends less CPU running time than all the compared algorithms except GA. For the low-pass (LP) filter, maximum stop band attenuations of 28.13 dB, 30.22 dB, 31.43 dB, 28.74 dB, and 26.21 dB are obtained by GA, PSO-wFIPS, SaDE, TLBO, and DLTLBO, respectively. The statistical t-test between the other algorithms and DLTLBO also indicates that DLTLBO performs well. The comparative curves in Figures 4(a)~4(c) also show that the proposed 8th-order IIR filter design using SaDE attains the highest stop band attenuation, although DLTLBO obtains the least MSE. Figure 4(a) shows the convergence curves for the designed 8th-order IIR LP filter obtained by the different algorithms, Figure 4(b) shows the comparative magnitude plots in dB, and Figure 4(c) presents the comparative normalized gain plots for the 8th-order IIR LP filter. For the high-pass (HP) and band-stop (BS) filters, the proposed 8th-order IIR filter design using DLTLBO attains the highest stop band attenuation and the least MSE. It is also observed from Table 4 and Figure 4 that, for the band-pass (BP) filter, the design using SaDE attains the highest stop band attenuation and the least MSE, while DLTLBO obtains the second highest stop band attenuation and the second least MSE. It can be concluded that DLTLBO achieves good solution accuracy, although DLTLBO needs more CPU running time.

Example 2. In this example (taken from [21, 22, 26]), a second-order IIR system is estimated by a second-order IIR filter. The unknown plant and the filter have the following transfer functions:

The input to the system and the filter is a white sequence of fixed length.

Since the filter order is equal to that of the system, the error surface is unimodal, and the best solution is located at the parameter values of the unknown plant. The cost function is given in (8). All the results are averaged over 20 random runs with randomly chosen initial positions. Table 5 shows the best filter coefficients, the means and standard deviations of the MSE, the means and standard deviations of the CPU times, and the statistical t-test between DLTLBO and the other algorithms. Figure 5 presents the normalized cost function value versus the number of cost function evaluations and the magnitude responses of the algorithms.

From the values of the MSE in Table 5, one can see that PSO-wFIPS, SaDE, TLBO, and DLTLBO can all find the global minimum. From the t-test values, it is also seen that DLTLBO is statistically better than GA. As seen from Table 5 and Figure 5, although DLTLBO spends more time than TLBO when the number of FEs reaches the given maximum, DLTLBO has a faster convergence speed than the other algorithms. It can be concluded that DLTLBO achieves good solution accuracy.

Example 3. In this example (taken from [17, 18, 21, 22, 26]), a second-order IIR system is estimated by a first-order IIR filter. The unknown plant and the filter have the following transfer functions:

The input to the system and the filter is a Gaussian white noise sequence of fixed length with zero mean and unit variance.

Since a reduced-order filter is employed for the identification, the error surface of the cost function is multimodal, with one global minimum and one local minimum. The cost function is given in (8). All the results are averaged over 20 random runs with randomly chosen initial positions. Table 6 shows the comparative results between DLTLBO and the other algorithms. Figure 6 presents the normalized cost function value versus the number of cost function evaluations and the magnitude responses of the algorithms.

Table 6 shows the best filter coefficients, the means and standard deviations of the MSE, and the CPU times. It is observed that DLTLBO has the smallest mean MSE, while the standard deviations of the MSE obtained by SaDE, TLBO, and DLTLBO are comparable. It can also be observed from Table 6 that, when the number of FEs reaches the given maximum, GA spends the least time among all the algorithms, and DLTLBO spends less time than PSO-wFIPS and SaDE. The t-test values between DLTLBO and the corresponding algorithms show that the differences in the mean MSE are statistically significant. As seen from Table 6 and Figure 6, the convergence speed of DLTLBO is faster than those of the other algorithms.

Example 4. In this example (taken from [24]), a third-order IIR system is estimated by a third-order IIR filter. The unknown plant and the filter have the following transfer functions:

The input to the system and the filter is a Gaussian white noise sequence of fixed length with zero mean and unit variance.

Since the filter order is equal to that of the system, the error surface is unimodal, and the best solution is located at the parameter values of the unknown plant. The cost function is given in (8). All the results are averaged over 20 random runs with randomly chosen initial positions. Table 7 shows the comparative results between DLTLBO and the other algorithms. Figure 7 presents the normalized cost function value versus the number of cost function evaluations and the magnitude responses of the algorithms.

As seen from Table 7, PSO-wFIPS, SaDE, TLBO, and DLTLBO obtain the same best filter coefficients, which are located exactly at the best solution. SaDE offers the best MSE performance, and DLTLBO outperforms GA, PSO-wFIPS, and TLBO. The t-test on the mean MSE between DLTLBO and the other algorithms indicates that DLTLBO is statistically better than the corresponding algorithms except SaDE. From Table 7 and Figure 7, we can see that the convergence speed of DLTLBO is faster than those of the other algorithms except SaDE. It can be concluded that DLTLBO achieves good solution accuracy, although DLTLBO needs more CPU running time when the number of FEs reaches the given maximum.

Example 5. In this example (taken from [17, 18]), a third-order system is estimated by a second-order IIR filter. The unknown plant and the filter have the following transfer functions:

The input to the system and the filter is a uniformly distributed white sequence of fixed length, with SNR = 30 dB.

Since a reduced-order filter is employed for the identification, the error surface of the cost function is multimodal with a single global minimum. The cost function is given in (8). All the results are averaged over 20 random runs with randomly chosen initial positions. The comparative results between DLTLBO and the other algorithms are given in Table 8. Figure 8 presents the normalized cost function value versus the number of cost function evaluations and the magnitude responses of the algorithms.

As seen from Table 8, DLTLBO offers the best mean values and standard deviations of the MSE. From the t-test values between DLTLBO and the other algorithms, it is also seen that DLTLBO is statistically better than the corresponding algorithms. From Figure 8, one can see that the convergence speed of TLBO is faster than those of the other algorithms. It can also be observed from Table 8 that DLTLBO spends the third most CPU time among the five algorithms when the number of FEs reaches the given maximum. It can be concluded that DLTLBO achieves the best MSE performance among all five algorithms, although none of the five algorithms finds the global minimum.

6. Conclusion and Future Work

In this paper, TLBO has been extended to DLTLBO, which uses neighborhood search and differential learning to improve the computational performance of the TLBO algorithm. The proposed DLTLBO algorithm maintains the diversity of the population effectively and provides more useful information during the iterative process. To demonstrate the effectiveness of the approach, 24 benchmark functions have been used for simulation and testing. Moreover, DLTLBO has also been applied to optimize the coefficients of digital IIR filters. In the experimental study, DLTLBO obtained good results on five digital IIR filter design cases, even though it needs more time when the number of FEs reaches the given maximum. The results show that the proposed algorithm is promising for function optimization and parameter estimation of digital IIR systems in comparison with other approaches.

Further work includes the following topics. First, neighborhood topological structures and the selection of parameters should be investigated to make the DLTLBO algorithm more efficient. Second, this paper puts emphasis on the use of differential learning rather than on analyzing the influence of different differential evolution learning formations on DLTLBO; the latter would be interesting to investigate in future research. Third, DLTLBO could be used in a wide variety of filter design applications, where tradeoffs between various filter parameters can lead to different kinds of filters for different requirements in various applications. Finally, it is expected that DLTLBO will be applied to other real-world optimization problems.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This research was partially supported by National Natural Science Foundation of China (61100173, 61100009, 61272283, 61304082, and 61572224). This work is also supported by Natural Science Foundation of Anhui Province (Grant no. 1308085MF82) and Major Project of Natural Science Research in Anhui Province under Grant KJ2015ZD36.