Abstract

The harmony search (HS) algorithm is an emerging population-based metaheuristic inspired by the music improvisation process. The HS method has developed rapidly and been applied widely during the past decade. In this paper, an improved global harmony search algorithm, named harmony search based on teaching-learning (HSTL), is presented for high-dimensional complex optimization problems. In the HSTL algorithm, four strategies (harmony memory consideration, teaching-learning strategy, local pitch adjusting, and random mutation) are employed to maintain a proper balance between convergence and population diversity, and a dynamic strategy is adopted to change the parameters. The proposed HSTL algorithm is investigated and compared with three other state-of-the-art HS optimization algorithms. Furthermore, to demonstrate robustness and convergence, the success rate and convergence behavior are also studied. The experimental results on 31 complex benchmark functions demonstrate that the HSTL method has strong convergence and robustness and achieves a better balance between space exploration and local exploitation on high-dimensional complex optimization problems.

1. Introduction

With the development of science and technology, many real-life optimization problems are becoming increasingly complex and difficult, so solving such problems accurately within a reasonable time is very important. Traditional optimization algorithms have difficulty with nonlinear and nondifferentiable problems. In recent years, the most popular swarm intelligence optimization algorithms, such as the genetic algorithm (GA), particle swarm optimization (PSO), and the differential evolution (DE) algorithm, have been successfully applied to large-scale complicated problems in scientific and engineering computing.

Inspired by the process of musicians improvising harmonies, the harmony search (HS) algorithm was proposed by Geem et al. [1, 2]. Similar to GA and PSO, the HS algorithm is a metaheuristic random optimization algorithm. In recent years, HS has been applied broadly in engineering optimization, for example, to pipe network design [3], structural optimization [4], clustering of text documents [5], the combined heat and power economic dispatch problem [6], and the scheduling of multiple dam systems [7]. These applications show that the HS algorithm has significant potential for solving complex engineering problems: it has a strong exploration ability and a cheap running cost. However, the classical harmony search algorithm is not efficient enough for solving large-scale problems; it has a slow convergence speed and low solution precision. Hence several improved HS algorithms have been proposed. Mahdavi et al. proposed an improved HS algorithm (IHS) [8] that employs a novel method for generating new solution vectors, which enhances accuracy and convergence speed. Omran and Mahdavi tried to improve the performance of HS by incorporating some techniques from swarm intelligence; the resulting variant, named global-best harmony search (GHS) [10], reportedly outperformed the HS and IHS algorithms on the benchmark problems. Tuo and Yong presented an improved harmony search algorithm with chaos (HSCH) [9].

The HS algorithm has a strong ability to explore the regions of the solution space in a reasonable time, but its exploitation ability is weaker in the later period of the search. Therefore, several improved HS algorithms have been proposed to enhance local search ability and solution precision, such as the local-best harmony search algorithm with dynamic subpopulations (DLHS) [11], the harmony search algorithm with a differential mutation operator (HSDE) [12], and the novel global harmony search algorithm for unconstrained problems (NGHS) [13, 14]. In addition, Gao et al. proposed modified harmony search methods for unimodal and multimodal optimization [15], Pan et al. proposed the self-adaptive global-best harmony search algorithm (SGHS) [16], and Yadav et al. presented an intelligent tuned HS algorithm [17].

Existing state-of-the-art harmony search algorithms, such as NGHS, HSDE, and SGHS, have shown exceptional problem-solving ability, but they still have disadvantages on multimodal and high-dimensional functions. To enhance the performance of the HS method on high-dimensional multimodal problems, this paper proposes an improved large-scale HS algorithm with a harmony search teaching-learning strategy (HSTL). In the HSTL algorithm, HS is improved by embedding the teaching-learning-based optimization (TLBO) [17, 18] method, which has a strong search capacity for high-dimensional problems.

The rest of this paper is organized as follows. In Section 2, the basic framework of the classical harmony search method is summarized in a comprehensive style, and two excellent variants of HS (SGHS and NGHS) are briefly presented. The teaching-learning-based optimization algorithm is introduced, and the details of the HSTL algorithm are presented in Section 3. In Section 4, 31 benchmark problems with different characteristics, consisting of separable problems, nonseparable problems, shifted problems, shifted rotated problems, and hybrid composite problems, are considered, and the numerical results of the HSTL method are demonstrated. Furthermore, to investigate the robustness and convergence performance of the HSTL algorithm, comparison results on convergence speed and success rate are presented, and a preliminary convergence analysis is carried out. Finally, the research findings and contributions of the proposed HSTL algorithm are discussed in Section 5.

2. HS Algorithm and Other Variants

In this section, we introduce the classical harmony search (HS) algorithm and two excellent variants of the HS algorithm: the self-adaptive global-best harmony search (SGHS) algorithm and the novel global harmony search (NGHS) algorithm.

2.1. Classical Harmony Search Algorithm (HS)

The steps in the procedure of the classical harmony search algorithm are as follows.

Step 1 (initialize the problem and algorithm parameters). In this step, the optimization problem is specified as follows:
$$\min\ f(\mathbf{x}), \quad \mathbf{x} = (x^1, x^2, \ldots, x^D), \quad x^j \in [L_j, U_j],\ j = 1, 2, \ldots, D,$$
where $f(\mathbf{x})$ is an objective function and $D$ is the number of decision variables.

The HS algorithm parameters are also specified in this step:
HMS: the harmony memory size, that is, the number of solution vectors in the population;
HMCR: the harmony memory considering rate;
PAR: the pitch adjusting rate;
BW: the bandwidth;
maxFEs: the maximum number of improvisations.

Step 2 (initialize the harmony memory). The harmony memory (HM) consists of HMS harmony vectors. Each harmony vector is generated from a uniform distribution over the feasible space as
$$x_i^j = L_j + \text{rand}() \times (U_j - L_j), \quad i = 1, \ldots, \text{HMS},\ j = 1, \ldots, D,$$
where rand() is a uniformly distributed random number between 0 and 1.

Step 3 (improvise a new harmony). A new harmony vector $X_{\text{new}}$ is generated based on three rules:
(a) memory consideration;
(b) pitch adjustment;
(c) random generation.

The improvisation procedure for a new harmony vector works as shown in Algorithm 1.

For  i = 1 to D  do
If  rand( ) ≤ HMCR  then
  $x_{\text{new}}^i = x_a^i$, $a \in \{1, 2, \ldots, \text{HMS}\}$  // memory consideration
  If  rand( ) ≤ PAR  then
   $x_{\text{new}}^i = x_{\text{new}}^i \pm \text{rand}() \times \text{BW}$  // pitch adjustment
  End  If
Else
  $x_{\text{new}}^i = L_i + \text{rand}() \times (U_i - L_i)$  // random generation
End  If
End  For
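To make the improvisation step concrete, the following Python sketch implements Algorithm 1 (a minimal illustration, not the authors' MATLAB code; names such as improvise and hm are ours):

import random

def improvise(hm, lower, upper, hmcr, par, bw):
    """One classical HS improvisation (Algorithm 1). hm is the harmony
    memory (a list of HMS vectors of length D); lower/upper are the
    per-dimension bounds L and U."""
    d = len(lower)
    x_new = [0.0] * d
    for i in range(d):
        if random.random() <= hmcr:
            # (a) memory consideration: copy from a randomly chosen harmony
            x_new[i] = hm[random.randrange(len(hm))][i]
            if random.random() <= par:
                # (b) pitch adjustment within the distance bandwidth BW
                x_new[i] += (2.0 * random.random() - 1.0) * bw
                x_new[i] = min(max(x_new[i], lower[i]), upper[i])
        else:
            # (c) random generation in the feasible space
            x_new[i] = lower[i] + random.random() * (upper[i] - lower[i])
    return x_new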

A new potential variation (or an offspring) is generated in Step 3, which is equivalent to the mutation and crossover operators in standard evolutionary algorithms (EAs).

Step 4 (update harmony memory). Get the worst harmony $X_{\text{worst}}$ from the HM. If $X_{\text{new}}$ is better than $X_{\text{worst}}$, then $X_{\text{worst}} = X_{\text{new}}$.

Step 5 (check stopping criterion). If the stopping criterion (maxFEs) is satisfied, computation is terminated. Otherwise, Steps 3 and 4 are repeated.

2.2. The SGHS Algorithm

In order to select suitable parameters automatically, the self-adaptive global-best harmony search (SGHS) algorithm was proposed by Pan et al. [16]. SGHS dynamically changes BW according to
$$\text{BW}(t) = \begin{cases} \text{BW}_{\max} - \dfrac{\text{BW}_{\max} - \text{BW}_{\min}}{\text{maxFEs}} \times 2t, & t < \text{maxFEs}/2,\\ \text{BW}_{\min}, & t \ge \text{maxFEs}/2, \end{cases}$$
where $\text{BW}_{\max}$ and $\text{BW}_{\min}$ are the maximum and minimum distance bandwidths.

HMCR (PAR) is dynamically selected from a suitable normal distribution with mean HMCRm (PARm) and standard deviation 0.01 (0.05). Initially, HMCRm (PARm) is set to 0.98 (0.9). After a specified number of generations, HMCRm (PARm) is recalculated by averaging all the HMCR (PAR) values recorded during that period of evolution.
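The parameter adaptation of SGHS can be sketched in Python as follows (a simplified illustration of the scheme described above; clipping the sampled values to [0, 1] is our assumption):

import random

def sghs_bw(t, max_fes, bw_max, bw_min):
    # BW decreases linearly during the first half of the search and
    # then stays at bw_min.
    if t < max_fes / 2:
        return bw_max - (bw_max - bw_min) * (2.0 * t / max_fes)
    return bw_min

def sample_hmcr_par(hmcr_m, par_m):
    # HMCR and PAR are drawn from normal distributions around the
    # learned means with standard deviations 0.01 and 0.05.
    hmcr = random.gauss(hmcr_m, 0.01)
    par = random.gauss(par_m, 0.05)
    return min(max(hmcr, 0.0), 1.0), min(max(par, 0.0), 1.0)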

2.3. The NGHS Algorithm

In the novel global harmony search (NGHS) algorithm [13, 14], three significant parameters, the harmony memory considering rate (HMCR), the bandwidth (BW), and the pitch adjusting rate (PAR), are excluded, and a random mutation probability ($p_m$) is included instead. In Step 3, NGHS works as shown in Algorithm 2.

For  i = 1 to D  do
  $x_R = 2 \times x_{\text{best}}^i - x_{\text{worst}}^i$  // position updating
  $x_{\text{new}}^i = x_{\text{worst}}^i + \text{rand}() \times (x_R - x_{\text{worst}}^i)$
  If  rand( ) ≤ $p_m$  then  // random mutation
   $x_{\text{new}}^i = L_i + \text{rand}() \times (U_i - L_i)$
  End  If
End  For

Here $X_{\text{best}}$ and $X_{\text{worst}}$, respectively, are the best harmony and the worst harmony in the HM, and rand() is a uniformly distributed random number in $[0, 1]$.
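A compact Python sketch of the NGHS improvisation (Algorithm 2) reads as follows (illustrative only; clamping $x_R$ to the feasible range is our assumption):

import random

def nghs_improvise(x_best, x_worst, lower, upper, pm):
    """One NGHS improvisation: step toward a point mirrored past the
    best harmony, followed by random mutation with probability pm."""
    d = len(lower)
    x_new = [0.0] * d
    for i in range(d):
        x_r = 2.0 * x_best[i] - x_worst[i]            # position updating
        x_r = min(max(x_r, lower[i]), upper[i])       # keep x_r feasible
        x_new[i] = x_worst[i] + random.random() * (x_r - x_worst[i])
        if random.random() <= pm:                     # random mutation
            x_new[i] = lower[i] + random.random() * (upper[i] - lower[i])
    return x_new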

3. HSTL Algorithm

In this section, we propose a novel harmony search method with a teaching-learning (HSTL) strategy, derived from the teaching-learning-based optimization (TLBO) algorithm. First, the TLBO algorithm is introduced and analyzed; then we focus on the details of the HSTL algorithm and the strategies for dynamically adjusting its parameters.

3.1. The TLBO Algorithm

Teaching-learning-based optimization (TLBO) [18-20] is a new nature-inspired algorithm that mimics the teaching process of a teacher and the learning process among learners in a class. TLBO shows better performance with less computational effort on large-scale problems [19], that is, problems of high dimensionality. In addition, TLBO needs very few parameters.

In the TLBO method, the task of the teacher is to try to increase the mean knowledge of all learners of the class in the subject taught, depending on his or her capability. Learners make efforts to increase their knowledge by interacting among themselves. A learner is considered a solution vector; the different design variables of the vector are analogous to the different subjects offered to learners, and a learner's result is analogous to the "fitness" in other population-based optimization techniques. The teacher is considered the best solution obtained so far. The working process of TLBO is divided into two phases: the teacher phase and the learner phase.

Teacher Phase. Assume there are $D$ subjects (i.e., design variables) and NP learners (i.e., the population size); let $X_{\text{teacher}}^j$ denote the best learner (i.e., the teacher) in subject $j$ ($j = 1, 2, \ldots, D$). Teaching works as follows:
$$x_{\text{new}}^{i,j} = x_{\text{old}}^{i,j} + \text{rand}() \times \bigl(X_{\text{teacher}}^j - T_F \times \text{Mean}^j\bigr),$$
where $x_{\text{old}}^{i,j}$ denotes the result of the $i$th ($i = 1, \ldots, \text{NP}$) learner in the $j$th ($j = 1, \ldots, D$) subject before learning, $x_{\text{new}}^{i,j}$ is the result of the $i$th learner in the $j$th subject after learning, and $\text{Mean}^j$ is the mean result of all learners in subject $j$. $T_F$ is the teaching factor, which decides the magnitude of the change; its value is generated randomly as $T_F = \text{round}[1 + \text{rand}()]$.

When learner $i$ has finished learning from the teacher, $X_i$ is updated as $X_i = X_{\text{new}}^i$ if $X_{\text{new}}^i$ is better than $X_i$.

Learner Phase. Another important way for a learner to increase knowledge is to interact with other learners. The learning method is expressed in Algorithm 3.

For each learner $X_i$
 Randomly select another learner $X_j$ ($j \ne i$)
If  $X_i$ is superior to $X_j$  then
  $X_{\text{new}} = X_i + \text{rand}() \times (X_i - X_j)$
Else
  $X_{\text{new}} = X_i + \text{rand}() \times (X_j - X_i)$
End
If  $X_{\text{new}}$ is superior to $X_i$  then
  $X_i = X_{\text{new}}$
End  If
End  for
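Both TLBO phases can be sketched in Python as follows (a minimal sketch of the standard TLBO update for a minimization objective f; whether rand() is drawn once per learner or once per subject varies between TLBO descriptions, and we draw it per subject here):

import random

def teacher_phase(x, mean_j, teacher, f):
    """Move learner x toward the teacher and away from the class mean;
    mean_j is the per-subject mean, teacher the best learner so far."""
    tf = round(1 + random.random())  # teaching factor: 1 or 2
    x_new = [xi + random.random() * (ti - tf * mi)
             for xi, ti, mi in zip(x, teacher, mean_j)]
    return x_new if f(x_new) < f(x) else x

def learner_phase(x, other, f):
    """Move toward a better peer or away from a worse one (Algorithm 3)."""
    if f(x) < f(other):  # x is superior: move away from 'other'
        x_new = [xi + random.random() * (xi - oi) for xi, oi in zip(x, other)]
    else:                # 'other' is superior: move toward it
        x_new = [xi + random.random() * (oi - xi) for xi, oi in zip(x, other)]
    return x_new if f(x_new) < f(x) else x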

Ever since the TLBO algorithm was proposed in 2011 by Rao et al. [18], it has been applied in many fields of engineering optimization, such as mechanical design optimization [18, 21], heat exchangers [22], thermoelectric coolers [23], and unconstrained and constrained real-parameter optimization problems [24].

In the TLBO method, the teacher phase, which relies on the best solution found so far, usually has a fast convergence speed and the best exploitation ability; it is more suitable for improving the accuracy of the global optimal solution. The learner phase, which relies on other learners, usually has a slower convergence speed but bears a stronger exploration capability for solving multimodal problems. Therefore, in the HSTL method we use the TLBO method to improve the performance and efficiency of the HS algorithm.

3.2. The HSTL Algorithm

In order to achieve satisfactory optimization performance when applying the HS algorithm to a given problem, we develop a novel harmony search algorithm combined with a teaching-learning strategy, in which both the new harmony generation strategies and the associated control parameter values are dynamically changed according to the progress of evolution.

Maintaining the right balance between convergence and diversity is of high importance for search efficiency. In the classical HS algorithm, a new harmony is generated in Step 3. After the selection operation in Step 4, the population variance may increase or decrease. With a high population variance, the diversity and exploration power increase, while the convergence and exploitation power decrease accordingly; conversely, with a low population variance, the convergence and exploitation power increase [25] while the diversity and exploration power decrease. Thus, how to keep the balance between convergence and diversity is significant. The classical HS algorithm easily loses its search ability in the later stages of evolution [26], because new harmonies are improvised from the HM with a high HMCR and adjusted locally with PAR, so the HM diversity decreases gradually from the early iterations to the last. On the other hand, a low HMCR increases the probability (1 − HMCR) of random selection in the search space and enhances exploration, but the local search ability and exploitation accuracy cannot be improved by the single pitch adjusting strategy.

To overcome the inherent weaknesses of HS, in this section we propose the HSTL method, in which an improved teaching-learning strategy is employed to strengthen the search for the optimal solution. The HSTL algorithm works as follows.
(1) Optimization target vector preparation: $X_{\text{target}} = X_a$, where $X_a$ ($a \in \{1, 2, \ldots, \text{HMS}\}$) is a harmony vector selected randomly from the HM.
(2) Improve the target vector with the following four strategies.

(a) Harmony Memory Consideration. The $j$th design variable of the target vector, $x_{\text{target}}^j$, is chosen randomly from the harmony memory (HM) with probability HMCR as
$$x_{\text{target}}^j = x_b^j, \quad b \in \{1, 2, \ldots, \text{HMS}\}.$$

HMCR, which varies between 0 and 1, is the rate of choosing one value from the historical values stored in the HM.

(b) Teaching-Learning Strategy. If the $j$th ($j = 1, \ldots, D$) design variable of the target vector $X_{\text{target}}$ has not been considered in the HM, it learns, with probability TLP, from the best harmony (i.e., the teacher) in the teacher phase or from other harmonies (i.e., learners) in the learner phase. TLP is the probability of performing the teaching-learning operator on design variables that were not considered in (a). It works as follows.

Teacher Phase. In this phase, the learner learns from the best learner (i.e., the teacher) in the class. The learner modification is expressed as
$$x_{\text{target}}^j = x_{\text{target}}^j + \text{rand}() \times \bigl(x_{\text{best}}^j - T_F \times \text{Mean}_{\text{new}}^j\bigr), \quad \text{Mean}_{\text{new}}^j = \frac{x_{\text{target}}^j + x_{\text{worst}}^j}{2},$$
where $X_{\text{best}}$ is the best harmony in the HM, $X_{\text{worst}}$ is the worst harmony in the HM, and rand() is a uniformly distributed random number between 0 and 1.

The contribution in this section is that the mean value of the population is replaced with $\text{Mean}_{\text{new}}^j$. There are two aspects to this apparent superiority. First, the running cost is cheaper, because the mean value of the population does not need to be computed in every iteration. Second, the diversity of the population is enhanced more than in the standard TLBO algorithm, because the new mean value differs for each individual, whereas the mean in the standard TLBO method is the same for every individual.

Learner Phase. In this phase, by comparing the advantages and disadvantages of two other learners, the learner learns from their advantages; this draws on the idea of the differential evolution algorithm. The process is as follows.

Randomly select two distinct integers $p$ and $q$ from $\{1, 2, \ldots, \text{HMS}\}$.
If  $X_p$ is better than $X_q$  then  $x_{\text{target}}^j = x_{\text{target}}^j + \text{rand}() \times (x_p^j - x_q^j)$
Else  $x_{\text{target}}^j = x_{\text{target}}^j + \text{rand}() \times (x_q^j - x_p^j)$
End If

(c) Local Pitch Adjusting Strategy. To achieve better solutions in the search space, the local pitch adjusting strategy is carried out with probability PAR when the $j$th design variable has undergone neither harmony memory consideration nor the teaching-learning strategy:
$$x_{\text{target}}^j = x_{\text{target}}^j \pm \text{rand}() \times \text{BW}(t),$$
where rand() is a uniformly distributed random number between 0 and 1 and BW(t) is an arbitrary distance bandwidth.

(d) Random Mutation Operator. As in the HS algorithm, if a design variable has undergone none of the previous operations (harmony memory consideration, teaching-learning strategy, and local pitch adjusting strategy), the HSTL method carries out a random mutation in the feasible space with probability $p_m$ on the $j$th design variable of $X_{\text{target}}$:
$$x_{\text{target}}^j = L_j + \text{rand}() \times (U_j - L_j).$$

The improvisation of a new target harmony in the HSTL algorithm is shown in Algorithm 4.

$X_{\text{target}} = X_a$, $a \in \{1, 2, \ldots, \text{HMS}\}$; // randomly select $X_a$ as the optimization target vector
For  $j$ = 1 to $D$  do
If  rand( ) ≤ HMCR      //(a) Harmony memory consideration
  $x_{\text{target}}^j = x_b^j$, $b \in \{1, 2, \ldots, \text{HMS}\}$
Else  If rand( ) ≤ TLP  //(b) Teaching-Learning strategy
If rand( ) ≤ 0.5
    // Teaching
    $x_{\text{target}}^j = x_{\text{target}}^j + \text{rand}() \times (x_{\text{best}}^j - T_F \times \text{Mean}_{\text{new}}^j)$
Else
    // Learning
Randomly select $p$ and $q$ from $\{1, 2, \ldots, \text{HMS}\}$, $p \ne q$
    If  $X_p$ is better than $X_q$
      $x_{\text{target}}^j = x_{\text{target}}^j + \text{rand}() \times (x_p^j - x_q^j)$
    Else
      $x_{\text{target}}^j = x_{\text{target}}^j + \text{rand}() \times (x_q^j - x_p^j)$
  End
  End
Else  If rand(0, 1) ≤ PAR  // (c) Local pitch adjusting strategy
  $x_{\text{target}}^j = x_{\text{target}}^j \pm \text{rand}() \times \text{BW}(t)$
Else  If rand(0, 1) ≤ $p_m$    // (d) Random mutation operator
  $x_{\text{target}}^j = L_j + \text{rand}() \times (U_j - L_j)$
End  If
End  For
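The whole improvisation can be sketched in Python as follows (a sketch that relies on the reconstruction of $\text{Mean}_{\text{new}}^j$ given in the teacher phase above; hm holds the harmonies and fitness their objective values for minimization):

import random

def hstl_improvise(hm, fitness, lower, upper, hmcr, tlp, par, bw, pm):
    """One HSTL improvisation (Algorithm 4) combining the four strategies."""
    hms, d = len(hm), len(lower)
    best = min(range(hms), key=lambda k: fitness[k])
    worst = max(range(hms), key=lambda k: fitness[k])
    x = list(hm[random.randrange(hms)])                  # target vector
    for j in range(d):
        if random.random() <= hmcr:                      # (a) memory consideration
            x[j] = hm[random.randrange(hms)][j]
        elif random.random() <= tlp:                     # (b) teaching-learning
            if random.random() <= 0.5:                   # teacher phase
                tf = round(1 + random.random())
                mean_new = (x[j] + hm[worst][j]) / 2.0   # reconstructed mean
                x[j] += random.random() * (hm[best][j] - tf * mean_new)
            else:                                        # learner phase (DE-style)
                p, q = random.sample(range(hms), 2)
                if fitness[p] < fitness[q]:
                    x[j] += random.random() * (hm[p][j] - hm[q][j])
                else:
                    x[j] += random.random() * (hm[q][j] - hm[p][j])
        elif random.random() <= par:                     # (c) local pitch adjusting
            x[j] += (2.0 * random.random() - 1.0) * bw
        elif random.random() <= pm:                      # (d) random mutation
            x[j] = lower[j] + random.random() * (upper[j] - lower[j])
        x[j] = min(max(x[j], lower[j]), upper[j])        # keep feasible
    return x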

The flow chart of the HSTL algorithm is shown in Figure 1.

Parameters Dynamically Changed. To efficiently balance the exploration and exploitation power of the HSTL algorithm, the HMCR, PAR, BW, and TLP parameters are dynamically adapted within suitable ranges as the generations increase. Equation (10) shows the dynamic change of HMCR, PAR, BW, and TLP, respectively.

Consider the following [8]:
$$\begin{aligned}
\text{HMCR}(t) &= \text{HMCR}_{\min} + (\text{HMCR}_{\max} - \text{HMCR}_{\min}) \times \frac{t}{\text{maxFEs}},\\
\text{TLP}(t) &= \text{TLP}_{\min} + (\text{TLP}_{\max} - \text{TLP}_{\min}) \times \frac{t}{\text{maxFEs}},\\
\text{PAR}(t) &= \text{PAR}_{\min} + (\text{PAR}_{\max} - \text{PAR}_{\min}) \times \frac{t}{\text{maxFEs}},\\
\text{BW}(t) &= \text{BW}_{\max} \times \exp\!\left(\frac{\ln(\text{BW}_{\min}/\text{BW}_{\max})}{\text{maxFEs}} \times t\right),
\end{aligned} \tag{10}$$
where $t$ is the current number of improvisations.
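As an illustration of such monotone schedules, one might write the following Python sketch (the default bounds below are placeholders, not the paper's settings):

import math

def dynamic_params(t, max_fes,
                   hmcr=(0.6, 0.99), tlp=(0.1, 0.5),
                   par=(0.05, 0.35), bw=(1e-4, 1.0)):
    """IHS-style schedules [8]: HMCR, TLP, and PAR grow linearly from
    their minimum to their maximum value, while BW decays exponentially."""
    r = t / max_fes
    hmcr_t = hmcr[0] + (hmcr[1] - hmcr[0]) * r
    tlp_t = tlp[0] + (tlp[1] - tlp[0]) * r
    par_t = par[0] + (par[1] - par[0]) * r
    bw_t = bw[1] * math.exp(math.log(bw[0] / bw[1]) * r)
    return hmcr_t, tlp_t, par_t, bw_t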

4. Numerical Experiments and Results

4.1. Test Functions

The test functions [27-29] are shown in Table 1. The functions in Table 2 [30] are hybrid composition functions, which are built by combining a nonseparable (NS) function with other functions. The considered functions are as follows.
(i) Nonseparable functions:
(a) shifted Rosenbrock's function;
(b) shifted Griewank's function;
(c) nonshifted Extended function;
(d) nonshifted Bohachevsky function.
(ii) Other component functions:
(a) shifted Sphere function;
(b) shifted Rastrigin function;
(c) Schwefel 2.22 function.

A hybrid function $F$ is composed of a nonseparable function $F_{ns}$ and another function $F_o$. The hybrid procedure is as follows.
(1) The vector $\mathbf{x}$ is divided into two parts, $\mathbf{x}_1$ and $\mathbf{x}_2$: a prescribed number $m$ of variables (taken from the even-indexed or the odd-indexed positions, depending on the splitting condition) forms $\mathbf{x}_1$, which is passed to the nonseparable component, and the remaining $D - m$ variables form $\mathbf{x}_2$, which is passed to the other component.
(2) Return $F(\mathbf{x}) = F_{ns}(\mathbf{x}_1) + F_o(\mathbf{x}_2)$.
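The composition can be sketched as follows (a simplified Python illustration that splits off the first m variables rather than the even/odd-indexed ones; bohachevsky and schwefel222 stand for assumed implementations of the component functions):

def make_hybrid(f_ns, f_other, m):
    """Compose a hybrid benchmark: the first m variables go to the
    nonseparable component f_ns, the rest to f_other."""
    def hybrid(x):
        return f_ns(x[:m]) + f_other(x[m:])
    return hybrid

# usage, e.g., for Hybrid5 (Bohachevsky + Schwefel 2.22):
# hybrid5 = make_hybrid(bohachevsky, schwefel222, m)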

We have used 31 well-known unconstrained benchmark functions [27-30] as a testbed to evaluate the performance of the HSTL algorithm presented in this paper. Most of these test functions are considered particularly challenging for many existing metaheuristic optimization algorithms. These test functions cover a range of characteristics, such as separable design variables, nonseparable design variables, strong unimodality, strong multimodality, and hybridization; many of them blend several characteristics together. Table 3 shows the characteristics of all test functions.

4.2. Experimental Setup

Simulation experiments are carried out to compare the optimization (minimization) capabilities of the presented method (HSTL) with respect to (a) classical HS [1, 2], (b) SGHS [16], and (c) NGHS [13, 14]. All the experiments were performed on a Windows XP 32-bit system with an Intel(R) Core(TM) i3-2120 CPU @ 3.30 GHz and 2 GB RAM, and all the program codes were written in MATLAB R2009a.

In the experiments, the parameter settings of the compared HS algorithms are shown in Table 4; the dimensions of the benchmark problems are set to 50, 100, and 200, respectively. To make the comparison fair, the populations of all competitor algorithms (for all problems tested) were initialized using the same random seeds. All HS variants were given the same termination criterion: a maximum number of improvisations (function evaluations, FEs) of 25000, 50000, and 100000 for the three dimensions, respectively (i.e., maxFEs = 500 × D).

The best and worst fitness values of each benchmark problem are recorded over 30 independent runs, and the mean fitness, standard deviation (STD), and mean runtime of each function are calculated over the same 30 runs.

4.3. Results Comparison among the HSTL, HS, SGHS, and NGHS Algorithms

From Table 5, it can be found that, with dimension 50 and maxFEs = 25000, the mean optimal fitness values of HSTL are lower than those of the other three HS algorithms on all test functions except four, on which they are slightly larger than those of the NGHS algorithm. According to the criteria (best, mean, worst, and STD), the overall performance of the HSTL method surpasses the other three HS algorithms on the 31 benchmark problems, with three exceptions. Taken together, the best, mean, and STD obtained by the HSTL method are better than those of the other three methods on most test functions. In addition, its runtime is less than that of SGHS on all problems.

Tables 6 and 7 show the results for the 100D and 200D problems, respectively. According to the results (best, mean, worst, and STD), the HSTL algorithm outperforms the other three harmony search algorithms. It can also be seen from Tables 6 and 7 that, compared with the 50D problems, the HSTL method has an even more obvious advantage over the other three HS algorithms (HS, SGHS, and NGHS).

The convergence of the HSTL method is compared with the three other HS algorithms (HS, SGHS, and NGHS) on three 50-D functions, where the HSTL algorithm demonstrates an evident superiority in efficiency and stability, as can be observed from the convergence graphs (Figures 2(a)-4(a)) and boxplots (Figures 2(b)-4(b)).

For the 50-D problems, the convergence graphs and boxplots are shown in Figures 2, 3, and 4 for the Sphere unimodal function, the Griewank multimodal nonseparable function, and the hybrid function Hybrid5, which is composed of the Bohachevsky and Schwefel 2.22 functions. Figures 2(b)-4(b) plot the boxplots of the best results over 30 independent runs, and Figures 2(a)-4(a) portray the convergence curves. It is evident from the convergence graphs that strong, uniform convergence is maintained throughout the evolution procedure. From the boxplots we can see that the HSTL algorithm has better convergence, stability, and robustness in most cases than the HS, SGHS, and NGHS algorithms.

Figures 5 and 6 show the boxplots and convergence graphs of three different problems for dimensions 100 and 200, respectively, where the HSTL algorithm again demonstrates obvious superiority in efficiency and stability.

4.4. Comparison of the Convergence Speed and Success Rate

To give the four compared HS algorithms (HS, SGHS, NGHS, and HSTL) a fair chance, we run each algorithm on each benchmark test function and stop as soon as the minimum error value acquired by the algorithm falls below a predefined threshold or a maximum number of FEs is exceeded. For all test functions, each algorithm carries out 30 independent runs.
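The measurement protocol amounts to the following harness (an illustrative Python sketch; optimize stands for any of the compared HS variants and is assumed to return the best error reached and the FEs used):

def success_rate(optimize, problem, threshold, max_fes, runs=30):
    """A run succeeds if the error falls below `threshold` within
    `max_fes` function evaluations."""
    successes = 0
    for _ in range(runs):
        best_err, _ = optimize(problem, max_fes, threshold)
        if best_err <= threshold:
            successes += 1
    return successes / runs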

The respective error threshold values for the 30 benchmark problems are given in Table 11.

Table 8 displays the statistical results on the success rate, average runtime, and average FEs of the four algorithms.

As Table 8 shows, on high-dimensional problems the HS, SGHS, and NGHS algorithms have great difficulty in finding the global optima. The HSTL algorithm performs much better: it yields a 100% success rate on twelve of the functions and a success rate above 50% on seven further functions on which the other algorithms fail to find the global optima.

Table 9 summarizes the running cost of the four HS algorithms. The runtime is the average of the mean runtimes over all functions, the FEs is the average of the mean FEs over all functions, and the success rate is the mean success rate over all functions. Table 9 is intended to show how well the presented HSTL algorithm performs compared with the HS, SGHS, and NGHS algorithms. From the statistics shown in Table 9, it can be seen that HSTL uses the least runtime and the fewest FEs and acquires the best overall success rate among these algorithms.

4.5. Convergence Analysis

To investigate the convergence of the proposed HSTL algorithm, we record the population variance of each algorithm. As Figure 7 shows, for each type of function, the fluctuation of the population variance in HSTL is smaller than in the SGHS and NGHS algorithms, and the population variance curves fall steadily throughout the search process. Consequently, the proposed HSTL algorithm has stronger robustness and convergence than the other variants of HS.

4.6. Parameter HMS Study

In this section, the effect of the HMS value on the performance of the HSTL method is investigated. The experimental results generated using different HMS values (5, 10, 15, 20, 25, 30, 35, and 40) for dimension 50 are presented in Table 10.

From Table 10 we can see that, for some unimodal functions, a small value of HMS (i.e., 5 or 10) is superior to a large value. For the other benchmark functions, there is no obvious indication that one HMS setting is superior to the others. We therefore conclude that a small HMS is suitable for simple problems, whereas for complex problems a slightly larger HMS, not exceeding 50, can be used. This is reasonable and logical: it is analogous to a musician's quick recall of a simple harmony improvisation and deeper memory of an outstanding one.

5. Conclusion

In this paper, a novel harmony search algorithm combined with teaching-learning (HSTL) is presented to improve the performance and efficiency of the harmony search algorithm. The proposed HSTL algorithm employs the idea of teaching and learning. Four strategies (harmony memory consideration, teaching-learning strategy, local pitch adjusting, and random mutation) are employed to maintain a proper balance between convergence and population diversity. As evolution proceeds, a dynamic strategy is adopted to change the parameters HMCR, TLP, BW, and PAR. Numerical experiments show that the dynamic parameter changes are especially effective in balancing the exploration and exploitation power. The population variance analysis indicates that the HSTL algorithm converges strongly throughout the evolution process. The sensitivity analysis of the HMS parameter shows that HMS does not have a significant influence on complex multimodal problems.

We have compared the performance of the proposed HSTL algorithm with the classical HS algorithm and two excellent variants over a suite of 31 unconstrained numerical optimization functions, and we conclude that the HSTL algorithm is more effective and stable in obtaining high-quality solutions and achieves fewer FEs, less runtime, and higher success rates under the same conditions.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant nos. 60974082 and 81160183, the Scientific Research Program funded by the Shaanxi Provincial Education Department under Grant no. 12JK0863, the Ministry of Education "Chunhui Plan" project (no. Z2011051), the Natural Science Foundation of the Ningxia Hui Autonomous Region (no. NZ12179), and the Scientific Research Fund of the Ningxia Education Department (no. NGY2011042).