Mathematical Problems in Engineering

Special Issue

Swarm Intelligence in Engineering


Research Article | Open Access

Volume 2013 | Article ID 413565 | 29 pages | https://doi.org/10.1155/2013/413565

An Improved Harmony Search Based on Teaching-Learning Strategy for Unconstrained Optimization Problems

Academic Editor: Baozhen Yao
Received: 22 Aug 2012
Accepted: 12 Nov 2012
Published: 21 Mar 2013

Abstract

Harmony search (HS) is an emerging population-based metaheuristic algorithm inspired by the music improvisation process. The HS method has developed rapidly and been applied widely during the past decade. In this paper, an improved global harmony search algorithm, named harmony search based on teaching-learning (HSTL), is presented for high-dimensional complex optimization problems. In the HSTL algorithm, four strategies (harmony memory consideration, teaching-learning strategy, local pitch adjusting, and random mutation) are employed to maintain a proper balance between convergence and population diversity, and a dynamic strategy is adopted to change the parameters. The proposed HSTL algorithm is investigated and compared with three other state-of-the-art HS optimization algorithms. Furthermore, to demonstrate its robustness and convergence, the success rate and convergence behavior are also studied. The experimental results on 31 complex benchmark functions demonstrate that the HSTL method has strong convergence and robustness and strikes a better balance between space exploration and local exploitation on high-dimensional complex optimization problems.

1. Introduction

With the development of science and technology, many real-life optimization problems are becoming increasingly complex and difficult, so solving such problems accurately within a reasonable time is very important. Traditional optimization algorithms have difficulty with nonlinear and nondifferentiable problems. In recent years, popular swarm intelligence optimization algorithms, such as the genetic algorithm (GA), particle swarm optimization (PSO), and the differential evolution (DE) algorithm, have been successfully applied to large-scale complicated problems in scientific and engineering computing.

Inspired by the process of musicians improvising a harmony, the harmony search (HS) algorithm was proposed by Geem et al. [1, 2]. Like GA and PSO, the HS algorithm is a metaheuristic random optimization algorithm. In recent years, HS has been applied broadly in engineering optimization, including pipe network design [3], structural optimization [4], clustering of text documents [5], the combined heat and power economic dispatch problem [6], and scheduling of multiple dam systems [7]. These applications show that the HS algorithm has significant potential for solving complex engineering problems: it has strong exploration ability and a low running cost. However, the classical harmony search algorithm is not efficient enough for large-scale problems, as it suffers from slow convergence and low solution precision, so several improved HS algorithms have been proposed. Mahdavi et al. proposed an improved HS algorithm (IHS) [8] that employs a novel method for generating new solution vectors, which enhances accuracy and convergence speed. Recently, Omran and Mahdavi tried to improve the performance of HS by incorporating techniques from swarm intelligence; the resulting variant, named GHS (global-best harmony search) [10], reportedly outperformed the HS and IHS algorithms on the benchmark problems. Tuo and Yong presented an improved harmony search algorithm with chaos (HSCH) [9].

The HS algorithm has a strong ability to explore the regions of the solution space in a reasonable time, but its exploitation ability is weaker in the later period of the search. Therefore, several improved HS algorithms have been proposed to enhance local search ability and solution precision, such as the local-best harmony search algorithm with dynamic subpopulations (DLHS) [11], the harmony search algorithm with a differential mutation operator (HSDE) [12], and the novel global harmony search algorithm for unconstrained problems (NGHS) [13, 14]. In addition, Gao et al. proposed modified harmony search methods for unimodal and multimodal optimization [15]. Pan et al. proposed a self-adaptive global-best harmony search algorithm (SGHS) [16]. Yadav et al. presented an intelligent tuned HS algorithm [17].

Several existing state-of-the-art harmony search algorithms, such as NGHS, HSDE, and SGHS, have shown exceptional problem-solving ability, but they have disadvantages on multimodal and high-dimensional functions. To enhance the performance of the HS method on high-dimensional multimodal problems, this paper proposes an improved large-scale HS algorithm with a teaching-learning strategy (HSTL). In the HSTL algorithm, HS is improved by embedding the teaching-learning-based optimization (TLBO) [17, 18] method, which has a strong search capacity for high-dimensional problems.

The rest of this paper is organized as follows. Section 2 summarizes the basic framework of the classical harmony search method and briefly presents two excellent variants of HS (SGHS and NGHS). Section 3 introduces the teaching-learning-based optimization algorithm and presents the details of the HSTL algorithm. In Section 4, 31 benchmark problems with different characteristics, comprising separable problems, nonseparable problems, shifted problems, shifted rotated problems, and hybrid composite problems, are considered, and the numerical results of the HSTL method are reported. Furthermore, to investigate the robustness and convergence performance of the HSTL algorithm, comparison results for convergence speed and success rate are presented, and a preliminary convergence analysis is carried out. Finally, the research findings and contributions of the proposed HSTL algorithm are discussed in Section 5.

2. HS Algorithm and Other Variants

In this section, we introduce the classical harmony search (HS) algorithm and two excellent variants of the HS algorithm: the self-adaptive global-best harmony search (SGHS) algorithm and the novel global harmony search (NGHS) algorithm.

2.1. Classical Harmony Search Algorithm (HS)

The steps in the procedure of classical harmony search algorithm are as follows.

Step 1 (initialize the problem and algorithm parameters). In this step, the optimization problem is specified as follows:

$$\min f(\mathbf{x}), \quad \mathbf{x} = (x_1, x_2, \ldots, x_D), \quad x_j \in [L_j, U_j], \; j = 1, 2, \ldots, D,$$

where $f(\mathbf{x})$ is the objective function, $D$ is the number of decision variables, and $L_j$ and $U_j$ are the lower and upper bounds of the $j$th variable.

The HS algorithm parameters are also specified in this step:
HMS: the harmony memory size, that is, the number of solution vectors in the population;
HMCR: the harmony memory considering rate;
PAR: the pitch adjusting rate;
BW: the bandwidth;
maxFEs: the maximum number of improvisations.

Step 2 (initialize the harmony memory). The harmony memory (HM) consists of HMS harmony vectors. Each harmony vector is generated from a uniform distribution over the feasible space as

$$x_j^{i} = L_j + \mathrm{rand}() \times (U_j - L_j), \quad i = 1, 2, \ldots, \mathrm{HMS}, \; j = 1, 2, \ldots, D,$$

where rand() is a uniformly distributed random number between 0 and 1.

Step 3 (improvise a new harmony). A new harmony vector $\mathbf{x}^{new}$ is generated based on three rules:
(a) memory consideration;
(b) pitch adjustment;
(c) random generation.

The improvisation procedure of the new harmony vector works as in Algorithm 1.

For i = 1 to D do
    If rand() ≤ HMCR then
        x_i^new = x_i^k, where k is a random integer in {1, 2, …, HMS}  // memory consideration
        If rand() ≤ PAR then
            x_i^new = x_i^new ± rand() × BW  // pitch adjustment
        End If
    Else
        x_i^new = L_i + rand() × (U_i − L_i)  // random generation
    End If
End For
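To make the improvisation step concrete, here is a short Python sketch of Algorithm 1. It is a minimal illustration, assuming one row of the array hm per stored harmony; the function name and default parameter values are ours, not taken from the paper.

import numpy as np

def hs_improvise(hm, lower, upper, hmcr=0.9, par=0.3, bw=0.01):
    # One improvisation of classical HS (Algorithm 1);
    # hm is an (HMS, D) array holding the harmony memory.
    hms, dim = hm.shape
    new = np.empty(dim)
    for i in range(dim):
        if np.random.rand() <= hmcr:
            # (a) memory consideration: copy from a random stored harmony
            new[i] = hm[np.random.randint(hms), i]
            if np.random.rand() <= par:
                # (b) pitch adjustment: small move within the bandwidth bw
                new[i] += (2.0 * np.random.rand() - 1.0) * bw
                new[i] = np.clip(new[i], lower[i], upper[i])
        else:
            # (c) random generation in the feasible range
            new[i] = lower[i] + np.random.rand() * (upper[i] - lower[i])
    return new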

A new potential variation (or an offspring) is generated in Step 3, which is equivalent to the mutation and crossover operators in standard evolutionary algorithms (EAs).

Step 4 (update harmony memory). Find the worst harmony $\mathbf{x}^{worst}$ in the HM. If $\mathbf{x}^{new}$ is better than $\mathbf{x}^{worst}$, then replace $\mathbf{x}^{worst}$ with $\mathbf{x}^{new}$.

Step 5 (check stopping criterion). If the stopping criterion (maxFEs) is satisfied, computation is terminated. Otherwise, Steps 3 and 4 are repeated.

2.2. The SGHS Algorithm

In order to select the best parameters automatically, a self-adaptive global-best harmony search (SGHS) algorithm was proposed by Pan et al. [16]. SGHS dynamically changes BW according to

$$\mathrm{BW}(t) = \begin{cases} \mathrm{BW}_{\max} - \dfrac{\mathrm{BW}_{\max} - \mathrm{BW}_{\min}}{\mathrm{maxFEs}} \times 2t, & t < \mathrm{maxFEs}/2, \\[4pt] \mathrm{BW}_{\min}, & t \ge \mathrm{maxFEs}/2, \end{cases}$$

where $\mathrm{BW}_{\max}$ and $\mathrm{BW}_{\min}$ are the maximum and minimum distance bandwidths.

HMCR (PAR) is dynamically drawn from a normal distribution with mean HMCRm (PARm) and standard deviation 0.01 (0.05). Initially, HMCRm (PARm) is set to 0.98 (0.9). After a specified number of generations, HMCRm (PARm) is recalculated by averaging all the HMCR (PAR) values recorded during that period of evolution.
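The following Python sketch illustrates this self-adaptation scheme; the distribution means, standard deviations, and BW schedule follow the description above, while the function names and the bookkeeping of recorded values are our own assumptions.

import numpy as np

def sample_sghs_params(hmcr_m, par_m):
    # Draw HMCR and PAR from normal distributions around the current
    # means, with the fixed standard deviations 0.01 and 0.05.
    hmcr = float(np.clip(np.random.normal(hmcr_m, 0.01), 0.0, 1.0))
    par = float(np.clip(np.random.normal(par_m, 0.05), 0.0, 1.0))
    return hmcr, par

def sghs_bw(t, max_fes, bw_max, bw_min):
    # BW decreases linearly over the first half of the run and then
    # stays at bw_min (the schedule reconstructed in the text).
    if t < max_fes / 2:
        return bw_max - (bw_max - bw_min) * 2.0 * t / max_fes
    return bw_min

def update_means(recorded_hmcr, recorded_par):
    # After a learning period, re-estimate the means by averaging the
    # HMCR/PAR values recorded during that period.
    return float(np.mean(recorded_hmcr)), float(np.mean(recorded_par))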

2.3. The NGHS Algorithm

In the novel global harmony search (NGHS) algorithm [13, 14], three significant parameters, the harmony memory considering rate (HMCR), bandwidth (BW), and pitch adjusting rate (PAR), are excluded, and a random mutation probability ($p_m$) is included instead. In Step 3, NGHS works as Algorithm 2.

For i = 1 to D do
    x_R = 2 × x_i^best − x_i^worst  // position updating
    x_i^new = x_i^worst + rand() × (x_R − x_i^worst)
    If rand() ≤ p_m then  // random mutation
        x_i^new = L_i + rand() × (U_i − L_i)
    End If
End For

where $\mathbf{x}^{best}$ and $\mathbf{x}^{worst}$ are, respectively, the best and the worst harmony in HM, and rand() is a uniformly distributed random number in $[0, 1]$.
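A minimal Python sketch of one NGHS improvisation under these rules follows; minimization and all variable names are our own assumptions.

import numpy as np

def nghs_improvise(hm, fitness, lower, upper, pm=0.05):
    # One NGHS improvisation (Algorithm 2); minimization assumed.
    best = hm[np.argmin(fitness)]
    worst = hm[np.argmax(fitness)]
    dim = hm.shape[1]
    new = np.empty(dim)
    for i in range(dim):
        # position updating towards the mirror image of the worst
        # harmony about the best harmony
        x_r = np.clip(2.0 * best[i] - worst[i], lower[i], upper[i])
        new[i] = worst[i] + np.random.rand() * (x_r - worst[i])
        if np.random.rand() <= pm:
            # random mutation in the feasible range
            new[i] = lower[i] + np.random.rand() * (upper[i] - lower[i])
    return new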

3. HSTL Algorithm

In this section, we propose a novel harmony search method with a teaching-learning (HSTL) strategy derived from the teaching-learning-based optimization (TLBO) algorithm. First, the TLBO algorithm is introduced and analyzed; then we focus on the details of the HSTL algorithm and the strategies for dynamically adjusting its parameters.

3.1. The TLBO Algorithm

Teaching-learning-based optimization (TLBO) [18–20] is a new nature-inspired algorithm that mimics the teaching process of a teacher and the learning process among learners in a class. TLBO shows good performance with little computational effort on large-scale problems [19], that is, problems of high dimensionality. In addition, TLBO needs very few parameters.

In the TLBO method, the task of the teacher is to increase the mean knowledge of all learners of the class in the subject taught, depending on his or her capability. Learners make efforts to increase their knowledge by interacting among themselves. A learner is considered a solution vector, the design variables of the vector are analogous to the different subjects offered to learners, and a learner's result is analogous to the "fitness" in other population-based optimization techniques. The teacher is considered the best solution obtained so far. The working process of TLBO is divided into two phases: the teacher phase and the learner phase.

(1) Teacher Phase. Assume there are $D$ subjects (i.e., design variables) and NP learners (i.e., the population size), and let $x_j^{teacher}$ be the value of the best learner (i.e., the teacher) in subject $j$ ($j = 1, 2, \ldots, D$). Teaching works as follows:

$$x_{i,j}^{new} = x_{i,j}^{old} + \mathrm{rand}() \times \left(x_j^{teacher} - TF \times \mathrm{Mean}_j\right),$$

where $x_{i,j}^{old}$ denotes the result of the $i$th ($i = 1, 2, \ldots, \mathrm{NP}$) learner in the $j$th ($j = 1, 2, \ldots, D$) subject before learning, $x_{i,j}^{new}$ is the result of the $i$th learner after learning the $j$th subject, and $\mathrm{Mean}_j$ is the mean result of all learners in the $j$th subject. $TF$ is the teaching factor, which decides the magnitude of the change; its value is generated randomly with equal probability as $TF = \mathrm{round}[1 + \mathrm{rand}()]$.

When learner $i$ has finished learning from the teacher, $X_i$ is updated by setting $X_i = X_i^{new}$ if $X_i^{new}$ is better than $X_i$.

(2) Learner Phase. Another important approach for a learner to increase knowledge is to interact with other learners. The learning method is expressed in Algorithm 3.

For each learner X_i do
    Randomly select another learner X_k (k ≠ i)
    If X_k is superior to X_i then
        X_i^new = X_i + rand() × (X_k − X_i)
    Else
        X_i^new = X_i + rand() × (X_i − X_k)
    End If
    If X_i^new is superior to X_i then
        X_i = X_i^new
    End If
End For
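For reference when reading the HSTL modifications below, here is a compact Python sketch of one standard TLBO generation (teacher phase plus learner phase), assuming minimization; all names are illustrative.

import numpy as np

def tlbo_generation(pop, fit, f, lower, upper):
    # One TLBO generation (teacher phase + learner phase); minimizes f.
    n, dim = pop.shape
    for i in range(n):
        # ---- teacher phase ----
        teacher = pop[np.argmin(fit)]
        mean = pop.mean(axis=0)
        tf = np.random.randint(1, 3)  # TF = round(1 + rand()), i.e., 1 or 2
        new = np.clip(pop[i] + np.random.rand(dim) * (teacher - tf * mean),
                      lower, upper)
        fn = f(new)
        if fn < fit[i]:
            pop[i], fit[i] = new, fn
        # ---- learner phase ----
        k = np.random.choice([j for j in range(n) if j != i])
        if fit[k] < fit[i]:
            # learn from a better peer
            new = pop[i] + np.random.rand(dim) * (pop[k] - pop[i])
        else:
            # move away from a worse peer
            new = pop[i] + np.random.rand(dim) * (pop[i] - pop[k])
        new = np.clip(new, lower, upper)
        fn = f(new)
        if fn < fit[i]:
            pop[i], fit[i] = new, fn
    return pop, fit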

Since the TLBO algorithm was proposed in 2011 by Rao et al. [18], it has been applied in various fields of engineering optimization, such as mechanical design optimization [18, 21], heat exchangers [22], thermoelectric coolers [23], and unconstrained and constrained real-parameter optimization problems [24].

In the TLBO method, the teacher phase, which relies on the best solution found so far, usually has fast convergence speed and strong exploitation ability; it is more suitable for improving the accuracy of the global optimal solution. The learner phase, which relies on other learners, usually converges more slowly; however, it has stronger exploration capability for solving multimodal problems. Therefore, in the HSTL method we use the TLBO method to improve the performance and efficiency of the HS algorithm.

3.2. The HSTL Algorithm

In order to achieve the most satisfactory optimization performance when applying the HS algorithm to a given problem, we develop a novel harmony search algorithm combined with a teaching-learning strategy, in which both the new-harmony generation strategies and the associated control parameter values are dynamically changed over the course of evolution.

Keeping a proper balance between convergence and diversity is highly important for search efficiency. In the classical HS algorithm, a new harmony is generated in Step 3. After the selection operation in Step 4, the population variance may increase or decrease. With a high population variance, diversity and exploration power increase, while convergence and exploitation power decrease accordingly. Conversely, with a low population variance, convergence and exploitation power increase [25], and diversity and exploration power decrease. How to keep the balance between convergence and diversity is therefore significant. The classical HS algorithm easily loses this balance in the later stages of evolution [26], because new harmonies are improvised from HM with a high HMCR and adjusted locally with PAR, so the diversity of HM decreases gradually from the early iterations to the last. On the other hand, a low HMCR increases the probability (1 − HMCR) of random selection in the search space, enhancing exploration power, but the local search ability and the exploitation accuracy cannot be improved by the single pitch adjusting strategy.

To overcome these inherent weaknesses of HS, in this section we propose the HSTL method, in which an improved teaching-learning strategy is employed to improve the search for the optimal solution. The HSTL algorithm works as follows.
(1) Optimization target vector preparation: $\mathbf{X}^{target} = \mathbf{X}^{r}$, where $\mathbf{X}^{r}$ ($r \in \{1, 2, \ldots, \mathrm{HMS}\}$) is a harmony vector selected randomly from HM.
(2) Improve the target vector with the following four strategies.

(a) Harmony Memory Consideration. The $j$th design variable of the target vector, $x_j^{target}$, is chosen randomly from the harmony memory (HM) with a probability of HMCR as

$$x_j^{target} = x_j^{k}, \quad k \in \{1, 2, \ldots, \mathrm{HMS}\} \text{ chosen randomly}.$$

The HMCR is the rate of choosing one value from the historical values stored in the HM, which varies between 0 and 1.

(b) Teaching-Learning Strategy. If the $j$th ($j = 1, 2, \ldots, D$) design variable of the target vector has not been considered in HM, it will, with probability TLP, learn from the best harmony (i.e., the teacher) in the teacher phase or from another harmony (i.e., a learner) in the learner phase. The TLP is the probability of performing the teaching-learning operator on design variables that have not been considered in (a). It works as follows.

Teacher Phase. In this phase, the learner learns from the best learner (i.e., the teacher) in the class. The learner modification is expressed as

$$x_j^{target} = x_j^{target} + \mathrm{rand}() \times \left(x_j^{best} - TF \times \frac{x_j^{best} + x_j^{worst}}{2}\right),$$

where $\mathbf{X}^{best}$ is the best harmony in HM, $\mathbf{X}^{worst}$ is the worst harmony in HM, and rand() is a uniformly distributed random number between 0 and 1.

The contribution in this section is that the mean value of the population, $\mathrm{Mean}_j$, is replaced with $(x_j^{best} + x_j^{worst})/2$. There are two aspects to this superiority. First, the running cost is lower, because the mean value of the population does not need to be computed in every iteration. Second, the diversity of the population is enhanced more than in the standard TLBO algorithm, because the new mean value is different for each individual, whereas the $\mathrm{Mean}_j$ in the standard TLBO method is the same for every individual.

Learner Phase. In this phase, by comparing the advantages and disadvantages of two other learners, the learner learns from the better of the two; this draws on the idea of the differential evolution algorithm. The process is as follows.

Randomly select two distinct integers $p$ and $q$ from $\{1, 2, \ldots, \mathrm{HMS}\}$.
If $\mathbf{X}^{p}$ is better than $\mathbf{X}^{q}$ then
    $x_j^{target} = x_j^{target} + \mathrm{rand}() \times (x_j^{p} - x_j^{q})$
Else
    $x_j^{target} = x_j^{target} + \mathrm{rand}() \times (x_j^{q} - x_j^{p})$
End If

(c) Local Pitch Adjusting Strategy. To achieve better solutions in the search space, the local pitch adjusting strategy is carried out with probability PAR when the $j$th design variable has undergone neither harmony memory consideration nor the teaching-learning strategy, as

$$x_j^{target} = x_j^{target} \pm \mathrm{rand}() \times \mathrm{BW}(t),$$

where rand() is a uniformly distributed random number between 0 and 1 and $\mathrm{BW}(t)$ is an arbitrary distance bandwidth.

(d) Random Mutation Operator. As in the HS algorithm, if a design variable has undergone none of the previous operations (harmony memory consideration, teaching-learning strategy, or local pitch adjusting strategy), the HSTL method carries out the random mutation operator on the $j$th design variable of $\mathbf{X}^{target}$ in the feasible space with probability $p_m$ as follows:

$$x_j^{target} = L_j + \mathrm{rand}() \times (U_j - L_j).$$

The improvisation of the new target harmony in the HSTL algorithm is shown in Algorithm 4.

r = a random integer in {1, 2, …, HMS}; X^target = X^r  // randomly select the optimization target vector
For j = 1 to D do
    If rand() ≤ HMCR then  // (a) Harmony memory consideration
        x_j^target = x_j^k, where k is a random integer in {1, 2, …, HMS}
    Else If rand() ≤ TLP then  // (b) Teaching-learning strategy
        If rand() ≤ 0.5 then  // Teaching
            x_j^target = x_j^target + rand() × (x_j^best − TF × (x_j^best + x_j^worst)/2)
        Else  // Learning
            Randomly select distinct p and q from {1, 2, …, HMS}
            If X^p is better than X^q then
                x_j^target = x_j^target + rand() × (x_j^p − x_j^q)
            Else
                x_j^target = x_j^target + rand() × (x_j^q − x_j^p)
            End If
        End If
    Else If rand() ≤ PAR then  // (c) Local pitch adjusting strategy
        x_j^target = x_j^target ± rand() × BW(t)
    Else If rand() ≤ p_m then  // (d) Random mutation operator
        x_j^target = L_j + rand() × (U_j − L_j)
    End If
End For
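A Python sketch of Algorithm 4 is given below; it assumes minimization, and the teaching formula is the reconstruction discussed above, so the details should be read as illustrative rather than definitive.

import numpy as np

def hstl_improvise(hm, fit, lower, upper, hmcr, tlp, par, pm, bw):
    # One HSTL improvisation (Algorithm 4); minimization assumed.
    hms, dim = hm.shape
    best, worst = hm[np.argmin(fit)], hm[np.argmax(fit)]
    target = hm[np.random.randint(hms)].copy()  # optimization target vector
    for j in range(dim):
        if np.random.rand() <= hmcr:
            # (a) harmony memory consideration
            target[j] = hm[np.random.randint(hms), j]
        elif np.random.rand() <= tlp:
            # (b) teaching-learning strategy
            if np.random.rand() <= 0.5:
                # teaching: move towards the best harmony, with the mean
                # replaced by (best + worst) / 2
                tf = np.random.randint(1, 3)  # TF is 1 or 2
                target[j] += np.random.rand() * (
                    best[j] - tf * (best[j] + worst[j]) / 2.0)
            else:
                # learning: DE-style difference of two random harmonies
                p, q = np.random.choice(hms, 2, replace=False)
                if fit[p] < fit[q]:
                    target[j] += np.random.rand() * (hm[p, j] - hm[q, j])
                else:
                    target[j] += np.random.rand() * (hm[q, j] - hm[p, j])
        elif np.random.rand() <= par:
            # (c) local pitch adjusting within bandwidth bw
            target[j] += (2.0 * np.random.rand() - 1.0) * bw
        elif np.random.rand() <= pm:
            # (d) random mutation in the feasible range
            target[j] = lower[j] + np.random.rand() * (upper[j] - lower[j])
    return np.clip(target, lower, upper)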

The flow chart of HSTL algorithm is shown in Figure 1.

Parameters Dynamically Changed. To efficiently balance the exploration and exploitation power of the HSTL algorithm, the parameters HMCR, PAR, BW, and TLP are dynamically adapted within suitable ranges as the generations increase. Equation (10) shows the dynamic change of HMCR, PAR, BW, and TLP, respectively.

Consider the following [8]:

$$\begin{gathered} \mathrm{HMCR}(t) = \mathrm{HMCR}_{\min} + (\mathrm{HMCR}_{\max} - \mathrm{HMCR}_{\min}) \times \frac{t}{\mathrm{maxFEs}}, \\ \mathrm{PAR}(t) = \mathrm{PAR}_{\max} - (\mathrm{PAR}_{\max} - \mathrm{PAR}_{\min}) \times \frac{t}{\mathrm{maxFEs}}, \\ \mathrm{BW}(t) = \mathrm{BW}_{\max} \times \exp\left(\frac{t}{\mathrm{maxFEs}} \times \ln\frac{\mathrm{BW}_{\min}}{\mathrm{BW}_{\max}}\right), \\ \mathrm{TLP}(t) = \mathrm{TLP}_{\min} + (\mathrm{TLP}_{\max} - \mathrm{TLP}_{\min}) \times \frac{t}{\mathrm{maxFEs}}, \end{gathered} \tag{10}$$

where $t$ is the current number of improvisations.
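A small Python sketch of schedules with these shapes is shown below; the minimum and maximum parameter values are placeholders for illustration, not the settings used in the experiments of this paper.

import numpy as np

def hstl_params(t, max_fes,
                hmcr=(0.6, 0.9), tlp=(0.1, 0.5),
                par=(0.05, 0.35), bw=(1e-4, 1.0)):
    # Fraction of the evaluation budget already used.
    s = t / max_fes
    hmcr_t = hmcr[0] + (hmcr[1] - hmcr[0]) * s          # HMCR increases
    tlp_t = tlp[0] + (tlp[1] - tlp[0]) * s              # TLP increases
    par_t = par[1] - (par[1] - par[0]) * s              # PAR decreases
    bw_t = bw[1] * np.exp(s * np.log(bw[0] / bw[1]))    # BW decays exponentially
    return hmcr_t, tlp_t, par_t, bw_t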

4. Numerical Experiments and Results

4.1. Test Functions

The test functions [27–29] are shown in Table 1. The functions in Table 2 [30] are hybrid composition functions, which are built by combining a nonseparable (NS) function with other functions. The considered component functions are as follows.
(i) Nonseparable functions:
(a) shifted Rosenbrock's function;
(b) shifted Griewank's function;
(c) NS-Extended f10: the nonshifted Extended f10 function;
(d) NS-Bohachevsky: the nonshifted Bohachevsky function.
(ii) Other component functions:
(a) shifted Sphere function;
(b) shifted Rastrigin function;
(c) Schwefel 2.22 function.


Table 1: Benchmark functions (function name, expression, search range, optimum value).

F1, Ackley function: $f(x) = -20\exp\bigl(-0.2\sqrt{\tfrac{1}{D}\sum_{i=1}^{D} x_i^2}\bigr) - \exp\bigl(\tfrac{1}{D}\sum_{i=1}^{D}\cos(2\pi x_i)\bigr) + 20 + e$; $[-32, 32]^D$; $\min f = 0$.

F2, Dixon and Price function: $f(x) = (x_1 - 1)^2 + \sum_{i=2}^{D} i\,(2x_i^2 - x_{i-1})^2$; $[-10, 10]^D$; $\min f = 0$.

F3, Griewank function: $f(x) = \tfrac{1}{4000}\sum_{i=1}^{D} x_i^2 - \prod_{i=1}^{D}\cos(x_i/\sqrt{i}) + 1$; $[-600, 600]^D$; $\min f = 0$.

F4, Levy function: $f(x) = \sin^2(\pi y_1) + \sum_{i=1}^{D-1}(y_i - 1)^2[1 + 10\sin^2(\pi y_i + 1)] + (y_D - 1)^2[1 + \sin^2(2\pi y_D)]$, where $y_i = 1 + (x_i - 1)/4$; $[-10, 10]^D$; $\min f = 0$.

F5, Michalewicz function: $f(x) = -\sum_{i=1}^{D}\sin(x_i)\sin^{2m}(i x_i^2/\pi)$, $m = 10$; $[0, \pi]^D$; minimum depends on $D$.

F6, Powell function: $f(x) = \sum_{i=1}^{D/4}[(x_{4i-3} + 10x_{4i-2})^2 + 5(x_{4i-1} - x_{4i})^2 + (x_{4i-2} - 2x_{4i-1})^4 + 10(x_{4i-3} - x_{4i})^4]$; $[-4, 5]^D$; $x^* = (3, -1, 0, 1, \ldots, 3, -1, 0, 1)$, $\min f = 0$.

F7, Rastrigin function: $f(x) = \sum_{i=1}^{D}[x_i^2 - 10\cos(2\pi x_i) + 10]$; $[-5.12, 5.12]^D$; $\min f = 0$.

F8, Rosenbrock function: $f(x) = \sum_{i=1}^{D-1}[100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2]$; $[-30, 30]^D$; $\min f = 0$.

F9, Schwefel 2.26 function: $f(x) = 418.9829\,D - \sum_{i=1}^{D} x_i\sin(\sqrt{|x_i|})$; $[-500, 500]^D$; $x^* = (420.9687, \ldots, 420.9687)$, $\min f = 0$.

F10, Sphere function: $f(x) = \sum_{i=1}^{D} x_i^2$; $[-100, 100]^D$; $\min f = 0$.

F11, Trid function: $f(x) = \sum_{i=1}^{D}(x_i - 1)^2 - \sum_{i=2}^{D} x_i x_{i-1}$; $[-D^2, D^2]^D$; $\min f = -D(D+4)(D-1)/6$.

F12, Zakharov function: $f(x) = \sum_{i=1}^{D} x_i^2 + \bigl(\sum_{i=1}^{D} 0.5\,i\,x_i\bigr)^2 + \bigl(\sum_{i=1}^{D} 0.5\,i\,x_i\bigr)^4$; $[-5, 10]^D$; $\min f = 0$.

F13, Sphere shift function: $f(x) = \sum_{i=1}^{D} z_i^2$, $z = x - o$, where $o$ is the shift vector; $[-100, 100]^D$; $x^* = o$, $\min f = 0$.

F14, Schwefel shift function: shifted version of the Schwefel function, evaluated on $z = x - o$; $x^* = o$, $\min f = 0$.

F15, Rosenbrock shift function: $f(x) = \sum_{i=1}^{D-1}[100(z_{i+1} - z_i^2)^2 + (z_i - 1)^2]$, $z = x - o + 1$; $[-100, 100]^D$; $x^* = o$, $\min f = 0$.

F16, Griewank shift function: $f(x) = \tfrac{1}{4000}\sum_{i=1}^{D} z_i^2 - \prod_{i=1}^{D}\cos(z_i/\sqrt{i}) + 1$, $z = x - o$; $[-600, 600]^D$; $x^* = o$, $\min f = 0$.

F17, Rastrigin shift function: $f(x) = \sum_{i=1}^{D}[z_i^2 - 10\cos(2\pi z_i) + 10]$, $z = x - o$; $[-5, 5]^D$; $x^* = o$, $\min f = 0$.

F18, Ackley shift function: $f(x) = -20\exp\bigl(-0.2\sqrt{\tfrac{1}{D}\sum_{i=1}^{D} z_i^2}\bigr) - \exp\bigl(\tfrac{1}{D}\sum_{i=1}^{D}\cos(2\pi z_i)\bigr) + 20 + e$, $z = x - o$; $[-32, 32]^D$; $x^* = o$, $\min f = 0$.

F19, Fast Fractal "Double Dip" function: $f(x) = \sum_{i=1}^{D}\mathrm{fractal1D}\bigl(x_i + \mathrm{twist}(x_{(i \bmod D)+1})\bigr)$, with $\mathrm{twist}(y) = 4(y^4 - 2y^3 + y^2)$; $[-1, 1]^D$; optimum unknown.
ran1($o$): double, pseudorandomly chosen, with seed $o$, with equal probability from the interval $[0, 1]$.
ran2($o$): integer, pseudorandomly chosen, with seed $o$, with equal probability from the set $\{0, 1, 2\}$.
fractal1D($x$) is an approximation to a recursive algorithm; it does not take account of wrapping at the boundaries or of local reseeding of the random generators.

F20, Schwefel 2.22 function: $f(x) = \sum_{i=1}^{D}|x_i| + \prod_{i=1}^{D}|x_i|$; $[-10, 10]^D$; $\min f = 0$.

F21, Extended_f10 shift function: $f(x) = \sum_{i=1}^{D-1} f_{10}(z_i, z_{i+1}) + f_{10}(z_D, z_1)$, where $f_{10}(x, y) = (x^2 + y^2)^{0.25}[\sin^2(50(x^2 + y^2)^{0.1}) + 1]$ and $z = x - o$; $[-100, 100]^D$; $x^* = o$, $\min f = 0$.
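To illustrate how such benchmarks are evaluated in practice, here are Python versions of three entries of Table 1; the definitions follow the standard formulas above, and the shift vector o is supplied by the caller.

import numpy as np

def ackley(x):
    # F1: Ackley function, global minimum 0 at the origin.
    d = x.size
    return float(-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / d))
                 - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / d)
                 + 20.0 + np.e)

def rastrigin(x):
    # F7: Rastrigin function, global minimum 0 at the origin.
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

def shifted_sphere(x, o):
    # F13: shifted Sphere function, global minimum 0 at x = o.
    z = x - o
    return float(np.sum(z ** 2))

# usage: evaluate a random 30-dimensional point
x = np.random.uniform(-32.0, 32.0, 30)
print(ackley(x))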