Abstract

To enhance the performance of the harmony search (HS) algorithm on discrete optimization problems, this paper proposes a novel harmony search algorithm based on teaching-learning (HSTL) strategies for solving 0-1 knapsack problems. In the HSTL algorithm, a method is first presented to dynamically adjust the number of dimensions tuned in the selected harmony vector during the optimization procedure. In addition, four strategies (harmony memory consideration, teaching-learning strategy, local pitch adjusting, and random mutation) are employed to improve the performance of the HS algorithm. A further improvement in the HSTL method is that dynamic strategies are adopted to change the parameters, which effectively maintains a proper balance between global exploration power and local exploitation power. Finally, simulation experiments with 13 knapsack problems show that the HSTL algorithm can be an efficient alternative for solving 0-1 knapsack problems.

1. Introduction

Harmony search (HS) [1, 2] is a population-based metaheuristic optimization algorithm. It has received much attention regarding its application potential for continuous and discrete optimization problems. Inspired by the process of musicians' harmony improvisation, the HS algorithm improvises its instruments' pitches searching for a perfect state of harmony. The effort to find a new harmony in music is analogous to finding a better solution in an optimization process. HS has been applied to optimization problems in different areas [3–11]. The HS algorithm has powerful exploration ability in a reasonable time but is not good at performing a local search. In order to improve the performance of the harmony search method, several variants of HS have been proposed [12–20]. These variants improve performance on continuous optimization problems; however, their effectiveness in dealing with discrete problems is still unsatisfactory.

The knapsack problem is one of the classical combinatorial optimization problems. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items. The knapsack problem often applies to resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science, complexity theory, cryptography, and applied mathematics.

The 0-1 knapsack problem is as follows. Given a set of $D$ items and a knapsack, select some of the items so that the total profit of the selected items is maximized, while the total weight and the total volume of the selected items do not exceed the weight capacity and the volume capacity of the knapsack. Formally,
$$\max f(x) = \sum_{j=1}^{D} p_j x_j$$
subject to
$$\sum_{j=1}^{D} w_j x_j \leq W, \qquad \sum_{j=1}^{D} v_j x_j \leq V,$$
where $x_j \in \{0, 1\}$ ($j = 1, 2, \ldots, D$), $p_j$ is the profit of item $j$, $w_j$ and $v_j$ are its weight and volume, and $W$ and $V$ are the weight and volume capacities of the knapsack.
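As a concrete illustration, the following minimal Python sketch evaluates this model; the function and parameter names (knapsack_profit, is_feasible, profits, weights, volumes, W, V) are illustrative, not taken from the paper.

from typing import Sequence

def knapsack_profit(x: Sequence[int], profits: Sequence[float]) -> float:
    """Total profit of the selected items (x[j] = 1 means item j is packed)."""
    return sum(p * xj for p, xj in zip(profits, x))

def is_feasible(x: Sequence[int], weights: Sequence[float],
                volumes: Sequence[float], W: float, V: float) -> bool:
    """Check the weight and volume capacity constraints."""
    total_w = sum(w * xj for w, xj in zip(weights, x))
    total_v = sum(v * xj for v, xj in zip(volumes, x))
    return total_w <= W and total_v <= V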

Many methods have been employed to solve 0-1 knapsack problems. Zou et al. proposed a novel global harmony search algorithm (NGHS) for 0-1 knapsack problems [21]. Y. Liu and C. Liu presented an evolutionary algorithm to solve 0-1 knapsack problems [22]. Shi used an improved ant colony algorithm to solve 0-1 knapsack problems [23]. Lin solved knapsack problems with imprecise weight coefficients by using a genetic algorithm [24]. Boyer et al. solved knapsack problems on GPU [25]. Hill et al. proposed a heuristic method for 0-1 multidimensional knapsack problems [26]. Gherboudj et al. proposed a discrete binary cuckoo search (BCS) algorithm to deal with binary optimization problems [27]. A novel quantum-inspired cuckoo search for knapsack problems is presented in the literature [28].

In recent years, more and more discrete optimization problems have been solved by the HS method. To some extent, this is because the memory consideration rule is well suited to discrete optimization problems. However, for high-dimensional discrete optimization problems, the classical HS algorithm is prone to premature convergence and stagnation. Therefore, we present a dynamic parameter-adjustment mechanism for solving high-dimensional 0-1 knapsack problems. To enhance the performance of the HS method on discrete problems, this paper proposes an improved HS algorithm based on teaching-learning (HSTL) strategies.

The rest of the paper is organized as follows. Section 2 introduces the classical HS algorithm and three state-of-the-art variants of HS. The teaching-learning-based optimization (TLBO) algorithm and the proposed approach (HSTL) are introduced in Section 3. Section 4 presents related constraint-handling technique and integer processing method. Experimental results are reported in Section 5. Finally, Section 6 concludes this paper.

2. HS Algorithm and Other Variants

In this section, we introduce the classical HS algorithm and three state-of-the-art variants of HS algorithms: NGHS algorithm [17], intelligent tuned harmony search algorithm (ITHS) [18], and exploratory power of harmony search algorithm (EHS) [19].

2.1. Classical Harmony Search Algorithm (HS)

Classical harmony search (HS) is a derivative-free metaheuristic algorithm. It mimics the improvisation process of music players and uses three rules (memory consideration, pitch adjustment, and randomization) to optimize the harmony memory. The steps of the classical harmony search algorithm are as follows.

Step 1 (initialize the harmony memory). The harmony memory (HM) consists of HMS harmonies. Each harmony is generated from a uniform distribution in the feasible space, as
$$x^j_i = L_i + rand() \times (U_i - L_i), \quad i = 1, 2, \ldots, D, \; j = 1, 2, \ldots, \mathrm{HMS},$$
where $rand()$ is a uniformly distributed random number between 0 and 1 and $L_i$ and $U_i$ are the lower and upper bounds of the $i$th decision variable. The HM is as follows:
$$\mathrm{HM} = \begin{bmatrix} x^1_1 & x^1_2 & \cdots & x^1_D \\ x^2_1 & x^2_2 & \cdots & x^2_D \\ \vdots & \vdots & & \vdots \\ x^{\mathrm{HMS}}_1 & x^{\mathrm{HMS}}_2 & \cdots & x^{\mathrm{HMS}}_D \end{bmatrix}.$$

Step 2 (improvise a new harmony via three rules). Improvise a new harmony via three rules: memory consideration, pitch adjustment, and random generation.

(a) Memory Consideration. Each decision variable value of the new harmony is generated by choosing from the harmony memory with probability HMCR.

(b) Pitch Adjustment. Adjust the chosen component to an adjacent value (a small perturbation within bandwidth BW) with probability PAR.

(c) Random Generation. Generate a component randomly in the feasible region with probability 1 − HMCR.

The improvisation procedure of a new harmony works as Algorithm 1.

For i = 1 to D
  If rand() < HMCR
    x'_i = x^j_i, j ∈ {1, 2, ..., HMS} chosen at random   % memory consideration
    If rand() < PAR
      x'_i = x'_i ± rand() × BW          % pitch adjustment
      x'_i = min(max(x'_i, L_i), U_i)    % keep x'_i within [L_i, U_i]
    Endif
  Else
    x'_i = L_i + rand() × (U_i − L_i)    % random generation
  Endif
EndFor

A new potential variation (or an offspring) is generated in Step 2, which is equivalent to the mutation and crossover operators in standard evolutionary algorithms (EAs).
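As a hedged illustration, the improvisation step can be sketched in Python as follows; the function name and argument layout are assumptions made for this sketch, not the paper's implementation.

import random

def improvise(HM, HMCR, PAR, BW, lower, upper):
    """One classical-HS improvisation (Algorithm 1). HM is a list of harmony
    vectors; lower/upper are the per-dimension bounds L_i and U_i."""
    D = len(lower)
    new = [0.0] * D
    for i in range(D):
        if random.random() < HMCR:                    # memory consideration
            new[i] = random.choice(HM)[i]
            if random.random() < PAR:                 # pitch adjustment
                new[i] += random.uniform(-1.0, 1.0) * BW
                new[i] = min(max(new[i], lower[i]), upper[i])
        else:                                         # random generation
            new[i] = lower[i] + random.random() * (upper[i] - lower[i])
    return new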

Step 3 (update the worst harmony). If the new harmony $x'$ is better than the worst harmony $x^{worst}$ in HM, replace $x^{worst}$ with $x'$:
$$x^{worst} = x', \quad \text{if } f(x') \text{ is better than } f(x^{worst}),$$
where $x^{worst}$ denotes the worst harmony in HM.

Step 4 (check stopping criterion). If the stopping criterion (maximum function evaluation times: MaxFEs) is satisfied, computation is terminated. Otherwise, Step 2 and Step 3 are repeated.

2.2. The NGHS Algorithm

In the NGHS algorithm, three significant parameters, the harmony memory considering rate (HMCR), bandwidth (BW), and pitch adjusting rate (PAR), are excluded from NGHS, and a random selection rate $p_m$ (genetic mutation probability) is included. In Step 2, NGHS works as Algorithm 2, where $x^{best}$ and $x^{worst}$ denote the best harmony and the worst harmony in HM, respectively. The parameter $p_m$ is set separately for the 0-1 knapsack problem and for continuous optimization problems.

For i = 1 to D
  x_R = 2 × x^best_i − x^worst_i           % position updating
  x_R = min(max(x_R, L_i), U_i)            % keep x_R within [L_i, U_i]
  x'_i = x^worst_i + rand() × (x_R − x^worst_i)
  If rand() ≤ p_m                          % random mutation
    x'_i = L_i + rand() × (U_i − L_i)
  Endif
EndFor
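A minimal Python sketch of this improvisation, assuming the position-updating and mutation scheme of Algorithm 2 (names are illustrative):

import random

def nghs_improvise(best, worst, p_m, lower, upper):
    """One NGHS improvisation (Algorithm 2): position updating toward the
    point mirrored through the best harmony, plus genetic mutation."""
    D = len(best)
    new = [0.0] * D
    for i in range(D):
        x_r = 2.0 * best[i] - worst[i]                   # position updating
        x_r = min(max(x_r, lower[i]), upper[i])          # keep within bounds
        new[i] = worst[i] + random.random() * (x_r - worst[i])
        if random.random() <= p_m:                       # genetic mutation
            new[i] = lower[i] + random.random() * (upper[i] - lower[i])
    return new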

2.3. The EHS Algorithm

The EHS algorithm uses the same structure as the classical HS algorithm and does not introduce any complex operations. The only difference is that EHS proposes a new scheme for tuning BW, proportional to the current population variance:
$$\mathrm{BW} = k \sqrt{\mathrm{Var}(x)},$$
where $k$ is the proportionality constant and $\mathrm{Var}(x)$ is the variance of the current population. If the value of $k$ is high, the population variance will be maintained or increased, and thus the global exploration power of the algorithm is enhanced; if the value of $k$ is low, the population variance will decrease and the local exploitation performance will increase. The experimental results of Das et al. [19] have shown that keeping $k$ around 1.2 provides reasonably accurate results.

The EHS algorithm enhances the exploratory power and can provide a better balance between diversification and intensification. However, the exploratory power of EHS leads to slower convergence [18], and computing BW greatly increases the computational time.

2.4. The ITHS Algorithm

The ITHS algorithm [18] proposes a subpopulation formation technique. The population is divided into two groups, Group A and Group B, based on $f_{mean}$ (the mean objective function value of all harmony vectors in HM). The harmony vectors whose objective function value is less than $f_{mean}$ belong to Group A, and the rest belong to Group B. The improvisation procedure of a new harmony by ITHS is shown in Algorithm 3. In the ITHS algorithm, the parameter PAR is updated with the number of iterations:
$$\mathrm{PAR}(t) = \mathrm{PAR}_{\max} - (\mathrm{PAR}_{\max} - \mathrm{PAR}_{\min}) \times \frac{t}{T_{\max}},$$
where $t$ and $T_{\max}$ are the current iteration and the maximum number of iterations.

For i = 1 to D do
  If rand() < HMCR
    x'_i = x^j_i, j ∈ {1, 2, ..., HMS} chosen at random   % memory consideration
    If rand() < PAR
      If f(x^j) ≤ f_mean              % Group A
        If rand() < 0.5
          x'_i = x'_i + rand() × (x^best_i − x'_i)        % move toward the best harmony
        Else
          x'_i = x^best_i − rand() × (x^best_i − x'_i)
        Endif
      Else                            % Group B
        x'_i = x^best_i
        x'_i = x'_i ± rand() × BW     % perturb around the best harmony
      Endif
      x'_i = min(max(x'_i, L_i), U_i) % keep x'_i within bounds
    Endif
  Else
    x'_i = L_i + rand() × (U_i − L_i) % random selection
  Endif
EndFor

3. HSTL Algorithm

In this section, we propose a novel harmony search with a teaching-learning strategy derived from the Teaching-Learning-Based Optimization (TLBO) algorithm. First, the TLBO algorithm is introduced and analyzed; then we focus on the details of the HSTL algorithm and the strategies for dynamically adjusting its parameters.

Since its origination, the HS algorithm has been applied to many practical optimization problems. However, for large scale optimization problems, HS exhibits slow convergence and low precision, because a new decision variable value can be generated only by the pitch adjustment and randomization strategies, while the memory consideration rule merely recombines decision variable values already present in the current HM. HS maintains a strong exploration power in the early stage, but it lacks good exploitation in the later stage, and thus it is characterized by premature convergence followed by slow convergence. Therefore, for solving large scale optimization problems, the key is how to balance global exploration performance and local exploitation ability.

3.1. Dimension Reduction Adjustment Strategy

As we know, for a complex optimization problem, optimization may progress from extensive exploration over a large range to fine adjustment in a small range. For a $D$-dimensional optimization problem, we assume that its optimal solution is $x^* = (x^*_1, x^*_2, \ldots, x^*_D)$. Let the initial HM be as follows:
$$\mathrm{HM1} = \begin{bmatrix} x^1_1 & x^1_2 & \cdots & x^1_D \\ x^2_1 & x^2_2 & \cdots & x^2_D \\ \vdots & \vdots & & \vdots \\ x^{\mathrm{HMS}}_1 & x^{\mathrm{HMS}}_2 & \cdots & x^{\mathrm{HMS}}_D \end{bmatrix}.$$

After several iterations, the HM turns into HM2. It can be seen that each solution in HM2 has nearly reached the best solution, except for one dimension ($x^j_{k_j} \neq x^*_{k_j}$, $k_j \in \{1, 2, \ldots, D\}$; $j = 1, 2, \ldots, \mathrm{HMS}$).

Then we assume that only the harmony memory consideration rule of the HS algorithm is employed to optimize HM2. In the following, we employ two methods to generate two new harmonies, $x^{new1}$ and $x^{new2}$, respectively, and then analyze which method is better.

Method 1. Generate the new solution $x^{new1}$ variable by variable using the harmony memory consideration rule, as in Algorithm 4.

For i = 1 to D
  If rand() < HMCR
    x^{new1}_i = x^j_i, j ∈ {1, 2, ..., HMS} chosen at random   % memory consideration
  EndIf
EndFor

Method 2. Let $x^{new2}$ be a harmony selected from HM2 and then adjust only one of the variables of the new solution by using the harmony memory consideration rule, as in Algorithm 5.

i = ceil(rand() × D)   % generate a random integer between 1 and D
x^{new2}_i = x^j_i, j ∈ {1, 2, ..., HMS} chosen at random   % memory consideration on dimension i

In the following, we analyze the two methods.

In Method 1, all of the decision variables of harmony $x^{new1}$ are chosen from HM2. If $x^{new1}$ is to become the optimal solution $x^*$, we must choose $x^{new1}_i = x^*_i$ for every dimension. In other words, the wrong component $x^j_{k_j}$ in HM2 must not be chosen as $x^{new1}_{k_j}$ in the latter iteration. The probability that $x^{new1}_i$ turns into $x^*_i$ is $(\mathrm{HMS}-1)/\mathrm{HMS}$ in HM2, so the probability that $x^{new1}$ turns into $x^*$ is $((\mathrm{HMS}-1)/\mathrm{HMS})^D$.

In Method 2, assume that $x^{new2} = x^j$; during each iteration, only one decision variable of $x^{new2}$ needs to be adjusted with the harmony memory consideration rule. The probability that the wrong decision variable $x^j_{k_j}$ is chosen for adjustment is $1/D$, and the probability that it will then be replaced with $x^*_{k_j}$ is $(\mathrm{HMS}-1)/\mathrm{HMS}$. Therefore, in one iteration, the probability that $x^{new2}$ turns into $x^*$ by Method 2 is $\frac{1}{D} \times \frac{\mathrm{HMS}-1}{\mathrm{HMS}}$.

Next, we compare the success rate between Method 1 and Method 2 at different dimensions.

When $\mathrm{HMS} = 10$ and $D = 10$, the success rate of Method 1 is $0.9^{10} \approx 0.3487$ and the success rate of Method 2 is $\frac{1}{10} \times 0.9 = 0.09$. Apparently, under this condition, Method 1 is superior to Method 2. However, if we set $D = 1000$ and $\mathrm{HMS} = 10$, the success rate of Method 1 is $0.9^{1000} \approx 1.75 \times 10^{-46}$, while the success rate of Method 2 is $\frac{1}{1000} \times 0.9 = 9 \times 10^{-4}$.

For different values of $D$, a more detailed success rate comparison of Method 1 and Method 2 is listed in Table 1.

From Table 1, it can be seen that, for low-dimensional problems, the success rate of Method 1 is greater than that of Method 2; however, Method 2 has a higher success rate than Method 1 when $D$ is large, and with increasing dimensionality the success rate of Method 1 drops dramatically while Method 2 maintains a higher success rate. The trend can be reproduced with the short script below.
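The following Python snippet is a quick check of the success-rate analysis above, assuming the two formulas just derived with HMS = 10 (so the per-dimension hit rate is 0.9); it reproduces the trend reported in Table 1.

for D in (10, 30, 50, 100, 500, 1000):
    p_method1 = 0.9 ** D    # all D variables must be chosen correctly
    p_method2 = 0.9 / D     # pick the wrong dimension (1/D), then fix it (0.9)
    print(f"D={D:5d}  Method 1: {p_method1:.3e}  Method 2: {p_method2:.3e}")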

Based on the above idea, in the beginning stages the search explores a wider domain so as to search quickly, and in the later stages the search focuses on a small space to improve the accuracy of the solution. For the HS algorithm, we design a dynamic dimension selection strategy to adjust some selected decision variables. A simple process is shown in Figure 1.

In Figure 1, the parameter TP represents the tuning probability of each decision variable. In other words, each decision variable of the target harmony vector is tuned with probability TP, which decreases from $\mathrm{TP}_{\max}$ to $\mathrm{TP}_{\min}$ as the iterations increase.

3.2. The TLBO Algorithm

Teaching-Learning-Based Optimization (TLBO) algorithm [29–38] is a new nature-inspired algorithm; it mimics the teaching process of a teacher and the learning process among learners in a class. TLBO shows a better performance with less computational effort for large scale problems [31]. In addition, TLBO needs very few parameters.

In the TLBO method, the task of a teacher is to try to increase mean knowledge of all learners of the class in the subject taught by him or her depending on his or her capability. Learners make efforts to increase their knowledge by interaction among themselves. A learner is considered as a solution or a vector, different design variables of a vector will be analogous to different subjects offered to learners, and the learners’ result is analogous to the “fitness” as in other population-based optimization techniques. The teacher is considered as the best solution obtained so far. The process of TLBO is divided into two phases, “Teacher Phase” and “Learner Phase.”

3.2.1. Teacher Phase

Assume that there are $D$ subjects (i.e., design variables) and $N$ learners (i.e., population size), and that $X^{teacher}$ is the best learner (i.e., teacher). For each learner $X^k$ ($k = 1, 2, \ldots, N$), the work of teaching is as follows:
$$X^k_{new,i} = X^k_{old,i} + rand() \times \left(X^{teacher}_i - TF \times \mathrm{Mean}_i\right), \quad i = 1, 2, \ldots, D,$$
where $X^k_{old,i}$ and $X^k_{new,i}$ denote the knowledge of the $k$th learner on the $i$th subject before and after learning, respectively, and $\mathrm{Mean}_i$ is the mean knowledge of all learners on the $i$th subject. $TF$ is the teaching factor, which decides the value of mean to be changed; it is decided by $TF = \mathrm{round}(1 + rand(0,1))$.

3.2.2. Learner Phase

Another important approach to increase knowledge for a learner is to interact with other learners. Learning method is expressed as in Algorithm 6.

For each learner X^k:
  Randomly select another learner X^m (m ≠ k)
  If X^m is superior to X^k
    X^k_new = X^k_old + rand() × (X^m − X^k_old)
  Else
    X^k_new = X^k_old + rand() × (X^k_old − X^m)
  Endif
Endfor
If X^k_new is superior to X^k_old
  X^k = X^k_new
Endif
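The two phases can be sketched together in Python as below. This is a minimal sketch of the standard TLBO step under a minimization convention, with illustrative names (clamp, tlbo_step), not the authors' implementation.

import random

def clamp(v, lower, upper):
    """Clip a vector to the box [lower, upper]."""
    return [min(max(x, l), u) for x, l, u in zip(v, lower, upper)]

def tlbo_step(pop, fitness, lower, upper):
    """One TLBO generation (teacher phase then learner phase) over `pop`,
    a list of real-valued vectors; `fitness` is minimized here (flip the
    comparisons for maximization)."""
    N, D = len(pop), len(lower)
    for k in range(N):
        teacher = min(pop, key=fitness)          # best learner so far
        mean = [sum(x[i] for x in pop) / N for i in range(D)]
        TF = int(round(1 + random.random()))     # teaching factor: 1 or 2
        # Teacher phase: move toward the teacher, away from the class mean.
        cand = clamp([pop[k][i] + random.random() * (teacher[i] - TF * mean[i])
                      for i in range(D)], lower, upper)
        if fitness(cand) < fitness(pop[k]):
            pop[k] = cand
        # Learner phase: interact with a randomly chosen classmate.
        m = random.choice([j for j in range(N) if j != k])
        if fitness(pop[m]) < fitness(pop[k]):
            cand = [pop[k][i] + random.random() * (pop[m][i] - pop[k][i])
                    for i in range(D)]
        else:
            cand = [pop[k][i] + random.random() * (pop[k][i] - pop[m][i])
                    for i in range(D)]
        cand = clamp(cand, lower, upper)
        if fitness(cand) < fitness(pop[k]):      # greedy acceptance
            pop[k] = cand
    return pop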

Ever since the TLBO algorithm was proposed by Rao et al. [29], it has been applied to fields of engineering optimization such as mechanical design optimization [29, 32, 37], heat exchangers [33], thermoelectric coolers [34], and unconstrained and constrained real-parameter optimization problems [35, 36]. Several improved TLBO algorithms have been presented in the last two years: an elitist TLBO algorithm for solving unconstrained optimization problems by Rao and Patel [38] and an improved harmony search based on a teaching-learning strategy for unconstrained optimization problems by Tuo et al. [39].

In the TLBO method, the teacher phase, relying on the best solution found so far, usually has fast convergence speed and good exploitation ability; it is more suitable for improving the accuracy of the global optimal solution. The learner phase, relying on other learners, usually has slower convergence; however, it bears stronger exploration capability for solving multimodal problems.

3.3. The HSTL Algorithm

In order to achieve satisfactory optimization performance by applying the HS algorithm to a given problem, we develop a novel harmony search algorithm combined with a teaching-learning strategy, in which both the new harmony generation strategies and the associated control parameter values are dynamically changed according to the progress of evolution.

It is of great importance to balance convergence and diversity. In the classical HS algorithm, a new harmony is generated in Step 2. After the selection operation in Step 3, the population diversity may increase or decrease. With high population diversity, the algorithm has strong exploration power, while the convergence and exploitation power decrease accordingly. Conversely, with low population variance, the convergence and exploitation power increase [18], while the diversity and exploration power decrease. So it is important to keep a balance between convergence and diversity. The classical HS algorithm easily loses exploitation ability in the later evolution process [19]: new harmonies are improvised from HM with a high HMCR and locally adjusted with PAR, so the diversity of HM decreases gradually from the early iterations to the last. Moreover, a low HMCR increases the probability (1 − HMCR) of random selection in the search space, which enhances exploration power, but the local search ability and exploitation accuracy cannot be improved by the single pitch adjusting strategy.

To overcome the inherent weaknesses of HS, in this section we propose the HSTL method, in which an improved teaching-learning strategy is employed to improve the search ability. The HSTL algorithm works as follows.

(1) Optimization target vector preparation: $x' = x^{worst}$, where $x^{worst}$ is the worst harmony in the current HM.

(2) Improve the target vector with the following four strategies.

(a) Harmony Memory Consideration. The values of the target vector are chosen randomly from HM with probability HMCR:
$$x'_i = x^j_i, \quad j \in \{1, 2, \ldots, \mathrm{HMS}\} \text{ chosen at random}.$$

(b) Teaching-Learning Strategy. If the $i$th ($i = 1, 2, \ldots, D$) design variable of the target vector has not been considered in (a), it will learn from the best harmony (i.e., teacher) in the teacher phase or from another harmony (i.e., learner) in the learner phase, with probability TLP. The TLP is the rate of performing the teaching-learning operator on design variables that have not been processed in (a) harmony memory consideration. It works as follows.

Teacher Phase. In this phase, the learner learns from the best learner (i.e., teacher) in the class. Learner modification is expressed as
$$x'_i = x'_i + rand() \times \left(x^{best}_i - TF \times x^{worst}_i\right),$$
where $x^{best}$ is the best harmony in HM and $x^{worst}$ is the worst harmony in HM.

A contribution of this paper is that $x^{worst}$ is used in place of the mean value of the population. This replacement enhances the diversity of the population more than the standard TLBO algorithm does.

Learner Phase. Randomly select $x^{r_1}$ and $x^{r_2}$ from HM ($r_1, r_2 \in \{1, 2, \ldots, \mathrm{HMS}\}$ and $r_1 \neq r_2$):
$$x'_i = \begin{cases} x'_i + rand() \times \left(x^{r_1}_i - x^{r_2}_i\right), & \text{if } x^{r_1} \text{ is better than } x^{r_2}, \\ x'_i + rand() \times \left(x^{r_2}_i - x^{r_1}_i\right), & \text{otherwise}. \end{cases}$$

(c) Local Pitch Adjusting Strategy. To achieve better solutions in the search space, the local pitch adjusting strategy is carried out with probability PAR on design variables that have not been selected for harmony memory consideration or the teaching-learning strategy:
$$x'_i = x'_i \pm rand() \times \mathrm{BW}.$$

(d) Random Mutation Operator. HSTL carries out random mutation in the feasible space with probability $p_m$ as follows:
$$x'_i = L_i + rand() \times (U_i - L_i).$$

The improvisation of new target harmony in HSTL algorithm is given in Algorithm 7.

x' = x^worst   % select the worst harmony in HM as the optimization target vector
For i = 1 to D
  If rand() < HMCR                    % (a) Harmony memory consideration
    x'_i = x^j_i, j ∈ {1, 2, ..., HMS} chosen at random
  Elseif rand() < TLP                 % (b) Teaching-learning strategy
    If rand() < 0.5                   % Teaching
      x'_i = x'_i + rand() × (x^best_i − TF × x^worst_i)
    Else                              % Learning
      Randomly select r1 and r2 from {1, 2, ..., HMS} (r1 ≠ r2)
      If x^{r1} is better than x^{r2}
        x'_i = x'_i + rand() × (x^{r1}_i − x^{r2}_i)
      Else
        x'_i = x'_i + rand() × (x^{r2}_i − x^{r1}_i)
      Endif
    Endif
    x'_i = min(max(x'_i, L_i), U_i)   % keep x'_i within bounds
  Elseif rand(0,1) < PAR              % (c) Local pitch adjusting strategy
    x'_i = x'_i ± rand() × BW
    x'_i = min(max(x'_i, L_i), U_i)
  Endif
  If rand(0,1) < p_m                  % (d) Random mutation operator in feasible space
    x'_i = L_i + rand() × (U_i − L_i)
  Endif
Endfor
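A Python sketch of this improvisation, under the reconstructions above (the function name and argument layout are illustrative; minimization is assumed, so flip the comparison for knapsack maximization):

import random

def hstl_improvise(HM, fitness, best, worst, HMCR, TLP, PAR, BW, p_m,
                   lower, upper):
    """One HSTL improvisation (Algorithm 7). HM is the harmony memory;
    best/worst are its best and worst harmonies."""
    D = len(worst)
    x = list(worst)                          # the optimization target vector
    for i in range(D):
        if random.random() < HMCR:           # (a) harmony memory consideration
            x[i] = random.choice(HM)[i]
        elif random.random() < TLP:          # (b) teaching-learning strategy
            if random.random() < 0.5:        # teacher phase
                TF = int(round(1 + random.random()))
                x[i] += random.random() * (best[i] - TF * worst[i])
            else:                            # learner phase
                r1, r2 = random.sample(range(len(HM)), 2)
                if fitness(HM[r1]) < fitness(HM[r2]):
                    x[i] += random.random() * (HM[r1][i] - HM[r2][i])
                else:
                    x[i] += random.random() * (HM[r2][i] - HM[r1][i])
            x[i] = min(max(x[i], lower[i]), upper[i])
        elif random.random() < PAR:          # (c) local pitch adjusting
            x[i] += random.uniform(-1.0, 1.0) * BW
            x[i] = min(max(x[i], lower[i]), upper[i])
        if random.random() < p_m:            # (d) random mutation
            x[i] = lower[i] + random.random() * (upper[i] - lower[i])
    return x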

The flow chart of HSTL algorithm is shown in Figure 2.

(i) Update Operation. In the HSTL algorithm, the update operation is modified as follows.

Get the best harmony $x^{best}$ and the worst harmony $x^{worst}$ from the HM; if the new harmony $x'$ is better than $x^{worst}$, then replace $x^{worst}$ with $x'$.

(ii) Parameters Changed Dynamically. To efficiently balance the exploration and exploitation power of the HSTL algorithm, HMCR, PAR, BW, and TLP are dynamically adapted to a suitable range as the generations increase. HMCR is increased linearly,
$$\mathrm{HMCR}(t) = \mathrm{HMCR}_{\min} + (\mathrm{HMCR}_{\max} - \mathrm{HMCR}_{\min}) \times \frac{t}{T_{\max}},$$
TLP is increased nonlinearly from $\mathrm{TLP}_{\min}$ to $\mathrm{TLP}_{\max}$, slowly in the early stage and sharply in the final stage (see Figure 3), and PAR and BW are updated as
$$\mathrm{PAR}(t) = \mathrm{PAR}_{\max} - (\mathrm{PAR}_{\max} - \mathrm{PAR}_{\min}) \times \frac{t}{T_{\max}}, \qquad \mathrm{BW}(t) = \mathrm{BW}_{\max} \exp\left(\frac{t}{T_{\max}} \ln \frac{\mathrm{BW}_{\min}}{\mathrm{BW}_{\max}}\right),$$
where $t$ and $T_{\max}$ are the current and maximum iterations; the PAR and BW update rules are quoted from the literature [18] and [16], respectively.

Let $\mathrm{HMCR}_{\min} = 0.6$, $\mathrm{HMCR}_{\max} = 0.9$, $\mathrm{PAR}_{\min} = 0.3$, and $\mathrm{PAR}_{\max} = 0.5$, with corresponding bounds for TLP and BW. The changing curves of the parameters (HMCR, TLP, PAR, and BW) are shown in Figure 3.

It can be seen that the parameter HMCR increases linearly from 0.6 to 0.9. TLP increases slowly in the early stage and rises sharply in the final stage. That is to say, in the beginning, the harmony memory consideration rule and the teaching-learning strategy are carried out with a smaller probability; in the later stage, the HSTL algorithm focuses on local exploitation with harmony memory consideration and the teaching-learning strategy. The benefit of doing so is to gain more opportunities to reinforce global exploration by strengthening the disturbance in the early stage, and to intensify the local search step by step in the final stage, thereby acquiring high-precision solutions. For the same reason, BW decreases gradually in order to reduce the perturbation, and PAR's variation from 0.5 to 0.3 reduces the probability of pitch adjustment.

The random mutation probability $p_m$ is changed dynamically from $5/D$ to $3/D$ with the iterations:
$$p_m(t) = \frac{5}{D} - \frac{2}{D} \times \frac{t}{T_{\max}}.$$
This makes the random disturbance change from strong to weak, so the HSTL algorithm has strong global exploration ability in the early stage and effective local exploitation ability in the later stage.
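A one-line Python rendering of this schedule (the function name is illustrative):

def mutation_rate(t, T_max, D):
    """Random-mutation probability p_m, falling linearly from 5/D to 3/D."""
    return 5.0 / D - (2.0 / D) * (t / T_max)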

4. Constraint-Handling Technique and Integer Processing Method

The 0-1 knapsack problem is a large scale, multiconstraint, nonlinear integer programming problem. Solving 0-1 knapsack problems with the HSTL algorithm therefore requires a constraint-handling technique and an integer processing method.

4.1. Constraint-Handling Technique

For constrained optimization problems, many successful constraint-handling techniques exist, such as penalty function methods, special representations and operators, repair methods, and multiobjective methods [40–53]. Because this paper mainly studies discrete optimization problems with the HS algorithm, the special handling techniques and revision methods [23–28] tailored to 0-1 knapsack problems have not been adopted here.

In this paper, the multiobjective method has been used for handling constrained 0-1 knapsack problems. The multiobjective method [42] is as follows.

(1) Any feasible solution is better than any infeasible solution.

(2) Between two feasible solutions, the one with the better objective function value is preferred.

(3) Between two infeasible solutions, the one with the smaller degree of constraint violation is preferred.

The purpose is to drive an infeasible solution with a large degree of constraint violation to move gradually towards solutions with no or smaller constraint violation. However, for a knapsack problem with many items and a knapsack with small capacity, if we only execute the HS algorithm without any revision method, all of the harmony vectors in HM are probably infeasible, and even very good methods can hardly obtain feasible solutions. So, for an infeasible solution, we perform a simple repair by randomly removing some items from the knapsack; in this way, an infeasible solution can gradually turn into a feasible one. A sketch of these rules and the repair step is given below.
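The following Python sketch implements the three comparison rules and the random-removal repair under the assumptions above (the function names violation, is_better, and repair are illustrative):

import random

def violation(x, weights, volumes, W, V):
    """Total constraint violation (0 means the solution is feasible)."""
    over_w = max(0.0, sum(w * xj for w, xj in zip(weights, x)) - W)
    over_v = max(0.0, sum(v * xj for v, xj in zip(volumes, x)) - V)
    return over_w + over_v

def is_better(x1, x2, profit, weights, volumes, W, V):
    """Compare two solutions by the three feasibility rules above."""
    v1 = violation(x1, weights, volumes, W, V)
    v2 = violation(x2, weights, volumes, W, V)
    if v1 == 0 and v2 == 0:
        return profit(x1) > profit(x2)   # both feasible: higher profit wins
    if v1 == 0 or v2 == 0:
        return v1 == 0                   # feasible beats infeasible
    return v1 < v2                       # both infeasible: smaller violation wins

def repair(x, weights, volumes, W, V):
    """Randomly remove packed items until the solution becomes feasible."""
    x = list(x)
    packed = [i for i, xj in enumerate(x) if xj == 1]
    random.shuffle(packed)
    while violation(x, weights, volumes, W, V) > 0 and packed:
        x[packed.pop()] = 0
    return x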

4.2. Integer Processing Method

In the HSTL algorithm, the variables adjusted by the teaching-learning strategy, the local pitch adjusting strategy, and the random mutation operator are real numbers, so every variable is replaced by the nearest integer; that is, let $x_i = \mathrm{round}(x_i)$, so that $x_i \in \{0, 1\}$ under the 0-1 knapsack bounds.
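In Python, under the [0, 1] bounds used here, this is simply:

def to_binary(x):
    """Round each real-valued variable to the nearest integer (0 or 1 here)."""
    return [int(round(xj)) for xj in x]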

5. Solving 0-1 Knapsack Problems with HSTL Algorithm

5.1. Experimental Setup and Parameters Setting

In order to evaluate the performance of the HSTL algorithm, we used a set of 13 knapsack problems (KP1–KP13). KP1–KP3 are quoted from the literature [21]. KP4–KP13 are quoted from the website http://homepage.ntlworld.com/walter.barker2/Knapsack%20Problem.htm. Among the knapsack problems, KP1–KP8 are called one-dimensional problems, which include only a weight constraint, and KP9–KP13 are called two-dimensional problems, which include both a weight constraint and a volume constraint.

All simulation experiments are carried out to compare the optimization performance of the presented method (HSTL) with (a) classical HS, (b) NGHS, (c) EHS, and (d) ITHS. In the experiments, the parameter settings for the compared HS algorithms are shown in Table 2. To make the comparison fair, the populations for all the competitor algorithms were initialized using the same random seeds. The variants of the HS algorithm were set to the same termination criterion: the maximum number of improvisations (function evaluation times: MaxFEs), which depends on the dimension of the problem.

The best and worst fitness values of each test problem are recorded over 30 independent runs; the mean fitness, standard deviation (Std), and mean runtime of each knapsack problem are also calculated over the 30 independent runs.

5.2. The Experimental Results and Analysis

Table 3 reports the worst, mean, best, and Std of problem results by applying the five algorithms (HS, NGHS, EHS, ITHS, and HSTL) to optimize the knapsack problems KP1–KP8, respectively. The best results are emphasized in boldface. Figures 4 and 6 illustrate the convergence characteristics in terms of the best values of the median run of each algorithm for knapsack problems KP1–KP8. Figures 5 and 7 demonstrate the performance and stability characteristics according to the distributions of the best values of 30 runs of each algorithm for knapsack problems KP1–KP8.

Based on the data in Table 3, near-optimal objective values (best, mean, worst, and Std) are easily obtained by the NGHS, EHS, ITHS, and HSTL algorithms with high accuracy on knapsack problems KP1–KP5. Comparatively speaking, however, NGHS and HSTL obtain better results on worst, mean, best, and Std, and thus the NGHS and HSTL algorithms are more effective and stable in solving problems KP1–KP5.

For the high-dimensional knapsack problems KP6–KP8, Table 3 shows that the HSTL algorithm has obvious advantages over the other variants of the HS algorithm. Compared with the other HS algorithms, although the HSTL algorithm converges slowly in the early stage, it keeps improving the solutions and obtains high-precision solutions in the later stage, as can be seen from Figure 6.

It is evident from Figure 7 that HSTL algorithm has better convergence, stability, and robustness in most cases than HS, NGHS, EHS, and ITHS algorithms.

Table 4 shows the experimental results on algorithms HS, NGHS, EHS, ITHS, and HSTL for two-dimensional knapsack problems: KP9–KP13. Figure 8 shows the convergence graphs, and Figure 9 is the box plots of independent 30 runs of knapsack problems KP9–KP13.

It can be seen clearly from Table 4 that the HSTL algorithm attained the best results for best, mean, worst, and Std on all two-dimensional knapsack problems.

From the convergence graphs (Figure 8), it can be seen that the HSTL algorithm has a strong search ability and convergence throughout the search process for the two-dimensional knapsack problems. As can be seen from the box plots (Figure 9), the HSTL has demonstrated some advantage over the other four algorithms on solving two-dimensional 0-1 knapsack problems.

6. Conclusion

In this paper, a novel harmony search algorithm with teaching-learning strategies is presented to solve 0-1 knapsack problems. The HSTL algorithm employs the idea of teaching and learning. Four strategies are used to maintain a proper balance between exploration power and exploitation power. As evolution proceeds, a dynamic strategy is adopted to change the parameters HMCR, TLP, BW, and PAR. Experimental results showed that the HSTL algorithm has a stronger ability to solve high-dimensional 0-1 knapsack problems. However, the HSTL algorithm has more parameters. In the future, we should focus on improving the structure of the HSTL algorithm, decreasing the running cost, and enhancing its efficiency for solving complex optimization problems.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by the National Natural Science Foundation of China under Grant no. 81160183, Scientific Research Program funded by Shaanxi Provincial Education Department under Grant no. 12JK0863, Ministry of Education "Chunhui Plan" project (no. Z2011051), Natural Science Foundation of Ningxia Hui Autonomous Region (no. NZ12179), and Scientific Research Fund of Ningxia Education Department (no. NGY2011042).