Abstract

In order to better solve discrete 0-1 knapsack problems, a novel global-best harmony search algorithm with binary coding, called DGHS, is proposed. First, an initialization based on a greedy mechanism is employed to improve the quality of the initial solutions in DGHS. Next, we present a novel improvisation process based on an intuitive cognition of improvising a new harmony, in which the best harmony of the harmony memory (HM) is used to guide the search direction during memory consideration; otherwise, a harmony is randomly chosen from the HM and a discrete genetic mutation is applied with some probability during pitch adjustment. Third, a two-phase repair operator is employed to repair an infeasible harmony vector and to further improve a feasible solution. Last, a new selection scheme is applied to decide whether or not a newly generated harmony is included in the HM. The proposed DGHS is evaluated on twenty knapsack problems of different scales and compared with three other metaheuristics from the literature. The experimental results indicate that DGHS is efficient, effective, and robust for solving difficult 0-1 knapsack problems.

1. Introduction

Over the last four decades, the zero-one knapsack problem, the inverse knapsack problem, and their variants have attracted much attention [1–24]. This is because they play an important role in computing theory and in a number of real-world applications, such as project selection, resource allocation, production planning, and others [24, 25].

In this paper, we focus on the classical 0-1 knapsack problem, in which a set of $n$ items is given and each item $j$ has a profit $p_j$ and a weight $w_j$. The problem is to choose a subset of the items such that the profit sum of the chosen items is maximized without exceeding the capacity $C$. Mathematically, the classical zero-one knapsack problem can be modelled as the following integer linear programming model:

$$\max \; f(\mathbf{x}) = \sum_{j=1}^{n} p_j x_j \quad \text{s.t.} \quad \sum_{j=1}^{n} w_j x_j \le C, \quad x_j \in \{0, 1\}, \; j = 1, 2, \ldots, n, \quad (1)$$

where $x_j$ takes the value one if and only if item $j$ is loaded, the coefficients $p_j$ and $w_j$ represent the profit and weight of item $j$, respectively, $C$ is a constant denoting the capacity of the corresponding knapsack, and $n$ is the number of items. Without loss of generality, it is assumed that all coefficients $p_j$, $w_j$, and $C$ are positive. Meanwhile, we suppose that $w_j \le C$ for every item $j$, and that $\sum_{j=1}^{n} w_j > C$.

As far as the 0-1 knapsack problem is concerned, there are essentially two types of algorithms: exact algorithms and heuristic algorithms. The exact approaches for knapsack problems mainly include dynamic programming, branch and bound, and its enhanced variant, branch and cut [4–7, 9].

Unfortunately, the 0-1 knapsack problem belongs to the class of NP-hard problems [12]. That is to say, the number of candidate solutions grows exponentially with the problem size. Hence, NP-hard problems are very difficult to solve, and no polynomial-time algorithm is known for solving them. Thus, more and more researchers have started to investigate 0-1 knapsack problems by means of heuristics, metaheuristics, and hybrid algorithms combining exact and heuristic methods [11–15, 20, 22, 24]. For example, He et al. [11] proposed a greedy genetic algorithm (GGA) for 0-1 knapsack problems. In GGA, a novel greedy operator and a repair operator are introduced to improve the performance of the standard genetic algorithm. Bansal and Deep [12] proposed a modified binary particle swarm optimization (MBPSO) for solving 0-1 knapsack problems and multidimensional knapsack problems. When compared with the basic binary particle swarm optimization (BPSO), MBPSO achieves better performance. Inspired by the nature of chemical reactions, Truong et al. [13] proposed a chemical reaction optimization with greedy strategy algorithm (CROG) to solve zero-one knapsack problems. However, it has seven parameters to be predefined, and tuning them is not easy. In addition, a novel global harmony search algorithm, named NGHS, was first proposed by Zou et al. [24] to solve knapsack problems. In NGHS, an adaptive step scheme for the $i$th decision variable, a genetic mutation operation, and a discretization technique for the real-coded NGHS are introduced. Later, Layeb [14] presented a hybrid algorithm based on the harmony search (HS) algorithm and quantum computing, called the quantum inspired harmony search algorithm (QIHSA), for solving knapsack problems. Meanwhile, Wang et al. [15] proposed an improved adaptive binary harmony search (ABHS) algorithm for solving binary knapsack problems. Although these recently proposed algorithms have improved to some extent, their convergence speed, convergence precision, and robustness still need to be further enhanced.

The harmony search (HS) algorithm, a novel population-based evolutionary algorithm, was first proposed by Geem et al. [26] in 2001. Due to its simplicity and ease of implementation, it has aroused great interest and has been successfully applied to a variety of optimization problems, including many real-parameter optimization problems [27–30] and knapsack problems [15, 24, 31–35], during the last decade.

However, research on binary-coded harmony search algorithms has only just begun, as stated in [15]. In other words, the convergence performance of HS and its variants needs to be further enhanced. Therefore, in order to further improve the convergence performance of HS, a novel discrete global-best harmony search algorithm, called DGHS, is proposed for 0-1 knapsack problems. In DGHS, an initialization based on a greedy operation, a novel improvisation process for generating a new harmony, and a two-phase repair operator are integrated. Experimental results on twelve benchmark instances of small or medium size and eight randomly generated high-dimensional instances show that DGHS is superior to both GHS and NGHS in most cases.

The rest of the paper is organized as follows. In Section 2, the standard harmony search algorithm is described briefly. An overview of the global-best harmony search (GHS) algorithm proposed by Omran and Mahdavi [35] is given in Section 3. Section 4 describes the proposed DGHS in detail. In Section 5, extensive computational experiments are presented and discussed. Finally, some conclusions are drawn in Section 6.

2. Harmony Search Algorithm

The harmony search (HS) algorithm was developed by Geem et al. [26] in 2001 by mimicking the improvisation process of music players. Like many other population-based metaheuristics, it works as an iterative process for an optimization problem. The optimization problem can be formulated as follows:

$$\min \; f(\mathbf{x}) \quad \text{s.t.} \quad x_i \in [LB_i, UB_i], \; i = 1, 2, \ldots, n, \quad (2)$$

where $\mathbf{x} = (x_1, x_2, \ldots, x_n)$ and the feasible solution space can be denoted by $\mathbf{X}$. Moreover, $LB_i$ and $UB_i$ represent the lower bound and upper bound of the $i$th decision variable, respectively.

In general, its procedure consists of the following four steps.

Step 1. Initialize the control parameters and a harmony memory. At this step, an initial harmony memory (HM) is filled with a population of HMS (harmony memory size) harmonies generated randomly. In addition, the parameters of HS, that is, the harmony memory consideration rate (HMCR), the pitch adjusting rate (PAR), and the distance bandwidth (bw), are given in advance.

Step 2. Improvise a new harmony from the current HM. And the details of the procedure can be given in Algorithm 1.

(1) for $i = 1$ to $n$ do
(2)  // memory consideration
(3)  if $rand(0, 1) \le \mathrm{HMCR}$ then
(4)   Choose a harmony $\mathbf{x}^j$ from HM randomly, $j \in \{1, 2, \ldots, \mathrm{HMS}\}$
(5)   $x_i' = x_i^j$ // a new harmony
    // pitch adjustment
(6)   if $rand(0, 1) \le \mathrm{PAR}$ then
(7)    $x_i' = x_i' \pm rand(0, 1) \times bw$ // $bw$ is a constant to be predetermined
(8)   end if
(9)   else
(10)    $x_i' = LB_i + rand(0, 1) \times (UB_i - LB_i)$ // random selection
(11)  end if
(12) end for
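To make the improvisation step concrete, a minimal Python sketch of Algorithm 1 is given below; the function name, the list-of-lists representation of the HM, and the clamping of adjusted pitches to their bounds are illustrative assumptions rather than part of the original specification in [26].

```python
import random

def improvise_hs(hm, lb, ub, hmcr, par, bw):
    """One HS improvisation (Algorithm 1); hm is a list of harmony vectors."""
    n = len(hm[0])
    new = [0.0] * n
    for i in range(n):
        if random.random() <= hmcr:
            # memory consideration: take pitch i from a randomly chosen harmony
            new[i] = random.choice(hm)[i]
            if random.random() <= par:
                # pitch adjustment: perturb the pitch by at most bw
                new[i] += random.uniform(-1.0, 1.0) * bw
                new[i] = min(max(new[i], lb[i]), ub[i])
        else:
            # random selection within the variable's bounds
            new[i] = lb[i] + random.random() * (ub[i] - lb[i])
    return new
```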

Step 3. If the new generated harmony is better than the worst one in HM, then replace the worst harmony with the new one; otherwise, go to the next step.

Step 4. If a stopping criterion is not satisfied, go to Step 2.

In addition, more detailed information about the original HS can be found in [26].

3. Global-Best Harmony Search Algorithm

To further improve the convergence performance of HS and overcome some shortcomings of HS, a new variant of HS, called GHS, was proposed by Omran and Mahdavi [35].

First, GHS dynamically updates the parameter PAR according to the following equation:

$$\mathrm{PAR}(gn) = \mathrm{PAR}_{\min} + \frac{\mathrm{PAR}_{\max} - \mathrm{PAR}_{\min}}{NI} \times gn, \quad (3)$$

where $\mathrm{PAR}(gn)$ represents the pitch adjusting rate at generation $gn$, $\mathrm{PAR}_{\min}$ and $\mathrm{PAR}_{\max}$ are the minimum and maximum adjusting rates, respectively, $gn$ is the iterative variable, and $NI$ is the number of improvisations.

Second, GHS modifies the pitch adjustment step of HS in order to take advantage of the guiding information of the best harmony in the HM. Furthermore, GHS excludes the parameter bw at this phase.

Last, from the above mentioned explanation, it can be concluded that GHS has the same steps as HS with the exception that the process of improvising a new harmony is modified as shown in Algorithm 2.

(1) for $i = 1$ to $n$ do
(2)  // memory consideration
(3)  if $rand(0, 1) \le \mathrm{HMCR}$ then
(4)   Choose a harmony $\mathbf{x}^j$ from HM randomly, $j \in \{1, 2, \ldots, \mathrm{HMS}\}$
(5)   $x_i' = x_i^j$
    // pitch adjustment
(6)   if $rand(0, 1) \le \mathrm{PAR}(gn)$ then
(7)    Generate a random integer number $k \in \{1, 2, \ldots, n\}$
     // $best$ represents the index of the best harmony in the HM
(8)    $x_i' = x_k^{best}$
(9)   end if
(10)  else
(11)   $x_i' = LB_i + rand(0, 1) \times (UB_i - LB_i)$ // random selection
(12)  end if
(13) end for
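A minimal Python sketch of Algorithm 2 follows, with the linear PAR schedule of (3) included; the `fitness` callable, the function names, and the HM representation are assumptions made for illustration.

```python
import random

def par_at(gn, ni, par_min, par_max):
    """Linearly increasing PAR schedule of (3)."""
    return par_min + (par_max - par_min) * gn / ni

def improvise_ghs(hm, fitness, lb, ub, hmcr, par):
    """One GHS improvisation (Algorithm 2); fitness maps a harmony to a score."""
    n = len(hm[0])
    best = max(range(len(hm)), key=lambda j: fitness(hm[j]))
    new = [0.0] * n
    for i in range(n):
        if random.random() <= hmcr:
            # memory consideration
            new[i] = random.choice(hm)[i]
            if random.random() <= par:
                # pitch adjustment: copy a randomly chosen pitch of the best harmony
                k = random.randrange(n)
                new[i] = hm[best][k]
        else:
            # random selection within the bounds
            new[i] = lb[i] + random.random() * (ub[i] - lb[i])
    return new
```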

Due to the guiding information of the best harmony in the HM, GHS outperforms both HS and IHS [36]. What is more, Omran and Mahdavi employed GHS to solve integer programming problems. The GHS developed for real search spaces can be utilized to solve integer programming problems by rounding off the real optimal values to the nearest integers. It should be noted that a penalty function method is also used here to make GHS solve the knapsack problems. Clearly, the solution search space for 0-1 knapsack problems should be $\{0, 1\}^n$.

4. A Novel Discrete Global-Best Harmony Search Algorithm

The original HS is good at identifying high-performance regions of the solution space in a reasonable time but poor at performing local search for numerical optimization problems [37]. Namely, there is an imbalance between the exploration and the exploitation of HS. Furthermore, HS was designed for continuous spaces and cannot be directly used to solve discrete combinatorial optimization problems.

In order to overcome the drawbacks of HS, a novel discrete global-best harmony search (DGHS for short) algorithm is particularly designed for binary optimization problems in this paper.

Owing to the better performance of GHS, some modifications to GHS are introduced to further enhance its convergence performance. A novel binary-coded GHS, a two-phase repair operator, and a new greedy selection mechanism are then integrated into DGHS. They are described in detail as follows.

4.1. Initialization in DGHS

The initial population in DGHS is generated randomly using a Bernoulli process. Specifically, for each decision variable of an initial harmony vector, a number within $[0, 1]$ is generated randomly. If the number is less than 0.5, the corresponding variable takes 0; otherwise it takes 1. In this way, a set of HMS harmonies is generated randomly.

In addition, another harmony vector is generated based on a greedy operation. The greedy operation is based on the idea that items with a higher profit density ratio should be packed first. The profit density ratio can be calculated by the following equation:

$$r_j = \frac{p_j}{w_j}, \quad j = 1, 2, \ldots, n. \quad (4)$$

The items are first sorted by $r_j$. Then we add the items with higher values of $r_j$ as long as the total weight does not exceed the capacity $C$ of the knapsack. Thus, we can get a greedy harmony vector. And if this vector is better than the worst one in the previously initialized HM, the worst one is replaced with it.
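A minimal Python sketch of this greedy construction is given below; packing items in decreasing $r_j$ order and skipping any item that no longer fits is one natural reading of the rule above, and the function name is an illustrative choice.

```python
def greedy_harmony(profits, weights, capacity):
    """Build one harmony by packing items in decreasing profit density p_j / w_j."""
    n = len(profits)
    order = sorted(range(n), key=lambda j: profits[j] / weights[j], reverse=True)
    x, total = [0] * n, 0
    for j in order:
        if total + weights[j] <= capacity:  # pack the item only if it still fits
            x[j] = 1
            total += weights[j]
    return x
```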

4.2. Dynamically Updating of the Parameters

First, to our knowledge, the control parameters HMCR and PAR play an important role in the standard HS. More specifically, setting HMCR and PAR to constants may have adverse effects on the performance of HS, which is the idea behind designing varying parameters. For HS variants guided by the best harmony (individual), a larger HMCR helps accelerate convergence at the beginning of the search, while a smaller HMCR helps the algorithm get out of local minima at the end of the search. Like HMCR, a varying parameter PAR is also considered in this paper in order to further balance the exploration and exploitation of HS variants.

Second, two dynamic updating schemes for the parameters HMCR and PAR are designed in order to improve the performance of GHS. The two schemes can be described as follows:

$$\mathrm{HMCR}(\mathrm{FEs}) = \mathrm{HMCR}_{\max} - \frac{\mathrm{HMCR}_{\max} - \mathrm{HMCR}_{\min}}{\mathrm{maxFEs}} \times \mathrm{FEs}, \quad (5a)$$

$$\mathrm{PAR}(\mathrm{FEs}) = \mathrm{PAR}_{\max} - \frac{\mathrm{PAR}_{\max} - \mathrm{PAR}_{\min}}{\mathrm{maxFEs}} \times \mathrm{FEs}, \quad (5b)$$

where $\mathrm{HMCR}_{\min}$ and $\mathrm{HMCR}_{\max}$ ($\mathrm{PAR}_{\min}$ and $\mathrm{PAR}_{\max}$) represent the lower and upper bounds of HMCR (PAR), respectively, FEs is the current number of function evaluations, and maxFEs is the maximum number of function evaluations. The other parameters are the same as those in (3).

Last but not least, we found through extensive experiments that the parameter PAR works better in DGHS when kept at a constant value. It should be noticed that PAR is therefore set to a constant, that is, 0.75, in the later study.
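As a small sketch, the linearly decreasing HMCR of (5a) can be computed as follows; the function and parameter names are illustrative.

```python
def hmcr_at(fes, max_fes, hmcr_min, hmcr_max):
    """Linearly decreasing HMCR of (5a): large early to speed up convergence,
    small late to help escape local minima."""
    return hmcr_max - (hmcr_max - hmcr_min) * fes / max_fes
```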

4.3. A Novel Scheme of Improvising a New Harmony

At first, musicians most likely choose a perfect state of a harmony from their memory, or harmony memory, during the process of improvising a new harmony. Next, they may select a pitch from the current harmony memory randomly and then perform a fine-tuning operation, that is, pitch adjustment, on the chosen pitch to improve the effectiveness of the music. In addition, as far as the knapsack problem is concerned, the states of a pitch include only zero and one; that is, any pitch takes a value in $\{0, 1\}$. So the discrete genetic mutation used in [24] is suitable for pitch adjustment. Based on this intuitive idea and the aforementioned explanation, a novel scheme of improvising a new harmony is given in Algorithm 3.

(1) Record the best harmony in the HM, and let its index be represented by $best$
(2) for $i = 1$ to $n$ do
(3)  // memory consideration
(4)  if $rand(0, 1) \le \mathrm{HMCR}(\mathrm{FEs})$ then
(5)   $x_i' = x_i^{best}$
(6)  else
(7)   Generate a random integer number $j \in \{1, 2, \ldots, \mathrm{HMS}\}$
(8)   $x_i' = x_i^j$
    // pitch adjustment for the pitch chosen randomly
(9)   if $rand(0, 1) \le \mathrm{PAR}$ then
(10)   $x_i' = 1 - x_i'$ // discrete genetic mutation
(11)  end if
(12)  end if
(13) end for
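The following minimal Python sketch mirrors Algorithm 3; the function name and the list-of-lists HM representation are illustrative assumptions.

```python
import random

def improvise_dghs(hm, best, hmcr, par):
    """One binary DGHS improvisation (Algorithm 3); best is the index of the
    best harmony in hm."""
    n = len(hm[0])
    new = [0] * n
    for i in range(n):
        if random.random() <= hmcr:
            # memory consideration guided by the best harmony
            new[i] = hm[best][i]
        else:
            # take the pitch from a randomly chosen harmony
            j = random.randrange(len(hm))
            new[i] = hm[j][i]
            if random.random() <= par:
                # discrete genetic mutation: flip the bit
                new[i] = 1 - new[i]
    return new
```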

4.4. Two-Phase Repair Operator

A major drawback of the penalty function approach is that it needs a preset, very large constant, which is problem-dependent. In order to avoid tuning the penalty coefficient, a repair operator is introduced. For DGHS, a newly generated harmony vector needs to be repaired in two cases. One is that the harmony vector violates the capacity constraint. The other is that the knapsack corresponding to the new harmony vector can still pack additional items without exceeding its capacity [11, 13]. Hence, the repair operator consists of two phases. The first phase, called "DROP", is responsible for repairing a harmony vector that violates the constraint. The second phase, named "ADD", takes charge of optimizing a newly generated harmony vector whose total weight is less than the capacity of the knapsack. It is worth mentioning that the "DROP" phase should be performed first, and then the "ADD" phase is carried out. This is because a harmony vector becomes feasible after the "DROP" phase, but its total weight may still be less than the capacity $C$; thus, the "ADD" phase is then performed on the harmony vector to further improve the solution quality. The detailed pseudocode for the repair operator is shown in Algorithm 4.

(1) Let $\mathbf{x}'$ represent a newly generated harmony vector
  // Calculate the total weight of the knapsack according to $\mathbf{x}'$
(2) $W = \sum_{j=1}^{n} w_j x_j'$ // $W$ denotes the total weight
(3) if $W > C$ then
(4) // The "DROP" phase
(5) for $j = 1$ to $n$ do
(6)   $r_j = p_j / w_j$ // Compute the profit density value of the items loaded
(7) end for
(8) Sort the items in increasing order of $r_j$, let $R$ represent the sorted result,
   and let $s_k$ denote the original index of each $R_k$
(9) for $k = 1$ to $n$ do
(10)  if $x_{s_k}' = 0$ then
(11)   Continue
(12)  end if
(13)  $x_{s_k}' = 0$ // Unload the $s_k$th item from the knapsack
(14)  $W = W - w_{s_k}$
(15)  if $W \le C$ then
(16)   Break // Terminate the "DROP" phase of the repair process
(17)  end if
(18) end for
(19) end if
(20)
(21) if $W < C$ then
(22) // The "ADD" phase
(23) Calculate the profit density ratio $r_j$ according to (4)
(24) Sort all the items in decreasing order of $r_j$, let $R$ represent the sorted result,
  and let $s_k$ denote the original index of each $R_k$
(25) for $k = 1$ to $n$ do
(26)  if $x_{s_k}' = 0$ then
(27)   Let $\mathbf{x}''$ represent a temporary harmony vector, and set $\mathbf{x}'' = \mathbf{x}'$
(28)   Set $x_{s_k}'' = 1$ // Try to load the $s_k$th item
(29)   if $\sum_{j=1}^{n} w_j x_j'' \le C$ then
(30)     $\mathbf{x}' = \mathbf{x}''$ // Load the $s_k$th item into the knapsack
(31)   end if
(32)   end if
(33) end for
(34) end if
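A compact Python sketch of the two-phase repair operator follows; it assumes the same `profits`, `weights`, and `capacity` arrays as the earlier sketches and modifies the harmony vector in place.

```python
def repair(x, profits, weights, capacity):
    """Two-phase repair (Algorithm 4): DROP items until feasible, then ADD greedily."""
    n = len(x)
    total = sum(weights[j] for j in range(n) if x[j])
    if total > capacity:
        # DROP phase: unload loaded items in increasing order of profit density
        for j in sorted((j for j in range(n) if x[j]),
                        key=lambda j: profits[j] / weights[j]):
            x[j] = 0
            total -= weights[j]
            if total <= capacity:
                break
    if total < capacity:
        # ADD phase: try unloaded items in decreasing order of profit density
        for j in sorted((j for j in range(n) if not x[j]),
                        key=lambda j: profits[j] / weights[j], reverse=True):
            if total + weights[j] <= capacity:
                x[j] = 1
                total += weights[j]
    return x
```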

4.5. A New Selection Mechanism

In order to avoid the harmonies being clustered around the best harmony, we employ a new selection mechanism, in which a newly generated harmony vector $\mathbf{x}'$ is first compared with the best harmony $\mathbf{x}^{best}$: if $\mathbf{x}'$ is better than $\mathbf{x}^{best}$, replace $\mathbf{x}^{best}$ with $\mathbf{x}'$; otherwise, $\mathbf{x}'$ is compared with the worst harmony $\mathbf{x}^{worst}$ in the HM, and a greedy selection is applied between $\mathbf{x}'$ and $\mathbf{x}^{worst}$. In this way, the number of harmonies around the best harmony stays small at the early stage of evolution, so the diversity of the harmony memory is better preserved. As a consequence, DGHS not only speeds up convergence but also avoids being trapped in a local optimum.
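In a Python sketch, assuming a `value` callable that returns the profit sum of a harmony, the selection can read as follows.

```python
def select(hm, new, value):
    """Selection of Section 4.5: try to replace the best harmony first, then the worst."""
    best = max(range(len(hm)), key=lambda k: value(hm[k]))
    worst = min(range(len(hm)), key=lambda k: value(hm[k]))
    if value(new) >= value(hm[best]):
        hm[best] = new       # the new harmony becomes the new best
    elif value(new) >= value(hm[worst]):
        hm[worst] = new      # otherwise it may still displace the worst
```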

4.6. The Proposed Algorithm

According to the analysis and modifications mentioned above, an initialization of HM based on greedy operation, a novel scheme of improvising a new harmony with the direction information of the best harmony, and a repair operator with greedy strategy make up the proposed DGHS designed for binary knapsack problems. The pseudocode of DGHS is given in Algorithm 5.

(1) Set the harmony memory size HMS, the number of maximum improvisations
   maxFEs, and the other control parameters
(2) Initialize the harmony memory HM, perform Algorithm 4 to repair each
   harmony vector of HM, and then evaluate their objective function values
(3) Set FEs = 0 // FEs represents the iterative variable
(4) while FEs < maxFEs do
(5) Record the position of the best harmony in the HM, and let its index
  be represented by $best$; likewise, $worst$ denotes the index of the worst
  harmony in the current HM
(6) Calculate the parameter HMCR(FEs) according to (5a)
(7) Perform Algorithm 3 to produce a new harmony vector $\mathbf{x}'$
(8) Perform Algorithm 4 to repair the new harmony vector $\mathbf{x}'$
  // Perform the new greedy selection scheme
(9) if $\mathbf{x}'$ is better than or equal to $\mathbf{x}^{best}$ then
(10)   Replace $\mathbf{x}^{best}$ with $\mathbf{x}'$
(11) else if $\mathbf{x}'$ is better than or equal to $\mathbf{x}^{worst}$ then
(12)   Substitute $\mathbf{x}^{worst}$ with $\mathbf{x}'$
(13) end if
(14) Memorize the best harmony achieved so far
(15) Set FEs = FEs + 1
(16) end while
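Putting the pieces together, the following hedged Python sketch of Algorithm 5 reuses the `greedy_harmony`, `improvise_dghs`, `repair`, and `select` sketches above; the HMCR bounds shown are illustrative placeholders rather than the paper's tuned values.

```python
import random

def dghs(profits, weights, capacity, hms=5, max_fes=1000,
         hmcr_min=0.6, hmcr_max=0.9, par=0.75):
    """Sketch of the DGHS main loop (Algorithm 5)."""
    def value(x):  # profit sum of a harmony vector
        return sum(p for p, bit in zip(profits, x) if bit)

    n = len(profits)
    # random Bernoulli initialization, each harmony repaired by Algorithm 4
    hm = [repair([random.randint(0, 1) for _ in range(n)],
                 profits, weights, capacity) for _ in range(hms)]
    # greedy seed replaces the worst initial harmony if it is better (Section 4.1)
    seed = greedy_harmony(profits, weights, capacity)
    worst = min(range(hms), key=lambda k: value(hm[k]))
    if value(seed) > value(hm[worst]):
        hm[worst] = seed
    for fes in range(max_fes):
        best = max(range(hms), key=lambda k: value(hm[k]))
        hmcr = hmcr_max - (hmcr_max - hmcr_min) * fes / max_fes  # (5a)
        new = repair(improvise_dghs(hm, best, hmcr, par),
                     profits, weights, capacity)
        select(hm, new, value)  # greedy selection of Section 4.5
    return max(hm, key=value)
```

Called as `dghs(profits, weights, capacity)`, the sketch returns the best harmony found, whose profit sum is the objective value.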

5. Experimental Results and Analysis

5.1. Benchmark Instances and Parameter Settings

In order to evaluate the performance of DGHS, twelve benchmark instances chosen from [11, 13–15, 24] are employed to validate its performance. The detailed information of the twelve test problems is listed in Tables 1 and 2. In addition, eight randomly generated test instances of large scale are employed to further verify the validity of DGHS, following the rule used in [24]. Concretely, for each $n$ (the dimension, or the number of items), the weights and profits are randomly generated within the ranges specified in [24], and the detailed settings of the knapsack capacities are given in Table 3. In short, twenty test instances are employed to test the performance of DGHS thoroughly.

Subsequently, DGHS is compared with GHS [35] and NGHS [24]. To make a fair comparison, the number of maximum improvisations, namely, the maximum number of function evaluations (maxFEs), is set to 1000 for all the approaches on the twelve benchmark instances listed in Table 1. For the other eight high-dimensional test instances, the settings of maxFEs are given in Table 3. In addition, the other specific parameters of the algorithms are given as follows.
(i) For GHS, the harmony memory size HMS is set to 5, and the remaining control parameters, HMCR, PAR_min, and PAR_max, are set the same as those used in [35]. In order to handle the constraints, the penalty coefficient is set to the value used in [24].
(ii) For NGHS, the harmony memory size, the genetic mutation probability, and the penalty coefficient are the same as those utilized in [24].
(iii) For DGHS, the harmony memory size HMS is also set to 5, and the other parameters, HMCR_min, HMCR_max, and PAR, are set as described in Section 4.2.

For all experiments, we use the aforementioned parameter settings unless a change is mentioned. Furthermore, in our experiments, each test problem is run over 50 independent times.

5.2. Comparison among GHS, NGHS, and DGHS

In this subsection, a variety of 0-1 knapsack problems of different scales are considered to investigate the performance of DGHS, which is compared with GHS and NGHS.

First, experiments on twelve knapsack problems of small and medium size are conducted. The corresponding results are presented in Table 4 in terms of the best, worst, median, mean, standard deviation (Std.), and success rate (SR) of the solutions achieved over the 50 independent runs of each algorithm.

From Table 4, it can be seen that DGHS can find the global optimal values on the ten benchmark instances (Kp1–Kp10) with SR = 100%. NGHS can find the global optima on five test instances, that is, Kp1, Kp3, Kp4, Kp7, and Kp9. GHS can find the global optima on only three knapsack problems, that is, Kp3, Kp4, and Kp9. On the medium-scale knapsack problems Kp11 and Kp12, the success rate (SR) obtained by DGHS is 82% (41/50) for the test instance Kp11 and 32% (16/50) for the benchmark instance Kp12. Both GHS and NGHS fail to find the global optima on these two instances. However, NGHS performs better than GHS on both in terms of the best, worst, median, mean, and standard deviation of the solutions. For the sake of convenience, later comparisons between NGHS and DGHS are conducted with larger values of maxFEs. On the knapsack problem Kp11, DGHS can find the global optimum with SR = 100% when maxFEs is enlarged, but NGHS still fails to find the global optimal value. On the test problem Kp12, the success rate (SR) obtained by DGHS increases accordingly with the increase of maxFEs. In particular, DGHS can find the best known value with SR = 100% when the number of maximum improvisations is large enough, yet NGHS is still not able to find the best known solution. Although NGHS finds the best known value once out of fifty runs under one of the enlarged settings of maxFEs, its success rate (SR) remains zero under the other settings, which indicates that the performance of NGHS is unstable. In a word, DGHS is the best among the three algorithms in terms of robustness and convergence performance.

Second, eight high-dimensional knapsack problems are generated randomly to further verify the overall performance of DGHS. The statistical results obtained over 50 independent runs of the three algorithms are given in Table 5. Moreover, the t-test results for all the high-dimensional test instances are also given in Table 5, in which "1" indicates that the proposed DGHS is significantly better than its competitor (GHS or NGHS) at the given level of significance.

As can be seen from Table 5, GHS is clearly inferior to NGHS on all the high-dimensional test instances. From the t-test results in Table 5, it is observed that DGHS significantly outperforms both GHS and NGHS on all the test problems. It is worth mentioning that the worst profit sum obtained by DGHS is even better than the best profit sum found by GHS and NGHS on all the test instances. In addition, the standard deviation of the profit sum obtained by DGHS is very small on each test problem, which indicates that DGHS is robust. All of this indicates that DGHS has an overwhelming advantage over the other two algorithms in solving large-scale knapsack problems.

5.3. Further Comparison

Recently, El-Abd [30] proposed an improved global-best harmony search algorithm, named IGHS, which achieves better performance in solving real-parameter optimization problems. In order to further test the performance of DGHS, DGHS is compared with two variants of IGHS for solving 0-1 knapsack problems. For convenience, the version of IGHS with a penalty function is simply called IGHS, while the version of IGHS with a greedy initialization together with the two-phase repair operator is called IGHS-II. That is, IGHS handles the knapsack problems by using a penalty function scheme as recommended in [24], whereas IGHS-II solves them by integrating the two-phase repair operator and the greedy initialization scheme, which are the same as those employed in DGHS.

For a fair comparison, the maximum number of function evaluations (maxFEs) is set to 1000 for all experiments. In addition, the other parameters of the two variants (IGHS and IGHS-II) are the same as those of the original IGHS [30]. Meanwhile, the other specific parameters of DGHS are the same as the aforementioned. Ten knapsack problems, that is, Kp1–Kp10, are employed in this experiment. Subsequently, each compared algorithm is run 50 times independently on each case, and all experimental results are listed in Table 6.

From Table 6, it can be seen that IGHS is capable of finding the global optimum of each knapsack problem (Kp1–Kp10) in terms of SR (success rate). In particular, the SR obtained by IGHS on each of Kp3, Kp4, and Kp9 is over 40%. Yet IGHS-II outperforms IGHS on nine benchmarks, that is, Kp1, Kp2, Kp3, Kp5, Kp6, Kp7, Kp8, Kp9, and Kp10. What is more, the SR obtained by IGHS-II on Kp4 is very close to that of IGHS. All of this indicates that IGHS works well on knapsack problems and that the two-phase repair operator is effective. It is worth noting that DGHS achieves the global optimum of each of the ten knapsack problems with a 100% success rate, which shows that DGHS is clearly superior to IGHS and better than, or at least similar to, IGHS-II on all benchmarks. Since the two-phase repair operator and the greedy initialization scheme are used in both IGHS-II and DGHS, the improvisation scheme of DGHS itself is also effective; that is, DGHS works better than IGHS for solving 0-1 knapsack problems. According to the overall rank in Table 6, DGHS takes first place when compared against IGHS and IGHS-II.

6. Conclusion

In this work, a novel discrete global-best harmony search algorithm, called DGHS, is proposed by introducing several modifications: an initialization based on a greedy scheme, used to improve the solution quality of the initial harmony memory; a novel binary-coded global harmony search scheme based on an intuitive cognition of the improvisation process, which easily performs discrete operations and effectively exploits the guiding information of the best harmony; and a two-phase repair operator, used to repair infeasible harmony vectors and to further improve feasible solutions. Experiments on twenty knapsack problems are conducted, and the results reveal that DGHS outperforms both GHS and NGHS in most cases. Thus, the proposed DGHS can be considered a competitive alternative for solving 0-1 knapsack problems of different scales.

In the future, we intend to apply DGHS to some real-life problems, such as warehouse location, production planning, and portfolio optimization.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (Grants nos. 61064012, 61164003, and 61364026), the Youth Science Foundation of Lanzhou Jiaotong University (Grant no. 2012029), and the Science and Technology Foundation of Lanzhou Jiaotong University (Grant no. ZC2012005). The authors also wish to thank the referees for their constructive comments and suggestions.