Abstract

Cuckoo search (CS) is a robust swarm intelligence method based on the brood parasitism of some cuckoo species. In this paper, an improved hybrid encoding cuckoo search algorithm (ICS) with a greedy strategy is put forward for solving 0-1 knapsack problems. First, to solve binary optimization problems with ICS, the cuckoo search over a continuous space is transformed, via individual hybrid encoding, into a synchronous evolutionary search over the discrete space. Subsequently, the concept of a confidence interval (CI) is introduced; on this basis, a new position-updating rule is designed and genetic mutation with a small probability is added. The former enables the population to move towards the global best solution rapidly in every generation, and the latter effectively prevents ICS from being trapped in a local optimum. Furthermore, a greedy transform method is used to repair infeasible solutions and optimize feasible ones. Experiments on a large number of KP instances demonstrate the effectiveness of the proposed algorithm and its ability to obtain high-quality solutions.

1. Introduction

Combinatorial optimization plays a very important role in operational research, discrete mathematics, and computer science. The knapsack problem is one of the classical combinatorial optimization problems that are difficult to solve, and it has been extensively studied since the pioneering work of Dantzig [1]. Generally speaking, if the methods used to solve such problems are classified by their nature, they can be divided into two categories [2]: exact methods and heuristic methods. Exact methods, such as enumeration [3, 4], branch and bound [5], and dynamic programming [6], give exact solutions; nevertheless, in the worst case they may require a long time to reach a satisfactory solution, and sometimes the running time increases exponentially with the size of the instance.

Recently, nature-inspired metaheuristic algorithms have proved powerful and efficient in solving diverse optimization problems, including combinatorial problems. Metaheuristic algorithms include the genetic algorithm [7], particle swarm optimization [8], ant colony optimization [9], the artificial bee colony algorithm [10], the differential evolution algorithm [11], the harmony search algorithm [12, 13], and the krill herd algorithm [14–16].

As mentioned above, metaheuristic methods have proven to be an effective means of coping with combinatorial optimization problems, including the 0-1 knapsack problem. Unlike deterministic search approaches, which suffer from the drawback of being unavoidably trapped in local minima, metaheuristic methods have the key advantage of delivering satisfactory solutions in a reasonable time. For this reason, it is worthwhile to develop new nature-inspired methods for the 0-1 knapsack problem, especially for intractable and complex large-scale instances that are closer to practical applications.

Cuckoo search (CS) is a population-driven, nature-inspired metaheuristic algorithm originally proposed by Yang and Deb in 2009 and 2010 [17, 18]; it has shown promising efficiency for global optimization and is becoming a new research hotspot in evolutionary computation. CS is inspired by the brood parasitism of some cuckoo species, which lay their eggs in the nests of other host birds. Each egg (nest or cuckoo) represents a solution, and a cuckoo egg represents a new solution. The aim is to use the new and potentially better solutions (cuckoos) to replace not-so-good solutions in the nests [19]. Like other metaheuristic algorithms, CS uses no gradient information during the search, so it can handle nonconvex, nonlinear, nondifferentiable, and multimodal problems. Furthermore, there is essentially only a single parameter in CS, and thus it is potentially more generic and adaptable to a wider class of optimization problems [19]. In addition, Yang and Deb showed that CS outperforms particle swarm optimization and genetic algorithms on some real-world optimization problems [18, 20]. By virtue of its simplicity, robustness, and other merits, books and articles on the subject have proliferated recently; CS has received more and more attention and has been applied in a large number of areas [20–25]. More details can be found in [26].

As far as we know, most previous studies on CS have focused on optimization problems over continuous or discrete spaces, and only a few scholars have been concerned with binary problems. In 2011, Layeb [25] developed a variant of cuckoo search combined with a quantum-based approach to solve knapsack problems efficiently. Subsequently, Gherboudj et al. [24] used a purely binary cuckoo search to tackle knapsack problems. In summary, studies on binary-coded CS have only just begun, and its performance needs further improvement so as to expand its field of application.

Given the above considerations, an improved CS algorithm (ICS) based on the CS framework combined with a novel greedy strategy is put forward to solve the 0-1 knapsack problem. Compared with the original CS, the outstanding characteristics of Lévy flights used in the original CS, such as stability and power-law asymptotics, are retained in ICS. Meanwhile, the operation in which a fraction of the worse nests is abandoned with a fixed probability and new solutions are built randomly is eliminated, and a novel operator is introduced in which the search range is adjusted with an adaptive step size and genetic mutation is embedded. We assess the performance of the proposed algorithm in terms of solution quality, convergence rate, and robustness by testing twenty knapsack instances of different scales. The simulation results demonstrate not only that the proposed algorithm is workable and robust but also that it retains superior approximation capabilities even in high-dimensional spaces.

The remainder of this paper is structured as follows. Section 2 describes the mathematical model of the 0-1 knapsack problem. Section 3 then details the improvement strategies and the motivation behind them, and describes the greedy transform method. Subsequently, Section 4 presents the results of comparative experiments. Finally, conclusions and directions for further research are given in Section 5.

2. Knapsack Problems

The knapsack problem (KP) is a typical optimization problem with high theoretical and practical value. Many practical applications can be formulated as a KP, such as cutting stock problems, portfolio optimization, scheduling problems, and cryptography [27]. The problem has been proven to be NP-hard; hence it cannot be solved in polynomial time unless P = NP [1]. The classical 0-1 knapsack problem can be defined as follows.

Let $N = \{1, 2, \ldots, n\}$ be a set of $n$ items, and let $w_j$ and $p_j$ represent the weight and profit of item $j$, respectively. Here, $w_j$, $p_j$, and the capacity $C$ are all positive integers. The problem is to choose a subset of the items whose total weight does not exceed the given capacity $C$ while the total profit is maximized. Without loss of generality, it may be assumed that the weight of each item is smaller than the capacity $C$, so that each item fits into the knapsack. We use the binary decision variable $x_j$, with $x_j = 1$ if item $j$ is selected and $x_j = 0$ otherwise. The problem can be formulated as follows:

\[
\max \; f(X) = \sum_{j=1}^{n} p_j x_j
\quad \text{subject to} \quad
\sum_{j=1}^{n} w_j x_j \le C, \qquad x_j \in \{0, 1\}, \; j = 1, \ldots, n.
\]
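As a concrete illustration of the model above, the short Python sketch below evaluates a candidate selection: it checks the capacity constraint and, if the selection is feasible, returns its total profit. The five-item instance is purely illustrative and is not one of the paper's test problems.

    def knapsack_profit(x, profits, weights, capacity):
        # Return the total profit of the binary selection x, or None if it violates the capacity.
        total_weight = sum(w for w, xj in zip(weights, x) if xj == 1)
        if total_weight > capacity:
            return None
        return sum(p for p, xj in zip(profits, x) if xj == 1)

    # Illustrative 5-item instance (not taken from the paper's test problems).
    profits = [10, 5, 15, 7, 6]
    weights = [2, 3, 5, 7, 1]
    capacity = 10
    print(knapsack_profit([1, 0, 1, 0, 1], profits, weights, capacity))   # 31 (weight 8 <= 10)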

3. The Improved Cuckoo Search Algorithm (ICS)

Although the original CS algorithm possesses some excellent features, such as structural simplicity and an ability to escape from local optima more easily than several traditional optimization approaches, it still suffers from a slow convergence rate and low accuracy. In other words, the basic algorithm does not adequately exploit the potential of CS. Therefore, in order to improve the convergence rate and precision of CS, we design a series of appropriate strategies and propose a more efficient algorithm (ICS).

ICS introduces the following five improving strategies:
(1) using an adaptive step size to adjust the search range;
(2) using a confidence interval to enhance the local search;
(3) using a genetic mutation operation with a low probability to prevent ICS from being trapped in a local optimum;
(4) using hybrid encoding to represent each individual in the population;
(5) using the greedy transform method to repair infeasible solutions and optimize feasible solutions.

More detailed descriptions of these strategies are given in the following subsections.

3.1. Hybrid Encoding

The standard CS algorithm operates in a continuous space; consequently, it cannot be used directly for optimization in a binary space. In other words, the operations of the original CS algorithm are closed over the set of real numbers but do not have the closure property over the binary set {0, 1}. Given the wide application of binary optimization problems in real-world engineering, the main objective of the ICS algorithm is to deal with binary optimization problems. One of the most significant features of ICS is that it adopts a hybrid coding scheme [28] in which each cuckoo individual is represented by a two-tuple.

Definition 1 (auxiliary search space). An auxiliary search space Y is a subspace of the n-dimensional real space R^n. The auxiliary search space Y corresponds to the binary solution space X = {0, 1}^n, and Y and X are two parallel search spaces. The search in Y is called active search, while the search in X is called passive search.

Definition 2 (hybrid encoding representation). Each cuckoo individual i in the population is represented by the two-tuple (Y_i, X_i), where Y_i = (y_{i1}, y_{i2}, ..., y_{in}) works in the auxiliary search space and X_i = (x_{i1}, x_{i2}, ..., x_{in}) performs in the solution space accordingly, and n is the dimensionality of the solution. Further, the sigmoid function [26] is adopted to transform the real-coded vector Y_i into the binary vector X_i, mapping each component y_{ij} to a binary component x_{ij} through sig(y_{ij}), where sig(y) = 1 / (1 + e^{-y}) is the sigmoid function.
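To make the transform concrete, the sketch below maps a real-coded vector to a binary vector through the sigmoid function. The stochastic rule used here (component j becomes 1 with probability sig(y_ij)) is a common choice for this kind of hybrid encoding and should be read as an assumption; deterministic thresholding at 0.5 would be an equally simple alternative.

    import math
    import random

    def sigmoid(y):
        return 1.0 / (1.0 + math.exp(-y))

    def real_to_binary(Y):
        # Assumed transform: x_ij = 1 with probability sig(y_ij), else 0.
        return [1 if random.random() < sigmoid(yj) else 0 for yj in Y]

    print(real_to_binary([2.0, -1.5, 0.0, 3.2]))   # e.g. [1, 0, 1, 1]; the output is stochastic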

3.2. Greedy Transform Method

Many optimization problems are constrained; accordingly, constraint handling is crucial to the efficient design of metaheuristics. Constraint handling strategies, which mainly act on the representation of solutions or on the objective function, can be classified into reject strategies, penalizing strategies, repairing strategies, decoding strategies, and preserving strategies [29]. Repairing strategies, most of which are greedy heuristics, can be applied to the knapsack problem [29]. However, the traditional greedy strategy has some disadvantages when solving the knapsack problem [30]. Truong proposed a repair operator that depends on both a greedy strategy and random selection [31]. Although this method can correct infeasible solutions, the random selection reduces efficiency because it is not greedy enough to improve the convergence speed and accuracy. In this paper, a novel greedy transform method (GTM) is introduced to address this problem [32]. It can effectively repair infeasible solutions and optimize feasible solutions.

The GTM consists of two stages. The first stage (called RS) examines each variable in descending order of the value-to-weight ratio p_j / w_j and confirms the variables whose value is one as long as feasibility is not violated. The second stage (called OS) changes the remaining variables from zero to one until feasibility would be violated. The purpose of the RS stage is to repair an abnormal (infeasible) chromosome so that it becomes a normal (feasible) one, while the OS stage seeks the best chromosome coding. According to the mathematical model in Section 2, the pseudocode of the GTM is described in Algorithm 1.

Input: a binary vector X = (x_1, x_2, ..., x_n), weights w_j, profits p_j, capacity C
Step 1: sort
   The items are sorted according to the value-to-weight ratio p_j / w_j in
   descending order, so that a queue j_1, j_2, ..., j_n of length n is formed with
   p_{j_1} / w_{j_1} >= p_{j_2} / w_{j_2} >= ... >= p_{j_n} / w_{j_n}.
Step 2: repair stage
   k = 1; TempC = 0;
   while (k <= n)
      if x_{j_k} = 1 then
         if TempC + w_{j_k} <= C then
            TempC = TempC + w_{j_k};
         else
            x_{j_k} = 0;
         end if
      end if
      k = k + 1;
   end while
Step 3: optimize stage
   for k = 1 to n
      if x_{j_k} = 0 and TempC + w_{j_k} <= C then
         x_{j_k} = 1; TempC = TempC + w_{j_k};
      end if
   end for
Output: the repaired and optimized vector X; computation is terminated.
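As a concrete companion to Algorithm 1, the following Python sketch implements the two stages of the GTM; the variable names are ours. The sorted index order could be computed once per run (as in Step 1 of Algorithm 3) and passed in, but it is recomputed here to keep the example self-contained.

    def greedy_transform(x, profits, weights, capacity):
        # Sketch of the GTM: repair an infeasible binary vector, then greedily improve it.
        n = len(x)
        x = list(x)
        # Step 1: item indices in descending order of the value-to-weight ratio.
        order = sorted(range(n), key=lambda j: profits[j] / weights[j], reverse=True)
        # Step 2 (repair stage, RS): keep already-selected items in ratio order while they fit.
        temp_c = 0
        for j in order:
            if x[j] == 1:
                if temp_c + weights[j] <= capacity:
                    temp_c += weights[j]
                else:
                    x[j] = 0   # dropping this item keeps the solution feasible
        # Step 3 (optimize stage, OS): greedily add unselected items that still fit.
        for j in order:
            if x[j] == 0 and temp_c + weights[j] <= capacity:
                x[j] = 1
                temp_c += weights[j]
        return x

    # The all-ones (infeasible) vector on the illustrative instance from Section 2 is repaired to [1, 0, 1, 0, 1].
    print(greedy_transform([1, 1, 1, 1, 1], [10, 5, 15, 7, 6], [2, 3, 5, 7, 1], 10))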

3.3. New Position Updating with Adaptive Step and Genetic Mutation

An outstanding characteristic of PSO is that each individual tends to mimic its successful companions. Each individual follows a simple rule, emulating the successful experience of neighbouring individuals, and the accumulated behaviour of the swarm is to search for the best region of a high-dimensional space [33]. Comparing CS with PSO reveals some differences and similarities. First, in PSO the particle velocity consists of three parts: the previous velocity term, a cognitive component, and a social component; the role of the social component is to pull the particles in the direction of the global optimum. In CS, a new cuckoo individual is generated with a certain probability in a completely random manner, which can be seen as the social component of CS; however, it does not reflect well the influence of the entire population on the individual. Second, PSO exhibits adaptive behaviour, because the population state changes in step with the individual best and the global best that have been tracked, whereas a cuckoo individual does not fully exhibit such adaptive behaviour in CS. Third, in PSO the position-update formula performs mutation in an embedded, memory-based manner, which is similar to what is used in CS. From these analyses, we conclude that the CS algorithm also has some minor disadvantages. Inspired by the idea of particle swarm optimization, a novel position-updating operator is proposed and used to strengthen the local search. ICS differs from CS in the following two aspects:
(1) the position updating with adaptive step in ICS completely replaces the random walk of CS in the local search stage;
(2) the probability of alien eggs being discovered by host birds is removed from CS, and a genetic mutation probability (denoted here p_m) is included in ICS.

The concept of the “confidence interval” is introduced first, and a schematic is given as well.

Definition 3 (confidence interval). Let y_{best, j}^t denote the jth component of the global best cuckoo individual Y_best^t in generation t, and let y_{worst, j}^t denote the jth component of the global worst cuckoo individual Y_worst^t in generation t accordingly. The adaptive step of the jth component is A_j^t = |y_{best, j}^t - y_{worst, j}^t|, and the confidence interval (CI) of the jth component is then defined as [y_{best, j}^t - A_j^t, y_{best, j}^t + A_j^t]. Figure 1 gives a schematic representation of the confidence interval.
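For illustration, under the reconstruction above, if y_{best, j}^t = 1.2 and y_{worst, j}^t = 0.4 in some generation t, then A_j^t = 0.8 and the confidence interval of the jth component is [0.4, 2.0].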

Two major components of any metaheuristic algorithm are intensification and diversification, or exploitation and exploration [19], and their interaction can significantly affect the efficiency of the algorithm. The confidence interval is essentially a region near the global best cuckoo. Adjusting the search step size gradually during the evolutionary process effectively balances the tension between exploration and exploitation. In the early stage of the search, cuckoo individuals are randomly distributed over the entire search space, so most adaptive steps are large and most confidence intervals are wide, which is very beneficial for exploration. As the iterations continue, most adaptive steps gradually become small and most confidence intervals become narrow accordingly; thus the exploitation capability is gradually strengthened.

The purpose of mutation is to introduce new genes and thereby increase the diversity of the population. Mutation also helps to balance the exploration-exploitation trade-off. A genetic mutation operation with a small probability is carried out, since it effectively prevents premature convergence of ICS. The new position-updating formula of ICS is shown in Algorithm 2.

For j = 1 to n
   y_{ij}^{t+1} = y_{best, j}^t + (2 r1 - 1) A_j^t      (new position drawn inside the confidence interval)
   if (rand < p_m) then
      y_{ij}^{t+1} = y_{min, j} + r2 (y_{max, j} - y_{min, j})      (genetic mutation: uniform reset within the bounds of the auxiliary search space)
   end if
end for

Here, “best” and “worst” are the indexes of the global best cuckoo and the global worst cuckoo, respectively, and r1, r2, and rand are uniformly generated random numbers in [0, 1].
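Under the reconstruction given above, one way to realize Algorithm 2 in code is sketched below: each component is redrawn inside the confidence interval around the best individual, and with probability p_m it is instead reset uniformly within assumed bounds of the auxiliary search space. Both the factor (2 r1 - 1) and the uniform mutation are assumptions rather than the paper's verbatim formulas.

    import random

    def update_position(Y_best, Y_worst, p_m, lower=-5.0, upper=5.0):
        # Assumed ICS update: sample within CI_j = [best_j - A_j, best_j + A_j], A_j = |best_j - worst_j|;
        # with probability p_m, mutate the component uniformly in [lower, upper] (assumed bounds).
        new_Y = []
        for best_j, worst_j in zip(Y_best, Y_worst):
            A_j = abs(best_j - worst_j)                            # adaptive step
            y_j = best_j + (2.0 * random.random() - 1.0) * A_j     # point inside the confidence interval (uses r1)
            if random.random() < p_m:                              # genetic mutation with small probability (uses rand)
                y_j = lower + random.random() * (upper - lower)    # uniform reset (uses r2)
            new_Y.append(y_j)
        return new_Y

    print(update_position([1.2, -0.4, 0.9], [0.3, 0.5, -1.0], p_m=0.05))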

Based on the above analyses, the pseudocode of ICS for 0-1 knapsack problems is given in Algorithm 3.

Step 1: Sorting. According to the value-to-weight ratio p_j / w_j in descending order,
   a queue of length n is formed.
Step 2: Initialization. Generate m cuckoo nests randomly {(Y_1, X_1), (Y_2, X_2), ..., (Y_m, X_m)}.
   Calculate the fitness of each individual, f(X_i), i = 1, 2, ..., m, and determine the current best individual.
   Set the generation counter t = 1. Set the mutation probability p_m.
Step 3: while (the stopping criterion is not satisfied)
      for i = 1 to m
         for j = 1 to n
            Update the component y_{ij} by the new position updating formula of ICS (Algorithm 2)
         end for
         Transform the real-coded vector Y_i into the binary vector X_i (Definition 2)
         Repair the illegal individual and optimize the legal individual (Algorithm 1)
      end for
Step 4: Keep the best solutions; rank the solutions and find the current best.
      t = t + 1
Step 5: end while

The time complexity of the proposed algorithm is approximately linear in the problem size, and it does not increase in order of magnitude compared with the original CS algorithm.
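To show how the pieces fit together, the self-contained Python sketch below strings the hybrid encoding, the assumed position update, and the GTM repair into one ICS loop; it re-defines the GTM helper from Section 3.2 so that it runs on its own. The population size, mutation probability, auxiliary-space bounds [-5, 5], component clipping, and the separate tracking of the best solution are illustrative assumptions, not settings reported in the paper. Each generation performs on the order of m * n component updates plus the linear repair passes, consistent with the complexity remark above.

    import math
    import random

    def greedy_transform(x, profits, weights, capacity, order):
        # GTM (Algorithm 1): repair stage, then optimize stage.
        x = list(x)
        temp_c = 0
        for j in order:
            if x[j] == 1:
                if temp_c + weights[j] <= capacity:
                    temp_c += weights[j]
                else:
                    x[j] = 0
        for j in order:
            if x[j] == 0 and temp_c + weights[j] <= capacity:
                x[j] = 1
                temp_c += weights[j]
        return x

    def decode(Y):
        # Assumed sigmoid-based transform from the auxiliary space to the binary solution space.
        return [1 if random.random() < 1.0 / (1.0 + math.exp(-y)) else 0 for y in Y]

    def ics_knapsack(profits, weights, capacity, m=20, p_m=0.05, generations=200, lower=-5.0, upper=5.0):
        n = len(profits)
        order = sorted(range(n), key=lambda j: profits[j] / weights[j], reverse=True)   # Step 1
        # Step 2: random real-coded population, decoded and repaired into binary solutions.
        Y = [[random.uniform(lower, upper) for _ in range(n)] for _ in range(m)]
        X = [greedy_transform(decode(Yi), profits, weights, capacity, order) for Yi in Y]
        fit = [sum(p for p, xj in zip(profits, Xi) if xj) for Xi in X]
        best_X, best_fit = max(zip(X, fit), key=lambda t: t[1])
        for _ in range(generations):                                                    # Step 3
            b = max(range(m), key=lambda i: fit[i])
            w = min(range(m), key=lambda i: fit[i])
            Y_best, Y_worst = Y[b][:], Y[w][:]
            for i in range(m):
                new_Y = []
                for j in range(n):                                                      # Algorithm 2 (assumed form)
                    A = abs(Y_best[j] - Y_worst[j])
                    y = Y_best[j] + (2.0 * random.random() - 1.0) * A
                    if random.random() < p_m:
                        y = random.uniform(lower, upper)
                    new_Y.append(max(lower, min(upper, y)))    # clip to the assumed auxiliary-space bounds
                new_X = greedy_transform(decode(new_Y), profits, weights, capacity, order)
                Y[i], X[i] = new_Y, new_X
                fit[i] = sum(p for p, xj in zip(profits, new_X) if xj)
            gen_best = max(range(m), key=lambda i: fit[i])                              # Step 4: keep the best found
            if fit[gen_best] > best_fit:
                best_X, best_fit = X[gen_best][:], fit[gen_best]
        return best_X, best_fit

    print(ics_knapsack([10, 5, 15, 7, 6], [2, 3, 5, 7, 1], 10))   # expect profit 31 on this tiny instance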

4. Experimental Results and Analysis

In order to test the optimization ability of ICS and investigate the effectiveness of the algorithms on different instance types, we consider twenty 0-1 knapsack problems involving ten small-scale instances, six medium-scale instances, and four large-scale instances. The solution quality and performance are compared with a binary version of HS and a binary version of CS, denoted for simplicity as HS and CS, respectively.

Test problems 1–10 are taken from [12]. Test problems 11 and 13 of He et al. [28] are used in our numerical experiments. Test problem 15 is constructed from test problems 11 and 13. Test problem 16 is generated by Kellerer et al. [34]. Test problems 12 and 14 are generated by Gherboudj et al. [24]. Test problems 17–20 are taken from [12].

All the algorithms were implemented in Visual C++ 6.0. The tests were run on a personal computer with an AMD Athlon(tm) II X2 250 processor at 3.01 GHz and 1.75 GB of RAM, running Windows XP. Three groups of experiments were performed to assess the efficiency and performance of our algorithm. In all experiments, the parameters of CS were set as follows: the number of cuckoos is 20. For the HS algorithm, the harmony memory size is HMS = 5, the harmony memory consideration rate is HMCR = 0.9, the pitch adjusting rate is PAR = 0.3, and a fixed bandwidth bw is used. For the ICS algorithm, the number of cuckoos is 20 and a small genetic mutation probability p_m is used. The experiments on each problem were repeated 30 times independently. The results are tabulated in Tables 1–3.

4.1. Comparison among Three Algorithms on Small Dimension Knapsack Problems

Table 1 shows the experimental results of the ICS algorithm, HS, and CS on ten KP test problems of different dimensions. The results in Table 1 indicate that the proposed ICS algorithm performs better than the HS and CS algorithms on test problem 8; the optimal solution and the corresponding profit found by ICS for this problem are listed in Table 1. Additionally, CS and ICS obtain the same result on one further instance, which is better than that obtained by the HS algorithm, and the three algorithms obtain the same results on the remaining instances. In a word, the solutions obtained by the three algorithms are similar, there is almost no significant difference among them, and the ICS algorithm does not fully show its advantages here. Therefore, in order to further test the performance of the algorithm, we conducted the experiments reported in the next subsection.

4.2. Comparison among Three Algorithms on Medium Dimension Knapsack Problems

Figures 2, 3, 4, and 5 show the convergence curves of the best profits of ICS over 30 runs on four test problems with 100, 120, 150, and 200 items. They indicate the global search ability and the convergence behaviour of ICS. Several observations are given as follows.

For the 100-item test problem, the best profit increases quickly and reaches an approximately optimal value in about one second. Although the algorithm evolves slowly for a short period on the 120-item problem, the best profit is still obtained after about 2.5 seconds. On the 150-item test problem, the best profit increases quickly for a little more than a second. For the larger 200-item test problem, the best profit also increases very rapidly over about 2.5 seconds. The performance of ICS can be further understood and analyzed from Table 2.

We observe from Table 2 that ICS demonstrates an overwhelming advantage over the other two algorithms in solving medium-scale 0-1 knapsack problems. ICS and HS obtain the same optimal solutions on all test problems. CS has the worst performance, and the best solutions found by CS are worse than those obtained by the other two algorithms on two of the instances. Furthermore, the worst solutions found by ICS are all better than those obtained by CS. ICS and HS obtain the same worst solutions except on two instances, where, unfortunately, the worst solutions obtained by ICS do not exceed those of HS. ICS needs less “time” and less “average time” than CS on almost all of the test problems. In addition, the convergence measure reported in Table 2 is, for most problems, much smaller than that of CS and HS, which shows that ICS converges quickly. The success rate “SR” is more than 95% on almost all problems, with two exceptions, and on two instances it is slightly higher than that of the other two algorithms, which indicates the high efficiency of ICS in solving 0-1 knapsack problems. The standard deviation “Std.dev” is much smaller than that of CS and differs only slightly from that of HS, which indicates the good stability and superior approximation ability of ICS.

Figure 6 compares the average computation time (in seconds) of the proposed algorithm, HS, and CS on test problems 11 to 16. In terms of average computation time, Figure 6 shows that the HS algorithm is the best and the CS algorithm is the worst. Moreover, ICS converges to the optima faster than CS on most instances.

Although ICS shows some advantages in solving medium-scale 0-1 knapsack instances, the optimal solutions it obtains are not markedly better than those of the other two algorithms. Therefore, in order to further verify the efficiency of the proposed algorithm, we designed the following large-scale knapsack tests.

4.3. Comparison among Three Algorithms on Large Dimension Knapsack Problems

Similar to the results on the medium-scale knapsack instances, we observe from Table 3 that for the large-scale knapsack problems the ICS algorithm obtains better solutions in a shorter time and has a more obvious advantage over the CS algorithm. Regrettably, ICS is slightly inferior to HS in terms of optimal solution quality on test problems 17 and 20. In a word, ICS demonstrates better overall performance and thus provides an efficient alternative for solving 0-1 knapsack problems.

The convergence curves shown in Figures 7, 8, 9, and 10 likewise establish that ICS is more effective than CS on all four large-scale KP instances. Careful observation shows that HS attains outstanding profits in the initial stage of the evolution and the best value in the final population. Compared with HS and ICS, CS obtains the worst mean profits at every stage. ICS and HS have roughly the same convergence speed. In addition, it can be inferred from Figure 10 that CS and ICS get stuck in local optima quickly, whereas HS converges to the global optimum rapidly.

5. Conclusions

In this paper, the ICS algorithm has been proposed, based on the CS framework and the greedy transform method, to solve the 0-1 knapsack problem efficiently. An adaptive step is carefully designed to balance local search and global search. The genetic mutation operator helps the algorithm to converge quickly while avoiding local optima. The simulation results demonstrate that the proposed algorithm has superior performance compared with HS and CS. The proposed algorithm thus provides a new method for solving 0-1 knapsack problems.

Further studies will focus on two issues. On the one hand, we will apply the proposed approach to other combinatorial optimization problems, such as the multidimensional knapsack problem (MKP) and the traveling salesman problem (TSP). On the other hand, we will examine new metaheuristic hybrids for solving more complicated 0-1 knapsack problems.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This project is supported by the National Natural Science Foundation of China (Grant no. 10971052).