Computational Intelligence and Neuroscience


Research Article | Open Access

Volume 2014 |Article ID 857254 | https://doi.org/10.1155/2014/857254

Yanhong Feng, Gai-Ge Wang, Qingjiang Feng, Xiang-Jun Zhao, "An Effective Hybrid Cuckoo Search Algorithm with Improved Shuffled Frog Leaping Algorithm for 0-1 Knapsack Problems", Computational Intelligence and Neuroscience, vol. 2014, Article ID 857254, 17 pages, 2014. https://doi.org/10.1155/2014/857254

An Effective Hybrid Cuckoo Search Algorithm with Improved Shuffled Frog Leaping Algorithm for 0-1 Knapsack Problems

Academic Editor: Saeid Sanei
Received: 04 Jun 2014
Revised: 13 Sep 2014
Accepted: 14 Sep 2014
Published: 22 Oct 2014

Abstract

An effective hybrid cuckoo search (CS) algorithm with an improved shuffled frog-leaping algorithm (ISFLA) is put forward for solving the 0-1 knapsack problem. First of all, within the framework of SFLA, an improved frog-leap operator is designed that combines the effect of the global optimal information on the frog leaping and the information exchange between frog individuals with genetic mutation applied with a small probability. Subsequently, in order to improve the convergence speed and enhance the exploitation ability, a novel CS model is proposed that exploits the specific advantages of Lévy flights and the frog-leap operator. Furthermore, the greedy transform method is used to repair infeasible solutions and to optimize feasible ones. Finally, numerical simulations are carried out on six different types of 0-1 knapsack instances, and the comparative results show the effectiveness of the proposed algorithm and its ability to achieve good-quality solutions, outperforming the binary cuckoo search, binary differential evolution, and the genetic algorithm.

1. Introduction

The application of nature-inspired metaheuristic algorithms to computational optimization is a growing trend [1]. Many hugely popular algorithms, including differential evolution (DE) [2, 3], harmony search (HS) [4, 5], the krill herd algorithm (KH) [6–13], animal migration optimization (AMO) [14], the grey wolf optimizer (GWO) [15], biogeography-based optimization (BBO) [16, 17], the gravitational search algorithm (GSA) [18], and the bat algorithm (BA) [19, 20], perform powerfully and efficiently in solving diverse optimization problems. Many metaheuristic algorithms have been applied to knapsack problems, such as evolutionary algorithms (EA) [21], HS [22], chemical reaction optimization (CRO) [23], cuckoo search (CS) [24–26], and the shuffled frog-leaping algorithm (SFLA) [27]. For a broader understanding of swarm intelligence, please refer to [28].

In 2003, Eusuff and Lansey first proposed a novel metaheuristic optimization method, SFLA, which mimics a group of frogs searching for the location with the maximum amount of available food. Owing to the distinguished benefit of its fast convergence speed, SFLA has been successfully applied to many complicated optimization problems, such as water resource distribution [29], function optimization [30], and the resource-constrained project scheduling problem [31].

CS, a nature-inspired metaheuristic algorithm, was originally proposed by Yang and Deb in 2009 [32] and showed promising efficiency for global optimization. Owing to outstanding characteristics such as few parameters, easy implementation, and rapid convergence, it is becoming a new research hotspot in swarm intelligence. Gandomi et al. [33] first applied the CS algorithm to structural engineering optimization problems. Walton et al. [34] proposed an improved cuckoo search algorithm that adds information exchange between the best solutions and tested its performance on a set of benchmark functions. Recently, hybrid algorithms combining CS with other methods have been proposed and have become a hot research topic, such as CS combined with a fuzzy system [35], DE [36], wind driven optimization (WDO) [37], an artificial neural network (ANN) [38], and a genetic algorithm (GA) [39]. For details, see [40].

In 2011, Layeb [25] developed a variant of cuckoo search combined with a quantum-based approach to solve knapsack problems efficiently. Subsequently, Gherboudj et al. [24] utilized a purely binary cuckoo search to tackle knapsack problems. However, few scholars have considered binary-coded CS, and its performance needs further improvement so as to expand its fields of application. In addition, despite the successful application of many methods to the 0-1 knapsack problem, it remains a very active research area, because many existing algorithms do not cope well with some new and more intractable 0-1 knapsack problems hidden in the real world. Further, most recently proposed algorithms focus on 0-1 knapsack problems of low and medium dimension, while high-dimensional 0-1 knapsack problems have been studied little and the results are not highly satisfactory. What is more, the correlation between the weights and the values of the items has received little attention. This necessitates the development of new techniques.

Therefore, in this work, we propose a hybrid CS algorithm with an improved SFLA (CSISFLA) for solving the 0-1 knapsack problem. To verify the effectiveness of our proposed method, a large number of experiments on the 0-1 knapsack problem are conducted, and the experimental results show that the proposed hybrid metaheuristic can reach the required optima more effectively than CS, DE, and GA, even in some cases when the problem to be solved is highly complex.

The rest of the paper is organized as follows. Section 2 introduces the preliminary knowledge of CS, SFLA algorithm, and the mathematical model of 0-1 KP problem. Then, our proposed CSISFLA for 0-1 KP problems is presented in Section 3. A series of simulation experiments are conducted in Section 4. Some conclusions and comments are made for further research in Section 5.

2. Preliminaries

In this section, the model of the 0-1 knapsack problem and the basic CS and SFLA are introduced briefly.

2.1. 0-1 Knapsack Problem

The 0-1 knapsack problem, denoted by KP, is a classical optimization problem of high theoretical and practical value. Many practical applications can be formulated as a KP, such as cutting stock problems, portfolio optimization, scheduling problems, and cryptography. This problem has been proven to be NP-hard; hence, it cannot be solved in polynomial time unless P = NP [44].

The 0-1 knapsack problem can be stated as follows:

$$\max \; f(x) = \sum_{j=1}^{n} p_j x_j \quad \text{subject to} \quad \sum_{j=1}^{n} w_j x_j \le c, \quad x_j \in \{0, 1\}, \; j = 1, \ldots, n, \qquad (1)$$

where $n$ is the number of items, and $w_j$ and $p_j$ represent the weight and profit of item $j$, respectively. The objective is to select a subset of items so that the total weight does not exceed a given capacity $c$ while the total profit is maximized. The binary decision variable $x_j$ is used, with $x_j = 1$ if item $j$ is selected and $x_j = 0$ otherwise.
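The objective and constraint in (1) can be sketched directly in Python; the tiny instance below (weights, profits, capacity) is made up purely for illustration:

```python
def evaluate(x, weights, profits, capacity):
    """Return the total profit of selection x, or None if the capacity is exceeded."""
    total_w = sum(w for w, xi in zip(weights, x) if xi)
    total_p = sum(p for p, xi in zip(profits, x) if xi)
    return total_p if total_w <= capacity else None

# Hypothetical 4-item instance for illustration only.
weights = [3, 4, 5, 2]
profits = [4, 5, 6, 3]
capacity = 9

feasible = evaluate([1, 1, 0, 1], weights, profits, capacity)    # weight 3+4+2 = 9 <= 9
infeasible = evaluate([1, 1, 1, 0], weights, profits, capacity)  # weight 3+4+5 = 12 > 9
```

A solver only ever compares profits of feasible selections; infeasible ones are handed to the repair operator described in Section 3.2.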

2.2. Cuckoo Search

CS is a relatively new metaheuristic algorithm for solving global optimization problems, which is based on the obligate brood parasitic behavior of some cuckoo species. In addition, this algorithm is enhanced by the so-called Lévy flights rather than by simple isotropic random walks.

For simplicity, Yang and Deb used the following three idealized rules [32, 45]:
(1) each cuckoo lays only one egg at a time and dumps its egg in a randomly chosen nest;
(2) the best nests with high-quality eggs will be carried over to the next generations;
(3) the number of available host nests is fixed, and the egg laid by a cuckoo is discovered by the host bird with a probability $p_a \in (0, 1)$. In this case, the host bird can either throw the egg away or simply abandon the nest and build a completely new one.

The last assumption can be approximated by replacing a fraction $p_a$ of the host nests with new nests (carrying new random solutions).

A new solution $x_i^{t+1}$ is generated by a Lévy flight [32]:

$$x_i^{t+1} = x_i^{t} + \alpha \oplus \mathrm{Levy}(\lambda). \qquad (2)$$

Lévy flights essentially provide a random walk whose random steps follow a Lévy distribution, which for large steps has an infinite variance and an infinite mean. Here the steps essentially form a random walk process with a power-law step-length distribution with a heavy tail:

$$\mathrm{Levy} \sim u = t^{-\lambda}, \quad 1 < \lambda \le 3, \qquad (3)$$

where $\alpha > 0$ is the step size scaling factor. Generally, we take $\alpha = 1$. The product $\oplus$ means entry-wise multiplication.
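As a concrete illustration, here is a minimal sketch of one Lévy-flight move. The paper does not specify how the heavy-tailed steps of (3) are sampled, so Mantegna's algorithm and the exponent value below are assumptions (a common implementation choice):

```python
import math
import random

def levy_step(beta=1.5):
    # Mantegna's algorithm for a heavy-tailed Lévy step (assumption: the
    # paper leaves the sampling scheme unspecified; beta ~ lambda - 1).
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0.0, sigma)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

def levy_flight(x, alpha=1.0):
    # Eq. (2): x^{t+1} = x^t + alpha (+) Levy(lambda), applied entry-wise.
    return [xi + alpha * levy_step() for xi in x]

random.seed(42)
x_new = levy_flight([0.0] * 5)
```

Most steps are small (local search around the current position), but the heavy tail occasionally produces very long jumps, which is what gives CS its global exploration ability.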

2.3. Shuffled Frog-Leaping Algorithm

The SFLA is a metaheuristic optimization method that imitates the memetic evolution of a group of frogs casting about for the location that has the maximum amount of available food [46]. SFLA, originally developed by Eusuff and Lansey in 2003, can be applied to many complicated optimization problems. By virtue of the beneficial combination of the genetic-based memetic algorithm (MA) and the social behavior-based PSO algorithm, the SFLA has the advantages of global information exchange and local fine search. In SFLA, all virtual frogs are assigned to disjoint subsets of the whole population called memeplexes. The different memeplexes are regarded as different cultures of frogs and independently perform local search. The individual frogs in each memeplex have ideas that can be affected by the ideas of other frogs and evolve by means of memetic evolution. After a defined number of memetic evolution steps, ideas are transferred among memeplexes in a shuffling process. The local search and the shuffling processes continue until defined convergence criteria are satisfied [47].

In the SFLA, the initial population of $P$ frogs is partitioned into $M$ memeplexes, each containing $N$ frogs ($P = M \times N$). In this process, the $i$th frog goes to the $j$th memeplex, where $j = i \bmod M$ (memeplexes numbered from 0). The procedure of evolution of individual frogs contains three frog leapings. The position update is as follows.

Firstly, a new position for the worst frog of the $k$th memeplex is calculated by

$$D = r_1 \, (X_b^k - X_w^k), \qquad Y = X_w^k + D, \quad L \le D \le U. \qquad (4)$$

If the new position $Y$ is better than the original position $X_w^k$, replace $X_w^k$ with $Y$; otherwise, another leap is performed in which the global optimal individual $X_g$ takes the place of the best individual of the $k$th memeplex in the leaping step:

$$D = r_2 \, (X_g - X_w^k), \qquad Y = X_w^k + D. \qquad (5)$$

If no improvement is obtained in this case either, the frog is replaced by a randomly generated one:

$$Y = X_{\min} + r_3 \, (X_{\max} - X_{\min}), \qquad (6)$$

where $X_{\min}$ and $X_{\max}$ are the bounds of the search space. Here, $Y$ is an update of the frog's position in one leap; $r_1$, $r_2$, and $r_3$ are random numbers uniformly distributed in $(0, 1)$; $X_b^k$ and $X_w^k$ are the best and the worst individuals of the $k$th memeplex, respectively; $X_g$ is the best individual in the whole population; and $U$ and $L$ are the maximum and minimum allowed change of a frog's position in one leap.
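The three-stage leap above can be sketched in a few lines of Python. This is a one-dimensional, maximization-oriented sketch; the objective `f` and the search bounds are made up for illustration:

```python
import random

def sfla_leap(Xw, Xb, Xg, f, x_min=-5.0, x_max=5.0):
    # First leaping: move the memeplex's worst frog towards the memeplex best.
    Y = Xw + random.random() * (Xb - Xw)
    if f(Y) > f(Xw):
        return Y
    # Second leaping: move towards the global best instead.
    Y = Xw + random.random() * (Xg - Xw)
    if f(Y) > f(Xw):
        return Y
    # Still no improvement: replace with a random frog in the search space.
    return x_min + random.random() * (x_max - x_min)

random.seed(0)
f = lambda x: -(x - 2.0) ** 2      # toy maximization target, optimum at x = 2
new = sfla_leap(Xw=-4.0, Xb=1.0, Xg=2.0, f=f)
```

Because the worst frog at -4.0 always improves by moving towards the memeplex best at 1.0, the first leap succeeds here and the returned position lies strictly between the two.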

3. Hybrid CS with ISFLA for 0-1 Knapsack Problems

In this section, we propose a hybrid metaheuristic algorithm integrating cuckoo search and an improved shuffled frog-leaping algorithm (CSISFLA) for solving the 0-1 knapsack problem. First, the hybrid encoding scheme and the repair operator are introduced; then the improved frog-leaping algorithm and the framework of the proposed CSISFLA are presented.

3.1. Encoding Scheme

As far as we know, the standard CS algorithm solves optimization problems in continuous space. The operations of the original CS algorithm are closed over the set of real numbers, but they do not have the closure property over the binary set $\{0, 1\}$. Based on the above analysis, we utilize a hybrid encoding scheme [26]: each cuckoo individual is represented by a two-tuple $(X_i, Y_i)$, where $X_i$ works in the auxiliary (real-coded) search space, $Y_i$ performs in the binary solution space accordingly, and $D$ is the dimensionality of the solution. Further, the Sigmoid function is adopted to transform the real-coded vector $X_i$ into the binary vector $Y_i$:

$$y_{ij} = \begin{cases} 1, & \text{if } rand < sig(x_{ij}), \\ 0, & \text{otherwise,} \end{cases} \qquad (7)$$

where $sig(x) = 1 / (1 + e^{-x})$ is the Sigmoid function and $rand$ is a uniform random number in $(0, 1)$.
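This real-to-binary mapping is a few lines of code. A minimal sketch (the injectable `rng` argument is only there to make the stochastic threshold testable):

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def binarize(x_real, rng=random.random):
    """Map a real-coded vector X to a binary vector Y: y_j = 1 iff a uniform
    random draw falls below sig(x_j), following the hybrid encoding scheme."""
    return [1 if rng() < sigmoid(xj) else 0 for xj in x_real]
```

Large positive components of $X_i$ are almost always mapped to 1, large negative ones to 0, and components near zero stay genuinely random, which preserves diversity in the binary space.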

The encoding scheme of the population is depicted in Table 1.



3.2. Repair Operator

After each generation, the feasibility of all the generated solutions is taken into consideration. That is to say, individuals may be illegal because they violate the capacity constraint. Therefore, a repair procedure is essential to fix illegal individuals. In this paper, an effective greedy transform method (GTM) is introduced to solve this problem [26, 48]. It can not only effectively repair infeasible solutions but also optimize feasible ones.

The GTM consists of two phases. The first phase, called the repairing phase (RP), examines each item in order of decreasing value-to-weight ratio and confirms a variable value of one as long as feasibility is not violated. The second phase, called the optimizing phase (OP), changes remaining variables from zero to one until feasibility would be violated. The primary aim of the RP is to transform an abnormal chromosome coding into a normal chromosome, while the OP seeks the best chromosome coding.
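The two phases can be sketched as follows; items are visited in decreasing profit/weight order, RP keeps a selected item only while it fits, and OP then greedily adds unselected items that still fit (a sketch of the idea, not the authors' exact implementation):

```python
def greedy_transform(x, weights, profits, capacity):
    order = sorted(range(len(x)), key=lambda j: profits[j] / weights[j], reverse=True)
    y = [0] * len(x)
    load = 0
    # Repairing phase (RP): keep a selected item only if it still fits.
    for j in order:
        if x[j] and load + weights[j] <= capacity:
            y[j], load = 1, load + weights[j]
    # Optimizing phase (OP): add remaining items while feasibility holds.
    for j in order:
        if not y[j] and load + weights[j] <= capacity:
            y[j], load = 1, load + weights[j]
    return y

# Hypothetical instance: repairing an overweight selection.
weights = [3, 4, 5, 2]
profits = [4, 5, 6, 3]
repaired = greedy_transform([1, 1, 1, 0], weights, profits, capacity=9)
```

On this toy instance the infeasible selection (weight 12 > 9) is repaired by dropping the item with the lowest ratio, and the OP then packs in the light high-ratio item, yielding a feasible selection of weight exactly 9.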

3.3. Improved Shuffled Frog-Leaping Algorithm

In the evolution of SFLA, a new individual is affected only by the local optimal individual and the global optimal individual during the first two frog leapings, respectively. That is to say, there is a lack of information exchange between individuals and between memeplexes. In addition, the use of the worst individual is not conducive to quickly obtaining better individuals and fast convergence. When the quality of the solution has not improved after the first two frog leapings, the SFLA randomly generates a new individual without restriction to replace the original one, which results in the loss of some valuable information carried by superior individuals. Therefore, in order to make up for these defects of the SFLA, an improved shuffled frog-leaping algorithm (ISFLA) is carefully designed and then embedded in the CSISFLA. Compared with SFLA, there are three main improvements.

The first slight improvement is that we dispense with sorting the frogs according to fitness value, which reduces the time cost.

The second improvement is that we adopt a new frog individual position update formula in place of the first two frog leapings. The idea is inspired by the DE/best/1/bin scheme in the DE algorithm. Each frog individual is represented as a solution, and the new solution is given by

$$Y = X_g \pm r \, (X_b^k - X_{r_1}), \qquad (8)$$

where $X_g$ is the current global best solution found so far, $X_b^k$ is the best solution of the $k$th memeplex, $X_{r_1}$ is a randomly selected individual with index $r_1 \ne i$, and $r$ is a random number uniformly distributed in $(0, 1)$. In particular, the plus or minus sign is selected with a certain probability. The main purpose of the improvement in (8) is to quicken the convergence rate.

The third improvement is to randomly generate new individuals with a certain probability instead of unconditionally generating new individuals, which takes into consideration the retention of the better individuals in the population.

The main steps of ISFLA are given in Algorithm 1. In Algorithm 1, $P$ is the size of the population, $M$ is the number of memeplexes, and $D$ is the dimension of the decision variables. $rand$ denotes a random real number uniformly distributed in $(0, 1)$, redrawn independently at each use. In particular, $P_m$, called the probability of mutation, controls the probability of random re-initialization of an individual.

Begin
  For i = 1 to P do
    k = i mod M
    Select r1 ≠ i uniformly at random
    For j = 1 to D do
      If rand < 0.5 then
        Y(j) = X_g(j) + rand × (X_b^k(j) − X_r1(j))
      Else
        Y(j) = X_g(j) − rand × (X_b^k(j) − X_r1(j))
      End if
    End for
    If f(Y) > f(X_i) then
      X_i = Y
    Else if rand < P_m then
      X_i = X_min + rand × (X_max − X_min)
    End if
  End for
End
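A hedged Python sketch of one application of the improved frog-leap operator of Algorithm 1 (the population, fitness function, bounds, and per-individual sign choice below are illustrative assumptions):

```python
import random

def isfla_update(X, i, k_best, g_best, fitness, Pm=0.15, x_min=-5.0, x_max=5.0):
    """One ISFLA update of individual i, following Eq. (8) plus the mutation step."""
    D = len(X[i])
    r1 = random.choice([r for r in range(len(X)) if r != i])
    sign = 1.0 if random.random() < 0.5 else -1.0          # +/- chosen randomly
    Y = [g_best[j] + sign * random.random() * (k_best[j] - X[r1][j]) for j in range(D)]
    if fitness(Y) > fitness(X[i]):
        X[i] = Y                                            # accept the improved frog
    elif random.random() < Pm:                              # mutate with probability Pm
        X[i] = [x_min + random.random() * (x_max - x_min) for _ in range(D)]
    return X[i]

random.seed(3)
fit = lambda v: -sum(t * t for t in v)      # toy maximization target, optimum at the origin
pop = [[4.0, -4.0], [1.0, 1.0], [-2.0, 3.0]]
old = fit(pop[0])
new = isfla_update(pop, 0, k_best=[1.0, 1.0], g_best=[0.5, -0.5], fitness=fit)
```

Because the new position is anchored at the global best rather than at the worst frog, even a single update typically improves a poor individual substantially, which is exactly the convergence-speed argument made above.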

3.4. The Frame of CSISFLA

In this section, we will demonstrate how we combine the well-designed ISFLA with Lévy flights to form an effective CSISFLA. The proposed algorithm does not change the main search mechanism of CS and SFLA. In the iterative process of the whole population, Lévy flights are firstly performed and then frog-leaping operator is adopted in each memeplex. Therefore, the strong exploration abilities in global area of the original CS and the exploitation abilities in local region of ISFLA can be fully developed. The CSISFLA architecture is explained in Figure 1.

3.5. CSISFLA Algorithm for 0-1 Knapsack Problems

Based on the careful design above, the pseudocode of CSISFLA for 0-1 knapsack problems is described in Algorithm 2. There are essentially three main processes besides initialization. Firstly, Lévy flights are executed to move a randomly chosen cuckoo or generate a solution. The random walk via Lévy flights is much more efficient in exploring the search space owing to its longer step length. In addition, some of the new solutions are generated by Lévy flights around the best solution, which speeds up the local search. Then ISFLA is performed in order to exploit the local area efficiently. Here, we regard the frog-leaping process as the process of a cuckoo laying an egg in a nest. The new nest generated with probability $P_m$ is far enough from the current best solution, which enables CSISFLA to avoid being trapped in a local optimum. Finally, when an infeasible solution is generated, a repair procedure is adopted to restore feasibility and, moreover, to optimize feasible solutions. Since the algorithm balances exploitation and exploration well, it is expected to obtain solutions of satisfactory quality.

Begin
Step 1. Sorting. Sort the items by value-to-weight ratio in descending order to form a queue of length D.
Step 2. Initialization. Set the generation counter t = 0; set the probability of mutation P_m.
Generate P cuckoo nests randomly. Divide the whole population into M memeplexes, each
containing N (i.e., P/M) cuckoos. Calculate the fitness of each individual, and determine
the global optimal individual X_g and the best individual X_b^k of each memeplex, k = 1, …, M.
Step 3. While the stopping criterion is not satisfied do
  For i = 1 to P
    k = i mod M
    Select r1 ≠ i uniformly at random
    For j = 1 to D
      x_ij = x_ij + α ⊕ Lévy(λ)        // Lévy flight
      If rand < 0.5 then               // The first frog leaping
        Temp(j) = X_g(j) + rand × (X_b^k(j) − X_r1(j))
      Else
        Temp(j) = X_g(j) − rand × (X_b^k(j) − X_r1(j))
      End if
    End for
    If f(Temp) > f(X_i) then           // Generate new individual
      X_i = Temp
    Else if rand < P_m then            // Random selection
      X_i = X_min + rand × (X_max − X_min)
    End if
    Repair the illegal individuals and optimize the legal individuals by performing the GTM method
  End for
  Keep the best solutions.
  Rank the solutions in descending order and find the current best X_g.
  Step 4. Shuffle all the memeplexes.
Step 5. End while
End

3.6. Algorithm Complexity

CSISFLA is composed of three stages: sorting by value-to-weight ratio, initialization, and the iterative search. The quick sort has time complexity O(D log D). The generation of the P initial cuckoo nests has time complexity O(P × D). Each generation of the iterative search consists of four steps (the comment statements in Algorithm 2), namely the Lévy flight, the first frog leaping, the generation of new individuals, and the random selection, each of which costs the same O(P × D) time. In summary, the overall complexity of the proposed CSISFLA is O(D log D + G × P × D), where G is the number of generations; this is unchanged compared with the original CS algorithm.

4. Simulation Experiments

4.1. Experimental Data Set

In the existing literature, case studies and research on knapsack problems mostly concern small- to moderate-scale problems. However, in real-world applications, problems are typically large-scale, with thousands or even millions of design variables. In addition, the complexity of the KP is greatly affected by the correlation between profits and weights [49–51]. However, few scholars pay close attention to the correlation between the weight and the value of the items. To test the validity of the algorithm on different types of instances, we adopt uncorrelated, weakly correlated, strongly correlated, multiple strongly correlated, profit ceiling, and circle data sets of different dimensions. The instance classes are as follows:
(i) uncorrelated instances: the weights and the profits are random integers drawn uniformly and independently from a fixed range;
(ii) weakly correlated instances: the weights are random integers drawn uniformly from a fixed range, and the profit of each item is a random integer in a small interval centred on its weight;
(iii) strongly correlated instances: the weights are random integers drawn uniformly from a fixed range, and the profit of each item is its weight plus a fixed constant;
(iv) multiple strongly correlated instances: the weights are randomly distributed in a fixed range; if the weight of an item is divisible by 6, its profit is set to the weight plus a larger constant, and otherwise to the weight plus a smaller constant;
(v) profit ceiling instances: the weights are randomly distributed in a fixed range, and the profit of each item is its weight rounded up to the nearest multiple of a constant d;
(vi) circle instances: the weights are randomly distributed in a fixed range, and the profit of each item is a circular (elliptic) function of its weight.

For each data set, the knapsack capacity is set to 75% of the total weight of the items: $c = 0.75 \sum_{j=1}^{D} w_j$, as reflected in the target and total weights of Table 2.
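A hedged sketch of generators for three of these instance classes. The exact ranges and constants used in the paper are not stated, so the value R = 100, the ±R/10 interval, and the +R/10 offset below are assumptions; the 75% capacity rule matches the target/total weights listed in Table 2:

```python
import random

def make_instance(n, kind, R=100, seed=0):
    """Generate a 0-1 KP instance of the given correlation class (illustrative ranges)."""
    rng = random.Random(seed)
    w = [rng.randint(1, R) for _ in range(n)]
    if kind == "uncorrelated":
        p = [rng.randint(1, R) for _ in range(n)]
    elif kind == "weakly":
        # profit lies in a small interval around the weight (assumed width R/10)
        p = [max(1, wi + rng.randint(-R // 10, R // 10)) for wi in w]
    elif kind == "strongly":
        # profit = weight + fixed constant (assumed R/10)
        p = [wi + R // 10 for wi in w]
    else:
        raise ValueError(kind)
    c = int(0.75 * sum(w))   # capacity = 75% of total weight, as in Table 2
    return w, p, c

w, p, c = make_instance(200, "strongly")
```

Instances become harder as the profit-weight correlation strengthens, because the greedy value-to-weight ordering then discriminates between items far less.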

Figures 2, 3, 4, 5, 6, and 7 illustrate six types of instances of 200 items, respectively.

The KP instances in this study are described in Table 2.


Problem | Correlation | Dimension | Target weight | Total weight | Total values

KP1 | Uncorrelated | 150 | 6471 | 8628 | 8111
KP2 | Uncorrelated | 200 | 8328 | 11104 | 10865
KP3 | Uncorrelated | 300 | 12383 | 16511 | 16630
KP4 | Uncorrelated | 500 | 20363 | 27150 | 28705
KP5 | Uncorrelated | 800 | 33367 | 44489 | 44005
KP6 | Uncorrelated | 1000 | 41948 | 55930 | 54764
KP7 | Uncorrelated | 1200 | 49485 | 65980 | 66816
KP8 | Weakly correlated | 150 | 6403 | 8538 | 8504
KP9 | Weakly correlated | 200 | 8358 | 11144 | 11051
KP10 | Weakly correlated | 300 | 12554 | 16739 | 16778
KP11 | Weakly correlated | 500 | 20758 | 27677 | 27821
KP12 | Weakly correlated | 800 | 33367 | 44489 | 44491
KP13 | Weakly correlated | 1000 | 41849 | 55799 | 55683
KP14 | Weakly correlated | 1200 | 49808 | 66411 | 56811
KP15 | Strongly correlated | 300 | 12247 | 16329 | 19329
KP16 | Strongly correlated | 500 | 21305 | 28407 | 33406
KP17 | Strongly correlated | 800 | 33367 | 44489 | 52489
KP18 | Strongly correlated | 1000 | 40883 | 54511 | 64510
KP19 | Strongly correlated | 1200 | 50430 | 67240 | 79240
KP20 | Multiple strongly correlated | 300 | 12908 | 17211 | 23651
KP21 | Multiple strongly correlated | 500 | 20259 | 27012 | 37903
KP22 | Multiple strongly correlated | 800 | 32767 | 43689 | 61140
KP23 | Multiple strongly correlated | 1000 | 42442 | 56589 | 77940
KP24 | Multiple strongly correlated | 1200 | 50222 | 66963 | 92653
KP25 | Profit ceiling | 300 | 12666 | 16888 | 17181
KP26 | Profit ceiling | 500 | 19811 | 26415 | 26913
KP27 | Profit ceiling | 800 | 32011 | 42681 | 43497
KP28 | Profit ceiling | 1000 | 42253 | 56337 | 57381
KP29 | Profit ceiling | 1200 | 50208 | 66944 | 68157
KP30 | Circle | 300 | 12554 | 16739 | 26448
KP31 | Circle | 500 | 20812 | 27749 | 43880
KP32 | Circle | 800 | 32581 | 43441 | 69527
KP33 | Circle | 1000 | 42107 | 56143 | 88220
KP34 | Circle | 1200 | 49220 | 65627 | 104287

4.2. The Selection of the Values of M and N

The CSISFLA has some control parameters that affect its performance. In our experiments, we thoroughly investigate the number of memeplexes M and the number of individuals in each memeplex N. The three test instances below (KP9, KP10, and KP11) are used to study the effect of M and N on the performance of the proposed algorithm. Firstly, M is set to 2, and three levels of 10, 15, and 20 are considered for N (accordingly, the size of the population is 2 × 10, 2 × 15, and 2 × 20). Secondly, the number of individuals in each memeplex is fixed at N = 10, and the value of M is 2, 3, and 4, respectively. Results are summarized in Table 3.


Instance | N (M = 2) | Best | Worst | Mean | STD | M (N = 10) | Best | Worst | Mean | STD

KP9 | 10 | 8727 | 8704 | 8711 | 5.5 | 2 | 8727 | 8704 | 8711 | 5.5
KP9 | 15 | 8728 | 8701 | 8715 | 6.8 | 3 | 8725 | 8701 | 8713 | 7.0
KP9 | 20 | 8730 | 8702 | 8718 | 6.5 | 4 | 8726 | 8708 | 8717 | 6.3

KP10 | 10 | 13152 | 13124 | 13140 | 8.7 | 2 | 13152 | 13124 | 13140 | 8.7
KP10 | 15 | 13168 | 13120 | 13144 | 12.6 | 3 | 13167 | 13131 | 13146 | 8.2
KP10 | 20 | 13174 | 13126 | 13148 | 13.3 | 4 | 13168 | 13128 | 13148 | 9.4

KP11 | 10 | 21820 | 21737 | 21773 | 22.1 | 2 | 21820 | 21737 | 21773 | 22.1
KP11 | 15 | 21827 | 21756 | 21786 | 17.3 | 3 | 21840 | 21735 | 21783 | 24.6
KP11 | 20 | 21814 | 21757 | 21778 | 15.4 | 4 | 21848 | 21742 | 21788 | 23.5

As expected, with an increase in the number of individuals in the population, there are inevitably more opportunities to obtain the optimal solution, as the best entries in Table 3 indicate. In order to obtain reasonable solution quality at inexpensive computational cost, we use M = 4 and N = 10 in the remaining experiments.

4.3. The Selection of the Value of P_m

In this subsection, the effect of P_m on the performance of the CSISFLA is carefully investigated. We select two uncorrelated instances (KP1, KP2) and two weakly correlated instances (KP8, KP9) as the test instances for the parameter-setting experiment for P_m. For each instance, every test is run 30 times. We use M = 4 and N = 10, and the maximum run-time is set to 5 seconds. Table 4 gives the optimization results of the CSISFLA using different values of P_m.


Instance / Metric | P_m = 0 | 0.05 | 0.1 | 0.15 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1.0

KP1
 Best | 7474 | 7475 | 7475 | 7475 | 7475 | 7474 | 7475 | 7474 | 7474 | 7474 | 7473 | 7474 | 7459
 Worst | 7430 | 7469 | 7468 | 7471 | 7471 | 7463 | 7457 | 7451 | 7451 | 7446 | 7437 | 7427 | 7407
 Mean | 7461 | 7473 | 7474 | 7474 | 7473 | 7471 | 7470 | 7468 | 7468 | 7461 | 7455 | 7448 | 7436
 STD | 12.60 | 1.50 | 1.57 | 0.93 | 1.27 | 3.57 | 4.96 | 6.03 | 5.87 | 8.83 | 10.11 | 11.17 | 13.88
KP2
 Best | 9865 | 9865 | 9865 | 9865 | 9863 | 9864 | 9860 | 9859 | 9850 | 9847 | 9844 | 9843 | 9842
 Worst | 9821 | 9847 | 9845 | 9844 | 9839 | 9823 | 9830 | 9818 | 9804 | 9778 | 9775 | 9768 | 9757
 Mean | 9847 | 9858 | 9856 | 9857 | 9852 | 9848 | 9847 | 9841 | 9833 | 9830 | 9812 | 9810 | 9783
 STD | 11.96 | 5.75 | 6.12 | 5.32 | 6.84 | 10.60 | 7.99 | 11.89 | 12.35 | 16.86 | 21.92 | 21.12 | 20.24
KP8
 Best | 6676 | 6674 | 6673 | 6672 | 6671 | 6672 | 6672 | 6671 | 6678 | 6666 | 6666 | 6662 | 6654
 Worst | 6658 | 6662 | 6663 | 6665 | 6662 | 6663 | 6662 | 6657 | 6655 | 6650 | 6652 | 6645 | 6642
 Mean | 6668 | 6671 | 6669 | 6669 | 6668 | 6668 | 6668 | 6664 | 6664 | 6659 | 6658 | 6652 | 6647
 STD | 4.59 | 2.95 | 2.59 | 2.04 | 2.44 | 2.79 | 2.39 | 4.17 | 4.45 | 4.06 | 3.88 | 4.27 | 3.17
KP9
 Best | 8730 | 8734 | 8734 | 8728 | 8731 | 8720 | 8723 | 8716 | 8712 | 8710 | 8707 | 8701 | 8688
 Worst | 8707 | 8703 | 8705 | 8701 | 8700 | 8702 | 8695 | 8684 | 8682 | 8675 | 8677 | 8664 | 8655
 Mean | 8716 | 8718 | 8718 | 8715 | 8714 | 8711 | 8707 | 8702 | 8697 | 8693 | 8690 | 8682 | 8676
 STD | 6.23 | 8.79 | 6.66 | 6.85 | 7.45 | 4.59 | 7.20 | 7.97 | 7.50 | 9.75 | 7.27 | 10.06 | 7.76

From the results in Table 4, it is not difficult to observe that a small probability of mutation, in the range P_m = 0.05 to 0.15, is more suitable for all test instances. In addition, the quality of the best solution dwindles steadily as P_m increases from 0.5 to 1.0, and the worst results on all four evaluation criteria are obtained when P_m = 1.0. Similarly, the performance of the CSISFLA is also poor when P_m is 0. As expected, P_m = 0 means that the position update in a memeplex is completed entirely by the first frog leaping, which cannot effectively ensure the diversity of the entire population and makes the CSISFLA more likely to fall into a local optimum, while P_m = 1 means that new individuals are randomly generated without any restriction, which results in slow convergence. Generally speaking, using a small value of P_m strengthens the convergence ability and stability of the CSISFLA. The performance of the algorithm is best at P_m = 0.15, so we set P_m = 0.15 for the following experiments.

4.4. Experimental Setup and Parameters Setting

In this paper, in order to test the optimization ability of CSISFLA and further investigate the effectiveness of the algorithms on different types of instances, we adopt a set of 34 knapsack problems (KP1–KP34). We compare the performance of CSISFLA with (a) GA, (b) DE, and (c) classical CS. The parameter settings used in the experiments are shown in Table 5.


Algorithm | Parameter | Value

GA [41] | Population size | 100
 | Crossover probability | 0.6
 | Mutation probability | 0.001
DE [42, 43] | Population size | 100
 | Crossover probability | 0.9
 | Amplification factor | 0.3
CS [24] | Population size | 40
 | p_a | 0.25
CSISFLA | M | 4
 | N | 10
 | P_m | 0.15

In order to make a fair comparison, all computational experiments are conducted with Visual C++ 6.0. The test environment is a PC with an AMD Athlon(tm) II X2 250 processor at 3.01 GHz and 1.75 GB RAM, running Windows XP. The experiment on each instance is repeated 30 times independently. Further, the best solution, worst solution, mean, median, and standard deviation (STD) over all runs are given in the related tables. In addition, the maximum run-time was set to 5 seconds for instances with dimension less than 500 and to 8 seconds for the other instances.

4.5. The Experimental Results and Analysis

We conduct experiments on 7 uncorrelated instances, 7 weakly correlated instances, and 5 instances of each of the other four types. The numerical results are given in Tables 6–11, with the best values emphasized in boldface. In addition, comparisons of the best profits obtained by the CSISFLA with those obtained by GA, DE, and CS on six KP instances with 1200 items are shown in Figures 8–13. Specifically, the convergence curves of the four algorithms on the six KP instances with 1200 items are drawn in Figures 14–19. Careful observation leads to the following analysis.
(a) Table 6 shows that CSISFLA outperforms GA, DE, and CS on almost all the uncorrelated knapsack instances in terms of computational accuracy and robustness. In particular, the best solution found by CSISFLA is slightly inferior to that obtained by DE on KP3. On closer inspection, its STD is much smaller than that of the other algorithms except on KP7, which indicates the good stability and superior approximation ability of the CSISFLA.
(b) From Table 7, it can be seen that DE obtained the best, mean, and median results for the first four cases, and CS attained the best results for the last three cases. Although the optimal solutions obtained by the CSISFLA are worse than those of DE or CS, the CSISFLA obtained the best worst, median, and STD results on KP12–KP14, which still indicates its better stability. Above all, the well-known NFL theorem [52] states clearly that no heuristic algorithm is best suited for solving all optimization problems. Unfortunately, although weakly correlated knapsack problems are closer to real-world situations [49], the CSISFLA is not clearly superior to the other algorithms on such problems.
(c) In terms of search accuracy and convergence speed, Table 8 shows that CSISFLA outperforms GA, DE, and CS on all five strongly correlated knapsack problems. If anything, the STD values show that CSISFLA is only inferior to CS.
(d) Similar results are found in Tables 9, 10, and 11, from which it can be inferred that CSISFLA easily yields superior results compared with GA, DE, and CS. This series of experimental results convincingly confirms the superiority and effectiveness of CSISFLA.
(e) Figures 8–13 show a comparison of the best profits obtained by the four algorithms on the six types of 1200-item instances.
(f) Figures 14–19 illustrate the average convergence curves of all the algorithms over 30 runs, where we can observe that CS and CSISFLA usually show almost the same starting point. However, CSISFLA surpasses CS in accuracy and convergence speed. CS performs second best in hitting the optimum. DE shows premature convergence during the evolution and does not offer satisfactory performance as the problem size grows.


Instance | Algorithm | Best | Worst | Mean | Median | STD

KP1 | GA | 7316 | 6978 | 7200 | 7208 | 75.78
KP1 | DE | 7475 | 7433 | 7471 | 7473 | 7.68
KP1 | CS | 7472 | 7358 | 7403 | 7405 | 27.82
KP1 | CSISFLA | 7475 | 7467 | 7473 | 7474 | 1.56

KP2 | GA | 9673 | 9227 | 9503 | 9507 | 97.39
KP2 | DE | 9865 | 9751 | 9854 | 9865 | 22.52
KP2 | CS | 9848 | 9678 | 9737 | 9734 | 33.22
KP2 | CSISFLA | 9865 | 9837 | 9856 | 9858 | 7.23

KP3 | GA | 15022 | 14275 | 14756 | 14795 | 158.91
KP3 | DE | 15334 | 15088 | 15287 | 15301 | 54.45
KP3 | CS | 15224 | 15024 | 15092 | 15081 | 51.37
KP3 | CSISFLA | 15327 | 15248 | 15297 | 15302 | 18.48

KP4 | GA | 25882 | 25212 | 25498 | 25493 | 150.68
KP4 | DE | 26333 | 25751 | 26099 | 26096 | 135.88
KP4 | CS | 26208 | 25786 | 25936 | 25911 | 103.4
KP4 | CSISFLA | 26360 | 26193 | 26284 | 26277 | 38.54

KP5 | GA | 39528 | 38462 | 38976 | 39014 | 243.62
KP5 | DE | 39652 | 39215 | 39410 | 39399 | 113.28
KP5 | CS | 40223 | 39416 | 39565 | 39514 | 179.98
KP5 | CSISFLA | 40290 | 39885 | 40072 | 40081 | 91.97

KP6 | GA | 49072 | 47835 | 48483 | 48570 | 316.62
KP6 | DE | 49246 | 48835 | 48989 | 48979 | 101.11
KP6 | CS | 49767 | 49024 | 49164 | 49142 | 143.08
KP6 | CSISFLA | 49893 | 49567 | 49744 | 49737 | 97.52

KP7 | GA | 59793 | 58351 | 59135 | 59225 | 370.86
KP7 | DE | 59932 | 59488 | 59707 | 59727 | 110.39
KP7 | CS | 60629 | 59708 | 59939 | 59884 | 166.43
KP7 | CSISFLA | 60779 | 60264 | 60443 | 60420 | 130.56


Instance | Algorithm | Best | Worst | Mean | Median | STD

KP8 | GA | 6627 | 6531 | 6593 | 6593 | 20.63
KP8 | DE | 6676 | 6657 | 6674 | 6676 | 4.80
KP8 | CS | 6660 | 6637 | 6648 | 6646 | 6.79
KP8 | CSISFLA | 6673 | 6663 | 6668 | 6668 | 2.23

KP9 | GA | 8658 | 8501 | 8588 | 8590 | 33.38
KP9 | DE | 8743 | 8743 | 8743 | 8743 | 0.00
KP9 | CS | 8717 | 8644 | 8676 | 8671 | 18.23
KP9 | CSISFLA | 8728 | 8701 | 8714 | 8714 | 6.87

KP10 | GA | 13062 | 12939 | 12997 | 12991 | 30.64
KP10 | DE | 13202 | 13158 | 13186 | 13186 | 9.76
KP10 | CS | 13157 | 13069 | 13094 | 13087 | 21.91
KP10 | CSISFLA | 13168 | 13120 | 13145 | 13145 | 11.90

KP11 | GA | 21671 | 21470 | 21571 | 21576 | 48.85
KP11 | DE | 21951 | 21745 | 21858 | 21859 | 37.61
KP11 | CS | 21935 | 21670 | 21746 | 21722 | 76.53
KP11 | CSISFLA | 21827 | 21756 | 21788 | 21787 | 16.66

KP12 | GA | 34587 | 34314 | 34488 | 34499 | 63.23
KP12 | DE | 34814 | 34578 | 34721 | 34718 | 64.50
KP12 | CS | 34987 | 34621 | 34697 | 34654 | 100.38
KP12 | CSISFLA | 34818 | 34721 | 34760 | 34758 | 22.87

KP13 | GA | 43241 | 42938 | 43082 | 43073 | 75.51
KP13 | DE | 43327 | 43162 | 43217 | 43211 | 43.64
KP13 | CS | 43737 | 43216 | 43340 | 43264 | 166.53
KP13 | CSISFLA | 43409 | 43312 | 43367 | 43368 | 27.23

KP14 | GA | 51472 | 50414 | 51058 | 51135 | 265.56
KP14 | DE | 51947 | 51444 | 51600 | 51569 | 108.83
KP14 | CS | 53333 | 51601 | 51831 | 51788 | 299.35
KP14 | CSISFLA | 52403 | 52077 | 52267 | 52264 | 86.19


Instance | Algorithm | Best | Worst | Mean | Median | STD

KP15 | GA | 14785 | 14692 | 14754 | 14762 | 25.93
KP15 | DE | 14797 | 14781 | 14789 | 14787 | 4.90
KP15 | CS | 14804 | 14791 | 14797 | 14797 | 2.43
KP15 | CSISFLA | 14807 | 14795 | 14798 | 14797 | 3.46

KP16 | GA | 25486 | 25402 | 25458 | 25465 | 21.61
KP16 | DE | 25502 | 25481 | 25492 | 25493 | 4.21
KP16 | CS | 25514 | 25502 | 25506 | 25505 | 3.49
KP16 | CSISFLA | 25515 | 25505 | 25510 | 25512 | 3.94

KP17 | GA | 40087 | 39975 | 40039 | 40041 | 28.33
KP17 | DE | 40111 | 40068 | 40089 | 40088 | 8.66
KP17 | CS | 40107 | 40096 | 40103 | 40105 | 3.88
KP17 | CSISFLA | 40117 | 40098 | 40111 | 40113 | 5.12

KP18 | GA | 49332 | 49225 | 49300 | 49309 | 27.26
KP18 | DE | 49363 | 49333 | 49346 | 49345 | 7.50
KP18 | CS | 49380 | 49350 | 49364 | 49363 | 7.04
KP18 | CSISFLA | 49393 | 49362 | 49373 | 49373 | 7.90

KP19 | GA | 60520 | 60418 | 60482 | 60489 | 26.62
KP19 | DE | 60540 | 60501 | 60519 | 60519 | 8.55
KP19 | CS | 60558 | 60530 | 60542 | 60540 | 6.77
KP19 | CSISFLA | 60562 | 60539 | 60549 | 60550 | 5.70


Instance  Algorithm  Best    Worst   Mean    Median  STD
KP20      GA         18346   18172   18284   18288   38.39
KP20      DE         18387   18335   18354   18348   15.25
KP20      CS         18386   18355   18368   18368   4.73
KP20      CSISFLA    18388   18368   18381   18386   8.03
KP21      GA         29525   29387   29461   29462   31.97
KP21      DE         29548   29488   29519   29520   14.10
KP21      CS         29589   29527   29555   29549   13.94
KP21      CSISFLA    29609   29562   29581   29585   12.38
KP22      GA         47645   47494   47568   47575   39.72
KP22      DE         47704   47620   47659   47657   20.68
KP22      CS         47727   47673   47696   47695   15.09
KP22      CSISFLA    47757   47697   47732   47736   13.02
KP23      GA         60529   60312   60455   60463   47.39
KP23      DE         60572   60508   60534   60530   13.98
KP23      CS         60607   60540   60576   60574   16.96
KP23      CSISFLA    60650   60579   60615   60612   15.75
KP24      GA         72063   71725   71914   71917   64.42
KP24      DE         72072   71973   72018   72018   19.38
KP24      CS         72094   72031   72058   72057   15.93
KP24      CSISFLA    72151   72070   72112   72111   21.20


Instance  Algorithm  Best    Worst   Mean    Median  STD
KP25      GA         12957   12948   12955   12957   2.53
KP25      DE         12957   12951   12953   12954   1.83
KP25      CS         12957   12954   12957   12957   0.76
KP25      CSISFLA    12957   12957   12957   12957   0.00
KP26      GA         20295   20268   20285   20286   7.37
KP26      DE         20301   20292   20294   20294   2.17
KP26      CS         20304   20295   20299   20298   1.86
KP26      CSISFLA    20307   20298   20304   20304   2.28
KP27      GA         32796   32769   32785   32787   6.99
KP27      DE         32802   32793   32797   32796   2.63
KP27      CS         32811   32799   32803   32802   3.12
KP27      CSISFLA    32820   32808   32812   32811   3.34
KP28      GA         43248   43215   43234   43236   8.76
KP28      DE         43257   43245   43249   43248   3.57
KP28      CS         43269   43251   43257   43254   4.41
KP28      CSISFLA    43272   43260   43266   43266   2.88
KP29      GA         51378   51348   51364   51366   7.25
KP29      DE         51384   51372   51378   51378   3.04
KP29      CS         51399   51378   51385   51384   4.32
KP29      CSISFLA    51399   51390   51396   51396   3.10


Instance  Algorithm  Best    Worst   Mean    Median  STD
—         GA         21194   20899   21086   21096   71.44
—         DE         21333   21192   21264   21277   32.46
—         CS         21333   21194   21261   21261   18.57
—         CSISFLA    21333   21263   21300   21295   34.04
—         GA         35262   34982   35112   35124   82.25
—         DE         35343   35184   35247   35267   38.08
—         CS         35345   35271   35297   35277   31.29
—         CSISFLA    35414   35342   35354   35345   23.23
—         GA         55976   55451   55746   55771   116.83
—         DE         56063   55914   55964   55954   44.95
—         CS         56280   55988   56057   56061   55.01
—         CSISFLA    56273   56130   56185   56201   38.65
—         GA         70739   70247   70487   70456   113.53
—         DE         70806   70641   70696   70684   38.21
—         CS         70915   70729   70789   70797   42.50
—         CSISFLA    71008   70867   70924   70939   41.17
—         GA         83969   83339   83723   83757   142.75
—         DE         84040   83820   83912   83899   56.64
—         CS         84645   83954   84055   84033   121.94
—         CSISFLA    84244   84099   84175   84181   38.36
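The statistics reported in the tables above (best, worst, mean, median, and standard deviation of the profits over independent runs) can be computed as in the following minimal sketch. The run values in the example are illustrative only, not taken from the paper, and whether the paper uses the population or the sample standard deviation is an assumption on our part:

```python
import statistics

def summarize(run_profits):
    """Return (best, worst, mean, median, std) for a list of run results.

    The 0-1 KP is a maximization problem, so "best" is the maximum profit.
    We assume the population standard deviation; statistics.stdev (sample
    STD) is an equally common choice.
    """
    best = max(run_profits)
    worst = min(run_profits)
    mean = statistics.mean(run_profits)
    median = statistics.median(run_profits)
    std = statistics.pstdev(run_profits)
    return best, worst, mean, median, std

# Illustrative profits from five hypothetical runs of one algorithm:
print(summarize([12957, 12957, 12954, 12957, 12957]))
```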

Based on the preceding analyses, we can conclude that CSISFLA is clearly superior to GA, DE, and CS in solving the six types of KP instances. In general, CS is only slightly inferior to CSISFLA and ranks second, while DE and GA rank third and fourth, respectively.

5. Conclusions

In this paper, we proposed CSISFLA, a novel hybrid of the cuckoo search algorithm and an improved shuffled frog-leaping algorithm, for solving 0-1 knapsack problems. Compared with the basic CS algorithm, CSISFLA offers several improvements. First, we designed an improved frog-leap operator that not only retains the influence of the global best on the frog leaping but also strengthens information exchange between frog individuals; in addition, new individuals are randomly generated with a small mutation probability. Second, we presented a novel CS model that combines the rapid exploration of the global search space by Lévy flights with the fine exploitation of local regions by the frog-leap operator. Third, CSISFLA employs a hybrid encoding scheme; that is to say, it searches in continuous real space, and the results are then mapped to new solutions in the binary space. Fourth, CSISFLA uses an effective greedy transform method (GTM) to ensure the feasibility of solutions. The computational results show that CSISFLA outperforms GA, DE, and CS in solution quality. Further, compared with ICS [26], CSISFLA combines several algorithms rather than improving a single one, and the KP instances considered here are more complex. Future work includes designing more effective CS methods for complex 0-1 KPs and applying the hybrid CS to other combinatorial optimization problems, such as the multidimensional knapsack problem (MKP) and the traveling salesman problem (TSP).
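The hybrid encoding and the greedy repair of infeasible solutions described above can be sketched as follows. This is a minimal illustration under our own assumptions (sigmoid binarization with a 0.5 threshold, and a drop-then-fill repair ordered by profit/weight density), not the paper's exact operators:

```python
import math

def binarize(x):
    """Map a real-valued position vector to a 0-1 solution.

    Assumption: sigmoid transfer function with a fixed 0.5 threshold.
    """
    return [1 if 1.0 / (1.0 + math.exp(-xi)) > 0.5 else 0 for xi in x]

def greedy_repair(sol, profits, weights, capacity):
    """Greedy transform sketch: while the knapsack is overloaded, drop
    selected items in increasing profit/weight density; then greedily
    add unselected items in decreasing density while they still fit.
    """
    order = sorted(range(len(sol)), key=lambda i: profits[i] / weights[i])
    load = sum(w for s, w in zip(sol, weights) if s)
    for i in order:                      # least dense items first
        if load <= capacity:
            break
        if sol[i]:
            sol[i] = 0
            load -= weights[i]
    for i in reversed(order):            # densest items first
        if not sol[i] and load + weights[i] <= capacity:
            sol[i] = 1
            load += weights[i]
    return sol

# Tiny hypothetical instance: 3 items, capacity 8.
print(greedy_repair(binarize([1.2, 0.7, 3.1]), [10, 5, 3], [6, 4, 3], 8))
```

The drop phase guarantees feasibility; the fill phase then exploits any remaining capacity, which is how an infeasible candidate can end up better than it started.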

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported by Research Fund for the Doctoral Program of Jiangsu Normal University (no. 13XLR041) and National Natural Science Foundation of China (no. 61272297 and no. 61402207).

References

  1. X.-S. Yang, S. Koziel, and L. Leifsson, “Computational optimization, modelling and simulation: Recent trends and challenges,” in Proceedings of the 13th Annual International Conference on Computational Science (ICCS '13), vol. 18, pp. 855–860, June 2013. View at: Publisher Site | Google Scholar
  2. R. Storn and K. Price, “Differential evolution-a simple and efficient heuristic for global optimization over continuous spaces,” Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997. View at: Publisher Site | Google Scholar | MathSciNet
  3. X. Li and M. Yin, “An opposition-based differential evolution algorithm for permutation flow shop scheduling based on diversity measure,” Advances in Engineering Software, vol. 55, pp. 10–31, 2013. View at: Publisher Site | Google Scholar
  4. Z. W. Geem, J. H. Kim, and G. V. Loganathan, “A new heuristic optimization algorithm: harmony search,” Simulation, vol. 76, no. 2, pp. 60–68, 2001. View at: Publisher Site | Google Scholar
  5. L. Guo, G.-G. Wang, H. Wang, and D. Wang, “An effective hybrid firefly algorithm with harmony search for global numerical optimization,” The Scientific World Journal, vol. 2013, Article ID 125625, 9 pages, 2013. View at: Publisher Site | Google Scholar
  6. A. H. Gandomi and A. H. Alavi, “Krill herd: a new bio-inspired optimization algorithm,” Communications in Nonlinear Science and Numerical Simulation, vol. 17, no. 12, pp. 4831–4845, 2012. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet
  7. G.-G. Wang, A. H. Gandomi, and A. H. Alavi, “An effective krill herd algorithm with migration operator in biogeography-based optimization,” Applied Mathematical Modelling, vol. 38, no. 9-10, pp. 2454–2462, 2014. View at: Publisher Site | Google Scholar
  8. G.-G. Wang, A. H. Gandomi, and A. H. Alavi, “Stud krill herd algorithm,” Neurocomputing, vol. 128, pp. 363–370, 2014. View at: Publisher Site | Google Scholar
  9. G. Wang, L. Guo, H. Wang, H. Duan, L. Liu, and J. Li, “Incorporating mutation scheme into krill herd algorithm for global numerical optimization,” Neural Computing and Applications, vol. 24, no. 3-4, pp. 853–871, 2014. View at: Publisher Site | Google Scholar
  10. G.-G. Wang, L. Guo, A. H. Gandomi, G.-S. Hao, and H. Wang, “Chaotic krill herd algorithm,” Information Sciences, vol. 274, pp. 17–34, 2014. View at: Publisher Site | Google Scholar | MathSciNet
  11. G. G. Wang, A. H. Gandomi, A. H. Alavi, and G. S. Hao, “Hybrid krill herd algorithm with differential evolution for global numerical optimization,” Neural Computing and Applications, vol. 25, no. 2, pp. 297–308, 2014. View at: Publisher Site | Google Scholar
  12. L. Guo, G.-G. Wang, A. H. Gandomi, A. H. Alavi, and H. Duan, “A new improved krill herd algorithm for global numerical optimization,” Neurocomputing, vol. 138, pp. 392–402, 2014. View at: Publisher Site | Google Scholar
  13. G.-G. Wang, A. H. Gandomi, and A. H. Alavi, “A chaotic particle-swarm krill herd algorithm for global numerical optimization,” Kybernetes, vol. 42, no. 6, pp. 962–978, 2013. View at: Publisher Site | Google Scholar | MathSciNet
  14. X. Li, J. Zhang, and M. Yin, “Animal migration optimization: an optimization algorithm inspired by animal migration behavior,” Neural Computing and Applications, vol. 24, no. 7-8, pp. 1867–1877, 2014. View at: Publisher Site | Google Scholar
  15. S. Mirjalili, S. M. Mirjalili, and A. Lewis, “Grey wolf optimizer,” Advances in Engineering Software, vol. 69, pp. 46–61, 2014. View at: Publisher Site | Google Scholar
  16. D. Simon, “Biogeography-based optimization,” IEEE Transactions on Evolutionary Computation, vol. 12, no. 6, pp. 702–713, 2008. View at: Publisher Site | Google Scholar
  17. S. Mirjalili, S. M. Mirjalili, and A. Lewis, “Let a biogeography-based optimizer train your multi-layer perceptron,” Information Sciences, vol. 269, pp. 188–209, 2014. View at: Publisher Site | Google Scholar | MathSciNet
  18. S. Mirjalili, S. Z. Mohd Hashim, and H. Moradian Sardroudi, “Training feedforward neural networks using hybrid particle swarm optimization and gravitational search algorithm,” Applied Mathematics and Computation, vol. 218, no. 22, pp. 11125–11137, 2012. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet
  19. X.-S. Yang, “A new metaheuristic bat-inspired algorithm,” in Nature Inspired Cooperative Strategies for Optimization (NICSO 2010), pp. 65–74, Springer, Berlin, Germany, 2010. View at: Google Scholar
  20. S. Mirjalili, S. M. Mirjalili, and X.-S. Yang, “Binary bat algorithm,” Neural Computing and Applications, vol. 25, no. 3-4, pp. 663–681, 2013. View at: Publisher Site | Google Scholar
  21. R. Kumar and P. K. Singh, “Assessing solution quality of biobjective 0-1 knapsack problem using evolutionary and heuristic algorithms,” Applied Soft Computing Journal, vol. 10, no. 3, pp. 711–718, 2010. View at: Publisher Site | Google Scholar
  22. D. Zou, L. Gao, S. Li, and J. Wu, “Solving 0-1 knapsack problem by a novel global harmony search algorithm,” Applied Soft Computing Journal, vol. 11, no. 2, pp. 1556–1564, 2011. View at: Publisher Site | Google Scholar
  23. T. K. Truong, K. Li, and Y. Xu, “Chemical reaction optimization with greedy strategy for the 0-1 knapsack problem,” Applied Soft Computing Journal, vol. 13, no. 4, pp. 1774–1780, 2013. View at: Publisher Site | Google Scholar
  24. A. Gherboudj, A. Layeb, and S. Chikhi, “Solving 0-1 knapsack problems by a discrete binary version of cuckoo search algorithm,” International Journal of Bio-Inspired Computation, vol. 4, no. 4, pp. 229–236, 2012. View at: Publisher Site | Google Scholar
  25. A. Layeb, “A novel quantum inspired cuckoo search for knapsack problems,” International Journal of Bio-Inspired Computation, vol. 3, no. 5, pp. 297–305, 2011. View at: Publisher Site | Google Scholar
  26. Y. Feng, K. Jia, and Y. He, “An improved hybrid encoding cuckoo search algorithm for 0-1 knapsack problems,” Computational Intelligence and Neuroscience, vol. 2014, Article ID 970456, 9 pages, 2014. View at: Publisher Site | Google Scholar
  27. K. K. Bhattacharjee and S. P. Sarmah, “Shuffled frog leaping algorithm and its application to 0/1 knapsack problem,” Applied Soft Computing Journal, vol. 19, pp. 252–263, 2014. View at: Publisher Site | Google Scholar
  28. R. S. Parpinelli and H. S. Lopes, “New inspirations in swarm intelligence: a survey,” International Journal of Bio-Inspired Computation, vol. 3, no. 1, pp. 1–16, 2011. View at: Publisher Site | Google Scholar
  29. M. M. Eusuff and K. E. Lansey, “Optimization of water distribution network design using the shuffled frog leaping algorithm,” Journal of Water Resources Planning and Management, vol. 129, no. 3, pp. 210–225, 2003. View at: Publisher Site | Google Scholar
  30. X. Li, J. Luo, M.-R. Chen, and N. Wang, “An improved shuffled frog-leaping algorithm with extremal optimisation for continuous optimisation,” Information Sciences, vol. 192, pp. 143–151, 2012. View at: Publisher Site | Google Scholar
  31. C. Fang and L. Wang, “An effective shuffled frog-leaping algorithm for resource-constrained project scheduling problem,” Computers and Operations Research, vol. 39, no. 5, pp. 890–901, 2012. View at: Publisher Site | Google Scholar | Zentralblatt MATH
  32. X.-S. Yang and S. Deb, “Cuckoo search via Lévy flights,” in Proceedings of the World Congress on Nature and Biologically Inspired Computing (NABIC '09), pp. 210–214, December 2009. View at: Publisher Site | Google Scholar
  33. A. H. Gandomi, X.-S. Yang, and A. H. Alavi, “Cuckoo search algorithm: a metaheuristic approach to solve structural optimization problems,” Engineering with Computers, vol. 29, no. 1, pp. 17–35, 2013. View at: Publisher Site | Google Scholar
  34. S. Walton, O. Hassan, K. Morgan, and M. R. Brown, “Modified cuckoo search: a new gradient free optimisation algorithm,” Chaos, Solitons and Fractals, vol. 44, no. 9, pp. 710–718, 2011. View at: Publisher Site | Google Scholar
  35. K. Chandrasekaran and S. P. Simon, “Multi-objective scheduling problem: hybrid approach using fuzzy assisted cuckoo search algorithm,” Swarm and Evolutionary Computation, vol. 5, pp. 1–16, 2012. View at: Publisher Site | Google Scholar
  36. G. G. Wang, L. H. Guo, H. Duan, H. Wang, L. Liu, and M. Shao, “A hybrid metaheuristic DE/CS algorithm for UCAV three-dimension path planning,” The Scientific World Journal, vol. 2012, Article ID 583973, 11 pages, 2012. View at: Publisher Site | Google Scholar
  37. A. K. Bhandari, V. K. Singh, A. Kumar, and G. K. Singh, “Cuckoo search algorithm and wind driven optimization based study of satellite image segmentation for multilevel thresholding using Kapur's entropy,” Expert Systems with Applications, vol. 41, no. 7, pp. 3538–3560, 2014. View at: Publisher Site | Google Scholar
  38. M. Khajeh and E. Jahanbin, “Application of cuckoo optimization algorithm-artificial neural network method of zinc oxide nanoparticles-chitosan for extraction of uranium from water samples,” Chemometrics and Intelligent Laboratory Systems, vol. 135, pp. 70–75, 2014. View at: Publisher Site | Google Scholar
  39. G. Kanagaraj, S. G. Ponnambalam, and N. Jawahar, “A hybrid cuckoo search and genetic algorithm for reliability-redundancy allocation problems,” Computers & Industrial Engineering, vol. 66, no. 4, pp. 1115–1124, 2013. View at: Publisher Site | Google Scholar
  40. X. S. Yang and S. Deb, “Cuckoo search: recent advances and applications,” Neural Computing and Applications, vol. 24, no. 1, pp. 169–174, 2014. View at: Publisher Site | Google Scholar
  41. Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, Springer, Berlin, Germany, 1996.
  42. S. Das and P. N. Suganthan, “Differential evolution: a survey of the state-of-the-art,” IEEE Transactions on Evolutionary Computation, vol. 15, no. 1, pp. 4–31, 2011. View at: Publisher Site | Google Scholar
  43. R. Mallipeddi, P. N. Suganthan, Q. K. Pan, and M. F. Tasgetiren, “Differential evolution algorithm with ensemble of parameters and mutation strategies,” Applied Soft Computing Journal, vol. 11, no. 2, pp. 1679–1696, 2011. View at: Publisher Site | Google Scholar
  44. G. B. Dantzig, “Discrete-variable extremum problems,” Operations Research, vol. 5, pp. 266–277, 1957. View at: Publisher Site | Google Scholar | MathSciNet
  45. X. S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, Frome, UK, 2010.
  46. M. Eusuff, K. Lansey, and F. Pasha, “Shuffled frog-leaping algorithm: a memetic meta-heuristic for discrete optimization,” Engineering Optimization, vol. 38, no. 2, pp. 129–154, 2006. View at: Publisher Site | Google Scholar | MathSciNet
  47. E. Elbeltagi, T. Hegazy, and D. Grierson, “Comparison among five evolutionary-based optimization algorithms,” Advanced Engineering Informatics, vol. 19, no. 1, pp. 43–53, 2005. View at: Publisher Site | Google Scholar
  48. Y. C. He, K. Q. Liu, and C. J. Zhang, “Greedy genetic algorithm for solving knapsack problems and its applications,” Computer Engineering and Design, vol. 28, no. 11, pp. 2655–2657, 2007. View at: Google Scholar
  49. S. Martello and P. Toth, Knapsack Problems, Wiley-Interscience Series in Discrete Mathematics and Optimization, Wiley, New York, NY, USA, 1990. View at: MathSciNet
  50. D. Pisinger, Algorithms for Knapsack Problems, Ph.D. thesis, Department of Computer Science, University of Copenhagen, Copenhagen, Denmark, 1995.
  51. D. Pisinger, “Where are the hard knapsack problems?” Computers & Operations Research, vol. 32, no. 9, pp. 2271–2284, 2005. View at: Publisher Site | Google Scholar
  52. D. H. Wolpert and W. G. Macready, “No free lunch theorems for optimization,” IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 67–82, 1997. View at: Publisher Site | Google Scholar

Copyright © 2014 Yanhong Feng et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

