Mathematical Problems in Engineering


Research Article | Open Access


Yangjun Gao, Fengming Zhang, Yu Zhao, Chao Li, "Quantum-Inspired Wolf Pack Algorithm to Solve the 0-1 Knapsack Problem", Mathematical Problems in Engineering, vol. 2018, Article ID 5327056, 10 pages, 2018.

Quantum-Inspired Wolf Pack Algorithm to Solve the 0-1 Knapsack Problem

Academic Editor: Emilio Insfran Pelozo
Received: 03 Nov 2017
Accepted: 08 May 2018
Published: 20 Jun 2018


This paper proposes a quantum-inspired wolf pack algorithm (QWPA) based on quantum encoding to enhance the performance of the wolf pack algorithm (WPA) on 0-1 knapsack problems. There are two important operations in QWPA: quantum rotation and quantum collapse. The first operation enables the population to move toward the global optima, and the second helps individuals avoid becoming trapped in local optima. Ten classical and four high-dimensional knapsack problems are employed to test the proposed algorithm, and the results are compared with those of other typical algorithms. The statistical results demonstrate the effectiveness and global search capability of QWPA for knapsack problems, especially for high-dimensional cases.

1. Introduction

The 0-1 knapsack problem (KP01) is a typical combinatorial optimization problem. It has various practical applications such as task scheduling, resource allocation, investment decisions, and others [1, 2]. Given a set of m items, each with a value p_i and a volume v_i, and a knapsack with a limited capacity C, the question is to select a subset of the items to pack into the knapsack so that the packed items have the maximal total value over all feasible solutions. The model of the KP01 can be formulated as follows:

\max f(X) = \sum_{i=1}^{m} p_i x_i, \quad \text{s.t.} \quad \sum_{i=1}^{m} v_i x_i \le C, \quad x_i \in \{0, 1\}.

The variable x_i takes the value 0 or 1, representing rejection or selection of the ith item.

There are mainly two classes of approaches to solving the KP01: the conventional one is an exact solution based on mathematical programming and operations research, and the other is a stochastic solution based on heuristic algorithms [3]. It is possible to solve a small-scale KP01 by the branch-and-bound method or dynamic programming. However, the high-dimensional case is NP-hard, and it is unrealistic to obtain optimal solutions using an exact method. As a result, heuristic algorithms have attracted extensive attention from scholars in this field.
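For small instances, the exact dynamic-programming approach mentioned above is straightforward. The sketch below illustrates it on a made-up three-item toy instance (the values, volumes, and capacity are illustrative, not from the paper's test sets):

```python
def knapsack_dp(values, volumes, capacity):
    """Exact DP for KP01: return the maximal total value packable within capacity."""
    best = [0] * (capacity + 1)  # best[c] = max value achievable with volume budget c
    for v, w in zip(values, volumes):
        # Traverse capacities downwards so each item is selected at most once.
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

print(knapsack_dp([60, 100, 120], [10, 20, 30], 50))  # -> 220
```

The table has C + 1 entries and each item is scanned once, so the running time is O(mC), which is why this route is only practical for small capacities.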

In recent years, classical heuristic algorithms such as the genetic algorithm, particle swarm algorithm, ant colony algorithm, and their modifications have been applied to KP01 problems with excellent results [4-6]. Some emerging novel algorithms have also been widely used, such as the artificial bee colony algorithm [7] and the cuckoo algorithm [8]. The wolf pack algorithm [9] mimics the hunting behavior of wolves to obtain optimal solutions; it has been shown to be globally convergent and robust, and the binary wolf pack algorithm has shown effective performance on the KP01 problem [10]. However, the wolf pack algorithm was proposed in continuous space, and the binary encoding required to map continuous space to discrete space may lead to confusion [11]. The conventional methods used for updating operations in binary encoding are direct rounding or bit updating. These operations reflect the ideas of the algorithms only to a limited extent. Moreover, it is difficult to determine whether upward or downward rounding should be used in the updating operation and whether the number of bits is able to represent the position of individuals.

To avoid the above difficulties, we propose a quantum wolf pack algorithm (QWPA) based on a new updating mechanism. The probability position updating operation is employed to make the population move toward the global optima. At the same time, the collapse operation maps a probability position to a certain position to preserve the diversity of the population. The experimental results show the competitive performance of the proposed algorithm.

The remainder of the paper is organized as follows: In Section 2, the basic wolf pack algorithm and the binary wolf pack algorithm are introduced. Section 3 proposes the quantum wolf pack algorithm based on some related concepts. The steps of the proposed algorithm are listed in this section. Section 4 presents the experimental testing of the algorithm, and Section 5 concludes the paper.

2.1. Wolf Pack Algorithm (WPA)

There are three roles in the wolf pack algorithm [12]: the leader wolf, the scouting wolf, and the fierce wolf. The leader wolf represents the global optimum in the population and guides the other wolves. The scouting wolf improves the randomness of the population by searching around noninferior solutions. The fierce wolf, which constitutes the main part of the population, moves toward the global optimum. The leader wolf is replaced by another wolf if a scouting or fierce wolf finds a better position. In other words, the global optimal solution is replaced whenever a better solution appears in the population.

2.2. Binary Wolf Pack Algorithm

The binary wolf pack algorithm (BWPA) can be used to solve knapsack problems. The position of the jth wolf can be described as X_j = (x_{j1}, x_{j2}, ..., x_{jm}), where m is the number of dimensions, N is the population size, j = 1, 2, ..., N, and x_{ji} ∈ {0, 1}.

In BWPA, the number of updated dimensions r is employed to represent the distance between every two wolves. An updated set M', an r-element subset of the feasible set M, determines which bits to change; the position of each wolf is updated by a reversing operation on those bits.

A simple example is as follows: given a feasible set M and r, an updated set M' is any r-element subset of M; reversing the bits of the current position at the indices in M' yields the new position.
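The reversing operation described above can be sketched as follows. This is a minimal illustration of the bit-flip update, assuming M' is chosen uniformly at random (the selection rule for M' in BWPA may differ):

```python
import random

def reverse_update(position, r):
    """Flip r randomly chosen bits of a binary position (BWPA-style reversing move)."""
    new_pos = position[:]
    for i in random.sample(range(len(position)), r):  # r-element subset M'
        new_pos[i] ^= 1  # reverse 0 <-> 1
    return new_pos

x = [1, 0, 1, 1, 0]
y = reverse_update(x, 2)
print(sum(a != b for a, b in zip(x, y)))  # exactly r = 2 bits differ -> 2
```

Because `random.sample` picks distinct indices, the new position always differs from the old one in exactly r dimensions, which is how r serves as a distance in BWPA.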

In fact, BWPA essentially adopts a bit updating mechanism, which may lead to confusion [11] in the computing process. To address this, the QWPA is proposed in this paper.

3. Quantum Wolf Pack Algorithm (QWPA)

Quantum information and quantum computation were extensively developed in the 1990s. The most popular quantum algorithms are Shor's large-number factorization algorithm [13, 14] and the Grover search algorithm [15]. It is worth mentioning that the Grover algorithm can solve a search problem of scale N with time complexity O(√N). In this section, we propose the QWPA using several quantum concepts related to the Grover algorithm.

3.1. Related Definitions

As described in the previous section, the position of each wolf is described as a vector. Unlike the certain value (0 or 1) of each dimension in BWPA, the positions of wolves are uncertain in quantum theory. Taking the knapsack problem as an example, the value of any dimension is not 0 or 1 but a probability superposition of 0 and 1. The definitions used in QWPA are shown in Figure 1.

As seen in Figure 1, the updating of the quantum encoding is similar to a hidden Markov process: the implied states are changed by a state transition matrix (the quantum-rotating gate), and the observation states are then obtained through a confusion matrix (the collapse operation). In the following, the related definitions involved in Figure 1 are elaborated.

Definition 1 (probability position). In a finite set, the position can be defined by a linear combination of states:

|\psi\rangle = \sum_{i} a_i |u_i\rangle,

where U represents the finite set and u_i is an element of U. |a_i|^2 is the probability of obtaining u_i, and \sum_i |a_i|^2 = 1.

In KP01, each item has only two states: selected or not (1 or 0). This property makes it easy to describe the solutions by a 2-dimension quantum superposition of the states 1 and 0. What we need to do is to control the probability of obtaining 1 or 0 to update the superposition. The ranges of the variables in KP fit well with the 2-dimension case for updating: on the one hand, a simple linear transformation can be used to increase (or decrease) the probability of one state and decrease (or increase) the other; on the other hand, the sum of the two probabilities is always kept at 1.

For the jth wolf in the knapsack problem, the ith dimension can be formulated as

|x_{ji}\rangle = \alpha_{ji}|0\rangle + \beta_{ji}|1\rangle, \quad \alpha_{ji}^2 + \beta_{ji}^2 = 1. \quad (4)

Definition 2 (position probability). The probability of obtaining a certain position is defined as the position probability. For example, \alpha_{ji}^2 is the position probability of obtaining 0 in formulation (4).

Definition 3 (certain position). A certain position refers to a conventional position in Euclidean space. In (4), each basis state (0 or 1) is a certain position.

Definition 4 (collapse). The mapping from probability positions to certain positions is defined as the collapse operation. A probability position may map to multiple certain positions.
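Definitions 1-4 can be sketched in code. The snippet below stores each dimension as an amplitude pair (α, β) and samples a certain position from a probability position; the unbiased 1/√2 initialization is an illustrative assumption, not a prescription from the paper:

```python
import math
import random

def init_probability_position(m):
    """Start every dimension at the unbiased superposition (1/sqrt(2), 1/sqrt(2)),
    so each bit collapses to 0 or 1 with probability 1/2."""
    a = 1.0 / math.sqrt(2.0)
    return [(a, a)] * m

def collapse(prob_position):
    """Map a probability position to one certain position (Definition 4):
    each bit becomes 1 with probability beta^2, else 0."""
    return [1 if random.random() < beta * beta else 0
            for _alpha, beta in prob_position]

q = init_probability_position(4)
x = collapse(q)
print(all(bit in (0, 1) for bit in x))  # -> True
```

Repeated collapses of the same probability position can yield different certain positions, which is exactly the one-to-many mapping Definition 4 describes.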

Definition 5 (quantum-rotating gate). The quantum-rotating gate is the process of updating the probability position. An orthogonal transformation is usually used to change the probability position. One of the most popular transformations is

U(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix},

where θ is the quantum rotation angle. The probability position is updated as follows:

\begin{bmatrix} \alpha'_{ji} \\ \beta'_{ji} \end{bmatrix} = U(\theta) \begin{bmatrix} \alpha_{ji} \\ \beta_{ji} \end{bmatrix} = \begin{bmatrix} \alpha_{ji}\cos\theta - \beta_{ji}\sin\theta \\ \alpha_{ji}\sin\theta + \beta_{ji}\cos\theta \end{bmatrix}. \quad (6)

Formulation (6) shows the updating process of each dimension.
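The single-dimension update of formulation (6) is a plain 2x2 rotation, which can be sketched as:

```python
import math

def rotate(alpha, beta, theta):
    """Apply the 2x2 quantum-rotating gate to one dimension's amplitudes:
        alpha' = alpha*cos(theta) - beta*sin(theta)
        beta'  = alpha*sin(theta) + beta*cos(theta)
    The transform is orthogonal, so alpha'^2 + beta'^2 stays equal to 1."""
    c, s = math.cos(theta), math.sin(theta)
    return c * alpha - s * beta, s * alpha + c * beta

a, b = rotate(1.0, 0.0, math.pi / 4)  # rotate the pure |0> state towards |1>
print(round(a * a + b * b, 10))  # normalization preserved -> 1.0
```

Because the gate is orthogonal, no renormalization step is ever needed after an update, which is the practical benefit of using a rotation rather than an arbitrary linear map.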

Up to 2^m certain-position combinations may appear if any \alpha_{ji} is neither 0 nor 1. Supposing the ith dimension is given by (4), the collapse yields the certain positions 0 and 1 with probabilities \alpha_{ji}^2 and \beta_{ji}^2, respectively.

We can update the probability of certain positions by adjusting m quantum positions through the quantum parallel operation.

3.2. Rule Description of QWPA

The concepts of probability position and certain position are employed in QWPA. In knapsack problems, the certain position of the jth wolf can be defined as X_j = (x_{j1}, x_{j2}, ..., x_{jm}), where x_{ji} ∈ {0, 1}, and the probability position can be formulated as

Q_j = \begin{bmatrix} \alpha_{j1} & \alpha_{j2} & \cdots & \alpha_{jm} \\ \beta_{j1} & \beta_{j2} & \cdots & \beta_{jm} \end{bmatrix}, \quad (8)

where \alpha_{ji}^2 + \beta_{ji}^2 = 1, i = 1, 2, ..., m. Here \alpha_{ji}^2 and \beta_{ji}^2, respectively, represent the probability of the ith dimension obtaining 0 or 1. The value of each dimension is uncertain, so the collapse operation is necessary to obtain certain positions.

Suppose that the number of wolves in the population is P_pop. In knapsack problems, a certain position represents a solution, whose quality is determined by the value of the fitness function as follows:

f(X_j) = \sum_{i=1}^{m} p_i x_{ji} - M \cdot \max\left(0, \sum_{i=1}^{m} v_i x_{ji} - C\right), \quad (9)

where M is a large enough real number and x_{ji} is the ith dimension of the jth (j = 1, 2, ..., P_pop) wolf. The penalty term in (9) ensures that the solutions satisfy the volume constraint.
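A penalty fitness in the spirit of formula (9) can be sketched as follows; the overflow-proportional penalty form and the constant M = 1e6 are assumptions for illustration:

```python
def fitness(bits, values, volumes, capacity, M=1e6):
    """Penalty-based fitness for a certain position: total value minus a
    large penalty proportional to any volume overflow."""
    total_value = sum(v for v, b in zip(values, bits) if b)
    total_volume = sum(w for w, b in zip(volumes, bits) if b)
    overflow = max(0, total_volume - capacity)
    return total_value - M * overflow  # infeasible packs score heavily negative

print(fitness([1, 1, 0], [60, 100, 120], [10, 20, 30], 50))  # feasible -> 160
```

Any feasible pack scores its plain total value, while any infeasible pack scores below every feasible one, so maximizing this fitness respects the volume constraint without an explicit repair step.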

There are three steps before the updating starts: set the probability positions of the wolf pack, obtain certain positions by applying the collapse operation, and select the optimal certain position as the position of the leader wolf in the population. (The position of the leader wolf is a certain position; the other wolves can have either a probability position or a certain position.) As reported previously [16], computational efficiency is better if the number of scout wolves and fierce wolves is as large as possible. Here, the number of each of these two kinds of wolves can be set to N - 1.

3.2.1. Scout Behavior

There are p certain positions of the jth wolf that can be obtained by applying the collapse operation to its probability position Q_j. These p certain positions are then used to determine whether or not to update the certain position of the leader. The process above is repeated until the jth wolf finds a certain position that is better than the leader's position or the number of scouting iterations exceeds the limit.

The value of h is a random integer within the range defined in the literature [10].

3.2.2. Beleaguer Behavior

The position of the leader wolf will be extended to a probability position according to formula (10):

where i = 1, 2, ..., m. The Manhattan distance between the extended position and the jth wolf can then be calculated as in formula (11):

If this distance exceeds a threshold distance, the jth wolf approaches the leader with a large step; otherwise it uses a small step, modulated by a random number between 0 and 1. Extensive experiments were done in this study to determine the values of the two steps for high-dimension knapsacks:

where m is the dimension of the knapsack, k is the current iteration, and the remaining parameter is the maximum number of iterations.
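The distance test and step selection of the beleaguer behavior can be sketched as below. The componentwise Manhattan distance over amplitude pairs and the placeholder parameter names (`threshold`, `big_step`, `small_step`) are assumptions standing in for the paper's formula (11) and its tuned step values:

```python
def manhattan(prob_position, leader_ext):
    """Manhattan distance between a wolf's probability position and the
    leader's extended position, both given as lists of (alpha, beta) pairs."""
    return sum(abs(a1 - a2) + abs(b1 - b2)
               for (a1, b1), (a2, b2) in zip(prob_position, leader_ext))

def rotation_step(dist, threshold, big_step, small_step):
    """Choose the large rotation step when the wolf is far from the leader,
    and the small step when it is already close."""
    return big_step if dist > threshold else small_step

print(rotation_step(3.0, 1.5, 0.05, 0.01))  # far from the leader -> 0.05
```

Large steps pull distant wolves quickly toward the leader, while small steps allow fine local search near it, which is the intent of the two-step rule described above.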

3.2.3. Elimination Mechanism

If the certain positions obtained from the probability position of a wolf in scout behavior are always inferior to the others over a cycle, that probability position will be eliminated. A new probability position is then randomly generated for this wolf.

3.3. Procedure for QWPA Solving Knapsack Problem

Step 1. Initialize the parameters and the probability positions of the wolf pack.

Step 2. Obtain certain positions of wolves and determine the position of the leader wolf.

Step 3. The population enters the scout stage. For each wolf, p certain positions are obtained and compared with that of the leader to determine whether to update the certain position of the leader. This procedure is repeated up to the scouting limit or until the termination conditions are met.

Step 4. The population enters the beleaguer stage. The probability positions of the wolves are updated by (6) using quantum-rotating angles, which are determined by the Manhattan distances to the extended leader position.

Step 5. Eliminate R probability positions and generate R new probability positions to maintain the population diversity.

Step 6. Repeat Step 3~Step 5 until the terminal condition is satisfied.
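Steps 1-6 can be sketched as a single loop. Everything below the step comments is an illustrative assumption (unbiased initialization, a fixed rotation angle toward the leader's bit, uniform random elimination, and a simple penalty fitness), not the paper's exact parameter settings:

```python
import math
import random

def qwpa(values, volumes, capacity, pop=10, iters=30, p=3, R=4, theta=0.02):
    """Skeleton of the QWPA loop for KP01 (Steps 1-6)."""
    m = len(values)
    a = 1.0 / math.sqrt(2.0)

    def fresh():
        return [[a, a] for _ in range(m)]  # unbiased probability position

    def collapse(q):
        return [1 if random.random() < b * b else 0 for _, b in q]

    def fit(x):
        val = sum(v * bit for v, bit in zip(values, x))
        vol = sum(w * bit for w, bit in zip(volumes, x))
        return val - 1e6 * max(0, vol - capacity)  # penalty fitness

    wolves = [fresh() for _ in range(pop)]              # Step 1
    leader = collapse(wolves[0])                        # Step 2
    for _ in range(iters):
        for q in wolves:                                # Step 3: scout
            for _ in range(p):
                x = collapse(q)
                if fit(x) > fit(leader):
                    leader = x
        for q in wolves:                                # Step 4: beleaguer
            for i, (al, be) in enumerate(q):
                # rotate this dimension toward the leader's bit
                t = theta if leader[i] == 1 else -theta
                c, s = math.cos(t), math.sin(t)
                q[i] = [c * al - s * be, s * al + c * be]
        for j in random.sample(range(pop), R):          # Step 5: eliminate
            wolves[j] = fresh()
    return leader, fit(leader)                          # loop = Step 6

best, score = qwpa([60, 100, 120], [10, 20, 30], 50)
print(best, score)
```

The leader is only ever replaced by a strictly better certain position, so the best-so-far solution is monotone across iterations, mirroring the leader-replacement rule of Section 2.1.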

3.4. Theoretical Analysis of the Algorithm

A Markov chain is a Markov process with a discrete parameter set and a discrete state space. The procedure of QWPA depends only on the current state, and both its parameter set and state space are discrete, so the population sequence is a Markov chain.

Definition 6. If the limit of the transition probability matrix [9] of a Markov chain exists and is unrelated to the initial state s, the Markov chain is ergodic:

\lim_{z \to \infty} p_{sj}^{(z)} = \pi_j, \quad s, j \in E,

where E is the state space and z is the number of transition steps.

Theorem 7. In a finite Markov chain, if there is a positive integer v satisfying the condition p_{ij}^{(v)} > 0 for all i, j ∈ E, then the Markov chain is ergodic [17, 18].

Proposition 8. QWPA is globally convergent.
Assume the probability position of the jth wolf takes the form of formula (8); then each dimension is the amplitude pair (\alpha_{ji}, \beta_{ji}).

The probability positions are only updated in beleaguer behavior. Each pair (\alpha_{ji}, \beta_{ji}) (i = 1, 2, ..., m) of (8) is updated with either the large or the small rotation step. It is specified in Definition 5 that the quantum-rotating gates are orthogonal transformations.

The beleaguer operation applies one of the two rotation angles, so the updating procedure over a cycle can be formulated as a product of rotation matrices. If the rotation angle is not a multiple of π/2, the resulting matrix contains no zero element and continues to update. To say this another way, a matrix without zero elements exists when v = 1, satisfying the condition of Theorem 7. Therefore QWPA is ergodic.

Because the set of feasible solutions of a knapsack is finite, QWPA can obtain the global optimal solution given enough iterations.

The position of the leader wolf is carried into the next iteration, and once the global optimal solution is found, the leader no longer updates.

When the number of iterations k ≥ K (K is a large enough positive number), the leader remains constant. The probability positions of the other wolves approach the leader through the quantum-rotating gate, as formulated in (17).

Formula (17) shows that the probability position of the jth wolf (j = 1, 2, ..., P_pop) will eventually converge to the extended position of the leader, so the certain position of the jth wolf can only be the position of the leader.

4. Experiments and Analyses

Two groups of data sets were employed to evaluate the performance of QWPA on KP01. The first consists of ten classic data sets, described in the literature [19], used to test performance in simple situations. The second consists of 100-, 250-, 500-, and 1000-dimension data sets generated by formula (18) to test the performance of QWPA in high-dimension situations.

where v_i is the volume and p_i is the value of the ith item, C represents the volume constraint, and m represents the number of dimensions. All experiments were conducted with MATLAB 2012 on a Core(TM) i7-479 CPU @ 3.60 GHz processor running Windows 7 Ultimate.

Experiment 1 (study of ten classic KP01 problems). The ten classic knapsack problems are employed to test the performance of QWPA. Outcomes of other typical algorithms such as the binary wolf pack algorithm (BWPA), genetic algorithm (GA), harmony search algorithm (HS), and greedy algorithm are compared. The dimensions of the ten problems range from 10 to 100. In QWPA and BWPA, the number of iterations is 100 and the population size is 40. To obtain reliable results, we ran both algorithms 20 times. The parameters in QWPA are as follows: 0.2, [0.01, 0.015], and R = 8. The parameters in BWPA are as given in the literature [10]. The related parameters of the ten classic knapsack problems are given in Table 1.



Table 2 shows the test results of QWPA and BWPA. The best solution, worst solution, and average are listed in columns 3-5. The standard deviation and the number of times the optimal result was obtained are shown in columns 6 and 7, respectively. The last column lists the results of other algorithms as reported in the literature [10].

Number | Algorithm | Best | Worst | AVG | STD | Obtained times | Results from the literature [10]
KP1 | BWPA | 295 | 295 | 295 | 0 | 20 | 295 (genetic algorithm), 209 (greedy algorithm), 295 (fuzzy particle swarm optimization algorithm)
KP2 | BWPA | 481.69 | 481.69 | 481.69 | 0 | 20 | 481.69 (adaptive harmony algorithm)
KP3 | BWPA | 1024 | 1024 | 1024 | 0 | 20 | 1018 (greedy algorithm), 1024 (quantum harmony algorithm)
KP4 | BWPA | 9767 | 9767 | 9767 | 0 | 20 | 9757 (dimensionality reduction algorithm), 9767 (quantum harmony algorithm)
KP5 | BWPA | 3096 | 3066 | 3080.5 | 8.17 | 0 | 3082 (simulated annealing algorithm), 3090 (genetic algorithm based on simulated annealing)
KP6 | BWPA | 3104 | 3080 | 3092.8 | 7.27 | 0 | 3105 (differential evolution based on hybrid encoding), 3112 (greedy genetic algorithm), 3114 (learned harmony search algorithm)
KP7 | BWPA | 16102 | 10102 | 16102 | 0 | 20 | 14865 (genetic algorithm), 15565 (binary particle swarm optimization algorithm), 15955 (simulated annealing algorithm)
KP8 | BWPA | 8362 | 8356 | 8361.4 | 1.85 | 17 | 7775 (genetic algorithm based on greedy strategy), 8362 (ant colony optimization algorithm with scout subgroup)
KP9 | BWPA | 5183 | 5183 | 5183 | 0 | 20 | 5107 (hybrid particle swarm algorithm), 5101 (discrete particle swarm optimization algorithm)
KP10 | BWPA | 15170 | 15170 | 15170 | 0 | 20 | 15080 (hybrid discrete particle swarm optimization algorithm), 15089 (discrete particle swarm optimization algorithm based on penalty function)

The QWPA shows very competitive results compared to BWPA and the other algorithms on the ten classic KP01 problems, as shown in Table 2. For KP1 to KP4, BWPA and QWPA obtained the optimal solutions with 100% probability. It is important to note that BWPA and the other algorithms were unable to obtain the global optimal solutions for KP5 and KP6, but QWPA obtained the optimal solutions several times. However, QWPA also became trapped in local optima on these two data sets, especially on KP6, where QWPA obtained the global optimum only 7 times out of 20 attempts. Even so, QWPA showed better stability and statistical performance. For KP7-KP9, QWPA and BWPA behaved almost the same and were obviously superior to the other algorithms. For KP10, although QWPA could obtain the optimal solutions, it was trapped in local optima several times. Overall, from the analysis above we may conclude that BWPA and QWPA perform clearly better than the other algorithms described previously [10] and behave comparably to each other.

Experiment 2 (study of four high-dimension KP01 problems). In Experiment 1, the results of KP10 may suggest that QWPA is not well adapted to high-dimension situations. To test this, Experiment 2 was designed. Four high-dimension knapsacks with 100, 250, 500, and 1000 dimensions were generated by formula (18). We compared the performance of the quantum genetic algorithm (QGA), the artificial fish algorithm (AF), BWPA, and QWPA on these high-dimension knapsack problems. The parameters of the four algorithms are as follows. The population size is 40 and the number of iterations is 100 for all algorithms. In QGA, the length of the binary encoding is 20, and the sizes of the quantum-rotating angles are 0.05π, 0.025π, 0.01π, and 0.005π. In AF, the perception distance is 0.5, the crowding factor is 0.618, and the random selection time is 50. The parameters of BWPA and QWPA are shown in Table 3.

Table 3: step parameters and common parameters of BWPA and QWPA.

Figures 2(a)-2(d) show the result curves of the four algorithms above in different dimensions. The abscissa represents computation time, and the ordinate represents the calculated results. Each problem was solved 20 times by each algorithm.

The results in Figure 2 show that QWPA performs competitively with the other three algorithms. QWPA found the global optimum of 618 for the 100-dimension problem in all 20 runs, but the other algorithms did not do as well. As the number of dimensions increased, the gap in solution quality among the four algorithms widened continuously. When the number of dimensions reached 1000, the best value (obtained by QWPA) was approximately 600 better than the worst value (obtained by BWPA). It is evident that QWPA is well adapted to solving high-dimension knapsack problems. Another finding is that algorithms based on a quantum mechanism performed better on high-dimension problems: Figures 2(a)-2(d) show that QGA and QWPA are better suited to these problems. This research question will be investigated in further studies.

To analyze the performance of QWPA more completely, Figure 3 shows the convergence curves of the four algorithms on the high-dimensional problems. The data in Figure 3 are mean values over one hundred repeats of each algorithm.

It is evident from Figure 3 that QWPA and QGA evolve more quickly than AF and BWPA in high dimensions. In 500 and 1000 dimensions, QWPA and QGA still have the potential to evolve even after the 100th iteration. We can conclude that algorithms based on a quantum mechanism preserve the diversity of the population and thereby avoid local optima.

The diversity of the population in QWPA is discussed here for a 100-dimension knapsack problem. Assume the position of the leader is [0 1 0 0 1 1 0 ...], which can be extended to a probability position as follows:

The other wolves approach the extended position through the quantum-rotating gate. After multiple iterations, the position of the jth wolf is

where \alpha_{j1}^2 ≈ 0.99, \beta_{j2}^2 ≈ 0.99, and so on. This means that the probability of obtaining 0 in the first dimension is approximately 0.99, the probability of obtaining 1 in the second dimension is approximately 0.99, and so on. Then the probability of the jth wolf collapsing to the same certain position as the leader is only 0.99^100 ≈ 0.366. In other words, although each dimension of the probability position of the jth wolf is very close to that of the leader, the certain position of the jth wolf is usually not the same as the leader's. Thus, the diversity of the population is better maintained in QWPA.
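The diversity figure above is a one-line calculation: matching the leader independently in each of 100 dimensions with probability 0.99 gives

```python
# Probability that a collapsed 100-dimension position matches the leader
# in every dimension, when each dimension matches with probability 0.99.
p_same = 0.99 ** 100
print(round(p_same, 3))  # -> 0.366
```

so roughly two out of three collapses still differ from the leader somewhere, which is the source of the maintained diversity.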

5. Conclusion

A quantum wolf pack algorithm is proposed in this paper to solve KP01 problems. The new concepts of probability position and certain position are included in the proposed algorithm. The updating of the probability position plays a guiding role, and the collapse operation from a probability position to a certain position is a random process in QWPA. Ten classic and four high-dimensional knapsack problems were employed to test the performance of the proposed algorithm. The results show the competitive performance of the proposed algorithm on KP01 problems, especially in high-dimension cases.

We will study the influence of the parameters on the algorithm in future work. In addition, we found that 2-dimension quantum encoding is well adapted to knapsack problems, and the quantum mechanism can be applied to other algorithms to solve different knapsack problems.

Conflicts of Interest

The authors declare that there are no conflicts of interest related to this paper.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grant 71601183.


References

  1. H. Kellerer, U. Pferschy, and D. Pisinger, Knapsack Problems, Springer, 2004.
  2. X. Z. Wang and Y. C. He, "Evolutionary algorithms for knapsack problems," Journal of Software (Ruanjian Xuebao), vol. 28, no. 1, pp. 1-16, 2017.
  3. D. Zou, L. Gao, S. Li, and J. Wu, "Solving 0-1 knapsack problem by a novel global harmony search algorithm," Applied Soft Computing, vol. 11, no. 2, pp. 1556-1564, 2011.
  4. G. L. Chen, X. F. Wang, Z. Q. Zhuang, and D. S. Wang, Genetic Algorithm and Its Applications, The Posts and Telecommunications Press, Beijing, 2003.
  5. R. Poli, J. Kennedy, and T. Blackwell, "Particle swarm optimization," Swarm Intelligence, vol. 1, no. 1, pp. 33-57, 2007.
  6. M. Dorigo and T. Stützle, Ant Colony Optimization, MIT Press, Cambridge, MA, USA, 2004.
  7. D. Karaboga and B. Basturk, "A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm," Journal of Global Optimization, vol. 39, no. 3, pp. 459-471, 2007.
  8. X. S. Yang and S. Deb, "Cuckoo search via Lévy flights," in Proceedings of the World Congress on Nature & Biologically Inspired Computing, India, pp. 210-214, IEEE, 2009.
  9. H.-S. Wu, F.-M. Zhang, and L.-S. Wu, "New swarm intelligence algorithm-wolf pack algorithm," Systems Engineering and Electronics, vol. 35, no. 11, pp. 2430-2438, 2013.
  10. W. Husheng, Z. Fengming, and Z. Renju, "A binary wolf pack algorithm for solving 0-1 knapsack problem," Systems Engineering and Electronics, vol. 8, pp. 1660-1667, 2014.
  11. M. G. Gong, Q. Cai, X. W. Chen, and L. J. Ma, "Complex network clustering by multiobjective discrete particle swarm optimization based on decomposition," IEEE Transactions on Evolutionary Computation, vol. 18, no. 1, pp. 82-97, 2014.
  12. H.-S. Wu and F.-M. Zhang, "Wolf pack algorithm for unconstrained global optimization," Mathematical Problems in Engineering, vol. 2014, Article ID 465082, 17 pages, 2014.
  13. H.-W. Chen, K. Li, and S.-M. Zhao, "Quantum walk search algorithm based on phase matching and circuit implementation," Acta Physica Sinica, vol. 64, no. 24, 2015.
  14. P. W. Shor, "Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer," SIAM Journal on Computing, vol. 26, no. 5, pp. 1484-1509, 1997.
  15. L. K. Grover, "A fast quantum mechanical algorithm for database search," in Proceedings of the 28th ACM Symposium on Theory of Computing, Philadelphia, PA, USA, pp. 212-219, 1996.
  16. L. Guoliang, Research and Application of Wolf Colony Algorithm, East China University of Technology, 2016.
  17. R. Wall, An Introduction to Mathematical Statistics and Its Applications, Prentice-Hall, 1986.
  18. S. M. Ross, Introduction to Probability Models, 10th edition, 2011.
  19. L. Juan, F. Ping, and Z. Ming, "A hybrid genetic algorithm for knapsack problem," Journal of Nanchang Institute of Aeronautical Technology, vol. 3, pp. 35-39, 1998.

Copyright © 2018 Yangjun Gao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
