Abstract

The discounted 0-1 knapsack problem (DKP01) is a variant of the knapsack problem with a group structure and discount relationships among items, and it is more challenging than the classical 0-1 knapsack problem. In this paper, we study binary particle swarm optimization (PSO) algorithms with different transfer functions and a new encoding scheme for DKP01. A binary vector of shorter length is used to represent a solution in the new binary PSO algorithms, and eight transfer functions are used to design the binary PSO variants. A new repair operator is developed to handle infeasible solutions while improving their quality. Finally, we conducted extensive experiments on four groups of 40 instances using the proposed approaches. The experimental results show that the proposed algorithms outperform the previous algorithms FirEGA and SecEGA. Overall, the proposed algorithms with the new encoding scheme represent a promising approach for solving DKP01.

1. Introduction

The discounted 0-1 knapsack problem (DKP01) is a kind of knapsack problem first proposed by Guldan [1]. It plays an important role in real-world business processes and appears as a component of many key problems such as investment decision-making, mission selection, and budget control. An exact algorithm based on dynamic programming for DKP01 was first proposed in [1]. An approach that combines dynamic programming with the core concept of DKP01 was studied in [2]. Two genetic-algorithm-based methods for DKP01, named FirEGA and SecEGA, were proposed in [3].

Assume that there are n groups and that each group contains three items: for group i (i = 0, 1, ..., n − 1), items 3i and 3i + 1 are two ordinary items, and item 3i + 2 is their discounted combination. Each item j has an integer weight $w_j$ and an integer profit $p_j$. The problem is to select a subset of the items from the n groups such that the overall profit is maximized without exceeding a given weight capacity C, where at most one item may be chosen from each group. DKP01 is NP-hard and hence does not admit a polynomial-time algorithm unless P = NP. The problem may be mathematically modelled as follows:

$$\max \sum_{i=0}^{n-1} \left( p_{3i}x_{3i} + p_{3i+1}x_{3i+1} + p_{3i+2}x_{3i+2} \right) \tag{1}$$

$$\text{s.t.} \quad x_{3i} + x_{3i+1} + x_{3i+2} \le 1, \quad i = 0, 1, \ldots, n-1, \tag{2}$$

$$\sum_{i=0}^{n-1} \left( w_{3i}x_{3i} + w_{3i+1}x_{3i+1} + w_{3i+2}x_{3i+2} \right) \le C, \tag{3}$$

$$x_{3i}, x_{3i+1}, x_{3i+2} \in \{0, 1\}, \quad i = 0, 1, \ldots, n-1, \tag{4}$$

where $x_{3i}$, $x_{3i+1}$, and $x_{3i+2}$ indicate whether items 3i, 3i + 1, and 3i + 2 are put into the knapsack: $x_j = 0$ means that item j is not in the knapsack, while $x_j = 1$ means that item j is in the knapsack. A binary vector $X = (x_0, x_1, \ldots, x_{3n-1})$ is a potential solution of DKP01; it is a feasible solution only if it satisfies both constraints (2) and (3).
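
To make the model concrete, the following minimal Python sketch (illustrative only; the paper's experiments are implemented in MATLAB) evaluates a candidate 3n-bit vector against objective (1) and constraints (2) and (3). The function name and data layout are our own, not taken from the paper.

```python
def evaluate_dkp01(x, p, w, C):
    """Return (total_profit, feasible) for a 3n-bit candidate x.

    x : list of 0/1 of length 3n (items of group i are 3i, 3i+1, 3i+2)
    p : list of profits, w : list of weights, C : knapsack capacity
    """
    n = len(x) // 3
    profit, weight = 0, 0
    feasible = True
    for i in range(n):
        group = x[3 * i:3 * i + 3]
        if sum(group) > 1:           # constraint (2): at most one item per group
            feasible = False
        for j, bit in enumerate(group):
            if bit:
                profit += p[3 * i + j]
                weight += w[3 * i + j]
    if weight > C:                    # constraint (3): capacity
        feasible = False
    return profit, feasible


# Tiny example with one group: items 0 and 1, and their discounted combination (item 2).
print(evaluate_dkp01([0, 0, 1], p=[10, 12, 22], w=[5, 6, 9], C=10))  # (22, True)
```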

Recently, He et al. conducted a detailed study of algorithms for DKP01 and proposed new deterministic and approximation algorithms: a new exact algorithm and two approximation algorithms with a greedy repair operator were proposed to solve DKP01 [4]. A discrete particle swarm optimization algorithm named GBPSO was proposed in [5]. An evolutionary algorithm combined with ring theory was developed to solve DKP01 [6]. A multistrategy monarch butterfly optimization algorithm, a binary moth search algorithm [7], and a hybrid teaching-learning-based optimization algorithm [8] have also been proposed for DKP01. Binary PSO has furthermore been applied to many other optimization problems, such as scheduling of appliances in smart homes [9], fault diagnosis of bearings [10], operation cost reduction in the unit commitment problem [11], channel selection in EEG signals and its application to speller systems [12], wireless sensor networks [13], transmission expansion planning considering the n − 1 security criterion [14], environmental/economic dispatch via a bare-bones multiobjective PSO [15], feature selection with fuzzy cost via multiobjective PSO [16], cost-based feature selection in classification via a multiobjective PSO approach [17], and feature selection on high-dimensional data via variable-size cooperative coevolutionary PSO [18].

Many algorithms have been proposed to solve DKP01, and each of them has its advantages and disadvantages; further study of this problem is therefore worthwhile. In this paper, we study binary particle swarm optimization (PSO) algorithms with different transfer functions and a new encoding scheme for DKP01. An effective 2n-dimensional binary vector is used to represent an individual in the proposed binary PSO strategies. Eight types of transfer functions are used to design binary PSO algorithms for DKP01. Finally, we conducted extensive experiments on four groups of 40 instances using the proposed approaches. The experimental results demonstrate that the proposed algorithms outperform the genetic algorithm and the original binary PSO in solving the 40 DKP01 instances. The main contributions of this work are as follows:
(i) Binary particle swarm optimization algorithms with different binary transfer functions and a new solution representation are proposed to solve the discounted 0-1 knapsack problem.
(ii) A new encoding scheme uses a shorter binary vector (of length 2n instead of 3n) and automatically satisfies the constraint that at most one item is chosen from each group.
(iii) A new repair operator is developed to handle infeasible solutions while improving their quality.

The rest of this paper is organized as follows. Section 2 reviews particle swarm optimization and binary particle swarm optimization. Section 3 presents the proposed binary particle swarm optimization for DKP01. The simulation results of the proposed algorithms are presented in Section 4. We conclude this paper and suggest potential future work in Section 5.

2.1. Particle Swarm Optimization

PSO operates on a population of particles, which is created randomly at initialization [19, 20]. The standard particle swarm optimizer maintains a swarm of particles that represent potential solutions to the problem at hand. Suppose that the search space is D-dimensional; the position of the ith particle of the swarm can then be described by a D-dimensional vector $X_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$, and its velocity by a D-dimensional vector $V_i = (v_{i1}, v_{i2}, \ldots, v_{iD})$. The best position found so far by the ith particle is denoted $P_i = (p_{i1}, p_{i2}, \ldots, p_{iD})$. In essence, the trajectory of each particle is updated according to its own flying experience as well as to that of the best particle in the swarm. The basic PSO update can be described as

$$v_{id}^{k+1} = \omega v_{id}^{k} + c_1 r_1 \left( pbest_{id}^{k} - x_{id}^{k} \right) + c_2 r_2 \left( gbest_{d}^{k} - x_{id}^{k} \right), \tag{5}$$

$$x_{id}^{k+1} = x_{id}^{k} + v_{id}^{k+1}, \tag{6}$$

where $v_{id}^{k}$ is the dth dimension velocity of particle i in cycle k; $x_{id}^{k}$ is the dth dimension position of particle i in cycle k; $pbest_{id}^{k}$ is the dth dimension of the personal best (pbest) of particle i in cycle k; $gbest_{d}^{k}$ is the dth dimension of the global best particle (gbest) in cycle k; $\omega$ is the inertia weight; $c_1$ is the cognitive weight and $c_2$ is the social weight; and $r_1$ and $r_2$ are two random values uniformly distributed in the range [0, 1] [21].
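
As an illustration, the following minimal Python sketch (our own, not the paper's MATLAB code) applies the velocity and position updates (5) and (6) for one iteration using NumPy; the array names are assumptions made for this example.

```python
import numpy as np

def pso_step(X, V, pbest, gbest, w=0.7, c1=2.0, c2=2.0, rng=None):
    """One continuous PSO iteration: equations (5) and (6).

    X, V, pbest : arrays of shape (num_particles, D)
    gbest       : array of shape (D,)
    """
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random(X.shape)   # r1, r2 ~ U[0, 1], drawn per particle and dimension
    r2 = rng.random(X.shape)
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)   # equation (5)
    X = X + V                                                    # equation (6)
    return X, V
```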

The pseudocode of the PSO is given in Algorithm 1.

Input: Initial parameters
Output: Best solution found
(1) for each particle do
(2)  Initialize particle
(3) while stop condition is not met do
(4)  for each particle do
(5)   Evaluate objective function
(6)   if the objective function value is better than pBest then
(7)    Replace pBest with the current value
(8)  Calculate gBest (the global best value)
(9)  for each particle do
(10)   Calculate particle velocity by equation (5)
(11)   Update particle position by equation (6)
2.2. Binary Particle Swarm Optimization

The binary particle swarm optimization (BPSO) algorithm was introduced to allow the PSO algorithm to operate in binary problem spaces [21–23]. It uses the velocity as a probability that a bit (position) takes the value one or zero. In BPSO, equation (5) for updating the velocity remains unchanged, but equation (6) for updating the position is redefined by one of the two following rules:

$$x_{id}^{k+1} = \begin{cases} 1, & \text{if } rand() < S\left(v_{id}^{k+1}\right), \\ 0, & \text{otherwise}, \end{cases} \tag{7}$$

$$x_{id}^{k+1} = \begin{cases} 1 - x_{id}^{k}, & \text{if } rand() < S\left(v_{id}^{k+1}\right), \\ x_{id}^{k}, & \text{otherwise}, \end{cases} \tag{8}$$

where S(·) is a transfer function, such as the sigmoid function, that maps the velocity to a probability:

$$S\left(v_{id}^{k+1}\right) = \frac{1}{1 + e^{-v_{id}^{k+1}}}. \tag{9}$$
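
The following Python sketch (illustrative, with assumed function names) shows how rules (7) and (8) turn a real-valued velocity into a bit via a transfer function.

```python
import math
import random

def sigmoid(v):
    """S-shaped transfer function (9): maps a velocity to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-v))

def update_bit_rule7(v, transfer=sigmoid):
    """Rule (7): the bit is set to 1 with probability transfer(v), otherwise to 0."""
    return 1 if random.random() < transfer(v) else 0

def update_bit_rule8(x_old, v, transfer=sigmoid):
    """Rule (8): the bit is flipped with probability transfer(v), otherwise kept."""
    return 1 - x_old if random.random() < transfer(v) else x_old
```

In the binary PSO literature, rule (7) is usually paired with an S-shaped transfer function such as the sigmoid, while rule (8) is usually paired with a V-shaped function that returns a flip probability.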

In this section, we propose eight binary algorithms based on BPSO, named BPSO1 to BPSO8. Algorithm BPSOx uses transfer function Sx (where x is an integer in [1, 8]); BPSO1–BPSO4 use rule (7), while BPSO5–BPSO8 use rule (8) to compute the binary vector X.
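
The eight transfer functions S1–S8 are not listed in this excerpt. A common choice in the binary PSO literature, and the one assumed in the sketch below, is a family of four S-shaped functions (paired with rule (7)) and four V-shaped functions (paired with rule (8)); these particular formulas are an assumption, not necessarily the ones used in the paper.

```python
import math

# Assumed S-shaped family (commonly paired with rule (7)).
S_SHAPED = [
    lambda v: 1 / (1 + math.exp(-2 * v)),
    lambda v: 1 / (1 + math.exp(-v)),
    lambda v: 1 / (1 + math.exp(-v / 2)),
    lambda v: 1 / (1 + math.exp(-v / 3)),
]

# Assumed V-shaped family (commonly paired with rule (8)).
V_SHAPED = [
    lambda v: abs(math.erf(math.sqrt(math.pi) / 2 * v)),
    lambda v: abs(math.tanh(v)),
    lambda v: abs(v / math.sqrt(1 + v * v)),
    lambda v: abs(2 / math.pi * math.atan(math.pi / 2 * v)),
]
```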

3. Proposed Binary Particle Swarm Optimization for DKP01

3.1. Solution Representation

At present, there are two common methods to encode a solution: a binary vector whose length equals the dimension of the problem, 3n [3, 7, 24, 25], and an integer vector whose length equals the number of groups, n [8]. Each encoding scheme has its advantages and disadvantages. The binary scheme has the advantage that many metaheuristics designed for binary search spaces can be applied to the solution directly.

In this paper, a new binary encoding scheme of length 2n is used to represent a solution: each group is encoded by two bits, and the four possible bit patterns select either one of the three items of the group or no item at all. The advantages of this encoding scheme are its shorter length and the fact that it automatically satisfies constraint (2). The new binary encoding scheme is presented in Table 1.
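
Table 1 is not reproduced here, so the exact bit-to-item mapping below is an assumption consistent with the repair operator in Algorithm 2: 00 selects no item, while 01, 10, and 11 select the first, second, and third (discounted) item of the group, respectively. The sketch decodes a 2n-bit vector into the equivalent 3n-bit vector of the model in Section 1.

```python
def decode_2n_to_3n(y):
    """Decode a 2n-bit group-wise encoding into the 3n-bit item encoding.

    y : list of 0/1 of length 2n; bits (y[2k], y[2k+1]) encode group k:
        (0,0) -> no item, (0,1) -> item 3k, (1,0) -> item 3k+1, (1,1) -> item 3k+2.
    """
    n = len(y) // 2
    x = [0] * (3 * n)
    for k in range(n):
        b1, b2 = y[2 * k], y[2 * k + 1]
        if (b1, b2) == (0, 1):
            x[3 * k] = 1          # first ordinary item of group k
        elif (b1, b2) == (1, 0):
            x[3 * k + 1] = 1      # second ordinary item of group k
        elif (b1, b2) == (1, 1):
            x[3 * k + 2] = 1      # discounted combination of group k
        # (0, 0): no item is taken from group k
    return x


print(decode_2n_to_3n([0, 1, 1, 1]))  # [1, 0, 0, 0, 0, 1]
```

Because every 2-bit pattern selects at most one item per group, any 2n-bit vector automatically satisfies constraint (2).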

3.2. Repair Function

The new encoding scheme automatically satisfies constraint (2). To handle constraint (3) and improve the quality of a solution, a new repair operator based on the idea in [3] is proposed. The advantage of this repair procedure is that it balances CPU time against the risk of getting stuck in local optima. The items are sorted by an index vector id so that their profit-to-weight ratios are nonincreasing, that is,

$$\frac{p_{id(1)}}{w_{id(1)}} \ge \frac{p_{id(2)}}{w_{id(2)}} \ge \cdots \ge \frac{p_{id(3n)}}{w_{id(3n)}}.$$

This repair operator consists of two phases. The first phase (the repair phase) examines each item in increasing order of i, that is, in nonincreasing order of profit-to-weight ratio, and drops a selected item id(i) from the knapsack if keeping it would violate the capacity constraint. The second phase (the optimization phase) examines each item in the same order and adds item id(i) to the knapsack as long as feasibility is not violated. The aim of the repair phase is to obtain a feasible solution from an infeasible one, while the optimization phase seeks to improve the fitness of a feasible solution. The pseudocode for the repair operator is given in Algorithm 2. The time complexity of the repair operator is O(n).

Input: Solution x, profit vector p, weight vector w, index vector id, and knapsack capacity C.
Output: Repaired solution x
(1) % Repair phase
(2) n ← length(x)/2; W ← 0
(3) for i = 1 : 3n do
(4)  k ← floor((id(i) − 1)/3)  % group index of item id(i)
(5)  r ← mod(id(i) − 1, 3)  % position of item id(i) within its group
(6)  if (x(2k + 1), x(2k + 2)) encodes position r (see Table 1) and W + w(id(i)) ≤ C then
(7)   W ← W + w(id(i))  % keep the item
(8)  else if (x(2k + 1), x(2k + 2)) encodes position r then
(9)   x(2k + 1) ← 0; x(2k + 2) ← 0  % drop the item: keeping it would exceed C
(10) % Optimization phase
(11) for i = 1 : 3n do
(12)  k ← floor((id(i) − 1)/3); r ← mod(id(i) − 1, 3)
(13)  if x(2k + 1) = 0 and x(2k + 2) = 0 and W + w(id(i)) ≤ C then
(14)   W ← W + w(id(i))
(15)   if r = 0 then x(2k + 1) ← 0; x(2k + 2) ← 1
(16)   if r = 1 then x(2k + 1) ← 1; x(2k + 2) ← 0
(17)   if r = 2 then x(2k + 1) ← 1; x(2k + 2) ← 1
(18) return x
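
Since the repair logic is easiest to read in code, here is a minimal Python sketch of a greedy repair-and-optimize operator of this kind. It follows our reading of Algorithm 2 and the bit-to-item mapping assumed in Section 3.1, so treat it as an illustration rather than the paper's exact MATLAB routine; the names repair, id_sorted, and CODES are ours.

```python
CODES = {0: (0, 1), 1: (1, 0), 2: (1, 1)}   # assumed position-r -> (bit1, bit2) mapping

def repair(y, w, id_sorted, C):
    """Greedy repair and optimization of a 2n-bit solution y.

    id_sorted : item indices 0..3n-1 sorted by nonincreasing profit-to-weight ratio.
    """
    y = list(y)
    total_w = 0
    # Repair phase: scan items from best to worst ratio, keep selected items
    # that still fit, and drop selected items that would exceed the capacity.
    for j in id_sorted:
        k, r = divmod(j, 3)
        if (y[2 * k], y[2 * k + 1]) == CODES[r]:
            if total_w + w[j] <= C:
                total_w += w[j]
            else:
                y[2 * k], y[2 * k + 1] = 0, 0
    # Optimization phase: greedily add the best-ratio item of each empty group.
    for j in id_sorted:
        k, r = divmod(j, 3)
        if (y[2 * k], y[2 * k + 1]) == (0, 0) and total_w + w[j] <= C:
            total_w += w[j]
            y[2 * k], y[2 * k + 1] = CODES[r]
    return y
```

For example, id_sorted can be obtained as sorted(range(3 * n), key=lambda j: -p[j] / w[j]).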

The overall pseudocode of the BPSO algorithms for DKP01 is given in Algorithm 3.

Input: Initial parameters
Output: Best solution found
(1) for each particle do
(2)  Initialize particle
(3) while stop condition is not met do
(4)  for each particle do
(5)   Evaluate objective function
(6)   if the objective function value is better than pBest then
(7)    Replace pBest with the current value
(8)  Calculate gBest (the global best value)
(9)  for each particle do
(10)   Calculate particle velocity by equation (5)
(11)   Compute S(·) using the transfer function
(12)   Update particle position by rule (7) or (8)
(13)   Apply the repair operator to the current particle position
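
To show how the pieces fit together, the following Python sketch wires the velocity update, a sigmoid transfer, binarization rule (7), and the repair operator into one BPSO loop. It reuses the hypothetical helpers sketched above (evaluate_dkp01, decode_2n_to_3n, repair) and uses a constant inertia weight for brevity, so it is only an outline of Algorithm 3, not the paper's implementation.

```python
import numpy as np

def bpso_dkp01(p, w, C, id_sorted, num_particles=50, iters=300,
               c1=2.0, c2=2.0, w_inertia=0.9, rng=None):
    """Outline of Algorithm 3 with rule (7); relies on the earlier sketches."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(p) // 3
    D = 2 * n

    def fitness(y):
        profit, feasible = evaluate_dkp01(decode_2n_to_3n([int(b) for b in y]), p, w, C)
        return profit if feasible else 0

    X = rng.integers(0, 2, size=(num_particles, D))        # binary positions
    V = rng.uniform(-4.0, 4.0, size=(num_particles, D))    # real-valued velocities
    pbest = X.copy()
    pbest_fit = np.array([fitness(x) for x in X])
    gbest = pbest[int(np.argmax(pbest_fit))].copy()

    for _ in range(iters):
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = w_inertia * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)   # eq. (5)
        prob = 1.0 / (1.0 + np.exp(-V))                     # sigmoid transfer
        X = (rng.random(X.shape) < prob).astype(int)        # rule (7)
        X = np.array([repair([int(b) for b in x], w, id_sorted, C) for x in X])
        fit = np.array([fitness(x) for x in X])
        improved = fit > pbest_fit
        pbest[improved] = X[improved]
        pbest_fit[improved] = fit[improved]
        gbest = pbest[int(np.argmax(pbest_fit))].copy()
    return gbest, int(pbest_fit.max())
```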

4. Simulation Results

In this section, the experimental results of the eight BPSO algorithms are first compared to find the best one among them for solving DKP01. The best-performing BPSO is then compared with the two algorithms FirEGA and SecEGA taken from [6]. The 40 DKP01 test instances consist of 10 uncorrelated instances (denoted UDKP1–UDKP10), 10 weakly correlated instances (WDKP1–WDKP10), 10 inverse strongly correlated instances (IDKP1–IDKP10), and 10 strongly correlated instances (SDKP1–SDKP10) [3].

All experiments of the proposed algorithms are performed on a Dell Vostro 5471 VTI5207W laptop with an Intel (R) Core (TM) i5-8250u CPU-1.6 GHz and 8 GB DDR3 memory. The operating system is Microsoft Windows 10. All the algorithms are implemented using MATLAB R2018a.

The parameters of FirEGA and SecEGA are as given in [6]: the population sizes of FirEGA and SecEGA are set to 50, and the number of iterations is set equal to the dimension of the DKP01 instance. For a fair comparison, the parameters of the BPSO algorithms are set as follows: the number of particles is 50, c1 and c2 are set to 2, the inertia weight ω is linearly decreased from 0.9 to 0.4, the maximum number of iterations is set equal to the dimension of the DKP01 instance, and the algorithm stops when the maximum number of iterations is reached. Hence, all algorithms use a similar number of objective function evaluations.

Tables 2–5 summarize the comparison among the eight BPSO algorithms based on five performance indicators: the best result (BEST), the average result (AVE), the worst result (Worst), the standard deviation (Std. dev), and the gap between AVE and OPT, where OPT is the optimal value of the instance. The results are averaged over 30 independent runs, and the best results are highlighted in bold font. The gap is computed as

$$Gap = \frac{OPT - AVE}{OPT} \times 100\%.$$

The results show that BPSO7 and BPSO8 perform better than the other six algorithms. Table 6 summarizes the average ranks of the eight BPSO algorithms on the 40 instances: BPSO8 achieves the best average rank on all three criteria, that is, the average best rank (rank based on BEST), the average mean rank (rank based on AVE), and the average worst rank (rank based on Worst). Tables 7–10 summarize the comparison among FirEGA, SecEGA, and BPSO8 based on the same five performance criteria over 30 independent runs: BEST, AVE, Worst, Std. dev, and Gap. BPSO8 is better than FirEGA and SecEGA in BEST, AVE, and Worst on the SDKP, UDKP, and WDKP instances, but not on the IDKP instances. Overall, the results show that BPSO8 performs better than the FirEGA and SecEGA algorithms. Table 11 summarizes the average ranks of BPSO8, FirEGA, and SecEGA on the 40 instances; again, BPSO8 achieves the best average rank on all three criteria (rank based on BEST, AVE, and Worst).

Regarding stability, the Std. dev and Gap values in Tables 2–10 demonstrate the stability of the proposed algorithms. Figure 1 shows box plots for eight instances: the group of algorithms BPSO5–BPSO8 performs better than the group BPSO1–BPSO4. Figure 2 shows the convergence curves for eight instances: the group BPSO5–BPSO8 also converges faster than the group BPSO1–BPSO4.

Therefore, BPSO8 performs excellently compared with the other BPSO variants on the DKP01 problem. From the above comparison, it is not difficult to see that, for DKP01, BPSO8 has the best performance, followed by BPSO7, and both are far better than FirEGA and SecEGA. This shows that the proposed binary particle swarm optimization with the new binary encoding scheme is not only feasible but also effective.

5. Conclusion

In this paper, eight new algorithms based on binary particle swarm optimization with a new repair operator have been proposed to solve the discounted 0-1 knapsack problem efficiently. An effective binary encoding scheme is proposed to represent a solution to the problem. The new encoding scheme has two advantages: it reduces the computational effort by using a shorter binary vector, and it automatically satisfies the constraint that at most one item is chosen from each group. The simulation results on forty DKP01 instances showed that the proposed algorithms are better than the two genetic-algorithm-based methods.

In the future, we will study the effect of transfer functions combined with the PSO algorithm on other optimization problems. Other optimization algorithms will also be considered for solving DKP01.

Data Availability

The data used to support the findings of this study are included within the article or are made publicly available to the research community at https://www.researchgate.net/publication/336126537_Four_kinds_of_D0-1KP_instances.

Additional Points

(i) Particle swarm optimization algorithms with different binary transfer functions and a new solution representation are proposed to solve the discounted 0-1 knapsack problem. (ii) A new encoding scheme uses a shorter binary vector and automatically satisfies the constraint that at most one item is chosen from each group. (iii) A new repair operator is developed to handle infeasible solutions while improving their quality. (iv) Experimental results on 40 instances of the discounted 0-1 knapsack problem show that the proposed approaches are efficient.

Conflicts of Interest

The author declares that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

The author acknowledges Van Lang University for supporting this work.