Abstract

In recent years Grammatical Evolution (GE) has been used as a grammar-based form of Genetic Programming (GP) and applied to many optimization problems, such as symbolic regression, classification, Boolean functions, constructed problems, and algorithmic problems. GE can use a diversity of search strategies, including Swarm Intelligence (SI) ones. Particle Swarm Optimisation (PSO) is an SI algorithm with two main problems: premature convergence and poor diversity. Particle Evolutionary Swarm Optimization (PESO) is a recent SI algorithm that uses two perturbations to avoid these problems. In this paper we propose using PESO and PSO within the GE framework as search strategies to generate heuristics that solve the Bin Packing Problem (BPP); the methodology can nevertheless be applied to other kinds of problems by designing a suitable grammar. A comparison between PESO, PSO, and the classical BPP heuristics is performed through the nonparametric Friedman test. The main contributions of this paper are a Grammar that generates online and offline heuristics, depending on the test instance, aiming to improve on the heuristics generated by other grammars and by humans, and a way to implement different algorithms, such as PESO, as search strategies in GE to obtain better results than those obtained by PSO.

1. Introduction

Developing a methodology to solve a specific problem is a process that entails studying the problem and analyzing instances of it. There are many problems [1] for which no methodology can provide the exact solution, because the size of the problem's search space makes it intractable in time; this makes it necessary to search for and improve methodologies that can give a solution in finite time. There are methodologies based on Artificial Intelligence which do not yield exact solutions but provide an approximation; among them we can find the following.

Heuristics are defined as "a type of strategy that dramatically limits the search for solutions" [2, 3]. One important characteristic of heuristics is that they can obtain a result for a problem instance in polynomial time [1], although each heuristic is developed for a specific kind of problem instance.

Metaheuristics are defined as "a master strategy that guides and modifies other heuristics to obtain solutions generally better than those obtained with a local search optimization" [4]. A metaheuristic can work over several instances of a given problem or over various problems, but it is necessary to adapt the metaheuristic to each problem.

It has been shown that the metaheuristic Genetic Programming [5] can generate a heuristic that can be applied to a problem instance [6]. There also exist metaheuristics based on Genetic Programming's paradigm [7], such as Grammatical Differential Evolution [8], Grammatical Swarm [9], Particle Swarm Programming [10], and Geometric Differential Evolution [11].

The Bin Packing Problem (BPP) has been widely studied because of its many industrial applications, like wood and glass cutting, packing in transportation and warehousing [12], and job scheduling on uniform processors [13, 14]. It is an NP-Hard problem [1], and due to its complexity many heuristics have been developed that attempt to give an approximation [15–19]. Some metaheuristics have also been applied to try to obtain better results than those obtained by heuristics [20–22]. Some exact algorithms have been developed as well [23–25]; however, given the nature of the problem, the running time reported for these algorithms grows with the instance and may grow exponentially.

The contribution of this paper is a generic methodology to generate heuristics using GE with search strategies. It has been shown that it is possible to use this methodology to generate BPP heuristics by using PESO and PSO as search strategies; it was also shown that the heuristics generated with the proposed Grammar perform better than the classical BPP heuristics, which were designed by experts in Operational Research. These conclusions were obtained by comparing the results of GE and of the BPP heuristics by means of the Friedman nonparametric test [26].

GE is described in Section 2, including PSO and PESO. Section 3 describes the Bin Packing Problem, the state-of-the-art heuristics, the instances used, and the fitness function. We describe the experiments performed in Section 4. Finally, general conclusions about the present work, including future perspectives, are presented in Section 5.

2. Grammatical Evolution

Grammatical Evolution (GE) [7] is a grammar-based form of Genetic Programming (GP) [27]. GE joins the principles of molecular biology, which are used by GP, and the power of formal grammars. Unlike GP, GE adopts a population of linear genotypic integer strings, or binary strings, which are transformed into functional phenotypic programs through a genotype-to-phenotype mapping process [28]; this process is also known as Indirect Representation [29]. The genotype strings evolve with no knowledge of their phenotypic equivalent, using only the fitness measure.

The transformation is governed by a Backus–Naur Form (BNF) grammar, which is made up of the tuple $\{N, T, P, S\}$, where $N$ is the set of all nonterminal symbols, $T$ is the set of terminals, $P$ is the set of production rules that map $N \to (N \cup T)^{*}$, and $S$ is the initial start symbol, where $S \in N$. There may be a number of production rules that can be applied to a nonterminal; an "∣" (or) symbol separates the options.

Even though GE uses the Genetic Algorithm (GA) [7, 28, 30] as its search strategy, it is possible to use another search strategy, like Particle Swarm Optimization; GE with PSO is called Grammatical Swarm (GS) [8].

In GE each individual is mapped into a program using the BNF grammar, applying (1), proposed in [28], to choose the next production based on the current nonterminal symbol. An example of the mapping process employed by GE is shown in Figure 1. Consider

$$\text{rule} = c \bmod r, \tag{1}$$

where $c$ is the codon value and $r$ is the number of production rules available for the current nonterminal.
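To make (1) concrete, the following minimal Python sketch (our illustration, not the authors' implementation) decodes an integer genotype with a toy arithmetic grammar; the BPP grammars of Section 4 would take the place of the toy grammar:

# Minimal genotype-to-phenotype mapping for GE: each codon c selects a
# production of the current nonterminal via rule = c mod r; see (1).
GRAMMAR = {
    "<expr>": [["<expr>", "<op>", "<expr>"], ["<var>"]],
    "<op>":   [["+"], ["-"], ["*"]],
    "<var>":  [["x"], ["y"]],
}

def map_genotype(genotype, start="<expr>", max_wraps=2):
    symbols = [start]                  # leftmost-derivation work list
    output, i, wraps = [], 0, 0
    while symbols:
        sym = symbols.pop(0)
        if sym in GRAMMAR:
            if i == len(genotype):     # reuse the genotype (wrapping)
                i, wraps = 0, wraps + 1
                if wraps > max_wraps:
                    return None        # invalid individual
            rules = GRAMMAR[sym]
            choice = genotype[i] % len(rules)   # rule = c mod r
            i += 1
            symbols = list(rules[choice]) + symbols
        else:
            output.append(sym)         # terminal: emit it
    return " ".join(output)

print(map_genotype([0, 3, 7, 1, 1, 4]))   # -> "y - x"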

GE can use different search strategies; our proposed model is shown in Figure 2. This model includes the problem instance and the search strategy as inputs. In [28] the search strategy is part of the process; however, it can be seen as an additional element that can be chosen to work with GE. GE will generate a solution through the selected search strategy, and this solution will be evaluated in the objective function using the problem instance.

2.1. Particle Swarm Optimization

Particle Swarm Optimization (PSO) [31–35] is a bioinspired metaheuristic modeled on the flocking of birds and the schooling of fish. It was developed by Kennedy and Eberhart based on a concept called social metaphor. This metaheuristic simulates a society where all individuals contribute their knowledge to obtain a better solution. Three factors influence the change of status or behavior of an individual:
(i) the knowledge of the environment or adaptation, which relates to the importance given to the experience of the individual;
(ii) its experience or local memory, which relates to the importance given to the best result found by the individual;
(iii) the experience of its neighbors or global memory, which relates to the importance given to the best result obtained by its neighbors or other individuals.
In this metaheuristic each individual is considered a particle that moves through a multidimensional space representing the social space; the search space dimension depends on the variables used to represent the problem.

To update each particle we use the velocity vector, which tells how fast the particle will move in each of the dimensions; the PSO velocity update is given by (2) and the position update by (3). Algorithm 1 shows the complete PSO algorithm:

$$v_i(t+1) = \omega v_i(t) + c_1 r_1 \left( g - x_i(t) \right) + c_2 r_2 \left( p_i - x_i(t) \right), \tag{2}$$

$$x_i(t+1) = x_i(t) + v_i(t+1), \tag{3}$$

where
(i) $v_i$ is the velocity of the $i$th particle,
(ii) $\omega$ is the adjustment factor to the environment,
(iii) $c_1$ is the memory coefficient in the neighborhood,
(iv) $c_2$ is the memory coefficient,
(v) $x_i$ is the position of the $i$th particle,
(vi) $g$ is the best position found so far by all particles,
(vii) $p_i$ is the best position found by the $i$th particle,
(viii) $r_1$ and $r_2$ are uniform random numbers in $[0, 1]$.

Require: $\omega$ adaptation to environment coefficient, $c_1$ neighborhood memory
     coefficient, $c_2$ memory coefficient, swarm size.
(1)    Start the swarm particles.
(2)   Start the velocity vector for each particle in the swarm.
(3)   while stopping criterion not met do
(4)  for $i = 1$ to swarm size do
(5)   If the $i$-particle's fitness is better than the local best then replace the
     local best with the $i$-particle.
(6)   If the $i$-particle's fitness is better than the global best then replace the
     global best with the $i$-particle.
(7)   Update the velocity vector by (2).
(8)   Update the particle's position with the velocity vector by (3).
(9)  end for
(10) end while
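A compact Python sketch of this loop (our illustration; the coefficient values here are placeholders, whereas the paper tunes its parameters with covering arrays, Section 4):

import random

def pso(fitness, dim, swarm_size=30, iters=100,
        omega=0.7, c1=1.4, c2=1.4, lo=-5.0, hi=5.0):
    # Minimal PSO minimizer sketching Algorithm 1.
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm_size)]
    V = [[0.0] * dim for _ in range(swarm_size)]
    P = [x[:] for x in X]                        # local (personal) bests
    pbest = [fitness(x) for x in X]
    g = min(range(swarm_size), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]                 # global best
    for _ in range(iters):
        for i in range(swarm_size):
            f = fitness(X[i])
            if f < pbest[i]:                     # step (5): local best
                P[i], pbest[i] = X[i][:], f
            if f < gbest:                        # step (6): global best
                G, gbest = X[i][:], f
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (omega * V[i][d]       # step (7): velocity, see (2)
                           + c1 * r1 * (G[d] - X[i][d])
                           + c2 * r2 * (P[i][d] - X[i][d]))
                X[i][d] += V[i][d]               # step (8): position, see (3)
    return G, gbest

# Example: minimize the 5-dimensional sphere function.
best, val = pso(lambda x: sum(v * v for v in x), dim=5)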

2.2. Particle Evolutionary Swarm Optimization

Particle Evolutionary Swarm Optimization (PESO) [36–38] is based on PSO but introduces two perturbations in order to avoid two problems observed in PSO [39]:
(i) premature convergence,
(ii) poor diversity.
Algorithm 2 shows the PESO algorithm with its two perturbations, given in Algorithms 3 and 4. The C-Perturbation has the advantage of keeping the self-organization potential of the flock, as no separate probability distribution needs to be computed; the M-Perturbation helps keep diversity in the population.

Require: $\omega$ adaptation to environment coefficient, $c_1$ neighborhood memory
     coefficient, $c_2$ memory coefficient, swarm size.
(1)    Start the swarm particles.
(2)   Start the velocity vector for each particle in the swarm.
(3)   while stopping criterion not met do
(4)  for $i = 1$ to swarm size do
(5)   If the $i$-particle's fitness is better than the local best then replace the
     local best with the $i$-particle.
(6)   If the $i$-particle's fitness is better than the global best then replace the
     global best with the $i$-particle.
(7)   Update the velocity vector by (2).
(8)   Update the particle's position with the velocity vector by (3).
(9)   Apply the C-Perturbation (Algorithm 3).
(10)  Apply the M-Perturbation (Algorithm 4).
(11) end for
(12) end while

(1) for all Particles do
(2)  Generate $r$ uniformly between 0 and 1.
(3)  Generate $p_1$, $p_2$, and $p_3$ as random numbers between 1 and the number
    of particles.
(4)  Generate the $i$-new particle by applying the following equation to each
    particle dimension $d$: $x'_{i,d} = x_{p_1,d} + r\,(x_{p_2,d} - x_{p_3,d})$.
(5) end for
(6) for all Particles do
(7)  If the $i$-new particle is better than the $i$-particle then replace the $i$-particle
    with the $i$-new particle.
(8) end for

(1)   for all Particles do
(2)  for all Dimensions do
(3)   Generate $r$ uniformly between 0 and 1.
(4)   if $r \le 1/D$ then ($D$ being the number of dimensions)
(5)    $x'_{i,d} = \mathrm{rand}(l_d, u_d)$, a random value within the search bounds
(6)   else
(7)    $x'_{i,d} = x_{i,d}$
(8)   end if
(9)  end for
(10) end for
(11)  for all Particles do
(12)  If the $i$-new particle is better than the $i$-particle then replace the $i$-particle
   with the $i$-new particle.
(13)  end for
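The two perturbations can be sketched in Python as follows (our reading of Algorithms 3 and 4; the search bounds lo and hi and the uniform reset are assumptions). Within Algorithm 2 they would run after the velocity and position updates, replacing a particle only when the perturbed one is better:

import random

def c_perturbation(X):
    # C-Perturbation (Algorithm 3): a recombination of three randomly chosen
    # particles; no separate probability distribution needs to be computed.
    n, dim = len(X), len(X[0])
    new = []
    for _ in range(n):
        r = random.random()
        p1, p2, p3 = (random.randrange(n) for _ in range(3))
        new.append([X[p1][d] + r * (X[p2][d] - X[p3][d]) for d in range(dim)])
    return new

def m_perturbation(X, lo=-5.0, hi=5.0):
    # M-Perturbation (Algorithm 4): reset each dimension to a random value
    # with probability 1/D, which helps keep diversity in the swarm.
    D = len(X[0])
    return [[random.uniform(lo, hi) if random.random() <= 1.0 / D else x
             for x in particle] for particle in X]

def apply_if_better(X, X_new, fitness):
    # Keep a perturbed particle only when it improves on the original.
    return [xn if fitness(xn) < fitness(x) else x for x, xn in zip(X, X_new)]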

3. Bin Packing Problem

The Bin Packing Problem (BPP) [40] can be described as follows: given $n$ items that need to be packed into the lowest possible number of bins, each item $j$ has a weight $w_j$; the maximum capacity $c$ of the bins is also given. The objective is to minimize the number of bins used to pack all the items, given that each item is assigned to exactly one bin and the sum of the item weights in a bin cannot exceed the bin's capacity. For example, the items $\{4, 8, 1, 4, 2, 1\}$ with $c = 10$ can be packed into two bins, $\{8, 2\}$ and $\{4, 4, 1, 1\}$, and since their total weight is 20 no single bin can hold them all.

This problem has been widely studied, including the following:
(i) proposing new theorems [41, 42],
(ii) developing new heuristic algorithms based on Operational Research concepts [18, 43],
(iii) characterizing the problem instances [44–46],
(iv) implementing metaheuristics [20, 47–49].

This problem has been shown to be an NP-Hard optimization problem [1]. A mathematical definition of the BPP is as follows:

Minimize

$$z = \sum_{i=1}^{n} y_i \tag{4}$$

subject to the following constraints and conditions:

$$\sum_{j=1}^{n} w_j x_{ij} \le c\, y_i, \quad i \in \{1, \ldots, n\}, \qquad \sum_{i=1}^{n} x_{ij} = 1, \quad j \in \{1, \ldots, n\}, \tag{5}$$

with $y_i \in \{0, 1\}$ and $x_{ij} \in \{0, 1\}$, where
(i) $w_j$ is the weight of item $j$,
(ii) $y_i$ is a binary variable that shows whether bin $i$ has items,
(iii) $x_{ij}$ indicates whether item $j$ is placed into bin $i$,
(iv) $n$ is the number of available bins,
(v) $c$ is the capacity of each bin.
The algorithms for BPP instances can be classified as online or offline [46]: an algorithm is considered online if the items are not known before the packing process starts, and offline if all the items are known beforehand. In this research we worked with both kinds of algorithms.
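A direct check of the constraints above on a candidate packing can be written as below (our sketch; the assignment is given as a bin index per item rather than the full $x_{ij}$ matrix):

def is_feasible(bin_of, weights, capacity):
    # BPP constraints: each item in exactly one bin (implicit in bin_of)
    # and no bin loaded beyond the capacity c.
    loads = {}
    for j, i in enumerate(bin_of):            # item j assigned to bin i
        loads[i] = loads.get(i, 0) + weights[j]
    return all(load <= capacity for load in loads.values())

weights = [4, 8, 1, 4, 2, 1]
print(is_feasible([1, 0, 1, 1, 0, 1], weights, capacity=10))  # True; z = 2 bins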

3.1. Tests Instances

Beasley [50] proposed a collection of test data sets, known as the OR-Library and maintained by Beasley, which were studied by Falkenauer [21]. This collection contains test data sets for a variety of Operational Research problems, including the BPP in several dimensions. For the one-dimensional BPP the collection contains eight data sets that can be classified into two classes.
(i) Uniform. The data sets binpack1 to binpack4 consist of 120, 250, 500, and 1000 items, respectively, of sizes uniformly distributed in (20, 100) to be packed into bins of size 150. The number of bins in the currently known best solution was found by [21].
(ii) Triplets. The data sets binpack5 to binpack8 consist of 60, 120, 249, and 501 items, respectively, with sizes in (25, 50) to be packed into bins of size 100. The number of bins can be obtained by dividing the size of the data set by three.
Scholl et al. [23] proposed another collection of data sets, of which only 1184 problems had been solved optimally. Alvim et al. [51] reported the optimal solutions for the remaining 26 problems. The collection contains three data sets.
(i) Set 1. It has 720 instances with items drawn from a uniform distribution on three intervals, [1, 100], [20, 100], and [30, 100]. The bin capacity is $c = 100$, 120, and 150, and the number of items is $n = 50$, 100, 200, and 500.
(ii) Set 2. It has 480 instances with $c = 1000$ and $n = 50$, 100, 200, and 500. Each bin holds an average of 3 to 9 items.
(iii) Set 3. It has 10 instances with $c = 100{,}000$, $n = 200$, and items drawn from a uniform distribution on [20,000, 35,000]. Set 3 is considered the most difficult of the three sets.

3.2. Classic Heuristics

Heuristics have been used to solve the BPP with good results. Reference [18] presents the following as classical heuristics; they can be used as online heuristics if the items must be packed as they arrive, or as offline heuristics if the items can be sorted before the packing process starts.
(i) Best Fit [17] puts the piece in the fullest bin that has room for it and opens a new bin if the piece does not fit in any existing bin.
(ii) Worst Fit [18] puts the piece in the emptiest bin that has room for it and opens a new bin if the piece does not fit in any existing bin.
(iii) Almost Worst Fit [18] puts the piece in the second emptiest bin if that bin has room for it and opens a new bin if the piece does not fit in any open bin.
(iv) Next Fit [15] puts the piece in the right-most open bin and opens a new bin if there is not enough room for it.
(v) First Fit [15] puts the piece in the left-most bin that has room for it and opens a new bin if it does not fit in any open bin.
Even though some heuristics perform better than those shown in this section [16, 19, 42, 52, 53], such heuristics have been the result of research on lower and upper bounds for the minimal number of bins. A minimal sketch of two of these rules appears after this list.
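First Fit and Best Fit can be written as follows (our sketch, not the paper's implementation); sorting the items in descending order before the call turns either one into its offline variant:

def first_fit(items, capacity):
    # First Fit: put the piece in the left-most bin that has room for it.
    bins = []                          # each bin is a list of pieces
    for item in items:
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:                          # no open bin has room: open a new bin
            bins.append([item])
    return bins

def best_fit(items, capacity):
    # Best Fit: put the piece in the fullest bin that still has room for it.
    bins = []
    for item in items:
        feasible = [b for b in bins if sum(b) + item <= capacity]
        if feasible:
            max(feasible, key=sum).append(item)   # fullest feasible bin
        else:
            bins.append([item])
    return bins

items = [4, 8, 1, 4, 2, 1]
print(len(first_fit(sorted(items, reverse=True), 10)))   # offline First Fit: 2 bins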

3.3. Fitness Measure

There are many fitness measures used to discern the results obtained by heuristic and metaheuristic algorithms. In [54] two fitness measures are shown: the first measure (see (6)) takes the difference between the number of bins used and the theoretical bound on the bins needed; the second (see (7)) was proposed in [47] and rewards full or almost full bins, the objective being to fill each bin, minimizing the free space:

$$f_1 = U - \left\lceil \frac{\sum_{j=1}^{m} s_j}{C} \right\rceil, \tag{6}$$

$$f_2 = \frac{\sum_{i=1}^{N} \left( F_i / C \right)^2}{N}, \tag{7}$$

where
(i) $U$ is the number of bins used,
(ii) $N$ is the number of containers,
(iii) $m$ is the number of pieces,
(iv) $s_j$ is the $j$th piece's size,
(v) $F_i = \sum_{j \in \text{bin } i} s_j$ is the total size of the pieces in bin $i$,
(vi) $C$ is the bin capacity.
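Under these definitions, measure (7) can be computed as below (our illustration; the exponent 2 follows the formulation above):

def bin_fitness(bins, capacity, k=2):
    # Measure (7): mean of (fill / capacity)^k over the N bins used; values
    # near 1 mean nearly full bins, so the measure is to be maximized.
    return sum((sum(b) / capacity) ** k for b in bins) / len(bins)

# Fuller bins score higher for the same items.
print(bin_fitness([[8, 2], [4, 4, 1, 1]], capacity=10))    # 1.0 (both bins full)
print(bin_fitness([[8, 1], [4, 4, 2], [1]], capacity=10))  # ~0.61 (one bin nearly empty)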

4. Grammar Design and Testing

Algorithm 5 shows the proposed approach, which allows the use of different fitness functions and search strategies to generate heuristics automatically.

Require:  SS search strategy, FF fitness function, BNF-G grammar, IS
     instances set.
(1)    for all Instance Sets in the Instances Set do
(2)   Select randomly an Instance from the Instance Set.
(3)   Start an initial population.
(4)   while stopping criterion not met do
(5)    Apply the mapping process using the grammar BNF-G, as seen in
        Figure 1, to obtain a heuristic for each element of the population.
(6)    Calculate the fitness value, using FF, for each element of the population
        by applying the generated heuristic to the selected instance.
(7)    Apply the search strategy SS to optimize the elements of the population.
(8)   end while
(9)   Apply the heuristic found to all instances from the Instance Set.
(10)  end for
(11)  return a heuristic for each instance set.
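Assuming the sketches above (map_genotype, a search strategy such as pso, and a fitness measure), the outer loop of Algorithm 5 could look as follows; all names here are illustrative, not the authors' code:

import random

def generate_heuristics(instance_sets, search_strategy, fitness_fn,
                        genotype_len=64, codon_max=256):
    # Sketch of Algorithm 5: evolve one packing heuristic per instance set.
    heuristics = {}
    for name, instances in instance_sets.items():
        instance = random.choice(instances)      # step (2): one random instance

        def objective(genotype, instance=instance):
            codons = [int(g) % codon_max for g in genotype]
            heuristic = map_genotype(codons)     # step (5): mapping, Figure 1
            if heuristic is None:                # mapping failed: too many wraps
                return float("inf")
            return fitness_fn(heuristic, instance)   # step (6): e.g. measure (6)

        best, _ = search_strategy(objective, dim=genotype_len)  # steps (4)-(8)
        heuristics[name] = map_genotype([int(g) % codon_max for g in best])
    return heuristics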

To improve on the Bin Packing heuristics it was necessary to design a grammar that represents the Bin Packing Problem. Grammar 1, shown in [55], is based on heuristic elements taken from [6]; however, the results obtained in [1] give 10% of solutions that cannot be applied to the instance, and for this reason that approach did not need to be included in the comparison against the results obtained.

That Grammar was improved in Grammar 2 [56], obtaining results similar to those of the BestFit heuristic. However, Grammar 2 cannot be applied to offline Bin Packing Problems because it does not sort the pieces. Grammar 3 is proposed to improve on the results obtained by Grammar 2, given that it can generate both online and offline heuristics:

Grammar 1. Grammar based on the FirstFit heuristic; it was proposed in [55] and uses the heuristic components shown in [57].

Grammar 2. Grammar proposed in [56], based on the BestFit heuristic.

Grammar 3. Grammar proposed to generate online and offline heuristics, based on Grammar 2, where
(i) a terminal gives the size of the current piece,
(ii) a terminal gives the bin capacity,
(iii) a terminal gives the sum of the pieces already in the bin,
(iv) Elements sorts the pieces,
(v) Bin sorts the bins based on the bin number,
(vi) Cont sorts the bins based on the bin contents,
(vii) Asc sorts in ascending order,
(viii) Des sorts in descending order.

In order to generate the heuristics, Grammar 3 was used. The search strategies applied to GE were PESO and PSO. The number of function calls was taken from [56], where it was explained that this number is only 10% of the number of function calls used by [6]. To obtain the parameters shown in Table 1, a fine-tuning process based on Covering Arrays (CA) [58] was applied; in this case the CA was generated with the Covering Array Library (CAS) [59] from the National Institute of Standards and Technology (NIST) (http://csrc.nist.gov/groups/SNS/acts/index.html).

In order to generate the heuristics, one instance from each set was used. Once the heuristic was obtained for an instance set, it was applied to all instances in the set to obtain the heuristic's fitness. The instance sets used were detailed in Section 3.1. Thirty-three independent experiments were performed, and the median was used to compare the results against those obtained with the heuristics described in Section 3. The comparison was performed through the nonparametric Friedman test [26, 60]; this test uses a post hoc analysis to discern the performance differences between the experiments and gives a ranking of them.
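For reference, the Friedman test over per-instance-set results can be run as below (our illustration with made-up numbers; SciPy returns the test statistic and p value, while the post hoc ranking is computed separately):

from scipy.stats import friedmanchisquare

# Hypothetical bins-used totals for three methods over five instance sets.
ge_peso  = [49, 101, 203, 403, 27]
ge_pso   = [50, 103, 204, 405, 28]
best_fit = [52, 104, 208, 410, 29]

stat, p = friedmanchisquare(ge_peso, ge_pso, best_fit)
print(stat, p)   # a small p value means the methods perform differently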

The method to apply the heuristics generated by Grammar 3 to an instance set is described below (a code sketch follows the list).
(i) The generated heuristic is applied to each instance in the instance set.
(ii) The generated heuristic has the option to sort the items before starting the packing process, treating the instances as offline instances.
(iii) The next part of the generated heuristic says how to sort the bins; many heuristics require sorting the bins before packing an item.
(iv) The last part, the inequality, determines the rule used to pack an item.
Sometimes the generated heuristic does not sort the items, which makes it work like an online heuristic. If it does not sort the bins, the items are packed into the bins in the order the bins were created.
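As a sketch of this three-part structure (item order, bin order, packing inequality), the following shows how a decoded heuristic might be applied; representing the heuristic as these three options is our assumption:

def apply_heuristic(items, capacity, sort_items=None, sort_bins=None,
                    fits=lambda item, content, cap: content + item <= cap):
    # Apply a Grammar-3-style heuristic: optionally sort the pieces (offline
    # behavior), optionally sort the open bins, then pack by the inequality.
    if sort_items:                          # e.g. Elements + Des
        items = sorted(items, reverse=(sort_items == "Des"))
    bins = []
    for item in items:
        order = bins                        # default: bin-creation order
        if sort_bins == "Cont":             # sort bins by their contents
            order = sorted(bins, key=sum)
        for b in order:
            if fits(item, sum(b), capacity):
                b.append(item)
                break
        else:                               # inequality failed everywhere
            bins.append([item])
    return bins

# Offline variant: pieces descending, bins ordered by contents.
print(len(apply_heuristic([4, 8, 1, 4, 2, 1], 10,
                          sort_items="Des", sort_bins="Cont")))   # 2 bins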

5. Results

In Table 2 the results obtained with the online and offline heuristics (described in Section 3.2) are shown. Results obtained by an exact algorithm, the MTP algorithm [40], were included, and the values of the fitness function from Section 3.3 are shown together with the number of bins used. A row was added showing the difference in containers with respect to the optimum. These results were obtained by applying the heuristics to each instance; the results over each instance set were then added up.

Table 3 shows examples of heuristics generated using the proposed Grammar with GE for each instance set; some heuristics could be simplified, but that is not part of the present work.

The results obtained by PSO and PESO with the Grammars are shown in Table 4; these results are the medians of 33 independent experiments. Using the results obtained by the heuristics and by GE with PESO and PSO, the Friedman nonparametric test was applied to discern the results. The statistic obtained by the Friedman test is 85.789215 with a p value of 6.763090E-11; this means that the tested heuristics have different performance. It was therefore necessary to apply a post hoc procedure to obtain the heuristics ranking shown in Table 5.

Both Tables 2 and 4 have an extra row at the bottom with the total of remaining bins. The results obtained by PESO using Grammar 3 show that this automatically generated heuristic uses fewer bins than the other, classic heuristics.

6. Conclusions and Future Works

In the present work a Grammar was proposed to generate online and offline heuristics in order to improve on the heuristics generated by other grammars and by humans. It was also proposed to use PESO, a search strategy based on Swarm Intelligence, to avoid the problems observed in PSO.

From the results obtained in Section 5, it was concluded that it is possible to generate good heuristics with the proposed Grammar. Additionally, it can be seen that the quality of these heuristics strongly depends on the grammar used to evolve them.

The grammar proposed in the present work shows that it is possible to generate heuristics with better performance than the well-known BestFit, FirstFit, NextFit, WorstFit, and Almost WorstFit heuristics from Section 3.2, regardless of whether the heuristics are online or offline. While those classical heuristics are designed to work with all the instance sets, GE adjusts its heuristics automatically to work with a single instance set, which makes it possible for GE to generate offline or online heuristics. GE can generate as many heuristics as there are instance sets being worked on, trying to adapt the best heuristic that can be generated with the Grammar used.

The results obtained by PESO are better than those obtained by PSO when using Grammars 2 and 3; with Grammar 1, PESO and PSO have the same performance.

The current investigation is based on the one-dimensional Bin Packing Problem, but this methodology can be used to solve other problems, owing to the generality of the approach. It remains to apply heuristic generation to other problems and to investigate whether GE with PESO as its search strategy gives better results than GP or GE with other search strategies.

It will be necessary to find a methodology to choose the instance or instances for the training process, as well as to determine whether the instances are alike, or to classify the instances into groups with the same features so as to generate only one heuristic per group.

It will also be necessary to research other metaheuristics that do not need parameter tuning, because the metaheuristics shown in the present paper were tuned using Covering Arrays.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors want to thank the Instituto Tecnológico de León (ITL) for the support provided for this research. Additionally, the authors want to acknowledge the generous support from the Consejo Nacional de Ciencia y Tecnología (CONACyT) of Mexico for this research project.