Research Article  Open Access
Hangwei Feng, Hong Ni, Ran Zhao, Xiaoyong Zhu, "An Enhanced Grasshopper Optimization Algorithm to the Bin Packing Problem", Journal of Control Science and Engineering, vol. 2020, Article ID 3894987, 19 pages, 2020. https://doi.org/10.1155/2020/3894987
An Enhanced Grasshopper Optimization Algorithm to the Bin Packing Problem
Abstract
The grasshopper optimization algorithm (GOA) is a novel metaheuristic algorithm. Because of its easy deployment and high accuracy, it is widely used in a variety of industrial scenarios and obtains good solutions. At the same time, the GOA has some shortcomings: (1) the original linear convergence parameter leaves the processes of exploration and exploitation unbalanced; (2) the convergence speed is unstable; and (3) the algorithm easily falls into local optima. In this paper, we propose an enhanced grasshopper optimization algorithm (EGOA) that uses a nonlinear convergence parameter, a niche mechanism, and the β-hill climbing technique to overcome these shortcomings. To evaluate EGOA, we first use the benchmark set of the original GOA authors to test the performance improvement of EGOA over the basic GOA; the analysis covers exploration ability, exploitation ability, and convergence speed. Second, we use the recent CEC2019 benchmark set to test the optimization ability of EGOA on complex problems. The analysis of the results on both benchmark sets shows that EGOA performs better than the other five metaheuristic algorithms. To further evaluate EGOA, we also apply it to an engineering problem, the bin packing problem, testing EGOA and five other metaheuristic algorithms on the SchWae2 instance. The Friedman test on the results shows that EGOA outperforms the other algorithms on bin packing problems.
1. Introduction
The bin packing problem (BPP) is one of the most important combinatorial optimization problems. It is widely used in many fields of science and engineering and is the basis of many practical engineering optimization problems, including processor scheduling [1] and cloud computing resource allocation [2] in the field of computer science. Because the BPP is NP-hard [3], basic heuristic algorithms have been proposed for solving it. These heuristics ensure fast and accurate solutions for small-scale bin packing problems, but their performance deteriorates as the problem scale grows. Thus, metaheuristic algorithms, which can find high-quality solutions by balancing the exploitation and exploration of the search space, have become a new trend for solving the BPP. In [4], the authors combine a genetic algorithm with a grouping mechanism. In [5], the authors apply first fit and ranked order values (ROVs) in CS to solve the BPP. In [6], the authors solve the BPP using an improved whale optimization algorithm.
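As a concrete baseline for the heuristics mentioned above, the classical first-fit decreasing rule can be sketched in a few lines. This is a minimal illustration in Python (the paper's experiments use MATLAB), not any of the cited implementations:

```python
def first_fit_decreasing(weights, capacity):
    """Pack items into bins of fixed capacity using first-fit decreasing:
    sort items by weight (largest first) and place each into the first bin
    with enough remaining room, opening a new bin when none fits."""
    bins = []
    for w in sorted(weights, reverse=True):
        for b in bins:
            if sum(b) + w <= capacity:
                b.append(w)
                break
        else:
            bins.append([w])  # no existing bin fits: open a new one
    return bins
```

For example, `first_fit_decreasing([4, 8, 1, 4, 2, 1], 10)` packs the six items into two bins, which is optimal here; in general, first-fit decreasing is only an approximation, which motivates the metaheuristic approaches discussed next.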
Metaheuristic algorithms have developed rapidly and have been widely applied in many different fields, such as signal detection [7], resource allocation [8], load balancing [9], feature selection [10], task scheduling [11], and engineering applications [12]. Representative algorithms include the Dragonfly Algorithm (DA) [13], Ant Lion Optimization (ALO) [14], the Bee Algorithm (BA) [15], and Particle Swarm Optimization (PSO) [16, 17]. These algorithms seek inspiration from physical or biological phenomena to solve optimization problems, including ant colony foraging, bird migration, bee colony behavior, grey wolf hunting, and fish schooling [18]. It has been proved in practice that they are superior to traditional optimization methods in many application scenarios.
Metaheuristic algorithms form a randomly initialized population for a given problem, evaluate the solutions using the objective function(s), and improve the random solutions over iterations until a terminating condition is satisfied, in order to find the global optimum. This way of finding solutions gives them the following advantages: (i) simplicity, as they are inspired by fairly simple concepts and are easy to apply; (ii) flexibility, with no special requirements on the objective function; and (iii) derivation-freeness, as no derivatives need to be calculated in the process of finding the optimal solution. These advantages mean that metaheuristic algorithms are not limited to specific problems and have a wide range of applications.
Although there are differences between metaheuristic algorithms, they all share one feature: two phases during the search process, exploration and exploitation. In the exploration phase, the algorithm explores the search space as widely as possible for promising regions, while in the exploitation phase it focuses on local search for the optimal solution around those promising regions. Balancing exploration and exploitation is the key to designing and improving a metaheuristic algorithm.
The GOA is a state-of-the-art population-based metaheuristic algorithm that was inspired by the long-range and abrupt movements of adult grasshoppers in groups [19]. GOA is widely used in a variety of industrial scenarios. Aljarah proposes a hybrid approach based on the grasshopper algorithm to optimize the parameters of an SVM model [20]. Hekimoğlu and Ekinci employ GOA to solve many optimization problems in an automatic voltage regulator system [21]. Wu proposes a dynamic GOA for optimizing the distributed trajectories of unmanned aerial vehicles in urban environments [22]. In [23], the authors apply the basic multiobjective GOA to solve several benchmark problems with superior performance.
While the GOA can obtain good solutions in a reasonable time frame, it presents some shortcomings: (1) the original linear convergence parameter leaves the processes of exploration and exploitation unbalanced, (2) the convergence speed is unstable, and (3) the algorithm easily falls into local optima. Some studies have proposed improved GOA algorithms. Luo proposes an improved grasshopper algorithm using Lévy flight in [24]. Tharwat applies GOA to constrained and unconstrained multiobjective optimization problems in [25]. Ewees introduces an opposition-based learning strategy to the GOA in [26]; this strategy improves the ability to jump out of local optima and addresses movement issues. However, the improvements on the other shortcomings are not significant, because these studies consider neither the balance between exploration and exploitation nor the relationship between population diversity and convergence speed during the swarm optimization process.
To overcome these disadvantages, this paper proposes an enhanced grasshopper optimization algorithm. The main contributions of this paper can be summarized as follows: (i) we introduce a nonlinear convergence parameter into the basic GOA to balance the exploration and exploitation phases and improve overall performance; (ii) we apply a niche repulsing mechanism to the basic GOA, which directs the swarm optimization and preserves the diversity of the search space to increase the speed of convergence; and (iii) we adapt the β-hill climbing (BHC) technique to GOA to avoid falling into local optima.
The remainder of this paper is organized as follows. The basic GOA and the EGOA are introduced in Sections 2 and 3. Experiments on two benchmark sets are presented and analyzed in Section 4. The BPP and the applicability of the EGOA to the BPP are discussed in Section 5. Finally, conclusions and future work are discussed in Section 6.
2. Basic Grasshopper Optimization Algorithm
2.1. Biological Inspiration
Grasshoppers are considered to be pests based on the damage they inflict on crops and vegetation. Instead of acting individually, grasshoppers form some of the largest swarms among all living creatures. Millions of grasshoppers jump and move like large rolling cylinders. The influences of the individuals in a swarm, wind, gravity, and food sources all affect swarm movement.
2.2. Main Procedures for the GOA
The GOA is a novel swarm-intelligence-based metaheuristic algorithm that was inspired by the long-range and abrupt movements of adult grasshoppers in groups. Food source seeking is an important behavior of grasshopper swarms. Metaheuristic algorithms logically divide the search process into two phases: exploration and exploitation. The long-range and abrupt movements of grasshoppers represent the exploration phase, and local movements to search for better food sources represent the exploitation phase.
A mathematical model for this behavior was presented by Mirjalili in [19]. This model is defined as follows:

$$X_i = S_i + G_i + A_i, \tag{1}$$

where $X_i$ is the position of the $i$th grasshopper, $S_i$ represents the social interaction in a group, $G_i$ is the force of gravity acting on the $i$th grasshopper, and $A_i$ is the wind direction. By expanding $S_i$, $G_i$, and $A_i$ in (1), the equation can be rewritten as follows:

$$X_i = \sum_{\substack{j=1 \\ j \neq i}}^{N} s\left(\left|x_j - x_i\right|\right)\frac{x_j - x_i}{d_{ij}} - g\,\hat{e}_g + u\,\hat{e}_w, \tag{2}$$

where $s$ is a function simulating the impact of social interactions and $N$ is the number of grasshoppers. $-g\,\hat{e}_g$ is the expanded $G_i$ component, where $g$ is the gravitational force and $\hat{e}_g$ is a unit vector pointing toward the center of the earth. $u\,\hat{e}_w$ is the expanded $A_i$ component, where $u$ is a constant drift and $\hat{e}_w$ is a unit vector pointing in the direction of the wind. $d_{ij}$ is the distance between the $i$th and $j$th grasshoppers and is calculated as $d_{ij} = \left|x_j - x_i\right|$.
Because grasshoppers quickly reach their comfort zones and the swarm then exhibits poor convergence, and because the influences of wind and gravity are far weaker than the interactions between grasshoppers, this mathematical model is modified as follows:

$$X_i^d = c\left(\sum_{\substack{j=1 \\ j \neq i}}^{N} c\,\frac{ub_d - lb_d}{2}\, s\left(\left|x_j^d - x_i^d\right|\right)\frac{x_j - x_i}{d_{ij}}\right) + \hat{T}_d, \tag{3}$$

where $ub_d$ and $lb_d$ represent the upper and lower boundaries of the $d$th dimension of the search space, $\hat{T}_d$ is the value of the $d$th dimension of the target (best solution found so far), and $c$ is a decreasing coefficient that balances the processes of exploitation and exploration, defined as follows:

$$c = c_{\max} - l\,\frac{c_{\max} - c_{\min}}{L}, \tag{4}$$

where $c_{\max}$ is the maximum value (equal to 1), $c_{\min}$ is the minimum value (equal to 0.00001), $l$ represents the current iteration, and $L$ is the maximum number of iterations.
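The position update above can be sketched as follows. This is a minimal Python illustration, not the authors' MATLAB implementation; the social-force parameters f = 0.5 and l = 1.5 are the values used in the original GOA paper:

```python
import numpy as np

def s(r, f=0.5, l=1.5):
    """Social-interaction force of the basic GOA (attraction/repulsion)."""
    return f * np.exp(-r / l) - np.exp(-r)

def goa_step(X, target, c, lb, ub):
    """One GOA position update for all N agents.

    X: (N, d) array of positions; target: (d,) best solution so far;
    c: decreasing coefficient; lb, ub: scalar bounds shared by all dimensions."""
    N, d = X.shape
    X_new = np.empty_like(X)
    for i in range(N):
        social = np.zeros(d)
        for j in range(N):
            if i == j:
                continue
            dist = np.linalg.norm(X[j] - X[i])
            unit = (X[j] - X[i]) / (dist + 1e-12)
            # per-dimension distances drive the force; c shrinks the comfort zone
            social += c * (ub - lb) / 2.0 * s(np.abs(X[j] - X[i])) * unit
        X_new[i] = np.clip(c * social + target, lb, ub)
    return X_new
```

Note that the coefficient c appears twice, once inside the sum to shrink the comfort zone and once outside to shrink the overall step toward the target, which is what makes its schedule so influential.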
3. Enhanced Grasshopper Optimization Algorithm
First, the EGOA utilizes a nonlinear convergence parameter to balance the exploration and exploitation phases. Second, the EGOA is hybridized with a niche mechanism to balance diversity and convergence speed. Finally, the EGOA is combined with the β-hill climbing technique to avoid local optima.
3.1. Nonlinear Convergence Parameter
In metaheuristic algorithms, the convergence parameter c directly affects the step size: the larger the convergence parameter, the larger the step size. In the exploration phase, the step size should be as large as possible so that the algorithm searches a wide range for the global optimum and avoids falling into local optima. In the exploitation phase, the step size should be as small as possible so that the algorithm gradually converges to the global optimum without skipping over it. In the basic GOA, the convergence parameter c decreases linearly across both phases, so the step size cannot be adapted to the distinct characteristics of the two phases, which limits the performance of the algorithm. To make up for this shortcoming of the linear convergence parameter, we introduce the tanh function, a nonlinear function widely and successfully used in the field of deep learning. Its nonlinear shape allows the schedule of c to match the characteristics of the exploration and exploitation phases and thus improve performance in each phase of GOA. The tanh function is defined as follows:

$$\tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}.$$
A nonlinear convergence parameter based on a variant of the tanh function is then introduced, in which the argument of the variant depends on the current iteration $l$ and an adjustment factor that controls the steepness of the transition between the two phases.
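The exact form of the paper's variant is not reproduced here, but one plausible tanh-shaped schedule can be sketched against the basic linear one. In this sketch, the steepness factor `k` stands in for the paper's adjustment factor and is an assumption of the illustration:

```python
import math

def c_linear(l, L, cmax=1.0, cmin=1e-5):
    """Basic GOA: linearly decreasing convergence coefficient (eq. (4))."""
    return cmax - l * (cmax - cmin) / L

def c_tanh(l, L, cmax=1.0, cmin=1e-5, k=4.0):
    """Hypothetical tanh-shaped schedule (k is an assumed adjustment factor):
    c stays near cmax early (wide exploration steps), transitions quickly
    around mid-run, and settles near cmin (fine exploitation steps)."""
    t = 2.0 * l / L - 1.0  # map iteration progress onto [-1, 1]
    return cmin + (cmax - cmin) * (1.0 - math.tanh(k * t)) / 2.0
```

Compared with the linear schedule, this curve spends more iterations at large step sizes early and at small step sizes late, matching the phase-specific behavior the paragraph above calls for.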
3.2. Niche Mechanism
The core idea of the GOA is that grasshoppers interact with each other: when solving a problem, each agent mimics this interaction and searches the solution space for the global optimum. During this search, when a local optimum is found, all agents move closer to it. If this movement is too fast, the swarm loses diversity, which traps the search in the local optimum and slows down the convergence speed. To balance the convergence speed and diversity of GOA, we introduce niche technology.
Niche technology is inspired by the natural phenomenon that creatures tend to live alongside similar creatures. It can be used to balance GOA's convergence speed and the diversity of the swarm. Therefore, to maintain the diversity of the swarm, we design a niche repulsing mechanism for the EGOA. Through this mechanism, the algorithm maintains the diversity of the swarm while still preserving promising local optima. The mechanism is defined as follows:
First, the Euclidean distance $d_{ij}$ between each pair of search agents $x_i$ and $x_j$ is calculated (there are N search agents). Then, for each agent $x_i$, the minimum distance $d_{\min,i}$ to any other search agent in the population is obtained, and a niche radius is derived from it and the dimensionality of the search agents. Afterwards, whenever $d_{ij}$ is less than this radius, the fitnesses of $x_i$ and $x_j$ are compared, and a penalty p is assigned to the worse of the two. Finally, after all individuals have been processed, they are ranked according to their fitness, and the individuals with poor fitness are eliminated. To maintain the population size and increase diversity, the best and worst individuals are selected to generate new individuals.
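The penalty step above can be sketched as follows. This is an illustrative Python sketch under stated assumptions, not the paper's exact procedure: minimization is assumed, and `radius` and the penalty `p` stand in for the paper's derived niche radius and penalty value:

```python
import numpy as np

def niche_penalize(X, fitness, radius, p=1e6):
    """Niche repulsing sketch: whenever two agents crowd within `radius`
    of each other, the one with worse fitness receives a large penalty,
    so the subsequent fitness ranking pushes it toward elimination and
    regeneration. Smaller fitness is better (minimization)."""
    fit = np.asarray(fitness, dtype=float).copy()
    n = len(X)
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(X[i] - X[j]) < radius:
                worse = i if fit[i] > fit[j] else j  # larger fitness is worse
                fit[worse] += p
    return fit
```

Ranking agents by the penalized fitness and replacing the worst-ranked ones then thins out crowded niches while leaving isolated, diverse agents untouched.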
3.3. The βHill Climbing Technique
As the iterative process continues, the GOA behaves like many other metaheuristic algorithms: it focuses only on converging toward the global optimum and lacks a mechanism for avoiding local optima. When it falls into a local optimum, the search cannot continue, which is fatal when solving practical engineering problems.
To allow the algorithm to escape from local optima, the β-hill climbing (BHC) technique [27] is introduced into the EGOA. The BHC technique iteratively refines randomly generated approximate solutions. In the BHC technique, a stochastic strategy called the β-operator is employed to establish a fine balance between exploration and exploitation throughout a global search.
The BHC technique begins exploitation from a search agent $x$ representing the current (possibly poor) solution. It uses the β-operator throughout its exploitative steps to generate a new solution $x'$. Each element of the new solution either keeps its current value or is filled randomly, with probability β, as follows:

$$x'_i = \begin{cases} x_i, & z \geq \beta, \\ r, & z < \beta, \end{cases}$$

where $r$ and $z$ represent random values in the range (0, 1). Next, the BHC technique compares $f(x')$ to $f(x)$ and records the best (downhill) value:

$$x = \begin{cases} x', & f(x') \leq f(x), \\ x, & \text{otherwise.} \end{cases}$$
In terms of exploration, the BHC technique's ability to escape from local optima using the β-operator is the key to avoiding premature convergence (Algorithm 1).
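The β-operator and the downhill acceptance rule can be sketched as follows. This is a minimal Python illustration of the two steps described above, assuming scalar bounds and a minimization objective:

```python
import random

def beta_operator(x, lb, ub, beta=0.3, rng=random):
    """β-operator: each element keeps its current value or, with
    probability beta, is replaced by a uniform random value in [lb, ub]
    (scalar bounds assumed here for simplicity)."""
    return [rng.uniform(lb, ub) if rng.random() < beta else xi for xi in x]

def bhc_step(x, f, lb, ub, beta=0.3, rng=random):
    """One greedy (downhill) BHC move: generate a candidate with the
    β-operator and accept it only if it improves the objective f."""
    x_new = beta_operator(x, lb, ub, beta, rng)
    return x_new if f(x_new) < f(x) else x
```

Because acceptance is greedy, the objective value never worsens, while the random element resets give the search a chance to leave the basin of a local optimum.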

4. Experimental Results on Benchmarks
In this section, the efficiency of the proposed EGOA is compared to that of the basic GOA and several other metaheuristic algorithms, namely, the Dragonfly Algorithm (DA), Ant Lion Optimization (ALO), Particle Swarm Optimization (PSO), and opposition-based learning GOA (OBLGOA), on various benchmark problems. All codes utilized in this section were implemented in MATLAB 9.3 (R2017b) and executed on a PC running the Windows 10 64-bit Professional operating system with 16 GB of RAM. To fully verify the performance of EGOA, this paper uses two benchmark sets. Benchmark set 1 is the test environment of the original GOA; it is used to verify the performance improvement over the original algorithm and the comparison with other algorithms [28, 29]. Benchmark set 2 is the recent metaheuristic performance benchmark set CEC2019 [30]; it further verifies that EGOA also performs well on a newer benchmark set. To investigate the differences between the results obtained by the EGOA and those obtained by the other algorithms, a nonparametric Wilcoxon rank sum test [31] with a significance level of 5% was adopted in this study. The Wilcoxon rank sum test generates a p value to determine whether two sets of results come from the same distribution: the higher the p value, the more similar the two datasets. If the p value is less than 0.05, the difference between the two datasets is considered statistically significant.
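The rank-sum comparison used throughout this section can be sketched as follows. This Python sketch uses the standard normal approximation, which is adequate at the 30-trial sample sizes used here; it assumes no tied values (reasonable for continuous fitness results) and is an illustration, not the authors' statistical code:

```python
import math

def rank_sum_p(a, b):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation.
    a, b: lists of distinct numeric results from two algorithms."""
    n1, n2 = len(a), len(b)
    ranks = {v: r + 1 for r, v in enumerate(sorted(a + b))}
    W = sum(ranks[v] for v in a)                  # rank sum of sample a
    mu = n1 * (n1 + n2 + 1) / 2.0                 # mean of W under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (W - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2.0))     # two-sided p-value
```

A p value below 0.05 from this test indicates that the two algorithms' fitness distributions differ significantly, which is how the tables of p values in this section are read.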
4.1. Benchmark Set 1
Benchmark set 1 used in this paper contains 29 test functions. Different test functions have different characteristics, which together comprehensively measure the performance of an algorithm. The test functions include unimodal, multimodal, fixed-dimension multimodal, and composite functions, listed in Tables 1 and 2. In the tables, the columns give the objective fitness function, the dimensionality of each function, the boundaries of the search space, the optimal value, and the function type.


For the unimodal, multimodal, and fixeddimension multimodal functions, each of the benchmark functions uses 30 search agents over 500 iterations. The presented results were recorded based on 30 independent trials with random initial conditions to calculate statistical results, including the average fitness (avg) which represents the average performance and reliability of the algorithm, standard deviation of fitness (std) which represents the stability of the algorithm, best fitness (best) which represents the best optimization ability of the algorithm, and worst fitness (worst) which represents the capability boundary.
4.1.1. Evaluation of Exploitation Capability
Unimodal functions, which have only one global optimum, test the exploitation capabilities of algorithms. Table 3 shows the results for the unimodal functions. It can be clearly seen that EGOA achieves the best performance in F_{1}, F_{2}, and F_{4} and the second best performance in F_{7}. Moreover, EGOA outperforms the other algorithms in F_{3} in terms of the avg and best fitness. In addition, EGOA achieves the best avg fitness among all compared algorithms in F_{5}. Besides, most of the p values in Table 4 are much less than 0.05, which means that the results of EGOA and those of the other algorithms can be considered statistically distinct. From the abovementioned results, we can conclude that EGOA has a good exploitation capability. The main reasons for the superior performance of EGOA are the embedded BHC local search and the exploitative patterns inherited from the basic GOA.


4.1.2. Evaluation of Exploration Capability
Multimodal functions, which have several local optima, are used for evaluating exploration ability. Tables 5 and 6 list the results of the compared algorithms on multimodal test functions and fixed-dimension multimodal functions, respectively. The EGOA works best in 10 of the 16 benchmark tests. Among the remaining six benchmark tests, where it does not achieve the best results, its results on F_{8}, F_{13}, F_{15}, and F_{19} are nevertheless very close to the optimal values. These results show that the EGOA has competitive exploration ability, and the p values presented in Table 4 demonstrate that this advantage is statistically significant.


4.1.3. Ability to Avoid Local Minima
Composite functions, which combine several basic functions, are very challenging test problems for metaheuristic algorithms. Furthermore, they are suitable for benchmarking exploration and exploitation simultaneously because they contain a massive number of local optima. The results for the composite benchmark functions listed in Table 7 indicate that the EGOA outperforms the other algorithms in F_{24} and F_{29}, gets the best avg fitness in F_{25}, and achieves the best std and avg fitness in F_{27}. These results demonstrate that the EGOA is able to escape from local optima.

4.1.4. Analysis of EGOA Convergence Curves
In this subsection, the convergence speed of the EGOA is analyzed. Several benchmark functions were chosen and executed with 30 search agents over 500 iterations. The results for the EGOA, GOA, DA, ALO, PSO, and OBLGOA are presented in Figure 1. The abscissa shows the current iteration number, and the ordinate shows the best fitness value in each iteration. Representative convergence curves from each part of the benchmark functions are selected, which show that EGOA outperforms well-known metaheuristic algorithms on different kinds of benchmark functions. During the iterations of F_{1}, F_{4}, and F_{10}, the convergence speed of EGOA is roughly in the middle of the five comparison algorithms before about 250 iterations. After 250 iterations, however, an inflection point appears and EGOA accelerates its convergence. This is because EGOA is hybridized with a niche mechanism that provides better individual direction and faster convergence. Besides, during the iterations of F_{8}, F_{15}, and F_{21}, the slope of the convergence curve is large in the initial iterations, and after several jumps the algorithm escapes the local optimum and finally obtains the optimal fitness. This can be explained by the nonlinear convergence parameter, which balances the two search phases, and by the combination with the β-hill climbing technique, which improves the ability to jump out of local optima. Overall, the convergence curves demonstrate that the proposed EGOA provides relatively fast convergence in most of the iterations.
4.2. Benchmark Set 2
The CEC2019 benchmark set contains 10 test functions. Compared with previous CEC benchmark sets, the complexity of the test functions is significantly increased, and more attention is paid to an algorithm's ability to find an accurate solution. In Table 8, the columns give the objective fitness function, the dimensionality of each function, the boundaries of the search space, and the optimal value.

Each test function uses 30 search agents over 500 iterations. The presented results were recorded over 30 independent trials with random initial conditions to calculate statistical results, including the average fitness (avg), standard deviation of fitness (std), best fitness (best), and worst fitness (worst), with the same meanings as in benchmark set 1. To assess the accuracy of the algorithms in this part of the solution space, we focus mainly on the average fitness and best fitness.
4.2.1. Evaluation of EGOA’s Performance in CEC2019
The CEC2019 test functions, which have complicated solution space, are more challenging test problems for metaheuristic algorithms. Furthermore, they are suitable for evaluating the algorithm’s ability to find accurate solutions in the complicated search process. The results for CEC2019 test functions listed in Table 9 indicate that EGOA works best in f_{4} and f_{10} and second in f_{5} and f_{6}. Meanwhile, it gets the best avg fitness in 7 of the 10 benchmark tests and achieves the best fitness in 5 of 10. The results demonstrate that compared with other algorithms, EGOA has a good average performance and higher reliability. At the same time, it shows the best optimization ability in most search processes.

4.2.2. Analysis of EGOA’s Convergence Curves in CEC2019
In this subsection, the convergence behavior of the EGOA on the CEC2019 test functions is investigated. Several benchmark functions were chosen and executed with 30 search agents over 500 iterations. The results for part of the benchmark functions are presented in Figure 2. The parameter spaces indicate that the solution spaces of the CEC2019 test functions are more complex than those of benchmark set 1, and it is more challenging to find the optimal fitness. During the iterations of f_{4}, f_{7}, and f_{10}, EGOA maintains a relatively steep slope in the initial iterations and, after several sudden drops of the curve, converges to the optimal fitness in the search space, which is far superior to the other algorithms. Although OBLGOA achieves the best result on f_{2}, EGOA ranks second. Taken together, these results show that the proposed EGOA also has superior search ability in more complex solution spaces compared with well-known metaheuristics.
5. Application of the EGOA to the BPP
5.1. BPP Formulation
The BPP consists of packing a set of items with different weights into a minimum number of bins, which may have different capacities. Mathematically, the BPP can be defined as a programming problem as follows [32]: