Abstract

The grasshopper optimization algorithm (GOA) is a novel metaheuristic algorithm. Because it is easy to deploy and highly accurate, it is widely used in a variety of industrial scenarios and obtains good solutions. At the same time, the GOA has several shortcomings: (1) the original linear convergence parameter leaves the exploration and exploitation processes unbalanced, (2) the convergence speed is unstable, and (3) the algorithm easily falls into local optima. In this paper, we propose an enhanced grasshopper optimization algorithm (EGOA) that uses a nonlinear convergence parameter, a niche mechanism, and the β-hill climbing technique to overcome these shortcomings. To evaluate the EGOA, we first adopt the benchmark set used by the GOA authors to measure the improvement of the EGOA over the basic GOA, analyzing exploration ability, exploitation ability, and convergence speed. Second, we use the recent CEC2019 benchmark set to test the optimization ability of the EGOA on complex problems. The results on both benchmark sets show that the EGOA outperforms five other metaheuristic algorithms. To evaluate the EGOA further, we also apply it to an engineering problem, the bin packing problem, testing the EGOA and five other metaheuristic algorithms on the SchWae2 instances. A Friedman test of the results confirms that the EGOA outperforms the other algorithms on bin packing problems.

1. Introduction

The bin packing problem (BPP) is one of the most important combinatorial optimization problems. It arises in many fields of science and engineering and underlies many practical engineering optimization problems, including processor scheduling [1] and cloud computing resource allocation [2] in computer science. The BPP is NP-hard [3], so basic heuristic algorithms have been proposed for solving it. These heuristics can produce fast and accurate solutions for small-scale bin packing problems, but their performance deteriorates as the problem scale grows. Thus, metaheuristic algorithms, which can find high-quality solutions by adjusting the exploitation and exploration of the search space, have become a new trend for solving the BPP. In [4], the authors combine a genetic algorithm with a grouping mechanism. In [5], the authors apply first fit and ranked order values (ROVs) in CS to solve the BPP. In [6], the authors solve the BPP using an improved whale optimization algorithm.

Metaheuristic algorithms have developed rapidly and have been widely applied in many different fields, such as signal detection [7], resource allocation [8], load balancing [9], feature selection [10], task scheduling [11], and engineering applications [12]. Representative algorithms include the Dragonfly Algorithm (DA) [13], Ant Lion Optimizer (ALO) [14], Bee Algorithm (BA) [15], and Particle Swarm Optimization (PSO) [16, 17]. These algorithms draw inspiration from physical or biological phenomena to solve optimization problems, including ant colony foraging, bird migration, bee colony behavior, grey wolf hunting, and fish schooling [18]. Practice has shown that they are superior to traditional optimization methods in many application scenarios.

Metaheuristic algorithms form a randomly initialized population for a given problem, evaluate the solutions using the objective function(s), and improve the random solutions over the iterations until a terminating condition is satisfied, in order to find the global optimum. This way of finding solutions gives them the following advantages: (i) simplicity, since they are inspired by fairly simple concepts and are easy to apply; (ii) flexibility, since they place no special requirements on the objective function; and (iii) derivative independence, since no derivatives need to be computed while searching for the optimal solution. These advantages mean that metaheuristic algorithms are not limited to specific problems but have a wide range of applications.

Although different metaheuristic algorithms differ in their details, they all share one feature: two phases during the search process, exploration and exploitation. In the exploration phase, the algorithm explores the search space as widely as possible for promising regions, while in the exploitation phase, it focuses on local search for the optimal solution around those promising regions. How to balance exploration and exploitation has become the key to designing and improving a new metaheuristic algorithm.

The GOA is a state-of-the-art population-based metaheuristic algorithm inspired by the long-range and abrupt movements of adult grasshoppers in groups [19]. The GOA is widely used in a variety of industrial scenarios. Aljarah et al. propose a hybrid approach based on the GOA to optimize the parameters of an SVM model [20]. Hekimoğlu and Ekinci employ the GOA to solve optimization problems in an automatic voltage regulator system [21]. Wu et al. propose a dynamic GOA for optimizing the distributed trajectories of unmanned aerial vehicles in urban environments [22]. In [23], the authors apply the basic multiobjective GOA to several benchmark problems with superior performance.

While the GOA can obtain good solutions in a reasonable timeframe, it has some shortcomings: (1) the original linear convergence parameter leaves the exploration and exploitation processes unbalanced, (2) the convergence speed is unstable, and (3) the algorithm easily falls into local optima. Several studies have proposed improved GOA variants. Luo et al. propose an improved grasshopper algorithm using Lévy flight in [24]. Tharwat et al. apply the GOA to constrained and unconstrained multiobjective optimization problems in [25]. Ewees et al. introduce an opposition-based learning strategy into the GOA in [26]; this strategy improves the ability to jump out of local optima and addresses movement issues. However, the improvement with respect to the other shortcomings is not significant, because these works consider neither the balance between exploration and exploitation nor the relationship between population diversity and convergence speed during the swarm optimization process.

To overcome these disadvantages, this paper proposes an enhanced grasshopper optimization algorithm. The main contributions of this paper can be summarized as follows:
(i) We introduce a nonlinear convergence parameter into the basic GOA to balance the exploration and exploitation phases and improve the overall performance.
(ii) We apply a niche repulsing mechanism to the basic GOA. This mechanism directs the swarm during optimization and preserves the diversity of the search space to increase the convergence speed.
(iii) We adapt the β-hill climbing (BHC) technique to the GOA to avoid falling into local optima.

The remainder of this paper is organized as follows. The basic GOA and the EGOA are introduced in Sections 2 and 3, respectively. Experiments on two benchmark sets and the analysis of their results are presented in Section 4. The BPP and the application of the EGOA to the BPP are discussed in Section 5. Finally, conclusions and future work are given in Section 6.

2. Basic Grasshopper Optimization Algorithm

2.1. Biological Inspiration

Grasshoppers are considered to be pests based on the damage they inflict on crops and vegetation. Instead of acting individually, grasshoppers form some of the largest swarms among all living creatures. Millions of grasshoppers jump and move like large rolling cylinders. The influences of the individuals in a swarm, wind, gravity, and food sources all affect swarm movement.

2.2. Main Procedures for the GOA

The GOA is a novel swarm-intelligence-based metaheuristic algorithm that was inspired by the long-range and abrupt movements of adult grasshoppers in groups. Food source seeking is an important behavior of grasshopper swarms. Metaheuristic algorithms logically divide the search process into two phases: exploration and exploitation. The long-range and abrupt movements of grasshoppers represent the exploration phase, and local movements to search for better food sources represent the exploitation phase.

A mathematical model for this behavior was presented by Mirjalili in [19]. This model is defined as follows:

$X_i = S_i + G_i + A_i$,  (1)

where $X_i$ is the position of the $i$th grasshopper, $S_i$ represents the social interaction in the group, $G_i$ is the force of gravity acting on the $i$th grasshopper, and $A_i$ is the wind advection. By expanding $S_i$, $G_i$, and $A_i$ in (1), the equation can be rewritten as follows:

$X_i = \sum_{j=1, j \neq i}^{N} s(|x_j - x_i|) \dfrac{x_j - x_i}{d_{ij}} - g\hat{e}_g + u\hat{e}_w$,  (2)

where $s(\cdot)$ is a function simulating the impact of social interactions and $N$ is the number of grasshoppers. $-g\hat{e}_g$ is the expanded $G_i$ component, where $g$ is the gravitational constant and $\hat{e}_g$ is a unit vector pointing toward the center of the earth. $u\hat{e}_w$ is the expanded $A_i$ component, where $u$ is a constant drift and $\hat{e}_w$ is a unit vector pointing in the direction of the wind. $d_{ij}$ is the distance between the $i$th and $j$th grasshoppers and is calculated as $d_{ij} = |x_j - x_i|$.
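For concreteness, the social term of equation (2) can be sketched in Python as follows; this is a minimal illustration, and the attraction intensity 0.5 and attractive length scale 1.5 inside s(r) are the values reported in [19], assumed here rather than taken from this paper's text.

```python
import numpy as np

def s(r, f=0.5, l=1.5):
    """Social interaction force s(r) = f*exp(-r/l) - exp(-r) (parameter values from [19])."""
    return f * np.exp(-r / l) - np.exp(-r)

def social_interaction(X):
    """Sum of pairwise social forces S_i for each grasshopper (social term of equation (2))."""
    N, dim = X.shape
    S = np.zeros_like(X)
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            d_ij = np.linalg.norm(X[j] - X[i])
            if d_ij > 0:
                S[i] += s(d_ij) * (X[j] - X[i]) / d_ij
    return S
```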

Because grasshoppers quickly reach their comfort zones and the swarm then converges poorly, and because the influences of wind and gravity are far weaker than the interactions between grasshoppers, the mathematical model is modified as follows:

$X_i^d = c \left( \sum_{j=1, j \neq i}^{N} c\, \dfrac{ub_d - lb_d}{2}\, s(|x_j^d - x_i^d|)\, \dfrac{x_j - x_i}{d_{ij}} \right) + \hat{T}_d$,  (3)

where $ub_d$ and $lb_d$ represent the upper and lower boundaries of the search space in the $d$th dimension, $\hat{T}_d$ is the value of the $d$th dimension of the target (the best solution found so far), and $c$ is a decreasing coefficient that balances the processes of exploitation and exploration, defined as follows:

$c = c_{max} - l\, \dfrac{c_{max} - c_{min}}{L}$,  (4)

where $c_{max}$ is the maximum value (equal to 1), $c_{min}$ is the minimum value (equal to 0.00001), $l$ represents the current iteration, and $L$ is the maximum number of iterations.
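As a small illustration (not the authors' code), the linear coefficient of equation (4) can be computed as follows:

```python
def linear_coefficient(l, L, c_max=1.0, c_min=1e-5):
    """Linearly decreasing coefficient c of equation (4)."""
    return c_max - l * (c_max - c_min) / L

# c shrinks steadily from about 1 toward about 0 as the iterations progress.
print([round(linear_coefficient(l, 500), 3) for l in (1, 250, 500)])
```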

3. Enhanced Grasshopper Optimization Algorithm

First, the EGOA utilizes a nonlinear convergence parameter to balance the exploration and exploitation phases. Second, the EGOA is hybridized with a niche mechanism to balance diversity and convergence speed. Finally, the EGOA is combined with the β-hill climbing technique to avoid local optima.

3.1. Nonlinear Convergence Parameter

In metaheuristic algorithms, the convergence parameter c directly affects the step size: the larger the convergence parameter, the larger the step size. In the exploration phase, the step size should be as large as possible so that the algorithm can search for the global optimum over a wide range and avoid falling into local optima. In the exploitation phase, the step size should be as small as possible so that the algorithm gradually converges to the global optimum without skipping over the optimal value. In the basic GOA, the convergence parameter c decreases linearly across the exploration and exploitation phases, so the step size cannot be adapted to the characteristics of the two phases, which limits the performance of the algorithm. To make up for this shortcoming of the linear convergence parameter, we introduce the tanh function, a nonlinear adjustment function widely used in the field of deep learning with good performance. Its nonlinear shape allows the step size to match the characteristics of the exploration and exploitation phases and thus improves the performance of the GOA in each phase. The tanh function is defined as follows:

$\tanh(x) = \dfrac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$.  (5)

A nonlinear convergence parameter $f$ based on a variant of the tanh function is introduced in equations (6) and (7), in which $l$ is the current iteration and an adjustment factor controls the shape of the decreasing curve.
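Because the exact form of equations (6) and (7) is not reproduced above, the following Python sketch only illustrates the general idea of a tanh-shaped convergence parameter; the function name, the adjustment factor k, and the scaling are assumptions for illustration, not the paper's formula.

```python
import numpy as np

def nonlinear_coefficient(l, L, k=4.0, f_max=1.0, f_min=1e-5):
    """Illustrative tanh-shaped convergence parameter (assumed form).

    l: current iteration, L: maximum number of iterations, k: assumed adjustment factor.
    The value stays close to f_max early (large exploratory steps) and drops
    quickly toward f_min late (small exploitative steps), unlike the linear schedule.
    """
    x = k * (2.0 * l / L - 1.0)  # map l in [0, L] onto [-k, k]
    return f_min + (f_max - f_min) * (1.0 - np.tanh(x)) / 2.0
```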

3.2. Niche Mechanism

The core idea of the GOA is that grasshoppers interact with one another. When solving a problem, the agents in the GOA mimic this interaction and search the solution space for the global optimum. During this search, when a local optimum is found, all agents move toward it. If this movement is too fast, the swarm loses diversity, which traps the search in the local optimum and slows down convergence. To balance the convergence speed and diversity of the GOA, we introduce niche technology.

Niche technology is inspired by the natural phenomenon that creatures tend to live alongside similar creatures. It can be used to balance the GOA's convergence speed and the diversity of the swarm. Therefore, to maintain the diversity of the swarm, we design a niche repulsing mechanism for the EGOA. Through this mechanism, the algorithm maintains the diversity of the swarm while preserving promising local optima. The mechanism is simulated as follows:

First, calculate $d_{ij}$, the Euclidean distance between search agent $X_i$ and search agent $X_j$ (there are $N$ search agents). Then, obtain $d_{\min,i}$, the minimum distance between search agent $X_i$ and any other search agent in the population, and use it together with $D$, the dimensionality of the search agents, to compute the niche radius. When $d_{ij}$ is less than the niche radius, the fitnesses of $X_i$ and $X_j$ are compared and a penalty is assigned to the worse of the two: if $X_i$ is better than $X_j$, the penalty $p$ is assigned to $X_j$; otherwise, it is assigned to $X_i$. Finally, after all individuals have been processed, they are ranked according to their fitness, and the individuals with the poorest fitness are eliminated. To maintain the population size and increase diversity, the best and worst remaining individuals are selected to generate new individuals.
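A minimal Python sketch of such a repulsing penalty is given below; the penalty constant p, the radius argument, and the minimization convention are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def niche_penalty(X, fitness, radius, p=1e6):
    """Illustrative niche repulsing penalty (minimization assumed).

    X: (N, D) array of positions, fitness: (N,) array of fitness values,
    radius: assumed niche radius, p: assumed penalty added to the worse agent.
    """
    penalized = fitness.copy()
    N = len(X)
    for i in range(N):
        for j in range(i + 1, N):
            if np.linalg.norm(X[i] - X[j]) < radius:
                # Penalize the worse (larger-fitness) agent of the crowded pair.
                worse = i if fitness[i] > fitness[j] else j
                penalized[worse] += p
    return penalized
```

The agents whose penalized fitness ranks worst would then be eliminated and replaced by recombining the best and worst surviving individuals, as described above.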

3.3. The β-Hill Climbing Technique

As the iterations proceed, the GOA behaves like many other metaheuristic algorithms: it focuses only on converging toward the global optimum and lacks a mechanism for escaping local optima. Once it falls into a local optimum, the search cannot continue, which is a serious drawback when solving practical engineering problems.

To allow the algorithm to escape from local optima, the β-hill climbing (BHC) technique [27] is introduced into the EGOA. The BHC technique iteratively refines a series of randomly approximated solutions. It employs a stochastic strategy called the β-operator to establish a fine balance between exploration and exploitation throughout a global search.

The BHC technique begins exploitation from a search agent $X$ that represents a poor solution. It uses the β-operator throughout its exploitative steps to generate a new solution $X'$. Each element of the new solution either keeps its current value or is filled randomly with probability β as follows:

$x_k' = \begin{cases} r, & z \le \beta \\ x_k, & \text{otherwise} \end{cases}$,  (10)

where $r$ and $z$ represent random values in the range (0, 1). Next, the BHC technique compares $X'$ with $X$ and keeps the better (downhill) solution as follows:

$X \leftarrow \begin{cases} X', & \text{if } fit(X') \le fit(X) \\ X, & \text{otherwise} \end{cases}$,  (11)

where $fit(\cdot)$ denotes the fitness function.
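A minimal Python sketch of this refinement step is shown below, assuming a minimization problem with box bounds; the neighborhood bandwidth bw, the number of inner iterations, and the parameter values are illustrative and not taken from the paper.

```python
import numpy as np

def beta_hill_climb(x, fitness_fn, lb, ub, beta=0.3, bw=0.05, iters=20):
    """Illustrative beta-hill climbing refinement of a single solution x.

    beta: probability of replacing an element with a random value (beta-operator).
    bw: assumed bandwidth of the local neighborhood step (N-operator).
    """
    x = x.copy()
    fx = fitness_fn(x)
    for _ in range(iters):
        # N-operator: small random walk around the current solution.
        x_new = x + bw * (np.random.rand(len(x)) * 2 - 1) * (ub - lb)
        # beta-operator: with probability beta, reset an element at random (equation (10)).
        mask = np.random.rand(len(x)) <= beta
        x_new[mask] = lb[mask] + np.random.rand(mask.sum()) * (ub[mask] - lb[mask])
        x_new = np.clip(x_new, lb, ub)
        f_new = fitness_fn(x_new)
        if f_new <= fx:  # downhill acceptance (equation (11))
            x, fx = x_new, f_new
    return x, fx
```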

In terms of exploration, the BHC technique's ability to escape from local optima using the β-operator is the key concept for avoiding premature convergence in the EGOA (Algorithm 1).

Initialize the swarm positions and the algorithm parameters
Initialize the convergence parameters and the maximum number of iterations L
Calculate the fitness of each search agent and let T represent the best fitness
while the current iteration l < L do
  Calculate f using equation (6)
  for i = 1 : N do
    Update the position of search agent X_i using equation (3)
    Calculate the fitness of X_i
    if the current fitness is worse than the target fitness then
      X_i conducts BHC using equations (10) and (11)
    end if
    Bring the current search agent back if it travels outside the boundaries
  end for
  X conducts the niche mechanism using equation (9)
  Update T and the target position
end while
Return the target fitness and the target position

4. Experimental Results on Benchmarks

In this section, the efficiency of the proposed EGOA is compared with that of the basic GOA and several other metaheuristic algorithms, namely the Dragonfly Algorithm (DA), Ant Lion Optimizer (ALO), Particle Swarm Optimization (PSO), and opposition-based learning GOA (OBLGOA), on various benchmark problems. All code used in this section was implemented in MATLAB 9.3 (R2017b) and executed on a PC running the Windows 10 64 bit Professional operating system with 16 GB of RAM. To fully verify the performance of the EGOA, this paper uses two benchmark sets. Benchmark set 1 is the test environment of the original GOA paper; it is used to verify the performance improvement over the original algorithm and for comparison with other algorithms [28, 29]. Benchmark set 2 is the recent metaheuristic performance benchmark set CEC2019 [30], which further verifies that the EGOA also performs well on a modern benchmark set. To investigate the differences between the results obtained by the EGOA and those obtained by the other algorithms, a nonparametric Wilcoxon rank sum test [31] with a significance level of 5% was adopted in this study. The Wilcoxon rank sum test produces a p value that indicates whether two datasets come from the same distribution: the higher the p value, the more similar the two datasets are. If the p value is less than 0.05, the two datasets are considered to be significantly different.
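Although the experiments themselves were run in MATLAB, the same significance test can be reproduced, for example, with SciPy; the arrays below are placeholders rather than the paper's data.

```python
from scipy.stats import ranksums

# Placeholder fitness values from repeated runs of two algorithms.
egoa_runs = [0.012, 0.009, 0.015, 0.011, 0.010]
goa_runs = [0.050, 0.061, 0.055, 0.048, 0.059]

stat, p_value = ranksums(egoa_runs, goa_runs)
print(p_value < 0.05)  # True -> the two result sets differ significantly
```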

4.1. Benchmark Set 1

Benchmark set 1 contains 29 test functions. Different test functions have different characteristics, which allows a comprehensive measurement of algorithm performance. The test functions include unimodal, multimodal, fixed-dimension multimodal, and composite functions, listed in Tables 1 and 2. In the tables, the columns give, for each function, the objective fitness function, its dimensionality, the boundaries of the search space, the optimal value, and the function type.

For the unimodal, multimodal, and fixed-dimension multimodal functions, each benchmark function is run with 30 search agents over 500 iterations. The presented results were recorded over 30 independent trials with random initial conditions, and the statistics reported are the average fitness (avg), which reflects the average performance and reliability of the algorithm; the standard deviation of fitness (std), which reflects its stability; the best fitness (best), which reflects its best optimization ability; and the worst fitness (worst), which marks its capability boundary.

4.1.1. Evaluation of Exploitation Capability

Unimodal functions, which have only one global optimum, test the exploitation capability of algorithms. Table 3 shows the results for the unimodal functions. It can be clearly seen that the EGOA achieves the best performance on F1, F2, and F4 and the second-best performance on F7. Moreover, the EGOA outperforms the other algorithms on F3 in terms of the avg and best fitness. In addition, the EGOA achieves the best avg fitness among all compared algorithms on F5. Besides, most of the p values in Table 4 are much less than 0.05, which means that the results of the EGOA and the other algorithms can be considered significantly different. From these results, we conclude that the EGOA has good exploitation capability. The main reasons for its superior performance are the embedded BHC local search and the exploitative patterns inherited from the basic GOA.

4.1.2. Evaluation of Exploration Capability

Multimodal functions, which have several local optima, are used for evaluating exploration ability. Tables 5 and 6 list the results of the compared algorithms on the multimodal and fixed-dimension multimodal functions, respectively. The EGOA works best on 10 of the 16 benchmark tests. Among the remaining six benchmarks where it does not achieve the best results, its results on F8, F13, F15, and F19 are all very close to the optimal values. These results show that the EGOA has competitive exploration ability, and the p values presented in Table 4 demonstrate that the differences are statistically significant.

4.1.3. Ability to Avoid Local Minima

Composite functions are very challenging test problems for metaheuristic algorithms. Furthermore, they are suitable for benchmarking exploration and exploitation simultaneously because they contain a massive number of local optima. The results for the composite benchmark functions listed in Table 7 indicate that the EGOA outperforms the other algorithms on F24 and F29, achieves the best avg fitness on F25, and achieves the best std and avg fitness on F27. These results demonstrate that the EGOA is able to escape from local optima.

4.1.4. Analysis of EGOA Convergence Curves

In this subsection, the convergence speed of the EGOA is analyzed. Several benchmark functions were chosen and executed with 30 search agents over 500 iterations. The results for the EGOA, GOA, DA, ALO, PSO, and OBLGOA are presented in Figure 1, where the abscissa is the current iteration number and the ordinate is the best fitness value in each iteration. Representative convergence curves from each part of the benchmark set are selected, showing that the EGOA performs outstandingly on different kinds of benchmark functions compared with well-known metaheuristic algorithms. During the iterations on F1, F4, and F10, the convergence speed of the EGOA is approximately in the middle of the five comparison algorithms for roughly the first 250 iterations; after that, an inflection point appears and the EGOA accelerates its convergence. This is because the EGOA is hybridized with a niche mechanism that provides better individual direction and faster convergence. During the iterations on F8, F15, and F21, the slope of the convergence curve is large in the initial iterations, and after several jumps the search escapes the local optima and finally reaches the optimal fitness. This can be explained by the nonlinear convergence parameter f, which balances the two search phases, and by the combination with the β-hill climbing technique, which improves the ability to jump out of local optima. Overall, the convergence curves demonstrate that the proposed EGOA provides relatively fast convergence in most of the iterations.

4.2. Benchmark Set 2

The CEC2019 benchmark set contains 10 test functions. Compared with previous CEC benchmark sets, the complexity of the test functions is significantly increased, and more attention is paid to an algorithm's ability to find an accurate solution. In Table 8, the columns give, for each function, the objective fitness function, its dimensionality, the boundaries of the search space, and the optimal value.

Each test function uses 30 search agents over 500 iterations. The presented results were recorded over 30 independent trials with random initial conditions, and the statistics reported are the average fitness (avg), standard deviation of fitness (std), best fitness (best), and worst fitness (worst), with the same meanings as in benchmark set 1. To assess the accuracy of the algorithms on this benchmark set, we focus mainly on the average fitness and the best fitness.

4.2.1. Evaluation of EGOA’s Performance in CEC2019

The CEC2019 test functions, which have complicated solution spaces, are more challenging test problems for metaheuristic algorithms. Furthermore, they are suitable for evaluating an algorithm's ability to find accurate solutions in a complicated search process. The results for the CEC2019 test functions listed in Table 9 indicate that the EGOA works best on f4 and f10 and second best on f5 and f6. Meanwhile, it obtains the best avg fitness on 7 of the 10 benchmark tests and the best fitness on 5 of the 10. These results demonstrate that, compared with the other algorithms, the EGOA has good average performance and higher reliability, and it shows the best optimization ability in most search processes.

4.2.2. Analysis of EGOA’s Convergence Curves in CEC2019

In this subsection, the convergence behavior of the EGOA on the CEC2019 test functions is investigated. Several benchmark functions were chosen and executed with 30 search agents over 500 iterations. The results for part of the benchmark functions are presented in Figure 2. A comparison of the parameter spaces indicates that the solution spaces of the CEC2019 test functions are more complex than those of the test functions in benchmark set 1, making it more challenging to find the optimal fitness. During the iterations on f4, f7, and f10, the EGOA's convergence curve keeps a relatively large slope in the initial iterations, and after several sudden drops it converges to the optimal fitness in the search space, far outperforming the other algorithms. Although the OBLGOA achieves the best result on f2, the EGOA ranks second. Taken together, these results show that the proposed EGOA also has superior search ability in more complex solution spaces compared with well-known metaheuristics.

5. Application of the EGOA to the BPP

5.1. BPP Formulation

The BPP consists of packing a set of items with different weights into a minimum number of bins, which may have different capacities. Mathematically, the BPP can be defined as the following programming problem [32]:

$\min \sum_{i=1}^{n} y_i$ subject to $\sum_{j} w_j x_{ij} \le C y_i$ for all $i$, $\sum_{i=1}^{n} x_{ij} = 1$ for all $j$, and $y_i, x_{ij} \in \{0, 1\}$,

where $y_i$ is a binary variable that represents whether or not bin $i$ contains any items, $x_{ij}$ is a binary variable that indicates whether or not item $j$ is assigned to bin $i$, $w_j$ is the weight of item $j$, $C$ is the bin capacity, and $n$ is the number of available bins. The goal is to minimize the number of bins used, and the sum of the weights of the items assigned to any bin cannot exceed its capacity.
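A small helper like the following (an illustrative sketch, not the authors' code) makes the objective and the capacity constraint concrete:

```python
def bins_used(assignment, weights, capacity):
    """Count the bins used by an assignment {bin index: [item indices]} and
    verify that no bin exceeds its capacity."""
    for items in assignment.values():
        if sum(weights[j] for j in items) > capacity:
            raise ValueError("capacity constraint violated")
    return sum(1 for items in assignment.values() if items)
```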

Using the number of bins alone as a fitness function may lead to algorithm stagnation because many solutions use the same number of bins. However, this information can be helpful if combined with other information, such as the fullness of the bins. The following formulation was introduced in [33]:

$f_{BPP} = \dfrac{1}{N_b} \sum_{i=1}^{N_b} \left( \dfrac{F_i}{C} \right)^{s}$,

where $N_b$ is the number of used bins, $F_i$ is the occupancy of bin $i$, which is the sum of the weights of all items packed into it, $C$ is the capacity, and $s$ is a constant that defines the equilibrium point for filling bins.
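A sketch of this fitness function is given below; the symbol names and the exponent s = 2, a common choice in the literature, are assumptions, since the paper's exact constant is not reproduced here.

```python
def bpp_fitness(bin_fills, capacity, s=2.0):
    """Falkenauer-style BPP fitness to maximize: rewards well-filled bins.

    bin_fills: total item weight packed into each used bin.
    s: equilibrium constant (s = 2 is a common choice; the paper's exact value is not given here).
    """
    return sum((fill / capacity) ** s for fill in bin_fills) / len(bin_fills)
```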

5.2. Discretization of the Search Space

The basic version of the GOA was proposed for optimization problems in continuous search spaces. To apply such an algorithm to combinatorial search spaces, different techniques have been proposed, such as largest order values [34], ROV [35], smallest position values [36], and largest ranked values [37]. These techniques transform continuous solutions into permutations with different orders. In this study, the ROV rule [35] was adopted for experimentation. The ROV rule is a simple method based on random key representations for forming permutations, as shown in Table 10. We assign items to available bins according to the first-fit policy, in which each item is placed in the first bin that has enough remaining capacity to hold it.
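The following Python sketch (illustrative only) shows the ROV mapping followed by first-fit packing as described above:

```python
import numpy as np

def rov_permutation(position):
    """Ranked order value (ROV) rule: the smallest continuous value receives rank 1,
    the next smallest rank 2, and so on, giving a packing order for the items."""
    order = np.argsort(position)  # item indices from smallest to largest value
    ranks = np.empty_like(order)
    ranks[order] = np.arange(1, len(position) + 1)
    return order, ranks

def first_fit(order, weights, capacity):
    """Place items in the given order into the first bin with enough remaining space."""
    remaining = []  # remaining capacity of each open bin
    packing = []    # list of item lists, one per bin
    for j in order:
        for b, space in enumerate(remaining):
            if weights[j] <= space:
                remaining[b] -= weights[j]
                packing[b].append(j)
                break
        else:  # no open bin fits the item: open a new one
            remaining.append(capacity - weights[j])
            packing.append([j])
    return packing
```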

In the EGOA, the current best number of bins is compared with the theoretical optimal number of bins, which is computed from the item weights and the bin capacity C. If the current best number of bins is equal to this optimum, the search stops.
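Since the exact expression is not reproduced above, the sketch below uses a commonly applied theoretical lower bound, the total item weight divided by the capacity and rounded up, as an assumed stand-in:

```python
import math

def optimal_bins_lower_bound(weights, capacity):
    """Common theoretical lower bound on the number of bins
    (assumed stand-in for the paper's formula): ceil(total weight / capacity)."""
    return math.ceil(sum(weights) / capacity)
```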

5.3. Experimental Results

To test the validity of the proposed algorithm, experiments are conducted on selected benchmarks from the "SchWae2" instances [38], in which the EGOA is compared with other metaheuristic algorithms. For the parameter settings, small numbers of search agents and iterations are selected for the EGOA and the other metaheuristic algorithms: 30 agents are used and the maximum number of iterations is set to 100. The bin capacities, item weights, and numbers of items affect the solution quality of the algorithm, so the benchmark selection takes these values into account. Our experiments use the SchWae2 instances, in which the item weights are generated with the BPPGEN method discussed in [38]. BPPGEN assumes that the item weights are uniformly distributed in the interval $[w_{min} C, w_{max} C]$ and determines each weight in two steps (sketched below): (i) generation of a realization of a weight distributed in this interval and (ii) rounding it down to the nearest integer, which becomes the weight of the item.
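An illustrative BPPGEN-style generator, under the interval assumption stated above, might look as follows:

```python
import random

def bppgen_weights(n_items, capacity, w_min, w_max, seed=0):
    """Illustrative BPPGEN-style weight generator: draw uniformly in
    [w_min*capacity, w_max*capacity] and round down to the nearest integer."""
    rng = random.Random(seed)
    return [int(rng.uniform(w_min * capacity, w_max * capacity)) for _ in range(n_items)]
```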

In addition, the Friedman test [31], a nonparametric test, is applied to compare the average ranking values of the algorithms. It is used to analyze the experimental results and show the performance difference between the EGOA and the comparison algorithms.
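As with the Wilcoxon test, the Friedman test can be reproduced with SciPy; the bin counts below are placeholders, not the paper's results.

```python
from scipy.stats import friedmanchisquare

# Placeholder bin counts obtained by three algorithms on the same four instances.
egoa = [48, 85, 120, 52]
goa = [50, 88, 124, 53]
pso = [51, 87, 125, 54]

stat, p_value = friedmanchisquare(egoa, goa, pso)
print(stat, p_value)  # a small p value indicates the algorithms' rankings differ
```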

The obtained results for the SchWae2 instances are summarized in Table 11. The first column, labelled Number, gives the number of items to be placed into bins. The column labelled C gives the capacity of the bin, and the next column gives the lower and upper bounds $w_{min}$ and $w_{max}$ of the weight distribution. The column labelled Optimal gives the theoretical optimal number of bins. The remaining six columns give the results of the EGOA and the comparison algorithms (GOA, DA, ALO, PSO, and OBLGOA).

Different pairs of $(w_{min}, w_{max})$ determine the difficulty levels of the problems; a greater value of $w_{max}$ results in a wider range of item weights. When $w_{min}$ is equal to 0.1 and $w_{max}$ is equal to 0.2, the item weights range between 100 and 200. Similarly, when $w_{min}$ is equal to 0.1 and $w_{max}$ is equal to 0.5, the item weights range between 100 and 500. The bin capacity is equal to 1000 in both cases. The results of our experiments demonstrate that, in all cases, the EGOA is equal to or closer to the optimal value than the other algorithms, which indicates its good search performance. In addition, Figure 3 shows the result of the Friedman test: the EGOA ranks first, indicating that it has excellent and stable performance on BPPs of different difficulty levels.

6. Conclusions

In this paper, a new variant of the GOA called the EGOA has been presented. First, a nonlinear convergence parameter, a niche mechanism, and the BHC technique are added to improve the performance of the GOA. Next, we evaluate the EGOA on the benchmark set proposed by the GOA authors and on the CEC2019 benchmark set and compare its performance with that of the original GOA, DA, ALO, PSO, and OBLGOA. The results of the experiments and the Wilcoxon rank sum test indicate that the EGOA has excellent exploitation capability and competitive exploration capability. Finally, the EGOA is applied to the BPP, tested on the SchWae2 instances, and compared with the GOA, DA, ALO, PSO, and OBLGOA. The experimental results and the Friedman test show that the EGOA can efficiently find optimal solutions for problems of different sizes.

In the future, we wish to apply the EGOA to other variants of the BPP and to further engineering problems. Further studies could reveal additional methods for enhancing the EGOA to solve additional optimization problems.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

Acknowledgments

This work was partially supported by Strategic Leadership Project of Chinese Academy of Science: SEANET Technology Standardization Research System Development (Project no. XDC02070100).