Journal of Control Science and Engineering


Research Article | Open Access


Hangwei Feng, Hong Ni, Ran Zhao, Xiaoyong Zhu, "An Enhanced Grasshopper Optimization Algorithm to the Bin Packing Problem", Journal of Control Science and Engineering, vol. 2020, Article ID 3894987, 19 pages, 2020. https://doi.org/10.1155/2020/3894987

An Enhanced Grasshopper Optimization Algorithm to the Bin Packing Problem

Academic Editor: Daniel Morinigo-Sotelo
Received: 27 Aug 2019
Revised: 22 Jan 2020
Accepted: 10 Feb 2020
Published: 10 Mar 2020

Abstract

The grasshopper optimization algorithm (GOA) is a novel metaheuristic algorithm. Because of its easy deployment and high accuracy, it is widely used in a variety of industrial scenarios and obtains good solutions. At the same time, the GOA has some shortcomings: (1) the original linear convergence parameter leaves the processes of exploration and exploitation unbalanced; (2) the convergence speed is unstable; and (3) the algorithm easily falls into local optima. In this paper, we propose an enhanced grasshopper optimization algorithm (EGOA) that uses a nonlinear convergence parameter, a niche mechanism, and the β-hill climbing technique to overcome these shortcomings. In order to evaluate the EGOA, we first use the benchmark set of the original GOA authors to measure the performance improvement of the EGOA over the basic GOA; the analysis covers exploration ability, exploitation ability, and convergence speed. Second, we use the recent CEC2019 benchmark set to test the optimization ability of the EGOA on complex problems. The results on the two benchmark sets show that the EGOA performs better than the five other metaheuristic algorithms compared. To evaluate the EGOA further, we also apply it to an engineering problem, the bin packing problem, testing the EGOA and five other metaheuristic algorithms on the SchWae2 instance. A Friedman test of the results shows that the EGOA outperforms the other algorithms on bin packing problems.

1. Introduction

The bin packing problem (BPP) is one of the most important combinatorial optimization problems. It arises in many fields of science and engineering and underlies many practical engineering optimization problems, including processor scheduling [1] and cloud computing resource allocation [2] in the field of computer science. Because the BPP is NP-hard [3], basic heuristic algorithms have been proposed for solving it. These heuristics ensure fast and accurate solutions for small-scale bin packing problems, but their performance deteriorates as the scale increases. Thus, metaheuristic algorithms, which can find high-quality solutions by adjusting the exploitation and exploration of the search space, have become a new trend for solving the BPP. In [4], the authors combine a genetic algorithm with a grouping mechanism. In [5], the authors apply first fit and ranked order values (ROVs) in CS to solve the BPP. In [6], the authors solve the BPP using an improved whale optimization algorithm.

Metaheuristic algorithms have developed rapidly and have been widely applied in many different fields, such as signal detection [7], resource allocation [8], load balancing [9], feature selection [10], task scheduling [11], and engineering applications [12]. Representative algorithms include the Dragonfly Algorithm (DA) [13], Ant Lion Optimizer (ALO) [14], Bee Algorithm (BA) [15], and Particle Swarm Optimization (PSO) [16, 17]. These algorithms draw inspiration from physical or biological phenomena to solve optimization problems, including ant colony foraging, bird migration, bee colony behavior, grey wolf hunting, and fish schooling [18]. Practice has shown that they outperform traditional optimization methods in many application scenarios.

Metaheuristic algorithms form a randomly initialized population for a given problem, evaluate the solutions using the objective function(s), and improve the random solutions over the iterations until a terminating condition is satisfied, in order to find the global optimum. This way of finding solutions gives them the following advantages: (i) simplicity, as they are mainly inspired by fairly simple concepts and are easy to apply; (ii) flexibility, as they place no special requirements on the objective function; and (iii) independence from derivation, as they need not calculate derivatives in the process of finding the optimal solution. These advantages mean that metaheuristic algorithms are not limited to specific problems and have a wide range of applications.
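The initialize-evaluate-improve loop described above can be sketched in a few lines. The update rule below is purely hypothetical (each concrete metaheuristic substitutes its own movement equations), and the sphere objective is only a stand-in benchmark:

```python
import random

def metaheuristic(objective, dim, bounds, pop_size=30, iterations=500):
    """Generic population-based search: initialize randomly, evaluate with the
    objective function, and improve until the iteration budget is spent.
    The candidate-generation rule here is a hypothetical placeholder."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=objective)
    for _ in range(iterations):
        for i, agent in enumerate(pop):
            # hypothetical update: move halfway toward the best-so-far, plus noise
            cand = [min(hi, max(lo, x + 0.5 * (b - x) + random.gauss(0, 0.1)))
                    for x, b in zip(agent, best)]
            if objective(cand) < objective(agent):  # keep improvements only
                pop[i] = cand
        best = min(pop + [best], key=objective)
    return best

# usage: minimize the sphere function, an F1-style unimodal benchmark
random.seed(1)
sol = metaheuristic(lambda v: sum(x * x for x in v), dim=5, bounds=(-100, 100))
```

Note how the loop never needs derivatives of the objective, which is exactly the derivation-independence advantage listed above.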

Although different metaheuristic algorithms differ in their details, they all share one feature: a search process with two phases, exploration and exploitation. In the exploration phase, the algorithm explores the search space as widely as possible for promising regions, while in the exploitation phase, it concentrates on local search for the optimal solution around those promising regions. Balancing exploration against exploitation is the key to designing and improving a metaheuristic algorithm.

The GOA is a state-of-the-art population-based metaheuristic algorithm inspired by the long-range and abrupt movements of adult grasshoppers in groups [19], and it is widely used in a variety of industrial scenarios. Aljarah proposes a hybrid approach based on the GOA to optimize the parameters of the SVM model [20]. Hekimoğlu and Ekinci employ the GOA to solve many optimization problems in an automatic voltage regulator system [21]. Wu proposes a dynamic GOA for optimizing the distributed trajectories of unmanned aerial vehicles in urban environments [22]. In [23], the authors apply the basic multiobjective GOA to solve several benchmark problems with superior performance.

While the GOA can obtain good solutions in a reasonable timeframe, it presents some shortcomings: (1) the original linear convergence parameter leaves the processes of exploration and exploitation unbalanced, (2) the convergence speed is unstable, and (3) the algorithm easily falls into local optima. Some studies have proposed improved GOA algorithms. Luo proposes an improved grasshopper algorithm using Lévy flight in [24]. Tharwat applies the GOA to constrained and unconstrained multiobjective optimization problems in [25]. Ewees introduces an opposition-based learning strategy to the GOA in [26]; this strategy improves the ability to jump out of local optima and addresses movement issues. However, these algorithms do not improve significantly on the other shortcomings, because they consider neither the balance between exploration and exploitation nor the relationship between population diversity and convergence speed during the group optimization process.

To overcome these disadvantages, this paper proposes an enhanced grasshopper optimization algorithm. The main contributions of this paper can be summarized as follows:
(i) We introduce a nonlinear convergence parameter into the basic GOA to balance the exploration and exploitation phases and improve overall performance.
(ii) We apply a niche repulsing mechanism to the basic GOA. This mechanism directs the group optimization and preserves the diversity of the search space to increase the speed of convergence.
(iii) We adapt the β-hill climbing (BHC) technique to the GOA to avoid falling into local optima.

The remainder of this paper is organized as follows. The basic GOA and EGOA are introduced in Sections 2 and 3. Several experiments of two benchmark sets are implemented and the analysis results are shown in Section 4. The BPP and the applicability of the EGOA to the BPP are discussed in Section 5. Finally, conclusions and future works are discussed in Section 6.

2. Basic Grasshopper Optimization Algorithm

2.1. Biological Inspiration

Grasshoppers are considered to be pests based on the damage they inflict on crops and vegetation. Instead of acting individually, grasshoppers form some of the largest swarms among all living creatures. Millions of grasshoppers jump and move like large rolling cylinders. The influences of the individuals in a swarm, wind, gravity, and food sources all affect swarm movement.

2.2. Main Procedures for the GOA

The GOA is a novel swarm-intelligence-based metaheuristic algorithm that was inspired by the long-range and abrupt movements of adult grasshoppers in groups. Food source seeking is an important behavior of grasshopper swarms. Metaheuristic algorithms logically divide the search process into two phases: exploration and exploitation. The long-range and abrupt movements of grasshoppers represent the exploration phase, and local movements to search for better food sources represent the exploitation phase.

A mathematical model for this behavior was presented by Mirjalili in [19]. This model is defined as follows:

X_i = S_i + G_i + A_i,  (1)

where X_i is the position of the i-th grasshopper, S_i represents the social interaction in a group, G_i is the force of gravity acting on the i-th grasshopper, and A_i is the wind advection. By expanding S_i, G_i, and A_i in (1), the equation can be rewritten as follows:

X_i = Σ_{j=1, j≠i}^{N} s(d_ij) d̂_ij − g ê_g + u ê_w,  (2)

where s is a function simulating the impact of social interactions and N is the number of grasshoppers. −g ê_g is the expanded G component, where g is the gravitational constant and ê_g is a unit vector pointing toward the center of the earth. u ê_w is the expanded A component, where u is a constant drift and ê_w is a unit vector pointing in the direction of the wind. d_ij is the distance between the i-th and j-th grasshoppers and is calculated as d_ij = |x_j − x_i|, with d̂_ij = (x_j − x_i)/d_ij the corresponding unit vector.

Because grasshoppers under this model quickly settle into comfortable zones and exhibit poor convergence, and because the influences of wind and gravity are far weaker than the interactions between grasshoppers, the mathematical model is modified as follows:

X_i^d = c ( Σ_{j=1, j≠i}^{N} c ((ub_d − lb_d)/2) s(|x_j^d − x_i^d|) (x_j − x_i)/d_ij ) + T̂_d,  (3)

where ub_d and lb_d represent the upper and lower boundaries of the search space in dimension d, T̂_d is the value of dimension d in the target (the best solution found so far), and c is a decreasing coefficient that balances the processes of exploitation and exploration, defined as follows:

c = c_max − l (c_max − c_min)/L,  (4)

where c_max is the maximum value (equal to 1), c_min is the minimum value (equal to 0.00001), l represents the current iteration, and L is the maximum number of iterations.
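As an illustration, equations (3) and (4) can be sketched for a one-dimensional search space. The social-interaction function s(r) = f·e^(−r/l) − e^(−r) uses the attraction intensity f = 0.5 and length scale l = 1.5 from the original GOA paper; the helper names below are our own:

```python
import math

def linear_c(l, L, c_max=1.0, c_min=1e-5):
    """Decreasing coefficient of equation (4): shrinks linearly from c_max to
    c_min as the iteration counter l runs from 0 to L."""
    return c_max - l * (c_max - c_min) / L

def s(r, f=0.5, l=1.5):
    """Social interaction s(r) = f*exp(-r/l) - exp(-r): negative (repulsion)
    at short range, positive (attraction) at longer range."""
    return f * math.exp(-r / l) - math.exp(-r)

def goa_step(X, i, target, c, lb, ub):
    """One-dimensional sketch of the modified position update (3) for agent i;
    `target` is T̂, the best position found so far."""
    total = 0.0
    for j, xj in enumerate(X):
        if j == i:
            continue
        d = abs(xj - X[i])
        if d > 1e-12:  # skip coincident agents to avoid division by zero
            total += c * (ub - lb) / 2 * s(d) * (xj - X[i]) / d
    return c * total + target
```

With two neighbors placed symmetrically around agent i, the social terms cancel and the update collapses to the target position, which shows how c scales the whole social component.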

3. Enhanced Grasshopper Optimization Algorithm

First, the EGOA utilizes a nonlinear convergence parameter to balance the exploration and exploitation phases. Second, the EGOA is hybridized with a niche mechanism to balance diversity and convergence speed. Finally, the EGOA is combined with the β-hill climbing technique to avoid local optima.

3.1. Nonlinear Convergence Parameter

In metaheuristic algorithms, the convergence parameter c directly affects the step size: the larger the convergence parameter, the larger the step size. In the exploration phase, the step size should be as large as possible so that the algorithm can search for the global optimum over a wide range and avoid falling into a local optimum. In the exploitation phase, the step size should be as small as possible so that the algorithm gradually converges to the global optimum without skipping the optimal value. In the basic GOA, the convergence parameter c decreases linearly through both phases, so the step size cannot be adapted to the distinct characteristics of the two phases, which limits the performance of the algorithm. To make up for this shortcoming of the linear convergence parameter, we introduce the tanh function, a nonlinear adjustment function widely used in the field of deep learning with good performance. Its nonlinear shape can be matched to the characteristics of the exploration and exploitation phases to improve performance in each phase of the GOA. The tanh function is defined as follows:

tanh(x) = (e^x − e^{−x})/(e^x + e^{−x}).  (5)

A nonlinear convergence parameter f, based on a variant of the tanh function, is then introduced in equations (6) and (7) to replace the linear coefficient c in equation (3). It is a function of the iteration ratio l/L, where l is the current iteration and L is the maximum number of iterations, with an adjustment factor controlling how sharply f transitions from the large steps of exploration to the small steps of exploitation.
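The exact formula for f in equations (6) and (7) is not reproduced in this copy of the text, so the following is only a hedged sketch of a tanh-shaped convergence parameter with the qualitative behavior the section describes; the S-shape, its normalization, and the `mu` adjustment factor are our assumptions:

```python
import math

def nonlinear_c(l, L, c_max=1.0, c_min=1e-5, mu=4.0):
    """Assumed tanh-shaped convergence parameter: stays near c_max early
    (wide exploration steps), drops around mid-run, and flattens near c_min
    (fine exploitation steps). `mu` steers the steepness of the transition."""
    ratio = 2 * l / L - 1  # map l in [0, L] onto [-1, 1]
    return ((c_max + c_min) / 2
            - (c_max - c_min) / 2 * math.tanh(mu * ratio) / math.tanh(mu))
```

Unlike the linear schedule of equation (4), this curve spends more of the budget at large and at small step sizes and less in between, which is the balancing effect the paper attributes to the nonlinear parameter.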

3.2. Niche Mechanism

The core idea of the GOA is that grasshoppers interact with each other: when solving problems, the agents in the GOA mimic the grasshoppers' interaction and search the solution space for the global optimum. During this search, when a local optimum is found, all agents move closer to it. If this movement is too fast, the agent swarm loses diversity, which leads to premature convergence to a local optimum and slows down the convergence speed. In order to balance the convergence speed and diversity of the GOA, we introduce niche technology.

Niche technology is inspired by the natural phenomenon where creatures tend to live with similar creatures. It can be used to balance GOA’s convergence speed and the diversity of a swarm. Therefore, to maintain the diversity of the swarm, we design a niche repulsing mechanism for the EGOA. Through this mechanism, the algorithm maintains the diversity of the swarm while preserving local optimum. The simulation of this mechanism is defined as follows:

First, calculate d_ij, the Euclidean distance between search agents X_i and X_j (there are N search agents). Then, obtain d_min, the minimum distance between a search agent and any other search agent in the population, and use it together with D, the dimensionality of the search agents, to compute a sharing threshold σ. Afterwards, whenever d_ij is less than σ, compare the fitnesses of X_i and X_j and assign a penalty p to the worse one: if X_i is worse than X_j, the fitness of X_i is set to p; otherwise, the fitness of X_j is set to p. Finally, after all individuals have been processed, they are ranked according to their fitness, and the individuals with poor fitness are eliminated. To maintain the population size and increase diversity, we select the best and worst individuals to generate new individuals.
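The steps above can be sketched as follows; the symbol names, the exact sharing-threshold formula derived from d_min and the dimensionality, and the recombination rule for new individuals are assumptions, since the paper's equation (9) is not reproduced in this copy:

```python
import math
import random

def niche_penalty(pop, fitness, penalty=1e9, frac=0.2):
    """Niche repulsing sketch (minimization): agents closer than a sharing
    threshold have the worse one penalized; the worst-ranked fraction is then
    replaced by blends of the best and worst survivors to restore diversity."""
    n, dim = len(pop), len(pop[0])
    fit = fitness[:]
    dist = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    # minimum pairwise distance, from which the threshold is derived
    d_min = min(dist(pop[i], pop[j]) for i in range(n) for j in range(i + 1, n))
    sigma = d_min + d_min / dim  # assumed threshold: slightly above d_min
    for i in range(n):
        for j in range(i + 1, n):
            if dist(pop[i], pop[j]) < sigma:    # same niche: penalize the worse
                k = i if fit[i] > fit[j] else j  # larger fitness is worse here
                fit[k] = penalty
    # rank by (penalized) fitness, drop the worst-ranked individuals
    order = sorted(range(n), key=lambda i: fit[i])
    keep = order[: n - max(1, int(frac * n))]
    best, worst = pop[order[0]], pop[order[-1]]
    new_pop = [pop[i] for i in keep]
    while len(new_pop) < n:  # refill to keep the population size constant
        new_pop.append([(b + w) / 2 + random.gauss(0, 0.01)
                        for b, w in zip(best, worst)])
    return new_pop
```

The penalty value only needs to be worse than any realistic fitness so that penalized agents sort to the bottom of the ranking.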

3.3. The β-Hill Climbing Technique

As the iterative process continues, the GOA behaves like many other metaheuristic algorithms: it focuses only on converging to the global optimum and lacks a mechanism for escaping local optima. Once it falls into a local optimum, the search cannot continue, which is fatal when solving practical engineering problems.

To allow the algorithm to escape from local optimum, the β-hill climbing (BHC) technique is introduced [27] in the EGOA. The BHC technique can iteratively enrich a series of randomly approximated solutions. In the BHC technique, a stochastic strategy called the β-operator is employed to establish a fine balance between exploration and exploitation throughout a global search.

The BHC technique begins exploitation with a search agent X that represents a poor solution. It uses the β-operator throughout its exploitative steps to generate a new solution X′. The elements of the new solution are updated according to their current values or filled randomly with a probability of β as follows:

x′_d = lb_d + z (ub_d − lb_d) if r ≤ β, and x′_d = x_d otherwise,  (10)

where r and z represent random values in the range (0, 1). Next, the BHC technique compares X′ to X and records the best (downhill) value as follows:

X = X′ if f(X′) < f(X).  (11)
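A minimal sketch of the update of equations (10) and (11) for minimization; the local-step bandwidth `bw`, the iteration count, and β itself are assumed parameter values, not ones taken from the paper:

```python
import random

def beta_hill_climbing(x, objective, bounds, beta=0.05, bw=0.1, iters=100):
    """β-hill climbing sketch: each element of the candidate either takes a
    small local step or, with probability beta, is re-drawn uniformly from the
    search range (the β-operator of equation (10)); the candidate replaces the
    current solution only if it is better (the downhill rule of equation (11))."""
    lo, hi = bounds
    for _ in range(iters):
        cand = []
        for xi in x:
            if random.random() <= beta:      # β-operator: random reset
                cand.append(lo + random.random() * (hi - lo))
            else:                            # local step within bandwidth bw
                step = (2 * random.random() - 1) * bw
                cand.append(min(hi, max(lo, xi + step)))
        if objective(cand) < objective(x):   # accept improvements only
            x = cand
    return x
```

The occasional uniform reset is what lets the search jump out of a local basin even though every accepted move is downhill.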

In terms of exploration, the BHC technique’s ability to escape from local optima solutions using the β-operator can be considered as the key concept for avoiding local optimum (Algorithm 1).

Initialize the swarm positions X and the parameters
Initialize c_max, c_min, and the maximum number of iterations L
Calculate the fitness of each search agent and let T represent the best fitness
while l < L do
  Calculate f using equation (6)
  for i = 1 : N do
    Update X_i using equation (3)
    Calculate the fitness of X_i
    if the current fitness is worse than the target fitness then
      X_i conducts BHC using equations (10) and (11)
    end if
    Bring the current search agent back if it travels outside the boundaries
  end for
  X conducts the niche mechanism using equation (9)
  Update T and the target position
end while
Return the target fitness and target position

4. Experimental Results on Benchmarks

In this section, the efficiency of the proposed EGOA is compared to that of the basic GOA and several other metaheuristic algorithms, namely, the Dragonfly Algorithm (DA), Ant Lion Optimizer (ALO), Particle Swarm Optimization (PSO), and opposition-based learning GOA (OBLGOA), on various benchmark problems. All codes utilized in this section were implemented consistently in MATLAB 9.3 (R2017b) and executed on a PC with the 64-bit Windows 10 Professional operating system and 16 GB of RAM. To fully verify the performance of the EGOA, this paper uses two benchmark sets. Benchmark set 1 is the test environment of the original GOA; it is used to verify the performance improvement over the original algorithm and for comparison with other algorithms [28, 29]. Benchmark set 2 is the recent metaheuristic performance benchmark set CEC2019 [30]; it further verifies that the EGOA also performs well on a recent benchmark set. To investigate the differences between the results obtained by the EGOA and those obtained by the other algorithms, a nonparametric Wilcoxon rank sum test [31] with a significance level of 5% was adopted in this study. The Wilcoxon rank sum test generates a p value to determine whether two datasets came from the same distribution: the higher the p value, the more similar the two datasets. If the p value is less than 0.05, the two datasets are considered to be independent.
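For reference, the rank-sum p value can be computed with the normal approximation, which is adequate at the 30-run sample sizes used here; this stdlib-only sketch omits the tie-variance correction that a full implementation would include:

```python
import math

def ranksum_p(a, b):
    """Two-sided Wilcoxon rank-sum test (normal approximation, no tie-variance
    correction). Returns the p value for samples a and b."""
    combined = sorted((v, src) for src in (0, 1) for v in (a if src == 0 else b))
    vals = [v for v, _ in combined]
    ranks = {}
    i = 0
    while i < len(vals):             # assign average ranks to tied values
        j = i
        while j < len(vals) and vals[j] == vals[i]:
            j += 1
        r = (i + 1 + j) / 2          # average of ranks i+1 .. j
        for k in range(i, j):
            ranks[k] = r
        i = j
    w = sum(ranks[k] for k, (_, src) in enumerate(combined) if src == 0)
    n1, n2 = len(a), len(b)
    mu = n1 * (n1 + n2 + 1) / 2      # mean of the rank-sum statistic
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sigma
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
```

Two clearly separated samples yield a p value far below the 0.05 threshold, while identical samples yield a p value of 1, matching the interpretation used in the tables below.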

4.1. Benchmark Set 1

Benchmark set 1, as used in this paper, contains 29 test functions. Different test functions have different characteristics, which together comprehensively measure the performance of an algorithm. The test functions include unimodal, multimodal, fixed-dimension multimodal, and composite functions, listed in Tables 1 and 2. In the tables, the column labelled Function identifies the objective fitness function, Dim gives the dimensionality of each function, Range gives the boundaries of the search space, f_min gives the optimal value, and Type gives the function type.


Function | Dim | Range | f_min | Type
F1 | 30 | [−100, 100] | 0 | Unimodal
F2 | 30 | [−10, 10] | 0 | Unimodal
F3 | 30 | [−100, 100] | 0 | Unimodal
F4 | 30 | [−100, 100] | 0 | Unimodal
F5 | 30 | [−30, 30] | 0 | Unimodal
F6 | 30 | [−100, 100] | 0 | Unimodal
F7 | 30 | [−1.28, 1.28] | 0 | Unimodal
F8 | 30 | [−500, 500] | | Multimodal
F9 | 30 | [−5.12, 5.12] | 0 | Multimodal
F10 | 30 | [−32, 32] | 0 | Multimodal
F11 | 30 | [−600, 600] | 0 | Multimodal
F12 | 30 | [−50, 50] | 0 | Multimodal
F13 | 30 | [−50, 50] | 0 | Multimodal
F14 | 2 | [−65, 65] | 0 | Fixed-dimension multimodal
F15 | 4 | [−5, 5] | 0.00030 | Fixed-dimension multimodal
F16 | 2 | [−5, 5] | −1.0316 | Fixed-dimension multimodal
F17 | 2 | [−5, 5] | 0.3979 | Fixed-dimension multimodal
F18 | 2 | [−2, 2] | 3 | Fixed-dimension multimodal
F19 | 3 | [1, 3] | −3.86 | Fixed-dimension multimodal
F20 | 6 | [0, 1] | −3.32 | Fixed-dimension multimodal
F21 | 4 | [0, 10] | −10.1532 | Fixed-dimension multimodal
F22 | 4 | [0, 10] | −10.4028 | Fixed-dimension multimodal
F23 | 4 | [0, 10] | −10.5363 | Fixed-dimension multimodal


Function | Dim | Range | f_min | Type
F24 | 30 | [−5, 5] | 0 | Composite
F25 | 30 | [−5, 5] | 0 | Composite
F26 | 30 | [−5, 5] | 0 | Composite
F27 | 30 | [−5, 5] | 0 | Composite
F28 | 30 | [−5, 5] | 0 | Composite
F29 | 30 | [−5, 5] | 0 | Composite

For the unimodal, multimodal, and fixed-dimension multimodal functions, each benchmark function uses 30 search agents over 500 iterations. The presented results were recorded over 30 independent trials with random initial conditions to compute statistical results: the average fitness (avg), which represents the average performance and reliability of the algorithm; the standard deviation of fitness (std), which represents its stability; the best fitness (best), which represents its best optimization ability; and the worst fitness (worst), which represents its capability boundary.
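These four statistics can be computed directly from the per-run final fitness values; the numbers in the usage line below are hypothetical, not taken from the tables:

```python
import statistics

def summarize(trials):
    """Compute the four statistics reported in the benchmark tables for one
    algorithm/function pair (minimization): avg, std, best, and worst over
    the independent runs."""
    return {
        "avg": statistics.mean(trials),
        "std": statistics.stdev(trials),  # sample standard deviation
        "best": min(trials),
        "worst": max(trials),
    }

# usage with hypothetical final-fitness values from five runs
stats = summarize([0.12, 0.08, 0.15, 0.09, 0.11])
```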

4.1.1. Evaluation of Exploitation Capability

Unimodal functions, which have only one global optimum, can test the exploitation capability of algorithms. Table 3 shows the results for the unimodal functions. It can be clearly seen that the EGOA achieves the best performance in F1, F2, and F4 and the second best performance in F7. Moreover, the EGOA outperforms the other algorithms in F3 in terms of avg fitness and best fitness. In addition, the EGOA achieves the best avg fitness among all compared algorithms in F5. Besides, most of the p values in Table 4 are much less than 0.05, which means that the results of the EGOA and the other algorithms can be considered statistically independent. From these results, we can conclude that the EGOA has good exploitation capability. The main reasons for its superior performance are the embedded BHC local search and the exploitative patterns inherited from the basic GOA.


Function | Type | EGOA | GOA | DA | ALO | PSO | OBLGOA

F1 | avg | 9.0534E−14 | 0.9730 | 37.2776 | 0.0014 | 3.1013E−05 | 3.2306E−05
F1 | std | 3.7011E−14 | 0.8698 | 81.6186 | 0.0011 | 7.9301E−05 | 2.3431E−05
F1 | best | 1.9846E−14 | 0.0172 | 3.5526 | 0.0002 | 2.9928E−07 | 5.2292E−06
F1 | worst | 1.8459E−13 | 3.8065 | 450.1749 | 0.0048 | 0.0004 | 1.0108E−04

F2 | avg | 1.2287E−07 | 18.0367 | 4.4112 | 30.2392 | 0.0776 | 0.0146
F2 | std | 2.7105E−08 | 36.4320 | 2.3929 | 37.3048 | 0.1459 | 0.0390
F2 | best | 7.2962E−08 | 0.0306 | 1.3295 | 1.1430 | 0.0003 | 0.0013
F2 | worst | 2.0402E−07 | 118.1239 | 10.9685 | 114.7681 | 0.6804 | 0.1775

F3 | avg | 0.0028 | 1413.8190 | 1128.2543 | 4431.5768 | 228.0791 | 0.0054
F3 | std | 0.0036 | 628.5017 | 1139.0449 | 1821.2736 | 173.7777 | 0.0018
F3 | best | 8.5499E−13 | 475.2364 | 214.3597 | 2059.2765 | 63.3951 | 0.0019
F3 | worst | 0.0121 | 2784.6786 | 6174.2426 | 9027.3843 | 732.1625 | 0.0087

F4 | avg | 1.4732E−07 | 8.6396 | 30.4830 | 19.6348 | 3.7143 | 0.0122
F4 | std | 5.2777E−08 | 2.9120 | 7.9770 | 4.8814 | 1.6919 | 0.0084
F4 | best | 6.4635E−08 | 5.0805 | 16.6437 | 11.1498 | 1.6923 | 0.0024
F4 | worst | 2.6566E−07 | 16.9399 | 52.9255 | 30.7645 | 7.5366 | 0.0304

F5 | avg | 27.9972 | 932.4950 | 4273.4106 | 347.3018 | 168.3237 | 28.4168
F5 | std | 0.6484 | 1473.4525 | 16369.5087 | 439.0214 | 543.5966 | 0.2120
F5 | best | 26.2374 | 28.0057 | 91.6157 | 28.5465 | 12.5981 | 27.9057
F5 | worst | 29.0806 | 6233.7871 | 90522.6022 | 1608.0766 | 3038.7872 | 28.7632

F6 | avg | 0.0476 | 0.8813 | 45.0163 | 0.0019 | 8.9790E−05 | 1.5503
F6 | std | 0.0389 | 1.0279 | 124.4759 | 0.0018 | 0.0003 | 0.4663
F6 | best | 0.0061 | 0.0048 | 2.5149 | 0.0002 | 2.2448E−07 | 0.7925
F6 | worst | 0.1730 | 4.4407 | 670.9717 | 0.0070 | 0.0018 | 2.7873

F7 | avg | 0.0131 | 0.0277 | 0.1641 | 0.2639 | 0.0330 | 0.0014
F7 | std | 0.0061 | 0.0106 | 0.0870 | 0.0707 | 0.0146 | 0.0006
F7 | best | 0.0047 | 0.0101 | 0.0441 | 0.1493 | 0.0135 | 0.0004
F7 | worst | 0.0264 | 0.0605 | 0.3609 | 0.4763 | 0.0819 | 0.0028


Function | GOA | DA | ALO | PSO | OBLGOA

F1 | 3.0199E−11 | 3.0199E−11 | 3.0199E−11 | 3.0199E−11 | 3.0199E−11
F2 | 8.9934E−11 | 3.0199E−11 | 3.0199E−11 | 5.0723E−10 | 5.5727E−10
F3 | 3.0199E−11 | 3.0199E−11 | 3.0199E−11 | 3.0199E−11 | 0.0001
F4 | 3.0199E−11 | 3.0199E−11 | 3.0199E−11 | 3.0199E−11 | 3.0199E−11
F5 | 7.7725E−09 | 3.0199E−11 | 0.0002 | 0.0064 | 0.3953
F6 | 1.5292E−05 | 3.0199E−11 | 3.0199E−11 | 3.0199E−11 | 3.0199E−11
F7 | 3.0939E−06 | 3.0199E−11 | 3.0199E−11 | 1.0277E−06 | 3.0199E−11
F8 | 0.8187 | 0.0271 | 1.1841E−09 | 2.7726E−05 | 0.5895
F9 | 6.6914E−11 | 2.9566E−11 | 1.7616E−11 | 2.3050E−08 | 1.3989E−05
F10 | 3.0199E−11 | 3.0199E−11 | 3.0199E−11 | 3.0199E−11 | 3.0199E−11
F11 | 3.0180E−11 | 3.0180E−11 | 3.0180E−11 | 3.0180E−11 | 3.0180E−11
F12 | 3.0199E−11 | 3.0199E−11 | 3.0199E−11 | 0.1907 | 7.2208E−06
F13 | 3.8307E−05 | 3.6897E−11 | 5.9673E−09 | 5.5999E−07 | 1.2493E−05
F14 | 7.9359E−10 | 1.5600E−11 | 1.8519E−10 | 0.0005 | 1.6415E−11
F15 | 0.0003 | 0.0040 | 0.0004 | 0.0073 | 0.0001
F16 | 7.7686E−12 | 7.7686E−12 | 7.7686E−12 | 3.0802E−07 | 7.7686E−12
F17 | 1.2118E−12 | 1.2118E−12 | 4.4703E−12 | NaN | 1.2118E−12
F18 | 3.0047E−11 | 3.0047E−11 | 3.0047E−11 | 2.0056E−09 | 3.0047E−11
F19 | 6.5238E−07 | 3.1565E−05 | 0.125964933 | 1.1279E−11 | 5.4604E−06
F20 | 0.9705 | 0.5688 | 0.7503 | 5.5530E−05 | 0.0404
F21 | 0.0000 | 0.0061 | 0.0013 | 0.0624 | 0.0144
F22 | 0.0002 | 0.0002 | 0.0271 | 0.1019 | 0.0406
F23 | 0.0004 | 0.0163 | 0.0010 | 0.6038 | 0.0519
F24 | 0.2581 | 0.1224 | 3.0199E−11 | 4.4440E−07 | 0.0421
F25 | 0.0679 | 0.0224 | 3.0199E−11 | 0.3403 | 0.0615
F26 | 0.0103 | 0.0018 | 3.0199E−11 | 0.0011 | 0.0002
F27 | 6.0459E−07 | 2.1327E−05 | 3.0199E−11 | 8.8829E−06 | 1.1567E−07
F28 | 0.0022 | 0.0701 | 3.0199E−11 | 0.0064 | 0.0271
F29 | 3.0199E−11 | 3.0199E−11 | 3.0199E−11 | 3.0199E−11 | 4.1825E−09

4.1.2. Evaluation of Exploration Capability

Multimodal functions, which have several local optima, are used to evaluate exploration ability. Tables 5 and 6 list the results of the compared algorithms on the multimodal test functions and fixed-dimension multimodal functions, respectively. The EGOA works best in 10 of the 16 benchmark tests, and among the remaining six, its results on F8, F13, F15, and F19 are all very close to the optimal values. These results show that the EGOA has competitive exploration ability, and the p values presented in Table 4 demonstrate that the differences are statistically significant.


Function | Type | EGOA | GOA | DA | ALO | PSO | OBLGOA

F8 | avg | −7870.7306 | −7579.2715 | −7376.4956 | −5681.4728 | −6619.9581 | −7671.5200
F8 | std | 649.2707 | 900.0826 | 882.9292 | 694.0512 | 784.2524 | 730.6476
F8 | best | −9102.6546 | −9467.2412 | −9220.7846 | −9004.5042 | −8481.6966 | −9154.0221
F8 | worst | −6794.9884 | −5681.3700 | −5402.0041 | −5417.6748 | −5343.1670 | −6321.3596

F9 | avg | 0.5506 | 9.7174 | 7.0345 | 9.5848 | 5.3064 | 1.1282
F9 | std | 1.1216 | 7.7979 | 4.1399 | 5.0157 | 3.1063 | 1.5848
F9 | best | 0.0000 | 0.9950 | 0.9952 | 0.9950 | 0.9950 | 6.0484E−09
F9 | worst | 4.0747 | 32.8334 | 16.9164 | 19.8991 | 14.9244 | 5.9717

F10 | avg | 6.8570E−08 | 3.1128 | 8.1242 | 6.0103 | 1.7712 | 0.0010
F10 | std | 1.8729E−08 | 1.0787 | 3.9469 | 3.6325 | 0.7301 | 0.0002
F10 | best | 4.2697E−08 | 1.6466 | 2.8930 | 2.3181 | 0.0003 | 0.0006
F10 | worst | 1.1399E−07 | 6.2071 | 16.6509 | 14.3756 | 2.9579 | 0.0016

F11 | avg | 3.9540E−13 | 0.7293 | 1.1663 | 0.0628 | 0.0367 | 0.0003
F11 | std | 5.6432E−13 | 0.1782 | 0.1652 | 0.0302 | 0.0730 | 8.1399E−05
F11 | best | 5.0071E−14 | 0.2872 | 1.0366 | 0.0120 | 3.4788E−07 | 0.0001
F11 | worst | 3.1125E−12 | 1.0295 | 1.7957 | 0.1268 | 0.3924 | 0.0005

F12 | avg | 0.0596 | 4.7713 | 20.6637 | 11.6964 | 0.4780 | 0.0443
F12 | std | 0.0465 | 2.2841 | 8.1149 | 4.4639 | 0.6003 | 0.0198
F12 | best | 1.5136E−02 | 0.8415 | 7.9487 | 4.8224 | 2.3850E−09 | 0.0124
F12 | worst | 0.2322 | 9.1428 | 39.2302 | 22.3226 | 2.3453 | 0.0888

F13 | avg | 0.8174 | 7.6309 | 103.8394 | 22.4201 | 0.1950 | 0.4521
F13 | std | 0.3692 | 10.0853 | 336.0417 | 14.6435 | 0.6831 | 0.3147
F13 | best | 0.2073 | 0.0571 | 2.6543 | 0.0120 | 2.9372E−07 | 0.0783
F13 | worst | 1.4061 | 40.2498 | 1865.8767 | 52.4418 | 3.6765 | 1.6232


Function | Type | EGOA | GOA | DA | ALO | PSO | OBLGOA

F14 | avg | 0.9980 | 1.0641 | 2.0501 | 2.1827 | 4.0791 | 3.2005
F14 | std | 1.93E−16 | 0.3622 | 2.0262 | 1.9746 | 3.2178 | 2.4733
F14 | best | 0.9980 | 0.9980 | 0.9980 | 0.9980 | 0.9980 | 0.9980
F14 | worst | 0.9980 | 2.9821 | 10.7632 | 7.8740 | 12.6705 | 10.7632

F15 | avg | 0.0005 | 0.0063 | 0.0023 | 0.0035 | 0.0004 | 0.0076
F15 | std | 0.0003 | 0.0087 | 0.0049 | 0.0067 | 0.0004 | 0.0098
F15 | best | 0.0003 | 0.0006 | 0.0004 | 0.0005 | 0.0003 | 0.0003
F15 | worst | 0.0013 | 0.0204 | 0.0204 | 0.0204 | 0.0016 | 0.0224

F16 | avg | −1.0316 | −1.0316 | −1.0316 | −1.0316 | −1.0316 | −1.0316
F16 | std | 4.7012E−16 | 1.0871E−12 | 2.2380E−06 | 9.7720E−14 | 6.6486E−16 | 3.5117E−06
F16 | best | −1.0316 | −1.0316 | −1.0316 | −1.0316 | −1.0316 | −1.0316
F16 | worst | −1.0316 | −1.0316 | −1.0316 | −1.0316 | −1.0316 | −1.0316

F17 | avg | 0.3979 | 0.3979 | 0.3979 | 0.3979 | 0.3979 | 0.3979
F17 | std | 0.0000 | 6.0985E−13 | 7.5449E−07 | 3.5239E−14 | 0.0000 | 1.0888E−06
F17 | best | 0.3979 | 0.3979 | 0.3979 | 0.3979 | 0.3979 | 0.3979
F17 | worst | 0.3979 | 0.3979 | 0.3979 | 0.3979 | 0.3979 | 0.3979

F18 | avg | 3.0000 | 13.8000 | 3.0000 | 3.0000 | 3.9000 | 8.4000
F18 | std | 4.6795E−15 | 28.0054 | 1.5781E−05 | 9.0080E−13 | 4.929503018 | 20.5504
F18 | best | 3.0000 | 3.0000 | 3.0000 | 3.0000 | 3.0000 | 3.0000
F18 | worst | 3.0000 | 84.0000 | 3.0001 | 3.0000 | 30.0000 | 84.0006

F19 | avg | −3.8628 | −3.8091 | −3.8628 | −3.8628 | −3.8628 | −3.8357
F19 | std | 5.0160E−07 | 0.155858005 | 1.2444E−06 | 5.5947E−13 | 2.6962E−15 | 0.1481
F19 | best | −3.8628 | −3.8628 | −3.8628 | −3.8628 | −3.8628 | −3.8628
F19 | worst | −3.8628 | −3.0333 | −3.8628 | −3.8628 | −3.8628 | −3.0513

F20 | avg | −3.2341 | −3.2665 | −3.2767 | −3.2620 | −3.2665 | −3.2388
F20 | std | 0.0584 | 0.0603 | 0.0606 | 0.0611 | 0.0603 | 0.1028
F20 | best | −3.3220 | −3.3220 | −3.3220 | −3.3220 | −3.3220 | −3.3220
F20 | worst | −3.1803 | −3.2030 | −3.1831 | −3.2000 | −3.2031 | −3.0253

F21 | avg | −8.2885 | −4.7947 | −6.3718 | −6.7001 | −5.0570 | −7.8820
F21 | std | 2.4926 | 2.6812 | 2.8591 | 2.9905 | 2.8139 | 2.9107
F21 | best | −10.1532 | −10.1532 | −10.1532 | −10.1532 | −10.1532 | −10.1532
F21 | worst | −5.0552 | −2.6305 | −2.6305 | −2.6305 | −2.6305 | −2.6304

F22 | avg | −8.5972 | −7.3266 | −6.1121 | −7.4811 | −5.8788 | −7.7390
F22 | std | 2.8452 | 3.6392 | 3.1788 | 3.2560 | 3.5547 | 3.4005
F22 | best | −10.4029 | −10.4029 | −10.4029 | −10.4029 | −10.4029 | −10.4029
F22 | worst | −2.7659 | −2.7519 | −2.7518 | −2.7519 | −2.7519 | −2.7658

F23 | avg | −9.4383 | −5.5672 | −4.7293 | −6.0268 | −5.2543 | −7.0597
F23 | std | 2.5121 | 3.6810 | 2.8552 | 3.5713 | 3.3498 | 3.8262
F23 | best | −10.5364 | −10.5364 | −10.5364 | −10.5364 | −10.5364 | −10.5364
F23 | worst | −3.1054 | −1.8595 | −2.4217 | −2.4217 | −2.4217 | −2.4217
worst−3.1054−1.8595−2.4217−2.4217−2.4217−2.4217

4.1.3. Ability to Avoid Local Minima

Composite functions, which combine several benchmark functions, are very challenging test problems for metaheuristic algorithms. Furthermore, they are suitable for benchmarking exploration and exploitation simultaneously because of their massive number of local optima. The results for the composite benchmark functions listed in Table 7 indicate that the EGOA outperforms the other algorithms in F24 and F29, attains the best avg fitness in F25, and achieves the best std and avg fitness in F27. These results demonstrate that the EGOA is able to escape from local optima.


Function | Type | EGOA | GOA | DA | ALO | PSO | OBLGOA

F24 | avg | 84.4598 | 122.8630 | 188.7522 | 1123.4550 | 266.8987 | 188.0859
F24 | std | 59.7037 | 127.0180 | 109.6276 | 106.0951 | 100.0898 | 88.8280
F24 | best | 13.7184 | 20.9414 | 86.6431 | 901.6364 | 144.8265 | 40.9847
F24 | worst | 254.6628 | 515.3122 | 576.9355 | 1272.6311 | 586.9204 | 359.9907

F25 | avg | 299.2990 | 326.4544 | 337.6506 | 1178.6612 | 443.8959 | 399.0075
F25 | std | 134.7602 | 160.2337 | 166.3636 | 79.6722 | 132.5885 | 156.4962
F25 | best | 48.4789 | 33.0915 | 87.4429 | 997.4426 | 181.6229 | 69.3652
F25 | worst | 563.9395 | 528.9073 | 575.9922 | 1303.5229 | 647.2719 | 625.4685

F26 | avg | 735.5623 | 681.8806 | 866.8562 | 1532.2853 | 618.5467 | 887.9904
F26 | std | 139.1450 | 277.1884 | 181.9240 | 128.6867 | 142.0694 | 151.7927
F26 | best | 418.2166 | 285.5568 | 636.9803 | 1705.2732 | 386.0311 | 536.5325
F26 | worst | 919.4273 | 1249.5640 | 1357.3310 | 1174.8147 | 881.3347 | 1209.6244

F27 | avg | 892.2331 | 942.9682 | 989.6282 | 1408.3321 | 755.0872 | 891.5666
F27 | std | 42.5450 | 122.9602 | 76.1249 | 48.0027 | 127.5822 | 46.2035
F27 | best | 666.9724 | 648.5125 | 695.3967 | 1309.5712 | 497.6061 | 646.9350
F27 | worst | 900.0019 | 1040.3032 | 1082.9054 | 1479.5117 | 1025.0401 | 900.0087

F28 | avg | 264.4913 | 163.9287 | 233.8788 | 1343.2227 | 308.1830 | 557.7770
F28 | std | 204.3920 | 141.5067 | 211.9575 | 142.2203 | 127.8672 | 352.8388
F28 | best | 105.9604 | 55.0450 | 93.3352 | 1017.2573 | 128.6536 | 138.0444
F28 | worst | 900.0010 | 545.2874 | 1038.8526 | 1534.4240 | 722.3016 | 900.0052

F29 | avg | 900.0003 | 907.2424 | 924.6529 | 1357.1334 | 923.0487 | 900.0005
F29 | std | 0.0001 | 4.8004 | 6.1472 | 50.5599 | 8.8742 | 0.0005
F29 | best | 900.0001 | 900.2385 | 914.0881 | 1187.8816 | 911.5907 | 900.0001
F29 | worst | 900.0007 | 918.4818 | 936.4468 | 1460.9597 | 944.8864 | 900.0022

4.1.4. Analysis of EGOA Convergence Curves

In this subsection, the convergence speed of the EGOA is analyzed. Several benchmark functions were chosen and executed with 30 search agents over 500 iterations. The results for the EGOA, GOA, DA, ALO, PSO, and OBLGOA are presented in Figure 1, where the abscissa gives the current iteration number and the ordinate gives the best fitness value found at each iteration. Representative convergence curves from each part of the benchmark set were selected, showing that the EGOA performs strongly on different kinds of benchmark functions compared to the other well-known metaheuristic algorithms. During the iterations of F1, F4, and F10, the convergence speed of the EGOA lies roughly in the middle of the five comparison algorithms for the first 250 or so iterations; after that, an inflection point appears and the EGOA accelerates its convergence. This is because the EGOA is hybridized with a niche mechanism that provides better direction for individuals and a faster convergence speed. During the iterations of F8, F15, and F21, the slope of the convergence curve is steep in the initial iterations, and after several jumps, the algorithm escapes the local optima and finally attains the optimal fitness. This can be explained by the nonlinear convergence parameter f balancing the two search phases and by the combination with the β-hill climbing technique improving the ability to jump out of local optima. Overall, the convergence curves demonstrate that the proposed EGOA provides relatively fast convergence in most of the iterations.
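The curves plotted in Figure 1 track the best fitness found so far at each iteration. A generic recorder looks like the sketch below, where `step` is a stand-in for one iteration of any of the compared algorithms:

```python
def convergence_curve(objective, step, x0, iterations=500):
    """Record the best fitness seen so far at each iteration, i.e., the
    quantity plotted on the ordinate of the convergence figures. `step` maps
    the current solution to the next candidate."""
    x, best = x0, objective(x0)
    curve = []
    for _ in range(iterations):
        x = step(x)
        best = min(best, objective(x))  # best-so-far, so the curve never rises
        curve.append(best)
    return curve
```

Because the recorder keeps the best-so-far value, every curve is non-increasing, and inflection points like the one observed around iteration 250 correspond to sudden improvements in the best solution rather than plotting artifacts.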

4.2. Benchmark Set 2

The CEC2019 benchmark set contains 10 test functions. Compared with previous CEC benchmark sets, the complexity of the test functions is significantly increased, and more attention is paid to an algorithm's ability to find an accurate solution. In Table 8, the column labelled Function identifies the objective fitness function, Dim gives the dimensionality of each function, Range gives the boundaries of the search space, and f_min gives the optimal value.


Function | Dim | Range | f_min
f1: Storn's Chebyshev polynomial fitting problem | 9 | [−8192, 8192] | 1
f2: Inverse Hilbert matrix problem | 16 | [−16384, 16384] | 1
f3: Lennard-Jones minimum energy cluster | 18 | [−4, 4] | 1
f4: Shifted and rotated Rastrigin's function | 10 | [−100, 100] | 1
f5: Shifted and rotated Griewank's function | 10 | [−100, 100] | 1
f6: Shifted and rotated Weierstrass function | 10 | [−100, 100] | 1
f7: Shifted and rotated Schwefel's function | 10 | [−100, 100] | 1
f8: Shifted and rotated expanded Schaffer's F6 function | 10 | [−100, 100] | 1
f9: Shifted and rotated Happy Cat function | 10 | [−100, 100] | 1
f10: Shifted and rotated Ackley function | 10 | [−100, 100] | 1

Each test function uses 30 search agents for 500 iterations. The presented results were recorded over 30 independent trials with random initial conditions, from which the statistical results were calculated: the average fitness (avg), standard deviation of fitness (std), best fitness (best), and worst fitness (worst), which have the same meaning as in benchmark set 1. Because this benchmark set emphasizes the accuracy of the algorithm in the solution space, the average fitness and best fitness deserve the most attention.
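The per-function statistics described above can be computed as in the sketch below. Here `run_trial` is a hypothetical stand-in for one independent optimization run (simulated with random draws), since the full EGOA implementation is beyond the scope of this sketch.

```python
import random
import statistics

def run_trial(seed):
    # Stand-in for one independent run of an optimizer: returns the best
    # (minimum) fitness found over 500 iterations, simulated here by
    # drawing random candidate fitness values
    rng = random.Random(seed)
    return min(rng.uniform(0, 10) for _ in range(500))

# 30 independent trials with different random initial conditions
results = [run_trial(seed) for seed in range(30)]

summary = {
    "avg": statistics.mean(results),
    "std": statistics.stdev(results),  # sample standard deviation
    "best": min(results),              # minimization: smaller is better
    "worst": max(results),
}
```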

4.2.1. Evaluation of EGOA’s Performance in CEC2019

The CEC2019 test functions, which have a complicated solution space, are more challenging test problems for metaheuristic algorithms. Furthermore, they are suitable for evaluating an algorithm's ability to find accurate solutions in a complicated search process. The results for the CEC2019 test functions listed in Table 9 indicate that EGOA works best on f4 and f10 and ranks second on f5 and f6. Meanwhile, it gets the best avg fitness in 7 of the 10 benchmark tests and achieves the best fitness in 5 of 10. The results demonstrate that, compared with the other algorithms, EGOA has good average performance and higher reliability. At the same time, it shows the best optimization ability in most search processes.


Function | Type | EGOA | GOA | DA | ALO | PSO | OBLGOA

f1 | avg | 1.5212E+05 | 1.9571E+06 | 4.6646E+04 | 1.1544E+07 | 1.8749E+06 | 1.2357E+05
f1 | std | 1.8946E+05 | 1.8015E+06 | 1.4480E+05 | 1.4062E+07 | 1.4164E+06 | 1.4289E+05
f1 | best | 1.0000 | 3.2527E+05 | 1.0000 | 8.0545E+05 | 2.1036E+03 | 8.3500E+02
f1 | worst | 5.2933E+05 | 4.9750E+06 | 4.5874E+05 | 3.9307E+07 | 4.4657E+06 | 4.4661E+05

f2 | avg | 271.2334 | 719.7666 | 116.6510 | 2664.5301 | 3490.0571 | 217.4122
f2 | std | 361.7063 | 645.6242 | 148.6381 | 2243.3673 | 1450.5327 | 92.5807
f2 | best | 4.3805 | 311.4380 | 5.0277 | 432.7258 | 1432.2960 | 103.6213
f2 | worst | 996.9688 | 2384.2235 | 372.9532 | 8234.1964 | 5455.4163 | 438.8886

f3 | avg | 2.8669 | 9.2929 | 3.1398 | 8.5125 | 3.4387 | 2.1356
f3 | std | 1.0038 | 2.2391 | 1.1173 | 1.4137 | 1.9444 | 1.7195
f3 | best | 1.7707 | 4.6082 | 1.4266 | 6.4314 | 1.4091 | 1.4091
f3 | worst | 4.3937 | 11.7111 | 5.2141 | 11.7116 | 7.1563 | 6.7118

f4 | avg | 27.3663 | 27.6417 | 33.7491 | 38.0307 | 27.3972 | 31.8436
f4 | std | 9.6834 | 11.4310 | 14.8956 | 14.0494 | 13.4154 | 11.2370
f4 | best | 8.9597 | 13.9345 | 8.9738 | 22.7559 | 9.9546 | 17.9143
f4 | worst | 40.7982 | 47.0947 | 59.7039 | 67.5145 | 52.0411 | 58.7070

f5 | avg | 1.2161 | 1.2897 | 1.3012 | 1.5997 | 1.2200 | 1.1464
f5 | std | 0.0770 | 0.1119 | 0.1121 | 0.2468 | 0.0906 | 0.0386
f5 | best | 1.0664 | 1.0813 | 1.1022 | 1.1276 | 1.0688 | 1.0787
f5 | worst | 1.3175 | 1.4304 | 1.4504 | 1.7923 | 1.3469 | 1.1968

f6 | avg | 4.3797 | 4.0630 | 6.2424 | 7.2472 | 6.1541 | 2.6168
f6 | std | 1.6749 | 2.1278 | 1.7631 | 1.8683 | 1.6317 | 1.1671
f6 | best | 1.6809 | 1.7761 | 3.0929 | 4.0192 | 3.6529 | 1.0000
f6 | worst | 6.5611 | 9.3312 | 8.5585 | 9.5316 | 8.6161 | 4.7035

f7 | avg | 920.2188 | 1183.3734 | 1254.7191 | 1195.1984 | 1318.8619 | 1058.0542
f7 | std | 325.6570 | 419.8797 | 254.7967 | 350.7395 | 555.2455 | 295.0898
f7 | best | 272.5803 | 479.3381 | 884.4280 | 815.0716 | 423.4350 | 641.5707
f7 | worst | 1307.0650 | 1834.6642 | 1561.1209 | 1877.9551 | 2318.1617 | 1532.6045

f8 | avg | 4.0581 | 4.1528 | 4.4705 | 4.6317 | 4.5015 | 4.0682
f8 | std | 0.4209 | 0.4581 | 0.3054 | 0.4536 | 0.3332 | 0.4385
f8 | best | 3.1266 | 3.5196 | 3.9365 | 3.4530 | 3.8255 | 3.1707
f8 | worst | 4.5145 | 5.0039 | 4.9994 | 5.0139 | 5.0444 | 4.5417

f9 | avg | 1.1803 | 1.2349 | 1.3449 | 1.3285 | 1.2919 | 1.2322
f9 | std | 0.0732 | 0.1509 | 0.1349 | 0.1052 | 0.0957 | 0.1046
f9 | best | 1.1162 | 1.1229 | 1.1485 | 1.2140 | 1.1649 | 1.0843
f9 | worst | 1.3133 | 1.5899 | 1.5767 | 1.5636 | 1.4580 | 1.4399

f10 | avg | 21.0092 | 21.0149 | 21.0839 | 21.3776 | 21.0221 | 21.1974
f10 | std | 0.0293 | 0.0421 | 0.0894 | 0.1305 | 0.0595 | 0.1520
f10 | best | 20.9998 | 20.9999 | 21.0170 | 21.1255 | 20.9999 | 21.0000
f10 | worst | 21.0927 | 21.1339 | 21.3149 | 21.6091 | 21.1900 | 21.4238

4.2.2. Analysis of EGOA’s Convergence Curves in CEC2019

In this subsection, the convergence behavior of the EGOA on the CEC2019 test functions is investigated. Several benchmark functions were chosen and executed with 30 search agents over 500 iterations. The results for part of the benchmark functions are presented in Figure 2. Compared with the parameter spaces shown in Figure 1, the solution spaces of the CEC2019 test functions are more complex than those of benchmark set 1, and it is more challenging to find the optimal fitness. During the iterations of f4, f7, and f10, EGOA maintains a relatively steep slope in the initial iterations, and after several sudden drops of the curve it converges to the optimal fitness in the search space, far outperforming the other algorithms. Although OBLGOA achieves the best result on f2, EGOA still ranks second. Taken together, these results show that the proposed EGOA also has superior search ability in more complex solution spaces compared with well-known metaheuristics.
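The sudden drops in the curves are consistent with escapes from local optima. Below is a minimal sketch of the β-hill climbing technique that EGOA hybridizes with, demonstrated on a simple sphere function; the step width `bw`, the search bounds, and the greedy acceptance rule are illustrative assumptions of this sketch, not the exact parameter settings used in the paper.

```python
import random

def beta_hill_climbing(f, x0, lo, hi, beta=0.1, bw=0.05, iters=200, seed=1):
    # beta-hill climbing: refine x0 with a small random walk (N-operator),
    # then reset each dimension with probability beta (beta-operator),
    # which is what lets the search jump out of a local optimum
    rng = random.Random(seed)
    best, best_fit = list(x0), f(x0)
    for _ in range(iters):
        # N-operator: bounded random step around the current best
        cand = [min(hi, max(lo, v + bw * (hi - lo) * (2 * rng.random() - 1)))
                for v in best]
        # beta-operator: uniform reset of each dimension with probability beta
        cand = [rng.uniform(lo, hi) if rng.random() < beta else v
                for v in cand]
        fit = f(cand)
        if fit < best_fit:  # greedy acceptance: keep only improvements
            best, best_fit = cand, fit
    return best, best_fit

sphere = lambda x: sum(v * v for v in x)
best, best_fit = beta_hill_climbing(sphere, [3.0, 3.0], -5.0, 5.0)
```

The β-operator produces exactly the kind of occasional long jump that shows up as a sudden drop in a convergence curve when the jump lands in a better basin.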

5. Application of the EGOA to the BPP

5.1. BPP Formulation

The BPP consists of packing a set of items with different weights into a minimum number of bins, which may have different capacities. Mathematically, the BPP can be defined as a programming problem as follows [32]: