Abstract

The whale optimization algorithm (WOA) is a high-performance metaheuristic that can effectively solve many practical problems and has broad application prospects. However, the original algorithm leaves significant room for improvement in convergence speed and precision, and it easily falls into local optima when facing complex or high-dimensional problems. To address these shortcomings, an elite strategy and the spiral motion from moth flame optimization are utilized to enhance the original algorithm's efficiency; the resulting method is called MEWOA. By using these two mechanisms to build a superior population, MEWOA further balances the exploration and exploitation phases and makes it easier for the algorithm to escape local optima. To show the proposed method's performance, MEWOA is compared with other high-performing algorithms on a series of comprehensive benchmark functions and applied to practical engineering problems. The experimental data reveal that MEWOA is better than the compared algorithms in convergence speed and solution quality. Hence, it can be concluded that MEWOA has great potential in global optimization.

1. Introduction

Optimization tasks can be classified into many forms according to the relation and number of objectives or hybrid methods, such as robust optimization [1], large-scale optimization [2, 3], multiobjective optimization [4], fuzzy optimization [5], and many-objective optimization [6, 7]. One class of optimizers is based on an iteration-based evolution of a swarm. At present, metaheuristic methods have attracted more and more public attention [8–22]. Optimization problems appear in many fields of science, and metaheuristics can serve as a solution in many of them, such as support vector machines [20, 23], feature selection [17, 24–28], extreme learning machines (ELM) [29–34], bankruptcy prediction [19, 35–38], engineering design [16, 18, 39–43], optimal resource allocation [44], monitoring [45–48], parameter optimization [49], temperature optimization [50], intelligent damage detection [51, 52], smart grids [53], image enhancement optimization [54], image and video handling [55–60], medical image recognition [61], and image segmentation [62, 63]; their randomness, diversification, and intensification abilities can effectively handle highly nonlinear problems with many local optima. They can maximize the relationship between resources and interests and create maximum benefit under the condition of limited resources.

Swarm methods have emerged in many fields as they can deal with a wide variety of problems in a flexible way. Some well-known techniques are the slime mould algorithm [64], Harris hawks optimization (HHO) [65], the naked mole-rat algorithm (NMR) [66], hunger games search (HGS) [67], the moth search algorithm (MSA) [68], monarch butterfly optimization (MBO) [69], the krill herd algorithm (KH) [70], the teaching-learning-based optimizer (TLBO) [71], differential evolution (DE) [72], and differential search (DS) [73]. The exploratory and exploitative abilities of swarm-based methods provide a good evolutionary basis for areas such as computer vision [74], deployment optimization [75], enhancement of transportation networks [76], optimization of deep learning tasks [77–79], improvement of prediction methods [80, 81], and decision-making techniques [82–84]. One of the possible methods is the whale optimization algorithm (WOA), an intelligent optimization algorithm presented by Mirjalili and Lewis [85] in 2016. The algorithm mainly models the whale's hunting behavior and the prey's evasive behavior to seek the optimal solution to the problem. Although WOA has the advantages of few parameters and strong global convergence, the standard WOA still suffers from slow convergence speed and low convergence accuracy. Thus, many improved versions can be found in the literature. Wang et al. [86] devised MOWOA, an opposition-based multiobjective WOA using global grid ranking, combining multiple parts to improve optimization performance. In order to solve the problems of poor exploration and local optimal stagnation of WOA, Salgotra et al. [87] proposed an improved WOA based on mechanisms such as opposition-based learning, exponentially decreasing parameters, and elimination or reinitialization of the worst particles. The improved algorithm has been experimentally demonstrated to achieve a large improvement in performance. Sun et al. 
[88] enhanced the WOA with the strategy of quadratic interpolation (QIWOA). The algorithm introduced new parameters to effectively search the solution space and handle premature convergence and adopted quadratic interpolation around the best individual to enhance its exploitation capability and solution precision. Agrawal et al. [89] combined quantum concepts with the WOA, adopting the quantum bit representation of population agents and the quantum rotation gate operator as a mutation operator to improve the exploration and exploitation ability of classical WOA. Hussein et al. [90] handled binary optimization problems by using the basic version of WOA and devising two transfer functions (S-shaped and V-shaped) to map the continuous search space into a binary one. Luo et al. [91] integrated three strategies with the original approach to obtain a better balance between exploration and exploitation trends. Firstly, a chaos initialization phase is utilized to start a group of chaos-triggered whales at the initial phase. Then, the diversity of the evolutionary population is enhanced by Gaussian mutation. Finally, chaotic local search, combined with a "narrowing" strategy, is utilized to raise the original optimizer's exploitation tendency. The effect of this strategy stems from the basic concepts of chaos theory [59, 92–94]. Sun et al. [95] proposed a nonlinear dynamic control parameter updating strategy based on the cosine function and integrated this strategy and the Lévy flight strategy into WOA (MWOA). Hemasian-Etefagh and Safi-Esfahani [96] introduced a new idea (called GWOA) of whale grouping to overcome the early convergence problem. Elaziz and Mirjalili [97] integrated chaotic mapping and opposition-based learning into the algorithm and used the differential evolution (DE) algorithm to automatically select the chaotic map and part of the population to alleviate the defects (DEWCO). 
Guo [98] devised an enhanced WOA by using the strategies of social learning and wavelet mutation. A new linearly increasing probability is designed to increase the capability of global search. According to the principle of social learning, an individual's social network is constructed by using social hierarchy and social influence. To enable the exchange and sharing of information among groups, an adaptive neighborhood learning strategy is established on the basis of the network relationship. The Morlet wavelet mutation mechanism is adopted to dynamically adjust the mutation space, thus enhancing the capability of the algorithm to escape the local optimum.

WOA and its improved versions are often used to solve some practical application problems. Revathi et al. [99] devised an optimization scheme with the brainstorm WOA (BS-WOA) to identify the key used to improve the data structure’s privacy and practicability by amending the database. Gong et al. [100] used an improved edition of the WOA to determine the optimal features and amend the classification’s artificial neural network weights. The model is simulated on FLAIR, T1, and T2 data sets, showing that the presented model has a robust diagnostic capability. The model was then used to diagnose common diseases such as breast cancer, diabetes, and erythema squamous epithelium. Zhang et al. [101] utilized the best convolutional neural network (CNN) to process the skin disease image and adopted the improved WOA to optimize the CNN. Xiong et al. [102] devised an enhanced WOA, called IWOA, to accurately optimize different PV models’ parameters, a typical complex nonlinear multivariable strong coupling optimization problem. Petrović et al. [103] analyzed the scheduling problem of a single mobile robot, and the best transportation method of raw materials, goods, and parts in an intelligent manufacturing system was found through WOA. Li et al. [104] used WOA to modify the input weight and hidden layer bias of extreme learning machine (ELM) and used this model to assess the aging of insulated gate bipolar transistor module. Akyol and Alatas [105] adopted WOA for emotional analysis, which is a multiobjective problem. Qiao [106] introduced adaptive search and encircling mechanism, spiral position, and jump behavior to enhance the efficiency of WOA and used the improved algorithm to predict short-term gas consumption. Lévy flight and pattern search were embedded into WOA for parameter estimation of solar cell and photovoltaic system [107].

Although WOA has significantly improved performance and robustness compared with other metaheuristic algorithms, it is still not free from the dilemma of easily falling into local optimal solutions, and the same phenomena of low solution accuracy and slow convergence exist when solving function problems. So, this paper proposes an improved variant of WOA, named MEWOA. We introduce two strategies, an elite strategy and the spiral motion from moth flame optimization (MFO) [12, 108, 109], which significantly strengthen the convergence accuracy and speed of the basic WOA and make it easier to jump out of local optima. To further verify the performance of MEWOA, the algorithm is also utilized to solve practical engineering problems. The results reveal that MEWOA is superior to the other algorithms in both solution quality and convergence speed.

The main contributions of this study can be summarized as follows: (i) Aiming at overcoming the problems of WOA, we introduce an elite strategy as well as spiral motion into WOA to improve the diversity of the population while enhancing optimal solution selection, and finally propose an improved WOA (MEWOA). (ii) MEWOA is compared with some metaheuristic algorithms and advanced algorithms on function test sets such as CEC2017 and CEC2014, respectively, and satisfactory results are obtained. (iii) The proposed MEWOA achieves excellent results on three typical engineering problems.

This paper is structured as follows. Section 2 briefly introduces WOA, elite strategy, and MFO. Section 3 describes MEWOA. In Section 4, a range of experiments is conducted based on MEWOA to demonstrate the proposed algorithm’s performance. In Section 5, the full content is summarized, and the future research direction is pointed out.

2. Background Knowledge

2.1. Whale Optimization Algorithm (WOA)

WOA is a metaheuristic algorithm devised by Mirjalili and Lewis [85] based on the bubble-net behavior of humpback whales during hunting. In this algorithm, each humpback's position represents a feasible solution. Humpback whales hunt by producing distinctive bubbles along a circular or "9"-shaped path. According to this phenomenon, the authors' mathematical model includes the following three steps: random search, encircling prey, and attacking prey.

2.1.1. Random Search

Each agent's position is randomly generated to find prey. Moreover, the specific process is as follows:

$D = |C \cdot X_{rand}^{d}(t) - X^{d}(t)|$  (1)

$X^{d}(t+1) = X_{rand}^{d}(t) - A \cdot D$  (2)

where $X_{rand}^{d}$ is the position of the d-th dimension in the randomly selected whale, $X^{d}$ denotes the position of the current individual in the d-th dimension, $t$ means the current number of iterations, the calculation result $D$ denotes the distance between the random individual and the current individual, and $A$ and $C$ are the coefficients shown in the following formulas:

$A = 2a \cdot r_{1} - a$  (3)

$C = 2 \cdot r_{2}$  (4)

where $a$ is a parameter that will linearly lessen from 2 to 0, and $r_{1}$ and $r_{2}$ are random numbers in $[0, 1]$.

2.1.2. Encircling Prey

When encircling the prey, the mathematical model is as follows:

$D = |C \cdot X^{*d}(t) - X^{d}(t)|$, $X^{d}(t+1) = X^{*d}(t) - A \cdot D$  (5)

where $X^{*d}$ reveals the position of the d-th dimension in the best individual so far, $X^{d}$ denotes the position of the current individual in the d-th dimension, and the calculation result $D$ denotes the distance between the best individual and the current individual.

2.1.3. Attacking Prey

On the basis of the hunting behavior of the humpback whale, which swims toward its prey in a spiral motion, the mathematical model of hunting behavior is devised as follows:

$X^{d}(t+1) = D' \cdot e^{bl} \cdot \cos(2\pi l) + X^{*d}(t)$, with $D' = |X^{*d}(t) - X^{d}(t)|$  (6)

where $D'$ denotes the distance between the whale and its prey, $b$ is a constant utilized to define the shape of the spiral, and $l$ is a random number in $[-1, 1]$.

As the whale approaches its food in a spiral shape, it also shrinks its encircling circle. Therefore, a probability $p$ is adopted to realize this synchronous behavior model, and Mirjalili sets the threshold of $p$ as 0.5 to switch the position update of the whale between the shrinking encircling mechanism and the spiral model. The concrete model is shown as follows:

$X^{d}(t+1) = X^{*d}(t) - A \cdot D$ if $p < 0.5$; $X^{d}(t+1) = D' \cdot e^{bl} \cdot \cos(2\pi l) + X^{*d}(t)$ if $p \geq 0.5$  (7)

where $p$ is a random number in $[0, 1]$. When $p < 0.5$ and $|A| < 1$, the whale moves toward the best individual $X^{*}$ and updates its position by the prey-encircling formula. Otherwise, when $p < 0.5$ and $|A| \geq 1$, the agent updates its position by a randomly selected reference whale.
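The three update modes above (random search, prey encircling, and the spiral bubble-net attack) can be sketched in Python as follows. This is a minimal illustration rather than the authors' implementation; drawing scalar coefficients per agent and fixing the spiral constant b = 1 are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def woa_update(X, best, a, b=1.0):
    """One WOA position update for the whole population.

    X    : (N, d) array of current whale positions
    best : (d,) best solution found so far
    a    : scalar, linearly decreased from 2 to 0 over the run
    """
    N, d = X.shape
    X_new = np.empty_like(X)
    for i in range(N):
        A = 2 * a * rng.random() - a        # A = 2*a*r1 - a
        C = 2 * rng.random()                # C = 2*r2
        p = rng.random()
        if p < 0.5:
            if abs(A) < 1:                  # encircle the best individual
                D = np.abs(C * best - X[i])
                X_new[i] = best - A * D
            else:                           # random search around another whale
                X_rand = X[rng.integers(N)]
                D = np.abs(C * X_rand - X[i])
                X_new[i] = X_rand - A * D
        else:                               # spiral bubble-net attack
            l = rng.uniform(-1, 1)
            D_prime = np.abs(best - X[i])
            X_new[i] = D_prime * np.exp(b * l) * np.cos(2 * np.pi * l) + best
    return X_new
```

In a full optimizer this update would be called once per iteration, with `a` decreased linearly and out-of-bounds positions clipped back into the search space.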

2.2. Elite Strategy

According to the positions of the original population $P$, we introduce a new population $P'$ according to the fitness values $f(P)$. Then, $P$ and $P'$ are combined to form the population $Q$ sorted by fitness $f(Q)$, and the top $N$ individuals are selected. The pseudocode of the elite strategy is shown in Algorithm 1.

Calculate the fitness of the original population $P$;
Sort $P$ by fitness and rearrange it by serial number to get $P'$;
Combine $P$ and $P'$ to form $Q$;
Calculate the fitness of the population $Q$, recorded as $f(Q)$;
Sort $Q$ by $f(Q)$ and select the top $N$ individuals;

We know that a population obtained by random initialization can cover the global search space, but such a search is not targeted. If some spatial regions have already been proved invalid during the first initialization, a repeated random search may still visit these useless regions, which leads to a waste of resources. The addition of the elite strategy solves this problem. While still satisfying the global search, subsequent searches do not revisit the invalid solution space but concentrate on the space where the optimal solution may exist, which greatly improves the efficiency of the algorithm. Through the elite strategy, a new population is generated by ranking the original population according to fitness values, after which the two populations are combined and the optimal top N individuals are selected from them. Doing so selects the optimal individuals each time and ultimately improves the overall population quality.
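The elite selection step can be sketched as follows. This is a hedged sketch: Section 4.2 refers to the mechanism as elite opposition-based learning (EOBL), so the "new population" is generated here as an opposition population; the function name `eobl_select` and the scalar bounds are illustrative assumptions.

```python
import numpy as np

def eobl_select(P, fitness_fn, lb, ub):
    """Elite strategy read as elite opposition-based learning (EOBL):
    build an opposition population from the current one, merge both,
    and keep the best N individuals (minimization assumed).

    P          : (N, d) array of positions
    fitness_fn : callable mapping a position vector to a scalar fitness
    lb, ub     : lower/upper bounds of the search space (scalars here)
    """
    N = len(P)
    P_opp = lb + ub - P                      # opposition-based "new population" (assumption)
    Q = np.vstack([P, P_opp])                # combine the two populations
    fQ = np.array([fitness_fn(x) for x in Q])
    return Q[np.argsort(fQ)[:N]]             # sort by fitness, select the top N
```

On a simple sphere function, the selected population always has a best fitness no worse than the original population's best, which is exactly the quality-improvement effect described above.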

2.3. Moth Flame Optimization (MFO)

MFO is a swarm intelligence optimization algorithm [12, 108, 109] inspired by the unique flight mode of moths, named transverse orientation, used for navigation at night. In this algorithm, the set of moths $M$ can be illustrated as

$M = \begin{bmatrix} m_{1,1} & \cdots & m_{1,d} \\ \vdots & \ddots & \vdots \\ m_{n,1} & \cdots & m_{n,d} \end{bmatrix}$  (8)

where $m_{i,j}$ is the j-th position (dimension) corresponding to the i-th moth. Assuming the flame set is $F$, where $F_{i,j}$ is the j-th position corresponding to the i-th flame, the flame set can be expressed as follows:

$F = \begin{bmatrix} F_{1,1} & \cdots & F_{1,d} \\ \vdots & \ddots & \vdots \\ F_{n,1} & \cdots & F_{n,d} \end{bmatrix}$  (9)

Each agent updates its position according to the following expression:

$M_{i} = S(M_{i}, F_{j})$  (10)

where $M_{i}$ is the i-th moth, $F_{j}$ is the j-th flame, and $S$ is the helix function:

$S(M_{i}, F_{j}) = D_{i} \cdot e^{bt} \cdot \cos(2\pi t) + F_{j}$  (11)

where $D_{i} = |F_{j} - M_{i}|$ denotes the linear distance between the i-th moth and the j-th flame, $b$ means the defined helix shape constant, and $t$ denotes a random number in the interval $[-1, 1]$.

To help the moths escape from local optima, the number of flames decreases during the iterations:

$flame\_no = \mathrm{round}\left(N_{f} - l \cdot \dfrac{N_{f} - 1}{T}\right)$  (12)

where $l$ denotes the number of the current iteration, $N_{f}$ means the maximum quantity of flames, and $T$ means the maximum quantity of iterations.

The process of MFO is summarized as follows:(1)Initialize the population and calculate the fitness values of the population(2)The fitness values are sorted; calculate the location of the flame and its fitness value(3)Calculate the number of flames according to equation (12)(4)Calculate the linear distance between the moth and the corresponding flame and substitute it into equation (11) to obtain the updated value(5)Calculate the fitness value according to the updated moth population(6)Judge whether the termination condition is met; otherwise, jump to Step 2

The strategies in MFO give good access to the best individuals in the population, i.e., the corresponding flame positions. Because the flame positions are derived from the moth population, they are obtained after the fitness values of the moth individuals are calculated and ranked, and, as the iterations progress, flame positions are retained only for the better individuals in the moth population. Therefore, applying MFO to WOA can effectively enhance the local search capability of the algorithm.
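The flame-guided spiral update and the shrinking flame count summarized above can be sketched as follows. This is an illustrative reading, not the original implementation; assigning surplus moths to the last remaining flame follows the usual MFO convention and is an assumption here.

```python
import numpy as np

def mfo_update(M, F, b, iteration, max_iter):
    """One MFO position update: each moth spirals around its assigned flame,
    and the number of flames shrinks linearly over the iterations.

    M, F : (n, d) arrays of moth and flame positions (flames sorted by fitness)
    b    : helix shape constant
    """
    n, d = M.shape
    # number of flames decreases linearly with the iteration count (eq. above)
    flame_no = round(n - iteration * (n - 1) / max_iter)
    rng = np.random.default_rng()
    M_new = np.empty_like(M)
    for i in range(n):
        j = min(i, flame_no - 1)                  # surplus moths share the last flame
        t = rng.uniform(-1, 1, size=d)
        D = np.abs(F[j] - M[i])                   # distance from moth to flame
        M_new[i] = D * np.exp(b * t) * np.cos(2 * np.pi * t) + F[j]
    return M_new
```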

3. Proposed Method

In this section, MEWOA is illustrated in detail. The flowchart of the proposed MEWOA is presented in Figure 1. MEWOA incorporates the elite strategy and the MFO algorithm to balance the capabilities of exploration and exploitation. The algorithm first uses the elite strategy to generate a high-quality candidate population. Based on this population, the MFO algorithm is used to form a better population, which helps the algorithm converge quickly, find the optimal solution, and effectively avoid premature stagnation. The pseudocode of MEWOA is illustrated in Algorithm 2.

Initialize the population $X_{i}$ ($i = 1, 2, \ldots, N$) and the parameters $a$, $A$, $C$, $p$
Calculate the fitness of each search agent
$X^{*}$ = the best search agent
while $t <$ maximum number of iterations
 Adopt the elite strategy
 Adopt the MFO algorithm
 for each whale
  Update $a$, $A$, $C$, and $p$
   if $p < 0.5$
    if $|A| < 1$
     Update the position of the current agent by equation (5)
    elseif $|A| \geq 1$
     Select a random search agent
     Update the position of the current search agent by equation (2)
    end if
   elseif $p \geq 0.5$
    Update the position of the current search agent by equation (6)
   end if
 end for
 Check if any search agent goes beyond the search space and amend it
 Calculate the fitness of each search agent
 Update $X^{*}$ if there is a better solution
end while
return $X^{*}$

The computational complexity of MEWOA depends on the population size (N), the dimension size (Dim), and the maximum number of evaluations (Max_FEs). The iteration number t is related to the maximum evaluation number and the population size: t = Max_FEs/N. The time complexity can be expressed as O(MEWOA) = O(evaluation of the fitness) + t × (O(elite strategy) + O(MFO) + O(evaluation of the fitness) + O(WOA)). The complexity of the fitness evaluation is O(N), the complexity of the elite strategy is O(N log N) (dominated by sorting), and the complexities of MFO and WOA are each O(N × Dim). So, the whole time complexity is O(MEWOA) = O(N) + t × O(N log N + 2 × N × Dim + N). The time complexity of the original WOA is O(WOA) = t × (O(evaluation of the fitness) + O(WOA update)) = t × (O(N) + O(N × Dim)). The increased complexity of MEWOA over the original WOA is therefore t × O(N log N + N × Dim).
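As a quick numeric check of the relation t = Max_FEs/N under the settings used in the experiments below (N = 30, Max_FEs = 150,000), the following snippet counts the iterations and the dominant per-iteration work; the operation count keeps only the leading O(N × Dim) and O(N) terms and is purely illustrative:

```python
# iteration count and dominant per-iteration work for MEWOA (illustrative)
N, Dim, Max_FEs = 30, 30, 150_000

t = Max_FEs // N                 # number of iterations, t = Max_FEs / N
print(t)                         # 5000

# dominant per-iteration operations: MFO and WOA updates (O(N * Dim) each)
# plus one fitness pass (O(N)), matching the 2 * N * Dim + N term
ops_per_iter = 2 * N * Dim + N
print(t * ops_per_iter)          # 9150000
```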

4. Experimental Studies

In this section, we further verify the performance of MEWOA. Firstly, the combination of strategies and the stability of the algorithm are analyzed. Next, on the CEC 2017 competition data set, we adopt several advanced versions of WOA for comparison. At last, the algorithm is applied to three practical engineering problems.

The related experiments are conducted under the Windows Server 2012 R2 operating system using MATLAB R2014a software, and the hardware platform is configured with an Intel (R) Xeon (R) Silver 4110 CPU (2.10 GHz) and 16 GB RAM.

4.1. Benchmark Functions and Performance Evaluation Measures

This experiment adopts the IEEE CEC 2017 competition data set as the test functions, which can effectively estimate an algorithm's ability. To ensure the experiment's fairness, the involved algorithms are evaluated under the same conditions: the overall scale and the maximal iteration numbers are set as 300000 and 150000, respectively. This ensures there is no bias or unfair setting that skews the tests toward a specific method, in line with other artificial intelligence works [110–112]. The related algorithms are evaluated 30 times on each benchmark function independently. Friedman's test [113] is a nonparametric statistical comparison test that can evaluate the experimental results. It is usually utilized to seek the differences between multiple test results and ranks all algorithms' average performance to make a statistical comparison and obtain the ARV (average ranking value). For the statistical tests, the paired Wilcoxon signed-rank test [114] is also adopted in this experiment. The Wilcoxon signed-rank test compares the performance of two algorithms. When the p value is less than 0.05, it indicates that the performance of MEWOA is statistically significantly improved compared to the other algorithm.
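Both statistical procedures are available in SciPy and can be sketched as follows; the run data here are synthetic placeholders, not the paper's results.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# hypothetical best-error samples from 30 independent runs of three optimizers
mewoa = rng.normal(1.0, 0.1, 30)
woa   = rng.normal(1.5, 0.2, 30)
other = rng.normal(1.4, 0.2, 30)

# pairwise Wilcoxon signed-rank test: MEWOA vs. one competitor
stat, p = stats.wilcoxon(mewoa, woa)
print(p < 0.05)                  # True: significant difference at the 5% level

# Friedman test across all three algorithms (samples paired by run)
chi2, p_f = stats.friedmanchisquare(mewoa, woa, other)
print(p_f < 0.05)                # True: at least one algorithm differs
```

The Friedman test only says that some difference exists; the per-pair Wilcoxon tests (and the average ranks) identify which algorithm is better, which is how the two tests are combined in the tables below.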

4.2. Impacts of Components

MEWOA is a novel swarm intelligence algorithm that introduces two mechanisms, the MFO [108] algorithm and Elite Opposition-Based Learning (EOBL) [115], into the basic WOA. To better understand the influence of each mechanism on the performance of WOA, we compare the MWOA model obtained by adding the MFO algorithm, the EWOA model obtained by adding the EOBL mechanism, and the MEWOA model in which both mechanisms are integrated at the same time. In Table 1, "M" represents the MFO algorithm, and "E" represents the EOBL mechanism. Furthermore, "1" indicates that the mechanism is used in the WOA algorithm, and "0" indicates that the corresponding mechanism is not used. Table 2 reveals the test data of the four algorithms on the CEC2017 [116] functions. This experiment is carried out under the same conditions: the dimension is set to 30, the number of particles is set to 30, and the maximum number of evaluations is set to 150,000. To obtain the average results, each algorithm is run 30 times independently.

We test the impact of different mechanisms on the algorithm on 30 benchmark functions in CEC2017. Table 2 shows the comparison results of various models. We have listed the average results and standard deviations of different algorithms running on the test function 30 times, and the optimal values are shown in bold. On 30 test functions, the improved algorithm MEWOA has achieved the optimal solutions on most functions. MEWOA has significant advantages compared with MWOA and EWOA. The experimental results reveal that the MFO algorithm and EOBL mechanism added to the WOA algorithm can effectively enhance the performance of the original WOA and enhance the ability to search for optimal solutions.

To further study the improved MEWOA algorithm’s performance, we performed the following analytical experiments on the CEC2017 functions. Figure 2 demonstrates the results of the feasibility analysis of MEWOA, where the original WOA algorithm is chosen for comparison. The graph in the first column (a) shows the three-dimensional location distribution of the MEWOA search history. The second column (b) graph reveals the two-dimensional location distribution of the MEWOA search history. The graph in the third column (c) shows the trajectory of MEWOA during the iterative process. The graph in the fourth column (d) shows the average fitness variation over the iterative process. The graph in the fifth column (e) demonstrates the convergence curve of the algorithm.

The black dots in Figure 2(b) show the algorithm’s historical search positions, and the red dots show the optimal solutions’ positions. It can be visualized from the figure that most of the black dots are clustered around the red dots, and a small portion of the black dots are scattered all over the solution space. The individual trajectories in Figure 2(c) show that the individuals fluctuate significantly in the first and middle stages and gradually stabilize in the later stages. Both data show that the algorithm can search the whole solution space as much as possible and then determine the region where the optimal solution is located for further exploitation. Figure 2(d) shows that the algorithm’s average fitness curve maintains a constant decline throughout the iterations. F1, F4, F7, and F26 fall to lower fitness values early in the iteration. It shows that the algorithm exhibits good convergence ability on these functions. In Figure 2(e), it is more evident from the convergence curves of the two algorithms that MEWOA can find solutions with better quality.

This paper also analyzes the balance and diversity of these two algorithms on the CEC 2017 functions. Figure 3 demonstrates the results of the balanced analysis of MEWOA and WOA. The red and blue curves in the figure represent the exploration effect and exploitation effect, respectively. The higher the value of the curve, the more dominant the corresponding effect. A third curve is added to visualize the relationship between the two effects more clearly. When the value of the exploration effect is higher than or equal to the exploitation effect, the curve increases. Otherwise, the curve decreases. When the curve decreases to a negative value, it is set to zero.
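The behavior of the third curve described above (rising when exploration dominates or ties, falling otherwise, and clipped at zero) can be computed as in the following sketch; the unit step size is an assumption, since the paper does not specify the scale of this curve.

```python
import numpy as np

def incremental_decremental(exploration, exploitation, step=1.0):
    """Third curve of the balance plots: increases while the exploration
    effect is higher than or equal to the exploitation effect, decreases
    otherwise, and is set to zero whenever it would go negative."""
    curve = np.zeros(len(exploration))
    level = 0.0
    for k, (explr, explt) in enumerate(zip(exploration, exploitation)):
        level += step if explr >= explt else -step
        level = max(level, 0.0)        # negative values are clipped to zero
        curve[k] = level
    return curve
```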

The general algorithm always performs a global search first and then exploits the target area locally after it has been identified. Therefore, in the algorithm's balance analysis curve, the exploration curve always starts with a higher value, and MEWOA is no exception. From Figure 3, we can see that the exploration and exploitation curves of both algorithms fluctuate considerably, and the exploitation effect occupies most of the time in both. On the selected functions, the exploration phase of MEWOA ends significantly earlier than that of WOA, and the exploitation curve keeps increasing from then on, indicating that MEWOA spends more time exploiting the target area.

Figure 4 reveals the change of the algorithm's diversity during the optimization process. From the figure, we can clearly see that the algorithm shows high population diversity at the beginning due to its random initialization. As the iterations progress, the algorithm keeps narrowing the search and reduces the population diversity. We can also see that the diversity curves of MEWOA and WOA are relatively similar. We know that both elite selection and MFO make the algorithm converge faster in the early stage, so the population diversity declines rapidly. However, the encircling mechanism, random search mechanism, and unique update method of WOA alternate between global and local search during exploration. This successfully prevents MEWOA from converging too quickly in the early stage.
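The paper does not give the exact diversity formula behind Figure 4; a common choice, assumed here, is the average Euclidean distance of the individuals from the population centroid, which starts high for a random population and shrinks as the swarm contracts around a solution.

```python
import numpy as np

def population_diversity(X):
    """Average Euclidean distance of individuals from the population centroid.

    X : (N, d) array of positions; returns a scalar diversity measure.
    """
    centroid = X.mean(axis=0)
    return np.linalg.norm(X - centroid, axis=1).mean()
```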

4.3. Scalability Test

To test the MEWOA algorithm's ability to search for the optimal solution in different dimensions, we conducted tests in 50 and 100 dimensions and compared it with six other algorithms. In the experiment, the number of particles is set to 30, the maximum number of evaluations is set to 150,000, each algorithm is independently run 30 times to take the average, and the CEC2017 test functions are selected. The related results are demonstrated in Table 3, where AVG denotes the average of the results and STD denotes the standard deviation. Compared with the other algorithms, the data show that MEWOA has excellent advantages in processing unimodal functions in 50 and 100 dimensions. The improved MEWOA possesses better performance than the other six improved WOA algorithms and also has a powerful ability to search for optimal solutions.

4.4. Comparison with Well-Established Methods

To investigate the improved MEWOA algorithm's performance and advantages, a comparative test is made with several improved WOA variants. These algorithms are very successful WOA improvements with excellent search performance. In the test, the dimension of the particles is set to 30, the number of particles is set to 30, the maximum number of evaluations is set to 150,000, each algorithm is independently run 30 times to take the average, and the CEC2017 test functions are selected. Table 4 lists the compared algorithms' results using the average and standard deviation of each algorithm over 30 runs on the different test functions. The table reveals that the averages and standard deviations obtained by the improved MEWOA are smaller than those of the other comparison algorithms.

We use the Friedman test [113], a nonparametric statistical comparative test, to rank the algorithms' performance and to find the differences between the results of multiple tests. The Friedman test ranks the average scores of the compared algorithms and then conducts further statistical comparisons to obtain the ARV (average ranking value) from the results. It can be seen from Table 4 that the enhanced algorithm in this paper possesses better performance than the other comparison algorithms on all test functions except F22, F27, and F28. The Wilcoxon signed-rank test [114] is also utilized in this paper to test whether MEWOA is superior to a comparison algorithm. When the p value is less than 0.05, MEWOA is significantly better than the comparison algorithm on the current test function. As shown in Table 4, the p value of MEWOA is less than 0.05 on most test functions, so the improved algorithm in this paper is better than the other compared algorithms on most test functions.

Convergence speed and convergence accuracy are important indicators for investigating the performance of evolutionary algorithms. We have selected six representative test functions, namely, F1, F10, F12, F18, F26, and F30, shown in Figure 5, to examine the algorithm's effectiveness and search trends more quickly and clearly. It can be seen that on test functions F1, F12, F18, and F30, the improved algorithm in this paper has still not stagnated after 150,000 evaluations, and its convergence trend is much better than that of the other comparison algorithms. In all cases, the convergence accuracy of MEWOA is higher than that of its peers.

4.5. Comparison with Representative Metaheuristic Algorithms

To better verify the performance of MEWOA, in this section, we will select some representative metaheuristics to compare with MEWOA. Among the algorithms involved in the comparison, there are classical algorithms, such as DE, as well as algorithms with good results proposed in the past years, such as MFO, and new algorithms proposed in recent years, such as SMA. The details are shown as follows.(i)HHO(ii)SMA(iii)Hunger games search (HGS) [67](iv)DE(v)MFO(vi)Cuckoo search (CS) [117](vii)Grasshopper optimization algorithm (GOA) [118]

The parameters of the experiments were set approximately the same as in the previous experiments. The dimension of the particles was set to 30, the number of particles was set to 30, and the maximum number of evaluations was set to 300,000. The test functions are from IEEE CEC2017. Table 5 lists the results of this experiment. In Table 5, AVG denotes the average value obtained by each algorithm over 30 independent tests on the corresponding function, STD denotes the corresponding standard deviation, and Rank denotes the ranking of the algorithm on each function. In addition, we used the Wilcoxon signed-rank test to calculate the p value for the algorithms, in order to determine whether the comparison results of two algorithms differ significantly. If the p value is less than 0.05, the comparison between MEWOA and the corresponding algorithm is statistically significant; otherwise, the result is not significant.

There are 30 functions in the CEC2017 test set, which are divided into 4 categories, among which, F1–F3 are Unimodal functions, F4–F10 are Multimodal functions, F11–F20 are Hybrid functions, and F21–F30 are Composition functions. In Figure 6, we have selected two functions from each class and depicted the convergence curves of MEWOA with other metaheuristic algorithms.

On the Unimodal functions, the performance of MEWOA ranks in the middle among the listed algorithms; in particular, on the F2 and F3 functions, MEWOA ranks third among all algorithms, exceeding the classical algorithm DE, so the overall results are still good.

On the Multimodal functions, MEWOA does not perform as well as on the Unimodal functions, in terms of both convergence speed and convergence accuracy. However, on F10 its results are still relatively good, and its final convergence accuracy ranks third. In addition, as the figure shows, MEWOA achieves a good convergence effect in the first half of the iterations, surpassed only by MFO.

MEWOA achieves its best results on the Hybrid functions, especially on F13, F16, F18, and F19, where it ranks second among all algorithms. The experimental results also show that on the remaining functions the gap between the convergence accuracy of MEWOA and that of the first-ranked algorithm is not very large.

Finally, on the Composition functions, the convergence graphs of the F22 and F30 functions show that the results of MEWOA are still good; on the F22 function in particular, it achieves a better solution than the other algorithms.

4.6. Comparison with Advanced Algorithms

To further verify the performance of MEWOA, this section selects some advanced algorithms to compare with MEWOA. The comparison includes champion algorithms, such as LSHADE, improved variants of DE, such as SADE, and other algorithms with strong performance, such as HCLPSO. The specific algorithms involved in the comparison are as follows.
(i) Heterogeneous comprehensive learning particle swarm optimization (HCLPSO) [119]
(ii) Self-adaptive differential evolution (SADE) [120]
(iii) Adaptive differential evolution with optional external archive (JADE) [121]
(iv) Comprehensive learning particle swarm optimizer (CLPSO) [122]
(v) Adaptive DE with success-history and linear population size reduction (LSHADE) [123]
(vi) LSHADE_cnEpSin (LSHADE_ES) [124]
(vii) Multistrategy enhanced sine cosine algorithm (MSCA) [16]

The experimental parameters were set in the same way as in the previous section; in addition, we modified the test functions to make them more diverse. We used a total of 27 functions for testing, where F1–F7 are Unimodal functions, F8–F13 are Multimodal functions, F14–F19 are Hybrid functions, and F20–F27 are Composition functions. The specific experimental results are shown in Table 6; the indicators that appear there are explained in the previous section. In addition, we chose two functions from each class as examples and depicted the convergence curves of each algorithm on the corresponding functions.

The Unimodal and Multimodal functions, F1–F13, are taken from the classical benchmark functions, while F14–F27 are taken from the CEC2014 test set. On the Unimodal and Multimodal functions, the overall performance of MEWOA ranks in the middle among all algorithms. Given that the comparison includes extremely strong algorithms such as HCLPSO and LSHADE, this overall performance is quite good. From the convergence curves in Figure 7, we can also find that MEWOA works very well on F1, F2, F11, and F12, in terms of both convergence speed and convergence accuracy; on F11 in particular, MEWOA reaches its optimal convergence accuracy within one-third of the iterations, a performance that is also worthy of recognition.

As for the Hybrid and Composition functions, the performance of MEWOA is not very good overall, but on the two functions F21 and F23 it is still relatively good, ranking in the upper-middle range. On some of the remaining functions, such as F16, F17, F19, F20, and F22, although the ranking of MEWOA is not satisfactory, the gap between it and the better-ranked algorithms is minimal.

Owing to its strong optimization capability, MEWOA can be applied to many other promising problems, such as medical science [125, 126], financial risk prediction [19, 35], and video deblurring [127–129]. In addition, MEWOA can also be used for parameter tuning of convolutional neural networks [127, 130–132]. Other potential applications include feature selection [133–136], parameter optimization of solar cells [137–141], social recommendation and QoS-aware service composition [142–144], brain function network decomposition and estimation [145, 146], image editing [147–149], image dehazing [130, 150–152], blockchain technology [153–155], prediction problems in the educational field [156–160], and computer vision [161–163], which are also interesting topics worthy of investigation in the near future.

4.7. Practical Constraint Modeling Problems

In this part, we apply the improved MEWOA algorithm to three engineering constraint problems, namely, the tension/compression spring, welded beam, and I-beam designs, to demonstrate its performance on mathematical constraint modeling problems. The objective of each mathematical model is constructed through penalty functions [164], so that infeasible solutions are automatically discarded by the heuristic algorithm: there is no need to repair an infeasible solution explicitly; the search simply keeps iterating, with each iteration generating new candidate points, until feasible solutions are found. The models constructed with penalty functions, combined with the MEWOA algorithm, are thus used to handle the three mathematical modeling problems.
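The penalty-function construction described above can be sketched as follows. This is a minimal quadratic exterior penalty; the weight `1e6` and the squared violation form are common choices for illustration, not necessarily those of the cited work [164].

```python
def penalized_objective(f, constraints, penalty_weight=1e6):
    """Wrap an objective f(x) so that violating any inequality
    constraint g_i(x) <= 0 adds a large quadratic penalty. The
    metaheuristic then minimizes the wrapped function with no
    explicit constraint handling: infeasible candidates score so
    badly that the search discards them automatically."""
    def wrapped(x):
        violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
        return f(x) + penalty_weight * violation
    return wrapped
```

For example, minimizing x^2 subject to x >= 1 (i.e., g(x) = 1 - x <= 0) leaves feasible points untouched, while an infeasible point such as x = 0 receives a penalty of 1e6 and is effectively excluded from the search.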

4.7.1. Tension/Compression Spring Design Problem

The tension/compression spring design problem aims to minimize the spring's weight [165–167]. The model is iterated through the MEWOA algorithm to optimize three design variables: the wire diameter (d), the mean coil diameter (D), and the number of active coils (N). The mathematical model is illustrated as follows:

Consider $\vec{x} = [x_1, x_2, x_3] = [d, D, N]$.

Objective function:
$$f(\vec{x}) = (x_3 + 2)\,x_2 x_1^2$$

subject to
$$g_1(\vec{x}) = 1 - \frac{x_2^3 x_3}{71785\,x_1^4} \le 0$$
$$g_2(\vec{x}) = \frac{4x_2^2 - x_1 x_2}{12566\,(x_2 x_1^3 - x_1^4)} + \frac{1}{5108\,x_1^2} - 1 \le 0$$
$$g_3(\vec{x}) = 1 - \frac{140.45\,x_1}{x_2^2 x_3} \le 0$$
$$g_4(\vec{x}) = \frac{x_1 + x_2}{1.5} - 1 \le 0$$

Variable ranges:
$$0.05 \le x_1 \le 2.00, \quad 0.25 \le x_2 \le 1.30, \quad 2.00 \le x_3 \le 15.00$$

Some scholars have used mathematical or metaheuristic techniques to solve this model. He and Wang [168] used PSO to handle the tension/compression spring design problem. Coello Coello [169] utilized genetic algorithms to settle the problem, and the final minimum weight was 0.0127048. Further algorithms applied to this problem include the improved harmony search (IHS) algorithm [170] and the RO algorithm [171]. The experimental results show that the weight of the model obtained by MEWOA is 0.0126788, as shown in Table 7, which is smaller than the minimum values obtained by the other methods.
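To make the modeling pipeline concrete, the sketch below plugs the standard spring formulation from the literature into a quadratic exterior penalty and searches it with plain random sampling. The random search merely stands in for MEWOA (it will not reach 0.0126788), and the constants are those of the commonly cited model, so this is an illustrative sketch rather than the paper's exact setup.

```python
import random

def spring_weight(x):
    d, D, N = x                  # wire diameter, coil diameter, active coils
    return (N + 2) * D * d ** 2

def spring_constraints(x):       # all g_i(x) <= 0 when feasible
    d, D, N = x
    return [
        1 - (D ** 3 * N) / (71785 * d ** 4),
        (4 * D ** 2 - d * D) / (12566 * (D * d ** 3 - d ** 4))
        + 1 / (5108 * d ** 2) - 1,
        1 - 140.45 * d / (D ** 2 * N),
        (D + d) / 1.5 - 1,
    ]

def penalized(x, lam=1e6):       # quadratic exterior penalty
    return spring_weight(x) + lam * sum(max(0.0, g) ** 2
                                        for g in spring_constraints(x))

random.seed(42)
BOUNDS = [(0.05, 2.0), (0.25, 1.3), (2.0, 15.0)]
# Pure random sampling over the box; a real metaheuristic would
# instead evolve a population toward low penalized values.
best = min((tuple(random.uniform(lo, hi) for lo, hi in BOUNDS)
            for _ in range(100000)), key=penalized)
```

Even this crude search lands on a feasible design with a weight well under 0.05, but it stalls far from the optimum, which illustrates why a stronger optimizer such as MEWOA is needed to approach 0.0126788.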

4.7.2. Welded Beam Design Problem

The aim of the welded beam design model [169] is to minimize the welded beam's manufacturing cost. The model includes the following four constrained quantities: the critical buckling load ($P_c$), the shear stress (τ), the bending stress in the beam (θ), and the deflection (δ). The bar height (t), the weld thickness (h), the bar thickness (b), and the bar length (l) are the design variables that directly affect the manufacturing cost of the welded beam. The mathematical model is as follows:

Consider

Objective

subject to

Variable ranges:

where
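For reference, in the standard formulation of this problem from the literature, the manufacturing cost to be minimized is

$$f(h, l, t, b) = 1.10471\,h^2 l + 0.04811\,t b\,(14.0 + l),$$

subject to seven inequality constraints that bound the shear stress $\tau$, the bending stress $\theta$, the deflection $\delta$, and the buckling load $P_c$, and that enforce geometric limits, with variable ranges $0.1 \le h, b \le 2$ and $0.1 \le l, t \le 10$.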

Some scholars have used mathematical or metaheuristic techniques to solve this model. Kaveh and Khayatazad [171] adopted RO to minimize the manufacturing cost of the model. The enhanced HS variant IHS [170] was also used to calculate the model's manufacturing cost. Ragsdell and Phillips [172] used the Davidon-Fletcher-Powell method, Richardson's random method, and the Simplex method to find the model's minimum manufacturing cost. As shown in Table 8, when the parameters were set to 0.1885, 3.471, 9.11343, and 0.206754, MEWOA obtained a minimum welded beam manufacturing cost of 1.720001, which proves that MEWOA performs very well on this engineering problem.

4.7.3. I-Beam Design Problem

We used the MEWOA method to solve the I-beam design problem by optimizing four parameters, namely, the I-beam's length, two thicknesses, and height, to minimize the vertical deflection. The mathematical model is as follows:

Consider

Objective

subject to

Variable range:
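For reference, the standard formulation of this problem in the literature (with the notation assumed here: flange width $b$, section height $h$, web thickness $t_w$, and flange thickness $t_f$) minimizes the vertical deflection

$$f(b, h, t_w, t_f) = \frac{5000}{\dfrac{t_w (h - 2t_f)^3}{12} + \dfrac{b\,t_f^3}{6} + 2\,b\,t_f \left(\dfrac{h - t_f}{2}\right)^2},$$

subject to a cross-sectional area limit $2 b t_f + t_w (h - 2t_f) \le 300\ \mathrm{cm}^2$ and a bending-stress limit, with $10 \le h \le 80$, $10 \le b \le 50$, and $0.9 \le t_w, t_f \le 5$.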

Wang used the ARSM method [174] to solve the model and obtained a minimum vertical deflection of 0.0157; the improved method IARSM reduced this to 0.0131. Gandomi et al. [175] utilized CS to decrease the minimum vertical deflection to 0.0130747. Cheng and Prayogo [176] used SOS to obtain a deflection value of 0.0130741. Table 9 demonstrates that the vertical deflection of the I-beam obtained by the MEWOA algorithm is 0.0130741, which matches the best comparison result (SOS) and is better than those of the other methods.

5. Conclusions and Future Works

This article presented an enhanced whale optimization algorithm, MEWOA, which integrates an elite strategy and the spiral motion mechanism of the MFO algorithm to improve the balance between exploration and exploitation in the original WOA. Firstly, MEWOA was evaluated against the basic algorithms in different dimensions to verify its effectiveness. Moreover, the devised MEWOA was also compared with representative metaheuristic algorithms and advanced algorithms to demonstrate its superiority. The experimental results proved that MEWOA achieves much better performance than the original algorithm, and its convergence accuracy and scalability are also greatly improved. MEWOA can also effectively solve practical engineering problems. The results have shown that MEWOA achieves a good balance between exploration and exploitation and solves constrained problems effectively.

In future work, we plan to improve WOA more deeply, starting from its underlying principles. MEWOA can also be extended to a multiobjective or binary version for other optimization tasks.

Data Availability

The data involved in this study are all public data, which can be downloaded through public channels.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research was funded by the Foundation of Jilin Educational Committee (JJKH20210750KJ), Guangdong Natural Science Foundation (2018A030313339), Scientific Research Team Project of Shenzhen Institute of Information Technology (SZIIT2019KJ022), and Taif University Researchers Supporting Project Number (TURSP-2020/125), Taif University, Taif, Saudi Arabia. Thanks are due to the efforts of Ali Asghar Heidari (https://aliasgharheidari.com) during the preparation of this research.