Abstract

It has been observed that the structure of the whale optimization algorithm (WOA) gives it a strong exploitation capability, but the algorithm easily suffers from premature convergence. Hybrid metaheuristics are among the most interesting recent trends for improving the performance of WOA. In this paper, a hybrid algorithm framework with learning and complementary fusion features for WOA is designed, called hWOAlf. First, WOA is integrated with complementary feature operators to enhance its exploration capability. Second, the proposed algorithm framework adopts a learning parameter, adjusted by an adaptive adjustment operator, to replace the random parameter p. To further verify the efficiency of hWOAlf, the DE/rand/1 operator of differential evolution (DE) and the mutation operator of the backtracking search optimization algorithm (BSA) are embedded into WOA, respectively, to form two new algorithms called WOA-DE and WOA-BSA under the proposed framework. Twenty-three benchmark functions and six engineering design problems are employed to test the performance of WOA-DE and WOA-BSA. Experimental results show that WOA-DE and WOA-BSA are competitive compared with some state-of-the-art algorithms.

1. Introduction

Nowadays, most function optimization problems and engineering design problems are becoming so complicated that traditional methods cannot provide accurate or even satisfactory solutions. The emergence of metaheuristic optimization algorithms has changed this situation and attracted the attention of many researchers because of their flexibility, derivative-free mechanisms, and local optima avoidance. Generally, metaheuristic algorithms can be divided into two main categories. The first category is evolutionary algorithms (EAs), a family of optimization algorithms inspired by the evolutionary procedures or intelligent behaviors of organisms, such as the genetic algorithm (GA) [1], differential evolution (DE) algorithm [2], biogeography-based optimization (BBO) [3], backtracking search optimization algorithm (BSA) [4], grey prediction evolution algorithm (GPEA) [5], and so on. The second category is swarm intelligence algorithms (SIs), which mimic the social behaviors of groups of animals. The most popular SI is particle swarm optimization (PSO) [6]. Other popular algorithms are artificial bee colony (ABC) [7], cuckoo search (CS) [8], grey wolf optimizer (GWO) [9], and so on.

The whale optimization algorithm, proposed by Mirjalili and Lewis in 2016 [10], is one of the most recent SIs. Although it has strong local search capability due to its bubble-net attacking method, it easily falls into local optima and is weak in exploration. Hybrid metaheuristic algorithms, one of the most efficient means of enhancing an algorithm's performance, have been shown to be more effective than using a single algorithm alone. Since WOA was proposed, it has been widely applied in various fields by combining it with other algorithms. One of the most widely used strategies to construct a hybrid WOA is to simply hybridize two algorithms in a certain way. For example, Bentouati et al. [11] designed a hybrid of the whale optimization algorithm and a pattern search technique called WOA-PS and tested it on the optimal power flow problem. Xiong et al. [12] put forward a hybrid algorithm called DE/WOA, which combines the exploration of DE with the exploitation of WOA for extracting accurate parameters of PV models. Luo et al. [13] proposed the MDE-WOA algorithm, which combines a modified DEGL algorithm and WOA to solve global optimization problems. Mafarja et al. [14] introduced a hybrid of the whale optimization algorithm and simulated annealing, in which two hybridization models are considered for solving the feature selection problem.

Another strategy for constructing a hybrid WOA is to combine operators with complementary fusion features into WOA to form a superior algorithm. Trivedi et al. [15] hybridized particle swarm optimization with the whale optimization algorithm, called PSO-WOA, to handle global numerical function optimization; the main idea is to combine the best characteristics of both algorithms to form a novel mutation operator. Korashy et al. [16] hybridized the whale optimization algorithm and the grey wolf optimizer, called WOA-GWO, for optimal coordination of directional overcurrent relays. Xu et al. [17] proposed a memetic WOA (MWOA) by incorporating a chaotic local search into WOA to enhance its exploitation ability. Kaveh et al. [18] attempted to enhance the original formulation of WOA by hybridizing it with some concepts of colliding bodies optimization (CBO); the new method, known as the WOA-CBO algorithm, is applied to the building site layout planning problem. Khalilpourazari et al. [19] introduced a novel hybrid algorithm, termed sine-cosine WOA (SCWOA), for the parameter optimization problem of the multipass milling process to decrease total production time. Singh and Hachimi [20] proposed a new hybrid whale optimization algorithm with mean grey wolf optimizer (WOA-MGWO) for global optimization. Abdel-Basset et al. [21] proposed a new algorithm that combines WOA with a local search strategy to defeat the permutation flow shop scheduling problem. Laskar et al. [22] proposed a new population-based hybrid metaheuristic called the whale-PSO algorithm (HWPSO) to manage complex optimization problems.

In this paper, a hybrid algorithm framework named hWOAlf is proposed by combining learning and complementary fusion features into WOA. Specifically, the proposed framework adopts a learning parameter, adjusted by an adaptive adjustment operator, instead of the random parameter p; that is, the selection of the search equation depends on a process of learning from the subpopulations. Additionally, to improve the exploration of WOA, operators with complementary fusion features are employed to balance exploration and exploitation capacity. This paper adopts the DE/rand/1 operator of differential evolution (DE) and the mutation operator of the backtracking search optimization algorithm (BSA) to form two new algorithms, respectively called WOA-DE and WOA-BSA. Then, twenty-three benchmark functions and six engineering design problems are employed to test the performance of WOA-DE and WOA-BSA. Experimental results show that WOA-DE and WOA-BSA are competitive compared with some state-of-the-art algorithms.

The main contributions of this paper can be summarized as follows:
(i) Proposing a hybrid algorithm framework for WOA, named hWOAlf: first, designing a learning parameter adjusted by an adaptive adjustment operator; second, combining complementary fusion feature operators into WOA.
(ii) Forming two new algorithms, WOA-DE and WOA-BSA, in which the DE/rand/1 operator of DE and the mutation operator of BSA are incorporated into WOA, respectively.

The rest of the paper is organized as follows. Section 2 gives a brief review of WOA, DE, and BSA. The proposed hybrid algorithm framework is presented in Section 3. In Section 4, the experimental results and analysis are given. Finally, the conclusions are shown in Section 5.

2. Preliminary

2.1. Whale Optimization Algorithm (WOA)

The whale optimization algorithm is a swarm-based metaheuristic optimization algorithm, proposed by Mirjalili and Lewis in 2016 [10], which mimics the intelligent foraging behavior of humpback whales and is inspired by their bubble-net hunting strategy. WOA includes three operators to simulate the search for prey, the encircling of prey, and the bubble-net foraging behavior of humpback whales. Among them, encircling prey and the spiral bubble-net attacking method constitute the exploitation phase, while searching for prey is the exploration phase. The mathematical models of the two phases are presented below.

2.1.1. Stage 1: Exploitation Phase (Encircling Prey/Bubble-Net Attacking Method)

During the exploitation phase, humpback whales update their position according to two mechanisms: the shrinking encircling mechanism (encircling prey) and the spiral updating position (spiral bubble-net attacking method). The shrinking encircling mechanism is represented by the following equations:

D = |C · X*(t) − X(t)|,
X(t + 1) = X*(t) − A · D,

where t represents the current iteration, X*(t) is the position vector of the best solution obtained so far, and X(t) is the position vector of the current whale. A and C are coefficient vectors calculated as follows:

A = 2a · r − a,
C = 2r,

where a is linearly decreased from 2 to 0 over the course of iterations (in both the exploration and exploitation phases) and r is a random vector in [0, 1].

Then, the spiral updating position is mathematically expressed by the following equation:

X(t + 1) = D′ · e^(bl) · cos(2πl) + X*(t),

where D′ = |X*(t) − X(t)| represents the distance of the ith humpback whale to the best solution obtained so far, b is a constant defining the shape of the logarithmic spiral, l is a random number in [−1, 1], and · denotes element-by-element multiplication.

When whales capture their prey, it is worth noting that the two mechanisms mentioned above, that is, the shrinking encircling mechanism and the spiral updating position, are executed simultaneously. In order to imitate this behavior, a probability of 50% is assumed for choosing between them. The mathematical model is as follows:

X(t + 1) = X*(t) − A · D, if p < 0.5,
X(t + 1) = D′ · e^(bl) · cos(2πl) + X*(t), if p ≥ 0.5,

where p is a random number in [0, 1].

2.1.2. Stage 2: Exploration Phase (Searching for Prey)

In this stage, in order to enhance the exploration capability of WOA, the position of a whale is updated according to a randomly chosen whale instead of the best whale found so far. Therefore, a coefficient vector A with random values greater than 1 or less than −1 is used to force the whale to move far away from the best known whale. The model can be mathematically expressed as follows:

D = |C · X_rand − X(t)|,
X(t + 1) = X_rand − A · D,

where X_rand is a random position vector selected from the current population.
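The three update rules above can be combined into a single per-whale step. The following is a minimal NumPy sketch for illustration only, not the authors' implementation; the function name, the per-dimension treatment of the coefficient vector A, and the all-dimensions check on |A| are assumptions:

```python
import numpy as np

def woa_update(X, i, X_best, a, b=1.0, rng=np.random.default_rng()):
    """One sketched WOA position update for whale i.

    X: (N, D) population, X_best: (D,) best-so-far position,
    a: scalar linearly decreased from 2 to 0 over the iterations.
    """
    D_dim = X.shape[1]
    r = rng.random(D_dim)
    A = 2 * a * r - a                    # coefficient vector A = 2a.r - a
    C = 2 * rng.random(D_dim)            # coefficient vector C = 2r
    p = rng.random()
    if p < 0.5:
        if np.all(np.abs(A) < 1):        # exploitation: shrinking encircling
            D = np.abs(C * X_best - X[i])
            return X_best - A * D
        X_rand = X[rng.integers(len(X))] # exploration: move towards a random whale
        D = np.abs(C * X_rand - X[i])
        return X_rand - A * D
    l = rng.uniform(-1, 1)               # spiral bubble-net attacking method
    D_prime = np.abs(X_best - X[i])
    return D_prime * np.exp(b * l) * np.cos(2 * np.pi * l) + X_best
```

In a full run, this update would be applied to every whale in each iteration, with `a` recomputed before each generation.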

2.2. Differential Evolution (DE)

The differential evolution (DE) algorithm is a population-based approach first proposed by Storn and Price in 1997 [2] and proven to be one of the most promising global search methods. After generating the initial population, the population is updated by looping over mutation, crossover, and selection operations. This paper adopts a classic mutation operation called DE/rand/1. In brief, the DE procedure can be summarized in the following four steps.

2.2.1. Step 1: Initialization

At the beginning, an initial population is generated by means of a random number distribution. The jth dimension of the ith individual is initialized according to

x_{i,j} = lb_j + rand(0, 1) · (ub_j − lb_j), i = 1, 2, …, N, j = 1, 2, …, D,

where N is the population size, D is the dimension of an individual, rand(0, 1) denotes a uniformly distributed random number in the [0, 1] range, and ub_j and lb_j denote the upper and lower bounds of the jth dimension, respectively.
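The initialization formula amounts to a uniform draw in the box defined by the bounds; a short NumPy sketch (the function name is an illustrative assumption):

```python
import numpy as np

def de_init(N, D, lb, ub, rng=np.random.default_rng()):
    """x_{i,j} = lb_j + rand(0,1) * (ub_j - lb_j) for an (N, D) population."""
    lb = np.asarray(lb, dtype=float)
    ub = np.asarray(ub, dtype=float)
    return lb + rng.random((N, D)) * (ub - lb)
```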

2.2.2. Step 2: Mutation

After the initialization, DE enters its main loop. Each target individual x_i in the population is used to generate a mutant individual v_i via a mutation operator; for DE/rand/1 the generated v_i can be represented as

v_i = x_{r1} + F · (x_{r2} − x_{r3}),

where x_{r1}, x_{r2}, and x_{r3} are randomly selected from the current population (with r1 ≠ r2 ≠ r3 ≠ i) and F is the mutation control parameter used to scale the difference vector. Similarly, five other frequently used mutation operators are as follows:
(1) "DE/rand/2": v_i = x_{r1} + F · (x_{r2} − x_{r3}) + F · (x_{r4} − x_{r5})
(2) "DE/best/1": v_i = x_{best} + F · (x_{r1} − x_{r2})
(3) "DE/best/2": v_i = x_{best} + F · (x_{r1} − x_{r2}) + F · (x_{r3} − x_{r4})
(4) "DE/rand-to-best/1": v_i = x_{r1} + F · (x_{best} − x_{r2}) + F · (x_{r3} − x_{r4})
(5) "DE/current-to-best/1": v_i = x_i + F · (x_{best} − x_i) + F · (x_{r1} − x_{r2})
where x_{best} is the individual that has the best fitness function value.
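As an illustration, DE/rand/1 and DE/best/1 can be sketched as follows. This is a minimal NumPy sketch under the usual convention that the random indices are mutually distinct and differ from i; the function names are assumptions:

```python
import numpy as np

def de_rand_1(pop, i, F, rng):
    """DE/rand/1: v_i = x_r1 + F * (x_r2 - x_r3)."""
    idx = [j for j in range(len(pop)) if j != i]
    r1, r2, r3 = rng.choice(idx, size=3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])

def de_best_1(pop, i, F, best, rng):
    """DE/best/1: v_i = x_best + F * (x_r1 - x_r2)."""
    idx = [j for j in range(len(pop)) if j != i]
    r1, r2 = rng.choice(idx, size=2, replace=False)
    return pop[best] + F * (pop[r1] - pop[r2])
```

The remaining variants differ only in which base vector and how many difference vectors they combine.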

2.2.3. Step 3: Crossover

Then, a binomial crossover operation is applied to the target individual x_i and the mutant individual v_i to generate a trial individual u_i as follows:

u_{i,j} = v_{i,j}, if rand(0, 1) ≤ CR or j = j_rand,
u_{i,j} = x_{i,j}, otherwise,

where j_rand is a randomly chosen integer in [1, D] and CR is called the crossover control parameter.

2.2.4. Step 4: Selection

Finally, the greedy selection operation is used to select the better of the target individual and the trial individual for the next generation. This operation is mainly based on a comparison of fitness values and is performed as shown below:

x_i(t + 1) = u_i, if f(u_i) ≤ f(x_i),
x_i(t + 1) = x_i, otherwise,

where f(·) is the fitness function.
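Putting the four steps together, one DE generation (DE/rand/1 mutation, binomial crossover, greedy selection) can be sketched as below; a minimal illustration, with function and parameter names assumed:

```python
import numpy as np

def de_generation(pop, fitness, f, F=0.5, CR=0.9, rng=np.random.default_rng()):
    """One DE generation over an (N, D) population, minimizing f."""
    N, D = pop.shape
    for i in range(N):
        # mutation: DE/rand/1
        idx = [j for j in range(N) if j != i]
        r1, r2, r3 = rng.choice(idx, size=3, replace=False)
        v = pop[r1] + F * (pop[r2] - pop[r3])
        # binomial crossover with a guaranteed mutant gene at j_rand
        j_rand = rng.integers(D)
        mask = rng.random(D) <= CR
        mask[j_rand] = True
        u = np.where(mask, v, pop[i])
        # greedy selection
        fu = f(u)
        if fu <= fitness[i]:
            pop[i], fitness[i] = u, fu
    return pop, fitness
```

Because of the greedy selection, the fitness of each individual never worsens from one generation to the next.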

2.3. Backtracking Search Optimization Algorithm (BSA)

The backtracking search optimization algorithm (BSA) is a population-based metaheuristic algorithm proposed by Pinar Civicioglu in 2013 [4]. Similar to most metaheuristic algorithms, it achieves optimization through the mutation, crossover, and selection of a population. What makes BSA unique is its ability to remember historical populations, which enables it to benefit from previous generations by mining historical information. The original BSA contains five steps, called initialization, selection-I, mutation, crossover, and selection-II, which are briefly introduced as follows.

2.3.1. Step 1: Initialization

BSA initializes the population P and the historical population oldP with the following formulas, respectively:

P_{i,j} ~ U(lb_j, ub_j),
oldP_{i,j} ~ U(lb_j, ub_j),

where i = 1, 2, …, N and j = 1, 2, …, D. N and D represent the population size and the problem dimension, respectively, U is the uniform distribution, and lb and ub are the lower and upper boundaries of the variables.

2.3.2. Step 2: Selection-I

Firstly, the historical population oldP is updated according to equation (17). Then, the locations of individuals in oldP are randomly shuffled as shown in equation (18):

if a < b then oldP := P, where a, b ~ U(0, 1), (17)
oldP := oldP(randperm(N), :), (18)

where randperm(N) performs a random permutation of the integers from 1 to N.

2.3.3. Step 3: Mutation

The mutation operator of BSA generates an initial trial population that takes advantage of both the historical and the current information. The mutation operation can be expressed as follows:

M = P + F · (oldP − P),

where F is a control parameter that scales the search direction (in the original BSA, F = 3 · randn, with randn ~ N(0, 1)). This operation gives the algorithm powerful global search ability.
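Selection-I and the mutation step can be sketched together as follows. The probabilistic redefinition of the historical population and the F = 3 · randn scale factor follow the original BSA description; the function name is an assumption:

```python
import numpy as np

def bsa_mutation(P, oldP, rng=np.random.default_rng()):
    """BSA selection-I followed by mutation: M = P + F * (oldP - P)."""
    a, b = rng.random(2)
    if a < b:                                  # selection-I: redefine history
        oldP = P.copy()
    oldP = oldP[rng.permutation(len(oldP))]    # shuffle historical individuals
    F = 3 * rng.standard_normal()              # scale factor F = 3 * randn
    M = P + F * (oldP - P)
    return M, oldP
```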

2.3.4. Step 4: Crossover

BSA’s crossover process has two steps. The first step generates a binary integer-valued matrix (map) of size N × D. The second step determines the positions of the crossover elements in population P according to the generated matrix and then exchanges those elements of P with the corresponding elements of the mutant population M to obtain the final trial population T.

After the crossover operation, some individuals of the trial population T may overflow the allowed search space limits. Such individuals are regenerated according to equation (15).
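The two-step crossover can be sketched as below. The two map-generation strategies (a random subset of dimensions versus a single random dimension) follow the original BSA description; the `mixrate` parameter name and the convention that a map entry of 1 selects the mutant element are assumptions of this sketch:

```python
import numpy as np

def bsa_crossover(P, M, mixrate=1.0, rng=np.random.default_rng()):
    """Build a binary N x D map, then take elements from M where the map
    is set and from P elsewhere, yielding the trial population T."""
    N, D = P.shape
    map_ = np.zeros((N, D), dtype=bool)
    for i in range(N):
        if rng.random() < rng.random():
            # strategy 1: crossover a random subset of dimensions
            k = max(1, int(np.ceil(mixrate * rng.random() * D)))
            map_[i, rng.choice(D, size=k, replace=False)] = True
        else:
            # strategy 2: crossover a single random dimension
            map_[i, rng.integers(D)] = True
    return np.where(map_, M, P)
```

Each trial individual therefore inherits at least one element from the mutant population, which keeps the crossover from degenerating into a copy of P.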

2.3.5. Step 5: Selection-II

The greedy selection mechanism is employed to carry the most promising trial individuals into the next generation. The fitness values of the target and trial individuals are compared; if the fitness value of the trial individual is less than that of the target individual, the trial individual is accepted into the next generation; otherwise, the target individual is retained in the population. The selection operator is defined as

P_i = T_i, if f(T_i) < f(P_i),
P_i = P_i, otherwise,

where f(·) is the objective function value of an individual.

3. Proposed Hybrid Framework for WOA: hWOAlf

WOA, which imitates the hunting behavior of humpback whales, is one of the most recent SIs. It includes three stages: searching for prey, encircling prey, and hunting prey. The basic WOA has the disadvantage of weak exploration capability. This paper proposes a hybrid algorithm framework (hWOAlf) to improve the global search capability of WOA. Operators with complementary fusion features, the DE/rand/1 operator of DE and the mutation operator of BSA, are embedded into WOA to form two new algorithms, WOA-DE and WOA-BSA, respectively. In addition, a learning parameter adjusted by an adaptive adjustment operator replaces the random parameter p of the original WOA, which continuously balances exploration and exploitation through a learning mechanism. The proposed framework consists of two main parts, the hybrid framework and the learning parameter mechanism, explained below.

3.1. Proposed Hybrid Framework for WOA

The mutation operators of equations (1) and (3) in WOA have strong exploitation capability, but only the mutation operator of equation (5) has strong exploration capability. This makes WOA stronger in exploitation and weaker in exploration. In order to balance the exploration and exploitation capabilities of WOA, a generic framework flowchart was developed and is shown in Figure 1. Within this framework, the BSA mutation operator (equation (7)) and the DE/rand/1 mutation operator (equation (19)) are used to improve the exploration capability of WOA.

In order to have a clearer understanding of the proposed hWOAlf, the four mutation operators used in this framework are summarized as follows:
(i) Operator 1: equation (22)
(ii) Operator 2: equation (23), which uses a randomly selected individual
(iii) Operator 3: equation (24)
(iv) Operator 4: equation (25)
where the output of each operator is a trial individual.

3.2. Proposed Learning Parameter Mechanism

In the basic WOA, p is a random number. In this paper, the parameter is instead designed as a learning parameter. When the population has fully evolved in the current iteration, the following four numbers are recorded: the number of individuals mutated by operator 1 and operator 2, together with the number of those individuals that successfully enter the next generation; and, similarly, the number of individuals mutated by operator 3 and operator 4, together with the number of those individuals that are better than their parents after mutation. The learning parameter is then computed from these four counts (equation (26)).

The initial value of the learning parameter is 0.5, and it is updated after each iteration. In this paper, it is designed to control the execution frequency of the search operators in equations (22) and (23) versus those in equations (24) and (25). In the process of evolution, when the learning parameter is less than a random number, operator 1 and operator 2 are used to mutate; otherwise, operator 3 and operator 4 are selected. This continuous adaptive process enables the algorithm to choose the most suitable search strategy during evolution.
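Equation (26) itself is not reproduced in this excerpt, so the sketch below gives one plausible reading of the update, in which the learning parameter is the normalized success rate of operators 1 and 2 versus operators 3 and 4; the names m1, s1, m2, s2 and the exact formula are assumptions, not the paper's definition:

```python
def update_lp(m1, s1, m2, s2, eps=1e-12):
    """Hypothetical learning-parameter update from the four recorded counts.

    m1/s1: trials and successes of operators 1-2;
    m2/s2: trials and successes of operators 3-4.
    """
    rate12 = s1 / (m1 + eps)   # success rate of operators 1 and 2
    rate34 = s2 / (m2 + eps)   # success rate of operators 3 and 4
    if rate12 + rate34 == 0:
        return 0.5             # no information yet: keep the initial value
    return rate12 / (rate12 + rate34)
```

Under this reading, a branch that produces more improving offspring is selected more often in later iterations, which is the adaptive behavior the text describes.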

3.3. The Pseudocode of hWOAlf

For clarity, the pseudocode of hWOAlf is given in Algorithm 1.

Input:
Output:
(1)// rand ~ U(0, 1); randperm(N) performs a random permutation of the integers 1 : N; normrnd(u, d) returns a random number from a normal distribution with mean u and standard deviation d
(2)Initialization: for i = 1 to N do
(3)  for j = 1 to D do
(4)    % Generate an initial whales population
(5)  end
(6)end
(7)% Calculate the fitness of each search agent
(8) = the best search agent
(9) = the fitness of the best search agent
(10) and
(11)while do
(12)   Mutation
(13)   %% Update the Position of search agents
(14)   for i = 1 to N do
(15)     Update a, A, C and l
(16)     if then
(17)        if then
(18)          the individual is mutated by equation (22)
(19)        else
(20)          Select a random individual ; the individual is mutated by equation (23)
(21)        end
(22)     else
(23)        if then
(24)          the individual is mutated by equation (24)
(25)        else
(26)          the individual is mutated by equation (25)
(27)        end
(28)     end
(29)   end
(30)   %% Checking allowable range
(31)   for i = 1 to N do
(32)     for j = 1 to D do
(33)       if or then
(34)        
(35)       end
(36)     end
(37)   end
(38)   %% Update the leader
(39)   for i = 1 to N do
(40)     % Calculate objective function values of T
(41)     if then
(42)       
(43)       
(44)     end
(45)   end
(46)   %% Update the learning parameter
(47)   the learning parameter is updated by equation (26)
(48)   
(49)end

4. Experimental Results and Discussion

In this section, to verify the validity of the proposed hybrid algorithm framework, WOA-DE and WOA-BSA are demonstrated on twenty-three benchmark functions and six engineering design problems. Experimental results and comparisons with other representative algorithms show that the proposed hWOAlf is effective and efficient. All experiments are implemented in the software of MATLAB 2012a. The computer system running the program is Intel(R) Core(TM) i5-3230M CPU @ 2.60 GHz and 4 GB RAM.

4.1. Twenty-Three Benchmark Functions

In order to evaluate the performance of the proposed algorithm framework, WOA-DE and WOA-BSA are compared with some EAs and SIs, i.e., DE, BBO, BSA, PSO, HS, GSA, and WOA, on the twenty-three benchmark functions listed in Tables 1–3. These functions can be divided into three groups: unimodal, multimodal, and fixed-dimension multimodal. The unimodal functions have only one global optimum, so they are used to investigate exploitation capability, while the remaining functions, which have two or more local optima, are utilized to evaluate exploration ability. In addition, in the tables, D denotes the number of function dimensions, and each function's search space boundary and theoretical optimum are also given. In order to have a fair comparison, each experiment is independently run 30 times for all the compared algorithms, and the results of every run are recorded. The maximal iteration number, set to 1000, is used as the termination condition for all algorithms, and the population size is set to 30.

4.2. Comparison Results of Twenty-Three Benchmark Functions

Tables 4 and 5 tabulate the Best, Mean, and Std results of twenty-three benchmark functions for WOA-DE, WOA-BSA, and other seven compared algorithms. The results of WOA-DE and WOA-BSA are shown in bold.

4.2.1. Exploitation Capability

Functions F1–F7 can be used to evaluate the exploitation capability of the proposed algorithms, since each of them has only one global optimum. The results in Table 4 show that, with respect to the Best metric, WOA-DE and WOA-BSA rank first and second, respectively, on these functions among all compared algorithms, which means that the results obtained by WOA-DE and WOA-BSA are closer to the theoretical optimal solutions on these functions and that both algorithms have very strong exploitation capacity compared with the other algorithms. In terms of the Mean and Std indices, WOA-BSA outperforms the other algorithms, while WOA-DE is not the best but is better than the other six algorithms.

4.2.2. Exploration Capability

Functions F8–F23 are multimodal functions. They include many local optima, and the number of local optima increases exponentially with the dimension D, so these functions are considered very useful for evaluating the exploration capabilities of algorithms. The results for WOA-DE and WOA-BSA on these test functions are shown in Table 5, where F8–F13 are multimodal functions and F14–F23 are fixed-dimension multimodal functions. The results in Table 5 show that WOA-DE and WOA-BSA have no advantage in terms of Best, Mean, and Std compared with the other six algorithms (DE, BBO, BSA, PSO, HS, and GSA). However, both of them are better than the original WOA, which indicates that the proposed hybrid algorithm framework can effectively improve the performance of the original algorithm.

Figure 2 presents six convergence graphs obtained by the nine algorithms for F1, F5, and F7 (unimodal functions), F10 and F11 (multimodal functions), and F15 (hybrid composition function). As can be seen from Figure 2, WOA-BSA has a faster convergence speed on F1, F5, F7, F10, F11, and F15, and WOA-DE also converges quickly. Based on the above results, WOA-DE and WOA-BSA have competitive performance, although they are not better than the compared algorithms in all aspects.

4.3. Engineering Design Problems

In order to further examine the proposed hybrid framework, it is applied to six well-known engineering design problems with mixed variables taken from the optimization literature. The description of these problems, covering the three-bar truss, pressure vessel, speed reducer, gear train, cantilever beam, and I-beam, is given in Appendix A. Table 6 shows the parameter settings of the six engineering problems according to references [23, 24], where N, T, and D represent the population size, the maximum number of iterations, and the dimension of the optimization problem, respectively. However, there are a number of complex constraints in each problem, which makes it challenging to find the optimal solution satisfying all of them, so a constraint-handling method is necessary, such as penalty functions, feasibility and dominance rules, stochastic ranking, multiobjective concepts, ε-constrained methods, or Deb's heuristic constraint-handling method [25, 26]. This paper adopts Deb's heuristic constraint-handling method. Each experiment is independently run 30 times, and the reported results include the worst value (Worst), the mean value (Mean), the best function value (Best), the standard deviation (Std), and the number of function evaluations (FEs).
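Deb's heuristic constraint-handling method reduces to three pairwise comparison rules: a feasible solution beats an infeasible one; between two feasible solutions the lower objective wins; between two infeasible solutions the smaller total constraint violation wins. A short sketch (the function name and the violation-aggregation convention are assumptions):

```python
def deb_better(f1, viol1, f2, viol2):
    """Return True if solution 1 is preferred over solution 2 under Deb's
    rules, where viol is the summed magnitude of constraint violations
    (0 means feasible) and f is the objective value (minimization)."""
    if viol1 == 0 and viol2 == 0:
        return f1 <= f2        # both feasible: compare objectives
    if viol1 == 0:
        return True            # feasible beats infeasible
    if viol2 == 0:
        return False
    return viol1 <= viol2      # both infeasible: less violation wins
```

In a population algorithm, this comparator simply replaces the plain fitness comparison in the greedy selection step.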

4.3.1. Three-Bar Truss Design Problem

The three-bar truss design problem (shown in Figure 3) aims to minimize the weight of a truss with three bars, subject to stress constraints. The problem has two design variables and three nonlinear inequality constraints. The formulation of this problem is given in Appendix A.1.

The results of WOA-DE and WOA-BSA are compared with those of WOA, differential evolution with dynamic stochastic selection (DEDS) [27], hybrid evolutionary algorithm (HEAA) [28], hybrid particle swarm optimization with differential evolution (PSO-DE) [29], differential evolution with level comparison (DELC) [30], mine blast algorithm (MBA) [31], and crow search algorithm (CSA) [32]. The best solutions obtained by these algorithms are presented in Table 7. From Table 7, within the allowable error range of the constraint function values, the objective values of the best solutions of WOA-DE and WOA-BSA are the smallest among all compared algorithms, equaling 263.895711. As can be seen from Table 8, although the FEs obtained by DELC (equaling 10000) are the smallest, WOA-DE and WOA-BSA find the current best solution of this problem. In addition, Figure 4 shows the convergence curves of WOA-DE, WOA-BSA, and WOA on the three-bar truss design problem, which indicates that the convergence speed of WOA-DE and WOA-BSA is faster than that of WOA. That is to say, the proposed hybrid framework is effective.

4.3.2. Pressure Vessel Design Problem

This problem aims to minimize the total cost of a cylindrical vessel subject to constraints on the material, forming, and welding, as shown in Figure 5. As the figure shows, both ends of the cylindrical vessel are capped, and the head is hemispherical. The four optimization variables are the thickness of the shell Ts, the thickness of the head Th, the inner radius R, and the length of the cylindrical section L, where Ts and Th must be integer multiples of 0.0625, while R and L are continuous variables. The formulation of this problem is given in Appendix A.2.

The approaches previously applied to this problem include a GA based on a coevolution model (CGA) [33], a GA based on dominance-based tournament selection (DGA) [34], coevolutionary PSO (CPSO) [35], hybrid PSO (HPSO) [36], coevolutionary differential evolution (CDE) [37], CSA, and WOA. WOA-DE and WOA-BSA are compared with these seven algorithms, and the best solutions obtained by all methods are shown in Table 9. From Table 9, we can see that the objective function values of the best solutions of WOA-DE (equaling 6059.708057) and WOA-BSA (equaling 6059.708025) are superior to those of the comparison algorithms. Moreover, as can be seen from Table 10, the FEs obtained by the proposed WOA-DE and WOA-BSA are minimal, equaling 30240 and 30140, respectively. This suggests that WOA-DE and WOA-BSA can find the current best solution of this problem with the smallest computational overhead. Figure 6 shows that WOA-DE and WOA-BSA have a faster convergence rate than the original WOA on the pressure vessel design problem.

4.3.3. Speed Reducer Design Problem

The speed reducer design problem, shown in Figure 7, is a practical design problem that has often been used as a benchmark for testing the performance of different optimization algorithms. The objective is to find the minimum weight of the speed reducer under constraints on the bending stress of the gear teeth, surface stress, transverse deflections of the shafts, and stresses in the shafts. There are seven design variables: the face width b, the tooth module m, the number of teeth in the pinion z, the length of the first shaft between bearings l1, the length of the second shaft between bearings l2, and the diameters of the first shaft d1 and the second shaft d2. The mathematical formulation of the problem is given in Appendix A.3.

Tables 11 and 12 show the best solutions and statistical results of MDE, DEDS, DELC, HEAA, PSO-DE, MBA, and WOA. From Table 11, it can be seen that the objective values of the current best feasible solutions of WOA-DE and WOA-BSA are 2994.469448 and 2994.469656, respectively. The statistical results are given in Table 12, from which we can see that the FEs of WOA-DE and WOA-BSA, equaling 15000, are better than those of all other algorithms except MBA (equaling 6300). This indicates that, compared with the other algorithms, WOA-DE and WOA-BSA under the framework can obtain a relatively better solution at the cost of a little extra computational overhead on this problem. Moreover, Figure 8 presents the convergence curves of WOA-DE, WOA-BSA, and WOA on this problem. From Figure 8, the convergence speeds of WOA-DE and WOA-BSA are faster than that of WOA, and their solutions are closer to the optimal solution.

4.3.4. Gear Train Design Problem

The gear train design problem aims to minimize the cost of the gear ratio of a gear train. It has only boundary constraints on its parameters. The variables to be optimized are the numbers of teeth of the four gears, which are in discrete form since each gear has to have an integral number of teeth. The gear ratio is defined as the ratio of the angular velocity of the output shaft to that of the input shaft. The formulation of this problem is given in Appendix A.4. Figure 9 shows the gear train and its parameters.

Table 13 shows the comparison of the best solutions of the artificial bee colony algorithm (ABC) [39], MBA, CSA, WOA, WOA-DE, and WOA-BSA. From the table, we can see that the best solutions obtained by the six algorithms are the same (equaling 2.7e − 12). In addition, the statistical results of all six algorithms are shown in Table 14. WOA-DE, with FEs of 940, and WOA-BSA, with FEs of 1900, rank second and fourth, respectively, among the six compared algorithms; the smallest FEs, equaling 60, belong to ABC. In addition, the function convergence graphs of WOA-DE, WOA-BSA, and WOA (see Figure 10) show that WOA-DE and WOA-BSA have faster convergence speeds than WOA.

4.3.5. Cantilever Beam Design Problem

The objective of the cantilever beam design problem is to minimize the weight of a cantilever beam consisting of five hollow sections with fixed diameters, as shown in Figure 11. The design variables are the heights of the beam elements, subject to a constraint that should not be violated by the final design. The formulation of this problem is given in Appendix A.5.

Table 15 provides the comparison results for WOA-DE, WOA-BSA, WOA, MMA [40], GCAI [40], GCAII [40], the cuckoo search algorithm (CS) [41], symbiotic organisms search (SOS) [42], and the moth-flame optimization algorithm (MFO) [43]. The results in Table 15 show that WOA-DE and WOA-BSA can also effectively solve this kind of problem. The convergence plot of WOA-DE, WOA-BSA, and WOA on the cantilever beam design problem (shown in Figure 12) indicates that WOA-DE and WOA-BSA converge faster and approach the optimal solution.

4.3.6. I-Beam Design Problem

In this problem, the goal is to minimize the vertical deflection of the beam shown in Figure 13. There are four design variables: the length, the height, and two thicknesses. The problem includes only one optimization constraint and is formulated in Appendix A.6.

Table 16 compares the best solutions of seven state-of-the-art methods, including the adaptive response surface method (ARSM) [44], improved ARSM (IARSM) [44], CS, SOS, WOA, WOA-DE, and WOA-BSA, on the I-beam design problem. From the results of this table, we can see that the optimal values obtained by SOS, WOA-DE, and WOA-BSA are the same. The convergence plots of WOA, WOA-DE, and WOA-BSA are shown in Figure 14, from which we can see that WOA-DE and WOA-BSA have faster convergence speeds.

4.4. Analysis and Discussion of Experimental Results

Taking the experimental results on twenty-three benchmark functions and six engineering design problems into consideration simultaneously, the comprehensive performance of the proposed hybrid framework is discussed based on WOA-DE and WOA-BSA from three aspects, including the solution accuracy, convergence rate, and algorithmic robustness.

4.4.1. Discussion of the Solution Accuracy

The Best values in the above tables reflect the solution accuracy of an algorithm on the twenty-three benchmark functions and six engineering design problems: the smaller the Best value, the higher the accuracy. On the one hand, among the eight algorithms compared on the benchmark functions, WOA-DE ranks first on the seven unimodal functions but only fourth on the multimodal functions, which means WOA-DE is not competitive enough on more complex functions. WOA-BSA ranks second on the unimodal functions (behind WOA-DE) and second on the multimodal functions (behind DE), which shows that WOA-BSA is competitive in solving function optimization problems. On the other hand, compared with some state-of-the-art algorithms on the engineering design problems, WOA-DE finds the current best feasible solutions to four problems: the three-bar truss, speed reducer, gear train, and I-beam design problems. WOA-BSA also obtains the current best feasible solutions to four engineering problems. These results indicate that the proposed hybrid framework performs excellently with respect to solution accuracy.

4.4.2. Discussion of the Convergence Rate

The FE values reflect the computational overhead an algorithm incurs in finding the best solution of an optimization problem: the smaller the overhead, the faster the convergence. On the one hand, when solving the six engineering design problems, the FEs of WOA-DE are not the smallest, but its ranking among all compared algorithms is competitive, and WOA-BSA converges far faster than the original WOA. On the other hand, when solving the twenty-three benchmark functions, all compared algorithms search within the same FE limit, so their convergence rates are reflected by the trends of the convergence curves. As illustrated in Figure 2, WOA-DE and WOA-BSA converge faster than the other compared algorithms.

4.4.3. Discussion of the Algorithmic Robustness

The Mean and Std values in the above tables reflect the robustness of an algorithm: the smaller they are, the more robust the algorithm. On the one hand, on the twenty-three benchmark functions, the Mean and Std values of WOA-DE rank third on the unimodal functions and third and fourth, respectively, on the multimodal functions, while WOA-BSA ranks fourth and fifth, respectively, on the unimodal functions. On the multimodal functions, WOA-BSA ranks first in both Mean and Std, which indicates strong robustness. On the other hand, on the six engineering problems, the Mean and Std values obtained by WOA-DE over 30 runs are unfortunately mediocre, and those of WOA-BSA are likewise not competitive enough. All in all, WOA-DE and WOA-BSA show good overall robustness, but there is still much room for improvement.

From the above two sets of experiments on WOA-DE and WOA-BSA, two characteristics of the proposed algorithm framework emerge. First, the framework is an effective and generic technique for function optimization and engineering constrained problems. Second, its overall performance is outstanding, but there is still room to improve its robustness.

5. Conclusion

The original WOA easily falls into local optima when it carries out the prey-encircling operation. To address this issue, a hybrid algorithm framework with learning and complementary fusion features for WOA, called hWOAlf, is proposed in this paper. First, a learning parameter based on an adaptive adjustment operator is proposed to balance the exploration and exploitation capabilities. Second, the framework takes advantage of a complementary fusion feature operator to enhance the exploration capability of WOA. Two new algorithms, WOA-DE and WOA-BSA, are formed by embedding the DE/rand/1 operator of DE and the mutation operator of BSA, respectively, into WOA. The effectiveness of the proposed hWOAlf framework is demonstrated by comparing WOA-DE and WOA-BSA with several state-of-the-art optimization algorithms on twenty-three continuous benchmark functions and six engineering design problems.
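The complementary-fusion idea can be illustrated with a minimal sketch: each generation, every individual produces one trial with a WOA-style encircling move and one with the embedded DE/rand/1 operator, and a trial replaces the individual only if it improves the fitness. The encircling operator, population size, and parameter settings below are simplified assumptions for illustration, not the paper's exact configuration.

```python
import random

def sphere(x):
    # Test objective: f(x) = sum(x_i^2), global minimum 0 at the origin
    return sum(v * v for v in x)

def de_rand_1(pop, i, f_scale=0.5):
    # DE/rand/1 mutation: v = x_r1 + F * (x_r2 - x_r3), r1 != r2 != r3 != i
    r1, r2, r3 = random.sample([j for j in range(len(pop)) if j != i], 3)
    return [pop[r1][d] + f_scale * (pop[r2][d] - pop[r3][d])
            for d in range(len(pop[i]))]

def woa_encircle(x, best, a):
    # Simplified WOA encircling move toward the current best solution
    return [best[d] - a * random.random() * abs(best[d] - x[d])
            for d in range(len(x))]

def hybrid_step(pop, fit, best, a, func):
    for i in range(len(pop)):
        # Complementary fusion: try both operators, keep any improving trial
        for trial in (woa_encircle(pop[i], best, a), de_rand_1(pop, i)):
            f_trial = func(trial)
            if f_trial < fit[i]:
                pop[i], fit[i] = trial, f_trial
    return pop, fit

random.seed(1)
pop = [[random.uniform(-10, 10) for _ in range(5)] for _ in range(20)]
fit = [sphere(x) for x in pop]
fit0 = min(fit)
for gen in range(100):
    best = pop[fit.index(min(fit))]
    a = 2 * (1 - gen / 100)  # coefficient decreasing linearly from 2 to 0
    pop, fit = hybrid_step(pop, fit, best, a, sphere)
```

The greedy selection makes the best fitness monotonically non-increasing, while the two operators complement each other: the encircling move exploits around the current best, and DE/rand/1 keeps injecting population-scaled diversity.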

In the future, two further studies remain to be done: on the one hand, better parameter tuning strategies can be developed to balance the exploration and exploitation capabilities of WOA; on the other hand, more metaheuristics are expected to be added to the proposed hybrid framework.

Appendix

A. Engineering Design Problems

A.1. Three-Bar Truss Design Problem

A.2. Pressure Vessel Design Problem

A.3. Speed Reducer Design Problem

A.4. Gear Train Design Problem

A.5. Cantilever Beam Design Problem

A.6. I-Beam Design Problem

Data Availability

The data used to support the findings of this study are included within the article. In addition, in order to better share the research results, the LaTeX codes related to the data are available from the corresponding author upon request.

Conflicts of Interest

The author declares that there are no conflicts of interest.