Abstract

In view of the shortcomings of the whale optimization algorithm (WOA), such as slow convergence speed, low accuracy, and a tendency to fall into local optima, an improved whale optimization algorithm (IWOA) is proposed. First, the standard WOA is improved in three aspects: the initial population, the convergence factor, and the mutation operation; at the same time, Gaussian mutation is introduced. Then the nonfixed penalty function method is used to transform the constrained problem into an unconstrained one. Finally, 13 benchmark problems are used to test the feasibility and effectiveness of the proposed method. Numerical results show that the proposed IWOA has obvious advantages, such as stronger global search ability, better stability, faster convergence speed, and higher convergence accuracy, and that it can be used to effectively solve complex constrained optimization problems.

1. Introduction

The whale optimization algorithm (WOA) was proposed by Mirjalili and Lewis [1] in 2016. It is a swarm intelligence optimization algorithm that simulates the hunting behavior of humpback whales; its main idea is to solve the target problem by imitating the whale’s predatory behavior [2]. Since its introduction, the WOA has been favored by many scholars and has been widely used in the optimal allocation of water resources [3], optimal control [4], and feature selection [5]. However, like DE, PSO, ACO, and other swarm intelligence optimization algorithms, it suffers from slow convergence and a tendency to fall into local optima. Therefore, in practical applications, various improvements have been made to the standard algorithms [6–10]. For the WOA in particular, many scholars have in recent years made improvements to its convergence speed and optimization accuracy. For example, Abdel-Basset et al. [11] used Lévy flight and logistic chaos mapping to determine the coefficient vector C and the switching probability p in the WOA, proposed an improved whale optimization algorithm (IWOA), and verified its effectiveness through experiments. Long et al. [12] coordinated exploration and exploitation by updating the convergence factor with a nonlinearly changing formula, proposed an IWOA, and verified the effectiveness of the improvement experimentally. He et al. [13] introduced an adaptive strategy into the whale position update formula and proposed an IWOA for solving function optimization problems; this method balanced the global exploration and local exploitation capabilities of the algorithm and improved its convergence speed and optimization accuracy, and experiments demonstrated its superiority. Wu et al. [14] initialized the population through a quasi-opposition learning method to improve population diversity.
At the same time, the linear convergence factor was modified into a nonlinear convergence factor. To better solve complex function optimization problems, Liu and He [15] proposed an IWOA based on adaptive parameters and niche technology, in which an adaptive probability threshold coordinates the algorithm’s global search and local exploitation capabilities; experiments show that the improved algorithm can effectively improve the ability to solve complex function optimization problems. However, the WOA and its improved variants have all been proposed for unconstrained optimization problems, and there is no research on using this algorithm to solve constrained optimization problems. Therefore, this paper proposes an IWOA for solving constrained optimization problems.

The remainder of this paper is organized as follows: related research work is described in Section 2; the standard WOA and its improvements are described in Section 3; experimental results are presented in Section 4; and conclusions and future work are given in Section 5.

2. Related Work

Constrained optimization problems (COPs) are a type of nonlinear programming problem that occurs frequently in daily life and in engineering applications. There are usually two ways to solve such problems: deterministic algorithms and stochastic algorithms [16]. Deterministic algorithms generally have strict initial requirements, and they are generally unable to solve problems that are nondifferentiable, have a disconnected feasible region, or lack an explicit mathematical expression; even when such problems can be solved, the solutions obtained are mostly local optima [16]. Stochastic algorithms, represented by the swarm intelligence optimization algorithms that have emerged in recent years, have received much research attention for solving COPs. Chen and Huo [17] proposed an improved GA for solving COPs; the method used floating-point encoding and improved the genetic mutation operator and the termination criterion. Long and Zhang [18] proposed an improved bat algorithm for solving COPs, which used the good point set method to construct the initial population in order to maintain population diversity and used inertial weights to improve performance. An improved particle swarm optimization algorithm for solving COPs was proposed by Mi and Gao [19]; it used the penalty function method to treat constrained optimization problems as unconstrained ones and used feasibility rules to update the individual and global extreme values. Lei et al. [20] proposed a new imperialist competitive algorithm to solve COPs, using the lexicographic method to simultaneously optimize the objective function and the degree of constraint violation. Long et al. [21] proposed a firefly algorithm for solving constrained optimization problems.
The algorithm used chaotic sequences to initialize the firefly positions and introduced a dynamic random local search to speed up convergence. Wang et al. [22] proposed an adaptive artificial bee colony algorithm to solve COPs, using an opposition-based learning initialization method and an adaptive selection strategy. Mohamed et al. [23–25] proposed improved differential evolution algorithms for solving constrained optimization problems; these methods mainly improve the mutation operator of standard differential evolution, and the experimental results verify their effectiveness. There are many outstanding constraint-handling techniques in the literature; however, to our knowledge, there is no literature applying the WOA to constrained optimization problems.

Without loss of generality, the model of the COPs studied in this paper is as follows:

min f(x)
s.t. g_i(x) ≤ 0, i = 1, 2, …, m,                    (1)
     h_j(x) = 0, j = 1, 2, …, l,
     l_k ≤ x_k ≤ u_k, k = 1, 2, …, n,

where f(x) is the objective function, g_i(x) is the i-th inequality constraint, h_j(x) is the j-th equality constraint, and u_k and l_k are the upper and lower bounds of the variable x_k, respectively.

3. The Introduction of the Standard WOA and Its Improvement

3.1. Standard WOA

WOA is a swarm intelligence optimization algorithm that simulates whale predation behavior. The algorithm simulates the unique bubble-net foraging method of whales [26] as shown in Figure 1.

The principle of the whale’s bubble-net foraging is as follows: after the whale finds its prey, it creates a bubble net along a spiral path and swims upward to capture the prey. This predation behavior is divided into three stages: surrounding the prey, bubble-net attack, and hunting the prey.

3.1.1. Surrounding Prey Stage

In the WOA algorithm, the whale first recognizes the location of the prey and then surrounds it; in fact, however, the whale cannot know the location of the prey in advance. Therefore, assuming that the current optimal position is the target prey, the other individuals in the group all move toward the optimal position. The enclosing stage can be expressed by the following mathematical model:

D = |C · X*(t) − X(t)|,  X(t + 1) = X*(t) − A · D,                    (2)

where t is the current number of iterations, X*(t) is the prey position vector (current optimal solution), X(t) is the current whale position vector, D is the surrounding step size, and the coefficient vectors A and C are

A = 2a · rand − a,                    (3)
C = 2 · rand.                    (4)

In the above formulas, rand is a random number in [0, 1] and a is a control parameter that decreases linearly from 2 to 0 as the iterations increase:

a = 2(1 − t/t_max),                    (5)

where t_max is the maximum number of iterations.
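As a concrete illustration, the encircling update can be sketched as follows. This is a minimal sketch only; the per-dimension random draws and the generator interface are implementation choices, not taken from the paper.

```python
import numpy as np

def encircle(X, X_best, a, rng):
    """One encircling-prey step: D = |C·X* - X|, X(t+1) = X* - A·D,
    with A = 2a·rand - a and C = 2·rand drawn per dimension."""
    r1 = rng.random(X.shape)
    r2 = rng.random(X.shape)
    A = 2 * a * r1 - a           # coefficient vector A
    C = 2 * r2                   # coefficient vector C
    D = np.abs(C * X_best - X)   # surrounding step size
    return X_best - A * D
```

Note that when a = 0 (the end of the run) A vanishes and every whale collapses onto the current best position, which matches the shrinking behavior described above.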

3.1.2. Bubble-Net Attack Stage

The humpback whale’s bubble-net foraging method is to move along a spiral path toward the prey within a shrinking encirclement. Therefore, in the WOA, two methods are designed to describe the predation behavior of whales: the shrinking and surrounding mechanism and the spiral position update.

Shrinking and surrounding mechanism: this is achieved by reducing the value of the convergence factor a in equations (3) and (4).

Spiral update position: first calculate the distance between the individual whale and the current optimal position, and then simulate the whale capturing food along a spiral. The mathematical model can be expressed as

X(t + 1) = D′ · e^(bl) · cos(2πl) + X*(t),                    (6)

where D′ = |X*(t) − X(t)| is the distance between the i-th whale and the current optimal position, b is a constant coefficient used to define the logarithmic spiral form, and l is a random number in [−1, 1]. In the predation process, the whale also needs to shrink the enclosure while spiraling around the prey. Therefore, to model this synchronous behavior, spiral envelopment and contraction envelopment are performed with the same probability.
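The spiral update can be sketched per dimension as follows; this is an illustrative sketch only, with b fixed to 1 by default as in the later parameter settings.

```python
import math
import random

def spiral_update(x, x_best, b=1.0, l=None):
    """Spiral position update: X(t+1) = D'·e^(b·l)·cos(2πl) + X*,
    where D' = |X* - X| per dimension and l is uniform in [-1, 1]."""
    if l is None:
        l = random.uniform(-1.0, 1.0)
    factor = math.exp(b * l) * math.cos(2 * math.pi * l)
    return [abs(xb - xi) * factor + xb for xi, xb in zip(x, x_best)]
```

With l = 0 the spiral factor is exactly 1, so the new position lands at distance D′ beyond the best position along each coordinate.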

3.1.3. Hunting Prey Stage

If |A| ≥ 1, a randomly selected whale replaces the current optimal solution as the reference target; this drives whales away from the current reference target, enhances the algorithm’s global exploration capability, and helps find a better prey to replace the current reference whale. The mathematical model is

D = |C · X_rand − X(t)|,  X(t + 1) = X_rand − A · D,                    (7)

where X_rand is the position vector of a randomly selected whale.
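The exploration step can be sketched as a one-liner per dimension; the scalar A and C here are assumed to be drawn elsewhere as in the encircling stage.

```python
def explore(x, x_rand, A, C):
    """Exploration step (used when |A| >= 1): D = |C·X_rand - X|,
    X(t+1) = X_rand - A·D, with X_rand a randomly chosen whale's position."""
    return [xr - A * abs(C * xr - xi) for xi, xr in zip(x, x_rand)]
```

Because the step is taken relative to a random whale rather than the current best, it pushes the search away from the incumbent solution, which is exactly the global-exploration effect described above.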

Figure 2 shows the flowchart of the WOA.

3.2. Improvement of WOA

The WOA has the disadvantages of slow convergence and easily falling into local optima; therefore, this paper introduces three improvement strategies and proposes an IWOA for solving constrained optimization problems.

3.2.1. Generating Initial Population with Good Point Set Method

For swarm intelligence optimization algorithms, the quality of the initial population directly affects the accuracy and speed of the algorithm [27]. The better the diversity of the initial population, the stronger the algorithm’s global search ability. For an optimization problem without any prior knowledge, the information carried by the initial population must be exploited, and it is hoped that the diversity of the initial population fully reflects the basic information of the individuals. The good point set method is an effective method that can reduce the number of trials. Experiments show that, for the same number of points, a good point set sequence is distributed more uniformly than a sequence selected by a general random method and is closer to the integral curve [28]. Moreover, the accuracy of the good point set method is independent of the dimension, which overcomes a shortcoming of the random method. A good point set of M points in s-dimensional space is taken as

P_M(k) = {({r_1 · k}, {r_2 · k}, …, {r_s · k}), k = 1, 2, …, M},                    (8)

where {r_i · k} means taking the decimal part of r_i · k.
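A hedged sketch of good-point-set initialization follows. The choice of good point r_i = 2·cos(2πi/p), with p the smallest prime satisfying (p − 3)/2 ≥ s, is one common construction from the good point set literature and is assumed here rather than taken from the paper.

```python
import math

def good_point_set(m, s, lo, hi):
    """Generate m initial individuals in s dimensions on [lo, hi] using a
    good point set: point k has coordinates {r_i * k} (decimal parts),
    scaled into the search range."""
    def is_prime(q):
        return q > 1 and all(q % d for d in range(2, int(q ** 0.5) + 1))
    p = 2 * s + 3                 # smallest candidate with (p - 3) / 2 >= s
    while not is_prime(p):
        p += 1
    r = [2 * math.cos(2 * math.pi * (i + 1) / p) for i in range(s)]
    # point k: take the decimal part {r_i * k} and map it into [lo, hi]
    return [[lo + ((ri * k) % 1.0) * (hi - lo) for ri in r]
            for k in range(1, m + 1)]
```

With m = 100, s = 2, and the range [−80, 80] this reproduces the setting of Figures 3 and 4, giving a low-discrepancy spread over the square.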

In order to illustrate the difference between the initial populations generated by the good point set method and by the general random method, take two dimensions as an example, where the value range of the variables is [−80, 80] and the population size is 100. Figures 3 and 4 show the population distributions generated by the random method and the good point set method, respectively.

As can be seen from Figures 3 and 4, the initial population generated by the good point set method is distributed more uniformly and has better diversity than that generated by the random method.

3.2.2. Nonlinear Convergence Factor

In the standard WOA, the global search ability and local exploitation ability of the algorithm depend mainly on the parameter A, and the value of A depends mainly on the convergence factor a; therefore, the convergence factor is critical to the algorithm’s optimization ability. Literature [1] pointed out that a larger convergence factor gives better global search ability and helps the algorithm avoid local optima, while a smaller convergence factor gives stronger local search ability and speeds up convergence. In the standard WOA, the convergence factor a decreases linearly from 2 to 0 as the iterations increase. This linear reduction strategy gives the algorithm good global search ability in the early stage but slow convergence, and fast convergence in the later stage but a tendency to fall into local optima, which is especially obvious when solving multimodal function problems. Therefore, in the evolutionary search process of the WOA, a convergence factor that decreases linearly with the number of iterations cannot fully reflect the actual optimization process [29]. This paper therefore proposes that, in the early stage of evolution, the convergence factor decreases slowly and exponentially, ensuring strong global search capability while maintaining a fast convergence speed, and that, later in the search, the convergence factor decreases linearly, ensuring faster convergence and helping the algorithm avoid local optima. The updated formula of the convergence factor is

Here, t is the current number of iterations and t_max is the maximum number of iterations.
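Since the authors' exact formula appears as their own equation, the schedule below is only an assumed illustration of the idea (exponential decay early, linear decay late, with the split point at half the iterations chosen arbitrarily), not the paper's formula.

```python
import math

def convergence_factor(t, t_max, split=0.5):
    """Illustrative two-phase convergence factor: slow exponential decrease
    in the early phase, then a continuous linear decrease down to 0."""
    if t < split * t_max:
        # early phase: slow exponential decay from 2
        return 2 * math.exp(-t / t_max)
    # late phase: linear decay from the phase boundary value to 0
    a_split = 2 * math.exp(-split)
    frac = (t - split * t_max) / ((1 - split) * t_max)
    return a_split * (1 - frac)
```

The piecewise form is continuous at the split point and ends at exactly 0, mirroring the qualitative behavior the paragraph describes.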

3.2.3. Improvement of Mutation Operation

Similar to other swarm intelligence algorithms, in the late stage of evolution of the WOA, the group gathers close to the optimal solution, diversity decreases, and it is easy to fall into a local optimum. In addition, the evolution update formulas of the standard WOA make little use of the evolution information of the current generation. According to previous research on the differential evolution algorithm [30], making full use of current-generation information during evolution can ensure both the global and local search abilities of the algorithm. Inspired by this, this paper also introduces current-generation information into the update formulas of the WOA, as follows:

Randomly generate a probability p in [0, 1]; if p < 0.5 and |A| ≥ 1, then update according to (9):

If |A| < 1, update according to (10):

If the probability p ≥ 0.5, update according to (11):

Here, b = 1, l is a random number in [−1, 1], and rand is a random number in [0, 1]. At the same time, in order to increase population diversity and avoid premature convergence, Gaussian mutation is applied to the mutated individuals: it takes the midpoint of the current individual and the optimal individual as the mean and the distance between the optimal individual and the current individual as the variance. The mutation formula is

X′ = N((X + X*)/2, |X* − X|),                    (12)

where N(μ, σ²) denotes a Gaussian distribution with mean μ and variance σ².
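The Gaussian mutation described above can be sketched per dimension as follows; since the text uses the distance as the variance, the standard deviation passed to the sampler is its square root.

```python
import math
import random

def gaussian_mutation(x, x_best, rng=random):
    """Gaussian mutation: per dimension, sample from a normal distribution
    whose mean is the midpoint of the current and best individuals and whose
    variance is their distance (so std = sqrt(distance))."""
    return [rng.gauss((xi + xb) / 2.0, math.sqrt(abs(xb - xi)))
            for xi, xb in zip(x, x_best)]
```

When an individual coincides with the best solution the variance collapses to zero and the mutation leaves it unchanged, so perturbations are largest far from the incumbent, which is what sustains diversity late in the run.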

3.2.4. Constraint Processing Technology

In constrained optimization problems, there are usually two methods for handling constraints: transforming the problem into a multiobjective one, or using a penalty function. This paper uses the penalty function method. Its basic idea is to treat constraint violations as a penalty and incorporate the constraints into the objective function, yielding an unconstrained optimization problem; an optimization algorithm then searches for the optimal solution of the original problem under the action of the penalty [31]. Therefore, the objective function constructed for problem (1) in this paper is

F(x) = f(x) + δ(t) · Q(x),                    (13)

where f(x) is the original objective function, δ(t)Q(x) is the penalty term, δ(t) is the penalty force, and Q(x) is the penalty factor. If δ(t) is fixed, the method is called the fixed penalty function method; otherwise, it is called the nonfixed penalty function method, in which δ(t) changes with the number of iterations. This paper uses the nonfixed multisegment mapping penalty function method proposed by Parsopoulos and Vrahatis [32], whose expression is

δ(t) = t · √t,  Q(x) = Σ_{i=1}^{m} θ(q_i(x)) · q_i(x)^(γ(q_i(x))),

and

q_i(x) = max{0, g_i(x)}, i = 1, 2, …, m,

where θ(·) is a multistage assignment function and γ(·) is the power of the constraint violation.
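A sketch of the nonstationary penalty evaluation follows. The penalty force δ(t) = t·√t and the stage thresholds of θ and γ below follow the commonly cited Parsopoulos–Vrahatis scheme; they are assumptions to be checked against the paper's own expressions.

```python
import math

def penalized(f, gs, x, t):
    """F(x) = f(x) + delta(t)*Q(x) with a multi-stage assignment of penalty
    weights based on the size of each constraint violation q_i = max(0, g_i(x))."""
    def theta(q):          # multi-stage penalty weight (assumed values)
        if q < 0.001:
            return 10.0
        if q < 0.1:
            return 20.0
        if q < 1.0:
            return 100.0
        return 300.0
    def gamma(q):          # power of the violation (assumed values)
        return 1.0 if q < 1.0 else 2.0
    delta = t * math.sqrt(t)            # penalty force grows with iteration t
    Q = sum(theta(q) * q ** gamma(q)
            for q in (max(0.0, g(x)) for g in gs))
    return f(x) + delta * Q
```

At a feasible point every q_i is zero, so F(x) reduces to the original objective; as iterations accumulate, infeasible points are penalized ever more heavily, steering the swarm into the feasible region.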

3.3. Out-of-Bound Processing Method

During the evolution of a swarm intelligence algorithm, newly generated solutions that exceed the prescribed boundaries are a common problem, and how to deal with individual solutions beyond the boundaries is also critical. There are two common processing methods: one is to take the boundary value when the boundary is exceeded, and the other is to regenerate a new solution within the value range. This paper takes the first method.
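The first method (clamping to the boundary) is a one-line operation per dimension:

```python
def clip_to_bounds(x, lo, hi):
    """Out-of-bound handling used in this paper: any coordinate beyond its
    bound is set to the boundary value itself."""
    return [min(max(xi, l), h) for xi, l, h in zip(x, lo, hi)]
```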

The specific steps of the IWOA are as follows:

(i) Step 1. Set the population size and the maximum number of iterations, and generate the initial population with the good point set method.
(ii) Step 2. Calculate the fitness of the population and keep the current optimal solution.
(iii) Step 3. Update the parameters a, A, C, and l.
(iv) Step 4. Generate a random number p in [0, 1]; if p < 0.5 and |A| ≥ 1, mutate according to equation (9); if p < 0.5 and |A| < 1, update according to equation (10); if p ≥ 0.5, update according to equation (11).
(v) Step 5. Perform Gaussian mutation on the individuals from step 4 according to formula (12).
(vi) Step 6. Perform boundary treatment on the individuals mutated in step 5 and calculate the fitness of the new individuals, keeping the current optimal solution.
(vii) Step 7. Determine whether the termination condition is satisfied. If yes, output the current optimal solution; otherwise, return to step 2.
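The steps above can be sketched as a main loop. The helpers (update_individual, gaussian_mutate, clip) are placeholders standing in for the paper's equations and are passed in rather than fixed here; this is a skeleton, not the authors' implementation.

```python
import random

def iwoa(fitness, init_pop, t_max, update_individual, gaussian_mutate, clip):
    """Skeleton of the IWOA main loop following steps 1-7 (minimization)."""
    pop = [clip(x) for x in init_pop]              # step 1: initial population
    best = min(pop, key=fitness)                   # step 2: keep current best
    for t in range(t_max):                         # step 3: parameters depend on t
        new_pop = []
        for x in pop:
            p = random.random()                    # step 4: choose update rule
            y = update_individual(x, best, t, p)
            y = gaussian_mutate(y, best)           # step 5: Gaussian mutation
            y = clip(y)                            # step 6: boundary handling
            new_pop.append(y)
        pop = new_pop
        cand = min(pop, key=fitness)
        if fitness(cand) < fitness(best):
            best = cand
    return best                                    # step 7: output best found
```

With the earlier sketches plugged into the three hooks, this loop reproduces the flow of the step list; the penalized objective of the next subsection would simply be passed as `fitness`.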

4. Simulation Experiment

In order to test the performance of the IWOA, 8 benchmark functions [33] are first selected for testing, as shown in Table 1. The selected functions include unimodal functions and complex nonlinear multimodal functions: the unimodal functions test the local exploitation ability of the algorithm, and the multimodal functions test its global exploration ability. The dimension of all tested functions is set to 30. The IWOA is compared with the standard WOA; Table 2 shows the mean values and standard deviations of the two algorithms over 30 runs.

It can be seen from Table 2 that, for the multimodal functions F6 and F7, both algorithms can find the optimal solution, while for F8 the IWOA is better than the standard WOA. For the unimodal functions F1, F2, F3, F4, and F5, the mean values and standard deviations of the IWOA are significantly better than those of the standard WOA; for F3 in particular, the advantage of the IWOA is especially obvious. This shows that the IWOA is superior to the standard WOA in both global search and local search capabilities.

Figure 5 shows the function evolution curves.

It can also be seen from the evolution curves that the IWOA converges quickly; the tested functions converge to the optimal solution or close to it.

To further evaluate the performance of the improved algorithm in solving constrained optimization problems, the 13 benchmark constrained optimization problems in [34] are selected for testing. All functions are transformed into unconstrained optimization problems according to (13); after conversion, the task is similar to function optimization. The maximum evolutionary generation is 800. When the variable dimension is at most 10, the population size is set to 80; when it is greater than 10, the population size is set to 100. Other parameter settings are the same as mentioned earlier. The WOA and IWOA algorithms are coded and realized in MATLAB. For each problem, 20 independent runs are performed, and statistical results are provided, including the maximum number of function evaluations (FEs) and the best values, mean values, and standard deviations. Table 3 shows the calculation results and a comparison with other algorithms.

It can be seen from Table 3 that the IWOA can find the optimal solution or come very close to it. Among the problems, g01, g04, g05, g06, g08, g09, g10, g11, g12, and g13 have the best results, with the theoretical optimal solutions found in all cases, and the solutions of g02, g03, and g07 are very close to the global optimal solutions. It can also be seen from the standard deviations that the stability of the IWOA is improved and that it has better robustness.

Further analysis shows that, for g01, g04, g06, g08, g09, g10, and g12 with inequality constraints, g11 and g13 with equality constraints, and g05 with both equality and inequality constraints, the optimal solutions found by the IWOA are equivalent to those of ICA [20]. For g01, g08, g09, and g11, the IWOA and ICA have almost the same stability, but for g02, g07, and g10, the IWOA is more stable than ICA. The COMDE [23] algorithm shows good stability; although the standard deviations of the IWOA are not always smaller than those of COMDE, for g01, g02, g03, g05, g07, g09, g10, and g13, the FEs of the IWOA are less than those of COMDE, indicating that the computational cost of the IWOA is lower. The NDE [24] is able to find the global optimal solution consistently on 12 of the 13 comparison functions, with the exception of g02, but its FEs are set to 240,000. Compared with IPES [35], the IWOA finds more optimal solutions and has better mean values on g01, g02, g05, g10, and g13; for g03, g08, g11, and g12, the mean values of the two are equivalent. For g01 and g13, the IWOA and IFOA [36] have almost the same mean values and standard deviations. On the whole, IFOA has strong stability, but its ability to find the best solutions is poorer.

Figures 6–8 show the convergence graphs of log10(f(x) − f(x*)) over FEs for 20 runs, where f(x*) is the known optimal value. Following the earlier rule, the dimensions of g01, g02, g03, and g07 are greater than 10, so their FEs are set to 80,000, and the others are set to 64,000.

As shown in Table 3, IWOA is able to find the global optimal solution consistently in 10 out of 13 test functions over 20 runs with the exception of test functions (g02, g03, and g07).

In order to test the effect of the various improvements on the IWOA, we compared the methods before and after each improvement: the variant whose initial population is produced without the good point set method is called IWOA-1, the variant without the nonlinear convergence factor is called IWOA-2, the variant without the improved mutation operation is called IWOA-3, the variant without the constraint-handling technique is called IWOA-4, and the variant without out-of-bound processing is called IWOA-5. Each algorithm runs 20 times under the same conditions. Table 4 shows the best values, mean values, and standard deviations.

It can be seen from Table 4 that IWOA-2, without the nonlinear convergence factor, is the worst, showing that the nonlinear convergence factor has an obvious influence on the IWOA. For g01, g04, g05, g07, and g10, which have many constraints, IWOA-5 without out-of-bound processing is not ideal, indicating that the more constraints there are, the more sensitive the problem is to the boundary. For IWOA-4, the experimental results are not as good as those with the constraint-handling technique, indicating that the constraint-handling technique adopted in this paper is effective. For g01, g02, g03, g04, g07, g09, g10, and g13, which have more variables, the effect of using the good point set method to generate the initial solutions is obviously better than IWOA-1 without it, showing that increasing population diversity can improve the optimization ability. At the same time, the mutation operation that adds evolution information is better than IWOA-3. On the whole, the most obvious impacts on the algorithm come from the nonlinear convergence factor and the constraint-handling technique.

In summary, the test results of 13 benchmark constrained optimization problems show that the IWOA algorithm has high accuracy and strong stability, and it is competitive in constrained optimization problems.

5. Conclusion and Future Work

Like other swarm intelligence algorithms, the WOA has the disadvantages of slow convergence and a tendency to fall into local optima. According to the characteristics of the function optimization problem, this paper improves the standard WOA in three aspects, namely, (i) the initial population, (ii) the convergence factor, and (iii) the mutation operation, and proposes an IWOA. At the same time, in order to solve constrained optimization problems, the dynamic penalty function method is introduced to transform the constrained optimization problem into an unconstrained one. Tests on 8 benchmark functions show that the IWOA has improved global exploration and local exploitation abilities. Then 13 benchmark constrained optimization problems were tested and compared with other algorithms in many aspects; the experimental results show that the IWOA is competitive.

The WOA is a new type of swarm intelligence algorithm; the next step is to further improve the algorithm so that it can be applied in more fields and solve more constrained optimization problems.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the Science and Technology Research Project of Guangxi Universities (KY2015YB521), the Youth Education Teachers’ Basic Research Ability Enhancement Project of Guangxi Universities (2019KY1098), and the Key Project in Teaching Reform of Lushan College of Guangxi University Science and Technology (2018JGZ004).