Abstract

Grey wolf optimizer (GWO) is a recent nature-inspired optimization algorithm that has been applied to many real-world problems since it was proposed. In the standard GWO, individuals are guided by the three dominant wolves alpha, beta, and delta in the leading hierarchy of the swarm. These three wolves provide their information about the potential locations of the global optimum in the search space. This learning mechanism is easy to implement. However, when the three wolves point in conflicting directions, an individual may not obtain better knowledge to update its position. To improve the utilization of the population knowledge, in this paper, we propose a grey wolf optimizer based on the dimensional learning strategy (DLGWO). In the DLGWO, the three dominant wolves construct an exemplar wolf through the dimensional learning strategy (DLS) to guide the grey wolves in the swarm. Thereafter, to reinforce the exploration ability of the algorithm, Levy flight is also utilized in the proposed method. Twenty-three classic benchmark functions and several engineering problems are used to test the effectiveness of the proposed method against the standard GWO, variants of the GWO, and other metaheuristic algorithms. The experimental results show that the proposed DLGWO performs well in solving global optimization problems.

1. Introduction

In the real world, there are many optimization problems to be solved, and thus, the development of optimization techniques is of great importance. Most of these techniques rely on the derivatives of the functions involved in the problem, but such derivatives are sometimes difficult or impossible to obtain. As an important class of derivative-free methods, nature-inspired algorithms are attracting more and more attention for their robust search performance. Generally, nature-inspired algorithms can be classified into three categories: evolutionary algorithms (EAs), swarm intelligence (SI), and physics-based (PB) algorithms [1–5].

SI-based algorithms, as one category of nature-inspired algorithms, are becoming increasingly popular due to their various characteristics such as strong searching ability, simple implementation, few parameters, and the ability to avoid getting trapped at local optima [6]. In these algorithms, the search agents cooperate and exchange information with each other, which guarantees that the information about the search space is efficiently utilized. This helps the whole swarm to move towards more promising areas in the search space. As the algorithm progresses, the swarm exhibits roughly two patterns of behavior: exploration and exploitation. Exploration is the process of discovering new regions of the search space. Exploitation is the process of excavating the promising areas in the search space for potentially better solutions. These two patterns are conflicting [7], and maintaining an appropriate balance between them is a great challenge for any SI-based algorithm. In the past few decades, numerous SI-based algorithms have been proposed in the literature and applied to real-life problems. Some of these algorithms are very representative, such as particle swarm optimization (PSO) [8], ant colony optimization (ACO) [9], the artificial bee colony (ABC) algorithm [10–14], cuckoo search (CS) [15, 16], and the bat algorithm (BA) [17, 18]. Some recent SI-based algorithms are teaching-learning-based optimization (TLBO) [19], the salp swarm algorithm (SSA) [20], the sine cosine algorithm (SCA) [21], the whale optimization algorithm (WOA) [22], the butterfly optimization algorithm (BOA) [23], and Harris hawks optimization (HHO) [24]. The grey wolf optimizer (GWO) was proposed by Mirjalili et al. [25] in 2014. The search mechanism of the GWO is established by mimicking the rigorous leadership hierarchy of a grey wolf pack and their group behavior when hunting. This mechanism guarantees a reliable exploration ability in the algorithm.
Owing to its various advantages such as easy implementation, low computational complexity, and high search efficiency, the GWO has become quite notable and has been successfully applied to many fields in the past seven years [26, 27]. Although the applications of the GWO in many fields have proved successful, previous works have shown that the GWO has some defects, such as premature convergence, underutilization of population information, and a tendency to get trapped in local optima, which are also common to some other SI-based algorithms [28, 29]. Many studies on the GWO have been presented to address these shortcomings, which can be roughly categorized into four classes: (1) Modification of the algorithm parameters: in [30], a dynamically varying parameter was adopted to maintain the proportion of wolves staying in the feasible area. In [31], the Taguchi method was utilized to tune the parameters of the GWO. Rodríguez et al. [32] adopted a fuzzy inference system to dynamically adapt the control parameters. In [33], a chaotic map was incorporated into the standard GWO to control its parameters. Moreover, Kumar [34] provided an enhanced variant in which the parameter a was dynamically updated with the number of iterations. These modifications can improve the performance of the GWO on some problems, but the global exploration ability and local exploitation ability of the standard GWO are directly determined and balanced by the parameters a and A; thus, modifying these parameters can break the balance between exploration and exploitation. (2) Employment of novel search strategies: in [35], an enhanced GWO variant was designed based on a new wolf position update mechanism. This significantly improved the local exploitation ability of the GWO, though it might suffer from premature stagnation when dealing with multimodal problems. Cai et al. [36] introduced a random local search strategy into the standard GWO to reinforce exploration.
Yang et al. [37] employed the backward learning strategy to heighten the exploration in the GWO. Despite the contributions of this work, the proposed algorithm had difficulty in solving problems with more than three objectives, and its computational consumption was higher than that of the standard GWO. In [38], crossover and mutation operators were incorporated into the standard GWO to improve the diversity of the population, but these operators can slow the convergence of the proposed method. Gupta and Deep [39] modified the original GWO algorithm based on random walks. Although the proposed method does not exhibit outstanding local exploitation ability, the global exploration ability of the leader wolves was enhanced. In [40], the authors employed different selection methods and investigated their effects on the GWO. Liu et al. [41] proposed a modified GWO variant based on multiple search strategies, such as an adaptive chaotic mutation strategy, a boundary mutation strategy, and an elitism strategy. The modified method can enhance the performance of the GWO for solving many-objective optimization problems, but it was only equipped with one type of constraint-handling technique, which might be a limitation in dealing with other practical engineering problems. In [42], an enhanced global-best lead strategy, an adaptable cooperative strategy, and a disperse foraging strategy were embedded into the GWO. Dhargupta et al. [43] combined opposition-based learning with the GWO to guarantee its exploration performance and convergence rate. Although the improvement of population diversity in the proposed algorithm is significant, it is challenging for the initial population to converge quickly at the beginning of the iterations. (3) Combination with other metaheuristics: Wen et al. [44] hybridized the GWO with the cuckoo search algorithm. Although there was extra computational consumption, the hybrid algorithm obtained solutions of higher quality.
In [45], the GWO was combined with the min-conflict algorithm. The hybrid method shows good search performance, which can be further improved by adjusting the selection method of the min-conflict algorithm. Qu et al. [46] simplified the GWO and combined it with symbiotic organisms search. The combination of these two approaches accelerated the convergence speed of the algorithm, but individuals in the proposed method updated their positions only according to alpha, instead of the best three wolves. This put the population at risk of becoming trapped in local optima. (4) Adjustment of the hierarchy or population structure: in [47], two additional leader wolves were introduced into the leadership hierarchy to heighten the exploration ability of the standard GWO. Miao et al. [48] proposed a GWO variant based on an enhanced leading hierarchy. Yang et al. [49] divided the wolf pack into two independent groups. Ozsoydan [50] investigated the effects of the dominant wolves and modified the GWO based on their variations. Wang et al. [51] provided a GWO variant based on chaotic initialization. Despite the effectiveness of these adjustments, changes in the hierarchy or population structure can affect the information flow mechanism of the GWO. For instance, a new leadership hierarchy might change the update process of the population, and the use of a multipopulation strategy might limit the utilization and exchange of information within the subpopulations.

According to the various works mentioned above, the main common purpose of these studies is to boost the convergence speed and accuracy of the GWO while maintaining population diversity, so as to keep an appropriate balance between exploration and exploitation. However, the population diversity is strongly affected by the information flow mechanism of the algorithm. In the standard GWO, the search process is guided by the three dominant wolves, which leads the population to converge towards these wolves. Therefore, the information of these wolves is essential for the population, and the search efficiency of the algorithm is reduced when their information is conflicting.

To improve the information utilization of the population in the GWO algorithm, in this paper, the dimensional learning strategy (DLS) is implemented in the standard GWO. The DLS was proposed by Xu et al. [52] to protect the potentially useful information of the population best solution in PSO. In the DLS, a learning exemplar is constructed for each particle. During the construction process, each dimension of a particle's personal best solution learns from the corresponding dimension of the population best solution. Therefore, the learning exemplar combines the excellent information of the personal best experience with that of the population best experience. Inspired by the DLS in PSO, we introduce the DLS into the leading hierarchy of the GWO algorithm. There are some similarities and differences between our work and the DLS for PSO. The main purpose of employing the DLS in GWO is to efficiently protect the potentially useful knowledge of the three dominant wolves, which is similar to the protection of the population best solution in PSO. The DLS is implemented in GWO by constructing an exemplar wolf from the three dominant wolves, which is similar to the construction of a learning exemplar between a particle's personal best solution and the population best solution in PSO. However, the DLS in PSO constructs a learning exemplar for each particle based on its personal best solution and the population best solution in the current iteration. To reduce the computational cost, we construct only one exemplar wolf at each iteration to guide the whole population. On each dimension, the delta wolf learns from the corresponding dimensions of both the alpha and beta wolves to determine the corresponding dimension of the exemplar wolf.
By employing the DLS, the potentially useful knowledge of the three dominant wolves is well protected, and the situation in which their information on some dimensions is conflicting is effectively avoided, which allows the GWO to utilize the knowledge of the three dominant wolves comprehensively and efficiently. In addition, to maintain an appropriate balance between exploration and exploitation, Levy flight [15] is also employed in the algorithm. Levy flights generate steps drawn from the Levy distribution, which has a good chance of producing long-distance moves; this introduces randomness into the population and improves its diversity. Based on the above modifications, an enhanced grey wolf optimizer based on the dimensional learning strategy, namely, the DLGWO, is proposed.

The remainder of this paper is organized as follows: the related work is introduced in Section 2. In Section 3, the proposed method is presented and illustrated in detail. Several groups of experiments to test the proposed DLGWO are conducted and reported in Section 4. In Section 5, the applications of the DLGWO to real-world optimization problems are given. Finally, we provide the conclusion of this study in Section 6.

2. Related Work

2.1. GWO

GWO is a metaheuristic optimization algorithm inspired by the social hierarchy and hunting behavior of grey wolves. Figure 1 shows the hierarchical structure of a grey wolf pack and the hunting behavior of the grey wolves in the 2D search space.

To mathematically model the social hierarchy of a wolf pack, the fittest three solutions of the population are considered as alpha (α), beta (β), and delta (δ), respectively. The other solutions in the population are considered as omega (ω).

The encircling behavior is mathematically modelled using the following equations:

D = |C · Xp(t) − X(t)| (1)
X(t + 1) = Xp(t) − A · D (2)
A = 2a · r1 − a (3)
C = 2 · r2 (4)
a = 2 − 2t/T (5)

where X and Xp are the position vectors of the grey wolf and the prey, respectively. A and C are coefficient vectors, and r1 and r2 are random vectors in [0, 1]. t and T are the current iteration and the max iteration, respectively. The value of a is linearly decreasing from 2 to 0.

Assume that alpha, beta, and delta have better knowledge about the prey's position; the other wolves then update their positions according to the positions of alpha, beta, and delta. To mathematically express the hunting behavior, the equations are as follows:

X1 = Xα − A1 · Dα, X2 = Xβ − A2 · Dβ, X3 = Xδ − A3 · Dδ (6)
X(t + 1) = (X1 + X2 + X3)/3 (7)

where Xα, Xβ, and Xδ are the best three solutions. A1, A2, and A3 are determined by equation (3). Dα, Dβ, and Dδ are calculated using the following equations:

Dα = |C1 · Xα − X|, Dβ = |C2 · Xβ − X|, Dδ = |C3 · Xδ − X| (8)

where C1, C2, and C3 are random vectors. In the GWO, the search behavior of the grey wolves is determined by the parameter A. Notice that A is generated randomly in the interval [−2a, 2a]. |A| > 1 leads the search agents to expand their searching areas, and |A| < 1 leads the search agents to converge to the areas that have already been explored. The pseudo-code of the standard GWO is presented in Algorithm 1.

(1) Initialize the positions of the population and the parameters of the GWO (max iteration T, population size N, a, A, and C)
(2) Calculate the fitness of each wolf and record the best three wolves
(3) Xα = position of the best wolf
(4) Xβ = position of the second best wolf
(5) Xδ = position of the third best wolf
(6) while t ≤ T do
(7)  for each wolf do
(8)   Update the position of the current wolf using equations (6)–(8)
(9)  end for
(10) Update parameters a, A, and C
(11) Check the boundaries and revise those wolves beyond the boundaries
(12) Calculate the fitness of all wolves
(13) Update Xα, Xβ, and Xδ
(14) t = t + 1
(15) end while
(16) Return Xα
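As a reading aid, Algorithm 1 can be sketched in Python. This is a minimal illustrative implementation, not the authors' code; the sphere objective, the function names, and all parameter defaults are our own choices:

```python
import numpy as np

def sphere(x):
    """Classic unimodal test objective: sum of squares, minimum 0 at the origin."""
    return float(np.sum(np.asarray(x) ** 2))

def gwo(objective, dim=10, n_wolves=20, max_iter=200, lb=-10.0, ub=10.0, seed=0):
    """Minimal standard-GWO loop (minimization)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_wolves, dim))          # initialize positions
    fit = np.apply_along_axis(objective, 1, X)
    for t in range(max_iter):
        order = np.argsort(fit)                       # best three wolves lead
        x_alpha = X[order[0]].copy()
        x_beta = X[order[1]].copy()
        x_delta = X[order[2]].copy()
        a = 2.0 - 2.0 * t / max_iter                  # a decreases linearly 2 -> 0
        for i in range(n_wolves):
            candidates = []
            for leader in (x_alpha, x_beta, x_delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2.0 * a * r1 - a                  # coefficient vector A
                C = 2.0 * r2                          # coefficient vector C
                D = np.abs(C * leader - X[i])         # distance to the leader
                candidates.append(leader - A * D)
            # average of X1, X2, X3 as in equations (6)-(8)
            X[i] = np.mean(candidates, axis=0)
        X = np.clip(X, lb, ub)                        # boundary handling
        fit = np.apply_along_axis(objective, 1, X)
    return X[np.argmin(fit)]
```

Note that, as in lines (12)–(13) of Algorithm 1, the three leaders are re-selected from the population at every iteration.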
2.2. PSO

PSO [8] mimics the swarm behavior, and the particles in the swarm represent search agents in the search space. Each particle updates its velocity and position by learning from its personal best position and the population best position in the search space, and the update equations are as follows:

v_ij(t + 1) = w · v_ij(t) + c1 · r1j · (pbest_ij − x_ij(t)) + c2 · r2j · (gbest_j − x_ij(t)) (9)
x_ij(t + 1) = x_ij(t) + v_ij(t + 1) (10)

where v_ij and x_ij are the velocity and position of the jth dimension of particle i, respectively. pbest_i is the historical best position of particle i, and gbest is the population historical best position. c1 and c2 are the acceleration coefficients, and r1j and r2j are two uniformly distributed random numbers independently generated in [0, 1] for the jth dimension. The inertia weight w is used to control the velocity and decreases linearly from 0.9 to 0.4 over the iterations.
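A compact sketch of these update rules, assuming minimization on the sphere function (illustrative only; the names and defaults are ours, not from the cited work):

```python
import numpy as np

def sphere(x):
    """Sum-of-squares test objective, minimum 0 at the origin."""
    return float(np.sum(np.asarray(x) ** 2))

def pso(objective, dim=5, n=20, iters=200, lb=-10.0, ub=10.0,
        c1=2.0, c2=2.0, seed=0):
    """Minimal PSO with personal-best and global-best learning."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n, dim))
    V = np.zeros((n, dim))
    pbest = X.copy()
    pbest_f = np.apply_along_axis(objective, 1, X)
    g = int(np.argmin(pbest_f))
    gbest, gbest_f = pbest[g].copy(), float(pbest_f[g])
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters                 # inertia weight 0.9 -> 0.4
        r1 = rng.random((n, dim))
        r2 = rng.random((n, dim))
        # velocity update, equation (9)-style
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
        X = np.clip(X + V, lb, ub)                # position update
        f = np.apply_along_axis(objective, 1, X)
        better = f < pbest_f                      # refresh personal bests
        pbest[better], pbest_f[better] = X[better], f[better]
        g = int(np.argmin(pbest_f))
        if pbest_f[g] < gbest_f:                  # refresh global best
            gbest, gbest_f = pbest[g].copy(), float(pbest_f[g])
    return gbest, gbest_f
```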

2.3. DLS for PSO

Xu et al. [52] proposed the dimensional learning strategy (DLS) to protect the potentially helpful information of the particles in PSO. In the standard PSO, the particles learn from their personal best experience and the population best experience. This learning strategy can lead to the phenomenon of "oscillation" [53]. When the personal best position pbest_i and the population best position gbest lie in two opposite directions from the current position x_i, after particle i moves towards pbest_i, it will move closer to gbest at the next iteration, since the difference (gbest − x_i) becomes larger than the difference (pbest_i − x_i). A particle will constantly wander between the personal best position and the population best position, which causes "oscillation" and limits the search efficiency of PSO. The DLS is different from the learning strategy of the standard PSO. In the DLS, the personal best position pbest_i learns from the population best position gbest dimension by dimension to construct a learning exemplar e_i, which allows the excellent information of gbest to be inherited by the exemplar e_i and improves the information utilization of gbest. By replacing pbest_i with e_i in equation (9) of the standard PSO, the improved velocity update equation is as follows:

v_ij(t + 1) = w · v_ij(t) + c1 · r1j · (e_ij − x_ij(t)) + c2 · r2j · (gbest_j − x_ij(t)) (11)

The process of constructing an exemplar is shown in Algorithm 2.

(1) e_i = pbest_i
(2) for each dimension j do
(3)  t_i = e_i, t_i(j) = gbest(j)
(4)  if t_i(j) == e_i(j) then
(5)   Continue
(6)  end if
(7)  Evaluate f(t_i)
(8)  if f(t_i) < f(e_i) then
(9)   e_i = t_i
(10) end if
(11) end for
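Algorithm 2 can be rendered in Python as follows (a sketch assuming minimization; the helper name `construct_exemplar` is our own):

```python
import numpy as np

def sphere(x):
    """Sum-of-squares test objective, minimum 0 at the origin."""
    return float(np.sum(np.asarray(x) ** 2))

def construct_exemplar(pbest, gbest, objective):
    """Dimension-by-dimension learning: each dimension of pbest tentatively
    adopts the corresponding dimension of gbest, and the change is kept
    only if it improves the fitness (minimization)."""
    e = np.asarray(pbest, dtype=float).copy()
    f_e = objective(e)
    for j in range(len(e)):
        if e[j] == gbest[j]:
            continue                    # nothing new to learn on this dimension
        trial = e.copy()
        trial[j] = gbest[j]
        f_trial = objective(trial)
        if f_trial < f_e:               # inherit gbest's value on dimension j
            e, f_e = trial, f_trial
    return e
```

For example, on the 2D sphere function with pbest = [3, 0.1] and gbest = [0.1, 3], the exemplar inherits the first dimension from gbest and keeps the second from pbest, so it is at least as good as both parents.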

3. The Proposed Method

3.1. Motivation of the Work

In the standard GWO, other wolves update their positions according to the three dominant wolves in the leading hierarchy. These three wolves provide their information about the potential location of the prey, and their positions are closer to the global optimum or local optimum in the search space. However, when alpha, beta, and delta are located in conflicting directions, an individual may not obtain better knowledge about the promising area, as demonstrated in Figure 2. Thus, it is of great importance to protect the potential helpful knowledge of the leading hierarchy and improve the information utilization of the population.

3.2. DLS for the GWO

Inspired by the DLS in PSO [52], in this paper, the DLS is employed in the leading hierarchy of the GWO to protect the potentially helpful knowledge about the prey's location and guide the individuals more efficiently. In the DLS, delta learns from both alpha and beta dimension by dimension to construct an exemplar wolf; in this way, the excellent information of the three wolves can be passed to the exemplar wolf and to the other wolves in the pack. Figure 3 illustrates the process of the DLS. Suppose that the 4D sphere function is the objective function of a minimization problem, with its global minimum at the origin, and that Xα, Xβ, and Xδ are the positions of alpha, beta, and delta, respectively. The numbers filled in red, blue, green, and yellow in Figure 3 are the values of the three dominant wolves and the exemplar wolf at the first, second, third, and fourth dimensions, respectively. Initially, the position vector of the exemplar wolf is set to Xe = Xδ, and f(Xe) is calculated; then, two temporary vectors Xe1 and Xe2 are set for Xe at each dimension during the process of the DLS, which can be expressed as follows: (1) For dimension 1: Xe1 and Xe2 are obtained by replacing the first dimension of Xe with that of Xα and Xβ, respectively; f(Xe1) is the smallest; thus, Xe is updated to Xe1 and learns from alpha at this dimension. (2) For dimension 2: neither f(Xe1) nor f(Xe2) is smaller than f(Xe); thus, Xe remains unchanged and keeps the value of delta at this dimension. (3) For dimension 3: f(Xe2) is the smallest; thus, Xe is updated to Xe2 and learns from beta at this dimension. (4) For dimension 4: f(Xe1) is the smallest; thus, Xe is updated to Xe1 and learns from alpha at this dimension.

After the DLS, the exemplar wolf Xe learns from alpha at the first and the fourth dimensions, from beta at the third dimension, and from delta at the second dimension. Hence, the final Xe is constructed by combining Xδ with the dimensions learned from Xα and Xβ, which indicates that the exemplar wolf is not worse than the three dominant wolves. To implement the DLS in the GWO, we substitute the best three wolves in the standard GWO with the exemplar wolf to update the positions of the other wolves. The improved equation is given by

De = |C · Xe(t) − X(t)|, X(t + 1) = Xe(t) − A · De (12)

where X is the position vector of a wolf, Xe is the position vector of the exemplar wolf, and De is an intermediate variable. A and C can be obtained by equations (3) and (4), respectively. As illustrated by Figure 3, before the value of the exemplar wolf at the current dimension is updated, f(Xe) is compared with the fitness values of the two temporary vectors Xe1 and Xe2. Xe1 and Xe2 are set by substituting the value of Xe at the current dimension with the value of Xα and Xβ, respectively. If Xe1 or Xe2 is better than the exemplar wolf, the value of the exemplar wolf at the current dimension is updated. Otherwise, the exemplar wolf remains unchanged and continues the process of the DLS at the next dimension. Therefore, the exemplar wolf learns only from those dimensions of the three dominant wolves that can help improve its fitness value, which guarantees that the exemplar wolf will not be degraded when alpha, beta, and delta are located in conflicting directions, and hence improves the utilization of the population knowledge. The flowchart of the DLS is illustrated in Figure 4.

3.3. Trial Solutions Based on Levy Flight

For an SI-based algorithm, exploration and exploitation are performed simultaneously. Exploration discovers more promising areas in the search space, and exploitation focuses on the current optimal areas. Hence, it is important to keep an appropriate balance between the two. In the DLS, each grey wolf learns from the exemplar wolf; this leads the population to converge towards the exemplar wolf, which strengthens the exploitation ability of the algorithm but potentially causes premature convergence as well. To improve the exploration performance of the algorithm, the method proposed by Mantegna [54] to generate Levy flights is utilized. In comparison to the Gaussian distribution, which typically generates small steps, the Levy distribution occasionally generates long steps, which is helpful for exploration. The 2D and 3D trajectories drawn from the Levy distribution are illustrated in Figure 5.

After a wolf updates its position X(t + 1) according to the exemplar wolf by equation (12), a trial solution Xtrial is obtained by Levy flight using the following formula:

Xtrial(t + 1) = X(t + 1) + rand(1, d) ⊗ Levy(β) (13)

where d is the dimension size, rand(1, d) is a randomly generated vector with the size of 1 × d, ⊗ denotes the element-wise product, and Levy(β) is the Levy flight function, which is given by

Levy(β) = (u · σ) / |v|^(1/β) (14)

where u and v are random values in the interval [0, 1] and β is a constant set to 1.5. σ can be expressed as

σ = {Γ(1 + β) · sin(πβ/2) / [Γ((1 + β)/2) · β · 2^((β−1)/2)]}^(1/β) (15)

Then, the objective function value of the trial position Xtrial is compared with that of the updated position X(t + 1), and the position with the smaller value is preserved, which can be expressed by

X(t + 1) = Xtrial(t + 1) if f(Xtrial(t + 1)) < f(X(t + 1)); otherwise, X(t + 1) remains unchanged. (16)
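The Levy-flight trial step and the greedy selection described above can be sketched as follows. We follow the text in drawing u and v from [0, 1]; Mantegna's original formulation draws them from normal distributions, so treat this as an assumption-laden illustration with our own function names:

```python
import numpy as np
from math import gamma, sin, pi

def sphere(x):
    """Sum-of-squares test objective, minimum 0 at the origin."""
    return float(np.sum(np.asarray(x) ** 2))

def levy_step(dim, beta=1.5, rng=None):
    """One Levy step per dimension: (u * sigma) / |v|^(1/beta)."""
    rng = rng or np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u, v = rng.random(dim), rng.random(dim)   # u, v drawn from [0, 1) as in the text
    return (u * sigma) / np.abs(v) ** (1 / beta)

def levy_trial(x_new, objective, rng=None):
    """Build a Levy-flight trial around the updated position and keep the
    better of the two positions (greedy selection, minimization)."""
    rng = rng or np.random.default_rng()
    d = len(x_new)
    trial = x_new + rng.random(d) * levy_step(d, rng=rng)
    return trial if objective(trial) < objective(x_new) else x_new
```

Because of the greedy selection, the preserved position is never worse than the position produced by the exemplar-guided update.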

3.4. The Proposed DLGWO Algorithm

By employing the DLS and Levy flight, we propose an enhanced variant of the GWO, named the dimensional learning grey wolf optimizer (DLGWO). The framework of the proposed DLGWO is illustrated in Figure 6. Firstly, the DLS is utilized to protect the potentially useful knowledge of the three dominant wolves; this enhances the search efficiency and reinforces the exploitation ability simultaneously. Secondly, Levy flight is embedded into the algorithm as an effective measure to guarantee population diversity and strengthen the exploration ability. The process of constructing the exemplar wolf is shown in Algorithm 3.

(1) Xe = Xδ
(2) for each dimension j do
(3)  Xe1 = Xe, Xe1(j) = Xα(j)
(4)  Xe2 = Xe, Xe2(j) = Xβ(j)
(5)  Calculate min(f(Xe1), f(Xe2)) and record the corresponding vector
(6)  Replace Xe with the corresponding vector, and if f(Xe) ≤ min(f(Xe1), f(Xe2)), then Xe remains unchanged
(7) end for
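A Python sketch of Algorithm 3, assuming minimization (the function name and the tie-breaking between the two candidates are our own choices):

```python
import numpy as np

def sphere(x):
    """Sum-of-squares test objective, minimum 0 at the origin."""
    return float(np.sum(np.asarray(x) ** 2))

def construct_exemplar_wolf(x_alpha, x_beta, x_delta, objective):
    """Starting from delta, each dimension tries the corresponding value of
    alpha and of beta and keeps whichever of the three candidates has the
    smallest fitness; otherwise the exemplar keeps delta's value."""
    e = np.asarray(x_delta, dtype=float).copy()
    f_e = objective(e)
    for j in range(len(e)):
        cand1 = e.copy(); cand1[j] = x_alpha[j]   # learn from alpha on dim j
        cand2 = e.copy(); cand2[j] = x_beta[j]    # learn from beta on dim j
        f1, f2 = objective(cand1), objective(cand2)
        if f1 <= f2 and f1 < f_e:
            e, f_e = cand1, f1
        elif f2 < f1 and f2 < f_e:
            e, f_e = cand2, f2
        # otherwise the exemplar stays unchanged on this dimension
    return e
```

On a 4D sphere function with suitably placed leaders, the exemplar inherits dimensions from alpha, beta, and delta in exactly the pattern of the worked example of Figure 3.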
3.5. Analysis of Computational Complexity

The computational costs of the standard GWO include initialization O(N), fitness evaluation at each iteration O(N), selecting the dominant wolves O(N), and position updating O(N × D), where N and D are the population size and the dimension size, respectively. For the DLGWO, the additional costs occur when the dimensional learning process is utilized, in which the position vector of the exemplar wolf is constructed by comparing f(Xe) with the smaller of f(Xe1) and f(Xe2). Since the position vector of the exemplar wolf is updated and recorded during this process, employing the DLS requires at most 2D additional fitness evaluations in each iteration. From the above analysis, the computational complexity of the standard GWO and the DLGWO is at the same level.
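Putting the pieces together, an end-to-end sketch of the DLGWO loop (DLS exemplar construction, the equation (12) update, and a Levy-flight trial with greedy selection) might look as follows. This is our reading of the method, not the authors' implementation, and all names and defaults are ours:

```python
import numpy as np
from math import gamma, sin, pi

def sphere(x):
    """Sum-of-squares test objective, minimum 0 at the origin."""
    return float(np.sum(np.asarray(x) ** 2))

def dlgwo(objective, dim=5, n_wolves=20, max_iter=100, lb=-10.0, ub=10.0,
          beta=1.5, seed=0):
    rng = np.random.default_rng(seed)
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    X = rng.uniform(lb, ub, (n_wolves, dim))
    fit = np.apply_along_axis(objective, 1, X)
    for t in range(max_iter):
        order = np.argsort(fit)
        x_a, x_b = X[order[0]].copy(), X[order[1]].copy()
        e = X[order[2]].copy()                        # exemplar starts from delta
        f_e = objective(e)
        for j in range(dim):                          # DLS, dimension by dimension
            for leader in (x_a, x_b):
                cand = e.copy(); cand[j] = leader[j]
                f_c = objective(cand)
                if f_c < f_e:
                    e, f_e = cand, f_c
        a = 2.0 - 2.0 * t / max_iter
        for i in range(n_wolves):
            r1, r2 = rng.random(dim), rng.random(dim)
            A, C = 2.0 * a * r1 - a, 2.0 * r2
            new = np.clip(e - A * np.abs(C * e - X[i]), lb, ub)  # equation (12)
            u, v = rng.random(dim), rng.random(dim)   # Levy-flight trial solution
            step = (u * sigma) / np.abs(v) ** (1 / beta)
            trial = np.clip(new + rng.random(dim) * step, lb, ub)
            X[i] = trial if objective(trial) < objective(new) else new
        fit = np.apply_along_axis(objective, 1, X)
    return X[np.argmin(fit)]
```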

4. Experimental Verification and Analysis

In this section, 4 experiments are carried out and presented. First, the efficacy of the DLS and Levy flight in the proposed algorithm is verified. Then, we perform experiments to verify the capacity of the DLGWO on functions of different dimensions. After that, the proposed DLGWO is compared with the GWO and its 6 promising variants. Finally, the DLGWO is examined against other well-established metaheuristics. A widely utilized set of benchmark functions is employed [28, 30, 40, 48, 55], and detailed information about the benchmark functions can be found in Table 1. As presented in Table 1, the test set includes 7 unimodal functions (F1–F7), 6 multimodal functions (F8–F13), and 10 fixed-dimension multimodal functions (F14–F23). A unimodal function has only one global optimal solution; thus, these functions are suitable for evaluating the local exploitation capability of an algorithm. The multimodal functions have several local optimal solutions besides the global optimal solution and are thus utilized to challenge the global exploration ability and the capacity of avoiding local optima. The fixed-dimension multimodal functions contain a global optimum, local optima, and many different characteristics such as rotation and shift, and are used to examine the ability of an algorithm when handling complicated cases. Figure 7 demonstrates the landscapes of three benchmark functions. The comparison indexes include the mean values (Mean), the standard deviations (SD), and the rank (Rank) of the average best result for each method. Besides, to check whether the improvements of the DLGWO over the other algorithms are significant, Wilcoxon rank-sum tests at a 0.05 significance level are also utilized. The Wilcoxon rank-sum test is a paired test that checks for significant differences between two algorithms, where "+/=/−" means that the proposed algorithm is significantly better than, similar to, or significantly worse than the comparison algorithm.
In addition, all the experiments are implemented using MATLAB R2016a and are run on a CPU Core i5-4210U, 4GB RAM, with Windows 10 operating system.

4.1. Effect of Different Strategies

Before comprehensive evaluations, it is essential to investigate the effect of each strategy on the performance of the proposed algorithm. As mentioned above, the DLGWO algorithm consists of two main improvement strategies: the DLS and Levy flight. In this part, the effectiveness of these two strategies is validated. For this purpose, a comparison between the DLGWO and its variants is conducted. The variant adopting the DLS without Levy flight is denoted as DLSGWO, and the variant employing Levy flight without the DLS is denoted as LFGWO.

To more intuitively measure the exploration and exploitation abilities of the DLGWO and its variants, we compare the search history, trajectories of the first search agent at the first dimension, and population diversity of the DLGWO, DLSGWO, LFGWO, and the standard GWO. The population diversity is calculated using the following equations:

c_j = (1/N) Σ_{i=1..N} x_ij (17)
Diversity = (1/N) Σ_{i=1..N} sqrt(Σ_{j=1..D} (x_ij − c_j)²) (18)

where N is the population size, D is the dimension of the search space, x_ij denotes the jth dimension of the ith particle, and c_j denotes the jth dimension of the center position of the population.
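The diversity measure described above, i.e., the mean distance of the search agents from the population center, can be computed as follows (a common formulation; the paper's exact normalization may differ):

```python
import numpy as np

def population_diversity(X):
    """Mean Euclidean distance of the search agents from the population
    center, computed per dimension from the population mean."""
    X = np.asarray(X, dtype=float)
    center = X.mean(axis=0)                 # per-dimension center position
    return float(np.mean(np.linalg.norm(X - center, axis=1)))
```

A fully collapsed population has diversity zero; spreading the agents apart increases the value.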

The obtained results are exhibited in Figure 8. These results record the search history, trajectories of the first search agent at its first dimension, and diversity of solutions based on 20 search agents with a maximum of 30000 function evaluations, on two unimodal functions and two multimodal functions from Table 1. For clarity, the search history and trajectories are displayed in two dimensions. These plots are recorded in the same way as demonstrated in the original work of the GWO.

From the search history and trajectory plots in Figure 8, it can be seen that the DLGWO and LFGWO exhibit a better-distributed scatter plot in the initial stage of the iterations compared with the DLSGWO. These two methods have more extensive coverage of the unexplored areas of the search space that are not covered efficiently by the DLSGWO. The blue and yellow points away from the center of the search space are generated mostly by Levy flight. Then, after the initial iterations, the search agents are guided towards areas with more high-quality solutions (in these four functions, the global optimum is at the center of the search space). In this process, the exemplar wolf constructed in the DLSGWO carries the information about potential areas with excellent solutions and leads the population to the center and its surrounding areas. Therefore, more search agents are guided towards the vicinity of the center by the DLSGWO than by the other methods, which verifies the exploitation ability of the DLSGWO. By contrast, the LFGWO cannot guarantee strong exploitation in later iterations; it shows a lower concentration of solutions around the optimum due to the randomness introduced by Levy flight. Furthermore, the DLGWO is able to obtain a fine balance: it exhibits a strong exploration capacity in the initial iterations and a good exploitation capacity in later iterations, owing to the incorporation of the two strategies. As for the diversity curves in Figure 8, the LFGWO obtains the highest diversity, and the DLGWO is ranked second. From the diversity results, it can be observed that the DLSGWO maintains the smallest diversity and, consequently, has a high convergence speed. As expected, the LFGWO maintains the highest diversity during the initial stage, since Levy flight can significantly improve the exploration of the search space, providing a wide range of variety in solutions.
Finally, the diversity of the DLGWO is lower than that of the LFGWO but higher than those of the standard GWO and the DLSGWO. This is because of the appropriate balance between exploitation and exploration provided by the interaction and cooperation of the DLS and Levy flight. Therefore, the diversity comparison results also verify our expectation that the DLS is mainly responsible for local exploitation, while Levy flight is mainly responsible for global exploration. According to the results and analysis above, the incorporation of the DLS and Levy flight can effectively improve the balance between the exploratory and exploitative performance of the DLGWO, while the DLSGWO and LFGWO, each with only one of these strategies, cannot obtain such a balance individually.

To further investigate the utility and efficacy of the DLS and Levy flight, we compare the optimization results on F1–F13 of the DLGWO with both DLS and Levy flight, the DLSGWO with the DLS alone, and the LFGWO with Levy flight alone. The maximum number of fitness evaluations is set to 300000. Table 2 presents the statistical results for the unimodal functions (F1–F7) and the multimodal functions (F8–F13), respectively. From the numerical results shown in Table 2, the DLSGWO ranks first, followed by the DLGWO and LFGWO, on the unimodal functions; the LFGWO ranks first, followed by the DLGWO and DLSGWO, on the multimodal functions. It can be drawn from the experimental results that the DLS guarantees, in each generation, that the outstanding information of the three dominant wolves is inherited by each wolf in the swarm through the exemplar wolf, which strongly enhances the convergence performance. However, the diversity of the DLSGWO is relatively low, and it is easily trapped in local optima; thus, it is challenging for the DLS alone to deal with multimodal functions. Levy flight improves the diversity of the population and exhibits better performance on multimodal functions with many local optima. The experimental results verify our design expectation that the DLS concentrates on local exploitation, while Levy flight focuses on global exploration. It is the interaction and cooperation of the DLS and Levy flight that enable the competitive performance of the DLGWO on both unimodal and multimodal functions.

5.2. The Impact of Problem Dimension

Any efficient optimizer should strike a fine tradeoff between minimizing computational cost and maximizing accuracy. From the computational complexity analysis, the cost of applying the DLS increases with the dimension of the problem. To better evaluate the impact of dimension on the performance of the proposed method, the DLGWO and the standard GWO are run on from Table 1 with dimensions of 10, 30, 100, and 200. All other conditions are kept the same, and the maximum number of fitness evaluations is 300000. The statistical results for each dimension are recorded in Table 3, where boldface indicates the best experimental results. It can be seen from Table 3 that the DLGWO outperforms the standard GWO in all dimensions with a promising and stable performance. As the dimension increases, the exemplar wolf carries a more comprehensive knowledge of the population information across dimensions and can therefore guide the swarm more efficiently. However, it can also be observed that, compared with the proposed method, the standard GWO obtains more accurate results on , , and when the dimension increases to 200. The reason is that the extra fitness evaluations required by the DLGWO also increase as the dimension rises, which can limit its performance. Therefore, reducing the extra cost of constructing the exemplar wolf to obtain a better tradeoff between performance and computational cost remains an interesting and challenging direction for future work.

5.3. Comparison of the DLGWO with the GWO and Its Variants

In this section, the proposed DLGWO is compared with the GWO and several of its promising variants. The explanations and parameter settings for these GWO variants are given in Table 4. The test functions utilized in this part are all classical benchmark problems from Table 1. Each problem is independently executed 30 times, with 40 search agents and a maximum of 300000 fitness evaluations. The statistical results are provided in Tables 5–9.

5.4. Algorithm Accuracy Analysis

The numerical results on the unimodal functions () are presented in Table 5. These problems are suitable for testing the exploitation ability of algorithms. From the obtained results, DLGWO achieves the best rank for , , , and . On , IGWO is the best optimizer with a competitive accuracy of 8.42E − 192, whereas DLGWO takes second place with an accuracy of 6.53E − 182. On , DLGWO is not as competitive as GWOCS, MGWO, and IGWO and comes in fourth place. The global optimum of lies in a narrow region that is quite challenging for most optimizers to locate; thus, the results of the DLGWO and the other algorithms for are satisfactory. On , RWGWO exhibits slightly higher search accuracy than the DLGWO, but the DLGWO also provides an outstanding accuracy of 4.28E − 07. In the final average rank, DLGWO is the best optimizer with the best exploitation ability, followed by IGWO, GWOCS, MGWO, learnGWO, SOGWO, RWGWO, and GWO.

The numerical results on the multimodal functions () are presented in Table 6. In contrast to unimodal functions, multimodal functions are suitable for examining the exploration ability, since most of them contain numerous local optima that may lead to premature convergence. On , it is difficult for the standard GWO to locate the global optimum. The reason is that this problem has many deep local optima far away from the global optimum; once alpha, beta, and delta in the standard GWO are all trapped in a deep local optimum, they can hardly escape and may lead more search agents into it. From Table 6, DLGWO exhibits a better ability to avoid local optima and hence obtains a higher search accuracy of −8.35E + 03. This can be ascribed to the Levy flight strategy in the DLGWO, which improves the population diversity of the algorithm. All the methods can locate the global optimum for and . On , IGWO obtains the highest accuracy of 4.44E − 15, followed by DLGWO with an accuracy of 4.94E − 15. On , DLGWO has an apparent advantage over the other GWO variants, with an excellent accuracy of 1.75E − 08. On , DLGWO takes first place, and RWGWO also obtains a competitive accuracy of 3.57E − 07. On the multimodal problems, DLGWO again exhibits the best performance; RWGWO is the second most competitive method, followed by IGWO, learnGWO, SOGWO, GWOCS, MGWO, and GWO. Over the unimodal and multimodal functions together, DLGWO obtains the best rank on 9 out of 13 problems and achieves the global optimum on 4 problems, i.e., , , , and . It ranks first statistically among all the algorithms on both unimodal and multimodal functions, which demonstrates the efficiency of the DLS and Levy flight and the strength of their interaction and cooperation.

Table 7 presents the numerical results on the fixed-dimension multimodal functions (). These functions evaluate the efficiency of the algorithms on problems with different characteristics such as rotation and shift. From the results in Table 7, DLGWO performs best on 4 out of 10 functions, i.e., , , , and . On , , , , and , DLGWO reaches the same search accuracy as the algorithms ranking first (RWGWO for , , and , SOGWO for , and MGWO for ), but it is not as robust as those algorithms and has a higher standard deviation. On , SOGWO obtains the global optimum, followed by DLGWO with a search accuracy of −3.28E + 00. On , DLGWO has the most robust performance with the smallest standard deviation of 5.04E − 07. In general, DLGWO shows competitive and robust performance on the fixed-dimension functions, which indicates that the two strategies implemented in the DLGWO are effective in different ways when dealing with functions with different features.

Tables 8 and 9 give the overall Wilcoxon rank-sum test results of the DLGWO and each algorithm. From Table 9, DLGWO outperforms the other comparative methods in different problems and obtains results of 20/3/0 vs. GWO, 14/6/3 vs. RWGWO, 19/4/0 vs. learnGWO, 18/4/1 vs. GWOCS, 14/6/3 vs. IGWO, 18/4/1 vs. SOGWO, and 16/5/2 vs. MGWO.
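Pairwise win/tie/loss counts of this kind are typically derived from the Wilcoxon rank-sum test applied to the 30 per-run results of each algorithm pair. Below is a minimal self-contained sketch using the normal approximation; the omission of a tie correction and the 0.05 significance level are simplifying assumptions.

```python
import math

def rank_sum_z(x, y):
    """Wilcoxon rank-sum z statistic (normal approximation, ties ignored).

    Returns the standardized rank sum of x within the pooled sample;
    |z| > 1.96 roughly corresponds to significance at the 0.05 level.
    """
    pooled = sorted((v, 0 if i < len(x) else 1)
                    for i, v in enumerate(list(x) + list(y)))
    # Sum the ranks (1-based) of the values that came from x.
    w = sum(rank for rank, (v, src) in enumerate(pooled, start=1) if src == 0)
    n1, n2 = len(x), len(y)
    mu = n1 * (n1 + n2 + 1) / 2                      # mean rank sum under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)  # its standard deviation
    return (w - mu) / sigma

# Two clearly separated samples give a large |z| (significant difference).
z = rank_sum_z(range(1, 11), range(11, 21))
```

Running such a test per benchmark function and counting significant wins, non-significant ties, and significant losses yields entries like "20/3/0".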

5.5. Convergence Behavior Analysis

To compare the convergence behavior, Figures 9 and 10 present the convergence curves of the DLGWO and the other algorithms on . It can be observed that the DLGWO has the fastest convergence speed on and , obtaining the global optimum within 100000 function evaluations. On , all the methods have similar convergence trends, but IGWO has the highest search accuracy, followed by DLGWO. On , GWOCS, MGWO, and IGWO can exploit the search space more successfully than the DLGWO, which takes fourth place. The convergence rate on is similar for all the algorithms: they converge quickly during the first few iterations but then stagnate. On and , DLGWO exhibits the fastest convergence speed and the highest convergence accuracy, and the same trend can be observed on . On and , DLGWO and the other GWO variants can all locate the global optimum. On , DLGWO exhibits a slightly higher convergence rate than the other optimizers. On and , DLGWO and RWGWO are the top two optimizers; they continue to explore the search space efficiently where the other methods stagnate prematurely. According to these results, the DLS and Levy flight utilized in the proposed DLGWO algorithm generally improve its convergence performance.

5.6. Comparison of the DLGWO with Other Metaheuristics

In this section, the performance of the DLGWO is compared with other well-established metaheuristics, which can be divided into two categories: (1) recently developed algorithms, including TLBO [19], SSA [20], SCA [21], WOA [22], and BOA [23]; (2) high-performance algorithms, including LSHADE [57] (champion optimizer of the CEC 2014 test) and LSHADE-cnEpSin [58] (champion optimizer of the CEC 2017 test). The parameter settings for these algorithms are given in Table 10. The experimental conditions are the same as in the previous section: all algorithms are run over 30 independent executions with 40 search agents and a maximum of 300000 function evaluations.

5.7. Algorithm Accuracy Analysis

Table 11 provides the comparison results on the unimodal functions (). DLGWO attains the best rank on 4 functions, i.e., , , , and , and obtains the global optimum on and . LSHADE achieves the best rank on 5 functions, one more than the DLGWO, and finds the global optimum on , , , and , two more than the DLGWO. LSHADE-cnEpSin achieves the best rank on 4 functions, the same as the DLGWO, but also achieves the global optimum on , , , and , two more than the DLGWO. On , DLGWO obtains the highest accuracy of 9.57E − 148, a clear advantage over LSHADE and LSHADE-cnEpSin. LSHADE and LSHADE-cnEpSin overtake the other methods on with apparently higher search accuracy, and DLGWO comes in third place. Compared with the remaining metaheuristics, i.e., TLBO, SSA, SCA, WOA, and BOA, DLGWO exhibits distinguished performance, since its average rank is smaller than those of these algorithms. To sum up, the proposed DLGWO shows strong performance on the unimodal functions, although its exploitation ability falls short of the CEC 2014 champion LSHADE and the CEC 2017 champion LSHADE-cnEpSin. In the final rank, LSHADE-cnEpSin and LSHADE are the top two optimizers, with DLGWO ranking third, followed by TLBO, WOA, BOA and SSA (tied for sixth place), and SCA.

With respect to the multimodal functions () shown in Table 12, WOA outperforms the other methods on with the highest accuracy of −1.23E + 04, followed by the proposed DLGWO with an accuracy of −8.29E + 03. Among the 8 algorithms, 5 can achieve the global optimum on , i.e., SCA, WOA, LSHADE, LSHADE-cnEpSin, and DLGWO. On , LSHADE-cnEpSin has the best result, but DLGWO also obtains a competitive accuracy of 4.64E − 15. The global optimum of can be achieved by 6 methods, i.e., TLBO, SCA, WOA, LSHADE, LSHADE-cnEpSin, and DLGWO. TLBO takes first place on with the highest search precision of 2.45E − 24, followed by LSHADE-cnEpSin and DLGWO. On , SSA is the most efficient optimizer. According to the average rank and final rank of these algorithms on the multimodal problems, DLGWO is inferior to the CEC 2017 champion LSHADE-cnEpSin, but outperforms the CEC 2014 champion LSHADE by a slight margin and is superior to the other algorithms. Hence, the experimental results demonstrate the exploration capacity of the proposed DLGWO.

Table 13 gives the statistical results on the fixed-dimension multimodal functions (). According to the results in Table 13, SSA, SCA, LSHADE-cnEpSin, and DLGWO obtain the highest accuracy on , but SSA has the smallest standard deviation. On , LSHADE-cnEpSin achieves the best result with the highest search accuracy of 2.11E − 07, followed by LSHADE and BOA, with DLGWO ranking fourth. On , all algorithms can obtain or locate close to the global optimum (−1.0316 for , 0.398 for , 3.00 for , and −3.86 for ), but they have different standard deviations, which indicates different performance stability. TLBO attains the smallest standard deviations for , , and . On , LSHADE and LSHADE-cnEpSin have the most robust performance and achieve the smallest standard deviations. LSHADE obtains the best result on , while DLGWO also provides a competitive accuracy of −3.28E + 00. On , WOA and DLGWO are the best-performing algorithms; both achieve the highest search precision of −1.01E + 01, but the standard deviation of WOA is slightly smaller. On and , DLGWO exhibits a noticeable advantage in terms of search precision and stability. From the average rank and final rank of these methods, DLGWO is the best optimizer for the fixed-dimension multimodal functions, with LSHADE-cnEpSin and LSHADE ranking second and third, followed by WOA and TLBO (tied for fourth place), SSA, BOA, and SCA. Moreover, from Table 13, it can also be observed that DLGWO exhibits a larger standard deviation on compared with several non-GWO competitors. This unstable performance can be attributed to a defect in the original GWO design, namely a search bias towards the origin of the coordinate system. When the global optimum of a function is shifted away from the origin, this defect can affect the GWO and its modified variants [59].

Tables 14 and 15 provide the overall Wilcoxon rank-sum test results between the DLGWO and each metaheuristic. From Table 14, DLGWO generally shows strong performance in comparison with the non-GWO methods. Even compared with the CEC 2017 champion LSHADE-cnEpSin and the CEC 2014 champion LSHADE, DLGWO obtains competitive results of 7/8/8 and 7/10/6, respectively. With regard to the remaining optimizers, it shows superior performance and achieves results of 13/4/6 vs. TLBO, 15/1/7 vs. SSA, 19/4/0 vs. SCA, 12/7/4 vs. WOA, and 18/4/1 vs. BOA.

5.8. Convergence Behavior Analysis

The convergence curves of the DLGWO and the other metaheuristics on 4 test functions are presented in Figure 11. As shown in Figure 11, DLGWO has the fastest convergence speed on , followed by LSHADE-cnEpSin and LSHADE. WOA does not converge quickly in the initial stage of the iterations, but it accelerates and converges quickly in the later iterations. The other methods, SSA, SCA, and BOA, converge prematurely and cannot find solutions with high accuracy. On , LSHADE-cnEpSin and LSHADE have the highest convergence accuracy, although they do not converge quickly in the first half of the iterations. DLGWO is inferior to these two algorithms in terms of search accuracy, but it shows favorable convergence behavior over the whole iterative course and a strong ability to avoid being trapped in local optima. By contrast, WOA, SSA, SCA, and BOA suffer from premature stagnation. With regard to , DLGWO has the fastest convergence speed and achieves the global optimum at a very early stage of the iterations. On , the convergence accuracy of the DLGWO is slightly lower than that of LSHADE-cnEpSin and LSHADE, but DLGWO has the second fastest convergence rate, inferior only to TLBO. From the above analysis, DLGWO is effective with regard to both convergence accuracy and speed compared with the other metaheuristics.

6. DLGWO for Real-World Optimization Problems

In this section, the proposed DLGWO is implemented on three classical engineering optimization problems, tension/compression spring design, welded beam design, and pressure vessel design, to verify its performance in solving real-world problems. These three problems are often employed as classical constrained optimization problems [60, 61]. For a fair comparison, the experiments on the DLGWO and the other optimizers mentioned above are conducted on the same platform based on 30 independent executions with 40 search agents and 100000 function evaluations.
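The constraint-handling scheme used in these experiments is not detailed; a common and simple choice in the metaheuristics literature is a static penalty function, sketched below. The penalty coefficient 1e6 is an illustrative assumption, not a value taken from the paper.

```python
def penalized(f, constraints, x, coeff=1e6):
    """Static penalty method for constraints of the form g(x) <= 0.

    Each constraint violation is squared and added to the objective,
    so infeasible solutions look worse to a minimizing optimizer.
    The coefficient is an illustrative choice, not the paper's.
    """
    violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
    return f(x) + coeff * violation

# Toy example: minimize x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0.
f = lambda x: x * x
g = lambda x: 1.0 - x
feasible = penalized(f, [g], 2.0)    # penalty term is zero
infeasible = penalized(f, [g], 0.5)  # heavily penalized
```

With this wrapper, the three constrained design problems below can be fed to any of the unconstrained optimizers compared in this paper.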

6.1. Tension/Compression Spring Design

This is a well-known optimization problem. The structure of a tension/compression spring is shown in Figure 12. The objective is to minimize the weight of a spring subject to constraints on shear stress, surge frequency, and minimum deflection. Three decision variables are involved: the wire diameter (), the mean coil diameter (), and the number of active coils ().

The mathematical formulation of this problem is as follows:
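With d the wire diameter, D the mean coil diameter, and N the number of active coils, the standard statement of this benchmark, as commonly given in the literature, is:

```latex
\begin{aligned}
\min_{d,\,D,\,N} \quad & f = (N + 2)\,D\,d^{2} \\
\text{s.t.} \quad
& g_{1} = 1 - \frac{D^{3} N}{71785\, d^{4}} \le 0, \\
& g_{2} = \frac{4D^{2} - dD}{12566\,(D d^{3} - d^{4})} + \frac{1}{5108\, d^{2}} - 1 \le 0, \\
& g_{3} = 1 - \frac{140.45\, d}{D^{2} N} \le 0, \\
& g_{4} = \frac{D + d}{1.5} - 1 \le 0, \\
& 0.05 \le d \le 2.00, \quad 0.25 \le D \le 1.30, \quad 2.00 \le N \le 15.00 .
\end{aligned}
```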

The numerical results obtained by the DLGWO are compared with those of other optimization algorithms such as GWO, learnGWO, GSA, SCA, MVO, HHO, and LSHADE, and the comparison results are listed in Table 16. As shown in Table 16, the minimum weight for the tension/compression spring is achieved by LSHADE. Although the DLGWO is inferior to LSHADE, it is the second best optimizer for this problem, with a promising optimization performance.

6.2. Welded Beam Design

The goal of this problem is to obtain the minimum cost of the welded beam subject to constraints including bending stress (), shear stress (), buckling load (), beam deflection (), and other side constraints. The four design variables related to the problem are weld thickness (), length of the beam attached to the weld (), height of the beam (), and thickness of the beam (). The schematic of this problem is illustrated in Figure 13.

This problem can be mathematically expressed as follows:
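Denoting the weld thickness by h, the weld length by l, the beam height by t, and the beam thickness by b, the standard statement of this benchmark is as follows; the full auxiliary expressions for the shear stress, deflection, and buckling load follow the common literature formulation and are only summarized here:

```latex
\begin{aligned}
\min_{h,\,l,\,t,\,b} \quad & f = 1.10471\, h^{2} l + 0.04811\, t b\, (14.0 + l) \\
\text{s.t.} \quad
& g_{1} = \tau(\mathbf{x}) - \tau_{\max} \le 0, \qquad
  g_{2} = \sigma(\mathbf{x}) - \sigma_{\max} \le 0, \\
& g_{3} = h - b \le 0, \qquad
  g_{4} = \delta(\mathbf{x}) - \delta_{\max} \le 0, \\
& g_{5} = P - P_{c}(\mathbf{x}) \le 0,
\end{aligned}
```

with P = 6000 lb, L = 14 in, tau_max = 13600 psi, sigma_max = 30000 psi, and delta_max = 0.25 in; the bending stress is sigma(x) = 6PL/(b t^2), the deflection is delta(x) = 4PL^3/(E t^3 b), and tau(x) and P_c(x) are the usual combined weld shear stress and Euler buckling expressions.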

Table 17 gives the numerical results obtained by the DLGWO and the other methods such as GWO, RWGWO, MFO, KH, BOA, SSA, MPA, and JADE. As demonstrated by Table 17, the minimum cost for the welded beam design problem is obtained by the DLGWO.

6.3. Pressure Vessel Design Problem

The pressure vessel design problem was proposed by Kannan and Kramer [68], and the goal is to minimize the cost of a cylindrical vessel, which is shown in Figure 14.

The design variables are the head thickness (), the shell thickness (), the shell length (), and the inner radius (). The mathematical model can be expressed as follows:
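Writing x1 for the shell thickness, x2 for the thickness of the hemispherical head, x3 for the inner radius, and x4 for the shell length, the standard literature statement of this benchmark is:

```latex
\begin{aligned}
\min_{\mathbf{x}} \quad & f = 0.6224\, x_{1} x_{3} x_{4} + 1.7781\, x_{2} x_{3}^{2}
  + 3.1661\, x_{1}^{2} x_{4} + 19.84\, x_{1}^{2} x_{3} \\
\text{s.t.} \quad
& g_{1} = -x_{1} + 0.0193\, x_{3} \le 0, \qquad
  g_{2} = -x_{2} + 0.00954\, x_{3} \le 0, \\
& g_{3} = -\pi x_{3}^{2} x_{4} - \tfrac{4}{3}\pi x_{3}^{3} + 1296000 \le 0, \qquad
  g_{4} = x_{4} - 240 \le 0 .
\end{aligned}
```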

The optimization results for the pressure vessel design problem are listed in Table 18. The compared algorithms are GWO, EEGWO, CMA-ES, TSO, CSA, EO, TLBO, and DA. From Table 18, the DLGWO provides an excellent design with lower cost than the other algorithms.

7. Conclusion

This paper introduces the DLS into the leading hierarchy of the GWO algorithm. In the DLS, the delta wolf learns from the corresponding dimensions of the alpha and beta wolves to construct an exemplar wolf. Instead of learning from the three dominant wolves directly in each iteration, the search agents learn from the exemplar wolf, which gathers the excellent information of the three dominant wolves. This improves the efficiency with which the GWO utilizes population information. Moreover, Levy flight is adopted to reinforce the global exploration ability. Based on these two strategies, the DLGWO algorithm is proposed. To verify the validity of the proposed method for solving global optimization problems, the DLGWO is compared with several variants of the GWO and other metaheuristics on 23 widely utilized benchmark functions. The experimental results demonstrate that the DLGWO exhibits promising search performance and a good balance between exploitation and exploration. Furthermore, the DLGWO is applied to three engineering problems, and the statistical results verify that it is an efficient and stable optimization method.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (Grant no. 51675228) and Program of Science and Technology Development Plan of Jilin Province, China (Grant nos. 20180101052JC and 20190303020SF).