Abstract

The grey wolf optimizer (GWO) is a recently developed population-based optimization technique inspired by the hunting mechanism of grey wolves. The GWO algorithm has some distinct advantages, such as few algorithm parameters, strong global optimization ability, and ease of implementation on a computer. However, a key challenge is that the GWO is prone to stagnation in local optima in some cases. This drawback may be attributed to an insufficiency in its position-updated equation, which disregards the positional interaction information among the three best grey wolves (i.e., the three leaders). This paper proposes an improved version of the GWO algorithm that is based on a dynamically dimensioned search, a spiral walking predation technique, and positional interaction information (referred to as the DGWO). In addition, a nonlinear control parameter strategy, i.e., a control parameter that increases nonlinearly with the number of iterations, is designed to balance the exploration and exploitation of the GWO algorithm. The experimental results for 23 general benchmark functions and 3 well-known engineering design applications validate the effectiveness and feasibility of the proposed DGWO algorithm. The comparison results for the 23 benchmark functions show that the proposed DGWO algorithm performs significantly better than the GWO and its improved variants for most benchmarks. The DGWO provides the highest solution precision, strongest robustness, and fastest convergence rate among the compared algorithms in almost all cases.

1. Introduction

The rapid development of artificial intelligence (AI) is largely attributable to the considerable progress of computational intelligence (CI). CI techniques for complex systems mainly fall into two categories [1], i.e., single-solution-based metaheuristics and population-based metaheuristics. Both types of algorithms employ a variety of mechanisms and are designed to solve extremely challenging problems in different complex systems.

Single-solution-based metaheuristics are usually suitable only for specific complex optimization problems because of their single-particle scale and weak coordination capability. A heuristic based on simulated annealing (SA) is designed to solve the machine reassignment problem [2]. The threshold-accepting (TA) metaheuristic is applied to solve the job shop scheduling problem of dehydration plants [3]. The microcanonical annealing (MA) algorithm is proposed for remote sensing image segmentation [4]. The tabu search (TS) metaheuristic is combined with a regenerator-reducing procedure to solve the regenerator location problem [5]. The guided local search (GLS) approach is introduced for multiuser detection in ultra-wideband systems [6]. The dynamically dimensioned search (DDS) is introduced for the automatic calibration of watershed simulation models [7–11].

Compared with single-solution-based algorithms, population-based algorithms have been studied and applied more extensively because of the following three main advantages [11, 12]: a set of trial solutions provides more information to guide the search toward promising areas of the search space; local optima can be more effectively avoided because of the interaction among the trial solutions; and, in terms of exploration ability, population-based heuristic algorithms are superior to single-solution-based heuristic algorithms. The genetic algorithm (GA) is used to address the characterization of hyperelastic materials [12, 13]. Particle swarm optimization (PSO) is applied to improve and evaluate the performance of automated engineering design optimization [13, 14]. Differential evolution (DE) is presented for obstacle avoidance in mobile robots [14, 15]. The dragonfly algorithm (DA) has been improved to train multilayer perceptrons [15, 16]. Shuffled complex evolution (SCE) is designed to optimize the load balancing of gateways in wireless sensor networks [16, 17]. The dolphin echolocation algorithm (DEA) is applied to design a steel frame structure [17, 18]. The bat algorithm (BA) is introduced to optimize the placement of a steel plate shear wall [18, 19], and the artificial bee colony (ABC) algorithm is applied to image steganalysis [19, 20]. The grey wolf optimizer (GWO) is adopted for parameter estimation in surface waves [20, 21].

The GWO is one of the most impressive swarm intelligence algorithms and is the only one based on leadership hierarchy theory; it was introduced by Mirjalili et al. [22]. The GWO algorithm has three advantages [23, 24]: it has universal applicability to many real-life optimization problems; it does not require derivative information in the initial search; and it requires fewer algorithm parameters to be adjusted. These features render it a simple, flexible, adaptable, usable, and stable algorithm [24, 25]. Therefore, since the GWO was proposed, researchers have conducted a considerable amount of in-depth research and developed many applications. Regarding improvements to the GWO algorithm, researchers tend to improve its performance from four aspects [25]: position-updating mechanisms, new control parameters, the encoding scheme of individuals, and the population structure and hierarchy. Typical studies are listed as follows. Mittal et al. [26] used an exponential decay function for the control parameter a to enhance the exploration process in the GWO; however, this algorithm suffers from premature convergence. Kishor and Singh [27] proposed a modified version of the GWO by incorporating a simple crossover operator between two randomly selected individuals; however, this technique has limited capability in solving high-dimensional complex problems. A complex-valued encoding strategy was employed by Luo et al. [28] to substitute the typical real-valued strategy adopted in the standard GWO and to propose a complex-encoded GWO; the main shortcoming of this method is that it also suffers from premature convergence. Yang et al. [29] used an effective cooperative hunting group and a random scout group strategy to propose a novel grouped grey wolf optimizer; this approach employs a complex mechanism. Xu et al. [30] proposed a chaotic dynamic weight grey wolf optimizer (CDGWO), in which a new position-updated equation, formed by employing a chaotic map and dynamic weights, was built to guide the search for potential candidate solutions. Gupta and Deep [23] proposed a random walk grey wolf optimizer (RW-GWO), in which a random walk strategy was used to improve the search ability of the GWO; however, it shows low solution accuracy. In addition, an improved grey wolf optimization (VW-GWO) algorithm based on variable weight strategies and the social hierarchy of the searching positions was presented by Gao and Zhao [31]; however, it employs a complex methodology. In terms of successful applications of the GWO, representative research can be summarized as flow shop scheduling [32], machine learning [33–36], economic load dispatch [37], robotics and path planning [38, 39], channel estimation in wireless communication systems [40], and other applications detailed in References [24, 25]. Theoretical and practical research has shown the potential of the GWO algorithm in real life. However, numerous studies and experimental results have concluded that the optimization performance of the GWO algorithm needs improvement. Specifically, the diversity of trial solutions is hampered by the three best wolves identified in the accumulative search [12]. Many metaheuristics, including the GWO, can easily be trapped in local optima when solving multimodal optimization problems, in which multiple optima exist [41], and the linear control parameter strategy is not an ideal design for balancing exploration and exploitation.
These drawbacks may lead to undesirable optimization performance [27, 42]. In addition, existing research on the GWO algorithm does not discuss improving its performance by considering the positional interaction information among the three leaders (i.e., the first three best wolves). In the actual hunting process, however, better predation efficiency can be obtained only when positional information is communicated among the three leaders. In this paper, positional interaction information refers to the information communicated among the three leaders during predation, as reflected by their relative changes in position. Besides not considering the positional interaction information among the three leaders, existing research does not explore other predation methods, such as spiral walking, which may aid hunting and increase the chance of jumping out of local optima for the GWO algorithm. In summary, the GWO is a strong algorithm but suffers from the abovementioned shortcomings; considering these drawbacks, this paper sets out to improve it.

Based on this analysis, this paper improves the GWO algorithm in the following three respects: a hunting model is built based on spiral walking, the position-updated equation is rebuilt based on the positional interaction information among the three leaders of the grey wolves, and a nonlinear control parameter is designed to replace the linear control parameter of the standard GWO algorithm. The proposed algorithm is tested on 23 classical benchmark problems, the CEC2014 suite, and three well-known engineering optimization problems. The experimental results reveal that the proposed method is robust, efficient, and superior to the compared algorithms.

The remaining sections of the paper are organized as follows. The original GWO algorithm and the DDS are briefly reviewed in Section 2. In Section 3, the dynamically dimensioned search grey wolf optimizer based on the deep search strategy (DGWO) is proposed; the principle of guiding the GWO search with the DDS is detailed, and the position-updated equations based on the deep search strategy and the nonlinear control parameter equation are constructed. Section 4 provides the experimental results and a discussion on a set of well-known test functions. The paper is concluded and future research directions are presented in Section 5.

2. Overview of GWO and DDS

2.1. Standard GWO Algorithm

In this section, the four components of the basic GWO algorithm [22], which mimic the complete hunting process of grey wolves, are described.

2.1.1. Foundation of the Social Hierarchy

In the GWO algorithm, the search is executed under the joint guidance of the first three best grey wolves (i.e., α, β, and δ), and the positions of the grey wolves (solutions) are constantly adjusted under the guidance of these three leaders as the iteration number increases.

2.1.2. Encircling Prey

Grey wolves hunt their prey by encircling it, which is considered wise behavior. To describe this predation behavior with a mathematical model, Mirjalili et al. [22] constructed the following equations:

$$\vec{D} = \left| \vec{C} \cdot \vec{X}_p(t) - \vec{X}(t) \right| \quad (1)$$

$$\vec{X}(t+1) = \vec{X}_p(t) - \vec{A} \cdot \vec{D} \quad (2)$$

$$\vec{A} = 2\vec{a} \cdot \vec{r}_1 - \vec{a} \quad (3)$$

$$\vec{C} = 2 \cdot \vec{r}_2 \quad (4)$$

where $t$ is the current iteration number, $\vec{X}(t)$ is the position vector of the grey wolf at the $t$th iteration, the symbol "·" indicates the dot (componentwise) product, $\vec{X}_p(t)$ is the position vector of the prey at the $t$th iteration, $\vec{D}$ is a distance vector relative to the position of the prey $\vec{X}_p$, $\vec{A}$ and $\vec{C}$ are the coefficient vectors, $\vec{a}$ is a vector whose components are linearly decreased from 2 to 0 over the iterations, and $\vec{r}_1$ and $\vec{r}_2$ are randomly generated vectors whose components lie between 0 and 1.

2.1.3. Hunting

As described in Section 2.1.2, the encircling behavior of the grey wolves provides the leaders of the pack with the necessary position information and forces the prey into promising areas. After the leaders receive the position information about the prey, the next step is to guide the omega wolves to conduct the hunt. To describe the hunting behavior of grey wolves with a mathematical model, it is assumed that $\vec{X}_\alpha$, $\vec{X}_\beta$, and $\vec{X}_\delta$ represent the positions of the α wolf, β wolf, and δ wolf, respectively. The mathematical models for grey wolf hunting are then described as follows:

$$\vec{X}_1 = \vec{X}_\alpha - \vec{A}_1 \cdot \vec{D}_\alpha \quad (5)$$

$$\vec{X}_2 = \vec{X}_\beta - \vec{A}_2 \cdot \vec{D}_\beta \quad (6)$$

$$\vec{X}_3 = \vec{X}_\delta - \vec{A}_3 \cdot \vec{D}_\delta \quad (7)$$

$$\vec{X}(t+1) = \frac{\vec{X}_1 + \vec{X}_2 + \vec{X}_3}{3} \quad (8)$$

$\vec{D}_\alpha$, $\vec{D}_\beta$, and $\vec{D}_\delta$ are calculated using equation (1) as follows:

$$\vec{D}_\alpha = \left| \vec{C}_1 \cdot \vec{X}_\alpha - \vec{X} \right| \quad (9)$$

$$\vec{D}_\beta = \left| \vec{C}_2 \cdot \vec{X}_\beta - \vec{X} \right| \quad (10)$$

$$\vec{D}_\delta = \left| \vec{C}_3 \cdot \vec{X}_\delta - \vec{X} \right| \quad (11)$$

2.1.4. Attacking Prey

In the GWO algorithm, the behavior of grey wolves attacking their prey is controlled by the constantly changing value of the linear control parameter a. According to equation (3), the value of the vector A depends on the parameter a. When the value of a linearly decreases from 2 to 0, the magnitude of the vector A also decreases. When |A| < 1, a grey wolf attacks, and its next position lies between its current position and that of its prey. When |A| > 1, the wolves diverge and search the entire solution space to locate the prey (optima). Therefore, A is a controlling vector that drives exploration and exploitation. Different values of the control parameter a play different roles in the exploration and exploitation of the GWO algorithm: according to Reference [42], a larger a favors global exploration, while a smaller a facilitates local exploitation. Therefore, the control parameter a has an important role in balancing the exploration and exploitation of the GWO algorithm. However, for the standard GWO algorithm, several studies have shown that the linearly changing control parameter a and the design of the position-updated equation cause some drawbacks, such as premature convergence and weakness in solving multimodal problems [12, 27, 42].

Based on this description, the pseudocode of the GWO algorithm is shown in Algorithm 1.

Input: population size N and the maximum number of iterations MaxIter
Output: optimal individual position and best fitness value
(1) Randomly initialize the positions of N individuals to construct a population
(2) Calculate the fitness value of each individual and find X_α, X_β, and X_δ
(3) while t ≤ MaxIter and the stopping criteria are not met do
(4)  for each individual do
(5)   Update the current individual's position according to equation (8)
(6)  end
(7)  Update a, A_i, and C_i, i = 1, 2, 3
(8)  Evaluate the fitness value of each individual
(9)  Update X_α, X_β, and X_δ
(10) end while
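To make the procedure concrete, the following is a minimal Python sketch of the standard GWO update described by equations (1)–(11) and Algorithm 1. It is purely illustrative: the objective function, bounds, and parameter values are placeholders, and it is not a reproduction of the authors' MATLAB implementation.

import numpy as np

def gwo(objective, lower, upper, dim, pop_size=30, max_iter=500):
    """Minimal sketch of the standard GWO (minimization)."""
    # Randomly initialize the pack within the bounds
    wolves = lower + np.random.rand(pop_size, dim) * (upper - lower)
    fitness = np.array([objective(w) for w in wolves])

    for t in range(max_iter):
        # Identify the three leaders (alpha, beta, delta)
        order = np.argsort(fitness)
        x_alpha, x_beta, x_delta = wolves[order[0]], wolves[order[1]], wolves[order[2]]

        a = 2 - 2 * t / max_iter  # linear control parameter, decreasing from 2 to 0

        for i in range(pop_size):
            x_new = np.zeros(dim)
            for leader in (x_alpha, x_beta, x_delta):
                A = 2 * a * np.random.rand(dim) - a          # eq. (3)
                C = 2 * np.random.rand(dim)                  # eq. (4)
                D = np.abs(C * leader - wolves[i])           # eqs. (9)-(11)
                x_new += leader - A * D                      # eqs. (5)-(7)
            wolves[i] = np.clip(x_new / 3.0, lower, upper)   # eq. (8)
            fitness[i] = objective(wolves[i])

    best = np.argmin(fitness)
    return wolves[best], fitness[best]

# Example: minimize the Sphere function (f1) in 30 dimensions
if __name__ == "__main__":
    sphere = lambda x: float(np.sum(x ** 2))
    best_x, best_f = gwo(sphere, lower=-100.0, upper=100.0, dim=30)
    print(best_f)
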
2.2. DDS Algorithm

The DDS algorithm is a powerful single-solution-based metaheuristic that was originally employed for calibration problems arising in watershed simulation models. DDS was developed by Tolson and Shoemaker in 2007 [7] and was proposed for bound-constrained optimization problems. Thus, the advantage of the DDS algorithm lies in achieving excellent results for bound-constrained global optimization problems.

DDS is a point-to-point, stochastic, heuristic global search algorithm with no parameter tuning; good solutions are obtained by scaling the search to a user-specified maximum number of function evaluations (MaxIter) [43]. Since it is a simple, easily programmed global search algorithm, many researchers have paid great attention to it. At the beginning, when the number of iterations is small, the global search of the algorithm is dominant; as the number of iterations approaches the maximum, the algorithm evolves into a local search. The key idea that allows the DDS algorithm to transition from a global search to a local search is to dynamically and probabilistically reduce the number of dimensions to be perturbed in the neighborhood of the current best solution [11, 43]. This operation can be summarized as follows: in each iteration, each of the m decision variables is randomly selected with probability P(t) for inclusion in the perturbation neighborhood. The probability is expressed as

$$P(t) = 1 - \frac{\ln(t)}{\ln(\mathrm{MaxIter})} \quad (12)$$

where t indicates the current iteration and MaxIter represents the maximum number of iterations.

At each iteration t, a new potential solution $x^{new}$ is obtained by perturbing the current best solution $x^{best}$ in the randomly selected dimensions. The perturbation magnitudes are sampled from a standard normal random variable and reflected at the decision variable bounds as [11]

$$x_j^{new} = x_j^{best} + \sigma_j N(0, 1), \quad \sigma_j = r \left( x_j^{max} - x_j^{min} \right) \quad (13)$$

where j indexes the variables selected for perturbation; r is a scalar neighborhood size perturbation factor; N(0, 1) is a standard normal random number generated for the jth variable to be perturbed; $x_j^{max}$ and $x_j^{min}$ correspond to the upper bound and lower bound of the jth variable; and $x_j^{new}$ and $x_j^{best}$ denote the jth variable of the trial solution and of the current best solution, respectively.

To choose between the current best solution $x^{best}$ and the trial solution $x^{new}$ for the next iteration, a greedy search method is employed. The current best solution is replaced by the trial solution if the objective function value of $x^{new}$ is smaller than that of $x^{best}$, i.e., $F(x^{new}) < F(x^{best})$; otherwise, the current best solution is retained for the next iteration. The pseudocode of the DDS algorithm is presented in Algorithm 2 [11].

Input: scalar neighborhood size perturbation factor r, maximum number of iterations MaxIter, number of variables (dimension) m, and upper bounds x^max and lower bounds x^min
Output: x^best and F_best
(1) Initialization
  Randomly generate an initial solution x^0 within [x^min, x^max] and evaluate F(x^0)
  Set t = 1, x^best = x^0, F_best = F(x^0)
(2) while t ≤ MaxIter do
(3)  Compute the probability P(t) of perturbing the decision variables using equation (12)
(4)  for j = 1 to m do
(5)   Generate a uniform random number r_j in [0, 1]
(6)   if r_j < P(t) then
(7)    Include dimension j in the set of dimensions to be perturbed
(8)   end if
(9)  end for
(10)  Generate a standard normal random vector N(0, 1)
(11)  for each selected dimension j do
(12)    x_j^new = x_j^best + σ_j N(0, 1) //equation (13)
(13)  end for
(14)  for each selected dimension j do
(15)   if x_j^new < x_j^min then
(16)    Set x_j^new = x_j^min + (x_j^min − x_j^new)
(17)    if x_j^new > x_j^max then
(18)     Set x_j^new = x_j^min
(19)    end if
(20)   end if
(21)   if x_j^new > x_j^max then
(22)    Set x_j^new = x_j^max − (x_j^new − x_j^max)
(23)    if x_j^new < x_j^min then
(24)     Set x_j^new = x_j^max
(25)    end if
(26)   end if
(27)  end for
(28) Evaluate F(x^new)
(29) if F(x^new) < F_best then
(30)  Set F_best = F(x^new), x^best = x^new
(31) end if
(32) Set t = t + 1
(33) end while
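The following Python sketch illustrates the DDS procedure of Algorithm 2, including the perturbation probability of equation (12), the normally distributed perturbation of equation (13), and the boundary reflection. The objective function and bounds are placeholders, and the sketch is offered under the assumption that the description above is followed literally.

import numpy as np

def dds(objective, lower, upper, max_iter=1000, r=0.2, x0=None):
    """Minimal sketch of dynamically dimensioned search (minimization)."""
    m = len(lower)
    x_best = np.array(x0, dtype=float) if x0 is not None else lower + np.random.rand(m) * (upper - lower)
    f_best = objective(x_best)
    sigma = r * (upper - lower)                      # neighborhood size per dimension

    for t in range(1, max_iter + 1):
        p = 1.0 - np.log(t) / np.log(max_iter)       # eq. (12): probability of perturbing a dimension
        perturb = np.random.rand(m) < p
        if not perturb.any():                        # always perturb at least one dimension
            perturb[np.random.randint(m)] = True

        x_new = x_best.copy()
        for j in np.where(perturb)[0]:
            x_new[j] = x_best[j] + sigma[j] * np.random.randn()   # eq. (13)
            # Reflect at the bounds; if the reflection overshoots, snap to the bound
            if x_new[j] < lower[j]:
                x_new[j] = lower[j] + (lower[j] - x_new[j])
                if x_new[j] > upper[j]:
                    x_new[j] = lower[j]
            elif x_new[j] > upper[j]:
                x_new[j] = upper[j] - (x_new[j] - upper[j])
                if x_new[j] < lower[j]:
                    x_new[j] = upper[j]

        f_new = objective(x_new)
        if f_new < f_best:                           # greedy acceptance
            x_best, f_best = x_new, f_new
    return x_best, f_best

# Example usage on the Sphere function in 10 dimensions
if __name__ == "__main__":
    bounds = np.full(10, 10.0)
    x, f = dds(lambda v: float(np.sum(v ** 2)), lower=-bounds, upper=bounds)
    print(f)
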

3. Proposed Algorithm

As presented in the previous sections, the GWO algorithm encounters a few drawbacks, such as premature convergence and a limited capability to handle multimodal search landscapes [25]. To overcome these weaknesses, the most effective improvements are to increase the diversity of the candidate solutions and to further improve the balance between exploration and exploitation during the iterations. To increase the diversity of candidate solutions, inspired by the core idea of the DDS algorithm, this study adopts two strategies. The first is to dynamically and probabilistically reduce the number of dimensions to be perturbed in the neighborhood, which enables candidate solutions to be perturbed between the current solutions and each of the three best solutions. The second is to use the positional interaction information of the first three best grey wolves (i.e., α, β, and δ) in the process of encircling and preying to perform a deep search. The position-updated equation of the GWO algorithm based on the dynamically dimensioned perturbation and the positional interaction information is then proposed; the resulting algorithm is referred to as the DGWO. To balance exploration and exploitation, the introduction of the search mechanism of the DDS algorithm enables the GWO algorithm to gradually transform from a global search to a local search as the number of iterations increases; hence, the GWO algorithm has a strong exploration ability in the initial search stage and a strong exploitation ability in the later stage of the iterations. A nonlinear control parameter is also proposed to replace the linear control parameter of the standard GWO algorithm. This nonlinear control parameter strategy gives the GWO algorithm a relatively strong exploitation ability in the early stage of the search and a strong exploration ability in the later stage. Therefore, the introduction of the DDS and the nonlinear control parameter strategy strengthens the balance between exploration and exploitation of the GWO algorithm, and the positional interaction information is utilized to conduct an in-depth search and ensure the diversity of the candidate solutions.

3.1. Two Ways to Hunt Prey Are Freely Switched Using DDS

As described in Section 2.1.3, a grey wolf hunts by direct encirclement. However, through actual observation, we found that, in addition to this hunting strategy, the grey wolf also approaches its prey by spiral walking. Spiral walking around the prey is often considered a very effective way to hunt [44]. Although it has been established that a grey wolf hunts by direct encirclement and by spiral walking, a reasonable way of switching between these two hunting methods has not been investigated. The traditional method is to switch randomly between the two hunting methods with equal probability [44]. In an actual situation, however, the conversion probability between these two methods is not equal. A reasonable conversion method is one in which the grey wolf can freely switch between the two hunting methods during predation, so that the grey wolf achieves the best hunting effect, that is, it captures the best prey (global optimum) in the best case or a relatively good prey (a good approximate solution) in poorer conditions. We determined that the DDS method provides exactly the conversion mechanism that we expected. As described in Section 2.2, the core principle of the DDS algorithm is to transition the search from global to local by dynamically and probabilistically reducing the number of dimensions to be perturbed in the neighborhood of the current best solution, which causes the DDS to converge to the desired region and locate the global optimum in the best case or a reasonable local optimum in the worst case. Based on this analysis, the DDS method is introduced into the GWO algorithm to conduct free switching of the hunting behavior between direct encirclement and spiral walking and thereby improve the quality of the solutions of the GWO algorithm. The implementation steps are described as follows (a conceptual sketch of this per-dimension switching is given after the list):

(i) First, at each iteration t, D_α, D_β, and D_δ are calculated using equations (9)–(11).

(ii) Second, at each iteration t, the corresponding spiral-walking quantities are computed using equations (14)–(19).

(iii) Finally, using the idea of the DDS algorithm to transition the search from global to local, the three quantities are recalculated using equations (23)–(25), where the random vectors involved take values between 0 and 1 and r is a scalar neighborhood size perturbation factor, whose value is set to 0.2 in this paper.
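Since equations (14)–(25) are not reproduced here, the Python sketch below only illustrates the switching mechanism itself: the DDS probability of equation (12) decides, dimension by dimension, whether a wolf follows the ordinary encircling step or a spiral step. The logarithmic-spiral expression is a placeholder borrowed from the whale optimization algorithm and is not the paper's spiral-walking model; the single `leader` argument is also a simplification of the three-leader guidance.

import numpy as np

def hybrid_step(x, leader, a, t, max_iter, b=1.0):
    """Sketch of one per-dimension switch between encircling and spiral walking.

    The spiral expression is a placeholder (a logarithmic spiral); the paper's own
    spiral equations (14)-(19) and the DDS-based recalculation (23)-(25) are not
    reproduced here.
    """
    dim = len(x)
    p = 1.0 - np.log(max(t, 1)) / np.log(max_iter)   # eq. (12): spiral branch taken less often as t grows
    x_new = np.empty(dim)
    for j in range(dim):
        if np.random.rand() < p:
            # Spiral-walking move around the leader (placeholder form)
            dist = abs(leader[j] - x[j])
            l = np.random.uniform(-1.0, 1.0)
            x_new[j] = dist * np.exp(b * l) * np.cos(2 * np.pi * l) + leader[j]
        else:
            # Ordinary encircling move, eqs. (1)-(2)
            A = 2 * a * np.random.rand() - a
            C = 2 * np.random.rand()
            x_new[j] = leader[j] - A * abs(C * leader[j] - x[j])
    return x_new
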

3.2. Position-Updated Equation Based on the Positional Interaction Information

As described in the original GWO publication [22], the alpha (α) wolf is the supreme leader of the grey wolf pack and is primarily responsible for commanding all wolves to hunt, sleep, and wake. The leader in the second tier is the beta (β) wolf, which is controlled by α and is responsible for commanding the remaining wolves. The third tier of leadership is the delta (δ) wolf, which must submit to α and β but dominates the omega (ω) wolves. The ω wolves are the common wolves and have the subordinate role of following the orders of the first three leaders. This top-down leadership mechanism of the grey wolf pack gives the GWO algorithm a strong exploration ability. As previously described, the cooperative hunting behavior of the grey wolf group is outstanding. In one situation, the first three best grey wolves (leaders) directly lead the ω wolves to hunt. In another situation, the α wolf commands the β wolf and the δ wolf to hunt, and the β wolf commands the δ wolf to hunt. The leadership relationship among these three leaders usually manifests through their relative position changes, that is, positional interaction information. In the standard GWO algorithm, however, only the former case is considered, while the latter case, which is very important for the hunting of the grey wolf group, is disregarded. To address this shortcoming of the standard GWO algorithm, we design a position-updated equation based on positional interaction information, given as equation (26), in which three terms indicate the positional interaction information about the three leaders, the associated random vectors take values between 0 and 1, and two position weights are introduced (their values are investigated in Section 4.2).

3.3. Nonlinear Control Parameter Design

As previously presented, the DDS algorithm has a strong global search ability (i.e., strong exploration) in the initial search stage and a strong local search ability (i.e., exploitation) in the later search stage. In addition, the exploration and exploitation of the GWO algorithm are primarily controlled by the control parameter a. When a linearly decreases from 2 to 0, the algorithm exhibits strong exploration and weak exploitation in the initial stage of the search and strong exploitation and weak exploration in the later stage. Since both the GWO algorithm and the DDS mechanism provide strong exploration in the initial iterations and strong exploitation in the later iterations, introducing the search mechanism of the DDS algorithm into the GWO algorithm further aggravates the imbalance between exploration and exploitation at the different search stages. To address this problem, we designed a new nonlinear control parameter, which nonlinearly increases from −2 to 2, to substitute the linear control parameter a of the standard GWO algorithm. In the initial search phase, because the population has higher diversity, small parameter values are needed to enhance the exploitation capability and accelerate convergence. In contrast, in the later stage of the search, since the diversity of the population decreases, larger parameter values facilitate exploration and can help the search agents move away from local optima. Therefore, the new nonlinear control parameter ensures relatively strong exploitation in the initial iterations and strong exploration in the later iterations of the GWO algorithm. The nonlinear control parameter is designed as given in equation (27), where t and MaxIter indicate the current iteration and the maximum iteration number, respectively.

Figure 1 shows the transition between exploration and exploitation caused by the values of the linear and nonlinear control parameters. As shown in Figure 1, for the GWO algorithm, half of the iterations are devoted to exploration (|A| ≥ 1) and the rest of the iterations are devoted to exploitation (|A| < 1). For the DGWO algorithm, however, the proportions of iterations used for exploration and exploitation are 60.2% and 39.8%, respectively.

3.4. Framework and Pseudocode of the DGWO Algorithm

In this paper, the proposed spiral walking hunting strategy is added to the GWO algorithm to enhance its predation ability, and the algorithm switches freely between this strategy and the original encirclement method using the search mechanism of the DDS. Combining this mechanism with the proposed nonlinear control parameter strategy and the position-updated equation that considers the positional interaction information yields the DGWO algorithm. The pseudocode of the proposed DGWO algorithm is shown in Algorithm 3.

Input: population size N, scalar neighborhood size perturbation factor r, maximum number of iterations MaxIter, number of variables m, and upper bounds x^max and lower bounds x^min
Output: optimal individual position and best fitness value
(1) Randomly initialize the positions of N individuals to construct a population
(2) Calculate the fitness value of each individual, find X_α, X_β, and X_δ, and set t = 1
(3) while t ≤ MaxIter do
(4)  Compute the probability P(t) of perturbing the decision variables using equation (12) and the value of the nonlinear control parameter using equation (27)
(5)  Generate uniform random numbers
(6)  for i = 1 to N do
(7)   for j = 1 to m do
(8)    if the uniform random number is less than P(t) then
(9)     Calculate D_α, D_β, and D_δ for the spiral-walking mode according to equations (14)–(19)
(10)     Recalculate the corresponding quantities using equations (23)–(25)
(11)     Calculate X_1, X_2, and X_3 using equations (5)–(7)
(12)     Update the current individual's position according to equation (26)
(13)    else
(14)     Calculate D_α, D_β, and D_δ according to equations (9)–(11)
(15)     Recalculate the corresponding quantities using equations (23)–(25)
(16)     Calculate X_1, X_2, and X_3 using equations (5)–(7)
(17)     Update the current individual's position according to equation (26)
(18)    end if
(19)   end for
(20)  end for
(21)  Update a, A_i, and C_i, i = 1, 2, 3
(22)  Evaluate the fitness value of each individual
(23)  Update X_α, X_β, and X_δ
(24)  Set t = t + 1
(25) end while
3.5. Time Complexity of DGWO

The time complexities of the DGWO and GWO are summarized as follows:

(1) In the initialization phase, the DGWO and GWO require O(N × m) time, where N represents the population size and m represents the dimension of the problem.

(2) Calculating the control parameters of the DGWO and GWO requires O(N × m) time.

(3) Updating the agents' positions via the position-updated equations of the DGWO and GWO requires O(N × m) time.

(4) Evaluating the fitness value of each agent requires O(N × m) time.

Based on the above analysis, the total time complexity for each generation is O(N × m); therefore, given a maximum number of iterations, the total time complexity of the DGWO and GWO is O(N × m × MaxIter), where MaxIter indicates the maximum number of iterations.

3.6. Analysis and Comparison of the Diversity between GWO and DGWO

From equations (5)–(8), we can see that the grey wolves update their positions under the leadership of the three best wolves. However, when the three fittest wolves fall into a local optimum, all of the search agents concentrate in this region, which decreases the diversity of the population, and the algorithm easily falls into the local optimum. Based on this observation, the DGWO algorithm is proposed to enhance the diversity of the GWO algorithm. To analyze and compare the diversity of the GWO and the DGWO, we choose the Sphere function as the benchmark test problem and examine the difference in diversity between the GWO and the DGWO at different iterations. We set the population size to 30, the dimension of the problem to 2, and the upper and lower boundaries of the problem to 10 and −10, respectively. The diversity distributions of the DGWO and GWO at different iterations are plotted in Figure 2.

From Figure 2(a), when the number of iterations is 2, both the DGWO and the GWO maintain highly diverse individuals. However, from Figures 2(b) to 2(d), the DGWO algorithm shows better diversity of solutions than the GWO algorithm. This comparison confirms that the DGWO algorithm maintains a higher diversity of solutions than the standard GWO algorithm.
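The population diversity compared in Figure 2 can be quantified in several ways; the sketch below uses a simple mean-distance-to-centroid measure, which is one common choice and is only illustrative (the figure itself compares scatter plots of the agents' positions rather than a specific numeric index).

import numpy as np

def population_diversity(positions):
    """Mean Euclidean distance of the search agents from the population centroid.

    `positions` is an (N, m) array of agent positions; larger values indicate a
    more spread-out (more diverse) population. Illustrative metric only.
    """
    centroid = positions.mean(axis=0)
    return float(np.mean(np.linalg.norm(positions - centroid, axis=1)))

# Example: diversity of 30 random agents of the 2-D Sphere problem in [-10, 10]^2
if __name__ == "__main__":
    pop = -10 + 20 * np.random.rand(30, 2)
    print(population_diversity(pop))
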

4. Results and Discussion

4.1. Test Function Selection and Control Parameter Settings

In this section, to validate the performance of the proposed DGWO algorithm, 23 benchmark problems of various complexities and sizes are collected from the studies in [21, 23, 43]. The characteristics of the selected test functions are summarized in Table 1, including the global optimal value of each function. The table details the key of each test function, the mathematical expression of each benchmark problem, the boundaries of the variables, the dimension of the solution, and the category of each function. These test problems are divided into three categories: unimodal, multimodal, and fixed-dimension multimodal. In Table 1, f1–f7 are unimodal problems that are used to benchmark the exploitation of algorithms because they have one global optimum and no local optima. Conversely, functions f8–f23 are multimodal and fixed-dimension multimodal problems, which are helpful in examining the exploration and local-optima avoidance of algorithms, since they have a large number of local optima [26, 42].

The control parameter settings shared by the GWO and DGWO algorithms are listed in Table 2, where "m" represents the dimension of the problem, "N" represents the size of the population, "MaxIter" represents the maximum number of iterations, the two position weights are as defined in Section 3.2, and "R" represents the number of independent simulation runs for each test problem. The proposed DGWO and the standard GWO algorithms were coded in MATLAB R2015a. All simulation experiments were performed on a personal computer with the Windows 10 64-bit Professional OS and 4 GB of RAM.

4.2. Impact of the Position Weights

As described in Section 3.2, the modified position-updated equation (i.e., equation (26)) has an important role in balancing exploration and exploitation in the evolution process. In equation (26), the two position weights are crucial for improving the optimization performance of the DGWO. In this section, to further investigate the impact of the position weight coefficients, several independent experiments were designed and conducted. We varied the values of the two position weights and kept the other algorithm parameters fixed for all benchmark functions. Several pairs of position weight values (including the pair 0.1 and 0.9 and the pair 0.3 and 0.7) are selected to conduct experiments on the 23 test functions. Among these test functions, the dimension of 13 test problems (f1–f13) is 30. All experimental results are reported in Table 3, in which "Mean" and "St. dev." are the two performance evaluation indexes.

As shown in Table 3, the comprehensive optimization performance of the DGWO algorithm with position weights 0.1 and 0.9 is superior to that of the other settings. Compared with the DGWO using position weights 0.3 and 0.7, the proposed DGWO with weights 0.1 and 0.9 achieved better results for 9 functions (i.e., f4–f8, f12–f14, and f16), similar results for 3 functions (i.e., f11, f17, and f18), and worse results for 11 functions (i.e., f1–f3, f9-f10, f15, and f19–f23). Compared with the third weight setting in Table 3, the DGWO with weights 0.1 and 0.9 obtained better results for 6 problems (i.e., f5–f7, f12-f13, and f16), similar results for 3 problems (i.e., f11, f14, and f18), and worse results for 14 functions (i.e., f1–f4, f8–f10, f15, f17, and f19–f23). Compared with the fourth weight setting, the DGWO with weights 0.1 and 0.9 attained better results for 6 functions (i.e., f5, f12–f14, f16, and f20), similar results for two functions (i.e., f11 and f18), and worse results for 15 functions (i.e., f1–f4, f6–f10, f15, f17, f19, and f21–f23). Compared with the fifth weight setting, the DGWO with weights 0.1 and 0.9 achieved better optimization performance for 4 functions (f5, f14, f16, and f20), similar results for one function (i.e., f18), and worse results for the remaining functions. Based on this analysis, the optimization performance of the DGWO worsens as the first position weight increases and the second decreases. Therefore, considering all of the weight settings, we concluded that setting the position weights to 0.1 and 0.9 for the DGWO algorithm is an ideal choice, and the two position weights of the DGWO algorithm were set to 0.1 and 0.9, respectively, in the subsequent experiments.

The convergence curves of the average objective function values of the DGWO with different position weight values for 10 typical benchmark functions are plotted in Figure 3.

4.3. Effectiveness Analysis of the Two Components in DGWO

In the DGWO algorithm, two main components are proposed, namely, the modified position-updated equation and the nonlinear control parameter strategy. To validate the effectiveness of these two components in improving the optimization performance of the DGWO, two experiments were conducted on the 23 benchmark functions recorded in Table 1. Among those functions, the dimension of f1–f13 is 30, and the algorithm parameters are set as in Table 2. In the first experiment, the DGWO variant that employed the modified position-updated equation (i.e., equation (26)) together with the linear control parameter a used in the study of Mirjalili et al. [22] is referred to as DGWO-1. In the second experiment, the DGWO variant that used only the nonlinear control parameter strategy (i.e., equation (27)) together with the original position-updated equation (8) is referred to as DGWO-2. Two statistical criteria, "Mean" and "St. dev.," and the results of DGWO-1, DGWO-2, and the DGWO are shown in Table 4. Sign-rank sum tests at the 0.05 and 0.1 significance levels were performed between the DGWO and each of DGWO-1 and DGWO-2.

From Table 4, compared to the DGWO, the DGWO-1 achieved better results on 6 functions (i.e., f4, f6, f8, f16, f21, and f22), showed similar or approximate performance on 3 test functions (i.e., f9, f10, and f11), and provided slightly poorer results on the rest of the test functions. It should be emphasized that the DGWO-1 could obtain very competitive optimization results compared to the DGWO, and its performance is not significantly inferior to that of the DGWO. We attribute the first experiment results to the fact that the modified position-updated equation has more advantages in balancing between exploration and exploitation and could ensure more potential solution diversity in the evolution process. Therefore, we can conclude that the performance differences between the DGWO and the DGWO-1 were not significant. From the results of the second experiment, it is found that the DGWO surpassed the DGWO-2 on 19 test functions and obtained similar results on function f18. To better understand this phenomenon, we need to know that the nonlinear control parameter strategy was specifically designed for the modified position-updated equation and is not suitable for independent use in the search process. Thus, the performance of the DGWO significantly outperformed that of the DGWO-2.

The convergence curves of the average objective function values of the DGWO, DGWO-1, and DGWO-2 on 10 typical test functions are plotted in Figure 4. From Table 4 and Figure 4, we can conclude that the two components of the DGWO are able to compensate for each other to improve the optimization performance of the GWO.

4.4. Performance Comparison with the Standard GWO Algorithm

We independently tested each problem 30 times to obtain four statistical criteria for comparing the performance of the algorithms; that is, “Best” indicates the best value, “Worst” represents the worst value, “Mean” denotes the average best values, and “St. dev.” indicates the standard deviation value. The simulation experimental results are described in Table 5.

As shown in Table 5, compared with the standard GWO, the DGWO has better optimization performance on the seven unimodal benchmarks, with the exception of f5, since the DGWO provides the best "Best," "Worst," "Mean," and "St. dev." values for 6 of the 7 unimodal benchmarks. For the six multimodal benchmarks (f8–f13) in Table 5, the standard GWO does not outperform the DGWO algorithm on any test problem in terms of the "Mean" criterion. As observed in Table 5, the DGWO algorithm achieved better performance than the GWO on 5 fixed-dimension multimodal test functions (i.e., f14 and f20–f23) and provided slightly better results than the GWO for functions f18 and f19. For function f16, however, the GWO obtained better results than the DGWO.

The percentage of problems successfully solved by the GWO and the DGWO is recorded in Table 6. Note that when an algorithm is applied to the 13 test functions (i.e., f1–f13) and the 10 test functions (i.e., f14–f23) listed in Table 1, a run is regarded as successful if the error between the obtained value and the theoretical optimum is less than 10^−5 and 10^−3, respectively. From Table 6, it can be seen that, for functions f1, f2, f10, f16, f18, and f23, the DGWO and the GWO obtained the same success percentage. On 13 test problems (i.e., f3, f4, f6–f9, f11–f15, and f21-f22), the DGWO shows a higher percentage than the GWO. However, the GWO shows a higher percentage than the DGWO for functions f17, f19, and f20.

To obtain an intuitive cognition of the convergence rate of the DGWO and GWO algorithms, Figure 5 shows the convergence curves of the DGWO and GWO for 12 typical test functions with m = 10, 30, 50, and 100. As shown in Figure 5, the DGWO algorithm achieved a faster convergence than the standard GWO algorithm for all 12 test problems. This finding verifies that the position-updated strategy and the nonlinear control parameter proposed in this paper can achieve faster search and excellent optimization performance of the DGWO algorithm for low- and high-dimensional problems.

4.5. Performance Comparison with the Modified GWO Algorithm

To further compare the optimization performance of the proposed DGWO algorithm with that of other improved GWO variants, i.e., the modified grey wolf optimizer (mGWO) [26], the grey wolf optimizer based on Powell local optimization (PGWO) [45], and the exploration-enhanced grey wolf optimizer (EEGWO) [42], the parameters of the mGWO, PGWO, and EEGWO algorithms were set as follows: the population size was 30, and the maximum number of iterations was 500. The 23 benchmark test functions were selected from Table 1. The dimensions of the 13 scalable test functions (f1–f13) were set to 10, 30, 50, and 100. Each algorithm was run independently 30 times on each test function for each corresponding dimension. The mean (denoted by "Mean") and standard deviation (denoted by "St. dev.") of the fitness values are the two statistical criteria used to evaluate the performance of the algorithms. The simulation results of these four algorithms are recorded in Table 7.

As shown in Table 7, the DGWO obtained the best "Mean" and "St. dev." for functions f1, f2, f3, f8, and f13 with low dimensions (m = 10 and 30) and high dimensions (m = 50 and 100) compared with the mGWO, PGWO, and EEGWO. For the test problems f4 and f7 with m = 30, 50, and 100, the EEGWO achieved the best results among the four modified GWO algorithms, and the DGWO achieved slightly worse results than the EEGWO but better results than the mGWO and PGWO. For functions f9, f10, and f11, the DGWO and EEGWO achieved the same results, which are better than those of the mGWO and PGWO; note that the DGWO and EEGWO can obtain the theoretical optima (0) for functions f9 and f11. The PGWO obtained the best results on test problems f5, f6, and f12 for all dimensions (m = 10, 30, 50, and 100) and attained the global theoretical optimum (0) on problem f6. However, the DGWO obtained the second-best results for functions f5, f6, and f12, which are similar to those of the PGWO. For functions f14 to f23 with a fixed number of dimensions, the DGWO achieved the best results for 7 test functions (f14, f15, f17, f19, and f21–f23). Compared to the mGWO, the PGWO attained almost the same results for functions f16 and f20, which are better than those of the DGWO and EEGWO. On test function f18, the mGWO and PGWO obtained the best fitness values. In addition, the EEGWO algorithm exhibits poor optimization performance on functions f14 to f23.

From Table 7, we can see that the EEGWO provides very competitive results compared to the DGWO, and it is challenging to determine which algorithm is better. Therefore, it is necessary to conduct an appropriate statistical analysis to see whether the results obtained by the employed algorithms are significant at a given confidence level. In this paper, the sign test described in references [11, 46] is adopted. The statistical results are recorded in Table 8. It should be noted that this statistical analysis is based on the average of 20 independently obtained best results. As seen from Table 8, the DGWO is significantly better than the GWO, mGWO, and PGWO on the unimodal and multimodal test functions at a significance level of 0.05 but shows a nonsignificant performance difference on the 10 fixed-dimension multimodal benchmark functions. In addition, when compared to the EEGWO, the DGWO shows a nonsignificant performance difference on the 13 unimodal and multimodal test functions but obtains significantly better results on the 10 fixed-dimension multimodal benchmark functions at a significance level of 0.1.

The percentages of problems solved by the mGWO, PGWO, and EEGWO are recorded in Table 9. Compared to the mGWO, the DGWO obtained the same percentage on six functions (i.e., f1, f2, f5, f16, f18, and f23) and a higher percentage on thirteen functions (i.e., f3-f4, f6–f9, f11–f13, f15, and f21-f22), while the DGWO showed a lower percentage on three functions (i.e., f17, f19, and f20). Compared to the PGWO, the DGWO provided the same and a higher percentage on five functions (i.e., f1, f2, f6, f11, and f18) and eleven functions (i.e., f3-f4, f7–f12, f14-f15, and f23), respectively; on the contrary, the PGWO showed a higher percentage than the DGWO on six functions (i.e., f5, f17, f19, and f20–f22). For function f13, the DGWO showed a higher percentage than the PGWO when the dimensions were 10, 30, and 50 but a lower percentage when the dimension was 100. Compared to the EEGWO, the DGWO achieved the same and a higher percentage on nine functions (i.e., f1–f5, f9–f11, and f20) and twelve functions (i.e., f8, f12-f13, f14–f19, and f21–f23), respectively. For function f7, however, the EEGWO obtained a higher percentage than the DGWO.

To investigate the convergence speed of the three modified versions of the GWO mentioned in this paper and the proposed DGWO algorithm for low-dimensional and high-dimensional problems, Figure 6 plots the convergence curves of 10 typical functions (f1–f4, f6-f7, f9-f10, and f12-f13) with dimensions of 30 and 100. For functions f1–f4, f7, and f9, the DGWO and EEGWO achieve the fastest convergence speed, whereas the EEGWO achieves a faster convergence speed on the high-dimensional functions and the DGWO attains a better convergence speed on the low-dimensional problems. The PGWO has a fast convergence speed for functions f6 and f12, and the DGWO ranks second. The DGWO exhibits the fastest convergence speed for functions f10 and f13, and the EEGWO shows the same convergence speed for function f10. These results verify that the proposed DGWO achieves excellent convergence performance for both low-dimensional and high-dimensional problems.

In addition to the abovementioned GWO versions, an interesting GWO variant named "GWO-EPD" [47] caught our attention because it exhibits some similarities to and differences from our proposed DGWO algorithm. The GWO-EPD algorithm has some features similar to those of the DGWO, such as dynamically removing some inferior solutions and repositioning them by means of the alpha, beta, and delta wolves. However, the differences between the two algorithms are also easy to distinguish. For example, in the DGWO, some variables of the current best solutions are removed and repositioned using the probability modeled in equation (12), while in the GWO-EPD, half of the worst search agents are eliminated and reinitialized with equal probability. In addition, in the DGWO, the variables are repositioned by employing the modified position-updated equation (see equation (26)); in the GWO-EPD, the EPD mechanism is applied to randomly reinitialize the worst search agents. To further verify the scalability of the DGWO, we compared it with the GWO-EPD on the 13 test functions (i.e., f1–f13) listed in Table 1, with dimensions of 30 and 100. All DGWO parameters were kept the same as those defined in the preceding sections, and the parameter values of the GWO-EPD were kept the same as in its original paper. In addition, the maximum number of iterations and the population size were set to 500 and 30, respectively, and 30 independent runs were executed for each test function. The experimental results are presented in Table 10.

As seen from Table 10, for m = 30, compared to the GWO-EPD algorithm, the DGWO provided better results on eleven functions (i.e., f1–f4, f6-f7, and f9–f13). Similarly, for m = 100, the DGWO also offered better results than the GWO-EPD on eleven functions (i.e., f1–f4, f6-f7, and f9–f13). However, better results for f5 and f8 were obtained by the GWO-EPD algorithm. In summary, the increase in dimension has little impact on the performance of the DGWO algorithm; even on large-scale optimization problems, the DGWO still works well and obtains promising results.

4.6. Performance Comparison with Other State-of-the-Art Algorithms (m = 30)

In this section, we compare the DGWO with seven recently proposed state-of-the-art population-based optimization methods: particle swarm optimization with autonomous particle groups (AGPSO) [48], improved PSO with time-varying accelerator coefficients (IPSO) [49], improved PSO based on asymmetric time-varying acceleration coefficients (MPSO) [50], time-varying acceleration coefficients particle swarm optimization (TACPSO) [51], hybrid differential evolution with biogeography-based optimization (DEBBO) [52], the hybrid whale optimization algorithm with simulated annealing (WOA-SA) [53], and the salp swarm algorithm (SSA) [54]. All DGWO parameters were kept the same as those listed in Section 4.1. The parameter settings of the seven algorithms are as follows: the population size is 30, the maximum number of iterations is 500, and the remaining algorithm parameters are the same as in their original papers.

To compare the optimization performance of the seven algorithms, the results are achieved over 30 independent runs. The best (denoted by “Best”), average (denoted by “Mean”), and standard deviation (denoted by “St. dev.”) of the best solution in the last iteration are collected in Table 11. The best obtained results are highlighted in boldface type.

Table 11 shows the results for the 23 test functions. As presented in this table, the DGWO obtained the best results for three of the seven unimodal benchmark problems (i.e., f3, f4, and f7). For function f6, the DGWO performed slightly worse than the SSA and obtained the second-best result. For functions f1 and f2, the WOA-SA achieved the global optimal value (0), and the DGWO provided solutions near 0. For the multimodal benchmark functions f8–f13, the DGWO presented the best results, with the exception of functions f12 and f13. For functions f8, f12, and f13, the DEBBO obtained almost the same results as the DGWO. Compared to the WOA-SA, the DGWO obtained similar results for three functions (f9–f11) and worse results for one function (f13). Table 11 also shows the results for the 10 fixed-dimension multimodal benchmark functions (f14–f23). As shown in Table 11, the results of the AGPSO, IPSO, MPSO, and TACPSO are equal for four functions (f14, f16, f18, and f20) and better than those of the DGWO. However, the DGWO achieved the best results of all algorithms for six fixed-dimension multimodal benchmark problems (i.e., f15, f17, f19, and f21–f23). The WOA-SA and SSA obtained similar results for function f20, which are better than those of the other algorithms.

The percentages of problems solved by the seven state-of-the-art algorithms are listed in Table 12. As seen from this table, the AGPSO, IPSO, MPSO, and TACPSO all failed to solve the thirteen test functions (i.e., f1–f13) but completely solved five test functions (i.e., f14 and f16–f19). Of the thirteen functions f1–f13, nine functions are completely solved by the DGWO, three functions are fully solved by the DEBBO, six functions are fully solved by the WOA-SA, and two functions are fully solved by the SSA. Of the ten functions f14–f23, four functions are completely solved by the DEBBO and SSA, three functions are fully solved by the WOA-SA, and two functions are fully solved by the DGWO. However, for functions f15 and f22-f23, the DGWO achieved the highest percentage.

Figure 7 plots the convergence curves of the average objective function values of the algorithms for some typical test problems, where f1, f3, f4, and f7 are unimodal functions, f9, f10, and f11 are multimodal benchmark functions, and f15, f21, and f23 are fixed-dimension multimodal benchmark functions. As observed from these curves, the DGWO has the best convergence rates for all 10 classic benchmark functions. Note that unimodal test problems are suitable for benchmarking the convergence ability of algorithms since they have only one global minimum and do not have local minima in the search space [48]. Since multimodal and fixed-dimension multimodal benchmark functions have more than one local optimal solution, they are suitable for benchmarking the capability of algorithms in avoiding local minima [48]. As indicated by the results, the DGWO performs better than the seven compared algorithms on both the unimodal and multimodal benchmark functions. The DGWO achieves superior results because the candidate particles have diversity in the population and the balance between exploration and exploitation during the iteration is achieved by the strategies of the modified position-updated equation (i.e., equation (26)) and nonlinear control parameter (i.e., equation (27)).

To further investigate the optimization performance of the DGWO on some standard and complex benchmark problems, we compared it with the TACPSO, IPSO, and GWO on the CEC2014 benchmark test suite with dimension 30. The parameter settings of the DGWO and the other selected algorithms were the same as mentioned above. The maximum number of iterations for the DGWO was 5 × 10^4, and 20 independent runs were performed for each problem. The experimental results are shown in Table 13.

From Table 13, it can be seen that, among the unimodal test functions (f1–f3), the proposed DGWO achieves the best performance on f1 and f2. Among the 13 multimodal functions (f4–f16), the DGWO shows better results on 11 benchmark test functions and similar results on two test functions (i.e., f13 and f16). Among the 6 hybrid functions (f17–f22), the DGWO gives the best results on four functions (f17, f19, f21, and f22), while it is the second-best algorithm on function f18. Among the 8 composition functions (f23–f30), the proposed DGWO gives the best results on all test functions except f26, on which it provides the worst result.

From the statistical analysis listed in Table 13, the DGWO performs better than the TACPSO at a significance level of 0.05 and better than the IPSO and GWO at a significance level of 0.1.

4.7. Experiment on Real-World Engineering Problems

In this section, several classic real-world engineering optimization problems were selected to validate the practical optimization performance of the proposed algorithm. The DGWO and GWO methods were applied to solve three well-known constrained engineering design problems: Himmelblau's problem, the gear train design problem, and the pressure vessel design problem. Note that penalty function methods were employed to handle the constraints [55]; a simple example of such a penalty wrapper is sketched below. The DGWO and GWO parameters for these three real-world engineering applications were set as follows: the population size was 30, the maximum number of iterations was 1000, and each problem was run independently 30 times.
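Reference [55] covers several penalty-function formulations; the following Python sketch shows one simple static-penalty wrapper of the kind that can be used to feed a constrained engineering problem to the (D)GWO. The penalty coefficient is an arbitrary illustrative value, not the setting used in the paper.

def penalized(objective, inequality_constraints, penalty=1e6):
    """Wrap a constrained minimization problem as an unconstrained one.

    `inequality_constraints` is a list of functions g_i with the convention
    g_i(x) <= 0 for feasible x. The static penalty coefficient is illustrative;
    reference [55] discusses more refined penalty schemes.
    """
    def wrapped(x):
        violation = sum(max(0.0, g(x)) ** 2 for g in inequality_constraints)
        return objective(x) + penalty * violation
    return wrapped
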

4.7.1. Himmelblau’s Nonlinear Optimization Problem

Himmelblau’s problem is a well-known benchmark nonlinear constrained optimization problem that was developed by Himmelblau [56]. This type of optimization is performed to find the decision vector . To obtain the minimize function f, the objective function f is modeled aswhere

Several researchers have employed different algorithms to solve this problem, such as the generalized reduced gradient (GRG) [56], the genetic algorithm (GA) [57], GA solution based on a global reference (GA-G) [58], and GA solution based on a local reference (GA-L) [58]. Table 14 illustrates the results of the best run obtained by the DGWO and the previously mentioned methods. Table 14 reveals that the results achieved by employing the DGWO algorithm are better than those of the previously reported best feasible solution and that the DGWO could provide very competitive results compared to the GWO.

4.7.2. Gear Train Design Problem

The gear train design problem has four integer variables and was initially introduced by Sandgren [59]. The task is to determine the optimal numbers of teeth of the gearwheels, each an integer between 12 and 60, so as to minimize the deviation of the obtained gear ratio from the required ratio for the gear train displayed in Figure 8. The optimization model of this problem, with a four-dimensional decision vector of tooth counts, takes the squared difference between the obtained gear ratio and the required ratio as its objective, where each decision variable must be an integer; a sketch of the commonly cited form of this objective is given below.
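As a concrete illustration, the sketch below uses the formulation commonly cited in the benchmark literature: the cost is the squared difference between the achieved gear ratio and the required ratio 1/6.931, with the four tooth counts restricted to integers in [12, 60]. The variable ordering is an assumption and should be checked against Sandgren's original statement [59].

def gear_train_cost(x):
    """Commonly cited gear train objective: squared error of the gear ratio.

    x = (xA, xB, xD, xF) are tooth counts; the required ratio is 1/6.931.
    The variable ordering follows the usual convention in the benchmark
    literature and may differ from the notation of [59].
    """
    xA, xB, xD, xF = (round(v) for v in x)   # tooth counts must be integers
    return (1.0 / 6.931 - (xB * xD) / (xA * xF)) ** 2

# The widely reported best solution (49, 19, 16, 43) gives a cost of about 2.7e-12
if __name__ == "__main__":
    print(gear_train_cost((49, 19, 16, 43)))
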

Table 15 shows the optimization results of the best run on the gear train design problem obtained by different algorithms and by the proposed DGWO algorithm. The statistical results of these algorithms, together with the results of the GSA-GA and CS algorithms reported in studies [60, 61], are shown in Table 16 and indicate that the result obtained by the DGWO algorithm is superior to those of the two algorithms and that its worst (Worst), mean (Mean), and standard deviation (St. dev.) values are low. The results obtained by the DGWO are slightly better than those of the GWO and significantly better than those reported by the other methods in [59–62].

4.7.3. Pressure Vessel Design Problem

In this problem, a cylindrical pressure vessel is capped at both ends by hemispherical heads, and its cylindrical section is formed with two longitudinal welds, as described in Figure 9 [63]. The four decision variables, namely, the thickness of the vessel shell (Ts), the thickness of the head (Th), the inner radius of the vessel (R), and the length of the cylindrical section of the vessel (L), are optimized to minimize the total cost of the pressure vessel. Therefore, the formulation of this problem involves the four variables (Ts, Th, R, L); the commonly cited form of the objective and constraints is sketched below.
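The pressure vessel formulation is not reproduced above; the Python sketch below gives the form that is commonly cited for this benchmark (welding, material, and forming costs plus four inequality constraints) and should be checked against [63]. It can be combined with the penalty wrapper sketched in Section 4.7.

import numpy as np

def pressure_vessel_cost(x):
    """Commonly cited pressure vessel cost: welding, material, and forming terms.

    x = (Ts, Th, R, L): shell thickness, head thickness, inner radius, cylinder length.
    Coefficients follow the usual benchmark statement and should be verified against [63].
    """
    Ts, Th, R, L = x
    return (0.6224 * Ts * R * L
            + 1.7781 * Th * R ** 2
            + 3.1661 * Ts ** 2 * L
            + 19.84 * Ts ** 2 * R)

def pressure_vessel_constraints(x):
    """Inequality constraints g_i(x) <= 0 in their commonly cited form."""
    Ts, Th, R, L = x
    return [
        -Ts + 0.0193 * R,                                              # minimum shell thickness
        -Th + 0.00954 * R,                                             # minimum head thickness
        -np.pi * R ** 2 * L - (4.0 / 3.0) * np.pi * R ** 3 + 1296000.0,  # minimum enclosed volume
        L - 240.0,                                                     # maximum length
    ]
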

The results obtained for this problem by the proposed DGWO method are compared with the best-run results achieved by other algorithms in Table 17. The practical optimization performance of the DGWO algorithm is superior to that of the existing approaches but slightly worse than that of the GWO. The statistical results after 30 independent runs are recorded in Table 18. They further show that the standard deviation of the proposed DGWO method is smaller than that of the other algorithms, except for the result reported in [64], and that its worst result is better than those of the compared algorithms. In addition, the DGWO obtains best and average results very close to those of the GWO and better than those of the other compared algorithms.

4.8. Several Insights for Applying the DGWO Algorithm

As discussed above, the optimization performance of the DGWO algorithm has been validated on several classical, well-known benchmark functions. As seen from Table 3, the position weight values play an important role in the optimization performance on different types of problems. If the objective problems are unimodal or multimodal (such as f1–f13), the position weights can be set to 0.1 and 0.9 or to 0.3 and 0.7, and both of these settings can obtain relatively high-quality solutions. If the objective problems are fixed-dimension multimodal (such as f14–f23), the weights 0.1 and 0.9 achieve better results than other position weight values. In addition, from Table 4, we can observe that the DGWO-1 algorithm, which employed the modified position-updated equation (i.e., equation (26)) and the linear control parameter a, provides very competitive results on unimodal and multimodal problems and slightly poorer results on fixed-dimension multimodal problems. Therefore, if the objective problems are unimodal or multimodal, the control parameter of the DGWO can adopt either the linear or the nonlinear strategy; otherwise, if the objective problems are fixed-dimension multimodal, we recommend that practitioners use the nonlinear control parameter strategy proposed in this paper (i.e., equation (27)).

5. Conclusions

In this paper, an improved version of the GWO algorithm (referred to as the DGWO) is proposed to solve continuous numerical optimization problems. First, the DDS method is introduced into the GWO algorithm to perturb a set of dimensions of the first three best solutions, increasing the diversity of the candidate solutions and enhancing the exploration ability of the GWO algorithm; this mechanism realizes the predation mode of freely switching between direct encirclement and spiral walking. Second, the positional interaction information of the three leaders (i.e., α, β, and δ) during predation is further considered, and a position-updated equation based on this information is proposed to increase the ability of the GWO algorithm to jump out of local optima. Finally, the proposed nonlinear control parameter strategy is designed to enhance the exploitation ability of the GWO algorithm, as well as its convergence precision and convergence rate. Based on these three improvements, the balance between exploration and exploitation, the convergence precision, and the convergence rate have all been enhanced. Twenty-three benchmark test problems, the CEC2014 benchmark suite, and three classic real-world engineering design applications were employed to verify the practical optimization performance of the proposed DGWO technique. First, the experimental results on the unimodal functions show the exploitation ability of the DGWO, which helps accelerate the convergence speed and enhance the solution accuracy. Second, the exploration capability of the DGWO was demonstrated by the results on the multimodal functions. Third, the results on the fixed-dimension multimodal and composite functions show that the DGWO succeeds in jumping out of local optima by balancing exploration and exploitation. The simulations confirmed that the DGWO finds very competitive optimization results compared to recent GWO variants and state-of-the-art heuristic algorithms. However, the optimization performance of the DGWO algorithm on Himmelblau's nonlinear engineering design problem is not very competitive, although it shows excellent results on the gear train and pressure vessel design problems. Although several experiments have demonstrated that the DGWO is efficient, effective, and robust, it also has several obvious shortcomings, such as the larger number of parameters that need to be adjusted compared with the original GWO algorithm, the poor optimization performance on the complex problems included in the CEC2014 suite, and the fact that a 100% success rate in solving problems is not guaranteed.

In future works, there are two main aspects that need to be implemented. An interesting research point is to further simplify the spiral walking predation technique and to improve the positional interaction information strategy to propose a variant of the GWO with a simpler algorithm structure and higher optimization performance. In addition, we intend to utilize the proposed DGWO algorithm for solving multiobjective optimization problems and economic load dispatch problems and training neural networks in our future research.

Data Availability

The MATLAB code used to support the findings of this study is available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research was funded in part by the National Natural Science Foundation of China (grant no. 71841054), the Young Innovative Talents Training Program for Universities in Heilongjiang Province (grant no. UNPYSCT-2018151), the Fundamental Research Funds for the Central Universities (grant no. 2572018BM07), the Natural Science Foundation of Heilongjiang Province of China (grant no. LH2019G014), the Postdoctoral Foundation of Heilongjiang Province of China (grant no. LBH-Z18015), and the 2019 Annual Basic Project of the Party’s Political Construction Research Center of the Ministry of Industry and Information Technology (grant no. 19GZY411).