Abstract

The optimization of high-dimensional functions is an important problem in both science and engineering. The wolf pack algorithm is a technique often used for computing the global optimum of a multivariable function. In this paper, we develop a new wolf pack algorithm that can accurately compute the optimal value of a high-dimensional function. First, a chaotic opposite initialization is designed to improve the quality of the initial solutions. Second, a disturbance factor is added to the scouting process to enhance the searching ability of the wolves, and an adaptive step length is designed to strengthen the global search and effectively prevent the wolves from falling into local optima. A set of standard test functions is selected to evaluate the proposed algorithm, and the results are compared with those of other algorithms. High-dimensional and ultrahigh-dimensional functions (500 and 1000 dimensions) are also tested. The experimental results show that the proposed algorithm features good global convergence, high computational accuracy, strong robustness, and excellent performance on high-dimensional functions.

1. Introduction

The rapid development of science and technology enriches human life. Meanwhile, the problems encountered in various fields, especially in engineering and technology, become increasingly complex and diversified, for example, large-scale dispatching problems [1–3] and large-scale power system optimization problems [4–6]. The complexity of such problems grows exponentially with the dimension, which degrades the performance of traditional optimization algorithms and their ability to solve this kind of problem well. Inspired by the way creatures live in nature, swarm intelligence algorithms embody strategies that nature has refined over thousands of years. Compared with traditional optimization algorithms, their structure is simple and easy to implement, so they are widely used, and many scholars have applied them to high-dimensional function optimization problems. The study in [7] used neighborhood factors to improve the NFO algorithm, which was applied to complex optimization problems and achieved good results. Compared with the traditional particle swarm optimization (PSO) algorithm, the improved PSO algorithm in [8] was applied to high-dimensional function optimization because of its improved convergence accuracy on high-dimensional functions. The study in [9] applied an improved dolphin colony algorithm to solve the high-dimensional function optimization problem and achieved good results. Although these improved methods outperform the original algorithms in high-dimensional function optimization, they still suffer from low convergence precision and low solving efficiency, making it difficult to meet the ever more demanding requirements of engineering science and technology.

In [10], the wolf pack algorithm (WPA) was proposed based on a detailed analysis of wolves’ scouting behavior and prey allocation methods. WPA abstracts three kinds of artificial wolves (the lead wolf, the scout wolf, and the ferocious wolf). In the process of hunting, there are three kinds of intelligent behaviors (the calling of the lead wolf, the wandering of the scout wolf, and the siege of the ferocious wolf), in addition to the leader generation rule of “the winner is the king” and the population updating rule of “survival of the fittest”. As a novel swarm intelligence algorithm, WPA has good performance in global optimization and local exploration [11]. Its optimization mechanism differs from earlier bionic intelligent algorithms such as particle swarm optimization (PSO) [12], the ant colony optimization (ACO) algorithm [13], the shuffled frog leaping algorithm (SFLA) [14], the genetic algorithm (GA) [15], and the artificial bee colony (ABC) algorithm [16]. It is widely adopted in PID parameter tuning [17], path planning [18], the knapsack problem [19], combinatorial optimization problems [20], etc. The study in [10] indicates that the superiority of WPA is obvious in solving complex and high-dimensional problems. However, some shortcomings remain: the randomly produced initial population makes the algorithm prone to local optima, and low-quality initial solutions increase the computational load; the greedy search method and the rigid wandering directions of the scout wolves risk falling into local optima and missing the optimal solution; and the three step sizes tied by fixed linear relationships make the movement of the whole pack inflexible and increase the computational complexity of the algorithm.

The study in [21] uses the tent chaotic map to improve the quality of the initial solutions of the wolf pack algorithm, endowing WPA with faster convergence and higher solution accuracy. In [22], a chaos optimization method based on the logistic map was used to initialize the population, which improved the optimization accuracy and convergence rate of WPA. In [23], d-dimensional chaotic variables were mapped to the solution space to obtain the initial wolf pack. With the aid of opposition-based learning (OBL), the study in [24] adopts an opposite wolf pack initialization method and puts forward an oppositional wolf pack algorithm (OWPA), which improves the quality of the initial solution and the convergence speed of the algorithm.

However, because they consider only the initial population quality or the initial population distribution, the above methods remain prone to falling into local optima, and their convergence accuracy is low on some functions. Moreover, these improved methods have not been applied to high-dimensional function optimization, so it is difficult to show that they have advantages there.

Compared with the existing research work, the four main innovative aspects of this paper are as follows:
(1) Chaotic logical self-mapping sequences are generated and mapped to the solution space, and the initial population is then further optimized by the opposition (reverse) formula.
(2) A disturbance factor is used to perturb the scout wolf’s direction, increasing the randomness of the scouting.
(3) The step lengths of the scout wolf and the ferocious wolf are made adaptive, so that each wolf adjusts its moving speed according to its own position during the movement process.
(4) Extensive experimental comparisons are carried out on high-dimensional functions, showing the effectiveness of the improved algorithm.

The remainder of this paper is organized as follows: In Section 2, the chaotic disturbance wolf pack algorithm (CDWPA) is described in detail. In Section 3, a theoretical analysis of the algorithm is provided. In Section 4, a variety of benchmark functions with different mathematical characteristics are tested. The simulations show that CDWPA possesses higher convergence accuracy and computational robustness and can effectively solve high-dimensional function optimization problems.

2. Chaotic Disturbance Wolf Pack Algorithm

The wolves are located in a Euclidean space of size $N \times D$, in which $N$ is the total number of artificial wolves in the pack and $D$ is the number of variables to be optimized. The position of artificial wolf $i$ can be expressed as $X_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$, where $x_{id}$ is the position of the $i$th artificial wolf in the $d$th dimension of the variable space to be optimized; the odor concentration of prey perceived by artificial wolf $i$ is $Y_i = f(X_i)$, where $f$ is the target function. The improved wolf pack algorithm is composed of chaotic reverse initialization of the population, lead wolf generation, adaptive and disturbance scouting, ferocious wolf siege, and population update rules. The algorithm flowchart is shown in Figure 1.

2.1. Chaotic Opposite Initialization of the Population

Chaotic states exist widely in nature and society, with the characteristics of randomness, ergodicity, and regularity. Chaotic movement can traverse all states within a certain range according to its own law. Therefore, in order to ensure the diversity and randomness of the individuals of the wolf pack, this paper adopts the logical self-mapping (LSM) function to generate the wolf pack sequence. First, $N$ random wolves are created, and then the wolves are mapped to the chaotic space by the LSM iteration in equation (1).

To intuitively display the characteristics of the above chaotic map, the probability distribution of the logical self-mapping on the (0, 1) interval is compared with those of the Gauss map [25], the sinusoidal iterator [26], and the logistic map [27]. The probability distribution is estimated as follows: 50,000 chaotic points are generated by 50,000 iterations of each chaotic map, the interval (0, 1) is equally divided into 100 subintervals, and the proportion of the 50,000 points falling into each subinterval is counted; the probability distribution diagrams are shown in Figures 2–5.
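The estimation procedure can be sketched in a few lines of Python. The map formulas below are assumptions: the paper's equation (1) is not quoted in this text, so the logical self-map uses the commonly cited form $z_{k+1} = |1 - 2z_k^2|$ folded onto (0, 1), and the Gauss and sinusoidal iterators follow frequently used textbook forms that may differ from [25, 26]; only the logistic map $x_{k+1} = 4x_k(1 - x_k)$ is standard.

```python
import numpy as np

def map_histogram(step, x0=0.37, n=50_000, bins=100):
    """Iterate a chaotic map n times and estimate its density on (0, 1)."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = step(x)
        xs[i] = x
    counts, _ = np.histogram(xs, bins=bins, range=(0.0, 1.0))
    return counts / n  # probability of falling into each subinterval

logistic = lambda x: 4.0 * x * (1.0 - x)                  # standard logistic map
sinusoidal = lambda x: 2.3 * x**2 * np.sin(np.pi * x)     # assumed form of [26]
gauss = lambda x: (1.0 / x) % 1.0 if x > 1e-12 else 0.7   # assumed form of [25]
lsm = lambda x: abs(1.0 - 2.0 * x**2)                     # assumed LSM iterate

for name, f in [("logistic", logistic), ("sinusoidal", sinusoidal),
                ("Gauss", gauss), ("LSM", lsm)]:
    p = map_histogram(f)
    print(f"{name:10s}  min bin p = {p.min():.4f}  max bin p = {p.max():.4f}")
```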

From the analysis of Figures 2–5, the distribution of the sinusoidal iterator is low at the front end but high at the back end, with most of the initial solutions concentrated toward the upper end of (0, 1). If the optimal value of the function does not lie in that region, the sinusoidal iterator will easily miss the optimal solution. The Gauss map is just the opposite; its probability distribution shows a decreasing trend. The distribution of the logistic map is very large at both ends and small in the middle, so if the global optimal solution of the problem lies in the middle, a large number of invalid searches will appear, which is unfavorable to global optimization. LSM chaotic sequences are uniformly distributed over the interval (0, 1). This distribution avoids oversearching some local areas (for example, a large number of logistic-map searches are concentrated at both ends of the interval), thus reducing the adverse effect of a mismatch between the distribution of the chaotic sequence and the position of the global optimal solution of the optimization problem.

The logical self-mapping function is chaotic on its whole definition domain except for the points 0 and 0.5. After the chaotic variable sequence is obtained by the chaotic search, the chaotic ergodic sequence is transformed back into the original solution space according to equation (2) to evaluate the fitness values.

In equation (2), $ub$ and $lb$ represent the upper and lower boundaries of the artificial wolf’s position, respectively. The objective function values of the system-generated random wolf pack and the chaotically mapped wolf pack are calculated separately; whenever a better solution is found in this process, the better wolf replaces the original one. The optimized wolf pack is then operated on by the opposition formula to obtain the reverse wolf pack.

The $N$ artificial wolves with the best target function values are selected from the union of the optimized pack and its reverse pack to form the initial pack, completing the initialization of the wolves.
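As a hedged illustration, the following sketch assumes the commonly used LSM form $z' = |1 - 2z^2|$ and the standard OBL reflection $\tilde{x} = lb + ub - x$; the paper's equations (1)–(4) are paraphrased, not quoted:

```python
import numpy as np

def chaotic_opposite_init(f, lb, ub, n, rng=None):
    """Sketch of the chaotic-opposite initialization.

    f      : objective function; larger values = better (odor concentration)
    lb, ub : per-dimension lower/upper bounds, shape (D,)
    n      : pack size
    """
    rng = np.random.default_rng(rng)
    D = lb.size
    # random wolves, avoiding the LSM-excluded points 0 and 0.5
    z = rng.uniform(0.05, 0.95, size=(n, D))
    z = np.abs(1.0 - 2.0 * z**2)                 # assumed LSM iterate
    chaotic = lb + z * (ub - lb)                 # map back to the solution space
    randoms = lb + rng.uniform(size=(n, D)) * (ub - lb)
    # keep the better of each random/chaotic pair
    keep = np.array([f(c) for c in chaotic]) > np.array([f(r) for r in randoms])
    pack = np.where(keep[:, None], chaotic, randoms)
    opposite = lb + ub - pack                    # opposition-based learning
    union = np.vstack([pack, opposite])
    fitness = np.array([f(x) for x in union])
    return union[np.argsort(-fitness)[:n]]       # the n best wolves

# usage: 30 wolves for a 10-D problem (maximize the negative sphere)
lb, ub = -100.0 * np.ones(10), 100.0 * np.ones(10)
pack = chaotic_opposite_init(lambda x: -np.sum(x**2), lb, ub, 30, rng=1)
```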

2.2. Selecting Lead Wolf

The lead wolf, responsible for directing the whole pack to cooperate in hunting, is the artificial wolf with the largest odor concentration of prey in each generation. In the algorithm, the artificial wolf with the largest objective function value in the current pack is selected as the lead wolf. There is only one lead wolf; if several wolves share the same objective function value, one of them is randomly selected as the leader. Meanwhile, the lead wolf does not hold its position for life: during the run, if the objective function value of another artificial wolf is better than that of the current leader, that wolf replaces the former one and becomes the current leader. The lead wolf does not perform the wandering, besieging, and other behaviors.
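In code, this selection rule amounts to an argmax with a random tie-break; a minimal sketch:

```python
import numpy as np

def select_lead_wolf(fitness, rng=None):
    """Pick the wolf with the largest odor concentration; ties break at random."""
    rng = np.random.default_rng(rng)
    tied = np.flatnonzero(fitness == fitness.max())  # all wolves tied at the top
    return int(rng.choice(tied))                     # index of the single leader
```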

2.3. Adaptive Disturbance Scout

After the lead wolf is determined, the scout wolves search according to their own positions; the basic walking formula is given in equation (5).

In this formula, $x_{id}$ refers to the current position of the scout wolf, $step_a$ is the walking step length, and $p$ indexes the walking direction, $p = 1, 2, \ldots, h$. Once $h$ is given, the scout wolf can only walk along fixed directions, which makes it easy to fall into a local optimum. Therefore, a random disturbance factor $r$ is designed, changing the walking formula to equation (6).

At the same time, considering the distance between the scout wolf and the lead wolf, the scouting step length is improved to the adaptive form in equation (7), where $g$ stands for the current location of the lead wolf, i.e., the best position found so far. In this way the scout wolf not only covers the previous fixed search range but also searches other directions randomly, which increases its optimum-seeking ability by breaking through the encirclement. The set of walking positions of the scout wolf is evaluated, and the target function values are compared with that of the lead wolf: if a scout's fitness value is greater than the current lead wolf's, that scout becomes the lead wolf.

Analyzing equation (5), once the search directions are fixed, the migration of the different scout wolves is equivalent to movement along $h$ fixed directions. As shown in Figure 6, if the optimal prey is not located on those parallel search lines, the scout wolf is easily trapped in a local optimum and escapes with difficulty. When the disturbance factor and the adaptive step size are added, the search direction of the scout wolf is more random and has a strong ability to break through, so it is less likely to fall into a local optimum.
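Since equations (5)–(7) are only referenced above, the following sketch encodes one hedged reading of them: the classic WPA directions $\sin(2\pi p/h)$, a per-dimension disturbance $r \sim U(-1, 1)$, and a step scaled by the distance to the lead wolf. The scaling form is an assumption for illustration.

```python
import numpy as np

def disturbance_scout(f, wolf, lead, step_a, h, rng=None):
    """One wandering round of a scout wolf (hedged reading of eqs. (5)-(7))."""
    rng = np.random.default_rng(rng)
    dist = np.linalg.norm(lead - wolf)
    step = step_a * (1.0 + dist / (1.0 + dist))      # assumed adaptive scaling
    best_pos, best_fit = wolf, f(wolf)
    for p in range(1, h + 1):
        r = rng.uniform(-1.0, 1.0, size=wolf.shape)  # disturbance factor
        cand = wolf + (np.sin(2.0 * np.pi * p / h) + r) * step
        if f(cand) > best_fit:                       # larger odor concentration wins
            best_pos, best_fit = cand, f(cand)
    return best_pos, best_fit
```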

2.4. The Siege of the Ferocious Wolf

When the scouting behavior is over, the current lead wolf initiates the calling behavior to inform the other artificial wolves of its location and fitness value. The remaining artificial wolves move closer to the lead wolf in rush steps determined by their distance from the lead wolf.

In the rush formula, $g$ stands for the current location of the lead wolf and $step_b$ is the rush step length of the ferocious wolf. When an artificial wolf is far from the lead wolf, it approaches with a larger step length; when it is close, it approaches with a smaller step length. In addition, if the odor concentration of prey perceived by a ferocious wolf is greater than that of the current lead wolf, the stronger wolf becomes the leader and guides the pack to hunt until it is replaced by a better wolf or the run ends.

During the rush, if the odor concentration of prey perceived by ferocious wolf $i$ satisfies $Y_i > Y_{lead}$, wolf $i$ replaces the former lead wolf and initiates the calling behavior. If $Y_i \le Y_{lead}$, the ferocious wolf continues to attack until the distance between it and the lead wolf is less than the judging distance $d_{near}$, at which point it switches to the besieging behavior. Given that the value range of the $d$th variable to be optimized is $[lb_d, ub_d]$ and $\omega$ is the distance determination factor, the judging distance is
$$d_{near} = \frac{1}{D\,\omega} \sum_{d=1}^{D} (ub_d - lb_d).$$

The Act of Siege. After the rush, the wolves besiege the prey to capture it. At this point, the position of the lead wolf is regarded as the location of the prey. If the prey’s position in the $d$th dimension is $G_d$, the siege behavior of wolf $i$ can be expressed as
$$x_{id}^{k+1} = x_{id}^{k} + \lambda \cdot step_c \cdot \left| G_d - x_{id}^{k} \right|,$$
where $\lambda$ is a random number evenly distributed in the interval $[-1, 1]$ and $step_c$ is the siege step length. If the odor concentration of prey perceived by an artificial wolf at its new position is greater than that at its original position, the wolf’s location is updated; otherwise, it remains unchanged.
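The rush and siege can be sketched together. The judging distance and the siege update below follow the original WPA [10] and are assumed to carry over unchanged to CDWPA:

```python
import numpy as np

def rush_and_besiege(f, wolf, lead, lb, ub, step_b, step_c, omega, rng=None):
    """Rush toward the lead wolf, then besiege (WPA-style sketch)."""
    rng = np.random.default_rng(rng)
    D = lb.size
    d_near = np.sum(ub - lb) / (D * omega)           # judging distance
    for _ in range(1000):                            # guard against oscillation
        if np.linalg.norm(lead - wolf) <= d_near:
            break
        wolf = wolf + step_b * np.sign(lead - wolf)  # rush step toward the leader
    lam = rng.uniform(-1.0, 1.0, size=D)             # siege randomness
    cand = np.clip(wolf + lam * step_c * np.abs(lead - wolf), lb, ub)
    return cand if f(cand) > f(wolf) else wolf       # greedy position update
```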

2.5. Updating the Wolves

To maintain the competitiveness and diversity of the wolf pack, after the hunting behavior ends, the pack distributes the prey according to the wolves’ contributions and eliminates the $R$ artificial wolves with the worst fitness values. At the same time, $R$ new artificial wolves are randomly generated to complete the population update. $R$ is a random integer in $[n/(2\beta), n/\beta]$, where $\beta$ is taken as the updating proportion factor.
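A minimal sketch of this renewal rule, assuming the range of $R$ given above (the standard WPA choice):

```python
import numpy as np

def update_pack(pack, fitness, lb, ub, beta, rng=None):
    """'Survival of the fittest' renewal: the R worst wolves are replaced by
    random newcomers, with R a random integer in [n/(2*beta), n/beta]."""
    rng = np.random.default_rng(rng)
    n, D = pack.shape
    R = int(rng.integers(max(n // (2 * beta), 1), n // beta + 1))
    worst = np.argsort(fitness)[:R]                  # lowest odor concentration
    pack[worst] = lb + rng.uniform(size=(R, D)) * (ub - lb)
    return pack
```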

The steps of the chaotic disturbance wolf pack algorithm are as follows (a compact end-to-end sketch follows the list):
Step 1. Chaotic reverse initialization: initialize the spatial coordinates of the wolf pack in the solution space according to equations (1)–(4).
Step 2. Adaptive disturbance walk: carry out the walking strategy according to equations (6)–(7).
Step 3. Calling behavior: the lead wolf sends out a signal from its position; after receiving the signal, the besieging wolves adopt the adaptive attack strategy to approach the lead wolf.
Step 4. The siege of the ferocious wolves: after reaching the siege distance, the ferocious wolves start to encircle the prey.
Step 5. Update the positions of the wolf pack: sort the wolves by fitness value and select the best one.
Step 6. Population regeneration: eliminate the individuals with poor fitness and generate an equal number of new individuals.
Step 7. Repeat Steps 2–6 until the maximum number of iterations is reached or the required accuracy threshold is met.
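To tie the steps together, here is a compact, self-contained sketch of the whole loop. The step sizes, the parameter values, and the simplification that every wolf (including the leader) executes each behavior are assumptions for illustration, not the paper's exact settings:

```python
import numpy as np

def cdwpa(f, lb, ub, n=30, h=4, beta=6, omega=500, iters=200, seed=0):
    """End-to-end CDWPA sketch (Steps 1-7), maximizing f; hypothetical settings."""
    rng = np.random.default_rng(seed)
    D = lb.size
    # Step 1: chaotic-opposite initialization (assumed LSM + OBL reflection)
    z = np.abs(1.0 - 2.0 * rng.uniform(0.05, 0.95, (n, D)) ** 2)
    cand = lb + z * (ub - lb)
    pool = np.vstack([cand, lb + ub - cand])
    pack = pool[np.argsort([-f(x) for x in pool])[:n]]
    step_a = np.mean(ub - lb) / 50.0                 # hypothetical step settings
    step_b, step_c = step_a / 2.0, step_a / 4.0
    d_near = np.sum(ub - lb) / (D * omega)
    for _ in range(iters):
        fit = np.array([f(x) for x in pack])
        lead = pack[np.argmax(fit)].copy()           # select the lead wolf
        for i in range(n):                           # Step 2: disturbance walk
            p = rng.integers(1, h + 1)
            r = rng.uniform(-1.0, 1.0, D)
            c = np.clip(pack[i] + (np.sin(2 * np.pi * p / h) + r) * step_a, lb, ub)
            if f(c) > fit[i]:
                pack[i] = c
        for i in range(n):                           # Steps 3-4: rush, then siege
            if np.linalg.norm(lead - pack[i]) > d_near:
                pack[i] = np.clip(pack[i] + step_b * np.sign(lead - pack[i]), lb, ub)
            lam = rng.uniform(-1.0, 1.0, D)
            c = np.clip(pack[i] + lam * step_c * np.abs(lead - pack[i]), lb, ub)
            if f(c) > f(pack[i]):
                pack[i] = c
        fit = np.array([f(x) for x in pack])         # Steps 5-6: renewal
        R = int(rng.integers(max(n // (2 * beta), 1), n // beta + 1))
        pack[np.argsort(fit)[:R]] = lb + rng.uniform(size=(R, D)) * (ub - lb)
    fit = np.array([f(x) for x in pack])
    return pack[np.argmax(fit)], float(fit.max())

# usage: minimize the 10-D sphere by maximizing its negative
best_x, best_y = cdwpa(lambda x: -np.sum(x**2), -100*np.ones(10), 100*np.ones(10))
```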

3. Theoretical Analysis of the CDWPA Algorithm

3.1. Convergence Analysis of the Algorithm

A Markov chain is a stochastic process with no aftereffect, which is often used to prove the convergence of algorithms [28]. CDWPA repeatedly performs chaotic sequence generation, reverse population, scouting, summoning, rushing, besieging, and pack renewal. Each behavior of the population is related only to the current state of the population and has nothing to do with previous states. Therefore, the CDWPA population sequence forms a Markov chain.

Reference [29] has proved that an intelligent algorithm converges to the global optimal solution with probability 1 if it satisfies the following two conditions:
Condition one: any solution in the solution space can be reached from any other solution through the operators of the algorithm.
Condition two: the population fitness sequence is monotone, i.e., the best solution found never degrades.

According to the relevant theory, the proof reduces to the following two points:
Point one: the CDWPA population sequence is ergodic.
Point two: CDWPA is a finite homogeneous Markov chain.

If the above two conditions are satisfied, CDWPA will converge to the global optimal solution with probability 1.

Assume that the state space of CDWPA is $S$, and that the state transitions caused by the chaotic sequence, reverse population, scouting, rushing, besieging, and population updating behaviors are expressed by the transition matrices $H$, $F$, $Y$, $B$, $W$, and $Z$, respectively. The transition probability matrix of the Markov chain of CDWPA is then obtained as follows.
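A plausible form of this matrix, assuming one generation applies the six behaviors in sequence so that the composite one-step transition matrix is the product of the individual matrices, is:

```latex
% Hedged reconstruction: the six behaviors act in sequence, so the one-step
% transition matrix of the CDWPA population Markov chain is their product.
P \;=\; H \, F \, Y \, B \, W \, Z
```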

The definitions are given as follows:

Definition 1. Assume $P$ to be the transition probability matrix of a Markov chain. If for every pair of states $i$ and $j$ there exists a $k$ such that $(P^k)_{ij} > 0$, then the Markov chain is said to be irreducible.

Definition 2. Assume that for a state $i$ the set $\{k \ge 1 : (P^k)_{ii} > 0\}$ is nonempty. If the greatest common divisor of this set is 1, then the Markov chain is aperiodic.

Definition 3. Assume that $i$ is a recurrent state. If the mean return time of $i$ is finite, $i$ is called positive recurrent. In particular, if every state is positive recurrent and aperiodic, then the Markov chain is ergodic.
To prove the convergence of CDWPA, we establish the two points above.

Point one: the Markov chain of the CDWPA population sequence is ergodic.

(1) The Markov chain of the CDWPA population sequence is irreducible. Suppose the $k$th-generation population of the algorithm is $s^k$, each component being the state of one artificial wolf. Because the transition probability matrix of the population Markov chain depends only on the initial and terminal states and every behavior occurs with positive probability, the transition probability matrix $P$ of the population is positive. According to Definition 1, the Markov chain of the CDWPA population sequence is irreducible.

(2) The Markov chain of the CDWPA population is aperiodic. For a given state, the chain obtained in point (1) is irreducible, so a return to the state in some finite number of steps is inevitable; combined with Definition 2, the greatest common divisor of the return-time set is 1. Therefore, the Markov chain of the CDWPA population is aperiodic. From (1) and (2), the Markov chain of the CDWPA population is aperiodic and irreducible.

The Markov chain of the CDWPA population is ergodic. Since the entries of the transition matrices $H$, $F$, $Y$, $B$, $W$, and $Z$ all lie within $[0, 1]$ and $P$ gives the probability of one state being transformed into another through the various behaviors, the relevant return probabilities are positive; the Chapman–Kolmogorov equation together with Definition 3 then yields ergodicity. In conclusion, the Markov chain of the CDWPA population sequence is ergodic, and point one is proved.

Point two: CDWPA is a finite homogeneous Markov chain. First, each generation of the population is finite, and so is the state space of the Markov chain. Second, the algorithm repeatedly performs chaotic sequence generation, reverse population, scouting, summoning, rushing, besieging, and pack updating to find better prey, and the individual renewal preferentially retains high-quality individuals; the next generation is related only to the current generation. After repeated iterative optimization, CDWPA obtains a sequence of solutions, and this sequence is a finite homogeneous Markov chain.

To sum up, points one and two show that the Markov chain of the CDWPA population sequence is ergodic and that its solution sequence is a finite homogeneous Markov chain. Therefore, CDWPA converges to the global optimal solution of the problem with probability 1.

3.2. Time Complexity Analysis of the Algorithm

The time complexity reflects the operation efficiency of the algorithm. The time complexity of cuckoo algorithm is analyzed in [30]. In this paper, the same method is adopted to analyze the time complexity of CDWPA.

In WPA, we set the size of the wolf pack as $n$ and the individual dimension as $D$. If the time to set each of the step size, the update scale factor, the number of search directions, and the judging distance is $t_1$, the time to generate one random number is $t_2$, and the time to evaluate the fitness function once is $t_3$, then the time complexity of the initialization phase is
$$T_1 = 4t_1 + n(D\,t_2 + t_3) = O(nD).$$

The wolf pack is sorted by fitness value, the time to select the lead wolf is $t_4$, and the time for an artificial wolf to execute the walking strategy is $t_5$. In the summoning step, the time to compute the distance between an artificial wolf and the lead wolf is $t_6$, the time needed to move the position of a single wolf in one dimension is $t_7$, and the time to judge whether the attack range is reached is $t_8$; the time complexity of this stage is then
$$T_2 = t_4 + n(t_5 + t_6 + D\,t_7 + t_8) = O(nD).$$

From the above formulas, the time complexity of WPA to obtain the optimal solution in each generation is $T_{WPA} = T_1 + T_2 = O(nD)$.

The process of CDWPA is analyzed similarly. In the initialization stage, the time to generate one term of the logical self-mapping sequence is $t_9$, and the time to generate one reverse individual is $t_{10}$; the other parameters, dimensions, and fitness evaluations are the same as those of WPA. The time complexity of the CDWPA initialization phase is then
$$T_1' = 4t_1 + n\big(D\,(t_2 + t_9 + t_{10}) + t_3\big) = O(nD).$$

The time to calculate the distance between the scout wolf and the lead wolf, added by the adaptive step length, is $t_{11}$, and the other wolves perform the same scouting, summoning, and besieging steps as in WPA; the time complexity of this stage is then
$$T_2' = t_4 + n(t_5 + t_6 + D\,t_7 + t_8 + t_{11}) = O(nD).$$

To sum up, the total time complexity of CDWPA to find the optimal value in each generation is
$$T_{CDWPA} = T_1' + T_2' = O(nD).$$

From the above analysis, compared with WPA, the asymptotic time complexity of CDWPA does not change, and the efficiency of CDWPA does not decrease.

4. Simulation Experiment and Algorithm Validity Test

4.1. Test Function

In order to test the performance of the proposed CDWPA, we select 10 standard test functions from [31] (with dimensions from 2 to 200) for the first set of experiments and compare its optimization performance with the wolf pack algorithm (WPA), the oppositional wolf pack algorithm (OWPA) in [24], the chaotic wolf pack algorithm (CWPA) in [22], PSO, ABC, and ASFA [32]. To further verify the ability of the improved algorithm to solve high-dimensional complex functions, a second set of experiments is carried out: four high-dimensional (500- and 1000-dimensional) functions covering different types are selected, and the test results of the seven algorithms are compared. The specific characteristics of the 10 standard test functions are shown in Table 1, where “U” denotes a unimodal function, “M” a multimodal function, “S” a separable function, and “N” a nonseparable function. Unimodal functions test the exploitation ability of an algorithm, multimodal functions test its exploration ability, and high-dimensional functions test its ability to solve complex problems.

To express the optimization effect more clearly, BEST, WORST, MEAN, and STD denote the best objective function value, the worst value, the mean value, and the standard deviation of the objective function at the end of the optimization, respectively. SR (successful rate) indicates the success rate of optimization, and the ideal optimal value of the function is denoted R. When the final result and R satisfy relation (18), the run is counted as successful.
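A plausible form of criterion (18), with the tolerance $\varepsilon$ an assumed parameter, is that a run succeeds when the best value found is close enough to the ideal optimum:

```latex
% Hedged reconstruction of the success criterion; epsilon is assumed.
\left| \mathrm{BEST} - R \right| \;\le\; \varepsilon
```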

4.2. Simulation Experiment and Result Analysis

Because different authors adopt different parameters, this paper uses the same general parameters across algorithms: the population size is 100 (wolves, particles, fish, bees, etc.), the maximum number of iterations in the first set of experiments is 200, and in the second set it is 2000. The other parameter settings are shown in Table 2.

The experimental environment is as follows: an HP OMEN 4 laptop, Windows 10, Intel® Core™ i7-8750H; the programs are implemented in MATLAB R2017b (M language).

The parameters listed in Table 2 are, respectively: the maximum number of scouting times, the step length factor, the distance determination factor, and the renewal scaling factor (for the WPA variants); the inertia weight, the two learning factors, and the individual speed limit (for PSO); the number of bees, the control parameter, and the maximum number of trials (for ABC); and the visual range, the crowding factor, and the step size (for ASFA).

4.2.1. Analysis of the First Set of Experimental Results

Table 3 shows the optimal value, mean value, worst value, standard deviation, and success rate of the seven algorithms for ten standard test functions.

Table 3 compares the first set of experimental results of the seven algorithms. The analysis shows that the success rate of CDWPA on all ten test functions is 100%, and its results are better than those of WPA, OWPA, and CWPA. Except that its optimization of the 2-D Booth function is worse than ABC’s, its performance on the other functions is better than that of the other algorithms, and the convergence accuracy of CDWPA is improved on several functions. In particular, on some functions the convergence results reach the theoretical optimal value of 0. The simulation results show the effectiveness of the improved CDWPA.

Under the same population size and iteration count, on the unimodal separable sphere function, the success rates of PSO, ABC, and ASFA are 0, while those of WPA, OWPA, CWPA, and CDWPA are 100%. Moreover, CDWPA is clearly superior to the other algorithms in terms of the optimal value, worst value, standard deviation, and average value, indicating better calculation accuracy and robustness.

Except for ABC, all the algorithms can successfully solve the unimodal low-dimensional nonseparable functions Easom and Matyas. OWPA attains the best optimal value, with CDWPA following closely behind with only a slight difference; moreover, CDWPA is better than OWPA and the other algorithms in mean value and standard deviation, which shows that CDWPA has stronger stability and balance.

When solving the low-dimensional, multimodal, separable function Booth, the success rate of every algorithm is 100% except ASFA’s, which is 95%. CWPA attains the best optimal value. Compared with WPA and the two improved wolf pack algorithms OWPA and CWPA, CDWPA is better in terms of mean, variance, and worst value; however, CDWPA has no obvious advantage over ABC.

In solving the middle-dimensional, unimodal, separable Step function, WPA and CDWPA can find the theoretical optimal value with a success rate of 100%, and CDWPA needs fewer iterations and has higher optimization efficiency. However, OWPA, which uses the same reverse strategy, achieves a success rate of only 50%, and its convergence accuracy is lower than CDWPA’s, because CDWPA additionally uses chaotic sequences to optimize the initial population.

For the unimodal, multidimensional, nonseparable function Eggcrate, the success rates of PSO, ABC, WPA, and CDWPA are 100%, those of OWPA and CWPA are 95% and 40%, respectively, and that of ASFA is only 5%. CDWPA performs best in terms of the optimal value, being many orders of magnitude more accurate than the other algorithms, and it also performs better in mean value, standard deviation, and worst value.

For the 200-dimensional functions, PSO, ABC, and ASFA cannot optimize well when the number of iterations is set to 200; they easily fall into local optima and find it difficult to escape. WPA, OWPA, CWPA, and the proposed CDWPA all perform well on high-dimensional functions. By analyzing and comparing the behavior of these algorithms in solving high-dimensional functions, the characteristics of CDWPA can be illustrated.

First, the function Sumsquares is a high-dimensional, unimodal, separable function, and Quadric is a unimodal, multidimensional function. The success rates of WPA, OWPA, and CDWPA are 100%, and that of CWPA is 90%. The only difference between these two functions and the earlier low-dimensional ones is their high dimensionality, which shows that CDWPA has an advantage in high-dimensional function optimization; CDWPA improves considerably in the optimal value, average value, variance, and other indicators.

For Griewank and Ackley, 200-dimensional, multimodal, separable functions, the success rates of WPA are 95% and 96%, and those of CWPA are 100% and 0, respectively, so these algorithms easily fall into local extrema. For OWPA and CDWPA the success rate is 100%, and CDWPA reaches the global extremum. Comparing OWPA with CDWPA further, CDWPA is better than OWPA in the precision of the optimal value and has a smaller variance, which indicates that CDWPA is more robust.

For the low-dimensional functions, the classical intelligent optimization algorithms PSO, ABC, and ASFA already solve well. Although CDWPA improves on them, the improvement is not very obvious, and its result is even slightly worse than ABC’s on the Booth function. However, as the dimension rises to 100, the advantage of CDWPA becomes apparent. The analysis shows that, for most heuristic optimization algorithms, random variables are generated from a standard probability distribution, such as the uniform or Gaussian distribution. Since chaos is not only random but also spatially ergodic and nonrepeating, adding chaos to a random search based on a fixed probability distribution makes the algorithm more diverse and increases the chance of jumping out of a local extremum, enabling faster search. When the dimension is low, the solution space is relatively simple, the distribution of the initial population meets the needs of the search, and the advantage of chaos is not obvious. When the solution space becomes more complex, however, higher-quality solutions are required, and the advantages of chaos and disturbance are fully manifested. This is why the optimization effect of the algorithm improves significantly on high-dimensional complex functions but not obviously on low-dimensional ones.

4.2.2. Analysis of the Second Set of Experimental Results

Tables 4 and 5 show the mean value, variance, and optimal value of the four test functions over 20 independent runs of the seven algorithms in 500 and 1000 dimensions, respectively. Tables 6 and 7 show the P values of the rank sum test. Figures 7 and 8 show the optimization curves of the seven algorithms in 500 and 1000 dimensions.

The advantages of CDWPA are not very obvious on low-dimensional functions. However, when the function dimension increases to 500 or even 1000, the results of CDWPA are better than those of the other algorithms in optimal value, mean value, and standard deviation. In the 500-dimensional case, CDWPA achieves the theoretical optimal value of 0 on the Griewank function. As the function dimension increases, the convergence accuracy of all the algorithms decreases, but the optimization effect of CDWPA is clearly better than that of the others, and its standard deviation is also better, demonstrating stronger robustness. In general, CDWPA performs better than the other six algorithms on the 500- and 1000-dimensional test functions, which verifies its effectiveness in solving high-dimensional complex functions.

For low-dimensional space optimization, the guiding information for the search is easy to obtain and confirm, so introducing optimization strategies based on the idea of “certainty” can improve the algorithm’s precision and convergence speed. High-dimensional data spaces, however, present highly nonlinear, asymmetric, nonconvex, multipeak features, and accurate guiding information is very difficult to obtain; in this situation, introducing some “random” strategy into the optimization strategy and parameter settings helps the algorithm maintain population diversity, produce more high-quality candidates, jump out of local extrema, and balance its exploration and exploitation abilities. Precisely because the strategy adopted by CDWPA is a “directed random” method, its performance in low-dimensional solution spaces is improved but the gap with other algorithms is not obvious, whereas in high-dimensional solution spaces the advantage of “directed random” is clearly reflected.

To determine whether the proposed CDWPA is significantly improved compared with the other algorithms, a nonparametric statistical test, Wilcoxon’s rank sum test [33], was carried out. The results of CDWPA on each benchmark function in 500 and 1000 dimensions were compared with those of the other algorithms at the 5% significance level. Tables 6 and 7 list the P values obtained from the tests; values less than 0.05 indicate that the null hypothesis is rejected, so there is a significant difference at the 5% level. Conversely, an underlined P value (greater than 0.05) indicates no significant difference between the compared values. From the results in Tables 6 and 7, in most comparisons the values are less than 0.05, which shows that the improvements achieved by the proposed CDWPA are statistically significant for most of the benchmark functions.
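The test itself is standard; a minimal sketch with SciPy, where `cdwpa_runs` and `other_runs` are hypothetical arrays holding the 20 final objective values of two algorithms on one benchmark:

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
cdwpa_runs = rng.normal(0.0, 1e-6, size=20)   # placeholder results, 20 runs
other_runs = rng.normal(1e-3, 1e-4, size=20)  # placeholder results, 20 runs

stat, p_value = ranksums(cdwpa_runs, other_runs)  # Wilcoxon rank sum test
print(f"p = {p_value:.3e}; significant at the 5% level: {p_value < 0.05}")
```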


Figure 9 compares the standard deviations of the seven algorithms when optimizing the Ackley function in 300–1000 dimensions. STD reflects the stability of an algorithm during optimization, and as the dimension rises the solution space becomes more and more difficult. Analyzing the STD of the seven algorithms on the Ackley function, we find that PSO, ABC, and OWPA have stable standard deviations but very poor optimization effects: they fail to find the optimal value, fall into local optima, and cannot get out of the local optimal trap. CWPA falls into a local optimum at dimension 300 and cannot effectively jump out of the local extremum, so it also fails to optimize successfully. WPA and CDWPA have good optimization effects; interestingly, their standard deviation curves are similar, with large pulsations at 500 and 1000 dimensions, indicating that these algorithms can effectively jump out of local optima and find better targets when optimizing in higher dimensions. OWPA adopts the reverse strategy to optimize the initial population but does not consider the distribution of the initial population, which easily leads to population aggregation. The initial populations of WPA and CWPA are relatively dispersed, but because of their low quality and the relatively fixed migration directions, the convergence accuracy during optimization is not high. CDWPA considers both the distribution and the quality of the initial population and adds disturbance factors to the movement of the wolves, which makes CDWPA the most efficient in high-dimensional function optimization.

5. Conclusion

The wolf pack algorithm (WPA) easily falls into local optima when solving high-dimensional complex functions, and its convergence accuracy needs improvement. In this paper, we use logical self-mapping to generate a chaotic sequence in the initialization stage and optimize the initial population through the reverse wolf pack; a disturbance factor is added to the scouting process, which improves the scout wolves’ ability to jump out of local optima. Meanwhile, according to its distance from the lead wolf, the ferocious wolf adopts an appropriate step size to approach the leader. The convergence of the algorithm is analyzed via a Markov process, and different types of functions are selected for testing. The simulation results show that the comprehensive performance of CDWPA is better than that of the wolf pack algorithm (WPA), the oppositional wolf pack algorithm (OWPA), the chaotic wolf pack algorithm (CWPA), and three widely applied swarm intelligence algorithms (PSO, ABC, and ASFA), and that CDWPA is well suited to high-dimensional function optimization problems. The improvement method can also be applied to other intelligent optimization algorithms, and the algorithm itself can be applied to large-scale scheduling problems, photovoltaic power generation, multidimensional data clustering, and other problems.

However, practical problems are often more complex and changeable, and no method can be perfect. Premature convergence still restricts the development of swarm intelligence algorithms, and solving this problem to further improve performance and efficiency is our next focus. In addition, expanding the application scope of the algorithm remains a key research direction.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was partially supported by military science project of National Social Science Foundation (2019-SKJJ-C-092), National Natural Science Foundation of China (Grant no. 61502534), Natural Science Foundation of Shanxi Province (No. 2020JQ-493), Military Equipment Research Project (WJ20191C080073-17 and WJ2020A020029), and Research Foundation of Armed Police Force Engineering University (Nos. WJY201922 and JLY2020084).