Abstract

This study proposes a dynamic adaptive weighted differential evolution (DAWDE) algorithm to address the problems of the differential evolution (DE) algorithm, such as long search time, easy stagnation, and entrapment in local optima. First, adaptive adjustment strategies for the scaling factor and the crossover factor are proposed, which dynamically balance global and local search and avoid premature convergence. Second, an adaptive mutation operator based on the population aggregation degree is proposed, which uses the population aggregation degree as the amplitude coefficient of the base vector to determine how strongly the optimal individual influences the mutation direction. Finally, a Gaussian perturbation operator is introduced to generate random disturbances and accelerate premature individuals' escape from local optima. The simulation results show that, compared with other optimization algorithms, the DAWDE algorithm obtains better optimization results, with stronger global optimization ability, faster convergence, higher solution accuracy, and stronger stability.

1. Introduction

The differential evolution (DE) algorithm is an optimization algorithm based on modern intelligent optimization theory [1]. It was first proposed by Rainer Storn and Kenneth Price in 1995 to solve the Chebyshev polynomial fitting problem [2]. The DE algorithm solves problems by simulating the biological evolution of the survival of the fittest [3]. Unlike traditional optimization approaches such as calculus-based methods [4] and exhaustive search [5], the DE algorithm uses its unique memory ability to track the current search situation and adjust its search strategy accordingly. It has high robustness and strong global convergence ability and can effectively handle complex problems that are difficult for traditional optimization algorithms. In addition, the DE algorithm is not limited by the nature of the problem: for example, it requires no derivative information and imposes no constraints on the search space (such as continuous differentiability or unimodality) [6-9]. It is widely used in constrained optimization, neural network optimization, filter design, and other applications.

In recent years, the DE algorithm has been widely applied and has attracted the attention of scholars worldwide. Wang et al. [10] proposed a generalized opposition-based differential evolution algorithm, which introduces acceleration and migration operations into DE: the acceleration operation uses gradient information to lead the optimal individual to a better region, and when the dispersion of the population falls below a certain threshold, the migration operation regenerates new individuals in the vicinity of the optimal individual to replace old ones, thereby maintaining population diversity and, to a certain extent, preventing the algorithm from falling into local optima. Chiou et al. [11] proposed a variable scaling hybrid differential evolution (VSHDE) algorithm, which does not require selecting a mutation operation type in advance but instead selects an appropriate mutation operator for DE from a variety of operators in real time to speed up the optimization process; compared with a random scaling factor, this yields a great improvement in performance. Qin et al. [12] proposed the SaDE algorithm, which adaptively adjusts the control parameters $F$ and $CR$ based on the experience of earlier evolution to generate high-quality solutions. Brest et al. [13] proposed the self-adaptive differential evolution (jDE) algorithm, which introduces new control parameters to adjust the values of $F$ and $CR$, validated through a comparative study on numerical benchmark problems. Zhang et al. [14] introduced a new mutation strategy, "DE/current-to-pbest", to improve the optimization performance of the algorithm and proposed an adaptive differential evolution (JADE) algorithm with optional external archiving. Hou et al. [15] introduced a dynamic multiobjective differential evolution algorithm based on the information of evolution progress (DMODE-IEP): the information of evolution progress, computed from fitness values, describes the evolution progress of MODE, and dynamic adjustment mechanisms for evolution parameter values, mutation strategies, and selection parameter values based on this information are designed to balance global exploration ability and local exploitation ability.

To address the DE algorithm's long search time, tendency to stagnate, and tendency to fall into local optima [16], this study proposes a dynamic adaptive weighted differential evolution (DAWDE) algorithm that builds on the improved algorithms above. Several typical benchmark functions are selected for testing [16, 17]. The test results show that the DAWDE algorithm has relatively strong global optimization ability and strong convergence performance and does not easily fall into local optima; the global optimal values it obtains are all near or equal to the given optima.

2. DE Algorithm

The DE algorithm is a population-based evolutionary algorithm that can not only memorize the optimal solutions of individuals but also share information within the population. It is applicable not only to well-behaved optimization problems but also to problems that are discontinuous, noisy, or time-varying [18]. In essence, it is a greedy genetic algorithm with real-number coding and the idea of preserving the best [19].

In the DE algorithm, the population is composed of $NP$ individuals and is expressed as $X^{0} = \{x_{1}^{0}, x_{2}^{0}, \ldots, x_{NP}^{0}\}$, where $NP$ is the population size. Each individual represents a candidate solution of the problem and is expressed as $x_{i}^{0} = (x_{i,1}^{0}, x_{i,2}^{0}, \ldots, x_{i,D}^{0})$, where $D$ is the dimension of the solution, $x_{i}^{0}$ is the $i$th individual of the 0th-generation population, and $x_{i,j}^{0}$ is the $j$th component of the $i$th individual of the 0th-generation population. The main operation steps of the algorithm are as follows.

2.1. Initialization

Each individual is initialized as

$$x_{i,j}^{0} = x_{j}^{\min} + \mathrm{rand}(0,1)\cdot\left(x_{j}^{\max} - x_{j}^{\min}\right),$$

where $x_{j}^{\max}$ and $x_{j}^{\min}$ represent the upper and lower bounds of the $j$th dimension, respectively, and $\mathrm{rand}(0,1)$ represents a random number uniformly drawn from the interval $[0,1]$.
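For illustration, the following minimal Python/NumPy sketch implements this initialization (the experiments in Section 4 use MATLAB; Python is used here purely for exposition, and the helper name `initialize_population` is our own):

```python
import numpy as np

def initialize_population(np_size, dim, x_min, x_max, rng=None):
    """Uniform initialization: x[i, j] = x_min[j] + rand(0,1) * (x_max[j] - x_min[j])."""
    rng = rng or np.random.default_rng()
    x_min = np.asarray(x_min, dtype=float)
    x_max = np.asarray(x_max, dtype=float)
    return x_min + rng.random((np_size, dim)) * (x_max - x_min)

# Example: 50 individuals in 30 dimensions on [-100, 100]^30 (illustrative bounds).
pop = initialize_population(50, 30, [-100.0] * 30, [100.0] * 30)
```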

2.2. Mutation

The DE algorithm realizes individual mutation through a difference strategy: it randomly selects two distinct individuals, scales their vector difference, and combines the result with a third (base) individual to generate the corresponding mutant intermediate individual. The most commonly used scheme is DE/rand/1/bin, whose mutation step is expressed as

$$v_{i}^{g} = x_{r_1}^{g} + F\cdot\left(x_{r_2}^{g} - x_{r_3}^{g}\right),$$

where $r_1 \neq r_2 \neq r_3 \neq i$ are random indices in $[1, NP]$ and $g$ is the current iteration number, that is, the $g$th generation. $F$ is the scaling factor: the smaller $F$ is, the stronger the local search ability; the larger $F$ is, the easier it is to jump out of a local region.
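A minimal sketch of the DE/rand/1 mutation above; the helper name `mutate_rand_1` is ours, and the loop-based index selection is chosen for clarity rather than speed:

```python
import numpy as np

def mutate_rand_1(pop, F, rng=None):
    """DE/rand/1 mutation: v_i = x_r1 + F * (x_r2 - x_r3), r1 != r2 != r3 != i."""
    rng = rng or np.random.default_rng()
    np_size = pop.shape[0]
    mutants = np.empty_like(pop)
    for i in range(np_size):
        # Three mutually distinct indices, all different from i.
        candidates = [k for k in range(np_size) if k != i]
        r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
        mutants[i] = pop[r1] + F * (pop[r2] - pop[r3])
    return mutants
```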

2.3. Crossover

DE crosses the original individual $x_{i}^{g}$ with the mutant intermediate individual $v_{i}^{g}$ to generate a new trial individual $u_{i}^{g}$, determining for each gene whether it is provided by the original individual or by the mutant intermediate individual according to whether a condition is met. The crossover operation is

$$u_{i,j}^{g} = \begin{cases} v_{i,j}^{g}, & \text{if } \mathrm{rand}(0,1) \le CR \text{ or } j = j_{\mathrm{rand}}, \\ x_{i,j}^{g}, & \text{otherwise}, \end{cases}$$

where $CR$ is the crossover probability and $j_{\mathrm{rand}}$ is a random integer in $[1, D]$. The condition $j = j_{\mathrm{rand}}$ means that one randomly selected gene of $u_{i}^{g}$ is always contributed by $v_{i}^{g}$ to ensure that a new individual is generated, while the remaining genes are determined by the crossover probability factor $CR$. The larger $CR$ is, the more $v_{i}^{g}$ contributes to $u_{i}^{g}$, which is conducive to improving local exploitation; the smaller $CR$ is, the more $x_{i}^{g}$ contributes to $u_{i}^{g}$, which is conducive to improving global search [20].
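The binomial crossover can be sketched as follows; `j_rand` guarantees that at least one gene comes from the mutant, as described above (helper names are ours):

```python
import numpy as np

def crossover_binomial(pop, mutants, CR, rng=None):
    """Binomial crossover: take the mutant gene where rand(0,1) <= CR or j == j_rand."""
    rng = rng or np.random.default_rng()
    np_size, dim = pop.shape
    trials = pop.copy()
    for i in range(np_size):
        j_rand = rng.integers(dim)          # at least one gene comes from the mutant
        mask = rng.random(dim) <= CR
        mask[j_rand] = True
        trials[i, mask] = mutants[i, mask]
    return trials
```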

2.4. Selection

The DE algorithm adopts a greedy strategy in the selection operation: only the individual with the better fitness value survives into the next generation. For a minimization problem, the selection operation is

$$x_{i}^{g+1} = \begin{cases} u_{i}^{g}, & \text{if } f\left(u_{i}^{g}\right) \le f\left(x_{i}^{g}\right), \\ x_{i}^{g}, & \text{otherwise}. \end{cases}$$
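A sketch of the greedy one-to-one selection for a minimization problem; `fitness_fn` is any objective function, and the helper name `select` is ours:

```python
import numpy as np

def select(pop, trials, fitness_fn):
    """Greedy one-to-one selection for minimization: keep the better of x_i and u_i."""
    f_pop = np.array([fitness_fn(x) for x in pop])
    f_tri = np.array([fitness_fn(u) for u in trials])
    better = f_tri <= f_pop
    new_pop = np.where(better[:, None], trials, pop)
    new_fit = np.where(better, f_tri, f_pop)
    return new_pop, new_fit
```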

3. Dynamic Adaptive Weighted Differential Evolution (DAWDE) Algorithm

3.1. Scale Factor Adaptive Adjustment Strategy

According to formula (2), the scaling factor $F$ in the mutation operation is an important parameter that controls the diversity and convergence of the population, since it determines the magnification of the difference vector. The smaller the value of $F$ is, the smaller the differences within the group are, which speeds up convergence but can also lead to local convergence; a larger value helps the algorithm jump out of local optima but reduces the convergence speed. To balance local and global search while maintaining a fast convergence rate, an adaptive adjustment strategy is proposed in which $F$ decreases with the iteration number, where $F_{\max}$ is the maximum value of the scaling factor (0.9), $F_{\min}$ is the minimum value of the scaling factor (0.2), $g$ is the current iteration number, that is, the $g$th generation, and $G_{\max}$ is the maximum iteration number.
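The exact adaptive expression did not survive extraction, so the sketch below assumes a simple linear decay from $F_{\max}$ to $F_{\min}$, which matches the stated endpoints and the intended behavior (global search early, fast local convergence late); the paper's true formula may differ:

```python
def adaptive_F(g, g_max, f_max=0.9, f_min=0.2):
    """Assumed linear decay of the scaling factor from f_max to f_min.
    Only the endpoints (0.9, 0.2) and the decreasing trend are taken from
    the text; the paper's exact expression may differ."""
    return f_max - (f_max - f_min) * g / g_max
```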

3.2. Dynamic Adjustment Strategy of Crossover Probability Factor

The crossover probability factor $CR$ determines whether a gene of the new individual is provided by the original individual or by the mutant intermediate individual, that is, the degree to which each dimension of the individual participates in the crossover. The smaller $CR$ is, the more good individuals are retained and the faster the algorithm converges, but it is easy to fall into a local optimum; the larger $CR$ is, the higher the population diversity and the better the global search ability, but the convergence speed decreases accordingly. In this study, a dynamic adaptive crossover factor is adopted: $CR$ is set by a dynamic adaptive function that oscillates continuously in $[0,1]$, in which $CR_{g}$, the value of $CR$ in the $g$th generation, is computed from $CR_{g-1}$, the value in the $(g-1)$th generation, and is updated every 50 generations. The constant change in $CR$ lets new individuals randomly inherit genes from either the mutant individual or the parent individual, helping the search jump out of local optima.
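The oscillating update rule itself is not reproduced in the text. One common choice consistent with the description, a self-map whose iterates keep oscillating in $[0,1]$, is the logistic map; the sketch below uses it purely as an assumption, refreshed every 50 generations as stated:

```python
def update_CR(cr_prev, g, period=50):
    """Assumed oscillating update of CR, refreshed every `period` generations.
    The logistic map CR_g = 4 * CR_{g-1} * (1 - CR_{g-1}) keeps its iterates
    in [0, 1]; the concrete map is our assumption, not the paper's formula."""
    if g > 0 and g % period == 0:
        return 4.0 * cr_prev * (1.0 - cr_prev)
    return cr_prev
```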

3.3. Adaptive Mutation Operator Based on Population Aggregation Degree

In the later stage of the DE algorithm's iteration, the differences between individuals of the population decrease, diversity declines, and an aggregation phenomenon eventually forms; hence, a definition of the population aggregation degree is proposed.

Assume that the population size is $NP$ and the particle dimension is $D$; let $x_{i,j}$ denote the $j$th component of individual $i$ and $\bar{x}_{j}$ denote the mean of the population's individual vectors in the $j$th dimension. The population aggregation degree $P$ is then defined from the deviations of the individuals from the population mean across all dimensions.

The smaller the population aggregation degree $P$ is, the more scattered the individuals are and the better the population diversity is; conversely, the larger $P$ is, the smaller the differences between individuals are and the higher the degree of aggregation is. The population aggregation degree therefore describes well how clustered the population's individuals are. In this study, $P$ is restricted to $[0.05, 0.95]$.
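The displayed definition did not survive extraction; the sketch below assumes one natural form consistent with the surrounding text, namely one minus the range-normalized mean deviation from the per-dimension centroid, clamped to $[0.05, 0.95]$, so that a tightly clustered population yields a large $P$:

```python
import numpy as np

def aggregation_degree(pop, x_min, x_max, lo=0.05, hi=0.95):
    """Assumed population aggregation degree P: one minus the range-normalized
    mean deviation from the per-dimension centroid, clamped to [lo, hi].
    A tightly clustered population yields a P close to hi."""
    x_min = np.asarray(x_min, dtype=float)
    x_max = np.asarray(x_max, dtype=float)
    centroid = pop.mean(axis=0)
    spread = np.mean(np.abs(pop - centroid) / (x_max - x_min))
    return float(np.clip(1.0 - spread, lo, hi))
```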

The main purpose of improving the DE algorithm is to balance its global exploration and local exploitation abilities. The standard DE algorithm adopts the random-selection mutation strategy DE/rand/1, which benefits mutation diversity and global exploration, but its randomness perturbs the evolution direction over a wide range, with great blindness and uncertainty, which may cause premature convergence and limit the convergence speed. DE/best/1, in contrast, uses the optimal individual as the mutation base, which ensures evolution toward the current optimum and strengthens local exploitation; however, the optimal individual may only be locally optimal, and if the population keeps evolving in that direction, it is very likely to fall into a local optimum. The two standard mutation strategies are

$$\text{DE/rand/1: } v_{i}^{g} = x_{r_1}^{g} + F\cdot\left(x_{r_2}^{g} - x_{r_3}^{g}\right),$$

$$\text{DE/best/1: } v_{i}^{g} = x_{\mathrm{best}}^{g} + F\cdot\left(x_{r_1}^{g} - x_{r_2}^{g}\right).$$

We combine them with weights to obtain a new weighted dynamic mutation strategy, in which the population aggregation degree $P$ weights the contribution of the optimal individual to the base vector and $(1-P)$ weights the contribution of a random individual, where $r_1$, $r_2$, and $r_3$ are mutually distinct random integers in $[1, NP]$, all different from $i$, and $x_{\mathrm{best}}^{g}$ is the optimal individual of the $g$th generation population.

According to formula (5), the population aggregation degree $P$, used as the weight of the optimal individual's influence on the mutation direction, is a monotonically increasing function, so $(1-P)$ is monotonically decreasing. After the weighted combination, in the early stage of optimization the amplitude coefficient of $x_{\mathrm{best}}^{g}$ is small and that of the random base individual is large, so the search focuses on global exploration; in the later stage the amplitude coefficient of $x_{\mathrm{best}}^{g}$ increases and that of the random base individual decreases, so the search focuses on local exploitation. The algorithm thus accounts for global search ability and local exploitation ability at the same time, which benefits population diversity while also improving the convergence speed of the algorithm.
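A sketch of the weighted dynamic mutation under the assumption spelled out above, namely that the base vector blends a random individual and the best individual with weights $(1-P)$ and $P$ (the exact formula is our reconstruction):

```python
import numpy as np

def mutate_weighted(pop, best, F, P, rng=None):
    """Assumed weighted dynamic mutation:
    v_i = (1 - P) * x_r1 + P * x_best + F * (x_r2 - x_r3).
    The blend of the DE/rand/1 and DE/best/1 base vectors follows the text's
    description; the exact formula is our reconstruction."""
    rng = rng or np.random.default_rng()
    np_size = pop.shape[0]
    mutants = np.empty_like(pop)
    for i in range(np_size):
        candidates = [k for k in range(np_size) if k != i]
        r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
        mutants[i] = (1.0 - P) * pop[r1] + P * best + F * (pop[r2] - pop[r3])
    return mutants
```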

3.4. Dimensional Perturbation Mutation for Escaping Premature Convergence

When solving high-dimensional complex function problems, the DE algorithm generally falls into local optima in its later stages. To describe this state of the population, the following definition of prematurity is given.

Let $T$ be the premature-detection period. If $g > T$ and the difference between the current optimal fitness value and the optimal fitness value $T$ generations earlier is extremely small, that is,

$$\left| f_{\mathrm{best}}^{g} - f_{\mathrm{best}}^{g-T} \right| < \varepsilon,$$

then the population is said to be premature at the $g$th iteration, where $f_{\mathrm{best}}^{g}$ is the optimal fitness value of the $g$th generation and $\varepsilon$ is the premature test threshold.

If the above criterion indicates that the algorithm has become premature, a dimensional mutation is carried out. The mutation strategy combines the optimal individual and a random individual, with the weighting coefficient $P$ the same as in formula (5), and adds an accelerated random disturbance term, where $\lambda$ is the acceleration random disturbance coefficient, $N(0,1)$ is a random variable obeying the standard normal distribution, and $x_{\mathrm{best},j}^{g}$ is the $j$th component of the optimal individual of the $g$th generation.

It can be seen from formula (9) that the dimensional mutation strategy consists of two parts. The first part is a weighted combination of the optimal individual and random individuals, using the information of the optimal individual to guide other individuals to evolve toward the promising direction. The second part is the accelerated random disturbance vector: since the individual has fallen into a local optimum, a randomly generated disturbance mutation accelerates its escape from the local optimum and guides it to explore globally.
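The following sketch combines the premature test and the dimensional perturbation under the assumptions above; the threshold value `eps` and the concrete form of the perturbation are assumptions, as noted in the comments:

```python
import numpy as np

def is_premature(f_history, g, T, eps=1e-8):
    """Premature test: g > T and |f_best(g) - f_best(g - T)| < eps.
    The threshold eps is an assumed value; the paper's setting is not given here."""
    return g > T and abs(f_history[g] - f_history[g - T]) < eps

def dimensional_perturbation(pop, best, P, lam, rng=None):
    """Assumed dimensional mutation: weight the best and a random individual by
    P and (1 - P), then add an accelerated Gaussian disturbance lam * N(0, 1)
    in every dimension to push individuals out of the local optimum."""
    rng = rng or np.random.default_rng()
    np_size, dim = pop.shape
    new_pop = np.empty_like(pop)
    for i in range(np_size):
        r = rng.integers(np_size)
        new_pop[i] = P * best + (1.0 - P) * pop[r] + lam * rng.standard_normal(dim)
    return new_pop
```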

3.5. Algorithm Steps

Step 1. Initialize the parameters: population size $NP$, solution dimension $D$, maximum evolutionary generation $G_{\max}$, upper bound of the individual variables $x^{\max}$, lower bound of the individual variables $x^{\min}$, maximum and minimum values of the scaling factor $F_{\max}$ and $F_{\min}$, premature-detection period $T$, and premature test threshold $\varepsilon$.

Step 2. Initialize the population, calculate the fitness value of each individual, and find the individual with the optimal fitness value, $x_{\mathrm{best}}$, and the corresponding optimal fitness value, $f_{\mathrm{best}}$.

Step 3. Calculate $F$, $CR$, and $P$ according to formulas (5)-(7).

Step 4. Mutation operation. The mutant individual $v_{i}^{g}$ is obtained according to formula (9).

Step 5. Crossover operation. A new trial individual $u_{i}^{g}$ is obtained according to formula (3).

Step 6. Selection operation. The next-generation individual $x_{i}^{g+1}$ is obtained from formula (4).

Step 7. Update the local and global optimal values.

Step 8. Check for prematurity. If $g > T$ and $\left| f_{\mathrm{best}}^{g} - f_{\mathrm{best}}^{g-T} \right| < \varepsilon$, calculate new individuals according to formula (11), and update the optimal value to jump out of the local optimum.

Step 9. Repeat Steps 4-8.

Step 10. If the maximum number of iterations has not been reached, go to Step 3; otherwise, continue to Step 11.

Step 11. Output the optimal value $f_{\mathrm{best}}$ and the optimal individual $x_{\mathrm{best}}$.
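Putting the steps together, a minimal end-to-end sketch of the DAWDE loop, reusing the hypothetical helpers sketched in Sections 2 and 3; all parameter values here are illustrative, not the paper's settings:

```python
import numpy as np

def dawde(fitness_fn, dim, x_min, x_max, np_size=50, g_max=2000,
          T=50, eps=1e-8, lam=0.1, seed=0):
    """Minimal DAWDE driver built from the hypothetical helper sketches
    (initialize_population, adaptive_F, update_CR, aggregation_degree,
    mutate_weighted, crossover_binomial, select, is_premature,
    dimensional_perturbation). Parameter defaults are illustrative."""
    rng = np.random.default_rng(seed)
    pop = initialize_population(np_size, dim, x_min, x_max, rng)
    fit = np.array([fitness_fn(x) for x in pop])
    cr = 0.5                       # initial CR (assumed starting value)
    f_history = []
    for g in range(g_max):
        best = pop[fit.argmin()]
        f_history.append(float(fit.min()))
        F = adaptive_F(g, g_max)                             # Step 3
        cr = update_CR(cr, g)
        P = aggregation_degree(pop, x_min, x_max)
        mutants = mutate_weighted(pop, best, F, P, rng)      # Step 4
        trials = crossover_binomial(pop, mutants, cr, rng)   # Step 5
        trials = np.clip(trials, x_min, x_max)               # keep within bounds
        pop, fit = select(pop, trials, fitness_fn)           # Step 6
        if is_premature(f_history, g, T, eps):               # Step 8
            pop = dimensional_perturbation(pop, best, P, lam, rng)
            fit = np.array([fitness_fn(x) for x in pop])
    i_best = int(fit.argmin())
    return pop[i_best], fit[i_best]

# Example on the sphere function (global optimum 0), with illustrative bounds:
x_star, f_star = dawde(lambda x: float(np.sum(x ** 2)), dim=30,
                       x_min=[-100.0] * 30, x_max=[100.0] * 30)
```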

4. Experimental Results

4.1. Test Functions and Comparison Algorithm

To verify the effectiveness of the algorithm, it is compared with the standard DE/rand/1 algorithm, the jDE algorithm, the SaDE algorithm, and the JADE algorithm. All algorithms are run independently 20 times on 8 typical test functions, which are shown in Table 1; their global optimal values are all 0. The performance of each algorithm in terms of convergence speed, optimal solution accuracy, and robustness is compared according to the obtained optimal solutions and convergence curves.

The simulation experiments are programmed in MATLAB R2016a, and the experimental machine is an Intel(R) Core(TM) i5-6300HQ CPU @ 2.30 GHz. The basic parameters of the algorithm are set as in Section 3, with $F_{\max} = 0.9$ and $F_{\min} = 0.2$, and the parameter settings of the compared algorithms DE/rand/1, jDE, SaDE, and JADE are the same as in their original papers. To examine the comprehensive performance of the algorithm, the two cases of dimension $D = 30$ and $D = 50$ are considered; for fairness, the maximum number of iterations of all algorithms is 2000, and each algorithm is run independently 20 times.

4.2. Results and Analysis

The minimum value, average optimal value, and standard deviation of the solution results are shown in Table 2.

The analysis shows that the DAWDE algorithm achieves better optimization results than the other four algorithms on the 8 test functions, whether the dimension is 30 or 50. Comparing the best value, mean optimal value, and standard deviation shows that the DAWDE algorithm has strong global optimization ability and clear advantages over the other algorithms in convergence accuracy, convergence ability, and stability. The optimization performance of the DAWDE algorithm does not degrade as the complexity of the function increases, which shows that the algorithm scales well, and its small standard deviations indicate strong stability. In contrast, the other four algorithms cannot obtain the theoretical optimal value of 0 even in 30 dimensions, and as the functions become more complex, the optimal values they converge to deviate further from the theoretical optimum. Their standard deviations show that their stability in optimizing high-dimensional complex functions is also poor, and their evolutionary ability decreases to varying degrees as the dimension increases.

Figures 1-8 show the fitness-value convergence curves of the 5 algorithms on the 8 test functions. Analyzing each set of figures in detail, Figures 1(a) and 1(b) show that, for the first function, although all five algorithms reach the theoretical optimal value of 0, the DAWDE algorithm converges to the optimal value before the 10th generation, which is the fastest among these algorithms, while the other four algorithms all stagnate to some extent. According to Figures 2(a) and 2(b), when the dimension is 30, the DAWDE algorithm reaches the optimal value before the fifth generation, whereas the other algorithms reach it only after the fifth generation; when the dimension is 50, the DAWDE algorithm and the jDE algorithm both perform well, and the other three algorithms all fall into local optima during convergence. Figures 3(a) and 3(b) show that the DAWDE algorithm converges to the optimal value of 0 very quickly on the third function, while the DE algorithm and the jDE algorithm perform slightly worse: in 30 and 50 dimensions, they reach the optimum at about the 400th and 1000th generations, respectively. The JADE and SaDE algorithms perform even worse, failing to reach the optimal value and falling into local optima. As can be seen from Figures 4(a) and 4(b), on the fourth function, the DAWDE algorithm converges faster than the other algorithms.

Figures 5(a) and 5(b) show that, when the dimension is 30, the DAWDE algorithm reaches the optimal value before the 10th generation, whereas the other algorithms reach it only after the 100th generation; when the dimension is 50, the DAWDE algorithm still converges very quickly, but the other algorithms are affected by the increase in dimension and do not reach the optimum until after 200 generations. Figures 6(a) and 6(b) show that, on the sixth function, the DE and SaDE algorithms both fall into local optima, and their solution accuracy is poor. As can be seen from Figures 7(a) and 7(b), although all five algorithms obtain the optimal value on the seventh function, the convergence of the DAWDE algorithm is significantly faster than that of the other four algorithms. Figures 8(a) and 8(b) show that, on the eighth function, even when the other four algorithms fall into local optima, the DAWDE algorithm still quickly obtains the optimum, indicating that it can detect prematurity and jump out of it.

Comparing the 30-dimensional and 50-dimensional figures shows that, although the spatial dimension increases, the optimization ability of the DAWDE algorithm is not significantly reduced, whereas the optimization abilities of the other four algorithms decrease to varying degrees; this proves that the DAWDE algorithm suits not only low-dimensional search spaces but also high-dimensional ones. In summary, the DAWDE algorithm performs well in the early stage of optimization, its convergence speed is very fast, the optimal value can be obtained around the 20th generation, and it is unaffected by dimension and highly stable. Compared with DAWDE, the other four algorithms converge more slowly with worse solution accuracy, and the more complex the function is, the more their objective function values deviate from the theoretical optimum.

5. Conclusions

In this study, a dynamic adaptive weighted differential evolution (DAWDE) algorithm is proposed to address the long search time, easy stagnation, and tendency to fall into local optima of the differential evolution algorithm when solving high-dimensional complex optimization problems. The improved algorithm adaptively adjusts the scaling factor and the crossover factor, proposes an adaptive mutation operator based on the population aggregation degree, and adopts a random dimensional mutation and disturbance strategy, dynamically balancing the algorithm's global exploration ability and local exploitation ability. The simulation results show that the DAWDE algorithm obtains better optimization results than the other optimization algorithms on the 8 test functions. Regardless of dimension, it exhibits strong optimization ability, fast convergence, high solution accuracy, and strong stability, which offers an option for solving complex high-dimensional optimization problems and provides algorithmic support for future applied research [17].

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (No. 61966022) and the Open Project of Gansu Provincial Research Center for Conservation of Dunhuang Cultural Heritage (No. GDW2021YB15).