Special Issue: Artificial Intelligence and Computing on Industrial Applications
Probabilistic Analysis of Search Performance of Differential Evolution Algorithm in Low-Dimensional Case
In this paper, a method is proposed for establishing and analyzing a probability model of the mutation and crossover operations of the differential evolution (DE) algorithm. In particular, the ability of individuals trapped in the neighborhood of a local optimum to escape it, and their ability to optimize further within it, are derived in detail, and characteristic curves are obtained for the influence of key parameters, such as population size, scaling factor, and crossover factor, on the search performance of the algorithm. This provides a theoretical reference for the application of the algorithm.
As a rising star among intelligent optimization algorithms, the differential evolution (DE) algorithm has the advantages of a simple structure, few control parameters, and high parallelism. It has been successfully applied in scientific research and practical engineering fields such as data mining, pattern recognition, signal processing, control, and scheduling. Research on its theory and applications therefore has both academic significance and engineering value.
The standard version of the DE algorithm was first proposed by Rainer Storn and Kenneth Price at the University of California, Berkeley, in 1995. Because of its unique differential mutation operator, it was originally used to solve the Chebyshev polynomial fitting problem, but it was later found to be very effective for complex optimization problems in general. In 2004, Vesterstrom and Thomsen conducted an in-depth comparative study of the DE algorithm and other intelligent optimization methods using 34 classical test functions from the literature. The results showed that the DE algorithm performed best. Standard DE also adopts the basic genetic operations of mutation, crossover, and selection, but it reduces operational complexity and absorbs the swarm-intelligence idea of sharing information within the population, further improving the global optimization efficiency of the algorithm.
To further optimize its performance and expand its applications, researchers in various countries have carried out a series of improvement studies, but only a few scholars have attempted a theoretical analysis of the optimization mechanism of the DE algorithm. Based on probability distributions, Zaharie analyzed the influence of the CR value on different crossover operators. Dasgupta established a dynamic system model of standard DE and tried to analyze its stability and convergence speed theoretically, showing that the DE algorithm behaves similarly to the classical gradient descent method and can therefore quickly approach the global optimal solution. Generally speaking, theoretical analyses of the DE algorithm remain scarce, and the available results are very limited.
In this paper, a method is proposed for establishing and analyzing a probability model of the transient process of the mutation and crossover operations. In particular, the ability of individuals trapped in the neighborhood of a local optimum to escape it, and their ability to optimize further within it, are derived in detail, and characteristic curves are obtained for the influence of key parameters, such as population size, scaling factor, and crossover factor, on the search performance of the algorithm, providing a theoretical reference for its application.
1.1. Research Status
At present, researchers have extended the DE algorithm mainly in four directions: the selection of control parameters, the design of differential strategies, parallel and distributed computing, and hybridization with other algorithms, promoting the development of DE in complex environments.
1.1.1. Research on Control Parameters
One advantage of the DE algorithm is its small number of control parameters. The standard DE algorithm has only three: the population size NP, the scaling factor F, and the crossover probability factor CR. Traditionally, parameters are mostly set from experience by trial and error, but this is time-consuming and labor-intensive and adapts poorly to changing conditions. References [4, 5] analyze the influence of the parameters on optimization performance through extensive experiments and give useful setting ranges or recommended values. However, the conclusions differ across the literature and cannot handle specific practical problems well. It has also been pointed out that the parameters are problem-sensitive: problems with different properties and characteristics require different settings, and unreasonable parameters lead to premature convergence or stagnation.
Therefore, many scholars have proposed dynamic or adaptive parameter control strategies. One study proposed an adaptive DE algorithm based on fuzzy principles, which uses a fuzzy logic controller to adaptively control F and CR. Another proposed a control strategy based on the change of fitness values, which dynamically adjusts F using the ratio of the maximum to the minimum individual fitness value during optimization, so that the step size follows the search information. Another uses a mutation operator similar to that of the DE algorithm itself to adaptively control F. A DE algorithm with a dynamically adaptive population size has also been proposed, which uses two different coding mechanisms, relative and absolute, to adaptively control the population size. In yet another approach, NP is fixed while F and CR are encoded directly into the solution individuals and participate in the evolutionary operations; by testing standard functions and comparing with other algorithms, the authors verified the efficiency of this method.
The JADE algorithm has also been proposed, in which each individual is assigned an F drawn from a Cauchy distribution and a CR drawn from a normal distribution. The algorithm records the parameter values that successfully carry offspring into the next generation and then adaptively adjusts the parameters of the two distributions through a running update, guiding the direction of parameter adaptation. Experiments show that this method effectively improves the robustness and convergence of the algorithm. Using information gathered during evolution, the SaDE algorithm adopts different adaptive mechanisms for the parameters at different stages of evolution and for different differential strategies, achieving good optimization results.
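As a concrete illustration of the JADE-style parameter sampling described above, the following sketch draws F from a Cauchy distribution (resampling nonpositive values and truncating at 1) and CR from a normal distribution clipped to [0, 1]. The function name and the location/scale values 0.5 and 0.1 are illustrative assumptions, not taken from this paper.

```python
import math
import random

def sample_jade_params(mu_f=0.5, mu_cr=0.5, scale=0.1):
    """Illustrative JADE-style sampling: F ~ Cauchy(mu_f, scale),
    resampled while F <= 0 and truncated at 1; CR ~ N(mu_cr, scale),
    clipped to [0, 1]. Parameter values are assumptions."""
    f = 0.0
    while f <= 0.0:
        # standard Cauchy via inverse CDF, shifted and scaled
        f = mu_f + scale * math.tan(math.pi * (random.random() - 0.5))
    f = min(f, 1.0)
    cr = min(max(random.gauss(mu_cr, scale), 0.0), 1.0)
    return f, cr
```

In the full JADE scheme, mu_f and mu_cr would themselves be updated from the F and CR values of successful offspring; the sketch shows only the per-individual sampling step.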
1.1.2. Research on Differential Strategy
The differential mutation operator is the core operator and main feature of the DE algorithm. Price and Storn proposed more than ten different difference operators to realize the mutation operation, and researchers have since studied operator design and improvement extensively. One study gives a trigonometric mutation operator for DE, which regards the individual as the center point of a hypertriangle and moves it along the three sides of the hypertriangle formed by three groups of weighted difference vectors, in different steps, to produce new mutated individuals. Another proposed a polynomial-based mutation operator for multiobjective optimization problems. The DEGL algorithm designed a new DE/target-to-best operator based on neighborhood search, which is combined by weighting with the traditional mutation operator to strengthen the local optimization ability of the algorithm. The JADE algorithm mentioned above makes full use of information about local optimal solutions to construct its mutation operator and introduces an archive of inferior solutions to realize both archived and nonarchived mutation operations; the results show that it is effective for high-dimensional and multimodal problems. In recent years, inspired by parameter adaptation, references [17–19] have proposed adaptive mechanisms for the differential mutation strategy, whose main purpose is to adaptively select the most appropriate mutation strategy from a strategy pool at different times.
1.2. Current Problems
The DE algorithm still needs further study in the following respects: (1) Theoretical research
Theoretical research on the convergence, convergence speed, parameter robustness, and iteration termination criteria of the DE algorithm is quite limited. The main research method is experimental verification, and a generally applicable convergence theory is lacking. To explore DE's internal optimization mechanism deeply and improve its optimization performance, theoretical research needs to be accelerated. (2) Improvement research
Like other evolutionary algorithms, the DE algorithm is prone to premature convergence or search stagnation, mainly for the following reasons: (1) The DE algorithm is sensitive to parameter settings. Whether the parameters are appropriate affects the optimization efficiency, and their selection is closely related to the nature of the specific problem and the current search stage. (2) Selecting a mutation strategy is difficult. Different mutation strategies guide the search process differently, and it is hard to select the best strategy for a given problem. (3) Local search capability is insufficient. The DE algorithm is a population-based, random search method, which gives it strong global search ability, but it lacks local search power, leading to slow convergence in the later stages of evolution; it is difficult to converge quickly to the optimal solution with few fitness function evaluations. This stems from the simple structure, insufficient use of information, and neglect of the analytical properties of the objective function that characterize evolutionary algorithms, including DE. These defects are an unavoidable cost of evolutionary algorithms.
Existing improved DE variants have raised the overall performance of the algorithm to some extent, but each improvement has different characteristics, and for some practical applications the optimization results are still unsatisfactory. More efficient and systematic research is therefore needed on designing parameter strategies, constructing improved operators that balance global and local search, and developing hybrid forms of DE. At the same time, given the complexity of practical problems, research on the algorithm in complex environments is of great significance to engineering practice.
2. Introduction to Standard Differential Evolution Algorithm
2.1. Standard Process of Differential Evolution Algorithm
Standard DE is essentially a special genetic algorithm based on real-number coding with an elitist greedy selection strategy. Its principle and structure are similar to those of the genetic algorithm: starting from a randomly initialized population, it applies the genetic operations of mutation, crossover, and selection to solve the optimization problem iteratively.
Constrained optimization problems are usually expressed as

min f(x), subject to h_i(x) = 0, i ∈ E, and g_j(x) ≤ 0, j ∈ I,

where E and I are the index sets of the equality and inequality constraints, respectively, and h_i and g_j are the constraint functions. The standard process of solving such a general optimization problem by DE is described below. (1) Population initialization
Assume the population size is NP (i.e., NP individuals participate in evolution) and let g denote the generation number; the ith individual of the population is written x_i^g = (x_{i,1}^g, ..., x_{i,D}^g), where D is the dimension of the individual variable. In the initialization stage, to make individuals cover the whole search space as far as possible, each individual of the initial population (g = 0) is generated randomly and uniformly within the range specified by the objective function:

x_{i,j}^0 = x_j^L + rand(0, 1) · (x_j^U − x_j^L),

where x_j^U and x_j^L are the upper and lower bounds of the jth dimension of the individual variable, respectively, and rand(0, 1) is a random number uniformly distributed on the interval [0, 1]. (2) Mutation operation
Mutation is the core operation of the DE algorithm. Its purpose is to generate new candidate individuals, enhance population diversity, and guide individuals in a favorable search direction. Compared with other evolutionary algorithms, the mutation operation of DE makes full use of population information, generating the mutation vector from difference vectors between distinct individuals. The notation DE/x/y is usually used to distinguish mutation strategies, where x denotes how the base vector is selected, commonly "rand" or "best" for a randomly selected or the best base vector, respectively, and y denotes the number of difference vectors. The commonly used mutation strategies are
DE/rand/1: v_i^g = x_{r1}^g + F · (x_{r2}^g − x_{r3}^g), and DE/best/1: v_i^g = x_{best}^g + F · (x_{r1}^g − x_{r2}^g), where r1, r2, and r3 are random integers that are not equal to i and are mutually distinct, representing the indices of different individuals in the population; x_{best}^g is the individual with the best fitness in the generation-g population; and F is the scaling factor, which controls the search step size and whose value directly affects the convergence performance of the algorithm. (3) Crossover operation
The purpose of crossover is to perturb the individuals of the original population and thus enhance the exploration of local areas. Standard DE usually uses binomial crossover, in which the target vector x_i^g and the mutation vector v_i^g are crossed dimension by dimension to produce the trial vector u_i^g:

u_{i,j}^g = v_{i,j}^g if rand_j ≤ CR or j = j_rand, and u_{i,j}^g = x_{i,j}^g otherwise,

where rand_j is a uniformly distributed random number in [0, 1]; j_rand is a randomly selected dimension index ensuring that at least one component of u_i^g is contributed by v_i^g; and CR ∈ [0, 1] is the crossover probability factor determining the remaining components. Its value is often related to the specific nature of the problem (such as multimodal versus unimodal, or independent versus interdependent variables). (4) Selection operation
Based on the natural mechanism of survival of the fittest, standard DE selects the individuals with better fitness to reproduce while keeping NP unchanged, thereby inheriting excellent characteristics, guiding the population toward the optimal region, and gradually approaching the optimal solution. For a minimization problem, the selection policy can be expressed as

x_i^{g+1} = u_i^g if f(u_i^g) ≤ f(x_i^g), and x_i^{g+1} = x_i^g otherwise,

where x_i^{g+1} is the ith individual of the next-generation population and f is the fitness function. Clearly, the selection operation ensures that the new individual is at least no worse than the original one and memorizes the best solution found by each individual. (5) Algorithm termination
After mutation, crossover, and selection, DE completes one update of the population and then checks whether the termination conditions are met. If so, the current optimal solution is output; otherwise, the search continues. The termination conditions can be as follows: (a) the change in the fitness value of the best individual between successive generations stays within a set range, or (b) the maximum number of generations set for the algorithm is reached.
To sum up, the overall flow of the standard DE algorithm is shown in Figure 1.
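The flow above can be condensed into a minimal, self-contained sketch of DE/rand/1/bin for a minimization problem. Boundary handling by clipping and the specific parameter values are illustrative choices, not prescribed by this paper.

```python
import random

def differential_evolution(f, bounds, NP=20, F=0.6, CR=0.9, gens=200):
    """Minimal DE/rand/1/bin sketch minimizing f over box bounds,
    where bounds is a list of (low, high) pairs, one per dimension."""
    D = len(bounds)
    # (1) initialization: uniform random individuals inside the box
    pop = [[lo + random.random() * (hi - lo) for lo, hi in bounds]
           for _ in range(NP)]
    for _ in range(gens):
        for i in range(NP):
            # (2) mutation: DE/rand/1 with three distinct indices != i
            r1, r2, r3 = random.sample([j for j in range(NP) if j != i], 3)
            # (3) binomial crossover; j_rand forces one mutant component
            j_rand = random.randrange(D)
            u = [pop[r1][j] + F * (pop[r2][j] - pop[r3][j])
                 if (random.random() <= CR or j == j_rand) else pop[i][j]
                 for j in range(D)]
            # clip the trial vector back into the search box
            u = [min(max(u[j], bounds[j][0]), bounds[j][1]) for j in range(D)]
            # (4) greedy one-to-one selection
            if f(u) <= f(pop[i]):
                pop[i] = u
    return min(pop, key=f)

# usage: minimize the 2-D sphere function on [-5, 5]^2
best = differential_evolution(lambda x: sum(v * v for v in x),
                              [(-5.0, 5.0)] * 2)
```

On a smooth unimodal function like the sphere, this sketch reliably drives the best individual very close to the origin within a few hundred generations.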
2.2. Basic Characteristics of Differential Evolution Algorithm
Compared with other evolutionary algorithms, DE has its own distinctive features: (1) Simple structure, easy to implement: DE has relatively few control parameters, which reduces the difficulty of parameter selection and adjustment in practical applications; moreover, its core differential mutation operator performs only additions and subtractions between individual vectors, so it is easy to use. (2) Real-number encoding: compared with the binary encoding of the GA, real-number encoding is well suited to optimization problems in continuous spaces. (3) Elitist selection: the selection operation of DE is greedy, ensuring that excellent solutions are not lost during evolution; at the same time, the one-to-one selection competition maintains population diversity better than the rank-based and tournament selection of the GA. (4) Difference-based mutation: this operation allows the mutation step size and search direction to adapt to different objective functions, which benefits global search.
3. Probabilistic Analysis of Algorithm Performance in Low-Dimensional Case
3.1. Probability Analysis of Algorithm Variation Performance
The feasibility analysis of an optimization algorithm divides into convergence analysis and search-ability analysis. According to the detailed theoretical derivation of DE convergence in the literature, the algorithm converges asymptotically with probability 1. This paper therefore focuses on search performance and analyzes two indicators: the ability to find the optimal solution within the current local region, and the ability to leave the current local region and search other regions. When the population falls into a local region, quickly finding the best position within that region indicates good local optimization ability, while quickly escaping the local optimal region indicates strong global optimization ability. Clearly, an algorithm that balances the two can be considered excellent. At present, there are few quantitative theoretical analyses of these two performance indicators in the field. Therefore, based on probability and statistics, this chapter proposes a visual probability model to quantitatively analyze the behavior of the DE algorithm in local regions and discusses the specific influence of the algorithm parameters on performance.
The core driving operation of DE evolution is the differential mutation operation. This chapter first analyzes the mutation operation DE/rand/1: v_i = x_{r1} + F · (x_{r2} − x_{r3}).
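As a concrete sketch of the differential mutation being analyzed, here are the two common strategies DE/rand/1 and DE/best/1 in pure Python (list-based vectors; a sketch rather than a definitive implementation):

```python
import random

def de_rand_1(pop, i, F):
    """DE/rand/1: v_i = x_r1 + F * (x_r2 - x_r3), with r1, r2, r3
    mutually distinct random indices, all different from i."""
    candidates = [j for j in range(len(pop)) if j != i]
    r1, r2, r3 = random.sample(candidates, 3)
    return [pop[r1][d] + F * (pop[r2][d] - pop[r3][d])
            for d in range(len(pop[i]))]

def de_best_1(pop, i, F, f):
    """DE/best/1: v_i = x_best + F * (x_r1 - x_r2), where x_best is
    the current best individual under fitness function f (minimized)."""
    best = min(range(len(pop)), key=lambda j: f(pop[j]))
    candidates = [j for j in range(len(pop)) if j != i]
    r1, r2 = random.sample(candidates, 2)
    return [pop[best][d] + F * (pop[r1][d] - pop[r2][d])
            for d in range(len(pop[i]))]
```

With F = 0 both strategies degenerate to copying an existing individual, which makes the role of the scaled difference vector as the sole source of new search directions easy to see.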
3.1.1. Analysis Based on Regular Circular Local Area Model
(1) Model Establishment of the Local Optimal Neighborhood. Figure 2 shows the surfaces and contour lines of three classical test functions. Local single-valley (or single-peak) areas are mostly close to circles or ellipses, so they can be simplified as "bowl"-shaped surfaces for modeling.
(a) Rastrigin function
(b) Ackley function
(c) Eggholder function
In this paper, two-dimensional space is taken as an example; the analysis extends analogously to high-dimensional spaces. Figures 3(a) and 3(c) show that when the algorithm reaches a certain stage, all individuals of the population fall into a local single-valley (or single-peak) region of the search space. To facilitate modeling and analysis, the neighborhood is idealized as a circle centered on the local extremum with a radius large enough to contain the single-valley (peak) region, as shown in Figures 3(b) and 3(d), where the gray area is the annulus bounded by the fitness contour lines through the current best and worst solutions, so all solutions of the population lie in this annulus.
The relative proportions of regions A, B, and C can vary widely, and the cases can be extrapolated from one another; it is neither feasible nor necessary to analyze them all. This paper therefore selects a typical case in which the three regions A, B, and C have comparable areas: the annular band occupied by the population lies at a moderate distance from the local optimum, and its distances to the local optimum and to the boundary of the local neighborhood are comparable to its own width. This configuration is convenient for studying both the local optimization performance of the algorithm and its ability to jump out of the local neighborhood. The local neighborhood is thus divided into four parts: region A is the region within the local neighborhood where a better solution is to be sought, region B is the annular region where the current population lies, region C is the region of the neighborhood with worse fitness than the current population, and region D is everything outside the neighborhood.
(2) Analysis of the Probability Density Distribution of Difference Vectors. The naming conventions are shown in Figure 4.
The optimal position of the local neighborhood is labeled as shown in Figure 4.
The coordinate system is established by taking a contemporary random solution vector as the origin and the direction of the line from this point to the center as the axis.
The distance between the contemporary random solution vector and the optimal coordinate vector in the local domain is denoted as .
The distance between contemporary optimal solution vector and optimal coordinate vector in the local domain is denoted as .
The distance between the contemporary worst solution vector and the optimal coordinate vector in the local domain is denoted as .
The modulus of the difference vector between two contemporary random solution vectors is also denoted as shown in Figure 4. (a) Calculate the probability density of the modulus of the difference vector
The probability density of the modulus is the ratio of the area of the arc (with an infinitesimal line-width element) that a circle of that radius, centered at the tail of the difference vector, sweeps across the band region, to the total area of the band; the arc length follows from the geometry of the band.
Based on the geometric properties of the hollow band during integration, the analysis of the probability density function falls into four stages: (1) in the first stage, as shown in Figure 5(a), the curve traversed by the difference vector is a closed circle; (2) in the second stage, as shown in Figure 5(b), part of the curve traversed by the difference vector is cut off by the inner circle; (3) in the third stage, as shown in Figure 6(a), part of the curve is cut off by both the inner and the outer circle; (4) in the fourth stage, as shown in Figure 6(b), part of the curve is cut off by the outer circle.
From the above analytical expressions, the probability density distribution curve of the modulus can be obtained by numerical integration, as shown in Figure 7. The pronounced depression in the middle of the curve is caused by the ring-shaped distribution region of the difference vector (the dotted line shows the curve when the inner ring has no effect). With the outer ring held fixed, the size of the inner ring (i.e., how far the population lies from the local optimum within the neighborhood) is positively correlated with the width of the middle depression. (b) Calculate the probability density of the angle of the difference vector: (1) in the first case, as shown in Figure 8(a), the vector does not pass through the inner region of the ring; (2) in the second case, as shown in Figure 8(b), the vector passes through the inner region of the ring; (3) the third case is symmetric with case (1), and the vector does not pass through the inner region of the ring; (4) in the fourth case, as shown in Figure 9, the vector direction deviates from the inner region of the ring.
From the above analytical expressions, the probability density distribution curve of the angle is obtained by numerical integration, as shown in Figure 10.
Similar to the probability density distribution of the modulus, the angle distribution is also affected by the size of the inner ring and is reduced in some regions. Thus, neither the modulus nor the direction of the difference vector is uniformly distributed, and the two are strongly coupled. The probability distribution of the vector synthesized with the base vector is therefore quite complex and needs to be discussed in detail.
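The qualitative behavior of the difference-vector modulus can also be checked empirically. The following sketch (an illustration, not the paper's analytical integration) draws pairs of points uniformly from an annulus with hypothetical radii and histograms the modulus of their difference vector:

```python
import math
import random

def sample_annulus(r_in, r_out):
    """Uniform random point in the annulus r_in <= r <= r_out
    (radius drawn by inverse CDF so the density is area-uniform)."""
    r = math.sqrt(r_in**2 + random.random() * (r_out**2 - r_in**2))
    t = 2.0 * math.pi * random.random()
    return r * math.cos(t), r * math.sin(t)

def diff_modulus_hist(r_in, r_out, n=50_000, bins=40):
    """Empirical histogram of |p - q| for independent annulus-uniform
    points p and q; the modulus never exceeds 2 * r_out."""
    counts = [0] * bins
    width = 2.0 * r_out / bins
    for _ in range(n):
        px, py = sample_annulus(r_in, r_out)
        qx, qy = sample_annulus(r_in, r_out)
        d = math.hypot(px - qx, py - qy)
        counts[min(int(d / width), bins - 1)] += 1
    return counts
```

Plotting such a histogram for a narrow band (r_in close to r_out) makes it easy to compare the empirical shape with the analytically derived curve of Figure 7.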
(3) Synthesis Analysis of the Difference Vector and the Base Vector. The combination of the base vector and the difference vector is a vector superposition, shown in Figure 11. The base vector is distributed with uniform probability on the inner circle, and the difference-vector endpoint is distributed on the circle whose radius equals the scaled modulus. In the mutation operation, the two vectors are superimposed at an arbitrary included angle, uniformly distributed over [0°, 360°]. Therefore, as shown in Figure 12, the coordinate origin of the probability distribution region of the difference vector should be placed at the point where the base vector resides and rotated through a full turn to compute the probability distribution of the mutation vector. The probability mass falling in each of the four regions A, B, C, and D is the probability of the mutation vector moving into that region.
Given the complexity of the area integral over the full rotation of the difference-vector distribution region, it is neither practical nor necessary to integrate the distribution of the mutation vector at every included angle. Instead, 8 typical samples are taken at equal intervals (every 45°) during the rotation, as shown in Figure 13. The probability area occupied by each of the regions A, B, C, and D in each sample is integrated, and the results are averaged with equal weights. In the figure, green is region A (the better region), yellow is region B (the wandering region), red is region C (the degraded region), and blue is region D (the escape region), each with its corresponding relative area.
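The same region probabilities can be cross-checked by Monte Carlo under the circular-neighborhood model (an illustrative sketch, not the paper's rotational integration): the base vector and the two difference-vector endpoints are drawn uniformly from the population annulus, and the mutant v = x_r1 + F(x_r2 − x_r3) is classified by its distance from the local optimum at the origin. The radii used below are hypothetical.

```python
import math
import random

def sample_annulus(r_in, r_out):
    """Uniform random point in the annulus r_in <= r <= r_out."""
    r = math.sqrt(r_in**2 + random.random() * (r_out**2 - r_in**2))
    t = 2.0 * math.pi * random.random()
    return r * math.cos(t), r * math.sin(t)

def region_probabilities(F, r_in, r_out, r_nbhd, n=50_000):
    """Estimate the probabilities that the mutant lands in region
    A (better), B (wandering band), C (degraded), or D (escape)."""
    hits = {"A": 0, "B": 0, "C": 0, "D": 0}
    for _ in range(n):
        x1 = sample_annulus(r_in, r_out)  # base vector
        x2 = sample_annulus(r_in, r_out)
        x3 = sample_annulus(r_in, r_out)
        vx = x1[0] + F * (x2[0] - x3[0])
        vy = x1[1] + F * (x2[1] - x3[1])
        d = math.hypot(vx, vy)  # distance from the local optimum
        if d < r_in:
            hits["A"] += 1
        elif d <= r_out:
            hits["B"] += 1
        elif d <= r_nbhd:
            hits["C"] += 1
        else:
            hits["D"] += 1
    return {k: c / n for k, c in hits.items()}
```

With F = 0 the mutant coincides with the base vector and never leaves band B, which provides a simple sanity check of the classification.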
Figure 13 shows, for one example, how the probability distribution over the regions varies as the included angle goes from 0° to 360°. This chapter studies the influence of the scaling factor F on search performance. The probability distributions for several values of F (0.5, 0.625, 0.75, 1.0, 1.5, and 2.0, among others) are shown in Figure 14, using two representative configurations as examples. Several references indicate that the classical value of F lies between 0.5 and 1.0, mostly in the range 0.5 to 0.7; therefore, 0.625 is included between 0.5 and 0.75 to study the influence of F on mutation performance more closely.
It can be seen from Figure 13 that the distributions over 180°–360° and 0°–180° are symmetric, so only the one-sided case 0°–180° needs to be calculated. Figure 14 shows the general situation for an arbitrary position of the base vector, while different positions of the base vector may cause significant differences in the probability of the mutation vector falling into each region, as shown in Figure 15.
Therefore, two extreme cases need to be calculated for each parameter combination, as shown in Figure 16.
By integrating the probability distribution of each region for the two extreme cases under each F, six probability distribution curves are obtained, as shown in Figure 17. Regions B and C both represent ineffective operations and are combined into a single curve in the probability distribution.
By taking the intermediate value of the two extreme cases, the characteristic curves of the probability of the mutation vector falling into each region as F changes are obtained, as shown in Figure 18.
As the figure shows, for a single solution, F is best at the junction of the blue and green lines (about 0.7–0.75), where local optimization and local escape are balanced. Since the DE mutation operation acts on a population, the probability distribution of the population-level result must be calculated. The probability that the population finds a locally better region after mutation and the probability that the mutation jumps out of the local region are given in equations (20) and (21). According to equation (20), the probability of the population finding a locally better region after mutation can be calculated for each combination of NP and F, as shown in Table 1.
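Equations (20) and (21) aggregate a per-individual probability over the population. Assuming the NP mutations are treated as independent trials (a plausible reading, stated here as an assumption rather than taken from the paper), the aggregation takes the familiar form:

```python
def population_success_prob(p, NP):
    """Probability that at least one of NP independent mutations
    succeeds, given per-individual success probability p."""
    return 1.0 - (1.0 - p) ** NP
```

For example, a per-individual escape probability of only 0.05 already gives roughly a 64% population-level escape chance at NP = 20, which is why increasing NP compensates for a small per-individual probability in Tables 1 and 2.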
Appropriate combinations of NP and F can better guarantee the local optimization performance of the mutation operation in this region. Similarly, according to equation (21), the probability of the population jumping out of the local neighborhood after mutation can be calculated for various combinations of NP and F, as shown in Table 2.
From the data in Table 2, the corresponding probability surface can be drawn, as shown in Figure 20. Appropriate combinations of NP and F can better guarantee the local escape ability of the mutation operation in this region.
3.1.2. Analysis Based on Noncircular Local Region Model
Section 3.1.1 analyzed the mutation performance of the algorithm in a regular circular local region by numerical analysis. However, simplifying the local region to a regular circle is too strong a precondition for the practical applications of the algorithm. As Figure 21 suggests, most single-valley neighborhoods on real surfaces are ellipses with nonzero eccentricity, so this section follows the approach of the previous section and analyzes the local-region mutation performance of ellipses probabilistically under typical parameters.
The elliptical valley region is obtained by eccentric deformation of the circular local single-valley model of Figure 3, as shown in Figure 21; here, an ellipse with an eccentricity of 0.5 is used.
When the valley region is an ellipse (nonzero eccentricity), the vector representation of the DE mutation operation is as shown in Figure 22. The contour lines no longer have the equal-distance geometric properties of the circular case, and the geometric description of the differential mutation operation is correspondingly more complex. The position of the optimal solution can be divided into three cases: at the position of minimum curvature, at the position of maximum curvature, and at an intermediate-curvature position of the ellipse. The base points of the difference vectors are likewise divided into minimum-curvature, maximum-curvature, and intermediate-curvature positions. Figure 23 shows nine typical geometric configurations of the mutation operation.
For the geometric configurations shown in Figure 23, numerical integration and analysis are carried out on the two representative performance probabilities (the probability of finding a locally better solution and the probability of jumping out of the local region) for the three typical cases. For one example, the green and blue areas are shown in Figure 24.
By integrating over the regions for different values of F, the curves of the two probability values as functions of F are obtained, as shown in Figure 25.
Comparing Figure 25 with Figure 17 shows that the probabilities of the mutation vector falling into regions A and D hardly change with the eccentricity. In other words, when the solver performs differential mutation in such a single-valley (or single-peak) region, local optimization performance and local escape performance are governed mainly by the scaling factor and the population size.
Mutation is the core of the whole differential evolution operation and determines the main performance of the DE algorithm. The subsequent crossover operation is a further probabilistic refinement operator consistent with it.
3.2. Probability Analysis of Algorithm Crossover Performance
After the mutation operation, the DE algorithm performs the crossover operation to obtain the trial vector. The essence of crossover is to selectively merge, dimension by dimension, the vectors before and after mutation, so that the trial vector carries the jump of the mutation while partly retaining traces of the pre-mutation vector; that is, the mutation is processed through a multidimensional carrier. Clearly, when CR = 1, the crossover has no effect and the result is simply the mutation vector; when CR = 0, the crossover is maximally conservative, and only one dimension of the result comes from the mutation vector while the others remain those of the original vector. When CR lies strictly between 0 and 1, the crossover results are rich and require quantitative discussion.
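Under standard binomial crossover, the conservatism controlled by CR can be quantified by counting how many dimensions the trial vector inherits from the mutant; with the forced dimension j_rand, the expected count is 1 + (D − 1)·CR. A small empirical sketch:

```python
import random

def mutant_dims_in_trial(D, CR, trials=10_000):
    """Empirical mean number of dimensions the trial vector inherits
    from the mutant under binomial crossover; 1 + (D - 1) * CR in
    expectation, because j_rand forces exactly one dimension."""
    total = 0
    for _ in range(trials):
        j_rand = random.randrange(D)
        total += sum(1 for j in range(D)
                     if random.random() <= CR or j == j_rand)
    return total / trials
```

The two limiting cases fall out directly: at CR = 1 every dimension comes from the mutant (count D), and at CR = 0 only the forced dimension does (count 1).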
As shown in Figure 26, the coordinate points corresponding to the various possible test vectors can be found from the coordinates of the original vector and the mutation vector. Figure 26 shows the Cartesian coordinate system at three different rotation angles, yielding three groups of different test vectors. Since the mutation direction is equiprobable in every direction, the included angle between the coordinate axes and the mutation direction is uniformly distributed between 0° and 360°. At the same time, all test vectors lie on the circle with the difference vector as diameter, because the triangle formed by the coordinate point of the test vector and the two endpoints of the difference vector must be a right triangle. The actual mutation step is therefore determined by the included angle between the mutation vector and the axis of the coordinate system (i.e., by the projection of the mutation step onto that axis).
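The right-triangle (Thales circle) property stated above can be checked directly: in two dimensions, crossover can only mix coordinates of the target and the mutant, and every mixed point satisfies a right angle with respect to the difference segment. The following is a small verification sketch (names are illustrative):

```python
def trial_candidates_2d(x, v):
    """In 2D, binomial crossover can produce only the target x, the
    mutant v, or the two mixed points (v0, x1) and (x0, v1)."""
    return [(v[0], x[1]), (x[0], v[1])]

def on_thales_circle(x, v, t):
    """t lies on the circle with segment x-v as diameter iff the angle
    at t is a right angle, i.e. (x - t) . (v - t) = 0."""
    dot = (x[0] - t[0]) * (v[0] - t[0]) + (x[1] - t[1]) * (v[1] - t[1])
    return abs(dot) < 1e-12
```

For the mixed point (v0, x1), the vector x - t is horizontal and v - t is vertical, so the dot product vanishes identically; this is why all test vectors are distributed on the circle with the difference vector as diameter.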
In order to establish the probability model of the crossover operation, after the mutation operation the local region model is rotated about its geometric center until the mutation vector falls on the horizontal dashed line at the right end of the geometric center of the model. As shown in Figure 27, the red dot marks the position of the mutation vector, the white points are the representative coordinates of the vector before mutation, and the dotted blue line shows where the test vector might fall after the crossover operation.
In this chapter, the positions of the mutation vectors that may be generated under different scaling factors, together with the representative coordinates of the vectors before mutation, are traversed numerically, so as to obtain the probabilities of the test vector falling into the locally better region and jumping out of the local region, respectively. The results are shown in Figure 28.
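A numerical traversal of this kind can also be approximated by Monte Carlo sampling. The following sketch estimates the probability that the trial vector lands inside a circular “locally better” region; the disc geometry and all parameter values are illustrative stand-ins for the paper's local-region model, not its actual data:

```python
import math
import random

def estimate_fall_in_probability(target, mutant, CR, center, radius,
                                 trials=20000):
    """Monte Carlo estimate of the probability that binomial crossover of
    `target` and `mutant` yields a trial vector inside the disc of the
    given center and radius (a stand-in for the locally better region)."""
    hits = 0
    D = len(target)
    for _ in range(trials):
        j_rand = random.randrange(D)
        t = [mutant[j] if (random.random() < CR or j == j_rand) else target[j]
             for j in range(D)]
        if math.dist(t, center) <= radius:
            hits += 1
    return hits / trials
```

When CR = 1 the trial always equals the mutant, so the estimate degenerates to an indicator of whether the mutant itself lies in the region, matching the extreme case discussed earlier.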
According to Figure 28(a), when the mutation vector is in the locally better region (1#–3#), the probability of the test vector falling into the locally better region is large and increases as CR increases. When the mutation vector is outside the locally better region (4#–14#), the probability of the test vector falling into the locally better region is very small and decreases as CR increases. According to Figure 28(b), when the mutation vector is in the locally better region (1#–3#), the probability of the test vector jumping out of the local region is 0. When the mutation vector is located outside the locally better region (4#–14#), the probability of the test vector jumping out of the local region increases rapidly as the mutation vector deviates from the center of the local region, and also increases as CR increases.
3.3. Probability Analysis of Comprehensive Performance of the Algorithm
Combining the quantitative results obtained in the previous two sections, this section carries out a comprehensive probability analysis of algorithm performance and jointly examines the impact of the scaling factor F and the crossover probability factor CR on the two important performance indicators of the algorithm. According to the quantitative analysis of individual mutation behavior (see Figure 18 for details), the probabilities of the mutation vectors falling into different regions can be obtained, as shown in Table 3.
According to the quantitative analysis of individual crossover behavior (see Figure 28 for details), the probabilities of test vectors falling into different regions can be obtained as shown in Table 4.
Jointly computing the probability distributions in Tables 3 and 4 yields the probability of a single solution individual jumping out of the local region and falling into the locally better region under different F and CR, as shown in Figure 29.
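The joint calculation combines the mutation-stage distribution over zones (Table 3) with the conditional crossover-stage probabilities (Table 4) via the law of total probability. A minimal sketch, with purely illustrative zone probabilities rather than the paper's tabulated values:

```python
def joint_probability(mutation_probs, crossover_probs):
    """Law of total probability:
        P(event) = sum_k P(mutant in zone k) * P(event | mutant in zone k).
    `mutation_probs` plays the role of a Table 3 row; `crossover_probs`
    plays the role of the matching conditional column of Table 4."""
    assert abs(sum(mutation_probs) - 1.0) < 1e-9  # zones partition the space
    return sum(pm * pc for pm, pc in zip(mutation_probs, crossover_probs))
```

Evaluating this for each (F, CR) grid point produces surfaces like those in Figure 29.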
It can be seen from Figure 29(a) that the probability of a solution individual jumping out of the local region is strongly positively correlated with the scaling factor F; it is strongly negatively correlated with the crossover factor CR when F is small, and gradually becomes strongly positively correlated with CR as F increases.
It can be seen from Figure 29(b) that the probability of a solution individual achieving local optimization is strongly negatively correlated with the scaling factor F; it is strongly positively correlated with the crossover factor CR when F is small, and gradually becomes strongly negatively correlated with CR as F increases.
It can be seen from the above that, if F and CR are both held constant, a value near the middle of the range of F is appropriate in order to balance the two performances of the algorithm. If F and CR are adjusted adaptively by the algorithm, larger F and CR can be used in the early stage of evolution to emphasize the ability to jump out of local regions, that is, the initial exploration ability, and thus improve the odds of finding the globally optimal region; smaller F and CR can then be used in the late stage of evolution, focusing on local optimization ability, that is, the exploitation ability of the later phase, in order to perform a fine search near the optimal solution. At the same time, it can be found that CR always tends toward larger values.
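The adaptive setting described above can be sketched as a simple linear decay of both parameters over the run. The bound values below are illustrative choices for the sketch, not values prescribed by the paper:

```python
def adaptive_F_CR(gen, max_gen, F_range=(0.4, 0.9), CR_range=(0.5, 0.9)):
    """Linearly decay F and CR from their upper to their lower bounds:
    large values early favour escaping local regions (exploration),
    small values late favour fine local search (exploitation)."""
    frac = gen / max_gen  # 0 at the start of the run, 1 at the end
    F = F_range[1] - frac * (F_range[1] - F_range[0])
    CR = CR_range[1] - frac * (CR_range[1] - CR_range[0])
    return F, CR
```

Any monotone decreasing schedule (exponential, stepwise) realizes the same explore-then-exploit principle; the linear form is simply the easiest to reason about.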
According to formulas (20) and (21), the population size and the comprehensive characteristics of the two search performances are considered jointly, as shown in Figure 30. The geometric mean is used to combine local optimization and local escape; it reflects the degree of balance between the two performances better than the arithmetic mean. It can be seen from Figure 30 that, when the population size is small (10 solution individuals), F at about 0.75 is clearly superior to other values. By comparison, the comprehensive search performance is not very sensitive to the value of CR. When the population size reaches 40, the search performance of the algorithm improves significantly in both aspects. The data conclusions in this section are derived from the circular local region model; it has been verified that the elliptic local region model yields similar result curves.
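The preference for the geometric mean over the arithmetic mean can be seen in a two-line sketch: the geometric mean collapses to zero whenever either ability is absent, whereas the arithmetic mean can still report a misleadingly high score.

```python
import math

def geometric_score(p_opt, p_escape):
    """Geometric mean of the two performance probabilities; rewards
    balance and is 0 whenever either ability is absent."""
    return math.sqrt(p_opt * p_escape)

def arithmetic_score(p_opt, p_escape):
    """Arithmetic mean; can mask a complete failure of one ability."""
    return (p_opt + p_escape) / 2
```

For example, an algorithm with strong local optimization but no escape ability (0.9, 0.0) scores 0.45 under the arithmetic mean but 0 under the geometric mean, which is why the latter better expresses the equilibrium between the two performances.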
Not only for the DE algorithm: the performance of almost all optimization algorithms is mainly concerned with the speed of searching for the global optimal solution and the ability to escape local optimal solutions. The simplest way to study this is to observe the iterative behavior of the population after it falls into a local neighborhood of the search space: on the one hand, how quickly the individuals of the population can find the optimal solution within this neighborhood through cooperation; on the other hand, how likely the population is to jump out of the local neighborhood and continue to probe other regions of the solution space.
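The second observation, the empirical escape probability, can be measured directly by running one mutation-plus-crossover step many times with the whole population confined to a neighborhood and counting how often the trial vector leaves it. A minimal 2D sketch (the neighborhood geometry and parameters are illustrative):

```python
import math
import random

def escape_rate(pop, F, CR, center, radius, trials=1000):
    """Empirical probability that one DE/rand/1 mutation + binomial
    crossover step produces a trial vector outside the disc of the given
    center and radius, with the whole population currently inside it."""
    n, D = len(pop), len(pop[0])
    out = 0
    for _ in range(trials):
        i = random.randrange(n)
        r1, r2, r3 = random.sample([j for j in range(n) if j != i], 3)
        v = [pop[r1][d] + F * (pop[r2][d] - pop[r3][d]) for d in range(D)]
        j_rand = random.randrange(D)
        t = [v[d] if (random.random() < CR or d == j_rand) else pop[i][d]
             for d in range(D)]
        if math.dist(t, center) > radius:
            out += 1
    return out / trials
```

Sweeping F, CR, and the population size in such an experiment reproduces empirically the trends derived analytically in the previous sections.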
Based on the above, this paper proposes for the first time a probability analysis method for the transient process of the DE algorithm's mutation and crossover operations, establishes a visual geometric probability model, quantitatively analyzes the behavior of the DE algorithm in local regions, and obtains the characteristic curves of the influence of the population size and the scaling factor on mutation performance, as well as the quantitative curves of the influence of the crossover probability factor on crossover performance. Finally, the influence of the parameters on the comprehensive performance of the algorithm is investigated, and certain rules for parameter setting are obtained. In general, this work provides a theoretical reference for the application of the algorithm through the establishment and analysis of its probability model.
This analysis model is also applicable to the performance analysis of other intelligent optimization algorithms. In future work, how to use this model to analyze the solution performance of the algorithm in high-dimensional solution spaces will be studied further.
Data Availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Conflicts of Interest
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
References
K. V. Price, R. M. Storn, and J. A. Lampinen, Differential Evolution: A Practical Approach to Global Optimization, Springer-Verlag, Berlin, 2005.
J. Vesterstrom and R. Thomsen, “A comparative study of differential evolution, particle swarm optimization, and evolutionary algorithms on numerical benchmark problems,” in Proceedings of the IEEE Congress on Evolutionary Computation, pp. 1980–1987, 2004.
D. Zaharie, “Critical values for control parameters of differential evolution algorithm,” in Proceedings of the 8th International Mendel Conference on Soft Computing, pp. 62–67, Brno, Czech Republic, April 2002.
J. Ronkkonen, S. Kukkonen, and K. V. Price, “Real parameter optimization with differential evolution,” in Proceedings of the IEEE Congress on Evolutionary Computation, vol. 1, pp. 506–513, Edinburgh, Scotland, UK, 2005.
J. Lampinen and I. Zelinka, “On stagnation of the differential evolution algorithm,” in Proceedings of the 6th International Mendel Conference on Soft Computing, pp. 76–83, Brno, Czech Republic, 2000.