Research Article | Open Access
Qiuyu Li, Zhiteng Ma, "A Hybrid Dynamic Probability Mutation Particle Swarm Optimization for Engineering Structure Design", Mobile Information Systems, vol. 2021, Article ID 6648650, 32 pages, 2021. https://doi.org/10.1155/2021/6648650
A Hybrid Dynamic Probability Mutation Particle Swarm Optimization for Engineering Structure Design
Particle swarm optimization (PSO) is a widely used metaheuristic algorithm. However, when applied to practical engineering structure optimization problems, it is prone to premature convergence during the search and falls into local optima. To strengthen its performance, this paper combines several ideas from the differential evolution (DE) algorithm and proposes a dynamic probability mutation particle swarm optimization with chaotic inertia weight (CWDEPSO). The main improvements concern both the parameters and the algorithm mechanism. For the former, a novel inverse tangent chaotic inertia weight and sine learning factors are proposed, and the scaling factor and crossover probability are improved by random distributions. For the latter, a monitoring mechanism is introduced: by monitoring the convergence of PSO, a developed mutation operator with a more reliable local search capability is adopted, which increases population diversity and helps PSO escape from local optima effectively. To evaluate the effectiveness of CWDEPSO, 24 benchmark functions and two groups of engineering optimization experiments are used for numerical and engineering optimization, respectively. The results indicate that CWDEPSO offers better convergence accuracy and speed than several well-known metaheuristic algorithms.
Recently, with the development of science, technology, and industrialization, an increasing number of fields, such as feature selection, artificial intelligence, and engineering structure design, have faced optimization problems. Among them, the engineering structure optimization problem is difficult to solve with any single algorithm because of its extensive and complex constraints. Since traditional optimization methods take too much time to solve these problems, making it difficult to find the optimal solution, researchers have turned to new optimization algorithms with broader applicability. In the past two decades, physical and biological phenomena have provided inspiration, and researchers have invented many optimization algorithms, such as the whale optimization algorithm (WOA), biogeography-based optimization (BBO), the sine-cosine algorithm (SCA), the moth-flame optimization (MFO) algorithm, ant colony optimization (ACO), the krill herd (KH) algorithm [9, 10], the artificial bee colony (ABC), the gravitational search algorithm (GSA), monarch butterfly optimization (MBO) [13, 14], the earthworm optimization algorithm (EWA), elephant herding optimization (EHO) [16, 17], the moth search (MS) algorithm, the slime mould algorithm (SMA), Harris hawks optimization (HHO), the differential evolution (DE) algorithm, and particle swarm optimization (PSO) [22, 23]. These optimization algorithms, also known as metaheuristic algorithms, are now widely used in real life.
PSO is a swarm intelligence algorithm put forward by the American computer intelligence researcher Eberhart and the psychological researcher Kennedy in 1995, derived from the foraging behavior of bird flocks. PSO is not only robust and efficient but also easy to implement, so it is widely used in practical engineering optimization. However, it is susceptible to premature convergence and falls into local optima when dealing with complicated multimodal optimization or multiconstraint conditions. To overcome these shortcomings, scholars have proposed various PSO variants. Liang et al. introduced comprehensive learning PSO (CLPSO), which incorporates an original learning method. Liu and Gao proposed chaos PSO (CPSO), which improves the performance of PSO by introducing a chaos mechanism. Wang et al. proposed adaptive granularity learning distributed particle swarm optimization (AGLDPSO) with the help of machine-learning techniques, including clustering analysis based on locality-sensitive hashing (LSH) and adaptive granularity control based on logistic regression (LR). To avoid premature convergence, Jordehi proposed a time-varying acceleration coefficient PSO (TVACPSO). Li et al. adopted the local version of PSO to implement a pipeline-based parallel PSO (PPPSO). Fan and Jen attempted to improve the effectiveness and convergence of the basic PSO by adopting the idea of multiple cooperative swarms, proposing an enhanced partial search PSO (EPS-PSO). Ma et al. proposed the chaotic PSO-arctangent acceleration coefficient algorithm (CPSO-AT), which strengthens the original PSO through chaotic initialization and nonlinear optimization. Zhang et al. proposed a cooperative coevolutionary bare-bones particle swarm optimization (CCBBPSO) with function independent decomposition (FID), called CCBBPSO-FID.
Liu et al. proposed a coevolutionary particle swarm optimization with a bottleneck objective learning (BOL) strategy for many-objective optimization (CPSO). Xia et al. proposed a triple archives PSO (TAPSO). Although these algorithms achieve higher convergence accuracy than the basic PSO, their convergence and robustness are still unsatisfactory.
Similar to PSO, DE is also a typical swarm intelligence algorithm, with the advantages of simple rules, excellent robustness, fast convergence, and easy implementation. Nevertheless, individual convergence is too strong, which makes DE easily attracted by local extrema late in the evolution, leading to premature convergence. To overcome such issues, many scholars have worked on adaptive variants of DE, such as self-adaptive DE with a developed mutation method (IMSaDE), multipopulation ensemble DE (MPEDE), self-adapting control parameters in DE (jDE), and self-adaptive DE (SaDE). In addition, Zhao et al. proposed an LBP-based adaptive DE (LBPADE) algorithm. Wang et al. proposed a new automatic niching technique based on affinity propagation clustering (APC) and designed a novel niching DE algorithm, termed automatic niching DE (ANDE). Zhan et al. developed a novel double-layered heterogeneous DE algorithm and realized it in a cloud computing distributed environment (CloudDE). Wang et al. proposed dynamic group learning distributed particle swarm optimization (DGLDPSO) for large-scale optimization and extended it to large-scale cloud workflow scheduling. Liu et al. developed a novel adaptive DE variant that utilizes both historical experience and heuristic information for the adaptation (HHDE). Zhan et al. proposed an adaptive DDE (ADDE) to relieve the sensitivity of strategies and parameters. Chen et al. developed a distributed individual differential evolution (DIDE) algorithm based on a distributed-individuals-for-multiple-peaks (DIMPs) framework and two novel mechanisms. The most classical variant is adaptive DE with an optional external archive (JADE), put forward by Zhang and Sanderson, which uses Cauchy and Gaussian distributions to dynamically update the control parameters and improve the population's convergence speed toward the optimal value.
Although these modifications outperform traditional DE, their parameter improvement depends too much on previous historical experience, and problems such as slow convergence remain.
To overcome the limitations of a single algorithm, scholars have combined PSO and DE and conducted much research on such hybrid algorithms, including self-adaptive mutation DE based on PSO (DEPSO), ensemble PSO and DE (EPSODE), combined DE and PSO (DEPSO), and PSO hybridized with DE (PSO-DE). The existing hybrid algorithms can be roughly classified as follows: the two algorithms are used in sequence; the two algorithms evolve and exchange information between two different populations; or the two algorithms jointly evaluate and select optimal values. These hybrid algorithms inherit the advantages of each component and can improve the optimization performance of basic PSO and DE. However, higher demand for computing resources and slower convergence are still the main problems. Based on the above, we combine some ideas of DE and propose a dynamic probability mutation particle swarm optimization with chaotic inertia weight (CWDEPSO), which improves the parameters of PSO and DE and adds chaotic processing to the inertia weight. Once PSO fails to update the optimal value, a different mutation operator is adopted to help PSO escape from local extrema, thereby avoiding premature convergence and enhancing the population's distribution entropy, with the aim of balancing local and global search capability. The distribution entropy of the population represents the degree of disorder of the population and its diversity. Generally, individuals are scattered in the early stage of iteration, so the degree of disorder and the population diversity are high; at the end of the iteration, individuals are concentrated in regions of strong fitness, so the degree of disorder and the population diversity are low. The population diversity can be improved by adding a mutation operator.
The contributions of this paper are as follows:
(1) The paper proposes a new mutation operator, DE/current-to-gbest/1, and a novel dynamic probability mutation strategy, with the former applied within the latter.
(2) In the selection stage, to fully exploit the individual value in the population, an original selection strategy is proposed that introduces a candidate population to reuse culled individuals.
(3) The paper introduces logistic chaotic maps, inverse tangent and sine trigonometric functions, and random distributions to optimize the respective parameters.
(4) CWDEPSO is applied to optimize actual engineering structures, namely the spring tension structure and the pressure vessel structure.
24 benchmark test functions and two groups of engineering optimization experiments are adopted to evaluate the performance of the algorithm. The results suggest that CWDEPSO performs better than PSO, DE, AIWCPSO, MPSPSO-ST, JADE, SinDE, and DEPSO and than five common metaheuristic algorithms, GSA, ABC, MFO, SCA, and BBO. Besides, it achieves the best performance on the practical engineering optimization problems.
The remainder of the article is arranged as follows. The second section introduces the basic DE and PSO. The third section introduces the improvement of the parameters, and the fourth section presents the monitoring mechanism and the specific procedure of CWDEPSO. The experiments are presented in the fifth section. The sixth section uses CWDEPSO to solve engineering structure optimization problems, and the final section concludes.
2. PSO and DE Algorithm
The two algorithms, DE and PSO, are introduced in this section.
2.1. Particle Swarm Optimizer (PSO)
As a metaheuristic algorithm, PSO represents each candidate solution by the position of a particle, and the fitness of a solution is evaluated by the fitness function. The search proceeds by iteration. During the iterative process, each particle has a memory: it records the best position it has visited during the search and combines it with the best position found by the entire population to refine its own position and velocity. The particles search for the optimal solution in a D-dimensional space. For the i-th particle of the t-th generation, the position and velocity are represented by the vectors x_i(t) = (x_i1, ..., x_iD) and v_i(t) = (v_i1, ..., v_iD), respectively. The best position that the i-th particle has undergone so far is pbest_i, and the best position explored by the whole swarm is gbest. At the t-th iteration, the d-th dimension of the i-th particle's velocity and position is updated as follows:

v_id(t + 1) = w·v_id(t) + c1·r1·(pbest_id - x_id(t)) + c2·r2·(gbest_d - x_id(t))
x_id(t + 1) = x_id(t) + v_id(t + 1)

where w is the inertia weight; r1 and r2 are random values in the interval [0, 1]; and c1 and c2 are the learning factors, usually with c1 = c2 = 2.
Shi and Eberhart proposed a well-known PSO variant that introduces a linearly decreasing inertia weight to balance global and local searches, as shown in Figure 1. The update mechanism is defined as follows:

w = w_max - (w_max - w_min)·t / T

where T and t are the maximum number of iterations and the current iteration number, respectively.
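As a concrete illustration of the update rules above, the following minimal Python sketch implements basic PSO with the linearly decreasing inertia weight. The function name, the position clamping, and the parameter defaults are our own illustrative choices, not prescriptions from the paper:

```python
import random

def pso(f, dim, bounds, n_particles=30, iters=200,
        w_max=0.9, w_min=0.4, c1=2.0, c2=2.0):
    """Minimal PSO with the linearly decreasing inertia weight
    w = w_max - (w_max - w_min) * t / T of Shi and Eberhart."""
    lo, hi = bounds
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]
    pbest_f = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / iters  # linearly decreasing inertia
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                # clamp positions to the search range
                x[i][d] = min(hi, max(lo, x[i][d] + v[i][d]))
            fi = f(x[i])
            if fi < pbest_f[i]:          # update personal best
                pbest[i], pbest_f[i] = x[i][:], fi
                if fi < gbest_f:         # update global best
                    gbest, gbest_f = x[i][:], fi
    return gbest, gbest_f
```

On a simple unimodal function such as the sphere, this sketch converges close to the origin within a few hundred iterations.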
2.2. Differential Evolution (DE) Optimizer
As a population-based random search method, DE solves optimization problems in three steps: mutation, crossover, and selection. DE is adopted worldwide in various fields due to its simple principle, strong robustness, and few control parameters. This part introduces the specific steps of DE.
2.2.1. Population Initialization
Suppose there is a population of NP individuals, each represented by a vector x_i = (x_i1, x_i2, ..., x_iD), where D is the dimension of the solution. During evolution, the current and maximum numbers of generations are denoted by g and G_max, respectively, and the i-th individual of the g-th generation is written x_i^g. The lower and upper limits of each dimension are given by the vectors lb = (lb_1, ..., lb_D) and ub = (ub_1, ..., ub_D). The initial population is generated as follows:

x_ij^0 = lb_j + rand(0, 1)·(ub_j - lb_j)

where rand(0, 1) is a random number in the interval [0, 1].
In the g-th generation, each individual (the target vector x_i^g) uses the mutation operator to produce a mutation vector v_i^g. Figure 2 shows the classic mutation operator DE/rand/1. The choice of mutation operator greatly influences the algorithm's performance. Common mutation operators include the following.

DE/rand/1:

v_i^g = x_r1^g + F·(x_r2^g - x_r3^g)

DE/current-to-best/1:

v_i^g = x_i^g + F·(x_best^g - x_i^g) + F·(x_r1^g - x_r2^g)

where r1, r2, and r3 are distinct integers drawn randomly from the interval [1, NP], all different from i. The parameter F is the scaling factor, which controls the scale of variation, usually F ∈ [0, 2]. x_best^g is the fittest individual in the current population.
After mutation, each generation's target vector x_i^g is crossed with its corresponding mutation vector v_i^g to generate a trial vector u_i^g. Each dimension of the trial vector is taken from either the mutation vector v_i^g or the target vector x_i^g. The crossover operates as follows:

u_ij^g = v_ij^g, if rand(0, 1) ≤ CR or j = j_rand
u_ij^g = x_ij^g, otherwise

where j = 1, 2, ..., D, and the crossover probability CR ∈ [0, 1] controls the probability that the trial vector inherits from the mutation vector. The randomly chosen index j_rand ∈ {1, ..., D} ensures that at least one dimension of the trial vector inherits the value of the mutation vector.
To prevent values from exceeding the upper and lower bounds, the trial vector must also be bounded. A component that leaves its range is regenerated within it:

u_ij^g = lb_j + rand(0, 1)·(ub_j - lb_j), if u_ij^g < lb_j or u_ij^g > ub_j

where i = 1, ..., NP and j = 1, ..., D.
Selection uses a greedy strategy between the target vector and the trial vector: the one with the better function value enters the next iteration. Better individuals are thus inherited by the following generation, ensuring the convergence of the algorithm. For a minimization problem, the operation is as follows:

x_i^(g+1) = u_i^g, if f(u_i^g) ≤ f(x_i^g)
x_i^(g+1) = x_i^g, otherwise

where f(·) is the objective function value of a vector.
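The three DE steps above can be sketched in Python as follows (DE/rand/1 mutation, binomial crossover, and greedy selection; the function name and parameter defaults are illustrative, not from the paper):

```python
import random

def de(f, dim, bounds, np_=30, gens=200, F=0.5, CR=0.9):
    """Minimal DE: DE/rand/1 mutation, binomial crossover,
    greedy selection, for a minimization problem."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(np_)]
    fit = [f(ind) for ind in pop]
    for _ in range(gens):
        for i in range(np_):
            # three distinct random indices, all different from i
            r1, r2, r3 = random.sample([j for j in range(np_) if j != i], 3)
            j_rand = random.randrange(dim)  # guarantees one mutant component
            trial = []
            for j in range(dim):
                if random.random() <= CR or j == j_rand:
                    vj = pop[r1][j] + F * (pop[r2][j] - pop[r3][j])  # DE/rand/1
                    vj = min(hi, max(lo, vj))  # keep within bounds
                else:
                    vj = pop[i][j]
                trial.append(vj)
            ft = f(trial)
            if ft <= fit[i]:  # greedy selection
                pop[i], fit[i] = trial, ft
    b = min(range(np_), key=lambda i: fit[i])
    return pop[b], fit[b]
```

With the classical settings F = 0.5 and CR = 0.9, this sketch reliably minimizes simple unimodal functions.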
3. The Parameters of CWDEPSO
This section describes the improvements of CWDEPSO parameters. We introduce chaos theory, the improvement of parameters in PSO, and that of parameters in DE, in turn.
3.1. Chaos Theory
Chaotic motion is a common nonlinear phenomenon in nature. It looks cluttered on the outside but has subtle regularity on the inside, possessing the features of regularity, randomness, and ergodicity. Chaotic ergodicity can visit every state within a definite interval without repetition, so an optimized search using chaotic variables is superior to a blind random search. There are many types of chaotic dynamic models; in this paper, the logistic map [57, 58] is used to design the chaotic operations. Its iterative equation is defined as follows:

z_(k+1) = μ·z_k·(1 - z_k)

where z_k ∈ (0, 1) and μ ∈ (0, 4]. The map is completely chaotic when the control parameter μ = 4, and the best results are obtained at this value, as shown in Figure 3, so we let μ = 4 in this article.
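The chaotic behavior of the logistic map at μ = 4 can be seen in a few lines of Python; the starting value 0.37 is an arbitrary seed avoiding the map's fixed points:

```python
def logistic_map(z0=0.37, mu=4.0, n=1000):
    """Iterate z_{k+1} = mu * z_k * (1 - z_k); with mu = 4 the
    sequence is fully chaotic and ranges over (0, 1)."""
    zs = [z0]
    for _ in range(n - 1):
        zs.append(mu * zs[-1] * (1.0 - zs[-1]))
    return zs

seq = logistic_map()
```

Plotting `seq` reproduces the dense, non-repeating coverage of (0, 1) that motivates using chaotic variables instead of uniform random numbers.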
3.2. Improvement of PSO Parameters
The traditional PSO algorithm uses fixed parameter settings during evolution. However, much of the literature indicates that PSO is very sensitive to parameter settings [30, 58–61]. Among these works, Zhan et al. proposed APSO, a famous PSO variant that handles the parameters well, and Peng et al. used probabilistic methods to show that the influence of the inertia weight on PSO's convergence is an order of magnitude higher than that of the two learning factors, so adjusting the inertia weight during evolution is the more pressing need. For the inertia weight w, Shi and Eberhart proposed a linearly decreasing scheme; the motivation is to increase the global search capability in the early iterations and the local search capability in the late iterations, since a larger inertia weight helps global exploration while a smaller one is conducive to local exploitation. Subsequently, numerous studies [53, 58, 59] found that a nonlinear w is more helpful for algorithm enhancement. In this paper, we therefore choose a nonlinear w and apply chaotic treatment to it, aiming to increase disorder during the iteration and thereby enhance the population diversity in the late stage. In general, we want strong global exploration early and improved local search later, so that the optimal value can be located more accurately. We use the arctangent acceleration coefficient (AT) to propose a nonlinearly changing inertia weight and introduce a chaotic mechanism into it, avoiding local optima by increasing population diversity and thereby improving the search performance of PSO. The update of the inertia weight is presented in the following equations, where the coefficients involved are constants.
The chaos term enables the algorithm to gain chaotic characteristics while maintaining the original trend of change, and the direction factor makes w fluctuate up and down: in each iteration, its value is chosen to be 1 or -1. In each generation, the chaotic variable is updated by equation (13) and substituted into equation (14). The resulting curves are shown in Figures 4(a) and 4(b).
Similarly, the learning factors also greatly influence the performance of PSO. c1 and c2 are called the cognitive and social components, respectively. In the traditional PSO algorithm, the cognitive component always equals the social component, that is, c1 = c2. However, much of the literature [30, 53, 59] has demonstrated that the focus should be on global exploration at the beginning of the iteration, i.e., a larger social component, and on local exploitation in the later part of the iteration, i.e., a larger cognitive component. To better balance global and local exploration during the iteration, this paper uses the sine function so that c2 gradually decreases while c1 gradually increases. The specific equation is given below, where the coefficients involved are constants.
The curves of c1 and c2 are presented in Figure 5.
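To make the shape of these schedules concrete, here is an illustrative Python sketch. The exact CWDEPSO formulas are given by the paper's equations; the arctangent curve, the amplitude of the chaotic perturbation, and the sine schedules below are our own representative assumptions, not the published coefficients:

```python
import math
import random

def coefficients(t, T, w_max=0.9, w_min=0.4, c_max=2.5, c_min=0.5):
    """Illustrative time-varying PSO coefficients: an arctangent-shaped
    inertia weight with a chaotic perturbation and a random direction
    factor, plus sine-based learning factors (c1 grows, c2 shrinks)."""
    # nonlinear inertia weight decreasing along an arctangent curve
    w = w_min + (w_max - w_min) * (1 - math.atan(4.0 * t / T) / math.atan(4.0))
    # small chaotic perturbation with direction factor +1 or -1
    z = random.random()
    w += random.choice((1, -1)) * 0.05 * 4.0 * z * (1.0 - z)
    # sine-based learning factors over the run
    s = math.sin(0.5 * math.pi * t / T)
    c1 = c_min + (c_max - c_min) * s   # cognitive component increases
    c2 = c_max - (c_max - c_min) * s   # social component decreases
    return w, c1, c2
```

Evaluating `coefficients(t, T)` over t = 0, ..., T traces a weight that falls nonlinearly from about w_max to w_min while oscillating chaotically, mirroring the behavior shown in Figures 4 and 5.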
3.3. Improvement of DE Parameters
Similar to PSO, the parameters of DE also have a significant impact on the algorithm [21, 24, 40, 41]. JADE first proposed using Cauchy and Gaussian distributions to update the parameters and has been widely cited [36, 62, 63]; in the literature, the parameters F and CR have also been updated via a power function. Inspired by this, this paper adjusts the scaling factor F and the crossover probability CR using the parameter control methods below.
Similar to JADE, in each generation the scaling factor F_i of each individual in the population is independently and randomly generated from a Cauchy distribution:

F_i = randc(μ_F, 0.1)

where randc(μ_F, 0.1) is a random number generated by the Cauchy distribution with scale parameter 0.1 and location parameter μ_F. If F_i > 1, it is set to 1, and if F_i ≤ 0, it is regenerated. In the current generation, all successful scaling factors are stored in the set S_F. The location parameter μ_F is initialized to 0.5, and at the end of each generation it is updated as follows:

μ_F = (1 - c)·μ_F + c·mean_L(S_F)

where c is a constant in the interval [0, 1] and mean_L represents the Lehmer mean:

mean_L(S_F) = (Σ F²) / (Σ F), summing over F ∈ S_F.
In each evolutionary generation, the crossover probability CR_i of each individual in the population is independently generated from a Gaussian distribution:

CR_i = randn(μ_CR, 0.1)

where randn(μ_CR, 0.1) is a random number generated by the Gaussian distribution with standard deviation 0.1 and mean μ_CR. If CR_i > 1, it is set to 1, and if CR_i < 0, it is set to 0. In the current generation, all successful crossover probabilities are stored in the set S_CR. The mean μ_CR is initialized to 0.5. Unlike JADE, at the end of each generation μ_CR is updated with a power mean:

μ_CR = (1 - c)·μ_CR + c·mean_P(S_CR)

where mean_P represents the power mean

mean_P(S_CR) = ((1/|S_CR|)·Σ CR^n)^(1/n), summing over CR ∈ S_CR,

|S_CR| is the cardinality of the set, and n is a constant.
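This parameter adaptation can be sketched compactly in Python. The Cauchy/Gaussian sampling and the Lehmer mean follow the standard JADE scheme; the power-mean exponent `n=1.5` is an illustrative assumption, since the paper's constant is not reproduced here:

```python
import math
import random

def sample_F(mu_F):
    """Cauchy-distributed scaling factor (location mu_F, scale 0.1):
    truncated to 1 if above 1, regenerated if non-positive."""
    while True:
        F = mu_F + 0.1 * math.tan(math.pi * (random.random() - 0.5))  # Cauchy sample
        if F > 1.0:
            return 1.0
        if F > 0.0:
            return F

def sample_CR(mu_CR):
    """Gaussian crossover probability (mean mu_CR, s.d. 0.1), clipped to [0, 1]."""
    return min(1.0, max(0.0, random.gauss(mu_CR, 0.1)))

def update_mu_F(mu_F, S_F, c=0.1):
    """mu_F <- (1-c)*mu_F + c * Lehmer mean of the successful F values."""
    if S_F:
        lehmer = sum(F * F for F in S_F) / sum(S_F)
        mu_F = (1.0 - c) * mu_F + c * lehmer
    return mu_F

def update_mu_CR(mu_CR, S_CR, c=0.1, n=1.5):
    """mu_CR <- (1-c)*mu_CR + c * power mean of successful CR values;
    the exponent n is an illustrative assumption."""
    if S_CR:
        power = (sum(CR ** n for CR in S_CR) / len(S_CR)) ** (1.0 / n)
        mu_CR = (1.0 - c) * mu_CR + c * power
    return mu_CR
```

The Lehmer mean biases μ_F toward larger successful scaling factors, which counteracts the tendency of plain averaging to shrink F over time.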
4. Dynamic Probabilistic Mutation PSO
Under the premise of ensuring population diversity, PSO can explore the solution area effectively and converges quickly. However, the search can easily stall for lack of diversity during the search procedure, and once trapped in premature convergence, it is difficult to escape from the local optimum [64–67]. To solve this problem, the population diversity must be enhanced in the late iterations. Previous work shows that DE can effectively help PSO jump out of a local optimum and improve population diversity when PSO stalls late in the run, and the literature [48, 68, 69] shows that combining DE with PSO can effectively improve performance and avoid premature convergence. On the other hand, some algorithms such as DEPSO substantially increase the computational complexity of each iteration by comparing two different trial vectors during adaptation. Therefore, in this paper, we improve diversity while avoiding too much extra computation by observing the results of previous iterations to decide whether to invoke the DE step next time. This paper proposes a novel dynamic probabilistic mutation PSO to improve local search in the later iterations. As shown in Figure 6, the improved mutation operator is introduced into PSO to increase population diversity and help the algorithm escape local convergence. The specific improvements are described below.
4.1. Monitor Global Optimum Update
First, the global optimum of each generation is monitored. When the i-th particle of the g-th generation fails to update the global optimum, the flag bit corresponding to that particle is increased by one. When the flag bits of all particles are no less than one, that is, all particles of the g-th generation failed to improve the optimal value, the swarm is considered trapped in a local optimum, which might harm the population's convergence. We then increase the probability of mutation according to the change in the flag bits to help the population jump out of local convergence.
4.2. Improvement of Mutation Operator
In addition, an appropriate mutation operator must be selected. The DE/current-to-best/1 mutation operator possesses strong local exploitation capability and fast convergence and reflects the current particle's state, but its global exploration ability is comparatively poor. The DE/rand/1 and DE/rand/2 mutation operators depend on other random individuals, which helps improve population diversity; their global convergence ability is strong, but their convergence speed is relatively slow. Even if PSO fails to update the global optimum in the current population, the corresponding particles still have high exploitation value. Considering that the algorithm is usually near the end of evolution when it falls into local convergence and should emphasize local exploitation, we take the DE/current-to-best/1 mutation operator as the basis and improve it. The new mutation operator, DE/current-to-gbest/1, is defined as follows, where one coefficient is independent of the scaling factor and changes with the number of iterations. Letting this coefficient gradually increase with the iterations reduces the influence of the PSO's local convergence in the later stage and improves the ability to explore the surrounding area. Mathematically, it is given as follows:
In contrast, the population diversity decreases as the number of iterations increases, so the variable K affecting population diversity should increase with the iterations; otherwise, the search will still fall into local convergence. K is defined as follows:
By strengthening global exploration in the early period and focusing on the exploration of the current local area in the later stage, the balance between local and global convergence is effectively maintained.
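The operator can be sketched as below. Since the paper's exact equations for DE/current-to-gbest/1 and its coefficients are not reproduced here, the functional form x_i + lam·(gbest - x_i) + K·(x_r1 - x_r2) and the linear schedules for `lam` and `K` are assumptions chosen to match the description above (both grow with the iteration count):

```python
import random

def current_to_gbest_1(pop, i, gbest, t, T):
    """Sketch of a DE/current-to-gbest/1 mutation: move the current
    particle toward gbest, plus a scaled random difference vector.
    The linear schedules for lam and K are illustrative assumptions."""
    lam = 0.2 + 0.6 * t / T   # grows with iterations: lean on gbest later
    K = 0.3 + 0.5 * t / T     # grows with iterations: keeps diversity up
    r1, r2 = random.sample([j for j in range(len(pop)) if j != i], 2)
    return [pop[i][d] + lam * (gbest[d] - pop[i][d])
            + K * (pop[r1][d] - pop[r2][d])
            for d in range(len(pop[i]))]
```

When all individuals coincide, the difference term vanishes and the mutant lies on the line from the current particle to gbest, which makes the operator's exploitation bias easy to verify.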
4.3. Processing of Monitoring Data
When the flag bits of all particles exceed the threshold, the count variable increases by one. As the count increases, the number of convergence failures grows, meaning the algorithm is more likely to be in a state of local convergence. The first increases of the count may be accidental, but the larger it grows, the more likely premature convergence becomes. Therefore, we define a probability P, representing the probability of invoking the mutation idea: the larger the count, the larger P. Inspired by the exponential probability-update formula in the literature, the paper uses an exponential formula to control the probability of introducing the mutation. The probability is set as follows, where G_max is the maximum number of evolutions.
If the flag bit corresponding to the i-th particle is greater than the warning value, the individual has a high probability of being stuck in a local basin that is hard to escape, so its reference value is low. This paper sets a warning value and reinitializes such a particle with a proper probability. The specific operation is as follows:
The complete monitoring mechanism is shown in Figure 7.
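The flag-bit bookkeeping and the stall-driven mutation probability can be sketched as follows. The paper controls the probability with an exponential formula; the specific 1 - exp(-5·count/G_max) form below is an illustrative assumption with the same qualitative behavior (zero when no stalls have occurred, approaching one as stalls accumulate):

```python
import math

def mutation_probability(count, G_max):
    """Probability of invoking the DE mutation step; the exponential
    shape is assumed, rising from 0 toward 1 as stalls accumulate."""
    return 1.0 - math.exp(-5.0 * count / G_max)

class Monitor:
    """Flag-bit monitoring: flags[i] counts consecutive failures of
    particle i to improve the global best; count grows by one whenever
    the whole swarm stalls in a generation."""
    def __init__(self, n):
        self.flags = [0] * n
        self.count = 0

    def record(self, improved):
        # improved[i] is True if particle i improved the global best
        for i, ok in enumerate(improved):
            self.flags[i] = 0 if ok else self.flags[i] + 1
        if all(f >= 1 for f in self.flags):
            self.count += 1
```

A single successful particle resets its flag and prevents the stall counter from advancing, so the probability only grows during genuine swarm-wide stagnation.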
4.4. Modification of Selection Method
After the mutation operation, the algorithm proceeds with the crossover and selection operations of DE. For selection, the basic DE and many DE variants adopt the greedy method, but greedy selection often abandons good individuals, contributing to local convergence. To fully mine the effective information of these excellent individuals, we add a temporary population that temporarily saves the abandoned individuals, where the temporary population holds all the individuals eliminated in the current generation. Some outstanding individuals in the temporary population may be better than the worst individuals of the newly updated population: if the worst individual in the new population is worse than the best individual in the temporary population, it is replaced. It is worth noting that replacing too many individuals would itself cause local convergence, so the replacement is limited.
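A minimal sketch of this selection strategy, for a minimization problem, is given below; the single-worst-survivor replacement mirrors the rule described above, while the helper name is our own:

```python
def select_with_archive(pop, fit, trials, trial_fit):
    """Greedy selection that keeps each contest's loser in a temporary
    population, then lets the best archived individual replace the worst
    survivor if it is better. Replacement is limited to one individual
    to avoid hurting diversity."""
    archive, archive_fit = [], []
    for i in range(len(pop)):
        if trial_fit[i] <= fit[i]:
            archive.append(pop[i]); archive_fit.append(fit[i])      # loser saved
            pop[i], fit[i] = trials[i], trial_fit[i]
        else:
            archive.append(trials[i]); archive_fit.append(trial_fit[i])
    if archive:
        b = min(range(len(archive)), key=lambda j: archive_fit[j])  # best archived
        w = max(range(len(pop)), key=lambda j: fit[j])              # worst survivor
        if archive_fit[b] < fit[w]:
            pop[w], fit[w] = archive[b], archive_fit[b]
    return pop, fit
```

For example, a trial vector that loses to a strong target may still beat a weak survivor elsewhere in the population; the archive lets that information survive one more generation.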
On the basis of the analysis before, the pseudocode of CWDEPSO can be seen below (Algorithm 1).
5. Experimental Results and Discussion
In this section, the performance of CWDEPSO is evaluated on 24 benchmark functions that have been widely adopted in numerical optimization [58, 70, 71], most of which belong to CEC2017 [72, 73], as shown in Tables 1–3. The experiment is divided into two parts. The first part compares CWDEPSO with PSO, DE, and their variants AIWCPSO, MPSPSO-ST, JADE, SinDE, and DEPSO. The second part compares CWDEPSO with several other well-known optimization algorithms (GSA, ABC, SCA, MFO, and BBO). All algorithms are coded in MATLAB R2018a and run on a PC with an Intel(R) Core(TM) i7-7700HQ CPU @ 2.80 GHz and 8 GB RAM.
5.1. Test Function
The benchmark functions adopted in the article fall into three categories: unimodal functions, one of which contains noise interference; multimodal functions, among which some have many local optima whose suboptimal values differ significantly from the global optimum, and one has a very narrow global basin, making it rather easy to fall into a local optimum; and multimodal benchmark functions with fixed dimensions. The three types are listed in Tables 1–3. Dim and Scope represent the dimension of the solution space and the range of the solution, respectively, and Type represents the type of the current function.
5.2. Comparison of CWDEPSO with PSO, DE, AIWCPSO, MPSPSO-ST, JADE, SinDE, and DEPSO
CWDEPSO is compared with PSO, DE, and their variants AIWCPSO, MPSPSO-ST, JADE, SinDE, and DEPSO in this part. To ensure a fair comparison, the parameters in both sets of experiments are set according to the original papers, strictly following the literature [58, 59], and the population size is set to 500. It is worth noting that CWDEPSO uses the same number of function evaluations per iteration as PSO and DE, so fairness is ensured for the same number of iterations. The initial populations of all algorithms are generated with random numbers, and each group is run twenty times independently. The remaining parameters of PSO, AIWCPSO, MPSPSO-ST, DE, JADE, SinDE, DEPSO, and CWDEPSO follow the settings given in their original papers.
The benchmark functions adopted in this section are presented in Tables 1–3. To reduce statistical error, each test function is independently evaluated 20 times. The obtained best value (Best), worst value (Worst), mean (Mean), and standard deviation (S.D.) are presented in Table 4, with optimal values marked in bold. To facilitate comparison of the mean and S.D., the data are also shown as a radar chart in Figure 8. Figure 9 shows the trajectory of the algorithms' average values on the test functions.
As shown in Table 4, on the unimodal functions, CWDEPSO is significantly more accurate than the other seven algorithms, achieving the best results on every index. Among the multimodal functions, on one function the best results are obtained by DEPSO, followed by CWDEPSO, while on most of the others CWDEPSO performs well and converges with the best accuracy. On one multimodal function, JADE obtains the best value, while DEPSO performs best on the worst value, mean, and S.D., obtaining more accurate results than the others. On two further multimodal functions CWDEPSO performs the best, and on some functions DEPSO achieves the best performance. On one function CWDEPSO reaches the theoretical optimum, though its worst value and S.D. are slightly worse than AIWCPSO's and its mean is inferior to JADE's. On several other functions CWDEPSO shows the highest convergence accuracy among the compared algorithms, finding the optimal result and providing the best performance. On one function DE can search out the optimal value, and CWDEPSO matches DE on the best value. For f18 and f20, the optimal solutions are found by DEPSO and JADE, respectively, but on another function CWDEPSO performs best on the best value and successfully reaches the theoretical optimum. On the fixed-dimension multimodal benchmark functions, CWDEPSO still performs well; it is only slightly worse than JADE on one function and achieves the best performance on the others.
To make the comparison results statistically significant and reliable, Wilcoxon's rank-sum test is used to test the results of the different algorithms for significant differences at the 5% significance level: a result of 1 denotes a significant difference and 0 otherwise. Table 5 shows the results of Wilcoxon's rank-sum test for each algorithm on each function. CWDEPSO is significantly different from the other algorithms on most of the tested functions; however, on some functions there is no significant difference versus DE, PSO, JADE, SinDE, AIWCPSO, MPSPSO-ST, and DEPSO. On some functions, there is a significant difference only with SinDE and DEPSO, and in