Research Article | Open Access
An Improved Convergence Particle Swarm Optimization Algorithm with Random Sampling of Control Parameters
Although particle swarm optimization (PSO) has been widely used to address various complicated engineering problems, it still suffers from several shortcomings, such as premature convergence and low accuracy. Since its final optimization result depends on the selection of the control parameters, an improved convergence particle swarm optimization algorithm with random sampling of control parameters is proposed. In the proposed algorithm, a random sampling strategy for the control parameters is designed, which increases the flexibility of the parameter settings and simultaneously enhances the randomness of both the velocity and position updates. Based on the convergence analysis of PSO, the sampling range of the inertia weight is determined after the two acceleration coefficients have been sampled in their respective value intervals, so that convergence is ensured at every evolution step of the algorithm. In addition, to make full use of the dimension information of the better particles, a stochastic correction approach on each dimension of the population optimum is adopted. The experimental results demonstrate that, compared with basic particle swarm optimization and other variants, the proposed algorithm improves the convergence rate while maintaining higher convergence accuracy.
PSO, proposed by Kennedy and Eberhart , is an evolutionary algorithm based on swarm intelligence that simulates the predation behavior of bird flocks or fish schools, and it has attracted broad interest from scholars and researchers because of its simple structure, strong maneuverability, ease of implementation, and other characteristics. Up to now, PSO has been successfully applied in many areas [2–6], and meanwhile some improved versions of PSO have also been studied accordingly [7–11].
The inertia weight, a relatively important control parameter of PSO, was first introduced by Shi  into the basic evolution equations of the algorithm, and since then, research on the influence of the inertia weight on optimization performance has been carried out. In , Alfi proposes an adaptive particle swarm optimization algorithm in which a dynamic inertia weight is used. Based on Bayesian theory, an adaptive adjustment strategy for the inertia weight is designed by Zhang , which also makes full use of the historical positions of the particles; although the convergence precision of this improved PSO is higher, its convergence speed is slow. Han  compared several common updating schemes for the inertia weight and concluded that the simulated annealing and linear decreasing methods converge relatively well, but that changes in the acceleration coefficients have a big influence on the convergence performance. In addition, a linear updating scheme for the acceleration coefficients c1 and c2 is put forward by Ratnaweera  through the corresponding parameter analysis, and Yamaguchi  proposed an adaptive adjustment strategy for the acceleration coefficients according to the updated positions of the particles. Furthermore, Liang  proposed a comprehensive learning particle swarm optimization algorithm that uses a novel learning method to improve convergence performance. However, all the above studies improve the convergence performance by modifying the control parameters of the algorithm and demonstrate the improvement only through numerical simulation experiments; lacking the corresponding theoretical convergence analysis, parts of the actual evolutionary process may be divergent and unstable.
The convergence of PSO algorithms should be analyzed within the framework of random search algorithms , and Van den Bergh  proved that PSO is not a global optimization algorithm and cannot even be guaranteed to converge to a local optimum. On this basis, Trelea  used linear constant-coefficient dynamic system theory to analyze the stability of basic PSO, and Clerc  established a constraint model of PSO described by only five parameters and analyzed the convergence and the trajectories of particles in the phase plane. Starting from the Markov chain formed by the particle states, Ren  pointed out that this Markov chain does not satisfy the conditions for a stationary process and then proved, from the viewpoint of transition probability, that PSO is not globally convergent. On the basis of stochastic system theory, Jin  analyzed the mean square convergence of PSO and provided a sufficient condition for convergence. Although some of these studies supply sufficient conditions for convergence, they do not show how to adjust the control parameters of the algorithm during evolution in order to obtain better convergence performance.
In view of the problems mentioned above, this paper proposes an improved convergence particle swarm optimization algorithm with random sampling of control parameters (SC-PSO), and the main contribution of the present work is delineated as follows.
The random sampling strategy is designed to improve the flexibility of the control parameters; it also strengthens the randomness of the particles' position updates, which enhances the exploration ability of PSO and helps it jump out of local optima.
In order to ensure the convergence of the algorithm, the inertia weight is selected around the center of the convergence interval, which prevents the phenomena of “oscillation” and “two steps forward, one step back.”
Because the random sampling strategy weakens exploitation, an intermediate particle updating strategy is devised to update the optimal position of the swarm population in every evolution step. In addition, the optimal value is updated dimension by dimension: the per-dimension information of different particles is used to randomly select candidate values, so as to find better positions in each dimension.
This paper is organized as follows. Section 2 introduces the basic PSO and gives its theoretical analysis of convergence. Section 3 describes the proposed algorithm in detail. Section 4 presents the test functions, the parameters setting of each algorithm, the results, and discussions. Conclusions are given in Section 5.
2. PSO Algorithm
2.1. Basic PSO
While PSO is running, each particle is regarded as a feasible solution to the optimization problem in the search space, and the flight behavior of the particles can be treated as the search process of all individuals; the velocity of each particle is then dynamically updated according to its own historical optimal position and the optimal position of the swarm population. It is assumed that the swarm population is composed of N particles in a D-dimensional space; the historical optimal position of the ith particle is represented by pi = (pi,1, ..., pi,D), i = 1, ..., N, and the optimal position of the swarm population is denoted as pg = (pg,1, ..., pg,D). In every evolution step, the velocity and position of each particle are updated by dynamically tracking its historical optimal position and the optimal position of the swarm population. The detailed equations are expressed as follows:

vi,d(t+1) = ω·vi,d(t) + c1·r1·(pi,d(t) − xi,d(t)) + c2·r2·(pg,d(t) − xi,d(t)),  (1)
xi,d(t+1) = xi,d(t) + vi,d(t+1),  (2)

where t is the iteration number and d indicates the dimension; thus xi,d(t) is the dth dimension variable of the ith particle in the tth iteration, and the variables vi,d(t), pg,d(t), and pi,d(t) have analogous meanings; ω is the inertia weight, c1 and c2 denote the acceleration coefficients, and r1 and r2 are random numbers uniformly distributed in the interval [0, 1].
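To make the update concrete, equations (1) and (2) can be sketched in vectorized form. This is an illustrative implementation rather than the paper's reference code; the default parameter values are common choices from the PSO literature, not the settings used in the experiments.

```python
import numpy as np

def pso_step(x, v, p_best, g_best, w=0.729, c1=1.494, c2=1.494, rng=None):
    """One velocity/position update for the whole swarm.

    x, v      : (N, D) current positions and velocities
    p_best    : (N, D) each particle's historical optimal position
    g_best    : (D,)   optimal position of the swarm population
    w, c1, c2 : inertia weight and acceleration coefficients
    """
    rng = rng or np.random.default_rng()
    r1 = rng.random(x.shape)          # uniform in [0, 1], drawn per dimension
    r2 = rng.random(x.shape)
    v_new = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    x_new = x + v_new                 # equation (2)
    return x_new, v_new
```

Each particle is pulled toward its own best position and toward the swarm best, with the inertia term carrying over part of the previous velocity.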
According to the specific optimization problem to be solved, an objective function is set, and the objective function value of each particle is its fitness value. The fitness value is used not only to evaluate the quality of a particle's position but also to update the historical optimal position of each particle and the optimal position of the swarm population.
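The bookkeeping described here, evaluating fitness and refreshing the historical and swarm optima, might look as follows for a minimization problem (an illustrative sketch; the function and variable names are our own):

```python
import numpy as np

def update_bests(x, p_best, p_best_fit, fitness):
    """Refresh personal bests and return the swarm best (minimization).

    x          : (N, D) current particle positions
    p_best     : (N, D) historical optimal positions
    p_best_fit : (N,)   fitness of the historical optimal positions
    fitness    : callable mapping a (D,) position to a scalar
    """
    fit = np.apply_along_axis(fitness, 1, x)        # evaluate every particle
    improved = fit < p_best_fit                     # which particles improved
    p_best = np.where(improved[:, None], x, p_best)
    p_best_fit = np.where(improved, fit, p_best_fit)
    g_best = p_best[np.argmin(p_best_fit)].copy()   # swarm best = best personal best
    return p_best, p_best_fit, g_best
```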
2.2. Convergence Analysis
The convergence of the particle trajectories is determined by the control parameters of the algorithm; to facilitate analysis without loss of generality, the case of a single particle with a single dimension is taken as an example. The basic evolution equations (1) and (2) can then be transformed into the dynamic equation form

x(t+1) = (1 + ω − φ)·x(t) − ω·x(t−1) + φ·p,  (3)

where we define φ = c1·r1 + c2·r2, μ = E[φ] = (c1 + c2)/2, and σ2 = Var(φ) = (c1² + c2²)/12, and p = (c1·r1·pi + c2·r2·pg)/φ. If the historical optimal position of the particle pi finally converges to the optimal position of the swarm population pg, the dynamic equation (3) can be arranged as follows:

x(t+1) = (1 + ω − φ)·x(t) − ω·x(t−1) + φ·pg.  (4)
3. Our Proposal: SC-PSO Algorithm
3.1. Random Sampling Strategy for Control Parameters
For basic PSO, the control parameters have a great impact on the performance of the algorithm. If they are assigned inappropriately, the trajectories of the particles cannot converge and may even be unstable, so that the optimal solution of the optimization problem cannot be found. At present, the control parameters are usually chosen according to engineers' experience or experiments, which is inflexible and greatly restricts the exploration ability of PSO.
A random sampling strategy is designed to improve the flexibility of the control parameters and to enhance the exploration ability of PSO, helping it jump out of local optima. On the basis of the conclusion from , the convergence of PSO should be considered when the parameters are randomly selected. First, the acceleration coefficients c1 and c2 are each uniformly sampled in their respective value intervals, and the parameters μ and σ can be computed using formula (6). According to the mean square convergence condition in formula (5), the sampling interval for the inertia weight ω can then be solved, whose endpoints denote the lower bound and the upper bound, respectively.
Finally, the inertia weight should be sampled within this computed interval. To avoid the phenomena of “oscillation” and “two steps forward, one step back,” the inertia weight is selected around the center of the convergence interval of ω. According to formulas (8) and (9), Figure 1 shows the relationship between μ and ω for the mean square convergence of PSO. For example, once the acceleration coefficients have been sampled as c1 = c2 = 2, the parameters can be computed as μ = 2 and σ2 = 2/3, and the convergence interval of the inertia weight follows. In practice, the inertia weight is uniformly selected around the center of this interval.
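A minimal sketch of the sampling strategy is given below. The formulas for μ and σ2 follow from φ = c1·r1 + c2·r2 with r1, r2 uniform on [0, 1] and match the worked example (c1 = c2 = 2 gives μ = 2 and σ2 = 2/3); since formulas (8) and (9) for the ω bounds are not reproduced here, the convergence interval is passed in as a center and half-width rather than computed:

```python
import numpy as np

def sample_parameters(c_low, c_high, omega_center, half_width, rng=None):
    """Sample c1, c2 uniformly, then omega near the convergence-interval center.

    mu and sigma^2 are the mean and variance of c1*r1 + c2*r2:
      mu      = (c1 + c2) / 2
      sigma^2 = (c1**2 + c2**2) / 12
    omega_center / half_width stand in for the interval given by the
    paper's formulas (8)-(9), which are not reproduced in this sketch.
    """
    rng = rng or np.random.default_rng()
    c1 = rng.uniform(c_low, c_high)
    c2 = rng.uniform(c_low, c_high)
    mu = (c1 + c2) / 2
    sigma2 = (c1**2 + c2**2) / 12
    omega = rng.uniform(omega_center - half_width, omega_center + half_width)
    return c1, c2, mu, sigma2, omega
```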
On the basis of the random sampling strategy, all particles in the swarm update their positions and velocities using formulas (1) and (2) in every evolution step. In addition, an intermediate particle updating strategy is devised to avoid the premature convergence caused by the strong randomness of the sampling strategy; it is used only to update the optimal position of the swarm population in every evolution step.
The method for generating intermediate particles is as follows:
(1) Intermediate particle 1: its value in each dimension is the average of all updated particles' values in that dimension.
(2) Intermediate particle 2: its value in each dimension is the median of the particles' values in that dimension.
(3) Intermediate particle 3: its value in each dimension is whichever of the maximum and minimum in that dimension has the larger absolute value.
(4) Intermediate particle 4: its value in each dimension is whichever of the maximum and minimum in that dimension has the smaller absolute value.
We choose the one with the best fitness value among the four intermediate particles. If it is better than the current optimal position of the swarm population, it replaces that optimal position; otherwise, the optimal position remains unchanged.
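The four intermediate particles can be computed per dimension as in the following sketch (an illustrative implementation, not the authors' code):

```python
import numpy as np

def intermediate_particles(X):
    """Build the four intermediate particles from swarm positions X of shape (N, D)."""
    mid1 = X.mean(axis=0)                  # per-dimension average
    mid2 = np.median(X, axis=0)            # per-dimension median
    lo, hi = X.min(axis=0), X.max(axis=0)
    pick_larger = np.abs(hi) >= np.abs(lo)
    mid3 = np.where(pick_larger, hi, lo)   # larger absolute value of max/min
    mid4 = np.where(pick_larger, lo, hi)   # smaller absolute value of max/min
    return mid1, mid2, mid3, mid4
```

The four candidates are then evaluated with the objective function, and the best of them challenges the current swarm optimum as described above.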
3.2. Stochastic Correction Approach on Each Dimension for the Optimal Position
The intermediate particles mentioned in the previous section correct the optimal position dimension by dimension; however, the positions of particles are evaluated as a whole, and the historical optimal positions of particles, even those inferior to the optimal position of the swarm, may still carry useful information in some dimensions. Therefore, a stochastic correction approach on each dimension of the optimal position is used in this section.
According to the fitness ordering of the historical optimal positions of the particles, the five best historical optimal positions, indexed ind = 1, ..., 5, are selected. The stochastic correction approach on each dimension of the optimal position then proceeds as follows: for each dimension d, two members m and n are randomly selected from the five better historical optimal positions together with the optimal position itself, to determine the correction range; their dth components, value1(d) and value2(d), give the bounds of that range. A trial value is drawn as a random number uniformly distributed within the range and substituted into the dth dimension of the optimal position; if the resulting position has a better fitness, the optimal position is updated, and otherwise it is left unchanged.
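One plausible reading of this correction procedure is sketched below; since the original pseudocode is only partially legible, details such as how the two reference positions are drawn per dimension are assumptions, labeled as such:

```python
import numpy as np

def stochastic_correction(g_best, top5, fitness, rng=None):
    """Dimension-wise stochastic correction of the swarm optimum (minimization).

    top5    : (5, D) the five best historical optimal positions
    fitness : callable mapping a (D,) position to a scalar
    Assumption: for each dimension, two of the six candidates (top5 plus
    g_best) are drawn at random to bound the correction range.
    """
    rng = rng or np.random.default_rng()
    candidates = np.vstack([top5, g_best])       # 6 candidate positions
    corrected = g_best.copy()
    for d in range(g_best.size):
        m, n = rng.choice(len(candidates), size=2, replace=False)
        lo = min(candidates[m, d], candidates[n, d])
        hi = max(candidates[m, d], candidates[n, d])
        trial = corrected.copy()
        trial[d] = rng.uniform(lo, hi)           # sample inside the correction range
        if fitness(trial) < fitness(corrected):  # keep the change only if it helps
            corrected = trial
    return corrected
```

By construction the corrected optimum is never worse than the input optimum, which mirrors the accept-only-if-better rule in the text.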
3.3. Algorithm Flow
The whole process of SC-PSO algorithm is shown in Figure 2.
4. Numerical Simulation Experiments
4.1. Experiments Settings
In order to demonstrate the performance of the proposed algorithm SC-PSO, the basic PSO and four improved versions of PSO are selected for comparison. For convenience, the basic PSO is denoted A1; A2 indicates DCW-PSO , which dynamically adjusts the inertia weight; LDC-PSO , denoted A3, linearly increases the acceleration coefficient c1 while linearly decreasing c2. Algorithm A4 (DPSO) is an improved algorithm using an asynchronous learning variation strategy for the learning factors . Reference  describes SS-PSO, denoted A5, which is based on the principle of Latin hypercube sampling. Finally, the proposed algorithm SC-PSO is denoted A6. Table 1 shows the benchmark functions and their corresponding features.
The problem dimension of the benchmark functions is set to 30 and 50, respectively; the sampling interval for both acceleration coefficients c1 and c2 is ; the maximum number of function evaluations is MFEs = 30000; and the population size for every algorithm is 20. All algorithms were run independently 30 times with different random initial positions.
4.2. Performance Analysis
Tables 2 and 3 show the optimization results of all algorithms on the benchmark functions for 30 and 50 dimensions, respectively. In both tables, Mean denotes the mean value of the optimization results and Std the standard deviation. From Tables 2 and 3, it can be concluded that the proposed SC-PSO performs better than the other algorithms. Compared with basic PSO, the improved PSO variants perform considerably better. For functions f1, f8, and f9, the convergence accuracy of the proposed algorithm is clearly better than that of all other algorithms; for f2, f3, and f10, it is improved slightly. For the remaining test functions, the convergence accuracy of PSO, DCW-PSO, LDC-PSO, and DPSO is almost the same, and all are worse than SS-PSO and the proposed algorithm. Table 3 shows the optimization results on the benchmark functions in 50 dimensions; the rankings in the table indicate that changing the function or the dimension has little influence on the relative quality of the algorithms.
In order to intuitively investigate the convergence performance of SC-PSO, the convergence curves of the 6 algorithms (PSO, DCW-PSO, LDC-PSO, DPSO, SS-PSO, and SC-PSO) on the 10 selected functions are shown in Figures 3–12. In Figures 5, 7–9, and 12, SC-PSO converges faster than any of the comparison algorithms and its advantage is prominent. In Figures 4 and 6, SC-PSO also converges faster than the comparison algorithms, though by a smaller margin. In Figures 3, 10, and 11, SC-PSO's convergence speed is almost the same as that of SS-PSO in the early stage, but its convergence accuracy is better. On the whole, SC-PSO has the best convergence performance on the 10 functions, which further confirms that the improvement strategies are effective.
In summary, compared with basic PSO and its modified versions, the convergence rate of the proposed SC-PSO is significantly improved while high convergence accuracy is maintained.
4.3. Running Time Analysis
As Figure 13 shows, SC-PSO has the lowest average runtime among the 6 algorithms, which means that SC-PSO is the fastest. SC-PSO's average runtime is 0.7725 s, while PSO's is 2.1244 s, so SC-PSO takes about 36% of PSO's time. Compared with the other 4 algorithms, SC-PSO's average runtime is 34%, 37%, 20%, and 43% of that of DCW-PSO (2.2871 s), LDC-PSO (2.0658 s), DPSO (3.8148 s), and SS-PSO (1.7965 s), respectively.
5. Conclusions
This paper proposes an improved convergence particle swarm optimization algorithm with random sampling of control parameters. The random sampling strategy of the control parameters is designed to improve the flexibility of the parameter settings, and the corresponding updating randomness also helps the algorithm escape from local optima. To avoid the premature convergence caused by the strong randomness of the sampling strategy, an intermediate particle updating strategy is devised. In addition, the stochastic correction approach on each dimension of the optimal position is used to take advantage of useful information from other particles. The experimental results show that the proposed algorithm not only achieves high accuracy but also significantly improves the convergence rate. In future work, we will consider applying it to node localization in wireless sensor networks.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare no conflicts of interest.
Acknowledgments
This work was supported in part by the National Natural Science Foundation of China (U1604151; 61803146), Outstanding Talent Project of Science and Technology Innovation in Henan Province (174200510008), Program for Scientific and Technological Innovation Team in Universities of Henan Province (16IRTSTHN029), Science and Technology Project of Henan Province (182102210094), Natural Science Project of Education Department of Henan Province (18A510001), and the Fundamental Research Funds of Henan University of Technology (2015QNJH13, 2016XTCX06).
References
- J. Kennedy and R. C. Eberhart, “Particle swarm optimization,” in Proceedings of the 1995 IEEE International Conference on Neural Networks, pp. 1942–1948, 1995.
- Q. Lin, S. Liu, Q. Zhu et al., “Particle swarm optimization with a balanceable fitness estimation for many-objective optimization problems,” IEEE Transactions on Evolutionary Computation, vol. 22, no. 1, pp. 32–46, 2018.
- X. Deng, X. Sun, R. Liu, and S. Liu, “Consensus control of second-order multiagent systems with particle swarm optimization algorithm,” Journal of Control Science and Engineering, vol. 2018, Article ID 3709421, 9 pages, 2018.
- J. He and Z.-H. Liu, “Estimation of stator resistance and rotor flux linkage in SPMSM Using CLPSO with opposition-based-learning strategy,” Journal of Control Science and Engineering, vol. 2016, Article ID 5781467, 7 pages, 2016.
- S. Zhang, J. Xu, L. H. Lee, E. P. Chew, W. P. Wong, and C.-H. Chen, “Optimal computing budget allocation for particle swarm optimization in stochastic optimization,” IEEE Transactions on Evolutionary Computation, vol. 21, no. 2, pp. 206–219, 2017.
- Z. H. Liu, X. H. Li, H. Q. Zhang et al., “An enhanced approach for parameter estimation: using immune dynamic learning swarm optimization based on multicore architecture,” IEEE Systems, Man, and Cybernetics Magazine, vol. 2, no. 1, pp. 26–33, 2016.
- M. R. Bonyadi and Z. Michalewicz, “Analysis of stability, local convergence, and transformation sensitivity of a variant of the particle swarm optimization algorithm,” IEEE Transactions on Evolutionary Computation, vol. 20, no. 3, pp. 370–385, 2016.
- C. Leboucher, H. Shin, R. Chelouah et al., “An enhanced particle swarm optimisation method integrated with evolutionary game theory,” IEEE Transactions on Games, vol. 10, no. 2, pp. 221–230, 2018.
- H. Zhang, Y. Liang, W. Zhang, N. Xu, Z. Guo, and G. Wu, “Improved PSO-based method for leak detection and localization in liquid pipelines,” IEEE Transactions on Industrial Informatics, vol. 14, no. 7, pp. 3143–3154, 2018.
- M. R. Bonyadi and Z. Michalewicz, “Impacts of coefficients on movement patterns in the particle swarm optimization algorithm,” IEEE Transactions on Evolutionary Computation, vol. 21, no. 3, pp. 378–390, 2017.
- M. R. Bonyadi and Z. Michalewicz, “Stability analysis of the particle swarm optimization without stagnation assumption,” IEEE Transactions on Evolutionary Computation, vol. 20, no. 5, pp. 814–819, 2016.
- Y. H. Shi and R. C. Eberhart, “A modified particle swarm optimizer,” in Proceedings of the 1998 IEEE International Conference on Evolutionary Computation, pp. 69–73, 1998.
- A. Alfi, “PSO with adaptive mutation and inertia weight and its application in parameter estimation of dynamic systems,” Acta Automatica Sinica, vol. 37, no. 5, pp. 541–549, 2011.
- L. Zhang, Y. Tang, C. Hua et al., “A new particle swarm optimization algorithm with adaptive inertia weight based on Bayesian techniques,” Applied Soft Computing, pp. 138–149, 2015.
- W. Han, P. Yang, H. Ren et al., “Comparison study of several kinds of inertia weights for PSO,” in Proceedings of the IEEE International Conference on Progress in Informatics and Computing, pp. 280–284, 2010.
- A. Ratnaweera, S. K. Halgamuge, and H. C. Watson, “Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 240–255, 2004.
- T. Yamaguchi and K. Yasuda, “Adaptive particle swarm optimization: self-coordinating mechanism with updating information,” in Proceedings of the 2006 IEEE International Conference on Systems, Man, and Cybernetics, vol. 3, pp. 2303–2308, 2006.
- J. J. Liang, A. K. Qin, P. N. Suganthan, and S. Baskar, “Comprehensive learning particle swarm optimizer for global optimization of multimodal functions,” IEEE Transactions on Evolutionary Computation, vol. 10, no. 3, pp. 281–295, 2006.
- F. Pan, X.-T. Li, Q. Zhou et al., “Analysis of standard particle swarm optimization algorithm based on Markov chain,” Acta Automatica Sinica, vol. 39, no. 4, pp. 381–389, 2013.
- F. van den Bergh and A. P. Engelbrecht, “A study of particle swarm optimization particle trajectories,” Information Sciences, vol. 176, no. 8, pp. 937–971, 2006.
- I. C. Trelea, “The particle swarm optimization algorithm: convergence analysis and parameter selection,” Information Processing Letters, vol. 85, no. 6, pp. 317–325, 2003.
- M. Clerc and J. Kennedy, “The particle swarm-explosion, stability, and convergence in a multidimensional complex space,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 1, pp. 58–73, 2002.
- Z. Ren, J. Wang, and Y. Gao, “The global convergence analysis of particle swarm optimization algorithm based on Markov chain,” Control Theory & Applications, vol. 28, no. 4, pp. 462–466, 2011.
- X. Jin, L. Ma, T. Wu et al., “Convergence analysis of the particle swarm optimization based on stochastic processes,” Acta Automatica Sinica, vol. 33, no. 12, pp. 1263–1268, 2007.
- X. Zhang, Y. Du, G. Qin, and Z. Qin, “Adaptive particle swarm algorithm with dynamically changing inertia weight,” Journal of Xi'an Jiaotong University, vol. 39, no. 10, pp. 1039–1042, 2005.
- D.-F. Wang and L. Meng, “Performance analysis and parameter selection of PSO algorithms,” Acta Automatica Sinica, vol. 42, no. 10, pp. 1552–1561, 2016.
- J. Q. Tong, Q. Zhao, and M. Li, “Particle swarm optimization algorithm based on adaptive dynamic change,” Microelectronics & Computer, vol. 36, no. 2, pp. 6–10, 2019.
- G. J. Jiang, H. Ye, and Y. H. Ma, “Particle swarm optimization algorithm via sampling strategy,” Control and Decision, vol. 10, pp. 1779–1784, 2015.
Copyright © 2019 Lijun Sun et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.