Research Article  Open Access
Lijun Sun, Xiaodong Song, Tianfei Chen, "An Improved Convergence Particle Swarm Optimization Algorithm with Random Sampling of Control Parameters", Journal of Control Science and Engineering, vol. 2019, Article ID 7478498, 11 pages, 2019. https://doi.org/10.1155/2019/7478498
An Improved Convergence Particle Swarm Optimization Algorithm with Random Sampling of Control Parameters
Abstract
Although particle swarm optimization (PSO) has been widely used to address various complicated engineering problems, it still suffers from several shortcomings, e.g., premature convergence and low accuracy. Its final optimization result depends on the selection of control parameters; therefore, an improved convergence particle swarm optimization algorithm with random sampling of control parameters is proposed. For the proposed algorithm, a random sampling strategy of control parameters is designed, which improves the flexibility of the algorithm parameters and simultaneously enhances the updating randomness of both particle velocity and position. According to the convergence analysis of PSO, the sampling range for the inertial weight is determined after both acceleration coefficients have been sampled in their respective value intervals, so as to ensure convergence at every evolution step of the algorithm. Besides that, in order to make full use of the dimensional information of better particles, a stochastic correction approach on each dimension of the population optimum is adopted. The final experimental results demonstrate that, compared with basic particle swarm optimization and other variants, the proposed algorithm further improves the convergence rate while maintaining higher convergence accuracy.
1. Introduction
PSO, proposed by Kennedy and Eberhart [1], is an evolutionary algorithm based on swarm intelligence which simulates the predation behavior of bird flocks or fish schools, and it has attracted much interest from scholars and researchers because of its simple structure, strong maneuverability, and ease of implementation. Up to now, PSO has been successfully applied in many areas [2–6], and meanwhile some improved versions of PSO have also been studied [7–11].
The inertial weight, a relatively important control parameter of PSO, was first introduced by Shi [12] into the basic evolution equations of the algorithm, and since then research on the influence of the inertial weight on optimization performance has been carried out. In [13], Alfi proposes an adaptive particle swarm optimization algorithm in which a dynamic inertia weight is used. Based on Bayesian theory, an adaptive adjustment strategy for the inertia weight is designed by Zhang [14], in which the historical positions of particles are fully used; although the convergence precision of this improved PSO is higher, its convergence speed is slow. After Han [15] compared several common updating schemes for the inertial weight, it could be concluded that the convergence performance of the simulated annealing updating method and the linear decreasing method is relatively better, but that changes in the acceleration coefficients have a large influence on convergence performance. Besides that, a linear updating scheme for the acceleration coefficients c_{1} and c_{2} was put forward by Ratnaweera [16] through the corresponding parameter analysis, and Yamaguchi [17] proposed an adaptive adjustment strategy for the acceleration coefficients according to the updated positions of particles. Furthermore, Liang [18] proposed a comprehensive learning particle swarm optimization algorithm using a novel learning method to improve convergence performance. However, all the above studies improve convergence performance by modifying the control parameters of the algorithm and demonstrate the improvement only through numerical simulation experiments; lacking the corresponding theoretical convergence analysis, parts of the actual evolutionary process may be divergent and unstable.
The convergence of PSO algorithms should be analyzed within the framework of random search algorithms [19], and it has been proved by Van den Bergh [20] that PSO is not a global optimization algorithm and cannot even be guaranteed to converge to a local optimum. On this basis, Trelea [21] utilized linear dynamic system theory with constant coefficients to analyze the stability of basic PSO, and Clerc [22] established a constraint model of PSO described by only five parameters and analyzed the convergence and trajectories of particles in the phase plane. Starting from the Markov chain formed by particle states, Ren [23] pointed out that this Markov chain does not satisfy the conditions for a stationary process, and then proved, from the viewpoint of transition probability, that PSO is not globally convergent. On the basis of stochastic system theory, Jin [24] analyzed the mean square convergence of PSO and provided a sufficient condition for convergence. Although some of these convergence analyses supply sufficient conditions, they still do not state how to adjust the control parameters of the algorithm during the evolution process in order to obtain better convergence performance.
In view of the problems mentioned above, this paper proposes an improved convergence particle swarm optimization algorithm with random sampling of control parameters (SCPSO), and the main contribution of the present work is delineated as follows.
The random sampling strategy is designed to improve the flexibility of control parameters, which also can strengthen the position updating randomness of particles to enhance the exploration ability of PSO and help to jump out of local optimum.
In order to ensure the convergence of the algorithm, the inertia weight is selected around the center part of the convergence interval, and the phenomenon of “oscillation” and “two steps forward, one step back” can be prevented.
Due to the weakness of exploitation caused by the random sampling strategy, the intermediate particle updating strategy is devised to update the optimal position of the swarm population in every evolution step. In addition, the optimal value is updated dimension by dimension: dimensional information from different particles is used to randomly select values, so as to find better positions in each dimension.
This paper is organized as follows. Section 2 introduces the basic PSO and gives its theoretical analysis of convergence. Section 3 describes the proposed algorithm in detail. Section 4 presents the test functions, the parameters setting of each algorithm, the results, and discussions. Conclusions are given in Section 5.
2. PSO Algorithm
2.1. Basic PSO
While PSO is running, each particle is regarded as a feasible solution to the optimization problem in the search space, and the flight behavior of the particles can be treated as the search process of all individuals; the velocity of each particle is then dynamically updated according to its own historical optimal position and the optimal position of the swarm population. It is assumed that the swarm population is composed of N particles in D-dimensional space, the historical optimal position of the i_{th} particle is represented by p_{i} = (p_{i,1}, ..., p_{i,D}), and the optimal position of the swarm population is denoted as p_{g} = (p_{g,1}, ..., p_{g,D}). In every evolution step, the velocity and position of each particle are updated by dynamically tracking its corresponding historical optimal position and the optimal position of the swarm population. The detailed equations are expressed as follows:

v_{i,d}(t+1) = ω v_{i,d}(t) + c_{1} r_{1} (p_{i,d}(t) − x_{i,d}(t)) + c_{2} r_{2} (p_{g,d}(t) − x_{i,d}(t)) (1)

x_{i,d}(t+1) = x_{i,d}(t) + v_{i,d}(t+1) (2)

where t denotes the iteration number and d = 1, 2, ..., D indicates the dimension; thus x_{i,d}(t) is the d_{th} dimension variable of the i_{th} particle in the t_{th} iteration, and the variables v_{i,d}(t), p_{g,d}(t), and p_{i,d}(t) have similar meanings in turn; ω is the inertial weight, c_{1} and c_{2} denote acceleration coefficients, and r_{1} and r_{2} are random numbers uniformly distributed in the interval [0, 1].
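To make the update concrete, the two evolution equations can be sketched in a few lines of NumPy; the function and variable names here are ours, not part of the original algorithm description:

```python
import numpy as np

def pso_step(x, v, p_best, g_best, w, c1, c2, rng):
    """One basic PSO iteration for the whole swarm, following Eqs. (1)-(2).

    x, v      : (N, D) arrays of particle positions and velocities
    p_best    : (N, D) historical optimal positions of the particles
    g_best    : (D,) optimal position of the swarm population
    w, c1, c2 : inertial weight and acceleration coefficients
    rng       : a numpy Generator supplying the uniform numbers r1, r2
    """
    r1 = rng.random(x.shape)   # fresh uniform [0, 1) samples every step
    r2 = rng.random(x.shape)
    v_new = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    x_new = x + v_new          # Eq. (2): the position follows the new velocity
    return x_new, v_new
```

After each step, the fitness of every new position is evaluated and p_best and g_best are refreshed accordingly.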
According to the detailed optimization problem to be solved, an objective function should be set, and the objective function value of each particle is its fitness value. The fitness value is used not only to evaluate the position of a particle but also to update the historical optimal positions of the particles and the optimal position of the swarm population.
2.2. Convergence Analysis
The convergence of particle trajectories is determined by the control parameters of the algorithm; to facilitate analysis without loss of generality, the case of a single particle in one dimension is taken as an example. The basic evolution equations (1) and (2) can then be transformed into the dynamic equation form

x(t+1) = (1 + ω − φ) x(t) − ω x(t−1) + φ_{1} p_{i} + φ_{2} p_{g} (3)

where we define φ_{1} = c_{1} r_{1}, φ_{2} = c_{2} r_{2}, and φ = φ_{1} + φ_{2}. If the historical optimal position of the particle p_{i} finally converges to the optimal position of the swarm population p_{g} = p, the dynamic equation (3) can be arranged as follows:

x(t+1) = (1 + ω − φ) x(t) − ω x(t−1) + φ p (4)
For (4), [24] has given a sufficient condition of mean square convergence through theoretical analysis. This condition, denoted as formula (5), is stated in terms of the parameters

μ = (c_{1} + c_{2})/2, σ^{2} = (c_{1}^{2} + c_{2}^{2})/12 (6)

which are the mean and variance of φ and follow from r_{1} and r_{2} being uniformly distributed in [0, 1].
According to formulas (5) and (6), it is easy to obtain the relationship between the inertial weight ω and the acceleration coefficients c_{1} and c_{2} under the condition of convergence.
3. Our Proposal: SCPSO Algorithm
3.1. Random Sampling Strategy for Control Parameters
For basic PSO, the control parameters have a great impact on the performance of the algorithm. If they are assigned inappropriately, the trajectories of the particles cannot converge and may even become unstable, so that the optimal solution of the optimization problem cannot be found. At present, the control parameters are usually chosen according to engineers' experience or experiments, so the choice is inflexible and the exploration ability of PSO is greatly restricted.
The random sampling strategy is designed to improve the flexibility of control parameters and to enhance the exploration ability of PSO, helping it jump out of local optima. On the basis of the conclusion from [24], the convergence of PSO must be considered when the parameters are randomly selected. First of all, the acceleration coefficients c_{1} and c_{2} are, respectively, uniformly sampled in their corresponding value intervals, and the parameters μ and σ can be computed using formula (6). According to the condition of mean square convergence from formula (5), the sampling interval [ω_{min}, ω_{max}] for the inertial weight ω can be solved, where ω_{min} and ω_{max}, respectively, denote its lower and upper bounds.
Finally, the inertial weight is sampled in the above computed interval. However, in order to avoid the phenomena of “oscillation” and “two steps forward, one step back,” the inertia weight is selected around the center part of the convergence interval of ω. According to formulas (8) and (9), Figure 1 shows the relationship between μ and ω for mean square convergence of PSO. For example, once the acceleration coefficients have been sampled as c_{1} = c_{2} = 2, the parameters μ = 2 and σ^{2} = 2/3 can be computed, and the corresponding convergence interval of the inertial weight follows. In practice, the inertial weight is uniformly selected around the center part of this interval.
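The sampling procedure can be sketched as follows. Since the explicit inequality of condition (5) is given in [24], this sketch substitutes a standard order-2 (mean square) stability region from the PSO literature, c_{1} + c_{2} < 24(1 − ω^{2})/(7 − 5ω), as a stand-in; the value interval for c_{1} and c_{2} and the shrink factor are illustrative assumptions, not the paper's settings:

```python
import math
import random

def omega_interval(c1, c2):
    """Convergence interval for the inertial weight w, using the standard
    order-2 stability region c1 + c2 < 24(1 - w^2)/(7 - 5w) as a stand-in
    for condition (5).  Rearranging gives 24w^2 - 5cw + (7c - 24) < 0 with
    c = c1 + c2, a quadratic in w that is negative between its roots."""
    c = c1 + c2
    disc = 25 * c * c - 96 * (7 * c - 24)       # discriminant of the quadratic
    if disc < 0:
        return None                             # no convergent w exists
    root = math.sqrt(disc)
    lo = (5 * c - root) / 48
    hi = (5 * c + root) / 48
    return max(lo, -1.0), min(hi, 1.0)          # additionally require |w| < 1

def sample_parameters(rng, c_lo=1.0, c_hi=2.0, shrink=0.5):
    """Uniformly sample c1 and c2, then sample w around the center part of
    its convergence interval; shrink < 1 narrows sampling toward the center.
    For the default interval (c1 + c2 <= 4) the region is always nonempty."""
    c1 = rng.uniform(c_lo, c_hi)
    c2 = rng.uniform(c_lo, c_hi)
    lo, hi = omega_interval(c1, c2)
    mid, half = (lo + hi) / 2.0, (hi - lo) / 2.0 * shrink
    w = rng.uniform(mid - half, mid + half)
    return w, c1, c2
```

With c_{1} = c_{2} = 2, this stand-in region gives the interval (1/3, 1/2), so ω would be drawn near its center.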
On the basis of the random sampling strategy, all particles in the swarm update their positions and velocities using formulas (1) and (2) in every evolution step. Besides that, the intermediate particle updating strategy is devised to avoid premature convergence caused by the strong randomness of the sampling strategy of control parameters, and it is only used to update the optimal position of the swarm population in every evolution step.
The method for generating intermediate particles is as follows:
(1) Intermediate particle 1: the position of this particle in each dimension is the average value of all updated particles in the same dimension.
(2) Intermediate particle 2: the value of this particle in each dimension is equal to the median of the particles in the corresponding dimension.
(3) Intermediate particle 3: each dimension value of this particle is whichever of the maximum and minimum in the same dimension has the larger absolute value.
(4) Intermediate particle 4: each dimension value of this particle is whichever of the maximum and minimum in the corresponding dimension has the smaller absolute value.
We choose the one with the best fitness value among the four intermediate particles. If it is better than the optimal position of the swarm population, it replaces that optimal position; otherwise, the optimal position remains unchanged.
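The four rules above can be implemented directly on the matrix of updated positions; the following sketch (function names are ours; fitness is assumed to be minimized) generates the intermediate particles and conditionally replaces the swarm optimum:

```python
import numpy as np

def update_gbest_with_intermediates(positions, g_best, fitness):
    """Intermediate particle updating strategy for the swarm optimum.

    positions : (N, D) matrix of the updated particle positions
    g_best    : (D,) current optimal position of the swarm population
    fitness   : callable mapping a (D,) position to a scalar (minimized)
    """
    mx = positions.max(axis=0)                        # per-dimension maximum
    mn = positions.min(axis=0)                        # per-dimension minimum
    ip1 = positions.mean(axis=0)                      # rule 1: average
    ip2 = np.median(positions, axis=0)                # rule 2: median
    ip3 = np.where(np.abs(mx) >= np.abs(mn), mx, mn)  # rule 3: larger |value|
    ip4 = np.where(np.abs(mx) < np.abs(mn), mx, mn)   # rule 4: smaller |value|
    best = min((ip1, ip2, ip3, ip4), key=fitness)     # best of the four
    return best if fitness(best) < fitness(g_best) else g_best
```

Because the replacement is conditional, the fitness of the swarm optimum can never get worse in this step.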
3.2. Stochastic Correction Approach on Each Dimension for the Optimal Position
The intermediate particles mentioned in the above section correct the optimal position only as complete positions; however, since the positions of particles are evaluated as a whole, the historical optimal positions of particles, even though they are worse than the optimal position of the swarm, may still carry useful information in some dimensions. Therefore, the stochastic correction approach on each dimension for the optimal position is used in this section.
According to the fitness value order of the historical optimal positions of the particles, we select the five best historical optimal positions, denoted p_{ind}, ind = 1, ..., 5. The procedure of the stochastic correction approach on each dimension for the optimal position is then as follows. Two members, indexed m and n, are randomly selected from the five better historical optimal positions together with the optimal position of the swarm, to determine the correction range. For each dimension d, value1(d) and value2(d) denote, respectively, the smaller and the larger of the two selected values in that dimension, and a candidate value is sampled as value1(d) + r(value2(d) − value1(d)), where r is a random number uniformly distributed in the interval [0, 1]. If the corrected position has a better fitness value, it replaces the optimal position of the swarm population; otherwise the original optimal position is kept.
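Part of the original listing was lost in typesetting, so the following sketch is our reconstruction of the procedure; in particular, the way the pair m, n is drawn per dimension and the per-dimension acceptance rule are assumptions (fitness is assumed to be minimized):

```python
import numpy as np

def stochastic_correction(p_bests, g_best, fitness, rng, n_elite=5):
    """Stochastic correction on each dimension of the swarm optimum.

    p_bests : (N, D) historical optimal positions of all particles
    g_best  : (D,) optimal position of the swarm population
    fitness : callable mapping a (D,) position to a scalar (minimized)
    rng     : a numpy Generator
    """
    # The five best historical optima plus the swarm optimum form the pool.
    order = np.argsort([fitness(p) for p in p_bests])
    pool = np.vstack([p_bests[order[:n_elite]], g_best])
    corrected = g_best.copy()
    for d in range(g_best.shape[0]):
        # Two pool members, m and n, bound the correction range in dimension d.
        m, n = rng.choice(len(pool), size=2, replace=False)
        value1 = min(pool[m, d], pool[n, d])
        value2 = max(pool[m, d], pool[n, d])
        trial = corrected.copy()
        trial[d] = value1 + rng.random() * (value2 - value1)
        if fitness(trial) < fitness(corrected):   # accept only improvements
            corrected = trial
    return corrected
```

Since a trial value is accepted only when it improves fitness, the corrected optimum is never worse than the original one.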
3.3. Algorithm Flow
The whole process of SCPSO algorithm is shown in Figure 2.
4. Numerical Simulation Experiments
4.1. Experiments Settings
In order to demonstrate the performance of the proposed algorithm SCPSO, the basic PSO and four improved versions of PSO are selected for comparison. For convenience, the basic PSO is represented by A1; A2 indicates DCWPSO in [25], which can dynamically adjust the inertial weight; LDCPSO in [26], represented by A3, linearly increases the acceleration coefficient c_{1} while linearly decreasing the acceleration coefficient c_{2}. Algorithm A4 (DPSO) is an improved algorithm using an asynchronous learning variation strategy for learning factors [27]. Reference [28] describes SSPSO, marked by A5, which is based on the principle of Latin hypercube sampling. Finally, the proposed algorithm SCPSO is marked by A6. Table 1 shows the benchmark functions and their corresponding features.

The problem dimension of the benchmark functions is set, respectively, to 30 and 50, and both acceleration coefficients c_{1} and c_{2} are sampled in the same value interval; in addition, the maximum number of function evaluations is MFEs = 30000, and the population size for every algorithm is 20. All the algorithms were run independently 30 times with different random initial positions.
4.2. Performance Analysis
Tables 2 and 3, respectively, show the optimization results of all the algorithms on the benchmark functions for 30 and 50 dimensions. In both tables, Mean denotes the mean value of the optimization results, and Std represents their standard deviation. From Tables 2 and 3, it can be concluded that the performance of the proposed SCPSO is better than that of the other algorithms and that the improved PSO variants greatly outperform the basic PSO. For functions f_{1}, f_{8}, and f_{9}, the convergence accuracy of the proposed algorithm is clearly better than that of all other algorithms. For f_{2}, f_{3}, and f_{10}, its convergence accuracy is improved slightly. For the remaining test functions, the convergence accuracy of PSO, DCWPSO, LDCPSO, and DPSO is almost the same and is worse than that of SSPSO and the proposed algorithm. Table 3 shows the optimization results on the benchmark functions in 50 dimensions; it can be seen from the rankings in the table that changing the function dimension has little influence on the relative quality of the algorithms.


In order to intuitively investigate the convergence performance of SCPSO, the convergence curves of the 6 algorithms (PSO, DCWPSO, LDCPSO, DPSO, SSPSO, and SCPSO) on the 10 selected functions are plotted in Figures 3–12. From Figures 5, 7–9, and 12, SCPSO's convergence speed is faster than that of any of the comparison algorithms, and its advantages are prominent. In Figures 4 and 6, the convergence speed of SCPSO is also faster than that of any of the comparison algorithms, but its advantages are less pronounced. From Figures 3, 10, and 11, SCPSO's convergence speed is almost the same as that of SSPSO in the early stage, but its convergence accuracy is better. On the whole, SCPSO has the best convergence performance on the 10 functions, and the convergence curves further prove that the improvement strategies are effective.
In summary, compared with basic PSO and its modified versions, the convergence rate of SCPSO proposed in this paper is significantly improved while its convergence accuracy is still guaranteed higher.
4.3. Running Time Analysis
From Figure 13, SCPSO has the smallest average runtime among the 6 algorithms, which means that SCPSO is the fastest. SCPSO's average runtime is 0.7725 s, while PSO's is 2.1244 s, so SCPSO's average runtime is about 36% of PSO's. Compared with the other 4 algorithms, SCPSO's average runtime accounts for 34%, 37%, 20%, and 43% of DCWPSO's (2.2871 s), LDCPSO's (2.0658 s), DPSO's (3.8148 s), and SSPSO's (1.7965 s), respectively.
5. Conclusions
This paper proposes an improved convergence particle swarm optimization algorithm with random sampling of control parameters. The random sampling strategy of control parameters is designed to improve the flexibility of control parameter setting, and the corresponding updating randomness also helps the algorithm jump out of local optima. In order to avoid premature convergence caused by the strong randomness of the sampling strategy, the intermediate particle updating strategy is devised. Besides that, the stochastic correction approach on each dimension for the optimal position is used to take advantage of useful information from other particles. The final experimental results show that the proposed algorithm not only has high accuracy but also significantly improves the convergence rate. In the future, we will consider how to apply it to node localization in wireless sensor networks.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare no conflicts of interest.
Acknowledgments
This work was supported in part by the National Natural Science Foundation of China (U1604151; 61803146), Outstanding Talent Project of Science and Technology Innovation in Henan Province (174200510008), Program for Scientific and Technological Innovation Team in Universities of Henan Province (16IRTSTHN029), Science and Technology Project of Henan Province (182102210094), Natural Science Project of Education Department of Henan Province (18A510001), and the Fundamental Research Funds of Henan University of Technology (2015QNJH13, 2016XTCX06).
References
J. Kennedy and R. C. Eberhart, “Particle swarm optimization,” in Proceedings of the 1995 IEEE International Conference on Neural Networks, pp. 1942–1948, 1995.
Q. Lin, S. Liu, Q. Zhu et al., “Particle swarm optimization with a balanceable fitness estimation for many-objective optimization problems,” IEEE Transactions on Evolutionary Computation, vol. 22, no. 1, pp. 32–46, 2018.
X. Deng, X. Sun, R. Liu, and S. Liu, “Consensus control of second-order multiagent systems with particle swarm optimization algorithm,” Journal of Control Science and Engineering, vol. 2018, Article ID 3709421, 9 pages, 2018.
J. He and Z.-H. Liu, “Estimation of stator resistance and rotor flux linkage in SPMSM using CLPSO with opposition-based-learning strategy,” Journal of Control Science and Engineering, vol. 2016, Article ID 5781467, 7 pages, 2016.
S. Zhang, J. Xu, L. H. Lee, E. P. Chew, W. P. Wong, and C.-H. Chen, “Optimal computing budget allocation for particle swarm optimization in stochastic optimization,” IEEE Transactions on Evolutionary Computation, vol. 21, no. 2, pp. 206–219, 2017.
Z. H. Liu, X. H. Li, H. Q. Zhang et al., “An enhanced approach for parameter estimation: using immune dynamic learning swarm optimization based on multicore architecture,” IEEE Systems, Man, and Cybernetics Magazine, vol. 2, no. 1, pp. 26–33, 2016.
M. R. Bonyadi and Z. Michalewicz, “Analysis of stability, local convergence, and transformation sensitivity of a variant of the particle swarm optimization algorithm,” IEEE Transactions on Evolutionary Computation, vol. 20, no. 3, pp. 370–385, 2016.
C. Leboucher, H. Shin, R. Chelouah et al., “An enhanced particle swarm optimisation method integrated with evolutionary game theory,” IEEE Transactions on Games, vol. 10, no. 2, pp. 221–230, 2018.
H. Zhang, Y. Liang, W. Zhang, N. Xu, Z. Guo, and G. Wu, “Improved PSO-based method for leak detection and localization in liquid pipelines,” IEEE Transactions on Industrial Informatics, vol. 14, no. 7, pp. 3143–3154, 2018.
M. R. Bonyadi and Z. Michalewicz, “Impacts of coefficients on movement patterns in the particle swarm optimization algorithm,” IEEE Transactions on Evolutionary Computation, vol. 21, no. 3, pp. 378–390, 2017.
M. R. Bonyadi and Z. Michalewicz, “Stability analysis of the particle swarm optimization without stagnation assumption,” IEEE Transactions on Evolutionary Computation, vol. 20, no. 5, pp. 814–819, 2016.
Y. H. Shi and R. C. Eberhart, “A modified particle swarm optimizer,” in Proceedings of the 1998 IEEE International Conference on Evolutionary Computation, pp. 69–73, 1998.
A. Alfi, “PSO with adaptive mutation and inertia weight and its application in parameter estimation of dynamic systems,” Acta Automatica Sinica, vol. 37, no. 5, pp. 541–549, 2011.
L. Zhang, Y. Tang, C. Hua et al., “A new particle swarm optimization algorithm with adaptive inertia weight based on Bayesian techniques,” Applied Soft Computing, pp. 138–149, 2015.
W. Han, P. Yang, H. Ren et al., “Comparison study of several kinds of inertia weights for PSO,” in Proceedings of the IEEE International Conference on Progress in Informatics and Computing, pp. 280–284, 2010.
A. Ratnaweera, S. K. Halgamuge, and H. C. Watson, “Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 240–255, 2004.
T. Yamaguchi and K. Yasuda, “Adaptive particle swarm optimization: self-coordinating mechanism with updating information,” in Proceedings of the 2006 IEEE International Conference on Systems, Man, and Cybernetics, vol. 3, pp. 2303–2308, 2006.
J. J. Liang, A. K. Qin, P. N. Suganthan, and S. Baskar, “Comprehensive learning particle swarm optimizer for global optimization of multimodal functions,” IEEE Transactions on Evolutionary Computation, vol. 10, no. 3, pp. 281–295, 2006.
F. Pan, X.-T. Li, Q. Zhou et al., “Analysis of standard particle swarm optimization algorithm based on Markov chain,” Acta Automatica Sinica, vol. 39, no. 4, pp. 381–389, 2013.
F. van den Bergh and A. P. Engelbrecht, “A study of particle swarm optimization particle trajectories,” Information Sciences, vol. 176, no. 8, pp. 937–971, 2006.
I. C. Trelea, “The particle swarm optimization algorithm: convergence analysis and parameter selection,” Information Processing Letters, vol. 85, no. 6, pp. 317–325, 2003.
M. Clerc and J. Kennedy, “The particle swarm: explosion, stability, and convergence in a multidimensional complex space,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 1, pp. 58–73, 2002.
Z. Ren, J. Wang, and Y. Gao, “The global convergence analysis of particle swarm optimization algorithm based on Markov chain,” Control Theory & Applications, vol. 28, no. 4, pp. 462–466, 2011.
X. Jin, L. Ma, T. Wu et al., “Convergence analysis of the particle swarm optimization based on stochastic processes,” Acta Automatica Sinica, vol. 33, no. 12, pp. 1263–1268, 2007.
X. Zhang, Y. Du, G. Qin, and Z. Qin, “Adaptive particle swarm algorithm with dynamically changing inertia weight,” Journal of Xi'an Jiaotong University, vol. 39, no. 10, pp. 1039–1042, 2005.
D.-F. Wang and L. Meng, “Performance analysis and parameter selection of PSO algorithms,” Acta Automatica Sinica, vol. 42, no. 10, pp. 1552–1561, 2016.
J. Q. Tong, Q. Zhao, and M. Li, “Particle swarm optimization algorithm based on adaptive dynamic change,” Microelectronics & Computers, vol. 36, no. 2, pp. 6–10, 2019.
G. J. Jiang, H. Ye, and Y. H. Ma, “Particle swarm optimization algorithm via sampling strategy,” Control and Decision, vol. 10, pp. 1779–1784, 2015.
Copyright
Copyright © 2019 Lijun Sun et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.