Abstract

Although particle swarm optimization (PSO) has been widely used to address various complicated engineering problems, it still suffers from several shortcomings, e.g., premature convergence and low accuracy. Because its final optimization result depends on the selection of the control parameters, an improved convergence particle swarm optimization algorithm with random sampling of control parameters is proposed. In the proposed algorithm, a random sampling strategy for the control parameters is designed, which increases the flexibility of the algorithm parameters and simultaneously strengthens the updating randomness of both particle velocity and position. Following the convergence analysis of PSO, the sampling range for the inertial weight is determined after both acceleration coefficients have been sampled in their respective value intervals, so that convergence is ensured at every evolution step of the algorithm. In addition, to make full use of the dimension information of the better particles, a stochastic correction approach on each dimension of the population optimum is adopted. Experimental results demonstrate that, compared with basic particle swarm optimization and other variants, the proposed algorithm further improves the convergence rate while maintaining higher convergence accuracy.

1. Introduction

PSO, proposed by Kennedy and Eberhart [1], is an evolutionary algorithm based on swarm intelligence that simulates the predation behavior of bird flocks or fish schools. It has attracted considerable interest from scholars and researchers because of its simple structure, strong operability, easy implementation, and other attractive characteristics. Up to now, PSO has been successfully applied in many areas [2–6], and meanwhile a number of improved versions of PSO have been studied accordingly [7–11].

The inertial weight, a particularly important control parameter of PSO, was first introduced by Shi [12] into the basic evolution equations of the algorithm, and since then its influence on optimization performance has been studied extensively. In [13], Alfi proposes an adaptive particle swarm optimization algorithm in which a dynamic inertia weight is used. Based on Bayesian theory, an adaptive adjustment strategy for the inertia weight is designed by Zhang [14], in which the historical positions of particles are fully exploited; although the convergence precision of this improved PSO is higher, its convergence speed is slow. Han [15] compared several common updating schemes for the inertial weight and concluded that the simulated annealing method and the linear decreasing method converge relatively better, but that changes in the acceleration coefficients strongly affect the convergence performance. In addition, a linear updating rule for the acceleration coefficients c1 and c2 was put forward by Ratnaweera [16] through the corresponding parameter analysis, and Yamaguchi [17] proposed an adaptive adjustment strategy for the acceleration coefficients according to the updated positions of particles. Furthermore, Liang [18] proposed a comprehensive learning particle swarm optimization algorithm that uses a novel learning method to improve the convergence performance. However, all the above studies improve convergence by modifying the control parameters of the algorithm and validate the improvements only through numerical simulation; lacking the corresponding theoretical convergence analysis, parts of the actual evolutionary process may be divergent and unstable.

The convergence of PSO algorithms should be analyzed within the framework of random search algorithms [19], and Van den Bergh [20] proved that PSO is not a global optimization algorithm and cannot even be guaranteed to converge to a local optimum. On this basis, Trelea [21] used linear dynamic system theory with constant coefficients to analyze the stability of basic PSO, and Clerc [22] established a constraint model of PSO described by only five parameters and analyzed the convergence and trajectories of particles in the phase plane. Starting from the Markov chain formed by the particle states, Ren [23] pointed out that this Markov chain does not satisfy the conditions for a stationary process and then proved, from the viewpoint of transition probability, that PSO is not globally convergent. On the basis of stochastic system theory, Jin [24] analyzed the mean square convergence of PSO and provided a sufficient condition for convergence. Although some of these studies supply sufficient conditions for convergence, they still do not show how to adjust the control parameters during the evolution process so as to obtain better convergence performance.

In view of the problems mentioned above, this paper proposes an improved convergence particle swarm optimization algorithm with random sampling of control parameters (SC-PSO), and the main contributions of the present work are delineated as follows.

A random sampling strategy is designed to improve the flexibility of the control parameters; it also strengthens the randomness of the particle position updates, which enhances the exploration ability of PSO and helps the swarm jump out of local optima.

In order to ensure the convergence of the algorithm, the inertia weight is selected around the center part of the convergence interval, which prevents the phenomena of “oscillation” and “two steps forward, one step back.”

To compensate for the weakened exploitation caused by the random sampling strategy, an intermediate particle updating strategy is devised to update the optimal position of the swarm population at every evolution step. In addition, the optimal position is corrected dimension by dimension: dimension values are randomly selected from different particles so as to find a better position in each dimension.

This paper is organized as follows. Section 2 introduces the basic PSO and gives its theoretical analysis of convergence. Section 3 describes the proposed algorithm in detail. Section 4 presents the test functions, the parameters setting of each algorithm, the results, and discussions. Conclusions are given in Section 5.

2. PSO Algorithm

2.1. Basic PSO

While PSO is running, each particle is regarded as a feasible solution of the optimization problem in the search space, and the flight behavior of the particles can be treated as the search process of all individuals; the velocity of each particle is then dynamically updated according to its own historical optimal position and the optimal position of the swarm population. It is assumed that the swarm population is composed of N particles in D-dimensional space; the historical optimal position of the ith particle is represented by pi = (pi,1, pi,2, ..., pi,D), i = 1, 2, ..., N, and the optimal position of the swarm population is denoted as pg = (pg,1, pg,2, ..., pg,D). In every evolution step, the velocity and position of each particle are updated by dynamically tracking its corresponding historical optimal position and the optimal position of the swarm population. The detailed equations are expressed as follows:

vi,d(t+1) = ω vi,d(t) + c1 r1 (pi,d(t) − xi,d(t)) + c2 r2 (pg,d(t) − xi,d(t)), (1)

xi,d(t+1) = xi,d(t) + vi,d(t+1), (2)

where t is the iteration number and d = 1, 2, ..., D indicates the dimension; thus xi,d(t) is the dth dimension variable of the ith particle in the tth iteration, and the variables vi,d(t), pg,d(t), and pi,d(t) have similar meanings in turn; ω is the inertial weight, c1 and c2 denote the acceleration coefficients, and r1 and r2 are random numbers uniformly distributed in the interval [0, 1].

According to the specific optimization problem to be solved, an objective function is defined, and the objective function value of each particle is its fitness value. The fitness value is used not only to evaluate the position of a particle but also to update the historical optimal positions of the particles and the optimal position of the swarm population.
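For concreteness, the following Python sketch implements one evolution step of equations (1) and (2) together with the corresponding update of the optima for a minimization problem; the function and variable names are ours, chosen only for illustration.

```python
import numpy as np

def pso_step(x, v, p_best, g_best, w=0.4, c1=2.0, c2=2.0):
    """One evolution step of basic PSO, following equations (1) and (2).

    x, v    : (N, D) arrays of particle positions and velocities
    p_best  : (N, D) historical optimal positions of the particles
    g_best  : (D,) optimal position of the swarm population
    """
    N, D = x.shape
    r1 = np.random.rand(N, D)  # uniform random numbers in [0, 1]
    r2 = np.random.rand(N, D)
    v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)  # eq. (1)
    x = x + v                                                    # eq. (2)
    return x, v

def update_bests(x, f, p_best, p_best_f, g_best, g_best_f):
    """Update the historical optima from the fitness values f (minimization)."""
    improved = f < p_best_f
    p_best = np.where(improved[:, None], x, p_best)
    p_best_f = np.where(improved, f, p_best_f)
    i = int(np.argmin(p_best_f))
    if p_best_f[i] < g_best_f:
        g_best, g_best_f = p_best[i].copy(), p_best_f[i]
    return p_best, p_best_f, g_best, g_best_f
```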

2.2. Convergence Analysis

The convergence of the particle trajectories is determined by the control parameters of the algorithm; to simplify the analysis without loss of generality, the case of a single one-dimensional particle is taken as an example. The basic evolution equations (1) and (2) can then be transformed into the dynamic equation form

x(t+1) = (1 + ω − φ) x(t) − ω x(t−1) + φ p(t), (3)

where we define φ1 = c1 r1, φ2 = c2 r2, and φ = φ1 + φ2, with p(t) = (φ1 pi(t) + φ2 pg(t))/φ. If the historical optimal position of the particle pi(t) finally converges to the optimal position of the swarm population pg, the dynamic equation (3) can be arranged as follows:

x(t+1) = (1 + ω − φ) x(t) − ω x(t−1) + φ pg. (4)

For (4), [24] has given a sufficient condition of mean square convergence through theoretical analysis, and this condition is expressed as

|ω| < 1 and (1 − ω) μ (2 + 2ω − μ) > (1 + ω) σ², (5)

where

μ = (c1 + c2)/2, σ² = (c1² + c2²)/12. (6)

According to formulas (5) and (6), it is easy to obtain the relationship between the inertial weight ω and the acceleration coefficients c1 and c2 under which convergence holds.
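As a quick numerical check, the following sketch evaluates condition (5), with μ and σ² computed from formula (6); it is a minimal illustration of the inequality as reconstructed above, not code from [24].

```python
def ms_convergent(w, c1, c2):
    """Check the sufficient condition (5) of mean square convergence,
    with mu and sigma^2 computed from formula (6)."""
    mu = (c1 + c2) / 2.0                # formula (6)
    sigma2 = (c1**2 + c2**2) / 12.0     # formula (6)
    return abs(w) < 1 and (1 - w) * mu * (2 + 2 * w - mu) > (1 + w) * sigma2

# For c1 = c2 = 2 (mu = 2, sigma^2 = 2/3), condition (5) holds exactly
# for omega in (1/3, 1/2):
print(ms_convergent(0.40, 2, 2))   # True
print(ms_convergent(0.55, 2, 2))   # False
```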

3. Our Proposal: SC-PSO Algorithm

3.1. Random Sampling Strategy for Control Parameters

For basic PSO, the control parameters have a great impact on the performance of the algorithm. If they are assigned inappropriately, the particle trajectories cannot converge and may even become unstable, in which case the optimal solution of the optimization problem cannot be found. At present, the control parameters are usually chosen according to the experience or experiments of engineers, which is inflexible and greatly restricts the exploration ability of PSO.

The random sampling strategy is designed to improve the flexibility of the control parameters and to enhance the exploration ability of PSO so as to help the swarm jump out of local optima. On the basis of the conclusion from [24], the convergence of PSO must be respected when the parameters are randomly selected. First, the acceleration coefficients c1 and c2 are uniformly sampled in their respective value intervals, and the parameters μ and σ are computed using formula (6). Then, according to the mean square convergence condition (5), which is a quadratic inequality in ω, the sampling interval for the inertial weight ω can be solved:

ω ∈ (ωlb, ωub), (7)

ωlb = [(μ² − σ²) − √((μ² − σ²)² + 8μ(2μ − μ² − σ²))]/(4μ), (8)

ωub = [(μ² − σ²) + √((μ² − σ²)² + 8μ(2μ − μ² − σ²))]/(4μ), (9)

where ωlb and ωub, respectively, denote the lower bound and the upper bound.

Finally, the inertial weight is sampled in the above computed interval. However, in order to avoid the phenomena of “oscillation” and “two steps forward, one step back,” the inertia weight is selected around the center part of the convergence interval of ω. According to formulas (8) and (9), Figure 1 shows the relationship between μ and ω for mean square convergence of PSO. For example, when the acceleration coefficients have been sampled as c1 = c2 = 2, the parameters μ and σ² can be computed as μ = 2 and σ² = 2/3, and the convergence interval of the inertial weight is (1/3, 1/2). In practice, the inertial weight is uniformly selected around the center part of this interval to avoid the phenomena of “oscillation” and “two steps forward, one step back.”
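The following Python sketch puts the sampling strategy together: it samples c1 and c2, computes the convergence interval of ω from formulas (8) and (9), and then samples ω around the center of that interval. The sampling interval [1.0, 2.0] for c1 and c2 and the width of the “center part” are illustrative assumptions, not values prescribed by the paper.

```python
import numpy as np

def sample_parameters(c_low=1.0, c_high=2.0, center_frac=0.5):
    """Sample c1, c2 uniformly, then draw omega from the center part of the
    mean-square-convergence interval given by formulas (8) and (9)."""
    while True:
        c1 = np.random.uniform(c_low, c_high)   # illustrative interval
        c2 = np.random.uniform(c_low, c_high)
        mu = (c1 + c2) / 2.0                    # formula (6)
        sigma2 = (c1**2 + c2**2) / 12.0
        b = mu**2 - sigma2
        disc = b**2 + 8.0 * mu * (2.0 * mu - mu**2 - sigma2)
        if disc > 0:                            # a nonempty interval exists
            break
    w_lb = (b - np.sqrt(disc)) / (4.0 * mu)     # formula (8)
    w_ub = (b + np.sqrt(disc)) / (4.0 * mu)     # formula (9)
    w_lb, w_ub = max(w_lb, -1.0), min(w_ub, 1.0)  # intersect with |omega| < 1
    # Select omega uniformly around the center of (w_lb, w_ub) to avoid
    # "oscillation" and "two steps forward, one step back".
    mid = 0.5 * (w_lb + w_ub)
    half = 0.5 * center_frac * (w_ub - w_lb)
    w = np.random.uniform(mid - half, mid + half)
    return w, c1, c2
```

With center_frac = 0.5, ω is drawn from the middle half of the convergence interval; for c1 = c2 = 2 this reproduces a draw near the center of (1/3, 1/2), as in the example above.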

On the basis of the random sampling strategy, all particles in the swarm update their velocities and positions using formulas (1) and (2) at every evolution step. Besides that, an intermediate particle updating strategy is devised to avoid the premature convergence that may be caused by the strong randomness of the parameter sampling strategy; it is only used to update the optimal position of the swarm population at every evolution step.

The method for generating intermediate particles is as follows:

(1) Intermediate particle 1: the value of this particle in each dimension is the average of the values of all updated particles in that dimension.

(2) Intermediate particle 2: the value of this particle in each dimension is the median of the particles' values in the corresponding dimension.

(3) Intermediate particle 3: the value of this particle in each dimension is whichever of the maximum and minimum values in that dimension has the larger absolute value.

(4) Intermediate particle 4: the value of this particle in each dimension is whichever of the maximum and minimum values in the corresponding dimension has the smaller absolute value.

From the four intermediate particles we choose the one with the best fitness value. If it is better than the optimal position of the swarm population, the optimal position is replaced by it; otherwise, the optimal position remains unchanged.
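A minimal sketch of the intermediate particle updating strategy, assuming a minimization problem; the helper names are ours.

```python
import numpy as np

def intermediate_update(x, g_best, g_best_f, fitness):
    """Update the optimal position of the swarm population using the
    four intermediate particles of Section 3.1.

    x       : (N, D) updated positions of all particles
    fitness : callable mapping a (D,) position to a scalar (minimization)
    """
    mx, mn = x.max(axis=0), x.min(axis=0)
    candidates = [
        x.mean(axis=0),                              # particle 1: dimension-wise mean
        np.median(x, axis=0),                        # particle 2: dimension-wise median
        np.where(np.abs(mx) >= np.abs(mn), mx, mn),  # particle 3: larger |.| of max/min
        np.where(np.abs(mx) <= np.abs(mn), mx, mn),  # particle 4: smaller |.| of max/min
    ]
    f = [fitness(c) for c in candidates]
    i = int(np.argmin(f))
    if f[i] < g_best_f:                              # replace only on improvement
        g_best, g_best_f = candidates[i].copy(), f[i]
    return g_best, g_best_f
```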

3.2. Stochastic Correction Approach on Each Dimension for the Optimal Position

The intermediate particles mentioned in the previous section correct the optimal position using information gathered dimension by dimension; however, since the positions of particles are evaluated as a whole, the historical optimal positions of particles that are worse than the optimal position of the swarm may still carry useful information in some dimensions. Therefore, a stochastic correction approach on each dimension of the optimal position is introduced in this section.

According to the fitness ordering of the historical optimal positions of the particles, we select the five best historical optimal positions, denoted pind, ind = 1, 2, ..., 5. The procedure of the stochastic correction approach on each dimension for the optimal position is then as follows:

(1) For each dimension d, two positions m and n are randomly selected from the five better historical optimal positions together with the optimal position, to determine the correction range in that dimension.

(2) Set value1(d) = min(pm,d, pn,d) and value2(d) = max(pm,d, pn,d), where value1(d) indicates the dth dimension value of value1, and value2(d) is similar.

(3) Generate the candidate value value1(d) + r(d)·(value2(d) − value1(d)), where r(d) is a random number uniformly distributed in the interval [0, 1].

(4) Substitute the candidate value into the dth dimension of the optimal position; if the fitness of the resulting position improves, the optimal position is updated accordingly; otherwise it remains unchanged.
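The following sketch reflects our reading of the procedure above, assuming minimization and that the correction range in each dimension is the interval spanned by the two randomly selected positions; all names are illustrative.

```python
import numpy as np

def stochastic_correction(p_best, p_best_f, g_best, g_best_f, fitness, n_top=5):
    """Dimension-wise stochastic correction of the optimal position (Section 3.2).

    p_best   : (N, D) historical optimal positions of the particles
    p_best_f : (N,) their fitness values (minimization)
    """
    top = p_best[np.argsort(p_best_f)[:n_top]]       # five better positions
    pool = np.vstack([top, g_best])                  # plus the optimal position
    D = g_best.shape[0]
    for d in range(D):
        m, n = np.random.choice(len(pool), 2, replace=False)
        lo = min(pool[m, d], pool[n, d])             # value1(d): correction range
        hi = max(pool[m, d], pool[n, d])             # value2(d)
        trial = g_best.copy()
        trial[d] = lo + np.random.rand() * (hi - lo) # r(d) uniform in [0, 1]
        f = fitness(trial)
        if f < g_best_f:                             # keep the better dth value
            g_best, g_best_f = trial, f
    return g_best, g_best_f
```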

3.3. Algorithm Flow

The whole process of the SC-PSO algorithm is shown in Figure 2.

4. Numerical Simulation Experiments

4.1. Experiments Settings

In order to demonstrate the performance of the proposed algorithm SC-PSO, the basic PSO and four improved versions of PSO are selected for comparison. For convenience, the basic PSO is represented by A1; A2 indicates DCW-PSO [25], which dynamically adjusts the inertial weight; LDC-PSO [26], represented by A3, linearly increases the acceleration coefficient c1 while linearly decreasing the acceleration coefficient c2. Algorithm A4 (DPSO) is an improved algorithm using an asynchronous learning variation strategy for the learning factors [27]. Reference [28] describes SS-PSO, marked as A5, which is based on the principle of Latin hypercube sampling. Finally, the proposed algorithm SC-PSO is marked as A6. Table 1 shows the ten benchmark functions and their corresponding features.

The problem dimension of the benchmark functions is set to 30 and 50, respectively, and the acceleration coefficients c1 and c2 are sampled in their respective value intervals as described in Section 3.1; in addition, the maximum number of function evaluations is MFEs = 30000, and the population size for every algorithm is 20. All the algorithms were run independently 30 times with different random initial positions.

4.2. Performance Analysis

Tables 2 and 3, respectively, show the optimization results of all the algorithms on the benchmark functions for 30 and 50 dimensions. In both tables, Mean denotes the mean value of the optimization results, and Std represents their standard deviation. From Tables 2 and 3, it can be concluded that the performance of the SC-PSO proposed in this paper is better than that of the other algorithms, and that the improved PSO variants greatly outperform the basic PSO. For functions f1, f8, and f9, the convergence accuracy of the proposed algorithm is clearly better than that of all the other algorithms. For f2, f3, and f10, the convergence accuracy of the proposed algorithm is improved slightly. For the remaining test functions, the convergence accuracy of PSO, DCW-PSO, LDC-PSO, and DPSO is almost the same and is worse than that of SS-PSO and the proposed algorithm. Table 3 shows the optimization results on the benchmark functions in 50 dimensions; it can be seen from the rankings in the table that changing the function dimension has little influence on the relative quality of the algorithms.

In order to investigate the convergence performance of SC-PSO intuitively, the convergence curves of the 6 algorithms (PSO, DCW-PSO, LDC-PSO, DPSO, SS-PSO, and SC-PSO) are plotted for the 10 selected functions. In Figures 5, 7–9, and 12, SC-PSO's convergence speed is faster than that of any of the comparison algorithms, and its advantage is prominent. In Figures 4 and 6, the convergence speed of SC-PSO is also faster than that of any of the comparison algorithms, but its advantage is less sharp. In Figures 3, 10, and 11, SC-PSO's convergence speed is almost the same as that of SS-PSO in the early stage, but its convergence accuracy is better. On the whole, the convergence performance of SC-PSO on the 10 functions is the best among the compared algorithms, which further proves that the improvement strategies are effective.

In summary, compared with the basic PSO and its modified versions, the convergence rate of the SC-PSO proposed in this paper is significantly improved while its convergence accuracy remains higher.

4.3. Running Time Analysis

From Figure 13, the average runtime of SC-PSO is the smallest among the 6 algorithms, which means that SC-PSO is the fastest. SC-PSO's average runtime is 0.7725 s, while that of PSO is 2.1244 s, so SC-PSO requires only 36% of PSO's runtime. Compared with the other 4 algorithms, SC-PSO's average runtime accounts for 34%, 37%, 20%, and 43% of that of DCW-PSO (2.2871 s), LDC-PSO (2.0658 s), DPSO (3.8148 s), and SS-PSO (1.7965 s), respectively.

5. Conclusions

This paper proposes an improved convergence particle swarm optimization algorithm with random sampling of control parameters. The random sampling strategy of control parameters is designed to improve the flexibility of the control parameter settings, and the corresponding updating randomness also helps the swarm jump out of local optima. In order to avoid the premature convergence caused by the strong randomness of the sampling strategy, an intermediate particle updating strategy is devised. Besides that, a stochastic correction approach on each dimension of the optimal position is used to take advantage of the useful information carried by other particles. The final experimental results show that the proposed algorithm not only attains high accuracy but also significantly improves the convergence rate. In the future, we will consider how to apply it to node localization in wireless sensor networks.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (U1604151; 61803146), Outstanding Talent Project of Science and Technology Innovation in Henan Province (174200510008), Program for Scientific and Technological Innovation Team in Universities of Henan Province (16IRTSTHN029), Science and Technology Project of Henan Province (182102210094), Natural Science Project of Education Department of Henan Province (18A510001), and the Fundamental Research Funds of Henan University of Technology (2015QNJH13, 2016XTCX06).