Abstract

Particle swarm optimization (PSO) is a global stochastic optimization tool with the ability to search for the global optimum. However, the PSO algorithm is easily trapped in local optima and suffers from low convergence accuracy. In this paper, in order to overcome these shortcomings, an improved particle swarm optimization algorithm (IPSO), based on two forms of exponential inertia weight and two types of centroids, is proposed. By comparing the optimization ability of the IPSO algorithm with the BPSO, EPSO, CPSO, and ACL-PSO algorithms, experimental results show that the proposed IPSO algorithm is more efficient; it also outperforms the four baseline PSO algorithms in accuracy.

1. Introduction

Particle swarm optimization (PSO), originally proposed by Kennedy and Eberhart in 1995, is a population-based stochastic optimization technique inspired by the social behavior of bird flocking or fish schooling [1]. The PSO algorithm is easy to implement and has been empirically shown to perform well on many optimization problems [2–5]. The development of PSO can in general be classified into three categories. The first category emphasizes variants of the PSO mechanism itself, both in mathematics [6] and in topology [7]. The second hybridizes other optimization techniques into PSO, such as ACO [8], GA [9, 10], tabu search [11], and simulated annealing [12]. The third leverages the advantages of chaos maps, such as certainty, ergodicity, and stochasticity [13]. The hybridization of chaos with PSO has also become an active direction in recent research [14, 15].

Using the concept of the center of gravity in physics, Song et al. designed a centroid particle swarm based on each particle's best position and proposed a centroid PSO (CPSO) algorithm [16] to enhance individual and group collaboration and information sharing capabilities. Gou et al. proposed an ACL-PSO algorithm [17] with a population centroid based on every particle's current position; their experimental results showed that ACL-PSO improved the global search capability and effectively avoided premature convergence. The inertia weight, in its linearly decreasing form, was first embedded into the original PSO by Shi and Eberhart [18]. From their work, a conclusion can be drawn that a large inertia weight facilitates a global search while a small one facilitates a local search. After that, different kinds of inertia weights were introduced, expressed in exponential [19] or other nonlinear [20–22] forms. Recently, Ting et al. [23] proposed an exponential inertia weight framework. After a careful analysis of the effect of the local attractor and the global attractor, they presented suggestions for adjusting these attractors in order to improve the performance of the PSO algorithm.

In this paper, in order to prevent PSO from falling into a local optimum, we propose an improved PSO algorithm (IPSO) based on two forms of the exponential inertia weight proposed by Ting et al. and two kinds of centroids: the population centroid, a new weighted linear combination of each particle's current position, and the best individual centroid, a linear combination of each particle's best position. The proposed IPSO algorithm provides two velocity updating forms, one of which is selected by roulette wheel for every particle at each evolution iteration. Besides a particle's own best position and the global best position, one velocity updating form draws on the population centroid and the other on the best individual centroid. By comparing its optimization ability with that of four other PSO algorithms, the experimental results show that the proposed IPSO algorithm reaches better optima.

The remainder of this paper is organized as follows. The basic PSO algorithm (BPSO) [18], the exponential inertia weight PSO (EPSO) [23], the centroid PSO (CPSO) [16], and the self-adaptive comprehensive learning PSO (ACL-PSO) [17] are reviewed in Section 2. We present our improved PSO algorithm (IPSO) in Section 3. Section 4 reports our experimental results. Finally, we conclude our work in the last section.

2. Background

The basic PSO (BPSO) algorithm is a useful optimization tool. Each particle's position stands for a candidate solution to the problem to be solved. The BPSO, EPSO, CPSO, and ACL-PSO algorithms are reviewed below.

2.1. BPSO

Denote by $f$, $N$, and $T$ the fitness function, the scale of the swarm, and the maximum iteration number, respectively. Let $x_i^t$, $v_i^t$, and $f_i^t$ be the position, velocity, and fitness of the $i$th particle at the $t$th iteration, respectively. In addition, let $f(pbest_i)$ be the $i$th particle's best fitness and let $pbest_i$ be the corresponding position, s.t. $f(pbest_i)=\min_{\tau\le t} f_i^{\tau}$. Also, let $f(gbest)$ be the swarm's best fitness and let $gbest$ be the corresponding position, s.t. $f(gbest)=\min_{1\le i\le N} f(pbest_i)$.

In BPSO, tracking the two positions $pbest_i$ and $gbest$, each particle flies through the multidimensional search space and updates its inertia weight, velocity, and position according to (1)–(3), respectively:

$w_t = w_{\max} - \dfrac{(w_{\max}-w_{\min})\,t}{T}$,  (1)

$v_i^{t+1} = w_t v_i^t + c_1 r_1 (pbest_i - x_i^t) + c_2 r_2 (gbest - x_i^t)$,  (2)

$x_i^{t+1} = x_i^t + v_i^{t+1}$.  (3)

In (1), $w_t$ stands for the inertia weight at the $t$th iteration, and $w_{\max}$ and $w_{\min}$ are the initial and final inertia weights, respectively. In (2), $c_1$ and $c_2$ are acceleration parameters, and $r_1$ and $r_2$ are random numbers in the interval $[0,1]$.
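For concreteness, the following minimal sketch shows one BPSO iteration in Python for a minimization problem, using the standard forms of (1)–(3); the function name and the default parameter values are illustrative choices, not taken from the paper.

```python
import numpy as np

def bpso_step(x, v, pbest, gbest, t, T, w_max=0.9, w_min=0.4, c1=2.0, c2=2.0):
    """One BPSO iteration: linearly decreasing inertia weight (1),
    velocity update (2), and position update (3)."""
    n, d = x.shape
    w = w_max - (w_max - w_min) * t / T                          # (1)
    r1, r2 = np.random.rand(n, d), np.random.rand(n, d)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)    # (2)
    x = x + v                                                    # (3)
    return x, v
```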

2.2. EPSO

Inertia weight, which was first introduced by Shi and Eberhart, is one of the important parameters in the PSO algorithm [18]. Since then, linearly decreasing and nonlinearly decreasing inertia weights have been widely used in the literature. Chauhan et al. [22] summarized different inertia weight forms. However, there is no clear justification of how this parameter should be adjusted to improve the performance of the PSO algorithm. In [23], Ting et al. investigated the properties of an exponential inertia weight inspired by the adaptive crossover rate (ACR) used in the differential evolution algorithm. The ACR, defined in (4), decays from an initial crossover rate $CR_0$ with the current generation number $t$ over the maximum number of generations $T$. The adaptive function for the crossover rate is crafted on the simple logic of being high at the beginning of the run to prevent premature convergence and low at the end of the run to enhance the local search. This concept carries over directly to the inertia weight in BPSO. Thus, Ting et al. defined their exponential inertia weight in (5) as the initial inertia weight $w_0$ scaled by an exponential decay in $t/T$ that is controlled by two parameters, $a$ and $b$, named the local search attractor and the global search attractor, respectively. On the one hand, when parameter $b$ is set to 1, parameter $a$ has the ability to push down the value of the inertia weight; on the other hand, when $b$ is increased, it has the ability to pull the value up. Note that when $a$ is zero, the inertia weight becomes the static value $w_0$; when $b$ is zero, it is in fact a static weight with a value of approximately 0.32 on condition that $w_0$ is set to 1.

Substituting (5) for (1) in the BPSO algorithm, Ting et al. proposed an exponential inertia weight PSO algorithm (EPSO). After testing on 23 benchmark problems, they concluded from the simulation results that the EPSO algorithm was capable of global search and of local search, depending on the settings of the attractors $a$ and $b$. At the same time, their simulation results showed that the EPSO algorithm performed better than the BPSO algorithm, which has been widely used in many significant works.
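As a hedged illustration of this idea, the sketch below assumes the decay form $w_t = w_0 e^{-a (t/T)^b}$; the exact exponent of (5) is not reproduced in this excerpt, so this particular form is only an assumption consistent with the described roles of the local attractor $a$ and the global attractor $b$.

```python
import math

def exponential_inertia_weight(t, T, w0=1.0, a=1.0, b=1.0):
    """Assumed exponential inertia weight: w0 * exp(-a * (t / T) ** b).
    A larger local attractor a pushes the weight down (favoring local search),
    while a larger global attractor b keeps it high for longer (favoring global
    search). When a == 0 the weight stays at the static value w0."""
    return w0 * math.exp(-a * (t / T) ** b)
```

For example, with w0 = 1 and a = 1, increasing b from 1 to 4 keeps the weight close to w0 for most of the run before it drops, matching the qualitative behavior described above.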

2.3. CPSO

In order to enhance interparticle cooperation and information sharing capabilities, Song et al. proposed a centroid PSO (CPSO) algorithm [16]. In the CPSO algorithm, the centroid of the particle swarm is defined as in (6), the average of the particles' best positions,

$C_b^t = \dfrac{1}{N}\sum_{i=1}^{N} pbest_i$,  (6)

and the velocity update (2) of the BPSO algorithm is replaced by (7), which adds an attraction toward this centroid:

$v_i^{t+1} = w_t v_i^t + c_1 r_1 (pbest_i - x_i^t) + c_2 r_2 (gbest - x_i^t) + c_3 r_3 (C_b^t - x_i^t)$.  (7)

In (7), $c_3$ is a positive acceleration constant similar to $c_1$ and $c_2$, and $r_3$ is a random number between 0 and 1.

In this way, the trajectory of each particle is related not only to its individual best position and the best position of the swarm but also to the centroid of the whole particle swarm. The experimental results in [16] show that the CPSO algorithm not only enhances local search efficiency and global search performance but also effectively avoids premature convergence.
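A minimal sketch of the CPSO-style update, assuming that (6) is the equal-weight mean of the particles' best positions and that (7) simply appends the single centroid attraction term described above; function and parameter names are illustrative.

```python
import numpy as np

def cpso_velocity(x, v, pbest, gbest, w, c1=2.0, c2=2.0, c3=2.0):
    """CPSO-style velocity update: the BPSO terms of (2) plus an attraction toward
    the centroid of the particles' best positions (assumed equal-weight mean, (6))."""
    n, d = x.shape
    centroid = pbest.mean(axis=0)            # best individual centroid, (6)
    r1 = np.random.rand(n, d)
    r2 = np.random.rand(n, d)
    r3 = np.random.rand(n, d)
    return (w * v
            + c1 * r1 * (pbest - x)
            + c2 * r2 * (gbest - x)
            + c3 * r3 * (centroid - x))      # (7)
```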

2.4. ACL-PSO

After introducing a population centroid learning mechanism into BPSO, Gou et al. proposed the ACL-PSO algorithm [17] based on self-adaptive comprehensive learning. They defined the fitness proportion $p_i^t$ of the $i$th particle as in (8), and the population centroid corresponding to the $t$th iteration as the proportion-weighted sum of the particles' current positions:

$C_p^t = \sum_{i=1}^{N} p_i^t x_i^t$.  (9)

Considering not only each particle's own previous best position but also the evolution trend of the population, Gou et al. proposed two different evolution strategies for different evolution stages: one strategy guides the particle's search direction at the early stage of evolution, while the other guides it at the later stage. To realize these strategies, they used (10) to compute a learning probability $Pc^t$ at the $t$th iteration and compared it with a random number $r$ drawn uniformly from $[0,1]$.

If $r < Pc^t$, the particle's velocity is updated by (11) or (12); otherwise it is updated by (2).

In the abovementioned formulas, the inertia weight of ACL-PSO was computed by (13), which involves a random number drawn from a prescribed interval. In addition, $\lambda$ denotes the weight coefficient of the population centroid and $r_3$ is a random number between 0 and 1. The term $\lambda r_3 (C_p^t - x_i^t)$ is the population centroid learning entry, which reflects an individual particle's social cognition, learning, and thinking.
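The sketch below illustrates the two ingredients described above: a velocity update carrying the population-centroid learning entry, and the probabilistic choice between that update and the plain BPSO one. Since (10)–(13) are not reproduced here, the learning probability and the comparison direction are assumptions.

```python
import numpy as np

def centroid_learning_velocity(x, v, pbest, gbest, centroid, w,
                               c1=2.0, c2=2.0, lam=1.0):
    """Velocity update with the population-centroid learning entry
    lam * r3 * (centroid - x) added to the cognitive and social terms (cf. (11)/(12))."""
    n, d = x.shape
    r1, r2, r3 = np.random.rand(3, n, d)
    return (w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            + lam * r3 * (centroid - x))

def use_centroid_strategy(pc):
    """With assumed probability pc (standing in for (10)), pick the centroid-learning
    update; otherwise fall back to the plain BPSO update (2)."""
    return np.random.rand() < pc
```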

Compared with four other improved PSO algorithms in terms of accuracy, convergence speed, and computational complexity, ACL-PSO converged faster and yielded more robust and better optima [17].

3. Our Proposed Method IPSO

In IPSO, each particle's fitness is defined directly as its objective function value, and the search mechanism of the algorithm is adjusted accordingly, as described below.

Combining the exponential inertia weight of the EPSO algorithm with the centroids of the CPSO and ACL-PSO algorithms, and taking advantage of the ACL-PSO algorithm's evolution strategies, we propose a new improved PSO algorithm (IPSO) as follows.

(1) Population Centroid and Best Individual Centroid. The centroids of the CPSO and ACL-PSO algorithms differ from each other: the centroid of the CPSO algorithm is a linear combination of each particle's best position, while that of the ACL-PSO algorithm is a weighted linear combination of each particle's current position. Taking advantage of both, we embed the two centroids into BPSO to balance the algorithm's local and global search, and denote them as the best individual centroid and the population centroid, respectively.

If $f_i^t$ is smaller, that is to say, the objective value of the $i$th particle is smaller at the $t$th iteration, then that particle's current position should contribute more to the construction of the population centroid at the same iteration. Thus, in order to reflect the degree of importance of the $i$th particle's current position $x_i^t$ within the population centroid, we define the particle proportion $p_i^t$ as in (15) and compute the population centroid $C_p^t$ by (9). At the same time, the normalization

$\sum_{i=1}^{N} p_i^t = 1$  (16)

is obtained naturally.

Gou et al. [17] have proved that their proposed population centroid becomes the convergence position provided that the ACL-PSO algorithm converges to a local minimum or converges globally. Similarly, letting the limit position satisfy (17), relation (18) indicates that this position is the convergence position of the IPSO algorithm provided that the algorithm converges to a local minimum or converges globally.
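A small sketch of the two centroids used by IPSO. The exact particle-proportion formula (15) is not reproduced in this excerpt, so the max-minus-fitness weighting below is only an assumed stand-in that satisfies the stated property: smaller objective values receive larger proportions, and the proportions sum to one as in (16).

```python
import numpy as np

def particle_proportions(fitness):
    """Assumed stand-in for (15): smaller objective value -> larger proportion;
    the proportions sum to 1, as stated in (16)."""
    w = fitness.max() - fitness + 1e-12
    return w / w.sum()

def population_centroid(x, fitness):
    """Population centroid (9): proportion-weighted sum of current positions."""
    return particle_proportions(fitness) @ x

def best_individual_centroid(pbest):
    """Best individual centroid (6): mean of the particles' best positions
    (assumed equal weights)."""
    return pbest.mean(axis=0)
```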

(2) Inertia Weight. Based on the exponential inertia weight proposed by Ting et al. [23], we select two pairs of the attractors $a$ and $b$. In one pair, $a$ is 1 and $b$ is 2; in the other, the values are swapped. Denote the two resulting inertia weights at the $t$th iteration as $w_1^t$ and $w_2^t$, as in (19). These inertia weights work together with different evolution strategies, in the same manner as in the ACL-PSO algorithm.
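Under the same assumed exponential form as in the earlier sketch, the two IPSO weight schedules obtained from the pairs $(a, b) = (1, 2)$ and $(2, 1)$ could be written as follows; the exact expression of (19) is an assumption here.

```python
import math

def ipso_inertia_weights(t, T, w0=1.0):
    """Two assumed schedules standing in for (19): (a, b) = (1, 2) keeps the weight
    high for longer (global search), while (a, b) = (2, 1) lets it decay quickly
    (local search)."""
    w1 = w0 * math.exp(-1.0 * (t / T) ** 2)   # pair (a, b) = (1, 2)
    w2 = w0 * math.exp(-2.0 * (t / T) ** 1)   # pair (a, b) = (2, 1)
    return w1, w2
```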

(3) Update Velocity. It can be drawn from (18) that the population centroid will coincide with the globally optimal location of the population. Therefore, we use Gou et al.'s method [17] to let each particle select its velocity updating formula. If the random number generated for the current iteration is less than the learning probability computed by (10), the particle's velocity is updated by (12) with $w_1^t$; otherwise it is updated by (7) with $w_2^t$. The execution process of the IPSO algorithm is shown in Algorithm 1.

Algorithm. IPSO ($F$, $N$, $D$, $T$)
Input:    objective function $F$, swarm size $N$, dimension $D$, maximum iteration number $T$
Output:   $gbest$, $f(gbest)$
Begin
   (1) Initialize the parameters, including the population size $N$, the dimension size $D$,
      the maximum iteration number $T$, and the current iteration count $t = 1$
   (2) Generate the initial population and initial velocity. The initial position and velocity of each
      particle are randomly generated as follows:
      Population = $\{x_i^1 = (x_{i1}^1, \ldots, x_{iD}^1),\ i = 1, \ldots, N\}$
      Velocity = $\{v_i^1 = (v_{i1}^1, \ldots, v_{iD}^1),\ i = 1, \ldots, N\}$
     where $x_i^t$ and $v_i^t$ are the position and velocity at the $t$th iteration for the $i$th particle,
     respectively. $D$ is the dimension of each particle. $x_{\min}^d$, $x_{\max}^d$ are the minimum and maximum
     position values of the $d$th dimension, respectively. $v_{\min}^d$, $v_{\max}^d$ are the minimum and maximum
     velocity values of the $d$th dimension, respectively.
   (3) Calculate the fitness value of each particle and record it as $f_i^t = F(x_i^t)$; record the $i$th particle's
      best fitness and its corresponding position as $f(pbest_i)$, $pbest_i$, s.t. $f(pbest_i) = \min_{\tau \le t} f_i^{\tau}$
   (4) Calculate the population's best fitness $f(gbest)$ and its corresponding position $gbest$, s.t. $f(gbest) = \min_{1 \le i \le N} f(pbest_i)$
   (5) Compute the population centroid $C_p^t$ by (9) with the weighted coefficients computed by (15),
      compute the best individual centroid $C_b^t$ by (6).
   (6) For $t = 1$ to $T$ do
       (6.1) Compute $Pc^t$ by (10), and generate a random number $r \in [0, 1]$;
       (6.2) If $r < Pc^t$
          (6.2.1) Update $w_1^t$ by (19)
          (6.2.2) For $i = 1$ to $N$ do
               Update $v_i^{t+1}$ and $x_i^{t+1}$ by (12) and (3) respectively, compute $f_i^{t+1} = F(x_i^{t+1})$
               If $f_i^{t+1} < f(pbest_i)$ then $pbest_i = x_i^{t+1}$, $f(pbest_i) = f_i^{t+1}$
               End If
               End
          Else
          (6.2.3) Update $w_2^t$ by (19)
          (6.2.4) For $i = 1$ to $N$ do
               Update $v_i^{t+1}$ and $x_i^{t+1}$ by (7) and (3) respectively, compute $f_i^{t+1} = F(x_i^{t+1})$
               If $f_i^{t+1} < f(pbest_i)$ then $pbest_i = x_i^{t+1}$, $f(pbest_i) = f_i^{t+1}$
               End If
               End
          End If
       (6.3) Compute $g$ and set $gbest = pbest_g$, $f(gbest) = f(pbest_g)$, where $g = \arg\min_{1 \le i \le N} f(pbest_i)$
       (6.4) Compute the population centroid $C_p^{t+1}$ by (9) with the weighted coefficients computed by (15),
           compute the best individual centroid $C_b^{t+1}$ by (6).
       (6.5) Let $t = t + 1$
    End
   (7) Output $gbest$, $f(gbest)$
End
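To tie Algorithm 1 together, the following is a compact Python sketch of the IPSO loop for a minimization problem. It is a sketch under stated assumptions rather than a definitive implementation: the particle-proportion formula, the exponent of the inertia weights in (19), the constant learning probability used in place of (10), the comparison direction, and the bound handling are all illustrative choices.

```python
import numpy as np

def ipso(F, dim, n=30, T=1000, w0=0.9, c=(2.0, 2.0, 2.0), lam=1.0, pc=0.5,
         lb=-100.0, ub=100.0, seed=0):
    """Minimal IPSO-style loop following Algorithm 1 under the stated assumptions."""
    rng = np.random.default_rng(seed)
    c1, c2, c3 = c
    x = rng.uniform(lb, ub, (n, dim))                        # step (2): positions
    v = rng.uniform(-1.0, 1.0, (n, dim)) * 0.1 * (ub - lb)   # step (2): velocities
    fit = np.apply_along_axis(F, 1, x)                       # step (3)
    pbest, pfit = x.copy(), fit.copy()
    g = int(pfit.argmin())
    gbest, gfit = pbest[g].copy(), float(pfit[g])            # step (4)
    for t in range(1, T + 1):                                # step (6)
        # Centroids, steps (5)/(6.4); the proportion formula stands in for (15).
        p = fit.max() - fit + 1e-12
        p = p / p.sum()
        cp = p @ x                                           # population centroid (9)
        cb = pbest.mean(axis=0)                              # best individual centroid (6)
        r1, r2, r3 = rng.random((3, n, dim))
        if rng.random() < pc:                                # step (6.2), assumed comparison
            w1 = w0 * np.exp(-(t / T) ** 2)                  # assumed (19), pair (a, b) = (1, 2)
            v = (w1 * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                 + lam * r3 * (cp - x))                      # (12)-style update
        else:
            w2 = w0 * np.exp(-2.0 * t / T)                   # assumed (19), pair (a, b) = (2, 1)
            v = (w2 * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                 + c3 * r3 * (cb - x))                       # (7)-style update
        x = np.clip(x + v, lb, ub)                           # position update (3), simple bounds
        fit = np.apply_along_axis(F, 1, x)
        better = fit < pfit                                  # per-particle pbest update
        pbest[better], pfit[better] = x[better], fit[better]
        g = int(pfit.argmin())
        if pfit[g] < gfit:                                   # step (6.3)
            gbest, gfit = pbest[g].copy(), float(pfit[g])
    return gbest, gfit
```

For instance, `ipso(lambda z: float(np.sum(z ** 2)), dim=30)` runs the sketch on a simple sphere-type objective.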

4. Experiments and Results

In this section, the BPSO, EPSO, CPSO, and ACL-PSO algorithms are compared with the IPSO algorithm on four benchmark functions to verify the feasibility of IPSO. The descriptions of these test functions, which can be divided into unimodal and multimodal functions, are shown in Table 1. The objective functions in Table 1 are used to evaluate each particle's fitness; the smaller the function value, the higher the fitness.
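Table 1 itself is not reproduced in this excerpt. Purely as illustrative stand-ins, the snippet below defines one unimodal and one multimodal test function of the kind typically used in such comparisons (Sphere and Rastrigin); the paper's actual benchmark set may differ.

```python
import numpy as np

def sphere(z):
    """Unimodal: global minimum 0 at the origin."""
    return float(np.sum(z ** 2))

def rastrigin(z):
    """Multimodal: many local minima, global minimum 0 at the origin."""
    return float(np.sum(z ** 2 - 10.0 * np.cos(2.0 * np.pi * z) + 10.0))
```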

The experiments proceed as follows. First, determine the parameter pairs of the IPSO algorithm, such as the swarm size $N$ and the parameter $\lambda$. Second, fixing the number of iterations and varying the number of particles, evaluate the performance of the five algorithms by the average objective value at each iteration. Finally, setting the maximum iteration number and different target accuracies for these functions, compare the success rate and the average convergence iteration number.

For each algorithm, the average objective value at the $t$th iteration over all turns is computed as

$\bar{F}(t) = \dfrac{1}{R}\sum_{r=1}^{R} f_g^{r,t}$,  (20)

where $f_g^{r,t}$ stands for the algorithm's global best fitness at the $t$th iteration of the $r$th turn and $R$ is the number of turns.
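A minimal sketch of (20): averaging the recorded global best fitness over the independent turns at each iteration (the array layout is an assumption).

```python
import numpy as np

def average_objective_curve(gbest_history):
    """gbest_history: array of shape (R, T) holding the global best fitness at each
    iteration t of each of the R turns. Returns the length-T averaged curve of (20)."""
    return np.asarray(gbest_history, dtype=float).mean(axis=0)
```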

(1) Determine the Parameter Pairs of $N$ and $\lambda$ in the IPSO Algorithm. With the maximum iteration number fixed, let the swarm size $N$ be 10, 20, and 30 in turn, and let the parameter $\lambda$ be updated by (21) for each swarm size. Run the IPSO algorithm for a number of independent turns for each setting. The resulting average objective value curves for the four benchmark functions are shown in Figures 1, 2, 3, and 4.

From Figures 1–4, we can see that the number of iterations, the value of parameter $\lambda$, and the swarm size all affect the performance of the IPSO algorithm. With $\lambda$ set to 0.5 and the swarm size set to 30, the average objective value reaches its minimum at the later iteration stage for the four benchmark functions. When the average objective value reaches zero, the plotted quantity tends to infinity, and the corresponding portions of the curves do not appear. This phenomenon occurs in the curves of the first two functions; that is, the IPSO algorithm can find the target optimum for these functions within a small number of iterations, as can be seen in Figures 1 and 2. When $\lambda$ is smaller, the average objective value becomes smaller as well. From Figure 3, we can see which setting of $\lambda$ is more effective for the IPSO algorithm when searching for the optimal solution of the third function at the later iteration stage, and Figure 4 likewise suggests a reasonable setting for the fourth function.

With different pairs of swarm size and $\lambda$, the average objective value reaches its minimum at the later iterations. Table 2 lists the distribution of the best-performing pairs for each benchmark function. Larger population sizes require more function evaluations and increase the computational effort needed for convergence, but they also increase the reliability of the algorithm; the problem is to find a compromise between cost and reliability. In order to give the IPSO algorithm better optimization capability, we fix the swarm size and let the parameter $\lambda$ take the corresponding values in the third row of Table 2 for the different benchmark functions in the following tests.

(2) Compare the IPSO Algorithm with the Other Four Algorithms. Using the BPSO, EPSO, CPSO, and ACL-PSO algorithms for comparison with the IPSO algorithm on the above four benchmark functions, we keep the shared parameter settings identical; the other parameters of the compared algorithms are listed in Table 3.

Each of the above five algorithms is run 50 times independently. Five indicators are recorded over these runs: the minimum objective value (Min), the maximum objective value (Max), the mean objective value (Mean), the deviation of the objective value (Dev), and the average convergence iteration number (ACIN). Our experimental results are shown in Table 4 and Figures 5, 6, 7, and 8.

From Table 4, it can be seen that the IPSO achieves higher accuracy than the BPSO, EPSO, CPSO, and ACL-PSO. Judging from the mean and deviation in Table 4, the IPSO is better than the BPSO, EPSO, CPSO, and ACL-PSO, with a steady convergence. In addition, the IPSO algorithm needs fewer iterations to converge than the ACL-PSO algorithm does.

Taking the iteration number as the x-coordinate and the average best objective value at each iteration as the y-coordinate, we plot the curves for these four benchmark functions in Figures 5–8.

From Figures 5–8, it can be seen that the IPSO algorithm, compared with the other four algorithms, finds better objective values at the later iterations. The EPSO, CPSO, ACL-PSO, and IPSO algorithms can all find better solutions than the BPSO algorithm within a small number of iterations, and the performance of the IPSO algorithm is the best among the five. In addition, the IPSO algorithm requires fewer iterations to converge than the ACL-PSO algorithm does. This is attributed to the combination of one inertia weight form working with the population centroid and the other working with the best individual centroid.

(3) Compare the Success Rate and the Average Number of Iterations Needed to Reach the Target Precision. In order to further validate the effectiveness of the IPSO algorithm, we set target precisions for the above four benchmark functions, which are listed in Table 5, and run every algorithm for 100 turns on each test function. Setting the maximum iteration number to 1000, whenever the objective value becomes less than or equal to its target precision, we increase the success count by one and record the current iteration number for that turn. The success rate (SR) is then the success count divided by the total number of turns, and the average convergence iteration (ACI) is the mean of the recorded iteration numbers. Four groups of target accuracies, listed in Table 5, are used to evaluate the stability of the algorithms. Our experimental results are shown in Table 5.
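The success rate and average convergence iteration described above can be computed as in the sketch below (array names and layout are assumptions): a turn counts as successful once its best objective value first reaches the target precision within the iteration budget.

```python
import numpy as np

def success_rate_and_aci(gbest_history, target, max_iter=1000):
    """gbest_history: array of shape (R, T) with the best objective value per
    iteration for each turn. Returns (success rate, average convergence iteration
    over the successful turns)."""
    hist = np.asarray(gbest_history, dtype=float)[:, :max_iter]
    hit = hist <= target
    success = hit.any(axis=1)
    sr = float(success.mean())
    if not success.any():
        return sr, float("nan")
    first_hit = hit[success].argmax(axis=1) + 1   # 1-based iteration of first success
    return sr, float(first_hit.mean())
```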

From Table 5, it can be seen that the success rates of the five algorithms are affected by the target accuracies. The success rate of the IPSO algorithm reaches 100% for each test function and is clearly better than those of the BPSO, EPSO, CPSO, and ACL-PSO. The average convergence iteration of EPSO is smaller than that of the CPSO algorithm, and the ACI of IPSO is smaller than that of the EPSO algorithm. In terms of the ability to find optimal solutions, the ACL-PSO algorithm is better than the BPSO, EPSO, and CPSO algorithms, and the IPSO algorithm is better than the ACL-PSO algorithm.

5. Conclusions

The particle swarm optimization algorithm is a global stochastic optimization tool with the ability to search for the global optimum, but it is easily trapped in local optima. In this paper, in order to overcome this shortcoming, we propose an improved particle swarm optimization algorithm (IPSO) based on two forms of exponential inertia weight and two kinds of centroids. By comparing the optimization ability of the IPSO algorithm with the BPSO, EPSO, CPSO, and ACL-PSO algorithms, the experimental results on the four benchmark functions show that the proposed IPSO algorithm is more efficient and outperforms the other PSO algorithms investigated in this paper in terms of accuracy. The inertia weight is one of the important parameters in the PSO algorithm. How can (5) respect a prescribed constraint on the inertia weight $w$? Moreover, can $a$ and $b$ be chosen adaptively throughout a single evolution to guarantee a suitable trade-off between the exploration and exploitation phases? These are good directions for our future research.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported by the Natural Science Foundation of Jiangsu Province of China under Grants BK20140857 and BK20141420, and by the Excellent Young Talents Fund of Anhui Province of China (Grant 2012SQRL154).