Abstract

Particle swarm optimization (PSO) is a stochastic optimization method that is sensitive to its parameter settings. This paper presents a modification of the comprehensive learning particle swarm optimizer (CLPSO), which is one of the best performing PSO algorithms. The proposed method introduces a self-adaptive mechanism that dynamically changes the values of key parameters, including the inertia weight and the acceleration coefficient, based on the evolutionary information of individual particles and of the swarm during the search. Numerical experiments demonstrate that our approach with adaptive parameters can considerably improve performance in solving global optimization problems.

1. Introduction

The complexity of many real-world problems has made exact solution methods impractical within a reasonable amount of time, which gives rise to various types of nonexact metaheuristic approaches [1–3]. In particular, swarm intelligence methods, which simulate a population of simple individuals evolving their solutions by interacting with one another and with the environment, have shown promising performance on many difficult problems and have become a very active research area in recent years [4–11]. Among these methods, particle swarm optimization (PSO), initially proposed by Kennedy and Eberhart [4], is a population-based global optimization technique whose algorithmic mechanisms are inspired by the social behavior of bird flocking. The method enables a number of individual solutions, called particles, to move through the solution space and towards the most promising area for optimal solution(s) by stochastic search. Consider a $D$-dimensional optimization problem as follows:
$$\min f(X), \quad X=(x^1,x^2,\ldots,x^D), \quad x^d\in[l^d,u^d],\ d=1,2,\ldots,D. \tag{1.1}$$

In the $D$-dimensional search space, each particle $i$ of the swarm is associated with a position vector $X_i=(x_i^1,x_i^2,\ldots,x_i^D)$ and a velocity vector $V_i=(v_i^1,v_i^2,\ldots,v_i^D)$, which are iteratively adjusted by learning from a personal best $pbest_i$ found by the particle itself and the current global best $gbest$ found by the whole swarm:
$$v_i^d \leftarrow v_i^d + c_1 r_1^d\,(pbest_i^d - x_i^d) + c_2 r_2^d\,(gbest^d - x_i^d), \tag{1.2}$$
$$x_i^d \leftarrow x_i^d + v_i^d, \tag{1.3}$$
where $c_1$ and $c_2$ are two acceleration constants reflecting the weighting of "cognitive" and "social" learning, respectively, and $r_1^d$ and $r_2^d$ are two distinct random numbers in $[0,1]$. It is recommended that $c_1=c_2=2$, since this on average makes the weights of the cognitive and social parts both equal to 1.
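As an illustration, the following is a minimal sketch (not the authors' code) of the basic update in (1.2)-(1.3) for a single particle; the variable names and the NumPy-based layout are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def basic_pso_step(x, v, pbest, gbest, c1=2.0, c2=2.0):
    """One velocity/position update of basic PSO, eqs. (1.2)-(1.3).

    x, v, pbest, gbest are 1-D NumPy arrays of length D.
    Returns the updated (x, v).
    """
    D = x.shape[0]
    r1 = rng.random(D)   # cognitive random weights, one per dimension
    r2 = rng.random(D)   # social random weights, one per dimension
    v = v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    return x, v
```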

To achieve a better balance between exploration (global search) and exploitation (local search), Shi and Eberhart [12] introduced a parameter named the inertia weight $w$ to control the velocity, which leads to what is currently the most widely used form of the velocity update equation in PSO algorithms:
$$v_i^d \leftarrow w\,v_i^d + c_1 r_1^d\,(pbest_i^d - x_i^d) + c_2 r_2^d\,(gbest^d - x_i^d). \tag{1.4}$$

Empirical studies have shown that a large inertia weight facilitates exploration, a small one facilitates exploitation, and a linearly decreasing inertia weight can be effective in improving the algorithm performance:
$$w(t) = w_{\max} - (w_{\max}-w_{\min})\,\frac{t}{T}, \tag{1.5}$$
where $t$ is the current iteration number, $T$ is the maximum number of allowable iterations, and $w_{\max}$ and $w_{\min}$ are the initial value and the final value of the inertia weight, respectively. It is suggested that $w_{\max}$ can be set to around 1.2 and $w_{\min}$ to around 0.9, which can result in good algorithm performance and remove the need for velocity limiting.
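A one-line helper for the linearly decreasing schedule of (1.5); the default bounds follow the values suggested above, and the iteration counter is assumed to run from 0 to T.

```python
def linear_inertia_weight(t, T, w_max=1.2, w_min=0.9):
    """Linearly decreasing inertia weight, eq. (1.5)."""
    return w_max - (w_max - w_min) * t / T
```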

PSO is conceptually simple and easy to implement and has been proven to be effective in a wide range of optimization problems [13–20]. Furthermore, it can be easily parallelized by concurrently processing multiple particles while sharing the social information [21, 22]. Kwok et al. [23] present an empirical study on the effect of randomness on the control coefficients of PSO, and the results show that selective and uniformly distributed random coefficients perform better on complicated functions.

In recent years, PSO has attracted a high level of interest, and a number of PSO variants (e.g., [24–32]) have been proposed to accelerate convergence and avoid local optima. In particular, Liang et al. developed the comprehensive learning particle swarm optimizer (CLPSO) [26], which uses all other particles' historical best information (instead of $pbest_i$ and $gbest$) to update a particle's velocity:
$$v_i^d \leftarrow w\,v_i^d + c\,r_i^d\,\bigl(pbest_{f_i(d)}^d - x_i^d\bigr), \tag{1.6}$$
where $pbest_{f_i(d)}^d$ can be the $d$th dimension of any particle's personal best (including the particle's own $pbest_i$), and the exemplar particle $f_i(d)$ is selected based on a learning probability $Pc_i$. The authors suggest a tournament selection procedure that randomly chooses two particles and then selects the one with the better fitness as the exemplar to learn from for that dimension. Note that CLPSO has only one acceleration coefficient $c$, which is normally set to 1.494, and it limits the inertia weight to the range [0.4, 0.9].
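The sketch below illustrates the comprehensive learning update of (1.6) for one particle; the array layout, function names, and the use of NumPy are our own assumptions, not the CLPSO reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def clpso_step(i, X, V, pbest, pbest_fit, Pc, w=0.7, c=1.494):
    """Comprehensive-learning velocity/position update for particle i, eq. (1.6).

    X, V, pbest : (N, D) arrays of positions, velocities, personal bests.
    pbest_fit   : (N,) array of personal-best fitness values (minimization).
    Pc          : learning probability of particle i.
    """
    N, D = X.shape
    exemplar = pbest[i].copy()                 # default: learn from own pbest
    for d in range(D):
        if rng.random() < Pc:                  # learn from another particle on this dimension
            a, b = rng.choice([j for j in range(N) if j != i], size=2, replace=False)
            winner = a if pbest_fit[a] < pbest_fit[b] else b   # tournament of two
            exemplar[d] = pbest[winner, d]
    r = rng.random(D)
    V[i] = w * V[i] + c * r * (exemplar - X[i])
    X[i] = X[i] + V[i]
```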

According to empirical studies [29, 30, 33], CLPSO has been shown to be one of the best performing PSO algorithms, especially for complex multimodal function optimization. In [34] a self-adaptation technique is introduced to adaptively adjust the learning probability, and historical information is used in the velocity update equation, which effectively improves the performance of CLPSO on unimodal problems.

Wu et al. [35] adapt the CLPSO algorithm by improving its search behavior to optimize the continuous externality for antenna design. In [36] Li and Tan present a hybrid strategy that combines CLPSO with the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method and defines a local diversity index to indicate whether the swarm has entered an optimality region with a high probability. They apply the method to identify multiple local optima of the generalization bounds of support vector machine parameters and obtain satisfactory results. However, to the best of our knowledge, modifications of CLPSO based on adaptive inertia weight and acceleration coefficient have not been reported.

In this paper we propose an improved CLPSO algorithm, named CLPSO-AP, which introduces a new adaptive parameter strategy. The algorithm evaluates the evolutionary states of individual particles and of the whole swarm, based on which the values of the inertia weight and the acceleration coefficient are dynamically adjusted to make the search more effective. Numerical experiments on test functions show that our algorithm can significantly improve the performance of CLPSO.

In the rest of the paper, Section 2 presents our PSO method with adaptive parameters, Section 3 presents the computational experiments, and Section 4 concludes with discussion.

2. The CLPSO-AP Algorithm

2.1. Adaptive Inertia Weight and Acceleration Coefficient

To provide an adaptive parameter strategy, we first need to determine the situation of each particle at each iteration. In this paper, two measures are used for this purpose. The first one considers whether or not particle $i$ improves its personal best solution at the $t$th iteration (in this paper we assume, without loss of generality, that the problem is to minimize the objective function $f$):
$$s_i(t)=\begin{cases}1, & f\bigl(pbest_i(t)\bigr) < f\bigl(pbest_i(t-1)\bigr),\\ 0, & \text{otherwise}.\end{cases} \tag{2.1}$$

The second measure is the particle's "rate of growth" $g_i(t)$ from the $(t-1)$th iteration to the $t$th iteration (equation (2.2)), which is defined in terms of the Euclidean distance $d\bigl(X_i(t-1),X_i(t)\bigr)$ between the particle's consecutive positions $X_i(t-1)$ and $X_i(t)$:
$$d\bigl(X_i(t-1),X_i(t)\bigr)=\sqrt{\sum_{k=1}^{D}\bigl(x_i^k(t-1)-x_i^k(t)\bigr)^2}. \tag{2.3}$$

Based on (2.1), we can calculate the percentage $P_s(t)$ of particles that successfully improve their personal best solutions:
$$P_s(t)=\frac{1}{N}\sum_{i=1}^{N} s_i(t), \tag{2.4}$$
where $N$ is the number of particles in the swarm. This measure has been utilized in [33] and in some other evolutionary algorithms such as [37]. Generally, in PSO a high $P_s(t)$ indicates a high probability that the particles have converged to a nonoptimum point or are moving slowly toward the optimum, while a low $P_s(t)$ indicates that the particles are oscillating around the optimum without much improvement. Considering the role of the inertia weight in the convergence behavior of PSO, in the former case the swarm should have a large inertia weight and in the latter case a small one. Here we use a nonlinear function to map the values of $P_s(t)$ to the inertia weight $w(t)$ (equation (2.5)).

It is easy to derive that $w(t)$ ranges from about 0.36 to 1. The nonlinear and nonmonotonic change of the inertia weight can improve the adaptivity and diversity of the swarm, because the search process of PSO is highly complicated on most problems.
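The following sketch computes the success percentage of (2.4) and maps it to an inertia weight. Since the paper's mapping (2.5) is not reproduced above, the exponential form used here is only an assumption that is consistent with the stated range of roughly 0.36 to 1; substitute the actual function of (2.5) where indicated.

```python
import numpy as np

def success_percentage(pbest_fit, prev_pbest_fit):
    """P_s(t) of eq. (2.4): fraction of particles whose personal best improved."""
    s = pbest_fit < prev_pbest_fit            # s_i(t) of eq. (2.1), minimization assumed
    return float(np.mean(s))

def inertia_weight(Ps):
    """Map P_s(t) to w(t).

    ASSUMPTION: w = exp(P_s - 1) is used here only as an illustrative increasing
    mapping with range [1/e, 1] ~ [0.37, 1]; replace it with the paper's eq. (2.5).
    """
    return float(np.exp(Ps - 1.0))
```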

Most PSO algorithms use constant acceleration coefficients in (1.4). It is worth noting, however, that Ratnaweera et al. [38] introduce a time-varying acceleration coefficient strategy in which the cognitive coefficient is linearly decreased and the social coefficient is linearly increased. The basic CLPSO also uses a constant acceleration coefficient $c$ in (1.6), which reflects the weighting of the stochastic acceleration term that pulls each particle $i$ towards the personal best position of particle $f_i(d)$ at each dimension $d$. Considering the measure defined in (2.2), a large value of $g_i(t)$ indicates that at iteration $t$ particle $i$ falls rapidly in the search space and gains a considerable improvement on the fitness function; thus it is reasonable to anticipate that the particle may also gain much improvement at least at the next iteration. On the contrary, a small value of $g_i(t)$ indicates that particle $i$ progresses slowly and thus needs a large acceleration towards the exemplar.

From the previous analysis, we suggest that the acceleration coefficient $c_i(t)$ of each particle should be adapted according to its rate of growth $g_i(t)$ relative to $G(t)$, the square root of the sum of squares (SRSS) of the rates of growth of all the particles in the swarm:
$$G(t)=\sqrt{\sum_{i=1}^{N} g_i(t)^2}. \tag{2.6}$$

Based on our empirical tests, we use a second mapping function to obtain the acceleration coefficient $c_i(t)$ of each particle from $g_i(t)$ and $G(t)$ (equation (2.7)).
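A sketch of the per-particle adaptation described above. Both the rate-of-growth formula and the coefficient mapping below are our own assumptions for illustration (the paper's equations (2.2) and (2.7) are not reproduced in this excerpt): the rate of growth is taken as fitness improvement per unit Euclidean distance moved, the coefficient is taken to decrease as a particle's relative rate of growth increases, and the bounds c_min and c_max are hypothetical parameters.

```python
import numpy as np

def rate_of_growth(x_prev, x_curr, f_prev, f_curr):
    """g_i(t): ASSUMED here as fitness improvement per unit distance moved,
    using the Euclidean distance of eq. (2.3); replace with the paper's eq. (2.2)."""
    dist = np.linalg.norm(x_prev - x_curr)
    return (f_prev - f_curr) / dist if dist > 0 else 0.0

def acceleration_coefficients(g, c_min=1.0, c_max=2.0):
    """c_i(t) for all particles from g_i(t) and the SRSS G(t) of eq. (2.6).

    ASSUMPTION: particles with a small relative rate of growth g_i/G receive a
    larger coefficient; the actual mapping is the paper's eq. (2.7)."""
    g = np.asarray(g, dtype=float)
    G = np.sqrt(np.sum(g ** 2))               # SRSS of the rates of growth, eq. (2.6)
    rel = g / G if G > 0 else np.zeros_like(g)
    return c_max - (c_max - c_min) * np.clip(rel, 0.0, 1.0)
```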

2.2. The Proposed Algorithm

Using the adaptive parameter strategy described in the previous section, the velocity update equation of the CLPSO-AP algorithm becomes
$$v_i^d \leftarrow w(t)\,v_i^d + c_i(t)\,r_i^d\,\bigl(pbest_{f_i(d)}^d - x_i^d\bigr), \tag{2.8}$$
where $w(t)$ and $c_i(t)$ are calculated based on (2.5) and (2.7), respectively.

Now we present the flow of the CLPSO-AP algorithm as follows.

(1) Generate a swarm of $N$ particles with random positions and velocities in the search range. Let $t=0$ and initialize the inertia weight and the acceleration coefficient of each particle.

(2) Generate a learning probability $Pc_i$ for each particle based on the equation suggested in [10].

(3) Evaluate the fitness of each particle and update its $pbest_i$, and then select the particle with the best fitness value as $gbest$.

(4) For each particle $i$ in the swarm do the following.
 (4.1) For $d=1$ to $D$ do the following.
  (4.1.1) Generate a random number $r$ in the range $[0,1]$.
  (4.1.2) If $r \ge Pc_i$, let the exemplar for dimension $d$ be the particle's own $pbest_i$.
  (4.1.3) Else, randomly choose two other distinct particles and select the one with the better fitness value as the exemplar for dimension $d$.
  (4.1.4) Update the $d$th dimension of the particle's velocity according to (2.8).
 (4.2) Update the particle's position according to (1.3).
 (4.3) Calculate $s_i(t)$ and $g_i(t)$ for the particle according to (2.1) and (2.2), respectively.

(5) Calculate $P_s(t)$ and $w(t)$ of the swarm based on (2.4) and (2.5).

(6) Calculate $G(t)$ based on (2.6), and then calculate $c_i(t)$ for each particle based on (2.7).

(7) Let $t=t+1$. If $t \ge T$ or any other termination condition is satisfied, the algorithm stops and returns $gbest$.

(8) Go to step (3).
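To make the flow concrete, the sketch below implements one CLPSO-AP-style iteration under several stated assumptions: the learning probabilities use the empirical formula from the original CLPSO paper (which we assume is the equation referred to in step (2)); the inertia-weight, rate-of-growth, and acceleration-coefficient formulas are the same illustrative stand-ins used in the earlier snippets rather than the paper's equations (2.2), (2.5), and (2.7); and all variable names are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

def learning_probabilities(N):
    """Standard CLPSO setting for Pc_i (assumed to be the formula of step (2))."""
    i = np.arange(N)
    return 0.05 + 0.45 * (np.exp(10 * i / (N - 1)) - 1) / (np.exp(10) - 1)

def clpso_ap_iteration(f, X, V, pbest, pbest_fit, Pc, w, c):
    """One pass over steps (3)-(6); f is the objective function (minimized).

    X, V, pbest : (N, D) arrays; pbest_fit : (N,); c : (N,) per-particle coefficients.
    Returns updated (w, c); gbest can be read off as pbest[np.argmin(pbest_fit)].
    """
    N, D = X.shape
    prev_X = X.copy()
    prev_fit = np.array([f(x) for x in X])
    s = np.zeros(N)                      # success indicators s_i(t), eq. (2.1)
    g = np.zeros(N)                      # rates of growth g_i(t) (assumed form)

    for i in range(N):
        # steps (4.1.1)-(4.1.3): build the exemplar dimension by dimension
        exemplar = pbest[i].copy()
        for d in range(D):
            if rng.random() < Pc[i]:
                a, b = rng.choice([j for j in range(N) if j != i], 2, replace=False)
                exemplar[d] = pbest[a, d] if pbest_fit[a] < pbest_fit[b] else pbest[b, d]
        # steps (4.1.4) and (4.2): velocity and position updates, eqs. (2.8) and (1.3)
        r = rng.random(D)
        V[i] = w * V[i] + c[i] * r * (exemplar - X[i])
        X[i] = X[i] + V[i]
        # step (4.3): evolutionary information of particle i
        fit = f(X[i])
        if fit < pbest_fit[i]:
            s[i], pbest[i], pbest_fit[i] = 1.0, X[i].copy(), fit
        dist = np.linalg.norm(X[i] - prev_X[i])
        g[i] = (prev_fit[i] - fit) / dist if dist > 0 else 0.0   # assumed eq. (2.2)

    # steps (5)-(6): adapt w and c (illustrative mappings, not eqs. (2.5)/(2.7))
    Ps = s.mean()
    w = float(np.exp(Ps - 1.0))
    G = np.sqrt(np.sum(g ** 2))
    rel = g / G if G > 0 else np.zeros(N)
    c = 2.0 - 1.0 * np.clip(rel, 0.0, 1.0)
    return w, c
```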

In step (7), other termination conditions can be that a required function value has been obtained or that all the particles have converged to a stable point.

3. Numerical Experiments

In order to evaluate the performance of the proposed algorithm, we choose a set of well-known test functions as benchmark problems, whose definitions are listed in the Appendix. The search ranges, optimal points with corresponding fitness values, and required accuracies are shown in Table 1.

We comparatively execute the basic PSO algorithm, the CLPSO algorithm, and our CLPSO-AP algorithm on the test functions with 10 and 30 dimensions, where each experiment is run 40 times. The parameter settings of the algorithms are given in Table 2.

We use the mean best fitness value and the success rate (with respect to the required accuracy shown in Table 1) as two criteria for measuring the performance of the algorithms. The experimental results (averaged over 40 runs) are presented in Tables 3, 4, 5, and 6, respectively.

As we can see from the experimental results, for all 10D and 30D problems, CLPSO-AP performs better than CLPSO in terms of both mean best values and success rates. Among the seven test functions, the Ackley function is the only one on which CLPSO performs no better than the basic PSO. However, CLPSO-AP performs better than the basic PSO on both the 10D and 30D Ackley functions, and thus CLPSO-AP also outperforms the basic PSO on all benchmark problems. The 10D Rosenbrock function also deserves attention: CLPSO and most other PSO variants hardly ever succeed on it [26, 39, 40], while our CLPSO-AP algorithm achieves a 10% success rate. Except for the 30D Rosenbrock function, CLPSO-AP successfully obtains the global optimum for all the other functions. In summary, our algorithm performs very well and outperforms the other two algorithms on all of the test problems.

4. Conclusion

CLPSO has been shown to be one of the best performing PSO algorithms. This paper proposes a new improved CLPSO algorithm, named CLPSO-AP, which uses the evolutionary information of individual particles to dynamically adapt the inertia weight and acceleration coefficient at each iteration. Experimental results on seven test functions show that our algorithm can significantly improve the performance of CLPSO. Ongoing work includes applying our algorithm to intelligent feature selection and lighting control in robotics [41–43] and extending the adaptive strategy to other PSO variants, including those for fuzzy and/or multiobjective problems [44, 45].

Appendix

Definitions of the Test Functions

(1) Sphere function:
$$f_1(X)=\sum_{d=1}^{D}(x^d)^2.$$
(2) Rosenbrock's function:
$$f_2(X)=\sum_{d=1}^{D-1}\Bigl[100\bigl(x^{d+1}-(x^d)^2\bigr)^2+(x^d-1)^2\Bigr].$$
(3) Schwefel's function:
$$f_3(X)=418.9829\,D-\sum_{d=1}^{D}x^d\sin\bigl(\sqrt{|x^d|}\bigr).$$
(4) Rastrigin's function:
$$f_4(X)=\sum_{d=1}^{D}\Bigl[(x^d)^2-10\cos\bigl(2\pi x^d\bigr)+10\Bigr].$$
(5) Griewank's function:
$$f_5(X)=\frac{1}{4000}\sum_{d=1}^{D}(x^d)^2-\prod_{d=1}^{D}\cos\!\left(\frac{x^d}{\sqrt{d}}\right)+1.$$
(6) Ackley's function:
$$f_6(X)=-20\exp\!\left(-0.2\sqrt{\frac{1}{D}\sum_{d=1}^{D}(x^d)^2}\right)-\exp\!\left(\frac{1}{D}\sum_{d=1}^{D}\cos\bigl(2\pi x^d\bigr)\right)+20+e.$$
(7) Weierstrass function:
$$f_7(X)=\sum_{d=1}^{D}\sum_{k=0}^{k_{\max}}\Bigl[a^{k}\cos\bigl(2\pi b^{k}(x^d+0.5)\bigr)\Bigr]-D\sum_{k=0}^{k_{\max}}a^{k}\cos\bigl(\pi b^{k}\bigr),\quad a=0.5,\ b=3,\ k_{\max}=20.$$
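For reference, here is a small Python sketch of two of the benchmark functions in their standard forms as given above; the Ackley constants follow common usage.

```python
import numpy as np

def sphere(x):
    """Sphere function: sum of squared coordinates; global minimum 0 at the origin."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2))

def ackley(x):
    """Ackley's function with the usual constants (20, 0.2, 2*pi); minimum 0 at the origin."""
    x = np.asarray(x, dtype=float)
    D = x.size
    return float(-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / D))
                 - np.exp(np.sum(np.cos(2 * np.pi * x)) / D) + 20.0 + np.e)
```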

Acknowledgment

This work was supported in part by grants from the National Natural Science Foundation of China (Grant nos. 61020106009, 61105073, 61103140, and 61173096).