Computational Intelligence and Neuroscience
Volume 2013 (2013), Article ID 384125, 7 pages
http://dx.doi.org/10.1155/2013/384125
Research Article

Convergence Analysis of Particle Swarm Optimizer and Its Improved Algorithm Based on Velocity Differential Evolution

School of Electrical and Information Engineering, Guangxi University of Science and Technology, Liuzhou 545006, China

Received 22 April 2013; Revised 28 July 2013; Accepted 4 August 2013

Academic Editor: Yuanqing Li

Copyright © 2013 Hongtao Ye et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper presents an analysis of the relationship between particle velocity and convergence in particle swarm optimization (PSO). Premature convergence occurs because particle velocities decrease across the search space, leading to a total implosion and, ultimately, fitness stagnation of the swarm. An improved algorithm is proposed that introduces a velocity differential evolution (DE) strategy into the hierarchical particle swarm optimization (H-PSO) to improve its performance. DE is employed to regulate the particle velocity, rather than the traditional particle position, whenever the optimal result has not improved after several iterations. Experiments on benchmark functions demonstrate the effectiveness of the proposed method.

1. Introduction

Algorithms to tackle optimization problems include not only classical techniques such as dynamic programming, branch-and-bound, and gradient-based methods, but also more recent metaheuristics [1]. Among the existing metaheuristic algorithms, particle swarm optimization (PSO) is a population-based optimization technique developed by Kennedy and Eberhart in 1995 [2]. PSO has given rise to a large number of variants of the standard algorithm. Some variants are designed for specific applications [3–6], while others are generalized for numerical optimization [7–10]. A hierarchical version of PSO (H-PSO) has been proposed by Janson and Middendorf [10]. In H-PSO, all particles are arranged in a tree that forms the hierarchy. A particle is influenced by its own best position and by the best position of the particle in its neighborhood. It was shown that H-PSO performs very well compared to the standard PSO on unimodal and multimodal test functions [10, 11]. H-PSO has the advantage of being conceptually very simple and requiring low computation time. However, its main disadvantage is the risk of premature search convergence, especially in complex multiple-peak search problems.

A number of algorithms combine various algorithmic components, often originating from other research areas of optimization. Such approaches are commonly referred to as hybrid metaheuristics [12]. Surveys of hybrid algorithms that combine PSO and differential evolution (DE) [13] have been presented recently [14, 15]. These PSO-DE hybrids usually employ DE to adjust the particle position, yet the convergence performance depends on the particle velocity, and limiting the velocity can help particles escape local optima traps [16, 17]. In this paper, we combine these two optimization algorithms and propose the novel hybrid algorithm H-PSO-DE. DE is employed to regulate the particle velocity, rather than the traditional particle position, whenever the optimal result has not improved after several iterations. The hybrid algorithm aims to aggregate the advantages of both algorithms to tackle optimization problems efficiently.

The remainder of this paper is organized as follows. Section 2 briefly describes the basic operations of the PSO, H-PSO, and DE algorithms. Section 3 presents an analysis of the relationship between particle velocity and convergence. Section 4 introduces the hybrid optimization method, H-PSO-DE. Section 5 presents simulations and analysis of H-PSO-DE in solving unconstrained optimization problems. Finally, conclusions are given in Section 6.

2. The PSO, H-PSO, and DE Algorithms

2.1. The PSO Algorithm

The PSO [18–20] is a stochastic population-based optimization approach. Each particle is a $D$-dimensional vector consisting of a position vector $x_i = (x_{i1}, \ldots, x_{iD})$, which represents a candidate solution of the optimization problem, a velocity vector $v_i$, and a memory vector $p_i$, which stores the best candidate solution encountered by the particle. The velocity and position of particle $i$ are updated in every dimension $d$ by
$$v_{id}(t+1) = w\,v_{id}(t) + c_1 r_1 \bigl(p_{id}(t) - x_{id}(t)\bigr) + c_2 r_2 \bigl(p_{gd}(t) - x_{id}(t)\bigr), \quad (1)$$
$$x_{id}(t+1) = x_{id}(t) + v_{id}(t+1), \quad (2)$$
where $w$ is the inertia weight, which determines how much of the previous velocity of the particle is preserved, $c_1$ and $c_2$ are positive constants, $r_1$ and $r_2$ are randomly chosen numbers uniformly distributed in the interval $[0,1]$, and $p_g$ represents the best position achieved by any member of the population.
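To make the update concrete, the following Python sketch applies (1) and (2) to a whole swarm at once; the default parameter values in the signature are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

def pso_update(x, v, p, g, w=0.6, c1=1.7, c2=1.7, rng=None):
    """One PSO step, equations (1)-(2), for a swarm stored as (N, D) arrays.

    x: positions, v: velocities, p: personal best positions, g: global best (D,).
    """
    rng = rng or np.random.default_rng()
    r1 = rng.random(x.shape)   # r1, r2 ~ U[0, 1], drawn per particle and dimension
    r2 = rng.random(x.shape)
    v_new = w * v + c1 * r1 * (p - x) + c2 * r2 * (g - x)   # equation (1)
    x_new = x + v_new                                       # equation (2)
    return x_new, v_new
```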

2.2. The H-PSO Algorithm

In H-PSO [21], all particles are arranged in a hierarchy. The hierarchy is defined by the height h, the branching degree bd, and the total number of nodes tnn of the corresponding tree.

In H-PSO, each iteration starts with the evaluation of the objective function of each particle at its current position. Then, the new velocity vectors and the new positions of the particles are determined. This means that for particle $i$, the value of $p_g$ in (1) is replaced by $p_j$, with $j$ being the particle in the parent node of the node of particle $i$; H-PSO uses the global best $p_g$ only when particle $i$ is in the root. If the function value of a particle is better than the function value at its personal best position so far, then the new position is stored in $p_i$. For each particle $j$ in an inner node of the tree, its own best solution is compared to the best solution found by the particles in the child nodes. If the best of these child particles, $i$, is better than particle $j$, then particles $i$ and $j$ swap their places within the hierarchy.
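The swap rule can be sketched as follows, assuming a complete tree stored as an array in which node $k$ has its children at positions $k \cdot bd + 1, \ldots, k \cdot bd + bd$; the names `particle_at` and `best_fit` are illustrative, not from the paper.

```python
def hpso_swap(particle_at, best_fit, bd):
    """One top-down swap pass of H-PSO (minimization): the best child of each
    node moves up one level if its personal best beats its parent's."""
    n = len(particle_at)                       # nodes in the tree, root at index 0
    for node in range(n):
        kids = range(node * bd + 1, min(node * bd + 1 + bd, n))
        if not kids:                           # leaf node: nothing to compare
            continue
        best_kid = min(kids, key=lambda k: best_fit[particle_at[k]])
        if best_fit[particle_at[best_kid]] < best_fit[particle_at[node]]:
            particle_at[node], particle_at[best_kid] = \
                particle_at[best_kid], particle_at[node]
    return particle_at
```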

2.3. The DE Algorithm

The DE [11, 13, 22] is a stochastic parallel direct search method. More specifically, DE's basic strategy can be summarized by the four steps below (a code sketch covering one full generation follows the Selection step).

Initialization. DE begins with a randomly initiated population of $N$ $D$-dimensional parameter vectors $x_i(G)$, $i = 1, \ldots, N$, as a population for each generation $G$. The initial value of the $j$th parameter of the $i$th vector is
$$x_{j,i}(0) = x_{j,\min} + \operatorname{rand}_j(0,1) \cdot \bigl(x_{j,\max} - x_{j,\min}\bigr), \quad (3)$$
where $x_{j,\min}$ and $x_{j,\max}$ indicate the lower and upper bounds of the $j$th parameter, respectively, and $\operatorname{rand}_j(0,1)$ is a uniformly distributed random number lying between 0 and 1.

Mutation. DE mutates and recombines the population to produce a population of trial vectors. Specifically, for each individual $x_i(G)$, a mutant vector is generated according to
$$m_i(G+1) = x_{r_1}(G) + F \cdot \bigl(x_{r_2}(G) - x_{r_3}(G)\bigr), \quad (4)$$
where $F$, commonly known as the scale factor, is a positive real number. The three other individuals $x_{r_1}$, $x_{r_2}$, and $x_{r_3}$ are sampled randomly from the current population such that $r_1 \neq r_2 \neq r_3 \neq i$.

Crossover. DE crosses each vector with its mutant vector:
$$u_{j,i}(G+1) = \begin{cases} m_{j,i}(G+1), & \text{if } \operatorname{rand}_j(0,1) \le CR, \\ x_{j,i}(G), & \text{otherwise}, \end{cases} \quad (5)$$
where $CR \in [0,1]$ is called the crossover rate.

Selection. To decide whether or not it should become a member of generation $G+1$, the trial vector $u_i(G+1)$ is compared to the target vector $x_i(G)$ using the greedy criterion. The selection operation is described as
$$x_i(G+1) = \begin{cases} u_i(G+1), & \text{if } f\bigl(u_i(G+1)\bigr) \le f\bigl(x_i(G)\bigr), \\ x_i(G), & \text{otherwise}, \end{cases} \quad (6)$$
where $f$ is the objective function to be minimized.
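Under these definitions, one generation of the common DE/rand/1/bin scheme can be sketched as follows; the forced-crossover index is the usual safeguard that at least one parameter comes from the mutant, and the default F and CR values are illustrative assumptions.

```python
import numpy as np

def de_generation(pop, fitness, f_obj, F=0.5, CR=0.9, rng=None):
    """One DE/rand/1/bin generation over an (N, D) population, steps (4)-(6)."""
    rng = rng or np.random.default_rng()
    N, D = pop.shape
    new_pop, new_fit = pop.copy(), fitness.copy()
    for i in range(N):
        # Three distinct individuals, all different from i (mutation, eq. (4)).
        r1, r2, r3 = rng.choice([j for j in range(N) if j != i], 3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])
        cross = rng.random(D) <= CR          # binomial crossover mask, eq. (5)
        cross[rng.integers(D)] = True        # guarantee at least one mutant gene
        trial = np.where(cross, mutant, pop[i])
        f_trial = f_obj(trial)
        if f_trial <= fitness[i]:            # greedy selection, eq. (6)
            new_pop[i], new_fit[i] = trial, f_trial
    return new_pop, new_fit
```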

3. Relationship of Particle Velocity and Convergence

This section presents an analysis of the relationship between particle velocity and convergence.

Substituting (1) into (2) results in
$$x_{id}(t+1) = x_{id}(t) + w\,v_{id}(t) + c_1 r_1\bigl(p_{id}(t) - x_{id}(t)\bigr) + c_2 r_2\bigl(p_{gd}(t) - x_{id}(t)\bigr). \quad (7)$$
From (2), it is known that
$$v_{id}(t) = x_{id}(t) - x_{id}(t-1). \quad (8)$$
Substituting (8) into (7) results in
$$x_{id}(t+1) = (1 + w - \phi)\,x_{id}(t) - w\,x_{id}(t-1) + c_1 r_1\,p_{id}(t) + c_2 r_2\,p_{gd}(t), \quad (9)$$
where $\phi = c_1 r_1 + c_2 r_2$.

This recurrence relation can be written as a matrix-vector product, so that
$$\begin{bmatrix} x_{id}(t+1) \\ x_{id}(t) \\ 1 \end{bmatrix} = \begin{bmatrix} 1 + w - \phi & -w & c_1 r_1 p_{id} + c_2 r_2 p_{gd} \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_{id}(t) \\ x_{id}(t-1) \\ 1 \end{bmatrix}. \quad (10)$$

The characteristic polynomial of the matrix in (10) is
$$(\lambda - 1)\bigl(\lambda^2 - (1 + w - \phi)\lambda + w\bigr) = 0, \quad (11)$$
which has a trivial root of $\lambda = 1$ and two other solutions
$$\lambda_{1,2} = \frac{1 + w - \phi \pm \gamma}{2},$$
where $\gamma = \sqrt{(1 + w - \phi)^2 - 4w}$.

Note that $\lambda_1$ and $\lambda_2$ are both eigenvalues of the matrix in (10). The explicit form of the recurrence relation (9) is then given by
$$x_{id}(t) = k_1 + k_2 \lambda_1^{t} + k_3 \lambda_2^{t}, \quad (12)$$
where $k_1$, $k_2$, and $k_3$ are constants determined by the initial conditions of the system.

Substituting (12) into (8) results in
$$v_{id}(t) = x_{id}(t) - x_{id}(t-1) = k_2' \lambda_1^{t} + k_3' \lambda_2^{t}, \quad (13)$$
where
$$k_2' = k_2\,\frac{\lambda_1 - 1}{\lambda_1}, \qquad k_3' = k_3\,\frac{\lambda_2 - 1}{\lambda_2}. \quad (14)$$

Consider
$$\lim_{t \to \infty} v_{id}(t) = \lim_{t \to \infty} \bigl(k_2' \lambda_1^{t} + k_3' \lambda_2^{t}\bigr) = 0 \quad \text{when } \max\bigl(|\lambda_1|, |\lambda_2|\bigr) < 1. \quad (15)$$

Equation (15) implies that if the PSO algorithm is convergent, that is, if $\max(|\lambda_1|, |\lambda_2|) < 1$, the velocity of the particles will decrease to zero, or stay unchanged in the boundary case, until the end of the iterations.
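The following short numerical check illustrates this conclusion; the parameter values $w = 0.6$, $c_1 = c_2 = 1.7$ are illustrative assumptions, not necessarily the paper's settings. When both nontrivial roots of (11) lie inside the unit circle, the velocity $v_t = x_t - x_{t-1}$ of the deterministic recurrence (9) decays to zero.

```python
import numpy as np

w, c1, c2 = 0.6, 1.7, 1.7        # illustrative parameters, assumed
phi = (c1 + c2) / 2.0            # E[c1*r1 + c2*r2] with r1, r2 ~ U[0, 1]

# Roots of lambda^2 - (1 + w - phi) * lambda + w = 0, the nontrivial factor of (11).
lam = np.roots([1.0, -(1.0 + w - phi), w])
print("lambda_1, lambda_2:", lam, "| max |lambda|:", np.abs(lam).max())

# Iterate the recurrence (9) for a scalar particle with a fixed attractor
# p = g = 0; the velocity from (8) should shrink geometrically.
x_prev, x = 0.0, 1.0
for _ in range(100):
    x_prev, x = x, (1.0 + w - phi) * x - w * x_prev
print("|v| after 100 iterations:", abs(x - x_prev))
```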

4. The Proposed H-PSO-DE Algorithm

The main idea of the hybrid H-PSO-DE algorithm is to employ DE to regulate the particle velocity, rather than the traditional particle position, in case the optimal result has not improved after several iterations. If the swarm approaches equilibrium, the evolution process stagnates as time goes on. To prevent this trend, once the number of stagnating steps of the evolution process exceeds a threshold value $T_s$, the particle velocity undergoes a mutation operation. The velocity and position of the particles are then updated as follows.

If the number of stagnating steps exceeds $T_s$, then
$$v_{id}(t+1) = v_{id}(t) + F \cdot \bigl(v_{r_1 d}(t) - v_{r_2 d}(t)\bigr), \qquad x_{id}(t+1) = x_{id}(t) + v_{id}(t+1), \quad (16)$$
where $F$ is a random number in the interval $[0, 1]$, and $r_1$ and $r_2$ are sampled randomly from $\{1, 2, \ldots, N\}$ such that $r_1 \neq r_2 \neq i$.
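A hedged sketch of this stagnation-triggered velocity mutation follows; the trigger and parameter names (`T_s`, the stagnation counter) follow the description above, while the exact operator is only partially recoverable from the source, so this should be read as an assumption-laden illustration rather than the paper's verbatim update.

```python
import numpy as np

def velocity_de_step(v, stagnation_steps, T_s, rng=None):
    """DE-style velocity mutation of H-PSO-DE: applied only when the best
    result has stagnated for more than T_s iterations (a sketch)."""
    rng = rng or np.random.default_rng()
    if stagnation_steps <= T_s:
        return v                               # no stagnation: velocities unchanged
    N = v.shape[0]
    v_new = v.copy()
    for i in range(N):
        r1, r2 = rng.choice([j for j in range(N) if j != i], 2, replace=False)
        F = rng.random()                       # random scale factor in [0, 1]
        v_new[i] = v[i] + F * (v[r1] - v[r2])  # difference-vector perturbation
    return v_new
```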

The procedure for H-PSO-DE algorithm is presented in Algorithm 1.

Algorithm 1: Procedure for the H-PSO-DE.

5. Simulations and Results

In this section, we present a simulation study to validate the proposed H-PSO-DE algorithm. A set of test functions commonly used in the field of continuous function optimization is listed in the appendix. They are nonlinear functions posing difficult unconstrained minimization problems. For illustration, the landscapes of the two-dimensional versions of the six functions are depicted in Figure 1. The first two functions (Sphere and Rosenbrock) are unimodal: they have a single local optimum, which is also the global optimum. The remaining functions are multimodal and have several local optima. Note that increasing the dimension of these scalable functions does not change their basic features.

Figure 1: An illustration for 2-dimensional landscapes of the test functions. (a) Sphere function; (b) Rosenbrock function; (c) Rastrigin function; (d) Griewank function; (e) Ackley function; and (f) Schaffer’s F6.

In our experiments, the H-PSO uses the parameter values suggested in [23] for a faster convergence rate, and the population size, maximal number of generations, and remaining parameters are held fixed across all compared algorithms. Thirty independent runs were carried out. The convergence behavior of the H-PSO is shown in Figure 2; for comparison, the H-PSO-DE is plotted in the same figure. As Figure 2 shows, the convergence performance of the H-PSO-DE is better than that of the H-PSO. H-PSO-DE is further compared with H-PSO, DE, and PSO-DE [1] in terms of the selected performance metrics: the mean, maximum, and minimum values. For DE, we use the DE/rand/1/bin strategy. As shown in Tables 1, 2, and 3, the H-PSO-DE outperforms H-PSO, DE, and PSO-DE and is quite competitive when compared with the other existing methods.

Table 1: Comparing the mean value of H-PSO-DE with respect to the other state-of-the-art algorithms.
Table 2: Comparing the maximum value of H-PSO-DE with respect to the other state-of-the-art algorithms.
Table 3: Comparing the minimum value of H-PSO-DE with respect to the other state-of-the-art algorithms.
Figure 2: Convergence graph of the H-PSO and the H-PSO-DE.

6. Conclusions

In this paper, a new method named H-PSO-DE is proposed to solve optimization problems; it improves the performance of the H-PSO by incorporating DE. In H-PSO-DE, when the evolution process stagnates for several generations, all the particles may have lost the ability to find a better solution. DE is then employed to regulate the particle velocity, which avoids wasting calculation time on a vain search and greatly improves the search efficiency of the H-PSO-DE. The H-PSO-DE was compared on test functions with H-PSO, DE, and PSO-DE, and it was shown that H-PSO-DE performs significantly better.

Appendix

Benchmark Functions

Sphere:
$$f_1(x) = \sum_{i=1}^{D} x_i^2. \quad (A.1)$$

Rosenbrock:
$$f_2(x) = \sum_{i=1}^{D-1} \Bigl[100\bigl(x_{i+1} - x_i^2\bigr)^2 + (1 - x_i)^2\Bigr]. \quad (A.2)$$

Rastrigin:
$$f_3(x) = \sum_{i=1}^{D} \bigl[x_i^2 - 10\cos(2\pi x_i) + 10\bigr]. \quad (A.3)$$

Griewank:
$$f_4(x) = \frac{1}{4000}\sum_{i=1}^{D} x_i^2 - \prod_{i=1}^{D} \cos\Bigl(\frac{x_i}{\sqrt{i}}\Bigr) + 1. \quad (A.4)$$

Ackley:
$$f_5(x) = -20\exp\Bigl(-0.2\sqrt{\tfrac{1}{D}\sum_{i=1}^{D} x_i^2}\Bigr) - \exp\Bigl(\tfrac{1}{D}\sum_{i=1}^{D}\cos(2\pi x_i)\Bigr) + 20 + e. \quad (A.5)$$

Schaffer's F6:
$$f_6(x) = 0.5 + \frac{\sin^2\bigl(\sqrt{x_1^2 + x_2^2}\bigr) - 0.5}{\bigl[1 + 0.001\,(x_1^2 + x_2^2)\bigr]^2}. \quad (A.6)$$
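For reference, the six benchmarks can be implemented compactly as below; the code assumes the standard definitions given above, with Schaffer's F6 in its usual two-dimensional form.

```python
import numpy as np

def sphere(x):
    return np.sum(x ** 2)

def rosenbrock(x):
    return np.sum(100 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2)

def rastrigin(x):
    return np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10)

def griewank(x):
    i = np.arange(1, x.size + 1)
    return np.sum(x ** 2) / 4000 - np.prod(np.cos(x / np.sqrt(i))) + 1

def ackley(x):
    d = x.size
    return (-20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / d))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / d) + 20 + np.e)

def schaffer_f6(x):            # two-dimensional form
    s = x[0] ** 2 + x[1] ** 2
    return 0.5 + (np.sin(np.sqrt(s)) ** 2 - 0.5) / (1 + 0.001 * s) ** 2

# Each function has a global minimum of 0: at the origin for all but
# Rosenbrock, whose minimum is at x = (1, ..., 1).
```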

Acknowledgments

This work was supported by the Key Project of Chinese Ministry of Education (no. 212135), the Guangxi Natural Science Foundation (no. 2012GXNSFBA053165), the project of Education Department of Guangxi (no. 201203YB131), and the Doctoral Initiating Project of Guangxi University of Science and Technology (no. 11Z09).

References

  1. C. Zhang, J. Ning, S. Lu, D. Ouyang, and T. Ding, “A novel hybrid differential evolution and particle swarm optimization algorithm for unconstrained optimization,” Operations Research Letters, vol. 37, no. 2, pp. 117–122, 2009.
  2. J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of the 1995 IEEE International Conference on Neural Networks, pp. 1942–1948, Perth, Australia, December 1995.
  3. J. Zhang, J. Wang, and C. Yue, “Small population-based particle swarm optimization for short-term hydrothermal scheduling,” IEEE Transactions on Power Systems, vol. 27, no. 1, pp. 142–152, 2012.
  4. R. Bhattacharya, T. K. Bhattacharyya, and R. Garg, “Position mutated hierarchical particle swarm optimization and its application in synthesis of unequally spaced antenna arrays,” IEEE Transactions on Antennas and Propagation, vol. 60, no. 7, pp. 3174–3181, 2012.
  5. M. A. Cavuslu, C. Karakuzu, and F. Karakaya, “Neural identification of dynamic systems on FPGA with improved PSO learning,” Applied Soft Computing, vol. 12, no. 9, pp. 2707–2718, 2012.
  6. M. Han, J. Fan, and J. Wang, “A dynamic feedforward neural network based on Gaussian particle swarm optimization and its application for predictive control,” IEEE Transactions on Neural Networks, vol. 22, no. 9, pp. 1457–1468, 2011.
  7. F. Peng, K. Tang, G. Chen, and X. Yao, “Population-based algorithm portfolios for numerical optimization,” IEEE Transactions on Evolutionary Computation, vol. 14, no. 5, pp. 782–800, 2010.
  8. X. Li and X. Yao, “Cooperatively coevolving particle swarms for large scale optimization,” IEEE Transactions on Evolutionary Computation, vol. 16, no. 2, pp. 210–224, 2012.
  9. M. Li, D. Lin, and J. Kou, “A hybrid niching PSO enhanced with recombination-replacement crowding strategy for multimodal function optimization,” Applied Soft Computing, vol. 12, no. 3, pp. 975–987, 2012.
  10. S. Janson and M. Middendorf, “A hierarchical particle swarm optimizer and its adaptive variant,” IEEE Transactions on Systems, Man, and Cybernetics B, vol. 35, no. 6, pp. 1272–1282, 2005.
  11. M. G. Epitropakis, D. K. Tasoulis, N. G. Pavlidis, V. P. Plagianakos, and M. N. Vrahatis, “Enhancing differential evolution utilizing proximity-based mutation operators,” IEEE Transactions on Evolutionary Computation, vol. 15, no. 1, pp. 99–119, 2011.
  12. C. Blum, J. Puchinger, G. R. Raidl, and A. Roli, “Hybrid metaheuristics in combinatorial optimization: a survey,” Applied Soft Computing, vol. 11, no. 6, pp. 4135–4151, 2011.
  13. K. V. Price, R. Storn, and J. Lampinen, Differential Evolution: A Practical Approach to Global Optimization, Springer, Berlin, Germany, 2005.
  14. M. G. Epitropakis, V. P. Plagianakos, and M. N. Vrahatis, “Evolving cognitive and social experience in particle swarm optimization through differential evolution: a hybrid approach,” Information Sciences, vol. 216, pp. 50–92, 2012.
  15. B. Xin and J. Chen, “A survey and taxonomy on hybrid algorithms based on particle swarm optimization and differential evolution,” Journal of Systems Science and Mathematical Sciences, vol. 31, no. 9, pp. 1130–1150, 2011.
  16. H. Liu, X. Wang, and G. Tan, “Convergence analysis of particle swarm optimization and its improved algorithm based on chaos,” Control and Decision, vol. 21, no. 6, pp. 636–645, 2006.
  17. S. Jiang, Q. Wang, and J. Jiang, “Particle swarm optimization algorithm based on velocity differential evolution,” in Proceedings of the Chinese Control and Decision Conference (CCDC '09), pp. 1860–1865, Guilin, China, June 2009.
  18. S. Ghosh, S. Das, D. Kundu, K. Suresh, and A. Abraham, “Inter-particle communication and search-dynamics of lbest particle swarm optimizers: an analysis,” Information Sciences, vol. 182, no. 1, pp. 156–168, 2012.
  19. X. F. Xie, W. J. Zhang, and Z. L. Yang, “A dissipative particle swarm optimization,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '02), pp. 1456–1461, Honolulu, Hawaii, USA, May 2002.
  20. W. Gao, S. Liu, and L. Huang, “Particle swarm optimization with chaotic opposition-based population initialization and stochastic search technique,” Communications in Nonlinear Science and Numerical Simulation, vol. 17, no. 11, pp. 4316–4327, 2012.
  21. S. Janson and M. Middendorf, “A hierarchical particle swarm optimizer for noisy and dynamic environments,” Genetic Programming and Evolvable Machines, vol. 7, no. 4, pp. 329–354, 2006.
  22. F. Neri and V. Tirronen, “Recent advances in differential evolution: a survey and experimental analysis,” Artificial Intelligence Review, vol. 33, no. 1-2, pp. 61–106, 2010.
  23. I. C. Trelea, “The particle swarm optimization algorithm: convergence analysis and parameter selection,” Information Processing Letters, vol. 85, no. 6, pp. 317–325, 2003.