Table of Contents Author Guidelines Submit a Manuscript
Computational Intelligence and Neuroscience
Volume 2016, Article ID 9482073, 9 pages
http://dx.doi.org/10.1155/2016/9482073
Research Article

A Novel Particle Swarm Optimization Algorithm for Global Optimization

1Department of Mathematics, Henan Normal University, Xinxiang 453007, China
2Henan Engineering Laboratory for Big Data Statistical Analysis and Optimal Control, School of Mathematics and Information Sciences, Henan Normal University, Xinxiang 453007, China

Received 17 August 2015; Revised 1 December 2015; Accepted 8 December 2015

Academic Editor: Massimo Panella

Copyright © 2016 Chun-Feng Wang and Kui Liu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Particle Swarm Optimization (PSO) is a recently developed optimization method, which has attracted interest of researchers in various areas due to its simplicity and effectiveness, and many variants have been proposed. In this paper, a novel Particle Swarm Optimization algorithm is presented, in which the information of the best neighbor of each particle and the best particle of the entire population in the current iteration is considered. Meanwhile, to avoid premature, an abandoned mechanism is used. Furthermore, for improving the global convergence speed of our algorithm, a chaotic search is adopted in the best solution of the current iteration. To verify the performance of our algorithm, standard test functions have been employed. The experimental results show that the algorithm is much more robust and efficient than some existing Particle Swarm Optimization algorithms.

1. Introduction

This paper considers the following global optimization problem: where is a continuous variable vector with domain defined by the bound constraint , . The function is a continuous real-valued function.

Many real-world problems, such as engineering and related areas, can be reduced to formulation (1). This problem usually has many local optima, so it is difficult to find its global optimum. For solving such problem, researchers have presented many methods during the past years, which can be divided into two groups: deterministic and stochastic algorithms. Most deterministic algorithms usually effective for unimodal functions have one global optimum and need gradient information. However, stochastic algorithms do not require any properties of the objective function. Therefore, more attention has been paid to stochastic algorithms recently, and many effective algorithms have been presented, including Simulated Annealing (SA) [1], Genetic Algorithm (GA) [2, 3], Differential Evolution (DE) [4], Particle Swarm Optimization (PSO) [5], Ant Colony Optimization (ACO) [6], Artificial Bee Colony (ABC) [7], and Harmony Search (HS) [8].

Among these stochastic algorithms, PSO is a population-based and intelligent method, which is inspired by the emergent motion of a flock of birds searching for food [5]. In PSO, a population of potential solutions is evolved through successive iterations. Since PSO algorithm has a number of desirable properties, including simplicity of implementation, scalability in dimension, and good empirical performance, it has been applied to solve many real-world problems, such as capacitor placement problem [9], short term load forecasting [10], soft sensor [11], the voltage stability of the electric power distribution systems [12, 13], the orbits of discrete chaotic dynamical systems towards desired target region [14], and the permutation flow-shop sequencing problem [15].

Although PSO algorithm has been applied successfully in solving many difficult optimization problems, it also has difficulties in keeping balance between exploration and exploitation when solving complex multimodal problems. In order to get a better performance for PSO algorithm, many variants of PSO have been developed. For example, by using random value of inertia weight, Eberhart and Shi proposed a modified PSO, which can track the optima in a dynamic environment [16]. By utilizing the success rate of the particles, a new adaptive inertia weight strategy was presented [17]. Through using Cauchy mutation, a hybrid PSO (HPSO) was developed [18]. In [19], a hybrid PSO with a wavelet mutation (HWPSO) was given, in which the mutation incorporates with a wavelet function. To avoid premature convergence, a novel parameter automation strategy is proposed [20]. Valdez et al. introduced an improved FPSO + FGA hybrid method by combining the advantages of PSO and GA [21]. Based on mutation operator and different local search techniques, a superior solution guided PSO (SSG-PSO) was developed [22]. By using the second personal best and the second global best particle, two modified PSO algorithms are presented [23]. In order to avoid being trapped in local optima in the convergence process, some other improved PSO have been proposed, such as crossover [24], orthogonal learning strategy [25], chaos [26], and elitist learning strategy [27].

In this paper, by utilizing the information of the the best neighbor of each particle and the best particle of the entire population in the current iteration, a new Particle Swarm Optimization algorithm is proposed, which is named NPSO. To avoid premature, an abandoned mechanism is presented in our algorithm. Furthermore, for improving the global convergence speed, a chaotic search is implemented in the best solution of each iteration.

The remainder of this paper is organized as follows. Section 2 describes the original PSO. Then our proposed modifications to PSO are described in Section 3. Numerical results and discussions are presented in Section 4. Finally, some concluding remarks are provided in Section 5.

2. Particle Swarm Optimization (PSO)

Assume that the search space is -dimensional; denotes the size of the swarm population. In PSO, each particle has a position in the search space and a velocity to indicate its current state. A position denotes a feasible solution. The position and the velocity are updated by the best position encountered by the particle so far and the best position found by the entire population of particles according to the following equations: where and are two learning factors which control the influence of the social and cognitive components, are random numbers in the range , and is the inertia weight, which ensures the convergence of the PSO algorithm and is decreased linearly.

3. The Novel PSO Algorithm (NPSO)

In the original PSO, since each particle moves in the search space guided only by its historical best solution and the global best solution , it may get trapped in a local optimal solution when current global best solution in a local optimum and not easy for the particle escapes from it. To solve such problem, in this section, three improvement strategies are proposed.

3.1. The First Improvement Strategy

From (2), we observe that only the information of the historical best position of each particle and the global best position of all particles is utilized. As a matter of fact, the information of the best neighbor of the particle may provide a better guidance than . The details are given as follows.

Firstly, we explain how to define the neighbors and to determine the best neighbor of the particle . In order to define appropriate neighbors, different approaches could be used. In this paper, the neighbors of are defined by using the mean Euclidean distance between and the rest of solutions. Let be the Euclidean distance between and and let be the mean Euclidean distance for . Then can be computed as follows: By (3), if , then could be accepted as a neighbor of .

In addition, we can also use a more general and flexible definition to determine a neighbor of : If (4) is used, then a new parameter , which refers to the “neighbourhood radius,” will be added to the parameters of PSO. If , it turns to the standard PSO. With the value of increasing, the neighborhood of enlarges or its neighborhood shrinks as the value of decreases. Once the neighbors are determined, we select the best position among the neighbors of as the best neighbor .

After determining the the best neighbor , we give a new way moving for each particle. If , then set If , then set After that, moves to a new position by the following equation:

In our algorithm, it is obvious that before each particle moves, it first watches the region which is centered by itself, selects the best neighbour, and then uses (5)–(7) to generate the next position.

3.2. The Second Improvement Strategy

To avoid premature, in our algorithm, an abandoned mechanism is proposed.

Assume that “” is a predetermined number, which will be used to determine whether the position of the particle should be abandoned. At the beginning, set . With the implementation of the algorithm, if the position was not improved, then set ; else set . If the position can not be improved anymore when is achieved, that is, , then the position will be abandoned and a new position will replace it, which is generated by using the following equation: where is a random number in the range and is the inertia weight that controls the impact of the optimal solution at current iteration, which is increased linearly.

From (8), it can be seen that, in the early stage of the algorithm, will be generated with a large randomness and in the late stage, it will be generated near the global best position .

3.3. The Third Improvement Strategy

To improve the global convergence of NPSO, a chaotic search operator is adopted. Next, we give the details.

Let be the best solution of the current iteration. Firstly, utilize the following equation (9) to generate chaotic variable : where is the length of chaotic sequence and is a random number. Then map to a chaotic vector in the interval : where and are the lower bound and upper bound of variable , respectively. Finally, a new candidate solution is obtained by the following equation: where is a shrinking factor, which is defined as follows: where is the maximum number of iterations and is the number of iterations.

By (11) and (12), it can be seen that will become smaller with the increase of evolutionary generation; that is, the local search range will become smaller with the process of evolution.

Based on the abovementioned explanation, the pseudocode of the NPSO algorithm is given in Algorithm 1.

Algorithm 1: Framework of NPSO. Remark: in our algorithm, for simplicity, in (8), we can set , which can meet our needs.

4. Experimental Results and Discussion

4.1. Experiments 1

In this subsection, the performance of NPSO algorithm is compared to PSO algorithm by evaluating convergence and best solution found for 14 benchmark functions, where are shifted functions and are rotated functions. The characteristics, dimensions, initial range, and formulations of these functions are listed in Table 1. is for and for ; is the shifted global optimum; is the orthogonal matrix.

Table 1: Benchmark functions used in experiments.

The proposed algorithm NPSO and PSO are coded in Matlab 7.0, and the experiments’ platform is a personal computer with Pentium 4, 3.06 GHz CPU, 512 M memory, and Windows XP.

The parameters of algorithms are given as follows. The common parameters are the dimension , the population size , the maximum number of iterations , , , , and . NPSO settings are as follows: the length of chaotic sequence and the limit of a position abandoned are set to 5; . Each experiment is repeated 30 times independently. The global minimums (Min), maximum number of iterations (Max iteration), mean best values (Mean), and standard deviations (SD) are given in Table 2. To show the convergence speed of PSO and NPSO more clearly, the convergence graphs of PSO and NPSO are shown in Figure 1.

Table 2: NPSO performance comparison with PSO.
Figure 1: Convergence rates on test functions.

From Table 2, it can be seen that the NPSO performs better than PSO on all test functions. From Figure 1, it can be seen that the convergence speed of NPSO is more fast than PSO.

4.2. Experiments 2

In this subsection, to further test the efficiency of NPSO, it is compared with other five algorithms, that is, CPSO [28], CLPSO [29], FIPS [30], Frankenstein [31], and AIWPSO [17].

Twelve benchmark functions are used for the comparison. The characteristics, dimensions, initial range, and formulations of these functions are listed in Table 3.

Table 3: Benchmark functions used in experiments.

In order to make a fair comparison, the maximum number of function evaluations (maxFEs) is set to 2e5 for all algorithms; the population size is 20. The other parameters for NPSO are set as in Experiments 1. The comparison results are presented in Table 4. For the sake of convenience and reliability, except for the NPSO algorithm, the rest of results reported here are taken directly from the literature [17].

Table 4: The mean and standard deviation of the best solutions of six PSO variants on 12 test problems in 200,000 function evaluations.

From Table 4, it can be seen that NPSO is significantly better than the other five algorithms on almost all the test functions, except for the function .

5. Conclusion

In this paper, by utilizing the information of the best neighbor of each particle and the best solution in the current iteration, we presented a new move equation. After that, based on the other two improvement strategies, a novel Particle Swarm Optimization algorithm NPSO was proposed. The performance of NPSO was compared with the standard PSO and other five variants of PSO. The results showed that NPSO presents promising results for considered problems.

In the future, the adaption of the parameters in NPSO can be studied to improve its performance.

Conflict of Interests

The authors declare that there is no conflict of interest regarding the publication of this paper.

Acknowledgments

The research was supported by NSFC (U1404105, 11171094); the Key Scientific and Technological Project of Henan Province (142102210058); the Doctoral Scientific Research Foundation of Henan Normal University (qd12103); the Youth Science Foundation of Henan Normal University (2013qk02); Henan Normal University National Research Project to Cultivate the Funded Projects (01016400105); the Henan Normal University Youth Backbone Teacher Training.

References

  1. N. Gabere, Simulated annealing driven pattern search algorithms for global optimization [M.S. thesis], University of the Witwatersrand, Johannesburg, South Africa, 2007.
  2. K. Deep and M. Thakur, “A new mutation operator for real coded genetic algorithms,” Applied Mathematics and Computation, vol. 193, no. 1, pp. 211–230, 2007. View at Publisher · View at Google Scholar · View at MathSciNet · View at Scopus
  3. P. Kaelo and M. M. Ali, “Integrated crossover rules in real coded genetic algorithms,” European Journal of Operational Research, vol. 176, no. 1, pp. 60–76, 2007. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet · View at Scopus
  4. R. Storn and K. Price, “Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces,” Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997. View at Publisher · View at Google Scholar · View at MathSciNet
  5. J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, vol. 4, pp. 1942–1948, IEEE, Perth, Australia, November-December 1995. View at Publisher · View at Google Scholar
  6. M. Dorigo and T. Stutzle, Ant Colony Optimization, MIT Press, Cambridge, Mass, USA, 2004.
  7. D. Karaboga, “An idea based on honey bee swarm for numerical optimization,” Tech. Rep. TR06, Erciyes University, Kayseri, Turkey, 2005. View at Google Scholar
  8. Z. W. Geem, J. H. Kim, and G. V. Loganathan, “A new heuristic optimization algorithm: harmony search,” Simulation, vol. 76, no. 2, pp. 60–68, 2001. View at Publisher · View at Google Scholar · View at Scopus
  9. N. W. Oo, “A comparison study on particle swarm and evolutionary particle swarm optimization using capacitor placement problem,” in Proceedings of the 2nd IEEE International Conference on Power and Energy Conference, pp. 1208–1211, Johor Bahru, Malaysia, December 2008. View at Publisher · View at Google Scholar
  10. B. Wang, N.-L. Tai, H.-Q. Zhai, J. Ye, J.-D. Zhu, and L.-B. Qi, “A new ARMAX model based on evolutionary algorithm and particle swarm optimization for short-term load forecasting,” Electric Power Systems Research, vol. 78, no. 10, pp. 1679–1685, 2008. View at Publisher · View at Google Scholar · View at Scopus
  11. H. Wang and F. Qian, “An improved particle swarm optimizer with behavior-distance models and its application in soft-sensor,” in Proceedings of the 7th World Congress on Intelligent Control and Automation (WCICA '08), pp. 4473–4478, IEEE, Chongqing, China, June 2008. View at Publisher · View at Google Scholar · View at Scopus
  12. S. Naka, T. Genji, T. Yura, and Y. Fukuyama, “A hybrid particle swarm optimization for distribution state estimation,” IEEE Transactions on Power Systems, vol. 18, no. 1, pp. 60–68, 2003. View at Publisher · View at Google Scholar · View at Scopus
  13. A. A. El-Dib, H. K. M. Youssef, M. M. El-Metwally, and Z. Osman, “Maximum loadability of power systems using hybrid particle swarm optimization,” Electric Power Systems Research, vol. 76, no. 6-7, pp. 485–492, 2006. View at Publisher · View at Google Scholar · View at Scopus
  14. B. Liu, L. Wang, Y.-H. Jin, F. Tang, and D.-X. Huang, “Directing orbits of chaotic systems by particle swarm optimization,” Chaos, Solitons and Fractals, vol. 29, no. 2, pp. 454–461, 2006. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet · View at Scopus
  15. M. F. Tasgetiren, Y.-C. Liang, M. Sevkli, and G. Gencyilmaz, “A particle swarm optimization algorithm for makespan and total flowtime minimization in the permutation flowshop sequencing problem,” European Journal of Operational Research, vol. 177, no. 3, pp. 1930–1947, 2007. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at Scopus
  16. R. C. Eberhart and Y. H. Shi, “Tracking and optimizing dynamic systems with particle swarms,” in Proceedings of the IEEE Congress on Evolutionary Computation, vol. 1, pp. 94–100, Seoul, South Korea, May 2001. View at Publisher · View at Google Scholar
  17. A. Nickabadi, M. M. Ebadzadeh, and R. Safabakhsh, “A novel particle swarm optimization algorithm with adaptive inertia weight,” Applied Soft Computing, vol. 11, no. 4, pp. 3658–3670, 2011. View at Publisher · View at Google Scholar · View at Scopus
  18. H. Wang, C. Li, Y. Liu, and S. Zeng, “A hybrid particle swarm algorithm with cauchy mutation,” in Proceedings of the IEEE Swarm Intelligence Symposium (SIS '07), pp. 356–360, IEEE, Honolulu, Hawaii, USA, April 2007. View at Publisher · View at Google Scholar · View at Scopus
  19. S. H. Ling, H. H. C. Iu, K. Y. Chan, H. K. Lam, B. C. W. Yeung, and F. H. Leung, “Hybrid particle swarm optimization with wavelet mutation and its industrial applications,” IEEE Transactions on Systems, Man, and Cybernetics Part B: Cybernetics, vol. 38, no. 3, pp. 743–763, 2008. View at Publisher · View at Google Scholar · View at Scopus
  20. A. Ratnaweera, S. K. Halgamuge, and H. C. Watson, “Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 240–255, 2004. View at Publisher · View at Google Scholar · View at Scopus
  21. F. Valdez, P. Melin, and O. Castillo, “Modular Neural Networks architecture optimization with a new nature inspired method using a fuzzy combination of Particle Swarm Optimization and Genetic Algorithms,” Information Sciences, vol. 270, pp. 143–153, 2014. View at Publisher · View at Google Scholar · View at Scopus
  22. G. H. Wu, D. S. Qiu, Y. Yu, W. Pedrycz, M. Ma, and H. Li, “Superior solution guided particle swarm optimization combined with local search techniques,” Expert Systems with Applications, vol. 41, no. 16, pp. 7536–7548, 2014. View at Publisher · View at Google Scholar · View at Scopus
  23. Y.-B. Shin and E. Kita, “Search performance improvement of particle swarm optimization by second best particle information,” Applied Mathematics and Computation, vol. 246, pp. 346–354, 2014. View at Publisher · View at Google Scholar · View at MathSciNet · View at Scopus
  24. Y.-P. Chen, W.-C. Peng, and M.-C. Jian, “Particle swarm optimization with recombination and dynamic linkage discovery,” IEEE Transactions on Systems, Man, and Cybernetics Part B: Cybernetics, vol. 37, no. 6, pp. 1460–1470, 2007. View at Publisher · View at Google Scholar · View at Scopus
  25. Z.-H. Zhan, J. Zhang, Y. Li, and Y.-H. Shi, “Orthogonal learning particle swarm optimization,” IEEE Transactions on Evolutionary Computation, vol. 15, no. 6, pp. 832–847, 2011. View at Publisher · View at Google Scholar · View at Scopus
  26. B. Alatas, E. Akin, and A. B. Ozer, “Chaos embedded particle swarm optimization algorithms,” Chaos, Solitons and Fractals, vol. 40, no. 4, pp. 1715–1734, 2009. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at Scopus
  27. Z.-H. Zhan, J. Zhang, Y. Li, and H. S.-H. Chung, “Adaptive particle swarm optimization,” IEEE Transactions on Systems, Man, and Cybernetics Part B: Cybernetics, vol. 39, no. 6, pp. 1362–1381, 2009. View at Publisher · View at Google Scholar · View at Scopus
  28. F. van den Bergh and A. P. Engelbrecht, “A cooperative approach to particle swarm optimization,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 225–239, 2004. View at Publisher · View at Google Scholar · View at Scopus
  29. J. J. Liang, A. K. Qin, P. N. Suganthan, and S. Baskar, “Comprehensive learning particle swarm optimizer for global optimization of multimodal functions,” IEEE Transactions on Evolutionary Computation, vol. 10, no. 3, pp. 281–295, 2006. View at Publisher · View at Google Scholar · View at Scopus
  30. R. Mendes, J. Kennedy, and J. Neves, “The fully informed particle swarm: simpler, maybe better,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 204–210, 2004. View at Publisher · View at Google Scholar · View at Scopus
  31. M. A. M. de Oca, T. Stützle, M. Birattari, and M. Dorigo, “Frankenstein's PSO: a composite particle swarm optimization algorithm,” IEEE Transactions on Evolutionary Computation, vol. 13, no. 5, pp. 1120–1132, 2009. View at Publisher · View at Google Scholar · View at Scopus