The Scientific World Journal

Volume 2014 (2014), Article ID 194706, 14 pages

http://dx.doi.org/10.1155/2014/194706
Research Article

Human Behavior-Based Particle Swarm Optimization

1School of Mathematics and Statistics, Beijing Institute of Technology, Beijing 100081, China

2School of Science, University of Science and Technology Liaoning, Anshan 114051, China

3Department of Mathematics, Nanchang University, Nanchang 330031, China

Received 3 December 2013; Accepted 17 March 2014; Published 17 April 2014

Academic Editors: P. Agarwal, V. Bhatnagar, and Y. Zhang

Copyright © 2014 Hao Liu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Particle swarm optimization (PSO) has attracted many researchers interested in dealing with various optimization problems, owing to its easy implementation, few tuned parameters, and acceptable performance. However, the algorithm is easily trapped in local optima because of the rapid loss of population diversity. Therefore, improving the performance of PSO and decreasing its dependence on parameters are two important research topics. In this paper, we present a human behavior-based PSO, which is called HPSO. There are two remarkable differences between PSO and HPSO. First, the global worst particle is introduced into the velocity equation of PSO and is endowed with a random weight that obeys the standard normal distribution; this strategy helps trade off the exploration and exploitation abilities of PSO. Second, we eliminate the two acceleration coefficients $c_1$ and $c_2$ of the standard PSO (SPSO) to reduce the sensitivity of the algorithm to the solved problems. Experimental results on 28 benchmark functions, which consist of unimodal, multimodal, rotated, and shifted high-dimensional functions, demonstrate the high performance of the proposed algorithm in terms of convergence accuracy and speed with lower computation cost.

1. Introduction

Particle swarm optimization (PSO) [1] is a population-based intelligent algorithm, and it has been widely employed to solve various kinds of numerical and combinatorial optimization problems because of its simplicity, fast convergence, and high performance.

Researchers have proposed various modified versions of PSO to improve its performance; however, problems of premature convergence or a low convergence rate remain. In PSO research, how to increase population diversity to enhance the precision of solutions and how to speed up the convergence rate with the least computation cost are two vital issues. Generally speaking, there are four strategies to achieve these goals, as follows.

(1) Tuning control parameters. As for the inertia weight, the linearly decreasing inertia weight [2], the fuzzy adaptive inertia weight [3], the random inertia weight [4], and the adaptive inertia weight based on velocity information [5] can all enhance the performance of PSO. Concerning the acceleration coefficients, time-varying acceleration coefficients [6] are widely used. Clerc and Kennedy analyzed the convergence behavior by introducing a constriction factor [7], which has been proved to be equivalent to the inertia weight [8].

(2) Hybrid PSO, which combines PSO with other heuristic operators to increase population diversity. Genetic operators have been hybridized with PSO, such as the selection operator [9], crossover operator [10], and mutation operator [11]. Similarly, the differential evolution algorithm [12], ant colony optimization [13], and local search strategies [14] have been introduced into PSO.

(3) Changing the topological structure. The global and local versions of PSO are the main type of swarm topologies. The global version converges fast with the disadvantage of trapping in local optima, while the local version can obtain a better solution with slower convergence [15]. The Von Neumann topology is helpful for solving multimodal problems and may perform better than other topologies including the global version [16].

(4) Eliminating the velocity formula. Kennedy proposed the bare-bones PSO (BPSO) [17] and variants of BPSO [18, 19]. Sun et al. proposed quantum-behaved PSO (QPSO) and relative convergence analysis [20, 21].

In recent years, some modified PSO variants have greatly enhanced the performance of PSO. For example, Zhan et al. proposed adaptive PSO (APSO) [22], and Wang et al. proposed the so-called diversity enhanced particle swarm optimization with neighborhood search (DNSPSO) [23]. The former introduces an evolutionary state estimation (ESE) technique to adaptively adjust the inertia weight and acceleration coefficients. In the latter, a diversity enhancing mechanism and a neighborhood-based search strategy are employed to achieve a tradeoff between exploration and exploitation.

Although the many variants of PSO have enhanced its performance, there are still problems such as difficult implementation, new parameters to adjust, or high computation cost. So it is necessary to investigate how to trade off the exploration and exploitation abilities of PSO, reduce the parameter sensitivity on the solved problems, and improve the convergence accuracy and speed with the least computation cost and easy implementation. To achieve these goals, in this paper the global worst position (solution) is introduced into the velocity equation of the standard PSO (SPSO), and the resulting term is called impelled or penalized learning according to the sign of the corresponding weight coefficient. Meanwhile, we eliminate the two acceleration coefficients $c_1$ and $c_2$ from the SPSO to reduce the parameter sensitivity on the solved problems. The resulting algorithm, called HPSO, has been applied to a set of nonlinear benchmark functions, comprising unimodal, multimodal, rotated, and shifted high-dimensional functions, to confirm its high performance by comparison with other well-known modified PSO algorithms.

The remainder of the paper is structured as follows. In Section 2, the standard particle swarm optimization (SPSO) is introduced. The proposed HPSO is given in Section 3. Experimental studies and discussion are provided in Section 4. Some conclusions are given in Section 5.

2. Standard PSO (SPSO)

PSO is inspired by the behavior of bird flocking and fish schooling; it was first introduced by Kennedy and Eberhart in 1995 [1] as a new heuristic algorithm. In the standard PSO (SPSO) [2], a swarm consists of a set of particles, and each particle represents a potential solution of an optimization problem. Consider the $i$th particle of a swarm with $N$ particles in a $D$-dimensional search space; its position and velocity at iteration $t$ are denoted by $X_i^t = (x_{i1}^t, x_{i2}^t, \ldots, x_{iD}^t)$ and $V_i^t = (v_{i1}^t, v_{i2}^t, \ldots, v_{iD}^t)$. Then, the new velocity on the $j$th dimension of this particle at iteration $t+1$ is calculated by

$$v_{ij}^{t+1} = w^t v_{ij}^t + c_1 r_1 \left(pbest_{ij}^t - x_{ij}^t\right) + c_2 r_2 \left(gbest_j^t - x_{ij}^t\right),$$

where $i = 1, 2, \ldots, N$ and $N$ is the population size; $j = 1, 2, \ldots, D$ and $D$ is the dimension of the search space; $r_1$ and $r_2$ are two uniformly distributed random numbers in the interval $[0, 1]$; the acceleration coefficients $c_1$ and $c_2$ are nonnegative constants which control the influence of the cognitive and social components during the search process. $pbest_i^t$, called the personal best solution, represents the best solution found by the $i$th particle itself until iteration $t$; $gbest^t$, called the global best solution, represents the best solution found by all particles until iteration $t$. $w^t$ is the inertia weight used to balance the global and local search abilities of particles in the search space, which is given by

$$w^t = w_{\max} - \left(w_{\max} - w_{\min}\right) \frac{t}{t_{\max}},$$

where $w_{\max}$ is the initial weight, $w_{\min}$ is the final weight, $t$ is the current iteration number, and $t_{\max}$ is the maximum iteration number. Then, the particle's position is updated by

$$x_{ij}^{t+1} = x_{ij}^t + v_{ij}^{t+1},$$

and it is checked that $x_{ij}^{t+1} \in [x_j^{\min}, x_j^{\max}]$, where $x_j^{\min}$ and $x_j^{\max}$ represent the lower and upper bounds of the $j$th variable, respectively.
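For illustration, the following minimal Python sketch implements the update rules above. It is a generic reading of the standard SPSO formulas, not the authors' implementation; drawing fresh random numbers per dimension is one common convention and is an assumption here.

```python
import numpy as np

def inertia_weight(t, t_max, w_start=0.9, w_end=0.4):
    """Linearly decreasing inertia weight w^t."""
    return w_start - (w_start - w_end) * t / t_max

def spso_step(x, v, pbest, gbest, w, c1=2.0, c2=2.0,
              v_min=None, v_max=None, lb=None, ub=None):
    """One SPSO velocity/position update for a single particle (numpy arrays)."""
    d = x.size
    r1, r2 = np.random.rand(d), np.random.rand(d)   # uniform random numbers in [0, 1]
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    if v_min is not None and v_max is not None:
        v_new = np.clip(v_new, v_min, v_max)        # velocity clamping
    x_new = x + v_new
    if lb is not None and ub is not None:
        x_new = np.clip(x_new, lb, ub)              # keep the position inside its bounds
    return x_new, v_new
```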

3. Human Behavior-Based PSO (HPSO)

In this section, a modified version of SPSO based on human behavior, called HPSO, is proposed to improve the performance of SPSO. In SPSO, all particles learn only from the best solutions $pbest$ and $gbest$. Obviously, this is an idealized social condition. Considering actual human behavior, however, there are some people around us who have bad habits or behaviors, and, as we all know, these bad habits or behaviors affect the people around them. If we take warning from these bad habits or behaviors, it is beneficial to us. Conversely, if we learn from these bad habits or behaviors, it is harmful to us. Therefore, we should take an objective and rational view of these bad habits or behaviors.

In HPSO, we introduce the global worst particle, that is, the particle with the worst fitness in the entire population at each iteration. It is denoted as $gworst^t$ and defined as follows:

$$gworst^t = X_k^t, \quad k = \underset{1 \le i \le N}{\arg\max}\, f\left(X_i^t\right),$$

where $f(\cdot)$ represents the fitness value of the corresponding particle.

To simulate human behavior and make full use of $gworst^t$, we introduce a learning coefficient $\lambda$, which is a random number obeying the standard normal distribution; that is, $\lambda \sim N(0, 1)$. If $\lambda > 0$, we consider it an impelled learning coefficient, which helps to increase the "flying" velocity of the particle and therefore enhances its exploration ability. Conversely, if $\lambda < 0$, we consider it a penalized learning coefficient, which decreases the "flying" velocity of the particle and is therefore beneficial for exploitation. If $\lambda = 0$, the bad habits or behaviors have no effect on the particle. Meanwhile, in order to reduce the parameter sensitivity on the solved problems, we replace the two acceleration coefficients $c_1$ and $c_2$ with two random learning coefficients $r_1$ and $r_2$, respectively. Therefore, the velocity equation is changed as follows:

$$v_{ij}^{t+1} = w^t v_{ij}^t + r_1 \left(pbest_{ij}^t - x_{ij}^t\right) + r_2 \left(gbest_j^t - x_{ij}^t\right) + \lambda \left(gworst_j^t - x_{ij}^t\right),$$

where $r_1$ and $r_2$ are two random numbers in the range $[0, 1]$ with $r_1 + r_2 = 1$. The random numbers $r_1$, $r_2$, and $\lambda$ are the same for all dimensions but different for each particle, and they are generated anew in each iteration. If $v_{ij}^{t+1}$ overflows the boundary, we set it to the boundary value:

$$v_{ij}^{t+1} = \begin{cases} v_j^{\max}, & v_{ij}^{t+1} > v_j^{\max}, \\ v_j^{\min}, & v_{ij}^{t+1} < v_j^{\min}, \end{cases}$$

where $v_j^{\min}$ and $v_j^{\max}$ are the minimum and maximum velocity on the $j$th dimension of the search space, respectively. Similarly, if a particle flies out of the search space, we limit its position to the corresponding bound value.
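A minimal Python sketch of this modified velocity update is given below, under the assumptions stated above ($r_1 + r_2 = 1$ on $[0, 1]$, $\lambda \sim N(0, 1)$, coefficients shared across a particle's dimensions). The sign convention of the worst-particle term simply mirrors the other two terms, as in the equation above, and should be read as an assumption rather than the authors' code.

```python
import numpy as np

def hpso_velocity(x, v, pbest, gbest, gworst, w, v_min, v_max):
    """HPSO velocity update for one particle; the scalars r1, r2, and lam are
    shared by all dimensions of this particle and redrawn at every iteration."""
    r1 = np.random.rand()
    r2 = 1.0 - r1                        # r1 + r2 = 1, both in [0, 1]
    lam = np.random.randn()              # learning coefficient, lam ~ N(0, 1)
    v_new = (w * v
             + r1 * (pbest - x)
             + r2 * (gbest - x)
             + lam * (gworst - x))       # impelled (lam > 0) / penalized (lam < 0) term
    return np.clip(v_new, v_min, v_max)  # clamp to the velocity bounds
```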

In SPSO, the cognition and social learning terms move a particle towards good solutions based on $pbest$ and $gbest$ in the search space, as shown in Figure 1. This strategy makes a particle fly fast to good solutions, so it easily becomes trapped in local optima. From Figure 2, we can clearly observe that both the impelled learning term and the penalized learning term give a particle the chance to change its flying direction. Therefore, the impelled/penalized term plays a key role in increasing the population diversity, which helps particles escape from local optima and enhances the convergence speed. In HPSO, the impelled/penalized learning term performs a proper tradeoff between exploration and exploitation.

Figure 1: Cognition and social terms in PSO.
Figure 2: Impelled/penalized term in HPSO.

To sum up, Figure 3 illustrates the flowchart of HPSO. Meanwhile, the pseudocode for implementing HPSO is listed in Algorithm 1.

Algorithm 1: HPSO.

Figure 3: HPSO flowchart.
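Since Algorithm 1 is not reproduced here, the following Python sketch outlines one plausible reading of the full HPSO loop for a minimization problem. The function name, the velocity-bound choice, and the sign of the worst-particle term are illustrative assumptions, not the authors' code.

```python
import numpy as np

def hpso(f, lb, ub, n_particles=30, t_max=1000, w_start=0.9, w_end=0.4):
    """Minimize f over the box [lb, ub]: a sketch of the HPSO procedure."""
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = lb.size
    v_max = ub - lb                                  # assumed velocity bounds
    x = lb + np.random.rand(n_particles, dim) * (ub - lb)
    v = np.zeros((n_particles, dim))
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()

    for t in range(t_max):
        w = w_start - (w_start - w_end) * t / t_max  # linearly decreasing inertia weight
        fit = np.array([f(p) for p in x])
        better = fit < pbest_val                     # update personal bests
        pbest[better], pbest_val[better] = x[better], fit[better]
        gbest = pbest[pbest_val.argmin()].copy()     # global best so far
        gworst = x[fit.argmax()].copy()              # globally worst particle this iteration
        for i in range(n_particles):
            r1 = np.random.rand()
            r2 = 1.0 - r1                            # r1 + r2 = 1
            lam = np.random.randn()                  # lambda ~ N(0, 1)
            v[i] = (w * v[i] + r1 * (pbest[i] - x[i])
                    + r2 * (gbest - x[i]) + lam * (gworst - x[i]))
            v[i] = np.clip(v[i], -v_max, v_max)      # velocity clamping
            x[i] = np.clip(x[i] + v[i], lb, ub)      # keep positions in bounds
    return gbest, float(f(gbest))
```

For example, hpso(lambda z: float(np.sum(z * z)), lb=[-100] * 30, ub=[100] * 30) would apply this sketch to a 30-dimensional Sphere function.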

4. Experimental Studies and Discussion

To evaluate the performance of HPSO, 28 minimization benchmark functions are selected [22, 24, 25] as detailed in Section 4.1. HPSO is compared with SPSO in different search spaces and the results are given in Section 4.2. In addition, HPSO is compared with some well-known variants of PSO in Section 4.3.

4.1. Benchmark Functions

In the experimental study, we choose 28 minimization benchmark functions, which consist of unimodal, multimodal, rotated, shifted, and shifted rotated functions. Table 1 lists their main information; please refer to [22, 24, 25] for further details about these functions. Among them, the first group are unimodal functions; one of these is the Rosenbrock function, which is unimodal for $D = 2$ and $3$ but may have multiple minima in high-dimensional cases. The next group are unrotated multimodal functions, whose number of local minima increases exponentially with the problem dimension. The remaining functions are rotated functions, shifted functions, and shifted rotated multimodal functions; for the shifted functions, a randomly generated shift vector located in the search space is used. To obtain a rotated function, an orthogonal matrix $M$ [26] is generated and the rotated variable $y = Mx$ is computed. Then, the vector $y$ is used to evaluate the objective function value.

Table 1: Functions’ names, dimensions, ranges, and global optimum values of benchmark functions used in the experiments.
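As a small illustration of how a rotated benchmark can be evaluated, the sketch below generates an orthogonal matrix via QR decomposition and wraps a benchmark so that it is evaluated on $y = Mx$. The QR construction is one common way to obtain an orthogonal matrix and is an assumption here; the paper cites Salomon [26] for the rotation method, and the Rastrigin-style function is purely illustrative.

```python
import numpy as np

def random_orthogonal(dim, seed=0):
    """Random orthogonal matrix M obtained from a QR decomposition."""
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))
    return q

def rotated(f, m):
    """Return a function that evaluates f on the rotated variable y = M x."""
    return lambda x: f(m @ np.asarray(x, dtype=float))

# Illustration: rotate a 30-dimensional Rastrigin-style function.
rastrigin = lambda x: float(np.sum(x * x - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))
f_rot = rotated(rastrigin, random_orthogonal(30))
```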
4.2. Comparison of HPSO with SPSO

The convergence accuracy of HPSO is compared with that of SPSO on the test functions listed in Table 1. For a fair comparison, we set the same parameter values for both algorithms. The population size is set to 30, the upper bound of velocity on each dimension is set to the upper bound of the corresponding variable, and the lower bound of velocity to the lower bound of the variable. The inertia weight is linearly decreased from 0.9 to 0.4 in both SPSO and HPSO. The acceleration coefficients $c_1$ and $c_2$ in SPSO are set to 2. The two algorithms are independently run 30 times on the benchmark functions, and the maximum iteration number is 1000, 2000, and 3000 for the 30-, 50-, and 100-dimensional search spaces, respectively. The results in terms of the best, worst, median, mean, and standard deviation (SD) of the solutions obtained in the 30 independent runs by each algorithm in the different search spaces are shown in Tables 2, 3, and 4.

Table 2: Experimental results obtained by SPSO and HPSO on the first group of benchmark functions.
Table 3: Experimental results obtained by SPSO and HPSO on the second group of benchmark functions.
Table 4: Experimental results obtained by SPSO and HPSO on the third group of benchmark functions.

From Tables 2–4, we can clearly observe that the convergence accuracy of HPSO is better than that of SPSO on most of the benchmark functions. An interesting result is that HPSO finds the global optimal solutions on nine of the functions in all search spaces; that is to say, HPSO obtains a 100% success rate on those functions. On two further functions, although HPSO can find the global optimal solutions in all search ranges, it only obtains mean values of 333.3333 and 0.8333, respectively, in the 100-dimensional space. At the same time, HPSO offers higher convergence accuracy on most of the remaining functions. However, we must note that SPSO performs better on one function, and on another function SPSO performs better in the 30-dimensional search space while HPSO performs better in the 50- and 100-dimensional search spaces. On the shifted rotated functions, both SPSO and HPSO show the worst convergence accuracy. As seen, the dimension of the selected functions has a great effect on SPSO. For example, on one function SPSO has mean values of 666.6686, 3.6667E+03, and 4.0698E+04 in the 30-, 50-, and 100-dimensional search spaces, respectively, while HPSO has mean values of 0, 0, and 333.333 in the corresponding search spaces. Therefore, we also conclude from the data in the different search spaces that HPSO has better stability than SPSO.

In the 9th column of Tables 2–4, we report the statistical significance level of the difference of the means of the two algorithms. Note that "+" indicates that the difference is significant at the 0.05 level of significance by a two-tailed test, and "-" indicates that the difference of means is not statistically significant.
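The paper does not reproduce the test itself. Assuming a two-sample, two-tailed t-test over the 30 independent runs of each algorithm (a common choice for this kind of comparison, and an assumption here), the check could look as follows.

```python
import numpy as np
from scipy import stats

def significantly_different(runs_a, runs_b, alpha=0.05):
    """Two-tailed two-sample t-test on the final solutions of two algorithms;
    returns '+' if the difference of means is significant at level alpha."""
    _, p_value = stats.ttest_ind(runs_a, runs_b, equal_var=False)  # Welch's variant
    return "+" if p_value < alpha else "-"

# Example with synthetic run data (illustrative only, not the paper's results).
rng = np.random.default_rng(1)
print(significantly_different(rng.normal(0.0, 1.0, 30), rng.normal(0.5, 1.0, 30)))
```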

Figure 4 graphically presents the comparison in terms of the convergence characteristics of the evolutionary processes in solving the selected benchmark functions in the 30-dimensional search space (population size 30, maximum iteration 1000).

Figure 4: Convergence comparison of HPSO and SPSO on the selected test functions in the 30-dimensional search space (population size 30, maximum iteration 1000).
4.3. Comparison of HPSO with Other PSO Algorithms

In this section, a comparison of HPSO with some well-known PSO algorithms which are listed in Table 5 is performed to evaluate the efficiency of the proposed algorithm.

Table 5: Some well-known PSO algorithms in the literature.

First, we choose 10 unimodal and multimodal test functions for this evaluation. According to [22], the algorithms GPSO [2], LPSO [16], VPSO [27], FIPS [28], HPSO-TVAC [6], DMS-PSO [29], CLPSO [24], and APSO [22] are considered, as detailed in Table 5. The experimental results of these algorithms are taken directly from [22], as shown in Table 6. In this trial, the population size, the dimension, and the maximum number of fitness evaluations (FEs) are set as in [22]. The parameter configurations of the selected algorithms follow their corresponding references, and the inertia weight is linearly decreased from 0.9 to 0.4 in HPSO. HPSO is independently run 30 times, and the mean and SD are shown in Table 6. As seen, HPSO ranks first among the algorithms: it obtains the global minimum on six of the functions and gives good near-global optima on two more. Meanwhile, HPSO gives its worst results on the remaining two functions; on one of them APSO has the best convergence accuracy and HPSO only outperforms CLPSO, and on the other DMS-PSO has the best performance.

Table 6: Comparison results of eight PSO algorithms [22] with HPSO on 10 functions (parameter settings as in [22]).

Then, in the next step, we choose six functions from [25] and compare HPSO with the seven algorithms compared in [25], including GPSO, QIPSO [30], UPSO [31], FIPS, AFSO [25], and AFSO-Q1 [25], as detailed in Table 5. For a fair comparison, the population size and the dimension in HPSO are set as in [25], the maximum iteration is set to 10,000, and the inertia weight is linearly decreased from 0.9 to 0.4. HPSO is independently run 30 times, and the mean and SD are shown in Table 7. As seen, HPSO shows better performance and ranks first: it finds the global optimal solution on four of the six functions, while FIPS and UPSO have better convergence accuracy on the remaining two functions, respectively.

Table 7: Comparison results of seven PSO algorithms [25] with HPSO on six functions (maximum iteration = 10,000; other settings as in [25]).

Therefore, it is worth noting that the proposed algorithm performs considerably better than the other well-known PSO algorithms on unimodal and multimodal high-dimensional functions.

5. Conclusion

In this paper, a modified version of PSO called HPSO has been introduced to enhance the performance of SPSO. To simulate human behavior, the global worst particle is introduced into the velocity equation of SPSO, and a learning coefficient that obeys the standard normal distribution balances the exploration and exploitation abilities by changing the flying direction of particles. When the coefficient is positive, it is called an impelled learning coefficient, which helps to enhance the exploration ability. When the coefficient is negative, it is called a penalized learning coefficient, which is beneficial for improving the exploitation ability. At the same time, the acceleration coefficients $c_1$ and $c_2$ are replaced with two random numbers in $[0, 1]$ whose sum is equal to 1; this strategy decreases the dependence on parameters of the solved problems. The proposed algorithm has been evaluated on 28 benchmark functions including unimodal, unrotated multimodal, rotated, shifted, and shifted rotated functions, and the experimental results confirm the high performance of HPSO on most of these functions. However, HPSO shows the worst performance on the shifted rotated functions, so enhancing the performance of HPSO on shifted rotated functions is worth researching in the future. Meanwhile, applying HPSO to solve real-world problems is also a promising research direction.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The project is supported by the National Natural Science Foundation of China (Grant no. 61175127) and the Science and Technology Project of Department of Education of Jiangxi Province China (Grant no. GJJ12093).

References

  1. J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, 1995.
  2. Y. Shi and R. Eberhart, "A modified particle swarm optimizer," in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC '98), pp. 69–73, 1998.
  3. Y. Shi and R. C. Eberhart, "Fuzzy adaptive particle swarm optimization," in Proceedings of the Congress on Evolutionary Computation, pp. 101–106, 2001.
  4. R. C. Eberhart and Y. Shi, "Tracking and optimizing dynamic systems with particle swarms," in Proceedings of the Congress on Evolutionary Computation, pp. 94–100, 2001.
  5. G. Xu, "An adaptive parameter tuning of particle swarm optimization algorithm," Applied Mathematics and Computation, vol. 219, no. 9, pp. 4560–4569, 2013.
  6. A. Ratnaweera, S. K. Halgamuge, and H. C. Watson, "Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 240–255, 2004.
  7. M. Clerc and J. Kennedy, "The particle swarm: explosion, stability, and convergence in a multidimensional complex space," IEEE Transactions on Evolutionary Computation, vol. 6, no. 1, pp. 58–73, 2002.
  8. R. C. Eberhart and Y. Shi, "Comparing inertia weights and constriction factors in particle swarm optimization," in Proceedings of the Congress on Evolutionary Computation (CEC '00), pp. 84–88, 2000.
  9. P. J. Angeline, "Using selection to improve particle swarm optimization," in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC '98), pp. 84–89, 1998.
  10. Y.-P. Chen, W.-C. Peng, and M.-C. Jian, "Particle swarm optimization with recombination and dynamic linkage discovery," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 37, no. 6, pp. 1460–1470, 2007.
  11. P. S. Andrews, "An investigation into mutation operators for particle swarm optimization," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '06), pp. 1044–1051, 2006.
  12. W.-J. Zhang and X.-F. Xie, "DEPSO: hybrid particle swarm with differential evolution operator," in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, pp. 3816–3821, 2003.
  13. M. S. Kıran, M. Gündüz, and K. Baykan, "A novel hybrid algorithm based on particle swarm and ant colony optimization for finding the global minimum," Applied Mathematics and Computation, vol. 219, no. 4, pp. 1515–1521, 2012.
  14. J. J. Liang and P. N. Suganthan, "Dynamic multi-swarm particle swarm optimizer with local search," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '05), pp. 522–528, 2005.
  15. J. Kennedy, "Small worlds and mega-minds: effects of neighborhood topology on particle swarm performance," in Proceedings of the IEEE Congress on Evolutionary Computation, pp. 1931–1938, 1999.
  16. J. Kennedy and R. Mendes, "Population structure and particle swarm performance," in Proceedings of the IEEE Congress on Evolutionary Computation, pp. 1671–1676, 2002.
  17. J. Kennedy, "Bare bones particle swarms," in Proceedings of the IEEE Swarm Intelligence Symposium, pp. 80–87, 2003.
  18. R. A. Krohling and E. Mendel, "Bare bones particle swarm optimization with Gaussian or Cauchy jumps," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '09), pp. 3285–3291, 2009.
  19. M. G. H. Omran, A. P. Engelbrecht, and A. Salman, "Bare bones differential evolution," European Journal of Operational Research, vol. 196, no. 1, pp. 128–139, 2009.
  20. J. Sun, W. Fang, V. Palade, X. Wu, and W. Xu, "Quantum-behaved particle swarm optimization with Gaussian distributed local attractor point," Applied Mathematics and Computation, vol. 218, no. 7, pp. 3763–3775, 2011.
  21. J. Sun, X. Wu, V. Palade, W. Fang, C.-H. Lai, and W. Xu, "Convergence analysis and improvements of quantum-behaved particle swarm optimization," Information Sciences, vol. 193, pp. 81–103, 2012.
  22. Z.-H. Zhan, J. Zhang, Y. Li, and H. S.-H. Chung, "Adaptive particle swarm optimization," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 39, no. 6, pp. 1362–1381, 2009.
  23. H. Wang, H. Sun, C. Li, S. Rahnamayan, and J.-S. Pan, "Diversity enhanced particle swarm optimization with neighborhood search," Information Sciences, vol. 223, pp. 119–135, 2013.
  24. J. J. Liang, A. K. Qin, P. N. Suganthan, and S. Baskar, "Comprehensive learning particle swarm optimizer for global optimization of multimodal functions," IEEE Transactions on Evolutionary Computation, vol. 10, no. 3, pp. 281–295, 2006.
  25. Y.-T. Juang, S.-L. Tung, and H.-C. Chiu, "Adaptive fuzzy particle swarm optimization for global optimization of multimodal functions," Information Sciences, vol. 181, no. 20, pp. 4539–4549, 2011.
  26. R. Salomon, "Re-evaluating genetic algorithm performance under coordinate rotation of benchmark functions: a survey of some theoretical and practical aspects of genetic algorithms," BioSystems, vol. 39, no. 3, pp. 263–278, 1996.
  27. J. Kennedy and R. Mendes, "Neighborhood topologies in fully informed and best-of-neighborhood particle swarms," IEEE Transactions on Systems, Man, and Cybernetics, Part C, vol. 36, no. 4, pp. 515–519, 2006.
  28. R. Mendes, J. Kennedy, and J. Neves, "The fully informed particle swarm: simpler, maybe better," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 204–210, 2004.
  29. J. J. Liang and P. N. Suganthan, "Dynamic multi-swarm particle swarm optimizer," in Proceedings of the IEEE Swarm Intelligence Symposium (SIS '05), pp. 127–132, 2005.
  30. M. Pant, T. Radha, and V. P. Singh, "A new particle swarm optimization with quadratic interpolation," in Proceedings of the International Conference on Computational Intelligence and Multimedia Applications (ICCIMA '07), pp. 55–60, 2007.
  31. K. E. Parsopoulos and M. N. Vrahatis, "UPSO: a unified particle swarm scheme," in Proceedings of the International Conference of Computational Methods in Sciences and Engineering, Lecture Series on Computer and Computational Sciences, vol. 1, pp. 868–873, 2004.