Discrete Dynamics in Nature and Society
Volume 2012 (2012), Article ID 791373, 21 pages
A Novel PSO Model Based on Simulating Human Social Communication Behavior
1School of Economics and Management, Tongji University, Shanghai 200092, China
2School of Mathematics and Computer Science, Zunyi Normal College, Zunyi 563002, China
3College of Management, Shenzhen University, Shenzhen 518060, China
4Hefei Institute of Intelligent Machines, Chinese Academy of Sciences, Hefei 230031, China
5E-Business Technology Institute, The University of Hong Kong, Hong Kong
Received 11 May 2012; Revised 22 June 2012; Accepted 25 June 2012
Academic Editor: Vimal Singh
Copyright © 2012 Yanmin Liu and Ben Niu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
To solve complicated multimodal problems, this paper presents a variant of the particle swarm optimizer (PSO) based on a simulation of human social communication behavior (HSCPSO). In HSCPSO, each particle initially joins a default number of social circles (SCs), each consisting of several particles, and its learning exemplars comprise three parts: its own best experience, the experience of the best performing particle in all SCs, and the experiences of the particles in all SCs it is a member of. This learning strategy takes full advantage of each particle's useful information to improve the diversity of the swarm and discourage premature convergence. To balance the particles' influence on the SCs, the worst performing particles join more SCs to learn from other particles, and the best performing particles leave SCs to reduce their strong influence on other members. Additionally, to improve effectiveness on multimodal problems, a novel parallel hybrid mutation is proposed to strengthen the particles' ability to escape from local optima. Experiments were conducted on a set of classical benchmark functions, and the results demonstrate the good performance of HSCPSO in escaping from local optima and solving complex multimodal problems compared with other PSO variants.
Particle swarm optimization (PSO), originally introduced by Kennedy and Eberhart, has proven to be a powerful competitor to other evolutionary algorithms (e.g., genetic algorithms). In PSO, the individuals, instead of being manipulated by evolution operators such as crossover and mutation, are "evolved" through cooperation and competition among the individuals over generations. Each individual in the swarm is called a particle (a point) with a velocity that is dynamically adjusted during the search according to its own flying experience and the best experience of the swarm.
For unconstrained optimization, PSO has empirically turned out to perform well on many problems. However, when it comes to solving complex multimodal problems, PSO may easily get trapped in a local optimum. To overcome this defect and improve PSO performance, researchers have proposed several methods [3–20]. In this paper, we present an improved PSO based on human social communication. This strategy preserves the swarm's diversity against premature convergence, especially when solving complex multimodal problems.
This paper is organized as follows. Section 2 presents an overview of the original PSO, together with a discussion of previous attempts to improve PSO performance. Section 3 proposes an improved PSO based on the simulation of human communication. Section 4 gives the test functions, the experimental settings, and the results. Finally, conclusions and future work are discussed in Section 5.
2. Particle Swarm Optimization
2.1. The Original PSO (OPSO)
The original PSO algorithm (OPSO) was inspired by the search behavior of biological organisms, in which each particle moves through the search space guided by a combination of the best position found so far by itself and by its neighbors. In the PSO domain, there are generally two main neighborhood topologies, namely, the global and the local neighborhood, shown in Figures 1(a) and 1(b), respectively.
The two neighborhood topologies give rise to two classical PSO variants, namely, the global PSO (GPSO) and the local PSO (LPSO), whose particle behaviors are described by the update rules given below. In HSCPSO, the neighborhood topology is somewhat similar to the four-clusters topology, but not identical to it, as can be seen in Figure 1(c). Here, the black dots represent the particles in each social circle (SC), the circles represent the social circles, and the arrows represent the relationships between SCs. Note that in HSCPSO each particle's neighborhood is the set of the particles of all SCs that it is a member of (see Section 3.1).
Consider the following:

v_i^d(t+1) = w v_i^d(t) + c_1 r_1^d (pbest_i^d - x_i^d(t)) + c_2 r_2^d (gbest^d - x_i^d(t))  (GPSO),

v_i^d(t+1) = w v_i^d(t) + c_1 r_1^d (pbest_i^d - x_i^d(t)) + c_2 r_2^d (lbest_i^d - x_i^d(t))  (LPSO),

x_i^d(t+1) = x_i^d(t) + v_i^d(t+1),

where i = 1, 2, ..., ps and ps is the population size; t is the iteration counter; d = 1, 2, ..., D indexes the dimensions of the search space; w is the inertia weight; c_1 and c_2 denote the acceleration coefficients; r_1 and r_2 are random vectors with components uniformly distributed in the range [0, 1]; x_i represents the position of the ith particle at iteration t; v_i represents the velocity of the ith particle; pbest_i is the best position yielding the best fitness value for the ith particle; gbest is the best position discovered by the whole population; and lbest_i is the best position achieved within the neighborhood of the current particle i. Note that in this paper the original PSO refers to two PSOs: PSO with inertia weight and a ring topology, and PSO with inertia weight and a global topology.
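The standard inertia-weight update described above can be sketched in a few lines of Python. This is an illustrative sketch, not the paper's code; the function name `pso_step` and the default coefficient values are our assumptions.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.729, c1=1.492, c2=1.492, rng=None):
    """One velocity/position update of the original (global) PSO.

    x, v, pbest : arrays of shape (ps, D) -- positions, velocities, personal bests
    gbest       : array of shape (D,)    -- best position of the whole swarm
    """
    rng = rng or np.random.default_rng()
    r1 = rng.random(x.shape)  # component-wise uniform random numbers in [0, 1)
    r2 = rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v
```

For the local version, `gbest` is simply replaced by each particle's neighborhood best `lbest`.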
2.2. Related Works
Since its introduction, PSO has attracted a great deal of attention, and many researchers have worked on improving its performance in various ways, deriving many interesting variants. Clerc and Kennedy showed that a constriction factor helps ensure the convergence of PSO. The velocity and position of the ith particle are updated as follows:

v_i(t+1) = χ [v_i(t) + c_1 r_1 (pbest_i - x_i(t)) + c_2 r_2 (gbest - x_i(t))],

x_i(t+1) = x_i(t) + v_i(t+1),

where χ = 2 / |2 - φ - sqrt(φ^2 - 4φ)| and φ = c_1 + c_2 > 4.
Kennedy and Mendes claimed that PSO with a small neighborhood might perform better on complex problems, while one with a large neighborhood would perform better on simple problems. Sun et al. proposed a quantum-behaved particle swarm optimization (QPSO) to improve PSO performance. Some researchers [6–9] also combined other techniques (e.g., evolutionary operators) to improve PSO performance. However, these improvements are based on a static neighborhood network, which greatly decreases swarm diversity because each particle learns only from a fixed neighborhood. Hence, Suganthan and Hu and Eberhart proposed a dynamic neighborhood topology that transforms gradually from acting like the local neighborhood in the early stage of the search to behaving more like the global neighborhood in the late stage. Liang et al. proposed an improved PSO called CLPSO, which uses a novel learning strategy in which all particles' historical best information is used to update a particle's velocity. Another variant is the fitness-distance-ratio-based PSO (FDR-PSO), which selects another particle that is supposed to have a higher fitness value and be nearest to the particle being updated. Mohais et al. proposed a randomly generated directed graph to define the neighborhood, generated by two methods, namely, the random edge migration method and the neighborhood restructuring method. Janson and Middendorf arranged the particles in a hierarchy, where the best performing particles ascend the tree to influence more particles, replacing relatively worse performing particles, which descend the tree. A clubs-based particle swarm optimization (C-PSO) algorithm was also proposed, in which club membership is dynamically changed to avoid premature convergence and improve performance.
Mendes and Kennedy introduced a fully informed PSO in which, instead of using the pbest (personal best: the best position of a given particle so far) and gbest (global best: the position of the best particle of the entire swarm) positions of the standard algorithm, all the neighbors of a particle are used to update its velocity. The influence of each particle on its neighbors is weighted based on its fitness value and the neighborhood size. Another work proposed a cooperative particle swarm optimizer (CPSO-H). Although CPSO-H uses one-dimensional (1D) swarms to search each dimension separately, the results of these searches are integrated by a global swarm, significantly improving the performance of the original PSO on multimodal problems. In recent works [19, 20], the authors proposed dynamic neighborhood topologies in which the whole population is divided into a number of subswarms. These subswarms are regrouped frequently using various regrouping schedules, with information exchanged among all particles in the whole swarm.
The main differences between our approach and the other proposals in the literature above are as follows.
(i) In HSCPSO, we create social circles (SCs) for the particles, analogous to human communities in which people communicate and study to widen each other's knowledge.
(ii) The learning exemplars of each particle comprise three parts: its own best experience, the experience of the best performing particle in all SCs, and the experiences of the particles of all SCs it is a member of. This strategy helps preserve swarm diversity against premature convergence.
(iii) A parallel hybrid mutation is used to improve the particles' ability to escape from local optima.
3. PSO Based on Simulation of Human Communication Behavior in the Society
3.1. Updating Strategy of Particle Velocity
Based on the simulation of human social communication behavior, each particle can join more than one SC, and each SC can accommodate any number of particles; vacant SCs are also allowed. Initially, each particle randomly joins a predefined number of SCs, called the default social circle level (DSC). During the run, the worst performing particles become more socialized by joining more SCs to improve their knowledge, while the best performing particles become less socialized by leaving SCs to reduce their strong influence on other members, so each particle's SC level is dynamically adjusted according to its performance. During this cycle of leaving and joining SCs, particles that no longer show extreme performance in their neighborhoods gradually return to DSC. The speed of regaining DSC affects the algorithm's performance, so a check is made every n iterations (n is called the gap iteration for adjusting DSC) to find particles whose SC level is above or below DSC and take them back to DSC if they do not show extreme performance. Thus, the gap iteration for adjusting DSC needs a suitable value to ensure the efficiency of HSCPSO.
To control when a particle joins or leaves an SC, we designed the following mechanism. If a particle continues to show the worst performance among the particles in the SCs it is a member of, it joins additional SCs one after another until it reaches the maximum allowed SC number defined by the user; conversely, a particle that continues to show superior performance in its SCs leaves SCs one after another until it reaches the minimum allowed SC number. The methods for determining DSC, the gap iteration for adjusting DSC, and the maximum and minimum allowed SC numbers are discussed below.
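The join/leave cycle can be sketched as follows. This is a hypothetical illustration of the mechanism described above, not the paper's implementation; the class and function names are ours, and the `is_best`/`is_worst` flags stand for "extreme performer within its own neighborhood".

```python
import random

class Particle:
    def __init__(self, member_of, is_best=False, is_worst=False):
        self.member_of = list(member_of)  # indices of the SCs it belongs to
        self.is_best = is_best            # best performer among its SCs' members?
        self.is_worst = is_worst          # worst performer among its SCs' members?

def adjust_sc_membership(p, n_scs, dsc, sc_min, sc_max, rng=random):
    """One adjustment step of the SC membership of particle p."""
    outside = [i for i in range(n_scs) if i not in p.member_of]
    if p.is_worst and len(p.member_of) < sc_max and outside:
        # the worst performer joins one more SC to learn from other particles
        p.member_of.append(rng.choice(outside))
    elif p.is_best and len(p.member_of) > sc_min:
        # the best performer leaves one SC to weaken its influence on others
        p.member_of.remove(rng.choice(p.member_of))
    elif not (p.is_best or p.is_worst):
        # non-extreme particles gradually drift back to the default level DSC
        if len(p.member_of) > dsc:
            p.member_of.remove(rng.choice(p.member_of))
        elif len(p.member_of) < dsc and outside:
            p.member_of.append(rng.choice(outside))
```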
Figure 2(a) shows a snapshot of the SCs during an execution of HSCPSO at iteration t. In this example, the swarm consists of 7 particles, and there are 6 SCs available for them to join. The minimum allowed SC number, DSC, and maximum allowed SC number are 2, 3, and 5, respectively. Particle 1 will leave social circle 1 (SC1), SC3, or SC6 because it is the best performing particle in its neighborhood. Particle 3 will join SC2 or SC4 because it is the worst performing particle in its neighborhood. Particle 4 will leave SC2, SC3, SC4, or SC5 to go back to DSC, while Particle 2 will join SC1, SC3, SC4, or SC6 to return to DSC. Figure 2(b) gives the pseudocode of the SC adjustment during an execution of HSCPSO, where neighbor_i is the set of the neighbors of particle i (note that in HSCPSO each particle's neighborhood is the set of the particles of all SCs that it is a member of), and the second set records the SCs that particle i is a member of.
The idea behind HSCPSO is somewhat similar to C-PSO, but not identical to it. In our algorithm, when updating its velocity, a particle does not learn from all dimensions of the best performing particle in its neighborhood, but learns from the corresponding dimensions of the best performing particles of the SCs that it is a member of. To compare the two strategies, the following experiment was conducted: HSCPSO and C-PSO were each run 20 times on the Sphere, Rosenbrock, Ackley, Griewanks, and Rastrigin functions, with 1000 iterations per run. The mean values of the results are plotted in Figure 3(a).
Since all test functions are minimization problems, the smaller the mean value, the better the performance. From Figure 3 we can observe that the learning strategy of C-PSO suffers from premature convergence, while the strategy of HSCPSO not only preserves the diversity of the swarm but also improves its ability to escape from local optima.
In the real world, everyone can become a member of any SC to widen his or her field of view. In a similar way, we hypothesize that each particle in the swarm has the same ability as a human in society. Based on our previous work, the following velocity and position updating equations are employed in HSCPSO, where pbest_i is the best position of the ith particle, the second exemplar denotes the experience of the best performing particle in all SCs, and the third exemplar, called the comprehensive strategy, uses the particles' historical best information from the SCs that particle i is a member of to update the particle's velocity. In the comprehensive strategy, the pseudocode of the learning exemplar choice is shown in Algorithm 1.
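One consistent form of a three-exemplar velocity update can be sketched as follows. This is a sketch of the structure described above, not the paper's exact equation; the function name, the coefficient names c1, c2, c3, and the argument names are our assumptions.

```python
import numpy as np

def hscpso_velocity(v, x, pbest, sc_best, comp_exemplar, w, c1, c2, c3, rng):
    """Velocity update attracted by three exemplars:
    pbest          -- the particle's own best position
    sc_best        -- position of the best performing particle in all SCs
    comp_exemplar  -- comprehensive exemplar assembled, dimension by dimension,
                      from the historical bests of the SCs the particle is in
    """
    r1, r2, r3 = (rng.random(x.shape) for _ in range(3))
    return (w * v
            + c1 * r1 * (pbest - x)
            + c2 * r2 * (sc_best - x)
            + c3 * r3 * (comp_exemplar - x))
```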
3.2. Parallel Hybrid Mutation
Previous work has concluded that PSO converges rapidly in the early search process and then gradually slows down or stagnates. This phenomenon is caused by the loss of diversity in the swarm. To overcome this drawback, some researchers [7–9] applied evolutionary operators to improve the diversity of the swarm. In this paper, we propose a parallel hybrid mutation that combines uniform distribution mutation with Gaussian distribution mutation: the former prompts a global search over a large range, while the latter searches a small range with high precision. The mutation process is as follows.
(i) Use the given expression to set the mutation capability value for each particle; Figure 4(b) shows an example of the values assigned to 40 particles. Each particle has a mutation ability ranging from 0.05 to 0.5.
(ii) Choose the mode of the mutation, as shown in Algorithm 2.
Here, the mutation factor denotes the ratio of the uniform distribution mutation; correspondingly, its complement is the ratio of the Gaussian distribution mutation. Gaussian(σ) returns a random number drawn from a Gaussian distribution with standard deviation σ, and ceil(p) rounds the elements of p to the nearest integers greater than or equal to p. Three main mutation factors (Linear, Exponential, and Sigmoid, defined in (3.3), (3.4), and (3.5)) are adopted, and their shapes with a maximum generation of 2000 are shown in Figure 4(a), where t denotes the current generation and gen is the maximum generation.
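The mutation step can be sketched as follows. This is a hypothetical illustration: the function names are ours, and the linear schedule shown (decaying from 1 to 0 over the run, so mutation is mostly uniform early and mostly Gaussian late) is an assumed form of the linear mutation factor, not the paper's formula.

```python
import numpy as np

def linear_mutation_factor(t, gen):
    """Assumed linear schedule for the mutation factor u(t)."""
    return 1.0 - t / gen

def hybrid_mutate(x, pm, u, lo, hi, sigma, rng):
    """Parallel hybrid mutation sketch.

    pm : per-particle mutation capability, in [0.05, 0.5] in the paper
    u  : mutation factor, the ratio of uniform- vs Gaussian-distributed mutation
    """
    x = x.copy()
    for i in range(x.shape[0]):
        for d in range(x.shape[1]):
            if rng.random() < pm[i]:
                if rng.random() < u:
                    # uniform mutation: coarse global search over the full range
                    x[i, d] = rng.uniform(lo, hi)
                else:
                    # Gaussian mutation: fine local search around the current value
                    x[i, d] += rng.normal(0.0, sigma)
    return x
```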
To test which mutation factor is appropriate for our algorithm, the following experiment was conducted: on the Sphere, Rosenbrock, Griewanks, Ackley, Rastrigin_noncont, and Rastrigin functions, HSCPSO with each mutation factor was run 30 times per function with 2000 iterations per run. The experimental results were standardized to the same scale with (3.7) and are presented in Table 1, where the first column gives the type of mutation factor and the second the standardized value after 2000 iterations; Linear denotes the linear mutation factor, Exponent the exponential mutation factor, Sigm the Sigmoid mutation factor, and No denotes no mutation. We can observe that HSCPSO with the linear mutation factor achieves the best result; thus the linear mutation factor is adopted in HSCPSO.
3.3. HSCPSO’s Parameter Settings and Analysis of the Swarm’s Diversity
In this section, we discuss four parameters: the default social circle level (DSC), the gap iteration (n) for adjusting DSC, and the minimum and maximum allowed SC numbers.
3.3.1. Default Social Circle Level (DSC)
To explore the effect of different DSC values on HSCPSO, five ten-dimensional test functions are used (Sphere, Rosenbrock, Ackley, Griewanks, and Rastrigin). When dealing with the experimental data, it is impossible to combine the raw results of the PSOs with different DSC values across the different functions, as they are scaled differently. Thus, Mendes's method is used: the raw results are standardized to the same scale as follows:

R'_ij = (R_ij - μ_i) / σ_i,  (3.7)

where R_ij, μ_i, and σ_i denote the trial result of the ith test function under the jth algorithm and the mean and standard deviation over the ith test function, respectively. Note that the index j ranges over the algorithms with different DSC values.
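This standardization is an ordinary z-score per test function; a minimal sketch (the function name is ours):

```python
import numpy as np

def standardize(results):
    """Standardize one function's raw results: subtract the mean and divide by
    the standard deviation, so results from differently scaled functions can
    be combined on one plot."""
    results = np.asarray(results, dtype=float)
    return (results - results.mean()) / results.std()
```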
By trial and error, we found that different DSC values yielded different results, as seen in Figure 5(a). Since all test functions are minimization problems, the smaller the standardized value, the better the performance (i.e., the larger the DSC, the better the performance). On second thought, however, this phenomenon is not realistic, because in real life people do not have enough energy to take part in all social activities. Therefore, another experiment (with the same settings and data processing as in Figure 5(a)) was conducted to test the impact of DSC. Figure 5(b) gives the simulation results: the smaller the DSC, the slower the convergence speed, and vice versa. A slower convergence rate, rather than a faster one, is obviously beneficial to the ability to escape from a local optimum. Based on the above analysis, the parameter DSC is set to 15 in our algorithm. Furthermore, to confirm the DSC choice statistically, we adopted the nonparametric Wilcoxon rank sum test to determine whether the differences are significant. The criterion is the following:
If the p value is less than a, the two groups of numbers are statistically different; if it is equal to or greater than a, they are not statistically different. From Table 2, we can observe that the performance of DSC-15 is statistically different from that of the other DSC values except DSC-10.
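For reference, the two-sided Wilcoxon rank sum test can be computed with the normal approximation; a minimal sketch (our own pure-Python version, ignoring tie correction for brevity):

```python
import math

def ranksum_p(a, b):
    """Two-sided Wilcoxon rank sum p value via the normal approximation."""
    combined = sorted((v, 0 if i < len(a) else 1)
                      for i, v in enumerate(list(a) + list(b)))
    # rank sum of sample a (ranks 1..n; ties ignored for simplicity)
    ra = sum(rank + 1 for rank, (v, grp) in enumerate(combined) if grp == 0)
    n1, n2 = len(a), len(b)
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (ra - mu) / sigma
    # two-sided p value from the standard normal CDF
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
```

Two groups are then declared statistically different when `ranksum_p(a, b)` falls below the chosen significance level.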
3.3.2. The Gap Iteration (n) for Adjusting DSC
This section discusses the effect of the gap iteration (n) for adjusting DSC on the algorithm. Six thirty-dimensional test functions (Sphere, Rosenbrock, Griewanks, Ackley, Rastrigin_noncont, and Rastrigin) are used to investigate the impact of this parameter. HSCPSO is run 20 times on each function (with 1000 iterations per run), and the results are again standardized to the same scale using (3.7). The mean values of the results are plotted in Figure 6(a), which clearly indicates that the gap iteration influences the results. When n is 25, that is, about 1/40 of the total iterations, a faster convergence speed and a better result are obtained on all test functions. Furthermore, the standardized diversity in Figure 6(b) also supports this conclusion.
3.3.3. Minimum and Maximum of Allowed SC Number
During the run, the number of SCs that a particle is a member of changes dynamically according to its performance, which influences the performance of HSCPSO. Additionally, in PSO, the early search for the global best position requires exploration of possible solutions, for which the LPSO behavior is preferable, while the later search requires exploitation of the best solutions found earlier, for which the GPSO behavior is preferable. Thus, given these characteristics, we make the minimum and maximum allowed SC numbers change dynamically as the iterations elapse and empirically propose expressions to set them, where the floor operation rounds towards minus infinity, t is the iteration counter, and gen is the total number of iterations. Figure 7(a) plots the minimum and maximum allowed SC numbers over the iterations, and we can observe that the dynamic minimum and maximum (dynamic max-min) improve the swarm diversity compared with a fixed max-min. Note that the measure of diversity used is presented in the cited work.
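The shape of such a schedule can be illustrated as follows. This is a hypothetical sketch, not the paper's formula: we simply grow both bounds with the iteration counter so the search behaves more like LPSO early on and more like GPSO later; the function name and the specific linear form are our assumptions.

```python
import math

def allowed_sc_bounds(t, total_iters, dsc=15, sc_cap=30):
    """Assumed linear schedule for (min, max) allowed SC number at iteration t."""
    frac = t / total_iters
    sc_min = max(1, math.floor(dsc * frac))                 # grows 1 -> dsc
    sc_max = min(sc_cap, dsc + math.floor((sc_cap - dsc) * frac))  # dsc -> cap
    return sc_min, sc_max
```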
3.3.4. Search Bounds Limitation
Many practical problems have bounds on the variables' ranges, and all test functions used in this paper are bounded. To keep the particles flying within the search range, some researchers clamp a particle to the border. Here, a different but similar method is used: only when a particle is within the search range do we calculate its fitness value and update its best positions. As all learning exemplars lie within the search range, a particle will eventually return to it.
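This bound-handling rule can be sketched as follows (an illustrative sketch; the function name and the minimization assumption are ours):

```python
import numpy as np

def update_bests_in_bounds(x, f, lo, hi, pbest, pbest_fit, i):
    """Evaluate particle i and update its personal best ONLY when its position
    lies inside the search range [lo, hi]; out-of-range particles keep their
    old bests and are pulled back by the in-range exemplars."""
    if np.all((x >= lo) & (x <= hi)):
        val = f(x)
        if val < pbest_fit[i]:          # minimization problem
            pbest[i] = x.copy()
            pbest_fit[i] = val
```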
3.3.5. Analysis of Swarm’s Diversity
To compare the particle diversity of OPSO with that of HSCPSO, we omit the previous velocity term and set the acceleration coefficients equal to one. Thus, the following velocity updating equation is employed in the experiment:
Using this equation, we run HSCPSO and OPSO on the Rosenbrock and Rastrigin functions to analyze the number of best particles (NBP) at each iteration. Figure 8 gives a scatter plot of the indices of the best performing particles against the iteration number; for example, a dot at (20, 4000) means that the particle with index 20 has the best global fitness at iteration 4000. The velocity updating strategy of HSCPSO yields more NBP than OPSO, which shows that it increases the swarm's diversity compared with OPSO. The pseudocode of HSCPSO is shown in Algorithm 3.
4. Experiment Settings and Results
4.1. Test Functions and Parameter Settings of PSO Variants
To compare the performance of all algorithms, the Sphere, Rosenbrock, Ackley, Griewanks, Rastrigin, and Rastrigin_noncont functions are selected to test the convergence characteristics, and the Ackley, Rastrigin, Rastrigin_noncont, and Rosenbrock functions are rotated with Salomon's algorithm to increase the optimization difficulty and test the algorithms' robustness (the predefined thresholds are 0.05, 50, 2, and 3, resp.). Table 3 lists the properties of these functions. Note that in Table 3 the values in the search space column specify the range of the initial random particle positions, and the remaining columns give the global optimum and its corresponding fitness value.
To make the different algorithms comparable, the population size is set to 30 for all PSOs, and each test function is run 30 times. In each run, the maximum number of fitness evaluations (FEs) is set to 3 × 10^4 for the unrotated test functions and 6 × 10^4 for the rotated ones. The comparative algorithms and their parameter settings are listed below.
(i) Local version PSO with constriction factor (CF-LPSO),
(ii) Global version PSO with constriction factor (CF-GPSO),
(iii) Fitness-distance-ratio-based PSO (FDR-PSO),
(iv) Fully informed particle swarm optimization (FIPS),
(v) Cooperative particle swarm optimization (CPSO-H),
(vi) Comprehensive learning particle swarm optimizer (CLPSO).
The fully informed PSO (FIPS) with the ring topology that achieved the highest success rate was used. CPSO-H is a cooperative PSO model combined with the standard PSO. The settings c1 = 1.492 and c2 = 1.492 are used in all PSOs except HSCPSO.
4.2. Fixed Iteration Results and Analysis
Tables 4 and 5 present the means and 95% confidence intervals after 3 × 10^4 and 6 × 10^4 function evaluations, with the best results among the seven PSO algorithms shown in bold. In addition, to determine whether the result obtained by HSCPSO is statistically different from those of the other six PSO variants, Wilcoxon rank sum tests are conducted between the HSCPSO result and the best result achieved by the other variants on each test problem; the test results are presented in the last row of Tables 4 and 5. Note that a value of one implies that the performance of the two algorithms is statistically different with 95% certainty, whereas a value of zero indicates that it is not. Figures 9 and 10 present the convergence characteristics in terms of the best fitness value of the median run of each algorithm on each test function.
From the above experimental results, we can observe that the Sphere function is easily optimized by CF-GPSO and CF-LPSO, while the other five algorithms show slower convergence. The Rosenbrock function has a narrow valley from the perceived local optima to the global optimum. In the unrotated case, HSCPSO can avoid premature convergence; in the rotated case, rotation has little effect on any of the algorithms.
The multimodal Ackley function has many local minima positioned in a regular grid. In the unrotated case, HSCPSO takes the lead, and FDR-PSO performs better than the other five PSO variants. In the rotated case, FDR-PSO is trapped in local optima early on, whereas CLPSO and HSCPSO are among the performance leaders. Thanks to its comprehensive learning strategy, CLPSO can discourage premature convergence; likewise, the learning strategy of HSCPSO preserves the diversity of the swarm and discourages premature convergence.
The Rastrigin function exhibits a pattern similar to that observed with the Ackley function. In the unrotated case, HSCPSO performs very well, but its performance deteriorates rapidly when the search space is rotated. In the rotated case, CLPSO takes the lead and is still able to improve its performance.
On the unrotated Rastrigin_noncont function, HSCPSO shows excellent performance compared with the other algorithms, and after 2.6 × 10^4 function evaluations CLPSO has the strongest ability to escape from local optima. In the rotated case, all algorithms except CLPSO and HSCPSO quickly get trapped in a local minimum; these two algorithms avoid premature convergence and escape from local minima.
Altogether, according to the Wilcoxon rank sum tests, the HSCPSO algorithm performs better than the other six algorithms on seven of the test functions. Although HSCPSO does not achieve the best performance on the remaining two, it shows almost the same convergence character as CLPSO on them.
4.3. Robustness Analysis
Tables 6 and 7 show the results of the robustness analysis. Here, "robustness" denotes the stability of an algorithm's optimization ability under different conditions (the rotated and unrotated test functions) according to a fixed criterion: whether the algorithm succeeds in reaching a specified threshold. A robust algorithm is one that reaches the threshold consistently, whether the function is rotated or not. The "Success rate" column lists the rate at which the algorithm reaches the threshold over 60 runs. The "FEs" column gives the number of function evaluations needed to reach the threshold; only the data of the successful runs are used in its computation.
As can be seen in Tables 6 and 7, none of the PSOs had any difficulty reaching the threshold on the Rosenbrock function during the 30 runs in either case. CF-GPSO has some difficulty on both the unrotated and rotated Ackley function, but CF-LPSO reaches the threshold on the unrotated Ackley. FDR-PSO and CPSO failed on the rotated Ackley function but consistently reached the threshold in the unrotated case. FIPS completely failed in both the unrotated and rotated cases. Interestingly, CLPSO reached the threshold during all runs on the rotated Ackley function yet did not achieve a perfect success rate in the unrotated case. Only HSCPSO consistently reached the threshold in both the unrotated and rotated cases.
The Rastrigin_noncont function is hard to solve for the majority of algorithms, as seen in Tables 6 and 7. CLPSO and HSCPSO consistently reached the threshold in the unrotated case, while in the rotated case only HSCPSO achieved a perfect success rate.
On the Rastrigin function, HSCPSO and CLPSO consistently reached the threshold in both the unrotated and rotated cases. CPSO and FDR-PSO reached the threshold in the unrotated case but failed in the rotated case. The CF-LPSO, FIPS, and CF-GPSO algorithms had difficulties in both cases.
Altogether, on the Rosenbrock, Ackley, Rastrigin, and Rastrigin_noncont functions, HSCPSO shows stable optimization ability under the different conditions. CLPSO, FDR-PSO, CPSO, and FIPS consistently reached the threshold on most of the test functions and were slightly less robust. CF-LPSO and CF-GPSO seemed unreliable on the multimodal benchmark functions.
5. Conclusions and Future Works
This paper proposed an improved PSO based on the simulation of human social communication behavior (HSCPSO for short), in which each particle adjusts its learning strategy self-adaptively according to the current conditions. From the convergence characteristics and the robustness analysis, we conclude that HSCPSO significantly improves the ability to solve complicated multimodal problems compared with the other PSO versions. In addition, the Wilcoxon rank sum tests show that the results achieved by HSCPSO are statistically different from the second best results. Since HSCPSO outperformed the other PSO variants on most of the test functions evaluated in this paper, it can be regarded as an effective improvement in the PSO domain.
In the future, we will focus on (i) constructing a model for setting the relevant parameters of PSO, (ii) testing the proposed algorithm's effectiveness on more multimodal test problems and on several composition functions that are more difficult to optimize, (iii) applying the proposed algorithm to practical problems to verify its effectiveness, and (iv) testing the proposed algorithm on the CEC 2005 benchmark problems.
This work is supported by The National Natural Science Foundation of China (Grants nos. 71001072, 71271140, 71210107016), China Postdoctoral Science Foundation (Grant nos. 20100480705, 2012T50584), the Science and Technology Fund of Guizhou Province ( 234000), Zunyi Normal College Research Funded Project (2012 BSJJ19), Shanghai Postdoctoral Fund Project (12R21416000), Science and Technology Project of Shenzhen (Grant no. JC201005280492A), and the Natural Science Foundation of Guangdong Province (Grant no. 9451806001002294).
References
- J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, vol. 4, pp. 1942–1948, December 1995.
- R. Eberhart and Y. Shi, “Comparison between genetic algorithms and particle swarm optimization,” in Proceedings of the 7th Annual Conference on Evolutionary Programming, San Diego, Calif, USA, 1998.
- M. Clerc and J. Kennedy, “The particle swarm-explosion, stability, and convergence in a multidimensional complex space,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 1, pp. 58–73, 2002.
- J. Kennedy and R. Mendes, “Population structure and particle swarm performance,” in Proceedings of the IEEE Congress on Evolutionary Computation, pp. 1671–1676, Honolulu, Hawaii, USA, 2002.
- J. Sun, J. Liu, and W. Xu, “Using quantum-behaved particle swarm optimization algorithm to solve non-linear programming problems,” International Journal of Computer Mathematics, vol. 84, no. 2, pp. 261–272, 2007.
- P. J. Angeline, “Using selection to improve particle swarm optimization,” in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC'98), pp. 84–89, Anchorage, Alaska, USA, May 1998.
- M. Lovbjerg and T. Krink, “Hybrid particle swarm optimizer with breeding and subpopulations,” in Proceedings of the Genetic and Evolutionary Computation Conference, pp. 469–476, 2001.
- A. Stacey and M. Jancic, “Particle swarm optimization with mutation,” in Proceedings of the IEEE Congress on Evolutionary Computation, pp. 1425–1430, Canberra, Australia, 2003.
- P. S. Andrews, “An investigation into mutation operators for particle swarm optimization,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC'06), pp. 1044–1051, July 2006.
- P. N. Suganthan, “Particle swarm optimizer with neighborhood operator,” in Proceedings of the IEEE Congress on Evolutionary Computation, pp. 1958–1962, Washington, DC, USA, 1999.
- X. Hu and R. C. Eberhart, “Multiobjective optimization using dynamic neighborhood particle swarm optimization,” in Proceedings of the IEEE Congress on Evolutionary Computation, pp. 1677–1681, Honolulu, Hawaii, USA, 2002.
- J. J. Liang, A. K. Qin, P. N. Suganthan, and S. Baskar, “Comprehensive learning particle swarm optimizer for global optimization of multimodal functions,” IEEE Transactions on Evolutionary Computation, vol. 10, no. 3, pp. 281–295, 2006.
- T. Peram and K. Veeramachaneni, “Fitness-distance-ratio based particle swarm optimization,” in Proceedings of the IEEE Swarm Intelligence Symposium, pp. 174–181, April 2003.
- A. S. Mohais, R. Mendes, C. Ward, and C. Posthoff, “Neighborhood re-structuring in particle swarm optimization,” in AI 2005: Advances in Artificial Intelligence, vol. 3809 of Lecture Notes in Computer Science, pp. 776–785, Springer, Berlin, Germany, 2005.
- S. Janson and M. Middendorf, “A hierarchical particle swarm optimizer and its adaptive variant,” IEEE Transactions on Systems, Man, and Cybernetics, Part B, vol. 35, no. 6, pp. 1272–1282, 2005.
- W. Elshamy, H. M. Emara, and A. Bahgat, “Clubs-based particle swarm optimization,” in Proceedings of the IEEE Swarm Intelligence Symposium (SIS'07), pp. 289–296, Honolulu, Hawaii, USA, April 2007.
- R. Mendes, J. Kennedy, and J. Neves, “The fully informed particle swarm: simpler, maybe better,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 204–210, 2004.
- F. van den Bergh and A. P. Engelbrecht, “A cooperative approach to particle swarm optimization,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 225–239, 2004.
- S. Z. Zhao, J. J. Liang, P. N. Suganthan, and M. F. Tasgetiren, “Dynamic multi-swarm particle swarm optimizer with local search for large scale global optimization,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC'08), pp. 3845–3852, Hong Kong, China, June 2008.
- Y. M. Liu, Q. Z. Zhao, C. L. Sui, and Z. Z. Shao, “Particle swarm optimizer based on dynamic neighborhood topology and mutation operator,” Control and Decision, vol. 25, no. 7, pp. 968–974, 2010.
- R. Salomon, “Re-evaluating genetic algorithm performance under coordinate rotation of benchmark functions. A survey of some theoretical and practical aspects of genetic algorithms,” BioSystems, vol. 39, no. 3, pp. 263–278, 1996.