Fusion Global-Local-Topology Particle Swarm Optimization for Global Optimization Problems
In recent years, particle swarm optimization (PSO) has been extensively applied to various optimization problems because of its structural and implementation simplicity. However, PSO can become trapped in local optima or exhibit slow convergence when solving complex multimodal problems. To address these issues, an improved PSO scheme called fusion global-local-topology particle swarm optimization (FGLT-PSO) is proposed in this study. The algorithm employs both global and local topologies in PSO to jump out of local optima. FGLT-PSO is evaluated using twenty (20) unimodal and multimodal nonlinear benchmark functions, and its performance is compared with several well-known PSO algorithms. The experimental results show that the proposed method improves the performance of the PSO algorithm in terms of solution accuracy and convergence speed.
PSO is a population-based metaheuristic algorithm introduced by Kennedy and Eberhart [1] in 1995. The algorithm imitates the social behavior of bird flocking or fish schooling to find the globally best solution. Owing to its simple concept, few parameters, and ease of implementation, PSO has received considerable attention for solving real-world optimization problems [2–6] in recent years. Nevertheless, PSO may easily get trapped in local optima when solving complex multimodal problems. Hence, a number of PSO variants have been proposed in the literature to avoid local optima and to find the best solution promptly.
The algorithm applies two different topologies to find a good solution: global and local. In the global topology, the position of each particle is influenced by the best-fitness particle of the entire population, whereas in the local topology each particle is influenced by the best-fitness particle of its neighborhood. Kennedy and Mendes proposed the local (ring) topological structure PSO (LPSO) [8] and the von Neumann topological structure PSO (VPSO) [9]. Mendes et al. [10] introduced the fully informed particle swarm (FIPS) algorithm, and Ratnaweera et al. [11] suggested the self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients (HPSO-TVAC). Other researchers have presented several PSO variants such as dynamic multiswarm PSO (DMS-PSO) [12], comprehensive learning PSO (CLPSO) [13], median-oriented particle swarm optimization (MPSO) [14], centripetal accelerated particle swarm optimization (CAPSO) [15], quadratic interpolation PSO (QIPSO) [16], quantum-behaved particle swarm optimization (QPSO) [17], and adaptive particle swarm optimization (APSO) [18].
Although the aforementioned algorithms have obtained satisfactory results, they still have some disadvantages. For example, LPSO shows a slow convergence rate on unimodal functions [14, 15], and CLPSO is not well suited to solving unimodal problems [13]. Moreover, some of these algorithms outperform PSO but lack its structural simplicity.
To overcome these disadvantages, this study introduces fusion global-local-topology particle swarm optimization (FGLT-PSO). The proposed algorithm performs a global search over the entire search space with a fast convergence speed by hybridizing the local and global topologies in PSO to jump out of local optima.
The remainder of this paper is organized as follows. Section 2 provides a brief review of PSO, followed by some well-known PSO algorithms. The proposed algorithm is described in detail in Section 3. In Section 4, FGLT-PSO is used to solve several benchmark functions, and its performance is compared with that of other PSO algorithms in the literature. Finally, conclusions and future research directions are presented in Section 5.
2. Particle Swarm Optimization (PSO)
2.1. PSO Framework
The PSO algorithm is a population-based metaheuristic that combines two mechanisms, global exploration and local exploitation, to find the optimum solution. Exploration is the ability to search new areas of the space, whereas exploitation is the ability to refine the search around a good solution. The algorithm is initialized by creating a swarm, that is, a population of particles, with random positions. Each particle i is represented in the D-dimensional search space by two vectors, the position X_i = (x_{i1}, ..., x_{iD}) and the velocity V_i = (v_{i1}, ..., v_{iD}), and Pbest_i is the personal best position found by the ith particle:

Pbest_i(t + 1) = X_i(t + 1) if f(X_i(t + 1)) < f(Pbest_i(t)), and Pbest_i(t + 1) = Pbest_i(t) otherwise. (1)

In addition, the best position obtained by the entire population (Gbest) is computed to update the particle velocity:

Gbest(t) = arg min f(Pbest_i(t)), i ∈ {1, 2, ..., N}. (2)

Based on Pbest_i and Gbest, the next velocity and position of the ith particle are computed using (3) and (4) as follows:

V_i(t + 1) = w V_i(t) + c_1 · rand1 · (Pbest_i(t) − X_i(t)) + c_2 · rand2 · (Gbest(t) − X_i(t)), (3)
X_i(t + 1) = X_i(t) + V_i(t + 1), (4)

where V_i(t + 1) and V_i(t) are the next and current velocity of the ith particle, respectively, w is the inertia weight, c_1 and c_2 are acceleration coefficients, and rand1 and rand2 are random numbers in the interval [0, 1]. N is the number of particles, and X_i(t + 1) and X_i(t) are the next and current position of the ith particle.
Also, v_{id} ∈ [−v_max, v_max], where v_max is set to a constant based on the search-space bound. A larger value of v_max encourages global exploration (searching new areas), while a smaller value provides local exploitation.
In (3), the second and the third terms are called the cognition and social terms, respectively. The two models applied to choose the best position are known as the Gbest (global topology) and Lbest (local topology) models. In this paper, the Gbest model and the Lbest model are called PSO and LPSO, respectively.
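As a concrete illustration, the Gbest-model updates (3) and (4) can be sketched in Python as follows. The swarm size, iteration count, coefficient values, and the linearly decreasing inertia schedule here are illustrative choices, not necessarily the settings used in the experiments of this paper.

```python
import random

def sphere(x):
    """Sphere benchmark: f(x) = sum(x_i^2), minimum 0 at the origin."""
    return sum(t * t for t in x)

def pso(f, dim, bounds, n_particles=30, iters=200, c1=2.0, c2=2.0):
    """Gbest-model PSO implementing updates (3) and (4); a minimal sketch."""
    lo, hi = bounds
    v_max = (hi - lo) / 2.0                        # velocity clamp from the search-space bound
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                          # personal best positions (Pbest)
    pf = [f(x) for x in X]                         # personal best fitness values
    g = min(range(n_particles), key=lambda i: pf[i])
    gbest, gf = P[g][:], pf[g]                     # swarm best (Gbest)
    for t in range(iters):
        w = 0.9 - 0.5 * t / (iters - 1)            # inertia weight decreases from 0.9 to 0.4
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (P[i][d] - X[i][d])    # cognition term
                           + c2 * random.random() * (gbest[d] - X[i][d]))  # social term
                V[i][d] = max(-v_max, min(v_max, V[i][d]))
                X[i][d] = max(lo, min(hi, X[i][d] + V[i][d]))
            fx = f(X[i])
            if fx < pf[i]:                         # update Pbest and, if needed, Gbest
                P[i], pf[i] = X[i][:], fx
                if fx < gf:
                    gbest, gf = X[i][:], fx
    return gbest, gf

random.seed(1)
best, val = pso(sphere, dim=5, bounds=(-100.0, 100.0))
```

On the 5-dimensional sphere function this sketch converges quickly, which matches the observation that the Gbest model favors fast convergence on unimodal landscapes.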
2.2. Improved PSO Algorithms
Since Kennedy and Eberhart introduced the PSO algorithm, it and its improved schemes have been extensively applied to many problems [20–25]. Many researchers have proposed modified PSO variants based on swarm topology [8, 9], parameter selection [19, 26], combination of PSO with other evolutionary computation (EC) techniques [27, 28], integration of self-adaptation, and so on.
LPSO [8] and VPSO [9] were proposed based on a local topology to avoid premature convergence in solving multimodal problems. The FIPS algorithm [10] is another PSO variant, which uses the information of the entire neighborhood to guide the particles toward the best solution. Dynamic multiswarm PSO (DMS-PSO) [12] was suggested by Liang and Suganthan to dynamically enhance the topological structure. Ratnaweera et al. [11] proposed the HPSO-TVAC algorithm based on linearly time-varying acceleration coefficients, in which a larger c_1 and a smaller c_2 are set at the beginning and are gradually reversed throughout the search. Liang et al. [13] presented comprehensive learning particle swarm optimization (CLPSO), which focuses on avoiding local optima by encouraging each particle to learn from other particles on different dimensions.
In other research, a selection operator for PSO was first introduced by Angeline [30]; it is similar to the one used in genetic algorithms (GAs). Other researchers incorporated crossover and mutation operations from GA into PSO. Pant et al. proposed a quadratic crossover operator for PSO, called quadratic interpolation PSO (QIPSO) [16]. An adaptive fuzzy particle swarm optimization (AFPSO) [19] was proposed, utilizing fuzzy inferences to adjust the acceleration coefficients. The quadratic crossover operator [16] was then combined with AFPSO (AFPSO-QI) [19] to achieve better performance on multimodal problems. Zhan et al. presented adaptive particle swarm optimization (APSO) [18] using a real-time evolutionary state estimation procedure and an elitist learning strategy. A PSO variant based on an orthogonal learning strategy (OLPSO) [32] was introduced to guide particles in discovering useful information from their personal best positions and from their neighborhood's best position in order to fly in better directions. Gao et al. [33] used PSO with chaotic opposition-based population initialization and a stochastic search technique to solve complex multimodal problems; the resulting algorithm, CSPSO, searches the neighborhoods of the previous best positions in order to escape from local optima in multimodal functions. Beheshti et al. proposed median-oriented particle swarm optimization (MPSO) [14] and centripetal accelerated particle swarm optimization (CAPSO) [15], based on Newton's laws of motion, to accelerate learning and convergence in optimization problems.
3. FGLT-PSO: The Proposed Method
3.1. FGLT-PSO Algorithm
FGLT-PSO aims to overcome the disadvantages of PSO by avoiding local optima and accelerating convergence. According to [14, 15], PSO has shown better performance than LPSO on unimodal problems, while LPSO shows good results on multimodal problems. Hence, both the local and global topologies are hybridized in FGLT-PSO to increase the convergence rate and to avoid trapping in local optima.
In the FGLT-PSO algorithm, each particle uses the best position found by its neighbors (Lbest_i) to update its velocity. The next position of each particle is computed in (6) based on the current position, X_i(t), the next velocity, V_i(t + 1), and the best position found by the swarm, Gbest. In (6), Lbest_i is computed from the ring neighborhood of particle i. Also, c_1, c_2, and c_3 are acceleration coefficients, modified according to (10), where t and t_max are the current iteration and the maximum number of iterations, respectively.
The second term in (6) is called the cognition term, and the third terms in (6) and (7) are called the social terms. In (7), the velocity is bounded, v_{id} ∈ [−v_max, v_max], where v_max is set to a constant based on the search-space bound.
3.2. Analysis of FGLT-PSO
A metaheuristic algorithm explores new regions of the space in its initial steps to avoid becoming trapped in a local optimum. Owing to the poor exploration of the standard PSO, it can sometimes be caught in local optima in multimodal problems. Moreover, once a particle falls into a local optimum, it may be unable to get out of it. That is, if the Gbest obtained from the population lies in a local optimum while the current position and the personal best position of the particle are in the same local optimum, the second and third terms of (3) tend to zero, and the inertia weight w decreases linearly toward zero. Consequently, the next velocity of the particle tends to zero, its next position in (4) does not change, and the particle remains in the local optimum. Hence, the main aim of FGLT-PSO is to overcome this poor exploration and to increase the convergence rate by combining the local and global searches, as shown in Figure 1. The particles move in the search space based on the best solutions found by their neighbors (Lbest) and by the swarm (Gbest). At the beginning, the particles search new regions. As the iterations elapse, exploration fades out and exploitation fades in; that is, the particles accelerate toward the good solutions and search around them to find the best solution.
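The fused search described above can be sketched as follows. This is an illustrative reading of the description, not a reproduction of the paper's exact update equations: velocities follow the ring-neighborhood best (Lbest) while positions receive an additional pull toward the swarm best (Gbest). The function `fglt_step` and all parameter values are hypothetical.

```python
import random

def sphere(x):
    return sum(t * t for t in x)

def fglt_step(X, V, P, pf, w, c1, c2, c3, v_max, bounds):
    """One iteration of the fused idea: local (Lbest) social pull in the
    velocity update, global (Gbest) social pull in the position update."""
    n, dim = len(X), len(X[0])
    lo, hi = bounds
    g = min(range(n), key=lambda i: pf[i])
    gbest = P[g]                                    # Gbest over the whole swarm
    for i in range(n):
        nbrs = [(i - 1) % n, i, (i + 1) % n]        # ring neighborhood of size three
        lb = P[min(nbrs, key=lambda j: pf[j])]      # Lbest of particle i
        for d in range(dim):
            V[i][d] = (w * V[i][d]
                       + c1 * random.random() * (P[i][d] - X[i][d])  # cognition term
                       + c2 * random.random() * (lb[d] - X[i][d]))   # local social term
            V[i][d] = max(-v_max, min(v_max, V[i][d]))
            # position update adds a global social pull toward Gbest
            X[i][d] += V[i][d] + c3 * random.random() * (gbest[d] - X[i][d])
            X[i][d] = max(lo, min(hi, X[i][d]))
    return X, V

# hypothetical usage on the 3-dimensional sphere function
random.seed(0)
n, dim, bounds = 10, 3, (-5.0, 5.0)
X = [[random.uniform(*bounds) for _ in range(dim)] for _ in range(n)]
V = [[0.0] * dim for _ in range(n)]
P = [x[:] for x in X]
pf = [sphere(x) for x in X]
start = min(pf)
for t in range(100):
    w = 0.9 - 0.5 * t / 99.0                        # exploration fades, exploitation grows
    X, V = fglt_step(X, V, P, pf, w, 1.5, 1.5, 1.5, 5.0, bounds)
    for i in range(n):
        fx = sphere(X[i])
        if fx < pf[i]:
            P[i], pf[i] = X[i][:], fx
```

The point of the sketch is the division of labor: the Lbest term keeps information flowing slowly through the ring (preserving diversity), while the Gbest term in the position update accelerates the swarm toward the current best region.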
4. Experimental Results
In this section, the FGLT-PSO algorithm is compared with some well-known PSO algorithms. The algorithms are tested on various unimodal and multimodal functions of different dimensions. Several benchmark functions [34, 35] are selected to evaluate the performance of the proposed method.
4.1. Benchmark Functions
Twenty (20) minimization functions are used in the experimental study, including unimodal, multimodal, rotated, shifted, and shifted-rotated functions, as detailed in Table 1. In the table, Range and D are the feasible bounds and the dimension of each function, respectively, and the listed optimum is the minimum value of each function. Among the benchmarks, functions (1)–(4) are unimodal and functions (5)–(9) are multimodal. Functions (10)–(14) are rotated unimodal and multimodal functions, and functions (15)–(18) are shifted unimodal and multimodal functions. Two functions, (19) and (20), are shifted-rotated multimodal functions.
For unimodal functions, the convergence rate of the search algorithm is more interesting than the final result because other methods have been specifically designed to optimize these kinds of functions. For multimodal functions, finding an optimal (or a good near-global optimal) solution is important. These functions are more difficult to optimize because the number of local optima increases exponentially as the dimension increases. Therefore, the search algorithms should not become trapped in a local optimum and should be able to obtain good solutions.
Rotating a function increases its complexity but does not affect its shape. The rotated variable z is computed using an orthogonal matrix M [36] and is applied to obtain the fitness value of the rotated function as follows: z = Mx, with the fitness evaluated as f(z).
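A rotated benchmark of this kind can be produced, for instance, by orthonormalizing random Gaussian vectors, which is one common way to build an orthogonal matrix M. The helper names below are illustrative, not the construction used in the cited work.

```python
import math
import random

def random_orthogonal(d, seed=0):
    """Build a random d-by-d orthogonal matrix M (orthonormal rows)
    via Gram-Schmidt on Gaussian random vectors."""
    rng = random.Random(seed)
    rows = []
    while len(rows) < d:
        v = [rng.gauss(0.0, 1.0) for _ in range(d)]
        for r in rows:                               # remove components along earlier rows
            dot = sum(a * b for a, b in zip(v, r))
            v = [a - dot * b for a, b in zip(v, r)]
        norm = math.sqrt(sum(a * a for a in v))
        if norm > 1e-9:                              # skip (near-)dependent draws
            rows.append([a / norm for a in v])
    return rows

def rotated(f, M):
    """Wrap a benchmark f so it is evaluated on the rotated variable z = Mx."""
    return lambda x: f([sum(mij * xj for mij, xj in zip(row, x)) for row in M])
```

Because an orthogonal M preserves distances, a rotation-invariant function such as the sphere is unchanged, while separable functions such as Rastrigin's become nonseparable and harder for dimension-wise search.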
In shifted functions, the global optimum is shifted to a new position o. All the test functions are given as follows.
(1) Sphere Model (unimodal function). Consider f(x) = ∑_{i=1}^{D} x_i².
(2) Schwefel's Problem 2.22 (unimodal function). Consider f(x) = ∑_{i=1}^{D} |x_i| + ∏_{i=1}^{D} |x_i|.
(3) Schwefel's Problem 1.2 (unimodal function). Consider f(x) = ∑_{i=1}^{D} (∑_{j=1}^{i} x_j)².
(4) Quartic Function, That Is, Noise (unimodal function). Consider f(x) = ∑_{i=1}^{D} i·x_i⁴ + random[0, 1).
(5) Rosenbrock's Function (multimodal function). Consider f(x) = ∑_{i=1}^{D−1} [100(x_{i+1} − x_i²)² + (x_i − 1)²]. This function is unimodal in a 2- or 3-dimensional search space but can be treated as a multimodal function in high-dimensional cases.
(6) Generalized Schwefel's Problem 2.26 (multimodal function). Consider f(x) = −∑_{i=1}^{D} x_i sin(√|x_i|).
(7) Ackley's Function (multimodal function). Consider f(x) = −20 exp(−0.2 √((1/D) ∑_{i=1}^{D} x_i²)) − exp((1/D) ∑_{i=1}^{D} cos(2πx_i)) + 20 + e.
(8) Generalized Penalized Function (multimodal function). Consider
(9) Noncontinuous Rastrigin's Function (multimodal function). Consider f(x) = ∑_{i=1}^{D} [y_i² − 10 cos(2πy_i) + 10], where y_i = x_i if |x_i| < 1/2 and y_i = round(2x_i)/2 if |x_i| ≥ 1/2.
(10) Rotated Schwefel's Problem 2.22 (unimodal function). Consider f(x) = ∑_{i=1}^{D} |z_i| + ∏_{i=1}^{D} |z_i|, where z = Mx.
(11) Rotated Minima Function (multimodal function). Consider
(12) Rotated Rastrigin's Function (multimodal function). Consider f(x) = ∑_{i=1}^{D} [z_i² − 10 cos(2πz_i) + 10], where z = Mx.
(13) Rotated Weierstrass’ Function (multimodal function). Consider
(14) Rotated Salomon's Function (multimodal function). Consider f(x) = 1 − cos(2π‖z‖) + 0.1‖z‖, where ‖z‖ = √(∑_{i=1}^{D} z_i²) and z = Mx.
(15) Shifted Schwefel's Problem 2.21 (unimodal function). Consider f(x) = max_i {|z_i|, 1 ≤ i ≤ D}, where z = x − o.
(16) Shifted Generalized Griewank's Function (multimodal function). Consider f(x) = (1/4000) ∑_{i=1}^{D} z_i² − ∏_{i=1}^{D} cos(z_i/√i) + 1, where z = x − o.
(17) Shifted Rosenbrock's Function (multimodal function). Consider f(x) = ∑_{i=1}^{D−1} [100(z_{i+1} − z_i²)² + (z_i − 1)²], where z = x − o + 1.
(18) Shifted Rastrigin's Function (multimodal function). Consider f(x) = ∑_{i=1}^{D} [z_i² − 10 cos(2πz_i) + 10], where z = x − o.
(19) Shifted Rotated Ackley's Function (multimodal function). Consider Ackley's function evaluated on z = (x − o)M, where M is a linear transformation matrix with condition number = 100.
(20) Shifted Rotated Rastrigin's Function (multimodal function). Consider Rastrigin's function evaluated on z = (x − o)M, where M is a linear transformation matrix with condition number = 2.
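For reference, a few of the unrotated, unshifted base functions above have the following standard Python forms (following the usual definitions of Yao et al.); the code is an illustrative transcription, not the evaluation harness used in the experiments.

```python
import math

def sphere(x):
    """Sphere model: unimodal, minimum 0 at the origin."""
    return sum(t * t for t in x)

def rosenbrock(x):
    """Rosenbrock's function: minimum 0 at x_i = 1; multimodal in high dimensions."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1.0) ** 2
               for i in range(len(x) - 1))

def ackley(x):
    """Ackley's function: multimodal, minimum 0 at the origin."""
    d = len(x)
    s1 = sum(t * t for t in x) / d
    s2 = sum(math.cos(2.0 * math.pi * t) for t in x) / d
    return -20.0 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20.0 + math.e

def rastrigin(x):
    """Rastrigin's function (basis of the rotated and shifted variants):
    minimum 0 at the origin."""
    return sum(t * t - 10.0 * math.cos(2.0 * math.pi * t) + 10.0 for t in x)
```

The rotated, shifted, and shifted-rotated variants are then obtained by composing these with z = Mx, z = x − o, or both, as defined above.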
4.2. Results of FGLT-PSO
The results of FGLT-PSO are provided in three sections. In Section 4.2.1, the acceleration coefficients c_1, c_2, and c_3 of the proposed method are varied according to (10), and in Section 4.2.2, these factors are constant. In these sections, FGLT-PSO is evaluated on the benchmark functions with dimensions 10, 30, and 50. The maximum number of iterations is set at 5000 for D = 10, at 10000 for D = 30, and at 15000 for D = 50. The population size is set to 50 (N = 50). Also, the inertia weight w decreases linearly from 0.9 to 0.4.
In Section 4.2.3, the results of FGLT-PSO are compared with those of several well-known PSO algorithms reported in [19] on the common functions. In this section, the population size is set to 30 (N = 30), D is 30, and the maximum number of iterations is set at 10000.
The ring topology is used as the neighborhood structure in the Lbest model for the FGLT-PSO and LPSO algorithms, and the number of neighbors for each particle is three. The algorithms are run independently 30 times on each benchmark function, and the results are averaged.
4.2.1. The Results of Proposed Method with Variable Acceleration Coefficients
Four algorithms, FGLT-PSO, PSO, LPSO, and QIPSO, are randomly initialized and run on the benchmark functions. The average best solution, the standard deviation (SD), and the median of the best solution in the last iteration are reported in Tables 2, 3, 6, 7, 10, and 11. The best results among the algorithms are shown in bold. In the tables, the algorithms are ranked based on the average best results.
Moreover, Wilcoxon's rank sum test [37] is conducted to determine whether the results obtained by FGLT-PSO differ from those generated by the other algorithms with statistical significance. The tests are shown in Tables 4, 5, 8, 9, 12, and 13, where h-value = 1 indicates that the proposed algorithm significantly outperformed the compared algorithm with 95% certainty, h-value = −1 indicates that the compared algorithm is significantly better than the proposed algorithm, and h-value = 0 denotes that the results of the two considered algorithms are not significantly different. In these tables, rows 1 (better), 0 (same), and −1 (worse) give the number of functions on which FGLT-PSO performs significantly better than, about the same as, or significantly worse than the compared algorithm, respectively.
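As a sketch, h-values of this kind can be produced with a rank-sum test as follows. The implementation below uses the normal approximation without a tie-variance correction, so it is a simplified stand-in for the cited test, not the code used in the paper; the function names are illustrative.

```python
import math

def _ranks(values):
    """1-based ranks, with tied values sharing the average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j + 2) / 2.0                     # average of positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def ranksum_h(a, b, alpha=0.05):
    """h-value convention for minimization: 1 if sample a is significantly
    smaller (better), -1 if significantly larger (worse), 0 otherwise."""
    n1, n2 = len(a), len(b)
    r = _ranks(list(a) + list(b))
    w = sum(r[:n1])                                 # rank sum of sample a
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    # two-sided p-value from the normal approximation
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    if p >= alpha:
        return 0
    return 1 if z < 0 else -1                       # lower ranks = smaller errors = better
```

With two samples of 30 final errors per algorithm, `ranksum_h(errors_fglt, errors_other)` then yields the 1/0/−1 entries tabulated above.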
The acceleration coefficients c_1, c_2, and c_3 are updated based on (10), varying between their minimum values and a maximum of 1.5.
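Since the exact update rule (10) is not reproduced here, one hedged reading of a coefficient that varies over time between fixed minimum and maximum values is a linear ramp over the iterations (in the spirit of the linearly time-varying coefficients of HPSO-TVAC); the helper below is purely illustrative.

```python
def tv_coeff(c_min, c_max, t, t_max, increasing=True):
    """Linearly time-varying acceleration coefficient between c_min and c_max.
    Hypothetical ramp; not the paper's exact rule (10)."""
    frac = t / float(t_max)
    if increasing:
        return c_min + (c_max - c_min) * frac       # ramps up over the run
    return c_max - (c_max - c_min) * frac           # ramps down over the run
```

An increasing social coefficient and a decreasing cognitive coefficient, for example, shift the swarm from independent exploration toward collective exploitation as the run progresses.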
In these tables, the benchmark functions are divided into two categories: unimodal and multimodal functions, and rotated, shifted, and shifted-rotated unimodal and multimodal functions. The experimental results demonstrate that FGLT-PSO yields superior results for most of the functions in all tested dimensions.
Tables 2 and 3 show the experimental results for all benchmark functions with dimension D = 10. As illustrated, the FGLT-PSO algorithm surpasses the PSO, LPSO, and QIPSO algorithms in minimizing functions , , , , , and (20). Moreover, the proposed method provides significant improvements in functions , , , , , , and . In these functions, the convergent results attain optimal (or good near-optimal) solutions. Tables 2 and 3 also report the average iteration for finding the best solution, that is, the number of iterations each algorithm requires to find its best solution. As shown, FGLT-PSO finds the best solutions faster than the other algorithms on the majority of functions. It is also noticeable that the FGLT-PSO algorithm achieves the best solution in considerably fewer iterations in functions and , on which the algorithms otherwise show identical results.
According to Wilcoxon's rank sum test in Tables 4 and 5 for D = 10, the results of FGLT-PSO are statistically significantly different from those of the three compared algorithms. The superior convergence rate of FGLT-PSO is shown in Figure 2. The results in this figure illustrate that FGLT-PSO tends to find the global optimum in and faster than PSO, LPSO, and QIPSO and obtains the highest accuracy for these functions among all the algorithms.
The minimization results for the benchmark functions with dimension D = 30 are presented in Tables 6 and 7. As seen in these tables and in Tables 8 and 9, FGLT-PSO outperforms the PSO, LPSO, and QIPSO algorithms in functions , , , , , , , , , , , , and . The largest difference in performance between the proposed algorithm and PSO, LPSO, and QIPSO occurs for functions , , , , , , , and . Figure 3 illustrates the progress of the average best solution over 30 runs for and . As demonstrated, FGLT-PSO shows a higher convergence rate than the other algorithms.
Tables 10 and 11 present the results of the algorithms for the test functions with dimension D = 50. As illustrated in these tables, and according to the Wilcoxon rank sum results in Tables 12 and 13, the performance of the proposed method is the best on most of the functions, especially functions , , , , , , , , , , , , , and . The superior convergence rate of FGLT-PSO is shown in Figure 4. The results in this figure show that FGLT-PSO performs best on the test functions and with D = 50.
In addition, it is notable that the PSO, LPSO, and QIPSO algorithms return results far from the global optima as the dimension increases. This problem is clear for functions , , , , , , , and in Tables 10 and 11 with D = 50. All of this indicates that the proposed FGLT-PSO algorithm is more powerful and robust than the others for solving unimodal and multimodal functions.
4.2.2. The Results of Proposed Method with Constant Acceleration Coefficients
In this section, the acceleration coefficients c_1, c_2, and c_3 are set to constant values for comparison with the results presented in Section 4.2.1; constant values are used for the coefficient of the cognition term (c_1) and those of the social terms (c_2 and c_3). Table 14 shows the results of the proposed method on the benchmark functions with dimensions 10, 30, and 50. As seen, FGLT-PSO with constant acceleration coefficients performs well on most of the functions. As the dimension increases, FGLT-PSO with the variable acceleration coefficients (Section 4.2.1) shows better performance than the constant version for functions , , , , , and , whereas FGLT-PSO with the constant acceleration coefficients performs better for functions , , and .
4.2.3. Comparison with the Other PSO Algorithms
In this section, several well-known PSO algorithms are selected to assess the performance of the proposed algorithm on the benchmarks. The PSO, QIPSO, FIPS, DMS-PSO, CLPSO, AFPSO, and AFPSO-QI algorithms are considered for comparison; the details of these algorithms are listed in Table 15. FGLT-PSO is run 30 times, and the average best solutions and the SD of the results for eight common multimodal benchmark functions are compared with the results reported in [19], as illustrated in Table 16. The maximum number of iterations is 10000, N = 30, and D = 30. As seen, FGLT-PSO provides better results than the other algorithms for the majority of functions (functions , , , , , and ) and attains the first rank.
5. Conclusion
In this study, a fusion global-local-topology PSO algorithm (FGLT-PSO) has been presented to extend the search capability and to improve convergence efficiency by combining local and global topologies. The algorithm is a global search method with several advantages: FGLT-PSO has a simple concept and structure, it is easy to implement, and it is not sensitive to increases in dimension.
A set of standard benchmarks, including unimodal, multimodal, rotated, shifted, and shifted-rotated unimodal and multimodal functions, has been used to evaluate the proposed algorithm. The average best results obtained by FGLT-PSO have been compared with those of PSO, LPSO, QIPSO, FIPS, DMS-PSO, CLPSO, AFPSO, and AFPSO-QI. The experimental results show that the proposed FGLT-PSO algorithm improves the accuracy of the results compared with the other algorithms.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The authors thank the Research Management Centre (RMC), Universiti Teknologi Malaysia (UTM), for its support in R&D, and the Soft Computing Research Group (SCRG), Universiti Teknologi Malaysia (UTM), Johor Bahru, Malaysia, for the inspiration and moral support in conducting this research. They would also like to thank the postdoctoral program of Universiti Teknologi Malaysia (UTM) for the financial support and research activities. This work is supported by the Ministry of Higher Education (MOHE) under Fundamental Research Grant Scheme FRGS-4F347.
References
1. J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, December 1995.
2. K. Y. Chan, T. S. Dillon, and C. K. Kwong, "Polynomial modeling for time-varying systems based on a particle swarm optimization algorithm," Information Sciences, vol. 181, no. 9, pp. 1623–1640, 2011.
3. V. Fathi and G. A. Montazer, "An improvement in RBF learning algorithm based on PSO for real time applications," Neurocomputing, vol. 111, pp. 169–176, 2013.
4. H. Huang, H. Qin, Z. Hao, and A. Lim, "Example-based learning particle swarm optimization for continuous optimization," Information Sciences, vol. 182, no. 1, pp. 125–138, 2012.
5. B. Y. Qu, J. J. Liang, and P. N. Suganthan, "Niching particle swarm optimization with local search for multi-modal optimization," Information Sciences, vol. 197, pp. 131–143, 2012.
6. H. Wang, I. Moon, S. Yang, and D. Wang, "A memetic particle swarm optimization algorithm for multimodal optimization problems," Information Sciences, vol. 197, pp. 38–52, 2012.
7. Z. Beheshti and S. M. Shamsuddin, International Journal of Advances in Soft Computing & Its Applications, vol. 5, no. 1, pp. 1–35, 2013.
8. J. Kennedy and R. Mendes, "Population structure and particle swarm performance," in Proceedings of the IEEE International Conference on Evolutionary Computation, vol. 2, pp. 1671–1676, 2002.
9. J. Kennedy and R. Mendes, "Neighborhood topologies in fully informed and best-of-neighborhood particle swarms," IEEE Transactions on Systems, Man, and Cybernetics, Part C, vol. 36, pp. 515–519, 2006.
10. R. Mendes, J. Kennedy, and J. Neves, "The fully informed particle swarm: simpler, maybe better," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 204–210, 2004.
11. A. Ratnaweera, S. K. Halgamuge, and H. C. Watson, "Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 240–255, 2004.
12. J. J. Liang and P. N. Suganthan, "Dynamic multi-swarm particle swarm optimizer," in Proceedings of the IEEE Swarm Intelligence Symposium (SIS '05), pp. 127–132, June 2005.
13. J. J. Liang, A. K. Qin, P. N. Suganthan, and S. Baskar, "Comprehensive learning particle swarm optimizer for global optimization of multimodal functions," IEEE Transactions on Evolutionary Computation, vol. 10, no. 3, pp. 281–295, 2006.
14. Z. Beheshti, S. M. H. Shamsuddin, and S. Hasan, "MPSO: median-oriented particle swarm optimization," Applied Mathematics and Computation, vol. 219, no. 11, pp. 5817–5836, 2013.
15. Z. Beheshti and S. M. Hj. Shamsuddin, "CAPSO: centripetal accelerated particle swarm optimization," Information Sciences, vol. 258, pp. 54–79, 2014.
16. M. Pant, T. Radha, and V. P. Singh, "A new particle swarm optimization with quadratic interpolation," in Proceedings of the International Conference on Computational Intelligence and Multimedia Applications (ICCIMA '07), pp. 55–60, December 2007.
17. P. Liu, W. Leng, and W. Fang, "Training ANFIS model with an improved quantum-behaved particle swarm optimization algorithm," Mathematical Problems in Engineering, vol. 2013, Article ID 595639, 10 pages, 2013.
18. Z.-H. Zhan, J. Zhang, Y. Li, and H. S.-H. Chung, "Adaptive particle swarm optimization," IEEE Transactions on Systems, Man, and Cybernetics, Part B, vol. 39, no. 6, pp. 1362–1381, 2009.
19. Y.-T. Juang, S.-L. Tung, and H.-C. Chiu, "Adaptive fuzzy particle swarm optimization for global optimization of multimodal functions," Information Sciences, vol. 181, no. 20, pp. 4539–4549, 2011.
20. Z. Beheshti, S. M. Shamsuddin, E. Beheshti, and S. S. Yuhaniz, "Enhancement of artificial neural network learning using centripetal accelerated particle swarm optimization for medical diseases diagnosis," Soft Computing, 2013.
21. Z. Beheshti, S. M. Shamsuddin, and S. S. Yuhaniz, "Binary accelerated particle swarm algorithm (BAPSA) for discrete optimization problems," Journal of Global Optimization, vol. 57, no. 2, pp. 549–573, 2013.
22. M. A. Cavuslu, C. Karakuzu, and F. Karakaya, "Neural identification of dynamic systems on FPGA with improved PSO learning," Applied Soft Computing, vol. 12, no. 9, pp. 2707–2718, 2012.
23. L. Liu, S. Yang, and D. Wang, "Force-imitated particle swarm optimization using the near-neighbor effect for locating multiple optima," Information Sciences, vol. 182, no. 1, pp. 139–155, 2012.
24. Z. Beheshti and S. M. Shamsuddin, Centripetal accelerated particle swarm optimization and its applications in machine learning [Ph.D. thesis], Universiti Teknologi Malaysia, 2013.
25. Y. Wang, J. Zhou, C. Zhou, Y. Wang, H. Qin, and Y. Lu, "An improved self-adaptive PSO technique for short-term hydrothermal scheduling," Expert Systems with Applications, vol. 39, no. 3, pp. 2288–2295, 2012.
26. Q. Luo and D. Yi, "A co-evolving framework for robust particle swarm optimization," Applied Mathematics and Computation, vol. 199, no. 2, pp. 611–622, 2008.
27. P. S. Andrews, "An investigation into mutation operators for particle swarm optimization," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '06), pp. 1044–1051, July 2006.
28. M. Lovbjerg, T. K. Rasmussen, and T. Krink, "Hybrid particle swarm optimizer with breeding and subpopulations," in Proceedings of the 3rd Genetic and Evolutionary Computation Conference, pp. 469–476, 2001.
29. S. Tsafarakis, C. Saridakis, G. Baltas, and N. Matsatsinis, "Hybrid particle swarm optimization with mutation for optimizing industrial product lines: an application to a mixed solution space considering both discrete and continuous design variables," Industrial Marketing Management, vol. 42, no. 4, pp. 496–506, 2013.
30. P. J. Angeline, "Using selection to improve particle swarm optimization," in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC '98), pp. 84–89, May 1998.
31. Y.-P. Chen, W.-C. Peng, and M.-C. Jian, "Particle swarm optimization with recombination and dynamic linkage discovery," IEEE Transactions on Systems, Man, and Cybernetics, Part B, vol. 37, no. 6, pp. 1460–1470, 2007.
32. Z.-H. Zhan, J. Zhang, Y. Li, and Y.-H. Shi, "Orthogonal learning particle swarm optimization," IEEE Transactions on Evolutionary Computation, vol. 15, no. 6, pp. 832–847, 2011.
33. W.-F. Gao, S.-Y. Liu, and L.-L. Huang, "Particle swarm optimization with chaotic opposition-based population initialization and stochastic search technique," Communications in Nonlinear Science and Numerical Simulation, vol. 17, no. 11, pp. 4316–4327, 2012.
34. K. Tang, X. Yao, P. N. Suganthan et al., "Benchmark functions for the CEC'2008 special session and competition on large scale global optimization," Tech. Rep., Nature Inspired Computation and Applications Laboratory, USTC, Anhui, China, http://nical.ustc.edu.cn/cec08ss.php.
35. X. Yao, Y. Liu, and G. Lin, "Evolutionary programming made faster," IEEE Transactions on Evolutionary Computation, vol. 3, no. 2, pp. 82–102, 1999.
36. R. Salomon, "Re-evaluating genetic algorithm performance under coordinate rotation of benchmark functions. A survey of some theoretical and practical aspects of genetic algorithms," BioSystems, vol. 39, no. 3, pp. 263–278, 1996.
37. F. Wilcoxon, "Individual comparisons by ranking methods," Biometrics Bulletin, vol. 1, no. 6, pp. 80–83, 1945.