Journal of Applied Mathematics
Volume 2014 (2014), Article ID 293087, 10 pages
http://dx.doi.org/10.1155/2014/293087
Research Article

A Comparison of Selected Modifications of the Particle Swarm Optimization Algorithm

Department of Water Resources and Environmental Modeling, Faculty of Environmental Sciences, Czech University of Life Sciences Prague, Kamýcká 1176, Prague 6, 165 21 Suchdol, Czech Republic

Received 10 October 2013; Accepted 24 March 2014; Published 15 June 2014

Academic Editor: Nan-Jing Huang

Copyright © 2014 Michala Jakubcová et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We compare 27 modifications of the original particle swarm optimization (PSO) algorithm. The analysis evaluates nine basic PSO types, which differ in how the swarm evolution is controlled, through various inertia weights or a constriction factor. Each of the basic PSO modifications is analyzed using three different distributed strategies. In the first strategy, the entire swarm population is considered as one unit (OC-PSO); the second strategy periodically partitions the population into equally large complexes according to the particles’ functional values (SCE-PSO); and the final strategy periodically splits the swarm population into complexes using random permutation (SCERand-PSO). All variants are tested on 11 benchmark functions prepared for the special session on real-parameter optimization of CEC 2005. The best modification of the PSO algorithm is found to be the variant with adaptive inertia weight. The best distribution strategy is SCE-PSO, which gives better results than OC-PSO and SCERand-PSO for seven functions. The sphere function showed no significant difference between SCE-PSO and SCERand-PSO. It follows that a shuffling mechanism improves the optimization process.

1. Introduction

Particle swarm optimization (PSO) is a stochastic, metaheuristic computational technique for searching for optimal regions of a multidimensional space. It is an optimization method inspired by the social behaviour of organisms and was established by Kennedy and Eberhart in 1995 [1]. The technique is based on iterative work with a population. PSO is an evolutionary computation (EC) method within the group of techniques known as swarm intelligence (SI) [2, 3]. PSO mimics the movement of a flock of birds or a school of fish using simple rules for adjusting each particle’s location by means of its velocity information.

PSO’s main benefits are that there are few parameters to adjust and the method is easy to implement. Another advantage of PSO over derivative-based local search methods is that no gradient information is needed during the iterative search when solving complicated optimization problems [4–6].

While it has been successfully applied to solve many test and real-life optimization problems [7–9], the PSO method often suffers from premature convergence and, as a result, the optimization process finds a merely local optimum. In order to achieve better algorithm performance, the original PSO algorithm has been modified by adding an inertia weight parameter or a constriction factor [10–13].

Another important strategy for improving EC algorithms relies on division of the original population into subswarms or complexes which simultaneously search across the parametric space and exchange information according to some prescribed rule. Periodic shuffling is a typical example [9, 14, 15].

In order to explore the interaction of modifications in particle velocities together with the different types of distributed PSO versions, this paper analyzes 27 different PSO variants. Nine modifications of the PSO algorithm, in which the original particle velocities are altered using different approaches for setting the inertia weights and constriction factor [13], are combined with three strategies for swarm distribution. The population is either considered as one unit (OC-PSO) or the swarm is divided into several complexes either according to the functional value (SCE-PSO) [14] or randomly (SCERand-PSO).

The remainder of this paper is organized as follows. Section 2 describes the particle swarm optimization method. The original equations and modifications of PSO algorithm are included. Section 3 describes different strategies for distribution of PSO. The experiment and obtained results are compared in Section 4. Conclusions are discussed in Section 5.

2. Particle Swarm Optimization

Particle swarm optimization is a global optimization method applied to find the optimal solution of an objective function $f$. The sought optimum is most generally a minimum value. There exists a population of $N$ particles, where $N$ is the total number of individuals. All particles search through the problem space of dimension $D$, where $D$ is the total number of dimensions. Each particle stores information about its position and velocity. The vector of the $i$th particle’s position is $\mathbf{x}_i = (x_{i1}, \ldots, x_{iD})$ and the vector of its velocity is $\mathbf{v}_i = (v_{i1}, \ldots, v_{iD})$. Each particle maintains a memory of its previous best position, which is represented as $\mathbf{p}_i$. The best position among all particles from the swarm is represented as $\mathbf{g}$. Equations (1) and (2) are the original PSO equations for computing a new velocity and a new position. Consider
$$v_{ij}(t+1) = v_{ij}(t) + c_1 r_1 \left[p_{ij}(t) - x_{ij}(t)\right] + c_2 r_2 \left[g_j(t) - x_{ij}(t)\right], \quad (1)$$
$$x_{ij}(t+1) = x_{ij}(t) + v_{ij}(t+1), \quad (2)$$
for all $i = 1, \ldots, N$ and $j = 1, \ldots, D$, where $t$ is a time step, $c_1$ and $c_2$ are acceleration constants predefined by the user, and $r_1$ and $r_2$ are random numbers uniformly distributed in $[0, 1]$. The component with $\mathbf{p}_i$ in (1) is known as the cognition part and it tells us about the individual experience of the particle. The component with $\mathbf{g}$ is called the social part and it represents the cooperation among particles within the swarm [16].

The simplified original PSO algorithm is shown in Algorithm 1. A particle’s position is initialized randomly within the range $[x_{\min}, x_{\max}]$ as shown in
$$x_{ij}(0) = x_{\min} + r\,(x_{\max} - x_{\min}), \quad (3)$$
where $r$ is a random number uniformly distributed in $[0, 1]$, while $x_{\min}$ and $x_{\max}$ are the boundaries of the search space and their values depend on the benchmark function [17]. In this paper, initialization of particles is performed through Latin hypercube sampling (LHS).

Algorithm 1: Original PSO algorithm.
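Because the pseudocode of Algorithm 1 is not reproduced in this text, the following minimal C++ sketch illustrates the original PSO loop of (1) and (2). The sphere objective, the bounds, the parameter values, and the iteration budget used here are illustrative assumptions, not the exact settings of this study.

```cpp
// Minimal sketch of the original PSO loop from (1) and (2).
// The sphere objective, bounds, parameter values, and stopping rule are illustrative only.
#include <cstdio>
#include <limits>
#include <random>
#include <vector>

int main() {
    const int N = 25, D = 30, iterations = 1000;        // swarm size, dimension, iteration budget (assumed)
    const double c1 = 2.0, c2 = 2.0;                    // acceleration constants
    const double xMin = -100.0, xMax = 100.0, vMax = xMax;
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> unit(0.0, 1.0);

    auto f = [](const std::vector<double>& x) {         // sphere function as a stand-in objective
        double s = 0.0;
        for (double xi : x) s += xi * xi;
        return s;
    };

    // Random initialization of positions (the paper uses LHS instead) and zero initial velocities.
    std::vector<std::vector<double>> x(N, std::vector<double>(D));
    std::vector<std::vector<double>> v(N, std::vector<double>(D, 0.0));
    std::vector<std::vector<double>> pBest(N);
    std::vector<double> pBestVal(N);
    std::vector<double> gBest(D);
    double gBestVal = std::numeric_limits<double>::infinity();
    for (int i = 0; i < N; ++i) {
        for (int j = 0; j < D; ++j) x[i][j] = xMin + unit(rng) * (xMax - xMin);
        pBest[i] = x[i];
        pBestVal[i] = f(x[i]);
        if (pBestVal[i] < gBestVal) { gBestVal = pBestVal[i]; gBest = x[i]; }
    }

    for (int t = 0; t < iterations; ++t) {
        for (int i = 0; i < N; ++i) {
            for (int j = 0; j < D; ++j) {
                // Velocity update (1): previous velocity + cognition part + social part.
                v[i][j] += c1 * unit(rng) * (pBest[i][j] - x[i][j])
                         + c2 * unit(rng) * (gBest[j] - x[i][j]);
                if (v[i][j] >  vMax) v[i][j] =  vMax;   // clamp velocity to [-vMax, vMax]
                if (v[i][j] < -vMax) v[i][j] = -vMax;
                x[i][j] += v[i][j];                     // position update (2)
            }
            double fx = f(x[i]);
            if (fx < pBestVal[i]) { pBestVal[i] = fx; pBest[i] = x[i]; }  // personal best
            if (fx < gBestVal)    { gBestVal = fx;    gBest = x[i]; }     // global best
        }
    }
    std::printf("best objective value found: %g\n", gBestVal);
    return 0;
}
```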

A particle’s initial velocity could be randomly distributed in the range $[-v_{\max}, v_{\max}]$ or, alternatively, the velocities could be initialized to 0, since the starting positions are already randomized [18]. In initial experiments, the value of $v_{\max}$ was set to 100 000, and in subsequent experiments and applications it was found that a better approach is to limit $v_{\max}$ to $x_{\max}$ [19]. Other authors [20, 21] have set the maximum velocity as $v_{\max} = k \cdot x_{\max}$, where $0.1 \le k \le 1.0$. A larger value of $v_{\max}$ facilitates global exploration, whereas a smaller value of $v_{\max}$ encourages local exploitation [22]. In this paper, $v_{\max} = x_{\max}$ was applied as the bound for the initial particle velocity.

2.1. Modifications of the PSO Algorithm

The original PSO equation was modified to improve its optimization ability. The first group of modifications consists in incorporating an inertia weight parameter $w$ and the second in using a constriction factor $\chi$. In the present study, nine variants of the PSO algorithm were used and tested (Table 1), including eight modifications using $w$ and one modification with $\chi$.

Table 1: Summary of PSO modifications.

The use of the inertia weight parameter $w$ (4) was developed by Shi and Eberhart [11] and it has provided improved performance:
$$v_{ij}(t+1) = w\, v_{ij}(t) + c_1 r_1 \left[p_{ij}(t) - x_{ij}(t)\right] + c_2 r_2 \left[g_j(t) - x_{ij}(t)\right]. \quad (4)$$

There are many methods of computing the inertia weight value. Nickabadi et al. [13] divided those techniques into three classes which are applied in this paper: (1) constant and random inertia weight, (2) time varying inertia weight strategies, and (3) adaptive inertia weight. They compared all modifications employing benchmark functions and proposed a PSO algorithm using adaptive inertia weight.

The constant (“ConstantW”) and random (“RandomW”) inertia weights are used where no input information is required. Bansal et al. [23] discussed the work of Shi and Eberhart [10] and set the constant inertia weight equal to 0.7. Gimmler et al. [24] proposed using a constant inertia weight for hybrid particle swarm optimization; their best constant was 0.2. We set the constant inertia weight for the “ConstantW” modification to 0.7 for this study, because our experiment is more similar to that of Bansal et al. [23]. Eberhart and Shi [25] proposed the random inertia weight, where $w$ is a variable with uniform distribution within $[0.5, 1]$.

Time varying inertia weight is defined as a function of time or of the number of iterations, and the dependence may be linear or nonlinear. In the linear decreasing variant (“LinTimeVaryingW”) developed by Shi and Eberhart [11], the inertia weight decreases linearly from a larger initial value to a smaller final value (typically from 0.9 to 0.4) over the run. This method of determining the inertia weight value is very common [26, 27]. Eberhart and Shi [19] compared the linearly decreasing weight with the constriction factor and found that better performance was achieved when the constriction factor was used. The chaotic model (“ChaoticW”) and chaotic random model (“ChaoticRandW”) of inertia weight were proposed by Feng et al. [28]; both modulate the weight with a chaotic variable $z$ generated by the logistic map $z(t+1) = 4z(t)\bigl(1 - z(t)\bigr)$, where the initial value of $z$ is uniformly distributed in $(0, 1)$. Two modifications of nonlinear time varying inertia weight are used. In “NonlinTimeW” and “NonlinTimeConstW”, the weight decays nonlinearly with the progress of the search: the decay parameter is fixed, the initial value of the inertia weight is drawn uniformly from a predefined interval, and the decay is driven by the actual number of function evaluations [13].
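To make the differences between these schedules concrete, the following C++ fragment sketches the constant, random, linearly decreasing, and chaotic weights described above. The specific constants and the chaotic-model form follow the cited descriptions as summarized here and should be treated as assumptions rather than the paper’s exact implementation.

```cpp
// Illustrative inertia-weight schedules; constants follow the values summarized above
// (0.7 for "ConstantW", [0.5, 1] for "RandomW", 0.9 -> 0.4 for the time-varying variants)
// and are assumptions rather than the paper's exact code.
#include <random>

double constantW() { return 0.7; }                           // "ConstantW"

double randomW(std::mt19937& rng) {                          // "RandomW": uniform in [0.5, 1]
    return std::uniform_real_distribution<double>(0.5, 1.0)(rng);
}

// "LinTimeVaryingW": linear decrease from wMax to wMin over the allotted evaluations.
double linTimeVaryingW(int eval, int maxEvals, double wMax = 0.9, double wMin = 0.4) {
    return wMax - (wMax - wMin) * static_cast<double>(eval) / maxEvals;
}

// "ChaoticW": the linear decrease modulated by a logistic-map chaotic variable z,
// with z initialized uniformly in (0, 1) and updated on every call.
double chaoticW(int eval, int maxEvals, double& z, double wMax = 0.9, double wMin = 0.4) {
    z = 4.0 * z * (1.0 - z);
    return (wMax - wMin) * static_cast<double>(maxEvals - eval) / maxEvals + wMin * z;
}
```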

One modification of adaptive inertia weight proposed by Nickabadi et al. [13] is used (“AdaptW”), because it demonstrated the best performance in the original paper. The value of $w$ is adapted based on one feedback parameter. The value $S_i(t)$ shows the success of particle $i$ and is defined as in (5), and $P_s(t)$ is the success percentage of the swarm, computed as in (6), where $N$ is the size of the population. The weight is then set to $w(t) = (w_{\max} - w_{\min})\,P_s(t) + w_{\min}$, and the range of inertia weights is $[0, 1]$. Consider
$$S_i(t) = \begin{cases} 1 & \text{if } f\bigl(\mathbf{p}_i(t)\bigr) < f\bigl(\mathbf{p}_i(t-1)\bigr),\\ 0 & \text{otherwise}, \end{cases} \quad (5)$$
$$P_s(t) = \frac{1}{N}\sum_{i=1}^{N} S_i(t). \quad (6)$$
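A minimal C++ sketch of the “AdaptW” feedback rule of (5) and (6) follows, assuming the $[0, 1]$ weight range and that a particle counts as successful when it improved its personal best in the current iteration; identifiers are illustrative.

```cpp
// Adaptive inertia weight ("AdaptW") driven by the swarm's success percentage, cf. (5)-(6).
// improved[i] is assumed to be true when particle i updated its personal best this iteration.
#include <vector>

double adaptiveW(const std::vector<bool>& improved, double wMin = 0.0, double wMax = 1.0) {
    int successes = 0;
    for (bool s : improved) successes += s ? 1 : 0;                   // S_i(t) from (5)
    double ps = static_cast<double>(successes) / improved.size();     // P_s(t) from (6)
    return (wMax - wMin) * ps + wMin;                                 // w(t) grows with swarm success
}
```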

Beyond variants using inertia weight, the next modification of the original PSO algorithm consists in incorporating the constriction factor $\chi$ ((7) and (8)). This strategy was first used by Clerc [29] and it increases the convergence of the algorithm. We have named the method “ConstrFactor.” Another approach to the constriction factor is that of Bui et al. [12]. They proposed a time-dependent strategy in which nonlinear decay rules adapt $\chi$. Their results are not better than those obtained when using the setting in accordance with Clerc [29], and we therefore calculate the velocity and $\chi$ using
$$v_{ij}(t+1) = \chi \left[ v_{ij}(t) + c_1 r_1 \bigl(p_{ij}(t) - x_{ij}(t)\bigr) + c_2 r_2 \bigl(g_j(t) - x_{ij}(t)\bigr) \right], \quad (7)$$
$$\chi = \frac{2}{\bigl| 2 - \varphi - \sqrt{\varphi^2 - 4\varphi} \bigr|}, \quad (8)$$
where $\varphi = c_1 + c_2$ and $\varphi > 4$.
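For reference, a direct computation of $\chi$ from (8) is shown below; the choice $c_1 = c_2 = 2.05$, giving $\varphi = 4.1$ and $\chi \approx 0.73$, is the commonly recommended setting and is used here as an assumption, not necessarily the exact constants of this study.

```cpp
// Constriction factor chi from (8); valid for phi = c1 + c2 > 4.
#include <cmath>
#include <cstdio>

double constrictionFactor(double c1, double c2) {
    double phi = c1 + c2;
    return 2.0 / std::fabs(2.0 - phi - std::sqrt(phi * phi - 4.0 * phi));
}

int main() {
    std::printf("chi = %f\n", constrictionFactor(2.05, 2.05));   // prints approximately 0.7298
    return 0;
}
```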

3. Distribution of PSO

All nine modifications are used with three strategies of swarm distribution (SD). We observed changes in the behaviour of the population for each modification and strategy. The first distributed strategy considers the whole population as one unit; we call it OC-PSO. In the other two strategies, the population is divided into several complexes either according to the particles’ functional values (SCE-PSO) or through random permutation (SCERand-PSO).

3.1. One Complex Strategy (OC-PSO)

In the OC-PSO method, the entire population is considered as a single unit. All particles participate in the PSO algorithm and share information about the best position achieved so far.

3.2. Shuffled Complex Evolution (SCE-PSO)

During the optimization process, a premature convergence to a local optimum could appear instead of convergence to the global optimum. Many researchers in this field are devoted to avoiding this premature convergence [30–32]. To address this, Duan et al. [14] proposed shuffled complex evolution (SCE). Yan et al. [33] combined SCE with the particle swarm optimization algorithm (SCE-PSO).

The SCE-PSO method is described simply below and is shown in Algorithm 2. After the first initialization, the entire population is divided into subswarms according to the functional values of the individuals. All particles are sorted in increasing order of functional value, and the $k$th complex then receives the parent individuals $k, k + M, k + 2M, \ldots$, where $M$ is the number of complexes [14, 34]. The PSO algorithm is applied in each complex. After running a predefined number of iterations in all complexes, all particles return to the swarm, and the shuffling and redistribution of complexes according to the functional value are made again. This is repeated until the termination criteria are satisfied.

Algorithm 2: Algorithm SCE-PSO.
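A short C++ sketch of the sorting-and-dealing step that forms the complexes follows, assuming the interleaved assignment described above; the `Particle` structure and function names are illustrative.

```cpp
// Partition a swarm into M complexes SCE-PSO style: sort by fitness, then deal the
// particles out so that complex k receives particles k, k+M, k+2M, ... of the sorted order.
#include <algorithm>
#include <cstddef>
#include <vector>

struct Particle { std::vector<double> x, v, pBest; double fitness; };

std::vector<std::vector<Particle>> partitionIntoComplexes(std::vector<Particle> swarm, int M) {
    std::sort(swarm.begin(), swarm.end(),
              [](const Particle& a, const Particle& b) { return a.fitness < b.fitness; });
    std::vector<std::vector<Particle>> complexes(M);
    for (std::size_t i = 0; i < swarm.size(); ++i)
        complexes[i % M].push_back(swarm[i]);        // interleaved ("dealt") assignment
    return complexes;
}
```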

The shuffling mechanism preserves the population diversity and helps to prevent premature convergence. For this study, the shuffling was performed after the running of a predefined number of generations in each complex [34]. Another approach allows the shuffling to occur randomly with some associated probability [35].

In the original SCE-PSO method [33], only a predefined number of particles from each complex are chosen to participate in the PSO algorithm. In this paper, the number of participating individuals is equal to the number of particles in the complex. This means that all particles from the complex are inputs to the PSO algorithm.

3.3. Random Shuffled Complex Evolution (SCERand-PSO)

Random shuffled complex evolution differs from SCE-PSO in that the entire population is divided into complexes according to a random permutation. There is no sorting by functional value. The random permutation is computed with the algorithm of Durstenfeld [36].

Algorithm 2 is applied for the SCERand-PSO, except that at line 3 the population is sorted according to a random permutation instead of by functional value.
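Durstenfeld’s Algorithm 235 is the in-place random permutation commonly known as the Fisher–Yates shuffle. A C++ sketch follows; shuffling the swarm first and then dealing it into complexes exactly as in the SCE-PSO sketch, but without the sort, yields the SCERand-PSO partition. Identifiers are illustrative.

```cpp
// Durstenfeld's in-place random permutation (Fisher-Yates shuffle) [36].
#include <cstddef>
#include <random>
#include <utility>
#include <vector>

template <typename T>
void durstenfeldShuffle(std::vector<T>& items, std::mt19937& rng) {
    for (std::size_t i = items.size(); i > 1; --i) {
        std::uniform_int_distribution<std::size_t> pick(0, i - 1);
        std::swap(items[i - 1], items[pick(rng)]);   // swap the last unshuffled element with a random one
    }
}
```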

4. Experiment and Results

4.1. Experimental Setup

After running several tests with different parameter settings, the following setup was found to be the best.

The position of individuals is initialized randomly between the lower and upper bounds of the search space through Latin hypercube sampling (LHS). The range of the problem space depends on the benchmark function (Table 2). LHS is a type of stratified Monte Carlo sampling first described by McKay et al. in 1979 for the analysis of output from a computer code [37]. The range, which in PSO optimization is defined by the lower and upper bounds of the search space, is partitioned into $N$ intervals of equal probability $1/N$, where $N$ in PSO is equal to the population size. LHS then randomly selects one value from each interval [38]. Due to this selection, particles are spread evenly across the search space.
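A minimal C++ sketch consistent with this description follows: each dimension is split into $N$ equally probable intervals, one point is drawn inside every interval, and the strata are randomly paired across dimensions. It is a generic LHS illustration, not the sampler of [38].

```cpp
// Minimal Latin hypercube sampling of N points in D dimensions over [xMin, xMax].
#include <algorithm>
#include <numeric>
#include <random>
#include <vector>

std::vector<std::vector<double>> latinHypercube(int N, int D, double xMin, double xMax,
                                                std::mt19937& rng) {
    std::uniform_real_distribution<double> unit(0.0, 1.0);
    std::vector<std::vector<double>> samples(N, std::vector<double>(D));
    std::vector<int> strata(N);
    for (int j = 0; j < D; ++j) {
        std::iota(strata.begin(), strata.end(), 0);       // intervals 0..N-1 for this dimension
        std::shuffle(strata.begin(), strata.end(), rng);  // random pairing across dimensions
        for (int i = 0; i < N; ++i) {
            double u = (strata[i] + unit(rng)) / N;       // one point inside each interval
            samples[i][j] = xMin + u * (xMax - xMin);
        }
    }
    return samples;
}
```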

Table 2: Summary of benchmark functions.

In accordance with Eberhart and Shi [19], the maximum value of velocity is set to $v_{\max} = x_{\max}$. The value of the acceleration constants $c_1$ and $c_2$ in variants with inertia weight is set to 2. In modifications with the constriction factor, the values are set to $c_1 = c_2 = 2.05$, so that $\varphi = 4.1$ [19].

In accordance with Eberhart and Shi [39], the population size is set to 25. In the OC-PSO method, all particles are solved together. In the SCE-PSO and SCERand-PSO methods, individuals are uniformly divided into 6 complexes, where each complex contains 25 particles. The number of shufflings is set to 5. The dimension of the solution is set to 30, and each run is limited to a fixed maximum number of function evaluations. For analyzing the results, the total number of optimization runs is set to 25. Each run stops when the maximum number of evaluations is reached.

4.2. Benchmark Problems

For comparison purposes, 11 benchmark functions prepared for the special session on real-parameter optimization of CEC 2005 [40] were used. All functions have shifted global optima, and some of them are rotated or contaminated with noise. The benchmark functions are summarized in Table 2. The aim is to find the minimum of all functions.

The optimization problem is constrained for all but one of the benchmark functions. Particles move only in the restricted space and cannot cross the boundaries; that is, each particle position is bounded by lower and upper limits [41]. In the PSO algorithm, this is reflected such that the particles must lie within the range $[x_{\min}, x_{\max}]$. If a new particle position falls outside the boundaries, that particle retains the position of its parent. For the remaining function, the global optimum is situated outside the given range [40], and therefore the optimization problem is unconstrained and particles can cross the boundaries.
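A small sketch of this boundary rule follows: the trial position produced by (1) and (2) is accepted only if it stays within the bounds, otherwise the particle keeps its previous (parent) position. Function and variable names are illustrative.

```cpp
// Boundary handling: reject moves that leave [xMin, xMax] and keep the parent position.
#include <vector>

bool insideBounds(const std::vector<double>& x, double xMin, double xMax) {
    for (double xi : x)
        if (xi < xMin || xi > xMax) return false;
    return true;
}

void applyMove(std::vector<double>& position, const std::vector<double>& trial,
               double xMin, double xMax, bool constrained) {
    if (!constrained || insideBounds(trial, xMin, xMax))
        position = trial;        // accept the new position; otherwise retain the parent position
}
```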

There exist two versions of the PSO algorithm: global and local. In the global variant, the neighborhood consists of all particles of the swarm. In the local variant, each particle is assigned to a neighborhood consisting of a predefined number of particles [41, 42]. For the OC-PSO method, the global variant is used. For the SCE-PSO and SCERand-PSO, the local variant is used and particles share the information about their best positions only with other particles from a given complex.

4.3. Results and Discussion

The nonparametric Wilcoxon test was used for statistical comparison. Inputs to those calculations were the best fitness values achieved by all modifications. The null hypothesis of the Wilcoxon test is that the differences between algorithms have a median of zero. The null hypothesis is rejected if the $p$ value is less than 0.05 [43, 44]. The algorithms were written in C++, and the computation of the $p$ values and the graphs were made in the program R.

Table 3 reflects the best modification and distributed strategy for each function. In the table, the minimum, 25% quartile, median, 75% quartile, maximum, mean, and standard deviation are indicated. All 25 program runs of each modification and strategy were compared at each recorded number of function evaluations. The values in Table 3 reflect the best fitness achieved, and the other statistical indices reported belong to the same number of evaluations. As can be seen, strategy SCE-PSO produced the best solution for seven functions. Strategy SCERand-PSO produced the best solution for two functions, and for one function the best solution was from strategy OC-PSO. For the sphere function, there was no significant difference between strategies SCE-PSO and SCERand-PSO.

Table 3: Statistical indices of the best solutions of benchmark functions.

Upon closer examination, and as seen in Table 4, “AdaptW” and “NonlinTimeConstW” are the best modifications for the unimodal functions. The poorest variants are “ConstantW” and “ConstrFactor.” The best PSO modification for the multimodal functions is “AdaptW” and the poorest is “ConstantW.”

Table 4: Estimation of the best and poorest modifications.

For the rotated functions, the best modification of the PSO algorithm appears to be “AdaptW” and the poorest is “ConstantW.” For the functions with no transformation matrix to rotate them, “AdaptW” is the best variant and the poorest variants are “ConstantW” and “ConstrFactor.”

It is clear that the best modification of the particle swarm optimization algorithm for the selected benchmark functions is “AdaptW,” that is, adaptive inertia weight. The variant called “NonlinTimeConstW” also produced good results. On the other hand, the poorest modifications appear to be “ConstantW” and “ConstrFactor.”

The convergence to the global optimum using particle swarm optimization is good, but only for three of the eleven benchmark functions is the obtained error value below the prescribed accuracy level. In spite of this, the shuffling mechanism improves the optimization. Strategies SCE-PSO and SCERand-PSO are better than OC-PSO for ten functions.

The global minimum was achieved for three benchmark functions, and our results are comparable with those of Hansen [45], who compares eleven optimization algorithms on twelve functions. In that comparison, the particle swarm optimization algorithm achieved the global minimum for five functions. Our results using SCE-PSO are better than those of the optimization algorithms BLX-MA and DE and are as good as those from CoEVO as reported by Hansen [45].

Bui et al. [12] achieved the global optimum for two unimodal functions using the PSO method APSO1 and for three unimodal functions using the DPSO method. APSO is a PSO algorithm with an adaptive constriction factor, and in DPSO the bound of the velocity is adapted. None of their algorithms converge to the global minimum for the multimodal functions [12]. In this regard, our results are better.

Nickabadi et al. [13] compare six inertia weight adjusting methods on 15 test problems with the dimension set to 30. Looking only at the functions also solved in this paper, Nickabadi et al. [13] achieved the global optimum for three of them. They obtained the best results using the adaptive inertia weight which they had proposed (modification “AdaptW” in this paper). Their results are comparable with those achieved in the present study, the difference being that our functions have a shifted global optimum, whereas those of Nickabadi et al. [13] did not.

Figure 1 presents the convergence graphs for each function using the best modification of the PSO algorithm. The horizontal axis indicates the number of function evaluations and the vertical axis the logarithm of the best fitness, which is the difference between the sought and the best achieved functional value. A decline with the number of evaluations is clearly visible for all functions, thus indicating the approach to the global optimum.

Figure 1: Convergence graphs of best fitness for the best PSO modification and strategy of each function, according to Table 3.

5. Conclusions

This paper compared 27 variants of the particle swarm optimization algorithm. Eight modifications employed the inertia weight parameter and one modification the constriction factor; both parameters improved the optimization. All modifications were tested with three strategies of swarm distribution, which differ in how the population is treated. The population was either considered as a single unit (OC-PSO) or it was divided into several complexes. The division into complexes was made according to the functional value of each particle (SCE-PSO) or through random permutation (SCERand-PSO).

The main aim of this work was to find the global minima of eleven benchmark functions prepared for the special session on real-parameter optimization of CEC 2005. The minimum is considered achieved when the obtained error value is below the prescribed accuracy level. We obtained the global minimum for two unimodal functions and for one multimodal function. The original particle swarm optimization has slow convergence to the global optimum, but the shuffling mechanism improves the optimization.

The best modification of the PSO algorithm is the variant called “AdaptW.” The best choice for selected benchmark functions is to use the parameter of inertia weight, where the value is adapted based on a feedback parameter. The best strategy for swarm distribution is SCE-PSO. Shuffled complex evolution particle swarm optimization with allocation of particles into complexes according to their functional values is better than OC-PSO and SCERand-PSO.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors are grateful to the anonymous referees for their valuable comments and suggestions for improving the presentation of this paper. This work was supported by the Internal Grant Agency, Faculty of Environmental Sciences, Czech University of Life Sciences Prague (Project no. 422001312315820144227).

References

  1. J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, Perth, Australia, December 1995.
  2. T. Weise, Global Optimization Algorithms—Theory and Application, 2009, http://www.it-weise.de/projects/bookNew.pdf.
  3. W. Deng, R. Chen, B. He, Y. Liu, L. Yin, and J. Guo, “A novel two-stage hybrid swarm intelligence optimization algorithm and application,” Soft Computing, vol. 16, no. 10, pp. 1707–1722, 2012.
  4. N. Gershenfeld, The Nature of Mathematical Modeling, Cambridge University Press, New York, NY, USA, 1999.
  5. R. Mendes, Population topologies and their influence in particle swarm performance [Ph.D. thesis], University of Minho, 2004.
  6. Z. Michalewicz and D. Fogel, How to Solve It: Modern Heuristics, Springer, New York, NY, USA, 2004.
  7. A. M. Baltar and D. G. Fontane, “Use of multiobjective particle swarm optimization in water resources management,” Journal of Water Resources Planning and Management, vol. 134, no. 3, pp. 257–265, 2008.
  8. M. K. Gill, Y. H. Kaheil, A. Khalil, M. McKee, and L. Bastidas, “Multiobjective particle swarm optimization for parameter estimation in hydrology,” Water Resources Research, vol. 42, no. 7, 2006.
  9. D. M. Munoz, C. H. Llanos, L. D. S. Coelho, and M. Ayala-Rincon, “Opposition-based shuffled PSO with passive congregation applied to FM matching synthesis,” in Proceedings of the IEEE Congress of Evolutionary Computation (CEC '11), pp. 2775–2781, IEEE, New Orleans, La, USA, June 2011.
  10. Y. Shi and R. Eberhart, “A modified particle swarm optimizer,” in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC '98), pp. 69–73, IEEE Computer Society, Washington, DC, USA, May 1998.
  11. Y. Shi and R. Eberhart, “Empirical study of particle swarm optimization,” in Proceedings of the Congress on Evolutionary Computation, pp. 1945–1950, Washington, DC, USA, 1999.
  12. L. T. Bui, O. Soliman, and H. A. Abbass, “A modified strategy for the constriction factor in particle swarm optimization,” in Progress in Artificial Life: Proceedings of the 3rd Australian Conference, ACAL 2007, Gold Coast, Australia, December 4–6, 2007, vol. 4828 of Lecture Notes in Computer Science, pp. 333–344, Springer, Berlin, Germany, 2007.
  13. A. Nickabadi, M. M. Ebadzadeh, and R. Safabakhsh, “A novel particle swarm optimization algorithm with adaptive inertia weight,” Applied Soft Computing Journal, vol. 11, no. 4, pp. 3658–3670, 2011.
  14. Q. Y. Duan, V. K. Gupta, and S. Sorooshian, “Shuffled complex evolution approach for effective and efficient global minimization,” Journal of Optimization Theory and Applications, vol. 76, no. 3, pp. 501–521, 1993.
  15. J. A. Vrugt, B. A. Robinson, and J. M. Hyman, “Self-adaptive multimethod search for global optimization in real-parameter spaces,” IEEE Transactions on Evolutionary Computation, vol. 13, no. 2, pp. 243–259, 2009.
  16. M. Pant, R. Thangaraj, and A. Abraham, “Particle swarm optimization: performance tuning and empirical analysis,” in Foundations of Computational Intelligence, A. Abraham, A.-E. Hassanien, P. Siarry, and A. Engelbrecht, Eds., vol. 3 of Studies in Computational Intelligence, pp. 101–128, Springer, Berlin, Germany, 2009.
  17. R. Hassan, B. Cohanim, O. D. Weck, and G. Venter, “A comparison of particle swarm optimization and the genetic algorithm,” in Proceedings of the 1st AIAA Multidisciplinary Design Optimization Specialist Conference, pp. 1–13, 2005.
  18. F. V. D. Bergh, An analysis of particle swarm optimizers [Ph.D. thesis], University of Pretoria, 2001.
  19. R. C. Eberhart and Y. Shi, “Comparing inertia weights and constriction factors in particle swarm optimization,” in Proceedings of the Congress on Evolutionary Computation (CEC '00), vol. 1, pp. 84–88, IEEE, La Jolla, Calif, USA, July 2000.
  20. R. Eberhart, P. Simpson, and R. Dobbins, Computational Intelligence PC Tools, Academic Press Professional, San Diego, Calif, USA, 1996.
  21. D. Corne, M. Dorigo, F. Glover et al., Eds., New Ideas in Optimization, McGraw-Hill, Maidenhead, UK, 1999.
  22. R. C. Eberhart and Y. Shi, “Particle swarm optimization: developments, applications and resources,” in Proceedings of the Congress on Evolutionary Computation, vol. 1, pp. 81–86, Seoul, Republic of Korea, May 2001.
  23. J. C. Bansal, P. K. Singh, M. Saraswat, A. Verma, S. S. Jadon, and A. Abraham, “Inertia weight strategies in particle swarm optimization,” in Proceedings of the 3rd World Congress on Nature and Biologically Inspired Computing (NaBIC '11), pp. 633–640, Salamanca, Spain, October 2011.
  24. J. Gimmler, T. Stützle, and T. E. Exner, “Hybrid particle swarm optimization: an examination of the influence of iterative improvement algorithms on performance,” in Ant Colony Optimization and Swarm Intelligence: Proceedings of the 5th International Workshop, ANTS 2006, Brussels, Belgium, September 4–7, 2006, vol. 4150 of Lecture Notes in Computer Science, pp. 436–443, Springer, Berlin, Germany, 2006.
  25. R. C. Eberhart and Y. Shi, “Tracking and optimizing dynamic systems with particle swarms,” in Proceedings of the Congress on Evolutionary Computation, vol. 1, pp. 94–100, May 2001.
  26. J. Xin, G. Chen, and Y. Hai, “A particle swarm optimizer with multi-stage linearly-decreasing inertia weight,” in Proceedings of the International Joint Conference on Computational Sciences and Optimization (CSO '09), vol. 1, pp. 505–508, IEEE Computer Society, Sanya, China, April 2009.
  27. C.-H. Yang, C.-J. Hsiao, and L.-Y. Chuang, “Linearly decreasing weight particle swarm optimization with accelerated strategy for data clustering,” IAENG International Journal of Computer Science, vol. 37, no. 3, p. 1, 2010.
  28. Y. Feng, G.-F. Teng, A.-X. Wang, and Y.-M. Yao, “Chaotic inertia weight in particle swarm optimization,” in Proceedings of the 2nd International Conference on Innovative Computing, Information and Control (ICICIC '07), p. 475, Kumamoto, Japan, September 2007.
  29. M. Clerc, “The swarm and the queen: towards a deterministic and adaptive particle swarm optimization,” in Proceedings of the Congress on Evolutionary Computation, pp. 1951–1957, Washington, DC, USA, 1999.
  30. P. J. Angeline, “Using selection to improve particle swarm optimization,” in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC '98), pp. 84–89, Anchorage, Alaska, USA, May 1998.
  31. K. E. Parsopoulos, V. P. Plagianakos, G. D. Magoulas, and M. N. Vrahatis, “Improving particle swarm optimizer by function stretching,” in Advances in Convex Analysis and Global Optimization, pp. 445–457, Springer, New York, NY, USA, 2001.
  32. K. E. Parsopoulos and M. N. Vrahatis, “Initializing the particle swarm optimizer using the nonlinear simplex method,” in Advances in Intelligent Systems, Fuzzy Systems, Evolutionary Computation, pp. 216–221, WSEAS Press, 2002.
  33. J. Yan, H. Tiesong, H. Chongchao, W. Xianing, and G. Faling, “A shuffled complex evolution of particle swarm optimization algorithm,” in Adaptive and Natural Computing Algorithms: Proceedings of the 8th International Conference, ICANNGA 2007, Warsaw, Poland, April 11–14, 2007, Part I, vol. 4431 of Lecture Notes in Computer Science, pp. 341–349, Springer, Berlin, Germany, 2007.
  34. V. C. Mariani, L. G. Justi Luvizotto, F. A. Guerra, and L. D. S. Coelho, “A hybrid shuffled complex evolution approach based on differential evolution for unconstrained optimization,” Applied Mathematics and Computation, vol. 217, no. 12, pp. 5822–5829, 2011.
  35. M. Weber, F. Neri, and V. Tirronen, “Shuffle or update parallel differential evolution for large-scale optimization,” Soft Computing, vol. 15, no. 11, pp. 2089–2107, 2011.
  36. R. Durstenfeld, “Algorithm 235: random permutation,” Communications of the ACM, vol. 7, no. 7, p. 420, 1964.
  37. M. D. McKay, R. J. Beckman, and W. J. Conover, “A comparison of three methods for selecting values of input variables in the analysis of output from a computer code,” Technometrics, vol. 21, no. 2, pp. 239–245, 1979.
  38. G. D. Wyss and K. H. Jorgensen, A User's Guide to LHS: Sandias Latin Hypercube Sampling Software, Sandia National Laboratories, 1998.
  39. R. Eberhart and Y. Shi, Computational Intelligence: Concepts to Implementations, Morgan Kaufmann, Boston, Mass, USA, 2007.
  40. P. N. Suganthan, N. Hansen, J. J. Liang et al., “Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization,” Tech. Rep., Nanyang Technological University, Singapore, 2005.
  41. J. C. F. Cabrera and C. A. C. Coello, “Handling constraints in particle swarm optimization using a small population size,” in MICAI 2007: Advances in Artificial Intelligence: Proceedings of the 6th Mexican International Conference on Artificial Intelligence, Aguascalientes, Mexico, November 4–10, 2007, vol. 4827 of Lecture Notes in Computer Science, pp. 41–51, Springer, Berlin, Germany, 2007.
  42. K. Parsopoulos and M. N. Vrahatis, “On the computation of all global minimizers through particle swarm optimization,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 211–224, 2004.
  43. C. Dytham, Choosing and Using Statistics: A Biologist's Guide, Blackwell Science, Oxford, UK, 2011.
  44. S. García, D. Molina, M. Lozano, and F. Herrera, “A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 Special Session on Real Parameter Optimization,” Journal of Heuristics, vol. 15, no. 6, pp. 617–644, 2009.
  45. N. Hansen, “Compilation of results on the 2005 CEC benchmark function set,” Online, 2006.