Computational Intelligence and Metaheuristic Algorithms with Applications
Research Article | Open Access
Nor Azlina Ab Aziz, Marizan Mubin, Mohd Saberi Mohamad, Kamarulzaman Ab Aziz, "A Synchronous-Asynchronous Particle Swarm Optimisation Algorithm", The Scientific World Journal, vol. 2014, Article ID 123019, 17 pages, 2014. https://doi.org/10.1155/2014/123019
A Synchronous-Asynchronous Particle Swarm Optimisation Algorithm
Abstract
In the original particle swarm optimisation (PSO) algorithm, the particles’ velocities and positions are updated after the whole swarm performance is evaluated. This algorithm is also known as synchronous PSO (SPSO). The strength of this update method is in the exploitation of the information. Asynchronous update PSO (APSO) has been proposed as an alternative to SPSO. A particle in APSO updates its velocity and position as soon as its own performance has been evaluated. Hence, particles are updated using partial information, leading to stronger exploration. In this paper, we attempt to improve PSO by merging both update methods to utilise the strengths of both methods. The proposed synchronous-asynchronous PSO (SAPSO) algorithm divides the particles into smaller groups. The best member of a group and the swarm’s best are chosen to lead the search. Members within a group are updated synchronously, while the groups themselves are asynchronously updated. Five well-known unimodal functions, four multimodal functions, and a real-world optimisation problem are used to study the performance of SAPSO, which is compared with the performances of SPSO and APSO. The results are statistically analysed and show that the proposed SAPSO has performed consistently well.
1. Introduction
Particle swarm optimisation (PSO) was introduced by Kennedy and Eberhart in 1995 [1]. It is a swarm-based stochastic optimisation algorithm that mimics the social behaviour of organisms such as birds and fishes. These organisms’ success in looking for food sources is achieved through individual effort as well as cooperation with surrounding neighbours. In PSO, the individuals are represented by a swarm of agents called particles. The particles move within the search area to find the optimal solution by updating their velocity and position. These values are influenced by the experience of the particles and their social interactions. The PSO algorithm has been successfully applied in various fields, such as human tremor analysis for biomedical engineering [2, 3], electric power and voltage management [4], machine scheduling [5], robotics [6], and VLSI circuit design [7].
Since its introduction, PSO has undergone numerous evolutionary processes. Many variations of PSO have been proposed to improve the effectiveness of the algorithm. Some of the improvements involve the introduction of a new parameter to the algorithm, such as inertia weight [8] and the constriction factor [9], while others focus on solving specific types of problems, such as multiobjective optimisation [10, 11], discrete optimisation problems [12, 13], and dynamic optimisation problems [14].
Here we focus on the effect of the particles’ update sequence on the performance of PSO. In the original PSO, a particle’s information on its neighbourhood’s best found solution is updated after the performance of the whole swarm is evaluated. This version of PSO algorithm is known as synchronous PSO (SPSO). The synchronous update in SPSO provides the perfect information concerning the particles, thus allowing the swarm to choose a better neighbour and exploit the information provided by this neighbour. However, this strategy could cause the particles to converge too fast.
Another variation of PSO, known as asynchronous PSO (APSO), has been discussed by Carlisle and Dozier [15]. In APSO, the best solutions are updated as soon as a particle’s performance has been evaluated. Therefore, a particle’s search is guided by the partial or imperfect information from its neighbourhood. This strategy leads to diversity in the swarm [16], wherein the particles updated at the beginning of an iteration use more information from the previous iteration while particles at the end of the iteration are updated based on the information from the current iteration [17]. In several studies [15, 16, 18], APSO has been claimed to perform better than SPSO. Xue et al. [19] reported that asynchronous updates contribute to a shorter execution time. Imperfect information due to asynchronous updates causes the information of the current best found solution to be communicated to the particles more slowly, thus encouraging more exploration. However, a study conducted by Juan et al. [20] reported that SPSO is better than APSO in terms of the quality of the solution and also the convergence speed. This is due to the stronger exploitation.
The synchronicity of the particles influences exploration and exploitation among the particles [17]. Exploration and exploitation play important roles in determining the quality of a solution. Exploration in the asynchronous update ensures that the search space is thoroughly searched so that the area containing the best solution is discovered, while exploitation in the synchronous update helps to fine-tune the search so that the best solution can be found. Hence, in this paper, we attempt to improve the PSO algorithm by merging both synchronous and asynchronous updates in the search process so that the advantages of both methods can be utilised. The proposed algorithm, which is named the synchronous-asynchronous PSO (SAPSO), divides the particles into smaller groups. These groups are updated asynchronously, while members within the same group are updated synchronously. After the performance of all the particles in a group is evaluated, the velocities and positions of the particles are updated using a combination of information from the current iteration of their own group and the groups updated before them, as well as the information from the previous iteration of the groups that have not yet been updated. The search for the optimal solution in SAPSO is led by the groups’ best members together with the swarm’s best. This strategy is different from the original SPSO and APSO, where the search is led by the particles’ own experience together with the swarm’s best.
The rest of the paper is organised as follows. The SPSO and APSO algorithms are discussed in Section 2. The proposed SAPSO algorithm is described in detail in Section 3. In Section 4, the performance of the SAPSO algorithm is evaluated using ten benchmark functions comprising five unimodal functions, four multimodal functions, and a real-world optimisation problem. The results of the tests are presented and discussed in Section 5. Our conclusions are presented in Section 6.
2. Particle Swarm Optimisation
2.1. Synchronous PSO
In PSO, the search for the optimal solution is conducted by a swarm of particles. At time t, the ith particle has a position, x_i(t), and a velocity, v_i(t). The position represents a solution suggested by the particle, while the velocity is the rate of change from the current position to the next position. At the beginning of the algorithm, these two values (position and velocity) are randomly initialised. In subsequent iterations, the search process is conducted by updating the position and velocity using the following equations:

v_i(t + 1) = ω v_i(t) + c1 r1 (pBest_i − x_i(t)) + c2 r2 (gBest − x_i(t)), (1)

x_i(t + 1) = x_i(t) + v_i(t + 1). (2)
To prevent the particles from venturing too far from the feasible region, the velocity value is clamped to [−V_max, V_max]. If the value of V_max is too large, then the exploration range is too wide. Conversely, if the value of V_max is too small, then the particles will favour the local search [21]. In (1), c1 and c2 are the learning factors that control the effect of the cognitive and social influence on a particle. Typically, both c1 and c2 are set to 2 [22]. Two independent random numbers, r1 and r2, in the range [0, 1] are incorporated into the velocity equation. These random terms provide stochastic behaviour to the particles, thus encouraging them to explore a wider area. The inertia weight, ω, which is a term added to improve the PSO’s performance, controls the particles’ momentum. When a good area is found, the particles can switch to fine-tuning by manipulating ω [8]. To ensure convergence, a time-decreasing inertia weight is more favourable than a fixed inertia weight [21]. This is because a large inertia weight at the beginning helps to find a good area through exploration and a small inertia weight towards the end—when typically a good area is already found—facilitates fine-tuning. The small inertia weight at the end of the search reduces the global search activity [23].
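The time-decreasing inertia weight described above reduces to a simple linear schedule. The following is a Python sketch (the paper's experiments were implemented in MATLAB); the 0.9-to-0.4 range matches the settings reported in Section 4.

```python
def inertia_weight(t, t_max, w_start=0.9, w_end=0.4):
    """Linearly decreasing inertia weight: w_start at t = 0, w_end at t = t_max."""
    return w_start - (w_start - w_end) * t / t_max
```

Early in the run the weight stays near 0.9, favouring exploration; by the final iterations it approaches 0.4, damping the particles' momentum for fine-tuning.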
An individual’s success in PSO is affected not only by the particle’s own effort and experience but also by the information shared by its surrounding neighbours. The particle’s experience is represented in (1) by pBest_i, which is the best position found so far by the ith particle. The neighbours’ influence is represented by gBest, which is the best position found by the swarm up to the current iteration.
The particle’s position, x_i(t), is updated using (2), in which a particle’s next search is launched from its previous position and the new search is influenced by the past search [24]. Typically, x_i(t) is bounded to prevent the particles from searching in an infeasible region [25]. The quality of x_i(t) is evaluated by a problem-dependent fitness function. Each of the particles is evaluated to determine its current fitness. If a new position with a better fitness than pBest_i or gBest or both is found, then the new position value will accordingly be saved as pBest_i or gBest; otherwise the old best values are retained. This update process continues until the stopping criterion is met, when either the maximum iteration limit, T, is reached or the target solution is attained. Therefore, for a swarm with N particles, the maximum number of fitness evaluations in a run is N × T.
The original PSO algorithm is shown in the flowchart of Figure 1. As shown in the algorithm, the particles’ pBest and gBest updates are conducted after the fitness of all the particles has been evaluated. Therefore, this version of PSO is known as synchronous PSO (SPSO). Because pBest and gBest are updated only after all the particles are evaluated, SPSO ensures that all the particles receive perfect and complete information about their neighbourhood, leading to a better choice of gBest and thus allowing the particles to exploit this information so that a better solution can be found. However, this possibly leads the particles in SPSO to converge faster, resulting in premature convergence.
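A minimal SPSO loop following the update order described above (evaluate the whole swarm first, then update pBest/gBest, then move every particle) might look like the following Python sketch. The parameter defaults and the choice of V_max are illustrative, not the paper's exact setup.

```python
import random

def spso(fitness, dim, bounds, n_particles=20, t_max=200,
         c1=2.0, c2=2.0, w_start=0.9, w_end=0.4):
    """Minimal synchronous PSO sketch: pBest/gBest are updated only after
    the WHOLE swarm has been evaluated in each iteration (minimisation)."""
    lo, hi = bounds
    v_max = hi - lo  # simple clamping choice for this sketch
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[random.uniform(-v_max, v_max) for _ in range(dim)] for _ in range(n_particles)]
    p_best = [xi[:] for xi in x]
    p_best_f = [fitness(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: p_best_f[i])
    g_best, g_best_f = p_best[g][:], p_best_f[g]
    for t in range(t_max):
        w = w_start - (w_start - w_end) * t / t_max
        # 1) synchronous step: evaluate the whole swarm first
        fs = [fitness(xi) for xi in x]
        for i in range(n_particles):
            if fs[i] < p_best_f[i]:
                p_best_f[i], p_best[i] = fs[i], x[i][:]
            if fs[i] < g_best_f:
                g_best_f, g_best = fs[i], x[i][:]
        # 2) only then update every particle's velocity and position
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (p_best[i][d] - x[i][d])
                           + c2 * r2 * (g_best[d] - x[i][d]))
                v[i][d] = max(-v_max, min(v_max, v[i][d]))
                x[i][d] = max(lo, min(hi, x[i][d] + v[i][d]))
    return g_best, g_best_f
```

Note how the evaluation loop is completed before any movement: this is exactly the "perfect information" property of SPSO discussed above.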
2.2. Asynchronous PSO
In SPSO, a particle must wait for the whole swarm to be evaluated before it can move to a new position and continue its search. Thus, the first evaluated particle is idle for the longest time, waiting for the whole swarm to be updated. An alternative to SPSO is APSO, in which the particles are updated based on the current state of the swarm. A particle in APSO is updated as soon as its fitness is evaluated. The particle selects gBest using a combination of information from the current and the previous iteration. This is different from SPSO, in which all the particles use information from the same iteration. Consequently, in APSO, particles of the same iteration might use different values of gBest, as it is selected based on the information available during a particle’s update process.
The flowchart in Figure 2 shows the APSO algorithm. The flow of APSO is different from that of SPSO; however, the fitness function is still called N times per iteration, once for each particle. Therefore, the maximum number of fitness evaluations is N × T, the same as for SPSO. The velocity and position are calculated using the same equations as in SPSO.
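The asynchronous order can be sketched as a single APSO iteration in which each particle is evaluated and moved immediately, so particles later in the sequence already see any gBest improvement made earlier in the same iteration. This Python sketch is illustrative; the bounds and V_max defaults are assumptions.

```python
import random

def apso_iteration(x, v, p_best, p_best_f, g_best, g_best_f,
                   fitness, w, c1=2.0, c2=2.0, v_max=10.0, lo=-5.0, hi=5.0):
    """One APSO iteration sketch: each particle is evaluated and moved
    immediately, so later particles already see this iteration's gBest."""
    for i in range(len(x)):
        f = fitness(x[i])
        if f < p_best_f[i]:
            p_best_f[i], p_best[i] = f, x[i][:]
        if f < g_best_f:
            g_best_f, g_best = f, x[i][:]  # visible to particles i+1, i+2, ...
        for d in range(len(x[i])):
            r1, r2 = random.random(), random.random()
            v[i][d] = (w * v[i][d]
                       + c1 * r1 * (p_best[i][d] - x[i][d])
                       + c2 * r2 * (g_best[d] - x[i][d]))
            v[i][d] = max(-v_max, min(v_max, v[i][d]))
            x[i][d] = max(lo, min(hi, x[i][d] + v[i][d]))
    return g_best, g_best_f
```

The only structural difference from the SPSO flow is that evaluation and movement are interleaved in one pass; the fitness call count per iteration is unchanged.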
Besides the variety of information, the lack of synchronicity in APSO solves the issue of idle particles faced in SPSO [26]. An asynchronous update also enables the update sequence of the particles to change dynamically, or a particle to be updated more than once [26, 27]. The change in the update sequence offers different levels of available information among the particles, and such differences can prevent the particles from being trapped in local optima [17].
3. The Proposed Synchronous-Asynchronous PSO (SAPSO)
In this paper, the PSO algorithm is improved by merging both update methods. The proposed algorithm, synchronous-asynchronous PSO (SAPSO), divides the particles into smaller groups. In SPSO and APSO, the particles learn from their own best experience, pBest_i, and the swarm’s best, gBest. However, in the proposed algorithm, instead of using their own experience, the particles learn from their group’s performance.
The proposed algorithm is presented in the flowchart shown in Figure 3. The algorithm starts with the initialisation of the particles. The particles in SAPSO are divided into K groups, each of which consists of n particles. Initially, K central particles, one for each group, are randomly initialised within the search space. This is followed by the random placement of the members of each group, which are placed within a fixed initialisation radius of the central particle of their respective groups. This radius is therefore the maximum distance of a particle from the central particle of its group, and it is used only once throughout the execution of the algorithm—during the initialisation phase. Group memberships remain fixed throughout the search process. The total number of particles for the SAPSO algorithm is N = K × n.
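The initialisation step described above might be sketched as follows. Treating the radius as a per-coordinate offset from the group centre is an assumption made for illustration (a Euclidean ball around the centre would serve equally well).

```python
import random

def init_groups(k, n, dim, bounds, radius):
    """Sketch of SAPSO initialisation: k random group centres, then n members
    per group placed within `radius` of their centre (clipped to the bounds).
    Group membership is fixed after this step; `radius` is never used again."""
    lo, hi = bounds
    groups = []
    for _ in range(k):
        centre = [random.uniform(lo, hi) for _ in range(dim)]
        members = [[max(lo, min(hi, c + random.uniform(-radius, radius)))
                    for c in centre]
                   for _ in range(n)]
        groups.append(members)
    return groups
```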
The groups are updated one by one; that is, an asynchronous update is used across groups. The particles of the group currently being updated use three sources of information to update their velocities: the current information of their own group members, which the particles use to try to match their group’s best performer; recent information from the groups that were updated earlier in the iteration; and information from the previous iteration for the groups that are yet to be updated.
When a group is updated, the group members’ velocity and position updates are performed after the whole group performance is evaluated. Therefore, the particles in a group are updated synchronously.
When a group evaluates the performance of its members, the fitness function is called n times, once per member. The groups are updated one after another within an iteration. Since there are K groups of n particles each, the fitness function is called K × n times, which is equivalent to N times per iteration. Therefore, although the particles in SAPSO are divided into groups, the maximum number of fitness evaluations per run is the same as for SPSO and APSO, namely N × T.
The velocity at time t of the ith particle belonging to the kth group, v_ik(t), is updated using the following equation:

v_ik(t + 1) = ω v_ik(t) + c1 r1 (gBest_k − x_ik(t)) + c2 r2 (gBest − x_ik(t)). (3)

Equation (3) shows that the information used to update the velocity are gBest_k and gBest. gBest_k is the best member of the kth group, and it is chosen among the particles’ bests of the kth group, pBest_ik. This value, together with the swarm’s best, gBest, leads the particles’ search in the SAPSO algorithm. The gBest is updated once a new gBest_k outperforms it; thus, gBest is the best gBest_k. The communication among the groups in SAPSO is conducted through the best performing members of the groups. The position of the particle, x_ik(t), is updated using

x_ik(t + 1) = x_ik(t) + v_ik(t + 1). (4)

The algorithm ends when either the ideal fitness is achieved or the maximum iteration is reached.
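One SAPSO iteration, combining the group-wise asynchronous order with the synchronous within-group update, might be sketched as follows. This is an illustrative Python sketch, not the authors' MATLAB implementation; parameter defaults and bounds are assumptions.

```python
import random

def sapso_iteration(groups, vels, p_best, p_best_f, g_best, g_best_f, fitness,
                    w, c1=2.0, c2=2.0, v_max=10.0, lo=-5.0, hi=5.0):
    """One SAPSO iteration sketch. Groups are processed one after another
    (asynchronous across groups); within a group, all members are evaluated
    before any velocity/position update (synchronous within a group).
    Velocities are driven by the group best and the swarm best, not pBest."""
    for k, members in enumerate(groups):
        # synchronous step: evaluate the whole group first
        for i, xi in enumerate(members):
            f = fitness(xi)
            if f < p_best_f[k][i]:
                p_best_f[k][i], p_best[k][i] = f, xi[:]
        # group best = best pBest in this group; it may also improve gBest
        b = min(range(len(members)), key=lambda i: p_best_f[k][i])
        gk_best = p_best[k][b]
        if p_best_f[k][b] < g_best_f:
            g_best_f, g_best = p_best_f[k][b], p_best[k][b][:]
        # then update every member, led by gk_best and g_best
        for i, xi in enumerate(members):
            for d in range(len(xi)):
                r1, r2 = random.random(), random.random()
                vels[k][i][d] = (w * vels[k][i][d]
                                 + c1 * r1 * (gk_best[d] - xi[d])
                                 + c2 * r2 * (g_best[d] - xi[d]))
                vels[k][i][d] = max(-v_max, min(v_max, vels[k][i][d]))
                xi[d] = max(lo, min(hi, xi[d] + vels[k][i][d]))
    return g_best, g_best_f
```

Groups processed later in the loop see the gBest improvements made by earlier groups, which is precisely the asynchronous inter-group behaviour described above.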
The SAPSO algorithm takes advantage of both APSO and SPSO algorithms. In APSO, the particles are updated using imperfect information, which contributes to the diversity and exploration. In SPSO, the quality of the solution found is ensured by evaluating the performance of the whole swarm first. The SPSO particles are then updated by exploiting this information. The asynchronous update characteristic of APSO is imitated by SAPSO by updating the groups one after another. Hence, members of a group are updated using the information from mixed iterations. This strategy encourages exploration due to the imperfect information. However, the performance of all members of a group in SAPSO is evaluated first before the velocity and position update process starts. This is the synchronous aspect of SAPSO. It provides the complete information of the group and allows the members to exploit the available information.
4. Experiments
The proposed SAPSO and the existing SPSO and APSO were implemented in MATLAB. The parameter settings are summarised in Table 1. Each experiment was subjected to 500 runs. The initial velocities were set to random values within the velocity clamping range [−V_max, V_max]. The positions of the particles were randomly initialised within the search space. A linearly decreasing inertia weight ranging from 0.9 to 0.4 was employed to encourage fine-tuning towards the end of the search. The cognitive and social learning factors were set to 2, a typical value for c1 and c2. The search was terminated either when the number of iterations reached 2000 or when the ideal solution was found; the maximum number of iterations was set to 2000 to limit the computational time. The final values were recorded. The settings for the additional parameters in SAPSO are given in Table 2. Exclusively for SAPSO, the members of the groups were randomly initialised within the initialisation radius of their group centres, which were themselves randomly initialised within the boundary of the search space.


A set of benchmark test problems was identified for assessing the performance of the proposed SAPSO and the original SPSO and APSO algorithms. The benchmark test problems consist of five unimodal functions, four multimodal functions, and one real-world optimisation problem, namely, frequency-modulated (FM) sound wave synthesis, which is taken from the CEC 2011 competition on testing evolutionary algorithms on real-world optimisation problems [28]. These functions are given in Table 3. All functions used are minimisation functions with an ideal fitness value of 0. The dimension of the unimodal and multimodal problems was set to 30. The search spaces for these problems are therefore high dimensional [29, 30]. Note that the FM sound wave problem is a six-dimensional problem.
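For concreteness, a few of the named benchmark functions can be written down in their commonly used forms (Python sketches; the exact ranges and constants listed in Table 3 may differ slightly from these standard definitions):

```python
import math

def sphere(x):
    """Spherical (unimodal): sum of squares, minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def rastrigin(x):
    """Rastrigin (multimodal): many regularly spaced local minima."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def ackley(x):
    """Ackley (multimodal): nearly flat outer region with a deep central hole."""
    n = len(x)
    s1 = sum(xi * xi for xi in x) / n
    s2 = sum(math.cos(2 * math.pi * xi) for xi in x) / n
    return -20 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20 + math.e
```

All three attain their ideal fitness of 0 at the origin, consistent with the minimisation setting described above.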

The solutions found by the algorithms tested are presented here using boxplots. A boxplot shows both the quality and the consistency of an algorithm’s performance. The size of the box reflects the magnitude of the variance of the results; thus a smaller box suggests a more consistent performance of the algorithm. Because the benchmark functions used in this study are minimisation problems, a lower boxplot is desirable, as it indicates better quality of the solutions found.
The algorithms are compared using a nonparametric test because the solutions found are not normally distributed. The test chosen is the Friedman test, which is suitable for comparison of more than two algorithms [31]. The algorithms are first ranked based on their average performance for each benchmark function. The average rank is then used to calculate the Friedman statistic value. According to the test, if the statistic value is less than the critical value, the algorithms tested are identical to each other; otherwise, significant differences exist. If a significant difference is found, the algorithms are then compared using a post hoc procedure. The post hoc procedure chosen here is the Holm procedure. It is able to pinpoint the algorithms that are not identical to each other, a result that cannot be detected by the Friedman test.
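The ranking-based Friedman statistic described above can be sketched directly from its standard formula. This is an illustrative re-implementation, not the authors' code, and the Holm post hoc step is omitted.

```python
def friedman_statistic(scores):
    """Friedman chi-square sketch. scores[m][a] is the mean result of
    algorithm a on problem m (lower is better). Returns the statistic
    and the average rank of each algorithm; ties receive average ranks."""
    n, k = len(scores), len(scores[0])
    ranks = [0.0] * k
    for row in scores:
        order = sorted(range(k), key=lambda a: row[a])
        r, pos = [0.0] * k, 0
        while pos < k:
            end = pos
            while end + 1 < k and row[order[end + 1]] == row[order[pos]]:
                end += 1
            avg = (pos + end) / 2 + 1          # average rank for tied entries
            for j in range(pos, end + 1):
                r[order[j]] = avg
            pos = end + 1
        for a in range(k):
            ranks[a] += r[a] / n
    # chi^2_F = 12N / (k(k+1)) * (sum_j R_j^2 - k(k+1)^2 / 4)
    chi2 = 12 * n / (k * (k + 1)) * (sum(R * R for R in ranks) - k * (k + 1) ** 2 / 4)
    return chi2, ranks
```

The statistic is then compared against the chi-square critical value with k − 1 degrees of freedom to decide whether the algorithms differ significantly.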
5. Results and Discussion
5.1. SAPSO versus SPSO and APSO
The boxplots in Figure 4 show the quality of the results for the unimodal test functions using the three algorithms. The results obtained by the SPSO and APSO algorithms contain multiple outliers. These out-of-norm observations are caused by the stochastic behaviour of the algorithms. The proposed SAPSO exhibits no outliers for the unimodal test functions. The particles in SAPSO are led by two particles with good experience, gBest_k and gBest, instead of only gBest as in SPSO and APSO. Learning from the gBest_k of each group reduces the effect of the stochastic behaviour in SAPSO.
Figure 4: Boxplots for the unimodal functions: (a, b) Quadric, (c, d) Quartic, (e, f) Rosenbrock, (g, h) Spherical, and (i, j) Hyperellipsoid, each shown with and without outliers.
The presence of the outliers makes it difficult to observe the variance of the results through the boxplots. Therefore, the outliers are trimmed in the boxplots of Figures 4(b), 4(d), 4(f), 4(h), and 4(j). The benchmark functions tested here are minimisation functions; hence, a lower boxplot indicates better quality of the algorithm. It can be observed from the figure that SAPSO consistently gives good performance on all the unimodal functions tested. The sizes of the boxplots show that the SAPSO algorithm provides a more consistent performance with smaller variance.
The results of the test on multimodal problems are shown in the boxplots in Figure 5. SPSO and APSO have outliers for Ackley and Rastrigin, while SAPSO only has outliers in the results for Rastrigin. The Rastrigin function has non-protruding minima, which complicate the convergence [32]. However, SAPSO has fewer outliers compared with SPSO and APSO. This observation once again shows the benefit of learning from two good particles, gBest_k and gBest.
Figure 5: Boxplots for the multimodal functions: (a, b) Ackley with and without outliers, (c) Griewank, (d) Salomon, and (e, f) Rastrigin with and without outliers.
Similar to the boxplots for the unimodal test functions, the boxplots after trimming of the outliers show that the variance of the solutions found by SAPSO is small. The small variance confirms the consistency of SAPSO’s performance. SAPSO found much better results for the Griewank function than the other two algorithms.
The three algorithms tested have similar performance on the FM sound wave parameter estimation problem, as shown in Figure 6. However, from the distribution of the solutions in the boxplot, it can be seen that SAPSO and APSO perform slightly better than SPSO, as more of the solutions found lie in the lower part of the box.
In Table 4, the Friedman test is conducted to analyse whether significant differences exist between the algorithms. The performances of the algorithms on all test functions are ranked based on their mean values. The means used here are calculated inclusive of the outliers because the outliers are genuine, being neither measurement nor clerical errors, and are therefore valid solutions. The means are shown in the boxplots (before trimming of outliers) using the * symbol. According to the Friedman test, SAPSO ranks best among the three algorithms. The Friedman statistic value shows that significant differences exist between the algorithms. Therefore, the Holm procedure is conducted, and the three algorithms are compared against each other. The results in Table 5 show that there is a significant difference between SAPSO and APSO. The Holm procedure also shows that the performance of SAPSO is on a par with that of SPSO.


5.2. Effect of SAPSO Parameters
The number of particles influences both the size and the number of groups. To study the effect of these parameters, the number of particles is varied from 20 to 50. Only test functions one to nine are used here, as they have the same dimension. Seven experiments are conducted each for the size of the groups and for the number of groups, as listed in Tables 6 and 7. In the experiments on group size, the number of groups is fixed at 5 and the size of the groups is increased from 4 to 10 members. The effect of the number of groups is studied using groups of 5 members, with the number of groups increased from 4 to 10.


The average results for the effect of the size of the groups and the number of groups are presented in Tables 8 and 9. Generally, the results show that, similar to the original PSO algorithm, the number of particles affects the performance of SAPSO. A higher number of particles, that is, bigger groups or a higher number of groups, contributes to better performance. However, the effect also depends on the test function; as can be observed in Figure 7, the effect is more obvious for the Quadric and Ackley functions than for the others.


Figure 7: Boxplots for varied numbers of particles: (a, b) Quadric with and without outliers, (c) Quartic, (d) Rosenbrock, (e) Spherical, (f, g) Hyperellipsoid with and without outliers, (h, i) Ackley with and without outliers, (j) Griewank, (k) Rastrigin, and (l) Salomon.
The Friedman test is performed on the experimental results in Tables 8 and 9 to study the effect of the number of groups and the group size on SAPSO’s performance. The average ranks are presented in Table 10.

The result of the Friedman test shows that significant differences exist in SAPSO’s performance for different numbers of groups. Hence, the Holm procedure is conducted, and its statistical values are tabulated in Table 11. The result of the Holm procedure shows that significant differences exist between SAPSO implementations whose populations consist of unequal numbers of groups when the difference in the number of groups is greater than three.

The Friedman test performed on the effect of the group size shows that SAPSO implementations with groups of different sizes are significantly different. This observation is further studied using the Holm procedure, as in Table 12. The outcome of the Holm procedure reveals that a significant difference exists between two implementations of the SAPSO algorithm if the difference in group size is greater than three particles.

The initialisation radius is a new parameter introduced in SAPSO. It represents the maximum distance of a particle to its group head during the initialisation stage of the algorithm. Its value determines the distribution of the particles within the search space: a small radius results in compact groups, while a large radius results in groups with a bigger spread. The effect of this parameter is tested here, and the test parameters are listed in Table 13. For each of the test functions, the radius is set to 1%, 5%, 10%, 50%, and 100% of the length of the search space. The average performance for the different radius values is listed in Table 14.


The Friedman statistic shows that using different initialisation radius values makes no significant difference to SAPSO, showing that the performance of SAPSO is not greatly affected by the choice of this parameter. This result is confirmed by the boxplots in Figure 8, where the sizes of the boxes for most of the test functions are similar to each other.
Figure 8: Boxplots for different initialisation radius values: (a) Quadric, (b, c) Quartic with and without outliers, (d) Rosenbrock, (e, f) Spherical with and without outliers, (g, h) Hyperellipsoid with and without outliers, (i) Ackley, (j) Griewank, (k) Rastrigin, and (l) Salomon.
6. Conclusion
A synchronous-asynchronous PSO algorithm (SAPSO) is proposed in this paper. The particles in this algorithm are updated in groups; the groups are updated asynchronously—one by one—while particles within a group are updated synchronously. A group’s search is led by the group’s best performer, gBest_k, and the best member of the swarm, gBest. The algorithm benefits from the good exploitation and fine-tuning provided by the synchronous update while also taking advantage of the exploration encouraged by the asynchronous update. Learning from gBest_k also contributes to the good performance of the SAPSO algorithm. Overall, the performance of the proposed algorithm is better and more consistent than that of the original SPSO and APSO.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This research is funded by the Department of Higher Education of Malaysia under the Fundamental Research Grant Scheme (FRGS/1/2012/TK06/MMU/03/7), the UM-UMRG Scheme (0312013), the Ministry of Higher Education of Malaysia/High Impact Research Grant (UMD00001616001), and the UM Postgraduate Research Grant (PG0972013A). The authors would also like to acknowledge the anonymous reviewers for their valuable comments and insights.
References
 J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks (ICNN ’95), vol. 4, pp. 1942–1948, Perth, Western Australia, NovemberDecember 1995. View at: Publisher Site  Google Scholar
 R. C. Eberhart and X. Hu, “Human tremor analysis using particle swarm optimization,” Proceedings of the Congress on Evolutionary Computation (CEC '99), pp. 1927–1930, 1999, Cat. no. 99TH8406. View at: Google Scholar
 Z. Ibrahim, N. K. Khalid, J. A. A. Mukred et al., “A DNA sequence design for DNA computation based on binary vector evaluated particle swarm optimization,” International Journal of Unconventional Computing, vol. 8, no. 2, pp. 119–137, 2012. View at: Google Scholar
 J. Hazra and A. K. Sinha, “Congestion management using multiobjective particle swarm optimization,” IEEE Transactions on Power Systems, vol. 22, no. 4, pp. 1726–1734, 2007. View at: Publisher Site  Google Scholar
 M. F. Tasgetiren, Y. Liang, M. Sevkli, and G. Gencyilmaz, “Particle swarm optimization algorithm for single machine total weighted tardiness problem,” in Proceedings of the Congress on Evolutionary Computation (CEC '04), pp. 1412–1419, June 2004. View at: Google Scholar
 A. Adam, A. F. Zainal Abidin, Z. Ibrahim, A. R. Husain, Z. Md Yusof, and I. Ibrahim, “A particle swarm optimization approach to Robotic Drill route optimization,” in Proceedings of the 4th International Conference on Mathematical Modelling and Computer Simulation (AMS '10), pp. 60–64, May 2010. View at: Publisher Site  Google Scholar
 M. N. Ayob, Z. M. Yusof, A. Adam et al., “A particle swarm optimization approach for routing in VLSI,” in Proceedings of the 2nd International Conference on Computational Intelligence, Communication Systems and Networks (CICSyN '10), pp. 49–53, Liverpool, UK, July 2010. View at: Publisher Site  Google Scholar
 Y. Shi and R. Eberhart, “A modified particle swarm optimizer,” in Proceedings of the IEEE International Conference on Evolutionary Computation and IEEE World Congress on Computational Intelligence, (Cat. No.98TH8360), pp. 69–73, Anchorage, Alaska, USA, May 1998. View at: Publisher Site  Google Scholar
 M. Clerc and J. Kennedy, “The particle swarmexplosion, stability, and convergence in a multidimensional complex space,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 1, pp. 58–73, 2002. View at: Publisher Site  Google Scholar
 M. ReyesSierra and C. A. Coello Coello, “Multiobjective particle swarm optimizers: a survey of the stateoftheart,” International Journal of Computational Intelligence Research, vol. 2, no. 3, pp. 287–308, 2006. View at: Google Scholar  MathSciNet
 K. S. Lim, Z. Ibrahim, S. Buyamin et al., “Improving vector evaluated particle swarm optimisation by incorporating nondominated solutions,” The Scientific World Journal, vol. 2013, Article ID 510763, 19 pages, 2013. View at: Publisher Site  Google Scholar
 J. Kennedy and R. C. Eberhart, “Discrete binary version of the particle swarm algorithm,” in Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, Computational Cybernetics and Simulation, vol. 5, pp. 4104–4108, Orlando, Fla, USA, October 1997. View at: Publisher Site  Google Scholar
 M. S. Mohamad, S. Omatu, S. Deris, M. Yoshioka, A. Abdullah, and Z. Ibrahim, “An enhancement of binary particle swarm optimization for gene selection in classifying cancer classes,” Algorithms for Molecular Biology, vol. 8, article 15, 2013. View at: Publisher Site  Google Scholar
 J. Rada-Vilela, M. Zhang, and W. Seah, “A performance study on the effects of noise and evaporation in particle swarm optimization,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '12), pp. 1–8, June 2012.
 A. Carlisle and G. Dozier, “An off-the-shelf PSO,” in Proceedings of the Workshop on Particle Swarm Optimization, 2001.
 L. Mussi, S. Cagnoni, and F. Daolio, “Empirical assessment of the effects of update synchronization in particle swarm optimization,” in Proceedings of the AIIA Workshop on Complexity, Evolution and Emergent Intelligence, pp. 1–10, 2009.
 J. Rada-Vilela, M. Zhang, and W. Seah, “A performance study on synchronicity and neighborhood size in particle swarm optimization,” Soft Computing, vol. 17, no. 6, pp. 1019–1030, 2013.
 B. Jiang, N. Wang, and X. He, “Asynchronous particle swarm optimizer with relearning strategy,” in Proceedings of the 37th Annual Conference of the IEEE Industrial Electronics Society (IECON '11), pp. 2341–2346, November 2011.
 S. Xue, J. Zhang, and J. Zeng, “Parallel asynchronous control strategy for target search with swarm robots,” International Journal of Bio-Inspired Computation, vol. 1, no. 3, pp. 151–163, 2009.
 J. Rada-Vilela, M. Zhang, and W. Seah, “A performance study on synchronous and asynchronous updates in particle swarm optimization,” in Proceedings of the 13th Annual Genetic and Evolutionary Computation Conference (GECCO '11), pp. 21–28, July 2011.
 Y. Shi and R. Eberhart, “Parameter selection in particle swarm optimization,” in Evolutionary Programming VII, vol. 1447 of Lecture Notes in Computer Science, pp. 591–600, Springer, New York, NY, USA, 1998.
 J. Kennedy, R. C. Eberhart, and Y. Shi, Swarm Intelligence, Morgan Kaufmann, Boston, Mass, USA, 2001.
 Y. Shi and R. C. Eberhart, “Empirical study of particle swarm optimization,” in Proceedings of the Congress on Evolutionary Computation (CEC '99) (Cat. No. 99TH8406), pp. 1945–1950, 1999.
 J. Kennedy, “Why does it need velocity?” in Proceedings of the IEEE Swarm Intelligence Symposium (SIS '05), pp. 38–44, 2005.
 R. C. Eberhart and Y. Shi, “Comparing inertia weights and constriction factors in particle swarm optimization,” in Proceedings of the Congress on Evolutionary Computation (CEC '00) (Cat. No. 00TH8512), vol. 1, pp. 84–88, July 2000.
 J. Rada-Vilela, M. Zhang, and W. Seah, “Random asynchronous PSO,” in Proceedings of the 5th International Conference on Automation, Robotics and Applications (ICARA '11), pp. 220–225, Wellington, New Zealand, December 2011.
 L. Dioşan and M. Oltean, “Evolving the structure of the particle swarm optimization algorithms,” in Proceedings of the 6th European Conference on Evolutionary Computation in Combinatorial Optimization (EvoCOP '06), vol. 3906 of Lecture Notes in Computer Science, pp. 25–36, Springer, 2006.
 S. Das and P. N. Suganthan, Problem Definitions and Evaluation Criteria for CEC 2011 Competition on Testing Evolutionary Algorithms on Real World Optimization Problems, 2011.
 T. Hatanaka, T. Korenaga, and N. Kondo, Search Performance Improvement for PSO in High Dimensional Space, 2007.
 T. Hendtlass, “Particle swarm optimisation and high dimensional problem spaces,” in Proceedings of the Eleventh Conference on Congress on Evolutionary Computation (CEC '09), pp. 1988–1994, IEEE Press, Piscataway, NJ, USA, May 2009.
 J. Derrac, S. García, D. Molina, and F. Herrera, “A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms,” Swarm and Evolutionary Computation, vol. 1, no. 1, pp. 3–18, 2011.
 J. Dieterich and B. Hartke, “Empirical review of standard benchmark functions using evolutionary global optimization,” In press, http://arxiv.org/abs/1207.4318.
Copyright
Copyright © 2014 Nor Azlina Ab Aziz et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.