Abstract

In the original particle swarm optimisation (PSO) algorithm, the particles’ velocities and positions are updated after the performance of the whole swarm has been evaluated. This algorithm is also known as synchronous PSO (S-PSO). The strength of this update method lies in the exploitation of information. Asynchronous-update PSO (A-PSO) has been proposed as an alternative to S-PSO. A particle in A-PSO updates its velocity and position as soon as its own performance has been evaluated. Hence, particles are updated using partial information, which leads to stronger exploration. In this paper, we attempt to improve PSO by merging both update methods so as to utilise the strengths of each. The proposed synchronous-asynchronous PSO (SA-PSO) algorithm divides the particles into smaller groups. The best member of a group and the swarm’s best are chosen to lead the search. Members within a group are updated synchronously, while the groups themselves are updated asynchronously. Five well-known unimodal functions, four multimodal functions, and a real-world optimisation problem are used to study the performance of SA-PSO, which is compared with that of S-PSO and A-PSO. The results are statistically analysed and show that the proposed SA-PSO performs consistently well.

1. Introduction

Particle swarm optimisation (PSO) was introduced by Kennedy and Eberhart in 1995 [1]. It is a swarm-based stochastic optimisation algorithm that mimics the social behaviour of organisms such as birds and fish. These organisms’ success in foraging for food is achieved through individual effort as well as cooperation with surrounding neighbours. In PSO, the individuals are represented by a swarm of agents called particles. The particles move within the search area to find the optimal solution by updating their velocities and positions. These values are influenced by the experience of the particles and their social interactions. The PSO algorithm has been successfully applied in various fields, such as human tremor analysis for biomedical engineering [2, 3], electric power and voltage management [4], machine scheduling [5], robotics [6], and VLSI circuit design [7].

Since its introduction, PSO has undergone numerous evolutionary processes, and many variations have been proposed to improve the effectiveness of the algorithm. Some of the improvements involve the introduction of new parameters, such as the inertia weight [8] and the constriction factor [9], while others focus on solving specific types of problems, such as multiobjective optimisation [10, 11], discrete optimisation problems [12, 13], and dynamic optimisation problems [14].

Here we focus on the effect of the particles’ update sequence on the performance of PSO. In the original PSO, a particle’s information on its neighbourhood’s best found solution is updated only after the performance of the whole swarm has been evaluated. This version of the PSO algorithm is known as synchronous PSO (S-PSO). The synchronous update in S-PSO provides the particles with perfect information about their neighbourhood, thus allowing the swarm to choose a better neighbour and exploit the information provided by this neighbour. However, this strategy can cause the particles to converge too fast.

Another variation of PSO, known as asynchronous PSO (A-PSO), has been discussed by Carlisle and Dozier [15]. In A-PSO, the best solutions are updated as soon as a particle’s performance has been evaluated. Therefore, a particle’s search is guided by partial, or imperfect, information from its neighbourhood. This strategy leads to diversity in the swarm [16]: the particles updated at the beginning of an iteration use more information from the previous iteration, while particles at the end of the iteration are updated based on information from the current iteration [17]. In several studies [15, 16, 18], A-PSO has been claimed to perform better than S-PSO. Xue et al. [19] reported that asynchronous updates contribute to a shorter execution time. The imperfect information caused by asynchronous updates means that the current best found solution is communicated to the particles more slowly, thus encouraging more exploration. However, a study conducted by Juan et al. [20] reported that S-PSO is better than A-PSO in terms of both solution quality and convergence speed, which is attributed to S-PSO’s stronger exploitation.

The synchronicity of the particles influences the balance between exploration and exploitation [17], both of which play important roles in determining the quality of a solution. The exploration provided by the asynchronous update ensures that the search space is thoroughly searched, so that the area containing the best solution is discovered, whereas the exploitation provided by the synchronous update helps to fine tune the search so that the best solution can be found. Hence, in this paper, we attempt to improve the PSO algorithm by merging both synchronous and asynchronous updates in the search process so that the advantages of both methods can be utilised. The proposed algorithm, named synchronous-asynchronous PSO (SA-PSO), divides the particles into smaller groups. These groups are updated asynchronously, while members within the same group are updated synchronously. After the performance of all the particles in a group is evaluated, the velocities and positions of the particles are updated using a combination of information from the current iteration, contributed by their own group and the groups updated before them, and information from the previous iteration, contributed by the groups that have not yet been updated. The search for the optimal solution in SA-PSO is led by the groups’ best members together with the swarm’s best. This strategy differs from the original S-PSO and A-PSO, where the search is led by a particle’s own experience together with the swarm’s best.

The rest of the paper is organised as follows. The S-PSO and A-PSO algorithms are discussed in Section 2. The proposed SA-PSO algorithm is described in detail in Section 3. In Section 4, the performance of the SA-PSO algorithm is evaluated using ten benchmark problems, comprising five unimodal functions, four multimodal functions, and a real-world optimisation problem. The results of the tests are presented and discussed in Section 5. Our conclusions are presented in Section 6.

2. Particle Swarm Optimisation

2.1. Synchronous PSO

In PSO, the search for the optimal solution is conducted by a swarm of particles. At time $t$, the $i$th particle has a position, $x_i(t)$, and a velocity, $v_i(t)$. The position represents a solution suggested by the particle, while the velocity is the rate of change from the current position to the next position. At the beginning of the algorithm, these two values are randomly initialised. In subsequent iterations, the search process is conducted by updating the position and velocity using the following equations:

$$v_i(t+1) = \omega(t)\,v_i(t) + c_1 r_1 \left( pBest_i - x_i(t) \right) + c_2 r_2 \left( gBest - x_i(t) \right), \tag{1}$$

$$x_i(t+1) = x_i(t) + v_i(t+1). \tag{2}$$

To prevent the particles from venturing too far from the feasible region, the velocity is clamped to $[-v_{max}, v_{max}]$. If the value of $v_{max}$ is too large, the exploration range is too wide. Conversely, if the value of $v_{max}$ is too small, the particles will favour the local search [21]. In (1), $c_1$ and $c_2$ are the learning factors that control the effect of the cognitive and social influences on a particle. Typically, both $c_1$ and $c_2$ are set to 2 [22]. Two independent random numbers, $r_1$ and $r_2$, in the range $[0, 1]$ are incorporated into the velocity equation. These random terms give the particles stochastic behaviour, thus encouraging them to explore a wider area. The inertia weight, $\omega(t)$, a term added to improve the PSO’s performance, controls the particles’ momentum. When a good area is found, the particles can switch to fine tuning by manipulating $\omega(t)$ [8]. To ensure convergence, a time-decreasing inertia weight is more favourable than a fixed inertia weight [21]: a large inertia weight at the beginning helps to find a good area through exploration, while a small inertia weight towards the end, when typically a good area has already been found, facilitates fine tuning. The small inertia weight at the end of the search reduces the global search activity [23].
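To make (1) concrete, the clamped velocity update can be sketched in a few lines. This is a minimal illustrative sketch in Python (the experiments reported in Section 4 were implemented in MATLAB); the function name, array shapes, and default values are assumptions for illustration, not the authors’ code.

```python
import numpy as np

def update_velocity(v, x, pbest, gbest, w, c1=2.0, c2=2.0, v_max=4.0):
    """Velocity update of (1), clamped to [-v_max, v_max].

    v, x, pbest, gbest: arrays of shape (dim,); w: inertia weight at time t.
    """
    r1 = np.random.rand(*x.shape)  # independent random numbers in [0, 1]
    r2 = np.random.rand(*x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return np.clip(v_new, -v_max, v_max)  # velocity clamping
```

The clamping step is what bounds the exploration range described above: a larger v_max allows wider steps, a smaller one biases the particle towards local search.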

An individual’s success in PSO is affected not only by the particle’s own effort and experience but also by the information shared by its surrounding neighbours. The particle’s experience is represented in (1) by $pBest_i$, which is the best position found so far by the $i$th particle. The neighbours’ influence is represented by $gBest$, which is the best position found by the swarm up to the current iteration.

The particle’s position, $x_i(t)$, is updated using (2), in which a particle’s next search is launched from its previous position and the new search is influenced by the past search [24]. Typically, $x_i(t)$ is bounded to prevent the particles from searching in an infeasible region [25]. The quality of $x_i(t)$ is evaluated by a problem-dependent fitness function, and each particle is evaluated to determine its current fitness. If a new position with a better fitness than the current $pBest_i$ or $gBest$ (or both) is found, then the new position is accordingly saved as $pBest_i$ or $gBest$; otherwise, the old best values are retained. This update process continues until the stopping criterion is met, that is, when either the maximum iteration limit, $T$, is reached or the target solution is attained. Therefore, for a swarm of $N$ particles, the maximum number of fitness evaluations in a run is $N \times T$.

The original PSO algorithm is shown in the flowchart of Figure 1. As shown there, the $pBest$ and $gBest$ updates are conducted only after the fitness of all the particles has been evaluated; this version of PSO is therefore known as synchronous PSO (S-PSO). Because $pBest$ and $gBest$ are updated after all the particles have been evaluated, S-PSO ensures that all the particles receive perfect and complete information about their neighbourhood, leading to a better choice of $gBest$ and thus allowing the particles to exploit this information so that a better solution can be found. However, this may also cause the particles in S-PSO to converge faster, resulting in premature convergence.
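The synchronous structure, evaluating every particle first and only then updating $pBest$, $gBest$, and the whole swarm, can be made explicit in code. The following is a hypothetical minimal S-PSO sketch using the settings described in Section 4 (inertia weight decreasing linearly from 0.9 to 0.4, $c_1 = c_2 = 2$); the bounds, v_max, and swarm-size defaults are assumptions.

```python
import numpy as np

def spso(fitness, dim=30, n=30, t_max=2000, lo=-100.0, hi=100.0,
         c1=2.0, c2=2.0, v_max=None):
    """Minimal synchronous PSO: the whole swarm is evaluated before any update."""
    v_max = (hi - lo) if v_max is None else v_max
    x = np.random.uniform(lo, hi, (n, dim))
    v = np.random.uniform(-v_max, v_max, (n, dim))
    pbest, pbest_f = x.copy(), np.full(n, np.inf)
    gbest, gbest_f = x[0].copy(), np.inf
    for t in range(t_max):
        w = 0.9 - (0.9 - 0.4) * t / t_max          # linearly decreasing inertia weight
        f = np.array([fitness(p) for p in x])      # evaluate ALL particles first ...
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        if pbest_f.min() < gbest_f:                # ... so gBest is chosen with
            gbest_f = pbest_f.min()                # complete information
            gbest = pbest[pbest_f.argmin()].copy()
        r1, r2 = np.random.rand(n, dim), np.random.rand(n, dim)
        v = np.clip(w * v + c1 * r1 * (pbest - x)  # velocity update of (1)
                    + c2 * r2 * (gbest - x), -v_max, v_max)
        x = np.clip(x + v, lo, hi)                 # position update of (2)
    return gbest, gbest_f
```

For example, `spso(lambda p: float(np.sum(p ** 2)))` would minimise the Sphere function under these assumed settings.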

2.2. Asynchronous PSO

In S-PSO, a particle must wait for the whole swarm to be evaluated before it can move to a new position and continue its search. Thus, the first particle evaluated is idle for the longest time, waiting for the whole swarm to be updated. An alternative to S-PSO is A-PSO, in which the particles are updated based on the current state of the swarm. A particle in A-PSO is updated as soon as its fitness has been evaluated. The particle selects its $gBest$ using a combination of information from the current and the previous iterations. This is different from S-PSO, in which all the particles use information from the same iteration. Consequently, in A-PSO, particles within the same iteration might use different values of $gBest$, as $gBest$ is selected based on the information available at the time of each particle’s update.

The flowchart in Figure 2 shows the A-PSO algorithm. The flow of A-PSO differs from that of S-PSO; however, the fitness function is still called $N$ times per iteration, once for each particle. Therefore, the maximum number of fitness evaluations is $N \times T$, the same as in S-PSO. The velocity and position are calculated using the same equations as in S-PSO, namely (1) and (2).
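For contrast, here is a sketch of the A-PSO loop under the same assumptions as the S-PSO sketch above. The only structural change is that each particle’s $pBest$, $gBest$, velocity, and position are updated immediately after its own evaluation, so particles later in the iteration see fresher information, while the evaluation count per iteration stays at $N$.

```python
import numpy as np

def apso(fitness, dim=30, n=30, t_max=2000, lo=-100.0, hi=100.0,
         c1=2.0, c2=2.0, v_max=None):
    """Minimal asynchronous PSO: a particle moves right after its own evaluation."""
    v_max = (hi - lo) if v_max is None else v_max
    x = np.random.uniform(lo, hi, (n, dim))
    v = np.random.uniform(-v_max, v_max, (n, dim))
    pbest, pbest_f = x.copy(), np.full(n, np.inf)
    gbest, gbest_f = x[0].copy(), np.inf
    for t in range(t_max):
        w = 0.9 - (0.9 - 0.4) * t / t_max
        for i in range(n):                        # still n evaluations per iteration
            f = fitness(x[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = x[i].copy(), f
            if f < gbest_f:                       # gBest can change mid-iteration, so
                gbest, gbest_f = x[i].copy(), f   # later particles see newer information
            r1, r2 = np.random.rand(dim), np.random.rand(dim)
            v[i] = np.clip(w * v[i] + c1 * r1 * (pbest[i] - x[i])
                           + c2 * r2 * (gbest - x[i]), -v_max, v_max)
            x[i] = np.clip(x[i] + v[i], lo, hi)
    return gbest, gbest_f
```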

Besides the variety of information used, the lack of synchronicity in A-PSO solves the problem of idle particles faced in S-PSO [26]. An asynchronous update also allows the update sequence of the particles to change dynamically, or a particle to be updated more than once [26, 27]. The changing update sequence gives the particles different levels of available information, and such differences can prevent the particles from being trapped in local optima [17].

3. The Proposed Synchronous-Asynchronous PSO (SA-PSO)

In this paper, the PSO algorithm is improved by merging both update methods. The proposed algorithm, synchronous-asynchronous PSO (SA-PSO), divides the particles into smaller groups. In S-PSO and A-PSO, the particles learn from their own best experience, $pBest_i$, and the swarm’s best, $gBest$. In the proposed algorithm, however, instead of using their own experience, the particles learn from their group’s performance.

The proposed algorithm is presented in the flowchart shown in Figure 3. The algorithm starts with the initialisation of the particles. The particles in SA-PSO are divided into $K$ groups, each of which consists of $N_k$ particles. Initially, $K$ central particles, one for each group, are randomly initialised within the search space. This is followed by the random placement of the members of each group, whose distances from the central particle of their respective group lie within the radius $R$. Therefore, $R$ is the maximum distance of a particle from the central particle of its group. This parameter is used only once throughout the execution of the algorithm, during the initialisation phase. Group memberships remain fixed throughout the search process. The total number of particles for the SA-PSO algorithm is $N = K \times N_k$.
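One possible reading of this initialisation step in code is given below. Placing each coordinate of a member within $\pm R$ of its group’s central particle is an assumption (a simple box interpretation of “within the radius $R$”); the paper’s exact placement rule may differ.

```python
import numpy as np

def init_groups(k=5, nk=5, dim=30, lo=-100.0, hi=100.0, r=0.10):
    """Initialise K groups of Nk particles each.

    Group centres are drawn uniformly in the search space; each member is
    then displaced by at most R = r * (hi - lo) per coordinate from its
    centre. R is used only here, during initialisation.
    """
    R = r * (hi - lo)
    centres = np.random.uniform(lo, hi, (k, dim))
    offsets = np.random.uniform(-R, R, (k, nk, dim))
    members = np.clip(centres[:, None, :] + offsets, lo, hi)  # shape (K, Nk, dim)
    return centres, members
```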

The groups are updated one by one; that is, an asynchronous update is used across groups. The particles of the group currently being updated use three sources of information to update their velocities. The first is the current information of their own group members; the particles use this information to try to match their group’s best performer. The particles also use recent information from the groups updated earlier in the current iteration, and information from the previous iteration for the groups that have not yet been updated.

When a group is updated, the velocity and position updates of the group members are performed only after the performance of the whole group has been evaluated. Therefore, the particles within a group are updated synchronously.

When a group evaluates the performance of its members, the fitness function is called $N_k$ times. The groups are updated one by one within an iteration. Since there are $K$ groups, the fitness function is called $K \times N_k$ times, which is equivalent to $N$ times per iteration. Therefore, although the particles in SA-PSO are divided into groups, the maximum number of fitness evaluations per run is the same as for S-PSO and A-PSO, namely $N \times T$.

The velocity at time $t$ of the $i$th particle belonging to the $k$th group, $v_i^k(t)$, is updated using the following equation:

$$v_i^k(t+1) = \omega(t)\,v_i^k(t) + c_1 r_1 \left( cBest_k - x_i^k(t) \right) + c_2 r_2 \left( gBest - x_i^k(t) \right). \tag{3}$$

Equation (3) shows that the information used to update the velocity comprises $cBest_k$ and $gBest$. $cBest_k$ is the best member of the $k$th group, chosen from among the particles’ bests, $pBest_i^k$, of the $k$th group. This value, together with the swarm’s best, $gBest$, leads the particles’ search in the SA-PSO algorithm. $gBest$ is updated as soon as a new $cBest_k$ outperforms it; thus, $gBest$ is the best of the $cBest_k$. The communication among the groups in SA-PSO is conducted through the best performing members of the groups. The position of the particle, $x_i^k(t)$, is updated using

$$x_i^k(t+1) = x_i^k(t) + v_i^k(t+1). \tag{4}$$

The algorithm ends when either the ideal fitness is achieved or the maximum number of iterations is reached.
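Putting (3) and (4) together with the group-wise schedule, one SA-PSO run might be sketched as follows: groups are processed one by one (the asynchronous part), and within a group all members are evaluated before any of them moves (the synchronous part). Variable names, defaults, and the initialisation details are illustrative assumptions, not the authors’ implementation.

```python
import numpy as np

def sapso(fitness, dim=30, k=5, nk=5, t_max=2000, lo=-100.0, hi=100.0,
          c1=2.0, c2=2.0, r=0.10):
    """Sketch of SA-PSO: asynchronous across groups, synchronous within a group."""
    v_max = hi - lo
    R = r * (hi - lo)
    centres = np.random.uniform(lo, hi, (k, dim))
    x = np.clip(centres[:, None, :] + np.random.uniform(-R, R, (k, nk, dim)), lo, hi)
    v = np.random.uniform(-v_max, v_max, (k, nk, dim))
    pbest, pbest_f = x.copy(), np.full((k, nk), np.inf)
    gbest, gbest_f = x[0, 0].copy(), np.inf
    for t in range(t_max):
        w = 0.9 - (0.9 - 0.4) * t / t_max
        for g in range(k):                            # groups taken one by one
            f = np.array([fitness(p) for p in x[g]])  # evaluate the whole group first
            better = f < pbest_f[g]
            pbest[g][better], pbest_f[g][better] = x[g][better], f[better]
            c_idx = pbest_f[g].argmin()               # cBest_g: best pBest in the group
            cbest = pbest[g][c_idx]
            if pbest_f[g][c_idx] < gbest_f:           # gBest is the best cBest so far
                gbest, gbest_f = cbest.copy(), pbest_f[g][c_idx]
            r1, r2 = np.random.rand(nk, dim), np.random.rand(nk, dim)
            v[g] = np.clip(w * v[g] + c1 * r1 * (cbest - x[g])
                           + c2 * r2 * (gbest - x[g]), -v_max, v_max)  # eq. (3)
            x[g] = np.clip(x[g] + v[g], lo, hi)                        # eq. (4)
    return gbest, gbest_f
```

Note how groups updated later in an iteration see a $gBest$ that may already reflect earlier groups’ results from the same iteration, which is exactly the mixed-iteration information described above.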

The SA-PSO algorithm takes advantage of both A-PSO and S-PSO algorithms. In A-PSO, the particles are updated using imperfect information, which contributes to the diversity and exploration. In S-PSO, the quality of the solution found is ensured by evaluating the performance of the whole swarm first. The S-PSO particles are then updated by exploiting this information. The asynchronous update characteristic of A-PSO is imitated by SA-PSO by updating the groups one after another. Hence, members of a group are updated using the information from mixed iterations. This strategy encourages exploration due to the imperfect information. However, the performance of all members of a group in SA-PSO is evaluated first before the velocity and position update process starts. This is the synchronous aspect of SA-PSO. It provides the complete information of the group and allows the members to exploit the available information.

4. Experiments

The proposed SA-PSO and the existing S-PSO and A-PSO were implemented in MATLAB. The parameter settings are summarised in Table 1. Each experiment was subjected to 500 runs. The initial velocities were set to random values within the velocity clamping range, $[-v_{max}, v_{max}]$. The positions of the particles were randomly initialised within the search space. A linearly decreasing inertia weight ranging from 0.9 to 0.4 was employed to encourage fine tuning towards the end of the search. The cognitive and social learning factors were set to 2, a typical value for $c_1$ and $c_2$. The search was terminated either when the number of iterations reached 2000 or when the ideal solution was found; the iteration limit caps the computational time taken. The final values were recorded. The settings for the additional parameters in SA-PSO are given in Table 2. Exclusively for SA-PSO, the members of the groups were randomly initialised within a distance $R$ of their group centres. The group centres were randomly initialised within the boundary of the search space.

A set of benchmark test problems was used to assess the performance of the proposed SA-PSO and the original S-PSO and A-PSO algorithms. The benchmark problems consist of five unimodal functions, four multimodal functions, and one real-world optimisation problem, namely, frequency-modulated (FM) sound wave synthesis, which is taken from the CEC 2011 competition on testing evolutionary algorithms on real-world optimisation problems [28]. These functions are given in Table 3. All functions used are minimisation functions with an ideal fitness value of 0. The dimension of the unimodal and multimodal problems, $D$, was set to 30; the search spaces for these problems are therefore high dimensional [29, 30]. Note that the FM sound wave problem is a six-dimensional problem.
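Table 3 defines the exact benchmark set. For reference, standard textbook definitions of several functions named in the discussion (Rastrigin, Griewank, Ackley), together with the Sphere function, are sketched below; all have a global minimum of 0 at the origin, consistent with the ideal fitness used here, though whether they match the paper’s exact variants is an assumption.

```python
import numpy as np

def sphere(x):       # unimodal
    return np.sum(x ** 2)

def rastrigin(x):    # multimodal, many regularly spaced local minima
    return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

def griewank(x):     # multimodal
    i = np.arange(1, x.size + 1)
    return np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0

def ackley(x):       # multimodal
    d = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / d))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / d) + 20.0 + np.e)
```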

The solutions found by the tested algorithms are presented here using boxplots. A boxplot shows both the quality and the consistency of an algorithm’s performance. The size of the box reflects the variance of the results; a smaller box thus indicates a more consistent algorithm. Because the benchmark functions used in this study are minimisation problems, a lower boxplot is desirable, as it indicates better quality of the solutions found.

The algorithms are compared using a nonparametric test because the solutions found are not normally distributed. The test chosen is the Friedman test, which is suitable for comparing more than two algorithms [31]. The algorithms are first ranked based on their average performance on each benchmark function. The average ranks are then used to calculate the Friedman statistic. According to the test, if the statistic is less than the critical value, the algorithms tested are identical to each other; otherwise, significant differences exist. If a significant difference is found, the algorithms are then compared using a post hoc procedure, here the Holm procedure. It is able to pinpoint which algorithms are not identical to each other, a result that cannot be obtained from the Friedman test alone.
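A toy sketch of such an analysis pipeline is shown below. Note one substitution: the paper’s Holm procedure operates on the Friedman average ranks, whereas this sketch combines pairwise Wilcoxon signed-rank tests with Holm’s step-down correction (a common alternative available in statsmodels); the data and the 0.05 threshold are made-up placeholders, not the paper’s results or settings.

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon
from statsmodels.stats.multitest import multipletests

# Mean best fitness per benchmark (one row per function); toy placeholders.
spso_m  = np.array([1e-3, 2e-2, 5.1, 0.9, 3e-4])
apso_m  = np.array([2e-3, 3e-2, 6.0, 1.1, 5e-4])
sapso_m = np.array([5e-4, 1e-2, 4.0, 0.7, 1e-4])

stat, p = friedmanchisquare(spso_m, apso_m, sapso_m)
print(f"Friedman statistic = {stat:.3f}, p = {p:.4f}")

# If the Friedman test rejects the null hypothesis, compare the algorithms
# pairwise and apply Holm's step-down correction to the p-values.
pairs = [("SA-PSO vs S-PSO", sapso_m, spso_m),
         ("SA-PSO vs A-PSO", sapso_m, apso_m),
         ("S-PSO vs A-PSO",  spso_m,  apso_m)]
raw_p = [wilcoxon(a, b).pvalue for _, a, b in pairs]
reject, adj_p, _, _ = multipletests(raw_p, alpha=0.05, method="holm")
for (name, _, _), pv, rej in zip(pairs, adj_p, reject):
    print(f"{name}: Holm-adjusted p = {pv:.4f}, reject H0 = {rej}")
```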

5. Results and Discussion

5.1. SA-PSO versus S-PSO and A-PSO

The boxplots in Figure 4 show the quality of the results for the unimodal test functions using the three algorithms. The results obtained by the S-PSO and A-PSO algorithms contain multiple outliers. These out-of-norm observations are caused by the stochastic behaviour of the algorithms. The proposed SA-PSO exhibits no outliers for the unimodal test functions. The particles in SA-PSO are led by two particles with good experience, $cBest_k$ and $gBest$, instead of only $gBest$ as in S-PSO and A-PSO. Learning from the $cBest_k$ of each group reduces the effect of the stochastic behaviour in SA-PSO.

The presence of the outliers makes it difficult to observe the variance of the results through the boxplots. Therefore, the outliers are trimmed in the boxplots of Figures 4(b), 4(d), 4(f), 4(h), and 4(j). The benchmark functions tested here are minimisation functions; hence, a lower boxplot indicates a better-performing algorithm. It can be observed from the figure that SA-PSO consistently gives good performance on all the unimodal functions tested. The sizes of the boxplots show that the SA-PSO algorithm provides a more consistent performance with smaller variance.

The results of the tests on the multimodal problems are shown in the boxplots in Figure 5. S-PSO and A-PSO have outliers for the Ackley and Rastrigin functions, while SA-PSO only has outliers in the results for Rastrigin. The Rastrigin function has nonprotruding minima, which complicate convergence [32]. However, SA-PSO has fewer outliers than S-PSO and A-PSO. This observation again shows the benefit of learning from two good particles, $cBest_k$ and $gBest$.

Similar to the boxplots for the unimodal test functions, the boxplots after trimming of the outliers show that the variance of the solutions found by SA-PSO is small, confirming the consistency of SA-PSO’s performance. SA-PSO found much better results for the Griewank function than the other two algorithms.

As shown in Figure 6, the three algorithms tested have similar performance on the FM sound wave parameter estimation problem. However, from the distribution of the solutions in the boxplot, it can be seen that SA-PSO and A-PSO perform slightly better than S-PSO, as more of the solutions found lie in the lower part of the box.

The Friedman test is conducted to analyse whether significant differences exist between the algorithms; the results are shown in Table 4. The performances of the algorithms on all test functions are ranked based on their mean values. The means used here are calculated inclusive of the outliers because the outliers are genuine, being neither measurement nor clerical errors, and are therefore valid solutions. The means are shown in the boxplots (before trimming of outliers) using the * symbol. According to the Friedman test, SA-PSO ranks best among the three algorithms. The Friedman statistic shows that significant differences exist between the algorithms. Therefore, the Holm procedure is conducted, and the three algorithms are compared against each other. The results in Table 5 show that there is a significant difference between SA-PSO and the A-PSO algorithm. The Holm procedure also shows that the performance of SA-PSO is on a par with that of S-PSO.

5.2. Effect of SA-PSO Parameters

The size and the number of the groups determine the total number of particles, which can influence SA-PSO’s performance. To study the effect of these parameters, the number of particles is varied from 20 to 50. Only test functions one to nine are used here, as they have the same dimension. Seven experiments each were conducted for the size of the groups and the number of groups, as listed in Tables 6 and 7. In the experiments on group size, the number of groups is fixed at 5 while the size of the groups is increased from 4 to 10 members. The effect of the number of groups is studied using groups of 5 members, with the number of groups increased from 4 to 10.

The average results for the effects of the size and the number of the groups are presented in Tables 8 and 9. Generally, the results show that, as in the original PSO algorithm, the number of particles affects the performance of SA-PSO: a higher number of particles, that is, bigger groups or a higher number of groups, contributes to better performance. However, the effect also depends on the test function. This can be observed in Figure 7, where the effect is more obvious for the Quadric and Ackley functions than for the others.

The Friedman test is performed on the experimental results in Tables 8 and 9 to study the effect of the number of groups and the group size on SA-PSO’s performance. The average ranks are presented in Table 10.

The result of the Friedman test shows that significant differences exist in SA-PSO’s performance for different numbers of groups. Hence, the Holm procedure is conducted, and its statistical values are tabulated in Table 11. The result of the Holm procedure shows that significant differences exist between two SA-PSO implementations whose populations consist of unequal numbers of groups when the difference in the number of groups is greater than three.

The Friedman test performed on the effect of the group size shows that SA-PSO implementations with groups of different sizes are significantly different. This observation is studied further using the Holm procedure, as shown in Table 12. The outcome of the Holm procedure reveals that a significant difference exists between two implementations of the SA-PSO algorithm if the difference in group size is greater than three particles.

$R$ is a new parameter introduced in SA-PSO. It represents the maximum distance of a particle from its group’s central particle during the initialisation stage of the algorithm. The value of $R$ determines the distribution of the particles within the search space: a small $R$ results in compact groups, while a large $R$ results in groups with a bigger radius. The effect of $R$ is tested here, and the test parameters are listed in Table 13. For each of the test functions, $R$ is set to 1%, 5%, 10%, 50%, and 100% of the length of the search space. The average performance for the different values of $R$ is listed in Table 14.

The Friedman statistic shows that different values of $R$ make no significant difference to SA-PSO, indicating that the performance of SA-PSO is not greatly affected by the choice of $R$. This result is corroborated by the boxplots in Figure 8, where the sizes of the boxes for most of the test functions are similar to each other.

6. Conclusion

A synchronous-asynchronous PSO algorithm (SA-PSO) is proposed in this paper. The particles in this algorithm are updated in groups; the groups are updated asynchronously, one by one, while the particles within a group are updated synchronously. A group’s search is led by the group’s best performer, $cBest_k$, and the best member of the swarm, $gBest$. The algorithm benefits from the good exploitation and fine tuning provided by the synchronous update while also taking advantage of the exploration provided by the asynchronous update. Learning from $cBest_k$ also contributes to the good performance of the SA-PSO algorithm. Overall, the performance of the proposed algorithm is better and more consistent than that of the original S-PSO and A-PSO.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This research is funded by the Department of Higher Education of Malaysia under the Fundamental Research Grant Scheme (FRGS/1/2012/TK06/MMU/03/7), the UM-UMRG Scheme (031-2013), Ministry of Higher Education of Malaysia/High Impact Research Grant (UM-D000016-16001), and UM-Postgraduate Research Grant (PG097-2013A). The authors also would like to acknowledge the anonymous reviewers for their valuable comments and insights.