Abstract

The Particle Swarm Optimization (PSO) Algorithm is a popular optimization method that is widely used in various applications, owing to its simplicity and its capability of obtaining optimal results. However, ordinary PSO may become trapped at a local optimum, especially in high dimensional problems. To overcome this problem, an efficient Global Particle Swarm Optimization (GPSO) algorithm is proposed in this paper, based on a new strategy for updating the particle positions in which information about particle positions is shared between the dimensions (variables) at every iteration. This strategy enhances the exploration capability of the GPSO algorithm, enabling it to determine the global optimum solution and avoid being trapped at local optima. The proposed GPSO algorithm is validated on 12 benchmark mathematical functions and compared with three other types of PSO techniques. The performance of the algorithm is measured in terms of solution quality, convergence characteristics, and robustness over 50 trials. The simulation results show that the new updating strategy in GPSO helps realize a better optimum solution with the smallest standard deviation value compared to the other techniques. It can be concluded that the proposed GPSO method is a superior technique for solving high dimensional numerical function optimization problems.

1. Introduction

Nowadays, there are various types of modern optimization techniques, such as Evolutionary Programming (EP), Artificial Immune Systems (AIS), Ant Colony Optimization (ACO), and Artificial Bee Colony (ABC). All of them can be regarded as heuristic optimization methods, owing to the randomization involved in their respective initial steps. Despite the use of randomized values, the mutation process and other steps in these algorithms render them capable of solving both linear and nonlinear problems. From the literature, it can be seen that optimization techniques are applicable in many fields. Among these optimization methods, however, PSO has become very popular, owing to its simplicity and its applicability in manufacturing and robotics [1-3], electrical power systems [4-8], engineering [9-11], and other areas [12-18].

There is no single optimization algorithm that can reach the global solution for every optimization problem; some algorithms provide the best solution for a particular problem but not for others [19]. Classical PSO, for instance, sometimes fails to find the global optimum solution when dealing with high dimensional problems. This occurs because the particles become trapped at a local optimum (premature) solution.

In order to address such problems, several variants of PSO have been proposed to enhance the exploration and exploitation capabilities and the convergence speed and to avoid being trapped at a local optimum [20-26]. The modifications are made either on the PSO algorithm itself or by hybridizing PSO with other optimization techniques. A new adaptive weight technique proposed by Nickabadi et al. [27] is among the recent works involving modification of the PSO algorithm. Three main constant parameters affect the performance of PSO, namely, the inertia weight (w), the cognitive constant (c1), and the social constant (c2); changing these parameters results in different performances of the algorithm. In traditional PSO, these parameters are unspecified and need to be adjusted several times in order to obtain suitable values [28, 29]. This implies that the user may choose inaccurate initial settings (w, c1, and c2), which can result in the PSO not converging. Therefore, time-varying inertia weight techniques for PSO have been introduced in [30-34], as well as adaptive inertia weight methods in [35-38]. In [27], the authors used a new adaptive inertia weight approach, in which the success rate of the swarm serves as a feedback parameter for determining the particles' situation in the search space, to enhance the performance of PSO. The results showed that the proposed method produced superior results compared to other weight adjustment techniques. However, although the results in that study were better, the method requires a large number of iterations (on the order of 10^5) to reach the global optimal value compared to other techniques. Thus, for larger dimensions, adjustment of the inertia weight parameter alone might not be sufficient for the algorithm to reach the global optimal value.

In this paper, a novel Global Particle Swarm Optimization (GPSO) is proposed to improve and enhance the convergence speed of classical PSO. The working principle of the proposed algorithm is based on a newly proposed updating parameter (gbest_rand) that enriches the capability of the algorithm in finding a global solution. This new updating strategy can explore more possible solutions in the search space. The performance of GPSO is compared with three other optimization methods: the traditional PSO, the Iteration Particle Swarm Optimization (IPSO) method [39], and the Evolutionary Particle Swarm Optimization (EPSO) method [40]. All of these PSO variants are tested on 12 common mathematical test functions in dimensions of 20, 30, 50, and 100, with the maximum number of iterations set to 1000. The remainder of this paper is organized as follows. In Section 2, the basic operation of PSO, IPSO, and EPSO and their respective updating strategies for obtaining optimal solutions are briefly explained. The details of the proposed GPSO algorithm and the new updating strategy are described in Section 3. The simulation results comparing GPSO with the other PSO algorithms and the influence of the parameter settings on GPSO are presented in Section 4. Finally, Section 5 concludes this paper.

2. Review of Particle Swarm Optimization and Its Variants

2.1. Particle Swarm Optimization (PSO) Algorithm

The original Particle Swarm Optimization (PSO) Algorithm proposed by Kennedy and Eberhart [28] in 1995 is adapted from the food-searching behavior of birds and fish (particles). All these particles move within the search space towards the location of the food (the optimal location) at a specific speed and continuously change their respective positions to arrive at the destination. The position and velocity of the ith particle in the D-dimensional search space are represented as X_i = (x_i1, x_i2, ..., x_iD) and V_i = (v_i1, v_i2, ..., v_iD), respectively. The particles' movement is guided by their own experience (pbest) and by the experience of the other particles in the group (gbest). The best position visited by the ith particle up to iteration t is recorded as pbest_i^t, while the best position among all pbest in the group is defined as gbest^t. Each particle updates its velocity (v) and position (x) in the following manner:

  v_id^(t+1) = w v_id^t + c1 r1 (pbest_id^t - x_id^t) + c2 r2 (gbest_d^t - x_id^t),
  x_id^(t+1) = x_id^t + v_id^(t+1),                                              (1)

where t is the iteration number, w is the inertia weight, v_id is the velocity of the ith particle in dimension d, c1 and c2 are the acceleration coefficients for the cognitive and social components, respectively, and r1 and r2 are independent uniformly distributed random numbers between 0 and 1.
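For illustration, the update in (1) can be written as the following MATLAB sketch. This is not the authors' implementation; the names X, V, Pbest, gbest, Np, and D are assumptions, with positions, velocities, and personal bests stored as Np-by-D matrices and gbest as a 1-by-D row vector.

  % Minimal sketch of the PSO update (1); names are illustrative, not from the paper.
  % X, V, Pbest: Np-by-D matrices; gbest: 1-by-D row vector; w, c1, c2: scalars.
  for i = 1:Np
      r1 = rand(1, D);                              % uniform random numbers in [0, 1]
      r2 = rand(1, D);
      V(i,:) = w*V(i,:) + c1*r1.*(Pbest(i,:) - X(i,:)) ...
             + c2*r2.*(gbest - X(i,:));             % velocity update of (1)
      X(i,:) = X(i,:) + V(i,:);                     % position update of (1)
  end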

2.2. Iteration Particle Swarm Optimization (IPSO) Algorithm

Iteration Particle Swarm Optimization (IPSO) is a modified PSO method proposed by Lee and Chen [39] to improve the solution quality and computing time of the algorithm. In IPSO, three best values are used to update the velocity and position of the particles: pbest, gbest, and ibest. The definitions of pbest and gbest and the method of determining them are the same as in traditional PSO (discussed in Section 2.1). The new parameter, ibest, is defined as the best value of the fitness function achieved by any particle in the present iteration; in other words, ibest is selected from among the existing particles of the current population. In addition, the authors introduced a dynamic acceleration constant, c3, for ibest. The new velocity formula for the IPSO algorithm is [39]

  v_id^(t+1) = w v_id^t + c1 r1 (pbest_id^t - x_id^t) + c2 r2 (gbest_d^t - x_id^t) + c3 r3 (ibest_d^t - x_id^t),   (2)

where c3 is the acceleration constant that pulls each particle towards ibest. The value of c3 changes according to c1 and the number of iterations (t).
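As a hedged illustration (not Lee and Chen's code), the distinguishing IPSO step, locating the best particle of the present iteration (labeled ibest above) and adding the third attraction term of (2), might look as follows in MATLAB; fitness, X, Pbest, and gbest are illustrative names, and minimization is assumed.

  % Sketch of the extra IPSO term in (2); illustrative names, minimization assumed.
  [~, b] = min(fitness);                 % index of the best particle this iteration
  ibest  = X(b,:);                       % best position in the present iteration
  V(i,:) = w*V(i,:) + c1*rand(1,D).*(Pbest(i,:) - X(i,:)) ...
         + c2*rand(1,D).*(gbest - X(i,:)) ...
         + c3*rand(1,D).*(ibest - X(i,:));   % c3 varies with c1 and the iteration t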

2.3. Evolutionary Particle Swarm Optimization (EPSO) Algorithm

The Evolutionary Particle Swarm Optimization (EPSO) Algorithm introduced by Angeline [40] utilizes the hybridization concept by combining Evolutionary Programming (EP) with the traditional PSO. There are a few variants of EP optimization methods, one of which uses competition, sorting, and selection processes to choose the surviving particles and thereby speeds up the process of determining the optimum value. Implementing this concept in the PSO algorithm causes the particles in EPSO to move towards the optimal point faster than in the original PSO. Furthermore, the implementation of EP in PSO makes the process of finding pbest and gbest in each iteration different and simpler [41]: the EPSO algorithm obtains the pbest and gbest values directly after the selection process, based on the surviving particles. Thus, the pbest and gbest values can be easily determined, and the new velocity and position are updated using (1). A sketch of such a selection step is given below.
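The following MATLAB sketch shows an EP-style competition-and-selection step of the kind described above; the exact scheme of [40] may differ, and the opponent count k is an assumption.

  % EP-style competition, sorting, and selection (a sketch; [40] may differ in detail).
  Np = size(X, 1);  k = 5;                        % k randomly chosen opponents (assumed)
  wins = zeros(Np, 1);
  for i = 1:Np
      opp = randi(Np, 1, k);                      % random opponents for particle i
      wins(i) = sum(fitness(i) <= fitness(opp));  % minimization: lower fitness wins
  end
  [~, order] = sort(wins, 'descend');             % rank particles by number of wins
  surv = order(1:Np/2);                           % winning half survives
  X = X([surv; surv], :);                         % survivors replace the losers
  V = V([surv; surv], :);                         % pbest/gbest are then read from survivors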

3. A Novel Global Particle Swarm Optimization Algorithm

In this study, a novel Global Particle Swarm Optimization (GPSO) Algorithm is introduced. Certain modifications are made to the velocity formula of the traditional PSO algorithm so that the algorithm reaches the best (global) solution more quickly than the traditional PSO.

In the traditional PSO, the gbest with the better result is used in the updating process, while the previous gbest is terminated (replaced). This means that the gbest values achieved in previous iterations are never referred back to; in other words, there is no information sharing between the best values achieved in previous iterations and the current results. By using the "Experience" concept, the knowledge and information about what happened previously can be used to guide more accurate decisions. Therefore, in GPSO the gbest value at any dimension (variable) and at any iteration is randomly selected and used to update the velocity value; this variable is called gbest_rand.

Furthermore, the acceleration coefficient for the gbest_rand parameter (c3) is obtained from the average of the cognitive (c1) and social (c2) constant parameter values, that is, c3 = (c1 + c2)/2. Thus, the new velocity formula that updates the next position of the particles in GPSO is

  v_id^(t+1) = w v_id^t + c1 r1 (pbest_id^t - x_id^t) + c2 r2 (gbest_d^t - x_id^t) + c3 r3 (gbest_rand - x_id^t),   (3)

where the 4th term in the GPSO velocity equation is referred to as the Improvement Factor (IF).

Figure 1 illustrates an example of the GPSO process of determining the gbest_rand value. Suppose the optimization problem consists of 3 variables (x1, x2, and x3), the number of particles is 6 (the positions of the particles are represented by different colours), and the algorithm has reached the tth iteration (the current iteration). As in the PSO algorithm, the gbest value of the population at each iteration is recorded in order to update the particles' next positions. In this example, at the jth iteration the gbest value was achieved by the "blue" particle; therefore, the gbest values for each dimension (variable) at the jth iteration are gbest_1^j, gbest_2^j, and gbest_3^j. For the current iteration, the gbest value is achieved by the "light purple" particle; thus, the value can be presented as gbest_d^t, where d = 1, 2, and 3 (the number of dimensions).

However, in GPSO an additional parameter is needed for the updating process: the gbest_rand value. From the previous definition, gbest_rand is obtained from the gbest value that is randomly selected at any iteration and dimension. Suppose that, by random selection, the gbest value of the 2nd dimension (2nd variable) at the jth iteration is selected to be the gbest_rand value for the current iteration; thus, in this example, gbest_rand for the current iteration population is gbest_2^j.

From the above description, it can be noticed that the gbest_rand value will be different in each iteration. Unlike the gbest concept, where a comparison process is needed to determine the surviving value for the current iteration, the gbest_rand parameter does not require any comparison, and the possibility of having similar values in the next iteration is minuscule, especially for high dimensional problems. Therefore, the particles learn from a different "Experience" in each updating process. Furthermore, the Improvement Factor (IF) also helps the algorithm obtain a suitable velocity value for updating the particles' next positions. All these factors make GPSO superior to its original counterpart. Figure 2 shows the flow chart of the GPSO algorithm for solving high dimensional optimization problems, and Algorithm 1 shows the corresponding MATLAB code for the IF parameter.

for d = 1:D
  Ch_iter = fix(rand*t) + 1;        % randomly chosen iteration (1..t)
  Ch_part = fix(rand*D) + 1;        % randomly chosen dimension (1..D)
  while (Ch_part == d)              % re-select until a different dimension is obtained
    Ch_part = fix(rand*D) + 1;
  end
  IF(d) = c3*rand*(G(1, Ch_part, Ch_iter) - X(i,d));
end
V(i,d) = W(t)*V(i,d) + c1*rand*(pbest(i,d) - X(i,d)) ...
     + c2*rand*(G(1,d,t) - X(i,d)) + IF(d);     % applied for each dimension d
where N: number of iterations, t: current iteration, D: dimension,
i: particle's number, X: particle positions, pbest: personal best positions,
G: stored gbest values (G(1,:,t) is the current gbest), c3 = (c1 + c2)/2
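To show how Algorithm 1 fits into a complete run, a hedged end-to-end sketch is given below; the benchmark handle fobj, the initialization range, and the linearly decreasing inertia weight W(t) are assumptions for illustration, not the authors' exact settings.

  % Hedged sketch of a full GPSO run around Algorithm 1; names are illustrative.
  Np = 20; D = 50; N = 1000;                   % particles, dimensions, iterations
  c1 = 0.5; c2 = 0.5; c3 = (c1 + c2)/2;        % c3 is the IF coefficient (Section 3)
  X = rand(Np, D); V = zeros(Np, D);           % assumed initialization in [0, 1]
  Pbest = X; Pfit = inf(Np, 1);
  G = zeros(1, D, N); gfit = inf;              % gbest history used by gbest_rand
  for t = 1:N
      w = 0.9 - 0.5*(t/N);                     % assumed linearly decreasing inertia weight
      for i = 1:Np
          f = fobj(X(i,:));                    % fobj: benchmark function handle
          if f < Pfit(i), Pfit(i) = f; Pbest(i,:) = X(i,:); end
          if f < gfit,    gfit = f;  gbest = X(i,:);        end
      end
      G(1,:,t) = gbest;                        % record this iteration's gbest
      for i = 1:Np
          IF = zeros(1, D);
          for d = 1:D                          % Improvement Factor, as in Algorithm 1
              Ch_iter = fix(rand*t) + 1;
              Ch_part = fix(rand*D) + 1;
              while Ch_part == d, Ch_part = fix(rand*D) + 1; end
              IF(d) = c3*rand*(G(1, Ch_part, Ch_iter) - X(i,d));
          end
          V(i,:) = w*V(i,:) + c1*rand(1,D).*(Pbest(i,:) - X(i,:)) ...
                 + c2*rand(1,D).*(gbest - X(i,:)) + IF;
          X(i,:) = X(i,:) + V(i,:);
      end
  end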

4. Results and Discussion

Twelve well-known classical benchmark functions, consisting of unimodal, multimodal, separable, and nonseparable types, are used to evaluate the performance of the PSO, IPSO, EPSO, and GPSO algorithms, as shown in Table 1.

In order to have a fair comparison, all of the PSO methods in the analysis use the same parameter values. The population size is set to 20 particles, with the cognitive and social components (c1 and c2) set to 0.5. Each algorithm runs until the maximum iteration count is reached (max. iter. = 1000). Furthermore, the performance of these PSO variants is tested in 4 different dimensions, namely, 20, 30, 50, and 100, and each test set is executed 50 times with new random initial populations.
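As an illustration of this evaluation protocol, the sketch below runs one algorithm 50 times on a single benchmark and reports the four statistics; run_gpso is a hypothetical driver (for example, wrapping the loop sketched after Algorithm 1), and the sphere function is one example of the unimodal, separable class in Table 1.

  % Hedged sketch of the evaluation protocol: 50 independent runs per function.
  % run_gpso is a hypothetical driver, not the authors' code.
  sphere  = @(x) sum(x.^2);                  % example unimodal, separable benchmark
  trials  = 50;
  results = zeros(trials, 1);
  for r = 1:trials
      results(r) = run_gpso(sphere, 50, 20, 1000, 0.5, 0.5);  % D, Np, iters, c1, c2
  end
  fprintf('best %g  worst %g  mean %g  SD %g\n', ...
          min(results), max(results), mean(results), std(results));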

4.1. Results from Benchmark Simulation

Tables 2, 3, 4, and 5 show the best, worst, mean, and standard deviation (SD) values achieved by PSO, IPSO, EPSO, and GPSO on all 12 benchmark functions for the four dimension sizes, respectively. The standard optimal value for all functions in this analysis is zero, except for function 10 (f10), whose standard optimal value is −78.33. Since the dimension of each test function is very high (20, 30, 50, and 100), the PSO, EPSO, and IPSO algorithms could not reach the global optimal value in the manner that GPSO could.

Figure 3 shows examples of the best results achieved by all of the PSO methods on different functions and dimensions. From the results, it can be seen that the concept of "Experience" in GPSO helps the algorithm reach the lowest convergence value compared to the other PSOs. Moreover, GPSO also reaches the global value faster than the others, whether the global minimum value is zero (Figures 3(a) and 3(b)) or not (Figure 3(c)). For example, in Figure 3(c), EPSO is the slowest algorithm to bring this high dimensional problem near its convergence solution (nearly 200 iterations), compared to PSO and IPSO, whilst GPSO requires only a small number of iterations (12) to do the same. This makes GPSO the fastest algorithm in terms of obtaining a convergence solution in high dimensional problems, with the lowest optimal results compared to the other PSO methods.

On top of that, the limit of 1000 iterations is not the reason that the PSO, EPSO, and IPSO methods fail to reach the best optimal value. Even when the number of iterations is increased to 20000, these algorithms still give the same optimal values as at 1000 iterations. This means that, for high dimensional problems, these three algorithms tend to be trapped at a local optimal solution. Therefore, the gbest_rand parameter not only gives a faster convergence solution but also helps the algorithm avoid being trapped at a local optimal solution. Thus, from the "best" results achieved by all PSO methods in Tables 2-5, GPSO provides optimal results very close or equal to the standard minimum value, regardless of the dimension size of the problem.

Furthermore, GPSO provides consistent results across all 50 trials. This can be seen from the standard deviation (SD) values, where the SD values for GPSO are the lowest compared to the results of the other three PSOs (Tables 2-5, column "SD"). Since the SD value indicates how close the results obtained from the 50 different trials are to their average (mean) value, a large SD value shows that the algorithm could not provide consistent results in the analysis. This occurs because those algorithms are frequently trapped in local results, especially for high dimensional problems. Figure 4 shows the SD values for all functions when the dimension is 50.

It can be clearly seen that the SD values obtained by PSO, EPSO, and IPSO on several of the functions are very high compared to the GPSO results. From Table 4, there are large differences between the "best" and the "worst" values achieved by PSO, EPSO, and IPSO at dimension 50, and these large differences inflate the SD values. However, for some functions, the SD results for all the PSO methods are smaller than for the other functions; therefore, PSO, EPSO, and IPSO can still give consistent results on some of the high dimensional problems, although not as consistently as GPSO. Overall, since the SD value for GPSO on all of the test functions and in all of the dimensions is very small (approximately zero), it can be concluded that GPSO provides consistent results on any high dimensional problem, as shown in Tables 2-5 (columns "best" and "worst").

In summary, the GPSO algorithm can reach the global optimal value within 1000 iterations in all cases of the high dimensional test functions. In addition, GPSO has a very low standard deviation value compared to PSO, EPSO, and IPSO. Therefore, GPSO shows very high efficiency in solving high dimensional numerical optimization problems.

4.2. Impacts of Acceleration Coefficient Selection for GPSO Algorithm

As shown in the previous sections, the GPSO algorithm demonstrates the best performance in solving high dimensional numerical optimization problems. However, based on the velocity formula in (3), the setting of the acceleration coefficients also influences GPSO's performance. Thus, an investigation of the suitable range of the acceleration coefficients for solving high dimensional problems is needed. The dimension of all the benchmark functions used in this analysis is 50, the maximum iteration number remains 1000, and the average results and the standard deviation values from 50 runs are presented in Table 6.

From Table 6, the GPSO algorithm provides average results near the optimal solution when the acceleration coefficient value c (with c1 = c2 = c and c3 = (c1 + c2)/2) is between 0.2 and 1.5. For the cases c = 0.2, c = 0.5, and c = 1.0, almost all of the average results lie between 10^−11 and 10^−3; since the standard optimal value for these functions is zero, GPSO gives average results that are very close to the standard optimal value. For the acceleration coefficient c = 1.5, the functions give optimal values from 10^−5 to 10^−1. For the f10 function, the average results given by c = 0.2 up to c = 1.5 are similar and equal to the standard optimal result (−78.33). However, when the acceleration coefficients are set to 2.0, the results of GPSO are not as good as those obtained with acceleration values of at most 1.5: some of the average values obtained by GPSO are in the tens, despite the fact that the optimal results should be zero. A sketch of such a coefficient sweep is given below.
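The sweep can be reproduced in outline as follows; this is a hedged sketch, with the grid of c values inferred from this section, run_gpso the same hypothetical driver used earlier, fobj a benchmark function handle, and c1 = c2 = c assumed.

  % Hedged sketch of the acceleration-coefficient sweep of Table 6 (c1 = c2 = c assumed).
  c_values = [0.2 0.5 1.0 1.5 2.0];           % grid inferred from Section 4.2
  res = zeros(50, 1);
  for c = c_values
      for r = 1:50
          res(r) = run_gpso(fobj, 50, 20, 1000, c, c);   % dimension 50, 1000 iterations
      end
      fprintf('c = %.1f: mean %g, SD %g\n', c, mean(res), std(res));
  end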

Thus, in order to optimize GPSO's performance in solving high dimensional problems, the acceleration coefficient set by the user must be less than 1.5.

Figure 5 shows the standard deviation values for all of the functions when the acceleration coefficient varies from 0.2 to 2.0. From the results, when the acceleration coefficient values for GPSO are between 0.2 and 1.0 (0.2, 0.5, and 1.0), the SD value of the algorithm is equal to zero for all functions.

In other words, GPSO is capable of providing consistent results when the acceleration coefficient values are within this range. However, when c = 1.5, some of the functions have higher SD values, in one case near 1.0. The worst condition occurs when c = 2.0, where almost all functions have SD values higher than 0, except for functions 3 and 10. Therefore, considering both the average values over the 50 test sets and the SD results, it can be concluded that the acceleration coefficient for the GPSO algorithm must be less than 1.0 in order to realize the best performance in solving high dimensional problems.

5. Conclusion

This paper proposed a new Particle Swarm Optimization algorithm, known as Global Particle Swarm Optimization (GPSO), to determine the global optimal value for high dimensional optimization problems. The performance of GPSO is tested on 12 classical high dimensional benchmark test functions and compared with 3 other PSO methods, namely, the original Particle Swarm Optimization (PSO), Evolutionary Particle Swarm Optimization (EPSO), and Iteration Particle Swarm Optimization (IPSO).

The simulation results showed that the GPSO method performs well in solving high dimensional problems. The implementation of gbest_rand in GPSO not only helps the algorithm reach the global optimal results within a few iterations but also prevents GPSO from being trapped in local optimal results. Furthermore, the GPSO algorithm gives consistent results even with different initial values, as shown by the smallest standard deviation obtained over the 50 test sets. From the analysis of varying the acceleration coefficient, the suitable value for the GPSO algorithm must be less than 1.0 in order to realize the best performance in solving high dimensional problems.

Therefore, from all the results shown by the GPSO algorithm, it can be concluded that GPSO is a superior method for solving high dimensional numerical optimization problems compared to the PSO, EPSO, and IPSO algorithms.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.