Journal of Applied Mathematics


Research Article | Open Access

Volume 2014 | Article ID 329193 | 14 pages | https://doi.org/10.1155/2014/329193

Global Particle Swarm Optimization for High Dimension Numerical Functions Analysis

Academic Editor: Marcelo A. Savi
Received 21 Oct 2013
Revised 18 Dec 2013
Accepted 10 Jan 2014
Published 25 Feb 2014

Abstract

The Particle Swarm Optimization (PSO) Algorithm is a popular optimization method that is widely used in various applications due to its simplicity and capability in obtaining optimal results. However, ordinary PSOs may become trapped at a local optimum, especially in high dimensional problems. To overcome this problem, an efficient Global Particle Swarm Optimization (GPSO) algorithm is proposed in this paper, based on a new updating strategy for the particle position. This is done by sharing information about particle positions between the dimensions (variables) at any iteration. The strategy enhances the exploration capability of the GPSO algorithm, helping it find the global optimum and avoid being trapped at a local optimum. The proposed GPSO algorithm is validated on 12 benchmark mathematical functions and compared with three different types of PSO techniques. The performance of the algorithms is measured by solution quality, convergence characteristics, and robustness over 50 trials. The simulation results show that the new updating strategy in GPSO yields a better optimum solution with the smallest standard deviation compared to the other techniques. It can be concluded that the proposed GPSO method is a superior technique for solving high dimensional numerical function optimization problems.

1. Introduction

Nowadays, there are various types of modern optimization techniques, such as Evolutionary Programming (EP), Artificial Immune Systems (AIS), Ant Colony Optimization (ACO), and Artificial Bee Colony (ABC). All of them can be regarded as heuristic optimization methods, due to the randomization involved in their respective initial steps. Despite the use of randomized values, the mutation process and other steps in these algorithms render them capable of solving both linear and nonlinear problems. From the literature, it can be seen that optimization techniques are applicable in many fields. Among them, however, PSO has become very popular, due to its simplicity and its wide use in manufacturing and robotics [1–3], electrical power systems [4–8], engineering [9–11], and other areas [12–18].

There is no single optimization algorithm that can reach the global solution for every optimization problem. Some algorithms can only provide the best solution to a particular problem, but not to others [19]. The classical PSO, for instance, sometimes fails to find the global optimum when dealing with high dimensional problems. This failure occurs because the particles become trapped at a local optimum (a premature solution).

In order to address such problems, several variants of PSO have been proposed to enhance the exploration and exploitation capability and convergence speed, and to avoid being trapped at a local optimum [20–26]. The modification is made on the PSO algorithm itself or by hybridizing the PSO with other optimization techniques. A new adaptive weight technique proposed by Nickabadi et al. [27] is among the recent works involving modification of the PSO algorithm. There are three main constant parameters that affect the performance of the PSO, namely, the inertia weight ($w$), the cognitive constant ($c_1$), and the social constant ($c_2$); changes in these parameters result in different performances of the algorithm. In the traditional PSO, these parameters are unspecified and need to be adjusted several times in order to obtain suitable values [28, 29]. This implies that the user may choose inaccurate initial settings ($w$, $c_1$, and $c_2$), which will result in the PSO not converging. Therefore, time-varying inertia weight techniques for PSO have been introduced in [30–34], as well as adaptive inertia weight methods in [35–38]. In [27], the authors used a new adaptive inertia weight approach, which takes the success rate of the swarm as a feedback parameter to determine the particles' situation in the search space and thereby enhance the performance of PSO. The results showed that the proposed method produced superior results compared to other weight adjustment techniques. Although the results in that study were better, the method requires a large number of iterations (on the order of $10^5$) to reach the global optimum compared to other techniques. Thus, for larger dimensions, adjusting the inertia weight parameter alone might not be sufficient for the algorithm to reach the global optimum.
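For illustration, one of the common time-varying schemes, the linearly decreasing inertia weight, can be sketched in MATLAB as follows. This is a generic example, not the specific adaptive method of [27]; the bounds 0.9 and 0.4 are typical literature choices, not parameters taken from this paper.

% Linearly decreasing inertia weight (generic sketch, not the scheme of [27]).
% w starts high to favour exploration and decays to favour exploitation.
w_max = 0.9; w_min = 0.4;          % common literature values (assumption)
max_iter = 1000;
w = zeros(1, max_iter);
for t = 1:max_iter
    w(t) = w_max - (w_max - w_min)*(t - 1)/(max_iter - 1);
end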

In this paper, a novel Global Particle Swarm Optimization (GPSO) is proposed to improve and enhance the convergence speed of the classical PSO. The working principle of the proposed algorithm is based on a newly proposed updating parameter in the velocity equation, introduced to enrich the capability of the algorithm in finding the global solution. This new updating strategy can explore more possible solutions in the search space. The performance of GPSO is compared with three other optimization methods: the traditional PSO, the Iteration Particle Swarm Optimization (IPSO) method [39], and the Evolutionary Particle Swarm Optimization (EPSO) method [40]. The performance of all these PSO types is tested on 12 common mathematical test functions in dimensions of 20, 30, 50, and 100. The maximum number of iterations for the algorithms was set to 1000. The remainder of this paper is organized in the following fashion. In Section 2, the basic operation of PSO, IPSO, and EPSO and their respective updating strategies for obtaining optimal solutions are briefly explained. The details of the proposed GPSO algorithm and the new updating strategy are described in Section 3. The simulation results comparing GPSO with the other PSO algorithms and the influence of parameter settings on GPSO are presented in Section 4. Finally, Section 5 concludes this paper.

2. Review of Particle Swarm Optimization and Its Variants

2.1. Particle Swarm Optimization (PSO) Algorithm

The original Particle Swarm Optimization (PSO) Algorithm proposed by Kennedy and Eberhart [28] in 1995 is adapted from the food searching behavior of birds and fish (particles). All these particles move within the search space towards the location of food (the optimal location) at a specific speed and continuously change their respective positions to arrive at the destination. The position and velocity of the $i$th particle in the $D$-dimensional search space are represented as $X_i = (x_{i,1}, x_{i,2}, \ldots, x_{i,D})$ and $V_i = (v_{i,1}, v_{i,2}, \ldots, v_{i,D})$, respectively. Their movement is guided by their own experience ($pbest$) and by the experience of the other particles in the group ($gbest$). The best position visited by the $i$th particle up to iteration $t$ is recorded as $pbest_i$, while the best position among all $pbest_i$ in the group is defined as $gbest$. Each particle updates its velocity ($V$) and position ($X$) in the following manner:

$$V_i^{t+1} = w V_i^t + c_1 r_1 (pbest_i - X_i^t) + c_2 r_2 (gbest - X_i^t),$$
$$X_i^{t+1} = X_i^t + V_i^{t+1}, \tag{1}$$

where $t$ is the number of iterations, $w$ is the inertia weight, $V_i$ is the velocity of the $i$th particle, $c_1$ and $c_2$ are the acceleration coefficients for the cognitive and social components, respectively, and $r_1$ and $r_2$ are independent, uniformly distributed random numbers between 0 and 1.
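A minimal MATLAB sketch of the update in (1) for a single particle is given below; pbest and gbest are stand-ins for the best positions that a full implementation would track across the whole swarm.

% One PSO velocity/position update, eq. (1), for a single particle (sketch).
D  = 30;                          % problem dimension
w  = 0.5; c1 = 0.5; c2 = 0.5;     % inertia and acceleration coefficients
x  = rand(1, D); v = zeros(1, D); % current position and velocity
pbest = x; gbest = x;             % stand-ins for the tracked best positions
r1 = rand; r2 = rand;             % uniform random numbers in [0, 1]
v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x);  % velocity update
x = x + v;                                        % position update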

2.2. Iteration Particle Swarm Optimization (IPSO) Algorithm

Iteration Particle Swarm Optimization (IPSO) is a modified PSO method proposed by Lee and Chen [39] to improve the solution quality and computing time of the algorithm. In IPSO, three best values are used to update the velocity and position of the particles, known as $pbest$, $gbest$, and $ibest$. The definitions of $pbest$ and $gbest$ and the method to determine them in IPSO are the same as in the traditional PSO (discussed in Section 2.1). Meanwhile, the new parameter $ibest$ is defined as the best value of the fitness function that has been achieved by any particle in the present iteration. In other words, the $ibest$ value is selected from among the existing particles of the current population. In addition, the authors also introduced a dynamic acceleration constant, $c_3$, for the $ibest$ term in IPSO. Therefore, the new velocity formula for the IPSO algorithm is [39]

$$V_i^{t+1} = w V_i^t + c_1 r_1 (pbest_i - X_i^t) + c_2 r_2 (gbest - X_i^t) + c_3 r_3 (ibest - X_i^t), \tag{2}$$

where $c_3$ is the acceleration constant that pulls each particle towards $ibest$ and $r_3$ is a uniformly distributed random number between 0 and 1. The value of $c_3$ changes according to $c_1$ and the number of iterations ($t$).
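The sketch below illustrates one IPSO-style update. Since the exact dynamic expression for $c_3$ from [39] is not reproduced in this paper, the exponential form used here is only an assumed placeholder; the sphere function stands in for the real objective.

% IPSO-style velocity update for one particle (illustrative sketch).
% The exact dynamic-c3 formula of [39] is not reproduced here; the
% exponential form below is an assumption used only for illustration.
Np = 20; D = 30; t = 10;          % swarm size, dimension, current iteration
w = 0.5; c1 = 0.5; c2 = 0.5;
X = rand(Np, D);                  % swarm positions
fitness = sum(X.^2, 2);           % sphere function as a stand-in objective
[~, k] = min(fitness);
ibest = X(k, :);                  % best particle of the present iteration
c3 = c1*(1 - exp(-c1*t));         % assumed dynamic acceleration constant
x = X(1, :); v = zeros(1, D);     % first particle, for illustration
pbest = x; gbest = ibest;         % stand-ins for the tracked bests
v = w*v + c1*rand*(pbest - x) + c2*rand*(gbest - x) + c3*rand*(ibest - x);
x = x + v;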

2.3. Evolutionary Particle Swarm Optimization (EPSO) Algorithm

The Evolutionary Particle Swarm Optimization (EPSO) Algorithm introduced by Angeline [40] utilizes the hybridization concept by combining Evolutionary Programming (EP) with the traditional PSO. Among the mechanisms available in EP, the competition, sorting, and selection processes are used to choose the surviving particles, which makes the process of determining the optimum value quicker. By implementing this concept in the PSO algorithm, the particles in EPSO move towards the optimal point faster than their counterparts in the traditional PSO. Furthermore, the implementation of EP in PSO also makes the process of finding $pbest$ and $gbest$ for each iteration different and simpler [41]. The EPSO algorithm can obtain the $pbest$ and $gbest$ values immediately after the selection process, based on the surviving particles. Thus, the $pbest$ and $gbest$ values can be easily determined, and the new velocity and position are updated using (1).
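The competition-and-selection step can be sketched as the tournament below. This is a generic EP-style illustration under assumed details (each particle meets m random opponents and the winning half survives), not Angeline's exact procedure from [40].

% EP-style competition and selection (generic sketch, not the exact
% procedure of [40]).
Np = 20; D = 30; m = 5;            % swarm size, dimension, opponents
X = rand(Np, D);
fitness = sum(X.^2, 2);            % sphere function as a stand-in objective
wins = zeros(Np, 1);
for i = 1:Np
    opp = randi(Np, m, 1);         % m randomly chosen opponents
    wins(i) = sum(fitness(i) <= fitness(opp));
end
[~, order] = sort(wins, 'descend');
survivors = X(order(1:Np/2), :);   % the winning half survives
X = [survivors; survivors];        % survivors refill the population
                                   % (a full EP step would mutate the copies)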

3. A Novel Global Particle Swarm Optimization Algorithm

In this study, a novel Global Particle Swarm Optimization (GPSO) Algorithm is introduced. Certain modifications are made to the velocity formula of the traditional PSO algorithm so that it reaches the best (global) solution quicker than the traditional PSO.

In the traditional PSO, the $pbest$ with the better result is used in the updating process, while the other is discarded (replaced). This means that the $pbest$ values achieved at previous iterations are never referred back to. In other words, there is no information sharing between the best values achieved at previous iterations and the current results. By using the "Experience" concept, the knowledge and information about what has previously happened can be used as a guide towards more accurate decisions. Therefore, a best value at any dimension (variable) and at any iteration is randomly selected and used to update the velocity value; this variable is called the random best and is denoted here as $pbest_{rand}$.

Furthermore, the acceleration coefficient for the $pbest_{rand}$ parameter, $c_3$, is obtained from the average of the cognitive ($c_1$) and social ($c_2$) constant values, that is, $c_3 = (c_1 + c_2)/2$; for example, with the settings $c_1 = c_2 = 0.5$ used in Section 4, this gives $c_3 = 0.5$. Thus, the new velocity formula that updates the next position of the particles in GPSO is

$$V_i^{t+1} = w V_i^t + c_1 r_1 (pbest_i - X_i^t) + c_2 r_2 (gbest - X_i^t) + c_3 r_3 (pbest_{rand} - X_i^t), \tag{3}$$

where the 4th term in the GPSO velocity equation is referred to as the Improvement Factor (IF).

Figure 1 illustrates an example of the GPSO process for determining the $pbest_{rand}$ value. Suppose the optimization problem consists of 3 variables ($x_1$, $x_2$, and $x_3$), the number of particles is 6 (the positions of the particles are represented by different colours), and the search has reached the $t$th iteration (the current iteration). Similar to the PSO algorithm, the best value for the population at each iteration is recorded in order to update the next particle positions. In this example, the best value of the population at an earlier iteration was achieved by the "blue" particle; therefore, the recorded best values for each dimension (variable) at that iteration are the blue particle's values of $x_1$, $x_2$, and $x_3$. For the current iteration, the best value is achieved by the "light purple" particle; thus, its value can be presented as $x_d^t$, where $d$ = 1, 2, and 3 (the number of dimensions).

However, in the GPSO, an additional parameter is needed for the updating process, namely, the $pbest_{rand}$ value. From the previous definition, the $pbest_{rand}$ value is obtained from the recorded best value that is randomly selected at any iteration and dimension. By using the Gaussian distributed random function, the best value from the 2nd dimension (2nd variable) at the earlier iteration is selected to be the $pbest_{rand}$ value for the current iteration. Thus, in this example, the $pbest_{rand}$ for the current iteration population is the value of the 2nd variable achieved by the best particle of that earlier iteration.

From the above description, it can be noticed that the $pbest_{rand}$ value will be different at each iteration. Unlike the $pbest$ concept, where a comparison process is needed to determine the surviving $pbest$ value for the current iteration, the $pbest_{rand}$ parameter does not require any comparison, and the possibility of selecting the same value in the next iteration is minuscule, especially for high dimensional problems. Therefore, the particles learn from a different "Experience" in each updating process. Furthermore, the Improvement Factor (IF) also helps the algorithm obtain a suitable velocity value for updating the next particle positions. All these factors make the GPSO superior to its original counterpart. Figure 2 and Algorithm 1 show the flow chart of the GPSO algorithm for solving high dimensional optimization problems and the corresponding MATLAB code for the IF parameter.

for d = 1 : D
    Ch_iter = fix(rand*t) + 1;    % randomly chosen iteration, 1..t (assumed range)
    Ch_part = fix(rand*D) + 1;    % randomly chosen dimension, 1..D
    while (Ch_part == d)          % the chosen dimension must differ from d
        Ch_part = fix(rand*D) + 1;
    end
    % Improvement Factor: attraction towards a randomly selected stored best.
    % Bhist is an assumed name for the (1, dimension, iteration) history array.
    IF(d) = c3*rand*(Bhist(1, Ch_part, Ch_iter) - X(i, d));
end
V(i, :) = W(t)*V(i, :) + c1*rand*(P(i, :) - X(i, :)) ...
        + c2*rand*(G - X(i, :)) + IF;

where N: total number of iterations, t: current iteration, D: dimension,
i: particle index, X: particle positions, P: the pbest values, G: the current
gbest result (a 1-by-D row vector), and Bhist: the stored best position of
each iteration.
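For the sampling in Algorithm 1 to work, the best position of every iteration must be archived as the optimization runs. The following is a minimal bookkeeping sketch under the assumed (1, dimension, iteration) layout; the array name Bhist and its layout are inferred from the listing, not taken from the paper.

% Bookkeeping sketch for the history array sampled by the IF term above.
N = 1000; D = 30;
Bhist = zeros(1, D, N);            % assumed layout: (1, dimension, iteration)
for t = 1:N
    % ... evaluate the swarm and determine this iteration's best position ...
    best_x = rand(1, D);           % stand-in for the real iteration-best position
    Bhist(1, :, t) = best_x;       % archive it so the IF term can sample it later
end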

4. Results and Discussion

Twelve well-known classical benchmark functions, consisting of unimodal, multimodal, separable, and nonseparable types, are used to evaluate the performance of the PSO, IPSO, EPSO, and GPSO algorithms, as shown in Table 1.


[Table 1: the twelve benchmark test functions with their search ranges.]


In order to have a fair comparison, all of the PSO methods in the analysis utilized the same parameter values. The population size is set to 20 particles, with the cognitive and social components ($c_1$ and $c_2$) set to 0.5. Each algorithm runs until the maximum iteration count (max. iter. = 1000) is reached. Furthermore, the performance of these PSOs is also tested in 4 different dimensions, namely, 20, 30, 50, and 100, and each test set is executed 50 times with new random population values.
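Under assumed variable names, these settings translate into the following MATLAB constants (a sketch only):

% Common parameter settings for all four PSO variants in this comparison.
Np = 20;                 % population size
c1 = 0.5; c2 = 0.5;      % cognitive and social constants
c3 = (c1 + c2)/2;        % GPSO only: average of c1 and c2
max_iter = 1000;         % maximum number of iterations
dims = [20 30 50 100];   % tested problem dimensions
trials = 50;             % independent runs per test case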

4.1. Results from Benchmark Simulation

Tables 2, 3, 4, and 5 show the best, worst, mean, and standard deviation (SD) values achieved by the PSO, IPSO, EPSO, and GPSO for all 12 benchmark functions in the four different dimensional sizes. The standard optimal value for all functions in this analysis is zero, except for function 10, where the standard optimal value is −78.33. Since the dimension of each test function is very high (20, 30, 50, and 100), the PSO, EPSO, and IPSO algorithms could not reach the global optimal value in the same manner that the GPSO could.
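The four statistics reported for each algorithm can be computed from the 50 recorded final fitness values, for example as below (the data here are placeholders, not results from the paper):

% Computing the statistics reported in Tables 2-5 from 50 recorded runs.
trials = 50;
final_fitness = rand(trials, 1);   % stand-in for the 50 recorded best values
best_val  = min(final_fitness);
worst_val = max(final_fitness);
mean_val  = mean(final_fitness);
sd_val    = std(final_fitness);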


[Table 2: best, worst, mean, and SD values obtained by PSO, IPSO, EPSO, and GPSO on the first group of three benchmark functions at dimensions 20, 30, 50, and 100; numeric entries not preserved.]

[Table 3: corresponding results for the next group of benchmark functions; the table is truncated at this point in the source.]