Journal of Applied Mathematics
Volume 2014, Article ID 329193, 14 pages
http://dx.doi.org/10.1155/2014/329193
Research Article

Global Particle Swarm Optimization for High Dimension Numerical Functions Analysis

1Faculty of Electrical Engineering, Universiti Teknologi Malaysia (UTM), 81310 Johor Bahru, Johor, Malaysia
2Faculty of Engineering, University of Malaya, 50603 Kuala Lumpur, Malaysia
3Faculty of Electrical and Electronics Engineering, Universiti Tun Hussein Onn Malaysia, 86400 Batu Pahat, Johor, Malaysia

Received 21 October 2013; Revised 18 December 2013; Accepted 10 January 2014; Published 25 February 2014

Academic Editor: Marcelo A. Savi

Copyright © 2014 J. J. Jamian et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The Particle Swarm Optimization (PSO) Algorithm is a popular optimization method that is widely used in various applications, due to its simplicity and capability in obtaining optimal results. However, ordinary PSOs may be trapped at a local optimal point, especially in high dimensional problems. To overcome this problem, an efficient Global Particle Swarm Optimization (GPSO) algorithm is proposed in this paper, based on a new update strategy for the particle position. This is done by sharing information of particle positions between the dimensions (variables) at any iteration. The strategy enhances the exploration capability of the GPSO algorithm in determining the global optimum solution and avoids traps at local optima. The proposed GPSO algorithm is validated on 12 benchmark mathematical functions and compared with three different types of PSO techniques. The performance of the algorithm is measured by the solutions' quality, convergence characteristics, and robustness over 50 trials. The simulation results showed that the new update strategy in GPSO helps realize a better optimum solution with the smallest standard deviation value compared to the other techniques. It can be concluded that the proposed GPSO method is a superior technique for solving high dimensional numerical function optimization problems.

1. Introduction

Nowadays, there are various types of modern optimization techniques, such as Evolutionary Programming (EP), Artificial Immune System (AIS), Ant Colony Optimization (ACO), and Artificial Bee Colony (ABC). All of them can be regarded as heuristic optimization methods, due to the randomization involved in their respective initial steps. Despite the usage of randomized values, the mutation process and other steps in the algorithm render the optimization method capable of solving both linear and nonlinear problems. From the literature, it can be seen that optimization techniques are applicable in many fields. Among the optimization methods, however, PSO has become very popular, due to its simplicity and its successful application in manufacturing and robotics [1–3], electrical power systems [4–8], engineering [9–11], and other areas [12–18].

There is no single optimization algorithm that can reach the global solution for every optimization problem. Some algorithms can provide the best solution only for particular problems, but not for others [19]. The classical PSO, for instance, sometimes fails to find the global optimum solution when dealing with high dimensional problems. This occurs because the particles become trapped at a local optimum (premature convergence).

In order to address such problems, several variants of PSO have been proposed to enhance the exploration and exploitation capabilities and the convergence speed and to avoid being trapped at the local optimum [20–26]. The modification is made either on the PSO algorithm itself or by hybridizing the PSO with other optimization techniques. A new adaptive weight technique proposed by Nickabadi et al. [27] is among the recent works involving modification of the PSO algorithm. There are 3 main constant parameters that affect the performance of the PSO, namely, the inertia weight (w), the cognitive constant (c1), and the social constant (c2); changing these parameters results in different performance of the algorithm. In the traditional PSO, these parameters are unspecified and need to be adjusted several times in order to obtain suitable values [28, 29]. This implies that the user may choose inaccurate initial settings (w, c1, and c2), which can prevent the PSO from converging. Therefore, time-varying inertia weight techniques for PSO have been introduced in [30–34], as well as adaptive inertia weight methods in [35–38]. In [27], the authors used a new adaptive inertia weight approach, which takes the success rate of the swarm as a feedback parameter indicating the particles' situation in the search space, to enhance the performance of PSO. The results showed that the proposed method produced superior results compared to other weight adjustment techniques. Although the results in that study were better, it required a large number of iterations (on the order of 10^5) before achieving the global optimal value compared to other techniques. Thus, for larger dimensions, adjusting the inertia weight parameter alone might not be sufficient for the algorithm to realize the global optimal value.
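The success-rate feedback idea of [27] can be sketched in a few lines; the linear mapping and the bounds w_min and w_max below are our reading of that approach, given as an illustrative assumption rather than code from this paper:

```python
def adaptive_inertia_weight(success_rate, w_min=0.4, w_max=0.9):
    """Map the swarm's success rate (fraction of particles that improved
    their personal best this iteration) linearly onto [w_min, w_max].

    A high success rate suggests the swarm is still progressing, so a
    larger inertia weight keeps it exploring; a low rate shrinks the
    weight to favour exploitation. Bounds are illustrative assumptions."""
    return w_min + (w_max - w_min) * success_rate
```

When no particle improves, the weight falls to w_min; when every particle improves, it rises to w_max.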

In this paper, a novel Global Particle Swarm Optimization (GPSO) is proposed to improve the convergence speed of the classical PSO. The working principle of the proposed algorithm is based on a newly proposed update parameter, the random best (Rbest), which enriches the capability of the algorithm in finding a global solution. This new updating strategy allows more possible solutions in the search space to be explored. The performance of GPSO is compared with three other optimization methods: the traditional PSO, the Iteration Particle Swarm Optimization (IPSO) method [39], and the Evolutionary Particle Swarm Optimization (EPSO) method [40]. All of these PSO variants are tested on 12 common mathematical test functions with dimensions of 20, 30, 50, and 100. The maximum number of iterations for the algorithms is set to 1000. The remainder of this paper is organized as follows. In Section 2, the basic operation of PSO, IPSO, and EPSO and their respective update strategies for obtaining optimal solutions are briefly explained. The details of the proposed GPSO algorithm and the new updating strategy are described in Section 3. The simulation results comparing GPSO with the other PSO algorithms and the influence of parameter settings on GPSO are presented in Section 4. Finally, Section 5 concludes this paper.

2. Review of Particle Swarm Optimization and Its Variants

2.1. Particle Swarm Optimization (PSO) Algorithm

The original Particle Swarm Optimization (PSO) Algorithm, proposed by Kennedy and Eberhart [28] in 1995, is adapted from the food searching behavior of birds and fish (particles). All of these particles move within the search space towards the location of food (the optimal location) at a specific speed, continuously changing their respective positions to arrive at the destination. The position and velocity of the ith particle in the d-dimensional search space are represented as x_i = (x_i1, x_i2, ..., x_id) and v_i = (v_i1, v_i2, ..., v_id), respectively. Their movement is guided by their own experience (Pbest) and by the experience of the other particles in the group (Gbest). The best position visited by the ith particle up to iteration k is recorded as Pbest_i, while the best position among all the Pbest in the group is defined as Gbest. Each particle updates its velocity (v) and position (x) in the following manner:

v_i(k+1) = w v_i(k) + c1 r1 (Pbest_i - x_i(k)) + c2 r2 (Gbest - x_i(k)),
x_i(k+1) = x_i(k) + v_i(k+1),  (1)

where k is the iteration number, w is the inertia weight, v_i is the velocity of the ith particle, c1 and c2 are the acceleration coefficients for the cognitive and social components, respectively, and r1 and r2 are independent, uniformly distributed random numbers between 0 and 1.
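As an illustration, the classical velocity-and-position update can be written as a short Python sketch for one particle (the function name and the plain-list representation are ours, not the paper's):

```python
import random

def pso_step(x, v, pbest, gbest, w=0.5, c1=0.5, c2=0.5, rng=random.random):
    """One velocity-and-position update for a single particle.

    x, v, pbest are lists of length d holding the particle's position,
    velocity, and personal best; gbest is the swarm's best position.
    Fresh random numbers r1, r2 are drawn per dimension."""
    new_x, new_v = [], []
    for j in range(len(x)):
        r1, r2 = rng(), rng()
        vj = (w * v[j]
              + c1 * r1 * (pbest[j] - x[j])    # cognitive pull
              + c2 * r2 * (gbest[j] - x[j]))   # social pull
        new_v.append(vj)
        new_x.append(x[j] + vj)
    return new_x, new_v
```

Passing a deterministic rng makes the update reproducible, which is convenient for testing.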

2.2. Iteration Particle Swarm Optimization (IPSO) Algorithm

Iteration Particle Swarm Optimization (IPSO) is a modified PSO method proposed by Lee and Chen [39] to improve the solution quality and computing time of the algorithm. In IPSO, three best values are used to update the velocity and position of the particles, known as Pbest, Gbest, and Ibest. The definitions of, and the method for determining, the Pbest and Gbest values in IPSO are the same as in the traditional PSO (discussed in Section 2.1). Meanwhile, the new parameter Ibest is defined as the best value of the fitness function achieved by any particle in the present iteration; in other words, Ibest is selected from among the existing particles of the current population. In addition, the authors also introduced a dynamic acceleration coefficient, c3, for Ibest. The new velocity formula for the IPSO algorithm is therefore [39]

v_i(k+1) = w v_i(k) + c1 r1 (Pbest_i - x_i(k)) + c2 r2 (Gbest - x_i(k)) + c3 r3 (Ibest - x_i(k)),  (2)

where c3 is the acceleration coefficient that pulls each particle towards Ibest. The value of c3 changes according to c1 and the number of iterations (k).
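The IPSO extension amounts to one extra attraction term; a minimal sketch follows. Since the exact schedule by which [39] varies c3 with c1 and the iteration count is not reproduced here, c3 is simply passed in as a parameter:

```python
import random

def ipso_velocity(x, v, pbest, gbest, ibest, w=0.5, c1=0.5, c2=0.5,
                  c3=0.5, rng=random.random):
    """IPSO velocity update: the two classical attraction terms plus a
    third pull towards Ibest, the best particle of the current iteration.
    c3 is supplied by the caller; in [39] it varies with c1 and the
    iteration number."""
    new_v = []
    for j in range(len(x)):
        r1, r2, r3 = rng(), rng(), rng()
        new_v.append(w * v[j]
                     + c1 * r1 * (pbest[j] - x[j])
                     + c2 * r2 * (gbest[j] - x[j])
                     + c3 * r3 * (ibest[j] - x[j]))  # iteration-best pull
    return new_v
```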

2.3. Evolutionary Particle Swarm Optimization (EPSO) Algorithm

The Evolutionary Particle Swarm Optimization (EPSO) Algorithm introduced by Angeline [40] utilizes a hybridization concept, combining Evolutionary Programming (EP) with the traditional PSO. Among the several EP variants, one utilizes competition, sorting, and selection processes to choose the surviving particles, which speeds up the process of determining the optimum value. Implementing this concept in the PSO algorithm causes the particles in EPSO to move towards the optimal point faster than in the original method. Furthermore, implementing EP in PSO also makes the process of finding Pbest and Gbest at each iteration different and simpler [41]: the EPSO algorithm obtains the Pbest and Gbest values immediately after the selection process, based on the surviving particles. Thus, the Pbest and Gbest values are easily determined, and the new velocity and position are updated using (1).
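The selection step can be sketched as follows. Note that [40] uses tournament competition between particles; the sort-and-replace scheme below (and the function name) is a simplified illustrative stand-in for that idea, not the paper's exact procedure:

```python
def epso_select(positions, fitnesses):
    """EP-style selection applied to a swarm (simplified sketch):
    rank particles by fitness (minimisation), keep the better half,
    and overwrite the positions of the worse half with copies of the
    survivors. Pbest/Gbest can then be read from the survivors."""
    order = sorted(range(len(positions)), key=lambda i: fitnesses[i])
    half = len(positions) // 2
    survivors = [positions[i] for i in order[:half]]
    # the worse half is replaced by copies of the better half
    return survivors + [list(p) for p in survivors]
```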

3. A Novel Global Particle Swarm Optimization Algorithm

In this study, a novel Global Particle Swarm Optimization (GPSO) Algorithm is introduced. Certain modifications are made to the velocity formula of the traditional PSO algorithm, so that it reaches the best (global) solution quicker than the traditional PSO.

In the traditional PSO, the Gbest with the better result is used in the updating process, while the remaining one is discarded (replaced). This means that the Gbest values achieved in previous iterations are never referred back to; in other words, there is no information sharing between the best values achieved at previous iterations and the current results. Under an "Experience" concept, however, the knowledge and information about what happened previously can be used to guide more accurate decisions. Therefore, in GPSO a Gbest value from any dimension (variable) and any iteration is randomly selected and used to update the velocity value; this variable is called the random best, Rbest.

Furthermore, the acceleration coefficient for the Rbest parameter, ca, is obtained as the average of the cognitive (c1) and social (c2) constant parameter values, ca = (c1 + c2)/2. Thus, the new velocity formula that updates the next position of the particles in GPSO is

v_i(k+1) = w v_i(k) + c1 r1 (Pbest_i - x_i(k)) + c2 r2 (Gbest - x_i(k)) + ca r3 (Rbest - x_i(k)),  (3)

where the 4th term in the GPSO velocity equation is referred to as the Improvement Factor (IF).
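The GPSO update, including the random selection of the previous-Gbest component used by the Improvement Factor (here denoted rbest), can be sketched in Python. The published listing (Algorithm 1) is in MATLAB; the names below are ours, and a uniform random choice stands in for the Gaussian-based selection described in the text:

```python
import random

def gpso_velocity(x, v, pbest, gbest, rbest, w=0.5, c1=0.5, c2=0.5,
                  rng=random.random):
    """GPSO velocity update. rbest is a single Gbest component randomly
    drawn from previous iterations/dimensions; the fourth term is the
    Improvement Factor (IF), weighted by ca = (c1 + c2) / 2."""
    ca = (c1 + c2) / 2.0
    new_v = []
    for j in range(len(x)):
        r1, r2, r3 = rng(), rng(), rng()
        new_v.append(w * v[j]
                     + c1 * r1 * (pbest[j] - x[j])
                     + c2 * r2 * (gbest[j] - x[j])
                     + ca * r3 * (rbest - x[j]))   # Improvement Factor
    return new_v

def sample_rbest(gbest_archive, rng=random):
    """Pick one component of one archived Gbest at random (illustrative:
    uniform choice rather than the paper's Gaussian selection).
    gbest_archive[k][j] holds dimension j of Gbest at iteration k."""
    row = rng.choice(gbest_archive)
    return rng.choice(row)
```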

Figure 1 illustrates an example of the GPSO process of determining the Rbest value. Suppose the optimization problem consists of 3 variables (x1, x2, and x3), the number of particles is 6 (the positions of the particles are represented by different colours), and the search has reached the kth (current) iteration. As in the PSO algorithm, the Gbest value of the population at each iteration is recorded in order to update the next particle positions. In this example, the population at the (k-1)th iteration has the Gbest value achieved by the "blue" particle; the Gbest values for each dimension (variable) at the (k-1)th iteration are therefore Gbest(k-1),1, Gbest(k-1),2, and Gbest(k-1),3. For the current iteration, the Gbest value is achieved by the "light purple" particle; thus, the value can be presented as Gbest k,d, where d = 1, 2, and 3 (the number of dimensions).

Figure 1: Illustration for Global Particle Swarm Optimization concept.

However, in the GPSO, an additional parameter is needed for the updating process, which is the Rbest value. From the previous definition, the Rbest value is obtained from a Gbest value that is randomly selected at any iteration and dimension. Using the Gaussian distributed random function, the Gbest value from the 2nd dimension (2nd variable) at the (k-1)th iteration is selected to be the Rbest value for the current iteration. Thus, in this example, the Rbest for the current iteration population is Rbest = Gbest(k-1),2.

From the above description, it can be noticed that the Rbest value will be different for each iteration. Unlike the Gbest concept, where a comparison process is needed to determine the surviving Gbest value for the current iteration, the Rbest parameter requires no comparison, and the possibility of obtaining a similar value in the next iteration is minuscule, especially for high dimensional problems. Therefore, the particles learn from a different "Experience" in each update. Furthermore, the Improvement Factor (IF) also helps the algorithm obtain a suitable velocity value for updating the next particle positions. All these factors make the GPSO superior to its original counterpart. Figure 2 and Algorithm 1 show the flow chart of the GPSO algorithm for solving high dimensional optimization problems and the corresponding MATLAB code for the IF parameter.

Algorithm 1: The code for determining the IF and updating the position in GPSO.

Figure 2: The process of Global Particle Swarm Optimization (GPSO) Algorithm.

4. Result and Discussion

Twelve well-known classical benchmark functions, consisting of unimodal, multimodal, separable, and nonseparable types, are used to evaluate the performance of the PSO, IPSO, EPSO, and GPSO algorithms, as shown in Table 1.

Table 1: High dimensional benchmark functions used in experiments.

In order to have a fair comparison, all of the PSO methods in the analysis use the same parameter values. The population size is set to 20 particles, with the cognitive and social component values (c1 and c2) set to 0.5. Each algorithm runs until the maximum iteration count is reached (max. iter. = 1000). Furthermore, the performance of these PSOs is tested at 4 different dimensions, namely, 20, 30, 50, and 100, and each test set is executed 50 times with new random population values.
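The trial protocol above can be captured by a small reporting harness. This sketch is ours: the sphere function stands in for the benchmark set (the real experiments use the 12 functions of Table 1), and `optimiser` stands for any of the four PSO variants under test:

```python
import random
import statistics

def sphere(x):
    """Unimodal benchmark: sum of squares, with optimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def run_trials(optimiser, dim, trials=50, seed=0):
    """Repeat an optimiser on fresh random populations and report the
    best/worst/mean/SD statistics of the kind tabulated in Tables 2-5.
    optimiser(dim, rng) must return the best fitness found in one run."""
    rng = random.Random(seed)
    results = [optimiser(dim, rng) for _ in range(trials)]
    return {"best": min(results), "worst": max(results),
            "mean": statistics.mean(results),
            "SD": statistics.pstdev(results)}
```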

4.1. Results from Benchmark Simulation

Tables 2, 3, 4, and 5 show the best, worst, mean, and standard deviation (SD) values achieved by the PSO, IPSO, EPSO, and GPSO for all 12 benchmark functions at the four different dimension sizes. The standard optimal value for all functions in this analysis is zero, except for function 10 (f10), whose standard optimal value is -78.33. Since the dimension of each test function is very high (20, 30, 50, and 100), the PSO, EPSO, and IPSO algorithms could not reach the global optimal value in the way that the GPSO could.

Table 2: Best, worst, mean, and standard deviation values obtained by PSO, IPSO, EPSO, and GPSO through 50 independent runs on functions f1 to f3.
Table 3: Best, worst, mean, and standard deviation values obtained by PSO, IPSO, EPSO, and GPSO through 50 independent runs on functions f4 to f6.
Table 4: Best, worst, mean, and standard deviation values obtained by PSO, IPSO, EPSO, and GPSO through 50 independent runs on functions f7 to f9.
Table 5: Best, worst, mean, and standard deviation values obtained by PSO, IPSO, EPSO, and GPSO through 50 independent runs on functions f10 to f12.

Figure 3 shows examples of the best results achieved by all PSO methods on different functions and dimensions. From the results, it can be seen that the concept of "Experience" in GPSO has helped the algorithm reach the lowest convergence value compared to the other PSOs. Moreover, the GPSO also reached the global value faster than the others, both when the global minimum value is zero (Figures 3(a) and 3(b)) and when it is not (Figure 3(c)). For example, in Figure 3(c), EPSO is the slowest algorithm to bring the high dimensional problem near its converged solution (nearly 200 iterations) compared to PSO and IPSO, whilst GPSO required only a small number of iterations (12 iterations) to do the same. This makes GPSO the fastest algorithm in obtaining a converged solution for high dimensional problems, with the lowest optimal results compared to the other PSO methods.

Figure 3: The performance of PSO, EPSO, IPSO, and GPSO in solving high dimension problems.

On top of that, the limit of 1000 iterations is not the reason that the PSO, EPSO, and IPSO methods fail to realize the best optimal value. Even when the number of iterations is increased to 20000, the optimization still gives the same optimal value as with 1000 iterations. This means that, for high dimensional problems, these three algorithms tend to be trapped at a local optimal solution. Therefore, the Rbest parameter not only gives a faster converging solution; it also helps the algorithm avoid being trapped at a local optimal solution. Thus, from the "best" results achieved by all PSO methods in Tables 2 to 5, the GPSO provides optimal results very close or identical to the standard minimum value regardless of the dimension size of the problem.

Furthermore, the GPSO provides consistent results across all 50 trials. This can be seen from the standard deviation (SD) values, where the SD values for GPSO are the lowest compared to the results of the other three PSOs (Tables 2–5, column "SD"). Since the SD value indicates how close the results obtained from the 50 different trials are to their average (mean) value, a large SD value shows that the algorithm could not provide consistent results in the analysis. This occurs because those algorithms are frequently trapped in local optima, especially for high dimensional problems. Figure 4 shows the SD values for all functions when the dimension is 50.

Figure 4: Comparison of standard deviation between GPSO and other PSOs for all functions with Dim. = 50.

It can be clearly seen that, for some functions, the SD values for PSO, EPSO, and IPSO are very high compared to the GPSO results. From Table 3, for example, the "best" values achieved by PSO, EPSO, and IPSO at dimension 50 differ greatly from the corresponding "worst" values, and this large difference between the "best" and "worst" values inflates the SD. However, for a few functions, the SD results for all the PSO methods are smaller than for the other functions; hence, PSO, EPSO, and IPSO can still give consistent results on some of the high dimensional problems, although not as consistently as GPSO. Overall, since the SD value for GPSO on all of the test functions and in all of the dimensions is very small (close to 0), it can be concluded that the GPSO provides consistent results on any high dimensional problem, as shown in Tables 2–5 (columns "best" and "worst").

In summary, the GPSO algorithm can reach the global optimal value within 1000 iterations for all cases on high dimensional test functions. Not only that, but the GPSO also has a very low standard deviation value compared to the PSO, EPSO, and IPSO. Therefore, the GPSO has shown very high efficiency in solving any high dimensional numerical optimization problem.

4.2. Impacts of Acceleration Coefficient Selection for GPSO Algorithm

From the previous sections, the GPSO algorithm demonstrated the best performance in solving high dimension numerical optimization problems. However, based on the velocity formula in (3), the setting of the acceleration coefficients will also influence GPSO's performance. Thus, an investigation into the suitable range of the acceleration coefficients for solving high dimensional problems is needed. The dimension of all the benchmark functions used in this analysis is 50. The maximum iteration number remains the same (1000 iterations), and the average results and the standard deviation values from 50 runs are presented in Table 6.

Table 6: The average (mean) and standard deviation values of GPSO with various acceleration coefficients for all functions.

From Table 6, the GPSO algorithm provides average results near the optimal solution when the acceleration coefficient value (c = c1 = c2) is between 0.2 and 1.5. For the cases c = 0.2, 0.5, and 1.0, almost all the average results lie between 10^-11 and 10^-3; since the standard optimal value for these functions is zero, the GPSO gives average results that are very close to the standard optimal value. For the acceleration coefficient c = 1.5, the corresponding functions give values from 10^-5 to 10^-1. For function f10, the average results given by c = 0.2 up to c = 1.5 are similar and equal to the standard optimal result (-78.33). However, when the acceleration coefficients are set to 2.0, the results of the GPSO are not as good as those obtained with acceleration values of 1.5 or less: some of the average values obtained by GPSO are in the tens, despite the fact that the optimal results should be zero.

Thus, in order to optimize the GPSO's performance in solving high dimensional problems, the acceleration coefficient set by the user must be less than 1.5.

Figure 5 shows the standard deviation values for all of the functions as the acceleration coefficient varies from 0.2 to 2.0. From the results, when the acceleration coefficient values for GPSO are between 0.2 and 1.0, the SD value of the algorithm is equal to zero for all functions.

Figure 5: Standard deviation results for GPSO in varying the acceleration values.

In other words, GPSO is capable of providing consistent results when the acceleration coefficient values are within this range. However, when the coefficient is set to 1.5, some of the functions have higher SD values, in one case close to 1.0. The worst condition occurs when the coefficient is set to 2.0, where almost all functions have SD values higher than 0, except for functions 3 and 10. Therefore, by considering both the average values over the 50 test sets and the SD results, it can be concluded that a suitable acceleration coefficient value for the GPSO algorithm must be less than 1.0 in order to realize the best performance of GPSO in solving high dimensional problems.

5. Conclusion

This paper proposed a new algorithm for Particle Swarm Optimization, which is known as Global Particle Swarm Optimization (GPSO), to determine the global optimal value for high dimensional optimization problems. The performance of GPSO is tested on 12 classical benchmark high dimensional test functions and compared with 3 other PSO methods, which are Original Particle Swarm Optimization (PSO), Evolutionary Particle Swarm Optimization (EPSO), and Iteration Particle Swarm Optimization (IPSO).

The simulation results showed that the GPSO method performs well in solving high dimensional problems. The implementation of Rbest in the GPSO not only helped the algorithm reach the global optimal results within a few iterations, but also prevented the GPSO from being trapped in local optimal results. Furthermore, the GPSO algorithm also gave consistent results, even with different initial values, as shown by the smallest standard deviation obtained over the 50 test sets. From the analysis of varying the acceleration coefficient, a suitable value for the GPSO algorithm must be less than 1.0 in order to realize its best performance in solving high dimensional problems.

Therefore, from all the results shown by the GPSO algorithm, it can be concluded that GPSO is the superior method in solving high dimensional numerical optimization problems compared to PSO, EPSO, and IPSO algorithms.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

  1. T. Navalertporn and N. V. Afzulpurkar, “Optimization of tile manufacturing process using particle swarm optimization,” Swarm and Evolutionary Computation, vol. 1, no. 2, pp. 97–109, 2011.
  2. G. S. Chyan and S. G. Ponnambalam, “Obstacle avoidance control of redundant robots using variants of particle swarm optimization,” Robotics and Computer-Integrated Manufacturing, vol. 28, no. 2, pp. 147–153, 2012.
  3. Q. Tang and P. Eberhard, “A PSO-based algorithm designed for a swarm of mobile robots,” Structural and Multidisciplinary Optimization, vol. 44, no. 4, pp. 483–498, 2011.
  4. D. N. Jeyakumar, T. Jayabarathi, and T. Raghunathan, “Particle swarm optimization for various types of economic dispatch problems,” International Journal of Electrical Power and Energy Systems, vol. 28, no. 1, pp. 36–42, 2006.
  5. Q. Kang, T. Lan, Y. Yan, L. Wang, and Q. Wu, “Group search optimizer based optimal location and capacity of distributed generations,” Neurocomputing, vol. 78, no. 1, pp. 55–63, 2012.
  6. J.-Y. Kim, K.-J. Mun, H.-S. Kim, and J. H. Park, “Optimal power system operation using parallel processing system and PSO algorithm,” International Journal of Electrical Power and Energy Systems, vol. 33, no. 8, pp. 1457–1461, 2011.
  7. Y. W. Ying, J. Zhou, C. Z. Chao Zhou, Y. W. Wang, H. Q. Qin, and Y. L. Lu, “An improved self-adaptive PSO technique for short-term hydrothermal scheduling,” Expert Systems with Applications, vol. 39, no. 3, pp. 2288–2295, 2012.
  8. G. Isazadeh, R.-A. Hooshmand, and A. Khodabakhshian, “Modeling and optimization of an adaptive dynamic load shedding using the ANFIS-PSO algorithm,” Simulation, vol. 88, no. 2, pp. 181–196, 2012.
  9. H. Liu, Z. Cai, and Y. Wang, “Hybridizing particle swarm optimization with differential evolution for constrained numerical and engineering optimization,” Applied Soft Computing Journal, vol. 10, no. 2, pp. 629–640, 2010.
  10. A. Farshidianfar, A. Saghafi, S. M. Kalami, and I. Saghafi, “Active vibration isolation of machinery and sensitive equipment using H∞ control criterion and particle swarm optimization method,” Meccanica, vol. 47, no. 2, pp. 437–453, 2012.
  11. S. M. Seyedpoor, J. Salajegheh, and E. Salajegheh, “Shape optimal design of materially nonlinear arch dams including dam-water-foundation rock interaction using an improved PSO algorithm,” Optimization and Engineering, vol. 13, no. 1, pp. 79–100, 2012.
  12. A. M. El-Zonkoly, “Optimal placement of multi-distributed generation units including different load models using particle swarm optimization,” Swarm and Evolutionary Computation, vol. 1, no. 1, pp. 50–59, 2011.
  13. R. J. Kuo and C. Y. Yang, “Simulation optimization using particle swarm optimization algorithm with application to assembly line design,” Applied Soft Computing Journal, vol. 11, no. 1, pp. 605–613, 2011.
  14. E. Pekşen, T. Yas, A. Y. Kayman, and C. Özkan, “Application of particle swarm optimization on self-potential data,” Journal of Applied Geophysics, vol. 75, no. 2, pp. 305–318, 2011.
  15. W.-T. Pan, “Combining PSO cluster and nonlinear mapping algorithm to perform clustering performance analysis: take the enterprise financial alarming as example,” Quality and Quantity, vol. 45, no. 6, pp. 1291–1302, 2011.
  16. L. Goel, D. Gupta, and V. K. Panchal, “Hybrid bio-inspired techniques for land cover feature extraction: a remote sensing perspective,” Applied Soft Computing Journal, vol. 12, no. 2, pp. 832–849, 2012.
  17. H. M. Jiang, C. K. Kwong, W. H. Ip, and T. C. Wong, “Modeling customer satisfaction for new product development using a PSO-based ANFIS approach,” Applied Soft Computing Journal, vol. 12, no. 2, pp. 726–734, 2012.
  18. J.-W. Li, Y.-C. Chang, C.-P. Chu, and C.-C. Tsai, “A self-adjusting e-course generation process for personalized learning,” Expert Systems with Applications, vol. 39, no. 3, pp. 3223–3232, 2012.
  19. V. Vassiliadis and G. Dounias, “Nature-inspired intelligence: a review of selected methods and applications,” International Journal on Artificial Intelligence Tools, vol. 18, no. 4, pp. 487–516, 2009.
  20. H. Liu and A. Abraham, “An hybrid fuzzy variable neighborhood particle swarm optimization algorithm for solving quadratic assignment problems,” Journal of Universal Computer Science, vol. 13, no. 7, pp. 1032–1054, 2007.
  21. J. Jie, J. Zeng, C. Han, and Q. Wang, “Knowledge-based cooperative particle swarm optimization,” Applied Mathematics and Computation, vol. 205, no. 2, pp. 861–873, 2008.
  22. S. M. Morkos and H. A. Kamal, “Optimal tuning of PID controller using adaptive hybrid particle swarm optimization algorithm,” International Journal of Computers, Communications and Control, vol. 7, no. 1, pp. 101–114, 2012.
  23. L.-Y. Chuang, S.-W. Tsai, and C.-H. Yang, “Chaotic catfish particle swarm optimization for solving global numerical optimization problems,” Applied Mathematics and Computation, vol. 217, no. 16, pp. 6900–6916, 2011.
  24. W.-M. Lin, H.-J. Gow, and M.-T. Tsai, “Hybridizing particle swarm optimization with signal-to-noise ratio for numerical optimization,” Expert Systems with Applications, vol. 38, no. 11, pp. 14086–14093, 2011.
  25. M. I. Menhas, M. Fei, L. Wang, and L. Qian, “Real/binary co-operative and co-evolving swarms based multivariable PID controller design of ball mill pulverizing system,” Energy Conversion and Management, vol. 54, no. 1, pp. 67–80, 2012.
  26. M. Xi, J. Sun, and W. Xu, “An improved quantum-behaved particle swarm optimization algorithm with weighted mean best position,” Applied Mathematics and Computation, vol. 205, no. 2, pp. 751–759, 2008.
  27. A. Nickabadi, M. M. Ebadzadeh, and R. Safabakhsh, “A novel particle swarm optimization algorithm with adaptive inertia weight,” Applied Soft Computing Journal, vol. 11, no. 4, pp. 3658–3670, 2011.
  28. J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, Perth, Australia, December 1995.
  29. Y. Shi and R. Eberhart, “Modified particle swarm optimizer,” in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC'98), pp. 69–73, Anchorage, Alaska, May 1998.
  30. A. Chatterjee and P. Siarry, “Nonlinear inertia weight variation for dynamic adaptation in particle swarm optimization,” Computers and Operations Research, vol. 33, no. 3, pp. 859–871, 2006.
  31. A. Ratnaweera, S. K. Halgamuge, and H. C. Watson, “Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 240–255, 2004.
  32. M. S. Arumugam, G. R. Murthy, M. V. C. Rao, and C. K. Loo, “A novel effective particle swarm optimization like algorithm via extrapolation technique,” in Proceedings of the International Conference on Intelligent and Advanced Systems (ICIAS'07), pp. 516–521, Kuala Lumpur, Malaysia, November 2007.
  33. W. Fan, Z. Cui, Y. Chen, and Y. Tan, “Nonlinear time-varying stability analysis of particle swarm optimization,” in Proceedings of the International Conference on Computational Aspects of Social Networks (CASoN'10), pp. 3–6, Taiyuan, China, September 2010.
  34. Y.-l. Zheng, L.-h. Ma, L.-y. Zhang, and J.-x. Qian, “Empirical study of particle swarm optimizer with an increasing inertia weight,” in Proceedings of the Congress on Evolutionary Computation (CEC'03), pp. 221–226, Canberra, Australia, 2003.
  35. C. Dong, G. Wang, Z. Chen, and Z. Yu, “A method of self-adaptive inertia weight for PSO,” in Proceedings of the International Conference on Computer Science and Software Engineering (CSSE'08), pp. 1195–1198, Hubei, China, December 2008.
  36. M. Lin and Z. Hua, “Improved PSO algorithm with adaptive inertia weight and mutation,” in Proceedings of the WRI World Congress on Computer Science and Information Engineering (CSIE'09), pp. 622–625, Los Angeles, Calif, USA, April 2009.
  37. W. Lin, C. Jiang, and J. Qian, “The identification research of nonlinear system based on PSO with fuzzy adaptive inertia weight,” in Proceedings of the 5th World Congress on Intelligent Control and Automation (WCICA'04), pp. 267–271, Hangzhou, China, June 2004.
  38. Z. Wu and J. Zhou, “A self-adaptive particle swarm optimization algorithm with individual coefficients adjustment,” in Proceedings of the International Conference on Computational Intelligence and Security (CIS'07), pp. 133–136, Heilong Jiang, China, December 2007.
  39. T.-Y. Lee and C.-L. Chen, “Unit commitment with probabilistic reserve: an IPSO approach,” Energy Conversion and Management, vol. 48, no. 2, pp. 486–493, 2007.
  40. P. J. Angeline, “Using selection to improve particle swarm optimization,” in Proceedings of the IEEE International Conference on Evolutionary Computation, IEEE World Congress on Computational Intelligence, pp. 84–89, Anchorage, Alaska, USA, May 1998.
  41. J. J. Jamian, M. W. Mustafa, H. Mokhlis, and M. N. Abdullah, “Comparative study on distributed generator sizing using three types of particle swarm optimization,” in Proceedings of the 3rd International Conference on Intelligent Systems Modelling and Simulation (ISMS'12), pp. 131–136, Sabah, Malaysia, February 2012.