Research Article | Open Access
An Efficient Algorithm for Unconstrained Optimization
This paper presents an original and efficient PSO algorithm, which is divided into three phases: (1) stabilization, (2) breadth-first search, and (3) depth-first search. The proposed algorithm, called PSO-3P, was tested with 47 benchmark continuous unconstrained optimization problems, on a total of 82 instances. The numerical results show that the proposed algorithm is able to reach the global optimum. This work mainly focuses on unconstrained optimization problems from 2 to 1,000 variables.
In general, an optimization problem can be defined as

min f(x), subject to x ∈ Ω ⊆ ℝⁿ, (1)

where f: ℝⁿ → ℝ is the objective function and Ω is the set of feasible solutions.
In recent decades several heuristic optimization methods have been developed. These techniques are able to find solutions close to the optimum, where exact or analytic methods cannot produce optimal solutions within reasonable computation time. This is especially true when a global optimum is surrounded by many local optima, a situation known as deep valleys or black holes.
Different heuristic solution methods have been developed, among them Tabu Search (TS) [1], Simulated Annealing (SA) [2], genetic algorithms (GA) [3], Scatter Search (SS) [4], and particle swarm optimization (PSO) [5].
In particular, for PSO there are many different versions; a taxonomy can be found in [6], and for a review of its variants, see [7]. With respect to unconstrained optimization, [8] proposed a modified algorithm to ensure the rational flight of every particle’s dimensional component; two parameters of fitness function evaluation, particle-distribution-degree and particle-dimension-distance, were introduced in order to avoid premature convergence. Reference [9] proposed a two-layer PSO (TLPSO) to increase the diversity of the particles, so that the drawback of getting trapped in a local optimum was avoided. Reference [10] introduced a hybrid approach combining particle swarm optimization with a genetic algorithm. Reference [11] presented particle swarm optimization with flexible swarm; the algorithm was tested over 14 benchmark functions, with 300,000 iterations for each function. Reference [12] presented two hybrids, a real-coded genetic algorithm-based PSO (RGA-PSO) method and an artificial immune algorithm-based PSO (AIA-PSO) method; these algorithms were tested over 14 benchmark functions, with 20,000 or 60,000 objective function evaluations of the internal PSO approach, depending on the number of decision variables. Reference [13] proposed IRPEO, an improved real-coded population-based extremal optimization (EO) method, which was compared with PSO and PSO-EO; the algorithms were tested over 12 benchmark functions with 10,000–50,000 iterations, depending on the problem dimension.
In this paper, we present a PSO-based algorithm, called PSO-3P [14], that uses three phases to guide the search through the solution space. In order to test the performance of PSO-3P, it was applied to 47 benchmark continuous unconstrained optimization problems, over a total of 82 instances. After some computational experiments, we observed that PSO-3P is able to escape from suboptimal entrapments.
In particular, the proposed PSO-3P algorithm was able to reach the global optimum for the Griewank function with 120,000 variables in 40 seconds on average, using only 3 particles and 90 iterations (see Figures 1(a) and 1(b)). This instance was solved using Matlab and run on a notebook with an Intel Atom N280 processor at 1.66 GHz.
(a) Griewank function (hardness 6.08)
(b) Convergence with PSO-3P for Griewank function (dimension = 120,000)
The remainder of this paper is divided as follows: a background of PSO is presented in the following section. The general guidelines of PSO-3P are described in Section 3. Numerical examples are provided in Section 4. Finally, Section 5 includes conclusions and future research.
Particle swarm optimization is a metaheuristic based on swarm intelligence and has its roots in artificial life, social psychology, engineering, and computer science. PSO differs from evolutionary computation (cf. [15]) because the population members or agents, also called particles, are “flying” through the problem hyperspace.
PSO is an adaptive method that uses agents or particles moving through the search space according to the principles of evaluation, comparison, and imitation [15].
PSO is based on the use of a set of particles or agents that correspond to states of an optimization problem, where each particle moves across the solution space in search of an optimal position or at least a good solution. In PSO, agents communicate with each other, and the agent with the best position (measured according to an objective function) influences the others by attracting them towards itself.
The population is initialized by assigning an initial random position and velocity to each element. At each iteration, the velocity of each particle is randomly accelerated towards its best known position (where the value of the fitness or objective function improves), also taking into account the best positions of its neighbors.
To solve a problem, PSO uses a dynamic management of particles; this approach allows breaking cycles and diversifying the search. In this work, an n-particle swarm is represented at time t under the form

X(t) = {x_1(t), x_2(t), …, x_n(t)}, x_i(t) ∈ Ω, (2)

with i = 1, …, n; then a movement of the swarm is defined according to

x_i(t + 1) = x_i(t) + v_i(t + 1), (3)

where the velocity is given in

v_i(t + 1) = w v_i(t) + c_1 r_1 (p_i(t) − x_i(t)) + c_2 r_2 (g(t) − x_i(t)), (4)

where Ω is the space of feasible solutions, v_i(t + 1) is the speed at time t + 1 of the ith particle, v_i(t) is the speed at time t of the ith particle, x_i(t) is the ith particle at time t, g(t) is the particle with the best value found so far (i.e., before time t + 1), p_i(t) is the best position found so far by the ith particle (before time t + 1), r_1 and r_2 are random numbers uniformly distributed over the interval [0, 1], c_1 and c_2 are acceleration coefficients, and w is the inertia weight factor.
The PSO algorithm is described in Algorithm 1.
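As a minimal illustration of the update rules (3) and (4), the following Python sketch performs one synchronous swarm update; the function name, default coefficient values, and list-of-lists representation are illustrative assumptions, not the authors' implementation:

```python
import random

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One synchronous PSO update: velocities via (4), positions via (3).
    All coefficient values here are illustrative defaults."""
    n, dim = len(positions), len(positions[0])
    for i in range(n):
        for d in range(dim):
            r1, r2 = random.random(), random.random()  # uniform over [0, 1]
            velocities[i][d] = (w * velocities[i][d]
                                + c1 * r1 * (pbest[i][d] - positions[i][d])
                                + c2 * r2 * (gbest[d] - positions[i][d]))
            positions[i][d] += velocities[i][d]
    return positions, velocities
```

Note that if a particle already sits at both its personal best and the global best, its velocity contribution from the attraction terms is zero, so it only drifts by its inertia.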
In this section, the main characteristics of the proposed algorithm, called PSO-3P, are described. The PSO-3P is based on a traditional PSO heuristic. However, the position of the particles can be modified using different strategies, which are sequentially applied in three phases of the searching process.
In phase 1, called stabilization, the PSO-3P algorithm randomly generates a set of particles in the solution space, according to the description presented in Section 2. Then, during a given number of iterations, the position of the particles is modified using (3) and (4). Thus, at the end of this phase the particles are concentrated, or stabilized, in a promising region.
When phase 1 is completed, a breadth-first search strategy, called phase 2, is incorporated. In this phase, if the global best solution is not improved after several consecutive iterations, a random particle is created and, with a probability bigger than 0.5, takes the place of a particle randomly selected from the swarm. This process of creation and replacement is repeated a fixed number of times; however, the particle with the best known position is preserved. Thus, the population is dispersed in the solution space, but it can still be attracted to the best region visited so far. This diversification strategy is applied during a given number of iterations.
Finally, phase 3 is initialized, and the following depth-first search strategy is applied for a given number of iterations. If the global best solution is not improved after several consecutive iterations, new particles are randomly created in a neighborhood of the best known solution and take the place of an equal number of randomly selected particles in the swarm. Thus, phase 3 implements an intensification process in a promising region.
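The two replacement mechanisms described above can be sketched as follows. This is a reading of the textual description, not the authors' code; the function names, the replacement threshold handling, and the uniform neighborhood sampling are assumptions:

```python
import random

def replace_random_particle(swarm, best_idx, new_particle, p_replace=0.5):
    """Phase 2 step (diversification): a freshly generated random particle
    replaces a randomly chosen swarm member, while the particle with the
    best known position (best_idx) is always preserved. Sketch only."""
    if random.random() > p_replace:
        candidates = [i for i in range(len(swarm)) if i != best_idx]
        swarm[random.choice(candidates)] = new_particle
    return swarm

def intensify_near_best(swarm, best, k, radius, dim):
    """Phase 3 step (intensification): k particles sampled uniformly in a
    box neighborhood of the best known solution replace k randomly
    selected swarm members. Neighborhood shape is an assumption."""
    for _ in range(k):
        neighbor = [best[d] + random.uniform(-radius, radius) for d in range(dim)]
        swarm[random.randrange(len(swarm))] = neighbor
    return swarm
```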
The main steps of the PSO-3P algorithm are described in Algorithm 2.
4. Computational Results
In order to evaluate the performance of the proposed PSO-3P algorithm, 47 unconstrained optimization problems taken from [16–18] were used. An in-depth study of multiobjective optimization and constrained optimization will be presented in future work.
These problems include functions that are scalable to arbitrary dimensions, such as Ackley, Rastrigin, Rosenbrock, Sphere, and Zakharov. These problems have been widely used as benchmarks by many researchers studying different methods; see [13, 19–25]. Functions that are especially difficult to solve were also included; for example, according to [18], the overall success of different global optimizers on DeVilliersGlasser02, Damavandi, CrossLegTable, XinSheYang03, Griewank, and XinSheYang02 was 0%, 0.25%, 0.83%, 1.08%, 6.08%, and 31.33%, respectively; see Figure 2.
(a) Damavandi function
(b) CrossLegTable function
(c) XinSheYang03 function
(d) XinSheYang02 function
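Among the scalable benchmarks named above, the Griewank function has a standard closed form, shown below as a self-contained Python implementation (the function name is ours; the formula is the canonical definition with global minimum f = 0 at the origin for any dimension):

```python
import math

def griewank(x):
    """Griewank benchmark: f(x) = 1 + sum(x_i^2)/4000 - prod(cos(x_i / sqrt(i))).
    Global minimum f = 0 at x = (0, ..., 0), for any dimension."""
    s = sum(xi * xi for xi in x) / 4000.0
    p = math.prod(math.cos(xi / math.sqrt(i)) for i, xi in enumerate(x, start=1))
    return 1.0 + s - p
```

The cosine product creates a dense field of local minima superimposed on a slowly growing quadratic bowl, which is why the function is a common test of an optimizer's ability to escape local optima.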
In order to evaluate the performance of the algorithm, we used the efficiency measure given in (5), which compares the best solution found by the algorithm after one run with the known optimum of the function. When the efficiency was greater than 0.999999, it was rounded to 1.
The tuning of the operating parameters was carried out using a brute-force approach, as described in [26], and the resulting values were fixed for phase 1; the parameters for phase 2 and phase 3 were set in the same manner.
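A brute-force tuning pass of the kind described here simply enumerates every parameter combination and keeps the best-scoring one. The sketch below is a generic illustration, not the authors' procedure; `run_algorithm` and the grid contents are placeholders:

```python
from itertools import product

def brute_force_tune(run_algorithm, grid):
    """Exhaustive (brute-force) parameter tuning: evaluate every
    combination in the grid and keep the lowest-scoring setting.
    `run_algorithm(params)` is assumed to return a score to minimize,
    e.g. the mean best objective value over several runs."""
    best_params, best_score = None, float("inf")
    for combo in product(*grid.values()):
        params = dict(zip(grid.keys(), combo))
        score = run_algorithm(params)
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score
```

The cost grows multiplicatively with the grid sizes, which is why such tuning is usually done once, offline, and the selected values are then fixed for all experiments.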
The algorithm was implemented in Matlab R2008a and was run in an Intel Core i5-3210M processor computer at 2.5 GHz, running on Windows 8.
4.1. Unconstrained Optimization
In this section, we present the results obtained for problems with 120 or fewer variables. The data reported in each of the following tables derive from 24 independent executions of the algorithm for each function. The algorithm stops when a maximum of 1500 iterations has been reached or the optimum has been found. In order to obtain a good mean efficiency, the number of particles used for each function was tuned using a brute-force approach; thus it ranges between 3 and 240 particles for the different problems.
Tables 1 and 2 include, for each function, the number of particles (Part.), the average efficiency (Mean Eff.) using (5), the average time per run in seconds (Time (sec)), and the average number of iterations per run (Mean Iter.).
|The overall success of different global optimizers according to Gavana [18].|
The performance of PSO-3P is remarkable for the DeVilliersGlasser02, Damavandi, and CrossLegTable functions, where the success rate was superior to those reported in [18].
4.2. Experiments Using Few Particles
As an additional study on the behavior of the proposed PSO-3P, results obtained with small populations are investigated in this section. Indeed, for some instances, the global optimum was found using only 3 or 6 particles and, on average, fewer than 3000 evaluations of the objective function.
In this case, the algorithm was executed 24 times, stopping when a maximum of 3000 iterations per run was reached or when the optimum was found. Tables 3 and 4 include the functions that were solved to optimality in at least one of the 24 runs. These tables show the name of the function in column 2; the number of particles used for each case in column 3; the average time per run in seconds in column 4; the percentage of runs in which the global optimum was obtained in column 5; the average number of iterations per run in column 6; and, finally, the number of evaluations of the objective function (EOF) in column 7.
Even in this case, when the algorithm was restricted to few particles and iterations, the success rate was satisfactory for the DeVilliersGlasser02 and CrossLegTable functions. It is worth remembering that the numbers of particles and iterations reported in Table 1 are higher, since the data presented in the previous section were generated after tuning the parameters of the algorithm for each function in order to increase its efficiency.
4.3. High Dimensional Problems
Finally, results obtained for some scalable functions are presented in this section, in order to investigate the performance of PSO-3P on high-dimensional problems. The test functions used in this framework are Ackley, Griewank, Rastrigin, Rosenbrock, Sphere, and Zakharov. The experiments include dimensions 10, 30, 50, 100, 120, and 1000. Results reported by other authors, as far as we know, only include up to 100 dimensions.
Tables 5–10 highlight the results of PSO-3P and different algorithms for these functions. The name of the algorithm is included in the first column; the second column shows the dimension of the problem (Dim.). The number of generations (G), or iterations (I), is incorporated in column three. The fourth column includes the number of particles; the number of evaluations of the objective (EOF), or fitness, function is presented in column five. The sixth column shows the best value found by the algorithm; the average and median value are presented in columns seven and eight, respectively; finally, the last column includes the running time in seconds. We must remark that not all this information was reported in the reviewed literature; thus some cells remain empty.
For Ackley’s function (see Table 5), PSO-3P always found solutions very close to the global optimum; moreover, the median was very close to the global optimum for all dimensions (10, 30, 50, 100, 120, and 1000). Additionally, for the Ackley function with 30 dimensions, PSO-3P required less than 50% of the number of evaluations of the objective function compared to the other algorithms.
For the Griewank, Rastrigin, Rosenbrock, Sphere, and Zakharov functions (see Tables 6–10), PSO-3P always found the global optimum in at least one run, and the median was always the global optimum for all dimensions (10, 30, 50, 100, 120, and 1000).
Special attention should be given to the number of iterations and evaluations of the objective (fitness) function. On average, PSO-3P required 609 evaluations of the objective function and less than 0.74 seconds to achieve the global optimum for dimensions 10, 30, 50, 100, 120, and 1000.
The normalized solutions for 17 instances are shown in Table 11; in this table values range between 0 and 1. If an algorithm found a solution close to the best reported value, it is associated with 0; on the other hand, if the best solution found by an algorithm is close to the worst reported value, it is associated with 1. Based on Table 11, we observe that PSO-3P found the best result in 94% of the instances and the second-best result in the remaining 6%. Thus, PSO-3P is a competitive alternative for solving the Ackley, Griewank, Rastrigin, Rosenbrock, Sphere, and Zakharov problems.
|Note: ELIPSO is represented by 1; LDWIPSO by 2; FIWPSO by 3; CIWPSO by 4; RIWPSO by 5; TVACPSO by 6; COPSO by 7; IRPEO by 8; pPSA by 9; PSO by 10; SFLA by 11; MSFLA by 12; MSFLA-EO by 13; PSO-EO by 14; LX-PM by 15; LX-MPTM by 16; LX-NUM by 17; HX-PM by 18; HX-MPTM by 19; HX-NUM by 20; ADM by 21; RM by 22; PLM by 23; NUM by 24; MNUM by 25; PM by 26; CSA by 27; ACSA by 28; ELPSO by 29; CPSO by 30; HS by 31; GA by 32; FSO by 33; GSA by 34; BSOA by 35; ABC by 36.|
Results of the Wilcoxon rank sum test are shown in Figure 3; in this figure, a dark square means that the algorithms are statistically similar, while a light square means the opposite. The results show that PSO-3P is statistically similar to PSO-EO, LX-PM, HX-PM, and ACSA. On the other hand, our method is different from the remaining methods. However, PSO-3P required fewer evaluations of the objective function than its counterparts to obtain good results. The p values involved in the Wilcoxon rank sum test are shown in Figure 4.
5. Conclusions and Further Research
In this work, a novel PSO-based algorithm with three phases is presented: stabilization, breadth-first search, and depth-first search. The resulting PSO-3P algorithm was tested over a set of single-objective unconstrained optimization benchmark instances. The empirical evidence shows that PSO-3P is very efficient.
Moreover, for all benchmark functions considered in this work, PSO-3P converged faster and required fewer iterations than many specialized algorithms, regardless of the dimension of the function. PSO-3P used on average 609 evaluations of the objective function to converge to the global optimum, which represents on average 66% of the number of evaluations reported for these problems by other algorithms, with an average running time of 0.74 seconds for problems of dimension 10 to 1000. In fact, some solutions are presented for high-dimensional problems (120 and 1000 variables) that have not been reported before. Also, the numerical results of the Wilcoxon test show that the results obtained by PSO-3P are similar to those reported by IRPEO, LX-MPTM, MNUM, pPSA, LX-NUM, PSO, HX-PM, CSA, SFLA, HX-MPTM, ACSA, MSFLA, HX-NUM, MSFLA-EO, ADM, PSO-EO, RM, LX-PM, PLM, and NUM. However, PSO-3P needs fewer than 1000 evaluations of the objective function to generate good results; in contrast, the other methods need more than 2500 evaluations.
It was observed that, in all the cases studied, PSO-3P can reach the global optimum, or a solution very close to it, within a small number of iterations, and that it is able to jump deep valleys, for unconstrained optimization with 2 to 1,000 variables.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Sergio Gerardo de-los-Cobos-Silva would like to thank D. Sto. and P. V. Gpe. for their inspiration, and his family Ma, Ser, Mon, Chema, and his Flaquita for all their support.
- F. Glover, “Tabu Search, part I,” ORSA Journal on Computing, vol. 1, no. 3, pp. 190–206, 1989.
- S. Kirkpatrick, C. D. Gelatt Jr., and M. P. Vecchi, “Optimization by simulated annealing,” Science, vol. 220, no. 4598, pp. 671–680, 1983.
- J. H. Holland, Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, Mich, USA, 1975.
- F. Glover, “A template for scatter search and path relinking,” in Artificial Evolution, vol. 1363 of Lecture Notes in Computer Science, pp. 1–51, Springer, Berlin, Germany, 1998.
- J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, vol. 4, pp. 1942–1948, December 1995.
- D. Sedighizadeh and E. Masehian, “Particle swarm optimization methods, taxonomy and applications,” International Journal of Computer Theory and Engineering, vol. 1, no. 5, pp. 486–502, 2009.
- I. Muhammad, H. Rathiah, and A. K. Noor Elaiza, “An overview of particle swarm optimization variants,” Procedia Engineering, vol. 53, pp. 491–496, 2013.
- Y. Zhao, W. Zu, and H. Zeng, “A modified particle swarm optimization via particle visual modeling analysis,” Computers & Mathematics with Applications, vol. 57, no. 11-12, pp. 2022–2029, 2009.
- C.-C. Chen, “Two-layer particle swarm optimization for unconstrained optimization problems,” Applied Soft Computing Journal, vol. 11, no. 1, pp. 295–304, 2011.
- W. F. Abd-El-Wahed, A. A. Mousa, and M. A. El-Shorbagy, “Integrating particle swarm optimization with genetic algorithms for solving nonlinear optimization problems,” Journal of Computational and Applied Mathematics, vol. 235, no. 5, pp. 1446–1453, 2011.
- H. Kahramanh and N. Allahverdi, “Particle swarm optimization with flexible swarm for unconstrained optimization,” International Journal of Intelligent Systems and Applications in Engineering, vol. 1, no. 1, pp. 8–13, 2013.
- J.-Y. Wu, “Solving unconstrained global optimization problems via hybrid swarm intelligence approaches,” Mathematical Problems in Engineering, vol. 2013, Article ID 256180, 15 pages, 2013.
- G.-Q. Zeng, K.-D. Lu, J. Chen et al., “An improved real-coded population-based extremal optimization method for continuous unconstrained optimization problems,” Mathematical Problems in Engineering, vol. 2014, Article ID 420652, 9 pages, 2014.
- S. G. De-los-Cobos-Silva, “SC—system of convergence: theory and foundations,” Revista de Matemática: Teoría y Aplicaciones, vol. 22, no. 2, pp. 341–367, 2015.
- J. Kennedy, R. C. Eberhart, and Y. Shi, Swarm Intelligence, Morgan Kaufmann Publishers, San Diego, Calif, USA, 2001.
- A. R. Hedar, “Global Optimization Test Problems,” 2014, http://www-optima.amp.i.kyoto-u.ac.jp/member/student/hedar/Hedar_files/TestGO.htm.
- S. Surjanovic and D. Bingham, “Optimization Test Problems,” 2014, http://www.sfu.ca/~ssurjano/optimization.html.
- A. Gavana, Global Optimization Benchmarks and AMPGO, 2014, http://infinity77.net/global_optimization/test_functions.html.
- A. R. Jordehi, “Enhanced leader PSO (ELPSO): a new PSO variant for solving global optimisation problems,” Applied Soft Computing Journal, vol. 26, pp. 401–417, 2015.
- Z. Xinchao, “A perturbed particle swarm algorithm for numerical optimization,” Applied Soft Computing Journal, vol. 10, no. 1, pp. 119–124, 2010.
- X. Li, J. Luo, M.-R. Chen, and N. Wang, “An improved shuffled frog-leaping algorithm with extremal optimisation for continuous optimisation,” Information Sciences, vol. 192, pp. 143–151, 2012.
- M.-R. Chen, X. Li, X. Zhang, and Y.-Z. Lu, “A novel particle swarm optimizer hybridized with extremal optimization,” Applied Soft Computing Journal, vol. 10, no. 2, pp. 367–373, 2010.
- K. Deep and M. Thakur, “A new mutation operator for real coded genetic algorithms,” Applied Mathematics and Computation, vol. 193, no. 1, pp. 211–230, 2007.
- P.-H. Tang and M.-H. Tseng, “Adaptive directed mutation for real-coded genetic algorithms,” Applied Soft Computing Journal, vol. 13, no. 1, pp. 600–614, 2013.
- P. Ong, “Adaptive cuckoo search algorithm for unconstrained optimization,” The Scientific World Journal, vol. 2014, Article ID 943403, 8 pages, 2014.
- M. Birattari, Tuning Metaheuristics: A Machine Learning Perspective, Springer, 2009.
Copyright © 2015 Sergio Gerardo de-los-Cobos-Silva et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.