Research Article  Open Access
Sergio Gerardo de-los-Cobos-Silva, Miguel Ángel Gutiérrez-Andrade, Roman Anselmo Mora-Gutiérrez, Pedro Lara-Velázquez, Eric Alfredo Rincón-García, Antonin Ponsich, "An Efficient Algorithm for Unconstrained Optimization", Mathematical Problems in Engineering, vol. 2015, Article ID 178545, 17 pages, 2015. https://doi.org/10.1155/2015/178545
An Efficient Algorithm for Unconstrained Optimization
Abstract
This paper presents an original and efficient PSO algorithm, which is divided into three phases: (1) stabilization, (2) breadth-first search, and (3) depth-first search. The proposed algorithm, called PSO3P, was tested on 47 benchmark continuous unconstrained optimization problems, over a total of 82 instances. The numerical results show that the proposed algorithm is able to reach the global optimum. This work mainly focuses on unconstrained optimization problems with 2 to 1,000 variables.
1. Introduction
In general, an optimization problem can be defined as

min f(x) subject to x ∈ Ω,  (1)

where Ω ⊆ R^n is the set of feasible solutions and f: R^n → R is the objective function; in the unconstrained case, Ω = R^n.
In recent decades several heuristic optimization methods have been developed. These techniques are able to find solutions close to the optimum in cases where exact or analytic methods cannot produce optimal solutions within a reasonable computation time. This is especially true when a global optimum is surrounded by many local optima, a situation known as deep valleys or black holes.
Different heuristic solution methods have been developed, among others, Tabu Search (TS) [1], Simulated Annealing (SA) [2], genetic algorithms (GA) [3], Scatter Search (SS) [4], and particle swarm optimization (PSO) [5].
For PSO in particular there are many different versions; a treatise on its taxonomy can be found in [6]. For a review of its variants, see [7]. With respect to unconstrained optimization, [8] proposed a modified algorithm to ensure the rational flight of every particle's dimensional component; two parameters of fitness function evaluation, particle-distribution-degree and particle-dimension-distance, were introduced in order to avoid premature convergence. Reference [9] proposed a two-layer PSO (TLPSO) to increase the diversity of the particles, so that the drawback of getting trapped in a local optimum is avoided. Reference [10] introduced a hybrid approach combining particle swarm optimization with a genetic algorithm. Reference [11] presented particle swarm optimization with flexible swarm; the algorithm was tested over 14 benchmark functions with 300,000 iterations for each function. Reference [12] presented two hybrids, a real-coded genetic algorithm-based PSO (RGA-PSO) method and an artificial immune algorithm-based PSO (AIA-PSO) method. These algorithms were tested over 14 benchmark functions, with 20,000 objective function evaluations of the internal PSO approach for lower-dimensional problems and 60,000 objective function evaluations for higher-dimensional ones. Reference [13] proposed IRPEO, an improved real-coded population-based EO (extremal optimization) method, which was compared with PSO and PSOEO; the algorithms were tested over 12 benchmark functions with 10,000–50,000 iterations, depending on the problem dimension.
In this paper, we present a PSO-based algorithm, PSO3P [14], that uses three phases to guide the search through the solution space. In order to test the performance of PSO3P, it was applied to 47 benchmark continuous unconstrained optimization problems, over a total of 82 instances. After some computational experiments, we observed that PSO3P is able to escape from suboptimal entrapments.
In particular, the proposed PSO3P algorithm was able to reach the global optimum for the Griewank function with 120,000 variables in 40 seconds on average, using only 3 particles and 90 iterations (see Figures 1(a) and 1(b)). This instance was solved using Matlab and run on a notebook with an Intel Atom N280 processor at 1.66 GHz.
(a) Griewank function ([18] hardness 6.08)
(b) Convergence with PSO3P for Griewank function (dimension = 120,000)
The remainder of this paper is organized as follows: background on PSO is presented in the following section. The general guidelines of PSO3P are described in Section 3. Numerical examples are provided in Section 4. Finally, Section 5 includes conclusions and future research.
2. PSO
Particle swarm optimization is a metaheuristic based on swarm intelligence and has its roots in artificial life, social psychology, engineering, and computer science. PSO differs from evolutionary computation (cf. [15]) in that the population members or agents, also called particles, are "flying" through the problem hyperspace.
PSO is an adaptive method that uses agents or particles moving through the search space using the principles of evaluation, comparison, and imitation [15].
PSO is based on the use of a set of particles or agents that correspond to states of an optimization problem, where each particle moves across the solution space in search of an optimal position or at least a good solution. In PSO, agents communicate with each other, and the agent with the best position (measured according to an objective function) influences the others by attracting them towards itself.
The population is initialized by assigning a random position and speed to each element. At each iteration, the velocity of each particle is randomly accelerated towards its best position (where the value of the fitness or objective function improves), also considering the best positions of its neighbors.
To solve a problem, PSO uses a dynamic management of particles; this approach allows breaking cycles and diversifying the search. In this work, the particle swarm at time t is represented under the form

X(t) = {x_1(t), x_2(t), ..., x_m(t)},  x_i(t) ∈ Ω, i = 1, ..., m,  (2)

then a movement of the swarm is defined according to

x_i(t+1) = x_i(t) + v_i(t+1),  (3)

where the velocity is given in

v_i(t+1) = ω v_i(t) + c_1 r_1 (p_i(t) − x_i(t)) + c_2 r_2 (g(t) − x_i(t)),  (4)

where Ω is the space of feasible solutions, v_i(t) is the speed at time t of the ith particle, v_i(t+1) is the speed at time t+1 of the ith particle, x_i(t) is the ith particle at time t, g(t) is the particle with the best value found so far (i.e., up to time t), p_i(t) is the best position found so far by the ith particle (up to time t), r_1 and r_2 are random numbers uniformly distributed over the interval [0, 1], c_1 and c_2 are acceleration coefficients, and ω is the inertia weight factor.
The PSO algorithm is described in Algorithm 1.
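As a concrete illustration of the updates in (3) and (4), the following Python sketch implements a minimal PSO loop. The paper's experiments were run in Matlab; the parameter values here (ω = 0.7, c1 = c2 = 1.5) and all names are illustrative assumptions, not the authors' settings.

```python
import random

def pso(f, bounds, n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5):
    """Minimize f over a box; positions and velocities follow (3) and (4)."""
    dim = len(bounds)
    # Random initial positions inside the feasible box; zero initial velocities.
    x = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    p = [xi[:] for xi in x]                 # p_i: each particle's best position
    p_val = [f(xi) for xi in x]
    best_i = min(range(n_particles), key=lambda i: p_val[i])
    g, g_val = p[best_i][:], p_val[best_i]  # g: global best position so far
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update (4): inertia + cognitive + social terms.
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (p[i][d] - x[i][d])
                           + c2 * r2 * (g[d] - x[i][d]))
                # Position update (3), clamped to the feasible box.
                x[i][d] = min(max(x[i][d] + v[i][d], bounds[d][0]), bounds[d][1])
            val = f(x[i])
            if val < p_val[i]:              # update the personal best
                p[i], p_val[i] = x[i][:], val
                if val < g_val:             # update the global best
                    g, g_val = x[i][:], val
    return g, g_val
```

For example, minimizing the 2-dimensional Sphere function with this sketch drives the best objective value close to zero within a few hundred iterations.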

3. PSO3P
In this section, the main characteristics of the proposed algorithm, called PSO3P, are described. The PSO3P is based on a traditional PSO heuristic. However, the position of the particles can be modified using different strategies, which are sequentially applied in three phases of the searching process.
In phase 1, called stabilization, the PSO3P algorithm randomly generates a set of particles in the solution space, according to the description presented in Section 2. Then, during a given number of iterations, the position of the particles is modified using (3) and (4). Thus, at the end of this phase the particles are concentrated, or stabilized, in a promising region.
When phase 1 is completed, a breadth-first search strategy, called phase 2, is incorporated. In this phase, if the global best solution is not improved after a given number of consecutive iterations, a random particle is created and, with a probability greater than 0.5, takes the place of a particle randomly selected in the swarm. This process of creation and replacement is repeated a fixed number of times; however, the particle with the best known position is preserved. Thus, the population is dispersed in the solution space, but it can be attracted to the best region visited so far. This diversification strategy is applied during a given number of iterations.
Finally, phase 3 is initialized, and the following depth-first search strategy is applied. If the global best solution is not improved after a given number of consecutive iterations, new particles are randomly created in a neighborhood of the best known solution and take the place of an equal number of randomly selected particles in the swarm. Thus, phase 3 implements an intensification process in a promising region.
The main steps of the PSO3P algorithm are described in Algorithm 2.
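The perturbation operators of phases 2 and 3 can be sketched in Python as follows. This is a hypothetical reading of the description above: the replacement counts and neighborhood radius are illustrative (the paper does not specify the radius), and only the 0.5 probability threshold comes from the text.

```python
import random

def breadth_perturbation(swarm, best_idx, bounds, n_replace, p_threshold=0.5):
    """Phase 2 sketch: create random particles and, with probability > 0.5,
    let each take the place of a randomly chosen particle.
    The particle with the best known position (best_idx) is preserved."""
    for _ in range(n_replace):
        if random.random() > p_threshold:
            j = random.randrange(len(swarm))
            if j == best_idx:
                continue  # never replace the best known position
            swarm[j] = [random.uniform(lo, hi) for lo, hi in bounds]
    return swarm

def depth_perturbation(swarm, best, best_idx, bounds, n_replace, radius=0.1):
    """Phase 3 sketch: create particles in a neighborhood of the best known
    solution and let them replace randomly selected particles (best preserved).
    The neighborhood radius is an assumption, not a value from the paper."""
    for _ in range(n_replace):
        j = random.randrange(len(swarm))
        if j == best_idx:
            continue
        swarm[j] = [min(max(b + random.uniform(-radius, radius), lo), hi)
                    for b, (lo, hi) in zip(best, bounds)]
    return swarm
```

In a full PSO3P loop, phase 1 would run the plain updates (3) and (4), phase 2 would interleave `breadth_perturbation` on stagnation, and phase 3 would interleave `depth_perturbation`.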

4. Computational Results
In order to evaluate the performance of the proposed PSO3P algorithm, 47 unconstrained optimization problems taken from [16–18] were used. An in-depth study of multiobjective and constrained optimization will be presented in a future work.
These problems include functions that are scalable to arbitrary dimensions, such as Ackley, Rastrigin, Rosenbrock, Sphere, and Zakharov. These problems have been widely used as benchmarks by many researchers; see [13, 19–25]. Functions that are especially difficult to solve were also included; for example, according to [18], the overall success of different global optimizers on DeVilliersGlasser02, Damavandi, CrossLegTable, XinSheYang03, Griewank, and XinSheYang02 was 0%, 0.25%, 0.83%, 1.08%, 6.08%, and 31.33%, respectively; see Figure 2.
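Two of the scalable benchmarks mentioned above have standard closed forms; the Python definitions below follow the usual formulations found in benchmark collections such as [17, 18]. Both functions have a global minimum of 0 at the origin.

```python
import math

def griewank(x):
    # f(x) = 1 + sum(x_i^2)/4000 - prod(cos(x_i / sqrt(i)))
    s = sum(xi * xi for xi in x) / 4000.0
    p = 1.0
    for i, xi in enumerate(x, start=1):
        p *= math.cos(xi / math.sqrt(i))
    return 1.0 + s - p

def ackley(x):
    # f(x) = -20 exp(-0.2 sqrt(mean(x_i^2))) - exp(mean(cos(2 pi x_i))) + 20 + e
    n = len(x)
    s1 = sum(xi * xi for xi in x) / n
    s2 = sum(math.cos(2.0 * math.pi * xi) for xi in x) / n
    return -20.0 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20.0 + math.e
```

Both definitions scale to arbitrary dimension, which is what makes the high-dimensional experiments of Section 4.3 possible.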
(a) Damavandi function
(b) CrossLegTable function
(c) XinSheYang03 function
(d) XinSheYang02 function
In order to evaluate the performance of the algorithm, we used the following efficiency measure:

eff = 1 / (1 + |f(x*) − f(x_b)|),  (5)

where f(x*) is the optimum of function f and f(x_b) is the best solution found by the algorithm after one run. When the efficiency was greater than 0.999999, it was rounded to 1.
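A minimal sketch of such an efficiency measure follows, assuming the common form eff = 1/(1 + |f(x*) − f(x_b)|); this specific form is an assumption chosen because it maps a zero gap to 1 and large gaps towards 0, with the rounding rule from the text applied.

```python
def efficiency(f_opt, f_best, tol=0.999999):
    # Maps the gap between the optimum f_opt and the best value found f_best
    # into (0, 1]; the assumed form is eff = 1 / (1 + |f_opt - f_best|).
    eff = 1.0 / (1.0 + abs(f_opt - f_best))
    # Values above the tolerance are rounded to 1, as described in the text.
    return 1.0 if eff > tol else eff
```

With this form, an exact hit yields efficiency 1, and a unit gap yields 0.5.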
The tuning of the operating parameters was carried out using a brute-force approach, as described in [26]. The inertia weight and the acceleration coefficients were then fixed, and the phase 2 and phase 3 parameters (number of iterations per phase, improvement thresholds, and number of replacements) were set accordingly for all experiments.
The algorithm was implemented in Matlab R2008a and was run on a computer with an Intel Core i5-3210M processor at 2.5 GHz, running Windows 8.
4.1. Unconstrained Optimization
In this section, we present the results obtained for problems with 120 or fewer variables. Data reported in each of the following tables derive from 24 independent executions of the algorithm for each function. The algorithm stops when a maximum of 1,500 iterations is reached or the optimum has been found. In order to get a good mean efficiency, the number of particles used for each function was tuned using a brute-force approach; thus it ranges between 3 and 240 particles for the different problems.
Tables 1 and 2 include, for each function, the number of particles (Part.), the average efficiency (Mean Eff.) using (5), the average time per run in seconds (Time (sec)), and the average number of iterations per run (Mean Iter.).
 
The overall success of different global optimizers according to Gavana [18].
The performance of PSO3P is remarkable for the DeVilliersGlasser02, Damavandi, and CrossLegTable functions, where the success rates were superior to those reported in [18].
4.2. Experiments Using Few Particles
As an additional study of the behavior of the proposed PSO3P, results obtained with small populations are investigated in this section. Indeed, for some instances, the global optimum was found using only 3 or 6 particles and, on average, fewer than 3,000 evaluations of the objective function.
In this case, the algorithm was executed 24 times, stopping when a maximum of 3,000 iterations per run was reached or when the optimum was found. Tables 3 and 4 include the functions that were solved to optimality in at least one of the 24 runs. These tables show the name of the function in column 2. The number of particles used in each case is presented in column 3. Column 4 includes the average time per run in seconds. The percentage of runs in which the global optimum was obtained is included in column 5. The average number of iterations per run is displayed in column 6. Finally, the number of evaluations of the objective function, EOF, is presented in column 7.


Even in this case, when the algorithm was restricted to a few particles and iterations, the success rate was satisfactory for the DeVilliersGlasser02 and CrossLegTable functions. It is worth remembering that the number of particles, or iterations, reported in Table 1 is higher, since the data presented in the previous section were generated after tuning the parameters of the algorithm for each function, in order to increase its efficiency.
4.3. High-Dimensional Problems
Finally, results obtained for some scalable functions are presented in this section, in order to investigate the performance of PSO3P on high-dimensional problems. The test functions used in this framework are Ackley, Griewank, Rastrigin, Rosenbrock, Sphere, and Zakharov. The experiments include dimensions 10, 30, 50, 100, 120, and 1,000. As far as we know, results reported by other authors only include up to 100 dimensions.
Tables 5–10 highlight the results of PSO3P and of different algorithms for these functions. The name of the algorithm is included in the first column; the second column shows the dimension of the problem (Dim.). The number of generations (G), or iterations (I), is incorporated in column three. The fourth column includes the number of particles; the number of evaluations of the objective (EOF), or fitness, function is presented in column five. The sixth column shows the best value found by the algorithm; the average and median values are presented in columns seven and eight, respectively; finally, the last column includes the running time in seconds. We must remark that not all of this information was reported in the reviewed literature; thus, some cells remain empty.
 
For Ackley’s function (see Table 5), PSO3P always found solutions very close to the global optimum; moreover, the median was very close to the global optimum for all dimensions (10, 30, 50, 100, 120, and 1,000). Additionally, for the Ackley function with 30 dimensions, PSO3P required less than 50% of the number of evaluations of the objective function compared to the other algorithms.
For the Griewank, Rastrigin, Rosenbrock, Sphere, and Zakharov functions (see Tables 6–10), PSO3P always found the global optimum in at least one run, and the median was always the global optimum for all dimensions (10, 30, 50, 100, 120, and 1,000).
Special attention should be given to the number of iterations and evaluations of the objective (fitness) function. On average, PSO3P required 609 evaluations of the objective function and less than 0.74 seconds to achieve the global optimum for dimensions 10, 30, 50, 100, 120, and 1,000.
The normalized solutions for 17 instances are shown in Table 11; in this table values range between 0 and 1. If an algorithm found a solution close to the best reported value, it is associated with 0; on the other hand, if the best solution found by an algorithm is close to the worst reported value, it is associated with 1. Based on Table 11, we observe that PSO3P found the best results in 94% of the instances and obtained the second best result in the remaining 6%. Thus, PSO3P is a competitive alternative for solving the Ackley, Griewank, Rastrigin, Rosenbrock, Sphere, and Zakharov problems.
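The 0-to-1 scaling described above corresponds to standard min-max normalization over the values reported for each instance; a minimal sketch, assuming plain min-max scaling:

```python
def normalize(values):
    """Min-max normalization: the best (smallest) reported value maps to 0,
    the worst (largest) maps to 1, and intermediate values scale linearly."""
    lo, hi = min(values), max(values)
    if hi == lo:
        # All algorithms reported the same value: map everything to 0.
        return [0.0] * len(values)
    return [(v - lo) / (hi - lo) for v in values]
```

Applied per instance (per row of Table 11), this makes results of algorithms with very different objective scales directly comparable.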
 
Note: ELIPSO is represented by 1 [19]; LDWIPSO is represented by 2 [19]; FIWPSO is represented by 3 [19]; CIWPSO is represented by 4 [19]; RIWPSO is represented by 5 [19]; TVACPSO is represented by 6 [19]; COPSO is represented by 7 [19]; IRPEO is represented by 8 [13]; pPSA is represented by 9 [20]; PSO is represented by 10 [21]; SFLA is represented by 11 [21]; MSFLA is represented by 12 [21]; MSFLAEO is represented by 13 [21]; PSOEO is represented by 14 [22]; LXPM is represented by 15 [23]; LXMPTM is represented by 16 [23]; LXNUM is represented by 17 [23]; HXPM is represented by 18 [23]; HXMPTM is represented by 19 [23]; HXNUM is represented by 20 [23]; ADM is represented by 21 [24]; RM is represented by 22 [24]; PLM is represented by 23 [24]; NUM is represented by 24 [24]; MNUM is represented by 25 [24]; PM is represented by 26 [24]; CSA is represented by 27 [25]; ACSA is represented by 28 [25]; ELPSO is represented by 29 [19]; CPSO is represented by 30 [19]; HS is represented by 31 [19]; GA is represented by 32 [19]; FSO is represented by 33 [19]; GSA is represented by 34 [19]; BSOA is represented by 35 [19]; ABC is represented by 36 [19]. 
Results of the Wilcoxon rank sum test are shown in Figure 3; in this case, a dark square means that the algorithms are statistically similar, while a light square means the opposite. These results show that PSO3P is a metaheuristic similar to PSOEO [22], LXPM [23], HXPM [23], and ACSA [25]. On the other hand, our method is different from the remaining methods. However, PSO3P required fewer evaluations of the objective function than its counterparts to get good results. The p values involved in the Wilcoxon rank sum test are shown in Figure 4.
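For reference, the rank sum statistic behind such comparisons can be computed as in the following self-contained sketch, which uses the normal approximation without a tie correction; in practice a library routine such as scipy.stats.ranksums would be the usual choice.

```python
import math

def rank_sum_test(a, b):
    """Two-sided Wilcoxon rank sum test with a normal approximation
    (no tie correction): returns the z statistic and the p value."""
    combined = sorted((v, 0 if k < len(a) else 1)
                      for k, v in enumerate(list(a) + list(b)))
    # Assign average ranks to runs of tied values.
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[k] = avg
        i = j + 1
    # Rank sum of the first sample, compared against its null mean and variance.
    w = sum(r for r, (_, grp) in zip(ranks, combined) if grp == 0)
    n1, n2 = len(a), len(b)
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2.0))  # two-sided p value
    return z, p
```

A large p value (dark square in Figure 3) indicates that the two result samples are statistically similar; a small p value indicates the opposite.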
5. Conclusions and Further Research
In this work, a novel PSO-based algorithm with three phases, stabilization, breadth-first search, and depth-first search, was presented. The resulting PSO3P algorithm was tested over a set of single-objective unconstrained optimization benchmark instances. The empirical evidence shows that PSO3P is very efficient.
Tables 5–10 highlight the fact that PSO3P gives good results and competitive solutions for some difficult problems previously reported in the literature.
Moreover, for all benchmark functions considered in this work, PSO3P converged faster and required fewer iterations than many specialized algorithms, regardless of the dimension of the function. PSO3P used on average 609 evaluations of the objective function to converge to the global optimum, which represents on average 66% of the number of evaluations reported for these problems by other algorithms, and an average running time of 0.74 seconds for problems of dimension 10 to 1,000. As a matter of fact, some solutions are presented for problems of high dimension (120 and 1,000) that have not been reported before. Also, the numerical results of the Wilcoxon test show that the results obtained by PSO3P are similar to those reported by IRPEO, LXMPTM, MNUM, pPSA, LXNUM, PSO, HXPM, CSA, SFLA, HXMPTM, ACSA, MSFLA, HXNUM, MSFLAEO, ADM, PSOEO, RM, LXPM, PLM, and NUM. However, PSO3P needs fewer than 1,000 evaluations of the objective function to generate good results; in contrast, the other methods need more than 2,500 evaluations.
It was observed that, in all the cases studied, PSO3P can reach the global optimum, or a solution very close to it, with a small number of iterations, and shows the ability to jump over deep valleys, for unconstrained optimization problems with 2 to 1,000 variables.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
Sergio Gerardo de-los-Cobos-Silva would like to thank D. Sto. and P. V. Gpe. for their inspiration, and his family Ma, Ser, Mon, Chema, and his Flaquita for all their support.
References
1. F. Glover, "Tabu Search, part I," ORSA Journal on Computing, vol. 1, no. 3, pp. 190–206, 1989.
2. S. Kirkpatrick, C. D. Gelatt Jr., and M. P. Vecchi, "Optimization by simulated annealing," Science, vol. 220, no. 4598, pp. 671–680, 1983.
3. J. H. Holland, Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, Mich, USA, 1975.
4. F. Glover, "A template for scatter search and path relinking," in Artificial Evolution, vol. 1363 of Lecture Notes in Computer Science, pp. 1–51, Springer, Berlin, Germany, 1998.
5. J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, vol. 4, pp. 1942–1948, December 1995.
6. D. Sedighizadeh and E. Masehian, "Particle swarm optimization methods, taxonomy and applications," International Journal of Computer Theory and Engineering, vol. 1, no. 5, pp. 486–502, 2009.
7. I. Muhammad, H. Rathiah, and A. K. Noor Elaiza, "An overview of particle swarm optimization variants," Procedia Engineering, vol. 53, pp. 491–496, 2013.
8. Y. Zhao, W. Zu, and H. Zeng, "A modified particle swarm optimization via particle visual modeling analysis," Computers & Mathematics with Applications, vol. 57, no. 11-12, pp. 2022–2029, 2009.
9. C.-C. Chen, "Two-layer particle swarm optimization for unconstrained optimization problems," Applied Soft Computing Journal, vol. 11, no. 1, pp. 295–304, 2011.
10. W. F. Abd-El-Wahed, A. A. Mousa, and M. A. El-Shorbagy, "Integrating particle swarm optimization with genetic algorithms for solving nonlinear optimization problems," Journal of Computational and Applied Mathematics, vol. 235, no. 5, pp. 1446–1453, 2011.
11. H. Kahramanlı and N. Allahverdi, "Particle swarm optimization with flexible swarm for unconstrained optimization," International Journal of Intelligent Systems and Applications in Engineering, vol. 1, no. 1, pp. 8–13, 2013.
12. J.-Y. Wu, "Solving unconstrained global optimization problems via hybrid swarm intelligence approaches," Mathematical Problems in Engineering, vol. 2013, Article ID 256180, 15 pages, 2013.
13. G.-Q. Zeng, K.-D. Lu, J. Chen et al., "An improved real-coded population-based extremal optimization method for continuous unconstrained optimization problems," Mathematical Problems in Engineering, vol. 2014, Article ID 420652, 9 pages, 2014.
14. S. G. De-los-Cobos-Silva, "SC—system of convergence: theory and foundations," Revista de Matemática: Teoría y Aplicaciones, vol. 22, no. 2, pp. 341–367, 2015.
15. J. Kennedy, R. C. Eberhart, and Y. Shi, Swarm Intelligence, Morgan Kaufmann Publishers, San Diego, Calif, USA, 2001.
16. A. R. Hedar, "Global Optimization Test Problems," 2014, http://www-optima.amp.i.kyoto-u.ac.jp/member/student/hedar/Hedar_files/TestGO.htm.
17. S. Surjanovic and D. Bingham, "Optimization Test Problems," 2014, http://www.sfu.ca/~ssurjano/optimization.html.
18. A. Gavana, Global Optimization Benchmarks and AMPGO, 2014, http://infinity77.net/global_optimization/test_functions.html.
19. A. R. Jordehi, "Enhanced leader PSO (ELPSO): a new PSO variant for solving global optimisation problems," Applied Soft Computing Journal, vol. 26, pp. 401–417, 2015.
20. Z. Xinchao, "A perturbed particle swarm algorithm for numerical optimization," Applied Soft Computing Journal, vol. 10, no. 1, pp. 119–124, 2010.
21. X. Li, J. Luo, M.-R. Chen, and N. Wang, "An improved shuffled frog-leaping algorithm with extremal optimisation for continuous optimisation," Information Sciences, vol. 192, pp. 143–151, 2012.
22. M.-R. Chen, X. Li, X. Zhang, and Y.-Z. Lu, "A novel particle swarm optimizer hybridized with extremal optimization," Applied Soft Computing Journal, vol. 10, no. 2, pp. 367–373, 2010.
23. K. Deep and M. Thakur, "A new mutation operator for real coded genetic algorithms," Applied Mathematics and Computation, vol. 193, no. 1, pp. 211–230, 2007.
24. P.-H. Tang and M.-H. Tseng, "Adaptive directed mutation for real-coded genetic algorithms," Applied Soft Computing Journal, vol. 13, no. 1, pp. 600–614, 2013.
25. P. Ong, "Adaptive cuckoo search algorithm for unconstrained optimization," The Scientific World Journal, vol. 2014, Article ID 943403, 8 pages, 2014.
26. M. Birattari, Tuning Metaheuristics: A Machine Learning Perspective, Springer, 2009.
Copyright
Copyright © 2015 Sergio Gerardo delosCobosSilva et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.