Abstract

This paper presents an original and efficient PSO algorithm, which is divided into three phases: (1) stabilization, (2) breadth-first search, and (3) depth-first search. The proposed algorithm, called PSO-3P, was tested on 47 benchmark continuous unconstrained optimization problems, over a total of 82 instances. The numerical results show that the proposed algorithm is able to reach the global optimum. This work mainly focuses on unconstrained optimization problems with 2 to 1,000 variables.

1. Introduction

In general, an optimization problem can be defined as
$$\min_{x \in \Omega} f(x), \tag{1}$$
where $f : \mathbb{R}^n \to \mathbb{R}$ is the objective function and $\Omega \subseteq \mathbb{R}^n$ is the space of feasible solutions; in the unconstrained case, $\Omega = \mathbb{R}^n$.

In recent decades, several heuristic optimization methods have been developed. These techniques are able to find solutions close to the optimum in cases where exact or analytic methods cannot produce optimal solutions within a reasonable computation time. This is especially true when the global optimum is surrounded by many local optima, a situation sometimes described as deep valleys or black holes.

Different heuristic solution methods have been developed, among them Tabu Search (TS) [1], Simulated Annealing (SA) [2], genetic algorithms (GA) [3], Scatter Search (SS) [4], and particle swarm optimization (PSO) [5].

For PSO in particular, many different versions exist; a taxonomy can be found in [6], and a review of its variants in [7]. With respect to unconstrained optimization, [8] proposed a modified algorithm to ensure the rational flight of every particle’s dimensional component; two parameters of the fitness function evaluation, particle-distribution-degree and particle-dimension-distance, were introduced in order to avoid premature convergence. Reference [9] proposed a two-layer PSO (TLPSO) to increase the diversity of the particles so that the drawback of getting trapped in a local optimum is avoided. Reference [10] introduced a hybrid approach combining particle swarm optimization with a genetic algorithm. Reference [11] presented particle swarm optimization with a flexible swarm; the algorithm was tested over 14 benchmark functions with 300,000 iterations per function. Reference [12] presented two hybrids, a real-coded genetic algorithm-based PSO (RGA-PSO) method and an artificial immune algorithm-based PSO (AIA-PSO) method; these algorithms were tested over 14 benchmark functions, using between 20,000 and 60,000 objective function evaluations of the internal PSO approach depending on the number of decision variables. Reference [13] proposed IRPEO, an improved real-coded population-based extremal optimization (EO) method, which was compared with PSO and PSO-EO; the algorithms were tested over 12 benchmark functions with 10,000–50,000 iterations depending on the problem dimension.

In this paper, we present a PSO-based algorithm, called PSO-3P [14], that uses three phases to guide the search through the solution space. In order to test its performance, PSO-3P was applied to 47 benchmark continuous unconstrained optimization problems, over a total of 82 instances. The computational experiments show that PSO-3P is able to escape from suboptimal entrapments.

In particular, the proposed PSO-3P algorithm was able to reach the global optimum for the Griewank function with 120,000 variables in 40 seconds on average, using only 3 particles and 90 iterations (see Figures 1(a) and 1(b)). This instance was solved using Matlab and run on a notebook with an Intel Atom N280 processor at 1.66 GHz.

The remainder of this paper is organized as follows. The background of PSO is presented in the following section. The general guidelines of PSO-3P are described in Section 3. Numerical examples are provided in Section 4. Finally, Section 5 includes conclusions and future research.

2. PSO

Particle swarm optimization is a metaheuristic based on swarm intelligence and has its roots in artificial life, social psychology, engineering, and computer science. PSO differs from evolutionary computation (cf. [15]) in that the population members or agents, also called particles, are “flying” through the problem hyperspace.

PSO is an adaptive method that uses agents or particles moving through the search space using the principles of evaluation, comparison, and imitation [15].

PSO is based on the use of a set of particles or agents that correspond to states of an optimization problem, where each particle moves across the solution space in search of an optimal position or at least a good solution. In PSO, agents communicate with each other, and the agent with the best position (measured according to an objective function) influences the others by attracting them towards itself.

The population is initialized by assigning a random initial position and velocity to each particle. At each iteration, the velocity of each particle is randomly accelerated towards its own best position (where the value of the fitness or objective function improves) and towards the best positions of its neighbors.

To solve a problem, PSO uses a dynamic management of particles; this approach allows breaking cycles and diversifying the search. In this work, an $n$-particle swarm at time $t$ is represented as
$$X(t) = \{x_1(t), x_2(t), \ldots, x_n(t)\}, \quad x_i(t) \in \Omega, \tag{2}$$
with $i = 1, 2, \ldots, n$; then a movement of the swarm is defined according to
$$x_i(t+1) = x_i(t) + v_i(t+1), \tag{3}$$
where the velocity is given by
$$v_i(t+1) = \omega\, v_i(t) + c_1 r_1 \left(p_i(t) - x_i(t)\right) + c_2 r_2 \left(g(t) - x_i(t)\right), \tag{4}$$
where $\Omega$ is the space of feasible solutions, $v_i(t+1)$ is the velocity at time $t+1$ of the $i$th particle, $v_i(t)$ is the velocity at time $t$ of the $i$th particle, $x_i(t)$ is the $i$th particle at time $t$, $g(t)$ is the particle with the best value found so far (i.e., before time $t+1$), $p_i(t)$ is the best position found so far by the $i$th particle (before time $t+1$), $r_1$ and $r_2$ are random numbers uniformly distributed over the interval $[0,1]$, $c_1$ and $c_2$ are acceleration coefficients, and $\omega$ is the inertia weight factor.

The PSO algorithm is described in Algorithm 1.

(1) Begin.
(2) while  Termination criterion is not satisfied  do
(3)  Create a population of particles distributed in the feasible space.
(4)  Evaluate each position of the particles according to the objective function (fitness function).
(5)  If the current position of a particle is better than the previous one, update it.
(6)  Determine the best particle (according to the best previous positions).
(7)  Update the particle velocities according to (4).
(8)  Move the particles to new positions according to (3).
(9) end
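To make Algorithm 1 concrete, the following is a minimal sketch of the basic PSO loop in Python using NumPy. It is not the authors' Matlab implementation; the inertia weight, acceleration coefficients, swarm size, and iteration limit shown here are illustrative values only.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iter=1500,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal PSO sketch following Algorithm 1 and equations (3)-(4).

    objective: callable mapping an array of shape (dim,) to a scalar.
    bounds: array of shape (dim, 2) with lower/upper limits per variable.
    All parameter values are illustrative, not those used in the paper.
    """
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)

    # Step (3): create a population of particles distributed in the feasible space.
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros((n_particles, dim))

    # Step (4): evaluate each position according to the objective (fitness) function.
    f = np.apply_along_axis(objective, 1, x)
    p_best, f_best = x.copy(), f.copy()              # best position found by each particle
    g_idx = np.argmin(f_best)
    g_best, g_val = p_best[g_idx].copy(), f_best[g_idx]

    for _ in range(n_iter):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Step (7): update the particle velocities according to (4).
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        # Step (8): move the particles to new positions according to (3).
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(objective, 1, x)
        # Steps (5)-(6): update personal bests and the global best.
        improved = f < f_best
        p_best[improved], f_best[improved] = x[improved], f[improved]
        g_idx = np.argmin(f_best)
        if f_best[g_idx] < g_val:
            g_best, g_val = p_best[g_idx].copy(), f_best[g_idx]
    return g_best, g_val
```

For example, minimizing the 10-dimensional Sphere function could be done with `pso(lambda z: np.sum(z**2), np.tile([-5.12, 5.12], (10, 1)))`.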

3. PSO-3P

In this section, the main characteristics of the proposed algorithm, called PSO-3P, are described. PSO-3P is based on the traditional PSO heuristic; however, the positions of the particles can be modified using different strategies, which are applied sequentially in three phases of the search process.

In phase 1, called stabilization, the PSO-3P algorithm randomly generates a set of particles in the solution space, according to the description presented in Section 2. Then, during a fixed number of iterations, the positions of the particles are modified using (3) and (4). Thus, at the end of this phase the particles are concentrated, or stabilized, in a promising region.

When phase 1 is completed, a breadth-first search strategy, called phase 2, is incorporated. In this phase, if the global best solution is not improved after a given number of consecutive iterations, a random particle is created and, with probability greater than 0.5, takes the place of a randomly selected particle in the swarm. This process of creation and replacement is repeated a prescribed number of times; however, the particle with the best known position is preserved. Thus, the population is dispersed in the solution space, but it can still be attracted to the best region visited so far. This diversification strategy is applied during a fixed number of iterations.

Finally, phase 3 is initialized. During a fixed number of iterations, the following depth-first search strategy is applied: if the global best solution is not improved after a given number of consecutive iterations, particles are randomly created in a neighborhood of the best known solution and take the place of an equal number of randomly selected particles in the swarm. Thus, phase 3 includes an intensification process in a promising region.
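Under the assumption that the swarm is stored as a NumPy array of positions, the two perturbation moves described above could be sketched as follows. This is a sketch only; the number of replacements, the probability threshold (any value greater than 0.5), and the neighborhood radius stand in for the paper's parameters, whose symbols and values are not reproduced in this text.

```python
import numpy as np

def breadth_first_move(x, best_idx, lo, hi, n_replace, rng, p_replace=0.6):
    """Phase 2 sketch: disperse the swarm while preserving the best particle.

    Repeats n_replace times: create a random particle and, with probability
    p_replace (> 0.5), let it take the place of a randomly selected particle,
    never touching the particle with the best known position.
    """
    n_particles, dim = x.shape
    for _ in range(n_replace):
        if rng.random() < p_replace:
            j = rng.integers(n_particles)
            if j != best_idx:            # preserve the best known position
                x[j] = rng.uniform(lo, hi, size=dim)
    return x

def depth_first_move(x, g_best, lo, hi, n_replace, radius, rng):
    """Phase 3 sketch: intensify around the best known solution g_best.

    Creates n_replace random particles inside a box neighborhood of g_best
    and substitutes them for randomly selected particles in the swarm.
    """
    n_particles, dim = x.shape
    for _ in range(n_replace):
        j = rng.integers(n_particles)
        x[j] = np.clip(g_best + rng.uniform(-radius, radius, size=dim), lo, hi)
    return x
```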

The main steps of the PSO-3P algorithm are described in Algorithm 2.

(1) Begin.
(2) while  Termination criterion is not satisfied  do
(3)  Set the control parameters of the three phases.
(4)  Create a population of nPop random particles.
(5)  Initialize the iteration counter and the counter cont of iterations without improvement. Evaluate each position of the particles according to the fitness function.
(6)  If the current position of a particle is better (with respect to the fitness function) than its previous one, update it.
(7)  Determine the best particle (according to the best previous positions and the optimization criterion).
     If a better particle cannot be found, increase the counter cont.
(8)  Update the particle velocities according to (4).
(9)  (Phase 1: Stabilization) if the current iteration belongs to the stabilization phase then
(10)  go to Step (34).
(11)   end
(12)  (Phase 2: Breadth-first search) if the current iteration belongs to the breadth-first search phase then
(13)  if cont = c then
(14)   Set the replacement counter to zero. while the prescribed number of replacements has not been made do
(15)    Create a random particle and, with probability greater than 0.5, substitute it for a randomly selected particle in the swarm.
(16)    Increase the replacement counter.
(17)   end
(18)   Reset the counter cont.
(19)  end
(20)  go to Step (34).
(21)  end
(22) end
(23) (Phase 3: Depth-first search) if the current iteration belongs to the depth-first search phase then
(24)  if the global best solution has not been improved after the prescribed number of consecutive iterations then
(25)   Set the replacement counter to zero. while the prescribed number of replacements has not been made do
(26)    Create a random particle in a variable neighborhood of the best known solution and substitute it for a randomly selected particle in the swarm.
(27)    Increase the replacement counter.
(28)   end
(29)  end
(30)  Reset the counter cont.
(31)  go to Step (34).
(32) end
(33) Select the best particles according to the optimization criterion.
(34) Increase the iteration counter. Go to Step (3) until the termination criterion is satisfied.
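The way the three phases could be sequenced around the basic PSO update is sketched below. This is a control-flow skeleton only: the iteration budgets n1 and n2, the no-improvement threshold c, and the callables step, best_value, breadth_move, and depth_move are hypothetical placeholders for the quantities described in Algorithm 2 and Section 3.

```python
def pso_3p_skeleton(step, best_value, n_iter, n1, n2, c,
                    breadth_move, depth_move):
    """Control-flow sketch of the three phases (placeholder names).

    step():         one PSO update of the whole swarm, equations (3)-(4).
    best_value():   current global best objective value (to be minimized).
    breadth_move(): phase 2 diversification move.
    depth_move():   phase 3 intensification move.
    """
    cont = 0                          # consecutive iterations without improvement
    previous_best = best_value()
    for t in range(n_iter):
        step()
        if best_value() < previous_best:
            previous_best, cont = best_value(), 0
        else:
            cont += 1
        if t < n1:                    # phase 1: stabilization, plain PSO only
            continue
        if cont == c:
            if t < n1 + n2:           # phase 2: breadth-first diversification
                breadth_move()
            else:                     # phase 3: depth-first intensification
                depth_move()
            cont = 0
```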

4. Computational Results

In order to evaluate the performance of the proposed PSO-3P algorithm, 47 unconstrained optimization problems taken from [16–18] were used. An in-depth study of multiobjective and constrained optimization will be presented in future work.

These problems include functions that are scalable to arbitrary dimensions, such as Ackley, Rastrigin, Rosenbrock, Sphere, and Zakharov. These problems have been widely used as benchmarks by many researchers studying different methods; see [13, 19–25]. Functions that are especially difficult to solve were also included; for example, according to [18], the overall success rate of different global optimizers on DeVilliersGlasser02, Damavandi, CrossLegTable, XinSheYang03, Griewank, and XinSheYang02 was 0%, 0.25%, 0.83%, 1.08%, 6.08%, and 31.33%, respectively; see Figure 2.
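For reference, standard textbook definitions of a few of the scalable functions mentioned above are sketched below in Python; the exact domains, shifts, and variants used in [16–18] may differ slightly.

```python
import numpy as np

def sphere(x):
    """Sphere: global minimum 0 at the origin."""
    return np.sum(x**2)

def rastrigin(x):
    """Rastrigin: highly multimodal, global minimum 0 at the origin."""
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def griewank(x):
    """Griewank: many regularly spaced local minima, global minimum 0 at the origin."""
    i = np.arange(1, len(x) + 1)
    return 1 + np.sum(x**2) / 4000 - np.prod(np.cos(x / np.sqrt(i)))

def ackley(x):
    """Ackley: nearly flat outer region with a central funnel, minimum 0 at the origin."""
    n = len(x)
    return (-20 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / n))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20 + np.e)
```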

In order to evaluate the performance of the algorithm, we used the efficiency measure in (5), which compares the known optimum of each function with the best solution found by the algorithm after one run. When the efficiency was greater than 0.999999, it was rounded to 1.

The tuning of the operating parameters was realized using a brute-force approach, as described in [26]. The general PSO parameters in (4) were fixed first, and then the parameters controlling phases 2 and 3 were set.

The algorithm was implemented in Matlab R2008a and was run on a computer with an Intel Core i5-3210M processor at 2.5 GHz running Windows 8.

4.1. Unconstrained Optimization

In this section, we present the results obtained for problems with 120 or fewer variables. The data reported in each of the following tables derive from 24 independent executions of the algorithm for each function. The algorithm stops when a maximum of 1,500 iterations has been reached or the optimum has been found. In order to obtain a good mean efficiency, the number of particles used for each function was tuned using a brute-force approach; thus it ranges between 3 and 240 particles across problems.

Tables 1 and 2 include, for each function, the number of particles (Part.), the average efficiency (Mean Eff.) using (5), the average time per run in seconds (Time (sec)), and the average number of iterations per run (Mean Iter.).

The performance of PSO-3P is remarkable for the DeVilliersGlasser02, Damavandi, and CrossLegTable functions, for which the success rates were superior to those reported in [18].

4.2. Experiments Using Few Particles

As an additional study of the behavior of the proposed PSO-3P, results obtained with small populations are investigated in this section. Indeed, for some instances, the global optimum was found using only 3 or 6 particles and, on average, fewer than 3,000 evaluations of the objective function.

In this case, the algorithm was executed 24 times, and each run stops when a maximum of 3,000 iterations has been reached or the optimum has been found. Tables 3 and 4 include the functions that were solved to optimality in at least one of the 24 runs. These tables show the name of the function in column 2. The number of particles used for each case is presented in column 3. Column 4 includes the average time per run in seconds. The percentage of runs in which the global optimum was obtained is included in column 5. The average number of iterations per run is displayed in column 6. Finally, the number of evaluations of the objective function, EOF, is presented in column 7.

Even in this case, where the algorithm was restricted to a few particles and iterations, the success rate was satisfactory for the DeVilliersGlasser02 and CrossLegTable functions. It is worth remembering that the numbers of particles and iterations reported in Table 1 are higher, since the data presented in the previous section were generated after tuning the parameters of the algorithm for each function in order to increase its efficiency.

4.3. High Dimensional Problems

Finally, results obtained for some scalable functions are presented in this section, in order to investigate the performance of PSO-3P on high-dimensional problems. The test functions used in this framework are Ackley, Griewank, Rastrigin, Rosenbrock, Sphere, and Zakharov. The experiments include dimensions 10, 30, 50, 100, 120, and 1000. Results reported by other authors, as far as we know, only include up to 100 dimensions.

Tables 5–10 show the results of PSO-3P and of different algorithms for these functions. The name of the algorithm is included in the first column; the second column shows the dimension of the problem (Dim.). The number of generations (G), or iterations (I), is given in column three. The fourth column includes the number of particles; the number of evaluations of the objective (EOF), or fitness, function is presented in column five. The sixth column shows the best value found by the algorithm; the average and median values are presented in columns seven and eight, respectively; finally, the last column includes the running time in seconds. We must remark that not all of this information was reported in the reviewed literature; thus some cells remain empty.

For Ackley’s function (see Table 5), PSO-3P always found solutions very close to the global optimum, and the median was also very close to the global optimum for all dimensions (10, 30, 50, 100, 120, and 1000). Additionally, for the Ackley function with 30 dimensions, PSO-3P required less than 50% of the number of objective function evaluations needed by the other algorithms.

For the Griewank, Rastrigin, Rosenbrock, Sphere, and Zakharov functions (see Tables 6–10), PSO-3P always found the global optimum in at least one run, and the median was always the global optimum for all dimensions (10, 30, 50, 100, 120, and 1000).

Special attention should be given to the number of iterations and of evaluations of the objective (fitness) function. On average, PSO-3P required 609 evaluations of the objective function and less than 0.74 seconds to achieve the global optimum for dimensions 10, 30, 50, 100, 120, and 1000.

The normalized solutions for 17 instances are shown in Table 11; in this table, values range between 0 and 1. If an algorithm found a solution close to the best reported value, it is associated with 0; on the other hand, if the best solution found by an algorithm is close to the worst reported value, it is associated with 1. Based on Table 11, we observe that PSO-3P found the best result in 94% of the instances and the second best result in the remaining 6%. Hence, PSO-3P is a competitive alternative for solving the Ackley, Griewank, Rastrigin, Rosenbrock, Sphere, and Zakharov problems.
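The description above is consistent with a standard min-max scaling of each instance's best values across the compared algorithms; a minimal sketch follows (the array `values`, holding one best value per algorithm for a given instance, is a hypothetical input).

```python
import numpy as np

def normalize(values):
    """Min-max scale a vector of best objective values to [0, 1].

    0 corresponds to the best (smallest) reported value for the instance,
    1 to the worst (largest) one.
    """
    values = np.asarray(values, dtype=float)
    lo, hi = values.min(), values.max()
    if hi == lo:                  # all algorithms tied on this instance
        return np.zeros_like(values)
    return (values - lo) / (hi - lo)
```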

Results of the Wilcoxon rank sum test are shown in Figure 3; in this case, a dark square means that the two algorithms are statistically similar, while a light square means the opposite. The results show that PSO-3P is statistically similar to PSO-EO [22], LX-PM [23], HX-PM [23], and ACSA [25]. On the other hand, our method differs from the remaining methods. However, PSO-3P required fewer evaluations of the objective function than its counterparts to obtain good results. The p values involved in the Wilcoxon rank sum test are shown in Figure 4.
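For reproducibility, such a pairwise comparison can be run, for example, with SciPy's rank-sum test; the arrays `a` and `b` below are hypothetical samples of best values obtained by two algorithms over the same number of runs, not data from the paper.

```python
import numpy as np
from scipy.stats import ranksums

def similar_at_5_percent(results_a, results_b):
    """Two-sided Wilcoxon rank sum test between two samples of results.

    Returns True when the null hypothesis of equal distributions cannot be
    rejected at the 5% level, i.e., the algorithms look statistically similar.
    """
    statistic, p_value = ranksums(results_a, results_b)
    return p_value > 0.05

# Hypothetical usage with dummy data (e.g., 24 runs per algorithm):
rng = np.random.default_rng(1)
a = rng.normal(0.0, 1e-6, size=24)
b = rng.normal(0.0, 1e-6, size=24)
print(similar_at_5_percent(a, b))
```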

5. Conclusions and Further Research

In this work, a novel PSO-based algorithm with three phases is presented: stabilization, breadth-first search, and depth-first search. The resulting PSO-3P algorithm was tested over a set of single-objective unconstrained optimization benchmark instances. The empirical evidence shows that PSO-3P is very efficient.

Tables 5–10 highlight the fact that PSO-3P gives good results and competitive solutions for some difficult, previously reported problems.

Moreover, for all benchmark functions considered in this work, PSO-3P converged faster and required fewer iterations than many specialized algorithms, regardless of the dimension of the function. PSO-3P used on average 609 evaluations of the objective function to converge to the global optimum, which represents on average 66% of the number of evaluations reported for these problems by other algorithms, with an average running time of 0.74 seconds for problems of dimension 10 to 1000. In fact, some solutions are presented for problems of high dimension (120 and 1000 variables) that have not been reported before. Also, the numerical results of the Wilcoxon test show that the results obtained by PSO-3P are similar to those reported by IRPEO, LX-MPTM, MNUM, pPSA, LX-NUM, PSO, HX-PM, CSA, SFLA, HX-MPTM, ACSA, MSFLA, HX-NUM, MSFLA-EO, ADM, PSO-EO, RM, LX-PM, PLM, and NUM. However, PSO-3P needs fewer than 1,000 evaluations of the objective function to generate good results, whereas the other methods need more than 2,500 evaluations.

It was observed that, in all the cases studied, PSO-3P can reach the global optimum, or a solution very close to it, within a small number of iterations, and that it is able to jump over deep valleys, for unconstrained optimization problems with 2 to 1,000 variables.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

Sergio Gerardo de-los-Cobos-Silva would like to thank D. Sto. and P. V. Gpe. for their inspiration, and his family Ma, Ser, Mon, Chema, and his Flaquita for all their support.