Mathematical Problems in Engineering

Research Article | Open Access

Volume 2015 | Article ID 178545 | 17 pages | https://doi.org/10.1155/2015/178545

An Efficient Algorithm for Unconstrained Optimization

Academic Editor: Peide Liu
Received: 29 Apr 2015
Revised: 04 Aug 2015
Accepted: 06 Aug 2015
Published: 30 Sep 2015

Abstract

This paper presents an original and efficient PSO algorithm, which is divided into three phases: (1) stabilization, (2) breadth-first search, and (3) depth-first search. The proposed algorithm, called PSO-3P, was tested with 47 benchmark continuous unconstrained optimization problems, on a total of 82 instances. The numerical results show that the proposed algorithm is able to reach the global optimum. This work mainly focuses on unconstrained optimization problems from 2 to 1,000 variables.

1. Introduction

In general, an optimization problem can be defined as

min f(x), subject to x ∈ Ω ⊆ R^n,  (1)

where f is the objective function and Ω is the space of feasible solutions; in the unconstrained case, Ω = R^n.

In recent decades several heuristic optimization methods have been developed. These techniques are able to find solutions close to the optimum, where exact or analytic methods cannot produce optimal solutions within reasonable computation time. This is especially true when a global optimum is surrounded by many local optima, a situation known as deep valleys or black holes.

Different heuristic solution methods have been developed, among others, Tabu Search (TS) [1], Simulated Annealing (SA) [2], genetic algorithms (GA) [3], Scatter Search (SS) [4], and particle swarm optimization (PSO) [5].

In particular, there are many different versions of PSO; a treatise on its taxonomy can be found in [6]. For a review of its variants, see [7]. With respect to unconstrained optimization, [8] proposed a modified algorithm to ensure the rational flight of every particle's dimensional component; two fitness-evaluation parameters, particle-distribution-degree and particle-dimension-distance, were introduced in order to avoid premature convergence. Reference [9] proposed a two-layer PSO (TLPSO) to increase the diversity of the particles so that the drawback of getting trapped in a local optimum was avoided. Reference [10] introduced a hybrid approach combining particle swarm optimization with a genetic algorithm. Reference [11] presented particle swarm optimization with a flexible swarm; the algorithm was tested over 14 benchmark functions for 300,000 iterations per function. Reference [12] presented two hybrids, a real-coded genetic algorithm-based PSO (RGA-PSO) method and an artificial immune algorithm-based PSO (AIA-PSO) method; these algorithms were tested over 14 benchmark functions, with 20,000 or 60,000 objective function evaluations of the internal PSO approach, depending on the number of decision variables. Reference [13] proposed IRPEO, an improved real-coded population-based EO (extremal optimization) method, which was compared with PSO and PSO-EO; the algorithms were tested over 12 benchmark functions with 10,000-50,000 iterations, depending on the problem dimension.

In this paper, we present a PSO-3P [14] based algorithm that uses three phases to guide the search through the solution space. In order to test the performance of PSO-3P, it was applied to 47 benchmark continuous unconstrained optimization problems, over a total of 82 instances. After some computational experiments, we observed that PSO-3P is able to escape from suboptimal entrapments.

In particular, the proposed PSO-3P algorithm was able to reach the global optimum for the Griewank function with 120,000 variables in 40 seconds on average, using only 3 particles and 90 iterations (see Figures 1(a) and 1(b)). This instance was solved using Matlab, running on a notebook with an Intel Atom N280 processor at 1.66 GHz.

The remainder of this paper is divided as follows: a background of PSO is presented in the following section. The general guidelines of PSO-3P are described in Section 3. Numerical examples are provided in Section 4. Finally, Section 5 includes conclusions and future research.

2. PSO

Particle swarm optimization is a metaheuristic based on swarm intelligence and has its roots in artificial life, social psychology, engineering, and computer science. PSO differs from evolutionary computation (cf. [15]) because the population members or agents, also called particles, are "flying" through the problem hyperspace.

PSO is an adaptive method that uses agents or particles moving through the search space using the principles of evaluation, comparison, and imitation [15].

PSO is based on the use of a set of particles or agents that correspond to states of an optimization problem, where each particle moves across the solution space in search of an optimal position or at least a good solution. In PSO, agents communicate with each other, and the agent with the best position (measured according to an objective function) influences the others by attracting them towards itself.

The population is started by assigning an initial random position and speed to each element. At each iteration, the velocity of each particle is randomly accelerated towards its best position (where the value of the fitness or objective function improves), also taking into account the best positions of its neighbors.

To solve a problem, PSO uses a dynamic management of particles; this approach allows breaking cycles and diversifying the search. In this work, an n-particle swarm at time t is represented as

X(t) = (x_1(t), x_2(t), ..., x_n(t)), x_i(t) ∈ Ω,  (2)

with i = 1, ..., n; then a movement of the swarm is defined according to

x_i(t+1) = x_i(t) + v_i(t+1),  (3)

where the velocity is given in

v_i(t+1) = ω v_i(t) + c_1 r_1 (p_i(t) − x_i(t)) + c_2 r_2 (g(t) − x_i(t)),  (4)

where Ω is the space of feasible solutions, v_i(t+1) is the speed at time t+1 of the ith particle, v_i(t) is the speed at time t of the ith particle, x_i(t) is the ith particle at time t, g(t) is the particle with the best value found so far (i.e., before time t+1), p_i(t) is the best position found so far by the ith particle (before time t+1), r_1 and r_2 are random numbers uniformly distributed over the interval [0, 1], c_1 and c_2 are acceleration coefficients, and ω is the inertia weight factor.

The PSO algorithm is described in Algorithm 1.

(1) Begin.
(2) while  Termination criterion is not satisfied  do
(3)  Create a population of particles distributed in the feasible space.
(4)  Evaluate each position of the particles according to the objective function (fitness function).
(5)  If the current position of a particle is better than the previous one, update it.
(6)  Determine the best particle (according to the best previous positions).
(7)  Update the particle velocities according to (4).
(8)  Move the particles to new positions according to (3).
(9) end
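As a concrete illustration of Algorithm 1 and the update rules (3) and (4), the following is a minimal Python sketch. It is not the authors' Matlab implementation; the parameter values (w, c1, c2) are common illustrative defaults, not the paper's tuned settings:

```python
import random

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimize f over the box `bounds` using the update rules (3)-(4)."""
    dim = len(bounds)
    # Step (3): random initial population; velocities start at zero.
    x = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    p = [xi[:] for xi in x]                  # personal best positions p_i(t)
    p_val = [f(xi) for xi in x]
    best_i = min(range(n_particles), key=lambda i: p_val[i])
    g, g_val = p[best_i][:], p_val[best_i]   # global best g(t)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update, equation (4).
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (p[i][d] - x[i][d])
                           + c2 * r2 * (g[d] - x[i][d]))
                # Position update, equation (3).
                x[i][d] += v[i][d]
            fx = f(x[i])
            if fx < p_val[i]:                # Steps (5)-(6): update bests
                p[i], p_val[i] = x[i][:], fx
                if fx < g_val:
                    g, g_val = x[i][:], fx
    return g, g_val
```

For example, minimizing the 2-dimensional Sphere function over [-5, 5]^2 drives the best value close to 0 within a few hundred iterations.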

3. PSO-3P

In this section, the main characteristics of the proposed algorithm, called PSO-3P, are described. PSO-3P is based on a traditional PSO heuristic. However, the position of the particles can be modified using different strategies, which are applied sequentially in three phases of the search process.

In phase 1, called stabilization, the PSO-3P algorithm randomly generates a set of particles in the solution space, as described in Section 2. Then, during a fixed number of iterations, the position of the particles is modified using (3) and (4). Thus, at the end of this phase the particles are concentrated, or stabilized, in a promising region.

When phase 1 is completed, a breadth-first search strategy, called phase 2, is incorporated. In this phase, if the global best solution is not improved after a given number of consecutive iterations, a random particle is created and, with probability greater than 0.5, takes the place of a particle randomly selected from the swarm. This process of creation and replacement is repeated a fixed number of times; however, the particle with the best known position is always preserved. Thus, the population is dispersed over the solution space, but it can still be attracted to the best region visited so far. This diversification strategy is applied for a fixed number of iterations.

Finally, phase 3 is initialized, and the following depth-first search strategy is applied for the remaining iterations. If the global best solution is not improved after a given number of consecutive iterations, particles are randomly created in a neighborhood of the best known solution and take the place of an equal number of randomly selected particles in the swarm. Thus, phase 3 performs an intensification process in a promising region.

The main steps of the PSO-3P algorithm are described in Algorithm 2.

(1) Begin.
(2) while  Termination criterion is not satisfied  do
(3)  Set variables , , , , and .
(4)  Create a population of nPop random particles.
(5)  Set and . Evaluate each position of the particles according to the fitness function.
(6)  If the current position of a particle is better (with respect to the fitness function) than the previous one, update it.
(7)  Determine the best particle (according to the best previous positions against the optimization criterion).
     If a better particle cannot be found, let cont = cont + 1.
(8)  Update the particle velocities according to (4).
(9)  (Phase 1: Stabilization) if    then
(10)  go to Step (34).
(11)   end
(12)  (Phase 2: Breadth-first search) if    then
(13)  if cont = c  then
(14)   Set . while    do
    Create a random particle and, with probability greater than 0.5, substitute it for a randomly selected particle in the swarm.
(16)    Set .
(17)   end
(18)   Set .
(19)  end
(20) go to Step (34).
(21)  end
(22) end
(23) (Phase 3: Depth-first search) if    then
(24)  if    then
(25)   Set . while    do
(26)    Create a random particle in a variable neighborhood of the best known solution and substitute it for a randomly selected particle in the swarm.
(27)    Set .
(28)   end
(29)  end
(30)  Set .
(31)  go to Step (34).
(32) end
(33) Select the best particles according to the optimization criterion.
(34) Set . Go to Step (3) until the termination criterion is satisfied.
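The particle-replacement strategies of phases 2 and 3 can be sketched as follows. This is a minimal Python illustration, not the authors' implementation; the function names (`diversify`, `intensify`), the box-shaped neighborhood, and the parameter names (`q`, `k`, `radius`) are assumptions:

```python
import random

def diversify(swarm, best_idx, q, lower, upper):
    """Phase 2 (breadth-first search): q times, create a random particle and,
    with probability greater than 0.5, let it replace a randomly chosen
    particle; the best known particle is always preserved."""
    for _ in range(q):
        candidate = [random.uniform(lo, hi) for lo, hi in zip(lower, upper)]
        if random.random() > 0.5:
            j = random.randrange(len(swarm))
            if j != best_idx:           # preserve the best known position
                swarm[j] = candidate
    return swarm

def intensify(swarm, best, radius, k):
    """Phase 3 (depth-first search): create k particles in a neighborhood of
    the best known solution and substitute them for randomly chosen
    particles, intensifying the search in a promising region."""
    for _ in range(k):
        neighbor = [b + random.uniform(-radius, radius) for b in best]
        swarm[random.randrange(len(swarm))] = neighbor
    return swarm
```

In both routines the swarm size stays constant; only the composition of the population changes, which is what lets PSO-3P alternate between dispersing the swarm and concentrating it around the incumbent.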

4. Computational Results

In order to evaluate the performance of the proposed PSO-3P algorithm, 47 unconstrained optimization problems taken from [16-18] were used. An in-depth study of multiobjective and constrained optimization will be presented in future work.

These problems include functions that are scalable to arbitrary dimensions, such as Ackley, Rastrigin, Rosenbrock, Sphere, and Zakharov. These problems have been widely used as benchmarks by many researchers studying different methods; see [13, 19-25]. Functions that are especially difficult to solve were also included; for example, according to [18], the overall success of different global optimizers on DeVilliersGlasser02, Damavandi, CrossLegTable, XinSheYang03, Griewank, and XinSheYang02 was 0%, 0.25%, 0.83%, 1.08%, 6.08%, and 31.33%, respectively; see Figure 2.
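For reference, three of the scalable benchmarks mentioned above have compact standard definitions, each with global minimum 0 at the origin; the following sketch uses the textbook forms (Ackley with a = 20, b = 0.2, c = 2π):

```python
import math

def ackley(x):
    """Ackley function; multimodal, f(0) = 0."""
    n = len(x)
    s1 = sum(v * v for v in x) / n
    s2 = sum(math.cos(2 * math.pi * v) for v in x) / n
    return -20 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20 + math.e

def rastrigin(x):
    """Rastrigin function; highly multimodal, f(0) = 0."""
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)

def griewank(x):
    """Griewank function; many regularly spaced local optima, f(0) = 0."""
    s = sum(v * v for v in x) / 4000.0
    p = math.prod(math.cos(v / math.sqrt(i + 1)) for i, v in enumerate(x))
    return 1 + s - p
```

These definitions scale to any number of variables, which is what makes them convenient for the high-dimensional experiments reported in Section 4.3.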

In order to evaluate the performance of the algorithm, we used the efficiency measure (5), where f* is the optimum of function f and f_best is the best solution found by the algorithm after one run. When the efficiency was greater than 0.999999, it was rounded to 1.
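The exact expression of (5) is not reproduced here; the sketch below is only one plausible reading, stated as an assumption (efficiency as relative closeness of the best found value f_best to the known optimum f*), together with the rounding rule mentioned above:

```python
def efficiency(f_opt, f_best, tol=1e-6):
    """One plausible form of the efficiency measure (5): relative closeness
    of f_best to the known optimum f_opt. The exact formula used in the
    paper is an assumption here, not a reproduction."""
    eff = 1.0 - abs(f_best - f_opt) / (1.0 + abs(f_opt))
    # Rounding rule from the text: efficiencies above 0.999999 count as 1.
    return 1.0 if eff > 1.0 - tol else eff
```

Under this reading, an exact hit yields efficiency 1 and larger deviations from the optimum yield proportionally smaller values.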

The tuning of the operating parameters was realized using a brute-force approach, as described in [26]. The inertia weight and acceleration coefficients in (4) were then fixed, and the parameters governing phase 2 and phase 3 were set accordingly for all experiments.
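Brute-force tuning of this kind amounts to evaluating every combination of parameter values on a grid and keeping the best one. A generic sketch follows; the function name `brute_force_tune`, the grid values, and the `evaluate` callback are illustrative, not the paper's actual procedure or settings:

```python
import itertools

def brute_force_tune(evaluate, grid):
    """Try every parameter combination in `grid` (a dict of name -> list of
    candidate values) and keep the configuration with the lowest score."""
    best_cfg, best_score = None, float("inf")
    for combo in itertools.product(*grid.values()):
        named = dict(zip(grid.keys(), combo))
        score = evaluate(**named)          # e.g., mean best value over runs
        if score < best_score:
            best_cfg, best_score = named, score
    return best_cfg, best_score
```

In practice `evaluate` would run the optimizer several times with the given configuration and return an average performance measure; here any callable with matching keyword arguments works.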

The algorithm was implemented in Matlab R2008a and run on a computer with an Intel Core i5-3210M processor at 2.5 GHz, running Windows 8.

4.1. Unconstrained Optimization

In this section, we present the results obtained for problems with 120 or fewer variables. Data reported in each of the following tables derive from 24 independent executions of the algorithm for each function. The algorithm stops when a maximum of 1500 iterations has been reached or the optimum has been found. In order to obtain a good mean efficiency, the number of particles used for each function was tuned using a brute-force approach; thus it ranges between 3 and 240 particles for different problems.

Tables 1 and 2 include, for each function, the number of particles (Part.), the average efficiency (Mean Eff.) using (5), the average time per run in seconds (Time (sec)), and the average number of iterations per run (Mean Iter.).


Function(dimension) | Part. | Mean Eff. | Time (sec) | Mean Iter.

1 Ackley(10) | 60 | 0.994679 | 6.99 | 424.63
2 Ackley(120) | 3 | 1.000000 | 1.46 | 322.08
3 Ackley(2) | 24 | 0.998171 | 0.70 | 58.58
4 Ackley(30) | 12 | 1.000000 | 1.39 | 185.25
5 Ackley(6) | 36 | 0.998272 | 2.37 | 227.00
6 Beale(2) | 60 | 1.000000 | 0.58 | 22.88
7 Bohachevsky 1(2) | 240 | 1.000000 | 4.86 | 21.88
8 Bohachevsky 2(2) | 240 | 1.000000 | 4.71 | 20.42
9 Booth(2) | 240 | 1.000000 | 5.01 | 18.71
10 Branin(2) | 24 | 0.999960 | 0.50 | 58.46
11 Bukin6(2) | 12 | 0.875325 | 2.96 | 499.00
12 Colville(4) | 3 | 0.791743 | 2.56 | 749.00
13 CrossLegTable(2) (0.83) | 60 | 0.833426 | 8.07 | 198.42
14 Cross-in-Tray(2) | 3 | 0.999908 | 1.61 | 422.38
15 Damavandi(2) (0.25) | 60 | 0.989719 | 12.42 | 490.25
16 DevillierGlasser02(5) (0.00) | 12 | 1.000000 | 2.60 | 499.00
17 Drop Wave(2) | 3 | 1.000000 | 0.78 | 223.54
18 Easom(2) | 24 | 1.000000 | 0.27 | 17.54
19 Eggholder(2) | 240 | 0.980028 | 32.87 | 499.00
20 Goldstein-Price(2) | 60 | 1.000000 | 0.52 | 19.00
21 Griewank(10) | 12 | 1.000000 | 1.23 | 185.50
22 Griewank(100) | 24 | 1.000000 | 0.73 | 87.00
23 Griewank(2) | 24 | 1.000000 | 1.53 | 191.04
24 Griewank(30) | 12 | 1.000000 | 1.38 | 218.21
25 Griewank(6) | 36 | 1.000000 | 1.37 | 86.13
26 Hartmann(3) | 24 | 0.999999 | 0.57 | 43.50
27 Hartmann(6) | 12 | 0.986473 | 2.21 | 399.50
28 Holder Table(2) | 24 | 0.999863 | 1.79 | 201.79
29 Levy(12) | 3 | 0.985893 | 1.75 | 499.00
30 Levy(120) | 24 | 0.999918 | 6.07 | 499.00
31 Levy(2) | 240 | 1.000000 | 5.53 | 17.92
32 Levy13(2) | 240 | 1.000000 | 4.60 | 20.25
33 Matyas(2) | 240 | 1.000000 | 4.92 | 17.92
34 McCormick(2) | 12 | 0.999947 | 3.32 | 499.00
35 Michalewicz(2) | 24 | 0.999998 | 0.38 | 39.33
36 Michalewicz(5) | 120 | 0.957199 | 13.90 | 407.80
37 Powell(10) | 12 | 1.000000 | 2.24 | 385.54
38 Powell(12) | 3 | 1.000000 | 1.52 | 466.29
39 Powell(120) | 3 | 1.000000 | 1.62 | 489.13
40 Powell(2) | 3 | 1.000000 | 1.28 | 499.00
41 Power Sum(4) | 240 | 0.994381 | 100.60 | 477.00
42 Rastrigin(10) | 24 | 1.000000 | 1.91 | 252.75

The overall success of different global optimizers according to Gavana [18].

Function(dimension) | Part. | Mean Eff. | Time (sec) | Mean Iter.

43 Rastrigin(100) | 24 | 1.000000 | 2.07 | 262.50
44 Rastrigin(3) | 24 | 0.999920 | 5.56 | 542.21
45 Rastrigin(30) | 24 | 1.000000 | 2.03 | 265.33
46 Rastrigin(6) | 24 | 1.000000 | 1.84 | 245.08
47 Rosenbrock(10) | 24 | 1.000000 | 1.64 | 218.04
48 Rosenbrock(100) | 24 | 0.966926 | 2.32 | 674.92
49 Rosenbrock(2) | 24 | 1.000000 | 0.24 | 26.33
50 Rosenbrock(30) | 12 | 0.999706 | 1.68 | 258.35
51 Rosenbrock(6) | 24 | 1.000000 | 2.77 | 241.67
52 Rotated Hyper-Ellipsoid(12) | 3 | 1.000000 | 1.21 | 361.04
53 Rotated Hyper-Ellipsoid(120) | 3 | 1.000000 | 1.25 | 323.78
54 Rotated Hyper-Ellipsoid(2) | 240 | 1.000000 | 4.66 | 20.33
55 Schaffer2(2) | 240 | 1.000000 | 4.68 | 21.17
56 Schwefel 26(100) | 120 | 0.924108 | 16.26 | 490.50
57 Schwefel 26(2) | 120 | 0.994103 | 15.93 | 499.00
58 Shekel(4) | 60 | 0.999580 | 5.36 | 322.50
59 Shubert(2) | 240 | 1.000000 | 4.45 | 14.08
60 Six-Hump Camel(2) | 12 | 0.999974 | 2.73 | 499.00
61 Sphere(12) | 3 | 1.000000 | 1.02 | 303.42
62 Sphere(120) | 3 | 1.000000 | 1.08 | 315.79
63 Sphere(2) | 240 | 1.000000 | 4.58 | 16.96
64 Styblinski-Tang(12) | 3 | 0.999900 | 1.81 | 499.00
65 Styblinski-Tang(120) | 3 | 0.999884 | 1.96 | 499.00
66 Styblinski-Tang(2) | 24 | 0.999905 | 0.24 | 25.17
67 Sum of Different Powers(12) | 3 | 1.000000 | 0.73 | 213.79
68 Sum of Different Powers(120) | 3 | 1.000000 | 0.98 | 277.65
69 Sum of Different Powers(2) | 240 | 1.000000 | 15.37 | 193.75
70 Sum Squares(12) | 3 | 1.000000 | 0.91 | 262.38
71 Sum Squares(120) | 3 | 1.000000 | 1.03 | 297.25
72 Sum Squares(2) | 24 | 1.000000 | 0.16 | 15.38
73 Three-Hump Camel(2) | 240 | 1.000000 | 4.54 | 17.29
74 Trid(10) | 48 | 0.993918 | 10.11 | 499.00
75 Trid(6) | 240 | 0.996842 | 32.12 | 499.00
76 Xin She Yang02(12) | 48 | 1.000000 | 5.18 | 157.92
77 Xin She Yang02(120) | 48 | 1.000000 | 6.86 | 254.54
78 Xin She Yang02(2) | 48 | 1.000000 | 4.83 | 148.83
79 Xin She Yang03(2) (1.08) | 3 | 1.000000 | 0.55 | 160.33
80 Zakharov(12) | 3 | 1.000000 | 1.15 | 357.21
81 Zakharov(120) | 3 | 0.999958 | 1.35 | 430.13
82 Zakharov(2) | 240 | 1.000000 | 4.55 | 17.75

The overall success of different global optimizers according to Gavana [18].

The performance of PSO-3P is remarkable for the DeVilliersGlasser02, Damavandi, and CrossLegTable functions, where the success rates were superior to those reported in [18].

4.2. Experiments Using Few Particles

As an additional study of the behavior of the proposed PSO-3P, results obtained with small populations are investigated in this section. Indeed, for some instances, the global optimum was found using only 3 or 6 particles and, on average, fewer than 3000 evaluations of the objective function.

In this case, the algorithm was executed 24 times, and it stops when a maximum of 3000 iterations per run has been reached or when the optimum has been found. Tables 3 and 4 include the functions that were solved to optimality in at least one of the 24 runs. These tables show the name of the function in column 2; the number of particles used in each case in column 3; the average time per run in seconds in column 4; the percentage of runs in which the global optimum was obtained in column 5; the average number of iterations per run in column 6; and, finally, the number of evaluations of the objective function, EOF, in column 7.


No. | Function(Dim.) | Part. | Time (sec) | Perc. | Mean Iter. | EOF

1 | Ackley(10) | 3 | 1.4 | 20.8 | 455.6 | 1366.8
2 | Ackley(120) | 3 | 1.5 | 62.5 | 385.5 | 1156.5
3 | Ackley(2) | 3 | 1.2 | 37.5 | 379.2 | 1137.6
4 | Bohachevsky 1(2) | 3 | 0.8 | 100 | 236.9 | 710.7
5 | Bohachevsky 2(2) | 3 | 0.8 | 100 | 241.8 | 725.4
6 | Bohachevsky 3(2) | 3 | 0.8 | 100 | 247.5 | 742.5
7 | Branin(2) | 6 | 3.4 | 4.2 | 484.4 | 2906.4
8 | Colville(4) | 3 | 1.6 | 4.2 | 485 | 1455
9 | CrossLegTable(2) (0.83) | 3 | 0.8 | 75 | 223.6 | 670.8
10 | Cross-in-Tray(2) | 3 | 1.7 | 16.7 | 460 | 1380
11 | DevillierGlasser02(5) (0.00) | 3 | 0.6 | 100 | 151.9 | 455.7
12 | Drop Wave(2) | 3 | 1.2 | 100 | 142 | 426
13 | Easom(2) | 3 | 1.5 | 8.3 | 466.5 | 1399.5
14 | Goldstein-Price(2) | 3 | 1.6 | 4.2 | 483 | 1449
15 | Griewank(10) | 3 | 0.3 | 100 | 88.5 | 265.5
16 | Griewank(100) | 3 | 0.4 | 100 | 88.3 | 264.9
17 | Griewank(2) | 3 | 0.7 | 100 | 209.5 | 628.5
18 | Griewank(30) | 3 | 0.9 | 75 | 193.7 | 581.1
19 | Griewank(6) | 3 | 0.3 | 100 | 88.2 | 264.6
20 | Hartmann(3) | 3 | 1.8 | 4.2 | 485.5 | 1456.5
21 | Holder Table(2) | 6 | 2.6 | 4.2 | 478.9 | 2873.4
22 | Levy(12) | 6 | 2.4 | 8.3 | 475.5 | 2853
23 | Levy(120) | 6 | 2.9 | 12.5 | 474.8 | 2848.8
24 | Levy(2) | 6 | 2.3 | 8.3 | 465.3 | 2791.8
25 | Levy13(2) | 6 | 2.4 | 4.2 | 484.3 | 2905.8
26 | Matyas(2) | 3 | 0.8 | 100 | 237.6 | 712.8
27 | Michalewicz(2) | 3 | 2 | 54.2 | 367 | 1101
28 | Powell(10) | 3 | 1.7 | 4.2 | 497.3 | 1491.9
29 | Powell(12) | 3 | 0.6 | 95.8 | 200.3 | 600.9
30 | Powell(120) | 3 | 0.7 | 100 | 207.6 | 622.8
31 | Powell(2) | 3 | 0.7 | 95.8 | 261.5 | 784.5
32 | Rastrigin(10) | 3 | 0.8 | 100 | 261.8 | 785.4


No. | Function(Dim.) | Part. | Time (sec) | Perc. | Mean Iter. | EOF

33 | Rastrigin(100) | 3 | 1.3 | 100 | 289.8 | 869.4
34 | Rastrigin(3) | 6 | 2.8 | 8.3 | 487.1 | 2922.6
35 | Rastrigin(30) | 3 | 1.3 | 95.8 | 291.4 | 874.2
36 | Rastrigin(6) | 6 | 1.8 | 100 | 272.5 | 1635
37 | Rosenbrock(10) | 3 | 1.1 | 66.7 | 331 | 993
38 | Rosenbrock(2) | 3 | 0.5 | 79.2 | 140.3 | 420.9
39 | Rosenbrock(30) | 3 | 1.2 | 50 | 379.2 | 1137.6
40 | Rosenbrock(6) | 3 | 0.3 | 100 | 87.5 | 262.5
41 | Rotated Hyper-Ellipsoid(12) | 3 | 0.6 | 100 | 179 | 537
42 | Rotated Hyper-Ellipsoid(120) | 3 | 0.9 | 95.8 | 221.5 | 664.5
43 | Rotated Hyper-Ellipsoid(2) | 3 | 0.8 | 100 | 231 | 693
44 | Schaffer2(2) | 3 | 0.7 | 100 | 221 | 663
45 | Shekel(4) | 3 | 0.8 | 100 | 216.5 | 649.5
46 | Six-Hump Camel(2) | 3 | 1.7 | 8.3 | 468 | 1404
47 | Sphere(12) | 3 | 0.5 | 95.8 | 162.6 | 487.8
48 | Sphere(120) | 3 | 0.5 | 95.8 | 166.9 | 500.7
49 | Sphere(2) | 3 | 0.7 | 100 | 217.4 | 652.2
50 | Styblinski-Tang(2) | 3 | 1.5 | 20.8 | 416.9 | 1250.7
51 | Sum of Different Powers(12) | 3 | 0.5 | 95.8 | 144.1 | 432.3
52 | Sum of Different Powers(120) | 3 | 0.5 | 100 | 128.3 | 384.9
53 | Sum of Different Powers(2) | 3 | 0.9 | 100 | 270.2 | 810.6
54 | Sum Squares(12) | 3 | 0.5 | 100 | 148.9 | 446.7
55 | Sum Squares(120) | 3 | 0.6 | 100 | 186.8 | 560.4
56 | Sum Squares(2) | 3 | 0.7 | 100 | 222.7 | 668.1
57 | Three-Hump Camel(2) | 3 | 0.7 | 100 | 209.8 | 629.4
58 | Xin She Yang02(12) | 3 | 1.2 | 95.8 | 351 | 1053
59 | Xin She Yang02(120) | 3 | 2.8 | 25 | 451.1 | 1353.3
60 | Xin She Yang02(2) | 3 | 0.6 | 95.8 | 170.1 | 510.3
61 | Xin She Yang03(2) (1.08) | 3 | 0.5 | 100 | 155.3 | 465.9
62 | Zakharov(12) | 3 | 0.6 | 95.8 | 178.3 | 534.9
63 | Zakharov(120) | 3 | 1.2 | 75 | 373.6 | 1120.8
64 | Zakharov(2) | 3 | 1.6 | 4.2 | 488.9 | 1466.7

Even for this case, when the algorithm was restricted to use few particles and iterations, the success rate was satisfactory for DeVilliersGlasser02 and CrossLegTable functions. It is worth remembering that the number of particles, or iterations, reported in Table 1 is higher, since the data presented in the previous section was generated after tuning the parameters of the algorithm for each function, in order to increase its efficiency.

4.3. High Dimensional Problems

Finally, results obtained for some scalable functions are presented in this section, in order to investigate the performance of PSO-3P on high-dimensional problems. The test functions used in this framework are Ackley, Griewank, Rastrigin, Rosenbrock, Sphere, and Zakharov. The experiments include dimensions 10, 30, 50, 100, 120, and 1000. As far as we know, results reported by other authors only include up to 100 dimensions.

Tables 5-10 highlight the results of PSO-3P and different algorithms for these functions. The name of the algorithm is included in the first column; the second column shows the dimension of the problem (Dim.). The number of generations (G), or iterations (I), is incorporated in column three. The fourth column includes the number of particles; the number of evaluations of the objective (EOF), or fitness, function is presented in column five. The sixth column shows the best value found by the algorithm; the average and median values are presented in columns seven and eight, respectively; finally, the last column includes the running time in seconds. We must remark that not all of this information was reported in the reviewed literature; thus some cells remain empty.


Algorithm | Dim. | M. G/I | P. | EOF | Best | Average | Median | Time (s)

PSO-3P | 10 | 500 | 3 | 515 | 7.99E-15 | 7.56E-05 | 7.99E-15 | 0.655

PSO-3P | 30 | 500 | 3 | 482 | 7.99E-15 | 1.40E-05 | 7.99E-15 | 0.558
ELIPSO [19] | 30 | 100 | 1500 |  | 0.078 | 0.2279 | 0.3169 |
LDWIPSO [19] | 30 | 100 | 1500 |  | 0.0009 | 1.8896 | 2.0099 |
FIWPSO [19] | 30 | 100 | 1500 |  | 1.1552 | 2.6922 | 2.4962 |
CIWPSO [19] | 30 | 100 | 1500 |  | 1.3567 | 2.5694 | 2.4482 |
RIWPSO [19] | 30 | 100 | 1500 |  | 0.9313 | 2.3145 | 2.3584 |
TVACPSO [19] | 30 | 100 | 1500 |  | 1.157 | 2.1255 | 2.221 |
COPSO [19] | 30 | 100 | 1500 |  | 2.1201 | 2.9663 | 2.8098 |
IRPEO [13] | 30 | 50000 | 50 |  | 1.15E-04 | 2.91E-04 |  |
pPSA [20] | 30 | 2000 | 30 |  | 1.52E-03 | 1.19E+00 | 9.31E-01 |
PSO [21] | 30 | 1000 |  |  | 2.58E+00 |  |  |
SFLA [21] | 30 | 1000 |  |  | 2.40E+00 |  |  | 1.83
MSFLA [21] | 30 | 1000 |  | 490000 | 2.10E-01 |  |  | 6.25
MSFLA-EO [21] | 30 | 1000 |  | 2900000 | 0.00E+00 |  |  |
PSO-EO [22] | 30 | 10000 | 30 |  | 9.50E-04 | 9.88E-04 |  | 0.26
LX-PM [23] | 30 | 5000 | 300 | 86661 | 1.01E-10 |  |  | 1.625
LX-MPTM [23] | 30 | 5000 | 300 | 116731 | 2.18E-07 |  |  | 2.243
LX-NUM [23] | 30 | 5000 | 300 | 130041 | 6.66E-07 |  |  | 2.473
HX-PM [23] | 30 | 5000 | 300 | 178301 | 1.01E-10 |  |  | 2.8
HX-MPTM [23] | 30 | 5000 | 300 | 196401 | 3.17E-04 |  |  | 3.565
HX-NUM [23] | 30 | 5000 | 300 | 232901 | 1.28E-03 |  |  | 4.134
ADM [24] | 30 |  |  | 2348 | 3.28E-01 |  |  | 91.25
RM [24] | 30 |  |  | 29824 | 1.42E-03 |  |  | 3389.03
PLM [24] | 30 |  |  | 30000 | 3.47E-04 |  |  | 3053.57
NUM [24] | 30 |  |  | 29590 | 5.91E-04 |  |  | 874.63
MNUM [24] | 30 |  |  | 17290 | 1.82E-01 |  |  | 453.05
PM [24] | 30 |  |  | 25554 | 4.46E-01 |  |  | 933.23

PSO-3P | 50 | 500 | 3 | 511 | 7.99E-15 | 9.60E-05 | 7.99E-15 | 0.632
CSA [25] | 50 | 2000 | 4000 |  | 5.00E-01 |  |  |
ACSA [25] | 50 | 2000 | 2500 |  | 8.71E-06 |  |  |

PSO-3P | 100 | 500 | 3 | 952 | 8.88E-16 | 8.88E-16 | 8.88E-16 | 1.040
ELPSO [19] | 100 | 100 | 5000 |  | 0.0475 | 0.2255 | 0.1589 |
CPSO [19] | 100 |  |  |  | 3.6588 | 4.7431 | 4.6822 |
HS [19] | 100 |  |  |  | 12.8761 | 13.3999 | 13.5271 |
GA [19] | 100 |  |  |  | 10.2225 | 10.7037 | 10.5791 |
FSO [19] | 100 |  |  |  | 0.0256 | 0.0275 | 0.0273 |
GSA [19] | 100 |  |  |  | 1.7456 | 11.4327 | 6.0956 |
BSOA [19] | 100 |  |  |  | 6.4311 | 8.0122 | 7.9955 |
ABC [19] | 100 |  |  |  | 17.1928 | 17.5914 | 17.5324 |

PSO-3P | 120 | 500 | 3 | 947 | 7.99E-15 | 6.22E-07 | 7.99E-15 | 1.638

PSO-3P | 1000 | 500 | 3 | 470 | 7.99E-15 | 4.52E-05 | 7.99E-15 | 0.555

Algorithm | Dim. | M. G/I | P. | EOF | Best | Average | Median | Time (s)

PSO-3P | 10 | 500 | 3 | 282 | 0 | 0 | 0 | 0.338

PSO-3P | 30 | 500 | 3 | 513 | 0 | 1.61E-05 | 0 | 0.86
ELIPSO [19] | 30 | 100 | 1500 |  | 1.63E-04 | 2.27E-04 | 2.55E-04 |
LDWIPSO [19] | 30 | 100 | 1500 |  | 0.0002 | 0.0135 | 0.0089 |
FIWPSO [19] | 30 | 100 | 1500 |  | 0.0007 | 0.0118 | 0.0097 |
CIWPSO [19] | 30 | 100 | 1500 |  | 0.0015 | 0.0124 | 0.0119 |
RIWPSO [19] | 30 | 100 | 1500 |  | 0 | 0.0114 | 0.0083 |
TVACPSO [19] | 30 | 100 | 1500 |  | 0 | 0.0209 | 0.0148 |
COPSO [19] | 30 | 100 | 1500 |  | 0 | 0.0136 | 0.0136 |
IRPEO [13] | 30 | 50000 | 50 |  | 1.04E-05 | 1.92E-05 |  |
pPSA [20] | 30 | 2000 | 30 |  | 3.12E-07 | 1.21E-02 | 9.86E-03 |
PSO [21] | 30 | 100 |  | 1700000 | 0 |  |  | 0.52
SFLA [21] | 30 | 100 |  | 120000 | 1.00E-02 |  |  | 0.45
MSFLA [21] | 30 | 100 |  | 700000 | 0 |  |  | 0.28
MSFLA-EO [21] | 30 | 100 |  | 500000 | 0 |  |  | 0.21
PSO-EO [22] | 30 | 20000 | 30 |  | 0 | 0 |  | 0.23
LX-PM [23] | 30 | 5000 | 300 | 100001 | 1.94E-03 |  |  | 1.851
LX-MPTM [23] | 30 | 5000 | 300 | 143371 | 1.14E-03 |  |  | 2.723
LX-NUM [23] | 30 | 5000 | 300 | 159971 | 8.57E-03 |  |  | 3.035
HX-PM [23] | 30 | 5000 | 300 | 236391 | 1.94E-03 |  |  | 3.554
HX-MPTM [23] | 30 | 5000 | 300 | 253931 | 1.52E-03 |  |  | 4.486
HX-NUM [23] | 30 | 5000 | 300 | 252571 | 3.68E-03 |  |  | 4.457
ADM [24] | 30 |  |  | 519 | 4.43E-03 |  |  | 20.47
RM [24] | 30 |  |  | 28743 | 1.36E-02 |  |  | 3278.67
PLM [24] | 30 |  |  | 277288 | 5.74E-03 |  |  | 2987.69
NUM [24] | 30 |  |  | 27915 | 2.45E-02 |  |  | 819.56
MNUM [24] | 30 |  |  | 15118 | 3.66E-02 |  |  | 478.1
PM [24] | 30 |  |  | 28864 | 1.02E+00 |  |  | 900.25

PSO-3P | 50 | 500 | 3 | 501 | 0 | 9.54E-06 | 0 | 0.646
PSO [21] | 50 | 100 |  | 1300000 | 0 |  |  | 4.32
SFLA [21] | 50 | 100 |  | 3400000 | 1.00E-02 |  |  | 11.98
MSFLA [21] | 50 | 100 |  | 460000 | 0 |  |  | 1.91
MSFLA-EO [21] | 50 | 100 |  | 820000 | 0 |  |  | 2.06
CSA [25] | 50 | 2000 | 4000 |  | 1.01E+00 |  |  |
ACSA [25] | 50 | 2000 | 2500 |  | 7.00E-01 |  |  |

PSO-3P | 100 | 500 | 3 | 485 | 0 | 1.44E-06 | 0 | 0.589
ELPSO [19] | 100 | 100 | 5000 |  | 0.1846 | 0.2612 | 0.2127 |
CPSO [19] | 100 |  |  |  | 0.2347 | 0.2841 | 0.2793 |
HS [19] | 100 |  |  |  | 244.5269 | 271.336 | 268.3095 |
GA [19] | 100 |  |  |  | 86.0552 | 104.867 | 108.7177 |
FSO [19] | 100 |  |  |  | 0.0136 | 0.0152 | 0.0151 |
GSA [19] | 100 |  |  |  | 0.2126 | 1.1321 | 0.6543 |
BSOA [19] | 100 |  |  |  | 60.9781 | 83.5311 | 84.5294 |
ABC [19] | 100 |  |  |  | 722.3432 | 817.3863 | 844.5736 |

PSO-3P | 120 | 500 | 3 | 498 | 0 | 1.44E-07 | 0 | 0.624

PSO-3P | 1000 | 500 | 3 | 507 | 0 | 1.18E-04 | 0 | 0.617

Algorithm | Dim. | M. G/I | P. | EOF | Best | Average | Median | Time (s)

PSO-3P | 10 | 500 | 3 | 878 | 0 | 4.72E-02 | 0 | 1.101

PSO-3P | 30 | 500 | 3 | 510 | 0 | 3.21E-07 | 0 | 0.584
ELIPSO [19] | 30 | 100 | 1500 |  | 3.8941 | 8.6403 | 8.8185 |
LDWIPSO [19] | 30 | 100 | 1500 |  | 13.9294 | 25.67 | 26.8639 |
FIWPSO [19] | 30 | 100 | 1500 |  | 22.8841 | 29.5503 | 28.3564 |
CIWPSO [19] | 30 | 100 | 1500 |  | 7.9597 | 19.0042 | 19.8992 |
RIWPSO [19] | 30 | 100 | 1500 |  | 11.9397 | 25.5707 | 25.3715 |
TVACPSO [19] | 30 | 100 | 1500 |  | 9.9496 | 24.7745 | 25.3714 |
COPSO [19] | 30 | 100 | 1500 |  | 30.8437 | 51.687 | 47.758 |
IRPEO [13] | 30 | 50000 | 50 |  | 2.47E-05 | 6.19E-05 |  |
pPSA [20] | 30 | 2000 | 30 |  | 4.78E+01 | 8.11E+01 | 8.21E+01 |
PSO-EO [22] | 30 | 20000 | 100 |  | 0.00E+00 |  |  | 0.61
LX-PM [23] | 30 | 5000 | 300 | 165471 | 0.00E+00 |  |  | 2.997
LX-MPTM [23] | 30 | 5000 | 300 | 350541 | 1.23E-12 |  |  | 6.515
LX-NUM [23] | 30 | 5000 | 300 |  | 7.30E+00 |  |  |
HX-PM [23] | 30 | 5000 | 300 | 167851 | 0.00E+00 |  |  | 2.441
HX-MPTM [23] | 30 | 5000 | 300 | 245281 | 9.22E-12 |  |  | 4.219
HX-NUM [23] | 30 | 5000 | 300 | 426651 | 3.11E-13 |  |  | 7.358
ADM [24] | 30 |  |  | 1456 | 1.36E+01 |  |  | 57.03
RM [24] | 30 |  |  | 12978 | 1.30E-02 |  |  | 1638.42
PLM [24] | 30 |  |  | 10467 | 4.31E-03 |  |  | 971.72
NUM [24] | 30 |  |  | 16410 | 4.43E-02 |  |  | 516.03
MNUM [24] | 30 |  |  | 6468 | 1.80E+01 |  |  | 214.95
PM [24] | 30 |  |  | 20712 | 8.23E+00 |  |  | 709.02

PSO-3P | 50 | 500 | 3 | 413 | 0 | 1.63E-07 | 0 | 0.482
PSO [21] | 50 | 10000 |  |  | 3.14E+01 |  |  |
SFLA [21] | 50 | 10000 |  |  | 5.96E+00 |  |  |
MSFLA [21] | 50 | 10000 |  | 10000000 | 2.00E-03 |  |  | 19.72
MSFLA-EO [21] | 50 | 10000 |  | 4600000 | 0.00E+00 |  |  | 9.41
CSA [25] | 50 | 2000 | 4000 |  | 9.72E-07 |  |  |
ACSA [25] | 50 | 2000 | 2500 |  | 8.56E-09 |  |  |

PSO-3P | 100 | 500 | 3 | 510 | 0 | 8.48E-05 | 0 | 0.595
ELPSO [19] | 100 | 100 | 5000 |  | 1.0448 | 5.5402 | 5.7829 |
CPSO [19] | 100 |  |  |  | 49.8618 | 78.782 | 82.187 |
HS [19] | 100 |  |  |  | 300.2257 | 415.5759 | 360.5646 |
GA [19] | 100 |  |  |  | 720.2649 | 768.2887 | 775.294 |
FSO [19] | 100 |  |  |  | 169.1787 | 214.1549 | 218.9371 |
GSA [19] | 100 |  |  |  | 27.8589 | 29.8488 | 29.8488 |
BSOA [19] | 100 |  |  |  | 135.4258 | 164.6546 | 146.3873 |
ABC [19] | 100 |  |  |  | 735.1485 | 753.3054 | 756.6776 |

PSO-3P | 120 | 500 | 3 | 466 | 0 | 9.16E-06 | 0 | 0.372

PSO-3P | 1000 | 500 | 3 | 516 | 0 | 1.28E-04 | 0 | 0.4

Algorithm | Dim. | M. G/I | P. | EOF | Best | Average | Median | Time (s)

PSO-3P | 10 | 500 | 3 | 587 | 0 | 3.71E-01 | 0 | 0.69

PSO-3P | 30 | 500 | 3 | 522 | 0 | 1.22E-02 | 0 | 0.61
ELIPSO [19] | 30 | 100 | 1500 |  | 1.1337 | 5.8172 | 5.8938 |
LDWIPSO [19] | 30 | 100 | 1500 |  | 4.4676 | 52.7099 | 29.2173 |
FIWPSO [19] | 30 | 100 | 1500 |  | 26.9238 | 52.5216 | 49.8166 |
CIWPSO [19] | 30 | 100 | 1500 |  | 0.0186 | 8.6884 | 0.561 |
RIWPSO [19] | 30 | 100 | 1500 |  | 0.2243 | 28.3862 | 24.8883 |
TVACPSO [19] | 30 | 100 | 1500 |  | 2.40E-03 | 3.93E+01 | 2.73E+01 |
COPSO [19] | 30 | 100 | 1500 |  | 1.28E+00 | 4.55E+01 | 45.7893 |
pPSA [20] | 30 | 2000 | 30 |  | 1.43E+01 | 6.37E+01 | 2.65E+01 |
IRPEO [13] | 30 | 50000 | 50 |  | 5.36E-02 | 8.64E-01 |  |
PSO-EO [22] | 30 | 100000 | 30 |  | 9.50E-04 | 9.88E-04 |  | 117.1
LX-PM [23] | 30 | 5000 | 300 |  | 1.58E+01 |  |  |
LX-MPTM [23] | 30 | 5000 | 300 |  | 1.85E+01 |  |  |
LX-NUM [23] | 30 | 5000 | 300 |  | 1.85E+01 |  |  |
HX-PM [23] | 30 | 5000 | 300 | 669751 | 1.84E+01 |  |  | 5.453
HX-MPTM [23] | 30 | 5000 | 300 |  | 1.61E+01 |  |  |
HX-NUM [23] | 30 | 5000 | 300 | 1368691 | 1.25E+01 |  |  | 14.621
ADM [24] | 30 |  |  | 30000 | 6.58E+00 |  |  | 41.46
RM [24] | 30 |  |  | 29863 | 6.98E+01 |  |  | 1491.56
PLM [24] | 30 |  |  | 29625 | 5.13E+01 |  |  | 647.87
NUM [24] | 30 |  |  | 30000 | 6.67E+01 |  |  | 379.56
MNUM [24] | 30 |  |  | 30000 | 4.03E+00 |  |  | 173.61
PM [24] | 30 |  |  | 30000 | 4.04E+03 |  |  | 404.41

PSO-3P | 40 | 40 | 3 | 953 | 0 | 1.91E-01 | 0.00E+00 | 1.1
PSO [21] | 40 | 20000 |  |  | 4.02E+01 |  |  |
SFLA [21] | 40 | 20000 |  |  | 5.28E+00 |  |  |
MSFLA [21] | 40 | 20000 |  |  | 4.00E-03 |  |  |
MSFLA-EO [21] | 40 | 20000 |  | 53000000 | 2.00E-03 |  |  | 359

PSO-3P | 50 | 500 | 3 | 565 | 0 | 2.02E+00 | 0 | 0.604
CSA [25] | 50 | 2000 | 4000 |  | 2.94E+00 |  |  |
ACSA [25] | 50 | 2000 | 2500 |  | 1.75E+00 |  |  |

PSO-3P | 100 | 500 | 3 | 553 | 0 | 4.09E+00 | 0.00E+00 | 0.65
ELPSO [19] | 100 | 100 | 5000 |  | 3.7441 | 8.739 | 8.3733 |
CPSO [19] | 100 |  |  |  | 70.9597 | 209.9321 | 225.3372 |
HS [19] | 100 |  |  |  | 2.94E+05 | 3.19E+05 | 3.17E+05 |
GA [19] | 100 |  |  |  | 2.13E+04 | 2.61E+04 | 2.58E+04 |
FSO [19] | 100 |  |  |  | 96.4872 | 97.8001 | 97.7481 |
GSA [19] | 100 |  |  |  | 93.0005 | 93.0768 | 93.0122 |
BSOA [19] | 100 |  |  |  | 156.385 | 206.5988 | 215.7586 |
ABC [19] | 100 |  |  |  | 1.25E+06 | 1.38E+06 | 1.28E+06 |

PSO-3P | 120 | 500 | 3 | 599 | 0 | 4.91E+00 | 0 | 0.69

PSO-3P | 1000 | 750 | 3 | 1948 | 0 | 3.57E-01 | 2.14E-02 | 2.576

Algorithm | Dim. | M. G/I | P. | EOF | Best | Average | Median | Time (s)

PSO-3P | 10 | 500 | 3 | 483 | 0 | 4.75E-08 | 0 | 0.569

PSO-3P | 30 | 500 | 3 | 489 | 0 | 1.51E-08 | 0 | 0.56
ELIPSO [19] | 30 | 100 | 1500 |  | 2.979E-08 | 5.244E-08 | 5.170E-06 |
LDWIPSO [19] | 30 | 100 | 1500 |  | 1.360E-08 | 6.110E-08 | 5.920E-08 |
FIWPSO [19] | 30 | 100 | 1500 |  | 7.280E-08 | 1.701E-07 | 1.173E-07 |
CIWPSO [19] | 30 | 100 | 1500 |  | 5.400E-08 | 1.565E-07 | 1.670E-07 |
RIWPSO [19] | 30 | 100 | 1500 |  | 1.600E-08 | 4.810E-07 | 6.900E-08 |
TVACPSO [19] | 30 | 100 | 1500 |  | 4.430E-10 | 1.077E-09 | 1.046E-09 |
COPSO [19] | 30 | 100 | 1500 |  | 5.080E-09 | 9.550E-09 | 9.470E-09 |
IRPEO [13] | 30 | 50000 | 50 |  | 1.88E-01 | 2.96E-01 |  |
pPSA [20] | 30 | 2000 | 30 |  | 5.60E-06 | 7.88E-06 | 7.93E-06 |
PSO-EO [22] | 30 | 20000 | 100 |  | 0.00E+00 |  |  | 0.61
LX-PM [23] | 30 | 5000 | 300 | 43541 | 4.75E-11 |  |  | 0.505
LX-MPTM [23] | 30 | 5000 | 300 | 59861 | 1.32E-07 |  |  | 0.724
LX-NUM [23] | 30 | 5000 | 300 | 67661 | 3.34E-11 |  |  | 0.819
HX-PM [23] | 30 | 5000 | 300 | 89751 | 4.75E-11 |  |  | 0.733
HX-MPTM [23] | 30 | 5000 | 300 | 87031 | 5.48E-05 |  |  | 0.944
HX-NUM [23] | 30 | 5000 | 300 | 107581 | 3.64E-04 |  |  | 1.165
ADM [24] | 30 |  |  | 28501 |  |  |  | 1.45
RM [24] | 30 |  |  | 29500 | 3.31E-06 |  |  | 2577.08
PLM [24] | 30 |  |  | 29678 | 5.62E-07 |  |  | 2538.16
NUM [24] | 30 |  |  | 29788 | 1.13E-06 |  |  | 899.87
MNUM [24] | 30 |  |  | 11585 | 0.00E+00 |  |  | 336.02
PM [24] | 30 |  |  | 28498 | 1.20E-02 |  |  | 994.49

PSO-3P | 50 | 500 | 3 | 469 | 0 | 3.03E-08 | 0 | 0.539

PSO-3P | 100 | 500 | 3 | 481 | 0 | 1.27E-07 | 0 | 0.569
ELPSO [19] | 100 | 100 | 5000 |  | 4.297E-05 | 6.035E-05 | 5.811E-05 |
CPSO [19] | 100 |  |  |  | 2.708E-04 | 5.398E-04 | 4.095E-04 |
HS [19] | 100 |  |  |  | 69.0111 | 76.0844 | 78.0859 |
GA [19] | 100 |  |  |  | 23.7734 | 29.4081 | 31.2806 |
FSO [19] | 100 |  |  |  | 2.001E-04 | 2.211E-04 | 2.216E-04 |
GSA [19] | 100 |  |  |  | 0.0562 | 0.0672 | 0.0691 |
BSOA [19] | 100 |  |  |  | 0.3406 | 0.3696 | 0.3720 |
ABC [19] | 100 |  |  |  | 198.6946 | 230.8250 | 237.4058 |

PSO-3P | 120 | 500 | 3 | 418 | 0 | 1.39E-08 | 0 | 0.492

PSO-3P | 1000 | 500 | 3 | 486 | 0 | 3.64E-08 | 0 | 0.599

Algorithm | Dim. | M. G/I | P. | EOF | Best | Average | Median | Time (s)

PSO-3P | 10 | 500 | 3 | 507 | 0 | 2.33E-05 | 0 | 0.603

PSO-3P | 30 | 500 | 3 | 542 | 0 | 8.79E-05 | 0 | 0.643
ELIPSO [19] | 30 | 100 | 1500 |  | 0.0001 | 0.0003 | 0.0002 |
LDWIPSO [19] | 30 | 100 | 1500 |  | 0.0001 | 0.0005 | 0.0004 |
FIWPSO [19] | 30 | 100 | 1500 |  | 0.0008 | 0.0032 | 0.0028 |
CIWPSO [19] | 30 | 100 | 1500 |  | 0.001 | 0.003 | 0.0021 |
RIWPSO [19] | 30 | 100 | 1500 |  | 0.0003 | 0.0019 | 0.0012 |
TVACPSO [19] | 30 | 100 | 1500 |  | 4.95E-05 | 1.36E-04 | 1.37E-04 |
COPSO [19] | 30 | 100 | 1500 |  | 0.0524 | 6.9459 | 2.1362 |
IRPEO [13] | 30 | 50000 | 50 |  | 1.88E-01 | 5.59E-01 |  |
LX-PM [23] | 30 | 5000 | 300 | 63271 | 1.95E-20 |  |  | 1.563
LX-MPTM [23] | 30 | 5000 | 300 | 65711 | 8.04E-15 |  |  | 1.642
LX-NUM [23] | 30 | 5000 | 300 | 78681 | 3.32E-10 |  |  | 1.969
HX-PM [23] | 30 | 5000 | 300 | 92891 | 1.95E-20 |  |  | 1.959
HX-MPTM [23] | 30 | 5000 | 300 | 94701 | 7.86E-09 |  |  | 2.243
HX-NUM [23] | 30 | 5000 | 300 | 210791 | 2.86E-01 |  |  | 5.06
ADM [24] | 30 |  |  | 17690 |  |  |  | 70.38
RM [24] | 30 |  |  | 14286 | 4.05E+00 |  |  | 1249.5
PLM [24] | 30 |  |  | 15252 | 3.08E-05 |  |  | 1306.32
NUM [24] | 30 |  |  | 6536 | 4.88E+01 |  |  | 239.33
MNUM [24] | 30 |  |  | 27744 | 0 |  |  | 687.27
PM [24] | 30 |  |  | 5454 | 7.09E+01 |  |  | 203.16

PSO-3P | 50 | 500 | 3 | 649 | 0 | 2.11E-05 | 0 | 0.783

PSO-3P | 100 | 500 | 3 | 814 | 0 | 8.14E-02 | 0 | 0.944
ELPSO [19] | 100 | 100 | 5000 |  | 0.2268 | 2.5016 | 2.2814 |
CPSO [19] | 100 |  |  |  | 17.7263 | 40.9341 | 39.9646 |
HS [19] | 100 |  |  |  | 287.5234 | 2694.9 | 1488.3287 |
GA [19] | 100 |  |  |  | 369.4271 | 412.8078 | 382.7739 |
FSO [19] | 100 |  |  |  | 62.8325 | 74.4212 | 75.6089 |
GSA [19] | 100 |  |  |  | 167.7459 | 171.3504 | 168.1632 |
BSOA [19] | 100 |  |  |  | 91.0035 | 53.6407 | 130.7713 |
ABC [19] | 100 |  |  |  | 1052.2 | 1278.7 | 1337.6 |

PSO-3P | 120 | 500 | 3 | 768 | 0 | 2.22E-02 | 0 | 0.932

PSO-3P | 1000 | 500 | 3 | 1080 | 0 | 9.66E+07 | 0 | 1.268

For the Ackley function (Table 5), PSO-3P always found solutions very close to the global optimum; moreover, the median was very close to the global optimum for all dimensions (10, 30, 50, 100, 120, and 1000). Additionally, for the Ackley function with 30 dimensions, PSO-3P required less than 50% of the number of objective function evaluations used by the other algorithms.

For the Griewank, Rastrigin, Rosenbrock, Sphere, and Zakharov functions (Tables 6-10), PSO-3P always found the global optimum in at least one run, and the median was always the global optimum for all dimensions (10, 30, 50, 100, 120, and 1000).

Special attention should be given to the number of iterations and evaluations of the objective (fitness) function. On average, PSO-3P required 609 evaluations of the objective function, and less than 0.74 seconds, to achieve the global optimum over dimensions 10, 30, 50, 100, 120, and 1000.

The normalized solutions for 17 instances are shown in Table 11; in this table, values range between 0 and 1. If an algorithm found a solution close to the best reported value, it is associated with 0; on the other hand, if the best solution found by an algorithm is close to the worst reported value, it is associated with 1. Based on Table 11, we observe that PSO-3P found the best result in 94% of the instances and the second best result in the remaining 6%. Thus, PSO-3P is a competitive alternative for solving the Ackley, Griewank, Rastrigin, Rosenbrock, Sphere, and Zakharov problems.
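The 0-to-1 normalization described above corresponds to standard min-max scaling of each instance's results:

```python
def normalize(values):
    """Min-max normalization as used in Table 11: the best (smallest)
    reported value maps to 0 and the worst (largest) maps to 1."""
    lo, hi = min(values), max(values)
    if hi == lo:                      # all algorithms tied on this instance
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]
```

Applying this scaling per instance makes results from functions with very different value ranges directly comparable.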


NS versus P: normalized best solutions for the Ackley, Griewank, Rastrigin, Rosenbrock, Sphere, and Zakharov functions at dimensions 30, 50, and 100 (and Rosenbrock also at 40), 17 instances in total. Each competing algorithm is identified by the number listed in the note below. PSO-3P obtained the lowest (best) normalized value, 0, in every instance, while the worst-performing algorithm in each instance is associated with 1.

Note: ELIPSO is represented by 1 [19]; LDWIPSO is represented by 2 [19]; FIWPSO is represented by 3 [19]; CIWPSO is represented by 4 [19]; RIWPSO is represented by 5 [19]; TVACPSO is represented by 6 [19]; COPSO is represented by 7 [19]; IRPEO is represented by 8 [13]; pPSA is represented by 9 [20]; PSO is represented by 10 [21]; SFLA is represented by 11 [21]; MSFLA is represented by 12 [21]; MSFLA-EO is represented by 13 [21]; PSO-EO is represented by 14 [22]; LX-PM is represented by 15 [23]; LX-MPTM is represented by 16 [23]; LX-NUM is represented by 17 [23]; HX-PM is represented by 18 [23]; HX-MPTM is represented by 19 [23]; HX-NUM is represented by 20 [23]; ADM is represented by 21 [24]; RM is represented by 22 [24]; PLM is represented by 23 [24]; NUM is represented by 24 [24]; MNUM is represented by 25 [24]; PM is represented by 26 [24]; CSA is represented by 27 [25]; ACSA is represented by 28 [25]; ELPSO is represented by 29 [19]; CPSO is represented by 30 [19]; HS is represented by 31 [19]; GA is represented by 32 [19]; FSO is represented by 33 [19]; GSA is represented by 34 [19]; BSOA is represented by 35 [19]; ABC is represented by 36 [19].

Results of the Wilcoxon rank sum test are shown in Figure 3; a dark square means that the two algorithms are statistically similar, while a light square means the opposite. The results show that PSO-3P is statistically similar to PSO-EO [22], LX-PM [23], HX-PM [23], and ACSA [25]. On the other hand, our method is different from the remaining methods. However, PSO-3P required fewer evaluations of the objective function than its counterparts to obtain good results. The p values involved in the Wilcoxon rank sum test are shown in Figure 4.
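The paper does not reproduce its test code; as an illustration of the statistic behind Figures 3 and 4, here is a self-contained sketch of the two-sided rank sum test using the usual large-sample normal approximation (midranks for ties, no tie-variance or continuity correction, so it will differ slightly from exact or corrected implementations):

```python
import math
from statistics import NormalDist

def rank_sum_test(a, b):
    """Two-sided Wilcoxon rank sum test via the normal approximation."""
    n1, n2 = len(a), len(b)
    combined = sorted(list(a) + list(b))
    ranks = {}                                 # value -> midrank
    i = 0
    while i < n1 + n2:
        j = i
        while j < n1 + n2 and combined[j] == combined[i]:
            j += 1
        ranks[combined[i]] = (i + 1 + j) / 2   # average of 1-based ranks i+1 .. j
        i = j
    w = sum(ranks[v] for v in a)               # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2                # mean of W under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sigma
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p
```

A large p value (a "dark square") means the two samples of final objective values are statistically indistinguishable, which is why the comparison above is paired with the evaluation counts: similar quality at a lower budget still favors PSO-3P.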

5. Conclusions and Further Research

In this work, a novel PSO-based algorithm with three phases (stabilization, breadth-first search, and depth-first search) was presented. The resulting PSO-3P algorithm was tested on a set of single-objective unconstrained optimization benchmark instances. The empirical evidence shows that PSO-3P is very efficient.
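As an illustration of how such a three-phase scheme can be organized, the sketch below uses a generic global-best PSO whose inertia and acceleration coefficients change per phase. The phase boundaries, parameter schedules, and default sizes here are assumptions chosen for illustration, not the published PSO-3P update rules:

```python
import random

def pso_3phase(f, dim, bounds, n_particles=50, iters=90):
    """Illustrative three-phase PSO; coefficient schedules are assumed, not PSO-3P's."""
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                            # personal best positions
    pbest = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]                     # global best
    for t in range(iters):
        if t < iters // 3:                           # phase 1: stabilization
            w, c1, c2 = 0.9, 2.0, 0.5
        elif t < 2 * iters // 3:                     # phase 2: breadth-first search
            w, c1, c2 = 0.7, 1.5, 1.5
        else:                                        # phase 3: depth-first search
            w, c1, c2 = 0.4, 0.5, 2.0
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (P[i][d] - X[i][d])
                           + c2 * random.random() * (G[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            fx = f(X[i])
            if fx < pbest[i]:
                pbest[i], P[i] = fx, X[i][:]
                if fx < gbest:
                    gbest, G = fx, X[i][:]
    return G, gbest
```

Shifting weight from the inertia and cognitive terms toward the social term moves the swarm from exploration to exploitation, which is the general intent behind a breadth-first-then-depth-first schedule.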

Tables 5–10 highlight the fact that PSO-3P gives good results and competitive solutions for several difficult problems previously reported in the literature.

Moreover, for all benchmark functions considered in this work, PSO-3P converged faster and required fewer iterations than many specialized algorithms, regardless of the dimension of the function. PSO-3P used on average 609 evaluations of the objective function to converge to the global optimum, which represents on average 66% of the number of evaluations reported for these problems by other algorithms, with an average running time of 0.74 seconds for problems of dimension 10 to 1000. As a matter of fact, some solutions are presented for problems of high dimension (120, 1000) that have not been reported before. Also, the numerical results of the Wilcoxon test show that the results obtained by PSO-3P are similar to those reported by IRPEO, LX-MPTM, MNUM, pPSA, LX-NUM, PSO, HX-PM, CSA, SFLA, HX-MPTM, ACSA, MSFLA, HX-NUM, MSFLA-EO, ADM, PSO-EO, RM, LX-PM, PLM, and NUM. However, PSO-3P needs fewer than 1000 evaluations of the objective function to generate good results; in contrast, the other methods need more than 2500 evaluations.

It was observed that, in all the cases studied, PSO-3P reached the global optimum or a solution very close to it within a small number of iterations, and showed the ability to jump deep valleys, for unconstrained optimization problems with 2 to 1,000 variables.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

Sergio Gerardo de-los-Cobos-Silva would like to thank D. Sto. and P. V. Gpe. for their inspiration, and his family Ma, Ser, Mon, Chema, and his Flaquita for all their support.

References

  1. F. Glover, “Tabu Search, part I,” ORSA Journal on Computing, vol. 1, no. 3, pp. 190–206, 1989.
  2. S. Kirkpatrick, C. D. Gelatt Jr., and M. P. Vecchi, “Optimization by simulated annealing,” Science, vol. 220, no. 4598, pp. 671–680, 1983.
  3. J. H. Holland, Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, Mich, USA, 1975.
  4. F. Glover, “A template for scatter search and path relinking,” in Artificial Evolution, vol. 1363 of Lecture Notes in Computer Science, pp. 1–51, Springer, Berlin, Germany, 1998.
  5. J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, vol. 4, pp. 1942–1948, December 1995.
  6. D. Sedighizadeh and E. Masehian, “Particle swarm optimization methods, taxonomy and applications,” International Journal of Computer Theory and Engineering, vol. 1, no. 5, pp. 486–502, 2009.
  7. I. Muhammad, H. Rathiah, and A. K. Noor Elaiza, “An overview of particle swarm optimization variants,” Procedia Engineering, vol. 53, pp. 491–496, 2013.
  8. Y. Zhao, W. Zu, and H. Zeng, “A modified particle swarm optimization via particle visual modeling analysis,” Computers & Mathematics with Applications, vol. 57, no. 11-12, pp. 2022–2029, 2009.
  9. C.-C. Chen, “Two-layer particle swarm optimization for unconstrained optimization problems,” Applied Soft Computing Journal, vol. 11, no. 1, pp. 295–304, 2011.
  10. W. F. Abd-El-Wahed, A. A. Mousa, and M. A. El-Shorbagy, “Integrating particle swarm optimization with genetic algorithms for solving nonlinear optimization problems,” Journal of Computational and Applied Mathematics, vol. 235, no. 5, pp. 1446–1453, 2011.
  11. H. Kahramanh and N. Allahverdi, “Particle swarm optimization with flexible swarm for unconstrained optimization,” International Journal of Intelligent Systems and Applications in Engineering, vol. 1, no. 1, pp. 8–13, 2013.
  12. J.-Y. Wu, “Solving unconstrained global optimization problems via hybrid swarm intelligence approaches,” Mathematical Problems in Engineering, vol. 2013, Article ID 256180, 15 pages, 2013.
  13. G.-Q. Zeng, K.-D. Lu, J. Chen et al., “An improved real-coded population-based extremal optimization method for continuous unconstrained optimization problems,” Mathematical Problems in Engineering, vol. 2014, Article ID 420652, 9 pages, 2014.
  14. S. G. De-los-Cobos-Silva, “SC—system of convergence: theory and foundations,” Revista de Matemática: Teoría y Aplicaciones, vol. 22, no. 2, pp. 341–367, 2015.
  15. J. Kennedy, R. C. Eberhart, and Y. Shi, Swarm Intelligence, Morgan Kaufmann Publishers, San Diego, Calif, USA, 2001.
  16. A. R. Hedar, “Global Optimization Test Problems,” 2014, http://www-optima.amp.i.kyoto-u.ac.jp/member/student/hedar/Hedar_files/TestGO.htm.
  17. S. Surjanovic and D. Bingham, “Optimization Test Problems,” 2014, http://www.sfu.ca/~ssurjano/optimization.html.
  18. A. Gavana, Global Optimization Benchmarks and AMPGO, 2014, http://infinity77.net/global_optimization/test_functions.html.
  19. A. R. Jordehi, “Enhanced leader PSO (ELPSO): a new PSO variant for solving global optimisation problems,” Applied Soft Computing Journal, vol. 26, pp. 401–417, 2015.
  20. Z. Xinchao, “A perturbed particle swarm algorithm for numerical optimization,” Applied Soft Computing Journal, vol. 10, no. 1, pp. 119–124, 2010.
  21. X. Li, J. Luo, M.-R. Chen, and N. Wang, “An improved shuffled frog-leaping algorithm with extremal optimisation for continuous optimisation,” Information Sciences, vol. 192, pp. 143–151, 2012.
  22. M.-R. Chen, X. Li, X. Zhang, and Y.-Z. Lu, “A novel particle swarm optimizer hybridized with extremal optimization,” Applied Soft Computing Journal, vol. 10, no. 2, pp. 367–373, 2010.
  23. K. Deep and M. Thakur, “A new mutation operator for real coded genetic algorithms,” Applied Mathematics and Computation, vol. 193, no. 1, pp. 211–230, 2007.
  24. P.-H. Tang and M.-H. Tseng, “Adaptive directed mutation for real-coded genetic algorithms,” Applied Soft Computing Journal, vol. 13, no. 1, pp. 600–614, 2013.
  25. P. Ong, “Adaptive cuckoo search algorithm for unconstrained optimization,” The Scientific World Journal, vol. 2014, Article ID 943403, 8 pages, 2014.
  26. M. Birattari, Tuning Metaheuristics: A Machine Learning Perspective, Springer, 2009.

Copyright © 2015 Sergio Gerardo de-los-Cobos-Silva et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

