Mathematical Problems in Engineering


Research Article | Open Access

Volume 2014 |Article ID 907386 | 19 pages | https://doi.org/10.1155/2014/907386

Fusion Global-Local-Topology Particle Swarm Optimization for Global Optimization Problems

Academic Editor: Dan Simon
Received: 22 Feb 2014
Revised: 18 May 2014
Accepted: 24 May 2014
Published: 25 Jun 2014

Abstract

In recent years, particle swarm optimization (PSO) has been extensively applied to various optimization problems because of its structural and implementation simplicity. However, PSO can become trapped in local optima or exhibit slow convergence when solving complex multimodal problems. To address these issues, an improved PSO scheme called fusion global-local-topology particle swarm optimization (FGLT-PSO) is proposed in this study. The algorithm employs both global and local topologies in PSO to escape from local optima. FGLT-PSO is evaluated using twenty (20) unimodal and multimodal nonlinear benchmark functions, and its performance is compared with several well-known PSO algorithms. The experimental results show that the proposed method improves the performance of the PSO algorithm in terms of solution accuracy and convergence speed.

1. Introduction

PSO is a population-based metaheuristic algorithm introduced by Kennedy and Eberhart [1] in 1995. The algorithm imitates the social behavior of bird flocking or fish schooling to find the global best solution. Owing to its simple concept, few parameters, and ease of implementation, PSO has received much attention for solving real-world optimization problems [2–6] in recent years. Nevertheless, PSO may easily become trapped in local optima when solving complex multimodal problems [7]. Hence, a number of PSO variants have been proposed in the literature to avoid local optima and to find the best solution promptly.

The algorithm can apply two different topologies to find a good solution: global and local. In the global topology, the position of each particle is influenced by the best-fitness particle of the entire population in the search space, while in the local topology each particle is influenced by the best-fitness particle of its neighborhood. Kennedy and Mendes proposed the local (ring) topological structure PSO (LPSO) [8] and the Von Neumann topological structure PSO (VPSO) [9]. Mendes et al. [10] introduced the fully informed particle swarm (FIPS) algorithm, and Ratnaweera et al. [11] suggested the self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients (HPSO-TVAC). Other researchers have presented several further PSO variants, such as dynamic multiswarm PSO (DMS-PSO) [12], comprehensive learning PSO (CLPSO) [13], median-oriented particle swarm optimization (MPSO) [14], centripetal accelerated particle swarm optimization (CAPSO) [15], quadratic interpolation PSO (QIPSO) [16], quantum-behaved particle swarm optimization (QPSO) [17], and adaptive particle swarm optimization (APSO) [18].

Although the aforementioned algorithms have obtained satisfactory results, they still have some disadvantages. For example, LPSO presents a slow convergence rate on unimodal functions [14, 15], and CLPSO is not well suited to solving unimodal problems [13]. Moreover, some of these algorithms perform better than PSO, but their structures are not as simple as that of PSO.

To overcome these disadvantages, this study introduces fusion global-local-topology particle swarm optimization (FGLT-PSO). The proposed algorithm performs a global search over the entire search space with a fast convergence speed by hybridizing the local and global topologies in PSO to jump out of local optima.

The remainder of this paper is organized as follows. In Section 2, a brief review of PSO is provided followed by some well-known PSO algorithms. The proposed algorithm is described in Section 3 in detail. In Section 4, FGLT-PSO is used to solve several benchmark functions and its performance is compared with the other PSO algorithms in the literature. Finally, conclusions and the future research directions are presented in Section 5.

2. Particle Swarm Optimization (PSO)

2.1. PSO Framework

The PSO algorithm is a population-based metaheuristic algorithm that applies two approaches, global exploration and local exploitation, to find the optimum solution. Exploration is the ability to search new areas of the search space, whereas exploitation is the ability to refine the search around a good solution. The algorithm is initialized by creating a swarm, that is, a population of particles, with random positions. Every particle i is represented as a vector in a D-dimensional search space, where x_i and v_i are its position and velocity, respectively, and pbest_i is the personal best position found so far by the i-th particle. In addition, the best position obtained by the entire population (gbest) is used to update the particle velocity. Based on pbest_i and gbest, the next velocity and position of the i-th particle are computed using (3) and (4) as follows:

v_i(t + 1) = w·v_i(t) + c1·rand1·(pbest_i − x_i(t)) + c2·rand2·(gbest − x_i(t)),  (3)

x_i(t + 1) = x_i(t) + v_i(t + 1),  (4)

where v_i(t + 1) and v_i(t) are the next and current velocity of the i-th particle, respectively, w is the inertia weight, c1 and c2 are acceleration coefficients, and rand1 and rand2 are random numbers in the interval [0, 1]. N is the number of particles, and x_i(t + 1) and x_i(t) are the next and current position of the i-th particle, for i = 1, 2, …, N.

Also, each velocity component is bounded, |v_i| ≤ V_max, where V_max is set to a constant based on the search space bound. A larger value of V_max encourages global exploration (searching new areas), while a smaller value promotes local exploitation.

In (3), the second and third terms are called the cognition and social terms, respectively. The two models applied to choose the social attractor are known as the gbest (global topology) and lbest (local topology) models. In this paper, the gbest model and the lbest model are called PSO and LPSO, respectively.
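The gbest-model update rules (3) and (4) can be sketched in code. This is a minimal illustration, assuming the sphere function as the objective; the parameter values (w, c1, c2, V_max, swarm size, iteration count) are illustrative assumptions, not the paper's settings.

```python
import random

def pso_sphere(num_particles=30, dim=10, iters=200,
               w=0.73, c1=1.5, c2=1.5, vmax=1.0):
    """Minimal gbest-model PSO minimizing the sphere function sum(x_i^2)."""
    f = lambda x: sum(xi * xi for xi in x)
    X = [[random.uniform(-5.0, 5.0) for _ in range(dim)]
         for _ in range(num_particles)]
    V = [[0.0] * dim for _ in range(num_particles)]
    pbest = [x[:] for x in X]                 # personal best positions
    gbest = min(pbest, key=f)[:]              # best position of the swarm
    for _ in range(iters):
        for i in range(num_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Eq. (3): inertia + cognition term + social term
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                V[i][d] = max(-vmax, min(vmax, V[i][d]))  # clamp to V_max
                X[i][d] += V[i][d]                        # Eq. (4)
            if f(X[i]) < f(pbest[i]):
                pbest[i] = X[i][:]
        gbest = min(pbest, key=f)[:]
    return gbest
```

With these (convergent) parameter choices the swarm contracts quickly on the sphere function; replacing the objective with any of the benchmarks of Section 4.1 requires no structural change.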

2.2. Improved PSO Algorithms

Since Kennedy and Eberhart introduced the PSO algorithm, the algorithm and its improved schemes have been extensively applied to many problems [20–25]. Many researchers have proposed variants of PSO modified through swarm topology [8, 9], parameter selection [19, 26], combination of PSO with other evolutionary computation (EC) techniques [27, 28], integration of self-adaptation [29], and so on.

LPSO [8] and VPSO [9] were proposed based on a local topology to avoid premature convergence in solving multimodal problems. The FIPS algorithm [10] is another PSO variant, which uses the information of the entire neighborhood to guide the particles toward the best solution. Dynamic multiswarm PSO (DMS-PSO) [12] was suggested by Liang and Suganthan to dynamically enhance the topological structure. Ratnaweera et al. [11] proposed the HPSO-TVAC algorithm based on linearly time-varying acceleration coefficients, where a larger c1 and a smaller c2 are set at the beginning and are gradually reversed throughout the search. Liang et al. [13] presented comprehensive learning particle swarm optimization (CLPSO), which focuses on avoiding local optima by encouraging each particle to learn its behavior from other particles on different dimensions.

In other research, a selection operator for PSO was first introduced by Angeline [30]; it is similar to the one used in genetic algorithms (GA). Other researchers incorporated crossover [31] and mutation [29] operations from GA into PSO. Pant et al. added a quadratic crossover operator to the PSO algorithm, called quadratic interpolation PSO (QIPSO) [16]. An adaptive fuzzy particle swarm optimization (AFPSO) [19] was proposed that utilizes fuzzy inferences to adjust the acceleration coefficients; the quadratic crossover operator of [16] was also combined with it (AFPSO-QI) [19] to achieve better performance on multimodal problems. Zhan et al. presented an adaptive particle swarm optimization (APSO) [18] using a real-time evolutionary state estimation procedure and an elitist learning strategy. A PSO variant based on an orthogonal learning strategy (OLPSO) [32] was introduced to guide particles to discover useful information from their personal best positions and from their neighborhood's best position in order to fly in better directions. Gao et al. [33] used PSO with chaotic opposition-based population initialization and a stochastic search technique to solve complex multimodal problems; the resulting algorithm, CSPSO, finds new solutions in the neighborhoods of the previous best positions in order to escape from local optima in multimodal functions. Beheshti et al. proposed median-oriented particle swarm optimization (MPSO) [14] and centripetal accelerated particle swarm optimization (CAPSO) [15], based on Newton's laws of motion, to accelerate learning and convergence in optimization problems.

3. FGLT-PSO: The Proposed Method

3.1. FGLT-PSO Algorithm

FGLT-PSO aims to overcome the disadvantages of PSO by avoiding local optima and accelerating convergence. According to [14, 15], PSO shows better performance than LPSO on unimodal problems, while LPSO shows good results on multimodal problems. Hence, the local and global topologies are hybridized in FGLT-PSO to increase the convergence rate and to avoid becoming trapped in local optima.

In the FGLT-PSO algorithm, each particle uses the best position found by its neighbors (lbest_i) to update its velocity:

v_i(t + 1) = w·v_i(t) + c1·rand1·(pbest_i − x_i(t)) + c2·rand2·(lbest_i − x_i(t)).  (6)

The next position of each particle is computed based on the current position x_i(t), the next velocity v_i(t + 1), and the best position found by the swarm, gbest, as follows:

x_i(t + 1) = x_i(t) + v_i(t + 1) + c3·rand3·(gbest − x_i(t)).  (7)

In (6), lbest_i is computed as the best personal best position among the neighbors of particle i. Also, c1, c2, and c3 are acceleration coefficients, modified during the search according to (10) as functions of t/T, where t and T are the current iteration and the number of maximum iterations, respectively.

The second term in (6) is called the cognition term, and the third terms in (6) and (7) are named the social terms. In (7), the velocity is bounded, |v_i| ≤ V_max, where V_max is set to a constant based on the search space bound.
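One FGLT-PSO iteration can be sketched from the structure stated above: the velocity is driven by pbest and the neighborhood best lbest (6), and the position update carries an additional gbest social term (7). The exact forms of (6), (7), and the coefficient schedule (10) are given in the original equations; the values of w, c1, c2, c3, and V_max below are illustrative assumptions.

```python
import random

def fglt_step(X, V, pbest, lbest, gbest, w, c1, c2, c3, vmax):
    """One FGLT-PSO update step (a sketch of the structure of Eqs. (6)-(7)).

    X, V, pbest, lbest: lists of particle vectors; gbest: one vector.
    Mutates and returns X and V.
    """
    n, dim = len(X), len(X[0])
    for i in range(n):
        for d in range(dim):
            r1, r2, r3 = random.random(), random.random(), random.random()
            # Eq. (6): inertia + cognition term + local (lbest) social term
            V[i][d] = (w * V[i][d]
                       + c1 * r1 * (pbest[i][d] - X[i][d])
                       + c2 * r2 * (lbest[i][d] - X[i][d]))
            V[i][d] = max(-vmax, min(vmax, V[i][d]))  # clamp to V_max
            # Eq. (7): position update with an extra global (gbest) social term
            X[i][d] += V[i][d] + c3 * r3 * (gbest[d] - X[i][d])
    return X, V
```

The design intent is visible in the last line: even a particle whose velocity has collapsed is still pulled toward gbest, which is exactly the stagnation escape mechanism discussed in Section 3.2.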

3.2. Analysis of FGLT-PSO

A metaheuristic algorithm explores new spaces in its initial steps to avoid becoming trapped in a local optimum. Owing to its poor exploration, the standard PSO can sometimes settle in local optima in multimodal problems: once a particle falls into a local optimum, it may not be able to get out of it. That is, if the swarm best gbest lies in a local optimum while the current position x_i and the personal best position pbest_i of particle i are in the same local optimum, the second and third terms of (3) tend to zero and w decreases linearly to near zero. Consequently, the next velocity of particle i tends to zero, and its next position in (4) does not change; thus, the particle remains in the local optimum. Hence, the main aim of FGLT-PSO is to overcome the poor exploration and to increase the convergence rate by combining the local and global searches, as shown in Figure 1. The particles move in the search space based on the best solutions found by their neighbors (lbest) and by the swarm (gbest). At the beginning, the particles search new spaces. As iterations elapse, the exploration should fade out and the exploitation should fade in; that is, the particles accelerate toward the good solution and search around it to find the best solution.
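The stagnation argument above can be checked numerically: when a particle's position, personal best, and the swarm best coincide, the cognition and social terms of (3) vanish and the velocity simply decays geometrically.

```python
# Illustration of the stagnation argument: with (pbest - x) = (gbest - x) = 0,
# Eq. (3) reduces to v <- w * v, so the velocity decays geometrically and the
# position update (4) eventually changes nothing.
def stagnant_velocity(v0, w, steps):
    v = v0
    for _ in range(steps):
        v = w * v  # Eq. (3) with both attraction terms equal to zero
    return v

print(stagnant_velocity(1.0, 0.7, 50))  # about 1.8e-08: the particle has stopped
```

After 50 iterations with w = 0.7 the remaining velocity is on the order of 1e-8, so the particle is effectively frozen in the local optimum, which is what FGLT-PSO's extra gbest term in (7) is designed to counteract.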

4. Experimental Results

In this section, the FGLT-PSO algorithm is compared with some well-known PSO algorithms. The algorithms are tested using various unimodal and multimodal functions in different dimensions. Several benchmark functions [34, 35] are selected to evaluate the performance of the proposed method.

4.1. Benchmark Functions

Twenty (20) minimization functions are applied in the experimental study, including unimodal, multimodal, rotated, shifted, and shifted-rotated functions, as detailed in Table 1. In the table, Range and D are the feasible bound and the dimension of each function, respectively, and f_min is the optimum value of the function. Among the benchmarks, functions f1–f4 are unimodal and functions f5–f9 are in the class of multimodal functions. Functions f10–f14 are rotated, and functions f15–f18 are shifted unimodal and multimodal functions. The two functions f19 and f20 are shifted-rotated multimodal functions.


Table 1: Test functions with their dimensions, feasible ranges, and optimum values. Every function is tested with D = 10/30/50. For most functions f_min = 0; the shifted and shifted-rotated functions f15–f20 have f_min = −450, −180, 390, −330, −140, and −330, respectively.

In unimodal functions, the convergence rate of the search algorithm is of more interest than the final result, because other methods have been specifically designed to optimize such functions. In multimodal functions, finding an optimal (or a good near-global-optimal) solution is what matters. These functions are more difficult to optimize because the number of local optima increases exponentially with the dimension. Therefore, search algorithms should avoid becoming trapped in a local optimum and should be able to obtain good solutions.

The rotation of a function increases its complexity but does not affect its shape. The rotated variable y is computed from x using an orthogonal matrix M [36], y = M·x, and is applied to obtain the fitness value of the rotated function, f_rot(x) = f(y).

In shifted functions, the global optimum is shifted to a new position o. All the test functions are listed as follows.
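Before the list, the rotation and shift constructions can be sketched in code. This is a sketch only: the orthogonal matrix is built here by QR decomposition of a Gaussian matrix (a common construction), whereas the paper follows the method of [36]; names and the f_bias parameter are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def random_orthogonal(d):
    """Random orthogonal matrix M (one common construction via QR)."""
    q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    return q

def rotated(f, M):
    """Rotated function: evaluate f at y = M x (axes mixed, shape unchanged)."""
    return lambda x: f(M @ x)

def shifted(f, o, f_bias=0.0):
    """Shifted function: optimum moved to o, optionally with bias f_min."""
    return lambda x: f(x - o) + f_bias

sphere = lambda x: float(np.sum(x ** 2))
```

For example, shifted(sphere, o, -450.0) attains its minimum value −450 at x = o, and because M is orthogonal, rotating the sphere leaves its values unchanged while genuinely mixing the coordinates of nonseparable functions.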

(1) Sphere Model (unimodal function):
f1(x) = Σ_{i=1}^{D} x_i².

(2) Schwefel's Problem 2.22 (unimodal function):
f2(x) = Σ_{i=1}^{D} |x_i| + Π_{i=1}^{D} |x_i|.

(3) Schwefel's Problem 1.2 (unimodal function):
f3(x) = Σ_{i=1}^{D} (Σ_{j=1}^{i} x_j)².

(4) Quartic Function, That Is, Noise (unimodal function):
f4(x) = Σ_{i=1}^{D} i·x_i⁴ + random[0, 1).

(5) Rosenbrock's Function (multimodal function):
f5(x) = Σ_{i=1}^{D−1} [100(x_{i+1} − x_i²)² + (x_i − 1)²].
f5 is unimodal in a 2- or 3-dimensional search space but can be treated as a multimodal function in high-dimensional cases.

(6) Generalized Schwefel's Problem 2.26 (multimodal function):
f6(x) = −Σ_{i=1}^{D} x_i·sin(√|x_i|).

(7) Ackley's Function (multimodal function):
f7(x) = −20·exp(−0.2·√((1/D)·Σ_{i=1}^{D} x_i²)) − exp((1/D)·Σ_{i=1}^{D} cos(2πx_i)) + 20 + e.

(8) Generalized Penalized Function (multimodal function):
f8(x) = (π/D)·{10·sin²(πy_1) + Σ_{i=1}^{D−1} (y_i − 1)²·[1 + 10·sin²(πy_{i+1})] + (y_D − 1)²} + Σ_{i=1}^{D} u(x_i, 10, 100, 4),
where y_i = 1 + (x_i + 1)/4 and u(x_i, a, k, m) = k(x_i − a)^m if x_i > a; 0 if −a ≤ x_i ≤ a; k(−x_i − a)^m if x_i < −a.

(9) Noncontinuous Rastrigin's Function (multimodal function):
f9(x) = Σ_{i=1}^{D} [y_i² − 10·cos(2πy_i) + 10], where y_i = x_i if |x_i| < 1/2 and y_i = round(2x_i)/2 otherwise.

(10) Rotated Schwefel's Problem 2.22 (unimodal function):
f10(x) = f2(y), y = M·x.

(11) Rotated Minima Function (multimodal function), evaluated on the rotated variable y = M·x.

(12) Rotated Rastrigin's Function (multimodal function):
f12(x) = Σ_{i=1}^{D} [y_i² − 10·cos(2πy_i) + 10], y = M·x.

(13) Rotated Weierstrass' Function (multimodal function):
f13(x) = Σ_{i=1}^{D} Σ_{k=0}^{k_max} [a^k·cos(2π·b^k·(y_i + 0.5))] − D·Σ_{k=0}^{k_max} [a^k·cos(π·b^k)], with a = 0.5, b = 3, k_max = 20, and y = M·x.

(14) Rotated Salomon's Function (multimodal function):
f14(x) = 1 − cos(2π·‖y‖) + 0.1·‖y‖, where ‖y‖ = √(Σ_{i=1}^{D} y_i²) and y = M·x.

(15) Shifted Schwefel's Problem 2.21 (unimodal function):
f15(x) = max_i |z_i| − 450, z = x − o.

(16) Shifted Generalized Griewank's Function (multimodal function):
f16(x) = (1/4000)·Σ_{i=1}^{D} z_i² − Π_{i=1}^{D} cos(z_i/√i) + 1 − 180, z = x − o.

(17) Shifted Rosenbrock's Function (multimodal function):
f17(x) = Σ_{i=1}^{D−1} [100(z_{i+1} − z_i²)² + (z_i − 1)²] + 390, z = x − o.

(18) Shifted Rastrigin's Function (multimodal function):
f18(x) = Σ_{i=1}^{D} [z_i² − 10·cos(2πz_i) + 10] − 330, z = x − o.

(19) Shifted Rotated Ackley's Function (multimodal function):
f19(x) = f7(z) − 140, z = (x − o)·M, where M is a linear transformation matrix with condition number = 100.

(20) Shifted Rotated Rastrigin's Function (multimodal function):
f20(x) = Σ_{i=1}^{D} [z_i² − 10·cos(2πz_i) + 10] − 330, z = (x − o)·M, where M is a linear transformation matrix with condition number = 2.
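A few of the base functions above can be written out directly, using their commonly cited forms (this is a sketch for reference, not the paper's code; the rotated and shifted variants are obtained by composing these with y = M·x or z = x − o).

```python
import math

def sphere(x):                      # base of f1
    return sum(v * v for v in x)

def schwefel_222(x):                # base of f2/f10: sum plus product of |x_i|
    s = sum(abs(v) for v in x)
    p = 1.0
    for v in x:
        p *= abs(v)
    return s + p

def rosenbrock(x):                  # base of f5/f17
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1.0) ** 2
               for i in range(len(x) - 1))

def ackley(x):                      # base of f7/f19
    d = len(x)
    return (-20.0 * math.exp(-0.2 * math.sqrt(sum(v * v for v in x) / d))
            - math.exp(sum(math.cos(2.0 * math.pi * v) for v in x) / d)
            + 20.0 + math.e)

def rastrigin(x):                   # base of f12/f18/f20
    return sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) + 10.0 for v in x)
```

Each of these attains its minimum value 0 at the origin (at x = (1, …, 1) for Rosenbrock), matching the f_min column of Table 1 before shifting.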

4.2. Results of FGLT-PSO

The results of FGLT-PSO are provided in three sections. In Section 4.2.1, the acceleration coefficients c1, c2, and c3 in the proposed method are changed according to (10), while in Section 4.2.2 these coefficients are constant. In these sections, FGLT-PSO is evaluated using the benchmark functions with dimensions 10, 30, and 50. The number of maximum iterations is set at 5000 for D = 10, at 10000 for D = 30, and at 15000 for D = 50. The population size is set to 50 (N = 50). Also, w decreases linearly from 0.9 to 0.4.
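The linearly decreasing inertia weight used above is a simple interpolation over the iteration budget; a one-line helper (names are illustrative):

```python
def inertia(t, T, w_start=0.9, w_end=0.4):
    """Linearly decreasing inertia weight: w(0) = w_start, w(T) = w_end."""
    return w_start - (w_start - w_end) * t / T
```

Early iterations thus favor exploration (w near 0.9) and late iterations favor exploitation (w near 0.4), matching the fade-out/fade-in behavior described in Section 3.2.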

In Section 4.2.3, the results of FGLT-PSO are compared with those of several well-known PSO algorithms from [19] on the common functions. In this section, the population size is set to 30 (N = 30), D is 30, and the number of maximum iterations is set at 10000.

The ring topology is used as the neighborhood structure in the lbest model for the FGLT-PSO and LPSO algorithms, and the number of neighbors for each particle is three. The algorithms are run independently 30 times on each benchmark function, and the results are averaged.
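The ring-neighborhood lbest computation can be sketched as follows, assuming (an assumption, since the text does not spell it out) that each particle's three-member neighborhood is {i−1, i, i+1} with wrap-around indices:

```python
def ring_lbest(pbest, fitness):
    """Best personal-best position in each particle's ring neighborhood
    {i-1, i, i+1} (indices wrap around), assuming minimization."""
    n = len(pbest)
    lbest = []
    for i in range(n):
        hood = [(i - 1) % n, i, (i + 1) % n]          # three neighbors on the ring
        j = min(hood, key=lambda k: fitness[k])        # best fitness in the hood
        lbest.append(pbest[j])
    return lbest
```

Because information only travels one ring position per iteration, a good solution spreads slowly through the swarm, which is exactly why the lbest model resists premature convergence on multimodal functions.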

4.2.1. The Results of Proposed Method with Variable Acceleration Coefficients

The four algorithms FGLT-PSO, PSO, LPSO, and QIPSO are randomly initialized and run on the benchmark functions. The average best solution, the standard deviation (SD), and the median of the best solution in the last iteration are reported in Tables 2, 3, 6, 7, 10, and 11. The best results among the algorithms are shown in bold numbers. In the tables, the algorithms are ranked based on the average best results.


Table 2: Results of the algorithms on functions f1–f9 (D = 10).

Function | Metric | FGLT-PSO | PSO | LPSO | QIPSO
f1 | Avg. best solution | 0.000e+000 | 1.405e−135 | 1.452e−063 | 1.526e−138
f1 | SD | 0.000e+000 | 6.789e−135 | 5.681e−063 | 5.723e−138
f1 | Median best solution | 0.000e+000 | 1.243e−139 | 7.381e−065 | 3.961e−142
f1 | Avg. iteration for best | 2646 | 5000 | 5000 | 5000
f2 | Avg. best solution | 2.741e−273 | 2.992e−077 | 4.130e−038 | 6.663e−079
f2 | SD | 0.000e+000 | 1.490e−076 | 7.564e−038 | 1.234e−078
f2 | Median best solution | 6.578e−277 | 3.849e−079 | 9.945e−039 | 1.019e−079
f2 | Avg. iteration for best | 4952 | 5000 | 4999 | 5000
f3 | Avg. best solution | 1.637e−116 | 3.215e−044 | 5.110e−014 | 3.919e−044
f3 | SD | 8.969e−116 | 1.565e−043 | 1.126e−013 | 2.117e−043
f3 | Median best solution | 2.369e−127 | 1.857e−048 | 5.320e−015 | 3.570e−048
f3 | Avg. iteration for best | 4630 | 4999 | 4999 | 4999
f4 | Avg. best solution | 3.209e−004 | 4.738e−004 | 1.269e−003 | 6.514e−004
f4 | SD | 3.006e−004 | 2.057e−004 | 4.861e−004 | 3.241e−004
f4 | Median best solution | 2.114e−004 | 4.595e−004 | 1.197e−003 | 5.797e−004
f4 | Avg. iteration for best | 3715 | 4404 | 4414 | 4459
f5 | Avg. best solution | 2.973e−001 | 1.820e+000 | 1.661e+000 | 2.040e+000
f5 | SD | 1.015e+000 | 1.336e+000 | 1.281e+000 | 1.622e+000
f5 | Median best solution | 4.614e−004 | 2.014e+000 | 1.727e+000 | 1.862e+000
f5 | Avg. iteration for best | 4537 | 4811 | 4988 | 4837
f6 | Avg. best solution | −3.910e+003 | −3.452e+003 | −3.892e+003 | −3.424e+003
f6 | SD | 1.184e+002 | 2.307e+002 | 1.843e+002 | 2.775e+002
f6 | Median best solution | −3.953e+003 | −3.475e+003 | −3.834e+003 | −3.475e+003
f6 | Avg. iteration for best | 2974 | 2726 | 3918 | 2635
f7 | Avg. best solution | 4.441e−015 | 4.322e−015 | 4.441e−015 | 4.441e−015
f7 | SD | 0.000e+000 | 6.486e−016 | 0.000e+000 | 0.000e+000
f7 | Median best solution | 4.441e−015 | 4.441e−015 | 4.441e−015 | 4.441e−015
f7 | Avg. iteration for best | 376314737723109
f8 | Avg. best solution | 4.712e−032 | 4.712e−032 | 4.712e−032 | 4.712e−032
f8 | SD | 1.670e−047 | 1.670e−047 | 1.670e−047 | 1.670e−047
f8 | Median best solution | 4.712e−032 | 4.712e−032 | 4.712e−032 | 4.712e−032
f8 | Avg. iteration for best | 420316638952687
f9 | Avg. best solution | 9.495e−002 | 9.667e−001 | 1.574e+000 | 0.000e+000
f9 | SD | 2.720e−001 | 4.560e+000 | 1.065e+000 | 0.000e+000
f9 | Median best solution | 9.236e−008 | 0.000e+000 | 2.000e+000 | 0.000e+000
f9 | Avg. iteration for best | 4856 | 3357 | 4366 | 3401
— | Avg. rank | 1.1 | 2.9 | 3.4 | 2.6
— | Final rank | 1 | 3 | 4 | 2


Table 3: Results of the algorithms on functions f10–f20 (D = 10).

Function | Metric | FGLT-PSO | PSO | LPSO | QIPSO
f10 | Avg. best solution | 7.056e−208 | 4.585e−059 | 2.358e−021 | 1.127e−058
f10 | SD | 0.000e+000 | 8.736e−059 | 7.425e−021 | 5.346e−058
f10 | Median best solution | 1.930e−209 | 7.785e−060 | 3.417e−022 | 1.023e−060
f10 | Avg. iteration for best | 4741 | 4999 | 4997 | 4999
f11 | Avg. best solution | −6.602e+001 | −6.645e+001 | −6.616e+001 | −6.613e+001
f11 | SD | 1.736e+000 | 1.175e+000 | 9.989e−001 | 1.720e+000
f11 | Median best solution | −6.606e+001 | −6.614e+001 | −6.607e+001 | −6.588e+001
f11 | Avg. iteration for best | 3408 | 2930 | 3099 | 3185
f12 | Avg. best solution | 1.604e+001 | 2.352e+001 | 2.319e+001 | 2.278e+001
f12 | SD | 2.974e+000 | 3.762e+000 | 4.012e+000 | 4.387e+000
f12 | Median best solution | 1.589e+001 | 2.358e+001 | 2.351e+001 | 2.240e+001
f12 | Avg. iteration for best | 2394 | 3656 | 3786 | 3660
f13 | Avg. best solution | 0.000e+000 | 1.351e+000 | 7.438e−008 | 7.957e−001
f13 | SD | 0.000e+000 | 3.079e+000 | 4.072e−007 | 2.428e+000
f13 | Median best solution | 0.000e+000 | 0.000e+000 | 0.000e+000 | 0.000e+000
f13 | Avg. iteration for best | 448319248773446
f14 | Avg. best solution | 1.165e−001 | 1.032e−001 | 9.987e−002 | 1.132e−001
f14 | SD | 3.790e−002 | 1.826e−002 | 0.000e+000 | 3.457e−002
f14 | Median best solution | 9.987e−002 | 9.987e−002 | 9.987e−002 | 9.987e−002
f14 | Avg. iteration for best | 1998 | 2916 | 3268 | 2795
f15 | Avg. best solution | −4.500e+002 | −4.396e+002 | −4.480e+002 | −4.406e+002
f15 | SD | 8.039e−014 | 7.334e+000 | 2.470e+000 | 7.991e+000
f15 | Median best solution | −4.500e+002 | −4.450e+002 | −4.500e+002 | −4.450e+002
f15 | Avg. iteration for best | 132475735611213
f16 | Avg. best solution | −1.798e+002 | −1.745e+002 | −1.793e+002 | −1.766e+002
f16 | SD | 3.789e−001 | 8.186e+000 | 5.222e−001 | 3.309e+000
f16 | Median best solution | −1.800e+002 | −1.780e+002 | −1.789e+002 | −1.780e+002
f16 | Avg. iteration for best | 2922 | 2592 | 3398 | 2711
f17 | Avg. best solution | 3.915e+002 | 9.048e+006 | 6.752e+004 | 6.596e+006
f17 | SD | 6.315e+000 | 1.678e+007 | 3.676e+005 | 1.638e+007
f17 | Median best solution | 3.900e+002 | 2.014e+006 | 3.933e+002 | 4.867e+002
f17 | Avg. iteration for best | 4831 | 3983 | 4884 | 4090
f18 | Avg. best solution | −3.287e+002 | −3.215e+002 | −3.247e+002 | −3.233e+002
f18 | SD | 1.113e+000 | 5.979e+000 | 3.144e+000 | 4.822e+000
f18 | Median best solution | −3.290e+002 | −3.225e+002 | −3.250e+002 | −3.225e+002
f18 | Avg. iteration for best | 4809 | 3193 | 4509 | 3331
f19 | Avg. best solution | −119.744 | −119.745 | −119.804 | −119.746
f19 | SD | 5.750e−002 | 7.142e−002 | 5.558e−002 | 6.911e−002
f19 | Median best solution | −119.731 | −119.748 | −119.804 | −119.73
f19 | Avg. iteration for best | 1849 | 2240 | 2072 | 2414
f20 | Avg. best solution | −3.196e+002 | −3.091e+002 | −3.137e+002 | −3.078e+002
f20 | SD | 4.236e+000 | 7.174e+000 | 5.338e+000 | 7.995e+000
f20 | Median best solution | −3.196e+002 | −3.102e+002 | −3.130e+002 | −3.079e+002
f20 | Avg. iteration for best | 3252 | 3315 | 4721 | 3479
— | Avg. rank | 1.8 | 3.2 | 2.1 | 2.9
— | Final rank | 1 | 4 | 2 | 3

Moreover, Wilcoxon's rank sum test [37] is conducted to determine whether the results obtained by FGLT-PSO differ from those generated by the other algorithms with statistical significance. The tests are shown in Tables 4, 5, 8, 9, 12, and 13, where h-value = 1 indicates that the proposed algorithm significantly outperformed the compared algorithm with 95% certainty, h-value = −1 indicates that the compared algorithm is significantly better than the proposed algorithm, and h-value = 0 denotes that the results of the two algorithms are not significantly different. In these tables, the rows 1 (better), 0 (same), and −1 (worse) give the number of functions on which FGLT-PSO performs significantly better than, almost the same as, and significantly worse than the compared algorithm, respectively.
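The z statistic reported in these tables can be reproduced with the rank-sum test's usual normal approximation (scipy.stats.ranksums computes the same quantity); below is a self-contained sketch, with ties given average ranks and no continuity correction:

```python
import math

def ranksum_z(a, b):
    """Wilcoxon rank-sum z statistic (normal approximation, average ranks
    for ties). Negative z means sample `a` tends to have smaller values."""
    combined = sorted((v, 0 if idx < len(a) else 1)
                      for idx, v in enumerate(list(a) + list(b)))
    # assign 1-based ranks, averaging over tied values
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[k] = avg
        i = j + 1
    r_a = sum(r for r, (_, src) in zip(ranks, combined) if src == 0)
    n1, n2 = len(a), len(b)
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    return (r_a - mu) / sigma
```

A z below about −1.96 (p < 0.05) corresponds to h-value = 1 in the tables when FGLT-PSO's 30 run results are the first sample, which matches the sign convention of the large negative z values reported for the functions FGLT-PSO wins.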


Table 4: Wilcoxon's rank sum test, FGLT-PSO versus the compared algorithms on functions f1–f9 (D = 10).

Function | Statistic | PSO | LPSO | QIPSO
f1 | p value | 1.2118e−012 | 1.2118e−012 | 1.2118e−012
f1 | h-value | 1 | 1 | 1
f1 | z-value | −7.10402 | −7.10402 | −7.10402
f2 | p value | 3.0199e−011 | 3.0199e−011 | 3.0199e−011
f2 | h-value | 1 | 1 | 1
f2 | z-value | −6.6456 | −6.6456 | −6.6456
f3 | p value | 3.0199e−011 | 3.0199e−011 | 3.0199e−011
f3 | h-value | 1 | 1 | 1
f3 | z-value | −6.6456 | −6.6456 | −6.6456
f4 | p value | 1.0576e−003 | 6.1210e−010 | 3.5923e−005
f4 | h-value | 1 | 1 | 1
f4 | z-value | −3.27475 | −6.18728 | −4.13225
f5 | p value | 1.5581e−008 | 1.8731e−007 | 1.0095e−008
f5 | h-value | 1 | 1 | 1
f5 | z-value | −5.65504 | −5.21151 | −5.72912
f6 | p value | 6.3474e−011 | 1.2724e−001 | 9.3829e−010
f6 | h-value | 1 | 0 | 1
f6 | z-value | −6.53532 | −1.52509 | −6.11957
f7 | p value | 3.3371e−001 | — | —
f7 | h-value | 0 | 0 | 0
f7 | z-value | 0.966667 | — | —
f8 | p value | — | — | —
f8 | h-value | 0 | 0 | 0
f8 | z-value | — | — | —
f9 | p value | 1.9097e−005 | 1.6703e−006 | 6.2470e−010
f9 | h-value | −1 | 1 | −1
f9 | z-value | 4.27519 | −4.7897 | 6.18407
— | 1 (better) | 6 | 6 | 6
— | 0 (same) | 2 | 3 | 2
— | −1 (worse) | 1 | 0 | 1


Table 5: Wilcoxon's rank sum test, FGLT-PSO versus the compared algorithms on functions f10–f20 (D = 10).

Function | Statistic | PSO | LPSO | QIPSO
f10 | p value | 3.0199e−011 | 3.0199e−011 | 3.0199e−011
f10 | h-value | 1 | 1 | 1
f10 | z-value | −6.6456 | −6.6456 | −6.6456
f11 | p value | 4.1191e−001 | 7.3940e−001 | 8.4180e−001
f11 | h-value | 0 | 0 | 0
f11 | z-value | 0.820536 | 0.33265 | 0.19959
f12 | p value | 1.8567e−009 | 1.8500e−008 | 5.5329e−008
f12 | h-value | 1 | 1 | 1
f12 | z-value | −6.00987 | −5.62547 | −5.43328
f13 | p value | 2.1577e−002 | 2.1577e−002 | 8.1523e−002
f13 | h-value | 1 | 1 | 0
f13 | z-value | −2.29773 | −2.29773 | −1.74192
f14 | p value | 1.1439e−001 | 1.4425e−004 | 7.7181e−001
f14 | h-value | 0 | −1 | 0
f14 | z-value | 1.57878 | 3.80076 | 0.290007
f15 | p value | 1.5553e−011 | 1.2883e−011 | 1.7966e−008
f15 | h-value | 1 | 1 | 1
f15 | z-value | −6.74264 | −6.76995 | −5.63053
f16 | p value | 1.2755e−010 | 1.9667e−003 | 5.5338e−010
f16 | h-value | 1 | 1 | 1
f16 | z-value | −6.43006 | −3.09521 | −6.20317
f17 | p value | 7.3270e−011 | 3.6443e−008 | 5.0030e−010
f17 | h-value | 1 | 1 | 1
f17 | z-value | −6.51381 | −5.50727 | −6.21901
f18 | p value | 5.7512e−006 | 3.5043e−007 | 4.0269e−005
f18 | h-value | 1 | 1 | 1
f18 | z-value | −4.53533 | −5.09408 | −4.10593
f19 | p value | 9.2344e−001 | 2.3885e−004 | 8.6499e−001
f19 | h-value | 0 | −1 | 0
f19 | z-value | 0.0960988 | 3.67393 | 0.170021
f20 | p value | 5.2991e−008 | 9.2051e−005 | 4.6756e−008
f20 | h-value | 1 | 1 | 1
f20 | z-value | −5.44097 | −3.91064 | −5.46322
— | 1 (better) | 8 | 8 | 7
— | 0 (same) | 3 | 1 | 4
— | −1 (worse) | 0 | 2 | 0


Table 6: Results of the algorithms on functions f1–f9 (D = 30).

Function | Metric | FGLT-PSO | PSO | LPSO | QIPSO
f1 | Avg. best solution | 0.000e+000 | 5.397e−069 | 2.747e−028 | 3.502e−069
f1 | SD | 0.000e+000 | 1.161e−068 | 4.078e−028 | 1.664e−068
f1 | Median best solution | 0.000e+000 | 2.321e−070 | 1.047e−028 | 2.727e−072
f2 | Avg. best solution | 7.598e−110 | 7.000e+000 | 3.849e−028 | 2.333e+000
f2 | SD | 4.085e−109 | 7.944e+000 | 9.015e−028 | 4.302e+000
f2 | Median best solution | 1.684e−136 | 1.000e+001 | 1.331e−031 | 1.188e−071
f3 | Avg. best solution | 5.959e−020 | 1.189e+004 | 1.399e+002 | 5.733e+003
f3 | SD | 1.762e−019 | 6.605e+003 | 9.380e+001 | 5.484e+003
f3 | Median best solution | 5.483e−021 | 1.083e+004 | 1.254e+002 | 5.000e+003
f4 | Avg. best solution | 6.415e−003 | 1.320e−003 | 9.664e−003 | 1.635e−003
f4 | SD | 2.858e−003 | 8.008e−004 | 3.621e−003 | 9.357e−004
f4 | Median best solution | 5.941e−003 | 1.088e−003 | 9.106e−003 | 1.333e−003
f5 | Avg. best solution | 5.718e−001 | 1.129e+003 | 1.751e+001 | 3.858e+002
f5 | SD | 1.225e+000 | 3.023e+003 | 2.041e+001 | 1.832e+003
f5 | Median best solution | 1.707e−001 | 7.428e+001 | 9.494e+000 | 2.139e+001
f6 | Avg. best solution | −1.048e+004 | −9.095e+003 | −9.858e+003 | −9.227e+003
f6 | SD | 5.996e+002 | 7.444e+002 | 4.872e+002 | 7.524e+002
f6 | Median best solution | −1.058e+004 | −9.051e+003 | −9.828e+003 | −9.232e+003
f7 | Avg. best solution | 7.771e−002 | 1.013e−014 | 2.931e−014 | 8.941e−015
f7 | SD | 2.015e−001 | 3.312e−015 | 1.433e−014 | 2.457e−015
f7 | Median best solution | 7.994e−015 | 7.994e−015 | 2.576e−014 | 7.994e−015
f8 | Avg. best solution | 5.164e−003 | 1.037e−002 | 1.160e−025 | 1.571e−032
f8 | SD | 2.274e−002 | 3.163e−002 | 3.552e−025 | 5.567e−048
f8 | Median best solution | 1.571e−032 | 1.591e−032 | 3.737e−027 | 1.571e−032
f9 | Avg. best solution | 2.136e+001 | 5.827e+001 | 4.281e+001 | 4.310e+001
f9 | SD | 8.612e+000 | 3.080e+001 | 2.064e+001 | 2.901e+001
f9 | Median best solution | 2.212e+001 | 5.450e+001 | 3.753e+001 | 3.500e+001
— | Avg. rank | 1.8 | 3.3 | 2.6 | 2.3
— | Final rank | 1 | 4 | 3 | 2


Table 7: Results of the algorithms on functions f10–f20 (D = 30).

Function | Metric | FGLT-PSO | PSO | LPSO | QIPSO
f10 | Avg. best solution | 8.707e−020 | 3.422e+002 | 7.818e+000 | 5.318e+001
f10 | SD | 4.547e−019 | 1.556e+003 | 2.173e+001 | 1.986e+002
f10 | Median best solution | 4.028e−043 | 1.332e−006 | 2.267e−003 | 8.579e−020
f11 | Avg. best solution | −4.963e+001 | −4.981e+001 | −4.933e+001 | −4.993e+001
f11 | SD | 1.098e+000 | 1.324e+000 | 1.417e+000 | 1.489e+000
f11 | Median best solution | −4.930e+001 | −4.985e+001 | −4.927e+001 | −4.968e+001
f12 | Avg. best solution | 1.514e+002 | 1.935e+002 | 1.902e+002 | 1.886e+002
f12 | SD | 8.330e+000 | 1.509e+001 | 1.246e+001 | 1.233e+001
f12 | Median best solution | 1.511e+002 | 1.978e+002 | 1.899e+002 | 1.850e+002
f13 | Avg. best solution | 7.160e−001 | 3.101e+001 | 1.185e+001 | 2.046e+001
f13 | SD | 9.412e−001 | 1.248e+001 | 1.136e+001 | 1.804e+001
f13 | Median best solution | 2.067e−001 | 3.613e+001 | 5.997e+000 | 3.319e+001
f14 | Avg. best solution | 4.065e−001 | 3.465e−001 | 3.632e−001 | 3.599e−001
f14 | SD | 1.230e−001 | 5.074e−002 | 5.561e−002 | 7.240e−002
f14 | Median best solution | 3.999e−001 | 2.999e−001 | 3.999e−001 | 3.499e−001
f15 | Avg. best solution | −4.280e+002 | −4.044e+002 | −4.175e+002 | −4.500e+002
f15 | SD | 7.441e+000 | 7.592e+000 | 8.860e+000 | 2.151e−006
f15 | Median best solution | −4.266e+002 | −4.056e+002 | −4.155e+002 | −4.500e+002
f16 | Avg. best solution | −1.794e+002 | −8.340e+001 | −1.705e+002 | −9.806e+001
f16 | SD | 1.112e+000 | 4.560e+001 | 4.344e+000 | 3.839e+001
f16 | Median best solution | −1.798e+002 | −9.269e+001 | −1.704e+002 | −1.053e+002
f17 | Avg. best solution | 4.225e+002 | 1.658e+009 | 8.168e+007 | 1.549e+009
f17 | SD | 5.410e+001 | 1.409e+009 | 5.644e+007 | 1.599e+009
f17 | Median best solution | 3.940e+002 | 1.136e+009 | 6.932e+007 | 1.205e+009
f18 | Avg. best solution | −2.874e+002 | −2.174e+002 | −2.455e+002 | −2.219e+002
f18 | SD | 1.680e+001 | 2.897e+001 | 1.196e+001 | 2.614e+001
f18 | Median best solution | −2.902e+002 | −2.199e+002 | −2.458e+002 | −2.227e+002
f19 | Avg. best solution | −1.1912e+002 | −1.1912e+002 | −1.1915e+002 | −1.1913e+002
f19 | SD | 5.597e−002 | 5.015e−002 | 5.377e−002 | 6.957e−002
f19 | Median best solution | −1.1911e+002 | −1.1911e+002 | −1.1915e+002 | −1.1913e+002
f20 | Avg. best solution | −2.367e+002 | −1.237e+002 | −1.499e+002 | −1.375e+002
f20 | SD | 2.561e+001 | 3.612e+001 | 2.719e+001 | 4.160e+001
f20 | Median best solution | −2.427e+002 | −1.256e+002 | −1.481e+002 | −1.374e+002
— | Avg. rank | 1.8 | 3.5 | 2.4 | 2.4
— | Final rank | 1 | 3 | 2 | 2


Table 8: Wilcoxon's rank sum test, FGLT-PSO versus the compared algorithms on functions f1–f9 (D = 30).

Function | Statistic | PSO | LPSO | QIPSO
f1 | p value | 1.2118e−012 | 1.2118e−012 | 1.2118e−012
f1 | h-value | 1 | 1 | 1
f1 | z-value | −7.10402 | −7.10402 | −7.10402
f2 | p value | 2.8991e−011 | 3.0199e−011 | 2.9822e−011
f2 | h-value | 1 | 1 | 1
f2 | z-value | −6.65161 | −6.6456 | −6.64745
f3 | p value | 3.0199e−011 | 3.0199e−011 | 3.0199e−011
f3 | h-value | 1 | 1 | 1
f3 | z-value | −6.6456 | −6.6456 | −6.6456
f4 | p value | 1.2057e−010 | 2.2539e−004 | 1.9568e−010
f4 | h-value | −1 | 1 | −1
f4 | z-value | 6.43862 | −3.68871 | 6.3647
f5 | p value | 1.2018e−008 | 1.2472e−004 | 5.0922e−008
f5 | h-value | 1 | 1 | 1
f5 | z-value | −5.69948 | −3.83667 | −5.44806
f6 | p value | 1.2018e−008 | 1.2472e−004 | 5.0922e−008
f6 | h-value | 1 | 1 | 1
f6 | z-value | −5.69948 | −3.83667 | −5.44806
f7 | p value | 4.2942e−002 | 3.7277e−001 | 2.8314e−003
f7 | h-value | −1 | 0 | −1
f7 | z-value | 2.02428 | −0.891289 | 2.98547
f8 | p value | 1.7374e−001 | 2.4305e−002 | 6.6167e−004
f8 | h-value | 0 | 1 | −1
f8 | z-value | −1.36028 | −2.25228 | 3.40499
f9 | p value | 1.9990e−006 | 2.7726e−005 | 7.2926e−004
f9 | h-value | 1 | 1 | 1
f9 | z-value | −4.75352 | −4.19138 | −3.37834
— | 1 (better) | 6 | 8 | 6
— | 0 (same) | 1 | 1 | 0
— | −1 (worse) | 2 | 0 | 3


Table 9: Wilcoxon's rank sum test, FGLT-PSO versus the compared algorithms on functions f10–f20 (D = 30).

Function | Statistic | PSO | LPSO | QIPSO
f10 | p value | 3.2922e−010 | 1.4041e−010 | 3.1451e−009
f10 | h-value | 1 | 1 | 1
f10 | z-value | −6.28435 | −6.41545 | −5.92384
f11 | p value | 1.0233e−001 | 3.0418e−001 | 4.4642e−001
f11 | h-value | 0 | 0 | 0
f11 | z-value | 1.63368 | −1.02752 | 0.761398
f12 | p value | 1.0937e−010 | 1.4643e−010 | 3.0199e−011
f12 | h-value | 1 | 1 | 1
f12 | z-value | −6.4534 | −6.40905 | −6.6456
f13 | p value | 1.4288e−008 | 1.3281e−010 | 1.1736e−003
f13 | h-value | 1 | 1 | 1
f13 | z-value | −5.66991 | −6.42392 | −3.24523
f14 | p value | 1.3470e−003 | 6.7707e−003 | 3.5527e−003
f14 | h-value | −1 | −1 | −1
f14 | z-value | 3.20578 | 2.70792 | 2.91537
f15 | p value | 2.8840e−010 | 4.7657e−005 | 2.8252e−011
f15 | h-value | 1 | 1 | −1
f15 | z-value | −6.30489 | −4.06683 | 6.65541
f16 | p value | 2.9543e−011 | 5.3712e−011 | 2.9543e−011
f16 | h-value | 1 | 1 | 1
f16 | z-value | −6.64883 | −6.56027 | −6.64883
f17 | p value | 3.0199e−011 | 3.0199e−011 | 3.0199e−011
f17 | h-value | 1 | 1 | 1
f17 | z-value | −6.6456 | −6.6456 | −6.6456
f18 | p value | 1.0937e−010 | 1.2870e−009 | 1.4643e−010
f18 | h-value | 1 | 1 | 1
f18 | z-value | −6.4534 | −6.06901 | −6.40905
f19 | p value | 8.7663e−001 | 4.0595e−002 | 7.7312e−001
f19 | h-value | 0 | 1 | 0
f19 | z-value | 0.155236 | 2.04764 | 0.288296
f20 | p value | 3.0199e−011 | 7.3891e−011 | 4.9752e−011
f20 | h-value | 1 | 1 | 1
f20 | z-value | −6.6456 | −6.51254 | −6.57168
— | 1 (better) | 8 | 9 | 7
— | 0 (same) | 2 | 1 | 2
— | −1 (worse) | 1 | 1 | 2


Table 10: Results of the algorithms on functions f1–f9 (D = 50).

Function | Metric | FGLT-PSO | PSO | LPSO | QIPSO
f1 | Avg. best solution | 5.239e−232 | 7.5857e−049 | 7.297e−020 | 2.446e−049
f1 | SD | 0.000e+000 | 2.1243e−048 | 7.695e−020 | 8.478e−049
f1 | Median best solution | 2.541e−251 | 4.5415e−050 | 4.843e−020 | 1.395e−050
f2 | Avg. best solution | 9.246e−075 | 3.400e+001 | 2.194e−015 | 1.367e+001
f2 | SD | 3.646e−074 | 1.714e+001 | 1.439e−015 | 1.159e+001
f2 | Median best solution | 2.684e−080 | 3.000e+001 | 1.881e−015 | 1.000e+001
f3 | Avg. best solution | 1.098e−008 | 4.167e+004 | 2.775e+004 | 3.585e+004
f3 | SD | 2.405e−008 | 1.404e+004 | 8.532e+003 | 1.967e+004
f3 | Median best solution | 5.928e−009 | 4.002e+004 | 2.859e+004 | 3.434e+004
f4 | Avg. best solution | 5.070e−002 | 6.015e+000 | 6.452e−002 | 2.055e−002
f4 | SD | 2.802e−002 | 5.352e+000 | 1.488e−002 | 4.363e−003
f4 | Median best solution | 4.205e−002 | 5.385e+000 | 6.409e−002 | 1.971e−002
f5 | Avg. best solution | 6.128e+000 | 1.568e+003 | 6.024e+001 | 4.700e+002
f5 | SD | 5.716e+000 | 3.409e+003 | 3.032e+001 | 1.817e+003
f5 | Median best solution | 5.080e+000 | 8.179e+001 | 7.397e+001 | 7.942e+001
f6 | Avg. best solution | −1.505e+004 | −1.364e+004 | −1.470e+004 | −1.339e+004
f6 | SD | 9.481e+002 | 9.581e+002 | 8.491e+002 | 1.025e+003
f6 | Median best solution | −1.515e+004 | −1.349e+004 | −1.445e+004 | −1.343e+004
f7 | Avg. best solution | 1.299e+000 | 2.049e+000 | 1.145e−005 | 1.782e−014
f7 | SD | 5.560e−001 | 4.661e+000 | 6.223e−005 | 3.695e−015
f7 | Median best solution | 1.282e+000 | 2.220e−014 | 1.782e−009 | 1.510e−014
f8 | Avg. best solution | 1.061e−001 | 1.451e−002 | 3.831e−012 | 2.074e−003
f8 | SD | 1.935e−001 | 3.535e−002 | 1.984e−011 | 1.136e−002
f8 | Median best solution | 2.081e−005 | 1.308e−032 | 2.881e−015 | 9.423e−033
f9 | Avg. best solution | 4.759e+001 | 1.980e+002 | 1.872e+002 | 1.546e+002
f9 | SD | 2.294e+001 | 4.888e+001 | 3.958e+001 | 5.192e+001
f9 | Median best solution | 4.636e+001 | 1.890e+002 | 1.850e+002 | 1.475e+002
— | Avg. rank | 1.7 | 3.7 | 2.3 | 2.3
— | Final rank | 1 | 3 | 2 | 2


Function | Metric | FGLT-PSO | PSO | LPSO | QIPSO
F10 | Avg. best solution | 9.936e−031 | 6.202e+009 | 5.150e+003 | 3.989e+006
F10 | SD | 5.442e−030 | 2.576e+010 | 1.153e+004 | 1.825e+007
F10 | Median best solution | 1.121e−048 | 1.763e+005 | 1.184e+002 | 5.637e−007
F11 | Avg. best solution | −4.401e+001 | −4.328e+001 | −4.357e+001 | −4.385e+001
F11 | SD | 1.043e+000 | 1.331e+000 | 9.431e−001 | 1.329e+000
F11 | Median best solution | −4.391e+001 | −4.344e+001 | −4.339e+001 | −4.367e+001
F12 | Avg. best solution | 3.155e+002 | 4.234e+002 | 3.995e+002 | 4.045e+002
F12 | SD | 1.166e+001 | 3.689e+001 | 1.813e+001 | 2.468e+001
F12 | Median best solution | 3.171e+002 | 4.236e+002 | 3.995e+002 | 4.062e+002
F13 | Avg. best solution | 1.660e+001 | 6.640e+001 | 5.210e+001 | 4.946e+001
F13 | SD | 6.875e+000 | 2.371e+000 | 1.517e+001 | 2.599e+001
F13 | Median best solution | 1.759e+001 | 6.697e+001 | 6.163e+001 | 6.361e+001
F14 | Avg. best solution | 7.599e−001 | 5.899e−001 | 8.067e−001 | 6.099e−001
F14 | SD | 3.103e−001 | 8.030e−002 | 9.046e−002 | 8.847e−002
F14 | Median best solution | 6.999e−001 | 5.999e−001 | 7.999e−001 | 5.999e−001
F15 | Avg. best solution | −3.976e+002 | −3.795e+002 | −4.019e+002 | −4.499e+002
F15 | SD | 5.964e+000 | 1.974e+001 | 6.096e+000 | 1.849e−003
F15 | Median best solution | −3.974e+002 | −3.815e+002 | −4.021e+002 | −4.50e+002
F16 | Avg. best solution | −1.791e+002 | 8.313e+001 | −1.551e+002 | 6.894e+001
F16 | SD | 1.212e+000 | 9.121e+001 | 1.216e+001 | 7.896e+001
F16 | Median best solution | −1.797e+002 | 7.911e+001 | −1.565e+002 | 7.481e+001
F17 | Avg. best solution | 4.260e+002 | 8.648e+009 | 3.341e+008 | 8.196e+009
F17 | SD | 7.188e+001 | 4.442e+009 | 2.597e+008 | 5.588e+009
F17 | Median best solution | 3.977e+002 | 8.332e+009 | 2.599e+008 | 6.614e+009
F18 | Avg. best solution | −1.920e+002 | −4.497e+001 | −9.832e+001 | −3.612e+001
F18 | SD | 2.634e+001 | 4.206e+001 | 1.963e+001 | 4.934e+001
F18 | Median best solution | −1.944e+002 | −4.804e+001 | −9.679e+001 | −2.665e+001
F19 | Avg. best solution | −1.189e+002 | −1.189e+002 | −1.190e+002 | −1.189e+002
F19 | SD | 4.303e−002 | 4.364e−002 | 5.121e−002 | 5.576e−002
F19 | Median best solution | −1.189e+002 | −1.189e+002 | −1.190e+002 | −1.189e+002
F20 | Avg. best solution | −1.016e+002 | 1.537e+002 | 5.974e+001 | 1.082e+002
F20 | SD | 4.898e+001 | 8.906e+001 | 5.117e+001 | 9.020e+001
F20 | Median best solution | −1.004e+002 | 1.539e+002 | 6.594e+001 | 1.035e+002
Avg. rank | | 1.5 | 3.6 | 2.3 | 2.7
Final rank | | 1 | 4 | 2 | 3


Function | Wilcoxon’s rank sum test | PSO | LPSO | QIPSO
F1 | p-value | 3.0199e−011 | 3.0199e−011 | 3.0199e−011
F1 | h-value | 1 | 1 | 1
F1 | z-value | −6.6456 | −6.6456 | −6.6456
F2 | p-value | 2.9673e−011 | 3.0199e−011 | 2.9229e−011
F2 | h-value | 1 | 1 | 1
F2 | z-value | −6.64819 | −6.6456 | −6.65041
F3 | p-value | 3.0199e−011 | 3.0199e−011 | 3.0199e−011
F3 | h-value | 1 | 1 | 1
F3 | z-value | −6.6456 | −6.6456 | −6.6456
F4 | p-value | 2.5306e−004 | 3.9881e−004 | 3.1589e−010
F4 | h-value | 1 | 1 | −1
F4 | z-value | −3.65915 | −3.54087 | 6.29077
F5 | p-value | 6.1210e−010 | 9.7555e−010 | 1.2870e−009
F5 | h-value | 1 | 1 | 1
F5 | z-value | −6.18728 | −6.11336 | −6.06901
F6 | p-value | 2.8790e−006 | 9.3341e−002 | 2.5711e−007
F6 | h-value | 1 | 0 | 1
F6 | z-value | −4.67927 | −1.67803 | −5.15244
F7 | p-value | 7.7050e−006 | 3.0199e−011 | 1.4811e−011
F7 | h-value | −1 | −1 | −1
F7 | z-value | 4.47322 | 6.6456 | 6.74974
F8 | p-value | 7.7087e−002 | 8.2796e−003 | 1.4193e−007
F8 | h-value | 0 | −1 | −1
F8 | z-value | 1.76784 | 2.64045 | 5.26273
F9 | p-value | 3.3384e−011 | 3.3384e−011 | 4.6159e−010
F9 | h-value | 1 | 1 | 1
F9 | z-value | −6.63081 | −6.63081 | −6.23164
1 (better) | | 7 | 6 | 6
0 (same) | | 1 | 1 | 0
−1 (worse) | | 1 | 2 | 3


Function | Wilcoxon’s rank sum test | PSO | LPSO | QIPSO
F10 | p-value | 3.0199e−011 | 3.0199e−011 | 3.0199e−011
F10 | h-value | 1 | 1 | 1
F10 | z-value | −6.6456 | −6.6456 | −6.6456
F11 | p-value | 3.0317e−002 | 6.5671e−002 | 6.7350e−001
F11 | h-value | 1 | 0 | 0
F11 | z-value | −2.16592 | −1.84066 | −0.421356
F12 | p-value | 3.0199e−011 | 3.0199e−011 | 3.0199e−011
F12 | h-value | 1 | 1 | 1
F12 | z-value | −6.6456 | −6.6456 | −6.6456
F13 | p-value | 3.0199e−011 | 1.2057e−010 | 1.8916e−004
F13 | h-value | 1 | 1 | 1
F13 | z-value | −6.6456 | −6.43862 | −3.73307
F14 | p-value | 7.3131e−003 | 3.0749e−002 | 3.0494e−002
F14 | h-value | −1 | 1 | −1
F14 | z-value | 2.68224 | −2.16031 | 2.16361
F15 | p-value | 2.2256e−004 | 5.4038e−004 | 2.7547e−011
F15 | h-value | 1 | −1 | −1
F15 | z-value | −3.69193 | 3.4599 | 6.65912
F16 | p-value | 3.0142e−011 | 3.0123e−011 | 3.0142e−011
F16 | h-value | 1 | 1 | 1
F16 | z-value | −6.64588 | −6.64597 | −6.64588
F17 | p-value | 3.0199e−011 | 3.0199e−011 | 3.0199e−011
F17 | h-value | 1 | 1 | 1
F17 | z-value | −6.6456 | −6.6456 | −6.6456
F18 | p-value | 3.0199e−011 | 4.0772e−011 | 3.0199e−011
F18 | h-value | 1 | 1 | 1
F18 | z-value | −6.6456 | −6.60125 | −6.6456
F19 | p-value | 2.3399e−001 | 5.8737e−004 | 9.2344e−001
F19 | h-value | 0 | −1 | 0
F19 | z-value | 1.19015 | 3.43738 | 0.0960988
F20 | p-value | 3.3384e−011 | 1.3289e−010 | 8.9934e−011
F20 | h-value | 1 | 1 | 1
F20 | z-value | −6.63081 | −6.42383 | −6.48297
1 (better) | | 9 | 8 | 7
0 (same) | | 1 | 1 | 2
−1 (worse) | | 1 | 2 | 2

The acceleration coefficients c1, c2, and c3 are updated based on (10). Their minimum and maximum values are as follows: c1: 0.5–2, c2: 1–2, and c3: 0.5–1.5.
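Equation (10) is not reproduced in this excerpt, so the following is an illustration only: one common way to realize coefficients that vary between fixed minimum and maximum values is a linear schedule over the iteration counter. The function name, and the assumption that the variation is linear, are ours rather than the paper's definition.

```python
def linear_schedule(t, t_max, c_start, c_end):
    """Linearly vary a coefficient from c_start (at t = 0) to c_end (at t = t_max)."""
    return c_start + (c_end - c_start) * t / t_max

# Example: an inertia weight decreasing over 0.9-0.4, as listed in Table 15.
w_mid = linear_schedule(5000, 10000, 0.9, 0.4)  # 0.65 at mid-run
```

The same helper covers an increasing schedule (e.g., c2 over 1–2) simply by passing c_start < c_end.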

In these tables, the benchmark functions are divided into two categories: unimodal and multimodal functions, and rotated, shifted, and shifted-rotated unimodal and multimodal functions. The experimental results demonstrate that FGLT-PSO yields superior results for most of the functions in all tested dimensions.

Tables 2 and 3 show the experimental results for all benchmark functions with dimension D = 10. As illustrated, the FGLT-PSO algorithm surpasses the PSO, LPSO, and QIPSO algorithms in minimizing six of the functions. Moreover, the proposed method provides significant improvements on seven further functions, for which the convergent results attain the optimal (or near-optimal) solutions. Tables 2 and 3 also report the average iteration for finding the best solution, that is, the number of iterations each algorithm requires to reach its best solution. As shown, FGLT-PSO finds the best solutions faster than the other algorithms for the majority of functions. It is also noticeable that FGLT-PSO achieves the best solution in considerably fewer iterations for the two functions on which all algorithms return identical results.

According to Wilcoxon’s rank sum test in Tables 4 and 5 for D = 10, the results of FGLT-PSO are statistically significantly different from those of the three compared algorithms. The superior convergence rate of FGLT-PSO is shown in Figure 2: FGLT-PSO tends to find the global optimum of the two plotted functions faster than PSO, LPSO, and QIPSO and obtains the highest accuracy for these functions among all the algorithms.
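The p-, h-, and z-values in the rank sum tables come from Wilcoxon's test [37] applied to the 30 run results of each pair of algorithms. A self-contained sketch using the normal approximation is given below; the helper name is ours, and the h-value convention (1/0/−1 for better/same/worse, with the first sample significantly better on a minimization problem when z < 0) is our reading of the tables.

```python
import math

def rank_sum_test(a, b):
    """Two-sided Wilcoxon rank-sum test (normal approximation, no tie
    correction), returning (p, h, z) in the style of Tables 4-5 and 12-13."""
    n1, n2 = len(a), len(b)
    pooled = sorted((v, i) for i, v in enumerate(a + b))
    # Assign ranks, averaging over ties.
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < len(pooled):
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = avg_rank
        i = j + 1
    r1 = sum(ranks[:n1])                       # rank sum of sample a
    mu = n1 * (n1 + n2 + 1) / 2                # mean of r1 under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (r1 - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided
    # h: 1 if sample a is significantly better (lower values), -1 if worse.
    h = 0 if p >= 0.05 else (1 if z < 0 else -1)
    return p, h, z
```

With 30 independent runs per algorithm, `rank_sum_test(fglt_results, pso_results)` would produce one (p, h, z) triple per benchmark function.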

The minimization results of the benchmark functions with dimension D = 30 are presented in Tables 6 and 7. As seen in these tables and in the rank sum results of Tables 8 and 9, FGLT-PSO outperforms the PSO, LPSO, and QIPSO algorithms on thirteen of the functions. The largest performance differences between the proposed algorithm and PSO, LPSO, and QIPSO occur for eight of these functions. Figure 3 illustrates the progress of the average best solution over 30 runs for two representative functions; as demonstrated, FGLT-PSO shows a higher convergence rate than the other algorithms.

Tables 10 and 11 present the results of the algorithms for the test functions with dimension D = 50. As illustrated in these tables, and regarding the results of Wilcoxon’s rank sum test in Tables 12 and 13, the performance of the proposed method is the best for most of the functions, fourteen of them in particular. The superior convergence rate of FGLT-PSO is shown in Figure 4, where FGLT-PSO performs the best on the two plotted test functions with D = 50.

In addition, it is notable that the PSO, LPSO, and QIPSO algorithms return results far from the global optima as the dimension increases. This problem is clear for eight of the functions in Tables 10 and 11 with D = 50. All of these results indicate that the proposed FGLT-PSO algorithm is more powerful and robust than the others for solving unimodal and multimodal functions.

4.2.2. The Results of Proposed Method with Constant Acceleration Coefficients

In this section, the acceleration coefficients c1, c2, and c3 are set to constant values for comparison with the results presented in Section 4.2.1; the coefficient of the cognition term (c1) and those of the social terms (c2 and c3) are held fixed throughout the run. Table 14 shows the results of the proposed method for the benchmark functions with dimensions 10, 30, and 50. As seen, FGLT-PSO with constant acceleration coefficients performs well on most of the functions. As the dimension increases, FGLT-PSO with variable acceleration coefficients (Section 4.2.1) performs better than the constant-coefficient variant on six functions, while the constant-coefficient variant performs better on three functions.


Functions | Iteration = 5000, D = 10 | Iteration = 10000, D = 30 | Iteration = 15000, D = 50
 | Avg. best solution ± SD | Avg. best solution ± SD | Avg. best solution ± SD
F1 | 0.000e+000 ± 0.000e+000 | 1.936e−067 ± 1.060e−066 | 1.687e−038 ± 5.602e−038
F2 | 6.678e−160 ± 3.658e−159 | 4.922e−049 ± 2.305e−048 | 7.064e−032 ± 3.167e−031
F3 | 7.520e−035 ± 3.579e−034 | 2.026e−003 ± 9.321e−003 | 2.062e+000 ± 2.616e+000
F4 | 1.621e−004 ± 8.583e−005 | 9.078e−004 ± 3.306e−004 | 1.930e−003 ± 6.461e−004
F5 | 3.086e+000 ± 1.772e+000 | 3.542e+001 ± 2.864e+001 | 7.854e+001 ± 4.435e+001
F6 | −3.957e+003 ± 1.375e+002 | −1.052e+004 ± 4.069e+002 | −1.643e+004 ± 7.233e+002
F7 | 4.086e−015 ± 1.084e−015 | 5.507e−015 ± 1.656e−015 | 9.484e−002 ± 3.612e−001
F8 | 4.712e−032 ± 1.670e−047 | 4.492e−002 ± 1.379e−001 | 7.063e−002 ± 1.508e−001
F9 | 2.567e+000 ± 9.714e−001 | 2.040e+001 ± 6.100e+000 | 5.183e+001 ± 1.117e+001
F10 | 6.599e−249 ± 0.000e+000 | 3.539e−066 ± 1.939e−065 | 4.923e−019 ± 2.696e−018
F11 | −6.620e+001 ± 1.111e+000 | −5.036e+001 ± 9.041e−001 | −4.482e+001 ± 9.061e−001
F12 | 1.659e+001 ± 4.479e+000 | 1.602e+002 ± 1.096e+001 | 3.369e+002 ± 1.585e+001
F13 | 0.000e+000 ± 0.000e+000 | 0.000e+000 ± 0.000e+000 | 1.5294e−007 ± 1.1896e−007
F14 | 9.987e−002 ± 2.247e−017 | 3.932e−001 ± 1.081e−001 | 3.065e−001 ± 2.537e−002
F15 | −4.500e+002 ± 1.493e−014 | −4.268e+002 ± 1.016e+001 | −4.420e+002 ± 3.137e+000
F16 | −1.799e+002 ± 2.005e−001 | −1.787e+002 ± 3.851e+000 | −1.775e+002 ± 3.055e+000
F17 | 4.001e+002 ± 2.060e+001 | 4.661e+002 ± 1.083e+002 | 5.748e+005 ± 1.022e+006
F18 | −3.283e+002 ± 8.796e−001 | −2.956e+002 ± 8.540e+000 | −2.396e+002 ± 1.763e+001
F19 | −1.1974e+002 ± 5.990e−002 | −1.191e+002 ± 6.798e−002 | −1.189e+002 ± 3.710e−002
F20 | −3.196e+002 ± 4.136e+000 | −1.994e+002 ± 2.829e+001 | −2.684e+001 ± 4.506e+001

4.2.3. Comparison with the Other PSO Algorithms

In this section, several well-known PSO algorithms are selected to assess the performance of the proposed algorithm on the benchmarks. The PSO, QIPSO, FIPS, DMS-PSO, CLPSO, AFPSO, and AFPSO-QI algorithms are considered for comparison; the details of these algorithms are listed in Table 15. FGLT-PSO is run 30 times, and the average best solutions and the SD of the results for eight common multimodal benchmark functions are compared with the results reported in [19], as illustrated in Table 16. The maximum number of iterations is 10000. As seen, FGLT-PSO provides better results than the other algorithms for the majority of functions (six of the eight) and ranks first.


Algorithm | Topology | Parameter settings
PSO | Global star | ω: 0.9–0.4
QIPSO | Global star | ω: 0.9–0.4
FIPS | Local U-ring | χ = 0.729, Σci = 4.1
DMS-PSO | Dynamic multiswarm | ω: 0.9–0.2
CLPSO | Comprehensive learning | ω: 0.9–0.4
AFPSO | Global star | ω: 0.9–0.4, acceleration coefficients based on fuzzy rules [19]
AFPSO-QI | Global star | ω: 0.9–0.4, acceleration coefficients based on fuzzy rules [19]
FGLT-PSO | Global star and local ring | ω: 0.9–0.4, c1: 0.5–2, c2: 1–2, c3: 0.5–1.5
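LPSO and the local component of FGLT-PSO use a ring topology, in which each particle is guided by the fittest particle among its immediate neighbors rather than by the global best. A minimal sketch of that neighbor selection for a minimization problem (the function name is ours):

```python
def ring_best_neighbor(fitness, i):
    """Return the index of the fittest particle among {i-1, i, i+1} on a ring
    (indices wrap around), assuming lower fitness is better."""
    n = len(fitness)
    neighborhood = [(i - 1) % n, i, (i + 1) % n]
    return min(neighborhood, key=lambda j: fitness[j])
```

Because information spreads only one neighbor per iteration, the ring slows the swarm's convergence toward a single attractor, which is what gives the local topology its resistance to premature convergence.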


PSOs | Avg. best solution ± SD | Avg. best solution ± SD | Avg. best solution ± SD
PSO | −4.529e+001 ± 1.911e+000 | 3.202e+002 ± 1.470e+001 | 3.835e+001 ± 1.482e+000
QIPSO | −3.332e+001 ± 1.781e+000 | 3.175e+002 ± 2.324e+001 | 4.077e+001 ± 2.015e+000
FIPS | −1.955e+001 ± 8.477e+000 | 4.341e+002 ± 3.499e+001 | 4.155e+001 ± 1.363e+000
DMS-PSO | −4.572e+001 ± 1.703e+000 | 2.837e+002 ± 1.606e+001 | 3.632e+001 ± 1.225e+000
CLPSO | −4.529e+001 ± 1.269e+000 | 2.633e+002 ± 1.196e+001 | 3.496e+001 ± 1.768e+000
AFPSO | −4.547e+001 ± 1.608e+000 | 2.663e+002 ± 1.200e+001 | 3.609e+001 ± 2.540e+000
AFPSO-QI | −4.678e+001 ± 1.212e+000 | 2.533e+002 ± 1.263e+001 | 3.135e+001 ± 3.301e+000
FGLT-PSO | −4.920e+001 ± 1.091e+000 | 1.580e+002 ± 1.065e+001 | 9.933e+000 ± 3.737e+000

PSOs | Avg. best solution ± SD | Avg. best solution ± SD | Avg. best solution ± SD
PSO | 1.703e+001 ± 2.554e+000 | 3.217e+009 ± 3.880e+009 | −1.953e+002 ± 3.282e+001
QIPSO | 1.520e+001 ± 1.319e+000 | 2.347e+009 ± 1.872e+009 | −1.963e+002 ± 2.964e+001
FIPS | 2.660e+001 ± 1.417e+000 | 1.340e+003 ± 2.044e+003 | −2.173e+002 ± 3.076e+001
DMS-PSO | 1.292e+001 ± 1.328e+000 | 3.362e+008 ± 3.089e+008 | −2.456e+002 ± 1.293e+001
CLPSO | 1.194e+001 ± 1.365e+000 | 5.943e+002 ± 5.069e+001 | −2.605e+002 ± 7.359e+000
AFPSO | 1.038e+001 ± 1.379e+000 | 9.700e+007 ± 1.197e+008 | −2.718e+002 ± 1.072e+001
AFPSO-QI | 8.462e+000 ± 9.477e−001 | 8.832e+007 ± 9.793e+007 | −2.736e+002 ± 9.667e+000
FGLT-PSO | 5.899e−001 ± 2.820e−001 | 5.204e+002 ± 1.328e+002 | −2.658e+002 ± 2.078e+001

PSOs | Avg. best solution ± SD | Avg. best solution ± SD | Avg. rank | Final rank
PSO | −1.191e+002 ± 7.093e−002 | −1.118e+002 ± 4.289e+001 | 8 | 8
QIPSO | −1.191e+002 ± 5.677e−001 | −1.152e+002 ± 3.390e+001 | 6.6 | 7
FIPS | −1.199e+002 ± 3.239e−002 | −1.437e+002 ± 5.164e+001 | 5.6 | 6
DMS-PSO | −1.192e+002 ± 6.117e−002 | −1.910e+002 ± 1.994e+001 | 4.4 | 5
CLPSO | −1.190e+002 ± 3.781e−002 | −1.312e+002 ± 2.437e+001 | 4.3 | 4
AFPSO | −1.197e+002 ± 4.280e−002 | −1.267e+002 ± 2.727e+001 | 3.9 | 3
AFPSO-QI | −1.198e+002 ± 3.854e−001 | −1.339e+002 ± 2.208e+001 | 2.4 | 2
FGLT-PSO | −1.191e+002 ± 4.805e−002 | −2.166e+002 ± 3.117e+001 | 1.8 | 1

5. Conclusions

In this study, a fusion global-local-topology PSO algorithm (FGLT-PSO) has been presented to extend the search capability and improve convergence efficiency by combining local and global topologies. The algorithm is a global search method with several advantages: FGLT-PSO has a simple concept and structure, is easy to implement, and is not sensitive to increases in problem dimension.
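As a concrete illustration of the fusion idea (not the paper's exact update equation, which is given in the full text along with (10)), a per-dimension velocity update that mixes the personal best, the global-topology best, and the local ring best might look like the sketch below; the function name and default coefficient values (chosen inside the ranges of Table 15) are assumptions.

```python
import random

def fused_velocity(v, x, pbest, gbest, lbest, w=0.7, c1=1.5, c2=1.5, c3=1.0):
    """Sketch of a velocity update combining a cognitive term (pbest), a
    global-topology term (gbest), and a local ring-topology term (lbest)."""
    r1, r2, r3 = random.random(), random.random(), random.random()
    return (w * v
            + c1 * r1 * (pbest - x)
            + c2 * r2 * (gbest - x)
            + c3 * r3 * (lbest - x))
```

When gbest and lbest disagree, the two social terms pull the particle toward different regions, which is one way a global-local fusion can keep the swarm from collapsing onto a single local optimum.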

A set of standard benchmarks, including unimodal, multimodal, rotated, shifted, and shifted-rotated unimodal and multimodal functions, has been used to evaluate the proposed algorithm. The average best results obtained by FGLT-PSO have been compared with those of PSO, LPSO, QIPSO, FIPS, DMS-PSO, CLPSO, AFPSO, and AFPSO-QI. The experimental results show that the proposed FGLT-PSO algorithm improves the accuracy of the results compared with the other algorithms.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors thank the Research Management Centre (RMC), Universiti Teknologi Malaysia (UTM), for supporting this R&D, and the Soft Computing Research Group (SCRG), UTM, Johor Bahru, Malaysia, for the inspiration and moral support in conducting this research. They also thank the UTM postdoctoral program for its financial support of the research activities. This work is supported by the Ministry of Higher Education (MOHE) under Fundamental Research Grant Scheme FRGS-4F347.

References

  1. J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, December 1995. View at: Publisher Site | Google Scholar
  2. K. Y. Chan, T. S. Dillon, and C. K. Kwong, “Polynomial modeling for time-varying systems based on a particle swarm optimization algorithm,” Information Sciences, vol. 181, no. 9, pp. 1623–1640, 2011. View at: Publisher Site | Google Scholar
  3. V. Fathi and G. A. Montazer, “An improvement in RBF learning algorithm based on PSO for real time applications,” Neurocomputing, vol. 111, pp. 169–176, 2013. View at: Publisher Site | Google Scholar
  4. H. Huang, H. Qin, Z. Hao, and A. Lim, “Example-based learning particle swarm optimization for continuous optimization,” Information Sciences, vol. 182, no. 1, pp. 125–138, 2012. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet
  5. B. Y. Qu, J. J. Liang, and P. N. Suganthan, “Niching particle swarm optimization with local search for multi-modal optimization,” Information Sciences, vol. 197, pp. 131–143, 2012. View at: Publisher Site | Google Scholar
  6. H. Wang, I. Moon, S. Yang, and D. Wang, “A memetic particle swarm optimization algorithm for multimodal optimization problems,” Information Sciences, vol. 197, pp. 38–52, 2012. View at: Publisher Site | Google Scholar
  7. Z. Beheshti and S. M. Shamsuddin, International Journal of Advances in Soft Computing & Its Applications, vol. 5, no. 1, pp. 1–35, 2013.
  8. J. Kennedy and R. Mendes, “Population structure and particle swarm performance,” in Proceedings of IEEE international conference on Evolutionary Computation, vol. 2, pp. 1671–1676, 2002. View at: Google Scholar
  9. J. Kennedy and R. Mendes, “Neighborhood topologies in fully informed and best-of-neighborhood particle swarms,” IEEE Transactions on Systems, Man, and Cybernetics Part C, vol. 36, pp. 515–519, 2006. View at: Google Scholar
  10. R. Mendes, J. Kennedy, and J. Neves, “The fully informed particle swarm: simpler, maybe better,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 204–210, 2004. View at: Publisher Site | Google Scholar
  11. A. Ratnaweera, S. K. Halgamuge, and H. C. Watson, “Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 240–255, 2004. View at: Publisher Site | Google Scholar
  12. J. J. Liang and P. N. Suganthan, “Dynamic multi-swarm particle swarm optimizer,” in Proceedings of the IEEE Swarm Intelligence Symposium (SIS '05), pp. 127–132, June 2005. View at: Publisher Site | Google Scholar
  13. J. J. Liang, A. K. Qin, P. N. Suganthan, and S. Baskar, “Comprehensive learning particle swarm optimizer for global optimization of multimodal functions,” IEEE Transactions on Evolutionary Computation, vol. 10, no. 3, pp. 281–295, 2006. View at: Publisher Site | Google Scholar
  14. Z. Beheshti, S. M. H. Shamsuddin, and S. Hasan, “MPSO: median-oriented particle swarm optimization,” Applied Mathematics and Computation, vol. 219, no. 11, pp. 5817–5836, 2013. View at: Publisher Site | Google Scholar | MathSciNet
  15. Z. Beheshti and S. M. Hj. Shamsuddin, “CAPSO: centripetal accelerated particle swarm optimization,” Information Sciences, vol. 258, pp. 54–79, 2014. View at: Publisher Site | Google Scholar | MathSciNet
  16. M. Pant, T. Radha, and V. P. Singh, “A new particle swarm optimization with quadratic interpolation,” in Proceedings of the International Conference on Computational Intelligence and Multimedia Applications (ICCIMA '07), pp. 55–60, December 2007. View at: Publisher Site | Google Scholar
  17. P. Liu, W. Leng, and W. Fang, “Training ANFIS model with an improved quantum-behaved particle swarm optimization algorithm,” Mathematical Problems in Engineering, vol. 2013, Article ID 595639, 10 pages, 2013. View at: Publisher Site | Google Scholar
  18. Z.-H. Zhan, J. Zhang, Y. Li, and H. S.-H. Chung, “Adaptive particle swarm optimization,” IEEE Transactions on Systems, Man, and Cybernetics, Part B, vol. 39, no. 6, pp. 1362–1381, 2009. View at: Publisher Site | Google Scholar
  19. Y.-T. Juang, S.-L. Tung, and H.-C. Chiu, “Adaptive fuzzy particle swarm optimization for global optimization of multimodal functions,” Information Sciences, vol. 181, no. 20, pp. 4539–4549, 2011. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet
  20. Z. Beheshti, S. M. Shamsuddin, E. Beheshti, and S. S. Yuhaniz, “Enhancement of artificial neural network learning using centripetal accelerated particle swarm optimization for medical diseases diagnosis,” Soft Computing, 2013. View at: Publisher Site | Google Scholar
  21. Z. Beheshti, S. M. Shamsuddin, and S. S. Yuhaniz, “Binary accelerated particle swarm algorithm (BAPSA) for discrete optimization problems,” Journal of Global Optimization, vol. 57, no. 2, pp. 549–573, 2013. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet
  22. M. A. Cavuslu, C. Karakuzu, and F. Karakaya, “Neural identification of dynamic systems on FPGA with improved PSO learning,” Applied Soft Computing Journal, vol. 12, no. 9, pp. 2707–2718, 2012. View at: Publisher Site | Google Scholar
  23. L. Liu, S. Yang, and D. Wang, “Force-imitated particle swarm optimization using the near-neighbor effect for locating multiple optima,” Information Sciences, vol. 182, no. 1, pp. 139–155, 2012. View at: Publisher Site | Google Scholar | MathSciNet
  24. Z. Beheshti and S. M. Shamsuddin, Centripetal accelerated particle swarm optimization and its applications in machine learning [Ph.D. thesis], Universiti Teknologi Malaysia, 2013.
  25. Y. Wang, J. Zhou, C. Zhou, Y. Wang, H. Qin, and Y. Lu, “An improved self-adaptive PSO technique for short-term hydrothermal scheduling,” Expert Systems with Applications, vol. 39, no. 3, pp. 2288–2295, 2012. View at: Publisher Site | Google Scholar
  26. Q. Luo and D. Yi, “A co-evolving framework for robust particle swarm optimization,” Applied Mathematics and Computation, vol. 199, no. 2, pp. 611–622, 2008. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet
  27. P. S. Andrews, “An investigation into mutation operators for particle swarm optimization,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '06), pp. 1044–1051, July 2006. View at: Google Scholar
  28. M. Lovbjerg, T. K. Rasmussen, and T. Krink, “Hybrid particle swarm optimizer with breeding and subpopulations,” in Proceedings of the 3rd Genetic and Evolutionary Computation Conference, pp. 469–476, 2001. View at: Google Scholar
  29. S. Tsafarakis, C. Saridakis, G. Baltas, and N. Matsatsinis, “Hybrid particle swarm optimization with mutation for optimizing industrial product lines: an application to a mixed solution space considering both discrete and continuous design variables,” Industrial Marketing Management, vol. 42, no. 4, pp. 496–506, 2013. View at: Publisher Site | Google Scholar
  30. P. J. Angeline, “Using selection to improve particle swarm optimization,” in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC '98), pp. 84–89, May 1998. View at: Google Scholar
  31. Y.-P. Chen, W.-C. Peng, and M.-C. Jian, “Particle swarm optimization with recombination and dynamic linkage discovery,” IEEE Transactions on Systems, Man, and Cybernetics, Part B, vol. 37, no. 6, pp. 1460–1470, 2007. View at: Publisher Site | Google Scholar
  32. Z.-H. Zhan, J. Zhang, Y. Li, and Y.-H. Shi, “Orthogonal learning particle swarm optimization,” IEEE Transactions on Evolutionary Computation, vol. 15, no. 6, pp. 832–847, 2011. View at: Publisher Site | Google Scholar
  33. W.-F. Gao, S.-Y. Liu, and L.-L. Huang, “Particle swarm optimization with chaotic opposition-based population initialization and stochastic search technique,” Communications in Nonlinear Science and Numerical Simulation, vol. 17, no. 11, pp. 4316–4327, 2012. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet
  34. K. Tang, X. Yao, P. N. Suganthan et al., “Benchmark functions for the CEC'2008 special session and competition on large scale global optimization,” Tech. Rep., Nature Inspired Computation and Applications Laboratory, USTC, Anhui, China, http://nical.ustc.edu.cn/cec08ss.php. View at: Google Scholar
  35. X. Yao, Y. Liu, and G. Lin, “Evolutionary programming made faster,” IEEE Transactions on Evolutionary Computation, vol. 3, no. 2, pp. 82–102, 1999. View at: Publisher Site | Google Scholar
  36. R. Salomon, “Re-evaluating genetic algorithm performance under coordinate rotation of benchmark functions. A survey of some theoretical and practical aspects of genetic algorithms,” BioSystems, vol. 39, no. 3, pp. 263–278, 1996. View at: Publisher Site | Google Scholar
  37. F. Wilcoxon, “Individual comparisons by ranking methods,” Biometrics Bulletin, vol. 1, no. 6, pp. 80–83, 1945. View at: Publisher Site | Google Scholar

Copyright © 2014 Zahra Beheshti et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

