Research Article | Open Access

Guoliang Li, Jinhong Sun, Mohammad N.A. Rana, Yinglei Song, Chunmei Liu, Zhi-yu Zhu, "Optimizing High-Dimensional Functions with an Efficient Particle Swarm Optimization Algorithm", Mathematical Problems in Engineering, vol. 2020, Article ID 5264547, 10 pages, 2020. https://doi.org/10.1155/2020/5264547

Optimizing High-Dimensional Functions with an Efficient Particle Swarm Optimization Algorithm

Academic Editor: Ricardo Perera
Received: 22 Dec 2019
Revised: 20 May 2020
Accepted: 17 Jun 2020
Published: 09 Jul 2020

Abstract

The optimization of high-dimensional functions is an important problem in both science and engineering. Particle swarm optimization is a technique often used for computing the global optimum of a multivariable function. In this paper, we develop a new particle swarm optimization algorithm that can accurately compute the optimal value of a high-dimensional function. The iteration process of the algorithm comprises a number of large iteration steps, each of which consists of two stages. In the first stage, an expansion procedure is utilized to effectively explore the high-dimensional variable space. In the second stage, the traditional particle swarm optimization algorithm is employed to compute the global optimal value of the function. A translation step is applied to each particle in the swarm after a large iteration step is completed to start a new large iteration step. Based on this technique, the variable space of a function can be extensively explored. Our analysis and testing results on high-dimensional benchmark functions show that this algorithm can achieve optimization results with significantly improved accuracy, compared with traditional particle swarm optimization algorithms and a few other state-of-the-art optimization algorithms based on particle swarm optimization.

1. Introduction

In both science and engineering, the particle swarm optimization (PSO) algorithm [1] is an important optimization technique that has been extensively used to find the global optima of multivariable functions. The PSO algorithm searches the variable space of a multivariable function by simulating the social behaviors of a group of animals. The animals in the group act cooperatively to find the potential location of the global optimum point, and the search pattern of each particle in the group is adjusted based on its own exploring experience and the experiences of the other particles in the group.

Although the PSO algorithm can effectively search the variable space of a large number of multivariable functions and accurately determine their global optima, it has two disadvantages that may adversely affect its performance on some multivariable functions. Specifically, the speed of convergence is slow, especially in cases where the function that needs to be optimized is defined on a high-dimensional space. In addition, the algorithm is not guaranteed to find a global optimum and may converge to a local minimum point when the dimensionality of the function is large. A large number of techniques thus have been developed to further improve the performance of the PSO algorithm in the past two decades.

One strategy to improve the performance of the traditional PSO algorithm is to divide the individuals in the swarm into subgroups. In [2], the concept of subpopulation and a reproduction operator is introduced into the PSO algorithm. In [3], a dynamic multiswarm PSO algorithm is developed. Recently, a symbiotic particle swarm optimization (SPSO) algorithm is developed in [4] to optimize neural fuzzy networks. In [5], the traditional PSO algorithm is modified to solve multimodal function optimization problems. In [6], the particles in the swarm are divided into mentor, mentee, and independent learner groups based on differences in fitness values and the Euclidean distances from the particle with the best fitness value. As a result, a new dynamic mentoring and self-regulation-based particle swarm optimization (DMeSR-PSO) algorithm is developed. Testing results show that the DMeSR-PSO algorithm can achieve excellent and reliable performance on a number of benchmark datasets.

Another technique often used to improve the performance of PSO algorithms is to develop new rules for updating the positions of the particles. In [7], the cognitive and social behaviors of the swarm are randomized based on chaotic sequences and the Gaussian distribution. Particles can thus search a much broader area than in the traditional PSO algorithm. In [8], a chaotic PSO algorithm is developed based on a virtual quadratic objective function constructed from the optimum point of an individual particle and the global optimum point. Recently, in [9], an improved PSO algorithm is developed that uses a new strategy to move each particle. Specifically, particles fly to their own predefined targets instead of the location of the global optimum that has been found so far.

A third class of methods improves the performance of optimization by designing new strategies to update the velocities of particles. In [10], a self-adaptive PSO algorithm (SAPSO-MVS) is developed that uses a novel velocity-updating strategy to balance the exploring and exploiting capabilities of the PSO algorithm. In [11], a new PSO-based optimization algorithm is developed that can adaptively adjust the particle velocities in each step of iteration. The adjustment is based on the current distance of each particle from the location with the best fitness found so far by all particles. Recently, in [12], a mode-dependent velocity-updating approach is developed to balance the capabilities of the PSO algorithm for local search with those for global search, which can effectively reduce the chance of being trapped within local minima.

Recently, in addition to the methods discussed above, a large number of other techniques [2, 3, 6, 8, 13–31] have been developed to further enhance the performance of PSO-based optimization algorithms, and a comprehensive survey of these methods is available in [31]. In general, most of the existing techniques require sophisticated processing of the locations and fitness values of particles to determine the location and velocity of each particle for the next step of iteration [32, 33]. These approaches thus may become computationally inefficient when the function to be optimized is of a large dimensionality [34]. In practice, the optimization of such functions is often important [35]. For example, a protein sequence generally contains a few hundred amino acids, and an amino acid usually consists of around 10 atoms, so a protein sequence of moderate length comprises a few thousand atoms. The native folding of a protein sequence is generally predicted by minimizing the free energy of the system formed by the atoms in the sequence. The free energy of a protein sequence is thus often a function of a few thousand variables, and the accuracy of its minimization is crucial for determining the native folding of the sequence.

In this paper, we develop a new efficient PSO algorithm for the optimization of high-dimensional functions. The algorithm performs optimization by a number of large iteration steps. Each large iteration step consists of two stages. In the first stage, an expelling force field is applied to each particle in the swarm to significantly enlarge the area the particles can explore. In the second stage, the standard PSO algorithm is applied to the swarm to converge to a global optimum. After a large iteration step is completed, a translation procedure is applied to each particle in the swarm to further expand the region explored by the swarm. An iteration step in this new algorithm can be efficiently performed, and the algorithm thus can be applied to the optimization of high-dimensional functions. Our analysis and testing results on high-dimensional benchmark functions show that this new algorithm can achieve significantly improved performance on optimization, compared with the traditional PSO algorithm and a few of its existing variants.

2. The Proposed Algorithm

The standard PSO algorithm performs the optimization of a function by simulating the social behavior of a swarm of animals. Specifically, a swarm of particles with randomly generated initial locations is created in the variable space of the function. The fitness of a location in the variable space of the function is defined to be the function value of the location. Each particle is also assigned a randomly generated velocity. In each iteration step, the location with the best fitness value that has been found by all individuals in the swarm is determined. In addition, the location with the best fitness in the trajectory of each individual is also determined. The velocity of each particle is adjusted based on the location with the globally best fitness value and the one with the best fitness value in its own trajectory. The location of each particle is then updated based on the current velocity of the particle.

Given a swarm of M particles, let x_1(t), x_2(t), …, x_M(t) be the locations of the particles in the t-th step of iteration and v_1(t), v_2(t), …, v_M(t) be their velocities, respectively. g(t) denotes the location with the globally best fitness value that has been found by the swarm after t steps of iteration, and p_i(t) is the location with the best fitness value in the trajectory of particle i. The velocity v_i(t + 1) of particle i in the (t + 1)-th step of iteration can be computed as follows:

v_i(t + 1) = ω v_i(t) + c_1 r_1 (p_i(t) − x_i(t)) + c_2 r_2 (g(t) − x_i(t)),  (1)

where c_1 and c_2 are two positive constants, r_1 and r_2 are two randomly generated numbers between 0 and 1, and ω is a positive constant between 0 and 1. The location x_i(t + 1) of particle i in the (t + 1)-th step of iteration can be computed from x_i(t) as follows:

x_i(t + 1) = x_i(t) + v_i(t + 1).  (2)
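As an illustration, equations (1) and (2) translate directly into a vectorized update routine. The following Python sketch is ours, not code from the paper; the default parameter values are taken from the experimental section below, and the choice of one random scalar per particle (rather than per dimension) is an assumption:

```python
import numpy as np

def pso_step(x, v, p_best, g_best, w=0.7, c1=0.5, c2=0.9, rng=None):
    """One iteration of the standard PSO update in equations (1) and (2).

    x, v   : (M, n) arrays of particle locations and velocities.
    p_best : (M, n) array, best location in each particle's own trajectory.
    g_best : (n,) array, best location found by the whole swarm so far.
    """
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random((x.shape[0], 1))  # one random scalar per particle (assumed)
    r2 = rng.random((x.shape[0], 1))
    v_new = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)  # eq. (1)
    x_new = x + v_new                                                # eq. (2)
    return x_new, v_new
```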

To effectively enhance the ability of a PSO algorithm to explore within the variable space of a high-dimensional function, we propose to perform the procedure of optimization by a number of large iteration steps. Each large iteration step contains two stages. In the first stage, the particles in the swarm are expelled from their center of mass so that the swarm can explore a wider area in the variable space. This stage is the exploring stage of the optimization procedure. After the exploring stage is completed, the standard PSO algorithm is then applied to compute the global optimum of the function. The second stage is thus the converging stage of the optimization procedure.

In the exploring stage, the velocity v_i(t + 1) of particle i in the (t + 1)-th step of iteration can be computed from its location and velocity in the t-th step of iteration as follows:

v_i(t + 1) = ω v_i(t) + c_1 r_1 (p_i(t) − x_i(t)) + c_2 r_2 (g(t) − x_i(t)) + c_3 r_3 (x_i(t) − m(t)),  (3)

where ω, c_1, c_2, r_1, and r_2 are the same as those shown in equation (1); c_3 is a positive number and r_3 is a random number between 0 and 1. m(t) is the center of mass for the particles in the swarm in the t-th iteration step and can be conveniently computed as follows:

m(t) = (1/M) Σ_{i=1}^{M} x_i(t).  (4)

The exploring stage executes for a certain number of iteration steps to perform an extensive exploration of the variable space. The converging stage starts execution after the exploring stage is completed. In the converging stage, equations (1) and (2) are used to update the velocity and location of each particle in the swarm until the specified number of iterations has been executed.

Before a new large iteration step starts, a translation procedure is applied to the particles in the swarm such that each particle is relocated to a different position. The velocity of each particle remains unchanged after the translation occurs. The translation of particle i is performed as follows:

x_i′ = x_i + γ u_i,  (5)

where x_i is the location of particle i before the translation is applied and x_i′ is its location after the translation; γ is a positive constant, u_i is a random vector of the same dimensionality as x_i, and each component of u_i is a random number between −1.0 and 1.0.
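Taken together, one large iteration step can be sketched as follows. This Python outline is a minimal sketch under our reading of equations (1)–(5), not code from the paper; the stage lengths n_explore and n_converge, the helper names, and the treatment of r_1, r_2, and r_3 as per-particle scalars are assumptions:

```python
import numpy as np

def large_iteration_step(f, x, v, n_explore, n_converge,
                         w=0.7, c1=0.5, c2=0.9, c3=1.0, gamma=0.8, rng=None):
    """One two-stage large iteration step followed by the translation (sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    M = x.shape[0]
    p_best = x.copy()                        # best location in each trajectory
    p_val = np.apply_along_axis(f, 1, x)     # fitness = function value
    g_best = p_best[np.argmin(p_val)]        # globally best location

    for step in range(n_explore + n_converge):
        r1, r2, r3 = (rng.random((M, 1)) for _ in range(3))
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)  # eq. (1)
        if step < n_explore:                 # exploring stage only:
            m = x.mean(axis=0)               # center of mass, eq. (4)
            v += c3 * r3 * (x - m)           # expelling term of eq. (3)
        x = x + v                            # eq. (2)
        vals = np.apply_along_axis(f, 1, x)
        better = vals < p_val                # refresh the recorded best locations
        p_best[better], p_val[better] = x[better], vals[better]
        g_best = p_best[np.argmin(p_val)]

    u = rng.uniform(-1.0, 1.0, size=x.shape) # translation before the next
    return x + gamma * u, v, g_best          # large iteration step, eq. (5)
```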

3. Analysis of the Algorithm

In this section, we show that the algorithm can effectively explore the variable space of a high-dimensional function in the exploring stage when appropriate values are selected for the parameters ω, c_1, c_2, c_3 and the population size M. From equations (3)–(5), no coupling exists between any two different dimensions in the coordinate of a particle in the swarm. Each individual dimension can thus be analyzed independently of the others. The analysis of the algorithm can be simplified by choosing one particular dimension in the space and analyzing the behavior of the swarm in this dimension. In the rest of this paper, we assume that the random numbers r_1, r_2, and r_3 all follow a uniform distribution in interval [0, 1].

Definition 1. Given positive constants ω, c_1, c_2, and c_3 and a positive integer M, let I be the M × M identity matrix and J be an M × M matrix where every element in J is 1/M. The partial transition matrix Q(ω, c_1, c_2, c_3, M) is defined in terms of I and J.

Definition 2. Given positive constants ω, c_1, c_2, and c_3 and a positive integer M, let I be the M × M identity matrix. The complete transition matrix T is a square matrix, and it can be determined from the partial transition matrix Q and I.

We assume that the variable space of a high-dimensional function is a “cubic” region in a multidimensional space, where the value of the d-th dimension is within interval [l_d, u_d] and both l_d and u_d are positive numbers. The following theorem shows that, for certain values of ω, c_3, and the population size M, the expected value of the d-th dimension of each particle explores the corresponding dimension of the cubic region with a function of exponential order.

Theorem 1. Given a “cubic” region in an n-dimensional space, where the value of the d-th dimension is within interval [l_d, u_d] and both l_d and u_d are positive numbers, let x_{i,d}(t) be the d-th dimension of the location of particle i in iteration step t and E[x_{i,d}(t)] be the expected value of x_{i,d}(t). Let λ_1, λ_2, …, λ_q be the eigenvalues of the complete transition matrix T constructed from ω, c_1, c_2, c_3 and the population size M, where λ_1, λ_2, …, λ_q are mutually different and both λ_j ≠ 0 and |λ_j| ≠ 1 hold for 1 ≤ j ≤ q. Then there exist nonnegative constants α_1, α_2, …, α_q such that E[x_{i,d}(t)] = Σ_{j=1}^{q} α_j λ_j^t holds for large enough t.

Proof. Let v_{i,d}(t) be the d-th dimension of the velocity of particle i in iteration step t, g_d(t) be the d-th dimension of the best location found by the swarm in iteration step t, and p_{i,d}(t) be the d-th dimension of the best location in the trajectory of particle i in iteration step t. From equations (2) and (3), for t ≥ 1, x_{i,d}(t + 1) and v_{i,d}(t + 1) are linear combinations of x_{1,d}(t), …, x_{M,d}(t), v_{i,d}(t), p_{i,d}(t), and g_d(t). Taking expected values on both sides of these recurrences, and using the fact that r_1, r_2, and r_3 are drawn independently of the locations and velocities of the particles, we obtain a linear recurrence that relates the expected locations and velocities of the swarm in iteration step t + 1 to those in iteration step t.
Stack the expected locations and expected velocities of all M particles in dimension d into a single vector y(t). The recurrence can then be written in matrix form as y(t + 1) = T y(t) + b(t), where T is the complete transition matrix constructed from ω, c_1, c_2, c_3, and M, and b(t) collects the terms contributed by E[p_{i,d}(t)] and E[g_d(t)]. Since T has q mutually different eigenvalues, it can be decomposed as T = P Λ P^{−1}, where Λ is a diagonal matrix with λ_1, λ_2, …, λ_q on its diagonal positions and the columns of P are the corresponding eigenvectors of T. Expanding y(t) in the basis formed by the eigenvectors of T, each coordinate of y(t) can be written as a linear combination of λ_1^t, λ_2^t, …, λ_q^t plus a contribution from b(1), b(2), …, b(t − 1).
We assume that all particles in the swarm are within the “cubic” region in the iteration steps from 1 to t. The values of E[p_{i,d}(t)] and E[g_d(t)] are then bounded by the size of the region, and since λ_j ≠ 0 and |λ_j| ≠ 1 hold for each j, the accumulated contribution of b(1), …, b(t − 1) to each coordinate of y(t) is dominated by the terms of exponential order when t is sufficiently large. We immediately obtain that when t is large enough, there exist nonnegative constants α_1, α_2, …, α_q such that E[x_{i,d}(t)] = Σ_{j=1}^{q} α_j λ_j^t holds for each i. The theorem thus follows.
Next, we show that ω, c_1, c_2, and c_3 can be selected to guarantee that |λ_max| > 1 holds, where λ_max is the eigenvalue of T with the largest magnitude. The proof is based on the following well-known Gershgorin circle theorem.

Theorem 2. Let A = (a_{ij}) be an n × n matrix, and let D_i denote the circle in the complex plane with center a_{ii} and radius R_i = Σ_{j ≠ i} |a_{ij}|, that is,

D_i = {z ∈ ℂ : |z − a_{ii}| ≤ R_i},

where ℂ denotes the set of complex numbers. The eigenvalues of A are contained within D = ∪_{i=1}^{n} D_i. Moreover, the union of any k of these circles that do not intersect the remaining n − k contains precisely k (counting multiplicities) of the eigenvalues.
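Theorem 2 is easy to check numerically. The sketch below builds an arbitrary small matrix, computes its Gershgorin circles, and verifies that every eigenvalue falls inside their union; it is a generic illustration, not tied to the paper's transition matrix:

```python
import numpy as np

A = np.array([[ 2.0,  0.3, 0.1],
              [ 0.2, -1.0, 0.4],
              [ 0.1,  0.1, 0.5]])

centers = np.diag(A)                              # a_ii
radii = np.abs(A).sum(axis=1) - np.abs(centers)   # R_i = sum_{j != i} |a_ij|

# every eigenvalue must lie in at least one Gershgorin circle
for lam in np.linalg.eigvals(A):
    assert any(abs(lam - c) <= r for c, r in zip(centers, radii))
```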

Theorem 3. Let λ_max be the eigenvalue of the complete transition matrix T with the largest magnitude. Then |λ_max| > 1 if the parameters ω, c_1, c_2, c_3, and M are selected such that one of the Gershgorin circles of T is disjoint from the others and lies entirely outside the unit circle.

Proof. Let z_1 and z_2 be the centers and ρ_1 and ρ_2 be the radii of the two circles O_1 and O_2 obtained from Definition 2 and Theorem 2; all eigenvalues of T are contained in O_1 ∪ O_2.
It is clear that when |z_1 − z_2| > ρ_1 + ρ_2 holds, O_1 and O_2 are disjoint, and there is thus at least one eigenvalue contained in O_2. If O_2 lies entirely outside the unit circle, every point z in O_2 satisfies |z| ≥ |z_2| − ρ_2 > 1, and we have |λ_max| > 1. The lemma thus follows.
From Theorems 1 and 3, it is clear that when appropriate values are selected for ω, c_1, c_2, c_3, and M, each particle in the swarm is able to enlarge its searched area with the order of an exponential function, and the entire “cubic” region can thus be extensively explored by the swarm in a logarithmic number of steps. In practice, the value of |λ_max| should be appropriate to guarantee an efficient and extensive search in a desired region. Theorem 3 only provides a sufficient condition for |λ_max| > 1 to hold, and experiments are often needed in practice to determine the appropriate values for ω, c_3, and M.
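The parameter search suggested above can be automated by simulating the exploring stage in one dimension and checking whether the swarm's spread grows. The following sketch assumes our reading of equation (3) (a c_3 term with uniform r_3); it is a diagnostic we propose, not a procedure from the paper, and the threshold at which growth appears depends on all of the parameters:

```python
import numpy as np

def explore_spread(c3, w=0.7, c1=0.5, c2=0.9, M=200, steps=50, seed=0):
    """Simulate the exploring stage in one dimension and return the swarm spread."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, M)
    v = rng.uniform(-1.0, 1.0, M)
    p_best, g_best = x.copy(), 0.0   # frozen stand-ins for the best locations
    for _ in range(steps):
        r1, r2, r3 = rng.random(M), rng.random(M), rng.random(M)
        m = x.mean()                 # center of mass, eq. (4)
        v = (w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
             + c3 * r3 * (x - m))    # expelling term of eq. (3)
        x = x + v
    return x.std()

# with a sufficiently large c3, the expelling term dominates and the spread
# grows exponentially, in line with Theorem 1
print(explore_spread(c3=0.0), explore_spread(c3=2.0))
```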

4. Testing Results

We have implemented this new PSO-based optimization algorithm in a MATLAB program, PSOTS, and evaluated its performance on the minimization of a few high-dimensional functions in the Benchmark Dataset for PSO functions, which can be downloaded for free from the website at https://www.cil.pku.edu.cn/resources/somso/271214.htm. Its performance is also compared with that of a few other PSO-based algorithms, including the traditional PSO algorithm, the AsynLnPSO algorithm [14], the LinWPSO algorithm [13], the GPSO algorithm [19], the CCAS algorithm [21], and the VPSO algorithm [30]. GPSO, CCAS, and VPSO are state-of-the-art PSO-based algorithms particularly designed for the optimization of high-dimensional functions. A large amount of testing has been performed to determine the parameters that can optimize the performance of each of the seven algorithms. For PSOTS, the population size is determined to be 200, the values of ω, c_1, and c_2 are set to be 0.7, 0.5, and 0.9, respectively, c_3 is 1.0, and γ is 0.8.

The performance of all seven algorithms is then evaluated and compared. To evaluate the performance of a given algorithm on a function, the algorithm is executed 10 times on a randomly rotated version of the function. For a fair comparison, the number of iterations in each execution of an algorithm is set to be 3000. The minimum function value obtained in all 10 executions and the distribution of the minimized function values in all 10 executions are recorded. Tables 1 and 2 show the minimum function values, the mean values, and the standard deviations of the minimized function values obtained with the seven algorithms on 9 functions when the dimensionality of each function is 100 and 150, respectively. It is clear from both tables that PSOTS outperforms the other six algorithms in both the best and mean of the minimized function values on 8 out of the 9 functions. The minimized function values obtained with PSOTS on Ackley, Cigar, Rastrigin, and Noncon-rastrigin are significantly lower than those obtained with the other six algorithms. On the other hand, all other algorithms outperform PSOTS on function Schwefel.
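The evaluation protocol just described is straightforward to reproduce. The sketch below shows one possible harness: 10 independent runs on a randomly rotated version of a benchmark function, reporting the best, mean, and standard deviation of the minimized values. The Ackley implementation is the standard formula; `optimize` is a hypothetical stand-in for any of the seven algorithms (its signature is our assumption), and the QR-based rotation is our choice rather than a detail given in the paper:

```python
import numpy as np

def ackley(x):
    n = len(x)
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / n))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20.0 + np.e)

def evaluate(optimize, f, dim=100, runs=10, iters=3000, seed=0):
    """Best/mean/std of the minimized values over independent runs."""
    rng = np.random.default_rng(seed)
    results = []
    for _ in range(runs):
        Q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))  # random rotation
        results.append(optimize(lambda x: f(Q @ x), dim, iters))
    results = np.array(results)
    return results.min(), results.mean(), results.std()
```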


Table 1: Best, mean, and standard deviation (STD) of the minimized function values obtained with the seven algorithms on 9 benchmark functions with dimensionality 100 (APSO denotes AsynLnPSO and LPSO denotes LinWPSO).

Algorithm | Metric | Ackley | Cigar | Griewank | Rastrigin | Rosenbrock | Noncon-rast | Schwefel | Tablet | Ellipse
PSOTS | Best | 0.267 | 1514.120 | 0.003 | 46.419 | 115.979 | 27.432 | 46.644 | 0.510 | 43.013
PSOTS | Mean | 0.365 | 2926.385 | 0.005 | 54.378 | 127.799 | 53.147 | 54.476 | 0.720 | 93.883
PSOTS | STD | 0.078 | 828.050 | 0.001 | 6.993 | 12.423 | 14.221 | 4.598 | 0.161 | 28.614
PSO | Best | 0.329 | 2592.790 | 0.005 | 186.276 | 119.860 | 379.094 | 36.258 | 0.732 | 53.693
PSO | Mean | 0.470 | 3733.502 | 0.007 | 250.535 | 138.929 | 491.466 | 42.399 | 1.033 | 89.141
PSO | STD | 0.068 | 664.997 | 0.002 | 32.151 | 10.426 | 82.755 | 4.212 | 0.169 | 24.346
APSO | Best | 0.411 | 3565.981 | 0.006 | 175.738 | 133.197 | 413.469 | 35.950 | 0.902 | 65.058
APSO | Mean | 0.478 | 4494.736 | 0.007 | 241.698 | 143.726 | 497.665 | 44.798 | 1.137 | 104.126
APSO | STD | 0.040 | 679.345 | 0.001 | 44.640 | 7.900 | 72.039 | 4.644 | 0.238 | 26.315
LPSO | Best | 0.333 | 2522.980 | 0.005 | 161.634 | 127.660 | 385.249 | 35.380 | 0.798 | 73.928
LPSO | Mean | 0.449 | 3742.696 | 0.007 | 252.637 | 138.899 | 467.999 | 43.031 | 1.131 | 107.747
LPSO | STD | 0.070 | 1030.457 | 0.001 | 43.480 | 9.272 | 67.804 | 5.211 | 0.194 | 29.712
GPSO | Best | 0.312 | 2221.012 | 0.005 | 121.513 | 131.571 | 227.312 | 44.372 | 0.735 | 55.721
GPSO | Mean | 0.427 | 3251.760 | 0.008 | 203.624 | 168.632 | 359.351 | 53.921 | 1.129 | 95.463
GPSO | STD | 0.071 | 943.279 | 0.002 | 41.531 | 11.212 | 65.231 | 5.137 | 0.134 | 23.461
CCAS | Best | 0.308 | 2013.537 | 0.004 | 124.249 | 121.326 | 195.272 | 42.681 | 0.683 | 52.381
CCAS | Mean | 0.392 | 3121.583 | 0.007 | 227.515 | 173.625 | 293.422 | 51.732 | 1.127 | 91.272
CCAS | STD | 0.068 | 856.863 | 0.001 | 52.234 | 10.521 | 74.321 | 4.928 | 0.183 | 25.641
VPSO | Best | 0.281 | 1923.361 | 0.004 | 93.627 | 126.731 | 143.368 | 33.291 | 0.654 | 49.728
VPSO | Mean | 0.353 | 3251.672 | 0.008 | 182.676 | 165.472 | 252.431 | 43.562 | 0.972 | 88.362
VPSO | STD | 0.042 | 943.612 | 0.002 | 45.572 | 9.837 | 78.635 | 5.138 | 0.136 | 26.732


Table 2: Best, mean, and standard deviation (STD) of the minimized function values on the 9 benchmark functions with dimensionality 150.

Algorithm | Metric | Ackley | Cigar | Griewank | Rastrigin | Rosenbrock | Noncon-rast | Schwefel | Tablet | Ellipse
PSOTS | Best | 0.425 | 5116.750 | 0.005 | 90.681 | 190.554 | 92.819 | 91.398 | 0.717 | 73.556
PSOTS | Mean | 0.481 | 6324.494 | 0.007 | 116.478 | 205.230 | 121.122 | 99.691 | 1.200 | 220.456
PSOTS | STD | 0.040 | 1068.823 | 0.001 | 20.753 | 12.054 | 34.780 | 3.756 | 0.336 | 77.121
PSO | Best | 0.511 | 7021.059 | 0.008 | 469.011 | 224.680 | 695.866 | 78.957 | 1.479 | 137.010
PSO | Mean | 0.618 | 9136.416 | 0.011 | 546.090 | 255.356 | 872.599 | 89.771 | 1.782 | 250.101
PSO | STD | 0.057 | 1282.803 | 0.001 | 39.747 | 22.179 | 101.196 | 6.417 | 0.230 | 66.171
APSO | Best | 0.605 | 7502.959 | 0.008 | 479.003 | 230.769 | 686.900 | 82.957 | 1.367 | 127.008
APSO | Mean | 0.679 | 9991.976 | 0.011 | 551.005 | 251.660 | 818.986 | 90.327 | 1.839 | 310.903
APSO | STD | 0.061 | 1541.439 | 0.002 | 48.771 | 14.570 | 81.069 | 5.341 | 0.321 | 93.523
LPSO | Best | 0.521 | 5367.866 | 0.008 | 472.005 | 217.039 | 788.823 | 75.947 | 1.367 | 106.011
LPSO | Mean | 0.598 | 8151.604 | 0.011 | 542.003 | 231.982 | 872.196 | 83.736 | 1.711 | 275.302
LPSO | STD | 0.052 | 1557.269 | 0.001 | 56.972 | 10.096 | 56.059 | 4.234 | 0.240 | 91.931
GPSO | Best | 0.474 | 6025.012 | 0.008 | 295.561 | 239.459 | 577.883 | 89.198 | 1.145 | 96.957
GPSO | Mean | 0.577 | 7641.640 | 0.012 | 450.332 | 300.165 | 743.809 | 110.752 | 1.708 | 247.262
GPSO | STD | 0.064 | 1539.431 | 0.002 | 67.280 | 14.744 | 74.040 | 5.309 | 0.241 | 80.445
CCAS | Best | 0.469 | 5396.279 | 0.007 | 337.274 | 211.787 | 522.905 | 78.234 | 1.173 | 125.268
CCAS | Mean | 0.534 | 7310.747 | 0.010 | 510.999 | 292.134 | 630.278 | 94.943 | 1.712 | 257.922
CCAS | STD | 0.073 | 1607.475 | 0.002 | 99.349 | 17.176 | 89.913 | 6.630 | 0.272 | 88.161
VPSO | Best | 0.438 | 5077.673 | 0.007 | 211.627 | 207.912 | 322.436 | 67.878 | 1.073 | 126.744
VPSO | Mean | 0.490 | 7205.705 | 0.013 | 389.462 | 241.185 | 549.318 | 89.587 | 1.568 | 238.459
VPSO | STD | 0.043 | 1766.442 | 0.003 | 83.077 | 17.642 | 80.935 | 7.567 | 0.213 | 74.187

It is also interesting to observe that the best minimized function value obtained with PSOTS on Ellipse is less than that obtained with the traditional PSO algorithm when the dimensionality is 100. However, the mean of the minimized function values obtained with PSOTS is larger than that obtained with the traditional PSO algorithm in the same dimensionality. When the dimensionality increases from 100 to 150, PSOTS outperforms the traditional PSO algorithm in both the best and mean of minimized function values. This particular example suggests that the exploration ability of PSOTS is stronger than that of the traditional PSO, and the advantage of its exploring ability over the traditional PSO algorithm is further enhanced when the dimensionality of the function that needs to be optimized increases.

We then increase the dimensionality of the functions to higher values and test the performance of all seven algorithms on the same set of functions. Tables 3–6 show the performance of the seven algorithms on the benchmark functions when the dimensionality is selected to be 200, 300, 500, and 1000, respectively. It is clear from the tables that PSOTS significantly outperforms the other six algorithms on 9 of the tested functions in both the best and mean of minimized function values. Although the other six algorithms still outperform PSOTS on function Schwefel, the relative performance of PSOTS improves significantly when the dimensionality increases. Table 5 shows that PSOTS outperforms the traditional PSO and AsynLnPSO in the best of the minimized function values of Schwefel when the dimensionality of the function reaches a value of 500.


Table 3: Best, mean, and standard deviation (STD) of the minimized function values on the 9 benchmark functions with dimensionality 200.

Algorithm | Metric | Ackley | Cigar | Griewank | Rastrigin | Rosenbrock | Noncon-rast | Schwefel | Tablet | Ellipse
PSOTS | Best | 0.431 | 8313.319 | 0.006 | 125.642 | 266.531 | 142.388 | 135.922 | 1.145 | 213.868
PSOTS | Mean | 0.510 | 10573.300 | 0.009 | 190.698 | 291.663 | 180.543 | 148.872 | 1.749 | 426.983
PSOTS | STD | 0.059 | 1297.349 | 0.001 | 44.402 | 15.715 | 28.069 | 8.772 | 0.394 | 157.209
PSO | Best | 0.623 | 13592.310 | 0.011 | 828.157 | 330.324 | 1162.641 | 129.083 | 1.924 | 442.334
PSO | Mean | 0.733 | 15953.110 | 0.013 | 908.032 | 360.406 | 1258.925 | 137.839 | 2.415 | 567.524
PSO | STD | 0.095 | 2141.955 | 0.001 | 65.745 | 25.679 | 57.524 | 6.567 | 0.290 | 87.083
APSO | Best | 0.628 | 13927.090 | 0.012 | 761.458 | 333.122 | 1168.319 | 125.760 | 2.115 | 374.350
APSO | Mean | 0.728 | 16153.560 | 0.016 | 911.018 | 363.264 | 1284.167 | 137.659 | 2.590 | 615.184
APSO | STD | 0.063 | 1431.241 | 0.002 | 69.228 | 22.187 | 51.557 | 8.961 | 0.287 | 153.836
LPSO | Best | 0.540 | 11784.830 | 0.010 | 766.147 | 287.388 | 1230.984 | 119.961 | 2.078 | 411.546
LPSO | Mean | 0.651 | 13929.390 | 0.012 | 880.708 | 337.612 | 1375.003 | 130.168 | 2.392 | 548.331
LPSO | STD | 0.055 | 2044.389 | 0.002 | 73.132 | 25.340 | 110.808 | 8.568 | 0.212 | 92.605
GPSO | Best | 0.502 | 10276.490 | 0.011 | 513.656 | 345.566 | 922.450 | 142.323 | 1.765 | 320.551
GPSO | Mean | 0.620 | 13015.940 | 0.017 | 746.526 | 432.668 | 1113.325 | 166.486 | 2.458 | 549.419
GPSO | STD | 0.087 | 1986.996 | 0.002 | 109.377 | 20.7677 | 101.249 | 11.501 | 0.283 | 108.289
CCAS | Best | 0.483 | 9755.112 | 0.010 | 523.218 | 302.989 | 782.621 | 126.101 | 1.574 | 386.631
CCAS | Mean | 0.627 | 12079.051 | 0.013 | 841.210 | 422.458 | 944.007 | 144.237 | 2.459 | 523.554
CCAS | STD | 0.114 | 1665.608 | 0.002 | 206.596 | 26.121 | 120.584 | 13.762 | 0.292 | 102.498
VPSO | Best | 0.464 | 9264.162 | 0.010 | 353.963 | 298.692 | 541.078 | 102.796 | 1.436 | 370.183
VPSO | Mean | 0.552 | 12345.613 | 0.017 | 641.174 | 347.596 | 829.347 | 138.759 | 2.130 | 502.851
VPSO | STD | 0.045 | 1925.664 | 0.003 | 156.169 | 34.864 | 138.683 | 10.812 | 0.246 | 97.771


Table 4: Best, mean, and standard deviation (STD) of the minimized function values on the 9 benchmark functions with dimensionality 300.

Algorithm | Metric | Ackley | Cigar | Griewank | Rastrigin | Rosenbrock | Noncon-rast | Schwefel | Tablet | Ellipse
PSOTS | Best | 0.480 | 11239.310 | 0.007 | 270.220 | 435.735 | 243.118 | 236.858 | 1.757 | 768.211
PSOTS | Mean | 0.551 | 16700.500 | 0.010 | 304.659 | 460.780 | 309.250 | 245.861 | 2.642 | 1044.533
PSOTS | STD | 0.052 | 2864.634 | 0.002 | 25.149 | 20.493 | 40.135 | 8.772 | 0.626 | 199.579
PSO | Best | 0.725 | 26182.680 | 0.016 | 797.438 | 558.017 | 1958.180 | 218.271 | 3.607 | 1087.085
PSO | Mean | 0.837 | 29781.450 | 0.017 | 1558.475 | 581.011 | 2144.747 | 230.463 | 4.063 | 1383.916
PSO | STD | 0.071 | 3263.52 | 0.002 | 277.037 | 19.795 | 93.044 | 9.453 | 0.347 | 199.579
APSO | Best | 0.792 | 26675.420 | 0.015 | 506.867 | 569.283 | 622.970 | 225.835 | 3.506 | 1164.937
APSO | Mean | 0.872 | 30096.930 | 0.019 | 1530.612 | 604.836 | 2103.667 | 234.035 | 4.481 | 1444.022
APSO | STD | 0.054 | 2666.445 | 0.002 | 363.651 | 23.546 | 527.097 | 3.823 | 0.511 | 196.814
LPSO | Best | 0.676 | 21480.700 | 0.014 | 499.562 | 510.421 | 1961.123 | 210.576 | 3.263 | 1058.076
LPSO | Mean | 0.751 | 24369.240 | 0.017 | 1575.189 | 559.471 | 2243.721 | 223.646 | 4.070 | 1214.656
LPSO | STD | 0.053 | 2254.595 | 0.002 | 454.681 | 33.480 | 167.328 | 9.551 | 0.558 | 105.178
GPSO | Best | 0.610 | 14351.613 | 0.013 | 386.601 | 578.391 | 290.975 | 248.015 | 2.805 | 837.559
GPSO | Mean | 0.730 | 21013.471 | 0.020 | 1204.562 | 702.838 | 735.345 | 276.637 | 4.017 | 1290.958
GPSO | STD | 0.083 | 2958.214 | 0.002 | 295.832 | 24.869 | 83.056 | 9.544 | 0.529 | 188.628
CCAS | Best | 0.564 | 13233.792 | 0.011 | 566.908 | 512.184 | 179.650 | 215.826 | 2.632 | 941.745
CCAS | Mean | 0.690 | 21699.961 | 0.016 | 1365.832 | 671.090 | 721.552 | 242.225 | 3.930 | 1207.535
CCAS | STD | 0.103 | 2854.312 | 0.003 | 121.696 | 27.090 | 193.967 | 8.806 | 0.481 | 173.677
VPSO | Best | 0.574 | 13157.77 | 0.014 | 550.812 | 507.583 | 164.3968 | 180.937 | 2.272 | 888.696
VPSO | Mean | 0.652 | 20398.05 | 0.019 | 1024.967 | 578.245 | 677.9356 | 234.522 | 3.389 | 1159.823
VPSO | STD | 0.042 | 2905.144 | 0.004 | 181.691 | 21.837 | 189.815 | 5.169 | 0.557 | 220.522


Table 5: Best, mean, and standard deviation (STD) of the minimized function values on the 9 benchmark functions with dimensionality 500.

Algorithm | Metric | Ackley | Cigar | Griewank | Rastrigin | Rosenbrock | Noncon-rast | Schwefel | Tablet | Ellipse
PSOTS | Best | 0.492 | 25412.630 | 0.009 | 478.891 | 706.558 | 493.205 | 414.928 | 3.475 | 1496.384
PSOTS | Mean | 0.581 | 29691.450 | 0.012 | 553.646 | 803.327 | 557.334 | 435.652 | 4.947 | 2092.524
PSOTS | STD | 0.051 | 3513.855 | 0.002 | 70.573 | 51.248 | 50.482 | 10.066 | 1.145 | 320.057
PSO | Best | 0.829 | 45422.870 | 0.019 | 999.299 | 1022.024 | 3829.275 | 423.314 | 5.766 | 2244.141
PSO | Mean | 0.886 | 56384.080 | 0.022 | 2780.208 | 1111.354 | 4051.348 | 433.986 | 7.550 | 3255.934
PSO | STD | 0.052 | 5545.354 | 0.002 | 916.323 | 47.555 | 167.972 | 7.313 | 0.963 | 199.579
APSO | Best | 0.833 | 50024.720 | 0.019 | 1126.165 | 1000.456 | 3746.506 | 418.279 | 6.807 | 2937.273
APSO | Mean | 0.929 | 56943.480 | 0.023 | 3075.081 | 1084.608 | 4065.724 | 433.210 | 8.100 | 3627.655
APSO | STD | 0.049 | 4846.618 | 0.003 | 691.812 | 48.341 | 217.894 | 8.016 | 1.265 | 422.285
LPSO | Best | 0.742 | 43541.120 | 0.017 | 920.020 | 943.407 | 3905.713 | 411.002 | 5.975 | 2373.675
LPSO | Mean | 0.806 | 48230.900 | 0.020 | 2685.457 | 985.057 | 4112.780 | 422.941 | 7.051 | 3115.352
LPSO | STD | 0.043 | 3195.987 | 0.002 | 933.487 | 44.286 | 135.469 | 10.654 | 0.603 | 443.890
GPSO | Best | 0.697 | 28284.351 | 0.015 | 768.416 | 1048.064 | 924.820 | 441.750 | 4.485 | 1907.092
GPSO | Mean | 0.775 | 38520.982 | 0.024 | 2072.027 | 1253.445 | 1278.837 | 503.868 | 7.090 | 3093.015
GPSO | STD | 0.061 | 4961.206 | 0.002 | 740.987 | 36.754 | 112.2568 | 15.367 | 1.076 | 769.6676
CCAS | Best | 0.602 | 26503.243 | 0.013 | 1169.143 | 852.805 | 890.7993 | 423.028 | 4.560 | 1809.055
CCAS | Mean | 0.738 | 39164.056 | 0.020 | 2645.188 | 1270.997 | 1264.482 | 462.079 | 6.907 | 2947.759
CCAS | STD | 0.076 | 4060.912 | 0.004 | 301.950 | 50.363 | 132.752 | 12.689 | 0.606 | 274.5918
VPSO | Best | 0.599 | 27384.475 | 0.012 | 776.843 | 901.369 | 860.366 | 434.518 | 4.089 | 1731.14
VPSO | Mean | 0.691 | 38449.613 | 0.022 | 1961.521 | 1062.914 | 1363.843 | 455.697 | 6.049 | 2571.514
VPSO | STD | 0.038 | 4278.437 | 0.005 | 357.148 | 46.818 | 136.64 | 4.991 | 1.399 | 720.7601


Table 6: Best, mean, and standard deviation (STD) of the minimized function values with dimensionality 1000 (Sphere replaces Schwefel at this dimensionality).

Algorithm | Metric | Ackley | Cigar | Griewank | Rastrigin | Rosenbrock | Noncon-rast | Sphere | Tablet | Ellipse
PSOTS | Best | 0.551 | 58336.090 | 0.013 | 1049.727 | 1583.446 | 1036.363 | 5.691 | 7.021 | 4561.731
PSOTS | Mean | 0.654 | 66590.620 | 0.017 | 1206.143 | 1695.683 | 1224.481 | 6.641 | 8.866 | 6034.033
PSOTS | STD | 0.066 | 8530.537 | 0.003 | 108.670 | 80.271 | 122.191 | 0.872 | 1.767 | 1089.670
PSO | Best | 0.883 | 101581.400 | 0.022 | 1856.572 | 2091.178 | 2176.873 | 10.588 | 12.760 | 8299.737
PSO | Mean | 0.959 | 119206.001 | 0.030 | 3752.308 | 2202.001 | 7460.916 | 12.395 | 13.751 | 9268.079
PSO | STD | 0.052 | 10297.240 | 0.004 | 2558.991 | 100.112 | 2777.144 | 1.500 | 1.345 | 1063.982
APSO | Best | 0.903 | 114610.100 | 0.019 | 2051.027 | 2102.363 | 2168.402 | 11.284 | 12.864 | 8528.893
APSO | Mean | 0.976 | 126324.800 | 0.029 | 3395.736 | 2268.229 | 8025.089 | 12.318 | 16.518 | 10217.47
APSO | STD | 0.044 | 5550.201 | 0.005 | 2225.827 | 119.697 | 2069.662 | 0.623 | 1.783 | 1131.887
LPSO | Best | 0.840 | 96167.060 | 0.022 | 1844.305 | 1959.617 | 8590.621 | 9.892 | 12.613 | 6859.916
LPSO | Mean | 0.864 | 105884.300 | 0.028 | 4760.095 | 2053.572 | 8861.428 | 10.804 | 15.071 | 8193.032
LPSO | STD | 0.029 | 7576.268 | 0.003 | 3024.811 | 75.338 | 135.469 | 0.814 | 1.484 | 916.128
GPSO | Best | 0.671 | 63817.053 | 0.018 | 1630.047 | 2305.846 | 1779.136 | 6.781 | 9.207 | 6002.318
GPSO | Mean | 0.827 | 84967.952 | 0.031 | 4267.252 | 2552.418 | 2407.985 | 8.051 | 13.506 | 8425.598
GPSO | STD | 0.052 | 10801.034 | 0.003 | 1479.187 | 85.128 | 14132.854 | 0.833 | 2.204 | 1820.534
CCAS | Best | 0.676 | 58706.483 | 0.013 | 1363.015 | 1847.001 | 1681.628 | 6.632 | 9.931 | 6127.127
CCAS | Mean | 0.820 | 82986.121 | 0.027 | 4717.921 | 2528.346 | 2605.835 | 7.767 | 15.745 | 7943.676
CCAS | STD | 0.092 | 9234.537 | 0.007 | 843.3347 | 113.5974 | 1637.341 | 0.825 | 1.039 | 755.427
VPSO | Best | 0.657 | 57804.753 | 0.014 | 1330.581 | 1915.181 | 1661.348 | 6.568 | 8.420 | 5118.947
VPSO | Mean | 0.741 | 85502.262 | 0.029 | 3187.537 | 2146.646 | 2969.365 | 7.753 | 12.515 | 7077.191
VPSO | STD | 0.027 | 8076.055 | 0.008 | 676.163 | 93.69879 | 1248.472 | 0.848 | 1.956 | 672.217

Compared with other PSO-based optimization algorithms, a certain portion of the iteration steps of PSOTS is dedicated to the exploration of the variable space of a function. This exploring process may not always lead to improved performance of optimization. As a direct result of the exploration, fewer iteration steps are available for the convergence process. The convergence rates of PSOTS thus may be lower than those of other PSO-based algorithms on functions that do not have a large number of local optima. Function Schwefel in the benchmark set is an example of such functions. In addition, the global minimum of Schwefel is located on a straight line with small gradient values in the variable space. The existence of this straight line may further slow down the convergence process of PSOTS. Specifically, in the converging process of any PSO-based algorithm, particles tend to move toward the straight line rapidly while converging slowly along the direction of the line toward the global minimum. In PSOTS, since particles are expelled apart from one another during the exploring process, the converging process first moves them to points that are farther from the global minimum along the direction of the line and then lets them converge to the global minimum. The longer distances to the global minimum clearly lead to optimization performance inferior to that of the other PSO-based algorithms.

The performance of PSOTS on functions with a dimensionality of a few thousand is also tested and evaluated. The dimensionality of two functions, Ellipse and Sphere, is selected to be 5000, and all seven algorithms are used to minimize them. Each algorithm is executed 10 times. Table 7 shows the minimization results obtained with the seven algorithms. It is clear from the table that PSOTS significantly outperforms the other six algorithms on both functions when the dimensionality is 5000.


Table 7: Minimization results on Sphere and Ellipse with dimensionality 5000.

Function | Metric | PSOTS | PSO | APSO | LPSO | GPSO | CCAS | VPSO
Sphere | Best | 30.556 | 57.098 | 62.388 | 49.728 | 35.24098 | 36.06005 | 35.6112
Sphere | Mean | 36.499 | 64.915 | 68.973 | 55.641 | 41.47434 | 41.16323 | 42.53189
Sphere | STD | 4.759 | 4.427 | 3.928 | 3.263 | 5.111826 | 3.354321 | 3.27586
Ellipse | Best | 29088.000 | 58646.700 | 61238.400 | 48136.800 | 39403.572 | 41069.376 | 34520.315
Ellipse | Mean | 35375.900 | 65562.900 | 72245.200 | 58339.700 | 54707.362 | 54713.734 | 42669.342
Ellipse | STD | 3657.300 | 4712.800 | 6179.100 | 5080.900 | 3856.751 | 3736.693 | 3716.105

5. Conclusions

In this paper, a new efficient PSO algorithm is developed for the optimization of high-dimensional functions. Based on a simple expelling force field and a translation procedure, the algorithm can explore a significantly enlarged area in the variable space of a high-dimensional function, and the performance of optimization can thus be significantly improved for functions that have a large number of local minima. Our analysis and testing results on high-dimensional benchmark functions also show that this new algorithm can achieve significantly improved performance on the optimization of high-dimensional functions, compared with the traditional PSO algorithm, a few of its variants, and a number of state-of-the-art algorithms for the optimization of high-dimensional functions.

Since a large number of PSO-based algorithms have been developed for the optimization of multivariable functions, it remains unknown whether this approach can outperform, on high-dimensional functions, the PSO-based algorithms that were not included in our experiments. A more comprehensive testing and comparison is thus needed to evaluate the performance of this algorithm.

Recently, due to the rapid growth in the number of datasets with dimensionalities larger than ten thousand, the extraction of crucial features from such datasets has become an important problem in the area of data mining. Such datasets are often called ultrahigh-dimensional datasets. The exploring ability of the proposed approach may deteriorate significantly in such ultrahigh-dimensional spaces, which may adversely affect the application of this approach to the accurate processing of ultrahigh-dimensional datasets. Improving the optimization performance of this approach on ultrahigh-dimensional functions thus constitutes an important part of our future work.

Data Availability

The source code and testing data of this work are freely available upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The work of G. Li, J. Sun, M.N.A. Rana, and Y. Song was fully supported by the Fund of Specially Appointed Professor of Jiangsu Province, China, under grant number 1034901501. The work of C. Liu was fully supported by the US Science & Technology Center Grant under grant number CCF-0939370.

References

1. R. C. Eberhart and J. Kennedy, “A new optimizer using particle swarm theory,” in Proceedings of the 6th International Symposium on Micro Machine and Human Science, pp. 39–43, Nagoya, Japan, October 1995.
2. P. N. Suganthan, “Particle swarm optimizer with neighborhood operator,” in Proceedings of the Congress on Evolutionary Computation, pp. 1958–1962, Washington, DC, USA, July 1999.
3. J. J. Liang and P. N. Suganthan, “Dynamic multi-swarm particle swarm optimizer,” in Proceedings of the IEEE Swarm Intelligence Symposium, pp. 124–129, Pasadena, CA, USA, June 2005.
4. C.-C. Peng and C.-H. Chen, “Compensatory neural fuzzy network with symbiotic particle swarm optimization for temperature control,” Applied Mathematical Modelling, vol. 39, no. 1, pp. 383–395, 2015.
5. W.-D. Chang, “A modified particle swarm optimization with multiple subpopulations for multimodal function optimization problems,” Applied Soft Computing, vol. 33, pp. 170–182, 2015.
6. M. R. Tanweer, S. Suresh, and N. Sundararajan, “Dynamic mentoring and self-regulation based particle swarm optimization algorithm for solving complex real-world optimization problems,” Information Sciences, vol. 326, pp. 1–24, 2016.
7. L. D. S. Coelho and C.-S. Lee, “Solving economic load dispatch problems in power systems using chaotic and Gaussian particle swarm optimization approaches,” International Journal of Electrical Power & Energy Systems, vol. 30, no. 5, pp. 297–307, 2008.
8. K. Tatsumi, T. Ibuki, and T. Tanino, “Particle swarm optimization with stochastic selection of perturbation-based chaotic updating system,” Applied Mathematics and Computation, vol. 269, pp. 904–929, 2015.
9. T. T. Ngo, A. Sadollah, and J. H. Kim, “A cooperative particle swarm optimizer with stochastic movements for computationally expensive numerical optimization problems,” Journal of Computational Science, vol. 13, pp. 68–82, 2016.
10. Q. Fan and X. Yan, “Self-adaptive particle swarm optimization with multiple velocity strategies and its application for p-xylene oxidation reaction process optimization,” Chemometrics and Intelligent Laboratory Systems, vol. 139, pp. 15–25, 2014.
11. G. Ardizzon, G. Cavazzini, and G. Pavesi, “Adaptive acceleration coefficients for a new search diversification strategy in particle swarm optimization algorithms,” Information Sciences, vol. 299, pp. 337–378, 2015.
12. Y. Lu, N. Zeng, Y. Liu, and N. Zhang, “A hybrid wavelet neural network and switching particle swarm optimization algorithm for face direction recognition,” Neurocomputing, vol. 155, pp. 219–224, 2015.
13. M. A. Arasomwan and A. O. Adewumi, “On the performance of linear decreasing inertial weight particle swarm optimization for global optimization,” The Scientific World Journal, vol. 2013, pp. 1–12, 2013.
14. N. A. A. Aziz and Z. Ibrahim, “Asynchronous particle swarm optimization for swarm robotics,” Procedia Engineering, vol. 41, pp. 951–957, 2012.
15. H. Banka and S. Dara, “A hamming distance based binary particle swarm optimization (HDBPSO) algorithm for high dimensional feature selection, classification and validation,” Pattern Recognition Letters, vol. 52, pp. 94–100, 2015.
16. Z. Beheshti and S. M. Shamsuddin, “Non-parametric particle swarm optimization for global optimization,” Applied Soft Computing, vol. 28, pp. 345–359, 2015.
17. W. Fang, J. Sun, H. Chen, and X. Wu, “A decentralized quantum-inspired particle swarm optimization algorithm with cellular structured population,” Information Sciences, vol. 330, pp. 19–48, 2015.
18. J. Geng, M. Li, Z.-H. Dong, and Y. S. Liao, “Port throughput forecasting by MARS-RSVR with chaotic simulated annealing particle swarm optimization algorithm,” Neurocomputing, vol. 147, pp. 239–250, 2015.
19. T. A. Giuseppe, “A cooperative coevolutionary differential evolution algorithm with adaptive subcomponents,” Procedia Computer Science, vol. 51, no. 1, pp. 834–844, 2015.
20. I. Gosciniak, “A new approach to particle swarm optimization algorithm,” Expert Systems with Applications, vol. 42, no. 2, pp. 844–854, 2015.
21. J. J. Jamian, M. N. Abdullah, H. Mokhlis, M. W. Mustafa, and A. H. A. Bakar, “Global particle swarm optimization for high dimension numerical functions analysis,” Journal of Applied Mathematics, vol. 2014, pp. 1–14, 2014.
22. Y. Li, Z.-H. Zhan, S. Lin, J. Zhang, and X. Luo, “Competitive and cooperative particle swarm optimization with information sharing mechanism for global optimization problems,” Information Sciences, vol. 293, pp. 370–382, 2015.
23. Z. Li, T. T. Nguyen, S. Chen, and T. T. Khac, “A hybrid algorithm based on particle swarm and chemical reaction optimization for multi-object problems,” Applied Soft Computing, vol. 35, pp. 325–340, 2015.
24. W. H. Lim and N. A. Mat Isa, “Adaptive division of labor particle swarm optimization,” Expert Systems with Applications, vol. 42, no. 14, pp. 5887–5903, 2015.
25. S. Salehian and S. K. Subraminiam, “Unequal clustering by improved particle swarm optimization in wireless sensor network,” Procedia Computer Science, vol. 62, pp. 403–409, 2015.
26. G. G. Samuel and C. C. A. Rajan, “Hybrid: particle swarm optimization-genetic algorithm and particle swarm optimization-shuffled frog leaping algorithm for long-term generator maintenance scheduling,” International Journal of Electrical Power & Energy Systems, vol. 65, pp. 432–442, 2015.
27. A. Sharifi, J. Kazemi Kordestani, M. Mahdaviani, and M. R. Meybodi, “A novel hybrid adaptive collaborative approach based on particle swarm optimization and local search for dynamic optimization problems,” Applied Soft Computing, vol. 32, pp. 432–448, 2015.
28. R. Shirkhani, H. Jazayeri-Rad, and S. J. Hashemi, “Modeling of a solid oxide fuel cell power plant using an ensemble of neural networks based on a combination of the adaptive particle swarm optimization and Levenberg-Marquardt algorithms,” Journal of Natural Gas Science and Engineering, vol. 21, pp. 1171–1183, 2014.
29. H. Soleimani and G. Kannan, “A hybrid particle swarm optimization and genetic algorithm for closed-loop supply chain network design in large-scale networks,” Applied Mathematical Modelling, vol. 39, no. 14, pp. 3990–4012, 2015.
30. X. Song, Y. Zhang, N. Guo, X. Sun, and Y. Wang, “Variable-size cooperative coevolutionary particle swarm optimization for feature selection on high-dimensional data,” IEEE Transactions on Evolutionary Computation, p. 1, 2020.
31. D. Wang, D. Tan, and L. Liu, “Particle swarm optimization algorithm: an overview,” Soft Computing, vol. 22, no. 2, pp. 387–408, 2018.
32. J. J. Jamian, M. W. Mustafa, and H. Mokhlis, “Optimal multiple distributed generation output through rank evolutionary particle swarm optimization,” Neurocomputing, vol. 152, pp. 190–198, 2015.
33. C. Yang, W. Gao, N. Liu, and C. Song, “Low-discrepancy sequence initialized particle swarm optimization algorithm with high-order nonlinear time-varying inertia weight,” Applied Soft Computing, vol. 29, pp. 386–394, 2015.
34. S. Zhai and T. Jiang, “A new sense-through-foliage target recognition method based on hybrid differential evolution and self-adaptive particle swarm optimization-based support vector machine,” Neurocomputing, vol. 149, pp. 573–584, 2015.
35. L. Zhang, Y. Tang, C. Hua, and X. Guan, “A new particle swarm optimization algorithm with adaptive inertia weight based on Bayesian techniques,” Applied Soft Computing, vol. 28, pp. 138–149, 2015.

Copyright © 2020 Guoliang Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


More related articles

 PDF Download Citation Citation
 Download other formatsMore
 Order printed copiesOrder
Views369
Downloads309
Citations

Related articles

Article of the Year Award: Outstanding research contributions of 2020, as selected by our Chief Editors. Read the winning articles.