Discrete Dynamics in Nature and Society
Volume 2021, Article ID 8378579, 13 pages
https://doi.org/10.1155/2021/8378579

Research Article | Open Access
Special Issue: Evolutionary Computation Methods for Search-Based Data Analytics Problems

Particle Swarm Optimization Algorithm with Multiple Phases for Solving Continuous Optimization Problems

Jing Li, Yifei Sun, Sicheng Hou

Citation: Jing Li, Yifei Sun, Sicheng Hou, "Particle Swarm Optimization Algorithm with Multiple Phases for Solving Continuous Optimization Problems", Discrete Dynamics in Nature and Society, vol. 2021, Article ID 8378579, 13 pages, 2021. https://doi.org/10.1155/2021/8378579

Academic Editor: Lianbo Ma
Received: 16 Apr 2021; Revised: 01 Jun 2021; Accepted: 18 Jun 2021; Published: 28 Jun 2021

Abstract

An algorithm with different parameter settings often performs differently on the same problem, and the proper parameter settings are difficult to determine before the optimization process. The variants of the particle swarm optimization (PSO) algorithm are studied as exemplars of swarm intelligence algorithms. Based on the concept of the building block thesis, a PSO algorithm with multiple phases was proposed to analyze the relation between search strategies and the solved problems. Two variants of the PSO algorithm, termed the PSO with fixed phases (PSOFP) algorithm and the PSO with dynamic phases (PSODP) algorithm, were compared with six variants of the standard PSO algorithm in the experimental study. Twelve benchmark functions for single-objective numerical optimization, in 50 and 100 dimensions, are used in the experimental study. The experimental results have verified the generalization ability of the proposed PSO variants.

1. Introduction

The particle swarm optimization (PSO) algorithm is a population-based stochastic algorithm modeled on the social behaviors observed in flocking birds [1, 2]. In this well-known swarm intelligence algorithm, each particle, which represents a solution in the group, flies through the search space with a velocity that is dynamically adjusted according to its own and its companions' historical behaviors. The particles tend to fly toward better search areas throughout the search process [3].

Many swarm intelligence algorithms have been proposed to solve different kinds of problems, including single- and multiple-objective optimization problems and real-world applications. These algorithms include the particle swarm optimization algorithm [1, 2], the brain storm optimization (BSO) algorithm [4–6], and pigeon-inspired optimization (PIO) [7], to name a few. Usually, a swarm intelligence algorithm has many variants with different search strategies or parameters. Take the PSO algorithm as an example: for single-objective optimization, there are the adaptive PSO algorithm [8], a time-varying attractor in the PSO algorithm [9], an interswarm interactive learning strategy in the PSO algorithm [10], the triple archives PSO algorithm [11], the social learning PSO algorithm for scalable optimization [12], a PSO variant for mixed-variable optimization problems [13], etc. For multiobjective optimization, there are the adaptive gradient multiobjective PSO algorithm [14], the coevolutionary PSO algorithm with a bottleneck objective learning strategy [15], the normalized-ranking-based PSO algorithm for many-objective optimization [16], etc. Besides, the classical PSO algorithm and its variants have been used in various real-world applications [17], such as search-based data analytics problems [18]. Different variants of optimization algorithms have different components and strategies during the search process, and designing proper components and strategies is vital for a search algorithm before solving a problem.

The building block thesis could be embedded into the genetic algorithm [19]. Building blocks are the components, at all levels, used to understand and recognize complex things or structures. With the building block thesis, an evolutionary computation algorithm can be divided into several components. The PSO algorithm contains several such components, and different settings of its parameters, structures, and strategies can perform differently on the same problem. Thus, a proper setting of the optimization algorithm is vital for the solved problem. However, there is no favorable method that could find the best setting before the optimization. Many approaches have been introduced to analyze the components of the PSO algorithm; for example, the setting of the population size in the PSO algorithm was surveyed and discussed in [20].

Based on the building block thesis, a PSO algorithm with multiple phases is proposed in this paper. The paper could be useful for understanding the search components in the PSO algorithm and for designing PSO algorithms for specific problems. It has two targets:
(1) For the theoretical analysis of the PSO algorithm, the effectiveness of different components of various PSO algorithms, such as the inertia weight, acceleration coefficients, and topology structures, is studied.
(2) For the application of the PSO algorithm, more effective PSO algorithms could be designed for solving different real-world applications.

The remainder of this paper is organized as follows. The basic PSO algorithm and topology structures are briefly introduced in Section 2. Section 3 introduces the proposed PSO algorithms with multiple phases. In Section 4, comprehensive experimental studies are conducted on 12 benchmark functions with 50 and 100 dimensions to verify the effectiveness of the proposed algorithms. Finally, Section 5 concludes with some remarks and future research directions.

2. Backgrounds

The PSO algorithm is a widely used swarm intelligence algorithm for optimization and is easy to understand and implement. A potential solution, termed a particle in the PSO algorithm, is a search point in the D-dimensional solution space. Each particle is associated with two vectors, the vector of velocity and the vector of position. Normally, the particles are indexed from 1 to m, with the index of a particle or solution represented by the notation i, and the dimensions are indexed from 1 to D, with the index of a dimension represented by the notation d. Here m is the total number of particles and D is the total number of dimensions. The position of the ith particle is represented as x_i = (x_{i1}, x_{i2}, …, x_{iD}), where x_{id} is the value of the dth dimension of the ith solution, with i = 1, …, m and d = 1, …, D. The velocity of a particle is likewise labeled as v_i = (v_{i1}, v_{i2}, …, v_{iD}).

2.1. Particle Swarm Optimization Algorithm

The basic process of the PSO algorithm is given in Algorithm 1. There are two update equations, one for the velocity and one for the position of each particle, in the basic process of the PSO algorithm. The framework of the canonical PSO algorithm is shown in Figure 1. The fitness value is evaluated based on all dimensions, while the update of the velocity or position is carried out dimension by dimension. The update equations for the velocity and the position are as follows [17, 21]:

v_{id}(t+1) = w v_{id}(t) + c_1 rand() (p_{id} − x_{id}(t)) + c_2 rand() (p_{nd} − x_{id}(t))   (1)

x_{id}(t+1) = x_{id}(t) + v_{id}(t+1)   (2)

where w denotes the inertia weight, c_1 and c_2 are two positive acceleration constants, and rand() is a random function that generates uniformly distributed random numbers in the range [0, 1). The vector p_i is termed the personal best (pbest) at the tth iteration, which refers to the best position found so far by the ith particle, and p_n is termed the local best, which refers to the position found so far by the members of the ith particle's neighborhood that has the best fitness value. The parameter w was introduced to control the global search and local search abilities. The PSO algorithm with the inertia weight is termed the canonical/classical PSO.

(1) Initialize each particle's velocity and position with random numbers;
(2) while the maximum iteration is not reached and no satisfactory solution is found do
(3)  Calculate each solution's function value;
(4)  Compare the function value of the current position with that of the best position in history (personal best, termed pbest). For each particle, if the current position has a better function value than pbest, then update pbest to the current position;
(5)  Select the particle that has the best fitness value in the current particle's neighborhood; this particle is termed the neighborhood best (nbest);
(6) for each particle do
(7)   Update the particle's velocity according to equation (1);
(8)   Update the particle's position according to equation (2);

Algorithm 1: The basic process of the PSO algorithm.

The position of a particle is updated in the search space at each iteration. The velocity update equation, shown in equation (1), has three components: the previous velocity, the cognitive part, and the social part. The cognitive part indicates that a particle learns from its own search experience, and correspondingly, the social part indicates that a particle can learn from other particles, i.e., from the good solutions in its neighborhood.
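To make equations (1) and (2) and Algorithm 1 concrete, a minimal Python sketch of the canonical PSO is given below. The objective function, bounds, and parameter values (an inertia weight decreasing from 0.9 to 0.4 and c_1 = c_2 = 2.0, as in the CPSO settings of Section 4.1) are illustrative assumptions, not code from the paper.

import numpy as np

def canonical_pso(f, n_dims, n_particles=50, max_iter=1000,
                  lower=-100.0, upper=100.0, w_start=0.9, w_end=0.4,
                  c1=2.0, c2=2.0):
    rng = np.random.default_rng()
    x = rng.uniform(lower, upper, (n_particles, n_dims))       # positions
    v = rng.uniform(lower - upper, upper - lower, x.shape)     # velocities
    pbest = x.copy()                                           # personal best positions
    pbest_val = np.apply_along_axis(f, 1, x)                   # personal best values
    for t in range(max_iter):
        w = w_start - (w_start - w_end) * t / max_iter         # linearly decreasing inertia weight
        g = pbest[np.argmin(pbest_val)]                        # neighborhood best under the star topology
        r1 = rng.random(x.shape)
        r2 = rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)  # equation (1)
        x = x + v                                              # equation (2)
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val                            # step (4) of Algorithm 1
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    return pbest[np.argmin(pbest_val)], float(pbest_val.min())

# Usage: minimize the parabolic (sphere) function in 50 dimensions.
best_x, best_f = canonical_pso(lambda z: float(np.sum(z * z)), n_dims=50)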

2.2. Topology Structure

Different kinds of topology structures, such as the global star, local ring, four clusters, or Von Neumann structure, could be utilized in PSO variants to solve various problems. A particle in a PSO variant with a different structure has a different number of particles in its neighborhood and a different scope. Learning from different neighbors means that a particle follows a different neighborhood (or local) best; in other words, the topology structure determines the connections among particles and the strategy used to propagate search information over the iterations.

A PSO algorithm's abilities of exploration and exploitation could be affected by its topology structure; i.e., with a different structure, the algorithm's convergence speed and its ability to avoid premature convergence will vary on the same optimization problem, because the topology structure determines the speed and direction of search information sharing for each particle. The global star and the local ring are the two typical topology structures. A PSO with a global star structure, where all particles are connected, has the smallest average distance in the swarm; on the contrary, a PSO with a local ring structure, where every particle is connected to two near particles, has the largest average distance in the swarm [22].

The global star structure and the local ring structure, which are two commonly used structures, are analyzed in the experimental study. These two structures are shown in Figure 2, where each group of particles has 16 individuals. It should be noted that the nearby particles in the local structure are mostly determined by the indices of the particles. The difference between the two neighborhoods is made concrete in the sketch below.
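A small, illustrative helper (not code from the paper) that returns the neighborhood of each particle under the two structures:

def neighborhood(i, n_particles, topology="star"):
    """Return the indices of the particles whose personal bests particle i may learn from."""
    if topology == "star":
        # Global star: every particle is connected to every other particle,
        # so the neighborhood best coincides with the global best.
        return list(range(n_particles))
    if topology == "ring":
        # Local ring: each particle sees itself and its two index-wise neighbors,
        # so search information propagates at most one hop per iteration.
        return [(i - 1) % n_particles, i, (i + 1) % n_particles]
    raise ValueError(f"unknown topology: {topology}")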

3. Particle Swarm Optimization with Multiple Phases

The search process could be divided into different phases, such as the exploration phase and the exploitation phase. A discussion of the genetic diversity of an evolutionary algorithm in the exploration phase has been given in [23]. To enhance the generalization ability of the PSO algorithm, multiple phases could be combined into one algorithm. The PSO algorithm is a combination of several components, and each component could be seen as a building block of a PSO variant. Some changeable components in PSO algorithms are shown in Figure 3. The setting of a component is independent of the other components. Normally, the parameter settings, topology structure, and other strategies should be determined before the search process. There are three well-known standard PSO algorithms, namely, the standard PSO algorithm by Bratton and Kennedy (SPSO-BK) [24], the standard PSO algorithm by Clerc (SPSO-C) [25], and the canonical PSO (CPSO) algorithm by Shi and Eberhart [26].

Different PSO variants emphasize different search phases and perform differently when solving different kinds of problems. Thus, to utilize the strengths of different variants, a PSO algorithm could be a combination of several PSO variants. For example, the global star structure could be used at the beginning of the search to obtain a good exploration ability, while the local ring structure could be used at the end of the search to promote the exploitation ability. The basic procedure of the PSO algorithm with the multiple phase strategy (PSOMP) is given in Algorithm 2. The setting of the phases could be fixed or dynamically changed during the search. To validate the performance of different settings, two kinds of PSO algorithms with multiple phases are proposed.

(1) Initialize each particle's velocity and position with random numbers;
(2) while the maximum iteration is not reached and no satisfactory solution is found do
(3)  Calculate each solution's function value;
(4)  Compare the function value of the current position with that of the best position in history (personal best, termed pbest). For each particle, if the current position has a better function value than pbest, then update pbest to the current position;
(5)  Select the particle that has the best fitness value in the current particle's neighborhood; this particle is termed the neighborhood best;
(6) for each particle do
(7)   Update the particle's velocity according to equation (1);
(8)   Update the particle's position according to equation (2);
(9) Change the search phase: update the structure and/or the parameter settings;

Algorithm 2: The basic procedure of the PSO algorithm with the multiple phase strategy (PSOMP).
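Algorithm 2 differs from Algorithm 1 only in step (9). One self-contained way to express that step is a phase controller that owns an ordered list of phases and advances when the active phase's trigger fires. The class below is an illustrative sketch; the settings dictionaries (w, c1, c2, topology) and the trigger interface are assumptions, not the paper's code.

class PhaseController:
    """Step (9) of Algorithm 2: hold an ordered list of phases and advance on a trigger."""
    def __init__(self, phases):
        # phases: list of (settings, trigger) pairs; settings is a dict such as
        # {"w": 0.72984, "c1": 1.496172, "c2": 1.496172, "topology": "ring"};
        # trigger is a callable taking the current gbest value and returning True
        # when the search should move to the next phase (None: never switch).
        self.phases = phases
        self.idx = 0

    def current(self):
        return self.phases[self.idx][0]

    def step(self, gbest_val):
        trigger = self.phases[self.idx][1]
        if trigger is not None and self.idx + 1 < len(self.phases) and trigger(gbest_val):
            self.idx += 1
        return self.current()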
3.1. Particle Swarm Optimization with Fixed Phases

One PSOMP variant is the PSO with fixed phases (PSOFP) algorithm. The process of the PSO algorithm with multiple fixed phases is shown in Figure 4. Two standard PSO variants, the CPSO with the star structure and the SPSO-BK with the ring structure, are combined in the PSOFP algorithm. In the experimental study, both variants perform the same number of iterations. The PSOFP algorithm starts with the CPSO with the star structure and ends with the SPSO-BK with the ring structure. All parameters are fixed for each phase during the search, and the PSOFP algorithm switches from one variant to the other after a fixed number of iterations.
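The fixed schedule can be sketched as a function of the iteration counter. The half-and-half split matches the statement that both variants perform the same number of iterations, and the numeric values are the published CPSO and SPSO-BK settings listed in Section 4.1; this is an illustrative sketch, not the paper's code.

def psofp_settings(t, max_iter):
    """PSOFP schedule: CPSO/star for the first half of the run, SPSO-BK/ring for the second."""
    half = max_iter // 2
    if t < half:
        w = 0.9 - (0.9 - 0.4) * t / half   # CPSO: linearly decreasing inertia weight
        return {"w": w, "c1": 2.0, "c2": 2.0, "topology": "star"}
    # SPSO-BK: constriction-equivalent inertia weight and acceleration coefficients.
    return {"w": 0.72984, "c1": 1.496172, "c2": 1.496172, "topology": "ring"}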

3.2. Particle Swarm Optimization with Dynamic Phases

The other PSO variant is the PSO with dynamic phases (PSODP) algorithm. As the name indicates, the phase can be changed dynamically during the search process. Like the PSOFP algorithm, the PSODP algorithm starts with the CPSO with the star structure and ends with the SPSO-BK with the ring structure. However, the number of iterations per phase is not fixed during the search. The process of the PSO algorithm with multiple dynamic phases is shown in Figure 5. The global best (gbest) position is the best solution found so far. A gbest that does not change over consecutive iterations indicates that the algorithm may be stuck in a local optimum, and the PSO variant is changed after this phenomenon occurs. In the setting of the PSODP algorithm, the search phase changes from the CPSO with the star structure to the SPSO-BK with the ring structure when the gbest appears stuck in a local optimum, i.e., when the gbest has not changed for 100 iterations.
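The stagnation condition can be implemented as a small counter. The sketch below follows the 100-iteration criterion stated above; the class name and interface are illustrative assumptions.

class StagnationTrigger:
    """PSODP switch condition: fire when gbest has not improved for `patience` iterations."""
    def __init__(self, patience=100):
        self.patience = patience
        self.best = float("inf")
        self.stalled = 0

    def __call__(self, gbest_val):
        if gbest_val < self.best:
            self.best, self.stalled = gbest_val, 0   # gbest improved: reset the counter
        else:
            self.stalled += 1
        # True once gbest has stayed unchanged for `patience` consecutive iterations,
        # which switches the search from CPSO/star to SPSO-BK/ring.
        return self.stalled >= self.patience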

4. Experimental Study

4.1. Benchmark Test Functions and Parameter Setting

Table 1 shows the benchmark functions used in the experimental study. To illustrate the search ability of the proposed algorithms, a diverse set of 12 benchmark functions of different types is used to conduct the experiments. These benchmark functions can be classified into two groups: the first five functions are unimodal, while the other seven functions are multimodal. All functions are run 50 times to ensure the statistically reasonable results necessary to compare the different approaches, and the value of the global optimum is shifted to a different value f* for each function. The function value of the best solution found by an algorithm in a run is denoted by f(x_best), and the error of each run is denoted as f(x_best) − f*.


Table 1: The benchmark functions used in the experimental study. (The function definitions and search ranges appear in the original table; the value that could not be recovered from the source is marked with a dash.)

Function | Shifted optimum f*
f1: Parabolic | −450.0
f2: Schwefel's P2.22 | −330.0
f3: Schwefel's P1.2 | 450.0
f4: Step | 180.0
f5: Quartic noise | 120.0
f6: Rosenbrock | −450.0
f7: Schwefel | —
f8: Rastrigin | 450.0
f9: Noncontinuous Rastrigin | 180.0
f10: Ackley | 120.0
f11: Griewank | −450.0
f12: Generalized penalized | −330.0
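The shifted-optimum convention and the per-run error defined above can be written down directly; a minimal sketch, with the parabolic (sphere) function and its shift value taken from Table 1:

import numpy as np

def shifted_parabolic(x, f_star=-450.0):
    """Parabolic (sphere) function with the value of its global optimum shifted to f_star."""
    return float(np.sum(x * x)) + f_star

def run_error(f_best, f_star):
    """Error of one run: value of the best solution found minus the shifted optimum."""
    return f_best - f_star

# Example: a run that ends exactly at the optimum has zero error.
assert run_error(shifted_parabolic(np.zeros(50)), -450.0) == 0.0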

Three kinds of standard or canonical PSO algorithms are tested in the experimental study, each with two kinds of structures, the global star structure and the local ring structure. There are 50 particles in each group of PSO algorithms, and each algorithm is run 50 times with the same maximum number of iterations in every run. The other settings for the three standard algorithms are as follows.
(i) SPSO-BK settings: w = 0.72984; c1 = c2 = 2.05 × 0.72984, i.e., c1 = c2 = 1.496172 [24].
(ii) SPSO-C settings: w = 1/(2 ln 2) ≈ 0.721; c1 = c2 = 0.5 + ln 2 ≈ 1.193 [25].
(iii) Canonical PSO (CPSO) settings: w is linearly decreased from 0.9 to 0.4 during the search; c1 = c2 = 2.0 [26].

4.2. Experimental Results

The experimental results for the functions with 50 dimensions are given in Tables 2 and 3. The results for the unimodal functions are given in Table 2, while the results for the multimodal functions are given in Tables 2 and 3. The aim of the experimental study is not to validate the search performance of the proposed algorithms but to discuss the combination of different search phases; it is a preliminary study on learning the characteristics of the benchmark functions.


Table 2: Results for the functions f1–f6 with 50 dimensions (best, mean, and standard deviation over 50 runs).

Parabolic (f1) / Schwefel's P2.22 (f2)
Algorithm | Best | Mean | St.D. | Best | Mean | St.D.
SPSO-BK (star) | −450 | −450 | 4.52E-13 | −330 | −329.9999 | 1.35E-11
SPSO-BK (ring) | −449.9999 | −449.9999 | 2.11E-12 | −329.9999 | −329.9999 | 4.37E-12
SPSO-C (star) | −449.9999 | −449.9999 | 4.12E-09 | −329.9999 | −329.9999 | 3.08E-05
SPSO-C (ring) | −449.9999 | −449.9999 | 3.13E-12 | −329.9999 | −329.9999 | 6.01E-12
CPSO (star) | −450 | −450 | 5.90E-14 | −330 | −329.8 | 1.4
CPSO (ring) | −373.3774 | −191.0975 | 98.981 | −329.8322 | −325.9999 | 2.151
PSOFP | −450 | −450 | 1.47E-12 | −329.9999 | −329.9999 | 6.25E-09
PSODP | −450 | −450 | 1.91E-12 | −329.9999 | −329.9999 | 1.45E-06

Schwefel's P1.2 (f3) / Step (f4)
Algorithm | Best | Mean | St.D. | Best | Mean | St.D.
SPSO-BK (star) | 450.0000 | 550.0000 | 699.9999 | 180 | 188.36 | 13.936
SPSO-BK (ring) | 455.3668 | 495.6626 | 38.833 | 266 | 360.46 | 50.708
SPSO-C (star) | 450.0000 | 450.3961 | 2.748 | 207 | 343.66 | 106.891
SPSO-C (ring) | 450.0958 | 451.3167 | 2.5006 | 373 | 714.31 | 67.178
CPSO (star) | 476.3968 | 602.0931 | 697.433 | 180 | 180 | 0
CPSO (ring) | 3121.6019 | 4966.1359 | 769.950 | 395 | 534.64 | 70.468
PSOFP | 450.0476 | 454.3896 | 29.0223 | 180 | 180.4 | 2.107
PSODP | 450.0013 | 460.0571 | 34.158 | 180 | 187.54 | 26.872

Quartic noise (f5) / Rosenbrock (f6)
Algorithm | Best | Mean | St.D. | Best | Mean | St.D.
SPSO-BK (star) | 120.0022 | 120.0044 | 0.001 | −449.9976 | −418.4438 | 24.245
SPSO-BK (ring) | 120.0281 | 120.0630 | 0.023 | −444.3137 | −415.0037 | 9.573
SPSO-C (star) | 120.0408 | 120.1508 | 0.070 | −449.9999 | −448.3351 | 2.378
SPSO-C (ring) | 120.0467 | 120.2094 | 0.085 | −443.4564 | −411.7220 | 17.957
CPSO (star) | 120.0037 | 120.0090 | 0.002 | −421.5111 | −389.7360 | 27.273
CPSO (ring) | 120.0194 | 120.0321 | 0.006 | −218.8776 | 193.7953 | 156.862
PSOFP | 120.0026 | 120.0061 | 0.0024 | −439.0752 | −395.1315 | 31.0671
PSODP | 120.0029 | 120.0086 | 0.0076 | −444.2963 | −405.6357 | 25.7581


Table 3: Results for the functions f7–f12 with 50 dimensions (best, mean, and standard deviation over 50 runs).

Schwefel (f7) / Rastrigin (f8)
Algorithm | Best | Mean | St.D. | Best | Mean | St.D.
SPSO-BK (star) | 6603.8424 | 9786.6971 | 1359.137 | 497.7579 | 523.1492 | 13.619
SPSO-BK (ring) | 16100.4628 | 17437.3017 | 673.960 | 497.7580 | 530.6957 | 14.832
SPSO-C (star) | 7684.6670 | 9898.3600 | 1218.2453 | 486.8134 | 520.5225 | 15.146
SPSO-C (ring) | 16299.0367 | 17523.4040 | 548.640 | 499.7950 | 530.1292 | 14.646
CPSO (star) | 6934.3602 | 8761.8199 | 854.741 | 495.7680 | 529.6818 | 22.117
CPSO (ring) | 15875.0546 | 17592.1651 | 661.051 | 578.8806 | 631.1157 | 16.836
PSOFP | 6144.7441 | 8472.2288 | 1171.500 | 504.7226 | 528.2633 | 13.507
PSODP | 5950.1707 | 8369.7773 | 1038.811 | 493.7781 | 525.2104 | 20.126

Noncontinuous Rastrigin (f9) / Ackley (f10)
Algorithm | Best | Mean | St.D. | Best | Mean | St.D.
SPSO-BK (star) | 186 | 224.58 | 22.039 | 120 | 121.773 | 0.649
SPSO-BK (ring) | 232.0000 | 262.7717 | 14.463 | 120.0000 | 120.0213 | 0.149
SPSO-C (star) | 227 | 269.985 | 21.175 | 121.8030 | 124.3168 | 1.090
SPSO-C (ring) | 239.0000 | 277.1382 | 18.542 | 120.0000 | 120.3807 | 0.558
CPSO (star) | 211 | 240.6001 | 14.448 | 120 | 120 | 7.15E-14
CPSO (ring) | 279.2935 | 321.5721 | 17.248 | 122.7702 | 124.2613 | 0.529
PSOFP | 210 | 237.3073 | 19.334 | 120 | 120.0499 | 0.248
PSODP | 200 | 241.0076 | 20.067 | 120 | 120.0484 | 0.237

Griewank (f11) / Generalized penalized (f12)
Algorithm | Best | Mean | St.D. | Best | Mean | St.D.
SPSO-BK (star) | −450 | −449.9769 | 0.033 | −330 | −329.7654 | 0.330
SPSO-BK (ring) | −449.9999 | −449.9997 | 0.002 | −329.9999 | −329.9987 | 0.008
SPSO-C (star) | −449.9999 | −449.8900 | 0.228 | −329.9999 | −329.0989 | 1.123
SPSO-C (ring) | −449.9999 | −449.9989 | 0.004 | −329.9999 | −329.9912 | 0.039
CPSO (star) | −450 | −449.9902 | 0.012 | −330 | −329.9975 | 0.012
CPSO (ring) | −448.2587 | −445.2495 | 1.087 | −329.2803 | −327.6535 | 0.783
PSOFP | −450 | −449.9874 | 0.019 | −330 | −329.9763 | 0.051
PSODP | −450 | −449.9902 | 0.017 | −330 | −329.9763 | 0.055

To validate the generalization ability of the PSO variants, the same experiments are conducted on the functions with 100 dimensions, with all parameter settings the same as for the functions with 50 dimensions. The results for the unimodal functions are given in Table 4, while the results for the multimodal functions are given in Tables 4 and 5. From the experimental results, it can be seen that the proposed algorithms obtain good solutions on most benchmark functions. Owing to the combination of different phases, the proposed algorithms perform more robustly than the algorithms with a single phase.


Table 4: Results for the functions f1–f6 with 100 dimensions (best, mean, and standard deviation over 50 runs).

Parabolic (f1) / Schwefel's P2.22 (f2)
Algorithm | Best | Mean | St.D. | Best | Mean | St.D.
SPSO-BK (star) | −450 | −449.9999 | 2.97E-10 | −329.9999 | −329.9999 | 1.90E-07
SPSO-BK (ring) | −449.9999 | −449.9999 | 2.72E-11 | −329.9999 | −329.9999 | 7.34E-11
SPSO-C (star) | −449.9999 | −449.9947 | 0.0369 | −329.9999 | −329.8556 | 0.4552
SPSO-C (ring) | −449.9999 | −449.9999 | 1.40E-11 | −329.9999 | −329.9999 | 3.09E-11
CPSO (star) | −449.9999 | −449.9999 | 2.13E-09 | −329.9999 | −329.5999 | 1.9595
CPSO (ring) | −149.6541 | 1344.4583 | 532.677 | −325.8224 | −309.5554 | 7.5546
PSOFP | −449.9999 | −449.9999 | 1.36E-07 | −329.9999 | −329.9999 | 1.36E-09
PSODP | −449.9999 | −449.9999 | 3.34E-07 | −329.9999 | −329.9999 | 0.0006

Schwefel's P1.2 (f3) / Step (f4)
Algorithm | Best | Mean | St.D. | Best | Mean | St.D.
SPSO-BK (star) | 513.6355 | 1602.1760 | 3006.812 | 208 | 377.32 | 137.165
SPSO-BK (ring) | 2426.1604 | 5355.3297 | 1565.104 | 671 | 978.24 | 174.267
SPSO-C (star) | 555.4292 | 2998.8775 | 3037.685 | 1208 | 2683.48 | 755.401
SPSO-C (ring) | 1188.1057 | 2075.1955 | 769.974 | 1585 | 2448.46 | 531.394
CPSO (star) | 5145.9671 | 10822.5242 | 3423.207 | 180 | 181.7 | 1.3
CPSO (ring) | 20782.011 | 27917.191 | 3168.315 | 1791 | 2506.12 | 287.989
PSOFP | 1399.1752 | 3987.6253 | 5750.961 | 182 | 199.76 | 28.642
PSODP | 989.7956 | 2861.3122 | 5655.407 | 185 | 215.36 | 115.588

Quartic noise (f5) / Rosenbrock (f6)
Algorithm | Best | Mean | St.D. | Best | Mean | St.D.
SPSO-BK (star) | 120.0138 | 120.0278 | 0.0110 | −397.0471 | −329.9329 | 41.398
SPSO-BK (ring) | 120.1851 | 120.3265 | 0.1013 | −373.0928 | −346.4626 | 25.371
SPSO-C (star) | 120.5708 | 121.3554 | 0.4501 | −397.2747 | −304.3235 | 40.5292
SPSO-C (ring) | 120.4383 | 120.9911 | 0.3064 | −371.9901 | −339.0829 | 28.481
CPSO (star) | 120.0379 | 120.0658 | 0.014 | −363.4721 | −294.8193 | 50.003
CPSO (ring) | 120.1338 | 120.2321 | 0.043 | 606.9444 | 3907.8062 | 1468.905
PSOFP | 120.0224 | 120.0438 | 0.014 | −380.2421 | −290.0060 | 47.549
PSODP | 120.0219 | 120.0518 | 0.036 | −368.7163 | −299.5108 | 43.518


Table 5: Results for the functions f7–f12 with 100 dimensions (best, mean, and standard deviation over 50 runs).

Schwefel (f7) / Rastrigin (f8)
Algorithm | Best | Mean | St.D. | Best | Mean | St.D.
SPSO-BK (star) | 16056.678 | 20638.485 | 2182.500 | 569.3949 | 611.7217 | 25.514
SPSO-BK (ring) | 34073.736 | 37171.637 | 944.548 | 560.5540 | 627.2546 | 25.699
SPSO-C (star) | 16879.295 | 20909.081 | 2451.469 | 552.4806 | 598.6068 | 23.697
SPSO-C (ring) | 34864.588 | 37164.686 | 841.829 | 569.4197 | 624.0863 | 22.440
CPSO (star) | 15445.328 | 19374.588 | 1691.740 | 585.3142 | 639.1086 | 30.012
CPSO (ring) | 34235.521 | 37141.853 | 835.129 | 943.5140 | 1021.6837 | 37.831
PSOFP | 14734.074 | 18723.674 | 1738.027 | 583.3243 | 648.9117 | 26.8680
PSODP | 14094.634 | 18899.269 | 1994.712 | 574.8307 | 639.9075 | 28.179

Noncontinuous Rastrigin (f9) / Ackley (f10)
Algorithm | Best | Mean | St.D. | Best | Mean | St.D.
SPSO-BK (star) | 212.0000 | 275.1500 | 30.6297 | 122.5793 | 123.5817 | 0.7107
SPSO-BK (ring) | 307.2505 | 358.3455 | 27.633 | 120.0000 | 121.8010 | 0.4814
SPSO-C (star) | 288.0000 | 374.4500 | 41.1362 | 126.3445 | 128.3848 | 1.1134
SPSO-C (ring) | 332.2500 | 395.7141 | 34.1028 | 120.0000 | 121.8958 | 0.5702
CPSO (star) | 322.0000 | 375.1452 | 31.873 | 120.0000 | 120.0000 | 0.0001
CPSO (ring) | 580.5314 | 660.6651 | 34.490 | 123.0676 | 126.4343 | 0.7101
PSOFP | 357 | 360.7510 | 38.267 | 120.0000 | 121.6460 | 0.485
PSODP | 280 | 355.1481 | 29.234 | 120.0000 | 121.6061 | 0.562

Griewank (f11) / Generalized penalized (f12)
Algorithm | Best | Mean | St.D. | Best | Mean | St.D.
SPSO-BK (star) | −450 | −449.8481 | 0.2569 | −330 | −329.6599 | 0.3758
SPSO-BK (ring) | −449.9999 | −449.9994 | 0.0022 | −329.9999 | −329.9880 | 0.0334
SPSO-C (star) | −449.9999 | −449.4670 | 0.6696 | −329.9999 | −329.2211 | 0.8798
SPSO-C (ring) | −449.9999 | −449.9995 | 0.0024 | −329.9999 | −329.9526 | 0.0640
CPSO (star) | −449.9999 | −449.9933 | 0.0092 | −329.9999 | −329.9707 | 0.0432
CPSO (ring) | −437.0433 | −428.5081 | 2.0322 | −320.4856 | −312.0457 | 4.2756
PSOFP | −449.9999 | −449.9817 | 0.032 | −329.9999 | −329.9069 | 0.151
PSODP | −449.9999 | −449.9828 | 0.031 | −329.9999 | −329.9224 | 0.108

The convergence graphs of the error values for each algorithm on all functions with 50 and 100 dimensions are shown in Figures 6 and 7, respectively. In these figures, each curve represents the variation of the mean error value over the iterations for one algorithm. The six PSO variants are tested against the two proposed algorithms. The notations "S" and "R" indicate the star and the ring structure in the PSO variants, respectively; for example, "SPSOBKS" and "SPSOBKR" indicate the SPSO-BK algorithm with the star and the ring structure, respectively. The robustness of the proposed algorithms can also be observed in the convergence graphs.

4.3. Computational Time

Tables 6 and 7 give the total computing time (in seconds) of 50 runs in the experiments for the functions with 50 and 100 dimensions, respectively. Normally, a PSO algorithm with the ring structure is significantly slower than the same PSO algorithm with the star structure.


Table 6: Total computing time (in seconds) of 50 runs for the functions with 50 dimensions (rows f1–f12 follow the order of Table 1).

Function | SPSO-BK (star) | SPSO-BK (ring) | SPSO-C (star) | SPSO-C (ring) | CPSO (star) | CPSO (ring) | PSOFP | PSODP
f1 | 141.4753 | 1194.9538 | 154.2324 | 1393.8880 | 127.3165 | 779.0309 | 637.2494 | 475.0059
f2 | 149.6900 | 977.3025 | 156.3205 | 1203.5832 | 138.2238 | 733.2784 | 668.1761 | 496.8441
f3 | 248.2647 | 663.9135 | 231.0108 | 326.9424 | 226.9095 | 772.1789 | 291.3018 | 307.8365
f4 | 147.8375 | 1692.1071 | 152.1913 | 1737.4540 | 138.2383 | 716.2581 | 665.2441 | 544.4709
f5 | 192.5499 | 777.2614 | 191.9068 | 852.0861 | 196.0920 | 413.9658 | 432.6319 | 440.5604
f6 | 131.8303 | 203.6976 | 137.3674 | 146.3761 | 137.8663 | 766.9009 | 252.0464 | 197.5675
f7 | 247.0545 | 599.0113 | 262.3676 | 603.3503 | 229.0871 | 487.7298 | 225.6345 | 241.2337
f8 | 211.3601 | 568.8381 | 212.1822 | 506.8316 | 188.6031 | 761.6392 | 746.7872 | 593.8517
f9 | 210.0288 | 1188.4555 | 223.1706 | 1180.9223 | 213.6926 | 765.8513 | 608.0168 | 519.1364
f10 | 213.6390 | 837.5839 | 203.2908 | 1179.4232 | 182.0255 | 755.5640 | 764.1055 | 542.1310
f11 | 206.5978 | 1244.9973 | 211.6536 | 1409.6724 | 194.0266 | 829.1956 | 721.1344 | 550.2234
f12 | 341.6110 | 1230.0763 | 340.1308 | 1295.6684 | 327.5602 | 919.8441 | 864.0743 | 697.8836


Table 7: Total computing time (in seconds) of 50 runs for the functions with 100 dimensions (rows f1–f12 follow the order of Table 1).

Function | SPSO-BK (star) | SPSO-BK (ring) | SPSO-C (star) | SPSO-C (ring) | CPSO (star) | CPSO (ring) | PSOFP | PSODP
f1 | 243.9461 | 1378.8133 | 238.7528 | 1936.8303 | 228.8345 | 1461.8672 | 629.0300 | 787.2999
f2 | 263.1645 | 594.7633 | 278.4290 | 1420.0705 | 255.8799 | 1371.3888 | 260.5299 | 259.7703
f3 | 622.4394 | 1585.8146 | 628.6117 | 927.1755 | 621.1243 | 1624.7289 | 869.5341 | 872.5241
f4 | 278.0287 | 3298.0839 | 292.0504 | 3303.1495 | 261.1934 | 1367.3627 | 1303.8592 | 979.7108
f5 | 358.0799 | 1460.3739 | 356.0119 | 1591.3160 | 360.3514 | 711.6922 | 721.3646 | 646.3844
f6 | 236.5184 | 311.3102 | 248.2506 | 249.3193 | 237.4182 | 1456.7880 | 251.5607 | 259.2736
f7 | 448.8993 | 1102.5489 | 466.0795 | 1104.2448 | 434.8893 | 874.0973 | 440.6255 | 422.4402
f8 | 341.5798 | 663.7398 | 344.3478 | 730.8557 | 337.0671 | 1381.5354 | 891.4041 | 885.4871
f9 | 368.7186 | 1308.2763 | 405.5594 | 1159.4827 | 380.6574 | 1389.6927 | 1023.6844 | 886.3957
f10 | 345.1321 | 732.5952 | 358.2724 | 1229.2192 | 336.9159 | 1467.7248 | 743.4061 | 814.7588
f11 | 377.9289 | 1556.1002 | 392.8346 | 2195.5736 | 378.2778 | 1641.5258 | 823.5398 | 880.5851
f12 | 619.3468 | 950.2162 | 635.6353 | 1220.6493 | 648.2199 | 1920.6593 | 862.0097 | 962.4388

The PSOFP and PSODP algorithms run slower than the traditional PSO algorithms on the tested benchmark functions. The reason is that some search resources are allocated to the changes of the search phases: a PSO variant with multiple phases needs to determine whether to switch phases and when to do so.

5. Conclusions

Swarm intelligence algorithms perform differently on the same optimization problem; even for one kind of optimization algorithm, a change of structure or parameters may lead to different results. It is very difficult, if not impossible, to obtain the connection between an algorithm and a problem in advance. Thus, choosing a proper setting for an algorithm before the search is vital for the optimization process. In this paper, we attempt to combine the strengths of different settings of the PSO algorithm. Based on the building block thesis, a PSO algorithm with a multiple phase strategy was proposed to solve single-objective numerical optimization problems. Two variants of the PSO algorithm, termed the PSO with fixed phases (PSOFP) algorithm and the PSO with dynamic phases (PSODP) algorithm, were tested on 12 benchmark functions. The experimental results showed that the combination of different phases could enhance the robustness of the PSO algorithm.

There are two phases in the proposed PSOFP and PSODP methods; however, the number of phases could be increased, or the types of phases could be changed, in other PSO algorithms with multiple phases. Beyond the PSO algorithm, the building block thesis could be utilized in other swarm optimization algorithms: based on the analysis of the different components, the strengths and weaknesses of different swarm optimization algorithms could be understood. Utilizing this multiple phase strategy with other swarm algorithms is our future work.

Data Availability

The data and codes used to support the findings of this study have been deposited in the GitHub repository.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was partially supported by the Scientific Research Foundation of Shaanxi Railway Institute (Grant no. KY2016-29) and the Natural Science Foundation of China (Grant no. 61703256).

References

1. R. Eberhart and J. Kennedy, "A new optimizer using particle swarm theory," in Proceedings of the Sixth International Symposium on Micro Machine and Human Science, pp. 39–43, Nagoya, Japan, October 1995.
2. J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks (ICNN), pp. 1942–1948, Perth, WA, Australia, November 1995.
3. R. Eberhart and Y. Shi, "Particle swarm optimization: developments, applications and resources," in Proceedings of the 2001 Congress on Evolutionary Computation (CEC 2001), pp. 81–86, May 2001.
4. Y. Shi, "An optimization algorithm based on brainstorming process," International Journal of Swarm Intelligence Research, vol. 2, no. 4, pp. 35–62, 2011.
5. S. Cheng, Q. Qin, J. Chen, and Y. Shi, "Brain storm optimization algorithm: a review," Artificial Intelligence Review, vol. 46, no. 4, pp. 445–458, 2016.
6. L. Ma, S. Cheng, and Y. Shi, "Enhancing learning efficiency of brain storm optimization via orthogonal learning design," IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2020.
7. S. Cheng, X. Lei, H. Lu, Y. Zhang, and Y. Shi, "Generalized pigeon-inspired optimization algorithms," Science China Information Sciences, vol. 62, 2019.
8. Z.-H. Zhan, J. Zhang, Y. Li, and H. S.-H. Chung, "Adaptive particle swarm optimization," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 39, no. 6, pp. 1362–1381, 2009.
9. J. Liu, X. Ma, X. Li, M. Liu, T. Shi, and P. Li, "Random convergence analysis of particle swarm optimization algorithm with time-varying attractor," Swarm and Evolutionary Computation, vol. 61, Article ID 100819, 2021.
10. Q. Qin, S. Cheng, Q. Zhang, L. Li, and Y. Shi, "Particle swarm optimization with interswarm interactive learning strategy," IEEE Transactions on Cybernetics, vol. 46, no. 10, pp. 2238–2251, 2016.
11. X. Xia, L. Gui, F. Yu et al., "Triple archives particle swarm optimization," IEEE Transactions on Cybernetics, vol. 50, no. 12, pp. 4862–4875, 2020.
12. R. Cheng and Y. Jin, "A social learning particle swarm optimization algorithm for scalable optimization," Information Sciences, vol. 291, pp. 43–60, 2015.
13. F. Wang, H. Zhang, and A. Zhou, "A particle swarm optimization algorithm for mixed-variable optimization problems," Swarm and Evolutionary Computation, vol. 60, Article ID 100808, 2021.
14. H. Han, W. Lu, L. Zhang, and J. Qiao, "Adaptive gradient multiobjective particle swarm optimization," IEEE Transactions on Cybernetics, vol. 48, no. 11, pp. 3067–3079, 2018.
15. X.-F. Liu, Z.-H. Zhan, Y. Gao, J. Zhang, S. Kwong, and J. Zhang, "Coevolutionary particle swarm optimization with bottleneck objective learning strategy for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 4, pp. 587–602, 2019.
16. S. Cheng, X. Lei, J. Chen, J. Feng, and Y. Shi, "Normalized ranking based particle swarm optimizer for many objective optimization," in Simulated Evolution and Learning (SEAL 2017), Y. Shi, K. C. Tan, M. Zhang et al., Eds., pp. 347–357, Springer International Publishing, New York, NY, USA, 2017.
17. S. Cheng, H. Lu, X. Lei, and Y. Shi, "A quarter century of particle swarm optimization," Complex & Intelligent Systems, vol. 4, no. 3, pp. 227–239, 2018.
18. S. Cheng, L. Ma, H. Lu, X. Lei, and Y. Shi, "Evolutionary computation for solving search-based data analytics problems," Artificial Intelligence Review, vol. 54, 2021, in press.
19. J. H. Holland, "Building blocks, cohort genetic algorithms, and hyperplane-defined functions," Evolutionary Computation, vol. 8, no. 4, pp. 373–391, 2000.
20. A. P. Piotrowski, J. J. Napiorkowski, and A. E. Piotrowska, "Population size in particle swarm optimization," Swarm and Evolutionary Computation, vol. 58, Article ID 100718, 2020.
21. J. Kennedy, R. Eberhart, and Y. Shi, Swarm Intelligence, Morgan Kaufmann, Burlington, MA, USA, 2001.
22. R. Mendes, J. Kennedy, and J. Neves, "The fully informed particle swarm: simpler, maybe better," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 204–210, 2004.
23. J. Arabas and K. Opara, "Population diversity of nonelitist evolutionary algorithms in the exploration phase," IEEE Transactions on Evolutionary Computation, vol. 24, no. 6, pp. 1050–1062, 2020.
24. D. Bratton and J. Kennedy, "Defining a standard for particle swarm optimization," in Proceedings of the 2007 IEEE Swarm Intelligence Symposium (SIS 2007), pp. 120–127, Honolulu, HI, USA, April 2007.
25. M. Clerc, Standard Particle Swarm Optimisation from 2006 to 2011, Independent Consultant, Jersey City, NJ, USA, 2012.
26. Y. Shi and R. Eberhart, "Empirical study of particle swarm optimization," in Proceedings of the 1999 Congress on Evolutionary Computation (CEC 1999), pp. 1945–1950, Washington, DC, USA, July 1999.

Copyright © 2021 Jing Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
