Research Article  Open Access
Xueying Lv, Yitian Wang, Junyi Deng, Guanyu Zhang, Liu Zhang, "Improved Particle Swarm Optimization Algorithm Based on Last-Eliminated Principle and Enhanced Information Sharing", Computational Intelligence and Neuroscience, vol. 2018, Article ID 5025672, 17 pages, 2018. https://doi.org/10.1155/2018/5025672
Improved Particle Swarm Optimization Algorithm Based on Last-Eliminated Principle and Enhanced Information Sharing
Abstract
In this study, an improved particle swarm optimization algorithm (IEPSO) based on the last-eliminated principle is proposed to solve optimization problems in engineering design. During optimization, the IEPSO enhances information communication among populations and maintains population diversity to overcome the limitations of classical optimization algorithms in solving multiparameter, strongly coupled, and nonlinear engineering optimization problems. These limitations include premature convergence and the tendency to fall easily into local optima. The parameters involved in the introduced "local-global information sharing" term are analyzed, and the principle of parameter selection for good performance is determined. The performances of the IEPSO and classical optimization algorithms are then tested on multiple sets of classical functions to verify the global search performance of the IEPSO. Finally, the simulation results are compared with those of improved classical optimization algorithms to verify the advanced performance of the IEPSO algorithm.
1. Introduction
The development of industrial society has led to the successful application of optimal design methods to diverse engineering practices, such as path planning, structural design, control theory, and control engineering [1–10]. In 1995, the foraging behavior of bird swarms inspired Kennedy and Eberhart to propose the particle swarm optimization (PSO) algorithm. PSO requires few parameter adjustments and is easy to implement; hence, it is the most commonly used swarm intelligence algorithm [11–20]. However, in practical applications, most problems are complicated design problems with multiple parameters, strong coupling, and nonlinearity. Therefore, improving the global optimization capability of an optimization algorithm is important in solving complex engineering optimization problems. To improve the capability of traditional PSO, many scholars have proposed improvement strategies, including the adjustment of parameters and combinations of various mechanisms.
Shi and Eberhart [21] proposed an inertial weight improvement strategy (SPSO) with strong global search capability at the beginning of an iteration, strong local search capability in the later iterations, and fine search near the optimal solution. Although the SPSO improves the convergence speed of the algorithm, the "premature" phenomenon remains. Zhang [22] proposed an improved PSO algorithm with adaptive inertial weight based on Bayesian techniques to balance the exploitation and exploration capabilities of populations. Ratnaweera [23] proposed a linear adjustment method for the learning factors: in the early stages of the iteration, particle flight is mainly guided by the historical information of the particle itself, whereas in the later stages it is mainly guided by the social information shared between the particle and the global optimal particle. However, this method still has defects. The best value found in the initial global search may lie near a local optimum, and convergence is then limited to that region rather than being global, thereby causing the PSO algorithm to fall into local extrema. Chen and Ke [24] proposed a chaotic dynamic weight (CDW) PSO (CDWPSO) algorithm, in which chaotic maps and dynamic weights modify the search process. Although CDWPSO shows improved search performance relative to other natural heuristic optimization algorithms, it also easily falls into local optima. Chen [25] proposed a dynamic multiswarm differential learning PSO (DMSDL-PSO) algorithm, in which the differential evolution method is applied to each subgroup combined with a differential mutation method to conduct a global search, and a quasi-Newton method is applied for local search. The DMSDL-PSO algorithm has good exploration and exploitation capabilities.
Jiang [26] proposed a new binary hybrid PSO with wavelet mutation (HPSOWM), in which the motion mechanism and mutation process of particles are converted into binary elements and the problem is transformed from a continuous-space problem into a discrete-domain one. Although the convergence of the HPSOWM algorithm is stable and robust, its convergence rate is lower than those of other intelligent optimization algorithms. To solve dynamic multiobjective optimization problems with rapid environmental change, a study proposed a cooperative multiswarm PSO for dynamic multiobjective optimization (CMPSODMO) [27]. In comparison with other dynamic multiobjective optimization algorithms, CMPSODMO shows a better effect in addressing uncertain rapid environmental changes. Ye [28] proposed a multiswarm PSO algorithm with dynamic learning strategies, in which the population is divided into ordinary and communication particles. The dynamic communication information of the communication particles is applied in the algorithm to maintain particle population diversity. Although this method improves the capability of the algorithm to handle complex multimodal functions, it increases the computational complexity of the algorithm. Cui [29] proposed a globally optimal prediction-based adaptive mutation PSO (GPAM-PSO) to avoid the local optimum problem of traditional PSO algorithms. However, GPAM-PSO is limited to the dimensionality reduction of nonzero-mean data. Zhang [30] proposed a vector covariance PSO algorithm that randomly divides all the dimensions of a particle into several parts and optimizes each part to enhance the global and local search capabilities. However, the algorithm still falls into local extrema.
PSO has attracted considerable research attention due to its easy implementation, few parameter adjustments, and adaptability. Scholars have used PSO to solve engineering optimization problems, and it has gradually penetrated various fields of application, such as parameter optimization, path planning, predictive control, and global optimization. Zhao [31] used PSO to optimize wavelet neural network parameters and reduce the limitations of the assessment of network security situational awareness, thereby meeting the requirements of network security in a big-data environment. The parameter-related coefficients in a nonlinear regression analysis model were optimized by combining PSO with a genetic algorithm [32] to reduce the vibrations caused by mine blasting that damage the structures around the blasting area. A diffusion PSO algorithm was used to estimate the parameters of an infinite impulse response system and improve the energy utilization of a wireless sensor network [33]. Wang [34] used a multiobjective PSO algorithm to solve a path-planning problem of mobile robots in a static rough-terrain environment. Wang [35] combined PSO with chaos optimization theory to establish a mathematical model of a path-planning problem in the radioactive environment of nuclear facilities to ensure personnel safety. Lopes [36] proposed a novel particle swarm-based heuristic technique to allocate electrical loads in an industrial setting throughout the day. Multiobjective PSO was used to solve the problem of service allocation in cloud computing [37]. Petrovic [38] proposed a chaos PSO to achieve an optimal process dispatching plan. Zhang [39] proposed an adaptive PSO to solve problems in reservoir operation optimization with complex and dynamic nonlinear constraints.
An improved PSO algorithm was used for the time-series prediction of a grey model [40]. The algorithm reduces the average relative error between the recovered and measured values of the model to avoid the problems caused by the optimization of background values. Gulcu [41] used PSO to establish a power demand forecasting model.
In view of the aforementioned methods, an improved PSO (IEPSO) algorithm is proposed in the present work. In the IEPSO, the last-eliminated principle is used to update the population and maintain particle population diversity, and the global search capability is improved by adding a local-global information sharing term. Multiple groups of test functions are used to compare the IEPSO with a classical optimization algorithm and its improved versions, thereby testing and verifying the global optimization performance of the IEPSO algorithm.
2. IEPSO
2.1. Standard PSO
The initial population of the PSO algorithm is randomized. The PSO updates the position and speed of the particle swarm by adaptive learning, as shown in the following formulas:

v_{id}^{t+1} = ωv_{id}^{t} + C_{1}R_{1}(P_{id}^{t} − x_{id}^{t}) + C_{2}R_{2}(P_{gd}^{t} − x_{id}^{t}),
x_{id}^{t+1} = x_{id}^{t} + v_{id}^{t+1}, (1)

where ω is the inertial weight, C_{1} and C_{2} are the acceleration terms, R_{1} and R_{2} are random variables uniformly distributed in the range (0, 1), P_{gd}^{t} is the global best position, P_{id}^{t} is the best position found by the particle in its own history, x_{id}^{t} is the particle position in the current iteration, and v_{id}^{t+1} is the particle update speed at the next iteration.
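As a sketch, the standard PSO velocity and position update described above can be written as follows. This is a minimal NumPy illustration; the swarm size, search bounds, and parameter values are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, p_best, g_best, w=0.9, c1=2.0, c2=2.0):
    """One standard PSO update: inertia + self-cognition + social term."""
    n, d = x.shape
    r1 = rng.random((n, d))  # R1, uniform in (0, 1)
    r2 = rng.random((n, d))  # R2, uniform in (0, 1)
    v_new = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    x_new = x + v_new
    return x_new, v_new

# Illustrative use: 20 particles in 2 dimensions.
x = rng.uniform(-5.0, 5.0, (20, 2))
v = np.zeros_like(x)
x, v = pso_step(x, v, p_best=x.copy(), g_best=x[0])
```

In a full optimizer, `p_best` and `g_best` would be refreshed from the objective values after every step.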
2.2. IEPSO
The IEPSO algorithm is mainly based on the lasteliminated principle and enhances the localglobal information sharing capability to improve its global optimization performance. The specific implementation of the IEPSO algorithm is shown in Figure 1.
The position and velocity of particles in a population are randomly initialized, and the fitness value of the particles is calculated. Information on the current individual and global optimal particles, including their positions and fitness values, is saved. Then, the particle swarm operation is conducted. In the IEPSO algorithm, Formula (2) is used to update the speed to balance the exploration and exploitation capabilities of the particles in the global optimization process. Formula (3) is the local-global information sharing term:

v_{id}^{t+1} = ωv_{id}^{t} + C_{1}R_{1}(P_{id}^{t} − x_{id}^{t}) + C_{2}R_{2}(P_{gd}^{t} − x_{id}^{t}) + φ_{3}, (2)

φ_{3} = C_{3}R_{3}(P_{gd}^{t} − P_{id}^{t}), (3)

where C_{3} is the acceleration term of the local-global information sharing term and R_{3} is a random variable uniformly distributed in the range (0, 1).
Formula (2) comprises four parts, namely, the inheritance of the previous speed, particle self-cognition, local information sharing, and "local-global information sharing."
The IEPSO algorithm is not limited to one-way communication between global and individual particles. The local-global information sharing term (φ_{3}) adds information exchange between the local optimal and global optimal particles obtained in the current iteration, and the population velocity is updated by Formula (2). In the early stage of the algorithm, the entire search space is searched at a relatively high speed to determine the approximate range of the optimal solution, which benefits the global search. In the later stage, the search space of most particles is gradually reduced and concentrated in the neighborhood of the optimal value for deep search, which benefits the local search.
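A sketch of the IEPSO speed update of Formula (2) follows, assuming the sharing term takes the form φ_{3} = C_{3}·R_{3}·(g_best − p_best) as described above; the parameter values in the signature are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def iepso_velocity(x, v, p_best, g_best, w, c1, c2, c3):
    """IEPSO speed update (Formula (2)): the three standard PSO terms plus
    the local-global information sharing term (Formula (3))."""
    n, d = x.shape
    r1, r2, r3 = (rng.random((n, d)) for _ in range(3))
    phi3 = c3 * r3 * (g_best - p_best)       # local-global information sharing
    return (w * v                            # inheritance of previous speed
            + c1 * r1 * (p_best - x)         # particle self-cognition
            + c2 * r2 * (g_best - x)         # local information sharing
            + phi3)
```

With C_{3} = 0 the update reduces to the standard PSO velocity rule, which makes the added term easy to ablate in experiments.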
The particles that do not exceed the predetermined range after the speed update retain their updated speed; the maximum velocity is assigned to any particle that exceeds the predetermined range after the speed update. Similarly, particles that do not exceed the predetermined range after the position update retain their updated positions. When particles move beyond the predetermined range, the inferior particles are eliminated, and new particles are added to the population within the predetermined range, thereby forming a new population. The fitness value of the new population is recalculated, and the individual and global optimal positions and fitness values obtained in the current iteration are preserved.

In all such algorithms, particles have good global search capability at the beginning of the iteration, but as individual particles move closer to the local optimal particle, the algorithms gradually lose particle diversity. On the basis of the population variation idea of the traditional genetic algorithm (GA), the last-eliminated principle is applied in the IEPSO algorithm to maintain particle population diversity. When the PSO satisfies the local convergence condition, the optimal value obtained at this time may be only a local optimal value. Particle population diversity is maintained by using the particle fitness function as the evaluation criterion, thereby eliminating particles with poor fitness or high similarity. New particles are added to the new population within the predetermined range, and the particle swarm operations are re-executed. If the current iteration reaches the predefined maximum number or the required convergence accuracy is achieved, the iteration is stopped and the optimal solution is output. The complexity and runtime of the algorithm increase because of the added local-global information sharing term and the last-eliminated principle.
Nevertheless, experimental results show that the improved method can enhance the accuracy of the algorithm.
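The elimination step described above can be sketched as follows. The elimination fraction, the search bounds, and the worst-by-fitness criterion (for a minimization problem) are illustrative assumptions; the paper's similarity-based criterion is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(2)

def eliminate_worst(x, v, fitness, frac=0.2, lo=-5.0, hi=5.0):
    """Last-eliminated principle (sketch): replace the worst fraction of the
    swarm with fresh random particles to restore population diversity."""
    n, d = x.shape
    k = max(1, int(frac * n))
    worst = np.argsort(fitness)[-k:]         # minimization: largest value is worst
    x[worst] = rng.uniform(lo, hi, (k, d))   # re-seed positions in the search range
    v[worst] = 0.0                           # restart the velocities of new particles
    return x, v
```

After this step, the fitness of the new population is recalculated and the individual and global bests are updated as in the main loop.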
3. Experimental Study
Eleven test functions are adopted in this study to test the performance of the proposed IEPSO. In this test, f_{1}–f_{5} are unimodal functions, whereas f_{6}–f_{11} are multimodal functions. f_{6} (Griewank) is a multimodal function with multiple local extrema, in which achieving the theoretical global optimum is difficult. f_{7} (Rastrigin) possesses several local minima, which makes finding the global optimal value difficult. f_{10} (Ackley) has a nearly flat outer region modulated by a cosine wave that forms holes and peaks; the surface is uneven, and falling into a local optimum during optimization is easy. f_{11} (Cmfun) possesses multiple local extrema around the global extremum point, and falling into a local optimum is easy. Table 1 presents the 11 test functions, where D is the space dimension, S is the search range, and CF is the theoretical optimal value.
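For reference, three of the multimodal benchmarks named above can be written in their standard textbook forms (these are the conventional definitions, not copied from Table 1):

```python
import numpy as np

def griewank(x):
    """Griewank: many regularly spaced local extrema; global minimum 0 at the origin."""
    i = np.arange(1, x.size + 1)
    return 1.0 + np.sum(x**2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i)))

def rastrigin(x):
    """Rastrigin: a grid of local minima; global minimum 0 at the origin."""
    return np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

def ackley(x):
    """Ackley: nearly flat outer region modulated by a cosine wave;
    global minimum 0 at the origin."""
    d = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / d))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / d) + 20.0 + np.e)
```

All three attain their global minimum of 0 at the origin, which is why they are convenient checks of an optimizer's ability to escape local extrema.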

3.1. Parameter Influence Analysis of LocalGlobal Information Sharing Term
This study proposes the addition of a local-global information sharing term, which involves the parameter C_{3}. Therefore, the manner in which C_{3} is selected is explored by using the 11 test functions.
(1) When C_{3} takes a constant value, the constant 2 is selected.
(2) The linear variation formula of C_{3} is as follows:

C_{3} = C_{3}_start − k(C_{3}_start − C_{3}_end)(t/t_{max}), (4)

where k is the control factor. When k = 1, C_{3} is a linearly decreasing function; when k = −1, C_{3} is a linearly increasing function. C_{3}_start and C_{3}_end are the initial and termination values of C_{3}, respectively. t is the iteration number, and t_{max} is the maximum number of iterations.
Tables 2 and 3 and Figure 2 show the results for the three cases in which C_{3} is a constant, linearly decreasing, and linearly increasing. When the parameter C_{3} of the local-global information sharing term is a linearly decreasing function, the average fitness value of the test functions is optimal, and the convergence speed and the capability to jump out of local extrema are higher than those in the other two cases. When C_{3} takes a constant value, the algorithm cannot balance the global and local search, resulting in a "premature" phenomenon. When C_{3} adopts the linearly decreasing form, the entire area can be searched quickly at an early stage, and close attention is paid to local search in the later part of the iteration to enhance the deep search capability of the algorithm. When C_{3} adopts a linearly increasing form, the algorithm focuses on global-local information exchange in the later stage of the iteration. Although this condition can increase the deep search capability of the algorithm, it causes the convergence speed to stagnate. Therefore, compared with the linearly increasing form, the linearly decreasing form shows a simulation curve that converges faster and with higher precision.


Therefore, the selection rules of the parameter C_{3} of local-global information sharing as a decreasing function are investigated in this study. The nonlinear variation formula of C_{3} is as follows:

C_{3} = C_{3}_end + (C_{3}_start − C_{3}_end)(1 − t/t_{max})^{k}, (5)

where C_{3}_start and C_{3}_end are the initial and termination values of the acceleration term C_{3}, respectively, and k is the control factor. When k = 0.2, C_{3} is a convex decreasing function; when k = 2, C_{3} is a concave decreasing function. t is the iteration number, and t_{max} is the maximum number of iterations.
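A sketch of the nonlinear C_{3} schedule under the same illustrative endpoint assumptions as before (C_{3}_start = 2.0, C_{3}_end = 0.5):

```python
def c3_nonlinear(t, t_max, k, c3_start=2.0, c3_end=0.5):
    """Nonlinearly decreasing C3. Following the text's naming: k = 0.2 gives
    the 'convex' curve (slow early decline that steepens late), while k = 2
    gives the 'concave' curve (fast early decline that flattens late)."""
    return c3_end + (c3_start - c3_end) * (1.0 - t / t_max) ** k
```

At mid-run the k = 0.2 curve is still well above the k = 2 curve, which is the behavior the comparison below attributes to the convex form: a wider early search followed by an aggressive late focus.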
Table 4 shows that when C_{3} is a convex function, the precision and robustness of the algorithm can obtain satisfactory results on f_{1}–f_{5}. Table 5 shows that when C_{3} is a convex function, the algorithm obtains a satisfactory solution and shows a fast convergence rate on f_{6}, f_{8}, f_{9}, f_{10}, and f_{11}. In the unimodal test function, the IEPSO algorithm does not show its advantages because of its strong deep search capability. In the complex multimodal test function, when the convex function is used in C_{3}, the downward trend is slow in the early stage, thus benefiting the global search, and the downward speed increases in the later stage, thus benefiting the local search. When the concave function is used for C_{3}, the descent speed is fast in the early stage. Although the search speed is improved, the coverage area of the search is reduced, thereby leading to the convergence of the algorithm to the nonoptimal value. From the simulation diagrams (f)–(k), the convergence speed is observed to be slightly slow when C_{3} is a convex function, but its ability to jump out of the local extremum and the accuracy of the global search are higher than those in the other two cases. When C_{3} is a concave function, the convergence speed is faster than those in the other two cases, and the search accuracy is lower than that when C_{3} is a convex function.


3.2. Comparison of Test Results
The 11 test functions in Table 1 are used to compare the IEPSO algorithm with the classical PSO, SPSO, differential evolution (DE), and GA. The DE, GA, and PSO algorithms are all stochastic intelligent optimization algorithms with population iterations. The evaluation criteria of algorithm performance include the speed of convergence and the size of the individual population search coverage. The DE algorithm has low space complexity and obvious advantages in dealing with large-scale and complex optimization problems. The GA has good convergence when solving discrete, multipeak, and noise-containing optimization problems. Based on the traditional PSO algorithm, the SPSO achieves a balance between global search and local search by adjusting the inertial weight (Figures 3 and 4).
The experimental parameters of the five algorithms are set as shown in Table 6. Each test function is run independently 10 times, and the average is recorded to reduce data error. The iteration is stopped when the required convergence accuracy is reached. The best average fitness value among the five algorithms is shown in bold. The standard deviation, average fitness, and optimal value of each algorithm are shown in Tables 7 and 8; Figures 5 and 6 plot the convergence curves of the 11 test functions.



Table 7 shows that the IEPSO has the best performance on f_{1}, f_{2}, f_{3}, and f_{4}. The IEPSO algorithm obtains the theoretical optimal value on f_{2}, and DE can find the global solution on f_{5}. The deep search capability of the IEPSO algorithm is considerably higher than that of the PSO and SPSO algorithms owing to the added global-local information sharing term and the last-eliminated principle. The crossover, mutation, and selection mechanisms make the DE algorithm perform well in the early stage of the global search. However, the diversity of the population declines in the later stage because of population differences. The simulation diagrams (a)–(e) show that although the DE algorithm converges rapidly in the early stage, its global search performance in the later stage becomes lower than that of the IEPSO algorithm. When the GA is used to solve optimization problems, the individuals in the population fall into a local optimum and do not continue searching for the optimal solution. Therefore, in Figure 5, the simulation curve of the GA converges to the local optimum.
The test results in Table 8 indicate that the IEPSO has the best performance on f_{6}, f_{7}, f_{8}, f_{9}, f_{10}, and f_{11} and that the DE and GA can obtain the theoretical optimal value on f_{9} and f_{11}. Although both the GA and the IEPSO algorithm can obtain the global optimal value on f_{9}, the IEPSO algorithm is more robust than the GA. As shown in the simulation curves of Figure 6, population diversity is maintained because the supplementary particles added to the population are stochastic as the swarm gradually converges toward a local optimal solution. The IEPSO algorithm can jump out of local extremum points when faced with complex multimodal test functions, and the number of iterations required is correspondingly reduced.
Table 9 shows the test results for the three improved PSO algorithms. The DMSDLPSO algorithm in [25] is a PSO algorithm combined with differential variation and the quasiNewton method, whereas the HPSOWM algorithm in [26] is a binary PSO algorithm based on wavelet transform. Table 9 shows that the IEPSO algorithm obtains the best value in 5 out of the 11 test functions, and the above analysis indicates that the IEPSO outperforms the other improved PSO algorithms.

4. Conclusion
In contemporary engineering design, solving the global optimization problems of multiparameter, strongly coupled, and nonlinear systems with conventional optimization algorithms is difficult. In this study, an improved PSO, that is, the IEPSO algorithm, is proposed on the basis of the last-eliminated principle and an enhanced local-global information sharing capability. The comparison and analysis of the simulation results indicate the following conclusions:
(1) The exchange of information between the global and local optimal particles enhances the deep search capability of the IEPSO algorithm.
(2) The standard test functions are used to study the parameter C_{3} of the local-global information sharing term. The results show that the global optimization capability of the IEPSO algorithm is strong when C_{3} is linearly decreasing. Moreover, the proposed algorithm shows the best search performance when C_{3} is a nonlinear convex function.
(3) The last-eliminated principle is used in the IEPSO to maintain particle population diversity and prevent the algorithm from stagnating at a local optimal value. A comparison of the IEPSO algorithm with the classical optimization algorithms and their improved versions verifies the global search capability of the IEPSO algorithm.
In summary, the comparative results of the simulation analysis reveal that, with the application of the last-eliminated principle and the local-global information sharing term in the IEPSO, the proposed algorithm effectively overcomes the disadvantages of the classical algorithms, namely, their premature convergence and tendency to fall into local optima. The IEPSO shows ideal global optimization performance and high application value for solving practical engineering optimization problems.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that there are no conflicts of interest.
Acknowledgments
This work was supported by the Shanghai Rising-Star Program (no. 16QB1401000), the Key Project of the Shanghai Science and Technology Committee (no. 16DZ1120400), the National Natural Science Foundation of China (Project no. 51705187), and the Postdoctoral Science Foundation of China (Grant no. 2017M621202).
References
Z. Zhou, J. Wang, Z. Zhu, D. Yang, and J. Wu, "Tangent navigated robot path planning strategy using particle swarm optimized artificial potential field," Optik, vol. 158, pp. 639–651, 2018.
P. Du, R. Barrio, H. Jiang, and L. Cheng, "Accurate quotient-difference algorithm: error analysis, improvements and applications," Applied Mathematics and Computation, vol. 309, pp. 245–271, 2017.
L. Jiang, Z. Wang, Y. Ye, and J. Jiang, "Fast circle detection algorithm based on sampling from difference area," Optik, vol. 158, pp. 424–433, 2018.
H. Garg, "A hybrid PSO-GA algorithm for constrained optimization problems," Applied Mathematics and Computation, vol. 274, no. 11, pp. 292–305, 2016.
J. Zhang and P. Xia, "An improved PSO algorithm for parameter identification of nonlinear dynamic hysteretic models," Journal of Sound and Vibration, vol. 389, pp. 153–167, 2017.
R. Saini, P. P. Roy, and D. P. Dogra, "A segmental HMM based trajectory classification using genetic algorithm," Expert Systems with Applications, vol. 93, pp. 169–181, 2018.
P. R. D. O. D. Costa, S. Mauceri, P. Carroll et al., "A genetic algorithm for a vehicle routing problem," Electronic Notes in Discrete Mathematics, vol. 64, pp. 65–74, 2017.
V. Jindal and P. Bedi, "An improved hybrid ant particle optimization (IHAPO) algorithm for reducing travel time in VANETs," Applied Soft Computing, vol. 64, pp. 526–535, 2018.
Z. Peng, H. Manier, and M. A. Manier, "Particle swarm optimization for capacitated location-routing problem," IFAC-PapersOnLine, vol. 50, no. 1, pp. 14668–14673, 2017.
G. Xu and G. Yu, "Reprint of: on convergence analysis of particle swarm optimization algorithm," Journal of Shanxi Normal University, vol. 4, no. 14, pp. 25–32, 2008.
J. Lu, W. Xie, and H. Zhou, "Combined fitness function based particle swarm optimization algorithm for system identification," Computers & Industrial Engineering, vol. 95, pp. 122–134, 2016.
F. Javidrad and M. Nazari, "A new hybrid particle swarm and simulated annealing stochastic optimization method," Applied Soft Computing, vol. 60, pp. 634–654, 2017.
J. Jie, J. Zhang, H. Zheng, and B. Hou, "Formalized model and analysis of mixed swarm based cooperative particle swarm optimization," Neurocomputing, vol. 174, pp. 542–552, 2016.
A. Meng, Z. Li, H. Yin, S. Chen, and Z. Guo, "Accelerating particle swarm optimization using crisscross search," Information Sciences, vol. 329, pp. 52–72, 2016.
L. Wang, B. Yang, and J. Orchard, "Particle swarm optimization using dynamic tournament topology," Applied Soft Computing, vol. 48, pp. 584–596, 2016.
M. S. Kiran, "Particle swarm optimization with a new update mechanism," Applied Soft Computing, vol. 60, pp. 670–678, 2017.
H. C. Tsai, "Unified particle swarm delivers high efficiency to particle swarm optimization," Applied Soft Computing, vol. 55, pp. 371–383, 2017.
S. F. Li and C. Y. Cheng, "Particle swarm optimization with fitness adjustment parameters," Computers & Industrial Engineering, vol. 113, pp. 831–841, 2017.
Y. Chen, L. Li, H. Peng, J. Xiao, Y. Yang, and Y. Shi, "Particle swarm optimizer with two differential mutation," Applied Soft Computing, vol. 61, pp. 314–330, 2017.
Q. Zhang, W. Liu, X. Meng, B. Yang, and A. V. Vasilakos, "Vector coevolving particle swarm optimization algorithm," Information Sciences, vol. 394, pp. 273–298, 2017.
Y. Shi and R. C. Eberhart, "Empirical study of particle swarm optimization," in Proceedings of the 1999 Congress on Evolutionary Computation (CEC 99), vol. 3, pp. 1945–1950, IEEE, Washington, DC, USA, 1999.
Z. Wang and J. Cai, "The path-planning in radioactive environment of nuclear facilities using an improved particle swarm optimization algorithm," Nuclear Engineering and Design, vol. 326, pp. 79–86, 2018.
A. Ratnaweera, S. K. Halgamuge, and H. C. Watson, "Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 240–255, 2004.
K. Chen, F. Zhou, and A. Liu, "Chaotic dynamic weight particle swarm optimization for numerical function optimization," Knowledge-Based Systems, vol. 139, pp. 23–40, 2018.
Y. Chen, L. Li, H. Peng, J. Xiao, and Q. Wu, "Dynamic multiswarm differential learning particle swarm optimizer," Swarm and Evolutionary Computation, vol. 39, pp. 209–221, 2018.
F. Jiang, H. Xia, Q. A. Tran, Q. M. Ha, N. Q. Tran, and J. Hu, "A new binary hybrid particle swarm optimization with wavelet mutation," Knowledge-Based Systems, vol. 130, pp. 90–101, 2017.
R. Liu, J. Li, C. Mu, J. Fan, and L. Jiao, "A coevolutionary technique based on multiswarm particle swarm optimization for dynamic multiobjective optimization," European Journal of Operational Research, vol. 261, no. 3, pp. 1028–1051, 2017.
W. Ye, W. Feng, and S. Fan, "A novel multiswarm particle swarm optimization with dynamic learning strategy," Applied Soft Computing, vol. 61, pp. 832–843, 2017.
L. Zhang, Y. Tang, C. Hua, and X. Guan, "A new particle swarm optimization algorithm with adaptive inertia weight based on Bayesian techniques," Applied Soft Computing, vol. 28, pp. 138–149, 2015.
Q. Cui, Q. Li, G. Li et al., "Globally-optimal prediction-based adaptive mutation particle swarm optimization," Information Sciences, vol. 418, pp. 186–217, 2017.
D. Zhao and J. Liu, "Study on network security situation awareness based on particle swarm optimization algorithm," Computers & Industrial Engineering, vol. 125, pp. 764–775, 2018.
H. Samareh, S. H. Khoshrou, K. Shahriar, M. M. Ebadzadeh, and M. Eslami, "Optimization of a nonlinear model for predicting the ground vibration using the combinational particle swarm optimization-genetic algorithm," Journal of African Earth Sciences, vol. 133, pp. 36–45, 2017.
M. Dash, T. Panigrahi, and R. Sharma, "Distributed parameter estimation of IIR system using diffusion particle swarm optimization algorithm," Journal of King Saud University - Engineering Sciences, 2017, in press.
B. Wang, S. Li, J. Guo, and Q. Chen, "Car-like mobile robot path planning in rough terrain using multi-objective particle swarm optimization algorithm," Neurocomputing, vol. 282, pp. 42–51, 2018.
Z. Wang and J. Cai, "The path-planning in radioactive environment of nuclear facilities using an improved particle swarm optimization algorithm," Nuclear Engineering and Design, vol. 326, pp. 79–86, 2018.
R. F. Lopes, F. F. Costa, A. Oliveira et al., "Algorithm based on particle swarm applied to electrical load scheduling in an industrial setting," Energy, vol. 147, pp. 1007–1015, 2018.
F. Sheikholeslami and N. J. Navimipour, "Service allocation in the cloud environments using multi-objective particle swarm optimization algorithm based on crowding distance," Swarm and Evolutionary Computation, vol. 35, pp. 53–64, 2017.
M. Petrović, N. Vuković, M. Mitić et al., "Integration of process planning and scheduling using chaotic particle swarm optimization algorithm," Expert Systems with Applications, vol. 64, pp. 569–588, 2016.
Z. Zhang, Y. Jiang, S. Zhang, S. Geng, H. Wang, and G. Sang, "An adaptive particle swarm optimization algorithm for reservoir operation optimization," Applied Soft Computing, vol. 18, no. 4, pp. 167–177, 2014.
K. Li, L. Liu, J. Zhai, T. M. Khoshgoftaar, and T. Li, "The improved grey model based on particle swarm optimization algorithm for time series prediction," Engineering Applications of Artificial Intelligence, vol. 55, pp. 285–291, 2016.
S. Gulcu and H. Kodaz, "The estimation of the electricity energy demand using particle swarm optimization algorithm: a case study of Turkey," Procedia Computer Science, vol. 111, pp. 64–70, 2017.
Copyright
Copyright © 2018 Xueying Lv et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.