Abstract

In this study, an improved eliminate particle swarm optimization (IEPSO) algorithm is proposed on the basis of the last-eliminated principle to solve optimization problems in engineering design. During optimization, the IEPSO enhances information communication among populations and maintains population diversity to overcome the limitations of classical optimization algorithms in solving multiparameter, strongly coupled, and nonlinear engineering optimization problems. These limitations include premature convergence and the tendency to fall easily into local optima. The parameters involved in the introduced “local-global information sharing” term are analyzed, and the principle of parameter selection for optimal performance is determined. The performances of the IEPSO and classical optimization algorithms are then tested on multiple sets of classical functions to verify the global search performance of the IEPSO. Finally, the simulation results of the IEPSO are compared with those of improved versions of the classical optimization algorithms to verify its superior performance.

1. Introduction

The development of industrial society has led to the successful application of optimal design methods to diverse engineering practices, such as path planning, structural design, control theory, and control engineering [1–10]. In 1995, the foraging behavior of bird flocks inspired Kennedy and Eberhart to propose the particle swarm optimization (PSO) algorithm. PSO requires few parameter adjustments and is easy to implement; hence, it is the most commonly used swarm intelligence algorithm [11–20]. However, in practical applications, most problems are complicated design problems with multiple parameters, strong coupling, and nonlinearity. Therefore, improving the global optimization capability of an optimization algorithm is important in solving complex engineering optimization problems. To improve the capability of traditional PSO, many scholars have proposed improvement strategies, including the adjustment of parameters and combinations of various mechanisms.

Shi and Eberhart [21] proposed an inertial weight improvement strategy (SPSO) with strong global search capability at the beginning of the iteration process, strong local search capability in the latter stage, and fine search near the optimal solution. Although the SPSO improves the convergence speed of the algorithm, the “premature” phenomenon remains. Zhang [22] proposed an improved PSO algorithm with adaptive inertial weight based on Bayesian techniques to balance the exploitation and exploration capabilities of populations. Ratnaweera [23] proposed a linear adjustment method for the learning factors, in which particle flight is mainly based on the historical information of the particle itself in the early stage of the iteration and on the social information shared between the particle and the globally optimal particle in the latter stage. However, this method still has defects: the global best found in the early search stage may lie near a local optimum, and convergence is then limited to that region rather than the whole search space, thereby causing the PSO algorithm to fall into local extrema. Chen and Ke [24] proposed a chaotic dynamic weight PSO (CDW-PSO) algorithm, in which chaotic maps and dynamic weights modify the search process. Although CDW-PSO shows improved search performance relative to other natural heuristic optimization algorithms, it also falls easily into local optima. Chen [25] proposed a dynamic multiswarm differential learning PSO (DMSDL-PSO) algorithm, in which the differential evolution method combined with a differential mutation method is applied to each subgroup to conduct a global search, and a quasi-Newton method is applied for local search. The DMSDL-PSO algorithm has good exploration and exploitation capabilities. Jiang [26] proposed a new binary hybrid PSO with wavelet mutation (HPSOWM), in which the motion mechanism and mutation process of particles are converted into binary elements and the problem is transformed from a continuous space problem into a discrete domain one. Although the convergence of the HPSOWM algorithm is stable and robust, its convergence rate is lower than those of other intelligent optimization algorithms. To solve dynamic multiobjective optimization problems with rapid environmental changes, a study proposed a cooperative multiswarm PSO for dynamic multiobjective optimization (CMPSODMO) [27]. In comparison with other dynamic multiobjective optimization algorithms, CMPSODMO performs better in addressing uncertain, rapid environmental changes. Ye [28] proposed a multiswarm PSO algorithm with dynamic learning strategies, in which the population is divided into ordinary and communication particles, and the dynamic communication information of the communication particles is applied to maintain particle population diversity. Although this method improves the capability of the algorithm to handle complex multimodal functions, it increases the computational complexity of the algorithm. Cui [29] proposed a globally optimal prediction-based adaptive mutation PSO (GPAM-PSO) to avoid the local optimum problem of traditional PSO algorithms; however, GPAM-PSO is limited to the dimensionality reduction of nonzero-mean data. Zhang [30] proposed a vector covariance PSO algorithm that randomly divides all the dimensions of a particle into several parts and optimizes each part to enhance the global and local search capabilities. However, the algorithm still tends to fall into local extrema.

PSO has attracted considerable research attention due to its easy implementation, few parameter adjustments, and adaptability, and it has gradually penetrated various fields of application, such as parameter optimization, path planning, predictive control, and global optimization. Zhao [31] used PSO to optimize wavelet neural network parameters and reduce the limitations of network security situational awareness assessment, thereby meeting the requirements of network security in a big data environment. The parameter-related coefficients of a nonlinear regression analysis model were optimized by combining PSO with a genetic algorithm [32] to reduce the vibrations caused by mine blasting that damage the structures around the blasting area. A diffusion-free PSO algorithm was derived to estimate the parameters of an infinite impulse response system and improve the energy utilization of a wireless sensor network [33]. Wang [34] used a multiobjective PSO algorithm to solve the path-planning problem of mobile robots in a static rough terrain environment. Wang [35] combined PSO with chaos optimization theory to establish a mathematical model of the path-planning problem in the radioactive environment of nuclear facilities to ensure personnel safety. Lopes [36] proposed a novel particle swarm-based heuristic technique to allocate electrical loads in an industrial setting throughout the day. Multiobjective PSO was used to solve the problem of service allocation in cloud computing [37]. Petrovic [38] proposed a chaos PSO to achieve an optimal process dispatching plan. Zhang [39] proposed an adaptive PSO to solve reservoir operation optimization problems with complex and dynamic nonlinear constraints.

An improved PSO algorithm was used for the time-series prediction of a grey model [40]. The algorithm reduces the average relative error between the restored and measured values of the model and thus avoids the problems caused by the optimization of background values. Gulcu [41] used PSO to establish a power demand forecasting model.

In view of the aforementioned methods, an improved PSO (IEPSO) algorithm is proposed in the present work. In the IEPSO, the last-eliminated principle is used to update the population and maintain particle population diversity, and the global search capability is improved by adding a local-global information sharing term. Multiple groups of test functions are used to compare the IEPSO with a classical optimization algorithm and its improved versions, thereby testing and verifying the global optimization performance of the IEPSO algorithm.

2. IEPSO

2.1. Standard PSO

The initial population of the PSO algorithm is randomized. The standard PSO updates the position and speed of the particle swarm by adaptive learning, as shown in the following formulas:

v_id^(t+1) = ω v_id^t + C1 R1 (P_id^t − x_id^t) + C2 R2 (P_gd^t − x_id^t),
x_id^(t+1) = x_id^t + v_id^(t+1),    (1)

where ω is the inertial weight, C1 and C2 are the acceleration terms, R1 and R2 are random variables uniformly distributed in the range (0, 1), P_gd^t is the global best position, P_id^t is the best position found by the particle itself in its history, x_id^t is the particle position in the current iteration, and v_id^(t+1) is the particle update speed at the next iteration.
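
As a concrete illustration, Formula (1) can be written as the following minimal Python sketch; the vectorized NumPy form and the default parameter values are illustrative assumptions, not the experimental settings used later:

```python
import numpy as np

def pso_step(x, v, p_best, g_best, w=0.7, c1=2.0, c2=2.0):
    """One velocity/position update of standard PSO (Formula (1)).

    x, v      : (n_particles, dim) current positions and velocities
    p_best    : (n_particles, dim) best position found by each particle
    g_best    : (dim,) best position found by the whole swarm
    w, c1, c2 : inertia weight and acceleration terms (assumed defaults)
    """
    n, d = x.shape
    r1, r2 = np.random.rand(2, n, d)  # R1, R2 ~ U(0, 1)
    v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    x = x + v
    return x, v
```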

2.2. IEPSO

The IEPSO algorithm is mainly based on the last-eliminated principle and enhances the local-global information sharing capability to improve its global optimization performance. The specific implementation of the IEPSO algorithm is shown in Figure 1.

The position and velocity of particles in a population are randomly initialized, and the fitness value of the particles is calculated. Information on the current individual and global optimal particles, including their positions and fitness values, is saved. Then, the particle swarm operation is conducted. In the IEPSO algorithm, Formula (2) is used to update the speed to balance the exploration and exploitation capabilities of the particles in the global optimization process, and Formula (3) is the local-global information sharing term:

v_id^(t+1) = ω v_id^t + C1 R1 (P_id^t − x_id^t) + C2 R2 (P_gd^t − x_id^t) + φ3,    (2)

φ3 = C3 R3 (P_gd^t − P_id^t),    (3)

where C3 is the acceleration term of the local-global information sharing term and R3 is a random variable uniformly distributed in the range (0, 1).

Formula (2) comprises four parts, namely, the inheritance of the previous speed, particle self-cognition, local information sharing, and “local-global information sharing.”

The IEPSO algorithm is not limited to one-way communication between the global and individual particles. The local-global information sharing term (φ3) adds information exchange between the local optimal and global optimal particles obtained in the current iteration, and the population velocity is updated by Formula (2). In the early stage of the algorithm, the entire search space is searched at a relatively high speed to determine the approximate range of the optimal solution, which benefits the global search. In the latter stage, the particle search space is gradually reduced and concentrated in the neighborhood of the optimal value for a deep search, which benefits the local search.
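
A minimal sketch of the update in Formulas (2) and (3), continuing the illustrative NumPy conventions above; the comments label the four parts listed after Formula (2):

```python
import numpy as np

def iepso_velocity(x, v, p_best, g_best, w, c1, c2, c3):
    """IEPSO velocity update, Formulas (2) and (3)."""
    n, d = x.shape
    r1, r2, r3 = np.random.rand(3, n, d)  # R1, R2, R3 ~ U(0, 1)
    phi3 = c3 * r3 * (g_best - p_best)    # Formula (3): local-global information sharing
    return (w * v                         # inheritance of the previous speed
            + c1 * r1 * (p_best - x)      # particle self-cognition
            + c2 * r2 * (g_best - x)      # local information sharing
            + phi3)                       # local-global information sharing
```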

The particles that do not exceed the predetermined range after the speed update retain their updated speed; a particle whose speed goes beyond the predetermined range is assigned the maximum velocity. Likewise, the particles that do not exceed the predetermined range after the position update retain their updated positions. When particles move beyond the predetermined range, inferior particles are eliminated, and new particles are added to the population within the predetermined range, thereby forming a new population. The fitness value of the new population is recalculated, and the information of the individual particles and the global optimal position and fitness value obtained in the current iteration is preserved.

In all such algorithms, particles have good global search capability at the beginning of the iteration; as individual particles move closer to the locally optimal particle, the algorithms gradually lose particle diversity. On the basis of the population variation idea of the traditional genetic algorithm (GA), the last-eliminated principle is applied in the IEPSO algorithm to maintain particle population diversity. When the PSO satisfies the local convergence condition, the optimal value obtained at this point may be only a local optimum. Particle population diversity is maintained by using the particle fitness function as the evaluation criterion and eliminating particles with poor fitness or high similarity. New particles are added to the population within the predetermined range, and the particle swarm operations are reexecuted, as sketched below. If the current iteration reaches the predefined convergence accuracy or the maximum number of iterations, the iteration stops, and the optimal solution is output. The complexity and runtime of the algorithm increase because of the added local-global information sharing term and the last-eliminated principle. Nevertheless, the experimental results show that the improved method enhances the accuracy of the algorithm.
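
The elimination step can be sketched as follows under stated assumptions: the fraction of particles replaced per trigger, the velocity reset for new particles, and the minimization of the fitness function are illustrative choices, and the similarity-based elimination mentioned above is omitted for brevity:

```python
import numpy as np

def eliminate_and_replenish(x, v, fitness, frac=0.2, bounds=(-10.0, 10.0)):
    """Last-eliminated principle: drop the worst particles and replenish
    the population with new random particles inside the predetermined
    range to maintain population diversity (fitness is minimized)."""
    n, d = x.shape
    k = max(1, int(frac * n))                        # particles to eliminate (assumed fraction)
    worst = np.argsort(fitness)[-k:]                 # indices of the k worst (largest) fitness values
    low, high = bounds
    x[worst] = np.random.uniform(low, high, (k, d))  # new random positions within the range
    v[worst] = 0.0                                   # reset velocities of new particles (assumed)
    return x, v
```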

3. Experimental Study

Eleven test functions are adopted in this study to test the performance of the proposed IEPSO. In this test, f1–f5 are unimodal functions, whereas f6–f11 are multimodal functions. f6 (Griewank) is a multimodal function with multiple local extrema, in which achieving the theoretical global optimum is difficult. f7 (Rastrigin) possesses several local minima, in which finding the global optimal value is difficult. f10 (Ackley) has an almost flat area modulated by a cosine wave to form holes or peaks; the surface is uneven, and entry into a local optimum during optimization is easy. f11 (Cmfun) possesses multiple local extrema around the global extremum point, and falling into a local optimum is easy. Table 1 presents the 11 test functions, where D is the space dimension, S is the search range, and CF is the theoretical optimal value.
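
For reference, the standard forms of three of the named benchmarks can be written as follows; these are the textbook definitions, and the exact dimensions, shifts, and search ranges used in the experiments follow Table 1:

```python
import numpy as np

def griewank(x):   # f6: many regularly spaced local minima, global optimum 0 at x = 0
    i = np.arange(1, x.size + 1)
    return 1 + np.sum(x**2) / 4000 - np.prod(np.cos(x / np.sqrt(i)))

def rastrigin(x):  # f7: highly multimodal, global optimum 0 at x = 0
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def ackley(x):     # f10: nearly flat outer region with a deep hole at x = 0, optimum 0
    d = x.size
    return (-20 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / d))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / d) + 20 + np.e)
```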

3.1. Parameter Influence Analysis of Local-Global Information Sharing Term

This study proposes the addition of a local-global information sharing term, which involves the parameter C3. Therefore, the manner in which C3 is selected is explored by using the 11 test functions, as follows:

(1) When C3 takes a constant value, the constant 2 is selected.

(2) The linear variation formula of C3 is as follows:

C3 = C3_end + k (C3_start − C3_end)(1 − t/t_max),    (4)

where k is the control factor; when k = 1, C3 is a linearly decreasing function, and when k = −1, C3 is a linearly increasing function. C3_start and C3_end are the initial and termination values of C3, respectively, t is the iteration number, and t_max is the maximum number of iterations.
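
A one-line sketch of the linear schedule in Formula (4) as reconstructed above; the endpoint values are illustrative assumptions:

```python
def c3_linear(t, t_max, c3_start=2.0, c3_end=0.5, k=1):
    """Formula (4): k = 1 decreases C3 linearly from c3_start to c3_end;
    k = -1 yields the linearly increasing variant."""
    return c3_end + k * (c3_start - c3_end) * (1.0 - t / t_max)
```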

Tables 2 and 3 and Figure 2 show the results of the three cases in which C3 is a constant, linearly decreasing, and linearly increasing. When the parameter C3 of the local-global information sharing term is a linearly decreasing function, the average fitness value on the test functions is optimal, and the convergence speed and the capability to jump out of local extrema are higher than those in the other two cases. When C3 takes a constant value, the algorithm cannot balance the global and local searches, resulting in a “premature” phenomenon. When C3 adopts the linearly decreasing form, the entire area can be searched quickly at an early stage, and close attention is paid to the local search in the latter part of the iteration to enhance the deep search ability of the algorithm. When C3 adopts the linearly increasing form, the algorithm focuses on global-local information exchange in the latter stage of the iteration; although this condition can increase the deep search ability of the algorithm, it causes the convergence speed to stagnate. Therefore, compared with the linearly increasing form, the linearly decreasing form yields a simulation curve that converges faster and with higher precision.

Therefore, the selection rules for the parameter C3 of the local-global information sharing term in decreasing form are investigated further in this study. The nonlinear variation formula of C3 is as follows:

C3 = C3_end + (C3_start − C3_end)(1 − t/t_max)^k,    (5)

where C3_start and C3_end are the initial and termination values of the acceleration term C3, respectively, and k is the control factor. When k = 0.2, C3 is a convex decreasing function; when k = 2, C3 is a concave decreasing function. t is the iteration number, and t_max is the maximum number of iterations.
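
The corresponding sketch of Formula (5) as reconstructed above; with k = 0.2, the descent is slow early and steep late (the convex case discussed below), whereas k = 2 gives the opposite behavior. The endpoint values are again illustrative assumptions:

```python
def c3_nonlinear(t, t_max, c3_start=2.0, c3_end=0.5, k=0.2):
    """Formula (5): nonlinear decreasing schedule for C3.
    k = 0.2 -> convex form (slow descent early, steep near the end);
    k = 2   -> concave form (steep descent early, flat near the end)."""
    return c3_end + (c3_start - c3_end) * (1.0 - t / t_max) ** k
```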

Table 4 shows that when C3 is a convex function, the precision and robustness of the algorithm achieve satisfactory results on f1–f5. Table 5 shows that when C3 is a convex function, the algorithm obtains a satisfactory solution and shows a fast convergence rate on f6, f8, f9, f10, and f11. On the unimodal test functions, the IEPSO algorithm does not show its advantage of a strong deep search capability. On the complex multimodal test functions, when the convex function is used for C3, the downward trend is slow in the early stage, thus benefiting the global search, and the downward speed increases in the later stage, thus benefiting the local search. When the concave function is used for C3, the descent speed is fast in the early stage; although the search speed is improved, the coverage area of the search is reduced, thereby leading the algorithm to converge to a nonoptimal value. Simulation diagrams (f)–(k) show that the convergence speed is slightly slow when C3 is a convex function, but the ability to jump out of local extrema and the accuracy of the global search are higher than those in the other two cases. When C3 is a concave function, the convergence speed is faster than those in the other two cases, but the search accuracy is lower than that when C3 is a convex function.

3.2. Comparison of Test Results

The 11 test functions in Table 1 are used to compare the IEPSO algorithm with the classical PSO, SPSO, differential evolution (DE), and GA. The DE, GA, and PSO algorithms are all stochastic intelligent optimization algorithms with population iterations. The evaluation criteria of algorithm performance include the speed of convergence and the size of the search coverage of the individuals in the population. The DE algorithm has low space complexity and obvious advantages in dealing with large-scale and complex optimization problems. The GA has good convergence when solving discrete, multipeak, and noise-containing optimization problems. Based on the traditional PSO algorithm, the SPSO algorithm achieves a balance between global search and local search by adjusting the inertial weight (Figures 3 and 4).

The experimental parameters of the five algorithms are set as shown in Table 6. Each test function is run independently 10 times, and the average is recorded to reduce data error. The iteration is stopped when the convergence condition meets the convergence accuracy. The best average fitness value among the five algorithms is shown in bold. The standard deviation, average fitness, and optimal value of each algorithm are shown in Tables 7 and 8; Figures 5 and 6 plot the convergence curves on the 11 test functions.
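
The evaluation protocol can be summarized by the following sketch, assuming a hypothetical `optimizer` callable that returns the best fitness value of a single independent run:

```python
import numpy as np

def benchmark(optimizer, objective, runs=10):
    """Run `optimizer` independently on one test function and report
    the statistics listed in Tables 7 and 8."""
    results = np.array([optimizer(objective) for _ in range(runs)])
    return {"mean": results.mean(),   # average fitness over the 10 runs
            "std": results.std(),     # standard deviation (robustness)
            "best": results.min()}    # best (optimal) value found
```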

Table 7 shows that the IEPSO has the best performance on f1, f2, f3, and f4 and that the IEPSO algorithm obtains the theoretical optimal value on f2, whereas DE can search the global solution on f5. The deep search capability of the IEPSO algorithm is considerably higher than those of the PSO and SPSO algorithms because of the added local-global information sharing term and the last-eliminated principle. The crossover, mutation, and selection mechanisms make the DE algorithm perform well in the early stage of the global search; however, population diversity declines in the latter stage as the differences among individuals diminish. Simulation diagrams (a)–(e) show that although the DE algorithm converges rapidly in the early stage, its global search performance in the later stage becomes lower than that of the IEPSO algorithm. When the GA is used to solve optimization problems, the individuals in the population fall into a local optimum and do not continue searching for the optimal solution; therefore, in Figure 5, the simulation curve of the GA converges to the local optimum.

The test results in Table 8 indicate that the IEPSO has the best performance on f6, f7, f8, f9, f10, and f11 and that the DE and GA can obtain the theoretical optimal values on f9 and f11. Although both the GA and the IEPSO algorithm can obtain the global optimal value on f9, the IEPSO algorithm is more robust than the GA. As shown in the simulation curves of Figure 6, population diversity is maintained because the particles supplementing the population are stochastic as the algorithm gradually converges toward a local optimal solution. The IEPSO algorithm can thus jump out of local extremum points in the face of complex multimodal test functions, and the number of iterations required is correspondingly reduced.

Table 9 shows the test results of the three improved PSO algorithms. The DMSDL-PSO algorithm in [25] is a PSO algorithm combined with differential variation and the quasi-Newton method, whereas the HPSOWM algorithm in [26] is a binary PSO algorithm based on the wavelet transform. Table 9 shows that the IEPSO algorithm obtains the best value on 5 of the 11 test functions, and together with the above analysis, this result indicates that the IEPSO outperforms the other improved PSO algorithms overall.

4. Conclusion

In contemporary engineering design, solving the global optimization problems of multiparameter, strongly coupled, and nonlinear systems with conventional optimization algorithms is difficult. In this study, an improved PSO, that is, the IEPSO algorithm, is proposed on the basis of the last-eliminated principle and an enhanced local-global information sharing capability. The comparison and analysis of the simulation results indicate the following conclusions:

(1) The exchange of information between the global and local optimal particles enhances the deep search capability of the IEPSO algorithm.

(2) The standard test functions are used to study the parameter C3 of the local-global information sharing term. The results show that the global optimization capability of the IEPSO algorithm is strong when C3 is linearly decreasing, and the proposed algorithm shows the best search performance when C3 is a nonlinear convex function.

(3) The last-eliminated principle is used in the IEPSO to maintain particle population diversity, and premature convergence of the PSO to a local optimal value is thereby avoided. A comparison of the IEPSO algorithm with the classical optimization algorithms and their improved versions verifies the global search capability of the IEPSO algorithm.

In summary, the comparative results of the simulation analysis reveal that, with the application of the last-eliminated principle and the local-global information sharing term, the proposed IEPSO algorithm effectively overcomes the disadvantages of the classical algorithms, namely, premature convergence and the tendency to fall into local optima. The IEPSO shows ideal global optimization performance and high application value for solving practical engineering optimization problems.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Acknowledgments

This work was supported by the Shanghai Rising-Star Program (no. 16QB1401000), the Key Project of the Shanghai Science and Technology Committee (no. 16DZ1120400), the National Natural Science Foundation of China (Project no. 51705187), and the Postdoctoral Science Foundation of China (Grant no. 2017M621202).