Abstract
As the convergence accuracy of the Porcellio scaber algorithm (PSA) is low, this study proposes an improved algorithm based on a t-distribution elite mutation mechanism. First, the improved algorithm applies a t-distribution mutation to each dimension of the optimal solution of each generation. Using the dominant information of the optimal solution and the characteristics of the t-distribution, the mutation result is then employed as the updated location of the selected Porcellio scaber; thus, the algorithm enhances the ability to jump out of local extreme values and improves the convergence speed. Second, the iterative update rule of PSA may lose the information of the elite Porcellio scaber of the previous generation. To solve this problem, a judgment mechanism comparing the current and previous optimal solutions is included in the algorithm process. Finally, a dynamic self-adaptive improvement is applied to the weight allocation parameter of PSA. Simulation results on 24 benchmark functions show that the improved algorithm has significant advantages in convergence accuracy, convergence speed, and stability when compared with the basic PSA, PSO, GSA, and FPA, indicating that the improved algorithm has certain advantages in terms of optimization. Optimal solutions with good practicability are obtained by solving three practical engineering problems: three-bar truss, welded-beam, and tension/compression-spring design.
1. Introduction
Many real-world applications involve complex optimization problems [1]. Swarm intelligence optimization algorithms solve problems by simulating the intelligent characteristics and behavior modes of biological populations, and they have high self-organization, self-adaptability, generalization, and abstraction capabilities [2, 3]. They can quickly and approximately solve certain NP-hard problems in the real world and have become an important method for effectively solving complex science and engineering optimization problems [4, 5]. Since the inception of swarm intelligence optimization algorithms, several classical algorithms, such as the genetic algorithm (GA) [6], particle swarm optimization (PSO) [7], and ant colony optimization (ACO) [8], have emerged.
The general framework of swarm intelligence optimization algorithms starts with a set of stochastic solutions and utilizes a series of metaheuristics to explore and exploit the search space to find the best approximate solution to the optimization problem. Maintaining population diversity, finding effective exploration and exploitation methods, and balancing the two are the main issues in research on swarm intelligence optimization algorithms [1, 9]. The no free lunch theorem [10] indicates that no swarm intelligence optimization algorithm can solve all practical optimization problems, which motivates continued development in this research field. In recent years, the research directions of swarm intelligence algorithms can be divided into three categories [1, 11]: (1) discovering the behavioral characteristics of biological communities in nature to design new algorithms; to this end, swarm intelligence optimization algorithms such as the black widow optimization algorithm (BWO) [12], flower pollination algorithm (FPA) [13], whale optimization algorithm (WOA) [14], sparrow search algorithm (SSA) [15], coronavirus herd immunity optimizer (CHIO) [16], chameleon swarm algorithm (CSA) [17], and white shark optimizer (WSO) [1] have emerged; (2) integrating the advantages of different swarm intelligence algorithms to form hybrid algorithms; for example, some studies [4, 18–21] combine two or more algorithms to improve the efficiency of solving optimization problems; and (3) improving existing algorithms, commonly using chaotic maps [22, 23], cellular structures [9, 24], and mutation operators [25, 26].
The objective functions of many real-world science and engineering problems are nonlinear, nondifferentiable, multimodal, and high-dimensional, so traditional gradient-based local optimization methods are not very effective on them [2]. Global optimization methods based on swarm intelligence do not require gradient information, only knowledge of the input and output parameters of the problem, and have shown good performance in solving complex optimization problems [5]. Application areas include vehicle routing [20], coordinating the charging schedules of electric vehicles [27], feature selection [28], flow-shop scheduling [29, 30], wireless sensor network deployment [31], and single batch-processing machines [32].
The Porcellio scaber algorithm (PSA) [33] is a new swarm intelligence optimization algorithm inspired by the two main survival behaviors of Porcellio scaber. PSA has already been applied successfully in engineering fields such as microgrid optimization [34, 35] and pressure vessel design [36]; however, as an emerging algorithm, its research and applications are still in their infancy.
PSA has two main shortcomings. First, the iterative update rules of the algorithm should make full use of the information of the previous generation's optimal Porcellio scaber; however, the algorithm lacks a judgment operation comparing the optimal solutions of the contemporary generation (k + 1) and the previous generation (k). The optimal solution of generation k + 1 may be inferior to that of generation k; if it is nonetheless carried directly into the operation of generation k + 2, the information of the generation-k optimal solution is lost, which slows the convergence of the algorithm. Second, the algorithm easily falls into local extreme values, especially when dealing with high-dimensional optimization problems, which can easily lead to premature convergence and stagnation.
To solve these problems, this study proposes an improved PSA with t-distribution elitist mutation (TPSA). The main contributions of this study are summarized as follows:
(1) TPSA implements a dimension-by-dimension t-distribution perturbation on the position vector of each generation's elite Porcellio scaber and, with a certain probability, takes the mutation result as the updated position of the selected Porcellio scaber. It fully uses the advantageous information contained in the elite Porcellio scaber and, with the help of the t-distribution's characteristics, guides the Porcellio scaber population to approach the optimal solution quickly, improving convergence accuracy and speed.
(2) A judgment operation comparing the contemporary optimal solution with that of the previous generation is added to the PSA process, and a dynamic adaptive improvement of the weight allocation parameter is carried out. This improves the balance between exploration and exploitation and improves performance.
(3) The performance of TPSA is verified on 24 well-known benchmark functions against comparison algorithms. The practicability of TPSA is verified on three real-world engineering problems, and the best solutions obtained are compared with those of algorithms reported in recent years.
The rest of the paper is organized as follows: Section 2 introduces the principle of the PSA algorithm, Section 3 describes the TPSA algorithm improvement strategy and algorithm implementation process in detail, Section 4 demonstrates the experimental results and analysis of convergence and stability, Section 5 introduces the application of TPSA in three realworld engineering problems, and finally, Section 6 concludes the paper.
2. Porcellio scaber Algorithm
Porcellio scaber is a species distributed worldwide, as shown in Figure 1 [33]. It prefers to live in damp, dark, and humus-rich places and in groups. Several studies [37, 38] have shown that P. scaber has two behaviors regarded as its survival laws: (1) group behavior—aggregation and (2) individual behavior—a propensity to explore novel environments. When the living environment is unfavorable, they explore new environments separately; conversely, when environmental conditions are favorable, they stay together.
PSA uses the fitness function as a scale to evaluate the quality of the porcellio scabers' living environment, and the mathematical modeling of the two survival rules of Porcellio scaber serves as the update and iteration rule of the algorithm. Its mathematical description is as follows.
In the d-dimensional search space, the population consisting of N porcellio scabers is X = (X_{1}, X_{2}, …, X_{N}), and the vector X_{i}^{k} = (x_{i1}^{k}, x_{i2}^{k}, …, x_{id}^{k})^{T} represents the position of the ith Porcellio scaber of the kth generation. The movement of the aggregation behavior of the Porcellio scaber is modeled according to
Equation (1) can also be expressed as
From equation (1), all porcellio scabers eventually stay at the initial position with the best environmental conditions; however, if the initial conditions are the worst environment, porcellio scabers following only the aggregation behavior of equation (1) cannot survive. The actual movement of the Porcellio scaber should be the weighted result of the two behaviors of gathering and exploring the new environment. Therefore, the final update rule of PSA given in the literature [33] is calculated according to
where λ ∈ (0, 1) is the weight allocation parameter of the two behaviors of gathering and exploring the new environment, and the value of λ can differ between porcellio scabers. pτ represents the motor behavior of the Porcellio scaber exploring a new environment, where τ is a d-dimensional random vector and p is a function of the intensity of the exploratory action, p = f(x_{i}^{k} + τ). Each Porcellio scaber randomly selects a direction around its center of mass to explore the new environment. The simplest choice is to use the fitness of the Porcellio scaber as the intensity p of the behavior of exploring the new environment. The literature [33] provides the following definition of p with good performance:
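Since equations (1)–(3) are not reproduced in this text, the sketch below assumes a commonly cited reading of the PSA update, x_i ← x_i − [λ(x_i − x_best) + (1 − λ)pτ], with p the normalized fitness of the probed point x_i + τ; it is an illustrative approximation, not the paper's exact implementation.

```python
import numpy as np

def psa_step(X, fitness, lam=0.8, rng=None):
    """One PSA iteration: each individual takes the weighted sum of an
    aggregation move toward the current best position and a random
    exploratory move scaled by the normalized fitness p of the probed point."""
    if rng is None:
        rng = np.random.default_rng()
    n, d = X.shape
    f = np.array([fitness(x) for x in X])
    best = X[np.argmin(f)]                       # elite Porcellio scaber
    X_new = np.empty_like(X)
    for i in range(n):
        tau = rng.uniform(-1.0, 1.0, d)          # random exploration direction
        p = (fitness(X[i] + tau) - f.min()) / (f.max() - f.min() + 1e-30)
        X_new[i] = X[i] - (lam * (X[i] - best) + (1.0 - lam) * p * tau)
    return X_new

# toy usage: minimize the sphere function in 10 dimensions
sphere = lambda x: float(np.sum(x ** 2))
rng = np.random.default_rng(0)
X = rng.uniform(-5.0, 5.0, (30, 10))
initial_best = min(sphere(x) for x in X)
for _ in range(200):
    X = psa_step(X, sphere, lam=0.8, rng=rng)
final_best = min(sphere(x) for x in X)
print(final_best < initial_best)  # True: the population improved
```

With a fixed λ, the whole population contracts toward the current best each step, which is exactly the premature-convergence risk the improvements in Section 3 target.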
3. Porcellio scaber Algorithm with t-Distribution Elite Mutation Mechanism
3.1. t-Distribution Elite Dimensional Variation Strategy
Each generation's optimal Porcellio scaber is the elite of the population. The advantage information it contains plays a key role in the update and iteration of the algorithm, enabling rapid convergence to the global optimal solution and improving accuracy. To this end, the TPSA algorithm defines an elite mutation probability parameter P_{t} and performs a t-distribution mutation on the position vector of the optimal Porcellio scaber in each generation, dimension by dimension. The mutation result is given directly to the selected Porcellio scaber with probability P_{t} as its new position, making full use of the advantage information of the optimal Porcellio scaber. The search space of the Porcellio scaber is controlled by the characteristics of the t-distribution, and the algorithm is guided to converge to the optimal solution quickly. This is expressed as follows:
where γ is the scaling coefficient of the step size and t(iter) is a t-distribution whose degrees of freedom equal the current iteration number iter of the algorithm.
The curve shape of the t-distribution depends on its degrees of freedom. When the degrees of freedom equal 1, the curve is similar to that of the Cauchy distribution; as the degrees of freedom increase, the curve gradually approaches a Gaussian distribution, as shown in Figure 2. In the early iterations of the TPSA algorithm, the iter value is small, so the t-distribution exhibits the characteristics of the Cauchy distribution and can mutate to generate next-generation porcellio scabers far from the optimal Porcellio scaber position. Thus, the diversity of the population is maintained to search a larger space, improving the exploration ability and preventing the algorithm from falling into local extrema and convergence stagnation. As the algorithm iterates into the middle and late stages, the iter value increases, and the t-distribution exhibits the characteristics of a Gaussian distribution, mutating around the optimal Porcellio scaber to generate next-generation porcellio scabers with a certain probability; this improves the exploitation ability of PSA and increases the convergence accuracy.
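The dimension-by-dimension elite mutation can be sketched as follows; the additive form x′ = x_best + γ·t(iter) is an assumption, since the paper's mutation equation is not reproduced here. The demonstration contrasts the heavy, Cauchy-like tails at iteration 1 with the near-Gaussian tails at iteration 1,000.

```python
import numpy as np

def t_elite_mutation(x_best, it, gamma=1.0, rng=None):
    """Perturb every dimension of the elite position with a t-distributed
    step whose degrees of freedom equal the current iteration number, so the
    mutation is Cauchy-like early on and near-Gaussian late in the run."""
    if rng is None:
        rng = np.random.default_rng()
    step = gamma * rng.standard_t(df=max(it, 1), size=x_best.shape)
    return x_best + step

# tail behavior: wide jumps at iteration 1, fine local moves at iteration 1000
rng = np.random.default_rng(1)
early = t_elite_mutation(np.zeros(100_000), it=1, rng=rng)
late = t_elite_mutation(np.zeros(100_000), it=1000, rng=rng)
q_early = float(np.quantile(np.abs(early), 0.999))
q_late = float(np.quantile(np.abs(late), 0.999))
print(q_early > q_late)  # True: heavy tails early, light tails late
```

The 99.9th-percentile step size shrinks by roughly two orders of magnitude between the two regimes, which is what lets the same operator serve exploration early and exploitation late.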
3.2. Optimal Porcellio scaber Retention Strategy
From equation (3), when PSA performs a position update, the position update of generation k + 1 requires the position information of the generation-k optimal Porcellio scaber. However, in the implementation process of PSA in the literature [33], the algorithm does not consider the case in which the optimal solution of generation k + 1 is inferior to that of generation k; if the generation-(k + 1) optimal solution is nonetheless included in the position update of generation k + 2, the information of the generation-k optimal solution is easily lost. Therefore, after the contemporary position update, a judgment mechanism between the contemporary and previous-generation optimal solutions is added, and the execution process is as follows:
if MinFitness(k + 1) < MinFitness(k)
  proceed with Algorithm 1 in Section 3.4 for the next iteration
else
  X_{best}^{k+1} = X_{i}^{k} = (x_{i1}^{k}, x_{i2}^{k}, …, x_{id}^{k})
end if
Here, i is the index of the kth-generation optimal Porcellio scaber, d is the dimension of the search space, and MinFitness(k) is the optimal fitness of the kth-generation population. If the optimal fitness of generation k + 1 is smaller than that of generation k, the next iteration continues according to the execution process of Algorithm 1. Otherwise, the optimal Porcellio scaber information of generation k is retained into generation k + 1 and included in the position update calculation of generation k + 2.
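A minimal sketch of this retention check, assuming the previous elite is re-injected into the population slot of the current best whenever the new generation regresses (the helper name and slot choice are illustrative):

```python
import numpy as np

def retain_elite(X_new, fitness, x_best_prev, f_best_prev):
    """If the new generation's best is worse than the previous elite,
    re-inject the previous elite so its information enters the next
    position update; otherwise adopt the new best."""
    f_new = np.array([fitness(x) for x in X_new])
    i = int(np.argmin(f_new))
    if f_new[i] < f_best_prev:
        return X_new, X_new[i].copy(), float(f_new[i])
    X_new[i] = x_best_prev          # previous elite survives into generation k+1
    return X_new, x_best_prev.copy(), float(f_best_prev)

sphere = lambda x: float(np.sum(x ** 2))
prev_elite = np.zeros(5)            # fitness 0.0, already optimal
X = np.ones((4, 5))                 # every new candidate has fitness 5.0
X, elite, f_elite = retain_elite(X, sphere, prev_elite, 0.0)
print(f_elite)  # 0.0: the previous elite was retained
```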
3.3. Dynamic Adaptive Strategy of Weight Allocation Parameters
The proportion of exploration and exploitation in PSA is controlled by the weight allocation parameter λ. In PSA, λ is a constant; it has a significant influence on the results of different optimization problems and is difficult to determine. To this end, this study introduces the concept of inertia weight, inspired by the literature [7], and uses the following equation to calculate λ:
where [min, max] is the range of λ, max iter is the maximum number of iterations of the algorithm, and iter is the current iteration number. In this manner, λ adjusts dynamically and adaptively as the algorithm runs. In the early stage, λ is large and the algorithm mainly explores, which is conducive to expanding the search space and improving the convergence speed. In the middle and late stages, λ decreases gradually and the algorithm mainly exploits, which helps improve the convergence accuracy.
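Assuming the schedule is the linear ramp implied by the 0.9-to-0.2 decay used in the experiments (the exact equation is not reproduced here), λ can be computed as:

```python
def adaptive_lambda(it, max_iter, lam_max=0.9, lam_min=0.2):
    """Linear decay of the weight allocation parameter from lam_max to
    lam_min over the run: exploration-heavy early, exploitation-heavy late."""
    return lam_max - (lam_max - lam_min) * it / max_iter

print(round(adaptive_lambda(0, 1000), 2))     # 0.9
print(round(adaptive_lambda(500, 1000), 2))   # 0.55
print(round(adaptive_lambda(1000, 1000), 2))  # 0.2
```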
3.4. TPSA
After adopting the t-distribution elite mutation, optimal Porcellio scaber retention, and dynamic adaptive weight allocation strategies, the TPSA algorithm is as follows:

3.5. Time Complexity Analysis
Let the objective function of the optimization problem be f(x) and the dimension of the search space be d. According to the description of PSA and the operation rules of time complexity, the time complexity of PSA is T(PSA) = O(d + f(d)).
TPSA adds the calculations of the t-distribution elite mutation, the dynamic weight parameter, and the optimal Porcellio scaber retention strategy. According to the execution process of Algorithm 1, the time complexity of each added part can be deduced: T(t-distribution elite mutation) = O(d + f(d)), T(dynamic weight parameter) = O(1), T(optimal Porcellio scaber retention strategy) = O(1), and T(TPSA) = T(PSA) + T(t-distribution elite mutation) + T(dynamic weight parameter) + T(optimal Porcellio scaber retention strategy). After simplification, T(TPSA) = O(d + f(d)). The time complexities of TPSA and PSA are of the same order of magnitude, so the increase is insignificant and has no negative impact on the algorithm.
4. Experimental Results and Analysis
4.1. Experiment Design and Parameter Settings
TPSA was compared with classical and emerging intelligent algorithms, namely PSA [33], PSO [7], GSA [39], and FPA [13], on 24 benchmark functions to verify its performance. The 24 benchmark functions, listed in Table 1, include nine high-dimensional unimodal functions, nine high-dimensional multimodal functions, and six low-dimensional functions. The following hardware and software configuration was used: CPU—2.70 GHz (Core i5), RAM—16 GB, OS—Windows 10 (64-bit), disk—512 GB SSD, and MATLAB R2020a.
Each algorithm was independently run 50 times on each function, and the worst value, best value, mean, standard deviation, and success rate were calculated as evaluation metrics of algorithm performance.
The optimization precision was set to 10^{−10} and the success rate of optimization was calculated as follows:
In the experiment, a search was considered successful if actual solution value − theoretical optimal value < 10^{−10}.
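The success-rate metric can be sketched as follows (a hypothetical helper, using the 10^{−10} precision stated above):

```python
def success_rate(best_values, f_star, tol=1e-10):
    """Percentage of independent runs whose best objective value lies
    within tol of the theoretical optimum f_star."""
    hits = sum(1 for f in best_values if abs(f - f_star) < tol)
    return 100.0 * hits / len(best_values)

runs = [0.0] * 43 + [1e-3] * 7    # e.g. 43 of 50 runs reached the optimum
print(success_rate(runs, 0.0))    # 86.0
```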
Parameter Settings. The population size of the five algorithms was N = 30, and the number of iterations was set to 1,000. Through a large number of numerical experiments, we found that TPSA performs best with the following parameters: P_{t} = 0.3; the step-size scaling coefficient γ set to 30 on f_{13} and f_{19}–f_{22} and to 1 otherwise; and λ linearly reduced from 0.9 to 0.2. The PSA weight allocation parameter was λ = 0.8. The PSO learning factors were c_{1} = c_{2} = 2, the inertia weight was linearly reduced from 0.9 to 0.4, and the maximum speed was half of the search space. The GSA gravitational constant was G_{0} = 100 with acceleration a = 20. The FPA probability conversion parameter was p = 0.2.
4.2. Results and Discussion
The experimental results and correlation analysis are presented separately according to the type of each benchmark function.
4.2.1. Type I: High-Dimensional Unimodal Functions
Table 2 shows the experimental results of the five algorithms on nine high-dimensional unimodal functions. TPSA had a search success rate of 100% on all nine functions, whereas the success rates of the comparison algorithms were mostly zero or very low, except for GSA, which had a success rate of 100% on the f_{1} function. Regarding convergence to the theoretical optimal value, TPSA found the theoretical optimal value 0 on the eight functions other than f_{4}, and the mean and standard deviation indicators show that the algorithm is strongly robust. PSA, GSA, and FPA could not find the theoretical optimal value of any function. PSO, with the maximum speed set to half of the search space, found the theoretical optimal values on the f_{1}, f_{4}, and f_{6} functions; however, judging from the mean and standard deviation indicators, its robustness was poor and its search success rate was low. On the f_{4} function, although TPSA could not converge to the theoretical optimal value 0, the mean value of the algorithm was 1.7908E−290, the worst value reached 5.2583E−289, and the standard deviation was approximately zero. This indicates that TPSA is more stable and robust than PSA, PSO, GSA, and FPA on this function. In essence, TPSA achieved better convergence accuracy and robustness than PSA, PSO, GSA, and FPA on high-dimensional unimodal functions.
4.2.2. Type II: High-Dimensional Multimodal Functions
These functions often have numerous local minima distributed in the solution space, and the global optimal solution is not easily found. However, according to the experimental results in Table 3, TPSA converged to the theoretical optimal value 0 on the f_{10}, f_{12}, f_{14}, f_{15}, f_{16}, and f_{17} functions. Considering the mean and standard deviation indicators, the robustness of the algorithm was good, and the search success rate was 100% on all functions except f_{18}. PSA, GSA, and FPA could not find the theoretical optimal value, and their success rates were zero. Although PSO found the theoretical optimal values of the f_{10}, f_{12}, and f_{16} functions, the mean and standard deviation indicators show that its robustness was poor; its success rates on f_{10}, f_{12}, and f_{16} were low, at 26%, 38%, and 12%, respectively, and it could not find the theoretical optimal values of the other functions. On the f_{11} function, TPSA outperformed all comparison algorithms except PSO, which is comparable to TPSA in terms of the optimal value. On the f_{13} function, TPSA was inferior to PSO and GSA in the optimal value metric but better than PSA and FPA; in the worst value and standard deviation metrics, TPSA was better than PSA, PSO, GSA, and FPA. On the f_{18} function, none of the algorithms found the theoretical optimal value, but the convergence accuracies of TPSA and GSA were significantly better than those of the other algorithms, indicating greater stability. In conclusion, the performance of TPSA on high-dimensional multimodal functions was significantly better than that of the other algorithms, showing that the improvement strategy of this study is effective.
4.2.3. Type III: Low-Dimensional Functions
The global optimum of a low-dimensional function is usually surrounded by many local extrema, and the function shows strong oscillation characteristics, so the global optimum is not easily found. Therefore, such functions are often used to test the exploration capability of a swarm intelligence algorithm [40]. Table 4 presents the experimental results. On the f_{19} and f_{20} functions, the success rate of TPSA was 100%, and its worst value, mean, and standard deviation were better than those of the PSA, PSO, GSA, and FPA algorithms. TPSA was comparable to PSO on the f_{21} function and to FPA on the f_{22} function, outperforming the other comparison algorithms. On the f_{23} and f_{24} functions, TPSA and PSO achieved the best search performance and found the theoretical optimal value −1 in all 50 experiments, a search success rate of 100%. The optimization performance of PSA, GSA, and FPA was poor: they could not find the theoretical optimal value, and their search success rates were low.
4.2.4. Convergence and Stability Analysis
The fitness convergence curves for selected functions are shown in Figure 3, and a boxplot of the experimental results is shown in Figure 4. Figures 3 and 4 show the advantages of TPSA's optimization performance over the other algorithms. The theoretical optimal values of the selected functions are all zero; for convenience of display and observation, the fitness values are plotted on a base-10 logarithmic scale.
Figure 3 shows that TPSA converged to the theoretical optimal value of the function, whereas PSA, PSO, GSA, and FPA could not. Particularly, for f_{6}, f_{10}, f_{14}, and f_{15} functions, TPSA converged to the theoretical optimal value with only a few iterations. Therefore, its convergence speed has obvious advantages over the other algorithms.
Figure 4 shows that TPSA converged to the theoretical optimal value of the function each time, and its convergence accuracy and stability were better than those of PSA, PSO, GSA, and FPA. The convergence results of PSA showed a low precision and high fluctuation; although PSO converged to the theoretical optimal value on f_{1}, f_{6}, and f_{10} functions, the fitness fluctuation was strong, and the algorithm performance was unstable. The fitness fluctuation of GSA and FPA improved; however, neither of them could converge to the theoretical optimal value of the function.
In conclusion, TPSA has significantly improved convergence accuracy, convergence speed, robustness, and stability, compared with the other algorithms and is competitive among intelligent optimization algorithms.
4.3. Wilcoxon Rank-Sum Test
As the performance of an algorithm cannot be fully described using only the mean and standard deviation, statistical validation is required [41]. Therefore, in this study, the Wilcoxon rank-sum test was chosen to verify whether TPSA differs significantly from PSA, PSO, GSA, and FPA at the 5% significance level and to determine the level of superiority among them. When p < 0.05, the null hypothesis H_{0} is rejected, and there is a significant difference between the two algorithms; otherwise, there is no significant difference, and their performance is considered equivalent.
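For reference, a minimal rank-sum test using the normal approximation (no tie correction; scipy.stats.ranksums provides a production implementation) might look like this, applied to hypothetical result samples:

```python
import math
import numpy as np

def rank_sum_test(a, b):
    """Two-sided Wilcoxon rank-sum test via the normal approximation
    (a sketch of what scipy.stats.ranksums computes; ties are ignored)."""
    n1, n2 = len(a), len(b)
    pooled = np.concatenate([a, b])
    ranks = np.empty(n1 + n2)
    ranks[pooled.argsort()] = np.arange(1, n1 + n2 + 1)
    w = ranks[:n1].sum()                      # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2.0             # mean of W under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2.0))    # two-sided p-value
    return z, p

rng = np.random.default_rng(0)
improved = rng.normal(0.0, 0.1, 50)   # 50 runs of a hypothetical improved algorithm
baseline = rng.normal(1.0, 0.1, 50)   # 50 runs of a hypothetical baseline
z, p = rank_sum_test(improved, baseline)
print(p < 0.05)  # True: significant difference at the 5% level
```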
Table 5 shows the Wilcoxon rank-sum test p values of TPSA against the other algorithms on all benchmark functions. TPSA cannot be compared with itself and is marked as "N/A", whereas "+", "−", and "=" indicate that TPSA is superior, inferior, or equivalent to the other algorithm, respectively. The p value in Table 5 is greater than 0.05 only when TPSA is compared with PSO on the f_{13} function and with FPA on the f_{22} function (marked in bold in the table); it is less than 0.05 in all other cases. Table 5 shows that the optimization performance of TPSA was significantly better than that of PSA, PSO, GSA, and FPA, verifying the effectiveness of the improved strategy in this study.
4.4. Performance Analysis of TPSA in Higher Dimensions
To further verify the stability and convergence accuracy of TPSA in higher dimensions, three functions, f_{5}, f_{12}, and f_{15}, were selected from the high-dimensional unimodal and multimodal test functions for simulation experiments in 500, 1,000, and 2,000 dimensions. The program was run independently 50 times, and the mean and standard deviation were used as evaluation metrics. Table 6 presents the experimental results. With a significant increase in the function dimension, the mean and standard deviation of the fitness of TPSA remained 0 and unchanged, while the optimized mean value of PSA increased significantly. This shows that TPSA does not suffer from the curse of dimensionality, which would otherwise degrade optimization performance as the dimension increases. Compared with PSA, TPSA had a stronger ability to jump out of local extreme values, maintained good stability, and converged to the theoretical extreme value in higher dimensions. This further demonstrates the effectiveness of the t-distribution elite mutation strategy proposed in this study.
4.5. Strategy Effectiveness Analysis
PSA variants using only the t-distribution mutation strategy, only the optimal Porcellio scaber retention strategy, and only the dynamic adaptive weight allocation strategy are denoted as TMPSA, ORSPSA, and DAPSA, respectively. PSA was compared experimentally with these three single-strategy improved algorithms, and the results are listed in Table 7. As seen in Table 7, for the unimodal and multimodal functions f_{1}, f_{3}, f_{5}, f_{7}, f_{9}, f_{11}, f_{13}, f_{15}, and f_{17}, the three strategies improved PSA performance overall, with the t-distribution mutation strategy playing the key role. For the functions f_{3}, f_{9}, and f_{13}, TPSA obtained the optimal solution through the combined action of the three strategies. For the fixed-dimensional multimodal functions f_{19}–f_{22}, the optimization results of ORSPSA and DAPSA were comparable to those of PSA, but TMPSA performed worse. However, as shown in Table 4, with the combined action of the three strategies, TPSA achieved search success rates of 100%, 100%, 96%, and 86% on these functions, respectively. The optimization performance was substantially improved compared with that of PSA; therefore, the strategies proposed in this paper are effective.
5. Engineering Applications of TPSA
To further study the performance of TPSA, it was applied to solve three practical engineering problems—three-bar truss, welded-beam, and tension/compression-spring design—and the results were compared with those reported in the literature. These engineering problems are multiconstraint optimization problems, and PSA cannot solve constrained optimization problems directly. This study therefore uses the penalty function of the literature [15] to transform the constrained optimization problems into unconstrained ones.
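A generic static-penalty transformation is sketched below; the exact penalty scheme of reference [15] is not reproduced here, so the additive form and the penalty coefficient σ are illustrative assumptions.

```python
import numpy as np

def penalized(f, constraints, sigma=1e10):
    """Wrap objective f with a static penalty: each violated inequality
    constraint g(x) <= 0 adds sigma * max(0, g(x)) to the objective,
    yielding an unconstrained problem a plain optimizer can handle."""
    def wrapped(x):
        violation = sum(max(0.0, g(x)) for g in constraints)
        return f(x) + sigma * violation
    return wrapped

# toy example: minimize x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0
obj = lambda x: float(x[0] ** 2)
g = lambda x: 1.0 - float(x[0])
pf = penalized(obj, [g])
feasible = pf(np.array([2.0]))      # 4.0: constraint satisfied, no penalty
infeasible = pf(np.array([0.0]))    # huge: constraint violated
print(feasible, infeasible > 1e9)
```

Because the penalty dominates any achievable objective value, the unconstrained optimizer is driven back into the feasible region before it can refine the objective.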
5.1. Three-Bar Truss Design Problem
Figure 5 illustrates the three-bar truss structure. The height of the truss is H; the cross-sectional areas of the bars are A_{1}, A_{2}, and A_{3}; and the concentrated load is p. The problem is to minimize the volume of the three-bar truss subject to stress, deflection, and buckling constraints. It requires optimizing two variables (A_{1} and A_{2}) to adjust the cross-sectional area of each bar. The problem is mathematically described as follows:
where , , , and .
The population size was set to 50 and the maximum number of iterations to 1,000. The optimal solutions of TPSA and PSA were compared with the optimal solutions of algorithms reported in the relevant literature. The results, listed in Table 8, show that the optimal solution found by TPSA is superior to those of PSA, CS [42], HHO [43], SCGWO [44], and GLFGWO [45] and equivalent to the result of AEO [46]. This shows that TPSA can optimize the design of the three-bar truss effectively.
5.2. Welded-Beam Design Problem
The welded-beam structure is shown in Figure 6, with four design parameters: weld thickness (h), length of the clamped bar (l), height of the bar (t), and thickness of the bar (b). The optimal design should minimize the construction cost under the constraints of shear stress τ, bending stress σ, buckling load P_{c}, and deflection δ. The mathematical description of the problem is as follows:
where , , , ,
where , , , , and .
The variable ranges are , .
The population size was set to 50, the maximum number of iterations was 10,000, the step-size scaling factor γ was 2, and the program was run independently 30 times. TPSA and PSA were used to solve the welded-beam design problem, and the results were compared with those of algorithms reported in the relevant literature. Comparisons of the optimal solutions and the statistical results are presented in Tables 9 and 10, respectively. The optimal solution found by TPSA is superior to those of PSA, CPSO [47], IGMM [48], TEO [49], IGWO [50], SFOA [51], CSBSA [52], and WAROA [53] and equivalent to the result of AEO [46]; however, the mean and standard deviation of TPSA are better than those of AEO. This indicates that TPSA outperforms the others in solving the welded-beam design problem.
5.3. Tension/Compression-Spring Design Problem
Figure 7 shows the structure of the tension/compression spring, with three design parameters: mean coil diameter (D), number of active coils (p), and wire diameter (d). The design goal is to minimize the weight under certain constraints. The mathematical description of the problem is as follows:
where , , and .
The population size was set to 50, the maximum number of iterations was 1,000, the step-size scaling factor γ was 5, and the program was run independently 30 times. TPSA and PSA were used to solve the tension/compression-spring design problem, and the results were compared with those of algorithms reported in the relevant literature. Comparisons of the optimal solutions and the statistical results are shown in Tables 11 and 12, respectively. They show that the optimal solution of TPSA is superior to those of PSA, HMPA [4], AEO [46], CPSO [47], IGWO [50], SFOA [51], and PO [54] and equivalent to the results of TEO [49] and WAROA [53]; however, the worst and average values of TPSA are better than those of WAROA. This indicates that TPSA can effectively solve the tension/compression-spring design problem.
Table 12 shows that the optimal solution of PSA is inferior to those of all the other algorithms, and its average value and standard deviation are relatively large, indicating that PSA is unstable on this problem. The TPSA results are superior to those of the algorithms reported in the literature, which further shows that the improved strategies in this study effectively enhance the performance of PSA.
TPSA mutates the elite Porcellio scaber using a t-distribution operation. The degrees of freedom of the t-distribution equal the current iteration count and therefore increase dynamically as the algorithm runs; accordingly, the distribution's shape shifts from Cauchy-like to Gaussian-like. In the early stage of the algorithm, this dynamic characteristic largely maintains population diversity, allowing the population to explore the search space sufficiently. In the late stage, the t-distribution mainly exhibits Gaussian characteristics, and the mutated population performs fine exploitation in a smaller region around the optimal Porcellio scaber, which improves convergence accuracy while preventing the algorithm from becoming trapped in a local optimum. The optimal-Porcellio-scaber retention strategy in TPSA partially overcomes PSA's disadvantage of losing the optimal solution of the previous generation. TPSA's dynamic adaptive improvement of the weight-distribution parameters better balances exploration and exploitation and enhances performance. In summary, TPSA, assembled with these strategies, performs well; therefore, good optimal solutions are obtained when solving the three engineering problems.
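A minimal sketch of the elite mutation mechanism is shown below. The degrees of freedom are tied to the iteration count, as described above; the multiplicative update form `best * (1 + t)` and the `scale` parameter are illustrative assumptions, and the paper's exact mutation formula may differ.

```python
import numpy as np

def t_elite_mutation(best, iteration, rng, scale=1.0):
    """Dimension-by-dimension t-distribution mutation of the elite solution.

    With df = iteration, samples are heavy-tailed (Cauchy-like) early on,
    favoring global exploration, and approach a standard normal as the
    iteration count grows, favoring local exploitation around the elite.
    """
    t = rng.standard_t(df=iteration, size=best.shape)  # one t-sample per dimension
    return best * (1.0 + scale * t)

# Illustrative use: mutate an elite position at iteration 50.
rng = np.random.default_rng(0)
elite = np.array([1.0, -2.0, 0.5])
candidate = t_elite_mutation(elite, iteration=50, rng=rng)
```

In the full algorithm the mutated candidate would replace the elite only if it improves the fitness, preserving the best-so-far solution.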
6. Conclusions
PSA is a new swarm intelligence optimization algorithm that suffers from low convergence accuracy and premature convergence. To address these issues, an improved PSA (TPSA), based on a t-distribution elite mutation mechanism, is proposed. First, a t-distribution is applied to each generation's elite Porcellio scaber for a dimension-by-dimension mutation; the characteristics of the t-distribution maintain population diversity, enhancing the algorithm's ability to explore the global space and exploit the local space. Second, a judgment mechanism comparing the current optimal solution with that of the previous generation is added to the algorithm to address the shortcoming that basic PSA may lose the information of the previous generation's elite Porcellio scaber. Finally, dynamic adaptive improvements are made to the weight-assignment parameters of PSA to balance exploitation and exploration and improve its ability to find the best solution.
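The second and third mechanisms above can be sketched as follows. The greedy comparison for elite retention follows directly from the text; the linear decay schedule for the weight parameter is an assumption for illustration, as the paper's actual dynamic adaptation rule is not reproduced here.

```python
def retain_elite(prev_best, prev_fit, cand_best, cand_fit):
    """Keep whichever elite has the better (lower) fitness, so the
    previous generation's best solution is never lost (minimization)."""
    if cand_fit < prev_fit:
        return cand_best, cand_fit
    return prev_best, prev_fit

def dynamic_weight(iteration, max_iter, w_start=0.9, w_end=0.4):
    """Illustrative linearly decaying weight parameter: large early
    (exploration-biased), small late (exploitation-biased)."""
    return w_start - (w_start - w_end) * iteration / max_iter
```

For example, `retain_elite(x_prev, 0.5, x_new, 0.8)` keeps the previous elite because the candidate's fitness is worse, while `dynamic_weight` moves from 0.9 at iteration 0 to 0.4 at the final iteration.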
The performance and practicality of TPSA were evaluated on 24 benchmark functions and in three real-world engineering problems. First, the convergence accuracy, convergence speed, and stability of TPSA were evaluated on 24 benchmark functions, including high-dimensional unimodal, high-dimensional multimodal, and low-dimensional functions. The results show that TPSA has significant advantages over basic PSA, PSO, GSA, and FPA, which was also confirmed by the Wilcoxon rank-sum test on the experimental results. To further validate the performance of TPSA, experiments were conducted in 500, 1,000, and 2,000 dimensions on selected functions; the results show that TPSA converged to the theoretical optimum of each function without suffering from the "curse of dimensionality," further demonstrating its good convergence and stability. In the three practical engineering problems of three-bar truss, welded-beam, and tension/compression-spring design, the optimal solutions of TPSA were better than those of the algorithms reported in the related literature, which verifies its practicality. However, it should also be noted that TPSA is inferior to PSO and GSA on some functions. Therefore, in future work, we will continue to improve the optimization performance of TPSA and apply it to more complex engineering optimization problems.
Data Availability
The experimental data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest regarding the publication of this paper.
Acknowledgments
This work was supported by the Key Project of the Excellent Youth Talent Support Program in Higher Education Institutions of Anhui Province (Grant no. gxyqZD2019125), the Key Project of the Natural Science Foundation in Higher Education Institutions of Anhui Province (Grant nos. KJ2020A0035 and KJ2017A843), and the Project of Provincial Quality Engineering in Higher Education Institutions of Anhui Province (Grant nos. 2020kfkc577 and 2020jyxm2226).