Mathematical Problems in Engineering
Special Issue: Advanced Control and Applications of Medical Robots
Research Article | Open Access
Volume 2020 | Article ID 2846181 | https://doi.org/10.1155/2020/2846181

Bin Bai, Zhi-wei Guo, Qi-liang Wu, Junyi Zhang, Yan-chao Cui, "Application of the Improved PSO-Based Extended Domain Method in Engineering", Mathematical Problems in Engineering, vol. 2020, Article ID 2846181, 14 pages, 2020. https://doi.org/10.1155/2020/2846181

Application of the Improved PSO-Based Extended Domain Method in Engineering

Academic Editor: Jing Guo
Received: 21 May 2020
Revised: 22 Jul 2020
Accepted: 29 Jul 2020
Published: 07 Sep 2020

Abstract

The standard particle swarm optimization (PSO) algorithm handles only simple boundary constraints on the variables, so it can hardly be applied directly to constrained optimization. Furthermore, for unconstrained optimization, the standard PSO algorithm often fails to obtain the global optimal solution when the dimensionality is high. Thus, an improved PSO-based extended domain method (IPSO-EDM) is proposed to solve engineering optimization problems. The core idea of this method is that the original feasible region is expanded and the constrained optimization is transformed into an unconstrained optimization, while the ergodicity of chaos optimization is combined with evolutionary variation to realize global search. In addition, to verify the effectiveness of the IPSO-EDM, an unconstrained optimization case study, four constrained optimization case studies, and one engineering example are investigated. The results indicate that the computational accuracy of the IPSO-EDM is comparable to that reported in the existing literature, while its computational efficiency is significantly improved. Meanwhile, the method shows conspicuous global search ability and stability in engineering optimization.

1. Introduction

Optimization dates back to the 17th century, originating from the differential and integral calculus invented by Newton and Leibniz. Optimization algorithms [1–5] then developed rapidly, including artificial neural networks, simulated annealing, genetic algorithms, ant colony optimization, and particle swarm optimization (PSO). All these methods have been widely used in different fields [6–12], such as chemical engineering, biomedicine, navigation, robotics, automobiles, architecture, and aerospace.

In fact, mechanical engineering optimization can typically be expressed as constrained optimization over continuous intervals. To investigate this problem, traditional gradient methodologies [13–15] such as the penalty function and Lagrange multiplier methods were studied. Although the theory of these methods is impeccable, they require the objective function and the constraint conditions to be differentiable. In practical engineering, however, the constraint functions and objective function are often nondifferentiable and discontinuous implicit functions. Consequently, a new optimization method needs to be developed for this problem, which plays an important role in the development of engineering optimization design.

Initially, the PSO was presented by Kennedy and Eberhart [16], inspired by the flight behaviour of bird flocks, and was termed the global PSO algorithm. This method has been extended to many different fields. For instance, Xue et al. [17] used an analytical method with a modified PSO to establish a subdomain model and optimize cogging torque. Han et al. [18] adopted an adaptive gradient multiobjective PSO to improve the computational performance for a mechanism. Yi et al. [19] presented a parallel chaotic local search algorithm to solve constrained engineering design problems. Park et al. [20] used chaotic sequences with conventional linearly decreasing inertia weights to increase the exploitation capability. However, these methods do not change the essence of the algorithm, so some scholars revised the update equation of the global PSO. For instance, Phung et al. [21] proposed a discrete PSO algorithm to solve the extended travelling salesman problem for robotic inspection. Wang and Zhang [22] presented an optimization method for modelling a planar parallel 3-DOF nanopositioner. Nickabadi et al. [23] successfully used the adaptive inertia weight factor as a feedback parameter to ascertain the situation of the particles in the search space. Mojarrad and Nayeripour [24] used a fuzzy adaptive PSO to solve nonconvex economic dispatch problems. Khan et al. [25] proposed a modified PSO algorithm to avoid premature convergence and strengthen robustness; this algorithm adaptively updates its parameters to keep the diversity of the swarm. However, the convergence speed of the inertia weight method is not very fast, so other methods were developed. Liang et al. [26] proposed a fuzzy multilevel algorithm-based PSO to optimize a support vector regression machine and realize a fuzzy multilevel drilling leak risk evaluation system. Tian et al. [27] utilized sigmoid-based acceleration coefficients to avoid premature convergence and entrapment in local optima when handling complex multimodal problems. Hsieh et al. [28] developed a discrete cooperative coevolving PSO algorithm to study the influence of detour distance constraints on carpooling performance. Zahara et al. [29, 30] put forward the hybrid Nelder-Mead-PSO algorithm for unconstrained optimization and then presented embedded constraint handling methods for dealing with constraints. Liu et al. [31] adopted a bottleneck objective learning strategy for many-objective optimization to improve convergence on all objectives. Xia et al. [32] proposed a triple-archive PSO to deal with selecting proper exemplars and designing an efficient learning model. Wang et al. [33] surveyed the evolutionary computation community on large-scale optimization.

The above investigations on the PSO algorithm indicate that some candidate points lie not inside the feasible region but outside it, even though these excluded points are closer to the boundary than points located in the feasible region; discarding them is therefore unreasonable. Based on the above research, a new methodology named the improved particle swarm optimization-based extended domain method (IPSO-EDM) is proposed to investigate engineering optimization.

In the following, Section 2 describes the basic theory of the IPSO-EDM, including the standard PSO, the improved PSO, and the algorithm principle. Section 3 gives an unconstrained optimization case study, four numerical constrained optimization case studies, and one engineering case study to verify the effectiveness of the IPSO-EDM. Section 4 gives the conclusions.

2. Improved Particle Swarm Optimization

2.1. Standard PSO

Assume that the particle swarm is composed of M particles in an N-dimensional space. The position of the ith particle at the kth iteration is expressed as X_i^k = (x_i1^k, x_i2^k, ..., x_iN^k), the flight velocity as V_i^k = (v_i1^k, v_i2^k, ..., v_iN^k), the local optimal position as P_i^k, and the global optimal position as P_g^k. Each particle updates its velocity and position according to

v_ij^(k+1) = v_ij^k + c1 r1 (p_ij^k − x_ij^k) + c2 r2 (p_gj^k − x_ij^k),
x_ij^(k+1) = x_ij^k + v_ij^(k+1),

where x_ij^k is the jth component of the ith particle; c1 and c2 are positive constants called learning factors, c1 adjusting the step length of a particle toward its own optimal position and c2 adjusting the step length toward the global optimal position; and r1 and r2 are independent random numbers that obey a uniform distribution on [0, 1].

To prevent particles from flying out of the search space during optimization, the velocity and position are limited as |v_ij| ≤ v_max and x_min ≤ x_ij ≤ x_max. Meanwhile, an inertia weight w is introduced. This method is termed the standard PSO algorithm, with update equations

v_ij^(k+1) = w v_ij^k + c1 r1 (p_ij^k − x_ij^k) + c2 r2 (p_gj^k − x_ij^k),
x_ij^(k+1) = x_ij^k + v_ij^(k+1).

The standard PSO does not place many requirements on the objective and constraint functions and can conduct constrained optimization when the constraints merely limit each variable to an interval, such as x_min ≤ x_j ≤ x_max. This differs from the gradient method, but the situation becomes more complicated when there are equality or inequality constraints in the optimization. One obvious reason is that the feasible region changes from a hypercube into a less regular region, and the variables are no longer independent of each other. For this reason, the standard PSO must be improved to deal with such constraints.
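As a concrete illustration of the update rules above, the following is a minimal sketch of one standard PSO step with inertia weight and velocity/position limits (the function name and parameter values are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0,
             v_max=1.0, x_min=-10.0, x_max=10.0, rng=None):
    """One iteration of the standard PSO update for a single particle.

    x, v         -- current position and velocity (arrays of length N)
    pbest, gbest -- personal-best and global-best positions
    w, c1, c2    -- inertia weight and learning factors (assumed values)
    """
    rng = np.random.default_rng() if rng is None else rng
    r1, r2 = rng.random(x.shape), rng.random(x.shape)  # uniform on [0, 1)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    v_new = np.clip(v_new, -v_max, v_max)    # velocity limit |v| <= v_max
    x_new = np.clip(x + v_new, x_min, x_max)  # position limit [x_min, x_max]
    return x_new, v_new
```

In a full optimizer this step would be applied to every particle each iteration, with `pbest` and `gbest` refreshed from the objective values.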

2.2. Improved PSO
2.2.1. Constraint Methods of PSO

At present, there are four typical methods to deal with constraints:
(1) Discriminant function method: inequality and equality constraints are used as discriminant functions to determine whether a search point is feasible during the optimization process. If a search point is not feasible, it is discarded or modified into a feasible point. This method therefore imposes strict restrictions on the search points, and it is very difficult to generate initial feasible points when the feasible region formed by the equality and inequality constraints is small.
(2) Penalty function method: the objective and constraint functions are combined to form a penalty function, so the original constrained optimization with equality and inequality constraints becomes an unconstrained optimization of the penalty function. The disadvantage of this method is that the penalty factor must be chosen correctly; otherwise, the optimal solution can hardly be obtained.
(3) Multiobjective optimization method: the objective and constraint functions are, respectively, treated as new optimization targets. However, solving a multiobjective optimization is in many cases more difficult than solving a single-objective one.
(4) “Competitive selection” method: feasible particles (whose design points satisfy all constraints) and infeasible particles (whose design points violate some or all constraints) are handled separately in PSO. However, it is not always appropriate to deem feasible particles superior to infeasible ones in competitive selection, and the infeasible region can be extended into a feasible region to find the optimal point.
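A minimal sketch of method (2), the penalty-function transform, assuming a quadratic penalty with a single factor `sigma` (the paper does not specify the penalty form, so this shape is an assumption):

```python
def penalized(f, gs, hs, sigma):
    """Build an unconstrained objective from a constrained problem.

    f     -- objective function
    gs    -- inequality constraints, each g(x) <= 0 when satisfied
    hs    -- equality constraints, each h(x) == 0 when satisfied
    sigma -- penalty factor (must be chosen carefully, as noted above)
    """
    def F(x):
        pen = sum(max(0.0, g(x)) ** 2 for g in gs)   # inequality violations
        pen += sum(h(x) ** 2 for h in hs)            # equality violations
        return f(x) + sigma * pen
    return F
```

A feasible point incurs no penalty, so F and f agree there; an infeasible point is pushed back toward the feasible region by the `sigma`-weighted term.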

2.2.2. Core Idea of the “Competitive Selection” Constraint

The “competitive selection” constraint can be summarized in three rules:
(1) Any feasible particle is superior to any infeasible particle.
(2) Between two feasible particles, the one with the better objective function value is selected.
(3) Between two infeasible particles, quality is judged by the degree of constraint violation: the smaller the violation, the better the particle.

In Figure 1(a), the unconstrained optimal point is in the feasible region, and the constraint has no effect on the whole optimization; essentially, the constrained optimal point coincides with the unconstrained optimal point. However, for some optimizations the unconstrained optimal point is not located in the feasible region, and the constrained optimum lies on the boundary. For example, in Figure 1(b), point B is the constrained optimal point, point C is in the feasible region, and point D is outside the feasible region. According to competitive selection, point C is superior to D. However, point D is closer to B than C is; thus, the optimal information provided by D is superior to that of C. As a result, labelling point D as inferior is not appropriate, so the algorithm must be improved. Figure 1(c) shows an improved method, which expands the original feasible region so that points such as D, which are infeasible but close to the feasible region and can provide better function information, are taken as feasible points. This methodology is termed the extended domain method (EDM).

In addition, in the update formula of standard PSO, the new velocity depends on the current velocity. However, the algorithm is stochastic, and the magnitude of the particle velocity can neither be predicted nor controlled. To solve this problem, a control variable is introduced in the extended domain, and the result is shown in Figure 2, where the marked quantities are, respectively, the optimal location, the current particle position, the optimal historical position of the particle swarm, the optimal historical position of the current particle, the optimal location of the current particle swarm, and the largest and smallest speeds.

Figure 2 shows that a particle with too high a speed may miss a better position, while a particle with a smaller speed improves its position but still does not reach the optimal one. Thus, the current particle speed should be controlled. One method is to use the current location information of the particle swarm to determine the speed: the difference between the current optimal positions of the particle swarm and of the particle is used to determine the current particle velocity. The results indicate that the position determined by this method is better than the other candidate positions. In fact, the constrained optimization is transformed into an unconstrained optimization by the EDM, so a scientific and reasonable unconstrained optimization method needs to be developed.

2.2.3. Unconstrained PSO

Firstly, in the standard PSO, the initial positions and velocities of the particle swarm are determined randomly. Generally, designers expect the particles to reflect the information of the search space well in the initial state. One direct way is to generate many particles that fill the whole search space, but this consumes substantial computational resources in subsequent iterations. Conversely, the PSO algorithm loses its character of group cooperation if the number of particles is too small. Usually, a particle swarm with dozens of particles can solve a complex optimization problem, but a random arrangement of particles often results in “clusters” in some areas. To distribute the particles relatively “evenly” in the search space, a uniform design method based on statistical theory is used to initialize the particle swarm. The random and uniform distributions of the initial particle swarm are shown in Figure 3.
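Section 2.2.3 adopts the Halton sequence for exactly this kind of uniform initialization; the following is a minimal sketch of Halton-based swarm initialization (the helper names and the velocity scaling are assumptions, not the paper's formulas):

```python
import numpy as np

def halton(index, base):
    """Radical-inverse (van der Corput) value of `index` in prime `base`."""
    f, r = 1.0, 0.0
    while index > 0:
        f /= base
        r += f * (index % base)
        index //= base
    return r

def init_swarm(m, primes, x_min, x_max, v_max):
    """Initialize m particles in len(primes) dimensions from a Halton set.

    Positions are scaled into [x_min, x_max]; velocities are scaled into
    [-v_max, v_max] from the same low-discrepancy point set (an assumption).
    """
    u = np.array([[halton(i + 1, p) for p in primes] for i in range(m)])
    x = x_min + u * (x_max - x_min)   # positions spread quasi-uniformly
    v = -v_max + 2.0 * v_max * u      # velocities within the speed limit
    return x, v
```

Unlike pseudo-random sampling, the Halton points avoid the “clusters” mentioned above because consecutive points fill gaps left by earlier ones.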

Secondly, the standard PSO algorithm itself cannot obtain global information about the objective function and easily falls into local optima. Generally, a particle can jump out of a local optimum and gain new global information by dynamically adjusting the inertia factor, but simply adjusting the inertia factor is not enough for complex multipeak functions. To handle this problem, the ergodicity of chaos optimization [22] is combined with evolutionary variation to realize global search. In this method, the logistic chaotic system equation is applied in the PSO algorithm; its iterates obey a Chebyshev distribution, as shown in Figure 4.

Figure 4 indicates that the logistic chaotic sequence is relatively uniform in the middle of its range, while the probability density near both ends is relatively large. This means that the chance of finding the global optimal point with the logistic chaotic sequence is reduced when the global optimum is not near the ends of the design variable's range. Therefore, it is necessary to find a chaotic sequence that not only maintains ergodicity but also has a uniform statistical distribution. In addition, an evolutionary variation strategy is developed: a mutation operator is introduced to perturb the design variables. This helps the particle swarm escape local optima, maintains its overall vitality, and prevents the swarm from falling into “precocity” in the early iterations. Based on the above, the ideas of uniform design, chaos optimization, and evolutionary variation are introduced into unconstrained PSO so that the improved method realizes both global and local search.

Firstly, the uniform design of the Halton sequence [34] is adopted to initialize the particle swarm. Assume that the particle position scope of the design variable is , whose component values of the ith particle are , and the position of the initialized particle swarm is expressed as

Similarly, the velocity for the component values of the particle is expressed as

In the process of particle swarm evolution, the points around the optimal position still need to be searched to obtain a better position once the optimal position of the particle swarm has been found. This is called the chaos optimization search method, based on the logistic map

z_(l+1) = 4 z_l (1 − z_l),  z_l ∈ (0, 1).     (5)

However, the chaotic sequence generated by equation (5) is not statistically uniform; therefore, it is transformed as

y_l = (2/π) arcsin(√z_l).     (6)

Equation (6) not only preserves the ergodicity of the chaotic sequence but also follows a uniform distribution on [0, 1]. The frequency curves and ergodic graphs of the uniform logistic chaotic sequence and the original logistic chaotic sequence are shown in Figures 5 and 6; the plotted chaotic sequences contain 10^4 points.
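A sketch of the chaotic sequence and its uniformizing transform, assuming the logistic map with control parameter 4 (whose invariant density is the Chebyshev/arcsine density) and the arcsine transform commonly used in chaos-PSO work:

```python
import math

def logistic_chaos(z0, n, mu=4.0):
    """Generate n iterates of the logistic map z_{l+1} = mu*z_l*(1 - z_l)."""
    seq, z = [], z0
    for _ in range(n):
        z = mu * z * (1.0 - z)
        seq.append(z)
    return seq

def to_uniform(z):
    """Arcsine transform y = (2/pi)*arcsin(sqrt(z)).

    Maps the logistic map's end-heavy invariant distribution to a
    statistically uniform one on [0, 1] while preserving ergodicity.
    """
    return (2.0 / math.pi) * math.asin(math.sqrt(z))
```

The transformed sequence keeps the gap-free coverage of the chaotic orbit but no longer over-samples the ends of the interval, which is exactly the property the text asks for.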

The optimal position of the particle swarm is denoted as after the kth evolution, and the ith component is denoted as ; then, the initial value of the chaotic sequence is expressed as

According to equations (5) and (6), the chaotic sequence is generated, and its coordinate value corresponding to the original space is obtained via reverse transformation, which is expressed aswhere l is the number of iterations of the chaotic sequence and k is the number of iterations.

However, chaos optimization alone can hardly make the particle swarm escape a local optimum, so evolutionary variation is used to help it jump out. The mutation operator plays the key role in evolutionary variation; the Gaussian operator and the Cauchy operator are the two most commonly used. The global search ability of the Cauchy mutation is stronger than that of the Gaussian mutation, but the Cauchy mutation may produce large step lengths, so its local search ability is weaker than that of the Gaussian mutation. Therefore, exploiting the characteristics of both, a “coarse tune” of the particle swarm is first obtained through the Cauchy mutation, and then a “fine tune” is obtained through the Gaussian mutation. In the mutation formula, the symbols denote the value of the kth design variable before and after mutation, a random number, the contraction coefficient, and the number of mutations, respectively.
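The two-stage coarse/fine mutation can be sketched as follows; the paper's exact mutation formula is not reproduced, so the step construction and the `shrink` contraction coefficient here are illustrative assumptions:

```python
import numpy as np

def mutate(x, shrink, stage, rng=None):
    """Two-stage mutation: Cauchy for coarse global moves, Gaussian for
    fine local moves. `shrink` plays the role of a contraction coefficient.
    """
    rng = np.random.default_rng() if rng is None else rng
    if stage == "coarse":
        step = rng.standard_cauchy(size=x.shape)  # heavy tails: long jumps
    else:
        step = rng.standard_normal(size=x.shape)  # light tails: local jitter
    return x + shrink * step
```

In a full algorithm, `shrink` would decrease as the number of mutations grows, and a mutated value falling outside the variable's interval would be re-mutated as described in the next paragraph.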

Because of the range limitation in the optimization process, the design variable value must not exceed its interval after mutation. If the mutated value is outside the definition domain, the variable is mutated again until a valid value is obtained. The maximum number of mutations is stipulated as q; if the mutated value still does not satisfy the value range after q attempts, it is judged by the absolute difference between the mutated value and each interval endpoint: the mutated value is set to the left endpoint if it is closer to the left endpoint, and to the right endpoint otherwise.

2.2.4. Algorithm Principle

Generally, the optimization model is described as

min f(x)
s.t. g_i(x) ≤ 0, i = 1, 2, ..., p,
     h_j(x) = 0, j = 1, 2, ..., q,     (10)

where f(x) is the objective function, g_i(x) ≤ 0 are the inequality constraint functions, and h_j(x) = 0 are the equality constraint functions.

To divide the particle swarm space into the feasible domain and the unfeasible domain, a constraint-conflict function is constructed, defined as

φ(x) = Σ_i max(0, g_i(x)) + Σ_j |h_j(x)|.     (11)

According to equation (11), an arbitrary feasible point satisfies φ(x) = 0, and an arbitrary unfeasible point satisfies φ(x) > 0; meanwhile, φ(x) also describes the degree of constraint violation of an unfeasible point.
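A common concrete form of such a constraint-conflict function, given here as a sketch (the tolerance `eps` used to relax the equality constraints is an assumption, since exact equality is rarely attainable numerically):

```python
def violation(x, gs, hs, eps=1e-6):
    """Constraint-conflict value: 0 for feasible points, positive and
    growing with the degree of violation for unfeasible points.

    gs -- inequality constraints g(x) <= 0
    hs -- equality constraints h(x) == 0, relaxed to |h(x)| <= eps
    """
    v = sum(max(0.0, g(x)) for g in gs)
    v += sum(max(0.0, abs(h(x)) - eps) for h in hs)
    return v
```

This scalar lets any two points be ranked by how badly they break the constraints, which is what the extended-domain comparison below needs.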

To control the size of the extended domain and the particle speed, a control variable ε is defined through the following four situations for any two given search points x1 and x2:
(1) x1 is superior to x2 when x1 and x2 are both in the extended domain, i.e., φ(x1) ≤ ε and φ(x2) ≤ ε, and f(x1) < f(x2)
(2) x1 is superior to x2 when both points are feasible, φ(x1) = φ(x2) = 0, and f(x1) < f(x2)
(3) x1 is superior to x2 when x1 and x2 are both outside the extended domain, i.e., φ(x1) > ε and φ(x2) > ε, and φ(x1) < φ(x2)
(4) x1 is superior to x2 when x1 is in the extended domain, φ(x1) ≤ ε, and x2 is outside the extended domain, φ(x2) > ε
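One plausible reading of these comparison rules, sketched in code (the tie-breaking details are assumptions where the original text is ambiguous):

```python
def better(x1, x2, f, phi, eps):
    """Return the superior of two points under extended-domain selection.

    f   -- objective function
    phi -- constraint-conflict function (0 for feasible points)
    eps -- extended-domain control variable: phi(x) <= eps counts as "in"
    """
    in1, in2 = phi(x1) <= eps, phi(x2) <= eps
    if in1 and in2:                         # both in the extended domain:
        return x1 if f(x1) <= f(x2) else x2  # compare objectives
    if in1 != in2:                          # only one inside:
        return x1 if in1 else x2             # the inside point wins
    return x1 if phi(x1) <= phi(x2) else x2  # both outside: less violation wins
```

Shrinking `eps` toward zero over the generations recovers ordinary feasibility-first selection, which matches the adaptive strategy for ε described next.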

The control variable ε of the extended domain changes gradually as the particle swarm evolves; the adjustment strategy is expressed in terms of the evolution generation k and the initial control variable of the extended domain, which is defined over the set composed of the initial particle swarm.

When evolves to the kth generation, stipulating , can be expressed as

The control variable can be expressed as

To make the particle swarm gather quickly toward the optimal point in this direction, its update strategy is written in terms of the optimal position of the particles in the current generation, a coefficient that controls the step length, and a term that determines the motion direction of the particle.

The following limit policy is adopted when the particle swarm crosses the border during updating: the out-of-range component is reset using a uniformly distributed random number and the locational average of the particle swarm.

By including points such as D in Figure 1(b), the EDM expands the original feasible region and provides preferable information, which is more reasonable.

3. Example

3.1. Unconstrained Optimization

Four test functions are investigated to verify the effectiveness of the chaotic methodology. The comparison of PSO, CPSO, and IPSO-EDM is shown in Tables 1–4. The mathematical models are as follows:
(1) Sphere function
(2) Rastrigin function
(3) Rosenbrock function
(4) Ackley function
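The four benchmarks are standard test functions; their usual definitions, which the elided equations presumably match, can be written as:

```python
import math

def sphere(x):
    """f = sum(x_i^2); global minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def rastrigin(x):
    """Highly multimodal; global minimum 0 at the origin."""
    return sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0
               for xi in x)

def rosenbrock(x):
    """Narrow curved valley; global minimum 0 at (1, ..., 1)."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))

def ackley(x):
    """Many shallow local minima; global minimum 0 at the origin."""
    n = len(x)
    s1 = sum(xi * xi for xi in x) / n
    s2 = sum(math.cos(2.0 * math.pi * xi) for xi in x) / n
    return -20.0 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) \
           + 20.0 + math.e
```

The Sphere function is unimodal, Rosenbrock is unimodal but ill-conditioned, and Rastrigin and Ackley are multimodal, which is why the latter pair stresses the global-search machinery most.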


Table 1: Comparison on the Sphere function.

|                   | PSO        |            | CPSO       |            | IPSO-EDM   |            |
|-------------------|------------|------------|------------|------------|------------|------------|
| Dimensionality    | N = 10     | N = 20     | N = 10     | N = 20     | N = 10     | N = 20     |
| Worst value       | 4.4369e−49 | 7.4023e−20 | 3.0672e−48 | 6.0385e−22 | 3.3696e−70 | 9.4993e−39 |
| Mean value        | 6.5256e−50 | 6.2447e−21 | 1.5934e−49 | 7.9355e−23 | 1.7190e−71 | 7.0559e−40 |
| Optimal value     | 1.1728e−53 | 1.5012e−24 | 1.5102e−54 | 2.1277e−27 | 0          | 0          |
| Standard variance | 1.1743e−49 | 1.6707e−20 | 6.8481e−49 | 1.5866e−22 | 7.5283e−71 | 2.2945e−39 |


Table 2: Comparison on the Rastrigin function.

|                   | PSO     |         | CPSO   |            | IPSO-EDM |        |
|-------------------|---------|---------|--------|------------|----------|--------|
| Dimensionality    | N = 10  | N = 20  | N = 10 | N = 20     | N = 10   | N = 20 |
| Worst value       | 5.9697  | 27.8588 | 0      | 1.8847e−10 | 0        | 0      |
| Mean value        | 3.0346  | 16.2676 | 0      | 9.4753e−12 | 0        | 0      |
| Optimal value     | 0       | 6.9647  | 0      | 0          | 0        | 0      |
| Standard variance | 1.5304  | 5.3168  | 0      | 4.2132e−11 | 0        | 0      |


Table 3: Comparison on the Rosenbrock function.

|                   | PSO     |          | CPSO    |         | IPSO-EDM  |           |
|-------------------|---------|----------|---------|---------|-----------|-----------|
| Dimensionality    | N = 10  | N = 20   | N = 10  | N = 20  | N = 10    | N = 20    |
| Worst value       | 22.1917 | 3.0299e3 | 12.2003 | 76.5662 | 2.9913    | 4.9399    |
| Mean value        | 4.3672  | 174.9661 | 2.4949  | 20.5855 | 0.2530    | 0.2822    |
| Optimal value     | 0.0976  | 4.7483   | 0.0014  | 0.1926  | 9.9854e−4 | 9.9068e−4 |
| Standard variance | 4.6971  | 672.3508 | 2.7330  | 26.6168 | 0.6919    | 1.0986    |


Table 4: Comparison on the Ackley function.

|                   | PSO        |            | CPSO       |            | IPSO-EDM   |            |
|-------------------|------------|------------|------------|------------|------------|------------|
| Dimensionality    | N = 10     | N = 20     | N = 10     | N = 20     | N = 10     | N = 20     |
| Worst value       | 2.6645e−15 | 1.0484e−10 | 2.6645e−15 | 1.0991e−11 | 6.2172e−15 | 1.3323e−14 |
| Mean value        | 2.6645e−15 | 3.9470e−11 | 2.6645e−15 | 1.8865e−12 | 2.8422e−15 | 7.1054e−15 |
| Optimal value     | 2.6645e−15 | 1.1324e−12 | 2.6645e−15 | 1.0925e−13 | 2.6645e−15 | 2.6645e−15 |
| Standard variance | 0          | 3.6101e−11 | 0          | 3.0165e−12 | 7.9441e−16 | 2.7938e−15 |

Note. The scale of the particle swarm is 50; the inertia factor and the learning factors vary over their prescribed intervals; the number of iterations is 2000; the number of chaotic iterations is 1000; and the maximum number of evolutionary variations allowed is 1000.

As seen in Tables 1–4, the worst value, mean value, optimal value, and standard variance in each dimension are calculated via PSO, CPSO, and IPSO-EDM, and the results obtained by the IPSO-EDM are the smallest in every case. Thus, the computational accuracy of the IPSO-EDM is the highest of the three methods.

3.2. Constrained Optimization
3.2.1. Numerical Case Studies

To test the effectiveness of the IPSO-EDM, three test case studies are investigated; the scale of the particle swarm is 50, and the number of iterations is 1000. The statistical results of the optimal function and the constraint conflict function are listed in Tables 5 and 6. The comparison with other methods is shown in Tables 7–10, where “N/A” means not available. The mathematical models are the benchmark problems G1, G2, and G3.


Table 5: Statistical results of the optimal function values.

| Test examples | Optimal value | Mean value | Worst value | Standard variance |
|---------------|---------------|------------|-------------|-------------------|
| G1            | −15           | −15        | −15         | 1.8724e−15        |
| G2            | −3.066e4      | −3.066e4   | −3.066e4    | 0.0019            |
| G3            | 5.1265e3      | 5.1482e3   | 5.1871e3    | 20.3178           |


Table 6: Statistical results of the constraint conflict function.

| Test examples | Optimal value | Mean value | Worst value | Standard variance |
|---------------|---------------|------------|-------------|-------------------|
| G1            | 0             | 4.8850e−16 | 2.6645e−15  | 1.0352e−15        |
| G2            | 0             | 1.7764e−15 | 1.4211e−14  | 4.5094e−15        |
| G3            | 0             | 8.8510e−4  | 0.0061      | 0.0020            |


Table 7: Comparison of the optimal values obtained by different methods.

| Test examples | HM [35]  | ASCHEA [36] | SR [37]  | IPSO-EDM | EDPSO [21] | MPSO [25] |
|---------------|----------|-------------|----------|----------|------------|-----------|
| G1            | −14.7886 | −15.0000    | −15.0000 | −15.0000 | −15.0000   | −14.9863  |
| G2            | −30665.5 | −30665.5    | −30665.5 | −30665.5 | −30665.5   | −30665.5  |
| G3            | N/A      | 5126.5      | 5126.4   | 5126.5   | 5126.5     | 5126.5    |


Table 8: Comparison of the mean values obtained by different methods.

| Test examples | HM [35]  | ASCHEA [36] | SR [37]  | IPSO-EDM | EDPSO [21] | MPSO [25] |
|---------------|----------|-------------|----------|----------|------------|-----------|
| G1            | −14.7082 | −14.8400    | −15.0000 | −15.0000 | −15.0000   | −14.9986  |
| G2            | −30665.3 | −30665.5    | −30665.5 | −30665.5 | −30665.5   | −30665.5  |
| G3            | N/A      | 5141.7      | 5128.9   | 5148.2   | 5148.1     | 5149.7    |


Table 9: Comparison of the worst values obtained by different methods.

| Test examples | HM [35]  | ASCHEA [36] | SR [37]  | IPSO-EDM | EDPSO [21] | MPSO [25] |
|---------------|----------|-------------|----------|----------|------------|-----------|
| G1            | −14.6154 | N/A         | −15.0000 | −15.0000 | −15.0000   | −14.9992  |
| G2            | −30645.9 | N/A         | −30665.5 | −30665.5 | −30665.5   | −30665.5  |
| G3            | N/A      | N/A         | 5142.472 | 5187.134 | 5148.1     | 5146.5    |


Table 10: Product of population size and algorithm cycle times, and the improved computational efficiency of the IPSO-EDM.

| Test examples | HM [35]   | ASCHEA [36] | SR [37] | IPSO-EDM | EDPSO [21] | MPSO [25] | η1 (%) | η2 (%) | η3 (%) | η4 (%) | η5 (%) |
|---------------|-----------|-------------|---------|----------|------------|-----------|--------|--------|--------|--------|--------|
| G1            | 1,400,000 | 1,500,000   | 350,000 | 50,000   | 60,000     | 90,000    | 96.42  | 96.67  | 85.71  | 16.67  | 44.44  |
| G2            | 1,400,000 | 1,500,000   | 350,000 | 50,000   | 60,000     | 90,000    | 96.42  | 96.67  | 85.71  | 16.67  | 44.44  |
| G3            | 1,400,000 | 1,500,000   | 350,000 | 50,000   | 60,000     | 90,000    | 96.42  | 96.67  | 85.71  | 16.67  | 44.44  |

Note. η1 is the improved computational efficiency of the IPSO-EDM compared with HM; η2, compared with ASCHEA; η3, compared with SR; η4, compared with EDPSO; and η5, compared with MPSO.

Three numerical case studies are used to verify the effectiveness of the proposed IPSO-EDM. As seen in Tables 7–10, the worst, mean, and best values are calculated via the IPSO-EDM and five well-known methods, i.e., HM, ASCHEA, SR, EDPSO, and MPSO. The investigation indicates that the computational accuracy of the IPSO-EDM is comparable to that of these methods. Furthermore, measured by the product of the population size and the number of algorithm cycles, the computational efficiency of the IPSO-EDM is significantly better than that of the other five methods: as seen in Table 10, this product is the smallest for the IPSO-EDM among all methods.

To further verify the effectiveness of the presented algorithm, two additional quantities are investigated for a performance function defined over the given design variable ranges.

The iteration processes of the two quantities calculated by HL, W-G, and IPSO-EDM are shown in Figure 7.

Figure 7 indicates that the value obtained by the standard HL method is very large, with a convergence value of 674.2829, whereas the convergence values of W-G and IPSO-EDM are 2.42e−2 and 5.14e−4, respectively; this manifests that the feasible points of HL are very few, i.e., its accuracy is very low. The curves obtained by W-G and IPSO-EDM are fairly close to each other, but the convergence value obtained via the IPSO-EDM (5.14e−4) is smaller than that obtained by W-G (2.42e−2), meaning the accuracy of the IPSO-EDM is higher. Meanwhile, the convergence value of the second quantity via HL is 1.1657, while those by W-G and IPSO-EDM are 2.22572 and 2.22599, respectively. Thus, the computational accuracy and efficiency of the IPSO-EDM are the best, and those of HL are the worst.

3.2.2. Engineering Case Study

Many researchers use the finite element method to study engineering structures [38] but do not optimize them. The proposed method can also be used in practical engineering; consider, for instance, the welded beam structure shown in Figure 8.

The optimization goal is to find four design variables that satisfy the constraints on shear stress, bending stress, buckling load of the welding rod, deflection, and the boundary conditions while minimizing the total manufacturing cost of the welded beam. The mathematical model is described as follows.

Note that .

The comparison results obtained by the IPSO-EDM and the other methods are shown in Table 11.


Table 11: Comparison of the optimal solutions for the welded beam (x1–x4 denote the four design variables; the last column is the minimum cost).

| Methods                 | x1       | x2       | x3       | x4       | Cost     |
|-------------------------|----------|----------|----------|----------|----------|
| Coello [39]             | 0.208800 | 3.420500 | 8.997500 | 0.210000 | 1.748309 |
| Coello and Montes [40]  | 0.205986 | 3.471328 | 9.020224 | 0.206480 | 1.728226 |
| Coello and Becerra [41] | 0.205700 | 3.470500 | 9.036600 | 0.205700 | 1.724852 |
| IPSO-EDM                | 0.205730 | 3.470489 | 9.036624 | 0.205730 | 1.724852 |
| CPSO [27]               | 0.205731 | 3.470582 | 9.036839 | 0.205680 | 1.724237 |

Statistical results of the different methods:

| Methods                 | Optimal value | Mean value | Worst value | Standard variance |
|-------------------------|---------------|------------|-------------|-------------------|
| Coello [39]             | 1.748309      | 1.771973   | 1.785835    | 0.001122          |
| Coello and Montes [40]  | 1.728226      | 1.792654   | 1.993408    | 0.074713          |
| Coello and Becerra [41] | 1.724852      | 1.971809   | 3.179709    | 0.443131          |
| IPSO-EDM                | 1.724852      | 1.738620   | 1.959606    | 0.048555          |
| CPSO [27]               | 1.724381      | 1.738810   | 1.958721    | 0.047891          |

The optimal results obtained by the IPSO-EDM are equivalent to those provided by the existing literature, which verifies the accuracy of this method, while the computational efficiency of the proposed IPSO-EDM is higher than that of the methods in the literature, as shown in Table 10.

4. Conclusions

The standard PSO algorithm is improved from the perspective of engineering application to solve engineering optimization problems.
(1) The original feasible region is expanded so that some points outside it that are closer to the constrained optimal point are included as feasible points; these provide preferable function information compared with the points in the original feasible region. The approach uses the current location information of the particles and the particle swarm to determine the particle speed: the difference between the current optimal positions of the particle swarm and of the particle determines the current particle velocity. The optimal locations obtained with the proposed method are better than those obtained with the standard PSO.
(2) The constrained optimization is transformed into an unconstrained optimization, and the ergodicity of chaos optimization is combined with evolutionary variation to realize global search. The logistic chaotic system equation is applied in the PSO algorithm, and a mutation operator is introduced in the evolutionary variation strategy to escape local optima and maintain the vitality of the particle swarm, which prevents the swarm from falling into “precocity” in the early iterations.
(3) An unconstrained optimization case study, four numerical case studies, and one engineering case study are used to verify the effectiveness of the IPSO-EDM. The worst, mean, and optimal values are calculated via the IPSO-EDM and compared with those of other methods. The investigation indicates that the computational accuracy of the IPSO-EDM is comparable to that reported in the existing literature, while its computational efficiency is significantly improved.
(4) Although the PSO method is improved and six case studies prove its effectiveness, the numerical case studies are relatively simple, the engineering case study involves few design variables, and only deterministic optimization is studied. Further studies will therefore focus on nondeterministic optimization with the IPSO-EDM, considering random variables of different distributions, including normal, exponential, and Weibull distributions, so that the proposed method can be extended to widespread engineering application fields and to the actual operation of machines.

Data Availability

The data used to support the findings of this study are currently under embargo, while the research findings are commercialized. Requests for data 6/12 months after publication of this article will be considered by the corresponding author.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

The authors gratefully acknowledge the financial support for this research from the National Key R&D Plan Project (Grant no. 2017YFB1301300), the National Natural Science Foundation of China (Grant nos. 11772011 and 11902220), and the National Natural Science Foundation of Hebei Province (Grant no. E2020202217).

References

  1. Y. M. Zhang, Z. X. Wen, H. Q. Pei, J. P. Wang, Z. W. Li, and Z. F. Yue, "Equivalent method of evaluating mechanical properties of perforated Ni-based single crystal plates using artificial neural networks," Computer Methods in Applied Mechanics and Engineering, vol. 360, Article ID 112725, 2020.
  2. A. L. Soubhia and A. L. Serpa, "Discrete optimization for positioning of actuators and sensors in vibration control using the simulated annealing method," Journal of the Brazilian Society of Mechanical Sciences and Engineering, vol. 42, no. 2, p. 101, 2020.
  3. H. Zhi and S. Y. Liu, "A hybrid GABC-GA algorithm for mechanical design optimization problems," Intelligent Automation and Soft Computing, vol. 25, no. 4, pp. 815–825, 2019.
  4. J. Martínez-Morales, H. Quej-Cosgaya, J. Lagunas-Jiménez, E. Palacios-Hernández, and J. Morales-Saldaña, "Design optimization of multilayer perceptron neural network by ant colony optimization applied to engine emissions data," Science China Technological Sciences, vol. 62, no. 6, pp. 1055–1064, 2019.
  5. M. Huang and Z. Liu, "Research on mechanical fault prediction method based on multifeature fusion of vibration sensing data," Sensors, vol. 20, no. 1, p. 6, 2020.
  6. J. P. Janet, L. Chan, and H. J. Kulik, "Accelerating chemical discovery with machine learning: simulated evolution of spin crossover complexes with an artificial neural network," The Journal of Physical Chemistry Letters, vol. 9, no. 5, pp. 1064–1071, 2018.
  7. F. Zhao, S. Qin, Y. Zhang, W. Ma, C. Zhang, and H. Song, "A two-stage differential biogeography-based optimization algorithm and its performance analysis," Expert Systems with Applications, vol. 115, pp. 329–345, 2019.
  8. E. Pena, S. M. Zhang, R. Patriat et al., "Multi-objective particle swarm optimization for postoperative deep brain stimulation targeting of subthalamic nucleus pathways," Journal of Neural Engineering, vol. 15, no. 6, Article ID 066020, 2018.
  9. E. Camci, D. R. Kripalani, L. L. Ma, E. Kayacan, and M. A. Khanesar, "An aerial robot for rice farm quality inspection with type-2 fuzzy neural networks tuned by particle swarm optimization-sliding mode control hybrid algorithm," Swarm and Evolutionary Computation, vol. 41, pp. 1–8, 2018.
  10. J.-Y. Jhang, C.-J. Lin, C.-T. Lin, and K.-Y. Young, "Navigation control of mobile robots using an interval type-2 fuzzy controller based on dynamic-group particle swarm optimization," International Journal of Control, Automation and Systems, vol. 16, no. 5, pp. 2446–2457, 2018.
  11. W. Tao, Z. Liu, P. Zhu, C. Zhu, and W. Chen, "Multi-scale design of three dimensional woven composite automobile fender using modified particle swarm optimization algorithm," Composite Structures, vol. 181, pp. 73–83, 2017.
  12. A. ElSaid, F. El Jamiy, J. Higgins, B. Wild, and T. Desell, "Optimizing long short-term memory recurrent neural networks using ant colony optimization to predict turbine engine vibration," Applied Soft Computing, vol. 73, pp. 969–991, 2018.
  13. Z. Cao, H. Guo, J. Zhang, D. Niyato, and U. Fastenrath, "Improving the efficiency of stochastic vehicle routing: a partial Lagrange multiplier method," IEEE Transactions on Vehicular Technology, vol. 65, no. 6, pp. 3993–4005, 2016.
  14. J.-G. Ahn, H.-I. Yang, and J.-G. Kim, "Multipoint constraints with Lagrange multiplier for system dynamics and its reduced-order modeling," AIAA Journal, vol. 58, no. 1, pp. 385–401, 2020.
  15. J. Liang, L. Dai, S. Chen et al., "Generalized inverse matrix-exterior penalty function (GIM-EPF) algorithm for data processing of multi-wavelength pyrometer (MWP)," Optics Express, vol. 26, no. 20, pp. 25706–25720, 2018.
  16. J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proceedings of the 1995 IEEE International Conference on Neural Networks, pp. 1942–1948, Perth, Australia, 1995.
  17. Z. Xue, H. Li, Y. Zhou, N. Ren, and W. Wen, "Analytical prediction and optimization of cogging torque in surface-mounted permanent magnet machines with modified particle swarm optimization," IEEE Transactions on Industrial Electronics, vol. 64, no. 12, pp. 9795–9805, 2017.
  18. H. Han, X. Wu, L. Zhang, Y. Tian, and J. Qiao, "Self-organizing RBF neural network using an adaptive gradient multiobjective particle swarm optimization," IEEE Transactions on Cybernetics, vol. 49, no. 1, pp. 69–82, 2019.
  19. J. Yi, X. Li, C.-H. Chu, and L. Gao, "Parallel chaotic local search enhanced harmony search algorithm for engineering design optimization," Journal of Intelligent Manufacturing, vol. 30, no. 1, pp. 405–428, 2019.
  20. J. B. Park, Y. W. Jeong, J. R. Shin, and K. Y. Lee, "An improved particle swarm optimization for nonconvex economic dispatch problems," IEEE Transactions on Power Systems, vol. 25, no. 1, pp. 156–166, 2010.
  21. M. D. Phung, C. H. Quach, T. H. Dinh, and Q. Ha, "Enhanced discrete particle swarm optimization path planning for UAV vision-based surface inspection," Automation in Construction, vol. 81, pp. 25–33, 2017.
  22. R. Wang and X. Zhang, "Optimal design of a planar parallel 3-DOF nanopositioner with multi-objective," Mechanism and Machine Theory, vol. 112, pp. 61–83, 2017.
  23. A. Nickabadi, M. M. Ebadzadeh, and R. Safabakhsh, "A novel particle swarm optimization algorithm with adaptive inertia weight," Applied Soft Computing, vol. 11, no. 4, pp. 3658–3670, 2011.
  24. H. D. Mojarrad and M. Nayeripour, "A new fuzzy adaptive particle swarm optimization for non-smooth economic dispatch," Energy, vol. 35, no. 4, pp. 1764–1778, 2010.
  25. S. Khan, M. Kamran, and O. U. Rehman, "A modified PSO algorithm with dynamic parameters for solving complex engineering design problem," International Journal of Computer Mathematics, vol. 95, no. 11, pp. 2308–2329, 2018.
  26. H. B. Liang, D. L. Zou, Z. L. Li, M. J. Khan, and Y. J. Lu, "Dynamic evaluation of drilling leakage risk based on fuzzy theory and PSO-SVR algorithm," Future Generation Computer Systems, vol. 95, pp. 454–466, 2019.
  27. D. P. Tian, X. F. Zhao, and Z. Z. Shi, "Chaotic particle swarm optimization with sigmoid-based acceleration coefficients for numerical function optimization," Swarm and Evolutionary Computation, vol. 51, Article ID 100573, 2019.
  28. F. S. Hsieh, F. M. Zhan, and Y. H. Guo, "A solution methodology for carpooling systems based on double auctions and cooperative coevolutionary particle swarms," Applied Intelligence, vol. 49, no. 2, pp. 741–763, 2019.
  29. S. K. S. Fan and E. Zahara, "A hybrid simplex search and particle swarm optimization for unconstrained optimization," European Journal of Operational Research, vol. 181, no. 2, pp. 527–548, 2007.
  30. E. Zahara and C.-H. Hu, "Solving constrained optimization problems with hybrid particle swarm optimization," Engineering Optimization, vol. 40, no. 11, pp. 1031–1049, 2008.
  31. X. F. Liu, Z. H. Zhan, Y. Gao, J. Zhang, S. Kwong, and J. Zhang, "Coevolutionary particle swarm optimization with bottleneck objective learning strategy for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 4, pp. 587–602, 2019.
  32. X. W. Xia, L. Gui, F. Yu et al., "Triple archives particle swarm optimization," IEEE Transactions on Cybernetics, pp. 1–14, 2019.
  33. Z.-J. Wang, Z.-H. Zhan, S. Kwong, H. Jin, and J. Zhang, "Adaptive granularity learning distributed particle swarm optimization for large-scale optimization," IEEE Transactions on Cybernetics, pp. 1–14, 2020.
  34. A. Hinrichs, F. Pillichshammer, and S. Tezuka, "Tractability properties of the weighted star discrepancy of the Halton sequence," Journal of Computational and Applied Mathematics, vol. 350, pp. 46–54, 2019.
  35. S. Koziel and Z. Michalewicz, "Evolutionary algorithms, homomorphous mappings, and constrained parameter optimization," Evolutionary Computation, vol. 7, no. 1, pp. 19–44, 1999.
  36. S. B. Hamida and M. Schoenauer, "ASCHEA: new results using adaptive segregational constraint handling," in Proceedings of the 2002 Congress on Evolutionary Computation, pp. 884–889, IEEE, Piscataway, NJ, USA, May 2002.
  37. T. P. Runarsson and X. Yao, "Stochastic ranking for constrained evolutionary optimization," IEEE Transactions on Evolutionary Computation, vol. 4, no. 3, pp. 284–294, 2000.
  38. B. Bai, H. Li, W. Zhang, and Y. C. Cui, "Application of extremum response surface method-based improved substructure component modal synthesis in mistuned turbine bladed disk," Journal of Sound and Vibration, vol. 472, Article ID 115210, 2020.
  39. C. A. C. Coello, "Use of a self-adaptive penalty approach for engineering optimization problems," Computers in Industry, vol. 41, no. 1, pp. 113–127, 2000.
  40. C. A. C. Coello and E. M. Montes, "Constraint-handling in genetic algorithms through the use of dominance-based tournament selection," Advanced Engineering Informatics, vol. 16, no. 1, pp. 193–203, 2002.
  41. C. A. C. Coello and R. L. Becerra, "Efficient evolutionary optimization through the use of a cultural algorithm," Engineering Optimization, vol. 36, no. 2, pp. 219–236, 2004.

Copyright © 2020 Bin Bai et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

