Research Article  Open Access
An Improved Multiobjective Particle Swarm Optimization Algorithm Using Minimum Distance of Point to Line
Abstract
In a multiobjective particle swarm optimization algorithm, the selection of the global best particle for each particle of the population from a set of Pareto optimal solutions has a significant impact on the convergence and diversity of solutions, especially when optimizing problems with a large number of objectives. In this paper, a new method for selecting the global best particle is introduced: minimum-distance-of-point-to-line multiobjective particle swarm optimization (MDPL-MOPSO). Using the basic concept of the minimum distance from a point to a line in the objective space, the global best particle can be selected from among the archive members. Different test functions were used to test MDPL-MOPSO and compare it with CD-MOPSO. The results show that the convergence and diversity of MDPL-MOPSO are better than those of CD-MOPSO. Finally, the proposed multiobjective particle swarm optimization algorithm is used for the Pareto optimal design of a five-degree-of-freedom vehicle vibration model, which resulted in numerous effective trade-offs among conflicting objectives, including seat acceleration, front tire velocity, rear tire velocity, relative displacement between the sprung mass and front tire, and relative displacement between the sprung mass and rear tire. The superiority of this work is demonstrated by comparing the obtained results with the literature.
1. Introduction
The dynamic behavior of a vehicle is critical for determining driving performance and ride comfort. It also influences the dynamic loads applied to both the road and the main axles, which can cause damage to road surfaces and early failure of chassis components. However, the system parameters that govern ride comfort, driving performance, and low dynamic loads usually conflict with one another. Therefore, optimization of these parameters has been actively studied. With the growth of computational capacity and computing technologies, Nariman-Zadeh et al. [1] and Mahmoodabadi et al. [2] performed two-objective and five-objective optimization using advanced evolutionary algorithms and particle swarm optimization, which resulted in numerous effective trade-offs between conflicting objectives, including seat acceleration, front tire velocity, rear tire velocity, relative displacement between the sprung mass and front tire, and relative displacement between the sprung mass and rear tire, for a five-degree-of-freedom vehicle vibration model. Weiping et al. [3] optimized an eight-degree-of-freedom model using fuzzy theory and a multipopulation genetic algorithm to determine a set of parameters that achieves the best driver's-seat performance. To reduce the dynamic load between the tires and the road and improve ride comfort, Pengmin et al. proposed modifying the suspension system parameters via a four-degree-of-freedom model [4]. Zhonglang and Pengmin [5] proposed improving ride comfort and dynamic load through a seven-degree-of-freedom model and suspension system adjustment, based on a specific 4 × 2 tractor case. Drehmer et al. [6] applied particle swarm optimization and sequential quadratic programming algorithms to obtain the optimal suspension parameters for different road profiles and vehicle velocity conditions.
Optimization is becoming an important tool in scientific research and engineering practice. A multiobjective optimization problem (MOP) is normally stated as the minimization of a vector of objectives and can be written in the following form: minimize F(x) = (f1(x), f2(x), ..., fm(x)), subject to x ∈ Ω.
Here, Ω is the variable (decision) space and x is the variable vector that needs to be determined. F consists of a set of m objective functions mapping into the objective space. The objectives generally conflict with one another during multiobjective optimization; that is, improving one objective typically worsens another. In this case, it is unlikely that a single solution optimizes all the objectives simultaneously. Instead, a trade-off must be found in which every objective reaches a balanced level; such solutions are the so-called Pareto optimal solutions. Multiobjective optimization therefore yields a set of optimal solutions instead of a single best solution. Since no one Pareto optimal solution is obviously better than another without additional preference information, it is necessary to find as many Pareto solutions as possible.
Multiobjective evolutionary algorithms (MOEAs) have been proposed to solve multiobjective problems, for instance, the nondominated sorting genetic algorithm II (NSGA-II) [7] by Deb et al., the Pareto archived evolution strategy (PAES) by Knowles and Corne [8], which approximates a nondominated front of the solution set, and the improved strength Pareto evolutionary algorithm (SPEA2) [9] by Zitzler et al. These approaches were inspired by natural evolution and population-based evolutionary algorithms.
To improve on previous approaches, particle swarm optimization (PSO) [10] was proposed by Kennedy and Eberhart; it is based on population evolutionary algorithms and motivated by social behavior, such as the food-searching behavior of bird flocks. The approach can be considered a distributed behavioral algorithm that performs (in its most general form) a multidimensional search. In the simulation, the behavior of each individual is affected by either the best local (i.e., within a certain neighborhood) or the best global individual. The approach uses the concept of a population and a measure of performance similar to the fitness value used in evolutionary algorithms. Additionally, the individual adjustments are analogous to the use of a crossover operator. However, this approach introduces the idea of flying potential solutions through hyperspace (used to accelerate convergence), which does not seem to have an analogous mechanism in traditional evolutionary algorithms. Another important difference is that PSO allows individuals to benefit from their past experiences, whereas in an evolutionary algorithm the current population is normally the only “memory” used by the individuals. PSO has been successfully used for both continuous nonlinear and discrete binary single-objective optimization. Particle swarm optimization seems particularly suitable for multiobjective optimization, primarily due to its high convergence rate in single-objective optimization.
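The canonical single-objective PSO update described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's algorithm: the function name and defaults are assumptions, with a linearly decreasing inertia weight and velocity clamping added for stability.

```python
import random

def pso_minimize(f, dim, bounds, swarm_size=30, iters=200,
                 w_max=0.9, w_min=0.4, c1=2.0, c2=2.0):
    """Minimal single-objective PSO with a linearly decreasing inertia weight.
    Each particle is pulled toward its personal best and the swarm's global best."""
    lo, hi = bounds
    v_max = 0.2 * (hi - lo)  # clamp velocities to 20% of the variable range
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm_size)]
    vel = [[0.0] * dim for _ in range(swarm_size)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(swarm_size), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / iters  # inertia decreases over time
        for i in range(swarm_size):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                v = (w * vel[i][d]
                     + c1 * r1 * (pbest[i][d] - pos[i][d])
                     + c2 * r2 * (gbest[d] - pos[i][d]))
                vel[i][d] = max(-v_max, min(v_max, v))
                pos[i][d] = max(lo, min(hi, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

For example, `pso_minimize(lambda x: sum(xi * xi for xi in x), dim=2, bounds=(-5.0, 5.0))` drives the swarm toward the minimum of the sphere function at the origin.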
In recent years, PSO has been investigated as a means of solving MOPs.
Ray and Liew [11] introduced a swarm metaphor for multiobjective design optimization. The algorithm employs a multilevel sieve to generate a set of leaders, a probabilistic crowding-radius-based strategy for leader selection (based on a roulette-wheel scheme that ensures a larger crowding radius has a higher probability of being selected as a leader), and a simple generational operator for information transfer.
Parsopoulos and Vrahatis [12] presented a study of the particle swarm optimization method for multiobjective optimization problems. This algorithm adopts an aggregating function, of which three variants were implemented: a conventional linear aggregating function, a dynamic aggregating function, and the bang-bang weighted aggregation approach. In this approach, all the objectives are summed into a weighted combination F(x) = w1 f1(x) + w2 f2(x) + ... + wm fm(x), where w1, ..., wm are nonnegative weights, usually assumed to satisfy w1 + ... + wm = 1. The multiobjective problem is thus scalarized into a single-objective problem before optimization.
Hu and Eberhart [13] proposed a dynamic neighborhood PSO for multiobjective optimization. In this algorithm, only one objective is optimized at a time: the relatively simple objective is treated as fixed, and the more difficult objective is optimized. The algorithm thus uses one-dimensional optimization to handle multiple objectives.
At that time, the Pareto dominance method had become increasingly popular, but none of the proposals to extend PSO to multiobjective optimization problems used a secondary population, which may limit algorithm performance. In more recent papers, however, this idea has been incorporated by other authors. The most representative proposals are the following.
Coello and Lechuga [14] first extended PSO to handle multiobjective optimization problems using a Pareto ranking scheme; a global attraction mechanism, combined with a historical archive of previously found nondominated vectors, motivates convergence toward globally nondominated solutions. Additionally, repository updates and global best guide selection are performed using a geographically based system (hypercubes) defined in terms of the objective function values of each individual, which produces well-distributed Pareto fronts. An improved version of the algorithm is presented in [15], with an added constraint-handling mechanism and a mutation operator that significantly improves the exploratory capability of the original algorithm while maintaining diversity.
Hu et al. [16] proposed an improved version of the algorithm in [13]. An extended memory is introduced to store global Pareto optimal solutions and reduce computation time.
Fieldsend and Singh [17] presented a MOPSO that uses an unconstrained archive. In this algorithm, a different data structure (called a dominated tree) for storing the elite particles facilitates the choice of a best local guide for each particle of the population.
Mostaghim and Teich [18] proposed a MOPSO that updates the external archive by Pareto dominance. An archive is domination-free if no two points in the archive dominate each other; obviously, while executing the archive update, dominated points must be deleted to keep the archive domination-free. Additionally, this work proposed the sigma method, in which a best local guide is adopted for each particle to improve the convergence and diversity of a PSO approach used for multiobjective optimization.
Li [19] proposed an approach in which the main mechanisms of NSGA-II [7] are adopted in a PSO algorithm. One mechanism is a nondominated sorting particle swarm optimizer that pushes the population toward the Pareto front. To maintain population diversity, a crowding distance assignment mechanism is proposed that sorts the nondominated solutions in the nonDomPSOList according to their crowding distance values and then randomly selects a global best guide for the i-th particle from a specific part (e.g., the top 5%) of the sorted nonDomPSOList.
The algorithm proposed by Raquel and Naval Jr. [20] extends PSO to multiobjective optimization problems by incorporating crowding distance computation into global best selection and into the deletion method for the external archive of nondominated solutions whenever the archive is full. The crowding distance mechanism, combined with a mutation operator, maintains the diversity of nondominated solutions in the external archive. In this algorithm, a bounded external archive stores nondominated solutions found in previous iterations. The global best guide of each particle is selected from among the nondominated solutions with the highest crowding distance values. Similar to [19], this method randomly selects a global best guide for the i-th particle from a specific part (e.g., the top 10%) of the sorted archive.
In this paper, a minimum-distance-of-point-to-line multiobjective particle swarm optimization (MDPL-MOPSO) algorithm is developed. The main differences between our approach and other proposals in the literature are as follows:
(1) This algorithm adopts a bounded archive mechanism to store the global Pareto optimal solutions. The global best guide of each particle in the population is selected from the archive.
(2) A minimum-distance-of-point-to-line method is used to find the global best guide of each particle in the population. The selection of the global best guide of the particle swarm is crucial in a multiobjective PSO algorithm: it affects the convergence capability of the algorithm and its ability to maintain an adequate spread of nondominated solutions. No other proposal uses this mechanism in the way it is adopted in this paper.
(3) The mutation operator of MOPSO was adapted because of the exploratory capability it provides: mutation is initially performed on the entire population, and its coverage is then rapidly decreased over time [14]. This helps prevent premature convergence caused by local Pareto fronts in some optimization problems.
(4) The algorithm is evaluated on representative test functions, with a performance comparison against the MOPSO algorithm [14], the CD-MOPSO algorithm [20], and the NSGA-II algorithm [7].
2. MDPL-MOPSO Algorithm
2.1. MDPL-MOPSO Implementation Procedure
(1) Particle swarm initialization:
(a) Set the swarm size M and the particle dimension D;
(b) Set the range of the variable values, [x_min, x_max];
(c) Set the particle speed limit, v_max;
(d) Initialize the swarm positions and speeds by creating particles at random within these bounds.
(2) Evolution parameter setting:
(a) Maximum number of iterations, T_max;
(b) Maximum and minimum inertia weights, w_max = 0.9 and w_min = 0.4;
(c) Learning factors c1 and c2, with c1 = c2 = 2.
(3) Evaluate the objective function values:
For i = 1 to M (M is the population size)
For j = 1 to L (L is the number of objective functions)
(a) Evaluate the j-th objective of particle i;
(b) Normalize the particle objective values.
(4) Personal best initialization: set the personal best of each particle to the best position it has found.
(5) Archive initialization: store the nondominated solutions found in the swarm in the archive.
(6) While the iteration limit has not been reached:
(a) Find the global best guide from the archive: for each nondominated particle in the archive, construct the line through the origin and that particle's point in the objective space, and compute the distance from the current population particle's objective point to that line. The archive particle whose line yields the minimum distance is selected as the global best guide of the population particle.
(b) Particle speed and position update:
v_i = w v_i + c1 r1 (pbest_i − x_i) + c2 r2 (gbest_i − x_i), x_i = x_i + v_i,
where w is the inertia weight, which decreases linearly from w_max to w_min with the iteration number, and r1 and r2 are random numbers in the range [0, 1]. If a particle extends beyond the boundaries, it is reintegrated by setting the decision variable equal to the value of its corresponding lower or upper boundary, and its velocity is multiplied by −1 so that it searches in the opposite direction.
(c) Perform mutation on the swarm.
(d) Evaluate the objective function values.
(e) Archive update: insert the new nondominated solutions of the swarm into the archive if they are not dominated by any of the stored solutions. All solutions in the archive dominated by a new solution are removed from the archive. If the archive is full, the solution to be replaced is determined according to the crowding distance values.
(f) Update the personal best solution of each particle in the swarm: if the current position dominates the position in memory, the particle's personal best is updated to the current position.
(g) Increment the iteration counter.
(7) Repeat until the iteration requirement is met.
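The Pareto dominance test and the archive update of step (6)(e) can be sketched as follows. This is a simplified, unbounded-archive sketch; the crowding-distance-based replacement used when the bounded archive is full is omitted, and the function names are illustrative.

```python
def dominates(u, v):
    """True if objective vector u Pareto-dominates v (minimization):
    u is no worse in every objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def update_archive(archive, candidate):
    """Insert candidate if no archive member dominates it,
    and drop any members the candidate dominates."""
    if any(dominates(m, candidate) for m in archive):
        return archive  # candidate is dominated: archive unchanged
    return [m for m in archive if not dominates(candidate, m)] + [candidate]
```

In the full algorithm this update runs once per new particle per iteration, so the archive always remains domination-free.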
The flowchart of the proposed MDPL-MOPSO algorithm is shown in Figure 1.
2.2. Finding the Global Best Guide
As mentioned previously, several important MOPSO methods exist [15, 16, 20], each of which suggests a way of finding the global best guide, mostly inspired by multiobjective evolutionary algorithms. In this section, we discuss some of these methods with their advantages and disadvantages, and introduce a new method for finding the global best guide.
2.2.1. Overview of the Method of Finding the Global Best Guide
(1) Handling Multiple Objectives with Particle Swarm Optimization [15]. In this method, the objective space is divided into hypercubes before selecting the global best guide for each particle. Next, a fitness value is assigned to each hypercube depending on the number of elite particles within it: the more elite particles a hypercube contains, the lower its fitness value. Then, roulette-wheel selection is applied over the hypercubes and one is selected. Finally, the global best guide is a particle chosen at random from the selected hypercube. The global best guide is therefore selected using the roulette-wheel method, which is a random selection, so it is possible that a particle does not select a suitable guide as its global guide.
(2) Multiobjective Optimization Using Dynamic Neighborhood Particle Swarm Optimization [16]. In this method, a dynamic neighborhood strategy is used. That paper showed that, for two-objective optimization, the global best guide for each particle can be found in the objective space. First, the distances between the particle and the other particles are calculated in terms of the first objective value, which is called the fixed objective. Then, the k local neighbors are found based on the calculated distances. The local optimum among these neighbors in terms of the second objective value is the global best guide for the particle. In this method, the fixed objective must be chosen with a priori knowledge of the objective functions, and one-dimensional optimization is used to handle multiple objectives. Therefore, selecting the global best guide depends on only one objective.
(3) An Effective Use of Crowding Distance in Multiobjective Particle Swarm Optimization (CD-MOPSO) [20]. In this method, a crowding distance strategy is used. The crowding distance value of a solution provides an estimate of the density of the solutions surrounding it. First, a bounded external archive stores the nondominated solutions found in previous iterations; these archive members serve as the candidate global best guides for the particles in the swarm. Next, the nondominated solutions in the archive are sorted by decreasing crowding distance. Finally, the global best guide of each particle is selected at random from a specified top part of the sorted archive, that is, from among the nondominated solutions with the highest crowding distance values. The global best guide is therefore again obtained by a random selection.
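The crowding distance assignment used by CD-MOPSO [20] (following the NSGA-II formulation) can be sketched as follows; the function name and normalization by each objective's span are illustrative choices.

```python
def crowding_distances(front):
    """front: list of objective vectors. Returns one crowding distance per
    solution; boundary solutions get infinity so they are always preferred."""
    n = len(front)
    if n == 0:
        return []
    m = len(front[0])
    dist = [0.0] * n
    for k in range(m):
        # sort solution indices by the k-th objective
        order = sorted(range(n), key=lambda i: front[i][k])
        dist[order[0]] = dist[order[-1]] = float("inf")
        span = front[order[-1]][k] - front[order[0]][k]
        if span == 0:
            continue
        # interior solutions accumulate the normalized gap between neighbors
        for j in range(1, n - 1):
            dist[order[j]] += (front[order[j + 1]][k] - front[order[j - 1]][k]) / span
    return dist
```

Solutions in sparse regions of the front receive larger values, which is why selecting guides with high crowding distance spreads the swarm along the front.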
2.2.2. Minimum Distance of Point to Line Method
To overcome the disadvantages of the aforementioned methods, MDPL-MOPSO is proposed to find the global best guide for each particle. First, the basic concept of the minimum distance of point to line (MDPL) is introduced. Then, this paper explains how this method determines the global best guide for each particle of the population in the objective space.
In the two-dimensional coordinate system, a straight line l can be determined by the origin O and any point A(x_A, y_A), as shown in Figure 2. Line l is defined as follows: y_A x − x_A y = 0.
Point B(x_B, y_B) is any point outside line l, and the distance d between point B and line l is defined as d = |y_A x_B − x_A y_B| / sqrt(x_A^2 + y_A^2).
Similarly, in the three-dimensional coordinate system, a straight line l can be drawn through the origin O and any point A(x_A, y_A, z_A), as shown in Figure 3. Line l is defined as follows: x / x_A = y / y_A = z / z_A.
Point B(x_B, y_B, z_B) is any point outside line l, and the distance d between point B and line l is d = |OB × OA| / |OA|, where × denotes the vector cross product.
Here, we use two-objective optimization as an example to illustrate how to find the global best guide.
Using the basic concept of the distance of a point to a line and considering the objective space, the global best guide among the archive members for a particle of a two-objective optimization population is found as follows.
First, for each nondominated particle in the archive, we draw the line through the origin and that particle's point in the two-objective space, whose coordinates are its two objective values.
Second, we calculate the distance from the point in the objective space corresponding to the population particle to each such archive line.
Finally, the archive particle whose line yields the minimum of these distances is selected.
In other words, each particle selects as its global best guide the archive member to whose line it has the minimum distance. Therefore, MDPL-MOPSO can determine the most appropriate guide as the global guide for each particle in the population.
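The guide selection rule above can be sketched as follows; this is a minimal sketch for minimization problems, and the function names are illustrative.

```python
import math

def select_global_guide(particle_obj, archive_objs):
    """Return the index of the archive member whose origin line is closest
    to the particle's point in objective space."""
    def dist_to_line(p, a):
        # distance from point p to the line through the origin and a
        dot = sum(pi * ai for pi, ai in zip(p, a))
        norm_a = math.sqrt(sum(ai * ai for ai in a))
        proj = dot / norm_a
        return math.sqrt(max(sum(pi * pi for pi in p) - proj * proj, 0.0))
    return min(range(len(archive_objs)),
               key=lambda j: dist_to_line(particle_obj, archive_objs[j]))
```

A particle sitting near the "direction" of a particular archive member in objective space is thus guided by that member, rather than by a randomly drawn one.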
As shown in Figure 4, four points of the Pareto optimal front in the two-objective space correspond to the nondominated particles in the archive, and their lines are obtained according to (14). The corresponding point of a population particle in the objective space is the point whose coordinates are its two objective values, and the distance from this point to each archive line is computed. The archive line yielding the minimum distance identifies the nondominated archive particle that is selected as the global best guide of the population particle.
Figure 5 shows how the algorithm determines the global best guide among the archive members for each particle of the population in a two-dimensional objective space. A particle is selected from the archive members as the global best guide because the distance between the population particle and that archive particle's line is minimal. This method can guide the population particles to move directly toward the Pareto optimal front.
3. Performance Evaluations
3.1. Testing Functions and Performance Targets
To evaluate the performance of the proposed MDPL-MOPSO, five representative test functions, denoted ZDT1, ZDT2, ZDT3, ZDT4, and ZDT6, are employed; they are detailed in Table 1. These functions were defined and used by Zitzler et al. [21] to demonstrate the effectiveness of their evolutionary algorithms in handling the common issues of multiobjective optimization: converging to the Pareto optimal front and maintaining diversity within the population.

In addition, convergence and diversity are the most significant criteria for algorithm performance. Generational distance (GD) is a measure of convergence, and spacing (S) is a measure of the uniformity of the distribution, that is, diversity [7].
(1) Generational distance (GD): GD measures how close the front obtained by a multiobjective optimization algorithm is to the true Pareto optimal front. First, a set of solutions is chosen from the true Pareto optimal front in the objective space. Then, for each solution obtained by the algorithm, the minimum Euclidean distance to the chosen solutions on the true front is computed. The convergence measure is the root of the sum of squares of these distances, averaged over the front:
GD = sqrt(sum over i of d_i^2) / n,
where n is the number of Pareto front solutions produced by the algorithm and d_i denotes the Euclidean distance between Pareto front member i and the nearest of the chosen solutions on the true Pareto optimal front. As shown in Figure 6, the shaded area represents the feasible solution area, the solid line is the true Pareto optimal front, the hollow points are the selected theoretical optimal solutions, the dashed line is the Pareto optimal front obtained by the multiobjective optimization algorithm, and the solid points are the Pareto optimal solutions of the archive. A smaller GD value indicates better convergence of the algorithm.
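The GD computation can be sketched as follows, assuming the true Pareto front is available as a finite sample of points (the function name is illustrative).

```python
import math

def generational_distance(front, true_front):
    """GD = sqrt(sum of squared nearest distances) / n, where each obtained
    solution is matched to its nearest sampled point on the true front."""
    dists = [min(math.dist(p, q) for q in true_front) for p in front]
    return math.sqrt(sum(d * d for d in dists)) / len(front)
```

When the obtained front lies exactly on the sampled true front, GD is zero; any deviation contributes quadratically through the nearest-point distances.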
(2) Spacing (S): S measures the distribution and diversity of the front produced by a multiobjective optimization algorithm; it is important to obtain a set of optimal solutions distributed throughout the entire Pareto front. As shown in Figure 7, the shaded areas represent the feasible solution areas, the dashed lines are the Pareto optimal front obtained by the multiobjective optimization algorithm, and the solid points are the Pareto optimal solutions in the external archive. The metric is defined as
S = sqrt((1 / (n − 1)) * sum over i of (d_mean − d_i)^2),
where n is the number of Pareto front solutions produced by the algorithm, d_i denotes the Euclidean distance (in the objective space) between member i of the obtained Pareto front and its nearest member in that front, and d_mean is the mean of the d_i. A smaller value of S implies a more uniform distribution of solutions along the Pareto optimal front.
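The spacing computation can be sketched as follows, as the standard deviation of nearest-neighbor distances within the obtained front (the function name is illustrative).

```python
import math

def spacing(front):
    """Spacing metric: standard deviation of nearest-neighbour distances
    within the obtained front (0 means a perfectly uniform spread)."""
    n = len(front)
    d = [min(math.dist(p, q) for j, q in enumerate(front) if j != i)
         for i, p in enumerate(front)]
    d_mean = sum(d) / n
    return math.sqrt(sum((d_mean - di) ** 2 for di in d) / (n - 1))
```

A front of evenly spaced points yields S = 0, while clusters and gaps both inflate the metric.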
3.2. Results and Discussion
To verify the performance of the proposed algorithm, a comparative analysis is provided against CD-MOPSO, which uses related mechanisms for archive update, maintenance, and selection. Through repeated test function simulations, the GD convergence and S diversity values are calculated and summarized as a mean value, indicating algorithm performance, and a variance, indicating stability.
Raquel and Naval Jr. [20] proposed the CD-MOPSO algorithm. In their setup, there are 30 particles in the swarm, the maximum number of iterations is 400, and the learning factor is 2. The MDPL-MOPSO algorithm uses the same parameters as the CD-MOPSO algorithm. In the following analysis, the results obtained by performing 30 independent tests of each algorithm are compared.
Figures 8–12 compare the Pareto fronts generated for the five test functions with the true Pareto fronts. Tables 2 and 3 report the convergence (GD) and diversity (S) values of the two algorithms for the five test functions; in each table, the first row of each entry is the mean value and the second row is the variance.
Figures 8–12: Pareto fronts obtained for the five test functions; in each figure, panel (a) shows MDPL-MOPSO and panel (b) shows CD-MOPSO.
As Figures 8–12 show, both MDPL-MOPSO and CD-MOPSO can cover the Pareto fronts of ZDT1, ZDT2, and ZDT4, while ZDT3 and ZDT6 are only partially covered. Compared with CD-MOPSO, MDPL-MOPSO is closer to the Pareto front on ZDT3, while CD-MOPSO performs better on ZDT6. This can also be observed from the data in Tables 2 and 3.
Table 2 reports the convergence metric GD of the test function optimal solutions, for which lower values indicate better convergence. The GD results of MDPL-MOPSO are lower than those of CD-MOPSO for functions ZDT1, ZDT2, ZDT3, and ZDT4, in both mean and variance, which indicates that the convergence and stability of MDPL-MOPSO are better than those of CD-MOPSO. However, since the GD mean value increases by 16% for function ZDT6, the convergence of MDPL-MOPSO is slightly worse than that of CD-MOPSO in that case.
Table 3 reports the diversity metric S for the test function optimal solutions; lower values of S indicate better diversity. The S results in Table 3 follow the same trend as the GD convergence results: the diversity of the MDPL-MOPSO algorithm is improved for ZDT1–ZDT4. For ZDT6, the diversity of MDPL-MOPSO is slightly worse than that of CD-MOPSO, as its mean S value increases by approximately 9%.
Therefore, it can be concluded that MDPL-MOPSO offers improved convergence and diversity performance compared with the CD-MOPSO algorithm.
4. Multiobjective Optimization Design of Vehicle Driving Dynamics
Nariman-Zadeh et al. [1] and Mahmoodabadi et al. [2] performed two-objective and five-objective optimization using advanced evolutionary algorithms and particle swarm optimization, which resulted in numerous effective trade-offs between conflicting objectives, including seat acceleration, front tire velocity, rear tire velocity, relative displacement between the sprung mass and front tire, and relative displacement between the sprung mass and rear tire, for a five-degree-of-freedom (five-DOF) vehicle vibration model.
In this paper, the improved multiobjective particle swarm optimization (MDPL-MOPSO) algorithm is used to optimize the five-DOF vehicle driving dynamics of [1, 2], and the results are compared with these references.
4.1. Five-Degree-of-Freedom Vehicle Model
A five-degree-of-freedom vehicle model with passive suspension, adopted from [2], is shown in Figure 13.
The fixed parameters and design variables of the vehicle are shown in Table 4.

The kinetic energy of the five-degree-of-freedom vehicle vibration model is:
The potential energy of the five-degree-of-freedom vehicle vibration model is:
The dissipation energy of the five-degree-of-freedom vehicle vibration model is:
Substituting (13)–(15) into the Lagrange motion equation yields the vibration differential equations:
Then, the vibration model of (17) can be written in matrix form for efficient solution: M q'' + C q' + K q = Kt q0, where M, C, and K represent the mass, damping, and stiffness matrices, Kt is the tire stiffness matrix, and q, q', and q'' are the displacement, velocity, and acceleration column vectors. q0 is the road excitation vector, which is shown in Figure 14.
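A second-order matrix equation of this form can be integrated numerically by updating velocities and then positions at each time step. The sketch below is illustrative only: it assumes a diagonal (lumped) mass matrix and a generic forcing term, not the paper's specific five-DOF matrices, and uses semi-implicit Euler for simplicity.

```python
def simulate(m_diag, C, K, force, q0, v0, dt, steps):
    """Semi-implicit Euler integration of M q'' + C q' + K q = F(t)
    for a lumped model with a diagonal mass matrix (illustrative sketch)."""
    n = len(q0)
    q, v = list(q0), list(v0)
    history = [q[:]]
    for step in range(steps):
        f = force(step * dt)
        # acceleration from the equation of motion, row by row
        acc = [(f[i]
                - sum(C[i][j] * v[j] for j in range(n))
                - sum(K[i][j] * q[j] for j in range(n))) / m_diag[i]
               for i in range(n)]
        v = [v[i] + dt * acc[i] for i in range(n)]  # update velocity first...
        q = [q[i] + dt * v[i] for i in range(n)]    # ...then position
        history.append(q[:])
    return history
```

For example, a single damped mass-spring (m = 1, c = 0.5, k = 4) released from unit displacement, `simulate([1.0], [[0.5]], [[4.0]], lambda t: [0.0], [1.0], [0.0], 0.01, 2000)`, produces a decaying oscillation, which is the qualitative behavior expected of each suspension coordinate.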
It is assumed that the vehicle moves at a constant velocity of v = 20 m/s. It is further assumed that the rear tire follows the same trajectory as the front tire with a time delay determined by the wheelbase and the vehicle velocity.
4.2. Design Variables and Objective Functions
4.2.1. Design Variables
To optimize the suspension system, the corresponding stiffness and damping parameters are used as the design variables. Their bounds are defined in Table 4.
4.2.2. Objective Functions
The five conflicting objectives of the five-DOF vehicle vibration model are seat acceleration, front tire velocity, rear tire velocity, relative displacement between the sprung mass and front tire, and relative displacement between the sprung mass and rear tire. Four of the ten possible pairs of the five objectives are considered in separate two-objective optimization processes; the chosen pairs are those presented in Figures 15 and 16. All the objective functions must be minimized simultaneously.
4.3. Multiobjective Optimization Result Analysis
The Pareto fronts for the vibration system obtained using MDPL-MOPSO are shown in Figures 15 and 16. Obviously, the two objectives in each case influence each other: as one objective improves, the other worsens. The two end points demonstrate the two extreme cases of these conflicting objectives, and the points between these extremes are nondominated optimum points, any of which can be selected as a design point. In other words, any other set of objective values would correspond to a point inferior to the corresponding Pareto front.
Figures 15 and 16: in each figure, panel (a) shows the front involving the forward tire and panel (b) the front involving the rear tire.
Figure 15 shows the Pareto fronts of seat acceleration versus forward tire velocity, representing different nondominated optimum points with respect to these conflicting objectives. In this figure, points A1 and A2 and points B1 and B2 represent the best seat acceleration and the best forward tire velocity, respectively. Note that all the optimum design points in these Pareto fronts are nondominated and could be chosen by a designer. In an actual engineering optimization problem, the engineer chooses a trade-off solution according to the actual requirements. In Figure 15, points C1 and C2 are such trade-off design points.
Nondominated Pareto fronts for the other chosen sets of objective functions are shown in Figure 16. Design points A3 and A4 represent the best seat acceleration, while points B3 and B4 represent the best values of the remaining objectives. Similarly, C3 and C4 are selected as the trade-off design points.
The corresponding values of the objective functions and design variables at these optimum design points are provided in Tables 5–8. In these tables, points C1, C2, C3, and C4 are the trade-off design points obtained in this paper, and points C1′, C2′, C3′, and C4′ represent the optimum design points reported in [1, 2]. It is evident that those reference design points are significantly dominated by the proposed trade-off design points.




The time histories of the seat acceleration at the trade-off design points of these figures and at the optimum points proposed in [1, 2] are shown for comparison in Figures 17–20. It is obvious from these figures that the seat acceleration for the design points obtained in this paper is better than that for the design points given in [1, 2].
5. Conclusions
In this work, an improved multiobjective particle swarm optimization algorithm was introduced. A new method was proposed to determine the global best guide of each population particle based on the basic concept of the minimum distance of point to line (MDPL). To assess the optimization performance of the proposed algorithm, five two-objective test functions were used, and the results were compared with those of the CD-MOPSO algorithm. The results indicate that the improved multiobjective particle swarm optimization algorithm is a successful method. Moreover, MDPL-MOPSO was used to optimally design a five-degree-of-freedom vehicle vibration model, which resulted in numerous effective trade-offs between conflicting objectives, including seat acceleration, front tire velocity, rear tire velocity, relative displacement between the sprung mass and front tire, and relative displacement between the sprung mass and rear tire. A comparison of the obtained results with the literature demonstrates the superiority of this work.
Appendix
Matrices of Model
Matrices M, C, and K represent the mass, damping, and stiffness matrices, respectively, and K_t is the tire stiffness matrix of the five-degree-of-freedom vehicle vibration model.
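Although the individual matrix entries are omitted here, the governing equations implied by these matrices take the standard linear vibration form (a sketch, assuming the road excitation enters through the tire stiffness matrix):

```latex
M\ddot{\mathbf{x}} + C\dot{\mathbf{x}} + K\mathbf{x} = K_t\,\mathbf{x}_r(t),
```

where \(\mathbf{x}\) collects the five generalized coordinates of the model and \(\mathbf{x}_r(t)\) denotes the road profile input.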
Conflicts of Interest
The authors declare that they have no conflicts of interest.
References
[1] N. Nariman-Zadeh, M. Salehpour, A. Jamali, and E. Haghgoo, “Pareto optimization of a five-degree of freedom vehicle vibration model using a multi-objective uniform-diversity genetic algorithm (MUGA),” Engineering Applications of Artificial Intelligence, vol. 23, no. 4, pp. 543–551, 2010.
[2] M. J. Mahmoodabadi, A. Adljooy Safaie, A. Bagheri, and N. Nariman-Zadeh, “A novel combination of particle swarm optimization and genetic algorithm for Pareto optimal design of a five-degree of freedom vehicle vibration model,” Applied Soft Computing, vol. 13, no. 5, pp. 2577–2591, 2013.
[3] L. Weiping, W. Lei, Z. Baozhen et al., “Optimizing vehicle ride comfort based on uncertainty theory and fuzzy theory,” Mechanical Science and Technology for Aerospace Engineering, vol. 32, no. 5, pp. 636–640, 2013.
[4] L. Pengmin, H. Limei, and Y. Jinmin, “Optimization of vehicle suspension parameters based on comfort and tyre dynamic load,” China Journal of Highway and Transport, vol. 20, no. 1, pp. 112–117, 2007.
[5] Z. Zhonglang and L. Pengmin, “Optimization method of suspension parameters for articulated vehicle based on ride comfort and road-friendliness,” Journal of Traffic and Transportation Engineering, vol. 9, no. 5, pp. 49–54, 2005.
[6] L. R. C. Drehmer, W. J. P. Casas, and H. M. Gomes, “Parameters optimisation of a vehicle suspension system using a particle swarm optimisation algorithm,” Vehicle System Dynamics, vol. 53, no. 4, pp. 449–474, 2015.
[7] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm: NSGA-II,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002.
[8] J. D. Knowles and D. W. Corne, “Approximating the nondominated front using the Pareto archived evolution strategy,” Evolutionary Computation, vol. 8, no. 2, pp. 149–172, 2000.
[9] E. Zitzler, M. Laumanns, and L. Thiele, “SPEA2: improving the strength Pareto evolutionary algorithm,” in Evolutionary Methods for Design, Optimization and Control with Applications to Industrial Problems, pp. 95–100, Springer, Berlin, Germany, 2002.
[10] J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks (ICNN '95), vol. 4, pp. 1942–1948, 1995.
[11] T. Ray and K. M. Liew, “A swarm metaphor for multiobjective design optimization,” Engineering Optimization, vol. 34, no. 2, pp. 141–153, 2002.
[12] K. E. Parsopoulos and M. N. Vrahatis, “Particle swarm optimization method in multiobjective problems,” Frontiers in Artificial Intelligence & Applications, vol. 76, no. 1, pp. 214–220, 2002.
[13] X. Hu and R. Eberhart, “Multiobjective optimization using dynamic neighbourhood particle swarm optimization,” in Proceedings of the IEEE International Conference on Evolutionary Computation, vol. 2, pp. 1677–1681, 2002.
[14] C. A. C. Coello and M. S. Lechuga, “MOPSO: a proposal for multiple objective particle swarm optimization,” in Proceedings of the Congress on Evolutionary Computation (CEC '02), pp. 1051–1056, May 2002.
[15] C. A. Coello Coello, G. T. Pulido, and M. S. Lechuga, “Handling multiple objectives with particle swarm optimization,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 256–279, 2004.
[16] X. Hu, R. C. Eberhart, and Y. Shi, “Particle swarm with extended memory for multiobjective optimization,” in Proceedings of the IEEE Swarm Intelligence Symposium (SIS '03), pp. 193–197, April 2003.
[17] J. E. Fieldsend and S. Singh, “A multiobjective algorithm based upon particle swarm optimisation, an efficient data structure and turbulence,” in Proceedings of the 2002 U.K. Workshop on Computational Intelligence, pp. 37–44, Birmingham, UK, 2002.
[18] S. Mostaghim and J. Teich, “Strategies for finding good local guides in multiobjective particle swarm optimization (MOPSO),” in Proceedings of the IEEE Swarm Intelligence Symposium (SIS '03), pp. 26–33, Indianapolis, Ind, USA, April 2003.
[19] X. Li, “A nondominated sorting particle swarm optimizer for multiobjective optimization,” Lecture Notes in Computer Science, vol. 2723, pp. 37–48, 2003.
[20] C. R. Raquel and P. C. Naval Jr., “An effective use of crowding distance in multiobjective particle swarm optimization,” in Proceedings of the 7th Annual Conference on Genetic and Evolutionary Computation, pp. 257–264, ACM, June 2005.
[21] E. Zitzler, K. Deb, and L. Thiele, “Comparison of multiobjective evolutionary algorithms: empirical results,” Evolutionary Computation, vol. 8, no. 2, pp. 173–195, 2000.
Copyright
Copyright © 2017 Zhengwu Fan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.