Shock and Vibration

Volume 2017, Article ID 8204867, 16 pages

https://doi.org/10.1155/2017/8204867

## An Improved Multiobjective Particle Swarm Optimization Algorithm Using Minimum Distance of Point to Line

^{1}Department of Vehicle Engineering, Taiyuan University of Technology, Shanxi, China
^{2}Department of Mechanical Engineering, Taiyuan University of Science and Technology, Shanxi, China
^{3}Centre for Efficiency and Performance Engineering, University of Huddersfield, Queensgate, Huddersfield HD1 3DH, UK

Correspondence should be addressed to Tie Wang; wangtie57@163.com

Received 12 February 2017; Revised 9 May 2017; Accepted 15 May 2017; Published 25 September 2017

Academic Editor: Toshiaki Natsuki

Copyright © 2017 Zhengwu Fan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

In a multiobjective particle swarm optimization algorithm, the selection of the global best particle for each particle of the population from a set of Pareto optimal solutions has a significant impact on the convergence and diversity of solutions, especially when optimizing problems with a large number of objectives. In this paper, a new method for selecting the global best particle is introduced, called minimum distance of point to line multiobjective particle swarm optimization (MDPL-MOPSO). Using the basic concept of the minimum distance of a point to a line, the global best particle can be selected from among the archive members. Different test functions were used to test MDPL-MOPSO and compare it with CD-MOPSO. The results show that the convergence and diversity of MDPL-MOPSO are better than those of CD-MOPSO. Finally, the proposed multiobjective particle swarm optimization algorithm is applied to the Pareto optimal design of a five-degree-of-freedom vehicle vibration model, which results in numerous effective trade-offs among conflicting objectives, including seat acceleration, front tire velocity, rear tire velocity, relative displacement between sprung mass and front tire, and relative displacement between sprung mass and rear tire. The superiority of this work is demonstrated by comparing the obtained results with the literature.

#### 1. Introduction

The dynamic behavior of a vehicle is critical for determining driving performance and ride comfort. It also influences the dynamic loads applied to both the road and main axles, which can cause damage to road surfaces and early failure of chassis components. However, the system parameters that favor the main objectives of ride comfort, driving performance, and low dynamic loads usually conflict with one another. Therefore, optimization of these parameters has been actively studied. With the development of computational capacity and computing technologies, Nariman-Zadeh et al. [1] and Mahmoodabadi et al. [2] performed two-objective and five-objective optimization using advanced evolutionary algorithms and particle swarm optimization, which resulted in numerous effective trade-offs between conflicting objectives, including seat acceleration, front tire velocity, rear tire velocity, relative displacement between sprung mass and front tire, and relative displacement between sprung mass and rear tire, for a five-degree-of-freedom vehicle vibration model. Weiping et al. [3] presented the optimization of an eight-degree-of-freedom model using fuzzy theory and a multipopulation genetic algorithm to determine a set of parameters that achieves the best driver’s seat performance. To reduce the dynamic load between the tires and road and improve ride comfort, Pengmin et al. [4] proposed a modification of the suspension system parameters via a four-degree-of-freedom model. Zhonglang and Pengmin [5] proposed improving ride comfort and dynamic load through a seven-degree-of-freedom model and suspension system adjustment based on a specific 4 × 2 tractor case. Drehmer et al. [6] applied particle swarm optimization and sequential quadratic programming algorithms to obtain the optimal suspension parameters for different road profiles and vehicle velocity conditions.

Optimization is becoming an important research tool in scientific research and engineering practice. A multiobjective optimization problem (MOP) is normally stated as the minimization of a vector of objectives and can be written in the form of the following equation:

$$\min F(x) = \big(f_1(x), f_2(x), \ldots, f_L(x)\big), \quad \text{subject to } x \in \Omega.$$

Here, $\Omega$ is the variable space and $x$ is the variable vector that needs to be determined. $F$ consists of a set of $L$ objective functions mapping $\Omega$ into the objective space. The objectives generally conflict with one another during multiobjective optimization; that is, making one of them best typically makes another worse. In this case, no single solution can optimize all the objectives at once. Instead, trade-off solutions must be sought in which every objective reaches a balanced level; these are the so-called Pareto optimal solutions. Multiobjective optimization therefore requires finding a set of optimal solutions instead of a single best solution. Because no one Pareto optimal solution is obviously better than another in the absence of further preference information, it is necessary to determine as many Pareto solutions as possible.
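The Pareto dominance relation that underlies this definition can be sketched in a few lines of Python; the names `dominates` and `pareto_front` are ours, and minimization is assumed, matching the problem statement above:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Filter a list of objective vectors down to its nondominated subset.
    Note dominates(p, p) is False, so each point survives its self-comparison."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

For example, among the vectors (1, 3), (2, 2), (3, 1), and (2, 3), the first three are mutually nondominated, while (2, 3) is dominated by (1, 3).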

Multiobjective evolutionary algorithms (MOEAs) have been proposed to solve multiobjective problems, for instance, the nondominated sorting genetic algorithm II (NSGA-II) by Deb et al. [7], the Pareto archived evolution strategy (PAES) by Knowles and Corne [8], which approximates a nondominated front of the solution set, and the Improved Strength Pareto Evolutionary Algorithm (SPEA2) by Zitzler et al. [9]. These algorithms are population-based and were inspired by natural evolution.

To improve on such approaches, particle swarm optimization (PSO) [10] was proposed by Kennedy and Eberhart; it is a population-based algorithm motivated by social behavior, such as the food searching of a bird flock. The approach can be considered a distributed behavioral algorithm that performs (in a more general form) a multidimensional search. In the simulation, the behavior of each individual is affected by either the best local (i.e., within a certain neighborhood) or the best global individual. The approach uses the concept of population and a measure of performance similar to the fitness value used with evolutionary algorithms. Additionally, individual adjustments are analogous to the use of a crossover operator. However, this approach introduces the flight of potential solutions through hyperspace (used to accelerate convergence), which does not seem to have an analogous mechanism in traditional evolutionary algorithms. Another important difference is that PSO allows individuals to benefit from their past experiences, whereas in an evolutionary algorithm, the current population is normally the only “memory” used by the individuals. PSO has been successfully used for both continuous nonlinear and discrete binary single-objective optimization. It seems particularly suitable for multiobjective optimization, primarily because of its high convergence rate in single-objective optimization.

In recent years, PSO has been investigated to solve MOP problems.

Ray and Liew [11] introduced a swarm metaphor for multiobjective design optimization. The algorithm employs a multilevel sieve to generate a set of leaders, a probabilistic crowding radius-based strategy (based on a roulette-wheel scheme that ensures a larger crowding radius has a higher probability of being selected as a leader) for leader selection and a simple generational operator for information transfer.

Parsopoulos and Vrahatis [12] presented a study of the particle swarm optimization method on multiobjective optimization problems. Their algorithm adopts an aggregating function, of which three variants were implemented: a conventional linear aggregating function, a dynamic aggregating function, and the bang-bang weighted aggregation approach. In this approach, all the objectives are summed into a weighted combination $F(x) = \sum_{i=1}^{L} w_i f_i(x)$, where the $w_i$ are nonnegative weights, usually assumed to satisfy $\sum_{i=1}^{L} w_i = 1$. The multiobjective problem is thus scaled down to a single-objective problem before optimization.
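The weighted aggregation just described reduces to a one-line scalarization; the function name `weighted_aggregate` is ours, and the weights are assumed nonnegative and normalized to sum to 1:

```python
def weighted_aggregate(objectives, weights):
    """Collapse a multiobjective vector into one scalar fitness:
    F(x) = sum_i w_i * f_i(x), with nonnegative weights summing to 1."""
    assert all(w >= 0 for w in weights) and abs(sum(weights) - 1.0) < 1e-9
    return sum(w * f for w, f in zip(weights, objectives))
```

Linear aggregation is simple but, with fixed weights, a single run yields only one point on the Pareto front, which is why the dynamic and bang-bang variants vary the weights during the search.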

Hu and Eberhart [13] proposed a dynamic neighborhood PSO for multiobjective design optimization. In this algorithm, only one objective is optimized at a time. The relatively simple objective function is fixed, and the difficult function is optimized. This algorithm adopts one-dimensional optimization to handle multiple objectives.

Currently, Pareto-dominance-based methods have become increasingly popular, but early proposals to extend PSO to multiobjective optimization did not use a secondary population, which may limit algorithm performance. In more recent papers, this idea has been incorporated by other authors. The most representative proposals are the following.

Coello and Lechuga [14] first extended PSO to handle multiobjective optimization problems by using a Pareto ranking scheme, in which a global attraction mechanism combined with a historical archive of previously found nondominated vectors motivates convergence toward globally nondominated solutions. Additionally, repository updates and global best guide selection are performed using a geographically based system (hypercubes) defined in terms of the objective function values of each individual, which produces well-distributed Pareto fronts. The literature [15] includes an improved version of the algorithm [14], which adds a constraint-handling mechanism and a mutation operator that significantly improves the exploratory capabilities of the original algorithm and helps maintain diversity.

Hu et al. [16] proposed an improved version of the algorithm in [13]. An *extended memory* is introduced to store global Pareto optimal solutions and reduce computation time.

Fieldsend and Singh [17] presented a MOPSO that uses an unconstrained archive. In this algorithm, a different data structure (called a dominated tree) for storing the elite particles facilitates the choice of a best local guide for each particle of the population.

Mostaghim and Teich [18] proposed a MOPSO that updates the external archive by *Pareto dominance*. An archive is domination-free if no two points in the archive dominate each other. Obviously, while executing the update function, dominated points must be deleted to maintain a domination-free archive. Additionally, the authors proposed a sigma method in which a best local guide for each particle is chosen so as to improve the convergence and diversity of a PSO approach used for multiobjective optimization.
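The domination-free archive update described above can be sketched as follows. The names are illustrative, minimization is assumed, and the sketch omits the archive-size bound and crowding-based replacement that bounded-archive variants add:

```python
def dominates(a, b):
    """Minimization Pareto dominance: a no worse everywhere, better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Keep the external archive domination-free: reject a candidate that is
    dominated by any member; otherwise insert it and drop every member it
    dominates."""
    if any(dominates(m, candidate) for m in archive):
        return archive  # candidate rejected, archive unchanged
    return [m for m in archive if not dominates(candidate, m)] + [candidate]
```

By construction, no two points in the returned archive dominate each other, which is exactly the domination-free invariant.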

Li [19] proposed an approach in which the main mechanisms of NSGA-II [7] are adopted in a PSO algorithm. The resulting Nondominated Sorting Particle Swarm Optimizer uses nondominated sorting to push the population toward the Pareto front. To maintain population diversity, a crowding distance assignment mechanism is used that sorts the nondominated solutions in the nonDomPSOList according to crowding distance values and then randomly selects a global best guide for the $i$th particle from a specific part (e.g., the top 5%) of the sorted nonDomPSOList.

The algorithm proposed by Raquel and Naval Jr. [20] extends PSO to multiobjective optimization by incorporating crowding distance computation into global best selection and into the deletion method of the external archive of nondominated solutions whenever the archive is full. The crowding distance mechanism, combined with a mutation operator, maintains the diversity of the nondominated solutions in the external archive. In this algorithm, a bounded external archive stores the nondominated solutions found in previous iterations, and the global best guide of each particle is selected from among the nondominated solutions with the highest crowding distance values. Similar to [19], this method randomly selects a global best guide for each particle from a specific part (e.g., the top 10%) of the sorted archive.
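The crowding distance computation and the top-fraction guide selection just described can be sketched as follows. All names are ours; `top_fraction` corresponds to the "top 10%" slice of the sorted archive, and boundary solutions receive infinite distance as in the standard NSGA-II formulation:

```python
import random

def crowding_distance(front):
    """Per-solution crowding distance over a list of objective vectors:
    boundary solutions get infinity; interior ones sum, per objective, the
    normalized gap between their two neighbors in sorted order."""
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        lo, hi = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float('inf')
        if hi == lo:
            continue
        for j in range(1, n - 1):
            dist[order[j]] += (front[order[j + 1]][k] - front[order[j - 1]][k]) / (hi - lo)
    return dist

def select_gbest(front, top_fraction=0.1):
    """Pick a global best guide at random from the least-crowded
    top_fraction of the archive (highest crowding distances first)."""
    dist = crowding_distance(front)
    ranked = sorted(range(len(front)), key=lambda i: dist[i], reverse=True)
    k = max(1, int(len(front) * top_fraction))
    return front[random.choice(ranked[:k])]
```

Selecting guides from the least-crowded region biases the swarm toward sparse parts of the front, which is what maintains the spread of solutions.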

In this paper, a minimum distance of point to line multiobjective particle swarm optimization (MDPL-MOPSO) is developed. The main differences between our approach and other proposals in the literature are as follows:

(1) The algorithm adopts a bounded archive mechanism to store the global Pareto optimal solutions. The global best guide of each particle in the population is selected from the archive.

(2) A minimum distance of point to line method is utilized to find the global best guide of each particle in the population. The selection of the global best guide of the particle swarm is crucial in a multiobjective PSO algorithm: it affects the convergence capability of the algorithm and maintains an adequate spread of nondominated solutions. No other proposal uses the mechanism in the way it is adopted in this paper.

(3) The mutation operator of MOPSO was adopted because of the exploratory capability it provides: mutation is initially performed on the entire population, and its coverage then decreases rapidly over time [14]. This helps prevent premature convergence caused by local Pareto fronts in some optimization problems.

(4) The algorithm is evaluated on representative test functions, with a performance comparison against the MOPSO algorithm [14], the CD-MOPSO algorithm [15], and the NSGA-II algorithm [7].
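As a rough illustration of item (2), the following sketch selects, for a given particle, the archive member whose objective vector is nearest to a line. We assume here, in the spirit of sigma-method-style guide selection, that the line runs from the origin through the particle's own (normalized) objective vector; the exact line construction used by MDPL-MOPSO is the one defined in Section 2, and all function names below are ours:

```python
import math

def point_to_line_distance(p, d):
    """Perpendicular distance from point p to the line through the origin
    with direction d, using dist^2 = |p|^2 - (p.d)^2 / |d|^2."""
    pp = sum(x * x for x in p)
    pd = sum(x * y for x, y in zip(p, d))
    dd = sum(x * x for x in d)
    return math.sqrt(max(pp - pd * pd / dd, 0.0))

def select_gbest_mdpl(particle_obj, archive_objs):
    """Choose the archive member whose objective vector lies closest to the
    line from the origin through the particle's own objective vector."""
    return min(archive_objs, key=lambda a: point_to_line_distance(a, particle_obj))
```

Intuitively, each particle is guided by the archive member "in its own direction" of the objective space, which promotes both convergence and an even spread along the front.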

#### 2. MDPL-MOPSO Algorithm

##### 2.1. MDPL-MOPSO Implementation Procedure

(1) Particle swarm initialization:
(a) Set the swarm size $M$ and the particle dimension $D$;
(b) Set the range of variable values, $x_{\min}$ and $x_{\max}$;
(c) Set the particle speed limit, $v_{\max}$;
(d) Initialize the swarm positions and speeds by creating particles at random within these ranges.

(2) Evolution parameter setting:
(a) Maximum number of iterations, $T_{\max}$;
(b) Maximum and minimum inertia weights, $w_{\max} = 0.9$ and $w_{\min} = 0.4$;
(c) Learning factors $c_1$ and $c_2$.

(3) Objective function evaluation: for $i = 1$ to $M$ ($M$ is the population size) and $j = 1$ to $L$ ($L$ is the number of objective functions):
(a) Evaluate $f_j(x_i)$;
(b) Normalize the particle objectives.

(4) Personal best initialization: set the personal best $p_i$ of each particle to the best position it has found.

(5) Archive initialization: store the nondominated solutions found in the initial swarm in the Archive.

(6) While the number of iterations has not reached $T_{\max}$:
(a) Find the global best guide from the Archive: for each particle $i$, construct the associated line $l_i$, compute the distance $d_j$ from each nondominated particle $j$ in the archive to $l_i$, and select the archive particle with the minimum distance as the global best guide $g_i$.
(b) Particle speed and position update:
$$v_i(t+1) = w(t)\,v_i(t) + c_1 r_1 \big(p_i - x_i(t)\big) + c_2 r_2 \big(g_i - x_i(t)\big),$$
$$x_i(t+1) = x_i(t) + v_i(t+1),$$
where $t$ is the number of iterations and $r_1$ and $r_2$ are random numbers in the range $[0, 1]$. If $x_i(t+1)$ extends beyond the boundaries, it is reintegrated by setting the decision variable equal to the value of its corresponding lower or upper boundary, and its velocity is multiplied by −1 so that it searches in the opposite direction.
(c) Perform mutation on the swarm.
(d) Evaluate the objective functions.
(e) Archive update: insert the new nondominated solutions into the Archive if they are not dominated by any of the stored solutions; all solutions in the archive dominated by a new solution are removed. If the archive is full, the solution to be replaced is determined according to the crowding distance values.
(f) Update the personal best solution of each particle: if the current position dominates the position in memory, the particle's personal best is updated with the current position.
(g) Increment the iteration counter.

(7) Repeat until the iteration requirement is achieved.
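The update in step 6(b) is the canonical inertia-weight PSO update with reflective boundary handling, and can be sketched for one particle as follows. Function and parameter names are ours, and the inertia weight `w` passed in would be decreased linearly from its maximum to its minimum over the iterations:

```python
import random

def pso_step(x, v, pbest, gbest, w, c1, c2, lo, hi):
    """One PSO update for a single particle, per dimension:
    v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x);  x <- x + v.
    A particle leaving [lo, hi] is set to the violated bound and its
    velocity component is negated so that it searches back inward."""
    new_x, new_v = [], []
    for j in range(len(x)):
        r1, r2 = random.random(), random.random()
        vj = w * v[j] + c1 * r1 * (pbest[j] - x[j]) + c2 * r2 * (gbest[j] - x[j])
        xj = x[j] + vj
        if xj < lo[j]:
            xj, vj = lo[j], -vj
        elif xj > hi[j]:
            xj, vj = hi[j], -vj
        new_x.append(xj)
        new_v.append(vj)
    return new_x, new_v
```

The reflection rule keeps every decision variable inside its bounds at all times, so no separate feasibility repair is needed.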

The flowchart of the proposed MDPL-MOPSO algorithm is shown in Figure 1.