Abstract

In a multiobjective particle swarm optimization algorithm, the selection of the global best particle for each particle of the population from a set of Pareto optimal solutions has a significant impact on the convergence and diversity of solutions, especially when optimizing problems with a large number of objectives. In this paper, a new method for selecting the global best particle is introduced, called minimum distance of point to line multiobjective particle swarm optimization (MDPL-MOPSO). Using the basic concept of the minimum distance of a point to a line in the objective space, the global best particle can be selected from among the archive members. Different test functions were used to test MDPL-MOPSO and compare it with CD-MOPSO. The results show that the convergence and diversity of MDPL-MOPSO are relatively better than those of CD-MOPSO. Finally, the proposed multiobjective particle swarm optimization algorithm is used for the Pareto optimal design of a five-degree-of-freedom vehicle vibration model, which resulted in numerous effective trade-offs among conflicting objectives, including seat acceleration, front tire velocity, rear tire velocity, relative displacement between the sprung mass and the front tire, and relative displacement between the sprung mass and the rear tire. The superiority of this work is demonstrated by comparing the obtained results with the literature.

1. Introduction

The dynamic behavior of a vehicle is critical for determining driving performance and ride comfort. It also influences the dynamic loads applied to the road and to the main axles, which can damage road surfaces and cause early failure of chassis components. However, the system parameters that govern these effects usually conflict with the main objectives of ride comfort, driving performance, and low dynamic loads, so the optimization of these parameters has been actively studied. With the development of computational capacity and computing technologies, Nariman-Zadeh et al. [1] and Mahmoodabadi et al. [2] performed two-objective and five-objective optimization using advanced evolutionary algorithms and particle swarm optimization for a five-degree-of-freedom vehicle vibration model, which resulted in numerous effective trade-offs between conflicting objectives, including seat acceleration, front tire velocity, rear tire velocity, relative displacement between the sprung mass and the front tire, and relative displacement between the sprung mass and the rear tire. Weiping et al. [3] presented the optimization of an eight-degree-of-freedom model using fuzzy theory and a multipopulation genetic algorithm to determine a set of parameters that achieves the best driver’s seat performance. To reduce the dynamic load between the tires and the road and improve ride comfort, Pengmin et al. [4] proposed a modification of the suspension system parameters via a four-degree-of-freedom model. Zhonglang and Pengmin [5] proposed improving ride comfort and dynamic load through a seven-degree-of-freedom model and suspension system adjustment, based on a specific 4 × 2 tractor case. Drehmer et al. [6] applied particle swarm optimization and sequential quadratic programming algorithms to obtain the optimal suspension parameters for different road profiles and vehicle velocity conditions.

Optimization is becoming an important tool in scientific research and engineering practice. The multiobjective optimization problem (MOP) is normally formulated as the minimization of a vector of objectives and can be written in the following form:

$$\min_{\mathbf{x} \in \Omega} F(\mathbf{x}) = \left(f_1(\mathbf{x}), f_2(\mathbf{x}), \ldots, f_m(\mathbf{x})\right)$$

Here, $\Omega$ is the variable space and $\mathbf{x}$ is a variable vector that needs to be determined. $F(\mathbf{x})$ consists of a set of $m$ objective functions that map $\Omega$ into the objective space. The objectives of a multiobjective problem generally conflict with one another; improving one objective typically worsens another. In this case, it is rarely possible to find a single solution that optimizes all the objectives. Instead, a trade-off must be sought in which every objective reaches a balanced level; such solutions are called Pareto optimal solutions. Multiobjective optimization therefore requires finding a set of optimal solutions instead of a single best solution. Because no Pareto optimal solution is obviously better than the others without additional preference information, it is necessary to determine as many Pareto optimal solutions as possible.
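To make the notion of Pareto optimality concrete, the following minimal Python sketch (illustrative only; the function names are ours, not from the paper) checks Pareto dominance between objective vectors and filters a small candidate set down to its nondominated solutions.

```python
import numpy as np

def dominates(a, b):
    """Return True if objective vector a Pareto-dominates b (minimization):
    a is no worse than b in every objective and strictly better in at least one."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

def pareto_front(points):
    """Keep only the nondominated objective vectors from a list of candidates."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Example: three candidate solutions of a two-objective minimization problem.
candidates = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0)]
print(pareto_front(candidates))   # (3.0, 3.0) is dominated by (2.0, 2.0)
```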

Multiobjective evolutionary algorithms (MOEAs) have been proposed to solve multiobjective problems, for instance, the nondominated sorting genetic algorithm II (NSGA-II) by Deb et al. [7], the Pareto archived evolution strategy (PAES) by Knowles and Corne [8], which approximates a nondominated front of the solution set, and the improved strength Pareto evolutionary algorithm (SPEA2) by Zitzler et al. [9]. These algorithms are inspired by natural evolution and operate on populations of candidate solutions.

Building on population-based evolutionary algorithms, Kennedy and Eberhart proposed particle swarm optimization (PSO) [10], motivated by social behavior such as the food searching of a bird flock. The approach can be considered a distributed behavioral algorithm that performs (in its more general form) a multidimensional search. In the simulation, the behavior of each individual is affected by either the best local (i.e., within a certain neighborhood) or the best global individual. The approach uses the concept of a population and a measure of performance similar to the fitness value used in evolutionary algorithms. Additionally, the individual adjustments are analogous to the use of a crossover operator. However, the approach introduces the notion of flying potential solutions through hyperspace (used to accelerate convergence), which does not seem to have an analogous mechanism in traditional evolutionary algorithms. Another important difference is that PSO allows individuals to benefit from their past experiences, whereas in an evolutionary algorithm the current population is normally the only “memory” used by the individuals. PSO has been successfully used for both continuous nonlinear and discrete binary single-objective optimization. PSO seems particularly suitable for multiobjective optimization primarily because of its high convergence rate in single-objective optimization.
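For readers unfamiliar with the basic algorithm, the sketch below outlines the canonical single-objective PSO update described above. It is a minimal illustration, not the implementation used in this paper; the parameter values and the sphere test function are assumptions chosen only for the example.

```python
import numpy as np

def pso_minimize(f, dim=2, n_particles=30, iters=200, w=0.7, c1=2.0, c2=2.0,
                 lo=-5.0, hi=5.0, seed=0):
    """Minimal canonical PSO for a single-objective function f (minimization)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))        # positions
    v = np.zeros_like(x)                               # velocities
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()               # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # velocity update: inertia + cognitive (personal memory) + social (global best)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

best_x, best_f = pso_minimize(lambda p: np.sum(p ** 2))   # sphere function
```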

In recent years, PSO has been investigated to solve MOP problems.

Ray and Liew [11] introduced a swarm metaphor for multiobjective design optimization. The algorithm employs a multilevel sieve to generate a set of leaders, a probabilistic crowding radius-based strategy (based on a roulette-wheel scheme that ensures a larger crowding radius has a higher probability of being selected as a leader) for leader selection and a simple generational operator for information transfer.

Parsopoulos and Vrahatis [12] presented a study of the particle swarm optimization method for multiobjective optimization problems. This algorithm adopts an aggregating function, and three approaches were implemented: a conventional linear aggregating function, a dynamic aggregating function, and the bang-bang weighted aggregation approach. In this approach, all the objectives are summed into a weighted combination $F = \sum_{i=1}^{k} w_i f_i(\mathbf{x})$, where the $w_i$, $i = 1, \ldots, k$, are nonnegative weights, and it is usually assumed that $\sum_{i=1}^{k} w_i = 1$. The multiobjective problem is thus scaled to a single-objective problem before optimization.
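As a brief illustration of the conventional linear aggregation variant, the following sketch (our own example, with assumed weights) scalarizes a vector of objective values with fixed nonnegative weights that sum to one; the dynamic and bang-bang variants differ only in how the weights change over the iterations.

```python
import numpy as np

def weighted_sum(objectives, weights):
    """Reduce a vector of objective values to a single scalar via a weighted sum."""
    weights = np.asarray(weights, dtype=float)
    assert np.all(weights >= 0) and np.isclose(weights.sum(), 1.0)
    return float(np.dot(weights, objectives))

f = [0.4, 1.3]                      # objective values of one candidate solution
print(weighted_sum(f, [0.5, 0.5]))  # fixed weights (conventional linear aggregation)
```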

Hu and Eberhart [13] proposed a dynamic neighborhood PSO for multiobjective design optimization. In this algorithm, only one objective is optimized at a time. The relatively simple objective function is fixed, and the difficult function is optimized. This algorithm adopts one-dimensional optimization to handle multiple objectives.

The Pareto dominance approach has since become increasingly popular, but the early proposals to extend PSO to multiobjective optimization problems did not use a secondary population, which may limit algorithm performance. In more recent papers, this idea has been incorporated by other authors. The most representative proposals are the following.

Coello and Lechuga [14] first extended PSO to handle multiobjective optimization problems by using a Pareto ranking scheme, in which a global attraction mechanism combined with a historical archive of previously found nondominated vectors motivates convergence toward globally nondominated solutions. Additionally, repository updates and global best guide selection are performed using a geographically based system (hypercubes) defined in terms of the objective function values of each individual, which produces well-distributed Pareto fronts. An improved version of this algorithm was later presented in [15], with an added constraint-handling mechanism and a mutation operator that significantly improves the exploratory capabilities of the original algorithm and helps maintain diversity.

Hu et al. [16] proposed an improved version of the algorithm in [13], in which an extended memory is introduced to store global Pareto optimal solutions and reduce computation time.

Fieldsend and Singh [17] presented a MOPSO that uses an unconstrained archive. In this algorithm, a different data structure (called the dominated tree) for storing the elite particles facilitates the choice of a best local guide for each particle of the population.

Mostaghim and Teich [18] proposed a MOPSO that updates the external archive based on Pareto dominance. An archive is domination-free if no two points in the archive dominate each other; consequently, while executing the archive update, dominated points must be deleted to keep the archive domination-free. Additionally, this work proposed the sigma method, in which the best local guides for each particle are adopted to improve the convergence and diversity of a PSO approach used for multiobjective optimization.
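The following short sketch illustrates the kind of domination-free archive update described above. It is a generic reconstruction rather than the exact procedure of [18]; the function names are ours.

```python
import numpy as np

def dominates(a, b):
    """a Pareto-dominates b (minimization)."""
    return bool(np.all(a <= b) and np.any(a < b))

def update_archive(archive, candidate):
    """Insert a candidate objective vector while keeping the archive domination-free:
    reject the candidate if any stored solution dominates it; otherwise remove every
    stored solution the candidate dominates and add the candidate."""
    candidate = np.asarray(candidate, dtype=float)
    if any(dominates(a, candidate) for a in archive):
        return archive
    archive = [a for a in archive if not dominates(candidate, a)]
    archive.append(candidate)
    return archive

archive = []
for point in [(2.0, 2.0), (1.0, 3.0), (3.0, 1.0), (2.5, 2.5)]:
    archive = update_archive(archive, point)
# (2.5, 2.5) is rejected because (2.0, 2.0) dominates it
```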

Li [19] proposed an approach in which the main mechanisms of NSGA-II [7] are adopted in a PSO algorithm. The resulting nondominated sorting particle swarm optimizer pushes the population toward the Pareto front. To maintain population diversity, a crowding distance assignment mechanism is used that sorts the nondominated solutions in the nonDomPSOList according to their crowding distance values and then randomly selects a global best guide for each particle from a specified part (e.g., the top 5%) of the sorted nonDomPSOList.

Raquel and Naval Jr. [20] proposed an algorithm that extends PSO to solve multiobjective optimization problems by incorporating crowding distance computation into the global best selection and into the deletion method of the external archive of nondominated solutions whenever the archive is full. The crowding distance mechanism, combined with a mutation operator, maintains the diversity of nondominated solutions in the external archive. In this algorithm, a bounded external archive stores the nondominated solutions found in previous iterations, and the global best guide of each particle is selected from among the nondominated solutions with the highest crowding distance values. Similar to the approach described above, this method randomly selects the global best guide for each particle from a specified part (e.g., the top 10%) of the sorted archive.

In this paper, a minimum distance of point to line multiobjective particle swarm optimization (MDPL-MOPSO) algorithm is developed. The main differences between our approach and other proposals in the literature are as follows:
(1) The algorithm adopts a bounded archive mechanism to store the global Pareto optimal solutions, and the global best guide of each particle in the population is selected from this archive.
(2) A minimum distance of point to line method is used to find the global best guide of each particle in the population. The selection of the global best guide is crucial in a multiobjective PSO algorithm; it affects the convergence capability of the algorithm and the ability to maintain an adequate spread of nondominated solutions. No other proposal uses this mechanism in the way it is adopted in this paper.
(3) The mutation operator of MOPSO is adopted because of the exploratory capability it provides, initially performing mutation on the entire population and then rapidly decreasing its coverage over time [14]. This helps prevent premature convergence caused by local Pareto fronts in some optimization problems.
(4) The algorithm is evaluated on representative test functions, with a performance comparison against the MOPSO algorithm [14], the CD-MOPSO algorithm [20], and the NSGA-II algorithm [7].

2. MDPL-MOPSO Algorithm

2.1. MDPL-MOPSO Implementation Procedure

(1) Particle swarm initialization:
(a) Set the swarm size and the particle dimension;
(b) Set the range of values for the decision variables;
(c) Set the particle speed limits;
(d) Initialize the swarm positions and speeds by creating the particles at random.
(2) Evolution parameter setting:
(a) Set the maximum number of iterations;
(b) Set the maximum and minimum inertia weights, with values $w_{\min} = 0.4$ and $w_{\max} = 0.9$;
(c) Set the learning factors $c_1$ and $c_2$.
(3) Evaluate the objective function values:
For each particle $i = 1$ to $M$ ($M$ is the population size) and each objective $j = 1$ to $L$ ($L$ is the number of objective functions),
(a) evaluate the objective value;
(b) normalize the particle objective values.
(4) Personal best initialization: set the personal best of each particle to the best position it has found so far.
(5) Archive initialization: store the nondominated solutions found in the initial swarm in the archive.
(6) While the maximum number of iterations has not been reached:
(a) Find the global best guide from the archive: construct the line through the origin for each nondominated particle in the archive, compute the distance from the particle to each line, and select the archive member whose line has the minimum distance as the global best guide.
(b) Update the particle speeds and positions, where $t$ is the iteration counter and $r_1$ and $r_2$ are random numbers in the range $[0, 1]$ (see the sketch after this procedure). If a particle extends beyond the boundaries, it is reintegrated by setting the decision variable equal to the value of its corresponding lower or upper boundary, and its velocity is multiplied by −1 so that it searches in the opposite direction.
(c) Perform mutation on the particles.
(d) Evaluate the objective function values.
(e) Update the archive: insert the new nondominated solutions into the archive if they are not dominated by any of the stored solutions, and remove from the archive all solutions dominated by a new solution. If the archive is full, the solution to be replaced is determined according to the crowding distance values.
(f) Update the personal best solution of each particle: if the current position dominates the position in memory, the particle’s personal best is updated with the current position.
(g) Increment the iteration counter.
(7) Repeat until the iteration requirement is reached.
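The sketch below illustrates steps (2)(b) and (6)(b) of the procedure: the linearly decreasing inertia weight between $w_{\max} = 0.9$ and $w_{\min} = 0.4$, and the velocity/position update with boundary reflection. It is a simplified reconstruction based on the description above; the variable names and example values are ours.

```python
import numpy as np

def inertia_weight(t, t_max, w_min=0.4, w_max=0.9):
    """Linearly decreasing inertia weight over the iterations (step (2)(b))."""
    return w_max - (w_max - w_min) * t / t_max

def update_particle(x, v, pbest, gbest, t, t_max, lo, hi, c1=2.0, c2=2.0, rng=None):
    """One velocity/position update with boundary reflection (step (6)(b)):
    a component leaving the search range is clipped to the boundary and its
    velocity is multiplied by -1 so it searches in the opposite direction."""
    if rng is None:
        rng = np.random.default_rng()
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    w = inertia_weight(t, t_max)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    out = (x < lo) | (x > hi)          # components that left the feasible range
    x = np.clip(x, lo, hi)
    v[out] *= -1.0
    return x, v

x, v = np.array([4.9, 0.1]), np.array([0.5, -0.2])
x, v = update_particle(x, v, pbest=np.zeros(2), gbest=np.ones(2),
                       t=10, t_max=400, lo=-5.0, hi=5.0)
```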

The flowchart of the proposed MDPL-MOPSO algorithm is shown in Figure 1.

2.2. Finding the Global Best Guide

As mentioned previously, several important MOPSO methods exist [15, 16, 20], and each of them suggests a way of finding the global best guide, in most cases inspired by multiobjective evolutionary algorithms. In this section, we discuss some of these methods, together with their advantages and disadvantages, and introduce a new method for finding the global best guide.

2.2.1. Overview of the Method of Finding the Global Best Guide

(1) Handling Multiple Objectives with Particle Swarm Optimization [15]. In this method, the objective space is divided into hypercubes before the global best guide is selected for each particle. A fitness value is assigned to each hypercube depending on the number of elite particles within it: the more elite particles a hypercube contains, the lower its fitness value. Roulette-wheel selection is then applied to the hypercubes, and one of them is selected. Finally, the global best guide is a particle chosen at random from the selected hypercube. The global best guide is therefore selected using roulette-wheel selection, which is a random selection, and it is possible that a particle does not obtain a suitable guide as its global guide.

(2) Multiobjective Optimization Using Dynamic Neighborhood Particle Swarm Optimization [16]. In this method, a dynamic neighborhood strategy is used. The paper showed that, for two-objective optimization, the global best guide for a particle can be found in the objective space. First, the distances between the particle and the other particles are calculated in terms of the first objective value, which is called the fixed objective. Then, the k local neighbors are found based on the calculated distances. The local optimum among these neighbors in terms of the second objective value is the global best guide for the particle. In this method, the fixed objective must be selected with a priori knowledge about the objective functions, and one-dimensional optimization is used to handle multiple objectives; selecting the global best guide therefore depends on only one objective.

(3) An Effective Use of Crowding Distance in Multiobjective Particle Swarm Optimization (CD-MOPSO) [20]. In this method, a crowding distance strategy is used. The crowding distance value of a solution provides an estimate of the density of solutions surrounding it. A bounded external archive stores the nondominated solutions found in previous iterations, and these archived solutions are used as the global best guides of the particles in the swarm. The nondominated solutions in the archive are sorted by decreasing crowding distance, and the global best guide of each particle is selected from among the nondominated solutions with the highest crowding distance values, that is, from a specified top part of the sorted repository. The global best guide is therefore selected based on crowding distance values, and the choice within the top part is random.
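A minimal sketch of the crowding distance computation used by this kind of guide selection is given below. It follows the usual NSGA-II-style definition and is an illustration rather than the exact code of [20].

```python
import numpy as np

def crowding_distance(front):
    """Crowding distance of each solution in a nondominated front
    (rows = solutions, columns = objectives). Larger values mark less
    crowded regions; boundary solutions are assigned an infinite value."""
    front = np.asarray(front, dtype=float)
    n, m = front.shape
    dist = np.zeros(n)
    for k in range(m):
        order = np.argsort(front[:, k])
        span = front[order[-1], k] - front[order[0], k] or 1.0
        dist[order[0]] = dist[order[-1]] = np.inf
        # interior solutions: normalized gap between their two nearest neighbors
        dist[order[1:-1]] += (front[order[2:], k] - front[order[:-2], k]) / span
    return dist

front = [(0.0, 1.0), (0.2, 0.8), (0.6, 0.3), (1.0, 0.0)]
print(crowding_distance(front))
```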

2.2.2. Minimum Distance of Point to Line Method

To overcome the disadvantages of the aforementioned methods, MDPL-MOPSO is proposed to find the global best guide for each particle. First, the basic concept of the minimum distance of point to line (MDPL) is introduced. Then, this paper explains how this method determines the global best guide for each particle of the population in the objective space.

In the two-dimensional coordinate system, a straight line $L$ can be determined by the origin $O$ and any point $A(a_1, a_2)$, as shown in Figure 2. Line $L$ is defined as follows:

$$L:\; a_2 x - a_1 y = 0$$

Point $P(p_1, p_2)$ is any point outside line $L$, and the distance $d$ between point $P$ and line $L$ is defined as follows:

$$d = \frac{\left|a_2 p_1 - a_1 p_2\right|}{\sqrt{a_1^2 + a_2^2}}$$

Similarly, in the three-dimensional coordinate system, a straight line can be drawn through the origin $O$ and any point $A(a_1, a_2, a_3)$, shown in Figure 3 as line $L$. Line $L$ is defined as follows:

$$L:\; \frac{x}{a_1} = \frac{y}{a_2} = \frac{z}{a_3}$$

Point $P(p_1, p_2, p_3)$ is any point outside line $L$, and the distance $d$ between point $P$ and line $L$ is defined as follows:

$$d = \frac{\left|\overrightarrow{OP} \times \overrightarrow{OA}\right|}{\left|\overrightarrow{OA}\right|}$$
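Numerically, the same distance can be obtained in any dimension by subtracting from the point its orthogonal projection onto the line direction, as in the following sketch (our own helper, assuming NumPy).

```python
import numpy as np

def distance_to_line_through_origin(p, a):
    """Distance from point p to the straight line passing through the origin and
    point a. Works in 2D, 3D, or any dimension: remove from p its orthogonal
    projection onto the direction of a and take the norm of the remainder."""
    p, a = np.asarray(p, dtype=float), np.asarray(a, dtype=float)
    proj = np.dot(p, a) / np.dot(a, a) * a
    return float(np.linalg.norm(p - proj))

# 2D example: line through the origin and (1, 1); point (1, 0) lies sqrt(2)/2 away.
print(distance_to_line_through_origin([1.0, 0.0], [1.0, 1.0]))
# 3D example: line along the z-axis; point (1, 0, 0) lies at distance 1.
print(distance_to_line_through_origin([1.0, 0.0, 0.0], [0.0, 0.0, 1.0]))
```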

Here, we use two-objective optimization as an example to illustrate how to find the global best guide.

Using the basic concept of the distance of a point to a line in the objective space, the global best guide among the archive members for each particle $P$ of a two-objective optimization population is found as follows:

First, we draw line $L_i$ through the origin and point $A_i$, whose coordinates in the two-objective space are the objective values of the $i$th nondominated particle in the archive. Line $L_i$ is defined as follows:

$$L_i:\; f_2(A_i)\, x_1 - f_1(A_i)\, x_2 = 0$$

Second, we calculate the distance $d_i$ from point $P$, whose coordinates $(f_1(P), f_2(P))$ in the objective space correspond to the population particle, to line $L_i$. The distance $d_i$ is defined as follows:

$$d_i = \frac{\left|f_2(A_i)\, f_1(P) - f_1(A_i)\, f_2(P)\right|}{\sqrt{f_1(A_i)^2 + f_2(A_i)^2}}$$

Finally, the archive particle $A_g$ whose line $L_g$ has the minimum distance to $P$ is selected, that is,

$$g = \arg\min_{i}\, d_i$$

In other words, each particle selects as its global best guide the archive member to whose line it has the minimum distance. Therefore, MDPL-MOPSO can determine the most appropriate archive member as the global best guide for each particle in the population.
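The following sketch illustrates the MDPL selection rule as described above: each population particle is assigned the archive member whose line through the origin of the objective space is nearest to the particle. The function and variable names are ours, and the small example data are invented for illustration.

```python
import numpy as np

def mdpl_select_gbest(particle_objs, archive_objs):
    """For each population particle (row of objective values), select as global best
    guide the archive member whose line through the origin of the objective space
    is closest to the particle. Returns the chosen archive index per particle."""
    P = np.asarray(particle_objs, dtype=float)      # shape (n_particles, n_objs)
    A = np.asarray(archive_objs, dtype=float)       # shape (n_archive, n_objs)
    guides = np.empty(len(P), dtype=int)
    for i, p in enumerate(P):
        proj = (p @ A.T) / np.einsum('ij,ij->i', A, A)      # scalar projections onto each line
        d = np.linalg.norm(p - proj[:, None] * A, axis=1)    # point-to-line distances
        guides[i] = int(np.argmin(d))
    return guides

archive = [(0.1, 0.9), (0.5, 0.5), (0.9, 0.1)]   # nondominated objective vectors
swarm   = [(0.2, 0.8), (0.8, 0.3)]               # objective vectors of two particles
print(mdpl_select_gbest(swarm, archive))         # each particle picks the nearest line: [0 2]
```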

As shown in Figure 4, points $A_1$, $A_2$, $A_3$, and $A_4$, with coordinates given by their objective values, are four points of the Pareto optimal front in the two-objective space and correspond to nondominated particles in the archive. Lines $L_1$–$L_4$ are obtained according to (14). The corresponding point of a population particle in the objective space is point $P$, with coordinates $(f_1(P), f_2(P))$. The distance $d_i$ is the distance between point $P$ and line $L_i$. If distance $d_i$ is the minimum value, the corresponding nondominated particle $A_i$ in the archive is selected, and particle $A_i$ becomes the global best guide of population particle $P$.

Figure 5 shows how the algorithm determines the global best guide among the archive members for each particle of the population in a two-dimensional objective space. A particle selects a given archive member as its global best guide because its distance to the line of that archive member is the minimum. This method can guide the population particles to move directly toward the Pareto optimal front.

3. Performance Evaluations

3.1. Testing Functions and Performance Targets

To evaluate the performance of the proposed MDPL-MOPSO, five representative test functions, denoted ZDT1, ZDT2, ZDT3, ZDT4, and ZDT6, are employed; they are detailed in Table 1. These functions were defined and used by Zitzler et al. [21] to demonstrate the effectiveness of their evolutionary algorithms in handling the common difficulties of multiobjective optimization: converging to the Pareto optimal front and maintaining diversity within the population.
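For reference, the standard published definition of ZDT1 (the first of these test functions) is sketched below as a small illustrative helper; the other test functions in the family are defined analogously in [21].

```python
import numpy as np

def zdt1(x):
    """ZDT1 test function (Zitzler et al.): n decision variables in [0, 1],
    two objectives to be minimized; the true Pareto front is f2 = 1 - sqrt(f1)."""
    x = np.asarray(x, dtype=float)
    f1 = x[0]
    g = 1.0 + 9.0 * np.sum(x[1:]) / (len(x) - 1)
    f2 = g * (1.0 - np.sqrt(f1 / g))
    return np.array([f1, f2])

print(zdt1(np.zeros(30)))          # a point on the true Pareto front: [0, 1]
print(zdt1(np.full(30, 0.5)))      # an interior (dominated) point
```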

In addition, convergence and diversity are the significant criteria for algorithm performance. Generational distance (GD) is a measure of convergence, and Spacing is a measure of the uniformity of the distribution and of diversity [7].

(1) Generational distance (GD): GD measures how close the solutions produced by a multiobjective optimization algorithm are to the true Pareto optimal front. First, a set of solutions is sampled from the true Pareto optimal front in the objective space. Second, for each solution obtained by the algorithm, the minimum Euclidean distance to the sampled solutions on the true Pareto optimal front is computed. These distances are then combined into the convergence measure:

$$\mathrm{GD} = \frac{1}{n}\sqrt{\sum_{i=1}^{n} d_i^{\,2}}$$

where $n$ is the number of Pareto front solutions obtained by the algorithm and $d_i$ denotes the Euclidean distance between the $i$th obtained Pareto front particle and the closest sampled point of the true Pareto optimal front. As shown in Figure 6, the shaded area represents the feasible objective region, the solid line is the true Pareto optimal front, the hollow points are the selected theoretical optimal solutions, the dashed line is the Pareto optimal front obtained by the multiobjective optimization algorithm, and the solid points are the Pareto optimal solutions stored in the archive. GD therefore represents the distance between the Pareto front obtained by the algorithm and the true optimum of the test function, and a smaller GD value indicates better convergence of the algorithm.
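A small sketch of this computation is given below. The normalization follows the common GD definition; the exact constant used in the paper may differ slightly, and the sampled front and offset are invented for the example.

```python
import numpy as np

def generational_distance(obtained, true_front):
    """GD: for each obtained solution, take the Euclidean distance to the nearest
    point sampled from the true Pareto front, then average the squared distances
    in a root-mean-square style (smaller is better)."""
    obtained = np.asarray(obtained, dtype=float)
    true_front = np.asarray(true_front, dtype=float)
    d = np.min(np.linalg.norm(obtained[:, None, :] - true_front[None, :, :], axis=2),
               axis=1)
    return float(np.sqrt(np.sum(d ** 2)) / len(obtained))

# True ZDT1 front sample and a slightly shifted approximation of it.
f1 = np.linspace(0, 1, 101)
true_pf = np.column_stack([f1, 1 - np.sqrt(f1)])
approx = true_pf + np.array([0.0, 0.02])
print(generational_distance(approx, true_pf))   # small value reflecting the 0.02 offset
```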

(2) Spacing (diversity measure): the Spacing metric measures the distribution and diversity of the solutions produced by a multiobjective optimization algorithm; it is important to obtain a set of optimal solutions distributed throughout the entire Pareto front. As shown in Figure 7, the shaded areas represent the feasible objective regions, the dashed lines are the Pareto optimal fronts obtained by the multiobjective optimization algorithm, and the solid points are the Pareto optimal solutions in the external archive. In the Spacing formula, $n$ is the number of Pareto front solutions obtained by the algorithm and $d_i$ denotes the Euclidean distance (in the objective space) between the $i$th member of the obtained Pareto optimal front and its nearest member of that front. A smaller Spacing value implies a more uniform distribution of solutions along the Pareto optimal front.
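The Spacing formula itself is not reproduced above; a common choice consistent with this description is Schott's spacing metric, the standard deviation of nearest-neighbour distances within the obtained front. The sketch below uses that assumed form.

```python
import numpy as np

def spacing(front):
    """Schott-style spacing metric (assumed form): standard deviation of each
    solution's distance to its nearest neighbour in the obtained front.
    Smaller values mean a more uniform distribution of solutions."""
    F = np.asarray(front, dtype=float)
    n = len(F)
    dists = np.linalg.norm(F[:, None, :] - F[None, :, :], axis=2)
    np.fill_diagonal(dists, np.inf)
    d = dists.min(axis=1)                  # nearest-neighbour distance per solution
    return float(np.sqrt(np.sum((d - d.mean()) ** 2) / (n - 1)))

f1 = np.linspace(0, 1, 11)
uniform = np.column_stack([f1, 1 - f1])            # evenly spread front
nonuniform = uniform[[0, 1, 2, 5, 9, 10]]          # same front with gaps
print(spacing(uniform), spacing(nonuniform))       # the uniform front gives the smaller value
```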

3.2. Results and Discussion

To verify the performance of the proposed algorithm, a comparative analysis is carried out against CD-MOPSO, which uses related mechanisms for archive update, maintenance, and guide selection. Through repeated simulations of the test functions, the GD convergence metric and the Spacing diversity metric are calculated and summarized as a mean value, characterizing algorithm performance, and a variance, characterizing stability.

Raquel and Naval Jr. [20] proposed the CD-MOPSO algorithm. For the comparison, the particle swarm contains 30 particles, the maximum number of iterations is 400, and the learning factors are set to 2; the MDPL-MOPSO algorithm uses the same parameters as the CD-MOPSO algorithm. In the following analysis, the results obtained from 30 independent runs of each algorithm are compared.

Figures 8–12 compare the Pareto fronts generated for the five test functions with the true Pareto fronts. Tables 2 and 3 report, for the two algorithms, the convergence (GD) values and the diversity (Spacing) values for the five test functions; in these tables, the first row is the mean value and the second row is the variance.

From Figures 8 to 12, both MDPL-MOPSO and CD-MOPSO can cover the Pareto fronts of ZDT1, ZDT2, and ZDT4, whereas the fronts of ZDT3 and ZDT6 are only partially covered. Compared with CD-MOPSO, MDPL-MOPSO is closer to the true Pareto front on ZDT3, while CD-MOPSO performs better on ZDT6. This can also be observed from the information in Tables 2 and 3.

Table 2 reports the convergence (GD) values for the test function solutions, where lower values indicate better convergence. The GD mean and variance obtained by MDPL-MOPSO are smaller than those of CD-MOPSO for functions ZDT1, ZDT2, ZDT3, and ZDT4, which indicates that the convergence and stability of MDPL-MOPSO are better than those of CD-MOPSO on these functions. For function ZDT6, the GD mean value increases by 16%, so the convergence of MDPL-MOPSO is slightly worse than that of CD-MOPSO.

Table 3 reports the Spacing values for the test function solutions, where lower values indicate better diversity. The Spacing results follow the same trend as the GD convergence results: the diversity of MDPL-MOPSO is better for ZDT1–ZDT4, whereas for ZDT6 the diversity of MDPL-MOPSO is slightly worse than that of CD-MOPSO, because the mean value increases by approximately 9%.

Therefore, it can be concluded that MDPL-MOPSO has improved convergence and diversity performance compared with that of the CD-MOPSO algorithm.

4. Multiobjective Optimization Design of Vehicle Driving Dynamics

Nariman-Zadeh et al. [1] and Mahmoodabadi et al. [2] performed two-objective and five-objective optimization using advanced evolutionary algorithms and particle swarm optimization, which resulted in numerous effective trade-offs between conflicting objectives, including seat acceleration, front tire velocity, rear tire velocity, relative displacement between sprung mass and front tire, and relative displacement between sprung mass and rear tire, for a five-degree-of-freedom (five-DOF) vehicle vibration model.

In this paper, an improved multiobjective particle swarm optimization (MDPL-MOPSO) algorithm is used to optimize the five-DOF vehicle driving dynamics in [1, 2] and is compared with the references.

4.1. Five-Degree-of-Freedom Vehicle Model

A five-degree-of-freedom vehicle with passive suspension, which is adopted from [2], is shown in Figure 13.

The fixed parameters and design variables of the vehicle are shown in Table 4.

The kinetic energy of the five-degree-of-freedom vehicle vibration model is:

The potential energy of the five-degree-of-freedom vehicle vibration model is:

The dissipation energy of the five-degree-of-freedom vehicle vibration model is:

Substituting (13)–(15) into the Lagrange motion equation yields the vibration differential equations:

Then, the vibration model of (17) can be written in matrix form for efficient solution:

$$\mathbf{M}\ddot{\mathbf{x}} + \mathbf{C}\dot{\mathbf{x}} + \mathbf{K}\mathbf{x} = \mathbf{K}_t\,\mathbf{q}$$

where $\mathbf{M}$, $\mathbf{C}$, and $\mathbf{K}$ represent the mass, damping, and stiffness matrices, $\mathbf{K}_t$ is the tire stiffness matrix, $\mathbf{x}$, $\dot{\mathbf{x}}$, and $\ddot{\mathbf{x}}$ are the displacement, velocity, and acceleration column vectors, and $\mathbf{q}$ is the road excitation vector. The road excitation is shown in Figure 14.

It is assumed that the vehicle moves at a constant velocity $v$ = 20 m/s. It is further assumed that the rear tire follows the same trajectory as the front tire with a time delay equal to the wheelbase divided by the vehicle velocity.
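To illustrate how the matrix equation of motion can be solved numerically once the matrices are known, the sketch below integrates a reduced placeholder model (a 2-DOF quarter-car with invented parameter values and road input, not the paper's 5-DOF matrices or the road profile of Figure 14) by converting $\mathbf{M}\ddot{\mathbf{x}} + \mathbf{C}\dot{\mathbf{x}} + \mathbf{K}\mathbf{x} = \mathbf{K}_t\mathbf{q}(t)$ to first-order state-space form.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder 2-DOF quarter-car matrices (illustrative values only).
M  = np.diag([250.0, 50.0])                                   # mass matrix
C  = np.array([[1500.0, -1500.0], [-1500.0, 1500.0]])         # damping matrix
K  = np.array([[16000.0, -16000.0], [-16000.0, 176000.0]])    # stiffness matrix
Kt = np.array([[0.0], [160000.0]])                            # tire stiffness acting on the road input

def road(t, v=20.0):
    """Illustrative road excitation: a single smooth bump traversed at v = 20 m/s."""
    return np.array([0.05 * np.exp(-((v * t - 5.0) ** 2))])

def rhs(t, y):
    """First-order form of M x'' + C x' + K x = Kt q(t) with state y = [x, x']."""
    n = M.shape[0]
    x, xdot = y[:n], y[n:]
    xddot = np.linalg.solve(M, Kt @ road(t) - C @ xdot - K @ x)
    return np.concatenate([xdot, xddot])

sol = solve_ivp(rhs, (0.0, 2.0), np.zeros(4), max_step=1e-3)
# Numerically differentiate the body-mass velocity to obtain its acceleration history.
body_acc = np.gradient(sol.y[2], sol.t)
```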

4.2. Design Variables and Objective Functions
4.2.1. Design Variables

To optimize the suspension system, the corresponding suspension stiffness and damping parameters are used as the design variables. Their bounds are defined in Table 4.

4.2.2. Objective Functions

The five conflicting objectives for the five-DOF vehicle vibration model are the seat acceleration, the front tire velocity, the rear tire velocity, the relative displacement between the sprung mass and the front tire, and the relative displacement between the sprung mass and the rear tire. For this purpose, four of the ten possible pairs of the five objectives are chosen and considered in separate two-objective optimization processes. All the objective functions must be minimized simultaneously.

4.3. Multiobjective Optimization Result Analysis

The Pareto fronts for the vibration system obtained using MDPL-MOPSO are shown in Figures 15 and 16. Clearly, the objectives in each pair conflict: as one objective is optimized further, the other worsens. The two end points of each front represent the two extreme cases of these conflicting objectives, while the points between these extremes are nondominated optimum points, any of which can be selected as a design point. In other words, any other combination of objective values would correspond to a point inferior to the corresponding Pareto front.

Figure 15 shows the Pareto front of seat acceleration and front tire velocity, whose points represent different nondominated optima with respect to these conflicting objectives. In this figure, points A1, A2 and B1, B2 represent the best seat acceleration and the best front tire velocity, respectively. Note that all the optimum design points on this Pareto front are nondominated and could be chosen by a designer. In a practical engineering optimization problem, the engineer chooses a trade-off solution according to the actual requirements; in Figure 15, points C1 and C2 are such trade-off design points.

Nondominated Pareto fronts for the other chosen pairs of objective functions are shown in Figure 16. Design points A3 and A4 represent the best seat acceleration, while points B2, B3, and B4 represent the best values of the corresponding paired objectives. Similarly, C3 and C4 are selected as the trade-off design points.

The corresponding values of the objective functions and design variables at these optimum design points are provided in Tables 5–8. In these tables, points C_1, C_2, C_3, C_4 and C_1′, C_2′, C_3′, C_4′ represent the optimum design points reported in [1, 2]. It is evident that those design points are dominated by the trade-off design points proposed in this paper.

The time histories of the seat acceleration for the trade-off design points of these figures and for the optimum points proposed in [1, 2] are compared in Figures 17–20. It is obvious from these figures that the seat acceleration obtained with the design points of this paper is better than that obtained with the design points given in [1, 2].

5. Conclusions

In this work, an improved multiobjective particle swarm optimization algorithm was introduced. A new method was proposed to determine the global best guide of each population particle based on the basic concept of the minimum distance of point to line (MDPL). To assess the optimization performance of the proposed algorithm, five two-objective test functions were used, and the results were compared with those of the CD-MOPSO algorithm. The results indicate that the improved multiobjective particle swarm optimization algorithm is a successful method. Moreover, MDPL-MOPSO has been used for the optimal design of a five-degree-of-freedom vehicle vibration model, which resulted in numerous effective trade-offs between conflicting objectives, including seat acceleration, front tire velocity, rear tire velocity, relative displacement between the sprung mass and the front tire, and relative displacement between the sprung mass and the rear tire. A comparison of the obtained results with the literature demonstrates the superiority of this work.

Appendix

Matrices of Model

Matrices $\mathbf{M}$, $\mathbf{C}$, and $\mathbf{K}$ represent the mass, damping, and stiffness matrices, and $\mathbf{K}_t$ is the tire stiffness matrix. The mass matrix $\mathbf{M}$ is described by

The stiffness matrix $\mathbf{K}$ is described by

The damping matrix $\mathbf{C}$ is described by

The tire stiffness matrix $\mathbf{K}_t$ is described by

Conflicts of Interest

The authors declare that they have no conflicts of interest.