Modelling and Simulation in Engineering

Volume 2017, Article ID 2034907, 17 pages

https://doi.org/10.1155/2017/2034907

## Pareto Optimization of a Half Car Passive Suspension Model Using a Novel Multiobjective Heat Transfer Search Algorithm

^{1}Mechanical Engineering Department, School of Technology, Pandit Deendayal Petroleum University, Gandhinagar, Gujarat 382007, India
^{2}Simon Fraser University, Burnaby, BC, Canada
^{3}Department of Mathematics and Statistics, Thompson Rivers University, Kamloops, BC, Canada

Correspondence should be addressed to Vimal Savsani; vimal.savsani@gmail.com

Received 16 August 2016; Revised 18 January 2017; Accepted 24 January 2017; Published 3 May 2017

Academic Editor: Mohamed B. Trabia

Copyright © 2017 Vimal Savsani et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Most modern multiobjective optimization algorithms are based on the search technique of genetic algorithms; however, the search techniques of other recently developed metaheuristics are an emerging topic among researchers. This paper proposes a novel multiobjective optimization algorithm named the multiobjective heat transfer search (MOHTS) algorithm, which is based on the search technique of the heat transfer search (HTS) algorithm. MOHTS employs the elitist nondominated sorting and crowding distance approaches of the elitist-based nondominated sorting genetic algorithm-II (NSGA-II) for obtaining different nondomination levels and for preserving diversity among the optimal set of solutions, respectively. The capability of MOHTS to yield a Pareto front as close as possible to the true Pareto front has been tested on the multiobjective optimization problem of vehicle suspension design, which is governed by a set of five second-order linear ordinary differential equations. A half car passive ride model with two different sets of five objectives is employed for optimizing the suspension parameters using MOHTS and NSGA-II. The optimization studies demonstrate that MOHTS achieves a better nondominated Pareto front with a widespread (diverse) set of optimal solutions compared to NSGA-II; furthermore, comparison of the extreme points of the obtained Pareto fronts reveals the dominance of MOHTS over NSGA-II, the multiobjective uniform diversity genetic algorithm (MUGA), and a combined PSO-GA based MOEA.

#### 1. Introduction

In recent times, multiobjective evolutionary algorithms (MOEAs) have gained enormous attention for solving engineering optimization problems with more than one objective. Multi/many-objective optimization problems (MOOPs) differ from their single objective counterparts (SOOPs) in terms of both their problem definitions and the methods used to solve them. MOOPs have conflicting objectives to be optimized simultaneously, and solving them yields a set of optimal, trade-off, or Pareto solutions (Pareto front points), whereas SOOPs have a single objective at a given time and the solution is usually a single optimal point. The classical methods for solving SOOPs, such as calculus based, gradient based, and elimination and interpolation methods, and the methods for solving MOOPs, for instance, the weighted sum method and the constraint method, are limited to problems with simple objective functions and constraints. This is due to their tendency to get stuck at a suboptimal solution, the dependency of their convergence on the initial guess, and their unsuitability for a large variety of optimization problems [1]. Alternatively, evolutionary algorithms (EAs), or metaheuristics, have had remarkable success in finding the global optimum of complex problems in nearly all disciplines of knowledge. This success of EAs and their use of a population of solutions led researchers to employ the search techniques of EAs to optimize MOOPs. Such algorithms are referred to as multiobjective evolutionary algorithms (MOEAs) when they use EAs as their basic search techniques and simply as multiobjective optimization algorithms (MOOAs) when they use any metaheuristic in general. Primarily, all methods to solve MOOPs have two goals to attain.
The first is to find the Pareto front solutions as close as possible to the optimal Pareto front and the second is to maintain diversity among the optimal set of solutions [1].

One of the oldest attempts to employ an EA to form a population based multiobjective evolutionary algorithm was by Schaffer [2], and the method is known as the vector evaluated genetic algorithm (VEGA). VEGA performs the selection for each objective separately [3], but its incapability in finding all the Pareto front solutions limited its application to very few real world optimization problems. These demerits of VEGA were overcome by algorithms inspired by the work of Goldberg and Holland [4], such as the multiobjective genetic algorithm (MOGA) [5], the nondominated sorting genetic algorithm (NSGA) by Srinivas and Deb [6], and the niched Pareto genetic algorithm (NPGA) by Horn et al. [7]. These new algorithms introduced the concept of Pareto optimality into the selection process, which brought them great success in applications to various disciplines of engineering, as thoroughly described in [8]. Consequently, researchers started experimenting with various ways of assigning fitness values to the population and of maintaining the diversity of optimal points, which led to the development of new state-of-the-art MOEAs, for example, the strength Pareto evolutionary algorithm (SPEA and SPEA2) by Zitzler and Thiele [3] and Zhou et al. [9], the Pareto archived evolution strategy (PAES) by Knowles and Corne [5], the Pareto envelope-based selection algorithm (PESA and PESA-II) by Corne et al. [10] and Horn et al. [7], the elitist-based nondominated sorting genetic algorithm-II (NSGA-II) by Deb et al. [11], and the multiobjective evolutionary algorithm based on decomposition (MOEA/D) by Zhang and Li [12] and its improved versions described in [9]. In addition, swarm intelligence based search algorithms for multiobjective optimization [13–15] have also been developed and applied to a variety of MOOPs.
Several more recently developed multiobjective algorithms [16–19], based on search techniques other than those used in the genetic algorithm [4], other EAs, and particle swarm optimization [20], have also shown great potential in obtaining Pareto fronts close to the true Pareto front.

In this study a nonlinear MOOP of optimizing the vehicle suspension system design with conflicting objectives is solved. The goal is to optimize the design of the passive suspension system of a passenger road vehicle using a 2-dimensional pitch-heave ride model, also known as the half car model. This half car ride model has been used by many researchers as an MOOP to test the performance of their proposed MOOAs, for instance, the multiobjective uniform diversity genetic algorithm (MUGA) by Jamali et al. [21] and the hybrid PSO and GA algorithm for MOOPs by Mahmoodabadi et al. [22]. The following paragraph illustrates the MOOP of the half car ride model.

A vehicle suspension system has essentially three objectives: good ride comfort, good road holding or handling, and limited suspension deflection (suspension rattle space). These objectives conflict with one another, and designers developing such products are always seeking an optimized passive suspension system that largely fulfils all three conflicting objectives simultaneously. In the present study the multiobjective optimization problem is solved by employing a lumped mass-spring-damper vibration model for the half car vehicle suspension system and then applying MOOAs to obtain the desired set of optimal design solutions, that is, trade-off or Pareto optimum design points. The employed 2D half car model has five degrees of freedom, with one lumped sprung mass and two lumped unsprung masses (front and rear tires), and thus has a set of five second-order, linear, ordinary differential equations of motion.

In the present paper a multiobjective heat transfer search (MOHTS) algorithm for solving MOOPs is proposed and applied to the MOOP of optimizing the suspension parameters of the half car model with a driver's seat, having five degrees of freedom in total, as used in [21–23], to evaluate the potential of the proposed algorithm. Moreover, the optimization results of MOHTS are compared with those obtained from the popular MOEA NSGA-II and with the other algorithms that were developed and applied to solve the multiobjective half car optimization problem. The proposed algorithm employs a very recently developed metaheuristic named the heat transfer search algorithm [25] as its base search technique. Additionally, an elitist nondominated sorting approach is adopted for sorting the population, and the crowding distance approach of NSGA-II is adopted for maintaining diversity among the multiple optimal solutions of each nondominated front.

The second section briefly introduces the basic concepts and definitions pertaining to multiobjective optimization. The third section gives an overview of the search technique of the heat transfer search (HTS) algorithm and the elitist-based nondominated sorting and crowding distance approaches of NSGA-II, and then explains the architecture and working of the proposed MOHTS algorithm. The fourth section illustrates two different half car multiobjective optimization problems which are solved numerically, elucidating the potential of the proposed MOHTS algorithm. Firstly, the half car MOOP with five conflicting objective functions as used in [21–23] is employed in numerical study 1 to test and compare the performance of MOHTS with NSGA-II, MUGA [21], and the combined PSO and GA based MOEA [22]; secondly, another half car MOOP with a different, more realistic set of five conflicting objective functions is formulated and tested on the platforms of MOHTS and NSGA-II.

#### 2. Terminologies in Multiobjective Optimization

A multiobjective optimization problem can be stated as follows [1, 26].

Find the design vector $X = (x_1, x_2, \ldots, x_n)^T$, which minimizes

$$F(X) = \left(f_1(X), f_2(X), \ldots, f_k(X)\right)^T \tag{1}$$

subject to inequality and equality constraints, respectively,

$$g_j(X) \le 0, \quad j = 1, 2, \ldots, J \tag{2}$$

$$h_l(X) = 0, \quad l = 1, 2, \ldots, L \tag{3}$$

and bounds on the design vector members

$$x_i^{(L)} \le x_i \le x_i^{(U)}, \quad i = 1, 2, \ldots, n \tag{4}$$

Here, $X$ is the vector of design variables, known as the design vector, with its bounds as shown in (4); $X$ spans an $n$-dimensional Cartesian space called the design space, created by representing each variable with one coordinate axis. $F(X)$ is the vector of objectives, known as the objective vector, which is to be minimized, meaning all the objectives are to be minimized simultaneously. Unlike the single objective optimization problem (SOOP), an MOOP creates a multidimensional objective space; in the present case it is a $k$-dimensional objective space with each Cartesian axis representing one objective function. Equations (2) and (3) show the inequality and equality constraints, respectively, which must be satisfied by the design vector in order to create feasible solutions. The set of all feasible solutions is known as the feasible region $S$. In general, there exists no solution vector which can minimize all the objective functions simultaneously; hence the concept of a Pareto optimum solution, which is used in solving MOOPs, is now introduced.

##### 2.1. Pareto Dominance and Optimality

For the multiobjective minimization problem stated above, the set of optimum solutions (Pareto optimal design vectors) includes those vectors for which no member of the corresponding objective vector can be improved without deteriorating another member [3]. To state Pareto dominance mathematically, consider two design vectors $X_1, X_2 \in S$; then vector $X_1$ is said to dominate vector $X_2$ (denoted by $X_1 \preceq X_2$) iff $f_i(X_1) \le f_i(X_2)$ for all $i \in \{1, 2, \ldots, k\}$ and $f_j(X_1) < f_j(X_2)$ for at least one $j$. In a given set of solutions, the design vectors that are not dominated by any other design vector of the same set are known as nondominated with respect to that set. Furthermore, the design vectors which are nondominated over the entire feasible region are known as Pareto optimal solutions, and these solutions collectively form a set called the Pareto optimal set (PS), whose image in the objective space is the Pareto optimal front (PF).
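The dominance relation above translates directly into a short predicate. The following is a minimal Python sketch (the function name and list-based objective vectors are illustrative, not part of the original paper):

```python
def dominates(f_a, f_b):
    """Return True if objective vector f_a dominates f_b under minimization:
    f_a is no worse in every objective and strictly better in at least one."""
    no_worse = all(a <= b for a, b in zip(f_a, f_b))
    strictly_better = any(a < b for a, b in zip(f_a, f_b))
    return no_worse and strictly_better
```

Note that two identical objective vectors do not dominate each other, and neither does a pair where each is better in a different objective; such pairs are mutually nondominated.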

##### 2.2. Solving MOOPs by Extending EAs and Metaheuristics

Evolutionary or population based algorithms for solving SOOPs yield a set of solutions, known as a population, instead of a single solution after each iteration. This nature of metaheuristics, obtaining multiple solutions in each iteration, makes them suitable for solving MOOPs. As discussed in the first section, MOOPs have primarily two goals: the first is to obtain a Pareto front as close as possible to the optimal Pareto front and the second is to preserve the diversity among the Pareto optimal solutions. The first aim can be achieved by choosing an appropriate fitness assignment scheme that prefers nondominated solutions, and the second can be accomplished by using an appropriate strategy that preserves the diversity among the solutions of each nondominated front. For instance, NSGA [6] used nondominated sorting as a fitness assignment procedure, in which each individual from the population is compared with the others to determine its nondominance and thus obtain the first Pareto front; this is followed by sorting out all the nondominated individuals from the remaining population, and the process is repeated until the entire population is sorted into different fronts. In addition, for maintaining the diversity among the solutions in a front, a front wise sharing function method was used in NSGA, which calculates the Euclidean distance of each solution in a front from the other solutions in the same front. The performance of NSGA was further improved, in terms of converging closer to the true Pareto front by employing elitist-based nondominated sorting, in maintaining a widespread set of solutions by applying the crowding distance approach instead of the sharing function method, and in reducing the computation time. This improved NSGA was named NSGA-II [11]. The proposed algorithm (MOHTS) adopts the fitness assignment and diversity preserving mechanisms of NSGA-II over the search method of HTS for solving MOOPs.

#### 3. Multiobjective Heat Transfer Search Algorithm

This section explains the working of the proposed MOHTS algorithm. The elements of the MOHTS algorithm are the basic search technique, which is HTS, and the nondominated sorting method and diversity preserving crowding distance approach of NSGA-II. These constituent elements are briefly explained first, followed by the MOHTS procedure section, described by a pseudocode for better understanding and easier software implementation by the readers and users of this algorithm.

##### 3.1. Basic Heat Transfer Search Algorithm

A very recently developed metaheuristic called the heat transfer search (HTS) algorithm [25], which is based on the thermodynamics of heat transfer, has been employed as the basic search strategy in the proposed MOHTS algorithm. The search agents in the algorithm are molecules which interact with one another and with the surroundings to attain thermal equilibrium. Unlike other metaheuristic algorithms, HTS has been developed with particular effort toward setting a well-balanced trade-off between intensification (exploitation) and diversification (exploration). The algorithm is divided into three phases, namely, conduction, convection, and radiation, and over the course of the entire search process each phase is performed with equal probability. All three phases intensify the search in the initial iterations and diversify it during the remaining iterations. The following sections briefly explain all three phases.

###### 3.1.1. Conduction Phase

As stated above, in the HTS algorithm the population of solutions is represented as a number of molecules, the design variables are represented as the molecules' temperature levels, and the fitness values are represented as the energy levels of the molecules. In this phase energy is transferred from the molecules with higher energy to the ones with lower energy. The population is updated during this phase as follows. If the number of iterations $g \le g_{\max}/\text{CDF}$ (initial iterations) and if $f(X_j) > f(X_k)$, then

$$X'_{j,i} = X_{k,i} + \left(-R^2 X_{k,i}\right) \tag{5}$$

else ($f(X_j) < f(X_k)$), then

$$X'_{k,i} = X_{j,i} + \left(-R^2 X_{j,i}\right) \tag{6}$$

Else, for the remaining iterations ($g > g_{\max}/\text{CDF}$), if $f(X_j) > f(X_k)$, then

$$X'_{j,i} = X_{k,i} + \left(-r_i X_{k,i}\right) \tag{7}$$

else ($f(X_j) < f(X_k)$), then

$$X'_{k,i} = X_{j,i} + \left(-r_i X_{j,i}\right) \tag{8}$$

Here, $j$ and $k$ indicate randomly picked solutions and vary from 1 to $n$ (the population size), $i$ indicates the design variable index, also selected randomly, ranging from 1 to $m$ (the total number of design variables), and CDF is the conduction factor. Moreover, $R$ is the probability value for the conduction phase (between 0 and 0.3333) and $r_i$ is a random number between 0 and 1. For the first half of the total generations (CDF is taken as 2), a function evaluation comparison is made between two randomly selected solutions, and the inferior of the two is replaced using (5) and (6). For the latter half the inferior solution is replaced using (7) and (8). Thus the switching from (5) and (6) to (7) and (8) provides both intensification and diversification, maintaining the algorithm's potential to seek an optimum solution.
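The conduction move above can be sketched in a few lines of Python. This is a minimal sketch, assuming a single scalar fitness per solution (as in the base single-objective HTS) and illustrative variable bounds; the function name and clipping step are our own, not from the paper:

```python
import random

def conduction_update(X, fitness, g, g_max, CDF=2, lo=-100.0, hi=100.0):
    """One conduction-phase move, sketching the rule of Eqs. (5)-(8):
    pick two random solutions j, k; perturb one randomly chosen variable
    of the worse one, using -R^2 * x early on and -r * x later."""
    n, m = len(X), len(X[0])
    j, k = random.sample(range(n), 2)     # two distinct random solutions
    i = random.randrange(m)               # one random design variable
    if g <= g_max / CDF:
        step = random.uniform(0.0, 0.3333) ** 2   # R^2 with R in [0, 0.3333]
    else:
        step = random.uniform(0.0, 1.0)           # r_i in [0, 1]
    if fitness[j] > fitness[k]:           # j is inferior (minimization)
        X[j][i] = X[k][i] + (-step * X[k][i])
    else:                                 # k is inferior
        X[k][i] = X[j][i] + (-step * X[j][i])
    # keep the touched variable inside its (illustrative) bounds
    for row in (X[j], X[k]):
        row[i] = min(max(row[i], lo), hi)
    return X
```

The switch on `g <= g_max / CDF` is what moves the phase from the smaller, exploitative $R^2$ steps to the larger, exploratory $r_i$ steps.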

###### 3.1.2. Convection Phase

In this phase, thermal equilibrium is attained as the system's mean temperature ($X_{ms}$) interacts with the surrounding temperature ($X_s$). The latter is assumed to be the best solution, and each solution is updated as follows. If the current iteration number $g \le g_{\max}/\text{CVF}$ (initial iterations), then

$$X'_{j,i} = X_{j,i} + R\left(X_s - X_{ms}\,\text{TCF}\right), \quad \text{TCF} = \text{abs}(R - r_i) \tag{9}$$

Else, for the remaining iterations,

$$X'_{j,i} = X_{j,i} + R\left(X_s - X_{ms}\,\text{TCF}\right), \quad \text{TCF} = \text{round}(1 + r_i) \tag{10}$$

It should be noted that, for each solution $X_j$, every design variable $i$ is updated during this phase. Here $R$ is taken in the range from 0.6666 to 1, and CVF (the convection factor) is taken as 10. That is to say, for the first one-tenth of the iterations the algorithm explores the feasible design space, and for the remaining iterations it exploits it; in practice, however, this division is not strict, as exploration (diversification) and exploitation (intensification) are governed by the last terms of (9) and (10).
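A convection-phase sweep can be sketched as follows. This is a minimal Python sketch assuming scalar design variables and the two temperature change factor (TCF) forms given above for the early and late iterations; the function name and list layout are illustrative:

```python
import random

def convection_update(X, best, g, g_max, CVF=10):
    """One convection-phase sweep, sketching Eqs. (9)-(10): every variable
    of every solution moves toward the best solution X_s, scaled by the
    population mean X_ms times the temperature change factor TCF."""
    m = len(X[0])
    # X_ms: component-wise mean temperature of the system (population)
    mean = [sum(row[i] for row in X) / len(X) for i in range(m)]
    R = random.uniform(0.6666, 1.0)
    for row in X:
        for i in range(m):
            r = random.random()
            if g <= g_max / CVF:
                TCF = abs(R - r)          # early iterations, Eq. (9)
            else:
                TCF = round(1 + r)        # later iterations (1 or 2), Eq. (10)
            row[i] = row[i] + R * (best[i] - mean[i] * TCF)
    return X
```

Because `TCF` multiplies the mean term rather than the whole step, the last term of the update keeps both small corrective moves and large jumps available throughout the run, which is the behaviour the paragraph above describes.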

###### 3.1.3. Radiation Phase

In this phase, the system (solution) interacts either with the surroundings, which is the best solution, or within the system, which is some other solution, to attain thermal equilibrium. The solutions are updated as described below. If the number of iterations $g \le g_{\max}/\text{RF}$ (initial iterations) and if $f(X_j) > f(X_k)$, then

$$X'_{j,i} = X_{j,i} + R\left(X_{k,i} - X_{j,i}\right) \tag{11}$$

else ($f(X_j) < f(X_k)$), then

$$X'_{j,i} = X_{j,i} + R\left(X_{j,i} - X_{k,i}\right) \tag{12}$$

Else, for the remaining iterations, if $f(X_j) > f(X_k)$, then

$$X'_{j,i} = X_{j,i} + r_i\left(X_{k,i} - X_{j,i}\right) \tag{13}$$

else ($f(X_j) < f(X_k)$), then

$$X'_{j,i} = X_{j,i} + r_i\left(X_{j,i} - X_{k,i}\right) \tag{14}$$

Here $X_j$ and $X_k$ are randomly selected solutions whose fitness values are compared with each other, and the comparison determines how each solution is updated using (11), (12), (13), and (14). The range of $R$ in this phase is from 0.3333 to 0.6666, and the radiation factor RF is taken to be 2.
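The radiation move can likewise be sketched in Python. As with the previous sketches, a single scalar fitness per solution is assumed and the function name is illustrative:

```python
import random

def radiation_update(X, fitness, g, g_max, RF=2):
    """One radiation-phase move, sketching Eqs. (11)-(14): a random pair
    (j, k) is compared; solution j is pulled toward k if j is worse, or
    pushed away from k if j is better, scaled by R in [0.3333, 0.6666]
    early on and by a fresh random r_i in [0, 1] in later iterations."""
    n, m = len(X), len(X[0])
    j, k = random.sample(range(n), 2)
    i = random.randrange(m)
    if g <= g_max / RF:
        scale = random.uniform(0.3333, 0.6666)   # R, Eqs. (11)-(12)
    else:
        scale = random.random()                   # r_i, Eqs. (13)-(14)
    if fitness[j] > fitness[k]:                   # j worse: attract toward k
        X[j][i] = X[j][i] + scale * (X[k][i] - X[j][i])
    else:                                         # j better: repel from k
        X[j][i] = X[j][i] + scale * (X[j][i] - X[k][i])
    return X
```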

All three phases of the HTS algorithm are employed as the basic search technique in the proposed multiobjective heat transfer search (MOHTS) algorithm. Note that the proposed MOHTS algorithm uses the values of the constants CDF, CVF, and RF as used in [25]. Readers are advised to refer to [25] for the justification behind these particular values.

##### 3.2. Elitist Nondominated Sorting and Crowding Distance Approach

The elitist nondominated sorting method and the diversity preserving crowding distance approach of NSGA-II are introduced in the proposed MOHTS algorithm for sorting the population into different nondomination levels with computed crowding distances. Firstly, for each solution $p$ obtained from the basic search method (HTS) or from the initially generated random population $P_0$, all the objectives of the objective vector are evaluated. In addition, a domination count $n_p$, defined as the number of solutions dominating solution $p$, and the set $S_p$ of solutions dominated by solution $p$ are calculated. Secondly, all the solutions with a domination count of zero are put in the first nondomination level, also known as the first Pareto front (PF), and their nondomination rank is set to 1. Thirdly, for each solution $p$ with $n_p = 0$, each member $q$ of the set $S_p$ is visited and its domination count $n_q$ is reduced by one. If, while being reduced, the count falls to zero, the corresponding solution is put in the second nondomination level and its rank is set to 2. The procedure is repeated for each member of the second nondomination level to obtain the third nondomination level, and subsequently until the whole population is sorted into different nondomination levels.
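The three steps above correspond to the fast nondominated sorting procedure of NSGA-II and can be sketched as follows (a minimal Python sketch; the function name and index-based front representation are illustrative):

```python
def fast_nondominated_sort(F):
    """Sort a list of objective vectors F (minimization) into nondomination
    fronts, NSGA-II style. Returns a list of fronts, each a list of indices
    into F; front 0 is the first (best) Pareto front."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    n = len(F)
    S = [[] for _ in range(n)]   # S[p]: set of solutions dominated by p
    counts = [0] * n             # counts[p]: domination count n_p
    fronts = [[]]
    for p in range(n):
        for q in range(n):
            if dominates(F[p], F[q]):
                S[p].append(q)
            elif dominates(F[q], F[p]):
                counts[p] += 1
        if counts[p] == 0:       # nondominated: rank 1
            fronts[0].append(p)
    i = 0
    while fronts[i]:             # peel off subsequent fronts
        nxt = []
        for p in fronts[i]:
            for q in S[p]:
                counts[q] -= 1
                if counts[q] == 0:
                    nxt.append(q)
        i += 1
        fronts.append(nxt)
    return fronts[:-1]           # drop the trailing empty front
```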

In the crowding distance approach for maintaining diversity among the obtained solutions, the solutions of a front are first sorted according to the value of each objective function in ascending order. An infinite crowding distance is then assigned to the boundary solutions, the 1st and the $l$th, for each objective, where $l$ is the total number of solutions in the particular nondominated set. The boundary solutions are those with the minimum ($f_m^{\min}$) and maximum ($f_m^{\max}$) function values. Except for the boundary solutions, all the other solutions of the sorted population (2nd to $(l-1)$th) for each objective $m$ are assigned the crowding distance $d_i$ as

$$d_i = d_i + \frac{f_m^{(i+1)} - f_m^{(i-1)}}{f_m^{\max} - f_m^{\min}} \tag{15}$$

In (15) the numerator on the right hand side is the difference in the values of objective function $m$ for the two neighbouring solutions (the $(i+1)$th and $(i-1)$th) of solution $i$. Each solution is now assigned two entities: a nondomination rank $i_{\text{rank}}$ and a crowding distance $d_i$. A crowded comparison operator ($\prec_n$) is used to compare two solutions $i$ and $j$: $i \prec_n j$ if $i_{\text{rank}} < j_{\text{rank}}$, or ($i_{\text{rank}} = j_{\text{rank}}$ and $d_i > d_j$). That is to say, between two solutions, the one with the lower nondomination rank is preferred, and if both solutions have the same nondomination rank, then the one with the larger crowding distance is preferred.
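The accumulation in (15) over all objectives can be sketched as follows (a minimal Python sketch for one nondominated front; the function name is illustrative, and equal-valued objectives are skipped to avoid division by zero):

```python
def crowding_distance(F):
    """Crowding distance of each solution in one nondominated front F
    (a list of objective vectors), per Eq. (15): boundary solutions get
    infinity; interior ones accumulate normalized neighbour gaps."""
    l = len(F)
    d = [0.0] * l
    num_obj = len(F[0])
    for m in range(num_obj):
        # sort solution indices by objective m, ascending
        order = sorted(range(l), key=lambda s: F[s][m])
        d[order[0]] = d[order[-1]] = float("inf")   # boundary solutions
        f_min, f_max = F[order[0]][m], F[order[-1]][m]
        if f_max == f_min:
            continue                                 # degenerate objective
        for pos in range(1, l - 1):                  # interior solutions
            gap = F[order[pos + 1]][m] - F[order[pos - 1]][m]
            d[order[pos]] += gap / (f_max - f_min)
    return d
```

The returned distances are then used, together with the nondomination ranks, in the crowded comparison described above.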

##### 3.3. MOHTS Procedure

The procedure of the proposed MOHTS algorithm is shown in Pseudocode 1. Firstly, parameters such as the population size ($N$), the termination criterion, here the maximum number of generations ($g_{\max}$), and the conduction, convection, and radiation factors (CDF, CVF, and RF, resp.) are initialized. Secondly, a random parent population $P_t$ in the feasible region $S$ is generated, and each objective function of the objective vector is evaluated for $P_t$. Next, elitist-based nondominated sorting and crowding distance computation, as explained in the earlier section, are applied to $P_t$. Thirdly, the HTS algorithm is employed to create the offspring population $Q_t$, which is then merged with $P_t$ to form the combined parent-offspring population $R_t = P_t \cup Q_t$. This population is sorted based on elitist nondomination, and, based on the computed values of rank and crowding distance, the best $N$ solutions are retained to form the new parent population. This process is repeated until the maximum number of generations (iterations) is reached. It should be noted that the same algorithm can also be used with the termination criterion set on the basis of the number of function evaluations. Since the nondominated sorting and crowding distance assignment of MOHTS are adopted from NSGA-II, the computational complexity of MOHTS is the same as that of NSGA-II, which is $O(MN^2)$, where $M$ is the total number of objective functions and $N$ is the population size.
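The overall generate-merge-select loop can be sketched schematically in Python. This is only a skeleton of the procedure described above, not the paper's implementation: the HTS search moves are stood in for by a simple bounded random perturbation, and the rank-plus-crowding truncation is simplified to keeping nondominated merged solutions topped up at random (the full algorithm fills front by front and breaks ties by crowding distance):

```python
import random

def mohts(objectives, bounds, pop_size=20, max_gen=50):
    """Schematic MOHTS main loop: random parent population, offspring via
    the base search moves (stubbed as a perturbation here), merge parents
    and offspring, and keep pop_size survivors favouring nondominated ones."""
    def evaluate(x):
        return [f(x) for f in objectives]

    def dominates(a, b):
        return (all(u <= v for u, v in zip(a, b))
                and any(u < v for u, v in zip(a, b)))

    def perturb(x):
        # stand-in for the conduction/convection/radiation moves of HTS
        return [min(max(xi + random.uniform(-0.1, 0.1) * (hi - lo), lo), hi)
                for xi, (lo, hi) in zip(x, bounds)]

    P = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(max_gen):
        Q = [perturb(x) for x in P]           # offspring population Q_t
        R = P + Q                             # merged population R_t, size 2N
        F = [evaluate(x) for x in R]
        nondom = [x for i, x in enumerate(R)
                  if not any(dominates(F[j], F[i]) for j in range(len(R)))]
        # simplified elitist truncation back to N members
        P = (nondom + random.sample(R, len(R)))[:pop_size]
    return P
```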