Abstract

In this paper, two novel algorithms are designed for solving biobjective engineering optimization problems. To obtain optimal solutions of biobjective optimization problems quickly and accurately, the algorithms, which combine Newton's method with the Neumann series expansion and the weighted sum method, are applied to handle the two objectives, and the Pareto optimal front is obtained by adjusting the weighting factors. Theoretical analysis and numerical examples demonstrate the validity and effectiveness of the proposed algorithms. Moreover, an effective biobjective optimization strategy, based on the two algorithms and the surrogate model method, is developed for engineering problems. The effectiveness of the optimization strategy is demonstrated by its application to the optimal design of the dummy head structure in car crash experiments.

1. Introduction

Research on multiobjective optimization problems is of great importance in engineering design. For example, fuel economy, maximum carrying capacity, and minimum weight must be considered simultaneously in the design of aircraft and spacecraft [1]; high rigidity, light weight, and low-order modes must likewise be considered jointly in the thin-walled beam section optimization of automobile body structure design [2]. According to investigations of the dummy head under automobile impact conditions, the peak synthetic accelerations of frontal and lateral drops are the main indicators of the mechanical characteristics of the dummy head [3]. A bilevel optimization was carried out for the cross-sectional shape of a thin-walled car body frame constrained by static and dynamic stiffness [4]. What these engineering problems have in common is that complex mechanical structures and large numbers of design variables lead to intricate solution procedures and a large amount of computation when multiple objectives must be met simultaneously. Moreover, in most cases, improving one objective affects the others, and it is almost impossible to find a solution at which every objective function reaches its optimal value [5]. Therefore, research on multiobjective optimization is of particular significance to engineering.

In most cases, an optimal solution that satisfies all objectives of a multiobjective problem simultaneously does not exist. Thus, the key to describing such an optimization problem is establishing a scientific and reasonable standard. When the individual optima cannot be attained simultaneously, it is an effective and acceptable approach to keep all objective values at a relatively good level, so that designers can choose one of several relatively good designs to guide decisions based on engineering background knowledge. The concept of the Pareto optimal solution of a multiobjective optimization problem is an objective description that takes every objective into account thoroughly, so that designers can compute optimization schemes without lowering the overall optimization level [6].

The basic idea for solving a multiobjective optimization problem numerically is scalarization, which replaces the vector (multiobjective) optimization problem with a suitable scalar (single-objective) optimization problem [7]. Commonly used scalarization algorithms include the minimax method [8], the constraint method [9], and the weighted sum method [10].

The minimax method is a classical multiobjective optimization algorithm [8]. By proving that the set of Pareto optimal solutions coincides with the set of stationary points, it provides a parameter-free method for computing a point satisfying a certain first-order necessary condition in multiobjective optimization. It borrows the idea of Newton's method for single-objective optimization and, according to the authors' theoretical results, Newton's method for multiobjective optimization behaves exactly as its single-criterion counterpart: it is fairly robust with respect to the dimension of the problem and the chosen starting point, the rate of convergence is at least superlinear, and it is quadratic if the second derivatives are Lipschitz continuous. However, the authors did not discuss adapting their approach to constrained multiobjective problems. A quasi-Newton method for multiobjective optimization was proposed by Qu et al. [11] and Povalej [12]. Using the well-known BFGS method and the idea of [8], the authors proved that the quasi-Newton method for multiobjective optimization converges superlinearly to the solution of the given problem if all functions involved are twice continuously differentiable and strongly convex. The advantage of this method, compared to Newton's approach, is that approximating the Hessian matrices is usually considerably faster than evaluating them exactly, a difference that becomes especially noticeable as the dimension of the problem grows. The adaptation of this approach to constrained multiobjective optimization was not considered either.

A representative constraint method is the ε-constraint method [9]; it retains the objective function that the designer most prefers as the objective of a single-objective optimization problem, turning the other objective functions into constraints by adding restriction bounds [13]. This algorithm is efficient, produces Pareto solutions over a relatively broad range, and does not require prioritizing (weighting) the objective functions in advance [6]. However, the ε-constraint method cannot guarantee that the result is a Pareto optimal solution; selecting appropriate constraint values often requires prior knowledge, and the computational efficiency drops as the number of objective functions increases. The main drawbacks of these common methods are their computational limitations and the unsatisfactory quality of the Pareto optimal solutions they produce [14].

The weighted sum method has been widely used because of its simplicity and high computational efficiency [15]. The early weighted sum method transforms multiple objectives into an aggregated objective function by multiplying each objective function by a weighting factor and summing the results. But it has two drawbacks: difficulty in obtaining uniformly distributed Pareto optimal solutions and failure on nonconvex problems [16–18].

Many methods for solving nonconvex optimization problems have been proposed over the decades. A typical one is the normal-boundary intersection method (NBIM) [19]. It approaches a group of Pareto optimal solutions through a geometrically intuitive parametric method and gives an accurate picture of the Pareto front. NBIM can obtain Pareto optimal solutions in nonconvex regions, and the solutions are uniformly distributed. However, it still has serious defects; for example, non-Pareto-optimal (dominated) solutions are also produced and must be filtered out. The adaptive weighted sum method (AWSM) was presented for biobjective optimization problems [20]; it adds inequality constraints to the traditional weighted sum method and redefines the feasible regions of the optimization problem. In this way the solution region is extended, and the optimal solutions are iterated automatically.

There are many biobjective optimization problems in engineering applications in which improving one objective immediately degrades the other. For example, energy absorption and impact force are a typical pair of contradictory objectives in bumper crash box design, where the peak impact force should be decreased while the energy absorption is maximized. In practice, however, as the energy absorption rises, the impact force of the crash box becomes even greater. Biobjective optimization is therefore of great significance in engineering.

In this paper, two new algorithms for biobjective optimization problems are presented: the Newton Neumann Series Expansion Algorithm (NNSEA) for unconstrained problems and the Newton Neumann Series Expansion Frisch Algorithm (NNSEFA) for constrained problems. Two examples are given to demonstrate the validity and effectiveness of the algorithms. Finally, the two algorithms are applied to an optimization problem in dummy head design, and good results are obtained. The following sections discuss them in detail.

2. Newton Weighted Sum Algorithm for Unconstrained Multiobjective Optimization

2.1. Definition and Some Theories of the Multiobjective Optimization Problems

In order to accurately describe the concept of Pareto optimal solution, some definitions and symbols of multiobjective optimization will be presented first.

In this paper, denote by $\mathbb{N}$ the set of positive integers, by $\mathbb{R}$ the set of real numbers, by $\mathbb{R}^n$ the $n$-dimensional real vector space, and by $\mathbb{R}^{n\times n}$ the linear space of $n\times n$ real matrices. The Euclidean norm on $\mathbb{R}^n$ will be denoted by $\|\cdot\|$, and we will use the same notation for the induced operator norms on the corresponding matrix spaces. $x=(x_1,\dots,x_n)^T\in\mathbb{R}^n$ is a vector of design variables. $F=(f_1,\dots,f_m)^T$ is the vector-valued objective function whose components $f_i:\mathbb{R}^n\to\mathbb{R}$ are real functions of $n$ variables for all $i=1,\dots,m$.

A general multiobjective optimization problem can be defined as follows:
$$\min_{x\in\Omega} F(x)=(f_1(x),\dots,f_m(x))^T, \tag{1}$$
where $F$ is called the objective vector-valued function and $\Omega$ is the feasible region of (1). $\Omega$ can be described by
$$\Omega=\{x\in\mathbb{R}^n \mid h_j(x)=0,\ j=1,\dots,p;\ g_k(x)\le 0,\ k=1,\dots,q\}, \tag{2}$$
where $h_j$ and $g_k$ are the equality and inequality constraints of the multiobjective optimization, respectively. If $\Omega=\mathbb{R}^n$, (1) is called an unconstrained multiobjective optimization problem.

For solving (1), we provide the concept of Pareto optimality as explained below.

Definition 1. A point $x^*\in\Omega$ is a local Pareto optimum or local Pareto optimal solution of (1) if and only if there does not exist $x\in\Omega$ in a neighborhood of $x^*$ such that
$$f_i(x)\le f_i(x^*)\ \text{for all } i=1,\dots,m \quad\text{and}\quad f_j(x)<f_j(x^*)\ \text{for at least one } j. \tag{3}$$
Note that if $\Omega$ and $F$ are both convex, then local Pareto optimality is equivalent to global Pareto optimality. A Pareto optimal solution is thus a reasonable solution, one that satisfies the objectives at an acceptable level without being dominated by any other solution.

In order to observe every objective function and the trend of the optimization process more intuitively, and to avoid ill-advised decisions, the set of objective function values can be used. The detailed definition is given below.

Definition 2. If $X^*$ is the set of Pareto optimal solutions of (1), then the set $Y^*=\{F(x)\mid x\in X^*\}$ is the Pareto front of (1).

Assume $F$ is twice continuously differentiable on the feasible region $\Omega$, i.e., $F\in C^2(\Omega)$. For $x\in\Omega$, let $\nabla f_i(x)$ and $\nabla^2 f_i(x)$ denote the gradient and Hessian matrix of $f_i$ at $x$ for all $i=1,\dots,m$, respectively.

Throughout the paper, unless explicitly mentioned, we will assume that each $f_i$ is twice continuously differentiable and strongly convex, which implies that $\nabla^2 f_i(x)$ is positive definite for all $x\in\Omega$ and $i=1,\dots,m$.

2.2. Newton Method Based on Weighted Sum Technique

Newton's method is extensively used in optimization problems; its iteration direction incorporates the gradient and Hessian matrix information of the objectives. When the initial iteration point is close to the optimal point, convergence is rapid, and if the objective functions satisfy certain conditions, superlinear or quadratic convergence can be achieved. Therefore, in the multiobjective optimization problem, Newton's method combined with the weighted sum method is chosen as the main algorithm. The derivation is as follows.

In the multiobjective optimization problem (1), for each $f_i$, the Taylor expansion of $f_i$ around a point $x_0$ is
$$f_i(x)=f_i(x_0)+\nabla f_i(x_0)^T(x-x_0)+\frac{1}{2}(x-x_0)^T\nabla^2 f_i(x_0)(x-x_0)+o(\|x-x_0\|^2). \tag{5}$$

Hence, the second-order approximate Taylor expansion of $f_i$ around $x_0$ is
$$q_i(x)=f_i(x_0)+\nabla f_i(x_0)^T(x-x_0)+\frac{1}{2}(x-x_0)^T\nabla^2 f_i(x_0)(x-x_0). \tag{6}$$

Here, in (6), $\nabla^2 f_i(x_0)$ is positive definite. Hence, the problem of finding the minimum of $f_i$ is converted into finding the minimum of its second-order approximation $q_i$. Differentiating both sides of (6) with respect to $x$ and using the necessary condition for an extreme value, we obtain
$$\nabla f_i(x_0)+\nabla^2 f_i(x_0)(x-x_0)=0. \tag{7}$$

Considering that the algorithm is an iterative process, take $x_0=x_k$; we have the Newton iteration for a single objective as
$$x_{k+1}=x_k-[\nabla^2 f_i(x_k)]^{-1}\nabla f_i(x_k), \tag{8}$$

and we note the iteration direction of Newton's method for a single objective at $x_k$:
$$d_k=-[\nabla^2 f_i(x_k)]^{-1}\nabla f_i(x_k). \tag{9}$$

For solving problem (1), by taking weighted sums of $f_i$ for all $i=1,\dots,m$, we have the sum function denoted by $f$:
$$f(x)=\sum_{i=1}^{m}w_i f_i(x), \tag{10}$$
where the weighting factors $w_i$ satisfy $\sum_{i=1}^{m}w_i=1$ and $w_i>0$ for all $i=1,\dots,m$.

Differentiating both sides of (10) with respect to $x$, the iterative formula of the Newton weighted sum algorithm for (1) is
$$x_{k+1}=x_k-\Big[\sum_{i=1}^{m}w_i\nabla^2 f_i(x_k)\Big]^{-1}\sum_{i=1}^{m}w_i\nabla f_i(x_k). \tag{11}$$

So, we can conclude that the direction of the Newton weighted sum algorithm for (1) at $x_k$ is
$$d_k=-\Big[\sum_{i=1}^{m}w_i\nabla^2 f_i(x_k)\Big]^{-1}\sum_{i=1}^{m}w_i\nabla f_i(x_k). \tag{12}$$
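As a concrete illustration of (12), the weighted-sum Newton direction can be computed in a few lines of numerical code. The sketch below is ours, not part of the original derivation; the two quadratic test objectives are hypothetical stand-ins for the $f_i$ in (1).

```python
import numpy as np

# Hypothetical strongly convex test objectives; gradients and
# Hessians are written out by hand for clarity.
def grad_f1(x):            # f1(x) = x1^2 + x2^2
    return 2.0 * x

def hess_f1(x):
    return 2.0 * np.eye(2)

def grad_f2(x):            # f2(x) = (x1 - 1)^2 + 2*x2^2
    return np.array([2.0 * (x[0] - 1.0), 4.0 * x[1]])

def hess_f2(x):
    return np.diag([2.0, 4.0])

def newton_weighted_sum_direction(x, w1, w2):
    """Direction (12): d = -[w1*H1 + w2*H2]^{-1} (w1*g1 + w2*g2)."""
    H = w1 * hess_f1(x) + w2 * hess_f2(x)
    g = w1 * grad_f1(x) + w2 * grad_f2(x)
    return -np.linalg.solve(H, g)

x = np.array([2.0, 2.0])
print(newton_weighted_sum_direction(x, 0.5, 0.5))
```

Sweeping the pair $(w_1,w_2)$ over a grid and iterating this step traces out the Pareto front, which is the idea developed in the following sections.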

3. Neumann Series Expansion

The Neumann series is an expansion of a matrix inverse, and its value lies in making the matrix inversion efficient. Engineering problems can be solved by Newton's method, but when there are many design variables, a considerable amount of computation is needed. When there are two objective functions, introducing the expansion principle not only retains the advantages of Newton's method but also roughly halves the work needed to handle the two objective functions. The theorem is as follows:

Theorem 1. Assume that $A$ is an $n$-order invertible matrix. Then, for a matrix $B\in\mathbb{R}^{n\times n}$, $(A+B)^{-1}$ exists, and when
$$\|A^{-1}B\|<1, \tag{13}$$
$$(A+B)^{-1}=\sum_{j=0}^{\infty}(-A^{-1}B)^{j}A^{-1} \tag{14}$$
holds. (14) is called the Neumann series expansion.
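A quick numerical check of Theorem 1 (our own sketch, with an arbitrarily chosen pair of matrices) shows how the truncated series converges to the true inverse once condition (13) holds:

```python
import numpy as np

A = np.diag([4.0, 5.0])                    # invertible
B = np.array([[0.5, 0.2], [0.1, 0.6]])     # small relative to A
A_inv = np.linalg.inv(A)
assert np.linalg.norm(A_inv @ B, 2) < 1.0  # convergence condition (13)

# Truncated Neumann series (14): (A+B)^{-1} ~ sum_j (-A^{-1}B)^j A^{-1}
approx = np.zeros_like(A)
term = A_inv.copy()
for _ in range(25):
    approx += term
    term = (-A_inv @ B) @ term

# Difference from the exact inverse is on the order of 1e-12
print(np.max(np.abs(approx - np.linalg.inv(A + B))))
```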
According to (10), the weighted sum function of problem (31) is
$$f(x)=w_1 f_1(x)+w_2 f_2(x), \tag{15}$$
and the corresponding Newton iterative scheme is
$$x_{k+1}=x_k-[w_1\nabla^2 f_1(x_k)+w_2\nabla^2 f_2(x_k)]^{-1}[w_1\nabla f_1(x_k)+w_2\nabla f_2(x_k)]. \tag{16}$$
Then the iterative direction vector of (16) at $x_k$ is
$$d_k=-[w_1\nabla^2 f_1(x_k)+w_2\nabla^2 f_2(x_k)]^{-1}[w_1\nabla f_1(x_k)+w_2\nabla f_2(x_k)]. \tag{17}$$
According to (17) and the Neumann series expansion of Theorem 1, let $w_1\nabla^2 f_1(x_k)$ be the matrix $A$ in (13), and let $w_2\nabla^2 f_2(x_k)$ be the matrix $B$ in (13); therefore,
$$[w_1\nabla^2 f_1(x_k)+w_2\nabla^2 f_2(x_k)]^{-1}=\sum_{j=0}^{\infty}\Big(-\frac{w_2}{w_1}[\nabla^2 f_1(x_k)]^{-1}\nabla^2 f_2(x_k)\Big)^{j}\frac{1}{w_1}[\nabla^2 f_1(x_k)]^{-1}. \tag{18}$$
Define
$$Q(x)=[\nabla^2 f_1(x)]^{-1}\nabla^2 f_2(x) \tag{19}$$
with $x\in\Omega$. Then we should show that $\big\|\frac{w_2}{w_1}Q(x)\big\|<1$ holds with $w_1$ and $w_2$ adjusted under certain conditions.
Assume that $\Omega$ is a bounded set; for each $x\in\Omega$, $Q(x)$ defined by (19) is a linear operator.

The boundedness and continuity of $Q(x)$ are established as follows.

Theorem 2. A linear operator $T:\mathbb{R}^n\to\mathbb{R}^n$ is bounded if and only if there exists a constant $c>0$ such that
$$\|Tx\|\le c\|x\| \tag{20}$$
holds for all $x\in\mathbb{R}^n$.

Proof. Assume that $T$ is a bounded linear operator, so the unit ball $U=\{x:\|x\|\le 1\}$ is mapped by $T$ into a bounded set, i.e., the image set $T(U)$ is a bounded set on $\mathbb{R}^n$.

Take $c=\sup_{\|x\|\le 1}\|Tx\|$. If $x=0$, then $x$ satisfies (20) trivially. For any $x\neq 0$, we have $\big\|x/\|x\|\big\|=1$ and
$$\Big\|T\Big(\frac{x}{\|x\|}\Big)\Big\|\le c,$$
i.e., $\|Tx\|\le c\|x\|$. Therefore, if $T$ is a bounded linear operator, then (20) holds.

Conversely, assume that (20) holds and let $S$ be a bounded set. The boundedness of $S$ implies that there exists a positive constant $r$ such that $\|x\|\le r$ holds for every $x\in S$. From (20), we have
$$\|Tx\|\le c\|x\|\le cr.$$

Therefore, $T(S)$ is a bounded set on $\mathbb{R}^n$.

Theorem 3. A linear operator $T$ is continuous if and only if $T$ is bounded.

Proof (the necessary condition). Assume $T$ is continuous but unbounded; then inequality (20) is not satisfied. Hence, for every natural number $k$ there exists $x_k\neq 0$ such that $\|Tx_k\|>k\|x_k\|$ holds. Take $y_k=x_k/(k\|x_k\|)$; we have $\|y_k\|=1/k\to 0$, therefore $y_k\to 0$, but $\|Ty_k\|>1$, which contradicts the continuity of $T$ at the origin.

(The sufficient condition). From inequality (20), if $x_j\to x$, then we have $\|Tx_j-Tx\|=\|T(x_j-x)\|\le c\|x_j-x\|\to 0$; therefore $Tx_j\to Tx$ and $T$ is continuous.

Because $Q(x)=[\nabla^2 f_1(x)]^{-1}\nabla^2 f_2(x)$ and, under our assumptions, $\nabla^2 f_i(x)$ is a continuous matrix function of $x\in\Omega$ for $i=1,2$, the linear operator $Q(x)$ is bounded on the bounded set $\Omega$.

From the foregoing, we conclude that $Q(x)$ is a continuous and bounded linear operator, so $\|Q(x)\|$ has an upper bound on $\Omega$. Similarly, the linear operators $[\nabla^2 f_1(x)]^{-1}$ and $\nabla^2 f_2(x)$ have their own upper bounds on $\Omega$. So, returning to (18) and using the compatibility of norms, we have
$$\Big\|\frac{w_2}{w_1}Q(x)\Big\|\le\frac{w_2}{w_1}\big\|[\nabla^2 f_1(x)]^{-1}\big\|\,\big\|\nabla^2 f_2(x)\big\|\le\frac{w_2}{w_1}M. \tag{29}$$

In (29), $\|Q(x)\|$ has a common upper bound on $\Omega$, denoted by $M$. Hence, the requirement $\|\frac{w_2}{w_1}Q(x)\|<1$ can be fully satisfied by adjusting $w_1$ and $w_2$ appropriately.

4. An Algorithm for Unconstrained Biobjective Optimization Problem Based on Neumann Series Expansion (NSE)

When there are only two objective functions, a biobjective optimization algorithm is established in this paper by introducing the technique of NSE [21] into the Newton weighted sum method. With this algorithm, the complicated inversion of an $n$-order matrix is avoided: only the Hessian matrix of one objective function in (1) needs to be inverted. The operating speed is therefore improved, especially for high-dimensional design variables. Hence, the proposed algorithm is named the Newton Neumann Series Expansion Algorithm (NNSEA).

When there are only two objective functions, problem (1) can be rewritten as
$$\min_{x\in\Omega} F(x)=(f_1(x),f_2(x))^T. \tag{30}$$

Note that when the feasible region is extended to $\mathbb{R}^n$, the constraints become inactive and (30) turns into the unconstrained biobjective optimization problem
$$\min_{x\in\mathbb{R}^n} F(x)=(f_1(x),f_2(x))^T. \tag{31}$$

The whole process of NNSEA for calculating a biobjective Pareto optimal solution is summarized as Algorithm 1:

  (Initialization)
Step 1 Choose an initial point $x_0$ and a set of weighting factors $w_1,w_2$ with $w_1+w_2=1$, $w_1,w_2>0$, and a stopping criterion denoted by a small value $\varepsilon>0$; compute $\nabla f_i(x_k)$ and $\nabla^2 f_i(x_k)$ for $i=1,2$; set $k=0$.
  (Main loop)
Step 2 If $\|w_1\nabla f_1(x_k)+w_2\nabla f_2(x_k)\|<\varepsilon$, then stop and output $x_k$. Else, go to Step 3.
Step 3 Solve $Q(x_k)=[\nabla^2 f_1(x_k)]^{-1}\nabla^2 f_2(x_k)$.
  If $\|\frac{w_2}{w_1}Q(x_k)\|<1$, the direction is
     $d_k=-\sum_{j}\big(-\frac{w_2}{w_1}Q(x_k)\big)^{j}\frac{1}{w_1}[\nabla^2 f_1(x_k)]^{-1}[w_1\nabla f_1(x_k)+w_2\nabla f_2(x_k)]$;
  Else, adjust the weighting factors $w_1$ and $w_2$ to satisfy $\|\frac{w_2}{w_1}Q(x_k)\|<1$.
  (Update)
Step 4 Define $x_{k+1}=x_k+d_k$.
Step 5 Set $k=k+1$, go to Step 2.

By selecting multiple groups of weighting factors, each group yields one Pareto optimal solution, and together they form the corresponding Pareto front. The process of NNSEA for solving the unconstrained biobjective problem is shown in Figure 1.
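Algorithm 1 can be prototyped compactly. The following Python sketch is our illustrative reading of the steps above; the truncation depth N of the Neumann series and the error handling for the weight-adjustment branch are our own assumptions, not values from the paper.

```python
import numpy as np

def nnsea(x, w1, w2, grads, hessians, eps=1e-8, N=20, max_iter=100):
    """Sketch of Algorithm 1 (NNSEA) for two objectives.

    grads / hessians: pairs of callables for f1 and f2.
    N: truncation depth of the Neumann series (our assumption).
    """
    g1, g2 = grads
    H1, H2 = hessians
    for _ in range(max_iter):
        g = w1 * g1(x) + w2 * g2(x)
        if np.linalg.norm(g) < eps:          # Step 2: stopping test
            return x
        H1_inv = np.linalg.inv(H1(x))        # only one matrix inversion
        Q = H1_inv @ H2(x)                   # Step 3: Q(x)
        if np.linalg.norm((w2 / w1) * Q, 2) >= 1.0:
            # Algorithm 1 adjusts w1, w2 here; this sketch just signals it.
            raise ValueError("adjust w1, w2 so that ||(w2/w1) Q|| < 1")
        # Truncated Neumann series (18) for [w1*H1 + w2*H2]^{-1}
        term = H1_inv / w1
        W_inv = np.zeros_like(term)
        for _ in range(N):
            W_inv += term
            term = (-(w2 / w1) * Q) @ term
        x = x - W_inv @ g                    # Steps 4-5: update
    return x
```

Calling `nnsea` once per weighting-factor pair, as in Figure 1, produces one Pareto optimal solution per pair.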

5. NNSEFA Based on NNSEA

NNSEA is a method for solving unconstrained biobjective optimization problems, but in engineering applications the design variables are mostly subject to inequality constraints. Therefore, extending NNSEA to handle constraints is of great practical significance. By introducing a penalty function, the constraints can be transformed into penalty terms and integrated into the objective functions. Based on NNSEA and combined with Frisch's penalty function method, the proposed algorithm for solving constrained biobjective optimization is named the Newton Neumann Series Expansion Frisch Algorithm (NNSEFA).

5.1. Handling the Constraints with Frisch Penalty Function

The Frisch penalty function is one of the interior penalty function methods; it employs a logarithm to handle the constraints. Consider problem (30) with the inequality constraints
$$g_k(x)\le 0,\quad k=1,\dots,q. \tag{32}$$

The penalty term is constructed by Frisch's method and can be expressed as
$$P(x)=-\sum_{k=1}^{q}\ln(-g_k(x)). \tag{33}$$
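For illustration, the Frisch log-barrier term of (33), scaled by a penalty factor, can be coded directly. This is our own sketch; the box constraints in the example are hypothetical placeholders, not constraints from the paper.

```python
import numpy as np

def frisch_penalty(x, constraints, r):
    """Frisch interior penalty term r * sum(-ln(-g_k(x))), per (33).

    constraints: list of callables g_k with g_k(x) < 0 at interior points.
    r: penalty factor, scaled down toward zero during the solve.
    """
    vals = np.array([g(x) for g in constraints])
    if np.any(vals >= 0):        # x must stay strictly feasible
        return np.inf
    return -r * np.sum(np.log(-vals))

# Example with hypothetical box constraints 0 < x_i < 1:
cons = [lambda x: -x[0], lambda x: x[0] - 1.0,
        lambda x: -x[1], lambda x: x[1] - 1.0]
print(frisch_penalty(np.array([0.5, 0.5]), cons, r=0.1))
```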

When the penalty term, multiplied by a penalty factor $r>0$, is close to zero, the design variables satisfy the constraints well. During the solution process, $r$ should be scaled down until the penalty term is small enough to be neglected compared to the objective values. At that point, the obtained solution is not only equivalent to the optimal solution of the original problem but also satisfies the constraints. The process of NNSEFA is as follows.

First, denote the logarithmic penalty function of $f_i$ as
$$\varphi_i(x,r)=f_i(x)-r\sum_{k=1}^{q}\ln(-g_k(x)),\quad i=1,2. \tag{34}$$
The Taylor expansion of $\varphi_i$ around $x_0$ is
$$\varphi_i(x,r)=\varphi_i(x_0,r)+\nabla\varphi_i(x_0,r)^T(x-x_0)+\frac{1}{2}(x-x_0)^T\nabla^2\varphi_i(x_0,r)(x-x_0)+o(\|x-x_0\|^2). \tag{35}$$

Hence, the second-order approximate Taylor expansion of $\varphi_i$ around $x_0$ is
$$\tilde{\varphi}_i(x,r)=\varphi_i(x_0,r)+\nabla\varphi_i(x_0,r)^T(x-x_0)+\frac{1}{2}(x-x_0)^T\nabla^2\varphi_i(x_0,r)(x-x_0). \tag{36}$$

Then, by adding the $\varphi_i$ together for $i=1,2$ with weights, a sum function denoted by $\varphi$ can be expressed as
$$\varphi(x,r)=w_1\varphi_1(x,r)+w_2\varphi_2(x,r), \tag{37}$$
where $w_1+w_2=1$ and $w_1,w_2>0$.

By differentiating both sides of (37) with respect to $x$, the iterative formula is
$$x_{k+1}=x_k-[w_1\nabla^2\varphi_1(x_k,r)+w_2\nabla^2\varphi_2(x_k,r)]^{-1}[w_1\nabla\varphi_1(x_k,r)+w_2\nabla\varphi_2(x_k,r)]. \tag{38}$$

So, the iteration direction at $x_k$ is
$$d_k=-[w_1\nabla^2\varphi_1(x_k,r)+w_2\nabla^2\varphi_2(x_k,r)]^{-1}[w_1\nabla\varphi_1(x_k,r)+w_2\nabla\varphi_2(x_k,r)]. \tag{39}$$

Just as in NNSEA, by calculating $Q(x_k)=[\nabla^2\varphi_1(x_k,r)]^{-1}\nabla^2\varphi_2(x_k,r)$ and examining the value of $\|\frac{w_2}{w_1}Q(x_k)\|$, we choose the direction properly. In order to ensure that all directions are descent directions during the optimization, an identification process is introduced, with the negative gradient direction of the sum function, $-\nabla\varphi(x_k,r)$, as a fallback. The criterion of this strategy is the sign of the inner product of the Newton direction and the negative gradient direction,
$$d_k^T\big(-\nabla\varphi(x_k,r)\big)>0. \tag{40}$$
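The identification in (40) amounts to a sign test. A minimal sketch (our own; the direction and gradient are supplied by the caller):

```python
import numpy as np

def choose_direction(d_newton, grad_sum):
    """Keep the Newton direction only if it is a descent direction,
    i.e., its inner product with the negative gradient is positive."""
    if np.dot(d_newton, -grad_sum) > 0.0:
        return d_newton
    return -grad_sum   # fall back to steepest descent
```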

5.2. NNSEFA for Constrained Biobjective Optimization Problem

The whole process of NNSEFA for calculating a constrained biobjective Pareto optimal solution is summarized as Algorithm 2:

  (Initialization)
Step 1 Choose an initial point $x_0$ and a set of weighting factors $w_1,w_2$ ($w_1+w_2=1$, $w_1,w_2>0$); give stopping criteria $\varepsilon_1>0$ and $\varepsilon_2>0$, a penalty factor $r>0$, and a reduction coefficient $c\in(0,1)$.
Step 2 Establish the logarithmic penalty function of each objective, i.e., $\varphi_i(x,r)=f_i(x)-r\sum_{k=1}^{q}\ln(-g_k(x))$, $i=1,2$.
  (Main loop)
Step 3 Calculate $\nabla\varphi_i(x_k,r)$ and $\nabla^2\varphi_i(x_k,r)$ for $i=1,2$; if $\|w_1\nabla\varphi_1(x_k,r)+w_2\nabla\varphi_2(x_k,r)\|<\varepsilon_1$, stop the inner iteration, output $x_k$, and go to Step 8. Else, go to Step 4.
Step 4 Calculate the iterative direction $d_k$:
  If $\|\frac{w_2}{w_1}Q(x_k)\|<1$ and criterion (40) holds, then
     $d_k=-\sum_{j}\big(-\frac{w_2}{w_1}Q(x_k)\big)^{j}\frac{1}{w_1}[\nabla^2\varphi_1(x_k,r)]^{-1}[w_1\nabla\varphi_1(x_k,r)+w_2\nabla\varphi_2(x_k,r)]$;
  If $\|\frac{w_2}{w_1}Q(x_k)\|\ge 1$ and criterion (40) holds, adjust the weighting factors $w_1$ and $w_2$ to satisfy $\|\frac{w_2}{w_1}Q(x_k)\|<1$.
  Else, $d_k=-\nabla\varphi(x_k,r)$.
Step 5 Calculate the iteration step size $\alpha_k$:
  If $\varphi(x_k+\alpha_k d_k,r)<\varphi(x_k,r)$ and $g_j(x_k+\alpha_k d_k)<0$ for all $j=1,\dots,q$, accept $\alpha_k$ and go to Step 7. Else, go to Step 6.
Step 6 Take $\alpha_k=c\,\alpha_k$, go to Step 5.
  (Update)
Step 7 Calculate $x_{k+1}=x_k+\alpha_k d_k$, define $k=k+1$, and go to Step 3.
Step 8 Calculate the penalty term $rP(x_k)$; if $|rP(x_k)|<\varepsilon_2$, stop, and $x_k$ is the Pareto optimal solution; output $x_k$. Else, set $r=c\,r$ and go to Step 1.

Similarly, by selecting multiple groups of weighting factors, each group yields one Pareto optimal solution, and together they form the corresponding Pareto front. The process of NNSEFA for solving the inequality-constrained biobjective problem is shown in Figure 2.
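A compact sketch of the outer loop of Algorithm 2 is given below, under our own assumptions for the reduction coefficient, tolerances, and the abstraction of the inner solve (the paper's specific values are not reproduced here):

```python
def nnsefa_outer(x0, solve_penalized, penalty_term, r0=1.0, c=0.1,
                 eps2=1e-6, max_outer=20):
    """Sketch of the outer loop of Algorithm 2 (NNSEFA).

    solve_penalized(x, r): inner Newton/Neumann solve of the weighted,
        log-barrier-penalized problem (our abstraction of Steps 3-7).
    penalty_term(x, r): Frisch term r*P(x) from (33); Step 8 stops when
        it is negligible relative to the objectives.
    r0, c, eps2: penalty factor, reduction coefficient, and tolerance
        (illustrative values, not taken from the paper).
    """
    x, r = x0, r0
    for _ in range(max_outer):
        x = solve_penalized(x, r)            # Steps 3-7: inner iteration
        if abs(penalty_term(x, r)) < eps2:   # Step 8: penalty negligible
            return x                         # Pareto optimal solution
        r *= c                               # shrink the penalty factor
    return x
```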

6. Numerical Examples

Two benchmark test examples are chosen to test the effectiveness of NNSEA and NNSEFA. The first is an example from a published paper [22] for NNSEA. The second one is from the Genetic Algorithm Toolkit (GATOOL) of MATLAB for NNSEFA.

6.1. Test 1

The NNSEA code was written in MATLAB 7.9.0 (R2009b); the stopping criterion and initial point were set, and the weighting factors were generated in $(0,1)$. Note that the weighting factors were generated in two ways: one set randomly and another provided uniformly. The calculation was then started, and the coordinates of the Pareto front with all 200 groups of optimal solutions are output in Figures 3 and 4.

As can be seen in Figures 3 and 4, each red asterisk represents the Pareto optimal function values for one specific set of weighting factors, and all of these function values together compose the Pareto front. The blue asterisks and blue lines represent the iterative points and descent directions over the whole process. All 200 groups of Pareto optimal solutions are convergent and distributed uniformly and broadly. The calculations were executed 20 times for each weighting factor generation approach, and the detailed information is listed in Table 1. It is clear that NNSEA has high performance and good adaptability for biobjective optimization problems.

6.2. Test 2

Similarly, in test 2 the stopping criteria were set and the weighting factors were generated randomly in $(0,1)$. The initial point and the coefficients, including the initial penalty factor, were set, and the number of iterations for calculating each Pareto optimal solution was limited to 200. The calculation was then started, and the coordinates of the Pareto front with all 200 groups of optimal solutions are output in Figure 5.

Only the Pareto front is displayed, to give a clearer view of the function values. In a biobjective optimization problem, the middle section of the Pareto front deserves the most attention, because its values favor neither function. In Figure 6, the red asterisks in the middle section are arranged smoothly and compactly. Executed 20 times on the same computer, the average calculation time is 270.85 s, the average number of iterations for each Pareto solution is 8.54, and all the Pareto optimal solutions are convergent, which means that NNSEFA performs well in both efficiency and convergence.

7. Biobjective Optimization for Dummy Head in Car Crash Experiment

With the rapid development of the automobile industry, the passive safety of automobiles has become an increasingly important research subject for enterprises and research institutes. As anthropomorphic test devices, crash test dummies are widely used in automobile safety testing. A crash dummy is made up of the head, neck, chest, buttocks, upper limbs, and lower limbs. Head injury is one of the most common injuries in traffic accidents, and statistics show that mortality caused by head injury is the highest, accounting for 68% of all deaths and making head injury the greatest killer in car accidents [23]. Therefore, research on the dummy head structure in automobile crashes is essential to head injury analysis and vehicle safety.

7.1. Modeling of Biobjective Optimization Problem for Dummy Head Design

In this paper, to improve the efficiency of dummy head analysis, a simplified dummy head model is developed based on the finite element (FE) model of the Hybrid III 50th dummy. Sensitivity analysis and mechanically equivalent modeling are applied to develop the new simplified dummy head structure. Through sensitivity analysis of the materials used in the dummy head structure, such as the artificial skin and bones, and of the mechanical property response, a simplified dummy head model is proposed in this section. The detailed FE model of the proposed simplified head model is given in Figure 7(b), and the dummy head FE model of the Hybrid III 50th is shown in Figure 7(a). All models in this paper are meshed with shell and solid elements in HyperMesh and then solved in LS-DYNA.

For the dummy head calibration test, there is a legal and authoritative standard for examining the validity of the dummy structure. The synthetic head acceleration generated in automobile crashes is used to evaluate the mechanical properties of the head structure. Considering both the frontal and the lateral impact conditions, the peak accelerations of the dummy head generated in frontal and lateral collisions are taken as the main indexes of the mechanical characteristics of the dummy head, so that the accuracy of the head model is comprehensively evaluated by the frontal and lateral head calibration regulations. In simulating the collision process of the head structure, the models drop from a certain height and hit a rigid plate, as shown in Figure 8. The simulation processes of the dummy head in frontal and lateral collisions are presented in Figures 8(a) and 8(b), respectively.

7.2. Validity of the Dummy Head Model

Acceleration curves of the simplified dummy head model under frontal (FRB) and lateral (MDB) impacts are obtained, and the calibration test is used for comparison. The two acceleration-time curves of the dummy head model under the different collision conditions, compared with the test curves, are depicted in Figure 9. In Figure 9, the green curves are obtained in the simulated dummy test and the blue curves in the physical dummy test; Figure 9(a) shows the frontal impact and Figure 9(b) the lateral collision.

According to Figure 9, the acceleration-time curves of the simplified dummy head model proposed in this paper are consistent with the experimental results, and the two peak acceleration values meet the calibration requirement. A quantitative analysis of the two peak accelerations compared with the test results is carried out in Table 2, and the errors in peak acceleration between the simplified model and the physical dummy head are also considered.

From Table 2, we can conclude that the two peak acceleration values are both within the calibration range, which means that the simplified dummy head model developed in this paper is effective and can be used in simulation research on head injuries. However, the figures in Table 2 also show that the peak acceleration errors of the simplified head model are relatively large; thus, the accuracy of the head model cannot be guaranteed in some cases. Therefore, an optimization design of the dummy head structure is carried out in the following section to improve the accuracy of the simplified head model.

7.3. Establishment of Mathematical Model for Optimization Design of the Dummy Head

According to Wood et al. [24], the mechanical responses of a dummy head in an automobile collision depend mainly on the viscoelastic properties of the head polymer skin. In order to improve the accuracy of the simplified dummy head proposed in this paper, three material parameters, the relaxation modulus (GI), the shear modulus (MU1), and the decay constant (BETA), are selected as design variables based on the material sensitivity and research on the physical properties. Since both the frontal and the lateral collisions are considered, the optimization design of the dummy head is a biobjective optimization problem, established as problem (43): the two objectives measure the deviations of the peak accelerations of the simplified model under the frontal and lateral impact conditions from the experimental peak acceleration values of the dummy head, 250.14 g and 137.1 g, respectively. The design variables consist of GI, MU1, and BETA and are bounded according to the calibration test design requirements.

7.4. Establishment of Surrogate Model of Dummy Head

To reduce the complexity of optimization problems in practical engineering, surrogate modeling has been broadly applied in many fields for its simple theory and excellent performance. In this study, the response surface methodology (RSM) is selected for its good performance in many multiobjective optimization problems [25–27]. First, 16 groups of data (3 factors, 4 levels) are obtained by the orthogonal design of experiments (DOE) method, and the corresponding acceleration results computed in LS-DYNA are shown in Table 3.

According to the data in Table 3, the quadratic polynomial response surface models of the two objective functions are constructed as equations (44) and (45).

Then, after the surrogate models are obtained, the determination coefficient from the analysis of variance is employed to verify the fitting precision of the response surface models. In general, the closer the determination coefficient is to 1, the more precisely the surrogate model reproduces the response variables. In this paper, the determination coefficients $R^2$ of the two models [27] are 93.153% and 98.8691%, respectively. Therefore, the surrogate models can simulate the response variables accurately.
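As an illustration of this verification step, a quadratic response surface can be fitted by least squares and its determination coefficient computed as follows. This sketch uses synthetic data, not the values of Table 3.

```python
import numpy as np

def quadratic_features(X):
    """Full quadratic basis in 3 factors: 1, x_i, x_i*x_j, x_i^2."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3,
                            x1**2, x2**2, x3**2])

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(16, 3))      # 16 DOE samples, 3 factors
y = 1.0 + 2 * X[:, 0] - X[:, 1]**2 + 0.1 * rng.normal(size=16)

A = quadratic_features(X)
beta, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares fit
y_hat = A @ beta
ss_res = np.sum((y - y_hat)**2)
ss_tot = np.sum((y - y.mean())**2)
print("R^2 =", 1.0 - ss_res / ss_tot)         # closer to 1 is better
```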

7.5. Optimization Design of the Dummy Head Structure

The effective optimization strategy proposed in this paper is applied to the biobjective optimization design of the dummy head structure. Following the optimization process of NNSEFA, the Pareto optimal solutions of the biobjective problem (43) are obtained quickly, and after comparative analysis, an optimal scheme with design variables 0.0056 GPa, 6 × 10⁻⁴ GPa, and 0.3 is selected for the optimization design of the simplified dummy head model. The two accelerations are considered to evaluate the performance of the optimal design. Figures 10 and 11 show the acceleration-time curves of the optimal dummy head model compared with the physical dummy head structure under frontal and lateral collision, respectively; the green curves in Figures 10 and 11 are obtained from the simulated dummy head experiment and the blue curves from the physical dummy test.

According to the acceleration-time curves presented in Figures 10 and 11, the optimal design of the dummy head structure satisfies the calibration requirement, and the peak values of the two accelerations are close to the acceleration values obtained in the experiment on the physical dummy head. In addition, the acceleration-time curves of the optimal design are consistent with the experimental results. To analyze the properties of the optimal dummy head model, the design scheme and its two corresponding responses are shown in Table 4.

From Table 4, the degree of improvement of the simplified dummy head model is obvious and appreciable. First, the peak acceleration values for the frontal and lateral drops of the dummy head model are completely in accordance with the calibration test. The peak synthetic acceleration of the frontal head drop is 236.7 g, with a simulation error of −5.37%, reduced by 2.48% compared with the original model before optimization; the peak synthetic acceleration in the lateral drop is 136.0 g, with a deviation from the peak of the test curve of −0.80%, reduced by 5.98% after optimization. It can be seen that the optimal model of the dummy head is not only effective but also accurate. In addition, the dummy head model has been simplified for high computational efficiency, and the biobjective optimization performed in the optimal design has made the model more accurate. In summary, the optimization scheme for the dummy head structure is highly accurate and can be applied to studies on occupant safety protection.

8. Conclusion

Two algorithms, NNSEA and NNSEFA, are presented in this paper. In NNSEA, the Neumann series expansion is introduced into Newton's weighted sum method, which simplifies the matrix inversion. Based on NNSEA, the Frisch penalty function is then employed to handle constraints, so NNSEFA can perform biobjective optimization with constraints. When the objective functions are strongly convex, the proposed NNSEA and NNSEFA are two efficient approaches for biobjective optimization problems, according to the test function analysis.

In the engineering application to the dummy head, NNSEFA provides a valuable reference for designers. As the computer simulations show, the general calibration curves, the peak values, and the peak times of the optimized dummy head model and the physical dummy are highly consistent. Compared with the original design, this optimized model can simulate the injury of the human head in a collision more accurately and provides a design approach for occupant safety design. The optimization is clearly feasible.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

Acknowledgments

The authors would like to acknowledge the National Natural Science Funds for Distinguished Young Scholar (Grant No. 51505181), the China Postdoctoral Science Foundation funded project (No. 2016M590256), the Advanced Manufacturing Project of Provincial School Construction of Jilin Province (No. SXGJSF2017-2), and the Science & Technological Project of Smart Manufacturing in Tianjin supported by Tianjin Science and technology commission (No. 16ZXZNGX00100).