Applied Bionics and Biomechanics

Volume 2018, Article ID 7071647, 13 pages

https://doi.org/10.1155/2018/7071647

## Biobjective Optimization Algorithms Using Neumann Series Expansion for Engineering Design

^{1}School of Mechanical Science and Engineering, Jilin University, Changchun, China

^{2}Aviation University of Air Force, Changchun, China

^{3}Tianjin Aerisafety Science and Technology Co. Ltd., Tianjin, China

Correspondence should be addressed to Tianshuang Xu; xts@jlu.edu.cn

Received 5 April 2018; Revised 27 August 2018; Accepted 19 September 2018; Published 19 December 2018

Academic Editor: Jose Merodio

Copyright © 2018 Huan Guo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

In this paper, two novel algorithms are designed for solving biobjective engineering optimization problems. To obtain the optimal solutions of biobjective optimization problems quickly and accurately, the algorithms combine Newton's method with the Neumann series expansion and the weighted sum method to handle the two objectives, and the Pareto optimal front is obtained by adjusting the weighting factors. Theoretical analysis and numerical examples demonstrate the validity and effectiveness of the proposed algorithms. Moreover, an effective biobjective optimization strategy, based on the two algorithms and the surrogate model method, is developed for engineering problems. The effectiveness of the optimization strategy is demonstrated by its application to the optimal design of the dummy head structure used in car crash experiments.

#### 1. Introduction

Research on multiobjective optimization problems is very important in engineering design. For example, the most economical fuel consumption, the maximum carrying capacity, and the lightest weight must be considered simultaneously in the design of aircraft and spacecraft [1]; high stiffness, light weight, and low-order modes must likewise be balanced in the thin-walled beam section optimization of automobile body structures [2]. According to investigations of the dummy head under automobile impact conditions, the peak resultant acceleration of frontal and lateral drops is the main indicator of the mechanical characteristics of the dummy head [3]. A bilevel optimization was carried out for the cross-sectional shape of a thin-walled car body frame constrained by static and dynamic stiffness [4]. What these engineering problems have in common is that complex mechanical structures and numerous design variables lead to intricate solution procedures and a large amount of computation when multiple objectives must be satisfied simultaneously. Moreover, in most cases, improving one objective affects the others, and it is almost impossible to find a solution at which every objective function attains its optimum [5]. Research on multiobjective optimization is therefore of particular significance to engineering.

In most cases, an optimal solution that satisfies all objectives of a multiobjective problem at the same time does not exist. Thus, the key to describing such an optimization problem is establishing a scientific and reasonable standard. When the individual optima cannot be attained simultaneously, an effective and acceptable alternative is to keep all objective values at a relatively good level, so that designers can choose among several relatively good designs to guide decisions based on engineering background knowledge. The concept of the Pareto optimal solution of a multiobjective optimization problem is an objective description that takes every objective into account, allowing designers to derive optimization schemes without degrading the overall optimization level [6].

The basic idea for solving a multiobjective optimization problem numerically is scalarization: a suitable scalar (single-objective) optimization problem is solved in place of the vector (multiobjective) one [7]. Commonly used scalarization algorithms include the minimax method [8], the constraint method [9], and the usual weighted sum method [10].

The minimax method is a classical multiobjective optimization algorithm [8]. By proving that the set of Pareto optimal solutions coincides with the set of stationary points, it provides a parameter-free method for computing a point satisfying a certain first-order necessary condition of multiobjective optimization. It borrows the idea of Newton's method for single-objective optimization, and according to the authors' theoretical results, Newton's method for multiobjective optimization behaves exactly as its single-criterion counterpart: it is fairly robust with respect to the dimension of the problem and the starting point chosen, the rate of convergence is at least superlinear, and it is quadratic if the second derivatives are Lipschitz continuous. However, the authors did not discuss adapting their approach to constrained multiobjective problems. A quasi-Newton method for multiobjective optimization was proposed by Qu et al. [11] and Povalej [12]. Using the well-known BFGS method and the idea of [8], the authors proved that the quasi-Newton method for multiobjective optimization converges superlinearly to the solution of the given problem if all functions involved are twice continuously differentiable and strongly convex. The advantage of this method, compared with Newton's approach, is that approximating the Hessian matrices is usually considerably faster than evaluating them exactly, a difference that becomes especially noticeable as the dimension of the problem rises. The adaptation of this approach to constrained multiobjective optimization is not considered either.

The representative constraint method is the ε-constraint method [9]: it retains the objective function that the designer most prefers as the objective of a single-objective optimization problem and turns the other objective functions into constraints by adding restriction bounds [13]. This algorithm is efficient, produces Pareto solutions over a relatively broad range, and does not require ranking the objective functions in advance (to determine weights) [6]. However, the ε-constraint method cannot guarantee that the result is a Pareto optimal solution; selecting appropriate constraint values often requires some prior knowledge, and the calculation efficiency drops as the number of objective functions increases. The main drawbacks of these common methods are their computational limitations and the unsatisfactory quality of the Pareto optimal solutions they produce [14].
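As a tiny illustration of this scalarization, consider a hypothetical one-dimensional convex example (ours, not from the paper): keep one objective and bound the other, so the constrained minimizer has a closed form.

```python
import math

# epsilon-constraint sketch on a toy convex problem (illustrative only):
# minimize f1(x) = x^2 subject to f2(x) = (x - 2)^2 <= eps.
# The constraint restricts x to [2 - sqrt(eps), 2 + sqrt(eps)], so the
# solution is the projection of the unconstrained minimizer x = 0 onto it.

def eps_constraint_solution(eps):
    """argmin of x^2 subject to (x - 2)^2 <= eps, for eps > 0."""
    lo, hi = 2.0 - math.sqrt(eps), 2.0 + math.sqrt(eps)
    return min(max(0.0, lo), hi)
```

Sweeping eps from small to large relaxes the constraint on f2 and moves the solution from x ≈ 2 toward x = 0, tracing trade-off solutions; each eps must be chosen with some prior knowledge, which is exactly the drawback noted above.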

The weighted sum method has been widely used because of its simplicity and high computational efficiency [15]. The early usual weighted sum method transforms the multiple objectives into one aggregated objective function by multiplying each objective function by a weighting factor and summing the products. It has two drawbacks, however: it is difficult to obtain uniformly distributed Pareto optimal solutions, and it fails on nonconvex problems [16–18].
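The weighted sum scalarization can be sketched on a toy convex biobjective problem (a hypothetical example of ours; the function names are illustrative, not the paper's):

```python
# Weighted sum sketch: minimize f1(x) = (x - 1)^2 and f2(x) = (x + 1)^2.
# For weights (w, 1 - w), the aggregated objective w*f1 + (1-w)*f2 is
# convex with the closed-form minimizer x*(w) = 2*w - 1, so sweeping w
# traces the Pareto front point by point.

def pareto_front(num_weights=5):
    """Return [(f1, f2), ...] obtained by sweeping w over [0, 1]."""
    front = []
    for i in range(num_weights):
        w = i / (num_weights - 1)
        x = 2.0 * w - 1.0                      # minimizer of w*f1 + (1-w)*f2
        front.append(((x - 1.0) ** 2, (x + 1.0) ** 2))
    return front
```

Because this toy problem is convex, every Pareto point is reachable by some w; on a nonconvex front, whole regions are skipped no matter how the weights are chosen, which is the failure mode noted above.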

Many methods for solving nonconvex optimization problems have been proposed over the decades. A typical one is the normal-boundary intersection method (NBIM) [19]. It approaches a group of Pareto optimal solutions through a geometrically intuitive parametric method and gives an accurate picture of the Pareto front. NBIM can obtain Pareto optimal solutions in nonconvex regions, and the solutions are uniformly distributed. However, it still has serious defects; for example, non-Pareto-optimal (dominated) solutions are also obtained and must be filtered out. The adaptive weighted sum method (AWSM) was presented for biobjective optimization problems [20]; it adds inequality constraints to the traditional weighted sum method and redefines the feasible regions of the optimization problem. The search region is thereby extended, and the optimal solutions are iterated automatically.

There are many biobjective optimization problems in engineering applications in which improving one objective immediately worsens the other. For example, energy absorption and peak impact force are a typical pair of contradictory objectives in bumper crash box design, where the peak impact force should be reduced while the energy absorption is maximized. In practice, however, as the energy absorption rises, the impact force of the crash box grows even larger. Biobjective optimization is therefore of great significance in engineering.

In this paper, two new algorithms for biobjective optimization problems are presented: the Newton Neumann Series Expansion Algorithm (NNSEA) for unconstrained problems and the Newton Neumann Series Expansion Frisch Algorithm (NNSEFA) for constrained problems. Two examples are given to demonstrate the validity and effectiveness of the respective algorithms. Finally, the two algorithms are applied to an optimization problem in dummy head design, and good results are obtained. The following sections discuss them in detail.

#### 2. Newton Weighted Sum Algorithm for Unconstrained Multiobjective Optimization

##### 2.1. Definition and Some Theories of the Multiobjective Optimization Problems

In order to accurately describe the concept of Pareto optimal solution, some definitions and symbols of multiobjective optimization will be presented first.

In this paper, denote by $\mathbb{N}$ the set of positive integers, by $\mathbb{R}$ the set of real numbers, by $\mathbb{R}^n$ the $n$-dimensional real vector space, and by $\mathbb{R}^{n \times n}$ the linear space of $n$-order real matrices. The Euclidean norm on $\mathbb{R}^n$ will be denoted by $\|\cdot\|$, and we will use the same notation for the induced operator norms on the corresponding matrix spaces. $x \in \mathbb{R}^n$ is the vector of design variables. $F(x) = (f_1(x), \dots, f_m(x))^T$ is the vector-valued objective function whose components $f_i \colon \mathbb{R}^n \to \mathbb{R}$ are real functions of $n$ variables for all $i = 1, \dots, m$.

A general multiobjective optimization problem can be defined as follows:

$$\min_{x \in \Omega} F(x) = \left(f_1(x), f_2(x), \dots, f_m(x)\right)^T, \tag{1}$$

where $F$ is called the objective vector-valued function and $\Omega \subseteq \mathbb{R}^n$ is the feasible region of (1). $\Omega$ can be described by

$$\Omega = \left\{x \in \mathbb{R}^n \mid h_j(x) = 0,\ j = 1, \dots, p;\ g_k(x) \le 0,\ k = 1, \dots, q\right\}, \tag{2}$$

where $h_j$ and $g_k$ are the equality and inequality constraints of the multiobjective optimization problem, respectively. If $\Omega = \mathbb{R}^n$, (1) is called an unconstrained multiobjective optimization problem.

For solving (1), we provide the concept of Pareto optimality as explained below.

*Definition 1. *A point $x^* \in \Omega$ is a local Pareto optimum or local Pareto optimal solution of (1) if and only if there does not exist $x \in \Omega$ in a neighborhood of $x^*$ such that

$$f_i(x) \le f_i(x^*) \ \text{for all } i = 1, \dots, m \quad \text{and} \quad f_j(x) < f_j(x^*) \ \text{for some } j. \tag{3}$$

Note that if $\Omega$ and all $f_i$ are convex, then local Pareto optimality is equivalent to global Pareto optimality. A Pareto optimal solution is thus a reasonable solution that satisfies the objectives at an acceptable level without being dominated by any other solution.
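The dominance relation underlying Definition 1 can be written as a small helper (a hypothetical sketch of ours, not code from the paper):

```python
# Pareto dominance test: fa dominates fb if fa is no worse in every
# objective and strictly better in at least one.

def dominates(fa, fb):
    """Return True if objective vector fa Pareto-dominates fb."""
    return all(a <= b for a, b in zip(fa, fb)) and \
           any(a < b for a, b in zip(fa, fb))

def pareto_filter(points):
    """Keep only the non-dominated objective vectors of a finite set."""
    return [f for f in points if not any(dominates(g, f) for g in points if g != f)]
```

On a finite set of candidate designs, `pareto_filter` returns exactly the objective vectors that no other candidate dominates.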

To obtain information about every objective function and observe the trend of the optimization process more intuitively, the set of objective function values can be used, which helps avoid ill-advised decisions. The detailed definition is as follows.

*Definition 2. *If $P^*$ is the set of Pareto optimal solutions of (1), then the set $PF^*$ is a Pareto front of (1), for which

$$PF^* = \left\{F(x) \mid x \in P^*\right\} \tag{4}$$

holds.

Assume $F$ is twice continuously differentiable on the feasible region $\Omega$, i.e., $F \in C^2(\Omega)$. For $x \in \Omega$, let $\nabla f_i(x)$ and $\nabla^2 f_i(x)$ denote the gradient and Hessian matrix of $f_i$ at $x$ for all $i = 1, \dots, m$, respectively.

Throughout the paper, unless explicitly mentioned, we will assume that each $f_i \in C^2(\Omega)$ is strongly convex, which implies that $\nabla^2 f_i(x)$ is positive definite for all $x \in \Omega$ and $i = 1, \dots, m$.

##### 2.2. Newton Method Based on Weighted Sum Technique

Newton's method is extensively used in optimization problems; its iteration direction incorporates the gradient and Hessian information of the objectives. When the initial iteration point is close to the optimal point, convergence is rapid, and if the objective functions satisfy certain conditions, superlinear or quadratic convergence can be achieved. Therefore, for the multiobjective optimization problem, Newton's method combined with the weighted sum method is chosen as the main calculation algorithm. The derivation proceeds as follows.

In the multiobjective optimization problem (1), for each $f_i$, the Taylor expansion of $f_i$ around $x_k$ is

$$f_i(x) = f_i(x_k) + \nabla f_i(x_k)^T (x - x_k) + \frac{1}{2}(x - x_k)^T \nabla^2 f_i(x_k)(x - x_k) + o\left(\|x - x_k\|^2\right). \tag{5}$$

Hence, the second-order approximate Taylor expansion of $f_i$ around $x_k$ is

$$f_i(x) \approx f_i(x_k) + \nabla f_i(x_k)^T (x - x_k) + \frac{1}{2}(x - x_k)^T \nabla^2 f_i(x_k)(x - x_k). \tag{6}$$

Here, in (6), $\nabla^2 f_i(x_k)$ is positive definite. Hence, the problem is converted from finding the minimum of $f_i$ into finding the minimum of its second-order approximation. Taking the derivative of both sides of (6) with respect to $x$ and using the necessary condition for an extreme value, we obtain

$$\nabla f_i(x_k) + \nabla^2 f_i(x_k)(x - x_k) = 0. \tag{7}$$

Since the algorithm is an iterative process, taking $x = x_{k+1}$, we obtain Newton's iteration for a single objective:

$$x_{k+1} = x_k - \left[\nabla^2 f_i(x_k)\right]^{-1} \nabla f_i(x_k). \tag{8}$$

And note the iteration direction of Newton's method for a single objective at $x_k$:

$$d_k = -\left[\nabla^2 f_i(x_k)\right]^{-1} \nabla f_i(x_k). \tag{9}$$

For solving problem (1), by taking the weighted sum of the $f_i$ for all $i = 1, \dots, m$, we obtain the sum function, denoted by $f_w$. Hence,

$$f_w(x) = \sum_{i=1}^{m} w_i f_i(x), \tag{10}$$

where the weighting factors $w_i$ satisfy $w_i \ge 0$ and $\sum_{i=1}^{m} w_i = 1$.

Taking the derivative of both sides of (10) with respect to $x$, the iterative formula of the Newton weighted sum algorithm for (1) is

$$x_{k+1} = x_k - \left[\sum_{i=1}^{m} w_i \nabla^2 f_i(x_k)\right]^{-1} \sum_{i=1}^{m} w_i \nabla f_i(x_k). \tag{11}$$

So, we can conclude the direction of the Newton weighted sum algorithm for (1) at $x_k$:

$$d_k = -\left[\sum_{i=1}^{m} w_i \nabla^2 f_i(x_k)\right]^{-1} \sum_{i=1}^{m} w_i \nabla f_i(x_k). \tag{12}$$
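The iteration (11) can be sketched in numpy as follows (a minimal illustration assuming strongly convex objectives; the function names and the toy problem are ours, not the paper's):

```python
import numpy as np

def newton_weighted_sum(grads, hessians, weights, x0, tol=1e-10, max_iter=50):
    """Newton iteration on the weighted sum of objectives: each step
    solves [sum_i w_i H_i(x)] d = -sum_i w_i g_i(x) and sets x <- x + d."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = sum(w * gi(x) for w, gi in zip(weights, grads))
        H = sum(w * Hi(x) for w, Hi in zip(weights, hessians))
        d = np.linalg.solve(H, -g)          # Newton weighted sum direction
        x = x + d
        if np.linalg.norm(d) < tol:
            break
    return x

# Toy biobjective problem: f1 = ||x - a||^2, f2 = ||x - b||^2.
a, b = np.array([1.0, 0.0]), np.array([-1.0, 0.0])
grads = [lambda x: 2.0 * (x - a), lambda x: 2.0 * (x - b)]
hessians = [lambda x: 2.0 * np.eye(2), lambda x: 2.0 * np.eye(2)]
x_star = newton_weighted_sum(grads, hessians, [0.75, 0.25], [2.0, 2.0])
```

For these quadratic objectives the weighted minimizer is 0.75a + 0.25b = (0.5, 0), reached in a single Newton step; rerunning with different weights yields different Pareto optimal points.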

#### 3. Neumann Series Expansion

The Neumann series is a series expansion of a matrix inverse, and its value lies in making matrix inversion efficient. In engineering, such problems can be solved by Newton's method, but when there are many design variables a considerable amount of computation is required. When there are two objective functions, introducing the expansion principle not only retains the advantages of Newton's method but also roughly halves the work needed to handle the two objective functions. The theorem is as follows:

Theorem 1. *Assume that $A \in \mathbb{R}^{n \times n}$ is an invertible $n$-order matrix and that $B \in \mathbb{R}^{n \times n}$ satisfies*

$$\left\|A^{-1}B\right\| < 1. \tag{13}$$

*Then $A + B$ is invertible and*

$$(A + B)^{-1} = \sum_{k=0}^{\infty} \left(-A^{-1}B\right)^k A^{-1} \tag{14}$$

*holds. (14) is called the Neumann series expansion.*

According to (10), the weighted sum function of problem (31) is

$$f_w(x) = w_1 f_1(x) + w_2 f_2(x), \tag{15}$$

and the corresponding Newton iterative format is

$$x_{k+1} = x_k - \left[w_1 \nabla^2 f_1(x_k) + w_2 \nabla^2 f_2(x_k)\right]^{-1} \left[w_1 \nabla f_1(x_k) + w_2 \nabla f_2(x_k)\right]. \tag{16}$$

Then the iterative direction vector of (16) at $x_k$ is

$$d_k = -\left[w_1 \nabla^2 f_1(x_k) + w_2 \nabla^2 f_2(x_k)\right]^{-1} \left[w_1 \nabla f_1(x_k) + w_2 \nabla f_2(x_k)\right]. \tag{17}$$

According to (17) and the Neumann series expansion of Theorem 1, let $w_1 \nabla^2 f_1(x_k)$ be the matrix $A$ in (13) and $w_2 \nabla^2 f_2(x_k)$ be the matrix $B$ in (13); therefore,

$$\left[w_1 \nabla^2 f_1 + w_2 \nabla^2 f_2\right]^{-1} = \sum_{k=0}^{\infty} \left(-\frac{w_2}{w_1} \left[\nabla^2 f_1\right]^{-1} \nabla^2 f_2\right)^k \frac{\left[\nabla^2 f_1\right]^{-1}}{w_1}. \tag{18}$$

Define

$$T(x) = \left[\nabla^2 f_1(x)\right]^{-1} \nabla^2 f_2(x), \quad \lambda = \frac{w_2}{w_1}. \tag{19}$$

Then we must show that $\|\lambda T\| < 1$ holds with $\lambda$ adjusted under certain conditions. Assume that the feasible region $\Omega$ is a bounded set; then, for $x \in \Omega$, $T$ defined by (19) is a linear operator.
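Theorem 1 can be checked numerically with a short numpy sketch (the helper name and truncation length are ours, chosen for illustration):

```python
import numpy as np

def neumann_inverse(A, B, terms=40):
    """Approximate (A + B)^{-1} by the truncated Neumann series
    sum_{k=0}^{terms-1} (-A^{-1} B)^k A^{-1}; valid when ||A^{-1}B|| < 1."""
    A_inv = np.linalg.inv(A)          # the only explicit inversion needed
    M = -A_inv @ B
    term = A_inv.copy()
    total = A_inv.copy()
    for _ in range(1, terms):
        term = M @ term               # next series term (-A^{-1}B)^k A^{-1}
        total += term
    return total

A = np.diag([4.0, 5.0])
B = np.array([[0.0, 1.0], [1.0, 0.0]])    # here ||A^{-1}B|| < 1, so the series converges
approx = neumann_inverse(A, B)
exact = np.linalg.inv(A + B)
```

The truncation error decays geometrically at the rate of the norm of A⁻¹B, which is why the weighting factors must later be adjusted to keep that norm below one.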

The boundedness and continuity of the linear operator $T$ are established as follows.

Theorem 2. *A linear operator $T$ is bounded if and only if there exists a constant $c > 0$ such that*

$$\|Tx\| \le c\|x\| \tag{20}$$

*holds for all $x$.*

*Proof. *Assume that $T$ is a bounded linear operator, so the unit ball $U = \{x : \|x\| \le 1\}$ is mapped by $T$ into a bounded set, i.e., the image set $T(U)$ is bounded.

Take $c = \sup_{\|x\| \le 1} \|Tx\|$. If $x = 0$, then $x$ satisfies (20) trivially. For any $x \ne 0$, let $x' = x / \|x\|$; then $\|x'\| = 1$ and

$$\|Tx'\| \le c,$$

i.e.,

$$\|Tx\| \le c\|x\|.$$

Therefore, if $T$ is a bounded linear operator, then (20) holds.

Conversely, assume that (20) holds and let $S$ be a bounded set, so that there exists a positive constant $M$ such that $\|x\| \le M$ holds for every $x \in S$. From (20), we have

$$\|Tx\| \le c\|x\| \le cM$$

for every $x \in S$. Therefore, $T(S)$ is a bounded set, i.e., $T$ is a bounded linear operator.

Theorem 3. *A linear operator $T$ is continuous if and only if $T$ is bounded.*

*Proof. *(The necessary condition). Assume $T$ is continuous but unbounded; then inequality (20) is not satisfied, so for every natural number $n$ there exists $x_n$ such that

$$\|Tx_n\| > n\|x_n\|.$$

Take $y_n = x_n / (n\|x_n\|)$; then

$$\|y_n\| = \frac{1}{n} \to 0 \quad \text{but} \quad \|Ty_n\| > 1,$$

therefore $y_n \to 0$ but $Ty_n \nrightarrow 0$, which contradicts the continuity of $T$.

(The sufficient condition). From inequality (20), if $x_n \to x$, then we have

$$\|Tx_n - Tx\| \le c\|x_n - x\| \to 0,$$

therefore $Tx_n \to Tx$ and $T$ is continuous.

Because $\Omega$ is a bounded set and $\nabla^2 f_i(x)$, $i = 1, 2$, are continuous matrix functions for $x \in \Omega$, the linear operator $T$ is bounded on $\Omega$.

From the foregoing, we conclude that $T$ is a continuous and bounded linear operator, so $\|T\|$ has an upper bound on $\Omega$. Similarly, the linear operators $\left[\nabla^2 f_1\right]^{-1}$ and $\nabla^2 f_2$ have their own upper bounds on $\Omega$. Returning to (18) and using the compatibility of norms, we obtain

$$\|\lambda T\| \le \lambda \left\|\left[\nabla^2 f_1\right]^{-1}\right\| \left\|\nabla^2 f_2\right\| \le \lambda C. \tag{29}$$

In (29), $\left\|\left[\nabla^2 f_1\right]^{-1}\right\| \left\|\nabla^2 f_2\right\|$ has a common upper bound on $\Omega$, denoted by $C$. Hence, the requirement $\|\lambda T\| < 1$ can be fully satisfied by adjusting $\lambda = w_2 / w_1$ appropriately.

#### 4. An Algorithm for Unconstrained Biobjective Optimization Problem Based on Neumann Series Expansion (NSE)

When there are only two objective functions, a biobjective optimization algorithm is established in this paper by introducing the technique of NSE [21] into the Newton weighted sum method. With this algorithm, the complicated inversion of an $n$-order matrix is avoided: the inverse needs to be calculated only once, for the Hessian matrix of one objective function in (1). The computational speed is thus improved, especially when the design variables are high-dimensional. Hence, the proposed algorithm is named the Newton Neumann Series Expansion Algorithm (NNSEA).

When there are only two objective functions, rewrite problem (1) as

$$\min_{x \in \Omega} F(x) = \left(f_1(x), f_2(x)\right)^T. \tag{30}$$

Note that when the feasible region $\Omega$ is extended to $\mathbb{R}^n$, the constraints become inactive and (30) turns into the unconstrained biobjective optimization problem

$$\min_{x \in \mathbb{R}^n} F(x) = \left(f_1(x), f_2(x)\right)^T. \tag{31}$$

The whole process of NNSEA for calculating a biobjective Pareto optimal solution is summarized in Algorithm 1 as follows.
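Since the listing of Algorithm 1 is not reproduced here, the following is a hedged numpy sketch of how one such step could look (our reconstruction under the assumptions above, not the paper's exact listing): the Newton weighted sum direction in (16) is formed with the truncated Neumann series in (18), so only the Hessian of the first objective is ever inverted.

```python
import numpy as np

def nnsea_step(g1, g2, H1, H2, w1, w2, x, terms=30):
    """One Newton weighted sum step in which [w1*H1 + w2*H2]^{-1} is
    replaced by its truncated Neumann series; only H1 is inverted.
    Requires (w2/w1)*||H1^{-1} H2|| < 1, enforced by adjusting w1, w2."""
    H1_inv = np.linalg.inv(H1(x))              # single explicit inversion
    M = -(w2 / w1) * (H1_inv @ H2(x))
    term = H1_inv.copy()
    S = H1_inv.copy()                          # S -> w1 * [w1*H1 + w2*H2]^{-1}
    for _ in range(1, terms):
        term = M @ term
        S += term
    grad = w1 * g1(x) + w2 * g2(x)
    return x - (S / w1) @ grad                 # Newton step with approximate inverse

# Toy biobjective problem: f1 = ||x - a||^2, f2 = ||x - b||^2.
a, b = np.array([1.0, 0.0]), np.array([-1.0, 0.0])
g1, g2 = lambda x: 2.0 * (x - a), lambda x: 2.0 * (x - b)
H1, H2 = lambda x: 2.0 * np.eye(2), lambda x: 2.0 * np.eye(2)
x = np.array([3.0, 3.0])
for _ in range(5):                             # iterate toward a Pareto optimal point
    x = nnsea_step(g1, g2, H1, H2, 0.6, 0.4, x)
```

With weights (0.6, 0.4) the iterates approach the weighted minimizer 0.6a + 0.4b = (0.2, 0); sweeping the weights, subject to the norm condition, traces out the Pareto front.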