Mathematical Problems in Engineering / 2020 / Article ID 9474580
Special Issue: Building Mathematical Models for Multicriteria and Multiobjective Applications 2020
Research Article | Open Access

Ding Han, Jianrong Zheng, "A Kriging Model-Based Expensive Multiobjective Optimization Algorithm Using R2 Indicator of Expectation Improvement", Mathematical Problems in Engineering, vol. 2020, Article ID 9474580, 16 pages, 2020.

A Kriging Model-Based Expensive Multiobjective Optimization Algorithm Using R2 Indicator of Expectation Improvement

Guest Editor: Juan Carlos Leyva-Lopez
Received: 20 Apr 2020
Revised: 27 May 2020
Accepted: 28 May 2020
Published: 27 Jun 2020


Most multiobjective optimization problems in engineering involve expensive objective and constraint functions, for which approximate model-based multiobjective optimization algorithms are usually employed; these, however, still require a large number of function evaluations. Aiming at effectively reducing the computation cost, a novel infilling point criterion, EIR2, is proposed, whose basic idea is to map a point in the objective space into a set in the expectation improvement space and to use the R2 indicator of that set to quantify the fitness of the point as an infilling point. The criterion has an analytic form regardless of the number of objectives and demands low computational resources. Combining the Kriging model, optimal Latin hypercube sampling, and particle swarm optimization, an algorithm, EIR2-MOEA, is developed for solving expensive multiobjective optimization problems; it is applied to three sets of standard test functions of varying difficulty and compared with two other competitive infilling point criteria. Results show that EIR2 has higher resource utilization efficiency and that the resulting nondominated solution set possesses good convergence and diversity. By coupling it with the average probability of feasibility, the EIR2 criterion can deal with expensive constrained multiobjective optimization problems; its efficiency is validated in the optimal design of an energy storage flywheel.

1. Introduction

Science and engineering practice present a large number of multiobjective optimization problems (MOPs) [1] whose objectives often conflict with each other and need to be optimized simultaneously, so that the Pareto set is desired. Evolutionary algorithms (EAs) [2] have proven very suitable for MOPs; typical algorithms include NSGA-II [3], SPEA2 [4], MOEA/D [5], and MOPSO [6]. However, the success of these algorithms depends heavily on a large number of function evaluations, so they can only be applied when function evaluation is computationally cheap. To handle expensive MOPs, it is desirable to develop more efficient algorithms.

To reduce the number of expensive evaluations required, approximate/surrogate models [7], which mimic the input-output relationship of the expensive function but are cheap to evaluate, are often employed. Polynomial response surfaces [8], support vector machines [9], neural networks [10], and Kriging [11] are commonly used approximate models.

Currently, there are two ways to integrate an approximate model with an EA. The first is to construct a static global approximate model based on all sample points already evaluated, after which the EA searches it for the optimal solution. However, with limited computational resources, it is hard to guarantee the approximate model's accuracy over the entire design space. The second utilizes the approximate model in a dynamic manner: a rough approximate model is first constructed from a fraction of sample points generated by some Design of Experiments (DoE) [12] technique; then, by maximizing some infilling point criterion (IPC) [13], a new point is located by the EA and evaluated with the expensive function. This process is repeated until the resources are exhausted. The second manner has the advantage of using computation resources efficiently and is adopted in this paper.

The infilling point criterion [14] plays a critical role in approximate model-based optimization methods. Expectation Improvement (EI), proposed by Jones in Efficient Global Optimization (EGO) [15], may be one of the most researched in the literature. It uses the Kriging model as the approximate model and considers not only the predicted value but also the modeling uncertainty, achieving a good balance between exploration and exploitation in the optimization process. Other IPCs include the statistical lower bound [16] and the probability of improvement [17].

However, these criteria were originally proposed for single-objective optimization and have to be modified for MOPs. Knowles first introduced EI into the field of multiobjective optimization with ParEGO [18], where the Kriging model is used and multiple objectives are converted into a single objective through a parameterized scalarizing weight vector, followed by the direct application of EGO; the weight vector changes randomly as the iteration proceeds. Joan pointed out that ParEGO is apt to favor model accuracy rather than finding nondominated solutions and demands more iterations before convergence, and hence proposed the Pareto Expected Improvement index (PEI) [19], which considers both EI and the probability of being a Pareto optimal solution. Multi-EGO [20] builds a Kriging model and calculates EI separately for each objective and calls MOGA [21] as a solver to output a set of candidate points, from which one is selected as the infilling point. MOEA/D-EGO [22] borrows ideas from ParEGO but generates not a single weight vector but a set of them, distributed uniformly in the objective space, to form single-objective optimization subproblems involving EI; jointly optimizing those subproblems produces multiple infill points, and in this way parallel infilling is achieved. Recently, some IPCs compatible with multiobjective optimization have been proposed, such as the MaxMin improvement [23] and the expected hypervolume improvement [24]. Svenson defined the MaxMin improvement in analytic form when the number of objective functions is two and suggested using the Monte Carlo method (MCM) [25] to calculate the MaxMin value when there are more than two objectives.

In this paper, a novel IPC is proposed for expensive multiobjective optimization. Its outstanding feature is to map a point in the objective space to a set in the expectation improvement space, where the fitness of the point is defined as the value of a quality indicator (QI) [26] on that set. In this way, the QI guides the optimization towards an infilling point, rather than merely evaluating the nondominated solution set after optimization. There are many candidate QIs in the literature, such as IGD [27, 28], the ε-indicator [29], the SDE indicator [30], and hypervolume (HV) [31]. However, these QIs do not fully meet the requirements of ease of use and low computation cost. For example, both IGD and the ε-indicator demand the Pareto front (PF) as the reference set, which is unavailable in practical problems; HV does not require the PF but a reference point, which, if set unreasonably, will mislead the optimization search direction [32]. Besides, HV possesses high computational complexity, which increases exponentially with the number of objectives. Some QIs have a specific bias; for example, the ε-indicator is more convergence-oriented, so the optimization process may suffer from premature convergence, while SDE prefers diversity [33], which may make the optimization period too long [34]. The R2 indicator [35, 36], which shares some desirable properties with HV but is cheaper to evaluate, has attracted widespread interest in recent years. In addition, it is weakly monotonic [37] and does not require the PF to be provided; hence, it is adopted in this article.

The rest of the paper is organized as follows. Section 2 reviews the mathematical background of Kriging, EI, and the R2 indicator. Section 3 presents the proposed EIR2 indicator and the framework of the whole optimization algorithm. To evaluate the performance of the proposed IPC, Section 4 sets up a series of experiments over three benchmark test function suites against two competitor IPCs; their results are measured by three performance metrics and analyzed in Section 5. In Section 6, the proposed EIR2 is modified and applied to a real-world constrained expensive MOP, the optimization design of an energy storage flywheel. Lastly, Section 7 gives the conclusion and future research directions.

2. Background

2.1. Multiobjective Optimization Problem

Without loss of generality, a multiobjective optimization problem is formulated as follows:

  min F(x) = (f_1(x), f_2(x), …, f_m(x))
  s.t. g_i(x) ≤ 0, i = 1, …, p,
       h_j(x) = 0, j = 1, …, q,
       lb ≤ x ≤ ub,

where F(x) represents the objective function vector, which contains m objective functions, g_i and h_j stand for the inequality and equality constraint functions with total numbers p and q, respectively, and x stands for the design vector with lower bound lb and upper bound ub. A solution x1 is said to dominate another solution x2 if and only if f_k(x1) ≤ f_k(x2) for all k and f_k(x1) < f_k(x2) for at least one k, written as x1 ≺ x2. A solution x* is called Pareto optimal if there is no x satisfying x ≺ x* (such x* is also referred to as nondominated). The set of all Pareto optimal solutions is called the Pareto Set (PS), whose image in the objective space is called the Pareto Front (PF).
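As a concrete illustration of the dominance relation and nondominated filtering defined above, here is a minimal Python sketch for a minimization MOP; the function names are ours, not the authors' code:

```python
# Pareto dominance for minimization: a dominates b if a is no worse in
# every objective and strictly better in at least one.
def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Nondominated filter: keep only points not dominated by any other point.
def nondominated(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

pts = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
front = nondominated(pts)  # (3.0, 3.0) is dominated by (2.0, 2.0)
```

In an optimization loop this filter is what maintains the current approximated Pareto set after each expensive evaluation.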

In practice, the objective and constraint functions may come from finite element simulation; hence, they have no explicit analytic expressions and are usually computationally expensive to evaluate. Under this condition, we are interested in identifying a set of nondominated solutions, called the approximated Pareto set/front, to represent the true Pareto set/front while consuming as few computing resources as possible.

2.2. Kriging Model and Expectation Improvement

The Kriging model is an unbiased estimation model with the smallest estimated variance and excellent high-dimensional, nonlinear fitting capability. It assumes the relationship between the input and the predicted output as

  y(x) = μ + Z(x),

where μ is a constant and Z(x) is a Gaussian random process with zero mean and variance σ², which represent the global trend and the local deviation of the output, respectively.

Having a set of known observations (X, y) with X = {x_1, …, x_n}, our job is to search for the best values of μ and σ² when predicting at a new point x. In the design space, the relationship between two different points x_i and x_j is characterized by correlation functions, such as the Gaussian function

  R(x_i, x_j) = exp( −Σ_{k=1}^{d} θ_k |x_{i,k} − x_{j,k}|² ),

where the hyperparameter θ controls the smoothness of the prediction function and is usually determined by maximizing a likelihood function of (X, y) with some intelligent optimization algorithm such as particle swarm optimization. According to optimization theory, analytical forms of μ̂ and σ̂² are obtained:

  μ̂ = (1ᵀ R⁻¹ y) / (1ᵀ R⁻¹ 1),   σ̂² = (y − 1μ̂)ᵀ R⁻¹ (y − 1μ̂) / n,

where y is the output vector and R is the matrix of correlation between any two points in X, that is, R_{ij} = R(x_i, x_j). Finally, at the new point x, we get the predicted value and variance:

  ŷ(x) = μ̂ + rᵀ R⁻¹ (y − 1μ̂),
  s²(x) = σ̂² [ 1 − rᵀ R⁻¹ r + (1 − 1ᵀ R⁻¹ r)² / (1ᵀ R⁻¹ 1) ],

where r is the correlation vector between the point x and all points in X. Unlike other approximate models such as response surfaces [38] and neural networks [39], which can only give the predicted value, Kriging is capable of outputting the predicted variance as a byproduct, which can be regarded as the prediction uncertainty at x.
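As a concrete illustration of the closed-form estimates and predictor, here is a minimal ordinary-Kriging sketch for a one-dimensional input with a fixed hyperparameter θ (in the paper θ is tuned by PSO via the likelihood); all names are ours, not the authors' code:

```python
# Minimal ordinary Kriging, 1-D input, fixed theta (a sketch, not a
# production implementation).
import numpy as np

def kriging_fit(X, y, theta):
    R = np.exp(-theta * (X[:, None] - X[None, :]) ** 2)  # Gaussian correlation
    Rinv = np.linalg.inv(R)
    ones = np.ones(len(X))
    mu = ones @ Rinv @ y / (ones @ Rinv @ ones)          # closed-form mu
    sigma2 = (y - mu) @ Rinv @ (y - mu) / len(X)         # closed-form sigma^2
    return mu, sigma2, Rinv

def kriging_predict(x, X, y, theta, mu, sigma2, Rinv):
    r = np.exp(-theta * (x - X) ** 2)    # correlation with training points
    ones = np.ones(len(X))
    yhat = mu + r @ Rinv @ (y - mu)      # predicted value
    s2 = sigma2 * (1 - r @ Rinv @ r
                   + (1 - ones @ Rinv @ r) ** 2 / (ones @ Rinv @ ones))
    return yhat, max(float(s2), 0.0)     # clip tiny negative round-off

X = np.array([0.0, 0.5, 1.0])
y = np.array([1.0, 0.2, 0.9])
mu, s2, Rinv = kriging_fit(X, y, theta=5.0)
yhat, var = kriging_predict(0.5, X, y, 5.0, mu, s2, Rinv)
# at a training point the predictor interpolates and the variance vanishes
```

The vanishing variance at sampled points is exactly the property EI exploits: uncertainty, and hence the incentive to explore, is zero where the model already knows the answer.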

With Kriging, the value at a new point x is treated as a Gaussian random variable, that is, Y(x) ∼ N(ŷ(x), s²(x)). An improvement quantity is designed as I(x) = max(y_min − Y(x), 0), where y_min is the smallest value in y. It is noted that only those Y(x) that are less than y_min produce improvement. The expectation of improvement (EI) [40] has a closed form, that is,

  EI(x) = (y_min − ŷ(x)) Φ(u) + s(x) φ(u),   u = (y_min − ŷ(x)) / s(x),

where Φ and φ are the standard normal cumulative distribution and probability density functions, respectively.
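The closed form above is straightforward to evaluate with the standard normal CDF and PDF; a minimal sketch (function names are ours):

```python
# Expected Improvement for minimization:
# EI = (y_min - yhat) * Phi(u) + s * phi(u), u = (y_min - yhat) / s.
import math

def norm_cdf(u):
    return 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))

def norm_pdf(u):
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def expected_improvement(y_min, y_hat, s):
    if s <= 0.0:          # no model uncertainty: no expected improvement
        return 0.0
    u = (y_min - y_hat) / s
    return (y_min - y_hat) * norm_cdf(u) + s * norm_pdf(u)

# EI grows with both predicted improvement and predictive uncertainty:
a = expected_improvement(1.0, 0.5, 0.1)
b = expected_improvement(1.0, 0.5, 0.5)
```

The comparison of `a` and `b` illustrates the exploration term: at equal predicted improvement, the more uncertain point receives the larger EI.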

2.3. R2 Indicator

The R2 indicator [41] is a unary indicator proposed to evaluate the performance of a nondominated solution set A, given as

  R2(A, U) = (1/|U|) Σ_{u∈U} min_{a∈A} u(a),

where U is a discrete and finite set of utility functions u, for which there are many options, such as weighted sums and penalty-based boundary intersection [42].
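A minimal sketch of the unary R2 indicator with weighted Tchebycheff utilities (ideal point z*), matching the definition above; names and data are illustrative:

```python
# Weighted Tchebycheff utility of objective vector z under weight lam.
def tchebycheff(z, lam, z_star):
    return max(l * abs(zj - zs) for l, zj, zs in zip(lam, z, z_star))

# Unary R2: average over weight vectors of the best (smallest) utility
# achieved by any member of the set A.
def r2(A, weights, z_star):
    return sum(min(tchebycheff(a, lam, z_star) for a in A)
               for lam in weights) / len(weights)

A = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]          # a toy nondominated set
W = [(1.0, 0.0), (0.5, 0.5), (0.0, 1.0)]          # three weight vectors
val = r2(A, W, z_star=(0.0, 0.0))
```

The cost is one max over m objectives per set member per weight vector, which is why R2 stays cheap as the number of objectives grows.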

3. Proposed Method

3.1. Expectation Improvement R2 Indicator

The main difficulty in extending the EI infilling criterion to MOPs lies in the fact that we have to specify y_min before calculating the EI value, which is impractical: no single solution is optimal with respect to every objective; instead, a set of nondominated solutions is found during the previous optimization process. It is then highly desirable to find an infilling point having large EI values with respect to all points in the current nondominated set. This motivates us to treat each nondominated solution as the current optimum and then calculate the EI value in each objective, resulting in a vector function

  E(x; p) = (EI_1(x; p_1), EI_2(x; p_2), …, EI_m(x; p_m)),

where p is a point in the current nondominated set P and EI_j(x; p_j) is the expected improvement in objective j when p_j is deemed as y_min. Obviously, we need to build m Kriging models, each corresponding to one objective function. After applying the function E to all nondominated solutions, we get a set composed of |P| points, defined as

  S(x) = { E(x; p) | p ∈ P }.

In essence, we have mapped the point x in the design space to a set S(x) in another space, called the expectation improvement space, where the R2 indicator of S(x) is defined as the fitness of x being selected as the infilling point. The mapping from the objective space to the EI space is illustrated in Figure 1.

Combining the EI and R2 indicators, a novel indicator for MOPs, the Expectation Improvement R2 indicator (EIR2), is defined as

  EIR2(x) = R2(S(x), U).

As EIR2 is a scalar indicator, a single-objective optimization algorithm can be employed to determine the optimal solution as the infilling point. Besides, EIR2 actually defines an indicator template in which other utility functions can be specified. In this paper, the Tchebycheff function is employed as the utility function because it can tackle MOPs with convex or concave Pareto fronts, defined as

  u_λ(e) = max_{j=1,…,m} λ_j |e_j − z*_j|,

where λ = (λ_1, …, λ_m) is a weight vector satisfying λ_j ≥ 0 and Σ_j λ_j = 1. These weight vectors are uniformly distributed in the objective space and are collected in the weight set Λ. z* is the ideal point; as all EI values are nonnegative, a natural selection is z* = 0. To summarize, the EIR2 indicator has the form

  EIR2(x) = (1/|Λ|) Σ_{λ∈Λ} min_{e∈S(x)} max_{j=1,…,m} λ_j e_j.

The point that maximizes the EIR2 indicator is recognized as an infilling point for expensive evaluation.
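Putting the pieces together, here is one plausible sketch of evaluating EIR2 for a single candidate point under the assumptions above (Tchebycheff utilities, ideal point 0). The matrix of EI values, one row per current nondominated point and one column per objective, is taken as given; the function name and data are illustrative, not the authors' code:

```python
# EIR2 = average over weight vectors of the minimum, over the mapped set
# S(x), of the worst-case weighted EI component (ideal point 0).
def eir2(ei_matrix, weights):
    total = 0.0
    for lam in weights:
        total += min(max(l * e for l, e in zip(lam, row))
                     for row in ei_matrix)
    return total / len(weights)

# Two nondominated points, two objectives; EI values are illustrative.
S = [[0.2, 0.0], [0.0, 0.3]]
W = [(1.0, 0.0), (0.0, 1.0), (0.5, 0.5)]
score = eir2(S, W)
```

An outer single-objective optimizer (PSO in this paper) would then maximize `eir2` over candidate points x, recomputing the EI matrix from the m Kriging models at each candidate.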

3.2. Weight Vector Generation

The utility function actually defines an optimization subproblem parameterized by the weight vector, through which the optimization process can be controlled. If the geometric characteristics of the PF are known, such as being convex or concave, a set of weight vectors can be distributed consistently with the PF in the objective space, by which a more accurate approximated PF with uniform distribution is more likely to be obtained. However, there is often no a priori information about the PF, which is common in practical MOPs; hence, a better choice in this situation is to treat all objectives as equally important and generate a set of weight vectors evenly distributed in the objective space.

To help generate the weight vectors λ, an auxiliary set W = {0, 1/H, 2/H, …, 1} is introduced, where H is a user-specified positive integer. We choose m − 1 elements from W as the first m − 1 components of λ (discarding combinations whose sum exceeds 1) and assign the last component the value 1 − Σ_{j=1}^{m−1} λ_j to ensure Σ_j λ_j = 1. Exhausting every possible combination, there are in total C(H + m − 1, m − 1) weight vectors generated. Figure 2 illustrates all 21 weight vectors generated using the parameters m = 3 and H = 5.
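The construction above is the usual simplex-lattice enumeration and can be sketched as follows (names are ours); with m = 3 and H = 5 it yields C(7, 2) = 21 vectors, matching Figure 2:

```python
# Enumerate weight vectors with components from {0, 1/H, ..., 1}
# summing to 1 (simplex lattice).
from itertools import product

def weight_vectors(m, H):
    vecs = []
    for combo in product(range(H + 1), repeat=m - 1):
        if sum(combo) <= H:                      # keep feasible prefixes
            last = H - sum(combo)                # last component closes the sum
            vecs.append(tuple(c / H for c in combo) + (last / H,))
    return vecs

W = weight_vectors(3, 5)   # 21 uniformly spread weight vectors
```

The enumeration is exhaustive rather than random, which is what guarantees the even spread over the objective simplex.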

3.3. Comparison with Other IPCs

The novelty of EIR2 is that the fitness assessment of a point is carried out not in the objective space but in the EI space, where the R2 indicator is calculated for the mapped set of the point. Since the EI indicator has a closed function form, so does EIR2, regardless of the number of objective functions. In addition, the computational complexity of evaluating EIR2 is O(m|P||Λ|), linear in the number of objectives, nondominated points, and weight vectors, which is considerably less than that of hypervolume [26].

ParEGO also uses the Tchebycheff function; however, its purpose there is to aggregate multiple objectives into a single objective before applying the EGO algorithm. Therefore, in essence, the infilling point criterion used in ParEGO is EI.

With regard to the MaxMin indicator, although an analytic form exists, its calculation has to resort to dividing the objective space into many blocks, on which tedious multivariable integration is carried out. Even worse, the number of blocks increases exponentially with the number of objectives. So, in practice, the analytic form is applied only when m = 2, while the Monte Carlo method is used when m > 2.

Figure 3 shows contour plots of the three IPCs, namely, ParEGO, MaxMin, and EIR2, applied to the test function TEST2 defined in Table 1, where lighter color denotes a higher value. In these plots, 20 known sample points are represented by red points, and blue points represent the true Pareto set, which is composed of three disconnected subsets. It is easy to see that ParEGO identifies only one Pareto subset out of three, while MaxMin and EIR2 successfully locate all three. In the region where no Pareto solution exists, the function value of EIR2 is almost constant, while the landscape of MaxMin shows a ladder-like trend.

Table 1. Definitions of the test functions (the boundary and objective-function columns did not survive extraction).

Name    Dimension   Comments on PF
TEST1   2           Convex, simple
TEST2   2           Convex, disconnected
ZDT3    5           Convex, disconnected

Deutz [43] recently proposed an IPC, also called EIR2, which is nevertheless very different from the one proposed in this article. The main difference is that the R2 indicator is calculated in a different space. Deutz's R2 indicator is defined in the objective space, and the expectation of the R2 indicator is computed through multivariate integration; therefore, an analytic form can be obtained only when m = 2 and does not exist when m > 2, where the Monte Carlo method has to be adopted. In contrast, the EIR2 in this paper calculates the R2 indicator in the EI space and hence has an analytic form for any number of objectives m. Most importantly, our EIR2 involves no multivariable integration and hence is cheap to evaluate.

3.4. Overview of EIR2-MOEA

Combining the devised EIR2 infilling point criterion with the Kriging model, optimal Latin hypercube sampling method [44], and particle swarm optimization [45], an optimization algorithm for multiobjective problems with expensive functions is proposed, referred to as EIR2-MOEA. The detailed procedure is shown in Algorithm 1.

Input:
  N_max: maximum number of evaluations by the expensive function
  α: ratio of the number of initial samples to N_max
  H: integer used to generate weight vectors
Output:
  P: nondominated solution set
(1) Initialization: obtain the design variable limits lb and ub, the number of objectives m, etc. according to the MOP to be solved. Set N_init = α·N_max and X = ∅.
(2) Generate N_init sample points in the design space using optimal Latin hypercube sampling.
(3) for i = 1 to N_init do
(4)   Calculate the expensive objective function values f(x_i) for sample point x_i
(5)   X = X ∪ {x_i}, i.e., add sample point x_i to X
(6) end for
(7) Generate weight vectors in the objective space and save them in the set Λ.
(8) Find the nondominated solutions in X and put them into P
(9) for i = N_init + 1 to N_max do
(10)   for j = 1 to m do
(11)     Use PSO to find the best hyperparameter θ_j
(12)     Build the Kriging model of the j-th objective function based on X
(13)   end for
(14)   According to the EIR2 indicator, apply PSO to find the best infilling point x_i
(15)   Calculate the expensive objective function values f(x_i)
(16)   Update X = X ∪ {x_i}
(17)   Find the nondominated solutions in X and put them into P
(18) end for
return P as the approximated Pareto set

In the initial stage of EIR2-MOEA, a small set of sample points needs to be selected for establishing the initial Kriging model. As a design of experiments method, optimal Latin hypercube sampling (LHS) is utilized, as it is flexible in setting the number of sampling points and ensures that these points fill the entire design space.
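For illustration, here is a plain (non-optimized) Latin hypercube sketch: each variable's range is split into n equal strata and each stratum is sampled exactly once per dimension. The paper's optimal variant additionally maximizes point spread, which is omitted here; names are ours:

```python
# Plain Latin hypercube sampling: one sample per stratum per dimension,
# with stratum order shuffled independently for each dimension.
import random

def latin_hypercube(n, lb, ub, seed=0):
    rng = random.Random(seed)
    d = len(lb)
    strata = []
    for j in range(d):
        perm = list(range(n))
        rng.shuffle(perm)                               # stratum order
        strata.append([(p + rng.random()) / n for p in perm])
    return [[lb[j] + strata[j][i] * (ub[j] - lb[j]) for j in range(d)]
            for i in range(n)]

S = latin_hypercube(10, [0.0, -1.0], [1.0, 1.0])
```

An "optimal" LHS would repeat this construction (or permute columns) while maximizing a spread criterion such as the minimum pairwise distance.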

When maximizing the EIR2 value and searching for the optimal hyperparameters of the Kriging model, an optimization method is required. However, the landscapes of these two functions are nonlinear, and traditional gradient-based optimization algorithms are prone to falling into local minima if the initial point is not selected wisely, requiring multiple reruns. The particle swarm optimization algorithm, an intelligent optimization algorithm, needs no gradient information and has few parameter settings, high search efficiency, and low complexity; hence, it is employed to find the optimal parameters.

4. Experiment Setup

In order to assess the performance of the proposed algorithm, we applied it to three suites of standard test functions and compared it with two other IPCs, ParEGO and MaxMin.

4.1. Test Problems

Three benchmark function suites, namely, the simple set, the ZDT set, and the DTLZ set, are selected to verify the performance of the IPCs in different aspects; their definitions are shown in Table 1. The simple set includes three biobjective optimization problems, TEST1, TEST2, and FON, with fewer than three variables, whose corresponding PFs are convex, disconnected, and concave, respectively. The ZDT set contains three biobjective test functions, ZDT1, ZDT2, and ZDT3, with convex, concave, and disconnected PFs, respectively; we set the number of variables to 5 to test the IPCs' performance on problems with more decision variables. DTLZ2, DTLZ5, and DTLZ7 form the DTLZ set; they all have three objectives and five decision variables and are selected to assess the performance of the IPCs when the number of objectives is larger.

4.2. Performance Metric

We hope that the approximated Pareto front found is close enough to the true Pareto front and is evenly distributed in the objective space, that is, meets the requirements on convergence and diversity. For this purpose, we choose three performance metrics to quantify the performance of the approximated Pareto front.

(1) Inverted Generational Distance (IGD) [28]:

  IGD(P*, P) = (1/|P*|) Σ_{v∈P*} min_{u∈P} d(v, u),

where P* and P are the true and approximated Pareto sets, respectively, |P*| is the number of solutions in P*, and d(v, u) represents the Euclidean distance between v and u in the objective space. IGD measures the average distance between the true and approximated Pareto sets; therefore, a smaller IGD value is preferred.

(2) Hypervolume (HV) [31]:

  HV(P, r) = Leb( ∪_{u∈P} [u, r] ),

where r represents a reference point which is dominated by all solutions in P and Leb stands for the Lebesgue measure. Geometrically, HV is the volume enclosed by the approximated Pareto set and the reference point r. HV can reflect both convergence and uniformity: if the approximated Pareto set performs well in both aspects, the HV value will be high.

(3) Nondominated solution ratio (NR):

  NR = N_nd / N_max,

where N_nd is the number of nondominated solutions and N_max is the maximum number of evaluations by the expensive function. NR measures the proportion of nondominated solutions among the total sample points evaluated by expensive functions; in other words, NR represents the efficiency of an IPC in using computing resources. The larger the value, the more nondominated solutions found by the IPC and the higher the utilization efficiency of computing resources.
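The metrics above are easy to sketch in code. Below, IGD and NR follow the definitions directly, and HV is shown for the biobjective minimization case only, where the dominated area can be swept left to right (general HV needs a dedicated algorithm); names are ours:

```python
import math

# IGD: average distance from each true-front point to its nearest
# approximated point (smaller is better).
def igd(true_front, approx_front):
    return sum(min(math.dist(v, u) for u in approx_front)
               for v in true_front) / len(true_front)

# 2-D hypervolume for minimization: area dominated by the front and
# bounded above by the reference point ref (larger is better).
def hv_2d(front, ref):
    area, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(front):        # ascending in the first objective
        if f2 < prev_f2:
            area += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return area

# NR: fraction of expensive evaluations that ended up nondominated.
def nr(n_nondominated, n_max):
    return n_nondominated / n_max
```

For example, `hv_2d([(1, 3), (2, 2), (3, 1)], (4, 4))` sums three rectangles of areas 3, 2, and 1.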

4.3. Parameter Setting

We hope to evaluate the performance of each IPC with limited computing resources; that is, for each test function, we set the maximum number of evaluations involving expensive functions, N_max, as shown in Table 2. The size of the initial sample set is specified as a fraction of N_max, namely, N_init = α·N_max, where α is a proportional coefficient. To ensure fairness, all three IPCs start with the same initial sample point set generated by optimal LHS in each run, and ten runs are executed for each test function.



Reference points are required for calculating the HV value. All biobjective optimization problems share the same reference point except TEST1 for which is used; all three-objective optimization problems are assigned the same reference point except DTLZ7, where is utilized.

As for particle swarm optimization, the population size is set to 100, the evolution generation is 100, and the scaling factor is 1.2. Other parameters of MaxMin and ParEGO are taken as the recommended values in original papers.

5. Result and Discussion

For each test function, ten independent runs are performed, and the HV, IGD, and NR indicators of the resulting approximated Pareto set are calculated. Their means and standard deviations over the ten runs are listed, respectively, in Tables 3–5. For each test function, the winning IPC among the three is shown in boldface with a gray background.


Table 3. HV metric over ten runs (mean and std; columns: ParEGO, MaxMin, EIR2).

Problem   ParEGO mean  ParEGO std  MaxMin mean  MaxMin std  EIR2 mean  EIR2 std
TEST1     272.2024     1.71E+00    278.8626     1.03E-01    279.0228   8.64E-02
TEST2     0.6900       8.02E-03    0.7227       1.49E-03    0.7233     7.44E-04
FON       0.4583       3.10E-02    0.5324       2.21E-03    0.5364     3.08E-03
ZDT1      0.8468       5.53E-03    0.8463       5.01E-03    0.8643     1.08E-03
ZDT2      0.5057       2.14E-02    0.5121       4.63E-03    0.5312     1.04E-03
ZDT3      1.2925       8.52E-03    1.2946       4.72E-02    1.3230     1.14E-03
DTLZ2     0.6499       9.69E-03    0.7078       5.60E-03    0.7378     2.85E-03
DTLZ5     0.4098       5.02E-03    0.4151       3.90E-03    0.4358     3.45E-04
DTLZ7     1.5410       2.31E-02    1.6333       2.72E-02    1.6858     3.39E-03


Table 4. IGD metric over ten runs (mean and std; columns: ParEGO, MaxMin, EIR2).

Problem   ParEGO mean  ParEGO std  MaxMin mean  MaxMin std  EIR2 mean  EIR2 std
TEST1     0.4347       4.88E-02    0.1939       5.19E-03    0.1908     4.49E-03
TEST2     0.0415       1.02E-02    0.0097       5.06E-04    0.0097     5.49E-04
FON       0.0640       3.47E-02    0.0121       1.05E-03    0.0109     1.98E-03
ZDT1      0.0211       2.42E-03    0.0232       4.17E-03    0.0096     8.15E-04
ZDT2      0.0285       1.51E-02    0.0249       3.26E-03    0.0126     1.16E-03
ZDT3      0.0591       9.69E-03    0.0544       1.21E-02    0.0143     1.79E-03
DTLZ2     0.0999       5.40E-03    0.0992       6.36E-03    0.0700     4.21E-03
DTLZ5     0.0289       3.01E-03    0.0334       4.89E-03    0.0084     4.72E-04
DTLZ7     0.1393       1.70E-02    0.0851       1.25E-02    0.0576     4.23E-03


Table 5. NR metric over ten runs (mean and std; columns: ParEGO, MaxMin, EIR2).

Problem   ParEGO mean  ParEGO std  MaxMin mean  MaxMin std  EIR2 mean  EIR2 std
TEST1     0.6689       5.78E-02    0.8711       2.52E-02    0.8644     2.86E-02
TEST2     0.2883       2.95E-02    0.6183       2.66E-02    0.6017     2.54E-02
FON       0.1867       4.50E-02    0.4200       3.22E-02    0.4367     7.33E-02
ZDT1      0.2611       1.76E-02    0.2556       3.19E-02    0.4522     3.31E-02
ZDT2      0.2478       6.87E-02    0.2033       2.10E-02    0.4311     3.77E-02
ZDT3      0.1242       2.17E-02    0.1283       1.77E-02    0.2958     2.95E-02
DTLZ2     0.2752       1.81E-02    0.2733       2.49E-02    0.4381     2.00E-02
DTLZ5     0.1652       1.27E-02    0.1262       1.64E-02    0.3529     1.09E-02
DTLZ7     0.1924       2.43E-02    0.3686       6.87E-02    0.5438     2.53E-02

5.1. Simple Tests

TEST1, TEST2, and FON are all biobjective test functions but have, respectively, convex, piecewise continuous, and concave Pareto fronts. The comparison of the mean and std values for the HV, IGD, and NR metrics can be found in the top parts of Tables 3–5, respectively. ParEGO achieves the worst performance on all three metrics, while EIR2 and MaxMin achieve similar results, both significantly better than ParEGO. Specifically, for TEST1 and TEST2, EIR2 is better than MaxMin in terms of HV and IGD but slightly worse concerning NR. For FON, EIR2 ranks first on all three performance metrics.

Figure 4 plots the approximated Pareto fronts obtained by the three IPCs on the TEST1, TEST2, and FON test functions, respectively, each taken from the run whose HV value is the median of the ten runs. It is evident that the number of nondominated solutions obtained by ParEGO is small and that they are not evenly distributed, which is most obvious on TEST2 and FON. This phenomenon is to be expected, because ParEGO lacks a mechanism to guarantee a uniform distribution of the solutions: its weight vector is randomly generated at each iteration. In contrast, the approximated Pareto fronts obtained by EIR2 and MaxMin cover the true Pareto fronts completely and have good distribution, regardless of the nature of the true Pareto front of the test function.

Overall, EIR2 and MaxMin have comparable performance and are much better than ParEGO on simple test functions with a low number of variables.

5.2. ZDT Tests

The middle parts of Tables 3–5 present the HV, IGD, and NR indicator comparison on the ZDT test suite. Figure 5 illustrates the approximated Pareto fronts obtained by each IPC on ZDT1–ZDT3, again for the run whose HV value is the median of the ten runs. It is clear that EIR2 is the best in terms of the HV, IGD, and NR indicators for all ZDT test functions. Specifically, the HV and IGD values of EIR2 are much better than those of ParEGO and MaxMin. It is worth noting that the advantages of EIR2 are quite obvious for the IGD and NR indicators. For example, for ZDT3, EIR2 gets an NR value of 0.5438, while the other two algorithms only reach 0.1924 and 0.3686; meanwhile, it gets an IGD value of 0.0143, while the other two get 0.0544 and 0.0591.

According to Figure 5, both ParEGO and MaxMin successfully converged to the true Pareto fronts of ZDT1 and ZDT2; however, they failed to distribute their solutions evenly. This phenomenon is even worse on ZDT3, whose true Pareto front consists of five segmented curves: the majority of the solutions found by ParEGO and MaxMin are clustered in one segment, with only a few solutions located in the other segments. In contrast, the approximated Pareto set obtained by EIR2 is uniformly and equally distributed among all five segments.

In summary, EIR2 is capable of finding more nondominated solutions than ParEGO and MaxMin using the same amount of computing resources and keeping them evenly distributed along the approximated Pareto front.

5.3. DTLZ Tests

The DTLZ test function suite is used to test an algorithm's ability on three-objective optimization problems. The metric comparisons are listed in the bottom parts of Tables 3–5, respectively. Figure 6 shows each IPC's results on DTLZ2, DTLZ5, and DTLZ7 for the run whose HV value is the median of the ten runs. As in the ZDT case, on the three test functions EIR2 is still significantly ahead of its two competitors in terms of all three performance metrics. To be more specific, take DTLZ2 as an instance: the NR value of EIR2 is 0.3529, which is nearly three times the 0.1262 of MaxMin and more than twice the 0.1652 of ParEGO, showing again that EIR2 is more efficient in using computation resources than the other IPCs. The larger HV value and smaller IGD value obtained by EIR2 also indicate its strong convergence ability.

The same conclusion is reached by inspecting Figure 6. Specifically, ParEGO has difficulty converging to the true Pareto front, while MaxMin, despite converging to the PF, is unable to spread its approximated Pareto front uniformly along the true Pareto front, which is a sphere of radius 1: only a few solutions lie in the middle part, and most are found near the three edges. In contrast, the approximated PF obtained by EIR2 converges to and covers the sphere's surface entirely and evenly.

6. Engineering Application

In addition to boundary constraints, most MOPs also contain equality or inequality constraints. We assume that the constraint functions are expensive; hence, a Kriging model needs to be established for each constraint. For the purpose of measuring the degree to which a solution satisfies the constraints, the probability of feasibility (PoF) [46, 47] is introduced as follows:

  PoF_i(x) = P[G_i(x) ≤ 0] = Φ( −ĝ_i(x) / s_i(x) ),

where ĝ_i(x) and s_i(x) are the predicted value and standard deviation of the constraint function g_i. Based on PoF, the average probability of feasibility (APoF) is defined as follows:

  APoF(x) = (1/p) Σ_{i=1}^{p} PoF_i(x),

where p is the number of all inequality constraints. The APoF value of an infeasible solution is low, but it increases as more constraints get satisfied. Clearly, the value of APoF varies in the interval [0, 1]. The closer the value of APoF is to 1, the higher the probability that the solution is feasible, and hence the more preferred it is.
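A minimal sketch of this feasibility filter, assuming each constraint is posed as g_i(x) ≤ 0 and modeled by a Kriging mean ĝ_i and standard deviation s_i (function names are ours, not the authors' code):

```python
# PoF = P[G(x) <= 0] for G ~ N(g_hat, s^2); APoF averages over constraints.
import math

def pof(g_hat, s):
    if s <= 0.0:                       # no uncertainty: hard feasibility test
        return 1.0 if g_hat <= 0.0 else 0.0
    return 0.5 * (1.0 + math.erf(-g_hat / (s * math.sqrt(2.0))))

def apof(g_hats, stds):
    return sum(pof(g, s) for g, s in zip(g_hats, stds)) / len(g_hats)

# The constrained criterion then weights the unconstrained one:
# CEIR2(x) = EIR2(x) * APoF(x)
```

Multiplying by APoF drives the infilling search away from regions the constraint models consider likely infeasible while retaining the EIR2 balance of exploration and exploitation.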

In combination with APoF, EIR2 is extended to constrained MOPs as

  CEIR2(x) = EIR2(x) · APoF(x).

In this section, the proposed CEIR2 indicator is applied to the optimization design of energy storage flywheel [48] to test its effectiveness.

The energy storage flywheel [49] uses a flywheel rotating at high speed to store kinetic energy and converts it into electrical energy when needed. At a given constant speed, the maximum energy storage is proportional to the moment of inertia of the flywheel. A larger moment of inertia can be obtained by adjusting the geometric parameters of the flywheel, but at the risk of increasing the total mass. In this paper, our job is to maximize the flywheel's rotational inertia as well as to minimize its total mass.

Figure 7 shows a cross-sectional profile of an energy storage flywheel. There are six decision variables, representing the radial sizes and thicknesses of the hub and spoke of the flywheel. Three inequality constraints impose restrictions on radial stress, radial deformation, and hoop stress. The overall optimization problem is expressed as follows:

  max J(x), min M(x)
  s.t. σ_r(x) ≤ [σ_r], σ_θ(x) ≤ [σ_θ], u_r(x) ≤ [u_r],
       lb ≤ x ≤ ub,

where x is the design vector whose upper and lower boundaries are listed in Table 6, and J and M represent the rotational inertia and the total mass of the flywheel, respectively. σ_r, σ_θ, and u_r represent, respectively, the maximum radial stress, the maximum hoop stress, and the maximum radial deformation of the outer ring when the flywheel rotates, accompanied by their corresponding allowable values [σ_r], [σ_θ], and [u_r].

[Table 6: ranges (mm) of the six decision variables.]
Although formulas for the two objective functions can be obtained directly from the geometric parameters, evaluating the constraint functions requires finite element analysis, given the complexity of the flywheel structure and the external loading conditions; hence, this is a constrained expensive multiobjective optimization problem.

The material used for the flywheel is Ti-6Al-4V titanium alloy, with the corresponding allowable stresses; the allowable radial deformation is specified at the working rotation speed. ANSYS is employed to calculate the values of the objective and constraint functions.

As for the parameter settings, the maximum number of function evaluations is 360, and the other parameters remain unchanged from the test function experiments.

Applying CEIR2 to this engineering problem yields the feasible nondominated solutions shown in Figure 8. For comparison, we use the optimal LHS method to generate 900 sample points at once and then evaluate the expensive objective and constraint functions at these points; the resulting feasible nondominated solutions are also shown in Figure 8.

CEIR2 found a total of 44 feasible nondominated solutions, which cover a large range in the objective space and are uniformly distributed along the approximated PF. In comparison, the optimal LHS spent 900 expensive function evaluations but found only 11 feasible nondominated solutions, which are inferior to those of CEIR2 in terms of both convergence and diversity, demonstrating CEIR2's advantage in the use of computing resources.
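The feasible nondominated solutions in such a comparison can be extracted with a standard Pareto dominance filter. A minimal sketch, assuming all objectives are to be minimized (a maximized quantity such as the rotational inertia is negated first) and that infeasible points have already been removed:

```python
def dominates(a, b) -> bool:
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Return the nondominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```

This quadratic-time filter is sufficient for the small solution sets involved here (44 and 11 points).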

7. Conclusion and Future Work

In this paper, a novel infilling point criterion, EIR2, is proposed and combined with the Kriging approximate model, optimal Latin hypercube sampling, and the PSO algorithm to deal with expensive multiobjective optimization problems. The basic idea of EIR2 is to map a point in the objective space to a set in the EI space and to define the R2 indicator of that set as the fitness of the point. Owing to its analytic form, the computation required by this criterion is very low. The proposed algorithm is applied to three suites of standard test functions. The results show that, under equal computing resources, the algorithm obtains more nondominated solutions with better convergence and dispersion. Combined with the average probability of feasibility indicator, EIR2 can also be applied to constrained expensive MOPs. In the engineering optimization of the energy storage flywheel, nondominated solutions with a wider range and better diversity were obtained at relatively low computational cost.
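For reference, the R2 indicator underlying this fitness assignment can be sketched in a few lines. This is the standard unary R2 indicator with the weighted Tchebycheff utility; the weight set and reference point shown are assumptions for illustration, and the paper's exact EI-space formulation may differ:

```python
def r2_indicator(points, weights, ref):
    """Unary R2 indicator with weighted Tchebycheff utility:
    average, over the weight vectors, of the best (smallest) scalarized
    distance to the reference point `ref` achieved by any point in `points`."""
    total = 0.0
    for w in weights:
        total += min(
            max(wj * abs(rj - pj) for wj, rj, pj in zip(w, ref, p))
            for p in points
        )
    return total / len(weights)
```

A set that touches the reference point along every weight direction scores 0; larger values indicate a set farther from the reference point on average.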

Future work will extend EIR2 to a parallel version capable of finding multiple infilling points [50–54] in one iteration, with the help of parallel computers, to further accelerate the optimization process. Besides, regarding the parameterization of the utility function, we currently use a fixed set of uniformly distributed weight vectors. This strategy works well if the geometry of the PF is relatively regular, such as a simplex-like shape, but may struggle [55] if the true PF is irregular, for example, having a sharp peak or a long tail, or being discontinuous, because multiple weight vectors could then correspond to the same solution. As a result, the diversity of the nondominated solution set cannot be guaranteed. A promising strategy [56–58] is to dynamically adjust the weight vectors during the optimization process so that the solutions are directed towards every part of the PF. Future work will incorporate this strategy into our IPC to further improve its ability to maintain diversity.
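The fixed uniform weight set discussed above is typically generated with a simplex-lattice (Das-Dennis) design; a compact sketch via the stars-and-bars enumeration (the parameter names are illustrative):

```python
from itertools import combinations

def uniform_weights(m: int, h: int):
    """Das-Dennis simplex-lattice design: all m-dimensional weight
    vectors whose components are multiples of 1/h and sum to 1."""
    vectors = []
    # Place m-1 'dividers' among h+m-1 slots; gaps between dividers
    # give the integer counts k_1..k_m with sum h (stars and bars).
    for dividers in combinations(range(h + m - 1), m - 1):
        prev = -1
        counts = []
        for d in dividers:
            counts.append(d - prev - 1)
            prev = d
        counts.append(h + m - 1 - prev - 1)
        vectors.append(tuple(c / h for c in counts))
    return vectors
```

For m objectives and divisions h, this yields C(h+m-1, m-1) vectors, e.g. 10 vectors for m = 3, h = 3; an adaptive scheme would then reposition these during the run.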


Abbreviations

MOPs: Multiobjective Optimization Problems
MOEA: Multiobjective Optimization Evolutionary Algorithm
EI: Expectation Improvement
EGO: Efficient Global Optimization
IPC: Infilling Point Criterion
DoE: Design of Experiments
MCM: Monte Carlo Method
EIR2: Expectation Improvement R2 Indicator
IGD: Inverted Generational Distance
LHS: Latin hypercube sampling
NR: Nondominated solution ratio
PS: Pareto set
PF: Pareto front
PSO: Particle swarm optimization
PoF: Probability of Feasibility
APoF: Average Probability of Feasibility
QI: Quality indicators.

Data Availability

The source code of the EIR2-MOEA algorithm proposed in this article can be obtained by contacting the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Authors’ Contributions

Both authors have contributed equally to this paper and have read and approved the final manuscript.


Acknowledgments

This research was supported by Key Projects in the National Science & Technology Pillar Program (2011BAF08B03) and Shanghai's Plan for Absorption and Innovation of Imported Technology (industry-20; direction-28).


References

1. K. Deb, Multi-Objective Optimization Using Evolutionary Algorithms, John Wiley & Sons, Hoboken, NJ, USA, 2001.
2. A. Abraham and L. Jain, “Evolutionary multiobjective optimization,” in Advanced Information and Knowledge Processing, A. Abraham, L. Jain, and R. Goldberg, Eds., pp. 1–6, Springer, London, UK, 2005.
3. K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm: NSGA-II,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002.
4. E. Zitzler, M. Laumanns, and L. Thiele, “SPEA2: improving the strength Pareto evolutionary algorithm,” in Proceedings of EUROGEN’2001, Athens, Greece, September 2001.
5. Q. Zhang and H. Li, “MOEA/D: a multiobjective evolutionary algorithm based on decomposition,” IEEE Transactions on Evolutionary Computation, vol. 11, pp. 712–731, 2007.
6. C. A. Coello Coello and M. S. Lechuga, “MOPSO: a proposal for multiple objective particle swarm optimization,” in Proceedings of the 2002 Congress on Evolutionary Computation, pp. 1051–1056, IEEE Computer Society, Washington, DC, USA, 2002.
7. Y. S. Ong, P. B. Nair, and A. J. Keane, “Evolutionary optimization of computationally expensive problems via surrogate modeling,” AIAA Journal, vol. 41, no. 4, pp. 687–696, 2003.
8. D. R. Jones, “A taxonomy of global optimization methods based on response surfaces,” Journal of Global Optimization, vol. 21, no. 4, pp. 345–383, 2001.
9. S. M. Clarke, J. H. Griebsch, and T. W. Simpson, “Analysis of support vector regression for approximation of complex engineering analyses,” Journal of Mechanical Design, vol. 127, no. 6, pp. 1077–1087, 2005.
10. A. Gaspar-Cunha and A. Vieira, “A multi-objective evolutionary algorithm using neural networks to approximate fitness evaluations,” vol. 6, p. 20, 2005.
11. J. P. C. Kleijnen, “Kriging metamodeling in simulation: a review,” European Journal of Operational Research, vol. 192, no. 3, pp. 707–716, 2009.
12. J. Antony, Design of Experiments for Engineers and Scientists, Elsevier, Amsterdam, Netherlands, 2014.
13. J. Parr, C. M. Holden, A. I. Forrester, and A. J. Keane, “Review of efficient surrogate infill sampling criteria with constraint handling,” in Proceedings of the 2nd International Conference on Engineering Optimization, pp. 1–10, Lisbon, Portugal, 2010.
14. M. T. M. Emmerich, K. Yang, and A. H. Deutz, “Infill criteria for multiobjective Bayesian optimization,” in High-Performance Simulation-Based Optimization, T. Bartz-Beielstein, B. Filipič, P. Korošec et al., Eds., pp. 3–16, Springer International Publishing, Cham, Switzerland, 2020.
15. D. R. Jones, M. Schonlau, and W. J. Welch, “Efficient global optimization of expensive black-box functions,” Journal of Global Optimization, vol. 13, no. 4, pp. 455–492, 1998.
16. A. Forrester, A. Sobester, and A. Keane, Engineering Design via Surrogate Modelling: A Practical Guide, John Wiley & Sons, Hoboken, NJ, USA, 2008.
17. F. Viana and R. Haftka, “Surrogate-based optimization with parallel simulations using the probability of improvement,” in Proceedings of the 13th AIAA/ISSMO Multidisciplinary Analysis Optimization Conference, American Institute of Aeronautics and Astronautics, Fort Worth, TX, USA, 2010.
18. J. Knowles, “ParEGO: a hybrid algorithm with on-line landscape approximation for expensive multiobjective optimization problems,” IEEE Transactions on Evolutionary Computation, vol. 10, no. 1, pp. 50–66, 2006.
19. J. Davins-Valldaura, S. Moussaoui, G. Pita-Gil, and F. Plestan, “ParEGO extensions for multi-objective optimization of expensive evaluation functions,” Journal of Global Optimization, vol. 67, no. 1–2, pp. 79–96, 2017.
20. J. Shinkyu and O. Shigeru, “Efficient global optimization (EGO) for multi-objective problem and data mining,” in Proceedings of the 2005 IEEE Congress on Evolutionary Computation, vol. 3, pp. 2138–2145, Scotland, UK, 2005.
21. T. Murata and H. Ishibuchi, “MOGA: multi-objective genetic algorithms,” in Proceedings of the 1995 IEEE International Conference on Evolutionary Computation, vol. 1, p. 289, Perth, Western Australia, 1995.
22. Q. Zhang, W. Liu, E. Tsang, and B. Virginas, “Expensive multiobjective optimization by MOEA/D with Gaussian process model,” IEEE Transactions on Evolutionary Computation, vol. 14, pp. 456–474, 2010.
23. J. Svenson and T. Santner, “Multiobjective optimization of expensive-to-evaluate deterministic computer simulator models,” Computational Statistics & Data Analysis, vol. 94, pp. 250–264, 2016.
24. K. Shimoyama, S. Jeong, and S. Obayashi, “Kriging-surrogate-based optimization considering expected hypervolume improvement in non-constrained many-objective test problems,” in Proceedings of the 2013 IEEE Congress on Evolutionary Computation, pp. 658–665, IEEE, Cancun, Mexico, 2013.
25. K. M. Decker, “The Monte Carlo method in science and engineering: theory and application,” Computer Methods in Applied Mechanics and Engineering, vol. 89, no. 1–3, pp. 463–483, 1991.
26. M. Li and X. Yao, “Quality evaluation of solution sets in multiobjective optimisation: a survey,” ACM Computing Surveys, vol. 52, no. 2, pp. 1–38, 2019.
27. H. Ishibuchi, R. Imada, Y. Setoguchi, and Y. Nojima, “Reference point specification in inverted generational distance for triangular linear Pareto front,” IEEE Transactions on Evolutionary Computation, vol. 22, no. 6, pp. 961–975, 2018.
28. M. R. Sierra and C. A. Coello Coello, “Improving PSO-based multi-objective optimization using crowding, mutation and epsilon-dominance,” in Evolutionary Multi-Criterion Optimization, Lecture Notes in Computer Science, pp. 505–519, Springer, Berlin, Germany, 2005.
29. E. Zitzler, L. Thiele, M. Laumanns, C. Fonseca, and V. da Fonseca, “Performance assessment of multiobjective optimizers: an analysis and review,” IEEE Transactions on Evolutionary Computation, vol. 7, no. 2, pp. 117–132, 2003.
30. B. Li, K. Tang, J. Li, and X. Yao, “Stochastic ranking algorithm for many-objective optimization based on multiple indicators,” IEEE Transactions on Evolutionary Computation, vol. 20, no. 6, pp. 924–938, 2016.
31. J. Bader and E. Zitzler, “HypE: an algorithm for fast hypervolume-based many-objective optimization,” Evolutionary Computation, vol. 19, pp. 45–76, 2010.
32. H. Ishibuchi, R. Imada, Y. Setoguchi, and Y. Nojima, “How to specify a reference point in hypervolume calculation for fair performance comparison,” Evolutionary Computation, vol. 26, no. 3, pp. 411–440, 2018.
33. T. Pamulapati, R. Mallipeddi, and P. N. Suganthan, “ISDE+—an indicator for multi and many-objective optimization,” IEEE Transactions on Evolutionary Computation, vol. 23, pp. 346–352, 2019.
34. R. Wang, S. Chen, L. Ma, S. Cheng, and Y. Shi, “Multi-indicator bacterial foraging algorithm with Kriging model for many-objective optimization,” in Advances in Swarm Intelligence, vol. 10941, pp. 530–539, Springer International Publishing, Cham, Switzerland, 2018.
35. D. H. Phan and J. Suzuki, “R2-IBEA: R2 indicator based evolutionary algorithm for multiobjective optimization,” in Proceedings of the IEEE Congress on Evolutionary Computation, pp. 1836–1845, Cancun, Mexico, 2013.
36. R. H. Gómez and C. A. C. Coello, “MOMBI: a new metaheuristic for many-objective optimization based on the R2 indicator,” in Proceedings of the IEEE Congress on Evolutionary Computation, pp. 2488–2495, Cancun, Mexico, 2013.
37. D. Brockhoff, T. Wagner, and H. Trautmann, “On the properties of the R2 indicator,” in Proceedings of the 14th Annual Conference on Genetic and Evolutionary Computation, pp. 465–472, Association for Computing Machinery, Philadelphia, PA, USA, 2012.
38. A. Sobester, S. J. Leary, and A. J. Keane, “On the design of optimization strategies based on global response surface approximation models,” Journal of Global Optimization, vol. 33, no. 1, pp. 31–59, 2005.
39. I. Pan, M. Babaei, A. Korre, and S. Durucan, “Artificial neural network based surrogate modelling for multi-objective optimisation of geological CO2 storage operations,” Energy Procedia, vol. 63, pp. 3483–3491, 2014.
40. J. M. Parr, A. I. J. Forrester, A. J. Keane, and C. M. E. Holden, “Enhancing infill sampling criteria for surrogate-based constrained optimization,” Journal of Computational Methods in Sciences and Engineering, vol. 12, no. 1–2, pp. 25–45, 2012.
41. F. Li, J. Liu, S. Tan, and X. Yu, “R2-MOPSO: a multi-objective particle swarm optimizer based on R2-indicator and decomposition,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC), pp. 3148–3155, Sendai, Japan, May 2015.
42. H. Sato, “Inverted PBI in MOEA/D and its impact on the search performance on multi and many-objective optimization,” in Proceedings of the 2014 Annual Conference on Genetic and Evolutionary Computation, pp. 645–652, Association for Computing Machinery, Vancouver, BC, Canada, 2014.
43. A. Deutz, M. Emmerich, and K. Yang, “The expected R2-indicator improvement for multi-objective Bayesian optimization,” in Evolutionary Multi-Criterion Optimization, pp. 359–370, Springer International Publishing, Cham, Switzerland, 2019.
44. J.-S. Park, “Optimal Latin-hypercube designs for computer experiments,” Journal of Statistical Planning and Inference, vol. 39, no. 1, pp. 95–111, 1994.
45. J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of ICNN’95—International Conference on Neural Networks, vol. 4, pp. 1942–1948, Perth, Australia, 1995.
46. S. u. Rehman and M. Langelaar, “Expected improvement based infill sampling for global robust optimization of constrained problems,” Optimization and Engineering, vol. 18, pp. 723–753, 2017.
47. Z. Han, F. Liu, C. Xu, K. Zhang, and Q. Zhang, “Efficient multi-objective evolutionary algorithm for constrained global optimization of expensive functions,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC), pp. 2026–2033, Wellington, New Zealand, 2019.
48. L. Jiang, W. Zhang, G. J. Ma, and C. W. Wu, “Shape optimization of energy storage flywheel rotor,” Structural and Multidisciplinary Optimization, vol. 55, no. 2, pp. 739–750, 2017.
49. S. C. Mohan and D. K. Maiti, “Structural optimization of rotating disk using response surface equation and genetic algorithm,” International Journal for Computational Methods in Engineering Science and Mechanics, vol. 14, no. 2, pp. 124–132, 2013.
50. G. Sun, Y. Tian, R. Wang, J. Fang, and Q. Li, “Parallelized multiobjective efficient global optimization algorithm and its applications,” Structural and Multidisciplinary Optimization, vol. 61, no. 2, pp. 763–786, 2020.
51. D. Han and J. R. Zheng, “Solving expensive multi-objective optimization problems by Kriging model with multi-point updating strategy,” Applied Mechanics and Materials, vol. 685, pp. 667–670, 2014.
52. K. Yang, P. S. Palar, M. Emmerich, K. Shimoyama, and T. Bäck, “A multi-point mechanism of expected hypervolume improvement for parallel multi-objective Bayesian global optimization,” in Proceedings of the Genetic and Evolutionary Computation Conference, pp. 656–663, Association for Computing Machinery, Prague, Czech Republic, 2019.
53. Y. He, J. Sun, P. Song, X. Wang, and A. S. Usmani, “Preference-driven Kriging-based multiobjective optimization method with a novel multipoint infill criterion and application to airfoil shape design,” Aerospace Science and Technology, vol. 96, Article ID 105555, 2020.
54. J. Liu, W.-P. Song, Z.-H. Han, and Y. Zhang, “Efficient aerodynamic shape optimization of transonic wings using a parallel infilling strategy and surrogate models,” Structural and Multidisciplinary Optimization, vol. 55, no. 3, pp. 925–943, 2017.
55. H. Ishibuchi, Y. Setoguchi, H. Masuda, and Y. Nojima, “Performance of decomposition-based many-objective algorithms strongly depends on Pareto front shapes,” IEEE Transactions on Evolutionary Computation, vol. 21, no. 2, pp. 169–190, 2017.
56. S. Xu, H. Chen, X. Liang, and M. He, “A modified MOEA/D with an adaptive weight adjustment strategy,” in Proceedings of the International Conference on Intelligent Computing, Automation and Systems (ICICAS), pp. 184–188, Chongqing, China, 2019.
57. C. Dai, X. Lei, and X. He, “A decomposition-based evolutionary algorithm with adaptive weight adjustment for many-objective problems,” Soft Computing, 2019.
58. M. Li and X. Yao, “What weights work for you? Adapting weights for any Pareto front shape in decomposition-based evolutionary multiobjective optimisation,” Evolutionary Computation, vol. 28, no. 2, pp. 227–253, 2020.

Copyright © 2020 Ding Han and Jianrong Zheng. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
