Research Article  Open Access
Cheng Yan, Xiuli Shen, Fushui Guo, "Novel Two-Stage Method for Low-Order Polynomial Model", Mathematical Problems in Engineering, vol. 2018, Article ID 8156390, 13 pages, 2018. https://doi.org/10.1155/2018/8156390
Novel Two-Stage Method for Low-Order Polynomial Model
Abstract
One of the most popular statistical models is the low-order polynomial response surface model, i.e., a polynomial of first or second order. These polynomials can be used as global metamodels in weakly nonlinear simulation to approximate the global tendency and as local metamodels in response surface methodology (RSM), which has been studied in various applications in engineering design and analysis. The order of the selected polynomial determines the number of sampling points (input combinations) and the resulting accuracy (validity, adequacy). This paper derives a novel method to obtain an accurate high-order polynomial while requiring fewer sampling points. The method uses a two-stage procedure in which the second stage modifies the low-order polynomial estimated in the first stage; this second stage does not require new points. This paper evaluates the performance of the method numerically by using several test functions. These numerical results show that the proposed method can provide more accurate predictions than the traditional method.
1. Introduction
Metamodels are essentially simple approximation functions of simulation models of real systems. Currently, the most common usage of metamodels is to reduce computational cost significantly by substituting for time-consuming evaluations in computationally intensive tasks [1]. The second usage of metamodels is to smooth objective-function surfaces and deal with noisy data, which can facilitate the use of gradient-based methods for optimization problems [2, 3]. Another increasingly popular usage of metamodels is to deal with missing data and to gain insight into the contributions of each input variable to the associated output variables [4]. Besides, metamodels can also be used to reduce numerical instability [5]. Moreover, metamodels can be utilized as calibration methods for low-fidelity simulations of limited accuracy [6]. Because of these benefits, metamodels have been extensively researched and employed in various applications including engineering design, analysis, and optimization [7–10].
Up to now, various types of metamodeling techniques have been proposed, such as radial basis functions (RBF) [11], polynomial response surfaces [12], Kriging [13, 14], support vector regression (SVR) [15, 16], multivariate adaptive regression splines (MARS) [17, 18], and artificial neural networks (ANN) [19]. In particular, one of the most popular metamodeling techniques is the polynomial response surface, which was first proposed by Box and Wilson [20] and later discussed in detail by Myers, Montgomery, and Anderson-Cook [12]. Polynomials have great advantages in efficiency of model construction as well as transparency, which means the functional relationships between the input variables and the associated output variables can be easily obtained [21]. These polynomials can be used not only as local metamodels in response surface methodology (RSM) but also as global metamodels in weakly nonlinear simulation to approximate the global tendency. Therefore, they have been widely utilized in both academic research and engineering applications [22–25].
Generally speaking, the order of the selected polynomial has a significant influence on the number of sampling points as well as on the resulting accuracy (validity, adequacy). Namely, as the order of the polynomial increases, the polynomial response surface becomes more accurate in approximating highly nonlinear problems. However, the number of sampling points needs to increase sharply, which may be impractical for high-fidelity simulations. We should note that the mean squared error (MSE) may also increase.
This paper derives a novel method to obtain an accurate high-order polynomial while requiring fewer sampling points. The proposed approach is based on a two-stage procedure. The second stage modifies the low-order polynomial constructed in the first stage by utilizing its feedback. It is noted that the second stage does not require new sampling points.
The remaining sections of this paper are organized as follows. Firstly, we analyze the basic theory and characteristics of the polynomial response surface. Secondly, we introduce the modeling process of the improved polynomial response surface. Thirdly, we present the detailed scheme of the numerical experiments. Then, we analyze the results and discuss the performance of the proposed method. Finally, we conclude our paper with a summary and suggestions for future work.
2. Polynomial Response Surface
The polynomial response surface is mainly used to develop an approximate functional relationship between a number of input variables and an associated response. The relationship can usually be written as follows:
$$y = \hat{y}(\mathbf{x}) + \varepsilon = \mathbf{f}^{T}(\mathbf{x})\boldsymbol{\beta} + \varepsilon \tag{1}$$
where $y$ denotes the true response, $\mathbf{x} = (x_1, x_2, \ldots, x_m)^{T}$ denotes the vector of input variables, $\hat{y}(\mathbf{x})$ denotes the approximation response, $\varepsilon$ denotes a random error, $\boldsymbol{\beta}$ denotes a vector of $p$ constant coefficients, and $\mathbf{f}(\mathbf{x})$ denotes a polynomial basis-function vector with $p$ elements that consists of the constant term, terms of $x_1, x_2, \ldots, x_m$, as well as cross-products of these terms up to a certain order.
The key step of the polynomial response surface is to estimate $\boldsymbol{\beta}$ by using the least-squares method. In detail, an appropriate design of experiments (DOE) should first be chosen, and a series of $n$ ($n \geq p$) samples can therefore be obtained. The totality of these samples can be represented by a design matrix, which is denoted by $\mathbf{S}$ as follows:
$$\mathbf{S} = \left(\mathbf{x}^{(1)}, \mathbf{x}^{(2)}, \ldots, \mathbf{x}^{(n)}\right)^{T} \tag{2}$$
where $\mathbf{x}^{(i)}$ denotes the $i$th sampling point.
Second, the values of the polynomial basis-function vectors corresponding to all the sampling points should be calculated. The totality of these vectors can be represented by an $n \times p$ matrix, which is denoted by $\mathbf{F}$ as follows:
$$\mathbf{F} = \left(\mathbf{f}(\mathbf{x}^{(1)}), \mathbf{f}(\mathbf{x}^{(2)}), \ldots, \mathbf{f}(\mathbf{x}^{(n)})\right)^{T} \tag{3}$$
Third, the true response should be observed or measured for each sampling point by conducting numerical simulations or physical experiments. The totality of these responses can be represented by a vector, which is denoted by $\mathbf{y}$ as follows:
$$\mathbf{y} = \left(y^{(1)}, y^{(2)}, \ldots, y^{(n)}\right)^{T} \tag{4}$$
Then, from (1), we can have
$$y^{(i)} = \mathbf{f}^{T}(\mathbf{x}^{(i)})\boldsymbol{\beta} + \varepsilon^{(i)}, \quad i = 1, 2, \ldots, n \tag{5}$$
where $\varepsilon^{(i)}$ denotes the random error term at the $i$th sampling point. Equation (5) can be expressed in matrix form as follows:
$$\mathbf{y} = \mathbf{F}\boldsymbol{\beta} + \boldsymbol{\varepsilon} \tag{6}$$
where $\boldsymbol{\varepsilon} = \left(\varepsilon^{(1)}, \varepsilon^{(2)}, \ldots, \varepsilon^{(n)}\right)^{T}$.
Next, the sum of squared residuals (SSR) should be calculated:
$$SSR = \left(\mathbf{y} - \mathbf{F}\boldsymbol{\beta}\right)^{T}\left(\mathbf{y} - \mathbf{F}\boldsymbol{\beta}\right) \tag{7}$$
Besides, $SSR$ needs to be minimized; therefore the following equation must be satisfied:
$$\frac{\partial SSR}{\partial \boldsymbol{\beta}} = -2\mathbf{F}^{T}\left(\mathbf{y} - \mathbf{F}\boldsymbol{\beta}\right) = \mathbf{0} \tag{8}$$
Finally, $\boldsymbol{\beta}$ can be estimated from (8):
$$\hat{\boldsymbol{\beta}} = \left(\mathbf{F}^{T}\mathbf{F}\right)^{-1}\mathbf{F}^{T}\mathbf{y} \tag{9}$$
In this way, the polynomial response surface has been constructed.
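As an illustrative sketch of steps (2)–(9) — our own code, not the authors' — a second-order response surface can be fitted with a few lines of numpy. The helper names `basis_2nd` and `fit_polynomial` are assumptions, and `numpy.linalg.lstsq` is used in place of the explicit normal equations of (9) for numerical stability:

```python
import numpy as np
from itertools import combinations_with_replacement

def basis_2nd(x):
    # f(x): constant term, linear terms, and all second-order products
    x = np.asarray(x, dtype=float)
    quad = [x[i] * x[j] for i, j in combinations_with_replacement(range(len(x)), 2)]
    return np.concatenate(([1.0], x, quad))

def fit_polynomial(S, y, basis):
    # F stacks the basis vectors of all n sampling points (an n x p matrix);
    # lstsq solves the least-squares problem behind beta-hat = (F^T F)^-1 F^T y
    F = np.array([basis(s) for s in S])
    beta, *_ = np.linalg.lstsq(F, y, rcond=None)
    return beta

# sanity check: a response that is itself quadratic is recovered exactly
rng = np.random.default_rng(0)
S = rng.uniform(-1.0, 1.0, size=(30, 2))          # n = 30 >= p = 6 for m = 2
y = 1.0 + 2.0 * S[:, 0] + 3.0 * S[:, 0] * S[:, 1]
beta = fit_polynomial(S, y, basis_2nd)
yhat = basis_2nd([0.5, -0.2]) @ beta
```

Since the data are exactly quadratic and $n \geq p$, the fit reproduces the true response (here $\hat{y}(0.5, -0.2) = 1.7$) up to floating-point error.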
3. Improved Polynomial Response Surface
The first-order and second-order polynomial models are the two most popular polynomial response surfaces. Although polynomials of order higher than two can also be employed, the number of sampling points needs to increase sharply as the order increases. In order to overcome this difficulty, we propose an improved method. The core idea is to start with a low-order polynomial and refit it to obtain a high-order polynomial in a second, successive fitting by using the feedback of the initial simple fitting. No new sampling point is needed in the second fitting.
In detail, the improved method involves the following steps:
(1) Choose an appropriate DOE and generate a series of samples, namely, the design matrix $\mathbf{S}$. Further information about DOE can be found in a large number of references [12, 26–30].
(2) Conduct numerical simulations or physical experiments to observe or measure the true response vector $\mathbf{y}$ for all the sampling points obtained from step (1).
(3) Construct the initial low-order polynomial response surface $\hat{y}(\mathbf{x})$. The first-order and second-order polynomial models are selected in this paper and denoted by 1RS and 2RS, respectively.
(4) Choose an appropriate method to modify the initial polynomial response surface $\hat{y}(\mathbf{x})$ and construct the improved model $\hat{y}_I(\mathbf{x})$. The improved polynomial response surfaces corresponding to the initial models 1RS and 2RS are denoted by 1IRS and 2IRS, respectively.
(5) Use the improved model $\hat{y}_I(\mathbf{x})$ to predict the response.
3.1. Model Construction
This paper proposes a method to correct the initial low-order polynomial and construct the corresponding high-order polynomial. The method can be expressed as follows:
$$\hat{y}_I(\mathbf{x}) = \left(\mathbf{g}^{T}(\mathbf{x})\mathbf{b}\right)\hat{y}(\mathbf{x}) + \mathbf{g}^{T}(\mathbf{x})\mathbf{c} \tag{10}$$
where $\mathbf{b}$ and $\mathbf{c}$ are two vectors of constant coefficients, respectively, and $\mathbf{g}(\mathbf{x})$ is a first-order polynomial basis-function vector with $m+1$ elements. They can be written as follows:
$$\mathbf{b} = \left(b_0, b_1, \ldots, b_m\right)^{T}, \quad \mathbf{c} = \left(c_0, c_1, \ldots, c_m\right)^{T}, \quad \mathbf{g}(\mathbf{x}) = \left(1, x_1, \ldots, x_m\right)^{T} \tag{11}$$
The idea of the method is to treat the response of the low-order polynomial as feedback, multiply it by a linear regression model, and add another, different linear regression model. Essentially, a second-order polynomial can be obtained when applying the method to 1RS, and a third-order polynomial can be obtained when applying the method to 2RS. We can see that 2RS has $(m+1)(m+2)/2$ different coefficients and the correction method has $2(m+1)$ different coefficients. Therefore, at least $(m+1)(m+2)/2$ sampling points are needed to obtain a second-order polynomial with the traditional method, while at least $(m+1)(m+2)/2$ (when $m \geq 2$) sampling points are needed to obtain a third-order polynomial with the improved method. It is noted that at least $(m+1)(m+2)(m+3)/6$ sampling points are needed to obtain a third-order polynomial model with the traditional method. Obviously, the improved method can obtain a high-order polynomial with fewer sampling points.
We now begin to construct the improved model $\hat{y}_I(\mathbf{x})$. It should be noted at first that there is no need to generate new sampling points, because the design matrix $\mathbf{S}$ expressed in (2) as well as the true response vector $\mathbf{y}$ expressed in (4) can be used to construct not only the initial model $\hat{y}(\mathbf{x})$ but also the improved model $\hat{y}_I(\mathbf{x})$.
From (1) and (10), we can get
$$y^{(i)} = \hat{y}_I(\mathbf{x}^{(i)}) + e^{(i)} = \left(\mathbf{g}^{T}(\mathbf{x}^{(i)})\mathbf{b}\right)\hat{y}(\mathbf{x}^{(i)}) + \mathbf{g}^{T}(\mathbf{x}^{(i)})\mathbf{c} + e^{(i)}, \quad i = 1, 2, \ldots, n \tag{12}$$
where $\hat{y}_I(\mathbf{x}^{(i)})$ denotes the response of the improved model at the $i$th sampling point, $e^{(i)}$ denotes the error term of the improved model at the $i$th sampling point, and $\hat{y}(\mathbf{x}^{(i)})$ denotes the response of the initial model at the $i$th sampling point.
Equation (12) can be transformed as follows:
$$y^{(i)} = \mathbf{h}^{T}(\mathbf{x}^{(i)})\boldsymbol{\theta} + e^{(i)}, \quad i = 1, 2, \ldots, n \tag{13}$$
where
$$\mathbf{h}(\mathbf{x}) = \begin{pmatrix} \hat{y}(\mathbf{x})\,\mathbf{g}(\mathbf{x}) \\ \mathbf{g}(\mathbf{x}) \end{pmatrix}, \qquad \boldsymbol{\theta} = \begin{pmatrix} \mathbf{b} \\ \mathbf{c} \end{pmatrix} \tag{14}$$
The totality of these equations represented by (13) can also be expressed in matrix form as follows:
$$\mathbf{y} = \mathbf{H}\boldsymbol{\theta} + \mathbf{e} \tag{15}$$
where
$$\mathbf{H} = \left(\mathbf{h}(\mathbf{x}^{(1)}), \mathbf{h}(\mathbf{x}^{(2)}), \ldots, \mathbf{h}(\mathbf{x}^{(n)})\right)^{T}, \qquad \mathbf{e} = \left(e^{(1)}, e^{(2)}, \ldots, e^{(n)}\right)^{T} \tag{16}$$
The least-squares method is employed to estimate the values of $\boldsymbol{\theta}$. Similar to (9), we can get
$$\hat{\boldsymbol{\theta}} = \left(\mathbf{H}^{T}\mathbf{H}\right)^{-1}\mathbf{H}^{T}\mathbf{y} \tag{17}$$
In this way, the improved model $\hat{y}_I(\mathbf{x})$ has been constructed. It can be expressed as follows:
$$\hat{y}_I(\mathbf{x}) = \mathbf{h}^{T}(\mathbf{x})\hat{\boldsymbol{\theta}} = \left(\mathbf{g}^{T}(\mathbf{x})\hat{\mathbf{b}}\right)\hat{y}(\mathbf{x}) + \mathbf{g}^{T}(\mathbf{x})\hat{\mathbf{c}} \tag{18}$$
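The two-stage construction above can be sketched as follows, taking 1RS as the initial model. This is an illustrative implementation under our own naming (`basis_1st`, `fit_improved`), not the authors' code; note that the stage-2 regression reuses the stage-1 samples, so no new points are drawn:

```python
import numpy as np

def basis_1st(x):
    # g(x) = (1, x1, ..., xm)^T, as in eq. (11)
    return np.concatenate(([1.0], np.asarray(x, dtype=float)))

def fit_improved(S, y):
    # Stage 1: ordinary least-squares fit of the initial low-order model (1RS)
    F = np.array([basis_1st(s) for s in S])
    beta, *_ = np.linalg.lstsq(F, y, rcond=None)
    yhat0 = F @ beta                        # initial-model responses at the samples
    # Stage 2: regressors h = (yhat0 * g^T, g^T)^T from eqs. (13)-(16)
    H = np.hstack([yhat0[:, None] * F, F])
    theta, *_ = np.linalg.lstsq(H, y, rcond=None)
    b, c = theta[:F.shape[1]], theta[F.shape[1]:]
    def predict(x):                         # eq. (18)
        g = basis_1st(x)
        return (g @ b) * (g @ beta) + g @ c
    return predict, (lambda x: basis_1st(x) @ beta)

rng = np.random.default_rng(1)
S = rng.uniform(-1.0, 1.0, size=(40, 2))
y = 2.0 + S[:, 0] * S[:, 1] - S[:, 1] ** 2  # a weakly nonlinear test response
predict_irs, predict_rs = fit_improved(S, y)
rmse_irs = np.sqrt(np.mean([(predict_irs(s) - t) ** 2 for s, t in zip(S, y)]))
rmse_rs = np.sqrt(np.mean([(predict_rs(s) - t) ** 2 for s, t in zip(S, y)]))
```

Because setting $\mathbf{b} = (1, 0, \ldots, 0)^{T}$ and $\mathbf{c} = \mathbf{0}$ in (10) reproduces the initial model, the stage-2 least-squares refit can never increase the training residual.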
4. Numerical Experiments
4.1. Benchmark Problems for Global Performance
To test the global performance of the proposed method, we employ nine benchmark problems which are often used in relevant literature. The dimensions of these problems range from 2 to 20.
(1) 2-Variable Goldstein-Price Function. This 2-variable benchmark problem is taken from Acar [31]. It can be written as
$$f(\mathbf{x}) = \left[1 + (x_1 + x_2 + 1)^2\left(19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1x_2 + 3x_2^2\right)\right]\left[30 + (2x_1 - 3x_2)^2\left(18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1x_2 + 27x_2^2\right)\right]$$
where $x_1 \in [-2, 2]$ and $x_2 \in [-2, 2]$.
(2) 2-Variable Branin-Hoo Function. This 2-variable benchmark problem is taken from Acar [31]. It can be written as
$$f(\mathbf{x}) = \left(x_2 - \frac{5.1x_1^2}{4\pi^2} + \frac{5x_1}{\pi} - 6\right)^2 + 10\left(1 - \frac{1}{8\pi}\right)\cos(x_1) + 10$$
where $x_1 \in [-5, 10]$ and $x_2 \in [0, 15]$.
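As a sanity check on this definition, the function is easy to implement; the check against the known global minimum value $10/(8\pi) \approx 0.3979$ at $(\pi, 2.275)$ is our own addition, not from the paper:

```python
import numpy as np

def branin_hoo(x1, x2):
    # Branin-Hoo test function; x1 in [-5, 10], x2 in [0, 15]
    return ((x2 - 5.1 * x1 ** 2 / (4 * np.pi ** 2) + 5 * x1 / np.pi - 6) ** 2
            + 10 * (1 - 1 / (8 * np.pi)) * np.cos(x1) + 10)
```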
(3) 3-Variable Perm Function. This 3-variable benchmark problem is taken from a website (http://www.sfu.ca/~ssurjano///perm0db.html). It can be written as
$$f(\mathbf{x}) = \sum_{i=1}^{3}\left(\sum_{j=1}^{3}\left(j + \beta\right)\left(x_j^{\,i} - \frac{1}{j^{i}}\right)\right)^{2}$$
where $x_j \in [-3, 3]$ for all $j = 1, 2, 3$ (the cited page uses $\beta = 10$ as the default parameter).
(4) 3-Variable Cubic-Polynomial Function. This 3-variable benchmark problem is a common third-order polynomial in $x_1$, $x_2$, and $x_3$.
(5) 4-Variable Power-Sum Function. This benchmark problem is taken from a website (http://www.sfu.ca/~ssurjano///powersum.html). It can be written as
$$f(\mathbf{x}) = \sum_{i=1}^{d}\left[\left(\sum_{j=1}^{d}x_j^{\,i}\right) - b_i\right]^{2}$$
where $x_j \in [0, d]$ for all $j = 1, \ldots, d$. The 4-variable model ($d = 4$) of this problem is considered, and the function parameters are $\mathbf{b} = (8, 18, 44, 114)^{T}$.
(6) 4-Variable Hartmann Function. This 4-variable benchmark problem is taken from a website (http://www.sfu.ca/~ssurjano///hart4.html). It can be written as
$$f(\mathbf{x}) = \frac{1}{0.839}\left[1.1 - \sum_{i=1}^{4}\alpha_i\exp\left(-\sum_{j=1}^{4}A_{ij}\left(x_j - P_{ij}\right)^{2}\right)\right]$$
where $x_j \in (0, 1)$ for all $j = 1, \ldots, 4$, $\boldsymbol{\alpha} = (1.0, 1.2, 3.0, 3.2)^{T}$, and $\mathbf{A}$ and $\mathbf{P}$ are expressed as follows:
$$\mathbf{A} = \begin{pmatrix} 10 & 3 & 17 & 3.5 \\ 0.05 & 10 & 17 & 0.1 \\ 3 & 3.5 & 1.7 & 10 \\ 17 & 8 & 0.05 & 10 \end{pmatrix}, \qquad \mathbf{P} = 10^{-4}\begin{pmatrix} 1312 & 1696 & 5569 & 124 \\ 2329 & 4135 & 8307 & 3736 \\ 2348 & 1451 & 3522 & 2883 \\ 4047 & 8828 & 8732 & 5743 \end{pmatrix}$$
(7) 10-Variable Zakharov Function. This benchmark problem is taken from a website (http://www.sfu.ca/~ssurjano///zakharov.html). It can be written as
$$f(\mathbf{x}) = \sum_{i=1}^{d}x_i^{2} + \left(\sum_{i=1}^{d}0.5\,i\,x_i\right)^{2} + \left(\sum_{i=1}^{d}0.5\,i\,x_i\right)^{4}$$
where $x_i \in [-5, 10]$ for all $i = 1, \ldots, d$. The 10-variable model ($d = 10$) of this problem is considered.
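This function is straightforward to implement for any dimension (our own sketch, used again in Section 5.3 where ZH10 serves as the example problem):

```python
import numpy as np

def zakharov(x):
    # Zakharov function; x_i in [-5, 10], arbitrary dimension d
    x = np.asarray(x, dtype=float)
    s = np.sum(0.5 * np.arange(1, x.size + 1) * x)
    return float(np.sum(x ** 2) + s ** 2 + s ** 4)
```

The global minimum is $f(\mathbf{0}) = 0$.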
(8) 15-Variable Dixon-Price Function. This benchmark problem is taken from Acar [32]. It can be written as
$$f(\mathbf{x}) = \left(x_1 - 1\right)^{2} + \sum_{i=2}^{d}i\left(2x_i^{2} - x_{i-1}\right)^{2}$$
where $x_i \in [-10, 10]$ for all $i = 1, \ldots, d$. The 15-variable model ($d = 15$) of this problem is considered.
(9) 20-Variable Welch et al. (1992) Function. This 20-variable benchmark problem is taken from a website (http://www.sfu.ca/~ssurjano///welchetal92.html). It can be written as
$$f(\mathbf{x}) = \frac{5x_{12}}{1 + x_1} + 5\left(x_4 - x_{20}\right)^{2} + x_5 + 40x_{19}^{3} - 5x_{19} + 0.05x_2 + 0.08x_3 - 0.03x_6 + 0.03x_7 - 0.09x_9 - 0.01x_{10} - 0.07x_{11} + 0.25x_{13}^{2} - 0.04x_{14} + 0.06x_{15} - 0.01x_{17} - 0.03x_{18}$$
where $x_i \in [-0.5, 0.5]$ for all $i = 1, \ldots, 20$.
To facilitate the description, we use short labels for these benchmark problems. In detail, the 2-variable Goldstein-Price function is denoted by GP2; the 2-variable Branin-Hoo function by BH2; the 3-variable Perm function by PM3; the 3-variable Cubic-Polynomial function by CP3; the 4-variable Power-Sum function by PS4; the 4-variable Hartmann function by HM4; the 10-variable Zakharov function by ZH10; the 15-variable Dixon-Price function by DP15; and the 20-variable Welch et al. (1992) function by WE20.
4.2. Benchmark Problems for Local Performance
To test the local performance of the proposed method, we employ three benchmark problems, which are polynomials of second, third, and fourth order.
(1) Quadratic-Polynomial Function. This benchmark problem is a second-order polynomial in $x_1$, $x_2$, and $x_3$. It is denoted by QP3.
(2) Cubic-Polynomial Function. This benchmark problem is the same as CP3, which is described in Section 4.1.
(3) Fourth-Order-Polynomial Function. This benchmark problem is a fourth-order polynomial in $x_1$, $x_2$, and $x_3$. It is denoted by FP3.
4.3. Numerical Procedure
When low-order polynomials are used for local metamodels in response surface methodology (RSM), resolution-III (R-III) designs, central composite designs (CCDs), and Box-Behnken designs are considered the most suitable DOE techniques [10]. When low-order polynomials are used for global metamodels in weakly nonlinear simulation to approximate the global tendency, the Latin hypercube sampling (LHS) technique is one of the most popular choices in both scientific research and engineering problems [21, 31, 33, 34]. A maximin LHS maximizes the minimum distance between the sampling points to obtain uniform designs; moreover, its projections onto each variable axis give uniform points. This paper first discusses the global performance of the improved method. Therefore, we first utilize the MATLAB (2011) routine "lhsdesign" with the "maximin" criterion to select the locations of the sampling points for all the benchmark problems discussed above.
The performance of metamodels may vary from DOE to DOE. To reduce the random effect, we select 1000 training sets and 1000 corresponding test sets for each benchmark problem. In particular, for a specified training set, the number of points is chosen as triple the number of coefficients in a second-order polynomial model (namely, $3(m+1)(m+2)/2$, where $m$ denotes the dimension of the benchmark problem), following Jin, Chen, and Simpson [21]. Meanwhile, 1000 test points are selected for a specified test set by using LHS. The performance of the metamodels for each benchmark problem is estimated by using the mean over the 1000 replicates.
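The sampling plan can be reproduced approximately in open-source tools. The sketch below uses scipy's `qmc` module as a stand-in for MATLAB's "lhsdesign" — an assumption on our part, and note that scipy's `LatinHypercube` does not apply the maximin criterion by default:

```python
from scipy.stats import qmc

m = 10                                   # dimension (e.g., ZH10)
n_train = 3 * (m + 1) * (m + 2) // 2     # triple the number of 2RS coefficients
sampler = qmc.LatinHypercube(d=m, seed=0)
unit = sampler.random(n_train)           # stratified points in the unit hypercube
# scale to the Zakharov design space [-5, 10]^m
S = qmc.scale(unit, l_bounds=[-5.0] * m, u_bounds=[10.0] * m)
```

For $m = 10$ this yields $3 \times 11 \times 12 / 2 = 198$ training points per set.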
In summary, the detailed information about the training and test data used for each benchmark problem is listed in Table 1.

5. Results and Discussion
5.1. Global Performance of the Improved Method
Reviewing the relevant literature [35–40], we select the root mean squared error at test points ($RMSE_{test}$) as the validation metric to measure the performance of the improved method on the benchmark problems. The definition of $RMSE_{test}$ is expressed as follows:
$$RMSE_{test} = \sqrt{\frac{1}{n_t}\sum_{i=1}^{n_t}\left(y^{(i)} - \hat{y}^{(i)}\right)^{2}}$$
where $n_t$ denotes the number of test points.
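A direct implementation of this metric (our own helper, assuming array-like vectors of true and predicted responses) is:

```python
import numpy as np

def rmse(y_true, y_pred):
    # root mean squared error over the n_t test (or n training) points
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
```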
Our main concern is the comparison between each traditional model and its corresponding improved model, namely, between 1RS and 1IRS and between 2RS and 2IRS. Although we do not think it necessary to compare 1RS with 2RS or 1IRS with 2RS, we still note that 2RS may have better performance than 1RS for all the benchmark problems, while the performance of 1IRS may be close to that of 2RS in some particular problems.
The $RMSE_{test}$ of the traditional models (1RS and 2RS) and their corresponding improved models (1IRS and 2IRS) for the different benchmark problems is shown in Figure 1. The boxplot provides a graphical depiction of how the prediction errors vary over the 1000 training sets and test sets. Each plot is composed of a box, an upper limit line with whiskers, a lower limit line with whiskers, and outliers. The box includes a top edge line representing the 75th percentile, an interior line representing the median, and a bottom edge line representing the 25th percentile. The upper/lower limit line extends from the top/bottom edge of the box to the most extreme data points that are not considered outliers. The outliers are data with values beyond the limit lines and are represented by the "+" symbols.
From Figure 1 we can see the following. (1) Considering the median value over the 1000 test sets, 1IRS performs better than 1RS for seven benchmark problems. For the other two problems (HM4 and DP15), 1IRS has results similar to 1RS; it is noted that in this paper two results are called similar if their relative difference lies between −1% and 1%. (2) 2IRS has better accuracy than 2RS for seven benchmark problems. For the other two problems (DP15 and WE20), 2IRS has results similar to 2RS. (3) 2RS performs better than 1RS for all the benchmark problems, yet the accuracy of 1IRS is close to that of 2RS for GP2 and PM3. Particularly, the accuracy of 1IRS is even better than that of 2RS for CP3 and ZH10. (4) For BH2, PM3, and HM4, 1IRS has larger outliers (or longer tails) than 1RS, and 2IRS has larger outliers than 2RS. Therefore, the improved models have larger variance than the traditional models. The detailed mean and COV (coefficient of variation) values of $RMSE_{test}$ over the 1000 test sets are shown in Appendix A.
Why Does 2IRS Perform Better Than 2RS and 1IRS Perform Better Than 1RS? We think the reason is that the improved method can obtain highly nonlinear terms with fewer sampling points than the traditional method. For example, when the number of sampling points is less than $(m+1)(m+2)(m+3)/6$, it is impossible to obtain a third-order polynomial with the traditional method. However, for the improved method, a third-order polynomial can be obtained as long as the number of sampling points is no less than $(m+1)(m+2)/2$ (when $m \geq 2$). We should note that $(m+1)(m+2)/2$ is less than $(m+1)(m+2)(m+3)/6$ for all $m \geq 1$.
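The sampling-point counts behind this argument are easy to verify. The sketch below uses the standard binomial-coefficient formula $\binom{m+k}{k}$ for the number of coefficients of a full order-$k$ polynomial in $m$ variables; the helper names are our own:

```python
from math import comb

def n_coeffs(m, order):
    # number of coefficients in a full polynomial of the given order in m variables
    return comb(m + order, order)

def min_points_improved_third(m):
    # improved method: enough points for the 2RS stage ((m+1)(m+2)/2 coefficients)
    # and for the 2(m+1)-coefficient correction stage, whichever is larger
    return max(n_coeffs(m, 2), 2 * (m + 1))

# minimal points for a third-order model: traditional vs. improved
counts = {m: (n_coeffs(m, 3), min_points_improved_third(m)) for m in (2, 4, 10)}
```

For $m = 10$ the traditional route needs $\binom{13}{3} = 286$ points, while the improved route needs only $\binom{12}{2} = 66$.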
5.2. Effect of Validation Metrics
The choice of validation metric may influence the results. To reduce this source of uncertainty as much as possible, we select another four commonly used validation metrics: the root mean squared error at training points ($RMSE_{train}$), the correlation coefficient at test points ($R_{test}$), the average absolute error at test points ($AAE_{test}$), and the maximum absolute error at test points ($MAE_{test}$). The detailed definitions and results of these metrics are given in Appendices B–E. Here we gather all five validation metrics for all nine benchmark problems and present them in Table 2.

From Table 2 we can see the following. (1) For GP2, BH2, PM3, PS4, and ZH10, all five metrics show that 1IRS is more accurate than 1RS and 2IRS is more accurate than 2RS. (2) For CP3, all five metrics show that 1IRS is more accurate than 1RS; meanwhile, four metrics indicate that 2IRS is more accurate than 2RS. (3) For HM4, two metrics show that 1RS is more accurate than 1IRS; meanwhile, just one metric indicates that 2RS is more accurate than 2IRS. (4) For DP15, only one metric shows that 1RS is more accurate than 1IRS; meanwhile, just one metric indicates that 2RS is more accurate than 2IRS. (5) For WE20, all five metrics show that 1IRS is more accurate than 1RS; meanwhile, all five metrics show that 2IRS has accuracy similar to 2RS. (6) Among all forty-five examples, thirty-nine show that 1IRS is more accurate than 1RS and three show that 1IRS has accuracy similar to 1RS; meanwhile, thirty-four examples show that 2IRS is more accurate than 2RS, and nine show that 2IRS has accuracy similar to 2RS.
In summary, the choice of validation metric can slightly influence the results, but the conclusions obtained with the five metrics remain unchanged: the improved method performs better than the traditional method. In particular, since the least-squares method is used to estimate the polynomial coefficients, the most relevant metric is the root mean squared error, and the test-set version ($RMSE_{test}$) is more relevant than the training-set version ($RMSE_{train}$).
5.3. Effect of the Number of Sampling Points
The accuracy of metamodels may depend on the DOE and vary from DOE to DOE. To reduce the random effect caused by the DOE, we have selected 1000 different training sets for each benchmark problem. However, for each training set, the number of sampling points remains unchanged. Therefore, we still need to examine the effect of the number of sampling points on the performance of the improved method. Considering the length of our paper, we just select ZH10 as the example problem, choose $RMSE_{test}$, $R_{test}$, and $AAE_{test}$ as validation metrics, and make the comparison between 2RS and 2IRS.
Figure 2 shows the results as the number of sampling points varies. From it we can draw the following findings: (1) As the number of sampling points increases, both $RMSE_{test}$ and $AAE_{test}$ of 2RS and 2IRS become smaller and smaller. That is to say, the accuracy of 2RS and 2IRS increases continuously with the number of sampling points. (2) $RMSE_{test}$ and $AAE_{test}$ of 2IRS are always smaller than those of 2RS; meanwhile, $R_{test}$ of 2IRS is always bigger than that of 2RS. That is to say, 2IRS shows an obvious accuracy improvement over 2RS, even as the number of sampling points varies.
5.4. Significance of Results
The results above have proven the effectiveness of the improved method to some extent. In order for the method to be better used by engineers, we compare its performance with some other popular metamodels: Kriging with a first-order polynomial regression function (KRG1), Kriging with a second-order polynomial regression function (KRG2), radial basis functions with Gaussian-form basis (RBFG), and radial basis functions with multiquadric-form basis (RBFM). Considering that Jin, Chen, and Simpson [21] concluded that 2RS has special advantages in efficiency and transparency and some disadvantages in accuracy, we mainly focus on the accuracy improvement of the proposed method.
Figure 3 shows $RMSE_{test}$ of 2RS, 2IRS, KRG1, KRG2, RBFG, and RBFM for the different benchmark problems. The detailed results are given in Appendix F. Considering the frequency of the accuracy rankings of all metamodels over the nine benchmark problems, we can see the following. (1) The frequency with which 2RS ranks 1st or 2nd is only one, while that of 2IRS is three, that of KRG1 is two, that of KRG2 is eight, that of RBFG is one, and that of RBFM is three. (2) The frequency with which 2RS ranks 5th or 6th is five, while that of 2IRS is one, that of KRG1 is three, that of KRG2 is zero, that of RBFG is eight, and that of RBFM is one. (3) Obviously, 2RS performs worse than KRG1, yet 2IRS performs better than KRG1. (4) 2RS performs worse than RBFM, yet 2IRS has performance similar to RBFM. (5) Both 2RS and 2IRS perform better than RBFG. (6) 2RS performs worse than KRG2 for all nine benchmark problems, yet 2IRS performs better than KRG2 for CP3 and ZH10.
In summary, compared to the traditional method, the improved method retains the advantages in efficiency and transparency while providing a significant accuracy improvement.
5.5. Local Performance of the Improved Method
All the results above mainly test the global performance of the improved method; therefore, LHS was utilized to generate the sampling points. To test the performance of the improved method when used for local metamodels in RSM, we use simple benchmark problems (i.e., QP3, CP3, and FP3) and select CCDs to generate the corresponding training points. CCDs are considered among the most suitable DOE techniques in response surface methodology [10]. Considering the length of our paper, we again restrict attention to three validation metrics.
Figure 4 shows the results of the traditional models (1RS and 2RS) and their corresponding improved models (1IRS and 2IRS) for QP3, CP3, and FP3. From it we can see the following. (1) For QP3, 1IRS performs better than 1RS for all the validation metrics. Particularly, the errors of 1IRS, 2RS, and 2IRS are zero, because the function QP3 can be fitted exactly by these three models. (2) For CP3 and FP3, 1IRS performs better than 1RS for all the validation metrics, and 2IRS likewise performs better than 2RS.
In summary, when the polynomials are used for local metamodels in RSM, the improved method still performs better than the traditional method.
6. Conclusions
In this paper, we proposed a new method to obtain an accurate high-order polynomial while requiring fewer sampling points. The core idea of the method is to start with a low-order polynomial and refit it to obtain a high-order polynomial in a second, successive fitting by using the feedback of the initial simple fitting.
To test the global performance of the improved method, we employed nine example problems which are widely used as benchmark problems in relevant literature. As expected, the accuracy of the improved method is better than that of the traditional method. Analyzing the principle, we think the reason for the better performance of the improved method is that it can obtain highly nonlinear terms with fewer sampling points when compared with the traditional method.
To obtain general conclusions, we investigated the effects of validation metrics and the number of sampling points on the performance of the improved method. We found that the choice of the validation metrics and the number of sampling points can slightly influence the results, but the conclusions remain unchanged.
In order for the improved method to be better used, we compared its performance with KRG1, KRG2, RBFG, and RBFM. The results showed that the improved method retains the advantages in efficiency and transparency and possesses significant accuracy improvement when compared with the traditional polynomial response surface.
Moreover, we researched the performance of the improved method used for local metamodels in RSM. The results showed that the proposed method still performs better than the traditional method.
However, there is no single outstanding metamodel which works best for all tasks. Therefore, finding more accurate metamodels is still our future work.
Appendix
A.
Table 3 shows the mean and COV values of $RMSE_{test}$ for the traditional models (1RS and 2RS) and their corresponding improved models (1IRS and 2IRS) over the 1000 test sets for the nine benchmark problems. The values inside parentheses are the COV values of $RMSE_{test}$.

B.
The definition of $RMSE_{train}$ is expressed as follows:
$$RMSE_{train} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y^{(i)} - \hat{y}^{(i)}\right)^{2}}$$
where $n$ denotes the number of training points.
Table 4 shows the mean values of $RMSE_{train}$ for the traditional models (1RS and 2RS) and their corresponding improved models (1IRS and 2IRS) over the 1000 training sets for the nine benchmark problems.

C.
The definition of $R_{test}$ is expressed as follows:
$$R_{test} = \frac{\sum_{i=1}^{n_t}\left(y^{(i)} - \bar{y}\right)\left(\hat{y}^{(i)} - \bar{\hat{y}}\right)}{\sqrt{\sum_{i=1}^{n_t}\left(y^{(i)} - \bar{y}\right)^{2}}\sqrt{\sum_{i=1}^{n_t}\left(\hat{y}^{(i)} - \bar{\hat{y}}\right)^{2}}}$$
where
$$\bar{y} = \frac{1}{n_t}\sum_{i=1}^{n_t}y^{(i)}, \qquad \bar{\hat{y}} = \frac{1}{n_t}\sum_{i=1}^{n_t}\hat{y}^{(i)}$$
Table 5 shows the mean values of $R_{test}$ for the traditional models (1RS and 2RS) and their corresponding improved models (1IRS and 2IRS) over the 1000 test sets for the nine benchmark problems.

D.
The definition of $AAE_{test}$ is expressed as follows:
$$AAE_{test} = \frac{1}{n_t}\sum_{i=1}^{n_t}\left|y^{(i)} - \hat{y}^{(i)}\right|$$
Table 6 shows the mean values of $AAE_{test}$ for the traditional models (1RS and 2RS) and their corresponding improved models (1IRS and 2IRS) over the 1000 test sets for the nine benchmark problems.

E.
The definition of $MAE_{test}$ is expressed as follows:
$$MAE_{test} = \max_{1 \leq i \leq n_t}\left|y^{(i)} - \hat{y}^{(i)}\right|$$
Table 7 shows the mean values of $MAE_{test}$ for the traditional models (1RS and 2RS) and their corresponding improved models (1IRS and 2IRS) over the 1000 test sets for the nine benchmark problems.

F. Significance
Table 8 shows the mean values of $RMSE_{test}$ of 2RS, 2IRS, KRG1, KRG2, RBFG, and RBFM over the 1000 test sets for the nine benchmark problems.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Authors’ Contributions
Cheng Yan and Xiuli Shen conceived and designed the experiments; Fushui Guo performed the experiments; Cheng Yan analyzed the data; Fushui Guo contributed analysis tools; Cheng Yan wrote the paper.
References
 T. Simpson, A. Booker, D. Ghosh, A. Giunta, P. Koch, and R. Yang, “Approximation methods in multidisciplinary analysis and optimization: a panel discussion,” Structural and Multidisciplinary Optimization, vol. 27, no. 5, 2004. View at: Publisher Site  Google Scholar
 T. Hemker, K. R. Fowler, M. . Farthing, and O. von Stryk, “A mixedinteger simulationbased optimization approach with surrogate functions in water resources management,” Optimization and Engineering. International Multidisciplinary Journal to Promote Optimization Theory & Applications in Engineering Sciences, vol. 9, no. 4, pp. 341–360, 2008. View at: Publisher Site  Google Scholar  MathSciNet
 D. Kavetski and G. Kuczera, “Model smoothing strategies to remove microscale discontinuities and spurious secondary optima im objective functions in hydrological calibration,” Water Resources Research, vol. 43, no. 3, 2007. View at: Google Scholar
 A. Forrester, A. Sobester, and A. Keane, Engineering design via surrogate modelling: a practical guide, Wiley, New York, USA, 2008.
 J. Doherty and S. Christensen, “Use of paired simple and complex models to reduce predictive bias and quantify uncertainty,” Water Resources Research, vol. 47, no. 12, 2011. View at: Publisher Site  Google Scholar
 J. Umakant, K. Sudhakar, P. M. Mujumdar, and C. R. Rao, “Customized regression model for improving low fidelity analysis tool,” in Proceedings of the 11th AIAA/ISSMO Multidisciplinary Analysis and Optimaztion Conference, pp. 2470–2482, USA, September 2006. View at: Google Scholar
 F. A. C. Viana, T. W. Simpson, V. Balabanov, and V. Toropov, “Metamodeling in multidisciplinary design optimization: how far have we really come?” AIAA Journal, vol. 52, no. 4, pp. 670–690, 2014. View at: Publisher Site  Google Scholar
 A. I. J. Forrester and A. J. Keane, “Recent advances in surrogatebased optimization,” Progress in Aerospace Sciences, vol. 45, no. 1–3, pp. 50–79, 2009. View at: Publisher Site  Google Scholar
 J. P. C. Kleijnen and R. G. Sargent, “A methodology for fitting and validating metamodels in simulation,” European Journal of Operational Research, vol. 120, no. 1, pp. 14–29, 2000. View at: Publisher Site  Google Scholar
 J. P. Kleijnen, Design and analysis of simulation experiments, vol. 230 of International Series in Operations Research & Management Science, Springer, Cham, Second edition, 2015. View at: Publisher Site  MathSciNet
 K. T. Fang, R. Li, and A. Sudjianto, Design and Modeling for Computer Experiments, CRC Press, 2005.
 R. H. Myers and D. C. Montgomery, Response Surface Methodology, John Wiley & Sons, Inc., New York, NY, USA, 1995. View at: MathSciNet
 D. J. Toal and A. J. Keane, “Nonstationary kriging for design optimization,” Engineering Optimization, vol. 44, no. 6, pp. 741–765, 2012. View at: Publisher Site  Google Scholar  MathSciNet
 J. P. C. Kleijnen, “Kriging metamodeling in simulation: a review,” European Journal of Operational Research, vol. 192, no. 3, pp. 707–716, 2009. View at: Publisher Site  Google Scholar  MathSciNet
 S. M. Clarke, J. H. Griebsch, and T. W. Simpson, “Analysis of support vector regression for approximation of complex engineering analyses,” Journal of Mechanical Design, vol. 127, no. 6, pp. 1077–1087, 2005. View at: Publisher Site  Google Scholar
 C. Yan, X. Shen, and F. Guo, “An improved support vector regression using least squares method,” Structural and Multidisciplinary Optimization, vol. 57, no. 6, pp. 2431–2445, 2018. View at: Publisher Site  Google Scholar  MathSciNet
 S. Crino and D. E. Brown, “Global optimization with multivariate adaptive regression splines,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 37, no. 2, pp. 333–340, 2007. View at: Publisher Site  Google Scholar
 Kooperberg and Charles, “Multivariate adaptive regression splines,” The Annals of Statistics, vol. 19, no. 1, pp. 1–67, 1991. View at: Google Scholar
 J. Eason and S. Cremaschi, “Adaptive sequential sampling for surrogate model generation with artificial neural networks,” Computers & Chemical Engineering, vol. 68, pp. 220–232, 2014.
 G. E. P. Box and K. B. Wilson, “On the experimental attainment of optimum conditions,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 13, no. 1, pp. 1–45, 1951.
 R. Jin, W. Chen, and T. Simpson, “Comparative studies of metamodeling techniques under multiple modeling criteria,” in Proceedings of the 8th Symposium on Multidisciplinary Analysis and Optimization, Long Beach, CA, USA.
 C. González-Fernández, B. Molinuevo-Salces, and M. C. García-González, “Evaluation of anaerobic co-digestion of microalgal biomass and swine manure via response surface methodology,” Applied Energy, vol. 88, no. 10, pp. 3448–3453, 2011.
 W. Yongjiang, C. Zhong, M. Jianwei, F. Minger, and W. Xueqian, “Optimization of ultrasonic-assisted extraction process of Poria cocos polysaccharides by response surface methodology,” Carbohydrate Polymers, vol. 77, no. 4, pp. 713–717, 2009.
 J. Zhang, D. Fu, Y. Xu, and C. Liu, “Optimization of parameters on photocatalytic degradation of chloramphenicol using TiO2 as photocatalyst by response surface methodology,” Journal of Environmental Sciences, vol. 22, no. 8, pp. 1281–1289, 2010.
 G. G. Wang and S. Shan, “Review of metamodeling techniques in support of engineering design optimization,” Journal of Mechanical Design, vol. 129, no. 4, pp. 370–380, 2007.
 T. J. Santner, B. J. Williams, and W. I. Notz, The Design and Analysis of Computer Experiments, Springer, New York, NY, USA, 2003.
 V. R. Joseph and Y. Hung, “Orthogonal-maximin Latin hypercube designs,” Statistica Sinica, vol. 18, no. 1, pp. 171–186, 2008.
 J.-S. Park, “Optimal Latin-hypercube designs for computer experiments,” Journal of Statistical Planning and Inference, vol. 39, no. 1, pp. 95–111, 1994.
 J. P. C. Kleijnen, S. M. Sanchez, T. W. Lucas, and T. M. Cioppa, “A user's guide to the brave new world of designing simulation experiments,” INFORMS Journal on Computing, vol. 17, no. 3, 2005.
 J. P. C. Kleijnen, “An overview of the design and analysis of simulation experiments for sensitivity analysis,” European Journal of Operational Research, vol. 164, no. 2, pp. 287–300, 2005.
 E. Acar, “Various approaches for constructing an ensemble of metamodels using local measures,” Structural and Multidisciplinary Optimization, vol. 42, no. 6, pp. 879–896, 2010.
 E. Acar, “Simultaneous optimization of shape parameters and weight factors in ensemble of radial basis functions,” Structural and Multidisciplinary Optimization, vol. 49, no. 6, pp. 969–978, 2014.
 X. Zhou and T. Jiang, “Metamodel selection based on stepwise regression,” Structural and Multidisciplinary Optimization, vol. 54, no. 3, pp. 641–657, 2016.
 T. Goel, R. T. Haftka, W. Shyy, and N. V. Queipo, “Ensemble of surrogates,” Structural and Multidisciplinary Optimization, vol. 33, no. 3, pp. 199–216, 2007.
 D. J. Toal and A. J. Keane, “Performance of an ensemble of ordinary, universal, non-stationary and limit Kriging predictors,” Structural and Multidisciplinary Optimization, vol. 47, no. 6, pp. 893–903, 2013.
 X. Zhou, Y. Ma, Y. Tu, and Y. Feng, “Ensemble of surrogates for dual response surface modeling in robust parameter design,” Quality and Reliability Engineering International, vol. 29, no. 2, pp. 173–197, 2013.
 X. J. Zhou, Y. Z. Ma, and X. F. Li, “Ensemble of surrogates with recursive arithmetic average,” Structural and Multidisciplinary Optimization, vol. 44, no. 5, pp. 651–671, 2011.
 E. Acar and M. Rais-Rohani, “Ensemble of metamodels with optimized weight factors,” Structural and Multidisciplinary Optimization, vol. 37, no. 3, pp. 279–294, 2009.
 E. Acar, “Effect of error metrics on optimum weight factor selection for ensemble of metamodels,” Expert Systems with Applications, vol. 42, no. 5, pp. 2703–2709, 2015.
 T. Goel, R. T. Haftka, and W. Shyy, “Comparing error estimation measures for polynomial and kriging approximation of noise-free functions,” Structural and Multidisciplinary Optimization, vol. 38, no. 5, pp. 429–442, 2009.
Copyright
Copyright © 2018 Cheng Yan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.