Mathematical Problems in Engineering
Volume 2015, Article ID 162712, 15 pages
http://dx.doi.org/10.1155/2015/162712
Research Article

A Study on Many-Objective Optimization Using the Kriging-Surrogate-Based Evolutionary Algorithm Maximizing Expected Hypervolume Improvement

Institute of Fluid Science, Tohoku University, Sendai 980-8577, Japan

Received 25 August 2014; Revised 13 January 2015; Accepted 13 January 2015

Academic Editor: Yudong Zhang

Copyright © 2015 Chang Luo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The many-objective optimization performance of the Kriging-surrogate-based evolutionary algorithm (EA), which maximizes expected hypervolume improvement (EHVI) for updating the Kriging model, is investigated and compared with those using expected improvement (EI) and estimation (EST) updating criteria in this paper. Numerical experiments are conducted in 3- to 15-objective DTLZ1-7 problems. In the experiments, an exact hypervolume calculating algorithm is used for the problems with less than six objectives. On the other hand, an approximate hypervolume calculating algorithm based on Monte Carlo sampling is adopted for the problems with more objectives. The results indicate that, in the nonconstrained case, EHVI is a highly competitive updating criterion for the Kriging model and EA based many-objective optimization, especially when the test problem is complex and the number of objectives or design variables is large.

1. Introduction

Evolutionary algorithms (EAs) are a class of metaheuristics inspired by the process of natural evolution, and they have been successfully applied to optimization problems with two or three objectives over the past two decades [1]. Recently, EAs have gradually been extended to deal with many-objective problems, which involve four or more objectives. Garza-Fabre et al. [2] combined three novel fitness assignment methods with a generic multiobjective evolutionary algorithm (MOEA) and conducted numerical experiments on the 5- to 50-objective DTLZ1, DTLZ3, and DTLZ6 problems [3]. The three proposed methods were found to be effective in guiding the search in high-dimensional objective spaces. Adra and Fleming [4] investigated two new diversity management mechanisms for evolutionary many-objective optimization and tested them on the 6- to 20-objective DTLZ2 problems. The results indicated that including one of the mechanisms improved the convergence and diversity performance of existing MOEAs in solving many-objective optimization problems. Hadka and Reed [5] proposed the Borg MOEA for many-objective and multimodal optimization using the ε-dominance concept and an adaptive population sizing approach. A comparative study was conducted on 18 test problems from the DTLZ [3], WFG [6], and CEC 2009 [7] test suites, and the Borg MOEA showed significant advantages over the competing algorithms on many-objective and multimodal problems. Lopez et al. [8] coupled an achievement function and the ε-indicator in an alternative preference relation within a MOEA for solving many-objective optimization problems. Experiments on the DTLZ and WFG test suites showed that the proposed approach performed well even with a high number of objectives. Yang et al. [9] proposed a grid-based evolutionary algorithm (GrEA) to solve many-objective optimization problems and performed numerical experiments on the DTLZ test suite.
The experimental results indicated that the proposed algorithm was effective and competitive in balancing convergence and diversity compared with six state-of-the-art MOEAs. Deb and Jain [10] proposed a reference-point-based many-objective NSGA-II (called NSGA-III). This algorithm was applied to the 3- to 15-objective DTLZ1–4 problems and compared with two recently suggested versions of MOEA/D. The proposed NSGA-III obtained satisfactory results on all test problems.

However, real-world applications of conventional EA-based optimization approaches incur high CPU cost due to a large number of expensive performance analyses, such as three-dimensional computational fluid dynamics for complex geometries. A common strategy to reduce this cost is to combine a surrogate model with the EA. A number of surrogate models, such as the response surface model (RSM) [11], radial basis functions (RBF) [12], the Kriging model [13], and neural networks (NN) [14], have been applied to practical engineering designs. Among these surrogate models, the Kriging model can estimate the deviation between the response model and the sample points and automatically adapts to the sample points. Additionally, the Kriging model does not require an assumption on the order of the approximating function, so it is superior to a general RSM. In this study, the Kriging model is adopted as the surrogate model.

For the Kriging model, a few updating criteria have been proposed to determine the locations in the design space where new sample points should be added to improve the model accuracy. Jones et al. [15] suggested maximizing the expected improvement (EI) of the original objective function to determine the location of a new sample point and termed this the efficient global optimization (EGO) algorithm. Jeong et al. [16] extended EGO to multiobjective optimization problems, in which the EIs of all objective functions are maximized and some of the nondominated solutions in EI space are selected as additional sample points. Emmerich et al. [17] proposed the expected hypervolume improvement (EHVI) as the updating criterion for a Gaussian random field metamodel and evaluated EHVI using Monte Carlo integration. Li et al. [18] suggested a domination-status-based updating criterion to judge whether a new sample point was needed in a Kriging-metamodel-assisted multiobjective genetic algorithm. Shimoyama et al. [19] compared the optimization performance of the EHVI, EI, and estimation (EST) updating criteria in multiobjective optimization. The results indicated that EHVI keeps a good balance between accurate and wide search for nondominated solutions.

However, no study using EHVI as the updating criterion for the optimization of many-objective problems, which involve four or more objectives, has been found in the open literature. The present paper is concerned with the many-objective optimization performance of the Kriging-surrogate-based EA approach with EHVI as the updating criterion. The long-term objective is to reduce the CPU cost of many-objective optimization by decreasing the number of expensive performance analyses. In order to further investigate the performance of the EHVI updating criterion, two other updating criteria, EI and EST, are adopted for comparison. The present comparisons are conducted through numerical experiments on the DTLZ test suite [3], which can be extended to an arbitrary number of objective functions.

The structure of this paper is as follows. In Section 2, the terminology and background related to the current study are reviewed. In Section 3, the Kriging model and EA based approach are outlined. In Section 4, several examples and corresponding results are given. Concluding remarks are presented in Section 5.

2. Background

2.1. Many-Objective Optimization

First, a many-objective problem with $m$ objectives is defined as follows:
$$\text{minimize}\ \mathbf{f}(\mathbf{x}) = \left(f_1(\mathbf{x}), f_2(\mathbf{x}), \ldots, f_m(\mathbf{x})\right)^T \quad \text{subject to}\ \mathbf{x} \in X, \tag{1}$$
where $\mathbf{x} = (x_1, x_2, \ldots, x_d)^T$ is a vector of design variables, $\mathbf{f}(\mathbf{x})$ is a vector of objective functions, $f_i(\mathbf{x})$ denotes the $i$th objective function to be minimized, and $X$ is the feasible region delimited by the problem's constraints.

In many-objective problems, the goal is to find the optimal tradeoff solutions known as the Pareto optimal set. The definition of the Pareto optimal set is based on the domination concept between points in the design space. The dominance relation between two design vectors $\mathbf{x}^{(1)}$ and $\mathbf{x}^{(2)}$ is defined as follows.

Definition 1. A design vector $\mathbf{x}^{(1)}$ dominates another design vector $\mathbf{x}^{(2)}$ if $f_i(\mathbf{x}^{(1)}) \le f_i(\mathbf{x}^{(2)})$ for all $i \in \{1, \ldots, m\}$ and $f_i(\mathbf{x}^{(1)}) < f_i(\mathbf{x}^{(2)})$ for at least one $i$. This is denoted by $\mathbf{x}^{(1)} \prec \mathbf{x}^{(2)}$.

Definition 2. $\mathbf{x}^{*} \in X$ is nondominated if and only if there is no vector $\mathbf{x}$ in $X$ that dominates $\mathbf{x}^{*}$.

A design vector is Pareto optimal if it is nondominated with respect to the set of all possible design vectors. The set of all Pareto optimal design vectors is called the Pareto optimal set, which forms the Pareto front in the objective space. In general, multiobjective optimization algorithms are designed to capture or closely approximate the Pareto front.
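The dominance relation and the nondominated set above can be sketched in a few lines of Python (an illustrative sketch, not code from the paper; the names `dominates` and `nondominated` are our own):

```python
import numpy as np

def dominates(f1, f2):
    """Return True if objective vector f1 dominates f2 (minimization):
    f1 <= f2 in every objective and f1 < f2 in at least one."""
    f1, f2 = np.asarray(f1), np.asarray(f2)
    return bool(np.all(f1 <= f2) and np.any(f1 < f2))

def nondominated(points):
    """Return the indices of the nondominated vectors in a list of
    objective vectors."""
    pts = np.asarray(points)
    keep = []
    for i, p in enumerate(pts):
        if not any(dominates(q, p) for j, q in enumerate(pts) if j != i):
            keep.append(i)
    return keep
```

For example, among the vectors (1, 3), (2, 2), (3, 1), and (2, 3), the last is dominated by the first and the other three are mutually nondominated.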

2.2. Kriging Model

The Kriging model has its original applications in mining and geostatistics, fields dealing with spatially and temporally correlated data [20]. The Kriging model is a combination of a global model and localized departures:
$$y(\mathbf{x}) = \beta + Z(\mathbf{x}), \tag{2}$$
where $y(\mathbf{x})$ denotes the unknown function of interest and $\beta$ denotes a known global approximation model. $Z(\mathbf{x})$ is a realization of a stochastic process with mean zero and variance $\sigma^2$, and the covariance matrix of $Z(\mathbf{x})$ is given by
$$\operatorname{Cov}\left[Z(\mathbf{x}_i), Z(\mathbf{x}_j)\right] = \sigma^2 R(\mathbf{x}_i, \mathbf{x}_j). \tag{3}$$
In (3), $\mathbf{R} = \left[R(\mathbf{x}_i, \mathbf{x}_j)\right]$ is an $n \times n$ correlation matrix whose entries are symmetric with respect to the diagonal, and $R(\mathbf{x}_i, \mathbf{x}_j)$ is the correlation function between any two points $\mathbf{x}_i$ and $\mathbf{x}_j$ among the $n$ sample points. The type of correlation function needs to be determined by the user. This paper employs the following Gaussian correlation function:
$$R(\mathbf{x}_i, \mathbf{x}_j) = \exp\left(-\sum_{k=1}^{d} \theta_k \left|x_i^{(k)} - x_j^{(k)}\right|^2\right), \tag{4}$$
where $d$ denotes the number of design variables and $\theta_k$ is the weight of the distance along the $k$th design variable.

In the Kriging model, the values of $\theta_k$, $\beta$, and $\sigma^2$ are determined by maximizing the likelihood function. First, $\boldsymbol{\theta}$ is obtained by maximizing the concentrated log-likelihood function
$$\ell(\boldsymbol{\theta}) = -\frac{n}{2}\ln\hat{\sigma}^2 - \frac{1}{2}\ln\left|\mathbf{R}\right|. \tag{5}$$
The $\hat{\beta}$ and $\hat{\sigma}^2$ that maximize the likelihood function are represented in closed form as
$$\hat{\beta} = \frac{\mathbf{1}^T \mathbf{R}^{-1} \mathbf{y}}{\mathbf{1}^T \mathbf{R}^{-1} \mathbf{1}}, \tag{6}$$
$$\hat{\sigma}^2 = \frac{\left(\mathbf{y} - \mathbf{1}\hat{\beta}\right)^T \mathbf{R}^{-1} \left(\mathbf{y} - \mathbf{1}\hat{\beta}\right)}{n}, \tag{7}$$
where $\mathbf{y}$ is the vector of observed responses at the $n$ sample points and $\mathbf{1}$ is an $n$-dimensional unit vector. After $\boldsymbol{\theta}$ is obtained, $\hat{\beta}$ and $\hat{\sigma}^2$ are obtained by (6) and (7), respectively, with $\mathbf{R}$ calculated by (4). Now, (2) can be written as the Kriging model predictor
$$\hat{y}(\mathbf{x}) = \hat{\beta} + \mathbf{r}^T \mathbf{R}^{-1} \left(\mathbf{y} - \mathbf{1}\hat{\beta}\right), \tag{8}$$
where $\mathbf{r}$ is an $n$-dimensional vector whose $i$th element is $R(\mathbf{x}, \mathbf{x}_i)$.

The accuracy of the predicted value depends greatly on the distances between the predicted point and the sample points. The mean squared error at a predicted point $\mathbf{x}$ using the Kriging model predictor is given by
$$s^2(\mathbf{x}) = \hat{\sigma}^2\left[1 - \mathbf{r}^T \mathbf{R}^{-1} \mathbf{r} + \frac{\left(1 - \mathbf{1}^T \mathbf{R}^{-1} \mathbf{r}\right)^2}{\mathbf{1}^T \mathbf{R}^{-1} \mathbf{1}}\right]. \tag{9}$$
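The predictor and its mean squared error above can be illustrated with a minimal ordinary-Kriging sketch. This is not the authors' implementation: for simplicity the weights θ are fixed by hand instead of being obtained from the likelihood maximization described above, and a small nugget term is added to the correlation matrix for numerical stability.

```python
import numpy as np

def gaussian_corr(X1, X2, theta):
    # Gaussian correlation: R_ij = exp(-sum_k theta_k |x1_ik - x2_jk|^2)
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2 * theta).sum(axis=2)
    return np.exp(-d2)

def fit_kriging(X, y, theta):
    """Ordinary Kriging with a constant trend; theta is fixed here
    rather than estimated by maximum likelihood."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    n = len(X)
    R = gaussian_corr(X, X, theta) + 1e-10 * np.eye(n)  # nugget for stability
    Rinv = np.linalg.inv(R)
    ones = np.ones(n)
    beta = (ones @ Rinv @ y) / (ones @ Rinv @ ones)  # GLS trend estimate
    resid = y - beta
    sigma2 = (resid @ Rinv @ resid) / n              # process variance estimate
    return {"X": X, "theta": theta, "Rinv": Rinv, "beta": beta,
            "sigma2": sigma2, "w": Rinv @ resid, "ones": ones}

def predict(model, x):
    """Return the Kriging prediction and its mean squared error at x."""
    r = gaussian_corr(np.atleast_2d(np.asarray(x, float)),
                      model["X"], model["theta"])[0]
    yhat = model["beta"] + r @ model["w"]
    Rinv, ones = model["Rinv"], model["ones"]
    u = 1.0 - ones @ Rinv @ r
    mse = model["sigma2"] * (1.0 - r @ Rinv @ r + u * u / (ones @ Rinv @ ones))
    return yhat, max(mse, 0.0)
```

At the sample points themselves, the predictor interpolates the data and the mean squared error goes to (nearly) zero, as implied by the error formula; between samples the error grows with distance from the data.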

2.3. Expected Hypervolume Improvement

EHVI is based on the theory of the hypervolume indicator [21]. The hypervolume indicator is a popular metric for comparing the performance of different multiobjective optimizers. The hypervolume of a set of solutions measures the size of the portion of the objective space that is dominated by the set collectively. In the field of evolutionary multiobjective optimization (EMO), the hypervolume indicator is the only unary indicator known to be strictly monotonic with regard to Pareto dominance. This characteristic is of high interest and relevance for problems with a large number of objectives.

Hypervolume calculation requires high computational effort. Several algorithms have been proposed for calculating the hypervolume exactly. Wu and Azarm [22] proposed the inclusion-exclusion algorithm (IEA) for hypervolume calculation, whose cost grows exponentially with the number of solutions $n$. Fleischer [23] introduced an algorithm based on the Lebesgue measure with claimed polynomial complexity. While et al. [24] suggested a fast hypervolume by slicing objectives (HSO) algorithm, whose worst-case complexity is $O(n^{m-1})$ for $m$ objectives. Based on HSO, Fonseca et al. [25] proposed an improved dimension-sweep algorithm for calculating the hypervolume, which achieves $O(n^{m-2}\log n)$ complexity in the worst case. The fastest algorithm yet known for exact hypervolume calculation is the Walking Fish Group (WFG) algorithm proposed by While et al. [26]. Although its worst-case complexity is exponential in $n$, it runs far faster in practice, and even relatively small percentages of dominated points improve its performance dramatically.
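The inclusion-exclusion idea of [22] is easy to state directly for very small solution sets: the dominated region is a union of boxes spanning from each point to the reference point, and its volume follows from inclusion-exclusion over box intersections. A sketch (our own, assuming minimization and a reference point dominated by all solutions; cost is exponential in the number of points):

```python
from itertools import combinations
import numpy as np

def hypervolume_ie(points, ref):
    """Exact hypervolume by inclusion-exclusion over box intersections.
    Only practical for small solution sets."""
    pts = np.asarray(points, float)
    ref = np.asarray(ref, float)
    n = len(pts)
    hv = 0.0
    for size in range(1, n + 1):
        for subset in combinations(range(n), size):
            # intersection of the boxes [p, ref] is [max of corners, ref]
            corner = pts[list(subset)].max(axis=0)
            vol = np.prod(np.clip(ref - corner, 0.0, None))
            hv += vol if size % 2 == 1 else -vol
    return hv
```

For instance, the two points (1, 2) and (2, 1) with reference point (3, 3) dominate two 2×1 boxes overlapping in a 1×1 square, giving a hypervolume of 2 + 2 − 1 = 3.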

On the other hand, some approximate algorithms for calculating the hypervolume have also been developed in recent years. Bader and Zitzler [27] proposed an approximate algorithm based on Monte Carlo sampling; in their work, the approximate algorithm was adopted to calculate the hypervolume values of problems with more than 5 objectives. Bringmann and Friedrich [28] presented a new approximation algorithm, also based on Monte Carlo sampling, that worked extremely fast for all tested practical instances. Ishibuchi et al. [29] proposed an approximation algorithm using achievement scalarizing functions with uniformly distributed weight vectors.
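The Monte Carlo idea can be illustrated very simply: sample uniformly in a box that encloses the dominated region and count the fraction of samples dominated by at least one solution. This is a sketch of the basic principle only, not the refined sampling schemes of [27] or [28]:

```python
import numpy as np

def hypervolume_mc(points, ref, n_samples=100_000, rng=None):
    """Monte Carlo hypervolume estimate (minimization): sample uniformly
    in the box [componentwise min of points, ref] and count the
    dominated fraction."""
    rng = np.random.default_rng(rng)
    pts = np.asarray(points, float)
    ref = np.asarray(ref, float)
    lo = pts.min(axis=0)  # the dominated region lies inside [lo, ref]
    samples = rng.uniform(lo, ref, size=(n_samples, len(ref)))
    # a sample is dominated if some point is <= it in every objective
    dominated = (pts[None, :, :] <= samples[:, None, :]).all(axis=2).any(axis=1)
    box = np.prod(ref - lo)
    return box * dominated.mean()
```

With enough samples the estimate converges to the exact value; the standard error shrinks as the inverse square root of the sample count, independently of the number of objectives, which is why sampling becomes attractive for high-dimensional objective spaces.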

EHVI is the expected value of the hypervolume improvement under the Kriging model. The hypervolume improvement is defined as the difference of hypervolume between the current sample set and the next sample set, as illustrated in Figure 1, and its expected value is expressed as
$$\mathrm{EHVI}(\mathbf{x}) = \int_{-\infty}^{f_{\mathrm{ref}}} \cdots \int_{-\infty}^{f_{\mathrm{ref}}} \mathrm{HVI}(\mathbf{F})\,\phi(\mathbf{F})\,d\mathbf{F}, \tag{10}$$
where $\mathbf{F}$ denotes the Gaussian random variable $N\!\left(\hat{\mathbf{y}}(\mathbf{x}), s^2(\mathbf{x})\right)$ predicted by the Kriging models, $\phi$ is its probability density function, and $f_{\mathrm{ref}}$ is the reference value used for calculating the hypervolume. The maximization of EHVI is considered as the updating criterion to determine the location of an additional sample point; this is a single-objective optimization problem.

Figure 1: EHVI updating criterion of the Kriging model.

In the present optimizations, objective function values are normalized between 0 and 1 for all given sample points, and the reference values are set to 1.1 for all normalized objective functions. When the number of objectives is small (≤5), the hypervolume is calculated using the fastest exact algorithm, WFG [26]. When the number of objectives is large (>5), in consideration of the tradeoff between the EHVI accuracy and the overall computing time, the hypervolume is calculated using the approximate algorithm described in [27]. The number of samples used to estimate the approximate hypervolume in this algorithm is set to 300 for the 8- and 10-objective problems and 500 for the 12- and 15-objective problems. It is difficult to calculate EHVI by analytical integration. As a simple alternative, we use Monte Carlo integration [30], producing 1,000 random points with Gaussian distribution around the candidate sample point to measure the EHVI.
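The Monte Carlo estimation of EHVI described above can be sketched as follows, assuming independent Gaussian predictions per objective from the Kriging models. A small exact hypervolume helper (inclusion-exclusion, practical only for small sets) is included to keep the sketch self-contained; the function names are our own:

```python
import numpy as np
from itertools import combinations

def hv(points, ref):
    # exact hypervolume by inclusion-exclusion (small sets, minimization)
    pts = np.asarray(points, float)
    ref = np.asarray(ref, float)
    total = 0.0
    for size in range(1, len(pts) + 1):
        for sub in combinations(range(len(pts)), size):
            corner = pts[list(sub)].max(axis=0)
            vol = np.prod(np.clip(ref - corner, 0.0, None))
            total += vol if size % 2 == 1 else -vol
    return total

def ehvi_mc(mean, std, archive, ref, n_draws=1000, rng=None):
    """Monte Carlo EHVI: draw objective vectors from the independent
    Gaussians N(mean_i, std_i^2) predicted at a candidate point and
    average the hypervolume improvement over the current sample set."""
    rng = np.random.default_rng(rng)
    base = hv(archive, ref)
    draws = rng.normal(mean, std, size=(n_draws, len(mean)))
    imp = 0.0
    for y in draws:
        imp += max(hv(np.vstack([archive, y]), ref) - base, 0.0)
    return imp / n_draws
```

A candidate whose prediction lies deep in the dominated region yields EHVI near zero, while one predicted to dominate the current set (or one with large predictive uncertainty near the front) yields a large EHVI, which is what drives the balance between exploitation and exploration.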

3. Kriging-Surrogate-Based EA for Many-Objective Optimization

Figure 2 gives the flowchart of the Kriging-surrogate-based EA for many-objective optimization. The initial sample points are generated using the Latin hypercube sampling (LHS) method [31] and are distributed uniformly in the design space. LHS determines the locations of the sampled points taking account of an orthogonality condition, meaning that no two samples share a value interval in any of the design variables, so LHS can cover the whole variable space even with a smaller sample size than the Monte Carlo method. Using the initial samples, the Kriging model is constructed to approximate the responses of the objective functions to the design variables in the form of simple algebraic functions. These algebraic functions interpolate the initial sample points with real objective function values; therefore, the Kriging model can promptly give estimated function values at other points whose objective function values are unknown. In order to investigate the performance of EHVI, two other updating criteria, EI and EST [19], are adopted for comparison in this study. An EMO algorithm, NSGA-II [32], is adopted to search for the nondominated solutions under EI or EST and the optimal solution under EHVI on the response surfaces. The parameter values [33] used in NSGA-II are given in Table 1.
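A minimal LHS generator in the unit hypercube, illustrating the orthogonality condition described above (one sample per bin in every dimension; an illustrative sketch, not the exact procedure of [31]):

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng=None):
    """Latin hypercube sample in [0, 1]^d: each variable's range is cut
    into n_samples equal bins, and independent random permutations pair
    the bins so no two samples share a bin in any dimension."""
    rng = np.random.default_rng(rng)
    # one bin index per sample, independently permuted per dimension
    perms = np.column_stack([rng.permutation(n_samples)
                             for _ in range(n_dims)])
    # place each point at a random position inside its bin
    return (perms + rng.uniform(size=(n_samples, n_dims))) / n_samples
```

Projecting the resulting points onto any single axis yields exactly one point per bin, which is the stratification property that lets a small LHS design cover the whole variable range in every dimension.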

Table 1: Parameter values used in NSGA-II.
Figure 2: Flowchart of the Kriging-surrogate-based EA for many-objective optimization.

In the optimization based on EHVI, a single solution with the maximum EHVI is obtained and employed as the additional sample point. This indicates that the sample points are added one by one. On the other hand, a set of multiple nondominated solutions is obtained in the optimization based on EI or EST. For a fair comparison, only the solution closest to the centroid of the obtained nondominated solutions in the design space is chosen as the additional sample point. When the additional sample point determined in the previous step is added to the set of initial samples, the Kriging model is reconstructed and the optimization is performed again. The termination condition of updating the Kriging model is that the total number of the initial sample points plus the additional sample points reaches 70.
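The selection rule above for the EI and EST criteria, choosing the single nondominated solution closest to the centroid in the design space, can be written directly (an illustrative sketch; the function name is ours):

```python
import numpy as np

def pick_centroid_closest(candidates):
    """Among the nondominated solutions found on the surrogate under the
    EI or EST criterion, return the one closest (Euclidean distance in
    the design space) to their centroid, used as the single additional
    sample point."""
    X = np.asarray(candidates, float)
    centroid = X.mean(axis=0)
    return X[np.argmin(np.linalg.norm(X - centroid, axis=1))]
```

This reduces the set of nondominated candidates to one representative point per update, so that all three criteria add exactly one sample per iteration.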

4. Numerical Test

4.1. Test Problems

Three- to fifteen-objective DTLZ1-7 problems [3] are considered in the present numerical experiments. All these problems are nonconstrained and scalable to an arbitrary number of objective functions. Table 2 lists these test problems and their key properties.

Table 2: The test problems and key properties.
4.2. Performance Metric

The inverted generational distance (IGD) [34] is considered as the performance metric in the present numerical experiments. IGD provides combined information about the convergence and diversity of the obtained solutions. IGD is defined as follows:
$$\mathrm{IGD}(P^{*}, Q) = \frac{\sum_{\mathbf{v} \in P^{*}} d(\mathbf{v}, Q)}{\left|P^{*}\right|}, \tag{11}$$
where $P^{*}$ denotes the set of points located on the true Pareto optimal front, $Q$ is the nondominated solution set obtained by the optimization, and $d(\mathbf{v}, Q)$ is the minimum Euclidean distance from $\mathbf{v}$ to the points in $Q$. A smaller IGD indicates better performance in multiobjective optimization. The solutions in $P^{*}$ are distributed uniformly on the true Pareto optimal front. The size of $P^{*}$ varies with the number of objectives and is shown in Table 3.
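The IGD metric above can be computed directly; the sketch below takes the distance from each reference point to its nearest obtained solution and averages:

```python
import numpy as np

def igd(pareto_ref, solutions):
    """Inverted generational distance: mean, over the reference points on
    the true Pareto front, of the Euclidean distance to the nearest
    obtained solution. Smaller is better."""
    P = np.asarray(pareto_ref, float)
    Q = np.asarray(solutions, float)
    # pairwise distances between reference points (rows) and solutions
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)
    return d.min(axis=1).mean()
```

Because every reference point contributes a distance, IGD penalizes both poor convergence (all distances large) and poor diversity (regions of the front with no nearby solution), which is why a single scalar suffices here.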

Table 3: The size of $P^{*}$ for each number of objectives.

4.3. Results and Discussion

For each test problem, 20 runs are carried out with different initial datasets of ten sample points. The IGD values given in the following are averaged over the 20 runs. Figure 3 gives the IGD histories during the Kriging model update based on the EHVI, EI, and EST criteria in the 3- to 15-objective DTLZ1 problems. EST achieves faster reduction of IGD than EHVI and EI except for the cases of $m = 3$ and $5$, in which EHVI achieves the fastest IGD reduction. In addition, EST always obtains faster reduction of IGD than EI for any number of objectives. This indicates that the EST updating criterion, which does not consider estimation errors, works better than the EI criterion, which does, because the DTLZ1 problem is simple and easily approximated by the Kriging model.

Figure 3: IGD histories during the Kriging model update in DTLZ1.

The IGD histories during the Kriging model update based on the EHVI, EI, and EST criteria in the 3- to 15-objective DTLZ2 problems are shown in Figure 4. When $m = 3$ and $5$, the optimization performance of using EHVI as the updating criterion is poor. This is because the updating strategy based on EHVI may cause the additional sample points to fall into a locally optimal region. When $m = 8$ and $10$, EHVI obtains slightly faster IGD reduction than EI and EST. In addition, it is noted that EI obtains much faster reduction of IGD than EHVI and EST when $m$ is larger than 10.

Figure 4: IGD histories during the Kriging model update in DTLZ2.

Figure 5 presents the IGD histories during the Kriging model update based on the EHVI, EI, and EST criteria in the 3- to 15-objective DTLZ3 problems. EHVI always achieves better convergence and diversity performance than EI and EST for any number of objectives. When $m = 3$, EI obtains faster IGD reduction than EST. When $m$ is larger than 3, the final IGD values obtained by EI and EST are very close, indicating that the problems are well approximated by the Kriging model and the estimation errors are relatively small.

Figure 5: IGD histories during the Kriging model update in DTLZ3.

Figure 6 shows the IGD histories during the Kriging model update based on the EHVI, EI, and EST criteria in the 3- to 15-objective DTLZ4 problems. For any number of objectives, EST, which does not consider estimation errors, obtains faster IGD reduction than EI, which does. As $m$ increases, the convergence and diversity performance of using EHVI as the updating criterion becomes better. When $m$ is larger than 8, EHVI outperforms EI and EST. This indicates that the EHVI updating criterion is particularly advantageous for problems with a large number of objectives.

Figure 6: IGD histories during the Kriging model update in DTLZ4.

The IGD histories during the Kriging model update based on the EHVI, EI, and EST criteria in the 3- to 15-objective DTLZ5 problems are shown in Figure 7. When $m$ is smaller than 12, EST obtains faster reduction of IGD than EI. However, EI outperforms EST when $m = 12$ and $15$, indicating that EI, which considers estimation errors, has a positive impact on convergence when the number of objectives is large. On the other hand, when $m = 3$ and $5$, the optimization performance of EHVI is poor, but it gradually improves as $m$ increases. When $m$ is larger than 8, EHVI obtains the fastest reduction of IGD.

Figure 7: IGD histories during the Kriging model update in DTLZ5.

Figure 8 presents the IGD histories during the Kriging model update based on the EHVI, EI, and EST criteria in the 3- to 15-objective DTLZ6 problems. EHVI always obtains much faster IGD reduction than EI and EST for any number of objectives. The EHVI updating criterion demonstrates the efficient exploitation capability of the Kriging-surrogate-based EA for many-objective optimization. EI achieves faster reduction of IGD than EST except for the cases of $m = 8$ and $10$.

Figure 8: IGD histories during the Kriging model update in DTLZ6.

The IGD histories during the Kriging model update based on the EHVI, EI, and EST criteria in the 3- to 15-objective DTLZ7 problems are shown in Figure 9. EHVI outperforms EI and EST for any number of objectives, and its advantage over EI and EST becomes more obvious when $m$ is larger than 8. On the other hand, EI has better optimization performance than EST except in a single case.

Figure 9: IGD histories during the Kriging model update in DTLZ7.

In order to investigate the effect of the number of design variables on the optimization performance of the EHVI updating criterion, the DTLZ7-k10 problem is also considered. In the original DTLZ7 [3], the parameter $k$, the number of distance-related design variables, is suggested to be 20. In the DTLZ7-k10 problem, $k$ is instead set to 10, so DTLZ7-k10 has 10 fewer design variables than the corresponding DTLZ7 problem. The IGD histories during the Kriging model update based on the EHVI, EI, and EST criteria in the 3- to 15-objective DTLZ7-k10 problems are shown in Figure 10. EHVI obtains faster reduction of IGD than EI and EST except in one case, in which EST achieves the fastest IGD reduction. Comparing the optimization performance on the DTLZ7 and DTLZ7-k10 problems, the advantage of EHVI over EI and EST is more obvious in DTLZ7 than in DTLZ7-k10. This indicates that EHVI is more suitable for problems with a large number of design variables.

Figure 10: IGD histories during the Kriging model update in DTLZ7-k10.

5. Conclusion

The Kriging-surrogate-based EA with EHVI, EI, and EST as updating criteria was applied to the 3- to 15-objective DTLZ1–7 test problems, and the optimization performance under the different updating criteria was compared in this study. The WFG exact algorithm, the fastest known, and the approximate algorithm based on Monte Carlo sampling were adopted for the hypervolume calculation.

In the DTLZ1 problem, EHVI had faster IGD reduction than EI and EST when $m = 3$ and $5$, but EHVI did not keep the advantage when $m$ was larger than 5. In the DTLZ2 and DTLZ5 problems, the optimization performance of using EHVI as the updating criterion was poor when $m = 3$ and $5$ but improved when the number of objectives was larger than 5. In the DTLZ4 and DTLZ5 problems, the advantage of EHVI appeared gradually as $m$ increased, and EHVI obtained faster reduction of IGD than EST and EI when $m$ was larger than 8. In the DTLZ3, DTLZ6, and DTLZ7 problems, EHVI always obtained better convergence and diversity performance than EI and EST for any number of objectives. In the DTLZ7-k10 problem, the advantage of EHVI over EI and EST was reduced compared with that in DTLZ7. On the whole, in the nonconstrained case, EHVI is highly competitive for updating the Kriging model compared with EI and EST in many-objective optimization, especially when the test problem is complex and the number of objectives or design variables is large.

On the other hand, EST always obtained faster reduction of IGD than EI for any number of objectives in the DTLZ1 and DTLZ4 problems, because these two problems are easily approximated by the Kriging model. In most of the other many-objective optimizations, the EI updating criterion, which considers estimation errors, had better optimization performance than EST, which does not, especially for problems with a large number of objectives.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

  1. C. A. C. Coello, G. B. Lamont, and D. A. van Veldhuizen, Evolutionary Algorithms for Solving Multi-Objective Problems, Springer, New York, NY, USA, 2007.
  2. M. Garza-Fabre, G. T. Pulido, and C. A. Coello Coello, “Ranking methods for many-objective optimization,” in MICAI 2009: Advances in Artificial Intelligence, vol. 5845 of Lecture Notes in Computer Science, pp. 633–645, Springer, Berlin, Germany, 2009.
  3. K. Deb, L. Thiele, M. Laumanns, and E. Zitzler, “Scalable test problems for evolutionary multi-objective optimization,” TIK-Technical Report No. 112, 2001.
  4. S. F. Adra and P. J. Fleming, “Diversity management in evolutionary many-objective optimization,” IEEE Transactions on Evolutionary Computation, vol. 15, no. 2, pp. 183–195, 2011.
  5. D. Hadka and P. Reed, “Borg: an auto-adaptive many-objective evolutionary computing framework,” Evolutionary Computation, vol. 21, no. 2, pp. 231–259, 2013.
  6. S. Huband, P. Hingston, L. Barone, and L. While, “A review of multiobjective test problems and a scalable test problem toolkit,” IEEE Transactions on Evolutionary Computation, vol. 10, no. 5, pp. 477–506, 2006.
  7. Q. Zhang, A. Zhou, S. Zhao, P. N. Suganthan, W. Liu, and S. Tiwari, “Multiobjective optimization test instances for the CEC 2009 special session and competition,” Tech. Rep. CES-487, University of Essex, 2009.
  8. A. Lopez, C. A. Coello Coello, A. Oyama, and K. Fujii, “An alternative preference relation to deal with many-objective optimization problems,” in Evolutionary Multi-Criterion Optimization, vol. 7811 of Lecture Notes in Computer Science, pp. 291–306, Springer, Berlin, Germany, 2013.
  9. S. Yang, M. Li, X. Liu, and J. Zheng, “A grid-based evolutionary algorithm for many-objective optimization,” IEEE Transactions on Evolutionary Computation, vol. 17, no. 5, pp. 721–736, 2013.
  10. K. Deb and H. Jain, “An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: solving problems with box constraints,” IEEE Transactions on Evolutionary Computation, vol. 18, no. 4, pp. 577–601, 2014.
  11. Y. Lian and M.-S. Liou, “Multiobjective optimization using coupled response surface model and evolutionary algorithm,” AIAA Journal, vol. 43, no. 6, pp. 1316–1325, 2005.
  12. H. Fang, M. Rais-Rohani, and M. F. Horstemeyer, “Multiobjective crashworthiness optimization with radial basis functions,” in Proceedings of the 10th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, pp. 2095–2104, Albany, NY, USA, September 2004.
  13. K. Shimoyama, S. Yoshimizu, S. Jeong, S. Obayashi, and Y. Yokono, “Multi-objective design optimization for a steam turbine stator blade using LES and GA,” Journal of Computational Science and Technology, vol. 5, no. 3, pp. 134–147, 2011.
  14. M. Papadrakakis, N. D. Lagaros, and Y. Tsompanakis, “Optimization of large-scale 3-D trusses using evolution strategies and neural networks,” International Journal of Space Structures, vol. 14, no. 3, pp. 211–223, 1999.
  15. D. R. Jones, M. Schonlau, and W. J. Welch, “Efficient global optimization of expensive black-box functions,” Journal of Global Optimization, vol. 13, no. 4, pp. 455–492, 1998.
  16. S. Jeong, Y. Minemura, and S. Obayashi, “Optimization of combustion chamber for diesel engine using Kriging model,” Journal of Fluid Science and Technology, vol. 1, no. 2, pp. 138–146, 2006.
  17. M. T. M. Emmerich, K. C. Giannakoglou, and B. Naujoks, “Single- and multiobjective evolutionary optimization assisted by Gaussian random field metamodels,” IEEE Transactions on Evolutionary Computation, vol. 10, no. 4, pp. 421–439, 2006.
  18. M. Li, G. Li, and S. Azarm, “A Kriging metamodel assisted multi-objective genetic algorithm for design optimization,” Journal of Mechanical Design, vol. 130, no. 3, Article ID 031401, 2008.
  19. K. Shimoyama, K. Sato, S. Jeong, and S. Obayashi, “Updating Kriging surrogate models based on the hypervolume indicator in multi-objective optimization,” Journal of Mechanical Design, vol. 135, no. 9, Article ID 094503, 2013.
  20. N. A. C. Cressie, Statistics for Spatial Data, John Wiley & Sons, New York, NY, USA, revised edition, 1993.
  21. E. Zitzler and L. Thiele, “Multiobjective optimization using evolutionary algorithms—a comparative case study,” in Proceedings of the 5th International Conference on Parallel Problem Solving from Nature (PPSN V), vol. 1498 of Lecture Notes in Computer Science, pp. 292–301, Amsterdam, The Netherlands, 1998.
  22. J. Wu and S. Azarm, “Metrics for quality assessment of a multiobjective design optimization solution set,” Journal of Mechanical Design, vol. 123, no. 1, pp. 18–25, 2001.
  23. M. Fleischer, “The measure of Pareto optima: applications to multi-objective metaheuristics,” in Evolutionary Multi-Criterion Optimization, vol. 2632 of Lecture Notes in Computer Science, pp. 519–533, Springer, Berlin, Germany, 2003.
  24. L. While, P. Hingston, L. Barone, and S. Huband, “A faster algorithm for calculating hypervolume,” IEEE Transactions on Evolutionary Computation, vol. 10, no. 1, pp. 29–38, 2006.
  25. C. M. Fonseca, L. Paquete, and M. López-Ibáñez, “An improved dimension-sweep algorithm for the hypervolume indicator,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '06), pp. 1157–1163, July 2006.
  26. L. While, L. Bradstreet, and L. Barone, “A fast way of calculating exact hypervolumes,” IEEE Transactions on Evolutionary Computation, vol. 16, no. 1, pp. 86–95, 2012.
  27. J. Bader and E. Zitzler, “HypE: an algorithm for fast hypervolume-based many-objective optimization,” TIK-Technical Report 286, 2008.
  28. K. Bringmann and T. Friedrich, “Approximating the least hypervolume contributor: NP-hard in general, but fast in practice,” in Evolutionary Multi-Criterion Optimization, vol. 5467 of Lecture Notes in Computer Science, pp. 6–20, Springer, Berlin, Germany, 2009.
  29. H. Ishibuchi, N. Tsukamoto, Y. Sakane, and Y. Nojima, “Hypervolume approximation using achievement scalarizing functions for evolutionary many-objective optimization,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '09), pp. 530–537, Trondheim, Norway, May 2009.
  30. S. Weinzierl, “Introduction to Monte Carlo methods,” Tech. Rep. NIKHEF-00-012, NIKHEF, Theory Group, Amsterdam, The Netherlands, 2000.
  31. M. D. McKay, R. J. Beckman, and W. J. Conover, “A comparison of three methods for selecting values of input variables in the analysis of output from a computer code,” Technometrics, vol. 21, no. 2, pp. 239–245, 1979.
  32. K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multi-objective genetic algorithm: NSGA-II,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002.
  33. K. Deb and R. B. Agrawal, “Simulated binary crossover for continuous search space,” Complex Systems, vol. 9, no. 2, pp. 115–148, 1995.
  34. D. A. Van Veldhuizen and G. B. Lamont, “Multiobjective evolutionary algorithm research: a history and analysis,” Tech. Rep. TR-98-03, Air Force Institute of Technology, Dayton, Ohio, USA, 1998.