Abstract

The paper proposes a global optimization algorithm employing surrogate modeling and adaptive infill criteria. The surrogates are exploited to screen the design space and provide fast, lower-fidelity predictions across it, while specific criteria are designed to suggest new points for high-fidelity evaluation so as to enrich the optimizer database. Both Kriging and radial basis function networks are used as surrogates, with different training strategies. Sequential design is achieved by introducing several infill criteria, each realizing the exploration-exploitation trade-off in a different way. Optimization results are provided both for scalable analytical test functions and for a practical aerodynamic shape optimization problem.

1. Introduction

The trade-off between high fidelity and short response time is an essential part of today's real-world engineering design applications. On the one hand, the industrial request to shift the focus and part of the costs from experimental to numerical design and analysis leads to the introduction of more and more physics modeling into numerical simulation codes. On the other hand, the price to pay is the increasing computational time of single analyses and design cycles, which is detrimental to the goal of reducing the time to market. This trade-off is even more evident when computational fluid dynamics (CFD) is involved in the design loop, since both the need to accurately evaluate very complex configurations and the large number of required CFD simulations add to the cost. The engineering computational design process may be sped up either by accelerating the high-fidelity evaluation or by reducing/accelerating the steps of the numerical algorithm used for exploring the design space. The first group includes High-Performance Computing (HPC) methods that increase the performance of a single call to the CFD solver (through code parallelization, vectorization, and profiling). With reference to the second group, several research trends focus on reducing the problem dimensionality (hence, the iterations needed to find the solution), improving the search algorithms to achieve faster and better solutions, and tuning the algorithm parameters to automatically and efficiently adapt to the response.

An interesting alternative branch is represented by surrogate-based or metamodel-assisted optimization (SBO), which was originally conceived to relieve the computational load associated with the use of black-box response functions. SBO consists in replacing the high-fidelity model (or "truth" model, e.g., the CFD simulation) with a fast, lower-fidelity model which has preliminarily "learned" from high-fidelity data. Since the pioneering work by Jones et al. [1], several theoretical studies [2–18] have been published on the topic. The proposed methods differ in one or more of the following aspects: the employed surrogate model (e.g., model type and single or multiple models), the training approach (e.g., optimizing the prediction error, the cross-validation error, the generalized cross-validation error, or the likelihood function), the model updating strategy (e.g., usage of surrogate minimizers, infill criteria, or random criteria), and the optimization method adopted to find the model parameters and to explore the surrogate (e.g., heuristic, gradient-free or gradient-based, and global or local). SBO has been successfully applied in the aerospace engineering field [19–27].

In continuity with other works by the author [28–31], the present paper proposes an adaptive SBO framework for design optimization with different updating strategies and optimization algorithms. A sketch of the main building blocks is provided in Figure 1. A choice of surrogate models is also available for selection. The main novelty of the paper lies in the proposal of two groups of diverse infill strategies and in the capability to apply many of them during the adaptive sampling cycle by defining activation probabilities. Moreover, the usage of two different optimization libraries (the public domain NLopt and the in-house ADGLIB) allows performing both hyperparameter optimization for surrogate model training and objective function minimization with a variety of approaches.
Finally, while previous investigations focused only on aerodynamic optimization cases, here an extensive study is carried out on analytic test functions in order to quickly assess the performance of the algorithm on known data. The paper is structured as follows: the first section is devoted to the definition of the surrogate models and to the training methods; then, sequential design by means of various infill criteria is discussed and some examples on a basic test function are proposed; furthermore, having introduced all the computational pieces, the whole surrogate-based sequential optimization algorithm is described in detail and the interactions between subphases are highlighted; finally, the experimental test campaign on multidimensional scalable test functions is discussed, as well as the results obtained by using different setups; an example of real-world application is given in the final section, where a benchmark aerodynamic shape optimization case is addressed and results are compared with a previous work by the same author [31].

2. Surrogate Models

2.1. Kriging Model

The Kriging model assumes that the function value at each point in the domain is represented by a separate random variable correlated with all the other points. Given that $y(\mathbf{x})$ is the function response of interest, a Kriging surrogate is defined as the sum of a regression model and a realization of a stochastic process $Z(\mathbf{x})$ having zero mean, process variance $\sigma^2$, and covariance model $\sigma^2\psi(\mathbf{x}_i,\mathbf{x}_j;\boldsymbol{\theta})$ between $Z(\mathbf{x}_i)$ and $Z(\mathbf{x}_j)$ with parameter vector $\boldsymbol{\theta}$ [32]:

$$Y(\mathbf{x}) = \mathbf{f}(\mathbf{x})^{T}\boldsymbol{\beta} + Z(\mathbf{x}),$$

where $\boldsymbol{\beta}$ is the regression coefficient vector and $\mathbf{f}(\mathbf{x})$ is the regression vector. The correlation between training sites is condensed within the matrix $\Psi$, whose entries are given by $\Psi_{ij} = \psi(\mathbf{x}_i,\mathbf{x}_j)$, so that the covariance matrix is $\sigma^{2}\Psi$. In multidimensional cases, the covariance is obtained as a tensor product of one-dimensional covariance functions:

$$\psi(\mathbf{x}_i,\mathbf{x}_j) = \prod_{k=1}^{n}\psi_k\!\left(\left|x_{i,k}-x_{j,k}\right|;\theta_k\right),$$

where $n$ is the dimension of the problem, $\theta_k$ is the length scale in the $k$-th dimension, $x_{i,k}$ is the $k$-th component of the vector $\mathbf{x}_i$, and $\psi_k$ is the one-dimensional Matérn function

$$\psi(r) = \frac{2^{1-\nu}}{\Gamma(\nu)}\left(\frac{\sqrt{2\nu}\,r}{\theta}\right)^{\nu} K_{\nu}\!\left(\frac{\sqrt{2\nu}\,r}{\theta}\right),$$

with $\Gamma$ the Gamma function, $K_{\nu}$ the modified Bessel function of the second kind, and three possible values of the parameter $\nu$:

$$\nu = \tfrac{1}{2}:\ \psi(r) = e^{-r/\theta},\qquad \nu = \tfrac{3}{2}:\ \psi(r) = \left(1+\frac{\sqrt{3}\,r}{\theta}\right)e^{-\sqrt{3}\,r/\theta},\qquad \nu = \tfrac{5}{2}:\ \psi(r) = \left(1+\frac{\sqrt{5}\,r}{\theta}+\frac{5r^{2}}{3\theta^{2}}\right)e^{-\sqrt{5}\,r/\theta}.$$

In practice, the covariance matrix may become ill-conditioned, or regression features are required to handle noisy functions. As a consequence, a "nugget" or noise term is added to the diagonal of the correlation matrix, whose entries become $\Psi_{ij} + \lambda\delta_{ij}$, where $\delta_{ij}$ is the Kronecker delta and $\lambda$ is the noise ratio. Predictions at a generic location $\mathbf{x}$ are obtained by using the following expression:

$$\hat{y}(\mathbf{x}) = \mathbf{f}(\mathbf{x})^{T}\boldsymbol{\beta}^{\ast} + \boldsymbol{\psi}(\mathbf{x})^{T}\Psi^{-1}\left(\mathbf{y} - F\boldsymbol{\beta}^{\ast}\right),$$

where $F$ is the linear regression matrix, $\boldsymbol{\beta}^{\ast}$ is the generalized least squares estimate of $\boldsymbol{\beta}$, $\Psi$ is the correlation matrix defined above, $\boldsymbol{\psi}(\mathbf{x})$ is the correlation vector between the generic design site $\mathbf{x}$ and the training sites, and $\mathbf{y}$ is the vector of function values. The stochastic nature of the Kriging model allows obtaining an estimate of the prediction variance in the form

$$s^{2}(\mathbf{x}) = \hat{\sigma}^{2}\left[1 + \lambda - \boldsymbol{\psi}(\mathbf{x})^{T}\Psi^{-1}\boldsymbol{\psi}(\mathbf{x}) + \mathbf{u}^{T}\left(F^{T}\Psi^{-1}F\right)^{-1}\mathbf{u}\right],$$

where $\hat{\sigma}^{2}$ is the estimated process variance and $\mathbf{u} = F^{T}\Psi^{-1}\boldsymbol{\psi}(\mathbf{x}) - \mathbf{f}(\mathbf{x})$. The prediction and its variance are both functions of the hyperparameters, i.e., the length scales $\theta_k$, the process variance $\sigma^{2}$, and the noise magnitude $\lambda$. The hyperparameters have to be tuned in order to make the model output more "likely" against a given set of training data. In other words, the aim is to maximize the probability that the observed data follow the Gaussian process assumed with a specific set of hyperparameters. This is typically achieved by maximum likelihood estimation (MLE) [33]. In the following, two optimization strategies are proposed, hereinafter referred to as "full optimization" and "partial optimization." The first is required when the function noise has to be fitted, while the second is preferable when ill conditioning of the covariance matrix is likely to occur. In both cases, the optimization algorithms are called from the NLopt library (available online at http://ab-initio.mit.edu/nlopt) in a loosely coupled global-local approach: first, a global exploration is performed with the evolutionary strategy ESCH algorithm [34]; then, the best solution is taken as the starting point of a local refinement using a revised version of the Nelder-Mead simplex algorithm [35].
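For concreteness, the following minimal sketch shows how the prediction and variance formulas above can be evaluated, assuming a constant regression term ($\mathbf{f}(\mathbf{x}) = 1$) and the Matérn-5/2 kernel; the function names are illustrative and do not refer to the paper's actual implementation.

```python
# Minimal sketch of Kriging prediction and variance, assuming a constant
# regression term f(x) = 1 and a Matern-5/2 kernel. Names (matern52,
# cov_matrix, krig_predict) are illustrative assumptions.
import numpy as np

def matern52(r, theta):
    """One-dimensional Matern correlation with nu = 5/2."""
    a = np.sqrt(5.0) * r / theta
    return (1.0 + a + a**2 / 3.0) * np.exp(-a)

def cov_matrix(X1, X2, theta):
    """Tensor-product correlation between two sample sets."""
    Psi = np.ones((len(X1), len(X2)))
    for k in range(X1.shape[1]):
        r = np.abs(X1[:, k][:, None] - X2[:, k][None, :])
        Psi *= matern52(r, theta[k])
    return Psi

def krig_predict(x, X, y, theta, sigma2=1.0, lam=1e-8):
    """Return prediction and variance at a single point x; sigma2 and the
    length scales theta would come from the MLE described in the text."""
    N = len(X)
    Psi = cov_matrix(X, X, theta) + lam * np.eye(N)   # nugget on the diagonal
    psi = cov_matrix(x[None, :], X, theta).ravel()
    F = np.ones(N)                                    # constant regression
    Psi_inv_y = np.linalg.solve(Psi, y)
    Psi_inv_F = np.linalg.solve(Psi, F)
    beta = F @ Psi_inv_y / (F @ Psi_inv_F)            # GLS estimate of beta
    resid = y - F * beta
    y_hat = beta + psi @ np.linalg.solve(Psi, resid)
    u = F @ np.linalg.solve(Psi, psi) - 1.0
    s2 = sigma2 * (1.0 + lam - psi @ np.linalg.solve(Psi, psi)
                   + u**2 / (F @ Psi_inv_F))
    return y_hat, max(s2, 0.0)
```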

2.1.1. Full Optimization

In this approach, the regression parameters are found by imposing an optimality condition and are given by

$$\boldsymbol{\beta}^{\ast} = \left(F^{T}\Psi^{-1}F\right)^{-1}F^{T}\Psi^{-1}\mathbf{y}.$$

All the hyperparameters (length scales, process variance, and noise level) are obtained through the maximization of the log-likelihood function

$$\ln L = -\frac{1}{2}\left[N\ln\left(2\pi\sigma^{2}\right) + \ln\left|\Psi\right| + \frac{\left(\mathbf{y} - F\boldsymbol{\beta}^{\ast}\right)^{T}\Psi^{-1}\left(\mathbf{y} - F\boldsymbol{\beta}^{\ast}\right)}{\sigma^{2}}\right],$$

where $N$ is the number of training points, $p$ is the number of terms in the regression, and the regression matrix is defined as $F_{ij} = f_j\left(\mathbf{x}_i\right)$, $i = 1,\dots,N$, $j = 1,\dots,p$.

2.1.2. Partial Optimization

In this formulation, the process variance and the regression parameters are both computed via the optimality conditions; hence, the optimization process involves only the covariance length scales $\theta_k$. The optimal process variance is set to the value which cancels the partial derivative of the likelihood function with respect to $\sigma^{2}$, given by

$$\hat{\sigma}^{2} = \frac{1}{N}\left(\mathbf{y} - F\boldsymbol{\beta}^{\ast}\right)^{T}\tilde{\Psi}^{-1}\left(\mathbf{y} - F\boldsymbol{\beta}^{\ast}\right),$$

where $\tilde{\Psi} = \Psi + \lambda I$ is the correlation matrix including the nugget term.

The likelihood function to be maximized reduces to

$$\ln L = -\frac{1}{2}\left[N\ln\hat{\sigma}^{2} + \ln\left|\tilde{\Psi}\right|\right] + \text{const}.$$

This final formula is a function of the length scales $\theta_k$ and of the noise level ratio $\lambda$. The optimization is performed only over the length scales, while the noise level ratio is fixed throughout the optimization. A typical choice is to set the noise level to a small fraction of the process variance to avoid ill conditioning.
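A sketch of the concentrated ("partial") log-likelihood follows, reusing the cov_matrix helper from the previous sketch; the fixed nugget $\lambda$ and the constant regression term are assumptions consistent with the formulas above.

```python
# Sketch of the concentrated ("partial") log-likelihood: only the length
# scales are optimized, while sigma^2 and beta follow from the optimality
# conditions. Reuses cov_matrix from the Kriging sketch above.
import numpy as np

def concentrated_log_likelihood(theta, X, y, lam=1e-6):
    N = len(X)
    Psi = cov_matrix(X, X, theta) + lam * np.eye(N)    # fixed nugget
    F = np.ones(N)
    Psi_inv_y = np.linalg.solve(Psi, y)
    Psi_inv_F = np.linalg.solve(Psi, F)
    beta = F @ Psi_inv_y / (F @ Psi_inv_F)             # optimal regression coefficient
    resid = y - F * beta
    sigma2 = resid @ np.linalg.solve(Psi, resid) / N   # optimal process variance
    sign, logdet = np.linalg.slogdet(Psi)
    return -0.5 * (N * np.log(sigma2) + logdet)        # quantity to be maximized
```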

2.2. Radial Basis Function Network

Radial basis functions (RBF) are a powerful tool for data interpolation and regression. The response depends on the location of the centres and on the Euclidean distance from them. By appropriately defining the centres and weighting the contribution of each RBF, an RBF network is obtained as

$$\hat{y}(\mathbf{x}) = \sum_{i=1}^{N} w_i\,\phi\!\left(\left\|\mathbf{x} - \mathbf{c}_i\right\|;\theta_i\right).$$

Here, the centre of each RBF is $\mathbf{c}_i$ and the weights are $w_i$ (which, in turn, depend on a regularization parameter $\mu$). The amplitude of the radial basis functions is controlled by means of the width parameters $\theta_i$. The RBF kernel $\phi$ may assume various mathematical forms; here, a small set of standard kernel types is considered, with $r$ denoting the Euclidean distance from the centre.

Great emphasis has been put in the RBF literature on the choice of the width parameters. Gutmann [36] showed that both the interpolation error and the conditioning of the solution matrix are highly sensitive to the value of $\theta$: in particular, $\theta$ should be low enough to improve stability; however, the highest approximation accuracy is often found for large values which may lie in the unstable region. As a consequence, a conflict arises between accuracy and stability, sometimes referred to as the "trade-off principle" [37]. This is addressed here by using state-of-the-art optimization techniques and considering a unique scalar width for each RBF centre. The optimization algorithm autonomously chooses the kernel function type and optimizes the width parameters, taking the leave-one-out cross-validation error as the objective function to be minimized. Similar to Tenne and Armfield [23], the procedure works as follows (a sketch is given after this list):

(1) All the kernel functions are trained on the current training set.

(2) The leave-one-out (LOO) error norm $\left\|\mathbf{e}_{LOO}\right\|$ is computed, where the $i$-th component $e_i = y_i - \hat{y}_{-i}\left(\mathbf{x}_i\right)$ is the difference between the value $y_i$ of the function at the training site $\mathbf{x}_i$ and the RBF prediction at $\mathbf{x}_i$ when the model is trained without $\mathbf{x}_i$ and $y_i$. The computation of the $e_i$ terms does not require the training of $N$ RBF models; indeed, they can be computed effortlessly thanks to Rippa's formula [38].

(3) The combination of the RBF kernel and width parameter which gives the lowest LOO error norm is sought by solving, for each kernel, the optimization problem (18): $\min_{\theta}\left\|\mathbf{e}_{LOO}(\theta)\right\|$. Two options are available:
(i) Grid search: a grid of discrete $(\phi,\theta)$ couples is generated, all the possible combinations of parameter values are evaluated, and the best combination (in terms of minimum LOO error norm) is retained.
(ii) Numerical optimization: the minimization problem (18) is solved considering the hyperparameters as continuous variables and using the same algorithms adopted for searching the Kriging hyperparameters.
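The following sketch illustrates Rippa's leave-one-out formula for a regularized RBF system $\left(A + \mu I\right)\mathbf{w} = \mathbf{y}$: the $i$-th LOO error equals $w_i / \left(A^{-1}\right)_{ii}$, so no model retraining is needed. The Gaussian kernel and the helper names are assumptions for illustration.

```python
# Sketch of Rippa's leave-one-out formula for an RBF system: the i-th LOO
# error is w_i / (A^{-1})_{ii}, so the N leave-one-out models never need to
# be trained. gaussian_kernel and the ridge parameter mu are illustrative.
import numpy as np

def gaussian_kernel(r, theta):
    return np.exp(-(r / theta) ** 2)

def loo_errors(X, y, theta, mu=0.0):
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    A = gaussian_kernel(r, theta) + mu * np.eye(len(X))  # regularized system
    A_inv = np.linalg.inv(A)
    w = A_inv @ y
    return w / np.diag(A_inv)        # Rippa's formula, one error per sample

# Width selection: minimize the LOO error norm over theta, e.g.
# theta_best = min(grid, key=lambda t: np.linalg.norm(loo_errors(X, y, t)))
```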

Once the optimal width parameters have been found, the weights have to be computed. A regularization parameter $\mu$ (also known as the ridge regression parameter in the RBF literature) is introduced to avoid overfitting and improve the conditioning of the interpolation matrix. By imposing the interpolation condition [37] on the training set, the weights are the solution of the linear system

$$\left(A + \mu I\right)\mathbf{w} = \mathbf{y},$$

where $A_{ij} = \phi\!\left(\left\|\mathbf{x}_i - \mathbf{x}_j\right\|\right)$ is the interpolation matrix, $\mathbf{w}$ are the unknown RBF weights, and $\mathbf{y}$ are the function values at the training sites. The prediction variance is estimated, in analogy with the Kriging case, as

$$s^{2}(\mathbf{x}) = \sigma^{2}\left[\phi(0) - \boldsymbol{\phi}(\mathbf{x})^{T}\left(A + \mu I\right)^{-1}\boldsymbol{\phi}(\mathbf{x})\right],$$

where $\boldsymbol{\phi}(\mathbf{x})$ is the vector of kernel evaluations between $\mathbf{x}$ and the training sites and $\sigma^{2}$ is a process scale estimated from the data.

3. Sequential Design and Infill Criteria

This section gives details about the sequential enrichment of a surrogate model assisting the optimization task. Once a surrogate model is trained, a basic and simplistic approach would be to optimize the surrogate and find the model minimizers. A further step could be to reevaluate the suggested points with the high-fidelity model, update the training dataset, build a new surrogate, and then optimize again in an iterative manner so as to quickly approach true optimality. However, feeding the surrogate back with no information about its error measure and no tendency to exploration would easily lead the updating process to get trapped in local minima. The weak point lies in considering the model prediction as the only source of "knowledge" for increasing the quality of the approximation ("exploitation"). An advanced solution is obtained by mixing the information coming from the available data, the surrogate prediction, and the estimation of the predictive behaviour away from the training set ("exploration"): this can drive a "smarter" selection of new points and, thus, improve the surrogate.

The aim is to obtain a model that cleverly supports the optimization path, being ideally very accurate near the global/local optimal locations and acceptably rough elsewhere. Of course, the updating strategy has to take into account the specific surrogate model at hand, so that the adaptive sampling criteria should be designed upon its features (e.g., vector or scalar data, availability of an estimated prediction error, and interpolation/regression character). Adaptivity is another key point: new points are inserted depending on the samples and response function values collected so far. The exploration/exploitation trade-off usually drives the adaptation by mixing the contribution of high prediction error areas (exploration) with that of potentially promising regions (exploitation). Indeed, explorative search is a cornerstone of global optimization; however, it may lead to the continuous unveiling of poor regions of the design space; on the other hand, exploitation induces trust in the surrogate prediction, which surely improves the local accuracy but may also lead to being stuck in local minima. Only a proper and balanced combination of both components will be effective in leading to an efficient global optimization.

By way of example, Figure 2 illustrates the addition of a new point obtained, respectively, by employing pure exploitation, pure exploration, and balanced approaches. The picture shows the set of training points (black-filled circles), the true function (solid black line), and the surrogate prediction (dashed black line) built on the training points. Starting from this, a pure exploitation criterion places a new point at the surrogate global minimum, i.e., very close to one of the training points (triangle mark); by pursuing pure exploration, instead, the new point is located very far from the training set (circle mark), so that sampling is performed where the prediction uncertainty is highest; finally, the new point predicted with a balanced exploration/exploitation approach (square mark) is located in an interesting position very close to the true optimum: a new surrogate model trained on the old training set plus the new point will quickly lead to the detection of the global optimum.

Infill criteria are here referred to as means for adding new samples to the training set by designing auxiliary functions for generic surrogate models. Let $\mathbf{x}$ be the design vector which defines a generic location in the design space, $f(\mathbf{x})$ the objective function to be minimized, $X = \left\{\mathbf{x}_1,\dots,\mathbf{x}_N\right\}$ the set of available training points, $Y = \left\{y_1,\dots,y_N\right\}$ the corresponding set of true objective function values, and $\hat{f}(\mathbf{x})$ the response at $\mathbf{x}$ of the surrogate model built on $(X, Y)$. An infill criterion is defined as finding a new sample which maximizes an auxiliary function $P(\mathbf{x})$ called the potential of improvement:

$$\mathbf{x}_{new} = \arg\max_{\mathbf{x}}\, P(\mathbf{x}).$$

Hereinafter, the maximization of $P(\mathbf{x})$ is not achieved by using numerical optimization techniques, but rather by densely sampling the design space (e.g., with a number of samples equal to five hundred times its dimension) with a Latin hypercube sampling method and selecting the point at which the maximum value of $P$ is met (a sketch is given below). Despite the size of the test dataset, the evaluation of $P$ requires limited computational effort, as it involves only the surrogate prediction, which is fast to compute, and the true objective function values already collected. An important point is that, as the test dataset is regenerated at each of the multiple updating iterations, duplication of the selected sample may occur. In order to avoid this, the seed of the Latin hypercube is changed unambiguously at each iteration.
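A minimal sketch of this selection-by-testing procedure is given below, using SciPy's Latin hypercube generator; the seed-offsetting scheme is an illustrative stand-in for the paper's seed-changing rule.

```python
# Sketch of infill selection by dense testing rather than numerical
# optimization: sample the design space with an LHS (500 * n points, per
# the text), evaluate the potential of improvement P, keep the argmax.
import numpy as np
from scipy.stats import qmc

def select_infill(P, bounds, iteration, factor=500):
    """P: callable potential of improvement; bounds: list of (lo, hi)."""
    lo, hi = np.asarray(bounds)[:, 0], np.asarray(bounds)[:, 1]
    n = len(lo)
    sampler = qmc.LatinHypercube(d=n, seed=1234 + iteration)  # unique seed per iteration
    X_test = qmc.scale(sampler.random(factor * n), lo, hi)
    scores = np.array([P(x) for x in X_test])
    return X_test[np.argmax(scores)]
```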

As concerns the type and nature of the potential of improvement function , previous investigations [30] showed that error-driven infill criteria may lead to the intensive exploration of the design space in order to reduce the prediction error, but, conversely, this resulted in a lack of efficiency of the whole optimization process at fixed total computational budget. Hence, in the following section, the discussion will focus on approaches which proved to be more suitable to global optimization. In particular, two sets of criteria will be proposed: the first is based on the factorization of the potential of improvement in order to explicitly realize the trade-off between exploration and exploitation and the second is defined according to the expected improvement concept.

3.1. Factorized Infill Criteria

The first proposed adaptive infill criterion is aimed at combining exploration and exploitation by means of a generic factorization:

$$P(\mathbf{x}) = E(\mathbf{x})\,M(\mathbf{x}).$$

The functions $E(\mathbf{x})$ and $M(\mathbf{x})$ measure the exploration and the model trust (exploitation) contributions, respectively. In particular, the exploration function should estimate how strong the influence of the set of already collected samples on a generic candidate is. One of the preferred approaches is to make the function depend on the Euclidean distance between the generic design space location $\mathbf{x}$ and the nearest element of the training set $X$:

$$E(\mathbf{x}) = d(\mathbf{x}) = \min_{i=1,\dots,N}\left\|\mathbf{x} - \mathbf{x}_i\right\|.$$

On the other hand, the exploitation function should take into account how the surrogate prediction $\hat{f}(\mathbf{x})$ compares with the available set of true objective function values $Y$. In particular, this contribution should put emphasis on trusting the model prediction; hence, the function should exhibit its maxima in correspondence with some identified features of $\hat{f}$ (e.g., minima, discontinuities, or strong local nonlinearities). Of course, different infill criteria can be designed by properly defining the functions $E$ and $M$. A set of choices is presented here.

3.1.1. Leave-One-Out Error Criterion

The leave-one-out error (LOOE) criterion is aimed at identifying the regions where the surrogate model lacks accuracy and is more sensitive to the insertion of new designs. The factorization functions are as follows:

$$E(\mathbf{x}) = d(\mathbf{x}),\qquad M(\mathbf{x}) = e_{LOO}\!\left(\mathbf{x}_{nn}\right),$$

where, given a generic location $\mathbf{x}$, $\mathbf{x}_{nn}$ is the location of the nearest training sample and $e_{LOO}\left(\mathbf{x}_{nn}\right)$ is the leave-one-out error associated with it. This criterion tends to select new candidates around training samples exhibiting the highest LOO error, while the distance factor avoids the clustering of points around specific regions.

3.1.2. Weighted Leave-One-Out Error Criterion

As the LOOE criterion may be too exploratory and ignore the information given by the surrogate model, an alternative is given by weighting the $M$ function with a term which measures the trust in the surrogate prediction. The weighted leave-one-out error (WLOOE) criterion modifies the $M$ function as follows:

$$M(\mathbf{x}) = e_{LOO}\!\left(\mathbf{x}_{nn}\right)\exp\!\left(-w\,\frac{\hat{f}(\mathbf{x}) - y_{min}}{y_{max} - y_{min}}\right),$$

where $w$ is a tuning parameter, $y_{min} = \min Y$, and $y_{max} = \max Y$. This choice of the $M$ function provides two main features:
(1) The value of $M$ approaches the LOOE prediction when $\hat{f}(\mathbf{x})$ approaches $y_{min}$.
(2) For $\hat{f}(\mathbf{x}) < y_{min}$, the value of the function is higher than the LOOE.

With respect to the LOOE criterion, the multiplicative exponential term augments the error-minimizing term with a goal-oriented exploitation term: hence, “bad” candidates (according to the surrogate prediction) will be filtered out, while “good” candidates will be recognized and rewarded with higher rank.

3.1.3. Lipschitz Constant Criterion

This criterion (LC) is aimed at selecting new design samples where the local complexity of the function is high. The Lipschitz constant is considered here as an indicator of the local complexity. Given a domain $D$ and a function $f$ defined in $D$, the Lipschitz constant denotes the smallest constant $k$ in the Lipschitz condition, namely, the nonnegative number

$$k = \sup_{\mathbf{x}_1 \neq \mathbf{x}_2 \in D}\frac{\left|f\left(\mathbf{x}_1\right) - f\left(\mathbf{x}_2\right)\right|}{\left\|\mathbf{x}_1 - \mathbf{x}_2\right\|}.$$

Such a constant is usually employed to bound the nonlinear character of the function $f$: for instance, it gives an upper bound on the number of oscillations of a given amplitude, or it limits the maximum and minimum values a function can assume in a given range. Of course, it has a local character; hence, a function may exhibit subregions with either small or large values of the constant. In the present context, the Lipschitz constant has to be estimated at every possible location within the design space. This is done by computing the $K$-means clusters of $X$ and considering the variation of $f$ between the nearest training sample and the set of all nodes belonging to the cluster. Algorithm 1 details the estimation of the Lipschitz constant.

The effective number of clusters is the result of an iterative cluster solution: indeed, at least two candidates are required in each cluster in order to be able to compute the Lipschitz constant and, in some cases, this may not occur naturally (e.g., strong aggregation of training samples). As a consequence, $K$ is initialized to its nominal value, but it is downgraded every time a single-component cluster is found. According to the algorithm, a Lipschitz constant value $k_i$ is associated with each training sample $\mathbf{x}_i$: at a generic location $\mathbf{x}$, it is assumed that the Lipschitz constant is the same as that of the nearest training sample, i.e., $k(\mathbf{x}) = k_{nn}$, where $\mathbf{x}_{nn}$ is the nearest training sample.

The functions $E$ and $M$ for the LC criterion are defined as

$$E(\mathbf{x}) = 1 - \exp\!\left(-w\,d(\mathbf{x})\right),\qquad M(\mathbf{x}) = \frac{k(\mathbf{x})}{k_{max}},$$

where the division by $k_{max}$ (the largest Lipschitz constant over the training set) has been introduced for normalization purposes and the exponential term provides a tuned decay weight (through the parameter $w$) of the $E$ function. In fact, when the design candidate is near to a training point, the term $1 - \exp\left(-w\,d(\mathbf{x})\right)$ is small and the weight of the local Lipschitz constant is accordingly decreased with respect to a design candidate belonging to the same cluster and located farther away.

1: compute the K-means clusters $C_1, \dots, C_K$ of the training set $X$
2: for all samples $\mathbf{x}_i \in X$ do
3:   say $C(\mathbf{x}_i)$ the cluster containing $\mathbf{x}_i$
4:   for all samples $\mathbf{x}_j \in C(\mathbf{x}_i)$, $\mathbf{x}_j \neq \mathbf{x}_i$ do
5:     compute $k_{ij} = \left|f(\mathbf{x}_i) - f(\mathbf{x}_j)\right| / \left\|\mathbf{x}_i - \mathbf{x}_j\right\|$
6:   end for
7:   set $k_i = \max_j k_{ij}$
8: end for
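A Python rendering of Algorithm 1 may look as follows, assuming the K-means implementation from SciPy; the downgrading loop on $K$ reflects the iterative cluster solution described above.

```python
# Sketch of Algorithm 1: attach a local Lipschitz estimate to each training
# sample using K-means clusters; K is downgraded whenever a cluster with a
# single member appears. Helper names are illustrative.
import numpy as np
from scipy.cluster.vq import kmeans2

def lipschitz_per_sample(X, y, K):
    while K >= 1:
        _, labels = kmeans2(X, K, minit='++', seed=1)
        counts = np.bincount(labels, minlength=K)
        if counts.min() >= 2:           # every cluster has >= 2 members
            break
        K -= 1                          # downgrade K and retry
    k_vals = np.zeros(len(X))
    for i in range(len(X)):
        members = np.where(labels == labels[i])[0]
        members = members[members != i]
        slopes = [abs(y[i] - y[j]) / np.linalg.norm(X[i] - X[j])
                  for j in members]
        k_vals[i] = max(slopes)         # max slope within the cluster
    return k_vals
```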
3.1.4. Weighted Distance Criterion

The weighted distance (WD) criterion is based on the following definitions for the functions $E$ and $M$:

$$E(\mathbf{x}) = d(\mathbf{x}),\qquad M(\mathbf{x}) = \exp\!\left(-w\,\frac{\hat{f}(\mathbf{x}) - y_{min}}{y_{max} - y_{min}}\right),$$

where the nomenclature is the same as described above. Again, the exploitation is given by the exponential term, which gives confidence in the surrogate prediction, while the exploration element is represented by how far the candidate is from the nearest training sample. Analogous to the WLOOE criterion, candidates with a surrogate prediction lower than the current function minimum will be more likely to be selected. However, if they are too close to samples stored in $X$, they will be penalized by the $E$ function. Hence, a trade-off is realized between surrogate prediction and location in the design space (see the sketch below).
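As an illustration, the WD criterion reconstructed above can be evaluated as in the following sketch; the normalization of the surrogate prediction is an assumption consistent with the WLOOE weighting.

```python
# Sketch of the weighted distance (WD) criterion: exploration = distance
# to the nearest training sample, exploitation = exponential weight on the
# normalized surrogate prediction.
import numpy as np

def wd_potential(x, X, Y, surrogate, w=1.0):
    d_nn = np.min(np.linalg.norm(X - x, axis=1))        # exploration term
    y_min, y_max = np.min(Y), np.max(Y)
    u = (surrogate(x) - y_min) / (y_max - y_min)        # normalized prediction
    return d_nn * np.exp(-w * u)                        # exploitation weight
```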

3.2. Expected Improvement-Based Infill Criteria

Another set of infill criteria is defined to accomplish the exploration-exploitation trade-off from a different point of view. The expected improvement algorithm by Schonlau et al. [39] and Jones et al. [1] represents a standard design strategy to add one new sample aimed at achieving the global optimum of the response surface. In the following, the original criterion and some variants of it are presented.

3.2.1. Expected Improvement (EI) Criterion

The sequential algorithm is based on the notion of "improvement" defined by

$$I(\mathbf{x}) = \max\!\left(y_{min} - Y(\mathbf{x}),\, 0\right),$$

where the function $Y(\mathbf{x})$ is here supposed to be a random variable, as for the Kriging model (see equation (1)). After integration with respect to the conditional distribution of $Y(\mathbf{x})$, the expected improvement (EI) function is expressed as

$$EI(\mathbf{x}) = \left(y_{min} - \hat{y}(\mathbf{x})\right)\Phi\!\left(\frac{y_{min} - \hat{y}(\mathbf{x})}{s(\mathbf{x})}\right) + s(\mathbf{x})\,\varphi\!\left(\frac{y_{min} - \hat{y}(\mathbf{x})}{s(\mathbf{x})}\right),$$

where $\hat{y}(\mathbf{x})$ is the Kriging predictor (equation (7)), $s^{2}(\mathbf{x})$ is the corresponding prediction variance (equation (8)), and $\Phi$ and $\varphi$ are, respectively, the cumulative distribution and probability density functions of the standard normal distribution. The search for the global minimum is enriched by finding the point that maximizes the EI function. Two additive contributions are clearly evident:
(i) The first term gives importance to sample points having predicted values much less than $y_{min}$ (thus exploiting the surrogate model).
(ii) The second term enhances samples with high uncertainty about the prediction (thus fostering global search).

Both terms are weighted by the probability measure of the standard normal distribution evaluated at the modified variable $u = \left(y_{min} - \hat{y}(\mathbf{x})\right)/s(\mathbf{x})$: hence, depending on this ratio, the EI landscape is usually characterized by many sharp peaks and wide regions where its value is almost zero. Equation (30) may be used even if the prediction model is not stochastic in nature: for example, the prediction function and the associated variance coming from an RBF network (equations (15) and (21)) may be plugged into it and adopted as the EI infill criterion for choosing new points.
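A sketch of the EI computation is given below; it only requires a prediction and a variance at the candidate point, so it applies equally to the Kriging and the RBF network surrogates.

```python
# Expected improvement (equation (30)) for a candidate with prediction
# y_hat and variance s2; y_min is the best observed objective value.
import numpy as np
from scipy.stats import norm

def expected_improvement(y_hat, s2, y_min):
    s = np.sqrt(max(s2, 1e-16))      # guard against a zero variance
    u = (y_min - y_hat) / s          # standardized improvement
    return (y_min - y_hat) * norm.cdf(u) + s * norm.pdf(u)
```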

3.2.2. EI-Like Criterion

This criterion has been designed to mimic the rationale of the expected improvement criterion, which was originally conceived for a Gaussian process surrogate. The present approach, referred to as "EI-like" hereinafter, represents a generalization of that algorithm: indeed, for a generic surrogate model, the information about the prediction uncertainty may not be available as it is for the Kriging or RBF network models. The idea is to define a model for the prediction error which is theoretically applicable to any function and any surrogate and to link it to the local complexity of the true function. The potential of improvement function is designed to have the same form as the expected improvement function (equation (30)), but here the prediction error is estimated as

$$s(\mathbf{x}) = k(\mathbf{x})\,\bar{d}\left[1 - \exp\!\left(-\frac{d(\mathbf{x})}{w}\right)\right],$$

where $w$ is a tuning parameter, $k(\mathbf{x})$ is the local Lipschitz constant estimated as in Section 3.1.3, and $\bar{d}$ is a reference distance. The function is related to the order of magnitude of the local maximum difference of $f$: indeed, the Lipschitz constant is multiplied by a distance so as to make the quantity dimensionally similar to $f$. The prediction variance has been designed to increase with increasing distance from an available sample. The negative exponential term and the parameter $w$ in it allow adjusting the rate of change of $s$ while moving away from known points: for low values of $w$, the variance quickly increases and plateau-like regions are generated between samples, while for high values of $w$, the rise is milder and a series of hills (midway between samples) and valleys (near samples) is generated. An example will be provided at the end of the section to better explain the landscape of the EI-like function.

3.2.3. Expected Improvement for Global Fit (EIGF) Criterion

A modified version of the expected improvement criterion is considered here, as proposed by Lam [10]. Instead of locating the global optimum, the aim is to add new points in regions with significant variation of the response function, so as to improve the global fit of the model. In other words, the rationale is close to that of the Lipschitz criterion. Similar to the EI criterion and adopting the same notation, the improvement function is here defined as

$$I(\mathbf{x}) = \left(Y(\mathbf{x}) - y\!\left(\mathbf{x}_{nn}\right)\right)^{2},$$

where $\mathbf{x}_{nn}$ is the training site closest (in distance) to $\mathbf{x}$. After taking the expected value of equation (32) and recalling that $\mathbb{E}\left[Y^{2}\right] = \hat{y}^{2} + s^{2}$, the expected improvement for global fit function is derived as

$$EIGF(\mathbf{x}) = \left(\hat{y}(\mathbf{x}) - y\!\left(\mathbf{x}_{nn}\right)\right)^{2} + s^{2}(\mathbf{x}).$$

Again, this potential of improvement function consists of two components, one local and one global. The first (local) is large where the predicted response varies significantly with respect to the nearest sample. The second (global) increases when the uncertainty in the prediction is high, i.e., far from the sampled points.

3.2.4. Generalized EIGF Criterion

Starting from equation (32) and taking the expected value of the squared improvement $I^{2}(\mathbf{x})$, the generalized expected improvement for global fit function is obtained as

$$GEIGF(\mathbf{x}) = \left(\hat{y}(\mathbf{x}) - y\!\left(\mathbf{x}_{nn}\right)\right)^{4} + 6\left(\hat{y}(\mathbf{x}) - y\!\left(\mathbf{x}_{nn}\right)\right)^{2} s^{2}(\mathbf{x}) + 3\,s^{4}(\mathbf{x}).$$

The main difference with respect to equation (33) is that now the change in response and the prediction uncertainty are not separated as they interact with each other. This should drive the search to regions where both terms are important as their combined effect would be amplified.
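Both global-fit criteria reduce to a few lines once the prediction, its variance, and the response at the nearest training site are available; the squared-improvement reading of the generalized criterion in the following sketch is an assumption consistent with the Gaussian moment identity $\mathbb{E}\left[\left(Y - y_{nn}\right)^{4}\right] = d^{4} + 6d^{2}s^{2} + 3s^{4}$.

```python
# EIGF (equation (33)) and its generalization: d is the predicted change
# with respect to the response at the nearest training sample, s2 the
# prediction variance.
def eigf(y_hat, s2, y_nn):
    d = y_hat - y_nn
    return d**2 + s2                          # local term + global term

def geigf(y_hat, s2, y_nn):
    d = y_hat - y_nn
    return d**4 + 6.0 * d**2 * s2 + 3.0 * s2**2   # terms interact here
```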

3.3. Test Example

A simple illustrative example is given here by considering a one-dimensional test function defined on a closed interval. A set of 5 training points, with the associated set of function values, has been extracted by uniformly sampling the design space. Figure 2 shows the function, the surrogate prediction (here, a Kriging model is used), and the training samples. Figure 3 shows the potential of improvement functions obtained by applying the criteria described so far for two values of the tuning parameters. It can be observed that, apart from the LOOE criterion, where no tuning parameter is introduced, the levels of $P$ are globally reduced with increasing values of the tuning parameters; indeed, a different scale of the $y$-axis is used.

The effect of the parameter $w$ is clearly observable by comparing the WLOOE curves: for the lower value of $w$, the function is quite peaky around its maximum, while for the higher value, the peak shifts and the function is null almost everywhere. This occurs because the relative importance of the exploitation term (equation (25)) decreases rapidly with increasing $w$, thus raising the exploration contribution within the factorization (equation (22)). The same behaviour is also observed for the WD criterion. The LC criterion, instead, suffers only a global and uniform damping of the peaks; hence, the location of high-ranking candidates is not altered by changing the tuning parameter.

Figure 4(a) shows the potential of improvement functions for the EI-based criteria. The test function, the number and location of the initial training samples, and the surrogate model are the same as in the previous section. Due to the different orders of magnitude of the function values, two scales are reported on the $y$-axis: the EI and EI-like functions are read on the left axis (ranging from 0.0 to 2.0), while the EIGF and GEIGF functions are read on the right axis (in logarithmic scale). Both the EI and EI-like functions have a global maximum at the same location selected by the WD and WLOOE criteria and a lower secondary peak. The main difference between the two criteria is observable in the region where the true function is almost flat: there, the EI function exhibits three local maxima while the EI-like function is null. To better clarify this, Figure 4(b) shows the prediction variance as provided by the EI criterion (Kriging predictor variance, equation (8)) and by the EI-like criterion (equation (31)). In fact, the EI-like prediction error is smaller where the variation in $f$ is limited and larger where significant gradients are present, while equation (8) depends only on the correlation between points, i.e., on the distance and spatial distribution of the points. The EIGF and GEIGF criteria privilege the same point as the LC and LOOE criteria and show 5 more local maxima (corresponding to the interval end points and to the midpoints between the sample data) that may be selected in successive iterations of the method. This characteristic of placing samples "close to the midpoint" has also been highlighted by Lam, who suggests starting with a small number of points from an LHS sampling in order to feed the EIGF function with a smooth predicted response.

Figure 5 shows the result of 5 repeated updates of the surrogate model with each of the proposed criteria. The 5 new points are depicted as black circles, while the initial training points (the same set for every criterion) are represented by light grey circles. Each plot shows the test function and the surrogate prediction after the infill process. All the proposed criteria manage to capture the location of the global optimum and to minimize the prediction error in the surrounding region. A strong clustering is observed for the WLOOE and WD criteria, a clear effect of the exponential weight built on the surrogate prediction. On the other hand, the LC and LOOE criteria are naturally more explorative, as they place new points where the prediction error is high or where the function derivative is large.

Figure 6 shows the results of 5 iterations with each of the EI-based criteria. The EI-like criterion rapidly detects the global minimum and tends to cluster points around it, thus showing an “optimizer” behaviour. Similar results are obtained with the EI criterion, even if the final location of the global optimum is approximated. As predicted, EIGF and GEIGF provide a rather dispersed distribution of samples; however, a global optimization would take advantage of this as the region around the minimum is well captured.

4. Surrogate-Based Sequential Optimization

The workflow of the surrogate-based optimization is depicted in Figure 7. The method is built around the training database which is progressively fed and updated throughout the surrogate enrichment. Three major stages are conceived and designed, answering different needs in surrogate training and optimization: the space-filling initialization, the adaptive infill, and the sequential optimization infill. Each stage will be discussed and detailed in the following sections.

4.1. Space-Filling Initialization Stage

As a first step, the training database has to be initialized in order to build the first instance of the surrogate model. This stage is aimed at providing basic information about the objective space and selecting samples so as to maximize the informative level. It is usually referred to as a priori sampling because it does not require any detail about the response function. A space-filling design of experiments technique is deemed appropriate to this aim, e.g., Latin hypercube sampling or Latinized centroidal Voronoi tessellation techniques. Typically, according to literature results and the author's experience, the number of initial samples produced in this stage should not exceed one-third of the total computational budget. Moreover, as multiple and explorative infill criteria may be applied in the second stage, the number of initial samples has to be kept as low as possible. Generating multiple training samples all together has the great advantage that they can be evaluated simultaneously; thus, this stage can be executed in parallel to speed up the simulation. Once the evaluation process has finished, the selected surrogate model can be built as described in Section 2.

4.2. Design Exploration via Sequential Adaptive Sampling

The training of the initial surrogate is driven neither by optimization purposes nor by considerations concerning the prediction error. Being a pure space-filling exercise, the aim is to evenly distribute the samples across the design space: as a consequence, the metamodel cannot be as accurate as required by the optimization task, since (1) no control over the prediction errors has been introduced and (2) the proper identification of global minima is not investigated. The second stage of the SBO process (here referred to as the "smart exploration" or sequential adaptive sampling stage) reflects the need to provide the optimizer with an improved and reliable surrogate model. The cycle iterates to update the training database with new samples suggested by the infill criteria described in Section 3. The iterative procedure is structured as follows (a sketch of the batch selection step is given after this list):

(1) Initialization step: the number of new samples to be inserted at each infill iteration is defined, and the infill criterion is chosen among the available ones. Two options are implemented: either a single predefined criterion is used, or the criterion is chosen randomly at each iteration according to a given activation probability for each criterion. The current training sample set $X$ is of size $N$.

(2) Testing step: a large space-filling testing dataset is generated. The number of test samples is $N_{test} = 500\,n$, with $n$ being the dimension of the design space. The Latin hypercube is used with different seeds each time in order to avoid sample duplication issues. According to the selected criterion, the potential of improvement function is evaluated at each point of the testing dataset and a score vector of size $N_{test}$ is obtained.

(3) Infill step: a single-point or multipoint selection is available. In the first case, each infill iteration produces a single candidate to be evaluated; hence, a completely sequential infill approach is realized and the surrogate has to be rebuilt after every new sample. In the second case, multiple samples are simultaneously selected, so that the true function can be evaluated in parallel before refitting the surrogate model; a batch sequential selection is thus performed. The two approaches pose different issues: the single-point approach gives the possibility to update the surrogate many times and to select more criteria during the sequential process, but of course it is much slower; on the other hand, with the multipoint approach, the best scoring candidates would probably cluster in a specific area with no diversity and hamper the surrogate adaptation. To overcome this issue, the following procedure is adopted. The score vector is ranked according to the value of the potential of improvement function. The highest scoring candidate is always selected; if a multipoint selection is requested, it is included in the batch selection set $S$ and the scores of the remaining test samples are reweighted according to the distance penalization introduced by Maljovec et al. [40]. For each test sample, the distance $d_t$ from the nearest sample in $S$ is computed; the mean value $\bar{d}$ of those distances over the testing dataset is then evaluated ($S$ has dimension 1 at this point). A reweighted score is then assigned to each test sample by using a penalization term based on the ratio between $d_t$ and $\bar{d}$, and the new candidate is chosen as the test sample maximizing the reweighted score. The batch selection set $S$ is then increased by the new sample. By repeating the reweighting process and the highest score selection more times, a set of new candidates is obtained and passed to the parallel evaluation with the true model. Note that the new samples are selected by using the same surrogate prediction and the same training set $X$; hence, the associated computational cost is negligible.

(4) New candidate evaluation step: the parallel evaluation of the candidates in $S$ is performed, and the corresponding set of response values is created.

(5) Surrogate update step: the surrogate model is refitted over the updated training set $X \cup S$, and the training set size $N$ is updated.

(6) Check step: if the infill computational budget is not exhausted, a new iteration is started by going back to point (2); otherwise, the sequential infill process is terminated.
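A sketch of the multipoint selection with distance-based reweighting follows; the exact penalty shape used in [40] may differ, so the $\min\left(d/\bar{d},\,1\right)$ term is an assumption.

```python
# Sketch of batch selection with distance-based score penalization in the
# spirit of Maljovec et al. [40]; scores are assumed nonnegative.
import numpy as np

def batch_select(X_test, scores, q):
    batch = [int(np.argmax(scores))]          # highest scoring candidate
    while len(batch) < q:
        # distance of every test sample from the nearest selected sample
        d = np.min(np.linalg.norm(
            X_test[:, None, :] - X_test[batch][None, :, :], axis=-1), axis=1)
        rho = np.minimum(d / d.mean(), 1.0)   # penalize near-batch candidates
        batch.append(int(np.argmax(scores * rho)))
    return X_test[batch]
```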

4.3. Sequential Metamodel Optimization

The last phase of the surrogate optimization process is devoted to exploiting the effort spent so far in fitting an accurate and improved metamodel. The training database of design solutions is here enriched with samples suggested by repeated sequential optimizations on the metamodel: at the end of each optimization, the best candidate is evaluated with the high-fidelity model and appended to the database for a new model fitting. This leads to the generation of a sequence of suboptimal candidates which continuously improves the objective function and approaches the design space region where the "true" optimum resides. The iterative procedure is structured as follows (a skeleton is sketched after this list):

(1) Surrogate minimization step: given the current metamodel $\hat{f}$ built over the training set $X$, the new suboptimal candidate is found by solving the optimization problem $\mathbf{x}_{new} = \arg\min_{\mathbf{x}} \hat{f}(\mathbf{x})$. The optimizer may be chosen among a wide range of options: the in-house genetic algorithm library ADGLIB [41], the CMA-ES algorithm [42], or a combination of global and local algorithms from the open-source library NLopt. Typically, as the metamodel evaluation is very quick, launching even the most computationally demanding algorithm does not represent an issue.

(2) New candidate evaluation step: the new candidate is evaluated with the high-fidelity objective function, and the value of $f\left(\mathbf{x}_{new}\right)$ is obtained.

(3) Surrogate update step: the surrogate model is refitted over the updated training set $X \cup \left\{\mathbf{x}_{new}\right\}$, and the training set size is updated to $N + 1$.

(4) Check step: if the computational budget allocated for sequential optimization is not exhausted, a new iteration is started by going back to point (1); otherwise, the sequential optimization process is terminated.
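This stage reduces to the following skeleton, where fit_surrogate and truth are placeholders for the surrogate construction and the high-fidelity evaluation, and SciPy's differential evolution stands in for the global optimizers (ADGLIB, CMA-ES, NLopt) mentioned above.

```python
# Skeleton of the sequential metamodel optimization stage: minimize the
# current surrogate, evaluate the winner with the truth model, refit.
import numpy as np
from scipy.optimize import differential_evolution

def sequential_optimization(X, y, fit_surrogate, truth, bounds, n_opt):
    for _ in range(n_opt):
        model = fit_surrogate(X, y)             # refit on the current data
        res = differential_evolution(model, bounds, seed=0)
        X = np.vstack([X, res.x])               # append the suboptimal point
        y = np.append(y, truth(res.x))          # high-fidelity evaluation
    return X, y
```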

5. Experiments with Analytical Test Functions

5.1. Experimental Setup

A numerical campaign has been set up to experimentally test the optimization method. Although the method is naturally conceived to reduce the computational load in engineering design cases, in the present section representative test functions for global optimization are used to investigate the algorithm capabilities and limits. A simple and practical application is presented in the next section. Performance is evaluated in terms of the gap $\Delta f = f_{num} - f^{\ast}$, where $f^{\ast}$ is the known function value at the global optimum and $f_{num}$ is the numerical optimum function value found by the algorithm. Each run is launched at a fixed computational budget $N_{tot}$ which, in turn, is a function of the design space dimension $n$. The total budget is the sum of the single-stage contributions (space-filling initialization, adaptive infill, and sequential optimization), with 50% of the total devoted to the adaptive exploration: this is not surprising, as goal-oriented infill criteria are used in combination with error-driven infill criteria to realize a high-level exploration-exploitation trade-off. The test functions are described in the following sections.

5.1.1. Ackley Function

The Ackley function is defined as

$$f(\mathbf{x}) = -a\exp\!\left(-b\sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^{2}}\right) - \exp\!\left(\frac{1}{n}\sum_{i=1}^{n}\cos\left(c\,x_i\right)\right) + a + e,$$

where $a = 20$, $b = 0.2$, and $c = 2\pi$. It is characterized by a nearly flat outer region, where many local minima are located, and a large hole at the centre. The function is continuous, nonconvex, multimodal, and scalable on the $n$-dimensional space. The input domain is $x_i \in \left[-32.768,\, 32.768\right]$ for all $i$. The function has one global minimum, $f(\mathbf{x}^{\ast}) = 0$ at $\mathbf{x}^{\ast} = (0,\dots,0)$.

5.1.2. Michalewicz Function

The Michalewicz function is defined as

$$f(\mathbf{x}) = -\sum_{i=1}^{n}\sin\left(x_i\right)\sin^{2m}\!\left(\frac{i\,x_i^{2}}{\pi}\right),$$

where $m = 10$. The function features alternating steep valleys and ridges and has $n!$ local minima. It is continuous, nonconvex, multimodal, and scalable on the $n$-dimensional space. The input domain is $x_i \in \left[0, \pi\right]$ for all $i$. As the global minimum and its location vary with the input dimension, Table 1 reports the corresponding values for the dimensions considered here.

5.1.3. Rastrigin Function

The Rastrigin function is defined as

$$f(\mathbf{x}) = 10n + \sum_{i=1}^{n}\left[x_i^{2} - 10\cos\left(2\pi x_i\right)\right].$$

The function is continuous, nonconvex, and scalable on the $n$-dimensional space; it has several local minima and is highly multimodal. However, the locations of the minima are regularly distributed. There is one global minimum, $f(\mathbf{x}^{\ast}) = 0$ at $\mathbf{x}^{\ast} = (0,\dots,0)$. The function is evaluated on the hypercube $x_i \in \left[-5.12,\, 5.12\right]$ for all $i$.

5.1.4. Schwefel Function

The Schwefel function is defined as

$$f(\mathbf{x}) = 418.9829\,n - \sum_{i=1}^{n} x_i \sin\!\left(\sqrt{\left|x_i\right|}\right).$$

The function is continuous, nonconvex, multimodal, and scalable on the $n$-dimensional space. Many local minima are irregularly distributed in the input space, and one global minimum is present. Moreover, the local minimum that is nearest to the global minimum in the objective space is the farthest from it in the input space. These features make the Schwefel function very hard to solve by an approximate approach. The function is evaluated in the hypercube $x_i \in \left[-500,\, 500\right]$ for all $i$, and the global minimum is $f(\mathbf{x}^{\ast}) = 0$ at $x_i^{\ast} = 420.9687$.
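For reference, the four scalable test functions in their standard forms are collected in the following sketch; the definitions match the formulas given above.

```python
# Standard definitions of the four scalable test functions.
import numpy as np

def ackley(x, a=20.0, b=0.2, c=2.0 * np.pi):
    x = np.asarray(x); n = x.size
    return (-a * np.exp(-b * np.sqrt(np.sum(x**2) / n))
            - np.exp(np.sum(np.cos(c * x)) / n) + a + np.e)

def michalewicz(x, m=10):
    x = np.asarray(x); i = np.arange(1, x.size + 1)
    return -np.sum(np.sin(x) * np.sin(i * x**2 / np.pi) ** (2 * m))

def rastrigin(x):
    x = np.asarray(x)
    return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

def schwefel(x):
    x = np.asarray(x)
    return 418.9829 * x.size - np.sum(x * np.sin(np.sqrt(np.abs(x))))
```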

5.1.5. Details of Surrogate Models under Testing

Although only two surrogate models are involved (the RBFN and the Kriging model), some differences may arise according to the method chosen for training, i.e., for selecting good values of the hyperparameters. Table 2 shows the four types of surrogates that have been used for testing purposes. The search for the hyperparameters must have a global character, as the associated cost function may often have more than one extremum, and being trapped in a local minimum/maximum may significantly impact the model accuracy. The hyperparameter bounds are fixed as follows:
(i) For RBFN models: the width parameters are bounded by $d_{min}$ and $d_{max}$, the minimum and maximum Euclidean distance, respectively, between training samples.
(ii) For Kriging models: the length scales are bounded by fractions and multiples of $\bar{d}$, the average Euclidean distance between training samples.

5.2. Results and Discussion

As the surrogate-based optimization package discussed so far offers several choices, this section is dedicated to the investigation of the effect of selecting among the available alternatives. In particular, the influence of the type of surrogate model, infill criteria, and computational budget allocation will be studied and discussed.

5.2.1. Influence of Surrogate Model Selection

As mentioned in Section 5.1.5, four types of surrogates can be trained according to the mathematical nature (Gaussian or RBF) and optimal hyperparameter selection. All models have undergone a first experimental campaign to underline differences and draw up a ranking. The EI, EI-like, and WLOOE criteria are activated with probabilities of 50%, 30%, and 20%, respectively.

Figures 8–11 show the boxplot distributions of $\Delta f$ over 5 repetitions for each surrogate and dimension. Outliers are highlighted by grey-filled circles, and all repeated points are plotted as small grey circles. The Gaussian-based models clearly outperform the RBFN on average, except for the Ackley function. Table 3 reports a summary of the optimization results, highlighting in bold the best performing model for each case. By also taking into account the standard deviation of $\Delta f$ for ranking purposes, Gaussian-based approaches seem to be more robust in that they offer better results with less variability. For the RBFN, optimizing the hyperparameters instead of grid searching is by far the preferred approach. For Kriging, there is no clear evidence about the best training strategy, as krig-hp and krig-all show similar performances and trends.

In order to provide a more comprehensive insight into the optimization data, Figures 12–15 depict the evolution of the Pearson correlation coefficient for all cases and for the rbf and krig-hp models. In the present case, it is used to compute the correlation between the samples $y_i$ of the "true" objective function and the leave-one-out predictions $\tilde{y}_i$ by the surrogate. It is defined as

$$r = \frac{\sum_{i=1}^{N}\left(y_i - \bar{y}\right)\left(\tilde{y}_i - \bar{\tilde{y}}\right)}{\sqrt{\sum_{i=1}^{N}\left(y_i - \bar{y}\right)^{2}}\sqrt{\sum_{i=1}^{N}\left(\tilde{y}_i - \bar{\tilde{y}}\right)^{2}}},$$

where overbars denote mean values.

Each plot does not start from zero evaluations because the first training of the surrogate is done only after the initial space-filling samples have been computed. It can be observed that all simulations start with a low correlation between data and surrogate prediction. This is much more evident for the high-dimensional function cases, where negative correlation is even found in some instances. As a general consideration, all methods manage to reach very high correlation at the end of the optimization process, but along different paths. In particular, Gaussian-based approaches present a constant and continuous increase of $r$ throughout both the adaptive sampling phase and the sequential optimization process. On the other hand, RBF methods tend to stagnate during the infill phase, while the correlation improves with the addition of the last points. This would seemingly suggest that infill criteria are not effective in reducing the prediction error of RBF methods. However, it should also be considered that the trends are quite the opposite when considering only the Ackley function; hence, the test function characteristics play a major role. It follows that a more extended suite of test functions should be considered for further investigation.

5.2.2. Influence of the Infill Criterion

This section is devoted to testing different infill criteria, and combinations of them, with the aim of assessing which infill solution is most capable of providing an improved surrogate function from the optimizer's point of view. In order to ease the interpretation of the results, two models (krig-hp and rbf) and a single test function (the Schwefel function at a fixed dimension) have been used. 16 infill strategies have been defined, and Table 4 summarizes all of them. The first 9 strategies exploit each single criterion without combination. The last 7 strategies have been designed by coupling a factorization-based criterion with an expected improvement-based one. 10 repetitions have been carried out for each infill strategy in order to extract reliable mean values. The optimization setup is identical to that of the previous section. Results are shown in Figure 16 as boxplot distributions and reported in Table 5 in terms of mean and standard deviation of $\Delta f$ (over the 10 repetitions). By considering both mean values and outliers, the best strategies are 3, 5, 6, 9, 10, and 11 for the Kriging model and 0, 2, 6, 8, 9, and 13 for the RBFN model. Thus, as a general consideration, Gaussian-based models benefit from a blend of expected improvement-based criteria (EI, EI-like, EIGF) and error-driven ones (mainly LOOE and WLOOE). Conversely, RBFN models work well with global fit-oriented criteria (EIGF, GEIGF, and LC) and pure EI search.

5.2.3. Influence of the Computational Budget Allocation

In this section, the influence of the optimization setup is investigated in terms of the computational budget allocated to the infill and sequential optimization phases, the total budget $N_{tot}$ being fixed. Table 6 reports the base setup used so far as setup no. 1, while setup no. 2 refers to the new one to be studied. The main difference is a more extended adaptive sampling search at the expense of the optimization iterations, which are reduced. Moreover, a more pronounced emphasis is put on the infill criteria specifically devoted to global optimization purposes (EI and EI-like). Results obtained with the new setup are depicted in Figures 17–20. All test functions have been considered, but only the best models from the previous investigations (namely, krig-hp and rbf) have been used. The pictures are the analogues of Figures 8–11 and must be compared with them in order to draw conclusions. In particular, a general worsening of the algorithm performance is observed with the new setup. Table 7 shows the new results and compares them with setup 1; bold characters underline improvements with respect to setup 1. Setup 2 is beneficial in reducing the standard deviation in several cases, but on average it offers a meaningful improvement only for the Rastrigin function. This observation confirms that results may vary significantly depending on the selection of the set of test functions as well as on the methodology and setup.

6. Aerodynamic Shape Optimization Problem

In order to test the present approach on a real-world and representative case, a benchmark problem has been selected from those proposed within the AIAA Design Optimization Discussion Group (ADODG), namely, the RAE 2822 airfoil optimization case. This section proposes a critical analysis of the results obtained and represents a continuation of a previous work [31], which will be taken here as a reference.

6.1. RAE 2822 Airfoil Shape Optimization

The shape optimization problem is formulated as a drag reduction task (total drag coefficient $C_d$) while keeping the lift level (lift coefficient $C_l$) fixed and constraining the pitching moment coefficient $C_m$ (trim control) and the minimum airfoil area. It reads as follows:

$$\min_{\mathbf{x}}\ C_d(\mathbf{x})\quad\text{subject to}\quad C_l = 0.824,\qquad \left|C_m\right| \leq 0.092,\qquad A(\mathbf{x}) \geq A_0,$$

where $\mathbf{x}$ is the generic design vector representing an airfoil shape, $A$ is the total area enclosed by the airfoil, and $A_0$ is the corresponding value for the baseline RAE 2822 airfoil. The lift constraint is explicitly satisfied by performing the flow simulation at fixed lift, i.e., by varying the angle of attack. The pitching moment and the geometric constraints are treated by using a penalization approach; hence, the objective function is defined as the drag coefficient normalized by the drag coefficient $C_{d,0}$ of the baseline RAE 2822 airfoil, augmented with penalty terms that activate only when the constraints are violated. According to this definition, the baseline airfoil has a unit objective function value, while feasible design solutions "better" than the baseline shape will be rewarded with values below unity. A unit airfoil chord is assumed, the pitching moment is evaluated at the quarter-chord, the Mach number is 0.734, and the Reynolds number is $6.5 \times 10^{6}$.
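One plausible form of the penalized objective is sketched below; the paper does not report the exact penalty weights, so the multiplier p and the penalty shapes are assumptions.

```python
# One plausible penalized objective (not necessarily the paper's exact
# formulation): normalized drag plus terms that activate only when the
# pitching moment or area constraints are violated.
def objective(cd, cm, area, cd_base, area_base, cm_max=0.092, p=100.0):
    f = cd / cd_base                                    # = 1 for the baseline
    f += p * max(0.0, abs(cm) - cm_max)                 # moment penalty
    f += p * max(0.0, (area_base - area) / area_base)   # area penalty
    return f
```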

6.1.1. Parameterization

In the reference paper [31], the Class-Shape Transformation (CST) by Kulfan [43] was used to make the RAE 2822 shape parametric with 14 design variables. In the present work, the capabilities of the open-source SU2 code [44–46] are exploited and the free-form deformation (FFD) approach is used to arbitrarily change the geometry and, consequently, the volume mesh. In particular, 20 FFD control points (CPs) are defined around the RAE 2822 airfoil, as depicted in Figure 21, and the vertical displacements of the control points are employed as design variables. The two CPs on the leading edge and the two on the trailing edge are each constrained to share the same displacement, so the total number of design variables is reduced to 18.

6.1.2. Optimization Studies

Three optimization studies were already performed in [31], employing different methods and computational loads. Details are reported in Table 8, together with information about the present optimization studies. In all cases, RANS simulations are launched to evaluate the objective function when high fidelity is required, e.g., to validate the samples suggested by the infill criteria or by optimizing on the surrogate. In practice, the CFD approach is different: the reference cases employed the in-house multiblock structured ZEN code [47], the TNT turbulence model, and a fine mesh selected after a detailed mesh convergence analysis; as already mentioned, the new cases take advantage of the unstructured SU2 suite running with the Spalart-Allmaras turbulence model on a coarser mesh. Hence, any comparison between cases should take these differences into account.

SBOSA (Surrogate-Based Optimization with Sequential Adaptation) was similar to the present approach, but considered a lower-dimensional space (14 instead of 18 design variables, CST parameterization instead of FFD) and a surrogate model based on proper orthogonal decomposition (POD) and RBF interpolation of the POD coefficients. EGO (Efficient Global Optimization) refers to the classical Kriging-based approach made popular by Jones et al. [1] and implemented within the Dakota package [48]. PGA (Plain Genetic Algorithm) consisted in a pure, intensive evolutionary optimization where no surrogate model was used and all evaluations were performed with the high-fidelity CFD approach. Here, two further approaches are added for benchmarking: SU2AO and SU2SBO. SU2AO uses the adjoint gradient-based optimization method embedded within the SU2 code, which includes the flow solution, continuous adjoint flow solution, geometry modification, mesh deformation, and gradient-based optimization (SLSQP method). SU2SBO, instead, combines the presented surrogate-based approach with the SU2 tools for flow solution, airfoil shape modification, and mesh deformation. Table 9 reports the setup used for this simulation, which reflects the experience gained with the test functions in the previous sections.

As for Table 8, the number of CFD evaluations needed by the SU2AO approach includes the calls to the adjoint flow solver (one for each objective/constraint, 3 in total according to the problem statement in Section 6.1), as each adjoint evaluation has approximately the same cost as a single CFD solution. However, the number of gradient evaluations (and, hence, of calls to the adjoint flow solver) is smaller than the number of CFD solutions (30 out of 70), as the gradient vector is not updated at each iteration.

Figure 22 shows the progress of the minimum value of the objective function found by the sequential optimizations SU2SBO and SU2AO. A logarithmic scale is used to make the picture clearer. The local approach (SU2AO) is much faster in reaching feasible and optimal regions of the design space; indeed, the objective function drops to approximately 30% of the baseline value after only 30 design cycles. The global approach (SU2SBO) takes more CFD effort to accurately train the surrogate model: after the initial design space sampling by LHS (180 samples), the objective function has decreased by just 8%. From that point on, the global approximation starts to predict the objective function landscape and the adaptive sampling provides new insight into unknown regions: their combined effect causes a significant performance improvement at around 300 design solutions. The last 100 solutions are sequentially introduced into the database by optimizing the surrogate with an in-house genetic algorithm [41]. A further drop is observable in the very last stages, which pushes the obtained performance beyond the SU2AO best result.

Figure 23 depicts a geometric and aerodynamic comparison of the baseline and optimized airfoil shapes. The reduction of the shock wave intensity and the consequent improvement of the boundary layer behaviour past the flow discontinuity are obtained by the redesign of the front airfoil region (to increase the flow expansion peak), of the whole pressure side (to recover lift and pitching moment), and of the rear part of the suction side (to control the shock). Similar results were found in the reference paper. Figure 24 depicts the Mach number contour map and shows how the supersonic region has been greatly reduced and moved upstream. Finally, Table 10 provides quantitative figures for the optimal solutions from the reference paper and the present simulations. The baseline airfoil drag coefficients obtained with both the ZEN and SU2 codes are reported as they represent the reference performance for the corresponding cases: indeed, the percentage change is also shown in the rightmost column. The present results are fully in line with previous studies, as the optimized airfoils satisfy all constraints and feature drag reductions of the order of 35%-40% with respect to the baseline shape.

7. Conclusions

A framework for surrogate-assisted optimization has been presented, featuring design exploration, adaptive sampling, and sequential global search. A series of infill criteria have been proposed to adaptively choose new points to be added to the surrogate database: most of them pursue the exploration-exploitation balance and are aimed at providing an improved version of the surrogate to the final optimization stage. A new feature has been added which allows triggering several criteria during the sampling phase according to activation probabilities. Gaussian-based and radial basis function network models are used as surrogates of the objective function.

Two experimental test campaigns have been performed: the first involved analytic test functions with a multidimensional and scalable character in order to quickly analyze the performance of the method, while the second is a continuation of a previous work, as it deals with the aerodynamic shape optimization of an airfoil profile in transonic viscous flow conditions. Most of the aerodesign functionalities of the open-source SU2 package have been exploited. Results have been compared to the reference and to an adjoint gradient-based optimization. The work confirmed that approximate solutions, close to the global optima, can be found with the proposed surrogate-assisted optimization. Simulations with the test functions highlighted that the proper combination of surrogate and infill criteria can be quite sensitive to the function type and suggested that, with multiple surrogates and criteria available, the best option is probably to exploit them in an ensemble approach. This will be the main topic of a future research study.

Concerning the aeroshape optimization case, the achievements are in line with previous investigations, as all the basic (geometric and aerodynamic) features of the optimal shape have been captured by the present approach. In the context of global, real-world optimization, the adoption of surrogate models is essential to reduce the computational burden, and the proposed example clearly indicates the benefits associated with their usage in terms of the cost/performance trade-off. Further applications will deal with more complex cases, e.g., three-dimensional benchmark problems defined within the AIAA Aerodynamic Design Optimization Discussion Group and the interference drag reduction of a wing/pylon/nacelle configuration.

Data Availability

The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request.

Conflicts of Interest

The author declares that there is no conflict of interest regarding the publication of this paper.