Abstract
The variance-based global sensitivity analysis technique is robust, has a wide range of applicability, and provides accurate sensitivity information for most models. However, it requires input variables to be statistically independent. A modification to this technique that allows one to deal with input variables that are block-wise correlated and normally distributed is presented. The focus of this study is the application of the modified global sensitivity analysis technique to calculations of reactor parameters that depend on group-wise neutron cross-sections. The main effort in this work is in establishing a method for a practical numerical calculation of the global sensitivity indices. The implementation of the method involves the calculation of multidimensional integrals, which can be prohibitively expensive to compute. Numerical techniques specifically suited to the evaluation of multidimensional integrals, namely, Monte Carlo and sparse grid methods, are used, and their efficiency is compared. The method is illustrated and tested on a two-group cross-section-dependent problem. In all the cases considered, the results obtained with sparse grids achieved much better accuracy while using a significantly smaller number of samples. This aspect is addressed in a mini-study, and a preliminary explanation of the results obtained is given.
1. Introduction
The apportioning of uncertainty in the output of a model (numerical or otherwise) to different sources of uncertainty in the model input is known as sensitivity analysis [1], and the associated quantitative values are known as sensitivity indices. The sensitivity indices can be used to rank the input variables of the model, based on the influence they have on the output. It thus becomes possible to recognize the probabilistically insignificant/unessential input variables that exert little influence on the output. This allows for the reduction of the dimensionality of the problem by fixing the unessential input variables, whilst more experiments, computations, research, and so forth can be done to determine the essential input variables with a higher degree of accuracy.
The focus of this study will be on global sensitivity analysis (GSA), which explores the full phase space of input parameters, as opposed to local sensitivity analysis (LSA) methods, which are usually based on derivatives and analyse the behaviour of the model output around a chosen point. GSA can be implemented using either variance-based [1–3] or entropy-based [4, 5] methods. In our study, we will use Sobol's variance-based method [3]. This method is referred to as "variance-based" because, within the framework of this approach, the uncertainty of the output is characterized by its (output) variance. Sobol's method is robust, has a wide range of applicability, and, as stated in [6, 7], provides accurate sensitivity information for most models. However, Sobol's method is defined for mutually independent input variables that are uniformly distributed. A modification of the method which allows one to deal with input variables that are block-wise correlated and normally distributed is presented in this work.
The modified method can then be applied to nuclear reactor calculations. Many reactor parameters of interest (such as the neutron multiplication factor, decay heat, reaction rates, etc.) depend on neutron cross-sections. These cross-sections are often described by only their first two statistical moments and are assumed to be normally distributed [8]. The uncertainties associated with the cross-sections are propagated to the final result of the calculated reactor parameters, and the uncertainty in a calculated reactor parameter can be apportioned to the different sources of uncertainty in the neutron cross-sections.
In this paper, we present a method of global sensitivity analysis that addresses the limitations and assumptions mentioned above, with an emphasis on the numerical/calculational aspects of implementing the method. The rest of the paper is organised as follows. Section 2 contains two major parts: we first give the theoretical background and some mathematical derivations for the method we present, and in the second part of the section we discuss its practical numerical implementation. The theory description is supported by two appendices: Appendix A summarizes the definitions and properties of the functional ANOVA decomposition, and Appendix B provides explanations concerning the sparse grid integration method. In Section 3, we describe the particularities of our implementation of the proposed method, the problem we use to test and characterise the method, and the results obtained. Finally, Section 4 presents our conclusions.
2. Method
2.1. Definitions and Assumptions
Consider a problem in which some important reactor parameters, such as the neutron multiplication factor and the decay heat, depend on multigroup or few-group neutron cross-sections. We will use y to denote the reactor parameter of interest and x_1, x_2, ..., x_n to denote the cross-sections. The dependence of the parameter of interest on the cross-sections can be written as a model y = f(x_1, x_2, ..., x_n), (1) where x_1, ..., x_n are called inputs and y is called the output or response. Model (1) is generally nonlinear and, in practice, often calculated numerically.
The cross-sections can be gathered in a column vector x = (x_1, x_2, ..., x_n)^T, where the symbol "T" denotes the operation of transposing a row to a column. If the input X is a random vector with a joint probability density function p(x), then the response Y = f(X) is a random variable with the expected value and the variance defined correspondingly as E[Y] = ∫ f(x) p(x) dx, D = ∫ (f(x) − E[Y])^2 p(x) dx. (2) Note that we will use, as is the rule in statistics, a capital letter to denote a random variable and a lowercase letter to denote its values (realizations).
In this work, we will assume that the cross-sections are random variables distributed according to the normal law with known means and covariances. The multivariate normal distribution is characterized by the probability density function [9] p(x) = (2π)^(−n/2) |C|^(−1/2) exp(−(1/2)(x − μ)^T C^(−1) (x − μ)), (3) where x is the column vector of random variable values, μ is the column vector of their expected values, and C is the covariance matrix.
2.1.1. Block-Correlated Random Variables
Let us assume that the input vector can be partitioned into m subsets of variables, that is, x = (x^(1), x^(2), ..., x^(m)), and that the random vectors X^(i) and X^(j) from this partitioning are mutually independent for i ≠ j.
Using the definition of a covariance matrix, one can show [9] that in this case cov(X^(i), X^(j)) = 0 for i ≠ j. Hence, the covariance matrix becomes block diagonal, that is, C = diag(C_1, C_2, ..., C_m), where C_i is the covariance matrix of X^(i). The inverse of a block diagonal matrix is another block diagonal matrix, composed of the inverses of each block, that is, C^(−1) = diag(C_1^(−1), ..., C_m^(−1)). Moreover, taking into account that for block matrices |C| = |C_1| · |C_2| · ... · |C_m|, one can write the expression for the joint probability density function defined in (3) in a form that reflects the block independence of variables: p(x) = ∏_{i=1}^{m} p_i(x^(i)), (4) where p_i(x^(i)) is the joint probability density function of the subset x^(i), of the form (3) with dimension n_i, and n_i is the number of variables in x^(i).
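As a small numerical illustration of this block structure, the following Python sketch (the paper's own implementation is in Fortran 90; the covariance values here are made up for the demonstration) assembles a block-diagonal matrix from two hypothetical 2x2 blocks and checks that its determinant equals the product of the block determinants:

```python
def det2(m):
    """Determinant of a 2x2 matrix given as [[a, b], [c, d]]."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def block_diag(b1, b2):
    """Assemble a 4x4 block-diagonal matrix from two 2x2 blocks."""
    z = [[0.0, 0.0], [0.0, 0.0]]
    top = [row1 + rowz for row1, rowz in zip(b1, z)]
    bottom = [rowz + row2 for rowz, row2 in zip(z, b2)]
    return top + bottom

def det4(m):
    """Determinant of a 4x4 matrix by cofactor expansion (adequate for a demo)."""
    def det3(a):
        return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
                - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
                + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))
    total, sign = 0.0, 1.0
    for col in range(4):
        minor = [[m[r][c] for c in range(4) if c != col] for r in range(1, 4)]
        total += sign * m[0][col] * det3(minor)
        sign = -sign
    return total

sigma1 = [[4.0, 1.0], [1.0, 3.0]]   # hypothetical covariance block of subset 1
sigma2 = [[2.0, 0.5], [0.5, 1.0]]   # hypothetical covariance block of subset 2
sigma = block_diag(sigma1, sigma2)

# |C| equals the product of the block determinants, as used in (4)
print(det4(sigma), det2(sigma1) * det2(sigma2))   # both close to 19.25
```

The same factorization holds for the inverse, which is why the joint density in (4) splits into a product over the blocks.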
2.2. Global Sensitivity Analysis
The variance-based global sensitivity analysis method aims to quantify the relative importance of each input parameter in the response variance. It involves the calculation of the global sensitivity indices, sometimes called Sobol's sensitivity indices [2, 10].
In order to describe the global sensitivity indices, let us introduce the following notation: let U = {1, 2, ..., n} be the set of input variable indices and let u ⊆ U be an arbitrary subset of it. Then x_u is the subset of variables whose indices are in u, whereas x_ū are the complementary variables, that is, the variables with indices not in u. The notation |u| will be used for the cardinality of the set u. The variables from the nonoverlapping sets x_u and x_ū constitute the input vector x = (x_u, x_ū).
Let us consider a subset x_u of input variables. Two types of sensitivity indices of the model response to the input random variables can be introduced: (i) the main effect sensitivity index S_u, which describes the fraction of the variance of the output that is expected to be removed if the true values of the variables x_u become known; (ii) the total sensitivity index S_u^tot, which can be interpreted as the fraction of the variance of the output that is expected to remain if the true values of the complementary variables x_ū become known.
In other words, S_u represents the effect due to x_u only, and S_u^tot represents the contribution of x_u to the variance together with all the interactions of x_u with the other variables.
The definition of sensitivity indices and their theoretical justification comes from functional ANOVA (analysis of variance). In Appendix A, we summarize formulae of the functional ANOVA decomposition, assuming that inputs are independent random variables with arbitrary continuous distributions.
Sobol [2, 3] introduced an alternative way of calculating the sensitivity indices by sampling directly from f(x), that is, without passing through the ANOVA decomposition. Sobol's alternative formulae are valid for uniformly distributed, independent random variables. Generalizing this result for continuous independent random variables with an arbitrary probability density function p(x), one can write: D = ∫ f^2(x) p(x) dx − E^2[Y], (5) D_u = ∫∫ f(x) f(x_u, x'_ū) p(x) p_ū(x'_ū) dx dx'_ū − E^2[Y], (6) D_u^tot = (1/2) ∫∫ (f(x) − f(x'_u, x_ū))^2 p(x) p_u(x'_u) dx dx'_u. (7) Here, the prime symbol over a variable (e.g., as in x'_ū) means that this variable has to be sampled independently from the corresponding marginal distribution (p_ū in this case) of its unprimed analogue. Using the results from (5)–(7), the global sensitivity indices can be calculated as the ratios: S_u = D_u/D, S_u^tot = D_u^tot/D. (8) Note that E[Y] and D correspond to the output mean and the output variance introduced in (2).
The independence condition for the input variables can be relaxed. As discussed in [11], it is not necessary that all variables are mutually independent: the result holds when assuming independent blocks of input variables X^(i) instead of single independent input variables X_i. Thus, if the subsets of variables x_u and x_ū are mutually independent, that is, cov(X_u, X_ū) = 0, the sensitivity analysis formulae (6) and (7) are still applicable. Moreover, as one can see from (5), the formula for the output variance does not explicitly involve any particular subset of input variables. As a result, the variance of the output can be calculated with the method presented here even in the case when all input variables are correlated. Since the variance is used to characterise the uncertainty in the output due to the uncertainty of the input, the method from this paper can be used for uncertainty analysis regardless of whether the normally distributed inputs are correlated or not.
As follows from the previous description, the evaluation of the sensitivity indices requires the calculation of the integrals in (5)–(7), which can be written in the following general form: I[g] = ∫ g(x) p(x) dx, (9) where I is the integration operator, g represents the function being integrated, d is the effective dimensionality of the integral, and p(x) is the joint probability density function of the integration variables. For instance, in integral (6), the function g represents the product f(x) f(x_u, x'_ū), and the effective dimensionality is d = n + |ū|.
2.3. Standard Normal Law Representation
Though the block-wise representation (4) of the joint probability density function (3) allows one to exploit the independence of different subsets of variables, it gives no information about the practical way of calculating a sensitivity index. It is convenient to rewrite the expression in the so-called standard form in order to simplify the subsequent numerical evaluation of the global sensitivity indices.
Since the covariance matrices C_i are both symmetric and positive definite, for each C_i there is a nonsingular lower-triangular matrix L_i such that C_i = L_i L_i^T (Cholesky factorization). Consider the linear transformation x^(i) = μ^(i) + L_i z^(i). For any i, it leads to (x^(i) − μ^(i))^T C_i^(−1) (x^(i) − μ^(i)) = (z^(i))^T z^(i), and one can show that E[Z^(i)] = 0, cov(Z^(i), Z^(i)) = I, where I is the identity matrix. Since |C_i| = |L_i|^2, the joint probability density function can be written in the standard form: p_Z(z) = (2π)^(−n/2) exp(−z^T z/2) = ∏_{k=1}^{n} φ(z_k), (11) where φ(z) is the standard normal density. The new standard random variables Z_1, ..., Z_n have zero mean, standard deviations equal to one, and are not correlated, that is, cov(Z_k, Z_l) = 0 for k ≠ l.
Representation (11) can now be used for the calculation of the sensitivity indices: the variables z_k can be sampled individually from φ(z), and the corresponding points x can be calculated as x^(i) = μ^(i) + L_i z^(i), i = 1, 2, ..., m, (12) where i goes over all subsets of x. Nevertheless, in order to simplify the sampling procedure, to allow the use of a single calculational path, and to make a wider range of numerical integration techniques suitable for solving the problem, we perform one more transformation, from the normally distributed variables to uniformly distributed ones.
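The sampling step of (12) can be sketched in a few lines of Python (the paper's implementation is in Fortran 90; the mean vector and 2x2 covariance block below are hypothetical, and the Cholesky factor is hand-coded for the 2x2 case):

```python
import math
import random

def cholesky2(c):
    """Lower-triangular L with L L^T = c, for a 2x2 covariance matrix c."""
    l11 = math.sqrt(c[0][0])
    l21 = c[1][0] / l11
    l22 = math.sqrt(c[1][1] - l21 * l21)
    return [[l11, 0.0], [l21, l22]]

def transform(mu, L, z):
    """Map a standard-normal point z to a correlated point x = mu + L z."""
    return [mu[i] + sum(L[i][j] * z[j] for j in range(2)) for i in range(2)]

random.seed(1)
mu = [10.0, 5.0]                           # hypothetical block mean
cov = [[0.04, 0.01], [0.01, 0.09]]         # hypothetical covariance block
L = cholesky2(cov)

samples = [transform(mu, L, [random.gauss(0, 1), random.gauss(0, 1)])
           for _ in range(50000)]
mean0 = sum(s[0] for s in samples) / len(samples)
var0 = sum((s[0] - mean0) ** 2 for s in samples) / len(samples)
print(round(mean0, 2), round(var0, 3))   # should be close to 10.0 and 0.04
```

In the full method, this transformation is applied block by block, one Cholesky factor L_i per independent block.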
Consider the following coordinate-wise change of variable from z to ξ: ξ = Φ(z), where Φ(z) is the cumulative distribution function of the standard normal distribution. From the properties of Φ it follows that ξ ∈ [0, 1] and that ξ is uniformly distributed on [0, 1]. Applying this transformation coordinate-wise (i.e., ξ_k = Φ(z_k) for k = 1, ..., d) and introducing h(ξ) = g(x(z(ξ))) give the representation of the integral (9) in the form I[g] = ∫_{[0,1]^d} h(ξ) dξ. (15) Here, z_k = Φ^(−1)(ξ_k) for k = 1, ..., d, where Φ^(−1) is the inverse cumulative distribution function of the standard normal distribution, called the probit function, and x(z) is defined by (12).
2.4. Numerical Calculation of Sensitivity Indices
2.4.1. Numerical Quadratures
The integral in (15) can be approximated with a quadrature (sometimes called a cubature in the literature) that can be written in the following general form: I[g] ≈ Q[g] = ∑_{j=1}^{N} w_j h(ξ_j), (16) where w_j are method-dependent quadrature weights, h(ξ_j) are samples of the integrand at the method-dependent nodes ξ_j, and N is the number of samples.
The integral in (15) is multidimensional, and, therefore, special numerical techniques that can cope with the curse of dimensionality are required to calculate it. Monte Carlo (including quasi-Monte Carlo) and sparse grid integration methods are suitable for this task and will be considered in our paper. Below, we briefly introduce these methods and discuss their implementation in our work.
2.4.2. Monte Carlo and Quasi-Monte Carlo Quadratures
In the case of the traditional Monte Carlo method, the integrand is sampled on a set of d-dimensional pseudorandom points ξ_j, uniformly distributed in the unit hypercube [0, 1]^d. In the case of quasi-Monte Carlo, so-called low-discrepancy sequences of quasirandom points (also uniformly distributed in [0, 1]^d) are used for the integration. For both traditional Monte Carlo and quasi-Monte Carlo, the weights are point-independent and equal, that is, w_j = 1/N. The quasi-Monte Carlo quadratures have a higher asymptotic convergence rate and often outperform the traditional Monte Carlo quadrature in practical applications [12].
There is a strong similarity between the traditional Monte Carlo and quasi-Monte Carlo quadratures; they differ only in the type of sampling points (pseudorandom or quasirandom) and in the way the error is estimated. The error estimation will be done in the same way for both quadratures (see the discussion below). Hence, in this paper, both the traditional Monte Carlo and the quasi-Monte Carlo quadratures will be referred to as Monte Carlo quadratures.
In this work, we follow Sobol's recommendations [3] on the implementation of the Monte Carlo quadratures for the calculation of sensitivity indices. In particular, sampling is done from a single higher-dimensional hypercube, so that the points for x and for the resampled x' in (5)–(7) come from one sequence, and, in order to improve the accuracy of the estimation in (15), the function f(x) − c is evaluated instead of f(x) in (5)–(7), where c ≈ E[Y].
Our estimation of the accuracy of the Monte Carlo quadratures is based on a so-called randomization procedure [13]. This procedure consists of calculating K independent estimates, Q_1[g], ..., Q_K[g], of integral (15). The approximation to integral (15) is then calculated as the average of the K independent estimates, that is, Q̄[g] = (1/K) ∑_{k=1}^{K} Q_k[g], (17) and the error of such an approximation is characterized by the sample standard deviation, defined as ε = ( ∑_{k=1}^{K} (Q_k[g] − Q̄[g])^2 / (K(K − 1)) )^(1/2). (18) Each estimate is based on an independent sequence of quasi- or pseudorandom points, where each new sequence of points is obtained from the initial one by a random modulo 1 shift [13].
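A minimal Python sketch of this randomization procedure follows, with a simple stand-in integrand (x·y over the unit square, whose exact integral is 1/4) and independent seeds in place of the random modulo 1 shifts used in the paper:

```python
import math
import random

def mc_estimate(n, rng):
    """One Monte Carlo estimate of the integral of x*y over [0, 1]^2 (exact: 0.25)."""
    total = 0.0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        total += x * y
    return total / n

K, N = 10, 5000
estimates = [mc_estimate(N, random.Random(seed)) for seed in range(K)]

# Averaged estimate, as in (17)
q_bar = sum(estimates) / K
# Sample standard deviation of the averaged estimate, as in (18)
s = math.sqrt(sum((q - q_bar) ** 2 for q in estimates) / (K * (K - 1)))
print(round(q_bar, 3), s)   # q_bar close to 0.25, s of the order of 1e-3
```

The same K-estimate scheme applies unchanged to quasi-Monte Carlo, with each randomly shifted copy of the low-discrepancy sequence playing the role of one seed.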
2.4.3. Sparse Grid Quadratures
A sparse grid is a set of d-dimensional points which is generated using the Smolyak construction [14] and is based on a chosen sequence of univariate quadrature formulas Q_l, where l is the accuracy level of Q_l (see Appendix B for details). When applied to the integration of multivariate functions, the Smolyak construction is a multidimensional quadrature based on a tensor product of the one-dimensional quadratures Q_l, which are combined in a special way in order to optimize the quadrature convergence rate [15, 16]. The sequence of univariate quadrature formulae leads to a sequence of sparse grid quadratures Q_L with an increasing sparse grid accuracy level L.
Q_L is a linear functional that depends on g through the function values at the set of sparse grid points, and the number of terms N in (16) is defined by the cardinality of this set. The sparse grid points and the quadrature weights can be calculated using the procedure described in Appendix B.
If the point set of level L is contained in that of level L + 1, the quadrature is called nested. Nested quadratures permit the reuse of function values from previous levels, thus making the integration less computationally expensive. Quadrature rules are said to be open when they do not include points on the boundary and closed otherwise. Points on the boundary (i.e., ξ_k = 0 or ξ_k = 1 for some k) represent a problem for the numerical integration in (9), as the transformation z_k = Φ^(−1)(ξ_k) leads to infinities at these points. Hence, only nested and strictly open sparse grid quadratures will be used in this work.
The sequence of sparse grid quadratures naturally leads to a formula for a practical estimation of the integration error, ε = |Q_L[g] − Q_{L−1}[g]|, (19) although this estimation is usually quite conservative.
Note that sparse grids are often defined on the hypercube [−1, 1]^d. In this case, they can easily be mapped to the unit hypercube [0, 1]^d using the transformation of variables ξ_k = (t_k + 1)/2. When this mapping is performed, all the sparse grid quadrature weights have to be adjusted by a factor of 2^(−d).
2.4.4. Inversion of the Standard Normal Cumulative Distribution Function
According to the methodology discussed in the previous section, each sample vector ξ, generated with either the Monte Carlo or the sparse grid technique, requires transformation to the corresponding vector z.
The traditional way to generate normally distributed points in conventional Monte Carlo is to sample from the uniform distribution and then to use the so-called Box-Muller transformation [17]. Unfortunately, it is not recommended [18] for use with quasi-Monte Carlo and is not suitable for use with sparse grids. An alternative way is to sample from the uniform distribution and then to use the inverse of the standard normal cumulative distribution function. It is recommended to use Moro's inversion algorithm [19], which is reported to be faster than the Box-Muller approach and has good accuracy both in the central region and in the tails of the normal distribution [18].
In our work, Moro's algorithm is used for the variable transformation z_k = Φ^(−1)(ξ_k) coordinate-wise (i.e., for each k = 1, ..., d) for both the Monte Carlo and the sparse grid samples.
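The inverse-CDF step itself can be sketched in Python using the standard library's NormalDist.inv_cdf as a stand-in for Moro's algorithm (any accurate probit implementation plays the same role in the calculational path):

```python
from statistics import NormalDist

# probit maps a uniform sample u in (0, 1) to a standard-normal quantile z
probit = NormalDist().inv_cdf

u_points = [0.025, 0.5, 0.975]          # e.g. Monte Carlo or sparse grid nodes
z_points = [probit(u) for u in u_points]
print([round(z, 3) for z in z_points])  # approximately [-1.96, 0.0, 1.96]
```

Note that u = 0 and u = 1 map to −∞ and +∞, which is exactly why only strictly open sparse grid rules are usable here.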
2.4.5. Algorithms for Calculation of Global Sensitivity Indices
Algorithms 1 and 2 provide examples of how to calculate sensitivity indices based on a Monte Carlo quadrature and a sparse grid quadrature, respectively. Note that these algorithms are given for the sake of illustration and do not contain details about possible memory management or performance enhancements.
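For illustration only (this is not the paper's Algorithm 1, just a hedged sketch of the same idea), the Monte Carlo path can be condensed into a few lines of Python for a hypothetical linear model f(x1, x2) = x1 + 2·x2 with independent standard-normal inputs, using the resampling estimator of (7); the exact total index of x1 for this model is 1/5:

```python
import random

def model(x1, x2):
    """Hypothetical stand-in model; variance D = 1 + 4 = 5, so S1_tot = 1/5."""
    return x1 + 2.0 * x2

rng = random.Random(0)
N = 20000
d_tot = 0.0
values = []
for _ in range(N):
    x1, x2 = rng.gauss(0, 1), rng.gauss(0, 1)
    x1p = rng.gauss(0, 1)                  # independently resampled copy of x1
    # Accumulate the estimator of (7): (1/2) E[(f(x1, x2) - f(x1', x2))^2]
    d_tot += 0.5 * (model(x1, x2) - model(x1p, x2)) ** 2
    values.append(model(x1, x2))
d_tot /= N

mean = sum(values) / N
d = sum((v - mean) ** 2 for v in values) / N   # total output variance, eq. (5)
s_tot_1 = d_tot / d                            # total index, eq. (8)
print(round(s_tot_1, 2))   # close to 0.20
```

The full algorithms additionally route every sample through the probit and Cholesky transformations of Section 2.3 so that correlated, non-standard normal inputs are handled.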


3. Results
3.1. Test Problem Description
The OECD LWR UAM (OECD: Organisation for Economic Co-operation and Development; LWR: light water reactor; UAM: Uncertainty Analysis in Modelling) benchmark [8] seeks to determine the uncertainty in LWR system calculations at all stages of coupled reactor physics/thermal hydraulics calculations. The benchmark specification consists of three phases, the first of which is the neutronic phase.
The neutronic phase involves obtaining multigroup microscopic cross-section libraries. These libraries are then used to calculate few-group macroscopic cross-sections, which are to be used in criticality (steady-state) stand-alone calculations. One of the reactors chosen as a reference LWR for the benchmark was the Peach Bottom reactor. By energy collapsing and spatial homogenization of microscopic cross-section and covariance data [20], Williams et al. [21] obtained the 2-group homogenized neutron cross-sections, with an energy boundary of 0.625 eV, and the corresponding covariance matrix for the Peach Bottom reactor fuel assembly. The neutron cross-sections are given in Table 1; they are assumed to be independent and normally distributed, and their mean values (given in the third column of Table 1) correspond to the vector μ used in our methodology. The covariance matrix is shown in Table 2, where the diagonal terms are the percentage relative standard deviations and the off-diagonal terms are the correlation coefficients.
The global sensitivity analysis methodology discussed in Section 2 was applied to nuclear reactor calculations. The reactor parameter of interest chosen for this study is the infinite neutron multiplication factor, k_inf, and it was modelled as [22] k_inf = (νΣ_f1 + νΣ_f2 Σ_s,1→2/Σ_a2) / (Σ_a1 + Σ_s,1→2), where the traditional notation for macroscopic cross-sections is used (see Table 1). To illustrate our methodology, three cases were considered (all three cases are shown in Table 2): one with a diagonal covariance matrix, another with a block-diagonal covariance matrix, and the last with the full covariance matrix.
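A quick numerical check of this two-group infinite-medium expression can be written as follows; the cross-section values here are made up for illustration and are not the Table 1 benchmark values:

```python
def k_inf(nu_sf1, nu_sf2, sa1, sa2, s12):
    """Two-group k-infinity: fast production plus thermal production fed by
    fast-to-thermal removal s12, per fast-group loss (absorption + removal)."""
    return (nu_sf1 + nu_sf2 * s12 / sa2) / (sa1 + s12)

# Hypothetical macroscopic cross-sections (cm^-1), chosen only so that the
# arithmetic is easy to follow: (0.005 + 0.1*0.02/0.08) / (0.010 + 0.020) = 1.0
k = k_inf(nu_sf1=0.005, nu_sf2=0.100, sa1=0.010, sa2=0.080, s12=0.020)
print(round(k, 4))   # prints 1.0
```

In the sensitivity study, this function is the model f of (1), evaluated at cross-section vectors sampled as described in Section 2.3.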
In the first case (hereafter referred to as Case A), it was assumed that the input parameters (cross-sections) are not correlated, and the covariance matrix consisted of only the diagonal entries, highlighted in bold, while all other elements of the matrix were set to zero.
In the second case (hereafter referred to as Case B), we assumed a test block-diagonal covariance matrix. The test matrix was artificially constructed from the 2-group covariance matrix of [21] in such a way that the input variables can be partitioned into three mutually independent subsets x^(1), x^(2), and x^(3): the elements in the off-diagonal blocks (the terms highlighted in italic) were set to zero. It should be noted that the elements of the first subset correspond to the terms that contribute to the absorption cross-section, the elements of the second subset correspond to the terms that contribute to the production of neutrons, and the last subset corresponds to the removal of neutrons from the fast group to the thermal group.
For the last case (hereafter referred to as Case C), it was assumed that all the input parameters (cross-sections) are correlated with one another, and the full covariance matrix was used, that is, all entries highlighted in bold, italic, and non-italic.
It should be emphasised that neither of the first two examples (Cases A and B) pretends to reflect physical reality, but both the cross-section values and the elements of the test covariance matrix are of a plausible order of magnitude (close to the values given in [21]); hence, this example is representative and suitable for testing our method. Accordingly, the results and conclusions are given in order to characterise the method presented, and not the neutron multiplication properties of the Peach Bottom reactor.
3.2. Method Implementation
A Fortran 90 program was written to implement all the steps of the methodology outlined in Section 2. The program was subdivided into blocks of code, where each block takes an input, produces an expected output, and corresponds to one or more steps along the calculational path of the methodology. The testing, verification, and validation of the program were done for each block of code using test functions for which the corresponding results could be evaluated analytically.
Pseudorandom points were generated using the Fortran intrinsic subroutine random_number(). In implementing quasi-Monte Carlo, a Sobol quasirandom number generator written by J. Burkardt [23] was used. Furthermore, the randomization procedure was used in estimating the integration error (18) for the Monte Carlo quadratures, by considering K independent sequences with N samples in each sequence.
The implementation of the sparse grid quadratures was greatly facilitated by subroutines written by J. Burkardt [23]. Different open sparse grid quadrature rules, such as the Fejér, Gauss-Patterson, and Gauss-Legendre rules, were applied (note that closed rules were also tested and, as expected, numerical problems at the boundary points were encountered). The Gauss-Legendre quadrature outperformed the other rules in terms of the computational time needed to achieve a given accuracy for the cases considered, and its results will be reported up to the highest sparse grid level considered. The conservative procedure defined by (19) was used in estimating the integration error for the sparse grid quadratures and is reported in this paper.
Variations were introduced into the neutron cross-sections by using the standardizing transformation explained in (12), that is, x = μ + Lz, where L is the Cholesky factor of the extended neutron cross-section covariance matrix and z is obtained by applying Moro's inversion to the samples required by each of the implemented quadratures. Finally, in order to improve the accuracy of the Monte Carlo estimation of integral (15), a variance reduction technique [3], which consists of sampling the function f(x) − c instead of f(x) in (5)–(7), where c ≈ E[Y], was used.
3.3. Computed Uncertainty and Sensitivity
The uncertainty of the multiplication factor k_inf, computed in terms of the variance D, is given in Table 3 for Cases A, B, and C. Though the output variance is the natural result of variance-based sensitivity analysis, the standard deviation is preferred in the literature because it allows an intuitive interpretation as the error bar on the value of the analysed parameter. The standard deviations are calculated as the square root of the variance, σ = √D, and reported in Table 3 in parentheses for all cases and each quadrature. The relative uncertainty of the multiplication factor obtained in Case C was 0.51% for each of the three quadratures, and this result is in good agreement with the value of 0.49% reported in [21].
The computed sensitivity indices for each of the variables (cross-sections) in Case A and for each subset in Case B are given in Table 4. No sensitivity analysis was performed for Case C, since any sensitivity indices computed with our method would be meaningless because all input parameters are correlated in this case.
Considering the results of Case A, the input variable with the greatest influence on the infinite neutron multiplication factor is the thermal neutron production, νΣ_f2, and the input variable with the least influence is the fast neutron fission. This is in line with what we anticipated, given the fact that the infinite neutron multiplication factor is highly dependent on the number of neutrons produced in the system. Since the system under consideration is thermal, the thermal neutron production should account for most of the neutrons produced, and the effect of fast neutron fission was not expected to be significant.
Considering the results of Case B, the subset x^(2), which corresponds to the neutron production, had the greatest influence on the infinite neutron multiplication factor. The subset x^(3), which corresponds to the fast neutron removal, was the least influential. It should be noted that the value of the sensitivity index for the production subset is different in Cases A and B. This is because the off-diagonal terms of the covariance matrix influenced the results of Case B. In other words, due to the off-diagonal terms in the correlation matrix, Cases A and B define different problems.
3.4. Error Analysis
The results for Cases A and B in Tables 3 and 4 were obtained by using a high number of samples with all three numerical quadratures (a large sample size N in the case of the Monte Carlo quadratures and a high level L in the case of the sparse grid quadrature), and these results are taken as the reference. The computed uncertainties and sensitivity indices agree very well across all three quadratures.
The accuracy obtained for the reference results is much better than the accuracy needed to draw practical conclusions concerning the contribution of the uncertainties of different cross-sections. By this we mean that the accuracy of the sensitivity index estimation has to be, at least, sufficient to discriminate between the contributions of different inputs and should also be able to discriminate between the main effect index S_u and the total index S_u^tot for a given input x_u.
Therefore, an error estimation study was done in order to determine the influence of the number of samples on the absolute and relative quadrature errors of the computed sensitivity indices, where the relative quadrature error is given by δ = ε/|Q[g]|, and the absolute quadrature error ε is given by either (18) or (19). This study helps in determining the number of samples needed to get a good estimation of the sensitivity indices with the different numerical methods. For the Monte Carlo methods, three different sample sizes N were considered; in all cases, K independent sequences were used. For the sparse grid, a range of levels L was considered.
It was observed in both cases that the results obtained for the main effect indices S_u and the total indices S_u^tot are statistically similar for all subsets of the input variables, for all three numerical methods used. This implies that the interaction effects can be neglected. It was also observed that the integration error for S_u^tot was smaller than for S_u in all the cases; hence, from now on, we will only consider the total sensitivity indices S_u^tot.
When considering Case A, it was observed that increasing the number of samples N resulted, as expected, in a reduction of the integration error consistent with the convergence rate of traditional Monte Carlo for all the computed total sensitivity indices. The results for quasi-Monte Carlo showed a faster decrease of the integration error with increasing N for all the computed total sensitivity indices. For the sparse grid, each increase of the level by one resulted in a decrease of the integration error by a factor of about 3.
The maximal absolute and relative errors of the total sensitivity indices computed with different numbers of samples are reported in Table 5 for the Monte Carlo quadratures and in Table 6 for the sparse grid quadrature. These maximal absolute errors are obtained by taking the largest absolute error over all the sensitivity indices for a given case, a given number of samples, and a given quadrature. The maximal relative error is obtained in the same way.
As one can see from Table 5, a relatively small number of samples in the case of Monte Carlo gave fairly good accuracy in the estimation of the total sensitivity indices.
It should be noted that the two lowest sparse grid levels were not considered in the error estimation. This is because, for the lowest level, the abscissa consists of only one point and the variance is zero; hence, the total sensitivity index is undefined. For the same reason, the application of (19) cannot give reasonable results for the next level. However, looking at Table 6, it can be seen that the maximal difference between the results obtained for consecutive low levels is already small, and the corresponding maximum relative quadrature error is below one percent. Hence, this shows that for both cases a low sparse grid level, which contains only a few dozen points, is sufficient to estimate the total sensitivity indices with very good accuracy.
The relatively small number of sparse grid points needed for an accurate estimation of the sensitivity indices, as well as the absence of interactions between the input variables (as discussed earlier), was unexpected. This result can potentially be explained in the following way: the uncertainty in the cross-sections is so small that only the vicinity of the cross-section mean values contributes to the integrals used in the estimation of the sensitivity indices. In this vicinity, the neutron multiplication factor, which is used as the example, can be approximated well by a linear function.
A small numerical experiment was done to clarify this aspect. The standard deviations given in Table 1 were initially multiplied by arbitrary factors between 1 and 10 and, in a second phase, by arbitrary factors between 1 and 20, and the sensitivity indices were recalculated. These factors were chosen to make the effect of a wider distribution more prominent without introducing a significant nonphysical effect due to negative cross-section values in the left tail of the distributions. It was observed that as the values of the standard deviations increase, interaction effects appear, that is, S_u becomes statistically different from S_u^tot. Furthermore, a larger number of points (higher levels) is needed to achieve the same accuracy as in the reference case. These results support our assumption on the nature of the good performance of the sparse grid quadrature. A more thorough study was subsequently done to confirm this conclusion, and the results are reported in [24].
4. Conclusions
In this paper, the global variance-based sensitivity and uncertainty analysis of reactor parameters dependent on few-group or multi-group neutron cross-sections was discussed. It was assumed that the cross-sections are normally distributed random variables with known means and correlation matrices, that these matrices can be partitioned into statistically independent blocks of variables, and that this partitioning allows one to formulate scientifically and practically sound sensitivity analysis problems. The theoretical and mathematical aspects of the calculation of the global sensitivity indices under these assumptions have been discussed. The problem of the practical numerical calculation of the variance-based global sensitivity indices was then addressed; namely, different options for numerical integration were considered. A consistent overall path for the calculation of the sensitivity indices was proposed and described.
The method was successfully implemented in practice and was tested on a problem that involves two-group assembly-homogenised cross-sections as input variables. The performance of different numerical integration techniques was assessed on a reactor problem with arbitrary, but plausible, two-group cross-sections and covariance matrices. The different implementations gave consistent results for the test problem under consideration. The implementation based on sparse grid quadrature demonstrated the best accuracy, requiring as few as a few dozen samples.
This good performance of sparse grid integration was not expected, and a special mini-study was performed with the purpose of explaining its origin, as well as the absence of interactions in the obtained sensitivity indices. The results of this study confirmed our hypothesis that the observed behaviour can be explained by the very small cross-section uncertainty. Nevertheless, this conclusion still has to be supported by a theoretical explanation.
From the methodological point of view, the method presented in this paper is applicable to problems with an arbitrary number of input variables. Nevertheless, one has to be cautious when dealing with high-dimensional problems in order to avoid the curse of dimensionality. In this work, the applicability of our method to a few-group problem was demonstrated; its applicability to multi-group reactor problems will be the topic of future studies.
Appendices
A. Functional ANOVA Decomposition for Independent Random Variables
Let $p(\mathbf{x})$ be a joint probability density function of the random variables $\mathbf{x} = (x_1, \ldots, x_n)$:
$$\int p(\mathbf{x})\, d\mathbf{x} = 1, \qquad p(\mathbf{x}) \ge 0. \qquad (A.1)$$
Let $f(\mathbf{x})$ be a square-integrable function over the support of $p$. The expected value and the variance of the function $f$ with respect to the probability density function $p$ are defined as
$$E[f] = \int f(\mathbf{x})\, p(\mathbf{x})\, d\mathbf{x}, \qquad D[f] = \int \bigl( f(\mathbf{x}) - E[f] \bigr)^2 p(\mathbf{x})\, d\mathbf{x}. \qquad (A.2)$$
The functional ANOVA decomposition is a representation of the function $f$ as a sum of terms of increasing dimensionality:
$$f(\mathbf{x}) = \sum_{u \subseteq \{1, \ldots, n\}} f_u(\mathbf{x}_u), \qquad (A.3)$$
where the sum is taken over all subsets $u \subseteq \{1, \ldots, n\}$ and $f_u$ is a function that depends on $\mathbf{x}$ only through $\mathbf{x}_u$. Here, $\mathbf{x}_u$ denotes the subset of variables whose indices are in $u$, whereas $\mathbf{x}_{\sim u}$ denotes the variables with indices not in $u$, and $|u|$ is the cardinality of the set $u$.
According to Sobol’s definition, for the representation given by (A.3) to be a functional ANOVA decomposition, it has to satisfy the so-called zero-means and orthogonality properties [2, 3]. Let the random variables $x_1, \ldots, x_n$ be mutually independent with joint probability density function $p(\mathbf{x}) = \prod_{i=1}^{n} p_i(x_i)$. Using an analogy with the case of uniformly distributed input variables, one can demonstrate that the functional ANOVA decomposition can be constructed by applying the following recurrent formula:
$$f_u(\mathbf{x}_u) = \int f(\mathbf{x})\, p(\mathbf{x}_{\sim u})\, d\mathbf{x}_{\sim u} - \sum_{v \subset u} f_v(\mathbf{x}_v). \qquad (A.4)$$
The constant mean term, $f_\emptyset = E[f]$, is thus obtained by taking $u = \emptyset$; the first-order effects $f_{\{i\}}(x_i)$ (where $u = \{i\}$) are obtained by taking $u = \{i\}$, and so on. For functions obtained with recurrence (A.4), the zero-means property becomes
$$\int f_u(\mathbf{x}_u)\, p_u(\mathbf{x}_u)\, d\mathbf{x}_u = 0, \qquad u \neq \emptyset. \qquad (A.5)$$
The orthogonality property holds in the weighted form
$$\int f_u(\mathbf{x}_u)\, f_v(\mathbf{x}_v)\, p(\mathbf{x})\, d\mathbf{x} = 0, \qquad u \neq v. \qquad (A.6)$$
Properties (A.5) and (A.6) are crucial for the functional ANOVA method because they lead to the variance decomposition formula
$$D[f] = \sum_{u \neq \emptyset} D[f_u]. \qquad (A.7)$$
Indeed, let us assume that the function $f$ admits the orderwise decomposition (A.3) over subsets of variables. Applying the variance operator (A.2) to the left-hand side and right-hand side of (A.3) and using a standard statistical formula, we can write
$$D[f] = \sum_{u} D[f_u] + \sum_{u \neq v} \mathrm{Cov}(f_u, f_v), \qquad (A.8)$$
where
$$\mathrm{Cov}(f_u, f_v) = E[f_u f_v] - E[f_u]\, E[f_v]. \qquad (A.9)$$
By definition, $D[f_\emptyset] = 0$, and it can be observed from (A.9) that properties (A.5) and (A.6) lead to the zero-covariance condition $\mathrm{Cov}(f_u, f_v) = 0$ for $u \neq v$ and hence to the variance decomposition in the form of (A.7).
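For independent inputs, the variance decomposition above underlies the standard Monte Carlo estimators of the first-order and total sensitivity indices. The following is a minimal sketch of a pick-freeze (Sobol'/Saltelli-style) estimator for independent standard normal inputs; the test function, its coefficients, and the sample size are hypothetical illustrations, not quantities from this paper:

```python
import numpy as np

def sobol_indices(model, d, n, rng):
    """Pick-freeze Monte Carlo estimator of first-order (S) and total (ST)
    sensitivity indices for a model with d independent N(0, 1) inputs.

    Uses the Saltelli (2010) estimator for S and the Jansen estimator for ST.
    """
    A = rng.standard_normal((n, d))
    B = rng.standard_normal((n, d))
    f_A, f_B = model(A), model(B)
    var = np.var(np.concatenate([f_A, f_B]))
    S = np.empty(d)
    ST = np.empty(d)
    for i in range(d):
        AB_i = A.copy()
        AB_i[:, i] = B[:, i]                # resample only input i
        f_AB = model(AB_i)
        S[i] = np.mean(f_B * (f_AB - f_A)) / var        # first-order index
        ST[i] = 0.5 * np.mean((f_A - f_AB) ** 2) / var  # total index
    return S, ST

# Illustrative model: f = x1 + 2*x2 + 0.5*x1*x2. Its exact ANOVA
# decomposition gives D = 5.25, S1 = 1/5.25, S2 = 4/5.25, S12 = 0.25/5.25.
def model(x):
    return x[:, 0] + 2.0 * x[:, 1] + 0.5 * x[:, 0] * x[:, 1]

S, ST = sobol_indices(model, d=2, n=200_000, rng=np.random.default_rng(0))
```

Because the model contains an interaction term, the estimated total indices exceed the corresponding first-order ones, consistent with (A.7).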
B. Approximation of Multidimensional Integrals with Sparse Grid Quadratures
Let $f(\mathbf{x})$ be a continuous function of its arguments with bounded mixed derivatives of order $r$:
$$\left\| \frac{\partial^{s_1 + \cdots + s_d} f}{\partial x_1^{s_1} \cdots \partial x_d^{s_d}} \right\| < \infty, \qquad s_j \le r, \quad j = 1, \ldots, d, \qquad (B.1)$$
where $\mathbf{x} = (x_1, \ldots, x_d) \in \Omega = \Omega_1 \times \cdots \times \Omega_d$, $d$ is the dimensionality of the problem, and the $\Omega_j$ ($j = 1, \ldots, d$) are bounded or unbounded intervals. We consider an approximation to the integral
$$I[f] = \int_\Omega f(\mathbf{x})\, w(\mathbf{x})\, d\mathbf{x}, \qquad (B.2)$$
where $w(\mathbf{x})$ is a weight function of tensor product form, $w(\mathbf{x}) = \prod_{j=1}^{d} w_j(x_j)$.
In order to construct a multidimensional sparse grid quadrature, let us consider a sequence of univariate quadrature formulas
$$Q^{l} g = \sum_{k=1}^{n_l} w_k^{l}\, g\!\left( x_k^{l} \right), \qquad (B.3)$$
which approximate the one-dimensional integrals
$$\int_{\Omega_j} g(x)\, w_j(x)\, dx. \qquad (B.4)$$
Here, $g$ is a continuous function of its argument, $l$ is the accuracy level of the quadrature formula, $n_l$ is the number of abscissas (knots) of the quadrature, $x_k^l$ is the $k$th knot, and $w_k^l$ is the corresponding weight. The index $l$ is written explicitly on the abscissas and weights as a reminder that they may change between levels. $X^l$ will be used to denote the set of knots of the one-dimensional quadrature formula of level $l$.
In the sparse grid method, the integral (B.2) is approximated via the Smolyak formula [14, 15], defined for an accuracy level $L$ of the sparse grid as follows:
$$Q_L^d f = \sum_{L - d + 1 \le |\mathbf{l}| \le L} (-1)^{L - |\mathbf{l}|} \binom{d-1}{L - |\mathbf{l}|} \left( Q^{l_1} \otimes \cdots \otimes Q^{l_d} \right) f, \qquad (B.5)$$
where $|\mathbf{l}| = l_1 + \cdots + l_d$, and the multi-index $\mathbf{l} = (l_1, \ldots, l_d)$ contains the accuracy level of the one-dimensional quadrature (B.4) for each dimension. The tensor product in (B.5) can be calculated as
$$\left( Q^{l_1} \otimes \cdots \otimes Q^{l_d} \right) f = \sum_{k_1=1}^{n_{l_1}} \cdots \sum_{k_d=1}^{n_{l_d}} w_{k_1}^{l_1} \cdots w_{k_d}^{l_d}\, f\!\left( x_{k_1}^{l_1}, \ldots, x_{k_d}^{l_d} \right), \qquad (B.6)$$
where the tensor product of quadrature weights is replaced with the ordinary product, since the weights are real numbers. As one can see from the structure of (B.5) and (B.6), the quadrature $Q_L^d$ is a linear functional that depends on $f$ through function values at a finite set of points. This set of points is called a “sparse grid” and is denoted by $H_L^d$. The sparse grid is defined as the union
$$H_L^d = \bigcup_{L - d + 1 \le |\mathbf{l}| \le L} \left( X^{l_1} \times \cdots \times X^{l_d} \right). \qquad (B.7)$$
For nested one-dimensional sets $X^{l} \subset X^{l+1}$, the corresponding sparse grids are also nested, and (B.7) can be simplified, yielding
$$H_L^d = \bigcup_{|\mathbf{l}| = L} \left( X^{l_1} \times \cdots \times X^{l_d} \right). \qquad (B.8)$$
The integral (B.2) can now be approximated by the sum
$$I[f] \approx \sum_{\mathbf{x}_k \in H_L^d} \bar{w}_k\, f(\mathbf{x}_k), \qquad (B.9)$$
where the multidimensional knots $\mathbf{x}_k$ can be constructed based on (B.7) and (B.8). The formulae for the quadrature weights $\bar{w}_k$ in (B.9) can be obtained in analytical form only in a few particular cases; in all other cases, the weights can be either precalculated or computed online using (B.5) and (B.6).
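The Smolyak construction of (B.5)-(B.6) can be sketched concretely with one-dimensional Gauss-Hermite rules, which suit a Gaussian weight such as the normally distributed cross-section case. This is a minimal illustration in which the 1-D rule of level $l$ is simply taken to have $l$ points; the test function and levels are illustrative assumptions, not choices made in this paper:

```python
from itertools import product
from math import comb

import numpy as np

def gauss_hermite_1d(n):
    """n-point Gauss-Hermite rule for integration against the standard
    normal density (probabilists' weight, normalised to total weight 1)."""
    x, w = np.polynomial.hermite_e.hermegauss(n)
    return x, w / np.sqrt(2.0 * np.pi)

def tensor_rule(f, rules):
    """Full tensor product quadrature, cf. (B.6)."""
    total = 0.0
    for knots in product(*(zip(x, w) for x, w in rules)):
        point = np.array([k[0] for k in knots])
        weight = np.prod([k[1] for k in knots])
        total += weight * f(point)
    return total

def smolyak(f, d, L):
    """Smolyak sparse grid quadrature of accuracy level L, cf. (B.5):
    sum over L-d+1 <= |l| <= L of (-1)^(L-|l|) * C(d-1, L-|l|) times the
    tensor rule, with the 1-D level-l rule chosen as l-point Gauss-Hermite."""
    total = 0.0
    for lvl in product(range(1, L + 1), repeat=d):
        q = sum(lvl)
        if q < max(d, L - d + 1) or q > L:
            continue
        coeff = (-1) ** (L - q) * comb(d - 1, L - q)
        total += coeff * tensor_rule(f, [gauss_hermite_1d(n) for n in lvl])
    return total

# E[x1^2 + x2^2] = 2 for independent standard normal inputs; the level-3
# sparse grid reproduces it exactly while using far fewer points than the
# corresponding full tensor grid.
approx = smolyak(lambda x: x[0] ** 2 + x[1] ** 2, d=2, L=3)
```

In practice one would use nested 1-D rules so that the simplified union (B.8) applies and function values are reused across levels; the combination-coefficient form above is the most direct transcription of (B.5).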
Acknowledgments
The authors would like to thank the South African National Research Foundation (NRF) for their financial support and Professor Kostadin N. Ivanov from Pennsylvania State University for his valuable suggestions.