Science and Technology of Nuclear Installations, Volume 2012 (2012), Article ID 253045, 8 pages. http://dx.doi.org/10.1155/2012/253045
Research Article

## Asymptotic Analysis for the Variance-Based Global Sensitivity Indices

Pavel M. Bokov

Research and Development Division, The South African Nuclear Energy Corporation (Necsa), Building 1900, P.O. Box 582, Pretoria 0001, South Africa

Received 24 April 2012; Accepted 8 August 2012

Copyright © 2012 Pavel M. Bokov. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We discuss the estimation of the uncertainty and sensitivity parameters for a model response under the assumption that the input variables are normally distributed and block-wise correlated, with a covariance matrix that is small in some norm. These conditions may arise when considering the impact of the uncertainties in the group-wise neutron cross-sections on the uncertainty of some reactor parameter, such as the neutron multiplication factor. The variance-based global sensitivity analysis considered in our work involves the calculation of multidimensional integrals. When the input uncertainties are small, the values of these integrals can be estimated using an asymptotic analysis method called the Laplace approximation. Asymptotic formulas for the output variance and for the global sensitivity indices have been obtained using the Laplace approximation method. It is demonstrated that the asymptotic formula for uncertainty propagation matches the uncertainty propagation formula used in local sensitivity analysis. The applicability of the obtained asymptotic approximations was successfully demonstrated on a test problem with realistic cross-section and covariance matrix values.

#### 1. Introduction

Uncertainty analysis (UA) is a mathematical tool that allows one to quantify the uncertainty of the model output resulting from the uncertainty in the model input. If the input consists of more than one variable, another mathematical tool, sensitivity analysis (SA), may be used to quantify the contribution of each parameter to the output uncertainty. SA and UA methods can be grouped into two broad families: local sensitivity analysis (LSA) and global sensitivity analysis (GSA).

LSA methods allow one to analyze the behavior of the model output in the vicinity of a chosen point. They are usually efficient in computer time but may be inadequate for nonlinear models. GSA methods allow one to explore the full phase space of input parameters and to take the nonlinearity of the model into account. GSA methods are generally more computationally intensive than LSA methods. The major difference between these two approaches can be summarized with the following quote from Sobol’ [1]: “Global sensitivity indices should be regarded as a tool for studying the mathematical model rather than its specified solution.” A detailed discussion of SA methods can be found in [1–3]. An overview of SA as applied in the nuclear reactor calculation field is given in [4] and the references therein.

In this paper we concentrate our attention on the so-called variance-based GSA. Amongst the different GSA methods, the variance-based methods are the methods of preference because they are well established, robust, and model independent. On the other hand, despite the attractiveness of the variance-based GSA, UA and SA for nuclear reactor problems are traditionally based on LSA methods and only rarely on GSA methods. The reason why the variance-based GSA methods have not been widely applied is that there are a few obstacles in the way of their implementation in reactor calculations. The first obstacle is the correlation between input parameters, which makes the variance-based GSA in its traditional formulation meaningless. This limitation may appear, for example, when considering multigroup neutron cross-sections as input parameters, which may have correlations between cross-sections from different energy groups. A partial remedy for this problem was proposed by Jacques et al. [5], who remarked that the variance-based GSA is still applicable to mutually independent subsets of input variables. Another, more radical, solution is to abandon the variance-based GSA and to use, for instance, entropy-based GSA [6, 7]. Entropy-based GSA methods can be used for correlated inputs, but they are not as well established as the variance-based ones and are algorithmically more complex and computationally more expensive. The second obstacle is the potential impracticality (and even intractability) when the number of input parameters is large. This problem appears because GSA methods involve the calculation of multidimensional integrals; it is traditionally circumvented by applying integration methods suitable for multidimensional problems, such as Monte Carlo [1, 8] or sparse grid [9, 10] quadratures. These quadratures are often applied in conjunction with model reduction methods [11–13], which are often based on the functional ANOVA (analysis of variance) decomposition [8, 14].
When applied in the field of nuclear reactor calculations, UA and SA methods have an additional limitation: the model value is often the result of an expensive calculation and this limits the number of samples available for the analysis.

Nevertheless, there are some indications that a large number of samples may not be needed for practical applications. Adetula and Bokov [15, 16] described a numerical method for the calculation of global sensitivity parameters for multigroup cross-section-dependent problems with block-wise correlated inputs. The method was illustrated using a test two-group problem that involved realistic cross-section data. Various integration techniques were used, and one of them, the sparse grid quadrature, demonstrated surprisingly good accuracy with a small number of sample points. Moreover, it was observed that the interaction between inputs is negligible. Hence, it was concluded that the model is virtually additive and can be approximated with low-order polynomials. This result was explained with the following hypothesis: the uncertainty in the cross-sections is so small that only the vicinity of the cross-section mean values contributes to the integrals used in the estimation of the sensitivity indices. In this vicinity the neutron multiplication factor can be approximated with a fairly linear function. This hypothesis was tested and confirmed empirically (see [15, 16] for details), but no theoretical support was provided.

The present work aims to fill this gap and to provide a theoretical explanation and a quantitative description of this phenomenon. The idea behind this hypothesis is rather simple and may be illustrated with the sketch given in Figure 1 for a one-dimensional case. Consider a model represented by a smooth function $f(x)$, where the input variable $x$ is normally distributed with a mean $\mu$ and a standard deviation $\sigma$. Let us assume that it is necessary to evaluate an integral of the form

$$I = \int_{-\infty}^{\infty} f(x)\, p(x)\, \mathrm{d}x,$$

where $p(x)$ is the probability density function for $x$. As one can observe in Figure 1, only the interval under the Gaussian (which is of the order of a few $\sigma$) effectively contributes to the integral. In the limit $\sigma \to 0$, the probability distribution becomes the delta function and the contribution of the function $f$ to the integral is point-wise, that is, $I \to f(\mu)$. When $\sigma$ is small enough, the part of the function $f$ contributing to the integral is almost linear. Finally, if the Gaussian spans wider intervals, the nonlinearity of $f$ becomes progressively more prominent. A similar behavior can be observed in the multidimensional case. The quantitative description of this behavior in the one- and multidimensional cases can be given using a method known as the Laplace approximation, through the asymptotic approximation of the involved integrals.

Figure 1: Effective vicinities of the cross-section mean that contribute to the integral, for several values of the standard deviation $\sigma$. Dots on the curve represent the boundaries of the corresponding effective intervals on which the integrand is essentially nonzero.
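The effect illustrated in Figure 1 can be checked numerically. The sketch below is not part of the original study; the model $f(x) = e^x$ and all parameter values are illustrative assumptions, chosen because $\mathrm{E}[e^X] = e^{\mu + \sigma^2/2}$ is known in closed form:

```python
import math
import random

def gauss_expect(f, mu, sigma, n=200_000, seed=1):
    """Monte Carlo estimate of E[f(X)] for X ~ N(mu, sigma^2)."""
    rng = random.Random(seed)
    return sum(f(rng.gauss(mu, sigma)) for _ in range(n)) / n

# smooth nonlinear test model: for f = exp, E[f(X)] = exp(mu + sigma^2 / 2)
mu = 0.0
for sigma in (0.5, 0.1, 0.01):
    exact = math.exp(mu + sigma ** 2 / 2.0)
    mc = gauss_expect(math.exp, mu, sigma)
    # the gap between E[f(X)] and the point-wise value f(mu) shrinks as sigma -> 0
    print(sigma, exact - math.exp(mu), mc - math.exp(mu))
```

As $\sigma$ decreases, both the exact and sampled expectations collapse onto $f(\mu)$, mirroring the point-wise limit described above.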

In this work we intend to apply the Laplace method to the calculation of the output variance and of the sensitivity indices as they are defined in GSA. The application of the Laplace approximation in the one-dimensional case is relatively straightforward, while in the multidimensional case it may become cumbersome; this is why we limit our approximation to the first nonvanishing term in the expansion. Nevertheless, as we demonstrate below, this approximation is sufficient to explain the results obtained in [15, 16].

The rest of the paper is organized in the following way. Section 2 provides the necessary definitions, followed by a derivation of the asymptotic approximation. Section 3 discusses the asymptotic approximation. Section 4 contains the description and analysis of the test problem. Finally, Section 5 presents our conclusions.

#### 2. Asymptotic Approximation of Sensitivity Indices

##### 2.1. Variance-Based Global Sensitivity Analysis

Let $X = \{X_1, X_2, \ldots, X_n\}$ be a set of continuous random variables, $X_u$ be its arbitrary subset, and $X_v$ be the subset of complementary variables, that is, $X_v = X \setminus X_u$. The random variables can be gathered into the corresponding column vectors: $\mathbf{X}$, $\mathbf{X}_u$, and $\mathbf{X}_v$. Following the arguments given in [5, 15, 16], we assume that the random variables from $X_u$ and $X_v$ are mutually statistically independent; hence their joint probability density function is the product of the marginal probability density functions:

$$p(\mathbf{x}) = p_u(\mathbf{x}_u)\, p_v(\mathbf{x}_v).$$

Note that we will use, as is the rule in statistics, a capital letter (e.g., $X$) to denote a random variable and the corresponding lowercase letter ($x$) to denote its values (realizations). Let us assume that the random variables are distributed according to the normal law with known means and covariances. The multivariate normal distribution is characterized by the probability density function [17]

$$p(\mathbf{x}) = \frac{1}{(2\pi)^{n/2}\, \left|\det \boldsymbol{\Sigma}\right|^{1/2}} \exp\left[-\frac{1}{2}\,(\mathbf{x}-\boldsymbol{\mu})^{\mathrm{T}}\, \boldsymbol{\Sigma}^{-1}\, (\mathbf{x}-\boldsymbol{\mu})\right],$$

where $\mathbf{x}$ is the column vector of the input random variables, $\boldsymbol{\mu} = \mathrm{E}[\mathbf{X}]$ is the column vector of their expected values, $\boldsymbol{\Sigma}$ is the covariance matrix, $\mathrm{E}$ denotes the mathematical expectation, and the symbol “T” denotes the transposition operation.

Consider a model $Y = f(\mathbf{X})$, where the function $f$ is generally nonlinear and may result from numerical calculations. $\mathbf{X}$ are called inputs and $Y$ is called the output or response. The response is a random variable itself, with the expected value $f_0 = \mathrm{E}[Y]$ and the variance $D = \mathrm{Var}[Y]$. The variance of the output, $D$, can be used to characterize the uncertainty in the response due to the uncertainty in the input, in other words, the uncertainty propagation. Note that $D$ can be calculated regardless of whether the normally distributed inputs are correlated or not. Sensitivity indices characterize the contribution of a subset $X_u$ (and, correspondingly, vector $\mathbf{X}_u$) to the response variance, and they are meaningful only if the subset $X_u$ and its complementary subset $X_v$ are statistically independent. Two types of global sensitivity indices with respect to a subset $X_u$ can be introduced: (1) $S_u$, which is called the main sensitivity index (MSI) and represents the effect due to $\mathbf{X}_u$ only; (2) $S_u^{\mathrm{tot}}$, which is called the total sensitivity index (TSI) and represents the contribution of $\mathbf{X}_u$ to the variance of $Y$ along with all the interactions of these variables with the other variables.

The global sensitivity indices can be calculated using the ratios [1, 8]

$$S_u = \frac{D_u}{D}, \qquad S_u^{\mathrm{tot}} = \frac{D_u^{\mathrm{tot}}}{D},$$

where the parameters $f_0$, $D$, $D_u$, and $D_u^{\mathrm{tot}}$ can be calculated through the following multidimensional integrals [12, 15, 16]:

$$f_0 = \int f(\mathbf{x})\, p(\mathbf{x})\, \mathrm{d}\mathbf{x},$$

$$D = \int f^2(\mathbf{x})\, p(\mathbf{x})\, \mathrm{d}\mathbf{x} - f_0^2,$$

$$D_u = \int f(\mathbf{x}_u, \mathbf{x}_v)\, f(\mathbf{x}_u, \mathbf{x}'_v)\, p(\mathbf{x})\, p_v(\mathbf{x}'_v)\, \mathrm{d}\mathbf{x}\, \mathrm{d}\mathbf{x}'_v - f_0^2,$$

$$D_u^{\mathrm{tot}} = \frac{1}{2} \int \big(f(\mathbf{x}_u, \mathbf{x}_v) - f(\mathbf{x}'_u, \mathbf{x}_v)\big)^2\, p(\mathbf{x})\, p_u(\mathbf{x}'_u)\, \mathrm{d}\mathbf{x}\, \mathrm{d}\mathbf{x}'_u,$$

which are a generalization of the classical formulas given by Sobol’ [1, 8]. Here the prime symbol over a variable (e.g., as in $\mathbf{x}'_v$) means that this variable has to be sampled independently from the corresponding marginal distribution ($p_v$ in this case) of its unprimed analogue, and $n(\cdot)$ denotes the number of elements in a vector, that is, the dimensionality of the vector. Calculation of the sensitivity indices requires the calculation of integrals (6)–(9), which can be written in the following general form:

$$I = \mathrm{E}\big[F(\tilde{\mathbf{X}})\big] = \int_{\mathbb{R}^m} F(\tilde{\mathbf{x}})\, p(\tilde{\mathbf{x}})\, \mathrm{d}\tilde{\mathbf{x}},$$

where the integration operator denotes the mathematical expectation with respect to the extended random variables $\tilde{\mathbf{X}}$, $F$ represents the function being integrated, and $m$ is the effective dimensionality of the integral. For instance, in integral (8) the function $F$ represents $f(\mathbf{x}_u, \mathbf{x}_v)\, f(\mathbf{x}_u, \mathbf{x}'_v)$, $\tilde{\mathbf{x}}$ represents $(\mathbf{x}, \mathbf{x}'_v)$, and the effective dimensionality is $m = n + n(\mathbf{x}_v)$. Similarly, in integral (6) the function $F$ represents $f(\mathbf{x})$, $\tilde{\mathbf{x}} = \mathbf{x}$, and $m = n$.
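For readers who wish to experiment with these definitions, the following self-contained sketch implements plain Sobol'-type pick-and-freeze Monte Carlo estimators of the MSI and TSI for a toy model with independent inputs. The model, sample size, and seed are illustrative assumptions, not taken from the paper:

```python
import random

def sobol_indices(f, sample_u, sample_v, n=100_000, seed=2):
    """Pick-and-freeze Monte Carlo estimators of the main (MSI) and
    total (TSI) sensitivity indices for the subset u."""
    rng = random.Random(seed)
    s_f = s_f2 = s_main = s_tot = 0.0
    for _ in range(n):
        xu, xv = sample_u(rng), sample_v(rng)
        xu2, xv2 = sample_u(rng), sample_v(rng)    # independent resamples
        y = f(xu, xv)
        s_f += y
        s_f2 += y * y
        s_main += y * f(xu, xv2)                   # same x_u, resampled x_v
        s_tot += 0.5 * (y - f(xu2, xv)) ** 2       # resampled x_u
    f0 = s_f / n
    D = s_f2 / n - f0 * f0
    return (s_main / n - f0 * f0) / D, (s_tot / n) / D

# toy model f(x_u, x_v) = x_u + x_v + x_u * x_v with standard normal inputs;
# analytically S_u = 1/3 and S_u_tot = 2/3
S_u, S_u_tot = sobol_indices(lambda a, b: a + b + a * b,
                             lambda r: r.gauss(0.0, 1.0),
                             lambda r: r.gauss(0.0, 1.0))
print(round(S_u, 2), round(S_u_tot, 2))
```

The gap between the MSI and TSI for this toy model is entirely due to the interaction term $x_u x_v$; for the nearly linear models considered later in the paper this gap becomes negligible.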

##### 2.2. Laplace Approximation of Integrals

In the limit when the elements of the covariance matrix are small in some norm, one can use, for the calculation of integrals (6)–(9), the asymptotic analysis technique known as the Laplace approximation. The Laplace approximation is an approximation to integrals of the form

$$I(\lambda) = \int_{\Omega} g(\mathbf{x})\, e^{\lambda\, \phi(\mathbf{x})}\, \mathrm{d}\mathbf{x}$$

as $\lambda \to \infty$. Here $\Omega$ is a measurable subset of $\mathbb{R}^m$ and $\phi$ is a real-valued function of $\mathbf{x}$. When applying the Laplace approximation formula, it is assumed that $\phi$ is an infinitely differentiable function with its maximum value at a single point in the interior of $\Omega$ (see [18] for details) and that $g$ is a sufficiently smooth function of its argument.

Integrals in the form (10), used for the calculation of sensitivity indices, fulfill the above requirements for the applicability of the Laplace approximation formula if one identifies the exponent of the normal probability density with $\lambda\,\phi$ and assumes that the elements of the covariance matrix are small. Let us assume that the covariance matrix is small in some norm, that is, $\|\boldsymbol{\Sigma}\| \to 0$. For the sake of the asymptotic analysis, let us introduce the big parameter $\lambda$ as the inverse absolute value of the determinant of the covariance matrix:

$$\lambda = \frac{1}{\left|\det \boldsymbol{\Sigma}\right|}.$$

We will assume that the covariance matrix is strictly positive definite; hence the absolute value sign in (12) may be omitted. Factoring out the coefficient $\lambda^{-1/m}$ allows us to introduce the normalized covariance matrix

$$\mathbf{C} = \lambda^{1/m}\, \boldsymbol{\Sigma}, \qquad \det \mathbf{C} = 1.$$

Taking into account that $\boldsymbol{\Sigma}^{-1} = \lambda^{1/m}\, \mathbf{C}^{-1}$, the probability density function (3) can be written in a form suitable for the Laplace approximation:

$$p(\mathbf{x}) = \frac{\lambda^{1/2}}{(2\pi)^{m/2}} \exp\left[-\frac{\lambda^{1/m}}{2}\,(\mathbf{x}-\boldsymbol{\mu})^{\mathrm{T}}\, \mathbf{C}^{-1}\, (\mathbf{x}-\boldsymbol{\mu})\right].$$

The covariance matrix is real-valued, symmetric, and positive definite; hence the matrix $\mathbf{C}$ allows the Cholesky factorization

$$\mathbf{C} = \mathbf{L}\, \mathbf{L}^{\mathrm{T}},$$

where $\mathbf{L}$ is a lower triangular matrix with strictly positive diagonal entries. The exponent in the joint probability density function can then be transformed in the following way:

$$(\mathbf{x}-\boldsymbol{\mu})^{\mathrm{T}}\, \mathbf{C}^{-1}\, (\mathbf{x}-\boldsymbol{\mu}) = \mathbf{z}^{\mathrm{T}} \mathbf{z},$$

where the new random variables are introduced by the linear transformation $\mathbf{z} = \mathbf{L}^{-1}(\mathbf{x}-\boldsymbol{\mu})$. The initial variables can be obtained via the inverse linear transformation

$$\mathbf{x} = \boldsymbol{\mu} + \mathbf{L}\, \mathbf{z}.$$

Changing variables by using the linear transformation (17) in the integral (10), and taking into account that the Jacobian matrix for this transformation is $\mathbf{L}$ with $\det \mathbf{L} = (\det \mathbf{C})^{1/2} = 1$, gives for the integral (10)

$$I = \frac{\lambda^{1/2}}{(2\pi)^{m/2}} \int_{\mathbb{R}^m} F(\boldsymbol{\mu} + \mathbf{L}\, \mathbf{z})\, \exp\left(-\frac{\lambda^{1/m}}{2}\, \mathbf{z}^{\mathrm{T}} \mathbf{z}\right) \mathrm{d}\mathbf{z}.$$

The Laplace approximation formula for (11) can be found, for example, in [18]. Applying this formula to integral (20) leads, after a few simplifications, to the asymptotic series

$$I \sim \sum_{k=0}^{\infty} c_k\, \lambda^{-k/m},$$

where

$$c_0 = F(\boldsymbol{\mu}), \qquad c_1 = \frac{1}{2} \sum_{i=1}^{m} \frac{\partial^2}{\partial z_i^2}\, F(\boldsymbol{\mu} + \mathbf{L}\, \mathbf{z})\bigg|_{\mathbf{z}=0},$$

and the higher-order coefficients involve higher derivatives of $F$. If the expansion is truncated at $k = 0$, the obtained formula is usually referred to as the leading-order Laplace approximation.
For many applications this is sufficient, but not for ours: as will be shown below, the first nontrivial term in the asymptotic expansion for integrals (6)–(9) is linear with respect to $\lambda^{-1/m}$. Therefore, we retain the first two terms in the asymptotic expansion:

$$I \approx F(\boldsymbol{\mu}) + \frac{\lambda^{-1/m}}{2} \sum_{i=1}^{m} \frac{\partial^2}{\partial z_i^2}\, F(\boldsymbol{\mu} + \mathbf{L}\, \mathbf{z})\bigg|_{\mathbf{z}=0}.$$

By introducing the $m$-dimensional gradient and Laplace operators, defined as

$$\nabla_{\mathbf{z}} = \left(\frac{\partial}{\partial z_1}, \ldots, \frac{\partial}{\partial z_m}\right)^{\mathrm{T}} \qquad \text{and} \qquad \Delta_{\mathbf{z}} = \nabla_{\mathbf{z}}^{\mathrm{T}}\, \nabla_{\mathbf{z}},$$

respectively, one can write (23) in a compact form:

$$I \approx F(\boldsymbol{\mu}) + \frac{\lambda^{-1/m}}{2}\, \Delta_{\mathbf{z}} F \Big|_{\mathbf{z}=0}.$$

The subscript next to the nabla operator (e.g., $\mathbf{z}$ in $\nabla_{\mathbf{z}}$) will be used to indicate explicitly the variables with respect to which the partial derivatives are taken. Asymptotic formulas (23)–(25) can now be applied for the estimation of the integrals (6)–(9) used in the calculation of sensitivity indices.
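The two-term approximation can be exercised on a case with a closed-form expectation. In the sketch below (an illustration, not the paper's code), $F(x, y) = e^x \cos y$ with independent normal inputs, for which $\mathrm{E}[F]$ is known exactly; the two-term formula, written in the initial variables as $F(\boldsymbol{\mu}) + \tfrac{1}{2}\mathrm{tr}(\boldsymbol{\Sigma}\mathbf{H})$, is accurate to fourth order in the input scale:

```python
import math

def two_term_expectation(F, hess, mu, cov):
    """Two-term approximation E[F(X)] ≈ F(mu) + 0.5 * tr(cov * H(mu))."""
    n = len(mu)
    H = hess(mu)
    tr = sum(cov[i][j] * H[j][i] for i in range(n) for j in range(n))
    return F(mu) + 0.5 * tr

# F(x, y) = exp(x) * cos(y): for independent X ~ N(0, sx^2), Y ~ N(0, sy^2)
# the exact expectation is exp(sx^2 / 2) * exp(-sy^2 / 2)
F = lambda x: math.exp(x[0]) * math.cos(x[1])
hess = lambda x: [[math.exp(x[0]) * math.cos(x[1]), -math.exp(x[0]) * math.sin(x[1])],
                  [-math.exp(x[0]) * math.sin(x[1]), -math.exp(x[0]) * math.cos(x[1])]]

for s in (0.2, 0.05):
    cov = [[s * s, 0.0], [0.0, (s / 2.0) ** 2]]
    exact = math.exp(s * s / 2.0 - (s / 2.0) ** 2 / 2.0)
    approx = two_term_expectation(F, hess, [0.0, 0.0], cov)
    print(s, abs(exact - approx))   # error shrinks roughly like s**4
```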

##### 2.3. Calculation of Sensitivity Indices
###### 2.3.1. Variance

In order to calculate the parameter $D$, given by (7), one can use the well-known statistical formula for the variance:

$$D = \mathrm{E}[f^2] - \big(\mathrm{E}[f]\big)^2,$$

where $F = f^2$ and $F = f$ are substituted into (10) in turn. As one can see from (26), two integrations are needed: for $\mathrm{E}[f^2]$ and for $\mathrm{E}[f]$ (for integrals (6) and (7) the effective dimensionality is $m = n$). We will approximate these integrals independently to the second order, that is, keeping the terms up to $\lambda^{-1/n}$. The application of the approximation formula (25) to the first term in (26) yields

$$\mathrm{E}[f^2] \approx f^2 + \lambda^{-1/n} \Big( f\, \Delta_{\mathbf{z}} f + (\nabla_{\mathbf{z}} f)^{\mathrm{T}}\, \nabla_{\mathbf{z}} f \Big) \Big|_{\mathbf{z}=0}.$$

Note that the vector algebra formula for the Laplacian of the product of two scalar fields $a$ and $b$,

$$\Delta(a b) = a\, \Delta b + 2\, (\nabla a)^{\mathrm{T}}\, \nabla b + b\, \Delta a,$$

was used in the above derivation.

The approximation formula for $\mathrm{E}[f]$ can be obtained by a simple substitution of $f$ instead of $f^2$ into (25), that is,

$$\mathrm{E}[f] \approx f + \frac{\lambda^{-1/n}}{2}\, \Delta_{\mathbf{z}} f \Big|_{\mathbf{z}=0}.$$

Keeping the terms up to the second order with respect to $\lambda^{-1/n}$ in the asymptotic approximation of $\big(\mathrm{E}[f]\big)^2$ gives

$$\big(\mathrm{E}[f]\big)^2 \approx f^2 + \lambda^{-1/n}\, f\, \Delta_{\mathbf{z}} f \Big|_{\mathbf{z}=0}.$$

For the sake of the derivation in the next section, let us rewrite (30) in a different form:

$$\lambda^{-1/n}\, \Delta_{\mathbf{z}} f \Big|_{\mathbf{z}=0} = \mathrm{tr}\big(\boldsymbol{\Sigma}\, \mathbf{H}\big),$$

where $\mathrm{tr}$ denotes the trace of a matrix and $\mathbf{H}$ is the Hessian matrix of the function $f$ at the point $\boldsymbol{\mu}$, with the matrix elements defined as

$$H_{ij} = \frac{\partial^2 f}{\partial x_i\, \partial x_j}\bigg|_{\mathbf{x} = \boldsymbol{\mu}}.$$

After substitution of (27) and (30) into (26), one obtains the asymptotic approximation of the variance of $f$:

$$D \approx \lambda^{-1/n}\, (\nabla_{\mathbf{z}} f)^{\mathrm{T}}\, \nabla_{\mathbf{z}} f \Big|_{\mathbf{z}=0}.$$

Returning to the initial variables and taking into account that the gradient operator transforms as

$$\nabla_{\mathbf{z}} = \mathbf{L}^{\mathrm{T}}\, \nabla_{\mathbf{x}},$$

where $\nabla_{\mathbf{x}}$ is the gradient with respect to the “initial” variables (where $\mathbf{x} = \boldsymbol{\mu} + \mathbf{L}\, \mathbf{z}$), (33) becomes

$$D \approx \lambda^{-1/n}\, (\nabla_{\mathbf{x}} f)^{\mathrm{T}}\, \mathbf{L}\, \mathbf{L}^{\mathrm{T}}\, \nabla_{\mathbf{x}} f \Big|_{\mathbf{x} = \boldsymbol{\mu}}.$$

Furthermore, taking into account properties (13) and (15), one obtains

$$D \approx (\nabla_{\mathbf{x}} f)^{\mathrm{T}}\, \boldsymbol{\Sigma}\, \nabla_{\mathbf{x}} f \Big|_{\mathbf{x} = \boldsymbol{\mu}}.$$

Finally, by introducing the sensitivity vector

$$\mathbf{S} = \nabla_{\mathbf{x}} f \Big|_{\mathbf{x} = \boldsymbol{\mu}},$$

the asymptotic formula for $D$ can be written as

$$D \approx \mathbf{S}^{\mathrm{T}}\, \boldsymbol{\Sigma}\, \mathbf{S}.$$
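This sandwich formula is easy to verify numerically. The model, mean, and covariance values in the sketch below are hypothetical and chosen for illustration only; the sketch compares $\mathbf{S}^{\mathrm{T}}\boldsymbol{\Sigma}\mathbf{S}$ with a Monte Carlo estimate of the output variance for correlated normal inputs:

```python
import math
import random

def sandwich_variance(grad_mu, cov):
    """First-order uncertainty propagation: D ≈ S^T Sigma S."""
    n = len(grad_mu)
    return sum(grad_mu[i] * cov[i][j] * grad_mu[j]
               for i in range(n) for j in range(n))

# hypothetical nonlinear model with a small correlated 2x2 covariance
f = lambda x: x[0] * x[1] + x[0] ** 2
mu = [2.0, 3.0]
cov = [[1e-4, 2e-5], [2e-5, 4e-4]]
S = [mu[1] + 2 * mu[0], mu[0]]          # analytic gradient at mu: [7.0, 2.0]
D_asym = sandwich_variance(S, cov)

# Monte Carlo check: sample correlated normals via Cholesky of the 2x2 cov
l11 = math.sqrt(cov[0][0])
l21 = cov[1][0] / l11
l22 = math.sqrt(cov[1][1] - l21 ** 2)
rng = random.Random(3)
ys = []
for _ in range(200_000):
    z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
    ys.append(f([mu[0] + l11 * z1, mu[1] + l21 * z1 + l22 * z2]))
m = sum(ys) / len(ys)
D_mc = sum((y - m) ** 2 for y in ys) / (len(ys) - 1)
print(D_asym, D_mc)   # the two estimates agree to within a few percent
```

Because the covariance is small, the quadratic term of the model contributes only at second order, and the first-order sandwich estimate already reproduces the sampled variance closely.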

###### 2.3.2. Numerator in the Formula for MSI

Formula (8) contains an integrand which depends on the variables from $\mathbf{x}_u$, $\mathbf{x}_v$, and $\mathbf{x}'_v$. Let us introduce the extended vector of input variables:

$$\tilde{\mathbf{x}} = \begin{pmatrix} \mathbf{x}_u \\ \mathbf{x}_v \\ \mathbf{x}'_v \end{pmatrix}.$$

Note that the tilde symbol will be used for extended variables. By the assumptions discussed previously, $\mathbf{X}_u$, $\mathbf{X}_v$, and $\mathbf{X}'_v$ are statistically independent; hence their joint probability density function factorizes. Therefore, the random vector $\tilde{\mathbf{X}}$ has the mean $\tilde{\boldsymbol{\mu}} = (\boldsymbol{\mu}_u^{\mathrm{T}}, \boldsymbol{\mu}_v^{\mathrm{T}}, \boldsymbol{\mu}_v^{\mathrm{T}})^{\mathrm{T}}$ and the covariance matrix $\tilde{\boldsymbol{\Sigma}}$, which can be written in the following block form:

$$\tilde{\boldsymbol{\Sigma}} = \begin{pmatrix} \boldsymbol{\Sigma}_u & 0 & 0 \\ 0 & \boldsymbol{\Sigma}_v & 0 \\ 0 & 0 & \boldsymbol{\Sigma}_v \end{pmatrix}.$$

Let us introduce the auxiliary functions $g(\tilde{\mathbf{x}}) = f(\mathbf{x}_u, \mathbf{x}_v)$ and $h(\tilde{\mathbf{x}}) = f(\mathbf{x}_u, \mathbf{x}'_v)$ for convenience. Applying formula (28) to the product $g h$ and following the path used in the previous section, one obtains the asymptotic approximation of the first term in (8):

$$\mathrm{E}[g h] \approx f^2 + (\nabla_{\tilde{\mathbf{x}}}\, g)^{\mathrm{T}}\, \tilde{\boldsymbol{\Sigma}}\, \nabla_{\tilde{\mathbf{x}}}\, h + \frac{1}{2}\, f \Big[ \mathrm{tr}\big(\tilde{\boldsymbol{\Sigma}}\, \tilde{\mathbf{H}}_g\big) + \mathrm{tr}\big(\tilde{\boldsymbol{\Sigma}}\, \tilde{\mathbf{H}}_h\big) \Big],$$

where $\tilde{\mathbf{H}}_g$ and $\tilde{\mathbf{H}}_h$ are the Hessian matrices for $g$ and $h$, respectively, at $\tilde{\boldsymbol{\mu}}$, and all functions and derivatives on the right-hand side are evaluated at the mean. As follows from the definitions of the functions $g$ and $h$, the gradients have the block structure

$$\nabla_{\tilde{\mathbf{x}}}\, g = \begin{pmatrix} \mathbf{S}_u \\ \mathbf{S}_v \\ 0 \end{pmatrix}, \qquad \nabla_{\tilde{\mathbf{x}}}\, h = \begin{pmatrix} \mathbf{S}_u \\ 0 \\ \mathbf{S}_v \end{pmatrix},$$

where $\mathbf{S}_u$ and $\mathbf{S}_v$ are the sensitivity vectors with respect to $\mathbf{x}_u$ and $\mathbf{x}_v$. From the definitions of $g$ and $h$ it also follows that $\mathrm{tr}(\tilde{\boldsymbol{\Sigma}} \tilde{\mathbf{H}}_g) = \mathrm{tr}(\tilde{\boldsymbol{\Sigma}} \tilde{\mathbf{H}}_h) = \mathrm{tr}(\boldsymbol{\Sigma} \mathbf{H})$. After subtracting (31) from (44), the trace terms cancel and one obtains the following approximation for the factor $D_u$ in the formula for the main sensitivity index:

$$D_u \approx (\nabla_{\tilde{\mathbf{x}}}\, g)^{\mathrm{T}}\, \tilde{\boldsymbol{\Sigma}}\, \nabla_{\tilde{\mathbf{x}}}\, h = \mathbf{S}_u^{\mathrm{T}}\, \boldsymbol{\Sigma}_u\, \mathbf{S}_u.$$

###### 2.3.3. Numerator in the Formula for TSI

The calculation of $D_u^{\mathrm{tot}}$ requires the calculation of integral (9). In this section let us introduce the extended variable vector as the column vector

$$\tilde{\mathbf{x}} = \begin{pmatrix} \mathbf{x}_u \\ \mathbf{x}_v \\ \mathbf{x}'_u \end{pmatrix}.$$

The vector $\tilde{\mathbf{X}}$ has the mean $\tilde{\boldsymbol{\mu}} = (\boldsymbol{\mu}_u^{\mathrm{T}}, \boldsymbol{\mu}_v^{\mathrm{T}}, \boldsymbol{\mu}_u^{\mathrm{T}})^{\mathrm{T}}$ and a covariance matrix with the following block representation:

$$\tilde{\boldsymbol{\Sigma}} = \begin{pmatrix} \boldsymbol{\Sigma}_u & 0 & 0 \\ 0 & \boldsymbol{\Sigma}_v & 0 \\ 0 & 0 & \boldsymbol{\Sigma}_u \end{pmatrix}.$$

Introducing, for convenience, the auxiliary functions $g(\tilde{\mathbf{x}}) = f(\mathbf{x}_u, \mathbf{x}_v)$ and $h(\tilde{\mathbf{x}}) = f(\mathbf{x}'_u, \mathbf{x}_v)$, one can write the function $F$ in (10) as

$$F(\tilde{\mathbf{x}}) = \frac{1}{2}\, \big(g(\tilde{\mathbf{x}}) - h(\tilde{\mathbf{x}})\big)^2.$$

The Laplace approximation for $D_u^{\mathrm{tot}}$ can be obtained by substituting function (48) into formula (25). From the definitions of the functions $g$ and $h$ it follows that $g(\tilde{\boldsymbol{\mu}}) = h(\tilde{\boldsymbol{\mu}})$, and the first term in (49) vanishes. The second term can be transformed using the identity $\Delta\big[\tfrac{1}{2}(g-h)^2\big] = (g-h)\, \Delta(g-h) + \big|\nabla (g-h)\big|^2$, of which only the last contribution survives at $\tilde{\mathbf{x}} = \tilde{\boldsymbol{\mu}}$. Taking into account the structure of the extended covariance matrix, given by (47), and the block structure of the gradients in (50), given by

$$\nabla_{\tilde{\mathbf{x}}}\, (g - h) = \begin{pmatrix} \mathbf{S}_u \\ 0 \\ -\mathbf{S}_u \end{pmatrix},$$

one obtains the following formula:

$$D_u^{\mathrm{tot}} \approx \frac{1}{2}\, \big(\nabla_{\tilde{\mathbf{x}}}\, (g-h)\big)^{\mathrm{T}}\, \tilde{\boldsymbol{\Sigma}}\, \nabla_{\tilde{\mathbf{x}}}\, (g-h) = \mathbf{S}_u^{\mathrm{T}}\, \boldsymbol{\Sigma}_u\, \mathbf{S}_u.$$

###### 2.3.4. Asymptotic Formulas for the Global Sensitivity Indices

In the previous sections we have obtained approximation formulas for the integrals involved in the calculation of the global sensitivity indices. As we have demonstrated, these integrals can be approximated with an asymptotic series in the small parameter $\lambda^{-1/m}$, where the coefficients depend on the derivatives of the model function with respect to the input variables. Based on this result, the asymptotic formula for the sensitivity indices $S_u$ and $S_u^{\mathrm{tot}}$ can be written as a ratio of the corresponding asymptotic expansions, with the first two terms of the Maclaurin series with respect to the small parameter written explicitly on the right-hand side of (53). Hence the first nontrivial term in the approximation of the sensitivity indices is the ratio of the first nonvanishing terms in the integral approximations obtained above. Thus, the approximation for the MSI can be obtained by dividing (45) by (37):

$$S_u \approx \frac{\mathbf{S}_u^{\mathrm{T}}\, \boldsymbol{\Sigma}_u\, \mathbf{S}_u}{\mathbf{S}^{\mathrm{T}}\, \boldsymbol{\Sigma}\, \mathbf{S}}.$$

Similarly, the approximation for the TSI is

$$S_u^{\mathrm{tot}} \approx \frac{\mathbf{S}_u^{\mathrm{T}}\, \boldsymbol{\Sigma}_u\, \mathbf{S}_u}{\mathbf{S}^{\mathrm{T}}\, \boldsymbol{\Sigma}\, \mathbf{S}}.$$
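In practice the asymptotic indices reduce to a few quadratic forms. The sketch below (with a hypothetical sensitivity vector and block-diagonal covariance, chosen purely for illustration) evaluates $S_u \approx \mathbf{S}_u^{\mathrm{T}}\boldsymbol{\Sigma}_u\mathbf{S}_u / \mathbf{S}^{\mathrm{T}}\boldsymbol{\Sigma}\mathbf{S}$ for each uncorrelated block:

```python
def asymptotic_indices(S, cov, blocks):
    """Asymptotic GSA indices: S_u ≈ S_u^T Sigma_u S_u / (S^T Sigma S),
    evaluated for each uncorrelated block u of the covariance matrix."""
    def quad(ix):
        return sum(S[i] * cov[i][j] * S[j] for i in ix for j in ix)
    D = quad(range(len(S)))
    return [quad(b) / D for b in blocks]

# hypothetical 4-input problem with two uncorrelated 2x2 covariance blocks
S = [1.0, -2.0, 0.5, 3.0]                 # sensitivity vector at the mean
cov = [[4e-4, 1e-4, 0.0, 0.0],
       [1e-4, 9e-4, 0.0, 0.0],
       [0.0, 0.0, 1e-4, -5e-5],
       [0.0, 0.0, -5e-5, 4e-4]]
idx = asymptotic_indices(S, cov, [[0, 1], [2, 3]])
print([round(s, 3) for s in idx], round(sum(idx), 3))
```

For a block-diagonal covariance the block indices sum to one, reflecting the additivity of the model in the asymptotic limit.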

#### 3. Discussion

Formula (37) relates the input uncertainties, given by the covariance matrix $\boldsymbol{\Sigma}$, to the output uncertainty, characterized by the variance $D$, and hence describes the propagation of error. Contrary to what one might expect from (23), the approximation formula (36) involves the first-order partial derivatives of $f$ and not the zeroth- and second-order derivatives.

As one can see, the structure of the first nonvanishing term in this expression corresponds to the famous sandwich formula for error propagation used in the derivative-based LSA [3, 4]. Note that in this work the components of the sensitivity vector are defined as $S_i = \partial f / \partial x_i |_{\mathbf{x} = \boldsymbol{\mu}}$, whereas in the literature dimensionless sensitivity coefficients are often used.

Formulas (45) and (52) have the same sandwich structure as (37), but contain only the sensitivities and uncertainties with respect to the input variables $\mathbf{x}_u$. Moreover, the factor $D_u$ matches the factor $D_u^{\mathrm{tot}}$ under the conditions used in the approximation, and the corresponding sensitivity indices coincide. As a result, the sensitivity indices given by (54) and (55) do not differ in the first nonvanishing term of the approximation; the difference between $S_u$ and $S_u^{\mathrm{tot}}$ may appear only in the higher-order terms. Formulas (54) and (55) involve, besides the elements of the covariance matrix, only the gradients of the model function, which are proportional to the coefficients of the linear expansion of the model at $\boldsymbol{\mu}$. This indicates that, in the considered limit, the model is approximately linear. As a consequence, interactions are negligible and $S_u^{\mathrm{tot}}$ coincides with $S_u$, which is reflected by results (54) and (55).

The remaining question of this analysis is the criterion for the applicability of the asymptotic results. We could not derive such a criterion directly for the sensitivity indices. An indirect indication can be obtained by using (27) and by taking into account that the calculation of the sensitivity indices involves the calculation of integrals of the form (10): the asymptotic results are expected to be applicable when the second-order contributions in these integrals are small compared with the retained terms.

#### 4. Example

Adetula and Bokov [15, 16] discussed a test problem where the model represents the two-group infinite multiplication factor, described by the analytical formula [19]

$$k_\infty = \frac{\nu\Sigma_{f1} + \nu\Sigma_{f2}\, \dfrac{\Sigma_r}{\Sigma_{c2} + \Sigma_{f2}}}{\Sigma_{c1} + \Sigma_{f1} + \Sigma_r},$$

where the input variables represent the following macroscopic cross-sections: $\Sigma_{c1}$ is the fast capture, $\Sigma_{c2}$ is the thermal capture, $\Sigma_{f1}$ is the fast fission, $\Sigma_{f2}$ is the thermal fission, $\nu\Sigma_{f1}$ is the fast neutron production, $\nu\Sigma_{f2}$ is the thermal neutron production, and $\Sigma_r$ is the fast removal cross-section. The mean values of the cross-sections and the covariance matrix used were based on the data given in [20], where the cross-sections are given in $\mathrm{cm}^{-1}$ and the variances in $\mathrm{cm}^{-2}$. The test covariance matrix has three uncorrelated blocks of cross-sections, for which the calculation of sensitivity indices is meaningful.
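To illustrate how the sensitivity vector for such a model can be obtained, the sketch below evaluates the standard two-group $k_\infty$ formula and its gradient by central finite differences. The cross-section values are hypothetical LWR-like numbers chosen for illustration only; they are not the data of [20]:

```python
def k_inf(sig):
    """Two-group infinite multiplication factor (standard two-group form).
    sig = (c1, c2, f1, f2, nf1, nf2, r): fast/thermal capture, fast/thermal
    fission, fast/thermal production (nu * Sigma_f), and fast removal."""
    c1, c2, f1, f2, nf1, nf2, r = sig
    return (nf1 + nf2 * r / (c2 + f2)) / (c1 + f1 + r)

# hypothetical LWR-like macroscopic cross-sections in cm^-1 (NOT the data of [20])
sig0 = (0.008, 0.050, 0.003, 0.080, 0.007, 0.195, 0.020)

def sensitivity_vector(x0, h=1e-6):
    """Central finite-difference gradient of k_inf at x0."""
    grad = []
    for i in range(len(x0)):
        xp, xm = list(x0), list(x0)
        xp[i] += h
        xm[i] -= h
        grad.append((k_inf(xp) - k_inf(xm)) / (2.0 * h))
    return grad

print(round(k_inf(sig0), 4))                      # about 1.19 for these data
print([round(g, 2) for g in sensitivity_vector(sig0)])
```

Such a gradient, together with the block covariance matrix, is all that the asymptotic formulas of Section 2.3 require.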

Values of the global sensitivity indices calculated using the asymptotic formulas obtained in our work, together with the values reported in [15, 16], are presented in Table 1. The output uncertainty, characterized by the variance ($D$) or the standard deviation ($\sqrt{D}$), calculated with these two methods, is also given in Table 1. The values taken from [15, 16] were calculated with the sparse grid quadrature, as it was found to be the most accurate amongst all the applied quadratures. (The other quadratures were Monte Carlo and quasi-Monte Carlo. The values obtained with all three quadratures were in good agreement; this indicates that the quadratures have converged and that the result obtained may be taken as the reference.) As one can see from Table 1, the results for the sensitivity indices and uncertainties are in good agreement and differ, for only one case, in the fourth significant digit for the sensitivity indices and in the third significant digit for the variance. It is worth mentioning that this discrepancy is still within the error of the numerical integration as reported in [15, 16]. The results given in Table 1 confirm the validity of the asymptotic approximation for the considered test problem. Similar results have been obtained for the second case discussed in [15, 16], namely the case of the diagonal covariance matrix. These results also confirm the correctness of the hypothesis given in [15, 16] and discussed earlier in Section 1 of this paper.

Table 1: Sensitivity indices and output uncertainty (both dimensionless) as calculated with the asymptotic formula and multidimensional quadrature.

#### 5. Conclusions

The problem of calculating the uncertainty and sensitivity parameters in the framework of the variance-based Global Sensitivity Analysis was addressed. It was assumed that the input variables are normally distributed and block-wise correlated. Under the additional assumption that the covariance matrix is small in some norm, the Laplace approximation technique was applied and the asymptotic approximation of the output variance and the global sensitivity indices was obtained.

Our results demonstrate that the first nontrivial term in the asymptotic expansion of the output uncertainty (variance) has the so-called sandwich structure well known from local sensitivity analysis, that is, it depends linearly on the covariance matrix and quadratically on the partial derivatives of the output with respect to the inputs. The dependence of the uncertainty on the first-order partial derivatives only means that, under the above assumptions, the model can be considered to be approximately linear and, as a consequence, the interaction (but not the correlation) between inputs is negligible. This conclusion is supported by the asymptotic approximation obtained for the sensitivity indices: the expression for the main sensitivity indices coincides with the expression for the total sensitivity indices, thus indicating that interactions between input variables can be neglected.

A test problem with realistic values of multigroup cross-sections and their covariances was considered. The values of the global sensitivity indices calculated via the asymptotic formulas were found to be in good agreement with the values calculated from the exact formulas using numerical integration. This demonstrates that the results obtained may be not only of academic interest but of practical interest as well. The theoretical and numerical results obtained in this work indicate that, from the point of view of numerical calculations, the sparse-grid integration-based method for uncertainty propagation can be an alternative to the perturbation/derivative methods used in local sensitivity analysis. Moreover, the results for the uncertainty obtained by these two methods are interchangeable in the limit of small cross-section errors.

#### Acknowledgments

The author is grateful to Dr. Djordje I. Tomašević for inspiring this study and for pointing out an error in the initial paper. The author is also thankful to the South African National Research Foundation (NRF) for the financial support.

#### References

1. I. M. Sobol', “Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates,” Mathematics and Computers in Simulation, vol. 55, no. 1–3, pp. 271–280, 2001.
2. A. Saltelli and I. M. Sobol', “Sensitivity analysis for nonlinear mathematical models: numerical experience,” Matematicheskoe Modelirovanie, vol. 7, no. 11, pp. 16–28, 1995.
3. A. Saltelli, S. Tarantola, F. Campolongo, and M. Ratto, Sensitivity Analysis in Practice: A Guide to Assessing Scientific Models, John Wiley & Sons, 2004.
4. D. G. Cacuci, M. Ionescu-Bujor, and I. M. Navon, Sensitivity and Uncertainty Analysis: Applications To Large-Scale Systems, CRC Press, 2005.
5. J. Jacques, C. Lavergne, and N. Devictor, “Sensitivity analysis in presence of model uncertainty and correlated inputs,” Reliability Engineering and System Safety, vol. 91, no. 10-11, pp. 1126–1134, 2006.
6. N. Lüdtke, S. Panzeri, M. Brown et al., “Information-theoretic sensitivity analysis: a general method for credit assignment in complex networks,” Journal of the Royal Society Interface, vol. 5, no. 19, pp. 223–235, 2008.
7. B. Iooss and M. Ribatet, “Global sensitivity analysis of computer models with functional inputs,” Reliability Engineering and System Safety, vol. 94, no. 7, pp. 1194–1204, 2009.
8. I. M. Sobol', “Sensitivity estimates for nonlinear mathematical models,” Mathematical Modeling and Computational Experiment, vol. 1, pp. 407–414, 1993.
9. E. Novak and K. Ritter, “High dimensional integration of smooth functions over cubes,” Numerische Mathematik, vol. 75, no. 1, pp. 79–97, 1996.
10. E. Novak and K. Ritter, “The curse of dimension and a universal method for numerical integration,” in Multivariate Approximation and Splines, vol. 125 of International Series of Numerical Mathematics, p. 177, Birkhäuser, 1998.
11. Ch. Lemieux and A. B. Owen, “Quasi-regression and the relative importance of the ANOVA components of a function,” in Monte Carlo and Quasi-Monte Carlo Methods, 2000, pp. 331–344, Springer, 2002.
12. W. Chen, R. Jin, and A. Sudjianto, “Analytical variance-based global sensitivity analysis in simulation-based design under uncertainty,” Journal of Mechanical Design, vol. 127, no. 5, pp. 875–886, 2005.
13. N. H. Kim, H. Wang, and N. V. Queipo, “Adaptive reduction of random variables using global sensitivity in reliability-based optimisation,” International Journal of Reliability and Safety, vol. 1, pp. 102–119, 2006.
14. A. B. Owen, “Orthogonal arrays for computer experiments, integration and visualization,” Statistica Sinica, vol. 2, pp. 439–452, 1992.
15. B. A. Adetula and P. M. Bokov, “Computational method for global sensitivity analysis of reactor neutronic parameters,” submitted to Science and Technology of Nuclear Installations, special issue on Uncertainty Analysis in Reactor Physics Modeling.
16. B. A. Adetula and P. M. Bokov, “Method for calculation of global sensitivity indices for few-group cross-section-dependent problems,” in Proceedings of the International Conference on Mathematics and Computational Methods Applied to Nuclear Science and Engineering (MC '11), Rio de Janeiro, Brazil, May 2011.
17. R. A. Johnson and D. W. Wichern, Applied Multivariate Statistical Analysis, Prentice Hall, Englewood Cliffs, NJ, USA, 1998.
18. P. D. Miller, Applied Asymptotic Analysis, American Mathematical Society, 2006.
19. W. M. Stacey, Nuclear Reactor Physics, Wiley-Interscience, 2nd edition, 2007.
20. M. Williams, M. Jessee, R. Ellis, and B. Rearden, “Sensitivity/uncertainty analysis for OECD UAM benchmark of peach bottom BWR,” in Proceedings of the Uncertainty Analysis and Modelling Workshop (UAM '04), Pisa, Italy, April 2010.