Research Article  Open Access
W. Wieselquist, T. Zhu, A. Vasiliev, H. Ferroukhi, "PSI Methodologies for Nuclear Data Uncertainty Propagation with CASMO5M and MCNPX: Results for OECD/NEA UAM Benchmark Phase I", Science and Technology of Nuclear Installations, vol. 2013, Article ID 549793, 15 pages, 2013. https://doi.org/10.1155/2013/549793
PSI Methodologies for Nuclear Data Uncertainty Propagation with CASMO5M and MCNPX: Results for OECD/NEA UAM Benchmark Phase I
Abstract
Capabilities for uncertainty quantification (UQ) with respect to nuclear data have been developed at PSI in recent years and applied to the UAM benchmark. The guiding principle for the PSI UQ development has been to implement nonintrusive “black box” UQ techniques in state-of-the-art, production-quality codes already used for routine analyses. Two complementary UQ techniques have been developed thus far: (i) direct perturbation (DP) and (ii) stochastic sampling (SS). The DP technique is, first and foremost, a robust and versatile sensitivity coefficient calculation, applicable to all types of input and output. Using standard uncertainty propagation, the sensitivity coefficients are folded with variance/covariance matrices (VCMs), leading to a local, first-order UQ method. The complementary SS technique samples uncertain inputs according to their joint probability distributions and provides a global, all-order UQ method. This paper describes both DP and SS as implemented in the lattice physics code CASMO5MX (a special PSI-modified version of CASMO5M) and a preliminary SS technique implemented in MCNPX, routinely used in criticality safety and fluence analyses. Results are presented for the UAM benchmark exercises I1 (cell) and I2 (assembly).
1. Introduction
The OECD/NEA benchmark for uncertainty analysis in modeling (UAM) was launched a few years ago to promote the development, assessment, and integration of comprehensive uncertainty quantification (UQ) methods in best-estimate multiphysics coupled simulations of LWRs during normal as well as transient conditions [1]. Although very ambitious by nature (due to the complexity of treating all potential sources of uncertainty), the benchmark has nevertheless achieved one of its first objectives, namely, to constitute a major (if not the main) international framework driving forward the development of methodologies for the propagation of nuclear data uncertainties in reactor simulations. This topic was proposed as the first phase of the benchmark, and since research in precisely this area was at the same time being launched within the STARS project [2] at the Paul Scherrer Institut (PSI), participation in this benchmark was considered a timely and highly valuable opportunity to complement the development and assessment of the PSI methods. In that context, two parallel lines of development were initiated at PSI. On the one hand, the development of a UQ methodology was launched for the propagation of neutronic uncertainties in the deterministic CASMO/SIMULATE/SIMULATE-3K chain of reactor analysis codes used for safety assessment of the Swiss reactors. On the other hand, the development of a corresponding UQ methodology for neutron transport simulations with the stochastic continuous-energy code MCNPX, with primary emphasis on criticality safety, was recently initiated. In this paper, the principles and concepts of both methodologies are first summarized. Then, the results obtained for the UAM Phase I benchmark cases analyzed so far are presented. The primary focus is given to the CASMO5M analyses conducted so far for Phase I1, aimed at cell physics, and Phase I2, dedicated to lattice physics.
Regarding MCNPX, the first set of solutions obtained for the hot-zero-power pin cell cases of Phase I1 will also be presented.
1.1. Motivation
In order to rigorously establish the accuracy (or bias) of so-called best-estimate codes, the precision (or uncertainty) must be quantified. (The measure of accuracy is bias: low accuracy implies a large bias and high accuracy implies a small bias. The measure of precision is uncertainty: low precision implies large uncertainty and high precision implies small uncertainty.) This includes propagation of input uncertainty (all inputs are really distributions) to output uncertainty, which is the basic task of UQ. The most straightforward benefit of UQ is the new information about the distribution of outputs, which can be used to qualify designs and/or provide confidence in results. However, with UQ a much more rigorous validation procedure also becomes available, and the value of this should not be underestimated. With UQ, one can compare calculations with uncertainty to experimental results with uncertainty using overlap testing, instead of the conservative assumption of no uncertainty in calculations or the subjective use of expert judgment to decide whether a calculated value is close enough to a measured one. A best-estimate code (and its validation) should, by definition, avoid such conservative assumptions and expert judgment.
1.2. Preliminaries
Consider an input, $x$, and an output, $y$, with nominal values $x_0$ and $y_0$ and perturbed values $x'$ and $y'$. When sampling perturbed values from a distribution, the $i$th sample of the input is $x_i$ and the corresponding output is $y_i$. With computer codes, there is typically a large set of inputs and outputs, which may be denoted $\bar{x}$ and $\bar{y}$ for the nominal sets and $\bar{x}'$ and $\bar{y}'$ for perturbed sets.
1.3. FirstOrder UQ Using Uncertainty Propagation
The cornerstone of local, first-order UQ methods is the capability to calculate sensitivity coefficients,
$$S = \frac{x_0}{y_0}\,\frac{\partial y}{\partial x}, \qquad (1)$$
which are vital to sensitivity analysis (SA). It is very convenient to introduce the definition of a perturbation factor, $f_x$, such that the perturbed input is $x' = f_x x_0$, and the corresponding output factor, $f_y$, with $y' = f_y y_0$. Thus the sensitivity coefficient may be written simply as
$$S = \frac{\partial f_y}{\partial f_x}. \qquad (2)$$
Nonintrusive SA can then be implemented simply as a numerical differentiation of $f_y$ with respect to $f_x$, referred to here as direct perturbation (DP), as in [3]. Two factors make it difficult to use DP in an automated manner to obtain accurate estimates of $S$: due to finite-precision arithmetic, “too small” perturbations do not change the output significantly, and due to unknown relationships between inputs and outputs, “too small” perturbations for one input may be “too large” for another. With low-order numerical differentiation formulas (e.g., first-order finite differences) especially, “too large” perturbations can greatly increase the approximation error. Due to the relatively high cost of calculations in nuclear simulations, low-order formulas are typically used.
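As an illustration only (not the CASMO5MX implementation), the numerical-differentiation view of DP can be sketched as follows; `model` is a hypothetical stand-in for one code run, and `delta` is the step applied to the perturbation factor:

```python
def dp_sensitivity(model, x0, delta=0.01):
    """Estimate the relative sensitivity S = d(f_y)/d(f_x) at f_x = 1
    by central finite differences; `model` stands in for a code run."""
    y0 = model(x0)
    y_plus = model((1.0 + delta) * x0)    # run at f_x = 1 + delta
    y_minus = model((1.0 - delta) * x0)   # run at f_x = 1 - delta
    # Output factors f_y = y'/y0 evaluated at the two perturbation factors
    return (y_plus / y0 - y_minus / y0) / (2.0 * delta)

# Toy check: y = x**2 has relative sensitivity 2 for any x > 0
S = dp_sensitivity(lambda x: x * x, 3.0)
```

For a quadratic response the central difference is exact, which is why the adaptive schemes described later reserve the parabolic fit for responses with noticeable curvature.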
Although DP can be straightforwardly extended to simultaneously estimate $S$ for multiple outputs, it cannot handle simultaneous input perturbations; that is, input perturbations are always applied one at a time. Thus, with many more input parameters than output parameters, DP is not very efficient. For nuclear data uncertainty propagation, this was the basic reason behind the development of very efficient (but intrusive) perturbation-theory-based algorithms for sensitivity coefficient estimation, for example, in the SCALE code system [4].
Using the calculated sensitivity coefficients in UQ simply requires the classic first-order uncertainty propagation formula [5], shown below for multiple input and output parameters:
$$V_y = S^T V_x S, \qquad (3)$$
with the relative variance/covariance matrix (VCM) of the outputs, $V_y$, expressed in terms of the relative VCM of the inputs, $V_x$, and the sensitivity coefficients, $S$, now a matrix defined as $S_{ij} = \partial f_{y_j}/\partial f_{x_i}$ for input parameter index (row) $i$ and output parameter index (column) $j$.
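The sandwich rule in (3) is straightforward to apply once $S$ and $V_x$ are in hand. A minimal NumPy sketch with made-up numbers for a two-input, one-output case:

```python
import numpy as np

def propagate(S, Vx):
    """First-order 'sandwich' propagation: Vy = S^T Vx S, with S[i, j]
    the sensitivity of output j to input i (rows: inputs, cols: outputs)."""
    return S.T @ Vx @ S

# Two correlated inputs, one output (illustrative values)
Vx = np.array([[0.04, 0.01],
               [0.01, 0.09]])       # relative input VCM
S = np.array([[0.5],
              [1.0]])               # sensitivity coefficients
Vy = propagate(S, Vx)
rel_std = float(np.sqrt(Vy[0, 0]))  # relative standard deviation of the output
```

Note that the off-diagonal covariance contributes to the output variance; neglecting correlations here would understate the result.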
1.4. SamplingBased UQ
Sampling-based UQ, or stochastic sampling (SS), has historically been used for nonlinear systems with few correlated parameters [6]. However, SS is now increasingly applied to all types of UQ, including neutronics, due to its nonintrusive nature [7], its flexibility to handle many uncertain parameters [8], the theory of nonparametric tolerance intervals (i.e., Wilks’ formula), and its global sampling of the solution space. In order to implement an SS method, one simply has to sample inputs from their distributions (choosing appropriate distributions is another matter), run the code with each sampled set, and analyze the distribution of outputs.
1.4.1. Simple Random Sampling from Multivariate Gaussian Distributions
In the case of the distribution of nuclear data, one generally assumes that the input obeys an $n$-dimensional Gaussian (normal) distribution [5]:
$$p(x) = \frac{1}{\sqrt{(2\pi)^n \det V_x}} \exp\!\left(-\frac{1}{2}(x - \mu)^T V_x^{-1} (x - \mu)\right), \qquad (4)$$
with input VCM $V_x$ of dimension $n \times n$, determinant $\det V_x$, and mean $\mu$ of dimension $n$. A matrix of $m$ simple random samples which respects the correlations of the data may be constructed as described in [9].
(1) Decompose the VCM using a “Cholesky-like” decomposition (see below), $V_x = L L^T$, where $L$ is $n \times n$.
(2) Draw $n \times m$ random samples from the standard normal distribution (zero mean and unit variance) and store the results in the matrix $Z$.
(3) The random samples are then given as $x_i = \mu + L z_i$, where $z_i$ is the $i$th column of $Z$.
The term “Cholesky-like” is used because a true Cholesky factorization requires a (square) symmetric positive definite (SPD) matrix, whereas a general VCM can be symmetric positive semidefinite (SPSD), for example, due to perfect (anti)correlation of parameters. In this case the matrix is rank-deficient with rank $r < n$, $L$ is rectangular with dimension $n \times r$, and (4) must use the generalized inverse and pseudodeterminant.
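The three-step sampling recipe above can be sketched with an eigendecomposition-based “Cholesky-like” factorization, which also tolerates SPSD (rank-deficient) VCMs; all names and numbers here are illustrative, not the production implementation:

```python
import numpy as np

def sample_mvn(mu, V, m, rng):
    """Draw m correlated samples from N(mu, V) via a 'Cholesky-like'
    factorization V = L L^T built from the eigendecomposition, which
    remains valid for rank-deficient (SPSD) covariance matrices."""
    w, Q = np.linalg.eigh(V)
    w = np.clip(w, 0.0, None)      # discard tiny negative round-off eigenvalues
    L = Q * np.sqrt(w)             # scale columns: L L^T = Q diag(w) Q^T = V
    Z = rng.standard_normal((V.shape[0], m))
    return mu[:, None] + L @ Z     # each column is one sample x_i

rng = np.random.default_rng(42)
mu = np.array([1.0, 1.0])
V = np.array([[0.04, 0.02],
              [0.02, 0.04]])
X = sample_mvn(mu, V, 20000, rng)
Vhat = np.cov(X)                   # sample VCM should approach V
```

The eigendecomposition costs more than a true Cholesky factorization but avoids failures when perfect (anti)correlations make $V_x$ singular.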
1.4.2. Nonparametric Statistics and Wilks’ Formula
Given $N$ random samples of a quantity, the formula for the tolerance interval in terms of coverage and confidence, without assuming a particular distribution, is known colloquially as Wilks’ formula, due to the seminal work of S. S. Wilks in nonparametric statistics [10]. Nonparametric (or order) statistics is the name given to the set of statistical techniques which do not require the data to belong to a particular distribution (e.g., normal) and frequently require ordering samples, for instance, from least to greatest. For a more complete discussion of nonparametric statistics applied to neutronics calculations, see [8]. In order for Wilks’ formula to be valid, a simple random sampling process must be used; that is, stratified sampling or variance reduction is not allowed according to the theory. For example, with $N = 93$ samples, a two-sided 95%/95% tolerance interval can be declared as $[y_{\min}, y_{\max}]$, where $y_{\min}$ and $y_{\max}$ are the minimum and maximum results from the 93 samples. Such a tolerance interval is guaranteed to contain the (middle) 95% of the distribution with 95% confidence. Note that the required number of samples grows rapidly as the coverage and confidence requirements tighten, with roughly linear behavior on a log-log scale.
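The minimum sample size implied by the two-sided Wilks formula can be found by a simple search; this sketch assumes the standard closed form for the probability that the [min, max] interval achieves the requested coverage:

```python
def wilks_two_sided_n(coverage=0.95, confidence=0.95):
    """Smallest N such that [min, max] of N random samples covers at least
    `coverage` of the distribution with probability >= `confidence`
    (two-sided, distribution-free Wilks formula)."""
    g = coverage
    n = 2
    # P(coverage >= g) = 1 - g**n - n*(1 - g)*g**(n - 1)
    while 1.0 - g**n - n * (1.0 - g) * g**(n - 1) < confidence:
        n += 1
    return n

n9595 = wilks_two_sided_n()  # the familiar two-sided 95%/95% sample size, 93
```

This reproduces the 93-sample figure quoted in the text for a two-sided 95%/95% tolerance interval.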
1.4.3. Sample Statistics
In neutronics UQ, the variance (or standard deviation) is used most often as the measure of uncertainty. With UQ methods based on the uncertainty propagation formula (e.g., DP), the variance of outputs is simply the diagonal of the output VCM. With SS, it is convenient to use the sample variance from sample statistics:
$$s^2 = \frac{1}{N - 1}\sum_{i=1}^{N} (y_i - y_0)^2, \qquad (5)$$
where $y_i$ is the (perturbed) result of sample $i$, $y_0$ is the nominal calculation value, and $N$ is the total number of samples. It is well known in statistics that the sample variance of a normal distribution is a scaled chi-square distribution with $N - 1$ degrees of freedom, which can be used to provide confidence-dependent bounds on the sample variances, as shown in [6].
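A minimal sketch of the sample statistic described above (note that, following the text, deviations are taken about the nominal result rather than the sample mean):

```python
import math

def rel_sample_std_pct(samples, nominal):
    """Relative sample standard deviation in percent, computed about the
    nominal (unperturbed) result with N - 1 normalization."""
    n = len(samples)
    var = sum((y - nominal) ** 2 for y in samples) / (n - 1)
    return 100.0 * math.sqrt(var) / nominal

# Toy eigenvalue samples scattered around a nominal of 1.0
k_samples = [1.005, 0.995, 1.002, 0.998, 1.000]
u = rel_sample_std_pct(k_samples, 1.0)   # roughly 0.38 %
```

With realistic sample sizes the statistic is reported exactly this way in the results tables: as a relative standard deviation in percent.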
2. Methodology
Although both direct perturbation (DP) and stochastic sampling (SS) schemes are “nonintrusive” by nature, some source modifications were necessary in order to develop UQ techniques for the CASMO5M lattice physics code, as CASMO5M’s nuclear data library is stored in a proprietary binary format and “perturbed libraries” could not easily be created.
For the relatively newer developments concerning UQ with MCNPX, ACE format libraries may be created directly and thus no source code modifications of MCNPX are required. The following sections will first describe the CASMO5MX code, then the DP and SS techniques as designed for use with CASMO5MX, and finally the SS technique development for MCNPX.
2.1. General Development of CASMO5MX
The capability to perturb the nuclear data library of the lattice physics code is the first step toward performing any “nonintrusive” UQ with respect to nuclear data. Because of the aforementioned proprietary nature of CASMO5M’s 586-group, ENDF/B-VII.0-based nuclear data library, source code modifications were the easiest way to gain access to this library to perform perturbations. For this purpose, a special module called “PERTXS” and a corresponding cross-section (XS) “perturbation file” were developed. The perturbation file can simply be thought of as a new (optional) input file that specifies nuclear data perturbations to apply to the nominal library at runtime. This PSI-modified version of CASMO5M will hereafter be referred to as CASMO5MX; the DP technique has been described in [3] and the SS technique in [9]. Here, for the reader’s convenience, all necessary elements are reviewed.
2.1.1. Allowed Nuclear Data Perturbations
Currently, CASMO5MX allows nuclear data perturbations to the following microscopic data for all nuclides in the library (ENDF MT numbers in parentheses):
(1) elastic scattering (MT 2),
(2) inelastic scattering (MT 4),
(3) (n,2n) (MT 16),
(4) fission (MT 18),
(5) capture (MT 101),
(6) average neutrons per fission $\bar{\nu}$ (MT 452), and
(7) average fission spectrum $\chi$.
In addition, external utilities have been created to perturb any parameter contained in the standard input file, facilitating sensitivity/uncertainty analysis with respect to such parameters as clad thickness, fuel enrichment, and so forth. With nuclear data perturbations, it is important to understand that perturbations are made relative to the existing data on the library; that is, values on the library are not replaced with new perturbed values but increased/decreased by a perturbation factor $f$.
2.1.2. Perturbation Formulas
A very convenient feature of CASMO5MX is that perturbations may be supplied in any group structure, for example, the 19-group “coarse” structure used by default in CASMO5M for UO2 assembly method of characteristics (MOC) transport calculations, an arbitrary two-group structure, or the full 586-group library structure. Using coarse groups for perturbations keeps data files smaller, and in most cases it has been found that using a very fine group structure (e.g., the 586 library groups) does not significantly alter the final output uncertainty estimates. (A small study of this will be provided later.) Additionally, because the underlying VCM data is in the SCALE6 44-group structure, there is little reason to go beyond that resolution. Inside CASMO5MX, the following perturbation formulas are used to map perturbations from the input group structure (coarse groups indexed by $G$) to the 586-group library structure (library groups indexed by $g$):
$$f_g^{\chi} = \frac{\sum_G w_{gG}\, f_G\, \chi_G}{\sum_G w_{gG}\, \chi_G}, \qquad (6)$$
$$f_g^{\sigma} = \frac{\sum_G w_{gG}\, f_G\, \sigma_G\, \phi_G}{\sum_G w_{gG}\, \sigma_G\, \phi_G}. \qquad (7)$$
Equation (6) defines the perturbation factor applied to the library fission spectrum in library group $g$, with input fission spectrum $\chi_G$ and input perturbation $f_G$. Equation (7) defines the perturbation factor for a cross-section $\sigma_G$ with standard flux weighting $\phi_G$. The weights $w_{gG}$ are integrals whose upper and lower bounds are the union grid boundaries of group $g$ and group $G$; therefore, the weights are only nonzero where groups overlap. If the supplied perturbation group structure and the library group structure are aligned and there is only one group $G$ for one or more groups $g$ (i.e., the input structure is coarser), the formulas reduce considerably to
$$f_g = f_G \quad \text{for } g \in G \qquad (9)$$
(Figure 1). This means one may perturb the library using only relative information, that is, a set of $f_G$ values.
However, when the perturbation group structure is nonaligned with the library, the weights and cross-section factors, for example, $\sigma_G$ and $\phi_G$, do not cancel in (6) and (7), and a dependence on the intragroup weighting functions and cross-sections is introduced. This means that one cannot simply use the perturbations alone: an approximate spectrum, for example, $\phi_G$, and reference values for the data in the perturbation group structure, for example, $\sigma_G$, must be provided as well. In CASMO5MX, an approximate weighting spectrum is assumed and the reference values in the SCALE6 VCM library are used.
The ability to supply perturbations in any group structure effectively gives the user the ability to generate sensitivity profiles at different resolutions for different reactions. For example, to evaluate the order-of-magnitude effect of a particular reaction, two-group perturbations could be supplied. If the sensitivity is high, perturbations in a finer structure could be made to generate a refined sensitivity profile. The limiting resolution is simply that of the underlying 586-group library.
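The group-mapping idea can be illustrated with a simplified sketch in which the energy overlap widths alone serve as weights (the actual code additionally weights by flux and cross-section values); the edges and factors below are made up:

```python
def map_perturbations(fine_edges, coarse_edges, f_coarse):
    """Map coarse-group perturbation factors onto a finer group structure.
    Each fine-group factor is an overlap-width-weighted average of the
    coarse factors (a stand-in for the flux/cross-section weighting used
    in the real mapping). Edges are ascending for simplicity."""
    f_fine = []
    for g in range(len(fine_edges) - 1):
        lo, hi = fine_edges[g], fine_edges[g + 1]
        num = den = 0.0
        for G in range(len(coarse_edges) - 1):
            # width of the overlap between fine group g and coarse group G
            w = max(0.0, min(hi, coarse_edges[G + 1]) - max(lo, coarse_edges[G]))
            num += w * f_coarse[G]
            den += w
        f_fine.append(num / den)
    return f_fine

# Aligned case: each fine group lies inside one coarse group, so the
# coarse factor is simply inherited, as in the reduced formula
f = map_perturbations([0.0, 0.5, 1.0, 1.5, 2.0], [0.0, 1.0, 2.0], [1.02, 0.98])
```

In the aligned case the weighting cancels and each fine group inherits its coarse factor unchanged, which is the behavior of the reduced formula described above.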
2.1.3. Nuclear Data Variance/Covariance Matrices
The uncertainty in groupwise nuclear data is typically expressed only in terms of variance/covariance matrices (VCMs), which implies an underlying Gaussian (normal) distribution of the data. At the single-nuclide, single-reaction level with $n$ energy groups, this is a matrix of size $n \times n$, with the diagonal elements giving the groupwise variance and the off-diagonal elements giving the covariance between two groups. Adjacent groups tend to be highly correlated; for example, it is improbable that the data in one fast group would increase while the next group down would decrease. Component cross-sections (e.g., scattering and capture) tend to be anticorrelated, as they must sum to the total cross-section. Because measurements are frequently made on compounds, not single nuclides, there is additional correlation between some of the single-nuclide data.
With correlations that cannot be neglected and huge datasets (e.g., 300 nuclides with 44 energy groups and 6 reactions is about 80,000 “inputs”), nuclear data uncertainty propagation is difficult and unique. Because this data has only recently begun to be fully utilized, there are few choices for robust and reasonable VCM evaluations. The SCALE6 VCM data [4] is among the most widely used and developed for these purposes and has been used exclusively in this work, with one additional approximation due to current limitations in some of the processing tools: cross-nuclide covariances (e.g., fission in one nuclide anticorrelated with fission in another) are neglected. The data available on the VCM library and the data which may be perturbed with CASMO5MX are for the most part consistent. Two exceptions are that the VCM library contains data for each partial capture reaction (MT = 102 to 109) and that the CASMO5M data library combines elastic, inelastic, (n,2n), and (n,3n) into a single “scattering matrix.” The first issue is easily circumvented by using the uncertainty propagation formula in (3) to combine the partial VCMs for MT = 102 to 109 into a single capture VCM [11]. The second issue, dealing with combined scattering, is described in the next section.
Because the SCALE6 VCM library is provided in a 44-group structure, nonaligned with the CASMO5MX library structure, there are two options to use this data:
(1) make perturbations in the 44-group structure, relying on (6) and (7) to map these perturbations to 586 groups;
(2) convert the 44-group VCMs to a different group structure, ideally a coarse group structure aligned with the library.
The second option has been investigated, and the code ANGELO, which performs the conversion, has been provided for the purposes of this benchmark [12]. Although its applicability has not been rigorously determined, for converting 44-group SCALE6 VCMs to the coarse 8-, 19-, and 31-group structures of CASMO5MX, the scheme seems reasonable.
2.1.4. Scattering Matrix Perturbations and External Scattering Fraction Data
Many lattice physics codes, including CASMO5M, store a single “combined scattering matrix” for each nuclide, lumping elastic and inelastic scattering with the (n,2n) and (n,3n) reactions. Additionally, on the VCM library, uncertainty information for these reactions is only present in “1D” or “vector” form; that is, it has been “summed” over all final energy groups. With these two constraints, perturbations could originally [3] only be applied in the following manner to the combined scattering matrix:
$$\sigma'_{g \to g'} = f_g\, \sigma_{g \to g'}, \qquad (10)$$
where the perturbation $f_g$ depends only on the initial group $g$ and is applied identically to all final groups, $g'$, in the combined scattering matrix. One upside to this type of perturbation is that the mapping formula from (9) can still be used for scattering perturbations. A special MT number was introduced to denote “combined scattering” perturbations within the CASMO5MX system.
However, it became apparent that the combined treatment tends to underestimate the uncertainty due, in particular, to inelastic scattering in U238 [11], which is actually one of the dominant sources of uncertainty for many responses. An approach to separate these effects was described in [9], in which additional NJOY calculations are performed to estimate the so-called “scattering fractions,” that is, the fractions of the combined scattering matrix which are due to elastic scattering, inelastic scattering, and so forth. The scattering fractions become an auxiliary library to be used when separation of effects is important. In this case, the scattering matrix perturbation formula becomes
$$\sigma'_{g \to g'} = \Big(1 + \sum_x \big(f_{x,g} - 1\big)\, c_{x,g \to g'}\Big)\, \sigma_{g \to g'}, \qquad (11)$$
where the $c_{x,g \to g'}$ terms are the scattering fractions, tabulated for each nonzero $g \to g'$ pair for each reaction $x$. Currently, the scattering fractions have been prepared for U235 and U238 only, and only at a temperature of 500 K and a background cross-section of 40 barns, after initial studies found them to be remarkably constant with respect to temperature and background cross-section variations.
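A toy sketch of the scattering-fraction idea: each combined-matrix element is scaled by one plus the fraction-weighted sum of the individual reaction perturbations. The data layout and names here are hypothetical:

```python
def perturb_combined_scattering(sigma_s, fractions, factors):
    """Perturb a combined scattering matrix element by element using
    per-reaction scattering fractions (hypothetical layout: dicts keyed by
    reaction name, each holding an n x n table of fractions summing to 1)."""
    n = len(sigma_s)
    out = [[0.0] * n for _ in range(n)]
    for g in range(n):
        for gp in range(n):
            # combined factor = 1 + sum over reactions of (f_x - 1) * fraction_x
            f = 1.0 + sum((factors[x] - 1.0) * fractions[x][g][gp] for x in factors)
            out[g][gp] = f * sigma_s[g][gp]
    return out

# One-group toy: 70% elastic, 30% inelastic; perturb inelastic by +10%
s = perturb_combined_scattering([[2.0]],
                                {"elastic": [[0.7]], "inelastic": [[0.3]]},
                                {"elastic": 1.0, "inelastic": 1.1})
```

Perturbing only the inelastic component thus moves the combined element by 3% rather than 10%, which is exactly the separation of effects the fractions provide.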
2.1.5. Resonance SelfShielding
The way resonance self-shielding is performed in CASMO5M makes it difficult to perturb nuclear data before the resonance self-shielding calculation. Therefore, the resonance self-shielded and infinitely dilute data are perturbed by the same factor $f$, which neglects the effect that changes in the data have on self-shielding. Because self-shielding is a “negative” type of feedback, the current approach in CASMO5MX is thought to produce slightly higher uncertainties, but comparisons to SCALE6 TSUNAMI, which does include the effect, have not shown a significant difference [3, 9]. The difference should be most noticeable with strong and highly uncertain resonances, for which the U238-dominated systems tested so far perhaps do not qualify.
2.2. Direct Perturbation with CASMO5MX/DP
The main difficulties in applying the DP technique to calculate sensitivity coefficients, namely, finite precision and the elimination of second-order and higher effects, have been overcome using an adaptive technique [3] in which
(1) a scoping calculation is used to assess the magnitude of the response change;
(2) extra calculations are then made which satisfy precision requirements;
(3) finally, a polynomial fit (linear or parabolic) is constructed from the pool of available calculations and used to estimate the sensitivity coefficient.
Numerous schemes have been designed within this general framework, for example, using one or two scoping calculations and one or two extra calculations, for a range of two to four calculations per input parameter. Clearly, with nuclear data one cannot hope to perform DP on all 80,000 parameters. However, CASMO5MX/DP serves numerous purposes:
(1) provide sensitivity profiles for code-to-code comparisons (e.g., with SCALE6 TSUNAMI);
(2) provide reference local, first-order uncertainty results to assess other CASMO5MX methodologies, such as SS;
(3) provide sensitivity coefficients for non-nuclear-data parameters, for example, fuel enrichment.
Figure 2 shows a flow chart for the CASMO5MX/DP technique. The basic sequence is to begin with a perturbation factor of unity, that is, $f = 1$, and perform the nominal calculation. After the base calculation, depending on the specific DP mode chosen (see Table 1), the DP driver selects and performs additional perturbed cases. Using the resultant output change from the first perturbed case, the DP driver can now calculate a sensitivity coefficient, $S$. In the “2-point simple” mode, DP would stop here, using simple finite differences (i.e., a linear fit) for the estimate of $S$. In the “3-point adaptive” mode, a second calculation is performed with $f$ estimated such that the new output change satisfies a “small but not too small” criterion; for example, only the three least significant digits show variation. $S$ is then updated using a linear fit. In the “4-point adaptive” mode, one additional perturbed case allows a parabolic fit, with $S$ estimated as the slope of the fit at $f = 1$.

Although Figure 2 is shown assuming a single output, CASMO5MX/DP can effectively produce sensitivity coefficients for all outputs simultaneously, especially with the 4-point adaptive scheme. Figure 2 also makes the distinction that nuclear data perturbations are based on relative perturbation factors and affect the XS perturbation file, whereas perturbations of general input file parameters result in the replacement of the nominal value in the standard input file with its perturbed value. Once sensitivity coefficients are available, UQ may be performed using standard first-order uncertainty propagation via (3).
2.3. Stochastic Sampling with CASMO5MX
The CASMO5MX stochastic sampling (SS) methodology from [9], shown in Figure 3, uses a very similar framework to the DP methodology (Figure 2). The major differences are summarized below.
(1) DP varies a single input parameter at a time, whereas SS varies them all simultaneously.
(2) DP is first a sensitivity analysis technique, with UQ possible through local, first-order uncertainty propagation, whereas SS is first a UQ technique (global and all-order), with approximation due to the finite sample size.
(3) Due to its adaptive nature, the robust DP presented requires serial execution of up to 4 cases (although sensitivities of different inputs may be investigated simultaneously), whereas SS is inherently parallel.
The basic sequence in SS (refer to Figure 3) is as follows.
(1) Each input is sampled $N$ times according to its underlying probability distribution, respecting correlations to other inputs, if any. The $i$th sampled input set is denoted $\bar{x}_i$; note that the main data of the XS perturbation file is just the set of relative perturbations $f$.
(2) CASMO5MX is run $N$ times, once with each set of data; that is, $\bar{y}_i = \mathrm{CASMO5MX}(\bar{x}_i)$.
(3) The distribution of the $N$ sets of output is analyzed statistically, for example, with the sample variance.
Note that, in Figure 3, stages of the calculation which result in sets of data/files are shown with a “shadow.”
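The three-step SS sequence can be condensed into a short sketch; `run_code` is a hypothetical stand-in for a CASMO5MX execution, and the two-input toy model and VCM below are made up:

```python
import numpy as np

def stochastic_sampling_uq(run_code, mu, V, n_samples, rng):
    """End-to-end SS sketch: sample correlated inputs, run the code once
    per sample, and return the relative standard deviation (%) of the
    output about the nominal run."""
    L = np.linalg.cholesky(V)               # V assumed SPD in this toy
    y0 = run_code(mu)                       # nominal calculation
    ys = []
    for _ in range(n_samples):
        xi = mu + L @ rng.standard_normal(mu.size)   # one sampled input set
        ys.append(run_code(xi))             # one perturbed run
    var = sum((y - y0) ** 2 for y in ys) / (n_samples - 1)
    return 100.0 * var ** 0.5 / y0

rng = np.random.default_rng(7)
V = np.array([[0.0001, 0.00005],
              [0.00005, 0.0001]])           # 1% std dev per input, correlated
u_pct = stochastic_sampling_uq(lambda x: x[0] * x[1],
                               np.array([1.0, 1.0]), V, 500, rng)
```

Because each sampled run is independent, the loop body parallelizes trivially, which is the third difference from DP noted above.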
2.4. Stochastic Sampling with MCNPX/NUSS
In parallel to CASMO5MX/SS, activities to implement SS in the Monte Carlo code MCNPX have led to the development of MCNPX plus nuclear data uncertainty with stochastic sampling, MCNPX/NUSS, which functions very similarly to CASMO5MX/SS. The difference is that, due to the open nature of the MCNPX ACE library format, it is possible to create perturbed nuclear data libraries directly, and source code modifications are not necessary, as shown in Figure 4. As in CASMO5MX, the same simple random sampling procedure is used, but a new tool is needed to create the perturbed ACE library from the nominal one. Note that the decision to perturb data at the ACE library stage, instead of upstream when the data is in the ENDF format, is mainly motivated by the relative ease of access to data in the ACE format. Future versions of MCNPX/NUSS may modify data at the ENDF stage.
Because the currently used VCM library is based on the SCALE 44-group structure, data perturbations are provided in this structure; however, the system is not restricted to any particular group structure for perturbations. In the “library rewriting” stage, a constant perturbation is applied to the pointwise data:
$$\sigma'(E) = f_g\, \sigma(E) \quad \text{for } E_g^{-} \le E < E_g^{+},$$
for the perturbation of group $g$, which ranges from lower energy $E_g^{-}$ to upper energy $E_g^{+}$. Note that with perturbation of partial cross-sections in the ACE library, the total and absorption cross-sections must also be adjusted to preserve consistency in the nuclear data files. The final procedure of the MCNPX/NUSS tool is to systematically supply MCNPX calculations with the generated random ACE-formatted nuclear data files. The MCNPX outputs of interest can be analyzed by the same statistical means as in CASMO5MX/SS, except for a statistical error term which is inherent to Monte Carlo calculations. When the distribution of an MCNPX output is characterized, it is important to separate the statistical variance from the variance due to data variations:
$$\sigma^2_{\mathrm{total}} = \sigma^2_{\mathrm{data}} + \sigma^2_{\mathrm{stat}}.$$
The magnitude of $\sigma^2_{\mathrm{stat}}$ is related to the number of neutron histories in the Monte Carlo calculations and has been estimated to be small compared to the data contribution (i.e., nuclear data) for all cases considered here.
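The separation of statistical and data-induced variance amounts to a subtraction; a trivial sketch with illustrative numbers:

```python
def data_variance(total_var, stat_var):
    """Separate the nuclear-data contribution from the Monte Carlo
    statistical noise: sigma^2_data = sigma^2_total - sigma^2_stat."""
    return total_var - stat_var

# e.g., observed spread of sampled k-eff vs. the average MCNPX
# statistical variance per run (both values made up for illustration)
var_data = data_variance(total_var=2.6e-5, stat_var=1.0e-7)
```

When the statistical term is two orders of magnitude below the total, as reported here, the correction is essentially negligible, but it should still be checked case by case.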
3. Results
An overview of the UAM Phase I cases analyzed in this paper is provided in Table 2. Notably, there is no depletion and no soluble boron in any of these cases. In the cell cases of exercise I1, there is no thermal expansion; however, in the PWR lattice case of exercise I2, thermal expansion has been assumed, which decreases the density and increases the size of all materials. The operating conditions of hot zero power (HZP) and hot full power (HFP) dictate the fuel temperature, moderator temperature, void fraction, and control rod insertion.

3.1. CASMO5MX Results
This section presents CASMO5MX results for both exercises I1 and I2. All uncertainty results are given as relative standard deviations in percent. For both CASMO5MX/DP and SS, perturbations are made in the 19-group CASMO5M group structure, unless otherwise noted. The same number of samples was used in all cases. In the CASMO5MX calculations, uncertainty was assumed for all nuclides present in each problem and all reactions available in the SCALE6 VCM library.
3.1.1. Exercise I1: Cell Physics
The uncertainty summary of the exercise I1 cases is given in Table 3 for the PB2 (BWR) cases, including results for both CASMO5MX/DP (C5MX/DP) and CASMO5MX/SS (C5MX/SS), and in Table 4 for the TMI1 and Generation III (GenIII) MOX cases, with CASMO5MX/SS only. Results show a general trend of eigenvalue uncertainty of approximately 0.5% and one-group cross-section uncertainty of about 1% for most absorption cross-sections and for nuclides with mainly thermal fission, but about 4% for nuclides with significant fast fission. Making the spectrum harder, by introducing 40% void in the PB2 HFP case or by using MOX fuel (in the GenIII case), increases the influence of the fast spectrum, where the nuclear data almost always has higher uncertainty than in the thermal range.


To assess the effect of the perturbation group structure, two additional group structures were investigated, as shown in Table 5: the next finer 31-group structure in CASMO5M and the 44-group structure of SCALE6. CASMO5MX/DP was used to investigate the breakdown of the uncertainty, that is, which uncertain nuclear data contributed most to an uncertain output. This is presented in terms of the variance fraction, that is, the variance due to a given parameter divided by the total variance, which naturally sums to unity.

The most influential parameters are easily identified by sorting from greatest to least variance fraction, and the cumulative value can be used to limit the set of important parameters, for example, to the set representing 99% of the total variance, as shown in Figure 5 for the eigenvalue uncertainty and in Figure 6 for the one-group U235 fission and absorption cross-section uncertainties. Good agreement between the uncertainty breakdowns is observed, except for the U235 fission spectrum component, which increases considerably with the 44-group structure. As shown in Figure 7, this was found to be due to the coarse fast groups in the CASMO5M 19- and 31-group structures and the highly varying U235 fission spectrum uncertainty in the fast range of the native 44-group structure. Because all perturbations are applied to the CASMO5MX 586-group library structure, detailed sensitivity profiles may be generated, as shown in Figure 7.
3.1.2. Exercise I2: Lattice Physics
The lattice physics cases in exercise I2 are concerned with propagating both nuclear data uncertainty and so-called “technological parameter” uncertainty to the two-group nodal data used in conventional core simulators based on two-group nodal diffusion. The output parameters of interest here are mainly the homogenized macroscopic cross-sections for fast and thermal absorption, neutron production, and removal, the fast and thermal diffusion coefficients, and the assembly discontinuity factors. A summary of the nodal parameters’ nominal values and uncertainties is shown in Table 6 for the TMI1 PWR assembly at HFP conditions only, with control rods out (unrodded) and in (rodded), considering only nuclear data uncertainty. Additionally, the uncertainty in pin powers was examined at 3 locations: the location of the unrodded case peak power (unr. peak loc.), the location of the rodded case peak power (rod. peak loc.), and the gadolinium pin (Gd pin loc.). See Figure 8 for the locations in the southeast quarter of the PWR assembly. The uncertainty in unrodded assembly pin powers was remarkably low; only for the gadolinium pin is the uncertainty greater than 0.1%. In the rodded assembly, pin power uncertainty was slightly greater, on the order of 0.2% for most pins.

At the time of this publication, the probability distributions of the technological parameters were not generally agreed upon, so only a sensitivity analysis has been performed, using CASMO5MX/DP, which can easily compute sensitivity coefficients with respect to any input file parameter. Following the benchmark specification, five technological parameters were considered: fuel density (fdens), fuel enrichment (enr), fuel pellet radius (rfuel), clad thickness (tclad), and gap thickness (tgap). The sensitivity coefficients with respect to each technological parameter are shown in Table 7. The highest sensitivity is generally to the fuel pellet radius (rfuel), which can exceed 1% variation in an output per 1% variation in pellet radius.
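A non-intrusive sensitivity coefficient of this kind can be estimated by rerunning the code at perturbed inputs. Below is a minimal central-difference sketch (CASMO5MX/DP additionally adapts the perturbation size, which is omitted here), with a toy power-law surrogate standing in for the lattice code:

```python
def relative_sensitivity(model, x0, rel_step=0.01):
    """Central-difference estimate of the relative sensitivity coefficient
    S = (dy/y0)/(dx/x0), treating `model` as a black box, as in a
    non-intrusive direct perturbation scheme."""
    y0 = model(x0)
    dx = rel_step * x0
    y_plus, y_minus = model(x0 + dx), model(x0 - dx)
    return ((y_plus - y_minus) / (2.0 * dx)) * (x0 / y0)

def surrogate(r):
    """Toy stand-in for the lattice code: output ~ r^-2.1, so the exact
    relative sensitivity is -2.1 (the magnitude reported for rfuel)."""
    return r ** -2.1

print(relative_sensitivity(surrogate, 0.47))
```

For the power-law surrogate, the estimate lands within a fraction of a percent of the exact value −2.1, which is what makes a step-size-adaptive scheme viable for noisy or nonlinear outputs.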

3.2. MCNPX Results
Results obtained with MCNPX/NUSS for eigenvalue uncertainty in the PB-2 and TMI-1 cell models at HZP are shown in Table 8. Simultaneous variations were performed for U-235 and U-238 reactions, consistent with CASMO5MX/SS with one exception: the partial (n,γ) cross section is considered explicitly in MCNPX rather than the total capture as in CASMO5M. For some nuclides with significant non-(n,γ) capture reactions, comparisons would not be consistent, as total capture includes these reactions but (n,γ) does not; for U-235 and U-238, however, the difference between (n,γ) and total capture is minor. Due to the long runtimes of MCNPX calculations, only 80 samples were made; however, this achieved a statistical uncertainty more than two orders of magnitude less than the data uncertainty for these cases.

Although the number of samples was fairly small at 80, a study of the running average eigenvalue and its uncertainty (one-sigma error bars) in Figure 9 shows little bias in the sample mean and stable behavior of the sample standard deviation. Additional discussion may be found in [13].
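The convergence check behind Figure 9 can be reproduced in miniature with synthetic data; the eigenvalue mean and relative standard deviation used below are illustrative placeholders, not the Table 8 values:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for 80 sampled eigenvalues (illustrative, not benchmark data)
samples = rng.normal(loc=1.33, scale=0.0054 * 1.33, size=80)

n = np.arange(1, samples.size + 1)
running_mean = np.cumsum(samples) / n
# Unbiased running standard deviation (defined from the 2nd sample onward)
running_std = np.array([samples[:k].std(ddof=1)
                        for k in range(2, samples.size + 1)])

# Standard error of the mean at N=80 (sigma/sqrt(N)), used to judge bias
print(running_mean[-1], running_std[-1], running_std[-1] / np.sqrt(samples.size))
```

Plotting `running_mean` with `running_std`-based error bars against `n` gives the same kind of diagnostic as the figure: a flat mean and a standard deviation that stabilizes well before the last sample.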
4. Discussion
In this section, various results from the previous section are further discussed, namely: (i) BWR uncertainties predicted by both the CASMO5MX/DP and SS methodologies, (ii) BWR versus PWR uncertainties, (iii) UO2 versus MOX uncertainties, and (iv) CASMO5MX versus MCNPX/NUSS results.
4.1. Comparison of BWR Uncertainties versus UQ Methodology
Consistent trends are observed with both methodologies for the exercise I-1 PB-2 (BWR) case, with slightly higher uncertainties observed at HFP, both in eigenvalue (denoted "Kinf") and in the 1-group cross sections, especially U-238 fission. This is due to spectrum hardening in the HFP case, with nearly 40% void, which acts to increase uncertainty because data in the fast range are generally more uncertain. For the 1-group cross sections, a faster spectrum also increases the impact of U-238 inelastic scattering, which contributes greatly to the overall uncertainty [9]. Taking DP as the reference solution, SS shows excellent agreement (see Figure 10), with a smaller eigenvalue uncertainty by less than 4% and a larger cross section uncertainty by at most 3% (U-238 absorption at HFP).
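For reference, the DP-based uncertainties quoted here come from standard first-order ("sandwich") propagation, folding the sensitivity vector with the relative variance/covariance matrix of the data. A minimal sketch with made-up two-parameter inputs:

```python
import numpy as np

def sandwich_uncertainty(S, C):
    """First-order ('sandwich') propagation: the relative variance of the
    output is S C S^T, with S a vector of relative sensitivities and C the
    relative variance/covariance matrix (VCM) of the inputs."""
    S = np.atleast_2d(S)
    return float(np.sqrt(S @ C @ S.T))

# Illustrative 2-parameter example (values are not from the benchmark)
S = np.array([0.3, -0.1])                  # relative sensitivities
C = np.array([[4.0e-5, 1.0e-5],
              [1.0e-5, 9.0e-5]])           # relative VCM
print(sandwich_uncertainty(S, C))          # relative std. dev. of the output
```

The off-diagonal VCM terms matter: with the anticorrelated signs above, the covariance term partially cancels the diagonal contributions, which is why a full matrix, not just variances, is folded in.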
4.2. Comparison of Stochastic Sampling Uncertainties versus LWR Reactor Type
At HZP conditions, almost identical uncertainties are observed for the exercise I-1 PB-2 (BWR) and TMI-1 (PWR) cases (see Figure 11). There is slightly larger uncertainty at HZP for the U-235 1-group cross sections due to the higher enrichment in the PWR (4.85 wt%) compared to the BWR (2.93 wt%). Because of the previously discussed hardening of the spectrum in the BWR case at HFP, the uncertainty in the U-238 1-group fission cross section is noticeably higher there.
4.3. Comparison of Uncertainties for UO2 and MOX Fuel Types
For MOX fuel, from the exercise I-1 Gen-III MOX case, nearly double the uncertainty in eigenvalue (0.95%) is observed compared to UOX fuel (0.51%). See the graphical summary in Figure 12. This marked increase is due not only to the higher uncertainty of the Pu isotopes but also to the faster spectrum in those cases, which increases uncertainty through the shift toward the more uncertain fast range. Notably, the Pu-242 and Am-241 1-group cross section uncertainties are greater than 4%.
4.4. Comparison of Uncertainties for CASMO5MX/SS and MCNPX/NUSS
The MCNPX/NUSS results showed a total uncertainty in eigenvalue of 0.54%, very consistent with both the CASMO5MX/SS and CASMO5MX/DP results using the same nuclear data uncertainty but different nuclear data libraries and code systems. Additional test cases with one-at-a-time perturbations of single reactions were prepared for a more detailed investigation, comparing against a breakdown from CASMO5MX/DP with perturbations in the SCALE6 44-group structure for maximum consistency with MCNPX/NUSS. The results in Table 9 show the top 5 contributors according to each methodology, and in general excellent agreement is seen. The only notable difference is that CASMO5MX/DP shows 0.53% uncertainty in the top 5 whereas MCNPX/NUSS shows 0.50%.

5. Conclusions
The UAM benchmark has provided both the opportunity to develop state-of-the-art methodologies for uncertainty quantification (UQ) and a framework for international collaboration and comparison. At PSI, within the STARS project, the first development was CASMO5MX, a modification of the production CASMO5M code to perturb nuclear data libraries through an auxiliary input file, with the capability to apply perturbations in any group structure and to perturb the elastic and inelastic scattering components individually, despite the internal use of a combined scattering matrix with elastic and inelastic scattering lumped together. Building on this capability, a sensitivity analysis (SA) tool using direct perturbation (DP) was developed, CASMO5MX/DP, which performs adaptive perturbations in order to robustly estimate sensitivity coefficients of arbitrary outputs with respect to arbitrary inputs, including nuclear data. Using standard first-order uncertainty propagation, CASMO5MX/DP can also be used for local, first-order UQ. However, CASMO5MX/DP requires too many calculations for production UQ, and for this reason a second UQ methodology based on stochastic sampling (SS) was developed, CASMO5MX/SS, which can provide uncertainty estimates for arbitrary outputs at a fixed cost of 100 to 1000 calculations. Most recently, development of an SS methodology for a continuous-energy Monte Carlo code, MCNPX, was initiated, called MCNPX/NUSS.
Results for the UAM benchmark exercises were presented, including the LWR cell cases from exercise I-1 and the PWR assembly case from exercise I-2. For the cell cases, the uncertainties in the eigenvalue and in the 1-group collapsed microscopic cross sections (in terms of relative standard deviation) were found to be about 0.5% and 1%, respectively. For the Gen-III MOX case, the eigenvalue uncertainty was nearly double (1%), and the Pu-242 and Am-241 1-group cross section uncertainties reached 5%. In fuel, the most important contributors to the eigenvalue uncertainty were found to be U-238 capture, U-235 neutrons per fission, and U-235 capture, accounting for over 80% of the total variance in eigenvalue. For the 1-group cross section uncertainty, U-238 inelastic scattering alone accounted for at least 50% of the variance, and usually more. For the TMI-1 PWR assembly case, the uncertainty in eigenvalue was consistent with the cell cases at about 0.5%. Uncertainty in other assembly outputs ranged from less than 0.1% for the assembly discontinuity factors (ADFs), the powers at the nominal peak pin locations, and the thermal diffusion coefficient, up to 1% for the fast diffusion coefficient and the fast absorption cross section (Σa1). Both rodded and unrodded cases were analyzed, and the uncertainty was found to remain the same or slightly increase when control rods were inserted.
Finally, sensitivity coefficients were calculated for the technological parameters of the exercise I-2 TMI-1 PWR assembly, and the fuel pellet radius was found to be the most sensitive parameter, having sensitivity coefficients with absolute values from 1 to 2 for many outputs. For example, the sensitivity coefficient of the removal cross section (Σr) with respect to pellet radius is −2.1 for the unrodded case, which means that a 1% increase in pellet radius decreases Σr by about 2%. It is clear, however, that a better understanding of the distributions of the technological parameters is necessary, in particular of how the batch-based nature of manufacturing introduces correlations across fuel pellets and assemblies. For example, should all the fuel pellets in a single assembly be considered to come from the same batch, from different batches, or from a fixed number of batches? If a fixed number of batches, how is it determined which pellets are from which batch? Answering these questions requires either more knowledge of how a particular fuel assembly was manufactured or simulation of the actual manufacturing processes. Otherwise, conservative, limiting cases must be created, which is in direct opposition to the overarching goal of best-estimate analyses with UQ.
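As a worked check of the quoted numbers, the first-order relation Δy/y ≈ S · (Δx/x) with S = −2.1 reproduces the stated ~2% decrease for a 1% radius increase; sketched below (the helper name is illustrative):

```python
def first_order_change(S, rel_input_change):
    """First-order estimate of the relative output change from a relative
    input change, using a relative sensitivity coefficient S."""
    return S * rel_input_change

# S = -2.1 for the removal cross section w.r.t. pellet radius (from the text):
# a +1% change in pellet radius gives roughly a -2.1% change in the output.
print(first_order_change(-2.1, 0.01))
```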
Future work in the area of neutronics UQ at PSI includes enhancement of the MCNPX/NUSS continuous-energy Monte Carlo strategy, implementation of the capability to perturb fission product yields and decay constants, and extension of the SS methodology from the lattice code CASMO5M to the core simulator SIMULATE-3.
References
OECD Report, “Technology relevance of the uncertainty analysis in modelling project for nuclear reactor safety,” NEA/NSC/DOC, 2007.
http://stars.web/psi.ch.
W. Wieselquist, A. Vasiliev, and H. Ferroukhi, “Towards an uncertainty quantification methodology with CASMO-5,” in Proceedings of the Mathematics and Computations Division of the American Nuclear Society Topical Meeting (M&C '11), Rio de Janeiro, Brazil, May 2011, on CD-ROM.
“SCALE: A Modular Code System for Performing Standardized Computer Analyses for Licensing Evaluations,” ORNL/TM-2005/39, Version 6, Vols. I–III, 2009.
D. L. Smith, Probability, Statistics, and Data Uncertainties in Nuclear Science and Technology, American Nuclear Society, USA, 1991.
M. Klein, L. Gallner, I. Pasichnyk, A. Pautz, and W. Zwermann, “Influence of nuclear data covariance on reactor core calculations,” in Proceedings of the Mathematics and Computations Division of the American Nuclear Society Topical Meeting (M&C '11), Rio de Janeiro, Brazil, May 2011, on CD-ROM.
D. Rochman, A. J. Koning, S. C. van der Marck, A. Hogenbirk, and C. M. Sciolla, “Nuclear data uncertainty propagation: perturbation vs. Monte Carlo,” Annals of Nuclear Energy, vol. 38, no. 5, pp. 942–952, 2011.
R. Macian, M. A. Zimmermann, and R. Chawla, “Statistical uncertainty analysis applied to fuel depletion calculations,” Journal of Nuclear Science and Technology, vol. 44, no. 6, pp. 875–885, 2007.
W. Wieselquist, A. Vasiliev, and H. Ferroukhi, “Nuclear data uncertainty propagation in a lattice physics code using stochastic sampling,” in Proceedings of the International Topical Meeting on Advances in Reactor Physics (PHYSOR '12), Knoxville, Tenn, USA, April 2012, on CD-ROM.
S. S. Wilks, “Determination of sample sizes for setting tolerance limits,” The Annals of Mathematical Statistics, vol. 12, no. 1, pp. 91–96, 1941.
M. Pusa, “Incorporating sensitivity and uncertainty analysis to a lattice physics code with application to CASMO-4,” Annals of Nuclear Energy, vol. 40, no. 1, pp. 153–162, 2012.
I. Kodeli, “ANGELO-LAMBDA: Covariance matrix interpolation and mathematical verification,” NEA-DB Computer Code Collection, NEA-1798/02, 2008.
T. Zhu, A. Vasiliev, W. Wieselquist, and H. Ferroukhi, “Stochastic sampling method with MCNPX for nuclear data uncertainty propagation in criticality safety applications,” in Proceedings of the International Topical Meeting on Advances in Reactor Physics (PHYSOR '12), Knoxville, Tenn, USA, April 2012, on CD-ROM.
Copyright
Copyright © 2013 W. Wieselquist et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.