Research Article  Open Access
Artem Yankov, Benjamin Collins, Markus Klein, Matthew A. Jessee, Winfried Zwermann, Kiril Velkov, Andreas Pautz, Thomas Downar, "A Two-Step Approach to Uncertainty Quantification of Core Simulators", Science and Technology of Nuclear Installations, vol. 2012, Article ID 767096, 9 pages, 2012. https://doi.org/10.1155/2012/767096
A Two-Step Approach to Uncertainty Quantification of Core Simulators
Abstract
For the multiple sources of error introduced into the standard computational regime for simulating reactor cores, rigorous uncertainty analysis methods are available primarily to quantify the effects of cross section uncertainties. Two methods for propagating cross section uncertainties through core simulators are the XSUSA statistical approach and the “two-step” method. The XSUSA approach, which is based on the SUSA code package, is fundamentally a stochastic sampling method. Alternatively, the two-step method utilizes generalized perturbation theory in the first step and stochastic sampling in the second step. The consistency of these two methods in quantifying uncertainties in the multiplication factor and in the core power distribution was examined in the framework of phase I-3 of the OECD Uncertainty Analysis in Modeling benchmark. With the Three Mile Island Unit 1 core as a base model for analysis, the XSUSA and two-step methods were applied with certain limitations, and the results were compared to those produced by other stochastic-sampling-based codes. Based on the uncertainty analysis results, conclusions were drawn as to the method that is currently more viable for computing uncertainties in burnup and transient calculations.
1. Introduction
Computational modeling of nuclear reactor stability and performance has evolved into a multiphysics and multiscale regime. Various computer codes have been developed and optimized to model individual facets of reactor operation such as neutronics, thermal hydraulics, and kinetics. These codes are most often coupled to produce more realistic results. While it is crucial to produce best-estimate calculations for the design and safety analysis of nuclear reactors, it is equally important to obtain design margins by propagating uncertainty information through the entire computational process. The purpose of the OECD (Organization for Economic Cooperation and Development) Uncertainty Analysis in Modeling (UAM) benchmark is to produce a framework for the development of uncertainty analysis methodologies in reactor simulations [1]. Three phases comprise the benchmark, with each phase building in scale on its predecessors. The first phase deals with uncertainties in neutronics calculations, the second phase deals with neutron kinetics, and the final phase requires the propagation of uncertainties through coupled neutronics/thermal-hydraulics simulations.
The neutronics phase of the UAM benchmark deals specifically with the propagation of input parameter uncertainties to uncertainties in output parameters on a full-core scale. In the established framework of full-core analyses, lattice homogenized few-group cross sections are used as inputs to core simulators. Core simulators utilize a number of approximations to the exact transport equation, effectively introducing uncertainties into output parameters. Geometrical uncertainties and numerical-method simplifications also contribute modeling uncertainties. While it is important to propagate all known uncertainties when conducting a thorough uncertainty analysis, the necessary methods to make this possible must still be developed. However, rigorous methods have already been developed to propagate cross section uncertainties from lattice transport solvers to core simulators. Consequently, few-group homogenized cross section errors are assumed for now to be the sole source of uncertainty in the subsequent analyses.
Two methods already exist for propagating cross section uncertainties through core simulators. The first method is commonly referred to as the stochastic sampling (Monte Carlo) method. The XSUSA (Cross Section Uncertainty and Sensitivity Analysis) code system is representative of this approach [2]. XSUSA was developed by GRS based on the SUSA code package [3]. An alternate approach, the two-step method, utilizes generalized perturbation theory in the first step and stochastic sampling in the second step [4, 5]. The purpose of this paper is to show consistency between these two methods in the framework of phase I-3 of the UAM benchmark. As defined in the UAM benchmark specifications, the Three Mile Island Unit 1 (TMI) core will be the focus of application for the XSUSA and two-step methods. The TMI core is chosen for analysis mainly because it has been the focus of past benchmark problems and is therefore of great familiarity in the nuclear engineering community [6].
The two-step method is motivated largely by the computationally expensive solution of the transport equation. In the stochastic sampling approach there is effectively a one-to-one mapping between the solutions of the transport equation and the sets of homogenized cross section inputs for a core simulator. Alternatively, for practical problems the two-step method provides a means by which an unlimited number of core simulator inputs can be generated at the cost of relatively few transport-type solutions. That means is a few-group covariance matrix whose elements are generated with linear perturbation theory. Hence, the quality of the core simulator inputs produced by the two-step method is limited by the extent to which linear perturbation theory can describe the system under study. Contrarily, stochastic sampling through the XSUSA approach produces random core simulator inputs whose distributions are not subject to linear approximations. This paper shows that the linear approximations used in the two-step method can be remarkably accurate.
2. Methodology
Both the stochastic and two-step methods actively use the modules in SCALE to propagate cross section uncertainties [7]. Also, both methods make strong use of SCALE’s 44-group covariance library. The multigroup cross sections are assumed to follow a multivariate normal distribution, and so expected values and a covariance matrix suffice to fully describe the distribution. In the XSUSA approach, all input parameters are varied simultaneously, and the number of required calculations to achieve a certain statistical accuracy in output parameters of interest is independent of the number of inputs [2]. The number of required runs can be calculated by Wilks’ formula, which gives the sample size needed so that, at a specified confidence level, the maximum code output bounds a specified quantile of the output distribution. Contrarily, the two-step method depends on the number of input parameters since each input requires a transport-like solution. The two different methodologies are summarized below.
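As an illustration of the sample-size criterion, the following sketch (not part of the benchmark codes) computes the minimum number of runs from the first-order, one-sided form of Wilks' formula; the classic 95%/95% criterion used in reactor safety analysis yields 59 runs.

```python
def wilks_one_sided(gamma: float, beta: float) -> int:
    """Smallest sample size n such that the largest of n code outputs
    bounds the gamma-quantile of the output distribution with confidence
    beta (first-order, one-sided Wilks formula): 1 - gamma**n >= beta."""
    n = 1
    while 1.0 - gamma ** n < beta:
        n += 1
    return n

# 95%/95% one-sided tolerance limit:
print(wilks_one_sided(0.95, 0.95))  # -> 59
```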
2.1. XSUSA Approach
The covariance matrix between inputs plays a central role in stochastic sampling. Cross section uncertainties are correlated, and the degree of correlation can be described by a covariance matrix. Cross section uncertainties must be perturbed such that their correlation relations are always preserved. If $\bar{x}$ is a vector of mean values whose covariance relations are defined by the matrix $C$, then a vector of correlated random variables $x$ can be generated by applying [8]

$$x = \bar{x} + Q^{T}z. \qquad (1)$$

In (1) the operator $Q$ is the upper right triangular matrix obtained by taking the Cholesky decomposition of the covariance matrix. Every covariance matrix is Hermitian and positive definite; thus, all covariance matrices have a Cholesky decomposition $C = Q^{T}Q$. The vector $z$ in (1) is a random normal vector. When $Q^{T}$ multiplies $z$, linear combinations of the uncertainties are taken in accordance with their covariance relations, and so $x$ is normally distributed with covariance $C$.
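The sampling step can be sketched in a few lines of Python. The three-group means and covariance below are illustrative numbers, not SCALE data; note also that NumPy's Cholesky routine returns a lower triangular factor $L$ with $C = LL^{T}$, which is equivalent to the upper triangular convention used in the text.

```python
import numpy as np

# Hypothetical 3-group cross-section means and covariance
# (illustrative numbers only, not SCALE covariance data).
x_bar = np.array([1.50, 0.80, 0.30])
C = np.array([[4.0e-4, 1.0e-4, 0.0],
              [1.0e-4, 2.5e-4, 5.0e-5],
              [0.0,    5.0e-5, 1.0e-4]])

rng = np.random.default_rng(0)
Q = np.linalg.cholesky(C)            # lower triangular, C = Q @ Q.T
z = rng.standard_normal((3, 100_000))
samples = x_bar[:, None] + Q @ z     # correlated perturbed cross sections

# The sample covariance reproduces C to within statistical noise.
print(np.allclose(np.cov(samples), C, rtol=0.05, atol=1e-5))
```

Each column of `samples` plays the role of one perturbed cross section library in the XSUSA procedure.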
Hence, to produce perturbed cross sections, only the Cholesky decomposition of the cross section’s covariance matrix is needed along with a random normal vector. In the XSUSA approach, ENDF/B-VII nuclear data in the SCALE 238-group structure are used. Spectral calculations are performed in BONAMI and CENTRM to produce a problem-specific cross section library, as seen in Figure 1. By use of SCALE’s 44-group covariance library with the problem-specific library generated by the spectral calculations, the XSUSA code applies perturbations to create a set of varied, problem-dependent cross section libraries. Specifically, the MEDUSA module samples the 44-group covariance library, which is enlarged to accommodate all problem-specific nuclides and reactions. CLAROL-plus then takes the output from MEDUSA and creates a problem-specific multigroup library. The XSUSA code works to make sure varied data are physically consistent. This procedure does not include the implicit effects of uncertainties in self-shielding, but extensions are currently being made to include these effects [9].
Each set of cross section libraries produced by XSUSA is passed to SCALE’s lattice physics transport solver NEWT, which in turn produces perturbed, homogenized, few-group cross section libraries. The perturbed few-group libraries are then used as input for core simulators such as PARCS [11] and QUABOX/CUBBOX [12]. Once all libraries are processed by the core simulator, statistics can be taken on the output parameters of interest. As indicated in Figure 1, NEWT can be replaced by any of SCALE’s transport solvers. For example, XSDRN can be used for one-dimensional (1D) calculations and KENO for Monte Carlo reference solutions [2].
2.2. Two-Step Method
Unlike the XSUSA approach, the two-step method is only partly based on sampling techniques. In the first step it makes use of the generalized adjoint for the transport equation [13]. In the two-step method, problem-dependent self-shielded data are also generated before any perturbed cross sections are calculated, as seen in Figure 2. Using the problem-dependent cross sections, the TSUNAMI module is applied to calculate the forward transport, adjoint transport, and generalized adjoint transport solutions to the problem at hand. The SCALE module SAMS then uses the problem solutions to calculate sensitivity coefficients for responses of interest. A response $R_{x,g}$ for reaction type $x$ in broad group $g$ is defined as a ratio of inner products with the forward neutron flux [10]:

$$R_{x,g} = \frac{\langle \Sigma_{x,g}\,\varphi \rangle}{\langle \varphi \rangle}. \qquad (2)$$

The explicit sensitivity coefficient of the response with respect to some nuclear data parameter $\alpha$ in the transport equation is then given as [10]

$$S_{R,\alpha} = \frac{\alpha}{R}\frac{\partial R}{\partial \alpha} - \alpha\left\langle \Gamma^{*}, \left(\frac{\partial A}{\partial \alpha} - \frac{1}{k}\frac{\partial B}{\partial \alpha}\right)\varphi \right\rangle, \qquad (3)$$

where $\varphi$ is the solution of the forward transport equation $A\varphi = \frac{1}{k}B\varphi$, $A$ is the migration and loss operator, and $B$ is the production operator.
The generalized adjoint $\Gamma^{*}$ can be obtained by solving the generalized adjoint transport equation in [10]

$$\left(A^{\dagger} - \frac{1}{k}B^{\dagger}\right)\Gamma^{*} = \frac{1}{R}\frac{\partial R}{\partial \varphi} = \frac{\Sigma_{x,g}}{\langle \Sigma_{x,g}\,\varphi \rangle} - \frac{1}{\langle \varphi \rangle}. \qquad (4)$$

The solution of (4) requires the solution of an adjoint transport problem for each response. The pertinent responses of interest are the homogenized few-group cross sections needed for core simulators. Equations (3) and (4) above are used to compute explicit sensitivity coefficients. The TSUNAMI methodology also incorporates implicit sensitivity effects arising from resonance self-shielding [10].
If the covariance matrix $C_{\alpha}$ of some input parameters is available along with the sensitivities $S$ relating the change in outputs with respect to the change in input parameters, the “sandwich rule” can be applied to obtain a covariance matrix $C_{R}$ for the outputs. The “sandwich rule” is expressed in [14]

$$C_{R} = S\,C_{\alpha}\,S^{T}. \qquad (5)$$
Consequently, since $C_{\alpha}$ is the SCALE 44-group covariance matrix, a covariance matrix for the few-group homogenized cross sections can be obtained. The SCALE module TSUNAMI-IP is used to generate a global covariance matrix relating the few-group cross sections in each assembly and reflector region comprising a full-core problem. With (1), this global covariance matrix is sampled to produce perturbed cross section libraries that can then be used as input for a core simulator. By applying the XSUSA and two-step methods to calculate uncertainties in output parameters of interest for the full-core TMI problem, it can be shown that the two different approaches produce consistent results.
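The sandwich rule reduces to a single matrix product. The sketch below uses illustrative sensitivities and covariances, not TSUNAMI output, to show how a 2-parameter input covariance maps onto a 3-response covariance.

```python
import numpy as np

# "Sandwich rule": C_R = S C_alpha S^T, mapping input covariance to
# response covariance through a sensitivity matrix (illustrative data).
C_alpha = np.array([[1.0e-4, 2.0e-5],
                    [2.0e-5, 4.0e-4]])   # input covariance (2 parameters)
S = np.array([[0.9, -0.1],
              [0.2,  0.5],
              [0.0,  1.0]])              # dR_i/dalpha_j sensitivities

C_R = S @ C_alpha @ S.T                  # 3x3 response covariance

# Standard deviations of the responses follow from the diagonal.
print(np.sqrt(np.diag(C_R)))
```

The third response depends only on the second parameter, so its variance equals that parameter's input variance; the design carries correlations through automatically.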
3. Application
3.1. Implementation
The computational implementation of the two-step and XSUSA methods strays somewhat from their theoretical formulations. Specifically, modifications must be made because the generalized perturbation theory capabilities in SCALE are currently limited to only some of the responses required by core simulators. First, the TSUNAMI module currently cannot compute the uncertainty in the few-group homogenized transport cross section. However, uncertainties in the total and scatter cross sections can be calculated. To approximate perturbations to the transport cross section, the following is used:

$$\Sigma_{tr} = \Sigma_{t} - \bar{\mu}\,\Sigma_{s}. \qquad (6)$$

The average cosine of the scattering angle $\bar{\mu}$ is held constant while the total cross sections and scatter cross sections are perturbed to yield an effectively perturbed transport cross section that can be used as input to a core simulator. Normally a critical spectrum based on either the P1 or B1 approximation is utilized to compute few-group cross sections. However, the critical spectrum cannot be correctly accounted for in the TSUNAMI generalized perturbation theory methodology. Consequently, in the following analysis the default B1 critical spectrum calculation in SCALE is disabled in favor of the simplified P1 formulation shown in (6).

Similarly, TSUNAMI does not generate uncertainties for kappa, the average energy release per fission event. To calculate a perturbed kappa-fission cross section, the average value of kappa is multiplied by a perturbed fission cross section to obtain an effectively perturbed kappa-fission cross section:

$$(\kappa\Sigma_{f})' = \bar{\kappa}\,\Sigma_{f}'. \qquad (7)$$

A more subtle modification must be made when calculating the uncertainties in assembly discontinuity factors (ADFs). At the assembly level, where reflective boundary conditions are used, TSUNAMI can approximate ADF uncertainties very well by taking the ratio of the average flux of a thin surface at the assembly boundary to the assembly-averaged flux [15].
A cell volume normalization factor is also needed to account for the size of the thin surface at the assembly boundary. While this approach is valid for an infinite system, TSUNAMI is currently not capable of accurately quantifying ADF uncertainties at reflector interfaces due to leakage effects. To calculate uncertainties in few-group ADFs along reflector interfaces, a method developed by Yankov et al. is used [15]. The method is based on the 1D adjoint diffusion approximation generally used to treat reflector interface ADFs, the neutron balance equation on a fuel assembly/reflector interface, and the “sandwich rule.”
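The transport-cross-section workaround can be sketched as follows. The helper function and group constants are hypothetical and purely illustrative, not part of SCALE: $\bar{\mu}$ is frozen at its unperturbed value and the perturbed total and scatter cross sections are recombined through (6).

```python
def perturbed_transport(sig_t0, sig_s0, sig_tr0, sig_t, sig_s):
    """Effectively perturbed transport cross section:
    Sigma_tr = Sigma_t - mu_bar * Sigma_s, with the average scattering
    cosine mu_bar frozen at its unperturbed value
    mu_bar = (Sigma_t0 - Sigma_tr0) / Sigma_s0."""
    mu_bar = (sig_t0 - sig_tr0) / sig_s0
    return sig_t - mu_bar * sig_s

# Illustrative fast-group values, then a +1% perturbation of both the
# total and scatter cross sections:
print(round(perturbed_transport(0.530, 0.500, 0.220,
                                0.530 * 1.01, 0.500 * 1.01), 4))  # -> 0.2222
```

Because the relation is linear in the perturbed inputs, a uniform +1% perturbation of total and scatter yields exactly a +1% perturbation of the transport cross section.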
The two-step method is presented algorithmically in Table 1. The majority of the algorithm consists of file manipulations. In the second step of the algorithm, it is important to check that the global covariance matrix produced by TSUNAMI-IP is positive definite. The global covariance matrix captures the correlations among the few-group cross sections of all assemblies in the core. Use of a global covariance matrix is essential when sampling cross sections for an entire core, since otherwise the similarity of the nuclide composition of different fuel assembly types is neglected. If the global covariance matrix is not used, the output parameter uncertainties can be greatly misrepresented. In most cases, the global covariance matrix produced by TSUNAMI-IP will only be nearly positive definite due to a lack of diagonal dominance. However, the matrix can be made more diagonally dominant by multiplying the matrix’s off-diagonal terms by $1/(1+\varepsilon)$ for some very small value $\varepsilon$.
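The positive-definiteness repair can be sketched as below, assuming the off-diagonal terms are divided by $1+\varepsilon$ repeatedly until a Cholesky factorization succeeds. This is a sketch of the idea, not a TSUNAMI-IP capability, and the stopping criterion (a successful Cholesky factorization) is an assumption.

```python
import numpy as np

def make_positive_definite(C, eps=1e-6, max_iter=100):
    """Shrink off-diagonal terms by 1/(1 + eps) until the matrix admits
    a Cholesky factorization, restoring diagonal dominance (a sketch of
    the repair described in the text)."""
    C = C.copy()
    off = ~np.eye(C.shape[0], dtype=bool)
    for _ in range(max_iter):
        try:
            np.linalg.cholesky(C)
            return C
        except np.linalg.LinAlgError:
            C[off] /= 1.0 + eps
    raise RuntimeError("matrix could not be repaired")

# A nearly positive definite matrix (off-diagonal slightly exceeds the
# diagonal due to round-off, so the smallest eigenvalue is negative):
C_bad = np.array([[1.0, 1.0000001],
                  [1.0000001, 1.0]])
C_fixed = make_positive_definite(C_bad)
print(np.all(np.linalg.eigvalsh(C_fixed) > 0))
```

The shrinkage leaves the diagonal (the variances) untouched and only damps the correlations, which is why a very small ε suffices.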

Note that the XSUSA approach does not require any of these modifications since it is fundamentally a statistically based approach, whereas the two-step method uses a deterministic approach in the first step.
3.2. Results
3.2.1. Pin-Cell Calculations
Before the two-step and XSUSA methods are applied to a full-core problem, it is prudent to perform a preliminary investigation on an easily tractable problem. Such a tractable problem consists of a single TMI pin cell, as defined in the UAM benchmark [1]. Since the two methods of interest fundamentally work with covariances, the preliminary investigation will compare how the two-step and XSUSA methods calculate variances and covariances for few-group parameters. Recall that SCALE/TSUNAMI, the underlying code system used in the two-step method, considers both the explicit and implicit contributions from cross sections. The XSUSA method only considers explicit effects for the same perturbations. Consequently, to produce a fair comparison, the implicit sensitivity coefficient component is disabled in TSUNAMI. To this end, 1000 XSUSA samples of few-group scatter and fission cross sections are compared to those produced by the modified TSUNAMI code.
First, the standard deviations for the scatter and fission cross sections are compared in Figure 3, which depicts ratios of standard deviations produced by XSUSA and SCALE. In Figure 3, two different SCALE results are shown. The first result, labeled “GPT(explicit),” considers only the explicit sensitivity coefficients in SCALE. The second result, labeled “GPT(explicit, XSUSA),” not only considers the explicit sensitivity coefficients but also utilizes the same perturbation factors generated by the XSUSA simulations. For an in-depth discussion of how perturbation factors are used in the pertinent methodologies, the interested reader is referred to [16]. Ideally, all ratios in Figure 3 would be identically unity in the case where the responses depend linearly on the uncertain parameters. However, since XSUSA is a statistical method, some variability is present in the results. Some variability can also result from nonlinear phenomena. When the same perturbation factors are used in SCALE and XSUSA, all points are well contained in the 95% confidence interval bounds. The same phenomenon can be observed when the generalized perturbation theory and statistically generated covariance matrices are compared in Figure 4.
In Figure 4 the correlation coefficients should ideally lie along the dotted line, representing a one-to-one relationship. Figure 4(b) has points concentrated more closely around the dotted line because identical perturbation factors are used in XSUSA and SCALE. With all other effects accounted for, the slight discrepancies visible in Figure 4(b) must stem from nonlinear effects. The black lines bounding the points in Figure 4 represent the 95% confidence bounds for the correlation coefficients calculated with the Fisher transformation [17].
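Confidence bounds of the kind drawn in Figure 4 can be reproduced in outline as follows. This is a sketch: the sample size of 1000 matches the XSUSA runs, but the correlation value is illustrative.

```python
import math

def fisher_ci(r: float, n: int, z_crit: float = 1.96):
    """95% confidence interval for a correlation coefficient r estimated
    from n samples, via the Fisher z-transformation z = atanh(r), whose
    standard error is approximately 1/sqrt(n - 3)."""
    z = math.atanh(r)
    se = 1.0 / math.sqrt(n - 3)
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

# Interval for an estimated correlation of 0.9 from 1000 samples:
lo, hi = fisher_ci(0.9, 1000)
print(round(lo, 3), round(hi, 3))
```

The transform makes the sampling distribution approximately normal, so the interval is asymmetric about r when mapped back through tanh.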
3.2.2. Full-Core Calculations
The TMI core under consideration consists of 11 different UO2 assemblies and a reflector region placed in 1/8 symmetry. All control rods are ejected from the core, which is at hot zero power [1]. By use of the XSUSA and two-step methods, uncertainties are obtained for the core-wide multiplication factor and for the assembly-wise relative power distribution. For a two-group formulation, each assembly in the TMI core requires 11 perturbed cross sections. These are the transport, absorption, kappa-fission, and nu-fission cross sections along with a downscatter cross section and two ADFs. The reflector region requires only 7 cross section inputs, for a total of 128 perturbed cross sections per core simulation.
The core simulator utilized for the following analysis is PARCS. The multigroup NEM nodal kernel is used to execute all 290 core simulations [11]. Initially 300 core simulations were proposed, but some of the cross section perturbations in the two-step method were too large, so PARCS was unable to produce a converged solution. By Wilks’ formula, the large number of core simulations ensures with high confidence that the largest output values obtained bound the output distribution. The multiplication factor uncertainty results obtained with the XSUSA and two-step methods for the TMI core are summarized in Table 2. The table clearly shows that the XSUSA and two-step methods consistently calculate uncertainties in the multiplication factor.
*The 95% confidence interval is [0.00522, 0.00614]. 
Both the “one-step” reference solutions and the two-step and XSUSA methods produced results that are well within statistical uncertainty of each other, as evidenced by comparing Tables 2 and 3. The agreement between the “one-step” reference solutions and between the two-step and XSUSA methods appears to be better than the overall agreement among all four calculation schemes.
^{*1}The 95% confidence interval is [0.00563, 0.00661]. ^{*2}The 95% confidence interval is [0.00544, 0.00639]. 
The mean power distributions obtained from the XSUSA/PARCS and two-step methods are shown in Figure 5 along with XSUSA/KENO reference solutions. The values displayed in Figure 5 are relative power distributions such that the mean power in the core is unity. As expected, the mean power distributions predicted by the XSUSA and two-step methods are very similar, with the largest nodewise discrepancy being less than 1%. The relative standard deviation (%) in power for each node is shown in Figure 6.
Even before examining the numerical values of the uncertainty in the core power distribution calculated by the three methods in Figures 6 and 7, it is evident that the spatial distribution of the uncertainty is similar across all the methods. Uncertainties with the highest magnitudes congregate around the center of the core. This is due to the radial heterogeneity of the core configuration [18]. The two-step method seems to attribute less uncertainty overall to each nodal power. Although the reasons for this observation are currently under investigation, the authors have noticed that the relative power distribution uncertainties are particularly sensitive to the way in which uncertainties are propagated to the transport cross section in the two-step method.
4. Conclusions
The core simulator output uncertainties for the TMI core obtained with the XSUSA and two-step methods indicate that both methods are consistent in general and are able to propagate nuclear data uncertainties to the core simulator. However, further investigation is needed to explain some of the discrepancies observed between the two methods, especially in the calculation of uncertainty in the relative power distribution. Since the TMI core used in this analysis is relatively homogeneous, the linear approximations employed by the two-step method are completely satisfactory. While the authors anticipate that the linear approximations will hold for more inhomogeneous cores, such as the MOX cores specified in the UAM benchmark [1], this matter should be examined in greater detail.
Despite some of the current limitations of the generalized perturbation theory implementations in SCALE, both uncertainty quantification methods yield consistent uncertainties in the core simulator effective multiplication factor. Currently, the limitations of generalized perturbation theory as applied in the two-step method make the XSUSA approach a more robust choice for reactor uncertainty analysis. In order to perform a steady-state uncertainty analysis, methods should be developed in the current generalized perturbation theory framework in SCALE to capture all uncertainty within reach of the XSUSA approach. Methods should also be developed so that two-step-type methods can be applied to burnup and transient calculations, as defined in phases II–III of the UAM benchmark. The XSUSA approach already allows for such calculations, as evident from [16, 19].
In terms of efficiency, the XSUSA and two-step methods require similar computation times if parallel computing is employed. For the TMI core, 128 transport-like solutions on an assembly were required to obtain a global covariance matrix in TSUNAMI, one solution for each response. To obtain the desired statistical accuracy this covariance matrix was sampled around 300 times. Relatively speaking, sampling the covariance matrix and running the perturbed cross sections through a core simulator are free. Since no covariance matrix is used in the XSUSA approach, some 3600 full transport solutions on an assembly are needed to be able to execute 300 core simulations (11 assemblies plus 1 reflector, multiplied by 300 perturbed cross section sets). To summarize, for full-core problems the computational burden is much less when the two-step method is used. However, due to the nature of parallel processing, the two-step and XSUSA methods can take the same amount of time. Overall, more work should be done with the two-step method to make it a viable tool for uncertainty quantification in core simulations. However, the results in this paper suggest that the two-step method can be made fully consistent with more versatile stochastic methods.
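The solution counts quoted above can be tallied directly (counts taken from the text; wall-clock time is not modeled).

```python
# Transport-solution bookkeeping for the TMI comparison.
n_assemblies = 11
xs_per_assembly = 11      # two-group responses per assembly (see Section 3.2.2)
xs_reflector = 7
n_samples = 300           # perturbed cross section sets / core simulations

# Two-step: one generalized-adjoint (transport-like) solution per response.
two_step_solutions = n_assemblies * xs_per_assembly + xs_reflector

# XSUSA: one full lattice transport solution per region per sample.
xsusa_solutions = (n_assemblies + 1) * n_samples

print(two_step_solutions, xsusa_solutions)  # -> 128 3600
```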
Acknowledgments
This work was supported by the German Ministry of Economics and Technology and the US Nuclear Regulatory Commission. This paper has been authored by UT-Battelle LLC under Contract no. DE-AC05-00OR22725 with the U.S. Department of Energy. The United States Government retains and the publisher, by accepting the paper for publication, acknowledges that the United States Government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this paper, or allow others to do so, for United States Government purposes.
References
[1] K. Ivanov, M. Avramova, I. Kodeli, and E. Sartori, Benchmark for Uncertainty Analysis in Modeling (UAM) for Design, Operation and Safety Analysis of LWRs, Rep. NEA/NSC/DOC(2007)23, Nuclear Energy Agency, 2nd edition, 2007.
[2] M. Klein, L. Gallner, B. Krzykacz-Hausmann, A. Pautz, and W. Zwermann, “Influence of nuclear data uncertainties on reactor core calculations,” Kerntechnik, vol. 76, no. 3, pp. 174–178, 2011.
[3] B. Krzykacz, E. Hofer, and M. Kloos, “A software system for probabilistic uncertainty and sensitivity analysis of results from computer models,” in Proceedings of the International Conference on Probabilistic Safety Assessment and Management (PSAM '94), San Diego, Calif, USA, 1994.
[4] M. Williams, M. A. Jessee, R. Ellis, and B. Rearden, “Sensitivity and uncertainty analysis for OECD UAM benchmark of Peach Bottom BWR,” in Proceedings of the 4th Uncertainty Analysis in Modelling Benchmark Meeting, Pisa, Italy, April 2010.
[5] A. Yankov, M. Klein, M. A. Jessee et al., “Comparison of XSUSA and ‘two-step’ approaches for full-core uncertainty quantification,” in Proceedings of the International Conference on the Physics of Reactors (PHYSOR '12), Knoxville, Tenn, USA, April 2012.
[6] K. Ivanov, T. M. Beam, A. J. Baratta, A. Irani, and N. Trikouros, “Pressurised water reactor Main Steam Line Break (MSLB) benchmark,” Tech. Rep. NEA/NSC/DOC(99)8, Nuclear Energy Agency, 1999.
[7] SCALE: A Comprehensive Modeling and Simulation Suite for Nuclear Safety Analysis and Design, ORNL/TM-2005/39, Version 6.1, Radiation Safety Information Computational Center at Oak Ridge National Laboratory as CCC-785, Oak Ridge, Tenn, USA, 2011.
[8] P. R. Bevington and K. Robinson, Data Reduction and Error Analysis for the Physical Sciences, McGraw-Hill, Boston, Mass, USA, 2003.
[9] M. Williams, D. Wiarda, H. Smith et al., “Development of a statistical sampling method for uncertainty analysis with SCALE,” in Proceedings of the International Conference on the Physics of Reactors (PHYSOR '12), Knoxville, Tenn, USA, April 2012.
[10] M. A. Jessee, M. L. Williams, and M. D. DeHart, “Development of generalized perturbation theory capability within the SCALE code package,” in Proceedings of the International Conference on Mathematics, Computational Methods, and Reactor Physics (M&C '09), Saratoga Springs, New York, USA, May 2009.
[11] T. Downar, Y. Xu, and V. Seker, “PARCS v3.0 Theory Manual,” UM-NERS-09-001, October 2009.
[12] S. Langenbuch and K. Velkov, “Overview on the development and application of the coupled code system ATHLET—QUABOX/CUBBOX,” in Proceedings of Mathematics and Computation, Supercomputing, Reactor Physics and Nuclear and Biological Applications, Avignon, France, September 2005.
[13] M. Williams, “Perturbation theory for nuclear reactor analysis,” in CRC Handbook of Nuclear Reactor Calculations, vol. 3, pp. 63–188, 1986.
[14] M. A. Jessee, Cross Section Adjustment Techniques for BWR Adaptive Simulation [Dissertation], North Carolina State University, Raleigh, NC, USA, 2008.
[15] A. Yankov, B. Collins, M. A. Jessee, and T. Downar, “A generalized adjoint approach for quantifying reflector assembly discontinuity factor uncertainties,” in Proceedings of the International Conference on the Physics of Reactors (PHYSOR '12), Knoxville, Tenn, USA, April 2012.
[16] M. L. Williams, G. Ilas, M. A. Jessee et al., “A statistical sampling method for uncertainty analysis with SCALE and XSUSA,” submitted to Nuclear Technology.
[17] R. Fisher, “On the ‘probable error’ of a coefficient of correlation deduced from a small sample,” Metron, vol. 1, pp. 3–32, 1921.
[18] M. Klein, L. Gallner, B. Krzykacz-Hausmann, A. Pautz, K. Velkov, and W. Zwermann, “Interaction of loading pattern and nuclear data uncertainties in reactor core calculations,” in Proceedings of the International Conference on the Physics of Reactors (PHYSOR '12), Knoxville, Tenn, USA, April 2012.
[19] I. Pasichnyk, M. Klein, K. Velkov, W. Zwermann, and A. Pautz, “Nuclear data uncertainties by the PWR MOX/UO2 core rod ejection benchmark,” in Proceedings of the International Conference on the Physics of Reactors (PHYSOR '12), Knoxville, Tenn, USA, April 2012.
Copyright
Copyright © 2012 Artem Yankov et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.