Science and Technology of Nuclear Installations
Volume 2012 (2012), Article ID 767096, 9 pages
http://dx.doi.org/10.1155/2012/767096
Research Article

A Two-Step Approach to Uncertainty Quantification of Core Simulators

1Department of Nuclear Engineering and Radiological Sciences, University of Michigan, 2355 Bonisteel Boulevard, Ann Arbor, MI 48109, USA
2Reactor Safety Research Division, Gesellschaft für Anlagen- und Reaktorsicherheit (GRS) mbH, Boltzmannstraße 14, 85748 Garching bei München, Germany
3Reactor and Nuclear Systems Division, Oak Ridge National Laboratory, P.O. Box 2008 MS6172, Oak Ridge, TN 37831, USA

Received 30 July 2012; Accepted 7 December 2012

Academic Editor: Kostadin Ivanov

Copyright © 2012 Artem Yankov et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Of the multiple sources of error introduced into the standard computational regime for simulating reactor cores, rigorous uncertainty analysis methods are available primarily to quantify the effects of cross section uncertainties. Two methods for propagating cross section uncertainties through core simulators are the XSUSA statistical approach and the “two-step” method. The XSUSA approach, which is based on the SUSA code package, is fundamentally a stochastic sampling method. Alternatively, the two-step method utilizes generalized perturbation theory in the first step and stochastic sampling in the second step. The consistency of these two methods in quantifying uncertainties in the multiplication factor and in the core power distribution was examined in the framework of phase I-3 of the OECD Uncertainty Analysis in Modeling benchmark. With the Three Mile Island Unit 1 core as a base model for analysis, the XSUSA and two-step methods were applied with certain limitations, and the results were compared to those produced by other stochastic sampling-based codes. Based on the uncertainty analysis results, conclusions were drawn as to the method that is currently more viable for computing uncertainties in burnup and transient calculations.

1. Introduction

Computational modeling of nuclear reactor stability and performance has evolved into a multiphysics and multiscale regime. Various computer codes have been developed and optimized to model individual facets of reactor operation such as neutronics, thermal hydraulics, and kinetics. These codes are most often coupled to produce more realistic results. While it is crucial to produce best-estimate calculations for the design and safety analysis of nuclear reactors, it is equally important to obtain design margins by propagating uncertainty information through the entire computational process. The purpose of the OECD (Organization for Economic Cooperation and Development) Uncertainty Analysis in Modeling (UAM) benchmark is to produce a framework for the development of uncertainty analysis methodologies in reactor simulations [1]. Three phases comprise the benchmark, with each phase building in scale on its predecessors. The first phase deals with uncertainties in neutronics calculations, the second phase deals with neutron kinetics, and the final phase requires the propagation of uncertainties through coupled neutronics/thermal-hydraulics simulations.

The neutronics phase of the UAM benchmark deals specifically with the propagation of input parameter uncertainties to uncertainties in output parameters on a full-core scale. In the established framework of full-core analyses, lattice homogenized few-group cross sections are used as inputs to core simulators. Core simulators utilize a number of approximations to the exact transport equation, effectively introducing uncertainties into output parameters. Geometrical uncertainties and numerical method simplifications also introduce modeling uncertainties. While it is important to propagate all known uncertainties when conducting a thorough uncertainty analysis, the necessary methods to make this possible must still be developed. However, rigorous methods have already been developed to propagate cross section uncertainties from lattice transport solvers to core simulators. Consequently, few-group homogenized cross section errors are assumed for now to be the sole source of uncertainty in the subsequent analyses.

Two methods already exist for propagating cross section uncertainties through core simulators. The first method is commonly referred to as the stochastic sampling (Monte Carlo) method. The XSUSA (Cross Section Uncertainty and Sensitivity Analysis) code system is representative of this approach [2]. XSUSA was developed by GRS based on the SUSA code package [3]. An alternate approach, the two-step method, utilizes generalized perturbation theory in the first step and stochastic sampling in the second step [4, 5]. The purpose of this paper is to show consistency between these two methods in the framework of phase I-3 of the UAM benchmark. As defined in the UAM benchmark specifications, the Three Mile Island Unit 1 (TMI) core will be the focus of application for the XSUSA and two-step methods. The TMI core is chosen for analysis mainly because it has been the focus of past benchmark problems and is therefore familiar to much of the nuclear engineering community [6].

The two-step method is motivated largely by the computational expense of solving the transport equation. In the stochastic sampling approach there is effectively a one-to-one mapping between solutions of the transport equation and sets of homogenized cross section inputs for a core simulator. Alternatively, for practical problems the two-step method provides a means by which an unlimited number of core simulator inputs can be generated at the cost of relatively few transport-type solutions. That means is a few-group covariance matrix whose elements are generated with linear perturbation theory. Hence, the quality of the core simulator inputs produced by the two-step method is limited by the extent to which linear perturbation theory can describe the system under study. By contrast, stochastic sampling through the XSUSA approach produces random core simulator inputs whose distributions are not subject to linear approximations. This paper shows that the linear approximations used in the two-step method can be remarkably accurate.

2. Methodology

Both the stochastic and two-step methods actively use the modules in SCALE to propagate cross section uncertainties [7]. Both methods also make extensive use of SCALE’s 44-group covariance library. The multigroup cross sections are assumed to follow a multivariate normal distribution, so expected values and a covariance matrix suffice to fully describe the distribution. In the XSUSA approach, all input parameters are varied simultaneously, and the number of calculations required to achieve a given statistical accuracy in the output parameters of interest is independent of the number of inputs [2]. The number of required runs can be calculated with Wilks’ formula, which gives the number of runs needed so that, at a specified confidence level, the maximum observed code output bounds a specified quantile of the output distribution. By contrast, the number of calculations in the two-step method depends on the number of input parameters, since each input requires a transport-like solution. The two methodologies are summarized below.
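As an illustration, the required sample size from the one-sided, first-order Wilks formula can be computed directly. The function below is our own sketch, not part of either code system:

```python
def wilks_sample_size(beta: float, gamma: float) -> int:
    """Smallest n such that the largest of n independent outputs bounds the
    beta-quantile of the output distribution with confidence gamma
    (one-sided, first-order Wilks formula: 1 - beta**n >= gamma)."""
    n = 1
    while 1.0 - beta ** n < gamma:
        n += 1
    return n

# Classic 95%/95% one-sided tolerance limit
print(wilks_sample_size(0.95, 0.95))  # -> 59
```

For the common 95%/95% criterion this gives the familiar 59 runs; tightening the confidence level to 99% raises the count to 90.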

2.1. XSUSA Approach

The covariance matrix between inputs plays a central role in stochastic sampling. Cross section uncertainties are correlated, and the degree of correlation can be described by a covariance matrix. Cross sections must be perturbed such that their correlation relations are always preserved. If μ is a vector of mean values whose covariance relations are defined by the matrix Σ, then correlated random variables x can be generated by applying [8]

x = μ + Uᵀz.  (1)

In (1) the operator U is the upper triangular matrix obtained by taking the Cholesky decomposition of the covariance matrix. Every covariance matrix is Hermitian and positive definite; thus, all covariance matrices have a Cholesky decomposition Σ = UᵀU. The vector z in (1) is a random standard normal vector. When Uᵀ multiplies z, linear combinations of the uncertainties are taken in accordance with their covariance relations, and so x is normally distributed with mean μ and covariance Σ.
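The Cholesky-based sampling described above can be sketched in a few lines of NumPy; the three-group mean vector and covariance matrix below are hypothetical placeholders, not SCALE data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 3-group mean cross sections and covariance matrix
mu = np.array([1.20, 0.85, 0.40])
cov = np.array([[4.0e-4, 1.0e-4, 5.0e-5],
                [1.0e-4, 2.5e-4, 8.0e-5],
                [5.0e-5, 8.0e-5, 1.0e-4]])

# Upper-triangular Cholesky factor U with cov = U^T U
# (numpy returns the lower factor L; U is its transpose)
U = np.linalg.cholesky(cov).T

def sample_correlated(mu, U, rng):
    """Draw x = mu + U^T z for a standard normal z, so cov(x) = U^T U."""
    z = rng.standard_normal(mu.size)
    return mu + U.T @ z

# The empirical covariance of many samples approaches cov
samples = np.array([sample_correlated(mu, U, rng) for _ in range(50_000)])
```

With enough samples, the empirical mean and covariance of `samples` reproduce `mu` and `cov` to within statistical noise, which is exactly the property the perturbed libraries rely on.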

Hence, to produce perturbed cross sections, only the Cholesky decomposition of the cross section’s covariance matrix is needed along with a random normal vector. In the XSUSA approach, ENDF/B-VII nuclear data in the SCALE 238-group structure are used. Spectral calculations are performed in BONAMI and CENTRM to produce a problem-specific cross section library, as seen in Figure 1. By use of SCALE’s 44-group covariance library with the problem-specific library generated by the spectral calculations, the XSUSA code applies perturbations to create a set of varied, problem-dependent cross section libraries. Specifically, the MEDUSA module samples the 44-group covariance library, which is enlarged to accommodate all problem-specific nuclides and reactions. CLAROL-plus then takes the output from MEDUSA and creates a problem-specific multigroup library. The XSUSA code ensures that the varied data are physically consistent. This procedure does not include the implicit effects of uncertainties in self-shielding, but extensions are currently being made to include these effects [9].

Figure 1: Flow diagram of the XSUSA approach starting from use of the ENDF/B-VII 238-group library and ending with a statistical evaluation of output parameters.

Each set of cross section libraries produced by XSUSA is passed to SCALE’s lattice physics transport solver NEWT, which in turn produces perturbed, homogenized, few-group cross section libraries. The perturbed few-group libraries are then used as input for core simulators such as PARCS [11] and QUABOX/CUBBOX [12]. Once all libraries are processed by the core simulator, statistics can be taken on the output parameters of interest. As indicated in Figure 1, NEWT can be replaced by any of SCALE’s transport solvers. For example, XSDRN can be used for one-dimensional (1D) calculations and KENO for Monte Carlo reference solutions [2].

2.2. Two-Step Method

Unlike the XSUSA approach, the two-step method is only partly based on sampling techniques. In the first step it makes use of the generalized adjoint for the transport equation [13]. In the two-step method, problem-dependent self-shielded data are also generated before any perturbed cross sections are calculated, as seen in Figure 2. Using the problem-dependent cross sections, the TSUNAMI module is applied to calculate the forward transport, adjoint transport, and generalized adjoint transport solutions to the problem at hand. The SCALE module SAMS then uses the problem solutions to calculate sensitivity coefficients for responses of interest. A response R for reaction type x in broad group g is defined as a ratio of inner products with the forward neutron flux φ [10]:

R = ⟨Σ_{x,g} φ⟩ / ⟨φ⟩.  (2)

The explicit sensitivity coefficient of the response R with respect to some nuclear data parameter α in the transport equation is then given as [10]

S_{R,α} = (α/R)(∂R/∂α) + α ⟨Γ*, [(1/k)(∂B/∂α) − (∂A/∂α)] φ⟩,  (3)

where φ is the solution of the forward transport equation Aφ = (1/k)Bφ, A is the migration and loss operator, B is the production operator, and Γ* is the generalized adjoint function.

Figure 2: Flow diagram for the proposed two-step method, which mainly utilizes the generalized perturbation theory modules in SCALE [10].

The generalized adjoint function Γ* can be obtained by solving the generalized adjoint transport equation shown in (4) [10]:

[A† − (1/k)B†] Γ* = (1/R)(∂R/∂φ),  (4)

where A† and B† are the adjoints of the migration-loss and production operators. The solution of (4) requires the solution of a generalized adjoint transport problem for each response. The pertinent responses of interest are the homogenized few-group cross sections needed for core simulators. Equations (3) and (4) above are used to compute explicit sensitivity coefficients. The TSUNAMI methodology also incorporates implicit sensitivity effects arising from resonance self-shielding [10].

If the covariance matrix C_in of some input parameters is available along with the sensitivities S relating the change in outputs to the change in input parameters, the “sandwich rule” can be applied to obtain a covariance matrix C_out for the outputs. The “sandwich rule” is expressed as [14]

C_out = S C_in Sᵀ.  (5)

Consequently, with C_in taken as the SCALE 44-group covariance matrix, a covariance matrix for the few-group homogenized cross sections can be obtained. The SCALE module TSUNAMI-IP is used to generate a global covariance matrix relating the few-group cross sections in all the assembly and reflector regions comprising a full-core problem. With (1), this global covariance matrix is sampled to produce perturbed cross section libraries that can then be used as input for a core simulator. By applying the XSUSA and two-step methods to calculate uncertainties in output parameters of interest for the full-core TMI problem, it can be shown that the two different approaches produce consistent results.
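The sandwich rule C_out = S C_in Sᵀ is a single matrix product; the sensitivity matrix and input covariance below are hypothetical placeholders, not TSUNAMI-IP output:

```python
import numpy as np

# Hypothetical sensitivity matrix S (2 few-group responses x 3 fine-group
# inputs) and symmetric input covariance C_in
S = np.array([[0.8, 0.1, 0.05],
              [0.2, 0.6, 0.30]])
C_in = np.array([[1.0e-4, 2.0e-5, 0.0],
                 [2.0e-5, 4.0e-4, 1.0e-5],
                 [0.0,    1.0e-5, 9.0e-5]])

# "Sandwich rule": propagate the input covariance through the sensitivities
C_out = S @ C_in @ S.T
```

Note that C_out inherits symmetry from C_in, and its diagonal holds the variances of the few-group responses.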

3. Application

3.1. Implementation

The computational implementation of the two-step and XSUSA methods strays somewhat from their theoretical formulations. Specifically, modifications must be made because the generalized perturbation theory capabilities in SCALE are currently limited to only some of the responses required by core simulators. First, the TSUNAMI module currently cannot compute the uncertainty in the few-group homogenized transport cross section. However, uncertainties in the total and scatter cross sections can be calculated. To approximate perturbations to the transport cross section, the following relation is used:

Σ′_tr = Σ′_t − μ̄ Σ′_s.  (6)

The average cosine of the scattering angle μ̄ is held constant while the total cross section Σ_t and scatter cross section Σ_s are perturbed to yield an effectively perturbed transport cross section Σ′_tr that can be used as input to a core simulator. Normally a critical spectrum based on either the P1 or B1 approximation is utilized to compute few-group cross sections. However, the critical spectrum cannot be correctly accounted for in the TSUNAMI generalized perturbation theory methodology. Consequently, in the analysis that follows, the default B1 critical spectrum calculation in SCALE is disabled in favor of the simplified P1 formulation shown in (6). Similarly, TSUNAMI does not generate uncertainties for kappa, the average energy release per fission event. To calculate a perturbed kappa-fission cross section, the average value of kappa is multiplied by a perturbed fission cross section to obtain an effectively perturbed kappa-fission cross section, as shown in

(κΣ_f)′ = κ̄ Σ′_f.  (7)

A more subtle modification must be made when calculating the uncertainties in assembly discontinuity factors (ADFs). At the assembly level, where reflective boundary conditions are used, TSUNAMI can approximate ADF uncertainties very well by taking the ratio of the average flux on a thin surface at the assembly boundary to the assembly-averaged flux [15].
A cell volume normalization factor is also needed to account for the size of the thin surface at the assembly boundary. While this approach is valid for an infinite system, TSUNAMI is currently not capable of accurately quantifying ADF uncertainties at reflector interfaces due to leakage effects. To calculate uncertainties in few-group ADFs along reflector interfaces, a method developed by Yankov et al. is used [15]. The method is based on the 1D adjoint diffusion approximation generally used to treat reflector interface ADFs, the neutron balance equation on a fuel assembly/reflector interface, and the “sandwich rule.”
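The effective transport and kappa-fission perturbations described above can be sketched as follows; the one-group numerical values are hypothetical placeholders chosen only for illustration:

```python
def perturbed_transport(sigma_t_pert, sigma_s_pert, mu_bar):
    """Effective perturbed transport cross section, holding the average
    scattering cosine mu_bar fixed: sigma_tr' = sigma_t' - mu_bar * sigma_s'."""
    return sigma_t_pert - mu_bar * sigma_s_pert

def perturbed_kappa_fission(kappa_mean, sigma_f_pert):
    """Effective perturbed kappa-fission cross section: the mean kappa
    multiplied by a perturbed fission cross section."""
    return kappa_mean * sigma_f_pert

# Hypothetical one-group values (1/cm; kappa in J per fission)
sigma_tr = perturbed_transport(sigma_t_pert=0.650, sigma_s_pert=0.600,
                               mu_bar=0.30)
kappa_fission = perturbed_kappa_fission(kappa_mean=3.2e-11,
                                        sigma_f_pert=0.055)
```

Because μ̄ and κ̄ are frozen at their mean values, all of the uncertainty in these effective quantities comes from the sampled total, scatter, and fission cross sections, matching the approximation made in the implementation.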

The two-step method is presented algorithmically in Table 1. The majority of the algorithm consists of file manipulations. In the second step of the algorithm, it is important to check that the global covariance matrix produced by TSUNAMI-IP is positive definite. The global covariance matrix captures the correlations among few-group cross sections between all assemblies in the core. Use of a global covariance matrix is essential when sampling cross sections for an entire core, since otherwise the similarity of the nuclide composition of different fuel assembly types is neglected. If the global covariance matrix is not used, the output parameter uncertainties can be greatly misrepresented. In most cases, the global covariance matrix produced by TSUNAMI-IP will only be nearly positive definite due to a lack of diagonal dominance. However, the matrix can be made more diagonally dominant by multiplying its off-diagonal terms by (1 − ε) for some very small value ε.
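The off-diagonal shrinking described above can be sketched as a simple retry loop; the 2×2 example matrix and the value of ε are our own illustrative choices:

```python
import numpy as np

def make_positive_definite(C, eps=1e-8, max_iter=100):
    """Shrink the off-diagonal terms of a symmetric, nearly positive
    definite matrix by (1 - eps) per pass until a Cholesky factorization
    succeeds, then return the repaired matrix."""
    A = C.copy()
    off_diag = ~np.eye(A.shape[0], dtype=bool)
    for _ in range(max_iter):
        try:
            np.linalg.cholesky(A)  # raises LinAlgError if not PD
            return A
        except np.linalg.LinAlgError:
            A[off_diag] *= 1.0 - eps
    raise RuntimeError("matrix could not be made positive definite")

# A symmetric matrix that is not positive definite
# (implied correlation slightly above 1)
C = np.array([[1.0, 1.0000001],
              [1.0000001, 1.0]])
A = make_positive_definite(C, eps=1e-6)
np.linalg.cholesky(A)  # now succeeds
```

The repair barely changes the matrix (off-diagonals shrink by parts per million) but restores the positive definiteness that the Cholesky-based sampling in (1) requires.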

Table 1: Algorithm for applying the two-step method using SCALE and a core simulator.

Note that the XSUSA approach does not require any of these modifications since it is fundamentally a statistically based approach, whereas the two-step method uses a deterministic approach in the first step.

3.2. Results
3.2.1. Pin-Cell Calculations

Before the two-step and XSUSA methods are applied to a full-core problem, it is prudent to perform a preliminary investigation on an easily tractable problem. Such a tractable problem consists of a single TMI pin-cell, as defined in the UAM benchmark [1]. Since the two methods of interest fundamentally work with covariances, the preliminary investigation compares the variances and covariances for few-group parameters calculated by the two-step and XSUSA methods. Recall that SCALE/TSUNAMI, the underlying code system used in the two-step method, considers both the explicit and implicit contributions from cross sections, whereas the XSUSA method considers only explicit effects for the same perturbations. Consequently, to produce a fair comparison, the implicit sensitivity coefficient component is disabled in TSUNAMI. To this end, 1000 XSUSA samples of few-group scatter and fission cross sections are compared to those produced by the modified TSUNAMI code.

First, the standard deviations for the scatter and fission cross sections are compared in Figure 3, which depicts ratios of standard deviations produced by XSUSA and SCALE. In Figure 3, two different SCALE results are shown. The first result, labeled “GPT(explicit),” considers only the explicit sensitivity coefficients in SCALE. The second result, labeled “GPT(explicit, XSUSA),” not only considers the explicit sensitivity coefficients but also utilizes the same perturbation factors generated by the XSUSA simulations. For an in-depth discussion of how perturbation factors are used in the pertinent methodologies, the interested reader is referred to [16]. Ideally, all ratios in Figure 3 would be identically unity if the responses depended linearly on the uncertain parameters. However, since XSUSA is a statistical method, some variability is present in the results. Some variability can also result from nonlinear phenomena. When the same perturbation factors are used in SCALE and XSUSA, all points are well contained within the 95% confidence interval bounds. The same phenomenon can be observed when the generalized perturbation theory and statistically generated covariance matrices are compared in Figure 4.

Figure 3: The ratio of the stochastic to generalized perturbation theory standard deviations of few-group cross sections. The independent axis represents various cross section indices.
Figure 4: Correlation coefficients calculated by XSUSA compared to those calculated by SCALE. (a) Only the explicit effect is represented in the SCALE coefficients. (b) The explicit effect and the XSUSA perturbation factors are considered.

In Figure 4 the correlation coefficients should ideally lie along the dotted line, representing a one-to-one relationship. Figure 4(b) has points concentrated more closely around the dotted line because identical perturbation factors are used in XSUSA and SCALE. All effects considered, the slight discrepancies visible in Figure 4(b) must stem from nonlinear effects. The black lines bounding the points in Figure 4 represent the 95% confidence bounds for the correlation coefficients, calculated with the Fisher transformation [17].
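Fisher-transformation confidence bounds of this kind can be reproduced in a few lines; the sample correlation of 0.8 and the sample size of 1000 below are illustrative choices, not values read off Figure 4:

```python
import math

def fisher_ci(r, n, z_crit=1.959963984540054):
    """Approximate 95% confidence interval for a correlation coefficient r
    estimated from n samples, via the Fisher z-transformation."""
    z = math.atanh(r)              # Fisher transform of r
    se = 1.0 / math.sqrt(n - 3)    # approximate standard error of z
    lo = z - z_crit * se
    hi = z + z_crit * se
    return math.tanh(lo), math.tanh(hi)  # transform back to correlation scale

# e.g. a correlation of 0.8 estimated from 1000 samples
lo, hi = fisher_ci(0.8, 1000)
```

The interval is asymmetric about r on the correlation scale, which is exactly why the transformation is used: confidence bounds built on the z scale respect the [−1, 1] range of correlation coefficients.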

3.2.2. Full-Core Calculations

The TMI core under consideration consists of 11 different UO2 assemblies and a reflector region placed in 1/8 symmetry. All control rods are ejected from the core, which is at hot zero power [1]. By use of the XSUSA and two-step methods, uncertainties are obtained for the core-wide multiplication factor and for the assembly-wise relative power distribution. For a two-group formulation, each assembly in the TMI core requires 11 perturbed cross sections. These are the transport, absorption, kappa-fission, and nu-fission cross sections along with a down-scatter cross section and two ADFs. The reflector region requires only 7 cross section inputs for a total of 128 perturbed cross sections per core simulation.
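The input bookkeeping above can be checked with a few lines. The 11-per-assembly count and the total of 128 follow the text; the breakdown of the reflector's 7 inputs (no fission responses) is our inference and is marked as such:

```python
# Bookkeeping for the perturbed few-group inputs described above
groups = 2
per_assembly = 4 * groups + 1 + 2   # transport, absorption, kappa-fission,
                                    # nu-fission in each group; one
                                    # down-scatter cross section; 2 ADFs
per_reflector = 2 * groups + 1 + 2  # assumed: transport and absorption in
                                    # each group; down-scatter; 2 ADFs
n_assemblies = 11
total = n_assemblies * per_assembly + per_reflector
print(per_assembly, per_reflector, total)  # -> 11 7 128
```

The total of 128 is also the number of responses, and hence generalized adjoint solutions, needed by the two-step method for this core.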

The core simulator utilized for the analysis that follows is PARCS. The multigroup NEM nodal kernel is used to execute all 290 core simulations [11]. Initially 300 core simulations were proposed, but some of the cross section perturbations in the two-step method were too large for PARCS to produce a converged solution. By Wilks’ formula, the large number of core simulations ensures that, with high probability, the largest output values obtained will not be exceeded. The multiplication factor uncertainty results obtained with the XSUSA and two-step methods for the TMI core are summarized in Table 2. The table clearly shows that the XSUSA and two-step methods calculate consistent uncertainties in the multiplication factor.

Table 2: Uncertainty in the effective multiplication factor from using the two-step and XSUSA methods with a sample size of 290.

Both the “one-step” reference solutions and the two-step and XSUSA methods produced results that are well within statistical uncertainty of each other, as evidenced by comparing Tables 2 and 3. The agreement between the “one-step” reference solutions and between the two-step and XSUSA methods appears to be better than the overall agreement among all four calculation schemes.

Table 3: Uncertainty in the effective multiplication factor from using “one-step” schemes.

The mean power distributions obtained from the XSUSA/PARCS and two-step methods are shown in Figure 5 along with XSUSA/KENO reference solutions. The values displayed in Figure 5 are relative power distributions such that the mean power in the core is unity. As expected, the mean power distributions predicted by the XSUSA and two-step methods are very similar, with the largest node-wise discrepancy being less than 1%. The relative standard deviation (%) in power for each node is shown in Figure 6.

Figure 5: Mean power distribution calculated by the two-step and XSUSA methods along with the XSUSA/KENO reference. Values shown are calculated such that unity is the core average power. Quarter symmetry is displayed.
Figure 6: Relative standard deviation (%) calculated by the two-step and XSUSA methods along with the XSUSA/KENO reference. Uncertainties in assembly discontinuity factors are included. Quarter symmetry is displayed.

Even before examining the numerical values of the uncertainty in the core power distribution calculated by the three methods in Figures 6 and 7, it is evident that the uncertainty is distributed similarly by all of the methods. Uncertainties with the highest magnitudes congregate around the center of the core, owing to the radial heterogeneity of the core configuration [18]. The two-step method seems to attribute less uncertainty overall to each nodal power. Although the reasons for this observation are currently under investigation, the authors have noticed that the relative power distribution uncertainties are particularly sensitive to the way in which uncertainties are propagated to the transport cross section in the two-step method.

Figure 7: The 95% confidence bounds are shown for the relative standard deviations corresponding to Figure 6.

4. Conclusions

The core simulator output uncertainties for the TMI core obtained with the XSUSA and two-step methods indicate that both methods are consistent in general and are able to propagate nuclear data uncertainties to the core simulator. However, further investigation is needed to explain some of the discrepancies observed between the two methods, especially in the calculation of uncertainty in the relative power distribution. Since the TMI core used in this analysis is relatively homogeneous, the linear approximations employed by the two-step method are completely satisfactory. While the authors anticipate that the linear approximations will hold for more inhomogeneous cores, such as the MOX cores specified in the UAM benchmark [1], this matter should be examined in greater detail.

Despite some of the current limitations of the generalized perturbation theory implementations in SCALE, both uncertainty quantification methods yield consistent uncertainties in the core simulator k-effective (Table 2). Currently, the limitations of generalized perturbation theory as applied in the two-step method make the XSUSA approach a more robust choice for reactor uncertainty analysis. To perform a complete steady-state uncertainty analysis, methods should be developed in the current generalized perturbation theory framework in SCALE to capture all of the uncertainty within reach of the XSUSA approach. Methods should also be developed so that two-step-type methods can be applied to burnup and transient calculations, as defined in phases II-III of the UAM benchmark. The XSUSA approach already allows for such calculations, as is evident from [16, 19].

In terms of efficiency, the XSUSA and two-step methods require similar computation times if parallel computing is employed. For the TMI core, 128 transport-like solutions on an assembly were required to obtain a global covariance matrix in TSUNAMI, one solution for each response. To obtain the desired statistical accuracy, this covariance matrix was sampled around 300 times. Relatively speaking, sampling the covariance matrix and running the perturbed cross sections through a core simulator are essentially free. Since no covariance matrix is used in the XSUSA approach, some 3600 full transport solutions on an assembly are needed to execute 300 core simulations (11 assemblies plus 1 reflector, multiplied by 300 perturbed cross section sets). To summarize, for full-core problems the computational burden is much smaller when the two-step method is used; however, because both workloads parallelize readily, the two-step and XSUSA methods can take the same amount of wall-clock time. Overall, more work should be done with the two-step method to make it a viable tool for uncertainty quantification in core simulations. Nevertheless, the results in this paper suggest that the two-step method can be made fully consistent with the more versatile stochastic methods.

Acknowledgments

This work was supported by the German Ministry of Economics and Technology and the US Nuclear Regulatory Commission. This paper has been authored by UT-Battelle LLC under Contract no. DE-AC05-00OR22725 with the U.S. Department of Energy. The United States Government retains and the publisher, by accepting the paper for publication, acknowledges that the United States Government retains a nonexclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this paper, or allow others to do so, for the United States Government purposes.

References

  1. K. Ivanov, M. Avramova, I. Kodeli, and E. Sartori, Benchmark for Uncertainty Analysis in Modeling (UAM) for Design, Operation and Safety Analysis of LWRs, Rep. NEA/NSC/DOC(2007)23, Nuclear Energy Agency, 2nd edition, 2007.
  2. M. Klein, L. Gallner, B. Krzykacz-Hausmann, A. Pautz, and W. Zwermann, “Influence of nuclear data uncertainties on reactor core calculations,” Kerntechnik, vol. 76, no. 3, pp. 174–178, 2011. View at Scopus
  3. B. Krzykacz, E. Hofer, and M. Kloos, “A software system for probabilistic uncertainty and sensitivity analysis of results from computer models,” in Proceedings of the International Conference on Probabilistic Safety Assessment and Management (PSAM '94), San Diego, Calif, USA, 1994.
  4. M. Williams, M. A. Jessee, R. Ellis, and B. Rearden, “Sensitivity and uncertainty analysis for OECD UAM benchmark of peach bottom BWR,” in Proceedings of the 4th Uncertainty Analysis in Modelling Benchmark Meeting, Pisa, Italy, April 2010.
  5. A. Yankov, M. Klein, M. A. Jessee et al., “Comparison of XSUSA and “two-step” approaches for full-core uncertainty quantification,” in Proceedings of the International Conference on the Physics of Reactors (PHYSOR '12), Knoxville, Tenn, USA, April 2012.
  6. K. Ivanov, T. M. Beam, A. J. Baratta, A. Irani, and N. Trikouros, “Pressurised water reactor Main Steam Line Break (MSLB) benchmark,” Tech. Rep. NEA/NSC/DOC(99)8, Nuclear Energy Agency, 1999.
  7. SCALE: A Comprehensive Modeling and Simulation Suite for Nuclear Safety Analysis and Design, ORNL/TM-2005/39, Version 6.1, Radiation Safety Information Computational Center at Oak Ridge National Laboratory as CCC-785, Oak Ridge, Tenn, USA, 2011.
  8. P. R. Bevington and D. K. Robinson, Data Reduction and Error Analysis for the Physical Sciences, McGraw-Hill, Boston, Mass, USA, 2003.
  9. M. Williams, D. Wiarda, H. Smith et al., “Development of a statistical sampling method for uncertainty analysis with scale,” in Proceedings of the International Conference on the Physics of Reactors (PHYSOR '12), Knoxville, Tenn, USA, April 2012.
  10. M. A. Jessee, M. L. Williams, and M. D. DeHart, “Development of generalized perturbation theory capability within the SCALE code package,” in Proceedings of the International Conference on Mathematics, Computational Methods, and Reactor Physics (M&C '09), Saratoga Springs, NY, USA, May 2009.
  11. T. Downar, Y. Xu, and V. Seker, “PARCS v3.0 Theory Manual,” UM-NERS-09-001, October 2009.
  12. S. Langenbuch and K. Velkov, “Overview on the development and application of the coupled code system ATHLET-QUABOX/CUBBOX,” in Proceedings of the Mathematics and Computation, Supercomputing, Reactor Physics and Nuclear and Biological Applications, Avignon, France, September 2005.
  13. M. Williams, “Perturbation theory for nuclear reactor analysis,” in CRC Handbook of Nuclear Reactor Calculations, vol. 3, pp. 63–188, 1986.
  14. M. A. Jessee, Cross section adjustment techniques for BWR adaptive simulation [Dissertation], North Carolina State University, Raleigh, NC, USA, 2008.
  15. A. Yankov, B. Collins, M. A. Jessee, and T. Downar, “A generalized adjoint approach for quantifying reflector assembly discontinuity factor uncertainties,” in Proceedings of the International Conference on the Physics of Reactors (PHYSOR '12), Knoxville, Tenn, USA, April 2012.
  16. M. L. Williams, G. Ilas, M. A. Jessee et al., “A statistical sampling method for uncertainty analysis with SCALE and XSUSA,” submitted to Nuclear Technology.
  17. R. Fisher, “On the ‘probable error’ of a coefficient of correlation deduced from a small sample,” Metron, vol. 1, pp. 3–32, 1921.
  18. M. Klein, L. Gallner, B. Krzykacz-Hausmann, A. Pautz, K. Velkov, and W. Zwermann, “Interaction of loading pattern and nuclear data uncertainties in reactor core calculations,” in Proceedings of the International Conference on the Physics of Reactors (PHYSOR '12), Knoxville, Tenn, USA, April 2012.
  19. I. Pasichnyk, M. Klein, K. Velkov, W. Zwermann, and A. Pautz, “Nuclear data uncertainties by the PWR MOX/UO2 core rod ejection benchmark,” in Proceedings of the International Conference on the Physics of Reactors (PHYSOR '12), Knoxville, Tenn, USA, April 2012.