Abstract

The Very High Temperature Reactor Methods Development group at the Idaho National Laboratory identified the need for a defensible and systematic uncertainty and sensitivity approach in 2009. This paper summarizes the results of an uncertainty and sensitivity quantification investigation performed with the SUSA code, utilizing the International Atomic Energy Agency CRP-5 Pebble Bed Modular Reactor benchmark and the INL code suite PEBBED-THERMIX. Eight model input parameters were selected for inclusion in this study, and after the input parameter variations and probability density functions were specified, a total of 800 steady state and depressurized loss of forced cooling (DLOFC) transient PEBBED-THERMIX calculations were performed, grouped into six data sets. These data sets were statistically analyzed to determine the 5% and 95% DLOFC peak fuel temperature tolerance intervals with 95% confidence levels. It was found that the uncertainties in the decay heat and graphite thermal conductivities were the most significant contributors to the propagated DLOFC peak fuel temperature uncertainty. No significant differences were observed between the results of Simple Random Sampling (SRS) or Latin Hypercube Sampling (LHS) data sets, and the use of uniform rather than normal input parameter distributions also did not lead to any significant differences between these data sets.

1. Introduction

Title 10 Part 50 (10 CFR 50.46) of the United States Code of Federal Regulations first allowed “Best Estimate” calculations rather than conservative code models of safety parameters in nuclear power plants in the 1980s, stipulating, however, that uncertainties be identified and quantified [1]. The continued development of High Temperature Gas Cooled Reactors (HTGRs) requires validation and verification of HTGR design and safety models and codes, and the predictive capability of coupled neutronics/thermal-hydraulics and depletion simulations for reactor design and safety analysis can be assessed with sensitivity analysis (SA) and uncertainty analysis (UA) methods.

In general, code uncertainty refers to uncertainty in the ability of a computer software product, coupled with a specific model, to accurately describe the actual physical system of interest. The computer model is an integration of the mathematical model, the numerical techniques used to solve those equations, and the representation of the physical model by the input geometry and material specifications. Each element contributes to the total uncertainty in the output parameter of interest, usually referred to as the Figure of Merit (FOM) in nuclear safety studies. The mathematical model consists of one or more governing equations that describe the balance between the creation, destruction, and flow of some quantity of interest (e.g., heat, coolant mass, or neutron flux) within a homogeneous control volume. It also consists of one or more subgrid equations that relate these gross phenomena to more complex physics that are neglected at the scale of the homogeneous control volume (e.g., neutron streaming between pebbles, heat conduction from the kernels to the pebble surface, etc.).

A further complication is that very few computer codes solve the analytic form of their governing equations. Instead, the differential operators in these equations are expanded as a truncated series and cast as a set of difference equations solved over a discrete mesh. If the equations are well posed, the solution is unique and refining the mesh reduces the error between the solutions of the discretized equation and the original differential equation. Unfortunately, unlimited mesh refinement is not possible and one must tolerate some truncation error. Furthermore, in many complex fluid system simulation codes, the combination of governing equations and subgrid correlations yields ill-posed systems of differential equations that do not converge to the analytical solution upon refinement of the mesh.

Another important source of uncertainty is that the input model is a simplification of the actual physical geometry. For example, the distribution of fuel pebbles in a HTGR core is neither regular nor uniform, but modeling individual pebbles is computationally prohibitive. Complex geometrical detail in some of the prismatic HTGR designs can likewise be very difficult to model accurately. The fourth major source of input uncertainty is the material neutronic and thermophysical properties. For core analysis, these include thermal properties such as conductivity and heat capacity, fluid properties such as density and viscosity, and neutronic properties such as cross sections. Knowledge of these parameters for each material of interest may be limited in the range of conditions found in a typical HTGR. Such uncertainty can be reduced through material testing and measurement, but the amount of testing is often limited by cost and schedule constraints, and the remaining uncertainty must be propagated through the calculations. In some cases, the natural variability of a given parameter under even the best experimental conditions may be large enough to inject uncertainty that cannot be ignored. Finally, when modeling of an actual operating reactor is considered, it is well known that the operational conditions (power level, inlet temperature, measured mass flow rates) can also have associated uncertainty ranges.

Of the four types of uncertainty sources indicated here, the uncertainties in material properties can usually be addressed by relatively simple manipulation of the corresponding values in the input decks, and geometry simplifications can be benchmarked against higher fidelity codes (e.g., 2D versus 3D effects). In contrast, the propagation of uncertainties in mathematical models and solver techniques is much more challenging, and in most cases not yet attempted in industry. Developments in uncertainty methodology are therefore currently focused on cross-section, model, and material input data uncertainties, and specifically on the propagation of uncertainties through sets of coupled neutronic and thermal-fluid calculations.

Several large international uncertainty quantification programs have been developed in recent years, of which the LWR Uncertainty Analysis in Modeling (UAM) benchmark [2] and the earlier BEMUSE [3] project are two examples. Both these efforts are coordinated by the Organization for Economic Co-operation and Development/Nuclear Energy Agency (OECD/NEA). These benchmarks include code-to-code comparisons of uncertainty predictions for cell, lattice and core physics calculations using common datasets, as well as comparisons with experimental and operational Light Water Reactor (LWR), Boiling Water Reactor (BWR), and Vodo-Vodyanoi Energetichesky Reactor (VVER) facilities. The HTGR community has however only recently initialized a dedicated Coordinated Research Project (CRP) for the quantification and propagation of uncertainties in a pebble bed and prismatic HTGR design [4]. One of the crucial advantages of the LWR community is the availability of large operational databases, where numerous measurements can support uncertainty validation efforts. In the HTGR domain, very few operational facilities exist (e.g., the Chinese HTR-10 and the Japanese HTTR), and the historical measured data is either not sufficiently accurate (the German AVR) or not available in the public domain (the US Fort St. Vrain).

The Very High Temperature Reactor (VHTR) Methods Development group at the Idaho National Laboratory (INL) identified the need for a defensible and systematic uncertainty and sensitivity approach that conforms to the code scaling, applicability, and uncertainty (CSAU) process in 2009. The Gesellschaft für Anlagen und Reaktorsicherheit (GRS) has incorporated a stochastic sampling CSAU approach that is particularly well suited for coupled reactor physics and thermal fluids core analysis into the Software for Uncertainty and Sensitivity Analyses (SUSA) code [5]. The stochastic sampling method is utilized by several participants in the LWR UAM benchmark [6, 7], where the use of ordered statistics to derive upper and lower tolerance intervals has been widely adopted.

This paper summarizes the results of an uncertainty quantification investigation performed with SUSA, utilizing a typical HTGR benchmark (the International Atomic Energy Agency CRP-5 Pebble Bed Modular Reactor 400 MW Exercise 2) and the INL code suite PEBBED-THERMIX. For this study, the effects of uncertainties in the cross-section data and the propagated cell-to-lattice-to-core uncertainties have not yet been included—the focus here is on the effect of model and material input uncertainties for a coupled transient case. The CRP on HTGR UAM [4] does however propose to use the established methodology from the OECD/NEA LWR UAM program, and a similar depressurized loss of forced cooling (DLOFC) transient is included as part of the pebble bed test set. At INL the modeling focus has shifted to the prismatic MHTGR-350 MW design, and propagated uncertainties for this design will also be assessed as part of the CRP on HTGR UAM. A dedicated OECD/NEA code-to-code benchmark on the MHTGR-350 design was also recently launched [8], and the same codes and models will be utilized in the core modeling part of the HTGR UAM study.

2. Uncertainty Methodology

Two major approaches to perform uncertainty propagation in a statistically rigorous manner can be identified [9]:
(1) methods based on the propagation of input uncertainties (also known as statistical or stochastic sampling methods), as represented by SUSA and the Sandia National Laboratories code DAKOTA [10];
(2) methods based on the propagation of output uncertainties (also known as deterministic methods), for example, the capability of internal assessment of uncertainty method [11].

(The direct perturbation method is also identified as a possible third, but expensive, approach in [6]).

The two approaches can be summarized as follows.

(i) Statistical methods (input uncertainty propagation):
(1) assign subjective probability ranges and distributions to (an unlimited number of) input parameters;
(2) propagate the combined uncertainty through the core models to determine the statistical distribution properties of the Figure of Merit (FOM); typically, lower and upper tolerance intervals can be obtained within some defined confidence level, using an ordered statistics approach;
(3) disadvantages include the subjective selection of input parameter distribution types and ranges, and the fact that at least 59 model calculations are required to obtain acceptable statistical information for a one-sided 95% tolerance limit with 95% confidence.

(ii) Deterministic methods (output uncertainty propagation):
(1) use a relevant set of experimental data to establish a database of uncertain data for a large number of input parameters;
(2) create hypercubes characterizing physical parameters and their dependencies for a wide variety of plant conditions, transients, and so forth;
(3) perform a single calculation utilizing all input parameters to determine the error bands enveloping the output FOM;
(4) disadvantages include the need for large operational/experimental datasets for the construction of the hypercubes, and the inability of this method to distinguish the major contributors to the overall uncertainty.

Because the deterministic method requires a large and comprehensive experimental database, it can typically be applied in LWR and BWR uncertainty studies (especially for thermal fluid uncertainty studies). In the HTGR domain, however, very limited experimental and operational data exists, and the use of statistical uncertainty methods is currently the only viable approach for coupled uncertainty propagation.

The first step in the GRS method is to select the set of uncertain input parameters that will be used to evaluate the desired FOM. It is important to note that the stochastic sampling methodology is independent of the number of input parameters selected, and that all of them are sampled simultaneously as a single set. Information from the manufacture of nuclear power plant components, as well as from experiments and previous calculations, is used to define the mean value and probability distribution or standard deviation of the uncertain parameters. Normal (and in some cases uniform) distributions are used in the absence of more knowledge about the input parameters. Once these distributions and dependencies have been established, the analyst can:
(i) generate a random sample of size N for the input parameters from their probability distributions by a Monte Carlo module contained in the SUSA package (N = 93 for two-sided 5% lower and 95% upper tolerance intervals, with a 95% confidence level; see (1) below);
(ii) perform the corresponding N simulations with the codes; each simulation generates one possible solution of the model, and all solutions together represent a sample from the unknown probability distribution of the model results;
(iii) calculate quantitative uncertainty statements, for example, 5% and 95% tolerance intervals within a specified confidence level, usually 95% (denoted here as 95%/95%);
(iv) calculate quantitative sensitivity measures to identify those uncertain parameters that contribute most to the uncertainty of the results.

The number of code calculations is determined by the requirement to estimate a tolerance interval for the quantity of interest with a specified confidence level. Wilks' formulae [12], shown here as (1) for the one-sided and two-sided cases, are used to determine the number of calculations required to obtain the desired tolerance intervals:

$$1 - \alpha^{n} \geq \beta \ \ \text{(one-sided)}, \qquad 1 - \alpha^{n} - n\,(1 - \alpha)\,\alpha^{n-1} \geq \beta \ \ \text{(two-sided)}, \tag{1}$$

where $\beta$ is the confidence level (%) that the maximum code result will not be exceeded with the probability ($\alpha \times 100$%) of the corresponding output distribution, and $n$ is the number of calculations required. For example, the one-sided "upper" 95% tolerance interval of the (unknown) peak fuel temperature distribution can be obtained with a confidence level of 95% by selecting $n = 59$, and the same number of model calculations would be needed for the "lower" 5% tolerance interval. If both tolerance intervals are required (i.e., a bounding statement is desired on the lower and upper "range" of the peak fuel temperature), 93 model runs would be required for the 95%/95% two-sided tolerance intervals. It is important to note that this method does not generate the "true" distributions of the output parameters (for this to be possible, several hundred or thousand model runs would be required), but rather selected statistical information about these unknown output distributions, based on comparisons with standard statistical distributions.
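To make the role of (1) concrete, the run counts quoted above can be verified with a few lines of code; this is an illustrative check only and not part of the SUSA workflow:

```python
def one_sided_confidence(n, alpha):
    """Confidence that the largest of n results exceeds the alpha-quantile."""
    return 1.0 - alpha**n

def two_sided_confidence(n, alpha):
    """Confidence that (min, max) of n results bracket the central alpha fraction."""
    return 1.0 - alpha**n - n * (1.0 - alpha) * alpha**(n - 1)

def minimum_runs(criterion, alpha=0.95, beta=0.95):
    """Smallest n satisfying the chosen Wilks criterion for coverage alpha, confidence beta."""
    n = 1
    while criterion(n, alpha) < beta:
        n += 1
    return n

print(minimum_runs(one_sided_confidence))   # 59 runs for a one-sided 95%/95% limit
print(minimum_runs(two_sided_confidence))   # 93 runs for two-sided 95%/95% limits
```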

2.1. Application to the PBMR 400 Benchmark Problem

Figure 1 presents a simplified flow diagram of the statistical sampling methodology followed for this study, utilizing the PEBBED-THERMIX codes [13] and the PBMR 400 Exercise 2 benchmark [14] test case. A combination of three reactor physics codes was utilized: COMBINE-7 for cross section preparation, PEBBED for pebble-bed reactor design and fuel cycle analysis, and THERMIX-KONVEK for thermal fluid analysis [13]. PEBBED is used for analysis of neutron flux and isotopic depletion and buildup in a pebble bed HTGR. The code can treat arbitrary pebble circulation schemes, and it permits more than one type of pebble to be specified. The large temperature and burnup variations across the core require that it be discretized into many smaller “spectral zones” over which cross sections are assumed to be constant. Because the burnup and temperature profiles of the equilibrium core are not known a priori, the core neutronic and temperature profiles must be solved iteratively along with the generation of cross sections. To that end, PEBBED calls THERMIX-KONVEK and COMBINE after each burnup sweep and continues this iteration until the neutron source, temperatures, pebble flow maps, and zone-wise infinite medium multiplication factors are converged within tolerances specified by the user.

The THERMIX-KONVEK code was developed in Germany during the German HTGR program for the thermal fluid analysis of pebble bed HTRs [13]. It is capable of solving conduction and convection heat transfer in two dimensions (R-Z), and includes a simplified treatment of the radiative heat transport between the core structures. The code can also predict the time-dependent conduction heat transport during a DLOFC event by assuming that all convection terminates instantaneously.

An existing PEBBED-THERMIX model of the CRP-5 PBMR 400 MW benchmark was used as the starting point of the uncertainty study.

2.2. Input Parameters, Distribution Types, and Variations

The typical FOM for a DLOFC event is the peak fuel temperature, that is, the maximum temperature reached in the fuel spheres over space and time. As a demonstration of the uncertainty methodology applied to a typical HTGR problem, eight input parameters were selected for this study, as shown in Table 1. Note that these eight parameters are only a subset of all possible parameters that could affect the DLOFC peak fuel temperature, since this study is not intended to provide a complete assessment of the effect of all uncertainties.

Since the selection of the input parameters and their distributions is one of the known weak points of the stochastic sampling methodology, the results of a separate TINTE study, performed for the PBMR 400 MW design [15], were used to determine which eight parameters to include for this PEBBED DLOFC case. That study indicated that the metal and graphite emissivities only influenced the metal component temperatures, and that uncertainties in the helium thermophysical properties did not result in any changes in the FOM. Out of 19 input parameters investigated in the TINTE study, six had no effect on the steady state and DLOFC peak fuel temperatures, eight had less than a 10°C (0.6%) effect, and only five factors resulted in changes larger than 10°C in the DLOFC FOM. These five input variables, together with three more that had a larger than 1% influence on the steady state temperature, were selected for inclusion in this study.

The variations on the power and reactor inlet gas temperature were applied directly to the absolute value of the input variable itself (e.g., 2% on 400 MW), in contrast to the decay heat, specific heat, and thermal conductivity, where the variations were applied as multiplicative factors on the complex correlations that are used to calculate these variables. For example, the specific heat capacity of the reflector graphite material is calculated as a third-order polynomial function of the temperature and the density, yielding an interim value $c_{p,\mathrm{int}}$. The sampled multiplicative factor $f_{\mathrm{mod}}$ is then applied to this interim value to determine the final specific heat value:

$$c_p = f_{\mathrm{mod}} \cdot c_{p,\mathrm{int}}.$$
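As an illustration of this multiplicative treatment, the sketch below applies a sampled factor to a placeholder third-order polynomial; the coefficients shown are hypothetical and are not the actual NBG-18 graphite correlation used in THERMIX:

```python
def cp_interim(temperature_c, coeffs=(200.0, 4.0, -2.0e-3, 4.0e-7)):
    """Interim specific heat from a third-order polynomial in temperature (J/kg.K).
    Placeholder coefficients only; the actual correlation (which also involves
    the graphite density) is not reproduced here."""
    a0, a1, a2, a3 = coeffs
    t = temperature_c
    return a0 + a1 * t + a2 * t**2 + a3 * t**3

def cp_final(temperature_c, f_mod):
    """Apply the sampled multiplicative factor to the interim correlation value."""
    return f_mod * cp_interim(temperature_c)

# One sampled realization: a +3% perturbation of the specific heat correlation
print(cp_final(900.0, f_mod=1.03))
```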

It should be noted that the decay heat depends almost linearly on the long-term steady state reactor power, and as such it is not an independent input variable in this uncertainty quantification. While the SUSA code allows input parameter dependencies (correlations) to be specified as part of the input preparation, this option was not selected for this study. It was instead decided to see whether this input parameter correlation could be observed in the output data sensitivity analysis.

The mean and two-standard-deviation (±2σ) values shown in Table 1 were obtained from material manufacturers (specific heat and conductivity data for NBG-18 graphite from the SGL company), expert engineering judgment (power and inlet temperature), and established industry standards (the German DIN standard for HTGR decay heat [16]).

A second potential weak point of the statistical method is the justification for the selection of the probability density function (PDF) types. Typical thermophysical properties, such as specific heat and thermal conductivity, can be obtained from the manufacturers, and are usually specified as normal PDFs with mean ± standard deviation values. More complex variable PDFs (e.g., variations in the core bypass flows) can be biased/skewed to one side; for example, gap widths grow or shrink over time as the graphite reflectors swell and shrink with fast fluence exposure. In cases where no definitive uncertainty information exists, a uniform or a normal/Gaussian PDF can be used, with or without truncated tails. For this study, both normal and uniform PDFs were selected to assess whether this factor plays a significant role in the DLOFC peak fuel temperature uncertainty. Figure 2 presents the PDF data for the specific heat and thermal conductivity input variations, with normal distributions selected and the mean and standard deviation values specified as listed in Table 1. Note that the infinite tails of the normal distributions were all truncated at their ±2σ values (the 95.5% percentiles) to enable direct comparison with the uniform distributions' minima and maxima.
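The following snippet illustrates, with generic tools (scipy) rather than the SUSA sampler, how a correlation multiplier could be drawn from a normal PDF truncated at its ±2σ values and from a uniform PDF spanning the same range; the mean and standard deviation used here are illustrative and are not taken from Table 1:

```python
import numpy as np
from scipy import stats

mean, sigma = 1.00, 0.03                    # illustrative multiplier: mean 1.0, sigma 3%
low, high = mean - 2 * sigma, mean + 2 * sigma

# Normal PDF truncated at +/-2 sigma (the 95.5% percentiles)
truncated_normal = stats.truncnorm((low - mean) / sigma, (high - mean) / sigma,
                                   loc=mean, scale=sigma)
# Uniform PDF over the same minimum/maximum
uniform = stats.uniform(loc=low, scale=high - low)

rng = np.random.default_rng(seed=1)
normal_sample = truncated_normal.rvs(size=100, random_state=rng)
uniform_sample = uniform.rvs(size=100, random_state=rng)
print(normal_sample.mean(), uniform_sample.mean())
```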

A total of six case sets (consisting of four sets of 100 and two sets of 200 model runs each) were performed for this study, as described below and summarized in Table 2.
(i) Number of model runs: the number of model runs was doubled from 100 to 200 to investigate if a larger population sample produced significantly different statistical indicators.
(ii) Sampling methodology: SUSA is capable of using either the simple random sampling (SRS) [17] or the Latin hypercube sampling (LHS) method [17] for generating the values of the input variables from their specified distributions (a generic sketch of the two schemes is given after this list). A comparison set of 200 model runs was performed to compare the FOM uncertainty estimates generated by these two sampling methods.
(iii) Distribution type: two of the 100 model run sets were designed to quantify possible differences that could result when the input parameter PDFs are changed from uniform to normal distribution types.
(iv) Number of parameters sampled: a final point of interest was the uncertainty contribution of a few dominant input parameters compared to the combination of all eight input parameters. To this end, two sets of 100 model runs each were performed, varying only the material correlations and only the power, reactor inlet temperature (RIT), and decay heat correlation, respectively.
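For reference, the basic difference between the two sampling schemes mentioned in item (ii) can be sketched for a single variable as follows; this is a generic illustration and not the SUSA implementation:

```python
import numpy as np

rng = np.random.default_rng(seed=42)
n = 100  # number of model runs

# Simple random sampling: n independent draws of a probability in (0, 1)
srs = rng.uniform(size=n)

# Latin hypercube sampling: one draw from each of n equal-probability strata, then shuffled
strata = (np.arange(n) + rng.uniform(size=n)) / n
lhs = rng.permutation(strata)

# Each probability is then mapped through the inverse CDF of the input parameter,
# e.g. a uniform multiplier between 0.94 and 1.06 (illustrative range):
srs_values = 0.94 + 0.12 * srs
lhs_values = 0.94 + 0.12 * lhs
print(srs_values.mean(), lhs_values.mean())
```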

In the discussions that follow, the notation will be of the format "number, sampling method, distribution type"; for example, the first entry in Table 2 will be referred to as the 100 LHS Uniform set.

The SUSA generated input data can be verified for conformance to the user’s specifications using scatter plots, as shown in Figure 3 for the total power input parameter (for the 200 LHS Normal set).

3. Discussion of PEBBED-THERMIX Results

For the next step of the uncertainty quantification procedure, the SUSA-generated data for the eight input parameters were used to create PEBBED and THERMIX model input files for the steady state and DLOFC calculations. The six sets listed in Table 2 required a total of 800 PEBBED-THERMIX calculations at an average run time of 35 minutes each on a single processor. Since it is possible to assign each model run to a dedicated processor on a multiprocessor cluster, the performance of large uncertainty studies is essentially determined by the number of processors available, and can in many cases be comparable to a single model run time if the cluster supports several hundred processors. The new FISSION cluster at INL has 12512 processors available.

The time behavior of the maximum fuel temperature during the DLOFC transient is shown in Figures 4, 5, and 6 for the first 30 cases of the 100 SRS Uniform, 100 LHS Normal, and 200 LHS Normal datasets, respectively. Note that the maximum fuel temperature is a spatial function, since the PEBBED-THERMIX model calculates temperatures in 110 core zones/meshes. The temporal maximum of this maximum fuel temperature during the DLOFC is defined as the peak fuel temperature, which is the FOM for this study. A few observations on the general trends can be made from this data.

(i) The PBMR core design leads to the typical HTGR loss of cooling behavior, that is, a slow increase in the maximum fuel temperature over several hours, with the peak fuel temperature reached 40-60 hours into the transient.

(ii) The shapes of the curves in the first 30 cases are similar but the gradients are not. Although the same physical phenomena are present in all the DLOFC events, the rate of energy deposition (correlated to the decay heat) and energy removal (correlated to the fuel and reflector specific heat and thermal conductivities) differ for each of these cases, according to the sampled input values.

(iii) Changes in the eight input parameters can have opposing effects on the maximum fuel temperature: an increase in the decay heat will increase the fuel temperature, but an increase in the fuel graphite conductivity will remove heat faster from the core, and therefore lead to a lower fuel temperature. Since each DLOFC case consists of a randomly sampled set of the eight input parameters, the low fuel temperature curves can be the result of a few parameters sampled low (or high) simultaneously, and an average fuel temperature curve could be caused by a cancellation of effects. These factors also cause the shift in the time at which the peak fuel temperature values are reached.

(iv) The spread in maximum fuel temperatures between the first 30 cases is not constant with time. For example, it starts off at less than 5°C in the first hour and increases to 98°C for the 200 LHS Normal set, as shown in Figure 6. This divergence over time is a direct result of the sampled input parameter values, as explained above. For a time dependent event such as this DLOFC, a single and constant fuel temperature uncertainty result can therefore not be expected; it will be a function of time as well. It can be seen in Figures 4, 5, and 6 that the temperature spread between the cases continues to increase after the peak values have been reached, and a full uncertainty study should take this effect into account if the maximum uncertainty bandwidth is to be determined.

(v) The 95%/95% two-sided tolerance intervals are not compared at a fixed time point, but rather at the varying time point where a specific case reaches its peak DLOFC fuel temperature. This study therefore compares the bounding value fuel temperature uncertainty for the DLOFC event, regardless of when this point is reached, since the DLOFC peak fuel temperature is of major interest in HTGR reactor design safety studies.

(vi) The temperature variation bandwidths for the three cases shown here seem to be quite different. The two sets that consisted of 100 runs each produced significantly larger variations than the 200 LHS Normal set (e.g., 141°C versus 94°C), and there is also a smaller difference between the 100 LHS Normal and 100 SRS Uniform sets (131°C versus 141°C). It is however important to note that the figures only show the first 30 model runs of each set for clarity's sake. For the 200 LHS Normal set, the remaining 170 model runs sample a larger portion of the "true" unknown distribution, with a resultant larger bandwidth, as shown in Figure 7. (It is shown in Table 4 in the next section that the uncertainty estimates for the six full sets are very similar.)

(vii) The results for the primary FOM of this study (the DLOFC peak fuel temperature) are presented in Figure 8 as a scatter plot for the 200 LHS Normal set. The mean and ±2σ values, as determined by SUSA (see the next section), are also indicated. The same data are compared in Figure 9 in a more useful format, where the normalized DLOFC peak fuel temperature histograms for the 200 LHS and SRS Normal sets are shown. The histograms both approximate normal distributions, so that (visually) no significant differences can be observed at this stage between the SRS and LHS sampling methods.

(viii) A more pronounced visual difference can be observed in Figure 10, where the normalized histograms are shown for the 100 LHS Normal and Uniform sets. It is tempting to conclude from this figure that the uniform input sampling created a "flatter" output peak temperature distribution (but still with significant tails), and it might indeed be the case for this small population. It should however be kept in mind that both sets are subsets of the "true" empirical output peak temperature distribution, which would only be obtained after several hundred or even more than 1000 model runs, and that careful conclusions are required when the sample populations are this small. Quantitative statistical data, as determined by SUSA and discussed in the next section, are definitely required before comparison conclusions can be made.

4. SUSA Uncertainty Analysis

For the uncertainty quantification step, SUSA can perform several statistical correlation fitness tests on the output data to determine the properties of the unknown FOM distribution. For example, the Kolmogorov-Smirnov (K-S) test [18] quantifies the distance between the empirical distribution function of the sample and the cumulative distribution function of a reference distribution, and can be used to compare a population sample for conformance with a specific distribution (normal, log-normal, Weibull, uniform, Beta or Gamma). SUSA can also perform the Lilliefors test [19], which is a modification of the K-S test that tests for normal, exponential or log-normal distribution conformance. Once a statistically significant distribution match has been found, the mean, standard deviation, and other indicators of the population can be determined from the empirical distribution function of the sample population.

An illustration of the K-S test is shown in Figures 11 and 12, where the peak DLOFC fuel temperature results of the 100 LHS Normal set are compared with SUSA fits of the normal (Figure 11) and uniform (Figure 12) reference distributions. The quality of the fit can also be visually assessed, but the K-S values (0.9919 versus 0.0026) confirm that a normal distribution is a much better match to the peak DLOFC fuel temperature data set.
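The essence of such a distribution check can be illustrated with generic tools; the snippet below applies scipy's K-S test to a synthetic stand-in for the peak fuel temperature sample (SUSA's own test statistics and reference fits are not reproduced here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=7)
# Synthetic stand-in for 100 sampled DLOFC peak fuel temperatures (degC), illustration only
peak_temps = rng.normal(loc=1604.0, scale=40.0, size=100)

# K-S test against a normal and a uniform reference distribution
mu, sigma = peak_temps.mean(), peak_temps.std(ddof=1)
ks_normal = stats.kstest(peak_temps, "norm", args=(mu, sigma))
ks_uniform = stats.kstest(peak_temps, "uniform",
                          args=(peak_temps.min(), peak_temps.max() - peak_temps.min()))

# The better-matching reference distribution yields the larger p-value.
# Strictly, estimating mu and sigma from the same data calls for the Lilliefors
# variant mentioned above.
print(ks_normal.pvalue, ks_uniform.pvalue)
```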

A second example of the Lilliefors and K-S test results for the 100 LHS Uniform set is presented in Table 3. Both tests confirm that for this uniform-distributed input sampled set, a log-normal distribution fit provides the best match for the peak fuel temperature distribution.

A summary of the mean and 95%/95% two-sided tolerance intervals at the times when the peak fuel temperatures are reached is shown in Table 4 for the six SUSA-sampled sets. The number of successful model calculations is also indicated. For three of the sets, one calculation each did not complete successfully, but the remaining 99 and 199 model runs were still an adequate sample size for the statistical analysis.
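For readers unfamiliar with the ordered statistics approach, the sketch below shows how first-order two-sided 95%/95% tolerance limits can be read directly from the sample extremes once the Wilks criterion on the number of runs is satisfied; it mirrors the classical approach rather than SUSA's internal processing:

```python
import numpy as np

def two_sided_95_95_limits(results):
    """Return (lower, upper) 95%/95% tolerance limits from the sample extremes,
    valid once the run count satisfies the first-order two-sided Wilks criterion."""
    results = np.asarray(results)
    if results.size < 93:
        raise ValueError("At least 93 successful runs are needed for two-sided 95%/95% limits.")
    return results.min(), results.max()

# Example with synthetic peak fuel temperatures (degC), for illustration only
rng = np.random.default_rng(seed=3)
peak_temps = rng.normal(loc=1604.0, scale=40.0, size=99)
print(two_sided_95_95_limits(peak_temps))
```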

The following observations can be made from this data.

Mean Values
The mean values for all six datasets are almost identical (e.g., 2°C variation on 1604°C), that is, regardless of the sampling method, parameters included, or distribution types, these six independent random sets predict the same mean DLOFC peak fuel temperature (1604°C).

Dominant Input Parameters
Even before an analytical sensitivity study is performed to determine which of the factors are responsible for most of the variations in the output data, the first two datasets shown here already show that the power, inlet gas temperature, and decay heat variations contribute significantly to the variation seen in the DLOFC peak fuel temperature. On their own, these three small input variations produced lower and upper tolerance intervals ranging between 1545°C and 1664°C (a spread of 119°C), while the much larger uncertainty variations in the five material correlations lead to values of 1555°C and 1652°C (a smaller spread of 97°C). Both these sets can be compared with the 100 LHS Uniform set where all eight input variables were included and 95%/95% two-sided tolerance intervals of 1513°C and 1686°C were obtained.

Distribution Type (100 LHS Uniform versus 99 LHS Normal)
The use of a uniform distribution will result in the sampling of high and low input values more frequently than a normal distribution, since the probability of sampling a high, mean, or low value is identical for a uniform distribution, whereas there is a lower probability of sampling from the low and high tails of a normal distribution. This effect could partly explain the lower and higher tolerance intervals on the peak fuel temperature (1513°C versus 1536°C, and 1686°C versus 1675°C) for the 100 LHS Uniform set, compared to the 99 LHS Normal set values. The difference between the two lower and upper tolerance interval predictions is however minimal: only 23°C and 11°C on a mean value of 1604°C, respectively. The use of normal distributions for input parameter variations, as is most commonly applied when no other information is available, could therefore lead to slightly lower estimates of the tolerance limits, compared to the use of uniform distributions. This observation might however only be valid for this specific HTGR design, code, and model combination and these sampled datasets.

Sampling Method (200 LHS Normal versus 199 SRS Normal)
An interesting current issue in the uncertainty quantification community revolves around the application of stratified sampling techniques (like Latin hypercube sampling) to improve the coverage of the input sample set [20–22]. It can be seen in Table 4 that the tolerance interval for the SRS set is wider than that for the LHS set (lower limits of 1531°C versus 1537°C, and upper limits of 1680°C versus 1658°C). It would therefore seem that the use of simple random sampling leads to a slightly more conservative estimate of specifically the upper 95%/95% tolerance limit, which is an important safety case parameter in HTGR reactor design. This observation was also noted in a recent comparison study of SRS and LHS methodologies [23].
It should again be noted however that these differences are small compared to the mean peak DLOFC fuel temperature (only 1.4% on 1604°C), so that the conclusions reached in a Sandia Laboratory study [20] can also be affirmed here: no significant differences are observed between the results of SRS or LHS sampled datasets. These studies recommend the use of LHS because of its enforced stratification over the sample range. On the other side of the debate, an argument has also been made that LHS does not conform to the requirements of the classical ordered statistics approach, and that tolerance intervals may not be obtained from LHS sampled sets [24, page 73]. In the "lessons learned" section of the latest BEMUSE report [24, page 77], the SRS method is specifically recommended for use when tolerance intervals are required. A recent effort to specifically address the use of LHS to derive asymptotically valid tolerance intervals was however reported in [22], so it seems that the use of both techniques still warrants further investigation.

Number of Model Calculations (99 LHS Normal versus 200 LHS Normal)
The number of model calculations is the crucial time-consuming factor in the statistical uncertainty method. This study did not observe significant differences between the tolerance intervals obtained with sets consisting of 100 or 200 model runs, that is, covering the Wilks' formula range from the second order (93 runs) to the fifth order (181 runs). The second-order application of Wilks' formula (93 runs) therefore seems to be sufficient for this core design, model, and transient. This conclusion is supported by the ATHLET PWR study [5], where 100 model runs were performed for 56 uncertain input parameters, and most of the participants in the BEMUSE benchmark study [3] performed between 93 and 150 model runs for 13 to 49 input parameters. Four of the BEMUSE benchmark participants found that the 95th percentile can typically be obtained directly from a converged PDF after 400 to 500 model runs, which is feasible if parallel resources are available or if the model run times are not significant. The final recommendation in the recent Phase VI BEMUSE report [24] was that Wilks' formula should be applied at the third or fourth order (between 124 and 153 model runs) if the upper tolerance interval approaches the regulatory limit on the FOM.
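One common way to obtain such higher-order run counts is the order-statistics form of Wilks' criterion for a one-sided limit, in which the m-th largest result is used as the bound; the generic check below reproduces the quoted numbers under that convention and is not tied to any particular code:

```python
from scipy.stats import binom

def min_runs_one_sided(order, coverage=0.95, confidence=0.95):
    """Smallest n for which the order-th largest result is a one-sided
    coverage/confidence tolerance limit (order-statistics form of Wilks' criterion)."""
    n = order
    # Confidence that the order-th largest result bounds the coverage-quantile:
    # P[ Binomial(n, coverage) <= n - order ] >= confidence
    while binom.cdf(n - order, n, coverage) < confidence:
        n += 1
    return n

print([min_runs_one_sided(m) for m in range(1, 6)])  # [59, 93, 124, 153, 181]
```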
A single example of the time dependent nature of the data shown in Table 4 is provided in Figure 13, which shows the data for the 200 LHS Normal set. As indicated earlier, the minima and maxima (which are the lower and upper bounds of the maximum fuel temperatures in Figure 7) vary with time, and the resultant distribution properties show similar variations. The uncertainty bandwidth increases with time beyond the time point where the peak fuel temperature is reached; that is, in this example the highest fuel temperatures and the largest uncertainty variations do not occur at the same time point.
It has been shown in this section that the input uncertainties in only eight parameters already lead to 95%/95% two-sided tolerance intervals of 1531°C and 1680°C on a mean value of 1604°C (the data for the 199 SRS Normal set have been used here). These values represent an uncertainty band/spread of approximately 4.6% around the mean value of 1604°C for the peak fuel temperature during a DLOFC transient in the PBMR design. A more complete study, taking into account all known input uncertainties, could possibly lead to a larger uncertainty bandwidth. These uncertainties need to be taken into account during the HTGR reactor safety margin design process.

5. SUSA Sensitivity Analysis

This section presents selected results from the SUSA sensitivity analysis. An overview of the definitions, uses, and advantages of typical sensitivity parameters (regression coefficients, correlation measurements, partial and empirical coefficients, etc.) can be found in [17]. As indicated previously, SUSA can calculate several quantitative measures of correlations that might exist between the uncertainties in input parameters and the subsequent variations in the output data. Since the model calculations are usually expensive in terms of computational requirements, it is accepted practice to use the same datasets for the sensitivity and uncertainty analyses.

The Kendall rank correlation coefficients and the empirical correlation ratios shown in Figure 14 for the 100 LHS Uniform and 200 LHS Normal sets can be generated for any of the six datasets. A description of the parameters is included in Table 5.

All data shown here are for their effects on the FOM, the DLOFC peak fuel temperature. The magnitude of the coefficients provides insight into the degree to which a specific input parameter influences the output parameter values: zero values imply almost no link between the input and output uncertainty variations, values up to ~0.4 are considered weak correlations, and values above ~0.7 indicate strong correlations. The sign of the Kendall coefficient indicates a positive or negative relationship (e.g., an increase in decay heat will lead to an increase in fuel temperature). This information can be used to confirm the effect of known physical phenomena or to highlight the primary drivers behind the output uncertainty variations.
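As a generic illustration of this type of rank correlation measure (using scipy's Kendall tau on synthetic data, not the SUSA output itself):

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(seed=11)
n_runs = 200

# Synthetic stand-ins: a sampled decay-heat multiplier and the resulting peak fuel temperatures
decay_heat_factor = rng.normal(loc=1.0, scale=0.03, size=n_runs)
peak_fuel_temp = 1604.0 + 900.0 * (decay_heat_factor - 1.0) + rng.normal(scale=20.0, size=n_runs)

tau, p_value = kendalltau(decay_heat_factor, peak_fuel_temp)
print(f"Kendall tau = {tau:.2f} (p = {p_value:.1e})")
# A positive tau reflects the expected physical trend: higher decay heat, higher peak temperature.
```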

Apart from a change in order between input parameters 7 and 8, the Kendall rank correlation coefficient results shown in Figures 14(a) and 14(b) are very similar. According to the SUSA analysis, the three primary drivers of uncertainty in the fuel temperature are the decay heat (index no. 3) and the reflector and pebble bed conductivities (no. 7 and no. 8). This finding is in agreement with the simple one-by-one parametric sensitivity study results obtained previously for the PBMR 400 MW design with a different model and code [15], where these three factors were also identified as being responsible for the largest changes in the output fuel temperature. The dominant role played by the decay heat and graphite thermal conductivity can also be expected from the basic physical phenomena that determine the fuel temperature during a loss of cooling transient: the decay heat is the only remaining active heat source, and the thermal conductivity of the several hundred tons of graphite material is the dominant resistance on the heat flow path towards the final heat sink. It is also of interest to note that the FOM sensitivity to the decay heat multiplier is approximately three times higher than the sensitivity to the steady state reactor power, that is, a ratio similar to that of their uncertainty variation magnitudes (2% versus 5.9%). (As indicated in Section 2.2, a roughly linear dependency exists between the decay heat and the long term steady state reactor power.) The correlated nature of the decay heat and total power can be observed in the high partial Kendall correlation coefficient for the decay heat parameter, presented in Figure 14(c).

The empirical correlation ratios presented in Figures 14(d) and 14(e) do not show a directional influence (all values are positive), but the three primary drivers can still be readily identified. This indicator is known to be more sensitive to the number of model calculations performed; when the data for the 100 LHS Uniform set shown in Figure 14(d) and the 200 LHS Normal set shown in Figure 14(e) are compared, it can be seen that the amplitudes of the three primary parameters (3, 7, and 8) remained similar, but the values of all of the lesser contributors decreased for the 200 model calculation set. The need for a higher number of model runs to distinguish low-level contributions in sensitivity studies is also identified in the literature [3, 17], and should be kept in mind when a small number of model runs is used for sensitivity conclusions. The Wilks criterion on the validity of using a limited number of model runs applies only to uncertainty studies, and cannot be extended to sensitivity studies.

As a final example, the time dependent empirical correlation ratios for the 200 LHS Normal data set shown in Figure 15 illustrate the principle that the rank of the input parameters is not constant over time (e.g., compare the correlation ratios at 5 and 100 hours). The trends shown here highlight an often neglected feature of sensitivity studies: in time-dependent problems, various parameters can rise or fall in importance during the evolution of the transient, depending on the physical phenomena involved.

6. Conclusions

This report summarizes the results of an uncertainty and sensitivity quantification study performed with the GRS code SUSA, utilizing a typical high temperature reactor benchmark (the IAEA CRP-5 PBMR 400 MW Exercise 2) and the INL suite of codes PEBBED-THERMIX. The following steps were performed as part of the uncertainty and sensitivity analysis.
(1) Eight PEBBED-THERMIX model input parameters were selected for inclusion in the uncertainty study: the total reactor power, inlet gas temperature, decay heat, and the specific heat capacity and thermal conductivity of the fuel, pebble bed, and reflector graphite.
(2) The input parameter variations and probability density functions were specified, and a total of 800 PEBBED-THERMIX model calculations were performed, divided into four sets of 100 and two sets of 200 steady state and DLOFC transient calculations each.
(3) The DLOFC peak fuel temperature was supplied to SUSA as the model output parameter of interest. Using both the Simple Random and Latin Hypercube Sampling techniques, the Wilks formulation was applied to the six datasets, and the 5% and 95% tolerance limits were determined with 95% confidence levels.
(4) A SUSA sensitivity study was performed to obtain correlation data between the input and output parameters, and to identify the primary contributors to the output data uncertainties.

It was found that the uncertainties in the decay heat, pebble bed, and reflector thermal conductivities were responsible for significant contributions to the propagated uncertainty in the DLOFC peak fuel temperature. No significant differences were observed between the results of SRS or LHS sampled datasets, and the same conclusion was made from a comparison between the results of sets that used uniform input parameter distributions as opposed to normal distributions. The 95%/95% two-sided tolerance intervals values of 1531°C and 1680°C represent an uncertainty band/spread of approximately 4.6% around the mean value of 1604°C for the peak fuel temperature during a DLOFC transient in the PBMR 400 MW design.

Possible future investigations in the HTGR uncertainty assessment program at INL include the following.
(i) Clarify the approach on complex nonstatistical uncertainties: bypass flows through the reflectors, radial and axial power peaking factors, control rod worths, and so forth.
(ii) The propagation of the uncertainties in the cross section data from the basic nuclear ENDF libraries to the coupled transient solutions represents a significant challenge. In this regard, it is currently planned to utilize the Generalized Perturbation Theory (GPT) and stochastic sampling (XSUSA) capabilities of the upcoming SCALE 6.2 release [25] to propagate the cross-section uncertainties through to a typical steam generator tube rupture scenario, as part of the new IAEA CRP on HTGR UAM [4].

Acknowledgment

This work is supported by the U.S. Department of Energy, Assistant Secretary for the Office of Nuclear Energy, under DOE Idaho Operations Office Contract DE-AC07-05ID14517.