Abstract

The evaluation of uncertainty constitutes the necessary supplement to best-estimate calculations performed to understand accident scenarios in water-cooled nuclear reactors. The need arises from the imperfection of the computational tools, on the one hand, and from the interest in using such tools to obtain a more precise evaluation of safety margins, on the other. The paper reviews the salient features of three independent approaches for estimating the uncertainties associated with predictions of complex system codes. Namely, the propagation of code input errors and the propagation of calculation output errors constitute the keywords identifying the methods of current interest for industrial applications, while the adjoint sensitivity-analysis procedure and the global adjoint sensitivity-analysis procedure, extended to uncertainty evaluation in conjunction with concepts from data adjustment and assimilation, constitute the innovative approach. With the developed methods, uncertainty bands (both upper and lower) can be derived for any desired quantity throughout the transient of interest. In one case, the uncertainty method is coupled with the thermal-hydraulic code to obtain a code with the capability of internal assessment of uncertainty, whose features are discussed in more detail.

1. Introduction

In September 1988, following the improved understanding of ECCS (emergency core cooling system) performance during various reactor transients, the NRC reviewed and amended the requirements previously fixed in 10 CFR 50 (§ 50.46 and App. K). Because the conservatism of the methods specified in App. K was responsible for reactor-operating restrictions, and in response to industry requests for improved evaluation models, the existing approach was reoriented toward basing licensing decisions on realistic, so-called best-estimate (BE), calculations of plant behavior. The amendments essentially concern the permission to use BE models for ECC performance calculations as an alternative to codes (evaluation models) that use the App. K conservative requirements. The rule changes also include the quantification of the uncertainties of best-estimate calculations.

Notwithstanding the efforts made to qualify the codes and the feedback upon their development, the results of system thermal-hydraulic predictions are still affected by noticeable errors, originating from the unavoidable approximations in the constitutive equations, from the limited capabilities of the numerical solution methods, from uncertainties in the knowledge of boundary and initial conditions, and from errors in setting up the nodalization. As a consequence, the result of a best-estimate prediction by a system thermal-hydraulic code, not supplemented by a proper uncertainty evaluation, is meaningless.

The uncertainty analysis is, according to [1] and related to system thermal-hydraulic code predictions, “an analysis to estimate the uncertainties and error bounds of the quantities involved in, and the results from, the solution of a problem. Estimation of individual modeling or overall code uncertainties, representation (i.e., nodalization related) uncertainties, numerical inadequacies, user effects, computer compiler effects, and plant data uncertainties for the analysis of an individual event.” Furthermore, to conclude with a citation from [2], “uncertainty is not an accident of the scientific method but its substance.” Within the present context, the uncertainty is the necessary supplement for a best-estimate thermal-hydraulic code prediction [3].

The first framework for calculating the uncertainty was proposed by the US NRC and denominated code scaling, applicability, and uncertainty (CSAU, see [4]). The application of the CSAU methodology resulted in the calculation of the peak cladding temperature (PCT) during an LBLOCA design-basis-accident event for a Westinghouse four-loop pressurized water reactor, with uncertainty quantified at the 95% confidence level. The peak temperature was calculated using the TRAC thermal-hydraulic analysis code and was given as a single-valued number with uncertainty bands. In the meantime, a number of uncertainty methodologies were proposed in other countries. These methods, although sharing a common goal with CSAU, use different techniques and procedures to obtain the uncertainties on key calculated quantities. More importantly, these methods have progressed far beyond the capabilities of the early CSAU analysis. Presently, uncertainty bands can be derived (both upper and lower) for any desired quantity throughout the transient of interest, not only for point values like the peak cladding temperature.

This paper reviews the salient features of three independent approaches for estimating uncertainties associated with predictions of complex system codes. The origins of the problem and topics relevant for uncertainty evaluation (called hereafter TRUE) are discussed with the aim to analyze how they have been identified and characterized in each method. Finally, a methodology based on internal assessment of uncertainty is presented as an answer to the identified limitations.

2. Sources of Uncertainty in Thermal-Hydraulic System Codes

A fundamental step in the application of the best-estimate (BE) method to the safety analysis of nuclear power plants (NPP) is the identification and characterization of uncertainties. This is connected with the approximate nature of the codes and of the process of code application. In other words, “sources of uncertainty” affect predictions by BE codes and must be taken into account. The major sources of uncertainty in the area of safety analysis are represented by the:

(i) code or model uncertainty (associated with the code models and correlations, solution scheme, model options, data libraries, deficiencies of the code, and simplifying assumptions and approximations);
(ii) representation or simulation uncertainties (accuracy of the complex facility geometry, 3D effects, control, and system simplifications), including the scaling issue;
(iii) plant data uncertainties (unavailability of some plant parameters, instrument errors, and uncertainty in instrument response).

In addition, the so-called “user effect” is implicitly present and characterizes each of the broad classes of uncertainty above mentioned.

A more detailed list of uncertainty sources, some of them supported by documented evidence (see Figures 1–6), is given hereafter, where an attempt has been made to distinguish “independent” sources of “basic” uncertainty. Complex interactions among the basic uncertainty sources are expected and justify (in advance) the complex structure of an uncertainty method. Comprehensive research programs have been completed [5] or are in progress [6, 7], aimed at thermal-hydraulic system code assessment and improvement to reduce the influence of the basic uncertainties upon the results.

(A) Balance (or conservation) equations are approximate; that is, (i) not all the interactions between steam and liquid are included, and (ii) the equations are solved within cylindrical pipes, without consideration of geometric discontinuities (a situation that is not common in code applications to the analysis of NPP transient scenarios).

(B) Different fields of the same phase may be present, for example, liquid droplets and liquid film. However, the codes consider only one velocity per phase, and this results in another source of uncertainty.

(C) Geometry averaging at the cross-section scale. The need “to average” the fluid conditions at the geometry level makes the “porous media approach” necessary. In reality, velocity profiles occur; resolving them would correspond to the “open media approach.” The lack of consideration of the velocity profile, that is, the cross-section averaging, constitutes an uncertainty source of “geometric origin.”

(D) Geometry averaging at the volume scale. Only one velocity vector (per phase) is associated with a hydraulic mesh along its axis. Different velocity vectors may occur in reality (e.g., inside the lower plenum of a typical reactor pressure vessel, at the connection between a cold leg and the downcomer, etc.). The volume averaging constitutes a further uncertainty source of “geometric origin.”

(E) Presence of large and small vortices or eddies. The energy and momentum dissipation associated with vortices is not directly accounted for in the equations at the basis of the codes, thus introducing a specific uncertainty source. In addition, a large vortex may determine the overall system behavior (e.g., two-phase natural circulation between hot and cold fuel bundles), in a way not necessarily consistent with the prediction of a code-discretized model.

(F) The second principle of thermodynamics is not necessarily fulfilled by the codes. Irreversible processes occur as a consequence of accidents in nuclear reactor systems. This causes “energy degradation,” that is, the transformation of kinetic energy into heat. Predicting the amount of energy transformed is not necessarily within the capabilities of current codes, thus constituting a further specific uncertainty source.

(G) The models of current interest for thermal-hydraulic system codes are constituted by sets of partial differential equations. The numerical solution is approximate; therefore, approximate equations are solved by approximate numerical methods. The “amount” of approximation is not documented and constitutes a specific source of uncertainty.

(H) Extensive and unavoidable use is made of empirical correlations. These are needed “to close” the balance equations and are also reported as “constitutive equations” or “closure relationships.” Typical situations are as follows.

(i) The ranges of validity are not fully specified. For instance, pressure and flow-rate ranges are assigned, but void-fraction or velocity (or slip-ratio) ranges may not be specified.

(ii) Relationships are used outside their range of validation. Once implemented into the code, the correlations are applied to situations where, for instance, the geometric dimensions are different from the dimensions of the test facilities at the basis of the derivation of the correlation. One example is given by the wall-to-fluid friction in the piping connected with the reactor pressure vessel: no facility has been used to derive (or to qualify) friction factors in two-phase conditions when pipe diameters are of the order of one meter. In addition, once the correlations are implemented into the code, no (automatic) action is taken to check whether the assigned boundaries of validity are exceeded during a specific application.

(iii) Correlations are implemented approximately into the code. The correlations, apart from special cases, are derived by scientists or in laboratories that are not necessarily aware of the characteristics or of the structure of the system code where the correlations are implemented. Furthermore, unacceptable numeric discontinuities may be part of the original correlation structure. Thus correlations are “manipulated” (e.g., extrapolated in some cases) by code developers, with consequences not always ascertained. Figure 1 shows how a valid and qualified correlation (the Shah correlation, at two different velocities, for the condensation heat transfer) has been (necessarily) implemented into a system code.

(iv) The reference database is affected by scatter and errors. Correlations are derived from ensembles of experimental data that unavoidably show “scatter” and are affected by errors or uncertainties. The experimentalist must interpret those data and achieve an “average-satisfactory” formulation.

(I) A paradox should be noted: a “steady-state” and “fully developed” flow condition is a necessary prerequisite adopted when deriving correlations. In other terms, all qualified correlations must be derived under steady-state and fully developed flow conditions. However, during the course of an accident, those conditions apply in almost no region of the NPP.

(J) The state and the material properties are approximate. Various materials used in an NPP are considered in the input deck, including liquids, gases, and solids. Thermophysical properties are part of the codes or constitute specific code-user input data. These are of empirical nature and typically subject to the limitations discussed under item (H). A specific problem within the current context can be associated with the derivatives of the water properties.

(K) The code-user effect exists [8, 9]. Different groups of users having available the same code and the same information for modeling an NPP do not achieve the same results. The user effect originates from the following:

(i) nodalization development; see also item (N) below;
(ii) interpreting the supplied (or the available) information, usually incomplete; see also item (M) below and Figure 2, where the same (imperfect) information from the experimentalists (pressure drops across the steam generator equal to −2.7 ± 5 kPa in the natural circulation test A2-77 performed in the LOBI facility) is correctly interpreted by the code users in different ways, thus generating (without surprise) different steady-state results (see Figure 3);
(iii) accepting the steady-state performance of the nodalization: code users must accept the steady-state results before performing the transient analysis; the “acceptance” of the steady-state results (Figure 3) “reflects” the choices made and affects (without surprise) the transient results;
(iv) interpreting transient results, planning and performing sensitivity studies, modifying the nodalization, and finally achieving “a reference” or “an acceptable” solution.

The user effect might result in the largest contribution to the uncertainty and is connected with the user expertise and with the quality and comprehensiveness of the code-user manual and of the database available for performing the analysis.

(L) The computer/compiler effect exists. A computer code is developed making use of the hardware selected by the code developers and available at the time when the code development starts. A code-development process may last a dozen years, during which profound hardware changes occur. Furthermore, the code is used on different computational platforms, and the current experience is that the same code with the same input deck run on two computational platforms produces different results. Differences are typically small in “smoothly running transients” but may become noticeable in the case of threshold- or bifurcation-driven transients. Figure 4 depicts the comparison between the primary-side pressures, during the PORV cycling period, of two calculations performed using exactly the same input deck and running on different computer configurations: the calculation labeled “psb_test7c1gg” was run using a P-IV 32-bit 2800 MHz processor with a 32-bit Windows operating system; the calculation labeled “psb_testtc1ggAMD” was run adopting an AMD Athlon 64-bit 3200+ (2200 MHz) processor with a 32-bit Windows operating system. The experimental results are also added.

(M) The nodalization effect exists. The nodalization is the result of a wide-ranging brainstorming process, in which user expertise, computer power, and the code manual play a role. There are a number of required code input values that cannot be covered by logical recommendations: the user expertise needed to fix those input values (e.g., Figure 2) may prove inadequate and constitutes the origin of a specific source of uncertainty. Figure 5 shows how the same facility (LOBI) is modeled with a different level of detail (i.e., number of control volumes) by code users adopting either the same or different codes.

(N) Imperfect knowledge of boundary and initial conditions. Some boundary- and initial-condition values are unknown or known only approximately: the code user must add information. This process unavoidably has an impact on the results, which is not easily traceable and constitutes a specific source of uncertainty. Figure 6 constitutes an evident example of how the imperfect knowledge of the steam-generator secondary-side heat losses (between 20 kW and 50 kW for the SBLOCA test BL-12 performed in the LOBI-MOD 2 facility) has a strong impact (about 200 K) on the prediction of the peak cladding temperature (PCT).

(O) Code/model deficiencies cannot be excluded. The system-code development started toward the end of the sixties, and systematic assessment procedures have been available since the eighties. A number of modeling errors and inadequacies have been corrected or dealt with, and substantial progress has been made in improving the overall code capabilities. Nevertheless, deficiencies or lack of capabilities cannot be excluded nowadays. Examples, not applicable to all thermal-hydraulic system codes, are connected with the modeling of (i) the heat transfer between the free liquid surface and the upper gas-steam space and (ii) the heat transfer between a hotter wall and the cold liquid flowing down inside a steam-gas-filled region. Those deficiencies are expected to be important only in special transient situations.

2.1. Code Uncertainty

A system thermal-hydraulic code is a computational tool that typically includes three different sets of balance equations (of energy, mass, and momentum), closure or constitutive equations, material and state properties, special process or component models, and a numerical solution method. The balance equations are not sophisticated enough for application to special components or for the simulation of special processes. Examples of such components are the pumps and the steam separators, and examples of such special processes are the countercurrent flow-limiting (CCFL) condition and the two-phase critical flow, though this is not true for all the codes. Empirical models “substitute” the balance equations in such cases. The sources of uncertainty connected with the code are those identified as (A) to (I) and (O) in the above list. Namely, the following association between uncertainty sources and code parts applies:

(i) balance equations: uncertainty sources (A) to (F);
(ii) closure and constitutive equations: uncertainty sources (H) and (I);
(iii) material properties: uncertainty source (J);
(iv) special process and component models: uncertainty sources (H), (I), and (O);
(v) numerics: uncertainty source (G).

2.2. Representation Uncertainty

The representation uncertainty deals with the process of setting up the nodalization (idealization). The nodalization constitutes the connection between the code and the “physical reality” that is the objective of the simulation. The process of setting up the nodalization is a brainstorming activity, carried out by the group of code users, that aims at transferring the information from the real system (e.g., the NPP), including the related boundary and initial conditions, into a form understandable to the code. Limitations in the available resources (in terms of man-months), lack of data, the target of the code application, the capabilities/power of the available computational machines, and the expertise of the users all play a role in this process. The result of the process may heavily affect the response of the code.

The source of uncertainty connected with the nodalization is identified as (M) in the above list, but the (J) source can also have a role.

2.3. Scaling Issue

Scaling is a broad term used in nuclear-reactor technology as well as in basic fluid dynamics and thermal hydraulics. In general terms, scaling indicates the process of transferring information from a model to a prototype. The model and the prototype are typically characterized by different geometric dimensions, by different adopted materials, including the working fluids, and by different ranges of variation of the thermal-hydraulic quantities.

Therefore, the word “scaling” may have different meanings in different contexts. In system thermal hydraulics, a scaling process, based upon suitable physical principles, aims at establishing a correlation between phenomena expected in an NPP transient scenario and (a) phenomena measured in smaller-scale facilities or (b) phenomena predicted by numerical tools qualified against experiments performed in small-scale facilities.

Owing to the limitations of the fundamental equations at the basis of the system codes, the scaling issue may constitute an important source of uncertainty in code applications and may envelop various basic uncertainties. With reference to the list above, the sources of uncertainty connected with scaling are those applicable to the balance equations, identified as (A) to (I). More precisely, the uncertainty sources associated with scaling are (A) to (E), (H), and (I).

2.4. Plant Uncertainty

The uncertainty, or limited knowledge, of the boundary and initial conditions and of the related values for an assigned NPP is reported as plant uncertainty. Typical examples are the pressurizer level at the start of the assigned transient, the thickness of the fuel-rod gap, and the conductivity of the UO2 as well as of the gap itself.

It might be noted that quantities like the gap conductivity and thickness are relevant for the prediction of safety parameters (e.g., the PCT) and are affected by other parameters, like the burnup, whose knowledge is not as detailed as required (e.g., for each layer of a fuel element that may be part of the nodalization). Thus, such a source of error in the class of “plant uncertainty” cannot be avoided and should be accounted for by the uncertainty method. The source of uncertainty connected with the plant is identified as (N) in the above list.

2.5. User Effect

Complex systems codes such as RELAP5, CATHARE, TRAC, and ATHLET have many degrees of freedom that allow misapplication (e.g., not using the countercurrent flow-limiting model at a junction where it is required) and errors by users (e.g., inputting the incorrect length of a system component). In addition, even two competent users will not approach the analysis of a problem in the same way and, consequently, will likely take different paths to obtain a problem solution. The cumulative effect of user community members to produce a range of answers using the same code for a well-defined problem with rigorously specified boundary and initial conditions is the user effect. The sources of uncertainty connected with the code user are those identified as (K) and (J). The code user has part of the responsibility associated with the source of uncertainty (L).

3. Approaches for Computing Uncertainty

In this section, the salient features of independent approaches for estimating the uncertainties associated with predictions of complex system codes are reviewed as follows.

(i) The propagation of code input errors (Figure 7). This can be evaluated as the most widely adopted procedure nowadays, endorsed by industry and regulators. It adopts the statistical combination of values from selected input uncertainty parameters (even though, in principle, an unlimited number of input parameters can be used) to calculate the propagation of the errors throughout the code.

(ii) The propagation of code output errors (Figure 8). This is the only demonstrated independent working alternative to the previous one and has also been used for industrial applications. It makes full and direct reference to the experimental data and to the results of the assessment process to derive the uncertainty. In this case, the uncertainty prediction is not propagated throughout the code.

(iii) A third and independent way (Figure 9), that is, different from the propagation of code input errors and from the propagation of code output errors, is based on the adjoint sensitivity-analysis procedure (ASAP), the global adjoint sensitivity-analysis procedure (GASAP) [10, 11], and the data adjustment and assimilation (DAA) methodology [12], by which experimental and calculated data, including the computed sensitivities (derived from ASAP), are mathematically combined for the prediction of the uncertainty scenarios.

The first approach, reviewed as the prototype for the propagation of code input errors, is the so-called “GRS method” [13]; this class also includes the so-called “CSAU method” (code scaling, applicability, and uncertainty) [4] and the majority of the methods adopted by the nuclear industry. Although the entire set of input parameters of a typical NPP input deck, ranging up to about 10^5 parameters, could theoretically be considered as uncertainty sources by these methods, only a “manageable” number (of the order of several tens) is actually taken into account in practice. Ranges of variation, together with a suitable PDF (probability density function), are then assigned to each of the uncertain input parameters actually considered in the analysis. The number of computations needed to obtain the desired confidence in the results can be determined theoretically by the Wilks formula [14]. Subsequently, the identified computations (ca. 100) are performed using the code under investigation to propagate the uncertainties inside the code, from the inputs to the outputs (results). The logical steps of the approach are depicted in Figure 7. The main drawbacks of such methods are connected with (a) the need for engineering judgment to limit (in any case) the number of uncertain input parameters, (b) the need for engineering judgment to fix the range of variation and the PDF of each uncertain input parameter, (c) the use of the code-nodalization for propagating the uncertainties: if the code-nodalization is wrong, not only the reference results but also the results of the uncertainty calculations are wrong (see the sources of uncertainty (L), (M), and (O) in Section 2 and the first TRUE in Section 4), and (d) the fact that the process of selecting the (about) 100 code runs is demonstrably not convergent, so that the investigation of results from two or more different sets of 100 calculations shows different values for the uncertainty. Support for the last consideration is supplied by Figure 10, which summarizes a study performed by KAERI in the framework of Phase III of the BEMUSE project [7]. A direct Monte Carlo simulation consisting of 3500 runs was performed for simulating the LBLOCA test L2-5 in the LOFT facility, and several samples of n = 59 and n = 93 calculations were considered. The following considerations apply (a synthetic numerical sketch of this sampling behavior is given after the list).

(i) From about 1000 runs onwards, the mean value (equal to 1034 K) and the 95% empirical quantile (equal to 1173 K) of the first PCT are almost stabilized.

(ii) The 95% quantile value of 1173 K has to be compared with the value of 1219 K obtained with the sample of 93 calculations used for evaluating the upper tolerance limit of the first PCT in the BEMUSE project; a difference of 46 K is attained.

(iii) The dispersion of the upper limit obtained by using the Wilks formula at the first order (i.e., the maximum value is retained) and at the second order (i.e., the second maximum value is retained), with a probability of 95% and a confidence level of 95%, was studied. The following aspects have to be outlined.
(a) The spread of the results predicted for the upper limit of the first PCT is equal to roughly 200 K at the first order and 120 K at the second order.
(b) At the first order, among the 58 calculations ranging from 1170 K to 1360 K, none was found significantly lower than the 95% quantile of the 3500 code runs, notwithstanding that, statistically, three cases (i.e., 5% of 58) are expected.
(c) At the second order, among 37 calculations ranging from 1150 K to 1270 K, one case was found below 1173 K.
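The dispersion just described can be illustrated with a purely synthetic sketch (the distribution, sample sizes, and numbers below are assumptions for illustration, not the KAERI/BEMUSE data): repeated first-order Wilks estimates, each taken as the maximum of n = 59 runs, scatter around the 95% quantile of a large direct Monte Carlo sample.

```python
import numpy as np

rng = np.random.default_rng(0)
reference = rng.normal(1034.0, 85.0, 3500)   # hypothetical first-PCT population, K
q95 = np.quantile(reference, 0.95)           # analogue of the 95% quantile of the 3500 runs

# 58 repetitions of the first-order Wilks procedure: maximum of 59 sampled runs
upper_limits = [rng.choice(reference, 59, replace=False).max() for _ in range(58)]
print(f"95% quantile of the 3500 runs: {q95:.0f} K")
print(f"first-order Wilks upper limits: {min(upper_limits):.0f} K to {max(upper_limits):.0f} K")
```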
The second approach (Figure 8), reviewed as the propagation of code output errors, is representatively illustrated by the UMAE-CIAU (uncertainty method based upon accuracy extrapolation [15] “embedded” into the code with capability of internal assessment of uncertainty [16, 17]). Note that this class of methods includes only a few applications from industry. The use of this method depends on the availability of “relevant” experimental data, where, here, the word “relevant” is connected with the specific NPP transient scenario under investigation for uncertainty evaluation. Assuming such availability of relevant data, which are typically ITF (integral test facility) data, and assuming that the code correctly simulates the experiments, it follows that the differences between code computations and the selected experimental data are due to errors. If these errors comply with a number of acceptability conditions [15], then the resulting (error) database is processed and the “extrapolation” of the error takes place. Relevant conditions for the extrapolation are (i) building up the NPP nodalization with the same criteria as was adopted for the ITF nodalizations, and (ii) performing a similarity analysis and demonstrating that NPP calculated data are “consistent” with the data measured in a qualified ITF experiment.

The main drawbacks of this method are as follows.

(i) The method is not applicable in the absence of relevant experimental information.
(ii) A considerable amount of resources is needed to establish a suitable error database, but this is a one-time effort, independent of subsequent applications of the method.
(iii) The process of combining errors originating from different sources (e.g., stemming from different ITFs or SETFs (separate effect test facilities), from different but consistent nodalizations, and from different types of transient scenarios) is not based upon fundamental principles and requires detailed validation.

The third approach, depicted in Figure 9, is based upon the powerful mathematical tools of ASAP, GASAP, and DAA, by which all the parameters that affect any prediction, being part of either the code models or the input deck, can be considered. The adjoint sensitivity-analysis procedure (ASAP) [10, 11] is the most efficient deterministic method for computing the local sensitivities S of large-scale systems when the number of parameters and/or parameter variations exceeds the number of responses R of interest (which is the case for most problems of practical interest). In addition, the system critical points (i.e., bifurcations, turning points, saddle points, and response extrema) can be considered and determined by the global adjoint sensitivity-analysis procedure (GASAP) [10, 11] in the combined phase space formed by the parameters, the forward state variables, and the adjoint variables. Subsequently, the local sensitivities of the responses R located at the critical points are analyzed by the efficient ASAP.

Once the sensitivity matrix S of the responses R with respect to the system parameters is available, the moment-propagation equation is adopted to obtain the computed covariance matrix of the responses starting from the covariance matrix of the system parameters. The elements of the parameter covariance matrix reflect the state of knowledge about the input (uncertainty) parameters, which can be characterized by ranges and PDFs. It is well known that, in system thermal hydraulics, only a few elements of this matrix are obtained from experimental observations (mainly from SETF), whereas, for the major part of them, engineering judgment is adopted for deriving first-guess values of the ranges and PDFs. The imperfect knowledge of the input uncertainty parameters obviously affects the computed responses R and the related covariance, and constitutes the main reason for which proper experimental data (i.e., connected with the specific NPP transient scenario under investigation for uncertainty evaluation) are needed. The technique by which experimental observations are combined with code predictions and their respective errors to provide an improved estimate of the system state is known as data adjustment and assimilation (DAA) [12], and it is based on a Bayesian inference process.
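A minimal sketch of the first-order moment-propagation ("sandwich") rule referred to above is given below; the matrix names and numbers are illustrative placeholders, not values taken from [10–12].

```python
import numpy as np

# Sensitivity matrix S: derivative of each response with respect to each
# parameter; here 2 responses and 3 parameters (placeholder values)
S = np.array([[2.0, 0.5, 0.0],
              [0.1, 1.5, 0.8]])
C_par = np.diag([0.04, 0.09, 0.01])   # assumed covariance matrix of the parameters

C_resp = S @ C_par @ S.T              # computed covariance matrix of the responses
print(C_resp)
```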

The idea at the basis of DAA can be made more specific as follows. The computed results R, and the respective statistical errors, predicted by mathematical models on the basis of prior (first-guess) PDFs for the input parameters are combined with proper experimental observations M of the states of the system to generate “adjusted” values for the system parameters (denoted hereafter as improved-estimate, IE, values) and for the respective input covariance matrix (the “posterior” PDFs). From this process, which can be considered an improved-estimate analysis of the system states, the improved-estimate responses and the respective covariance matrix are finally derived.

In conclusion, to reduce uncertainties in both the system parameters and responses, the Bayesian inference procedure is used to consistently assimilate computational and experimental information. There are several approaches possible when performing a DAA process in conjunction with time-dependent nonlinear systems, but the “online data adjustment/assimilation” is the best suited for uncertainty analysis of large-scale highly nonlinear time-dependent problems. It can be performed online (i.e., sequentially in time and interactively with the code that calculates the system dependent variables and responses) by decomposing the original system into simpler but interacting subsystems. In the present case, the assimilation process involves, at every time node, the minimization of a quadratic objective function subject to constraints.
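One common linearized (Kalman- or GLS-type) form of such an adjustment step is sketched below for illustration; the notation, the specific update formula, and the numbers are assumptions of this sketch rather than the particular formulation of [12].

```python
import numpy as np

def adjust(alpha0, C_par, S, R0, M, C_meas):
    """One linearized data-adjustment step: prior parameters alpha0 with
    covariance C_par, sensitivities S, computed responses R0, measurements M
    with covariance C_meas; returns improved-estimate parameters and covariance."""
    K = C_par @ S.T @ np.linalg.inv(S @ C_par @ S.T + C_meas)   # gain matrix
    alpha_ie = alpha0 + K @ (M - R0)      # adjusted ("IE") parameter values
    C_par_ie = C_par - K @ S @ C_par      # "posterior" parameter covariance
    return alpha_ie, C_par_ie

alpha0 = np.array([1.0, 2.0])             # placeholder prior parameters
C_par = np.diag([0.10, 0.20])
S = np.array([[1.0, 0.5]])                # one response, two parameters
R0 = S @ alpha0                           # response computed at the prior
M = np.array([2.3])                       # hypothetical measurement
C_meas = np.array([[0.05]])
print(adjust(alpha0, C_par, S, R0, M, C_meas))
```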

Once a suitable database of improved estimates for the input parameters and for the respective input covariance matrix is available, the application of the method to an NPP scenario is straightforward and requires (a) the calculation of the reference responses, where the word “reference” refers to the reference NPP boundary and initial conditions, supplemented by the improved estimates of the input parameters when other information is not available, (b) the computation of the sensitivity coefficients S, and (c) the application of the moment-propagation equation to obtain the computed covariance matrix of the responses starting from the covariance matrix of the system parameters, supplemented by the improved estimate of the input covariance matrix when other information is not available.

The main drawbacks of this approach are as follows.

(i) The method is not applicable in the absence of relevant experimental information.
(ii) The adjoint model, needed for computing the sensitivities S, requires relatively modest additional resources to develop and implement if this is done simultaneously with the development of the original code; however, if the adjoint model is constructed a posteriori, considerable skills may be required for its successful development and implementation.
(iii) A considerable amount of resources is needed to establish a suitable database of improved estimates for the input parameters and for the respective input covariance matrix, but this is a one-time effort, independent of subsequent applications of the method.

The maturity of the methods described in the first two items may be considered as proven, also on the basis of applications completed within the framework of initiatives of international institutions (OECD/NEA [5–7] and IAEA). The reason for considering the approach in the third item derives from its potential to open an independent way (i.e., different from the propagation of code input errors and from the propagation of code output errors) of performing a global uncertainty analysis. In this case, the method itself, as an uncertainty procedure, is not an established technology, but it constitutes an established idea and framework for pursuing a mathematically based road to evaluate the uncertainty in system-code predictions.

In the following subsections, short descriptions of the GRS and UMAE methods are given for the sake of completeness. More detailed information about the CIAU methodology, based on (code) internal assessment of uncertainty and on the propagation of output errors, is given in Section 5.

3.1. GRS Method

The GRS method [13] is a probabilistic method based on the concept of propagating the input uncertainties. All relevant uncertain parameters, including the code, representation, and plant uncertainties, are identified, any dependencies between uncertain parameters are quantified, and ranges and/or PDFs for each uncertain parameter are determined. Expert judgment and experience from code applications to separate-effects tests, integral tests, and full plant applications are the principal sources of information for the identification and quantification of the uncertain parameters. Peculiarities of the GRS method are as follows.

(i) The uncertainty space of the input parameters (defined by their uncertainty ranges) is sampled at random according to the combined “subjective” probability distribution of the uncertain parameters, and code calculations are performed with the sampled sets of parameters.
(ii) The number of code calculations is determined by the requirement to estimate a tolerance-confidence interval for the quantity of interest (such as the peak-clad temperature). The Wilks formula [14] is used to determine the number of calculations needed for deriving the uncertainty bands.
(iii) Statistical evaluations are performed to determine the sensitivities of the input parameter uncertainties on the uncertainties of the key results (parameter-importance analysis).
(iv) There is no limit on the number of uncertain parameters to be considered in the analysis, and the calculated uncertainty has a well-established statistical basis.
(v) The method relies only on actual code calculations, without using approximations like fitted response surfaces.

For the selected plant transient, the method is applied to an integral-effects test simulating the same scenario prior to the plant analysis. If the experimental data are not bounded, the set of uncertain input parameters has to be modified. Experts identify the significant uncertainties to be considered in the analysis, including the modeling uncertainties and the related parameters, and identify and quantify dependencies between uncertain parameters. Subjective probability density functions are used to quantify the state of knowledge of the uncertain parameters for the specific scenario. The term “subjective” is used here to distinguish uncertainty due to imprecise knowledge from uncertainty due to stochastic or random variability.

Uncertainties of code model parameters are derived based on validation experience. The scaling effect has to be quantified as model uncertainty. Additional uncertain model parameters can be included, or PDF can be modified, accounting for results from the analysis of separate-effects tests.

Input parameter values are varied simultaneously by random sampling according to the subjective PDFs and dependencies. A set of parameters is provided to perform the required number n of code runs. For example, the 95% fractile and the 95% confidence limit of the resulting subjective distribution of the selected output quantities are directly obtained from the n code results, without assuming any specific distribution. No response surface is used or needed.

Sensitivity measures, obtained by applying regression or correlation techniques to the sets of input parameters and the corresponding output values, allow the ranking of the uncertain input parameters with respect to their contribution to the output uncertainty. Therefore, the ranking of parameters is a result of the analysis, not of prior expert judgment. The 95% fractile, the 95% confidence limit, and sensitivity measures for continuous-valued output parameters are provided.
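The kind of correlation-based importance measure mentioned above can be sketched as follows; synthetic data and a Spearman rank correlation are used here purely as an illustrative choice, since the GRS tools may rely on other regression or correlation measures.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_runs, n_params = 100, 5
X = rng.uniform(size=(n_runs, n_params))                           # sampled input parameters
y = 3.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(0.0, 0.1, n_runs)   # synthetic code output

ranking = []
for j in range(n_params):
    rho, _ = spearmanr(X[:, j], y)        # rank correlation of input j with the output
    ranking.append((abs(rho), j))
ranking.sort(reverse=True)
print(ranking)                            # parameters 0 and 2 should dominate
```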

Upper statistical tolerance limits are the upper β-confidence bounds for the chosen α fractile. The fractile indicates the probability content of the probability distribution of the code results (e.g., α = 95% means that the PCT is below the tolerance limit with at least 95% probability). One can be β% confident that at least α% of the combined influence of all the characterized uncertainties is below the tolerance limit. The confidence level is specified because the probability is not analytically determined; it accounts for the possible influence of the sampling error due to the fact that the statements are obtained from a random sample of limited size. The smallest number n of code runs to be performed is given by the Wilks formula and represents the size of a random sample (a number of calculations) such that the maximum calculated value in the sample is an upper statistical tolerance limit; for the one-sided limit, n is the smallest integer satisfying 1 − α^n ≥ β. For two-sided statistical tolerance intervals (investigating the output-parameter distribution within an interval), the formula is 1 − α^n − n(1 − α)α^(n−1) ≥ β. The minimum number n of calculations for both the one-sided and the two-sided case can be found in Table 1. As a consequence, the number n of code runs is independent of the number of selected uncertain input parameters and depends only on the fractile and on the desired confidence-level percentage. The number of code runs for deriving sensitivity measures is also independent of the number of parameters. As an example, a total number of 100 runs is typical for an application of the GRS method. For regulatory purposes, where the margin to the licensing criteria is of primary interest, the one-sided tolerance limit may be applied; that is, for a 95th/95th percentile, 59 calculations should be performed.
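A minimal sketch of the sample-size computation implied by these criteria (taken exactly as written above) reproduces the familiar 59 and 93 values for the 95%/95% case.

```python
def wilks_n_one_sided(alpha=0.95, beta=0.95):
    """Smallest n such that 1 - alpha**n >= beta (one-sided upper limit)."""
    n = 1
    while 1.0 - alpha**n < beta:
        n += 1
    return n

def wilks_n_two_sided(alpha=0.95, beta=0.95):
    """Smallest n such that 1 - alpha**n - n*(1-alpha)*alpha**(n-1) >= beta."""
    n = 2
    while 1.0 - alpha**n - n * (1.0 - alpha) * alpha**(n - 1) < beta:
        n += 1
    return n

print(wilks_n_one_sided())   # 59
print(wilks_n_two_sided())   # 93
```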

3.2. UMAE Method

The UMAE [15], whose flow diagram is given in Figure 11, is the prototype method for the description of the “propagation of code output errors” approach. The method focuses not on the evaluation of individual parameter uncertainties but on the propagation of errors from a suitable database, calculating the final uncertainty by extrapolating the accuracy from relevant integral experiments to the full-scale NPP.

Considering ITFs of the reference water-cooled reactor and qualified computer codes based on advanced models, the method relies on code capabilities qualified through application to facilities of increasing scale. Direct data extrapolation from small-scale experiments to the reactor scale is difficult owing to the imperfect scaling criteria adopted in the design of each scaled-down facility. Therefore, only the accuracy (i.e., the difference between measured and calculated quantities) is extrapolated. Experimental and calculated data in differently scaled facilities are used to demonstrate that the physical phenomena and the code predictive capabilities of important phenomena do not change while increasing the dimensions of the facilities (see the right loop FG in Figure 11).

Other basic assumptions are that phenomena and transient scenarios in larger-scale facilities are close enough to plant conditions. The influence of user and nodalization upon the output uncertainty is minimized in the methodology. However, user and nodalization inadequacies affect the comparison between measured and calculated trends; the error due to this is considered in the extrapolation process and gives a contribution to the overall uncertainty.

The method utilizes a database from similar tests and counterpart tests performed in ITFs that are representative of plant conditions. The quantification of the code accuracy (step f in Figure 11) is carried out by using a procedure based on the fast Fourier transform-based method (FFTBM) [18], which characterizes the discrepancies between code calculations and experimental data in the frequency domain and defines figures of merit for the accuracy of each calculation. Different requirements have to be fulfilled in order to extrapolate the accuracy.
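As an illustration of the kind of frequency-domain figure of merit involved, the sketch below computes an average-amplitude measure, normalizing the spectrum of the code-experiment discrepancy by the spectrum of the experimental signal; the signals are synthetic, and the exact FFTBM definitions of [18] (frequency cutoffs, weighting) are not reproduced here.

```python
import numpy as np

def average_amplitude(exp_signal, calc_signal):
    """Spectrum of the discrepancy normalized by the spectrum of the experiment."""
    err = np.fft.rfft(calc_signal - exp_signal)
    ref = np.fft.rfft(exp_signal)
    return np.sum(np.abs(err)) / np.sum(np.abs(ref))

t = np.linspace(0.0, 100.0, 1024)                   # s, synthetic time base
exp_signal = 600.0 + 150.0 * np.exp(-t / 30.0)      # e.g. a cladding temperature, K
calc_signal = exp_signal + 10.0 * np.sin(0.3 * t)   # imperfect prediction
print(f"AA = {average_amplitude(exp_signal, calc_signal):.3f}")   # lower means more accurate
```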

Calculations of both ITF experiments and NPP transients are used to attain uncertainty from accuracy. Nodalizations are set up and qualified against experimental data by an iterative procedure, requiring that a reasonable level of accuracy is satisfied. Similar criteria are adopted in developing plant nodalization and in performing plant transient calculations (see left loop FG in Figure 11). The demonstration of the similarity of the phenomena exhibited in test facilities and in plant calculations, accounting for scaling-laws considerations (step “k” in Figure 11), leads to the analytical simulation model (ASM), that is, a qualified nodalization of the NPP.

4. Relevant Topics for Uncertainty Evaluation

Fundamental aspects to be considered when developing an uncertainty method are briefly presented [19]. The definition of “topics relevant for uncertainty evaluation,” with the acronym TRUE, is introduced to emphasize the central role they have to play in structuring the architecture of a methodology. The following three TRUEs are discussed, and for each of them one example is given, together with the lesson learned.

(i) The nodalization choices. Different input decks (i.e., nodalization user choices) produce different effects upon relevant code output parameters.
(ii) The code versions. Different code versions (same developer) have a strong impact on the prediction of relevant code output parameters.
(iii) The bifurcation analysis. Scenarios can be imagined where bifurcations bring the transient evolution far from the best-estimate deterministic prediction, thus invalidating the connected uncertainty evaluation.

4.1. The Nodalization Choices

Results from the analysis of the LBLOCA DEGB in the Angra 2 NPP are considered [20]. A “fictitious” 3D nodalization of the reactor pressure vessel (RPV) was adopted, and the influence of the upper-plenum (UP) noding assumptions was considered by developing three different RPV-UP nodalizations simulating one uniform (Z in Figure 12) and two nonuniform (X and Y in Figure 12) UP behaviors (top-down flow allowed in all channels, top-down flow allowed in all channels except the hot assembly, and top-down flow allowed only in the determined breakthrough channels, resp.).

A comprehensive sensitivity study has been carried out, aiming at confirming the influence of selected input parameters upon the predicted LBLOCA scenario and at showing the importance of the nodalization upon the same prediction when an assigned input parameter is varied. Starting from the “reference” nodalizations (X, Y, and Z), single parameters are varied in each code run. Sixty-one variations of input parameters, subdivided into six groups (“fuel,” “nodalization,” “loop hydraulics,” “PSA and ECCS,” “neutronics,” and “other”), were considered.

The dispersion of the results for ΔPCT (defined as the difference between the PCT of the reference calculation and the PCT obtained from the generic sensitivity run) can be seen in Figure 12. The following two outcomes can be detected: (i) the reference PCT values are affected by the nodalization (i.e., by the nodalization choices); (ii) the ΔPCT values are strongly affected by the nodalization (i.e., a given uncertain input parameter is relevant or not depending upon the selected nodalization). Moreover, the sign of ΔPCT (i.e., the increase or decrease of the PCT value with respect to the reference calculation) is also nodalization dependent (e.g., sensitivity case no. 55). It should be noted that the conclusions of items (i) and (ii) are also applicable when different thermal-hydraulic system codes are adopted. The lesson learned, that is, the importance of the nodalization and of the code upon the predicted scenario, should be duly considered when the evaluation of the uncertainty of relevant code output parameters is performed by propagating the input uncertainties through the code (i.e., propagation of code input uncertainties), a process that is affected by the code itself and by the nodalization.

4.2. The Code Versions

After the closure of the uncertainty methods study (UMS), the University of Pisa performed comparison calculations of the experiment LSTF-SB-CL-18 using different versions of the RELAP5 code, that is, in chronological order, MOD 2, MOD 3.2, and MOD 3.2.2. In the UMS study, MOD 2 was used by the University of Pisa, and MOD 3.2 by AEA Technology as well as by ENUSA. In the a posteriori analysis, it turned out that MOD 3.2 calculated a 170 K higher peak-clad temperature compared with MOD 2 and MOD 3.2.2 using the same input deck (Figure 13). This is in agreement with the AEAT reference peak-clad temperature value and may contribute to the relatively high upper limits of the uncertainty ranges calculated by AEAT and ENUSA in the framework of the UMS.

The lesson learned from this TRUE is that different code versions (of a highly assessed and qualified system thermal-hydraulic code), run with the same input deck, have a strong impact upon the results and affect the uncertainty prediction. Therefore, “direct” specific code qualification is needed for uncertainty evaluation, and the “internal assessment of uncertainty” (see Section 5), by which the uncertainty methodology is strictly connected with the code version, is a highly recommended property to consider.

4.3. The Bifurcation Analysis

Scenarios can be imagined where bifurcations bring the transient evolution far from the best-estimate deterministic prediction, thus invalidating the connected uncertainty evaluation. Therefore, a bifurcation analysis may prove necessary. Bifurcations can be originated by the actuation or lack of actuation of a system (e.g., the pressurizer relief valves) or by the occurrence of a physical phenomenon characterized by a threshold (typically, the dryout). A tree of uncertainty bands can be predicted by CIAU, and the results of a sample application [21] can be seen in Figure 14. The CIAU bifurcation capability was applied by the University of Pisa in the post-UMS study, and the uncertainty ranges obtained by AEAT (the extreme results in the UMS framework) were (basically) reproduced by the CIAU bifurcation study. The lesson learned from this experience is that a bifurcation study is possible and produces (as expected) wider uncertainty bands compared with a standard uncertainty study.

5. The Internal Assessment of Uncertainty

All uncertainty evaluation methods are mainly affected by the following limitations: (i) the resources needed for their application may be very demanding, ranging up to several man-years; (ii) the achieved results may be strongly method-user dependent. The second limitation should be considered together with the code-user effect, widely studied in the past, and may threaten the usefulness or the practical applicability of the results achieved by an uncertainty method. Therefore, the internal assessment of uncertainty (IAU) was requested as a follow-up of an international conference [22]. The CIAU approach, a code with the capability of IAU, has been developed with the objective of reducing the limitations discussed above.

5.1. CIAU Method

The basic idea of the CIAU [16, 17] can be summarized in two parts.

(i) Consideration of plant status. Each status is characterized by the value of six “driving” (i.e., relevant) quantities (whose combination is called a “hypercube”) and by the time instant when those values are reached during the transient.
(ii) Association of uncertainty (in quantity and time) to each plant status.

The key feature of CIAU is the continuous full reference to the experimental database. Accuracies detected from the comparison between experimental and calculated data are extrapolated to obtain the uncertainty in the system-code predictions. A solution to the issues constituted by the “scaling” and the “qualification” of the computational tools is embedded into the method [23, 24] through the UMAE methodology, which constitutes the engine for the development of CIAU and for the creation of the error database.

Given a point in the time domain, the accuracy in predicting the time of occurrence of that point is distinguished from the accuracy that characterizes the quantity value at that point. Thus the time domain and the phase space are distinguished: the time domain is needed to characterize the system evolution (or the NPP accident scenario), and the phase-space domain is used to identify the hypercubes. Safety relevance and consistency with the technological achievements have been considered in order to select the driving quantities, listed as (1) through (6) in Table 2, whose upper and lower boundaries have been fixed together with a minimum-optimal number of intervals, determined considering the following aspects: (i) design of the primary system of the plant, (ii) design and licensing of the ECCS, (iii) design and optimization of emergency operational procedures, (iv) benchmarking of simplified models, (v) training purposes, and (vi) code limitations. Regarding the transient time, a stable steady-state (or stationary) situation must occur, or be specified, when a code calculation is concerned.

Quantity and time accuracies can be associated with errors in code models and with uncertainties in the boundary and initial conditions, including the time sequence of events and the geometric modeling (or nodalization) of the problem. Consider the following facts.

(i) The “transient-time-dependent” calculation by a code resembles a succession of steady-state values at each time step; this view is supported by the consideration that the code uses, and is based on, a number and a variety of empirical correlations valid (and qualified) at steady state for an assigned geometric discretization (or nodalization) of the concerned system. Therefore, the quantity accuracy can be associated primarily with errors in code models.
(ii) Errors associated with the opening of a valve (e.g., the time when the equivalent full-flow area for the flow passage is attained) or with an inadequate nodalization induce time errors that cannot be associated with code-model deficiencies. Therefore, the time accuracy can be associated primarily with uncertainties in the boundary and initial conditions.

Once the time-accuracy (uncertainty) vector (TAV, TUV) and the quantity-accuracy (uncertainty) matrix (QAM, QUM) are derived, the overall accuracy (and uncertainty) is obtained by the geometric combination of the two accuracy (and uncertainty) values, that is, time and quantity, in the two-dimensional space-time plane.
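A purely illustrative sketch of the bookkeeping this implies is given below: the six driving quantities are digitized into interval indices that identify the hypercube, and the quantity and time uncertainties stored for that status define the uncertainty rectangle around a calculated point. The interval edges, the database entries, and the default values are assumptions of this sketch, not the actual CIAU ranges or error database.

```python
import numpy as np

# assumed interval edges for the six driving quantities (placeholders,
# not the CIAU ranges of Table 2)
edges = [np.array([0.1, 2.0, 5.0, 10.0, 18.0]),       # e.g. primary pressure, MPa
         np.array([0.0, 0.3, 0.7, 1.0])] + \
        [np.linspace(0.0, 1.0, 4) for _ in range(4)]  # remaining quantities, normalized

def hypercube_index(driving_quantities):
    """Tuple of interval indices identifying the hypercube of the plant status."""
    return tuple(int(np.digitize(q, e)) for q, e in zip(driving_quantities, edges))

# hypothetical error database: hypercube -> relative quantity uncertainty,
# time-interval index -> relative time uncertainty
quantity_uncertainty = {(2, 2, 2, 2, 1, 3): 0.08}
time_uncertainty = {3: 0.05}

status = hypercube_index([4.2, 0.5, 0.4, 0.6, 0.2, 0.9])
dq = quantity_uncertainty.get(status, 0.10)   # assumed default if the hypercube is empty
dt = time_uncertainty.get(3, 0.10)
print(status, dq, dt)   # defines the uncertainty rectangle around a calculated point
```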

A general idea of the architecture of the CIAU methodology can be derived from Figure 15. Mainly, two processes can be distinguished: the “error-filling process,” by which the NPP statuses are filled with the values of the error database, and the “error-extraction process,” by which the uncertainty values (derived from the extrapolation process of the accuracy) are picked up from the NPP statuses selected during the transient calculation to generate continuous uncertainty bands enveloping the ASM, that is, the qualified NPP calculation in the UMAE nomenclature.

Summarizing, six dimensions constitute the phase-space domain, and each combination of intervals of the driving quantities identifies one hypercube in that domain. Therefore, a hypercube and a time interval characterize a unique plant status in the frame of the uncertainty evaluation, and all plant statuses are characterized by a matrix of hypercubes and by a vector of time intervals. Each point of the curve (a generic thermal-hydraulic code output plotted versus time) is affected by a quantity uncertainty and by a time uncertainty. Owing to the uncertainty, each point may take any value within the rectangle identified by the quantity and the time uncertainty. The value of uncertainty corresponding to each edge of the rectangle can be defined in probabilistic terms. This satisfies the requirement of a 95% probability level, which is acceptable to the USNRC staff for the comparison of best-estimate (BE) predictions of postulated transients with the licensing limits in 10 CFR Part 50.

One main difference between UMAE and CIAU has to be emphasized: in the UMAE methodology, the uncertainty of a thermal-hydraulic quantity is an average of the values obtained in different simulations of the same class of transients in the same facility, or in similar tests performed in different facilities; in the case of CIAU, the results of any kind of transient can be combined to derive the accuracy, and then the uncertainty, provided they pass through the same plant status (i.e., the same hypercube and the same time when the hypercube is reached).

5.2. Qualification Processes in CIAU Method

One important aspect of any tool developed in system thermal hydraulics is the possibility to perform an assessment and eventually to show the quality level, utilizing databases independent of those utilized in the development of the tool itself. Two qualification steps are foreseen in the case of CIAU.

The first one can be identified as the internal qualification process. Data gathered inside each hypercube or each time interval of the QUM and TUV for uncertainty evaluation, or inside the QAM and TAV for accuracy evaluation, are labeled before being combined. In other terms, each uncertainty- or accuracy-connected value includes its origin, that is, the transient scenario type and the part of the hypercube that is concerned. A statistical analysis can be used to find out whether groups of data coming from different events, or related to different parts of the same hypercube, are different. For instance, it might happen that data from the analysis of several SBLOCAs produce uncertainty values much higher than data from the analysis of a similar number of LBLOCAs when the same hypercubes are concerned. In this case, the number of hypercubes, that is, the ranges of variation of the driving quantities, must be changed, or the transient type must be identified inside each hypercube. More in detail, it must be shown that the accuracy and uncertainty values in each hypercube or in each time interval do not depend upon

(i) the time (into the transient) when the hypercube is reached;
(ii) the volume scaling factor;
(iii) the transient type (e.g., SBLOCA, LBLOCA, LOFW, etc.);
(iv) the dimension of the hypercubes;
(v) the ITF, SETF, or NPP characteristics.

An example of how the internal qualification process [25] is conducted is given in Figure 16. For simplicity, the analysis here is focused only on the accuracy values of one quantity inside one hypercube. The selected quantity is the cladding temperature, whereas the selected hypercube is one of those containing the highest number of experimental transients (hypercube 9-4-2-3-5-3, where each digit varies between 1 and the number of intervals into which each “driving” quantity is subdivided, see Table 2). Figure 16 shows that, inside the selected hypercube, no correlations based on the transient type and on the volume scaling factor (Kv) can be established among the accuracy values of the cladding temperature.

The internal qualification process is continuously ongoing during the development of the method: the experience gained so far has not indicated any need to increase the number of hypercubes or to characterize the event type inside the hypercubes.

The second qualification step is carried out when a reasonable number of hypercubes and time intervals have been filled. In this case, the CIAU is run to simulate qualified transients, measured in ITFs, that have not been utilized for deriving the uncertainty values. Success consists in demonstrating that the CIAU-calculated uncertainty bands envelop the experimental data. This must be regarded as the reference (external) qualification process for the CIAU, together with the condition that the uncertainty bands are reasonably large. The completion of this step will also allow establishing, on an objective basis, the confidence level of the uncertainty statements. The increase in the number of positively completed qualification analyses will increase the confidence level of the procedure. No correlation has been established yet between the number of qualification analyses and the expected confidence level of the uncertainty results, though the target is to achieve the 95% confidence level.

6. Conclusions

The uncertainty evaluation constitutes the necessary final step for the application of a system thermal-hydraulic code to nuclear technology. Therefore, any application of a best-estimate code without an uncertainty evaluation is meaningless, simply because an error is unavoidable in any prediction. The nuclear safety principles and, primarily, concepts like defense in depth are the reasons why an uncertainty analysis is performed. It must be ensured that the nominal result of a code prediction, “best estimate” in the present case, is supplemented by the uncertainty statement, which can be simplified as “uncertainty bands,” in such a way that the connected safety margins are properly estimated.

Several sources of uncertainty have been classified (first goal of the paper), and topics relevant for uncertainty evaluation have been emphasized (second goal) in order to investigate which of these are embedded into the currently adopted methods and which come out in the frame of their applications. The third purpose of the paper is twofold: (a) to identify the roadmaps for uncertainty evaluation adopted by the methods currently applied to cases of industrial interest, making reference to the classification based on the propagation of code input errors and the propagation of code output errors, and (b) to propose an innovative method (based on the adjoint and global adjoint sensitivity-analysis procedures, extended to uncertainty evaluation in conjunction with data adjustment and assimilation) that might not suffer from the drawbacks identified for the current methods.

Finally, a method to calculate the uncertainty associated with NPP computer-code calculations directly integrated in the code has been presented. The main advantage of an IAU approach consists in avoiding, from the methodology user point of view, the interpretation of logical statements that are part of the application process for all current uncertainty methods, that is, avoiding user effect when using uncertainty methodologies. The above consideration does not exclude the use of engineering judgment: rather, engineering judgment is embedded into the development of the IAU method and is not needed in its application.

List of Abbreviations
ASAP: Adjoint sensitivity-analysis procedure
ASM: Analytical simulation model
BE: Best estimate
BEMUSE: Best-estimate methods uncertainty and sensitivity evaluation
CCFL: Countercurrent flow limiting
CIAU: Code with the capability of internal assessment of uncertainty
CSAU: Code scaling, applicability, and uncertainty evaluation
DAA: Data adjustment and assimilation
DEGB: Double-ended guillotine break
ECCS: Emergency core-cooling system
FFTBM: Fast Fourier transform-based method
GASAP: Global adjoint sensitivity-analysis procedure
IAU: Internal assessment of uncertainty
ITF: Integral test facility
LBLOCA: Large-break loss-of-coolant accident
LOCA: Loss-of-coolant accident
LOFW: Loss of feedwater
NPP: Nuclear power plant
PCT: Peak cladding temperature
PDF: Probability density function
PORV: Pilot-operated relief valve
PSA: Probabilistic safety analysis
QAM: Quantity accuracy matrix
QUM: Quantity uncertainty matrix
RPV: Reactor pressure vessel
SBLOCA: Small-break loss-of-coolant accident
SETF: Separate effect test facility
TAV: Time accuracy vector
TRUE: Topics relevant for uncertainty evaluation
TUV: Time uncertainty vector
UMAE: Uncertainty methodology based on accuracy extrapolation
UMS: Uncertainty methods study
UP: Upper plenum.