Abstract

A general evaluation methodology development and application process (EMDAP) paradigm is described for the resolution of severe accident safety issues. In support of the broader objective of complete and comprehensive design validation, severe accident safety issues are resolved by demonstrating severe-accident-related engineering through applicable testing programs, process studies demonstrating certain deterministic elements, probabilistic risk assessment, and severe accident management guidelines. The basic framework described in this paper extends the top-down, bottom-up strategy described in U.S. Nuclear Regulatory Commission Regulatory Guide 1.203 to severe accident evaluations, addressing U.S. NRC expectations for plant design certification applications.

1. Introduction

Associated with rare, hazardous events, such as nuclear power plant (NPP) severe accidents, is a degree of uncertainty that poses a significant challenge to the evaluation and resolution of related design and analysis methods issues. For events occurring at some sufficiently observable frequency, design improvements can evolve through the understanding gained from such events and applicable test programs, leading to long-term acceptance. The broad uncertainties associated with severe accident initiators and event progression impose inherent limits on the benefits of this approach for severe accident design. As such, there is greater reliance on analysis and emphasis on the better understood severe accident phenomena. The engineering design process for an NPP’s severe accident response strategy has evolved into one that relies on the (i) establishment of safety goals, (ii) identification of processes and phenomena, (iii) iterative design processes focused on risk reduction, (iv) test programs, (v) expert elicitation on important severe accident safety issues, (vi) analysis methods development, and (vii) process studies.

Consistent with current US regulatory requirements and guidance, final acceptance and resolution of relevant beyond-design-basis events is demonstrated through detailed deterministic studies and probabilistic risk assessment (PRA).

The unique characteristic of this process for severe accidents is the consideration of risk in the resolution of severe accident safety issues. Generation III and IV advanced reactor designs incorporate features that significantly reduce risk relative to current-generation light water reactors (LWRs). Practical consideration of this reduced risk requires that this information be incorporated into measures not only of acceptable performance, but also of relevance.

The key objectives of any severe accident safety issue resolution methodology are to (i) define the technical basis for the engineered event prevention and mitigation features, (ii) develop or identify analysis tools, (iii) identify key uncertainties and uncertainty treatments impacting the acceptance criteria figures of merit, (iv) determine the calculation matrix to demonstrate containment performance during severe accidents, and (v) calculate safety margins against regulatory expectations.

While the major components of severe accident engineering are the credited test programs and corresponding analytical methods, the identification of the necessary analyses involves engineering insights that combine regulation, industry experience, fundamental understanding of thermal-hydraulic and severe accident phenomena, and risk/consequence factors. The principal severe accident design goal for all cases is the demonstration that the containment is preserved as a leak-tight barrier for at least 24 hours. By virtue of the inherent low probability of severe accidents, there is broad diversity in postulated mechanisms that can lead to containment failure. In light of this unique challenge, priority must be established so that meaningful conclusions can be drawn from analysis.

This paper presents a general paradigm for the resolution of severe accident safety issues related to the more likely severe accidents. It is a natural extension of the top-down, bottom-up analysis framework described in the U.S. Nuclear Regulatory Commission Regulatory Guide 1.203 on the Evaluation Methodology Development and Application Process (EMDAP) [1]. The top-down (i.e., requirements) element begins with the identification of safety goals, the documentation of severe accident engineering activities related to addressing these goals, and the identification of important phenomena that relate to the acceptance criteria. The bottom-up (i.e., methodology adequacy) element addresses the many facets of uncertainty management, including test data applicability, code and model development, code verification and validation, uncertainty quantification, human reliability, and the consequence of failure.

This methodology complements PRA activities that address the broader event trees describing all possible scenarios leading to core damage and radiological releases. It addresses the elements stated above and has been applied to AREVA's U.S. EPR design [2].

2. Methodology Description

The objective of an evaluation methodology is to confirm the adequacy of a particular system, structure, or component to reliably and safely perform under phenomenological challenges associated with normal operation, anticipated operational occurrences, postulated accidents, and severe accidents. For an NPP design, the EMDAP addresses the regulatory expectation related to the content of the safety analysis reports that are reviewed and approved by the safety authority. In the US, the industry is guided by the US NRC’s Standard Review Plan and Regulatory Guide 1.206 [3, 4] for preparing safety analysis report content.

In 2005, the US NRC published RG 1.203, describing the structured evaluation model development and assessment process. With its introduction, greater responsibility has been placed on applicants to define the technical basis of design-basis evaluation methodologies, rather than to demonstrate procedural compliance with the elements of 10 CFR 50 (e.g., Appendix K) or the SRP. The EMDAP is considered to be generally applicable to the development of analysis methods for the purpose of evaluating safety issues related to NPP unanticipated transients and accidents. EMDAP starts from the definition of the objectives, the functional requirements, and the identification of important phenomena. Guided by these top-level priorities, code development and assessment follow, ultimately leading to the evaluation model adequacy decision.

With regard to severe accident analysis, the U.S. NRC’s SECY-93-087 [5] provides further clarification of the regulatory expectation by stating that “containment integrity be maintained for approximately 24 hours following the onset of core damage for the more likely severe accident challenges.” The application of an evaluation methodology is not simply analysis, but a suite of activities that guide an acceptability determination. They encompass the breadth of understanding on the subject, beginning with the characterization of perceived risks. This includes the statement of safety goals and identification of the corresponding safety issues. Safety issue resolution begins with a review of the severe accident engineering accomplishments that demonstrate proof of principle, such as the identification of relevant phenomena, the credited test programs, the evolution of analytical techniques, and related ranges of applicability of the conclusions drawn from these activities.

With this foundation, the development of an evaluation methodology can begin in earnest. Safety goals are translated into analysis measures (i.e., figures of merit), process and phenomenological uncertainties are characterized, and calculations are designed to demonstrate the completeness of the design in terms of the expected domain of possibilities. Calculations are performed and conclusions drawn. These activities are illustrated in Figure 1 within four elements for this severe accident safety issue resolution methodology. The nomenclature deviates from that appearing in RG 1.203 by emphasizing the role of an evaluation methodology to address specific regulatory compliance issues and the unique role of risk assessment to inform severe accident analysis. By emphasizing compliance in a separate process element, activities related to managing preexisting knowledge and experience are naturally combined in the second element. Elements 3 and 4 are analogous to elements 3 and 4 of RG 1.203 while uniquely addressing severe accident issues and the role of PRA and severe accident management guidelines.

2.1. Managing Compliance

Managing compliance begins with the identification of the regulatory goals, which emphasize safety. The ultimate NPP safety goal is the protection of the public from uncontrolled release of fission products through a breach in containment following a severe accident. As outlined in WASH-1400 [6], the commonly recognized modes of containment failure following a postulated severe accident are steam explosion, containment bypass, hydrogen explosion, containment overpressurization, and basemat ablation.

These containment failure mechanisms are expected to be resolved through design features providing both preventive and consequence mitigation protection. To benchmark plant safety for new light water reactor designs, the NRC has outlined in SECY-93-087 and SRP Section 19.2 the following acceptance criteria for a plant’s response to severe accidents.

2.1.1. Hydrogen Mitigation
(i) Accommodate hydrogen generation equivalent to a 100% metal-water reaction of the fuel cladding.
(ii) Limit containment hydrogen concentration to below 10%.
(iii) Provide containment-wide hydrogen control.
2.1.2. Core Debris Coolability
(i) Provide reactor cavity floor space to enhance debris spreading.
(ii) Provide a means to flood the reactor cavity to assist in the cooling process.
(iii) Protect the containment liner and other structural members with concrete, if necessary.
(iv) Ensure that the best-estimate environmental conditions (pressure and temperature) resulting from core-concrete interactions do not exceed Service Level C for steel containments or the factored load category for concrete containments, for approximately 24 hours. Also ensure that the containment capability has margin to accommodate uncertainties in the environmental conditions from core-concrete interactions.
2.1.3. High-Pressure Melt Ejection (HPME)
(i) Provide a reliable depressurization system.
(ii) Provide a cavity design that limits the amount of ejected core debris that reaches the upper containment.
2.1.4. Containment Performance
(i) Preserve the containment’s role as a reliable, leak-tight barrier for approximately 24 hours following the onset of core damage under the more likely severe accident challenges.
(ii) Beyond 24 hours, ensure that the containment continues to serve as a barrier against the uncontrolled release of fission products.
2.1.5. Equipment Survivability
(i) Maintain reliability of functions during relevant severe accident scenarios.
2.2. Managing Data and Expertise

While there is still much to learn about the nature of severe accident causes and progression, a rich database of international research and development exists to support the technical basis of an evaluation methodology (see, e.g., [7–64]). In general, these references provide a good compilation and discussion of the experience gained and conclusions drawn from those tests. This information is fed back into the engineering process to (1) refine design features, (2) provide technical bases for resolving safety issues, (3) validate computational tools for production analysis, and (4) define applicability and uncertainty ranges to be considered in performance analyses.

Generation III and IV NPP designs incorporate design features to specifically address the severe accident safety issues identified in Section 2.1. Associated with each of these issues are processes and phenomena expected during a severe accident. Drawing on the general understanding of severe accident initiators and their progression, as developed from research and development, and considering analysis objectives, expert opinion is captured into phenomena identification and ranking tables (PIRTs) [65], many of which appear in the public domain (see, for example, [66, 67]). With severe accidents, there is often the temptation to include sequences having low frequency with potentially high consequence. Such events are best examined individually (see discussion in Section 2.4.3), as the uncertainties associated with these more remote events are, by definition, much more difficult to quantify. Table 1 provides a general list of phenomenological categories that theoretically could appear during a severe accident. The related processes and phenomena must be characterized and understood to make an informed judgment on the merits of a particular design.

2.3. Managing Analytical Capability

Analytical capability relies on a calculation procedure or instruction, the basis of which builds from the complete evaluation methodology. Addressing the completeness and range of applicability of individual analytical models and correlations represents a necessary, but not the only, code/model challenge in demonstrating the adequacy of an analysis tool. Code/model development challenges may begin with the form of the governing equations but, for complex events like a severe accident, will certainly appear with any empiricism introduced for analytical closure.

Beyond code structure, flexible user-defined input presents a nearly endless opportunity for improper nodalization and/or user options. The so-called user effect reflects the consequence of a robust and flexible modeling interface popular within the modeling and simulation community [68]. The most common strategy used to address the user effect is to define a priori restrictions on nodalization and the application of user options that are codified into automation.

Several computer codes have been developed to specifically address severe accident phenomena. Among those developed to support the US industry and address most of the phenomenological categories appearing in Table 1 are MELCOR and SCDAP/RELAP5, sponsored by the U.S. NRC [69, 70], and MAAP, sponsored by the Electric Power Research Institute (EPRI) [71].

While these codes have been developed to address a broad range of severe accident phenomena and the integral relationship of the core, reactor coolant system, and containment, specialty codes are often required to assess unique phenomena, such as corium spreading and stabilization and fission product transport.

The credibility of an evaluation methodology relies on the associated verification and validation through code assessments. Verification is the confirmation that documented statements accurately reflect the coding and the evaluation methodology procedure mechanics, while validation is the demonstration, through comparison with experimental data, that the models adequately represent the phenomena of interest. Verification typically takes the form of a line-by-line review of coding and supporting evaluation documentation, and confirmation of compliance with an approved quality assurance plan. Standard problems, benchmarks with other codes, or analytical exercises with known solutions (e.g., the method of manufactured solutions) are also useful complements to explicit code-to-data comparisons.
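As an illustration of such an analytical verification exercise, the sketch below is a minimal example (not part of the referenced methodology) of the method of manufactured solutions applied to a one-dimensional steady heat conduction solver: a solution is chosen, the corresponding source term is derived analytically, and the discrete solution is confirmed to converge toward it.

```python
import numpy as np

def mms_error(n):
    """Solve -u'' = f on (0,1) with u(0)=u(1)=0, where f is manufactured from u = sin(pi x)."""
    x = np.linspace(0.0, 1.0, n + 1)
    h = 1.0 / n
    f = (np.pi ** 2) * np.sin(np.pi * x[1:-1])                       # manufactured source term
    A = (2.0 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)) / h ** 2
    u = np.linalg.solve(A, f)                                        # discrete solution
    return np.max(np.abs(u - np.sin(np.pi * x[1:-1])))               # max error vs. exact solution

# Error should drop by roughly a factor of four per grid refinement (second-order scheme)
print([mms_error(n) for n in (10, 20, 40)])
```

Recovering the expected convergence rate provides evidence that the discretization behaves as documented, independent of any experimental data.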

Computational tools are validated using an appropriate developmental assessment matrix consisting of both separate and integral effects test program data that address the more important phenomena influencing the figures of merit. The assessment matrix supports the evaluation methodology development in defining nodalization, quantifying code accuracy, and demonstrating any code or model scaling effects. The principal objectives are to demonstrate sufficient accuracy in modeling dominant physical processes (determined from a PIRT), appropriate nodalization, independence of scale effects, and relative insensitivity to compensating errors. Table 2 relates many of the severe accident phenomena appearing in Table 1 to major test programs conducted internationally.

Analysis uncertainty in modeling and simulation has many sources. These include uncertainties associated with the approximate models that describe the underlying physics; with the settings of parameters used in those physical models; with performing a simulation at a given spatial resolution; and with approximations in the numerical algorithms. Uncertainty quantification is the process of characterizing, estimating, propagating, and analyzing all kinds of uncertainty to support decision making.

Because of the investment involved in quantifying uncertainty, the set of uncertainty parameters must be kept manageable and should reflect the outcomes of the preceding evaluation methodology development tasks, in particular, the expertise documented in a PIRT. Data from separate-effects tests are separated into control and validation sets. The control set is used to derive the uncertainty and, as the name implies, the validation set is used to validate the integrity of the uncertainty model. A general uncertainty model is characterized by a bias and a probability density function; such an uncertainty model is not, however, unique to performing a probabilistic analysis. A deterministic analysis can be viewed as one built from uncertainty models consisting of parameter biases. While the bias may simply reflect a static error in a model parameter, it can also be used to define a conservative or bounding treatment based on a limited set of test data.
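A minimal sketch of this control/validation split is shown below using synthetic code-versus-data errors; the data, the two-sigma interval, and the coverage check are illustrative assumptions rather than part of any specific methodology.

```python
import numpy as np

rng = np.random.default_rng(1)
errors = rng.normal(loc=0.08, scale=0.05, size=40)     # synthetic code-minus-data relative errors
control, validation = errors[:30], errors[30:]         # control set and validation set

bias = control.mean()                                  # static bias of the uncertainty model
sigma = control.std(ddof=1)                            # spread of the probability density function
lo, hi = bias - 2.0 * sigma, bias + 2.0 * sigma        # illustrative two-sigma band
coverage = np.mean((validation >= lo) & (validation <= hi))
print(f"bias = {bias:.3f}, sigma = {sigma:.3f}, validation coverage = {coverage:.0%}")
```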

A broad definition of uncertainty quantification includes risk quantification, which is particularly applicable when examining severe accident phenomena. Risk quantification requires a model that takes system response metrics and their uncertainties as input and produces risk metrics and their uncertainties as output. For severe accident analysis, an applicable system response metric could be core damage frequency (CDF). The CDF threshold used to manage the scope of the analysis must have a reasonable technical basis. For Generation III and IV NPP designs, new safety features have driven the CDF very low. As such, a poorly chosen threshold might exclude so many events that the completeness of a severe accident analysis comes into question. Herein lies the concept of “relevant” or “more likely” events, a term that comes from SECY-93-087 and is interpreted to mean that there exists a threshold of relevance below which certain events or combinations of events become so unlikely that detailed analytical consideration is unnecessary. Just as the PIRT provides guidance on important phenomena, the risk metric CDF provides guidance on the risk-relevant scenarios.

2.4. Managing Analysis and End-User Products

Considering the high degree of uncertainty often associated with severe accident progression, the assignment of event studies can be speculative. Accordingly, a threefold strategy for the development of a sufficient calculation matrix is employed, incorporating (i) best-estimate calculations of relevant events, (ii) uncertainty analysis calculations, and (iii) supplemental sensitivity calculations.

An overview of activities involved in preparing these calculations is described in the following subsections.

2.4.1. Best-Estimate Analysis

Best-estimate calculations of the risk-relevant scenarios are included to reveal performance target insights appropriate for a relevant discussion. Such calculations are best-estimate considering both the risk factors leading to a particular event and the subsequent phenomenological progression. These risk-relevant scenarios are identified by incorporating risk information from PRA to select the most probable events that lead to core damage and challenge containment integrity. Regarding the former, deterministic analysis is performed to identify particular initiating events and subsequent failures that lead to the onset of core damage. For licensing purposes, the criterion delineating a severe accident is simply taken as a core state that exceeds the design-basis LOCA regulatory limit on clad temperature, which per US regulations is 2200°F (1204°C). The fidelity of the analytical tool being used in this exercise must be considered; as such, adjustments to that criterion may be necessary (see [72]). Analysis and conclusions from the PRA are expected to provide (1) the demarcation of specific core damage end states (CDESs) and (2) the probability of a particular “risk-relevant” event type leading to a particular CDES.

Table 3 presents common initiating event families considered in a typical PWR level 1 PRA. In practice, not all of these event families can be considered relevant. U.S. NRC’s Regulatory Guide 1.216 on containment structural integrity evaluations describes an acceptable way to identify the more likely severe accident initiators as a suite of sequences or plant damage states that, when ordered by percentage contribution, represent 90 percent or more of the CDF. A CDF threshold associated with that criterion is identified, which becomes the filter for identifying the suite of relevant events or event families.
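The screening described above can be expressed compactly; the sketch below is a minimal illustration (with hypothetical sequence frequencies) of ordering sequences by their CDF contribution and retaining the smallest set that reaches 90 percent of the total.

```python
def select_relevant(cdf_by_sequence, coverage=0.90):
    """Return the sequences that, ordered by contribution, cover at least `coverage` of total CDF."""
    total = sum(cdf_by_sequence.values())
    relevant, cumulative = [], 0.0
    for name, freq in sorted(cdf_by_sequence.items(), key=lambda kv: kv[1], reverse=True):
        relevant.append(name)
        cumulative += freq
        if cumulative / total >= coverage:
            break
    return relevant

# Hypothetical event-family frequencies (per reactor-year), for illustration only
families = {"LOOP with RCP-seal LOCA": 4.0e-7, "Small LOCA, bleed failure": 3.0e-7,
            "Transient, high RCS pressure": 2.0e-7, "Stuck-open SRV": 5.0e-8, "Other": 5.0e-8}
print(select_relevant(families))
```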

CDESs are used by PRA to link the level 1 core damage event trees to the level 2 containment event trees by bringing together core damage sequences with similar characteristics and using those sequences as the initiating event for examining severe accident mitigation and containment failure probability. A selection of CDESs for a typical PWR includes
(i) a high RCS pressure at core damage, such as transient sequences where the bleed valves have not been opened prior to core damage;
(ii) a low RCS pressure at core damage, such as sequences with a stuck-open pressurizer safety/relief valve;
(iii) a loss of offsite power concurrent with a small LOCA (e.g., pump seal);
(iv) sequences initiated by small LOCAs with bleed failure.

PRA considers an expanded list of CDESs and plant damage end states; however, for the purpose of identifying relevant scenarios, the CDESs of interest are those representing a unique RCS condition at the onset of core damage, specifically, the system pressure and the nature of ongoing feed and bleed. Concern about the RCS condition ends when the reactor pressure vessel fails. For these reasons, the four categories of CDESs were viewed as sufficient to cover (1) no RCS boundary failure (high pressure), as would result from the loss of secondary cooling; (2) a TMI-2-like scenario with a stuck pressurizer relief valve; (3) a LOOP with RCP-seal LOCAs; and (4) small (and larger) LOCAs. It should be noted that RCS breaches are addressed at both low system locations, with the RCP-seal and small breaks, and high system locations, with the stuck pressurizer relief valve.

By considering only the relevant event families and binning event families based on similar CDESs, the relevant events and the corresponding event frequencies, presented as percent of total CDF, may be compiled as shown in Table 4. The event frequency data provide the probability of a particular event type given a random relevant severe accident.

2.4.2. Uncertainty Analysis

An objective of the uncertainty analysis is to consider the range of conditions over which severe accidents are most likely to occur. Performance metrics associated with the identified severe accident safety issues (e.g., hydrogen concentrations, containment pressure, corium temperature, RCS pressure at RPV failure) that are sensitive to the many processes and phenomena appearing in Table 1 can be evaluated using this uncertainty analysis. A large number of different code model parameters can be associated with those 18 phenomenological categories. Table 5 presents a mapping from phenomena to code input. The specific model parameters appear in the MAAP4 code; however, analogous model parameters appear in other severe accident codes such as MELCOR. As a practical matter, MAAP4 was not viewed as the preferred code to address all severe accident phenomena. As such, Table 5 only presents the set of phenomena (and corresponding code inputs) well represented by the MAAP4 code. MAAP4 is particularly convenient for this exercise because uncertainty ranges for most of these parameters have been identified by Fauske and Associates, Inc., EPRI’s contractor responsible for code development.

The common method for convolving this uncertainty domain relies on a “Monte-Carlo”-like nonparametric statistical approach. For each analysis code execution, each of the important uncertainty parameters being treated statistically is randomly sampled based on a previously determined probability distribution. Among the model parameters sampled are those that describe the event initiator, which is sampled according to its predicted frequency (i.e., Table 4). Each sampled code calculation can be viewed as the performance of an experiment, with the experimental parameters being the important phenomena and plant process parameters and the result being any appropriately represented performance metric (i.e., based on correlation with uncertainty parameters).
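A minimal sketch of this sampling step appears below; the initiating-event fractions and parameter distributions are hypothetical placeholders, not values from the referenced analysis.

```python
import random

def sample_case(event_fractions, parameter_dists, rng):
    """Draw one code-input set: an initiating event plus one value per uncertain model parameter."""
    events, weights = zip(*event_fractions.items())
    initiator = rng.choices(events, weights=weights, k=1)[0]   # initiator drawn per its CDF fraction
    params = {name: draw(rng) for name, draw in parameter_dists.items()}
    return initiator, params

rng = random.Random(42)
event_fractions = {"LOOP with seal LOCA": 0.45, "Small LOCA": 0.30, "Transient": 0.20, "Stuck-open SRV": 0.05}
parameter_dists = {
    "clad_melt_temp_K": lambda r: r.uniform(2300.0, 2550.0),       # hypothetical range
    "chf_kutateladze": lambda r: r.triangular(0.05, 0.30, 0.14),   # hypothetical range
}
cases = [sample_case(event_fractions, parameter_dists, rng) for _ in range(59)]  # 59 Wilks samples
```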

Based on the results of a suite of 59 sample calculations, the uncertainty domain of any particular performance metric of interest is quantified. The selection of 59 samples is based on the work of Wilks for defining tolerance regions [73, 74]. Following this nonparametric statistical approach, when 59 observations are drawn from an arbitrary, random distribution of outcomes, it can be shown that, with 95 percent confidence, at least 95 percent of all possible observations from that distribution will be less than the largest of the 59 values; that is, this result is the 95/95 tolerance limit. For severe accident evaluation applications, this 95/95 benchmark is assumed to be a sufficient estimation of the total tolerance limit of any particular performance metric used to demonstrate the U.S. EPR severe accident response features.
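For reference, the first-order, one-sided Wilks criterion behind this sample size requires that the probability of the maximum of n samples bounding at least a fraction β of the population be at least γ; with β = γ = 0.95,

\[
1 - \beta^{\,n} \ge \gamma
\quad\Longrightarrow\quad
n \ge \frac{\ln(1-\gamma)}{\ln \beta} = \frac{\ln(0.05)}{\ln(0.95)} \approx 58.4,
\]

so the smallest integer sample size satisfying the 95/95 criterion is n = 59.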

2.4.3. Supplemental Analysis

Supplemental analyses should be performed to complete or complement the best-estimate analysis of the relevant scenarios and the uncertainty analysis. Of particular note are analyses of containment failure probability from HPME and fuel-coolant interactions (both in- and ex-vessel) using parametric models that apply methodologies similar to [75]; combustion loads; source term; and selected low-frequency, high-consequence events. Inputs required for these studies can usually be obtained by extracting bounding values from the uncertainty analysis results.

2.4.4. Methodology Confirmation

The methodology confirmation step serves to confirm methodology assumptions introduced through the PIRT and through the reliance on data derived from scaled experiments. The principal objective is to assure that the important processes of interest are well scaled and, in situations in which scale distortion is evident, to quantify analytical biases that relate to the safety analysis tool’s ability to scale up the important phenomena. In practice, this exercise involves quantifying the degree to which a model input parameter affects a model output variable. Several approaches have been proposed [76–80]; however, variance-based methods are well suited to accompanying a nonparametric best-estimate-plus-uncertainty analysis. Variance-based “importance” analysis is performed by first building a mathematical model and, through a stepwise multiple regression exercise, identifying the set of inputs subject to large variability or uncertainty.
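As a minimal, illustrative sketch (synthetic data; a single standardized regression rather than the full stepwise procedure described above), the inputs from a set of sampled calculations can be ranked by their contribution to the explained output variance:

```python
import numpy as np

def importance_ranking(X, y, names):
    """Fit a least-squares surrogate y ~ X and rank inputs by squared standardized coefficients."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)          # standardize inputs
    ys = (y - y.mean()) / y.std()                      # standardize output
    beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)     # standardized regression coefficients
    share = beta ** 2 / np.sum(beta ** 2)              # relative contribution to explained variance
    return sorted(zip(names, share), key=lambda t: t[1], reverse=True)

rng = np.random.default_rng(0)
names = ["break_size", "clad_melt_temp", "chf_kutateladze"]
X = rng.uniform(size=(59, 3))                                     # 59 sampled input sets (synthetic)
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=59)     # synthetic response metric
print(importance_ranking(X, y, names))
```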

The relevance of this understanding is that improved resolution of importance serves to enhance the credibility of this new best-estimate evaluation methodology through the validation of the engineering judgments and code model scaleup that guide methodology development. In addition, it provides insight into the processes and phenomena that impact key analysis measures and thus limits unnecessary characterization of uncertainty contributors of lesser importance.

2.4.5. PRA and SAMG

For severe accidents, understanding the phenomenological performance of plant design features addressing the prevention and mitigation of a severe accident only partially fulfills the regulations. 10 CFR 50.34(f), presenting the TMI-2-inspired regulatory changes, also introduced the trifecta of phenomenological process studies, PRA, and SAMG as tools for severe accident technical issue resolution. As previously described, phenomenological analysis and PRA have a symbiotic relationship in that the results from one can be used to improve the results of the other. This is also true of SAMG, which adds operator actions to the inherent process and phenomenological uncertainties. PRA and SAMG are both recognized components of ultimately closing severe accident technical issues. Methods and applications for PRA and SAMG are well represented in the existing literature and are not described in this paper.

3. Results

Application of this severe accident issue resolution methodology appears in Section 19.2 of the U.S. EPR FSAR [81]. Specifically, Section 19.2.4 describes the containment performance analysis for the U.S. EPR that meets the regulatory goals presented in Section 2.1.

The containment performance analysis shows that the containment maintains its role as a reliable, leak-tight barrier for at least 24 hours following the onset of core damage for the following severe accident challenges:
(i) hydrogen levels are kept sufficiently low to preclude containment failure by global deflagration and meet the 10 CFR 50.34(f)(2)(ix) requirement that uniformly distributed hydrogen concentrations in the containment do not exceed 10 percent during and following an accident that releases an equivalent amount of hydrogen as would be generated from a 100 percent fuel-clad metal-water reaction;
(ii) the corium is reliably conditioned in the reactor cavity to promote spreadability in the spreading area after melt gate failure; the core melt stabilization system transfers the corium into a coolable geometry within the spreading compartment, thus providing sufficient removal of residual decay heat and long-term stabilization;
(iii) the U.S. EPR design, which incorporates several design features with enhanced preventive response to an HPME, precludes the potential mechanisms for HPME initiation and subsequent direct containment heating;
(iv) design characteristics of the U.S. EPR inherently impede the potential for steam explosion-induced containment failure because the necessary conditions required for steam explosions to exist are avoided;
(v) instrumentation and equipment that are relied upon to mitigate the consequences of a severe accident are qualified for use in beyond-design-basis accident environmental conditions.

In addition, the results of an importance analysis identified 11 uncertainty contributors that dominate U.S. EPR severe accidents. These are summarized in Table 6. The principal phases of a severe accident are well characterized by these 11 uncertainty contributors. The dominant initiating event is the loss of offsite power with pump seal LOCA (i.e., the one capturing the broadest spectrum of process and phenomenological characteristics). Corresponding to that event, the break size parameter appears as an important uncertainty contributor. During the core heatup and degradation phase, metal-water reaction and fuel and control rod melt temperature dominate. Event progression is also seen to be impacted by the timing of the primary depressurization system signal. The ex-vessel phase of the event begins with the lower head failure, sensitive to the lower head damage fraction for failure. Core debris coolability and containment response are obviously sensitive to energy input, which appears as high-temperature corium. Fuel and control rod melt temperatures play a role in setting the initial corium temperature as it enters the reactor cavity. Heat removal occurs primarily in the spreading area, where the flat-plate critical heat flux (CHF) Kutateladze number parameter defines the heat transfer rate from the corium pool. Operation of the many passive autocatalytic recombiners (PARs) mitigates the hydrogen threat and contributes to containment cooling. As such, PAR performance parameters have an observable impact on severe accident progression during this phase.

4. Conclusions

A general paradigm for severe accident safety issue resolution, adopting several principles from EMDAP, as appearing in U.S. NRC Regulatory Guide 1.203, has been described. This methodology provides a thorough report of the severe accident engineering activities for addressing the regulatory expectations for demonstrating severe accident response features. While the major components of severe accident engineering are the credited test programs and corresponding analytical methods, the identification of the necessary analyses involves engineering insights that combine regulation, industry experience, fundamental understanding of thermal-hydraulic and severe accident phenomena, and risk/consequence factors.

This safety issue resolution evaluation methodology was applied to the U.S. EPR for the development of content for its final safety analysis report. The results confirmed the design’s adequacy to address the principal acceptance criteria. The conclusion is that the approach for demonstrating an NPP’s performance during a severe accident is systematic, complete, and comprehensive and will provide sufficient insight for the resolution of severe accident safety issues.

Nomenclature

CDES: Core damage end state
CDF: Core damage frequency
CHF: Critical heat flux
EMDAP: Evaluation methodology development and application process
EPRI: Electric Power Research Institute
FSAR: Final safety analysis report
HPME: High-pressure melt ejection
LBOP: Loss of balance of plant
LOCA: Loss-of-coolant accident
LOMFW: Loss of main feedwater flow
LOOP: Loss of offsite power
MCCI: Molten core-concrete interaction
PIRT: Phenomena identification and ranking table
PAR: Passive autocatalytic recombiner
PDS: Primary depressurization system
PRA: Probabilistic risk assessment
SAMG: Severe accident management guidelines.

Acknowledgments

This paper reflects a similarly titled lecture delivered by the author to the attendees of the Scaling, UNcertainty, and 3D COuPled code calculations seminars (3D S.UN.COP). The 3D S.UN.COP seminars, which provided support for the preparation of this paper, are organized by the University of Pisa’s San Piero a Grado Nuclear Research Group and highlight contemporary insights on topics including system codes, evaluation methodologies, uncertainty quantification, and licensing.