Abstract

Thermal-hydraulic system computer codes are extensively used worldwide for analysis of nuclear facilities by utilities, regulatory bodies, nuclear power plant designers, vendors, and research organizations. The computer code user represents a source of uncertainty that can influence the results of system code calculations. This influence is commonly known as the “user effect” and stems from the limitations embedded in the codes as well as from the limited capability of the analysts to use the codes. Code user training and qualification represent an effective means for reducing the variation of results caused by the application of the codes by different users. This paper describes a systematic approach to training code users who, upon completion of the training, should be able to perform calculations making the best possible use of the capabilities of best estimate codes. In other words, the program aims at contributing towards solving the problem of the user effect. In addition, this paper presents the organization and the main features of the 3D S.UN.COP (scaling, uncertainty, and 3D coupled code calculations) seminars, during which particular emphasis is given to the areas of scaling, uncertainty, and 3D coupled code analysis.

1. Introduction

A wide range of activities has recently been completed in the area of system thermal-hydraulics as a follow-up to considerable research efforts. Problems have been addressed whose solutions have been at least partly agreed upon at the international level. These include the need for best-estimate system codes [1, 2], the general code qualification process [3, 4], the proposal for nodalization qualification, and attempts aiming at qualitative and quantitative accuracy evaluations [5]. Complex uncertainty methods have been proposed, following a pioneering study at the USNRC [6]. This study attempted, among other things, to account for user effects (see Section 2 for a definition) on code results. An international study aiming at the comparison of assumptions and results of code uncertainty methodologies has been completed [7].

More recently (during the period 1997–1999), the IAEA (International Atomic Energy Agency) developed a document consistent with its revised Nuclear Safety Standards Series [8] that provides guidance on accident analysis of nuclear power plants (NPPs). The report includes a number of practical suggestions on the manner in which to perform accident analysis of NPPs. These cover the selection of initiating events, acceptance criteria, computer codes, modeling assumptions, the preparation of input, qualification of users, presentation of results, and quality assurance. The suggestions are both conceptual as well as formal and are based on present practice worldwide for performing accident analysis. The report covers all major steps in performing analyses and is intended primarily for code users.

Within the framework of the “Nuclear Safety Standard Series” the important role of user effects on the analysis has been addressed. The need for user qualification and training has been clearly recognized, and the systematic training of analysts was emphasized as being crucial for the quality of the analysis results. Three areas of training, in particular, have been specified:

(i) practical training on the design and operation of the plant;
(ii) software specific training;
(iii) application specific training.

Training on the phenomena and methodologies is typically provided at the university level, but cannot always be considered sufficient. Furthermore, training on the specific application of system codes is not usually provided at this level, whereas practical training on the design and operation of the plant is essential for the development of the plant models. Software specific training is important for the effective use of the individual code. Application specific training requires the involvement of a strong support group that shares its experience with the trainees and provides careful supervision and review. Training at all three levels, ending with an examination, is encouraged for better effectiveness of the training. Such a procedure is considered a step in the direction of establishing a standard approach that could be applicable on an international basis.

Based on the above considerations and facts, this paper outlines the role of the code user and addresses the problem of the user effect in Section 2, provides a proposal for a permanent training course for system code users in Section 3, and gives a tangible example of a user training course (i.e., 3D S.UN.COP), mostly focused on the development and application of best-estimate codes with emphasis on scaling, uncertainty, and 3D coupled code analyses, in Section 4.

2. Thermal-Hydraulic Codes and Code Users

2.1. Role and Relevance of Code User

The best estimate thermal-hydraulic codes used in the area of nuclear reactor safety have reached a marked level of sophistication. Their capabilities to predict accidents and transients at existing plants have substantially improved over the past years as a result of large research efforts and can be considered satisfactory for practical needs provided that they are used by competent analysts.

Best estimate system codes (RELAP, TRAC, CATHARE, or ATHLET) are currently used by designers/vendors of NPPs, by utilities, licensing authorities, research organizations including universities, nuclear fuel companies, and by technical support organizations. The objectives of using the codes may be quite different, ranging from design or safety assessment to simply understanding the transient behavior of a simple system. However, the application of a selected code must be proven to be adequate for the analysis performed. Many aspects, from the design data necessary to create the input to the selection of the noding solutions and of the code itself, are the user’s responsibility [9–11].

The role of the code user is extremely relevant: experience with a large number of International Standard Problems (ISPs) has shown the dominant influence of the code user on the final results, and the goal of reducing user effects has not been achieved. It has been observed previously that

(i) the user contributes to the overall uncertainty that unavoidably characterizes system code calculation results;
(ii) in the majority of cases, it is impossible to distinguish among uncertainty sources like “user effect,” “nodalization inadequacy,” “physical model deficiencies,” “uncertainty in boundary or initial conditions,” and “computer/compiler effect;”
(iii) “reducing the user effect” or “finding the optimum nodalization” should not be regarded as a process that removes the need to assess the uncertainty.

Performing an adequate code analysis or assessment involves two main aspects.

(1) Code adequacy. The adequacy demonstration process must be undertaken by a code user when a code is used outside its assessment range, when changes are made to the code, and when a code is used for new applications where different phenomena are expected. The impact of these changes must be analyzed and the analyses must be thoroughly reviewed to ensure that the code models are still adequate to represent the phenomena that are being observed.

(2) Quality of results. Historically, the results of code predictions, specifically when compared with experimental data gathered from applicable scaled test facilities, have revealed inadequacies, raising concerns about code reliability and practical usefulness. Discrepancies between measured and calculated values were typically attributed to model deficiencies, approximations in the numerical solutions, computer and compiler effects, nodalization inadequacies, imperfect knowledge of boundary and initial conditions, unrevealed mistakes in the input deck, and to the “user effect.” In several ISPs sponsored by the OECD (Organisation for Economic Co-operation and Development), several users modeled the same experiment using the same code, and the code-calculated results varied widely, regardless of the code used. Some of the discrepancies can be attributed to the code user approach as well as to a general lack of understanding of both the facility and the test.

Both aspects are related to the code user. The first is included in the qualification framework of the code and nodalization; the second is directly related to the user choices, generally referred to as the user effect.

2.2. User Effect

Complex system codes such as RELAP5, CATHARE, TRAC, and ATHLET have many degrees of freedom that allow misapplication (e.g., not using the countercurrent flow-limiting model at a junction where it is required) and errors by users (e.g., inputting an incorrect length for a system component). In addition, even two competent users will not approach the analysis of a problem in the same way and, consequently, will likely take different paths to obtain a problem solution. The cumulative tendency of user community members to produce a range of answers using the same code for a well-defined problem with rigorously specified boundary and initial conditions is the user effect (see Figure 1).

The following are some of the reasons for the user effects.

(i) Code use guidelines are not fully detailed or comprehensive.
(ii) Based on the current state of the art, the actual 3D plant geometries are usually modeled using several 1D zones; these complex 3D geometries are open to different modeling alternatives; as a consequence, an assigned reactor vessel part is modeled differently by different users of the same code. Besides the major one-dimensional code modules, a number of empirical models for system components, such as pumps, valves, and separators, are specified by the users, sometimes based on extrapolation from scaled devices, thereby introducing additional inaccuracies.
(iii) Experienced users may overcome known code limitations by adding engineering knowledge to the input deck.
(iv) Problems inherent to a given code or a particular facility have been dealt with over the years by the consideration and modeling of local pressure drop coefficients, critical flow rate multipliers, or other biases to obtain improved solutions. This has traditionally been done to compensate for code limitations (e.g., application of steady-state qualified models to transient conditions, and lack of validity of the fully developed flow concept in typical nuclear reactor conditions). Furthermore, specific effects such as small bypass flows or the distribution of heat losses might exacerbate the user effect.
(v) An increasing number of users perform analyses with insufficient training. Their lack of understanding of the code capabilities and limitations leads to incorrect interpretation of results. The failure of the user to obtain a stable steady state prior to the initiation of the transient is included in this item.
(vi) A nonnegligible effect on code results comes from the compiler and the computer used to run an assigned code selected by the user; this remains true for very recent code versions.
(vii) Error bands and the values of initial and boundary conditions which are needed as code inputs are not well defined; this ambiguity is used to justify inappropriate model modifications or interpretations of results.
(viii) Analysts lack complete information about facilities before developing input and hence fill the gaps with unqualified data.
(ix) Although the number of user options is thought to be reduced in the advanced codes, for some codes there are several models and correlations for the user to choose from. The user is also required to specify parameters such as pressure loss coefficients, manometric characteristics, efficiencies, and correlation factors which may not be well defined.
(x) Most codes have algorithms to adjust the time step control (e.g., Courant limit) to maximize efficiency and minimize run time. However, users are allowed to change the time step to overcome code difficulties and impose smaller time steps for a given period of the transient. If the particular code uses an explicit numerical scheme, the result will vary significantly with the time step size (a minimal illustration is sketched after this list).
(xi) Quality assurance guidelines should be followed to check the correctness of the values introduced in the input, despite the automatic consistency checks provided by the code.
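To make item (x) concrete, the following toy calculation (our illustration, not code from RELAP or any other system code named here) advects a hot temperature front with an explicit first-order upwind scheme at two different time step sizes. Because the numerical diffusion of an explicit scheme depends on the Courant number u·dt/dx, the two runs smear the front differently even though the physical problem is identical.

```python
# Minimal sketch: explicit first-order upwind advection of a temperature
# front, run at two time step sizes, illustrating that the result of an
# explicit scheme depends on the user-selected time step (item (x)).

def advect(dt, n_cells=100, length=10.0, u=1.0, t_end=5.0):
    dx = length / n_cells
    assert u * dt / dx <= 1.0, "Courant limit violated: u*dt/dx must be <= 1"
    T = [300.0] * n_cells          # initial coolant temperature [K]
    t = 0.0
    while t < t_end:
        T_in = 350.0               # hot front entering at the left boundary
        new = [0.0] * n_cells
        for i in range(n_cells):
            upwind = T_in if i == 0 else T[i - 1]
            new[i] = T[i] - u * dt / dx * (T[i] - upwind)
        T, t = new, t + dt
    return T

coarse = advect(dt=0.09)   # Courant number 0.9
fine = advect(dt=0.01)     # Courant number 0.1
# Numerical diffusion differs with dt, so the front is smeared differently:
print(max(abs(a - b) for a, b in zip(coarse, fine)))
```

The same physical problem, run twice by the same "user" with different time steps, yields visibly different temperature profiles, which is exactly the kind of spread that quality assurance checks on time step sensitivity are meant to expose.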
Typical examples of user and other related effects on code calculations of selected experiments are presented in several CSNI reports (e.g., ISP-25 on the ACHILLES reflooding test; the LOBI natural circulation test; ISP-22 on the SPES loss-of-feedwater test; ISP-26 on the LSTF 5% cold-leg-break loss-of-coolant accident (LOCA); ISP-27 on the BETHSY 2" cold-leg LOCA). Based on these outcomes, different organizations have defined the following general principles in order to reduce user effects.

(i) The misapplication of the system code should be eliminated (or at least reduced) by means of a sufficiently detailed code description and by relevant code user guidelines.
(ii) Errors should be minimized: any analysis of merit should include quality assurance procedures designed to minimize or eliminate errors. In a sense, the misapplication of the system code is itself a certain class of error.
(iii) The user community should preferably use the same computing platform (i.e., the machine round-off errors and treatment of arithmetic operations are assumed the same).
(iv) The system code should preferably be used by a relatively large user community (a large sample size).
(v) The problem to be analyzed should be rigorously specified (i.e., all geometrical dimensions, initial conditions, and boundary conditions should be clearly specified).

Within the defined framework, the user effect can be quantified and is a function of

(i) the flexibility of the system code. An example is the flexibility associated with modeling a system component such as the steam generator: for instance, the TRAC code has a specific component designed to model steam generators, whereas a steam generator model created using RELAP5 is constructed of basic model components such as PIPE and BRANCH; consequently, there are more degrees of freedom available to the user, each requiring a decision, when a RELAP5 steam generator model is being constructed than when a TRAC model of the same component is being defined;
(ii) the practices used to define the nodalization and to ensure that a convergent solution is achieved. In this context, the code validation process, the nodalization qualification, and the qualitative or quantitative accuracy evaluation are necessary steps to reduce the possibility of producing poor code predictions [12, 13].

3. Permanent User Training Course for System Code: The Proposal

As a follow-up to the specialists meeting held at the IAEA in September 1998, the Universities of Pisa and Zagreb and the Jožef Stefan Institute, Ljubljana, jointly presented a proposal to the IAEA for a Permanent Training Course for System Code Users [14]. It was recognized that such a course would represent both a source of continuing education for current code users and a means for code users to enter the formal training structure of a proposed “permanent” stepwise approach to user training.

As a follow-up to the extensive work conducted in different organizations, the need was felt to establish criteria for training code users. As a first step, the kinds of code users and their levels of responsibility for calculation results should be discussed.

3.1. Levels of User Qualification

Two main levels for code user qualification are distinguished in the following:

(i) code user, level “A” (LA);
(ii) responsible for the calculation results, level “B” (LB).

Two levels should be considered among LB code users to distinguish seniority (i.e., Level B, Senior (LBS)). Requisites are detailed hereafter for the LA grade only; these are to be understood as a necessary step (in the future) toward achieving the LB and LBS grades. The main difference between LA and LB lies in the documented experience with the use of a system code; for the LB and LBS grades, this can be fixed at 5 and 10 years, respectively, after achieving the LA grade. In such a context, any calculation having an impact in the sense previously defined must be approved by an LB (or LBS) code user and performed by a different LA or LB (or LBS) code user.

3.2. Requisites for Code User Qualification
3.2.1. LA Code User Grade

The identification of the requisites for a qualified code user derives from the areas and the steps involved in a qualified system code calculation: a system code is one of the codes previously defined, and a qualified calculation in principle includes the uncertainty analysis. The starting condition for an LA code user is a scientist with generic knowledge of nuclear power plants and reactor thermal hydraulics (e.g., in possession of a master's degree in the US, of the “Laurea” in Italy, etc.).

The requisite competencies for the LA grade code user are in the following areas.

(A) Generic code development and assessment processes:
Subarea (A1): conservation (or balance) equations in thermal hydraulics, including definitions like HEM/EVET, UVUT(UP), drift flux, 1D, 3D, 1-field, multifield [2]; conduction and radiation heat transfer; neutron transport theory and neutron kinetics approximations; constitutive (closure) equations including convection heat transfer; special components (e.g., pump, separator); material properties; simulation of nuclear plant and BoP related control systems; numerical methods; the general structure of a system code;
Subarea (A2): developmental assessment; independent assessment, including the Separate Effect Test (SETF) Code Validation Matrix [3] and the Integral Test (ITF) Code Validation Matrix [4]; examples of specific code validation matrices.

(B) Specific code structure:
Subarea (B1): structure of the system code selected by the LA code user: thermal hydraulics, neutronics, control system, special components, material properties, numerical solution;
Subarea (B2): structure of the input; examples of user choices.

(C) Code use - Fundamental Problems (FP):
Subarea (C1): definition of Fundamental Problem (FP): simple problems for which an analytical solution may or may not be available; examples of code results from applications to FPs; different areas of the code must be concerned (e.g., neutronics, thermal hydraulics, and numerics);
Subarea (C2): the LA code user must deeply analyze at least three specified FPs, searching for and characterizing the effects of nodalization details, time step selection, and other code-specific features; the tasks are to develop a nodalization starting from a supplied database or problem specifications; to run a reference test case; to compare the results of the reference test case with data (experimental data, results of other codes, analytical solutions), if available; to run sensitivity calculations; and to produce a comprehensive calculation report (having an assigned format).

(D) Basic Experiments and Test Facilities (BETF):
Subarea (D1): definition of Basic Experiments and Test Facilities (BETF): research aiming at the characterization of an individual phenomenon or of an individual quantity appearing in the equations implemented in the code, not necessarily connected with the NPP; examples of code results from applications to BETFs;
Subarea (D2): the LA code user must deeply analyze at least two selected BETFs, searching for and characterizing the effects of nodalization details, time step selection, errors in boundary and initial conditions, and other code-specific features.

(E) Code use - Separate Effect Test Facilities (SETF):
Subarea (E1): definition of Separate Effect Test Facility (SETF): a test facility where a component (or an ensemble of components) or a phenomenon (or an ensemble of phenomena) of the reference NPP is simulated; details about scaling laws and design criteria; examples of code results from applications to SETFs;
Subarea (E2): the LA code user must deeply analyze at least one specified SETF experiment, searching for and characterizing the effects of nodalization details, time step selection, errors in boundary and initial conditions, and other code-specific features.

(F) Code use - Integral Test Facilities (ITF):
Subarea (F1): definition of Integral Test Facility (ITF): a test facility where the transient behavior of the entire NPP is addressed; details about scaling laws and design criteria; details about existing (or dismantled) ITFs and related experimental programs; ISP activity; examples of code results from applications to ITFs;
Subarea (F2): the LA code user must deeply analyze at least two specified ITF experiments, searching for and characterizing the effects of nodalization details, time step selection, errors in boundary and initial conditions, and other code-specific features.

(G) Code use - Nuclear Power Plant transient data:
Subarea (G1): description of the concerned NPP and of the relevant (to the concerned NPP and calculation) BoP and ECC systems; examples of code results from applications to NPPs;
Subarea (G2): the LA code user must deeply analyze at least two specified NPP transients, searching for and characterizing the effects of nodalization details, time step selection, errors in boundary and initial conditions, and other code-specific features.

(H) Uncertainty methods, including concepts like nodalization, accuracy quantification, and user effects.

Description of the available uncertainty methodologies. The LA code user must be aware of the state of the art in this field.

3.2.2. LB Code User Grade

A qualified user at the LB grade must be in possession of the same expertise as the LA grade and

(I) he must have documented experience in the use of system codes of at least 5 additional years;
(J) he must know the fundamentals of reactor safety, operation, and design, having generic expertise in the area of application of the concerned calculation;
(K) he must be aware of the use and of the consequences of the calculation results; this may imply knowledge of the licensing process.

3.2.3. LBS Code User Grade

A qualified user at the LBS grade must be in possession of the same expertise as the LB grade and

(L) he must have further documented experience in the use of system codes of at least 5 additional years.

Moreover, the LBS code user is responsible for documenting user guidelines and methodology descriptions, and for providing technical leadership in R&D activities.

3.3. Course Conduct and Modalities for the Achievements of Code User Grades

The training of the code user requires the conduct of lectures, practical on-site exercises, homework, and examination, while for the senior code user, only a review of documented experience and on-site examination is foreseen. The code user training, including practical exercises which represent an essential part of the course, lasts two years and covers the areas from (A) to (H).

The modalities defined in Table 1 are necessary to achieve the LA, LB (5 years after the LA grade), and LBS (5 years after achieving the LB grade and following the demonstration of performed activity in the 5-year period) grades.

3.4. Training Exercises

Practical exercises foreseen during the training include development of the nodalization from the pre-prepared database with problem specifications. To this end, educational material and presentations/lectures on the exercise will be provided with a detailed explanation of the objectives of the work that the trainee must perform. Extensive application of the code by the trainee at his own institution following detailed recommendations and under the supervision of the course lecturers is foreseen as “homework.” The use of the code at the course venue is foreseen for the following applications:

(i) fundamental problems, including nodalization development;
(ii) basic test facilities and related experiments, including nodalization development;
(iii) SETFs and related experiments, including nodalization development;
(iv) ITF experiments with nodalization modifications; and
(v) NPP transients, including nodalization modifications.

For each of the above cases, the trainee will be required to

(1) develop (or modify) a nodalization starting from the database or problem specifications provided;
(2) run the reference test case;
(3) compare the results of the reference test case with data (experimental data, results of other codes, and analytical solutions);
(4) run sensitivity calculations;
(5) produce a comprehensive calculation report following a prescribed format, whereby the report should include, for example,
(a) the description of the particular facility;
(b) the description of the experiment (including relevance to scaling and relevance to safety);
(c) the modalities for developing (or modifying) the nodalization;
(d) the description and use of nodalization qualification criteria for steady-state and transient calculations;
(e) the qualitative and quantitative accuracy evaluation;
(f) the use of thresholds for the acceptability of results for the reference case;
(g) the planning and analysis of the sensitivity runs; and
(h) an overall evaluation of the activity (code capabilities, nodalization adequacy, scaling, impact of the results on the safety and the design of the NPP, etc.).

3.5. Examination

On-site examination at different stages during the course is considered a condition for the successful completion of the code user training. The homework that the candidate must complete before attempting the on-site examination includes

(A) studying the material/documents supplied by the course organizers and
(B) solving the problems assigned by the course organizers.

This also involves the preparation of suitable reports that must be approved by the course organizers. The on-site tests consist of four main steps that include the evaluation of the reports prepared by the candidate, answering questions on the reports and course subjects, and demonstrating the capability to work with the selected code. Each step must be accomplished before proceeding to the subsequent one.

4. 3D S.UN.COP Seminars: Follow-up of the Proposal

4.1. Background Information about 3D S.UN.COP Trainings

The 3D S.UN.COP (Scaling, Uncertainty, and 3D coupled code calculations) training aims to transfer competence, knowledge, and experience from recognized international experts in the area of scaling, uncertainty, and 3D coupled code calculations in nuclear reactor safety technology to analysts with a suitable background in nuclear technology.

The training (http://dimnp.ing.unipi.it/3dsuncop) is open to research organizations, companies, vendors, industry, academic institutions, regulatory authorities, national laboratories, and so on. The seminar is in general subdivided into three parts, and participants may choose to attend a one-, two-, or three-week course. The first week is dedicated to background information, including the theoretical bases for the proposed methodologies; the second week is devoted to the practical application of the methodologies and to hands-on training on numerical codes; the third week is dedicated to the user qualification problem through hands-on training for advanced users and includes a final exam. From the point of view of the conduct of the training, the weeks are characterized by lectures, code-expert teaching, and hands-on applications. More than thirty scientists are in general involved in the organization of the seminars, presenting theoretical aspects of the proposed methodologies and holding the training and the final examination. A certificate of qualified code user is issued to participants who successfully solve the assigned problems during the exams.

The framework in which the 3D S.UN.COP seminars have been designed may be derived from Figure 2, where the roles of two main international institutions (OECD and IAEA) and of the US NRC (here standing in for the regulatory body of any country) in addressing the problem of the user effect are outlined together with the proposed programs and produced documents. Figure 3 depicts how the 3D S.UN.COP ensures nuclear technology maintenance and advancement through the qualification of personnel in regulatory bodies, research activities, and industries, by means of teaching by well-known scientists belonging to the same types of institutions.

Seven training courses have been organized up to now and were successfully held at

(i) The University of Pisa (Pisa, Italy), 5–9 January 2004 (6 participants);
(ii) The Pennsylvania State University (University Park, PA, USA), 24–28 May 2004 (15 participants);
(iii) The University of Pisa (Pisa, Italy), 14–18 June 2004 (11 participants);
(iv) The University of Zagreb (Zagreb, Croatia), 20 June–8 July 2005 (19 participants);
(v) The Technical University of Catalonia (Barcelona, Spain), 23 January–10 February 2006 (33 participants);
(vi) The “Autoridad Regulatoria Nuclear (ARN),” the “Comisión Nacional de Energía Atómica (CNEA),” the “Nucleoelectrica Argentina S.A (NA-SA),” and the “Universidad Argentina De la Empresa” (Buenos Aires, Argentina), 2 October–14 October 2006 (37 participants); and
(vii) The Texas A&M University (College Station, Texas, USA), 22 January–9 February 2007 (26 participants).

4.2. Objectives and Features of the 3D S.UN.COP Seminar Trainings

The main objective of the seminar activity is the training in safety analysis of analysts with a suitable background in nuclear technology. The training is devoted to promoting the use of international guidance and to homogenizing the approach to the use of computer codes for accident analysis. The main objectives are

(i) to transfer knowledge and expertise in uncertainty methodologies, thermal-hydraulic system codes, and 3D coupled code applications;
(ii) to diffuse the use of international guidance;
(iii) to homogenize the approach in the use of computer codes (like RELAP, TRACE, CATHARE, ATHLET, CATHENA, PARCS, RELAP/SCDAP, MELCOR, and IMPACT) for accident analysis;
(iv) to disseminate the use of standard procedures for qualifying thermal-hydraulic system code calculations (e.g., through the application of the UMAE “uncertainty methodology based on accuracy extrapolation” [15]);
(v) to promote best estimate plus uncertainty (BEPU) methodologies in thermal-hydraulic accident analysis through the presentation of current industrial applications [16–20] and the description of the theoretical aspects of the deterministic and statistical uncertainty methods, as well as the method based upon the propagation of output errors (called CIAU, “code with the capability of internal assessment of uncertainty” [21, 22]);
(vi) to spread available robust approaches based on the BEPU methodology in the licensing process;
(vii) to address and reduce user effects; and
(viii) to provide a meeting point for exchanges of ideas among the worlds of academia, research laboratories, industry, regulatory authorities, and international institutions.

The main features of the seminar course are identified as follows.

(i) The practical use of a mix of different codes. The use of different codes is worthwhile to establish a common basis for code assessment and for the acceptability of code results.
(ii) The exam. In past courses, the exam was (very) well accepted by code users. The exam gives them the possibility to show their expertise and to demonstrate the effort made during the course.
(iii) The practical use of procedures for nodalization qualification. Standardized techniques for developing and qualifying nodalizations (i.e., inputs) can be directly applied in the participants' institutions.
(iv) The practical use of procedures for accuracy quantification. The availability of methodologies and tools for evaluating the accuracy (i.e., the discrepancy between experimental and calculated data) qualitatively and quantitatively constitutes a key point for the acceptability of the code results.
(v) The “joining” of BE codes and uncertainty evaluation. The use of the BEPU methodology within the licensing process is worthwhile for predicting more “realistic” results and for demonstrating the existence of larger safety margins.
(vi) The large participation of very well-known international experts. The establishment, integrity, and use of international guidance are promoted through lectures presented by top-level scientists coming from different institutions and countries.

4.3. Scientific and Technological Areas Presented at the 3D S.UN.COP

As the acronym 3D S.UN.COP implies, the following three scientifically relevant areas for nuclear technology are addressed during the course.

(1) Scaling analysis.
(2) Best estimate plus uncertainty analysis.
(3) Three-dimensional coupled code analysis.

Brief descriptions of each topic are given hereafter.

4.3.1. Scaling Analysis

Scaling is a broad term used in nuclear reactor technology, as well as in basic fluid dynamics and in thermal hydraulics. In general terms, scaling indicates the process of transferring information from a model to a prototype. The model and the prototype are typically characterized by different geometric dimensions, different materials (including working fluids), and different ranges of variation for the thermal-hydraulic quantities.

Therefore, the word “scaling” may have different meanings in different contexts. In system thermal hydraulics, a scaling process, based upon suitable physical principles, aims at establishing a correlation between (a) phenomena expected in a NPP transient scenario and phenomena measured in smaller scale facilities or (b) phenomena predicted by numerical tools qualified against experiments performed in small scale facilities (in connection with this point, owing to limitations of the fundamental equations at the basis of system codes, the scaling issue may constitute an important source of uncertainties in code applications and may envelop various “individual” uncertainties).

Three main objectives can be associated to the scaling analysis:

(i) the design of a test facility,
(ii) code validation, that is, the demonstration that the code accuracy is scale independent,
(iii) the extrapolation of experimental data (obtained in an ITF) to predict the NPP behavior.

In order to address the scaling issue, different approaches have been historically followed:

(i) fluid balance equations, deriving nondimensional parameters by adopting the Buckingham theorem,
(ii) semi-empirical mechanistic equations, deriving nondimensional parameters,
(iii) performing experiments at different scales (a very expensive approach that may not be totally exhaustive),
(iv) developing, qualifying, and applying codes that show their capabilities at different scales.

The first item recalls a typical approach based on a theorem (applied also to solve heat transfer problems) for determining the number of independent nondimensional groups needed to describe a phenomenon. It states that a physical relationship among n variables, which can be expressed in a minimum of m dimensions, can be rearranged into a relationship among n − m independent dimensionless groups of the original variables. Buckingham called the dimensionless groups pi-groups and identified them as Π_1, Π_2, ..., Π_{n−m}. Thus a dimensional functional equation f(x_1, x_2, ..., x_n) = 0 reduces to a dimensionless functional equation of the form F(Π_1, Π_2, ..., Π_{n−m}) = 0.

The second item implies the definition of nondimensional parameters derived from relationships that link some dependencies in an empirical way, for example, from consideration of experimental evidence. Again, dimensionless groups are defined similar to the pi-groups. It should be noted that the relationships defined in this approach are valid over a restricted range; thus the dimensionless parameters are also affected by this limitation.
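As a worked illustration of the theorem (a standard textbook case, not an example taken from the course material), consider the pressure drop in a straight pipe: six variables expressed in three base dimensions (mass, length, time) reduce to 6 − 3 = 3 dimensionless groups.

```latex
% Standard worked example of the Buckingham theorem (our illustration):
% pressure drop \Delta p in a straight pipe, n = 6 variables, m = 3 base
% dimensions (M, L, T), hence n - m = 3 pi-groups.
\[
  f(\Delta p,\ \rho,\ u,\ D,\ L,\ \mu) = 0
  \;\Longrightarrow\;
  F(\Pi_1,\ \Pi_2,\ \Pi_3) = 0,
\]
\[
  \Pi_1 = \frac{\Delta p}{\rho u^{2}} \ \text{(Euler number)}, \qquad
  \Pi_2 = \frac{\rho u D}{\mu} \ \text{(Reynolds number)}, \qquad
  \Pi_3 = \frac{L}{D}.
\]
```

Scaling a test facility then amounts to preserving the governing pi-groups between model and prototype rather than the individual dimensional variables.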

Performing experiments at different scales (the third item) might be a way to solve the scaling problem, but, firstly, a large number of experiments would have to be conducted to cope with the wide range of the scaling factor and, secondly, the experimental results are affected by peculiarities related to the typical dimensions of a test rig at a given scale.

The last proposal for solving the scaling problem (the fourth item) is to accept all the limitations noted above, to develop a system code, to qualify it against experimental data, to prove that its accuracy is scale independent, and to apply such a code to predict the relevant phenomena that are expected in the same experiment (or transient) performed at a different scale.

4.3.2. Best-Estimate Plus Uncertainty Analysis

In the past, large uncertainties in the computer models used for nuclear power system design and licensing were compensated for by using highly conservative assumptions. The loss-of-coolant accident (LOCA) evaluation model is one of the main examples of this approach. Conservative analysis was introduced in the 1970s to cover uncertainties arising from the limited level of knowledge, and it is based on the variation of key components of the safety analysis (computer code, availability of components and systems, and initial and boundary conditions) in a way leading to pessimistic results relative to specified acceptance criteria. However, the results obtained by this approach may be misleading (e.g., unrealistic behavior may be predicted or the order of events may be changed), and this typically leads to unphysical results. In addition, significant economic penalties, not necessarily commensurate with the safety benefits, may result as a consequence of the unknown level of conservatism used. In conclusion, the use of this approach is no longer recommended (e.g., in [23]), although it is still mandatory in the USA for methodologies referencing Appendix K of US NRC 10 Code of Federal Regulations 50 (10 CFR 50) [24], and today the application of “realistic” code methods rather than “conservative” approaches can be identified.

By definition, a best estimate (BE) analysis (the term “best estimate” is usually used as a substitute for “realistic”) is an accident analysis which is free of deliberate pessimism regarding selected acceptance criteria and is characterized by applying best estimate codes along with nominal plant data and with best estimate initial and boundary conditions. However, notwithstanding the important achievements and progress made in recent years, the predictions of best estimate system codes are not exact but remain uncertain for the following reasons [7].

(i) The assessment process depends upon data almost always measured in small scale facilities and not in full power reactors.
(ii) The models and the solution methods in the codes are approximate: in some cases, fundamental laws of physics are not considered.

Consequently, the results of the code calculations may not give exact information on the behavior of a NPP during postulated accident scenarios. Therefore, best estimate predictions of NPP scenarios must be supplemented by proper uncertainty evaluations in order to be meaningful. The term “best estimate plus uncertainty” (BEPU) was coined to indicate an accident analysis which

(1) is free of deliberate pessimism regarding selected acceptance criteria,
(2) uses a BE code, and
(3) includes uncertainty analysis.

Thus the word “uncertainty” and the need for uncertainty evaluation are strictly connected with the use of BE codes, and at least the following three main reasons for the use of uncertainty analysis can be identified.

(i) Licensing and safety: if calculations are performed in a best estimate fashion with quantification of uncertainties, a “relaxation” of licensing rules is possible and more realistic estimates of NPPs' safety margins can be obtained.
(ii) Accident management: the estimate of code uncertainties may also have potential for improvements in emergency response guidelines.
(iii) Research prioritization: the uncertainty analysis can help to identify correlations and code models that need the most improvement (code development and validation become more cost effective); it also shows what kinds of experimental tests are most needed.

Development of the BEPU approach has spanned nearly the last three decades. The international project on the evaluation of various BEPU methods, the uncertainty methods study (UMS), conducted under the administration of the OECD/NEA [7] during 1995–1998, already concluded that the methods are suitable for use under different circumstances and that uncertainty analysis is needed if useful conclusions are to be obtained from best estimate codes. Similar international projects are in progress under the administration of the OECD/NEA (BEMUSE: best estimate methods uncertainty and sensitivity evaluation [25]) and the IAEA (Coordinated Research Project on investigation of uncertainties in best estimate accident analyses) to evaluate the practicability, quality, and reliability of BEPU methods.
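For the statistical methods in this family (e.g., the GRS methodology [27] presented during the seminars), the number of code runs is commonly fixed by Wilks' formula, which gives the smallest sample size whose extreme values bound a required fraction of the output population at a required confidence level. The following sketch (our illustration; the function names are hypothetical) computes the standard sample sizes.

```python
# Minimal sketch of the Wilks sample-size calculation used by GRS-type
# statistical uncertainty methods: smallest number of code runs n such
# that the sample extremes bound the requested population coverage with
# the requested confidence.

def wilks_one_sided(coverage=0.95, confidence=0.95):
    # P(max of n runs exceeds the coverage quantile) = 1 - coverage**n
    n = 1
    while 1.0 - coverage**n < confidence:
        n += 1
    return n

def wilks_two_sided(coverage=0.95, confidence=0.95):
    # Two-sided first-order formula using both sample extremes
    n = 2
    while 1.0 - coverage**n - n * (1.0 - coverage) * coverage**(n - 1) < confidence:
        n += 1
    return n

print(wilks_one_sided())  # 59 runs for a 95%/95% one-sided tolerance limit
print(wilks_two_sided())  # 93 runs for a 95%/95% two-sided tolerance limit
```

Notably, the run count depends only on the requested coverage and confidence, not on the number of uncertain input parameters, which is what makes the approach tractable for complex system codes.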

Notwithstanding the above considerations, it is necessary to note that the selection of a BEPU analysis in place of a conservative one depends upon a number of conditions that are external to the analysis itself. These include the available computational tools, the expertise inside the organization, the availability of suitable NPP data (e.g., the amount of data and the related details can be quite different in the cases of best estimate and conservative analyses), and the requests from the national regulatory body (e.g., in the US licensing process, the BEPU approach was formulated as an alternative to the Appendix K conservative approach defined in [24] to reflect the improved understanding of Emergency Core Cooling System (ECCS) performance obtained through extensive research [1, 26]). In addition, conservative analyses are still widely used to avoid the need to develop realistic models based on experimental data or simply to avoid the burden of changing approved codes and/or the approaches or procedures used to obtain the license.

4.3.3. Three-Dimensional Coupled Code Analysis

The increased computing power of presently available computer systems is making it possible to couple large codes that have been developed to meet specific needs, such as three-dimensional neutronics calculations for partial anticipated transients without scram (ATWS), with computational fluid dynamics codes (e.g., to study mixing in three dimensions, particularly for passive emergency core cooling systems) and with other computational tools. The range of software packages that it is desirable to couple with advanced thermal-hydraulic system analysis codes includes

(i) multidimensional neutronics,
(ii) multidimensional computational fluid dynamics (CFD),
(iii) containment,
(iv) structural mechanics,
(v) fuel behavior, and
(vi) radioactivity transport.

There are many techniques for coupling advanced codes. In essence, the coupling may be either loose (meaning the two or more codes only communicate after a number of time steps) or tight, such that the codes update one another from time step to time step. Whether a loose coupling or a tight coupling is required depends on the phenomena that are being modeled and analyzed. For example, the need to consider heat transferred between the primary fluid and the secondary fluid during a relatively slow transient does not require close coupling, and thus the codes of interest do not have to communicate time step by time step. In contrast, the behavior of fluid moving through the core region, where a portion of the core is modeled in great detail using a CFD code while the remainder of the core is modeled using a system analysis code, would require tight coupling if the two codes were linked, since dramatic changes may occur during a NPP transient. Indeed, since CFD codes generally do not have the capability to model general system behavior, due to the exceedingly large computer resource requirements, the only means to update a CFD analysis of a somewhat rapid transient in an NPP core region is via close coupling with a system analysis code used to model the NPP system. Thus the system analysis code provides boundary conditions to the CFD code if such an analysis need is identified (a minimal sketch of the two coupling modes follows).
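The sketch below (toy stand-in solvers, not the API of any real system or CFD code) illustrates the control flow that distinguishes the two modes: with tight coupling the boundary conditions shared between the codes are refreshed every time step, while with loose coupling they are refreshed only every N steps, which is acceptable only when the exchanged quantities evolve slowly.

```python
# Illustrative sketch only: loose versus tight coupling between a system
# code and a CFD code, using toy stand-in solvers.

class ToySystemCode:
    """Stand-in for a system code: tracks a slowly drifting inlet temperature."""
    def __init__(self):
        self.t_inlet = 550.0                  # K
    def advance(self, dt):
        self.t_inlet += 5.0 * dt              # the transient heats the inlet
    def boundary_conditions(self):
        return {"t_inlet": self.t_inlet}

class ToyCfdCode:
    """Stand-in for a CFD code: relaxes a core region toward the inlet BC."""
    def __init__(self):
        self.t_core = 550.0                   # K
    def advance(self, dt, bc):
        self.t_core += (bc["t_inlet"] - self.t_core) * dt

def run_coupled(dt, n_steps, exchange_every):
    """exchange_every=1 gives tight coupling; larger values give loose coupling."""
    sys_code, cfd_code = ToySystemCode(), ToyCfdCode()
    bc = sys_code.boundary_conditions()
    for step in range(n_steps):
        sys_code.advance(dt)
        if (step + 1) % exchange_every == 0:
            bc = sys_code.boundary_conditions()   # refresh the shared BC
        cfd_code.advance(dt, bc)
    return cfd_code.t_core

tight = run_coupled(dt=0.01, n_steps=1000, exchange_every=1)
loose = run_coupled(dt=0.01, n_steps=1000, exchange_every=200)
print(tight - loose)   # the lag introduced by loose coupling
```

For a fast core transient the lag printed above would be unacceptable, which is why the text calls for tight coupling in that case; for slow primary-to-secondary heat transfer the cheaper loose exchange suffices.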

4.4. The Structure of the 3D S.UN.COP

The seminar is subdivided into three main parts, each one with a program to be developed in one week. Alternating between lectures, computer work, and model discussion has proven useful for maintaining participant interest at a high level. The duration of the individual sessions varied substantially according to the complexity of the subjects and the training needs of the participants.

(i) The first week (titled “fundamental theoretical aspects”) is fully dedicated to lectures describing the concepts of the proposed methodologies. The following technical sessions (with more than 40 lectures) are presented covering the main topics hereafter listed.

(a) Session I: System codes: evaluation, application, modeling, and scaling
(1) models and capabilities of system code models,
(2) development process of generic codes and developmental assessment,
(3) scaling of thermal-hydraulic phenomena,
(4) separate and integral test facility matrices.
(b) Session II: International standard problems
(1) lessons learned from OECD/CSNI ISPs,
(2) characterization and results of some ISPs.
(c) Session III: Best estimate in system code applications and uncertainty evaluation
(1) IAEA safety standards,
(2) origins of uncertainty,
(3) approaches to calculate uncertainty,
(4) user effect,
(5) evaluation of safety margins using BEPU methodologies,
(6) international programs on uncertainty (UMS [7] and BEMUSE [25]).
(d) Session IV: Qualification procedures
(1) qualifying, validating, and documenting input,
(2) the features of the UMAE methodology,
(3) description and use of nodalization qualification criteria for steady-state and transient calculations,
(4) use of thresholds for the acceptability of results for the reference case,
(5) qualitative accuracy evaluation,
(6) quantitative accuracy evaluation by the fast Fourier transform based method (FFTBM; a minimal sketch of its underlying figure of merit is given after this list).
(e) Session V: Methods for sensitivity and uncertainty analysis
(1) GRS statistical uncertainty methodology [27],
(2) CIAU method for uncertainty evaluation,
(3) adjoint sensitivity analysis procedure (ASAP) and global adjoint sensitivity analysis procedure (GASAP) for sensitivity analysis [28, 29],
(4) comparison of uncertainty methods with the code scaling, applicability, and uncertainty (CSAU) evaluation methodology [6].
(f) Session VI: Relevant topics in the best estimate licensing approach
(1) best estimate approach in the licensing process in several countries (e.g., Brazil, Germany, US, etc.).
(g) Session VII: Industrial applications of the best estimate plus uncertainty methodology
(1) Westinghouse realistic large break LOCA methodology [16],
(2) AREVA realistic accident analysis methodology [17],
(3) GE technology for establishing and confirming uncertainties [18],
(4) best estimate and uncertainty (BEAU) for CANDU reactors [19],
(5) UMAE/CIAU application to the Angra-2 licensing calculation [20].
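As a pointer to what the FFTBM of Session IV quantifies (a simplified sketch: the full method adds frequency weighting and acceptability thresholds not reproduced here), its central figure of merit is the average amplitude AA, the ratio between the spectrum of the calculation error and the spectrum of the experimental signal; AA close to zero indicates an accurate calculation.

```python
# Simplified sketch of the FFTBM "average amplitude" AA (the complete
# method also uses a weighted frequency and acceptability thresholds):
# AA = sum|FFT(calc - exp)| / sum|FFT(exp)|.

import numpy as np

def average_amplitude(exp, calc):
    """AA ~ 0 means the calculation closely reproduces the experiment."""
    err_spectrum = np.abs(np.fft.rfft(np.asarray(calc) - np.asarray(exp)))
    exp_spectrum = np.abs(np.fft.rfft(np.asarray(exp)))
    return err_spectrum.sum() / exp_spectrum.sum()

# Toy signals standing in for a measured and a calculated pressure trace:
t = np.linspace(0.0, 100.0, 1000)
exp = 15.0 * np.exp(-t / 40.0)          # "experimental" depressurization [MPa]
calc = 15.0 * np.exp(-t / 37.0) + 0.1   # slightly faster, biased calculation
print(average_amplitude(exp, calc))     # small value -> good agreement
```

Working in the frequency domain makes the measure insensitive to where along the transient the discrepancy occurs, which is one reason the method is used for the quantitative accuracy evaluation of whole time histories.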

(ii) The second week (titled “Practical Applications and Hands-on Training”) is devoted to lectures on the practical aspects of the proposed methodologies and to the hands-on training on numerical codes like ATHLET, CATHARE, CATHENA, RELAP5 USNRC, RELAP5-3D, TRACE, PARCS, RELAP/SCDAP, and IMPACT. The following technical sessions are presented covering the main topics hereafter listed.

(a) Session I: Coupling methodologies
(1) cross-section generation: models and applications,
(2) coupling 3D neutron-kinetics/thermal-hydraulic codes (3D NK-TH),
(3) uncertainties in basic cross-sections,
(4) CIAU extension to 3D NK-TH.
(b) Session II: Coupled code applications
(1) PWR-BWR-WWER analyses,
(2) BWR stability issue,
(3) WWER containment modeling,
(4) system boron transport, boron mixing, and validation.
(c) Session III: CIAU/UMAE applications
(1) key applications of the CIAU methodology,
(2) examples of code results from applications to ITFs (LOFT, LOBI, BETHSY) and to NPPs (PWR-type and WWER-type),
(3) “PSB Facility” counterpart test,
(4) bifurcation study with CIAU,
(5) CIAU software.
(d) Session IV: Computational fluid dynamics codes
(1) the role and the structure of CFD codes,
(2) CFD simulation in nuclear applications: needs and applications.

Each of the parallel hands-on trainings on numerical codes consists of about 20 hours and covers the following main topics:

(1) structure of the specific codes,
(2) numerical methods,
(3) description of input decks,
(4) description of fundamental analytical problems,
(5) analysis and code hands-on training on fundamental problems (e.g., for RELAP5, the proposed fundamental problems deal with a boiling channel, the blow-down of a pressurized vessel, and pressurizer behavior),
(6) examples of code results from applications to ITFs (LOFT, LOBI, BETHSY).

(iii) The third week (titled “Hands-on Training for Advanced Users and Final Examination”) is designed for advanced users, addressing the user effect problem. The participants are divided into groups of three and each group receives the training from one teacher. The applications of the proposed methodologies (UMAE, CIAU, etc.) are illustrated through the BETHSY ISP 27 (small break LOCA) and LOFT L2-5 (large break LOCA) tests. Applications and exercises using several tools (RELAP5, WinGraf, FFTBM, UBEP, CIAU, etc.) are considered. The following main topics are covered:

(1) modalities for developing (or modifying) the nodalization,
(2) plant accident and transient analyses,
(3) examples of code results from applications to NPPs (PWR-type and VVER-type), and
(4) code hands-on training through the application of system codes to ITFs (LOFT and BETHSY).

A final examination on the lessons learned during the seminar is designed and consists of three parts.

(i) Written part: questions about the topics discussed during the seminar are proposed and assigned both to each participant and to each group.
(ii) Application part: two types of problems are proposed, to the single participant and to the group, respectively.
(1) Detection of a simple input error: each participant receives the experimental data of the selected transient, the correct RELAP5 nodalization input deck, and the restart file of a wrong input deck containing one simple input error. Each participant must identify the error.
(2) Detection of a complex input error: each group receives the experimental data of the selected transient, the correct RELAP5 nodalization input deck, and the restart file of a wrong input deck containing one complex input error. Each group must identify the error.
Evaluation reports are submitted in written form containing short notes about the reasons for the differences between the results of the reference calculation and the results from the “modified” nodalization. At least one of the two problems must be correctly solved to obtain the certificate.
(iii) Final discussion: each participant takes an oral examination, discussing his or her own results (or the results obtained by his or her group) with the examiners. General questions related to the lectures presented during the three-week seminar are put to the participants.

A certificate of type “LA Code User Grade” (see Table 1), like the one depicted in Figure 4, is issued to participants who successfully solved the assigned problems.

4.5. 3D S.UN.COP 2007 at Texas A&M University (Texas, USA)

The 3D S.UN.COP 2007 was successfully held at the Texas A&M University (Texas, USA) from January 22nd to February 9th with the attendance of 26 participants coming from 12 countries and 17 different institutions (universities, vendors, national laboratories, and regulatory bodies). About 30 scientists (from 11 countries and 19 different institutions) were involved in the organization of the seminar, presenting theoretical aspects of the proposed methodologies and holding the training and the final examination. More details may be found in Table 2.

All the participants achieved a basic capability to set up, run, and evaluate the results of a thermal-hydraulic system code (e.g., RELAP5) through the application of the proposed qualitative and quantitative accuracy evaluation procedures.

At the end of the seminar, a questionnaire for the evaluation of the course was distributed to the participants. All of them evaluated the conduct of the training very positively, as can be seen from Figure 5.

5. Conclusions

An effort is being made to develop a proposal for a systematic approach to user training. The estimated duration of training at the course venue, including a set of training seminars, workshops, and practical exercises, is approximately two years. In addition, the specification and assignment of tasks to be performed by the participants at their home institutions, with continuous supervision from the training center, have been foreseen.

The 3D S.UN.COP training seminars constitute the follow-up to the presented proposal. The problem of the code user effect, along with the methodologies for performing scaling, BEPU, and 3D coupled code analyses, are the main topics discussed during the course. The responses of the participants during the training demonstrated an increase in their capabilities to develop and/or modify nodalizations and to perform qualitative and quantitative accuracy evaluations. It is expected that the participants will be able to set up more accurate, reliable, and efficient simulation models by applying the procedures for qualifying thermal-hydraulic system code calculations and for evaluating the uncertainty.

List of Abbreviations
ASAP: Adjoint sensitivity analysis procedure
ATWS: Anticipated transients without scram
BE: Best estimate
BEAU: Best estimate and uncertainty
BEMUSE: Best estimate methods uncertainty and sensitivity evaluation
BEPU: Best estimate plus uncertainty
BETF: Basic experiments and test facilities
BoP: Balance of plant
BWR: Boiling water reactor
CFD: Computational fluid dynamics
CFR: Code of Federal Regulations
CIAU: Code with the capability of internal assessment of uncertainty
CSAU: Code scaling, applicability, and uncertainty evaluation
CSNI: Committee on the Safety of Nuclear Installations
ECCS: Emergency core cooling system
EVET: Equal velocities, equal temperatures
FFTBM: Fast Fourier transform-based method
FP: Fundamental problem
GASAP: Global adjoint sensitivity analysis procedure
HEM: Homogeneous equilibrium model
IAEA: International Atomic Energy Agency
ISP: International standard problem
ITF: Integral test facility
LA: Level A degree (terminology used in the certificate)
LB: Level B degree (terminology used in the certificate)
LBS: Level B Senior degree (terminology used in the certificate)
LOCA: Loss-of-coolant accident
NEA: Nuclear Energy Agency
NK: Neutron kinetics
NPP: Nuclear power plant
OECD: Organisation for Economic Co-operation and Development
PWR: Pressurized water reactor
SETF: Separate effect test facility
TH: Thermal-hydraulic
UBEP: Uncertainty band extrapolation process
UMAE: Uncertainty methodology based on accuracy extrapolation
UMS: Uncertainty methods study
US NRC: United States Nuclear Regulatory Commission
UVUT(UP): Unequal velocities, unequal temperatures (unequal pressure)
WWER: Water-cooled water-moderated energy reactor
1D, 3D: One-dimensional, three-dimensional
3D S.UN.COP: (Training on) Scaling, Uncertainty, and 3D coupled code calculations.