Abstract

The RMPS methodology is used for estimating the uncertainties in the fulfillment of a target related to the design of the isolation condenser of a “CAREM-like” integral reactor. The passive-system assessment is made on the basis of a loss of heat sink transient. Given this scenario, the safety function is to remove the core decay heat after the actuation of the shutdown system, thus reducing the primary system pressure and leading the plant to a safe condition. The design target to evaluate is the avoidance of the RPV safety valves opening. In order to accomplish the evaluation, the following RMPS steps were followed: system identification, system modeling, characterization of TH phenomena, direct Monte Carlo simulation, sensitivity analysis, and quantitative reliability estimation. As main outcomes, a ranking of parameter importance and an estimate of the failure probability, from a design target point of view, were obtained through sensitivity analysis and Monte Carlo simulations based on a response surface model.

1. Introduction

The extensive use of passive safety systems in advanced reactor designs makes an exhaustive and proper approach to their reliability assessment necessary. This implies not only the consideration of mechanical components, evaluated through classical risk assessment tools (e.g., failure mode and effects analysis, fault tree analysis, HAZOP, etc.), but also the consideration of the associated thermal hydraulic (TH) phenomena in terms of the deviation from expected system behavior due to alterations in the environmental conditions.

The assessment of passive system TH phenomena involves the use of a suitable methodology aimed at determining the passive system functional reliability, that is, the failure probability of the physical principle upon which the system operation relies [1]. Several efforts have been made to tackle the functional reliability assessment, such as the REPAS methodology, developed by ENEA, the University of Pisa, and the Polytechnic of Milan [2], and the RMPS methodology, developed within the framework of the project Reliability Methods for Passive Safety functions under the auspices of the European 5th Framework Programme [3]. Other related methodologies are mentioned in D’Auria et al. [4].

In this paper we present an RMPS study case applied to the Isolation Condenser of a CAREM-like integral reactor model.

2. RMPS Methodology Overview

The methodological steps carried out can be separated into the following:
(i) identification of the system mission, accident scenario, and associated failure criterion;
(ii) system modeling;
(iii) characterization of the TH phenomena (identification of relevant parameters and their corresponding uncertainties);
(iv) direct Monte Carlo simulation applied to the TH code;
(v) sensitivity analysis;
(vi) quantitative design target assessment.

The system mission alludes to the safety function that the passive system has to perform (i.e., decay heat removal and primary pressure decrease); these are the goals for which the passive system has been designed and located within the overall system. Therefore, the system mission is related to one or more particular initiating events and allows the definition of design targets for the passive system.

The failure criterion is established in terms of the nonfulfillment of the mentioned design targets.

Once the system mission, accidental scenario, and failure criterion are established, a system model has to be developed by means of a best-estimate TH code (e.g., RELAP5 Mod3.3).

According to the procedural steps, the relevant parameters connected with the TH phenomena have to be identified, associating with them adequate nominal values, ranges of variation, and probability distributions indicating which parts of the range are more likely than others.

Direct Monte Carlo simulation involves the propagation of the selected uncertain parameters through the considered TH code, obtaining a model response (i.e., output variable) which allows, by means of statistical techniques, the estimation of the probability of failure of the passive function.

The model response is achieved through the application of an adequate system performance indicator, strictly linked to the defined failure criterion.

In case the variance of the estimator is quite large, it may take an impractical number of simulation cycles to achieve a specified precision of the sought estimate. In this case, the uncertain parameters can be propagated through a simplified computational model (i.e., a response surface) in order to obtain the desired precision, or at least a proper upper bound, for the estimation of the failure probability.

3. CAREM Reactor Description

3.1. Primary System

The CAREM NPP design is based on a light-water integral reactor. The whole high-energy primary system (core, steam generators, primary coolant, and steam dome) is contained inside a single pressure vessel.

For low-power modules (i.e., below 150 MWe), the flow rate in the reactor’s primary system is achieved by natural circulation (Figure 1). Reactor coolant natural circulation is produced by the location of the steam generators above the core. The driving forces obtained from the density differences along the circuit are balanced by friction and form losses, producing an adequate core flow rate so as to maintain a sufficient thermal margin to critical phenomena. The coolant also acts as a neutron moderator.

Self-pressurization of the primary system in the steam dome is the result of the liquid-vapor equilibrium. The large volume of the integral pressurizer also contributes to the damping of eventual pressure perturbations. Due to self-pressurization, the bulk temperature at the core outlet is near saturation. Heaters and sprays typical of conventional PWRs are thus eliminated [5, 6].

3.2. Isolation Condenser

The isolation condenser (IC) has been designed to reduce the pressure of the primary system and to remove the decay heat in case of a loss of heat sink (LOHS). It is a simple system that operates by condensing steam from the primary system in emergency condensers (Figure 1). The inlet valves in the steam line are always open, while the outlet valves are normally closed; therefore, the tube bundles are filled with condensate. When the system is triggered, the outlet valves open automatically. The water drains from the tubes, and steam from the primary system enters the tube bundles, condensing on their cold surfaces. The condensate is returned to the reactor vessel, establishing a natural circulation circuit. In this way, heat is removed from the reactor coolant. During the condensation process the heat is transferred to the water of the pool by a boiling process [6].

4. RMPS Application to Isolation Condenser

4.1. Definition of the Accident Scenario and System Characterization

The first step of the methodology is the definition of the accident scenario for the IC operation. The knowledge of this scenario allows the specific definition of failure criterion and relevant parameters, and the quantification of their uncertainties. The results obtained in the reliability and sensitivity analyses of the passive system are, thus, specific to this scenario [3].

4.1.1. Accident Scenario

In order to evaluate the system reliability, a loss of heat sink (LOHS) accident scenario with the following boundary conditions was proposed:
(i) total loss of the power removed by the steam generators (SG), with a 12.8 s ramp;
(ii) no feed and bleed systems are taken into account;
(iii) all safety systems involved (first shutdown system (FSS), IC, and safety relief valves) are triggered only by primary system pressure;
(iv) no feedback due to reactivity coefficients is accounted for (conservatively, core power remains constant until the SCRAM condition is reached).

4.1.2. Mission of the System

Given a LOHS transient, the IC’s safety function is to remove the decay heat, consequently reducing the primary system pressure until the hot shutdown condition is reached.

4.1.3. Design Target and Failure Criterion Definition

In this paper, the failure criterion is set in terms of a design target instead of the system’s mission, as originally introduced in “RMPS Methodology Overview” (Section 2).

When a design target is selected, special attention must be paid to the fact that its fulfillment always implies the mission’s accomplishment.

For the IC evaluation, a short-term design target is selected, consisting of the avoidance of primary system overpressure at or beyond the safety valves opening set-point. Therefore, the failure criterion is verified when the pressure set-point is reached.

4.2. Modeling of the System

A one-dimensional nodalisation of the CAREM-like reactor was developed for the RELAP5 Mod3.3 code. The corresponding nodalisation layout is shown in Figure 2. The acronyms selected for the identification of RELAP5 components are PP: pipe, BR: branch, and HS: heat structure.

The primary circuit nodalisation was set up by modeling the most relevant components: RPV dome, steam generator (SG), downcomer, riser, core, and lower plenum.

Special attention was paid to the dome nodalisation, in order to allow 3D fluid circulation in a 1D path [5].

Condensation on the control rod hydraulic feed tubes (which are located in the steam dome) and condensation on the RPV wall due to thermal losses to the exterior are modeled, since they govern the amount of steam (generated in the core) that travels along the riser up to the dome. The amount of steam affects the hot leg density and the buoyancy forces that, together with the pressure losses, determine the primary circuit mass-flow rate.

The downcomer and riser are divided into a suitable number of nodes in order to properly follow thermal fronts. On the other hand, the lower plenum is modeled as a single volume, aiming to represent the mixing of the water before it enters the core.

The SG removed power is modeled in the associated heat structure as a boundary condition. In addition, the core-generated power is modeled as a boundary condition obtained from a point kinetics calculation (according to the ANS79-3 model), without taking into account the feedback due to reactivity coefficients.

The IC nodalisation includes the following system components: steam line (steam line piping + in header ×2), condensers (×4), return line (return line piping + out header ×2), and system pool.

In order to properly simulate the natural circulation inside the pool, a detailed model has been adopted, comprising an upstream branch representing the zone of the pool that is in contact with the condensers and above them, and a downstream branch representing the surrounding areas.

For the overall primary system and the IC, a criterion of sliced nodalisation and similar lengths between nodes of adjacent volumes has been adopted.

4.3. Pressure Transient Description

The dynamics of the primary system pressure during a LOHS scenario is described in this section. In order to clarify the explanation, three phases of the transient have been identified (Figure 3).

During Phase I, steam in the dome is compressed due to coolant expansion, increasing the system pressure. When the primary system pressure reaches the corresponding set-point, the FSS is triggered. As a consequence of the power reduction, core void generation stops and a brief pressure decrease is observed. Once this effect has ended, since there is no power removal, the temperature keeps increasing in the downcomer and in the whole circuit, again pressurizing the primary system, which during this phase remains in subcooled condition.

When the primary system pressure reaches the IC set-point, the system enters a sharp depressurization phase (Phase II).

Finally, once the system reaches its saturation condition again, the sharp depressurization ends; from this moment on, the pressure is governed by the balance between the steam supplied to the dome (generated in the core) and the steam condensed in the IC. This transient stage is called Phase III, and its main feature is that the whole primary system operates near equilibrium conditions.

4.4. Selection of Relevant Parameters and Quantification of Their Uncertainties

A key point of the methodology is the selection of relevant parameters and the quantification of their uncertainties (i.e., assignment of probability distribution functions, nominal values, and ranges of variation).

Relevant parameters are those related to the nominal system configuration (design parameters) and physical quantities (critical parameters) that may affect the mission of the passive system. The uncertainties pertaining to the code are not accounted for; the attention is focused on the uncertainties of the input parameters characteristic of the passive system or of the overall system [3].

This entire methodological step was carried out by means of expert judgment. Some of the most important considerations are summarized below.
(i) Nominal values: although the model adopted is a simplified yet fairly accurate version of the CAREM reactor geometry, the reference values considered for the calculations approximate the nominal operational values of CAREM25.
(ii) Ranges of variation: upper and lower range limits of the parameters are established considering realistic departures from their nominal values. This implies the consideration of operational procedures during the reactor commissioning (e.g., core inlet friction for setting up the nominal mass flow in the primary circuit) and of control system actions associated with those variables that are regulated.
(iii) Probability distributions: for design parameters related to system operation (e.g., operational power, nominal pressure, PCS mass flow rate), it is plausible to consider variations produced symmetrically around their nominal value, since they are affected by regulation systems or are set up according to established procedures during the commissioning process. This allows representing their uncertainty by means of a normal distribution. Concerning critical parameters (e.g., IC tube thickness, fouling) and design parameters not related to system operation (e.g., SCRAM delay, decay power factor), uncertainties have been represented by a log-normal distribution, since it is expected that samples take values at and beyond the distribution’s mode (i.e., the nominal value), according to the physical behavior of critical parameters and to conservative assumptions in the case of parameters unrelated to system operation (i.e., nonregulated). It is important to remark that all parameter distributions have been truncated, since sampling beyond the range limits could lead to unrealistic system configurations (overlapping of parameter ranges, nonrealistic values for regulated variables, etc.).

Moreover, the parameters are considered to be statistically independent.

Table 1 shows the selected parameters and their corresponding distributions. These parameters have been established by an expert panel among the authors, in order to duly identify and justify the assumptions on the relevant parameters.

4.5. Direct Monte Carlo Simulation

Direct Monte Carlo simulation consists of sampling the vector of input parameters, running the system model computer code for each sample, obtaining a vector of output variables, and estimating the characteristics of the output variables. This method can be used to compute the failure probability of a process by using a performance function as the output variable. An estimate ($\hat{P}_f$) of the actual probability of failure ($P_f$) can be found by dividing the number of simulation cycles in which the failure criterion is met ($N_f$) by the total number of simulation cycles ($N$) [3, 7].
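
A minimal sketch of this estimator (hypothetical names; the best-estimate TH code is replaced by an already-computed vector of performance indicator values, with failure taken as $\mathrm{PI} \ge 1$, following the reconstruction in Section 4.5.3):

```python
import numpy as np

def estimate_failure_probability(performance_indicator, threshold=1.0):
    """Direct Monte Carlo estimate P_f_hat = N_f / N, where N_f is the number
    of runs whose output observable lies in the failure domain (PI >= threshold)."""
    pi = np.asarray(performance_indicator, dtype=float)
    n_fail = int(np.count_nonzero(pi >= threshold))
    return n_fail / pi.size

# Illustrative values only (not results of the paper): each entry would be the
# performance indicator obtained from one RELAP5 run of a sampled input vector.
print(estimate_failure_probability([0.62, 0.71, 0.55, 0.68]))  # -> 0.0
```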

4.5.1. Sampling Method

In order to obtain the parameter samples, the simple random sampling (SRS) method was adopted. In this method, every value of the sample is randomly generated from the corresponding parameter distribution. Simple random sampling is a suitable option when no information on the system response is available, having as main advantages the simplicity of sample generation, the availability of well-known methods for estimation and statistical analysis, and the capacity of aggregation.
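
The sketch below illustrates SRS consistent with the distribution choices of Section 4.4 (truncated normal for regulated design parameters, truncated log-normal for the others); the parameter names, nominal values, and truncation limits are placeholders, not the Table 1 entries:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def sample_truncated(draw, low, high, max_tries=10000):
    """Rejection sampling: redraw until the value falls inside [low, high]."""
    for _ in range(max_tries):
        x = draw()
        if low <= x <= high:
            return x
    raise RuntimeError("truncation interval too narrow for rejection sampling")

def srs_sample():
    """One SRS realization of an illustrative input vector (not the Table 1 set)."""
    return {
        # regulated design parameter: truncated normal around its nominal value
        "operational_power_MW": sample_truncated(
            lambda: rng.normal(loc=100.0, scale=1.0), 97.0, 103.0),
        # non-regulated parameter: truncated log-normal with mode near the nominal value
        "scram_delay_s": sample_truncated(
            lambda: rng.lognormal(mean=0.0, sigma=0.2), 0.7, 2.0),
    }

samples = [srs_sample() for _ in range(100)]  # sample size chosen per Wilks' criterion
```

Each sampled dictionary would then be written to a RELAP5 input deck and run independently, which is what makes SRS results easy to aggregate.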

4.5.2. Determination of Code-Runs Number

The sample size, thus the number of code runs, is selected aiming at satisfying Wilks’ formula [8].

The selected sample size, 100 samples, satisfies the 95%/99% criterion (probability content = 95%, confidence level = 99%) for a one-sided tolerance interval. The selection of a one-sided tolerance interval is justified since the problem addressed (concerning the failure criterion definition) can be understood as a problem of exceedance of a given value; therefore, it is not important what occurs in the “left tail” of the output distribution function.
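
For the first-order, one-sided case, Wilks’ criterion reduces to $1 - \gamma^{N} \ge \beta$, with probability content $\gamma$ and confidence $\beta$. A small check (a sketch, not the authors’ tool) confirms that 100 runs exceed the required minimum:

```python
import math

def wilks_min_runs(coverage=0.95, confidence=0.99):
    """Minimum N for a first-order, one-sided Wilks tolerance limit:
    smallest N such that 1 - coverage**N >= confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(coverage))

print(wilks_min_runs(0.95, 0.99))  # 90 -> a sample of 100 satisfies the 95%/99% criterion
```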

4.5.3. Definition of Output Observable

The output observable characterizes the passive system behavior regarding the design target. Therefore, the output observable must reflect the IC operative margin (i.e., departure from the safety relief valves opening set-point). In this sense, the observable appears as a performance indicator of the selected design target [2].

For this application the following performance indicator (PI) is adopted:
$$ \mathrm{PI} = \frac{W_{\min}}{W_{IC}}, \qquad (1) $$
where $W_{IC}$ is the actual power removed by the IC and $W_{\min}$ is the minimum removed power needed to avoid the safety relief valves opening condition.

From its definition, PI represents a factor that multiplies the actual power removed by the IC. This factor tells by how much $W_{IC}$ has to be reduced (or eventually increased) in order that the primary system pressure meets the failure criterion (i.e., reaches the safety valves set-point). The value of $W_{\min}$ can be obtained through a parameterization of $W_{IC}$, performing stepwise calculations.

For this analysis it is assumed that $W_{IC}$ is independent of pressure (and consequently of time), which is a good approximation at high pressures. This allows defining the following constant, which can be understood as a fictitious removed power:
$$ W^{*} = \alpha \, W_{IC}, \qquad (2) $$
where $\alpha$ is a factor that varies from one to zero.

The maximum accumulated energy (for a given $\alpha$) can be determined as
$$ E_{\max} = E_{0} + \int_{t_{0}}^{t_{\max}} W_{net}(t)\, dt, \qquad (3) $$
where $E_{0}$ is the initial system energy; $W_{net}$ is the net primary system power (core power minus the removed power, which is zero before the IC activation); $t_{0}$ is the starting time of the transient; $t_{\max}$ is the time at which the system maximum accumulated energy is reached (intersection between the decay power curve and $W^{*}$); and $t_{act}$ is the IC activation time.

Provided that the decay power is only a function of time, the following dependency is verified: $E_{\max} = E_{\max}(\alpha)$. Moreover, during Phase III the system is near equilibrium conditions and the calculated energy depends almost only on the system pressure (given that the total coolant mass is constant); thus it fulfils $E = E(P)$. Therefore, the energy of the system corresponding to the pressure set-point of the safety relief valves can be expressed as $E_{SRV} = E(P_{SRV})$.

Taking into account (2) and (3), it can be seen that, if $\alpha$ is reduced in each step of the calculation, $E_{\max}$ will gradually increase until it matches $E_{SRV}$. Once this condition is achieved, $W^{*} = W_{\min}$, and the value of the parameter $\alpha$ provides the sought PI.

A qualitative graphical view of this stepwise process is shown in Figure 4; on each step only $\alpha$ is changed, keeping the rest of the system constant. Note that if $\alpha\,W_{IC}$ exceeds the decay power at the IC activation time, $E_{\max}$ is independent of $\alpha$ and it always occurs at $t_{\max} = t_{act}$.
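
A simplified numerical sketch of this stepwise search, under the notation reconstructed in (1)–(3); the decay-heat curve, $W_{IC}$, energies, time grid, and step size below are illustrative placeholders, not CAREM data:

```python
import numpy as np

def performance_indicator(w_decay, w_ic, e0, e_srv, t_act, t_grid, d_alpha=0.01):
    """Reduce alpha stepwise from 1 until the maximum accumulated energy reaches
    the safety-relief-valve energy e_srv; the last alpha tried is the sought PI."""
    for alpha in np.arange(1.0, 0.0, -d_alpha):
        w_removed = np.where(t_grid >= t_act, alpha * w_ic, 0.0)
        w_net = w_decay(t_grid) - w_removed
        energy = e0 + np.concatenate(([0.0], np.cumsum(
            0.5 * (w_net[1:] + w_net[:-1]) * np.diff(t_grid))))  # trapezoidal integral
        if energy.max() >= e_srv:
            return alpha
    return 0.0  # even with no removed power the set-point energy is not reached

# Illustrative placeholders (not CAREM values)
t = np.linspace(0.0, 2000.0, 4001)                 # s
decay = lambda tt: 6.0e6 * (tt + 10.0) ** -0.2     # W, toy decay-heat curve
pi = performance_indicator(decay, w_ic=4.0e6, e0=0.0,
                           e_srv=5.0e8, t_act=100.0, t_grid=t)
```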

4.5.4. Outcomes of Direct Monte Carlo Simulation

The results obtained by direct Monte Carlo simulation are presented in this section. Each result shown corresponds to a best-estimate code run (RELAP5 Mod3.3) of the associated input vector.

A straightforward qualitative analysis of the pressure evolutions (Figure 5) shows that none of the cases meets the failure criterion (i.e., none of the cases exceeds the safety valves pressure set-point).

The model response (i.e., output variable) reflects the same result, now specifying, for each case, the departure from the failure domain (Figure 6).

The estimate of the probability of failure is
$$ \hat{P}_{f} = \frac{N_{f}}{N}, \qquad (4) $$
where $N_{f}$ is the number of code runs which met the failure criterion and $N$ is the total number of code runs.

In this case, $\hat{P}_{f}$ takes the value zero, since no code run provides an output observable within the failure domain.

This result illustrates a limitation of Monte Carlo simulation for estimating rare-event probabilities, since a large number of calculations is needed. Moreover, direct Monte Carlo involves a large computational time for each run (given the complexity of the physical problem to be solved), allowing only a limited number of output observables, which is not enough for achieving a proper upper bound on the probability of failure.

Wilks’ formula for first-order statistics and a one-sided tolerance interval, (5), can be used for calculating a conservative upper bound ($P_{f,UB}$) of the actual probability of failure ($P_{f}$):
$$ \beta = 1 - \left(1 - P_{f,UB}\right)^{N}, \qquad (5) $$
where $\beta$ expresses the “confidence” that $P_{f}$ will be lower than or equal to $P_{f,UB}$.

Considering $\beta = 0.95$ and $N = 100$, one obtains $P_{f,UB} \approx 3 \times 10^{-2}$. This constitutes a very high upper bound for the probability of failure relative to the expected capabilities of passive systems. Another insightful assessment is to impose an upper bound of the order of magnitude of active-system functional failure probabilities (keeping $\beta = 0.95$); the number $N$ of TH code runs then required by (5) is prohibitive in practical terms. Therefore, the use of Monte Carlo simulation based on surrogate models cannot be avoided in order to quantify a proper upper bound of the probability of failure.
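
Both assessments follow from the zero-failure form of (5); a short sketch (the 10^-3 target below is only an illustrative active-system-like value):

```python
import math

def wilks_upper_bound(n_runs, confidence=0.95):
    """Zero-failure upper bound from (5): confidence = 1 - (1 - p_f_ub)**n_runs."""
    return 1.0 - (1.0 - confidence) ** (1.0 / n_runs)

def runs_for_target(p_f_ub, confidence=0.95):
    """Failure-free runs needed to support P_f <= p_f_ub at the given confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_f_ub))

print(wilks_upper_bound(100))    # ~2.95e-2: the bound reachable with ~100 TH code runs
print(runs_for_target(1.0e-3))   # ~3000 failure-free runs for a 1e-3 target bound
```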

4.6. Sensitivity Analysis

Sensitivity analysis is performed to determine the parameters that most influence the model response and, thus, the passive safety function.

The assumption of a linear relationship between the output observable ($Y$) and the input parameters ($X_{i}$) is made. This allows calculating the standardized regression coefficient (SRC) and partial correlation coefficient (PCC) sensitivity indices. The linear hypothesis has to be validated through the coefficient of determination $R^{2}$. The coefficient $R^{2}$ represents the percentage of the variance of the output variable explained by the regression model. The closer $R^{2}$ is to one, the more linear the relation between the output and the inputs. A coefficient of determination $R^{*2}$ can also be obtained based on the ranks; the difference between $R^{*2}$ and $R^{2}$ is a useful indicator of the nonlinearity of the model ($R^{*2}$ is higher than $R^{2}$ for nonlinear models).

Further theoretical aspects of sensitivity analysis based on regression techniques can be found in Devictor and Bolado Lavín [7], Volkova et al. [9], Marquès [8], and Saltelli et al. [10].

It is important to remark that SRCs and PCCs provide related but not identical measures of variable importance. SRCs are sensitive to the input distributions, which implies that they do not take into account the fact that a correlation between $X_{i}$ and $Y$ can be a consequence of a third parameter’s influence [9]; PCCs, on the other hand, provide importance measures that tend to exclude the effect of the other variables. Nevertheless, in case the input variables are uncorrelated, the ordering of variable importance based either on SRCs or on PCCs (in absolute value) is exactly the same [10], a condition that corresponds to the present paper.
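
A compact way to compute both indices from the Monte Carlo input/output sample (a sketch using ordinary least squares; `X` is the N×k matrix of sampled parameters and `y` the vector of output observables, both hypothetical here):

```python
import numpy as np

def src_and_pcc(X, y):
    """Standardized regression coefficients and partial correlation coefficients
    for a linear model y ~ X (each column of X is one input parameter)."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    A = np.column_stack([np.ones(len(y)), X])           # intercept + inputs
    beta = np.linalg.lstsq(A, y, rcond=None)[0][1:]     # regression coefficients
    src = beta * X.std(axis=0, ddof=1) / y.std(ddof=1)

    pcc = np.empty(X.shape[1])
    for i in range(X.shape[1]):
        # correlate the residuals of y and X_i after removing the other inputs
        others = np.column_stack([np.ones(len(y)), np.delete(X, i, axis=1)])
        res_y = y - others @ np.linalg.lstsq(others, y, rcond=None)[0]
        res_x = X[:, i] - others @ np.linalg.lstsq(others, X[:, i], rcond=None)[0]
        pcc[i] = np.corrcoef(res_x, res_y)[0, 1]
    return src, pcc
```

Ranking the parameters by |SRC| or |PCC| then yields the importance ordering discussed below.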

The results obtained are summarized in Table 2. The linear hypothesis is validated by the values obtained for $R^{2}$ and $R^{*2}$.

According to the ranking, 4-DF (decay power factor), 8-L (RPV water level), 15-HL (RPV dome heat losses), 12-TT (IC tube thickness), and 14-W1 (IC tube thickness due to fouling) are the most important input parameters. This outcome has a straightforward connection with the dynamic behavior of the CAREM-like model, since the parameters related to the primary system (taking into account the trend given by their sign) are those linked to a higher accumulation of energy, whereas the parameters related to the IC contribute to the impairment of the heat transfer to the pool.

4.7. Quantitative Reliability Calculations by Means of Surrogate Models
4.7.1. Response Surface Calculation

A response surface is a simplified substitute model that fits the initial data, has good prediction capabilities, and demands negligible time per calculation. This feature allows one, once the response surface has been determined, to assess the passive system reliability easily by means of Monte Carlo simulation.

Additional insights about response surface method can be found in Volkova et al. [9] and Devictor and Bolado Lavín [7].

A first-degree linear model based on multiple linear regression is adopted for constructing the response surface. A stepwise variable selection procedure has been performed, leading to the suppression of five uninformative parameters from the linear model. The parameters of the linear predictor are summarized in Table 3, while a few criteria that determine the quality of the approximation are summarized in Table 4.
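
A minimal sketch of such a first-degree response surface with a greedy forward-selection step (a simplified stand-in for the stepwise procedure used in the study; the stopping threshold is an assumption):

```python
import numpy as np

def fit_linear(X, y):
    """Least-squares fit with intercept; returns coefficients and predictions."""
    A = np.column_stack([np.ones(len(y)), X])
    coef = np.linalg.lstsq(A, y, rcond=None)[0]
    return coef, A @ coef

def forward_stepwise(X, y, min_gain=0.01):
    """Greedy forward selection: add the input that most increases R^2,
    stop when the gain falls below min_gain. Returns selected column indices."""
    remaining, selected = list(range(X.shape[1])), []
    best_r2 = 0.0
    sst = np.sum((y - y.mean()) ** 2)
    while remaining:
        gains = []
        for j in remaining:
            _, pred = fit_linear(X[:, selected + [j]], y)
            gains.append(1.0 - np.sum((y - pred) ** 2) / sst)
        if max(gains) - best_r2 < min_gain:
            break
        j_best = remaining[int(np.argmax(gains))]
        best_r2 = max(gains)
        selected.append(j_best)
        remaining.remove(j_best)
    return selected, best_r2   # columns left out correspond to suppressed parameters
```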

The mean square error (MSE) is defined as the average of the squared differences between the observed and the predicted outputs; it has to be compared to the variance of the observed outputs. The bias (BIAS) is defined as the average of the differences between the observed and the predicted outputs; compared to the mean of the observed outputs, it indicates whether there is a problem with the regression model (nonsymmetric residuals). Here, a very small bias can be seen. Finally, the maximum residual, compared to the square root of the MSE, allows detecting anomalies such as a very large residual of the regression model. No anomaly is present here.
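
The three criteria can be computed directly from the observed and predicted outputs (a sketch; variable names are illustrative):

```python
import numpy as np

def approximation_criteria(y_obs, y_pred):
    """Quality criteria for the response-surface fit."""
    y_obs, y_pred = np.asarray(y_obs, float), np.asarray(y_pred, float)
    residuals = y_obs - y_pred
    return {
        "MSE": np.mean(residuals ** 2),             # compare with y_obs.var()
        "BIAS": np.mean(residuals),                 # compare with y_obs.mean()
        "max_residual": np.max(np.abs(residuals)),  # compare with sqrt(MSE)
    }
```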

In this study, all these criteria have also been computed in a prediction context using cross-validation techniques. These results, not shown here, are similar to the approximation criteria. Therefore, it can be concluded that the determined response surface also has good predictive capabilities.

4.7.2. Monte Carlo Simulation Based on Surrogate Model and Reliability Calculation

Monte Carlo simulation was then performed on the basis of the constructed response surface. One million runs were performed, obtaining no cases within the failure domain; once more, the estimate of the probability of failure takes the value zero. This high number of runs illustrates the limitation in the use of Monte Carlo simulation for estimating rare-event probabilities. Nevertheless, the response surface evaluation provides new statistical evidence that allows a more accurate upper bound of the actual probability of failure to be achieved, in addition to giving more useful insights into the passive system capabilities in terms of its design target fulfillment.

Regarding Wilks’ formula, with a confidence level $\beta = 0.95$ and taking now $N = 10^{6}$, the upper bound achieved for the failure probability is $P_{f,UB} \approx 3 \times 10^{-6}$.

5. Conclusions

The reliability assessment of the TH phenomena related to the CAREM-like passive RHRS was made with regard to a loss of heat sink transient. Given this type of scenario, the passive system actuates by removing the core decay heat and reducing the primary system pressure (safety function), with the objective of keeping the pressure below the safety relief valves set-point (design target). The failure criterion of the passive system is met when the design target is not accomplished.

In order to characterize the TH phenomena, sixteen (16) relevant parameters were identified, and adequate probability density functions were associated with them by means of expert judgment.

The sample size, and thus the total number of calculations performed with the RELAP5 Mod3.3 code, was fixed at 102 according to Wilks’ formula.

Due to the limited number of model responses (because of the computational cost of each run), the results obtained by means of direct Monte Carlo simulation do not provide the necessary statistical information for estimating the probability of failure of the TH phenomena, or even a proper upper bound for it. This illustrates a limitation in the straightforward use of direct Monte Carlo simulation for the reliability assessment of complex physical phenomena.

For such applications, alternative methods, such as surrogate models, are practically indispensable. In this study case, a response surface based on a first-degree linear model was fitted to the most relevant parameters. Through $10^{6}$ Monte Carlo simulations performed on this simplified model, no cases within the failure domain were obtained. However, this new statistical evidence allowed an upper bound of the probability of failure equal to approximately $3 \times 10^{-6}$ to be achieved with a 95% confidence level; this result is conservative given that it is directly derived from Wilks’ formula.

The small upper bound obtained shows the highly reliable performance of the passive system addressed in this study.

From sensitivity analysis outcome, the higher ranked parameters are 4-DF (decay power factor), 8-L (RPV water level), 15-HL (RPV dome heat losses), 12-TT (IC tube thickness), and 14-W1 (IC tube thickness due to fouling).

It is important to remark that the assessment presented here is restricted to design parameters and physical quantities, disregarding other types of parameters which may affect the passive system functionality. When best-estimate simulations plus uncertainty propagation are used for evaluating design targets, parameters pertaining to the model should also be considered and their uncertainty quantified (implying efforts in source code modification and subsequent validation, which is why this is excluded from the scope of this work). This provides designers with improved feedback for tackling critical uncertain parameters, either by increasing the knowledge level about them (and thus reducing the uncertainty) or by taking measures in order to make them less relevant (e.g., in the case of heat transfer correlations, their uncertainty effect is reduced by increasing the IC tube thickness).

Acknowledgment

This paper has been supported by the International Atomic Energy Agency (IAEA) within the framework of the coordinated research project Natural Circulation Phenomena, Modelling and Reliability of Passive Systems that Utilize Natural Circulation.