Research Article  Open Access
C. Mesado, A. Soler, T. Barrachina, R. Miró, J. C. García-Díaz, R. Macián-Juan, G. Verdú, "Uncertainty and Sensitivity of Neutron Kinetic Parameters in the Dynamic Response of a PWR Rod Ejection Accident Coupled Simulation", Science and Technology of Nuclear Installations, vol. 2012, Article ID 625878, 10 pages, 2012. https://doi.org/10.1155/2012/625878
Uncertainty and Sensitivity of Neutron Kinetic Parameters in the Dynamic Response of a PWR Rod Ejection Accident Coupled Simulation
Abstract
In nuclear safety analysis, it is very important to be able to simulate, with very high accuracy, the different transients that can occur in a nuclear power plant. Although best estimate codes can simulate the transients and provide realistic system responses, the use of nonexact models, together with assumptions and estimations, is a source of uncertainties which must be properly evaluated. This paper describes a Rod Ejection Accident (REA) simulated using the coupled code RELAP5/PARCSv2.7 with a perturbation on the cross-section sets in order to determine the uncertainties in the macroscopic neutronic information. The procedure used to perform the uncertainty and sensitivity (U&S) analysis is a sampling-based method, which is easy to implement and allows different procedures for the sensitivity analyses despite its high computational cost. The DAKOTA-Jaguar software package is the selected toolkit for the U&S analysis presented in this paper. The sample size is determined by applying Wilks' formula for double tolerance limits with 95% probability and 95% statistical confidence for the output variables. Each sample has a corresponding set of perturbations that modify the cross-section sets used by PARCS. Finally, the tolerance intervals of the output variables are obtained by the use of nonparametric statistical methods.
1. Introduction
Being able to simulate accurately the different transients that can occur in a nuclear power plant is one of the main aims of nuclear safety analysis. Transient simulations involve both neutronic and thermal-hydraulic calculations, which are solved by different computer codes using best estimate models. Even though this approach provides more realistic responses than conservative codes, because of the reduction of conservatism, the uncertainty in the code predictions of relevant safety system variables resulting from assumptions and code model estimates must be carefully evaluated. Uncertainties may stem from various sources, for example, lack of knowledge, the approximate character of physical models, or uncertainty in model data, and are statistically propagated to code output parameters, which have a probability density function and a range limit.
In order to obtain the uncertainties in the macroscopic neutronic information using a best estimate neutronics-thermal-hydraulics coupled code, a Rod Ejection Accident (REA) has been simulated, and the results are presented in this paper. The REA belongs to the reactivity-induced accident (RIA) category and is part of the licensing basis accident analyses required for pressurized water reactors (PWR). The REA consists of a rod ejection due to the failure of its operating mechanism, with the power evolution driven by a continuous reactivity insertion. The main factor limiting the consequences of the accident is the Doppler reactivity effect. The physical description of the reactor response is based on the coupled neutronic-thermal-hydraulic code RELAP5/PARCSv2.7. Since a coupled code is used for the best estimate analysis, uncertainties from both aspects should be included and jointly propagated. CASMO4-SIMULATE3 provides the cross-section sets, which are processed using the SIMTAB methodology developed at the Polytechnic University of Valencia (UPV) together with Iberdrola [1].
With the objective of providing a realistic environment for the development of the dynamic behaviour and of the monitoring and adjustment procedures, dynamic models for monitoring and controlling the output system have been developed.
2. Statistical Methodology
2.1. Introduction
It is possible to quantify the uncertainty by means of statistical measures. For instance, even when the exact values for a given code input are unknown, a range of possible values that it can take may be available. It is then possible to quantify uncertainty by considering the range, together with a probability density function (PDF) that assigns a probability to the values inside the range. Sometimes, however, a third factor has to be considered [2], namely, the model uncertainty, that is, the difference between the real system and the result predicted by a set of approximations depending on the model used.
In this work, both the tolerance limits and the PDF distributions were assumptions. The real values of the tolerance limits for each kinetic parameter, and its PDF distribution type, will be known only after a validated uncertainty propagation methodology is applied to the whole process. This process, consisting of neutron kinetic parameter generation for coupled 3D neutronic-thermal-hydraulic simulations, can be divided into three steps: (1) cell physics (derivation of the multigroup microscopic cross-section libraries and associated uncertainties), (2) lattice physics (derivation of the few-group macroscopic cross-section libraries and associated uncertainties), and (3) core physics (core steady-state and transient behavior with the associated uncertainties). Hence, this work is a general approach to the third step (core physics) on the assumption that the uncertainties corresponding to the first two steps have previously been obtained.
Moreover, it is known that the fractions of delayed neutron precursors, β, and their decay constants, λ, play an important role in the output variable variance. This influence depends on how far from prompt critical the reactivity insertion is. For small insertions of reactivity, delayed neutrons play an important role in the time evolution of the neutron flux, and so the uncertainty in β and λ will have a greater effect on the kinetic output variables. As the weight of the delayed neutrons decreases with the proximity to prompt criticality, the influence of β and λ also diminishes, as does the effect of their uncertainty.
In any case, the reason why the uncertainties in β and λ have not been included in the current study is the lack of available information. It is not expected, however, that keeping these parameters unchanged will influence the conclusions about the uncertainty and sensitivity results, taking into consideration the uncertainties in the most important neutronic data considered in this work. The effects of the uncertainties in the kinetic parameters related to the precursors of delayed neutrons will nevertheless be taken into account in future studies.
The computer code plays the role of the model used, and its associated uncertainty is calculated using the presented methodology. Figure 1 provides a simple explanation of the procedure followed (step 3).
The computer code is a deterministic function that transforms the stochastic uncertainty in its inputs and models, x_1 to x_n, into stochastic uncertainty associated with its outputs, y.
The coupled code is represented by the function f, which calculates the output variables, y, as a function of the input variables, x_i, whose uncertainties can be determined. With sensitivity analysis, the importance of each x_i regarding its influence on the uncertainty of y can be calculated, and hence a numerical value can be assigned to that influence.
To get a proper understanding of what is actually happening in the model, it is also important to perform a sensitivity analysis. This type of analysis reflects the variance of a response random variable when an input-related random variable is perturbed. In this work, the sensitivity analysis performed is based on statistical measures of correlation between the input variables selected as sources of uncertainty and the output variables of interest. Two kinds of correlations can be used: those based on regression analysis, for example, the Pearson product moment, and those based on nonparametric rank correlation, for example, Spearman regression coefficients. Pearson correlation is most suitable for linear dependencies, whereas Spearman correlation can better quantify both nonlinear and linear dependencies. For this work, Spearman correlation has been selected to carry out the sensitivity analysis. Additionally, full and partial correlations have been computed and analysed. Full correlations include the effects of all the input variables simultaneously, whereas partial correlations eliminate the effects of the other variables for a given one. A threshold value of the correlation coefficient is usually accepted below which an input variable is considered to have no significant influence on an output variable, and such a threshold has been selected for this purpose in this paper.
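The distinction between the two correlation measures can be illustrated with a small pure-Python sketch (illustrative only, not part of the DAKOTA toolchain; the data and names are hypothetical): a monotone but nonlinear relationship is detected perfectly by Spearman but underestimated by Pearson.

```python
import math
import random

def pearson(x, y):
    """Pearson product-moment correlation of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def ranks(v):
    """Replace each value by its rank (1..n); ties are not handled here."""
    order = sorted(range(len(v)), key=v.__getitem__)
    r = [0] * len(v)
    for i, idx in enumerate(order):
        r[idx] = i + 1
    return r

def spearman(x, y):
    """Spearman correlation = Pearson correlation of the ranks."""
    return pearson(ranks(x), ranks(y))

# A monotone but nonlinear relation: Spearman sees it perfectly,
# Pearson underestimates it.
rng = random.Random(1)
xs = [rng.uniform(0.0, 3.0) for _ in range(50)]
ys = [math.exp(a) for a in xs]
```

Because exp is strictly increasing, the ranks of `xs` and `ys` coincide, so the Spearman coefficient is exactly 1 while the Pearson coefficient remains below 1.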
It is important to point out that statistical sensitivity measures only quantify statistical relationships and do not offer quantitative values for the magnitude of the relationship which could be used to further compute linear uncertainties in the output variables of interest. For this, analytical or numerically obtained first-order derivatives, ∂y/∂x_i, must be computed, which is not always possible, especially for systems with nonlinear behaviour, for which second-order and cross derivatives should also be included.
2.2. Methodology Description
This paper discusses work based on previous studies [3]. The procedure followed to perform the uncertainty and sensitivity analysis is a sampling-based method. This method has been chosen because it is easy to implement, allows different procedures for the sensitivity analyses [4], and is suitable for all computer codes. One of its main advantages is that the user does not need to know beforehand the importance of each input variable, since the order of importance is provided afterwards by the sensitivity analysis. However, the main drawback of sampling-based methods is their high computational cost. Another important feature of the methodology is that the accuracy of the obtained results does not depend on the number of input parameters, but on other factors, such as the sample size or the randomness of the sampling procedure. The latter condition ensures the randomness of the output variables.
The first step in the methodology is the characterization of the uncertainty of the input variables. As a starting point, the user must decide which input variables could be most relevant or sensitive for the output variables. As stated earlier, two factors are used in order to define the uncertainties related to the input variables: the intervals of possible values and the associated PDF. The uncertainty analysis with nonparametric methods leads to range limits for each input variable x_i in the form (lower limit, upper limit). These limits contain the real value of x_i with a certain probability, p, and a prescribed level of confidence. The lower and upper bounds selected for all input variables are (−0.003, +0.003).
Regarding the selection of the distribution, if there is no information or knowledge available to provide a PDF for a certain x_i, the uniform distribution is recommended, since it assigns equal probability to each value within the sample space. However, depending on the phenomena modelled, the selection of the normal or even the lognormal distribution could be a better choice. In this paper, the chosen PDF distributions are the uniform and the normal with three different standard deviations, 0.1%, 0.5%, and 1%, both with a mean value equal to 0. Once the distribution type is chosen, the sample has to be generated.
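Drawing one perturbation under the distributions just described can be sketched as follows (a hypothetical helper, not the actual DAKOTA sampler; the bounds and standard deviations are those stated in the text):

```python
import random

def draw_perturbation(dist, rng, lo=-0.003, hi=0.003, sigma=0.001):
    """Draw one relative perturbation: uniform on (lo, hi), or normal with
    mean 0 and the given standard deviation (0.1%, 0.5%, or 1% in the text)."""
    if dist == "uniform":
        return rng.uniform(lo, hi)
    return rng.gauss(0.0, sigma)

rng = random.Random(2012)  # fixed seed for reproducibility
row = [draw_perturbation("uniform", rng) for _ in range(7)]  # one sample of 7 parameters
```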
Computer codes, in general, do not accept intervals as inputs, but rather a unique numerical value for each x_i. Thus, the methodology must provide a value for each x_i within its interval. There are several statistical techniques which could provide the desired values, but two methods were selected in this work: random sampling and Latin Hypercube Lattice Sampling (LHS), which is described in Section 2.4.
The second step involves performing the simulations a certain number of times. This action is equivalent to obtaining y as a function of different sampled inputs (each time with a different sampled set of x_i). An appropriate sample size, N, is another important part of the methodology. This value depends on the desired uncertainty accuracy and is related to Wilks' formula, which gives the minimum number of simulations and, hence, the number of input samples.
Wilks' approach, also known as the GRS (Gesellschaft für Anlagen- und Reaktorsicherheit) method in the nuclear safety field, is based on a purely statistical method [5]. The sample size, obtained by applying Wilks' formula for double tolerance limits with 95% probability and 95% statistical confidence for the output variables, is equal to 93 [6]. However, it has recently been published that the recommended minimum number of code runs required for the 95/95 condition with the first-order two-sided approach is 146 [7], which is the number of code runs performed in the present work.
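The classical first-order two-sided Wilks criterion can be checked numerically; the sketch below (an illustration, not the reference used in the paper) reproduces the 93-run figure. The stricter 146-run recommendation of [7] follows from a different treatment of the order statistics and is not reproduced by this simple formula.

```python
def wilks_two_sided_confidence(n, p=0.95):
    """Confidence that (min, max) of n runs covers a fraction p of the
    output population (first-order two-sided Wilks formula)."""
    return 1.0 - n * p ** (n - 1) + (n - 1) * p ** n

def minimum_runs(p=0.95, confidence=0.95):
    """Smallest sample size meeting the p/confidence tolerance criterion."""
    n = 2
    while wilks_two_sided_confidence(n, p) < confidence:
        n += 1
    return n

print(minimum_runs())  # 93, the classical 95/95 two-sided sample size
```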
Each of the simulations uses a different sampled input set and leads to a different output. In other words, N sets of x_i lead to N different values of the output variables, y. Furthermore, y is randomly distributed due to the fact that x_i was randomly chosen. The PDFs of y are sufficient to calculate the uncertainty in y. However, the PDFs are not always easy to evaluate; thus, the only remaining alternative is to obtain as much information as possible about the PDF properties together with the main parameters. One of the most used parameters is the quantile, which comes from empirical PDFs and estimators.
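As an illustration of the quantile estimator mentioned above, a minimal nearest-rank version could look like this (a sketch only; the estimators actually used by DAKOTA may differ):

```python
def empirical_quantile(sample, q):
    """Nearest-rank estimate of the q-quantile of an empirical distribution."""
    s = sorted(sample)
    idx = min(len(s) - 1, int(q * len(s)))
    return s[idx]

median = empirical_quantile([10, 20, 30, 40], 0.5)  # -> 30
```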
The uncertainty of y, for a certain uncertainty in x_i, can be easily calculated if y is randomly sampled with a normal PDF. Under those hypotheses, the tolerance intervals can be calculated with only two parameters: the sample mean and the sample standard deviation. As stated earlier, the hypothesis of a normal PDF for y is not always easy to guarantee; however, there are statistical tests that can be used to quantify how well the hypothesis of normality fits the sampled data. An important statistical tool is the rank correlation test for detecting linear relationships as well as nonlinear behaviour between x_i and y. Examples of linear measures are the simple correlation coefficient (SCC), or Pearson's moment product, and the partial correlation coefficient (PCC). The most important advantage of the PCC is that it eliminates the linear influence of the remaining input variables on y, leaving only the x_i whose sensitivity is being calculated. Both rank correlations, simple and partial, are described in Section 2.3.
2.3. Simple or Partial Rank Correlation
In order to deal with models which are not clearly linear, simple (SRCC) or partial (PRCC) rank correlation coefficients can be used. To calculate these two correlations, the sample values of x_i and y, whose relationship has to be determined, are separately "ranked"; that is, two separate ordered lists (in decreasing or increasing order) are created, followed by the assignment of a rank, an integer number, to each value.
If the two "unranked" original series of values are related monotonically, then the ordered series are linearly related. This is true even if the relationship between the unordered series is not linear. Thus, the absolute values of the SRCC and PRCC quantify the degree of relationship between a given x_i and the y of interest. The closer the values of these coefficients are to one, the more influence x_i has on y. In sensitivity analysis, the most used test is Spearman's coefficient. The critical values for the Spearman rank correlation coefficient are calculated by r_crit = ±z_{1−α/2}/√(n−1), where n is the number of runs and z_{1−α/2} represents the point on the standard normal PDF with a probability, α/2, of observing a greater value, which is known as the upper critical value (quantile). For a confidence interval of 95%, z_{1−α/2} = 1.96.
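The critical value above is straightforward to evaluate; a one-line sketch:

```python
import math

def spearman_critical(n_runs, z=1.96):
    """Two-sided critical value for Spearman's coefficient: z / sqrt(n - 1)."""
    return z / math.sqrt(n_runs - 1)

print(round(spearman_critical(146), 3))  # 0.163 for the 146 runs of this work
```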
2.4. Latin Hypercube Lattice Sample
For more than twenty years, the Latin Hypercube Lattice Sampling program has been successfully used to generate multivariate samples from statistical distributions. Its ability to use either Latin hypercube sampling or pure Monte Carlo sampling, with both random and restricted pairing methods, has made it an important part of uncertainty analyses in areas ranging from Probabilistic Risk Assessment (PRA) to complex simulation modelling.
Latin Hypercube Lattice Sampling (LHS) is a stratified random procedure that provides an efficient way of sampling variables from their distributions [4, 8, 9]. LHS is very popular for use with computationally demanding models because its efficient stratification properties allow for the extraction of a large amount of uncertainty and sensitivity information with a relatively small sample size. In fact, the main advantage of LHS with respect to simple random sampling is that LHS achieves a better dispersion of the sample points over the input domain space [10].
The LHS process consists of three steps: (1) the range of each input variable, x_i, is divided into n intervals of equal probability according to the associated PDF, (2) a random point is generated in each interval for each input variable, and (3) one sample is generated by a random combination, without replacement, of the points randomly generated in step 2 (one point from each x_i). Step 3 is repeated for the required number of samples. Since one point is randomly generated in each interval, the input domain space is efficiently covered.
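The three steps above can be sketched in pure Python for uniform marginals on [0, 1) (an illustration, not the Sandia LHS program; for a nonuniform PDF each point would be mapped through the inverse CDF, e.g. lo + u·(hi − lo) for the uniform (−0.003, +0.003) interval):

```python
import random

def latin_hypercube(n_samples, n_vars, seed=0):
    """Steps 1-3 of the LHS procedure on the unit interval:
    stratify [0, 1) into n_samples equal-probability intervals (step 1),
    draw one uniform point per interval and variable (step 2),
    then pair the points across variables without replacement (step 3)."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_vars):
        # one random point inside each of the n_samples strata
        points = [(k + rng.random()) / n_samples for k in range(n_samples)]
        rng.shuffle(points)  # random pairing without replacement
        columns.append(points)
    # transpose: one row per sample, one entry per input variable
    return [list(row) for row in zip(*columns)]

sample = latin_hypercube(146, 7)  # 146 samples of the 7 neutronic parameters
```

By construction, every stratum of every variable contains exactly one point, which is the stratification property that makes LHS efficient at small sample sizes.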
3. DAKOTA Methodology
It is well known that some design parameters have a high influence on the response, whereas the influence of other parameters can be neglected. In order to optimize the design process, sensitivity analysis techniques and parameter study methods are applied. Both are used to identify which parameters need to be varied and which can be taken as constant. In addition, in a postoptimization role, sensitivity information is useful for determining whether or not the response functions are robust with respect to small changes in the optimum design point [11].
DAKOTA, which stands for Design Analysis Kit for Optimization and Terascale Applications, is the selected toolkit for the present sensitivity analysis study. DAKOTA provides a flexible interface between analysis codes and iterative systems analysis methods [11]. In addition, DAKOTA is a very powerful tool that allows performing uncertainty quantification with sampling and sensitivity/variance analysis, as well as a variety of other applications. The features implemented in DAKOTA lead to improved designs and system performance in earlier stages, which may reduce development time and cost.
JAGUAR, which stands for JAva GUi for Applied Research, is a graphical wizard program which parses a DAKOTA input specification and serves as a graphical user interface (GUI), providing both graphical and text representations of the problem setup for subsequent DAKOTA studies [12]. Basically, JAGUAR is a user-friendly interface that helps the user introduce the data and then calls DAKOTA for the calculations.
For this specific uncertainty and sensitivity analysis, the studied input variables are the following seven neutronic parameters: the diffusion coefficients, which determine the leakage for the fast and thermal groups (Diff1 and Diff2), the scattering cross-section (SigR), which determines the moderation, the absorption cross-sections for both groups (Siga1 and Siga2), and the number of neutrons per fission multiplied by the fission cross-section, which determines the rate of fission power release, also for both groups (NuSigF1 and NuSigF2). The sample size and the selected sampling methods and distributions were discussed previously. The described input file information is summarized in Table 1.

After setting up the input model file, the perturbation matrix file is generated. Both processes are part of the so-called prerun process in Jaguar. The size of the generated matrix is equal to the sample size multiplied by the number of input model variables (i.e., 146 × 7), and each matrix element represents a perturbation parameter. The matrix file is used to modify the cross-section sets by multiplying the 7 input model variables by the obtained perturbations. Each sample has a corresponding set of perturbations, which means that the number of simulations must be equal to the sample size. Although the REA analysis is performed by the coupled system RELAP5-PARCS, the cross-section sets with the uncertainty perturbations are used only by PARCS.
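Applying one row of the perturbation matrix to the seven parameters of Table 1 might look as follows (a sketch; since the sampled perturbations are centred on zero, the multiplicative factor (1 + ε) is assumed here, and the real modification is performed inside the SIMTAB/PARCS processing):

```python
XS_NAMES = ["Diff1", "Diff2", "SigR", "Siga1", "Siga2", "NuSigF1", "NuSigF2"]

def perturb_cross_sections(base_xs, perturbation_row):
    """Scale each of the 7 macroscopic parameters by its sampled perturbation.
    perturbation_row is one row of the 146 x 7 matrix; the factor (1 + eps)
    is an assumption about how the sets are rescaled."""
    return {name: base_xs[name] * (1.0 + eps)
            for name, eps in zip(XS_NAMES, perturbation_row)}
```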
Once the 146 simulations, with the corresponding perturbations, have been performed, nonparametric statistical methods are applied to study the influence of the uncertainty on the macroscopic neutronic information (postrun process). The selected output model variables for the uncertainty and sensitivity analysis are reactor power, total reactivity, and enthalpy. During the postrun process, the prerun input files generated by Jaguar and the perturbation matrix file are used. However, the matrix has to be modified by adding an extra column which includes the information required for the sensitivity analysis performed. Figure 2 summarizes the procedure followed using the DAKOTA-Jaguar computer package.
The modification of the matrix is one of the main parts of the sensitivity analysis, and it can be performed following two different approaches. The first is based on maximum values, so the extra column contains the maximum value of each output variable (power, enthalpy, or reactivity) for a given case. The second approach is based on the time step values, and it has been repeated three times, once for each output variable. The extra column contains the desired output variable value at each time step for a given case. Furthermore, for this second approach, some interpolations may be required since not all the simulations have the same time steps.
This paper describes the two approaches and the sensitivity analyses based on them. Henceforth, the first approach will be referred to as the scalar sensitivity analysis and the second approach as the index-dependent sensitivity analysis.
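The construction of the extra column under the two approaches can be sketched as follows (hypothetical helper names; the linear interpolation mirrors the one mentioned for runs with differing time steps):

```python
from bisect import bisect_left

def scalar_column(runs):
    """Scalar approach: one extra-column entry per run, the maximum of
    that run's output trace (power, enthalpy, or reactivity)."""
    return [max(trace) for (_times, trace) in runs]

def interp(t, ts, ys):
    """Linear interpolation of a trace, needed because the 146 runs do
    not all share the same time steps."""
    if t <= ts[0]:
        return ys[0]
    if t >= ts[-1]:
        return ys[-1]
    i = bisect_left(ts, t)
    w = (t - ts[i - 1]) / (ts[i] - ts[i - 1])
    return ys[i - 1] + w * (ys[i] - ys[i - 1])

def index_dependent_columns(runs, common_times):
    """Index-dependent approach: each run's trace resampled onto a common
    time grid, giving one extra-column value per time step."""
    return [[interp(t, ts, ys) for t in common_times] for (ts, ys) in runs]
```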
4. Model Description
The reactor core contains 157 fuel elements. Each fuel element has 264 fuel rods, 24 guide tubes, and 1 instrumentation tube. Therefore, the core has been modelled with 157 thermal-hydraulic flow channels, with a one-to-one correspondence with the fuel elements. The initial hot zero power (HZP) steady-state conditions are a temperature of 565.58 K, an initial density of 740.74 kg/m^{3}, and a total inlet mass flow rate through the core of 13301 kg/s, which is distributed among all the channels depending on the cross-sectional area. The transient is started by the ejection of the control rod with the maximum reactivity worth.
RELAP5 is the selected system code for modelling the 157 thermal-hydraulic channels, which are connected with branches (BRANCH), and the bypass, which is modelled as an independent channel (see Figure 3). The inlet and outlet boundary conditions are modelled as a time-dependent volume (TMDPVOL) and a time-dependent junction (TMDPJUN), respectively, as shown in Figure 3.
Radially, the core is divided into 21.504 cm × 21.504 cm cells (with a one-to-one correspondence between the cells and the fuel assemblies) plus a radial reflector. In total, there are 157 fuel assemblies plus 64 reflector assemblies. Axially, the core is divided into 26 layers (24 fuel layers plus top and bottom reflectors) with a height of 15.24 cm each. The total active core height is 365.76 cm.
The neutronic nodal discretization consists of 157 × 24 active nodes. In addition, 13 different types of fuel elements (including one to represent the reflector), with 291 different neutronic compositions, are considered. The neutronic model uses two prompt neutron energy groups and six delayed neutron precursor groups. Moreover, zero flux at the outer reflector surface is considered as the boundary condition for the neutron diffusion equation, and the decay heat model is activated.
The control rods are grouped in 6 banks, all of which are initially fully inserted. In a REA simulation, one of the main parameters that needs to be determined is the control rod with the maximum reactivity worth. Table 2 shows the coordinates and the bank corresponding to the maximum-worth control rod. These results have been obtained using SIMULATE3.

As seen in Table 2, the control rod with the maximum reactivity worth belongs to bank 5 at location 14–10. This is therefore the rod selected to be ejected during the transient. Figure 4 represents the control rod banks and the ejected rod; the selected bank is renamed bank 7.
The transient is started by the ejection of rod 14–10, which is fully extracted in 0.1 s. The transient is simulated following the sequence of events shown in Table 3.

5. Results
Recalling what was discussed previously, the sample size which guarantees double tolerance limits with 95% probability and 95% statistical confidence for the output variables is equal to 146. The uncertainty and sensitivity analysis is performed assuming a uniform distribution, on the one hand, and a normal distribution with three different deviations (0.1%, 0.5%, and 1%), on the other. Both cases have been simulated using the random and LHS sampling methods.
5.1. Scalar Sensitivity Analysis
Figures 5 and 6 represent the partial rank correlation coefficients for the analysed case. From the point of view of the sensitivity analysis, the scalar sensitivity for the maximum power, enthalpy, and reactivity shows that the fast diffusion coefficient (1), the scattering cross-section (3), and both neutrons-per-fission-times-fission cross-sections (6 and 7) have the highest influence on the uncertainties for all output variables. Moreover, the influence of the absorption cross-sections and the thermal diffusion coefficient can be neglected. These conclusions can be extended to all the cases performed.
For simplicity, only the comparison between the sampling methods for the normal PDF is presented. In the case of the uniform PDF, the results are similar to those for the normal PDF but with slightly increased partial rank correlations; that is, all output parameters are more sensitive with respect to the input variable uncertainty. In the next figures, it can be seen that the coefficient of the fast absorption cross-section (Siga1) is positive for the LHS method (Figure 5) and negative for the random method (Figure 6). Considering the sample size used (146) and the proximity of this coefficient to zero, the sign change could be due to statistical fluctuations.
Figure 7 shows the scalar sensitivity analysis for the scattering cross-section (SigR1). It shows that the use of the uniform PDF to quantify the uncertainty of the input variables increases the sensitivity of a given output variable with respect to a specific input variable. The partial correlation coefficient statistically quantifies the influence of one variable, removing the effect of the rest, and, in this case, Figure 7 shows that assigning equal probability to the range of uncertainty of the variable (uniform distribution) increases the correlation coefficient, since the sampling results in a wider variation across the range compared to a normal PDF. Thus, Figure 7 shows that when the normal PDF with LHS is used, the influence of the scattering cross-section is almost constant regardless of the dispersion of the normal PDF, since the sampling variation tends to be concentrated around the mean of the normal distribution. Conversely, for the random sampling method, an increasing trend with increasing perturbation deviation is observed. The same trends can be seen for the other cross-section parameters.
5.2. IndexDependent Sensitivity Analysis
The index-dependent sensitivity analyses for the normal PDF and the two sampling methods are presented in this subsection. For the particular case in which the number of runs is equal to 146, the critical values for Spearman's coefficient are ±1.96/√145 ≈ ±0.163; when the correlation exceeds these values, the parameter is considered influential.
First, the results of the partial rank correlation coefficient (PRCC) for the output variable power are presented. As shown in Figures 8 and 9, the most sensitive parameter is the fast diffusion coefficient (Diff1), with a mostly strong positive influence on the power. This is followed by the fast fission (NuSigF1), the scattering (SigR1), and the thermal fission (NuSigF2) cross-sections, all of them with a similar absolute influence. The difference lies in the positive influence of NuSigF1 on the power and the negative influence of the other two cross-sections.
As can be seen, there are changes in the sensitivities of these four neutronic parameters once the insertion of positive reactivity has occurred. Also, the sensitivities change slightly, reaching stability once the rod has been extracted and tending to their final values, which correspond to the sensitivities of the steady state reached. All these results and conclusions can be extended to the random sampling, normal distribution with 0.1% deviation case, as seen in Figure 9. The results for the uniform distribution are similar to those obtained for the normal PDF with 0.1% deviation.
Figure 10 shows the results for the normal distribution with 0.5% deviation and the LHS sampling method. The most sensitive parameters remain unchanged. Furthermore, the curves are smoother than in the 0.1% standard deviation case. The results for the 1% deviation cases are not shown in this paper because of their similarity to the 0.5% deviation results.
In Figure 11, the results of the PRC coefficient for the output variable enthalpy are presented. Since these results have few abrupt changes, and for the sake of simplicity, a zoom on the time axis has been applied. The conclusions drawn are similar to those obtained for the power: the most sensitive parameters are the same as in the power case, increasing the deviation makes the results smoother, and using the uniform distribution makes the input parameters more sensitive.
In Figure 12, the results of the PRC coefficient for the output variable reactivity are presented. Again, the same conclusions can be drawn, and the most sensitive parameters have the same sign as in the power case.
A final comment is needed with respect to the PRC coefficient variations. All output variables show an abrupt variation in the PRC values at 2 s and around 25 s. As expected, the variation at 2 s is due to the rod ejection; this variation extends its effect until 6 s, after which a plateau is reached. The variation at 25 s can be explained as follows: the PRC is a relative magnitude, so when all input variables are almost constant, a small variation in a thermal-hydraulic or neutronic variable can produce an important relative change in the PRC.
The effects of the rod ejection and the reactivity variation are explained in Figure 13. This figure depicts the total reactivity and its decomposed contributions (Doppler, moderator density, and control rod reactivities). As expected, the rod ejection at 2 s implies a fast positive reactivity insertion, which is immediately compensated by the Doppler effect (broadening of the absorption cross-sections) and by a decrease in the moderator (liquid and gas) density. Following the rod ejection, the moderator density reactivity contributes the most to the total reactivity (as negative reactivity).
Figure 13 also shows the PRC coefficients related to the reactivity together with the reactivities themselves. It can be seen that the fast diffusion coefficient gains influence just after the rod ejection (2 s) and before the insertion of positive reactivity (2.1 s). This phenomenon is due to the loss of the isotropic neutron distribution inside the reactor. The fast diffusion coefficient maintains its importance (in absolute value). The reason for the change of sign of the fast diffusion coefficient is the alteration in neutron production due to moderator density and fuel temperature variations (power increase and Doppler effect).
When the Doppler reactivity becomes important (2.15 s), the absorption parameters increase their influence and the fission parameters show the opposite tendency. This effect can be explained as follows: the power increases as a consequence of the rod ejection, so the fuel and moderator temperatures rise. This is followed by a reduction in the moderator density, and hence in the moderation, so the multiplication factor decreases.
5.3. Uncertainty Analysis
From the point of view of the uncertainty analysis, the results demonstrate that deviations of about 0.1% have the smallest influence on the output variables of interest, as expected. Figure 14 shows that both LHS and random sampling with a normal distribution of 0.1% standard deviation reach a similar peak power at approximately the same time. As the uncertainty increases, there is a slight rise in the maximum output value; there is also a spread in the time at which the peak power (and enthalpy and reactivity, see Figures 15 and 16) occurs. Moreover, the tolerance intervals are wider for the normal than for the uniform distribution, so the normal PDF provides more conservative results than the uniform PDF.
Finally, similar conclusions can be drawn for the uncertainty in power, enthalpy, and reactivity with the random sampling method.
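The two sampling schemes compared above differ only in how the unit interval is covered: simple random sampling draws each input independently, while LHS places exactly one point in each of n equiprobable strata per dimension. A minimal sketch for normally distributed inputs, using the standard library's inverse normal CDF (illustrative, not the Sandia LHS library itself):

```python
import numpy as np
from statistics import NormalDist

def lhs_normal(n, d, mean=0.0, std=1.0, seed=0):
    """Latin hypercube sample of size n in d dimensions: [0, 1] is split
    into n equiprobable strata, one point is drawn inside each stratum,
    and the strata are shuffled independently per dimension before being
    mapped to the normal distribution through its inverse CDF."""
    rng = np.random.default_rng(seed)
    dist = NormalDist(mean, std)
    sample = np.empty((n, d))
    for k in range(d):
        # Random position inside each stratum, strata in random order
        u = (rng.permutation(n) + rng.uniform(size=n)) / n
        sample[:, k] = [dist.inv_cdf(p) for p in u]
    return sample
```

The stratification guarantees that the tails of the input distribution are represented even for the modest sample sizes dictated by Wilks' formula, which is why LHS and random sampling can reach similar peak values while spreading differently in time.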
6. Concluding Remarks
This paper has described a Rod Ejection Accident (REA) simulated with the coupled code RELAP5/PARCS v2.7, perturbing the cross-section sets in order to determine the response of the computational system to uncertainties in the macroscopic neutronic information.
For all cases, the most influential uncertainties identified by the scalar sensitivity analysis were the fast diffusion coefficient (1), with a positive influence on power, and the scattering cross-section (3) and both fission cross-sections (6 and 7), with mixed positive/negative influence. The absorption cross-sections, together with the thermal diffusion coefficient, can be neglected regarding their influence on the selected output variables. Moreover, the sensitivity analyses showed that the influence of the uncertainties does not depend on the selected sampling method.
The index-dependent sensitivity analysis showed the same influence for the different neutronic parameters. Moreover, sign changes appeared in the most important neutronic parameters once the insertion of negative reactivity had occurred; in all cases, however, the tendency is to reach a steady-state condition for long simulation times. Regarding the sampling methods used, no significant differences were found. Furthermore, deviations greater than 0.1% showed smoother behavior, but perturbations with standard deviations greater than 1% could lead to run failures, because the resulting high heat flux drives the thermal-hydraulic properties outside the range of the steam tables.
Conflict of Interests
The authors of this paper have no conflict of interests to declare.
Acknowledgments
This work has been partially supported by the Spanish Ministerio de Educación y Ciencia under Project ENE200802669, the Generalitat Valenciana under Project ACOMP/2009/058, and the Universitat Politècnica de València under Project PAID05094285.
References
O. Roselló, Desarrollo de una metodología de generación de secciones eficaces para la simplificación del núcleo de reactores de agua ligera y aplicación en códigos acoplados neutrónicos termohidráulicos [Development of a cross-section generation methodology for the simplification of light water reactor cores and its application in coupled neutronic thermal-hydraulic codes] [Ph.D. thesis], Universitat Politècnica de València, 2004.
V. Kreinovich and S. A. Ferson, “A new Cauchy-based black-box technique for uncertainty in risk analysis,” Reliability Engineering and System Safety, vol. 85, no. 1–3, pp. 267–279, 2004.
J. Juanas, R. Macián, F. Ánchel, T. Barrachina, R. Miró, and G. Verdú, “Uncertainty and sensitivity analysis in the neutronic parameters generation for BWR and PWR coupled thermal-hydraulic-neutronic simulations,” in International Conference on Nuclear Energy for New Europe, Portorož, Slovenia, September 2010.
J. C. Helton and F. J. Davis, “Latin hypercube sampling and the propagation of uncertainty in analyses of complex systems,” Reliability Engineering and System Safety, vol. 81, no. 1, pp. 23–69, 2003.
H. G. Glaeser, “Uncertainty evaluation of thermal-hydraulic code results,” in Proceedings of the International Meeting on Best Estimate Methods in Nuclear Installation Safety Analysis (BE '00), Washington, DC, USA, November 2000.
 S. S. Wilks, Mathematical Statistics, John Wiley & Sons, 1962.
S. Hong, Oh, and Kim, “Generic application of Wilks tolerance limits evaluation approach to nuclear safety,” in OECD/CSNI Workshop on Best Estimate Methods and Uncertainty Evaluations, Barcelona, Spain, November 2011.
R. L. Iman and W. J. Conover, “A distribution-free approach to inducing rank correlation among input variables,” Communications in Statistics B, vol. 11, no. 3, pp. 311–334, 1982.
M. D. McKay, R. J. Beckman, and W. J. Conover, “A comparison of three methods for selecting values of input variables in the analysis of output from a computer code,” Technometrics, vol. 21, no. 2, pp. 239–245, 1979.
 A User’s Guide to Sandia’s Latin Hypercube Sampling Software: LHS UNIX Library/Standalone Version.
 DAKOTA, A Multilevel Parallel ObjectOriented Framework for Design Optimization, Parameter Estimation, Uncertainty Quantification, and Sensitivity Analysis. Version 5.2 User’s Manual.
 DAKOTA Jaguar 2.1 User’s Manual.
Copyright
Copyright © 2012 C. Mesado et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.