Research Article  Open Access
Uncertainty Analyses Applied to the UAM/TMI-1 Lattice Calculations Using the DRAGON (Version 4.05) Code and Based on JENDL-4 and ENDF/B-VII.1 Covariance Data
Abstract
The OECD/NEA Uncertainty Analysis in Modeling (UAM) expert group organized and launched the UAM benchmark. Its main objective is to perform uncertainty analysis in light water reactor (LWR) predictions at all modeling stages. In this paper, multigroup microscopic cross-section uncertainties are propagated through the DRAGON (version 4.05) lattice code in order to perform uncertainty analysis on k∞ and on the two-group homogenized macroscopic cross-sections. The chosen test case corresponds to the Three Mile Island-1 (TMI-1) lattice, a 15×15 pressurized water reactor (PWR) fuel assembly segment with poison at full power conditions. A statistical methodology is employed for the uncertainty assessment, where the cross-sections of certain isotopes of various elements belonging to the 172-group DRAGLIB library format are treated as normal random variables. Two libraries were created for this purpose, one based on JENDL-4 data and the other on the recently released ENDF/B-VII.1 data. Multigroup uncertainties based on both nuclear data libraries were therefore computed for the different isotopic reactions by means of ERRORJ. The uncertainty assessment performed on k∞ and on the macroscopic cross-sections based on JENDL-4 data was much higher than the assessment based on ENDF/B-VII.1 data. It was found that the computed uranium-235 fission covariance matrix based on JENDL-4 is much larger in the thermal and resonant regions than the corresponding covariance matrix based on ENDF/B-VII.1 data. This may be the main cause of the significant discrepancies between the different uncertainty assessments.
1. Introduction
The significant increase in the capacity of new computational technology has made it possible to switch to a newer generation of complex codes, which are capable of representing in detail the feedback between core thermal-hydraulics and neutron kinetics. The coupling of advanced, best-estimate (BE) models is recognized as an efficient method of addressing the multidisciplinary nature of reactor accidents with complex interfaces between disciplines. However, code predictions are uncertain due to several sources of uncertainty, such as code models as well as uncertainties in plant, material, and fuel parameters. Therefore, it is necessary to investigate the uncertainty of the results if useful conclusions are to be obtained from BE codes.
In the current procedure for light water reactor analysis, during the first stage of the neutronic calculations, the so-called lattice code is used to calculate the neutron flux distribution over a specified region of the reactor lattice by solving the transport equation deterministically. Lattice calculations use nuclear libraries as basic input data, describing the properties of nuclei and the fundamental physical relationships governing their interactions (e.g., cross-sections, half-lives, decay modes and decay radiation properties, γ rays from radionuclides, etc.). Experimental measurements on accelerators and/or estimated values from nuclear physics models are the sources of information for these libraries. Because of the huge amount of sometimes contradictory nuclear data, the data need to be evaluated before they can be used for any reactor physics calculation. Once evaluated, the nuclear data are added in a specific format to so-called evaluated nuclear data files, such as ENDF-6 (Evaluated Nuclear Data File 6). The information in the evaluation files can differ because they are produced by different working groups all around the world (e.g., ENDF/B for the USA, JEFF for Europe, JENDL for Japan, BROND for Russia, etc.). The data can be of different types, containing an arbitrary number of nuclear data sets for each isotope, or only one recommended evaluation made of all the nuclear reactions for each isotope. Finally, these data are fed to a cross-section processing code such as NJOY99 [1], which produces the isotopic cross-section library used by the lattice code. This process can create a multigroup library specifically formatted for the lattice code in use. For instance, Hébert [2] developed a nuclear data library production system that recovers and formats the nuclear data required by the advanced lattice code DRAGON version 4 [3] and higher versions.
For these purposes, a new post-processing module known as DRAGR was included in NJOY99, which is thus capable of creating the so-called DRAGLIB nuclear data library for the DRAGON v 4.05 code.
In the major nuclear data libraries (NDLs) created around the world, the evaluation of nuclear data uncertainty is included as data covariance matrices. The covariance data files provide the estimated variance for the individual data as well as any correlation that may exist. The uncertainty evaluations are developed using information from experimental cross-section data, integral data (critical assemblies), and nuclear models and theory. The covariance is given with respect to pointwise cross-section data and/or with respect to resonance parameters. Thus, if such uncertainties are to be propagated through deterministic lattice calculations, a processing method/code must be used to convert the energy-dependent covariance information into a multigroup format. For example, the ERRORJ module of NJOY99 or the PUFF-IV code is able to process the covariances of cross-sections, including resonance parameters, and generate any desired multigroup correlation matrix.
Among the different approaches to performing uncertainty analysis, the one based on statistical techniques begins with the treatment of the uncertain code input parameters as random variables. Thereafter, values of these parameters are selected according to a random or quasi-random sampling strategy and then propagated through the code in order to assess the output uncertainty of the corresponding calculations. This framework has been widely accepted by many scientific disciplines not only because of its solid statistical foundations, but also because it is affordable in practice and its implementation is relatively easy thanks to the tremendous advances in computing capabilities. In this paper, the microscopic cross-sections of certain isotopes of various elements, belonging to the 172-group DRAGLIB library format, are considered as normal random variables. Two different DRAGLIBs are created, one based on JENDL-4 and the other on ENDF/B-VII.1 data, because a large number of isotopic covariance matrices have been compiled for these two major NDLs [4, 5]. The aim is to propagate the multigroup uncertainties through the DRAGON v 4.05 code, in order to assess and compare the code output uncertainties obtained with JENDL-4 and with ENDF/B-VII.1 data. Uncertainty assessment is performed on k∞ and on the different two-group homogenized macroscopic cross-sections of a PWR fuel assembly segment with poison (UO2-Gd2O3). This test case corresponds to the Three Mile Island-1 (TMI-1) Exercise I-2 that is included in the neutronics phase (Phase I) of the "Benchmark for Uncertainty Analysis in Modeling (UAM) for design, operation, and safety analysis of LWRs," organized and led by the OECD/NEA UAM scientific board [6].
The preferred sampling strategy for the current study is the quasi-random Latin Hypercube Sampling (LHS). This technique allows a much better coverage of the input uncertainties than simple random sampling (SRS) because it densely stratifies across the range of each input probability distribution. In fact, LHS was created in the field of safety analysis of nuclear reactors [7], and the benefits and efficiency of using LHS over SRS have already been proved in both LWR neutronic and thermal-hydraulic predictions [8, 9]. Output uncertainty assessment is based on the multivariate tolerance limits concept. Because the output space formed by k∞ and some of the two-group homogenized macroscopic cross-sections is correlated, a univariate analysis does not apply anymore. By statistically perturbing the different isotopic microscopic cross-sections 450 times, 450 different DRAGLIB libraries are created. The output sample formed by the 450 code calculations is then inferred to cover 95% of the multivariate output population with at least 95% confidence. All this is performed twice, once for libraries based on JENDL-4 data and once for libraries based on ENDF/B-VII.1 data, for their further comparison.
In the next sections, the multigroup microscopic cross-section uncertainties computed with ERRORJ are shown for some important nuclides. Thereafter, a deeper review of how to perform a statistical uncertainty analysis is presented, with emphasis on a methodology developed to properly sample the scattering kernel and the fission spectrum. This allows a correct uncertainty propagation through the lattice code, since the neutron balance is preserved in the transport equation. Finally, results of the uncertainty analyses are shown for the test case and discussed.
2. Multigroup Uncertainty Based on JENDL-4 and ENDF/B-VII.1
2.1. Main Features
The uncertainty information in the major NDLs is included in the so-called "covariance files" within the ENDF-6 formalism. The following covariance files are defined:
(i) data covariances obtained from parameter covariances and sensitivities (MF30),
(ii) data covariances for the number of neutrons per fission (MF31),
(iii) data covariances for resonance parameters (MF32),
(iv) data covariances for reaction cross-sections (MF33),
(v) data covariances for angular distributions (MF34),
(vi) data covariances for energy distributions (MF35),
(vii) data covariances for radionuclide production yields (MF39),
(viii) data covariances for radionuclide production cross-sections (MF40).
To propagate nuclear data uncertainties in reactor lattice calculations, it is necessary to begin by converting the energy-dependent covariance information in ENDF format into multigroup form. This task can be performed conveniently within the latest updates of NJOY99 by means of the ERRORJ module. In particular, ERRORJ is able to process the covariance data of the Reich-Moore resolved resonance parameters, the unresolved resonance parameters, the μ̄ component of the elastic scattering cross-section, and the secondary neutron energy distributions of the fission reactions [5]. ERRORJ was originally developed by Kosako and Yamano [10] as an improvement of the original ERRORR module in order to calculate self-shielded multigroup cross-sections, as well as the associated correlation coefficients. These data are obtained by combining absolute or relative covariances from ENDF files with an already existing cross-section library, which contains multigroup data from the GROUPR module.
In the presence of narrow resonances, GROUPR handles self-shielding through the use of the Bondarenko model [1]. To obtain the part of the flux that provides self-shielding for isotope i, it is assumed that all other isotopes are represented by a constant background cross-section σ0. Therefore, at resonances the flux takes the following form:

    φi(E) ≈ W(E) / (σt,i(E) + σ0), (1)

where W(E) is a smooth weighting function and σt,i(E) is the total cross-section of isotope i.
The most important input parameters to ERRORJ are the smooth weighting function W(E) and the background cross-section σ0. It should be noted that these are assumed to be free of uncertainty.
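To make the role of the background cross-section concrete, the following toy calculation (a sketch with a single made-up resonance, not real nuclear data) evaluates a flux-weighted group-average cross-section with the Bondarenko-type flux φ(E) ∝ W(E)/(σt(E) + σ0) at two dilutions:

```python
# Hypothetical total cross-section with a single resonance near 6.7 eV
# (a toy stand-in for a real resonant isotope); units: barns, eV.
def sigma_t(E, peak=2000.0, E0=6.7, gamma=0.03):
    return 10.0 + peak / (1.0 + ((E - E0) / gamma) ** 2)

def group_average(sigma0, e_lo=6.0, e_hi=7.4, n=20000):
    """Flux-weighted group average with the Bondarenko flux
    phi(E) ~ W(E) / (sigma_t(E) + sigma0), using a flat W(E)."""
    dE = (e_hi - e_lo) / n
    num = den = 0.0
    for i in range(n):
        E = e_lo + (i + 0.5) * dE
        st = sigma_t(E)
        phi = 1.0 / (st + sigma0)   # flat smooth weighting W(E) = 1
        num += st * phi * dE
        den += phi * dE
    return num / den

# Infinite dilution (huge sigma0): flux is flat, no self-shielding.
sig_inf = group_average(sigma0=1.0e10)
# Low dilution: the flux dips inside the resonance -> smaller average.
sig_shielded = group_average(sigma0=50.0)
print(sig_inf, sig_shielded)
```

A large σ0 reproduces the infinite-dilution average, while a small σ0 depresses the flux inside the resonance and lowers the group-averaged cross-section, which is precisely the self-shielding effect discussed above.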
2.2. Computation of Uncertainties and Correlation Matrices of Important Isotopes
In this section, results of the ERRORJ module are shown in Figures 1, 2, 3, 4, 5, 6, and 7 for different reactions of 5 important nuclides of the TMI-1 lattice. Results for the nuclide of Figure 1 are based on JENDL-3.3 data, since JENDL-4 does not contain uncertainty information for this isotope. The values of the microscopic cross-sections and their relative variances in percentage were computed on an energy grid of 172 groups by using a weighting flux that corresponds to a standard reactor spectrum shape (in NJOY, this is equivalent to one of the built-in weighting options of GROUPR). For all cases, an infinite dilution condition was assumed (i.e., a very large background cross-section σ0) and the temperature was considered to be 293 K.
[Figures 1-7: multigroup cross-section values, relative variances, and correlation matrices for the different reactions of the perturbed nuclides; panels (a) are based on JENDL-4 data (JENDL-3.3 for Figure 1) and panels (b) on ENDF/B-VII.1 data.]
Each of these figures contains 3 main plots. The plot on the right corresponds to the value of a certain reaction cross-section, while the plot at the top corresponds to the relative variance (i.e., the variance of the cross-section divided by the actual value of the cross-section at a certain energy group). These two plots are presented in multigroup format as a function of energy (eV). Finally, the plot at the center represents the correlation that exists among the 172 energy groups for that type of reaction.
From the isotopic composition of the TMI-1 exercise, only five nuclides have uncertainty information available in both the JENDL-4 and ENDF/B-VII.1 libraries. Therefore, only the corresponding reactions of these nuclides were statistically perturbed. It has to be mentioned that the fission spectrum uncertainty could not be computed by ERRORJ for the ENDF/B-VII.1 library for either of the two uranium isotopes. The code gave an error message about the I/O format of the file and, since this could not be resolved, the fission spectrum covariance matrices from JENDL-4 were used instead. This problem has already been reported to the ENDF/B research group.
As seen in the previous figures, for each cross-section of a given nuclide, the variability of the probability of interaction at a certain energy group is related to the probability of interaction at other energy groups, since the same measuring equipment was used when determining such probabilities. This correlation can be studied through the self-reaction covariance matrix. In the same way, the variability of the probability of interaction at a certain energy group for one type of reaction is also related to the probability of interaction of a second type of reaction at the same energy group, for the same reason as above. This correlation can be studied through the multireaction covariance matrix.
It should be noted that in the modern JENDL libraries, covariances for μ̄ (mubar), which allow performing an uncertainty analysis up to a linear degree of anisotropy, are defined for actinides. However, this is not the case for the new ENDF/B-VII.1 library and thus the uncertainty analysis was only performed on the isotropic components of the scattering matrix. Another important issue noticed while computing the different reaction covariances was that resonance uncertainties in JENDL-4 are absolute. This means that self-shielded relative variances (or relative standard deviations) will change as a function of temperature and dilution at the resonant groups. To illustrate this issue, relative standard deviations at the resonant groups were computed for different background cross-sections for two resonant reactions, as shown in Figures 8 and 9, respectively. Small relative standard deviations are obtained with large background cross-section values and vice versa. This is supported by the results obtained by Chiba and Ishikawa [11], where a dependency between relative multigroup covariances and background cross-sections at the resonances was observed when JENDL-3.2 data were employed.
Regarding the ENDF/B-VII.1 resonance uncertainties, only an absolute dependency was observed, leaving the relative terms intact for any temperature and/or dilution conditions. This is an important issue because, as will be seen in Section 3, it is very easy to implement the perturbation methodology based on relative uncertainties. Nevertheless, an exception must be made at the actinide resonances for the JENDL-4 case.
3. Statistical Uncertainty Analysis
3.1. Uncertainty Assessment Using Nonparametric Tolerance Limits
The first step of the standard statistical framework is to identify, among the code inputs, the most important uncertain parameters, which can be models, boundary conditions, initial conditions, closure parameters, and so forth. They should be characterized by a sequence of probability distribution functions (PDFs) known as the uncertain input space. Then, a sampling strategy is used to generate a sample of size N from such an input space, which is propagated through the code in order to treat the output calculations as random variables. This scheme is shown in Figure 10.
Once a sample of the code output has been taken, a statistical inference of the output population parameters is performed. During recent years, it has been common in the field of nuclear reactor safety to use the theory of nonparametric tolerance limits for the assessment of code output uncertainty. This approach, proposed by the Gesellschaft für Anlagen- und Reaktorsicherheit (GRS) [12], is based on the work done by Wilks [13, 14] to obtain the minimum sample size required to infer a certain coverage of a population with a certain confidence. Let us assume that the uncertainty assessment is performed on only one output parameter. For the two-sided case, where a fraction γ of the output population is to be covered with a confidence β, the minimum sample size N is given by the following implicit equation [15]:

    β ≤ 1 − γ^N − N(1 − γ)γ^(N−1). (2)
For example, if the 5th and 95th percentiles of the population are to be inferred with 95% confidence, a sample size of 93 elements is required. It should be noted that this analysis is solely based on the number of samples and applies to any kind of PDF the output may follow. Also, since the input space is only used as an indirect way to sample the output space, the use of nonparametric tolerance limits is independent of the number of uncertain input parameters. When the code output comprises several variables that depend on each other, the uncertainty assessment should be based on the theory of multivariate tolerance limits. Wald [16, 17] was the first to analyze the statistical coverage of a joint distribution-free PDF. In Guba et al. [18], the concern about assigning separate tolerance limits to statistically dependent outputs was raised within the nuclear reactor safety community. In that work, it was shown that the general equation developed by Noether [19] for simultaneous upper and lower tolerance limits can be used to determine the minimum sample size required to cover, in a distribution-free manner, a joint PDF depending on the number of output variables. This equation reads as follows:

    β = Σ_{j=0}^{N−(p+q)} C(N, j) γ^j (1 − γ)^(N−j), (3)

where C(N, j) denotes the binomial coefficient, p is the number of upper tolerance limits, and q is the number of lower tolerance limits to be assessed. For instance, in the case of two-sided tolerance limits for a single variable, p = q = 1 and (3) turns out to be the same as (2). Therefore, if a two-sided uncertainty assessment is going to be performed on 2 statistically dependent output variables, then p = q = 2, and so on. It should be noted that the sample size in the multivariate case depends on the correlation among the different parameters. Guba et al. [18] exemplified this fact for a bivariate normal distribution. It was shown that if the variables are highly correlated, the required sample size to cover the joint PDF is smaller than for the poorly correlated case.
Nevertheless, if nothing is known about the output space PDF, (3) gives the required sample size for the desired multivariate coverage with the desired confidence, independently of the correlation (or covariance) among the output parameters. This is a very powerful, statistically significant way to assess uncertainty in the design of computational experiments since, in general, nothing is known about the PDF the calculations are coming from.
Other authors have derived the minimum sample size for multivariate nonparametric tolerance limits, such as the equation presented by Scheffé and Tukey [20]:

    N ≈ (1/4) χ²(2(p+q); β) (1 + γ)/(1 − γ) + (p + q − 1)/2, (4)

where χ²(2(p+q); β) is the β quantile of the χ² distribution with 2(p+q) degrees of freedom. Ackermann and Abt [21] tabulated (4) as a function of the desired coverage and confidence for a large number of tolerance limits. These tables are in agreement with, for instance, Table 4 of [18] with respect to the solution of (3) for the two-sided case and up to 3 variables.
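The minimum sample sizes quoted in this section can be reproduced numerically from Noether's binomial condition for p upper and q lower nonparametric tolerance limits; a minimal sketch (standard-library Python) searches for the smallest N:

```python
from math import comb

def confidence(N, p, q, gamma):
    """Confidence that N samples bound at least a fraction `gamma` of
    the joint output population with p upper and q lower nonparametric
    tolerance limits (Noether's binomial formula)."""
    return sum(comb(N, j) * gamma**j * (1 - gamma)**(N - j)
               for j in range(N - (p + q) + 1))

def min_sample_size(p, q, gamma=0.95, beta=0.95):
    """Smallest N whose confidence reaches beta."""
    N = p + q
    while confidence(N, p, q, gamma) < beta:
        N += 1
    return N

# One-sided single-variable case (Wilks): 59 samples for 95%/95%.
print(min_sample_size(1, 0))
# Two-sided single-variable case: the well-known 93 samples.
print(min_sample_size(1, 1))
```

For p = q = 6 the same routine can be run to cross-check the multivariate sample sizes taken from the tables of Ackermann and Abt, which solve the approximate relation (4).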
3.2. Latin Hypercube Sampling
The simplest sampling procedure for developing a mapping from input space to output space is SRS. In this procedure, each sample element is generated independently of all other sample elements; however, there is no assurance that a sample element will be generated from any particular subset of the input space. In particular, important subsets with low probabilities but high consequences are likely to be missed if the sample is not large enough [7]. Even though, in the theory of nonparametric tolerance limits, the minimum sample size is independent of the dimension of the input space, if an efficient coverage of the different inputs can be achieved with the same sample size that is needed to cover the output space in a statistically significant manner, then the code nonlinearities are better handled and the output uncertainty assessment becomes more efficient as well. This goal can be achieved if Latin Hypercube sampling is employed instead of simple random sampling.
LHS can be viewed as a compromise, since it is a procedure that incorporates many of the desirable features of random and stratified sampling. To generate a sample of size N from K input variables in consistency with their PDFs, LHS proceeds according to the following scheme. The range of each variable is exhaustively divided into N disjoint intervals of equal probability and one value is selected at random from each interval. The N values thus obtained for the first variable are paired at random without replacement with the N values obtained for the second. These N pairs are combined in a random manner without replacement with the N values of the third variable to form N triples. This process is continued until a set of N K-tuples is formed. In this way, a good coverage of all the subsets defining the uncertain input space can be achieved. This procedure is exemplified in Figure 11 for two different possible input distributions, one corresponding to a uniform distribution and the other to a normal distribution.
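A minimal LHS sketch (standard-library Python; the uniform and normal inputs mirror the two illustrative distributions of Figure 11) stratifies each variable into N equiprobable intervals and then pairs the columns at random:

```python
import random
from statistics import NormalDist

def lhs(n_samples, inv_cdfs, rng=random.Random(42)):
    """Latin Hypercube Sample: for each variable, one value is drawn
    from each of n_samples equiprobable strata, and the columns are
    then shuffled independently to pair values at random."""
    sample = []
    for inv_cdf in inv_cdfs:
        # one uniform draw inside each probability stratum ...
        u = [(i + rng.random()) / n_samples for i in range(n_samples)]
        # ... mapped through the inverse CDF of the variable,
        col = [inv_cdf(p) for p in u]
        # then paired at random without replacement
        rng.shuffle(col)
        sample.append(col)
    # transpose: one row per K-tuple
    return list(zip(*sample))

# Two hypothetical inputs: one uniform on [2, 4], one standard normal.
uniform_inv = lambda p: 2.0 + 2.0 * p
normal_inv = NormalDist(0.0, 1.0).inv_cdf
pts = lhs(10, [uniform_inv, normal_inv])
```

By construction, exactly one of the 10 uniform values falls in each of the 10 equal-probability strata, which is the dense stratification property discussed above.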
In the field of computational experiments, the concept of tolerance limits applied to code uncertainty assessment is valid even if the input space is sampled with LHS. This is because the theory does not assume any kind of parametric distribution for the code output space, and is only founded on the ranking of a statistically significant number of samples. Therefore, since the theory is independent of the dimensionality of the input space, it does not matter how the input space is sampled as long as the minimum sample size requirement is fulfilled. In other words, LHS is used to cover the input space much better and, hence, to handle the code nonlinearities much better, in order to infer more realistic output percentiles than the ones SRS might infer for the same sample size and the same level of confidence. For example, the use of LHS applied to the inference of code output tolerance limits in a nonparametric way can be found in [7, 22, 23]. Moreover, it should be recalled that the estimation of the output cumulative density function (CDF) when LHS is employed is unbiased [24].
3.3. Determination of the Sample Size according to Two-Group Diffusion Theory
Since uncertainty analysis in this work is performed on both k∞ and the homogenized two-group macroscopic cross-sections, the minimum sample size for assessing multivariate uncertainty based on nonparametric tolerance limits depends on the number of macroscopic cross-sections that are required to calculate k∞. For example, by following the solution of the two-group diffusion equation in a homogeneous system and applying vacuum boundary conditions [25], the well-known four-factor formula can be derived:

    k∞ = νΣf1/ΣR + (νΣf2/Σa2)(Σs,1→2/ΣR), (5)

where the removal cross-section is given by

    ΣR = Σa1 + Σs,1→2 − (φ2/φ1) Σs,2→1. (6)
It is common that thermal upscattering is not present and thus Σs,2→1 = 0. Therefore, when assessing the covariances between k∞ and the two-group macroscopic cross-sections, a minimum of 6 output parameters is in question (i.e., k∞, νΣf1, νΣf2, Σa1, Σa2, and Σs,1→2). According to Table 1b in [21], for a two-sided 95% coverage of 6 variables with 95% confidence, a minimum of 361 samples is required. Nevertheless, if the uncertainty assessment is extended to other parameters such as the diffusion coefficients, a sample size of 410 elements is needed, because the diffusion coefficients are related to the other parameters through the transport cross-section. Therefore, since one of the main goals of performing lattice calculations is to prepare a homogenized and energy-collapsed set of parameters for any further core analysis, the output sample for the multivariate uncertainty analysis should contain at least 410 elements.
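As a quick illustration of how k∞ is assembled from the six two-group parameters listed above, the two-group expression with no upscattering can be evaluated directly; the numbers below are illustrative PWR-like values, not actual TMI-1 lattice constants:

```python
def k_inf(nsf1, nsf2, sa1, sa2, s12):
    """Two-group infinite multiplication factor with no upscattering:
    k_inf = nuSigf1/SigR + (nuSigf2/Siga2) * (Sigs12/SigR),
    with the removal cross-section SigR = Siga1 + Sigs12."""
    sr = sa1 + s12
    return nsf1 / sr + (nsf2 / sa2) * (s12 / sr)

# Illustrative two-group constants (cm^-1), chosen only for the demo.
k = k_inf(nsf1=0.007, nsf2=0.14, sa1=0.010, sa2=0.10, s12=0.018)
print(k)
```

Perturbing any of the six inputs perturbs k∞, which is why the six parameters (and k∞ itself) form a statistically dependent output set for the multivariate tolerance-limit analysis.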
4. The Input Uncertain Space: Sampling Procedure of the DRAGLIB Library
4.1. Main Features of the DRAGON Code and the DRAGLIB Library
The DRAGON code is the result of an effort made at École Polytechnique de Montréal to rationalize and unify the different models and algorithms used to solve the neutron transport equation into a single code.
Advanced lattice codes essentially feature self-shielding models with capabilities to represent distributed and mutual resonance shielding effects, leakage models with space-dependent isotropic or anisotropic streaming effects, availability of the method of characteristics, and burnup calculations with energy-resolved reaction rates. The advanced self-shielding models available in DRAGON version 4.05 are based on two main approaches: equivalence in dilution or subgroup models. State-of-the-art resonance self-shielding calculations with such models require dilution-dependent microscopic cross-sections for all resonant reactions, and for more than 10 specific dilutions. Ultrafine multigroup cross-section data are also required in the resolved energy domain. Thus, the cross-section library energy structure should comprise at least 172 groups. Since these capabilities require information that is not currently available in, for example, the WIMS-formatted library, a nuclear data library production system was written by Hébert [2] to recover and format the nuclear data needed to feed the DRAGON v 4.05 code.
The management of a cross-section library requires capabilities to add, remove, or replace an isotope, and to reconfigure the burnup data without recomputing the complete library. For these purposes, DRAGR was developed by Hébert [2] as an interface module that performs all these functions while maintaining full compatibility with NJOY99 and its further improvements. DRAGR produces DRAGLIB, a direct-access cross-section library in a self-described format that is compatible with DRAGON or with any lattice code supporting that format. The DRAGR Fortran module was written as a clean and direct utility that makes use of the NJOY PENDF and GENDF files. For each nuclide within DRAGLIB, the cross-sections of the relevant neutron-interaction reactions are described, together with NuSigmaFission, the released neutron energy spectrum (CHI), and the P0 and P1 scattering matrices. Since the uncertainty study reported hereafter is based on JENDL-4 data, a DRAGLIB library of 172 groups needed to be produced using JENDL-4 information for different temperatures and background cross-sections. The first 79 groups correspond to the thermal region, the next 46 groups to the resonant region, and the last 47 groups to the fast region. Examples of microscopic cross-sections for different reactions included in DRAGLIB can be found in Figures 1, 2, and 3. These cross-sections were calculated at 293 K and considering an infinite dilution.
The DRAGON code solves the multigroup criticality equation at the pin-cell level using collision probability theory, and at the fuel assembly level by means of the method of characteristics. In its integro-differential form, the zero-level transport-corrected multigroup equation is given by

    Ω·∇φ^g(r, Ω) + Σ^g(r) φ^g(r, Ω) = (1/4π) Σ_{g'=1}^{G} Σs0^{g'→g}(r) φ^{g'}(r) + (χ^g/(4π k)) Σ_{g'=1}^{G} νΣf^{g'}(r) φ^{g'}(r). (7)
The left-hand side of (7) is related to how neutrons disappear in space by leakage and by any absorption or scattering reaction at group g, while the right-hand side is related to how neutrons are produced at that energy level through the sum of the scattering and fission contributions coming from the different neutron energy groups. The input uncertain space is thus composed of the different multigroup microscopic cross-sections and fission spectra. If a statistical perturbation of one type of reaction is made on one side of the transport equation, it should somehow be propagated to the other side as well in order to preserve the neutron balance. However, some uncertainty information (depending on the type of reaction and nuclide in question) cannot be computed directly from the NDLs. For example, straightforward covariances cannot be obtained for the scattering matrices. Therefore, the different methodologies needed for a proper propagation of microscopic cross-section uncertainty are detailed in the next subsections.
4.1.1. Uncertainty Analysis of the Scattering Cross-Section
The scattering source can be expanded over the scattering reactions of every nuclide, where one index indicates whether the reaction is elastic or inelastic and another refers to the nuclide. In general, the P0 and P1 scattering matrices in multigroup format computed by NJOY are based, within the ENDF-6 formalism, on the file that accounts for energy-angle distributions of the different reactions. For example, the MT = 2 reaction is considered for elastic scattering, while all the reactions present in that file between MT = 51 and MT = 91 should be taken into account for inelastic scattering.
Let us analyze the P0 scattering matrix. For the nominal case, the following relationship between the energy-integrated scattering cross-section and the scattering matrix holds:

    σs^g = Σ_{g'} σs0^{g→g'}.
Since uncertainties are only given for the isotropic scattering reaction, any sampling of the scattering cross-section, σs,pert^g, can be propagated to the scattering matrix if the nominal transfer probabilities are kept constant, that is,

    σs0,pert^{g→g'} = (σs,pert^g / σs^g) σs0^{g→g'}.
In the nominal case of the transport-corrected version, a degree of linear anisotropy can be taken into account by modifying the diagonal of the scattering matrix as follows:

    σs0^{g→g} → σs0^{g→g} − μ̄^g σs^g,

where μ̄^g is the average cosine of the scattering angle in group g.
As shown before in Section 2, uncertainties for the average cosine of the scattering angle, μ̄ (mubar), are defined in JENDL-4 only for some actinides. Nevertheless, since this is not the case for the ENDF/B-VII.1 library, perturbations to μ̄ were not considered in this paper because otherwise a fair comparison between the distinct uncertainty assessments would not be possible.
If any nondiagonal element of the scattering matrix is considered isotropic, any perturbation can be balanced in the transport equation, since the total microscopic cross-section is given by the sum of the absorption and the corrected scattering cross-sections. This means that

    σt^g = σc^g + σf^g + σs^g,

where the capture and fission perturbations, σc,pert^g and σf,pert^g, can be directly sampled from the covariance matrices computed with ERRORJ.
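The perturbation bookkeeping described above (rescaling the rows of the P0 matrix while keeping the transfer probabilities fixed, then rebuilding the total cross-section as absorption plus scattering) can be sketched as follows, with toy two-group numbers rather than DRAGLIB data:

```python
def perturb_scattering(P0, sigma_c, sigma_f, d_s, d_c, d_f):
    """Apply relative perturbations (d_s, d_c, d_f) to the scattering,
    capture and fission cross-sections of one nuclide while preserving
    the neutron balance: each row of the P0 matrix is rescaled so the
    transfer probabilities stay fixed, and the total cross-section is
    rebuilt as absorption + scattering."""
    G = len(P0)
    P0_new = [[P0[g][h] * (1.0 + d_s[g]) for h in range(G)]
              for g in range(G)]
    sigma_s_new = [sum(P0_new[g]) for g in range(G)]
    sigma_c_new = [sigma_c[g] * (1.0 + d_c[g]) for g in range(G)]
    sigma_f_new = [sigma_f[g] * (1.0 + d_f[g]) for g in range(G)]
    sigma_t_new = [sigma_c_new[g] + sigma_f_new[g] + sigma_s_new[g]
                   for g in range(G)]
    return P0_new, sigma_t_new

# Toy 2-group data (barns), purely illustrative.
P0 = [[4.0, 1.0], [0.0, 8.0]]          # sigma_s0, group-to-group
sigma_c, sigma_f = [0.5, 2.0], [0.0, 3.0]
d_s, d_c, d_f = [0.10, -0.05], [0.02, 0.0], [0.0, 0.01]
P0_new, sigma_t_new = perturb_scattering(P0, sigma_c, sigma_f,
                                         d_s, d_c, d_f)
```

Because every row of P0 is scaled by a single factor, the relative transfer probabilities σs0^{g→g'}/σs^g are unchanged, and the rebuilt σt keeps both sides of the transport equation consistent.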
4.1.2. Uncertainty Analysis of the Fission Spectrum
Equation (7) is expressed in such a way that the fission spectrum should always satisfy the following normalization condition:

    Σ_{g=1}^{G} χ^g = 1.

If a sample is to be drawn for the different spectrum groups, the perturbed spectrum must be carefully renormalized to unity. In the statistical uncertainty approach, this can be achieved by dividing each of the perturbed group terms of the spectrum by the sum of all the perturbed group terms. For a certain sample, this can be illustrated as follows:

    χ̃^g = χpert^g / Σ_{g'=1}^{G} χpert^{g'},

where the new perturbed fission spectrum χ̃ satisfies the normalization condition, that is,

    Σ_{g=1}^{G} χ̃^g = 1.
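A minimal sketch of this renormalization step (the group-wise relative standard deviations below are illustrative, not ERRORJ output):

```python
import random

def renormalize_chi(chi_nominal, rel_sd, rng=random.Random(1)):
    """Sample a perturbed fission spectrum group by group and rescale
    it so that it sums to one, preserving the neutron balance in the
    transport equation. rel_sd holds the relative standard deviation
    assumed for each spectrum group."""
    perturbed = [max(c * (1.0 + rng.gauss(0.0, s)), 0.0)
                 for c, s in zip(chi_nominal, rel_sd)]
    total = sum(perturbed)
    # divide each perturbed group term by the sum of all of them
    return [c / total for c in perturbed]

# Toy 4-group spectrum, already normalized to unity.
chi = [0.60, 0.30, 0.08, 0.02]
chi_tilde = renormalize_chi(chi, rel_sd=[0.05, 0.05, 0.10, 0.20])
```

Whatever perturbation is drawn, the returned spectrum sums to one, so the fission source term of (7) remains properly normalized.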
4.2. Sampling the DRAGLIB Library
For our study, the multigroup microscopic cross-sections of certain isotopes are treated as random variables following a normal PDF. Therefore, for each cross-section of a given nuclide, the nominal cross-section value at each energy group corresponds to the mean value. Since the LHS methodology described in the previous section assumes that the different variables are independent, the Latin hypercube procedure developed by Iman and Conover [26] for sampling correlated variables was followed. This procedure is based not directly on the covariance matrix but on the correlation matrix. Nevertheless, it can be applied in a straightforward manner, because the ERRORJ output can be processed by the NJOYCOVX [27] program in order to obtain directly, for each reaction, the variance of each group and the associated correlation matrices.
A final total correlation matrix needs to be assembled from all the individual self- and mutual-reaction correlation matrices. This corresponds to a square matrix of size 172 × (number of individual correlation matrices). Before starting the sampling procedure, the total correlation matrix must be positive definite. If it is not, the negative eigenvalues contained in the diagonal of the eigenvalue matrix should be made slightly positive (creating a corrected eigenvalue matrix Λ'). Then, the new positive definite total correlation matrix takes the form C' = QΛ'Qᵀ, where Q is a matrix containing the eigenvectors of the original correlation matrix.
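A minimal numpy sketch of this eigenvalue repair, assuming the negative eigenvalues are floored at a small positive value `eps` (the 3×3 indefinite matrix below is a made-up example, not one of the paper's correlation matrices):

```python
import numpy as np

def make_positive_definite(corr, eps=1e-8):
    """Replace negative eigenvalues of a symmetric correlation matrix with a
    small positive floor and rebuild the matrix from its eigenvectors."""
    eigvals, eigvecs = np.linalg.eigh(corr)            # corr is symmetric
    eigvals_fixed = np.where(eigvals < eps, eps, eigvals)
    fixed = eigvecs @ np.diag(eigvals_fixed) @ eigvecs.T
    # rescale so the result still has a unit diagonal (a valid correlation matrix)
    d = np.sqrt(np.diag(fixed))
    return fixed / np.outer(d, d)

# A slightly indefinite "correlation" matrix (hypothetical)
C = np.array([[1.0, 0.9, 0.2],
              [0.9, 1.0, 0.9],
              [0.2, 0.9, 1.0]])
C_fixed = make_positive_definite(C)
print(np.linalg.eigvalsh(C_fixed).min())  # small but positive
```

The final rescaling step keeps the diagonal at exactly one, which the eigenvalue replacement alone does not guarantee.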
For each nuclide, the procedure for correlated variables begins by taking an LHS sample X based on the individual group variances, assuming that the group cross-section values are independent. Here, X is an N × K matrix, where K is the total number of multigroup cross-sections and N the number of samples. The aim of the procedure is to rearrange the values in the individual columns of X so that a desired rank correlation structure results among the individual variables. This is achieved by relating the correlation coefficients of X to the total correlation matrix C.
If the correlation matrix of the sample X is called T, the method applies a Cholesky decomposition to both the desired correlation matrix C and T, obtaining, respectively, the lower triangular matrices P and Q that satisfy C = PPᵀ and T = QQᵀ. Then, the target or desired matrix X* can be computed as X* = XSᵀ, where the matrix S = PQ⁻¹ relates P and Q.
In the end, X* has a correlation matrix equal to C, and the values of each variable (column) of X must be rearranged so that they have the same ranks (order) as in the target matrix X*. That is why this method is known as the rank-induced method.
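The rank-induced procedure described above can be sketched as follows; this is a simplified numpy version with illustrative variable names, and the two-variable sample with target correlation 0.8 is a made-up example:

```python
import numpy as np

def iman_conover(X, target_corr):
    """Rearrange the columns of an independent sample X (n x k) so that its
    rank correlation structure approximates target_corr (Iman & Conover)."""
    n, k = X.shape
    P = np.linalg.cholesky(target_corr)        # desired:  C = P P^T
    E = np.corrcoef(X, rowvar=False)
    Q = np.linalg.cholesky(E)                  # sample:   T = Q Q^T
    S = P @ np.linalg.inv(Q)                   # X* = X S^T then has corr ~ C
    X_star = X @ S.T
    X_out = np.empty_like(X)
    for j in range(k):
        ranks = np.argsort(np.argsort(X_star[:, j]))  # rank of each target entry
        X_out[:, j] = np.sort(X[:, j])[ranks]         # same values, new order
    return X_out

# Two standard-normal "group cross-sections" with a desired correlation of 0.8
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
C = np.array([[1.0, 0.8],
              [0.8, 1.0]])
Xc = iman_conover(X, C)
print(np.corrcoef(Xc, rowvar=False)[0, 1])  # close to the target 0.8
```

Because the values within each column are only reordered, the marginal distribution of every variable is preserved exactly, which is the key property of the rank-induced method.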
Since ERRORJ can only evaluate one dilution at a time, a methodology was developed in this work to self-shield the cross-section covariances at all dilutions and temperatures. Because ERRORJ provides both the relative and the absolute covariance matrices, only one evaluation is necessary, at a single temperature and dilution (i.e., infinite dilution and 273 K). Afterwards, it is only required to multiply the relative multigroup covariance matrix by the cross-section values at each energy group. This scheme is exemplified in Figure 12.
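The scaling step amounts to Cov[g,g'] = σ_g σ_g' RelCov[g,g'], with the σ_g taken at the desired temperature and dilution. A short sketch, where the 3-group relative covariance matrix and cross-section values are hypothetical:

```python
import numpy as np

def absolute_covariance(rel_cov, xs):
    """Scale a relative multigroup covariance matrix by the group-wise
    cross-section values:  Cov[g, g'] = xs[g] * xs[g'] * RelCov[g, g']."""
    return rel_cov * np.outer(xs, xs)

# Hypothetical 3-group relative covariance and self-shielded cross-sections
rel_cov = np.array([[4.9e-3, 1.0e-3, 0.0],
                    [1.0e-3, 2.5e-3, 5.0e-4],
                    [0.0,    5.0e-4, 1.0e-3]])
xs = np.array([580.0, 40.0, 10.0])   # barns, at the chosen T and dilution
cov = absolute_covariance(rel_cov, xs)
print(np.sqrt(np.diag(cov)) / xs)    # recovers the relative std deviations
```

Only the cross-section vector changes between temperatures and dilutions, so a single ERRORJ evaluation of the relative covariance suffices, as described above.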
For moderators and some other materials, only the capture cross-section and the scattering matrix are to be perturbed, already in the DRAGLIB format. It is important to modify the total cross-section according to the different perturbations, since the total cross-section is used by the code and the neutron balance must be preserved. For the important actinides present in LWRs, the capture cross-section, NuSigmaFission, and the fission spectrum should also be modified in DRAGLIB. The total cross-section for these cases should be modified and transport-corrected according to (11) and (12). In principle, according to the code developers [3], the transport correction is made at the code level, and thus the total cross-section included in DRAGLIB should be based only on isotropic terms. However, in the implemented statistical methodology, DRAGLIB is modified to include the transport-corrected version at each sample; therefore, while performing lattice calculations, a flag must be raised at the input deck level in order to inform the code not to perform the transport correction.
5. Results
5.1. Uncertainty Analysis
The TMI-1 test case corresponds to a PWR fuel assembly segment with poison at full power conditions (i.e., pellet temperature of 900 K). Four fuel pins are doped with gadolinia as a burnable poison. The actual UO₂-Gd₂O₃ fuel has a density of 10.144 g/cm³, the fuel enrichment is 4.12 w/o, and the Gd₂O₃ concentration is 2 wt%. Important geometrical rod parameters are presented in Table 1; more information, such as the isotopic composition, can be found in [6].

The nominal solution to this exercise is shown in Tables 2 and 3, where the fast and thermal macroscopic cross-sections are presented using libraries based on both JENDL-4 and ENDF/B-VII.1 data. For example, for this exercise, Ball [28] computed a k∞ value of 1.40340 based on the 69-group IAEA library. All these nominal values can be used as a point of comparison for the uncertainty results.


The final sample of 450 elements is sufficient to cover 95% of the output space formed by the different homogenized macroscopic cross-sections, k∞, and diffusion coefficients with 95% confidence, since, as previously explained, a sample size of 410 suffices. The relative uncertainty of each output parameter is defined as follows:
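Taking the relative uncertainty as the percent ratio of the sample standard deviation to the sample mean (an assumption here, since the original formula is not reproduced), it can be computed directly from the output sample; the 450 k∞ values below are synthetic stand-ins:

```python
import numpy as np

def relative_uncertainty(samples):
    """Percent relative uncertainty of an output parameter: the sample
    standard deviation divided by the sample mean, times 100."""
    return 100.0 * np.std(samples, ddof=1) / np.mean(samples)

# Synthetic sample of 450 k-infinity values (illustrative only)
rng = np.random.default_rng(1)
kinf = rng.normal(1.4034, 0.007, size=450)
print(relative_uncertainty(kinf))
```

The same function applies unchanged to each homogenized macroscopic cross-section and diffusion coefficient in the output sample.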
Uncertainty results for k∞ are presented in Table 4. For the two-group macroscopic cross-sections and diffusion coefficients, the uncertainty results based on JENDL-4 are shown in Tables 5, 6, and 7, while those based on ENDF/B-VII.1 are shown in Tables 8, 9, and 10.







The correlation matrices among the different output parameters are shown in Figures 13 and 14.
5.2. Analysis of the Results
As can be appreciated from the previous study, the computed uncertainties in the output parameters are much higher for the JENDL-4 case than for the ENDF/B-VII.1 case. For example, the standard deviation of the NuSigmaFission cross-section based on JENDL-4 is 78 times larger than its ENDF/B-VII.1 counterpart. In a previous sensitivity study applied to a PWR fuel segment and based on JENDL-4 [9], it was found that the most dominant input parameter corresponded to the U-235 fission reaction. A comparison of the ERRORJ variances computed from both NDLs for this reaction is shown in Figure 15.
It can be seen that, up to 1000 eV, the uncertainties based on JENDL-4 data are much larger than those based on ENDF/B-VII.1. This creates a large sampling variability of the U-235 fission microscopic cross-section. This effect at 293 K and infinite dilution is presented in Figure 16, where two different samples of 100 elements each were drawn based on the JENDL-4 and ENDF/B-VII.1 covariance data.
A large difference is observed in the spread of the samples for thermal energies and almost up to the last resonant energies. The presence of large relative variances in JENDL-4 for the thermal groups (~7%) compared with the small relative variances in ENDF/B-VII.1 (~0.5%), together with variance differences of up to a factor of 10 at the resonances, is the cause of such a large sampling variability between the two libraries.
Since the uncertainties included in JENDL-4 for U-235 fission are very high compared with, for instance, those included in the ENDF/B-VII.1 library, this reaction becomes the most dominant. Other studies based on the SCALE 44-group covariance matrices [28, 29] suggested that the capture microscopic cross-section is the most influential one. Indeed, it is natural to expect capture cross-sections to have a large impact on lattice calculations, since capture is the only reaction that unbalances just one side of the neutron transport equation (i.e., disappearance at a certain energy group). Nevertheless, inconsistent uncertainties among the different input reactions make the uncertainty computations very biased.
6. Conclusions
In this paper, a statistical uncertainty analysis was performed on lattice calculations using the DRAGON (version 4.05) code. The input uncertainty space corresponded to the microscopic cross-sections of the different nuclides of the DRAGLIB library. This work is one of the first attempts to process, in multigroup format, uncertainties from modern nuclear data libraries such as JENDL-4 and ENDF/B-VII.1 so that they can be applied to the uncertainty assessment of lattice calculations. Thus, confidence in the results of advanced lattice codes can be obtained through the use of a statistical uncertainty analysis.
By comparing the relative uncertainties obtained from the two different NDLs, a huge difference could be observed. It can be concluded that large differences in the computed covariances, such as those existing between JENDL-4 and ENDF/B-VII.1 for the U-235 fission reaction, are the cause of such biases in the uncertainty results. This conclusion was supported by comparing the spread of the different samples of that microscopic cross-section: much larger spreads were obtained in the thermal and resonant regions when the sampling was based on JENDL-4 than when it was based, for instance, on ENDF/B-VII.1 data.
The results obtained in this work are important because they demonstrate that it is feasible to statistically perturb and propagate basic uncertainty data through lattice calculations with current computational technology. This is also the first step toward an integral statistical uncertainty methodology for nuclear reactor predictions using advanced models, since the lattice code outputs are to be used as inputs to the core simulators. Further studies may include a global, nonparametric sensitivity analysis, where the correlation between the different microscopic and macroscopic cross-sections can be assessed. Geometrical uncertainties, as well as state-variable uncertainties, can also be included.
Uncertainty analysis applied to lattice calculations is essential for trusting LWR core designs, because the computation of the homogenized and energy-collapsed macroscopic cross-sections is the first step in the modeling of LWRs. Therefore, confidence in the subsequent calculation of the effective neutron multiplication factor is directly bounded by the computed uncertainties of the lattice code output parameters.
Abbreviations
ε:  Fast fission factor
p:  Resonance escape probability
f:  Thermal utilization factor
η:  Thermal fission factor
Σ_R:  Removal macroscopic cross-section (1/cm)
Σ_{1→2}:  Fast down-scattering macroscopic cross-section (1/cm)
Σ_{2→1}:  Thermal up-scattering macroscopic cross-section (1/cm)
Σ_{a1}:  Fast absorption macroscopic cross-section (1/cm)
Σ_{a2}:  Thermal absorption macroscopic cross-section (1/cm)
νΣ_{f1}:  Fast Nu-sigma-fission macroscopic cross-section (1/cm)
νΣ_{f2}:  Thermal Nu-sigma-fission macroscopic cross-section (1/cm)
φ_g:  Scalar neutron flux at energy group g (neutrons/(cm²·s))
Σ_{tr,g}:  Transport-corrected total macroscopic cross-section at energy group g (1/cm)
Σ_{s,g}:  Transport-corrected scattering macroscopic cross-section at energy group g (1/cm)
σ_{s,g→g'}^i:  Scattering matrix element for the inelastic or elastic reaction, of nuclide i, from energy group g to g'
σ_{c,g}^i:  Capture microscopic cross-section of nuclide i at energy group g
σ_{f,g}^i:  Fission microscopic cross-section of nuclide i at energy group g
ν_g:  Nubar at energy group g
μ̄_g:  Mubar at energy group g
χ_g:  Normalized fission spectrum at energy group g.
References
 R. E. MacFarlane and A. C. Kahler, “Methods for processing ENDF/B-VII with NJOY,” Nuclear Data Sheets, vol. 111, no. 12, pp. 2739–2890, 2010.
 A. Hébert, “A nuclear data library production system for advanced lattice codes,” in International Conference on Nuclear Data for Science and Technology, pp. 701–704, 2007.
 G. Marleau and A. Hébert, “A user guide for DRAGON version 4,” Institute of Nuclear Energy Internal Report IGE-294, École Polytechnique de Montréal, 2009.
 K. Shibata, O. Iwamoto, T. Nakagawa et al., “JENDL-4.0: a new library for nuclear science and engineering,” Journal of Nuclear Science and Technology, vol. 48, no. 1, pp. 1–30, 2011.
 M. B. Chadwick, M. Herman, P. Obložinský et al., “ENDF/B-VII.1 nuclear data for science and technology: cross sections,” Nuclear Data Sheets, vol. 112, no. 12, pp. 2887–2996, 2011.
 K. Ivanov et al., “Benchmark for Uncertainty Analysis in Modeling (UAM) for Design, Operation and Safety Analysis of LWRs, vol. I: Specification and Support Data for the Neutronic Cases (Phase I),” NEA/NSC/DOC(2011), Version 2, 2011.
 J. C. Helton and F. J. Davis, “Latin hypercube sampling and the propagation of uncertainty in analyses of complex systems,” Reliability Engineering and System Safety, vol. 81, no. 1, pp. 23–69, 2003.
 A. Hernandez-Solis, C. Ekberg, A. Ö. Jensen et al., “Statistical uncertainty analyses of void fraction predictions using two different sampling strategies: Latin hypercube and random sampling,” in Proceedings of the 18th International Conference on Nuclear Engineering (ICONE '10), Xi'an, China, May 2010.
 A. Hernandez-Solis, Uncertainty and Sensitivity Analysis Applied to LWR Neutronic and Thermal-Hydraulic Calculations [Ph.D. thesis], Chalmers University of Technology, 2012.
 K. Kosako and N. Yamano, “Preparation of a Covariance Processing System for the Evaluated Nuclear Data File JENDL (III),” JNC TJ9440 99003, 1999.
 G. Chiba and M. Ishikawa, “Revision and application of the covariance data processing code, ERRORJ,” in International Conference on Nuclear Data for Science and Technology, pp. 468–471, October 2004.
 H. Glaeser, “GRS method for uncertainty and sensitivity evaluation of code results and applications,” Science and Technology of Nuclear Installations, vol. 2008, Article ID 798901, 6 pages, 2008.
 S. S. Wilks, “Determination of sample sizes for setting tolerance limits,” Annals of Mathematical Statistics, vol. 12, no. 1, pp. 91–96, 1941.
 S. S. Wilks, “Statistical prediction with special reference to the problem of tolerance limits,” Annals of Mathematical Statistics, vol. 13, no. 4, pp. 400–409, 1942.
 S. S. Wilks, Mathematical Statistics, Wiley, New York, NY, USA, 1962.
 A. Wald, “An extension of Wilks' method for setting tolerance limits,” Annals of Mathematical Statistics, vol. 14, pp. 44–55, 1943.
 A. Wald and J. Wolfowitz, “Tolerance limits for a normal distribution,” Annals of Mathematical Statistics, vol. 17, pp. 208–215, 1946.
 A. Guba, M. Makai, and L. Pál, “Statistical aspects of best estimate method-I,” Reliability Engineering and System Safety, vol. 80, no. 3, pp. 217–232, 2003.
 G. E. Noether, Elements of Nonparametric Statistics, Wiley, New York, NY, USA, 1967.
 H. Scheffe and J. W. Tukey, “A formula for sample sizes for population tolerance limits,” Annals of Mathematical Statistics, vol. 15, no. 2, p. 217, 1944.
 H. Ackermann and K. Abt, “Designing the sample size for nonparametric multivariate tolerance regions,” Biometrical Journal, vol. 26, no. 7, pp. 723–734, 1984.
 A. Matala, “Sample size requirement for Monte Carlo simulations using Latin hypercube sampling,” Internal Report 60968, Department of Engineering Physics and Mathematics, Helsinki University of Technology, 2008.
 A. Hernandez-Solis, C. Ekberg, C. Demazière, A. Ödegård Jensen, and U. Bredolt, “Uncertainty and sensitivity analyses as a validation tool for BWR bundle thermal-hydraulic predictions,” Nuclear Engineering and Design, vol. 241, no. 9, pp. 3697–3706, 2011.
 M. Stein, “Large sample properties of simulations using Latin hypercube sampling,” Technometrics, vol. 29, no. 2, pp. 143–151, 1987.
 W. M. Stacey, Nuclear Reactor Physics, Wiley-VCH, Weinheim, Germany, 2004.
 R. L. Iman and W. J. Conover, “A distribution-free approach to inducing rank correlation among input variables,” Communications in Statistics - Simulation and Computation, vol. B11, no. 3, pp. 311–334, 1982.
 OECD/NEA Data Bank, “ERRORJ, multigroup covariance matrices generation from ENDF-6 format,” Package No. NEA-1676/07, 2010.
 M. Ball, Uncertainty Analysis in Lattice Reactor Physics Calculations [Ph.D. thesis], McMaster University, 2012.
 M. Pusa, “Incorporating sensitivity and uncertainty analysis to a lattice physics code with application to CASMO-4,” Annals of Nuclear Energy, vol. 40, pp. 153–162, 2012.
Copyright
Copyright © 2013 Augusto HernándezSolís et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.