Science and Technology of Nuclear Installations

Volume 2017 (2017), Article ID 7275346, 10 pages

https://doi.org/10.1155/2017/7275346

## Manufacturing Data Uncertainties Propagation Method in Burn-Up Problems

^{1}Reactor Physics and Fuel Cycle Division, Reactor Studies Department, CAD, DEN, French Atomic Energy and Alternative Energies Commission (CEA), 13108 Saint Paul-lès-Durance, France

^{2}Experimental Physics Division, Reactor Studies Department, CAD, DEN, French Atomic Energy and Alternative Energies Commission (CEA), 13108 Saint Paul-lès-Durance, France

Correspondence should be addressed to Thomas Frosio; thomas.frosio@gmail.com

Received 25 August 2016; Revised 15 November 2016; Accepted 12 December 2016; Published 26 January 2017

Academic Editor: Alejandro Clausse

Copyright © 2017 Thomas Frosio et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

A nuclear data-based uncertainty propagation methodology is extended to enable propagation of manufacturing/technological data (TD) uncertainties in a burn-up calculation problem, taking into account correlation terms between Boltzmann and Bateman terms. The methodology is applied to reactivity and power distributions in a Material Testing Reactor benchmark. Due to the inherent statistical behavior of manufacturing tolerances, a Monte Carlo sampling method is used to determine output perturbations on integral quantities. A global sensitivity analysis (GSA) is performed for each manufacturing parameter and makes it possible to identify and rank the influential parameters whose tolerances need to be better controlled. We show that the overall impact of some TD uncertainties, such as uranium enrichment or fuel plate thickness, on the reactivity is negligible because the different core areas induce compensating effects on this global quantity. However, local quantities, such as power distributions, are strongly impacted by TD uncertainty propagation. For isotopic concentrations, no clear trend appears in the results.

#### 1. Introduction

Sensitivity analysis (SA) methods are invaluable tools for studying how the uncertainty in a model output depends on the different sources of uncertainty in the model inputs [1]. As a ranking method, SA can be used to determine which input variables contribute most to an output behavior, to identify the noninfluential inputs, or to clarify correlated effects within the model. The objectives of SA are numerous; one can mention model verification and understanding, model simplification, and factor prioritization [2]. Finally, SA helps in validating a computer code, guiding research efforts, or providing justification in terms of system design safety. There is a large body of literature on procedures and techniques for SA; the main outcomes can be found in [3, 4]. The many possible uses of SA fall within the categories of decision support, communication, increased understanding, and system quantification and model development. Many different approaches to SA are described elsewhere, varying in the experimental design used and in the way results are processed. An example of manufacturing uncertainties propagation is described in [5].

Tolerance analysis is also becoming an important tool for nuclear engineering design. The seemingly arbitrary task of assigning tolerances can have a large effect on the cost and performance of manufactured products, such as fuel design and fabrication. However, propagating tolerances instead of uncertainties does not lead to a representative approach of the errors because, in this case, only a bias is taken into account. It is therefore imperative to understand what kind of physical data creates and propagates uncertainties on the neutronics parameters, for both safety and performance reasons. In Material Testing Reactors (MTR), the performance parameters can be the core fuel cycle or isotope production. Therefore, it is necessary to calculate isotopic concentration uncertainties in the reactor core.

We focus in this paper on technological data propagation, with special attention to uranium enrichment and plate thickness, as an example of manufacturing uncertainties propagation. In a general *engineering* framework much broader than nuclear engineering alone, as tolerances affect both the cost and the quality of a product, tolerancing is now considered a critical engineering design function. As such, tolerance allocation is a significant task that deserves considerable attention. The current situation is a compromise between designers, who usually specify tight tolerances to ensure high quality, and manufacturers, who prefer loose tolerances to reduce manufacturing cost [6]. Adequate (i.e., reasonable or well-balanced) tolerances must therefore be achieved in order to both ensure the desired performance and ease the fabrication process.

In general, there are no specific guidelines for allocating tolerances for any component, but [4] quotes the following paragraph [7]: “The most common practice is to allocate some tolerance that seems appropriate on the basis of experience or intuition, and then conduct an analysis to ensure that the allocated tolerance suits the desired design function. In order to do this, the designer must be able to realize all possible effects of the tolerances specified, especially if universal interchangeability is one of the design goals. The effects of specified tolerances are generally analyzed by creating an analytical model that can predict the accumulation of tolerances in an assembly. Prediction of tolerance accumulation is necessary because critical fits, clearances, etc. are usually controlled by the accumulation of several component tolerances”.

After a reminder of the theoretical approach and of the implementation of tolerance analysis in the MC propagation methodology and UQ in the coupled Boltzmann/Bateman problem, a practical example is given for a complete depletion calculation, based on a Material Testing Reactor (MTR) benchmark. The latter is described, and the associated tolerance data, based on an actual series of manufacturing feedback, are detailed. We will focus on two main technological parameters: the uranium enrichment of the plates and their thicknesses.

The uncertainty propagation will be performed for two different integral quantities: the reactivity, a global parameter, and the power factor (i.e., the plate fission rate distribution), which is more sensitive to local variations. Particular attention will also be paid to the concentrations of some important isotopes.
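Once an output quantity such as the reactivity has been evaluated over the whole sample of perturbed cores, the propagated uncertainty is summarized by simple sample statistics. The following sketch uses a synthetic reactivity sample (the values in pcm are assumptions for illustration; in the study each value would come from a full core calculation).

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical sample of reactivity outputs (pcm) from 2,000 perturbed
# core calculations; values are synthetic, for illustration only.
reactivity = rng.normal(150.0, 40.0, size=2000)

mean = reactivity.mean()                          # central estimate
sigma = reactivity.std(ddof=1)                    # 1-sigma propagated uncertainty
lo, hi = np.percentile(reactivity, [2.5, 97.5])   # empirical 95% interval
```

The sample standard deviation is reported as the propagated 1-sigma uncertainty; the percentile interval is distribution-free and does not assume Gaussian outputs.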

#### 2. Evaluation of the Technological Uncertainties

The method used to evaluate the uncertainties comes from complete work performed in [8, 9].

The complete evaluation of propagated uncertainties on neutronics parameters requires a precise knowledge of both nuclear data and manufacturing uncertainties. While the former are relatively well known and characterized through consistent covariance matrices, such as the latest ENDF/B-VII.1 [10] or COMAC [11], manufacturing uncertainties are sometimes sparse and often not taken into account in the UQ process. However, those values can be built by considering (supposedly known) tolerances.
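A common way to build an uncertainty from a known tolerance, following standard metrology practice (this conversion is not detailed in the paper and is given here as an assumption), is to interpret the tolerance half-width either as the bound of a uniform distribution or as a k-sigma bound of a Gaussian:

```python
import math

def sigma_from_tolerance(half_width, model="uniform", k=2.0):
    """Convert a tolerance half-width into a standard deviation.

    "uniform": value equally likely anywhere in the interval -> a / sqrt(3)
    "normal":  tolerance read as a k-sigma Gaussian bound    -> a / k
    """
    if model == "uniform":
        return half_width / math.sqrt(3.0)
    if model == "normal":
        return half_width / k
    raise ValueError(f"unknown model: {model}")

# Example with a hypothetical +/-0.05 mm plate-thickness tolerance:
sigma_u = sigma_from_tolerance(0.05)                  # uniform assumption
sigma_n = sigma_from_tolerance(0.05, "normal", k=2.0) # 2-sigma reading
```

The choice between the two readings is itself an expert judgment: the uniform model is the more conservative default when nothing is known about the process distribution.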

The statistical nature of uncertainty analysis naturally calls for a Monte Carlo sampling methodology. Monte Carlo sampling methods can be used to perform uncertainty propagation throughout the whole core calculation process. The manufacture of a technological item is simulated, for example, by creating a set of component dimensions with small random changes that mimic natural process variations. In this case, a Gaussian model can be selected as the statistical distribution of the uncertainties, and the tolerances can be chosen as variance values built by expert elicitation.

Next, the resulting assembly dimensions are calculated from the simulated set of component dimensions, and the number of outliers falling outside the specification limits is counted. Sample sizes generally range between 5,000 and 100,000, depending on the required accuracy of the simulation: the accuracy of Monte Carlo sampling increases with sample size. The computational effort of large sample sizes can obviously be significant, but Monte Carlo sampling offers many advantages because of its flexibility. It allows the generation of a sample of uncertain inputs, from which a corresponding sample of calculation-code outputs is obtained.
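The sampling-and-counting procedure described above can be sketched as follows for a linear dimensional stack-up. The plate count (22) and nominal thickness (1.27 mm) match the benchmark described later; the 0.01 mm scatter and the specification limits are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000  # sample size within the 5,000-100,000 range quoted above

# Simulate manufacture: 22 plates of nominal thickness 1.27 mm, each
# with a hypothetical 0.01 mm (1-sigma) Gaussian manufacturing scatter.
nominal, sigma, n_plates = 1.27, 0.01, 22
plates = rng.normal(nominal, sigma, size=(n, n_plates))

# Assembly dimension = sum of the component dimensions (linear stack-up).
stack = plates.sum(axis=1)

# Count the outliers outside hypothetical specification limits.
lower, upper = n_plates * nominal - 0.15, n_plates * nominal + 0.15
outliers = np.count_nonzero((stack < lower) | (stack > upper))
fraction = outliers / n
```

Because the plate deviations are independent, the standard deviation of the stack grows as sigma * sqrt(22), not 22 * sigma, which is why the assembly-level outlier fraction must be estimated by sampling rather than by naively summing tolerances.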

Of course, the best and most rigorous way is to obtain actual measurements of each series of manufacturing parameters, which would allow building the propagated bias on integral parameters between the theoretical core (i.e., without tolerances) and the actual (i.e., *as-built*) core. The measurement of each sample enables postulating a statistical model of its manufacturing uncertainty. This is the methodology used in the present study.

##### 2.1. Benchmark Description

The benchmark used in the present paper is a Material Testing Reactor based on 20% ^{235}U-enriched fuel plates [9]. A single type of assembly has been modelled to build the whole core. The benchmark does not contain any absorbing assembly, in order to simplify the calculation, the goal being to give orders of magnitude of the propagated uncertainties.

A fuel assembly is made of 22 Zircaloy plates, each 1.27 mm thick (in green in Figure 1). Each plate contains a 0.51 mm thick fuel blade, called "the meat." The blue elements of Figure 1 represent the water. The assembly stiffeners are made of aluminium.