Abstract

Safe operation and industrial improvements result from technology development and operational experience (OE) feedback. The long life span of many industrial facilities makes OE particularly important. Proper assessment and understanding of OE remain a challenge because of the relations within the organizational system, its complexity, and the number of OE events acquired. One way to improve the understanding of OE events is to focus their investigation and analyze the most important ones in detail. An OE ranking method was developed to select the most important events based on basic event parameters and the analytical hierarchy process applied at the level of event groups. This paper investigates further how uncertainty in the model affects the ranking results. An analysis was performed on two databases covering 20 years of nuclear power plant operation in France and Germany. Of all uncertainties, the presented analysis selected the ranking index weights as the most relevant for consideration. The presented uncertainty analysis clearly shows that considering uncertainty is important for all results, especially for event groups ranked closely together and next to the most important one. Together with the previously performed sensitivity analysis, uncertainty assessment provides additional insights and a better judgment of the event groups’ importance for further detailed investigation.

1. Introduction

Collecting and understanding operating experience is an important part of maintaining continuous, reliable, and safe operation of any complex industrial facility, including nuclear power plants. This operating experience is propagated from the facility to the national and international levels (e.g., [1, 2]). More detailed investigation is performed only for selected important events. The selection process is easy for accidents but not for a large number of events, because it requires resources and may imply a degree of subjectivity. One approach to selecting events for detailed investigation is to develop and apply a method based on event group ranking. Ranking results help regulators and industry prioritize resources for maintaining and improving safety and operation.

In [3], four different approaches to event group ranking were compared, with findings about their differences and a proposed preferred method. The selected method is based on the application of the analytical hierarchy process, which allows easier determination of the relative importance of all ranking indexes (RI). The sensitivity of the method was analyzed in [4], with findings on how the grouping of events and the selected dataset (i.e., technology and country) influence the results.

This paper investigates how the uncertainty of the ranking indexes’ relative importance (i.e., their weights) influences the ranking results. The described method is applied separately to 20 years of events from nuclear power plant operation in France and Germany. The focus is only on the uncertainty within each data source, with no aim of comparing the two, because of significant differences in technology, data collection, and other factors.

2. Method

Here, we first briefly present the event group ranking method and then proceed to the uncertainty analysis. The description of the ranking method focuses on the elements most critical for the uncertainty assessment. As mentioned before, a complete description of the method is available in [3].

2.1. Ranking Method and Inputs for Uncertainty Assessment

Ranking is performed at the level of event groups (EG) using selected sets of parameters. For brevity, only about half of the groups, the more important ones, are selected and presented in the paper. Table 1 presents the respective parameter values for the selected event groups (Table 3 lists the corresponding description of each EG). In total, 20 event groups are analyzed for the French (FR) and 14 for the German (DE) dataset. Because of differences in technology and data collection, some event groups do not exist in both datasets, and there are also differences in parameters. Both datasets have frequency, trend, and precursor parameters. The French dataset has a special parameter which gives the number of so-called “generic and recurring” events. There is also a difference in the number of events reported to the IAEA/NEA (International Atomic Energy Agency/Nuclear Energy Agency) international reporting system (IRS), because France chooses to report only one representative event when similar events occur in several plants of the French fleet. The trend value represents the change in the number of events over a certain period (i.e., 0.5 means that the number of events is not changing, values up to 1 mean an increasing trend, and values down to 0 represent a decreasing trend). In this analysis, the trend has been considered only for the last five years as the most relevant period. All other parameters cover the whole period of 20 years.

The ranking index value is calculated for each parameter and event group as the ratio of the number of events for that parameter in the group to the maximal number of events for the same parameter over all event groups:

$$RI_{EG,p} = \frac{N_{EG,p}}{\max_{EG} N_{EG,p}}, \tag{1}$$

where $RI$ is the ranking index, $EG$ is the event group, $p$ is the parameter, and $N$ is the number of events.

For the trend, the RI value is calculated so that 0.5 represents an unchanged number of events, values up to 1 represent an increasing number of events, and values down to 0 represent a decreasing number of events over a certain period.
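The paper does not give the exact trend formula; the following minimal Python sketch shows one possible mapping with these properties, assuming the trend is derived from event counts in two consecutive periods. The function name and the counting scheme are illustrative only and not taken from the original method.

```python
def trend_ri(n_recent: int, n_earlier: int) -> float:
    """Map event counts from two consecutive periods onto [0, 1].

    Returns 0.5 when the number of events is unchanged, values above 0.5
    for an increasing trend, and values below 0.5 for a decreasing trend.
    This is an illustrative mapping, not the exact formula used in the paper.
    """
    total = n_recent + n_earlier
    if total == 0:
        return 0.5  # no events in either period: treat as unchanged
    return n_recent / total
```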

The final total ranking value for each EG is calculated as the sum of the ranking values of all parameters multiplied by their respective weights:

$$R_{EG} = \sum_{p} w_p \, RI_{EG,p}, \tag{2}$$

where $RI$ is the ranking index, $EG$ is the event group, and $w_p$ is the weight of the respective parameter $p$.
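As an illustration, the following Python sketch implements (1) and (2) for a small hypothetical set of event groups and parameters; the counts and weights are invented for demonstration and are not taken from Tables 1 and 2.

```python
import numpy as np

# Hypothetical event counts: rows = event groups, columns = parameters
# (e.g., frequency, precursors, IRS-reported). Values are illustrative only.
counts = np.array([
    [120, 4, 7],
    [ 80, 9, 2],
    [ 45, 1, 5],
], dtype=float)

# Eq. (1): ranking index = count divided by the maximum count of that
# parameter over all event groups, so each RI lies between 0 and 1.
ri = counts / counts.max(axis=0)

# Hypothetical RI weights (relative importances), normalized to sum to 1.
weights = np.array([0.5, 0.3, 0.2])

# Eq. (2): total ranking value = weighted sum of the RIs of each event group.
total_ranking = ri @ weights
print(total_ranking)  # one total ranking value per event group
```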

These weights could be judged by experts directly or, more transparently and consistently, by using a method such as the analytical hierarchy process (AHP, [3, 5]), whose only requirement is a pairwise comparison of all parameters. Table 2 presents the set of RI weights used in this assessment, which was derived using the AHP results as the base. (A short description of the method is provided in the appendix.)

Applying (1) and (2) to all EG, using the data presented in Table 1 and the weights in Table 2, the total ranking values are quantified. The event groups with the highest ranking values are then considered the most important. (Selected results are presented in Table 4 for the FR and Table 5 for the DE dataset under the column “AHP point.”)

2.2. Approach to Uncertainty Assessment

Most generally, uncertainty consists of an aleatory and an epistemic part, the latter including parameter and model uncertainties [6]. The ranking uncertainty due to the model was considered separately, and this analysis applies to the AHP-based ranking method [3]. The uncertainty of the dataset values is considered outside this assessment because it is epistemic and not significant (in comparison with the ranking uncertainties). Therefore, we believe that data source uncertainty is less critical for the ranking. Clearly, this assumption depends on the completeness of the data collection program. Considering the central importance of the RI weights and the impact of expert judgement, they are selected as the primary uncertainty input to the assessment. The ranking approach described above is far from the complexity of some deterministic models (e.g., [7]), and the consideration of parameter uncertainty is therefore less challenging.

The sampling-based approach is selected because of its effectiveness and wide use. The basic idea is that the analysis results $y$ are a function of the uncertain inputs, $y = f(x_1, x_2, \ldots, x_n)$ [8]. This allows quantification of the uncertainty of the results and determination of the influence of the input parameters.

In this uncertainty study, the event group ranking is quantified as a statistical distribution resulting from ranking index weights defined as statistical distributions instead of point values. This allows examining the sensitivity of the event group ranking and defining the need for further assessment (i.e., a better uncertainty method regarding the input statistical distributions, etc.).

The next critical step of the uncertainty analysis is the determination of the distributions for parameter sampling. Without a sufficient basis for deriving specific distributions directly from the nature of the input parameters, they are selected based on accepted referenced approaches and suitability for this stage of the ranking uncertainty study. Two different distributions can be found in the literature related to uncertainty where AHP is applied: triangular [9] and uniform [10]. The triangular distribution is used to represent cases where we know the most likely outcome of the sampled RI weights; in this particular case, the most likely outcome is the RI weight calculated with the AHP process. The second common case is that we do not know enough about the sampled RI weights (only their smallest and largest values), in which case the uniform distribution can be used, with the smallest and largest values set to 0 and 1, respectively. The referenced works apply the distributions to the process of AHP weight determination, that is, to the pairwise comparison. For our purpose, we decided to apply both distributions to the already quantified AHP weights. In order to sample all RI weights in such a way that they always sum to 1 (100%), sampling was done for one RI weight per separate simulation, with corrections applied to the other RI weights for each sample relative to their AHP values. This means, for example, that if $RI_i$ with weight $w_i$ is sampled and the new sampled weight is $w_i^s$, then the correction (i.e., multiplier) for the other RI weights is defined by (3), and the new weight value for every other $RI_j$ is $w_j^s = c \, w_j$, where $w_j$ is the AHP-determined original weight:

$$c = \frac{1 - w_i^s}{1 - w_i}, \tag{3}$$

where $w_i^s$ is the sampled RI weight, $w_i$ is the AHP-derived (starting) weight, and $c$ is the correction (i.e., multiplier) for the other RI weights.
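A minimal Python sketch of this sampling scheme is given below, assuming numpy’s triangular sampler and illustrative AHP weights; the weight values and the sampled index are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical AHP-derived weights (must sum to 1); index 0 is the RI
# whose weight is sampled in this particular simulation.
w_ahp = np.array([0.08, 0.42, 0.30, 0.20])
i = 0  # index of the sampled RI

def sample_weights(w_ahp, i, rng):
    # Triangular distribution: minimum 0, mode = original AHP weight, maximum 1.
    w_s = rng.triangular(left=0.0, mode=w_ahp[i], right=1.0)
    # Eq. (3): correction multiplier so that all weights still sum to 1.
    c = (1.0 - w_s) / (1.0 - w_ahp[i])
    w_new = w_ahp * c
    w_new[i] = w_s
    return w_new

w = sample_weights(w_ahp, i, rng)
assert abs(w.sum() - 1.0) < 1e-12  # the corrected weights still sum to 1
```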

For the triangular distribution, a separate simulation was performed for each RI, with the minimal sampling value set to 0 and the maximum set to 1, keeping the original AHP value as the most likely. Figure 1 presents the distribution for the “frequency” RI in the German data, where the AHP weight is equal to 0.08 (Table 2). The resulting dependent distribution for the other RIs is also presented.

In the case of a uniform distribution of the input variable, after several iterations a ±33% change from the original AHP weights was selected for further analysis. A higher value would in fact constitute a sensitivity analysis, and a smaller value would not influence the results enough.
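For the uniform case, the corresponding sampling step, again followed by the sum-preserving correction from (3), might look like the short sketch below; the weights are hypothetical and the ±33% band is applied around the AHP value.

```python
import numpy as np

rng = np.random.default_rng(seed=2)
w_ahp = np.array([0.08, 0.42, 0.30, 0.20])  # hypothetical AHP weights
i = 0  # index of the sampled RI

# Uniform sampling within +/-33% of the original AHP weight, followed by
# the same correction of the remaining weights as in the triangular case.
w_s = rng.uniform(0.67 * w_ahp[i], 1.33 * w_ahp[i])
c = (1.0 - w_s) / (1.0 - w_ahp[i])
w_uniform = w_ahp * c
w_uniform[i] = w_s
```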

For this exercise, the ranking values were quantified, based on the AHP-determined RI weights, using Microsoft Excel with the Monte Carlo simulation and statistical add-in Quantum XL [11]. The number of samples was determined by comparing the sampling distributions of input and output variables for 10 and 100 thousand samples. Figure 1 presents the sampling distribution of one input variable (with the others adjusted accordingly), and Figure 2 presents the resulting distributions of the four highest ranked event groups from the same simulations. Because no significant statistical difference was found, further analysis continued with only 10 thousand samples per simulation.
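The paper performs these simulations in Microsoft Excel with the Quantum XL add-in; purely as an illustration, an equivalent Monte Carlo loop in Python with 10,000 samples per simulation could be sketched as follows, reusing the hypothetical RIs and weights of the earlier examples.

```python
import numpy as np

rng = np.random.default_rng(seed=3)
n_samples = 10_000

# Hypothetical inputs (see earlier sketches): RIs per event group (rows)
# and parameter (columns), plus AHP-derived weights; index i marks the RI
# whose weight is sampled in this simulation.
ri = np.array([[1.00, 0.44, 1.00],
               [0.67, 1.00, 0.29],
               [0.38, 0.11, 0.71]])
w_ahp = np.array([0.5, 0.3, 0.2])
i = 0

rankings = np.empty((n_samples, ri.shape[0]))
for k in range(n_samples):
    w_s = rng.triangular(0.0, w_ahp[i], 1.0)   # sample one RI weight
    c = (1.0 - w_s) / (1.0 - w_ahp[i])         # correction from eq. (3)
    w = w_ahp * c
    w[i] = w_s
    rankings[k] = ri @ w                       # total ranking value per event group

# Summary statistics of the resulting ranking value distributions.
q1, median, q3 = np.percentile(rankings, [25, 50, 75], axis=0)
print(q1, median, q3)
```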

Finally, after all simulations for all RIs are performed, the cumulative distributions of the ranking values are presented with statistical analysis and box plots [8]. This was done in order to simplify the presentation of results and to support clear conclusions about uncertainty. A separate detailed analysis of the results of each RI weight simulation is possible in the same way if found important.
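A short matplotlib sketch of such a summary box plot, pooling the samples from all RI simulations into one distribution per event group, is shown below; the placeholder data and event group labels are hypothetical.

```python
import matplotlib.pyplot as plt
import numpy as np

# 'per_ri_rankings' stands for one (n_samples x n_groups) array per RI
# simulation; here it is replaced by random placeholder data.
rng = np.random.default_rng(seed=4)
per_ri_rankings = [rng.normal(loc=0.5, scale=0.05, size=(10_000, 3))
                   for _ in range(5)]

# Pool all simulations so that each event group gets one combined distribution.
pooled = np.vstack(per_ri_rankings)

fig, ax = plt.subplots()
ax.boxplot([pooled[:, g] for g in range(pooled.shape[1])],
           labels=["EG 13.4", "EG 16.3", "EG 7.1"])  # hypothetical labels
ax.set_ylabel("Total ranking value")
plt.show()
```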

3. Results and Discussion

Based on the approach described in the previous section, one simulation was performed for every RI in each country dataset (i.e., five for the French and four for the German dataset). The following subsections present the most important results and findings.

3.1. Simulation Results

Each simulation generates a distribution of ranking values for all selected event groups. This is a vast volume of results, and for brevity graphical results are presented only for the highest ranked event groups. Most of the results are presented for simulations with the triangular distribution. For the French dataset, Figure 5 shows, separately for each RI simulation, all five RI weight distributions, and Figure 6 presents the respective resulting distributions of the ranking values of the eight most important event groups. Similarly, for Germany, Figure 7 presents all RI weight distributions and Figure 8 presents the respective resulting distributions of the ranking values of the four most important event groups. The number and the specific event groups are selected according to the overall results for each country.

Final results for the distribution of all ranking values for selected event groups are presented with a box plot in Figure 3 for France and Figure 4 for Germany. Values for Quartile 1 (Q1), Mean, Median, and Quartile 3 (Q3) are presented, together with baseline AHP RI weights ranking results in Table 4 for France and Table 5 for Germany.

For uniform distributions, simulations with ±33% sampling of the RI weights produce a much smaller influence on the final ranking values for all event groups. For brevity, only the final box plot graphs are presented for each country dataset, in Figures 9 and 10.

3.2. Discussion

The resulting combined uncertainty of the ranking values for both countries shows numerically and graphically how the ranking is affected by the RI weight uncertainty. For both countries, the overlap (judged by the values between Q1 and Q3) is smaller for the higher ranked event groups. For the French data, the two highest ranked event groups (13.4 and 16.3) are clearly separated, followed by the next five (7.1, 8.3, 12, 16.1, and 16.2). For the German data, event group 14 is clearly the highest ranked, and the following three event groups (7.1, 13.2, and 13.3) are distinctly higher ranked than the rest of the event groups.

As a form of confirmation of the simulation results, it is valuable to point out that the difference between the Median, the Mean, and the baseline AHP RI weights ranking results is very small. The difference is between −1% and 4% for the French data and between −2% and 5% for the German data (Tables 4 and 5).

Looking at the resulting ranking value distributions, it is clear that the difference between Q1 and Q3 varies between event groups. The same is true for the symmetry of the spread (regarding both size and side).

In the case of the uniform ±33% distribution of the RI weights, the Mean and Median results are the same and equal to the baseline AHP RI weights ranking results. The Q1 and Q3 values are very close, with a maximal difference of ~3% in absolute value, which is less than 10% in relative value even for the low ranked event groups. Compared to the triangular distribution case, outliers and suspected outliers are almost nonexistent. These results do not show any significant change, and they clearly add little value compared to the triangular simulations.

The presented results from the triangular simulations seem most useful for demonstrating the robustness of the ranking results obtained using only the AHP point estimates of the RI weights. Based on the two country datasets, it also seems that the uncertainty results help find a better separation between the more important event groups and the rest. Finally, for two closely ranked event groups, it seems easier to select the more important one by using the uncertainty results (e.g., FR5.1 versus FR8.3 and DE1.2 versus DE4.2).

A comparison between the two country datasets has many limitations due to various differences (i.e., technology, regulation, safety culture, etc.). This does not prevent making observations about the uncertainty results. The uncertainty results appear consistent between these two datasets despite all the mentioned differences in background.

4. Conclusion

Uncertainty assessment is important for enhancing the interpretation and usability of model results. The uncertainty of critical parameters influencing the model results can be assessed by a sampling-based approach. The AHP-determined weights of the ranking indexes are selected as the most important epistemic source of uncertainty in the event group ranking method. A triangular distribution with the most likely value determined by the AHP and limits between 0 and 1 is selected as the sampling input. Dependencies between all ranking indexes were treated with simple corrections and a separate simulation for each ranking index. Combined distributions of the ranking values of all event groups are generated from all simulations. The uncertainty assessment approach applied to the two country datasets produces consistent and valuable results.

An alternative uniform sampling distribution proved to be without sufficient merit to be considered for use.

Event group ranking with analytical-hierarchy-based ranking index weights can be improved with sampling-based uncertainty assessment, because it confirms most of the results and helps better distinguish closely ranked event groups.

Appendix

A. Abbreviations, Nomenclature and Some Additional Details

This appendix provides abbreviations, nomenclature, and some additional details about the ranking method, input data, and assessment results. This information is not essential to follow the paper; however, it may help in understanding the complete picture of the presented uncertainty approach and results.

A.1. Determination of Ranking Indexes Weighting Factors Using Analytical Hierarchy Process (AHP)

This is a short description of the application of the analytical hierarchy method for determining the relative importance of the ranking indexes in the event group ranking assessment (modified from [3]). A methodical approach is applied in order to allow easier and more consistent determination of the RI weights by the expert. This is a common approach for many decision-making applications with a large number of parameters [12].

The main advantage of AHP is that it can be used to determine the relative importance of any number of parameters, requiring only their pairwise comparison. Pairwise comparison (relative importance $a_{ij}$ of ranking index $i$ with respect to ranking index $j$) of all ranking indexes results in a reciprocal matrix $A$ with off-diagonal reciprocal elements $a_{ji} = 1/a_{ij}$. The core of the AHP is to find the solution of the equation $A\,\mathbf{w} = \lambda_{\max}\,\mathbf{w}$ using the principal eigenvalue method (where $\lambda_{\max}$ is the principal eigenvalue and $\mathbf{w}$ is the associated eigenvector) [13]. Since the eigenvector is a normalized priority weight vector (the sum of the obtained weights is normalized to one), its elements represent the normalized relative importance of one ranking index over another. The principal eigenvalue provides a measure of consistency in AHP. The principal eigenvalue and the associated eigenvector give a ranking of priorities from the pairwise matrix according to Saaty [13].
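For illustration, the principal eigenvalue and eigenvector of a small hypothetical pairwise comparison matrix can be obtained with numpy as sketched below; the comparison values are invented and not taken from the paper.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three ranking indexes,
# using Saaty's 1-9 scale; A[j, i] = 1 / A[i, j].
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)      # principal eigenvalue lambda_max
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()                  # normalized priority weights (sum to 1)
print(eigvals[k].real, w)
```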

The pairwise relative importance could be estimated using different scales; the most popular is from 1 to 9 (with 1 meaning that both RIs are valued as equally important), together with the respective reciprocal values. An additional significant advantage of AHP is that the consistency of the comparison can be quantified with a so-called consistency ratio and then iteratively improved if needed.
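As a short illustration of this check, the consistency ratio can be computed from the principal eigenvalue in the usual way (CI = (λmax − n)/(n − 1), CR = CI/RI with Saaty's random index); the matrix below is the same hypothetical example as above.

```python
import numpy as np

def consistency_ratio(A: np.ndarray) -> float:
    """Consistency ratio CR = CI / RI for a pairwise comparison matrix A."""
    n = A.shape[0]
    lam_max = np.max(np.linalg.eigvals(A).real)
    ci = (lam_max - n) / (n - 1)
    # Saaty's random consistency index for matrix sizes 1..9.
    random_index = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
                    6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}
    return ci / random_index[n]

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
print(consistency_ratio(A))  # values below ~0.1 are usually considered consistent
```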

A further advantage of using AHP is that a multilayered set of parameters can be used and expert judgments from more than one expert can be combined. These features were not applied in the ranking application described in this study. In the future, using more than one expert opinion might be interesting to analyze sensitivity and further improve the ranking method.

A.2. Additional Details about Input Data and Assessment Results

Tables 3, 4, and 5 and Figures 5, 6, 7, 8, 9, and 10 present additional details about input data used for this study and more detailed results.

Abbreviations

AHP: Analytical hierarchy process
DE: German dataset
EC: European Commission
EG: Event group
FR: French dataset
GRS: Gesellschaft für Anlagen- und Reaktorsicherheit
IAEA: International Atomic Energy Agency
IRS: International reporting system
IRSN: Institut de Radioprotection et de Sûreté Nucléaire
JRC-IET: Joint Research Centre-Institute for Energy and Transport
NEA: Nuclear Energy Agency
OE: Operating experience
RI: Ranking index.
Nomenclature
$A$: Reciprocal matrix with all relative RI importances and their reciprocal values
$a_{ij}$: Relative importance of $RI_i$ to $RI_j$
$c$: Correction (i.e., multiplier) for other RI weights
:Subset of events in the FR database comprising generic and recurring events
EG: Event group
:Event group identification number
IQR: Interquartile range (bottom and top of the box are the 25th and 75th percentiles)
IRS: Subset of events reported to the IRS (for FR, only the first-time event for each type is reported)
$\lambda_{\max}$: Maximum eigenvalue
$N$: Number of events
Outliers: Data points beyond 3 IQR from the edge of the box
Precursors: Precursor events
Q1: 1st Quartile
Q3: 3rd Quartile
$RI$: Ranking index value
Suspected outliers: Data points between 1.5 and 3 IQR from the edge of the box
Trend: Trend parameter value (i.e., 0.5 for unchanged number of events, up to 1 for increasing trend, and down to 0 for decreasing trend)
:Event groups considered in the presented uncertainty assessment
$w_i$: Weight for $RI_i$
$w_p$: Weight for parameter $p$ (i.e., relative importance)
$w_i^s$: Sampled weight for $RI_i$
$\mathbf{w}$: Vector of local weightings.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

Critical data used as the base and input for this work were produced by work performed for the European Clearinghouse EC JRC-IET by Dr. Michael Maqua from the Gesellschaft für Anlagen- und Reaktorsicherheit (GRS) and Dr. Didier Wattrelos from the Institut de Radioprotection et de Sûreté Nucléaire (IRSN).