Science and Technology of Nuclear Installations
Volume 2008 (2008), Article ID 987165, 7 pages
Research Article

Comparison of Methods for Dependency Determination between Human Failure Events within Human Reliability Analysis

Marko Čepin

Jožef Stefan Institute, Reactor Engineering Division, Jamova cesta 39, 1000 Ljubljana, Slovenia

Received 4 February 2008; Accepted 21 April 2008

Academic Editor: Martina Adorni

Copyright © 2008 Marko Čepin. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Human reliability analysis (HRA) is a highly subjective evaluation of human performance and an input for probabilistic safety assessment, which deals with many parameters of high uncertainty. The objective of this paper is to show that subjectivism can have a large impact on human reliability results and consequently on probabilistic safety assessment results and applications, and to identify the key features that may decrease the subjectivity of human reliability analysis. Human reliability methods are compared with a focus on dependency, comparing the Institute Jožef Stefan human reliability analysis (IJS-HRA) and the standardized plant analysis risk human reliability analysis (SPAR-H). The results show large differences in the calculated human error probabilities for the same events within the same probabilistic safety assessment, which are a consequence of subjectivity. Subjectivity can be reduced by developing more detailed guidelines for human reliability analysis, with many practical examples for all steps of the process of evaluating human performance.

1. Introduction

Human reliability analysis (HRA) is a systematic framework that includes the process of evaluating human performance and the associated impacts on structures, systems, and components of a complex facility. The process and its results are highly subjective, and they are an input for probabilistic safety assessment (PSA), which deals with many parameters of high uncertainty [1–4].

Many methods connected with HRA have been developed in recent decades: for example, the technique for human error rate prediction (THERP) [5], the systematic human action reliability procedure (SHARP) [6], the accident sequence evaluation program (ASEP) [7], a technique for human event analysis (ATHEANA) [8, 9], the cognitive reliability and error analysis method (CREAM) [10], human cognitive reliability (HCR) [11], standardized plant analysis risk HRA (SPAR-H) [12], and the Institute Jožef Stefan human reliability analysis (IJS-HRA) [13–15].

Those methods have some unique and some common features [16, 17]. It is difficult to judge or compare them in the sense of which method is better than the others. It is observed that the more recently developed methods give more attention to the cognitive portion of human failure events (HFEs) [16, 17]. An important feature is dependency [13, 18], which is more emphasized in the more recent methods, although the standpoint was established years ago with THERP [5]. The mentioned methods use data from human reliability databases. Years ago, less data was available, and many specific human error probabilities and the performance shaping factors that adjust those probabilities were determined based on expert judgement. Nowadays, much more data is available due to more experience in plant operation and more training in plant simulators. This may lead to the conclusion that more recent methods are less subjective.

The objective of this paper is to show that subjectivism can largely impact the HRA results and consequently the results and applications of PSA in a nuclear power plant (NPP), with special emphasis on the consideration of dependency. A further objective is to identify the key features that may decrease the subjectivity of HRA.

Two methods from the set mentioned above are selected for detailed comparison in an example case of a real probabilistic safety assessment model, which includes human reliability analysis. These two are SPAR-H [12] and IJS-HRA [13, 14]. They were selected because they are relatively new methods that encompass the previous knowledge in the field, are relatively simple to apply, and pay an acceptable level of attention to the issue of dependency, which is the focus of this work.

2. Methods

2.1. IJS-HRA

Figure 1 shows the scheme of the IJS-HRA method [13]. The method for evaluation of HFEs is developed including consideration of dependencies between HFEs [5, 13]. Figure 1 shows that the identification of HFEs distinguishes preinitiator events (i.e., preinitiators), initiator events (i.e., initiators), and postinitiator events (i.e., postinitiators). Preinitiators are events that may cause equipment to be unavailable before the initiating event occurs. Initiators are events that may contribute to the occurrence of initiating events. Postinitiators are events connected with human actions to prevent an accident or mitigate its consequences after the initiating event has occurred. Evaluation of HFEs, including evaluation of dependencies, integrates the assessment of human error probabilities (HEPs) with plant information, operator interviews, simulator experience, and the plant database.

Figure 1: Scheme of IJS-HRA method.

The five levels of dependency are determined according to THERP: zero dependency (ZD), low dependency (LD), moderate dependency (MD), high dependency (HD), and complete dependency (CD) [5]. The human error probability (HEP) of a dependent HFE is determined according to the equation HEP_dep = (1 + K · HEP) / (K + 1), where K = ∞, 19, 6, 1, and 0 for dependency levels ZD, LD, MD, HD, and CD, respectively (for ZD, the limit K → ∞ gives HEP_dep = HEP) [5].
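The THERP adjustment can be sketched in a few lines; this is a minimal illustration, assuming the standard THERP form HEP_dep = (1 + K · HEP) / (K + 1) with K = 19, 6, 1, and 0 for LD, MD, HD, and CD, and an unchanged probability for ZD:

```python
# Dependency factors K for the five THERP levels; None encodes
# zero dependency (the conditional HEP equals the independent HEP).
K_FACTORS = {"ZD": None, "LD": 19, "MD": 6, "HD": 1, "CD": 0}

def conditional_hep(hep: float, level: str) -> float:
    """Conditional HEP of a dependent HFE, given the independent HEP
    and the THERP dependency level (ZD, LD, MD, HD, or CD)."""
    k = K_FACTORS[level]
    if k is None:  # zero dependency: probability unchanged
        return hep
    return (1 + k * hep) / (k + 1)

# Example: an independent HEP of 1E-3 under each dependency level.
for level in ("ZD", "LD", "MD", "HD", "CD"):
    print(level, conditional_hep(1e-3, level))
```

Note how strongly dependency dominates small independent probabilities: even low dependency raises a 1E-3 HEP to about 5E-2, and complete dependency forces the conditional HEP to 1.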

Figures 2 and 3 show how dependency between HFEs is determined for preinitiators and for postinitiators, respectively. Initiators are treated similarly to postinitiators. For preinitiators, there is an additional algorithm, which calculates the final HEP from the independent HFE and its dependent HFE as the geometric average of both [13].

Figure 2: IJS-HRA dependency—preinitiator HFE.
Figure 3: IJS-HRA dependency—postinitiator HFE.

Figures 2 and 3 show that, based on the parameters connected with their representative HFE, a dependency evaluation code is identified (e.g., LD12). The dependency evaluation code consists of two leading characters identifying the level of dependency (i.e., ZD, LD, MD, HD, or CD). The numbers that follow represent the scenario number of the corresponding scenario from the dependency method presented in its respective figure and identify the parameters that are important for determining the level of dependency: for example, cue, time between the events, crew, stress, complexity, location, system, action description, procedure, timing, person, and action similarity [13]. For example, for two dependent postinitiators, dependency level LD is determined in Figure 3 (LD12), which indicates: different cue, 5–30 minutes between the events, low stress, simple action, and no change of probability needed as the joint HEP is of the order of E-5.

2.2. SPAR-H

Standardized plant analysis risk HRA (SPAR-H) is a method for estimating the human error probabilities (HEPs) associated with operator actions and decisions in nuclear power plants [12]. Table 1 shows how dependency between HFE is determined. Five levels of dependency are determined, similarly to THERP and IJS-HRA. The parameters for determining the level of dependency differ from THERP and from IJS-HRA.
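As an illustration only (the actual condition table is defined in NUREG/CR-6883 and is not reproduced here), a SPAR-H-style dependency assignment can be sketched as a lookup keyed on crew, time, location, and cue similarity; the level assignments below are hypothetical placeholders, not the real table:

```python
# Illustrative encoding of a SPAR-H-style dependency lookup. Each key is
# (same_crew, close_in_time, same_location, extra_cue); each value is a
# THERP-style dependency level. The assignments are hypothetical.
DEPENDENCY_TABLE: dict[tuple, str] = {
    (True,  True,  True,  False): "CD",  # hypothetical assignment
    (True,  True,  False, False): "HD",  # hypothetical assignment
    (True,  False, True,  True):  "MD",  # hypothetical assignment
    (False, False, False, True):  "ZD",  # hypothetical assignment
}

def dependency_level(same_crew: bool, close_in_time: bool,
                     same_location: bool, extra_cue: bool) -> str:
    """Return the dependency level for a combination of conditions,
    defaulting to low dependency for combinations not tabulated here."""
    return DEPENDENCY_TABLE.get(
        (same_crew, close_in_time, same_location, extra_cue), "LD")
```

The point of the sketch is the structure: once the analyst's judgement is reduced to a few discrete parameters, the level assignment itself is mechanical, which is exactly where the two methods compared here diverge, since they discretize different parameters.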

Table 1: SPAR-H dependency.

3. Analysis and Results

3.1. Qualitative Comparison

Table 2 shows how the dependency determined by the IJS-HRA method corresponds to the dependency determined by the SPAR-H method (a theoretical comparison of both dependency methods).

Table 2: Comparison of dependency levels—all scenarios are presented.

Table 3 is a subset of Table 2. Table 3 focuses only on those scenarios (a specific scenario corresponds to a specific set of parameters) that correspond to real HFEs considered in the specific HRA (a practical comparison of both dependency methods based on the specific PSA model). Both tables show that, for a specific HFE, its respective HEP is evaluated as a different value depending on which method is used.

Table 3: Comparison of dependency levels—only scenarios, which are applicable for HFE of the specific PSA model.
3.2. Quantitative Comparison

There are 64 HFEs in the PSA model whose HEP changes if the HRA dependency method changes. Table 4 shows a subset of those HFEs with the identified dependency levels and respective HEPs for both methods, IJS-HRA and SPAR-H. The terms CALC and IND marked at preinitiators represent the calculation of the final HEP as the geometric average between the independent HEP for an action on one train and the respective dependent HEP, assessed as low dependency (LD12), for the similar action on the other train.

Table 4: Selected HFE with quantified HEP (for IJS-HRA and for SPAR-H).

Table 5 shows the risk increase factor and risk decrease factor of selected HFEs, calculated from analysis runs of the PSA model with IJS-HRA dependency considered and with SPAR-H dependency considered. The HFEs selected in the table are those whose risk increase factor and risk decrease factor exceed the criteria for identification of risk-significant events. The differences between the two cases are very large.
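The two importance measures can be sketched under their common definitions, assumed here to be the risk increase factor RIF = CDF(HEP = 1) / CDF(base) and the risk decrease factor RDF = CDF(base) / CDF(HEP = 0); the function toy_cdf and its numbers are hypothetical stand-ins for a real PSA quantification:

```python
def risk_factors(cdf_model, base_hep: float) -> tuple:
    """Return (RIF, RDF) for one HFE, given a function cdf_model(hep)
    that quantifies core damage frequency as a function of that HEP."""
    cdf_base = cdf_model(base_hep)
    rif = cdf_model(1.0) / cdf_base   # event assumed failed
    rdf = cdf_base / cdf_model(0.0)   # event assumed perfectly reliable
    return rif, rdf

def toy_cdf(hep: float) -> float:
    """Hypothetical one-sequence model: background CDF plus a term
    linear in the HEP of the analysed event."""
    return 1e-5 + hep * 1e-4

rif, rdf = risk_factors(toy_cdf, base_hep=1e-2)
print(rif, rdf)
```

Because both measures are ratios against the base-case CDF, any change in the dependency method that shifts HEPs also reshuffles which events pass the risk-significance criteria, which is why the two analyses compared here identify different sets of important HFEs.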

Table 5: Results of importance of HFE.

Table 5 shows that only one HFE is identified as important in both analyses (POST_INI_04, which deals with the operator establishing auxiliary feedwater pumps). The difference between the two cases in core damage frequency is very large, too: it differs by more than one order of magnitude.

Figure 4 shows a comparison of the fractional contribution of HFEs for both analyses. The figure shows that the results are not comparable: events that contribute significantly if IJS-HRA dependency is considered can be insignificant if SPAR-H dependency is considered, and vice versa.

Figure 4: Comparison of fractional contribution of HFE.

Similarly, large differences exist if, instead of five levels of dependency, fewer dependency levels are determined with different equations for the evaluation of dependency.

4. Conclusions

The methods for dependency determination between human failure events within human reliability analysis have been examined.

Taking the human error probability of the first human failure event in a sequence as it is, and increasing the independent human error probability of the next human failure event in the sequence, is common to most of the HRA methods. The exception is IJS-HRA, which for relatively similar actions determines an identical failure probability based on the geometric average.

The methods for determination of dependency between human failure events differ mostly in definition of parameters, which impact the dependency, in their application and in the determination of dependency level, which applies to a specific set of parameters. All those distinctions are subjective. This subjectivism can lead to a difference of several orders of magnitude in the results of HRA and in the PSA, which includes HRA. This means significant differences in all PSA results and their applications, for example,

(i) identification of key human failure events, which is an input for prioritization of simulator training,
(ii) calculation of core damage frequency and its sensitivity to changes, which is an input for risk-informed decision-making,
(iii) identification of different key tasks within a human failure event in order to identify the key parameters from the HRA database.

The subjectivism could be minimized with integration and standardization of

(i) selection of the parameters which affect the dependency between human actions, for example,
(a) persons (e.g., one or more persons involved; same or different people performing the actions),
(b) similarity of actions (e.g., similar or not similar action),
(c) similarity of implementation of procedures (e.g., filling the forms with or without signing the steps of the form; same or different procedure),
(d) similarity of locations (e.g., same or different location),
(e) timing (e.g., sequential performance or a larger time interval between the actions),
(f) stress level (e.g., low, high, optionally moderate),
(g) complexity of actions (e.g., simple or complex actions, where specific definitions of simplicity and complexity are important),
(ii) the number of levels of dependency and the formulas for their evaluation (e.g., five levels of dependency as in THERP, SPAR-H, and IJS-HRA, with their corresponding formulas).

In addition, detailed guidelines are needed to guide the application, illustrated with many practical examples. A database of examples of quantified human error probabilities for independent tasks, for dependent tasks, and for complete human actions and their dependencies should become part of the nuclear power plant probabilistic safety assessment database.


Acknowledgments

The Slovenian Research Agency supported this research (partly through research program P2-0026 and partly through research project V2-0376, supported together with the Slovenian Nuclear Safety Administration).


References

1. ASME RA-S-2002, "Standard for probabilistic risk assessment for nuclear power plant applications," The American Society of Mechanical Engineers, 2002.
2. Regulatory Guide 1200, "An approach for determining the technical adequacy of probabilistic risk assessment results for risk-informed activities," U.S. Nuclear Regulatory Commission, 2004.
3. M. Čepin and B. Mavko, "A dynamic fault tree," Reliability Engineering & System Safety, vol. 75, no. 1, pp. 83–91, 2002.
4. M. Čepin, "Analysis of truncation limit in probabilistic safety assessment," Reliability Engineering & System Safety, vol. 87, no. 3, pp. 395–403, 2005.
5. A. D. Swain and H. E. Guttman, "Handbook of human reliability analysis with emphasis on nuclear power plant applications," Final Report NUREG/CR-1278, U.S. Nuclear Regulatory Commission, Washington, DC, USA, 1983.
6. SHARP, "Systematic human action reliability procedure," EPRI NP-3583, 1984.
7. A. D. Swain, "Accident sequence evaluation program: human reliability analysis procedure," Tech. Rep. NUREG/CR-4772, U.S. Nuclear Regulatory Commission, Washington, DC, USA, 1987.
8. W. D. Travers, "Technical basis and implementation guidelines for a technique for human event analysis (ATHEANA)," Tech. Rep. NUREG-1624, U.S. Nuclear Regulatory Commission, Washington, DC, USA, 1999.
9. J. Forester, D. Bley, S. Cooper et al., "Expert elicitation approach for performing ATHEANA quantification," Reliability Engineering & System Safety, vol. 83, no. 2, pp. 207–220, 2004.
10. E. Hollnagel, Cognitive Reliability and Error Analysis Method (CREAM), Elsevier, Amsterdam, The Netherlands, 1998.
11. A. Spurgin, "Another view of the state of human reliability analysis (HRA)," Reliability Engineering & System Safety, vol. 29, no. 3, pp. 365–370, 1990.
12. D. Gertman, H. Blackman, J. Marble, J. Byers, and C. Smith, "The SPAR-H human reliability analysis method," Tech. Rep. NUREG/CR-6883, U.S. Nuclear Regulatory Commission, Washington, DC, USA, 2005.
13. M. Čepin, "DEPEND-HRA—a method for consideration of dependency in human reliability analysis," Reliability Engineering & System Safety. In press.
14. M. Čepin, "Importance of human contribution within the human reliability analysis (IJS-HRA)," Journal of Loss Prevention in the Process Industries, vol. 21, no. 3, pp. 268–276, 2008.
15. A. Prošek and M. Čepin, "Success criteria time windows of operator actions using RELAP5/MOD3.3 within human reliability analysis," Journal of Loss Prevention in the Process Industries, vol. 21, no. 3, pp. 260–267, 2008.
16. A. Kolaczkowski, J. Forester, E. Lois, and S. Cooper, "Good practices for implementing human reliability analysis (HRA)," Tech. Rep. NUREG-1792, U.S. Nuclear Regulatory Commission, Washington, DC, USA, 2005.
17. J. Forester, A. Kolaczkowski, E. Lois, and D. Kelly, "Evaluation of human reliability analysis methods against good practices," Final Report NUREG-1842, U.S. Nuclear Regulatory Commission, Washington, DC, USA, 2006.
18. J. F. Grobbelaar, J. A. Julius, and F. Rahn, "Analysis of dependent human failure events using the EPRI HRA Calculator," in Proceedings of the 9th International Topical Meeting on Probabilistic Safety Analysis (PSA '05), pp. 499–501, San Francisco, Calif, USA, September 2005.