
Review Article | Open Access

P. A. Baziuk, S. S. Rivera, J. Núñez Mc Leod, "Fuzzy Human Reliability Analysis: Applications and Contributions Review", Advances in Fuzzy Systems, vol. 2016, Article ID 4612086, 9 pages, 2016. https://doi.org/10.1155/2016/4612086

Fuzzy Human Reliability Analysis: Applications and Contributions Review

Academic Editor: Zeki Ayag
Received: 30 Nov 2015
Revised: 05 Mar 2016
Accepted: 13 Mar 2016
Published: 04 Apr 2016

Abstract

The applications and contributions of fuzzy set theory to human reliability analysis (HRA) are reassessed. The main contribution of fuzzy mathematics relies on its ability to represent vague information. Many HRA authors have contributed by developing new models that introduce fuzzy quantification methodologies. Conversely, others have drawn on fuzzy techniques or methodologies for quantifying already existing models. Fuzzy contributions improve HRA in five main aspects: (1) uncertainty treatment, (2) expert judgment data treatment, (3) fuzzy fault trees, (4) performance shaping factors, and (5) the human behaviour model. Finally, recent fuzzy applications and new trends in fuzzy HRA are herein discussed.

1. Introduction

The term “Human Reliability Assessment” (HRA), also called human reliability evaluation or analysis, was first introduced in 1962 by Munger et al. [1] and can be defined as “the probability that a task or job is successfully completed by an individual in a specific state of operation of the system in a minimum required time (if there are time requirements)” [2].

In the negative sense, “human error” is defined as “the probability of failure to execute a given task (or the execution of a prohibited task), which may cause equipment damage or disrupt the sequence of operations” [3].

Almost all HRA methods and approaches share the assumption that it is meaningful to use the concept of “human error,” and hence also meaningful to develop ways to estimate the chances of “human error.” As a result, numerous studies have been performed to produce data sets or databases to be used as a basis for quantifying human error probabilities (HEPs). This view prevails despite serious doubts expressed by scientists and professionals in HRA and related disciplines. A general review of HRA [4] notes that many approaches are based on highly questionable assumptions about human behaviour.

The main contribution of fuzzy mathematics is its ability to represent vague information. It has been used to model systems that are difficult to define precisely [5]. As a methodology, fuzzy set theory incorporates vagueness and subjectivity, and fuzzy decision-making includes the uncertainties of human behaviour in decision-making. Fuzzy set theory, created by Zadeh in 1965, emerged as a powerful way to quantitatively represent and manipulate imprecise decision-making problems [6]. Since vague parameters are treated as imprecise rather than precise values, the process is more powerful and the results are more credible. Fuzzy mathematics thus serves as a tool to model processes that are too complex for traditional techniques (such as probability theory) and situations where process information is qualitative, inaccurate, or unclear; in these cases the concept of the membership function properly represents this type of knowledge [7].

Fuzzy logic captures an inherent property of most human communications: they are not accurate, concise, perfectly clear, and crisp [8]. The meaning of a word in natural language is fuzzy: a word can apply perfectly to some objects or events, clearly exclude others, and apply only partially, to a certain extent, to the rest. Language statements are inherently vague; this can be addressed with fuzzy set theory [9]. Fuzzy logic resembles the way humans make decisions and inferences [7].

In fuzzy processing there are basically three components [7]: (1) fuzzification, (2) fuzzy inference, and (3) defuzzification. Fuzzification is the process by which the input variables are transformed into fuzzy sets. Fuzzy inference is a set of fuzzy if-then rules used to process fuzzy inputs and generate fuzzy conclusions; that is, fuzzy inference interprets the values of the input vector and, based on a rule set, generates an output vector. Defuzzification is the process of weighing and averaging all the fuzzy values into a single output signal or decision.
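As a rough illustration of these three stages, the sketch below implements a toy fuzzification-inference-defuzzification pipeline. The “stress” variable, both rules, and all numbers are invented for illustration and do not come from any cited HRA method.

```python
# Toy sketch of the three fuzzy-processing stages; the "stress"
# variable, both rules, and all numbers are invented for illustration.

def tri(a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)
    return mu

# (1) Fuzzification: a crisp stress score in [0, 10] gets degrees of
#     membership in the fuzzy sets "low stress" and "high stress".
stress_low = tri(0, 2, 5)
stress_high = tri(3, 8, 10)

# (2) Fuzzy inference: two if-then rules firing in parallel.
#     IF stress is low  THEN error likelihood is low  (representative value 0.1)
#     IF stress is high THEN error likelihood is high (representative value 0.7)
def infer(stress):
    return [(stress_low(stress), 0.1), (stress_high(stress), 0.7)]

# (3) Defuzzification: weighted average of the fired rule outputs.
def defuzzify(fired):
    den = sum(w for w, _ in fired)
    return sum(w * v for w, v in fired) / den if den else 0.0

likelihood = defuzzify(infer(4.0))   # -> 0.325 for this toy rule base
```

A stress score of 4.0 partially fires both rules, and the weighted average lands between the two representative outputs, closer to the rule that fired more strongly.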

It is easy to see the applicability of this tool for quantifying human reliability. Many HRA authors have made contributions developing new models with fuzzy quantification methodologies or using fuzzy techniques or methodologies for quantifying existing models, for example, fuzzy CREAM [7]. In the following sections the main concepts of HRA methodologies and fuzzy applications and contributions made to human reliability are presented.

2. Human Reliability Assessment Review

The birth of HRA methods dates to the 1960s, but most of the techniques for evaluating the human factor, in terms of its propensity to fail, have been developed since the mid-1980s. HRA techniques or approaches can basically be divided into two categories: first and second generation. Currently, dynamic HRA techniques, or third-generation methods, understood as an evolution of the previous generations [10], are the subject of research.

The first-generation, or quantitative, HRA methods were based on statistics. The most important first-generation HRA method is THERP (technique for human error-rate prediction) [11], based on event tree analysis. Many methods and models in classical HRA theory assume that all probabilities are accurate [12]; that is, each probability involved can be perfectly determined. HEPs can be assigned on the basis of the operator’s task characteristics and then modified by performance shaping factors (PSFs). In first-generation HRA, task characteristics are represented by HEPs, and the context, represented by PSFs, is considered a minor factor in HEP estimation [12]. This generation concentrates on HRA quantification, in terms of action success/failure, with less attention paid to the underlying causes and reasons for human behaviour.

The integrity of probabilistic information implies two conditions: (1) all probabilities and probability distributions are well known or determinable; (2) system components are independent; that is, all random variables that describe component reliability behaviour are independent, or, alternatively, their dependence is precisely known.

Precise measurements of system reliability can be calculated whenever these two conditions are met. However, the reliability evaluations, together with the descriptions of systems and components, may come from various sources. In most practical applications, it is difficult to expect the first condition to be met, and the second condition is usually violated.

Utkin and Coolen [13] provide an important contribution to imprecise reliability, discussing a variety of topics and reviewing the suggested applications of imprecise probabilities in reliability. Modelling human error through probabilistic approaches has shown limitations in quantifying the qualitative aspects of human error and the complexity of the circumstances involved. Mosleh and Chang [14] indicate the limitations of first-generation HRA methods, enumerate some expectations, and argue that methods should be based on human behaviour models.

Among the first-generation techniques are Absolute Probability Judgment (APJ), Human Error Assessment and Reduction Technique (HEART), Justified Human Error Data Information (JHEDI), Probabilistic Human Reliability Analysis (PHRA), Operator Action Tree System (OATS), and Success Likelihood Index Method (SLIM). The most popular and effective method is THERP, characterized, like other first-generation approaches, by a precise mathematical treatment of probability and error rates. THERP is based on an event tree where each branch represents a combination of human activities and their mutual influences and results.

The main features of first-generation methods can be summarized [15] as (1) binary representation of human actions (success/failure); (2) attention to the phenomenology of human action; (3) little attention to human cognitive actions (lack of a cognitive model); (4) emphasis on quantifying the probability of incorrect human actions; (5) dichotomy between errors of omission and commission; and (6) indirect treatment of context.

THERP and the approaches developed in parallel—such as HCR (Human Cognition Reliability), developed by Hannaman, Spurgin, and Lukic in 1985—describe the cognitive aspects of operator performance with a cognitive model of human behaviour known as the skill-rule-knowledge (SRK) model [16]. This model, based on a classification of human behaviour, distinguishes skill-based, rule-based, and knowledge-based behaviour, depending on the cognitive level engaged. The attention and conscious thought that an individual gives to activities decreases from the knowledge-based to the skill-based level. This model of behaviour fits very well with Reason’s human error theory [17]: there are several types of errors, depending on whether the actions were carried out with intention or not. Reason distinguished “slips,” errors that occur at the skill level; “lapses,” errors caused by memory failure; and “mistakes,” errors made in planning or forming the intention. In THERP, however, incorrect actions are divided into omission and commission errors, representing, respectively, the failure to carry out the operations necessary to achieve the desired result and the execution of actions unrelated to the task at hand, which prevent the desired result [18].

First-generation HRA methods ignore the cognitive processes that underlie human behaviour; in fact, their cognitive model lacks realism and is psychologically inadequate. They are often criticized for not considering the impact of factors such as the environment, organizational factors, and other relevant PSFs, and for the inadequate treatment of commission errors and expert judgment [14, 18, 19]. Hollnagel [18] noted that “all inadequacies of previous HRA methods often lead analysts to perform an HEP evaluation deliberately high and with greater uncertainty limits to compensate, at least in part, these problems.” This is clearly not a desirable solution.

In the early 1990s, the need for improved HRA methods generated a number of important research and development activities worldwide. These efforts led to great advances in first-generation methods and to the birth of new techniques, identified as the second generation. At first, these second-generation methods were vague and unclear. While first-generation HRA methods are primarily behavioural approaches, second-generation HRA methods aspire to be conceptual [14].

The gap between generations is evident in the abandonment of the purely quantitative approach of probabilistic risk analysis (PRA) or probabilistic safety assessment (PSA) in favour of greater attention to the qualitative assessment of human error. The focus shifted to the cognitive aspects of human beings, the causes of errors rather than their frequency, the study of the interaction of factors that increase error probability, and PSF interdependence.

The second-generation HRA methods (such as CREAM “Cognitive Reliability and Error Analysis Method” or ATHEANA “A Technique for Human Event Analysis”) are based on human behaviour models. This generation of methods emphasizes the qualitative characterization of human error, describing cognitive roots and human cognitive functions involved.

Clearly, any attempt to understand human behaviour should include the role of human cognition, defined as the act or process of knowing, which includes awareness and judgment. From the HRA analyst’s perspective, the immediate way to consider human cognition was the introduction of a new error category: “cognitive error,” defined both as the failure of a predominantly cognitive activity and as the inferred cause of that failure. For example, CREAM, developed by Hollnagel in 1993, maintains the division between the causes and the logical consequences of human error. The causes of erroneous behaviour (genotypes) are the reasons that determine the occurrence of certain behaviours, and the effects (phenotypes) are represented by incorrect forms of cognition and inappropriate actions.

Cognitive models have been developed to represent the logical-rational process of human beings and include dependence on personal factors (such as stress and incompetence), situational conditions (normal system conditions, abnormal conditions, or emergencies), and human-machine interface models, which reflect the process of system control [20]. In this perspective, the human operator must be seen as part of an integrated system (MTO: “Man-Technology-Organization”): a team of operators (men) working together towards the same goal, involved in a mechanical process (technology) within an organization and its management (organization), together representing the available resources. The cognitive models used in the second generation are based on the assumption that human behaviour is governed by two basic principles: the cyclical nature of human cognition and the dependence of cognitive processes on context and work environment.

Another difference between generations concerns the choice and use of PSFs. None of the first-generation HRA approaches seeks to explain how PSFs exert their effect on performance; in addition, PSFs such as management methods and attitudes, organizational factors, cultural differences, and irrational behaviour are not adequately addressed. PSFs in the first generation were mainly derived by focusing on environmental impact, while PSFs in the second generation were obtained by focusing on cognitive effects [21]. The PSFs of both generations have been revised and collected in a single taxonomy of performance factors [22].

The most important second-generation methods are A Technique for Human Event Analysis (ATHEANA), Cognitive Environmental Simulation (CES), Connectionism Assessment of Human Reliability (CAHR), and Méthode d’Evaluation de la Réalisation des Missions Opérateur pour la Sûreté (MERMOS, a method for assessing the performance of operator missions for safety).

3. Applications and Contributions of Fuzzy Mathematics to Human Reliability

3.1. The Uncertainty Problem: Fuzzy Reliability

One of the main contributions of fuzzy mathematics to human reliability is to capture the phenomenon of uncertainty, associated with information sources and the intrinsic randomness of man-machine systems. As Zio indicates [23], a fundamental problem in reliability analysis is the uncertainty of failure occurrence and its consequences.

Risk analyses treat uncertainty in three ways according to the degree of information availability [24]: (1) historical information available and sufficient (modelled through simple probabilistic frequencies); (2) information available but insufficient (modelled by statistical theories such as Bayesian networks); and (3) information not available (modelled through expert judgment). Uncertainty is a function of incompleteness and fuzziness that can be modelled using membership functions. There are four contributing factors to uncertainty [24]: (1) inadequate statistical analysis methods (for statistical parameters); (2) lack of sufficient information for proper statistical analysis (for statistical models); (3) the complexities of working conditions and health (for expert judgment); and (4) the level of education and experience (also for expert judgment).

As indicated by Konstandinidou et al. [7], it is necessary to build a human reliability model that can incorporate subjective information and, therefore, an adequate mathematical treatment for this type of information. As Sheridan says [8], operators’ knowledge of system variables and their interrelationships (an indispensable source of information for task analysis) is fuzzy.

Natural language introduces uncertainty through its vagueness and imprecision. For example, a principle underlying the design of all systems is that if the system is designed to withstand the worst accident scenarios, then it can resist any credible accident; yet the reference to the “worst case scenario” implies subjectivity and arbitrariness, leading to the contemplation of highly improbable scenarios [23]. A fuzzy treatment of this linguistic term should address that problem.

Similarly, conceptual constructs such as “situational awareness” (SA), criticized for its vagueness and imprecision, are modelled by Naderpour et al. [25] using Bayesian networks and a fuzzy inference system. Situational awareness is a crucial factor in improving performance and reducing human error; however, few methods assess SA because it is difficult to model and evaluate. The construct “mental model,” identified, like situational awareness, as a folk model [26], can take two forms: the first is qualitative (describing the interrelationships between a set of objects and experienced events); the second is a quantitative cause-and-effect relationship (a model that addresses questions like “what happens if”). The second type of mental model can be represented by fuzzy rules [8].

The HEROS model [9] is an expert system in which the vagueness of verbal statements is modelled with linguistic variables represented by fuzzy numbers and fuzzy intervals. The input of the expert system is a PSF evaluation in natural language, and the output, also in natural language, is a human error characterization (very unlikely, unlikely, likely, or very likely).

Some authors speak of “fuzzy reliability” to address this issue [27, 28]. In fuzzy reliability, probability and binary states are replaced with possibility and fuzzy states. Error possibility provides a more detailed description than the error probability of probabilistic reliability [27].

Uncertainty about the stress and intensity of components in fuzzy reliability can be classified as follows [28]: (1) random stress and fuzzy intensity, (2) fuzzy stress and random intensity, and (3) fuzzy stress and fuzzy intensity. For an electronic component, for example, the stress can be the operating temperature or voltage, and the intensity the maximum temperature or voltage that the component supports. In practice, stress and intensity variables are difficult to measure precisely, so they are treated as fuzzy variables. In fuzzy reliability each component is taken as a fuzzy variable.
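Case (3), fuzzy stress and fuzzy intensity, can be sketched numerically as follows. The possibility that stress S exceeds intensity I is the supremum of min(μ_S(x), μ_I(y)) over x ≥ y; the triangular membership functions and the temperature values below are assumptions for illustration only.

```python
# Sketch: possibility that a fuzzy stress exceeds a fuzzy intensity.
# The membership functions and all numbers are invented for illustration.

def tri(a, b, c):
    """Triangular membership function: support [a, c], peak at b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)
    return mu

mu_stress = tri(60, 75, 90)      # stress: operating temperature "around 75"
mu_strength = tri(80, 95, 110)   # intensity: tolerates "around 95"

# Possibility that stress S exceeds intensity I:
#   Poss(S >= I) = sup over x >= y of min(mu_S(x), mu_I(y))
grid = [i * 0.5 for i in range(301)]   # 0.0 .. 150.0
poss = 0.0
for y in grid:
    mi = mu_strength(y)
    if mi == 0.0:
        continue
    ms = max(mu_stress(x) for x in grid if x >= y)
    poss = max(poss, min(mi, ms))
# The two fuzzy numbers overlap only partly, so Poss(S >= I) is about 1/3.
```

The possibility measure is driven by the height of the overlap between the right slope of the stress number and the left slope of the intensity number, which is why partially overlapping fuzzy numbers yield a value strictly between 0 and 1.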

Other types of uncertainty treated with fuzzy logic arise from dependencies among human failure events and human actions. The methods developed to date do not adequately address task dependencies [29]; one of the desirable attributes of an improved model is the ability to cover human failure events, their dependencies, and recoveries [30]. This source of uncertainty is approached in two main ways: by fuzzy expert judgment elicitation [31, 32] or by HEP modifications based on dependency considerations [33, 34]. Recent third-generation, or dynamic, HRA models include simulation to address task dependency [35].

3.2. Expert Judgment Data Treatment

Applying HRA methodologies involves numerous judgments expressed in natural language; for example, in THERP the degree of stress, the operating instructions, and the quality of training have to be qualified verbally, while HCR is based solely on verbal evaluations [9].

One of the great problems of extracting information from an expert is the degree of bias in people’s judgments about variable values [8]. To address this problem, three types of calibration are used: (1) people’s biases are quite stable, so they will be the same in the same or similar situations and for the same or similar variables, and can therefore be treated with probability densities; (2) human judgment includes not only the best guess but also the degree of confidence in the response; and (3) judgments can be compared with actual events.

In countries where objective probabilistic risk information is extremely rare or inadequate, using subjective judgment based on experts’ experience is inevitable [24]. In these cases fuzzy theories are useful: fuzzy sets are used to handle the ambiguity in the probabilistic modelling of subjective judgments. For example, fuzzy theory is applied to convert expert opinion expressed in natural language into numerical values of risk factors [36].
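A minimal sketch of that conversion is shown below, assuming a hypothetical five-term linguistic scale mapped to triangular fuzzy numbers (a, b, c); the scale values, the component-wise averaging, and the centroid defuzzification are common choices but are assumptions here, not the specific method of [36].

```python
# Hypothetical linguistic scale; each term maps to a triangular fuzzy
# number (a, b, c) on [0, 1]. The scale itself is an assumption.
SCALE = {
    "very low":  (0.00, 0.00, 0.25),
    "low":       (0.00, 0.25, 0.50),
    "medium":    (0.25, 0.50, 0.75),
    "high":      (0.50, 0.75, 1.00),
    "very high": (0.75, 1.00, 1.00),
}

def aggregate(opinions):
    """Average the experts' fuzzy numbers component-wise."""
    n = len(opinions)
    return tuple(sum(SCALE[o][k] for o in opinions) / n for k in range(3))

def defuzzify(tfn):
    """Centroid of a triangular fuzzy number: (a + b + c) / 3."""
    return sum(tfn) / 3.0

# Three experts rate the same risk factor verbally; the crisp result
# can then feed a quantitative risk model.
risk = defuzzify(aggregate(["high", "medium", "high"]))
```

The intermediate fuzzy number preserves the spread of opinions; defuzzification is deferred to the last step so that the uncertainty is not collapsed prematurely.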

The FORAS risk assessment model (Flight Operations Risk Assessment System) [37] is a fuzzy expert system (FES) based on the knowledge of aviation experts (variables can take linguistic values). The risk model is a hierarchical decomposition of risk contributing factors, whose interrelations are represented by fuzzy rule sets. This decomposition allows the major contributing elements to be identified. A FES is ideal for environments, such as aviation safety, where knowledge is highly subjective and empirical, resulting from years of experience, accident investigations, simulations, and psychological studies.

The vast majority of fuzzy set applications in HRA use the ability of fuzzy logic to formally represent qualitative and ambiguous statements, without including a FES to obtain expert knowledge. According to Zio et al. [31], the only model that elicits expert knowledge through fuzzy set theory was published by Huang et al. [38].

3.3. Binary Logic and Fuzzy Logic: Fuzzy Fault Trees

Errors are usually modelled according to a binary logic of success/failure, so other error modes are not explicitly identified [39]. However, human reliability analysis should not be limited to the binary treatment of human actions (correct or failed) typically used in fault trees. Many actions, for example initiating events, may not fit that binary logic [15]. Binary fault trees allow the representation of neither the context nor the individuals, their interrelationships, and the system dynamics [40].

In the Fuzzy Causal Model (FCM) [36], the accident mechanism is explained by directed acyclic diagrams showing the logical relationships of a variety of events. In contrast to traditional fault trees, and even Bayesian networks, not only the occurrence probability range of events but also the incidence relations and the degree of influence between different events are represented by triangular fuzzy numbers. The fuzzy causal model makes two interesting contributions: (1) “relaxed” logical operators, that is, an AND operator in which, even when all input variables are true, there is a low possibility of nonoccurrence of the output variable, and an OR operator in which, even when all input variables are false, there is a low possibility of occurrence of the output variable; (2) fuzzy conditional operators, that is, the occurrence probability of the output variable given the input variable, which can be a fuzzy value.
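The gate arithmetic behind fuzzy fault trees can be sketched with component-wise operations on triangular fuzzy probabilities (a, b, c). The sketch below uses the common triangular approximation for products and assumes independent basic events; the event names and values are invented for illustration.

```python
# Fuzzy fault-tree gates on triangular fuzzy probabilities (a, b, c),
# using the common component-wise triangular approximation and assuming
# independent basic events; all event values below are invented.

def f_and(p, q):
    """AND gate: both input events must occur."""
    return tuple(p[k] * q[k] for k in range(3))

def f_or(p, q):
    """OR gate: at least one input event occurs."""
    return tuple(1 - (1 - p[k]) * (1 - q[k]) for k in range(3))

# Basic events: the operator omits a step, the alarm fails, a relief
# valve sticks (hypothetical triangular fuzzy probabilities).
omit = (0.01, 0.05, 0.10)
alarm = (0.001, 0.01, 0.02)
valve = (0.001, 0.002, 0.005)

# Top event: (omission AND alarm failure) OR valve failure.
top = f_or(f_and(omit, alarm), valve)
```

The top event remains a triangular fuzzy number, so the spread of the basic-event estimates propagates to the result instead of being discarded by point-value arithmetic.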

Celik and Cebi [41] used Human Factors Analysis and Classification System (HFACS) theory integrated with the Fuzzy Analytical Hierarchy Process (FAHP) to quantitatively evaluate the contribution of human errors to shipping accidents, seeking to ensure the consistency of accident reports in order to clearly identify the causes of accidents.

3.4. Performance Shaping Factors

Most human reliability theories are based on implicit functions relating PSFs to error probabilities; however, they fail to consider the variability, uncertainty, and incomplete knowledge that characterize many domains and experts [42].

Bertolini [43] uses “fuzzy cognitive maps” to rank the importance of PSFs. Using an expert group and the Delphi technique, Bertolini establishes the relationships between 34 PSFs, building the fuzzy cognitive map. He thus determines that “noise” is the main factor that decreases human reliability. This is an extremely interesting and practical finding, especially for system design, but questionable if used in absolute terms for quantifying human reliability.

In the model developed by Li et al. [44], each performance factor, such as “working environment,” has three states (incompatible, compatible, and advantageous); each state has an occurrence probability (estimated by expert judgment rather than from historical data) according to a triangular fuzzy membership function, where the peak is the most likely value and the extreme values are the confidence interval limits.

The HEROS model [9] uses a fuzzy inference system (expert system) that combines the fuzzy values of each PSF; these values are then aggregated to calculate the human error possibility.

Kim and Bishu [45] use fuzzy logic to model the parameters (age, experience in and out of the control room, and education) that influence the relationship between response time and misdiagnosis probability in an emergency situation. Despite the differences found between laboratory and field observations, an approximate error value resulting from fuzzy regression models plays an important role given the difficulty of acquiring data in real cases.

3.5. Human Behaviour Model

Artificial intelligence authors use fuzzy logic to simulate and emulate human behaviour and cognition. Many authors, in order to improve HRA methodologies, introduce the role of human cognition [15] and incorporate this knowledge into human reliability analysis by modelling human behaviour or human information processing with fuzzy logic.

As an example, the SAMPLE model (Situation Awareness Model for Pilot-in-the-Loop Evaluation) [46] can be cited. Created by Charles River Analytics and supported by Wright-Patterson Air Force Base, SAMPLE is an “information processing model” for dynamic systems; it has an agent-based architecture that represents human entities (fighter pilots, commercial pilots, air traffic controllers, and dispatchers). It includes fuzzy technologies, Bayesian reasoning, and rule-based expert systems.

Another interesting problem is the modelling of decision-making. Zadeh [47] proposed the use of fuzzy logic for handling the uncertainties associated with human decision-making. Leiden et al. [46] are pioneers in recognition-based decision-making theory (recognition-primed decision-making, RPD). According to this theory, good decisions can be reached through the recognition of experienced, typical situations and the subsequent identification of the alternative that works. This recognition is modelled by applying fuzzy pattern recognition.

The usual assumption in fuzzy logic is that, given a situation, the rule or combination of rules with the greatest applicability (membership degree) should dominate the action; in other words, the action attaining the greatest membership degree must be chosen [8]. Terano and Sugeno [48] used fuzzy logic for the multiple-objective weighting problem. The decision maker can assign a score to each of the objectives and can also judge the relevance of any combination of objectives (taken one at a time, in pairs, etc.). For each combination of objectives, the relevance is compared with the worst score among the objectives in that combination, and the worse of the two is taken. The combination with the greatest weight is then chosen.
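The max-membership assumption can be sketched as follows; the two rules, their membership functions, and the “alarm severity” input are hypothetical and only illustrate the selection mechanism.

```python
# Max-membership action selection: compute each rule's applicability
# for the current situation and choose the action of the most
# applicable rule. The rules and membership functions are hypothetical.

def decide(rules, situation):
    degrees = {action: mu(situation) for action, mu in rules.items()}
    return max(degrees, key=degrees.get)

# Two competing rules over a single "alarm severity" input in [0, 10].
rules = {
    "monitor": lambda s: max(0.0, 1 - s / 5),      # applicable at low severity
    "shutdown": lambda s: max(0.0, (s - 3) / 7),   # applicable at high severity
}

action = decide(rules, 8.0)   # at severity 8, "shutdown" dominates
```

At low severities the “monitor” rule attains the greater membership degree, so the chosen action flips as the input crosses the region where the two membership functions intersect.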

4. Most Recent Fuzzy Set Contributions and Applications

The applications of fuzzy logic started after Zadeh’s publications in 1965 [6], principally in automatic control [49], visual and speech recognition, home electronics, man-machine interaction models, artificial intelligence, and several industrial applications. Mendel and John [50] proposed type-2 fuzzy sets in order to model different levels of uncertainty for different forms of data. According to [51], the type-2 fuzzy set approach appears to be in its infancy. In this section, recent fuzzy applications are compared and fuzzy HRA applications are contextualized.

A bibliometric analysis covering 2012 to 2015 was made using the online databases ISI Web of Science, ScienceDirect, SpringerLink, Informaworld, Engineering Village, Emerald, and IEEE Xplore. Applications of fuzzy sets can be classified into three main groups (Figure 1): (1) manufacturing operations and industries, (2) service operations and industries, and (3) information and communication technology (telecommunication network planning, image processing, pattern recognition, information retrieval, weather forecasting, etc.); the main subject areas of fuzzy application are computer science, engineering, automatic control systems, robotics, and mathematics. The principal fuzzy engineering applications [52] are (Figure 2) (a) classification and pattern recognition, (b) fuzzy control systems, (c) fuzzy optimization, (d) fuzzy cognitive mapping, and (e) system identification. In this context, fuzzy HRA publications represent almost 1% of the principal fuzzy applications (Figure 1) and 1% of the fuzzy engineering applications (Figure 2).

Particularly in fuzzy HRA, following the classification of Section 3, the most recent publications (2012–2015) were on uncertainty treatment (15%), expert judgment data treatment (45%), fuzzy fault trees (9%), performance shaping factors (18%), and human behaviour model (13%).

A recent and important line of research concerns fuzzy Bayesian networks (26% of fuzzy HRA applications from 2012 to 2015); the applications include, for example, PSF quantification improvement [44], case applications [53–56], and fuzzy Bayesian CREAM [57]. On the other hand, the most popular recent works were about fuzzy CREAM (probably due to the method’s popularity), although without a large number of publications (2%). In terms of membership functions, the most used continue to be triangular functions (56%), followed by Gaussian (26%) and trapezoidal (18%) ones. Only 6 articles (<1% of fuzzy HRA applications) from 2012 to 2015 mentioned type-2 fuzzy sets, but none of them actually applied them.
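For reference, the three membership-function shapes mentioned above can be written as plain functions; the parameter choices in any application are, of course, model-specific.

```python
import math

# The three membership-function shapes most used in fuzzy HRA work,
# written as plain functions (parameter values are model-specific).

def triangular(x, a, b, c):
    """Support [a, c], peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def trapezoidal(x, a, b, c, d):
    """Support [a, d], plateau of full membership on [b, c]."""
    if x <= a or x >= d:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    if x <= c:
        return 1.0
    return (d - x) / (d - c)

def gaussian(x, mean, sigma):
    """Bell-shaped membership centred at mean with width sigma."""
    return math.exp(-((x - mean) ** 2) / (2 * sigma ** 2))
```

Triangular and trapezoidal functions are piecewise linear and cheap to elicit from experts (only the breakpoints are needed), which may help explain their dominance over the smooth but less intuitive Gaussian shape.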

Present research on HRA suggests that “failure arises from systematic and predictable organizational factors at work, not simply erratic behaviours by individuals” [58]. This new line focuses on anticipating and preventing failure conditions as a system characteristic instead of considering the human operator as a probabilistic failure component. This system capacity is called resilience [59]. Resilience engineering redefines the concept of safety as “the ability to succeed under varying conditions” [59]. As an emerging line of research, resilience engineering needs improvements, especially in the way the resilience of organizations is measured. Dekker and Hollnagel [26] indicate that explanations of a phenomenon or construct should be decomposed or reduced into fundamental elements that suggest possible measures allowing the explanation to be corroborated. The quantification and measurement of an abstract and complex construct like resilience entails numerous problems, and fuzzy logic may be an adequate mathematical tool for its modelling. Nevertheless, there are as yet no publications on the subject.

5. Conclusion and Discussion

This review presents the advantages of using fuzzy mathematics to quantify human reliability. Even though they represent less than 1% of current fuzzy applications, human reliability analyses prove to be a prosperous and growing field of application for fuzzy techniques. Fuzzy contributions improve HRA in five main aspects: (1) uncertainty treatment, (2) expert judgment data treatment, (3) fuzzy fault trees, (4) performance shaping factors, and (5) the human behaviour model.

In the first case, sources of uncertainty and examples of fuzzy treatment were discussed. Ambiguous, qualitative, imprecise, and vague information is modelled with fuzzy sets in many HRA methods. The major advantage of using fuzzy sets is to capture the uncertainty associated with verbal statements, linguistic variables, subjective information, conceptual constructs (as situation awareness or mental models), and task dependencies. In fuzzy reliability, probability and binary states are replaced by possibility and fuzzy states.

Expert judgment is the principal source of information in HRA, for it is very difficult, or even impossible, to develop an HRA method without drawing on expert opinion. Fuzzy sets are used to handle the ambiguity in the probabilistic modelling of subjective judgments, transforming natural language into numerical values and addressing the degree of bias in people’s judgments of variable values. Fuzzy expert systems have proved able to capture expert knowledge in HRA applications.

Another important contribution is the fuzzy fault tree, which includes “relaxed” logical operators and fuzzy conditional operators. Fuzzy fault trees allow simpler representations of the complex nature of human actions and offer much more flexibility than binary fault trees.
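A minimal sketch of such “relaxed” gates, using the standard Zadeh operators (min for AND, max for OR) on fuzzy failure possibilities; the basic events and their values are illustrative assumptions, not drawn from a reviewed model.

```python
def fuzzy_and(*events):
    """Relaxed AND gate: minimum of the input possibilities."""
    return min(events)

def fuzzy_or(*events):
    """Relaxed OR gate: maximum of the input possibilities."""
    return max(events)

# Hypothetical basic events: possibility that the operator
# misreads a gauge, and that the subsequent alarm is ignored.
misread_gauge = 0.3
ignore_alarm = 0.6

# Top event requires both errors (AND gate), so its possibility
# is bounded by the least possible contributing event.
top = fuzzy_and(misread_gauge, ignore_alarm)
```

Unlike binary gates, these operators accept events anywhere in [0, 1], so partially possible human actions propagate through the tree without being rounded to success or failure.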

Concerning PSFs, fuzzy techniques are widely applied, ranging from triangular fuzzy numbers representing PSF confidence limits to fuzzy cognitive maps that establish PSF relationships and dependencies and determine the main factor degrading human reliability. Fuzzy sets accommodate PSF variability, uncertainty, and incomplete knowledge.
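A fuzzy cognitive map of this kind can be sketched in a few lines, in the spirit of Bertolini’s application [43]. The concepts, causal weights, and sigmoid squashing function below are illustrative assumptions only; a real study would elicit the weight matrix from experts.

```python
import math

def sigmoid(x):
    """Squashing function keeping concept activations in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# Concepts: 0 = fatigue, 1 = time pressure, 2 = human reliability.
# weights[i][j] = causal influence of concept i on concept j.
weights = [
    [0.0, 0.0, -0.7],   # fatigue degrades reliability
    [0.4, 0.0, -0.5],   # time pressure raises fatigue, degrades reliability
    [0.0, 0.0,  0.0],   # reliability influences nothing here
]

state = [0.6, 0.8, 0.5]  # initial activation of each concept

for _ in range(20):  # iterate until the map settles on a fixed point
    state = [
        sigmoid(state[j] + sum(state[i] * weights[i][j] for i in range(3)))
        for j in range(3)
    ]
```

After convergence the activation of the “human reliability” concept reflects the combined negative influence of the PSFs, and inspecting the weight matrix identifies which factor degrades it most.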

Finally, many HRA methods include artificial intelligence approaches to model human behaviour. Examples of the complexity achieved by fuzzy artificial intelligence methods in HRA include an agent-based architecture representing human entities with fuzzy technologies, Bayesian reasoning, and rule-based expert systems, as well as decision-making models based on fuzzy pattern recognition or multiple-target weighting problems. These methodologies enable great depth and accuracy in modelling human behaviour.

This paper has set out the applications and contributions of fuzzy set theory to human reliability modelling. As shown, most of these applications resort to triangular membership functions, which offer sufficient strength, flexibility, and simplicity for safety analyses. The inclusion of type-2 fuzzy set theory is a vacant area of HRA research. Proposed by Mendel and John as an extension of ordinary fuzzy sets, type-2 fuzzy sets may reduce the uncertainty in human reliability analyses. However, exploiting this theory in human reliability techniques for safety analyses may increase calculation complexity to impractical levels.
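The Mendel-John extension can be illustrated with an interval type-2 sketch: the membership degree is itself uncertain, bounded by a lower and an upper membership function whose gap forms the “footprint of uncertainty.” The two triangles below are hypothetical, chosen only to show the idea.

```python
def triangular(x, a, b, c):
    """Membership degree of x in a triangular fuzzy set (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def it2_membership(x):
    """Return (lower, upper) membership bounds at x: an interval
    'footprint of uncertainty' replaces a single crisp degree."""
    lower = triangular(x, 3.0, 5.0, 7.0)   # narrower inner triangle
    upper = triangular(x, 2.0, 5.0, 8.0)   # wider outer triangle
    return lower, upper

lo, hi = it2_membership(4.0)  # an interval, not a single degree
```

Every subsequent operation must now propagate an interval instead of a number, which is precisely why, as noted above, type-2 machinery can push the calculation burden of a safety analysis toward impractical levels.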

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This work is supported by the National Council of Scientific and Technical Research (CONICET), Argentina, and the National University of Cuyo, Mendoza, Argentina.

References

  1. S. Munger, R. Smith, and D. Payne, “An index of electronic equipment operability, data store,” Tech. Rep. AIR-C43-1/62-RP, American Institutes for Research, Pittsburgh, Pa, USA, 1962.
  2. D. Meister, “The nature of human error,” in Proceedings of the Global Telecommunications Conference and Exhibition: Communications Technology for the 1990s and Beyond (GLOBECOM '89), 1989.
  3. E. Hagen, “Human reliability analysis,” Nuclear Safety, vol. 17, pp. 315–326, 1976.
  4. E. M. Dougherty Jr., “Human reliability analysis—where shouldst thou turn?” Reliability Engineering and System Safety, vol. 29, no. 3, pp. 283–299, 1990.
  5. C. Kahraman, M. Gülbay, and Ö. Kabak, “Applications of fuzzy sets in industrial engineering: a topical classification,” Studies in Fuzziness and Soft Computing, vol. 201, pp. 1–55, 2006.
  6. L. A. Zadeh, “Fuzzy sets,” Information and Control, vol. 8, pp. 338–353, 1965.
  7. M. Konstandinidou, Z. Nivolianitou, C. Kiranoudis, and N. Markatos, “A fuzzy modeling application of CREAM methodology for human reliability analysis,” Reliability Engineering and System Safety, vol. 91, no. 6, pp. 706–716, 2006.
  8. T. B. Sheridan, Telerobotics, Automation, and Human Supervisory Control, MIT Press, 1992.
  9. A. Richei, U. Hauptmanns, and H. Unger, “The human error rate assessment and optimizing system HEROS—a new procedure for evaluating and optimizing the man-machine interface in PSA,” Reliability Engineering and System Safety, vol. 72, no. 2, pp. 153–164, 2001.
  10. V. Di Pasquale, R. Iannone, S. Miranda, and S. Riemma, “An overview of human reliability analysis techniques in manufacturing operations,” in Operations Management, M. M. Schiraldi, Ed., chapter 9, InTech, Rijeka, Croatia, 2013.
  11. A. Swain and H. Guttman, A Handbook of Human Reliability Analysis with Emphasis on Nuclear Power Plant, Nuclear Regulatory Commission, Washington, DC, USA, 1983.
  12. B. J. Kim and R. R. Bishu, “Uncertainty of human error and fuzzy approach to human reliability analysis,” International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, vol. 14, no. 1, pp. 111–129, 2006.
  13. L. V. Utkin and F. P. A. Coolen, “Imprecise reliability: an introductory overview,” Studies in Computational Intelligence, vol. 40, pp. 261–306, 2007.
  14. A. H. Mosleh and Y. H. Chang, “Model-based human reliability analysis: prospects and requirements,” Reliability Engineering and System Safety, vol. 83, no. 2, pp. 241–253, 2004.
  15. I. S. Kim, “Human reliability analysis in the man–machine interface design review,” Annals of Nuclear Energy, vol. 28, no. 11, pp. 1069–1081, 2001.
  16. J. Rasmussen, “Skills, rules, knowledge, signals, signs, symbols, and other distinctions in human performance models,” IEEE Transactions on Systems, Man and Cybernetics, vol. 13, no. 3, pp. 257–266, 1983.
  17. J. Reason, Human Error, Cambridge University Press, Cambridge, UK, 1990.
  18. E. Hollnagel, Cognitive Reliability and Error Analysis Method (CREAM), Elsevier, Oxford, UK, 1998.
  19. O. Sträter, V. Dang, B. Kaufer, and A. Daniels, “On the way to assess errors of commission,” Reliability Engineering and System Safety, vol. 83, no. 2, pp. 129–138, 2004.
  20. P. Marsden and E. Hollnagel, “Human interaction with technology: the accidental user,” Acta Psychologica, vol. 91, no. 3, pp. 345–358, 1996.
  21. S. W. Lee, A. R. Kim, J. S. Ha, and P. H. Seong, “Development of a qualitative evaluation framework for performance shaping factors (PSFs) in advanced MCR HRA,” Annals of Nuclear Energy, vol. 38, no. 8, pp. 1751–1759, 2011.
  22. J. W. Kim and W. Jung, “A taxonomy of performance influencing factors for human reliability analysis of emergency tasks,” Journal of Loss Prevention in the Process Industries, vol. 16, no. 6, pp. 479–495, 2003.
  23. E. Zio, “Reliability engineering: old problems and new challenges,” Reliability Engineering and System Safety, vol. 94, no. 2, pp. 125–141, 2009.
  24. H.-N. Cho, H.-H. Choi, and Y.-B. Kim, “A risk assessment methodology for incorporating uncertainties using fuzzy concepts,” Reliability Engineering and System Safety, vol. 78, no. 2, pp. 173–183, 2002.
  25. M. Naderpour, J. Lu, and G. Zhang, “An intelligent situation awareness support system for safety-critical environments,” Decision Support Systems, vol. 59, no. 1, pp. 325–340, 2014.
  26. S. Dekker and E. Hollnagel, “Human factors and folk models,” Cognition, Technology & Work, vol. 6, no. 2, pp. 79–86, 2004.
  27. T. Onisawa and Y. Nishiwaki, “Fuzzy human reliability analysis on the Chernobyl accident,” Fuzzy Sets and Systems, vol. 28, no. 2, pp. 115–127, 1988.
  28. Q. Jiang and C.-H. Chen, “A numerical algorithm of fuzzy reliability,” Reliability Engineering and System Safety, vol. 80, no. 3, pp. 299–307, 2003.
  29. F. Vanderhaegen, S. Zieba, S. Enjalbert, and P. Polet, “A benefit/cost/deficit (BCD) model for learning from human errors,” Reliability Engineering and System Safety, vol. 96, no. 7, pp. 757–766, 2011.
  30. A. Mosleh, “A model-based human reliability analysis framework,” in Proceedings of the International Conference on Probabilistic Safety Assessment and Management (PSAM '10), Seattle, Wash, USA, 2010.
  31. E. Zio, P. Baraldi, M. Librizzi, L. Podofillini, and V. N. Dang, “A fuzzy set-based approach for modeling dependence among human errors,” Fuzzy Sets and Systems, vol. 160, no. 13, pp. 1947–1964, 2009.
  32. L. Podofillini, V. Dang, E. Zio, P. Baraldi, and M. Librizzi, “Using expert models in human reliability analysis—a dependence assessment method based on fuzzy logic,” Risk Analysis, vol. 30, no. 8, pp. 1277–1297, 2010.
  33. D. I. Gertman, H. S. Blackman, J. I. Marble, C. Smith, R. L. Boring, and P. O'Reilly, “The SPAR-H human reliability analysis method,” in Proceedings of the 4th American Nuclear Society International Topical Meeting on Nuclear Plant Instrumentation, Controls and Human-Machine Interface Technologies, Columbus, Ohio, USA, 2005.
  34. T. Q. Tran, R. L. Boring, D. D. Dudenhoeffer, B. P. Hallbert, M. D. Keller, and T. M. Anderson, “Advantages and disadvantages of physiological assessment for next generation control room design,” in Proceedings of the 8th IEEE HFPP Conference on Human Factor and Power Plants and 13th Annual Workshop on Human Performance, Root Cause, Trending, Operating Experience, Self Assessment (HPRCT '07), pp. 259–263, Monterey, Calif, USA, August 2007.
  35. R. L. Boring, “Modeling human reliability analysis using MIDAS,” in Proceedings of the 5th International Topical Meeting on Nuclear Plant Instrumentation, Controls, and Human Machine Interface Technology, Albuquerque, NM, USA, November 2006.
  36. Q.-L. Lin, D.-J. Wang, W.-G. Lin, and H.-C. Liu, “Human reliability assessment for medical devices based on failure mode and effects analysis and fuzzy linguistic theory,” Safety Science, vol. 62, pp. 248–256, 2014.
  37. M. Hadjimichael, “A fuzzy expert system for aviation risk assessment,” Expert Systems with Applications, vol. 36, no. 3, pp. 6512–6519, 2009.
  38. D. Huang, T. Chen, and M. J. Wang, “A fuzzy set approach for event tree analysis,” Fuzzy Sets and Systems, vol. 118, no. 1, pp. 153–165, 2001.
  39. G. W. Parry, “Suggestions for an improved HRA method for use in Probabilistic Safety Assessment,” Reliability Engineering and System Safety, vol. 49, no. 1, pp. 1–12, 1995.
  40. M. Ramos Martins and M. Coelho Maturana, “Application of Bayesian Belief networks to the human reliability analysis of an oil tanker operation focusing on collision accidents,” Reliability Engineering and System Safety, vol. 110, pp. 89–109, 2013.
  41. M. Celik and S. Cebi, “Analytical HFACS for investigating human errors in shipping accidents,” Accident Analysis & Prevention, vol. 41, no. 1, pp. 66–75, 2009.
  42. A. Gregoriades and A. Sutcliffe, “Scenario-based assessment of nonfunctional requirements,” IEEE Transactions on Software Engineering, vol. 31, no. 5, pp. 392–409, 2005.
  43. M. Bertolini, “Assessment of human reliability factors: a fuzzy cognitive maps approach,” International Journal of Industrial Ergonomics, vol. 37, no. 5, pp. 405–413, 2007.
  44. P.-C. Li, G.-H. Chen, L.-C. Dai, and L. Zhang, “A fuzzy Bayesian network approach to improve the quantification of organizational influences in HRA frameworks,” Safety Science, vol. 50, no. 7, pp. 1569–1583, 2012.
  45. B. Kim and R. R. Bishu, “On assessing operator response time in human reliability analysis (HRA) using a possibilistic fuzzy regression model,” Reliability Engineering and System Safety, vol. 52, no. 1, pp. 27–34, 1996.
  46. K. Leiden, K. R. Laughery, J. Keller, J. French, W. Warwick, and S. D. Wood, A Review of Human Performance Models for the Prediction of Human Error, The Human Systems Integration Division, NASA, Ann Arbor, Mich, USA, 2001.
  47. L. A. Zadeh, “Outline of a new approach to the analysis of complex systems and decision processes,” IEEE Transactions on Systems, Man and Cybernetics, no. 1, pp. 28–44, 1973.
  48. T. Terano and M. Sugeno, Microscopic Optimization by Using Conditional Fuzzy Measures, Tokyo Institute of Technology, Tokyo, Japan, 1974.
  49. R. M. Tong, “A control engineering review of fuzzy systems,” Automatica, vol. 13, no. 6, pp. 559–569, 1977.
  50. J. M. Mendel and R. I. B. John, “Type-2 fuzzy sets made simple,” IEEE Transactions on Fuzzy Systems, vol. 10, no. 2, pp. 117–127, 2002.
  51. T. Dereli, A. Baykasoglu, K. Altun, A. Durmusoglu, and I. B. Türksen, “Industrial applications of type-2 fuzzy sets and systems: a concise review,” Computers in Industry, vol. 62, no. 2, pp. 125–137, 2011.
  52. T. J. Ross, Fuzzy Logic with Engineering Applications, John Wiley & Sons, New York, NY, USA, 2009.
  53. L. Zhang, X. Wu, Y. Qin, M. J. Skibniewski, and W. Liu, “Towards a fuzzy Bayesian network based approach for safety risk analysis of tunnel-induced pipeline damage,” Risk Analysis, vol. 36, no. 2, pp. 278–301, 2015.
  54. G. Kabir, R. Sadiq, and S. Tesfamariam, “A fuzzy Bayesian belief network for safety assessment of oil and gas pipelines,” Structure and Infrastructure Engineering, pp. 1–16, 2015.
  55. L. Zhang, X. Wu, M. J. Skibniewski, J. Zhong, and Y. Lu, “Bayesian-network-based safety risk analysis in construction projects,” Reliability Engineering and System Safety, vol. 131, pp. 29–39, 2014.
  56. M. Hänninen, “Bayesian networks for maritime traffic accident prevention: benefits and challenges,” Accident Analysis & Prevention, vol. 73, pp. 305–312, 2014.
  57. Z. L. Yang, S. Bonsall, A. Wall, J. Wang, and M. Usman, “A modified CREAM to human reliability quantification in marine engineering,” Ocean Engineering, vol. 58, pp. 293–303, 2013.
  58. D. Woods and J. Wreathall, Managing Risk Proactively: The Emergence of Resilience Engineering, Ohio University, Columbus, Ohio, USA, 2003.
  59. E. Hollnagel, D. D. Woods, and N. Leveson, Resilience Engineering: Concepts and Precepts, Ashgate, 2007.

Copyright © 2016 P. A. Baziuk et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

