Human Error, Reliability, Resilience, and Performance


Editors: Ronald Boring

Topics: Human Error, Reliability & Performance

Publication Date: 2023

ISBN: 978-1-958651-58-2

DOI: 10.54941/ahfe1003547

Articles

Human error and performance modeling with virtual operator model (HUNTER) synchronously coupled to Rancor Microworld Nuclear Power Plant Simulator

The Human Unimodel for Nuclear Technology to Enhance Reliability (HUNTER) is a virtual nuclear power plant operator. The virtual operator can follow procedures, and the timing and reliability of its actions are tied to dynamic human performance modeling parameters (Boring et al., 2016). Unlike traditional (static) risk modeling, HUNTER uses a dynamic version of SPAR-H to calculate performance shaping factors (PSFs) based on evolving plant conditions. HUNTER models tasks at the level of Goals-Operators-Methods-Selection rules (GOMS)-HRA as the operator walks through procedure steps. Here we describe how the HUNTER virtual operator model was tightly coupled with the Rancor Nuclear Power Plant Microworld as part of a suite of probabilistic risk assessment (PRA) tools.
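
To make the dynamic PSF idea concrete, a minimal sketch is given below; the PSF names, multiplier values, and thresholds are hypothetical illustrations and are not taken from HUNTER or the SPAR-H method.

```python
# Minimal sketch of a dynamic performance-shaping-factor (PSF) adjustment,
# loosely inspired by the dynamic SPAR-H idea described in the abstract.
# All names, multipliers, and thresholds are hypothetical illustrations.

NOMINAL_HEP = 0.001  # hypothetical nominal human error probability for a task

def psf_multipliers(plant_state: dict) -> dict:
    """Map evolving plant conditions to PSF multipliers (illustrative only)."""
    psfs = {}
    # Less time available than required raises the error probability.
    psfs["available_time"] = 10.0 if plant_state["time_available_s"] < plant_state["time_required_s"] else 1.0
    # High-stress conditions (e.g., many active alarms) also raise it.
    psfs["stress"] = 2.0 if plant_state["active_alarms"] > 5 else 1.0
    return psfs

def task_hep(plant_state: dict) -> float:
    """Combine the nominal HEP with PSF multipliers, capped at 1.0."""
    hep = NOMINAL_HEP
    for multiplier in psf_multipliers(plant_state).values():
        hep *= multiplier
    return min(hep, 1.0)

if __name__ == "__main__":
    state = {"time_available_s": 120, "time_required_s": 300, "active_alarms": 8}
    print(f"Dynamic HEP under current conditions: {task_hep(state):.4f}")
```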

Roger Lew, Ronald Boring, Thomas Ulrich
Open Access
Article
Conference Proceedings

A critical analysis of the concept of resilience skills from an enactivist perspective

This paper offers a critical analysis of the concept of professional skill and cognition as it is conceived in the field of resilience engineering, which is concerned with understanding how adaptive capacity is configured in complex sociotechnical systems. It is argued that the current disembodied and representationalist approach, separating thinking from acting, cannot accommodate resilience understood as adaptive capacity. Instead, an enactivist approach, emphasizing the constitutive coupling between embodied action and environment, is suggested as an ontological basis for research on resilience and adaptability in work.

Martin Viktorelius
Open Access
Article
Conference Proceedings

Human reliability analysis in aviation accidents: A review

In the civil aviation sector, human factors are the primary cause of many safety incidents. Aircraft flying, maintenance, and operations are major tasks that depend heavily on professionals and are therefore subject to the probability of human error. Human reliability analysis (HRA), which can evaluate human states and manage risk, has been developed over the years to identify, predict, and reduce human failures throughout aircraft operating procedures. Different generations of HRA tools have been developed to quantify the risks associated with safety accidents, such as the Human Error Assessment and Reduction Technique, the Technique for Human Error Rate Prediction, Standardized Plant Analysis Risk Human Reliability Analysis, the Cognitive Reliability and Error Analysis Method, and Bayesian Networks (BN). However, little is known about how these approaches are applied in aviation safety. This review aimed to systematically examine the current status of research on HRA in aviation accidents. A total of 13 studies were included, encompassing first-, second-, and third-generation HRA methods used alone or in combination with other methods (e.g., the Improved Analytic Hierarchy Process, the Functional Resonance Analysis Method, the Human Factors Analysis and Classification System, and Fault Tree Analysis). Results revealed that third-generation HRA with BN was frequently used, showing great application potential for flight safety risk prevention and reduction. In the future, it will be necessary to test other data-driven third-generation HRA models in the field of airworthiness.

Steven Tze Fung Lam, Alan H.S. Chan
Open Access
Article
Conference Proceedings

Digital Twin Verification for Advanced Reactor Remote Operations

Advanced reactors, especially microreactors, must take advantage of remote monitoring and control strategies to reduce deployment and operating costs and compete with existing electrical generators. A robust and flexible remote concept of operations must be developed to support diverse designs and use cases and to ensure safe and reliable operations. This paper presents the unique aspects of remote operations, contrasting them with existing established operations to highlight issues that must be considered. A key element of the remote operations concept is the ability of a physically separated command and control center to maintain awareness of the reactor’s state and perform supervisory control of the reactor. A digital twin implementation is proposed to serve as a verification system, providing the remote operations center with verified reactor state information and providing the remotely situated reactor with verified operations center commands. This approach augments existing communication infrastructure to support operators as they assess the validity of the information they are receiving and build confidence that the commands they issue can be executed at the remote reactor.
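
One way to picture the proposed verification role is a comparator that checks reported telemetry against the digital twin's prediction and screens operator commands before they are forwarded; the sketch below is purely illustrative, and the class names, fields, and tolerances are assumptions rather than the authors' design.

```python
# Illustrative comparator for digital-twin verification of remote operations.
# The twin predicts the expected reactor state; reported telemetry and
# operator commands are only accepted when they agree with that prediction.
# All class names, fields, and tolerances are hypothetical.

from dataclasses import dataclass

@dataclass
class ReactorState:
    power_mw: float
    coolant_temp_c: float

class DigitalTwin:
    def predict(self) -> ReactorState:
        # Stand-in for a physics-based model of the reactor.
        return ReactorState(power_mw=5.0, coolant_temp_c=300.0)

def verify_state(reported: ReactorState, twin: DigitalTwin, tol: float = 0.05) -> bool:
    """Flag telemetry that deviates from the twin prediction by more than tol."""
    expected = twin.predict()
    return (abs(reported.power_mw - expected.power_mw) <= tol * expected.power_mw
            and abs(reported.coolant_temp_c - expected.coolant_temp_c) <= tol * expected.coolant_temp_c)

def verify_command(command: str, allowed: set) -> bool:
    """Only forward commands recognized as valid for the current state."""
    return command in allowed

if __name__ == "__main__":
    telemetry = ReactorState(power_mw=5.1, coolant_temp_c=301.0)
    twin = DigitalTwin()
    if verify_state(telemetry, twin) and verify_command("reduce_power", {"reduce_power", "hold"}):
        print("Telemetry and command verified; forwarding to remote reactor.")
    else:
        print("Discrepancy detected; request operator review.")
```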

Thomas Ulrich, Joseph Oncken, Ronald Boring, Kaeley Stevens, Megan Culler, Steven Bukowski, Troy Unruh, Jeren Browning
Open Access
Article
Conference Proceedings

Synchronous vs. Asynchronous Coupling in the HUNTER Dynamic Human Reliability Analysis Framework

The Human Unimodel for Nuclear Technology to Enhance Reliability (HUNTER) framework for dynamic human reliability analysis (HRA) has recently been developed into standalone software. HUNTER creates a virtual operator that is coupled to a virtual system model, in this case a nuclear power plant model. Asynchronous model coupling is most often found in the use of thermohydraulic codes like RELAP5-3D, which are designed to run in batch mode without interruption to determine the evolution of plant parameters from a particular set of conditions. Within RELAP5-3D, it is possible to schedule changes in the configuration, but conditions are determined a priori and not changed once a particular simulation run is started. In contrast, synchronous model coupling is most commonly found in interactive simulators, which feature a system model linked to real-time inputs from a human user. A model that is executed is a simulation, while a simulator is a simulation designed to interact with human inputs. A simulation is typically asynchronous with respect to other models or humans, whereas a simulator runs synchronously, with regular exchanges with other models or humans. For example, a training simulator at a nuclear power plant operates synchronously in such a manner that an input from the reactor operator at any point in time will change the evolution of the simulation run. The simulator provides an evolving response to dynamic contexts that reflect operator actions. The ability to change the simulation direction mid-course is the hallmark of synchronous coupling. HUNTER, as a virtual operator, most accurately reflects human-system interactions when it is coupled synchronously with a plant model. In this paper, we explore synchronous and asynchronous coupling based on implementations in HUNTER.
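
The contrast between the two coupling modes can be sketched as follows; PlantModel and OperatorModel are hypothetical stand-ins rather than the actual HUNTER or RELAP5-3D interfaces.

```python
# Illustrative contrast between asynchronous (batch) and synchronous
# (stepwise) coupling of an operator model and a plant model.
# PlantModel and OperatorModel are hypothetical stand-ins, not HUNTER APIs.

from typing import Optional

class PlantModel:
    def __init__(self):
        self.level = 50.0  # e.g., steam generator level (%)

    def step(self, action: Optional[str]) -> float:
        # The plant drifts downward unless the operator intervenes.
        self.level += 2.0 if action == "raise_feedwater" else -1.0
        return self.level

class OperatorModel:
    def act(self, observed_level: float) -> Optional[str]:
        # Simple procedure-like rule: intervene when the level is low.
        return "raise_feedwater" if observed_level < 45.0 else None

def asynchronous_run(steps: int) -> float:
    # Actions are fixed a priori; the plant evolves without feedback.
    plant = PlantModel()
    level = plant.level
    for action in [None] * steps:
        level = plant.step(action)
    return level

def synchronous_run(steps: int) -> float:
    # The operator observes the plant each step and can change its course.
    plant, operator = PlantModel(), OperatorModel()
    level = plant.level
    for _ in range(steps):
        level = plant.step(operator.act(level))
    return level

if __name__ == "__main__":
    print("Asynchronous final level:", asynchronous_run(10))
    print("Synchronous final level:", synchronous_run(10))
```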

Ronald Boring, Thomas Ulrich, Roger Lew, Jooyoung Park
Open Access
Article
Conference Proceedings

Analysis of Tasks in Autonomous Systems Using the EMRALD Dynamic Risk Assessment Tool

An autonomous system is a system that has the power and ability to govern itself in the performance of its functions. Autonomous systems have been actively pursued in a variety of domains, such as the automotive, aviation, maritime, medical, and nuclear fields. As an unmanned concept employing the highest automation level, an autonomous system performs most of the work in normal operations and emergency situations. However, despite advances in technology, many researchers have noted that these systems still require human actions. The nature of human actions in autonomous systems differs from that of the human actions considered in existing systems. Nevertheless, only a few studies have been conducted on 1) characterizing the different types of errors and risks associated with human actions when interacting with autonomous systems and 2) how to evaluate human actions in autonomous operations. As a starting point, this study investigates how tasks in autonomous operation differ from those in existing nuclear power plant operation using the Event Modeling Risk Assessment Using Linked Diagrams (EMRALD) software. In this paper, insights on human error and time are derived and discussed based on the output of the EMRALD models.

Jooyoung Park, Jisuk Kim, Thomas Ulrich, Ronald Boring, Steven Prescott
Open Access
Article
Conference Proceedings

Japanese Practical Concepts on Human Error Prevention: 3H, 4M, and 5S

Environment of Quality Control and Safety Management in Japanese Industries: Japanese customers are, in general, very strict about product quality. They can select products from many competitors, so they will not buy products with even minor flaws. In addition, high-technology products are required to have very high reliability, so product defects are critical and human errors are not allowed in production. Historically, the Japanese economy has concentrated on mass manufacturing since the 19th century. In the past, the quality control (QC) of Japanese companies was generally weak, and many in Japan consider that this low quality resulted in the unreliability of weapons in World War II. After the war, most Japanese companies adopted scientific QC methods from the United States, based on statistical analysis to reduce uncertainty in production. Japanese companies have also developed their own methods to manage product quality and workplace safety, concepts that appear to have originated within Japanese industrial society after the war. This paper introduces the Japanese concepts for QC and safety management named "3H", "4M", and "5S", and explains their theoretical backgrounds.

Risk Prediction with 3H and 4M: In safety management, it is very important to predict possible incidents beforehand. In Japanese industrial society, many people use the concepts of 3H and 4M for this purpose. "3H" stands for three Japanese expressions: "Hajimete" ("for the first time"), "Hisashiburi" ("in quite a while"), and "Henkou" ("change"). In most incidents, 3H features are very likely to appear in the narrative, so Japanese engineers consider 3H features to be warning signs that precede accidents. "4M" denotes the four major aspects of production processes, namely "Man", "Machine", "Material", and "Management" [1]. (Some engineers use the variation "5M", adding "Method" to the four.) In some Japanese companies, supervisors responsible for safety are trained to question whether any of the 4M is contaminated by the baleful features of 3H. For example, supervisors pay special attention to "a new worker", "use of a machine for the first time in quite a while", "a change of the instruction rule", and other 4M aspects with 3H features. This attitude, concentrated on the risks of "3H in 4M", is a very efficient way to predict large risks hidden in the workplace, even though it may give less attention to other minor aspects.

Importance of Apparent Order: 5S: "5S" is a group of five Japanese words: "Seiri" (organization), "Seiton" (placing things in order), "Seisou" (cleaning), "Seiketsu" (sanitariness), and "Shitsuke" (compliance with rules). These five practices keep the workplace simple and safe, which suppresses human errors. The 5S features are superficial and easy to see, so supervisors can quickly judge their condition just by observing the workplace. Most Japanese industries give top priority to maintaining 5S to keep quality and safety.

Reference: 1. Krzysztof Knop, Krzysztof Mielczarek, "Using 5W-1H and 4M Methods to Analyse and Solve the Problem with the Visual Inspection Process – Case Study", 12th International Conference Quality Production Improvement (QPI 2018), 2018.
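
As an illustration only, the "3H in 4M" screening attitude can be pictured as a simple cross-check of each production aspect against the three 3H conditions; the workplace observations in the sketch below are invented.

```python
# Illustrative "3H in 4M" screening: flag any production aspect (4M)
# currently showing a 3H condition (first time, after a long interval, change).
# The workplace observations below are invented for the example.

THREE_H = {"hajimete (first time)", "hisashiburi (in quite a while)", "henkou (change)"}

observations = {
    "Man":        {"hajimete (first time)"},           # a new worker on the task
    "Machine":    {"hisashiburi (in quite a while)"},   # machine unused for months
    "Material":   set(),
    "Management": {"henkou (change)"},                  # instruction rule was changed
}

def screen_3h_in_4m(obs: dict) -> list:
    """Return the 4M aspects that currently carry a 3H warning sign."""
    return [aspect for aspect, conditions in obs.items() if conditions & THREE_H]

if __name__ == "__main__":
    for aspect in screen_3h_in_4m(observations):
        print(f"Pay special attention to: {aspect} -> {observations[aspect]}")
```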

Toru Nakata
Open Access
Article
Conference Proceedings

What Fatality and "Prescott Way" Causal Factors Are Revealed in the July 23, 2013, Deployment Zone News Conference?

This is a story that needs to be told - and always remembered - truthfully. This semi-inclusive paper examines the alleged wildland fire human factors that existed and contributed to the fatal Granite Mountain Hot Shot (GMHS) tragedy on the June 30, 2013, Yarnell Hill (YH) Fire, as derived from the July 24, 2013, GMHS Deployment Zone (DZ) News Conference videos by InvestigativeMEDIA reporter and author John Dougherty (JD) with Prescott FD (PFD) Wildland Battalion Chief (WBC) Darrell Willis, along with numerous reporters. The videos were then transcribed from the spoken words into a written PDF format using the novel Otter app, so that readers can truly read what WBC and the various reporters are discussing, compared to the mostly unreliable, hit-and-miss "CC - Closed Caption" versions in the two videos. Rather than use all the Otter-transcribed text, the authors selectively used those WBC ambiguities regarding established, tried-and-true Rules of Engagement, i.e., LCES, Fire Orders, etc. Being able to read what is said is more revealing and thought-provoking, offering new perspectives on this divisive fatal event. Torn and tormented while aware of the real truth, WBC held these young men as Sons - on the annoying horns of a dilemma - feeling obliged to defend them, weakly attempting to share in his alleged illusory-recollected "truth" of why it happened.

Fred Schoeffler, Joy A. Collura
Open Access
Article
Conference Proceedings

Method for Enhancing Evaluation of the Human Error Probability in Disaster Risk Assessment for Industrial Plants

An important part of considering countermeasures for disasters in industrial plants is conducting an accurate risk assessment. In general, we assess risk based on two indicators: the harm severity and the probability. We first select candidate countermeasures based on the harm severity. Next, the specifications of the countermeasures, such as the scope of application, expected lifetime, and cost, are determined taking into account the probability. As a result, we can introduce appropriate countermeasures in the field that control risks within acceptable limits. Various countermeasures for disasters caused by human error are considered similarly and introduced in the field. It is necessary to analyze the factors related to human error, called performance shaping factors, in order to evaluate the human error probability. For this purpose, workers with appropriate knowledge of and ability in human factors must be assigned to the risk assessment team. However, it is difficult for many industrial plants to secure the required number of workers who can accurately analyze and evaluate the effects of human factors. At the accident level, it is possible to invite human factors experts to the analysis team because the budget for the analysis is large. On the other hand, the budget for an incident analysis is limited because of the large number of incidents. The purpose of this study is to support the enhancement of incident-level disaster risk assessment. We examine an assessment method for the human error probability that can be conducted by those with limited knowledge of human factors. We attempted to evaluate on a six-level scale (5: Certain, 4: Likely, 3: Possible, 2: Unlikely, 1: Rare, and 0: Eliminated), which is commonly used in risk assessments, rather than calculate detailed values such as the human error probability. First, we organized and classified factors related to the occurrence of human error by text mining of disaster incident cases over the past 20 years. Second, we referred to a number of past studies on performance shaping factors and constructed [the database of factors influencing human error risk]. Finally, we developed a system to evaluate the human error probability by extracting keywords and sentences from incident reports of industrial plants and matching them with [the database of factors influencing human error risk]. The system consists of the following functions: (1) extract keywords w_i and sentence data S_j = (w_1, w_2, …, w_k), in which keywords are concatenated, from an incident report; (2) check S_j against [the database of factors influencing human error risk] to estimate its influence on the human error probability, with the estimated impact for S_j denoted x_i; (3) comprehensively evaluate the x_i, with the overall impact evaluation value denoted Y; (4) indicate the human error probability by a six-level value based on Y; and (5) map the estimated human error probability and the separately entered harm severity to a commonly used risk matrix. We proposed a method that can be adapted to the risk matrix used in general disaster risk assessment. This method was validated by several safety administrators in an industrial plant, and its validity and feasibility were confirmed. However, linking with the incident report design remains a problem for practical application, because it is sometimes difficult to adapt this method depending on how the incident report is designed.
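
A minimal sketch of the evaluation flow in functions (1) through (5) might look like the following; the factor database entries, impact scores, and level thresholds are hypothetical placeholders rather than the authors' data.

```python
# Minimal sketch of the five-step evaluation flow described in the abstract:
# extract keywords, match them against a factor database, aggregate the
# impacts into Y, and map Y onto the six-level probability scale.
# The database entries, scores, and thresholds below are hypothetical.

FACTOR_DATABASE = {
    "night shift": 2.0,      # hypothetical impact scores
    "first time": 3.0,
    "time pressure": 2.5,
}

LEVELS = ["Eliminated", "Rare", "Unlikely", "Possible", "Likely", "Certain"]

def extract_keywords(report_text: str) -> list:
    """(1) Extract candidate keywords/phrases from an incident report."""
    text = report_text.lower()
    return [phrase for phrase in FACTOR_DATABASE if phrase in text]

def estimate_impacts(keywords: list) -> list:
    """(2) Look up each matched factor's influence on the error probability."""
    return [FACTOR_DATABASE[k] for k in keywords]

def overall_impact(impacts: list) -> float:
    """(3) Aggregate individual impacts into an overall value Y."""
    return sum(impacts)

def to_six_level(y: float) -> int:
    """(4) Map Y onto the 0-5 scale used in the risk matrix."""
    thresholds = [0.0, 1.0, 2.5, 4.0, 6.0]  # hypothetical cut points
    return sum(y > t for t in thresholds)   # 0 (Eliminated) .. 5 (Certain)

if __name__ == "__main__":
    report = "Operator performed the valve lineup for the first time under time pressure."
    level = to_six_level(overall_impact(estimate_impacts(extract_keywords(report))))
    # (5) The level would then be combined with harm severity in a risk matrix.
    print(f"Estimated human error probability level: {level} ({LEVELS[level]})")
```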

Mamiko Murahashi, Yusaku Okada
Open Access
Article
Conference Proceedings