From Checklists to Chatbots: Reimagining HRA with Generative AI
Open Access
Article
Conference Proceedings
Authors: Michael Hildebrandt, Awwal Arigi
Abstract: This paper evaluates the capability of Large Language Models (LLMs) to support Human Reliability Assessment (HRA) through a systematic test using the Integrated Human Event Analysis System for Event and Condition Assessment (IDHEAS-ECA) methodology. Using Claude Opus 4.1, we generated Steam Generator Tube Rupture scenarios and subsequently tasked the model with producing a comprehensive HRA analysis, which was then independently reviewed by two IDHEAS-ECA method experts. The LLM demonstrated substantial domain knowledge, generating technically coherent scenarios with appropriate procedural details and system responses, and produced a structured analysis covering cognitive functions and performance influencing factors. However, expert review identified critical methodological gaps, including conflation of concepts from different HRA methods, omission of formal task analysis steps required by NUREG-2256, and inadequate identification of human failure events. While current LLMs show promise as auxiliary tools for scenario generation and preliminary analysis, they require significant enhancement before they can support safety-critical HRA applications. Future work should focus on method-specific training, integration with structured knowledge representations (e.g., knowledge graphs), and development of validation protocols to ensure appropriate application boundaries.
Keywords: Large Language Models, Human Reliability Assessment, Nuclear Power Operations, Knowledge Graphs, IDHEAS-ECA
DOI: 10.54941/ahfe1007027


AHFE Open Access