Seeing the Invisible Load: XR + Multimodal Sensing for Cognitive Ergonomics in Industrial Training
Open Access Conference Proceedings Article
Authors: Jessica Johnson, Andwele Grant
Abstract: Extended reality (XR) technologies are increasingly positioned as disruptive Industry 5.0 tools for human-centric industrial training and intelligent human–system integration. Coupled with multimodal sensing (eye tracking, electroencephalography (EEG), heart rate variability (HRV), galvanic skin response (GSR), and other physiological signals), XR environments promise to make otherwise invisible cognitive demands observable, especially for novice trainees entering complex industrial settings. Yet the evidence base is fragmented: (1) there is no quantitative synthesis of the cognitive ergonomics benefits of XR plus sensing; (2) little is known about which XR–sensor configurations yield the strongest effects; (3) prior reviews rarely focus on industrial and manufacturing tasks; (4) multimodal signals are used predominantly for post-hoc diagnosis rather than real-time adaptation; and (5) trade-offs between egocentric, in-situ capture and controlled laboratory configurations are poorly characterized. This paper presents a meta-analysis of empirical studies that (a) used XR for training or performance support, (b) integrated at least one physiological or behavioral sensing channel, and (c) reported training- or work-relevant outcomes such as workload, situation awareness, task performance, or transfer. The synthesis focuses on manufacturing and industrial tasks (e.g., assembly, inspection, maintenance) and on novice or early-career operators. It yields evidence indicating where XR plus multimodal sensing robustly improves cognitive ergonomics for industrial novices, where effects are weak or inconsistent, and which broad modality–sensor pairings are most often associated with reduced workload, enhanced situation awareness, and lower error rates. Results indicate that egocentric, in-situ capture increases ecological validity without systematically degrading training and performance benefits, but they also reveal a major gap: most systems treat multimodal data as diagnostic output rather than as input to intelligent, closed-loop adaptation. Building on these findings, we derive design guidance for intelligent interfaces and human–machine teaming in industrial systems, emphasizing XR modalities and adaptive policies for future cognitive-ergonomics-aware Industry 5.0 embedded systems.
Keywords: Hybrid Cognitive-Centered Systems, XR Training, Cognitive Ergonomics, Human-Machine Teaming, Adaptive Cognitive Systems
DOI: 10.54941/ahfe1007064
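The abstract identifies a gap worth making concrete: multimodal signals are typically used for post-hoc diagnosis rather than as inputs to closed-loop adaptation. The Python sketch below illustrates, under stated assumptions, what such a closed loop could look like. The channel names, fusion weights, and the 0.7/0.3 thresholds in workload_index and adapt_guidance are hypothetical choices for illustration, not methods reported in the paper.

# Hypothetical sketch of closed-loop XR adaptation; all names, weights,
# and thresholds below are illustrative assumptions, not from the paper.
from dataclasses import dataclass
import random

@dataclass
class SensorSample:
    """One multimodal reading; units and ranges are illustrative."""
    pupil_diameter_mm: float   # eye tracking: larger pupils ~ higher load
    hrv_rmssd_ms: float        # heart rate variability: lower ~ higher load
    gsr_microsiemens: float    # skin conductance: higher ~ higher arousal

def workload_index(s: SensorSample) -> float:
    """Fuse channels into a 0..1 workload estimate (toy weighted sum)."""
    pupil = min(max((s.pupil_diameter_mm - 2.0) / 4.0, 0.0), 1.0)
    hrv = min(max((60.0 - s.hrv_rmssd_ms) / 60.0, 0.0), 1.0)
    gsr = min(max(s.gsr_microsiemens / 20.0, 0.0), 1.0)
    return 0.4 * pupil + 0.4 * hrv + 0.2 * gsr

def adapt_guidance(level: int, load: float) -> int:
    """Closed-loop policy: raise XR guidance when overloaded, fade it
    when the trainee has spare capacity. The dead band between the two
    thresholds provides hysteresis so the level does not oscillate."""
    if load > 0.7:
        return min(level + 1, 3)   # e.g., add step-by-step AR overlays
    if load < 0.3:
        return max(level - 1, 0)   # e.g., fade hints to build autonomy
    return level

if __name__ == "__main__":
    level = 1
    for t in range(5):
        sample = SensorSample(
            pupil_diameter_mm=random.uniform(2.5, 5.5),
            hrv_rmssd_ms=random.uniform(15.0, 70.0),
            gsr_microsiemens=random.uniform(2.0, 18.0),
        )
        load = workload_index(sample)
        level = adapt_guidance(level, load)
        print(f"t={t} load={load:.2f} guidance_level={level}")

The hysteresis band between the two thresholds is a deliberate design choice: it keeps the guidance level from flickering when the workload estimate hovers near a single cut-off, which is one way a diagnostic signal becomes a usable control input.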