A Systematic Review of Ground Truth Labeling and Prediction for Cognitive Workload Adaptive Systems
Authors: Udit Kumar Das, Moajjem Chowdhury, Yunmei Liu, David Kaber
Abstract: Cognitive workload monitoring (real-time inference of operator workload state) is crucial to the safe operation of complex human-machine systems and motivates adaptive automation technologies that dynamically assist operators to prevent both overload and disengagement. We systematically reviewed 75 recent studies (2015–2025) on machine learning-based cognitive workload monitoring and adaptive systems. The review focused on three key challenges: (1) ground-truth workload labeling; (2) predictive model generalization across users; and (3) adaptive automation/interface interventions. Approximately 28% of studies relied on retrospective self-report workload scales for ground-truth labels, while others used objective task performance metrics or hybrid labeling approaches. Predictive models achieved high accuracy for the individuals they were trained on (subject-dependent validation; mean ~85.6%) but lower accuracy when tested on new users (subject-independent validation; mean ~80.3%). The majority of studies presented offline model development (asynchronous classification of workload states) or conceptual system proposals; only 7 studies (9.3%) implemented and evaluated a real-time, closed-loop, workload-responsive system with human participants. These gaps highlight the need for standardized multimodal workload-state labeling methods, cross-user modeling techniques, and empirical validation of closed-loop workload-adaptive systems in operational settings.
Keywords: Ground Truth Labeling, Cognitive Workload, Predictive Modeling, Adaptive Interventions, Human-in-the-Loop System
DOI: 10.54941/ahfe1006883
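
To make concrete the distinction the abstract draws between subject-dependent and subject-independent validation, and the self-report-based labeling it describes, the following is a minimal sketch in Python. It is not drawn from any reviewed study: the dataset, feature counts, and the per-subject median split of a NASA-TLX-style score are all hypothetical assumptions for illustration.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold, LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)

# Hypothetical dataset: 10 subjects x 40 trials, 16 physiological features
# per trial (e.g., EEG band power, heart-rate variability).
n_subjects, trials_per_subject, n_features = 10, 40, 16
X = rng.normal(size=(n_subjects * trials_per_subject, n_features))
groups = np.repeat(np.arange(n_subjects), trials_per_subject)  # subject ID per trial

# Hypothetical ground-truth labeling: binarize a retrospective self-report
# workload score (e.g., NASA-TLX, 0-100) with a per-subject median split.
tlx = rng.uniform(0, 100, size=X.shape[0])
y = np.concatenate([
    (tlx[groups == s] > np.median(tlx[groups == s])).astype(int)
    for s in range(n_subjects)
])

clf = RandomForestClassifier(n_estimators=200, random_state=0)

# Subject-dependent validation: train and test within each subject's own data;
# on real data this typically yields the higher accuracies the review reports.
within = [
    cross_val_score(clf, X[groups == s], y[groups == s],
                    cv=KFold(n_splits=5, shuffle=True, random_state=0)).mean()
    for s in range(n_subjects)
]
print(f"subject-dependent accuracy:   {np.mean(within):.3f}")

# Subject-independent validation: leave-one-subject-out folds, so the model
# is always tested on a user it has never seen; accuracy typically drops.
loso = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"subject-independent accuracy: {loso.mean():.3f}")

On this random data both schemes hover near chance; the point is only the evaluation protocol (within-subject folds versus held-out subjects), not the accuracy values.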