Interpretable Multimodal Framework for Assessing Cognitive Load and Stress in Collaborative Robot Environments

Open Access
Article
Conference Proceedings
Authors: Sandi Baressi Šegota, Darko Etinger, Ivan Lorencin, Nikola Tankovic, Luka Blašković, Nikola Anđelić

Abstract: This study presents an explainable machine learning framework for estimating cognitive load and stress from multimodal physiological and affective data collected during human–robot collaboration tasks. The proposed approach integrates electroencephalography (EEG), electrocardiography (ECG), galvanic skin response (GSR), and emotion-related features with contextual task information to model human cognitive states. Data were preprocessed, standardized, and evaluated using a leave-one-participant-out cross-validation scheme to ensure subject-independent generalization. Bayesian optimization was applied to tune the hyperparameters of non-tree-based models, including support vector regression (SVR) for predicting continuous NASA-TLX scores and a multilayer perceptron (MLP) for classifying discrete stress levels. The regression model achieved an R² of 0.98 and a mean absolute error of 0.08, while the classification model obtained an accuracy and F1-score of 0.94. Model interpretability was ensured through SHapley Additive exPlanations (SHAP) analysis, which identified EEG coherence and beta-band activity, ECG LF/HF ratios, and emotion-related indicators such as sadness and confusion as dominant contributors to increased cognitive load and stress. These findings highlight the potential of combining physiological and affective modalities with explainable artificial intelligence for reliable cognitive state assessment. The developed methodology provides a foundation for adaptive robotic systems capable of monitoring and responding to human mental states, thus supporting safer and more efficient collaboration in dynamic operational environments.
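The leave-one-participant-out evaluation described in the abstract can be sketched with scikit-learn's `LeaveOneGroupOut` splitter and an SVR regressor. This is a minimal illustration only: the synthetic data, feature count, and SVR hyperparameters below are assumptions for demonstration, not the study's actual dataset or tuned configuration.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error

# Illustrative stand-in data: each row is a feature vector (e.g. EEG, ECG,
# GSR, emotion features) and the target mimics a continuous NASA-TLX score.
rng = np.random.default_rng(0)
n_participants, n_windows, n_features = 5, 20, 12
X = rng.normal(size=(n_participants * n_windows, n_features))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=len(X))  # synthetic target
groups = np.repeat(np.arange(n_participants), n_windows)  # participant IDs

# Leave-one-participant-out: every fold holds out all samples of one
# participant, so the model is always tested on an unseen subject.
logo = LeaveOneGroupOut()
fold_mae = []
for train_idx, test_idx in logo.split(X, y, groups):
    scaler = StandardScaler().fit(X[train_idx])  # standardize on train only
    model = SVR(C=10.0, epsilon=0.01)            # hyperparameters assumed
    model.fit(scaler.transform(X[train_idx]), y[train_idx])
    pred = model.predict(scaler.transform(X[test_idx]))
    fold_mae.append(mean_absolute_error(y[test_idx], pred))

print(f"mean LOPO MAE over {len(fold_mae)} folds: {np.mean(fold_mae):.3f}")
```

In the paper's pipeline, the SVR hyperparameters would come from Bayesian optimization rather than being fixed as above; the key point illustrated here is that scaling is fit on the training fold only, keeping each held-out participant fully unseen.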

Keywords: cognitive load, collaborative robotics, stress detection, explainable AI, human-robot collaboration

DOI: 10.54941/ahfe1007150

