Investigating Feature Set Decisions for Mental State Decoding in Virtual Reality based Learning Environments

Open Access
Conference Proceedings
Authors: Katharina Lingelbach, Daniel Diers, Michael Bui, Mathias Vukelić

Abstract: In modern workplaces with rapidly changing skill requirements, suitable training and learning environments play a key role in keeping companies competitive and effective and in ensuring job satisfaction. Virtual Reality (VR) has emerged as a revolutionary technology for providing an immersive, interactive, and engaging learning experience. Especially when erroneous behaviour is associated with severe consequences or high resource costs, VR offers the opportunity to explore actions and visualize their consequences safely and at affordable cost. In addition, it provides an easy way to personalize educational content, learning speed, and/or format to the individual, guaranteeing a good fit with skills and needs. This is decisive, since insufficient or excessive workload during training sessions results in demotivation and reduced performance. In the latter case, persistent professional exhaustion, pressure to succeed, and stress can lead to long-term psychological consequences for employees. Besides skill and ability, current physical conditions (e.g., illness or fatigue) and psychological states (e.g., motivation) also affect learning performance. To identify and monitor individual mental states, Brain-Computer Interfaces (BCIs) measuring neurophysiological activation patterns, e.g., with electroencephalography (EEG) or functional near-infrared spectroscopy (fNIRS), can be integrated into a VR learning environment. Recently, fNIRS, a mobile optical brain imaging technique, has become popular for real-world applications due to its portability and ease of use. For reliable online decoding of mental states, informative neuronal patterns, suitable methods for pre-processing and artefact removal, and efficient machine learning algorithms for classification need to be explored. We therefore investigated different working memory states in a freely moving fNIRS experiment presented in VR and whether these states can be decoded reliably.
Eleven volunteers (four female, all right-handed, mean age 23.73 years, SD = 1.42, range 21–26 years) participated in the study. The experimental task was a colour-based visuo-spatial n-back paradigm adapted from Lühmann and colleagues (2019) with a low (1-back) and a high (3-back) working memory load condition and a 0-back condition as active baseline. Brain activity was recorded using the mobile NIRx NIRSport2. To capture brain activation patterns associated with working memory load, the optode montage was designed to optimally cover the prefrontal cortex (PFC; in particular, dorso- and ventrolateral parts of the PFC), with some lateral restriction imposed by the VR head-mounted display (HMD). fNIRS signals were processed using the Python toolboxes mne and mne-nirs. For decoding working memory load, we extracted statistical features (peak, minimum, average, slope, peak-to-peak amplitude, and time-to-peak) from epochs of oxygenated (HbO) and deoxygenated (HbR) hemoglobin concentration per channel. A Linear Discriminant Analysis (LDA), a Support Vector Machine (SVM), and a Gradient Boosting classifier (XGBoost) were explored and compared against a Dummy classifier (empirical chance level). We also investigated which cortical regions contributed to the decoding when choosing single features and which feature combinations were best suited to optimize performance. With this study, we aim to provide empirically supported decision recommendations as a step towards future online decoding pipelines in real-world VR-based learning applications.
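The six statistical features named in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes an epoch arrives as a NumPy array of shape (n_channels, n_times) with an assumed sampling frequency, as one would obtain from an MNE epoch of HbO or HbR concentration.

```python
import numpy as np

def extract_features(epoch, sfreq=10.0):
    """Compute per-channel statistical features from one fNIRS epoch.

    epoch : array of shape (n_channels, n_times), e.g. HbO concentration.
    sfreq : assumed sampling frequency in Hz (illustrative value).

    Returns an array of shape (n_channels, 6) with columns:
    peak, minimum, average, slope, peak-to-peak, time-to-peak (s).
    """
    times = np.arange(epoch.shape[1]) / sfreq
    peak = epoch.max(axis=1)
    minimum = epoch.min(axis=1)
    average = epoch.mean(axis=1)
    # slope: least-squares linear fit per channel (degree-1 polynomial)
    slope = np.polyfit(times, epoch.T, deg=1)[0]
    ptp = peak - minimum
    time_to_peak = times[epoch.argmax(axis=1)]
    return np.column_stack([peak, minimum, average, slope, ptp, time_to_peak])
```

Concatenating these feature vectors across channels (and across HbO/HbR) yields the flat feature matrix that a classifier consumes.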
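The classifier comparison described above can be sketched with scikit-learn. This is an illustrative pipeline under stated assumptions, not the study's code: scikit-learn's GradientBoostingClassifier stands in for XGBoost (swap in xgboost.XGBClassifier where that package is available), and the DummyClassifier provides the empirical chance level used as a baseline.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def compare_classifiers(X, y, cv=5):
    """Return mean cross-validated accuracy per classifier.

    X : feature matrix (n_epochs, n_features), y : condition labels.
    The Dummy baseline estimates the empirical chance level.
    """
    models = {
        "LDA": LinearDiscriminantAnalysis(),
        "SVM": make_pipeline(StandardScaler(), SVC(kernel="linear")),
        # stand-in for the XGBoost classifier used in the study
        "GradientBoosting": GradientBoostingClassifier(),
        "Dummy": DummyClassifier(strategy="stratified"),
    }
    return {name: cross_val_score(model, X, y, cv=cv).mean()
            for name, model in models.items()}
```

A real-world decoder would be judged against the Dummy score rather than nominal chance, since class imbalance and small sample sizes shift the empirical chance level.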

Keywords: Brain-Computer Interfaces (BCI), functional Near-Infrared Spectroscopy (fNIRS), Visuo-Spatial Working Memory, Learning, Machine Learning

DOI: 10.54941/ahfe1003014
