Metaverse, Virtual Environments and Game Design

Editors: Tareq Z. Ahram, Christianne Falcão

Topics: Virtual Environments and Game Design

Publication Date: 2025

ISBN: 978-1-964867-54-0

DOI: 10.54941/ahfe1005989

Articles

Virtual Mittweida - Creating a game-based approach to teach artificial intelligence for games

With the proliferation of computing technologies and an ongoing trend of introducing digital and blended learning into higher education, innovative approaches to teaching complex topics like artificial intelligence (AI) have emerged. Of particular interest is the use of game-based learning. According to problem-based learning theory, providing students with an interactive problem and encouraging them to independently find solutions promotes deeper understanding and skill acquisition. Game-based approaches thus offer an engaging way for students to explore challenging concepts. However, despite the growing use of game-based methods in fields like economics, their application in computer science, especially in teaching game AI, remains under-explored.

Understanding AI is increasingly critical for game development, as modern games emulate human-like behaviour in areas such as decision-making, character routines, and adaptive strategies. While many mechanisms and approaches of agent-level decision-making and planning are well understood, their application in video games poses unique challenges, such as accommodating unpredictable player interactions and ensuring performance efficiency without degrading the player experience. While strategy games like StarCraft or multiplayer online battle arenas like DotA 2 have served as proving grounds for advanced AI training methods, their use in education has been limited due to their high complexity and steep learning curve.

This work proposes a novel interactive application to fill this niche. Taking inspiration from city-building and management games, the application simulates the campus of the University of Applied Sciences Mittweida, where students are given control of agents representing archetypal student roles. The agents' goal is the acquisition of knowledge, an abstract resource gained by participating in courses, which requires the agents to navigate the campus.
Students interact with the system through an API that provides information on the state of the simulation and allows issuing commands to specific agents. For example, agents who continuously acquire knowledge over a prolonged period do so at decreasing efficiency. To remedy this, a student implements a routine that checks the learning efficiency of all agents and commands "tired" agents to take a break. Alternatively, the student could train a machine learning algorithm to perform the same task in a more adaptive manner. Additionally, the application enables dynamic changes to the environment at runtime, such as adding or removing courses or buildings, simulating player-driven alterations to the game world. By designing decision-making algorithms for these agents, students gain hands-on experience with fundamental AI concepts, e.g. decision trees, bridging the gap between theoretical knowledge and practical application.

To evaluate the effectiveness of the application, a comparative study with undergraduate students is planned. Over the course of two semesters, two groups of students will be taught the basics of game AI: one using traditional teaching methods (primarily lectures), the other using a game-based method incorporating the new application. The learning progress of both groups will be monitored using assignments, with students being given a sample project and tasked to develop a game AI solution, e.g. for a non-player character in a first-person shooter.
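The break-scheduling routine described in the abstract could look roughly like the sketch below. This is a minimal illustration only: the `Agent` class, the `efficiency` field, the `take_break` command, and the threshold value are all hypothetical stand-ins, not the application's actual API.

```python
# Minimal sketch of the "tired agent" routine described above.
# All names (Agent, efficiency, "take_break") are hypothetical
# placeholders for the application's real API.

EFFICIENCY_THRESHOLD = 0.5  # below this, an agent counts as "tired"

class Agent:
    def __init__(self, agent_id, efficiency):
        self.agent_id = agent_id
        self.efficiency = efficiency  # current learning efficiency in [0, 1]

def rest_tired_agents(agents, send_command):
    """Check every agent's learning efficiency and command
    "tired" agents to take a break via send_command."""
    rested = []
    for agent in agents:
        if agent.efficiency < EFFICIENCY_THRESHOLD:
            send_command(agent.agent_id, "take_break")
            rested.append(agent.agent_id)
    return rested

# Usage with a stub command sink standing in for the real API:
issued = []
campus = [Agent("alice", 0.9), Agent("bob", 0.3), Agent("carol", 0.45)]
tired = rest_tired_agents(campus, lambda aid, cmd: issued.append((aid, cmd)))
print(tired)  # -> ['bob', 'carol']
```

A machine-learning alternative, as the abstract notes, would replace the fixed threshold with a learned policy over the same command interface.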

Alexander Thomas Kühn, Marc Ritter, Manuel Heinzig, Christian Roschke
Open Access
Article
Conference Proceedings

Wheelchair Virtual Reality Simulator ERA: A Real to Virtual Interface Investigation

Wheelchair users face significant mobility challenges, requiring innovative solutions to enhance accessibility. Virtual reality (VR) simulators offer a promising approach; however, existing systems are often costly and application-specific. This study focuses on advancing the ERA system, an open-source, modular, and cost-effective VR wheelchair simulator designed for diverse applications. The primary objective is to integrate real wheelchair tracking within the virtual environment, while the secondary objective is to generate comparative data between virtual and real wheelchair use. A between-subjects experimental design was conducted with 10 participants divided into two groups of five. Each group navigated a predefined path within a 9m × 7m virtual classroom using either a fully virtual wheelchair or a real wheelchair equipped with a VR controller for position and orientation tracking. Performance metrics, including task completion time, total displacement, and total rotation, were analyzed. Findings indicate that the VR controller provides a viable tracking solution, and the collected data offer valuable insights for enhancing the ERA system’s interface and simulation fidelity.

Erico Monteiro, Alexandre Anibal Campos Bonilla, Milton Cinelli, Rafael Campos
Open Access
Article
Conference Proceedings

Enhancing Virtual Reality Gaming through Wearable Haptic Feedback

Virtual reality (VR) gaming has seen significant advancements in immersive technologies, yet many current systems still rely on handheld controllers that limit full-body interaction and realistic tactile feedback. This paper presents an enhanced version of the Haptic Gamer Suit (H-Suit), a wearable haptic feedback system designed to improve user immersion in VR environments. The upgraded H-Suit features advanced actuators for precise tactile feedback, as well as smart textiles embedded with sensors for pressure, temperature, and motion to provide real-time environmental responses. Key innovations include an expanded glove interface with additional buttons and motion sensors, enabling more intuitive interactions and seamless integration with AR/VR smartglasses for gesture recognition and real-time visual feedback. The enhanced H-Suit offers a comprehensive, full-body sensory experience, bridging the interaction gap between the physical and digital worlds and enhancing the overall immersion and emotional resonance of VR gaming. This work represents a step in the development of wearable haptic technologies for next-generation virtual reality and extended reality (XR) applications.

Chutisant Kerdvibulvech
Open Access
Article
Conference Proceedings

Enhancing Free Walking in Virtual Environments with Warning Walls: A Pilot Study on Redirected Walking Using Machine Learning Agents

Virtual Reality (VR) technology allows users to explore unlimited virtual environment spaces. Users can explore virtual spaces using a Head-Mounted Display (HMD) device. Various techniques for exploring VR spaces, such as teleportation, flying, and walking-in-place, as well as devices such as omnidirectional treadmills, are often used. Previous literature states that real walking is the most natural method of exploring virtual spaces. However, natural walking is constrained by the limited physical space, putting the user at risk of encountering the boundary walls of the physical space. The Redirected Walking (RDW) technique addresses the limitations of the tracking space so that users can explore unlimited virtual spaces without encountering the boundaries of the physical space. The RDW technique has developed rapidly since it was first introduced more than two decades ago, and its development now makes use of artificial intelligence technology. This pilot study aims to explore the use of AI, within a Deep Reinforcement Learning framework, in designing virtual environments through a simulation system. This study uses machine learning agents trained to explore a virtual space, larger than the tracking space, without predefined pathways, by applying the essential principles of redirected walking techniques: rotation, translation, and curvature gains. The study results show that machine learning agents can learn well and explore virtual spaces larger than the tracking space, both with and without warning walls.
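The three gains named in the abstract can be summarized in a short sketch. The function below illustrates the general RDW principle only; the gain values and curvature radius are arbitrary examples, not the parameters or detection thresholds used in the study.

```python
# Illustrative mapping from real motion to virtual motion using the three
# classic redirected-walking gains (rotation, translation, curvature).
# Gain values below are arbitrary examples, not the study's parameters.

def apply_rdw_gains(real_step_m, real_turn_rad,
                    translation_gain, rotation_gain, curvature_radius_m):
    """Return the (virtual step, virtual turn) the user perceives.

    - translation gain scales walked distance,
    - rotation gain scales head rotation,
    - curvature gain injects extra rotation per metre walked,
      steering the user along a circle of the given radius.
    """
    virtual_step = real_step_m * translation_gain
    virtual_turn = real_turn_rad * rotation_gain
    virtual_turn += real_step_m / curvature_radius_m  # curvature-induced turn
    return virtual_step, virtual_turn

# One metre of straight walking with a curvature radius of 7.5 m:
step, turn = apply_rdw_gains(1.0, 0.0,
                             translation_gain=1.2,
                             rotation_gain=1.0,
                             curvature_radius_m=7.5)
```

Kept below perceptual thresholds, these per-frame adjustments accumulate so that a straight virtual path maps to a curved real path inside the tracking space.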

Nyoman Natanael, Chien-Hsu Chen, Wei-Ting Lu
Open Access
Article
Conference Proceedings

Impact of Cognitive Load on Learning in Immersive Virtual Reality Environments

The present work focuses on the use of Immersive Virtual Reality (IVR) to train users by immersing them in a safe and controlled Virtual Environment (VE) and enabling them to learn by doing. Particular attention is given to adaptive VR-training systems that are capable of dynamically adjusting the training experience based on the user's performance and cognitive state. Considering this, a new methodology for such systems is proposed, grounded in the Cognitive Theory of Multimedia Learning (CTML). This methodology aims to help instructors understand how to adapt VR-training systems to users during their experience in VEs, leading to effective Learning Outcomes while avoiding a high Cognitive Load (CL). This human factor plays a critical role in mediating the relationship between Presence, Immersion, and Learning Outcomes, as the VE imposes a CL on users. To support a deeper understanding, factors influencing CL in VEs are presented and corresponding solutions are proposed. It is our view that adaptive VR-training systems, including their design, architecture, and attributes, can pave the way to new research directions, and that the methodology presented in this paper will support this effort.

Francesca Massa, Sara Buonocore, Raffaele De Amicis, Andrea Tarallo, Giuseppe Di Gironimo
Open Access
Article
Conference Proceedings

Right Hand, Left Hand, Both Hands: Exploring the Relationship Between Handedness, Interactivity and Cognitive Load in Virtual Reality Procedural Training

Virtual reality (VR) has emerged as a viable tool for immersive learning, yet the impact of individual differences on user interaction and training outcomes in VR remains underexplored. This study aims to address this gap by examining the relationship between a number of individual difference and personality variables, interactivity (active vs. passive engagement), and cognitive load in VR. To investigate this, 79 participants were recruited from a university participant pool. One of the variables that emerged as related to performance was handedness (i.e., whether a user is left- or right-handed). Previous research has shown that handedness in VR can explain differences in movement speed, interactivity, and embodiment; specifically, aligning controls and interactions with the user's dominant hand can improve these outcomes. In our study, 72 right-handed and 7 left-handed individuals (10%, which is representative of handedness in the population) completed a series of procedural tasks in VR. These tasks were designed with varying levels of interactivity, with the active instruction condition affording more user-driven engagement than the passive instruction condition. Performance metrics such as accuracy in the VR post-training test were collected, while cognitive load was measured post-training using a subjective questionnaire. Data analyses were performed using Analyses of Variance and multiple regressions. Results revealed no significant effect of handedness on cognitive load and no interaction effects between handedness and interactivity in the VR post-training scores. However, we found that handedness and cognitive load were significant predictors of procedural training outcomes. These findings suggest that handedness may influence training in VR, underscoring its potential role in shaping learning outcomes. Based on these results, we offer practical considerations for implementing VR-based training across safety-critical industries like aviation, defense, and healthcare.
These include recommendations for designing VR interfaces that accommodate handedness variability and guidelines for optimizing interactivity to enhance learning outcomes for both right-handed and left-handed users.

Fiona Duruaku, Tulsi Patel, Nathan Sonnenfeld, Blake Nguyen, Florian Jentsch
Open Access
Article
Conference Proceedings

Eye-tracking based mental fatigue assessment in VR environments

While the "stress" measured by wearable devices typically reflects autonomic nervous system responses to external pressures, a deeper form of fatigue, mental fatigue, is becoming more prevalent in today's information-rich society. Characterized by decreased cognitive and attentional capacity, mental fatigue is harder to detect and slower to recover from, often not resolved by sleep alone; early detection and intervention are crucial to prevent its chronic progression. We have developed ZEN EYE Pro, a VR-based eye-tracking system that allows rapid (approximately 1 minute) and non-invasive assessment of mental fatigue in a real-world context. The purpose of this study was twofold: first, to evaluate the feasibility of ZEN EYE Pro as an objective assessment tool for mental fatigue; second, to test the validity of the developed assessment by examining the impact of two established stress reduction interventions, mindfulness-based stress reduction (MBSR) and inhalation aromatherapy. In an experiment with 61 Japanese adult participants, the mean mental fatigue score was 44.20% (SD = 9.93). (1) A 5-minute mindfulness session with Apple Vision Pro reduced fatigue scores by 18.85% (p < 0.001). (2) A 2-minute inhalation aromatherapy session with a blend of aromas reduced fatigue scores by 14.47% (p < 0.001). These results demonstrate the feasibility and validity of objective and time-efficient mental fatigue assessment using ZEN EYE Pro and suggest its applicability in a variety of real-world settings.

Anna Shimafuji, Fuma Hayashi
Open Access
Article
Conference Proceedings

Mitigating VR Motion Sickness in Visual Sharing Based on Observer-Observed Coupled Movements

Visual sharing via Virtual Reality (VR) significantly enhances skill transfer within complex industrial domains. However, sharing another person's viewpoint can lead to severe motion sickness. Although many researchers have proposed solutions to VR sickness, few are specifically tailored to visual-sharing experiences. This study proposes a VR sickness mitigation method for visual-sharing experiences that introduces Observer-Observed (Observer and Operator) Coupled Movements. The method reduces the disparity between the body's movements and the visuals through field-of-view restriction and motion-compensated point rendering with observer-observed coupled movements in the visual-sharing system. Furthermore, an experiment was conducted to verify the effects of the proposed method. The results indicate that using this method during shared viewpoints can reduce the sickness caused by perceptual and visual discrepancies to some extent, without significantly impacting the observer's understanding of the content in the VR experience.

Langsong Sun, Kimi Ueda, Hirotake Ishii, Hiroshi Shimoda
Open Access
Article
Conference Proceedings

Changes in Heart Rate Variability During Immersive Multisensory Forest Bathing Experiences

Exposure to nature promotes relaxation and reduces stress, but accessibility concerns have led to increased investigation of virtual reality nature simulations, including “virtual forest bathing.” This study examines the effects of audiovisual (AV) and audio-visual-olfactory (AVO) immersive VR experiences on relaxation, quality of experience (QoE), and heart rate variability (HRV) among nurses in a mental health inpatient unit. Participants experienced 2.5-min sessions of 360° natural scenes with counterbalanced conditions. Both conditions (AV and AVO) showed improvements in relaxation and QoE ratings, while the AVO condition resulted in greater HRV changes towards the end of the experience, as well as greater correlations with subjective relaxation and QoE ratings.

Marilia Karla Soares Lopes, Belmir Jose De Jesus Junior, Olivier Rosanne, Susanna Pardini, Lora Appel, Christopher Smith, Tiago Henrique Falk
Open Access
Article
Conference Proceedings

Subjective and Objective Assessment of the Impact of Stress and Mental Workload on Cybersickness During Virtual Reality Training

Cybersickness is an issue in immersive virtual reality (VR), akin to motion and simulator sickness, resulting in symptoms such as nausea, dizziness, and eye strain. Cybersickness has been shown to affect a significant portion of VR users. In training scenarios involving demanding tasks (e.g., first responders' training), however, reports of cybersickness symptoms are higher than those for the average user. It is hypothesized that the stress and mental workload generated by these scenarios may cause this increased propensity for cybersickness. In this study, we investigate the impact of stress alone, of mental workload alone, and of their combination on cybersickness levels. The levels of stress and mental workload are manipulated while participants perform a driving simulator task. In the high mental workload condition, the driver has to keep an eye on the road while driving from one location to another, as well as monitor and count the number of pedestrians wearing a certain colour of shirt. In the high stress condition, traffic becomes heavy, background noise increases, and sudden braking is needed to avoid accidents (e.g., a ball rolling into the road or a car suddenly changing lanes). Lastly, the combined condition contains all the elements of the previous two conditions. In all cases, a baseline driving period (without added stress or workload) is present and is used for within-subject comparisons. Both self-report and neurophysiological measurements are used to gauge the impact of these three conditions on cybersickness. Self-report questionnaires are used to assess stress (DASS-21), mental workload (NASA-TLX), and cybersickness symptoms (SSQ) at several points during the experiment.
In turn, an instrumented Meta Quest 3 VR headset equipped with 16 electroencephalography (EEG) and electro-oculography (EOG) sensors is used, while wearable devices monitor photoplethysmography (PPG), electrocardiography (ECG), and respiration signals. These neurophysiological signals are used to continuously extract measures of mental workload, stress, and other cognitive/affective states in near real-time. In this paper, we describe the experimental setup, the instrumented headset, and the EEG and biosignal metrics that are computed, and provide preliminary subjective and objective findings based on the first 12 participants (four per condition). The study is ongoing and aims to collect data from 60 participants (20 per condition). It is hoped that these preliminary insights will help the research community refine VR training protocols, making them more comfortable and effective for students.

Marc Antoine Moinnereau, Abhishek Tiwari, Danielle Benesch, Nicole Bolt, Gregory P Krätzig, Simon Paré, Tiago Henrique Falk
Open Access
Article
Conference Proceedings

Immersive Philosophical Thought Experiments Through Virtual Reality

Virtual reality (VR) is transforming higher education by providing immersive and interactive learning environments that enhance traditional pedagogical methods. While disciplines such as medicine and engineering have been early adopters of VR for skill development and simulation training, its potential in the humanities, particularly Philosophy, is only beginning to be explored. Philosophy courses often challenge students to grapple with abstract concepts and complex ethical dilemmas, which can feel disconnected from real-world applications. VR offers a unique opportunity to bridge this gap by allowing students to experience these scenarios in lifelike, immersive settings. This study examines the use of VR in university courses through the classic Trolley Problem, a thought experiment taught across Philosophy departments, especially in introductory courses. The experiment used an A/B test approach to gather feedback on participants' experience and their attitudes towards VR versus a traditional classroom approach to the Trolley Problem, represented in this experiment by a video lecture. While the experiment uncovered a range of interesting information, key results indicate that the VR experience was perceived as different or more impactful when deciding what decision to make in the thought experiment; strong support for the VR experience as a learning tool; a preference for the VR experience over a video presentation of the same material; and a wish for the inclusion of VR content in courses regardless of previous experience with VR or videogaming. The paper also discusses additional connections in the data, methodological limitations, and opportunities for future research.

Gordon Carlson, Sammuel Byer
Open Access
Article
Conference Proceedings

Impact of EEG-Based Virtual Reality Haptic Force Feedback on User Experience

This study investigates the effects of haptic force feedback on brain neurofunctional connectivity and user immersion in virtual reality (VR) rehabilitation training. We used a VR setup with wearable force feedback devices to compare task performance between conditions with and without force feedback across different difficulty levels. By collecting electroencephalography (EEG) signals and subjective data, we gained valuable insights into cognitive and emotional responses. Results showed enhanced neural activity and stronger immersion in the beta and gamma frequency bands under force feedback conditions. Multimodal stimulation improved cognitive memory and user experience, with effects positively correlated with task difficulty. These findings show that combining natural, multisensory interactions can improve VR training and help develop better rehabilitation methods in the future.

Zhang Ping, Wenjing Ma, Feiyang Li, Zhenhua Yu
Open Access
Article
Conference Proceedings

Construction of a VR Multimodal Dataset for Stress Recognition

Accurate identification of individuals' stress states is critical for optimizing intervention strategies and enhancing safety performance in intelligent human-machine interaction systems and high-risk operational environments. Virtual Reality (VR) technology offers a novel paradigm for inducing controllable and ecologically valid stress through highly immersive scenarios. The development of high-quality multimodal stress datasets represents an urgent requirement to advance emotion computing and practical applications of intelligent human-computer interaction. This study presents the creation of a VR-based multimodal stress dataset. The experimental protocol comprised four tasks: ground walking, elevated platform 1 walking, elevated platform 2 walking, and jumping platform tasks. Physiological data including electroencephalogram (EEG), photoplethysmography (PPG), electrodermal activity (EDA), and eye tracking data were collected across all tasks, along with subjective stress ratings in different scenarios. Data from 30 participants were acquired. A binary classification was performed between two representative scenarios: ground walking (low-stress state) and jumping platform (high-stress state). Following feature extraction from EEG and PPG signals, classification models (decision tree, random forest, Bagging, and AdaBoost) were implemented. The random forest classifier achieved optimal performance, yielding a cross-subject five-fold cross-validation accuracy of 0.8360 ± 0.0162 and F1 score of 0.8140 ± 0.0301 for distinguishing between low-stress and high-stress states. This dataset provides essential data support for real-time stress recognition, with potential applications in intelligent human-computer interaction and medical rehabilitation. Data-driven interventions based on this resource could significantly enhance health outcomes and work efficiency across multiple domains.
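The cross-subject five-fold evaluation described above, in which no participant's samples appear on both sides of a fold, can be sketched in plain Python. The sample layout below is a hypothetical simplification; the actual dataset uses features extracted from EEG and PPG signals.

```python
# Sketch of a cross-subject five-fold split as described above: folds are
# formed over participants, so no participant contributes samples to both
# the training and the test side of any fold. The sample layout is a
# hypothetical simplification of the real EEG/PPG feature data.

def cross_subject_folds(samples, n_folds=5):
    """samples: list of (subject_id, features, label) tuples.
    Yields (train, test) partitions split by subject, never by sample."""
    subjects = sorted({sid for sid, _, _ in samples})
    fold_of = {sid: i % n_folds for i, sid in enumerate(subjects)}
    for k in range(n_folds):
        train = [s for s in samples if fold_of[s[0]] != k]
        test = [s for s in samples if fold_of[s[0]] == k]
        yield train, test

# 30 subjects, one low-stress (0) and one high-stress (1) sample each:
data = [(sid, [0.1 * sid], label) for sid in range(30) for label in (0, 1)]
for train, test in cross_subject_folds(data):
    assert not ({s for s, _, _ in train} & {s for s, _, _ in test})
    assert len(test) == 12  # 6 subjects x 2 samples per fold
```

Any of the classifiers named in the abstract (decision tree, random forest, Bagging, AdaBoost) would then be fit on each training partition and scored on the corresponding held-out subjects.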

Qichao Zhao, Jianming Yang, Qingju Wang, Ying Gao, Bing Zhang, Qian Zhou, Ping Wu, Han Li
Open Access
Article
Conference Proceedings

G-LLE System Design: Gamified Rehabilitation for Children’s Lower Limb Therapy Based on Natural Mapping

It is widely recognized that gamification can potentially improve users' performance in rehabilitation tasks. However, limited research has addressed mitigating the additional cognitive load introduced by gamification in pediatric rehabilitation. This study examined a natural mapping-based gamified rehabilitation system (G-LLE) designed to improve children's rehabilitation performance and minimize cognitive load from a user-centered perspective. Eighteen participants were enrolled in a controlled trial, providing questionnaire feedback and interviews after each condition. The results showed that G-LLE improved children's rehabilitation performance while maintaining a low cognitive load. They also revealed the importance of personalised game elements and immersive interactions in enhancing the rehabilitation experience. This study demonstrates the potential of a user-centered design approach to innovate pediatric rehabilitation, offering valuable insights for developing scalable and personalised gamified rehabilitation systems.

Yaqin Ping, Men Han Li, En Guo Cao
Open Access
Article
Conference Proceedings