Accessibility, Assistive Technology and Digital Environments

Editors: Matteo Zallio
Topics: Human Factors and Assistive Technology
Publication Date: 2025
ISBN: 978-1-964867-38-0
DOI: 10.54941/ahfe1005973
Articles
Metaverse-Based Demonstrators as an Alternative to Traditional Presentations: Case Fossil-Free Steelmaking Processes
In recent years, metaverse platforms have developed to the point where they are expected to become a revolutionary marketing tool. This paper provides a case study of their use as a compelling alternative to traditional presentation methods. An enterprise-targeted, standalone metaverse experience was developed with Microsoft Mesh to compare two different steelmaking processes of a major company in the industry participating in a maritime project. The developed experience was tested on several occasions in systematically controlled environments, involving 22 participants, all of whom were either affiliated with the maritime industry or academic researchers working with the maritime industry. The participants were asked to complete a user feedback survey after testing the experience. Based on both the observational data and the survey responses gathered, it was evident that metaverse solutions built for informative purposes can significantly improve or even replace traditional means of showcasing otherwise complex processes.
Tommi Immonen, Eero Nirhamo, Mikko Salonen, Kaapo Seppälä, Seppo Helle, Olli Heimo, Teijo Lehtonen
Open Access
Article
Conference Proceedings
How Face Display and Clothing Affect User Impressions of Robotic Virtual Reality Agents Introducing Products
This study investigated how adding faces and clothing to robotic virtual reality (VR) agents that introduce products within a VR space affects users' evaluative impressions. Research has shown that users expect human-like behavior and social cues when interacting with computers. Furthermore, human-like agents in VR environments have been shown to evoke positive impressions and encourage prosocial behavior. In this study, we designed moderately human-like VR agents to encourage favorable product impressions. The literature on human-like agents has reported that robotic agents can effectively temper user expectations, while computer graphics (CG) animated agents excel at building user trust. In our earlier study, we found that users formed a more positive impression of product guidance given by CG animated agents than of that given by robotic agents. However, we argue that each design has unique strengths and that the blanket dismissal of robotic agents is unwarranted. Therefore, we explored the potential of enhancing user impressions by incorporating CG animated features into robotic agents. Specifically, this study focused on "face display" and "clothing" as important human-like components and investigated their effect on users' evaluative impressions of robotic VR agents. In the experiment, robotic agents were presented in six conditions combining face display (absent or present) and clothing (none, female clothing, or male clothing). The participants evaluated the robotic VR agents in each condition on how human-like and friendly they appeared. The agent's face was designed to be neutral, with minimal emotional expression, and consisted of simple geometric shapes. We chose student uniforms as clothing because the participants were likely to be familiar with them.
In all conditions, the VR agent performed motions such as bowing and pointing to products, and provided voice guidance. Twelve male participants (average age, 22.6 years) took part in the experiment. They wore the HTC VIVE Pro Eye to observe the VR agent introducing the product. The analysis of variance of the participants' subjective evaluations revealed significant main effects of face display (F = 6.159, p < .016) and clothing (F = 12.419, p < .001) on impressions of human-likeness. Multiple comparisons indicated that the absence of a face was rated as more human-like (p < .048) and that male clothing significantly increased the human-like impression of the robotic VR agent compared with no clothing (p < .001). A marginally significant main effect of face display on the impression of friendliness was also observed (F = 3.567, p < .063). These results suggest that incorporating male clothing into the design of robotic VR agents can enhance their human-like impressions. However, the results also suggest that very human-like agent designs are not always desirable; for example, adding a face can reduce an agent's human-like impression. We anticipate that these findings will contribute to the design of robotic VR agents that users perceive as more human-like.
Michiko Inoue, Shunsuke Yoneda, Masashi Nishiyama
Open Access
Article
Conference Proceedings
Virtual teleportation into real spaces via digital avatars for remote collaborative design
Market globalization and the rapid evolution of customer requirements strongly influence how the product design process must be performed. It is becoming increasingly important to consider the usability, functionality, and visual harmony of products in the early design phases. Collaborative design in an immersive virtual environment (IVE) is a promising approach to enhancing the flexible cooperation of a geographically distributed design team. New ICT technologies have recently emerged that allow creating IVEs to facilitate synchronous and remote design review activities through easy interaction and data sharing among all participants. A virtual environment provides an immersive space where virtual actors (avatars) represent team members to achieve high presence, real-time communication, and remote collaboration. Though several collaborative software applications based on virtual reality (VR) techniques have been developed to connect people and share design information in an IVE, in most cases the artificial 3D environments do not represent real spaces. Existing VR-based applications focus on inspection and modelling functions for analyzing target products thoroughly (i.e., rotating and manipulating CAD models, zooming, measuring specific model items, adding or deleting parts) based on 3D-CAD platforms. This study focuses on synchronous and remote design review meetings in a real space where a target product is installed and used. The technical review process requires telecommunication and mutual view sharing among remote reviewers and local collaborators. Examples include remote inspection of public infrastructure, superposition of a design concept over a real building for renovation, and remote machinery maintenance instruction. To improve a user's perception of "being present in another real space," the concept of virtual teleportation is introduced. 
This is a tele-conferencing and space-sharing technology in which participants can work together remotely as if they were in the shared real space during communication. This paper proposes a simplified virtual teleportation system for the extemporaneous sharing of a real space and the industrial products to be reviewed. The system combines augmented reality (AR) and VR to provide an IVE that enables reviewers in different geographic locations to communicate with collaborators at a local site through their digital avatars. Users feel as if they are looking at, talking to, and meeting each other face to face through their digital avatars in the real place. The system setup supports the remote evaluation of a product design installed in a real space from the viewpoint and viewing angle of the avatars. Instead of performing real-time 3D reconstruction of the scene, the proposed system constructs a static IVE from panoramic images of the local site. This approach is simpler but still preserves the visible detail of the real space and the target equipment.
Shinichi Fukushige
Open Access
Article
Conference Proceedings
Comparing 360-Degree Video and Immersive VR in Empathy Induction: Impact on Young Designers' Engagement and Problem Identification
Empathy is essential in design research, yet young designers often struggle to empathize with unfamiliar user groups, such as older adults with dementia. Virtual reality (VR) has emerged as a tool for empathy induction, but the effectiveness of different VR formats remains unclear. This study compares 360-degree video and immersive virtual environments in fostering affective and cognitive empathy, user engagement, and problem identification. A total of 22 young designers were randomly assigned to experience one of the two VR modalities, depicting daily challenges faced by dementia patients. Results showed that 360-degree video significantly enhanced affective empathy and engagement compared to immersive VR, while no significant differences were found in cognitive empathy or problem identification. These findings suggest that realistic, context-rich experiences may evoke stronger emotional resonance, whereas fully virtual environments require further refinement. Future research should explore how interactive elements, embodiment, and perspective shifts in immersive VR can improve empathy induction. This study underscores the importance of media selection and contextual realism in empathy-driven design interventions.
Dinghau Huang
Open Access
Article
Conference Proceedings
Reducing the cognitive load among teachers in hybrid lectures by a representation of remote students through a physical avatar in the classroom
Hybrid teaching, a necessity during the COVID-19 pandemic, has remained a common format in universities. Typically, a synchronous live session is joined by students either in the physical classroom or via a remote option with interaction through textual chat. While hybrid formats have many advantages in terms of inclusivity and sustainability, research also reveals numerous impairments of wellbeing and educational quality, such as the added cognitive load on the instructor, perceived differences in social presence between the two student groups (on-site and remote), and limited interaction opportunities for remote students. To address these difficulties, we explore the "Fernstudent" concept: a physical avatar represents remote students collectively in the classroom, creating a communication channel based on the same modalities as on-site communication. A field test of the prototype in two classes showed promising results. Compared to the standard option (Jitsi), the social presence of remote students was perceived as higher with the Fernstudent, and the interaction between teacher and remote students was more similar to that with on-site students. Remote students' intention to participate actively became comparable to that of on-site students. Limitations, planned further developments, and general implications for hybrid teaching and digitalization in school and workplace settings are discussed.
Daniel Ullrich, Andreas Butz, Kaja Lena Isaksen, Sarah Diefenbach
Open Access
Article
Conference Proceedings
A Monitoring Automation Recipe Cookbook: Simple Open-Source Solutions for Home-Based Support of People Living with Dementia and Their Caregivers
Activities of Daily Living (ADLs) are vital to maintaining quality of life among People Living with Dementia (PLwD), whose cognitive impairment often impacts independence in daily activities. This paper presents smart home automation "recipes" that can assist ADLs, particularly bathing, one of the most complex because of safety risks such as falls and burns. The automation recipes incorporate open-interface sensors, motion detection, and real-time alerts, with features targeted at dementia progression stages. Interventions at early stages include voice reminders and automatic water temperature adjustments; middle-stage features include safety reminders, activity-based inactivity detection, and water shut-off; caregiver-aided monitoring forms the interventions at late stages. The system's ergonomically and cognitively designed, intuitive dashboard allows caregivers to easily select and customize automation sequences based on specific needs. Central design elements, such as large touch-sensitive buttons, high-contrast screens, and voice-controlled commands, encourage use by both caregivers and PLwD. By facilitating safer, more independent bathing and reducing the need for constant supervision, the system reduces caregiver burden. Human factors and ergonomics are central to making such solutions practical, accessible, and effective for dementia care in the home.
Harish Kambampati, Damon Berry, Julie Doyle, Orla Moran, Jonathan Turner, Micheal Wilson, Dympna O Sullivan, Suzanne Smith
Open Access
Article
Conference Proceedings
Exploring museum experiences for people with cognitive disabilities: Characteristics, challenges and digital design opportunities
Cognitive impairments affect thinking, memory, and attention, posing challenges to traditional museum exhibitions that rely on static text and images. From a human factors perspective, this study examines how individuals with mild cognitive impairment (MCI) perceive museum exhibits differently from cognitively normal visitors and explores their museum visit experiences. Through in-depth interviews with 15 participants—including individuals with MCI, their relatives, and care agency staff—this study identifies psychological, cognitive, and ergonomic barriers they face. Thematic analysis reveals key difficulties and design opportunities, leading to digital design recommendations that enhance accessibility. By incorporating the perspectives of individuals with cognitive impairments, this study advances human factors research in museum design and contributes to more inclusive exhibition experiences.
Chen Peng
Open Access
Article
Conference Proceedings
Virtual Reality for Education from a User Experience Perspective: A Bibliometric Analysis
Education is shifting from traditional multimedia to immersive virtual reality (VR) environments. Despite numerous reviews on VR in education, few focus on user experience. Existing reviews mainly use qualitative methods, limiting objective analysis due to small sample sizes. Quantitative analysis can address this by covering more studies. This study explores VR in education from a user experience perspective (VR-E-UEP) using bibliometric methods. Methods: We used the Web of Science (WOS) core collection database and employed VOSviewer and CiteSpace for keyword, evolutionary, and co-citation analyses. Results: VR-E-UEP research hotspots fall into four clusters: 1) specific VR applications in education, 2) advantages and key concepts of VR in education, 3) data analysis techniques, and 4) key factors affecting user experience. Evolution analysis shows that early research focused on VR technology and applications, mid-term research emphasized human factors, and recent studies highlight machine learning. Frequently co-cited research falls into five categories: 1) definitions and technologies of virtual reality and augmented reality, 2) advantages and effectiveness evaluation of virtual reality technology in education, 3) applications of immersive virtual reality in learning and training, 4) user experience measurement, and 5) statistical analysis methods and theoretical models. Finally, future research directions are discussed.
Yang Shi
Open Access
Article
Conference Proceedings
Prototype of system to identify shape of figure from contour drawn by line of sight
Some diseases, such as amyotrophic lateral sclerosis, leave patients able to move only a limited number of body parts. Patients with these diseases have normal thinking ability and will, but their means of expressing their intentions are diminished. For instance, patients who cannot move their mouths cannot speak, and patients who cannot move their fingers cannot write or operate a PC or smartphone. Eye gaze-based input systems have been developed as a new input method for PCs and smartphones. Current gaze input systems typically operate the computer by varying the duration of gazing and blinking, which limits the available operations. Consider operating a smartphone with gaze-based input: selecting an icon and tapping can be performed, but these input types are not sufficient for flick, pinch-in, or pinch-out operations. In addition to gazing and blinking, Kosaka et al. proposed drawing figures such as circles and rectangles on the screen with the line of sight. For instance, if a user traces a square around an icon with the line of sight, the system interprets this as a flick. This allows more types of operations using eye tracking-based input. Kosaka et al. previously developed a prototype system that estimated the type of figure drawn from the trajectory of the line of sight on the screen using a probabilistic Hough transform, but it did not perform well in recognizing figures. In this study, we developed and evaluated a prototype machine learning system that recognizes the type of figure drawn from eye tracking data. We expect the system to recognize shapes correctly even when the gaze trajectory contains substantial noise or relatively complex shapes. The prototype system, which uses transfer learning, correctly identified relatively large shapes 97% of the time and relatively small shapes 78% of the time.
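The published prototype uses transfer learning; as a much simpler, purely illustrative sketch of the underlying idea (classifying a closed gaze trajectory by its geometry), the snippet below labels a point sequence as a circle or rectangle using a circularity measure. The function names, synthetic trajectories, and threshold are assumptions for illustration, not the authors' implementation.

```python
import math

def polygon_area(pts):
    """Area of the closed polyline via the shoelace formula."""
    n = len(pts)
    acc = 0.0
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        acc += x1 * y2 - x2 * y1
    return abs(acc) / 2.0

def perimeter(pts):
    """Total length of the closed polyline."""
    n = len(pts)
    return sum(math.dist(pts[i], pts[(i + 1) % n]) for i in range(n))

def classify_shape(pts, threshold=0.87):
    """Label a closed gaze trajectory as 'circle' or 'rectangle'.

    Circularity 4*pi*A/P^2 is ~1.0 for a circle and ~0.785 for a
    square; the threshold separating them is an illustrative choice.
    """
    p = perimeter(pts)
    if p == 0:
        return "unknown"
    circularity = 4.0 * math.pi * polygon_area(pts) / (p * p)
    return "circle" if circularity >= threshold else "rectangle"

# Synthetic trajectories standing in for recorded gaze samples
circle = [(math.cos(2 * math.pi * i / 100), math.sin(2 * math.pi * i / 100))
          for i in range(100)]
square = ([(-1 + i / 12, -1.0) for i in range(25)] +   # bottom edge
          [(1.0, -1 + i / 12) for i in range(25)] +    # right edge
          [(1 - i / 12, 1.0) for i in range(25)] +     # top edge
          [(-1.0, 1 - i / 12) for i in range(25)])     # left edge
```

A learned classifier, as in the study, would be needed once trajectories are noisy or the shape set grows, but a geometric baseline like this is a useful sanity check.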
Haruki Yamada, Hiroaki Kosaka
Open Access
Article
Conference Proceedings
Leveraging Exoskeletons to Reduce Musculoskeletal Disorders in Aluminum Forging: A Case Study
Musculoskeletal disorders (MSDs) are a long-term challenge in Germany, leading to significant sick leave and financial losses across industries. To address these issues, a German automotive supplier specializing in aluminum forging and cold forming initiated a pilot project to explore the potential of exoskeleton technology. The project, conducted in collaboration with exoIQ GmbH, a Hamburg-based start-up specializing in intelligent support systems, aimed to improve employee health, alleviate workplace strain, and enhance overall working conditions. The necessity for this intervention was underscored by findings from a company-wide survey conducted in collaboration with a health insurance provider. This survey revealed high levels of sick leave attributed to MSDs, confirming the need for immediate ergonomic solutions. Additionally, the Key Indicator Method (Leitmerkmalmethode) was employed to evaluate ergonomic risks at selected workstations. This method, used as both an assessment and benchmarking tool, identified critical tasks with significant biomechanical strain, providing a foundation for targeted interventions. To address these challenges, three exoskeletons designed by exoIQ GmbH were integrated into the production environment for a four-week trial. The devices were selected for their ability to reduce strain in physically demanding tasks and were tested under realistic working conditions. Employees with diverse body types and physical profiles participated in the study, ensuring a comprehensive evaluation of the technology's adaptability and effectiveness. The trial incorporated multiple methods to assess the impact of the exoskeletons. Observational data and ergonomic assessments were gathered alongside employee feedback collected through structured interviews and surveys. The Key Indicator Method was applied to compare risk levels before and after exoskeleton implementation. 
Quantitative data included metrics such as muscle activation, joint stress, and task completion times, while qualitative data captured user experiences, including perceived comfort, ease of use, and mobility challenges. Preliminary findings demonstrated a marked reduction in physical strain, particularly for tasks involving overhead work, heavy lifting, and repetitive motions. Ergonomic risk assessments indicated notable improvements, with reductions in key strain markers. Employees reported lower levels of fatigue, greater task efficiency, and improved well-being while using the exoskeletons. However, some limitations, such as the weight and mobility of the devices, were identified, suggesting areas for future refinement. This case study highlights the potential of exoskeleton technology as an effective ergonomic intervention in high-strain industrial environments. By combining innovative technology with robust evaluation methods, including health insurance data, the Key Indicator Method, and direct user feedback, this approach provides a replicable model for industries seeking to reduce MSD-related sick leave and improve workplace conditions. The findings also emphasize the importance of involving employees in the evaluation process to identify practical challenges and opportunities for optimization. Lessons learned from this pilot project will inform future research and the development of tailored exoskeleton solutions for diverse industrial applications.
Sumona Sen, Patrick Poetters
Open Access
Article
Conference Proceedings
Human-Centric Approach for Developing XR Applications in the Space Domain
This paper introduces a human-centric approach for developing extended reality (XR) applications in the space domain. XR applications supporting high-knowledge, high-value work have been developed in five projects in collaboration with the European Space Agency (ESA). These projects include (1) EdcAR, which focuses on augmented reality for assembly, integration, testing and validation (AIT/AIV) and orbit operations; (2) mobiPV4Hololens, which brings international space station (ISS) procedure viewing to HoloLens; (3) AROGAN, which is based on augmented reality for ISS and ground applications; (4) VirWAIT, which creates a virtual workplace for AIT and product assurance (PA) training and operations support; and (5) DPIAR, which digitalizes procedures and introduces augmented reality. Development began in 2016, and the system is currently being implemented in the ESA Test Centre. The consecutive projects formed the four phases of the development process: the first phase focused on demonstrating proof-of-concept, the second phase on establishing connections to space systems, and subsequent phases on developing features based on user needs and integrating connections to the ESA Test Centre's sensor systems. This phased approach ensured that the system evolved in a structured manner, addressing both technical and user-centric requirements at each stage. All these projects employed a human-centric approach in their development, incorporating the most suitable parts of the standard ISO 9241-210 (2019) on ergonomics of human-system interaction. This standard emphasizes the importance of designing systems that are both effective and satisfying for users. Over four development cycles, the XR environment combined with human-centric evaluation and design has proven to be a powerful method from early-stage proof-of-concept to actual implementation. 
Throughout the process, potential users have been able to provide valuable feedback, enhancing the novel tool's ability to support high-knowledge, high-value work. This iterative feedback loop has been crucial in refining the applications to better meet the needs of the users, ensuring that the final product is both functional and user-friendly.
Kaj Helin, Jaakko Karjalainen, Timo Kuula
Open Access
Article
Conference Proceedings
Generational Physical Ability Differences in Developing a Universal XR Metaverse Platform for Inclusive Digital Leisure Culture: Focused on Bowling
This study analyzed intergenerational differences in physical ability focusing on bowling content within the development process of a universal XR metaverse platform for inclusive digital leisure culture. Using XR devices, motion data was acquired during bowling swings from a total of 80 participants, including Teenagers (TA), Youth (YU), Middle-Age (MA), and Old-Age (OA). The analysis focused on the velocity of the right-hand controller and chest tracker during the Forward Swing and Release phases, which influence bowling swing velocity and stability. The results revealed significant intergenerational differences in velocity for both the right-hand controller and chest tracker during the Forward Swing phase. Similarly, in the Release phase, the right-hand controller and chest tracker also exhibited significant differences across generations. These results confirm intergenerational differences in physical ability during the Forward Swing and Release phases of the bowling swing motion. Furthermore, they highlight the necessity of an XR metaverse platform that accounts for these differences. This study provides foundational data for adjusting human factors related to intergenerational physical ability in XR environments.
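As a hedged sketch of how phase-wise velocities like those analyzed above can be derived from raw XR tracker logs, the snippet below computes finite-difference speeds from timestamped 3D positions and picks the peak within a phase window. The sample format and phase boundaries are illustrative assumptions, not the study's actual pipeline.

```python
import math

def speeds(samples):
    """Finite-difference speed (m/s) between consecutive tracker samples.

    samples: list of (t, x, y, z) tuples with t in seconds.
    """
    result = []
    for (t0, *p0), (t1, *p1) in zip(samples, samples[1:]):
        result.append(math.dist(p0, p1) / (t1 - t0))
    return result

def peak_speed(samples, t_start, t_end):
    """Peak speed within a phase window (e.g., the Forward Swing)."""
    window = [s for s in samples if t_start <= s[0] <= t_end]
    return max(speeds(window)) if len(window) > 1 else 0.0

# Hypothetical right-hand controller log: moving at 2 m/s along x
log = [(0.0, 0.0, 0.0, 0.0),
       (0.1, 0.2, 0.0, 0.0),
       (0.2, 0.4, 0.0, 0.0)]
```

Per-generation comparisons would then amount to collecting `peak_speed` values per participant group and testing for differences.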
Jin-i Hong, Yun-hwan Lee, Yeong-hun Kwon, Jongsung Kim, Minsung Yoon, Jongbae Kim
Open Access
Article
Conference Proceedings
Exploring Key Virtual Reality Features to Enhance Efficiency Communication in Spatial Design
Virtual reality (VR) technology has emerged as a promising tool for enhancing communication efficiency in spatial design by providing immersive and interactive environments. This study investigates the impact of specific VR features implemented through the KeyVR platform on design-related communication processes. Using a mixed-method approach involving pre- and post-test communication efficiency questionnaires and Kano Model analysis, the research evaluates how functionalities such as teleportation, material switching, and interactive sketching contribute to discussion quality, communication richness, and openness. Results indicate that VR-based communication improves several dimensions of interaction compared to traditional face-to-face methods, though challenges remain, such as the contextual applicability of the experiment. This study highlights the importance of VR features and proposes further research directions for optimizing VR tools for spatial design communication efficiency.
Allan Chung, Yo-wen Liang
Open Access
Article
Conference Proceedings
Frontal Cortex Hemodynamics Measured by NIRS during a Rehabilitation Task for Cerebrovascular Disorders Using VR Technology
In recent years, digitization has spread across medical fields, and there is growing interest in introducing VR technology, which can be enjoyed like a game, into rehabilitation. VR can train spatial cognition and attention to an object, and can simultaneously train higher brain functions in people with cerebrovascular disorders. In this study, we investigated the brain activation state of people with cerebrovascular disorders while they used a rehabilitation app. Based on the results, we examined the characteristics of brain activity dynamics during VR use. The subjects were two people with cerebrovascular disorders (women in their 60s, returning to work; symptoms: right brain injury, left hemiparesis). Brain activity was measured using NIRS (near-infrared spectroscopy) for Oxy-Hb, Deoxy-Hb, Total-Hb, and StO2. Rehabilitation was performed using an app to train visual and cognitive functions, and before and after rehabilitation, a task testing attention function (TMT: Trail Making Test) was performed. Two devices were used: a tablet and VR. Oxy-Hb increased in most trials with the tablet device compared to the resting state, but decreased in half of the trials with the VR device compared to the resting state. Comparing tablet and VR, Oxy-Hb increased significantly for the tablet (p < 0.001), while Deoxy-Hb decreased significantly for VR (p < 0.001). For StO2, both the tablet and VR conditions showed an increase in values compared to the resting condition. During the TMT test, Oxy-Hb and rSO2 after VR use were significantly higher than after tablet use (p < 0.05), suggesting that rehabilitation using VR may help maintain brain activation afterward.
Yuki Mizuno, Shino Izutsu, Mayumi Nishikubo, Yoshihiro Hoshikawa
Open Access
Article
Conference Proceedings
Towards Vehicle Content Accessibility Guideline
This study examines the current status of accessibility in vehicles. With the movement toward the Software-Defined Vehicle (SDV), automobiles are transforming into digital devices with larger screens and connected content services, including maps. However, it is questionable whether in-vehicle content is accessible to all. We often experience a feeling of being "disabled" when we cannot 1) perceive the content on screen, 2) operate functions with interactive components, or 3) understand the messages from the vehicle we are driving. How can we diagnose current in-vehicle accessibility to find directions for improvement? The core issue is the lack of a vehicle content accessibility guideline. This study reviews current guidelines and regulations in the U.S. and introduces a draft Vehicle Content Accessibility Guideline (VCAG) that can contribute to enhanced accessibility of in-vehicle digital content and apps. Recognizing the unique challenges posed by the driving context, the research explores methods to test interface design, content presentation, and interactive features to ensure universal design and content accessibility while minimizing cognitive overload for drivers.
Sookyung Cho
Open Access
Article
Conference Proceedings
Optimizing keyboard accessibility: effects of raised character size on touch-typing performance
This study investigates the impact of raised character size on typing performance, providing insights into optimizing keyboard accessibility. Using customized keyboards with varying raised character sizes, we measured touch-typing speed and accuracy and evaluated the user experience of 32 non-professional typists under strictly controlled conditions. The participants used four different keyboards: a standard mechanical keyboard (group X), a keyboard with raised characters smaller than standard Braille (group A), a keyboard with raised characters equal to standard Braille (group B), and a keyboard with raised characters larger than standard Braille (group C). Typing speed (WPM), typing accuracy (%), and user experience scores (five-point Likert scale) obtained from the experimental measurements were analyzed using one-way ANOVA. The results indicate that more prominent raised characters significantly enhance typing accuracy and overall user satisfaction. Specifically, group B (6.5 mm) achieved higher accuracy than the other groups, while group C (8.5 mm) provided the best user experience, with typing speeds comparable to the standard keyboard. This demonstrates the role of enhanced tactile feedback in improving typing performance to varying degrees. These findings suggest that incorporating more prominent raised characters into keyboard design can improve tactile feedback, thereby enhancing both typing performance and user experience.
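The one-way ANOVA used above can be sketched in a few lines. The snippet below computes the F statistic from scratch on made-up WPM samples; the numbers are illustrative placeholders, not the study's data.

```python
def one_way_anova_f(groups):
    """F statistic of a one-way ANOVA over k independent groups."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    means = [sum(g) / len(g) for g in groups]
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

# Illustrative WPM samples for three keyboard groups (not study data)
wpm_groups = [
    [50.0, 52.0, 54.0],  # group X: standard keyboard
    [48.0, 50.0, 52.0],  # group A: raised characters smaller than Braille
    [55.0, 57.0, 59.0],  # group B: raised characters at Braille size
]
f_stat = one_way_anova_f(wpm_groups)
```

In practice the F statistic would be compared against the F distribution with (k-1, N-k) degrees of freedom (e.g., via `scipy.stats.f_oneway`) to obtain a p-value.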
Mengshi Yang, Zhengyang Wang, Xuemiao Teng, Jingmin Wu, Hongtao Zhou
Open Access
Article
Conference Proceedings
Embodied Cognition in Virtual Lunar Exploration: A Multi-modal Interactive Installation for Space Science Education
This paper presents "Space Exploration: Moon Gazing," an interactive installation that applies embodied cognition theory to space science education. Traditional space science exhibits typically rely on visual and textual information, creating a passive learning experience that does little to help visitors internalize abstract astronomical concepts. Our installation addresses this limitation through a "body-environment-tool" framework that transforms space knowledge into direct bodily experience. The system integrates a six-screen collaborative interface with gesture-based interaction, VR immersion, and multi-sensory feedback mechanisms. Users experience simulated lunar gravity constraints, manipulate virtual objects through natural gestures, and engage with content through a three-layer narrative structure that connects traditional Chinese lunar imagery with modern space technology. This research contributes to human-centered design in digital environments by demonstrating how embodied interaction can bridge the gap between abstract scientific concepts and intuitive understanding, providing a new paradigm for immersive educational experiences in public science venues.
Zhixin Cai, Zhaolu Jiang
Open Access
Article
Conference Proceedings
Multi-Sensory Integration and Emotional Responses: The Impact of Materials on Perception and Emotion
With the rapid advancement of virtual reality (VR) technology, multi-sensory integration has become a significant area of focus in the emotional design of virtual environments. Materials play an important role in shaping both perception and emotion, while tactile stimuli are particularly influential in cognitive and emotional responses. Additionally, research indicates that vision, hearing, and touch are integrated. Studies have demonstrated that combining various sensory inputs, particularly tactile, visual, and auditory stimuli, can enhance cross-sensory integration. While research has examined the individual effects of tactile, visual, and auditory stimuli on perception, less attention has been paid to how these senses integrate and combine, especially within virtual reality. The study explores the impact of materials on cognition and emotion, as well as the integration between tactile, visual, and auditory inputs. An experiment was conducted in the virtual reality world "The Library of Solitude by the Sea," with thirty-two participants interacting with three materials. The research focused on the integration of tactile, visual, and auditory stimuli by stimulating all three senses simultaneously. Emotional responses were measured through questionnaires and galvanic skin response (GSR). The findings revealed that different materials influenced participants’ emotions and perceptions, with the multi-sensory group reporting greater levels of pleasure and disgust than the single-sensory group. This suggests that various materials evoke different emotional responses and that the combination of tactile, visual, and auditory stimuli leads to a more intense emotional response. The use of multiple sensory stimuli is essential for enhancing immersion and emotional consistency in virtual environments. The study fills a gap in the literature on how multi-sensory integration influences emotional responses.
It demonstrates that combining tactile, visual, and auditory stimuli produces a stronger emotional response than the use of a single sensory input. The study also highlights how different materials affect emotional responses. The results indicate that multi-sensory integration enhances emotional engagement and depth. This research offers valuable insights into the emotional design of virtual environments, particularly in the development of immersive experiences through multi-sensory integration. Furthermore, it provides a foundation for future research on multi-sensory design and contributes to the continued evolution of emotionally immersive virtual worlds.
Zitong Zhou, Wei Gong
Open Access
Article
Conference Proceedings
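A group comparison like the one in the multi-sensory study above (multi-sensory vs. single-sensory self-reports) is often tested with Welch's t-test. The sketch below is an assumption about the analysis, not the study's actual method, and the rating data are invented for illustration.

```python
# Hedged sketch: comparing self-reported pleasure ratings between a
# multi-sensory and a single-sensory group with Welch's t statistic.
# Data are hypothetical placeholders, not the study's measurements.
import numpy as np

multi = np.array([6.2, 5.8, 6.5, 6.0, 5.9, 6.4, 6.1, 5.7])   # multi-sensory group
single = np.array([4.9, 5.1, 4.7, 5.3, 5.0, 4.8, 5.2, 4.6])  # single-sensory group

def welch_t(x, y):
    """Welch's t statistic for two independent samples with unequal variances."""
    vx, vy = x.var(ddof=1), y.var(ddof=1)
    return (x.mean() - y.mean()) / np.sqrt(vx / len(x) + vy / len(y))

t_stat = welch_t(multi, single)
print(f"t = {t_stat:.2f}")
```

A positive t statistic here corresponds to the abstract's finding of greater reported pleasure in the multi-sensory group; a library routine such as `scipy.stats.ttest_ind(..., equal_var=False)` would also supply the p-value.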
The design of counselling and wellness spaces on university campuses
In the contemporary era, university students are undergoing a period of transition characterised by a multitude of challenges. The departure from the home environment is accompanied by the commitment to a new academic organisation and by planning for the future. If this phase is not adequately processed, students risk compromising their emotional equilibrium, with repercussions for both their academic performance and their psychophysical well-being. The objective of this project is to foster the psycho-physical well-being of the university student community by preventing and counteracting risky behaviour and promoting healthy and sustainable lifestyles. This will be achieved by designing counselling and wellness spaces and resting rooms on university campuses. The present article aims to structure primary prevention interventions through the design of indoor and outdoor spaces, easily modifiable and adaptable to host the various activities envisaged, and through the setting up of resting rooms functional to psycho-physical recovery. It presents preliminary findings from the research undertaken within the MOEBIUS PRO-BEN project, which also explores the use of artificial intelligence (AI) to extend the benefits of outdoor green spaces to indoor counselling areas. The project leverages the capabilities of the metaverse to facilitate immersive, multi-sensory experiences that strengthen the connection between students and positive well-being, while creating innovative counselling rooms and wellness areas. This study is inherently interdisciplinary, merging fields such as psychological well-being and mental health technology with architecture and design for health.
By incorporating AI-driven solutions and the immersive potential of the metaverse, the project explores how technology can be integrated into the physical and virtual environments of counselling spaces to create dynamic, adaptable settings. These spaces are designed not only to support emotional equilibrium but also to engage students in meaningful, therapeutic experiences. The interdisciplinary approach bridges the gap between psychological health interventions, environmental design, and technological innovations, ultimately aiming to improve the overall psycho-physical well-being of students in a modern university setting.
Annalisa Di Roma, Giulia Annalinda Neglia
Open Access
Article
Conference Proceedings
Biomechanics Simulation and Damage Analysis of Head and Neck on Extraction Aircrew Escape System
Currently, aviation lifesaving methods are mostly divided into ejection escape and traction rescue. Compared with ejection, traction rescue is an active lifesaving method whose advantages include a simple structure, light weight, small space occupation, and good stability, making it more suitable for low-speed light aircraft. It uses a rocket to pull the crew out of the aircraft in an accident: the rocket first exits the cockpit and then pulls out a rope that extracts the crew, and after a set delay the parachute opens automatically so that the crew can land safely. To avoid serious injury during traction, it is necessary to study the stresses on the head and neck joints, which are the most vulnerable to injury in this process. In this study, CT scans of the head and neck were performed on two volunteers who fit the body standards of pilots. Mimics software was used to process the scanned image data and reconstruct a prototype of the human head and neck model, which was further processed with Geomagic software to obtain a complete, smooth head and neck geometric model. A complete head and neck finite element model was then obtained by meshing the geometric model and setting the element formulation and material parameters of each body structure. The finite element model was verified against cadaver axial impact tests at 0° and 15°; the results showed that the model established in this paper has high accuracy. After processing the test data, the initial loading conditions of the traction rescue simulation were obtained, and the stresses on the head and neck joints during helicopter rescue were simulated. The LS-PrePost module in ANSYS software was used to obtain the data needed for dynamic response and damage analysis. Based on the HIC and NIC criteria, head and neck injury during the lifesaving process was assessed.
The results showed that the neck may be injured during traction: analysis of the stress curves indicated that the vertebrae may fracture at the point of maximum stress and that the intervertebral discs may be injured by overextension. To reduce deformation of the cervical vertebrae during traction rescue, an effective restraint system should be designed to restrain the human body and reduce the relative movement between the cervical vertebrae, so as to better protect the cervical spine.
Zhongqi Liu, Qian Yang, Qianxiang Zhou
Open Access
Article
Conference Proceedings
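The Head Injury Criterion (HIC) used in the study above has a standard form: the maximum, over all time windows up to a fixed length, of (t2 − t1)·[(1/(t2 − t1))∫a dt]^2.5, with acceleration a in g. The sketch below computes it on a synthetic half-sine pulse; the pulse shape, peak, and duration are illustrative assumptions, not the study's simulation output.

```python
# Illustrative HIC15 computation on a synthetic head-acceleration pulse.
# The real study obtained acceleration histories from finite element simulation.
import numpy as np

def hic(t, a, max_window=0.015):
    """HIC over windows up to max_window seconds; t in s, a in g, t increasing."""
    # Cumulative trapezoidal integral of a(t), so any window integral is a difference
    cum = np.concatenate(([0.0], np.cumsum(np.diff(t) * 0.5 * (a[1:] + a[:-1]))))
    best = 0.0
    n = len(t)
    for i in range(n):
        for j in range(i + 1, n):
            dt = t[j] - t[i]
            if dt > max_window:
                break
            avg = (cum[j] - cum[i]) / dt  # mean acceleration over the window
            best = max(best, dt * avg ** 2.5)
    return best

# Synthetic half-sine pulse: 50 g peak over 10 ms, then zero
t = np.linspace(0.0, 0.02, 201)
a = np.where(t <= 0.01, 50.0 * np.sin(np.pi * t / 0.01), 0.0)
hic15 = hic(t, a)
print(f"HIC15 ≈ {hic15:.0f}")
```

The brute-force window search is O(n·w) but adequate for short crash pulses; production codes use the same formula with optimized search.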
A Multimodal Approach to Predicting Toe Temperature: Experimental, CFD, and LSTM-Based Methods
For technical footwear, such as mountaineering boots, there is currently no official standard for assessing and quantifying thermal insulation, although this is crucial for footwear designed for use at high altitudes. Thermal insulation is important not only for the comfort of the wearer but also for safety during prolonged exposure to extreme conditions. This study aims to develop a customizable simulation model (based on Computational Fluid Dynamics, CFD) for mountaineering boots that allows the evaluation of their thermal resistance (RcT in m²K/W) according to the UNI EN ISO 15831:2004 standard. The 3D geometry of the boot was reconstructed with the Rhinoceros 7 CAD software from a realistic reproduction of the boot prototype under consideration. The model simplifies the design by removing details such as laces, lace holes, and the outer gaiter, which do not contribute to thermal insulation. To increase realism, the model contains an air gap between the foot and the shoe in specific areas, reproducing the actual conditions as accurately as possible. The computational framework uses a User Defined Database (UDD), implemented in the CFD software Ansys Fluent, to manage the material composition of the boot. The database contains the thermal properties of the materials used, such as thermal conductivity (in W/mK) and thickness (in mm), evaluated according to the UNI EN ISO 9920:2007 and UNI EN ISO 5084:1998 standards. The CFD simulations were validated by comparing the results with experimental data obtained with a Newton Thermal Manikin and showed a deviation of only 12%. This discrepancy is attributed to minor differences between the CAD model and the physical prototype of the boot.
The validated CFD results provide the first relevant metric describing the insulation performance of the shoe. The simulation results are also integrated into a SARIMAX machine learning algorithm that predicts the temperature of the big toe over time starting from the mean skin temperature (whose strong correlation with big-toe temperature has been observed and confirmed through numerous experimental campaigns with human participants). The data used to train and test the SARIMAX algorithm came from in vivo tests performed in a climate chamber under four different environmental and activity conditions corresponding to different metabolic rates (MR). All tests were performed according to strict protocols to ensure reproducibility, including standardized clothing for all participants and a uniform test time to minimize disruption to circadian cycles in thermoregulation. Core temperature was monitored as an additional control measure. This model is then used in connection with the JOS-3 thermoregulation model, a system of 83 interconnected nodes that calculates human physiological responses and body temperatures using a numerical backward difference method. The thermoregulation model returns the mean skin temperature, which is given as an input to the previously trained SARIMAX algorithm, yielding a big-toe temperature that JOS-3 alone cannot provide. The predicted big-toe temperature serves as a secondary parameter for evaluating the insulation performance of the shoe. The maximum exposure duration is defined as the time (in minutes or hours) required for the temperature of the big toe to reach the safety limit of 15°C. This two-parameter approach improves the evaluation of technical footwear and takes into account both comfort and safety in extreme environments.
Eleonora Bianca, Antonio Buffo, Gianluca Boccardo, Marco Vanni, Ada Ferri
Open Access
Article
Conference Proceedings
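The core idea of the footwear study above, predicting big-toe temperature from its own history plus mean skin temperature as an exogenous input, can be illustrated with a minimal least-squares stand-in for the SARIMAX pipeline. Everything below (the model coefficients, the synthetic cooling scenario, the noise levels) is an assumption for illustration; the actual study fit SARIMAX on climate-chamber data.

```python
# Minimal stand-in for a SARIMAX-style model: toe[t] = c + phi*toe[t-1] + beta*skin[t].
# Synthetic data only; the real pipeline uses statsmodels' SARIMAX on measured series.
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Mean skin temperature (degC): slow cooling trend plus sensor noise
skin = 33.0 - 0.01 * np.arange(n) + rng.normal(0, 0.05, n)
# Big-toe temperature: autoregressive, tracking skin temperature
toe = np.empty(n)
toe[0] = 30.0
for t in range(1, n):
    toe[t] = 0.8 * toe[t - 1] + 0.18 * skin[t] + rng.normal(0, 0.05)

# Ordinary least squares fit of the lag-plus-exogenous model
X = np.column_stack([np.ones(n - 1), toe[:-1], skin[1:]])
coef, *_ = np.linalg.lstsq(X, toe[1:], rcond=None)
c, phi, beta = coef
print(f"c={c:.2f}, phi={phi:.2f}, beta={beta:.2f}")

# One-step-ahead forecast using the latest toe and skin readings
pred = c + phi * toe[-1] + beta * skin[-1]
print(f"next big-toe temperature ≈ {pred:.1f} degC")
```

In the study's setup, JOS-3 supplies the mean skin temperature series, so the fitted model can roll such one-step forecasts forward until the 15°C safety limit is reached, giving the maximum exposure duration.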
The Impact of Four Field of View Conditions on Team Marksmanship Performance Using a Team Shooting Scenario (TSS) Task
The purpose of this study was to examine the influence of four field of view (FOV) conditions on three-person teams’ marksmanship performance in terms of aiming time, accuracy, precision, and targets hit, using a Team Shooting Scenario (TSS) task developed by the U.S. Army. Forty-eight soldiers from the South Carolina National Guard participated. The FOV restrictors included two monocular (41° and 78°) and two binocular (129° and 150°) restrictors. Soldiers were tasked with discriminating between distractors and threats (“T”), using the TSS’s 28 light boxes arranged around a circle 15 m in diameter. Three of the four test variables differed significantly by FOV condition: accuracy, precision, and hits varied significantly, while aiming time did not. For accuracy, the smaller the FOV, the better the performance. A quadratic effect of FOV significantly predicted precision, with an inverted U-shape indicating greater precision in the smallest and largest FOV conditions. For both the linear and quadratic effects, the number of targets hit increased as FOV decreased. To provide guidance on the optimal future head-mounted devices needed for teams of soldiers during combat-related tasks, it is critical to have team-based data to assess soldier performance and the effect of FOV on performance.
Johnell Brooks, Patrick Rosopa, Casey Jenkins, Jose D Villa, Elizabeth Perry, Patrik Schuler, Joshua Roper, Paul Riebe, Samuel Blankenship, Megan Gilstrap, Amanda Meldzuk, Matthew Morris, Rebecca Pool, Jackson Grant, Daniel Greco, Braden Hudgins, Tyler Warren, Kevin J Pritchett, William Cox, Todd Shealy, Frank Rice, Peioneti Lam, Linda De Simone, Edward Hennessy, Blake Mitchell
Open Access
Article
Conference Proceedings
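The quadratic (inverted-U) trend analysis described in the FOV study above can be sketched with a simple polynomial fit. The dispersion values below are illustrative placeholders chosen to show an inverted-U shape (precision best, i.e. dispersion lowest, at the smallest and largest FOVs); they are not the study's data.

```python
# Hedged sketch of fitting a quadratic trend of precision across FOV conditions.
# Dispersion values are hypothetical, chosen to illustrate the inverted-U pattern.
import numpy as np

fov = np.array([41.0, 78.0, 129.0, 150.0])   # the four FOV conditions, degrees
dispersion = np.array([5.0, 7.8, 8.1, 5.4])  # hypothetical shot-group spread (cm)

# Fit dispersion = a2*FOV^2 + a1*FOV + a0; a2 < 0 means concave-down (inverted U)
a2, a1, a0 = np.polyfit(fov, dispersion, 2)
vertex = -a1 / (2 * a2)  # FOV at which dispersion peaks (precision is worst)
print(f"a2 = {a2:.5f} (negative => inverted-U), peak near {vertex:.0f} degrees")
```

In the actual analysis a regression with linear and quadratic FOV terms serves the same purpose; the sign of the quadratic coefficient and the location of the vertex summarize the inverted-U finding.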
Digital Human Modeling for Naval Aviation: Past, Present, and Future
Digital Human Modeling (DHM) has been used to inform U.S. Navy (USN) aircraft acquisition programs such as F-35, CH-53K, and others for decades. Historically, the primary focus of Naval Air Warfare Center Aircraft Division’s (NAWCAD) DHM efforts has been anthropometric accommodation (reach, vision, clearances) for aircrew and aircraft maintainers. A variety of DHM applications (e.g., SAFEWORK/Delmia/Envision, RAMSIS, Jack/Process Simulate Human, Santos) have been used by Department of Defense (DoD) subject matter experts and/or aircraft manufacturers. DoD contributions to DHM capabilities have included software development, validation, and the development of multivariate use cases for inclusion in requirements specifications and related modeling efforts. Use of DHM is essential for evaluating designs early in the acquisition lifecycle to reduce cost and development time; however, there are a number of limitations that can impact modeling fidelity that must be acknowledged. Examples include a lack of representative anthropometric data and manikins, users with no previous Human Factors and Ergonomics exposure or expertise, users with little understanding of aircrew and maintainer operations and environments, and modeling evaluations relying on guesswork regarding the positioning and posture of manikins without accurately representing factors like cushion compression, flesh compression, aircrew clothing and equipment, postural variation, and restraint systems. Although acute injury risk due to crash or ejection has been successfully modeled for many years, recent Fleet requirements indicate the need to predict the risk of chronic musculoskeletal pain and injury as well. A variety of efforts to develop biomechanical modeling applications and assessments of ergonomics tools in commercially available DHM applications are currently underway to support ongoing initiatives. An architectural framework supporting integration of current and future human modeling software is in development.
A collaborative U.S. Air Force and USN project to collect empirical data on fully equipped personnel in a variety of aircraft seating, with the aim of developing posture models, supporting DHM tool development, and improving modeling fidelity, is also underway. Proposed future efforts include the development of a variety of publicly available modeling tools, including parametric accommodation models, head and hand models, shape models, and poseable parametric finite element human models that use USN/USMC aircrew, aviator, and general population databases and 3D scan data to allow accurate representation of distinct populations. This presentation will document the DHM journey for Naval Aviation, highlighting past, present, and future efforts of NAWCAD and their collaborators.
Lori Basham, Barry Shender, Bethany Shivers, Andrew Koch, Kenneth Ritchey
Open Access
Article
Conference Proceedings