Human Factors in Virtual Environments and Game Design


Editors: Tareq Ahram, Christianne Falcão

Topics: Virtual Environments and Game Design

Publication Date: 2024

ISBN: 978-1-964867-13-7

DOI: 10.54941/ahfe1004980

Articles

What about the real use of virtual, extended and augmented reality? A survey of a French representative sample

The literature provides little if any data on the current use of, and exposure to, virtual reality (VR) and/or augmented reality (AR) technologies in the wild. Most publications concern prototypes and systems tested in laboratories, whereas actual uses in private and professional situations are poorly documented. Obtaining a clear picture of the current use of and exposure to VR/AR technologies is thus difficult, beyond high-profile applications (e.g. Pokémon GO) and devices (e.g. Oculus Rift). To address this gap, a survey was conducted in the context of a working group at the French Agency for Food, Environmental and Occupational Health & Safety (ANSES) among a sample of 776 French people aged 18 and over who had already experienced virtual or augmented reality (drawn from a representative national sample of 2970 French people aged 18 and over) and 122 children aged 6 to 17 who had already experienced virtual or augmented reality. The online questionnaire was designed to identify the people concerned and the situations of exposure to these technologies and the types of systems and devices used, as well as to examine the possible occurrence of cyber-sickness symptoms felt during or after exposure. Beyond the lack of previous studies, a specific difficulty in interpreting previous surveys lies in the emerging nature of the technologies under consideration: they are evolving technologies, still little known and/or poorly understood (especially by the general population) and responding to uses and needs that are still incompletely identified.
Thus, the study's instructions relied on a precise definition of VR/AR combined with typical illustrations of the different types of devices and uses presented in the questionnaire. The results show that 26% of French people aged 18 and over have already experienced virtual or augmented reality, whereas 33% of French people with children between the ages of 6 and 17 report that their children have already experienced VR/AR. Characteristics of the user population, situations and durations of use, as well as the devices most used, are clarified. In terms of health consequences, between one-third and one-half of users report having experienced symptoms during or following exposure to VR or AR, depending on how the measurement is conducted. The most common self-reported symptoms are dizziness and headache. Symptoms mainly appear during or immediately after exposure and disappear very quickly afterwards, with the exception of headaches and visual fatigue, which seem to persist longer. The types of use and technologies used seem to be determining factors in the occurrence of symptoms. The results, together with a review of the literature, put into perspective the recommendations published by ANSES.

Jean-Marie Burkhardt, Dina Attia, Francine Behar-Cohen, Ouriel Grynszpan, Evelyne Klinger, Regis Lobjois, Guillaume Moreau, Olivier Nannipieri, Alexis Paljic, Pascale Piolino, Hung Thai-Van, Serge Tisseron, Isabelle Viaud-Delmon
Open Access
Article
Conference Proceedings

Kitchen Horrors: Unraveling the Influence of Multimodal Stressors on User Experience in Virtual Reality through Electrodermal Activity

The last couple of years have seen a surge in the quality of on-site multiplayer virtual reality experiences. The shift to standalone VR headsets, the decrease in latency and the increase in reliability of rendering VR content have all benefited the rise of VR entertainment parks. The next frontier, however, seems to be the inclusion of sensor data (e.g., electrodermal activity signals) to aid the creation of adaptive VR experiences that are equally immersive for all users. If we can assess the specific impact certain stimuli have on the user during an immersive experience, creators will not only be able to create more engaging content but also design feedback loops to bring to users personalized VR experiences in real time. The current study takes a vital step in this direction by measuring electrodermal activity (EDA) to differentiate between stress responses to visual, audio, and audio-visual stimuli in a haunted VR kitchen game. The study leverages data from 13 participants who underwent a 40-minute-long virtual reality experience. The analysis suggests that relying solely on cleaned EDA data to differentiate between stress and no-stress conditions may not be effective, despite subjective reports of such distinctions. However, a more detailed analysis of EDA features (i.e., EDA peak amplitude and SCR peak amplitude) reveals the ability to not only differentiate between the impact of various stimuli modalities (audio, visual, and audio-visual) on stress responses but also discern between individuals’ responses. These findings underscore the imperative for adaptive VR experiences tailored to the unique responses of individual users, pointing towards a future where personalized, real-time immersive experiences can be finely crafted based on users' physiological reactions.
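The EDA feature extraction described above, measuring skin-conductance-response (SCR) peak amplitudes, can be sketched in outline. The following Python example uses a toy trace and an assumed noise threshold; it is an illustrative outline, not the authors' actual processing pipeline:

```python
def scr_peak_amplitudes(eda, min_rise=0.05):
    """Find SCR-like peaks in an EDA trace (values in microsiemens).

    A peak is a local maximum; its amplitude is measured from the
    preceding local minimum (the SCR onset). Rises smaller than
    `min_rise` (an assumed noise threshold) are discarded.
    """
    peaks = []
    onset = eda[0]
    for i in range(1, len(eda) - 1):
        if eda[i] < eda[i - 1] and eda[i] <= eda[i + 1]:
            onset = eda[i]                       # new local minimum: candidate onset
        elif eda[i] > eda[i - 1] and eda[i] >= eda[i + 1]:
            amplitude = eda[i] - onset           # rise from onset to local maximum
            if amplitude >= min_rise:
                peaks.append((i, round(amplitude, 3)))
    return peaks

# Toy trace: flat baseline, one clear SCR, return to baseline.
trace = [2.0, 2.0, 2.1, 2.5, 2.9, 2.6, 2.2, 2.0, 2.0]
print(scr_peak_amplitudes(trace))  # -> [(4, 0.9)]
```

Real EDA processing would first filter the signal and decompose it into tonic and phasic components; the sketch only illustrates the trough-to-peak amplitude measure the abstract refers to.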

Aleksandra Zheleva, Michał Kacper Gil, Edoardo Pelosin, Durk Talsma, Lieven De Marez, Klaas Bombeke
Open Access
Article
Conference Proceedings

Use of virtual reality for crime scene investigation training by security forces

Virtual reality is a legitimate tool to complement the range of conventional exercises and training across a variety of disciplines. Virtual reality has long had its use in the industrial field, from where it is gradually moving into education and training in healthcare, services, and other fields. Its use across security forces is relatively new, bringing the potential of training in a safe environment without the additional logistical burden of demanding exercise planning, for which it can serve as a supplement. Within a virtual reality simulation, given prepared scenarios and developed applications, trainees are free to move around and explore their environment from any angle, including dangerous and inaccessible locations. This allows users to experience circumstances in the virtual world in a way that would otherwise be difficult or impossible. Among the characteristics of virtual reality as a didactic method, it is necessary to highlight its multiple cognitive and pedagogical advantages: improved understanding of processes, better performance and learning experience for trainees, improved ability to analyse problems and explore new concepts, the multitude of scenarios that can be created, the high capacity for interaction, and the ease of learning that this technology offers. This paper presents the use of virtual reality for training security forces in crime scene investigation scenarios for different types of model situations. The purpose of the applications is to introduce a standardized procedure to new police officers while providing refresher training for existing officers, thereby setting standards across the discipline and activity.
This paper presents selected scenarios, including models, that have been developed for training security forces, as well as the technological background of fully autonomous training that overcomes the shortcomings of conventional training and thus becomes an important complement to it. The scenarios presented represent a home environment in which the police officer, as a trainee, is located. The trainee gradually walks through the dwelling unit, becomes familiar with the scene entered through the headset, and performs crime scene examination tasks in the role of a police officer. The trainee's task is to inspect the crime scene and document specific findings that will be filed as essential components for the follow-up investigation. The purpose of implementing virtual reality within security forces is, among other things, to minimize the potential physical strain compared to conventional training in the same scenario. For a scenario that is demanding both in terms of standard procedure and emotional load, the preparatory phase is important, not only for scenario development and validation, but especially for measuring the participants' reactions to the given load. For this reason, the scenario preparation mode and partial outcomes of measurements of police officers, specifically measurements of cognitive load (via heart rate, respiration, and skin conductance) in relation to the virtual reality simulation, will also be presented.

Marek Bures, Alena Lochmannová
Open Access
Article
Conference Proceedings

Multisensory Virtual Reality Reminiscence Therapy: A Preliminary Study on the Initial Impact on Memory and Spatial Judgment Abilities in Older Adults

With advancing medical technology and the rise of an aging society, the global population of dementia patients is increasing. Dementia is an irreversible degenerative disease that leads to a gradual decline in cognitive abilities, including memory, spatial judgment, time perception, and language skills. Despite the availability of medication to alleviate symptoms, a complete cure is unattainable, and treatment can only delay disease progression with limited effectiveness. Recent literature explores non-pharmacological treatments for dementia, including reminiscence therapy, and investigates the use of Virtual Reality (VR) as a therapeutic approach. Unlike traditional methods, VR technology can create realistic virtual environments, enhancing sensory and cognitive experiences. Related studies have explored the combination of visual and auditory experiences in the VR environment, incorporating sensory stimuli such as touch and smell to enhance the sensory and cognitive abilities of older adults. Previous research indicates that combining multiple sensory stimuli can enhance memory and spatial judgment abilities. Therefore, the present study focuses on developing a VR game that integrates multiple sensory stimuli to investigate its impact on the memory, spatial judgment, and time perception of older adults. To achieve this goal, the research team invited experts to develop a VR game with multiple sensory stimuli, combining visual, auditory, tactile, and olfactory elements, with a theme centered around agricultural life. We conducted in-depth discussions on multi-sensory experiences, and preliminary feedback was obtained through interviews with elderly participants and observations by experts. Experts found that the nostalgic therapeutic farming game that combined VR technology and multiple sensory elements resulted in better performance of older adults in task judgment and memory retrieval. 
Given the limited number of participants and this study's short training period, future comprehensive experiments and long-term observations are necessary to obtain more substantial evidence.

I-Jui Lee, Pan Xin-Ting
Open Access
Article
Conference Proceedings

Mobile Solution for Ergonomic Training in Industry: A HoloLens 2 Mixed Reality Approach

In industry, many employees suffer from limb and spine ailments. These physical disorders lead to long-term pain syndromes and musculoskeletal issues. As a proactive measure, Augmented Reality can aid in conducting interactive ergonomic training sessions for employees and provide real-time support for monitoring individual workers on the shop floor. Existing setups for ergonomic evaluations limit the user’s mobility because these are typically stationary and require external motion-capturing systems. Here, we address Musculoskeletal Disorders using Head-Mounted Displays as a standalone, hands-free, and mobile tool for real-time ergonomic evaluation. We selected the Microsoft HoloLens 2 as a Head-Mounted Display and Rapid Upper Limb Assessment as the ergonomic analysis method. We implemented parts of this system using the Unity development environment. Our Proof of Concept shows that a hands-free Mixed Reality application with the HoloLens 2 can support ergonomic evaluations without requiring stationary setups or additional camera or sensor systems. In the current setup, the live feed from the camera of the HoloLens 2 captures the observed person. A local computer analyzes the posture via a wireless network connection, and the results are sent back to the Unity application, where they are visualized with a 3D model. Our work offers a comprehensive perspective on the components used and provides an implementation for the suggested system. It also serves as a foundation for future work, including testing the system’s efficiency in practical use and expanding the user interface, specifically for training in a working environment.
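As an illustration of the kind of scoring Rapid Upper Limb Assessment (RULA) performs on posture data, the sketch below implements the standard step-1 upper-arm score from a flexion/extension angle. It is a simplified fragment of the published RULA worksheet, not the paper's HoloLens 2 implementation:

```python
def rula_upper_arm_score(flexion_deg, shoulder_raised=False, arm_abducted=False):
    """RULA step-1 upper-arm score.

    flexion_deg: upper-arm flexion angle in degrees (negative = extension).
    Standard worksheet adjustments (+1 each) for a raised shoulder or
    an abducted arm are included.
    """
    a = flexion_deg
    if -20 <= a <= 20:
        score = 1          # near-neutral posture
    elif a <= 45:
        score = 2          # moderate flexion, or extension beyond 20 degrees
    elif a <= 90:
        score = 3          # 45-90 degrees of flexion
    else:
        score = 4          # flexion beyond 90 degrees
    if shoulder_raised:
        score += 1
    if arm_abducted:
        score += 1
    return score

print(rula_upper_arm_score(60))                      # -> 3
print(rula_upper_arm_score(10, shoulder_raised=True))  # -> 2
```

In the system described above, angles like `flexion_deg` would come from the pose estimated on the local computer, with the final RULA grand score combining arm, wrist, neck, trunk, and leg sub-scores.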

Katharina Kuznecova, Susanne Werking, Gerrit Meixner
Open Access
Article
Conference Proceedings

Using Multi-Modal Physiological Markers and Latent States to Understand Team Performance and Collaboration

Squads of the future battlefield will include a mixture of technically savvy humans and artificially intelligent teammates. Contextually aware AI teammates will be essential for war fighter overmatch. To understand how multimodal physiology can impact mixed team performance, we looked at how physiological team properties emerge in a naturalistic and collaborative environment. Here, we examined internal states and team outcomes based on these states within the context of a complex bomb defusal task in a simulated and naturalistic environment. This overarching research integrates eye gaze behavior, neural activity, speech, heart rate variability, and facial expressions to unravel the intricate relationship between individual and team performance. Here we focus on the facial expression data. Using a novel testbed, we aimed to uncover how these physiological processes evolve and interact with human interactions to influence team dynamics and task performance. Compared to traditional highly controlled lab tasks, this novel testbed enables peripheral measurement of multimodal physiology during naturalistic team formation and collaboration. We report differences between an individual task and teaming task in global facial expressivity results and correlations between facial expression synchrony scores and team task performance.

Ashley Rabin, Catherine Neubauer, Stephen Gordon, Kevin King
Open Access
Article
Conference Proceedings

Studying spatial visualization ability under micro-gravity conditions simulated in Virtual Reality

Spatial cognitive processing is a fundamental aspect of human cognition, influencing our comprehension of spatial environments. Researchers have defined spatial ability in various ways, encompassing skills such as generating, visualizing, memorizing, and transforming visual information. Despite the diversity in definitions, there is a shared understanding that spatial ability is an inherent skill aiding individuals in tasks requiring visual and spatial acumen. One of the dimensions of spatial ability is spatial visualization that governs our day-to-day activities of staying and working in and navigating through space. One of the factors that could impact our spatial visualization ability is the alignment of visual and body axis that is maintained on earth due to gravitational cues. However, such cues are not available in micro-gravity environments that exist aboard the International Space Station (ISS). It is imperative to understand if human spatial visualization is impacted by such conditions to determine safety and productivity risks. In this paper, we present results of our research examining if the non-alignment of body and visual frame of reference (FOR) affects spatial visualization ability. We administered the Purdue Spatial Visualization Test: Visualization of Rotation (PSVT:R) to measure the spatial visualization ability of 230 participants. The PSVT:R assesses an individual's capacity to mentally rotate 3D objects. Participants matched the rotated view of a test object to a provided example, evaluating spatial visualization skills and cognitive abilities. The study included three test conditions, one control and two experimental conditions simulated in Virtual Reality (VR) using Unity 3D game engine. The control condition (C1) had the body axis and the visual FOR aligned just like a space on earth. 
The experimental conditions E1 and E2 depicted a micro-gravity environment to simulate statically and dynamically non-aligned visual and body axes, respectively. Participants sat in a swivel chair and wore HTC Vive Pro Eye headsets to experience the three conditions. Results consistently indicated significant differences in the response time (RT) and accuracy of participants' responses across the three study conditions. Moreover, a negative correlation was found between response time and accuracy, implying a speed-accuracy trade-off, a common phenomenon where individuals may prioritize speed over precision or vice versa. Our findings support the existence of a relationship between response time and accuracy, characterized by a significant difference and a weak correlation. The Bland-Altman analysis offered additional insights, emphasizing the variability in this relationship. In the C1 condition, the correlation coefficient was -0.1902, suggesting a weak tendency for accuracy to decrease slightly as reaction time increases. Similarly, the E1 condition exhibited a negative correlation of -0.2333, indicating a weak negative trend of decreased accuracy with longer reaction times. In the E2 condition, the correlation coefficient was -0.1049, suggesting a mild decrease in accuracy as reaction time increased. Overall, the consistent negative correlations across all conditions imply a general pattern: participants with longer reaction times may exhibit slightly lower accuracy, and vice versa. Results also showed that the non-alignment of visual and body axes impacts spatial visualization ability.
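The correlation and Bland-Altman computations reported in this abstract can be reproduced in outline. The following sketch uses hypothetical RT and accuracy values, not the study's data:

```python
from math import sqrt
from statistics import mean, stdev

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x) *
                      sum((b - my) ** 2 for b in y))

def bland_altman_limits(x, y):
    """Mean paired difference and its 95% limits of agreement (d ± 1.96 s)."""
    diffs = [a - b for a, b in zip(x, y)]
    d, s = mean(diffs), stdev(diffs)
    return d - 1.96 * s, d, d + 1.96 * s

# Hypothetical per-trial data: slower responses tend to be less accurate.
rt  = [1.2, 1.5, 1.9, 2.3, 2.8]    # response times (s)
acc = [0.9, 0.85, 0.8, 0.82, 0.7]  # proportion correct
print(round(pearson_r(rt, acc), 3))  # negative, consistent with a trade-off
```

The study's reported coefficients (around -0.1 to -0.23) would arise from the same computation over the full 230-participant dataset; a Bland-Altman plot additionally visualizes each pair's difference against its mean.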

Faezeh Salehi, Manish Dixit, Vahideh Karimimansoob
Open Access
Article
Conference Proceedings

How does age relate to spatial orientation ability under simulated microgravity environments?

Spatial cognitive processing is a crucial element of human cognition, intricately influencing our understanding of spatial environments. Despite varying definitions, researchers concur that spatial ability encompasses skills like generating, visualizing, memorizing, and transforming visual information, a fundamental aptitude for tasks requiring visual and spatial acumen. Spatial orientation is one such ability that utilizes egocentric spatial encoding and contributes to human spatial ability. This study focuses on the evaluation of spatial orientation ability through the Perspective-Taking Ability (PTA) test. This test gauges participants' capacity to envision a view from an alternative viewpoint. Stimuli include 5-6 routine objects placed on the perimeter of a circle, and participants are asked to mentally position themselves at one object facing another object and point to a third object. Scores depend on the degree of deviation from the correct direction in sexagesimal degrees. This nuanced evaluation explores spatial orientation and comprehension of an environment from diverse viewpoints. The PTA test was digitalized and integrated into Virtual Reality (VR) environments created in Unity 3D to depict three scenarios. The first scenario, the control condition, included an earth-like setting in which the gravitational vertical, the idiotropic axis of a participant, and the visual axis are aligned. The second scenario (experimental condition 1) simulated the spatial conditions of microgravity in space, which lacks a gravitational vertical and has statically misaligned visual and idiotropic axes. In the third scenario, the misalignment is dynamic in that it is constantly changing around the X, Y, and Z axes over the test session. The three study conditions were administered to 230 participants through HTC Vive Pro Eye head-mounted displays (HMDs).
Participants' responses were collected using a programming script and analyzed to understand how participants' performance on the PTA test tasks varied between the three conditions and how their age moderated this influence. Participants were categorized into age groups: 18-22, 23-27, 28-32, 33-37, and 38+. The Mann-Whitney U test indicated a significant difference in the response accuracy of participants aged 23-27, 33-37, and 38 and above, indicating distinctive performance between the three study conditions. This means that static and dynamic misalignment influenced spatial orientation performance. Conversely, participants aged 28-32 showed no significant difference between the three conditions, indicating no impact of the misaligned idiotropic and visual axes. Based on the Kruskal-Wallis test results, the age groups 18-22 and 38+ revealed significant accuracy differences, whereas the age group 23-27 had highly significant differences. Conversely, the age group 28-32 showed no significant accuracy difference, suggesting comparable performance, whereas the age group 33-37 showed a significant accuracy difference. Results indicate a statistically significant accuracy difference among age groups, suggesting that age group moderates the influence of misaligned axes on PTA scores. Pairwise age group comparisons using Dunn's post hoc test showed significant differences in accuracy for the 23-27 age group compared to the 18-22, 28-32, and 33-37 age groups, revealing age-related variations in spatial accuracy. In conclusion, our research unveiled a profound connection between age and accuracy, demonstrating pronounced differences among age groups.
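The rank-based tests used here (Mann-Whitney U, and Kruskal-Wallis with Dunn's post hoc comparisons) all start from pooled ranks. A minimal Python sketch of the U statistic follows, with hypothetical accuracy scores; converting U to a p-value via the normal approximation or exact tables is omitted:

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for two independent samples.

    Ties receive midranks. Returns min(U1, U2), the conventional
    test statistic; smaller values indicate stronger separation.
    """
    pooled = sorted(x + y)
    # Assign each distinct value the average (mid) rank of its tied run.
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2   # average of 1-based ranks i+1..j
        i = j
    r1 = sum(ranks[v] for v in x)            # rank sum of the first sample
    u1 = r1 - len(x) * (len(x) + 1) / 2
    return min(u1, len(x) * len(y) - u1)

# Hypothetical accuracy scores under two conditions: complete separation.
u = mann_whitney_u([12, 15, 19, 23], [28, 31, 35])
print(u)  # -> 0.0
```

Kruskal-Wallis generalizes the same midrank machinery to three or more groups, and Dunn's test then compares pairs of groups on those pooled ranks.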

Faezeh Salehi, Manish Dixit, Vahideh Karimimansoob
Open Access
Article
Conference Proceedings

Random Dot Kinematogram used in Virtual Reality: A preliminary experiment

For professional use of virtual reality (VR), it is important to understand how decisions made in VR differ from decisions made in reality. For example, if decision makers of automaker corporations experience virtual vehicle prototypes in VR, would they make the same decisions on product features in VR as they would in reality? Or, if students use VR to learn and take exams, would they decide on the same actions and exam answers as in reality? Two-choice tasks in a physical environment using the random dot kinematogram have already been realised. In our study, we therefore aimed to replicate this experiment in virtual reality. Challenges arose in the selection of the VR devices; hence, we report here on the pre-experiment to identify a suitable VR setup. The biggest problem with this experiment was that lines were seen instead of dots. For this reason, different headsets with different refresh rates were tested to avoid this. The test subjects, all students, tested the settings in randomized order and then indicated what they had seen among randomized answer options. The data was collected in the form of an online questionnaire. A total of 17 people took part in the test. No setting produced a clearly satisfactory result; however, most of the "very good" and "good" ratings were achieved with the Valve Index at 80 Hz.
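A random dot kinematogram drives a two-choice motion-direction decision by letting only a fraction of the dots move coherently while the rest move at random. A minimal per-frame update might look like the following sketch; the parameters are illustrative, not the settings used in this experiment:

```python
import math
import random

def step_dots(dots, coherence=0.5, direction=0.0, speed=2.0):
    """Advance random-dot-kinematogram dots one frame.

    dots: list of (x, y) positions in pixels.
    coherence: fraction of dots moving in the signal `direction` (radians);
    the remaining dots move in random directions at the same `speed`.
    """
    out = []
    for i, (x, y) in enumerate(dots):
        if i < coherence * len(dots):
            theta = direction                      # signal dot
        else:
            theta = random.uniform(0, 2 * math.pi)  # noise dot
        out.append((x + speed * math.cos(theta), y + speed * math.sin(theta)))
    return out

# With full coherence and direction 0 rad, every dot shifts +speed in x.
print(step_dots([(0.0, 0.0), (1.0, 1.0)], coherence=1.0, direction=0.0))
```

The rendering problem reported above (dots smearing into lines) arises between such update steps, when the display's refresh rate and persistence cannot keep up with the per-frame dot displacement.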

Caroline Schon, Kevin König, Johannes Tümler
Open Access
Article
Conference Proceedings

Articulated Spatial Audio for Minimally Invasive Surgery Training

Contemporary spatial sound recording and reconstruction systems enable audiences to experience realistic 3D soundscapes from multiple speakers or binaural headsets. Spatial audio is especially useful in minimally invasive surgery training, where it can represent the sound sources, dynamic patterns, and verbal communication of the operating room. In our study, we recorded and reconstructed 3D sound from a laparoscopic surgery room across multiple cases. The spatial sound data contains ambient sounds, equipment sounds, and music in the OR environment. We then articulated the ECG sound based on the simulated patient's condition: for example, when the patient feels pain, the heart rate increases. Our experiments show that the articulated spatial audio helps to narrow the gap between abstract training boxes and actual OR environments. It is one step forward in advancing Extended Reality (XR) with physical and physiological variables.

Yang Cai, Joshua Paik
Open Access
Article
Conference Proceedings

A Survey on the Relationship between Stress, Cognitive Load, and Movement on Cybersickness

This survey focuses on a crucial virtual reality (VR) issue that has been reported to affect roughly 40% of VR users – cybersickness. Cybersickness is similar to motion sickness but occurs with electronic screens or VR displays instead of actual movement. Cybersickness can refer to a cluster of symptoms, including nausea, eye strain, vertigo, and sweating, to name a few. Within training exercises using VR for law enforcement, we have anecdotally seen that more than 40% of our trainees report some symptom of cybersickness. Our training scenarios often include stressful and mentally charged situations, as well as include intense head and body movements for operational and tactical purposes. As such, this survey explores the scientific literature to see if there have been any reported links between stress, cognitive load, and head and body movement on reported cybersickness levels. A total of fourteen papers were surveyed. Findings were often mixed and inconclusive but pointed towards a positive relationship between cybersickness and both cognitive load and stress. On the other hand, studies looking at head movements showed a negative relationship with levels of cybersickness. It is hoped that these insights can help VR researchers develop new training protocols that can be more comfortable and accessible for all users.

Marc Antoine Moinnereau, Danielle Benesch, Gregory P Krätzig, Simon Paré, Tiago Henrique Falk
Open Access
Article
Conference Proceedings

Physical Human Factor Parameters through VR Leisure Contents: Focused on Motion Feature Extraction for Adults from VR Bowling

This study explores the impact of Virtual Reality (VR) on leisure sports, focusing on the analysis of motion data in VR bowling among adults aged 19-38. Acknowledging the gap in research regarding physical movement characteristics in VR sports, this work aims to contribute to the ergonomic development of VR leisure content for diverse generations. Using the Vive Pro Eye HMD, Vive Tracker 3.0, and the C2 Plus omnidirectional VR treadmill, we captured detailed three-dimensional position and velocity data. The Unity software facilitated motion data collection, while Python was employed for the analysis, particularly concentrating on the velocity features of the dominant hand controller. The analysis revealed that the Z component of velocity reached its highest mean linear speed at 6.474 during the release phase, aligning with the dynamics of traditional bowling yet underscoring VR's distinctive experience. Conclusively, the findings highlight VR's potential to enrich leisure sports, urging broader research across various VR sports contents and demographics. This pursuit is vital for understanding biomechanical and physical human factors in VR, paving the way for technologies that mitigate generational physical differences and foster the development of accessible, enjoyable VR leisure content for all ages.
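The velocity features described above reduce to finite differences over tracked 3D positions. The sketch below uses hypothetical controller samples and an assumed sampling rate; the abstract does not disclose the study's actual extraction code:

```python
def linear_speeds(positions, dt):
    """Frame-to-frame linear speed from 3D tracker positions.

    positions: list of (x, y, z) samples in metres.
    dt: sampling interval in seconds.
    Returns one speed value (m/s) per consecutive pair of samples.
    """
    speeds = []
    for (x0, y0, z0), (x1, y1, z1) in zip(positions, positions[1:]):
        dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2 + (z1 - z0) ** 2) ** 0.5
        speeds.append(dist / dt)
    return speeds

# Hypothetical dominant-hand controller samples at 90 Hz.
track = [(0.0, 1.0, 0.0), (0.0, 1.0, 0.03), (0.0, 1.0, 0.09)]
print([round(s, 2) for s in linear_speeds(track, dt=1 / 90)])  # -> [2.7, 5.4]
```

Per-axis components (such as the Z velocity highlighted in the release-phase result) come from the same differences taken axis by axis rather than as a Euclidean norm.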

Yeong-hun Kwon, Yun-hwan Lee, Minsung Yoon, Soyeong Park, Son Daehoon, Kim Yebon, Hyojun Lee, Yohan Ko, Jongsung Kim, Jongbae Kim
Open Access
Article
Conference Proceedings

Beyond Gaming: Neuroscientific Insights into VR Through Gameflow Analysis

This paper explores the concept of 'gameflow' within the realm of Virtual Reality (VR), extending its application beyond traditional gaming boundaries to encompass various industries. The primary objective is to establish a multifaceted scoring and evaluation system that is adaptable across different sectors, leveraging the universal nature of game-like approaches. Central to our study is the use of VR gaming as a main case study. By adopting neuroscientific methods, specifically functional Near-Infrared Spectroscopy (fNIRS), we aim to validate and refine game evaluation standards. Our research signifies a step in the interdisciplinary application of gameflow analysis, which not only evaluates the gaming experience from a neuroscientific perspective but also underscores the potential of gameflow principles in enhancing user experience and effectiveness in diverse fields.

Jun Chen, Yazhou Chen
Open Access
Article
Conference Proceedings

Designing mobile game input unreachability: risks when placing items out of the functional area

When planning controls for mobile games and gamified apps, designers consider how gamers access features and where to display them. With users potentially operating their devices single-handed, content producers have been using design approaches based on the screen area a thumb can reach when the hand supports the device, with different degrees of difficulty. Depending on the screen size, some parts are out of the thumb's reach, requiring operation with the assistance of the other hand or a change of grip when possible. Although access to relevant game resources within this area is commonly facilitated, some items are intentionally placed in unreachable zones to make gamers take longer to access them, thus increasing the displayed content's exposure. These hard-to-reach options are inputs to mute, forward, or close in-game advertising and in-app purchase offers. They disregard the potentially uncommon thumb actions one may adopt to tap them. This paper studies single-handed thumb reachability in mobile games and the ads they display to identify how their screen design can provide different levels of performance and body safety when accessing specific content, and to understand whether items out of the thumb's reach can lead to potential risks for the gamer. While game design should contribute to interaction and comfort, promotional features seeking monetization use strategies to avoid or delay interaction, at the risk of interfering with performance or causing thumb injuries.

Wiliam Andrade
Open Access
Article
Conference Proceedings

Impressions of Musical Pieces in the Pokémon Series

In recent years, Japanese anime and video games have been highly regarded as part of Cool Japan content. Among them, the video game series Pokémon stands out as globally famous content. The Pokémon series is a role-playing game developed by Game Freak Inc. and sold by The Pokémon Company. It has been popular since the release of Pokémon Red/Green in 1997, with many titles being released up to the latest. Throughout its long history, one of the important factors that has kept Pokémon popular is its music. In a previous study, the character design of the monsters was investigated, but there was no investigation of the music used in the Pokémon series. Pokémon includes various situations and scenes, such as battles with trainers and monsters, traveling on bicycles, walking around towns, and visiting the hideouts of evil organizations. In the Pokémon series, a corresponding musical piece is prepared for each situation and scene. The pieces are composed based on the theory and experience of musicians, and it has not been verified whether they match the situations or scenes. In the present study, a perceptual experiment was conducted to research the impressions of musical pieces in the Pokémon series using the semantic differential method, and to investigate whether the musical pieces match the situations and scenes. For the experiment, 151 musical pieces used in the Pokémon series were prepared as sound stimuli. The stimuli were presented through STAX SR-407 headphones at levels of LAeq = 55.9-70.4 dB. Eighteen students of Kanazawa Institute of Technology participated as listeners. The participants listened to a stimulus and were requested to rate their impressions of the piece using 25 bipolar seven-step scales. The rated scores were averaged for each scale and used for factor analysis. The results of the factor analysis showed that the three-factor solution accounted for 86.9% of the data variance.
These factors were labeled pleasantness, powerfulness and speed, respectively. The tonality of a musical piece determined its pleasantness, i.e. a piece in a major key was perceived as pleasant and a piece in a minor key as unpleasant. A piece with a wide range of loudness change sounded powerful, and a piece with a narrow range of loudness change sounded powerless. Contemporary game music is produced with various real sound sources, which allows a wide range of loudness change to be realized. The rhythm and tempo of a musical piece determined its speed: a piece with a fast rhythm and tempo sounded rapid, and vice versa. Moreover, the results showed that the musical pieces in the Pokémon series were suitably composed and matched to their situations and scenes. For example, the pieces used in battle situations sounded powerful and rapid, while the pieces used in town scenes sounded powerless, pleasant and slow. The results of the present study suggest how to compose music suited to a given situation or scene by controlling the tonality, dynamic range of loudness, rhythm and tempo.
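The average-then-factor-analyze pipeline described above can be sketched as follows. This is a minimal illustration with simulated ratings, not the study's data, and the varimax-rotated three-factor solution is a common convention for semantic-differential data rather than the authors' confirmed procedure:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Hypothetical data: 18 listeners x 151 pieces x 25 bipolar seven-step scales.
# Ratings are averaged over listeners, giving one 25-dimensional
# impression profile per musical piece.
n_listeners, n_pieces, n_scales = 18, 151, 25
ratings = rng.integers(1, 8, size=(n_listeners, n_pieces, n_scales))
profiles = ratings.mean(axis=0)                  # shape (151, 25)

# Three-factor solution with varimax rotation.
fa = FactorAnalysis(n_components=3, rotation="varimax")
scores = fa.fit_transform(profiles)              # per-piece factor scores
loadings = fa.components_.T                      # (25 scales, 3 factors)

# Scales loading heavily on a factor suggest its label
# (pleasantness, powerfulness and speed in the study).
top_scales = np.argsort(-np.abs(loadings[:, 0]))[:5]
print("scales most associated with factor 1:", top_scales)
```

With real data, the per-piece factor scores could then be grouped by situation (battle, town, and so on) to check whether pieces cluster where the abstract says they do.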

Masaya Fukasawa, Masashi Yamada
Open Access
Article
Conference Proceedings

Effects of Color of Clothes on the Impressions of a Male Character in a Video Game

Clothes are one of the important factors that express personality. This is also true for characters appearing in video games, and colors are thought to play an important role in determining the impressions of clothes and of the person wearing them. In the present study, the effects of clothing color on the impressions of a male character appearing in a video game were investigated. In the video game “CODE VEIN,” a player selects a human avatar and the design of the clothes the character wears. The avatars and the clothes are produced by 3D modeling. In the present study, a male character and three shapes of clothes were selected. Each set of clothes was painted in a single color, selected from the ten fundamental colors of the Munsell hue circle (red, yellow-red, yellow, green-yellow, green, blue-green, blue, purple-blue, purple and red-purple) and three achromatic colors (white, gray and black). In total, 39 stimuli showing the male character wearing different clothes were constructed. Fifteen students of Kanazawa Institute of Technology participated in the perceptual experiment. The participants sat in a darkroom and watched an EIZO FlexScan SX2462W display. Each stimulus was presented on the display, and the participants were requested to rate their impressions of the stimulus using 25 semantic differential scales. The rated scores were averaged over the participants, and the averaged scores were used for factor analysis. The results of the analysis showed that the impression space was spanned by activity, potency and evaluation factors. The stimuli with warm colors (red, yellow-red, yellow and red-purple) were perceived as active. In contrast, cool colors and achromatic colors (blue-green, blue, purple-blue, grey and black) were perceived as passive. The results of the multiple-comparison tests showed that activity was greatly affected by hue. Red and black clothes were perceived as powerful.
In contrast, green-yellow, green and red-purple clothes were perceived as powerless. Among the achromatic colors, black was powerful, white was powerless and grey was intermediate, implying that the brightness of the color affects potency. On the evaluation factor, blue-green and white clothes were perceived as pleasant, whereas purple-blue and purple clothes were perceived as unpleasant. These results may reflect cultural background: white is used for doctors' coats, which should be clean, whereas purple-blue and purple are often used as symbol colors for poisons, which might evoke an unpleasant impression of these colors. A multiple regression analysis was then performed with the degree of preference as the criterion variable and the three factor scores as the explanatory variables. The results showed that preference was determined by the evaluation and potency factors: the character wearing pleasant and powerful clothes was preferred, and vice versa. Achromatic colors also tended to be preferred.
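The final step above, regressing preference on the three factor scores, can be sketched as follows. The data are simulated to mimic the reported pattern (preference driven by evaluation and potency); none of the numbers come from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical inputs: 39 stimuli (13 colors x 3 clothing shapes), each
# with three factor scores (activity, potency, evaluation) and an
# averaged preference rating. Weights 0.4 and 0.6 are invented to
# reproduce the qualitative finding, not taken from the paper.
n_stimuli = 39
factors = rng.normal(size=(n_stimuli, 3))
preference = (0.4 * factors[:, 1]            # potency contributes
              + 0.6 * factors[:, 2]          # evaluation contributes most
              + rng.normal(scale=0.2, size=n_stimuli))

# Multiple regression: preference ~ activity + potency + evaluation.
X = np.column_stack([np.ones(n_stimuli), factors])
coefs, *_ = np.linalg.lstsq(X, preference, rcond=None)
for name, b in zip(["intercept", "activity", "potency", "evaluation"], coefs):
    print(f"{name}: {b:+.2f}")
```

On real ratings, the relative sizes of the standardized coefficients would indicate which impression factors drive preference, as the abstract reports for evaluation and potency.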

Akane Fuchigami, Masashi Yamada
Open Access
Article
Conference Proceedings

Impression Change of a Female Character in Illustration by Shadow

Recently, anime-style illustrations have become very popular in Japan, as has anime itself. In anime and anime-style illustrations, shadows are used to express the emotions of characters, and shadows that are not physically correct are frequently used. The present study examines how the impressions of a female character change with shadow expressions, including physically incorrect shadows. In the video game CUSTOM ORDER MAID 3D2, a player creates characters using 3D modeling. A female character with the simplest clothes was selected for the present study, and shadows were produced with light sources in 26 different directions. These 3D pictures were then traced as 2D illustrations. The physically incorrect shadows included cases with light on the face, hair or eyes from a virtual source. A stimulus with no shadow was also prepared. In total, 30 stimuli were prepared for a perceptual experiment. Twelve students of Kanazawa Institute of Technology participated in the experiment. Each participant sat on a chair in a darkroom and watched each stimulus presented on an EIZO FlexScan SX2462W display. The participant was then requested to rate the impressions of the stimulus using 18 semantic differential scales. The rated scores were averaged and used for factor analysis. The results of the analysis showed that the impression space was spanned by friendliness, powerfulness and naturalness factors, with a cumulative contribution ratio of 82.1%. The character was perceived as friendly when the shadow was produced with a light source set in front of the character. Conversely, the character was perceived as not friendly when the light source was set behind the character. Friendliness may correlate strongly with the shadow on the face. The character became powerless when the light source was in a central position and powerful when the source was in lateral positions. It is thought that powerfulness correlates with the solidity of the character.
The character was perceived as natural when the source was in the foreground or background of the character. Naturalness may correlate with the shadow covering a part of the eyes. Among the physically incorrect shadow expressions, the stimulus in which the eyes shone within the shadow was perceived as quite powerful and unnatural. The results of the present study will contribute to the production of impressive illustrations and anime.

Rinka Yamaguchi, Masashi Yamada
Open Access
Article
Conference Proceedings

Design Exploration of an Augmented Reality Exergame for Walking Training: Target-by-Target vs. Multi-Target Guidance

Walking training is essential for the rehabilitation of lower limbs and overall health maintenance. With progress in Augmented Reality (AR) and Virtual Reality (VR) technologies, serious exergames incorporating these innovations are gaining popularity in walking training. These games create engaging and interactive environments or tasks to enhance user motivation, training volume, and quality. This study investigates an AR exergame aimed at increasing training volume and improving user experience during walking training. The game underwent extensive multidimensional validations and comparisons. The game includes two modes: “Target-by-Target Guidance”, where users collect sequentially appearing gems at random locations, and “Multi-Target Guidance”, where multiple gems appear at once, allowing users to collect them in any order. The study involved twelve participants who, after becoming familiar with the game, completed 5-minute walking sessions under three conditions: without the game, with Target-by-Target Guidance, and with Multi-Target Guidance. The order of conditions was randomized, and participants filled out the Game Experience Questionnaire (GEQ) after each session. Results indicated that Multi-Target Guidance significantly outperformed Target-by-Target Guidance in terms of total walking distance and positive affect in the GEQ. However, no significant differences were observed between the two modes in step length, step count, and the other GEQ dimensions. Notably, both modes surpassed the “no game” condition in total walking distance and all GEQ dimensions, demonstrating the exergame’s effectiveness in enhancing training volume and user experience. The study’s insights into the superior benefits of Multi-Target Guidance provide valuable guidance for the design of similar serious exergames that focus on walking training through target-oriented tasks.
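The within-subject comparison of the two guidance modes can be sketched as a paired test on total walking distance. The numbers below are simulated for illustration only, and the paired t-test is one plausible choice for this design, not necessarily the test the authors used:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical per-participant total walking distance (meters) in the
# 5-minute sessions for 12 participants; not the study's data. Each
# participant completes all three conditions (within-subject design).
n = 12
no_game = rng.normal(280, 20, n)
target_by_target = no_game + rng.normal(25, 10, n)
multi_target = no_game + rng.normal(45, 10, n)

# Paired t-test between the two guidance modes: each participant's
# scores are compared against their own, not against the group mean.
t, p = stats.ttest_rel(multi_target, target_by_target)
print(f"t = {t:.2f}, p = {p:.4f}")
```

Pairing matters here because the large between-person variability in baseline walking distance cancels out of the within-person differences, which is what makes a 12-participant comparison feasible.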

Weiyi Li, Pengbo Feng, Hongtao Ma, Longfei Ma, Hengyu Zhang, Yuanyuan Liu
Open Access
Article
Conference Proceedings

Human factors and pedagogic principles to design a fire-safety pedagogic game

A previous study using virtual reality demonstrated that children did not know how to exit a building efficiently when there was smoke in the corridors. Fire safety is important and, as such, should be taught in Finnish schools to children aged 10-16. However, materials are not harmonized, and in the end each school is free to decide how, and to what extent, its study program includes fire safety. With the aim of creating a useful and effective pedagogic tool for teachers, we designed and created an educational game to instil the knowledge, skills and attitudes related to fire safety that children should have. The design was carried out together with fire inspectors and a pedagogy expert in the field. The human factors of children as players and five pedagogic design principles were considered. The outcome was a free-to-play mobile game called Virpa – Fire Expert, provided for iOS and Android devices. The game provides several hours of gameplay via numerous tasks and minigames, and it achieved exceptionally good player retention rates. The most innovative and pioneering aspect of the game was the combination of virtual and real worlds in the same digital learning environment via machine vision algorithms and augmented reality functionalities. Furthermore, the game was conceived as a research tool, as we wanted to evaluate the impact of the tool on the overall learning process, but also to provide teachers, educators and parents with feedback about the learning outcomes of their pupils or children. Always respecting young players' privacy, the game anonymously collects metrics and data points, which were combined in an efficient and effective evaluation form. The design of the form considered a total of seven human factors related to teachers' needs and interests. The form is also free to access via a website which collects real-time data from the server and automatically organizes it for the teacher.
This paper describes the applied design principles and the human factors considered regarding children, the typical walkthrough in the game, the type of collected data, the game engagement and learning impact assessment results, and the final verification tool created for teachers and educators.

David Oliva, Kimmo Tarkkanen, Timo Haavisto, Brita Somerkoski, Axel Lindberg, Mika Luimula
Open Access
Article
Conference Proceedings

Mania Archetype: Chart Generation for Rhythm Action Games with Human Factors

In recent years, generating rhythm game charts with machine learning models has proven feasible. This has reduced the cost of chart generation and served as an auxiliary tool for novice chart creators, while allowing players to experience new songs and charts earlier. However, the existing literature does not sufficiently discuss how to generate challenging, high-quality, human-like charts for the versatile 4k Mania mode, which uses four scrolling 'note highways' to display the notes to be played. The focus of this article is on improving the generation of 4k charts in OSU! Mania through research on machine learning chart generation, with the aim of creating more interesting and accurate charts. Additionally, we propose a more comprehensive and human-centered standard for evaluating the quality of generated charts, informed by interviews with experienced players and chart creators.

Jiale Wang, Wei Huang, Xiu Li
Open Access
Article
Conference Proceedings

Measuring Detection and Habituation of Olfactory Stimuli in Virtual Reality for Improved Immersion

Virtual reality (VR) is a powerful tool that allows humans to interact with systems at scale, often with more feasibility and/or accessibility. However, the usefulness of these digital counterparts can be limited by the immersiveness of the experience, especially when human-system interaction is integral to the use case (e.g., VR for training or gaming). Therefore, increasing a user's sense of presence can improve the utility of VR. Previous literature suggests that integrating olfactory stimuli may be beneficial towards this; however, gaps exist in formalizing the integration and deployment of olfaction in VR. The objective of this study is to investigate the effectiveness of a wearable odor device for detecting scent stimuli in VR. Our primary goal is to determine the optimal parameters under which a participant can accurately detect a scent stimulus. We seek to answer the following questions: 1) how do various levels of scent intensity and duration affect scent detection, and 2) does habituation to the intensity and pleasantness of a scent occur after prolonged exposure? A 20-minute VR study (N=34) using an Oculus Quest 2 and odor attachment was conducted, during which participants were exposed to two scents across various scent intensities and delays. The study used a 5 (intensity: 105, 150, 225, 300, 600 ms; within-subject) x 3 (delay: 15, 30, 60 seconds; within-subject) x 2 (scent: pleasant (smoke), unpleasant (body odor); within-subject) factorial design. Pleasant and unpleasant scents were selected using a pilot study. Participants clicked a button on the Oculus controller when they detected a scent and rated the scent's intensity and pleasantness, each on a 7-point Likert scale. A binary logistic mixed model was used to predict scent detection (hit/miss). There were significant effects (p < .05) on detection of intensity, age, and the interaction of scent with intensity. Detection at the 600 ms intensity was 4.41 times more likely than at 105 ms.
There was a decreased likelihood of detecting the pleasant scent at 600 ms, suggesting that the unpleasant scent was even more detectable at this high intensity. Individuals ≤ 35 years were 2.17 times more likely to detect a scent than those over 35. ANOVAs were performed to assess the Likert-scale ratings of scent intensity and pleasantness as a measure of habituation. Results indicated no effect of order or delay, but within-subject effects of scent and intensity: the unpleasant scent was rated as more intense (p < .001) and more unpleasant (p < .001) than the pleasant one, and higher-intensity dispersals were rated as more intense (p = .003). Our framework and associated conclusions can guide the integration of olfaction into VR environments for improved human interaction. The results suggest that the effectiveness of a scent may vary depending on the strength of the intensity and the user's age. Certain scents may be more potent at lower intensities, while others may require higher intensities to produce a detectable effect. The integration of multiple senses into virtual training and/or gaming simulations could have a positive effect on knowledge outcomes, skill acquisition and enjoyment.
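The link between a logistic-model coefficient and an odds ratio such as "4.41 times more likely at 600 ms vs. 105 ms" can be sketched as follows. The trial-level data are simulated, a plain logistic regression stands in for the study's mixed model (the participant random effects are omitted), and the slope 0.003 is chosen only so that the simulated odds ratio lands near the reported value:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Hypothetical trial-level data: detection (hit/miss) as a function of
# scent pulse duration ("intensity", in ms), using the study's levels.
intensities = np.array([105, 150, 225, 300, 600])
X = rng.choice(intensities, size=2000).reshape(-1, 1)

# Simulate hits from a logistic model: detection improves with duration.
true_log_odds = -1.0 + 0.003 * X[:, 0]
hits = rng.random(2000) < 1.0 / (1.0 + np.exp(-true_log_odds))

model = LogisticRegression().fit(X, hits)
beta = model.coef_[0, 0]            # log-odds change per ms of duration

# Odds ratio for detection at 600 ms relative to 105 ms: the log-odds
# difference over the 495 ms gap, exponentiated.
odds_ratio = np.exp(beta * (600 - 105))
print(f"estimated odds ratio (600 vs 105 ms): {odds_ratio:.2f}")
```

In the actual analysis, a mixed model would add a per-participant random intercept so that repeated trials from the same person are not treated as independent; the odds-ratio arithmetic on the fixed-effect coefficient is the same.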

Scott Ledgerwood, Erika Gallegos, Marie Vans
Open Access
Article
Conference Proceedings