Human Factors in Virtual Environments and Game Design
Editors: Tareq Ahram, Christianne Falcão
Topics: Virtual Environments and Game Design
Publication Date: 2022
ISBN: 978-1-958651-26-1
DOI: 10.54941/ahfe1002053
Articles
Open Medical Gesture: An Open-Source Experiment in Naturalistic Physical Interactions for Mixed and Virtual Reality Simulations
Mixed reality (MR) and virtual reality (VR) simulations are hampered by requirements for hand controllers or by attempts to perpetuate two-dimensional computer interface paradigms from the 1980s. From our efforts to produce more naturalistic interactions for combat medic training for the military, we have developed an open-source toolkit that enables direct hand-controlled, responsive interactions, is sensor-independent, and can function with depth-sensing cameras, webcams, or sensory gloves. From this research and a review of the current literature, we have discerned several best approaches for hand-based human-computer interaction that provide intuitive, responsive, useful, and low-frustration experiences for VR users. The center of an effective gesture system is a universal hand model that can map to inputs from several different kinds of sensors rather than depending on a specific commercial product. Parts of the hand act as effectors in simulation space under a physics-based model; translational and rotational forces from the hands therefore impact physical objects in VR, varying with the mass of the virtual objects. We incorporate computer code with objects, calling them “Smart Objects”, which allows such objects to have movement properties and collision detection for expected manipulation. Examples of Smart Objects include scissors, a ball, a turning knob, a moving lever, or a human figure with moving limbs. Articulation points contain collision detectors and code to assist in expected hand actions. We include a library of more than 40 Smart Objects in the toolkit. Thus, it is possible to throw a ball, hit that ball with a bat, cut a bandage, turn on a ventilator, or lift and inspect a human arm. We mediate the interaction of the hands with virtual objects. Hands often violate the rules of a virtual world simply by passing through objects, so one must interpret user intent. This can be achieved by introducing stickiness of the hands to objects.
If the human’s hands overshoot an object, we place the hand onto that object’s surface unless the hand passes the object by a significant distance. We also make hands and fingers contact an object according to its contours and do not allow fingers to sink into its interior. Haptics, a sense of physical resistance and tactile sensation from contacting physical objects, is a supremely difficult technical challenge and an expensive pursuit. Our approach ignores true haptics, but we have experimented with an alternative, called audio tactile synesthesia, in which we substitute the sensation of touch with sound. The idea is to associate parts of each hand with a tone of a specific frequency upon contacting objects. The attack rate of the sound envelope varies with the velocity of contact and the hardness of the object being ‘touched’. Such sounds can feel softer or harder depending on the nature of the ‘touch’ being experienced. This substitution technique provides tactile feedback through indirect, yet still naturalistic, means. To determine discrete hand gestures and motions within the physical space, we use a recurrent neural network architecture called Long Short-Term Memory (LSTM). LSTM allows much faster and more flexible recognition than other machine learning approaches, is particularly effective with points in motion, and has very low recognition latency. In addition to LSTM, we apply synthetic vision and object recognition AI to discriminate real-world objects, which enables blended physical-virtual simulation methods. For example, it is possible to pick up a virtual syringe and inject a medication into a virtual patient through hand motions. We track the hand points to contact with the virtual syringe and detect when the hand compresses the syringe plunger. We can also use virtual medications and instruments on human actors or manikins, not just on virtual objects.
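The audio tactile synesthesia described above can be pictured as a tone generator whose attack time shortens as contact velocity and object hardness increase. The following is an illustrative sketch only: the hand-part-to-frequency mapping, the envelope shape, and all constants are our assumptions, not values from the toolkit.

```python
import numpy as np

SAMPLE_RATE = 44100

# Hypothetical mapping of hand parts to tone frequencies (Hz) -- illustrative only.
HAND_PART_FREQ = {"thumb": 440.0, "index": 523.3, "middle": 659.3, "palm": 330.0}

def contact_tone(part, velocity, hardness, duration=0.25):
    """Synthesize a contact tone whose attack time shortens as contact
    velocity (m/s) and object hardness (0..1) increase."""
    freq = HAND_PART_FREQ[part]
    t = np.linspace(0.0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    # Faster, harder contacts -> shorter (sharper) attack; clamp to 5-100 ms.
    attack = np.clip(0.1 / (1.0 + 5.0 * velocity * hardness), 0.005, 0.1)
    envelope = np.minimum(t / attack, 1.0) * np.exp(-3.0 * t)
    return envelope * np.sin(2.0 * np.pi * freq * t)

soft = contact_tone("index", velocity=0.1, hardness=0.2)  # slow touch: gentle attack
hard = contact_tone("index", velocity=2.0, hardness=0.9)  # fast, hard touch: sharp attack
```

A gentle touch yields a slow-rising tone, while a fast contact with a hard object yields a percussive onset, which is what lets the sound "feel" softer or harder.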
With object recognition AI, we can place a syringe on a tray in the physical world. The human user can pick up the syringe and use it on a virtual patient. Thus, we are able to blend physical and virtual simulation together seamlessly in a highly intuitive and naturalistic manner. The techniques and technologies explained here represent a baseline capability: interacting in mixed and virtual reality can now be far more natural and intuitive than it has ever been. We have passed a threshold where we can do away with game controllers and magnetic trackers for VR. This advancement will contribute to greater adoption of VR solutions. To foster this, our team has committed to freely sharing these technologies for all purposes and at no cost as an open-source tool. We encourage the scientific, research, educational, and medical communities to adopt these resources, determine their effectiveness, and use these tools and practices to grow the body of useful VR applications.
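The "stickiness" heuristic for mediating hand-object contact could be sketched as below. The threshold value and the flat-surface simplification are our assumptions for illustration, not details of the toolkit.

```python
import numpy as np

STICK_DISTANCE = 0.05  # metres beyond the surface before the hand 'unsticks' (assumed value)

def resolve_hand_contact(hand_pos, surface_point, surface_normal):
    """Project a tracked hand position onto an object's surface when it has
    penetrated slightly; leave it free if it has passed far through (interpreting
    the user's intent as moving past the object). Positions are 3-D vectors and
    surface_normal points out of the object."""
    hand_pos = np.asarray(hand_pos, dtype=float)
    surface_point = np.asarray(surface_point, dtype=float)
    n = np.asarray(surface_normal, dtype=float)
    n = n / np.linalg.norm(n)
    signed_dist = np.dot(hand_pos - surface_point, n)  # negative = inside the object
    if signed_dist < -STICK_DISTANCE:
        return hand_pos                      # overshot by a lot: hand moves past the object
    if signed_dist < 0.0:
        return hand_pos - signed_dist * n    # small overshoot: snap back onto the surface
    return hand_pos                          # in front of the surface: no correction

# A hand 2 cm inside a flat surface is snapped back onto it.
corrected = resolve_hand_contact([0, 0, -0.02], [0, 0, 0], [0, 0, 1])
```

The same signed-distance test also prevents fingers from sinking into an object's interior, since any small penetration is projected back to the contour.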
Thomas Brett Talbot, Chinmay Chinara
Open Access
Article
Conference Proceedings
A pilot study for a more Immersive Virtual Reality Brain-Computer Interface
We present a pilot study for a more immersive virtual reality (IVR) brain-computer interface (BCI). The originality of our approach lies in recording, using physical VR trackers, the real movements users make when asked to move their feet, and in reproducing those movements precisely through a virtual agent when users are asked to mentally imagine making the same movements. We show the technical feasibility of this approach and explain how BCIs based on motor imagery can benefit from these advances to better involve the user in the interaction loop with the computer system.
José Rouillard, Hakim Si Mohammed, François Cabestaing
Open Access
Article
Conference Proceedings
Measuring Human Influential Factors During VR Gaming at Home: Towards Optimized Per-User Gaming Experiences
It is known that human influential factors (HIFs, e.g., sense of presence/immersion; attention, stress, and engagement levels; fun factors) play a crucial role in the gamer’s perceived immersive media experience [1]. To this end, recent research has explored the use of affective brain-/body-computer interfaces to monitor such factors [2, 3]. Typically, studies have been conducted in laboratory settings and have relied on research-grade neurophysiological sensors. Transferring the obtained knowledge to everyday settings, however, is not straightforward, especially since it requires cumbersome and long preparation times (e.g., placing electroencephalography caps, applying gel, testing impedances), which could be overwhelming for gamers. To overcome this limitation, we have recently developed an instrumented “plug-and-play” virtual reality head-mounted display (termed iHMD) [4] which directly embeds a number of dry ExG sensors (electroencephalography, EEG; electrocardiography, ECG; electromyography, EMG; and electrooculography, EOG) into the HMD. A portable bioamplifier is used to collect, stream, and/or store the biosignals in real time. Moreover, a software suite has been developed to automatically measure signal quality [5], enhance the biosignals [6, 7, 8], infer breathing rate from the ECG [9], and extract relevant HIFs from the post-processed signals [3, 10, 11]. More recently, we have also developed companion software to allow for use and monitoring of the device at the gamer’s home with minimal experimental supervision, hence exploring its potential use truly “in the wild”. The iHMD, VR controllers, and a laptop, along with a copy of the Half-Life: Alyx videogame, were dropped off at the homes of 10 gamers who consented to participate in the study. All public health COVID-19 protocols were followed, including sanitizing the iHMD in a UV-C light chamber and with sanitizing wipes 48 h prior to dropping the equipment off.
Instructions on how to set up the equipment and the game, as well as a Google Form with a multi-part questionnaire [12] to be answered after the game, were provided via videoconference. The researcher remained available remotely in case any participant questions arose, but otherwise interventions were minimal. Participants were asked to play the game for around one hour, and none of them reported cybersickness. This paper details the results obtained from this study and shows the potential of measuring HIFs from ExG signals collected “in the wild,” as well as their use in remote gaming experience monitoring. In particular, we show the potential of measuring gamer engagement and sense of presence from the collected signals and their influence on overall experience. The next steps will be to use these signals and inferred HIFs to adjust the game in real time, thus maximizing the experience for each individual gamer.
References
[1] Perkis, A., et al., 2020. QUALINET white paper on definitions of immersive media experience (IMEx). arXiv preprint arXiv:2007.07032.
[2] Gupta, R., et al., 2016. Using affective BCIs to characterize human influential factors for speech QoE perception modelling. Human-centric Computing and Information Sciences, 6(1):1-19.
[3] Clerico, A., et al., 2016. Biometrics and classifier fusion to predict the fun-factor in video gaming. In IEEE Conf Comp Intell and Games, pp. 1-8.
[4] Cassani, R., et al., 2020. Neural interface instrumented virtual reality headsets: Toward next-generation immersive applications. IEEE SMC Mag, 6(3):20-28.
[5] Tobón, D., et al., 2014. MS-QI: A modulation spectrum-based ECG quality index for telehealth applications. IEEE TBE, 63(8):1613-1622.
[6] Tobón, D. and Falk, T.H., 2016. Adaptive spectro-temporal filtering for electrocardiogram signal enhancement. IEEE JBHI, 22(2):421-428.
[7] dos Santos, E., et al., 2020. Improved motor imagery BCI performance via adaptive modulation filtering and two-stage classification. Biomed Signal Proc Control, Vol. 57.
[8] Rosanne, O., et al., 2021. Adaptive filtering for improved EEG-based mental workload assessment of ambulant users. Front. Neurosci., Vol. 15.
[9] Cassani, R., et al., 2018. Respiration rate estimation from noisy electrocardiograms based on modulation spectral analysis. CMBES Proc., Vol. 41.
[10] Tiwari, A. and Falk, T.H., 2021. New measures of heart rate variability based on subband tachogram complexity and spectral characteristics for improved stress and anxiety monitoring in highly ecological settings. Front Signal Proc, Vol. 7.
[11] Moinnereau, M.A., et al., 2020. Saccadic eye movement classification using ExG sensors embedded into a virtual reality headset. In IEEE Conf SMC, pp. 3494-3498.
[12] Tcha-Tokey, K., et al., 2016. Proposition and validation of a questionnaire to measure the user experience in immersive virtual environments. Intl J Virtual Reality, 16:33-48.
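Inferring breathing rate from the ECG (reference [9] above) commonly exploits the respiratory amplitude modulation of the R-peaks. The following is a minimal illustration on synthetic data, not the modulation-spectral method used by the authors; the resampling rate and respiration band are our assumptions.

```python
import numpy as np

def breathing_rate_from_r_amplitudes(r_amps, beat_times):
    """Estimate breathing rate (breaths/min) from the respiratory amplitude
    modulation of ECG R-peaks, via the dominant frequency of the amplitude series."""
    fs = 4.0  # resample the unevenly spaced R-peak amplitudes onto a uniform 4 Hz grid
    t_uniform = np.arange(beat_times[0], beat_times[-1], 1.0 / fs)
    series = np.interp(t_uniform, beat_times, r_amps)
    series = series - np.mean(series)
    spectrum = np.abs(np.fft.rfft(series))
    freqs = np.fft.rfftfreq(len(series), d=1.0 / fs)
    # Search within a plausible respiration band (0.1-0.5 Hz = 6-30 breaths/min).
    band = (freqs >= 0.1) & (freqs <= 0.5)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Synthetic data: 60 s of beats at 1 Hz, amplitudes modulated at 0.25 Hz (15 breaths/min).
beats = np.arange(0.0, 60.0, 1.0)
amps = 1.0 + 0.1 * np.sin(2.0 * np.pi * 0.25 * beats)
rate = breathing_rate_from_r_amplitudes(amps, beats)
```

On this synthetic series the estimator recovers a rate near the 15 breaths/min used to generate the modulation, up to FFT bin resolution.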
Marc Antoine Moinnereau, Tiago Henrique Falk, Alcyr Alves De Oliveira
Open Access
Article
Conference Proceedings
Mobile Cross Reality (XR) space for remote collaboration
In this study, we propose a cross reality (XR) dialogue system that transmits an omnidirectional stereoscopic moving-viewpoint image of a remote real space and presents it to a local worker through an HMD (head-mounted display), while the remote worker faces the stereoscopic avatar (face part) of the local worker presented through MR glasses. The system is asymmetric. The position and orientation of the head and eyes measured by the HMD (Vive Pro Eye, HTC) at the local site are transmitted to the remote space, and the avatar of the local worker is shown through the MR glasses (Magic Leap, HoloLens 2). The CG object and avatar (the face of the local worker) are shared with the remote 3D real space. This enables the remote worker to see the face of the local worker, whose viewpoint position is on the TwinCam Go mobile stereoscopic camera. We conducted an experiment to evaluate the reality of the avatars presented in the MR glasses of the field workers as interactors and the clarity of the instructions for the spatial objects under discussion. The participants were seven university and graduate students (aged 21-24). The communication time of this system was about 70 ms one way; ITU-T Recommendation G.1010 recommends that the delay between terminals for real-time video communication be less than 150 ms one way. The experiment participants stood in front of the stereoscopic camera (TwinCam) as remote experiencers and wore MR glasses to see the avatars of the local experiencers (wearing HMDs). The local operator described three spatial objects (a cube, a sphere, and a cone) displayed 1.7 m closer to the experiment participant from the remote camera. Analysis of variance showed that the reality and clarity of the dialogue increased at the 1% significance level when the avatar's head rotation and eye movements were present. The 3D projection display scored higher than 2D at the 5% level of significance.
The clarity of the subject improved at the 5% level of significance when there was head rotation and eye movement, but the 3D projection was not significantly different from 2D. This indicates that the 2D monitor, lacking binocular disparity, has a large reading error in the depth direction. Another factor is that the HMD and MR glasses can exploit motion parallax from even slight head movement, which makes it possible to reduce the error to less than about half that of a 2D monitor. In this study, we developed a cross reality (XR) dialogue system for mobile remote collaboration. Three evaluation experiments showed that the clarity of dialogue and the accuracy of depth indication improved compared with a conventional 2D video teleconference. This system can also be integrated with the metaverse and can provide a variety of remote experiences of the world even when mobility is restricted by contagious disease. In the future, we will continue to verify and demonstrate the technology necessary for smooth telecommunication.
Yusuke Kikuchi, Ryoto Kato, Vibol Yem, Yukie Nagai, Yasushi Ikei
Open Access
Article
Conference Proceedings
Augmented Reality: Beyond Interaction
In Postphenomenology, various human-technology-world relations have been investigated. They provide an understanding of how humans and technological artifacts shape each other and how human intentionality is mediated by technology. As discussed in this paper, digital technology as it has developed over the last twenty years has not received the attention it deserves. The reflections on smart environments and augmented reality technology show a lack of insight into this research and are based on (premature) commercial applications that are not representative of it. We take a closer look at this research from a human-computer interaction perspective and use this premise to comment on these relations and reflect on their values.
Anton Nijholt
Open Access
Article
Conference Proceedings
Generation of Walking Sensation by Providing Upper Limb Motion
For an immersive first-person experience in a VR space, the user (or an avatar) needs to walk freely within the space, because locomotion is essential and the basis for a variety of activities. The present study investigates the arm swing of a user to induce a bodily sensation of walking in a VR space. We built an arm swing display that moves the upper limbs similarly to the motion that appears in real walking. The amplitude and flexion ratio of the swing motion that present the sensation of virtual walking were determined via a user study. The optimal swing angle and its forward-part ratio (flexion ratio) to create the sensation of straight walking were around 30 degrees and 54 percent, respectively. For turning, the relation between the asymmetric flexion ratio of both arms and the curvature of the turning path was also determined by experiment.
Gaku Sueta, Ryoto Kato, Yusuke Kikuchi, Vibol Yem, Yasushi Ikei
Open Access
Article
Conference Proceedings
Exploring the Design of the Sign System of NTUH through Wayfinding Behavior in Virtual Environment
This study uses National Taiwan University Hospital (NTUH) as the experimental field to explore the existing sign system design through wayfinding behavior in a virtual environment. The experiment simulated scenes that allowed participants to move freely from a first-person perspective and provided wayfinding tasks. The results showed that participants were more likely to use signs suspended from the ceiling to find directions. When they did not see the target information on the signs, they wandered around or went to a similar section to look for it. If the target is not on the first floor, signs should provide clear information about the floor. The existing sign system causes users to overlook information because of its layout, and the conventions for arrow direction indication also need to be standardized. The results of this research help in understanding the wayfinding behavior of hospital users and serve as a reference for design improvements.
Ching Yuan Wang, Ching I Chen, Meng-Cong Zheng
Open Access
Article
Conference Proceedings
VR-based Serious Games Approach for a Virtual Installation of an Ammonia Compressor Pack in the Industrial Refrigeration
Extended reality technology (xR), which encompasses augmented, virtual, and mixed reality (AR/VR/MR), is one of the key technologies of digital transformation. Thanks to existing powerful immersive hardware systems, complex technical and natural systems can be digitally represented in a realistic virtual environment. This enables users to immerse themselves completely in the virtual environment and to observe and interact with the systems and objects contained therein without major restrictions, or to augment real products and systems with digital data at runtime. This creates new opportunities to present the behaviour and functionalities of complex systems in a tangible and understandable way. xR technology therefore has the potential to revolutionize learning and training methods, especially in the qualification of specialists and experts. Within the international project “International Cooperation on VR/AR Projects” (IC xR-P) – a cooperation between the University of Applied Sciences Karlsruhe (Germany), Turku University of Applied Sciences (Finland), and the Higher Institute of Computer Science and Multimedia Sfax (Tunisia) – the application of VR technology in training, in terms of supporting cognitive skills, is investigated. IC xR-P focuses on the implementation of VR training apps for medical training, rescue, and knowledge transfer. This paper presents and discusses the implementation of a VR training app for the installation of an industrial refrigeration ammonia compressor pack (ACP). The VR app has been developed in cooperation with the German refrigeration technology manufacturer BITZER Kühlmaschinenbau GmbH. The app was developed as a feasibility study to demonstrate the ability of VR technology to train BITZER's customers to install the ACP worldwide.
The VR app should support the customers' engineers in performing an error-free installation of the ACP, based on a simplified and tangible virtual process description and virtual instructions. The VR training app comprises four main modules: a VR logging and reception room, the VR delivery and unloading process for the ACP, VR installation of the ACP, and a VR visualization room for the refrigerant flow and technical details of the ACP. The app can be used in single- and multiplayer modes. A summary of the app's evaluation by BITZER engineers is presented at the end of the paper.
Fahmi Bellalouna, Robin Langebach, Volker Stamer, Mike Prakash, Mika Luimula, Anis Jedidi, Faiez Gargouri
Open Access
Article
Conference Proceedings
Hazardous Training Scenarios in Virtual Reality - A Preliminary Study of Training Scenarios for Massive Disasters in Metaverse
Simulation training in aviation and maritime operations is widely used for competence training and assessment. These simulator centres have suffered greatly because of the COVID-19 pandemic. Owing to rapid technological progress and the pandemic, disruptive solutions are being intensively sought in vocational and professional training. Flight and maritime simulators are examples of training environments where even hazardous scenarios can be practised in safe conditions. In previous studies, we have shown that virtual reality offers other fields tools to create training solutions for similarly hazardous situations, such as our virtual fire safety application used for fire-escape training. In addition, virtual and augmented reality can be used to create digital learning environments for fire safety prevention training that combine physical, psychological, social, and pedagogic dimensions. In this paper, we again focus on virtual fire safety training. Aircraft fires require special treatment in firefighting with regard to the burning materials, because about half of an aircraft consists of fibre composites, which can release many fine particles that are harmful to the lungs during combustion. However, training in aircraft firefighting is currently possible only with great effort at a few special training grounds. This training application with multiplayer functionality was created with the Unity game engine. In the design phase, the emphasis was on setting up an environment where teamwork and leadership are needed to accomplish the scenario. This approach is quite close to the metaverse concept, where social communication is combined with hands-on training activities among a large group of participants in an immersive digital training environment. The task of the participants is first to assess the situation, then to extinguish the fire and prevent the fibres from spreading to the surrounding area.
This is done by collecting individual smaller pieces of composite debris or covering larger ones with foam so that they can no longer be carried away by the smoke and wind. Aside from extinguishing the fire at the crash site, the firefighters are also trained to collect debris from the crash site and discard it into a bin. Both tasks are equally important and require a standard operating procedure guideline in order to implement them realistically in the application. Research was needed to find appropriate solutions for multiplayer functionality, for fire and smoke behaviour, and for extinguishing the fire. The Photon Unity Networking framework was used to enable multiplayer functionality. The Fire Propagation plugin in turn made it possible to spread the fire and to configure its appearance, including the size and location of flames and the amount and shape of smoke and sparks, based on the given requirements. Extinguishing the fire required a water particle system with suitable collision detection. We found multiplayer functionality to be an important element in virtual training. The scenario was designed so that participants had to communicate well with each other to ensure fast firefighting. Our application is still in a prototype phase, and more effort will be needed to make the training more realistic. In the next phase we will present the story of the scenario in more detail and increase the stress level of participants by adding more tasks. In addition, we aim to improve the assessment system by analyzing user data, including difficulty levels, a high-score list, and a feedback system. This solution can also be seen as a preliminary study for a massive catastrophe training experiment in which tens or hundreds of professionals will be trained in a metaverse environment utilizing in-house metaverse technology.
Mika Luimula, Jarmo Majapuro, Fahmi Bellalouna, Anis Jedidi, Brita Somerkoski, Timo Haavisto
Open Access
Article
Conference Proceedings
Exploring the use of virtual reality in co-reviewing designs
Virtual reality (VR) is an emerging technology that is increasingly used in design environments. In a design process, designers often work together through co-creation. The next step, one that is often overlooked, is co-reviewing in VR. Virtual reality has potential as a valuable decisive step in the creation process, replacing a traditional tool; one such tool in traditional design reviewing processes is the trade-off analysis. In VR, concepts can be reviewed at different sizes and from various perspectives, and products are perceived as more tangible and true to scale. To obtain a better view of the use of VR in co-reviewing, this paper compares a traditional method with an immersive method. The participants were eight industrial design master's students who completed the experiment in pairs. The results show that VR offers advantages for reviewing the ergonomics of a design, for which traditional 2D screen-based software is more limited in comparison.
Ian Garcia, Jouke Verlinden
Open Access
Article
Conference Proceedings
Application of Virtual Reality to Instructions of Manual Lifting Analysis
The application of virtual and augmented reality has become popular in areas such as education, entertainment, training, communication, design, and therapy. In education in particular, virtual reality (VR) has been widely used by researchers and educators to create interactive instruction for problems of interest in realistic alternative environments. Educators have recognized the potential of VR for instruction in traditional classrooms, where instruction is otherwise limited to two-dimensional space and a third-person view of a problem. VR allows students to place themselves in a virtual environment of interest, with a more immersive perception of problem conditions and of the application of instructions. In the literature, a considerable number of studies have applied digitalization technologies, including virtual and augmented reality, to the design of workstations, manufacturing workplaces, and maintenance activities. In general, most studies evaluated the potential of virtual reality from the perspective of fidelity, usability, and cost, but not from the perspective of instructional effectiveness. In traditional instruction on manual lifting analysis, workplace or task conditions for manual lifting are usually given as statements, sometimes with pictorial descriptions. Learners who need to investigate and analyze lifting tasks must convert these two-dimensional descriptions of task conditions into three-dimensional spaces for better understanding, but this conversion is limited by an individual's capacity for mental imagery. Providing learners with a virtual task environment for lifting conditions would therefore be very helpful for understanding the underlying conditions of a lifting task.
The objective of this study was to design a teaching aid to improve instruction in virtual reality environments and to compare the effectiveness of traditional instruction with virtual reality instruction. One exemplary lifting task published by the Centers for Disease Control and Prevention was implemented using the Unity software. This example may be used by other educators to develop their own virtual instruction for different task analyses.
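Manual lifting analysis of the kind taught here typically centers on the revised NIOSH lifting equation, which computes a recommended weight limit (RWL) as a product of multipliers derived from the task geometry. The sketch below follows the published equation; whether this exact analysis is the one used in the study is our assumption, and the frequency and coupling multipliers are passed in directly since they come from NIOSH lookup tables.

```python
def niosh_rwl(H, V, D, A, FM=1.0, CM=1.0):
    """Recommended weight limit (kg) from the revised NIOSH lifting equation.
    H: horizontal distance of hands from ankles (cm), V: vertical hand height (cm),
    D: vertical travel distance (cm), A: asymmetry angle (degrees).
    FM and CM are the frequency and coupling multipliers from NIOSH tables."""
    LC = 23.0                        # load constant, kg
    HM = min(25.0 / H, 1.0)          # horizontal multiplier
    VM = 1.0 - 0.003 * abs(V - 75)   # vertical multiplier
    DM = min(0.82 + 4.5 / D, 1.0)    # distance multiplier
    AM = 1.0 - 0.0032 * A            # asymmetry multiplier
    return LC * HM * VM * DM * AM * FM * CM

# Ideal posture: 25 cm reach, 75 cm hand height, 25 cm travel, no twist -> RWL = 23 kg.
rwl = niosh_rwl(H=25, V=75, D=25, A=0)
lifting_index = 15.0 / rwl           # lifting index for a 15 kg load
```

A lifting index (load weight divided by RWL) above 1.0 flags an increased risk of lifting-related low-back injury, which is exactly the judgment students practise when analyzing a simulated task.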
Byungjoon Kim, Jinkun Lee, Emma Kloth
Open Access
Article
Conference Proceedings
Replacing Common Picking Devices from Augmented Reality Scenarios at Warehouses by a Laser Projection System
The human worker will still play a major part in the factory of the future. The number of tasks to accomplish is constantly rising in modern production facilities and warehouses, whereas the shortage of skilled workers is increasing. As a result, fewer professionals must work alongside many unskilled workers at highly complex technical systems and in short cycle times. Therefore, in this paper, we present a laser projection system as a new way of providing information to guide the worker through these processes. Further, we demonstrate the use of the system in a common logistics process and compare it with other established interaction modalities.
Jan Finke, Sebastian Hoose, Jana Jost, Moritz Roidl
Open Access
Article
Conference Proceedings
Design of control elements in Virtual Reality - Investigation of factors influencing operating efficiency, user experience and presence
The ergonomic design of control elements in real life has been researched in detail. Various studies exist on their optimal dimensioning and on the haptic and acoustic feedback needed to achieve high control accuracy and user experience. But products are increasingly developed with virtual prototypes. Virtual reality (VR) allows these prototypes to be tested in a highly immersive environment. However, the findings from reality cannot be transferred to VR directly; for example, users in VR interact with the prototypes using controllers, which affects haptic feedback. This study investigates how rotary dials and joysticks must be designed and programmed in VR so that control tasks can be performed efficiently while generating a high user experience and perceived presence. In user tests, subjects (n = 25) evaluated the control of a joystick and a rotary dial in VR. In a virtual crane operator's cabin or at a virtual table, the subjects (f = 10, m = 15, age: 24 ± 3) performed four predefined tasks per control element. On two screens in VR, subjects saw a vertical bar graph with a scale from 0 to 100% controlled by the joystick and, separately, a numerical value between 0 and 100% controlled by the rotary dial. The screens displayed the task to the subjects, e.g., "Set the value from 0% to 42%". Following the design-of-experiments method, 14 factors, such as vibration feedback, acoustic feedback, position of the subject, or the sensitivity of the control element, were systematically varied on two or three levels (e.g., actuator diameter of 40 mm, 80 mm, or 110 mm). For each trial, the control accuracy and the time required to complete the task were determined. In addition, perceived presence was assessed using the Slater-Usoh-Steed questionnaire and user experience was surveyed using the User Experience Questionnaire. The effect of a change of level on the response parameters was investigated using multifactorial ANOVA (α = .05).
Linear regression is used to calculate a mathematical relationship between each factor and response parameter. These mathematical models are used to calculate which factor values achieve a high level of control accuracy with a low time requirement and a high level of user experience and perceived presence. The factors angular resolution, inclination, shape of the rotary dial, and position of the subject have a highly significant effect (p ≤ .001) on the time required to complete the tasks with the rotary dial. On the control accuracy of the rotary dial, the angular resolution, the VR controller, and the interaction of angular resolution and diameter of the rotary dial have a significant effect. On user experience, a total of six factors and two interactions have a significant effect. On the perceived presence of the subjects, the VR environment and the diameter of the rotary dial have a significant effect. The calculated optimized design is a rotary dial with vibration feedback, without acoustic feedback, with visualization of rough knurling, an angular resolution of 10-12 degrees per value, a 40 mm diameter, and no inclination; visualization of the hand should be avoided. Sensitivity, size, subject position, VR environment, and the interaction of subject position and VR environment have a significant effect on the time required to perform the control tasks with the joystick. Three factors and one interaction have a significant effect on the control accuracy of the joystick. The interaction of vibration feedback and hand visualization has a significant effect on the perceived presence of the subjects. On user experience, nine factors and five interactions have a significant effect.
The calculated optimized factor levels for the joystick are vibration and acoustic feedback, no visual feedback, a vertical handle with a height of 20-24 cm, a five-level angular resolution, a maximum deflection angle of ±15°, a sensitivity of 8 %/sec, and visualization of the hand. The trials show a high degree of scatter, and the residuals show outliers. These deviations are mainly due to the subjects' individual prior experience with VR systems. Nevertheless, significant effects could be identified. A screening experimental design was used in this study; in a follow-up study, detailed investigations with a full factorial experimental design must be performed on the significant factors. The factors will be tested at multiple levels and with a significantly increased number of trials to further improve the accuracy of the mathematical models.
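The screening step described above, varying coded factors and estimating their effects by regression, can be sketched as follows. This is an illustrative reconstruction with simulated data, not the study's dataset; the factor names, true effect sizes, and run count are assumptions.

```python
import numpy as np

# Hypothetical sketch: estimating main effects of coded two-level factors
# (e.g. angular resolution low/high, diameter small/large) on task
# completion time, as in a screening design-of-experiments analysis.
rng = np.random.default_rng(0)

n_runs = 64
# Coded factor matrix: -1 = low level, +1 = high level
X = rng.choice([-1.0, 1.0], size=(n_runs, 3))  # [angular_resolution, diameter, inclination]

true_effects = np.array([2.0, 0.5, 1.2])  # seconds per unit of coded factor (made up)
baseline = 10.0                           # mean task completion time in seconds
y = baseline + X @ true_effects + rng.normal(0, 0.3, n_runs)

# Least-squares regression recovers the baseline and the main effects,
# mirroring the regression step that follows the ANOVA screening.
A = np.column_stack([np.ones(n_runs), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

print("estimated baseline:", round(coef[0], 2))
print("estimated effects: ", np.round(coef[1:], 2))
```

In a real screening study each estimated effect would additionally be tested for significance before being carried into the follow-up full factorial design.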
Niels Hinricher, Chris Schröer, Claus Backhaus
Open Access
Article
Conference Proceedings
Chinese Characters Factory - Design of children's Chinese character construction enlightenment game based on augmented reality technology
China's "13th Five-Year Plan" points out that the development of the industry should adapt to the development trend of multimedia technology. Nowadays, augmented reality and virtual reality technologies have gradually penetrated daily life and have had a significant impact in many respects. Since ancient times, there have been many important educational ideas in China, and with the development of the Internet there is a particular focus on the importance of Chinese characters. In recent years, the application of AR design and related research in children's education has also developed rapidly; at the same time, the uniquely interactive, immersive, and imaginative characteristics of AR technology have greatly improved children's enthusiasm and initiative for learning, because they conform to children's figurative thinking. In this environment, it is therefore meaningful to explore how to effectively create an appealing AR game for introducing children to Chinese character construction. Methods: This paper introduces augmented reality technology into the field of children's Chinese character education through technical research to create a design method of virtual-real interaction. Through theoretical research, the paper identifies the characteristics of children's language education and the Chinese character root method, and finds the fit between children's cognitive development and the character root method. The author attempts to design a suitable set of diagrams to tap into the similarities between character-making thinking and product thinking. Through market research, the paper identifies the AR teaching format, which weakens the one-way indoctrination of knowledge and gives play to children's subjective initiative.
The content is intuitive, and information can be perceived comprehensively through the visual, tactile, and auditory senses. Result: "Chinese Characters Factory" is developed with Unity 3D, using ARKit as the augmented reality solution, and runs on the iOS platform; users experience the game on an iPad. The design practice is divided into three systems: a Chinese character experiment system, a mapping collection system, and an entertainment interaction system. The Chinese character experiment system was inspired by chemistry experiments: Chinese characters are formed by combining character roots and graphemes according to the corresponding character formation methods. Using ARKit's image tracking, the system identifies Chinese character card images in the physical environment and superimposes Chinese character models on them. In addition, the mapping collection system and the entertainment interaction system address children's Chinese character literacy through novel and entertaining educational play. Conclusion: Children's educational products using augmented reality technology are important and innovative for the development of children's minds. They are highly interactive and rich in presentation, so they can mobilize children's all-round perception of information, which greatly stimulates children's interest in the learning process and brings a brand-new experience to teaching. Based on the characteristics of Chinese characters, "Chinese Characters Factory" is a Chinese character AR game that fits the characteristics of language education and the cognitive development of preschool children. It applies the advantages of augmented reality technology to help children learn and memorize Chinese characters in a gamified way by experimenting with synthesizing Chinese characters, bringing children a vivid and interesting learning experience.
At the same time, the "Chinese character test" is ready to be released on the App Store. At present, educational products based on augmented reality technology are still at an early stage in the market, and the application of augmented reality technology in education still needs further exploration.
Cai Xin
Open Access
Article
Conference Proceedings
Promoting a teaching platform for "Traditional Skills + Virtual Reality Technology"
VR (virtual reality) technology has been applied to teaching and learning in many contexts. Our interest focuses on the application of VR in design studies, allowing students to experience craft (its technical aspects, tools, and methods), providing advanced means and methods for learning, and enabling the recovery and renewal of crafting and making. VR technology is used to simulate the spatial environment and technological process of production. Students are brought into the simulated environment through different sensing devices so that they can operate objects in the virtual world themselves, enhancing their feelings, deepening their understanding of the traditional technology, and helping them learn and create better. Through survey sections we gain an understanding of students' experience of VR for craft and of the enhancement of the teaching effect in a product design course. Finally, we aim to increase the inheritance and development of traditional skills among young people now and in the future.
Shujun Ban, Maria Rita Ferrara
Open Access
Article
Conference Proceedings
Rather Multifaceted than Disruptors: Exploring Gamification User Types of Crowdworkers
Crowdsourcing allows individuals and organizations to outsource tasks to an anonymous group of individuals called crowdworkers, who are paid upon completing the tasks. The quality of task results depends on factors like task complexity, task instruction quality, and crowdworker-related aspects, such as their motivation towards the task. In this context, gamification, i.e., the use of game elements in non-ludic contexts, could foster the crowdworkers' motivation beyond monetary incentives. Nevertheless, identifying the audience's gamification preferences is important to maintain their motivation in the long term. Although gamification has been used in crowdsourcing, to the best of our knowledge it has not been based on crowdworkers' preferences. The User Types HEXAD scale is suitable for identifying those preferences and is designed explicitly for gamification. In our research, we investigated which HEXAD user types characterize crowdworkers. To this end, we conducted a large-scale user study on two well-known crowdsourcing platforms, Amazon Mechanical Turk and Microworkers. Crowdworkers completed a demographic questionnaire, an image annotation task, and the HEXAD scale. The results show that crowdworkers are a multifaceted audience, since all user types are exhibited. Therefore, we argue that a single gamification approach might already satisfy a broad range of crowdworkers.
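A respondent's HEXAD profile is typically computed as per-type means over Likert items, with the possibility of several co-dominant types, which is what makes a "multifaceted" audience observable. The sketch below illustrates that scoring idea; the item-to-type grouping and answers are placeholders, not the published HEXAD item mapping.

```python
from statistics import mean

# Illustrative scoring of a HEXAD-style user-type questionnaire:
# each of the six types is measured by several 7-point Likert items,
# and the respondent's profile is the per-type mean.
HEXAD_TYPES = ["Philanthropist", "Socialiser", "Free Spirit",
               "Achiever", "Player", "Disruptor"]

def hexad_profile(responses):
    """responses: dict mapping type name -> list of 7-point Likert answers."""
    return {t: mean(responses[t]) for t in HEXAD_TYPES}

def dominant_types(profile, tol=0.0):
    """A respondent can exhibit several types; return all within tol of the max."""
    top = max(profile.values())
    return [t for t, s in profile.items() if top - s <= tol]

# Made-up answers for one respondent
answers = {"Philanthropist": [6, 7, 6, 6], "Socialiser": [5, 5, 4, 6],
           "Free Spirit": [7, 6, 7, 6], "Achiever": [6, 6, 5, 6],
           "Player": [7, 7, 6, 7], "Disruptor": [2, 3, 2, 2]}

profile = hexad_profile(answers)
print(dominant_types(profile, tol=0.5))
```

With a tolerance window, this respondent would count towards several types at once, which is how a population can exhibit all six types without each individual having a single dominant one.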
Edwin Ricardo Gamboa, Mathias Bauer, Muhammad Reham Khawaja, Matthias Hirth
Open Access
Article
Conference Proceedings
Metaverse Applications for Location-based Virtual Reality
Recently, the metaverse has dominated tech headlines for researchers in the fields of human-computer interaction and human factors and ergonomics around the world, due to advances in its software and hardware. With its ability to immerse humans in a virtual, inexpensive, and safe world, it has become very popular in recent years, particularly in the form of virtual reality. Nevertheless, despite this rapidly increasing attention, there is still very limited research summarizing and analyzing location-based virtual reality, in which virtual reality tools assist humans in experiencing virtual content by physically interacting with the environment. This paper proposes a new study investigating recent location-based virtual reality applications. The evolution of virtual reality using location-based features is then explored. Finally, we conduct a comparative analysis of virtual reality and augmented reality works, including location-based works, from the past five years.
Chutisant Kerdvibulvech
Open Access
Article
Conference Proceedings
Evaluation of Rapport in Human-Agent Interactions with a VR Trainer after a 6-week Exergame Training for Senior Users with Hypertension.
Human interactions with a trainer during physical training can be highly engaging and motivating [1] and are based on rapport as a dynamic structure of mutual attentiveness and coordination [2]. Human-agent interaction in virtual reality (VR) aims to establish interaction patterns and rapport with virtual agents similar to real life. Research shows that users react towards virtual agents much as they do towards real people [3] and that rapport is established similarly to human rapport [4]. Therefore, rapport with virtual trainers in exergames is used to foster an engaging and motivating user experience. In this paper we report the results of an evaluation study on perceptions of and interactions with the virtual trainer "Anna" after a 6-week exergame training for senior patients with hypertension. The human-like Anna is the key element of the interaction design in a gamified series of exergames developed in the bewARe project. Anna was developed as a realistic, full-body female figure (silhouette) to motivate participation in the VR training. The primary goal of our research was to evaluate to what extent senior users can establish rapport with the virtual trainer as a factor contributing to positive user experience and training outcomes. The evaluation was conducted with 23 participants aged 65 and older with diagnosed hypertension. The virtual trainer Anna facilitated user participation in both exergames by giving instructions, modeling movements, and providing feedback during the exergames in the HTC Vive Pro headset. We used the 15-item rapport scale by [5] to measure rapport. The study also applied further research instruments to explore perceptions of the virtual trainer, such as a trait list with 9 items describing selected features of the virtual trainer and a bipolar uncanniness questionnaire with 40 adjectives used to assess possible Uncanny Valley effects described by [6].
The results of the rapport scale indicate that the design of the virtual trainer was effective for establishing rapport, especially in terms of building a relationship with the virtual trainer and enhancing the engagement of senior users in the VR training. However, the design was less effective in creating a positive perception of the trainer as a warm, caring, and respectful agent. The overall median of the rapport scale was 6 (Min: 1, Max: 8). The evaluation of the trait list revealed that voice quality, speech pauses, and bodily movements were rated highest, followed by head and hand movements; the lowest values were observed for facial expression. In the Uncanny Valley questionnaire, the median value was 1 (Min: -3, Max: 3) for the humanness scale, 1 (Min: 0, Max: 3) for attractiveness, and 0 (Min: -1, Max: 0) for eeriness. Furthermore, the paper explores the relationships between the rapport scores, the senior trainees' perception of selected characteristics of the virtual agent, and the uncanniness scale. Finally, given the diverse results of the study, the paper discusses possible design options for enhanced rapport and motivational effects of a virtual trainer based on an analysis of the literature in related areas.
References
1. Ghosh, P., Satyawadi, R., Prasad Joshi, J., Ranjan, R. & Singh, P.: Towards more effective training programmes: a study of trainer attributes. In: Industrial and Commercial Training, vol. 44, pp. 194-202 (2012).
2. Tickle-Degnen, L. & Rosenthal, R.: The Nature of Rapport and Its Nonverbal Correlates. In: Psychological Inquiry, vol. 1, pp. 285-293 (1990).
3. Garau, M., Slater, M., Pertaub, D. P. & Razzaque, S.: The responses of people to virtual humans in an immersive virtual environment. Presence, vol. 14, pp. 104-116 (2005).
4. Huang, L., Morency, L. & Gratch, J.: Virtual Rapport 2.0. In: Vilhjálmsson, H. H., Kopp, S., Marsella, S., Thórisson, K. R. (eds.) Intelligent Virtual Agents. IVA 2011. Lecture Notes in Computer Science, vol. 6895, pp. 68-79. Springer, Berlin, Heidelberg (2011).
5. Gratch, J., Wang, N., Gerten, J., Fast, E. & Duffy, R.: Creating Rapport with Virtual Agents. In: Pelachaud, C., Martin, J. C., André, E., Chollet, G., Karpouzis, K., Pelé, D. (eds.) Intelligent Virtual Agents. IVA 2007. Lecture Notes in Computer Science, vol. 4722. Springer, Berlin, Heidelberg (2007).
6. Ho, C. & MacDorman, K. F.: Revisiting the uncanny valley theory: Developing and validating an alternative to the Godspeed indices. Computers in Human Behavior, vol. 26, pp. 1508-1518 (2010).
Ilona Buchem, Oskar Stamm, Susan Vorwerg, Kai Kruschel, Kristain Hildebrand
Open Access
Article
Conference Proceedings
Myopia Prevention Game Interface Design Based on Children's Cognition
This paper is based on the cognitive characteristics of Chinese children. Through a literature review and user study, we found that children prefer pictures and videos for information acquisition and are more attracted to saturated colours; what is more, children are more willing to experience and interact through action. This article therefore addresses the above characteristics, takes the prevention of children's myopia as the application scenario, and combines the most common information media that children currently encounter to design a game interface suited to children's cognitive characteristics. The design runs in the form of a game that makes children willing to interact and to accept myopia-prevention treatment. Using a smartwatch, an information medium commonly used by Chinese children, the game interface was designed with saturated colours and cute, rounded design elements. Finally, through interviews and research with children, the paper confirms that the design meets the cognitive and aesthetic needs of children and has gained their approval.
Peiqi Yi, Yunzhu Hu, Xiaoxue Zhang, Hui Wang, Xin He
Open Access
Article
Conference Proceedings
Immersive virtual interactive environment based on hologram technology for youth art education
Arts education for young people (mainly aged 16-18) varies greatly geographically due to differences in economic development between regions. The current situation is that high schools' teaching models and thinking in arts learning need to become more exciting and interactive. With the rise of the 'metaverse', the use of hologram technology and immersive virtual interactive environments such as AR and VR can make up for the current waste of educational resources due to 'egalitarianism' in the arts education sector; this is also a sign of increasing economic investment in education. Based on several years of experience in high school art education, the author has, through literature research, field studies, and summaries of experience, matched teaching equipment to teaching behaviour, sorted out the most basic functions that teaching design should provide, and conducted product research on existing products on the market, identifying certain functional deficiencies and aspects that can be improved. The study also provides some insights into hologram technology and immersive virtual interactive environments for art education.
Yujie Zhang, Ao Jiang
Open Access
Article
Conference Proceedings
Designing a Serious Game to enhance the musical skills of children using the iPlus methodology
Serious games have become a popular tool for knowledge transfer and for behavioral, perceptual, or cognitive change, and are valuable for developing or improving a specific area. The use of serious games for educational purposes can have a very positive effect; in our case, the area of application is the educational and psychological field, and the purpose is to strengthen psychoeducational processes, enabling an optimal and motivating learning process in the different stages of human development. The current research focuses on constructing a serious game for enhancing the musical skills of children using the iPlus methodology. The game is aimed at children from 8 to 12 years old; however, its application to adolescents and to middle-aged and elderly adults is not ruled out. The game is suitable for practicing and improving musical abilities such as memory, attention, and spatial perception by recognizing notes on the staff. It also offers the possibility of strengthening two essential psychological processes, important not only for the acquisition of learning but also for solving life problems.
Verónica-Gabriela Maldonado-Garcés, Elking Araujo, Mayra Carrión, Marco Santórum, Patricia Acosta-Vargas, Diana López, Mahatma Quijano, Luis Llanganate
Open Access
Article
Conference Proceedings
Guided by the hint: How shader effects can influence object selection in Virtual Reality
In this paper we investigate whether shader effects used as interaction hints can manipulate people's object selection behavior in a virtual reality simulation. We compared the effects Glowing Outline, Color Saturation, and Transparency and showed, in a study with 13 participants, that objects with prominent shader effects are selected significantly more often than transparent objects or objects without any applied effect (p = .01). However, the impact of the annotating shader declines over repeated interactions.
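With small samples like 13 participants, selection-preference results of this kind are often checked with an exact test. The sketch below shows one way to do that, an exact one-sided binomial test on how often the highlighted object was chosen first; the trial counts are invented for illustration and are not the study's data.

```python
from math import comb

# Exact one-sided binomial test: probability of observing at least k
# "highlighted object chosen first" outcomes in n paired trials if
# selection were actually at chance (p = 0.5).
def binom_p_one_sided(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical outcome: in 12 of 13 trials the glowing object was chosen first
p_value = binom_p_one_sided(12, 13)
print(f"p = {p_value:.4f}")
```

A p-value below the chosen alpha would indicate the highlighting effect shifts selection away from chance, matching the direction of the reported result.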
Philip Schaefer, Andre Vaskevic, Gerrit Meixner
Open Access
Article
Conference Proceedings
Taking a Romantic Adventure Together: Explore the Design of Virtual Travelling for Couples in Long-Distance Relationships
This research explores the design possibilities of creating an online adventure for couples in long-distance relationships. Three missions – intensive eye contact, future planning, and cooperative work – were designed and realized in a virtual space on gather.town. Two couples participated in the user test. The results show two different practices couples use to solve a joint problem, as well as the potential and concerns of designing adventurous experiences to enhance couples' joint activities.
Sin-Wei Huang, Wei-Chi Chien
Open Access
Article
Conference Proceedings
Design and Evaluation: Virtual Reality Haptic Interface to Enhance User Experience
A haptic interface is a platform that conveys haptic stimuli to represent and transmit information through touch. This paper systematically reviews the achievements of haptic interfaces and multisensory interaction in virtual reality environments from the perspective of human-computer interaction research, namely immersive experience and emotional expression, and proposes an evaluation system based on the Quality of Experience (QoE) model, aimed at quantitatively analyzing and enhancing the user experience of haptic interfaces with statistical significance. This assists researchers and engineers in researching, developing, and utilizing haptic interaction technologies in virtual reality environments.
Siyang Zhang, Yun Wang, Youru Wu, Sixuan Nie, Lu Chen
Open Access
Article
Conference Proceedings
Mobile application design with augmented reality triggers
Mobile applications developed for educational topics are a playful contribution to the teaching-learning process. This research presents a mobile application generated through participatory design and user experience. This study presents the concept construction phase, the theoretical foundation of the proposal focused on human-computer interaction, and the methodological process that produced a final product framed in a research project articulated with a community outreach project. 189 children aged 6 to 8 years from public and private schools in the cities of Quito and Ambato, Ecuador, participated. The research defined the scope of the application, its structure, requirements, usability, and navigability. For the development of the prototype, Unity was used to generate the videogame and its functionalities, Cinema 4D for 3D modeling, and the Vuforia SDK for augmented reality. In addition, Adobe Audition was used to produce the audiobook, complemented by Adobe Photoshop and Adobe Illustrator for the graphics. The application was created for the Android and iOS mobile platforms. For the design of the augmented reality triggers, illustrations with colored frames were used, making it possible to clearly recognize on which pages the mobile device should be used. The project is in the final evaluation phase, and testing is expected to conclude in the participating schools, where the Tobii Eye Tracker 4C, an eye-tracking device, will be used to collect information about the impact and integration this application generates among users, about the topic addressed, about the design and composition of the product, and about the results obtained with this human-computer interaction model.
Cesar Guevara, Carlos Borja, Hugo Arias-Flores
Open Access
Article
Conference Proceedings
Coin game to improve the education of designers in product design
The education of the creative professions is always a challenge, and with the new generations of product designers it becomes even more necessary to find new ways to capture their attention and correctly teach the basics of design. That is why we present a teaching tool that promotes design education by combining the logic of design with the concept of play, called "Fichas".
Fabiola Cortes Chavez, Ana Paula Diaz Pinal, Elvia Luz González-Muñoz, Alberto Rossa-Sierra, Mariel Garcia-Hernandez, Marcela De Obaldia
Open Access
Article
Conference Proceedings
Virtual human centered design: an affordable and accurate tool for motion capture in mixed reality
The introduction of Digital Human Modeling and Virtual Production in the industrial field has made it possible to bring the user to the center of the project in order to guarantee workers' safety and well-being in the performance of any activity. Traditional methods of motion capture are unable to represent the user's interaction with the environment: the user runs a simulation without realistic objects, so their behavior and movements are inaccurate due to the lack of real interaction. Mixed reality, through a combination of real objects and a virtual environment, increases human-object interaction and improves the accuracy of the simulation. A real-time motion capture system brings considerable advantages: the action performed in the simulator can be modified in real time, the user's posture can be corrected with immediate feedback, and results are available without first post-processing the recorded animation. These developments have brought Motion Capture (MoCap) technology into industrial applications, where it is used for assessing occupational safety risks, maintenance procedures, and assembly steps. However, real-time motion capture techniques are very expensive due to the required equipment. The aim of this work, therefore, is to create an inexpensive MoCap tool while maintaining high acquisition accuracy. The potential of the Unreal Engine software was first analyzed in terms of ergonomic simulations. Subsequently, a case study was carried out inside a vehicle passenger compartment, simulating an infotainment reachability test and acquiring the law of motion. This procedure was performed with two inexpensive MoCap techniques: an optical system using ArUco markers and a markerless optical system using the Microsoft Kinect® as a depth sensor. The comparison of the results showed an average difference between the two methodologies of about 2.5 degrees in terms of calculated angles. Thanks to this small error, the developed method allows a mixed reality simulation with the user's presence and offers an accurate analysis of the performed movements.
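The angle-comparison step can be illustrated as follows: given 3-D joint positions estimated by the two capture methods (marker-based vs. depth-sensor skeleton), compute a joint angle from each and report the difference in degrees. The coordinates below are synthetic placeholders, not measured data; the function names are our own.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by the segments b->a and b->c."""
    v1 = np.asarray(a, float) - np.asarray(b, float)
    v2 = np.asarray(c, float) - np.asarray(b, float)
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Hypothetical shoulder, elbow, wrist positions (metres) from each method:
marker_based = joint_angle([0, 0, 0], [0.3, 0, 0], [0.5, 0.2, 0])
kinect_based = joint_angle([0, 0.01, 0], [0.3, 0, 0], [0.5, 0.21, 0])
print(f"difference: {abs(marker_based - kinect_based):.2f} deg")
```

Averaging such per-frame differences over a recorded motion is one plausible way to arrive at an aggregate figure like the reported ~2.5-degree discrepancy.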
Saverio Serino, Carlotta Fontana, Rosaria Califano, Nicola Cappetti, Alessandro Naddeo
Open Access
Article
Conference Proceedings
The Verification of Human-Machine Interaction Design of Intelligent Connected Vehicles Based on Augmented Reality
First, comparing traditional vehicle interaction verification methods, it is concluded that the real-vehicle test scheme suffers from high cost, the purely physical bench scheme offers a poor experience, and the purely virtual reality scheme lacks force feedback; a scheme combining augmented reality with a physical bench has potential. By decomposing the automobile interaction verification process, the factors affecting the effectiveness of verification are summarized. Second, surveys and interviews are conducted with active automobile interaction users and verification experiment subjects to summarize the form and scope of automotive human-machine interaction design, as well as the design strategies and difficulties of verification experiments, in order to determine the functional requirements of an automotive interaction design verification bench. The feasibility of applying augmented reality technology to automotive human-machine interaction design verification is then discussed and put into practice.
Zhang Heng, Yuanbo Sun, Zhang Boyang, Yinan Che
Open Access
Article
Conference Proceedings
The impact of digital tools on the education process in times of pandemic in Ecuador
In the context of COVID-19, given the closure of educational centers worldwide, the frequent and varied use of technological tools in the classroom has become evident, which calls for an analysis of the impact of their application in the educational context. The objective of this study is therefore to understand the influence technology has had on the educational process. To this end, we addressed an issue of great interest in the current educational scenario: the digital tools applied during the pandemic, through which the teaching-learning process between teacher and student could continue in difficult times such as those society is currently experiencing. This, in turn, is quickly shaping what education will look like and how it will be delivered in the coming years. With the advanced pace of educational technology in the world of information and communication technologies, essential enablers have been detected. These currently help institutions find broader opportunities and solutions in education, and they have demanded of teachers a rapid adaptation to digital work tools that were not usual in their daily work; teachers had to learn to work cooperatively by sharing knowledge about the use of tools, impressions, doubts, and resources, among other elements of their daily work. The proposed methodology is positivist, with a quantitative, confirmatory approach, since it required a prior explanation and a series of assumptions or hypotheses. The sample was made up of 250 teachers from the Sierra educational system in Ecuador. The survey technique was applied, with a questionnaire as the instrument, yielding an overall score on the dimensions of perception of the impact of ICT in education. In this dimension, teachers were asked about their perception of various aspects of the teaching-learning process by means of a five-point Likert scale, and inferential probabilistic statistics were used in a causal study of variables to determine the digital tools used in education during the pandemic in Ecuador. The results showed that the communication tool used was Microsoft Teams and the gamification tools were Kahoot! and Quizizz. The hypothesis concerning the use of information and communication technologies in virtual contexts was validated: although their use was forced by the new reality generated by the pandemic, they had a positive impact on the Ecuadorian educational process.
José Gómez
Open Access
Article
Conference Proceedings
Revisiting the correlation between video game activity and working memory: evidence from machine learning
With the popularity of video games, more and more researchers are trying to understand the relationship between video game activity and cognitive abilities, one important cognitive system being working memory. Working memory is a limited-capacity short-term memory system for processing currently active information and is an important predictor in goal-driven behavioral domains; its scope includes, but is not limited to, fluid intelligence, verbal ability, and mathematical analysis. Because of the importance of working memory for the analysis of human behavior, numerous studies have attempted to describe its architecture and models. In general, models of working memory can be loosely categorized into content and process models, depending on their focus. Content models focus on the static material of working memory, mainly verbal and visuospatial material; process models focus on its dynamic processes, including both updating and maintenance of memory. However, this area of research has also been the subject of debate, involving two main hypotheses. The so-called core training hypothesis provides a potential mechanism for improving cognitive ability through video games: repeated stress on the cognitive system induces plastic changes in its neural matrix, leading to improved cognitive performance. The other proposed basic mechanism is meta-learning, that is, learning how to learn. According to this view, video games (especially action games) can improve related control skills, such as rule learning, cognitive resource allocation, and probabilistic reasoning, which are used in many different situations. A recent study of extreme groups showed that video game players performed better than non-players in all three WM measurements and that, when extended to the entire sample, video game time was related to visuospatial WM and n-back performance; in general, the relationship between cognition and playing video games is very weak. This study used the dataset of Waris et al. (2019) to re-investigate the correlation between video game activity and three different dimensions of working memory using seven different supervised learning models. It was concluded that video game activity was most highly correlated with the visuospatial component, slightly less correlated with the memory updating component, and least correlated with the verbal component. This partly confirms the view of Waris et al. (2019) that the analytic method may be key to such studies.
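The core comparison, ranking working-memory components by their association with gaming activity, can be sketched with simple correlations. The data below are simulated so that the visuospatial component is the strongest, mirroring the direction of the reported finding; the effect sizes and variable names are our own assumptions, not values from the Waris et al. dataset.

```python
import numpy as np

# Simulated reanalysis sketch: correlate weekly gaming hours with three
# working-memory component scores and rank the components by correlation.
rng = np.random.default_rng(42)
n = 200
gaming_hours = rng.gamma(2.0, 5.0, n)  # skewed, non-negative playtime

# Component scores built with decreasing dependence on gaming hours
wm = {
    "visuospatial": 0.35 * gaming_hours + rng.normal(0, 5, n),
    "updating":     0.15 * gaming_hours + rng.normal(0, 5, n),
    "verbal":       0.03 * gaming_hours + rng.normal(0, 5, n),
}

correlations = {k: np.corrcoef(gaming_hours, v)[0, 1] for k, v in wm.items()}
ranked = sorted(correlations, key=correlations.get, reverse=True)
print(ranked)
```

A supervised-learning reanalysis like the paper's would replace the raw correlation with cross-validated predictive performance of each model, but the ranking logic is the same.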
Dingzhou Fei
Open Access
Article
Conference Proceedings
Thinking of Aesthetic Empathy in Immersive Exhibitions
Motivated by the boom in immersive exhibitions as art experiences, the authors investigate the reasons for this phenomenon at the intersection of cognition and art, through the lens of “aesthetic empathy”, and consider the audience’s aesthetic empathy to be one of the key factors contributing to it. In addition, the authors argue that the communicative effect of immersive exhibitions is ultimately constructed jointly by the technical equipment and the personal characteristics of the audience. The authors then propose conditions and methods for building a positive art experience in immersive exhibitions. The overall argument is that the advantages of immersive technologies, including sensory stimulation and visual image construction, help the audience immerse themselves in the symbolic system constructed by the work, which is the driving factor in the early stage of empathy. This in turn leads the audience to reflect on and feel the meaning and emotion carried by this symbolic system, completing the empathic experience.
Meixue He, Ao Jiang
Open Access
Article
Conference Proceedings
The Development of Muay Thai Training Game for Tourism 4.0
This research aims to develop a game-based training kit for Muay Thai, in order to make Muay Thai training a leisure exercise. The training armor is equipped with 5 sensors to measure striking force and 5 light signals. There are three levels of light-signal timing: 1 second for beginners, 0.8 seconds for intermediate athletes, and 0.5 seconds for professionals. The light signals cue 9 forms of strikes. The research developed a force-measurement system and a real-time transmission program that displays results on screen. Scores can be accumulated, with escalation as in playing a game. Fifteen participants, including athletes and trainers, tested the training equipment, with three-minute continuous striking tests in each round and six punches in total. The results showed that the training equipment performs well. The transmission program can display punches, scores, and punch counts in real time, and the statistics can be recorded to track the athletes’ development. The participants suggested adding sensor installation points and faster light-signal timings.
Nanthawan Am Eam
Open Access
Article
Conference Proceedings
Dora: an AR-HUD interactive system that combines gesture recognition and eye-tracking
AR-HUD is a popular driving assistance system that displays image information on the front windshield to ensure driving safety and enhance the driving experience. AR-HUD will handle more complex non-driving tasks in future highly automated driving (HAD) scenarios and provide more diverse information to drivers and passengers. To meet these changing needs with new technologies, this paper proposes an interactive gesture- and gaze-controlled AR-HUD system. Following the user experience design method, the interactive system was designed through specific research on driving behaviors and users’ demands in HAD scenarios. A prototype for evaluation was built based on OpenCV, and a virtual driving scene was constructed. Through the evaluation of a user study (n = 20), the usability of the system was verified.
Xingguo Zhang, Zinan Chen, Xinyu Zhu, Zhenyu Gu
Open Access
Article
Conference Proceedings
Karaoke Game with Body Movement Tracking for Investigating Singing-motor Interactions: System Proposal
This paper presents and discusses the design, implementation, and initial results of a game system based on a singing-motor paradigm, combining karaoke singing with a human-computer interface driven by body movement recognition. The purpose of this system is to collect experimental data on how karaoke singing and body movement interact and affect each other, in order to analyze the data from the perspectives of human-machine interface, immersion, gameplay, and entertainment. Finally, the results and outputs of the development are presented, and some follow-up proposals are discussed.
Ryota Horie, Luiz Fernando Pedroso
Open Access
Article
Conference Proceedings
VR-App for a Virtual Perception of Memory Impairment in Alzheimer’s Patients
Augmented, Virtual and Mixed Reality technology (AR/VR/MR), also known as xR technology, is one of the key technologies of digital transformation. Thanks to today’s powerful immersive hardware systems, complex technical and natural systems can be digitally represented in a realistic virtual environment. This enables users to immerse themselves completely in the virtual environment and to observe and interact with the systems and objects contained therein without major restrictions, or to augment real products and systems with digital data at runtime. This creates new opportunities to present the behaviour and functionalities of complex systems in a tangible and understandable way. Therefore, xR technology can revolutionize learning and training methods, especially in the qualification of specialists and experts. This paper introduces the international project “International Cooperation on VR/AR Projects” (IC xR-P). The goal of IC xR-P is the implementation of practice-oriented xR training applications in the areas of medical training, rescue, and knowledge transfer in schools and universities, together with their testing and evaluation with selected experts. IC xR-P is an international cooperation between the University of Applied Sciences Karlsruhe in Germany, the University of Applied Sciences Turku in Finland, and the Higher Institute of Computer Science and Multimedia of Sfax in Tunisia. Among the learning projects in IC xR-P, this paper focuses on the implementation of VR training apps for medical training, centred on the perception of memory impairment in Alzheimer’s patients. Patients with early Alzheimer’s disease may suffer from spatial and temporal orientation disorders. The objective is to use immersion in a virtual environment to give users a multisensory experience in which they can feel and interact naturally and intuitively in real time via sensory interfaces.
VR offers different levels of interaction, from a minimum level at which the subject remains passive, looking at the environment, to more interactive levels at which the subject is active, controlling their first-person movement in the virtual environment via various immersive interaction tools. Within this project, a VR training app will be designed and implemented that fulfils the following functions through different virtual games facilitating the patient’s communication with the virtual environment. The application aims to create immersion and a feeling of presence in patients. We also propose a Family/Entourage Show service, a memory-stimulation exercise that integrates family photos. In addition, we propose a VR recreation of the patients’ usual living environment (home, hallway, bedroom, and so on), highlighting specific objects and places in the house to facilitate orientation and exploration of the environment. Finally, we propose a musical training service: a question-and-answer game that aims to stimulate the patient’s memory. From here, players can choose which exercise they want to play or focus on. We propose an orientation exercise, a memory exercise with a card game, a recognition game, and a leisure-activities exercise. These last exercises also stimulate memory through singing along to songs, guessing animals, and making a tasty hamburger by following the right steps. For most of the games, the player’s performance is evaluated at the end, displayed either on the television screen or on a pop-up results screen.
Anis Jedidi, Faiez Gargouri, Fahmi Bellalouna, Mika Luimula
Open Access
Article
Conference Proceedings
Effect of Visibility of Auditory Stimulus Location on Ventriloquism Effect using AR-Head-Mounted Display
Virtual reality (VR) and augmented reality (AR) games using head-mounted displays (HMDs) are becoming increasingly popular. These games can present wider visual stimuli than TVs or handheld consoles, and auditory stimuli are typically presented at the same location as visual stimuli. We propose instead aligning visual stimuli to auditory stimuli, rather than auditory stimuli to visual stimuli. When presenting visual stimuli at the location of auditory stimuli, it is necessary to clarify how far the locations of the sound source and the visual stimulus can be shifted apart. Thus, we examined varying degrees of spatial disparity between auditory and visual stimuli to determine whether they are still perceived as originating from the same location. The ventriloquism effect is a known cross-modal interaction between the perceived locations of auditory and visual stimuli. Many researchers have investigated the ventriloquism effect; however, there is no research on how the visibility of the loudspeaker playing a sound affects it. In this study, we aim to clarify this effect. For this purpose, we conducted two experiments to determine whether auditory and visual stimuli are perceived as originating from the same location under varying degrees of spatial disparity, and measured the corresponding angles. One was an AR-condition experiment in which measurements were made with the loudspeaker visible, whereas the other was a VR-condition experiment in which the loudspeaker was not visible. The experimental results showed that the angular discrimination threshold was larger when the loudspeaker was visible (AR condition) than when it was not (VR condition). These results show that the ventriloquism effect is stronger when the loudspeaker is visible.
Kaoru Kawai, Kenji Muto
Open Access
Article
Conference Proceedings