Augmented, Virtual and Mixed Reality Simulation

Editors: Tareq Ahram, Waldemar Karwowski

Topics: Simulation and Modelling

Publication Date: 2023

ISBN: 978-1-958651-94-0

DOI: 10.54941/ahfe1004434

Articles

Best practices in using Virtual Reality for Design Review

Immersive three-dimensional (3D) model review is a key use case for Virtual Reality (VR) in engineering endeavours. The stereoscopic image provided by VR headsets lets users perceive the 3D model naturally, bypassing the mental effort needed to interpret 3D on a regular display, so viewers can easily grasp complex geometry. This technology is particularly beneficial for team members who are not design experts, such as construction staff, the Health, Safety, and Environment (HSE) team, and Subject Matter Experts (SMEs). VR allows these users to actively engage, understand, and contribute valuable insights to the design team in real time. Now that remote working and online meetings are standard practice, VR meetings are the natural next step for collaborative design reviews. VR headsets with internet connectivity empower teams to collectively scrutinize 3D models and LIDAR (Light Detection And Ranging) point cloud scans. This user-centric, immersive review approach: engages a broader set of stakeholders in the design process; swiftly highlights design and layout flaws; and facilitates a safer construction sequence, especially when Building Information Modelling (BIM) is used. VR in engineering is therefore a triple win, with the potential to reduce travel costs and CO2 emissions (significant, since the construction sector is responsible for around 39% of global carbon emissions), engage specialized skillsets that may be scarce or geographically dispersed, and identify design issues at earlier project stages. The latter yields an outsize cost reduction, since it is well established that project cost escalation follows a significantly non-linear upward trajectory as a project progresses (National Aeronautics and Space Administration (NASA), 2004).
Since the construction industry amasses a staggering $2,500 billion in rework costs annually (representing 13% of the industry's total budget), early detection of issues is critical to the success of any project. This paper delves into the workflows of VR design review, exploring how they have been successfully applied in large-scale capital projects, encompassing: VR hardware and software solutions (current at the time of writing); details of a staged roll-out model that reduces barriers to adoption; and examples from real-world case studies.

Martin Robb, Rune Vandli
Open Access
Article
Conference Proceedings

The Augmented Welder Profile: Augmenting Craftmanship with Digital and Collaborative Tools

More and more applications of Augmented Reality (AR) in manufacturing industries are introduced every day, and while recent research has shown that one of the more popular applications, high-volume assembly instructions, might not offer the best setting for this technology, many other applications do. For assembly, remote guidance or training, rare assemblies, and low takt time and high mix production still show promise. This article introduces the role of the Augmented Welder, a role utilizing AR technology for the programming of a welding robot. The operator support system is a custom application programmed in Unity and visualized through a HoloLens 2 headset, connected to an ABB robot through RobotStudio; the robot is equipped with a welding gun dummy. Through the HoloLens 2, the operator can set safety boundaries, introduce work pieces, place targets in 3D space, simulate the robot path, send the program to the robot, and activate the physical robot, among other things. We performed a modified pluralistic walkthrough to evaluate the operator support system, both with respect to our application and to gain insights into the general use of AR in such applications. Results showed that while the subjects were generally positive towards the support system, several issues of varying severity were identified. The primary issues arose around navigation and interaction with 2D menus and 3D objects in a 3D Mixed Reality (MR) space. The absence of physics confused the subjects, as they could not interact with the virtual objects as they would with physical objects. Furthermore, interaction with 2D menus in a 3D space was both reported and observed as being very difficult, as the 2D representations likely led to problems with depth perception.
The general results of the debriefing indicated that using AR for robot programming was challenging, although some of this can be attributed to the fact that this was the participants' first use of such a system. The users indicated that the menus were appropriate and that the interaction was intuitive, while navigation within the system was not experienced as natural, which confirms the above-mentioned issues with menus disappearing from the line of sight and absent feedback around generated target points.
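The workflow described above (place targets in 3D space, simulate the robot path, then send the program to the robot) can be illustrated with a rough sketch. The real system is a Unity application on a HoloLens 2 driving an ABB robot via RobotStudio; the `WeldTarget` type, `path_length`, `simulate_path`, and all numeric values below are hypothetical Python approximations of the path-preview step, not the authors' code.

```python
from dataclasses import dataclass
import math

@dataclass
class WeldTarget:
    """A weld target placed by the operator, in robot base coordinates (metres)."""
    x: float
    y: float
    z: float

def path_length(targets):
    """Total Euclidean length of the polyline through the ordered targets."""
    total = 0.0
    for a, b in zip(targets, targets[1:]):
        total += math.dist((a.x, a.y, a.z), (b.x, b.y, b.z))
    return total

def simulate_path(targets, step=0.01):
    """Linearly interpolate points along each segment for a path preview."""
    points = []
    for a, b in zip(targets, targets[1:]):
        seg = math.dist((a.x, a.y, a.z), (b.x, b.y, b.z))
        n = max(1, int(seg / step))
        for i in range(n + 1):
            t = i / n
            points.append((a.x + t * (b.x - a.x),
                           a.y + t * (b.y - a.y),
                           a.z + t * (b.z - a.z)))
    return points

targets = [WeldTarget(0, 0, 0), WeldTarget(0.1, 0, 0), WeldTarget(0.1, 0.05, 0)]
print(round(path_length(targets), 3))  # 0.15
```

In the actual system the interpolated preview would be rendered as a hologram before the program is transmitted to the robot controller.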

Peter Thorvald, Magnus Holm, Mattias Strand, David Romero
Open Access
Article
Conference Proceedings

The Efficiency and User Experience of AR Walking Navigation Tools for Older Adults

Navigating in unfamiliar environments is a common problem for people, and the use of navigation tools on smartphones can improve navigation efficiency and help people navigate better. Many navigation systems today incorporate augmented reality (AR) and mixed reality (MR) technologies to assist navigation. However, older adults still face difficulties in using these emerging technologies. In particular, in complex environments, they are prone to taking wrong actions that result in deviation from the intended path. Previous studies have found that older adults have trouble matching virtual indicators with real environments when using AR-based walking navigation tools, especially when faced with multiple similar intersections. Different types of AR-based walking navigation systems use different virtual prompts, and their effectiveness may vary depending on the environment. To further explore the impact of different AR-based walking navigation systems and environmental complexity on older adults' navigation, this study recruited 36 older adults to participate in an experiment. They used three different AR-based walking navigation systems (landmark-based, route-based, and map-based) in two different levels of environmental complexity (simple and complex) to navigate to a designated destination in a virtual environment. First, the study found that participants made fewer navigation errors in the simple environment than in the complex environment. However, the cognitive load was higher in the simple environment. Further analysis showed that environmental complexity mainly affects cognitive load by influencing the degree of frustration and effort that people put into the task. The simpler the environment, the more effort participants felt they needed to exert to complete the task, and the more frustrated they became after making mistakes. 
It was also found that the number of intersections was higher in the simple environment, which may have contributed to the higher cognitive load. Secondly, the study found that navigation performance (task completion time and number of errors) and subjective feedback (system usability and cognitive load) were best for the route-based AR walking navigation system among the three systems, followed by the landmark-based system, with the map-based system the least effective. This is because older adults' spatial abilities mainly draw on landmark knowledge and route knowledge, not configurational knowledge. Post-experiment interviews revealed that the route-based AR walking navigation system was perceived as more similar to real-life navigation, making it easier to use. The landmark-based system, however, requires participants to pay attention to the corresponding landmarks and their relationships with arrow indicators and accompanying text to confirm whether they have made the correct decision; this may lead to navigation errors when participants rely on a single source of information. Finally, the study found that participants with higher spatial memory completed the task in less time and made fewer navigation errors. The results indicate that the greater the difference between AR navigation assistance information and real spatial information, the worse the navigation performance and experience of older users across environments of different complexity. This study provides a reference for improving the age-adaptability of navigation assistance tools and optimizing information prompting methods in complex environments.

Mingjun Liu, Liu Tang, Jia Zhou
Open Access
Article
Conference Proceedings

Toward a New Definition of Augmented Reality

In 1997 Ronald T. Azuma introduced a definition for augmented reality. The definition can be considered slightly outdated because of developments in augmented reality and ubiquitous computing. Extended reality environments do not only allow interactive virtual objects superimposed on reality and aligned with reality, but also static, dynamic, and autonomous virtual content that is not under the control of the user of the environment. One aim of AR research is to superimpose (multisensorial) virtual objects on reality that cannot necessarily be distinguished from real objects that are perceived and experienced by the inhabitants of the environment. In this paper, we take it a step further. Especially if we are no longer able to distinguish between virtual and real objects, shouldn't we look for a definition of AR that is more based on experiencing (not necessarily technology-enhanced) reality than on technology? We do this by focusing on multisensorial experiences that augment our world, rather than on the technology, present or not, that enables these experiences and distinguishes our experiences from those of others. That such a viewpoint has not taken shape before is mainly due to the vision-biased view of what AR research should entail.

Anton Nijholt
Open Access
Article
Conference Proceedings

Scenario innovation of virtual reality in medical education: Possibility, Advantages and Barriers

The development of digital technology is profoundly transforming the practice of medical education, and virtual simulation is becoming a cornerstone of clinical education and training. With increasing budget and standardized-teaching pressures on universities and related medical institutions, virtual reality (VR) is playing an increasingly important role in medical simulation teaching. VR can provide cost-effective, repeatable, and standardized clinical training for learners and educators as needed. The future of VR lies in its continuous integration with the curriculum and in technological developments that allow the sharing of simulated clinical experience. It can deliver large-scale medical education without time and space limitations and change the way of future clinical education. Especially in the context of public health crises, virtual medical training systems can greatly alleviate the shortage of professionals in medical institutions, protect medical personnel, and produce a large number of well-trained medical staff in a short time. As a powerful medical education tool with high potential, virtual reality has attracted considerable attention from top international medical colleges and institutions.
This study analyzes the scenario innovation of virtual reality technology in medical education through a combination of theory and case studies, summarizes the possibilities, advantages, and barriers of technology use, and provides a reference for the development of related virtual medical education systems. In terms of the possibility of scenario innovation, five points can be considered: first, virtual reality technology can showcase the functions of medical devices and drug action mechanisms in medical procurement and marketing; second, for doctor-patient communication, education can be provided to patients and their families, informing and explaining the patient's condition, the surgical plan, and the role of a "trial operation"; third, for rehabilitation training, it can help patients receive combined psychological and physiological treatment and rehabilitation guidance; fourth, for medical teaching, nursing teaching and clinical training can be conducted; fifth, for medical science popularization, health knowledge dissemination, promotion of healthy lifestyles, emergency rescue training, and disaster response education can be carried out. The advantages of virtual reality technology in medical education innovation mainly include three aspects: first, for learners, virtual simulation systems equipped with virtual reality technology make clinical learning easier through immersive experience; second, for educators, it can greatly free up teachers' time and space; third, for universities and medical institutions, it allows simulated teaching to be provided with fewer resources and at lower cost.
The disadvantages of virtual reality technology in medical education mainly include three aspects: first, virtual simulation systems equipped with virtual reality technology are not suitable for all medical education scenarios; second, due to technological limitations, the implementation of some teaching activities still requires human support; third, the system itself, which provides a simulated learning approach, still cannot replace expert educators.

Yuqi Liu, Yunlu Liu
Open Access
Article
Conference Proceedings

An augmented reality collaborative experiment: evaluation of effectiveness for train remote maintenance tasks

Recently, applications of augmented reality (AR) technology in train remote maintenance have attracted much attention from researchers. However, train remote maintenance is a collaborative team task, and the transformation of collaborative methods brought about by AR has a significant impact on users' performance in the task. In this work, an experiment compared the performance differences between AR remote cooperation and traditional social-software remote cooperation in train remote maintenance tasks. Social presence, collaborative usability, and system usability under the two collaboration methods were analyzed. The results indicate that team members are more inclined to use AR devices for remote collaboration in train remote maintenance tasks, and that the social presence, collaborative usability, and system usability of the AR devices are better than those of traditional remote social methods. These results show that AR technology has significant application advantages in remote train maintenance and provide a reference for the design of AR remote maintenance systems.

Ruizhen Li, Jinyi Zhi, Qianhui Shen, Zerui Xiang
Open Access
Article
Conference Proceedings

Virtual Reality for Adult Training

In many industries, including IT, business, medicine, and engineering, technology refers to a collection of methods and information used to develop, produce, and improve services and products. Technology has evolved quickly in recent years, and innovations have completely changed how we work, live, and interact with the world. For instance, the Internet has made it possible to access a vast quantity of information and communicate quickly through applications such as Facebook and Instagram, and being able to access these swiftly on our phones keeps us constantly connected to social networks and various other resources. Digital technologies thus offer a unique opportunity to improve educational standards. On the one hand, teachers and trainers become equipped with cutting-edge tools that help them engage their classroom with contextualized information in a way that is not only personalized and differentiated according to each learner's distinctive progress and needs but also time-efficient. On the other hand, students benefit from a customized learning experience that is sensitive to their performance, sometimes through an immersive experience, so that they can go on to use their education to contribute to the society they will live in. As such, the advantages of integrating digital technologies with pedagogy to develop an elevated learning environment have become increasingly apparent. Virtual reality is one of the most recent technological innovations being used as a tool for educators, particularly in nations with more developed economies. People are captivated by virtual reality because of its intriguing images, the distinctive experience it provides, and the way it consistently captures their attention.
Immersion in a virtual space becomes an experience through which users can unconsciously integrate the knowledge, images, and content they are exposed to, which has a real and positive effect on their mental health. One field where direct, real-world practice could cause cognitive and behavioral harm is the training of teachers for children with special needs. As it offers a realistic and interactive experience in a regulated and safe setting, virtual reality can be a highly beneficial tool for the training and education of teachers working in such environments. In such an immersive environment, teachers can learn how to react appropriately and deal with challenging situations, because a virtual reality application can simulate scenarios that imitate the distinctive behaviors of children. In these scenarios, each child's particular needs and preferences may be attended to, which might be challenging to reproduce in the real world. The current work seeks to create experimental game scenarios tailored for special-needs classroom training, assess their usefulness, and examine how they affect the growth of children's social and communicative abilities. The game scenarios are based on real teaching practice in different contexts working with special-needs children and integrate practical, innovative instructional methodologies. Functionalities that help adults learn and easily recognize real-life classroom scenarios, together with instruments to manage difficult emotional and behavioral manifestations of children aged 3-6, make the approach a solution to consider for the future training of specialized personnel and beyond.

Iulia Stefan, Lia Pop, Teodora Praja, Nicolae Costea
Open Access
Article
Conference Proceedings

Cockpit Task Management and Task Prioritization in a VR Flight Environment: A Pilot Study on the Stability-Flexibility Dilemma

Managing complex aircraft control and military tasks simultaneously in flight missions places substantial cognitive demands on pilots. To handle this challenge within the constraints of limited cognitive resources, pilots often employ cockpit task management strategies, such as task prioritization. Cognitive control plays a pivotal role in this process, as it entails directing attention toward relevant tasks while filtering out distractions without missing safety-relevant information. The present paper relates these requirements to the stability-flexibility dilemma of cognitive control. Different performance-related advantages and disadvantages are associated with the stability-flexibility dilemma in multitasking scenarios. On the one hand, cognitive stability is related to improved goal shielding, which in turn is associated with aggravated task switches. On the other hand, cognitive flexibility is linked to facilitated task switching but is also correlated with an increased likelihood of distraction by irrelevant cues. While the stability-flexibility dilemma has already been investigated via task prioritization in a low-fidelity flight simulator, it remains to be explored in a more realistic flight environment. The presented study simulates a reconnaissance mission with eleven participants in a virtual-reality flight environment. Environmental factors such as weather conditions (non-windy or windy) and hostility levels (low or high) are systematically varied to manipulate task prioritization behavior. The effects of this manipulation on flight performance, workload, and eye-tracking metrics are statistically analyzed with a Bayesian repeated measures ANOVA. Results provide insight into how weather and hostility influence the cognitive control mode via task prioritization in near-realistic flight missions. Implications for the design of future studies are discussed.

Sophie-marie Stasch, Wolfgang Mack
Open Access
Article
Conference Proceedings

Physical human factor for the development of universal XR platform to build a metaverse supporting digital inclusive leisure & culture

Leisure refers to non-obligatory activities driven by intrinsic motivation, allowing individuals to freely allocate their time based on personal interests, including sports, culture, and artistic engagement. Participation in leisure activities is an important factor in an individual's well-being, including the management of physical and mental health, the ability to cope with stress, and the improvement of quality of life. According to a 2021 survey (10,000 people over 15 years old) by the Ministry of Culture, Sports and Tourism of South Korea, approximately 80% of respondents continue to engage in sports, hobbies, and entertainment activities, and the trend is increasing every year. Koreans' interest in leisure and cultural activities is steadily growing, and the government of South Korea is striving to promote the health and well-being of its citizens by implementing policies that guarantee fundamental rights to leisure and culture. The metaverse has recently emerged as a future industry, and various metaverse-based digital contents are emerging to provide leisure services such as exercising, playing games, and watching performances. South Korea is also introducing the metaverse into public services, including the cultural sector, to provide universal public services to the people. Recently, there has been growing interest in XR (eXtended Reality) technology, which encompasses immersive technologies such as VR (Virtual Reality), AR (Augmented Reality), and MR (Mixed Reality). XR can increase the sense of immersion and realism, so users can have an experience very similar to reality through the fusion of XR technology and the metaverse space. However, current metaverse users in South Korea are overly concentrated in their teens and 20s, and there is limited content that spans different generations.
This is a major obstacle to the universalization of the metaverse for the public and to the development of related industries. Moreover, since Korea became an aging society, the number of elderly people has been steadily increasing every year. This trend can lead to societal problems such as digital exclusion and digital illiteracy. As aging progresses, human physical and cognitive functions deteriorate rapidly, which can significantly limit adaptation to and utilization of rapidly evolving technologies. To build a metaverse space that is universally accessible to everyone, it is necessary to take into account the differences in physical and cognitive functions that arise with aging. Based on this requirement, we are currently developing an XR technology-based metaverse platform and content that allow juniors and seniors to enjoy leisure activities together. The research consists of the development of a metaverse platform based on XR technology, three types of sports (bowling, golf, walking), and three types of games (puzzle, escape, adventure). First, utilizing devices such as head-mounted displays (HMDs), log data, and Unity plugin-based acquisition technology, we acquire user data related to physical and cognitive abilities while users perform the six types of content. Subsequently, we will apply data mining techniques to extract significant differences in physical and cognitive abilities between the junior and senior generations. Finally, we aim to define relevant human factor parameters by analyzing intergenerational differences in these features. The extracted human factor parameters are used to correct and augment intergenerational differences in physical and cognitive abilities. In this study, we provide a detailed description of the concept and process of our ongoing research and introduce its future directions.
In particular, we focus on the human factor extraction process related to six types of content.
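The data-mining step described above, extracting intergenerational differences in physical and cognitive abilities from logged session data, might look roughly like the following Python sketch. The session format, feature names, group labels, and the simple mean-ratio "correction factor" are all assumptions made here for illustration, not the project's actual method or data.

```python
from statistics import mean

# Hypothetical per-session logs: (age_group, reaction_time_s, grip_strength_kg)
sessions = [
    ("junior", 0.42, 38.0),
    ("junior", 0.39, 41.5),
    ("senior", 0.61, 27.0),
    ("senior", 0.66, 25.5),
]

def group_means(rows, group):
    """Mean reaction time and grip strength for one age group."""
    rts = [rt for g, rt, _ in rows if g == group]
    grips = [gr for g, _, gr in rows if g == group]
    return mean(rts), mean(grips)

def correction_factors(rows):
    """Senior-to-junior ratios; such factors could scale content difficulty
    to compensate for intergenerational differences."""
    jr_rt, jr_grip = group_means(rows, "junior")
    sr_rt, sr_grip = group_means(rows, "senior")
    return {"reaction_time": sr_rt / jr_rt, "grip_strength": sr_grip / jr_grip}

factors = correction_factors(sessions)
print(factors)
```

In the study itself, such features would be extracted with data mining techniques over Unity log data rather than simple group means.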

Yun-hwan Lee, Jongbae Kim, Yeong-hun Kwon, Hyo-jun Lee, Yohan Ko, Yebon Kim, Soyeong Park, Daehoon Son, Jongsung Kim
Open Access
Article
Conference Proceedings

"Fall PreNoSys": Augmented Reality-based Tripping Hazard Notification System and Initial User Feedback Study

Falls and falling remain a significant problem for people as they age. This work proposes a mobile augmented reality (AR) based system called "Fall Prevention via Notification System" (Fall PreNoSys) to detect likely tripping hazards around the wearer and provide notifications that help them avoid safety problems, along with two phases of user feedback to improve the system design. Blending mobile technologies and human-computer interaction requires significant work on human interface components to become an effective, calm, and useful tool in daily life. A series of studies involving human participants was conducted to gather feedback on the Fall PreNoSys interface design, its utility, and its underlying concepts. Current AR research in gerontechnology and in-home assessments represents a nascent field, and Fall PreNoSys offers a novel approach to fall prevention. Fall PreNoSys uses a Microsoft HoloLens v2 to gather real-time 3D models of the space around the user. These models are segmented to identify potential tripping hazards, and the HoloLens scene understanding library is employed to classify objects using an AI classifier. The combination of the Fall PreNoSys algorithm for object segmentation and scene understanding results in a list of objects that can trigger notifications as the user moves around a room. To evaluate notification styles and to get feedback from possible users of the system, two pilot user studies were performed. These studies provided early-stage feedback and initial impressions, guided the continued design of notifications, tested the object detection algorithm's robustness, and evaluated user reactions to the static and dynamic notification types developed for Fall PreNoSys. Notifications took the form of 3D visual objects projected onto the HoloLens' AR screen within the wearer's field of view. These notifications were shaped as arrows or OSHA safety-style triangles and were placed on or near identified potential hazards.
Based on user feedback from the first phase of the trial, notifications became interactive: changing color, bouncing in place, and reacting to the participant's relative location to orient their attention to hazards. The study used walking tracks with likely in-home tripping hazards, a combination of machine learning-based detection algorithms, and multiple styles of visual hazard notifications. Study data were collected through two phases of interviews, user feedback on their experiences with the technology, and measurements using the System Usability Scale, to help guide further development of Fall PreNoSys and similar systems. Future work on Fall PreNoSys includes a series of studies with older adults once the latest user feedback from this study is incorporated into the interface design. Additional work includes eye-gaze notification acknowledgements, user path estimation, and out-of-view edge notifications to help people interact with notifications, adapt to the user's walking path, and handle the AR screen's field-of-view limitations.
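The pipeline described above (3D mesh, segmented objects, classified hazards, then notifications) reduces at its core to a filtering step: from the classified objects around the wearer, keep those that are low enough to trip on and close enough to warrant a notification. The following Python sketch illustrates only that idea; the object list, thresholds, and function names are invented here and are not the actual Fall PreNoSys algorithm.

```python
import math

# Hypothetical classified objects: (label, x, y, height_m) in room coordinates
objects = [
    ("rug_edge", 1.0, 0.5, 0.03),
    ("power_cord", 1.5, 1.0, 0.02),
    ("table", 1.2, 0.4, 0.75),   # too tall to count as a tripping hazard
    ("shoe", 4.0, 3.0, 0.10),    # too far away to notify about yet
]

def tripping_hazards(objs, user_xy, max_height=0.25, radius=2.0):
    """Return labels of objects low enough to trip on and within range."""
    ux, uy = user_xy
    hazards = []
    for label, x, y, h in objs:
        if h <= max_height and math.hypot(x - ux, y - uy) <= radius:
            hazards.append(label)
    return hazards

print(tripping_hazards(objects, user_xy=(0.0, 0.0)))
```

In the real system this filtering runs continuously as the HoloLens updates its spatial map, and the surviving objects anchor the arrow and triangle notifications.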

Aaron Crandall, Daniel Olivares, Kole Davis, Kevin Dang, Alan Poblette
Open Access
Article
Conference Proceedings

Does Pinocchio get Cybersickness? The Mitigating Effect of a Virtual Nose on Cybersickness

Virtual reality (VR) has many applications. However, not all users can enjoy them equally due to cybersickness, a form of visually induced motion sickness in VR. To increase the accessibility of VR, countermeasures against cybersickness are needed. A good countermeasure should have a reasonable effect size, especially since susceptibility varies between individuals, while reducing immersion as little as possible. One idea that seems to meet these requirements, the virtual nose, has so far been tested with small samples (from which only large effect sizes can be derived) and allows universal applicability. The mode of action of the virtual nose derives from the rest frame hypothesis: certain objects that are perceived as stationary serve as a rest frame, facilitating the self-calibration of the body. In addition, the rest frame may act not only as a postural corrector, which should be observable as a reduction in postural sway, but also as a fixation cross, which should be observable as longer and more frequent fixations. This study tested, with a larger sample than previous studies, whether a virtual nose (treatment group) significantly reduced cybersickness compared to a group without a virtual nose (control group) and whether physiological process indicators, namely head and eye tracking, differed between the groups. Participants were matched into the treatment and control groups according to their gender and previous VR experience, as these aspects are thought to influence cybersickness susceptibility. Experience was divided into three groups: none, less than 30 min of VR experience, and more than 30 min. A total of 124 participants were recruited, of which 110 were eligible for the analyses (multivariate repeated measures analysis and Holm-corrected univariate post-hoc tests). During the VR exposure, the participants' task was to explore a virtual city and collect checkpoints.
The questionnaires used were the Virtual Reality Sickness Questionnaire (VRSQ) for a pre-post comparison and the Misery Scale (MISC), applied every 2 min during the VR exposure. The continuously sampled process indicators were cut into these fixed 2-minute intervals for the analyses. The results show no mitigating effect of the treatment. Nevertheless, reported cybersickness was significantly lower in the more experienced group and significantly higher in the inexperienced group compared to the low-experience group. The head- and eye-tracking process indicators mostly confirm the mitigating effect of previous VR experience on cybersickness susceptibility but do not differ between the treatment and control groups. It can be argued that the artificiality of a virtual nose added to a scene nullifies the mitigating effect by reducing immersion; it may also be that the stimulus needs to be more salient to be effective. In summary, prior experience with VR was the mitigating factor. As the process indicators and the controller input differ, one explanation could be a behavioral adaptation with increasing VR experience. Alternative explanations, such as a gender- or experience-specific pre-selection effect for VR studies, are discussed.
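Cutting the continuously sampled head- and eye-tracking signals into the fixed 2-minute MISC intervals, as described above, amounts to binning timestamps into 120-second windows and aggregating per window. The sketch below is a minimal illustration under assumed conditions; the sampling format and the mean aggregation are assumptions, not the study's actual processing pipeline.

```python
from collections import defaultdict
from statistics import mean

def bin_into_intervals(samples, interval_s=120.0):
    """Group (timestamp_s, value) samples into fixed intervals and average each.

    Returns {interval_index: mean_value}; interval 0 covers [0, 120) s, etc.
    """
    buckets = defaultdict(list)
    for t, v in samples:
        buckets[int(t // interval_s)].append(v)
    return {k: mean(vs) for k, vs in sorted(buckets.items())}

# Hypothetical signal magnitudes sampled at irregular timestamps (seconds)
samples = [(10.0, 1.0), (70.0, 3.0), (130.0, 5.0), (200.0, 7.0)]
print(bin_into_intervals(samples))  # {0: 2.0, 1: 6.0}
```

Each resulting interval value can then be paired with the MISC rating collected at the end of that same 2-minute window for the repeated measures analysis.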

Judith Josupeit
Open Access
Article
Conference Proceedings

Virtual reality platform applied to Ergonomics teaching

This study examines user perceptions and experiences with a digital platform designed to provide insights into ergonomic aspects within work environments. A perception survey was conducted among 43 participants, encompassing diverse demographics. The findings indicate that most participants were from Chile (97.7%), with 54% falling within the 18 to 24 age group and 30% between 25 and 34. Educational backgrounds were diverse, with 79% pursuing undergraduate studies and 19% engaged in postgraduate programs. Engineering accounted for 82% of respondents' fields of study. The survey highlighted that 88% had previous exposure to virtual platforms, while 46.5% lacked formal ergonomics training. User-friendliness was reported by 74%, with 93% encountering no technical issues. Notable challenges included camera movement slowness and limited interaction. Perceptions of ergonomic aspects revealed that 75% found visual information clear, and 79% rated audio clarity positively. Moreover, 86% identified risky postures, 72% observed repetitive tasks, and 67% recognized improper manual load handling. Regarding impact, 79% felt that the platform enhanced their understanding of ergonomic issues. Positive aspects encompassed ease of use and clear information, while areas for improvement included navigation accuracy, camera sensitivity adjustments, audio quality enhancement, and improved graphical representations. Despite limitations such as potential self-reporting bias and limited sample size, this study provides valuable insights into user experiences and perceptions, contributing to discussions on digital platform usability, ergonomics, and overall impact.

Javier Freire, Felipe Meyer, Fabiola Maureira, Jorge Espinoza, Kiralina Brito, Carla Estrada
Open Access
Article
Conference Proceedings

Business automation opportunities to enhance collaboration in automated and virtual environments

Allocating automation can be challenging. There are many instances where humans perform better than automation. However, businesses have many opportunities to automate and support humans in achieving more. There are many reasons why enterprises automate. Researchers have found that automation makes sense for tasks that are impossible, hazardous, difficult, or unpleasant, and for extending human capabilities. Sometimes, people automate just because it is technically possible. Automation only makes sense when it supports the human. Different levels of automation, transitions, or reskilling of the workforce with environments that facilitate their work can benefit both the business and the employees' well-being. As technology evolves, it is essential to understand the areas where companies will benefit from some of these technologies. Retaining humans in the loop is a crucial factor in enhancing automated systems. One of the existing challenges is the required involvement in the workflow or process. Sometimes, groups of humans need to collaborate when unexpected events require adjustment. Dealing with abnormal situations requires innovative ways to bring people into the automation flow through virtual environments, digital twins, and other spaces supporting instantaneous remote collaboration. This paper provides examples of business opportunities to automate and make processes more efficient, effective, and satisfactory. It includes business case examples for digital twins, digital humans, generative AI, and the metaverse. It also mentions future challenges with automation and possibilities for re-evaluating its function allocation. In particular, it refers to the importance of user-centred research in supporting business automation delivery of future systems.

M. Natalia Russi-Vigoya, Jennifer Hatfield
Open Access
Article
Conference Proceedings

The Implementation Challenges of Immersive Technologies in Transportation Simulation

Innovation, effective management of change, and the integration of human factors elements into flight operations control are distinguishing features of the aviation sector. Immersive technologies (Augmented, Virtual, and Mixed Reality – Digital Twins technology) can be used in aviation training programs to provide an immersive and interactive learning experience for all aviation professionals. Adopting an aviation immersive technology environment in transportation simulation allows the implementation of new training approaches in a safe and controlled environment, without the risk of actual flight or equipment damage. Digital twins are used to create realistic flight simulations, allowing aviation ecosystem actors to practice their skills in various scenarios and conditions. This helps to improve safety and prepare aviation experts for unexpected events during actual flight. Another use for Augmented, Virtual, and Mixed Reality Simulation in aviation training programs is maintenance training. Moreover, Digital Twins can simulate maintenance procedures on aircraft and aviation systems, allowing SMEs to enhance their knowledge and practice their skills in a safe, cost-effective, and controlled environment. Purdue University School of Aviation and Transportation Technology (SATT) Ecosystems' Artificial Intelligence (AI) research roadmap aims to introduce digital twins in aviation training programs to simulate flight-airport operations and air traffic scenarios. Moreover, Purdue's Artificial Intelligence approach for Augmented, Virtual, and Mixed Reality Simulation / Digital Twins focuses on the potential to improve the effectiveness and efficiency of aviation training programs (CBTA globally) by providing a more realistic and immersive learning experience {lean process for training/certification, transition to AI – Advanced Air Mobility (AAM) environment}. Furthermore, this research focuses on implementation challenges and on mitigating residual risk in the 'AI black box.' 
Results were analyzed to evaluate the Artificial Intelligence certification and learning assurance challenges under the Augmented, Virtual, and Mixed Reality Simulation – Digital Twins aspects.

Dimitrios Ziakkas, Abner Flores, Anastasios Plioutsias
Open Access
Article
Conference Proceedings

Development of the model-based planning system for augmented reality in industrial plant maintenance

Augmented reality (AR) is extensively used in modern industrial automation, especially in industrial plant maintenance (IPM). A growing number of research and practical projects explore, develop, and integrate AR applications for a variety of tasks in IPM. With the introduction of AR, tasks such as training maintenance staff, visualizing instructions during maintenance and error correction, and visualizing plant control processes, among many others, become more visual and interactive and are therefore considerably simplified. Significant time efficiency with respect to the commissioning of industrial equipment may also be achieved. Thus, incorporating AR technologies yields comprehensive benefits in solving automation tasks during IPM. At the same time, the expansion of AR in IPM does not match the high potential it has demonstrated. The reasons for this include implementation and adaptation issues (the high risks and cost of a specific AR implementation), technical problems (hardware- and software-related), special developer requirements, etc. Therefore, a model for planning the implementation of AR in IPM and for predicting its benefits in terms of AR efficiency is required. However, the majority of projects and studies in the IPM area focus on the practical side of AR implementation. The benefits of introducing AR (usually in terms of development speedup or process time reduction) tend to be considered on a case-by-case basis. There is seemingly a lack of scientific papers that address the general planning of AR for IPM in automation: there are no models to identify the feasibility of solving a particular task using AR in general or to predict the results of an AR implementation. 
The respective research gap consists primarily of a comprehensive analysis of the factors determining the necessity of implementing AR for a project or process with defined characteristics, their relation to the resulting benefits, and the main emphases to be considered when planning and deploying AR technologies in the IPM area in particular and in automation in general. To fill this research gap, we propose a model-based planning system (MBPS) for AR in the area of industrial plant maintenance. This system should provide a deep scientific analysis of the feasibility and necessity of using AR to solve particular tasks in the automation field. Additionally, the MBPS should enable predictable planning and forecasting of the results of AR integration, such as efficiency, applicability, quality, and other criteria, and thereby support decision-making about AR implementation. This requires a broad study and analysis of criteria for evaluating the results of AR integration and usage in automation in general and in IPM in particular.

Vadym Bilous, Kirill Sarachuk
Open Access
Article
Conference Proceedings

Developing Multimodal Food Augmentation Techniques to Enhance Satiety

In the contemporary food landscape, where easily accessible and appetizing food options prevail, the issue of overconsumption and its contribution to global obesity concerns remains a critical societal and research challenge. While it is well-established that sensory appeal plays a key role in motivating eating behaviour, recent studies have underscored the direct impact of sensory properties on food consumption, mediated by internal signals like hunger and satiety. Among the various sensory factors influencing eating behaviour, two phenomena have garnered significant scientific interest: sensory-specific satiety (SSS) and sensory-specific appetite (SSA). In this research we aim to explore if augmenting food products through visual, olfactory, and haptic feedback can change eating behaviour and affect SSA or SSS. To further expand our understanding of these phenomena, this study employs a novel system utilizing multimodal augmentation of plant and meat-based products consumed in a controlled environment.

Ahmed Farooq, Jussi Rantala, Antti Sand, Mohit Nayak, Natalia Quintero, Jenni Lappi, Nesli Sözer, Roope Raisamo
Open Access
Article
Conference Proceedings

Degradation in Dynamic Color Discrimination with Waveguide-Based Augmented Reality Displays

Objective: The aim of this study was to evaluate the degradation in human color perception that can occur when using augmented reality displays. Background: Stereoscopic augmented reality displays are known to degrade a user’s ability to interpret projected color information. However, a quantitative breakdown of this degradation does not exist for contemporary augmented reality displays that use waveguide optical combiners. Method: Participants performed the Ishihara color test and an augmented reality-focused variant of the Farnsworth-Munsell 100 test of color perception using a set of commercially available augmented reality displays (Microsoft HoloLens, Magic Leap One, and DAQRI Smart Glasses). Results: From our analysis of participant performance, we generated specifications to maximize color discrimination and highlighted common areas of difficulty for each headset. Conclusions: We defined a novel, spatially aware modification to a gold-standard test of color discrimination that accounts for spatial color distortion along the lens of an AR display. The optimal color usage across displays will vary based on the design of the optical combiner, which necessitates a re-usable color test to characterize color degradation on each headset design. Applications: The design guidelines specified in this article will minimize the degradation in color perception when using augmented reality displays, allowing them to be used in domains that require fine color discrimination.

Adrian Flowers, Arthur Wollocko, Caroline Kingsley, Elizabeth Thiry, Michael Jenkins
Open Access
Article
Conference Proceedings

VR fractal healing design based on self-similarity theory

By reviewing research on self-similarity in Chinese and foreign philosophical thought and drawing on the perspective of cognitive psychology, this paper explores the connection between self-similarity and attention restoration and stress reduction in humans. It puts forward the hypothesis that fractal design in a VR environment can help people restore their attention and reduce their stress, and verifies this hypothesis through experiments. The experimental results show a significant healing effect of cohesive fractal design based on self-similarity theory, and a design strategy for self-similar VR fractals is summarised.

Jianmin Wang, Huiyan Chen, Wei Cui, Yuchen Wang, Haijie Kong, Guifeng Zheng
Open Access
Article
Conference Proceedings

Degradation in dynamic visual perception with waveguide-based augmented reality displays

We investigated the degradation in visual perception that can occur when using augmented reality displays to interact with and interpret real-world reading and spatial response tasks. Background: Stereoscopic augmented reality displays can degrade a user’s visual perception. To distinguish the components of this degradation that result from hardware and software differences, an analysis of this visual degradation for contemporary augmented reality displays is necessary. Method: Participants performed real-world (i.e., not projected in augmented reality) eye chart tests of visual acuity and contrast sensitivity to characterize the degradation of static visual perception caused by each headset in the study (Microsoft HoloLens, Magic Leap One, and DAQRI Smart Glasses), and took a measure of useful field of view to characterize any potential degradation in spatial awareness. Results: From our analysis of user performance, we observed that, unlike the headsets previously used for this type of characterization, the majority of contemporary augmented reality displays do not significantly degrade visual perception. However, we did observe slight decreases in visual performance introduced by the Magic Leap One. Conclusions: We defined a methodology that employs real-world measures of visual perception to rapidly characterize degradation of visual perception in augmented reality. Applications: This analysis can inform headset selection and visual stimulus design strategies based on operational requirements and inform future headset development efforts.

Adrian Flowers, Arthur Wollocko, Caroline Kingsley, Elizabeth Thiry, Michael Jenkins
Open Access
Article
Conference Proceedings

A Graded Approach to Simulators: Feature Requirements Mapping to Simulator Types for Nuclear Plant Control Room Research Use Cases

Simulators function as test platforms for validating a broad spectrum of nuclear power plant operations. This spectrum encompasses tasks ranging from updating existing control rooms to fundamentally designing new ones, incorporating innovative operational concepts. The Simulator Feature Framework introduces a generic list of features to ensure that future simulators facilitate research endeavors that cater to both immediate plant modernization needs and the future deployment of advanced reactors (Gideon and Ulrich, 2023). Conducting research via control room simulators requires different simulator types, each varying in fidelity. Integrating the complete set of features outlined in the Simulator Feature Framework into all simulator types could escalate acquisition costs and decrease their commercial appeal for research purposes. A nuanced strategy is required to align simulator types with specific features that adequately underpin the intended research applications. This paper maps five simulator types to the feature categories within the Simulator Feature Framework. By connecting feature categories with simulator types, simulator vendors can incorporate capabilities suitable for distinct simulator tasks without obligatory inclusion of all features. This graded approach harmonizes the cost of simulator acquisition with the anticipated research benefits. Moreover, this alignment equips researchers with a foundational standard for assessing simulators' compatibility with research objectives across varying levels of fidelity. Two use cases are provided to consider simulators for advanced control room development and human reliability analysis data.

Olugbenga Gideon, Ronald Boring
Open Access
Article
Conference Proceedings

Human Error Dynamic Simulation of Work as Performed – Modelling Procedure Deviations with Empirically Derived Failure Mechanisms

HUNTER is an Idaho National Laboratory software tool developed to support dynamic human reliability analysis. The software performs Monte Carlo simulations of a virtual operator performing procedurally prescribed tasks within the context of a dynamic, coupled nuclear power plant model, such that changes in the plant state affect which tasks the operator must perform and the operator’s actions affect the plant state. HUNTER supports a limited suite of scenarios, with models containing procedures and corresponding human performance context parameters for a loss-of-feedwater and a steam generator tube rupture scenario. The procedure models contain a single path of steps to mitigate the faults within the two scenarios. Failures occur when the dynamically calculated human error probability (HEP) exceeds a randomly generated value for a given task, or when the elapsed time to complete a task exceeds the time allowed for that task. In the real world, operators may err by proceeding along the wrong path and, as a consequence, exceed the time allowed to mitigate a fault. To improve realism, HUNTER needs to allow the virtual operator to deviate incorrectly along the wrong procedure path due to diagnostic or comprehension errors, in addition to the existing HEP and time failures. Procedure deviations are errors of commission. These are quite challenging to model, since there are theoretically infinite errors of commission that could be made at any point in the simulation. Empirical data collected from a recently performed study evaluating computer-based procedures and failures to adhere to the prescribed procedure steps were used to derive failure mechanisms that realistically constrain the possible deviations to a manageable set that could be modelled within the HUNTER simulation. 
The process of analyzing the empirical procedure adherence data and developing generalized forms of the empirically observed failure mechanisms is described, along with their implementation within the HUNTER simulation. Future work aims to validate these failure mechanisms outside of the loss-of-feedwater and steam generator tube rupture contexts to understand their generalizability to other scenarios and to more accurately model work as performed within nuclear process control.
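The dual failure criterion described in this abstract – the dynamically calculated HEP exceeding a random draw, or the elapsed time exceeding the allowed time – can be sketched in a few lines. This is an illustrative Monte Carlo sketch under stated assumptions, not HUNTER's actual implementation; all names and numbers are hypothetical.

```python
import random

def task_fails(hep, elapsed, allowed, rng):
    # A task fails if the dynamically calculated human error probability
    # exceeds a randomly generated value, or if the elapsed time to
    # complete the task exceeds the time allowed for it.
    return hep > rng.random() or elapsed > allowed

def monte_carlo_failure_rate(hep, elapsed, allowed, n_trials=100_000, seed=0):
    # Repeat the trial many times to estimate the task failure rate.
    rng = random.Random(seed)
    failures = sum(task_fails(hep, elapsed, allowed, rng) for _ in range(n_trials))
    return failures / n_trials

# When the task finishes within the allowed time, the estimated failure
# rate converges on the HEP itself; exceeding the allowed time makes
# failure certain regardless of the HEP.
rate = monte_carlo_failure_rate(hep=0.05, elapsed=40, allowed=60)
```

Modelling errors of commission would add a third branch to `task_fails`: a probability of leaving the prescribed procedure path, constrained by the empirically derived failure mechanisms.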

Thomas Ulrich, Ronald Boring, Jisuk Kim, Roger Lew
Open Access
Article
Conference Proceedings

Future Image making in the era of Metaverse: Focus on Non-Fungible Tokens and the Future of Art

The meaning of visual imagery encapsulates the essence of the era's aspirations. This meaning can vary across different cultures, and metaphorical expressions can also differ significantly. Throughout history, the evolution of images has experienced diverse transformations. In the present day, these images continue to undergo digitization, evolving their meanings through various markets and novel formats. As an illustration, the convergence of art, photography, and the implementation of smart contracts in the form of Non-Fungible Tokens (NFTs) has gained momentum alongside virtual currencies like Bitcoin, serving as a digital means of value exchange. Personal experiences contribute to elevating the value of images, and the subjective nature of value assessment criteria has spurred considerable discourse on valuation methods and problem-solving approaches. In a reality lacking precise standards, both significant and minor societal side effects arise. Moreover, challenges to sustainability and environmental threats have also emerged. In the realm of design, endeavors such as design thinking, speculative design envisioning future scenarios, and design futuring have been employed as alternative approaches to address these issues. These novel design attempts have garnered attention as methods for embracing uncertainties about the future and the consequent problem-solving efforts. Against this backdrop, this study aims to pose the question of how metaphoric images, particularly NFTs, will evolve in the future. As a means of seeking answers, the research intends to explore the value inherent in images by investigating prior studies on their meanings across the past, present, and future. Additionally, the metaphorical expressions embedded in these images will be examined for the implied significations they carry. 
Furthermore, the trajectory of these images from their origins to their current state will be traced, delving into the frequency of use across cultural and societal strata, as well as the utilization of digital imagery following its establishment in the digital realm. This research will not merely focus on the transformation of artists' and designers' creations into NFTs but will also scrutinize how digital images in the new era acquire value and meaning. Ultimately, it aims to comprehensively explore the implications of future metaphoric images, particularly in the context of NFTs and their connection to human culture. Additionally, the study will examine instances where societal institutions impact NFTs' digital images and, reciprocally, where these images influence societal norms. This exploration will encompass the analysis of different nations, epochs, and the digital convergence era. In summation, the synthesized findings will categorize the meanings associated with these images and investigate how they can genuinely add value via historical research or case studies.

Young Jun Han
Open Access
Article
Conference Proceedings

The Digital Astronaut Simulation

The Digital Astronaut Simulation provides a human biomechanics modeling, simulation, and analysis capability that enables spaceflight hardware design to incorporate human dynamic input early in development cycles, as well as to characterize performance after prototypes are built. Engineering design may often include posable mannequins for volumetric-type assessments or other anthropometric data, but it has historically lacked higher-fidelity multibody dynamics modeling. However, the implementation described herein facilitates quantifying the kinematics and dynamic loads that impact hardware function and can hence be used within design iteration. The enhanced toolset includes a modified multibody model, updated motion capture marker sets, and refined methods for scaling and inverse kinematics. These provide increased accuracy for applications in exercise and extravehicular tasks in reduced gravity, especially where upper extremity motion is involved. A core capability highlighted is the calculation of ground reaction forces, moments, and center of pressure based on motion capture, which is compared with force platform measurements. This paper describes 1) the updates made to an OpenSim full body model and motion capture marker sets to improve model scaling methods and inverse kinematics results, 2) verification and validation efforts for ground reaction force, moment, and center of pressure computation, and 3) a discussion of the human spaceflight applications to date.
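At its core, the whole-body ground reaction force computation mentioned above reduces to Newton's second law applied at the body's center of mass. The following is a simplified sketch of that principle only, not the toolset's actual implementation; the mass and acceleration values are hypothetical.

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])  # m/s^2, z-axis pointing up

def ground_reaction_force(mass_kg, com_accel):
    # Newton's second law for the whole body: m * a_com = F_grf + m * g,
    # so the ground reaction force is F_grf = m * (a_com - g).
    # In practice, com_accel would be derived from motion capture data
    # (twice-differentiated center-of-mass position).
    return mass_kg * (np.asarray(com_accel, dtype=float) - GRAVITY)

# A subject standing still (zero center-of-mass acceleration) yields a
# ground reaction force equal and opposite to body weight.
f_static = ground_reaction_force(80.0, [0.0, 0.0, 0.0])
```

The center of pressure then follows from balancing the ground reaction moment about a reference point, which is where the comparison against force platform measurements in the paper applies.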

Kaitlin Lostroscio, Leslie Quiocho, Charlotte Bell, David Frenkel, Fouad Matari, Lauren Nilsson
Open Access
Article
Conference Proceedings

Evaluating the Efficacy of Structured Analytic Techniques (SATs) as a Support System to Enhance Decision-Making within ISR Mission Environments

U.S. Air Force military operators involved with Intelligence, Surveillance, and Reconnaissance (ISR) missions are required to process, exploit, and disseminate (PED) collected intelligence within friendly and hostile environments in near-real time in order to provide geographical locations and ground movement patterns. Intelligence collected during ISR operations is then incorporated into future strategic planning to give our military an edge on the battlefield. However, the information collected can be vague, incomplete, or ill-defined, resulting in operators making poor or inadequate decisions. Therefore, the objective of this study was to evaluate the effectiveness of two structured analytic techniques (SATs) against a control group when interpreting and comprehending narrative content, in order to support and facilitate current tool development and future technology transition within the ISR community. Three groups of 25 participants (N=75) were randomly assigned to one of the two analytic techniques or a control approach and provided with a narrative. The approaches implemented were the Method for Defining Analytical Questions (MDAQ), which was developed in-house by our ISR subject matter experts (SMEs), a Scaffolding approach, and a Control approach. MDAQ is a repeatable process focused on identifying an indicator and its association with a person, place, or event before providing a solution. Scaffolding is founded on determining a problem statement, generating a solution, providing justification, evaluating the hypothesis, and providing a solution. In the Control approach, participants read through the content and provided a solution. The study sought to determine whether providing a structured analytic technique would enhance the detection of essential elements of information (EEI) embedded within the narrative, leading to improved performance accuracy. 
The findings provided underlying evidence that implementing a Scaffolding approach significantly improved performance accuracy compared to MDAQ and Control (p<0.01). Moreover, a statistically significant difference was detected within the MDAQ group when participants repeated the process compared to those who went through the process only once (p<0.01). Furthermore, the findings suggest that providing participants with a structured analytic technique enabled them to identify and interpret critical EEIs that might otherwise be overlooked, resulting in improved performance accuracy. This discovery will support human-computer interactions for future ISR tool development.

Justin Nelson, Anna Maresca, Bradley Schlessman, Jerred Holt, John Kegley, Alan Boydstun
Open Access
Article
Conference Proceedings