Human Factors in Robots, Drones and Unmanned Systems


Editors: Alexandra Medina-Borja, Krystyna Gielo-Perczak

Topics: Robots, Drones and Unmanned Systems

Publication Date: 2025

ISBN: 978-1-964867-55-7

DOI: 10.54941/ahfe1005990

Articles

Visual Allocation of Teams in the Construction Industry: Shared Situation Awareness Under Information Overload in Human-AI Collaboration

The integration of AI offers significant opportunities for enhancing human-machine collaboration, particularly in dynamic environments like the construction industry, where excessive information affects decision-making and coordination. This study investigates how visual attention distribution relates to situation awareness (SA) development under information overload by addressing three research questions: (1) How does visual allocation relate to individual SA under information overload? (2) How does visual allocation influence shared SA formation? (3) Do high-shared-SA teams exhibit different visual allocation patterns than low-shared-SA teams? To answer these questions, a multi-sensor virtual reality (VR) construction environment was created as a testbed, including realistic task simulations involving both human teammates and AI-powered cobots (e.g., drones and a robotic dog). Participants completed a pipe installation task while navigating construction hazards such as falls, trips, and collisions and experiencing varying degrees of information overload. Shared situation awareness (shared SA)—the shared understanding of tasks and environmental conditions—was assessed using the situation awareness global assessment technique (SAGAT), and eye movements were tracked using a Meta Quest Pro headset. The relationship between eye-tracking metrics and SA/shared SA scores was analyzed using linear mixed-effects models (LMMs), and a two-sample t-test compared visual allocation patterns between high- and low-shared-SA teams. Results indicate that eye-tracking metrics can predict SA levels, and that an individual's SA may also be enhanced through dyadic communication with team members, allowing participants to acquire updates without directly seeing the changes. Furthermore, high-shared-SA teams allocated significantly more attention to environment-related objects and exhibited a more balanced visual allocation pattern (run count and dwell time) across task- and environment-related objects. In contrast, low-shared-SA teams were more task-focused, potentially reducing their awareness of broader situational risks. These findings help identify at-risk workers from their psychophysiological responses. This research contributes to developing safer and more effective human-AI collaboration in construction and other high-risk industries by prioritizing shared SA and AI-driven personalized feedback.
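As a rough sketch of the analysis described above, the snippet below fits a linear mixed-effects model with a per-participant random intercept and runs a two-sample t-test using statsmodels and SciPy. The data file and all column names (dwell_time, run_count, sa_score, participant_id, team_group, env_dwell_time) are hypothetical placeholders, not the study's actual variables.

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

df = pd.read_csv("eye_tracking_sa.csv")  # hypothetical data file

# LMM: SA score predicted by visual-allocation metrics, with a random
# intercept per participant to account for repeated trials.
lmm = smf.mixedlm("sa_score ~ dwell_time + run_count",
                  data=df, groups=df["participant_id"]).fit()
print(lmm.summary())

# Two-sample t-test: dwell time on environment-related objects in
# high- vs low-shared-SA teams (hypothetical grouping column).
hi = df.loc[df["team_group"] == "high", "env_dwell_time"]
lo = df.loc[df["team_group"] == "low", "env_dwell_time"]
print(stats.ttest_ind(hi, lo))
```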

Ching-yu Cheng, Liuchuan Yu, Lap-fai Yu, Behzad Esmaeili
Open Access
Article
Conference Proceedings

Evolution of the Human Factor in Forestry Automation: From Manually Operated Forestry Machinery to Fully Autonomous Systems

The advancement of robotics, drones, and unmanned systems has significantly transformed forestry operations, particularly through the mechanization and automation of crane systems, trucks, and other forestry machinery used for timber harvesting and transportation. This research examines the transition of forestry machinery from manual operation to full autonomy, discussing how different levels of automation will impact technological development, adoption, and societal acceptance. The shift from traditional forestry equipment, requiring skilled human operators, to semi-automated and fully autonomous systems is driven by innovations in sensors, machine learning, and advanced system control. While automation enhances efficiency, safety, and sustainability, it also presents challenges related to operator adaptation, job displacement, and trust in AI-driven systems. A key human factor in this transition is the cognitive load on operators. Initially, semi-automated systems increased this burden due to complex control interfaces, but advancements in intuitive human-machine interaction have helped to mitigate these effects. Acceptance among forestry professionals depends on factors such as reliability, ease of use, and perceived safety. Resistance to fully autonomous systems remains, particularly due to concerns about loss of control, unpredictable forest environments, and the ability of AI systems to make appropriate decisions under variable conditions. However, automation also offers significant benefits, including improved efficiency, enhanced safety, and solutions to labor shortages caused by demographic shifts. Additionally, it increases the attractiveness of forestry careers for younger generations. This research assesses the current state of forestry machinery and its respective levels of automation. Using the timber value chain as a case study, it explores how increasing automation will shape the future of forest management and outlines strategies to improve acceptance among operators and society. Ultimately, the research highlights how technological advancements can align workers' well-being with sustainable management of forest resources.

Alexander Kreis, Mario Hirz, Karl Aumeier
Open Access
Article
Conference Proceedings

Blocking System for Autonomous Flight Drones

In recent years, the misuse of drones has emerged as a critical challenge. Drones rely on electromagnetic waves, such as GNSS (Global Navigation Satellite System) signals and remote-control signals, for navigation. When a malicious drone intending to engage in criminal activities such as terrorism is detected, it can be neutralized using systems that generate high-power jamming signals, such as the Drone-Buster. However, this approach requires continuous emission of strong jamming signals, which can interfere with nearby devices that rely on electromagnetic waves. Furthermore, it is challenging for a single unit to cover multiple intruder drones simultaneously. To address these issues, this paper proposes the following three mechanisms:

1. Interference with unauthorized drone flights. Typically, drones use GNSS to obtain positional information for navigation; if GNSS signal reception is interrupted, the drone will stop operating. This study proposes deploying multiple array antennas that emit directional signals to achieve this selective jamming. Each jamming signal from a single array antenna is weak enough not to affect electronic devices in public areas. However, by concentrating multiple jamming signals within a confined area of a few cubic meters and aligning their phases, the interference intensity can be raised to a level that disrupts the regular operation of a drone. A "jamming grid" is created by rapidly scanning this interference area, and the grid can be moved vertically to cover a broader range.

2. Safeguarding authorized drones. The method above affects all drones passing through the interference area. To ensure the secure operation of authorized drones, a local positioning system (LPS) that measures positional data locally is implemented. Drones compatible with this system can determine their location and continue operating safely. Additionally, the system employs encryption and authentication using pre-shared key information, preventing malicious drones from calculating positional data or impersonating legitimate drones.

3. Provision of positional information and assumption of the attacker's perspective. In addition to emitting jamming signals, the proposed system can receive and analyze signals reflected from flying objects to determine their position, a capability especially useful for identifying drones that do not rely on GNSS. Moreover, by incorporating active defense that anticipates the attacker's psychology and behavior, the system guides intruding drones into deliberately created gaps for interception and neutralization. The human-in-the-loop framework enables flexible, real-time responses.

The proposed method can disable unauthorized drones that enter the interference area, and drone operators who do not want to spend time on attacks will inevitably choose to avoid it. Therefore, by positioning the interference area so as to guide drones into the area intended by the defender, more powerful defense devices such as the Drone-Buster can be deployed effectively. Furthermore, the scalability of this approach may also make it possible to defend against large swarms of drones. This paper describes the concept of the proposed method and presents preliminary results from proof-of-concept experiments.
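A minimal sketch of the phase-alignment principle behind the proposed jamming grid: several weak emitters are phased so that their fields add coherently (an N-squared intensity gain) only near a chosen target point. The antenna geometry, the GPS L1 frequency, and the unit amplitudes below are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

c = 3e8                       # speed of light (m/s)
f = 1.57542e9                 # GPS L1 carrier frequency (Hz)
wavelength = c / f

# Four emitters on a 30 m square (assumed layout) and a target cell aloft.
antennas = np.array([[0, 0, 0], [30, 0, 0], [0, 30, 0], [30, 30, 0]], float)
target = np.array([15.0, 15.0, 50.0])

def field_at(point, phases):
    """Sum the complex fields of all emitters at a point (unit amplitudes)."""
    d = np.linalg.norm(antennas - point, axis=1)
    return np.sum(np.exp(1j * (2 * np.pi * d / wavelength + phases)))

# Choose per-antenna phases that cancel the path delay to the target,
# so all four signals arrive in phase there.
d_target = np.linalg.norm(antennas - target, axis=1)
phases = -2 * np.pi * d_target / wavelength

print("intensity at target:", abs(field_at(target, phases)) ** 2)  # = N**2 = 16
off = target + np.array([3.0, 0.0, 0.0])
print("intensity off-target:", abs(field_at(off, phases)) ** 2)    # typically much lower
```

Rapidly re-solving the phases for successive target points is what sweeps this small coherent cell through space to form the grid.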

Ryushun Oka, Jumpei Tahara, Ichiro Koshijima, Kenji Watanabe
Open Access
Article
Conference Proceedings

An AI-Based Adaptive Pipeline for Automated Feedback in Immersive Robotics Learning

In this paper, we present a pipeline and framework for the Intelligent Immersive Learning environment for Programming Robotics Operations (IL-PRO), a novel AI-based approach to assess and enhance learner capabilities in an immersive virtual reality (VR) environment. By integrating telemetry data (both continuous and discrete) and speech data, the IL-PRO pipeline evaluates users' motor skills and cognitive understanding to deliver personalized, real-time feedback that links their conceptual understanding with motor skill performance. Telemetry data captures precise physical human-system interactions, which are processed and analyzed using Machine Learning (ML) tools to rate motor skill capabilities, while speech data is analyzed using Natural Language Processing (NLP) techniques in concert with a Large Language Model (LLM) to simultaneously assess comprehension and task-related knowledge. These insights are then integrated and used to provide feedback and adapt the learning environment dynamically, tailoring tasks and modules to the learner's specific needs and progress. To demonstrate the feasibility of this approach, we apply the pipeline to a VR task focused on robot acceleration, which emphasizes how motor skills and cognitive understanding work together when learning about inertia in industrial robotic arms. This use case illustrates the pipeline's comprehensive workflow: data collection, multimodal processing of telemetry and speech using machine learning and AI, integration of cognitive and physical insights, and generation of adaptive, real-time feedback. The IL-PRO pipeline framework advances the development of immersive learning systems, enables research on how users combine motor skills with cognition, and enhances skill acquisition in applied training contexts such as robotics.
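To make the fusion step concrete, here is a hypothetical skeleton of how a telemetry-derived motor-skill rating and a speech-derived comprehension rating might drive module selection. The class, thresholds, and module names are invented for illustration and are not the IL-PRO implementation.

```python
from dataclasses import dataclass

@dataclass
class LearnerAssessment:
    motor_skill: float     # 0..1, e.g. from an ML model over telemetry
    comprehension: float   # 0..1, e.g. from NLP/LLM analysis of speech

def adapt_module(a: LearnerAssessment) -> str:
    """Pick the next module, targeting the weaker channel (assumed policy)."""
    if a.motor_skill < 0.4 and a.comprehension < 0.4:
        return "guided_demo"      # re-teach the concept with scaffolding
    if a.motor_skill < 0.4:
        return "motor_drill"      # concept is fine; practise execution
    if a.comprehension < 0.4:
        return "concept_review"   # execution is fine; revisit inertia
    return "advanced_task"        # both strong; raise the difficulty

print(adapt_module(LearnerAssessment(motor_skill=0.3, comprehension=0.8)))
# -> motor_drill
```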

Mohammadreza Akbari Lor, Bhanu Vodinepally, Tisa Islam Erana, Bhavleen Kaur, Giancarlo Perez, Seth Corrigan, Shu-ching Chen, Mark Finlayson, Biayna Bogosian, Shahin Vassigh
Open Access
Article
Conference Proceedings

Integrating Robotics, AI, and Immersive Technologies: A Modular Framework for Human-Metahuman-Robot Collaboration

The use of collaborative robots has been increasing rapidly across industries, particularly in manufacturing settings. This advancement allows humans and robots to work side by side to complete tasks more efficiently. Moreover, with the development of synthetic actors like Metahumans, humans can now enter immersive environments where these Metahumans act as guides and assist humans in task execution. However, there has been limited work combining both technologies (synthetic actors and collaborative robots) in the industrial field.

This paper proposes a system that combines speech recognition, object detection, motion planning, and AI-enabled Metahuman guidance within an immersive environment. The system architecture demonstrates the simple fetching and positioning of components by a robotic arm commanded by a human who is guided by a Metahuman. The system ensures seamless communication between nodes by utilizing ROS (Noetic version) along with advanced tools for speech recognition, object detection, and motion planning. This modular architecture allows each component—voice recognition, command parsing, object detection, and robotic motion—to function independently while collaborating through ROS communication protocols. The result is flexibility, scalability, and ease of maintenance, making the system adaptable to various environments and use cases. For example, voice recognition simplifies human-robot communication, while computer vision ensures accurate object detection and localization, allowing the robot to perform precise manipulation tasks.

Using a Metahuman in an immersive environment enhances the user experience by providing real-time guidance and feedback. This paper aims to demonstrate how Metahumans can enhance efficiency and guidance for humans and how collaborative robots can assist them in an industrial context. We present an illustration showcasing how a Metahuman can guide a human, who then commands a robotic arm according to the Metahuman's instructions during a simple pick-and-place task on an assembly line. The goal of this work is to shorten the learning curve of an assembly worker by introducing Metahumans and collaborative robots.

However, the system has some limitations that warrant future exploration. Currently, humans are the bridge between the Metahuman and the robot. As a result, the user needs to verify with the Metahuman after every step whether the robotic arm has successfully completed its task. If direct communication between the Metahuman and the robotic arm can be established, the Metahuman could modify its commands in real time based on the robot's performance. Another limitation is the dependency on predefined object detection models, which may struggle in cluttered or dynamic environments. Similarly, the speech recognition module could benefit from enhanced capabilities to understand complex or domain-specific commands. Future research could explore reinforcement learning to improve the robot's adaptability and integrate advanced natural language processing models to handle more nuanced interactions.

In conclusion, this work demonstrates a practical approach to combining robotics, artificial intelligence, and immersive technologies to create an intuitive and efficient human-robot collaboration system. The modular design facilitates ease of use and flexibility and lays a foundation for future advancements in the field. By bridging the gap between humans and robots, this system paves the way for innovative applications in industrial automation, education, and beyond, showcasing the immense potential of integrating emerging technologies to redefine human-robot interaction. Moreover, the integration of the Metahuman enables non-technical users to interact effectively with advanced robotics, making the system accessible and user-friendly.
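Since the paper builds on ROS Noetic, the sketch below illustrates the kind of modular message flow described: one node receives a parsed voice command on a topic and forwards a pick goal to the motion-planning side. The topic names and the one-word command grammar are assumptions for illustration, not the paper's actual interfaces.

```python
import rospy
from std_msgs.msg import String

def on_voice_command(msg: String) -> None:
    # e.g. "pick red_block" -> forward the object label to the planner,
    # which resolves its pose via the object-detection node.
    action, _, target = msg.data.partition(" ")
    if action == "pick" and target:
        goal_pub.publish(String(data=target))
        rospy.loginfo("Forwarded pick goal: %s", target)

rospy.init_node("command_parser")
goal_pub = rospy.Publisher("/pick_goal", String, queue_size=10)
rospy.Subscriber("/voice_commands", String, on_voice_command)
rospy.spin()
```

Because each node only touches topics, the voice, vision, and motion components can be swapped or tested independently, which is the modularity argument the paper makes.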

Ramisha Fariha Baki, Apostolos Kalatzis, Laura Stanley
Open Access
Article
Conference Proceedings

Identifying the Contributors of Intrinsic, Extraneous, and Germane Load in Human-Robot Collaboration Through Interview Questions

A significant challenge in human-robot collaboration (HRC) is managing the emergent cognitive workload of the human operator. HRC relies on communication, decision-making, planning, coordination, situational awareness, and error handling. These cognitive processes can make tasks complex for operators and increase workload, which can negatively impact HRC effectiveness. Current research in HRC focuses on understanding and quantifying the cognitive workload imposed during the task using objective and subjective measures. Such measures can identify cognitive workload states; however, they cannot distinguish the specific influence of each workload type (intrinsic, extraneous, and germane) on cognitive workload. Intrinsic workload is driven by the task's difficulty and is influenced by the amount of information to be processed and the user's existing knowledge. Extraneous workload represents the cognitive effort imposed by environmental, instructional, and presentation factors; for example, distractions, irrelevant information, or confusing guidance during the task can create extraneous workload. Germane load refers to the cognitive effort required to process and integrate new information into long-term memory; the cognitive processes involved include organizing information, connecting task demands to prior knowledge, and constructing mental models to grasp complex concepts. Each cognitive workload type contributes uniquely to cognitive processing; therefore, assessing them is important for understanding workload dynamics and optimizing task design in HRC. To better understand the effect of each workload type on cognitive load, we conducted a human-subject study in which participants completed a collaborative task with a robot under low and high cognitive workload states. At the end of the task, participants completed a semi-structured interview. Through qualitative analysis of participants' responses, we identified key factors and themes associated with each type of cognitive load. Intrinsic workload was primarily affected by three factors: the robot's speed, the need to multitask, and the learning curve associated with the robot's navigation and design. Regarding extraneous workload, a central theme was the robot's speed, which triggered distractions for the operator. Finally, germane load was characterized by the following themes: acquiring knowledge, performing HRC tasks, and enhancing multitasking capabilities such as hand-eye coordination. These results highlight that different aspects of robot design, task design, and task execution contribute uniquely to the overall cognitive workload.

Apostolos Kalatzis, Vishnunarayan Girishan Prabhu, Laura Stanley
Open Access
Article
Conference Proceedings

Improving Airspace Awareness: Possible Conspicuity Solutions For Safe sUAS Operations

Currently, there is no standardized lighting system to enhance the visibility of small Unmanned Aircraft Systems (sUAS), despite reports of their limited conspicuity. This study identified the characteristics of a lighting system (placement of the lights, flash type, movement of the sUAS) that can enhance the detection and visibility of a sUAS. The work was done using a virtual reality (VR) headset, a platform that can offset or mitigate persistent issues with UAS field research. The experiment used a within-subject factorial design to explore the effects of lighting design and sUAS movement on detection and reaction time. The study included three factors: light flash type (flashing, non-flashing, and half-flashing/half-non-flashing); light placement (top and bottom, or around the perimeter of the sUAS); and relative movement of the sUAS (approaching or orbital). Participants viewed 360-degree videos of a sUAS in flight and were tasked to locate it within six seconds, relying on the drone's lighting and sound. Each participant completed a total of 96 trials. Fifty participants (31 female, 19 male; mean age 24.4 years) were recruited from the student population (31 participants) and the general population (19 participants). Half-flashing/half-solid lights around the perimeter of the sUAS maximized the chance of quick detection. Perimeter lighting increased detection counts [F(1, 590) = 38.295, p < 0.001]. There was also a significant flash type by placement interaction [F(2, 98) = 8.87, p < 0.001] for reaction time, with decreased reaction times for half-flashing/half-solid lighting placed around the perimeter. The type of relative movement depends on the vantage point of the observer and did not lead to lighting recommendations. This virtual reality-based study identified lighting configurations that increase sUAS visibility. It also highlighted the potential of VR-based experiments to increase participant turnout, decrease financial burdens, and avoid hazardous accidents. The experiment identified one configuration that increases detection and decreases reaction time: a combination of solid and flashing lights around the perimeter of the sUAS. As sUAS use expands to include flight beyond visual line of sight and advanced air mobility aircraft flying in swarms, these lighting systems may be further examined to minimize human error.
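For readers who want to run this style of analysis, a minimal sketch of a within-subject factorial ANOVA on reaction time (flash type by placement) using statsmodels' AnovaRM follows. The file and column names are hypothetical, and the study's full design (including the movement factor and unbalanced cells) may call for a different model.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical layout: one aggregated reaction time per participant per
# condition, with columns participant, flash_type, placement, reaction_time.
df = pd.read_csv("suas_detection.csv")

aov = AnovaRM(
    data=df,
    depvar="reaction_time",
    subject="participant",
    within=["flash_type", "placement"],
).fit()
print(aov)  # F statistics for both main effects and their interaction
```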

Jennifer Martinez, Justin Macdonald
Open Access
Article
Conference Proceedings

Trust in AI and Autonomous Systems

In 2023, the Office of the Undersecretary of Defense initiated the Center for Calibrated Trust Measurement and Evaluation (CaTE), aimed at establishing methods for assuring trustworthiness in artificial intelligence (AI) systems, with an emphasis on human-autonomy interaction. As part of the CaTE effort, the DEVCOM Armaments Center's Tactical Behavior Research Laboratory was tasked with developing standards for testing and measuring calibrated trust in AI-enabled armament systems. Qualitative and quantitative measures of trust were collected from over 80 Soldiers in table-top, force-on-force, simulated-environment, and engineering integration events. In particular, a survey instrument configured specifically for assessing trust in AI weapon systems was created for this research. By embedding with Soldiers during operational exercises using actual systems, the researchers were able to gather footage and recordings of possible human systems integration (HSI) issues. Information from this live exercise was used to configure a virtual environment experiment using the same terrain, controllers, and systems as in the live exercise. This presentation will give an overview of the research program, with emphasis on novel HSI data collection methods.

Elizabeth Mezzacappa, Dominic Cheng, Lucas Hess, Nikola Jovanovic, Robert Demarco, Jose Rodriguez, Madeline Kiel, Kenneth Short, Alexis Cady, Jessika Decker, Mark Germar, Keith Koehler, Nasir Jaffery, Lawrence D'aries
Open Access
Article
Conference Proceedings

Enhancing Trust in Human-AI Interaction through Explainable Decision Support Systems for Mission Planning of UAS-Swarms

The use of Artificial Intelligence (AI) in decision-making systems often raises concerns about transparency and interpretability due to the "black box" nature of many AI models. This lack of explainability can hinder trust and limit the effective integration of AI into human-machine systems. Fuzzy logic offers a compelling solution to this challenge by providing inherently interpretable decision-making frameworks. Unlike traditional AI approaches, which often obscure their reasoning processes, fuzzy logic operates through intuitive linguistic rules and degrees of truth, making it possible to design systems that are both explainable and adaptable. By combining fuzzy logic with advanced AI techniques, such as machine learning, it becomes feasible to build systems that leverage the power of AI without sacrificing transparency or user trust. Fuzzy logic plays an essential role in advancing these goals by offering a framework for handling uncertainty and modeling human-like reasoning. Unlike classical logic, which relies on binary true/false values, fuzzy logic operates on degrees of truth. This characteristic makes it uniquely suited for real-world applications where data may be incomplete or imprecise. By using linguistic variables and intuitive rules, fuzzy logic enables decision-making systems to align more closely with human perception. For example, instead of rigid thresholds like "temperature > 30°C," fuzzy logic employs terms such as "moderately warm" or "very hot," which are easier for humans to understand. This interpretability is particularly valuable in human-machine interface design, where trust and collaboration between users and machines are key factors. This study proposes a structured approach to integrating fuzzy logic into mission planning systems for safety-critical environments such as autonomous disaster management with drone swarms. The first step involves defining a set of linguistic variables and rules tailored to specific operational contexts—for instance, assessing flight risks or prioritizing tasks during disaster response scenarios. These rules will be designed to align with human reasoning patterns while remaining computationally efficient. Next, fuzzy inference systems will be developed to process uncertain inputs—such as environmental conditions or sensor data—and generate interpretable outputs that guide decision-making. To enhance system adaptability, reinforcement learning algorithms will be integrated into the framework, allowing the system to optimize its performance over time based on feedback from real-world operations or simulations. The study seeks to demonstrate how combining fuzzy logic with machine learning and explainable AI (XAI) principles can create robust, explainable systems that improve trust, collaboration, and safety in human-autonomous teaming environments.
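As a toy illustration of the linguistic-rule approach the abstract describes, the sketch below evaluates one Mamdani-style rule ("IF wind is strong AND visibility is poor THEN flight risk is high") with triangular membership functions. The variables, breakpoints, and rule are invented examples, not the study's rule base.

```python
def trimf(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def flight_risk(wind_ms: float, visibility_km: float) -> float:
    # Degrees of truth for the linguistic terms (assumed breakpoints).
    wind_strong = trimf(wind_ms, 5, 12, 20)
    vis_poor = trimf(visibility_km, -1, 0, 5)

    # Fuzzy AND is min() in the classic Mamdani formulation; the result
    # is the degree to which "risk is high" fires, directly traceable
    # to the two input terms.
    return min(wind_strong, vis_poor)

print(flight_risk(wind_ms=10, visibility_km=2))  # prints 0.6
```

The output is not just a score: an operator can read back which rule fired and to what degree, which is exactly the interpretability argument made above.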

Batuhan Özcan, Max Friedrich, Jan-Paul Huttner, Kevin Dwinger
Open Access
Article
Conference Proceedings

Designing Multimodal Human-Robot Interaction for Social Robots in Office Environments

Social robots are becoming increasingly relevant and are expected to enter our professional lives. A key challenge lies in designing these robots to ensure their behaviors are easily and intuitively understood by the users and match their expectations. This paper focuses on the design and implementation of a multimodal interaction system for social robots in office environments. Following a user-centered design approach, we iteratively developed and evaluated a prototype through multiple user studies, addressing various office-related use cases such as welcoming arriving guests and guiding them to a room. The results provide key insights into the multimodal interaction user experience of our robot. This helped us to identify key requirements and features for natural and engaging interactions.

Sebastian Pimminger, Werner Kurschl, Johannes Schönböck, Gerald Zwettler
Open Access
Article
Conference Proceedings

Automated vehicles with communication capabilities: Is there an added impact on traffic efficiency at yield sign-controlled intersections?

The present study aimed to evaluate the effects of Automated Vehicles (AVs) with and without communication capabilities on traffic efficiency at yield sign-controlled intersections. When equipped with communication capabilities, AVs may be able to make earlier decisions on whether they need to yield, which may consequently affect their travel time and queue formation at the intersection. Detailed models of intersection behaviour were developed for Baseline AVs (which use only their onboard sensors for perception and decision-making) and for Enabled AVs (equipped with communication capabilities). A microscopic simulation study was carried out at a yield sign-controlled intersection, with varying traffic volumes and AV penetration rates. The findings support the expectation that the addition of communication capabilities may reduce travel time and queue length at such intersections. The size of the reduction and the number of vehicle routes affected seemed to attenuate as the AV penetration rate and traffic volume increased. The relation between the number of routes affected and traffic volume was not straightforward.
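A deliberately simple sketch of the modelled difference: communication extends an Enabled AV's perception horizon beyond onboard sensor range, so it can decide whether to yield earlier than a Baseline AV. All numbers below are illustrative assumptions, not the study's calibrated parameters.

```python
def decision_time_out(perception_range_m: float, speed_ms: float) -> float:
    """Seconds before the yield line at which conflicting traffic is known."""
    return perception_range_m / speed_ms

speed = 13.9  # ~50 km/h approach speed (assumed)
baseline = decision_time_out(perception_range_m=80, speed_ms=speed)   # sensors only
enabled = decision_time_out(perception_range_m=300, speed_ms=speed)   # V2X horizon

print(f"baseline decides {baseline:.1f} s out, enabled {enabled:.1f} s out")
# Deciding earlier lets the Enabled AV hold speed when no yield is needed,
# which is the mechanism behind the reduced travel times and queues.
```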

Niki Georgiou, Evangelia Portouli, Angelos Amditis
Open Access
Article
Conference Proceedings

Shared Design Principles in Human-Robot Systems: A Work Domain Perspective

Amid the growing focus on integrating smart technologies in modern industries, human-centred design in human-robot collaboration (HRC) systems remains underdeveloped and largely research-oriented. There is therefore a need for practical approaches that assist designers in creating more effective human-centred HRC systems. To this end, this study applies Work Domain Analysis (WDA) to interaction design in three multi-human, multi-robot (MH-MR) systems, spanning industrial assembly, construction, and agriculture, as part of the EU Horizon project SOPRANO. Separate WDAs for each use case analyse the system at different abstraction levels, identify key interaction points between system components, and form the basis for initial sociotechnical requirements in interaction design. A thematic comparison of these requirements across four key areas—(1) user interfaces and communication systems, (2) control and authority sharing, (3) workflow synchronization, and (4) safety assurance—reveals shared design principles and system-specific considerations. These insights contribute to the development of advanced, human-centred collaborative robotics adaptable to diverse work environments.

Nooshin Atashfeshan, Felix Dreger, Yannick Metzler, Fabian Rösler, Georgios Karantinakis, Georgios Athanassiou
Open Access
Article
Conference Proceedings

Multidisciplinary Perspectives on Ethical AI-Enabled Human-Robot Interaction in Manufacturing

In recent years, AI-enabled technologies have become an integral part of our daily lives. While industries such as finance, healthcare, and logistics have rapidly adopted AI-driven solutions, the manufacturing sector has approached this transition more cautiously. The integration of AI-enabled human-robot interaction (HRI) in manufacturing presents opportunities and challenges impacting workforce sustainability, ergonomics, user acceptance, and ethical deployment. This qualitative study employed operator engagement workshops and semi-structured interviews to identify critical operational and safety concerns in powder handling for beverage production. Key findings revealed significant ergonomic issues, notably physical strain and airborne dust exposure, prompting recommendations for adaptive robotic systems and real-time monitoring sensors to enhance operator comfort and safety. User acceptance emerged as essential but context-specific, driven by mandated interactions and reliant on trust built through transparent communication and standardized training. Ethical concerns focused on transparency, fairness, and privacy, particularly the balance between effective surveillance and respect for worker privacy. Additionally, workforce skill sustainability requires comprehensive training to address emerging roles. The study concludes that a multidisciplinary, human-centered approach is vital for successful, ethical, and sustainable AI integration into manufacturing environments.

Maryam Bathaeijavareshk, Iveta Eimontaite, Sarah Fletcher, Nikolaos Koufokotsios
Open Access
Article
Conference Proceedings

Human-Robot Communication: Utilizing Light-Based Signals to Convey Robot Operating State

The field of human-robot interaction has been rapidly expanding, but an ever-present obstacle facing the field is developing accessible, reliable, and effective forms of communication. It is often imperative to the efficacy of the robot and the overall human-robot interaction that a robot be capable of expressing information about itself to humans in the environment. Among the evolving approaches to this obstacle is the use of light as a communication modality. Light-based communication effectively captures attention, can be seen at a distance, and is commonly utilized in our daily lives. Our team explored the ways light-based signals on robots are being used to improve human understanding of robot operating state. In other words, we sought to determine how light-based signals are being used to help individuals identify the conditions (e.g., capabilities, goals, needs) that comprise and dictate a robot's current functionality. We identified four operating states ("Blocked", "Error", "Seeking Interaction", and "Not Seeking Interaction") in which light is utilized to increase individuals' understanding of the robot's operations. These operating states are expressed through manipulation of three visual dimensions of the onboard lighting features of robots: color, lighting pattern, and pattern frequency. In our work, we outline how these dimensions vary across operating states and the effect they have on human understanding. We also provide potential explanations for the importance of each dimension. Additionally, we discuss the main shortcomings of this technology. The first is the overlapping use of combinations of dimensions across operating states; the remainder relate to the difficulties of leveraging color to convey information. Finally, we offer considerations on how this technology might be improved in the future through the standardization of light-based signals and an increase in the amount of information provided within interactions between agents.
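One way to picture the mapping the review describes is as a lookup from operating state to the three visual dimensions. The concrete colors, patterns, and frequencies below are invented placeholders rather than values drawn from the surveyed systems.

```python
from dataclasses import dataclass
from enum import Enum

class OperatingState(Enum):
    BLOCKED = "blocked"
    ERROR = "error"
    SEEKING_INTERACTION = "seeking_interaction"
    NOT_SEEKING_INTERACTION = "not_seeking_interaction"

@dataclass(frozen=True)
class LightSignal:
    color: str            # hue conveys the state's valence
    pattern: str          # e.g. solid, blink, pulse, rotate
    frequency_hz: float   # how fast the pattern repeats

# Hypothetical state-to-signal table; a standardized version of such a
# table is what the paper argues the field currently lacks.
SIGNALS = {
    OperatingState.BLOCKED: LightSignal("yellow", "pulse", 1.0),
    OperatingState.ERROR: LightSignal("red", "blink", 2.0),
    OperatingState.SEEKING_INTERACTION: LightSignal("blue", "rotate", 0.5),
    OperatingState.NOT_SEEKING_INTERACTION: LightSignal("white", "solid", 0.0),
}

print(SIGNALS[OperatingState.ERROR])
```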

Kurtis Riener, Vipashyana Patel, Lin Jiang, Behin Elahi, Mahima Suresh, Lesther Papa, Yue Luo
Open Access
Article
Conference Proceedings