Human Factors in Robots, Drones and Unmanned Systems


Editors: Alexandra Medina-Borja, Krystyna Gielo-Perczak

Topics: Robots, Drones and Unmanned Systems

Publication Date: 2024

ISBN: 978-1-964867-14-4

DOI: 10.54941/ahfe1005002

Articles

Integrating Episodic and Semantic Memory in Machine Teammates to Enable Explainable After-Action Review and Intervention Planning in HAA Operations

A critical step to ensure that AI systems can function as effective teammates is to develop new modeling approaches for AI based on the full range of human memory processes and systems evidenced by cognitive science research. In this paper, we introduce novel techniques that integrate episodic and semantic memory within Artificially Intelligent (AI) teammates. We draw inspiration from evidence that points to the key role of episodic memory in representing event-specific knowledge to enable simulation of future experiences, and from evidence for a representational organization of conceptual semantic knowledge via self-organizing maps (SOMs). We demonstrate that these two types of memory working in concert can improve machine capabilities in co-learning and co-training scenarios. We evaluate our system in the context of simulated helicopter air ambulance (HAA) trajectories and a formal model of performance and skill, with interventions to enable an AI teammate to improve its capabilities on joint HAA missions. Our modeling approach contrasts with traditional neural network training, in which the specific training data is not preserved in the final trained model embedding; the training data for our model consists of episodes containing spatial and temporal information that are preserved in the model’s embedding. The trained model creates a structure of relationships among key parameters of these episodes, allowing us to understand the similarities and differences between performers (both human and machine) in outcomes, performance, and trajectory. We further extend these capabilities by enhancing our semantic memory model to encode not just a series of episodes, but labeled directed edges between regions of semantic memory representing meta-episodes. These directed edges represent interventions applied by the performer to improve future episodic outcomes in response to identified gaps in capability. The interventions represent the application of specific co-training strategies as a labeled transition system, linking episodes representing pre-intervention and post-intervention performance. This allows us to represent the expected impact of interventions, simulating improvements and skill decay, and provides the machine with team-aligned goals for self-improvement between episodes to positively impact future teamwork.
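To make the episodic/semantic split concrete, the sketch below gives one possible, much simplified reading of the idea: episodes are kept verbatim in episodic memory, a small self-organizing map groups them into semantic regions, and interventions appear as labeled directed edges between regions, i.e. a labeled transition system over meta-episodes. The SOM class, the feature dimensions, and the intervention labels are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: episodes preserved verbatim, a tiny SOM organizing them into
# semantic regions, and labeled directed edges between regions standing in for
# interventions. All names and values are illustrative.
import numpy as np

class SOM:
    """Tiny 1-D self-organizing map over episode feature vectors."""
    def __init__(self, n_units, dim, lr=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(size=(n_units, dim))
        self.lr = lr

    def bmu(self, x):
        return int(np.argmin(np.linalg.norm(self.w - x, axis=1)))

    def train(self, data, epochs=50):
        for _ in range(epochs):
            for x in data:
                i = self.bmu(x)
                # pull the best-matching unit (and its neighbors) toward the sample
                for j in range(len(self.w)):
                    h = np.exp(-abs(i - j))          # neighborhood kernel
                    self.w[j] += self.lr * h * (x - self.w[j])
            self.lr *= 0.95

# Episodic memory: each episode keeps its raw spatio-temporal record.
episodes = [
    {"id": k, "features": f}
    for k, f in enumerate(np.random.default_rng(1).normal(size=(20, 4)))
]

som = SOM(n_units=5, dim=4)
som.train([e["features"] for e in episodes])

# Semantic regions = SOM units; interventions = labeled directed edges
# between regions (a labeled transition system over meta-episodes).
interventions = {(0, 2): "co-training drill", (2, 4): "route rehearsal"}

for e in episodes[:3]:
    region = som.bmu(e["features"])
    outgoing = [(dst, lbl) for (src, dst), lbl in interventions.items() if src == region]
    print(f"episode {e['id']} -> region {region}, applicable interventions: {outgoing}")
```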

Eric Davis, Katrina Schleisman
Open Access
Article
Conference Proceedings

Navigating the Seas of Automation: Human-Informed Synthetic Data Augmentation for Enhanced Maritime Object Detection

Digitalization and increased autonomy in transportation have the potential to create sustainable, safer, and more efficient service chains, contributing to a better quality of life and global prosperity. Key technologies, including AI, sensor fusion, and deep learning, are already available for autonomous vessels. However, the challenge lies in effectively integrating these technologies, particularly in the complex and dynamic maritime environment. The demand for autonomous maritime systems has driven the integration of machine learning to enhance intelligence, particularly in object detection with computer vision. This task faces complexities due to factors such as lighting, weather conditions, and waves. However, ensuring the accuracy and trustworthiness of machine learning algorithms poses a significant challenge, primarily related to acquiring a well-prepared dataset. Creating a detailed dataset covering diverse scenarios proves difficult, time-consuming, and costly across various research areas. Data scarcity in maritime settings hampers progress, given the intricate and expensive nature of data collection and labeling. Additionally, the relatively new concept of autonomy in this domain limits the availability of relevant datasets, compounded by challenges posed by diverse weather conditions during data collection. In 2022, our aim was to build a comprehensive image dataset in Finland's maritime domain, consisting of 120,216 RGB annotated images. Evaluation by a maritime expert revealed a lack of diversity in weather conditions within our dataset, prompting the need to incorporate human opinions. To overcome data scarcity, especially in varying weather conditions, we propose a novel approach for maritime object detection. Our method employs human-informed synthetic data augmentation using Generative Adversarial Networks (GANs), implemented through 4Sessions-Net (4S-Net). This innovative strategy positively impacts labeled data and addresses challenges related to dataset imbalance and insufficiency. Synthetic data generation using GAN networks, such as 4S-Net, is a cutting-edge solution to overcome these limitations. This paper introduces 4S-Net, which augments labeled data, positively impacting results. However, the synthetic data's complexity may not match real-world scenarios, necessitating model evaluation with real data. The dataset, collected in the complex Finnish archipelago, was accurately labeled and extended with synthetic data representing different weather conditions. Comparative analysis involving three CNNs on the original and new datasets, including GAN-generated data, reveals superior accuracy in models trained on the new dataset. In summary, while digitalization and autonomy offer promise, data scarcity and environmental challenges in maritime settings hinder progress, requiring a high level of understanding and contribution from domain experts. Synthetic data generation through GAN networks based on expert opinion, as demonstrated with 4S-Net, is a key solution resulting in improved model accuracy. This approach not only addresses the limitations of real-world data collection but also contributes to advancing the application of machine learning in maritime autonomy. The results demonstrate significant improvements in accuracy and reliability while simultaneously reducing the cost and time of data collection through the incorporation of expert opinions in dataset creation.
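As a rough illustration of the augmentation pipeline (not the 4S-Net architecture itself, which is not reproduced here), the sketch below shows how expert-requested weather conditions could drive a GAN generator to extend a labeled dataset; weather_generator, the image shapes, and the bounding-box format are placeholders.

```python
# Generic sketch of human-informed synthetic augmentation. `weather_generator`
# stands in for a trained image-to-image GAN generator conditioned on a weather
# label chosen by a domain expert. Shapes and labels are illustrative.
import numpy as np

def weather_generator(image, weather):
    """Placeholder for a trained GAN generator G(image, weather) -> image."""
    rng = np.random.default_rng(abs(hash(weather)) % 2**32)
    fog = rng.uniform(0.1, 0.4) if weather == "fog" else 0.05
    return np.clip(image * (1 - fog) + fog, 0.0, 1.0)   # crude stand-in effect

def augment_dataset(images, labels, expert_requested_weathers):
    """Append weather-conditioned synthetic copies; since the style transfer is
    geometry-preserving, bounding-box labels carry over unchanged."""
    aug_images, aug_labels = list(images), list(labels)
    for img, lab in zip(images, labels):
        for weather in expert_requested_weathers:
            aug_images.append(weather_generator(img, weather))
            aug_labels.append(lab)          # same boxes/classes as the source image
    return np.stack(aug_images), aug_labels

# Usage: expert review flagged missing fog and snow scenes, so only those are added.
images = np.random.default_rng(0).uniform(size=(4, 64, 64, 3))
labels = [[("vessel", 10, 12, 30, 28)]] * 4          # (class, x, y, w, h)
X, y = augment_dataset(images, labels, ["fog", "snow"])
print(X.shape, len(y))   # (12, 64, 64, 3) 12
```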

Amin Majd, Mehdi Asadi, Juha Kalliovaara, Tero Jokela, Jarkko Paavola
Open Access
Article
Conference Proceedings

How Robot Arms can be used for Bin Picking and Shelf Picking

Recently, there has been substantial interest in the utilization of robot arms for bin picking and shelf picking. The escalation in the volumes of e-commerce, online grocery, and automated warehousing has brought increased attention to this area. In this paper, typical bin picking and shelf picking systems are reviewed, their limitations are analyzed, and a novel solution inspired by the grasp detection algorithm is presented.

Seemal Asif, Shifan Li
Open Access
Article
Conference Proceedings

4-DOF Robotic Arm Simulator for Machine Operator Training and Performance Evaluation: Engineering Design and Experimental Validation

Robotic crane operators are essential in construction and forestry (e.g. excavator or forest harvester cranes), where their performance significantly impacts efficiency and safety. Training for crane operators relies on high-fidelity simulations to develop high skill levels. However, productivity analyses have revealed large variances among machine operators, with disparities of up to 40%. Skill acquisition must therefore be advanced through improved training methods, based on a deeper understanding of the sensory-motor control of the crane. Training simulators typically provided by original equipment manufacturers (OEMs) lack access to detailed data such as joystick inputs and do not allow the simulations to be modified to include real-time performance feedback. To address this limitation, a robotic crane simulator was collaboratively designed by the Leibniz Research Centre for Working Environment and Human Factors and the Chair of Computer Graphics at TU Dortmund. The simulator was evaluated in a pilot study with 36 participants who conducted 32 aiming movements with the simulated robotic crane. The results show skill improvements over time and the suitability of the simulator for analysing skill acquisition in robotic crane operations.

Felix Dreger, Sarah Kuhlmann, Frank Weichert, Gerhard Rinkenauer
Open Access
Article
Conference Proceedings

An Update on International Robotic Wheelchair Development

Disability knows no borders, so the development of assistive technology is an international effort. This review is a follow-up to our previous comprehensive review (Leaman 2017) and a recent mini-review (Sivakanthan 2022). The transition from Power Wheelchair to Robotic Wheelchair (RW), with operating modes such as Docking, Guide Following, and Path Planning for Autonomous Navigation, has become an attainable goal. Thanks to the revolution in Aerial Drones for the consumer market, many of the necessary algorithms for the RW software have already been developed. The challenge is to put forward a system that will be embraced by the population it is meant to serve. The Human Computer Interface (HCI) will have to be interactive, with all input and output methods depending on the user’s physical capabilities. In addition, all operating modes have to be customizable based on the preferences of each user. Variables such as maximum speed and minimum distance to obstacles are input conditions for many operating modes that will impact the user’s experience. The HCI should be able to explain its decisions in order to increase its trustworthiness over time. This may take the form of verbal communication or visual feedback projected into the user’s field of view, such as augmented reality. Given the commitment of the international research community and the growing demand, a commercially viable RW should become a reality within the next decade. This will have a positive impact on millions of seniors and people with disabilities, their caregivers, and the governments paying for long-term care programs. The RW will pay for itself by reducing the number of caregiver hours needed to provide the same level of independence. The RW should even positively impact the economy, since some users will have the confidence to return to work and many will be able to participate in social events.

Jesse Leaman, Hung La, Bing Li
Open Access
Article
Conference Proceedings

Requirements for Successful Human Robot Collaboration: Design Perspectives of Developers and Users in the Scope of the EU Horizon Project FELICE

To be successful, the development, implementation, and establishment of human-robot collaboration (HRC) should be based on an objective, human-centered requirements analysis. However, developers often neglect the fact that users may hold significantly different but highly relevant perspectives due to their task-related experiences. In the EU Horizon FELICE project, which is developing a team cobot as a support system for assembly workers, two focus groups (technical developers vs. users) were conducted. The participants discussed the requirements and possible challenges for successful HRC using the example of a handover task. Both focus groups emphasize usefulness, reliability, and safety as the most important criteria for successful HRC, user trust, and user acceptance. Technical developers stress the importance of precise timing, avoidance of task interruptions, and the provision of relevant information during collaboration, while the users highlight that HRC can create unsafe and stressful situations due to poor or absent communication, low system reliability, and a lack of safety. This underscores the need for a general understanding of the collaborative task design and for specific information about the individual actions and events throughout the collaborative task. This may be implemented via training, which both groups consider important. This example shows that potential human-centered requirements, which affect direct technical requirements, are at the forefront of the developers' view. Users, in contrast, focus on the outcome and the impact on the worker as driving the requirements. Despite this gulf, the implications for adapting the design process are minor in this particular case.

Felix Dreger, Melanie Karthaus, Yannick Metzler, Felice Tauro, Vincenzo Carrelli, Georgios Athanassiou, Gerhard Rinkenauer
Open Access
Article
Conference Proceedings

A Method for Human-Robot Collaborative Assembly Action Recognition Based on Skeleton Data and Transfer Learning

Human-robot collaborative assembly (HRCA) has become a vital technology in the current context of intelligent manufacturing. To ensure the efficiency and safety of the HRCA process, robots must rapidly and accurately recognize human assembly actions. However, due to the complexity and variability of the human state, it is challenging to recognize such actions accurately. Furthermore, in the absence of a large-scale assembly action dataset, a model constructed only from data obtained in a single assembly scenario demonstrates limited robustness when applied to other situations. To achieve rapid and cost-effective action recognition, this paper proposes a method for human action recognition based on skeleton data and transfer learning. First, we screen the action samples that are similar to assembly actions from the NTU-RGB+D dataset to build the source dataset and reduce the dimension of its skeleton data. A Long Short-Term Memory (LSTM) network is then used to learn universal features from the source dataset. Second, we use a Microsoft Kinect to collect skeleton data of human assembly actions as the initial target dataset and use the sliding time window method to expand its size. After aligning the two datasets, a gradient freezing strategy is adopted during the transfer learning process to transfer the features learned from the source dataset to the recognition of HRCA actions. Third, the transfer model is validated through a small-scale reducer assembly task. The experimental results demonstrate that the proposed method can achieve assembly action recognition rapidly and cost-effectively while ensuring a certain level of accuracy.
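A minimal PyTorch sketch of the described transfer strategy is given below; SkeletonLSTM, the 50-dimensional joint vectors, 30-frame windows, and six assembly classes are illustrative assumptions rather than the authors' configuration.

```python
# Sketch: pretrain an LSTM on NTU-RGB+D-style skeleton sequences, freeze its
# recurrent weights, and fine-tune only a new classifier head on the small,
# window-expanded assembly dataset. Dimensions are illustrative.
import torch
import torch.nn as nn

class SkeletonLSTM(nn.Module):
    def __init__(self, joint_dim=50, hidden=128, n_classes=10):
        super().__init__()
        self.lstm = nn.LSTM(joint_dim, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, time, joint_dim)
        feats, _ = self.lstm(x)
        return self.head(feats[:, -1])    # classify from the last time step

def sliding_windows(seq, win=30, stride=10):
    """Expand one long recording into overlapping fixed-length samples."""
    return [seq[s:s + win] for s in range(0, len(seq) - win + 1, stride)]

# 1) Pretraining on the (reduced-dimension) source dataset is omitted here.
model = SkeletonLSTM(n_classes=10)

# 2) Gradient freezing: keep the pretrained recurrent features fixed and
#    replace the head for the assembly-action label set.
for p in model.lstm.parameters():
    p.requires_grad = False
model.head = nn.Linear(128, 6)            # e.g. 6 assembly actions

optimizer = torch.optim.Adam(
    filter(lambda p: p.requires_grad, model.parameters()), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# 3) Fine-tune on windowed Kinect data (random tensors stand in for real samples).
long_recording = torch.randn(100, 50)                     # one recording, 100 frames
x = torch.stack(sliding_windows(long_recording))          # -> (8, 30, 50)
y = torch.randint(0, 6, (8,))
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print(float(loss))
```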

Shangsi Wu, Haonan Fang, Peng Wang, Xiaonan Yang, Yaoguang Hu, Jingfei Wang, Chengshun Li
Open Access
Article
Conference Proceedings

Design a robot that is able to…: Gender stereotypes in children’s imagination of robots

Gender stereotypes not only affect relationships between humans but can also be activated in interactions with technologies. This is especially true when technologies take on anthropomorphic or even humanoid aspects, as is the case with many robots. How much gender stereotypes in interactions with robots are already active in children is still a matter of debate. Above all, it can be noted that studies on this issue have been conducted by asking for evaluative judgments with reference to existing robots, whether these were real, simulated, or only represented through images. This approach may have introduced an evaluative bias due to the design of the robots themselves by their manufacturers. Therefore, it appeared necessary to conduct a study in which children were asked to design, more specifically to draw, a robot that was able to perform either a more stereotypically female or a more stereotypically male task.

Method and Procedure: Sixty children (28 girls, 46.6%) aged 11 to 13 years participated in the study. The study was carried out at school in the presence of the teachers and two researchers. During a first phase, the children were asked to individually draw a robot able to perform either the task of shoveling snow (stereotypically a male task) or decorating a house (stereotypically a female task). They were told that the aim of this activity was to help designers implement such robots. The children had no time limit to complete the drawings. In a second phase, after having drawn the robots and still in the classroom, children were asked to fill out a printed questionnaire collecting general personal information (age, gender, and class attended) and further data on the robots (gender, age, anthropomorphic characteristics, and the materials the robot was made of).

Results: Children drew robots that they said were neither male nor female and robots that were male in equal proportions (45% each). Only 6 (10%) female robots were drawn. This result is not related to whether children chose to draw robots for decorating the house or for shoveling snow; drawings for these two tasks were produced in almost the same proportions: 32 robots for decorating the house (53.33%) and 28 robots for shoveling snow (46.67%). It is worth noting that the 6 female robots were all drawn by girls and that the male children drew male robots in greater numbers (N = 18), χ²(2) = 8.81, p < .02. In relation to the level of anthropomorphism, the results suggest that the children wanted to draw robots scarcely resembling human beings: they were almost all made of metal (N = 55; 91.67%), only 15 (25%) had anything resembling a face, and only 3 (5%) had an actual face.

Conclusion: For children, gender stereotypes in reference to robots seem to refer mainly to the fact that these technologies are considered masculine or gender-neutral, regardless of their anthropomorphic characteristics.
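As a side note, the reported statistic can be checked from the counts given above: assuming each child drew exactly one robot, the 2 x 3 child-gender by robot-gender table is 13/9/6 for girls and 14/18/0 for boys, which reproduces χ²(2) = 8.81. The short sketch below (not from the paper's materials) performs that check.

```python
# Quick verification of the reported chi-square, using a contingency table
# reconstructed from the reported counts (27 "neither", 27 male, 6 female
# robots; all 6 female robots drawn by girls; 18 male robots drawn by boys).
from scipy.stats import chi2_contingency

#            neither  male  female
table = [[13,      9,    6],   # girls (n = 28)
         [14,     18,    0]]   # boys  (n = 32)

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")   # chi2(2) = 8.81, p = 0.012
```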

Paola Palmitesta, Margherita Bracci, Francesco Currò, Stefano Guidi, Enrica Marchigiani, Oronzo Parlangeli
Open Access
Article
Conference Proceedings

Combinatorial Effects of Unmanned Vehicles on Operator’s Mental Workload and Performance for Searching

With the advancement of artificial intelligence technology, unmanned vehicle (UV) systems, including unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs), have been appearing in many scenarios with wide-ranging applications. Analyzing the impact of unmanned vehicle system combinations on operators is crucial for enhancing human-computer collaboration efficiency. The research in this paper presents the effects of different combinations of UAVs and UGVs (1 UAV + 1 UGV, 1 UAV + 2 UGVs, 2 UAVs + 1 UGV, 2 UAVs + 2 UGVs) on search performance and on the operator's mental workload when accomplishing search tasks. Task completion times and subjective data from operators (N = 16) were collected using PsychoPy and questionnaires, respectively. The results indicate that doubling the number of controllable unmanned vehicles does not improve the UV utilization rate, but instead lengthens completion times and increases the operator's mental workload. Since an excessive number of unmanned vehicles can have a negative impact on task performance, the insights in this paper are helpful for the design of unmanned vehicle systems and for research on human-computer collaboration.

Yonghao Huang, Omar Alqatami, Wei Zhang
Open Access
Article
Conference Proceedings

Re-expression of manual expertise through semi-automatic control of a teleoperated system

While the search for new solvents in the chemical industry is of utmost importance with respect to environmental considerations, this domain remains strongly tied to highly manual and visual inspection tasks performed by human experts. As the manipulated chemicals may pose a critical danger (CMR substances), mechanical protection barriers are used (fume hoods, gloveboxes). This, in turn, can induce postural discomfort in the long term. Carrying out this task using a remotely controlled robot to reproduce the desired vial motions would alleviate these postural constraints. Nevertheless, the adoption of such a system will depend on its ability to transcribe the users' expertise. Particular attention must be paid to the intuitiveness of the system (transparency of the actions performed, relevance of the perceptual feedback, etc.) and, in particular, to the fidelity of the movements performed in relation to the user's commands. However, the extent of the rotational movements to be generated and the interactivity of the task complicate the problem, both with regard to the motor capacities of industrial robots and to the transparency and responsiveness of the control. To tackle the problem of guaranteeing a secure and reactive expression of the manual characteristics of this task, we propose to separate the control of movement into two parts: control of the path (the set of spatial poses) and control of the trajectories associated with this path (speed, direction of travel along the path). The user can then partially control the robot's movements by choosing the type of generic, secured path and modulating the trajectory performed on this path in real time. Although this drastically limits the possibilities for interaction, we assume that this teleoperated system can enable this type of observation task to be carried out as effectively as direct manipulation. This hypothesis was tested through an experiment in which a reading task, less dangerous but with characteristics similar to the application task, had to be performed using different variants of trajectory modulation. The experiment consisted of reading words printed on four white capsules (dimensions 6 x 12 mm) placed into cylindrical vials (dimensions 16 mm x 70 mm). Four randomly selected vials were tested with each variant. Users first had to perform the task via direct handling, then under conditions secured by a protection barrier. Users were then invited to perform the task using different trajectory modulation variants (modulation and passive viewing of a pre-recorded video, and modulation of the trajectory of a Franka-Emika Panda robot performing the task in real time in front of a monocular Logitech Brio 4K camera). After each trial of a variant, users evaluated different aspects of the variant (manual and visual performance, ease of use, acceptability of the interface) through a questionnaire. During the trials, various objective criteria were also measured (number and nature of interactions with the interface, time taken, and degree of success in the task). The experiment was carried out with 37 subjects (age: 27 ± 5; 20 females). The recorded data showed that the proportion of successes, as well as the subjects' perceptions of visual performance, comfort of use, and acceptability of the interface, were similar and high for all the variants. This suggests that the task is indeed achievable via the proposed interface.
However, the data also showed that average task completion times with the trajectory modulation variants were significantly longer than with the direct-handling variants, which implies that the proposed remote semi-automatic control procedure does not yet achieve satisfactory performance regarding execution time. An interface allowing more reactive manipulation of the vial's movements seems necessary and will be tested in a future experiment.
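The sketch below illustrates the path/trajectory separation in its simplest form (not the authors' controller): a pre-validated path of vial poses is indexed by a parameter s in [0, 1], and the operator's only real-time command is the signed speed of travel along s. The sinusoidal roll profile, speed limit, and time step are illustrative assumptions.

```python
# Minimal sketch of separating path control (fixed, secured sequence of poses)
# from trajectory control (user-modulated speed and direction along the path).
import numpy as np

# Generic, secured path: vial roll angle as a function of the path parameter s.
path_s = np.linspace(0.0, 1.0, 200)
path_angle = np.deg2rad(180.0) * np.sin(np.pi * path_s)   # 0 -> 180 deg -> 0

def pose_on_path(s):
    """Interpolate the pre-defined pose for path parameter s (clamped)."""
    s = float(np.clip(s, 0.0, 1.0))
    return float(np.interp(s, path_s, path_angle))

def step(s, user_speed, dt=0.02, max_speed=0.5):
    """Advance along the path: the user commands only signed speed along s."""
    v = float(np.clip(user_speed, -max_speed, max_speed))
    return float(np.clip(s + v * dt, 0.0, 1.0))

# Simulated teleoperation loop: the operator scrubs forward, pauses, reverses.
s = 0.0
for user_speed in [0.4] * 50 + [0.0] * 20 + [-0.2] * 30:
    s = step(s, user_speed)
angle = pose_on_path(s)
print(f"s = {s:.2f}, commanded vial roll = {np.degrees(angle):.1f} deg")
```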

Erwann Landais, Nasser Rezzoug, Vincent Padois
Open Access
Article
Conference Proceedings

Identification of management and supervision critical tasks for a multi-effector automated system. An applied cognitive approach

Recent advances in Artificial Intelligence (AI) have opened up new possibilities for automation, particularly in dynamic, high-risk environments such as fighter aircraft operations. Pilots' activities will change considerably as a consequence. This will be particularly the case when fighter pilots have to manage and supervise the behaviour of future automated multi-effector systems in the post-release phase. These systems will have the ability to adjust their own actions and will therefore become the pilot's partners, capable of interacting like real teammates. This capability is being explored using the theoretical concept of Human Autonomy Teaming (HAT). Recent studies in this field (O'Neil, 2022) point to the importance of several characteristics needed to establish and maintain this human-system cooperation. Exploratory research carried out within a French fighter squadron identified a set of tasks making up this new activity, linked to the use of these future multi-effector systems. This study highlighted two key issues: (i) the need to measure the importance of each task in relation to the overall mission, and (ii) the relevance of quantifying the reduction in mental effort enabled by the automation of these tasks. Our current study focused on identifying the tasks assessed as critical by fighter pilots when using these multi-effector systems in the post-release phase. We also aimed to determine which tasks would benefit from automation, in order to reduce the cognitive cost without compromising operational performance. To this end, we immersed 21 pilots in the scenario of an air operation, including the tasks inherent in managing the multi-effector system after release. The pilots assessed the criticality of each task for the success of the mission and estimated the mental effort required under each operating mode, which incorporated increasing levels of automation, in order to reveal the potential cognitive benefits of automating each task. These operating modes were adapted from the cooperation modes (Hoc, 2006). Our results show significant differences in criticality and in the mental effort required, depending on the task and the level of automation. These findings make it possible to identify a set of tasks linked to the management of the firing plan as priorities for the integration of automation into new weapon systems. This research underlines the imperative of understanding both the cognitive and operational needs of pilots in this technological evolution for effective cooperation between human and system in high-risk environments.

Benjamin Coulomb, Julien Donnot, Aurélie Klein, Jean-Marc Andre, Françoise Darses
Open Access
Article
Conference Proceedings

Bidirectional Human-AI/Machine Collaborative and Autonomous Teams: Risk, Trust and Safety

We address the bidirectional challenges in developing and managing interdependence for AI/machine collaboration in autonomous human-machine teams. Recent advances surrounding Large Language Models have increased apprehension among the public and users about the next generation of AI for collaboration and human-machine teams. The growing anxieties concern the risk, trust, and safety implications of the potential uses of AI/machines in open environments, including unknown issues that might also arise. These concerns represent major hurdles to the development of verified and validated engineered systems involving bi-directionality across the human-machine frontier. Bi-directionality is a state of interdependence. It requires understanding the design and operational consequences that machine agents may have on humans and, interdependently, the design and operational effects that humans may have on machine agents. Current discussions of human-AI interactions focus on the impact of AI on human stakeholders and on potential ways of involving humans in computational interventions (e.g., human factors, data annotation, approval for drone actions), but these discussions overlook the interdependent need for a machine to intervene for dysfunctional humans (e.g., in 2015, the copilot aboard a Germanwings airliner committed suicide, killing all aboard; in 2023, a pilot ejected from an F-35, allowing the plane to fly unguided for an additional 60 miles). Technology is advancing rapidly: self-driving cars, drones able to fly and land autonomously, self-landing reusable rockets, and Air Force loyal wingmen. The technology is available today for bi-directional AI/machine collaboration and autonomous human-machine teams to better protect human life now and in the future. Thus, despite the engineering challenges faced, we believe that the technical challenges associated with humans and AI/machines cannot be adequately addressed unless the social concerns related to risk, trust, and safety raised by bi-directional forces are also taken into consideration.

William Lawless
Open Access
Article
Conference Proceedings

Human Autonomy Teaming: proposition of a new model of trust

The literature on trust between a human and a technological system is abundant. In this context, trust does not seem to follow a simple dynamic, given the multiple factors that affect it: the system's mode of communication, its appearance, the severity of possible system failures, factors favoring recovery, etc. In this work, we propose a model of the dynamics of a human agent's trust towards an autonomous system (Human Autonomy Teaming, HAT) that is inspired by a hysteresis cycle. A hysteresis cycle reflects a delayed effect in the behavior of materials, known as inertia. Following the same principle, the variation in trust would be based on a non-linear relationship between trust and expectation. These variations would thus appear as interactions occur (like a discrete variable) rather than on a continuous time scale. Furthermore, we suggest that trust varies depending on: the conformity of expectations, the previous level of trust, the duration for which a good or bad level of trust has been maintained, and the interindividual characteristics of the human agent. Expectations reflect the evaluation of the situation estimated by the human agent on the basis of the knowledge at their disposal and the expected performance of the system. At each confrontation with reality, if perceived reality agrees with what was expected, the expectations are compliant; otherwise they are non-compliant. Depending on the initial state of trust, these expectations will influence the variation in trust, which is determined through the hysteresis cycle. At the two ends of the cycle, the level of trust is characterized as either calibrated trust or distrust. Indeed, trust does not increase towards a maximum, but towards an optimal level: calibrated trust, a level of trust adapted to the capabilities of the autonomous system. Conversely, trust decreases to a level of distrust, which corresponds to the situation where the individual does not trust the system and rejects it. In our context of use, the individual is obliged to continue interacting with the autonomous system, which opens the possibility of overcoming this distrust and restoring all or part of the initial trust. We propose that maintaining this level of calibrated trust or distrust results in an inertia effect: the longer trust is maintained at one of these levels, the greater the inertia. Thus, calibrated trust established over a short period of time will be more affected by non-compliant expectations than calibrated trust established over the long term. Furthermore, the evolution of trust is influenced by individual criteria. Although the model described here is generic, it can be personalized according to the predispositions of the human agent: propensity to trust, personality traits, attitudes towards technological systems, etc. The model presented is not intended to debate the nature of trust; it illustrates and explains the dynamics of trust, a key factor in the HAT relationship, both at the origin of this interaction and for the results it produces.
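To make the proposed dynamics concrete, the sketch below gives one illustrative, discrete implementation of the hysteresis idea (not the authors' formal model): trust is updated only at interaction events, is bounded by a distrust floor and a calibrated-trust ceiling, and accumulates inertia the longer it dwells at either extreme, so long-established calibrated trust reacts less to non-compliant expectations. All constants are assumptions.

```python
# Discrete, hysteresis-style trust update: changes happen per interaction,
# within [distrust, calibrated] bounds, and shrink with dwell time at an
# extreme (inertia). Constants are illustrative.
def update_trust(trust, dwell, expectations_met,
                 calibrated=0.9, distrust=0.1, base_step=0.15):
    """Return (new_trust, new_dwell) after one interaction."""
    at_extreme = trust >= calibrated or trust <= distrust
    dwell = dwell + 1 if at_extreme else 0
    inertia = 1.0 / (1.0 + dwell)          # longer dwell -> smaller changes
    step = base_step * inertia
    trust += step if expectations_met else -step
    return min(max(trust, distrust), calibrated), dwell

# A run with compliant interactions followed by a streak of non-compliant ones:
trust, dwell = 0.5, 0
for met in [True] * 8 + [False] * 4:
    trust, dwell = update_trust(trust, dwell, met)
    print(f"expectations {'met' if met else 'violated'} -> trust {trust:.2f}")
```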

Helene Unrein, Théodore Letouzé, Jean-Marc André, Sylvain Hourlier
Open Access
Article
Conference Proceedings

Robotics and autonomous systems in public realm: an exploration of human, ethical and societal issues in emergency first response operations

In the face of increasing threats from climate change and natural hazards, the need for faster, safer, and more effective first response operations has become paramount. This has led to a growing focus on the potential of robotic aids and autonomous systems to support first responders in their duties. While these technologies hold promise for more efficient onsite operations and reduced risk exposure for first responders, there are emerging concerns about their adaptability to real-environment constraints, their usability, and their societal impacts. The scientific literature mentions only high-level concerns about the human-centric approach and generic ethical issues, but these are worth identifying and eliciting in parallel with the evolution of technical requirements and specifications, in order to build the capacity to estimate the extent to which new operating methods and procedures will impact victims and responders, as well as other stakeholders. Guidelines to steer the choices of emergency personnel already exist, for instance in the case of medical personnel, but first response automation might imply unknown or indefinite dilemmas on aspects such as fairness and discrimination, false or excessive expectations, privacy, physical and psychological safety, and liability. The paper proposes a review of the current status of human and societal issues in robotics and automation, eliciting issues specific to human factors and ergonomics in order to foster the human-centric approach called for by the European Union.

Gabriella Duca, Raffaella Russo, Vittorio Sangermano
Open Access
Article
Conference Proceedings