Situation awareness training as a prerequisite for handling complexity in Human-autonomy teaming: Demonstration and experiment proposal

Open Access Article (Conference Proceedings)
Authors: Rune Stensrud, Sigmund Valaker, Aleksander Simonsen, Olav Rune Nummedal

Abstract: Human-autonomy teaming (HAT) is characterized by high degrees of interdependence between humans and machines (Lyons, 2021). This underscores the need for human-autonomy teams (HATs), defined as “at least one human working cooperatively with at least one autonomous agent” (McNeese et al., 2018, p. 262). However, this interdependence may vary, for example according to how well the human(s) and machine(s) can solve subtasks autonomously. Drawing on the extant literature on human decision making, the ability to project future events is essential for prioritizing and using both human and machine resources in ways that accomplish tasks (Endsley, 2000). The question arises as to how humans and machines can be enabled to make such projections together. We focus here on the human’s part of this information processing and decision making, and on the need to adjust the mode of collaboration in response to changes in the environment (Lundberg & Johansson, 2021; Stensrud, Mikkelsen & Valaker, 2023). The human may or may not take the initiative to change the mode of collaboration, such as by engaging in more detailed collaboration. What may ensure that the human is enabled to, and does, take the initiative to change the mode of collaboration? Cognitive and emotional factors, as well as issues such as task load, may influence the degree to which the human shifts from loose to tight control and/or input, or vice versa (Endsley & Garland, 2000), what we may also call nuances in the configuration of the team architecture (O’Neill et al., 2023). Recent reviews indicate that maintaining awareness is critical, yet can be impaired over prolonged periods (Casner & Hutchins, 2019). Specifically, we concentrate on situation awareness (SA) level 3, projecting the future state of elements in the environment, and on the switching from one way of collaborating to another.
In short, in our example this concerns the ability to foresee a change from a relatively stable environment with easily observable entities to one with more complexity regarding the entities to observe and their interrelations. Our reasoning is that if the human is able to form predictions of changes in the environment, they can also be enabled to change their way of collaborating. Given the detrimental effects of time pressure, task load, fatigue, etc., which may impede the forming of sound predictions (Endsley & Garland, 2000), we propose preparation that reduces the risk of such impediments and that empowers the human to make predictions.

Automation can [traditionally] be defined as that in which “the system functions with no/little human operator involvement; however, the system performance is limited to the specific actions it has been designed to do” (Endsley, 2015, “Autonomous Horizons”, p. 3). Autonomy is often characterized in terms of the degree to which the system has the capability to achieve mission goals independently, performing well under significant uncertainties, for extended periods of time, with limited or non-existent communication, and with the ability to compensate for system failures, all without external intervention. Autonomy can be thought of as a significant extension of automation in which very high-level mission-oriented commands will be successfully executed under a variety of possibly not fully anticipated circumstances […] given adequate independence and task execution authority. Autonomy can be considered as well designed and highly capable automation. […] Over the next 30 years […] we will see a gradual evolution of system control, with intermediate levels of autonomy being applied to various functions.
As the autonomy developed becomes more capable over time, can handle a greater range of functions, and can handle greater ranges of variability in the environment, systems will slowly evolve to more autonomous operations for longer periods of time (Endsley, 2015, “Autonomous Horizons”, p. 4).

Having mapped the interdependencies, we indicate ways of formally testing the influence of SA level 3 training on human adaptivity. Firstly, we suggest an inductive approach whereby a group of human operators is followed through their learning and familiarization with the system. Included is the time spent on making plans, and contingency plans, for the employment of the autonomous system. During phases of teaming with the system, we will record the input the humans make to their system interface. We will categorize what type of “orders” and other input the humans make, classifying them according to the delegating-supervisor approach (levels of automation) versus more collaborative types of approaches (mixed initiative and coactive design). This mapping will enable us to describe potential learning cycles in the familiarization and to form hypotheses about the role of SA level 3 training. Our future research endeavors will embark on this and report the empirical results in the near future.

Keywords: Levels of automation, Mixed-initiative design, Co-active design, Human Systems Integration, Systems Engineering, Environmental characteristics, Coordination, Situation awareness, training, Human-autonomy teaming, Experiment

DOI: 10.54941/ahfe1004514
