Human Autonomy Teaming and AI Metacognition in Maritime Threat Assessment

Open Access Article, Conference Proceedings
Authors: Kathryn Schulze, Adele Gallant, Tanya S Paul, Cindy Chamberland, Daniel Lafond, Sebastien Tremblay, Heather Neyedli

Abstract: Human-autonomy teaming (HAT) is implemented across numerous industries as a means of increasing workload capacity without increasing worker cognitive load. However, autonomous systems face a major sociotechnical integration challenge when they must collaborate with human operators, which hinders their effectiveness. Specifically, human-AI teamwork comes with new cognitive costs and skill requirements for both humans and artificial agents. These gaps can be overcome by improving shared understanding and mutual adaptation, specifically through human-AI co-learning (HACL) of teamwork and taskwork. We hypothesize that, to be effective, HAT systems must do more than have human and AI counterparts learn to perform the required taskwork: they must implement HACL to learn how to engage together in teamwork processes, developing the mutual understanding and trust needed for effective mission management and adaptation. Implementing an adaptive command and control process with adjustable HAT, augmented by AI metacognition, has significant potential to foster HACL.

Cognitive Shadow (CS) is an expert-policy-capture toolkit that automatically learns human decision patterns using a combination of supervised machine learning algorithms for classification or regression. Its main goal is to learn from experts and then provide real-time automation support, enhancing HAT effectiveness through judgmental bootstrapping. Moreover, CS provides real-time, dynamic model adjustments based on immediate user feedback, enabling continuous improvement of its decision-making recommendations. New AI metacognition capabilities have expanded CS, using a recursive approach to model its own reliability based on situation attributes. The meta-model supervises the decision support model, learning to predict, on a 0-1 scale, when it is likely to be correct and when it is at greater risk of being wrong. This AI metacognition capability provides an empirically grounded reliability metric to help the human collaborator decide whether to rely on the AI. For HAT systems, it also allows a self-confidence threshold to be set: the threshold permits autonomous decisions for high-certainty model predictions and reduces AI autonomy for low-certainty cases.

HAT systems have been successfully integrated into various industries, including aspects of national defence. In Canadian Arctic waterways, climate change continues to open new routes and thereby increase maritime traffic, necessitating more efficient surveillance strategies such as HACL. Our framework was tested in simulated maritime surveillance scenarios set in Canadian Arctic waterways, where human operators assessed entities and assigned them threat levels. Concurrently, CS was deployed to capture decision-making patterns, aligning AI threat assessments with those of human operators. Using a workload perception and situational awareness questionnaire together with trust and self-confidence scales, we quantify the human factors associated with implementing HACL. Additionally, performance outcomes in the surveillance scenarios are quantitatively assessed through key metrics, including classification accuracy, critical change detection, time to classify, and omission rates.
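As an illustration of the metacognitive mechanism described above, the following is a minimal Python sketch assuming scikit-learn-style models. The class, feature handling, and 0.85 threshold are hypothetical choices for exposition, not the actual Cognitive Shadow implementation.

```python
# Minimal sketch of reliability-based autonomy gating (illustrative only;
# model choices, features, and threshold are assumptions, not the actual
# Cognitive Shadow implementation).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

class MetacognitiveAssessor:
    """Pairs a threat-assessment model with a meta-model that estimates,
    on a 0-1 scale, how likely the base model is to be correct."""

    def __init__(self, autonomy_threshold: float = 0.85):
        self.base = RandomForestClassifier()  # learns operator decision patterns
        self.meta = RandomForestClassifier()  # learns the base model's reliability
        self.autonomy_threshold = autonomy_threshold

    def fit(self, X, y):
        self.base.fit(X, y)
        # Recursive step: label each situation by whether the base model's
        # (cross-validated) prediction matched the operator's decision, then
        # train the meta-model to predict that correctness from the same
        # situation attributes.
        correct = (cross_val_predict(self.base, X, y, cv=5) == y).astype(int)
        self.meta.fit(X, correct)
        return self

    def assess(self, x):
        x = np.asarray(x).reshape(1, -1)
        threat = self.base.predict(x)[0]
        # Estimated P(base model is correct) for this situation; assumes both
        # correct and incorrect cases occurred in the training data.
        reliability = self.meta.predict_proba(x)[0, list(self.meta.classes_).index(1)]
        # High self-confidence -> act autonomously; low -> defer to the human.
        mode = "autonomous" if reliability >= self.autonomy_threshold else "defer_to_human"
        return threat, reliability, mode
```

Raising or lowering autonomy_threshold operationalizes the adjustable autonomy described above: a higher threshold restricts autonomous action to the situations where the meta-model's estimated reliability is strongest.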
This ongoing work contributes to the knowledge needed to design effective HACL systems, offers new applied cognitive science perspectives on human and AI-agent collaboration, and provides a new testbed with benchmark data for iteratively testing successive versions of this new HACL capability.
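To make the four performance metrics concrete, here is a small Python sketch of how they could be computed from a simulated trial log. The Trial fields and the definition of an omission are assumed for illustration, not the study's actual logging schema.

```python
# Hypothetical scoring of a simulated surveillance trial log. Field names
# and the log structure are assumptions for illustration, not the study's
# actual data format.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Trial:
    true_threat: int                    # ground-truth threat level
    assigned_threat: Optional[int]      # assigned level; None = omission
    time_to_classify: Optional[float]   # seconds; None when omitted
    critical_change: bool               # entity underwent a critical change
    change_detected: bool               # that change was flagged by the team

def score(trials: list[Trial]) -> dict[str, float]:
    classified = [t for t in trials if t.assigned_threat is not None]
    critical = [t for t in trials if t.critical_change]
    return {
        "classification_accuracy": sum(t.assigned_threat == t.true_threat
                                       for t in classified) / max(len(classified), 1),
        "critical_change_detection": sum(t.change_detected
                                         for t in critical) / max(len(critical), 1),
        "mean_time_to_classify": sum(t.time_to_classify
                                     for t in classified) / max(len(classified), 1),
        "omission_rate": 1 - len(classified) / max(len(trials), 1),
    }
```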

Keywords: Human Factors, Human-Autonomy Teaming, Human-AI Co-Learning, Metacognition

DOI: 10.54941/ahfe1007173

