Defining and Modeling AI Technical Fluency for Effective Human Machine Interaction
Open Access Conference Proceedings Article
Authors: Susan Campbell, Rosalind Nguyen, Elizabeth Bonsignore, Breana Carter, Catherine Neubauer
Abstract: Working and interacting with artificial intelligence (AI) and autonomous systems is becoming an integral part of many jobs in both civilian and military settings. However, AI fluency skills, which we define as the competencies that allow one to effectively evaluate and successfully work with AI, and the training that supports them, have not kept pace with the development of AI technology. Specific subgroups of individuals, such as those in cyber and emerging-technology professions, will be required to team with increasingly sophisticated software and technological components while also ensuring that their skills are equivalent to, and not ‘overmatched’ by, those of the AI. If not addressed, the short-term consequences of this gap may include degraded performance of sociotechnical systems that use AI technologies and mismatches between humans’ trust in AI and the AI’s actual capabilities. In the long term, such gaps can lead to problems with appropriately steering, regulating, and auditing AI capabilities. We propose that assessing and supporting AI fluency is an integral part of promoting the appropriate future use of AI and autonomous systems. The impact of AI fluency on the successful use of AI may differ depending on the role the human plays with respect to the agent. For example, agents built on machine learning may update their behavior based on changes in the environment, changes in the task, or changes in input from humans. When an agent changes its behavior, humans must detect and adapt to the change. Furthermore, future agents may require human input to learn new behaviors or shape existing ones. These examples, in which further intervention is needed from the human, emphasize the cyclic relationship between the agent and the human. However, humans vary in their ability to detect and respond to such changes depending on their skills and experience. This is just one potential aspect of the impact of one’s AI fluency on future human-AI interactions.
To make incremental progress toward optimal performance, it is crucial to understand how and where differences in various aspects of AI fluency may help or hinder the successful use of AI. The impact of AI fluency will be even stronger in the domain of interaction with autonomous systems built on AI technology, where agents may exhibit physical and informational behaviors that affect human teammates’ safety. In this paper, we present a working definition and an initial model of AI Technical Fluency (ATF) that relates predictors of ATF to potential outcome measures reflecting one’s degree of ATF, including having accurate mental models of agents and the ability to interact with or use agents successfully. Additionally, we propose a preliminary set of assessments that might establish an individual’s ATF and discuss how (and to what degree) different aspects of ATF may affect the various outcome measures. By gaining a better understanding of the factors that contribute to one’s ATF and of the impacts and limitations of ATF on the successful use of AI, we hope to contribute to the ongoing research and development of new methods of interaction between humans and agents.
Keywords: Human-Machine Teaming, Individual Differences, Assessment
DOI: 10.54941/ahfe1003743