Measuring Trust in a Simulated Human Agent Team Task
Open Access Article — Conference Proceedings
Authors: Cherrise Ficke, Arianna Addis, Daniel Nguyen, Kendall Carmody, Amanda Thayer, Jessica Wildman, Meredith Carroll
Abstract: Due to improvements in agent capabilities through technological advancements, human-agent teams (HATs) are expanding into more dynamic and complex environments. Prior research suggests that human trust in agents plays a pivotal role in the team’s success and mission effectiveness (Yu et al., 2019; Kohn et al., 2020). Therefore, understanding and being able to accurately measure trust in HATs is critical. The literature presents numerous approaches to capturing and quantifying trust in HATs, including behavioral indicators, self-report survey items, and physiological measures. However, deciding when and which measures to use can be an overwhelming and tedious process. To combat this issue, we previously developed a theoretical framework to guide researchers on which measures to use and when to use them in a HAT context (Ficke et al., 2022). More specifically, we evaluated common measures of trust in HATs according to eight criteria and demonstrated the utility of different types of measures in various scenarios according to how dynamic trust was expected to be and how often teammates interacted with one another. In the current work, we operationalize this framework in a simulation-based research setting. In particular, we developed a simulated search and rescue task paradigm in which a human teammate interacts with two subteams of autonomous agents to identify and respond to targets such as enemies, IEDs, and trapped civilians. Using the Ficke et al. (2022) framework as a guide, we identified self-report, behavioral, and physiological measures to capture human trust in their autonomous agent counterparts at the individual, subteam, and full-team levels. Measures included single-item and multi-item self-report surveys, chosen for their accessibility and prevalence across research domains, as well as their straightforward ability to assess multifaceted constructs.
These self-report measures will also be used to assess the convergent validity of newly developed unobtrusive (i.e., behavioral, physiological) measures of trust. Further, using the six-step Rational Approach to Developing Systems-based Measures (RADSM) process, we cross-referenced theory on trust with available data from the paradigm to develop context-appropriate behavioral measures of trust. The RADSM process differs from traditional data-led approaches in that it is simultaneously a top-down (theory-driven) and bottom-up (data-driven) approach (Orvis et al., 2013). Through this process, we identified a range of measures, such as usage behaviors (e.g., use or misuse of an entity), monitoring behaviors, response time, and other context-specific actions, to capture trust. We also incorporated tools to capture physiological responses, including electrocardiogram readings and galvanic skin responses. These measures will be utilized in a series of simulation-based experiments examining the effect of trust violation and repair strategies on trust, as well as to evaluate the validity and reliability of the measurement framework. This paper will describe the methods used to identify, develop, and/or implement these measures; the resulting measure implementation and how the measurement toolbox maps onto the evaluation criteria (e.g., temporal resolution, diagnosticity); and guidance for implementation in other domains.
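To illustrate the kind of behavioral measure the RADSM process can yield, the sketch below computes two simple behavioral proxies for trust in an agent from a simulated interaction log: a compliance rate (a usage behavior) and a mean response time. This is a minimal illustration only; the record fields, agent identifiers, and function names are hypothetical and are not drawn from the paradigm described in the paper.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical interaction record from a simulated HAT trial.
# Field names are illustrative, not from the actual paradigm.
@dataclass
class Interaction:
    agent_id: str                   # which autonomous agent was involved
    recommendation_followed: bool   # did the human comply with the agent?
    response_time_s: float          # seconds from recommendation to response

def behavioral_trust_indicators(log: list[Interaction], agent_id: str) -> dict:
    """Summarize simple behavioral proxies for trust in one agent:
    compliance rate (usage behavior) and mean response time."""
    events = [e for e in log if e.agent_id == agent_id]
    if not events:
        return {"compliance_rate": None, "mean_response_time_s": None}
    return {
        # bools are ints in Python, so mean() yields the compliance fraction
        "compliance_rate": mean(e.recommendation_followed for e in events),
        "mean_response_time_s": mean(e.response_time_s for e in events),
    }

# Toy log: two interactions with "uav_1", one with "ugv_1"
log = [
    Interaction("uav_1", True, 2.1),
    Interaction("uav_1", False, 3.4),
    Interaction("ugv_1", True, 1.8),
]
print(behavioral_trust_indicators(log, "uav_1"))
```

Aggregating the same indicators over agents within a subteam, or over all agents, would give the subteam- and full-team-level analogues mentioned above.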
Keywords: human-agent teams, unobtrusive measurements, physiological measures, dynamic trust
DOI: 10.54941/ahfe1003560