Exploring Trust and Performance in Human-Automation Interaction: Novel Perspectives on Incorrect Reassurances from Imperfect Automation

Open Access
Article
Conference Proceedings
Authors: Jin Yong Kim, Szu Tung Chen, Corey Lester, X. Jessie Yang

Abstract: Consider the following hypothetical scenario: Sarah, a skilled pharmacist, is responsible for filling medication bottles for prescription orders. Recently, her pharmacy introduced an AI computer vision system that scans the filled bottles and identifies the medication, serving as an additional layer of verification before dispensing. Sarah receives a prescription for patient Noah, who needs medication "X". Five possible cases could occur:

Case A: Sarah correctly fills the prescription bottle with pill "X", and the automated decision aid correctly predicts it as "X".
Case B: Sarah correctly fills the prescription bottle with pill "X", and the automated decision aid incorrectly predicts it as "Y".
Case C: Sarah incorrectly fills the prescription bottle with pill "Z", and the automated decision aid correctly predicts it as "Z".
Case D: Sarah incorrectly fills the prescription bottle with pill "Z", and the automated decision aid incorrectly predicts it as "Y".
Case E: Sarah incorrectly fills the prescription bottle with pill "Z", and the automated decision aid incorrectly predicts it as "X".

This scenario has characteristics that are not examined in existing research paradigms on trust in and dependence on automation, in which automated decision aids give recommendations based on raw information. In the hypothetical scenario, by contrast, the input to the AI system is human-provided data (i.e., a pharmacist fills the bottle). This research investigates the effects of the different cases on participants' trust and performance. We developed a testbed in which human participants performed mental rotation tasks with the help of imperfect automation. In the experimental task, participants were presented with a reference image alongside five answer choices and had to select the choice that matched the reference image. Participants provided initial answer choices, received automation predictions, and made final answer choices over 60 trials. Thirty-five university students participated in the experiment. The study employed a within-subject design to examine the cases. Dependent variables were trust adjustment, performance, reaction time, and confidence.

Results revealed that Case E, in which participants received incorrect reassurance from automation for wrong initial answers, produced the largest trust decrement and the worst final performance. This result confirms our hypothesis that Case E is problematic and requires further in-depth investigation. Case B, in which participants' correct initial answer was followed by an incorrect machine prediction, produced the second-largest trust decrement. In addition, across the majority of cases, invalid recommendations harmed users' trust more than valid recommendations increased it, which aligns with the "negativity bias" reported in prior literature. Furthermore, for each pattern, participants' trust decrement was greater when their final answers were wrong, indicating that valid recommendations are penalized when final performance is harmed and invalid recommendations are penalized less when final performance is not harmed. These findings contribute to a fundamental understanding of how human trust is affected by automation failures when the input information is provided by humans. The insights have practical implications for the design and implementation of semi-automated decision aids in domains where safety and effectiveness are critical.
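To make the five-case taxonomy concrete, the following is a minimal sketch (not from the paper) of how a single verification trial could be labeled, assuming the prescribed medication, the human's fill, and the automation's prediction are compared as plain identifiers; the function name classify_case and the code structure are illustrative assumptions, not the authors' implementation.

```python
def classify_case(prescribed: str, human_fill: str, auto_prediction: str) -> str:
    """Label one trial with case A-E as described in the abstract.

    A: human correct, automation confirms the (correct) fill
    B: human correct, automation disagrees with the fill
    C: human incorrect, automation correctly identifies the wrong fill
    D: human incorrect, automation predicts something else entirely
    E: human incorrect, automation (incorrectly) reports the prescribed medication
    """
    if human_fill == prescribed:
        # Human filled the right medication.
        return "A" if auto_prediction == human_fill else "B"
    # Human filled the wrong medication.
    if auto_prediction == human_fill:
        return "C"   # automation flags the actual (wrong) content
    if auto_prediction == prescribed:
        return "E"   # incorrect reassurance: the error looks like no error
    return "D"       # automation wrong, but in a different way


# Example: Sarah fills "Z" for a prescription of "X" and the aid predicts "X" (Case E).
assert classify_case("X", "Z", "X") == "E"
assert classify_case("X", "X", "X") == "A"
assert classify_case("X", "X", "Y") == "B"
assert classify_case("X", "Z", "Z") == "C"
assert classify_case("X", "Z", "Y") == "D"
```

Under these assumptions, Case E is the only outcome in which the automation's output matches the prescription while the bottle's contents do not, which is why it can silently mask a human error.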

Keywords: Trust in automation, human-automation interaction, human-computer interaction, automated decision aid

DOI: 10.54941/ahfe1004412

