Evaluating the Effect of Time on Trust Calibration of Explainable Artificial Intelligence

Open Access
Conference Proceedings
Authors: Ezekiel Bernardo, Rosemary Seva

Abstract: Explainable Artificial Intelligence (XAI) has come to play a significant role in human-computer interaction. The cognitive resources it provides allow humans to understand the complex algorithms powering Artificial Intelligence (AI), largely resolving the acceptance and adoption barriers caused by a lack of transparency. As a result, more systems leverage XAI, spurring interest and efforts to develop newer and more capable techniques. However, although the research stream is expanding, little is known about the extent of XAI's effectiveness for end-users. Existing work has measured XAI effects either at a single moment in time or cross-sectionally across different types of users. Filling this gap can improve the interpretation of existing studies and clarify the practical limits of XAI for trust calibration. To address it, a multi-session experiment was conducted in which 103 participants used and evaluated XAI in an image classification application over three days. The measures considered were perceived usefulness (for cognitive contribution), integral emotions (for affective change), trust, and reliance, analyzed via covariance-based structural equation modelling. Results showed that time moderates only the paths from cognition to trust and reliance, and from trust to reliance, with the effect dampening over time. Affective change, in contrast, remained consistent across all interactions. This suggests that if an AI system uses XAI over a longer time frame, prioritization should be on its affective properties (i.e., features that trigger emotional change) rather than purely on its cognitive purpose, to maximize the positive effect of XAI.

Keywords: Explainable AI, XAI, Artificial Intelligence, AI, Trust, Affect, Time, Moderation, SEM

DOI: 10.54941/ahfe1003280
