Affective Analysis of Explainable Artificial Intelligence in the Development of Trust in AI Systems
Open Access · Conference Proceedings Article
Authors: Ezekiel Bernardo, Rosemary Seva
Abstract: The rise of Explainable Artificial Intelligence (XAI) has been a game changer for the growth of Artificial Intelligence (AI) powered systems. By providing human-level explanations, it systematically addresses the most significant issue AI faces: the black-box problem arising from the complex hidden layers of the deep learning and machine learning models that power it. Fundamentally, XAI allows users to learn how the AI operates and arrives at decisions, enabling cognitive calibration of trust and subsequent reliance on the system. This conclusion has been supported by various studies across different contexts and has motivated the development of newer XAI techniques. However, as human-computer interaction and social science research suggests, these findings may be limited because the emotional component, which also arises from the interaction, was not considered. Emotions have long been known to play a dominant role in decision-making, as they can be rapidly and unconsciously infused into judgments. This suggests that XAI may facilitate trust calibration not solely because of the cognitive information it provides but also because of the emotions its explanations evoke. Since this idea has not been explored, this study examines the effects of emotions associated with interacting with XAI on trust, reliance, and explanation satisfaction. One hundred twenty-three participants took part in an online experiment anchored in an image classification testbed. The premise was that they had been hired to classify different species of animals and plants, with an XAI-equipped image classification AI available to give them recommendations. At the end of each trial, they rated the emotions they felt when interacting with the XAI, their trust in the system, and their satisfaction with the explanation. Reliance was measured by whether they accepted the AI's recommendations. Results show that users who felt surprisingly happy and trusting emotions reported high trust, reliance, and satisfaction. In contrast, fearfully dismayed and anxiously suspicious emotions had a significant negative relationship with satisfaction. As supported by the post-experiment interviews, the study surfaced three critical findings on the affective functionality of XAI. First, the emotions users develop are attributable mainly to the design and overall composition of the explanation rather than the information it carries. Second, trust and reliance develop only from positive emotions: users may not trust or rely on an AI system, even one with a meaningful explanation, if it evokes negative emotions. Third, explanation satisfaction can be triggered by both positive and negative emotions; the former stems mainly from the presentation of the XAI, while the latter stems from understanding the limitations of the AI.
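To make the measurement setup concrete, below is a minimal sketch of how the per-trial measures described above could be tabulated: reliance as the proportion of accepted recommendations, and self-reported emotion ratings related to trust and satisfaction. The column names, example values, and the use of a Pearson correlation are illustrative assumptions only; the paper does not publish its analysis code.

```python
# Hypothetical sketch of the per-trial measures described in the abstract.
# Column names, example values, and the correlation test are assumptions;
# the paper does not publish its analysis code.
import pandas as pd
from scipy.stats import pearsonr

# Each row is one participant-trial: self-reported ratings (e.g., on a
# 5-point scale) plus whether the participant accepted the AI's recommendation.
trials = pd.DataFrame({
    "positive_emotion": [4.5, 2.0, 3.8, 4.1],  # e.g., "surprisingly happy"
    "negative_emotion": [1.0, 4.2, 1.5, 1.2],  # e.g., "anxiously suspicious"
    "trust":            [4.0, 2.5, 3.9, 4.3],
    "satisfaction":     [4.2, 3.0, 4.0, 4.4],
    "accepted":         [1, 0, 1, 1],          # reliance indicator per trial
})

# Reliance: proportion of trials where the AI's recommendation was accepted.
reliance_rate = trials["accepted"].mean()
print(f"Reliance rate: {reliance_rate:.2f}")

# Direction of the emotion-outcome relationships reported in the abstract.
for emotion in ("positive_emotion", "negative_emotion"):
    for outcome in ("trust", "satisfaction"):
        r, p = pearsonr(trials[emotion], trials[outcome])
        print(f"{emotion} vs {outcome}: r = {r:+.2f} (p = {p:.3f})")
```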
Keywords: XAI, Trust, Affective Analysis
DOI: 10.54941/ahfe1002861