The Impact of Explanation Design on User Perception in Autonomous Driving Scenarios
Open Access
Article
Conference Proceedings
Authors: Shuting Jin, Fang Le, Chen Xingtong, Stephen Jia Wang
Abstract: The ability of autonomous vehicles (AVs) to communicate their decisions effectively is essential for user trust, safety, and acceptance. Explainable AI (XAI) research in the AV domain has emphasized transparency, yet most studies have focused on what information should be conveyed, when it should be delivered, and through which modality it should be presented. However, few studies have examined the distinct impacts of rational and affective explanation styles across driving scenarios. To bridge this gap, this study explores how rational and affective explanation styles affect user perceptions in representative autonomous driving scenarios. We conducted an online experiment using a 3 (driving scenario: vehicle following, lane changing, emergency braking) × 3 (explanation style: no explanation, rational explanation, affective explanation) mixed factorial design. Driving scenarios were presented through first-person simulation videos (15–25 seconds), and the explanations in each scenario were provided in both voice and text. A total of 281 participants were randomly assigned to one of the three driving scenarios and experienced all three explanation styles. After each condition, participants evaluated five dimensions of user perception using validated Likert-scale measures: explanation satisfaction, perceived risk, trust, emotional experience, and intention to use. After excluding invalid responses, 270 valid samples were analyzed using two-way ANOVA with post-hoc tests. The analysis revealed several key findings. First, explanation style showed a significant main effect on user perception. Both rational and affective explanations significantly reduced perceived risk (F(2, 801) = 12.51, p < .001); post-hoc comparisons indicated that affective explanations (M_diff = -0.34, p < .001) and rational explanations (M_diff = -0.25, p = .001) were more effective than no explanation. Explanation style also had a significant effect on trust (F(2, 801) = 8.21, p < .001): participants reported higher trust with both affective (M_diff = 0.27, p < .001) and rational explanations (M_diff = 0.16, p = .04) than with no explanation. For emotional experience, explanation style likewise had a significant effect (F(2, 801) = 13.74, p < .001): affective explanations produced more positive experiences than both rational (M_diff = 0.18, p = .032) and no explanations (M_diff = 0.37, p < .001), while rational explanations also outperformed no explanation (M_diff = 0.19, p = .019). Second, driving scenario significantly influenced explanation satisfaction (F(2, 801) = 12.62, p < .001), with emergency braking (M_diff = 0.31, p < .001) and lane changing (M_diff = 0.24, p = .001) yielding higher satisfaction than vehicle following, suggesting a stronger demand for transparency in higher-risk contexts. However, no significant interaction effects were found between scenario and explanation style, indicating that the effects of explanation style were stable across scenarios. This study confirms the importance of explanations in critical driving scenarios, extends the scope of XAI research in AVs by highlighting the role of affective explanations, and offers guidance for the design of explanation mechanisms that support transparency, trust, and user experience. More broadly, the findings underscore the value of explanation strategies for human-AI communication in safety-critical domains, contributing to the development of trustworthy and user-oriented intelligent systems.
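For readers interested in the style of analysis described above, the sketch below illustrates a 3 × 3 two-way ANOVA with post-hoc comparisons in Python using the pingouin library. The dataset, column names (participant, scenario, explanation, trust), and random scores are hypothetical placeholders; the authors' actual data and analysis code are not part of this record.

```python
# Minimal sketch of a 3 x 3 two-way ANOVA with post-hoc comparisons,
# analogous to the analysis reported in the abstract. All data and column
# names below are hypothetical placeholders, not the study's dataset.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
scenarios = ["vehicle_following", "lane_changing", "emergency_braking"]
styles = ["no_explanation", "rational", "affective"]

# Long format: 270 participants x 3 explanation styles = 810 ratings,
# consistent with the residual degrees of freedom reported (F(2, 801)).
rows = []
for pid in range(270):
    scenario = scenarios[pid % 3]            # each participant sees one scenario
    for style in styles:                     # and rates all three explanation styles
        rows.append({
            "participant": pid,
            "scenario": scenario,
            "explanation": style,
            "trust": rng.normal(3.5, 0.8),   # placeholder Likert-scale score
        })
df = pd.DataFrame(rows)

# Two-way ANOVA: main effects of scenario and explanation style,
# plus their interaction.
aov = pg.anova(data=df, dv="trust", between=["scenario", "explanation"])
print(aov)

# Post-hoc (Tukey HSD) comparisons among the three explanation styles.
posthoc = pg.pairwise_tukey(data=df, dv="trust", between="explanation")
print(posthoc)
```

The same pattern would be repeated for each of the five perception measures (explanation satisfaction, perceived risk, trust, emotional experience, intention to use) by swapping the dependent variable.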
Keywords: Autonomous Driving, Explainable AI (XAI), User Perception
DOI: 10.54941/ahfe1006873