User Trust Towards an AI-assisted Healthcare Decision Support System under Varied Explanation Formats and Expert Opinions
Authors: Da Tao, Zehua Liu, Tingru Zhang, Chengxiang Liu, Tieyan Wang
Abstract: While Artificial Intelligence (AI) has been increasingly applied in healthcare contexts, how AI recommendations should be explained to achieve higher user trust remains to be determined. This study aimed to investigate users' trust towards an AI-assisted healthcare decision support system under varied explanation formats and expert opinions. Twenty participants took part in a lab-based experiment in which they completed a series of dosage adjustment tasks in chronic disease care scenarios with the help of a simulated AI-assisted decision support system. Four explanation formats and three types of expert opinion were examined. Data on subjective trust, task performance, and physiological measures were collected. The results showed that explanation format had significant effects on subjective trust, task performance, and skin conductance, while expert opinion had significant effects on subjective trust and task performance. An interaction effect between explanation format and expert opinion was found on compliance rate. It appears that AI recommendations that are explained through counterfactual reasoning and endorsed by medical experts are likely to achieve higher user trust. The findings can inform the design of explainable AI in AI-assisted healthcare contexts.
Keywords: Explainable AI, Trust, Healthcare, Explanation Format, Expert Opinion
DOI: 10.54941/ahfe1004186