Transparency for Trust: Enhancing Acceptance and System Integration of Intelligent AI in Healthcare
Open Access
Article
Conference Proceedings
Authors: Nikita Islam, Ancuta Margondai, Julie Rader, Sara Willox, Cindy Von Ahlefeldt, Mustapha Mouloua, Valentina Ezcurra
Abstract: The integration of intelligent systems into healthcare has transformed how diagnosis, therapy, and clinical decision-making are conceptualized and delivered. Artificial intelligence (AI) now supports a wide range of functions, from predictive analytics to personalized interventions. Despite these advances, the acceptance of AI in healthcare remains uneven, shaped not only by technical performance but also by the degree of transparency surrounding its capabilities and limitations. Without clear communication, trust becomes unstable, oscillating between overreliance and outright rejection. This paper examines transparency as the essential foundation for trust calibration, proposing that transparent AI systems enhance user confidence, preserve the therapeutic alliance, and ultimately contribute to better patient care. Building on prior research in neuroadaptive AI and virtual reality therapy for children with autism spectrum disorder, where transparent EEG-based engagement metrics increased acceptance by clinicians and caregivers, the authors argue that transparency should be understood as a core design principle for the system integration of intelligent AI in healthcare. A synthesis of literature across healthcare AI, trust-in-automation, and human–computer interaction demonstrates consistent evidence that transparency mechanisms improve acceptance. Studies on explainable AI indicate that visual explanations and confidence indicators significantly increase appropriate reliance while reducing the risks of miscalibration. The Human Identity and Autonomy Gap (HIAG) framework provides a valuable lens for interpreting these outcomes, illustrating how transparency mediates trust across cognitive, emotional, and social dimensions. 
Cognitively, transparency clarifies the reliability and scope of AI decision-making; emotionally, it reduces user uncertainty and anxiety; socially, it preserves clinician authority while fostering collaboration with patients and caregivers. Yet transparency must go beyond technical disclosure. Systems must communicate strengths as well as limitations, including bias, data dependency, and contextual blind spots, while ensuring that transparency does not overwhelm users with excessive detail. Evidence also shows that transparency must be culturally adaptive, since trust and adoption vary across professional and cultural contexts, with some contexts prioritizing certainty and governance and others valuing autonomy, discretion, and relational trust. This paper contributes to theory and practice by proposing design and policy guidelines that embed transparency into healthcare AI development. Strategies include adaptive interfaces that communicate uncertainty through confidence dashboards, culturally sensitive explanations that reflect global variability, and training modules that prepare clinicians and caregivers to interpret AI outputs responsibly. By positioning transparency as a prerequisite rather than an afterthought, intelligent systems can be integrated into healthcare workflows in ways that align with human values, safeguard professional autonomy, and foster equitable adoption across diverse settings. Ultimately, transparency transforms AI from a black-box technology into a trusted partner in healthcare innovation. These insights provide not only a conceptual framework for understanding trust calibration in AI-enabled healthcare but also a roadmap for developing intelligent systems that deliver meaningful, safe, and ethically grounded improvements in patient care, ensuring that future applications truly advance medical practice and human well-being.
Keywords: Transparency in Artificial Intelligence, Trust Calibration, Healthcare AI Integration, Explainable AI (XAI), Human-System Interaction in Healthcare
DOI: 10.54941/ahfe1007088
AHFE Open Access