Artificial Intelligence in Healthcare: The Explainability Ethical Paradox

Open Access
Conference Proceedings Article
Authors: Patrick Seitzinger, Jay Kalra

Abstract: Explainability is among the most debated and pivotal issues in the advancement of Artificial Intelligence (AI) technologies across the globe. The development of AI in medicine has reached a tipping point, with implications across all sectors of healthcare. How we proceed with the issue of explainability will shape the direction and manner in which healthcare evolves. We require new tools that bring us beyond our current levels of medical understanding and capability, yet we limit ourselves to tools that we can fully understand and explain. Implementing a tool that cannot be fully understood by clinicians or patients violates the medical ethic of informed consent. Yet denying patients and the population the attainable benefits of a new resource violates the medical ethics of justice, health equity, and autonomy. Fear of the unknown is not by itself a reason to halt the progression of medicine; many of our current advancements were implemented before their intricacies were fully understood. To convey competence, some subfields of AI research have emphasized validity testing over explainability as a way to verify accuracy and build trust in AI systems. As a tool, AI has shown immense potential in idea generation, data analysis, and pattern identification. AI will never be an independent system and will always require human oversight to ensure healthcare quality and ethical implementation. By using AI to augment, rather than replace, clinical judgement, the caliber of patient care we provide can be enhanced in a safe and sustainable manner. Resolving the explainability paradox in AI requires a multidisciplinary approach that addresses the technical, legal, medical, and ethical aspects of this challenge.
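
The abstract contrasts explainability with validity testing as a basis for trust. As an illustration only (not from the paper), the Python sketch below shows what validity testing of a black-box clinical model can look like: the model's internals are never inspected, but its discriminative accuracy is estimated by cross-validation and confirmed on held-out data. The dataset, model, and metric are assumptions chosen for the example.

```python
# Illustrative sketch of validity testing for a black-box model.
# The model is treated as opaque; trust is assessed only through
# measured performance on data it has not seen.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score, train_test_split

# Example clinical-style dataset (breast cancer diagnostic features).
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

# A random forest serves as the "unexplainable" model here.
model = RandomForestClassifier(n_estimators=200, random_state=0)

# 5-fold cross-validation estimates generalization without any
# insight into how the model reaches its predictions.
cv_scores = cross_val_score(model, X_train, y_train, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {cv_scores.mean():.3f} +/- {cv_scores.std():.3f}")

# Final check on a held-out test set the model never saw in training.
model.fit(X_train, y_train)
test_auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"held-out test AUC: {test_auc:.3f}")
```

In this framing, the cross-validated and held-out scores, not an explanation of the model's internals, are what is offered as evidence of competence; whether that suffices for informed consent is exactly the ethical question the paper raises.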

Keywords: Artificial Intelligence, Explainability, Ethical, Healthcare

DOI: 10.54941/ahfe1003466
