Augmented decision making model for responsible actors in healthcare
Open Access Article (Conference Proceedings)
Authors: Francesco Polese, Luca Carrubbo, Antonietta Megaro
Abstract: The purpose of this paper is to understand whether the use of artificial intelligence (AI)-based tools may enable augmented decision-making by responsible actors (Spohrer, 2021) in healthcare. The biggest challenge for AI in healthcare is its full applicability in daily clinical practice, hindered by fragmented, poor-quality data and further complicated by patients' reluctance to share data (Shinners et al., 2020) over privacy concerns. This work is motivated by the search for models and methodologies capable of overcoming these criticalities. In this sense, transparent AI deserves much deeper exploration because it would enable augmented decisions (not merely automatic decisions) arising from effective human-machine interaction (HMI) (Zhu et al., 2018). Such transparency can also strengthen patients' perception of the reliability and safety of the tool (de Fine Licht, 2020), improving their trust in healthcare operations (Das, 2020). A literature review has been carried out to propose a framework (Share-to-Care) that encourages greater acceptance of new technologies by healthcare actors through reasoned transparency. Methodologically, 'theory synthesis' (Jaakkola, 2020) helps us derive drivers and suggestions for researchers and practitioners in designing and using AI in healthcare. The findings frame transparent AI as a lever able to foster the spread of collaborative behaviors useful for augmented decision-making powered not only by technologies but mainly by humans.
Keywords: Transparency, artificial intelligence, augmented decisions, decision-making models, healthcare management
DOI: 10.54941/ahfe1002560