Design for integrating explainable AI for dynamic risk prediction in prehospital IT systems

Open Access
Article
Conference Proceedings
Authors: David Wallstén, Gregory Axton, Anna Bakidou, Eunji Lee, Bengt Arne Sjöqvist, Stefan Candefjord

Abstract: Demographic changes in the West, with an increasingly elderly population, put stress on current healthcare systems, and new technologies are necessary to secure patient safety. AI development shows great promise for improving care, but how necessary it is to explain AI results, and how best to do so, remains to be evaluated in future research. This study designed a prototype of eXplainable AI (XAI) in a prehospital IT system, based on an AI model for risk prediction of severe trauma to be used by Emergency Medical Services (EMS) clinicians. The design was then evaluated with seven EMS clinicians to gather information about usability and AI interaction.

Through ethnography, expert interviews and a literature review, knowledge was gathered for the design. Several ideas, developed through stages of prototyping, were then verified by experts in prehospital healthcare. Finally, a high-fidelity prototype was evaluated by the EMS clinicians. The primary design was based around a tablet, the most common hardware in ambulances. Two input pages were included, with the AI interface working both as an indicator at the top of the interface and as a more detailed overlay. The overlay could be accessed at any time while interacting with the system. It included the current risk prediction, based on the colour codes of the South African Triage Scale (SATS), as well as a recommendation based on guidelines. This was followed by two rows of predictors, for or against a serious condition, ordered from left to right by importance. Beneath this, the most important missing variables were accessible, allowing for quick input.

The EMS clinicians thought that XAI was necessary for them to trust the prediction. They make the final decision, and if they cannot base it on specific parameters, they feel they cannot make a proper judgement. In addition, both the rows of predictors and the missing variables served as reminders of what they might have overlooked in patient assessment, which the EMS clinicians stated is a common issue. A prediction from the AI that differs from their own might cause them to think more about their decision, moving it away from the normally relatively automatic process and likely reducing the risk of bias.

While focused on trauma, the overall design was created to accommodate other AI models as well. Current models for risk prediction in ambulances have so far not shown a clear benefit of artificial neural networks (ANN) over more transparent models. This study can help guide the future development of AI for prehospital healthcare and give insights into the potential benefits and implications of its implementation.
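The overlay logic described above can be illustrated with a minimal sketch. This is not the paper's implementation: the function name, variable names, and the use of signed importance scores (positive supporting a serious condition) are assumptions made for illustration only.

```python
# Hypothetical sketch of the overlay's ordering logic, NOT the study's code.
# Assumes each predictor has a signed, model-derived importance score:
# positive = supports a serious condition, negative = argues against one.

def build_overlay(importances, entered):
    """importances: dict mapping variable name -> signed importance score.
    entered: set of variable names the clinician has already input.
    Returns the two predictor rows and the ranked missing variables."""
    known = {v: s for v, s in importances.items() if v in entered}
    # Row of predictors FOR a serious condition, most important first.
    row_for = sorted((v for v, s in known.items() if s > 0),
                     key=lambda v: -known[v])
    # Row of predictors AGAINST a serious condition, most important first.
    row_against = sorted((v for v, s in known.items() if s < 0),
                         key=lambda v: known[v])
    # Missing variables ranked by absolute importance, for quick input.
    missing = sorted((v for v in importances if v not in entered),
                     key=lambda v: -abs(importances[v]))
    return row_for, row_against, missing
```

For example, with hypothetical scores `{"GCS": -0.8, "systolic_bp": 0.6, "heart_rate": 0.3, "resp_rate": 0.1}` and only GCS and systolic blood pressure entered, the overlay would show systolic_bp in the "for" row, GCS in the "against" row, and prompt first for heart_rate as the most important missing variable.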

Keywords: Healthcare, ambulance, AI, explainable AI, XAI, decision making, risk prediction, interaction design, UX, usability, tablet, IT system

DOI: 10.54941/ahfe1004199

