Authors:
Abgrall Gwénolé, Holder Andre L., Chelly Dagdia Zaineb, Zeitouni Karine, Monnet Xavier
Abstract
In the high-stakes realm of critical care, where daily decisions are crucial and clear communication is paramount, understanding the rationale behind Artificial Intelligence (AI)-driven decisions appears essential. While AI has the potential to improve decision-making, its complexity can hinder comprehension of, and adherence to, its recommendations. “Explainable AI” (XAI) aims to bridge this gap, enhancing confidence among patients and doctors. It also helps to meet regulatory transparency requirements, offers actionable insights, and promotes fairness and safety. Yet defining explainability and standardising its assessment remain ongoing challenges, and a trade-off between performance and explainability may be required, even as XAI continues to grow as a field.
Funder
Fondation pour la Recherche Médicale
Société de Réanimation de Langue Française
Publisher
Springer Science and Business Media LLC