Affiliation:
1. Department of Artificial Intelligence, Universidad Politécnica de Madrid, 28040 Madrid, Spain
2. Department of Computer Science, Rutgers University, New Brunswick, NJ 08854, USA
Abstract
Due to the success of artificial intelligence (AI) applications in the medical field over the past decade, concerns about the explainability of these systems have increased. Black-box algorithms that make decisions affecting patients must meet reliability requirements that go beyond accuracy alone. Recent advances in AI increasingly emphasize the necessity of integrating explainability into these systems. While most traditional AI methods and expert systems are inherently interpretable, the recent literature has focused primarily on explainability techniques for more complex models such as deep learning. This scoping review critically analyzes the existing literature on the explainability and interpretability of AI methods in the clinical domain. It offers a comprehensive overview of past and current research trends, with the objective of identifying limitations that hinder the advancement of Explainable Artificial Intelligence (XAI) in medicine. These constraints include the diverse requirements of key stakeholders (clinicians, patients, and developers), cognitive barriers to knowledge acquisition, the absence of standardized evaluation criteria, the risk of mistaking explanations for causal relationships, and the apparent trade-off between model accuracy and interpretability. Furthermore, this review discusses possible research directions for overcoming these challenges, including alternative ways of leveraging medical expertise to enhance interpretability in clinical settings, such as data fusion techniques and interdisciplinary assessments throughout the development process, and it emphasizes the importance of accounting for the needs of end users when designing trustworthy explainability methods.
Subject
Fluid Flow and Transfer Processes, Computer Science Applications, Process Chemistry and Technology, General Engineering, Instrumentation, General Materials Science
References: 145 articles.