Abstract
When is it justified to use opaque artificial intelligence (AI) output in medical decision-making? Consideration of this question is of central importance for the responsible use of opaque machine learning (ML) models, which have been shown to produce accurate and reliable diagnoses, prognoses, and treatment suggestions in medicine. In this article, I discuss the merits of two answers to the question. According to the Explanation View, clinicians must have access to an explanation of why an output was produced. According to the Validation View, it is sufficient that the AI system has been validated using established standards for safety and reliability. I defend the Explanation View against two lines of criticism, and I argue that, within the framework of evidence-based medicine, mere validation seems insufficient for the use of AI output. I end by characterizing the epistemic responsibility of clinicians and pointing out that a mere AI output cannot in itself ground a practical conclusion about what to do.
Publisher
Cambridge University Press (CUP)
Subject
Health Policy; Issues, Ethics and Legal Aspects; Health (Social Science)
Cited by
6 articles.