Abstract
This paper develops an account of the opacity problem in medical machine learning (ML). Guided by pragmatist assumptions, I argue that opacity in ML models is problematic insofar as it potentially undermines the achievement of two key purposes: ensuring generalizability and optimizing clinician–machine decision-making. Three opacity amelioration strategies are examined: explainable artificial intelligence (XAI) as the predominant approach, challenged by two revisionary strategies, reliabilism and interpretability by design. Comparing the three strategies, I argue that interpretability by design is the most promising strategy for overcoming opacity in medical ML. Looking beyond the individual opacity amelioration strategies, the paper also contributes to a deeper understanding of both the problem space and the solution space regarding opacity in medical ML.
Funder
Deutsche Forschungsgemeinschaft
Carl-Zeiss-Stiftung
Baden-Württemberg Stiftung
Publisher
University Library System, University of Pittsburgh