Abstract
In medical image classification tasks such as the detection of diabetic retinopathy from retinal fundus images, it is highly desirable to obtain visual explanations for the decisions of black-box deep neural networks (DNNs). However, gradient-based saliency methods often fail to highlight the diseased image regions reliably. On the other hand, adversarially robust models have more interpretable gradients than plain models but typically suffer from a significant drop in accuracy, which is unacceptable for clinical practice. Here, we show that one can get the best of both worlds by ensembling a plain and an adversarially robust model: maintaining high accuracy while having improved visual explanations. Moreover, our ensemble produces meaningful visual counterfactuals which are complementary to existing saliency-based techniques. Code is available at https://github.com/valentyn1boreiko/Fundus_VCEs.
Publisher
Cold Spring Harbor Laboratory
Cited by 3 articles.