Author:
Philipp William, Yashwanthika R., Sikha O. K., Benitez Raul
Abstract
Although deep learning networks generally outperform traditional machine learning approaches based on tailored features, they often lack explainability. To address this issue, numerous methods have been proposed, particularly for image-related tasks such as image classification or object segmentation. These methods generate a heatmap that visually explains the classification by identifying the regions most important to the classifier. However, such explanations remain purely visual. To overcome this limitation, we introduce a novel CNN explainability method that identifies the most relevant regions in an image and generates a decision tree based on meaningful regional features, providing a rule-based explanation of the classification model. We evaluated the proposed method on a synthetic blobs dataset and subsequently applied it to two cell image classification datasets with healthy and pathological patterns.
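The final step described above, turning regional features into a rule-based explanation, can be sketched with an off-the-shelf decision tree. This is a minimal illustration, not the authors' implementation: the feature names (`region_area`, `mean_intensity`) and the toy data are assumptions standing in for features extracted from the CNN's salient regions.

```python
# Hypothetical sketch: fit a shallow decision tree on per-region features
# (e.g., area and mean intensity of CNN-highlighted regions) and export
# human-readable rules. Feature names and data are illustrative only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
# Toy regional features: one row per image, columns = [region_area, mean_intensity]
X = rng.random((100, 2))
y = (X[:, 0] > 0.5).astype(int)  # synthetic "pathological" label

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
rules = export_text(tree, feature_names=["region_area", "mean_intensity"])
print(rules)  # if/else rules over the named regional features
```

A shallow depth keeps the extracted rules short enough to read as an explanation rather than a second black box.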
Funder
Universitat Politècnica de Catalunya
Publisher
Springer Science and Business Media LLC