Abstract
In deep learning-based image classification, the entropy of a neural network’s output is often taken as a measure of its uncertainty. We introduce an explainability method that identifies those features in the input that most impact this uncertainty. Learning the corresponding features by straightforward backpropagation typically leads to results that are hard to interpret. We propose an extension of the recently proposed oriented, modified integrated gradients (OMIG) technique as an alternative that produces perturbations of the input with a visual quality comparable to explainability methods from the literature, but which mark features that have a substantially higher impact on the entropy. The potential benefits of the modified OMIG method are demonstrated by comparison with current state-of-the-art explainability methods on several popular databases. In addition to a qualitative analysis of explainability results, we propose a metric for their quantitative comparison, which evaluates the impact of identified features on the entropy of a prediction.
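The following PyTorch sketch is not the paper's OMIG method; it only illustrates the two quantities the abstract builds on: the Shannon entropy of a classifier's softmax output as an uncertainty measure, and the straightforward backpropagation of that entropy to the input, which the abstract notes yields hard-to-interpret results. The toy model, input shape, and variable names are placeholder assumptions.

    import torch
    import torch.nn.functional as F

    def predictive_entropy(logits: torch.Tensor) -> torch.Tensor:
        """Shannon entropy of the softmax output, a common uncertainty measure."""
        log_p = F.log_softmax(logits, dim=-1)
        return -(log_p.exp() * log_p).sum(dim=-1)

    # Placeholder classifier; any differentiable image classifier would do.
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))

    x = torch.rand(1, 1, 28, 28, requires_grad=True)  # dummy input image
    h = predictive_entropy(model(x))[0]               # scalar uncertainty of the prediction
    h.backward()                                      # straightforward backpropagation of the entropy

    # x.grad holds the raw entropy-sensitivity map; such maps are typically
    # noisy, which is what motivates an OMIG-style refinement instead.
    saliency = x.grad.abs().squeeze()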
Funder
Physikalisch-Technische Bundesanstalt (PTB)
Publisher
Springer Science and Business Media LLC