Authors: Farahani, Farzad V.; Fiok, Krzysztof; Lahijanian, Behshad; Karwowski, Waldemar; Douglas, Pamela K.
Abstract
Deep neural networks (DNNs) have transformed the field of computer vision and currently constitute some of the best models of the representations learned via hierarchical processing in the human brain. In medical imaging, these models have achieved human-level, and at times better-than-human, performance in the early diagnosis of a wide range of diseases. However, the goal is often not only to predict group membership or diagnose accurately but also to provide explanations that support the model's decision in a form a human can readily interpret. This limited transparency has hindered the adoption of DNN algorithms across many domains. Numerous explainable artificial intelligence (XAI) techniques, taking somewhat divergent approaches, have been developed to peer inside the “black box” and make sense of DNN models. Here, we suggest that these methods may be considered in light of the interpretation goal: producing functional or mechanistic interpretations, developing archetypal class instances, or assessing the relevance of particular features or mappings on a trained model in a post-hoc capacity. We then review recent applications of post-hoc relevance techniques to neuroimaging data. Finally, this article suggests a method for comparing the reliability of XAI methods, especially for deep neural networks, along with their advantages and pitfalls.
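To make "post-hoc relevance" concrete, the sketch below shows one of the simplest such techniques: a vanilla gradient saliency map, which attributes a trained model's class score back to the input pixels. This example is illustrative only and is not taken from the article; the ResNet-18 backbone, the random input, and the tensor shapes are assumptions chosen for a self-contained demo.

```python
# Minimal sketch of a post-hoc relevance method: vanilla gradient saliency.
# Assumptions (not from the paper): an untrained torchvision ResNet-18 stands
# in for a trained diagnostic DNN, and a random tensor stands in for an image.
import torch
import torchvision.models as models

model = models.resnet18(weights=None)  # stand-in for a trained model
model.eval()

# Dummy input image; requires_grad lets us back-propagate to the pixels.
image = torch.rand(1, 3, 224, 224, requires_grad=True)

logits = model(image)
target = logits.argmax(dim=1).item()  # explain the predicted class

# Back-propagate the target-class score to the input.
logits[0, target].backward()

# Relevance map: max absolute gradient across color channels (H x W).
saliency = image.grad.abs().max(dim=1).values.squeeze(0)
print(saliency.shape)  # torch.Size([224, 224])
```

In practice, this raw-gradient map is exactly the kind of explanation whose reliability the article proposes to compare; more elaborate attribution methods refine the same gradient signal.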
Cited by: 18 articles.