Abstract
In recent years, many neuroimaging studies have begun to integrate gradient-based explainability methods to provide insight into key features. However, existing explainability approaches typically generate a point estimate of importance and provide no insight into the degree of uncertainty associated with an explanation. In this study, we present a novel approach for estimating explanation uncertainty for convolutional neural networks (CNNs) trained on neuroimaging data. We train a CNN to classify individuals with schizophrenia (SZs) and controls (HCs) using resting-state functional magnetic resonance imaging (rs-fMRI) dynamic functional network connectivity (dFNC) data. We apply Monte Carlo batch normalization (MCBN) and generate an explanation after each stochastic forward pass using layer-wise relevance propagation (LRP). We then examine whether the resulting distribution of explanations differs between SZs and HCs, and we examine the relationship between MCBN-based LRP explanations and standard LRP explanations. We find a number of significant differences in LRP relevance between SZs and HCs, and we find that standard LRP values frequently diverge from the MCBN relevance distribution. This study provides a novel approach for gaining insight into the level of uncertainty associated with gradient-based explanations in neuroimaging and represents a significant step toward increasing the reliability of explainable deep learning methods in clinical settings.
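The MCBN-plus-LRP procedure described in the abstract can be sketched on a toy model. This is a minimal illustration, not the authors' implementation: the linear "network", the mini-batch size, the number of stochastic passes, and the use of input-times-gradient relevance as a stand-in for LRP are all assumptions made for brevity. The key idea it demonstrates is MCBN's source of stochasticity (re-drawing batch-normalization statistics from random training mini-batches at inference time), which yields a distribution of explanations that the single deterministic explanation can be compared against.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 training samples with 10 features (a stand-in for dFNC features).
X_train = rng.normal(size=(200, 10))
w = rng.normal(size=10)   # weights of a toy linear "network" after batch norm
x = rng.normal(size=10)   # one test sample to explain

EPS = 1e-5

def relevance(x, mu, var, w):
    """Input-times-gradient relevance for the batch-normalized linear model
    y = w . ((x - mu) / sqrt(var + EPS)); a simple stand-in for LRP here."""
    grad = w / np.sqrt(var + EPS)  # dy/dx for this model
    return x * grad

# Deterministic explanation: BN statistics taken over the full training set.
r_point = relevance(x, X_train.mean(axis=0), X_train.var(axis=0), w)

# MCBN: T stochastic passes, each using statistics of a random mini-batch,
# giving a distribution of explanations for the same test sample.
T, batch_size = 500, 32
samples = np.empty((T, x.size))
for t in range(T):
    batch = X_train[rng.choice(len(X_train), size=batch_size, replace=False)]
    samples[t] = relevance(x, batch.mean(axis=0), batch.var(axis=0), w)

# Per-feature 95% interval of the MCBN relevance distribution, and a flag
# for features whose deterministic relevance falls outside that interval
# (analogous to the divergence the abstract reports for standard LRP).
lo, hi = np.percentile(samples, [2.5, 97.5], axis=0)
outside = (r_point < lo) | (r_point > hi)
print("features where point relevance leaves the MCBN interval:",
      int(outside.sum()))
```

In a real CNN the same loop would keep the batch-normalization layers in training mode at inference, feed a fresh mini-batch alongside the test input on each pass, and run full LRP through the network to produce each relevance map.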
Publisher
Cold Spring Harbor Laboratory