Abstract
In order to trust the predictions of a machine learning model, it is necessary to understand the factors that contribute to those predictions. For probabilistic and uncertainty-aware models, this means understanding not only the reasons for the predictions themselves, but also the reasons for the model's level of confidence in them. In this paper, we show how existing explainability methods can be extended to uncertainty-aware models and how these extensions can be used to identify the sources of uncertainty in a model's predictive distribution. In particular, by adapting permutation feature importance, partial dependence plots, and individual conditional expectation plots, we demonstrate that novel insights into model behaviour can be obtained and that these methods can measure the impact of individual features on both the entropy of the predictive distribution and the log-likelihood of the ground-truth labels under that distribution. Through experiments on both synthetic and real-world data, we demonstrate the utility of these approaches for understanding both the sources of uncertainty and their impact on model performance.
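To make the idea concrete, the following is a minimal sketch (not the paper's exact implementation) of one of the adaptations the abstract describes: permutation feature importance scored against the entropy of the predictive distribution rather than against accuracy. The function names (`predictive_entropy`, `permutation_entropy_importance`) and the toy model are illustrative assumptions; only NumPy is assumed.

```python
import numpy as np


def predictive_entropy(probs):
    """Shannon entropy of each row of an (n_samples, n_classes) probability matrix."""
    eps = 1e-12  # guard against log(0)
    return -np.sum(probs * np.log(probs + eps), axis=1)


def permutation_entropy_importance(predict_proba, X, n_repeats=5, seed=0):
    """Change in mean predictive entropy when each feature column is permuted.

    A large positive value suggests the feature helps reduce the model's
    uncertainty: breaking its association with the other features makes the
    predictive distribution more diffuse on average.
    """
    rng = np.random.default_rng(seed)
    base = predictive_entropy(predict_proba(X)).mean()
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # permute one feature, keep the rest intact
            scores.append(predictive_entropy(predict_proba(Xp)).mean())
        importances[j] = np.mean(scores) - base
    return importances
```

As a usage sketch, a toy binary classifier whose confidence is driven by an interaction between the first two (correlated) features will show positive entropy importance for those features, while an unused noise feature scores exactly zero.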
Publisher
Springer Science and Business Media LLC