Affiliation:
1. Institute of Aerospace Thermodynamics, University of Stuttgart, Stuttgart, Germany
2. Institute of Aerospace Thermodynamics, University of Stuttgart, Stuttgart, Germany
Abstract
Feature attribution methods (AMs) are a simple means of explaining the predictions of black-box models such as neural networks. Owing to their conceptual differences, however, the numerous available methods yield ambiguous explanations. While this allows different insights into the model to be obtained, it also complicates the decision of which method to adopt. This paper therefore summarizes the current state of the art regarding AMs, covering the requirements and desiderata of the methods themselves as well as the properties of their explanations. Based on a survey of existing methods, a representative subset consisting of the δ-sensitivity index, permutation feature importance, variance-based feature importance in artificial neural networks, and DeepSHAP is described in greater detail and, for the first time, benchmarked in a regression context. Specifically for this purpose, a new verification strategy for model-specific AMs is proposed. As expected, the explanations' agreement with intuition and with one another clearly depends on the AMs' properties. This has two implications: First, careful reasoning about the selection of an AM is required. Second, it is recommended to apply multiple AMs and combine their insights in order to reduce the model's opacity even further.
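For illustration, the following is a minimal sketch of one of the benchmarked methods, permutation feature importance, in a regression setting. The synthetic data set, random-forest model, number of repetitions, and R² scoring used below are assumptions made for demonstration only and do not correspond to the models or data studied in the paper.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Illustrative synthetic regression data (not the data set used in the paper).
X, y = make_regression(n_samples=500, n_features=5, n_informative=3,
                       noise=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)
baseline = r2_score(y_test, model.predict(X_test))

rng = np.random.default_rng(0)
importances = []
for j in range(X_test.shape[1]):
    scores = []
    for _ in range(10):                                # average over repeated shuffles
        X_perm = X_test.copy()
        X_perm[:, j] = rng.permutation(X_perm[:, j])   # break the link between feature j and the target
        scores.append(r2_score(y_test, model.predict(X_perm)))
    importances.append(baseline - np.mean(scores))     # drop in R^2 = importance of feature j

print({f"feature_{j}": round(v, 3) for j, v in enumerate(importances)})

A feature whose permutation barely degrades the score receives an importance near zero, whereas permuting an informative feature causes a clear drop in R².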
Publisher
Association for Computing Machinery (ACM)