Publisher: Springer Nature Switzerland