1. Adebayo, J., Gilmer, J., Muelly, M., Goodfellow, I., Hardt, M., & Kim, B. (2018). Sanity checks for saliency maps. In Advances in neural information processing systems. Curran Associates, Inc.
2. Alvarez-Melis, D., & Jaakkola, T. S. (2018). Towards robust interpretability with self-explaining neural networks. In Proceedings of the 32nd international conference on neural information processing systems (pp. 7786–7795). Curran Associates Inc., Red Hook, NY, USA, NIPS’18.
3. Antorán, J., Bhatt, U., Adel, T., Weller, A., & Hernández-Lobato, J. M. (2021). Getting a CLUE: A method for explaining uncertainty estimates. In International conference on learning representations.
4. Arras, L., Osman, A., & Samek, W. (2022). CLEVR-XAI: A benchmark dataset for the ground truth evaluation of neural network explanations. Information Fusion, 81, 14–40. https://doi.org/10.1016/j.inffus.2021.11.008
5. Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K. R., & Samek, W. (2015). On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE, 10(7), e0130140. https://doi.org/10.1371/journal.pone.0130140