1. Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, and Been Kim. 2018. Sanity checks for saliency maps. Advances in Neural Information Processing Systems 31 (2018).
2. Chirag Agarwal, Nari Johnson, Martin Pawelczyk, Satyapriya Krishna, Eshika Saxena, Marinka Zitnik, and Himabindu Lakkaraju. 2022. Rethinking Stability for Attribution-based Explanations. http://arxiv.org/abs/2203.06877 arXiv:2203.06877 [cs].
3. Chirag Agarwal, Marinka Zitnik, and Himabindu Lakkaraju. 2022. Probing GNN Explainers: A Rigorous Theoretical and Empirical Analysis of GNN Explanation Methods. http://arxiv.org/abs/2106.09078 arXiv:2106.09078 [cs].
4. David Alvarez-Melis and Tommi S. Jaakkola. 2018. On the Robustness of Interpretability Methods. http://arxiv.org/abs/1806.08049 arXiv:1806.08049 [cs, stat].
5. David Alvarez-Melis and Tommi S. Jaakkola. 2018. Towards Robust Interpretability with Self-Explaining Neural Networks. http://arxiv.org/abs/1806.07538 arXiv:1806.07538 [cs, stat].