1. Adebayo J, Gilmer J, Muelly M, Goodfellow I, Hardt M and Kim B (2018) Sanity checks for saliency maps. Adv Neural Inf Process Syst 31
2. Agarwal C, Saxena E, Krishna S, Pawelczyk M, Johnson N, Puri I, Zitnik M and Lakkaraju H (2022) OpenXAI: towards a transparent evaluation of model explanations. arXiv preprint arXiv:2206.11104
3. Alvarez-Melis D and Jaakkola TS (2018) On the robustness of interpretability methods. arXiv preprint arXiv:1806.08049
4. Apley DW and Zhu J (2020) Visualizing the effects of predictor variables in black box supervised learning models. J R Stat Soc Ser B 82(4):1059–1086
5. Barocas S, Hardt M and Narayanan A (2019) Fairness and machine learning. http://www.fairmlbook.org