1. Adebayo J, Gilmer J, Muelly M, Goodfellow I, Hardt M, Kim B (2018) Sanity checks for saliency maps. In: Bengio S, Wallach H, Larochelle H, Grauman K, Cesa-Bianchi N, Garnett R (eds) Advances in neural information processing systems, vol 31. Curran Associates, Inc. https://proceedings.neurips.cc/paper_files/paper/2018/file/294a8ed24b1ad22ec2e7efea049b8737-Paper.pdf
2. Adebayo J, Muelly M, Liccardi I, Kim B (2020) Debugging tests for model explanations. In: Larochelle H, Ranzato M, Hadsell R, Balcan M, Lin H (eds) Advances in neural information processing systems, vol 33. Curran Associates, Inc., pp 700–712. https://proceedings.neurips.cc/paper_files/paper/2020/file/075b051ec3d22dac7b33f788da631fd4-Paper.pdf
3. Alufaisan Y, Marusich L, Bakdash J, Zhou Y, Kantarcioglu M (2021) Does explainable artificial intelligence improve human decision-making? Proc AAAI Conf Artif Intell 35(8):6618–6626. https://doi.org/10.1609/aaai.v35i8.16819
4. Apley DW, Zhu J (2020) Visualizing the effects of predictor variables in black box supervised learning models. J R Stat Soc Ser B: Stat Methodol 82(4):1059–1086
5. Barshan E, Brunet ME, Dziugaite GK (2020) RelatIF: identifying explanatory training samples via relative influence. In: Proceedings of the 23rd international conference on artificial intelligence and statistics (AISTATS), PMLR, vol 108