1. Amina Adadi and Mohammed Berrada. 2018. Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access 6 (2018), 52138–52160.
2. Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, and Been Kim. 2018. Sanity checks for saliency maps. In Proceedings of the NeurIPS, Vol. 31. Curran Associates. Retrieved from https://proceedings.neurips.cc/paper/2018/file/294a8ed24b1ad22ec2e7efea049b8737-Paper.pdf.
3. Julius Adebayo, Michael Muelly, Ilaria Liccardi, and Been Kim. 2020. Debugging tests for model explanations. In Proceedings of the NeurIPS.
4. Tameem Adel, Zoubin Ghahramani, and Adrian Weller. 2018. Discovering interpretable representations for both deep generative and discriminative models. In Proceedings of the ICML, Vol. 80. PMLR.
5. Philip Adler, Casey Falk, Sorelle A. Friedler, Tionney Nix, Gabriel Rybeck, Carlos Scheidegger, Brandon Smith, and Suresh Venkatasubramanian. 2018. Auditing black-box models for indirect influence. Knowledge and Information Systems 54, 1 (2018), 95–122.