1. Ahern, I., Noack, A., Guzman-Nateras, L., Dou, D., Li, B., & Huan, J. (2019). NormLime: A new feature importance metric for explaining deep neural networks. CoRR, abs/1909.04200. Retrieved from http://arxiv.org/abs/1909.04200
2. Alvarez-Melis, D., & Jaakkola, T. S. (2018). On the robustness of interpretability methods. In Proceedings of the 2018 ICML workshop on human interpretability in machine learning. Retrieved from http://arxiv.org/abs/1806.08049
3. Burns, C., Thomason, J., & Tansey, W. (2019). Interpreting black box models via hypothesis testing (pp. 47–57). Association for Computing Machinery. Retrieved from https://arxiv.org/abs/1904.00045v3. 10.1145/3412815.3416889
4. Chattopadhay, A., Sarkar, A., Howlader, P., & Balasubramanian, V. N. (2018). Grad-CAM++: Generalized gradient-based visual explanations for deep convolutional networks. In 2018 IEEE winter conference on applications of computer vision (WACV). 10.1109/WACV.2018.00097
5. Damelin, S. B., & Hoang, N. S. (2018). On surface completion and image inpainting by biharmonic functions: Numerical aspects (Vol. 2018). Hindawi Limited. 10.1155/2018/3950312