1. Adebayo, J., Gilmer, J., Muelly, M., Goodfellow, I., Hardt, M., Kim, B.: Sanity checks for saliency maps. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems (NIPS 2018), pp. 9525–9536. Curran Associates Inc., Red Hook (2018)
2. Ahmad, M.A., Eckert, C., Teredesai, A.: Interpretable machine learning in healthcare. In: Proceedings of the 2018 ACM International Conference on Bioinformatics, Computational Biology, and Health Informatics (BCB 2018). ACM Press (2018)
3. Aliman, N.-M., Kester, L.: Hybrid strategies towards safe "self-aware" superintelligent systems. In: Iklé, M., Franz, A., Rzepka, R., Goertzel, B. (eds.) AGI 2018. LNCS (LNAI), vol. 10999, pp. 1–11. Springer, Cham (2018)
4. Alvarez-Melis, D., Jaakkola, T.S.: Towards robust interpretability with self-explaining neural networks. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems (NIPS 2018), pp. 7786–7795. Curran Associates Inc., Red Hook (2018)
5. Arya, V., et al.: One explanation does not fit all: a toolkit and taxonomy of AI explainability techniques. arXiv preprint arXiv:1909.03012 (2019)