1. Lundberg, S.M., Lee, S.-I.: A unified approach to interpreting model predictions. In: Proceedings of the 31st International Conference on Neural Information Processing Systems, pp. 4768–4777 (2017)
2. Ribeiro, M.T., Singh, S., Guestrin, C.: Why should I trust you? Explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144 (2016)
3. Anjomshoae, S., Främling, K., Najjar, A.: Explanations of black-box model predictions by contextual importance and utility. In: International Workshop on Explainable, Transparent Autonomous Agents and Multi-Agent Systems, pp. 95–109. Springer (2019)
4. Altmann, A., Toloşi, L., Sander, O., Lengauer, T.: Permutation importance: a corrected feature importance measure. Bioinformatics 26(10), 1340–1347 (2010)
5. Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. In: Proceedings of the 34th International Conference on Machine Learning - Volume 70 (ICML'17), pp. 3145–3153. JMLR.org (2017)