Subject
Artificial Intelligence, Computer Vision and Pattern Recognition, Signal Processing, Software