1. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2. Ulrich Aïvodji, Alexandre Bolot, and Sébastien Gambs. 2020. Model extraction from counterfactual explanations. arXiv preprint arXiv:2009.01884 (2020).
3. Marco Ancona, Enea Ceolini, Cengiz Öztireli, and Markus Gross. 2017. Towards better understanding of gradient-based attribution methods for deep neural networks. arXiv preprint arXiv:1711.06104 (2017).
4. Varun Chandrasekaran, Kamalika Chaudhuri, Irene Giacomelli, Somesh Jha, and Songbai Yan. 2020. Exploring connections between active learning and model extraction. In 29th USENIX Security Symposium (USENIX Security 20). 1309--1326.
5. Copy, Right? A Testing Framework for Copyright Protection of Deep Learning Models