1. A. Athalye, N. Carlini, D. Wagner, Obfuscated gradients give a false sense of security: circumventing defenses to adversarial examples. arXiv preprint arXiv:1802.00420 (2018)
2. F. Liao, M. Liang, Y. Dong, T. Pang, X. Hu, J. Zhu, Defense against adversarial attacks using high-level representation guided denoiser, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 1778–1787
3. R. Sahay, R. Mahfuz, A. El Gamal, A computationally efficient method for defending adversarial deep learning attacks. arXiv preprint arXiv:1906.05599 (2019)
4. S. Cheng, Y. Dong, T. Pang, H. Su, J. Zhu, Improving black-box adversarial attacks with a transfer-based prior. Adv. Neural Inf. Process. Syst. 32, 10934–10944 (2019)
5. S. Qiu, Q. Liu, S. Zhou, C. Wu, Review of artificial intelligence adversarial attack and defense technologies. Appl. Sci. 9(5), 909 (2019)