1. Aigrain J, Detyniecki M (2019) Detecting adversarial examples and other misclassifications in neural networks by introspection. arXiv preprint arXiv:1905.09186
2. Akhtar N, Mian A (2018) Threat of adversarial attacks on deep learning in computer vision: a survey. IEEE Access 6:14410–14430
3. Aldahdooh A, Hamidouche W, Déforges O (2021) Revisiting model’s uncertainty and confidences for adversarial example detection. arXiv preprint arXiv:2103.05354
4. Athalye A, Engstrom L, Ilyas A, Kwok K (2018a) Synthesizing robust adversarial examples. In: International conference on machine learning, PMLR, pp 284–293
5. Athalye A, Carlini N, Wagner DA (2018b) Obfuscated gradients give a false sense of security: circumventing defenses to adversarial examples. In: Dy JG, Krause A (eds) Proceedings of the 35th international conference on machine learning (ICML 2018), Stockholmsmässan, Stockholm, Sweden, 10–15 July 2018. Proceedings of machine learning research, vol 80. PMLR, pp 274–283