1. Akhtar, N., & Mian, A. (2018). Threat of adversarial attacks on deep learning in computer vision: A survey. IEEE Access, 6, 14410–14430. https://doi.org/10.1109/ACCESS.2018.2807385.
2. Allen-Zhu, Z., & Li, Y. (2022). Feature purification: How adversarial training performs robust deep learning. In IEEE 62nd annual symposium on foundations of computer science, FOCS. https://doi.org/10.1109/FOCS52979.2021.00098.
3. Andriushchenko, M., & Flammarion, N. (2020). Understanding and improving fast adversarial training. In Advances in neural information processing systems, NeurIPS. https://proceedings.neurips.cc/paper/2020/file/b8ce47761ed7b3b6f48b583350b7f9e4-Paper.pdf.
4. Athalye, A., Carlini, N., & Wagner, D.A. (2018). Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In Proceedings of the 35th international conference on machine learning, ICML. http://proceedings.mlr.press/v80/athalye18a.html.
5. Athalye, A., Engstrom, L., Ilyas, A., & Kwok, K. (2018). Synthesizing robust adversarial examples. In Proceedings of the 35th international conference on machine learning, ICML. http://proceedings.mlr.press/v80/athalye18b.html.