1. Alayrac, J.-B., Uesato, J., Huang, P.-S., Fawzi, A., Stanforth, R., & Kohli, P. (2019). Are labels required for improving adversarial robustness? In Advances in neural information processing systems.
2. Andriushchenko, M., & Flammarion, N. (2020). Understanding and improving fast adversarial training. In Advances in neural information processing systems.
3. Athalye, A., Carlini, N., & Wagner, D. (2018). Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In International conference on machine learning (pp. 274–283).
4. Bai, T., Luo, J., Zhao, J., & Wen, B. (2021). Recent Advances in Adversarial Training for Adversarial Robustness. In International joint conference on artificial intelligence (pp. 4312–4321).
5. Buckman, J., Roy, A., Raffel, C., & Goodfellow, I. J. (2018). Thermometer encoding: One hot way to resist adversarial examples. In International conference on learning representations.