1. Addepalli, S., Jain, S., & Radhakrishnan, V. B. (2022). Efficient and effective augmentation strategy for adversarial training. In Neural information processing systems (NeurIPS).
2. Athalye, A., Carlini, N., & Wagner, D. (2018). Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In International conference on machine learning (ICML).
3. Azizi, S., Kornblith, S., Saharia, C., Norouzi, M., & Fleet, D. J. (2023). Synthetic data from diffusion models improves imagenet classification. In Transactions on machine learning research (TMLR).
4. Carlini, N., & Wagner, D. (2017). Towards evaluating the robustness of neural networks. In IEEE symposium on security and privacy (SP).
5. Carmon, Y., Raghunathan, A., Schmidt, L., Duchi, J. C., & Liang, P. S. (2019). Unlabeled data improves adversarial robustness. In Neural information processing systems (NeurIPS).