1. Andriushchenko, M., Flammarion, N.: Understanding and improving fast adversarial training. In: Advances in Neural Information Processing Systems, vol. 33 (2020)
2. Athalye, A., Carlini, N., Wagner, D.: Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In: Dy, J., Krause, A. (eds.) Proceedings of the 35th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 80, pp. 274–283. PMLR (10–15 Jul 2018). https://proceedings.mlr.press/v80/athalye18a.html
3. Golgooni, Z., Saberi, M., Eskandar, M., Rohban, M.H.: ZeroGrad: mitigating and explaining catastrophic overfitting in FGSM adversarial training. arXiv preprint arXiv:2103.15476 (2021)
4. Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. In: International Conference on Learning Representations (ICLR) (2015)
5. Guo, Y., Pan, J.S., Qiu, C., Xie, F., Luo, H., Shang, H., Liu, Z., Tan, J.: SinGAN-based asteroid surface image generation. J. Database Manag. 32(4), 28–47 (2021)