1. Alzantot, M., Sharma, Y., Elgohary, A., Ho, B.-J., Srivastava, M., & Chang, K.-W. (2018). Generating natural language adversarial examples. In: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (pp. 2890–2896).
2. Andriushchenko, M., & Flammarion, N. (2020). Understanding and improving fast adversarial training. In: Advances in Neural Information Processing Systems, 33 (pp. 16048–16059).
3. Athalye, A., Carlini, N., & Wagner, D. (2018). Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In: International Conference on Machine Learning (pp. 274–283).
4. Baytaş, İ. M., & Deb, D. (2023). Robustness-via-synthesis: Robust training with generative adversarial perturbations. Neurocomputing, 516, 49–60. https://doi.org/10.1016/j.neucom.2022.10.034
5. Carlini, N., Mishra, P., Vaidya, T., Zhang, Y., Sherr, M., Shields, C., ... & Zhou, W. (2016). Hidden voice commands. In: 25th USENIX Security Symposium (USENIX Security 16) (pp. 513–530).