1. Wang, X., Jin, H., and He, K. Natural Language Adversarial Attack and Defense in Word Level. Available online: https://openreview.net/forum?id=BJl_a2VYPH (accessed on 13 September 2022).
2. Morris, J.X., Lifland, E., Yoo, J.Y., and Qi, Y. (2020). TextAttack: A Framework for Adversarial Attacks in Natural Language Processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). Available online: https://qdata.github.io/secureml-web/4VisualizeBench/ (accessed on 13 September 2022).
3. Carlini, N., and Wagner, D. (2018, May 24). Audio adversarial examples: Targeted attacks on speech-to-text. In Proceedings of the 2018 IEEE Security and Privacy Workshops (SPW), San Francisco, CA, USA.
4. Schönherr, L., Kohls, K., Zeiler, S., Holz, T., and Kolossa, D. (2018). Adversarial attacks against automatic speech recognition systems via psychoacoustic hiding. arXiv preprint arXiv:1808.05665.
5. Goodfellow, I.J., Shlens, J., and Szegedy, C. (2014). Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.