1. Akhtar, N., Mian, A.: Threat of adversarial attacks on deep learning in computer vision: a survey. IEEE Access 6, 14410–14430 (2018). https://doi.org/10.1109/ACCESS.2018.2807385
2. Bastounis, A., Hansen, A.C., Vlačić, V.: The mathematics of adversarial attacks in AI - why deep learning is unstable despite the existence of stable neural networks. arXiv:2109.06098 [cs.LG] (2021)
3. Beerens, L., Higham, D.J.: Adversarial ink: Componentwise backward error attacks on deep learning. IMA J. Appl. Math. (2023). https://doi.org/10.1093/imamat/hxad017
4. Beuzeville, T., Boudier, P., Buttari, A., Gratton, S., Mary, T., Pralet, S.: Adversarial attacks via backward error analysis. Working paper or preprint, hal-03296180, version 3 (2021). https://ut3-toulouseinp.hal.science/hal-03296180
5. Fawzi, A., Fawzi, O., Frossard, P.: Analysis of classifiers’ robustness to adversarial perturbations. Mach. Learn. 107, 481–508 (2018)