1. Aldahdooh, A., Hamidouche, W., & Déforges, O. (2021). Reveal of vision transformers robustness against adversarial attacks. arXiv preprint arXiv:2106.03734.
2. Athalye, A., Carlini, N., & Wagner, D. (2018). Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In International conference on machine learning, pp. 274–283.
3. Bai, Y., Mei, J., Yuille, A. L., & Xie, C. (2021). Are transformers more robust than CNNs? In Advances in neural information processing systems, pp. 26831–26843.
4. Barbu, A., Mayo, D., Alverio, J., Luo, W., Wang, C., Gutfreund, D., Tenenbaum, J., & Katz, B. (2019). ObjectNet: A large-scale bias-controlled dataset for pushing the limits of object recognition models. In Advances in neural information processing systems, pp. 9453–9463.
5. Beyer, L., Hénaff, O. J., Kolesnikov, A., Zhai, X., & van den Oord, A. (2020). Are we done with ImageNet? arXiv preprint arXiv:2006.07159.