1. Athalye, A., Carlini, N., & Wagner, D. (2018). Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In J. Dy & A. Krause (Eds.), Proceedings of the 35th international conference on machine learning, proceedings of machine learning research (vol. 80, pp. 274–283). PMLR.
2. Awasthi, P., Frank, N., & Mohri, M. (2020). Adversarial learning guarantees for linear hypotheses and neural networks. In H. Daumé III & A. Singh (Eds.), Proceedings of the 37th international conference on machine learning, proceedings of machine learning research (vol. 119, pp. 431–441). PMLR.
3. Bartlett, P. L., & Mendelson, S. (2002). Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3, 463–482.
4. Ben-Tal, A., El Ghaoui, L., & Nemirovski, A. (2009). Robust optimization (Vol. 28). Princeton University Press.
5. Biggio, B., Corona, I., Maiorca, D., Nelson, B., Šrndić, N., Laskov, P., Giacinto, G., & Roli, F. (2013). Evasion attacks against machine learning at test time. In Joint European conference on machine learning and knowledge discovery in databases (pp. 387–402). Springer.