1. Athalye A, Carlini N, Wagner D (2018a) Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In: Dy J, Krause A (eds) Proceedings of the 35th International Conference on Machine Learning, Proceedings of Machine Learning Research, vol 80. PMLR, Stockholmsmässan, Stockholm, Sweden, pp 274–283. http://proceedings.mlr.press/v80/athalye18a.html
2. Athalye A, Engstrom L, Ilyas A, Kwok K (2018b) Synthesizing robust adversarial examples. In: Dy J, Krause A (eds) Proceedings of the 35th International Conference on Machine Learning, Proceedings of Machine Learning Research, vol 80. PMLR, Stockholmsmässan, Stockholm, Sweden, pp 284–293. http://proceedings.mlr.press/v80/athalye18b.html
3. Balaji Y, Goldstein T, Hoffman J (2019) Instance adaptive adversarial training: Improved accuracy tradeoffs in neural nets. arXiv:1910.08051
4. Balunovic M, Vechev M (2020) Adversarial training and provable defenses: Bridging the gap. In: International Conference on Learning Representations. https://openreview.net/forum?id=SJxSDxrKDr
5. Bietti A, Mialon G, Chen D, Mairal J (2019) A kernel perspective for regularizing deep neural networks. In: International Conference on Machine Learning. PMLR, pp 664–674