1. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples;Athalye,2018
2. End to end learning for self-driving cars;Bojarski,2016
3. Towards evaluating the robustness of neural networks;Carlini,2017
4. Certified adversarial robustness via randomized smoothing;Cohen,2019
5. Discovering adversarial examples with momentum;Dong,2017