1. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples;Athalye,2018
2. Evasion attacks against machine learning at test time;Biggio,2013
3. Adversarial patch;Brown,2017
4. Towards evaluating the robustness of neural networks;Carlini,2017
5. Adversarial examples are not easily detected: Bypassing ten detection methods;Carlini,2017