1. Obfuscated gradients give a false sense of security: circumventing defenses to adversarial examples;Athalye,2018
2. Can we gain more from orthogonality regularizations in training deep CNNs?;Bansal,2018
3. Towards evaluating the robustness of neural networks;Carlini,2017
4. Certified adversarial robustness via randomized smoothing;Cohen,2019
5. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks;Croce,2020