1. 2020. Towards improving robustness of deep neural networks to adversarial perturbations. IEEE Transactions on Multimedia.
2. Maksym Andriushchenko, Francesco Croce, Nicolas Flammarion, and Matthias Hein. 2020. Square attack: A query-efficient black-box adversarial attack via random search. In European Conference on Computer Vision.
3. Anish Athalye, Nicholas Carlini, and David A. Wagner. 2018. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In International Conference on Machine Learning.
4. Nicholas Carlini and David A. Wagner. 2017. Towards evaluating the robustness of neural networks. In IEEE Symposium on Security and Privacy.