1. Alayrac, J., Uesato, J., Huang, P., et al. (2019). Are labels required for improving adversarial robustness? In Advances in neural information processing systems.
2. Andriushchenko, M., Croce, F., Flammarion, N., et al. (2019). Square attack: A query-efficient black-box adversarial attack via random search. arXiv preprint arXiv:1912.00049.
3. Athalye, A., Carlini, N., & Wagner, D. (2018). Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In International conference on machine learning.
4. Bai, Y., Zeng, Y., Jiang, Y., et al. (2021). Improving adversarial robustness via channel-wise activation suppressing. In International conference on learning representations.
5. Carlini, N., & Wagner, D. (2017). Towards evaluating the robustness of neural networks. In IEEE symposium on security and privacy (S&P).