1. Athalye, A., Carlini, N., & Wagner, D.A. (2018). Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In: ICML, pp 274–283
2. Buckman, J., Roy, A., Raffel, C., et al. (2018). Thermometer encoding: One hot way to resist adversarial examples. In: ICLR
3. Carlini, N., & Wagner, D. (2017). Towards evaluating the robustness of neural networks. In: IEEE S&P, pp 39–57
4. Dabouei, A., Soleymani, S., Taherkhani, F., et al. (2020). Exploiting joint robustness to adversarial perturbations. In: CVPR, pp 1122–1131
5. Deng, Z., Dong, Y., Pang, T., et al. (2020). Adversarial distributional training for robust deep learning. In: NeurIPS, pp 8270–8283