1. Addepalli, S., Vivek, B. S., Baburaj, A., Sriramanan, G., & Venkatesh Babu, R. (2020). Towards achieving adversarial robustness by enforcing feature consistency across bit planes. In IEEE/CVF conference on computer vision and pattern recognition (CVPR) (pp. 1017–1026).
2. Alemi, A. A., Fischer, I., Dillon, J. V., & Murphy, K. (2017). Deep variational information bottleneck. In International conference on learning representations (ICLR).
3. Athalye, A., Carlini, N., & Wagner, D. (2018). Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In International conference on machine learning (ICML) (pp. 436–448).
4. Athalye, A., Engstrom, L., Ilyas, A., & Kwok, K. (2018). Synthesizing robust adversarial examples. In International conference on machine learning (ICML).
5. Carbone, G., Wicker, M., Laurenti, L., Patane, A., Bortolussi, L., & Sanguinetti, G. (2020). Robustness of Bayesian neural networks to gradient-based attacks. In Advances in neural information processing systems (NeurIPS).