1. Anish Athalye, Nicholas Carlini, and David Wagner. 2018. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples. In Proceedings of the 35th International Conference on Machine Learning, Vol. 80. PMLR, Stockholm, Sweden, 274--283.
2. Eugene Bagdasaryan and Vitaly Shmatikov. 2021. Blind Backdoors in Deep Learning Models. In 30th USENIX Security Symposium (USENIX Security 21). USENIX Association, Virtual, 1505--1521.
3. Akhilan Boopathy, Tsui-Wei Weng, Pin-Yu Chen, Sijia Liu, and Luca Daniel. 2019. CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Networks. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33. AAAI Press, Honolulu, HI, 3240--3247.
4. Nicholas Carlini and David Wagner. 2017. Towards Evaluating the Robustness of Neural Networks. In 2017 IEEE Symposium on Security and Privacy (SP). IEEE, San Jose, CA, 39--57.
5. Xinyun Chen, Chang Liu, Bo Li, Kimberly Lu, and Dawn Song. 2017. Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning. arXiv:1712.05526.