1. Anish Athalye, Nicholas Carlini, and David A. Wagner. 2018. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, Jennifer G. Dy and Andreas Krause (Eds.) (Proceedings of Machine Learning Research, Vol. 80). PMLR, 274–283.
2. Anish Athalye, Logan Engstrom, Andrew Ilyas, and Kevin Kwok. 2018. Synthesizing Robust Adversarial Examples. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, Jennifer G. Dy and Andreas Krause (Eds.) (Proceedings of Machine Learning Research, Vol. 80). PMLR, 284–293.
3. Teodora Baluta, Shiqi Shen, Shweta Shinde, Kuldeep S. Meel, and Prateek Saxena. 2019. Quantitative Verification of Neural Networks and Its Security Applications. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, CCS 2019, London, UK, November 11-15, 2019. ACM.
4. Pin-Yu Chen, Yash Sharma, Huan Zhang, Jinfeng Yi, and Cho-Jui Hsieh. 2018. EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, AAAI 2018. AAAI Press.
5. Pin-Yu Chen, Huan Zhang, Yash Sharma, Jinfeng Yi, and Cho-Jui Hsieh. 2017. ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, AISec 2017. ACM.