1. Naveed Akhtar and Ajmal Mian. 2018. Threat of adversarial attacks on deep learning in computer vision: A survey. IEEE Access 6.
2. Anish Athalye, Nicholas Carlini, and David Wagner. 2018. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In Proceedings of the International Conference on Machine Learning.
3. Anish Athalye, Logan Engstrom, Andrew Ilyas, and Kevin Kwok. 2018. Synthesizing robust adversarial examples. In Proceedings of the International Conference on Machine Learning.
4. Osbert Bastani, Yani Ioannou, Leonidas Lampropoulos, Dimitrios Vytiniotis, Aditya Nori, and Antonio Criminisi. 2016. Measuring neural net robustness with constraints. In Advances in Neural Information Processing Systems.
5. Arjun Nitin Bhagoji, Daniel Cullina, and Prateek Mittal. 2017. Dimensionality reduction as a defense against evasion attacks on machine learning classifiers. arXiv:1704.02654.