1. Anish Athalye, Nicholas Carlini, and David Wagner. 2018. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples. In ICML. https://arxiv.org/abs/1802.00420
2. Sean Bell and Kavita Bala. 2015. Learning Visual Similarity for Product Design with Convolutional Neural Networks. ACM Trans. Graph. 34, 4 (2015).
3. Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Šrndić, Pavel Laskov, Giorgio Giacinto, and Fabio Roli. 2017. Evasion Attacks against Machine Learning at Test Time. CoRR abs/1708.06131 (2017). http://arxiv.org/abs/1708.06131
4. Wieland Brendel, Jonas Rauber, and Matthias Bethge. 2018. Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models. In ICLR. https://openreview.net/forum?id=SyZI0GWCZ