1. Abbasi M, Gagné C (2017) Robustness to adversarial examples through an ensemble of specialists. In: Proceedings of the 5th international conference on learning representations (ICLR), Toulon, France, April 24–26, 2017
2. Athalye A, Carlini N, Wagner D (2018) Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In: Proceedings of the 35th international conference on machine learning (ICML), Stockholm, Sweden, July 10–15, 2018, pp 274–283
3. Basak J, De RK, Pal SK (1998) Unsupervised feature selection using a neuro-fuzzy approach. Pattern Recognit Lett 19(11):997–1006
4. Bradshaw J, Matthews AGdG, Ghahramani Z (2017) Adversarial examples, uncertainty, and transfer testing robustness in Gaussian process hybrid deep networks. arXiv preprint arXiv:1707.02476
5. Brown TB, Mané D, Roy A, Abadi M, Gilmer J (2017) Adversarial patch. arXiv preprint arXiv:1712.09665