Author:
Sotgiu Angelo, Demontis Ambra, Melis Marco, Biggio Battista, Fumera Giorgio, Feng Xiaoyi, Roli Fabio
Abstract
Despite the impressive performance reported by deep neural networks in different application domains, they remain largely vulnerable to adversarial examples, i.e., input samples that are carefully perturbed to cause misclassification at test time. In this work, we propose a deep neural rejection mechanism to detect adversarial examples, based on the idea of rejecting samples that exhibit anomalous feature representations at different network layers. With respect to competing approaches, our method does not require generating adversarial examples at training time, and it is less computationally demanding. To properly evaluate our method, we define an adaptive white-box attack that is aware of the defense mechanism and aims to bypass it. Under this worst-case setting, we empirically show that our approach outperforms previously proposed methods that detect adversarial examples by only analyzing the feature representation provided by the output network layer.
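The core idea of the abstract can be sketched in a few lines: collect feature representations from several network layers, learn what "normal" representations look like per class, and reject any test sample whose layer-wise representations are anomalous. The sketch below is a simplified illustration only: it approximates anomaly by distance to the nearest class centroid at each layer, whereas the actual method in the paper trains per-layer classifiers (RBF-SVMs) and combines their scores. The class name, the centroid heuristic, and the threshold scheme are all assumptions made for illustration.

```python
import numpy as np

class LayerwiseRejection:
    """Illustrative layer-wise rejection: a sample is rejected when its
    feature representations are far from every class centroid, averaged
    over the monitored network layers. This is a stand-in heuristic, not
    the paper's actual per-layer SVM ensemble."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.centroids = []  # one (n_classes, dim) array per layer

    def fit(self, layer_feats, labels):
        # layer_feats: list of (n_samples, dim) arrays, one per layer,
        # extracted from clean training data only (no adversarial
        # examples are needed at training time).
        labels = np.asarray(labels)
        classes = np.unique(labels)
        self.centroids = [
            np.stack([f[labels == c].mean(axis=0) for c in classes])
            for f in layer_feats
        ]

    def score(self, layer_feats):
        # Anomaly score = mean over layers of the distance to the
        # nearest class centroid in that layer's feature space.
        dists = [
            np.linalg.norm(f[:, None, :] - c[None, :, :], axis=2).min(axis=1)
            for f, c in zip(layer_feats, self.centroids)
        ]
        return np.mean(dists, axis=0)

    def predict(self, layer_feats):
        # True -> rejected as a likely adversarial example.
        return self.score(layer_feats) > self.threshold
```

Because rejection depends on representations at multiple depths, an adaptive attacker must craft perturbations that stay plausible at every monitored layer, which is the worst-case setting the adaptive white-box attack in the paper evaluates.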
Publisher
Springer Science and Business Media LLC
Subject
Computer Science Applications,Signal Processing
References: 40 articles.
Cited by
39 articles.