Affiliation:
1. Università Ca’ Foscari Venezia, Italy
Abstract
Machine learning has proved invaluable for a range of different tasks, yet it has also proved vulnerable to evasion attacks, i.e., maliciously crafted perturbations of inputs designed to force mispredictions. In this article we propose a novel technique to certify the security of machine learning models against evasion attacks with respect to an expressive threat model, where the attacker can be represented by an arbitrary imperative program. Our approach transforms the model under attack into an equivalent imperative program, which is then analyzed using the traditional abstract interpretation framework. This solution is sound, efficient and general enough to be applied to a range of different models, including decision trees, logistic regression and neural networks. Our experiments on publicly available datasets show that our technique yields only a minimal number of false positives and scales up to cases that are intractable for a competing approach.
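To make the idea concrete, here is a minimal Python sketch of the kind of analysis the abstract describes: a decision tree is viewed as an imperative program of nested branches, and abstract interpretation over the interval domain checks whether any input within an L-infinity ball of radius eps can reach more than one leaf label. This is our illustration, not the authors' implementation; the names Node, reachable_labels, certify and eps are all hypothetical, and the threat model here (a simple norm ball rather than an arbitrary attacker program) is a deliberate simplification.

```python
# Illustrative sketch (assumed names, not the paper's code): certify a decision
# tree against L-infinity evasion by abstractly executing it on interval boxes.

from dataclasses import dataclass
from typing import List, Optional, Set, Tuple

@dataclass
class Node:
    feature: Optional[int] = None    # None for leaves
    threshold: float = 0.0
    left: Optional["Node"] = None    # taken when x[feature] <= threshold
    right: Optional["Node"] = None
    label: Optional[int] = None      # set only for leaves

def reachable_labels(node: Node, box: List[Tuple[float, float]]) -> Set[int]:
    """Abstractly run the tree on per-feature intervals [lo, hi], collecting
    every leaf label the abstract input may reach (an over-approximation)."""
    if node.label is not None:
        return {node.label}
    lo, hi = box[node.feature]
    labels: Set[int] = set()
    if lo <= node.threshold:         # some concrete input takes the left branch
        labels |= reachable_labels(node.left, box)
    if hi > node.threshold:          # some concrete input takes the right branch
        labels |= reachable_labels(node.right, box)
    return labels

def certify(tree: Node, x: List[float], eps: float) -> bool:
    """Sound check: True means no perturbation of x within eps can change the
    prediction; False may be a false positive, since intervals over-approximate."""
    box = [(xi - eps, xi + eps) for xi in x]
    return len(reachable_labels(tree, box)) == 1

# Usage: a decision stump on feature 0 with threshold 0.5.
tree = Node(feature=0, threshold=0.5, left=Node(label=0), right=Node(label=1))
print(certify(tree, [0.2], eps=0.1))   # True: the box [0.1, 0.3] stays left
print(certify(tree, [0.45], eps=0.1))  # False: the box [0.35, 0.55] crosses 0.5
```

Because the interval abstraction only over-approximates reachable behaviors, a True answer is a sound security certificate, while a False answer may be spurious; this asymmetry is exactly why the abstract reports false positives rather than false negatives.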
Subject
Computer Networks and Communications; Hardware and Architecture; Safety, Risk, Reliability and Quality; Software
References
39 articles.
1. B. Biggio et al., Evasion Attacks against Machine Learning at Test Time, in: ECML PKDD, 2013.
2. B. Biggio and F. Roli, Wild patterns: Ten years after the rise of adversarial machine learning, Pattern Recognit., 2018.
3. C.M. Bishop, Pattern Recognition and Machine Learning, 5th edn, Information Science and Statistics, Springer, 2007, https://www.worldcat.org/oclc/71008143. ISBN 9780387310732.
4. L. Breiman, Random forests, Mach. Learn., 2001.
5. L. Breiman, J.H. Friedman, R.A. Olshen and C.J. Stone, Classification and Regression Trees, Wadsworth, 1984. ISBN 0-534-98053-8.
Cited by
1 article.