Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning

Authors:

Antonio Emanuele Cinà (1), Kathrin Grosse (2), Ambra Demontis (3), Sebastiano Vascon (4), Werner Zellinger (5), Bernhard A. Moser (5), Alina Oprea (6), Battista Biggio (7), Marcello Pelillo (1), Fabio Roli (8)

Affiliation:

1. DAIS, Ca’ Foscari University of Venice, Italy

2. VITA Lab, École Polytechnique Fédérale de Lausanne, Switzerland

3. DIEE, University of Cagliari, Italy

4. DAIS, Ca’ Foscari University of Venice and European Center for Living Technology, Italy

5. Software Competence Center Hagenberg GmbH (SCCH), Austria

6. Khoury College of Computer Sciences, Northeastern University, MA, USA

7. DIEE, University of Cagliari, CINI, and Pluribus One, Italy

8. DIBRIS, University of Genoa, CINI, and Pluribus One, Italy

Abstract

The success of machine learning is fueled by the increasing availability of computing power and large training datasets. The training data is used to learn new models or update existing ones, assuming that it is sufficiently representative of the data that will be encountered at test time. This assumption is challenged by the threat of poisoning, an attack that manipulates the training data to compromise the model’s performance at test time. Although poisoning has been acknowledged as a relevant threat in industry applications, and a variety of different attacks and defenses have been proposed so far, a complete systematization and critical review of the field is still missing. In this survey, we provide a comprehensive systematization of poisoning attacks and defenses in machine learning, reviewing more than 100 papers published in the field in the past 15 years. We start by categorizing the current threat models and attacks and then organize existing defenses accordingly. While we focus mostly on computer-vision applications, we argue that our systematization also encompasses state-of-the-art attacks and defenses for other data modalities. Finally, we discuss existing resources for research in poisoning and shed light on the current limitations and open research questions in this research field.
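As a toy illustration of the threat described in the abstract (not an attack from the survey itself), the following Python sketch flips a fraction of training labels before fitting a classifier, a simple availability-style poisoning, and compares clean and poisoned test accuracy. The synthetic dataset, the logistic regression model, and the 20% poisoning rate are illustrative assumptions.

# Minimal sketch of label-flipping poisoning (illustrative assumptions only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary classification task.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Model trained on clean data.
clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Poison 20% of the training labels by flipping them (availability attack).
rng = np.random.default_rng(0)
idx = rng.choice(len(y_tr), size=int(0.2 * len(y_tr)), replace=False)
y_poisoned = y_tr.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]

# Model trained on the poisoned data.
poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)

print("clean test accuracy:   ", clean.score(X_te, y_te))
print("poisoned test accuracy:", poisoned.score(X_te, y_te))

Running this typically shows a measurable drop in test accuracy for the poisoned model, mirroring the test-time degradation that poisoning attacks aim to cause; the attacks surveyed in the paper are far more sophisticated, e.g., clean-label and backdoor attacks.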

Funders

PRIN 2017 project RexLearn, funded by the Italian Ministry of Education, University and Research

EU's Horizon Europe research and innovation program, under the project ELSA

Project "TrustML: Towards Machine Learning that Humans Can Trust," funded by Fondazione di Sardegna

NRRP MUR program funded by the EU - NGEU, under the project SERICS

COMET Programme managed by FFG, in the COMET Module S3AI

Publisher

Association for Computing Machinery (ACM)

Subject

General Computer Science, Theoretical Computer Science


Cited by 29 articles.
