Affiliation:
1. Department of Mathematics, University of Padua, Italy
2. School of Business Informatics, University of Liechtenstein, Liechtenstein
3. Department of Mathematics, University of Padua, Italy, and Department of Computer Science, Delft University of Technology, Netherlands
Abstract
Existing literature on adversarial Machine Learning (ML) focuses either on showing attacks that break every ML model, or on defenses that withstand most attacks. Unfortunately, little consideration is given to the actual feasibility of the attack or the defense. Moreover, adversarial samples are often crafted in the "feature-space", making the corresponding evaluations of questionable value. Simply put, the current situation does not allow one to estimate the actual threat posed by adversarial attacks, leading to a lack of secure ML systems.
We aim to clarify such confusion in this paper. By considering the application of ML for Phishing Website Detection (PWD), we formalize the "evasion-space" in which an adversarial perturbation can be introduced to fool an ML-PWD, demonstrating that even perturbations in the "feature-space" are useful. Then, we propose a realistic threat model describing evasion attacks against ML-PWD that are cheap to stage, and hence intrinsically more attractive for real phishers. After that, we perform the first statistically validated assessment of state-of-the-art ML-PWD against 12 evasion attacks. Our evaluation shows (i) the true efficacy of evasion attempts that are more likely to occur; and (ii) the impact of perturbations crafted in different evasion-spaces. Our realistic evasion attempts induce a statistically significant degradation (3–10% at p < 0.05), and their cheap cost makes them a subtle threat. Notably, however, some ML-PWD are immune to our most realistic attacks (p = 0.22).
Finally, as an additional contribution of this journal publication, we are the first to propose and empirically evaluate the intriguing case wherein an attacker introduces perturbations in multiple evasion-spaces at the same time. These new results show that simultaneously applying perturbations in the problem- and feature-space can cause a drop in the detection rate from 0.95 to 0.
Our contribution paves the way for a much-needed re-assessment of adversarial attacks against ML systems for cybersecurity.
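To make the evasion-space distinction concrete, the following minimal sketch (our own illustration, not the paper's implementation) contrasts a feature-space perturbation, which edits an already-extracted feature vector directly, with a problem-space perturbation, which edits the raw URL before feature extraction. The toy detector, the three features, and the example URLs are all assumptions made for illustration.

# Minimal sketch, assuming a toy 3-feature detector (illustrative only,
# not the paper's code): feature-space vs problem-space evasion of an ML-PWD.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def extract_features(url: str) -> np.ndarray:
    """Toy extractor: [hostname_is_ip, long_url, has_at_symbol]."""
    host = url.split("//")[-1].split("/")[0]
    return np.array([
        float(host.replace(".", "").isdigit()),  # hostname is a raw IP address
        float(len(url) > 54),                    # suspiciously long URL
        float("@" in url),                       # '@' obfuscation trick
    ])

# Synthetic training set: phishing samples tend to have the suspicious
# features set (70% each), benign samples mostly do not (10% each).
X_phish = (rng.random((200, 3)) < 0.7).astype(float)
X_benign = (rng.random((200, 3)) < 0.1).astype(float)
X = np.vstack([X_phish, X_benign])
y = np.array([1] * 200 + [0] * 200)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

phish_url = "http://203.0.113.5/secure-login-update-account-details?x=a@b"
x_orig = extract_features(phish_url)

# Feature-space perturbation: flip entries of the extracted vector directly,
# without checking whether any real webpage could produce this vector.
x_feat = x_orig.copy()
x_feat[0] = 0.0  # pretend the IP-hostname feature is off

# Problem-space perturbation: modify the raw URL itself and re-extract;
# the resulting feature vector is realizable by construction.
evasive_url = "http://login-update.example.com/a"
x_prob = extract_features(evasive_url)

for name, v in [("original", x_orig), ("feature-space", x_feat), ("problem-space", x_prob)]:
    print(f"{name:14s} predicted={clf.predict([v])[0]} "
          f"p(phish)={clf.predict_proba([v])[0, 1]:.2f}")

The sketch shows why feature-space evaluations can overstate realizable threats: the directly flipped vector may correspond to no actual webpage, whereas the problem-space variant is obtained from a concrete URL an attacker could register.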
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Networks and Communications, Computer Science Applications, Hardware and Architecture, Safety Research, Information Systems, Software
Cited by
1 article.
1. Applied Machine Learning for Information Security; Digital Threats: Research and Practice; 2024-03-31