FENCE: Feasible Evasion Attacks on Neural Networks in Constrained Environments

Authors:

Alesia Chernikova¹, Alina Oprea¹

Affiliation:

1. Northeastern University, Boston, MA, USA

Abstract

As advances in Deep Neural Networks (DNNs) demonstrate unprecedented levels of performance in many critical applications, their vulnerability to attacks remains an open question. We consider evasion attacks at test time against deep learning models in constrained environments, in which dependencies between features need to be satisfied. These situations may arise naturally in tabular data or may be the result of feature engineering in specific application domains, such as threat detection in cyber security. We propose a general iterative gradient-based framework called FENCE for crafting evasion attacks that takes into consideration the specifics of constrained domains and application requirements. We apply it against feed-forward neural networks trained for two cyber security applications, network traffic botnet classification and malicious domain classification, to generate feasible adversarial examples. We extensively evaluate the success rate and performance of our attacks, compare them against several baselines, and analyze factors that impact the attack success rate, including the optimization objective and the data imbalance. We show that with minimal effort (e.g., generating 12 additional network connections), an attacker can change the model's prediction from the Malicious class to Benign and evade the classifier. We show that models trained on datasets with higher imbalance are more vulnerable to our FENCE attacks. Finally, we demonstrate the potential of performing adversarial training in constrained domains to increase the model resilience against these evasion attacks.
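The abstract describes an iterative gradient-based attack that must keep every intermediate input feasible, i.e., satisfying the domain's feature dependencies. The sketch below illustrates only that generic step-and-project loop, not the paper's actual FENCE algorithm: the linear scorer standing in for a trained DNN, the step size, and the toy constraint (non-negative features whose first two entries sum to at most 10) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=5)  # stand-in model weights (a real attack would query a DNN)
b = 0.0

def score(x):
    """Positive score -> classified Malicious, negative -> Benign."""
    return float(w @ x + b)

def grad(x):
    """Gradient of the score w.r.t. the input features (constant for a linear model)."""
    return w

def project(x):
    """Project onto an assumed feasible set: non-negative features whose
    first two entries must sum to at most 10 (a toy dependency)."""
    x = np.clip(x, 0.0, None)
    s = x[0] + x[1]
    if s > 10.0:
        x[:2] *= 10.0 / s
    return x

def evade(x0, step=0.5, iters=100):
    """Iteratively move against the gradient to push the score toward Benign,
    projecting back onto the feasible set after every step."""
    x = project(x0.copy())
    for _ in range(iters):
        if score(x) < 0:  # already classified Benign
            break
        x = project(x - step * np.sign(grad(x)))
    return x

x0 = project(np.abs(rng.normal(size=5)) * 3)  # a feasible "malicious" starting point
x_adv = evade(x0)
print(score(x0), score(x_adv))
```

The projection step is what separates a constrained attack from a plain iterative FGSM-style attack: without it, gradient steps would produce inputs (e.g., negative packet counts) that no real attacker could realize.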

Funder

NSF

Google Security and Privacy Award

U.S. Army Combat Capabilities Development Command Army Research Laboratory under Cooperative Agreement

U.S. Army Contracting Command - Aberdeen Proving Ground (ACC-APG) and the Defense Advanced Research Projects Agency

Publisher

Association for Computing Machinery (ACM)

Subject

Safety, Risk, Reliability and Quality; General Computer Science


Cited by 13 articles.

1. Constraining Adversarial Attacks on Network Intrusion Detection Systems: Transferability and Defense Analysis. IEEE Transactions on Network and Service Management, 2024-06.

2. IDS-GAN: Adversarial Attack against Intrusion Detection Based on Generative Adversarial Networks. 2024 5th International Conference on Computer Vision, Image and Deep Learning (CVIDL), 2024-04-19.

3. AdverSPAM: Adversarial SPam Account Manipulation in Online Social Networks. ACM Transactions on Privacy and Security, 2024-03-14.

4. Evasion Attack and Defense On Machine Learning Models in Cyber-Physical Systems: A Survey. IEEE Communications Surveys & Tutorials, 2024.

5. ProGen: Projection-Based Adversarial Attack Generation Against Network Intrusion Detection. IEEE Transactions on Information Forensics and Security, 2024.
