Affiliations:
1. Oak Ridge National Laboratory, USA
2. Stanford University, USA
3. Amazon Inc., USA
4. Lockheed Martin, USA
5. MITRE Corporation, USA
6. Lirio LLC, USA
7. Security Scorecard, USA
Abstract
There is a lack of scientific testing of commercially available malware detectors, especially those that boast accurate classification of never-before-seen (i.e., zero-day) files using machine learning (ML). Consequently, the efficacy of malware detectors is opaque, inhibiting end users from making informed decisions and researchers from targeting gaps in current detectors. In this article, we present a scientific evaluation of four prominent commercial malware detection tools to assist an organization with two primary questions: To what extent do ML-based tools accurately classify previously seen and never-before-seen files? Is purchasing a network-level malware detector worth the cost? To investigate, we tested each tool against 3,536 total files (2,554, or 72%, malicious; 982, or 28%, benign) of a variety of file types, including hundreds of malicious zero-days, polyglots, and APT-style files, delivered over multiple protocols. We present statistical results on detection time and accuracy, consider complementary analysis (using multiple tools together), and provide two novel applications of the recent cost–benefit evaluation procedure of Iannacone and Bridges. Although the ML-based tools are more effective at detecting zero-day files and executables, the signature-based tool may still be the better overall option. Both network-based tools provide substantial (simulated) savings when paired with either host tool, yet both show poor detection rates on protocols other than HTTP and SMTP. Our results show that all four tools have near-perfect precision but alarmingly low recall, especially on file types other than executables and office files: 37% of malware samples, including all polyglot files, went undetected. Priorities for researchers and takeaways for end users are given. Code for future use of the cost model is provided.
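The headline figures in the abstract can be cross-checked with a few lines of arithmetic. The sketch below is illustrative only; the variable names are assumptions of ours, not part of the cost-model code released with the paper.

```python
# Sanity-check the dataset split and the aggregate recall implied by the
# abstract's figures (3,536 files; 2,554 malicious; 982 benign; 37% of
# malware undetected).

total_files = 3536
malicious = 2554
benign = 982

# The two classes should account for every file in the corpus.
assert malicious + benign == total_files

# Fractions reported in the abstract (72% malicious, 28% benign).
print(f"malicious: {malicious / total_files:.0%}")  # -> 72%
print(f"benign:    {benign / total_files:.0%}")     # -> 28%

# "37% of malware samples ... went undetected" implies an aggregate
# recall of roughly 63% on the malicious class.
undetected_fraction = 0.37
implied_recall = 1 - undetected_fraction
print(f"implied aggregate recall: {implied_recall:.0%}")  # -> 63%
```

Near-perfect precision with recall this low means the tools raise few false alarms but miss a large share of true malware, which is the trade-off the abstract highlights.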
Funder
Department of Defense
Naval Information Warfare Systems Command
Department of Energy
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Networks and Communications, Computer Science Applications, Hardware and Architecture, Safety Research, Information Systems, Software
References (50 articles)
1. Ange Albertini. 2015. Abusing file formats; or, Corkami, the novella. The International Journal of Proof of Concept or Get The Fuck Out (PoC||GTFO) 0x07. https://www.alchemistowl.org/pocorgtfo/pocorgtfo07.pdf.
2. Investigation of Possibilities to Detect Malware Using Existing Tools
3. A Comprehensive Review on Malware Detection Approaches
4. AV-TEST. 2022. The best Windows antivirus software for business users. https://www.av-test.org/en/antivirus/business-windowsclient/windows-10/april-2022/. Retrieved 2022-06-23.
5. Sergey Bratus, Travis Goodspeed, Ange Albertini, and Debanjum S. Solanky. 2016. Fillory of PHY: Toward a periodic table of signal corruption exploits and polyglots in digital radio. In Proceedings of the 10th USENIX Workshop on Offensive Technologies (WOOT'16). USENIX Association, Austin, TX. https://www.usenix.org/conference/woot16/workshop-program/presentation/bratus.
Cited by (1 article)
1. Evading malware classifiers using RL agent with action-mask;International Journal of Information Security;2023-07-07