Affiliation:
1. University of Utah, Salt Lake City, UT, USA
2. Oregon State University, Corvallis, OR, USA
Abstract
Aggressive random testing tools ("fuzzers") are impressively effective at finding compiler bugs. For example, a single test-case generator has resulted in more than 1,700 bugs reported for a single JavaScript engine. However, fuzzers can be frustrating to use: they indiscriminately and repeatedly find bugs that may not be severe enough to fix right away. Currently, users filter out undesirable test cases using ad hoc methods such as disallowing problematic features in tests and grepping test results. This paper formulates and addresses the fuzzer taming problem: given a potentially large number of random test cases that trigger failures, order them such that diverse, interesting test cases are highly ranked. Our evaluation shows our ability to solve the fuzzer taming problem for 3,799 test cases triggering 46 bugs in a C compiler and 2,603 test cases triggering 28 bugs in a JavaScript engine.
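The core idea of fuzzer taming, as described above, is to order failing test cases so that diverse ones appear early in the ranking. One simple way to realize such an ordering (a minimal sketch only, not the paper's actual method; the `jaccard_distance` token-set metric and the furthest-point-first traversal are illustrative assumptions) is:

```python
def jaccard_distance(a, b):
    """Illustrative distance between two test cases, based on
    the overlap of their whitespace-separated token sets."""
    ta, tb = set(a.split()), set(b.split())
    if not ta and not tb:
        return 0.0
    return 1.0 - len(ta & tb) / len(ta | tb)

def rank_tests(tests, distance=jaccard_distance):
    """Order test cases via furthest-point-first traversal:
    each step picks the remaining test that is farthest from
    every already-ranked test, so diverse cases rank early and
    near-duplicates sink to the bottom of the list."""
    if not tests:
        return []
    remaining = list(tests)
    ranked = [remaining.pop(0)]  # seed the ranking with the first test
    # Each remaining test's distance to its nearest ranked test.
    dist_to_ranked = [distance(t, ranked[0]) for t in remaining]
    while remaining:
        # Choose the test farthest from everything ranked so far.
        i = max(range(len(remaining)), key=dist_to_ranked.__getitem__)
        chosen = remaining.pop(i)
        dist_to_ranked.pop(i)
        ranked.append(chosen)
        # Newly ranked test may now be the nearest neighbor of others.
        for j, t in enumerate(remaining):
            d = distance(t, chosen)
            if d < dist_to_ranked[j]:
                dist_to_ranked[j] = d
    return ranked
```

With this ordering, a user triaging thousands of fuzzer outputs can inspect only a prefix of the ranked list and still see most distinct failures, since duplicates of already-seen tests are pushed toward the end.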
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Graphics and Computer-Aided Design, Software
Cited by
102 articles.
1. Enumerating Valid Non-Alpha-Equivalent Programs for Interpreter Testing;ACM Transactions on Software Engineering and Methodology;2024-06-04
2. On the Effectiveness of Synthetic Benchmarks for Evaluating Directed Grey-Box Fuzzers;2023 30th Asia-Pacific Software Engineering Conference (APSEC);2023-12-04
3. SJFuzz: Seed and Mutator Scheduling for JVM Fuzzing;Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering;2023-11-30
4. On the Caching Schemes to Speed Up Program Reduction;ACM Transactions on Software Engineering and Methodology;2023-11-24
5. Uncovering Bugs in Code Coverage Profilers via Control Flow Constraint Solving;IEEE Transactions on Software Engineering;2023-11