Affiliation:
1. University of Illinois at Urbana-Champaign, USA
2. TU Darmstadt, Germany
3. Peking University, China
4. Northeastern University, USA
Abstract
Flaky tests are tests that can non-deterministically pass or fail for the same code version. These tests undermine regression testing efficiency because developers cannot easily identify whether a test fails due to their recent changes or due to flakiness. Ideally, one would detect flaky tests right when flakiness is introduced, so that developers can immediately remove it. Some software organizations, e.g., Mozilla and Netflix, run tools, called detectors, to detect flaky tests as soon as possible. However, detecting flaky tests is costly due to their inherent non-determinism, so even state-of-the-art detectors are often impractical to run on all tests for every project change. To combat the high cost of applying detectors, these organizations typically run a detector solely on newly added or directly modified tests, i.e., not on unmodified tests or when other changes occur (including changes to the test suite, the code under test, and library dependencies). However, it is unclear how many flaky tests are detected or missed when detectors are applied in only these limited circumstances.
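To make the notion of flakiness concrete, the minimal Java sketch below shows a timing-dependent (async-wait) test that can pass or fail non-deterministically on the same code version. The class and method names are hypothetical illustrations, not taken from the studied projects or the paper.

```java
import org.junit.Test;
import static org.junit.Assert.assertTrue;

// Hypothetical example of an async-wait flaky test: its outcome depends on
// thread scheduling and machine load, not on the code version under test.
public class FlakyExampleTest {

    @Test
    public void testAsyncResultIsReady() throws InterruptedException {
        AsyncWorker worker = new AsyncWorker();
        worker.start();               // computation runs on another thread
        Thread.sleep(50);             // fixed wait: may be too short on a loaded machine
        assertTrue(worker.isDone());  // passes or fails non-deterministically
    }

    // Minimal stand-in for the code under test.
    static class AsyncWorker {
        private volatile boolean done = false;

        void start() {
            new Thread(() -> {
                try {
                    Thread.sleep((long) (Math.random() * 100)); // variable latency
                } catch (InterruptedException ignored) {
                }
                done = true;
            }).start();
        }

        boolean isDone() { return done; }
    }
}
```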
To better understand this problem, we conduct a large-scale longitudinal study of flaky tests to determine when flaky tests become flaky and what changes cause them to become flaky. We apply two state-of-the-art detectors to 55 Java projects, identifying a total of 245 flaky tests that can be compiled and run in the code version where each test was added. We find that 75% of the flaky tests (184 out of 245) are flaky when added, indicating substantial potential value for developers to run detectors specifically on newly added tests. However, running detectors solely on newly added tests would still miss 25% of the flaky tests. The percentage of flaky tests that can be detected increases to 85% when detectors are run on newly added or directly modified tests. The remaining 15% of flaky tests become flaky due to other changes and can be detected only when detectors are always applied to all tests. Our study is the first to empirically evaluate when tests become flaky and to recommend guidelines for applying detectors in the future.
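As context for what a detector does at the simplest level, the sketch below reruns a JUnit test class many times and flags it as flaky if the outcomes disagree. This is an illustrative rerun baseline, not the detectors used in the study; the class name NaiveRerunDetector is hypothetical, and the example reuses FlakyExampleTest from the sketch above.

```java
import org.junit.runner.JUnitCore;
import org.junit.runner.Result;

// Hypothetical rerun-based detector: flags a test class as flaky when
// repeated runs on the same code version produce different outcomes.
public class NaiveRerunDetector {

    public static boolean isFlaky(Class<?> testClass, int reruns) {
        Boolean firstOutcome = null;
        for (int i = 0; i < reruns; i++) {
            Result result = JUnitCore.runClasses(testClass);
            boolean passed = result.wasSuccessful();
            if (firstOutcome == null) {
                firstOutcome = passed;
            } else if (firstOutcome.booleanValue() != passed) {
                return true; // outcomes disagree on the same code version
            }
        }
        // Note: reruns can only fail to observe flakiness, never rule it out,
        // which is part of why detection is costly.
        return false;
    }

    public static void main(String[] args) {
        System.out.println(isFlaky(FlakyExampleTest.class, 100));
    }
}
```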
Funder
National Science Foundation
Publisher
Association for Computing Machinery (ACM)
Subject
Safety, Risk, Reliability and Quality; Software
Cited by
39 articles.
1. Non-Flaky and Nearly-Optimal Time-based Treatment of Asynchronous Wait Web Tests;ACM Transactions on Software Engineering and Methodology;2024-09-13
2. Neurosymbolic Repair of Test Flakiness;Proceedings of the 33rd ACM SIGSOFT International Symposium on Software Testing and Analysis;2024-09-11
3. Cost of Flaky Tests in Continuous Integration: An Industrial Case Study;2024 IEEE Conference on Software Testing, Verification and Validation (ICST);2024-05-27
4. Regression-Test History Data for Flaky-Test Research;Proceedings of the 1st International Workshop on Flaky Tests;2024-04-14
5. Can ChatGPT Repair Non-Order-Dependent Flaky Tests?;Proceedings of the 1st International Workshop on Flaky Tests;2024-04-14