Abstract
Test flakiness is a phenomenon occurring when a test case is non-deterministic and exhibits both a passing and failing behavior when run against the same code. Over the last years, the problem has been closely investigated by researchers and practitioners, who all have shown its relevance in practice. The software engineering research community has been working toward defining approaches for detecting and addressing test flakiness. Despite being quite accurate, most of these approaches rely on expensive dynamic steps, e.g., the computation of code coverage information. Consequently, they might suffer from scalability issues that possibly preclude their practical use. This limitation has been recently targeted through machine learning solutions that could predict the flakiness of tests using various features, like source code vocabulary or a mixture of static and dynamic metrics computed on individual snapshots of the system. In this paper, we aim to perform a step forward and predict test flakiness only using static metrics. We propose a large-scale experiment on 70 Java projects coming from the iDFlakies and FlakeFlagger datasets. First, we statistically assess the differences between flaky and non-flaky tests in terms of 25 test and production code metrics and smells, analyzing both their individual and combined effects. Based on the results achieved, we experiment with a machine learning approach that predicts test flakiness solely based on static features, comparing it with two state-of-the-art approaches. The key results of the study show that the static approach has performance comparable to those of the baselines. In addition, we found that the characteristics of the production code might impact the performance of the flaky test prediction models.
Funder
Schweizerischer Nationalfonds zur Förderung der Wissenschaftlichen Forschung
Ministero dell’Istruzione, dell’Università e della Ricerca
Università degli Studi di Salerno
Publisher
Springer Science and Business Media LLC
Cited by: 4 articles.