Affiliation:
1. University of Illinois at Urbana-Champaign, USA
2. University of Texas at Dallas, USA
Abstract
Regression test selection (RTS) aims to speed up regression testing by rerunning only the tests that are affected by code changes. RTS can be performed using static or dynamic analysis techniques. Our prior study showed that static and dynamic RTS perform similarly for medium-sized Java projects. However, that prior study also showed that static RTS can be unsafe, failing to select some tests that dynamic RTS selects, and that reflection was the only cause of unsafety observed among the evaluated projects.
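To illustrate why reflection can make static RTS unsafe, consider the following minimal Java sketch with hypothetical class names: the driver below reaches FeatureImpl only through a string passed to Class.forName, so a class-level static dependency analysis that inspects declared type references can miss the edge, and a change to FeatureImpl would not cause this test-like driver to be reselected.

import java.lang.reflect.Method;

// Hypothetical production class; a change here should trigger reselection
// of any test that depends on it.
class FeatureImpl {
    public String compute() {
        return "v1";
    }
}

// Hypothetical test-like driver that reaches FeatureImpl only via reflection.
// Class-level static analysis sees no type reference to FeatureImpl here,
// only the string "FeatureImpl", so the dependency edge can be missed.
public class ReflectiveFeatureTest {
    public static void main(String[] args) throws Exception {
        Class<?> clazz = Class.forName("FeatureImpl");
        Object instance = clazz.getDeclaredConstructor().newInstance();
        Method m = clazz.getMethod("compute");
        System.out.println("result = " + m.invoke(instance));
    }
}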
In this paper, we investigate five techniques (three purely static and two hybrid static-dynamic) that aim to make static RTS safe with respect to reflection. We implement these reflection-aware (RA) techniques by extending the reflection-unaware (RU), class-level static RTS technique in a tool called STARTS. To evaluate the RA techniques, we compare their end-to-end times with RU and with RetestAll, which reruns all tests after every code change. We also compare the safety and precision of the RA techniques with those of Ekstazi, a state-of-the-art dynamic RTS technique; a technique is more precise the fewer unaffected tests it selects.
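For context, the sketch below shows the core selection idea behind reflection-unaware, class-level static RTS: diff per-class checksums between versions and select every test class whose transitive static dependencies include a changed class. The names and data structures are assumptions made for illustration, not STARTS's actual implementation, and the sketch deliberately ignores reflective dependencies.

import java.util.*;

// Minimal sketch of reflection-unaware, class-level static RTS.
// Inputs are assumed to be precomputed: a class-level dependency graph
// (edges from a class to the classes it references statically) and
// per-class checksums for the old and new versions.
public class ClassLevelRtsSketch {

    static Set<String> selectTests(Set<String> testClasses,
                                   Map<String, Set<String>> dependsOn,
                                   Map<String, String> oldChecksums,
                                   Map<String, String> newChecksums) {
        // 1. Find classes whose checksum changed (or that were added/deleted).
        Set<String> changed = new HashSet<>();
        for (String cls : union(oldChecksums.keySet(), newChecksums.keySet())) {
            if (!Objects.equals(oldChecksums.get(cls), newChecksums.get(cls))) {
                changed.add(cls);
            }
        }
        // 2. Select each test whose transitive static dependencies hit a changed class.
        Set<String> selected = new HashSet<>();
        for (String test : testClasses) {
            if (!Collections.disjoint(transitiveDeps(test, dependsOn), changed)) {
                selected.add(test);
            }
        }
        return selected;
    }

    // Depth-first traversal of the static dependency graph, including the start class.
    static Set<String> transitiveDeps(String start, Map<String, Set<String>> dependsOn) {
        Set<String> seen = new HashSet<>();
        Deque<String> work = new ArrayDeque<>();
        work.push(start);
        while (!work.isEmpty()) {
            String cls = work.pop();
            if (seen.add(cls)) {
                for (String dep : dependsOn.getOrDefault(cls, Collections.emptySet())) {
                    work.push(dep);
                }
            }
        }
        return seen;
    }

    static Set<String> union(Set<String> a, Set<String> b) {
        Set<String> u = new HashSet<>(a);
        u.addAll(b);
        return u;
    }

    public static void main(String[] args) {
        // Toy example: TestA depends on Util, TestB does not; only Util changed.
        Map<String, Set<String>> deps = Map.of(
                "TestA", Set.of("Util"),
                "TestB", Set.of("Other"));
        Map<String, String> oldSums = Map.of("Util", "aaa", "Other", "bbb");
        Map<String, String> newSums = Map.of("Util", "ccc", "Other", "bbb");
        System.out.println(selectTests(Set.of("TestA", "TestB"), deps, oldSums, newSums));
        // Prints: [TestA]
    }
}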
Our evaluation on 1173 versions of 24 open-source Java projects shows negative results. The RA techniques improve the safety of RU but at very high costs. The purely static techniques are safe in our experiments but decrease the precision of RU, with end-to-end time at best 85.8% of RetestAll time, versus 69.1% for RU. One hybrid static-dynamic technique improves the safety of RU but at high cost, with end-to-end time that is 91.2% of RetestAll. The other hybrid static-dynamic technique provides better precision, is safer than RU, and incurs a lower end-to-end time (75.8% of RetestAll), but it can still be unsafe in the presence of test-order dependencies. Our study highlights the challenges involved in making static RTS safe with respect to reflection.
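As one hypothetical illustration of how a purely static technique might approximate reflective dependencies, the sketch below conservatively treats string constants found in a class's bytecode that match known project class names (for example, strings passed to Class.forName) as additional dependency edges. This is an assumption-laden sketch, not a description of the specific techniques evaluated in the paper; such an approximation can over-select (hurting precision) and can still miss dynamically computed class names.

import java.util.*;

// Hypothetical illustration of one way to make class-level static RTS
// reflection-aware: treat string constants in a class's bytecode that
// match known project class names as additional dependency edges.
// The string constants are assumed to have been extracted beforehand
// (e.g., from the constant pool); this sketch only shows the graph update.
public class ReflectionAwareEdgesSketch {

    static Map<String, Set<String>> addStringBasedEdges(
            Map<String, Set<String>> staticDeps,
            Map<String, Set<String>> stringConstantsPerClass,
            Set<String> projectClassNames) {
        Map<String, Set<String>> augmented = new HashMap<>();
        staticDeps.forEach((cls, deps) -> augmented.put(cls, new HashSet<>(deps)));
        stringConstantsPerClass.forEach((cls, strings) -> {
            for (String s : strings) {
                // Conservatively assume a string naming a project class is a
                // reflective dependency (e.g., an argument to Class.forName).
                if (projectClassNames.contains(s)) {
                    augmented.computeIfAbsent(cls, k -> new HashSet<>()).add(s);
                }
            }
        });
        return augmented;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> staticDeps =
                Map.of("ReflectiveFeatureTest", Set.of());
        Map<String, Set<String>> strings =
                Map.of("ReflectiveFeatureTest", Set.of("FeatureImpl", "v1"));
        Set<String> projectClasses = Set.of("ReflectiveFeatureTest", "FeatureImpl");
        System.out.println(addStringBasedEdges(staticDeps, strings, projectClasses));
        // Prints: {ReflectiveFeatureTest=[FeatureImpl]}
    }
}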
Funder
National Science Foundation
Publisher
Association for Computing Machinery (ACM)
Subject
Safety, Risk, Reliability and Quality; Software
References: 75 articles.
Cited by
24 articles.
1. Speeding up Genetic Improvement via Regression Test Selection;ACM Transactions on Software Engineering and Methodology;2024-07-23
2. Hybrid Regression Test Selection by Synergizing File and Method Call Dependences;Companion Proceedings of the 32nd ACM International Conference on the Foundations of Software Engineering;2024-07-10
3. An Approach to Regression Testing Selection based on Code Changes and Smells;8th Brazilian Symposium on Systematic and Automated Software Testing;2023-09-25
4. Optimizing Continuous Development by Detecting and Preventing Unnecessary Content Generation;2023 38th IEEE/ACM International Conference on Automated Software Engineering (ASE);2023-09-11
5. Extracting Inline Tests from Unit Tests;Proceedings of the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis;2023-07-12