Affiliation:
1. Fondazione Bruno Kessler, Trento, Italy
2. University of Milano Bicocca, Milano, Italy
3. Fondazione Bruno Kessler and SnT Centre, University of Luxembourg, Luxembourg
Abstract
Several techniques and tools have been proposed for the automatic generation of test cases. Usually, these tools are evaluated in terms of fault-revealing or coverage capability, but their impact on the manual debugging activity is not considered. The question is whether automatically generated test cases are as effective in supporting debugging as manually written tests.
We conducted a family of three experiments (five replications) with humans (in total, 55 subjects) to assess whether the features of automatically generated test cases, which make them less readable and understandable (e.g., unclear test scenarios, meaningless identifiers), have an impact on the effectiveness and efficiency of debugging. The first two experiments compare different test case generation tools (Randoop vs. EvoSuite). The third experiment investigates the role of code identifiers in test cases (obfuscated vs. original identifiers), since a major difference between manual and automatically generated test cases is that the latter contain meaningless (obfuscated) identifiers.
We show that automatically generated test cases are as useful for debugging as manual test cases. Furthermore, we find that, for less experienced developers, automatic tests are more useful on average due to their lower static and dynamic complexity.
Publisher
Association for Computing Machinery (ACM)
Cited by
34 articles.
1. Toward granular search-based automatic unit test case generation;Empirical Software Engineering;2024-05-17
2. TestSpark: IntelliJ IDEA's Ultimate Test Generation Companion;Proceedings of the 2024 IEEE/ACM 46th International Conference on Software Engineering: Companion Proceedings;2024-04-14
3. Investigating the readability of test code;Empirical Software Engineering;2024-02-26
4. A Comparison Study for Test Case Management Tools;Studies in Systems, Decision and Control;2024
5. Improving Model-Based Testing Through Interactive Validation, Evaluation and Reconstruction of Test Cases;Communications in Computer and Information Science;2024