Affiliation:
1. University of L’Aquila, L’Aquila, Italy
2. Software Institute - USI, Lugano, Switzerland
3. University of Molise, Pesche (IS), Italy
Abstract
Refactoring aims to improve the maintainability of source code without modifying its external behavior. Previous work has proposed approaches to recommend refactoring solutions to software developers. The generation of the recommended solutions is guided by metrics acting as proxies for maintainability (e.g., the number of code smells removed by the recommended solution). These approaches ignore the impact of the recommended refactorings on other non-functional requirements, such as performance and energy consumption. Little is known about the impact of refactoring operations on non-functional requirements other than maintainability.
We aim to fill this gap by presenting the largest study to date investigating the impact of refactoring on software performance, in terms of execution time. We mined the change history of 20 systems that define performance benchmarks in their repositories, with the goal of identifying commits in which developers implemented refactoring operations affecting code components exercised by the performance benchmarks. Through a quantitative and qualitative analysis, we show that refactoring operations can significantly impact execution time. Indeed, none of the investigated refactoring types can be considered “safe” in ensuring no performance regression. Refactoring types aimed at decomposing complex code entities (e.g., Extract Class/Interface, Extract Method) have higher chances of triggering performance degradation, suggesting they deserve careful consideration when refactoring performance-critical code.
Funder
Swiss National Science Foundation
Publisher
Association for Computing Machinery (ACM)
Cited by
14 articles.
1. Evaluating Search-Based Software Microbenchmark Prioritization;IEEE Transactions on Software Engineering;2024-07
2. Automated construction of reference model for software remodularization through software evolution;Journal of Software: Evolution and Process;2024-06-19
3. An Empirical Study on Code Coverage of Performance Testing;Proceedings of the 28th International Conference on Evaluation and Assessment in Software Engineering;2024-06-18
4. Time Series Forecasting of Runtime Software Metrics: An Empirical Study;Proceedings of the 15th ACM/SPEC International Conference on Performance Engineering;2024-05-07
5. Creative and Correct: Requesting Diverse Code Solutions from AI Foundation Models;Proceedings of the 2024 IEEE/ACM First International Conference on AI Foundation Models and Software Engineering;2024-04-14