Author:
Daniel Cruz, Amanda Santana, Eduardo Figueiredo
Abstract
Code smells are symptoms of poor design choices in the source code. Several code smell detection tools and strategies have been proposed over the years, including strategies based on machine learning algorithms. However, we lack empirical evidence on how expert feedback could improve machine-learning-based detection of code smells. This paper proposes and evaluates a conceptual strategy to improve the machine-learning detection of code smells by means of continuous feedback. To evaluate the strategy, we follow an exploratory design that compares the results of smell detection before and after feedback provided by a service acting as a software expert. We focus on four code smells, namely God Class, Long Method, Feature Envy, and Refused Bequest, detected in 20 Java systems. We observed that continuous feedback improves the performance of code smell detection. For the class-level code smells, God Class and Refused Bequest, we achieved average F1 improvements of 0.13 and 0.58, respectively, after 50 iterations of feedback. For the method-level code smells, Long Method and Feature Envy, the F1 improvements were 0.66 and 0.72, respectively.
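To illustrate the kind of loop the abstract describes, the sketch below shows a minimal continuous-feedback cycle in Python: a classifier is retrained after each iteration with one additional instance labeled by an oracle that stands in for the expert service. The synthetic data, the RandomForestClassifier model, and the uncertainty-based selection of instances are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

# Hypothetical data: rows are code elements, columns are software metrics
# (e.g., LOC, WMC); y marks smelly (1) versus clean (0) elements.
X = rng.random((500, 8))
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

train_idx = list(range(20))        # small initially labeled set
pool_idx = list(range(20, 400))    # pool the "expert" can still label
test_idx = list(range(400, 500))   # held-out evaluation set

clf = RandomForestClassifier(n_estimators=100, random_state=0)

for _ in range(50):  # 50 feedback iterations, mirroring the study
    clf.fit(X[train_idx], y[train_idx])

    # Feedback step: select the pool instance with the most uncertain
    # prediction (uncertainty sampling is an assumption here), obtain its
    # true label from the oracle, and move it into the training set.
    proba = clf.predict_proba(X[pool_idx])[:, 1]
    most_uncertain = int(np.argmin(np.abs(proba - 0.5)))
    train_idx.append(pool_idx.pop(most_uncertain))

# Retrain on the final training set and evaluate on held-out data.
clf.fit(X[train_idx], y[train_idx])
print("F1 after 50 feedback iterations:",
      f1_score(y[test_idx], clf.predict(X[test_idx])))
```

In the study's setting, the features would correspond to software metrics computed per class or method (for instance, with the CK tool cited in the references), and the labels would come from the expert service rather than a synthetic oracle.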
Publisher
Sociedade Brasileira de Computação
References (39 articles).
1. An exploratory evaluation of continuous feedback to enhance machine learning code smell detection: online replication package, 2023. URL [link].
2. M. Abbes, F. Khomh, Y.-G. Guéhéneuc, and G. Antoniol. An empirical study of the impact of two antipatterns, Blob and Spaghetti Code, on program comprehension. In European Conference on Software Maintenance and Reengineering (CSMR), 2011.
3. L. Amorim, E. Costa, N. Antunes, B. Fonseca, and M. Ribeiro. Experience report: Evaluating the effectiveness of decision trees for detecting code smells. In International Symposium on Software Reliability Engineering (ISSRE), pages 261–269, 2015.
4. M. Aniche. Java code metrics calculator (CK), 2015. Available at [link].
5. H. Barkmann, R. Lincke, and W. Löwe. Quantitative evaluation of software quality metrics in open-source projects. In International Conference on Advanced Information Networking and Applications Workshops (AINA), pages 1067–1072, 2009.