Affiliations:
1. University of Luxembourg, Luxembourg
2. Huawei, China
3. Sabanci University, Turkey
4. Monash University, Australia
5. Chongqing University, China
Abstract
A large body of the literature on automated program repair develops approaches where patches are automatically generated and then validated against an oracle (e.g., a test suite). Because such an oracle can be imperfect, generated patches that the oracle accepts may still be incorrect. While the state of the art explores research directions that require dynamic information or rely on manually crafted heuristics, we study the benefit of learning code representations in order to derive deep features that may encode properties of patch correctness. Our empirical work investigates different representation learning approaches for code changes, deriving embeddings that are amenable to similarity computations for patch correctness identification, and assesses whether correct patches can be accurately classified by combining learned embeddings with engineered features. Experimental results demonstrate the potential of learned embeddings to empower Leopard (a patch correctness prediction framework implemented in this work) with learning algorithms for reasoning about patch correctness: a machine learning predictor combining BERT transformer-based learned embeddings with XGBoost achieves an AUC of about 0.803 when predicting patch correctness on a new dataset of 2,147 labeled patches that we collected for the experiments. Our investigations show that deep learned embeddings can lead to complementary or better performance compared with the state of the art, PATCH-SIM, which relies on dynamic information. By combining deep learned embeddings and engineered features, Panther (the upgraded version of Leopard implemented in this work) outperforms Leopard with higher scores in terms of AUC, +Recall, and -Recall, and can accurately identify more (in)correct patches that cannot be predicted by classifiers using only learned embeddings or only engineered features. Finally, we use an explainable ML technique, SHAP, to empirically interpret how the learned embeddings and engineered features contribute to the patch correctness prediction.
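The core prediction pipeline described in the abstract (fixed-size embeddings of code changes fed to a gradient-boosted classifier, evaluated by AUC) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' implementation: the random vectors stand in for BERT embeddings of patches, and scikit-learn's GradientBoostingClassifier is used as a stand-in for XGBoost so the sketch has no extra dependencies.

```python
# Hypothetical sketch of embedding-based patch correctness prediction.
# Random vectors stand in for BERT code-change embeddings; labels mark
# patches as correct (1) or incorrect (0). GradientBoostingClassifier
# substitutes for XGBoost purely for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_patches, dim = 600, 32              # toy sizes, not the paper's 2,147 x 768-d setup
X = rng.normal(size=(n_patches, dim)) # one embedding vector per candidate patch
w = rng.normal(size=dim)
# Synthetic labels: a noisy linear rule over the embedding space.
y = (X @ w + rng.normal(scale=0.5, size=n_patches) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"AUC: {auc:.3f}")
```

In the paper's setting, engineered features would be concatenated with the learned embeddings before training, and a tool such as SHAP could then attribute the classifier's predictions to individual embedding dimensions and features.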
Publisher
Association for Computing Machinery (ACM)
References: 76 articles.
Cited by: 5 articles.