Don’t Complete It! Preventing Unhelpful Code Completion for Productive and Sustainable Neural Code Completion Systems

Authors:

Zhensu Sun¹, Xiaoning Du², Fu Song³, Shangwen Wang⁴, Mingze Ni⁵, Li Li⁶, David Lo¹

Affiliation:

1. Singapore Management University, Singapore

2. Monash University, Australia

3. Key Laboratory of System Software (Chinese Academy of Sciences), State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, China

4. National University of Defense Technology, China

5. University of Technology Sydney, Australia

6. Beihang University, Beijing, China

Abstract

Large pre-trained language models are now widely applied in neural code completion systems. Although large code models significantly outperform their smaller counterparts, around 70% of the code completions displayed by GitHub Copilot are not accepted by developers. Completions that are reviewed but rejected contribute little to developer productivity and may even add to developers' workload, since state-of-the-art code completion systems automatically and actively generate completions as developers type, once the service is enabled. Worse still, given the high inference cost of large code models, these rejected completions waste substantial computing resources and energy, which runs counter to the sustainable development principles of AI technologies. Yet this waste has not been recognized, let alone effectively addressed, by the neural code completion research community. Preventing such unhelpful code completions in a cost-friendly way is therefore an urgent need. To fill this significant gap, we first investigate the prompts that lead to unhelpful code completions, which we call "low-return prompts". We empirically identify four observable patterns in low-return prompts; each lacks information necessary for a helpful completion, which makes the problem difficult to solve through improvements to the model's accuracy alone. This finding demonstrates the feasibility of identifying low-return prompts from the prompts themselves. Motivated by it, we propose an early-rejection mechanism that turns down low-return prompts by predicting the quality of the code completion in advance: prompts estimated to receive unhelpful completions are never sent to the model. We further investigate five types of estimators to demonstrate the feasibility of the mechanism. Experimental results show that the estimator can reject 20% of code completion requests with 97.4% precision.
To the best of our knowledge, this is the first systematic approach to the problem of unhelpful code completions, and this work also sheds light on an important research direction for large code models.
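The early-rejection mechanism described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: `looks_low_return` is a hypothetical placeholder for the learned estimator, and its rejection rules are invented for illustration, not the four patterns the paper identifies.

```python
from typing import Callable, Optional

def looks_low_return(prompt: str) -> bool:
    """Cheap heuristic stand-in for the paper's learned estimator.

    The rules below are illustrative placeholders, not the actual
    patterns identified in the paper.
    """
    stripped = prompt.rstrip()
    # A nearly empty prompt, or one ending mid-expression, carries too
    # little context for a model to produce a helpful completion.
    return len(stripped) < 10 or stripped.endswith((",", "(", "="))

def complete(prompt: str, model: Callable[[str], str]) -> Optional[str]:
    """Gate the expensive model call behind the estimator."""
    if looks_low_return(prompt):
        return None  # early rejection: the large model is never invoked
    return model(prompt)
```

The key design point is that the estimator runs before, and is far cheaper than, the large code model, so rejected requests cost almost nothing in compute.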

Publisher

Association for Computing Machinery (ACM)

