Affiliation:
1. College of Computer Science and Technology, Zhejiang University, Hangzhou, Zhejiang, China
2. Faculty of Information Technology, Monash University, Victoria, Australia
3. School of Information Systems, Singapore Management University, Singapore
Abstract
Code summarization aims at generating a code comment given a block of source code and it is normally performed by training machine learning algorithms on existing code block-comment pairs. Code comments in practice have different intentions. For example, some code comments might explain how the methods work, while others explain why some methods are written. Previous works have shown that a relationship exists between a code block and the category of a comment associated with it. In this article, we aim to investigate to which extent we can exploit this relationship to improve code summarization performance. We first classify comments into six intention categories and manually label 20,000 code-comment pairs. These categories include
“what,”
“why,”
“how-to-use,”
“how-it-is-done,”
“property,”
and
“others.”
Based on this dataset, we conduct an experiment to investigate how different state-of-the-art code summarization approaches perform on each category. We find that the performance of the approaches varies substantially across categories, and that the category on which a model performs best differs from model to model. In particular, no model performs best on "why" and "property" comments among the six categories. We then design a composite approach to demonstrate that comment category prediction can boost code summarization. The approach uses the labeled code-comment data to train a classifier that infers the category of a code block, selects the most suitable summarization model for that category, and outputs the composite result. Our composite approach outperforms approaches that do not consider comment categories, obtaining relative improvements of 8.57% in ROUGE-L and 16.34% in BLEU-4.
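The composite approach described above can be viewed as a small routing layer in front of several trained summarizers. The sketch below is illustrative only, assuming hypothetical classifier and summarizer objects with `predict` and `generate` methods; it is not the authors' implementation.

```python
# A minimal sketch of category-aware routing for code summarization,
# assuming a pre-trained intention classifier and one summarization model
# per category (both hypothetical interfaces).

CATEGORIES = ["what", "why", "how-to-use", "how-it-is-done", "property", "others"]


class CompositeSummarizer:
    def __init__(self, classifier, models_by_category):
        # classifier: maps a code block to one label in CATEGORIES
        # models_by_category: for each category, the summarizer that
        # performed best on that category in the comparison experiment
        self.classifier = classifier
        self.models_by_category = models_by_category

    def summarize(self, code: str) -> str:
        category = self.classifier.predict(code)       # infer the comment intention
        model = self.models_by_category[category]      # route to the best model for it
        return model.generate(code)                    # generate the comment
```

Under this design, the quality of the composite output depends both on the per-category summarizers and on the accuracy of the intention classifier that performs the routing.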
Funder
Australian Research Council's Discovery Early Career Researcher Award
National Key R&D Program of China
NSFC Program
Publisher
Association for Computing Machinery (ACM)
Cited by
52 articles.