Affiliation:
1. The College of Information Science and Technology, Dalian Maritime University, Dalian, China
2. Dalian Key Laboratory of Artificial Intelligence, Dalian, China
Abstract
Source code summaries improve the readability and intelligibility of code, help developers understand programs, and improve the efficiency of software maintenance and upgrades. Unfortunately, these code comments are often mismatched, missing, or outdated in software projects, so developers must infer functionality from the source code itself, which reduces the efficiency of software maintenance and evolution. Various neural‐network‐based methods have been proposed to generate source code summaries automatically. However, most existing work targets resource‐rich programming languages such as Java and Python, and these methods may perform poorly on low‐resource languages. To address this challenge, we propose a context‐based transfer learning model for low‐resource code summarization (LRCS), which learns common information from a resource‐rich language and then transfers it to the target‐language model for further learning. It consists of two components: a summary generation component that learns the syntactic and semantic information of the code, and a learning transfer component that improves the generalization ability of the model during cross‐language code summarization. Experimental results show that LRCS outperforms baseline methods on code summarization in terms of sentence‐level BLEU, corpus‐level BLEU, and METEOR. For example, LRCS improves corpus‐level BLEU scores by 52.90%, 41.10%, and 14.97%, respectively, compared to the baseline methods.
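The two-stage setup described above can be illustrated with a toy sketch. This is not the authors' implementation (which is neural); it is a hypothetical, minimal stand-in that uses simple token co-occurrence counts to show the transfer idea: parameters learned on a resource-rich language are carried over and then fine-tuned on the low-resource target language, so knowledge about tokens never seen in the target corpus still transfers.

```python
# Hypothetical sketch of the transfer-learning scheme in the abstract
# (a co-occurrence table stands in for the neural model's parameters).
from collections import defaultdict

def train(pairs, table=None):
    """Accumulate co-occurrence counts between code tokens and summary tokens.
    Passing an existing table continues training (i.e. fine-tuning)."""
    if table is None:
        table = defaultdict(lambda: defaultdict(int))
    for code, summary in pairs:
        for c in code.split():
            for s in summary.split():
                table[c][s] += 1
    return table

def summarize(code, table):
    """Emit, for each code token, its most strongly associated summary token."""
    out = []
    for c in code.split():
        if table[c]:
            out.append(max(table[c], key=table[c].get))
    return " ".join(dict.fromkeys(out))  # dedupe, keep order

# Stage 1: pretrain on a resource-rich language (Java-style toy pairs).
rich = [("public int add ( a , b )", "add two numbers"),
        ("public int sub ( a , b )", "subtract two numbers")]
model = train(rich)

# Stage 2: transfer the learned table and fine-tune on the
# low-resource target language (here, invented "fn"-style pairs).
low = [("fn add ( a , b )", "add two numbers")]
model = train(low, table=model)

# "sub" never appears in the low-resource corpus, yet the transferred
# knowledge from the rich language still lets the model describe it.
summary = summarize("fn sub ( a , b )", model)
```

In this sketch the query `fn sub ( a , b )` yields a summary containing "subtract" even though the low-resource fine-tuning data never mentions subtraction, mirroring how LRCS is meant to reuse cross-language regularities.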
Funder
National Natural Science Foundation of China
Natural Science Foundation of Liaoning Province
Fundamental Research Funds for the Central Universities
Cited by
1 article.