Author:
Ming Daoyang, Xiong Weicheng
Abstract
Code summarization describes the main purpose of a given function in natural language and can benefit many software engineering tasks. Due to the special grammar and syntax structure of programming languages and the various shortcomings of different deep neural networks, the accuracy of existing code summarization approaches remains unsatisfactory. This work proposes adopting a hierarchical attention mechanism that enables the code summarization framework to translate three representations of source code into the hidden space, and then injects them into a deep reinforcement learning model to enhance code summarization performance. We conduct several experiments, and the results show that the proposed approach achieves better accuracy than the baseline approaches.
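The fusion step described above can be illustrated with a minimal two-level attention sketch. This is a hypothetical illustration, not the authors' actual model: it assumes each of the three code representations (e.g. token, AST, and data-flow sequences) has already been encoded into a sequence of hidden states, attends over the tokens within each representation, then attends over the three per-representation summaries to produce one fused context vector.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attend(states, query):
    # Dot-product attention: weight each state by its score against
    # the query, then return the weighted sum. states: (n, d), query: (d,)
    weights = softmax(states @ query)
    return weights @ states

def hierarchical_encode(representations, query):
    # Level 1: summarize each representation's token states.
    summaries = np.stack([attend(r, query) for r in representations])
    # Level 2: attend over the per-representation summaries.
    return attend(summaries, query)

# Toy example: three representations, each a sequence of five
# 4-dimensional hidden states (randomly generated stand-ins).
rng = np.random.default_rng(0)
reps = [rng.normal(size=(5, 4)) for _ in range(3)]
query = rng.normal(size=4)
context = hierarchical_encode(reps, query)  # fused (4,) context vector
```

In a full pipeline such a fused context vector would feed the decoder (or, as in this work, a reinforcement learning policy) that generates the summary tokens.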
Publisher
Darcy & Roy Press Co. Ltd.
References (40 articles)
1. N. KALCHBRENNER, L. ESPEHOLT, K. SIMONYAN, A. V. D. OORD, AND K. KAVUKCUOGLU, Neural machine translation in linear time, arXiv preprint arXiv:1610.10099, (2016).
2. A. GRAVES, Generating sequences with recurrent neural networks, arXiv preprint arXiv:1308.0850, (2013).
3. A. LOUIS, S. K. DASH, E. T. BARR, AND C. SUTTON, Deep learning to detect redundant method comments, arXiv preprint arXiv:1806.04616, (2018).
4. R. COLLOBERT AND J. WESTON, A unified architecture for natural language processing: Deep neural networks with multitask learning, in Machine Learning, Proceedings of the Twenty-Fifth International Conference (ICML 2008), 2008, pp. 160–167.
5. S. IYER, I. KONSTAS, A. CHEUNG, AND L. ZETTLEMOYER, Summarizing source code using a neural attention model, in Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (1), 2016, pp. 2073–2083.