Affiliation:
1. Shandong University, Qingdao, China
2. Beihang University, Beijing, China
3. City University of Hong Kong, Hong Kong, China
4. Peking University, Beijing, China
Abstract
Code translation tools, namely transpilers, are developed for automatic source-to-source translation. The latest learning-based transpilers have shown impressive improvements over rule-based counterparts in both translation accuracy and readability, owing to their task-specific pre-training on extensive monolingual corpora. Nevertheless, their current performance remains unsatisfactory for practical deployment, and the associated training resources are prohibitively expensive. Large Language Models (LLMs), pre-trained on huge amounts of human-written code/text, have shown remarkable performance in many code intelligence tasks thanks to their powerful generality, even without task-specific re-training/fine-tuning. Thus, LLMs can potentially circumvent the above limitations, but they have not been exhaustively explored yet. This paper investigates diverse LLMs and learning-based transpilers for automated code translation tasks, finding that although certain LLMs outperform current transpilers, they still suffer from accuracy issues: most failures are induced by a lack of comprehension of source programs (38.51%), missing clear instructions on I/O types in translation (14.94%), and ignoring discrepancies between source and target programs (41.38%). Enlightened by these findings, we further propose UniTrans, a Unified code Translation framework applicable to various LLMs, for unleashing their power in this field. Specifically, UniTrans first crafts a series of test cases for target programs with the assistance of source programs. Next, it harnesses the auto-generated test cases to augment code translation and then evaluates correctness via execution. Afterward, UniTrans further (iteratively) repairs incorrectly translated programs prompted by test case execution results. Extensive experiments are conducted on six settings of translation datasets among Python, Java, and C++. Three recent LLMs of diverse sizes, including GPT-3.5 and LLaMA-13B/7B, are tested with UniTrans, and all achieve substantial improvements in computational accuracy and exact match accuracy across almost all translation settings, showing the universal effectiveness of UniTrans in practice.
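The three-stage pipeline the abstract describes (test-case crafting, test-augmented translation with execution-based checking, and iterative repair) can be sketched as a simple loop. This is an illustrative sketch only, not the authors' actual UniTrans implementation: the function names (`generate_test_cases`, `llm_translate`, `llm_repair`) are hypothetical stand-ins, and the LLM calls are stubbed with canned outputs so the control flow is runnable.

```python
def generate_test_cases(source_program):
    """Stand-in for stage 1: test cases crafted with the aid of the source
    program. Each case is an (input, expected_output) pair; in practice the
    expected output would come from executing the source program."""
    return [(2, 4), (5, 25), (-3, 9)]  # e.g., cases for a 'square' function

def llm_translate(source_program, test_cases):
    """Stand-in for stage 2: an LLM translation call whose prompt is
    augmented with the auto-generated test cases."""
    return "def square(x): return x * x if x > 0 else x"  # deliberately buggy

def llm_repair(translated_program, failures):
    """Stand-in for stage 3: an LLM repair call prompted with the failing
    test cases and their actual outputs."""
    return "def square(x): return x * x"

def run_tests(program_src, test_cases, entry="square"):
    """Execute a candidate translation and collect failing cases.
    A real system would compile/run target-language code in a sandbox."""
    ns = {}
    exec(program_src, ns)
    failures = []
    for inp, expected in test_cases:
        try:
            actual = ns[entry](inp)
        except Exception as exc:
            actual = repr(exc)
        if actual != expected:
            failures.append((inp, expected, actual))
    return failures

def translate_test_repair(source_program, max_repairs=3):
    """Translate, check via execution, and iteratively repair."""
    tests = generate_test_cases(source_program)
    candidate = llm_translate(source_program, tests)
    for _ in range(max_repairs):
        failures = run_tests(candidate, tests)
        if not failures:
            return candidate, True
        candidate = llm_repair(candidate, failures)
    return candidate, not run_tests(candidate, tests)
```

The design point the sketch tries to capture is that concrete execution feedback (failing inputs with expected vs. actual outputs) gives the repair prompt something far more actionable than a generic "the translation is wrong" signal.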
Funder
National Key R&D Program
National Natural Science Foundation of China
Shandong Province Overseas Outstanding Youth Fund
City University of Hong Kong
Key Program of Hubei
Publisher
Association for Computing Machinery (ACM)