Affiliation:
1. Fudan University, Shanghai, China
Abstract
Unit testing plays an essential role in detecting bugs in functionally-discrete program units (e.g., methods). Manually writing high-quality unit tests is time-consuming and laborious. Although traditional techniques can generate tests with reasonable coverage, they exhibit low readability and still cannot be directly adopted by developers in practice. Recent work has shown the large potential of large language models (LLMs) in unit test generation. Pre-trained on massive corpora of developer-written code, these models are capable of generating more human-like and meaningful test code.
In this work, we perform the first empirical study to evaluate the capability of ChatGPT (i.e., one of the most representative LLMs, with outstanding performance in code generation and comprehension) in unit test generation. In particular, we conduct both a quantitative analysis and a user study to systematically investigate the quality of its generated tests in terms of correctness, sufficiency, readability, and usability. We find that the tests generated by ChatGPT still suffer from correctness issues, including diverse compilation errors and execution failures (mostly caused by incorrect assertions); however, the passing tests generated by ChatGPT closely resemble manually-written tests, achieving comparable coverage and readability, and are sometimes even preferred by developers. Our findings indicate that generating unit tests with ChatGPT could be very promising if the correctness of its generated tests can be further improved.
Inspired by the findings above, we further propose ChatTester, a novel ChatGPT-based unit test generation approach that leverages ChatGPT itself to improve the quality of its generated tests. ChatTester incorporates an initial test generator and an iterative test refiner. Our evaluation demonstrates the effectiveness of ChatTester, which generates 34.3% more compilable tests and 18.7% more tests with correct assertions than the default ChatGPT. In addition to ChatGPT, we investigate the generalization capabilities of ChatTester by applying it to two recent open-source LLMs (i.e., CodeLlama-Instruct and CodeFuse), and our results show that ChatTester can also improve the quality of tests generated by these LLMs.
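The iterative test refiner described above can be illustrated with a minimal sketch. Note that `compile_and_run` and `ask_llm_to_refine` are hypothetical stand-ins for a real compiler/runner and an LLM re-prompting step; the paper's actual prompts and interfaces are not reproduced here.

```python
# Illustrative sketch of an iterative test-refinement loop in the spirit of
# ChatTester. All helper names below are hypothetical stubs, not the paper's API.

MAX_ITERATIONS = 3  # iteration budget for refinement

def compile_and_run(test_code):
    """Stub for compiling/executing a generated test.
    Returns an error message, or None if the test compiles and passes."""
    if "assertEquals" not in test_code:
        return "error: test has no assertion"
    return None

def ask_llm_to_refine(test_code, error_message):
    """Stub for re-prompting the LLM with the failing test and its error.
    A real refiner would send both back to the model and parse its reply."""
    return test_code + "\nassertEquals(expected, actual);"

def refine_test(initial_test):
    """Iteratively repair a generated test until it compiles and passes,
    or until the iteration budget is exhausted."""
    test = initial_test
    for _ in range(MAX_ITERATIONS):
        error = compile_and_run(test)
        if error is None:
            return test  # success: test compiles and its assertions pass
        test = ask_llm_to_refine(test, error)
    return test  # best effort after the budget is spent

refined = refine_test("void testAdd() { int actual = add(1, 2); }")
```

The key design point is feeding the concrete compiler or runtime error back into the model, rather than regenerating the test from scratch, so each iteration targets the specific failure.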
Publisher
Association for Computing Machinery (ACM)
References: 89 articles.
Cited by: 1 article.