Affiliation:
1. School of Computer Science, University College Dublin, D04 V1W8 Dublin, Ireland
2. School of Computer Science and Statistics, Trinity College Dublin, D02 PN40 Dublin, Ireland
3. School of Computer Science, University of Galway, H91 TK33 Galway, Ireland
Abstract
The ability to automatically generate code, i.e., program synthesis, is one of the most important applications of artificial intelligence (AI). Currently, two AI techniques are leading the way: large language models (LLMs) and genetic programming (GP) methods, each with its own strengths and weaknesses. While LLMs have shown success in program synthesis from a task description, they often struggle to generate correct code due to ambiguity in task specifications, complex programming syntax, and a lack of reliability in the generated code. Furthermore, their generative nature limits their ability to fix erroneous code through iterative LLM prompting. Grammar-guided genetic programming (G3P, one of the leading GP methods) has been shown capable of evolving programs that fit a defined Backus–Naur form (BNF) grammar based on a set of input/output tests that help guide the search process while ensuring that the generated code does not include calls to untrustworthy libraries or poorly structured snippets. However, G3P still struggles to generate code for complex tasks. A recent study that attempted to combine both approaches (G3P and LLMs) by seeding an LLM-generated program into the initial population of G3P has shown promising results. However, the approach rapidly loses the seeded information over the evolutionary process, which hinders its performance. In this work, we propose combining an LLM (specifically ChatGPT) with a many-objective G3P (MaOG3P) framework in two parts: (i) providing the LLM-generated code as a seed to the evolutionary process, following a grammar-mapping phase that creates an avenue for program evolution and error correction; and (ii) leveraging many-objective similarity measures towards the LLM-generated code to guide the search process throughout the evolution. The idea behind using the similarity measures is that the LLM-generated code is likely to be close to the correct fitting code.
Our approach compels any generated program to adhere to the BNF grammar, ultimately mitigating security risks and improving code quality. Experiments on a well-known and widely used program synthesis dataset show that our approach successfully improves the synthesis of grammar-fitting code for several tasks.
Funder
Science Foundation Ireland