1. Toufique Ahmed and Premkumar Devanbu. 2023. Better Patching Using LLM Prompting, via Self-Consistency. arXiv:2306.00108 [cs.SE]
2. Ramakrishna Bairi, Atharv Sonwane, Aditya Kanade, Vageesh D C, Arun Shankar Iyer, Suresh Parthasarathy, Sriram K. Rajamani, B. Ashok, and Shashank P. Shet. 2023. CodePlan: Repository-level Coding using LLMs and Planning. ArXiv abs/2309.12499 (2023). https://api.semanticscholar.org/CorpusID:262217135
3. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, et al. 2021. Evaluating Large Language Models Trained on Code. arXiv:2107.03374 [cs.LG]
4. Yangruibo Ding, Zijian Wang, Wasi Uddin Ahmad, Murali Krishna Ramanathan, Ramesh Nallapati, Parminder Bhatia, Dan Roth, and Bing Xiang. 2022. CoCoMIC: Code Completion By Jointly Modeling In-file and Cross-file Context. ArXiv abs/2212.10007 (2022). https://api.semanticscholar.org/CorpusID:254877371
5. Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Zhiyong Wu, Baobao Chang, Xu Sun, Jingjing Xu, and Zhifang Sui. 2022. A Survey on In-context Learning. https://api.semanticscholar.org/CorpusID:255372865