1. Chain-of-thought prompting elicits reasoning in large language models; Wei et al.; arXiv preprint, 2022
2. Antecedent predictions are dominant for tree-based code generation; Dong et al.; arXiv preprint, 2022
3. Large language models are zero-shot reasoners; Kojima et al.; arXiv preprint, 2022
4. An AST structure enhanced decoder for code generation
5. Self-consistency improves chain of thought reasoning in language models; Wang et al.; arXiv preprint, 2022