1. 37 Million Compilations
2. Tom Brown, et al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems (2020).
3. “Unmuddying” course content using muddiest point reflections
4. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374 (2021).
5. Bringing "High-level" Down to Earth