1. Brown, T.B., Mann, B., and Ryder, N. (2020). Language models are few-shot learners. arXiv.
2. Chen, M., Tworek, J., Jun, H., Yuan, Q., de Oliveira Pinto, H.P., Kaplan, J., Edwards, H., Burda, Y., Joseph, N., and Brockman, G. (2021). Evaluating large language models trained on code. arXiv.
3. Wahde, M., and Virgolin, M. (2022). Conversational agents: Theory and applications. arXiv.
4. Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., and Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI Blog. Available online: https://life-extension.github.io/2020/05/27/GPT%E6%8A%80%E6%9C%AF%E5%88%9D%E6%8E%A2/language-models.pdf (accessed on 26 April 2023).
5. Wei, J., Bosma, M., Zhao, V.Y., Guu, K., Yu, A.W., Lester, B., Du, N., Dai, A.M., and Le, Q.V. (2022). Finetuned language models are zero-shot learners. arXiv.