1. Radford, A., and Narasimhan, K. (2018). Improving Language Understanding by Generative Pre-Training. OpenAI Blog. Available online: https://www.cs.ubc.ca/~amuham01/LING530/papers/radford2018improving.pdf (accessed on 25 July 2024).
2. Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., and Sutskever, I. (2019). Language Models Are Unsupervised Multitask Learners. OpenAI Blog. Available online: https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf (accessed on 25 July 2024).
3. Brown, T.B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Agarwal, S., et al. (2020). Language Models Are Few-Shot Learners. arXiv.
4. Liu, J., Shen, D., Zhang, Y., Dolan, B., Carin, L., and Chen, W. (2021). What Makes Good In-Context Examples for GPT-3? arXiv.
5. Sadallah (2023). Use of ChatGPT in academia: Academic integrity hangs in the balance. Technol. Soc.