1. Howard, J., and Ruder, S. (2018, July 15–20). Universal Language Model Fine-Tuning for Text Classification. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Melbourne, Australia.
2. Houlsby, N., Giurgiu, A., Jastrzebski, S., Morrone, B., de Laroussilhe, Q., Gesmundo, A., Attariyan, M., and Gelly, S. (2019, June 9–15). Parameter-Efficient Transfer Learning for NLP. Proceedings of the 36th International Conference on Machine Learning (PMLR, Vol. 97), Long Beach, CA, USA.
3. Brown, T.B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., and Askell, A. (2020). Language Models Are Few-Shot Learners. arXiv, Available online: https://arxiv.org/abs/2005.14165v4.
4. Lewis, P., Perez, E., Piktus, A., Petroni, F., Karpukhin, V., Goyal, N., Küttler, H., Lewis, M., Yih, W., and Rocktäschel, T. (2020). Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. arXiv, Available online: https://arxiv.org/abs/2005.11401.
5. Gao, Y., Xiong, Y., Gao, X., Jia, K., Pan, J., Bi, Y., Dai, Y., Sun, J., Guo, Q., and Wang, M. (2023). Retrieval-Augmented Generation for Large Language Models: A Survey. arXiv, Available online: https://arxiv.org/abs/2312.10997.