1. Zhao, W. X., Zhou, K., Li, J., Tang, T., Wang, X., Hou, Y., Min, Y., Zhang, B., Zhang, J., Dong, Z., Du, Y., Yang, C., Chen, Y., Chen, Z., Jiang, J., Ren, R., Li, Y., ..., Tang, X., Liu, Z., Liu, P., Nie, J., and Wen, J. 2023. A Survey of Large Language Models. arXiv preprint arXiv:2303.18223.
2. Katherine Lee, Daphne Ippolito, Andrew Nystrom, Chiyuan Zhang, Douglas Eck, Chris Callison-Burch, and Nicholas Carlini. 2021. Deduplicating training data makes language models better. arXiv preprint arXiv:2107.06499.
3. Nayeon Lee, Wei Ping, Peng Xu, Mostofa Patwary, Mohammad Shoeybi, and Bryan Catanzaro. 2022. Factuality enhanced language models for open-ended text generation. arXiv preprint arXiv:2206.04624.
4. Ren, R., Wang, Y., Qu, Y., Zhao, W. X., Liu, J., Tian, H., Wu, H., Wen, J., and Wang, H. 2023. Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation. arXiv preprint arXiv:2307.11019.
5. Chen, F. and Feng, Y. 2023. Chain-of-Thought Prompt Distillation for Multimodal Named Entity Recognition and Multimodal Relation Extraction.