1. Touvron H, Lavril T, Izacard G, Martinet X, Lachaux M-A, Lacroix T, Rozière B, Goyal N, Hambro E, Azhar F et al (2023) LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971. Accessed 09 June 2023
2. Anil R, Dai AM, Firat O, Johnson M, Lepikhin D, Passos A, Shakeri S, Taropa E, Bailey P, Chen Z et al (2023) PaLM 2 technical report. arXiv preprint arXiv:2305.10403. Accessed 03 July 2023
3. Bai J, Bai S, Chu Y, Cui Z, Dang K, Deng X, Fan Y, Ge W, Han Y, Huang F et al (2023) Qwen technical report. arXiv preprint arXiv:2309.16609. Accessed 07 Dec 2023
4. Zhang Y, Li Y, Cui L, Cai D, Liu L, Fu T, Huang X, Zhao E, Zhang Y, Chen Y et al (2023) Siren’s song in the AI ocean: A survey on hallucination in large language models. arXiv preprint arXiv:2309.01219. Accessed 08 Aug 2023
5. Martino A, Iannelli M, Truong C (2023) Knowledge injection to counter large language model (LLM) hallucination. In: European Semantic Web Conference. Springer, New York, pp 182–185