1. Razvan Azamfirei, Sapna R. Kudchadkar, and James Fackler. 2023. Large language models and the perils of their hallucinations. Critical Care 27, Article 120 (2023).
2. Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, et al. 2022. Improving Language Models by Retrieving from Trillions of Tokens. In International Conference on Machine Learning, ICML 2022, 17--23 July 2022, Baltimore, Maryland, USA (Proceedings of Machine Learning Research, Vol. 162). PMLR, 2206--2240. https://proceedings.mlr.press/v162/borgeaud22a.html
3. Andrew Drozdov, Nathanael Schärli, Ekin Akyürek, Nathan Scales, Xinying Song, Xinyun Chen, Olivier Bousquet, and Denny Zhou. 2023. Compositional Semantic Parsing with Large Language Models. In The Eleventh International Conference on Learning Representations. https://openreview.net/forum?id=gJW8hSGBys8
4. Hady Elsahar, Pavlos Vougiouklis, Arslen Remaci, et al. 2018. T-REx: A Large Scale Alignment of Natural Language with Knowledge Base Triples. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018). European Language Resources Association (ELRA), Miyazaki, Japan. https://aclanthology.org/L18-1544