1. Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. GPT-4 technical report. arXiv preprint arXiv:2303.08774 (2023).
2. Giovanni S Alberti, Ernesto De Vito, Matti Lassas, Luca Ratti, and Matteo Santacesaria. 2021. Learning the optimal Tikhonov regularizer for inverse problems. In Advances in Neural Information Processing Systems, M. Ranzato, A. Beygelzimer, Y. Dauphin, P. S. Liang, and J. Wortman Vaughan (Eds.), Vol. 34. Curran Associates, Inc., 25205--25216. https://proceedings.neurips.cc/paper_files/paper/2021/file/d3e6cd9f66f2c1d3840ade4161cf7406-Paper.pdf
3. Teodoro Baldazzi, Luigi Bellomarini, Stefano Ceri, Andrea Colombo, Andrea Gentili, and Emanuel Sallinger. 2023. Fine-Tuning Large Enterprise Language Models via Ontological Reasoning. In Rules and Reasoning, Anna Fensel, Ana Ozaki, Dumitru Roman, and Ahmet Soylu (Eds.). Springer Nature Switzerland, Cham, 86--94.
4. Chris M Bishop. 1995. Training with noise is equivalent to Tikhonov regularization. Neural Computation 7, 1 (1995), 108--116.
5. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems 33 (2020), 1877--1901.