1. Zhang S, Roller S, Goyal N, Artetxe M, Chen M, Chen S, Dewan C, Diab M, Li X, Lin XV, Mihaylov T, Ott M, Shleifer S, Shuster K, Simig D, Koura PS, Sridhar A, Wang T, Zettlemoyer L. OPT: open pre-trained transformer language models. arXiv preprint, 2205.01068, 2022
2. Zhang Z, Gu Y, Han X, Chen S, Xiao C, Sun Z, Yao Y, Qi F, Guan J, Ke P, Cai Y, Zeng G, Tan Z, Liu Z, Huang M, Han W, Liu Y, Zhu X, Sun M. CPM-2: large-scale cost-effective pre-trained language models. AI Open, 2021, 2: 216–224
3. Raffel C, Shazeer N, Roberts A, Lee K, Narang S, Matena M, Zhou Y, Li W, Liu PJ. Exploring the limits of transfer learning with a unified text-to-text transformer. J Mach Learn Res, 2020, 21: 1–67
4. Brown TB, Mann B, Ryder N, Subbiah M, Kaplan J, Dhariwal P, Neelakantan A, Shyam P, Sastry G, Askell A, Agarwal S, Herbert-Voss A, Krueger G, Henighan T, Child R, Ramesh A, Ziegler DM, Wu J, Winter C, Hesse C, Chen M, Sigler E, Litwin M, Gray S, Chess B, Clark J, Berner C, McCandlish S, Radford A, Sutskever I, Amodei D. Language models are few-shot learners. arXiv preprint, 2005.14165, 2020
5. OpenAI. ChatGPT. https://chat.openai.com/