Publisher: Springer Nature Switzerland