1. Costa-jussà, M.R., Cross, J., Çelebi, O., Elbayad, M., Heafield, K., Heffernan, K., Kalbassi, E., Lam, J., Licht, D., Maillard, J., et al. (2022). No language left behind: Scaling human-centered machine translation. arXiv, arXiv:2207.04672.
2. Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al. (2020, December 6–12). Language models are few-shot learners. Proceedings of the 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Virtual. Available online: https://dl.acm.org/doi/pdf/10.5555/3495724.3495883.
3. Strubell, E., Ganesh, A., and McCallum, A. (2019, July 28–August 2). Energy and policy considerations for deep learning in NLP. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy. Available online: https://aclanthology.org/P19-1355/.
4. Henderson, P., Hu, J., Romoff, J., Brunskill, E., Jurafsky, D., and Pineau, J. (2020). Towards the systematic reporting of the energy and carbon footprints of machine learning. J. Mach. Learn. Res., 21, 1–43.
5. Lankford, S., Afli, H., and Way, A. (2023). adaptNMT: An open-source, language-agnostic development environment for neural machine translation. Lang. Resour. Eval., 57, 1671–1696.