1. Anthony, L. F. W., Kanding, B., and Selvan, R. (2020). Carbontracker: Tracking and predicting the carbon footprint of training deep learning models. ICML Workshop on Challenges in Deploying and Monitoring Machine Learning Systems. arXiv:2007.03051.
2. Assunção, V. D., Haniya, M. V., de Oliveira Fonseca, A. C., Jacob, A. C. P., Junior, J. T. A., de Magalhães, P. C., de Oliveira, A. K. B., and de Sousa, M. M. (2023). Análise dos desastres naturais em Petrópolis ocorridos em fevereiro de 2022 [Analysis of the natural disasters that occurred in Petrópolis in February 2022]. Encontro Nacional de Desastres, 3.
3. Bender, E. M., Gebru, T., McMillan-Major, A., and Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT ’21, pages 610–623, New York, NY, USA. Association for Computing Machinery.
4. Bouza, L., Bugeau, A., and Lannelongue, L. (2023). How to estimate carbon footprint when training deep learning models? A guide and review. Environmental Research Communications, 5.
5. Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al. (2020). Language models are few-shot learners. arXiv preprint arXiv:2005.14165.