1. Google Research: Colaboratory FAQ. https://research.google.com/colaboratory/intl/es/faq.html
2. Wilkinson, B., Allen, C.M.: Parallel Programming: Techniques and Applications Using Networked Workstations and Parallel Computers. Pearson/Prentice Hall (2005)
3. Krzywaniak, A., Czarnul, P., Proficz, J.: GPU power capping for energy-performance trade-offs in training of deep convolutional neural networks for image recognition. In: Groen, D., de Mulatier, C., Paszynski, M., Krzhizhanovskaya, V.V., Dongarra, J.J., Sloot, P.M.A. (eds.) Computational Science – ICCS 2022. Lecture Notes in Computer Science, vol. 13350, pp. 667–681. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-08751-6_48
4. You, J., Chung, J.-W., Chowdhury, M.: Zeus: understanding and optimizing GPU energy consumption of DNN training (2022). http://arxiv.org/abs/2208.06102
5. Kirby, A.C., Samsi, S., Jones, M., Reuther, A., Kepner, J., Gadepally, V.: Layer-parallel training with GPU concurrency of deep residual neural networks via nonlinear multigrid (2020). http://arxiv.org/abs/2007.07336