1. Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G. S., Davis, A., Dean, J., Devin, M., Ghemawat, S., Goodfellow, I., Harp, A., Irving, G., Isard, M., Jia, Y., Jozefowicz, R., Kaiser, L., Kudlur, M., … Zheng, X. (2016). TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems. ArXiv, abs/1603.04467. https://doi.org/10.48550/arXiv.1603.04467
2. Belkin, M., Hsu, D., Ma, S., & Mandal, S. (2019). Reconciling modern machine-learning practice and the classical bias-variance trade-off. Proceedings of the National Academy of Sciences, 116(32), 15849-15854. https://doi.org/10.1073/pnas.1903070116
3. Blalock, D., Ortiz, J. J. G., Frankle, J., & Guttag, J. (2020). What is the state of neural network pruning? ArXiv, abs/2003.03033. https://doi.org/10.48550/arXiv.2003.03033
4. Bouthillier, X., Delaunay, P., Bronzi, M., Trofimov, A., Nichyporuk, B., Szeto, J., Sepah, N., Raff, E., Madan, K., Voleti, V., Kahou, S. E., Michalski, V., Serdyuk, D., Arbel, T., Pal, C., Varoquaux, G., & Vincent, P. (2021). Accounting for variance in machine learning benchmarks. ArXiv, abs/2103.03098. https://doi.org/10.48550/arXiv.2103.03098
5. Choi, Y., El-Khamy, M., & Lee, J. (2016). Towards the limit of network quantization. ArXiv, abs/1612.01543. https://doi.org/10.48550/arXiv.1612.01543