1. Siu, K., Stuart, D.M., Mahmoud, M., and Moshovos, A. (2018, September 30–October 2). Memory Requirements for Convolutional Neural Network Hardware Accelerators. Proceedings of the 2018 IEEE International Symposium on Workload Characterization (IISWC), Raleigh, NC, USA.
2. Chen, T., Li, M., Li, Y., Lin, M., Wang, N., Wang, M., Xiao, T., Xu, B., Zhang, C., and Zhang, Z. (2015, December 7–12). MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems. Proceedings of the Neural Information Processing Systems, Workshop on Machine Learning Systems, Montreal, QC, Canada.
3. Gruslys, A., Munos, R., Danihelka, I., Lanctot, M., and Graves, A. (2016, December 5–10). Memory-Efficient Backpropagation Through Time. Proceedings of the NIPS’16: 30th International Conference on Neural Information Processing Systems, Barcelona, Spain.
4. Diamos, G., Sengupta, S., Catanzaro, B., Chrzanowski, M., Coates, A., Elsen, E., Engel, J., Hannun, A., and Satheesh, S. (2016, June 19–24). Persistent RNNs: Stashing Recurrent Weights On-Chip. Proceedings of the ICML’16: 33rd International Conference on Machine Learning, New York, NY, USA.
5. Hagan, M., Demuth, H.B., Beale, M.H., and De Jesus, O. (2014). Neural Network Design, Martin Hagan. [2nd ed.]. Available online: https://hagan.okstate.edu/nnd.html.