1. Molchanov, P., Tyree, S., Karras, T., Aila, T., and Kautz, J. (2017). Pruning Convolutional Neural Networks for Resource Efficient Inference. arXiv.
2. Cortes, C., Lawrence, N., Lee, D., Sugiyama, M., and Garnett, R. (2015). Advances in Neural Information Processing Systems, Curran Associates, Inc.
3. Hu, H., Peng, R., Tai, Y.W., and Tang, C.K. (2016). Network Trimming: A Data-Driven Neuron Pruning Approach towards Efficient Deep Architectures. arXiv.
4. Li, H., Kadav, A., Durdanovic, I., Samet, H., and Graf, H.P. (2017). Pruning Filters for Efficient ConvNets. arXiv.
5. Frankle, J., and Carbin, M. (2019, January 6–9). The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks. Proceedings of the International Conference on Learning Representations, New Orleans, LA, USA.