1. Blalock, D., Gonzalez Ortiz, J.J., Frankle, J., Guttag, J.: What is the state of neural network pruning? Proc. Mach. Learn. Syst. 2, 129–146 (2020)
2. Finnoff, W., Hergert, F., Zimmermann, H.G.: Improving model selection by nonconvergent methods. Neural Netw. 6(6), 771–783 (1993). https://doi.org/10.1016/S0893-6080(05)80122-4
3. Gale, T., Elsen, E., Hooker, S.: The state of sparsity in deep neural networks (2019). https://doi.org/10.48550/arXiv.1902.09574, arXiv:1902.09574 [cs, stat]
4. Hagiwara, M.: Removal of hidden units and weights for back propagation networks. In: Proceedings of 1993 International Conference on Neural Networks (IJCNN-93-Nagoya, Japan), vol. 1, pp. 351–354 (1993). https://doi.org/10.1109/IJCNN.1993.713929
5. Han, S., Pool, J., Tran, J., Dally, W.: Learning both weights and connections for efficient neural network. In: Advances in Neural Information Processing Systems, vol. 28. Curran Associates, Inc. (2015)