Authors:
Wu Jincheng, Hu Dongfang, Zheng Zhitong
Publisher:
Springer Nature Singapore
References: 36 articles.
1. Chen, T.Y., et al.: Only train once: a one-shot neural network training and pruning framework. In: Advances in Neural Information Processing Systems, vol. 34 (2021)
2. Chin, T.W., Ding, R.Z., Zhang, C., Marculescu, D.: Towards efficient model compression via learned global ranking. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1518–1528 (2020)
3. Ding, X.H., Hao, T.X., et al.: ResRep: lossless CNN pruning via decoupling remembering and forgetting. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 4510–4520 (2021)
4. Frankle, J., Carbin, M.: The lottery ticket hypothesis: finding sparse, trainable neural networks. arXiv preprint arXiv:1803.03635 (2018)
5. Gao, S.Q., Huang, F.H., Cai, W.D., Huang, H.: Network pruning via performance maximization. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9270–9280 (2021)