1. Learning structured sparsity in deep neural networks;wen;Advances in neural information processing systems,2016
2. Training neural networks without gradients: A scalable admm approach;taylor;International Conference on Machine Learning,2016
3. Adversarial Robustness vs. Model Compression, or Both?
4. Channel Pruning for Deep Neural Networks Via a Relaxed Groupwise Splitting Method
5. A channel-pruned and weightbinarized convolutional neural network for keyword spotting;lyu;Advanced Computational Methods for Knowledge Engineering ICCSAMA 2019,2020