1. Zhang et al., "LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks," Proceedings of the European Conference on Computer Vision (ECCV), 2018.
2. Ye et al., "A Unified Framework of DNN Weight Pruning and Weight Clustering/Quantization Using ADMM," 2018.
3. Ye et al., "Progressive DNN Compression: A Key to Achieve Ultra-High Weight Pruning and Quantization Rates Using ADMM," 2019.
4. Wu et al., "Mixed Precision Quantization of ConvNets via Differentiable Neural Architecture Search," 2018.
5. Zhou et al., "Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights," 5th International Conference on Learning Representations (ICLR), 2017.