Affiliations:
1. Department of Computer Engineering, University of Zanjan, Zanjan, Iran
2. Department of Electrical and Computer Engineering, University of Zanjan, Zanjan, Iran
3. Neurozentrum Department, Universitätsklinikum Freiburg, Freiburg, Germany
Abstract
Recent advances in machine learning driven by Deep Neural Networks (DNNs) have been significant. While demonstrating high accuracy, DNNs require a huge number of parameters and computations, which leads to high memory usage and energy consumption. As a result, deploying DNNs on devices with constrained hardware resources poses significant challenges. To overcome this, various compression techniques have been widely employed to optimize DNN accelerators. A promising approach is quantization, in which full-precision values are stored at low bit-width precision. Quantization not only reduces memory requirements but also replaces high-cost operations with low-cost ones. DNN quantization offers flexibility and efficiency in hardware design, making it a widely adopted technique. Since quantization has been extensively utilized in previous works, an integrated report that provides an understanding, analysis, and comparison of different quantization approaches is needed. Consequently, we present a comprehensive survey of quantization concepts and methods, with a focus on image classification. We describe clustering-based quantization methods and explore the use of a scale factor parameter for approximating full-precision values. Moreover, we thoroughly review the training of a quantized DNN, including the use of a straight-through estimator and quantization regularization. We explain how floating-point operations are replaced with low-cost bitwise operations in a quantized DNN and discuss the sensitivity of different layers to quantization. Furthermore, we highlight the evaluation metrics for quantization methods and the important benchmarks in the image classification task. We also report the accuracy of state-of-the-art methods on CIFAR-10 and ImageNet. This article aims to familiarize readers with the basic and advanced concepts of quantization, introduce important works in DNN quantization, and highlight challenges for future research in this field.
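To make the scale-factor and straight-through-estimator (STE) ideas mentioned in the abstract concrete, the following is a minimal PyTorch sketch of a uniform symmetric quantizer trained with an STE. The class name QuantSTE, the symmetric clipping range, and the fixed (non-learned) scale are illustrative assumptions for this sketch, not the specific formulation surveyed in the article.

    import torch

    class QuantSTE(torch.autograd.Function):
        # Forward: clip x to [-scale, scale], then round onto a uniform
        # b-bit symmetric grid. Backward: straight-through estimator.
        @staticmethod
        def forward(ctx, x, scale, bits):
            ctx.save_for_backward(x)
            ctx.scale = scale
            levels = 2 ** (bits - 1) - 1
            q = torch.clamp(x, -scale, scale)
            return torch.round(q / scale * levels) / levels * scale

        @staticmethod
        def backward(ctx, grad_output):
            (x,) = ctx.saved_tensors
            # round() has zero gradient almost everywhere, so the STE
            # passes the gradient through unchanged inside the clipping
            # range and zeroes it outside.
            mask = (x.abs() <= ctx.scale).to(grad_output.dtype)
            return grad_output * mask, None, None

    w = torch.randn(8, requires_grad=True)
    w_q = QuantSTE.apply(w, 1.0, 4)   # 4-bit symmetric quantization
    w_q.sum().backward()              # gradients flow to w via the STE

The key design point the sketch illustrates is that the non-differentiable rounding step only affects the forward pass; training remains possible because the backward pass pretends the quantizer is the identity within the clipping range.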
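Likewise, the replacement of floating-point arithmetic with bitwise operations can be illustrated for the extreme 1-bit case, where a dot product over {-1, +1} vectors reduces to an XNOR followed by a popcount. The plain-Python sketch below is written under that assumption; the function name binary_dot and the bit encoding (bit set = +1, bit clear = -1) are hypothetical choices for illustration.

    def binary_dot(a_bits, b_bits, n):
        """Dot product of two length-n vectors over {-1, +1}, where bit i
        of each integer encodes +1 (bit set) or -1 (bit clear)."""
        xnor = ~(a_bits ^ b_bits) & ((1 << n) - 1)  # 1 where signs agree
        matches = bin(xnor).count("1")              # popcount
        return 2 * matches - n                      # matches - mismatches

    # [-1, +1, +1, -1] -> 0b0110 and [-1, +1, +1, +1] -> 0b1110 (LSB first)
    print(binary_dot(0b0110, 0b1110, 4))  # 2, matching the float dot product

Since matches - mismatches = matches - (n - matches) = 2 * matches - n, a single XNOR and popcount replace n floating-point multiply-accumulates, which is the source of the hardware savings the abstract refers to.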
Publisher
Association for Computing Machinery (ACM)
Subject
Artificial Intelligence, Theoretical Computer Science
5. Xiaodong Liu Yu Wang Jianshu Ji Hao Cheng Xueyun Zhu Emmanuel Awa Pengcheng He Weizhu Chen Hoifung Poon Guihong Cao and Jianfeng Gao. 2020. The microsoft toolkit of multi-task deep neural networks for natural language understanding. arXiv:2002.07972 2020. http://arxiv.org/abs/2002.07972