Quantization of Weights of Neural Networks with Negligible Decreasing of Prediction Accuracy
Published: 2021-09-24
Volume: 50, Issue: 3, Pages: 558-569
ISSN: 2335-884X
Container-title: Information Technology and Control
Short-container-title: ITC
Authors:
Zoran Peric, Bojan Denic, Milan Savic, Milan Dincic, Darko Mihajlov
Abstract
This paper carries out quantization and compression of neural network parameters using uniform scalar quantization. The attractiveness of the uniform scalar quantizer lies in its low complexity and relatively good performance, which make it the most popular quantization model. We present a design approach for the memoryless Laplacian source with zero mean and unit variance, based on an iterative rule that uses the minimal mean-squared error distortion as the performance criterion. In addition, we derive closed-form expressions for the SQNR (signal-to-quantization-noise ratio) over a wide dynamic range of input-data variance. To demonstrate effectiveness on real data, the proposed quantizer is used to compress the weights of neural networks at bit rates from 9 to 16 bps (bits per sample) instead of the standard 32 bps full-precision representation. The impact of weight compression on the performance of the NN (neural network) is analyzed, showing good agreement with the theoretical results: the prediction accuracy of the NN decreases only negligibly, even under a high variance mismatch between the variance of the NN weights and the variance used for the quantizer design, provided that the bit rate is chosen according to the rule proposed in the paper.
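To make the abstract concrete, the following is a minimal sketch of uniform scalar quantization of weight-like data and of measuring the resulting SQNR, assuming a symmetric midrise quantizer with support [-x_max, x_max]. The helper names (uniform_quantize, sqnr_db) and the support value x_max = 10 are illustrative assumptions; the paper's iterative design rule for the optimal quantizer parameters is not reproduced here.

```python
import numpy as np

def uniform_quantize(weights, bit_rate, x_max):
    """Symmetric midrise uniform quantizer with N = 2**bit_rate levels on [-x_max, x_max]."""
    n_levels = 2 ** bit_rate
    step = 2.0 * x_max / n_levels                       # quantization step size
    clipped = np.clip(weights, -x_max, x_max - 1e-12)   # restrict to the support region
    indices = np.floor((clipped + x_max) / step)        # cell index of each sample
    return -x_max + (indices + 0.5) * step              # midpoint representation level

def sqnr_db(original, quantized):
    """Signal-to-quantization-noise ratio in dB."""
    signal = np.mean(original ** 2)
    noise = np.mean((original - quantized) ** 2)
    return 10.0 * np.log10(signal / noise)

# Memoryless Laplacian test source with zero mean and unit variance,
# matching the source model used for the quantizer design in the abstract.
rng = np.random.default_rng(0)
w = rng.laplace(loc=0.0, scale=1.0 / np.sqrt(2.0), size=100_000)  # variance = 2*scale**2 = 1
for r in (9, 12, 16):                                  # bit rates considered in the paper
    wq = uniform_quantize(w, bit_rate=r, x_max=10.0)   # x_max chosen ad hoc for this sketch
    print(f"R = {r:2d} bps  ->  SQNR = {sqnr_db(w, wq):6.2f} dB")
```

In this toy setup, raising the bit rate by one bit widens the codebook from N to 2N levels and, in the granular region, improves SQNR by roughly 6 dB, which is the usual rate-distortion behavior of uniform scalar quantizers.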
Publisher
Kaunas University of Technology (KTU)
Subject
Electrical and Electronic Engineering, Computer Science Applications, Control and Systems Engineering
Cited by
1 article.