AMED: Automatic Mixed-Precision Quantization for Edge Devices

Authors:

Moshe Kimhi 1, Tal Rozen 1, Avi Mendelson 1, Chaim Baskin 1

Affiliation:

1. Computer Science Department, Technion IIT, Haifa 3200003, Israel

Abstract

Quantized neural networks are well known for reducing latency, power consumption, and model size without significantly harming performance, which makes them highly suitable for systems with limited resources and a low power budget. Mixed-precision quantization offers better utilization of customized hardware that supports arithmetic operations at different bitwidths. Existing quantization methods either minimize the compression loss for a desired compression rate or optimize a proxy for a specified property of the model (such as FLOPs or model size); both strategies can be inefficient once the model is deployed on specific hardware. More importantly, these methods assume that the loss manifold of a quantized model has a global minimum that coincides with that of its full-precision counterpart. Challenging this assumption, we argue that the optimal minimum changes as the precision changes, and that quantization is therefore better viewed as a random process. This observation lays the foundation for a different approach to quantizing neural networks: during training, the model is quantized to different precisions, the bit allocation is treated as a Markov decision process, and an optimal bitwidth allocation is found by measuring specified behaviors on a specific device via direct signals from the particular hardware architecture. In this way, we avoid the assumption that the loss behaves the same way for a quantized model. A comprehensive evaluation shows that Automatic Mixed-Precision Quantization for Edge Devices (dubbed AMED) is superior to current state-of-the-art schemes in terms of the trade-off between neural network accuracy and hardware efficiency.
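To make the idea of hardware-guided bitwidth allocation concrete, below is a minimal Python sketch of a stochastic search over per-layer bitwidths, where the state is the current allocation, an action changes one layer's precision, and the reward combines an accuracy proxy with a measured latency signal. This is our own illustrative assumption of such a loop, not the authors' exact algorithm: measure_latency, accuracy_proxy, the acceptance rule, and all constants are hypothetical placeholders for on-device timing, validation accuracy, and the paper's learned policy.

import random

# Candidate precisions per layer and a toy model depth (assumed values).
BITWIDTHS = [2, 4, 8]
NUM_LAYERS = 4

def measure_latency(alloc):
    """Placeholder for a direct hardware signal (e.g., on-device timing).
    Toy model: latency grows superlinearly with bitwidth."""
    return sum(b ** 1.5 for b in alloc)

def accuracy_proxy(alloc):
    """Placeholder for validation accuracy of the quantized model.
    Toy model: higher precision helps, saturating at 8 bits."""
    return sum(min(b, 8) for b in alloc) / (8 * len(alloc))

def reward(alloc, lam=0.01):
    # Trade off the accuracy proxy against measured latency.
    return accuracy_proxy(alloc) - lam * measure_latency(alloc)

def search(steps=500, seed=0):
    rng = random.Random(seed)
    alloc = [8] * NUM_LAYERS                # start from a uniform 8-bit model
    best, best_r = list(alloc), reward(alloc)
    for _ in range(steps):
        layer = rng.randrange(NUM_LAYERS)   # action: pick a layer ...
        cand = list(alloc)
        cand[layer] = rng.choice(BITWIDTHS) # ... and propose a new bitwidth
        # Accept improving transitions; occasionally accept worse ones
        # so the search keeps exploring the allocation space.
        if reward(cand) >= reward(alloc) or rng.random() < 0.1:
            alloc = cand
        if reward(alloc) > best_r:
            best, best_r = list(alloc), reward(alloc)
    return best, best_r

if __name__ == "__main__":
    alloc, r = search()
    print("best bitwidth allocation:", alloc, "reward:", round(r, 4))

In the paper's setting, the two placeholder functions would be replaced by signals from the target edge device and by the quantization-aware training loop, so the search optimizes what the hardware actually exhibits rather than a proxy such as FLOPs.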

Funder

Israel Innovation Authority, Nofar grant

Publisher

MDPI AG

