EIE: Efficient Inference Engine on Compressed Deep Neural Network

Author:

Song Han 1, Xingyu Liu 1, Huizi Mao 1, Jing Pu 1, Ardavan Pedram 1, Mark A. Horowitz 1, William J. Dally 2

Affiliation:

1. Stanford University

2. Stanford University and NVIDIA

Abstract

State-of-the-art deep neural networks (DNNs) have hundreds of millions of connections and are both computationally and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources and power budgets. While custom hardware helps the computation, fetching weights from DRAM is two orders of magnitude more expensive than ALU operations and dominates the required power. The previously proposed "Deep Compression" makes it possible to fit large DNNs (AlexNet and VGGNet) fully in on-chip SRAM. This compression is achieved by pruning the redundant connections and having multiple connections share the same weight. We propose an energy-efficient inference engine (EIE) that performs inference on this compressed network model and accelerates the resulting sparse matrix-vector multiplication with weight sharing. Going from DRAM to SRAM gives EIE a 120× energy saving; exploiting sparsity saves 10×; weight sharing gives 8×; skipping zero activations from ReLU saves another 3×. Evaluated on nine DNN benchmarks, EIE is 189× and 13× faster than CPU and GPU implementations of the same DNN without compression. EIE has a processing power of 102 GOPS working directly on a compressed network, corresponding to 3 TOPS on an uncompressed network, and processes the FC layers of AlexNet at 1.88×10⁴ frames/sec with a power dissipation of only 600 mW. It is 24,000× and 3,400× more energy efficient than a CPU and a GPU, respectively. Compared with DaDianNao, EIE has 2.9×, 19×, and 3× better throughput, energy efficiency, and area efficiency.
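The core operation the abstract describes is a sparse matrix-vector product in which each stored weight is a small codebook index rather than a full-precision value, and zero activations are skipped entirely. A minimal Python sketch of that idea is below, assuming a plain CSC-style layout; the function name, variable names, and toy values are illustrative assumptions, and the paper's actual encoding (relative row indexing, interleaved across processing elements) is more involved.

```python
def shared_weight_spmv(n_rows, col_ptr, row_idx, weight_idx, codebook, x):
    """Compute y = W @ x, with W stored column-wise (CSC-like).

    Each nonzero of W is a (row index, codebook index) pair, so a
    4-bit codebook index can stand in for a 32-bit weight value.
    This is an illustrative sketch, not the paper's exact format.
    """
    y = [0.0] * n_rows
    for j, a in enumerate(x):
        if a == 0.0:
            continue  # dynamic sparsity: skip zero activations from ReLU
        for k in range(col_ptr[j], col_ptr[j + 1]):
            y[row_idx[k]] += codebook[weight_idx[k]] * a
    return y

# Toy 2x3 weight matrix with 3 nonzeros drawn from a 2-entry codebook
# (all values made up for illustration).
codebook = [1.0, 2.0]
col_ptr = [0, 1, 2, 3]   # nonzero ranges per column
row_idx = [1, 0, 1]      # output row of each nonzero
weight_idx = [0, 1, 1]   # codebook index of each nonzero
x = [3.0, 0.0, 1.0]      # column 1 is skipped (zero activation)
print(shared_weight_spmv(2, col_ptr, row_idx, weight_idx, codebook, x))  # [0.0, 5.0]
```

The savings come from never touching columns whose activation is zero and from reading weights as narrow indices into a small shared table kept in SRAM.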

Publisher

Association for Computing Machinery (ACM)

Cited by 552 articles.
