Affiliation:
1. Carnegie Mellon University, Forbes Avenue, Pittsburgh, PA, USA
Abstract
Hardware implementations of deep neural networks (DNNs) have been adopted in many systems because of their higher classification speed. However, while larger DNNs may offer better accuracy, they require significant energy and area, limiting their wide adoption. The energy consumption of DNNs is driven by both memory accesses and computation. Binarized neural networks (BNNs), as a tradeoff between accuracy and energy consumption, achieve substantial energy reduction and maintain good accuracy for large DNNs due to their regularization effect. However, BNNs show poor accuracy when a smaller DNN configuration is adopted. In this article, we propose a new DNN architecture, LightNN, which replaces multiplications with a single shift, or with a constrained number of shifts and adds. Our theoretical analysis shows that LightNNs maintain accuracy while dramatically reducing storage and energy requirements. For a fixed DNN configuration, LightNNs achieve better accuracy than BNNs at a slight energy increase, yet are more energy efficient than conventional DNNs with only slightly lower accuracy. Therefore, LightNNs provide hardware designers with more options to trade off accuracy and energy. Moreover, for large DNN configurations, LightNNs have a regularization effect that makes them more accurate than conventional DNNs. These conclusions are verified by experiments on the MNIST and CIFAR-10 datasets for different DNN configurations. Our FPGA implementations of conventional DNNs and LightNNs confirm all theoretical and simulation results, and show that LightNNs reduce latency and use fewer FPGA resources than conventional DNN architectures.
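The core idea of replacing a multiplication with a constrained number of shifts and adds can be sketched as follows. This is an illustrative approximation under assumed details, not the authors' implementation; the helper names `quantize_weight` and `shift_add_multiply` are hypothetical. A weight is greedily approximated by a signed sum of at most k powers of two, so that multiplying by it reduces to k shift-and-add operations in hardware:

```python
import math

def quantize_weight(w, k):
    """Greedily approximate w as a signed sum of at most k powers of two.

    Returns a list of (sign, exponent) terms; illustrative only.
    """
    terms = []
    residual = w
    for _ in range(k):
        if residual == 0:
            break
        sign = 1 if residual > 0 else -1
        # Nearest power-of-two exponent (geometric rounding in log domain).
        exp = round(math.log2(abs(residual)))
        terms.append((sign, exp))
        residual -= sign * (2 ** exp)
    return terms

def shift_add_multiply(x, terms):
    """Multiply x by the quantized weight using only the stored terms.

    In hardware, sign * x * 2**exp is a barrel shift of x by exp bits
    (plus a sign), so this sum costs k shifts and k-1 adds, no multiplier.
    """
    return sum(sign * x * (2 ** exp) for sign, exp in terms)

# Example: w = 0.75 is exactly 2**0 - 2**-2, so k = 2 terms recover it.
terms = quantize_weight(0.75, 2)
print(terms)                       # [(1, 0), (-1, -2)]
print(shift_add_multiply(8, terms))  # 6.0, i.e. 8 * 0.75
```

With k = 1 this degenerates to a single shift per weight (the most constrained LightNN variant); larger k narrows the accuracy gap to conventional multiplier-based DNNs at the cost of extra adds.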
Funder
National Science Foundation
Publisher
Association for Computing Machinery (ACM)
References
48 articles.
Cited by
10 articles.