Affiliation:
1. Georgia Institute of Technology, Atlanta, GA, USA
Abstract
Deep neural networks (DNNs) have demonstrated highly promising results across computer vision and speech recognition, and are becoming foundational for ubiquitous AI. The computational complexity of these algorithms and the need for high energy efficiency have led to a surge in research on hardware accelerators. To reduce the latency and energy costs of accessing DRAM, most DNN accelerators are spatial in nature, with hundreds of processing elements (PEs) operating in parallel and communicating with each other directly. DNNs are evolving at a rapid rate, and it is common to have convolution, recurrent, pooling, and fully-connected layers with varying input and filter sizes in the most recent topologies. They may be dense or sparse. They can also be partitioned in myriad ways (within and across layers) to exploit data reuse (weights and intermediate outputs). All of the above can lead to different dataflow patterns within the accelerator substrate. Unfortunately, most DNN accelerators support only fixed dataflow patterns internally, as they perform a careful co-design of the PEs and the network-on-chip (NoC). In fact, the majority of them are optimized only for traffic within a convolutional layer. This makes it challenging to map arbitrary dataflows onto the fabric efficiently, and can lead to underutilization of the available compute resources. DNN accelerators need to be programmable to enable mass deployment. For them to be programmable, they need to be configurable internally to support the various dataflow patterns that could be mapped onto them. To address this need, we present MAERI, a DNN accelerator built with a set of modular and configurable building blocks that can easily support myriad DNN partitions and mappings by appropriately configuring tiny switches. MAERI provides 8-459% better utilization across multiple dataflow mappings over baselines with rigid NoC fabrics.
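A minimal sketch (not from the paper; all names and numbers here are illustrative assumptions) of why rigid mappings underutilize a spatial accelerator: if a fixed-size PE array distributes one layer dimension in equal-size waves, any layer whose size is not a multiple of the array width leaves PEs idle in the final wave.

```python
import math

def pe_utilization(work: int, num_pes: int) -> float:
    """Fraction of PE-cycles doing useful work when `work` units
    are spread over `num_pes` PEs in equal-size sequential waves."""
    waves = math.ceil(work / num_pes)   # sequential passes over the array
    return work / (waves * num_pes)     # useful PE-cycles / total PE-cycles

# A hypothetical 64-PE array maps a 64-wide layer dimension perfectly...
print(pe_utilization(64, 64))   # 1.0
# ...but a 96-wide dimension from the next layer idles a third of the array.
print(pe_utilization(96, 64))   # 0.75
```

This toy model only captures load imbalance along one dimension; the abstract's point is that configurable interconnect switches let the fabric re-partition work across layer shapes instead of paying this idle cost.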
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Graphics and Computer-Aided Design, Software
Cited by
101 articles.