Affiliation:
1. Institute of Computing Technology (ICT), CAS, China and University of CAS, China
2. EPFL, Switzerland
3. Institute of Computing Technology (ICT), CAS, China
4. Inria, France
Abstract
In recent years, neural network accelerators have been shown to achieve both high energy efficiency and high performance for a broad application scope within the important category of recognition and mining applications. Still, both the energy efficiency and performance of such accelerators remain limited by memory accesses. In this paper, we focus on image applications, arguably the most important category among recognition and mining applications. The state-of-the-art neural networks for these applications are Convolutional Neural Networks (CNNs), and they have an important property: weights are shared among many neurons, considerably reducing the neural network memory footprint. This property allows an entire CNN to be mapped within an SRAM, eliminating all DRAM accesses for weights. By further hoisting this accelerator next to the image sensor, it is possible to eliminate all remaining DRAM accesses, i.e., for inputs and outputs.
In this paper, we propose such a CNN accelerator, placed next to a CMOS or CCD sensor. The absence of DRAM accesses, combined with careful exploitation of the specific data access patterns within CNNs, allows us to design an accelerator that is 60× more energy efficient than the previous state-of-the-art neural network accelerator. We present a full design down to the layout at 65 nm, with a modest footprint of 4.86 mm² and a power consumption of only 320 mW, yet still about 30× faster than high-end GPUs.
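The abstract's key observation is that convolutional weight sharing shrinks a network's weight footprint enough to fit in on-chip SRAM. A minimal sketch of the arithmetic behind that claim (the layer sizes below are hypothetical examples, not taken from the paper):

```python
def conv_params(in_ch, out_ch, k):
    """Weights in a conv layer: one k x k kernel per (input, output) channel pair,
    shared across every spatial position of the feature map."""
    return in_ch * out_ch * k * k

def fc_params(in_neurons, out_neurons):
    """Weights in a fully connected layer: one distinct weight per connection."""
    return in_neurons * out_neurons

# Hypothetical layer: 32x32 feature maps, 16 input and 16 output channels.
h = w = 32
in_ch = out_ch = 16

shared = conv_params(in_ch, out_ch, k=3)              # 16*16*3*3 = 2,304 weights
unshared = fc_params(in_ch * h * w, out_ch * h * w)   # 16384*16384 ≈ 268 million weights

print(f"conv: {shared} weights, fc: {unshared} weights, "
      f"ratio ≈ {unshared // shared}x")
```

Because the kernel is reused at every spatial position, the conv layer needs five orders of magnitude fewer weights than a fully connected layer of the same input/output size in this example, which is what makes an all-SRAM mapping plausible.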
Publisher
Association for Computing Machinery (ACM)
References: 61 articles.
Cited by: 57 articles.