NEURAghe

Author:

Paolo Meloni¹, Alessandro Capotondi², Gianfranco Deriu³, Michele Brian⁴, Francesco Conti⁵, Davide Rossi², Luigi Raffo¹, Luca Benini⁵

Affiliation:

1. Università di Cagliari, Italy

2. Università di Bologna, Italy

3. Università di Cagliari, Italy and T3LAB, Bologna, Italy

4. T3LAB, Bologna, Italy

5. Università di Bologna, Italy and ETH Zurich, Switzerland

Abstract

Deep convolutional neural networks (CNNs) obtain outstanding results in tasks that require human-level understanding of data, like image or speech recognition. However, their computational load is significant, motivating the development of CNN-specialized accelerators. This work presents NEURAghe, a flexible and efficient hardware/software solution for the acceleration of CNNs on Zynq SoCs. NEURAghe leverages the synergistic usage of Zynq ARM cores and of a powerful and flexible Convolution-Specific Processor deployed on the reconfigurable logic. The Convolution-Specific Processor embeds both a convolution engine and a programmable soft core, releasing the ARM processors from most of the supervision duties and allowing the accelerator to be controlled by software at an ultra-fine granularity. This methodology opens the way for cooperative heterogeneous computing: while the accelerator takes care of the bulk of the CNN workload, the ARM cores can seamlessly execute hard-to-accelerate parts of the computational graph, taking advantage of the NEON vector engines to further speed up computation. Through the companion NeuDNN SW stack, NEURAghe supports end-to-end CNN-based classification with a peak performance of 169 GOps/s and an energy efficiency of 17 GOps/W. Thanks to our heterogeneous computing model, our platform improves upon the state-of-the-art, achieving a frame rate of 5.5 frames per second (fps) on the end-to-end execution of VGG-16 and 6.6 fps on ResNet-18.
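The cooperative heterogeneous model described in the abstract can be illustrated with a minimal sketch. All names below are hypothetical illustrations, not the actual NeuDNN API: each layer of the computational graph is dispatched either to the convolution accelerator on the reconfigurable logic or to the ARM/NEON path, depending on its type.

```python
# Illustrative sketch of layer-wise heterogeneous dispatch (hypothetical names,
# not the actual NeuDNN API): convolutional layers are offloaded to the FPGA
# accelerator, while hard-to-accelerate layers stay on the ARM cores.

def run_on_accelerator(layer, x):
    # Placeholder for offloading a convolution to the Convolution-Specific Processor.
    return f"conv({x})"

def run_on_cpu(layer, x):
    # Placeholder for executing a layer on the ARM cores (e.g. using NEON).
    return f"{layer['type']}({x})"

def execute_graph(layers, x):
    # Walk the graph layer by layer, picking the execution target per layer.
    for layer in layers:
        if layer["type"] == "conv":
            x = run_on_accelerator(layer, x)
        else:
            x = run_on_cpu(layer, x)
    return x

# A toy four-layer network: conv -> pool -> conv -> fully connected.
net = [{"type": "conv"}, {"type": "pool"}, {"type": "conv"}, {"type": "fc"}]
print(execute_graph(net, "img"))  # fc(conv(pool(conv(img))))
```

Because the accelerator is itself controlled by a soft core, this per-layer dispatch can happen at fine granularity without involving the ARM cores in low-level supervision.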

Funder

European Union's HORIZON 2020 Research and Innovation programme

Publisher

Association for Computing Machinery (ACM)

Subject

General Computer Science


Cited by 47 articles.
