A Massively Parallel, Energy Efficient Programmable Accelerator for Learning and Classification

Author:

Abhinandan Majumdar1, Srihari Cadambi1, Michela Becchi1, Srimat T. Chakradhar1, Hans Peter Graf1

Affiliation:

1. NEC Laboratories America, Inc.

Abstract

Applications that use learning and classification algorithms operate on large amounts of unstructured data, and have stringent performance constraints. For such applications, the performance of general-purpose processors scales poorly with data size because of their limited support for fine-grained parallelism and absence of software-managed caches. The large intermediate data in these applications also limits achievable performance on many-core processors such as GPUs. To accelerate such learning applications, we present a programmable accelerator that can execute multiple learning and classification algorithms. To architect such an accelerator, we profile five representative workloads, and find that their computationally intensive portions can be formulated as matrix or vector operations generating large amounts of intermediate data, which are then reduced by a secondary operation such as array ranking, finding max/min, and aggregation. Our proposed accelerator, called MAPLE, has hundreds of simple processing elements (PEs) laid out in a two-dimensional grid, with two key features. First, it uses dynamic in-memory processing, where on-chip memory blocks perform the secondary reduction operations. Second, MAPLE uses banked off-chip memory, and organizes its PEs into independent groups, each with its own off-chip memory bank. These two features allow MAPLE to scale its performance with data size. We also present an Atom-based, energy-efficient heterogeneous system with MAPLE as the accelerator that satisfies the application's performance requirements at a lower system power. This article describes the MAPLE architecture, explores its design space with a simulator, illustrates how to automatically map application kernels to the hardware, and presents its performance improvement and energy benefits over classic server-based implementations.
We implement a 512-PE FPGA prototype of MAPLE and find that it is 1.5-10x faster than a 2.5 GHz quad-core Xeon processor, despite running at a modest 125 MHz clock rate. With MAPLE connected to a 1.6 GHz dual-core Atom, we show an energy improvement of 38-84% over the Xeon server coupled to a 1.3 GHz 240-core Tesla GPU.
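The kernel structure identified by the profiling, a primary matrix or vector operation whose large intermediate output is immediately collapsed by a secondary reduction (max/min, ranking, or aggregation), can be sketched in plain NumPy. This is an illustrative example of the compute pattern only, not the authors' workloads or MAPLE's programming interface; the function and variable names are hypothetical:

```python
import numpy as np

def classify(weights, x):
    """Score-and-reduce kernel typical of classification workloads."""
    # Primary operation: a matrix-vector product generates a large
    # intermediate vector of scores (one per class or support vector).
    scores = weights @ x
    # Secondary reduction: the intermediate data is immediately reduced
    # to a single result -- here, the index of the maximum score.
    return int(np.argmax(scores))

# Toy example: 4 classes, 3 features.
W = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.5, 0.5, 0.5]])
print(classify(W, np.array([0.1, 0.9, 0.2])))  # prints 1 (highest score)
```

On a general-purpose processor the intermediate `scores` array must be materialized and then re-read by the reduction; MAPLE's in-memory processing instead performs the reduction in the on-chip memory blocks as the PEs stream out partial results, so the full intermediate array need not be stored off-chip.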

Publisher

Association for Computing Machinery (ACM)

Subject

Hardware and Architecture, Information Systems, Software


Cited by 28 articles.

1. Accelerating Convolutional Neural Network by Exploiting Sparsity on GPUs;ACM Transactions on Architecture and Code Optimization;2023-07-19

2. CNNFlow: Memory-driven Data Flow Optimization for Convolutional Neural Networks;ACM Transactions on Design Automation of Electronic Systems;2023-03-19

3. OnSRAM: Efficient Inter-Node On-Chip Scratchpad Management in Deep Learning Accelerators;ACM Transactions on Embedded Computing Systems;2022-10-18

4. AI accelerator on IBM Telum processor;Proceedings of the 49th Annual International Symposium on Computer Architecture;2022-06-11

5. Efficient Machine Learning execution with Near-Data Processing;Microprocessors and Microsystems;2022-04
