CoMeFa: Deploying Compute-in-Memory on FPGAs for Deep Learning Acceleration

Authors:

Aman Arora (1), Atharva Bhamburkar (2), Aatman Borda (3), Tanmay Anand (3), Rishabh Sehgal (1), Bagus Hanindhito (1), Pierre-Emmanuel Gaillardon (4), Jaydeep Kulkarni (1), Lizy K. John (1)

Affiliations:

1. University of Texas, USA

2. BITS Pilani Goa, India

3. BITS Pilani, India

4. University of Utah, USA

Abstract

Block random access memories (BRAMs) are the storage houses of FPGAs, providing extensive on-chip memory bandwidth to the compute units implemented using logic blocks and digital signal processing slices. We propose modifying BRAMs to convert them into CoMeFa (Compute-in-Memory Blocks for FPGAs) random access memories (RAMs). These RAMs provide highly parallel compute-in-memory by combining computation and storage capabilities in one block. CoMeFa RAMs utilize the true dual-port nature of FPGA BRAMs and contain multiple configurable single-bit bit-serial processing elements. CoMeFa RAMs can compute with any precision, which is extremely important for applications like deep learning (DL). Adding CoMeFa RAMs to FPGAs significantly increases their compute density while also reducing data movement. We explore and propose two architectures of these RAMs: CoMeFa-D (optimized for delay) and CoMeFa-A (optimized for area). Compared to existing proposals, CoMeFa RAMs do not require changes to the underlying static RAM technology, such as simultaneously activating multiple wordlines on the same port, and are therefore practical to implement. CoMeFa RAMs are especially suitable for parallel and compute-intensive applications like DL, but these versatile blocks also find use in diverse domains such as signal processing and databases, among others. By augmenting an Intel Arria 10-like FPGA with CoMeFa-D (CoMeFa-A) RAMs at the cost of 3.8% (1.2%) area, and with algorithmic improvements and efficient mapping, we observe a geomean speedup of 2.55× (1.85×) across microbenchmarks from various applications and a geomean speedup of up to 2.5× across multiple deep neural networks. Replacing all or some BRAMs with CoMeFa RAMs in FPGAs can make them better accelerators of DL workloads.
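To make the bit-serial operation of the processing elements concrete, the sketch below is a minimal behavioral model in Python. It assumes a transposed data layout in which bit i of every operand is stored in RAM row i, with one operand element per column, and one result bit produced per cycle. The function name, data layout, and cycle model are illustrative assumptions, not the authors' circuit or RTL; in the actual block, all columns operate in parallel in hardware.

```python
# Behavioral sketch (assumption, not the CoMeFa RTL) of bit-serial,
# column-parallel addition by single-bit processing elements under a RAM array.
# Operands are stored transposed: bit i of every operand lives in row i,
# one operand element per column (bitline pair).

def bitserial_add(a_rows, b_rows, num_cols, precision):
    """Add two operand arrays bit-serially, one bit position per cycle.

    a_rows / b_rows: lists of rows; each row holds num_cols bits (LSB first).
    Returns the sum in the same transposed layout (precision + 1 rows).
    """
    carry = [0] * num_cols              # one carry flip-flop per column PE
    result_rows = []
    for i in range(precision):          # one cycle per bit position
        row = []
        for c in range(num_cols):       # every column PE works in parallel in hardware
            a, b = a_rows[i][c], b_rows[i][c]
            row.append(a ^ b ^ carry[c])                      # full-adder sum bit
            carry[c] = (a & b) | (carry[c] & (a ^ b))         # full-adder carry
        result_rows.append(row)         # written back into a free row of the RAM
    result_rows.append(carry[:])        # final carry-out row
    return result_rows

# Example: add 5 + 3 in each of 4 columns at 4-bit precision.
to_rows = lambda v, n, cols: [[(v >> i) & 1] * cols for i in range(n)]
res = bitserial_add(to_rows(5, 4, 4), to_rows(3, 4, 4), num_cols=4, precision=4)
value = sum(row[0] << i for i, row in enumerate(res))
print(value)  # 8 in each column
```

Because the loop length is just the operand width, the same hardware handles any precision, which is the property the abstract highlights for deep learning workloads.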

Funder

National Science Foundation

Intel Rising Star Faculty

Publisher

Association for Computing Machinery (ACM)

Subject

General Computer Science

Cited by 3 articles:

1. The BRAM is the Limit: Shattering Myths, Shaping Standards, and Building Scalable PIM Accelerators. In 2024 IEEE 32nd Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM), May 2024.

2. Efficient Approaches for GEMM Acceleration on Leading AI-Optimized FPGAs. In 2024 IEEE 32nd Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM), May 2024.

3. An All-digital Compute-in-memory FPGA Architecture for Deep Learning Acceleration. ACM Transactions on Reconfigurable Technology and Systems, February 2024.
