Affiliations:
1. University of Texas, USA
2. BITS Pilani Goa, India
3. BITS Pilani, India
4. University of Utah, USA
Abstract
Block random access memories (BRAMs) are the storehouses of FPGAs, providing extensive on-chip memory bandwidth to the compute units implemented using logic blocks and digital signal processing slices. We propose modifying BRAMs to convert them to CoMeFa (Compute-in-Memory Blocks for FPGAs) random access memories (RAMs). These RAMs provide highly parallel compute-in-memory by combining computation and storage capabilities in one block. CoMeFa RAMs utilize the true dual-port nature of FPGA BRAMs and contain multiple configurable single-bit bit-serial processing elements. CoMeFa RAMs can compute with any precision, which is extremely important for applications like deep learning (DL). Adding CoMeFa RAMs to FPGAs significantly increases their compute density while also reducing data movement. We explore and propose two architectures of these RAMs: CoMeFa-D (optimized for delay) and CoMeFa-A (optimized for area). Unlike existing proposals, CoMeFa RAMs do not require changes to the underlying SRAM technology, such as simultaneously activating multiple wordlines on the same port, and are therefore practical to implement. CoMeFa RAMs are especially suitable for parallel and compute-intensive applications like DL, but these versatile blocks also find use in diverse domains such as signal processing and databases. By augmenting an Intel Arria 10-like FPGA with CoMeFa-D (CoMeFa-A) RAMs at a cost of 3.8% (1.2%) area, and with algorithmic improvements and efficient mapping, we observe a geomean speedup of 2.55x (1.85x) across microbenchmarks from various applications and a geomean speedup of up to 2.5x across multiple deep neural networks. Replacing all or some BRAMs with CoMeFa RAMs in FPGAs can make them better accelerators of DL workloads.
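To make the bit-serial, precision-agnostic compute model concrete, the sketch below (in Python; not from the paper, and the array width, operand width, and names such as bitserial_add are illustrative assumptions) emulates what a row of single-bit processing elements does: operands are stored transposed, one bit-plane per RAM row, and every column performs one full-adder step per cycle, so an N-bit add across all columns finishes in roughly N cycles regardless of precision. The numpy vectorization here only models the spatial parallelism that the real block achieves in hardware.

# Minimal sketch of bit-serial compute-in-memory, assuming a hypothetical
# 160-column array holding 8-bit operands in transposed (bit-plane) layout.
import numpy as np

def bitserial_add(a_bits: np.ndarray, b_bits: np.ndarray) -> np.ndarray:
    """Add operand arrays stored bit-serially.

    a_bits, b_bits: shape (N, C) of 0/1 values; row i holds bit i (LSB first)
    of the operand in each of the C columns. Returns sum bits, shape (N+1, C).
    """
    n_bits, n_cols = a_bits.shape
    carry = np.zeros(n_cols, dtype=np.uint8)       # one carry register per column PE
    out = np.zeros((n_bits + 1, n_cols), dtype=np.uint8)
    for i in range(n_bits):                        # one cycle per bit-plane
        a, b = a_bits[i], b_bits[i]
        out[i] = a ^ b ^ carry                     # full-adder sum, all columns in parallel
        carry = (a & b) | (carry & (a ^ b))        # full-adder carry
    out[n_bits] = carry                            # final carry-out bit-plane
    return out

# Usage: 160 columns of 8-bit operands -> 160 parallel adds in ~8 cycles.
rng = np.random.default_rng(0)
vals_a = rng.integers(0, 256, size=160)
vals_b = rng.integers(0, 256, size=160)
to_planes = lambda v, n: np.array([(v >> i) & 1 for i in range(n)], dtype=np.uint8)
s = bitserial_add(to_planes(vals_a, 8), to_planes(vals_b, 8))
assert np.array_equal(sum(s[i].astype(int) << i for i in range(9)), vals_a + vals_b)

Because each column's processing element is only one bit wide, changing the operand precision changes only the number of cycles, not the hardware, which is why such blocks can serve arbitrary-precision DL workloads.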
Funder
National Science Foundation
Intel Rising Star Faculty Award
Publisher
Association for Computing Machinery (ACM)
Cited by
3 articles.
1. The BRAM is the Limit: Shattering Myths, Shaping Standards, and Building Scalable PIM Accelerators. In 2024 IEEE 32nd Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM), May 5, 2024.
2. Efficient Approaches for GEMM Acceleration on Leading AI-Optimized FPGAs. In 2024 IEEE 32nd Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM), May 5, 2024.
3. An All-digital Compute-in-memory FPGA Architecture for Deep Learning Acceleration. ACM Transactions on Reconfigurable Technology and Systems, February 12, 2024.