Authors: Gian Singh, Ankit Wagle, Sunil Khatri, Sarma Vrudhula
Abstract
This paper presents a DRAM-based processing-in-memory (PIM) architecture called CIDAN-XE. It contains a novel computing unit, the neuron processing element (NPE). Each NPE can perform a variety of logical, arithmetic, relational, and predicate operations on multi-bit operands, and can be reconfigured to switch operations at run-time without increasing the overall latency or power of the operation. Since NPEs occupy a small area and operate at very high frequencies, they can be integrated inside the DRAM without disrupting its organization or timing constraints. Simulation results on a set of operations such as AND, OR, XOR, addition, and multiplication show that CIDAN-XE achieves an average throughput improvement of 72X/5.4X and an energy efficiency improvement of 244X/29X over CPU/GPU. To further demonstrate the benefits of CIDAN-XE, we implement several convolutional neural networks and show that CIDAN-XE improves throughput and energy efficiency over the latest PIM architectures.
Cited by 2 articles.