Affiliation:
1. Georgia Institute of Technology, Atlanta, GA, US
Abstract
Compute-in-memory (CIM) is an attractive solution to address the “memory wall” challenges for the extensive computation in deep learning hardware accelerators. In a custom ASIC design, a given chip instance is typically restricted to a specific network at runtime. However, the hardware development cycle normally lags far behind the emergence of new algorithms. Although some of the reported CIM-based architectures can adapt to different deep neural network (DNN) models, few details about the dataflow or control have been disclosed to support such a claim. An instruction set architecture (ISA) could offer high flexibility, but its complexity would be an obstacle to efficiency. In this article, a runtime-reconfigurable design methodology for CIM-based accelerators is proposed to support a class of convolutional neural networks running on one prefabricated chip instance with ASIC-like efficiency. First, several design aspects are investigated: (1) the reconfigurable weight mapping method; (2) the input side of data transmission, mainly the weight reloading; and (3) the output side of data processing, mainly the reconfigurable accumulation. Then, a system-level performance benchmark is performed for the inference of different DNN models, such as VGG-8 on the CIFAR-10 dataset and AlexNet, GoogLeNet, ResNet-18, and DenseNet-121 on the ImageNet dataset, to measure the trade-offs between runtime reconfigurability, chip area, memory utilization, throughput, and energy efficiency.
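To make the reconfigurable weight-mapping trade-off mentioned above concrete, the minimal Python sketch below tiles a convolutional layer's flattened weight matrix onto fixed-size CIM subarrays and reports how many subarrays are allocated and what fraction of them is actually used. The subarray dimensions, the example layer shapes, and the map_conv_layer helper are assumptions for illustration only; they are not the mapping algorithm described in the article.

# Illustrative sketch (not the paper's actual mapping method): tile one conv
# layer's weights onto fixed-size CIM subarrays and report utilization.
# Subarray size and layer shapes are assumed values for illustration.
from math import ceil

SUBARRAY_ROWS = 128   # assumed CIM subarray height (e.g., wordlines)
SUBARRAY_COLS = 128   # assumed CIM subarray width (e.g., bitlines)

def map_conv_layer(in_channels, kernel_h, kernel_w, out_channels):
    """Map one conv layer (flattened to a 2-D weight matrix) onto subarrays.

    Rows hold the unrolled receptive field (in_channels * kernel_h * kernel_w),
    columns hold output channels; returns (subarray count, utilization).
    """
    rows = in_channels * kernel_h * kernel_w
    cols = out_channels
    tiles = ceil(rows / SUBARRAY_ROWS) * ceil(cols / SUBARRAY_COLS)
    utilization = (rows * cols) / (tiles * SUBARRAY_ROWS * SUBARRAY_COLS)
    return tiles, utilization

if __name__ == "__main__":
    # Example layers loosely resembling VGG-style and ResNet-style shapes.
    layers = {"conv3x3_64to128": (64, 3, 3, 128),
              "conv1x1_256to512": (256, 1, 1, 512)}
    for name, shape in layers.items():
        tiles, util = map_conv_layer(*shape)
        print(f"{name}: {tiles} subarrays, {util:.1%} utilization")

Running this prints, for example, that a 3x3 layer with 64 input and 128 output channels occupies 5 subarrays at 90% utilization, which is the kind of per-layer memory-utilization figure the benchmark in the article trades off against area, throughput, and energy efficiency.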
Publisher
Association for Computing Machinery (ACM)
Subject
Electrical and Electronic Engineering, Computer Graphics and Computer-Aided Design, Computer Science Applications
Cited by
5 articles.
1. Implementation and analysis of custom instructions on RISC-V for Edge-AI applications;14th International Symposium on Highly Efficient Accelerators and Reconfigurable Technologies (HEART '24);2024-06-19
2. A survey on processing-in-memory techniques: Advances and challenges;Memories - Materials, Devices, Circuits and Systems;2023-07
3. Wurtzite and fluorite ferroelectric materials for electronic memory;Nature Nanotechnology;2023-04-27
4. An Energy-Efficient Inference Engine for a Configurable ReRAM-Based Neural Network Accelerator;IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems;2023-03
5. Evaluating HPC Kernels for Processing in Memory;Proceedings of the 2022 International Symposium on Memory Systems;2022-10-03