Energy-efficient In-Memory Address Calculation

Authors:

Amirreza Yousefzadeh¹, Jan Stuijt¹, Martijn Hijdra¹, Hsiao-Hsuan Liu², Anteneh Gebregiorgis³, Abhairaj Singh³, Said Hamdioui³, Francky Catthoor²

Affiliation:

1. IMEC, Netherlands

2. IMEC, Belgium

3. Delft University of Technology, Netherlands

Abstract

Computation-in-Memory (CIM) is an emerging computing paradigm that addresses the memory-bottleneck challenges in computer architecture. A CIM unit cannot fully replace a general-purpose processor, but it significantly reduces the amount of data transferred between a traditional memory unit and the processor by enriching the transferred information. Data transactions between the processor and memory consist of memory access addresses and values. While the main focus in the field of in-memory computing is applying computations to the contents of the memory (the values), the importance of CPU-CIM address transactions, and of calculating the sequence of access addresses for data-dominated applications, is generally overlooked. However, the bits spent on "addresses" can easily exceed half of the total transferred bits in many applications. In this article, we propose a circuit, the Address Calculation Accelerator, that performs address calculation inside the memory. Our simulation results show that calculating address sequences inside the memory (instead of in the CPU) significantly reduces CPU-CIM address transactions and therefore yields considerable savings in energy, latency, and bus traffic. For the chosen application of guided image filtering, in-memory address calculation results in almost two orders of magnitude reduction in address transactions over the memory bus.
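The abstract's key idea, generating the access-address sequence inside the memory instead of streaming every address from the CPU, can be illustrated with a minimal sketch. This is a hypothetical affine nested-loop address generator, not the paper's actual circuit or interface; the function name and the descriptor fields (`base`, `bounds`, `strides`) are assumptions made for illustration.

```python
# Hypothetical sketch (not the paper's interface): instead of the CPU sending
# every access address over the bus, it sends one small descriptor
# (base address, loop bounds, strides) and a memory-side generator
# enumerates the access addresses locally.

def address_stream(base, bounds, strides):
    """Yield the addresses of an affine nested loop, innermost level last."""
    index = [0] * len(bounds)
    while True:
        yield base + sum(i * s for i, s in zip(index, strides))
        # Advance the loop counters like an odometer.
        for level in reversed(range(len(bounds))):
            index[level] += 1
            if index[level] < bounds[level]:
                break
            index[level] = 0
        else:
            return  # every loop level wrapped: the sequence is exhausted

# Example: a 2x3 access pattern over a row-major array with row stride 10.
addrs = list(address_stream(base=100, bounds=(2, 3), strides=(10, 1)))
print(addrs)  # [100, 101, 102, 110, 111, 112]
```

Here a five-word descriptor (one base, two bounds, two strides) replaces six address transactions; for a sliding-window workload such as guided image filtering, the address stream grows to millions of entries while the descriptor stays a handful of words, which is where a reduction of roughly two orders of magnitude in bus address traffic can come from.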

Funder

EU H2020

DAIS (KDT JU)

Publisher

Association for Computing Machinery (ACM)

Subject

Hardware and Architecture, Information Systems, Software


Cited by 5 articles.

1. COMPAD: A heterogeneous cache-scratchpad CPU architecture with data layout compaction for embedded loop-dominated applications. Journal of Systems Architecture, 2023-12.

2. An Overview of Computation-in-Memory (CIM) Architectures. Design and Applications of Emerging Computer Systems, 2023-08-17.

3. Optimization of Access Address Calculation for LLVM. 2023 4th International Conference on Information Science, Parallel and Distributed Systems (ISPDS), 2023-07-14.

4. Tutorial on memristor-based computing for smart edge applications. Memories - Materials, Devices, Circuits and Systems, 2023-07.

5. Efficient Signed Arithmetic Multiplication on Memristor-Based Crossbar. IEEE Access, 2023.
