Mentor: A Memory-Efficient Sparse-dense Matrix Multiplication Accelerator Based on Column-Wise Product

Authors:

Lu Xiaobo¹, Fang Jianbin¹, Peng Lin¹, Huang Chun¹, Du Zidong², Zhao Yongwei², Wang Zheng³

Affiliation:

1. School of Computer Science and Technology, National University of Defense Technology, Changsha, China

2. Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China

3. Northwest University, Xi'an, China

Abstract

Sparse-dense matrix multiplication (SpMM) is a performance bottleneck in many high-performance computing and deep-learning applications, making it attractive to design specialized SpMM hardware accelerators. Unfortunately, existing hardware solutions either fail to fully exploit the data-reuse opportunities of the input and output matrices or suffer from irregular memory access patterns. These shortcomings increase off-chip memory traffic and bandwidth pressure, leaving much room for improvement. We present Mentor, a new approach to designing SpMM accelerators. Our key insight is that column-wise dataflow, although rarely exploited in prior work, can address these issues in SpMM computation. Mentor is a software-hardware co-design that leverages column-wise dataflow to improve data reuse and regularize the memory accesses of SpMM. On the software level, Mentor incorporates a novel streaming construction scheme that preprocesses the input matrix to enable a streaming access pattern. On the hardware level, it employs a fully pipelined design to further unlock the potential of column-wise dataflow. The design of Mentor is underpinned by a carefully constructed analytical model that navigates the trade-off between performance and hardware resources. We have implemented an FPGA prototype of Mentor. Experimental results show that, compared with state-of-the-art hardware solutions, Mentor achieves a geometric-mean speedup of 2.05× (up to 3.98×), reduces memory traffic by a geometric mean of 2.92× (up to 4.93×), and improves bandwidth utilization by a geometric mean of 1.38× (up to 2.89×).
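To make the column-wise dataflow concrete, the following is a minimal software sketch of column-wise SpMM, not Mentor's actual hardware design: the sparse matrix A is assumed to be stored in CSC (compressed sparse column) form, each output column C[:, j] stays resident while it accumulates contributions, and A's columns are streamed in a fixed, regular order — the property the accelerator exploits for data reuse and regular memory accesses.

```python
# Illustrative column-wise SpMM: C = A @ B, where A (m x n) is sparse in CSC
# form (col_ptr, row_idx, vals) and B (n x p) is dense. Each output column
# C[:, j] is built as a sum of A's columns scaled by the entries of B[:, j],
# so A is traversed column by column in a streaming, regular order.

def spmm_columnwise(m, n, col_ptr, row_idx, vals, B):
    p = len(B[0])
    C = [[0.0] * p for _ in range(m)]
    for j in range(p):                        # one resident output column at a time
        for k in range(n):                    # stream A's columns in order
            bkj = B[k][j]
            if bkj == 0.0:                    # skip columns with a zero multiplier
                continue
            for t in range(col_ptr[k], col_ptr[k + 1]):
                C[row_idx[t]][j] += vals[t] * bkj   # accumulate into C[:, j]
    return C
```

For example, with A = [[1, 0], [0, 2]] (CSC: col_ptr = [0, 1, 2], row_idx = [0, 1], vals = [1, 2]) and B = [[1, 2], [3, 4]], the routine produces C = [[1, 2], [6, 8]].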

Publisher

Association for Computing Machinery (ACM)

References: 63 articles.

