In-place transposition of rectangular matrices on accelerators

Authors:

I-Jui Sung1, Juan Gómez-Luna2, José María González-Linares3, Nicolás Guil3, Wen-Mei W. Hwu4

Affiliation:

1. MulticoreWare, Inc., Champaign, IL, USA

2. University of Córdoba, Córdoba, Spain

3. University of Málaga, Málaga, Spain

4. University of Illinois at Urbana-Champaign, Urbana, IL, USA

Abstract

Matrix transposition is an important algorithmic building block for many numeric algorithms, such as the FFT. It is also used to convert the storage layout of arrays. With more and more algebra libraries offloaded to GPUs, high-performance in-place transposition becomes necessary. Intuitively, in-place transposition should be a good fit for GPU architectures due to their limited on-board memory capacity and high throughput. However, a direct application of CPU in-place transposition algorithms lacks the amount of parallelism and locality required by GPUs to achieve good performance. In this paper we present the first known in-place matrix transposition approach for GPUs. Our implementation is based on a novel 3-stage transposition algorithm, where each stage is performed using an elementary tile-wise transposition. Additionally, when transposition is done as part of the memory transfer between the GPU and the host, our staged approach allows the transposition overhead to be hidden by overlapping it with the PCIe transfer. We show that the 3-stage algorithm allows larger tiles and achieves a 3X speedup over a traditional 4-stage algorithm, with both algorithms based on our high-performance elementary transpositions on the GPU. We also show that our proposed low-level optimizations improve the sustained throughput to more than 20 GB/s. Finally, we propose an asynchronous execution scheme that allows CPU threads to delegate in-place matrix transposition to the GPU, achieving a throughput of more than 3.4 GB/s (including data transfer costs) and improving on current multithreaded implementations of in-place transposition on the CPU.
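The difficulty the abstract describes stems from the classical sequential baseline: in-place transposition of a rectangular row-major matrix is a permutation that must be applied by following its cycles, which offers little parallelism or locality. As a point of reference only, here is a minimal sketch of that cycle-following baseline (this is not the authors' 3-stage GPU algorithm; the function name `transpose_inplace` is our own). For an M×N matrix stored flat in row-major order, the element at index i must move to index (i·M) mod (M·N − 1), with the first and last elements fixed.

```python
def transpose_inplace(a, rows, cols):
    """In-place transpose of a flat, row-major rows x cols matrix.

    After the call, `a` holds the cols x rows transpose in row-major
    order. Uses the classical cycle-following permutation: element i
    moves to (i * rows) % (n - 1), where n = rows * cols.
    """
    n = rows * cols
    if n <= 2:
        return a                      # 1x1, 1x2, 2x1: already transposed
    visited = bytearray(n)            # indices 0 and n-1 are fixed points
    for start in range(1, n - 1):
        if visited[start]:
            continue
        # Follow the cycle containing `start`, carrying one value in tmp.
        i = start
        tmp = a[i]
        while True:
            visited[i] = 1
            dest = (i * rows) % (n - 1)   # where the carried value belongs
            a[dest], tmp = tmp, a[dest]   # drop it off, pick up the evictee
            i = dest
            if i == start:
                break
    return a
```

Cycle lengths are data-independent but irregular, which is exactly why a direct port to a GPU performs poorly; the paper's contribution is to decompose the permutation into stages of regular, tile-wise elementary transpositions instead.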

Publisher

Association for Computing Machinery (ACM)

Subject

Computer Graphics and Computer-Aided Design, Software


Cited by 4 articles.

1. Optimized Computation for Determinant of Multivariate Polynomial Matrices on GPGPU;2022 IEEE 24th Int Conf on High Performance Computing & Communications; 8th Int Conf on Data Science & Systems; 20th Int Conf on Smart City; 8th Int Conf on Dependability in Sensor, Cloud & Big Data Systems & Application (HPCC/DSS/SmartCity/DependSys);2022-12

2. AMT: asynchronous in-place matrix transpose mechanism for sunway many-core processor;The Journal of Supercomputing;2022-01-17

3. Highly efficient GPU eigensolver for three-dimensional photonic crystal band structures with any Bravais lattice;Computer Physics Communications;2019-12

4. Efficient Processing of Large Data Structures on GPUs: Enumeration Scheme Based Optimisation;International Journal of Parallel Programming;2017-07-04
