On optimizing machine learning workloads via kernel fusion

Authors:

Arash Ashari [1], Shirish Tatikonda [2], Matthias Boehm [2], Berthold Reinwald [2], Keith Campbell [3], John Keenleyside [3], P. Sadayappan [1]

Affiliation:

1. Ohio State University, USA

2. IBM, USA

3. IBM, Canada

Abstract

Exploitation of parallel architectures has become critical to scalable machine learning (ML). Since a wide range of ML algorithms employ linear algebraic operators, GPUs with BLAS libraries are a natural choice for such exploitation. Two approaches are commonly pursued: (i) developing GPU-accelerated implementations of complete ML algorithms; and (ii) developing GPU kernels for primitive linear algebraic operators, such as matrix-vector multiplication, which are then used as building blocks for ML algorithms. This paper extends the latter approach by developing fused kernels for combinations of primitive operators that commonly occur in popular ML algorithms. We identify the generic computation pattern alpha * X^T (v ⊙ (X y)) + beta * z, where ⊙ denotes elementwise multiplication, and its various instantiations. We develop a fused kernel to optimize this computation on GPUs, with specialized techniques to handle both sparse and dense matrices. This approach not only reduces the cost of data loads due to improved temporal locality but also enables other optimizations, such as coarsening and hierarchical aggregation of partial results. We also present an analytical model that considers input data characteristics and available GPU resources to estimate near-optimal settings for kernel launch parameters. The proposed approach provides speedups ranging from 2x to 67x for different instances of the generic pattern compared to launching multiple operator-level kernels using GPU-accelerated libraries. We conclude by demonstrating the effectiveness of the approach in improving end-to-end performance on an entire ML algorithm.
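The fusion idea can be illustrated with a minimal dense sketch (hypothetical kernel names; this is not the paper's optimized implementation, which additionally handles sparse matrices, coarsening, and hierarchical aggregation of partial results). The kernel below computes r = alpha * X^T (v ⊙ (X y)) + beta * z for a row-major m-by-n matrix X. Fusion lets each row of X serve both the forward product X y and the transposed accumulation while the row is still in registers, instead of being loaded from global memory twice by two separate library kernels.

// Sketch: fused alpha * X^T (v ⊙ (X y)) + beta * z for dense row-major X.
// One thread per row of X; global atomics stand in for the paper's
// hierarchical aggregation of partial results. acc must be zeroed
// before launch.
__global__ void fused_pattern(const float* X, const float* y, const float* v,
                              float* acc, int m, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= m) return;
    const float* row = X + (size_t)i * n;

    float t = 0.0f;                       // t = (X y)[i]
    for (int j = 0; j < n; ++j)
        t += row[j] * y[j];
    t *= v[i];                            // elementwise scaling by v

    for (int j = 0; j < n; ++j)           // row is reused immediately:
        atomicAdd(&acc[j], t * row[j]);   // acc += t * X[i,:]
}

// Epilogue: r = alpha * acc + beta * z.
__global__ void epilogue(const float* acc, const float* z, float* r,
                         float alpha, float beta, int n) {
    int j = blockIdx.x * blockDim.x + threadIdx.x;
    if (j < n) r[j] = alpha * acc[j] + beta * z[j];
}

For comparison, an unfused baseline would launch a GEMV for X y, an elementwise kernel for the scaling by v, a second GEMV for the X^T product, and an AXPY-style kernel for beta * z, reading X from global memory twice; the fused version reads it once, which is the temporal-locality saving the abstract refers to.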

Publisher

Association for Computing Machinery (ACM)

Subject

Computer Graphics and Computer-Aided Design, Software

Cited by 12 articles.

1. NIOT: A Novel Inference Optimization of Transformers on Modern CPUs. IEEE Transactions on Parallel and Distributed Systems, 2023-06.

2. Collage. Proceedings of the International Conference on Parallel Architectures and Compilation Techniques, 2022-10-08.

3. Mobile or FPGA? A Comprehensive Evaluation on Energy Efficiency and a Unified Optimization Framework. ACM Transactions on Embedded Computing Systems, 2022-09-30.

4. Triton Join: Efficiently Scaling to a Large Join State on GPUs with Fast Interconnects. Proceedings of the 2022 International Conference on Management of Data, 2022-06-10.

5. FuseME: Distributed Matrix Computation Engine based on Cuboid-based Fused Operator and Plan Generation. Proceedings of the 2022 International Conference on Management of Data, 2022-06-10.
