Modeling the Interplay between Loop Tiling and Fusion in Optimizing Compilers Using Affine Relations
Published: 2023-11-30
Volume: 41, Issue: 1-4, Pages: 1-45
ISSN: 0734-2071
Container title: ACM Transactions on Computer Systems (ACM Trans. Comput. Syst.)
Language: en
Authors: Zhao Jie (1), Xu Jinchen (2), Di Peng (3), Nie Wang (3), Hu Jiahui (3), Yi Yanzhi (3), Yang Sijia (3), Geng Zhen (3), Zhang Renwei (3), Li Bojie (3), Gan Zhiliang (3), Jin Xuefeng (3)
Affiliations:
1. College of Computer Science and Electronic Engineering, Hunan University, China
2. Information Engineering University, China
3. Huawei Technologies Co. Ltd., China
Abstract
Loop tiling and fusion are two essential transformations in optimizing compilers to enhance the data locality of programs. Existing heuristics either perform loop tiling and fusion in a particular order, missing some of their profitable compositions, or execute ad-hoc implementations for domain-specific applications, calling for a generalized and systematic solution in optimizing compilers.
In this article, we present a so-called basteln (an abbreviation for backward slicing of tiled loop nests) strategy in polyhedral compilation to better model the interplay between loop tiling and fusion. The basteln strategy first groups loop nests while preserving their parallelism/tilability, and then applies rectangular/parallelogram tiling to the output groups, i.e., those that produce data consumed outside the considered program fragment. The memory footprint required by each tile is then computed, from which the upward-exposed data are extracted to determine the tile shapes of the remaining fusion groups. This tiling mechanism can construct the complex tile shapes imposed by the dependences between these groups, which are further merged by a post-tiling fusion algorithm that enhances data locality without sacrificing the parallelism/tilability of the output groups. The basteln strategy also accounts for the amount of redundant computation and the fusion of independent groups, exhibiting general applicability.
We integrate the basteln strategy into two optimizing compilers: a general-purpose optimizer and a domain-specific compiler for deploying deep learning models. Experiments on CPU, GPU, and a deep learning accelerator demonstrate the effectiveness of the approach across a wide range of application domains, including deep learning, image processing, sparse matrix computation, and linear algebra. In particular, the basteln strategy achieves a mean speedup of 1.8× over cuBLAS/cuDNN and 1.1× over TVM on GPU when optimizing deep learning models; it also outperforms PPCG and TVM by 11% and 20%, respectively, when generating code for the deep learning accelerator.
Funder
National Natural Science Foundation of China
Publisher
Association for Computing Machinery (ACM)
Subject
General Computer Science