PolyDL

Authors:

Sanket Tavarageri¹, Alexander Heinecke¹, Sasikanth Avancha¹, Bharat Kaul¹, Gagandeep Goyal², Ramakrishna Upadrasta²

Affiliations:

1. Intel Labs, Bengaluru, Karnataka, India

2. Indian Institute of Technology Hyderabad, India

Abstract

Deep Neural Networks (DNNs) have revolutionized many aspects of our lives. The use of DNNs is becoming ubiquitous, including in software for image recognition, speech recognition, speech synthesis, and language translation, to name a few. Training DNN architectures, however, is computationally expensive, and once a model is created, its use in the intended application, the inference task, is computationally heavy as well; inference must be fast for real-time use. Today, the norm for obtaining high performance is to use code for Deep Learning (DL) primitives that has been optimized for specific architectures by expert programmers and exposed via libraries. However, given the constant emergence of new DNN architectures, creating hand-optimized code is expensive, slow, and not scalable. To address this performance-productivity challenge, in this article we present compiler algorithms that automatically generate high-performance implementations of DL primitives, closely matching the performance of hand-optimized libraries. We develop novel data-reuse analysis algorithms based on the polyhedral model to derive efficient execution schedules automatically. In addition, because most DL primitives use some variant of matrix multiplication at their core, we develop a flexible framework in which library implementations of matrix multiplication can be plugged in in lieu of a subset of the loops. We show that such a hybrid compiler-plus-minimal-library approach yields state-of-the-art performance. We also develop compiler algorithms to perform operator fusion, which reduces data movement through the memory hierarchy of the computer system. Using Convolutional Neural Network (CNN) models and matrix multiplication operations, we demonstrate that our approach automatically creates high-performing DNN building blocks whose performance matches that of the hand-crafted kernels of Intel's oneDNN library on high-end CPUs. At the same time, our techniques take only a fraction of the time (1/20th or less) that AutoTVM, a deep learning auto-tuner, needs to create optimized implementations.
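To make the hybrid compiler-plus-microkernel idea concrete, here is a minimal C sketch (not from the paper) of a blocked matrix multiplication: the outer loops realize a tiled schedule, the kind PolyDL derives automatically via polyhedral data-reuse analysis, while the innermost loop nest is delegated to a small-GEMM microkernel, the role a library such as LIBXSMM plays in the paper. The function small_gemm_kernel below is a hypothetical stand-in for such a library-provided, architecture-tuned kernel.

```c
#include <stddef.h>

/* Hypothetical stand-in for a library-provided small-GEMM microkernel
 * (e.g., a JIT-generated, vectorized kernel from a library like LIBXSMM).
 * Computes C[0:MB,0:NB] += A[0:MB,0:KB] * B[0:KB,0:NB] on row-major blocks. */
static void small_gemm_kernel(const double *A, const double *B, double *C,
                              int MB, int NB, int KB,
                              int lda, int ldb, int ldc) {
    for (int i = 0; i < MB; ++i)
        for (int k = 0; k < KB; ++k)
            for (int j = 0; j < NB; ++j)
                C[i * ldc + j] += A[i * lda + k] * B[k * ldb + j];
}

/* Blocked GEMM for row-major A (M x K), B (K x N), C (M x N).
 * The outer loops set up a cache-friendly tiled schedule (the part a
 * polyhedral data-reuse analysis would choose); the per-block work is
 * handed off to the microkernel. Assumes MB|M, NB|N, KB|K for brevity. */
void blocked_gemm(const double *A, const double *B, double *C,
                  int M, int N, int K, int MB, int NB, int KB) {
    for (int i = 0; i < M; i += MB)
        for (int j = 0; j < N; j += NB)
            for (int k = 0; k < K; k += KB)
                small_gemm_kernel(&A[(size_t)i * K + k],
                                  &B[(size_t)k * N + j],
                                  &C[(size_t)i * N + j],
                                  MB, NB, KB, K, N, N);
}
```

In this division of labor, the compiler picks the tile sizes and loop order so that the blocks touched by the microkernel stay resident in cache, while the microkernel supplies the architecture-specific vectorization; this is what the abstract describes as the hybrid compiler plus minimal-library approach.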

Publisher

Association for Computing Machinery (ACM)

Subject

Hardware and Architecture, Information Systems, Software


Cited by 14 articles (5 shown below):

1. TensorMap: A Deep RL-Based Tensor Mapping Framework for Spatial Accelerators;IEEE Transactions on Computers;2024-08

2. Soter: Analytical Tensor-Architecture Modeling and Automatic Tensor Program Tuning for Spatial Accelerators;2024 ACM/IEEE 51st Annual International Symposium on Computer Architecture (ISCA);2024-06-29

3. Harnessing Deep Learning and HPC Kernels via High-Level Loop and Tensor Abstractions on CPU Architectures;2024 IEEE International Parallel and Distributed Processing Symposium (IPDPS);2024-05-27

4. Optimizing Deep Learning Inference via Global Analysis and Tensor Expressions;Proceedings of the 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 1;2024-04-17

5. oneDNN Graph Compiler: A Hybrid Approach for High-Performance Deep Learning Compilation;2024 IEEE/ACM International Symposium on Code Generation and Optimization (CGO);2024-03-02
