Abstract
Previous work has demonstrated that it is possible to generate efficient and highly parallel code for multicore CPUs and GPUs from combinator-based array languages for a range of applications. That work, however, has been limited to operating on flat, rectangular structures without any facilities for irregularity or nesting.
In this paper, we show that even a limited form of nesting provides substantial benefits both to the expressiveness of the language (increasing modularity and providing support for simple irregular structures) and to the portability of the code (enabling it to run on resource-constrained devices, such as GPUs). Specifically, we generalise Blelloch's flattening transformation along two lines: (1) we explicitly distinguish between definitely regular and potentially irregular computations; and (2) we handle multidimensional arrays. We demonstrate the utility of this generalisation by extending the embedded array language Accelerate to include irregular streams of multidimensional arrays. We discuss code generation, optimisation, and irregular stream scheduling, as well as a range of benchmarks on both multicore CPUs and GPUs.
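To make the flattening idea concrete, here is a minimal sketch in plain Haskell (not the paper's Accelerate extension, whose API is not shown in this abstract): a nested array such as [[1,2,3],[4,5],[],[6]] is represented as a segment descriptor holding the length of each inner array plus one flat data vector, and a nested sum becomes a segmented sum over that pair. The names `flatten` and `segSum` are illustrative, not from the paper.

```haskell
-- Segment descriptor: one length per inner (possibly empty) array.
type Segs = [Int]

-- Flatten a nested list into (segment descriptor, flat data).
flatten :: [[a]] -> (Segs, [a])
flatten xss = (map length xss, concat xss)

-- Sum each segment of the flat representation; the result has one
-- element per segment, matching `map sum` on the nested original.
segSum :: Num a => Segs -> [a] -> [a]
segSum []     _  = []
segSum (n:ns) xs = let (seg, rest) = splitAt n xs
                   in sum seg : segSum ns rest

main :: IO ()
main = do
  let nested       = [[1,2,3],[4,5],[],[6]] :: [[Int]]
      (segs, flat) = flatten nested
  print (segSum segs flat)                    -- [6,9,0,6]
  print (segSum segs flat == map sum nested)  -- True
```

The key point of flattening is that `segSum` works over one flat vector, so the nested (irregular) computation can be executed by a flat data-parallel operation; the paper's contribution is distinguishing when the segment descriptor is genuinely needed (irregular) from when all segments have the same known extent (regular, multidimensional).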
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Graphics and Computer-Aided Design; Software
Cited by 10 articles.
1. Gaiwan: A size-polymorphic typesystem for GPU programs;Science of Computer Programming;2023-08
2. On Generating Out-Of-Core GPU Code for Multi-Dimensional Array Operations;Proceedings of the 34th Symposium on Implementation and Application of Functional Languages;2022-08-31
3. In-Place-Folding of Non-Scalar Hyper-Planes of Multi-Dimensional Arrays;33rd Symposium on Implementation and Application of Functional Languages;2021-09
4. Generating high performance code for irregular data structures using dependent types;Proceedings of the 9th ACM SIGPLAN International Workshop on Functional High-Performance and Numerical Computing;2021-08-22
5. Generating fast sparse matrix vector multiplication from a high level generic functional IR;Proceedings of the 29th International Conference on Compiler Construction;2020-02-22