Affiliation:
1. Politecnico di Milano, Milano, Italy
2. Technische Universität Dresden, Dresden, Germany
Abstract
Numerical simulations can help solve complex problems. Most of these algorithms are massively parallel and thus good candidates for FPGA acceleration thanks to spatial parallelism. Modern FPGA devices can leverage high-bandwidth memory technologies, but when applications are memory-bound, designers must craft advanced communication and memory architectures for efficient data movement and on-chip storage. This development process requires hardware design skills that are uncommon among domain experts. In this article, we propose an automated tool flow from a domain-specific language for tensor expressions to generate massively parallel accelerators on high-bandwidth-memory-equipped FPGAs. Designers can use this flow to integrate and evaluate various compiler or hardware optimizations. We use computational fluid dynamics (CFD) as a paradigmatic example. Our flow starts from a high-level specification of tensor operations and combines a multi-level intermediate representation (MLIR)-based compiler with an in-house hardware generation flow to produce systems with parallel accelerators and a specialized memory architecture that moves data efficiently, aiming to fully exploit the available CPU-FPGA bandwidth. We simulated applications with millions of elements, achieving up to 103 GFLOPS with one compute unit and custom precision when targeting a Xilinx Alveo U280. Our FPGA implementation is up to 25× more energy efficient than expert-crafted Intel CPU implementations.
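To illustrate the kind of tensor expressions such a flow starts from, here is a hedged sketch (in NumPy, not the article's actual DSL; the names `D`, `u`, and the element size are hypothetical): spectral-element CFD kernels typically apply a 1-D derivative operator along each axis of a 3-D element, a pattern of small dense contractions that is embarrassingly parallel across elements and thus a natural fit for spatially parallel FPGA compute units.

```python
import numpy as np

n = 8                                  # points per element edge (illustrative)
rng = np.random.default_rng(0)
D = rng.standard_normal((n, n))        # 1-D derivative matrix
u = rng.standard_normal((n, n, n))     # field values on one spectral element

# One small dense contraction per spatial direction; each contracts D
# with u along a different axis of the element.
ux = np.einsum('ia,ajk->ijk', D, u)    # derivative along axis 0
uy = np.einsum('ja,iak->ijk', D, u)    # derivative along axis 1
uz = np.einsum('ka,ija->ijk', D, u)    # derivative along axis 2
```

A compiler flow like the one described can lower each such contraction to a pipelined dot-product datapath and replicate it spatially, which is where the custom memory architecture for feeding operands becomes critical.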
Funder
EU Horizon 2020 Programme
Publisher
Association for Computing Machinery (ACM)
Cited by: 3 articles.
1. Automated Buffer Sizing of Dataflow Applications in a High-level Synthesis Workflow. ACM Transactions on Reconfigurable Technology and Systems, 2024-01-27.
2. base2: An IR for Binary Numeral Types. Proceedings of the 13th International Symposium on Highly Efficient Accelerators and Reconfigurable Technologies, 2023-06-14.
3. Iris. Proceedings of the 28th Asia and South Pacific Design Automation Conference, 2023-01-16.