Abstract
High-Level Synthesis (HLS) tools simplify the design of hardware accelerators by automatically generating Verilog/VHDL code from a general-purpose software programming language. Because of the mismatch between the requirements of hardware descriptions and the characteristics of input languages, HLS tools still require hardware design knowledge and non-trivial design space exploration, which can be an obstacle for domain scientists seeking to accelerate applications written, for example, in Python-based programming frameworks. This research proposes a modern approach based on multi-level compiler technologies to bridge the gap between HLS and high-level frameworks, and to use domain-specific abstractions to solve domain-specific problems. The key enabling technology is the Multi-Level Intermediate Representation (MLIR), a framework that supports the construction of reusable compiler infrastructure. The proposed approach uses MLIR to introduce new optimizations at appropriate levels of abstraction outside the HLS tool, while still relying on years of HLS research for the low-level hardware generation steps; users and developers of HLS tools can thus increase their productivity, obtain accelerators with higher performance, and avoid being limited by the features of a specific (possibly closed-source) backend. The presented tools and techniques were designed, implemented, and tested to synthesize machine learning algorithms, but they are broadly applicable to any input specification written in a language that has a translation to MLIR. Generated accelerators can be deployed on Field Programmable Gate Arrays or Application-Specific Integrated Circuits, and they can reach high energy efficiency without any manual optimization of the code.
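To make the statement "any input specification written in a language that has a translation to MLIR" more concrete, the following is a minimal sketch, not part of the chapter's toolchain: it uses JAX purely as an example of a Python-based framework that already lowers to MLIR, and the kernel name matmul_add is hypothetical. It shows the kind of high-level MLIR module that an MLIR-based flow could optimize before handing off to a low-level HLS backend.

import jax
import jax.numpy as jnp

def matmul_add(a, b, c):
    # A simple linear-algebra kernel, representative of the machine learning
    # workloads targeted by MLIR-based hardware generation flows.
    return jnp.matmul(a, b) + c

a = jnp.ones((8, 8), dtype=jnp.float32)
b = jnp.ones((8, 8), dtype=jnp.float32)
c = jnp.ones((8, 8), dtype=jnp.float32)

# Trace the Python-level computation and print its MLIR (StableHLO) form:
# the level of abstraction at which domain-specific optimizations can be
# applied outside the HLS tool, before low-level hardware generation.
lowered = jax.jit(matmul_add).lower(a, b, c)
print(lowered.as_text())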
Publisher
Springer Nature Switzerland