Affiliation:
1. University of California, Los Angeles, CA, USA
Abstract
With reduced data reuse and parallelism, recent convolutional neural networks (CNNs) create new challenges for FPGA acceleration. Systolic arrays (SAs) are efficient, scalable architectures for convolutional layers, but without proper optimizations their efficiency drops dramatically for three reasons: (1) the differing dimensions within same-type layers, (2) the variety of convolution layers, especially transposed and dilated convolutions, and (3) a CNN's complex dataflow graph. Furthermore, significant overheads arise when integrating FPGAs into machine learning frameworks. We therefore present FlexCNN, a flexible, composable architecture that delivers high computation efficiency through dynamic tiling, layer fusion, and data layout optimizations. In addition, we implement a novel versatile SA that processes normal, transposed, and dilated convolutions efficiently. FlexCNN also uses a fully pipelined software-hardware integration that alleviates software overheads. With an automated compilation flow, FlexCNN takes a CNN in the ONNX representation, performs a design space exploration, and generates an FPGA accelerator. The framework is evaluated on three complex CNNs: OpenPose, U-Net, and E-Net. The architecture optimizations achieve a 2.3× performance improvement. Compared to a standard SA, the versatile SA achieves close-to-ideal speedups of up to 5.98× for transposed and 13.42× for dilated convolutions, with a 6% average area overhead. The pipelined integration yields a 5× speedup for OpenPose.
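The abstract's claim that one versatile SA can serve normal, transposed, and dilated convolutions rests on a standard identity (not a detail of FlexCNN's hardware, which the abstract does not spell out): a dilated convolution equals a plain convolution with zeros inserted between kernel taps, and a stride-s transposed convolution equals a plain convolution over a zero-upsampled input. A minimal NumPy sketch of that reduction, with all function names being illustrative choices:

```python
import numpy as np

def conv2d(x, w):
    """Plain 2-D convolution (valid padding, stride 1)."""
    kh, kw = w.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def dilate_kernel(w, d):
    """Insert d-1 zeros between kernel taps: a dilation-d convolution
    becomes a plain convolution with this expanded kernel."""
    kh, kw = w.shape
    wd = np.zeros(((kh - 1) * d + 1, (kw - 1) * d + 1))
    wd[::d, ::d] = w
    return wd

def upsample_input(x, s):
    """Insert s-1 zeros between input pixels: a stride-s transposed
    convolution becomes a plain convolution (with suitable padding)
    over this expanded input."""
    xu = np.zeros(((x.shape[0] - 1) * s + 1, (x.shape[1] - 1) * s + 1))
    xu[::s, ::s] = x
    return xu

x = np.arange(36, dtype=float).reshape(6, 6)
w = np.ones((3, 3))
y_normal = conv2d(x, w)                     # 4x4 output
y_dilated = conv2d(x, dilate_kernel(w, 2))  # 2x2 output, dilation 2
```

Note that the naive zero-insertion shown here wastes multiply-accumulate cycles on zeros; the paper's reported 5.98× and 13.42× speedups come precisely from the versatile SA avoiding that waste rather than computing it as above.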
Funder
NSF/Intel
NSF NeuroNex project
CRISP center under the JUMP program, and CDSC industrial partners
Publisher
Association for Computing Machinery (ACM)
References: 63 articles.
1. DPUCAHX8H Resource Utilization. (n.d.). Retrieved from https://docs.xilinx.com/r/en-US/pg367-dpucahx8h/Resource-Utilization.
2. DPUCAHX8L Resource Utilization. (n.d.). Retrieved from https://docs.xilinx.com/r/en-US/pg366-dpucahx8l/Resource-Utilization.
3. Alveo U280 Performance with DPUCAHX8H. (n.d.). Retrieved from https://docs.xilinx.com/r/1.4.1-English/ug1354-xilinx-ai-sdk/Alveo-U280-Data-Accelerator-Card.
4. Vitis AI. (n.d.). Retrieved from https://www.xilinx.com/products/design-tools/vitis/vitis-ai.html.
5. Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard et al. 2016. Tensorflow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI’16). 265–283.
Cited by: 17 articles.