Affiliation:
1. University of Naples Federico II, Department of Electrical Engineering and Information Technology, Italy
Abstract
Deep Learning is ubiquitous today and is increasingly moving from the cloud down to the edge of networked infrastructures, where it enables embedded applications to perform complex inference tasks close to the data sources, reducing long-distance data movement and alleviating the need for a powerful cloud infrastructure. Edge-class MPSoC devices featuring an on-chip FPGA fabric offer key advantages for Deep Learning inference tasks, especially for complex applications where multiple models may run concurrently on the same platform. In this work, we propose an approach and a practical framework for the systematic characterization of multithreaded Deep Learning inference on edge FPGA MPSoCs. We instantiate the framework on a real-world MPSoC platform, targeting Xilinx Vitis-AI as a representative example of a commercial Deep Learning acceleration toolkit for edge environments. We design a comprehensive experimental campaign and apply it to the platform for several CNNs, each trained on three different datasets. We show that our approach supports both hardware- and software-level analysis of a target system. Among other findings, the analysis revealed suboptimal behaviour in the underlying toolkit runtime, concerning both the utilization of the accelerator cores and uneven software latency in the support library, influenced by the shapes of the input tensors.
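To make the measurement setting concrete, the following is a minimal sketch of multithreaded inference through the Vitis-AI runtime (VART) Python API, following the pattern used in the public Vitis-AI examples: each thread owns its own runner bound to a DPU subgraph and records per-inference latency. The model path "model.xmodel", the thread and frame counts, and the int8 buffer dtype are illustrative assumptions, not details taken from the paper.

import threading
import time
import numpy as np
import vart
import xir

def dpu_subgraphs(graph):
    # Keep only the subgraphs that the Vitis-AI compiler mapped onto the DPU cores.
    return [s for s in graph.get_root_subgraph().toposort_child_subgraph()
            if s.has_attr("device") and s.get_attr("device").upper() == "DPU"]

def worker(subgraph, n_frames, latencies):
    # One runner per thread, as in the Vitis-AI examples.
    runner = vart.Runner.create_runner(subgraph, "run")
    in_dims = tuple(runner.get_input_tensors()[0].dims)
    out_dims = tuple(runner.get_output_tensors()[0].dims)
    # int8 buffers are typical for quantized models; the exact dtype
    # depends on the model produced by the Vitis-AI quantizer (assumption).
    in_buf = [np.zeros(in_dims, dtype=np.int8)]
    out_buf = [np.zeros(out_dims, dtype=np.int8)]
    for _ in range(n_frames):
        t0 = time.perf_counter()
        job_id = runner.execute_async(in_buf, out_buf)
        runner.wait(job_id)
        latencies.append(time.perf_counter() - t0)

graph = xir.Graph.deserialize("model.xmodel")  # hypothetical compiled model
sg = dpu_subgraphs(graph)[0]
latencies, threads = [], []
for _ in range(4):  # four concurrent inference threads (illustrative)
    t = threading.Thread(target=worker, args=(sg, 100, latencies))
    threads.append(t)
    t.start()
for t in threads:
    t.join()
print("mean per-inference latency: %.3f ms" % (1000 * np.mean(latencies)))

Having each thread create its own vart.Runner mirrors how concurrent applications typically share the DPU cores, which is the kind of runtime behaviour the characterization campaign described in the abstract probes.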
Publisher
Association for Computing Machinery (ACM)
Subject
Hardware and Architecture, Software
Reference44 articles.
1. A Survey and Taxonomy of FPGA-based Deep Learning Accelerators
2. Eyeriss: An Energy-Efficient Reconfigurable Accelerator for Deep Convolutional Neural Networks
3. Model Compression and Acceleration for Deep Neural Networks: The Principles, Progress, and Challenges
4. FARNN: FPGA-GPU Hybrid Acceleration Platform for Recurrent Neural Networks
5. Matthieu Courbariaux Itay Hubara Daniel Soudry Ran El-Yaniv and Yoshua Bengio. 2016. Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1. https://arxiv.org/abs/1602.02830 Matthieu Courbariaux Itay Hubara Daniel Soudry Ran El-Yaniv and Yoshua Bengio. 2016. Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1. https://arxiv.org/abs/1602.02830