Affiliation:
1. Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, California
Abstract
Recent efforts to improve the performance of neural network (NN) accelerators that meet today's application requirements have given rise to a new trend of logic-based NN inference relying on fixed-function combinational logic. Mapping such large Boolean functions, with many input variables and product terms, to digital signal processors (DSPs) on field-programmable gate arrays (FPGAs) requires a novel framework that accounts for the structure and reconfigurability of DSP blocks during the mapping process. The methodology proposed in this article maps fixed-function combinational logic blocks to a set of Boolean functions in which the Boolean operations corresponding to each function are mapped to DSP devices rather than to look-up tables on the FPGA, thereby exploiting the high performance, low latency, and parallelism of DSP blocks. This article also presents an innovative design and optimization methodology for the compilation and mapping of NNs that use fixed-function combinational logic to DSPs on FPGAs, employing a high-level synthesis flow. Our experimental evaluations across several datasets and selected NNs demonstrate that our framework achieves inference latency and output accuracy comparable to prior-art FPGA-based NN accelerators employing DSPs.
Funder
National Science Foundation
Publisher
Association for Computing Machinery (ACM)