Affiliation:
1. University of Warwick, Coventry, United Kingdom
2. King Abdullah University of Science and Technology (KAUST), Thuwal, Saudi Arabia
Abstract
Long Short-Term Memory (LSTM) networks, and Recurrent Neural Networks (RNNs) in general, have demonstrated their suitability for many time-series applications, especially in Natural Language Processing (NLP). Computationally, LSTMs introduce dependencies on previous outputs in each layer that complicate their computation and the design of custom computing architectures, compared to traditional feed-forward networks. Most neural network acceleration work has focused on optimising the core matrix-vector operations on highly capable FPGAs in server environments. Research that considers the embedded domain has often been unsuitable for streaming inference, relying heavily on batch processing to achieve high throughput. Moreover, many existing accelerator architectures have not fully exploited the underlying FPGA architecture, resulting in designs that achieve lower operating frequencies than the theoretical maximum. This paper presents a flexible overlay architecture for LSTMs on FPGA SoCs that is built around a streaming dataflow arrangement, uses DSP block capabilities directly, and keeps parameters within the architecture while moving input data serially to mitigate external memory access overheads. The overlay can be configured to implement alternative models or update model parameters at runtime. It achieves a higher operating frequency and demonstrates higher performance than other lightweight LSTM accelerators, as demonstrated in an FPGA SoC implementation.
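The recurrent dependency the abstract refers to can be made concrete with a minimal NumPy sketch of a single LSTM cell (not the paper's accelerator, just the standard gate equations): each step's hidden state `h` feeds the next step's gate computation, so the time steps cannot be batched away in a streaming setting. The function and variable names here are illustrative, not taken from the paper.

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. The output h depends on h_prev,
    which is what serialises computation across time steps."""
    H = h_prev.shape[0]
    # Stacked gate pre-activations: input, forget, output gates and candidate cell.
    z = W @ x + U @ h_prev + b                  # shape (4*H,)
    i = 1.0 / (1.0 + np.exp(-z[0:H]))           # input gate
    f = 1.0 / (1.0 + np.exp(-z[H:2 * H]))       # forget gate
    o = 1.0 / (1.0 + np.exp(-z[2 * H:3 * H]))   # output gate
    g = np.tanh(z[3 * H:4 * H])                 # candidate cell state
    c = f * c_prev + i * g                      # new cell state
    h = o * np.tanh(c)                          # new hidden state, fed back next step
    return h, c

# Tiny example: hidden size 4, input size 3, sequence of 5 steps.
rng = np.random.default_rng(0)
hidden, inputs = 4, 3
W = rng.standard_normal((4 * hidden, inputs)) * 0.1  # input weights
U = rng.standard_normal((4 * hidden, hidden)) * 0.1  # recurrent weights
b = np.zeros(4 * hidden)
h = np.zeros(hidden)
c = np.zeros(hidden)
for t in range(5):  # each step must wait for the previous h and c
    h, c = lstm_step(rng.standard_normal(inputs), h, c, W, U, b)
```

The sequential loop is the point: unlike a feed-forward layer, the matrix-vector products at step `t` cannot start until `h` from step `t-1` is available, which motivates keeping weights on-chip and streaming inputs, as the paper's overlay does.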
Funder
U.K. Engineering and Physical Sciences Research Council
Royal Academy of Engineering/The Leverhulme Trust Research Fellowship
Publisher
Association for Computing Machinery (ACM)
Cited by
2 articles.