Abstract
Tiny machine learning (TinyML) has emerged as a rapidly growing field, driven by the expansion of the Internet of Things (IoT). However, most deep learning algorithms are too complex, require large amounts of memory to store data, and consume enormous energy for computation and data movement; they are therefore unsuitable for IoT devices such as sensors and imaging systems. Furthermore, typical hardware accelerators cannot be embedded in these resource-constrained edge devices, and they also struggle to sustain real-time inference. To achieve real-time processing on such battery-operated devices, deep learning models must be compact and hardware-optimized, and the accelerator hardware must be lightweight and consume extremely little energy. We therefore present a network model optimized for hardware implementation through model simplification and compression, and propose a hardware architecture for a lightweight, energy-efficient deep learning accelerator. The experimental results demonstrate that our optimized model successfully performs object detection, and that the proposed hardware design achieves 1.25× smaller logic and 4.27× smaller BRAM usage, with approximately 10.37× lower energy consumption than similar previous works, while sustaining real-time processing at 43.95 fps under an operating frequency of 100 MHz on a Xilinx ZC702 FPGA.
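The abstract mentions model compression as one route to a hardware-friendly network. As a minimal illustrative sketch only (the paper's actual compression scheme is not specified here), uniform symmetric 8-bit weight quantization shows the basic memory trade-off: weights stored as `int8` occupy 4× less BRAM than `float32`, at the cost of a bounded rounding error. All function names below are hypothetical.

```python
import numpy as np

def quantize_int8(w):
    # Uniform symmetric quantization to int8 (illustrative, not the paper's method).
    # One scale per tensor maps the float range [-max|w|, max|w|] onto [-127, 127].
    scale = max(np.max(np.abs(w)), 1e-12) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights for accuracy checks on the host.
    return q.astype(np.float32) * scale

# Toy example: quantize a random 3x3 weight tensor.
w = np.random.randn(3, 3).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
err = np.max(np.abs(w - w_hat))  # worst-case rounding error, at most scale/2
```

Per-tensor scaling keeps the accelerator datapath simple (a single multiplier per layer to rescale accumulator outputs), which is one reason this style of quantization is common on resource-constrained hardware.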
Subject
Electrical and Electronic Engineering, Biochemistry, Instrumentation, Atomic and Molecular Physics, and Optics, Analytical Chemistry
Cited by 7 articles.