Reduced-Parameter YOLO-like Object Detector Oriented to Resource-Constrained Platform
Authors:
Zheng Xianbin 1, He Tian 1
Affiliation:
1. College of Mechanical and Electrical Engineering, Qingdao University, Qingdao 266071, China
Abstract
Deep learning-based object detectors are in demand across a wide range of applications, particularly in robotics and the automotive industry. However, the high computational requirements of deep learning severely limit its deployment on resource-constrained and energy-limited devices. To address this problem, we propose a YOLO-like object detection algorithm and deploy it on an FPGA platform. The FPGA's parallel computing architecture allows the model's computational units, such as the convolution, pooling, and Concat layers, to be accelerated during inference. To enable the algorithm to run efficiently on FPGAs, we quantized the model and wrote the corresponding hardware operators for its component units. The proposed object detection accelerator was implemented and verified on the Xilinx ZYNQ platform. Experimental results show that the model's detection accuracy is comparable to that of common algorithms, while its power consumption is far lower than that of a CPU or GPU. After deployment, the accelerator achieves fast inference and is suitable for mobile devices that must detect their surrounding environment.
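The abstract mentions quantizing the model before deployment but does not specify the scheme. As an illustration only, the following is a minimal sketch of symmetric per-tensor int8 post-training quantization, one common approach for FPGA inference; the function names and the choice of symmetric scaling are assumptions, not the authors' method.

```python
import numpy as np

def quantize_int8(w: np.ndarray, eps: float = 1e-12):
    """Symmetric per-tensor quantization of float weights to int8.

    Maps the range [-max|w|, +max|w|] onto [-127, 127] with a single
    scale factor, so dequantization is just q * scale.
    """
    scale = max(np.max(np.abs(w)) / 127.0, eps)  # eps guards an all-zero tensor
    q = np.clip(np.round(w / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from int8 codes."""
    return q.astype(np.float32) * scale

# Example: quantize a small random weight tensor and check the error bound.
rng = np.random.default_rng(0)
w = rng.standard_normal((3, 3)).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
max_err = float(np.max(np.abs(w - w_hat)))  # bounded by scale / 2 (rounding)
```

With symmetric scaling, the reconstruction error per weight is at most half the scale step, which is typically acceptable for detection accuracy while letting the FPGA operators work on 8-bit integers.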
Funder
National Defense Science and Technology Innovation Zone Foundation of China
Subject
Electrical and Electronic Engineering; Biochemistry; Instrumentation; Atomic and Molecular Physics, and Optics; Analytical Chemistry
Cited by
3 articles.