Affiliation:
1. College of Mechanical Engineering, Tianjin University of Science and Technology, Tianjin 300222, China
Abstract
In practice, object detection algorithms are constrained by complex detection environments, hardware cost, computing power, and on-chip memory, and detector performance degrades sharply under these conditions. Achieving real-time, fast, and high-precision pedestrian recognition in a foggy traffic environment is therefore a challenging problem. To address it, a dark channel de-fogging algorithm is added on top of the YOLOv7 algorithm, and the efficiency of the dark channel de-fogging step is improved through down-sampling and up-sampling. To further improve the accuracy of the YOLOv7 object detector, an ECA module and an additional detection head are added to the network to strengthen object classification and regression, and an 864 × 864 network input size is used for model training to improve pedestrian recognition accuracy. A combined pruning strategy is then applied to compress the optimized YOLOv7 model, yielding the final algorithm, YOLO-GW. Compared with YOLOv7, YOLO-GW increases Frames Per Second (FPS) by 63.08% and mean Average Precision (mAP) by 9.06%, while reducing the number of parameters by 97.66% and the model volume by 96.36%. The smaller parameter count and model size make it possible to deploy the YOLO-GW object detection algorithm on a chip. Analysis and comparison of the experimental data show that YOLO-GW is better suited than YOLOv7 for pedestrian detection in foggy environments.
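The abstract's de-fogging speed-up can be illustrated with a minimal sketch of dark-channel-prior dehazing in which the dark channel, atmospheric light, and transmission map are estimated on a down-sampled copy of the frame and the transmission is then up-sampled back to full resolution before scene radiance recovery. This is not the authors' implementation; the function names, scale factor, patch size, and file paths below are illustrative assumptions.

```python
# Sketch: dark-channel-prior de-fogging accelerated by down-/up-sampling.
import cv2
import numpy as np


def dark_channel(img, patch=15):
    """Per-pixel minimum over the color channels, then a minimum filter."""
    min_rgb = np.min(img, axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)


def estimate_atmosphere(img, dark, top_fraction=0.001):
    """Average the brightest dark-channel pixels to estimate atmospheric light A."""
    flat_dark = dark.ravel()
    n_top = max(1, int(flat_dark.size * top_fraction))
    idx = np.argpartition(flat_dark, -n_top)[-n_top:]
    return img.reshape(-1, 3)[idx].mean(axis=0)


def defog(frame_bgr, scale=0.25, omega=0.95, t_min=0.1, patch=15):
    """Dehaze a BGR frame; the transmission map is estimated at reduced resolution."""
    img = frame_bgr.astype(np.float64) / 255.0
    small = cv2.resize(img, None, fx=scale, fy=scale,
                       interpolation=cv2.INTER_AREA)

    dark_small = dark_channel(small, patch)
    A = estimate_atmosphere(small, dark_small)

    # Transmission on the down-sampled image, then up-sampled to full size.
    t_small = 1.0 - omega * dark_channel(small / A, patch)
    t = cv2.resize(t_small, (img.shape[1], img.shape[0]),
                   interpolation=cv2.INTER_LINEAR)
    t = np.clip(t, t_min, 1.0)[..., np.newaxis]

    # Scene radiance recovery: J = (I - A) / t + A
    J = (img - A) / t + A
    return (np.clip(J, 0.0, 1.0) * 255).astype(np.uint8)


if __name__ == "__main__":
    foggy = cv2.imread("foggy_street.jpg")     # illustrative input path
    clear = defog(foggy)
    cv2.imwrite("defogged_street.jpg", clear)
```

Because the minimum-filter and transmission estimation run on a frame reduced by the scale factor, the most expensive operations touch far fewer pixels, which is the kind of efficiency gain the abstract attributes to the down-sampling/up-sampling scheme.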
Subject
Electrical and Electronic Engineering; Biochemistry; Instrumentation; Atomic and Molecular Physics, and Optics; Analytical Chemistry
Cited by
8 articles.