Affiliation:
1. School of Mathematical Sciences, Harbin Normal University, Harbin 150500, China
Abstract
Exploring the ocean's resources requires detecting underwater objects, a challenging task because underwater images are often blurry and the targets are small and densely packed. To improve the accuracy of underwater target detection, we propose an enhanced version of the YOLOv7 network called YOLOv7-SN, which optimizes the effectiveness and accuracy of underwater target detection through a series of targeted improvements. We incorporate the squeeze-and-excitation (SE) channel attention module into key parts of the network to improve the extraction of features relevant to underwater targets. We also introduce an RFE module based on dilated convolution behind the backbone network to capture multi-scale information. In addition, we use the Wasserstein distance as a new similarity metric in the loss function to address the challenge of small-target detection. Finally, we employ detection heads carrying implicit knowledge to further improve the model's accuracy. Together, these methods improve the effectiveness of underwater target detection and the model's ability to cope with the complexity of underwater environments. We conducted experiments on the URPC2020 and RUIE datasets. The results show that the mean average precision (mAP) improves by 5.9% and 3.9%, respectively, compared with the baseline model.
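For illustration, the SE channel attention mentioned in the abstract can be sketched as a generic squeeze-and-excitation block in PyTorch; this is not the authors' exact implementation, and the class name SEBlock and the reduction ratio of 16 are assumed defaults.

import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Generic squeeze-and-excitation channel attention block."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)           # squeeze: global average pooling
        self.fc = nn.Sequential(                      # excitation: bottleneck MLP
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)  # per-channel weights in (0, 1)
        return x * w                                            # reweight feature maps channel-wise

Example usage: SEBlock(256)(torch.randn(1, 256, 40, 40)) returns a tensor of the same shape whose channels have been reweighted by the learned attention.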
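The RFE module's use of dilated convolutions to gather multi-scale context can likewise be illustrated with a generic multi-branch dilated-convolution block; the dilation rates (1, 2, 3), the residual fusion, and the class name below are illustrative assumptions rather than the paper's exact design.

import torch
import torch.nn as nn

class DilatedBranchBlock(nn.Module):
    """Parallel dilated convolutions fused to enlarge the receptive field."""
    def __init__(self, channels: int, rates=(1, 2, 3)):
        super().__init__()
        # Each branch keeps the spatial size (kernel 3, padding = dilation) but sees a wider context.
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=r, dilation=r, bias=False)
            for r in rates
        )
        self.fuse = nn.Conv2d(channels * len(rates), channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multi_scale = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.fuse(multi_scale) + x             # residual connection preserves the original features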
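The Wasserstein-based metric for small targets can be sketched as a normalized Gaussian Wasserstein distance of the kind commonly used in tiny-object detection, where each box is modeled as a 2-D Gaussian; the function name and the normalizing constant c below are assumptions, since the abstract does not give the exact formulation or hyperparameters.

import torch

def normalized_wasserstein(box1: torch.Tensor, box2: torch.Tensor, c: float = 12.8) -> torch.Tensor:
    """Normalized Wasserstein distance between boxes in (cx, cy, w, h) format.

    Each box is modeled as a 2-D Gaussian N([cx, cy], diag(w/2, h/2)^2); the
    closed-form 2-Wasserstein distance between the two Gaussians is mapped to
    (0, 1] with exp(-W2 / c). The constant c is dataset-dependent.
    """
    d_center = (box1[..., :2] - box2[..., :2]) ** 2       # squared center offsets
    d_size = ((box1[..., 2:4] - box2[..., 2:4]) / 2) ** 2  # squared half-size differences
    w2 = torch.sqrt(d_center.sum(-1) + d_size.sum(-1))     # 2-Wasserstein distance
    return torch.exp(-w2 / c)

A corresponding regression loss would then be 1 - normalized_wasserstein(pred, target), which stays informative even when small boxes have little or no overlap.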