Affiliation:
1. School of Mathematics, Southeast University, Nanjing 211102, China
2. Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China
Abstract
Synthetic aperture radar (SAR) enables precise object localization and imaging, which has propelled the rapid development of algorithms for maritime ship identification and detection. However, most current deep-learning-based detectors simply increase network depth to improve detection accuracy, which can discard effective features of the target. To address this challenge, this paper proposes an object-enhanced network, OE-YOLO, designed specifically for SAR ship detection. First, the original image is passed through an improved CFAR detector, whose output is supplied to the network as an additional input channel, providing extra information that strengthens object localization and extraction. Second, the Coordinate Attention (CA) mechanism is introduced into the backbone of YOLOv7-tiny to better capture spatial and positional information in the image, thereby alleviating the loss of small-object positions. Third, to enhance the model's detection capability for multi-scale objects, the neck of the original model is optimized to integrate the Asymptotic Feature Fusion (AFF) network. Finally, the proposed model is thoroughly evaluated on publicly available SAR image datasets, the SAR-Ship-Dataset and the HRSID dataset. Compared with the baseline YOLOv7-tiny, OE-YOLO achieves superior performance with a lower parameter count, and compared with other commonly used deep-learning-based detection methods, it delivers the best performance and more accurate detection results.
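The abstract's central idea is feeding a CFAR detection map to the network as an extra input channel alongside the SAR intensity image. The sketch below is a minimal illustration of that idea, not the authors' code: it assumes a plain 2-D cell-averaging CFAR (the abstract does not specify how the "improved" CFAR differs), and the function names (ca_cfar_map, add_cfar_channel), window sizes, false-alarm rate, and channel-stacking order are illustrative assumptions.

```python
# Minimal sketch of a CFAR-derived extra input channel for a SAR detector.
# Assumes a basic 2-D cell-averaging CFAR; all parameters are illustrative.
import numpy as np
from scipy.ndimage import uniform_filter

def ca_cfar_map(intensity, train=16, guard=4, pfa=1e-3):
    """Return a soft CFAR map in [0, 1] from a single-channel SAR intensity image."""
    intensity = np.asarray(intensity, dtype=np.float32)
    outer = 2 * (train + guard) + 1            # edge length of the full window
    inner = 2 * guard + 1                      # edge length of the guard window
    sum_outer = uniform_filter(intensity, outer) * outer**2
    sum_inner = uniform_filter(intensity, inner) * inner**2
    n_train = outer**2 - inner**2              # number of training cells
    noise = (sum_outer - sum_inner) / n_train  # local clutter estimate
    alpha = n_train * (pfa ** (-1.0 / n_train) - 1.0)  # CA-CFAR scaling factor
    score = intensity / (alpha * noise + 1e-12)         # > 1 means "detected"
    return np.clip(score, 0.0, 1.0)

def add_cfar_channel(intensity):
    """Stack the original image and its CFAR map into a 2-channel network input."""
    cfar = ca_cfar_map(intensity)
    return np.stack([np.asarray(intensity, dtype=np.float32), cfar], axis=0)  # (C=2, H, W)
```

In this reading, the detector backbone simply takes a two-channel tensor instead of a one-channel one; the CFAR map acts as a coarse object prior that the network can exploit for localization.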
Funder
National Natural Science Foundation of China
Cited by
2 articles.