Affiliation:
1. Xinjiang Agricultural University
Abstract
In-field safflower detection plays a crucial role in automated harvesting and in acquiring row-navigation information. Safflower clusters are small and densely distributed, the inter-row environment is complex, and uneven illumination severely hinders cluster detection. Existing safflower detection methods suffer from insufficient accuracy and from high computational cost and complexity, which hinders their deployment on automated, intelligent harvesting robots.
To address these issues, this study presents SF-YOLO, an enhanced target-detection model. GhostConv replaces the conventional convolutional blocks in the backbone network to improve computational efficiency, and the CBAM attention mechanism is embedded in the backbone to strengthen the model's feature-representation ability. A fused L(CIoU+NWD) loss function improves the precision of feature extraction and accelerates loss convergence. Anchor boxes generated by a K-means clustering algorithm replace the original COCO-dataset anchors, improving the model's adaptability to multi-scale safflower information across farmland. Data augmentation techniques, including Gaussian blur, Gaussian noise, sharpening, and channel disruption, further improve robustness to changes in illumination, noise, and viewing angle. On a self-constructed safflower dataset with complex background information, SF-YOLO surpasses the original YOLOv5s model: GFLOPs decrease from 15.8 G to 13.2 G and parameters from 7.013 M to 5.34 M, reductions of 16.6% and 23.9% respectively, while mAP@0.5 improves by 1.3% to 95.3%. The model improves safflower detection accuracy in complex farmland environments and serves as a reference for the subsequent development of autonomous navigation and non-destructive harvesting equipment.
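The fused L(CIoU+NWD) loss described above can be sketched as a weighted sum of the CIoU loss and a loss based on the Normalized Wasserstein Distance, which models boxes as 2D Gaussians so that small targets such as safflower clusters are penalized more smoothly than by IoU alone. The weighting factor `lam` and the normalizing constant `C` below are illustrative assumptions, not values reported in the paper.

```python
import math

def ciou_loss(b1, b2, eps=1e-9):
    """CIoU loss for boxes given as (cx, cy, w, h)."""
    x1a, y1a, x2a, y2a = b1[0]-b1[2]/2, b1[1]-b1[3]/2, b1[0]+b1[2]/2, b1[1]+b1[3]/2
    x1b, y1b, x2b, y2b = b2[0]-b2[2]/2, b2[1]-b2[3]/2, b2[0]+b2[2]/2, b2[1]+b2[3]/2
    # Intersection over union
    iw = max(0.0, min(x2a, x2b) - max(x1a, x1b))
    ih = max(0.0, min(y2a, y2b) - max(y1a, y1b))
    inter = iw * ih
    union = b1[2]*b1[3] + b2[2]*b2[3] - inter + eps
    iou = inter / union
    # Squared center distance over squared diagonal of the enclosing box
    cw = max(x2a, x2b) - min(x1a, x1b)
    ch = max(y2a, y2b) - min(y1a, y1b)
    rho2 = (b1[0]-b2[0])**2 + (b1[1]-b2[1])**2
    c2 = cw**2 + ch**2 + eps
    # Aspect-ratio consistency term
    v = (4 / math.pi**2) * (math.atan(b2[2]/(b2[3]+eps)) - math.atan(b1[2]/(b1[3]+eps)))**2
    alpha = v / (1 - iou + v + eps)
    return 1 - (iou - rho2 / c2 - alpha * v)

def nwd_loss(b1, b2, C=12.8):
    """NWD loss: boxes modeled as 2D Gaussians; C is a dataset-dependent
    normalizing constant (12.8 is an assumed placeholder)."""
    w2 = ((b1[0]-b2[0])**2 + (b1[1]-b2[1])**2
          + ((b1[2]-b2[2])/2)**2 + ((b1[3]-b2[3])/2)**2)
    return 1 - math.exp(-math.sqrt(w2) / C)

def fused_loss(b1, b2, lam=0.5):
    """L = lam * L_CIoU + (1 - lam) * L_NWD; lam is an assumed weighting."""
    return lam * ciou_loss(b1, b2) + (1 - lam) * nwd_loss(b1, b2)
```

For identical boxes both terms vanish, so the fused loss is approximately zero; as a predicted box shifts away from the ground truth, the NWD term decays smoothly even once IoU reaches zero, which is the motivation for fusing it with CIoU for small, dense targets.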
Publisher
Research Square Platform LLC