Enhanced Real-Time Target Detection for Picking Robots Using Lightweight CenterNet in Complex Orchard Environments
Published: 2024-06-30
Issue: 7
Volume: 14
Page: 1059
ISSN: 2077-0472
Container-title: Agriculture
Language: en
Short-container-title: Agriculture
Author:
Fan Pan 1,2,3,4, Zheng Chusan 1,3, Sun Jin 1, Chen Dong 1, Lang Guodong 2, Li Yafeng 1,3
Affiliation:
1. School of Computer, Baoji University of Arts and Science, Baoji 721016, China
2. Apple Full Mechanized Scientific Research Base of Ministry of Agriculture and Rural Affairs, Yangling 712100, China
3. School of Mathematics and Information Sciences, Baoji University of Arts and Science, Baoji 721013, China
4. The Youth Innovation Team of Shaanxi Universities, Xi’an 710061, China
Abstract
The rapid development of artificial intelligence and remote sensing technologies is indispensable for modern agriculture. In orchard environments, challenges such as varying light conditions and shading complicate the tasks of intelligent picking robots. To enhance the recognition accuracy and efficiency of apple-picking robots, this study aimed to achieve high detection accuracy in complex orchard environments while reducing model computation and time consumption. This study utilized the CenterNet neural network as the detection framework, introducing gray-centered RGB color space vertical decomposition maps and employing grouped convolutions and depthwise-separable convolutions to design a lightweight feature extraction network, Light-Weight Net, comprising eight bottleneck structures. Based on the recognition results, the 3D coordinates of the picking point were determined in the camera coordinate system by using the transformation relationship between the image’s physical coordinate system and the camera coordinate system, together with distance information from the depth map. Experimental results obtained using a testbed with an orchard-picking robot indicated that the proposed model achieved an average precision (AP) of 96.80% on the test set, with real-time performance of 18.91 frames per second (FPS) and a model size of only 17.56 MB. In addition, the root-mean-square error of positioning accuracy in the orchard test was 4.405 mm, satisfying the high-precision positioning requirements of the picking-robot vision system in complex orchard environments.
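The picking-point localization step described in the abstract maps a detected pixel and its depth-map distance into 3D camera coordinates. A minimal sketch of that back-projection, assuming a standard pinhole camera model; the intrinsic parameters (fx, fy, cx, cy) and pixel/depth values below are hypothetical placeholders, not values from the paper:

```python
import numpy as np

def pixel_to_camera(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with a depth reading (same unit as the
    desired output, e.g. mm) into the camera coordinate system using the
    pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# Hypothetical intrinsics; in practice these come from camera calibration.
fx, fy, cx, cy = 615.0, 615.0, 320.0, 240.0
picking_point = pixel_to_camera(400, 260, 850.0, fx, fy, cx, cy)
```

The returned vector is the picking point in the camera frame; a hand-eye transform would then carry it into the robot base frame.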
Funder
Research Program of the Shaanxi Provincial Department of Education; R&D Program of the Shaanxi Province of China
References: 56 articles.