Affiliation:
1. Mechanical Engineering Training Centre, College of Engineering, China Agricultural University, Beijing 100083, China
Abstract
As one of China's three major grain crops, corn plays a crucial role in national food security through both its yield and its quality. With the advance of agricultural intelligence, agricultural robots have attracted considerable attention. High-precision navigation is the foundation for the various field operations of agricultural robots in corn fields and directly affects operation quality, and recognizing and ranging corn leaves and stalks is a prerequisite for such navigation. This paper proposes a corn leaf and stalk recognition and ranging algorithm based on multi-sensor fusion. First, YOLOv8 is used to detect corn leaves and stalks. Because large variations in leaf morphology and field illumination cause discontinuous detections, an equidistant expansion polygon algorithm is proposed to post-process the leaf detections, raising the average recognition completeness of the leaves to 86.4%. Second, after redundant point clouds are removed, IMU data are used to estimate the confidence of the LiDAR and depth-camera ranging point clouds, and the two point clouds are fused according to these confidences to achieve high-precision ranging of corn leaves; the average ranging error is 2.9 cm, lower than that of either sensor alone. Finally, the stalk point cloud is processed and clustered with the FILL-DBSCAN algorithm to identify each corn stalk and measure its distance. The algorithm combines recognition accuracy with ranging accuracy, meets the needs of robot navigation and phenotypic measurement in corn fields, and supports stable and efficient operation of the robot in the field.
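To make the fusion and clustering steps described in the abstract concrete, the sketch below shows one plausible implementation in Python. It is only an illustration: the function names, the simple weighted average, the use of plain DBSCAN in place of the paper's FILL-DBSCAN variant, and the parameter values (`eps`, `min_samples`) are assumptions rather than the authors' code, and the IMU-based confidence model itself is not reproduced here.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def fuse_ranges(lidar_pts, cam_pts, w_lidar, w_cam):
    """Confidence-weighted fusion of the LiDAR and depth-camera range
    estimates for one detected leaf (illustrative sketch).

    lidar_pts, cam_pts : (N, 3) arrays of points already cropped to the
                         detection, in the sensor frame, metres.
    w_lidar, w_cam     : scalar confidences, e.g. derived from IMU data.
    """
    r_lidar = np.linalg.norm(lidar_pts, axis=1).mean()  # mean LiDAR range
    r_cam = np.linalg.norm(cam_pts, axis=1).mean()      # mean camera range
    return (w_lidar * r_lidar + w_cam * r_cam) / (w_lidar + w_cam)

def cluster_stalks(stalk_pts, eps=0.05, min_samples=20):
    """Group stalk points into per-stalk clusters with plain DBSCAN
    (a stand-in for the paper's FILL-DBSCAN variant) and return the
    range to each cluster centroid."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(stalk_pts)
    ranges = {}
    for lab in set(labels) - {-1}:                      # -1 marks noise
        centroid = stalk_pts[labels == lab].mean(axis=0)
        ranges[lab] = float(np.linalg.norm(centroid))
    return ranges
```

With `w_lidar` and `w_cam` set from IMU-derived confidence estimates, `fuse_ranges` reflects the idea of trusting each sensor in proportion to its confidence: when one confidence approaches zero, the fused range reduces to the other sensor's measurement.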