Affiliation:
1. Department of Electrical Engineering, Pontifical Catholic University of Rio de Janeiro, Rio de Janeiro, RJ, Brazil
2. Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
Abstract
Agricultural automation is emerging as a vital tool to increase field efficiency, improve pest control, and reduce labor burdens. While agricultural mobile robots hold promise for automation, challenges persist, particularly in navigating a plantation environment. Accurate robot localization is already possible, but existing Global Navigation Satellite System with Real‐Time Kinematic (GNSS‐RTK) solutions are costly and demand careful, precise mapping. In response, onboard navigation approaches are gaining traction, leveraging sensors such as cameras and light detection and ranging (lidar). However, the machine learning methods used in camera‐based systems are highly sensitive to the training data set used. In this paper, we study the effects of data set diversity on a proposed deep learning‐based visual navigation system. Leveraging multiple data sets, we assess the model's robustness and adaptability while investigating the effects of the data diversity available during the training phase. The system is presented with a range of different camera configurations, hardware, and field structures, as well as a simulated environment. The results show that mixing images from different cameras and fields can improve not only the system's robustness to changing conditions but also its single‐condition performance. Real‐world tests were conducted, showing that good results can be achieved with reasonable amounts of data.