Author:
Zhao Wenning, Yao Xin, Wang Bixin, Ding Jiayi, Li Jialu, Zhang Xiong, Wan Shuting, Zhao Jingyi, Guo Rui, Cai Wei
Abstract
Accurately identifying the coupler operating handle during the operation of a hook-picking robot has a significant impact on production activities. This article is based on the YOLOv8 model. Because the variety of on-site coupler operating handles and working environments is limited, it is difficult to ensure the richness of image categories in the dataset. Before the experiment, a series of expansion operations, such as rotation, translation, and brightness adjustment, were therefore performed on the dataset; the expanded images simulate those captured by the hook-picking robot in harsh environments. The model performs feature extraction and target recognition on the expanded coupler-handle dataset, thereby achieving recognition of the coupler handle. The experimental results show that the model's accuracy on the coupler handle in complex environments is 98.8%, while effectively reducing the time required for training and testing. Compared with the commonly used SSD300 and YOLOv4-Tiny models, it not only achieves higher accuracy but also shows clear advantages in parameter count, weight-file size, and other aspects, so it can be readily deployed in actual production.
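The dataset-expansion operations the abstract names (rotation, translation, brightness adjustment) can be sketched as below. This is an illustrative, dependency-free toy on a grayscale image stored as a list of lists; the function names and parameters are assumptions, as the paper does not specify its augmentation pipeline.

```python
# Toy versions of the augmentation operations mentioned in the abstract.
# A real pipeline would operate on image files (e.g. with an image library),
# but the transformations are the same in principle.

def rotate90(img):
    """Rotate the image 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def translate(img, dx, dy, fill=0):
    """Shift the image by (dx, dy) pixels, padding vacated pixels with `fill`."""
    h, w = len(img), len(img[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                out[ny][nx] = img[y][x]
    return out

def adjust_brightness(img, factor):
    """Scale pixel intensities by `factor`, clamping to the 0-255 range."""
    return [[min(255, max(0, int(p * factor))) for p in row] for row in img]

# One source image yields several augmented variants for the training set.
img = [[10, 20], [30, 40]]
expanded = [img, rotate90(img), translate(img, 1, 0), adjust_brightness(img, 1.5)]
```

Each source image thus contributes multiple training samples, which is how the authors compensate for the limited variety of handle types and environments on site.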
Publisher
Springer Nature Singapore