Abstract
As the need for efficient warehouse logistics has grown in manufacturing systems, the use of automated guided vehicles (AGVs) has increased to reduce travel time. AGVs are controlled by systems that use laser sensors or floor-embedded wires to transport pallets and their loads. Because such control systems rely only on predefined palletizing strategies, AGVs may fail to engage pallets that are incorrectly positioned. In this study, we consider a vision sensor-based method that addresses this shortcoming by recognizing a pallet’s position. We propose a multi-task deep learning architecture that simultaneously predicts distances and rotation from images obtained by a vision sensor. These tasks complement each other during training, allowing the multi-task model to learn and perform tasks that single-task models cannot. The proposed model accurately predicts the rotation and displacement of pallets, deriving the information the control system needs; this information can then be used to optimize the palletizing strategy. The superiority of the proposed model was verified in an experiment on images of stored pallets collected from a vision sensor attached to an AGV.
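The multi-task idea described above (one shared image representation feeding separate distance and rotation predictors, trained with a joint loss) can be illustrated with a minimal NumPy sketch. This is not the authors' architecture: the feature size, head shapes, targets, and loss weights below are hypothetical stand-ins, and the shared CNN backbone is replaced by a random feature vector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a 128-d shared image feature, two distance
# outputs (e.g. lateral and longitudinal offsets) and one rotation angle.
FEAT, N_DIST, N_ROT = 128, 2, 1

# Stand-in for the shared backbone's output for a single pallet image.
features = rng.standard_normal(FEAT)

# Two task-specific linear heads operating on the same shared features.
W_dist = rng.standard_normal((N_DIST, FEAT)) * 0.01
W_rot = rng.standard_normal((N_ROT, FEAT)) * 0.01

distances = W_dist @ features   # e.g. offsets in metres (hypothetical units)
rotation = W_rot @ features     # e.g. yaw angle in radians (hypothetical)

# A joint multi-task loss: a weighted sum of per-task MSE terms, so gradients
# from both tasks flow back into the shared representation.
target_d = np.array([0.10, 0.25])   # illustrative ground-truth offsets
target_r = np.array([0.05])         # illustrative ground-truth rotation
loss = (np.mean((distances - target_d) ** 2)
        + 0.5 * np.mean((rotation - target_r) ** 2))

print(distances.shape, rotation.shape, float(loss))
```

Sharing the backbone while keeping separate heads is what lets the two predictions "complement each other": the joint loss shapes a single feature space that must encode both displacement and orientation cues.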
Funder
National Research Foundation of Korea
Institute of Information & Communications Technology Planning & Evaluation
Korea Creative Content Agency
Subject
Fluid Flow and Transfer Processes, Computer Science Applications, Process Chemistry and Technology, General Engineering, Instrumentation, General Materials Science
Cited by
11 articles.