Authors:
Zuo Guoyu, Tong Jiayuan, Liu Hongxing, Chen Wenbai, Li Jianfeng
Abstract
To grasp a target object stably and in the correct order in object-stacking scenes, a robot must reason about the relationships between objects and derive an intelligent manipulation order, enabling more advanced interaction between the robot and its environment. This paper proposes a novel graph-based visual manipulation relationship reasoning network (GVMRN) that directly outputs object relationships and the manipulation order. The GVMRN model first extracts features and detects objects from RGB images, and then adopts a graph convolutional network (GCN) to gather contextual information between objects. To improve the efficiency of relation reasoning, a relationship filtering network is built to reduce the number of object pairs before reasoning. Experiments on the Visual Manipulation Relationship Dataset (VMRD) show that our model significantly outperforms previous methods at reasoning about object relationships in object-stacking scenes. The GVMRN model was also tested on images we collected and applied on a robot grasping platform. The results demonstrate the generalization and applicability of our method in real environments.
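For concreteness, the following is a minimal PyTorch sketch of the reasoning stages the abstract names: GCN context aggregation over detected objects, pair filtering, and relation classification. All class names, feature dimensions, the fully connected adjacency, and the three relation labels are illustrative assumptions, not the authors' released implementation; the object detector and feature extractor are taken as given.

import itertools
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    # One graph-convolution step (Kipf & Welling style): H' = ReLU(A_hat @ H @ W).
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h, adj):
        # adj: row-normalized adjacency over the N detected objects, shape (N, N)
        return F.relu(self.linear(adj @ h))

class PairFilter(nn.Module):
    # Binary scorer standing in for the relationship filtering network:
    # prunes unlikely object pairs before the full relation classifier runs.
    def __init__(self, dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, h, pairs):
        feats = torch.cat([h[pairs[:, 0]], h[pairs[:, 1]]], dim=-1)
        return torch.sigmoid(self.mlp(feats)).squeeze(-1)  # keep-probability per pair

class RelationHead(nn.Module):
    # Classifies each surviving (subject, object) pair; three classes
    # (e.g. parent / child / no-relation) are assumed here for illustration.
    def __init__(self, dim, num_classes=3):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, num_classes))

    def forward(self, h, pairs):
        feats = torch.cat([h[pairs[:, 0]], h[pairs[:, 1]]], dim=-1)
        return self.mlp(feats)  # per-pair class logits

class GVMRNSketch(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.gcn = GCNLayer(dim, dim)
        self.pair_filter = PairFilter(dim)
        self.head = RelationHead(dim)

    def forward(self, obj_feats, keep_thresh=0.5):
        # obj_feats: (N, D) per-object features, e.g. ROI-pooled from a detector backbone.
        n = obj_feats.size(0)
        adj = torch.full((n, n), 1.0 / n, device=obj_feats.device)  # fully connected graph
        h = self.gcn(obj_feats, adj)                                # contextualized features
        pairs = torch.tensor(list(itertools.permutations(range(n), 2)),
                             device=obj_feats.device)               # all ordered pairs
        keep = self.pair_filter(h, pairs) > keep_thresh             # prune unlikely pairs
        logits = self.head(h, pairs[keep])                          # reason over the rest
        return pairs[keep], logits

model = GVMRNSketch(dim=256)
pairs, logits = model(torch.randn(4, 256))  # e.g. four detected objects

The predicted parent/child edges define a directed graph over the stacked objects; a topological sort of that graph is one natural way to realize the manipulation order the abstract describes.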
Subject
Artificial Intelligence, Biomedical Engineering
Cited by: 10 articles.