Affiliation:
1. University of Science and Technology of China
2. JD Explore Academy, JD.com
3. The University of Sydney
Abstract
Affordance detection refers to identifying the potential action possibilities of objects in an image, an important ability for robot perception and manipulation. To empower robots with this ability in unseen scenarios, we consider the challenging one-shot affordance detection problem in this paper: given a support image depicting the action purpose, all objects in a scene sharing that affordance should be detected. To this end, we devise a One-Shot Affordance Detection (OS-AD) network that first estimates the purpose and then transfers it to help detect the common affordance across all candidate images. Through collaboration learning, OS-AD can capture the common characteristics of objects with the same underlying affordance and learn a strong adaptation capability for perceiving unseen affordances. Besides, we build a Purpose-driven Affordance Dataset (PAD) by collecting and labeling 4k images from 31 affordance and 72 object categories. Experimental results demonstrate the superiority of our model over representative prior models in terms of both objective metrics and visual quality. The benchmark suite is at ProjectPage.
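The two-stage idea in the abstract (estimate a purpose code from the support image, then transfer it to locate the common affordance in candidate images) can be illustrated with a minimal NumPy sketch. This is a hypothetical toy, not the paper's OS-AD architecture: the feature maps here are random stand-ins for a shared CNN backbone, the purpose code is a simple global average pool, and the transfer step is a per-pixel cosine-style similarity turned into a heatmap.

```python
import numpy as np

rng = np.random.default_rng(0)
C, H, W = 8, 4, 4  # toy feature dimensions (hypothetical)

# Pretend these came from a shared backbone; random stand-ins here.
support_feat = rng.standard_normal((C, H, W))        # support image features
candidate_feats = rng.standard_normal((3, C, H, W))  # 3 candidate images

# Step 1: estimate a "purpose" code from the support image by
# global average pooling its feature map, then normalizing.
purpose = support_feat.mean(axis=(1, 2))             # shape (C,)
purpose /= np.linalg.norm(purpose)

# Step 2: transfer the purpose code to each candidate. Per-pixel
# similarity with the purpose code yields a coarse affordance heatmap.
def affordance_map(feat, purpose):
    flat = feat.reshape(C, -1)                       # (C, H*W)
    flat = flat / (np.linalg.norm(flat, axis=0, keepdims=True) + 1e-8)
    scores = purpose @ flat                          # (H*W,) similarities
    sig = 1.0 / (1.0 + np.exp(-scores))              # squash to (0, 1)
    return sig.reshape(H, W)

masks = np.stack([affordance_map(f, purpose) for f in candidate_feats])
print(masks.shape)  # (3, 4, 4): one heatmap per candidate image
```

In the actual model a learned decoder would refine these coarse maps into segmentation masks; the sketch only shows how a single support image can condition detection across multiple candidates.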
Publisher
International Joint Conferences on Artificial Intelligence Organization
Cited by
7 articles.
1. Grounded Affordance from Exocentric View;International Journal of Computer Vision;2023-12-26
2. A Survey of Visual Affordance Recognition Based on Deep Learning;IEEE Transactions on Big Data;2023-12
3. One-Shot Learning for Task-Oriented Grasping;IEEE Robotics and Automation Letters;2023-12
4. Open-Vocabulary Affordance Detection in 3D Point Clouds;2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS);2023-10-01
5. Grounding 3D Object Affordance from 2D Interactions in Images;2023 IEEE/CVF International Conference on Computer Vision (ICCV);2023-10-01