Authors:
Sheng G, Zhiyang L, Ruiteng Z, Lei Z, Chengran Y, Zhengshen Z, Ang M H
Abstract
In conventional affordance detection, the primary objective is to provide insight into the potential uses of objects. A significant limitation remains, however: these methods treat affordance detection purely as a semantic segmentation task, disregarding the crucial step of interpreting affordances as actions that a manipulator can perform. To address this gap, we present a novel pipeline built around the concept of an Intelligent Action Library (IAL). This framework enables affordance interpretation for a variety of manipulation tasks, allowing robots to be taught and guided in executing specific actions based on the detected affordances and on human-robot interaction. Through real-world experiments, we demonstrate the ingenuity and dependability of our pipeline, which effectively bridges the gap between affordance detection and manipulation task planning and execution. The integration of the IAL provides a seamless connection between understanding affordances and empowering robots to perform tasks with precision and efficiency. A demo is publicly available: https://youtu.be/_oBAer2Vl8k
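The abstract describes an Intelligent Action Library that maps detected affordances to executable manipulation actions. The paper does not specify its implementation; the sketch below is a minimal, purely illustrative interpretation of that idea, with all names (`ActionPrimitive`, `IAL`, `interpret`, the affordance labels) being hypothetical rather than taken from the paper.

```python
# Hypothetical sketch of an Intelligent Action Library (IAL):
# a lookup from an affordance label produced by a segmentation model
# to a parameterized manipulation primitive. All names are illustrative
# assumptions, not the authors' actual API.

from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class ActionPrimitive:
    name: str
    # Takes task parameters (e.g. a grasp pose) and returns a status string.
    execute: Callable[[dict], str]


def _grasp(params: dict) -> str:
    return f"grasp at {params.get('pose', 'centroid')}"


def _pour(params: dict) -> str:
    return f"pour with tilt {params.get('tilt_deg', 45)} deg"


# The library itself: detected affordance label -> action primitive.
IAL: Dict[str, ActionPrimitive] = {
    "graspable": ActionPrimitive("grasp", _grasp),
    "pourable": ActionPrimitive("pour", _pour),
}


def interpret(affordance: str, params: dict) -> str:
    """Look up the detected affordance and execute its primitive."""
    action = IAL[affordance]
    return action.execute(params)


print(interpret("graspable", {"pose": "handle"}))  # grasp at handle
```

Under this reading, extending the robot's repertoire amounts to registering a new primitive in the library, while human-robot interaction supplies or corrects the parameters passed to `interpret`.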