Simultaneous Multi-View Object Recognition and Grasping in Open-Ended Domains
Published: 2024-04-16
Volume: 110, Issue: 2
ISSN: 1573-0409
Journal: Journal of Intelligent & Robotic Systems (J Intell Robot Syst)
Language: en
Authors: Hamidreza Kasaei, Mohammadreza Kasaei, Georgios Tziafas, Sha Luo, Remo Sasso
Abstract
To aid humans in everyday tasks, robots need to know which objects exist in the scene, where they are, and how to grasp and manipulate them in different situations. Therefore, object recognition and grasping are two key functionalities for autonomous robots. Most state-of-the-art approaches treat object recognition and grasping as two separate problems, even though both use visual input. Furthermore, the knowledge of the robot is fixed after the training phase; in such cases, if the robot encounters new object categories, it must be retrained to incorporate the new information without catastrophic forgetting. To resolve this problem, we propose a deep learning architecture with an augmented memory capacity to handle open-ended object recognition and grasping simultaneously. In particular, our approach takes multiple views of an object as input and jointly estimates a pixel-wise grasp configuration as well as a deep scale- and rotation-invariant representation as output. The obtained representation is then used for open-ended object recognition through a meta-active learning technique. We demonstrate the ability of our approach to grasp never-seen-before objects and to rapidly learn new object categories using very few examples on-site, in both simulation and real-world settings. Our approach empowers a robot to acquire knowledge about new object categories using, on average, fewer than five instances per category, and to achieve 95% object recognition accuracy and a grasp success rate above 91% in (highly) cluttered scenarios in both simulation and real-robot experiments. A video of these experiments is available online at: https://youtu.be/n9SMpuEkOgk
Publisher: Springer Science and Business Media LLC
Cited by: 1 article.