Affiliation:
1. School of Computer Science, University of Birmingham, Edgbaston, Birmingham, UK
2. Institute of Control and Information Engineering, Poznan University of Technology, Poznan, Poland
Abstract
This paper concerns the problem of learning to grasp dexterously, so that novel objects seen only from a single viewpoint can then be grasped. Recently, progress has been made in data-efficient learning of generative grasp models that transfer well to novel objects. These generative grasp models are learned from demonstration (LfD). One weakness is that, as this paper shall show, grasp transfer under challenging single-view conditions is unreliable. A second weakness is that the number of generative model elements increases linearly with the number of training examples. This, in turn, limits the potential of these generative models for generalization and continual improvement. In this paper, it is shown how to address these problems. Several technical contributions are made: (i) a view-based model of a grasp; (ii) a method for combining and compressing multiple grasp models; (iii) a new way of evaluating contacts that is used both to generate and to score grasps. Together, these improve grasp performance and reduce the number of models learned. These advances, in turn, allow the introduction of autonomous training, in which the robot learns from self-generated grasps. Evaluation on a challenging test set shows that, with innovations (i)–(iii) deployed, grasp transfer success increases from 55.1% to 81.6%. With the addition of autonomous training, this rises to 87.8%. These differences are statistically significant. In total, across all experiments, 539 test grasps were executed on real objects.
Subject
Applied Mathematics, Artificial Intelligence, Electrical and Electronic Engineering, Mechanical Engineering, Modelling and Simulation, Software
Cited by
25 articles.