Abstract
Mobile manipulation has a broad range of applications in robotics, yet it is typically more challenging than fixed-base manipulation because the mobile base and the manipulator must be coordinated as a whole. Although recent work has demonstrated that deep reinforcement learning is a powerful technique for fixed-base manipulation tasks, most of these methods do not transfer to mobile manipulation. This paper investigates how to leverage deep reinforcement learning to tackle whole-body mobile manipulation tasks in unstructured environments using only on-board sensors. A novel mobile manipulation system is proposed that integrates state-of-the-art deep reinforcement learning algorithms with visual perception. Its framework decouples visual perception from the deep reinforcement learning controller, which enables the learned policy to generalize from simulation training to real-world testing. Extensive simulation and real-world experiments show that the proposed system autonomously grasps different types of objects across a variety of scenarios, verifying its effectiveness.
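The decoupling described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the `PerceptionModule` and `WholeBodyPolicy` classes, their interfaces, and the 2-DoF-base/6-DoF-arm action split are all hypothetical placeholders, chosen only to show how keeping the controller's input to a compact state vector lets the perception stage be retrained for the real world without touching the policy.

```python
import numpy as np


class PerceptionModule:
    """Hypothetical stand-in for the visual perception stage: maps a raw
    on-board camera image to a compact state vector. Here it simply
    averages pixel intensities per channel as a placeholder."""

    def extract_state(self, image: np.ndarray) -> np.ndarray:
        # image: (H, W, 3) array -> 3-dim feature vector
        return image.reshape(-1, 3).mean(axis=0)


class WholeBodyPolicy:
    """Hypothetical stand-in for the learned controller: maps the compact
    state to whole-body commands (assumed here: 2 base DoF + 6 arm DoF)."""

    def __init__(self, state_dim: int = 3, action_dim: int = 8, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.weights = 0.1 * rng.standard_normal((action_dim, state_dim))

    def act(self, state: np.ndarray) -> np.ndarray:
        # Linear policy with tanh squashing to keep commands bounded
        return np.tanh(self.weights @ state)


# Because the policy never sees raw pixels, the perception module can be
# swapped (simulated renderer vs. real camera) without retraining the
# controller -- the sim-to-real decoupling the abstract describes.
perception = PerceptionModule()
policy = WholeBodyPolicy()

image = np.zeros((64, 64, 3))           # dummy on-board camera frame
state = perception.extract_state(image)  # compact 3-dim state
action = policy.act(state)               # bounded 8-dim whole-body command
print(action.shape)
```

The key design point is the narrow interface between the two stages: the controller is trained entirely in simulation on the compact state, while only the perception front-end must bridge the visual gap between simulated and real images.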
Funder
National Natural Science Foundation of China
Engineering and Physical Sciences Research Council
National Key Research and Development Program of China
Subject
Electrical and Electronic Engineering, Biochemistry, Instrumentation, Atomic and Molecular Physics, and Optics, Analytical Chemistry
Cited by
56 articles.