Author:
Wang Shuaijun, Sun Lining, Zha Fusheng, Guo Wei, Wang Pengfei
Abstract
In this paper, we propose a deep reinforcement learning-based framework that enables adaptive, continuous control of a robot pushing unseen objects from random positions to a target position. Our approach incorporates contact information into the design of the reward function, yielding higher success rates, better generalization to unseen objects, and greater task efficiency than policies that ignore contact information. By training in simulation with only a single object, we obtain a policy that generalizes well to the task of pushing unseen objects. Finally, we validate the effectiveness of our approach in real-world scenarios.
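The abstract does not give the reward formula; as a minimal sketch of the idea of a contact-aware shaping reward for planar pushing (all function names, weights, and thresholds here are hypothetical illustrations, not the authors' actual design), one step's reward might combine progress toward the target with a contact term:

```python
import math

def push_reward(obj_pos, target_pos, prev_dist, in_contact,
                w_progress=1.0, w_contact=0.1, success_radius=0.02):
    """Hypothetical contact-aware reward for one planar-pushing step.

    obj_pos, target_pos: (x, y) positions in metres.
    prev_dist: object-to-target distance at the previous step.
    in_contact: whether the pusher currently touches the object
                (e.g. from a simulator's contact query).
    Returns (reward, new_distance, done).
    """
    dist = math.hypot(obj_pos[0] - target_pos[0],
                      obj_pos[1] - target_pos[1])
    # Dense shaping term: reward progress made toward the target.
    reward = w_progress * (prev_dist - dist)
    # Contact term: small bonus for keeping pusher-object contact,
    # penalizing steps where the pusher drifts away from the object.
    reward += w_contact if in_contact else -w_contact
    done = dist < success_radius
    if done:
        reward += 1.0  # terminal bonus on reaching the target region
    return reward, dist, done
```

In a PyBullet-style setup the `in_contact` flag could come from the simulator's contact-point query each step; the contact term is what distinguishes this shaping from a purely distance-based reward.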
Subject
Artificial Intelligence, Biomedical Engineering
Cited by 1 article.