Authors:
Wang Chuang, Su Chupeng, Sun Baozheng, Chen Gang, Xie Longhan
Abstract
Introduction
Robotic assembly tasks require precise manipulation and coordination, often necessitating advanced learning techniques to achieve efficient and effective performance. While residual reinforcement learning with a base policy has shown promise in this domain, existing base-policy approaches often rely on hand-designed full-state features and policies or on extensive demonstrations, limiting their applicability in semi-structured environments.
Methods
In this study, we propose an Object-Embodiment-Centric Imitation and Residual Reinforcement Learning (OEC-IRRL) approach that leverages an object-embodiment-centric (OEC) task representation to integrate vision models with imitation and residual learning. By utilizing a single demonstration and minimizing interactions with the environment, our method aims to enhance learning efficiency and effectiveness. The proposed method involves three key steps: creating an object-embodiment-centric task representation, employing imitation learning for a base policy using via-point movement primitives for generalization to different settings, and utilizing residual RL for uncertainty-aware policy refinement during the assembly phase.
Results
Through a series of comprehensive experiments, we investigate the impact of the OEC task representation on base and residual policy learning and demonstrate the effectiveness of the method in semi-structured environments. Our results indicate that the approach, requiring only a single demonstration and less than 1.2 h of interaction, improves success rates by 46% and reduces assembly time by 25%.
Discussion
This research presents a promising avenue for robotic assembly tasks, providing a viable solution without the need for specialized expertise or custom fixtures.
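To make the residual-learning idea in the Methods section concrete, the sketch below shows the generic structure of residual RL: the executed command is the base-policy action plus a small learned correction. This is an illustrative example only, not the authors' implementation; `base_policy`, `residual_policy`, and `residual_scale` are hypothetical names, and the linear/tanh forms are stand-ins for the paper's movement-primitive base policy and learned residual.

```python
import numpy as np

def base_policy(state):
    # Stand-in for an imitation-learned base policy
    # (e.g. one derived from via-point movement primitives).
    return np.clip(-0.5 * state, -1.0, 1.0)

def residual_policy(state, theta):
    # Learned residual correction; here a tiny linear map with
    # parameters theta, squashed to keep corrections bounded.
    return np.tanh(theta @ state)

def act(state, theta, residual_scale=0.1):
    # Residual RL: executed action = base action + scaled learned residual.
    return base_policy(state) + residual_scale * residual_policy(state, theta)

state = np.array([0.2, -0.1])
theta = np.zeros((2, 2))  # an untrained residual contributes nothing
print(act(state, theta))  # equals the base action alone: [-0.1, 0.05]
```

Because the residual is initialized to zero, early training behaves like the base policy, which is what makes the approach sample-efficient: the RL component only has to learn corrections around an already reasonable trajectory.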