Author:
Jin Piaopiao, Lin Yinjie, Song Yaoxian, Li Tiefeng, Yang Wei
Abstract
Contact-rich robotic manipulation tasks such as assembly are widely studied because of their close relevance to society and the manufacturing industry. Although such tasks depend heavily on both vision and force sensing, current methods lack a unified mechanism to fuse the two modalities effectively. We coordinate multimodality from perception to control and propose a vision-force curriculum policy learning scheme that effectively fuses the sensor features and generates the policy. Experiments in simulation demonstrate the advantages of our method, which can insert pegs with 0.1 mm clearance. Furthermore, the system generalizes to various initial configurations and unseen shapes, and it transfers robustly from simulation to reality without fine-tuning, confirming the effectiveness and generalization of our proposed method. Experiment videos and code will be available at https://sites.google.com/view/vf-assembly.
Subject
Artificial Intelligence, Biomedical Engineering
References (45 articles)
1. Andrychowicz et al., "Learning dexterous in-hand manipulation," Int. J. Rob. Res., 2020.
2. Bengio et al., "Curriculum learning," Proceedings of the 26th Annual International Conference on Machine Learning, 2009.
3. Bogunowicz et al., "Sim2real for peg-hole insertion with eye-in-hand camera," arXiv, 2020.
4. Chebotar et al., "Closing the sim-to-real loop: adapting simulation randomization with real world experience," 2019.
5. Chhatpar et al., "Search strategies for peg-in-hole assemblies with position uncertainty," Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., 2001.