Authors:
Liu Gang, Yao Bitao, Xu Wenjun, Liu Xuedong
Abstract
With the advent of Industry 4.0, industrial robots have been widely used in various sectors, e.g., in assembly. Peg-in-hole (PiH) assembly is the most typical assembly task. In PiH tasks, the robot transitions from a non-contact mode to a contact-rich mode. Instead of switching between position and force control modes with a pause when contact is detected, we deploy non-diagonal stiffness compliance control to plan an adaptive trajectory, improving task efficiency and ensuring contact safety. In this paper, we propose a deep reinforcement learning (DRL) method to achieve this compliance. A compliance controller based on a virtual forward dynamics (FD) model is built, and a DRL agent optimizes the parameters of the controller's non-diagonal stiffness matrix to generate a trajectory that adapts to the changing contact conditions. Experiments show that the proposed method keeps the contact force within a safe range and improves the efficiency of assembly tasks.
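The core idea in the abstract, compliance control with a non-diagonal stiffness matrix, can be illustrated with a minimal admittance-style update. The sketch below is an illustrative assumption, not the paper's controller or its virtual FD model: it integrates a virtual unit-mass system driven by the contact force, a damping term, and a coupled (non-diagonal) stiffness, so that a lateral contact force also deflects the motion along the other axis.

```python
# Minimal sketch of one Cartesian compliance (admittance) step with a
# non-diagonal stiffness matrix. All names and gains are illustrative
# assumptions; the paper's DRL agent would tune the entries of K online.

def mat_vec(M, v):
    """Multiply a small dense matrix M by a vector v."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def compliance_step(x, x_ref, f_ext, K, D, dt, v):
    """One explicit-Euler update of a unit-virtual-mass admittance model:
       a = f_ext - D*v - K*(x - x_ref)."""
    dx = [xi - xr for xi, xr in zip(x, x_ref)]
    spring = mat_vec(K, dx)                       # coupled stiffness force
    a = [f - s - D * vi for f, s, vi in zip(f_ext, spring, v)]
    v_new = [vi + ai * dt for vi, ai in zip(v, a)]
    x_new = [xi + vn * dt for xi, vn in zip(x, v_new)]
    return x_new, v_new

# Off-diagonal terms couple the x and y directions: a contact force in +y
# also produces a small deflection in x, which helps a peg slide into a hole
# instead of jamming against one edge.
K = [[400.0,  50.0],
     [ 50.0, 400.0]]
x, v = [0.0, 0.0], [0.0, 0.0]
f_ext = [0.0, 5.0]  # simulated lateral contact force [N]
for _ in range(100):
    x, v = compliance_step(x, [0.0, 0.0], f_ext, K, D=40.0, dt=0.01, v=v)
```

After the loop settles, the deflection approaches the static solution K^-1 * f_ext: a positive displacement along y and, due to the off-diagonal coupling, a small negative displacement along x.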
Subject
General Physics and Astronomy
References (14 articles)
1. Wang. A Robotic Peg-in-Hole Assembly Strategy Based on Variable Compliance Center. IEEE Access, 2019.
2. Hogan. Impedance Control: An Approach to Manipulation. 1984.
3. Scherzinger. Virtual Forward Dynamics Models for Cartesian Robot Control. 2020.
4. Chen. Robust Adaptive Position and Force Tracking Control Strategy for Door-Opening Behaviour. International Journal of Simulation Modelling, 2016.
5. Oikawa. Admittance Control Based on a Stiffness Ellipse for Rapid Trajectory Deformation. 2020.
Cited by 1 article.