Affiliation:
1. University of Florida, Department of Mechanical and Aerospace Engineering, 1064 Center Dr., Rm. 181, Gainesville, FL 32653
Abstract
In human–robot collaboration, robots and humans must work together in shared, overlapping workspaces to accomplish tasks. If human and robot motion can be coordinated, collisions between robot and human can be avoided seamlessly without requiring either to stop working. A key part of this coordination is anticipating the human's future motion so the robot's motion can be adapted proactively. In this work, a generative neural network predicts a multi-step sequence of human poses for tabletop reaching motions. The multi-step sequence is mapped to a time series using a model of human speed versus motion distance. The input to the network is the human's reaching target, expressed relative to the current pelvis location, combined with the current human pose. A dataset of human motions was generated in which subjects reach to various positions on or above a table in front of them, starting from a wide variety of initial poses. After training the network, experiments showed that the predicted sequences matched recorded human motion to within an average L2 joint error of 7.6 cm and an average L2 link roll–pitch–yaw error of 0.301 rad. The method predicts an entire reaching motion at once, so it does not suffer from the exponential propagation of prediction error that limits the horizon of prior work.
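The sketch below is not the authors' implementation; it only illustrates, under stated assumptions, the input/output structure the abstract describes (current pose plus target relative to the pelvis in, a fixed-length pose sequence out) and a possible speed-versus-distance timing model. The joint count, layer sizes, sequence length, and the linear speed-model constants are all illustrative assumptions.

```python
# Minimal sketch of the described prediction pipeline (assumptions noted inline).
import torch
import torch.nn as nn

N_JOINTS = 21             # assumed number of tracked joints
POSE_DIM = N_JOINTS * 3   # assumed 3-D position per joint
N_STEPS = 20              # assumed length of the predicted pose sequence


class ReachPredictor(nn.Module):
    """Maps (current pose, reach target relative to pelvis) to a multi-step pose sequence."""

    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(POSE_DIM + 3, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, N_STEPS * POSE_DIM),
        )

    def forward(self, pose, target_rel_pelvis):
        # Concatenate current pose with the target expressed in the pelvis frame.
        x = torch.cat([pose, target_rel_pelvis], dim=-1)
        # Reshape the flat output into an ordered sequence of future poses.
        return self.net(x).view(-1, N_STEPS, POSE_DIM)


def sequence_to_timestamps(reach_distance, n_steps=N_STEPS, a=0.5, b=0.2):
    """Assign times to the predicted steps using an assumed linear
    speed-versus-distance model: duration = a + b * distance (seconds)."""
    duration = a + b * reach_distance
    return torch.linspace(0.0, duration, n_steps)


# Example usage with placeholder inputs.
pose = torch.zeros(1, POSE_DIM)            # current pose (placeholder)
target = torch.tensor([[0.4, 0.1, 0.2]])   # reach target relative to pelvis (m)
sequence = ReachPredictor()(pose, target)  # shape: (1, N_STEPS, POSE_DIM)
times = sequence_to_timestamps(reach_distance=0.45)
```

In this reading, predicting the whole sequence in one forward pass (rather than recursively feeding predictions back in) is what avoids the compounding of single-step errors mentioned in the abstract.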
Funder
National Science Foundation