Affiliation:
1. University of Science and Technology of China, Hefei, China
Abstract
Reconstructing interacting hands from monocular RGB data is a challenging task, as it involves many interfering factors, e.g., self- and mutual occlusion and similar textures. Previous works leverage information only from a single RGB image, without modeling the physically plausible relation between the two hands, which leads to inferior reconstruction results. In this work, we explicitly exploit spatial-temporal information to achieve better interacting-hand reconstruction. On the one hand, we leverage temporal context to complement the insufficient information provided by a single frame, and design a novel temporal framework with a temporal constraint that enforces smooth interacting-hand motion. On the other hand, we further propose an interpenetration detection module that produces physically plausible interacting hands without collisions. Extensive experiments validate the effectiveness of the proposed framework, which achieves new state-of-the-art performance on public benchmarks.
Funder
GPU cluster built by the MCC Lab of the Information Science and Technology Institution, and the Supercomputing Center of USTC
Publisher
Association for Computing Machinery (ACM)