Affiliation:
1. Microsoft
2. University of Oxford, Department of Engineering Science, Oxford, UK
Abstract
The recent increase in popularity of volumetric representations for scene reconstruction and novel view synthesis has put renewed focus on animating volumetric content at high visual quality and in real time. While implicit deformation methods based on learned functions can produce impressive results, they are 'black boxes' to artists and content creators, they require large amounts of training data to generalize meaningfully, and they do not produce realistic extrapolations outside of this data. In this work, we address these issues by introducing a volume deformation method that runs in real time even for complex deformations, is easy to edit with off-the-shelf software, and extrapolates convincingly. To demonstrate the versatility of our method, we apply it in two scenarios: physics-based object deformation and telepresence, where avatars are controlled using blendshapes. We also perform thorough experiments showing that our method compares favourably both to volumetric approaches combined with implicit deformation and to methods based on mesh deformation.