Affiliation:
1. ENS de Lyon
2. Inria, Université Côte d'Azur
3. Université Laval
4. MIT CSAIL
Abstract
Recent advances in Neural Radiance Fields enable the capture of scenes with motion. However, editing that motion is hard: no existing method allows editing beyond the space of motions present in the original video, nor editing based on physics. We present the first approach that allows physically‐based editing of motion in a scene captured with a single hand‐held video camera, containing vibrating or periodic motion. We first introduce a Lagrangian representation, representing motion as the displacement of particles, which is learned while training a radiance field. We use these particles to create a continuous representation of motion over the sequence, which is then used to perform a modal analysis of the motion via a Fourier transform on the particle displacements over time. The resulting extracted modes allow motion synthesis and easy editing of the motion, while inheriting the radiance field's ability for free‐viewpoint synthesis in the captured 3D scene. We demonstrate our new method on synthetic and real captured scenes.
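The modal-analysis step described in the abstract can be illustrated with a minimal sketch: given per-particle displacement trajectories, a temporal Fourier transform reveals the dominant vibration modes. The particle data, frame rate, and peak-picking heuristic below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Illustrative setup (assumed values, not from the paper):
# 50 particles tracked over 256 frames at 30 fps.
rng = np.random.default_rng(0)
n_particles, n_frames, fps = 50, 256, 30.0
t = np.arange(n_frames) / fps

# Synthetic displacements: each particle vibrates at 2 Hz with a
# random phase, plus a little noise.
phases = rng.uniform(0, 2 * np.pi, n_particles)
disp = 0.1 * np.sin(2 * np.pi * 2.0 * t[None, :] + phases[:, None])
disp += 0.005 * rng.standard_normal((n_particles, n_frames))

# Temporal FFT per particle; average magnitude spectra across particles.
spectrum = np.abs(np.fft.rfft(disp, axis=1))
freqs = np.fft.rfftfreq(n_frames, d=1.0 / fps)
mean_spectrum = spectrum.mean(axis=0)

# Dominant mode = peak of the averaged spectrum (ignoring the DC bin).
dominant = freqs[1:][np.argmax(mean_spectrum[1:])]
print(f"dominant mode: {dominant:.2f} Hz")
```

Editing the motion then amounts to manipulating the complex amplitudes at the selected mode frequencies before resynthesizing displacements, rather than editing the raw trajectories frame by frame.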
Funder
H2020 European Research Council
Subject
Computer Graphics and Computer-Aided Design
Cited by
2 articles.
1. Modeling Ambient Scene Dynamics for Free-view Synthesis;Special Interest Group on Computer Graphics and Interactive Techniques Conference Conference Papers '24;2024-07-13
2. Recent Trends in 3D Reconstruction of General Non‐Rigid Scenes;Computer Graphics Forum;2024-04-30