Affiliation:
1. Department of Mechanical Engineering, University of Washington, Seattle, WA 98195, USA
2. Department of Applied Mathematics, University of Washington, Seattle, WA 98195, USA
Abstract
A central challenge in data-driven model discovery is the presence of hidden, or latent, variables that are not directly measured but are dynamically important. Takens' theorem provides conditions under which it is possible to augment partial measurements with time-delayed information, resulting in an attractor that is diffeomorphic to that of the original full-state system. This diffeomorphism is typically unknown, and learning the dynamics in the embedding space has remained an open challenge for decades. Here, we design a deep autoencoder network to learn a coordinate transformation from the delay-embedded space into a new space, where it is possible to represent the dynamics in a sparse, closed form. We demonstrate this approach on the Lorenz, Rössler and Lotka–Volterra systems, as well as a Lorenz analogue from a video of a chaotic waterwheel experiment. This framework combines deep learning and the sparse identification of nonlinear dynamics (SINDy) method to uncover interpretable models within effective coordinates.
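The pipeline described in the abstract has three stages: build a time-delay embedding of the partial measurement, map it to effective coordinates, and identify a sparse model there. The sketch below illustrates that workflow under simplifying assumptions: the deep autoencoder of the paper is replaced by a linear SVD projection of the Hankel (delay-embedding) matrix, and the off-the-shelf pysindy package stands in for the paper's joint autoencoder–SINDy training. Parameter choices such as n_delays and the number of retained modes are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp
import pysindy as ps

# Simulate the Lorenz system and keep only a single measured variable x(t),
# mimicking a partial-measurement scenario.
def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

dt = 0.002
t = np.arange(0, 50, dt)
sol = solve_ivp(lorenz, (t[0], t[-1]), [1.0, 1.0, 1.0], t_eval=t)
x_meas = sol.y[0]  # partial measurement: only the first state

# Time-delay embedding: each row of H is a vector of n_delays consecutive samples.
n_delays = 100
n_samples = len(x_meas) - n_delays
H = np.column_stack([x_meas[i:i + n_samples] for i in range(n_delays)])

# Linear stand-in for the learned coordinate transformation:
# project the delay vectors onto their first three SVD modes.
U, S, Vt = np.linalg.svd(H, full_matrices=False)
z = U[:, :3] * S[:3]  # reduced coordinates, shape (n_samples, 3)

# Sparse identification of the dynamics in the reduced coordinates.
model = ps.SINDy(
    optimizer=ps.STLSQ(threshold=0.1),
    feature_library=ps.PolynomialLibrary(degree=2),
)
model.fit(z, t=dt)
model.print()  # prints a sparse, closed-form model in the embedding coordinates
```

Because the SVD projection is linear, the recovered model generally differs from the original Lorenz equations; the paper's contribution is precisely to learn a nonlinear transformation (a deep autoencoder trained jointly with the SINDy regression) so that the identified model is sparse and interpretable.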
Funder
National Science Foundation AI Institute in Dynamic Systems
Army Research Office
Subject
General Physics and Astronomy, General Engineering, General Mathematics
Cited by
23 articles.