Convergent autoencoder approximation of low bending and low distortion manifold embeddings
-
Published: 2023-11-16
-
ISSN: 2822-7840
-
Container-title: ESAIM: Mathematical Modelling and Numerical Analysis
-
Short-container-title: ESAIM: M2AN
-
Author: Juliane Braunsmann, Marko Rajkovic, Benedikt Wirth, Martin Rumpf
Abstract
Autoencoders are widely used in machine learning for dimension reduction of high-dimensional data. The encoder embeds the input data manifold into a lower-dimensional latent space, while the decoder represents the inverse map, providing a parametrization of the data manifold by the manifold in latent space. We propose and analyze a novel regularization for learning the encoder component of an autoencoder: a loss functional that prefers isometric, extrinsically flat embeddings and allows the encoder to be trained on its own. To perform the training, it is assumed that the local Riemannian distance and the local Riemannian average can be evaluated for pairs of nearby points on the input manifold. The loss functional is computed via Monte Carlo integration. Our main theorem identifies a geometric loss functional of the embedding map as the $\Gamma$-limit of the sampling-dependent loss functionals. Numerical tests, using image data that encodes different explicitly given data manifolds, show that smooth manifold embeddings into latent space are obtained. Due to the promotion of extrinsic flatness, interpolation between not too distant points on the manifold is well approximated by linear interpolation in latent space.
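To make the sampling-based loss concrete, the following is a minimal, hypothetical sketch of such a Monte Carlo estimate in NumPy. It penalizes deviation from isometry (latent distances should match local Riemannian distances) and from extrinsic flatness (the latent midpoint of a pair should match the encoded Riemannian average). The function and parameter names, and the specific form and weighting of the two terms, are illustrative assumptions, not the paper's exact functional.

```python
import numpy as np

def sampled_embedding_loss(encoder, pairs, dists, averages, lam=1.0):
    """Monte Carlo estimate of a regularization loss preferring
    isometric, extrinsically flat embeddings (illustrative sketch).

    encoder  : callable mapping a (d,) input point to a (k,) latent point
    pairs    : (N, 2, d) array of nearby input-point pairs (x, y)
    dists    : (N,) local Riemannian distances between each pair
    averages : (N, d) local Riemannian averages of each pair
    lam      : weight balancing the two terms (hypothetical choice)
    """
    loss = 0.0
    for (x, y), dist, avg in zip(pairs, dists, averages):
        zx, zy = encoder(x), encoder(y)
        # isometry term: latent distance should match the Riemannian distance
        iso = (np.linalg.norm(zx - zy) - dist) ** 2
        # flatness term: latent midpoint should match the encoded average
        flat = np.linalg.norm(0.5 * (zx + zy) - encoder(avg)) ** 2
        loss += iso + lam * flat
    return loss / len(pairs)
```

On flat Euclidean data with an identity encoder, both terms vanish, which matches the intuition that an already isometric, flat embedding incurs no penalty.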
Funder: Deutsche Forschungsgemeinschaft; Germany's Excellence Strategy