Author:
Tiago de Souza Farias, Jonas Maziero
Abstract
Reversibility in artificial neural networks allows us to retrieve the input given an output. We present feature alignment, a method for approximating reversibility in arbitrary neural networks. We train a network by minimizing the distance between the output produced by a data point and the output produced by a random input, optimizing with respect to that random input. We apply the technique to the MNIST, CIFAR-10, CelebA, and STL-10 image datasets. We demonstrate that this method can roughly recover images from just their latent representation, without the need for a decoder. By utilizing the formulation of variational autoencoders, we show that it is possible to produce new images that are statistically comparable to the training data. Furthermore, we find that the quality of the images can be improved by coupling a generator and a discriminator. In addition, we show how this method, with a few minor modifications, can be used to train networks locally, which has the potential to save computational memory resources.
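The central optimization in the abstract, recovering an input by gradient descent on a randomly initialized input so that its output matches a target latent representation, can be sketched as follows. This is an illustrative PyTorch interpretation rather than the authors' reference implementation: the encoder architecture, the use of MSE as the distance, the Adam optimizer, the learning rate, and the step count are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical encoder; stands in for any feedforward network whose
# latent output we want to invert (the authors' networks differ).
encoder = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 256), nn.ReLU(),
    nn.Linear(256, 64),
)

def invert(encoder, target_latent, shape=(1, 1, 28, 28), steps=500, lr=0.1):
    """Approximately recover an input whose latent matches `target_latent`
    by gradient descent on a random input (no decoder involved)."""
    x = torch.randn(shape, requires_grad=True)   # random starting input
    # Only x is passed to the optimizer, so the encoder weights stay fixed.
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.mse_loss(encoder(x), target_latent)  # distance between outputs
        loss.backward()                               # gradient w.r.t. x
        opt.step()
    return x.detach()

# Usage: encode a data point, then reconstruct it from its latent alone.
data_point = torch.rand(1, 1, 28, 28)                 # e.g. an MNIST-sized image
with torch.no_grad():
    z = encoder(data_point)
reconstruction = invert(encoder, z)
```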
Funder
Conselho Nacional de Desenvolvimento Científico e Tecnológico
Instituto Nacional de Ciência e Tecnologia de Informação Quântica
Cited by
1 article.