Affiliation:
1. Chongqing University-University of Cincinnati Joint Co-op Institution, Chongqing University, Chongqing, China
Abstract
With the rapid development of generative adversarial networks (GANs), recent years have witnessed an increasing amount of work on reference-guided facial attribute transfer. Most state-of-the-art methods consist of facial information extraction, latent-space disentanglement, and target attribute manipulation. However, they either adopt reference-guided translation methods for manipulation or a monolithic module for exchanging diverse attributes, and therefore cannot accurately disentangle the exact facial attributes, with their specific styles, from the reference image. In this paper, we propose a deep realistic facial editing method (termed LMGAN) based on target-region focusing and dual label constraints. The proposed method manipulates target attributes by latent-space exchange and consists of a subnetwork for each individual attribute. Each subnetwork imposes label constraints on both the target-attribute exchange stage and the training process, with the aim of optimizing generative quality and reference-style correlation. Our method performs well at learning disentangled representations and accurately transferring the style of the target attribute. A global discriminator is introduced to combine the generated editing region with the non-editing areas of the source image. Both qualitative and quantitative results on the CelebA dataset verify the ability of the proposed LMGAN.
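The core mechanism the abstract describes, swapping a reference image's attribute-specific latent code into the source latent and then compositing the generated editing region back onto the untouched areas of the source, can be illustrated with a minimal sketch. Everything below (the fixed slice-per-attribute latent layout, the function names `exchange_attribute` and `blend_edit`, and the mask-based blend) is an assumption made for illustration; this record does not specify LMGAN's actual architecture, in which the per-attribute subnetworks learn the disentanglement under label constraints.

```python
import torch

# Hypothetical latent layout: each attribute owns a fixed slice of the
# latent vector. This is an illustrative assumption, not LMGAN's design.
ATTR_SLICES = {"smile": slice(0, 16), "bangs": slice(16, 32)}

def exchange_attribute(z_src: torch.Tensor, z_ref: torch.Tensor,
                       attr: str) -> torch.Tensor:
    """Copy the reference's latent slice for one attribute into the
    source latent, leaving every other dimension untouched."""
    z_edit = z_src.clone()
    z_edit[:, ATTR_SLICES[attr]] = z_ref[:, ATTR_SLICES[attr]]
    return z_edit

def blend_edit(region_edit: torch.Tensor, source: torch.Tensor,
               mask: torch.Tensor) -> torch.Tensor:
    """Composite the generated editing region onto the non-editing
    areas of the source image (the combination the abstract says a
    global discriminator is introduced to supervise)."""
    return mask * region_edit + (1.0 - mask) * source

# Toy usage: a batch of 4 latents of width 32 and 64x64 RGB images.
z_src, z_ref = torch.randn(4, 32), torch.randn(4, 32)
z_edit = exchange_attribute(z_src, z_ref, "smile")   # decoder input
img_src = torch.rand(4, 3, 64, 64)
img_region = torch.rand(4, 3, 64, 64)  # stand-in for the decoder output
mask = torch.zeros(4, 1, 64, 64)
mask[..., 20:44, 16:48] = 1.0          # stand-in target-region mask
img_out = blend_edit(img_region, img_src, mask)
```

In this sketch the hard slice assignment plays the role of the attribute-exchange stage, and the binary mask plays the role of target-region focusing; in the paper both are presumably learned rather than fixed.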
Subject
General Mathematics, General Medicine, General Neuroscience, General Computer Science