Affiliation:
1. SCSET, Bennett University
2. CSE, UIET, Panjab University
3. Thapar Institute of Engineering & Technology
Abstract
Image-to-image translation is a prominent application of conditional Generative Adversarial Networks (cGANs). This work presents a new application of conditional GANs aimed at revealing hidden facial attributes. Our method enhances the Pix2Pix GAN framework by integrating a modified UNET++ architecture as the generator. In this setup, the Pix2Pix model uses a PatchGAN architecture in the discriminator, producing an activation map whose values are used to authenticate the depicted faces. By incorporating the UNET++ architecture into the generator, we reduce the semantic gap between the encoder and decoder feature maps, which noticeably improves gradient flow. To evaluate the proposed approach, we conduct experiments on a custom dataset built specifically for training paired image-to-image translation GANs. The proposed model is compared comprehensively against other leading models for revealing concealed facial features and outperforms them across a range of evaluation criteria.
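For illustration, the sketch below shows a minimal Pix2Pix-style PatchGAN discriminator in PyTorch of the kind described above: it scores overlapping patches of the (input, translated) image pair and returns an activation map rather than a single real/fake scalar. The class name, channel counts, and layer depths are assumptions for a generic 70x70 PatchGAN and do not reflect the authors' exact configuration.

```python
import torch
import torch.nn as nn


class PatchGANDiscriminator(nn.Module):
    """Minimal PatchGAN sketch: maps a conditioned image pair to a grid of
    patch-level real/fake scores (an activation map), as in Pix2Pix."""

    def __init__(self, in_channels=6, base_filters=64):
        super().__init__()

        def block(c_in, c_out, stride, norm=True):
            layers = [nn.Conv2d(c_in, c_out, kernel_size=4, stride=stride, padding=1)]
            if norm:
                layers.append(nn.InstanceNorm2d(c_out))
            layers.append(nn.LeakyReLU(0.2, inplace=True))
            return layers

        self.model = nn.Sequential(
            *block(in_channels, base_filters, stride=2, norm=False),
            *block(base_filters, base_filters * 2, stride=2),
            *block(base_filters * 2, base_filters * 4, stride=2),
            *block(base_filters * 4, base_filters * 8, stride=1),
            # Final 1-channel conv: each output value scores one image patch.
            nn.Conv2d(base_filters * 8, 1, kernel_size=4, stride=1, padding=1),
        )

    def forward(self, source_img, target_img):
        # Pix2Pix conditions the discriminator by concatenating the source
        # image with the (real or generated) target image channel-wise.
        return self.model(torch.cat([source_img, target_img], dim=1))


if __name__ == "__main__":
    # Two 3-channel 256x256 images yield a ~30x30 activation map of patch scores.
    disc = PatchGANDiscriminator()
    scores = disc(torch.randn(1, 3, 256, 256), torch.randn(1, 3, 256, 256))
    print(scores.shape)  # torch.Size([1, 1, 30, 30])
```

Because each output value has a limited receptive field, this design penalizes structure only at the patch scale, which is one reason Pix2Pix pairs it with a UNET-style generator whose skip connections preserve global layout.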
Publisher
Research Square Platform LLC