CapGAN: Text-to-Image Synthesis Using Capsule GANs
Published: 2023-10-09
Issue: 10
Volume: 14
Page: 552
ISSN: 2078-2489
Container-title: Information
Language: en
Short-container-title: Information
Author:
Maryam Omar 1, Hafeez Ur Rehman 1,2, Omar Bin Samin 3, Moutaz Alazab 2,4, Gianfranco Politano 5, Alfredo Benso 5
Affiliation:
1. Department of Computer Science, National University of Computing and Emerging Sciences, Hayatabad, Peshawar 24720, Pakistan
2. School of Computing and Data Sciences, Oryx Universal College with Liverpool John Moores University, Doha 34110, Qatar
3. Center for Excellence in Information Technology, Institute of Management Sciences, Hayatabad, Peshawar 24720, Pakistan
4. Department of Intelligent Systems, Faculty of Artificial Intelligence, Al-Balqa Applied University, Al-Salt 19117, Jordan
5. Department of Control and Computer Engineering (DAUIN), Politecnico di Torino, 10129 Turin, Italy
Abstract
Text-to-image synthesis is one of the most critical and challenging problems in generative modeling. It is of substantial importance to automatic learning, especially for image creation, modification, analysis, and optimization. A number of approaches have been proposed to achieve this goal; however, current methods still lack scene understanding, especially when it comes to synthesizing coherent structures in complex scenes. In this work, we propose a model called CapGAN that synthesizes images from a single text statement in order to resolve the problem of globally coherent structures in complex scenes. For this purpose, skip-thought vectors are used to encode the given text into a vector representation. This encoded vector is used as input for image synthesis via an adversarial process, in which two models are trained simultaneously: a generator (G) and a discriminator (D). The model G generates fake images, while the model D tries to determine whether a sample comes from the training data or was generated by G. The conceptual novelty of this work lies in integrating capsules at the discriminator level, so that the model understands the orientational and relative spatial relationships between the different entities of an object in an image. The inception score (IS) and the Fréchet inception distance (FID) are used as quantitative evaluation metrics for CapGAN. The IS recorded for images generated using CapGAN is 4.05 ± 0.050, around 34% higher than that of images synthesized using traditional GANs, whereas the FID score calculated for the synthesized images is 44.38, an almost 9% improvement over previous state-of-the-art models. The experimental results clearly demonstrate the effectiveness of the proposed CapGAN model, which is exceptionally proficient at generating images with complex scenes.
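For concreteness, the following is a minimal sketch of the pipeline the abstract describes: a generator conditioned on a sentence embedding, and a discriminator whose features pass through a capsule-style stage with the squash nonlinearity before the real/fake decision. It is written in PyTorch; the layer sizes, the 4800-dimensional skip-thought embedding, the 64 × 64 output resolution, and the single squash-based capsule layer are illustrative assumptions, not the authors' exact CapGAN configuration.

```python
# Illustrative sketch only: a text-conditioned GAN whose discriminator uses
# capsule-style vector units with the "squash" nonlinearity. Sizes and names
# are assumptions, not the published CapGAN architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMB_DIM = 4800   # skip-thought vectors are commonly 4800-d (assumption here)
Z_DIM = 100      # noise dimension (assumed)

def squash(v, dim=-1, eps=1e-8):
    """Capsule squashing: keeps each vector's orientation, bounds its length in [0, 1)."""
    sq_norm = (v ** 2).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * v / torch.sqrt(sq_norm + eps)

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.project = nn.Linear(EMB_DIM, 128)  # compress the text embedding
        self.net = nn.Sequential(
            nn.Linear(Z_DIM + 128, 256), nn.ReLU(),
            nn.Linear(256, 3 * 64 * 64), nn.Tanh(),  # 64x64 RGB image
        )

    def forward(self, z, text_emb):
        cond = F.relu(self.project(text_emb))
        img = self.net(torch.cat([z, cond], dim=1))
        return img.view(-1, 3, 64, 64)

class CapsuleDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        )
        self.caps_dim = 8                       # 8-d capsule vectors (assumed)
        self.project = nn.Linear(EMB_DIM, 128)
        self.out = nn.Linear(64 * 16 * 16 + 128, 1)

    def forward(self, img, text_emb):
        h = self.conv(img)                              # (B, 64, 16, 16)
        caps = h.view(h.size(0), -1, self.caps_dim)     # group into capsules
        caps = squash(caps)                             # capsule nonlinearity
        cond = F.relu(self.project(text_emb))
        return self.out(torch.cat([caps.flatten(1), cond], dim=1))  # logit

# One adversarial step (standard GAN objective, as in the abstract):
G, D = Generator(), CapsuleDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(4, 3, 64, 64) * 2 - 1   # stand-in batch of real images
emb = torch.randn(4, EMB_DIM)             # stand-in skip-thought embeddings
z = torch.randn(4, Z_DIM)

# D: tell real images apart from G's samples
fake = G(z, emb)
d_loss = bce(D(real, emb), torch.ones(4, 1)) + \
         bce(D(fake.detach(), emb), torch.zeros(4, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# G: fool D into scoring fakes as real
g_loss = bce(D(fake, emb), torch.ones(4, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The only departure from a plain GAN discriminator in this sketch is the squash step, which preserves the orientation of each feature vector while bounding its length, the property the abstract credits with capturing orientational and spatial relationships between the parts of an object.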
Funder
Qatar National Library
Subject
Information Systems