Author:
Cha Miriam, Gwon Youngjune L., Kung H. T.
Abstract
We describe a new approach that improves the training of generative adversarial nets (GANs) for synthesizing diverse images from a text input. Our approach is based on the conditional version of GANs and expands on previous work leveraging an auxiliary task in the discriminator. Our generated images are not limited to certain classes and do not suffer from mode collapse while semantically matching the text input. A key to our training methods is how to form positive and negative training examples with respect to the class label of a given image. Instead of selecting random training examples, we perform negative sampling based on the semantic distance from a positive example in the class. We evaluate our approach using the Oxford-102 flower dataset, adopting the inception score and multi-scale structural similarity index (MS-SSIM) metrics to assess discriminability and diversity of the generated images. The empirical results indicate greater diversity in the generated images, especially when we gradually select more negative training examples closer to a positive example in the semantic space.
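The abstract describes selecting negative training examples by their semantic distance from a positive example, tightening the selection over training. The sketch below is a minimal illustration of that general idea, not the authors' implementation: the function name, the use of cosine distance over embeddings, and the `curriculum` parameter are assumptions made for illustration.

```python
import numpy as np

def sample_negatives(anchor_emb, candidate_embs, k, curriculum=1.0):
    """Pick k negatives whose embeddings lie close to the positive (anchor)
    embedding in semantic space. `curriculum` in (0, 1] shrinks the candidate
    pool toward the closest examples, mimicking a gradual shift to harder
    negatives. Illustrative sketch only."""
    # cosine distance between the anchor and every candidate embedding
    a = anchor_emb / np.linalg.norm(anchor_emb)
    c = candidate_embs / np.linalg.norm(candidate_embs, axis=1, keepdims=True)
    dist = 1.0 - c @ a
    # keep the closest fraction of candidates, then sample k from that pool
    order = np.argsort(dist)
    pool = order[: max(k, int(len(order) * curriculum))]
    return np.random.choice(pool, size=k, replace=False)
```

Decreasing `curriculum` over epochs would concentrate sampling on negatives nearest the positive example, which is one plausible reading of "gradually select more negative training examples closer to a positive example in the semantic space."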
Publisher
Association for the Advancement of Artificial Intelligence (AAAI)
Cited by
28 articles.
1. Understanding GANs: fundamentals, variants, training challenges, applications, and open problems;Multimedia Tools and Applications;2024-05-14
2. Feature-Grounded Single-Stage Text-to-Image Generation;Tsinghua Science and Technology;2024-04
3. Enhancing Text-to-Image Model Evaluation: SVCS and UCICM;2023 6th International Conference on Recent Trends in Advance Computing (ICRTAC);2023-12-14
4. Layout-Bridging Text-to-Image Synthesis;IEEE Transactions on Circuits and Systems for Video Technology;2023-12
5. Multimodal Image Synthesis and Editing: The Generative AI Era;IEEE Transactions on Pattern Analysis and Machine Intelligence;2023-12