Abstract
Generative adversarial networks (GANs) show excellent performance on a wide range of problems in computer vision, computer graphics, and machine learning, but they require large amounts of data and substantial computational resources. They also suffer from unstable training: if the generator and discriminator diverge during training, the GAN subsequently becomes difficult to converge. Various transfer learning methods have been introduced to tackle these problems; however, mode collapse, which is a form of overfitting, often arises, and the resulting models are limited in how well they learn the distribution of the training data. In this paper, we provide a comprehensive review of the latest transfer learning methods as a solution to these problems, propose the most effective method, which freezes some layers of the generator and discriminator, and discuss future prospects. The model used in the experiments is StyleGAN, and performance is evaluated with Fréchet Inception Distance (FID), coverage, and density. The experimental results show that the proposed method does not overfit and learns the distribution of the training data comparatively well relative to previously proposed methods. Moreover, it outperforms existing methods on the Stanford Cars, Stanford Dogs, Oxford Flower, Caltech-256, CUB-200-2011, and Insect-30 datasets.
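The abstract's main quantitative metric is the Fréchet Inception Distance. As a minimal sketch (not the authors' implementation), FID can be computed from the means and covariances of Inception features extracted from real and generated images; the statistics below are assumed to be precomputed:

```python
import numpy as np
from scipy import linalg


def frechet_inception_distance(mu1, sigma1, mu2, sigma2):
    """FID between two Gaussians fitted to Inception features.

    FID = ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 * sqrt(sigma1 @ sigma2))

    mu1, mu2: mean feature vectors of the two image sets.
    sigma1, sigma2: covariance matrices of the two image sets.
    """
    diff = mu1 - mu2
    # Matrix square root of the covariance product; may return
    # complex values due to numerical error, so keep the real part.
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```

Identical feature distributions give an FID of zero, and lower values indicate that the generated distribution is closer to the training distribution, which is how the abstract's comparison across datasets should be read.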
Funder
Korea Institute of Energy Technology Evaluation and Planning
National Research Foundation of Korea
Publisher
Springer Science and Business Media LLC
Cited by
3 articles