Affiliation:
1. University of Science and Technology of China, Hefei, Anhui, China
2. JD Explore Academy, Beijing, China
Abstract
Generative Adversarial Networks (GANs) have been widely applied in different scenarios thanks to the development of deep neural networks. The original GAN was proposed based on the non-parametric assumption of the infinite capacity of networks. However, it is still unknown whether GANs can fit the target distribution without any prior information. Due to this overly strong assumption, many issues remain unaddressed in GAN training, such as non-convergence, mode collapse, and vanishing gradients. Regularization and normalization are common methods of introducing prior information to stabilize training and improve discrimination. Although a number of regularization and normalization methods have been proposed for GANs, to the best of our knowledge there is no comprehensive survey that focuses primarily on the objectives and development of these methods, apart from a few limited-scope studies. In this work, we conduct a comprehensive survey on regularization and normalization techniques from different perspectives of GAN training. First, we systematically describe the different perspectives of GAN training and thereby obtain the different objectives of regularization and normalization. Based on these objectives, we propose a new taxonomy. Furthermore, we compare the performance of the mainstream methods on different datasets and investigate the regularization and normalization techniques that are frequently employed in state-of-the-art GANs. Finally, we highlight potential future directions of research in this domain. Code and studies related to the regularization and normalization of GANs are summarized at https://github.com/iceli1007/GANs-Regularization-Review.
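To make the class of methods surveyed here concrete, the following is a minimal, illustrative sketch (not taken from the paper itself) of one widely used GAN regularizer, a gradient penalty on the discriminator. It assumes a hypothetical PyTorch discriminator D that maps image batches to scalar scores; the function name and the coefficient lambda_gp are illustrative assumptions.

# Illustrative sketch only: a gradient-penalty regularizer of the kind this survey covers.
# Assumes `D` is a PyTorch module mapping a batch of images to scalar critic scores.
import torch

def gradient_penalty(D, real, fake, lambda_gp=10.0):
    # Interpolate between real and generated samples.
    alpha = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    scores = D(interp)
    # Gradient of the critic's output with respect to the interpolated inputs.
    grads = torch.autograd.grad(outputs=scores, inputs=interp,
                                grad_outputs=torch.ones_like(scores),
                                create_graph=True)[0]
    # Penalize deviation of the gradient norm from 1 (an approximate Lipschitz prior).
    grad_norm = grads.view(grads.size(0), -1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1) ** 2).mean()

In a training loop, such a penalty would typically be added to the discriminator loss, encouraging the critic to be approximately 1-Lipschitz; constraints of this kind are among the stabilizing priors the survey categorizes.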
Funder
National Natural Science Foundation of China
Publisher
Association for Computing Machinery (ACM)
Subject
General Computer Science, Theoretical Computer Science
Cited by
18 articles.