Affiliation:
1. Department of Industrial Engineering and Operations Research, University of California, Berkeley, California, USA
Abstract
Training generative adversarial networks (GANs) is known to be difficult, especially for financial time series. This paper first analyzes the well‐posedness problem in GANs minimax games and the widely recognized convexity issue in GANs objective functions. It then proposes a stochastic control framework for hyper‐parameter tuning in GANs training. The weak form of the dynamic programming principle and the uniqueness and existence of the value function in the viscosity sense are established for the corresponding minimax game. In particular, explicit forms for the optimal adaptive learning rate and batch size are derived and shown to depend on the convexity of the objective function, revealing a relation between improper choices of learning rate and explosion in GANs training. Finally, empirical studies demonstrate that training algorithms incorporating this adaptive control approach outperform the standard ADAM method in terms of convergence and robustness. From the perspective of GANs training, the analysis in this paper provides analytical support for the popular practice of “clipping,” and suggests that the convexity and well‐posedness issues in GANs may be tackled through appropriate choices of hyper‐parameters.
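The abstract's point about clipping guarding against explosion can be illustrated with a generic sketch; this is not the paper's derived control, just the standard gradient-norm clipping heuristic the abstract says the analysis supports, shown on a toy quadratic objective where an aggressive fixed learning rate diverges without clipping but converges with it (the objective, learning rate, and norm bound below are illustrative assumptions):

```python
import numpy as np

def clip_gradient(grad, max_norm):
    """Rescale grad so its Euclidean norm does not exceed max_norm."""
    norm = np.linalg.norm(grad)
    if norm > max_norm:
        grad = grad * (max_norm / norm)
    return grad

def sgd_step(theta, grad, lr, max_norm=1.0):
    """One gradient-descent step with norm clipping."""
    return theta - lr * clip_gradient(grad, max_norm)

# Toy quadratic f(x) = 0.5 * x^2, so grad f(x) = x.
# lr = 2.5 exceeds the stability threshold (lr < 2) for plain descent.
x_clipped, x_plain = 10.0, 10.0
for _ in range(20):
    x_clipped = float(sgd_step(np.array([x_clipped]),
                               np.array([x_clipped]), lr=2.5)[0])
    x_plain = x_plain - 2.5 * x_plain  # unclipped step oscillates and grows

# x_clipped converges to 0; |x_plain| blows up as 10 * 1.5**20.
```

The unclipped iterate satisfies x ← −1.5·x and diverges geometrically, while clipping caps each update's magnitude at lr·max_norm, which is the explosion mechanism (improper learning rate) the abstract attributes analytical support to.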
Subject
Applied Mathematics, Economics and Econometrics, Social Sciences (miscellaneous), Finance, Accounting