Author:
Preetum Nakkiran, Gal Kaplun, Yamini Bansal, Tristan Yang, Boaz Barak, Ilya Sutskever
Abstract
We show that a variety of modern deep learning tasks exhibit a ‘double-descent’ phenomenon where, as we increase model size, performance first gets worse and then gets better. Moreover, we show that double descent occurs not just as a function of model size, but also as a function of the number of training epochs. We unify the above phenomena by defining a new complexity measure we call the effective model complexity and conjecture a generalized double descent with respect to this measure. Furthermore, our notion of model complexity allows us to identify certain regimes where increasing (even quadrupling) the number of train samples actually hurts test performance.
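The model-wise effect described above can be reproduced in miniature outside of deep networks. The sketch below is an illustrative assumption, not the paper's CNN/ResNet/transformer experiments: it sweeps the width of a random-ReLU-feature regression past the interpolation threshold, using the minimum-norm least-squares fit, and prints a test error that typically rises and then falls again as width grows. All sizes, the noise level, and the target function are hypothetical choices made for the demonstration.

```python
# Minimal sketch (not the paper's setup): model-wise double descent in a
# random-feature regression. Sample counts, feature widths, noise level,
# and the linear target below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_train, n_test, d = 40, 1000, 5              # samples and input dimension
X_train = rng.normal(size=(n_train, d))
X_test = rng.normal(size=(n_test, d))
w_true = rng.normal(size=d)
y_train = X_train @ w_true + 0.5 * rng.normal(size=n_train)  # noisy labels
y_test = X_test @ w_true

def random_relu_features(X, W):
    """Project inputs through fixed random weights and apply a ReLU."""
    return np.maximum(X @ W, 0.0)

for n_features in [5, 10, 20, 30, 40, 50, 80, 160, 320, 640]:
    W = rng.normal(size=(d, n_features)) / np.sqrt(d)
    Phi_train = random_relu_features(X_train, W)
    Phi_test = random_relu_features(X_test, W)
    # lstsq returns the minimum-norm solution once n_features > n_train,
    # which is what yields the second descent past the interpolation peak.
    beta, *_ = np.linalg.lstsq(Phi_train, y_train, rcond=None)
    test_mse = np.mean((Phi_test @ beta - y_test) ** 2)
    print(f"features={n_features:4d}  test MSE={test_mse:.3f}")
```

In this toy setting the test error usually peaks when the number of random features is close to the number of training samples and then decreases again with further overparameterization, mirroring the model-size curve the abstract describes.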
Subject
Statistics, Probability and Uncertainty; Statistics and Probability; Statistical and Nonlinear Physics
Cited by: 138 articles.