Fluctuations, bias, variance and ensemble of learners: exact asymptotics for convex losses in high-dimension
Published: 2023-11-01
Journal: Journal of Statistical Mechanics: Theory and Experiment (J. Stat. Mech.), vol. 2023, no. 11, p. 114001
ISSN: 1742-5468
Authors: Bruno Loureiro, Cédric Gerbelot, Maria Refinetti, Gabriele Sicuro, Florent Krzakala
Abstract
From the sampling of data to the initialisation of parameters, randomness is ubiquitous in modern machine learning practice. Understanding the statistical fluctuations engendered by these different sources of randomness in prediction is therefore key to understanding robust generalisation. In this manuscript we develop a quantitative and rigorous theory for the study of fluctuations in an ensemble of generalised linear models trained on different, but correlated, features in high dimensions. In particular, we provide a complete description of the asymptotic joint distribution of the empirical risk minimisers for generic convex loss and regularisation in the high-dimensional limit. Our result encompasses a rich set of classification and regression tasks, such as the lazy regime of overparametrised neural networks, or equivalently the random-features approximation of kernels. Besides allowing a direct study of the mitigating effect of ensembling (or bagging) on the bias-variance decomposition of the test error, our analysis also helps disentangle the contribution of statistical fluctuations and the singular role played by the interpolation threshold, which are at the root of the 'double-descent' phenomenon.
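The mitigating effect of ensembling on the test error described in the abstract can be illustrated with a minimal numerical sketch. This is not the paper's exact setting: the dimensions, the tanh feature map, the noise level, and the ridge regularisation below are all illustrative assumptions. It trains K ridge regressors on independent random-feature maps of the same data and compares the averaged prediction's test error with the average individual error; for the squared loss, convexity guarantees the ensemble does no worse.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d, p, K = 200, 50, 100, 10  # samples, input dim, features per learner, ensemble size

# Teacher: noisy linear target (illustrative choice)
w_star = rng.standard_normal(d) / np.sqrt(d)
X_train = rng.standard_normal((n, d))
X_test = rng.standard_normal((1000, d))
y_train = X_train @ w_star + 0.1 * rng.standard_normal(n)
y_test = X_test @ w_star

lam = 1e-2  # ridge regularisation strength

preds = []
for k in range(K):
    # Each learner gets its own random-features map of the same inputs,
    # so the learners see different but correlated features.
    F = rng.standard_normal((d, p)) / np.sqrt(d)
    Z_tr, Z_te = np.tanh(X_train @ F), np.tanh(X_test @ F)
    # Ridge (l2-regularised square loss) empirical risk minimiser
    w = np.linalg.solve(Z_tr.T @ Z_tr + lam * np.eye(p), Z_tr.T @ y_train)
    preds.append(Z_te @ w)
preds = np.array(preds)  # shape (K, n_test)

mse_individual = np.mean((preds - y_test) ** 2, axis=1)
mse_ensemble = np.mean((preds.mean(axis=0) - y_test) ** 2)
```

Since the squared loss is convex in the prediction, Jensen's inequality gives `mse_ensemble <= mse_individual.mean()` pointwise; the gap is exactly the variance of the predictions across learners, which is the fluctuation contribution the paper's theory tracks.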
Subjects: Statistics, Probability and Uncertainty; Statistics and Probability; Statistical and Nonlinear Physics
Cited by: 1 article.