Abstract
“No free lunch” results state the impossibility of obtaining meaningful bounds on the error of a learning algorithm without prior assumptions and modelling, which may be more or less realistic for a given problem. Some models are “expensive” (strong assumptions, such as sub-Gaussian tails), others are “cheap” (merely finite variance). As is well known, the more you pay, the more you get: the most expensive models yield the most interesting bounds. Recent advances in robust statistics have investigated procedures that obtain tight bounds while keeping the cost of assumptions minimal. The present paper explores and exhibits the limits of obtaining tight probably approximately correct (PAC)-Bayes bounds in a robust setting for cheap models.
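As an illustration of the kind of “cheap”, robust procedure the abstract alludes to, here is a minimal Python sketch of the median-of-means mean estimator, a standard construction from the robust statistics literature that enjoys sub-Gaussian-style deviation guarantees under a finite-variance assumption alone. This is not the paper's own construction; the function name, block count and Pareto test data are illustrative choices.

import numpy as np

def median_of_means(x, n_blocks, seed=0):
    """Median-of-means estimate of the mean of a 1-D sample (illustrative sketch).

    Splits the sample into n_blocks disjoint blocks, averages each block,
    and returns the median of the block means. Under a finite second
    moment alone, this estimator satisfies sub-Gaussian-type deviation
    bounds, unlike the plain empirical mean.
    """
    rng = np.random.default_rng(seed)
    x = rng.permutation(np.asarray(x, dtype=float))  # random block assignment
    blocks = np.array_split(x, n_blocks)             # disjoint, near-equal blocks
    return float(np.median([b.mean() for b in blocks]))

if __name__ == "__main__":
    # Heavy-tailed example: Pareto data with finite variance (shape > 2)
    # but heavy tails, where the empirical mean can be badly perturbed.
    rng = np.random.default_rng(1)
    sample = rng.pareto(2.5, size=10_000)
    print("empirical mean: ", sample.mean())
    print("median of means:", median_of_means(sample, n_blocks=50))

The number of blocks governs the usual trade-off: more blocks give stronger confidence (the deviation bound holds with probability roughly 1 - exp(-n_blocks/8)) at the price of each block mean being noisier.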
Subject
General Physics and Astronomy
Cited by
4 articles.