Abstract
We obtain sharp bounds on the estimation error of the Empirical Risk Minimization procedure, performed in a convex class and with respect to the squared loss, without assuming that class members and the target are bounded functions or have rapidly decaying tails.
Rather than resorting to a concentration-based argument, the method used here relies on a “small-ball” assumption and thus holds for classes consisting of heavy-tailed functions and for heavy-tailed targets.
The resulting estimates scale correctly with the “noise level” of the problem, and when applied to the classical, bounded scenario, always improve the known bounds.
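For concreteness, the two ingredients named above can be written as follows; the abstract itself does not state a formulation, so the exact form and constants below are assumptions based on the standard setup. Given an i.i.d. sample (X_1, Y_1), ..., (X_N, Y_N), ERM over a convex class F with the squared loss selects
\hat{f} \in \operatorname*{argmin}_{f \in F} \; \frac{1}{N}\sum_{i=1}^{N} \bigl( f(X_i) - Y_i \bigr)^2 ,
and a typical "small-ball" assumption asks for constants \kappa > 0 and 0 < \varepsilon < 1 such that
\Pr\bigl( |f(X) - h(X)| \ge \kappa \, \| f - h \|_{L_2} \bigr) \ge \varepsilon \quad \text{for every } f, h \in F .
Such a lower bound on the probability of being "not too small" replaces the boundedness or rapid tail-decay conditions used in concentration-based arguments.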
Funder
an Israel Science Foundation grant
Mathematical Sciences Institute -- The Australian National University
Publisher
Association for Computing Machinery (ACM)
Subject
Artificial Intelligence, Hardware and Architecture, Information Systems, Control and Systems Engineering, Software
Cited by
108 articles.