Authors: Stefano Spigler, Mario Geiger, Matthieu Wyart
Abstract
How many training data are needed to learn a supervised task? It is often observed that the generalization error decreases as n^{−β}, where n is the number of training examples and β is an exponent that depends on both data and algorithm. In this work we measure β when applying kernel methods to real datasets. For MNIST we find β ≈ 0.4 and for CIFAR10 β ≈ 0.1, for both regression and classification tasks, and for Gaussian or Laplace kernels. To rationalize the existence of non-trivial exponents that can be independent of the specific kernel used, we study the teacher–student framework for kernels. In this scheme, a teacher generates data according to a Gaussian random field, and a student learns them via kernel regression. With a simplifying assumption—namely that the data are sampled from a regular lattice—we derive analytically β for translation invariant kernels, using previous results from the kriging literature. Provided that the student is not too sensitive to high frequencies, β depends only on the smoothness and dimension of the training data. We confirm numerically that these predictions hold when the training points are sampled at random on a hypersphere. Overall, the test error is found to be controlled by the magnitude of the projection of the true function on the kernel eigenvectors whose rank is larger than n. Using this idea we predict the exponent β from real data by performing kernel PCA, leading to β ≈ 0.36 for MNIST and β ≈ 0.07 for CIFAR10, in good agreement with observations. We argue that these rather large exponents are possible due to the small effective dimension of the data.
Subjects: Statistics, Probability and Uncertainty; Statistics and Probability; Statistical and Nonlinear Physics
Cited by 20 articles.