Author:
Geiger Mario, Spigler Stefano, Jacot Arthur, Wyart Matthieu
Abstract
Two distinct limits for deep learning have been derived as the network width h → ∞, depending on how the weights of the last layer scale with h. In the neural tangent kernel (NTK) limit, the dynamics becomes linear in the weights and is described by a frozen kernel Θ (the NTK). By contrast, in the mean-field limit, the dynamics can be expressed in terms of the distribution of the parameters associated with a neuron, which follows a partial differential equation. In this work we consider deep networks where the weights in the last layer scale as αh^{−1/2} at initialization. By varying α and h, we probe the crossover between the two limits. We observe the two previously identified regimes of ‘lazy training’ and ‘feature training’. In the lazy-training regime, the dynamics is almost linear and the NTK barely changes after initialization. The feature-training regime includes the mean-field formulation as a limiting case and is characterized by a kernel that evolves in time, and thus learns some features. We perform numerical experiments on MNIST, Fashion-MNIST, EMNIST and CIFAR10 and consider various architectures. We find that: (i) the two regimes are separated by an α* that scales as 1/√h; (ii) network architecture and data structure play an important role in determining which regime is better: in our tests, fully-connected networks generally perform better in the lazy-training regime, unlike convolutional networks; (iii) in both regimes, the fluctuations δF induced on the learned function by the initial conditions decay as δF ∼ 1/√h, leading to a performance that increases with h; the same improvement can also be obtained at an intermediate width by ensemble-averaging several networks that are trained independently; (iv) in the feature-training regime we identify a time scale t₁ ∼ √h α, such that for t ≪ t₁ the dynamics is linear. At t ∼ t₁, the output has grown by a magnitude √h and the change of the tangent kernel ‖ΔΘ‖ becomes significant. Ultimately, it follows ‖ΔΘ‖ ∼ (√h α)^{−a} for ReLU and Softplus activation functions, with a < 2 and a → 2 as depth grows. We provide scaling arguments supporting these findings.
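As a rough illustration of the setup described in the abstract, the sketch below (PyTorch, not the authors' code) builds a fully-connected network whose output carries the αh^{−1/2} last-layer scaling, trains it briefly with gradient descent, and estimates the relative change of the empirical tangent kernel ‖ΔΘ‖ on a small probe set. All widths, learning rates and helper names (make_net, empirical_ntk, the toy data) are illustrative assumptions, not values or code from the paper.

```python
# Minimal sketch, assuming a scalar-output fully-connected net and a toy dataset.
# It implements the alpha * h^{-1/2} output scaling and measures how much the
# empirical tangent kernel Theta changes during a short training run.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_net(d_in, h, alpha):
    """Fully-connected net whose output is rescaled by alpha / sqrt(h)."""
    body = nn.Sequential(
        nn.Linear(d_in, h), nn.ReLU(),
        nn.Linear(h, h), nn.ReLU(),
    )
    head = nn.Linear(h, 1, bias=False)

    class ScaledNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.body, self.head = body, head
            self.scale = alpha / h ** 0.5   # last-layer contribution ~ alpha * h^{-1/2}

        def forward(self, x):
            return self.scale * self.head(self.body(x)).squeeze(-1)

    return ScaledNet()

def empirical_ntk(net, xs):
    """Theta_ij = <df(x_i)/dw, df(x_j)/dw>, from per-sample gradients."""
    grads = []
    for x in xs:
        net.zero_grad()
        out = net(x.unsqueeze(0))[0]        # scalar output for one sample
        out.backward()
        grads.append(torch.cat([p.grad.flatten() for p in net.parameters()]))
    jac = torch.stack(grads)                # (n_samples, n_params)
    return jac @ jac.T

d_in, h, alpha = 10, 256, 0.1               # small alpha: feature-training side
net = make_net(d_in, h, alpha)
x_train = torch.randn(64, d_in)
y_train = torch.sign(x_train[:, 0])         # toy binary labels
x_probe = torch.randn(8, d_in)

theta0 = empirical_ntk(net, x_probe)        # kernel at initialization

opt = torch.optim.SGD(net.parameters(), lr=0.1)
for _ in range(200):                         # short gradient-descent run
    opt.zero_grad()
    loss = ((net(x_train) - y_train) ** 2).mean()
    loss.backward()
    opt.step()

theta1 = empirical_ntk(net, x_probe)        # kernel after training
delta = torch.linalg.norm(theta1 - theta0) / torch.linalg.norm(theta0)
print(f"relative ||DeltaTheta|| after training: {delta.item():.3f}")
```

Sweeping α at fixed h in a sketch of this kind is one way to see the crossover the abstract describes: for large α the measured kernel change stays small (lazy training), while for small α it becomes significant (feature training).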
Subject
Statistics, Probability and Uncertainty; Statistics and Probability; Statistical and Nonlinear Physics
References: 34 articles.
Cited by: 22 articles.